Jack Harrhy

Linkblog/2025/03/23

Haiku ❤ Nvidia (porting Nvidia GPU driver), (Critical) Notes on MCP.

Haiku ❤ Nvidia (porting Nvidia GPU driver)

As many people already know, Nvidia published their kernel driver under the MIT license: GitHub - NVIDIA/open-gpu-kernel-modules: NVIDIA Linux open GPU kernel module source (I will call it NVRM). This driver is very portable, and its platform-independent part can be compiled for Haiku with minor effort (but OS-specific binding code needs to be implemented for it to be actually useful). This is very valuable for Haiku because Linux kernel GPU drivers are very hard to port, as they depend heavily on Linux kernel internals. Unfortunately, the userland OpenGL/Vulkan driver source code is not published. But as part of the Mesa 3D project, a new Vulkan driver, “NVK”, is being developed and is already functional. The Mesa NVK driver uses Nouveau as its kernel driver, so it can’t be directly used with the NVRM kernel driver. The NVK source code provides a platform abstraction that allows implementing support for other kernel drivers such as NVRM.

I finally managed to make an initial port of the NVRM kernel driver to Haiku and added initial NVRM API support to the Mesa NVK Vulkan driver, so NVRM and NVK can work together. Some simple Vulkan tests are working.

Absolute madman getting the open source Linux kernel modules running in Haiku.

From Phoronix.

Tao of Mac - Notes on MCP

I’ve been playing with Anthropic’s MCP for a little while now, and I have a few gripes. I understand it is undergoing a kind of Cambrian explosion right now, but I am not very impressed with it. Maybe it’s echoes of TRON, but I can’t bring myself to like it.

I’ve been meaning to look into MCP as of late, as it seems interesting.

This is an interesting take though, a critique of its complexity.

Most of what I’ve seen has just been “wow, look, MCP!”, so actually getting some negative feedback on it is good to consume.

Basically, I’m not saying MCP servers aren’t cool anymore, just that I like seeing a new piece of tech get some pushback instead of being hyped without any detractors speaking up.

The design seems to assume you are either running a bunch of servers locally (as subprocesses, which, again, raises a few interesting security issues) or talking to something with enough compute power to run a stateful server, and isn’t really a good fit for the way we use APIs today, considering many are usually run in stateless hosting environments like AWS Lambda or Cloudflare Workers.

This is interesting, I assumed MCP would still make sense in a stateless backend environment, but if this is the case, MCP servers might be quite useless for the use case I’d want them for…
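The statefulness the quote points at comes from MCP being JSON-RPC with a session handshake: a client is expected to send an initialize request (and an initialized notification) on a connection before it can call tools on that same connection, which is exactly what a freshly spawned stateless worker doesn’t have. A minimal sketch of the message sequence, just to make the shape concrete (the tool name and client details here are made up, and the exact field names are my reading of the spec, not authoritative):

```python
import json

def jsonrpc(method, params, msg_id=None):
    """Build a JSON-RPC 2.0 message; MCP is JSON-RPC under the hood."""
    msg = {"jsonrpc": "2.0", "method": method, "params": params}
    if msg_id is not None:  # notifications carry no id and expect no reply
        msg["id"] = msg_id
    return msg

# 1. First, the client opens a session with an initialize handshake...
init = jsonrpc("initialize", {
    "protocolVersion": "2024-11-05",  # illustrative version string
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
}, msg_id=1)

# 2. ...then acknowledges it with a notification (no id, no response).
initialized = jsonrpc("notifications/initialized", {})

# 3. Only now can tools be called -- on the SAME live connection that did
#    the handshake, which is the poor fit for Lambda-style hosting.
call = jsonrpc("tools/call", {
    "name": "get_weather",                    # hypothetical tool
    "arguments": {"city": "St. John's"},
}, msg_id=2)

for msg in (init, initialized, call):
    print(json.dumps(msg))
```

In a subprocess setup these lines would be written to the server’s stdin one per line, and the session lives as long as the process does; a stateless backend would have to redo the handshake (or fake persistence) on every invocation.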

I still don’t know if I conceptualize them correctly…

Oh well.