Splitting DRM and KMS device nodes

While most devices from the three major x86 desktop GPU vendors combine the GPU and the display-controller on a single card, recent development (especially on ARM) shows that rendering (via the GPU) and mode-setting (via the display-controller) are not necessarily bound to the same device. To better support such devices, several changes to DRM are being worked on.

In its current form, the DRM subsystem provides one general-purpose device-node for each registered DRM device: /dev/dri/card<num>. An additional control-node is also created, but it remains unused as of this writing. While in general a kernel driver is allowed to register multiple DRM devices for a single physical device, no driver has made use of this yet. That means, whatever hardware you use, both mode-setting and rendering are done via the same device node. This entails some rather serious consequences:

  1. Access-management to mode-setting and rendering is done via the same file-system node
  2. Mode-setting resources of a single card cannot be split among multiple graphics-servers
  3. Sharing display-controllers between cards is rather complicated

In the following sections, I want to look closer at each of these points and describe what has been done and what is still planned to overcome these restrictions. This is a highly technical description of the changes and serves as an outline for the Linux-Plumbers session on this topic. I expect the reader to be familiar with DRM internals.

1) Render-nodes

While render-nodes have been discussed on dri-devel since 2009, several mmap-related security issues had prevented them from being merged. Those have all been fixed, and three days ago the basic render-node infrastructure was merged. While it’s still marked as experimental and hidden behind the drm.rnodes module parameter, I’m confident we will enable it by default in one of the next kernel releases.

What are render-nodes?

From a user-space perspective, render-nodes are “like a big FPU” (krh) that can be used by applications to speed up computations and rendering. They are accessible via /dev/dri/renderD<num> and provide the basic DRM rendering interface. Compared to the old card<num> nodes, they lack some features:

  • No mode-setting (KMS) ioctls allowed
  • No insecure gem-flink allowed (use dma-buf instead!)
  • No DRM-auth required/supported
  • No legacy pre-KMS DRM-API supported

So whenever an application wants hardware-accelerated rendering, GPGPU access or offscreen-rendering, it no longer needs to ask a graphics-server (via DRI or wl_drm) but can instead open any available render-node and start using it. Access-control to render-nodes is done via standard file-system modes. It is no longer tied to mode-setting resources and can thus be granted to less-privileged applications.
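
To make this concrete, here is a minimal sketch of what “just open a render-node and use it” looks like with libdrm. The node path is only an example (real code should enumerate the available nodes) and error handling is reduced to the bare minimum; link with -ldrm:

    /* Sketch: open a render-node and query which driver backs it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
        if (fd < 0) {
            perror("cannot open render-node");
            return 1;
        }

        drmVersionPtr v = drmGetVersion(fd);
        if (v) {
            printf("render-node driver: %s\n", v->name);
            drmFreeVersion(v);
        }

        /* fd can now be handed to driver-specific user-space (mesa,
         * libgbm, ...) for rendering and GPGPU work; no DRM-Master
         * and no graphics-server required */
        close(fd);
        return 0;
    }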

It is important to note that render-nodes do not provide any new APIs. Instead, they just split a subset of the already available DRM-API off to a new device-node. The legacy node is not changed but kept for backwards-compatibility (and, obviously, for mode-setting).

It’s also important to know that render-nodes are not bound to a specific card. While internally a render-node is created by the same driver as the legacy node, user-space should never assume any connection between a render-node and a legacy/mode-setting node. Instead, if user-space requires hardware-acceleration, it should open any render-node and use it. For communication back to the graphics-server, dma-buf shall be used. Really! Questions like “how do I find the render-node for a given card?” don’t make any sense. Yes, driver-specific user-space can figure out whether and which render-node was created by which driver, but driver-independent user-space must never do that! Depending on your use-case, either open any render-node you want (maybe allow an environment-variable to select it, as sketched below) or let the graphics-server do that for you and pass the FD via your graphics-API (X11, Wayland, …).
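
A driver-independent way to pick a node could look like the following sketch. The environment-variable name is made up purely for illustration; the minor range 128–191 is the one reserved for render-nodes:

    /* Sketch: open any render-node, honoring a user override. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int open_any_render_node(void)
    {
        const char *env = getenv("MYAPP_RENDER_NODE"); /* hypothetical */
        char path[64];
        int i, fd;

        if (env)
            return open(env, O_RDWR | O_CLOEXEC);

        /* render-nodes use the minor numbers 128..191 */
        for (i = 128; i < 192; ++i) {
            snprintf(path, sizeof(path), "/dev/dri/renderD%d", i);
            fd = open(path, O_RDWR | O_CLOEXEC);
            if (fd >= 0)
                return fd;
        }
        return -1;
    }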

So with render-nodes, kernel drivers can now provide an interface only for off-screen rendering and GPGPU work. Devices without any display-controller can avoid any mode-setting nodes and just provide a render-node. User-space, on the other hand, can finally use GPUs without requiring any privileged graphics-server running. They’re independent of the kernel-internal DRM-Master concept!

2) Mode-setting nodes

While splitting off render-nodes from the legacy node simplifies the situation for most applications, we didn’t simplify it for mode-setting applications. Currently, if a graphics-server wants to program a display-controller, it needs to be DRM-Master for the given card. It can acquire DRM-Master via drmSetMaster() and drop it via drmDropMaster() (a sketch follows the list below), but only one application can be DRM-Master at a time. Moreover, only applications with CAP_SYS_ADMIN privileges can acquire DRM-Master. This prevents some quite fancy features:

  • Running an XServer without root-privileges
  • Using two different XServers to control two independent monitors/connectors of the same card
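
For reference, the single-master dance mentioned above looks roughly like this with libdrm; a sketch with error handling omitted, link with -ldrm:

    #include <xf86drm.h>

    void modeset_session(int fd)
    {
        /* fails unless we have CAP_SYS_ADMIN (or were the first
         * opener of the node) and no-one else is DRM-Master */
        if (drmSetMaster(fd) < 0)
            return;

        /* ... program CRTCs/connectors via the KMS ioctls ... */

        /* release DRM-Master so another server (e.g. on VT-switch)
         * can take over the display-controller */
        drmDropMaster(fd);
    }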

The initial idea to support this (and Ilija Hadzic’s follow-up) was mode-setting nodes. A privileged ioctl on the control-node would allow applications to split mode-setting resources across different device-nodes. You could have /dev/dri/modesetD1 and /dev/dri/modesetD2 to split your KMS CRTC and connector resources. An XServer could use one of these nodes to program the now reduced set of resources. We would have one DRM-Master per node and we’d be fine. We could remove the CAP_SYS_ADMIN restriction and instead rely on file-system access-modes to control access to KMS resources.

Another idea that was discussed, which avoids creating a bunch of file-system nodes, is to allocate these resources on the fly. All mode-setting resources would then be bound to a DRM-Master object, and an application can only access the resources available on the DRM-Master it is assigned to. Initially, all resources are bound to the default DRM-Master as usual, which everyone gets assigned to when opening a legacy node. A new ioctl, DRM_CLONE_MASTER, is used to create a new DRM-Master with the same resources as the previous DRM-Master of an application. Via DRM_DROP_MASTER_RESOURCE, an application can drop KMS resources from its DRM-Master object. By design, neither requires a CAP_SYS_ADMIN restriction, as they only clone or drop privileges; they never acquire new ones! So they can be used by any application with access to the control node to create two new DRM-Master objects and pass them to two independent XServers. Those servers then use the passed FD to access the card, instead of opening the legacy or mode-setting nodes.
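
To illustrate the intended semantics, here is a purely hypothetical flow. Neither ioctl exists upstream, so the macros below are placeholders standing in for the proposed API, not real definitions:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>

    #define DRM_IOCTL_CLONE_MASTER          0 /* placeholder, proposed */
    #define DRM_IOCTL_DROP_MASTER_RESOURCE  0 /* placeholder, proposed */

    /* One fd per graphics-server: clone our DRM-Master, then drop the
     * resources the *other* server is supposed to own. */
    static int make_sub_master(const char *node, uint32_t drop_id)
    {
        int fd = open(node, O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return -1;

        ioctl(fd, DRM_IOCTL_CLONE_MASTER, NULL);              /* proposed */
        ioctl(fd, DRM_IOCTL_DROP_MASTER_RESOURCE, &drop_id);  /* proposed */
        return fd; /* hand to the XServer via fd-passing */
    }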

From the kernel side, the only thing that changes is that we can have multiple active DRM-Master objects; per DRM-Master, one open-file may be granted KMS access. However, this doesn’t require any driver modifications (drivers were mostly “master-agnostic” anyway) and only a few core DRM changes (except for the vmwgfx ttm-lock..).

3) DRM infrastructure

The previous two chapters focused on user-space APIs, but we also want the kernel-internal infrastructure to account for split hardware. However, the fact is we already have everything we need. If some hardware exists without a display-controller, you simply omit the DRIVER_MODESET flag and only set DRIVER_RENDER; DRM core will then create only a render-node for this device. If your hardware only provides a display-controller but no real rendering hardware, you simply set DRIVER_MODESET and omit DRIVER_RENDER (which is what SimpleDRM does).
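
In driver code this boils down to the feature-flags set at registration time. A rough sketch of the relevant part of a struct drm_driver (all other fields omitted):

    #include <drm/drmP.h>

    static struct drm_driver my_render_only_driver = {
        /* GPU without display-controller: only a render-node is
         * created, no mode-setting resources on a card-node */
        .driver_features = DRIVER_GEM | DRIVER_RENDER,
        /* a display-controller without rendering hardware would
         * instead set DRIVER_MODESET and omit DRIVER_RENDER, which
         * is what SimpleDRM does */
        /* ... .fops, .name, .ioctls, ... */
    };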

Yes, you currently get a bunch of unused DRM code compiled in if you don’t use some features. However, this is not because DRM requires it, but only because no-one has sent any patches for it yet! DRM core is driven by DRM-driver developers!

There is a reason why mid-layers are frowned upon in DRM land. There is no group of core DRM developers, but rather a bunch of driver-authors who write fancy driver-extensions. Once multiple drivers use them, they factor the code out and move it into DRM core. So don’t complain about missing DRM features; extend your drivers instead. If it’s a nice feature, you can count on it being incorporated into DRM core at some point. It might be you doing most of the work, though!

KMSCON: Linux KMS/DRM based Virtual Console

For about half a year now I have been working constantly on a new project called kmscon. The idea emerged while reading about EGL+KMS on Jesse Barnes’ blog. KMS stands for Kernel Mode Setting and is provided by the kernel’s DRM (Direct Rendering Manager) subsystem. The mode-setting API (KMS) is only a small part of the whole DRM API, but it works with all DRM drivers in the kernel. Therefore, with DRM you can get simple framebuffer access to all connected monitors, and with udev you are also notified about hot-plugged monitors. Perfect conditions for kmscon.
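
A minimal sketch of that monitor enumeration with libdrm, similar to what kmscon does at startup (error handling mostly omitted; link with -ldrm):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        drmModeRes *res = drmModeGetResources(fd);
        int i;

        for (i = 0; res && i < res->count_connectors; ++i) {
            drmModeConnector *c =
                drmModeGetConnector(fd, res->connectors[i]);
            if (c && c->connection == DRM_MODE_CONNECTED)
                printf("connector %u: connected, %d modes\n",
                       c->connector_id, c->count_modes);
            if (c)
                drmModeFreeConnector(c);
        }

        if (res)
            drmModeFreeResources(res);
        close(fd);
        return 0;
    }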

Kmscon is a small application that simply draws a VT220/VT102-compatible terminal emulator on all connected displays: a simple replacement for the kernel-console or for xterm. It is fully hotplug-capable and automatically detects all connected displays. It is multi-seat-capable and only selects monitors that are assigned to the correct seat. It has only one mandatory dependency, libudev, which is used for device enumeration and hotplugging; all other dependencies are optional.
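
For illustration, the libudev part is roughly the following; a sketch, not kmscon’s actual code (link with -ludev):

    #include <libudev.h>
    #include <poll.h>
    #include <stdio.h>

    int main(void)
    {
        struct udev *udev = udev_new();
        struct udev_monitor *mon =
            udev_monitor_new_from_netlink(udev, "udev");
        struct pollfd pfd;

        /* only care about DRM devices */
        udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
        udev_monitor_enable_receiving(mon);
        pfd.fd = udev_monitor_get_fd(mon);
        pfd.events = POLLIN;

        while (poll(&pfd, 1, -1) > 0) {
            struct udev_device *dev = udev_monitor_receive_device(mon);
            if (!dev)
                continue;
            /* a "change" event on a card signals (un)plugged
             * monitors; re-read connector state via libdrm here */
            printf("%s: %s\n", udev_device_get_action(dev),
                   udev_device_get_sysname(dev));
            udev_device_unref(dev);
        }
        return 0;
    }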

My main focus was not on writing a decent VT220 emulator. There are lots of them out there (the guys from the Enlightenment project wrote one, called terminology, in under a month) and you can either hook an existing one into kmscon or improve the kmscon vte layer. I rather focused on the integration with the operating system. kmscon runs without an X11 environment or any helpers; it needs to do everything on its own. No Gtk, no EFL, no Qt. Of course, they could be included (in fact, kmscon includes optional pango font-renderers), but at such a low level you want at least the possibility to run kmscon without any of these dependencies. Therefore, bare kmscon uses a built-in static 8×16 font which is copied into the 2D framebuffers to draw the console.
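
Drawing a glyph from such a font is little more than expanding bits into pixels. A sketch assuming a 32bpp XRGB framebuffer; the names are illustrative, not kmscon’s actual code:

    #include <stdint.h>

    /* glyph: 16 rows, one byte per row, one bit per pixel (MSB left) */
    void draw_glyph(uint32_t *fb, unsigned int stride_px,
                    const uint8_t glyph[16],
                    unsigned int x, unsigned int y,
                    uint32_t fg, uint32_t bg)
    {
        unsigned int row, col;

        for (row = 0; row < 16; ++row) {
            uint32_t *line = fb + (y + row) * stride_px + x;
            for (col = 0; col < 8; ++col)
                line[col] = (glyph[row] & (0x80 >> col)) ? fg : bg;
        }
    }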

Hardware-accelerated Rendering

If mesa is compiled with --with-egl-platforms=drm (which it is on all major distributions except Arch Linux), we can get OpenGL contexts on bare DRM devices. This is done via EGL. kmscon includes an optional rendering backend for it when compiled with --enable-gles2. In combination with kmscon’s Pango or Freetype2 font backends, you get a hardware-accelerated console with anti-aliased fonts without any X11/Wayland/etc.
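
The EGL bring-up on a bare DRM device goes through mesa’s GBM platform. A rough sketch of the first steps (error handling omitted; link with -lgbm -lEGL):

    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    EGLDisplay egl_on_drm(const char *node)
    {
        int fd = open(node, O_RDWR | O_CLOEXEC);
        struct gbm_device *gbm = gbm_create_device(fd);
        EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);

        eglInitialize(dpy, NULL, NULL);
        /* continue with eglChooseConfig(), a gbm_surface and
         * eglCreateWindowSurface() to get a GLES2 context that
         * renders straight to a KMS framebuffer */
        return dpy;
    }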

If you think this is overkill, or if you have no idea why this is needed, try running a console on a slower machine like an Intel Atom N450 or some Pentium III. Then use an application like “less” and scroll one screen at a time; the whole console is redrawn on every keyboard input. My Atom N450 is fast enough to draw this, but as soon as I connect a second monitor it gets horribly slow. Rendering both monitors takes about half a second here. When connecting five monitors via DisplayLink USB devices, the performance is horrible. Therefore, I am happy about every CPU cycle I can save by pushing rendering to the graphics-card.

The Use-Cases

I have received a lot of (often quite harsh) feedback that kmscon is yet more software that is not needed, as it replaces perfectly well-working software. Therefore, I want to explain what kmscon does better and why I need it. I compare it to the Linux kernel-console, as kmscon is a replacement for it:

  • Full internationalization support. No-one wants (and we currently do not have) full internationalized keyboard handling in the kernel. There is also no way to print a full CJK character set or even the full Unicode character set on the linux console. Adding this to the kernel would mean having big character tables in non-swappable kernel memory. Therefore, implementing this in user-space is the only way.
  • Hardware accelerated drawing. With multi-seat becoming more and more common and multiple monitors connected to a single computer, we do not want to spend too much time drawing text on the CPU. However, using the GPU pipeline from the kernel would require new in-kernel DRM APIs which are currently not available. With GPU-accelerated rendering we can also add anti-aliased fonts or soft-shadows which can enhance readability a lot (although others might consider this cosmetic BS).
  • Controllable Monitor/Console mapping. By using the DRM API we can have as many consoles simultaneously as we want and can map them to different monitors or clone the output. We can even span a console across multiple monitors. I also think of some kind of “tabbed” consoles.
  • Full VT220 to VT510 support. The kernel console supports only a small subset of the DEC VT APIs. It does not even correctly emulate the VT102 API (although it’s pretty close). In user-space we can extend this to support all the escape sequences that xterm supports. This also includes a better scrollback-buffer, which is pretty limited in the kernel console.
  • No CONFIG_VT. CONFIG_VT is the kernel config-option that enables the virtual-consoles. The reasons why I think it is bad are beyond the scope of this document, but kmscon was mainly designed to also work without VTs, that is, CONFIG_VT=n.

There are many more points, but these five points were important enough for me to start working on a replacement. However, I never tried to make kmscon the main working console for your graphical environment. On the contrary, I personally still use xterm for my daily work, but as an emergency console I use kmscon. It works when everything else has failed and always provides me with a safe fallback-console.

Furthermore, kmscon works perfectly well simultaneously with the kernel-console. So if you don’t like kmscon, then don’t use it. But if you want to give it a shot, you can use it in parallel with other VTs.

“The console belongs in the kernel so it can run under memory pressure and/or during system failure!”

I get this a lot. As a matter of fact, the in-kernel Linux console does not run under memory pressure or during system failure either. Therefore, there is almost no disadvantage to running the console in user-space. In fact, the kernel console and kmscon only implement the rendering pipeline for the text console. Anything you do with it, and any program you run on the console (including a shell like bash), runs in user-space! And when the system fails and user-space no longer works correctly, your bash won’t run either, so there is no point in having a working console layer when there is nothing to show.

And even if your video-driver fails, your kernel-console cannot recover, as you probably run fbcon, which uses the same drivers as user-space. The only fallback would be vgacon, which is only accessible from the kernel, but recovering via text-mode doesn’t work in most video-driver failure cases either. Therefore, this whole argument is simply wrong, but most of you probably know that already.

However, one needs to take into account that the kernel-console can also print kernel panics/oopses. This cannot be done by kmscon or any replacement. But this feature requires neither a terminal-emulator nor VTs, so I wrote a replacement for it called fblog. Printing panics is in fact a very useful and prominent feature of the kernel-console which must remain in the kernel.

Kmscon facts

The current kmscon release is kmscon-3, which is still a development release. However, it works quite well on my machine and I would be glad to get some more testers.

Kmscon has many features. Here is a list of the most important ones:

  1. Safe fallback rendering via /dev/fbX (simply run ./kmscon --fbdev)
  2. DRM dumb-fbs as 2D backend
  3. EGL+3D hardware-accelerated rendering when compiled with OpenGLESv2
  4. Fully hotplug-capable (monitors and input devices)
  5. Multi-seat capable
  6. Support for multiple monitors
  7. Almost full VT220 compatibility
  8. Modularized input/video/VT handling via libuterm
  9. Only libudev.so as mandatory dependency!
  10. Plain built-in optional keyboard backend
  11. Optional internationalized keyboard backend based on libxkbcommon.so
  12. Built-in VT-compatibility but also runs without VTs
  13. Fully Unicode/UTF-8 compatible
  14. Fully internationalized terminal emulation
  15. …and more…

If you want to run kmscon, please run it as root, as it needs access to the graphics hardware. By default, kmscon uses DRM devices as output devices. It does not use fbdev devices, as many DRM drivers also provide fbdev devices for the same physical monitor. If you pass “--fbdev” as a command-line argument, kmscon uses fbdev exclusively! kmscon also supports using DRM devices without OpenGL/EGL/etc.: if kmscon is compiled without OpenGLESv2 support but with DRM support, the DRM devices are used to get direct framebuffer access similar to fbdev. Only if OpenGLESv2 is enabled does kmscon use hardware-acceleration.
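
For the curious, “direct framebuffer access” via DRM means dumb-buffers. A rough sketch of the idea, not kmscon’s actual code (error handling omitted; link with -ldrm):

    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <xf86drm.h>

    /* create a 32bpp dumb-buffer and map it like an fbdev framebuffer */
    void *map_dumb_fb(int fd, uint32_t width, uint32_t height,
                      uint32_t *handle, uint32_t *pitch)
    {
        struct drm_mode_create_dumb creq = {
            .width = width, .height = height, .bpp = 32,
        };
        struct drm_mode_map_dumb mreq;

        drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

        memset(&mreq, 0, sizeof(mreq));
        mreq.handle = creq.handle;
        drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);

        *handle = creq.handle;
        *pitch = creq.pitch;

        /* attach the buffer to a CRTC with drmModeAddFB() and
         * drmModeSetCrtc() to get it on screen */
        return mmap(NULL, creq.size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, mreq.offset);
    }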

Run “./kmscon -h” to get more information on command-line options. The “--debug” switch is very helpful, and “--xkb-layout=de” will switch to a German keyboard layout (if you use the xkbcommon keyboard backend).

Kmscon is still experimental, but I would be glad about any feedback.