===================
Userland interfaces
===================

The DRM core exports several interfaces to applications, generally
intended to be used through corresponding libdrm wrapper functions. In
addition, drivers export device-specific interfaces for use by userspace
drivers & device-aware applications through ioctls and sysfs files.

External interfaces include: memory mapping, context management, DMA
operations, AGP management, vblank control, fence management, memory
management, and output management.

Cover generic ioctls and sysfs layout here. We only need high-level
info, since man pages should cover the rest.

libdrm Device Lookup
====================

.. kernel-doc:: drivers/gpu/drm/drm_ioctl.c
   :doc: getunique and setversion story

Render nodes
============

DRM core provides multiple character devices for user-space to use.
Depending on which device is opened, user-space can perform a different
set of operations (mainly ioctls). The primary node is always created
and is called card<num>. Additionally, a currently unused control node,
called controlD<num>, is also created. The primary node provides all
legacy operations and historically was the only interface used by
userspace. With KMS, the control node was introduced. However, the
planned KMS control interface has never been written, and so the control
node remains unused to date.

With the increased use of offscreen renderers and GPGPU applications,
clients no longer require running compositors or graphics servers to
make use of a GPU. But the DRM API required unprivileged clients to
authenticate to a DRM-Master prior to getting GPU access. To avoid this
step and to grant clients GPU access without authenticating, render
nodes were introduced. Render nodes solely serve render clients, that
is, no modesetting or privileged ioctls can be issued on render nodes.
Only non-global rendering commands are allowed. If a driver supports
render nodes, it must advertise it via the DRIVER_RENDER DRM driver
capability. If not supported, the primary node must be used for render
clients together with the legacy drmAuth authentication procedure.
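
A minimal sketch of how a driver might advertise this capability; the
foo_* names are placeholders, not taken from any real driver:

.. code-block:: c

   static struct drm_driver foo_driver = {
           /* DRIVER_RENDER asks DRM core to create a render node. */
           .driver_features = DRIVER_GEM | DRIVER_RENDER,
           /* ... fops, ioctl table and other callbacks ... */
   };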

If a driver advertises render node support, DRM core will create a
separate render node called renderD<num>. There will be one render node
per device. No ioctls except PRIME-related ioctls will be allowed on
this node. In particular, GEM_OPEN will be explicitly prohibited. Render
nodes are designed to avoid the buffer leaks that occur when clients
guess flink names or mmap offsets on the legacy interface. In addition
to this basic interface, drivers must mark their driver-dependent
render-only ioctls as DRM_RENDER_ALLOW so render clients can use them.
Driver authors must be careful not to allow any privileged ioctls on
render nodes.
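
A hedged sketch of what this looks like in a driver's ioctl table; the
FOO_* ioctls and their handlers are hypothetical:

.. code-block:: c

   static const struct drm_ioctl_desc foo_ioctls[] = {
           /* Render command submission is safe for render clients. */
           DRM_IOCTL_DEF_DRV(FOO_SUBMIT, foo_submit_ioctl,
                             DRM_AUTH | DRM_RENDER_ALLOW),
           /* Privileged ioctls must not set DRM_RENDER_ALLOW. */
           DRM_IOCTL_DEF_DRV(FOO_SET_PARAM, foo_set_param_ioctl,
                             DRM_AUTH | DRM_MASTER),
   };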

With render nodes, user-space can now control access to the render node
via basic file-system access modes. A running graphics server which
authenticates clients on the privileged primary/legacy node is no longer
required. Instead, a client can open the render node and is immediately
granted GPU access. Communication between clients (or servers) is done
via PRIME. FLINK from render node to legacy node is not supported. New
clients must not use the insecure FLINK interface.
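
For example, a client that only wants to render can open the render node
directly; this is only a sketch, and /dev/dri/renderD128 assumes the
first render node on the system:

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
           /* No DRM-Master authentication is needed; access is governed
            * solely by the file-system permissions on the node. */
           int fd = open("/dev/dri/renderD128", O_RDWR);
           if (fd < 0) {
                   perror("open");
                   return 1;
           }
           /* fd can now be used for render and PRIME ioctls. */
           close(fd);
           return 0;
   }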

Besides dropping all modeset/global ioctls, render nodes also drop the
DRM-Master concept. There is no reason to associate render clients with
a DRM-Master as they are independent of any graphics server. Besides,
they must work without any running master, anyway. Drivers must be able
to run without a master object if they support render nodes. If, on the
other hand, a driver requires shared state between clients which is
visible to user-space and accessible beyond open-file boundaries, it
cannot support render nodes.

VBlank event handling
=====================

The DRM core exposes two vertical blank related ioctls:

DRM_IOCTL_WAIT_VBLANK
    This takes a struct drm_wait_vblank structure as its argument, and
    it is used to block or request a signal when a specified vblank
    event occurs (see the sketch after this list).

DRM_IOCTL_MODESET_CTL
    This was only used for user-mode-setting drivers around modesetting
    changes to allow the kernel to update the vblank interrupt after
    mode setting, since on many devices the vertical blank counter is
    reset to 0 at some point during modeset. Modern drivers should not
    call this any more since with kernel mode setting it is a no-op.
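
A minimal sketch of a blocking wait for the next vblank through the raw
ioctl; real applications would typically use the libdrm wrapper
drmWaitVBlank() instead, and the uapi header path may differ depending
on how the headers are installed:

.. code-block:: c

   #include <string.h>
   #include <sys/ioctl.h>
   #include <drm/drm.h>

   static int wait_next_vblank(int fd)
   {
           union drm_wait_vblank vbl;

           memset(&vbl, 0, sizeof(vbl));
           /* Wait relative to the current counter value... */
           vbl.request.type = _DRM_VBLANK_RELATIVE;
           /* ...for one more vblank, i.e. the next one. */
           vbl.request.sequence = 1;

           return ioctl(fd, DRM_IOCTL_WAIT_VBLANK, &vbl);
   }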

This second part of the GPU Driver Developer's Guide documents driver
code, implementation details and also all the driver-specific userspace
interfaces. Especially since all hardware-acceleration interfaces to
userspace are driver specific for efficiency and other reasons, these
interfaces can be rather substantial. Hence every driver has its own
chapter.