drm for v5.15-rc1
Merge tag 'drm-next-2021-08-31-1' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"Highlights:
- i915 has seen a lot of refactoring and uAPI cleanups due to a
change in the upstream direction going forward.
This has all been audited with known userspace, but there may be
some pitfalls that were missed.
- i915 now uses common TTM to enable discrete memory on DG1/2 GPUs
- i915 enables Jasper and Elkhart Lake by default and has preliminary
XeHP/DG2 support
- amdgpu adds support for Cyan Skillfish
- lots of implicit fencing rules documented and fixed up in drivers
- msm now uses the core scheduler
- the irq midlayer has been removed for non-legacy drivers
- the sysfb code now works on more than x86.
Otherwise the usual smattering of stuff everywhere, panels, bridges,
refactorings.
Detailed summary:
core:
- extract i915 eDP backlight into core
- DP aux bus support
- drm_device.irq_enabled removed
- port drivers to native irq interfaces
- export gem shadow plane handling for vgem
- print proper driver name in framebuffer registration
- driver fixes for implicit fencing rules
- ARM fixed rate compression modifier added
- updated fb damage handling
- rmfb ioctl logging/docs
- drop drm_gem_object_put_locked
- define DRM_FORMAT_MAX_PLANES
- add gem fb vmap/vunmap helpers
- add lockdep_assert(once) helpers
- mark drm irq midlayer as legacy
- use offset adjusted bo mapping conversion
vgaarb:
- cleanups
fbdev:
- extend efifb handling to all arches
- div by 0 fixes for multiple drivers
udmabuf:
- add hugepage mapping support
dma-buf:
- non-dynamic exporter fixups
- document implicit fencing rules
amdgpu:
- Initial Cyan Skillfish support
- switch virtual DCE over to vkms based atomic
- VCN/JPEG power down fixes
- NAVI PCIE link handling fixes
- AMD HDMI freesync fixes
- Yellow Carp + Beige Goby fixes
- Clockgating/S0ix/SMU/EEPROM fixes
- embed hw fence in job
- rework dma-resv handling
- ensure eviction to system ram
amdkfd:
- uapi: SVM address range query added
- sysfs leak fix
- GPUVM TLB optimizations
- vmfault/migration counters
i915:
- Enable JSL and EHL by default
- preliminary XeHP/DG2 support
- remove all CNL support (never shipped)
- move to TTM for discrete memory support
- allow mixed object mmap handling
- GEM uAPI spring cleaning
- add I915_MMAP_OBJECT_FIXED
- reinstate ADL-P mmap ioctls
- drop a bunch of features unused by userspace
- disable and remove GPU relocations
- revert some i915 misfeatures
- major refactoring of GuC for Gen11+
- execbuffer object locking separate step
- reject caching/set-domain on discrete
- Enable pipe DMC loading on XE-LPD and ADL-P
- add PSF GV point support
- Refactor and fix DDI buffer translations
- Clean up FBC CFB allocation code
- Finish INTEL_GEN() and friends macro conversions
nouveau:
- add eDP backlight support
- implicit fence fix
msm:
- a680/7c3 support
- drm/scheduler conversion
panfrost:
- rework GPU reset
virtio:
- fix fencing for planes
ast:
- add detect support
bochs:
- move to tiny GPU driver
vc4:
- use hotplug irqs
- HDMI codec support
vmwgfx:
- use internal vmware device headers
ingenic:
- demidlayering irq
rcar-du:
- shutdown fixes
- convert to bridge connector helpers
zynqmp-dsub:
- misc fixes
mgag200:
- convert PLL handling to atomic
mediatek:
- MT8133 AAL support
- gem mmap object support
- MT8167 support
etnaviv:
- NXP Layerscape LS1028A SoC support
- GEM mmap cleanups
tegra:
- new user API
exynos:
- missing unlock fix
- build warning fix
- use refcount_t"
* tag 'drm-next-2021-08-31-1' of git://anongit.freedesktop.org/drm/drm: (1318 commits)
drm/amd/display: Move AllowDRAMSelfRefreshOrDRAMClockChangeInVblank to bounding box
drm/amd/display: Remove duplicate dml init
drm/amd/display: Update bounding box states (v2)
drm/amd/display: Update number of DCN3 clock states
drm/amdgpu: disable GFX CGCG in aldebaran
drm/amdgpu: Clear RAS interrupt status on aldebaran
drm/amdgpu: Add support for RAS XGMI err query
drm/amdkfd: Account for SH/SE count when setting up cu masks.
drm/amdgpu: rename amdgpu_bo_get_preferred_pin_domain
drm/amdgpu: drop redundant cancel_delayed_work_sync call
drm/amdgpu: add missing cleanups for more ASICs on UVD/VCE suspend
drm/amdgpu: add missing cleanups for Polaris12 UVD/VCE on suspend
drm/amdkfd: map SVM range with correct access permission
drm/amdkfd: check access permisson to restore retry fault
drm/amdgpu: Update RAS XGMI Error Query
drm/amdgpu: Add driver infrastructure for MCA RAS
drm/amd/display: Add Logging for HDMI color depth information
drm/amd/amdgpu: consolidate PSP TA init shared buf functions
drm/amd/amdgpu: add name field back to ras_common_if
drm/amdgpu: Fix build with missing pm_suspend_target_state module export
...
This commit is contained in: commit 477f70cd2a
1124 changed files with 62336 additions and 35102 deletions
@@ -54,7 +54,7 @@ struct dma_buf_ops {
 	 * device), and otherwise need to fail the attach operation.
 	 *
 	 * The exporter should also in general check whether the current
-	 * allocation fullfills the DMA constraints of the new device. If this
+	 * allocation fulfills the DMA constraints of the new device. If this
	 * is not the case, and the allocation cannot be moved, it should also
 	 * fail the attach operation.
 	 *
@@ -96,6 +96,12 @@ struct dma_buf_ops {
 	 * This is called automatically for non-dynamic importers from
 	 * dma_buf_attach().
 	 *
+	 * Note that similar to non-dynamic exporters in their @map_dma_buf
+	 * callback the driver must guarantee that the memory is available for
+	 * use and cleared of any old data by the time this function returns.
+	 * Drivers which pipeline their buffer moves internally must wait for
+	 * all moves and clears to complete.
+	 *
 	 * Returns:
 	 *
 	 * 0 on success, negative error code on failure.
@@ -144,9 +150,18 @@ struct dma_buf_ops {
 	 * This is always called with the dmabuf->resv object locked when
 	 * the dynamic_mapping flag is true.
 	 *
+	 * Note that for non-dynamic exporters the driver must guarantee that
+	 * that the memory is available for use and cleared of any old data by
+	 * the time this function returns. Drivers which pipeline their buffer
+	 * moves internally must wait for all moves and clears to complete.
+	 * Dynamic exporters do not need to follow this rule: For non-dynamic
+	 * importers the buffer is already pinned through @pin, which has the
+	 * same requirements. Dynamic importers otoh are required to obey the
+	 * dma_resv fences.
+	 *
 	 * Returns:
 	 *
-	 * A &sg_table scatter list of or the backing storage of the DMA buffer,
+	 * A &sg_table scatter list of the backing storage of the DMA buffer,
 	 * already mapped into the device address space of the &device attached
 	 * with the provided &dma_buf_attachment. The addresses and lengths in
 	 * the scatter list are PAGE_SIZE aligned.
@@ -168,7 +183,7 @@ struct dma_buf_ops {
 	 *
 	 * This is called by dma_buf_unmap_attachment() and should unmap and
 	 * release the &sg_table allocated in @map_dma_buf, and it is mandatory.
-	 * For static dma_buf handling this might also unpins the backing
+	 * For static dma_buf handling this might also unpin the backing
 	 * storage if this is the last mapping of the DMA buffer.
 	 */
 	void (*unmap_dma_buf)(struct dma_buf_attachment *,
@@ -237,7 +252,7 @@ struct dma_buf_ops {
 	 * This callback is used by the dma_buf_mmap() function
 	 *
 	 * Note that the mapping needs to be incoherent, userspace is expected
-	 * to braket CPU access using the DMA_BUF_IOCTL_SYNC interface.
+	 * to bracket CPU access using the DMA_BUF_IOCTL_SYNC interface.
 	 *
 	 * Because dma-buf buffers have invariant size over their lifetime, the
 	 * dma-buf core checks whether a vma is too large and rejects such
@@ -274,27 +289,6 @@ struct dma_buf_ops {
 
 /**
  * struct dma_buf - shared buffer object
- * @size: size of the buffer; invariant over the lifetime of the buffer.
- * @file: file pointer used for sharing buffers across, and for refcounting.
- * @attachments: list of dma_buf_attachment that denotes all devices attached,
- *               protected by dma_resv lock.
- * @ops: dma_buf_ops associated with this buffer object.
- * @lock: used internally to serialize list manipulation, attach/detach and
- *        vmap/unmap
- * @vmapping_counter: used internally to refcnt the vmaps
- * @vmap_ptr: the current vmap ptr if vmapping_counter > 0
- * @exp_name: name of the exporter; useful for debugging.
- * @name: userspace-provided name; useful for accounting and debugging,
- *        protected by @resv.
- * @name_lock: spinlock to protect name access
- * @owner: pointer to exporter module; used for refcounting when exporter is a
- *         kernel module.
- * @list_node: node for dma_buf accounting and debugging.
- * @priv: exporter specific private data for this buffer object.
- * @resv: reservation object linked to this dma-buf
- * @poll: for userspace poll support
- * @cb_excl: for userspace poll support
- * @cb_shared: for userspace poll support
 *
 * This represents a shared buffer, created by calling dma_buf_export(). The
 * userspace representation is a normal file descriptor, which can be created by
@@ -306,30 +300,152 @@ struct dma_buf_ops {
 * Device DMA access is handled by the separate &struct dma_buf_attachment.
 */
 struct dma_buf {
+	/**
+	 * @size:
+	 *
+	 * Size of the buffer; invariant over the lifetime of the buffer.
+	 */
 	size_t size;
+
+	/**
+	 * @file:
+	 *
+	 * File pointer used for sharing buffers across, and for refcounting.
+	 * See dma_buf_get() and dma_buf_put().
+	 */
 	struct file *file;
+
+	/**
+	 * @attachments:
+	 *
+	 * List of dma_buf_attachment that denotes all devices attached,
+	 * protected by &dma_resv lock @resv.
+	 */
 	struct list_head attachments;
+
+	/** @ops: dma_buf_ops associated with this buffer object. */
 	const struct dma_buf_ops *ops;
+
+	/**
+	 * @lock:
+	 *
+	 * Used internally to serialize list manipulation, attach/detach and
+	 * vmap/unmap. Note that in many cases this is superseeded by
+	 * dma_resv_lock() on @resv.
+	 */
 	struct mutex lock;
+
+	/**
+	 * @vmapping_counter:
+	 *
+	 * Used internally to refcnt the vmaps returned by dma_buf_vmap().
+	 * Protected by @lock.
+	 */
 	unsigned vmapping_counter;
+
+	/**
+	 * @vmap_ptr:
+	 * The current vmap ptr if @vmapping_counter > 0. Protected by @lock.
+	 */
 	struct dma_buf_map vmap_ptr;
+
+	/**
+	 * @exp_name:
+	 *
+	 * Name of the exporter; useful for debugging. See the
+	 * DMA_BUF_SET_NAME IOCTL.
+	 */
 	const char *exp_name;
+
+	/**
+	 * @name:
+	 *
+	 * Userspace-provided name; useful for accounting and debugging,
+	 * protected by dma_resv_lock() on @resv and @name_lock for read access.
+	 */
 	const char *name;
+
+	/** @name_lock: Spinlock to protect name acces for read access. */
 	spinlock_t name_lock;
+
+	/**
+	 * @owner:
+	 *
+	 * Pointer to exporter module; used for refcounting when exporter is a
+	 * kernel module.
+	 */
 	struct module *owner;
+
+	/** @list_node: node for dma_buf accounting and debugging. */
 	struct list_head list_node;
+
+	/** @priv: exporter specific private data for this buffer object. */
 	void *priv;
+
+	/**
+	 * @resv:
+	 *
+	 * Reservation object linked to this dma-buf.
+	 *
+	 * IMPLICIT SYNCHRONIZATION RULES:
+	 *
+	 * Drivers which support implicit synchronization of buffer access as
+	 * e.g. exposed in `Implicit Fence Poll Support`_ must follow the
+	 * below rules.
+	 *
+	 * - Drivers must add a shared fence through dma_resv_add_shared_fence()
+	 *   for anything the userspace API considers a read access. This highly
+	 *   depends upon the API and window system.
+	 *
+	 * - Similarly drivers must set the exclusive fence through
+	 *   dma_resv_add_excl_fence() for anything the userspace API considers
+	 *   write access.
+	 *
+	 * - Drivers may just always set the exclusive fence, since that only
+	 *   causes unecessarily synchronization, but no correctness issues.
+	 *
+	 * - Some drivers only expose a synchronous userspace API with no
+	 *   pipelining across drivers. These do not set any fences for their
+	 *   access. An example here is v4l.
+	 *
+	 * DYNAMIC IMPORTER RULES:
+	 *
+	 * Dynamic importers, see dma_buf_attachment_is_dynamic(), have
+	 * additional constraints on how they set up fences:
+	 *
+	 * - Dynamic importers must obey the exclusive fence and wait for it to
+	 *   signal before allowing access to the buffer's underlying storage
+	 *   through the device.
+	 *
+	 * - Dynamic importers should set fences for any access that they can't
+	 *   disable immediately from their &dma_buf_attach_ops.move_notify
+	 *   callback.
+	 */
 	struct dma_resv *resv;
 
-	/* poll support */
+	/** @poll: for userspace poll support */
 	wait_queue_head_t poll;
 
+	/** @cb_excl: for userspace poll support */
+	/** @cb_shared: for userspace poll support */
 	struct dma_buf_poll_cb_t {
 		struct dma_fence_cb cb;
 		wait_queue_head_t *poll;
 
 		__poll_t active;
 	} cb_excl, cb_shared;
 #ifdef CONFIG_DMABUF_SYSFS_STATS
+	/**
+	 * @sysfs_entry:
+	 *
+	 * For exposing information about this buffer in sysfs. See also
+	 * `DMA-BUF statistics`_ for the uapi this enables.
+	 */
 	struct dma_buf_sysfs_entry {
 		struct kobject kobj;
 		struct dma_buf *dmabuf;
 	} *sysfs_entry;
 #endif
 };
@@ -464,7 +580,7 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf)
 
 /**
  * dma_buf_attachment_is_dynamic - check if a DMA-buf attachment uses dynamic
- * mappinsg
+ * mappings
 * @attach: the DMA-buf attachment to check
 *
 * Returns true if a DMA-buf importer wants to call the map/unmap functions with
@@ -12,25 +12,41 @@
 
 #include <linux/dma-fence.h>
 #include <linux/irq_work.h>
+#include <linux/slab.h>
 
 /**
  * struct dma_fence_chain - fence to represent an node of a fence chain
  * @base: fence base class
- * @lock: spinlock for fence handling
  * @prev: previous fence of the chain
  * @prev_seqno: original previous seqno before garbage collection
  * @fence: encapsulated fence
- * @cb: callback structure for signaling
- * @work: irq work item for signaling
+ * @lock: spinlock for fence handling
 */
 struct dma_fence_chain {
 	struct dma_fence base;
-	spinlock_t lock;
 	struct dma_fence __rcu *prev;
 	u64 prev_seqno;
 	struct dma_fence *fence;
-	struct dma_fence_cb cb;
-	struct irq_work work;
+	union {
+		/**
+		 * @cb: callback for signaling
+		 *
+		 * This is used to add the callback for signaling the
+		 * complection of the fence chain. Never used at the same time
+		 * as the irq work.
+		 */
+		struct dma_fence_cb cb;
+
+		/**
+		 * @work: irq work item for signaling
+		 *
+		 * Irq work structure to allow us to add the callback without
+		 * running into lock inversion. Never used at the same time as
+		 * the callback.
+		 */
+		struct irq_work work;
+	};
+	spinlock_t lock;
 };
 
 extern const struct dma_fence_ops dma_fence_chain_ops;
@@ -51,6 +67,30 @@ to_dma_fence_chain(struct dma_fence *fence)
 	return container_of(fence, struct dma_fence_chain, base);
 }
 
+/**
+ * dma_fence_chain_alloc
+ *
+ * Returns a new struct dma_fence_chain object or NULL on failure.
+ */
+static inline struct dma_fence_chain *dma_fence_chain_alloc(void)
+{
+	return kmalloc(sizeof(struct dma_fence_chain), GFP_KERNEL);
+};
+
+/**
+ * dma_fence_chain_free
+ * @chain: chain node to free
+ *
+ * Frees up an allocated but not used struct dma_fence_chain object. This
+ * doesn't need an RCU grace period since the fence was never initialized nor
+ * published. After dma_fence_chain_init() has been called the fence must be
+ * released by calling dma_fence_put(), and not through this function.
+ */
+static inline void dma_fence_chain_free(struct dma_fence_chain *chain)
+{
+	kfree(chain);
+};
+
 /**
  * dma_fence_chain_for_each - iterate over all fences in chain
  * @iter: current fence
@@ -2,6 +2,7 @@
 #ifndef _LINUX_FB_H
 #define _LINUX_FB_H
 
+#include <linux/refcount.h>
 #include <linux/kgdb.h>
 #include <uapi/linux/fb.h>
 
@@ -435,7 +436,7 @@ struct fb_tile_ops {
 
 
 struct fb_info {
-	atomic_t count;
+	refcount_t count;
 	int node;
 	int flags;
 	/*
@@ -170,6 +170,8 @@ u32 host1x_syncpt_base_id(struct host1x_syncpt_base *base);
 void host1x_syncpt_release_vblank_reservation(struct host1x_client *client,
 					      u32 syncpt_id);
 
+struct dma_fence *host1x_fence_create(struct host1x_syncpt *sp, u32 threshold);
+
 /*
  * host1x channel
 */
@@ -216,8 +218,8 @@ struct host1x_job {
 	struct host1x_client *client;
 
 	/* Gathers and their memory */
-	struct host1x_job_gather *gathers;
-	unsigned int num_gathers;
+	struct host1x_job_cmd *cmds;
+	unsigned int num_cmds;
 
 	/* Array of handles to be pinned & unpinned */
 	struct host1x_reloc *relocs;
@@ -234,9 +236,15 @@ struct host1x_job {
 	u32 syncpt_incrs;
 	u32 syncpt_end;
 
+	/* Completion waiter ref */
+	void *waiter;
+
 	/* Maximum time to wait for this job */
 	unsigned int timeout;
 
+	/* Job has timed out and should be released */
+	bool cancelled;
+
 	/* Index and number of slots used in the push buffer */
 	unsigned int first_get;
 	unsigned int num_slots;
@@ -257,12 +265,25 @@ struct host1x_job {
 
 	/* Add a channel wait for previous ops to complete */
 	bool serialize;
 
+	/* Fast-forward syncpoint increments on job timeout */
+	bool syncpt_recovery;
+
 	/* Callback called when job is freed */
 	void (*release)(struct host1x_job *job);
 	void *user_data;
+
+	/* Whether host1x-side firewall should be ran for this job or not */
+	bool enable_firewall;
 };
 
 struct host1x_job *host1x_job_alloc(struct host1x_channel *ch,
-				    u32 num_cmdbufs, u32 num_relocs);
+				    u32 num_cmdbufs, u32 num_relocs,
+				    bool skip_firewall);
 void host1x_job_add_gather(struct host1x_job *job, struct host1x_bo *bo,
 			   unsigned int words, unsigned int offset);
+void host1x_job_add_wait(struct host1x_job *job, u32 id, u32 thresh,
+			 bool relative, u32 next_class);
 struct host1x_job *host1x_job_get(struct host1x_job *job);
 void host1x_job_put(struct host1x_job *job);
 int host1x_job_pin(struct host1x_job *job, struct device *dev);
@@ -306,31 +306,29 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 
 #define lockdep_depth(tsk)	(debug_locks ? (tsk)->lockdep_depth : 0)
 
-#define lockdep_assert_held(l)	do {					\
-		WARN_ON(debug_locks &&					\
-			lockdep_is_held(l) == LOCK_STATE_NOT_HELD);	\
-	} while (0)
+#define lockdep_assert(cond)		\
+	do { WARN_ON(debug_locks && !(cond)); } while (0)
 
-#define lockdep_assert_not_held(l)	do {				\
-		WARN_ON(debug_locks &&					\
-			lockdep_is_held(l) == LOCK_STATE_HELD);		\
-	} while (0)
+#define lockdep_assert_once(cond)	\
+	do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)
 
-#define lockdep_assert_held_write(l)	do {				\
-		WARN_ON(debug_locks && !lockdep_is_held_type(l, 0));	\
-	} while (0)
+#define lockdep_assert_held(l)		\
+	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
 
-#define lockdep_assert_held_read(l)	do {				\
-		WARN_ON(debug_locks && !lockdep_is_held_type(l, 1));	\
-	} while (0)
+#define lockdep_assert_not_held(l)	\
+	lockdep_assert(lockdep_is_held(l) != LOCK_STATE_HELD)
 
-#define lockdep_assert_held_once(l)	do {				\
-		WARN_ON_ONCE(debug_locks && !lockdep_is_held(l));	\
-	} while (0)
+#define lockdep_assert_held_write(l)	\
+	lockdep_assert(lockdep_is_held_type(l, 0))
 
-#define lockdep_assert_none_held_once()	do {				\
-		WARN_ON_ONCE(debug_locks && current->lockdep_depth);	\
-	} while (0)
+#define lockdep_assert_held_read(l)	\
+	lockdep_assert(lockdep_is_held_type(l, 1))
+
+#define lockdep_assert_held_once(l)	\
+	lockdep_assert_once(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
+
+#define lockdep_assert_none_held_once()	\
+	lockdep_assert_once(!current->lockdep_depth)
 
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
@@ -407,6 +405,9 @@ extern int lock_is_held(const void *);
 extern int lockdep_is_held(const void *);
 #define lockdep_is_held_type(l, r)		(1)
 
+#define lockdep_assert(c)			do { } while (0)
+#define lockdep_assert_once(c)			do { } while (0)
+
 #define lockdep_assert_held(l)			do { (void)(l); } while (0)
 #define lockdep_assert_not_held(l)		do { (void)(l); } while (0)
 #define lockdep_assert_held_write(l)		do { (void)(l); } while (0)
include/linux/sysfb.h (new file, 94 lines)
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _LINUX_SYSFB_H
+#define _LINUX_SYSFB_H
+
+/*
+ * Generic System Framebuffers on x86
+ * Copyright (c) 2012-2013 David Herrmann <dh.herrmann@gmail.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/platform_data/simplefb.h>
+#include <linux/screen_info.h>
+
+enum {
+	M_I17,		/* 17-Inch iMac */
+	M_I20,		/* 20-Inch iMac */
+	M_I20_SR,	/* 20-Inch iMac (Santa Rosa) */
+	M_I24,		/* 24-Inch iMac */
+	M_I24_8_1,	/* 24-Inch iMac, 8,1th gen */
+	M_I24_10_1,	/* 24-Inch iMac, 10,1th gen */
+	M_I27_11_1,	/* 27-Inch iMac, 11,1th gen */
+	M_MINI,		/* Mac Mini */
+	M_MINI_3_1,	/* Mac Mini, 3,1th gen */
+	M_MINI_4_1,	/* Mac Mini, 4,1th gen */
+	M_MB,		/* MacBook */
+	M_MB_2,		/* MacBook, 2nd rev. */
+	M_MB_3,		/* MacBook, 3rd rev. */
+	M_MB_5_1,	/* MacBook, 5th rev. */
+	M_MB_6_1,	/* MacBook, 6th rev. */
+	M_MB_7_1,	/* MacBook, 7th rev. */
+	M_MB_SR,	/* MacBook, 2nd gen, (Santa Rosa) */
+	M_MBA,		/* MacBook Air */
+	M_MBA_3,	/* Macbook Air, 3rd rev */
+	M_MBP,		/* MacBook Pro */
+	M_MBP_2,	/* MacBook Pro 2nd gen */
+	M_MBP_2_2,	/* MacBook Pro 2,2nd gen */
+	M_MBP_SR,	/* MacBook Pro (Santa Rosa) */
+	M_MBP_4,	/* MacBook Pro, 4th gen */
+	M_MBP_5_1,	/* MacBook Pro, 5,1th gen */
+	M_MBP_5_2,	/* MacBook Pro, 5,2th gen */
+	M_MBP_5_3,	/* MacBook Pro, 5,3rd gen */
+	M_MBP_6_1,	/* MacBook Pro, 6,1th gen */
+	M_MBP_6_2,	/* MacBook Pro, 6,2th gen */
+	M_MBP_7_1,	/* MacBook Pro, 7,1th gen */
+	M_MBP_8_2,	/* MacBook Pro, 8,2nd gen */
+	M_UNKNOWN	/* placeholder */
+};
+
+struct efifb_dmi_info {
+	char *optname;
+	unsigned long base;
+	int stride;
+	int width;
+	int height;
+	int flags;
+};
+
+#ifdef CONFIG_EFI
+
+extern struct efifb_dmi_info efifb_dmi_list[];
+void sysfb_apply_efi_quirks(struct platform_device *pd);
+
+#else /* CONFIG_EFI */
+
+static inline void sysfb_apply_efi_quirks(struct platform_device *pd)
+{
+}
+
+#endif /* CONFIG_EFI */
+
+#ifdef CONFIG_SYSFB_SIMPLEFB
+
+bool sysfb_parse_mode(const struct screen_info *si,
+		      struct simplefb_platform_data *mode);
+int sysfb_create_simplefb(const struct screen_info *si,
+			  const struct simplefb_platform_data *mode);
+
+#else /* CONFIG_SYSFB_SIMPLE */
+
+static inline bool sysfb_parse_mode(const struct screen_info *si,
+				    struct simplefb_platform_data *mode)
+{
+	return false;
+}
+
+static inline int sysfb_create_simplefb(const struct screen_info *si,
+					const struct simplefb_platform_data *mode)
+{
+	return -EINVAL;
+}
+
+#endif /* CONFIG_SYSFB_SIMPLE */
+
+#endif /* _LINUX_SYSFB_H */
@@ -33,6 +33,8 @@
 
 #include <video/vga.h>
 
+struct pci_dev;
+
 /* Legacy VGA regions */
 #define VGA_RSRC_NONE 0x00
 #define VGA_RSRC_LEGACY_IO 0x01
@@ -42,42 +44,45 @@
 #define VGA_RSRC_NORMAL_IO 0x04
 #define VGA_RSRC_NORMAL_MEM 0x08
 
-/* Passing that instead of a pci_dev to use the system "default"
- * device, that is the one used by vgacon. Archs will probably
- * have to provide their own vga_default_device();
- */
-#define VGA_DEFAULT_DEVICE (NULL)
-
-struct pci_dev;
-
-/* For use by clients */
-
-/**
- * vga_set_legacy_decoding
- *
- * @pdev: pci device of the VGA card
- * @decodes: bit mask of what legacy regions the card decodes
- *
- * Indicates to the arbiter if the card decodes legacy VGA IOs,
- * legacy VGA Memory, both, or none. All cards default to both,
- * the card driver (fbdev for example) should tell the arbiter
- * if it has disabled legacy decoding, so the card can be left
- * out of the arbitration process (and can be safe to take
- * interrupts at any time.
- */
-#if defined(CONFIG_VGA_ARB)
-extern void vga_set_legacy_decoding(struct pci_dev *pdev,
-				    unsigned int decodes);
-#else
+#ifdef CONFIG_VGA_ARB
+void vga_set_legacy_decoding(struct pci_dev *pdev, unsigned int decodes);
+int vga_get(struct pci_dev *pdev, unsigned int rsrc, int interruptible);
+void vga_put(struct pci_dev *pdev, unsigned int rsrc);
+struct pci_dev *vga_default_device(void);
+void vga_set_default_device(struct pci_dev *pdev);
+int vga_remove_vgacon(struct pci_dev *pdev);
+int vga_client_register(struct pci_dev *pdev,
+		unsigned int (*set_decode)(struct pci_dev *pdev, bool state));
+#else /* CONFIG_VGA_ARB */
 static inline void vga_set_legacy_decoding(struct pci_dev *pdev,
-					   unsigned int decodes) { };
-#endif
-
-#if defined(CONFIG_VGA_ARB)
-extern int vga_get(struct pci_dev *pdev, unsigned int rsrc, int interruptible);
-#else
-static inline int vga_get(struct pci_dev *pdev, unsigned int rsrc, int interruptible) { return 0; }
-#endif
+		unsigned int decodes)
+{
+};
+static inline int vga_get(struct pci_dev *pdev, unsigned int rsrc,
+		int interruptible)
+{
+	return 0;
+}
+static inline void vga_put(struct pci_dev *pdev, unsigned int rsrc)
+{
+}
+static inline struct pci_dev *vga_default_device(void)
+{
+	return NULL;
+}
+static inline void vga_set_default_device(struct pci_dev *pdev)
+{
+}
+static inline int vga_remove_vgacon(struct pci_dev *pdev)
+{
+	return 0;
+}
+static inline int vga_client_register(struct pci_dev *pdev,
+		unsigned int (*set_decode)(struct pci_dev *pdev, bool state))
+{
+	return 0;
+}
+#endif /* CONFIG_VGA_ARB */
 
 /**
  * vga_get_interruptible
@@ -109,48 +114,9 @@ static inline int vga_get_uninterruptible(struct pci_dev *pdev,
 	return vga_get(pdev, rsrc, 0);
 }
 
-#if defined(CONFIG_VGA_ARB)
-extern void vga_put(struct pci_dev *pdev, unsigned int rsrc);
-#else
-static inline void vga_put(struct pci_dev *pdev, unsigned int rsrc)
+static inline void vga_client_unregister(struct pci_dev *pdev)
 {
+	vga_client_register(pdev, NULL);
 }
-#endif
-
-
-#ifdef CONFIG_VGA_ARB
-extern struct pci_dev *vga_default_device(void);
-extern void vga_set_default_device(struct pci_dev *pdev);
-extern int vga_remove_vgacon(struct pci_dev *pdev);
-#else
-static inline struct pci_dev *vga_default_device(void) { return NULL; }
-static inline void vga_set_default_device(struct pci_dev *pdev) { }
-static inline int vga_remove_vgacon(struct pci_dev *pdev) { return 0; }
-#endif
 
 /*
  * Architectures should define this if they have several
  * independent PCI domains that can afford concurrent VGA
  * decoding
 */
 #ifndef __ARCH_HAS_VGA_CONFLICT
 static inline int vga_conflicts(struct pci_dev *p1, struct pci_dev *p2)
 {
 	return 1;
 }
 #endif
 
-#if defined(CONFIG_VGA_ARB)
-int vga_client_register(struct pci_dev *pdev, void *cookie,
-			void (*irq_set_state)(void *cookie, bool state),
-			unsigned int (*set_vga_decode)(void *cookie, bool state));
-#else
-static inline int vga_client_register(struct pci_dev *pdev, void *cookie,
-				      void (*irq_set_state)(void *cookie, bool state),
-				      unsigned int (*set_vga_decode)(void *cookie, bool state))
-{
-	return 0;
-}
-#endif
 
 #endif /* LINUX_VGA_H */