path: root/drivers/gpu/host1x
* gpu: host1x: Avoid trying to use GART on Tegra20 (Robin Murphy, 2022-11-18; 1 file changed, -0/+4)

  Since commit c7e3ca515e78 ("iommu/tegra: gart: Do not register with bus") quite some time ago, the GART driver has effectively disabled itself to avoid issues with the GPU driver expecting it to work in ways that it doesn't. As of commit 57365a04c921 ("iommu: Move bus setup to IOMMU device registration") that bodge no longer works, but really the GPU driver should be responsible for its own behaviour anyway. Make the workaround explicit.

  Reported-by: Jon Hunter <jonathanh@nvidia.com>
  Suggested-by: Dmitry Osipenko <digetx@gmail.com>
  Signed-off-by: Robin Murphy <robin.murphy@arm.com>
  Tested-by: Jon Hunter <jonathanh@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
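  As an illustration only (not necessarily the exact patch), such an explicit opt-out can live in a helper that decides whether host1x should use an IOMMU at all; the helper name and placement below are assumptions, only the "refuse the Tegra20 GART" intent comes from the commit message:

      #include <linux/of.h>

      static bool host1x_wants_iommu(struct host1x *host)
      {
              /* Tegra20's GART cannot do what the GPU driver expects of an
               * IOMMU, so behave as if no IOMMU were available there. */
              if (of_machine_is_compatible("nvidia,tegra20"))
                      return false;

              return true;
      }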
* gpu: host1x: Use the bitmap API to allocate bitmaps (Christophe JAILLET, 2022-07-08; 1 file changed, -6/+2)

  Use bitmap_zalloc()/bitmap_free() instead of hand-writing them. It is less verbose and it improves the semantics.

  While at it, remove a useless bitmap_zero() call: the bitmap is already zeroed when allocated.

  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
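  For illustration, a minimal sketch of the pattern this conversion follows (the helper names are made up, not the driver's):

      #include <linux/bitmap.h>
      #include <linux/gfp.h>

      static unsigned long *syncpt_bitmap_alloc(unsigned int nb_pts)
      {
              /* bitmap_zalloc() returns zeroed storage, so no bitmap_zero() */
              return bitmap_zalloc(nb_pts, GFP_KERNEL);
      }

      static void syncpt_bitmap_free(unsigned long *bitmap)
      {
              bitmap_free(bitmap);
      }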
* gpu: host1x: Generalize host1x_cdma_push_wide() (Mikko Perttunen, 2022-07-08; 1 file changed, -15/+9)

  host1x_cdma_push_wide() had the assumptions that the last parameter word was a NOP opcode, and that NOP opcodes could be used in all situations. Neither are true with the new job opcode sequence, so adjust the function to not have these assumptions, and instead place an early RESTART opcode when necessary to jump back to the beginning of the pushbuffer.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Initialize syncval in channel_submit() (Mikko Perttunen, 2022-07-08; 1 file changed, -0/+1)

  During the refactoring of channel_submit(), the assignment of syncval was moved out, but the value is also used within channel_submit(). Add the assignment back to channel_submit() as well.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Register context bus unconditionally (Robin Murphy, 2022-07-08; 1 file changed, -5/+0)

  Conditional registration is a problem for other subsystems which may unwittingly try to interact with host1x_context_device_bus_type in an uninitialised state on non-Tegra platforms. A look under /sys/bus on a typical system already reveals plenty of entries from enabled but otherwise irrelevant configs, so let's keep things simple and register our context bus unconditionally too.

  Signed-off-by: Robin Murphy <robin.murphy@arm.com>
  Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
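  A rough sketch of unconditional, built-in registration; the bus name string and the initcall level below are assumptions, only the host1x_context_device_bus_type identifier comes from the commit message:

      #include <linux/device.h>
      #include <linux/export.h>
      #include <linux/init.h>

      struct bus_type host1x_context_device_bus_type = {
              .name = "host1x-context",
      };
      EXPORT_SYMBOL_GPL(host1x_context_device_bus_type);

      static int __init host1x_context_device_bus_init(void)
      {
              /* no platform check: register even on non-Tegra systems */
              return bus_register(&host1x_context_device_bus_type);
      }
      core_initcall(host1x_context_device_bus_init);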
* gpu: host1x: Use RESTART_W to skip timed out jobs on Tegra186+ (Mikko Perttunen, 2022-07-08; 1 file changed, -2/+17)

  When MLOCK enforcement is enabled, the 0-word write currently done is rejected by the hardware outside of an MLOCK region. As such, on these chips, which also have the newer, more convenient RESTART_W opcode, use that instead to skip over the timed out job.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add MLOCK release code on Tegra234 (Mikko Perttunen, 2022-07-08; 2 files changed, -0/+41)

  With the full-featured opcode sequence using MLOCKs, we need to also unlock those MLOCKs in the event of a timeout. However, it turns out that on Tegra186/Tegra194, by default, we don't need to do this; furthermore, on Tegra234 it is much simpler to do; so only implement this on Tegra234 for the time being.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Rewrite job opcode sequence (Mikko Perttunen, 2022-07-08; 1 file changed, -59/+85)

  For new (Tegra186+) SoCs, use a new ('full-featured') job opcode sequence that is compatible with virtualization. In particular, the Host1x hardware in Tegra234 is more strict regarding the sequence, requiring ACQUIRE_MLOCK-SETCLASS-SETSTREAMID opcodes to occur in that sequence without gaps (except for SETPAYLOAD), so let's do it properly in one go now.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Tegra234 device data and headers (Mikko Perttunen, 2022-07-08; 10 files changed, -1/+354)

  Add device data and chip headers for Tegra234.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Program interrupt destinations on Tegra234 (Mikko Perttunen, 2022-07-08; 1 file changed, -0/+11)

  On Tegra234, each Host1x VM has 8 interrupt lines. Each syncpoint can be configured with the interrupt line that should be used for its threshold interrupt, allowing for load balancing. For now, to keep backwards compatibility, just set all syncpoints to the first interrupt.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Allow reset to be missing (Mikko Perttunen, 2022-07-08; 1 file changed, -3/+0)

  Host1x on Tegra234 does not have a software-controllable reset line. As such, don't bail out if we don't find one in the device tree.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
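  One common way to express an optional reset is shown below; this is a sketch of the general approach rather than the actual diff, which simply drops the bail-out:

      #include <linux/err.h>
      #include <linux/platform_device.h>
      #include <linux/reset.h>

      static int host1x_get_reset(struct platform_device *pdev, struct host1x *host)
      {
              /* returns NULL (not an error) when the DT has no resets property;
               * later reset_control_*() calls on NULL are no-ops */
              host->rst = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL);
              return PTR_ERR_OR_ZERO(host->rst);
      }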
* gpu: host1x: Program virtualization tables (Mikko Perttunen, 2022-07-08; 2 files changed, -3/+26)

  Program virtualization tables specifying which VMs have access to which Host1x hardware resources. Programming these has become mandatory on Tegra234. For now, since the driver does not operate as a Host1x hypervisor, we basically allow everyone access to everything.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Simplify register mapping and add common aperture (Mikko Perttunen, 2022-07-08; 2 files changed, -27/+22)

  Refactor 'regs' property loading using devm_platform_ioremap_* and add loading of the 'common' region found on Tegra234.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
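  A sketch of the devm_platform_ioremap_* pattern; the resource names "vm" and "common" and the has_common flag below are assumptions used purely for illustration:

      host->regs = devm_platform_ioremap_resource_byname(pdev, "vm");
      if (IS_ERR(host->regs))
              return PTR_ERR(host->regs);

      if (host->info->has_common) {
              /* the common aperture only exists on Tegra234 */
              host->common_regs = devm_platform_ioremap_resource_byname(pdev, "common");
              if (IS_ERR(host->common_regs))
                      return PTR_ERR(host->common_regs);
      }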
* gpu: host1x: Deduplicate hardware headers (Mikko Perttunen, 2022-07-08; 7 files changed, -703/+156)

  Host1x class information and opcodes are unchanged or backwards compatible across SoCs so let's not duplicate them for each one but have them in a shared header file. At the same time, add opcode functions for acquire/release_mlock.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Program context stream ID on submission (Mikko Perttunen, 2022-07-08; 3 files changed, -4/+68)

  Add code to do stream ID switching at the beginning of a job. The stream ID is switched to the stream ID specified by the context passed in the job structure.

  Before switching the stream ID, an OP_DONE wait is done on the channel's engine to ensure that there is no residual ongoing work that might do DMA using the new stream ID.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add context device management code (Mikko Perttunen, 2022-07-08; 5 files changed, -1/+214)

  Add code to register context devices from device tree, allocate them out and manage their refcounts.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add context bus (Mikko Perttunen, 2022-06-01; 3 files changed, -0/+37)

  The context bus is a "dummy" bus that contains struct devices that correspond to IOMMU contexts assigned through Host1x to processes.

  Even when host1x itself is built as a module, the bus is registered in built-in code so that the built-in ARM SMMU driver is able to reference it.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Show all allocated syncpts via debugfs (Jon Hunter, 2022-04-06; 1 file changed, -4/+7)

  When the host1x syncpoint status is dumped via debugfs, syncpoints that have been allocated but not yet used are not shown, so it is currently not possible to see all the allocated syncpoints. Update the debugfs path for dumping the syncpoint status to show all allocated syncpoints, even if they have not been used yet. Note that when the syncpoint status is dumped by the kernel itself for debugging, only the active syncpoints are shown.

  Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Do not use mapping cache for job submissions (Thierry Reding, 2022-04-06; 1 file changed, -2/+2)

  Buffer mappings used in job submissions are usually small and not rapidly reused, as opposed to framebuffers (which are usually large and rapidly reused, for example when page-flipping between double-buffered framebuffers). Avoid going through the mapping cache for these buffers, since the cache would also lead to leaks if nobody ever releases the cache's last reference. For DRM/KMS these last references are dropped when the framebuffers are removed and therefore no longer needed.

  While at it, also add a note about the need to explicitly remove the final reference to the mapping in the cache.

  Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
  Tested-by: Jon Hunter <jonathanh@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Fix a memory leak in 'host1x_remove()' (Christophe JAILLET, 2022-03-01; 1 file changed, -0/+1)

  Add a missing 'host1x_channel_list_free()' call in the remove function, as already done in the error handling path of the probe function.

  Fixes: 8474b02531c4 ("gpu: host1x: Refactor channel allocation code")
  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Fix an error handling path in 'host1x_probe()' (Christophe JAILLET, 2022-03-01; 1 file changed, -2/+5)

  Add the missing 'host1x_bo_cache_destroy()' call in the error handling path of the probe function, as already done in the remove function.

  In order to simplify the error handling, move the 'host1x_bo_cache_init()' call after all the devm_ functions.

  Fixes: 1f39b1dfa53c ("drm/tegra: Implement buffer object cache")
  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
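  The shape of the resulting error handling, as a sketch (the intermediate step and label names are illustrative; only the cache init/destroy calls come from the commit message):

      static int host1x_probe(struct platform_device *pdev)
      {
              struct host1x *host;    /* allocated with devm_kzalloc() (elided) */
              int err;

              /* ... all devm_-managed setup happens first ... */

              host1x_bo_cache_init(&host->cache);

              err = host1x_iommu_init(host);          /* illustrative step */
              if (err < 0)
                      goto destroy_cache;

              /* ... later failures also jump to destroy_cache ... */

              return 0;

      destroy_cache:
              host1x_bo_cache_destroy(&host->cache);
              return err;
      }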
* gpu: host1x: Always return syncpoint value when waiting (Mikko Perttunen, 2022-02-16; 1 file changed, -17/+2)

  The new TegraDRM UAPI uses syncpoint waiting with the timeout set to zero to indicate reading the syncpoint value. To support that, we need to always return the syncpoint value when waiting.

  Fixes: 44e961381354 ("drm/tegra: Implement syncpoint wait UAPI")
  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
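  A sketch of the idea; host1x_syncpt_load() is the driver's value read, while wait_for_threshold() and the -EAGAIN convention are illustrative stand-ins for the real wait machinery:

      static int syncpt_wait_sketch(struct host1x_syncpt *sp, u32 thresh,
                                    long timeout, u32 *value)
      {
              int err = 0;

              if (!host1x_syncpt_is_expired(sp, thresh)) {
                      if (timeout == 0)
                              err = -EAGAIN;  /* zero-timeout poll: not ready */
                      else
                              err = wait_for_threshold(sp, thresh, timeout);
              }

              /* report the current value unconditionally, even on error */
              if (value)
                      *value = host1x_syncpt_load(sp);

              return err;
      }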
* gpu: host1x: Fix hang on Tegra186+ (Dmitry Osipenko, 2022-01-27; 1 file changed, -8/+8)

  Tegra186+ hangs if the host1x hardware is disabled at kernel boot time, because we touch the hardware before runtime PM has resumed it. Move the sync point assignment initialization to the RPM-resume callback. Older SoCs were unaffected because they skip that sync point initialization.

  Tested-by: Jon Hunter <jonathanh@nvidia.com> # T186
  Reported-by: Jon Hunter <jonathanh@nvidia.com> # T186
  Fixes: 6b6776e2ab8a ("gpu: host1x: Add initial runtime PM and OPP support")
  Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add back arm_iommu_detach_device() (Dmitry Osipenko, 2021-12-16; 1 file changed, -0/+15)

  The Host1x DMA buffer isn't mapped properly when CONFIG_ARM_DMA_USE_IOMMU=y. The memory management code of the Host1x driver has long been overdue for an overhaul and it's not obvious where the problem lies in this case. Hence let's add back the old workaround which we already had before: it explicitly detaches the Host1x device from the offending implicit IOMMU domain. This fixes completely broken Host1x DMA with an ARM32 multiplatform kernel config.

  Cc: stable@vger.kernel.org
  Fixes: af1cbfb9bf0f ("gpu: host1x: Support DMA mapping of buffers")
  Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
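  A sketch of such a workaround, restricted to ARM32 with the DMA-IOMMU glue enabled; the wrapper name is illustrative, while arm_iommu_detach_device() and arm_iommu_release_mapping() are the existing ARM32 helpers:

      #if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
      #include <asm/dma-iommu.h>
      #endif

      static void host1x_detach_arm_iommu(struct device *dev)
      {
      #if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
              struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);

              /* undo the implicit attachment made by the ARM32 DMA-IOMMU code */
              if (mapping) {
                      arm_iommu_detach_device(dev);
                      arm_iommu_release_mapping(mapping);
              }
      #endif
      }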
* gpu: host1x: Add host1x_channel_stop() (Dmitry Osipenko, 2021-12-16; 1 file changed, -0/+8)

  Add host1x_channel_stop(), which waits until the channel becomes idle and then stops the channel hardware. This is needed to support suspend/resume in host1x drivers: the hardware state is lost after power-gating, so the channel needs to be stopped before a client enters suspend.

  Tested-by: Peter Geis <pgwipeout@gmail.com> # Ouya T30
  Tested-by: Paul Fertser <fercerpav@gmail.com> # PAZ00 T20
  Tested-by: Nicolas Chauvet <kwizart@gmail.com> # PAZ00 T20 and TK1 T124
  Tested-by: Matt Merhar <mattmerhar@protonmail.com> # Ouya T30
  Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add initial runtime PM and OPP support (Dmitry Osipenko, 2021-12-16; 6 files changed, -56/+164)

  Add runtime PM and OPP support to the Host1x driver. For starters we will keep host1x always on, because dynamic power management requires a major refactoring of the driver code: lots of code paths are missing RPM handling, and we're going to remove some of these paths in the future.

  Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
  Tested-by: Peter Geis <pgwipeout@gmail.com> # Ouya T30
  Tested-by: Paul Fertser <fercerpav@gmail.com> # PAZ00 T20
  Tested-by: Nicolas Chauvet <kwizart@gmail.com> # PAZ00 T20 and TK1 T124
  Tested-by: Matt Merhar <mattmerhar@protonmail.com> # Ouya T30
  Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
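  A sketch of the runtime-PM plumbing only (clock handling simplified; OPP setup and the pm_runtime_get/put calls spread through the driver are omitted):

      #include <linux/clk.h>
      #include <linux/pm_runtime.h>

      static int host1x_runtime_suspend(struct device *dev)
      {
              struct host1x *host = dev_get_drvdata(dev);

              clk_disable_unprepare(host->clk);
              return 0;
      }

      static int host1x_runtime_resume(struct device *dev)
      {
              struct host1x *host = dev_get_drvdata(dev);

              /* hardware state is re-established here (see the Tegra186+ hang fix above) */
              return clk_prepare_enable(host->clk);
      }

      static const struct dev_pm_ops host1x_pm_ops = {
              SET_RUNTIME_PM_OPS(host1x_runtime_suspend, host1x_runtime_resume, NULL)
      };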
* gpu: host1x: Add missing DMA API include (Robin Murphy, 2021-12-16; 1 file changed, -0/+1)

  Host1x seems to be relying on picking up dma-mapping.h transitively from iova.h, which has no reason to include it in the first place. Fix the former issue before we totally break things by fixing the latter one.

  CC: Thierry Reding <thierry.reding@gmail.com>
  CC: Mikko Perttunen <mperttunen@nvidia.com>
  CC: dri-devel@lists.freedesktop.org
  Signed-off-by: Robin Murphy <robin.murphy@arm.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: select CONFIG_DMA_SHARED_BUFFER (Arnd Bergmann, 2021-12-16; 1 file changed, -0/+1)

  Linking fails when dma-buf is disabled:

      ld.lld: error: undefined symbol: dma_fence_release
      >>> referenced by fence.c
      >>>               gpu/host1x/fence.o:(host1x_syncpt_fence_enable_signaling) in archive drivers/built-in.a
      >>> referenced by fence.c
      >>>               gpu/host1x/fence.o:(host1x_fence_signal) in archive drivers/built-in.a
      >>> referenced by fence.c
      >>>               gpu/host1x/fence.o:(do_fence_timeout) in archive drivers/built-in.a

  Fixes: 687db2207b1b ("gpu: host1x: Add DMA fence implementation")
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Drop excess kernel-doc entry @key (Randy Dunlap, 2021-12-16; 1 file changed, -1/+0)

  Fix kernel-doc warning in host1x:

      ../drivers/gpu/host1x/bus.c:774: warning: Excess function parameter 'key' description in '__host1x_client_register'

  Fixes: 0cfe5a6e758f ("gpu: host1x: Split up client initalization and registration")
  Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
  Cc: Thierry Reding <thierry.reding@gmail.com>
  Cc: dri-devel@lists.freedesktop.org
  Cc: linux-tegra@vger.kernel.org
  Cc: David Airlie <airlied@linux.ie>
  Cc: Daniel Vetter <daniel@ffwll.ch>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
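  For context, this is the kind of mismatch kernel-doc complains about; the comment below is illustrative, not the exact block in bus.c:

      /**
       * __host1x_client_register() - register a host1x client
       * @client: host1x client
       *
       * A leftover "@key: lockdep class key" line here, describing a parameter
       * the function does not take, is what produces the "Excess function
       * parameter 'key'" warning.
       */
      int __host1x_client_register(struct host1x_client *client);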
* drm/tegra: Add NVDEC driver (Mikko Perttunen, 2021-12-16; 1 file changed, -0/+18)

  Add support for booting and using NVDEC on Tegra210, Tegra186 and Tegra194 to the Host1x and TegraDRM drivers. Booting in secure mode is not currently supported.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* drm/tegra: Implement buffer object cache (Thierry Reding, 2021-12-16; 4 files changed, -2/+84)

  This cache is used to avoid mapping and unmapping buffer objects unnecessarily. Mappings are cached per client and stay hot until the buffer object is destroyed.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
* drm/tegra: Implement correct DMA-BUF semantics (Thierry Reding, 2021-12-16; 2 files changed, -108/+58)

  DMA-BUF requires that each device that accesses a DMA-BUF attaches to it separately. To do so the host1x_bo_pin() and host1x_bo_unpin() functions need to be reimplemented so that they can return a mapping, which either represents an attachment or a map of the driver's own GEM object.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Plug potential memory leak (Thierry Reding, 2021-09-16; 1 file changed, -1/+3)

  The memory allocated for a DMA fence could be leaked if the code failed to allocate the waiter object. Make sure to release the fence allocation on failure.

  Reported-by: kernel test robot <lkp@intel.com>
  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu/host1x: fence: Make spinlock static (Dmitry Osipenko, 2021-09-16; 1 file changed, -1/+1)

  The DEFINE_SPINLOCK macro creates a global spinlock symbol that is visible to the whole kernel. This is unintended in the code, fix it.

  Fixes: 687db2207b1b ("gpu: host1x: Add DMA fence implementation")
  Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
  Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
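  A minimal before/after sketch (the lock name is illustrative):

      #include <linux/spinlock.h>

      /* Before (creates a symbol with external linkage, visible kernel-wide):
       *
       *      DEFINE_SPINLOCK(lock);
       *
       * After (file-local, which is all the fence code needs):
       */
      static DEFINE_SPINLOCK(lock);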
* gpu: host1x: debug: Dump DMASTART and DMAEND register (Thierry Reding, 2021-08-13; 2 files changed, -3/+21)

  Show the values of the DMASTART and DMAEND registers when dumping status to help with failure analysis.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: debug: Dump only relevant parts of CDMA push buffer (Thierry Reding, 2021-08-13; 1 file changed, -10/+7)

  Dumping the full CDMA push buffer takes a long time and isn't very useful since most of the contents are not relevant. Instead only show the CDMA push buffer entries associated with current jobs.

  While at it, tweak the indentation a bit to make the output more readable.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: debug: Use dma_addr_t more consistently (Thierry Reding, 2021-08-13; 1 file changed, -4/+4)

  The host1x debug code uses a mix of phys_addr_t, dma_addr_t and u32 to represent addresses. However, these addresses are always DMA addresses so use the appropriate type. This fixes some issues with how these addresses are displayed, because they could be truncated in some cases and not show the full address.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
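  A sketch of the printing side of this: %pad is the printk specifier for dma_addr_t (it takes a pointer to the value), so addresses above 4 GiB are not truncated the way a cast to u32 would truncate them. The function and labels below are illustrative:

      #include <linux/dma-mapping.h>
      #include <linux/seq_file.h>

      static void show_dma_window(struct seq_file *s, dma_addr_t start,
                                  dma_addr_t end)
      {
              seq_printf(s, "DMASTART %pad, DMAEND %pad\n", &start, &end);
      }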
* gpu: host1x: Add option to skip firewall for a job (Mikko Perttunen, 2021-08-10; 1 file changed, -8/+13)

  The new UAPI will have its own firewall, and we don't want to run the firewall in the Host1x driver for those jobs. As such, add a parameter to host1x_job_alloc to specify if we want to skip the firewall in the Host1x driver.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add support for syncpoint waits in CDMA pushbuffer (Mikko Perttunen, 2021-08-10; 9 files changed, -41/+199)

  Add support for inserting syncpoint waits in the CDMA pushbuffer. These waits need to be done in HOST1X class, while gathers submitted by the application execute in the engine class. Support is added by converting a job's gather list into a command list that can include both gathers and waits. When the job is submitted, these commands are pushed as the appropriate opcodes on the CDMA pushbuffer.

  Also supported are waits relative to the start of the job, which are useful for jobs doing multiple things with an engine that doesn't natively support pipelining.

  While at it, use 32-bit waits on chips that support them.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
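  A sketch of what such a mixed command list might look like; the type and field names are illustrative, not the driver's actual definitions:

      struct job_cmd {
              bool is_wait;
              union {
                      struct {
                              u32 words;              /* gather length */
                              dma_addr_t base;
                              u32 offset;
                      } gather;
                      struct {
                              u32 id;                 /* syncpoint ID */
                              u32 threshold;
                              bool relative;          /* relative to job start */
                      } wait;
              };
      };

      /* At submit time each entry becomes either a GATHER opcode or a
       * HOST1X-class syncpoint wait on the CDMA pushbuffer. */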
* gpu: host1x: Add job release callback (Mikko Perttunen, 2021-08-10; 1 file changed, -0/+3)

  Add a callback field to the job structure, to be called just before the job is to be freed. This allows the job's submitter to clean up any of its own state, such as decrementing runtime PM refcounts.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
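  A sketch of the mechanism with illustrative names (the real structure is host1x_job, per the message above):

      #include <linux/kref.h>
      #include <linux/slab.h>

      struct my_job {
              struct kref ref;
              void (*release)(struct my_job *job);    /* optional hook */
              void *user_data;
      };

      static void my_job_free(struct kref *kref)
      {
              struct my_job *job = container_of(kref, struct my_job, ref);

              if (job->release)
                      job->release(job);      /* submitter drops its own state */

              kfree(job);
      }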
* gpu: host1x: Add no-recovery mode (Mikko Perttunen, 2021-08-10; 5 files changed, -7/+71)

  Add a new property for jobs to enable or disable recovery, i.e. CPU increments of syncpoints to max value on job timeout. This allows for a more solid model for hung jobs, where userspace doesn't need to guess if a syncpoint increment happened because the job completed, or because the job timeout was triggered.

  On job timeout, we stop the channel, NOP all future jobs on the channel using the same syncpoint, mark the syncpoint as locked and resume the channel from the next job, if any. The future jobs are NOPed because, without the CPU increments, the value of the syncpoint is no longer synchronized, and any waiters would become confused if a future job incremented the syncpoint. The syncpoint is marked locked to ensure that any future jobs cannot increment the syncpoint either, until the application has recognized the situation and reallocated the syncpoint.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add DMA fence implementation (Mikko Perttunen, 2021-08-10; 5 files changed, -0/+193)

  Add an implementation of dma_fences based on syncpoints. Syncpoint interrupts are used to signal fences. Additionally, after software signaling has been enabled, a 30 second timeout is started. If the syncpoint threshold is not reached within this period, the fence is signalled with an -ETIMEDOUT error code. This is to allow fences that would never reach their syncpoint threshold to be cleaned up.

  The timeout can potentially be removed in the future after job tracking code has been refactored.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
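  A sketch of the timeout fallback described above; the type and function names are illustrative, while dma_fence_set_error()/dma_fence_signal() are the standard fence calls:

      #include <linux/dma-fence.h>
      #include <linux/jiffies.h>
      #include <linux/workqueue.h>

      struct syncpt_fence {
              struct dma_fence base;
              struct delayed_work timeout_work;
      };

      static void fence_timeout_work(struct work_struct *work)
      {
              struct syncpt_fence *f = container_of(work, struct syncpt_fence,
                                                    timeout_work.work);

              /* threshold never reached: fail the fence so it can be reaped */
              dma_fence_set_error(&f->base, -ETIMEDOUT);
              dma_fence_signal(&f->base);
      }

      static bool syncpt_fence_enable_signaling(struct dma_fence *fence)
      {
              struct syncpt_fence *f = container_of(fence, struct syncpt_fence, base);

              /* ... program the syncpoint threshold interrupt here ... */

              schedule_delayed_work(&f->timeout_work, msecs_to_jiffies(30 * 1000));
              return true;
      }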
* gpu: host1x: Split up client initalization and registration (Thierry Reding, 2021-05-17; 1 file changed, -6/+24)

  In some cases we may need to initialize the host1x client first before registering it. This commit adds a new helper that will do nothing but the initialization of the data structure. At the same time, the initialization is removed from the registration function.

  Note, however, that for simplicity we explicitly initialize the client when the host1x_client_register() function is called, as opposed to the low-level __host1x_client_register() function. This allows existing callers to remain unchanged.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Add early init and late exit callbacks (Thierry Reding, 2021-03-31; 1 file changed, -0/+31)

  These callbacks can be used by client drivers to run code during early init and during late exit. Early init callbacks are run prior to the regular init callbacks while late exit callbacks run after the regular exit callbacks.

  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Fix Tegra194 syncpt interrupt threshold (Jon Hunter, 2021-03-31; 1 file changed, -1/+1)

  Syncpoint interrupts are not working as expected on Tegra194. The problem is that the syncpoint interrupt threshold being used is the global interrupt threshold and not the virtual interrupt threshold. Fix this by using the virtual interrupt threshold, which aligns with downstream.

  Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Assign intr waiter inside lock (Mikko Perttunen, 2021-03-31; 1 file changed, -2/+3)

  Move the assignment of the ref out-pointer in host1x_intr_add_action to happen within the spinlock. With the current arrangement, it is possible for the waiter to complete before the assignment has happened, which breaks horribly if the waiter completion callback tries to use the reference.

  In practice, there is currently no situation where this issue can manifest -- it was first noticed with the upcoming DMA fence implementation patches. As such this doesn't need to be backported.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
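  A sketch of the ordering fix; the structures and field names are illustrative, and the point is simply that *ref is written before the lock is released:

      #include <linux/list.h>
      #include <linux/spinlock.h>

      struct intr_state {
              spinlock_t lock;
              struct list_head wait_head;
      };

      struct waiter {
              struct list_head list;
      };

      static void add_waiter(struct intr_state *intr, struct waiter *waiter,
                             void **ref)
      {
              unsigned long flags;

              spin_lock_irqsave(&intr->lock, flags);
              list_add_tail(&waiter->list, &intr->wait_head);
              if (ref)
                      *ref = waiter;  /* previously assigned after unlocking */
              spin_unlock_irqrestore(&intr->lock, flags);
      }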
* gpu: host1x: Reserve VBLANK syncpoints at initialization (Mikko Perttunen, 2021-03-31; 3 files changed, -1/+46)

  On T20-T148 chips, the bootloader can set up a boot splash screen with DC configured to increment syncpoint 26/27 at VBLANK. Because of this we shouldn't allow these syncpoints to be allocated until DC has been reset and will no longer increment them in the background.

  As such, on these chips, reserve those two syncpoints at initialization, and only mark them free once the DC driver has indicated it's safe to do so.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Reset max value when freeing a syncpoint (Mikko Perttunen, 2021-03-31; 1 file changed, -0/+2)

  With job recovery becoming optional, syncpoints may have a mismatch between their value and max value when freed. As such, when freeing, set the max value to the current value of the syncpoint so that it is in a sane state for the next user.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
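  A sketch of the release path (the function and field names approximate the description above; host1x_syncpt_read() reads the current value):

      static void syncpt_release(struct kref *ref)
      {
              struct host1x_syncpt *sp = container_of(ref, struct host1x_syncpt, ref);

              /* fold max back to the current value so the next user starts clean */
              atomic_set(&sp->max_val, host1x_syncpt_read(sp));

              /* ... return the syncpoint to the free pool ... */
      }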
* gpu: host1x: Cleanup and refcounting for syncpoints (Mikko Perttunen, 2021-03-31; 8 files changed, -39/+76)

  Add reference counting for allocated syncpoints to allow keeping them allocated while jobs are referencing them. Additionally, clean up various places using syncpoint IDs to use host1x_syncpt pointers instead.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
* gpu: host1x: Use HW-equivalent syncpoint expiration check (Mikko Perttunen, 2021-03-30; 1 file changed, -49/+2)

  Make syncpoint expiration checks always use the same logic used by the hardware. This ensures that there are no race conditions that could occur because of the hardware triggering a syncpoint interrupt and then the driver disagreeing.

  One situation where this could occur is if a job incremented a syncpoint too many times -- then the hardware would trigger an interrupt, but the driver would assume that a syncpoint value greater than the syncpoint's max value is in the future, and not clean up the job.

  Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
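  The essence of a hardware-equivalent check is a wraparound-safe comparison in 32-bit modular arithmetic; this is a sketch, while the real helper operates on the syncpoint's cached values:

      static bool syncpt_is_expired(u32 current_val, u32 thresh)
      {
              /* true once current_val has reached or passed thresh, even
               * across a 32-bit wraparound */
              return (s32)(current_val - thresh) >= 0;
      }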