author     Linus Torvalds <torvalds@linux-foundation.org>  2023-04-26 01:12:15 +0200
committer  Linus Torvalds <torvalds@linux-foundation.org>  2023-04-26 01:12:15 +0200
commit     c8cc58e289ed3b5bc50258f52776cf3dfa3bad66 (patch)
tree       fab95a9e92dd1b7ddec386294365ebd2ba130ec3 /drivers/accel
parent     Merge tag 'slab-for-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/vba... (diff)
parent     Merge tag 'exynos-drm-next-for-v6.4-2' of git://git.kernel.org/pub/scm/linux/... (diff)
download   linux-c8cc58e289ed3b5bc50258f52776cf3dfa3bad66.tar.xz
           linux-c8cc58e289ed3b5bc50258f52776cf3dfa3bad66.zip
Merge tag 'drm-next-2023-04-24' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"There is a new Qualcomm accel driver for their QAIC, dma-fence got a
deadline feature added, lots of refactoring around fbdev emulation,
and the usual pre-release hw enablements from AMD and Intel and fixes
everywhere.
New drivers:
- add QAIC acceleration driver
dma-buf:
- constify kobj_type structs
- Reject prime DMA-Buf attachment if get_sg_table is missing.
fbdev:
- cmdline parser fixes
- implement fbdev emulation for GEM DMA drivers (sketch after the
shortlog below)
- always use shadow buffer in fbdev emulation helpers
dma-fence:
- add deadline hint to fences (usage sketch after this message)
- signal private stub fence
core:
- improve DisplayID 2.0 and EDID parsing
- add gem eviction function + callback
- prep to convert shmem helper to GEM resv lock
- move suballocator from radeon/amdgpu to core for Xe
- HPD polling fixes
- Documentation improvements
- Add atomic enable_plane callback
- use tgid instead of pid for client tracking
- DP: Add SDP Error Detection Configuration Register
- Add prime import/export to vram-helper
- use pci aperture helpers in more drivers
panel:
- Radxa 8/10HD support
- Samsung AMD495QA01 support
- Elida KD50T048A
- Sony TD4353
- Novatek NT36523
- STARRY 2081101QFH032011-53G
- B133UAN01.0
- AUO NE135FBM-N41
i915:
- More MTL enabling
- fix s/r problems with MEI/PXP
- Implement fb_dirty for PSR,FBC,DRRS fixes
- Fix eDP+DSI dual panel systems
- Fix issue #6333: "list_add corruption" and full system lockup from
performance monitoring
- Don't use stolen memory or BAR for ring buffers on LLC platforms
- Make sure DSM size has correct 1MiB granularity on Gen12+
- Whitelist COMMON_SLICE_CHICKEN3 for UMD access on Gen12+
- Add engine TLB invalidation for Meteorlake
- Fix GSC races on driver load/unload on Meteorlake+
- Make kobj_type structures constant
- Move fd_install after last use of fence
- wm/vblank refactoring
- display code refactoring
- Create GSC submission targeting HDCP and PXP usages on MTL+
- Enable HDCP2.x via GSC CS
- Fix context runtime accounting on sysfs fdinfo for heavy workloads
- Use i915 instead of dev_priv inside the file_priv structure
- Replace fake flex-array with flexible-array member
amdgpu:
- Make kobj structures const
- Generalize dmabuf import to work with KFD
- Add capped/uncapped workload handling for supported APUs
- Expose additional memory stats via fdinfo
- Register vga_switcheroo for apple-gmux
- Initial NBIO7.9, GC 9.4.3, GFXHUB 1.2, MMHUB 1.8 support
- Initial DC FAM infrastructure
- Link DC backlight to connector device rather than PCI device
- Add sysfs nodes for secondary VCN clocks
amdkfd:
- Make kobj structures const
- Support for exporting buffers via dmabuf
- Multi-VMA page migration fixes
- initial GC 9.4.3 support
radeon:
- iMac fix
- convert to client based fbdev emulation
habanalabs:
- Add opcodes to the CS ioctl to allow the user to stall/resume specific
engines inside Gaudi2.
- New INFO ioctl opcode to report the amount of device memory that the
driver and f/w reserve for themselves.
- New INFO ioctl opcode to report a bit-mask of the available rotator
engines
- New INFO ioctl opcode to report the address of the f/w register that
should be used to trigger interrupts
- Two new INFO ioctl opcodes to fetch information on h/w and f/w
events
- Enable graceful reset mechanism for compute-reset.
- Align to the latest firmware specs.
- Enforce the release order of the compute device and dma-buf.
msm:
- UBWC decoder programming rework
- SM8550, SM8450 bindings update
- uapi C++ fix
- a3xx and a4xx devfreq support
- GPU and GEM updates to avoid allocations which could trigger
reclaim (shrinker) in fence signaling path
- dma-fence deadline hint support and wait-boost
- a640/650 speed bin support
cirrus:
- convert to regular atomic helpers
- add damage clipping
mediatek:
- 10-bit overlay support
- mt8195 support
- Only trigger DRM HPD events if bridge is attached
- Change the AUX retry count when receiving AUX_DEFER
rockchip:
- add 4K support
vc4:
- use drm_gem_objects
virtio:
- allow KMS support to be disabled
- add damage clipping
vmwgfx:
- buffer object lifetime fixes
exynos:
- move MIPI DSI driver to drm bridge for iMX sharing
- use kernel fbdev emulation
panfrost:
- add support for mali MT81xx devices
- add speed binning support
lima:
- add usage stats
tegra:
- fbdev client conversion
vkms:
- Add primary plane positioning support"
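The dma-fence deadline feature mentioned in the message is exposed through a single hint entry point. A minimal sketch of a caller, assuming a fence that gates a page flip (wait_for_flip_fence is an illustrative name; dma_fence_set_deadline() and dma_fence_wait_timeout() are the in-tree APIs):

    #include <linux/dma-fence.h>
    #include <linux/jiffies.h>

    /* Hint the expected vblank time before blocking, so the driver
     * backing the fence may boost clocks if it would otherwise miss
     * the deadline. Drivers without a set_deadline op simply ignore it.
     */
    static long wait_for_flip_fence(struct dma_fence *fence, ktime_t vblank_time)
    {
        dma_fence_set_deadline(fence, vblank_time);

        /* Interruptible wait, up to 100 ms. */
        return dma_fence_wait_timeout(fence, true, msecs_to_jiffies(100));
    }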
* tag 'drm-next-2023-04-24' of git://anongit.freedesktop.org/drm/drm: (1495 commits)
drm/i915/dp_mst: Fix active port PLL selection for secondary MST streams
drm/exynos: Implement fbdev emulation as in-kernel client
drm/exynos: Initialize fbdev DRM client
drm/exynos: Remove fb_helper from struct exynos_drm_private
drm/exynos: Remove struct exynos_drm_fbdev
drm/exynos: Remove exynos_gem from struct exynos_drm_fbdev
drm/i915: Fix memory leaks in i915 selftests
drm/i915: Make intel_get_crtc_new_encoder() less oopsy
drm/i915/gt: Avoid out-of-bounds access when loading HuC
drm/amdgpu: add some basic elements for multiple XCD case
drm/amdgpu: move vmhub out of amdgpu_ring_funcs (v4)
Revert "drm/amdgpu: enable ras for mp0 v13_0_10 on SRIOV"
drm/amdgpu: add common ip block for GC 9.4.3
drm/amd/display: Add logging when DP link training Clock recovery is Successful
drm/amdgpu: add common early init support for GC 9.4.3
drm/amdgpu: switch to v9_4_3 gfx_funcs callbacks for GC 9.4.3
drm/amd/display: Add logging when setting DP sink power state fails
drm/amdkfd: Add gfx_target_version for GC 9.4.3
drm/amdkfd: Enable HW_UPDATE_RPTR on GC 9.4.3
drm/amdgpu: reserve the old gc_11_0_*_mes.bin
...
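For the fbdev item called out in the message ("implement fbdev emulation for GEM DMA drivers"), the consumer-side change is a one-call setup. A sketch, assuming a driver already built on the GEM DMA helpers (example_probe_tail is an illustrative name; drm_fbdev_dma_setup() is the new helper):

    #include <drm/drm_drv.h>
    #include <drm/drm_fbdev_dma.h>

    static int example_probe_tail(struct drm_device *drm)
    {
        int ret;

        ret = drm_dev_register(drm, 0);
        if (ret)
            return ret;

        /* Spawns the in-kernel fbdev client; 32 is the preferred bpp. */
        drm_fbdev_dma_setup(drm, 32);

        return 0;
    }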
Diffstat (limited to 'drivers/accel')
46 files changed, 10373 insertions(+), 3875 deletions(-)
diff --git a/drivers/accel/Kconfig b/drivers/accel/Kconfig
index c437206aa3f1..64065fb8922b 100644
--- a/drivers/accel/Kconfig
+++ b/drivers/accel/Kconfig
@@ -26,5 +26,6 @@ menuconfig DRM_ACCEL
 
 source "drivers/accel/habanalabs/Kconfig"
 source "drivers/accel/ivpu/Kconfig"
+source "drivers/accel/qaic/Kconfig"
 
 endif
diff --git a/drivers/accel/Makefile b/drivers/accel/Makefile
index f22fd44d586b..ab3df932937f 100644
--- a/drivers/accel/Makefile
+++ b/drivers/accel/Makefile
@@ -2,3 +2,4 @@
 
 obj-$(CONFIG_DRM_ACCEL_HABANALABS) += habanalabs/
 obj-$(CONFIG_DRM_ACCEL_IVPU) += ivpu/
+obj-$(CONFIG_DRM_ACCEL_QAIC) += qaic/
diff --git a/drivers/accel/habanalabs/common/command_buffer.c b/drivers/accel/habanalabs/common/command_buffer.c
index 3a0535ac28b1..6e09f48750a0 100644
--- a/drivers/accel/habanalabs/common/command_buffer.c
+++ b/drivers/accel/habanalabs/common/command_buffer.c
@@ -45,20 +45,29 @@ static int cb_map_mem(struct hl_ctx *ctx, struct hl_cb *cb)
 	}
 
 	mutex_lock(&hdev->mmu_lock);
+
 	rc = hl_mmu_map_contiguous(ctx, cb->virtual_addr, cb->bus_address, cb->roundup_size);
 	if (rc) {
 		dev_err(hdev->dev, "Failed to map VA %#llx to CB\n", cb->virtual_addr);
-		goto err_va_umap;
+		goto err_va_pool_free;
 	}
+
+	rc = hl_mmu_invalidate_cache(hdev, false, MMU_OP_USERPTR | MMU_OP_SKIP_LOW_CACHE_INV);
+	if (rc)
+		goto err_mmu_unmap;
+
 	mutex_unlock(&hdev->mmu_lock);
 
 	cb->is_mmu_mapped = true;
-	return rc;
 
-err_va_umap:
+	return 0;
+
+err_mmu_unmap:
+	hl_mmu_unmap_contiguous(ctx, cb->virtual_addr, cb->roundup_size);
+err_va_pool_free:
 	mutex_unlock(&hdev->mmu_lock);
 	gen_pool_free(ctx->cb_va_pool, cb->virtual_addr, cb->roundup_size);
+
 	return rc;
 }
diff --git a/drivers/accel/habanalabs/common/command_submission.c b/drivers/accel/habanalabs/common/command_submission.c
index 8270db0a72a2..af9d2e22c6e7 100644
--- a/drivers/accel/habanalabs/common/command_submission.c
+++ b/drivers/accel/habanalabs/common/command_submission.c
@@ -14,10 +14,10 @@
 
 #define HL_CS_FLAGS_TYPE_MASK	(HL_CS_FLAGS_SIGNAL | HL_CS_FLAGS_WAIT | \
 			HL_CS_FLAGS_COLLECTIVE_WAIT | HL_CS_FLAGS_RESERVE_SIGNALS_ONLY | \
 			HL_CS_FLAGS_UNRESERVE_SIGNALS_ONLY | HL_CS_FLAGS_ENGINE_CORE_COMMAND | \
-			HL_CS_FLAGS_FLUSH_PCI_HBW_WRITES)
+			HL_CS_FLAGS_ENGINES_COMMAND | HL_CS_FLAGS_FLUSH_PCI_HBW_WRITES)
 
-#define MAX_TS_ITER_NUM 10
+#define MAX_TS_ITER_NUM 100
 
 /**
  * enum hl_cs_wait_status - cs wait status
@@ -657,7 +657,7 @@ static inline void cs_release_sob_reset_handler(struct hl_device *hdev,
 	/*
 	 * we get refcount upon reservation of signals or signal/wait cs for the
 	 * hw_sob object, and need to put it when the first staged cs
-	 * (which cotains the encaps signals) or cs signal/wait is completed.
+	 * (which contains the encaps signals) or cs signal/wait is completed.
 	 */
 	if ((hl_cs_cmpl->type == CS_TYPE_SIGNAL) ||
 			(hl_cs_cmpl->type == CS_TYPE_WAIT) ||
@@ -1082,9 +1082,8 @@ static void wake_pending_user_interrupt_threads(struct hl_user_interrupt *interrupt)
 {
 	struct hl_user_pending_interrupt *pend, *temp;
-	unsigned long flags;
 
-	spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+	spin_lock(&interrupt->wait_list_lock);
 	list_for_each_entry_safe(pend, temp, &interrupt->wait_list_head, wait_list_node) {
 		if (pend->ts_reg_info.buf) {
 			list_del(&pend->wait_list_node);
@@ -1095,7 +1094,7 @@ wake_pending_user_interrupt_threads(struct hl_user_interrupt *interrupt)
 			complete_all(&pend->fence.completion);
 		}
 	}
-	spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+	spin_unlock(&interrupt->wait_list_lock);
 }
 
 void hl_release_pending_user_interrupts(struct hl_device *hdev)
@@ -1168,6 +1167,22 @@ static void cs_completion(struct work_struct *work)
 		hl_complete_job(hdev, job);
 }
 
+u32 hl_get_active_cs_num(struct hl_device *hdev)
+{
+	u32 active_cs_num = 0;
+	struct hl_cs *cs;
+
+	spin_lock(&hdev->cs_mirror_lock);
+
+	list_for_each_entry(cs, &hdev->cs_mirror_list, mirror_node)
+		if (!cs->completed)
+			active_cs_num++;
+
+	spin_unlock(&hdev->cs_mirror_lock);
+
+	return active_cs_num;
+}
+
 static int validate_queue_index(struct hl_device *hdev,
 				struct hl_cs_chunk *chunk,
 				enum hl_queue_type *queue_type,
@@ -1304,6 +1319,8 @@ static enum hl_cs_type hl_cs_get_cs_type(u32 cs_type_flags)
 		return CS_UNRESERVE_SIGNALS;
 	else if (cs_type_flags & HL_CS_FLAGS_ENGINE_CORE_COMMAND)
 		return CS_TYPE_ENGINE_CORE;
+	else if (cs_type_flags & HL_CS_FLAGS_ENGINES_COMMAND)
+		return CS_TYPE_ENGINES;
 	else if (cs_type_flags & HL_CS_FLAGS_FLUSH_PCI_HBW_WRITES)
 		return CS_TYPE_FLUSH_PCI_HBW_WRITES;
 	else
@@ -2429,10 +2446,13 @@ out:
 static int cs_ioctl_engine_cores(struct hl_fpriv *hpriv, u64 engine_cores,
 						u32 num_engine_cores, u32 core_command)
 {
-	int rc;
 	struct hl_device *hdev = hpriv->hdev;
 	void __user *engine_cores_arr;
 	u32 *cores;
+	int rc;
+
+	if (!hdev->asic_prop.supports_engine_modes)
+		return -EPERM;
 
 	if (!num_engine_cores || num_engine_cores > hdev->asic_prop.num_engine_cores) {
 		dev_err(hdev->dev, "Number of engine cores %d is invalid\n", num_engine_cores);
@@ -2461,6 +2481,48 @@ static int cs_ioctl_engine_cores(struct hl_fpriv *hpriv, u64 engine_cores,
 	return rc;
 }
 
+static int cs_ioctl_engines(struct hl_fpriv *hpriv, u64 engines_arr_user_addr,
+						u32 num_engines, enum hl_engine_command command)
+{
+	struct hl_device *hdev = hpriv->hdev;
+	u32 *engines, max_num_of_engines;
+	void __user *engines_arr;
+	int rc;
+
+	if (!hdev->asic_prop.supports_engine_modes)
+		return -EPERM;
+
+	if (command >= HL_ENGINE_COMMAND_MAX) {
+		dev_err(hdev->dev, "Engine command is invalid\n");
+		return -EINVAL;
+	}
+
+	max_num_of_engines = hdev->asic_prop.max_num_of_engines;
+	if (command == HL_ENGINE_CORE_RUN || command == HL_ENGINE_CORE_HALT)
+		max_num_of_engines = hdev->asic_prop.num_engine_cores;
+
+	if (!num_engines || num_engines > max_num_of_engines) {
+		dev_err(hdev->dev, "Number of engines %d is invalid\n", num_engines);
+		return -EINVAL;
+	}
+
+	engines_arr = (void __user *) (uintptr_t) engines_arr_user_addr;
+	engines = kmalloc_array(num_engines, sizeof(u32), GFP_KERNEL);
+	if (!engines)
+		return -ENOMEM;
+
+	if (copy_from_user(engines, engines_arr, num_engines * sizeof(u32))) {
+		dev_err(hdev->dev, "Failed to copy engine-ids array from user\n");
+		kfree(engines);
+		return -EFAULT;
+	}
+
+	rc = hdev->asic_funcs->set_engines(hdev, engines, num_engines, command);
+	kfree(engines);
+
+	return rc;
+}
+
 static int cs_ioctl_flush_pci_hbw_writes(struct hl_fpriv *hpriv)
 {
 	struct hl_device *hdev = hpriv->hdev;
@@ -2532,6 +2594,10 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
 		rc = cs_ioctl_engine_cores(hpriv, args->in.engine_cores,
 				args->in.num_engine_cores, args->in.core_command);
 		break;
+	case CS_TYPE_ENGINES:
+		rc = cs_ioctl_engines(hpriv, args->in.engines,
+				args->in.num_engines, args->in.engine_command);
+		break;
 	case CS_TYPE_FLUSH_PCI_HBW_WRITES:
 		rc = cs_ioctl_flush_pci_hbw_writes(hpriv);
 		break;
@@ -3143,8 +3209,9 @@ static int ts_buff_get_kernel_ts_record(struct hl_mmap_mem_buf *buf,
 	struct hl_user_pending_interrupt *cb_last =
 			(struct hl_user_pending_interrupt *)ts_buff->kernel_buff_address +
 			(ts_buff->kernel_buff_size / sizeof(struct hl_user_pending_interrupt));
-	unsigned long flags, iter_counter = 0;
+	unsigned long iter_counter = 0;
 	u64 current_cq_counter;
+	ktime_t timestamp;
 
 	/* Validate ts_offset not exceeding last max */
 	if (requested_offset_record >= cb_last) {
@@ -3153,8 +3220,10 @@ static int ts_buff_get_kernel_ts_record(struct hl_mmap_mem_buf *buf,
 		return -EINVAL;
 	}
 
+	timestamp = ktime_get();
+
 start_over:
-	spin_lock_irqsave(wait_list_lock, flags);
+	spin_lock(wait_list_lock);
 
 	/* Unregister only if we didn't reach the target value
 	 * since in this case there will be no handling in irq context
@@ -3165,7 +3234,7 @@ start_over:
 		current_cq_counter = *requested_offset_record->cq_kernel_addr;
 		if (current_cq_counter < requested_offset_record->cq_target_value) {
 			list_del(&requested_offset_record->wait_list_node);
-			spin_unlock_irqrestore(wait_list_lock, flags);
+			spin_unlock(wait_list_lock);
 
 			hl_mmap_mem_buf_put(requested_offset_record->ts_reg_info.buf);
 			hl_cb_put(requested_offset_record->ts_reg_info.cq_cb);
@@ -3176,13 +3245,14 @@ start_over:
 			dev_dbg(buf->mmg->dev,
 				"ts node in middle of irq handling\n");
-			/* irq handling in the middle give it time to finish */
-			spin_unlock_irqrestore(wait_list_lock, flags);
-			usleep_range(1, 10);
+			/* irq thread handling in the middle give it time to finish */
+			spin_unlock(wait_list_lock);
+			usleep_range(100, 1000);
 			if (++iter_counter == MAX_TS_ITER_NUM) {
 				dev_err(buf->mmg->dev,
-					"handling registration interrupt took too long!!\n");
-				return -EINVAL;
+					"Timestamp offset processing reached timeout of %lld ms\n",
+					ktime_ms_delta(ktime_get(), timestamp));
+				return -EAGAIN;
 			}
 
 			goto start_over;
@@ -3197,7 +3267,7 @@ start_over:
 			(u64 *) cq_cb->kernel_address + cq_offset;
 	requested_offset_record->cq_target_value = target_value;
 
-	spin_unlock_irqrestore(wait_list_lock, flags);
+	spin_unlock(wait_list_lock);
 	}
 
 	*pend = requested_offset_record;
@@ -3217,7 +3287,7 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
 	struct hl_user_pending_interrupt *pend;
 	struct hl_mmap_mem_buf *buf;
 	struct hl_cb *cq_cb;
-	unsigned long timeout, flags;
+	unsigned long timeout;
 	long completion_rc;
 	int rc = 0;
@@ -3264,7 +3334,7 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
 		pend->cq_target_value = target_value;
 	}
 
-	spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+	spin_lock(&interrupt->wait_list_lock);
 
 	/* We check for completion value as interrupt could have been received
 	 * before we added the node to the wait list
@@ -3272,7 +3342,7 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
 	if (*pend->cq_kernel_addr >= target_value) {
 		if (register_ts_record)
 			pend->ts_reg_info.in_use = 0;
-		spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+		spin_unlock(&interrupt->wait_list_lock);
 
 		*status = HL_WAIT_CS_STATUS_COMPLETED;
@@ -3284,7 +3354,7 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
 			goto set_timestamp;
 		}
 	} else if (!timeout_us) {
-		spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+		spin_unlock(&interrupt->wait_list_lock);
 		*status = HL_WAIT_CS_STATUS_BUSY;
 		pend->fence.timestamp = ktime_get();
 		goto set_timestamp;
@@ -3309,7 +3379,7 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
 		pend->ts_reg_info.in_use = 1;
 
 	list_add_tail(&pend->wait_list_node, &interrupt->wait_list_head);
-	spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+	spin_unlock(&interrupt->wait_list_lock);
 
 	if (register_ts_record) {
 		rc = *status = HL_WAIT_CS_STATUS_COMPLETED;
@@ -3353,9 +3423,9 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
 	 * for ts record, the node will be deleted in the irq handler after
 	 * we reach the target value.
 	 */
-	spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+	spin_lock(&interrupt->wait_list_lock);
 	list_del(&pend->wait_list_node);
-	spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+	spin_unlock(&interrupt->wait_list_lock);
 
 set_timestamp:
 	*timestamp = ktime_to_ns(pend->fence.timestamp);
@@ -3383,7 +3453,7 @@ static int _hl_interrupt_wait_ioctl_user_addr(struct hl_device *hdev, struct hl_
 			u64 *timestamp)
 {
 	struct hl_user_pending_interrupt *pend;
-	unsigned long timeout, flags;
+	unsigned long timeout;
 	u64 completion_value;
 	long completion_rc;
 	int rc = 0;
@@ -3403,9 +3473,9 @@ static int _hl_interrupt_wait_ioctl_user_addr(struct hl_device *hdev, struct hl_
 	/* Add pending user interrupt to relevant list for the interrupt
 	 * handler to monitor
 	 */
-	spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+	spin_lock(&interrupt->wait_list_lock);
 	list_add_tail(&pend->wait_list_node, &interrupt->wait_list_head);
-	spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+	spin_unlock(&interrupt->wait_list_lock);
 
 	/* We check for completion value as interrupt could have been received
 	 * before we added the node to the wait list
@@ -3436,14 +3506,14 @@ wait_again:
 	 * If comparison fails, keep waiting until timeout expires
 	 */
 	if (completion_rc > 0) {
-		spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+		spin_lock(&interrupt->wait_list_lock);
 		/* reinit_completion must be called before we check for user
 		 * completion value, otherwise, if interrupt is received after
 		 * the comparison and before the next wait_for_completion,
 		 * we will reach timeout and fail
 		 */
 		reinit_completion(&pend->fence.completion);
-		spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+		spin_unlock(&interrupt->wait_list_lock);
 
 		if (copy_from_user(&completion_value, u64_to_user_ptr(user_address), 8)) {
 			dev_err(hdev->dev, "Failed to copy completion value from user\n");
@@ -3480,9 +3550,9 @@ wait_again:
 	}
 
 remove_pending_user_interrupt:
-	spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+	spin_lock(&interrupt->wait_list_lock);
 	list_del(&pend->wait_list_node);
-	spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+	spin_unlock(&interrupt->wait_list_lock);
 
 	*timestamp = ktime_to_ns(pend->fence.timestamp);
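The command_submission.c hunks above route a new CS_TYPE_ENGINES flavor through the existing CS ioctl: instead of job chunks, the call carries a user-space array of engine IDs plus one command. An illustrative user-space sketch (halt_engines is a made-up helper; the union, flag, and field names follow the kernel code above and the habanalabs_accel.h uapi header, which should be consulted for the authoritative layout):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/habanalabs_accel.h>

    static int halt_engines(int fd, const uint32_t *engine_ids, uint32_t num)
    {
        union hl_cs_args args;

        memset(&args, 0, sizeof(args));
        /* The pointer travels as a u64, exactly as the kernel side
         * casts it back with (void __user *)(uintptr_t). */
        args.in.engines = (uint64_t) (uintptr_t) engine_ids;
        args.in.num_engines = num;
        args.in.engine_command = HL_ENGINE_CORE_HALT;
        args.in.cs_flags = HL_CS_FLAGS_ENGINES_COMMAND;

        return ioctl(fd, HL_IOCTL_CS, &args);
    }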
diff --git a/drivers/accel/habanalabs/common/debugfs.c b/drivers/accel/habanalabs/common/debugfs.c
index 945c0e6758ca..22dd17c077c0 100644
--- a/drivers/accel/habanalabs/common/debugfs.c
+++ b/drivers/accel/habanalabs/common/debugfs.c
@@ -258,7 +258,7 @@ static int vm_show(struct seq_file *s, void *data)
 	if (!dev_entry->hdev->mmu_enable)
 		return 0;
 
-	spin_lock(&dev_entry->ctx_mem_hash_spinlock);
+	mutex_lock(&dev_entry->ctx_mem_hash_mutex);
 
 	list_for_each_entry(ctx, &dev_entry->ctx_mem_hash_list, debugfs_list) {
 		once = false;
@@ -329,7 +329,7 @@ static int vm_show(struct seq_file *s, void *data)
 	}
 
-	spin_unlock(&dev_entry->ctx_mem_hash_spinlock);
+	mutex_unlock(&dev_entry->ctx_mem_hash_mutex);
 
 	ctx = hl_get_compute_ctx(dev_entry->hdev);
 	if (ctx) {
@@ -1583,209 +1583,216 @@ static const struct file_operations hl_debugfs_fops = {
 	.release = single_release,
 };
 
-static void add_secured_nodes(struct hl_dbg_device_entry *dev_entry)
+static void add_secured_nodes(struct hl_dbg_device_entry *dev_entry, struct dentry *root)
 {
 	debugfs_create_u8("i2c_bus", 0644,
-				dev_entry->root,
+				root,
 				&dev_entry->i2c_bus);
 
 	debugfs_create_u8("i2c_addr", 0644,
-				dev_entry->root,
+				root,
 				&dev_entry->i2c_addr);
 
 	debugfs_create_u8("i2c_reg", 0644,
-				dev_entry->root,
+				root,
 				&dev_entry->i2c_reg);
 
 	debugfs_create_u8("i2c_len", 0644,
-				dev_entry->root,
+				root,
 				&dev_entry->i2c_len);
 
 	debugfs_create_file("i2c_data", 0644,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_i2c_data_fops);
 
 	debugfs_create_file("led0", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_led0_fops);
 
 	debugfs_create_file("led1", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_led1_fops);
 
 	debugfs_create_file("led2", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_led2_fops);
 }
 
-void hl_debugfs_add_device(struct hl_device *hdev)
+static void add_files_to_device(struct hl_device *hdev, struct hl_dbg_device_entry *dev_entry,
+				struct dentry *root)
 {
-	struct hl_dbg_device_entry *dev_entry = &hdev->hl_debugfs;
 	int count = ARRAY_SIZE(hl_debugfs_list);
 	struct hl_debugfs_entry *entry;
 	int i;
 
-	dev_entry->hdev = hdev;
-	dev_entry->entry_arr = kmalloc_array(count,
-					sizeof(struct hl_debugfs_entry),
-					GFP_KERNEL);
-	if (!dev_entry->entry_arr)
-		return;
-
-	dev_entry->data_dma_blob_desc.size = 0;
-	dev_entry->data_dma_blob_desc.data = NULL;
-	dev_entry->mon_dump_blob_desc.size = 0;
-	dev_entry->mon_dump_blob_desc.data = NULL;
-
-	INIT_LIST_HEAD(&dev_entry->file_list);
-	INIT_LIST_HEAD(&dev_entry->cb_list);
-	INIT_LIST_HEAD(&dev_entry->cs_list);
-	INIT_LIST_HEAD(&dev_entry->cs_job_list);
-	INIT_LIST_HEAD(&dev_entry->userptr_list);
-	INIT_LIST_HEAD(&dev_entry->ctx_mem_hash_list);
-	mutex_init(&dev_entry->file_mutex);
-	init_rwsem(&dev_entry->state_dump_sem);
-	spin_lock_init(&dev_entry->cb_spinlock);
-	spin_lock_init(&dev_entry->cs_spinlock);
-	spin_lock_init(&dev_entry->cs_job_spinlock);
-	spin_lock_init(&dev_entry->userptr_spinlock);
-	spin_lock_init(&dev_entry->ctx_mem_hash_spinlock);
-
-	dev_entry->root = debugfs_create_dir(dev_name(hdev->dev),
-						hl_debug_root);
-
 	debugfs_create_x64("memory_scrub_val", 0644,
-				dev_entry->root,
+				root,
 				&hdev->memory_scrub_val);
 
 	debugfs_create_file("memory_scrub", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_mem_scrub_fops);
 
 	debugfs_create_x64("addr", 0644,
-				dev_entry->root,
+				root,
 				&dev_entry->addr);
 
 	debugfs_create_file("data32", 0644,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_data32b_fops);
 
 	debugfs_create_file("data64", 0644,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_data64b_fops);
 
 	debugfs_create_file("set_power_state", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_power_fops);
 
 	debugfs_create_file("device", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_device_fops);
 
 	debugfs_create_file("clk_gate", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_clk_gate_fops);
 
 	debugfs_create_file("stop_on_err", 0644,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_stop_on_err_fops);
 
 	debugfs_create_file("dump_security_violations", 0644,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_security_violations_fops);
 
 	debugfs_create_file("dump_razwi_events", 0644,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_razwi_check_fops);
 
 	debugfs_create_file("dma_size", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_dma_size_fops);
 
 	debugfs_create_blob("data_dma", 0400,
-				dev_entry->root,
+				root,
 				&dev_entry->data_dma_blob_desc);
 
 	debugfs_create_file("monitor_dump_trig", 0200,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_monitor_dump_fops);
 
 	debugfs_create_blob("monitor_dump", 0400,
-				dev_entry->root,
+				root,
 				&dev_entry->mon_dump_blob_desc);
 
 	debugfs_create_x8("skip_reset_on_timeout", 0644,
-				dev_entry->root,
+				root,
 				&hdev->reset_info.skip_reset_on_timeout);
 
 	debugfs_create_file("state_dump", 0600,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_state_dump_fops);
 
 	debugfs_create_file("timeout_locked", 0644,
-				dev_entry->root,
+				root,
 				dev_entry,
 				&hl_timeout_locked_fops);
 
 	debugfs_create_u32("device_release_watchdog_timeout", 0644,
-				dev_entry->root,
+				root,
 				&hdev->device_release_watchdog_timeout_sec);
 
 	for (i = 0, entry = dev_entry->entry_arr ; i < count ; i++, entry++) {
 		debugfs_create_file(hl_debugfs_list[i].name, 0444,
-					dev_entry->root,
+					root,
 					entry,
 					&hl_debugfs_fops);
 		entry->info_ent = &hl_debugfs_list[i];
 		entry->dev_entry = dev_entry;
 	}
+}
+
+void hl_debugfs_add_device(struct hl_device *hdev)
+{
+	struct hl_dbg_device_entry *dev_entry = &hdev->hl_debugfs;
+	int count = ARRAY_SIZE(hl_debugfs_list);
+
+	dev_entry->hdev = hdev;
+	dev_entry->entry_arr = kmalloc_array(count,
+					sizeof(struct hl_debugfs_entry),
+					GFP_KERNEL);
+	if (!dev_entry->entry_arr)
+		return;
+
+	dev_entry->data_dma_blob_desc.size = 0;
+	dev_entry->data_dma_blob_desc.data = NULL;
+	dev_entry->mon_dump_blob_desc.size = 0;
+	dev_entry->mon_dump_blob_desc.data = NULL;
+
+	INIT_LIST_HEAD(&dev_entry->file_list);
+	INIT_LIST_HEAD(&dev_entry->cb_list);
+	INIT_LIST_HEAD(&dev_entry->cs_list);
+	INIT_LIST_HEAD(&dev_entry->cs_job_list);
+	INIT_LIST_HEAD(&dev_entry->userptr_list);
+	INIT_LIST_HEAD(&dev_entry->ctx_mem_hash_list);
+	mutex_init(&dev_entry->file_mutex);
+	init_rwsem(&dev_entry->state_dump_sem);
+	spin_lock_init(&dev_entry->cb_spinlock);
+	spin_lock_init(&dev_entry->cs_spinlock);
+	spin_lock_init(&dev_entry->cs_job_spinlock);
+	spin_lock_init(&dev_entry->userptr_spinlock);
+	mutex_init(&dev_entry->ctx_mem_hash_mutex);
+
+	dev_entry->root = debugfs_create_dir(dev_name(hdev->dev),
+						hl_debug_root);
+	add_files_to_device(hdev, dev_entry, dev_entry->root);
 	if (!hdev->asic_prop.fw_security_enabled)
-		add_secured_nodes(dev_entry);
+		add_secured_nodes(dev_entry, dev_entry->root);
 }
 
@@ -1795,6 +1802,7 @@ void hl_debugfs_remove_device(struct hl_device *hdev)
 
 	debugfs_remove_recursive(entry->root);
 
+	mutex_destroy(&entry->ctx_mem_hash_mutex);
 	mutex_destroy(&entry->file_mutex);
 
 	vfree(entry->data_dma_blob_desc.data);
@@ -1901,18 +1909,18 @@ void hl_debugfs_add_ctx_mem_hash(struct hl_device *hdev, struct hl_ctx *ctx)
 {
 	struct hl_dbg_device_entry *dev_entry = &hdev->hl_debugfs;
 
-	spin_lock(&dev_entry->ctx_mem_hash_spinlock);
+	mutex_lock(&dev_entry->ctx_mem_hash_mutex);
 	list_add(&ctx->debugfs_list, &dev_entry->ctx_mem_hash_list);
-	spin_unlock(&dev_entry->ctx_mem_hash_spinlock);
+	mutex_unlock(&dev_entry->ctx_mem_hash_mutex);
 }
 
 void hl_debugfs_remove_ctx_mem_hash(struct hl_device *hdev, struct hl_ctx *ctx)
 {
 	struct hl_dbg_device_entry *dev_entry = &hdev->hl_debugfs;
 
-	spin_lock(&dev_entry->ctx_mem_hash_spinlock);
+	mutex_lock(&dev_entry->ctx_mem_hash_mutex);
 	list_del(&ctx->debugfs_list);
-	spin_unlock(&dev_entry->ctx_mem_hash_spinlock);
+	mutex_unlock(&dev_entry->ctx_mem_hash_mutex);
 }
 
 /**
diff --git a/drivers/accel/habanalabs/common/decoder.c b/drivers/accel/habanalabs/common/decoder.c
index 2aab14d74b53..c03a6da45d00 100644
--- a/drivers/accel/habanalabs/common/decoder.c
+++ b/drivers/accel/habanalabs/common/decoder.c
@@ -43,36 +43,44 @@ static void dec_print_abnrm_intr_source(struct hl_device *hdev, u32 irq_status)
 		intr_source[2], intr_source[3], intr_source[4], intr_source[5]);
 }
 
-static void dec_error_intr_work(struct hl_device *hdev, u32 base_addr, u32 core_id)
+static void dec_abnrm_intr_work(struct work_struct *work)
 {
+	struct hl_dec *dec = container_of(work, struct hl_dec, abnrm_intr_work);
+	struct hl_device *hdev = dec->hdev;
+	u32 irq_status, event_mask = 0;
 	bool reset_required = false;
-	u32 irq_status;
 
-	irq_status = RREG32(base_addr + VCMD_IRQ_STATUS_OFFSET);
+	irq_status = RREG32(dec->base_addr + VCMD_IRQ_STATUS_OFFSET);
 
-	dev_err(hdev->dev, "Decoder abnormal interrupt %#x, core %d\n", irq_status, core_id);
+	dev_err(hdev->dev, "Decoder abnormal interrupt %#x, core %d\n", irq_status, dec->core_id);
 
 	dec_print_abnrm_intr_source(hdev, irq_status);
 
-	if (irq_status & VCMD_IRQ_STATUS_TIMEOUT_MASK)
-		reset_required = true;
-
 	/* Clear the interrupt */
-	WREG32(base_addr + VCMD_IRQ_STATUS_OFFSET, irq_status);
+	WREG32(dec->base_addr + VCMD_IRQ_STATUS_OFFSET, irq_status);
 
 	/* Flush the interrupt clear */
-	RREG32(base_addr + VCMD_IRQ_STATUS_OFFSET);
-
-	if (reset_required)
-		hl_device_reset(hdev, HL_DRV_RESET_HARD);
-}
+	RREG32(dec->base_addr + VCMD_IRQ_STATUS_OFFSET);
 
-static void dec_completion_abnrm(struct work_struct *work)
-{
-	struct hl_dec *dec = container_of(work, struct hl_dec, completion_abnrm_work);
-	struct hl_device *hdev = dec->hdev;
+	if (irq_status & VCMD_IRQ_STATUS_TIMEOUT_MASK) {
+		reset_required = true;
+		event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR;
+	}
 
-	dec_error_intr_work(hdev, dec->base_addr, dec->core_id);
+	if (irq_status & VCMD_IRQ_STATUS_CMDERR_MASK)
+		event_mask |= HL_NOTIFIER_EVENT_UNDEFINED_OPCODE;
+
+	if (irq_status & (VCMD_IRQ_STATUS_ENDCMD_MASK |
+				VCMD_IRQ_STATUS_BUSERR_MASK |
+				VCMD_IRQ_STATUS_ABORT_MASK))
+		event_mask |= HL_NOTIFIER_EVENT_USER_ENGINE_ERR;
+
+	if (reset_required) {
+		event_mask |= HL_NOTIFIER_EVENT_DEVICE_RESET;
+		hl_device_cond_reset(hdev, 0, event_mask);
+	} else if (event_mask) {
+		hl_notifier_event_send_all(hdev, event_mask);
+	}
 }
 
 void hl_dec_fini(struct hl_device *hdev)
@@ -98,7 +106,7 @@ int hl_dec_init(struct hl_device *hdev)
 		dec = hdev->dec + j;
 
 		dec->hdev = hdev;
-		INIT_WORK(&dec->completion_abnrm_work, dec_completion_abnrm);
+		INIT_WORK(&dec->abnrm_intr_work, dec_abnrm_intr_work);
 		dec->core_id = j;
 		dec->base_addr = hdev->asic_funcs->get_dec_base_addr(hdev, j);
 		if (!dec->base_addr) {
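The decoder.c rework above deletes the dec_error_intr_work(hdev, base_addr, core_id) indirection: the work handler itself now recovers its per-decoder state with container_of() instead of receiving it as arguments. A minimal sketch of that pattern, with illustrative names:

    #include <linux/printk.h>
    #include <linux/types.h>
    #include <linux/workqueue.h>

    struct example_dec {
        struct work_struct abnrm_intr_work;
        void __iomem *base_addr;
        u32 core_id;
    };

    static void example_abnrm_intr_work(struct work_struct *work)
    {
        /* One pointer back to all per-instance state. */
        struct example_dec *dec =
            container_of(work, struct example_dec, abnrm_intr_work);

        /* Process context: sleeping and MMIO via dec->base_addr are fine. */
        pr_debug("abnormal interrupt on decoder core %u\n", dec->core_id);
    }

    static void example_dec_init(struct example_dec *dec)
    {
        INIT_WORK(&dec->abnrm_intr_work, example_abnrm_intr_work);
    }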
diff --git a/drivers/accel/habanalabs/common/device.c b/drivers/accel/habanalabs/common/device.c
index 9933e5858a36..fabfc501ef54 100644
--- a/drivers/accel/habanalabs/common/device.c
+++ b/drivers/accel/habanalabs/common/device.c
@@ -22,7 +22,6 @@
 
 enum dma_alloc_type {
 	DMA_ALLOC_COHERENT,
-	DMA_ALLOC_CPU_ACCESSIBLE,
 	DMA_ALLOC_POOL,
 };
 
@@ -121,9 +120,6 @@ static void *hl_dma_alloc_common(struct hl_device *hdev, size_t size, dma_addr_t
 	case DMA_ALLOC_COHERENT:
 		ptr = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, size, dma_handle, flag);
 		break;
-	case DMA_ALLOC_CPU_ACCESSIBLE:
-		ptr = hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, size, dma_handle);
-		break;
 	case DMA_ALLOC_POOL:
 		ptr = hdev->asic_funcs->asic_dma_pool_zalloc(hdev, size, flag, dma_handle);
 		break;
@@ -147,9 +143,6 @@ static void hl_asic_dma_free_common(struct hl_device *hdev, size_t size, void *c
 	case DMA_ALLOC_COHERENT:
 		hdev->asic_funcs->asic_dma_free_coherent(hdev, size, cpu_addr, dma_handle);
 		break;
-	case DMA_ALLOC_CPU_ACCESSIBLE:
-		hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, size, cpu_addr);
-		break;
 	case DMA_ALLOC_POOL:
 		hdev->asic_funcs->asic_dma_pool_free(hdev, cpu_addr, dma_handle);
 		break;
@@ -170,18 +163,6 @@ void hl_asic_dma_free_coherent_caller(struct hl_device *hdev, size_t size, void
 	hl_asic_dma_free_common(hdev, size, cpu_addr, dma_handle, DMA_ALLOC_COHERENT, caller);
 }
 
-void *hl_cpu_accessible_dma_pool_alloc_caller(struct hl_device *hdev, size_t size,
-						dma_addr_t *dma_handle, const char *caller)
-{
-	return hl_dma_alloc_common(hdev, size, dma_handle, 0, DMA_ALLOC_CPU_ACCESSIBLE, caller);
-}
-
-void hl_cpu_accessible_dma_pool_free_caller(struct hl_device *hdev, size_t size, void *vaddr,
-						const char *caller)
-{
-	hl_asic_dma_free_common(hdev, size, vaddr, 0, DMA_ALLOC_CPU_ACCESSIBLE, caller);
-}
-
 void *hl_asic_dma_pool_zalloc_caller(struct hl_device *hdev, size_t size, gfp_t mem_flags,
 					dma_addr_t *dma_handle, const char *caller)
 {
@@ -194,6 +175,16 @@ void hl_asic_dma_pool_free_caller(struct hl_device *hdev, void *vaddr, dma_addr_
 	hl_asic_dma_free_common(hdev, 0, vaddr, dma_addr, DMA_ALLOC_POOL, caller);
 }
 
+void *hl_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, size_t size, dma_addr_t *dma_handle)
+{
+	return hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, size, dma_handle);
+}
+
+void hl_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, void *vaddr)
+{
+	hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, size, vaddr);
+}
+
 int hl_dma_map_sgtable(struct hl_device *hdev, struct sg_table *sgt, enum dma_data_direction dir)
 {
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
@@ -389,18 +380,17 @@ bool hl_ctrl_device_operational(struct hl_device *hdev,
 static void print_idle_status_mask(struct hl_device *hdev, const char *message,
 					u64 idle_mask[HL_BUSY_ENGINES_MASK_EXT_SIZE])
 {
-	u32 pad_width[HL_BUSY_ENGINES_MASK_EXT_SIZE] = {};
-
-	BUILD_BUG_ON(HL_BUSY_ENGINES_MASK_EXT_SIZE != 4);
-
-	pad_width[3] = idle_mask[3] ? 16 : 0;
-	pad_width[2] = idle_mask[2] || pad_width[3] ? 16 : 0;
-	pad_width[1] = idle_mask[1] || pad_width[2] ? 16 : 0;
-	pad_width[0] = idle_mask[0] || pad_width[1] ? 16 : 0;
-
-	dev_err(hdev->dev, "%s (mask %0*llx_%0*llx_%0*llx_%0*llx)\n",
-		message, pad_width[3], idle_mask[3], pad_width[2], idle_mask[2],
-		pad_width[1], idle_mask[1], pad_width[0], idle_mask[0]);
+	if (idle_mask[3])
+		dev_err(hdev->dev, "%s (mask %#llx_%016llx_%016llx_%016llx)\n",
+			message, idle_mask[3], idle_mask[2], idle_mask[1], idle_mask[0]);
+	else if (idle_mask[2])
+		dev_err(hdev->dev, "%s (mask %#llx_%016llx_%016llx)\n",
+			message, idle_mask[2], idle_mask[1], idle_mask[0]);
+	else if (idle_mask[1])
+		dev_err(hdev->dev, "%s (mask %#llx_%016llx)\n",
+			message, idle_mask[1], idle_mask[0]);
+	else
+		dev_err(hdev->dev, "%s (mask %#llx)\n", message, idle_mask[0]);
 }
 
 static void hpriv_release(struct kref *ref)
@@ -423,6 +413,9 @@ static void hpriv_release(struct kref *ref)
 	mutex_destroy(&hpriv->ctx_lock);
 	mutex_destroy(&hpriv->restore_phase_mutex);
 
+	/* There should be no memory buffers at this point and handles IDR can be destroyed */
+	hl_mem_mgr_idr_destroy(&hpriv->mem_mgr);
+
 	/* Device should be reset if reset-upon-device-release is enabled, or if there is a pending
 	 * reset that waits for device release.
 	 */
@@ -492,6 +485,36 @@ int hl_hpriv_put(struct hl_fpriv *hpriv)
 	return kref_put(&hpriv->refcount, hpriv_release);
 }
 
+static void print_device_in_use_info(struct hl_device *hdev, const char *message)
+{
+	u32 active_cs_num, dmabuf_export_cnt;
+	bool unknown_reason = true;
+	char buf[128];
+	size_t size;
+	int offset;
+
+	size = sizeof(buf);
+	offset = 0;
+
+	active_cs_num = hl_get_active_cs_num(hdev);
+	if (active_cs_num) {
+		unknown_reason = false;
+		offset += scnprintf(buf + offset, size - offset, " [%u active CS]", active_cs_num);
+	}
+
+	dmabuf_export_cnt = atomic_read(&hdev->dmabuf_export_cnt);
+	if (dmabuf_export_cnt) {
+		unknown_reason = false;
+		offset += scnprintf(buf + offset, size - offset, " [%u exported dma-buf]",
+					dmabuf_export_cnt);
+	}
+
+	if (unknown_reason)
+		scnprintf(buf + offset, size - offset, " [unknown reason]");
+
+	dev_notice(hdev->dev, "%s%s\n", message, buf);
+}
+
 /*
  * hl_device_release - release function for habanalabs device
  *
@@ -514,17 +537,20 @@ static int hl_device_release(struct inode *inode, struct file *filp)
 	}
 
 	hl_ctx_mgr_fini(hdev, &hpriv->ctx_mgr);
+
+	/* Memory buffers might be still in use at this point and thus the handles IDR destruction
+	 * is postponed to hpriv_release().
+	 */
 	hl_mem_mgr_fini(&hpriv->mem_mgr);
 
 	hdev->compute_ctx_in_release = 1;
 
 	if (!hl_hpriv_put(hpriv)) {
-		dev_notice(hdev->dev, "User process closed FD but device still in use\n");
+		print_device_in_use_info(hdev, "User process closed FD but device still in use");
 		hl_device_reset(hdev, HL_DRV_RESET_HARD);
 	}
 
-	hdev->last_open_session_duration_jif =
-		jiffies - hdev->last_successful_open_jif;
+	hdev->last_open_session_duration_jif = jiffies - hdev->last_successful_open_jif;
 
 	return 0;
 }
@@ -617,7 +643,7 @@ static void device_release_func(struct device *dev)
  * device_init_cdev - Initialize cdev and device for habanalabs device
  *
  * @hdev: pointer to habanalabs device structure
- * @hclass: pointer to the class object of the device
+ * @class: pointer to the class object of the device
 * @minor: minor number of the specific device
 * @fpos: file operations to install for this device
 * @name: name of the device as it will appear in the filesystem
@@ -626,7 +652,7 @@ static void device_release_func(struct device *dev)
 *
 * Initialize a cdev and a Linux device for habanalabs's device.
 */
-static int device_init_cdev(struct hl_device *hdev, struct class *hclass,
+static int device_init_cdev(struct hl_device *hdev, struct class *class,
 				int minor, const struct file_operations *fops,
 				char *name, struct cdev *cdev,
 				struct device **dev)
@@ -640,7 +666,7 @@ static int device_init_cdev(struct hl_device *hdev, struct class *hclass,
 
 	device_initialize(*dev);
 	(*dev)->devt = MKDEV(hdev->major, minor);
-	(*dev)->class = hclass;
+	(*dev)->class = class;
 	(*dev)->release = device_release_func;
 	dev_set_drvdata(*dev, hdev);
 	dev_set_name(*dev, "%s", name);
@@ -733,14 +759,14 @@ static void device_hard_reset_pending(struct work_struct *work)
 
 static void device_release_watchdog_func(struct work_struct *work)
 {
-	struct hl_device_reset_work *device_release_watchdog_work =
-		container_of(work, struct hl_device_reset_work, reset_work.work);
-	struct hl_device *hdev = device_release_watchdog_work->hdev;
+	struct hl_device_reset_work *watchdog_work =
+		container_of(work, struct hl_device_reset_work, reset_work.work);
+	struct hl_device *hdev = watchdog_work->hdev;
 	u32 flags;
 
-	dev_dbg(hdev->dev, "Device wasn't released in time. Initiate device reset.\n");
+	dev_dbg(hdev->dev, "Device wasn't released in time. Initiate hard-reset.\n");
 
-	flags = device_release_watchdog_work->flags | HL_DRV_RESET_FROM_WD_THR;
+	flags = watchdog_work->flags | HL_DRV_RESET_HARD | HL_DRV_RESET_FROM_WD_THR;
 
 	hl_device_reset(hdev, flags);
 }
@@ -805,7 +831,7 @@ static int device_early_init(struct hl_device *hdev)
 	}
 
 	for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++) {
-		snprintf(workq_name, 32, "hl-free-jobs-%u", (u32) i);
+		snprintf(workq_name, 32, "hl%u-free-jobs-%u", hdev->cdev_idx, (u32) i);
 		hdev->cq_wq[i] = create_singlethread_workqueue(workq_name);
 		if (hdev->cq_wq[i] == NULL) {
 			dev_err(hdev->dev, "Failed to allocate CQ workqueue\n");
@@ -814,14 +840,16 @@ static int device_early_init(struct hl_device *hdev)
 		}
 	}
 
-	hdev->eq_wq = create_singlethread_workqueue("hl-events");
+	snprintf(workq_name, 32, "hl%u-events", hdev->cdev_idx);
+	hdev->eq_wq = create_singlethread_workqueue(workq_name);
 	if (hdev->eq_wq == NULL) {
 		dev_err(hdev->dev, "Failed to allocate EQ workqueue\n");
 		rc = -ENOMEM;
 		goto free_cq_wq;
 	}
 
-	hdev->cs_cmplt_wq = alloc_workqueue("hl-cs-completions", WQ_UNBOUND, 0);
+	snprintf(workq_name, 32, "hl%u-cs-completions", hdev->cdev_idx);
+	hdev->cs_cmplt_wq = alloc_workqueue(workq_name, WQ_UNBOUND, 0);
 	if (!hdev->cs_cmplt_wq) {
 		dev_err(hdev->dev,
 			"Failed to allocate CS completions workqueue\n");
@@ -829,7 +857,8 @@ static int device_early_init(struct hl_device *hdev)
 		goto free_eq_wq;
 	}
 
-	hdev->ts_free_obj_wq = alloc_workqueue("hl-ts-free-obj", WQ_UNBOUND, 0);
+	snprintf(workq_name, 32, "hl%u-ts-free-obj", hdev->cdev_idx);
+	hdev->ts_free_obj_wq = alloc_workqueue(workq_name, WQ_UNBOUND, 0);
 	if (!hdev->ts_free_obj_wq) {
 		dev_err(hdev->dev,
 			"Failed to allocate Timestamp registration free workqueue\n");
@@ -837,15 +866,15 @@ static int device_early_init(struct hl_device *hdev)
 		goto free_cs_cmplt_wq;
 	}
 
-	hdev->prefetch_wq = alloc_workqueue("hl-prefetch", WQ_UNBOUND, 0);
+	snprintf(workq_name, 32, "hl%u-prefetch", hdev->cdev_idx);
+	hdev->prefetch_wq = alloc_workqueue(workq_name, WQ_UNBOUND, 0);
 	if (!hdev->prefetch_wq) {
 		dev_err(hdev->dev, "Failed to allocate MMU prefetch workqueue\n");
 		rc = -ENOMEM;
 		goto free_ts_free_wq;
 	}
 
-	hdev->hl_chip_info = kzalloc(sizeof(struct hwmon_chip_info),
-					GFP_KERNEL);
+	hdev->hl_chip_info = kzalloc(sizeof(struct hwmon_chip_info), GFP_KERNEL);
 	if (!hdev->hl_chip_info) {
 		rc = -ENOMEM;
 		goto free_prefetch_wq;
@@ -857,7 +886,8 @@ static int device_early_init(struct hl_device *hdev)
 
 	hl_mem_mgr_init(hdev->dev, &hdev->kernel_mem_mgr);
 
-	hdev->reset_wq = create_singlethread_workqueue("hl_device_reset");
+	snprintf(workq_name, 32, "hl%u_device_reset", hdev->cdev_idx);
+	hdev->reset_wq = create_singlethread_workqueue(workq_name);
 	if (!hdev->reset_wq) {
 		rc = -ENOMEM;
 		dev_err(hdev->dev, "Failed to create device reset WQ\n");
@@ -887,6 +917,7 @@ static int device_early_init(struct hl_device *hdev)
 
 free_cb_mgr:
 	hl_mem_mgr_fini(&hdev->kernel_mem_mgr);
+	hl_mem_mgr_idr_destroy(&hdev->kernel_mem_mgr);
 free_chip_info:
 	kfree(hdev->hl_chip_info);
 free_prefetch_wq:
@@ -930,6 +961,7 @@ static void device_early_fini(struct hl_device *hdev)
 	mutex_destroy(&hdev->clk_throttling.lock);
 
 	hl_mem_mgr_fini(&hdev->kernel_mem_mgr);
+	hl_mem_mgr_idr_destroy(&hdev->kernel_mem_mgr);
 
 	kfree(hdev->hl_chip_info);
@@ -953,6 +985,8 @@ static void hl_device_heartbeat(struct work_struct *work)
 {
 	struct hl_device *hdev = container_of(work, struct hl_device,
 						work_heartbeat.work);
+	struct hl_info_fw_err_info info = {0};
+	u64 event_mask = HL_NOTIFIER_EVENT_DEVICE_RESET | HL_NOTIFIER_EVENT_DEVICE_UNAVAILABLE;
 
 	if (!hl_device_operational(hdev, NULL))
 		goto reschedule;
@@ -963,7 +997,10 @@ static void hl_device_heartbeat(struct work_struct *work)
 	if (hl_device_operational(hdev, NULL))
 		dev_err(hdev->dev, "Device heartbeat failed!\n");
 
-	hl_device_reset(hdev, HL_DRV_RESET_HARD | HL_DRV_RESET_HEARTBEAT);
+	info.err_type = HL_INFO_FW_HEARTBEAT_ERR;
+	info.event_mask = &event_mask;
+	hl_handle_fw_err(hdev, &info);
+	hl_device_cond_reset(hdev, HL_DRV_RESET_HARD | HL_DRV_RESET_HEARTBEAT, event_mask);
 
 	return;
@@ -1234,7 +1271,6 @@ int hl_device_resume(struct hl_device *hdev)
 	return 0;
 
 disable_device:
-	pci_clear_master(hdev->pdev);
 	pci_disable_device(hdev->pdev);
 
 	return rc;
@@ -1344,6 +1380,34 @@ static void device_disable_open_processes(struct hl_device *hdev, bool control_d
 	mutex_unlock(fd_lock);
 }
 
+static void send_disable_pci_access(struct hl_device *hdev, u32 flags)
+{
+	/* If reset is due to heartbeat, device CPU is no responsive in
+	 * which case no point sending PCI disable message to it.
+	 */
+	if ((flags & HL_DRV_RESET_HARD) &&
+			!(flags & (HL_DRV_RESET_HEARTBEAT | HL_DRV_RESET_BYPASS_REQ_TO_FW))) {
+		/* Disable PCI access from device F/W so he won't send
+		 * us additional interrupts. We disable MSI/MSI-X at
+		 * the halt_engines function and we can't have the F/W
+		 * sending us interrupts after that. We need to disable
+		 * the access here because if the device is marked
+		 * disable, the message won't be send. Also, in case
+		 * of heartbeat, the device CPU is marked as disable
+		 * so this message won't be sent
+		 */
+		if (hl_fw_send_pci_access_msg(hdev, CPUCP_PACKET_DISABLE_PCI_ACCESS, 0x0)) {
+			dev_warn(hdev->dev, "Failed to disable FW's PCI access\n");
+			return;
+		}
+
+		/* verify that last EQs are handled before disabled is set */
+		if (hdev->cpu_queues_enable)
+			synchronize_irq(pci_irq_vector(hdev->pdev,
+					hdev->asic_prop.eq_interrupt_id));
+	}
+}
+
 static void handle_reset_trigger(struct hl_device *hdev, u32 flags)
 {
 	u32 cur_reset_trigger = HL_RESET_TRIGGER_DEFAULT;
@@ -1382,28 +1446,6 @@ static void handle_reset_trigger(struct hl_device *hdev, u32 flags)
 	} else {
 		hdev->reset_info.reset_trigger_repeated = 1;
 	}
-
-	/* If reset is due to heartbeat, device CPU is no responsive in
-	 * which case no point sending PCI disable message to it.
-	 *
-	 * If F/W is performing the reset, no need to send it a message to disable
-	 * PCI access
-	 */
-	if ((flags & HL_DRV_RESET_HARD) &&
-			!(flags & (HL_DRV_RESET_HEARTBEAT | HL_DRV_RESET_BYPASS_REQ_TO_FW))) {
-		/* Disable PCI access from device F/W so he won't send
-		 * us additional interrupts. We disable MSI/MSI-X at
-		 * the halt_engines function and we can't have the F/W
-		 * sending us interrupts after that. We need to disable
-		 * the access here because if the device is marked
-		 * disable, the message won't be send. Also, in case
-		 * of heartbeat, the device CPU is marked as disable
-		 * so this message won't be sent
-		 */
-		if (hl_fw_send_pci_access_msg(hdev, CPUCP_PACKET_DISABLE_PCI_ACCESS, 0x0))
-			dev_warn(hdev->dev,
-				"Failed to disable PCI access by F/W\n");
-	}
 }
 
 /*
@@ -1424,12 +1466,11 @@ static void handle_reset_trigger(struct hl_device *hdev, u32 flags)
 */
 int hl_device_reset(struct hl_device *hdev, u32 flags)
 {
-	bool hard_reset, from_hard_reset_thread, fw_reset, hard_instead_soft = false,
-		reset_upon_device_release = false, schedule_hard_reset = false,
-		delay_reset, from_dev_release, from_watchdog_thread;
+	bool hard_reset, from_hard_reset_thread, fw_reset, reset_upon_device_release,
+		schedule_hard_reset = false, delay_reset, from_dev_release, from_watchdog_thread;
 	u64 idle_mask[HL_BUSY_ENGINES_MASK_EXT_SIZE] = {0};
 	struct hl_ctx *ctx;
-	int i, rc;
+	int i, rc, hw_fini_rc;
 
 	if (!hdev->init_done) {
 		dev_err(hdev->dev, "Can't reset before initialization is done\n");
@@ -1442,6 +1483,7 @@ int hl_device_reset(struct hl_device *hdev, u32 flags)
 	from_dev_release = !!(flags & HL_DRV_RESET_DEV_RELEASE);
 	delay_reset = !!(flags & HL_DRV_RESET_DELAY);
 	from_watchdog_thread = !!(flags & HL_DRV_RESET_FROM_WD_THR);
+	reset_upon_device_release = hdev->reset_upon_device_release && from_dev_release;
 
 	if (!hard_reset && (hl_device_status(hdev) == HL_DEVICE_STATUS_MALFUNCTION)) {
 		dev_dbg(hdev->dev, "soft-reset isn't supported on a malfunctioning device\n");
@@ -1449,30 +1491,26 @@ int hl_device_reset(struct hl_device *hdev, u32 flags)
 	}
 
 	if (!hard_reset && !hdev->asic_prop.supports_compute_reset) {
-		hard_instead_soft = true;
+		dev_dbg(hdev->dev, "asic doesn't support compute reset - do hard-reset instead\n");
 		hard_reset = true;
 	}
 
-	if (hdev->reset_upon_device_release && from_dev_release) {
+	if (reset_upon_device_release) {
 		if (hard_reset) {
 			dev_crit(hdev->dev,
 				"Aborting reset because hard-reset is mutually exclusive with reset-on-device-release\n");
 			return -EINVAL;
 		}
 
-		reset_upon_device_release = true;
-
 		goto do_reset;
 	}
 
 	if (!hard_reset && !hdev->asic_prop.allow_inference_soft_reset) {
-		hard_instead_soft = true;
+		dev_dbg(hdev->dev,
+			"asic doesn't allow inference soft reset - do hard-reset instead\n");
 		hard_reset = true;
 	}
 
-	if (hard_instead_soft)
-		dev_dbg(hdev->dev, "Doing hard-reset instead of compute reset\n");
-
 do_reset:
 	/* Re-entry of reset thread */
 	if (from_hard_reset_thread && hdev->process_kill_trial_cnt)
@@ -1480,14 +1518,14 @@ do_reset:
 
 	/*
 	 * Prevent concurrency in this function - only one reset should be
-	 * done at any given time. Only need to perform this if we didn't
-	 * get from the dedicated hard reset thread
+	 * done at any given time. We need to perform this only if we didn't
+	 * get here from a dedicated hard reset thread.
 	 */
 	if (!from_hard_reset_thread) {
 		/* Block future CS/VM/JOB completion operations */
 		spin_lock(&hdev->reset_info.lock);
 		if (hdev->reset_info.in_reset) {
-			/* We only allow scheduling of a hard reset during compute reset */
+			/* We allow scheduling of a hard reset only during a compute reset */
 			if (hard_reset && hdev->reset_info.in_compute_reset)
 				hdev->reset_info.hard_reset_schedule_flags = flags;
 			spin_unlock(&hdev->reset_info.lock);
@@ -1505,15 +1543,17 @@ do_reset:
 
 	/* Cancel the device release watchdog work if required.
 	 * In case of reset-upon-device-release while the release watchdog work is
-	 * scheduled, do hard-reset instead of compute-reset.
+	 * scheduled due to a hard-reset, do hard-reset instead of compute-reset.
 	 */
 	if ((hard_reset || from_dev_release) && hdev->reset_info.watchdog_active) {
+		struct hl_device_reset_work *watchdog_work =
+				&hdev->device_release_watchdog_work;
+
 		hdev->reset_info.watchdog_active = 0;
 		if (!from_watchdog_thread)
-			cancel_delayed_work_sync(
-					&hdev->device_release_watchdog_work.reset_work);
+			cancel_delayed_work_sync(&watchdog_work->reset_work);
 
-		if (from_dev_release) {
+		if (from_dev_release && (watchdog_work->flags & HL_DRV_RESET_HARD)) {
 			hdev->reset_info.in_compute_reset = 0;
 			flags |= HL_DRV_RESET_HARD;
 			flags &= ~HL_DRV_RESET_DEV_RELEASE;
@@ -1524,7 +1564,9 @@ do_reset:
 	if (delay_reset)
 		usleep_range(HL_RESET_DELAY_USEC, HL_RESET_DELAY_USEC << 1);
 
+escalate_reset_flow:
 	handle_reset_trigger(hdev, flags);
+	send_disable_pci_access(hdev, flags);
 
 	/* This also blocks future CS/VM/JOB completion operations */
 	hdev->disabled = true;
@@ -1539,7 +1581,6 @@ do_reset:
 		dev_dbg(hdev->dev, "Going to reset engines of inference device\n");
 	}
 
-again:
 	if ((hard_reset) && (!from_hard_reset_thread)) {
 		hdev->reset_info.hard_reset_pending = true;
@@ -1592,7 +1633,7 @@ kill_processes:
 	}
 
 	/* Reset the H/W. It will be in idle state after this returns */
-	hdev->asic_funcs->hw_fini(hdev, hard_reset, fw_reset);
+	hw_fini_rc = hdev->asic_funcs->hw_fini(hdev, hard_reset, fw_reset);
 
 	if (hard_reset) {
 		hdev->fw_loader.fw_comp_loaded = FW_TYPE_NONE;
@@ -1619,6 +1660,10 @@ kill_processes:
 		hl_ctx_put(ctx);
 	}
 
+	if (hw_fini_rc) {
+		rc = hw_fini_rc;
+		goto out_err;
+	}
 	/* Finished tear-down, starting to re-initialize */
 
 	if (hard_reset) {
@@ -1784,10 +1829,8 @@ kill_processes:
 			dev_info(hdev->dev, "Performing hard reset scheduled during compute reset\n");
 			flags = hdev->reset_info.hard_reset_schedule_flags;
 			hdev->reset_info.hard_reset_schedule_flags = 0;
-			hdev->disabled = true;
 			hard_reset = true;
-			handle_reset_trigger(hdev, flags);
-			goto again;
+			goto escalate_reset_flow;
 		}
 	}
 
@@ -1804,20 +1847,19 @@ out_err:
 			"%s Failed to reset! Device is NOT usable\n",
 			dev_name(&(hdev)->pdev->dev));
 		hdev->reset_info.hard_reset_cnt++;
-	} else if (reset_upon_device_release) {
-		spin_unlock(&hdev->reset_info.lock);
-		dev_err(hdev->dev, "Failed to reset device after user release\n");
-		flags |= HL_DRV_RESET_HARD;
-		flags &= ~HL_DRV_RESET_DEV_RELEASE;
-		hard_reset = true;
-		goto again;
 	} else {
+		if (reset_upon_device_release) {
+			dev_err(hdev->dev, "Failed to reset device after user release\n");
+			flags &= ~HL_DRV_RESET_DEV_RELEASE;
+		} else {
+			dev_err(hdev->dev, "Failed to do compute reset\n");
+			hdev->reset_info.compute_reset_cnt++;
+		}
+
 		spin_unlock(&hdev->reset_info.lock);
-		dev_err(hdev->dev, "Failed to do compute reset\n");
-		hdev->reset_info.compute_reset_cnt++;
 		flags |= HL_DRV_RESET_HARD;
 		hard_reset = true;
-		goto again;
+		goto escalate_reset_flow;
 	}
 
 	hdev->reset_info.in_reset = 0;
@@ -1840,10 +1882,6 @@ int hl_device_cond_reset(struct hl_device *hdev, u32 flags, u64 event_mask)
 {
 	struct hl_ctx *ctx = NULL;
 
-	/* Device release watchdog is only for hard reset */
-	if (!(flags & HL_DRV_RESET_HARD) && hdev->asic_prop.allow_inference_soft_reset)
-		goto device_reset;
-
 	/* F/W reset cannot be postponed */
 	if (flags & HL_DRV_RESET_BYPASS_REQ_TO_FW)
 		goto device_reset;
@@ -1871,7 +1909,7 @@ int hl_device_cond_reset(struct hl_device *hdev, u32 flags, u64 event_mask)
 		goto out;
 
 	hdev->device_release_watchdog_work.flags = flags;
-	dev_dbg(hdev->dev, "Device is going to be reset in %u sec unless being released\n",
+	dev_dbg(hdev->dev, "Device is going to be hard-reset in %u sec unless being released\n",
 		hdev->device_release_watchdog_timeout_sec);
 	schedule_delayed_work(&hdev->device_release_watchdog_work.reset_work,
 		msecs_to_jiffies(hdev->device_release_watchdog_timeout_sec * 1000));
@@ -1939,37 +1977,27 @@ void hl_notifier_event_send_all(struct hl_device *hdev, u64 event_mask)
 	mutex_unlock(&hdev->fpriv_ctrl_list_lock);
 }
 
-/*
- * hl_device_init - main initialization function for habanalabs device
- *
- * @hdev: pointer to habanalabs device structure
- *
- * Allocate an id for the device, do early initialization and then call the
- * ASIC specific initialization functions. Finally, create the cdev and the
- * Linux device to expose it to the user
- */
-int hl_device_init(struct hl_device *hdev, struct class *hclass)
+static int create_cdev(struct hl_device *hdev)
 {
-	int i, rc, cq_cnt, user_interrupt_cnt, cq_ready_cnt;
 	char *name;
-	bool add_cdev_sysfs_on_err = false;
+	int rc;
 
 	hdev->cdev_idx = hdev->id / 2;
 
 	name = kasprintf(GFP_KERNEL, "hl%d", hdev->cdev_idx);
 	if (!name) {
 		rc = -ENOMEM;
-		goto out_disabled;
+		goto out_err;
 	}
 
 	/* Initialize cdev and device structures */
-	rc = device_init_cdev(hdev, hclass, hdev->id, &hl_ops, name,
+	rc = device_init_cdev(hdev, hdev->hclass, hdev->id, &hl_ops, name,
 				&hdev->cdev, &hdev->dev);
 
 	kfree(name);
 
 	if (rc)
-		goto out_disabled;
+		goto out_err;
 
 	name = kasprintf(GFP_KERNEL, "hl_controlD%d", hdev->cdev_idx);
 	if (!name) {
@@ -1978,7 +2006,7 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
 	}
 
 	/* Initialize cdev and device structures for control device */
-	rc = device_init_cdev(hdev, hclass, hdev->id_control, &hl_ctrl_ops,
+	rc = device_init_cdev(hdev, hdev->hclass, hdev->id_control, &hl_ctrl_ops,
 				name, &hdev->cdev_ctrl, &hdev->dev_ctrl);
 
 	kfree(name);
@@ -1986,10 +2014,36 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
 	if (rc)
 		goto free_dev;
 
+	return 0;
+
+free_dev:
+	put_device(hdev->dev);
+out_err:
+	return rc;
+}
+
+/*
+ * hl_device_init - main initialization function for habanalabs device
+ *
+ * @hdev: pointer to habanalabs device structure
+ *
+ * Allocate an id for the device, do early initialization and then call the
+ * ASIC specific initialization functions. Finally, create the cdev and the
+ * Linux device to expose it to the user
+ */
+int hl_device_init(struct hl_device *hdev)
+{
+	int i, rc, cq_cnt, user_interrupt_cnt, cq_ready_cnt;
+	bool add_cdev_sysfs_on_err = false;
+
+	rc = create_cdev(hdev);
+	if (rc)
+		goto out_disabled;
+
 	/* Initialize ASIC function pointers and perform early init */
 	rc = device_early_init(hdev);
 	if (rc)
-		goto free_dev_ctrl;
+		goto free_dev;
 
 	user_interrupt_cnt = hdev->asic_prop.user_dec_intr_count +
 				hdev->asic_prop.user_interrupt_count;
@@ -2241,9 +2295,8 @@ free_usr_intr_mem:
 	kfree(hdev->user_interrupt);
 early_fini:
 	device_early_fini(hdev);
-free_dev_ctrl:
-	put_device(hdev->dev_ctrl);
 free_dev:
+	put_device(hdev->dev_ctrl);
 	put_device(hdev->dev);
 out_disabled:
 	hdev->disabled = true;
@@ -2364,7 +2417,9 @@ void hl_device_fini(struct hl_device *hdev)
 	hl_cb_pool_fini(hdev);
 
 	/* Reset the H/W. It will be in idle state after this returns */
-	hdev->asic_funcs->hw_fini(hdev, true, false);
+	rc = hdev->asic_funcs->hw_fini(hdev, true, false);
+	if (rc)
+		dev_err(hdev->dev, "hw_fini failed in device fini while removing device %d\n", rc);
 
 	hdev->fw_loader.fw_comp_loaded = FW_TYPE_NONE;
@@ -2566,3 +2621,49 @@ void hl_handle_page_fault(struct hl_device *hdev, u64 addr, u16 eng_id, bool is_
 	if (event_mask)
 		*event_mask |= HL_NOTIFIER_EVENT_PAGE_FAULT;
 }
+
+static void hl_capture_hw_err(struct hl_device *hdev, u16 event_id)
+{
+	struct hw_err_info *info = &hdev->captured_err_info.hw_err;
+
+	/* Capture only the first HW err */
+	if (atomic_cmpxchg(&info->event_detected, 0, 1))
+		return;
+
+	info->event.timestamp = ktime_to_ns(ktime_get());
+	info->event.event_id = event_id;
+
+	info->event_info_available = true;
+}
+
+void hl_handle_critical_hw_err(struct hl_device *hdev, u16 event_id, u64 *event_mask)
+{
+	hl_capture_hw_err(hdev, event_id);
+
+	if (event_mask)
+		*event_mask |= HL_NOTIFIER_EVENT_CRITICL_HW_ERR;
+}
+
+static void hl_capture_fw_err(struct hl_device *hdev, struct hl_info_fw_err_info *fw_info)
+{
+	struct fw_err_info *info = &hdev->captured_err_info.fw_err;
+
+	/* Capture only the first FW error */
+	if (atomic_cmpxchg(&info->event_detected, 0, 1))
+		return;
+
+	info->event.timestamp = ktime_to_ns(ktime_get());
+	info->event.err_type = fw_info->err_type;
+	if (fw_info->err_type == HL_INFO_FW_REPORTED_ERR)
+		info->event.event_id = fw_info->event_id;
+
+	info->event_info_available = true;
+}
+
+void hl_handle_fw_err(struct hl_device *hdev, struct hl_info_fw_err_info *info)
+{
+	hl_capture_fw_err(hdev, info);
+
+	if (info->event_mask)
+		*info->event_mask |= HL_NOTIFIER_EVENT_CRITICL_FW_ERR;
+}
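print_device_in_use_info() above leans on scnprintf() returning the number of bytes actually stored (excluding the terminating NUL), which keeps the offset/size arithmetic safe even when the buffer fills up. The same idiom, isolated (build_busy_report is an illustrative name):

    #include <linux/kernel.h>
    #include <linux/types.h>

    static void build_busy_report(char *buf, size_t size, u32 cs_cnt, u32 dmabuf_cnt)
    {
        int offset = 0;

        /* scnprintf() never writes past "size" and reports what it
         * stored, so buf + offset / size - offset cannot overflow. */
        if (cs_cnt)
            offset += scnprintf(buf + offset, size - offset,
                        " [%u active CS]", cs_cnt);
        if (dmabuf_cnt)
            offset += scnprintf(buf + offset, size - offset,
                        " [%u exported dma-buf]", dmabuf_cnt);
        if (!offset)
            scnprintf(buf, size, " [unknown reason]");
    }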
msleep(static_loader->cpu_reset_wait_msec); @@ -2181,8 +2178,8 @@ static int hl_fw_dynamic_read_device_fw_version(struct hl_device *hdev, dev_info(hdev->dev, "preboot version %s\n", preboot_ver); - /* This function takes care of freeing preboot_ver */ - rc = extract_fw_sub_versions(hdev, preboot_ver); + rc = hl_get_preboot_major_minor(hdev, preboot_ver); + kfree(preboot_ver); if (rc) return rc; } @@ -3152,7 +3149,7 @@ int hl_fw_get_sec_attest_info(struct hl_device *hdev, struct cpucp_sec_attest_in int hl_fw_send_generic_request(struct hl_device *hdev, enum hl_passthrough_type sub_opcode, dma_addr_t buff, u32 *size) { - struct cpucp_packet pkt = {0}; + struct cpucp_packet pkt = {}; u64 result; int rc = 0; diff --git a/drivers/accel/habanalabs/common/habanalabs.h b/drivers/accel/habanalabs/common/habanalabs.h index fa05e76d3d21..eaae69a9f817 100644 --- a/drivers/accel/habanalabs/common/habanalabs.h +++ b/drivers/accel/habanalabs/common/habanalabs.h @@ -155,18 +155,12 @@ enum hl_mmu_enablement { #define hl_asic_dma_alloc_coherent(hdev, size, dma_handle, flags) \ hl_asic_dma_alloc_coherent_caller(hdev, size, dma_handle, flags, __func__) -#define hl_cpu_accessible_dma_pool_alloc(hdev, size, dma_handle) \ - hl_cpu_accessible_dma_pool_alloc_caller(hdev, size, dma_handle, __func__) - #define hl_asic_dma_pool_zalloc(hdev, size, mem_flags, dma_handle) \ hl_asic_dma_pool_zalloc_caller(hdev, size, mem_flags, dma_handle, __func__) #define hl_asic_dma_free_coherent(hdev, size, cpu_addr, dma_handle) \ hl_asic_dma_free_coherent_caller(hdev, size, cpu_addr, dma_handle, __func__) -#define hl_cpu_accessible_dma_pool_free(hdev, size, vaddr) \ - hl_cpu_accessible_dma_pool_free_caller(hdev, size, vaddr, __func__) - #define hl_asic_dma_pool_free(hdev, vaddr, dma_addr) \ hl_asic_dma_pool_free_caller(hdev, vaddr, dma_addr, __func__) @@ -378,6 +372,7 @@ enum hl_cs_type { CS_RESERVE_SIGNALS, CS_UNRESERVE_SIGNALS, CS_TYPE_ENGINE_CORE, + CS_TYPE_ENGINES, CS_TYPE_FLUSH_PCI_HBW_WRITES, }; @@ -592,6 +587,8 @@ struct hl_hints_range { * @host_base_address: host physical start address for host DMA from device * @host_end_address: host physical end address for host DMA from device * @max_freq_value: current max clk frequency. + * @engine_core_interrupt_reg_addr: interrupt register address for engine core to use + * in order to raise events toward FW. * @clk_pll_index: clock PLL index that specify which PLL determines the clock * we display to the user * @mmu_pgt_size: MMU page tables total size. @@ -612,8 +609,8 @@ struct hl_hints_range { * @cb_pool_cb_cnt: number of CBs in the CB pool. * @cb_pool_cb_size: size of each CB in the CB pool. * @decoder_enabled_mask: which decoders are enabled. - * @decoder_binning_mask: which decoders are binned, 0 means usable and 1 - * means binned (at most one binned decoder per dcore). + * @decoder_binning_mask: which decoders are binned, 0 means usable and 1 means binned. + * @rotator_enabled_mask: which rotators are enabled. * @edma_enabled_mask: which EDMAs are enabled. * @edma_binning_mask: which EDMAs are binned, 0 means usable and 1 means * binned (at most one binned DMA). @@ -648,7 +645,8 @@ struct hl_hints_range { * which the property supports_user_set_page_size is true * (i.e. the DRAM supports multiple page sizes), otherwise * it will shall be equal to dram_page_size. - * @num_engine_cores: number of engine cpu cores + * @num_engine_cores: number of engine cpu cores. + * @max_num_of_engines: maximum number of all engines in the ASIC. 
* @num_of_special_blocks: special_blocks array size. * @glbl_err_cause_num: global err cause number. * @hbw_flush_reg: register to read to generate HBW flush. value of 0 means HBW flush is @@ -663,6 +661,8 @@ struct hl_hints_range { * @first_available_cq: first available CQ for the user. * @user_interrupt_count: number of user interrupts. * @user_dec_intr_count: number of decoder interrupts exposed to user. + * @tpc_interrupt_id: interrupt id for TPC to use in order to raise events towards the host. + * @eq_interrupt_id: interrupt id for EQ, uses to synchronize EQ interrupts in hard-reset. * @cache_line_size: device cache line size. * @server_type: Server type that the ASIC is currently installed in. * The value is according to enum hl_server_type in uapi file. @@ -698,6 +698,7 @@ struct hl_hints_range { * @supports_user_set_page_size: true if user can set the allocation page size. * @dma_mask: the dma mask to be set for this device * @supports_advanced_cpucp_rc: true if new cpucp opcodes are supported. + * @supports_engine_modes: true if changing engines/engine_cores modes is supported. */ struct asic_fixed_properties { struct hw_queue_properties *hw_queues_props; @@ -739,6 +740,7 @@ struct asic_fixed_properties { u64 host_base_address; u64 host_end_address; u64 max_freq_value; + u64 engine_core_interrupt_reg_addr; u32 clk_pll_index; u32 mmu_pgt_size; u32 mmu_pte_size; @@ -759,6 +761,7 @@ struct asic_fixed_properties { u32 cb_pool_cb_size; u32 decoder_enabled_mask; u32 decoder_binning_mask; + u32 rotator_enabled_mask; u32 edma_enabled_mask; u32 edma_binning_mask; u32 max_pending_cs; @@ -775,6 +778,7 @@ struct asic_fixed_properties { u32 xbar_edge_enabled_mask; u32 device_mem_alloc_default_page_size; u32 num_engine_cores; + u32 max_num_of_engines; u32 num_of_special_blocks; u32 glbl_err_cause_num; u32 hbw_flush_reg; @@ -788,6 +792,8 @@ struct asic_fixed_properties { u16 first_available_cq[HL_MAX_DCORES]; u16 user_interrupt_count; u16 user_dec_intr_count; + u16 tpc_interrupt_id; + u16 eq_interrupt_id; u16 cache_line_size; u16 server_type; u8 completion_queues_count; @@ -811,6 +817,7 @@ struct asic_fixed_properties { u8 supports_user_set_page_size; u8 dma_mask; u8 supports_advanced_cpucp_rc; + u8 supports_engine_modes; }; /** @@ -1096,6 +1103,8 @@ struct hl_cq { enum hl_user_interrupt_type { HL_USR_INTERRUPT_CQ = 0, HL_USR_INTERRUPT_DECODER, + HL_USR_INTERRUPT_TPC, + HL_USR_INTERRUPT_UNEXPECTED }; /** @@ -1104,6 +1113,7 @@ enum hl_user_interrupt_type { * @type: user interrupt type * @wait_list_head: head to the list of user threads pending on this interrupt * @wait_list_lock: protects wait_list_head + * @timestamp: last timestamp taken upon interrupt * @interrupt_id: msix interrupt id */ struct hl_user_interrupt { @@ -1111,6 +1121,7 @@ struct hl_user_interrupt { enum hl_user_interrupt_type type; struct list_head wait_list_head; spinlock_t wait_list_lock; + ktime_t timestamp; u32 interrupt_id; }; @@ -1200,15 +1211,15 @@ struct hl_eq { /** * struct hl_dec - describes a decoder sw instance. * @hdev: pointer to the device structure. - * @completion_abnrm_work: workqueue object to run when decoder generates an error interrupt + * @abnrm_intr_work: workqueue work item to run when decoder generates an error interrupt. * @core_id: ID of the decoder. * @base_addr: base address of the decoder. 
*/ struct hl_dec { - struct hl_device *hdev; - struct work_struct completion_abnrm_work; - u32 core_id; - u32 base_addr; + struct hl_device *hdev; + struct work_struct abnrm_intr_work; + u32 core_id; + u32 base_addr; }; /** @@ -1562,6 +1573,7 @@ struct engines_data { * @access_dev_mem: access device memory * @set_dram_bar_base: set the base of the DRAM BAR * @set_engine_cores: set a config command to engine cores + * @set_engines: set a config command to user engines * @send_device_activity: indication to FW about device availability * @set_dram_properties: set DRAM related properties. * @set_binning_masks: set binning/enable masks for all relevant components. @@ -1574,7 +1586,7 @@ struct hl_asic_funcs { int (*sw_init)(struct hl_device *hdev); int (*sw_fini)(struct hl_device *hdev); int (*hw_init)(struct hl_device *hdev); - void (*hw_fini)(struct hl_device *hdev, bool hard_reset, bool fw_reset); + int (*hw_fini)(struct hl_device *hdev, bool hard_reset, bool fw_reset); void (*halt_engines)(struct hl_device *hdev, bool hard_reset, bool fw_reset); int (*suspend)(struct hl_device *hdev); int (*resume)(struct hl_device *hdev); @@ -1701,6 +1713,8 @@ struct hl_asic_funcs { u64 (*set_dram_bar_base)(struct hl_device *hdev, u64 addr); int (*set_engine_cores)(struct hl_device *hdev, u32 *core_ids, u32 num_cores, u32 core_command); + int (*set_engines)(struct hl_device *hdev, u32 *engine_ids, + u32 num_engines, u32 engine_command); int (*send_device_activity)(struct hl_device *hdev, bool open); int (*set_dram_properties)(struct hl_device *hdev); int (*set_binning_masks)(struct hl_device *hdev); @@ -1824,7 +1838,7 @@ struct hl_cs_outcome_store { * @hpriv: pointer to the private (Kernel Driver) data of the process (fd). * @hdev: pointer to the device structure. * @refcount: reference counter for the context. Context is released only when - * this hits 0l. It is incremented on CS and CS_WAIT. + * this hits 0. It is incremented on CS and CS_WAIT. * @cs_pending: array of hl fence objects representing pending CS. * @outcome_store: storage data structure used to remember outcomes of completed * command submissions for a long time after CS id wraparound. @@ -2318,7 +2332,7 @@ struct hl_debugfs_entry { * @userptr_list: list of available userptrs (virtual memory chunk descriptor). * @userptr_spinlock: protects userptr_list. * @ctx_mem_hash_list: list of available contexts with MMU mappings. - * @ctx_mem_hash_spinlock: protects cb_list. + * @ctx_mem_hash_mutex: protects list of available contexts with MMU mappings. * @data_dma_blob_desc: data DMA descriptor of blob. * @mon_dump_blob_desc: monitor dump descriptor of blob. * @state_dump: data of the system states in case of a bad cs. @@ -2349,7 +2363,7 @@ struct hl_dbg_device_entry { struct list_head userptr_list; spinlock_t userptr_spinlock; struct list_head ctx_mem_hash_list; - spinlock_t ctx_mem_hash_spinlock; + struct mutex ctx_mem_hash_mutex; struct debugfs_blob_wrapper data_dma_blob_desc; struct debugfs_blob_wrapper mon_dump_blob_desc; char *state_dump[HL_STATE_DUMP_HIST_LEN]; @@ -2974,8 +2988,8 @@ struct cs_timeout_info { * @cq_addr: the address of the current handled command buffer * @cq_size: the size of the current handled command buffer * @cb_addr_streams_len: num of streams - actual len of cb_addr_streams array. 
- * should be equal to 1 incase of undefined opcode - * in Upper-CP (specific stream) and equal to 4 incase + * should be equal to 1 in case of undefined opcode + * in Upper-CP (specific stream) and equal to 4 in case * of undefined opcode in Lower-CP. * @engine_id: engine-id that the error occurred on * @stream_id: the stream id the error occurred on. In case the stream equals to @@ -3032,17 +3046,55 @@ struct razwi_info { }; /** + * struct hw_err_info - HW error information. + * @event: holds information on the event. + * @event_detected: if set to 1, then a HW event was discovered for the + * first time after the driver has finished booting-up. + * Currently we assume that only fatal events (that require hard-reset) are + * reported, so we don't care about the others that might follow it. + * So once changed to 1, it will remain that way. + * TODO: support multiple events. + * @event_info_available: indicates that a HW event info is now available. + */ +struct hw_err_info { + struct hl_info_hw_err_event event; + atomic_t event_detected; + bool event_info_available; +}; + +/** + * struct fw_err_info - FW error information. + * @event: holds information on the event. + * @event_detected: if set to 1, then a FW event was discovered for the + * first time after the driver has finished booting-up. + * Currently we assume that only fatal events (that require hard-reset) are + * reported, so we don't care about the others that might follow it. + * So once changed to 1, it will remain that way. + * TODO: support multiple events. + * @event_info_available: indicates that a FW event info is now available. + */ +struct fw_err_info { + struct hl_info_fw_err_event event; + atomic_t event_detected; + bool event_info_available; +}; + +/** * struct hl_error_info - holds information collected during an error. * @cs_timeout: CS timeout error information. * @razwi_info: RAZWI information. * @undef_opcode: undefined opcode information. * @page_fault_info: page fault information. + * @hw_err: (fatal) hardware error information. + * @fw_err: firmware error information. */ struct hl_error_info { struct cs_timeout_info cs_timeout; struct razwi_info razwi_info; struct undefined_opcode_info undef_opcode; struct page_fault_info page_fault_info; + struct hw_err_info hw_err; + struct fw_err_info fw_err; }; /** @@ -3090,6 +3142,7 @@ struct hl_reset_info { * (required only for PCI address match mode) * @pcie_bar: array of available PCIe bars virtual addresses. * @rmmio: configuration area address on SRAM. + * @hclass: pointer to the habanalabs class. * @cdev: related char device. * @cdev_ctrl: char device for control operations only (INFO IOCTL) * @dev: related kernel basic device structure. @@ -3104,6 +3157,8 @@ struct hl_reset_info { * @user_interrupt: array of hl_user_interrupt. upon the corresponding user * interrupt, driver will monitor the list of fences * registered to this interrupt. + * @tpc_interrupt: single TPC interrupt for all TPCs. + * @unexpected_error_interrupt: single interrupt for unexpected user error indication. * @common_user_cq_interrupt: common user CQ interrupt for all user CQ interrupts. * upon any user CQ interrupt, driver will monitor the * list of fences registered to this common structure. @@ -3199,6 +3254,7 @@ struct hl_reset_info { * drams are binned-out * @tpc_binning: contains mask of tpc engines that is received from the f/w which indicates which * tpc engines are binned-out + * @dmabuf_export_cnt: number of dma-buf exports. * @card_type: Various ASICs have several card types. 
This indicates the card * type of the current device. * @major: habanalabs kernel driver major. @@ -3253,6 +3309,8 @@ struct hl_reset_info { * @supports_mmu_prefetch: true if prefetch is supported, otherwise false. * @reset_upon_device_release: reset the device when the user closes the file descriptor of the * device. + * @supports_ctx_switch: true if a ctx switch is required upon first submission. + * @support_preboot_binning: true if we support read binning info from preboot. * @nic_ports_mask: Controls which NIC ports are enabled. Used only for testing. * @fw_components: Controls which f/w components to load to the device. There are multiple f/w * stages and sometimes we want to stop at a certain stage. Used only for testing. @@ -3266,14 +3324,13 @@ struct hl_reset_info { * Used only for testing. * @heartbeat: Controls if we want to enable the heartbeat mechanism vs. the f/w, which verifies * that the f/w is always alive. Used only for testing. - * @supports_ctx_switch: true if a ctx switch is required upon first submission. - * @support_preboot_binning: true if we support read binning info from preboot. */ struct hl_device { struct pci_dev *pdev; u64 pcie_bar_phys[HL_PCI_NUM_BARS]; void __iomem *pcie_bar[HL_PCI_NUM_BARS]; void __iomem *rmmio; + struct class *hclass; struct cdev cdev; struct cdev cdev_ctrl; struct device *dev; @@ -3286,6 +3343,8 @@ struct hl_device { enum hl_asic_type asic_type; struct hl_cq *completion_queue; struct hl_user_interrupt *user_interrupt; + struct hl_user_interrupt tpc_interrupt; + struct hl_user_interrupt unexpected_error_interrupt; struct hl_user_interrupt common_user_cq_interrupt; struct hl_user_interrupt common_decoder_interrupt; struct hl_cs **shadow_cs_queue; @@ -3369,7 +3428,7 @@ struct hl_device { u64 fw_comms_poll_interval_usec; u64 dram_binning; u64 tpc_binning; - + atomic_t dmabuf_export_cnt; enum cpucp_card_types card_type; u32 major; u32 high_pll; @@ -3412,7 +3471,7 @@ struct hl_device { u8 supports_ctx_switch; u8 support_preboot_binning; - /* Parameters for bring-up */ + /* Parameters for bring-up to be upstreamed */ u64 nic_ports_mask; u64 fw_components; u8 mmu_enable; @@ -3450,6 +3509,20 @@ struct hl_cs_encaps_sig_handle { u32 count; }; +/** + * struct hl_info_fw_err_info - firmware error information structure + * @err_type: The type of error detected (or reported). + * @event_mask: Pointer to the event mask to be modified with the detected error flag + * (can be NULL) + * @event_id: The id of the event that reported the error + * (applicable when err_type is HL_INFO_FW_REPORTED_ERR). 
+ */ +struct hl_info_fw_err_info { + enum hl_info_fw_err_type err_type; + u64 *event_mask; + u16 event_id; +}; + /* * IOCTLs */ @@ -3474,6 +3547,10 @@ struct hl_ioctl_desc { hl_ioctl_t *func; }; +static inline bool hl_is_fw_ver_below_1_9(struct hl_device *hdev) +{ + return (hdev->fw_major_version < 42); +} /* * Kernel module functions that can be accessed by entire module @@ -3537,14 +3614,12 @@ static inline bool hl_mem_area_crosses_range(u64 address, u32 size, } uint64_t hl_set_dram_bar_default(struct hl_device *hdev, u64 addr); +void *hl_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, size_t size, dma_addr_t *dma_handle); +void hl_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, void *vaddr); void *hl_asic_dma_alloc_coherent_caller(struct hl_device *hdev, size_t size, dma_addr_t *dma_handle, gfp_t flag, const char *caller); void hl_asic_dma_free_coherent_caller(struct hl_device *hdev, size_t size, void *cpu_addr, dma_addr_t dma_handle, const char *caller); -void *hl_cpu_accessible_dma_pool_alloc_caller(struct hl_device *hdev, size_t size, - dma_addr_t *dma_handle, const char *caller); -void hl_cpu_accessible_dma_pool_free_caller(struct hl_device *hdev, size_t size, void *vaddr, - const char *caller); void *hl_asic_dma_pool_zalloc_caller(struct hl_device *hdev, size_t size, gfp_t mem_flags, dma_addr_t *dma_handle, const char *caller); void hl_asic_dma_pool_free_caller(struct hl_device *hdev, void *vaddr, dma_addr_t dma_addr, @@ -3591,7 +3666,7 @@ irqreturn_t hl_irq_handler_cq(int irq, void *arg); irqreturn_t hl_irq_handler_eq(int irq, void *arg); irqreturn_t hl_irq_handler_dec_abnrm(int irq, void *arg); irqreturn_t hl_irq_handler_user_interrupt(int irq, void *arg); -irqreturn_t hl_irq_handler_default(int irq, void *arg); +irqreturn_t hl_irq_user_interrupt_thread_handler(int irq, void *arg); u32 hl_cq_inc_ptr(u32 ptr); int hl_asid_init(struct hl_device *hdev); @@ -3612,7 +3687,7 @@ int hl_ctx_get_fences(struct hl_ctx *ctx, u64 *seq_arr, void hl_ctx_mgr_init(struct hl_ctx_mgr *mgr); void hl_ctx_mgr_fini(struct hl_device *hdev, struct hl_ctx_mgr *mgr); -int hl_device_init(struct hl_device *hdev, struct class *hclass); +int hl_device_init(struct hl_device *hdev); void hl_device_fini(struct hl_device *hdev); int hl_device_suspend(struct hl_device *hdev); int hl_device_resume(struct hl_device *hdev); @@ -3662,6 +3737,7 @@ bool cs_needs_timeout(struct hl_cs *cs); bool is_staged_cs_last_exists(struct hl_device *hdev, struct hl_cs *cs); struct hl_cs *hl_staged_cs_find_first(struct hl_device *hdev, u64 cs_seq); void hl_multi_cs_completion_init(struct hl_device *hdev); +u32 hl_get_active_cs_num(struct hl_device *hdev); void goya_set_asic_funcs(struct hl_device *hdev); void gaudi_set_asic_funcs(struct hl_device *hdev); @@ -3861,6 +3937,7 @@ const char *hl_sync_engine_to_string(enum hl_sync_engine_type engine_type); void hl_mem_mgr_init(struct device *dev, struct hl_mem_mgr *mmg); void hl_mem_mgr_fini(struct hl_mem_mgr *mmg); +void hl_mem_mgr_idr_destroy(struct hl_mem_mgr *mmg); int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma, void *args); struct hl_mmap_mem_buf *hl_mmap_mem_buf_get(struct hl_mem_mgr *mmg, @@ -3879,6 +3956,8 @@ void hl_handle_razwi(struct hl_device *hdev, u64 addr, u16 *engine_id, u16 num_o void hl_capture_page_fault(struct hl_device *hdev, u64 addr, u16 eng_id, bool is_pmmu); void hl_handle_page_fault(struct hl_device *hdev, u64 addr, u16 eng_id, bool is_pmmu, u64 *event_mask); +void hl_handle_critical_hw_err(struct hl_device *hdev, u16 
event_id, u64 *event_mask); +void hl_handle_fw_err(struct hl_device *hdev, struct hl_info_fw_err_info *info); #ifdef CONFIG_DEBUG_FS diff --git a/drivers/accel/habanalabs/common/habanalabs_drv.c b/drivers/accel/habanalabs/common/habanalabs_drv.c index 03dae57dc838..a4b3f50f1cba 100644 --- a/drivers/accel/habanalabs/common/habanalabs_drv.c +++ b/drivers/accel/habanalabs/common/habanalabs_drv.c @@ -12,7 +12,6 @@ #include "../include/hw_ip/pci/pci_general.h" #include <linux/pci.h> -#include <linux/aer.h> #include <linux/module.h> #define CREATE_TRACE_POINTS @@ -221,12 +220,9 @@ int hl_device_open(struct inode *inode, struct file *filp) hl_debugfs_add_file(hpriv); + memset(&hdev->captured_err_info, 0, sizeof(hdev->captured_err_info)); atomic_set(&hdev->captured_err_info.cs_timeout.write_enable, 1); - atomic_set(&hdev->captured_err_info.razwi_info.razwi_detected, 0); - atomic_set(&hdev->captured_err_info.page_fault_info.page_fault_detected, 0); hdev->captured_err_info.undef_opcode.write_enable = true; - hdev->captured_err_info.razwi_info.razwi_info_available = false; - hdev->captured_err_info.page_fault_info.page_fault_info_available = false; hdev->open_counter++; hdev->last_successful_open_jif = jiffies; @@ -237,6 +233,7 @@ int hl_device_open(struct inode *inode, struct file *filp) out_err: mutex_unlock(&hdev->fpriv_list_lock); hl_mem_mgr_fini(&hpriv->mem_mgr); + hl_mem_mgr_idr_destroy(&hpriv->mem_mgr); hl_ctx_mgr_fini(hpriv->hdev, &hpriv->ctx_mgr); filp->private_data = NULL; mutex_destroy(&hpriv->ctx_lock); @@ -324,6 +321,7 @@ static void copy_kernel_module_params_to_device(struct hl_device *hdev) hdev->asic_prop.fw_security_enabled = is_asic_secured(hdev->asic_type); hdev->major = hl_major; + hdev->hclass = hl_class; hdev->memory_scrub = memory_scrub; hdev->reset_on_lockup = reset_on_lockup; hdev->boot_error_status_mask = boot_error_status_mask; @@ -550,9 +548,7 @@ static int hl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) pci_set_drvdata(pdev, hdev); - pci_enable_pcie_error_reporting(pdev); - - rc = hl_device_init(hdev, hl_class); + rc = hl_device_init(hdev); if (rc) { dev_err(&pdev->dev, "Fatal error during habanalabs device init\n"); rc = -ENODEV; @@ -562,7 +558,6 @@ static int hl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) return 0; disable_device: - pci_disable_pcie_error_reporting(pdev); pci_set_drvdata(pdev, NULL); destroy_hdev(hdev); @@ -585,7 +580,6 @@ static void hl_pci_remove(struct pci_dev *pdev) return; hl_device_fini(hdev); - pci_disable_pcie_error_reporting(pdev); pci_set_drvdata(pdev, NULL); destroy_hdev(hdev); } diff --git a/drivers/accel/habanalabs/common/habanalabs_ioctl.c b/drivers/accel/habanalabs/common/habanalabs_ioctl.c index 5005e6fca691..203ee857810c 100644 --- a/drivers/accel/habanalabs/common/habanalabs_ioctl.c +++ b/drivers/accel/habanalabs/common/habanalabs_ioctl.c @@ -102,11 +102,15 @@ static int hw_ip_info(struct hl_device *hdev, struct hl_info_args *args) hw_ip.mme_master_slave_mode = prop->mme_master_slave_mode; hw_ip.first_available_interrupt_id = prop->first_available_user_interrupt; hw_ip.number_of_user_interrupts = prop->user_interrupt_count; + hw_ip.tpc_interrupt_id = prop->tpc_interrupt_id; hw_ip.edma_enabled_mask = prop->edma_enabled_mask; hw_ip.server_type = prop->server_type; hw_ip.security_enabled = prop->fw_security_enabled; hw_ip.revision_id = hdev->pdev->revision; + hw_ip.rotator_enabled_mask = prop->rotator_enabled_mask; + hw_ip.engine_core_interrupt_reg_addr = prop->engine_core_interrupt_reg_addr; + 
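/*
 * The copy_to_user() just below bounds the transfer with min(), so an
 * older userspace that passes a smaller hl_info_hw_ip_info still gets
 * a safe, truncated copy instead of an overrun. A condensed sketch of
 * the same pattern; the helper name is illustrative only:
 */
static int info_copy_capped(void __user *out, const void *kern_buf,
			    size_t user_size, size_t kern_size)
{
	return copy_to_user(out, kern_buf, min(user_size, kern_size)) ? -EFAULT : 0;
}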
hw_ip.reserved_dram_size = dram_kmd_size; return copy_to_user(out, &hw_ip, min((size_t) size, sizeof(hw_ip))) ? -EFAULT : 0; @@ -830,6 +834,50 @@ static int user_mappings_info(struct hl_fpriv *hpriv, struct hl_info_args *args) return copy_to_user(out, pgf_info->user_mappings, actual_size) ? -EFAULT : 0; } +static int hw_err_info(struct hl_fpriv *hpriv, struct hl_info_args *args) +{ + void __user *user_buf = (void __user *) (uintptr_t) args->return_pointer; + struct hl_device *hdev = hpriv->hdev; + u32 user_buf_size = args->return_size; + struct hw_err_info *info; + int rc; + + if ((!user_buf_size) || (!user_buf)) + return -EINVAL; + + if (user_buf_size < sizeof(struct hl_info_hw_err_event)) + return -ENOMEM; + + info = &hdev->captured_err_info.hw_err; + if (!info->event_info_available) + return -ENOENT; + + rc = copy_to_user(user_buf, &info->event, sizeof(struct hl_info_hw_err_event)); + return rc ? -EFAULT : 0; +} + +static int fw_err_info(struct hl_fpriv *hpriv, struct hl_info_args *args) +{ + void __user *user_buf = (void __user *) (uintptr_t) args->return_pointer; + struct hl_device *hdev = hpriv->hdev; + u32 user_buf_size = args->return_size; + struct fw_err_info *info; + int rc; + + if ((!user_buf_size) || (!user_buf)) + return -EINVAL; + + if (user_buf_size < sizeof(struct hl_info_fw_err_event)) + return -ENOMEM; + + info = &hdev->captured_err_info.fw_err; + if (!info->event_info_available) + return -ENOENT; + + rc = copy_to_user(user_buf, &info->event, sizeof(struct hl_info_fw_err_event)); + return rc ? -EFAULT : 0; +} + static int send_fw_generic_request(struct hl_device *hdev, struct hl_info_args *info_args) { void __user *buff = (void __user *) (uintptr_t) info_args->return_pointer; @@ -950,6 +998,14 @@ static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data, case HL_INFO_UNREGISTER_EVENTFD: return eventfd_unregister(hpriv, args); + case HL_INFO_HW_ERR_EVENT: + return hw_err_info(hpriv, args); + + case HL_INFO_FW_ERR_EVENT: + return fw_err_info(hpriv, args); + + case HL_INFO_DRAM_USAGE: + return dram_usage_info(hpriv, args); default: break; } @@ -962,10 +1018,6 @@ static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data, } switch (args->op) { - case HL_INFO_DRAM_USAGE: - rc = dram_usage_info(hpriv, args); - break; - case HL_INFO_HW_IDLE: rc = hw_idle(hdev, args); break; diff --git a/drivers/accel/habanalabs/common/irq.c b/drivers/accel/habanalabs/common/irq.c index 04844e843a7b..c67895b1cdeb 100644 --- a/drivers/accel/habanalabs/common/irq.c +++ b/drivers/accel/habanalabs/common/irq.c @@ -280,7 +280,6 @@ static void handle_user_interrupt(struct hl_device *hdev, struct hl_user_interru struct list_head *ts_reg_free_list_head = NULL; struct timestamp_reg_work_obj *job; bool reg_node_handle_fail = false; - ktime_t now = ktime_get(); int rc; /* For registration nodes: @@ -303,13 +302,13 @@ static void handle_user_interrupt(struct hl_device *hdev, struct hl_user_interru if (pend->ts_reg_info.buf) { if (!reg_node_handle_fail) { rc = handle_registration_node(hdev, pend, - &ts_reg_free_list_head, now); + &ts_reg_free_list_head, intr->timestamp); if (rc) reg_node_handle_fail = true; } } else { /* Handle wait target value node */ - pend->fence.timestamp = now; + pend->fence.timestamp = intr->timestamp; complete_all(&pend->fence.completion); } } @@ -326,6 +325,26 @@ static void handle_user_interrupt(struct hl_device *hdev, struct hl_user_interru } } +static void handle_tpc_interrupt(struct hl_device *hdev) +{ + u64 event_mask; + u32 flags; + + event_mask = 
HL_NOTIFIER_EVENT_TPC_ASSERT | + HL_NOTIFIER_EVENT_USER_ENGINE_ERR | + HL_NOTIFIER_EVENT_DEVICE_RESET; + + flags = HL_DRV_RESET_DELAY; + + dev_err_ratelimited(hdev->dev, "Received TPC assert\n"); + hl_device_cond_reset(hdev, flags, event_mask); +} + +static void handle_unexpected_user_interrupt(struct hl_device *hdev) +{ + dev_err_ratelimited(hdev->dev, "Received unexpected user error interrupt\n"); +} + /** * hl_irq_handler_user_interrupt - irq handler for user interrupts * @@ -336,6 +355,23 @@ static void handle_user_interrupt(struct hl_device *hdev, struct hl_user_interru irqreturn_t hl_irq_handler_user_interrupt(int irq, void *arg) { struct hl_user_interrupt *user_int = arg; + + user_int->timestamp = ktime_get(); + + return IRQ_WAKE_THREAD; +} + +/** + * hl_irq_user_interrupt_thread_handler - irq thread handler for user interrupts. + * This function is invoked by threaded irq mechanism + * + * @irq: irq number + * @arg: pointer to user interrupt structure + * + */ +irqreturn_t hl_irq_user_interrupt_thread_handler(int irq, void *arg) +{ + struct hl_user_interrupt *user_int = arg; struct hl_device *hdev = user_int->hdev; switch (user_int->type) { @@ -351,6 +387,12 @@ irqreturn_t hl_irq_handler_user_interrupt(int irq, void *arg) /* Handle decoder interrupt registered on this specific irq */ handle_user_interrupt(hdev, user_int); break; + case HL_USR_INTERRUPT_TPC: + handle_tpc_interrupt(hdev); + break; + case HL_USR_INTERRUPT_UNEXPECTED: + handle_unexpected_user_interrupt(hdev); + break; default: break; } @@ -359,24 +401,6 @@ irqreturn_t hl_irq_handler_user_interrupt(int irq, void *arg) } /** - * hl_irq_handler_default - default irq handler - * - * @irq: irq number - * @arg: pointer to user interrupt structure - * - */ -irqreturn_t hl_irq_handler_default(int irq, void *arg) -{ - struct hl_user_interrupt *user_interrupt = arg; - struct hl_device *hdev = user_interrupt->hdev; - u32 interrupt_id = user_interrupt->interrupt_id; - - dev_err(hdev->dev, "got invalid user interrupt %u", interrupt_id); - - return IRQ_HANDLED; -} - -/** * hl_irq_handler_eq - irq handler for event queue * * @irq: irq number @@ -391,8 +415,8 @@ irqreturn_t hl_irq_handler_eq(int irq, void *arg) struct hl_eq_entry *eq_base; struct hl_eqe_work *handle_eqe_work; bool entry_ready; - u32 cur_eqe; - u16 cur_eqe_index; + u32 cur_eqe, ctl; + u16 cur_eqe_index, event_type; eq_base = eq->kernel_address; @@ -405,11 +429,10 @@ irqreturn_t hl_irq_handler_eq(int irq, void *arg) cur_eqe_index = FIELD_GET(EQ_CTL_INDEX_MASK, cur_eqe); if ((hdev->event_queue.check_eqe_index) && - (((eq->prev_eqe_index + 1) & EQ_CTL_INDEX_MASK) - != cur_eqe_index)) { + (((eq->prev_eqe_index + 1) & EQ_CTL_INDEX_MASK) != cur_eqe_index)) { dev_dbg(hdev->dev, - "EQE 0x%x in queue is ready but index does not match %d!=%d", - eq_base[eq->ci].hdr.ctl, + "EQE %#x in queue is ready but index does not match %d!=%d", + cur_eqe, ((eq->prev_eqe_index + 1) & EQ_CTL_INDEX_MASK), cur_eqe_index); break; @@ -426,7 +449,10 @@ irqreturn_t hl_irq_handler_eq(int irq, void *arg) dma_rmb(); if (hdev->disabled && !hdev->reset_info.in_compute_reset) { - dev_warn(hdev->dev, "Device disabled but received an EQ event\n"); + ctl = le32_to_cpu(eq_entry->hdr.ctl); + event_type = ((ctl & EQ_CTL_EVENT_TYPE_MASK) >> EQ_CTL_EVENT_TYPE_SHIFT); + dev_warn(hdev->dev, + "Device disabled but received an EQ event (%u)\n", event_type); goto skip_irq; } @@ -463,7 +489,7 @@ irqreturn_t hl_irq_handler_dec_abnrm(int irq, void *arg) { struct hl_dec *dec = arg; - 
schedule_work(&dec->completion_abnrm_work); + schedule_work(&dec->abnrm_intr_work); return IRQ_HANDLED; } diff --git a/drivers/accel/habanalabs/common/memory.c b/drivers/accel/habanalabs/common/memory.c index 761a47e89b00..a7b6a273ce21 100644 --- a/drivers/accel/habanalabs/common/memory.c +++ b/drivers/accel/habanalabs/common/memory.c @@ -235,10 +235,8 @@ static int dma_map_host_va(struct hl_device *hdev, u64 addr, u64 size, } rc = hl_pin_host_memory(hdev, addr, size, userptr); - if (rc) { - dev_err(hdev->dev, "Failed to pin host memory\n"); + if (rc) goto pin_err; - } userptr->dma_mapped = true; userptr->dir = DMA_BIDIRECTIONAL; @@ -607,6 +605,7 @@ static u64 get_va_block(struct hl_device *hdev, bool is_align_pow_2 = is_power_of_2(va_range->page_size); bool is_hint_dram_addr = hl_is_dram_va(hdev, hint_addr); bool force_hint = flags & HL_MEM_FORCE_HINT; + int rc; if (is_align_pow_2) align_mask = ~((u64)va_block_align - 1); @@ -724,9 +723,13 @@ static u64 get_va_block(struct hl_device *hdev, kfree(new_va_block); } - if (add_prev) - add_va_block_locked(hdev, &va_range->list, prev_start, - prev_end); + if (add_prev) { + rc = add_va_block_locked(hdev, &va_range->list, prev_start, prev_end); + if (rc) { + reserved_valid_start = 0; + goto out; + } + } print_va_list_locked(hdev, &va_range->list); out: @@ -1097,10 +1100,8 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args, u64 *device huge_page_size = hdev->asic_prop.pmmu_huge.page_size; rc = dma_map_host_va(hdev, addr, size, &userptr); - if (rc) { - dev_err(hdev->dev, "failed to get userptr from va\n"); + if (rc) return rc; - } rc = init_phys_pg_pack_from_userptr(ctx, userptr, &phys_pg_pack, false); @@ -1270,6 +1271,18 @@ init_page_pack_err: return rc; } +/* Should be called while the context's mem_hash_lock is taken */ +static struct hl_vm_hash_node *get_vm_hash_node_locked(struct hl_ctx *ctx, u64 vaddr) +{ + struct hl_vm_hash_node *hnode; + + hash_for_each_possible(ctx->mem_hash, hnode, node, vaddr) + if (vaddr == hnode->vaddr) + return hnode; + + return NULL; +} + /** * unmap_device_va() - unmap the given device virtual address. * @ctx: pointer to the context structure. 
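/*
 * The irq.c change above converts user interrupts to a threaded IRQ:
 * the hard handler only stamps user_int->timestamp and returns
 * IRQ_WAKE_THREAD, while hl_irq_user_interrupt_thread_handler() does
 * the dispatching in thread context. A sketch of how such a pair is
 * typically wired up; the registration site is not part of this hunk,
 * so the flags and the name string below are assumptions:
 */
static int register_user_interrupt_sketch(int irq, struct hl_user_interrupt *user_int)
{
	return request_threaded_irq(irq, hl_irq_handler_user_interrupt,
				    hl_irq_user_interrupt_thread_handler,
				    0, "hl_user_interrupt", user_int);
}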
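/*
 * Usage sketch for get_vm_hash_node_locked() factored out above: the
 * caller takes ctx->mem_hash_lock, performs the lookup, and keeps the
 * lock for as long as it uses the node, as unmap_device_va() below
 * does:
 */
mutex_lock(&ctx->mem_hash_lock);
hnode = get_vm_hash_node_locked(ctx, vaddr);
if (!hnode) {
	mutex_unlock(&ctx->mem_hash_lock);
	return -EINVAL;
}
/* ... operate on hnode while still holding mem_hash_lock ... */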
@@ -1285,10 +1298,10 @@ static int unmap_device_va(struct hl_ctx *ctx, struct hl_mem_in *args, { struct hl_vm_phys_pg_pack *phys_pg_pack = NULL; u64 vaddr = args->unmap.device_virt_addr; - struct hl_vm_hash_node *hnode = NULL; struct asic_fixed_properties *prop; struct hl_device *hdev = ctx->hdev; struct hl_userptr *userptr = NULL; + struct hl_vm_hash_node *hnode; struct hl_va_range *va_range; enum vm_type *vm_type; bool is_userptr; @@ -1298,15 +1311,10 @@ static int unmap_device_va(struct hl_ctx *ctx, struct hl_mem_in *args, /* protect from double entrance */ mutex_lock(&ctx->mem_hash_lock); - hash_for_each_possible(ctx->mem_hash, hnode, node, (unsigned long)vaddr) - if (vaddr == hnode->vaddr) - break; - + hnode = get_vm_hash_node_locked(ctx, vaddr); if (!hnode) { mutex_unlock(&ctx->mem_hash_lock); - dev_err(hdev->dev, - "unmap failed, no mem hnode for vaddr 0x%llx\n", - vaddr); + dev_err(hdev->dev, "unmap failed, no mem hnode for vaddr 0x%llx\n", vaddr); return -EINVAL; } @@ -1779,6 +1787,44 @@ static void hl_unmap_dmabuf(struct dma_buf_attachment *attachment, kfree(sgt); } +static struct hl_vm_hash_node *memhash_node_export_get(struct hl_ctx *ctx, u64 addr) +{ + struct hl_device *hdev = ctx->hdev; + struct hl_vm_hash_node *hnode; + + /* get the memory handle */ + mutex_lock(&ctx->mem_hash_lock); + hnode = get_vm_hash_node_locked(ctx, addr); + if (!hnode) { + mutex_unlock(&ctx->mem_hash_lock); + dev_dbg(hdev->dev, "map address %#llx not found\n", addr); + return ERR_PTR(-EINVAL); + } + + if (upper_32_bits(hnode->handle)) { + mutex_unlock(&ctx->mem_hash_lock); + dev_dbg(hdev->dev, "invalid handle %#llx for map address %#llx\n", + hnode->handle, addr); + return ERR_PTR(-EINVAL); + } + + /* + * node found, increase export count so this memory cannot be unmapped + * and the hash node cannot be deleted. + */ + hnode->export_cnt++; + mutex_unlock(&ctx->mem_hash_lock); + + return hnode; +} + +static void memhash_node_export_put(struct hl_ctx *ctx, struct hl_vm_hash_node *hnode) +{ + mutex_lock(&ctx->mem_hash_lock); + hnode->export_cnt--; + mutex_unlock(&ctx->mem_hash_lock); +} + static void hl_release_dmabuf(struct dma_buf *dmabuf) { struct hl_dmabuf_priv *hl_dmabuf = dmabuf->priv; @@ -1789,13 +1835,15 @@ static void hl_release_dmabuf(struct dma_buf *dmabuf) ctx = hl_dmabuf->ctx; - if (hl_dmabuf->memhash_hnode) { - mutex_lock(&ctx->mem_hash_lock); - hl_dmabuf->memhash_hnode->export_cnt--; - mutex_unlock(&ctx->mem_hash_lock); - } + if (hl_dmabuf->memhash_hnode) + memhash_node_export_put(ctx, hl_dmabuf->memhash_hnode); + atomic_dec(&ctx->hdev->dmabuf_export_cnt); hl_ctx_put(ctx); + + /* Paired with get_file() in export_dmabuf() */ + fput(ctx->hpriv->filp); + kfree(hl_dmabuf); } @@ -1834,6 +1882,13 @@ static int export_dmabuf(struct hl_ctx *ctx, hl_dmabuf->ctx = ctx; hl_ctx_get(hl_dmabuf->ctx); + atomic_inc(&ctx->hdev->dmabuf_export_cnt); + + /* Get compute device file to enforce release order, such that all exported dma-buf will be + * released first and only then the compute device. + * Paired with fput() in hl_release_dmabuf(). 
+ */ + get_file(ctx->hpriv->filp); *dmabuf_fd = fd; @@ -1933,47 +1988,6 @@ static int validate_export_params(struct hl_device *hdev, u64 device_addr, u64 s return 0; } -static struct hl_vm_hash_node *memhash_node_export_get(struct hl_ctx *ctx, u64 addr) -{ - struct hl_device *hdev = ctx->hdev; - struct hl_vm_hash_node *hnode; - - /* get the memory handle */ - mutex_lock(&ctx->mem_hash_lock); - hash_for_each_possible(ctx->mem_hash, hnode, node, (unsigned long)addr) - if (addr == hnode->vaddr) - break; - - if (!hnode) { - mutex_unlock(&ctx->mem_hash_lock); - dev_dbg(hdev->dev, "map address %#llx not found\n", addr); - return ERR_PTR(-EINVAL); - } - - if (upper_32_bits(hnode->handle)) { - mutex_unlock(&ctx->mem_hash_lock); - dev_dbg(hdev->dev, "invalid handle %#llx for map address %#llx\n", - hnode->handle, addr); - return ERR_PTR(-EINVAL); - } - - /* - * node found, increase export count so this memory cannot be unmapped - * and the hash node cannot be deleted. - */ - hnode->export_cnt++; - mutex_unlock(&ctx->mem_hash_lock); - - return hnode; -} - -static void memhash_node_export_put(struct hl_ctx *ctx, struct hl_vm_hash_node *hnode) -{ - mutex_lock(&ctx->mem_hash_lock); - hnode->export_cnt--; - mutex_unlock(&ctx->mem_hash_lock); -} - static struct hl_vm_phys_pg_pack *get_phys_pg_pack_from_hash_node(struct hl_device *hdev, struct hl_vm_hash_node *hnode) { @@ -2221,11 +2235,11 @@ static struct hl_mmap_mem_buf_behavior hl_ts_behavior = { * allocate_timestamps_buffers() - allocate timestamps buffers * This function will allocate ts buffer that will later on be mapped to the user * in order to be able to read the timestamp. - * in additon it'll allocate an extra buffer for registration management. + * in addition it'll allocate an extra buffer for registration management. * since we cannot fail during registration for out-of-memory situation, so * we'll prepare a pool which will be used as user interrupt nodes and instead * of dynamically allocating nodes while registration we'll pick the node from - * this pool. in addtion it'll add node to the mapping hash which will be used + * this pool. in addition it'll add node to the mapping hash which will be used * to map user ts buffer to the internal kernel ts buffer. * @hpriv: pointer to the private data of the fd * @args: ioctl input diff --git a/drivers/accel/habanalabs/common/memory_mgr.c b/drivers/accel/habanalabs/common/memory_mgr.c index 0f2759e26547..c4d84df355b0 100644 --- a/drivers/accel/habanalabs/common/memory_mgr.c +++ b/drivers/accel/habanalabs/common/memory_mgr.c @@ -275,7 +275,7 @@ int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma, if (atomic_cmpxchg(&buf->mmap, 0, 1)) { dev_err(mmg->dev, - "%s, Memory mmap failed, already mmaped to user\n", + "%s, Memory mmap failed, already mapped to user\n", buf->behavior->topic); rc = -EINVAL; goto put_mem; @@ -341,8 +341,19 @@ void hl_mem_mgr_fini(struct hl_mem_mgr *mmg) "%s: Buff handle %u for CTX is still alive\n", topic, id); } +} - /* TODO: can it happen that some buffer is still in use at this point? */ +/** + * hl_mem_mgr_idr_destroy() - destroy memory manager IDR. + * @mmg: parent unified memory manager + * + * Destroy the memory manager IDR. + * Shall be called when IDR is empty and no memory buffers are in use. 
+ */ +void hl_mem_mgr_idr_destroy(struct hl_mem_mgr *mmg) +{ + if (!idr_is_empty(&mmg->handles)) + dev_crit(mmg->dev, "memory manager IDR is destroyed while it is not empty!\n"); idr_destroy(&mmg->handles); } diff --git a/drivers/accel/habanalabs/common/mmu/mmu.c b/drivers/accel/habanalabs/common/mmu/mmu.c index a42ae8bc61e8..f379e5b461a6 100644 --- a/drivers/accel/habanalabs/common/mmu/mmu.c +++ b/drivers/accel/habanalabs/common/mmu/mmu.c @@ -540,8 +540,8 @@ static void hl_mmu_pa_page_with_offset(struct hl_ctx *ctx, u64 virt_addr, u32 page_off; /* - * Bit arithmetics cannot be used for non power of two page - * sizes. In addition, since bit arithmetics is not used, + * Bit arithmetic cannot be used for non power of two page + * sizes. In addition, since bit arithmetic is not used, * we cannot ignore dram base. All that shall be considered. */ @@ -679,7 +679,9 @@ int hl_mmu_invalidate_cache(struct hl_device *hdev, bool is_hard, u32 flags) rc = hdev->asic_funcs->mmu_invalidate_cache(hdev, is_hard, flags); if (rc) - dev_err_ratelimited(hdev->dev, "MMU cache invalidation failed\n"); + dev_err_ratelimited(hdev->dev, + "%s cache invalidation failed, rc=%d\n", + flags == VM_TYPE_USERPTR ? "PMMU" : "HMMU", rc); return rc; } @@ -692,7 +694,9 @@ int hl_mmu_invalidate_cache_range(struct hl_device *hdev, bool is_hard, rc = hdev->asic_funcs->mmu_invalidate_cache_range(hdev, is_hard, flags, asid, va, size); if (rc) - dev_err_ratelimited(hdev->dev, "MMU cache range invalidation failed\n"); + dev_err_ratelimited(hdev->dev, + "%s cache range invalidation failed: va=%#llx, size=%llu, rc=%d", + flags == VM_TYPE_USERPTR ? "PMMU" : "HMMU", va, size, rc); return rc; } @@ -757,7 +761,7 @@ u64 hl_mmu_get_next_hop_addr(struct hl_ctx *ctx, u64 curr_pte) * @mmu_prop: MMU properties. * @hop_idx: HOP index. * @hop_addr: HOP address. - * @virt_addr: virtual address fro the translation. + * @virt_addr: virtual address for the translation. * * @return the matching PTE value on success, otherwise U64_MAX. 
*/ diff --git a/drivers/accel/habanalabs/common/pci/pci.c b/drivers/accel/habanalabs/common/pci/pci.c index d1f4c695baf2..191e0e3cf3a5 100644 --- a/drivers/accel/habanalabs/common/pci/pci.c +++ b/drivers/accel/habanalabs/common/pci/pci.c @@ -420,7 +420,6 @@ int hl_pci_init(struct hl_device *hdev) unmap_pci_bars: hl_pci_bars_unmap(hdev); disable_device: - pci_clear_master(pdev); pci_disable_device(pdev); return rc; @@ -436,6 +435,5 @@ void hl_pci_fini(struct hl_device *hdev) { hl_pci_bars_unmap(hdev); - pci_clear_master(hdev->pdev); pci_disable_device(hdev->pdev); } diff --git a/drivers/accel/habanalabs/common/security.c b/drivers/accel/habanalabs/common/security.c index 5f03ade07ead..297e6e44fd0c 100644 --- a/drivers/accel/habanalabs/common/security.c +++ b/drivers/accel/habanalabs/common/security.c @@ -502,7 +502,7 @@ free_glbl_sec: int hl_init_pb_ranges_single_dcore(struct hl_device *hdev, u32 dcore_offset, u32 num_instances, u32 instance_offset, const u32 pb_blocks[], u32 blocks_array_size, - const struct range *regs_range_array, u32 regs_range_array_size) + const struct range *user_regs_range_array, u32 user_regs_range_array_size) { int i; struct hl_block_glbl_sec *glbl_sec; @@ -514,8 +514,8 @@ int hl_init_pb_ranges_single_dcore(struct hl_device *hdev, u32 dcore_offset, return -ENOMEM; hl_secure_block(hdev, glbl_sec, blocks_array_size); - hl_unsecure_registers_range(hdev, regs_range_array, - regs_range_array_size, 0, pb_blocks, glbl_sec, + hl_unsecure_registers_range(hdev, user_regs_range_array, + user_regs_range_array_size, 0, pb_blocks, glbl_sec, blocks_array_size); /* Fill all blocks with the same configuration */ diff --git a/drivers/accel/habanalabs/common/security.h b/drivers/accel/habanalabs/common/security.h index 234b4a6ed8bc..d7a3b3e82ea4 100644 --- a/drivers/accel/habanalabs/common/security.h +++ b/drivers/accel/habanalabs/common/security.h @@ -10,7 +10,7 @@ #include <linux/io-64-nonatomic-lo-hi.h> -extern struct hl_device *hdev; +struct hl_device; /* special blocks */ #define HL_MAX_NUM_OF_GLBL_ERR_CAUSE 10 diff --git a/drivers/accel/habanalabs/common/sysfs.c b/drivers/accel/habanalabs/common/sysfs.c index 735d8bed0066..01f89f029355 100644 --- a/drivers/accel/habanalabs/common/sysfs.c +++ b/drivers/accel/habanalabs/common/sysfs.c @@ -497,10 +497,14 @@ int hl_sysfs_init(struct hl_device *hdev) if (rc) { dev_err(hdev->dev, "Failed to add groups to device, error %d\n", rc); - return rc; + goto remove_groups; } return 0; + +remove_groups: + device_remove_groups(hdev->dev, hl_dev_attr_groups); + return rc; } void hl_sysfs_fini(struct hl_device *hdev) diff --git a/drivers/accel/habanalabs/gaudi/gaudi.c b/drivers/accel/habanalabs/gaudi/gaudi.c index bb858b94e1e8..a29aa8f7b6f3 100644 --- a/drivers/accel/habanalabs/gaudi/gaudi.c +++ b/drivers/accel/habanalabs/gaudi/gaudi.c @@ -656,6 +656,7 @@ static int gaudi_set_fixed_properties(struct hl_device *hdev) prop->cfg_size = CFG_SIZE; prop->max_asid = MAX_ASID; prop->num_of_events = GAUDI_EVENT_SIZE; + prop->max_num_of_engines = GAUDI_ENGINE_ID_SIZE; prop->tpc_enabled_mask = TPC_ENABLED_MASK; set_default_power_values(hdev); @@ -679,6 +680,10 @@ static int gaudi_set_fixed_properties(struct hl_device *hdev) (num_sync_stream_queues * HL_RSVD_MONS); prop->first_available_user_interrupt = USHRT_MAX; + prop->tpc_interrupt_id = USHRT_MAX; + + /* single msi */ + prop->eq_interrupt_id = 0; for (i = 0 ; i < HL_MAX_DCORES ; i++) prop->first_available_cq[i] = USHRT_MAX; @@ -867,13 +872,18 @@ pci_init: rc = hl_fw_read_preboot_status(hdev); if (rc) { 
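/*
 * Condensed shape of the hl_sysfs_init() fix above: attribute groups
 * registered before the failure point must be removed on the error
 * path. device_add_groups()/device_remove_groups() are the driver-core
 * helpers used there; "asic_groups" stands in for the second batch of
 * groups whose addition can fail:
 */
rc = device_add_groups(hdev->dev, asic_groups);
if (rc) {
	/* unwind the groups added earlier in hl_sysfs_init() */
	device_remove_groups(hdev->dev, hl_dev_attr_groups);
	return rc;
}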
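/*
 * Usage sketch for the hl_mem_mgr_fini()/hl_mem_mgr_idr_destroy()
 * split introduced above: release any leftover buffers first, then
 * destroy the IDR once no handle can still be in flight, mirroring the
 * error path added to hl_device_open():
 */
hl_mem_mgr_fini(&hpriv->mem_mgr);
hl_mem_mgr_idr_destroy(&hpriv->mem_mgr);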
if (hdev->reset_on_preboot_fail) + /* we are already on failure flow, so don't check if hw_fini fails. */ hdev->asic_funcs->hw_fini(hdev, true, false); goto pci_fini; } if (gaudi_get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { dev_dbg(hdev->dev, "H/W state is dirty, must reset before initializing\n"); - hdev->asic_funcs->hw_fini(hdev, true, false); + rc = hdev->asic_funcs->hw_fini(hdev, true, false); + if (rc) { + dev_err(hdev->dev, "failed to reset HW in dirty state (%d)\n", rc); + goto pci_fini; + } } return 0; @@ -2010,38 +2020,6 @@ static int gaudi_enable_msi_single(struct hl_device *hdev) return rc; } -static int gaudi_enable_msi_multi(struct hl_device *hdev) -{ - int cq_cnt = hdev->asic_prop.completion_queues_count; - int rc, i, irq_cnt_init, irq; - - for (i = 0, irq_cnt_init = 0 ; i < cq_cnt ; i++, irq_cnt_init++) { - irq = gaudi_pci_irq_vector(hdev, i, false); - rc = request_irq(irq, hl_irq_handler_cq, 0, gaudi_irq_name[i], - &hdev->completion_queue[i]); - if (rc) { - dev_err(hdev->dev, "Failed to request IRQ %d", irq); - goto free_irqs; - } - } - - irq = gaudi_pci_irq_vector(hdev, GAUDI_EVENT_QUEUE_MSI_IDX, true); - rc = request_irq(irq, hl_irq_handler_eq, 0, gaudi_irq_name[cq_cnt], - &hdev->event_queue); - if (rc) { - dev_err(hdev->dev, "Failed to request IRQ %d", irq); - goto free_irqs; - } - - return 0; - -free_irqs: - for (i = 0 ; i < irq_cnt_init ; i++) - free_irq(gaudi_pci_irq_vector(hdev, i, false), - &hdev->completion_queue[i]); - return rc; -} - static int gaudi_enable_msi(struct hl_device *hdev) { struct gaudi_device *gaudi = hdev->asic_specific; @@ -2056,14 +2034,7 @@ static int gaudi_enable_msi(struct hl_device *hdev) return rc; } - if (rc < NUMBER_OF_INTERRUPTS) { - gaudi->multi_msi_mode = false; - rc = gaudi_enable_msi_single(hdev); - } else { - gaudi->multi_msi_mode = true; - rc = gaudi_enable_msi_multi(hdev); - } - + rc = gaudi_enable_msi_single(hdev); if (rc) goto free_pci_irq_vectors; @@ -2079,47 +2050,23 @@ free_pci_irq_vectors: static void gaudi_sync_irqs(struct hl_device *hdev) { struct gaudi_device *gaudi = hdev->asic_specific; - int i, cq_cnt = hdev->asic_prop.completion_queues_count; if (!(gaudi->hw_cap_initialized & HW_CAP_MSI)) return; /* Wait for all pending IRQs to be finished */ - if (gaudi->multi_msi_mode) { - for (i = 0 ; i < cq_cnt ; i++) - synchronize_irq(gaudi_pci_irq_vector(hdev, i, false)); - - synchronize_irq(gaudi_pci_irq_vector(hdev, - GAUDI_EVENT_QUEUE_MSI_IDX, - true)); - } else { - synchronize_irq(gaudi_pci_irq_vector(hdev, 0, false)); - } + synchronize_irq(gaudi_pci_irq_vector(hdev, 0, false)); } static void gaudi_disable_msi(struct hl_device *hdev) { struct gaudi_device *gaudi = hdev->asic_specific; - int i, irq, cq_cnt = hdev->asic_prop.completion_queues_count; if (!(gaudi->hw_cap_initialized & HW_CAP_MSI)) return; gaudi_sync_irqs(hdev); - - if (gaudi->multi_msi_mode) { - irq = gaudi_pci_irq_vector(hdev, GAUDI_EVENT_QUEUE_MSI_IDX, - true); - free_irq(irq, &hdev->event_queue); - - for (i = 0 ; i < cq_cnt ; i++) { - irq = gaudi_pci_irq_vector(hdev, i, false); - free_irq(irq, &hdev->completion_queue[i]); - } - } else { - free_irq(gaudi_pci_irq_vector(hdev, 0, false), hdev); - } - + free_irq(gaudi_pci_irq_vector(hdev, 0, false), hdev); pci_free_irq_vectors(hdev->pdev); gaudi->hw_cap_initialized &= ~HW_CAP_MSI; @@ -3718,7 +3665,7 @@ static int gaudi_mmu_init(struct hl_device *hdev) if (rc) { dev_err(hdev->dev, "failed to set hop0 addr for asid %d\n", i); - goto err; + return rc; } } @@ -3729,7 +3676,9 @@ static int gaudi_mmu_init(struct 
hl_device *hdev) /* mem cache invalidation */ WREG32(mmSTLB_MEM_CACHE_INVALIDATION, 1); - hl_mmu_invalidate_cache(hdev, true, 0); + rc = hl_mmu_invalidate_cache(hdev, true, 0); + if (rc) + return rc; WREG32(mmMMU_UP_MMU_ENABLE, 1); WREG32(mmMMU_UP_SPI_MASK, 0xF); @@ -3745,9 +3694,6 @@ static int gaudi_mmu_init(struct hl_device *hdev) gaudi->hw_cap_initialized |= HW_CAP_MMU; return 0; - -err: - return rc; } static int gaudi_load_firmware_to_device(struct hl_device *hdev) @@ -3915,11 +3861,7 @@ static int gaudi_init_cpu_queues(struct hl_device *hdev, u32 cpu_timeout) WREG32(mmCPU_IF_PF_PQ_PI, 0); - if (gaudi->multi_msi_mode) - WREG32(mmCPU_IF_QUEUE_INIT, PQ_INIT_STATUS_READY_FOR_CP); - else - WREG32(mmCPU_IF_QUEUE_INIT, - PQ_INIT_STATUS_READY_FOR_CP_SINGLE_MSI); + WREG32(mmCPU_IF_QUEUE_INIT, PQ_INIT_STATUS_READY_FOR_CP_SINGLE_MSI); irq_handler_offset = prop->gic_interrupts_enable ? mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : @@ -4068,7 +4010,7 @@ disable_queues: return rc; } -static void gaudi_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) +static int gaudi_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) { struct cpu_dyn_regs *dyn_regs = &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; @@ -4078,7 +4020,7 @@ static void gaudi_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset if (!hard_reset) { dev_err(hdev->dev, "GAUDI doesn't support soft-reset\n"); - return; + return 0; } if (hdev->pldm) { @@ -4199,10 +4141,10 @@ skip_reset: msleep(reset_timeout_ms); status = RREG32(mmPSOC_GLOBAL_CONF_BTM_FSM); - if (status & PSOC_GLOBAL_CONF_BTM_FSM_STATE_MASK) - dev_err(hdev->dev, - "Timeout while waiting for device to reset 0x%x\n", - status); + if (status & PSOC_GLOBAL_CONF_BTM_FSM_STATE_MASK) { + dev_err(hdev->dev, "Timeout while waiting for device to reset 0x%x\n", status); + return -ETIMEDOUT; + } if (gaudi) { gaudi->hw_cap_initialized &= ~(HW_CAP_CPU | HW_CAP_CPU_Q | HW_CAP_HBM | @@ -4215,6 +4157,7 @@ skip_reset: hdev->device_cpu_is_halted = false; } + return 0; } static int gaudi_suspend(struct hl_device *hdev) @@ -5595,7 +5538,6 @@ static void gaudi_add_end_of_cb_packets(struct hl_device *hdev, void *kernel_add u32 len, u32 original_len, u64 cq_addr, u32 cq_val, u32 msi_vec, bool eb) { - struct gaudi_device *gaudi = hdev->asic_specific; struct packet_msg_prot *cq_pkt; struct packet_nop *cq_padding; u64 msi_addr; @@ -5625,12 +5567,7 @@ static void gaudi_add_end_of_cb_packets(struct hl_device *hdev, void *kernel_add tmp |= FIELD_PREP(GAUDI_PKT_CTL_MB_MASK, 1); cq_pkt->ctl = cpu_to_le32(tmp); cq_pkt->value = cpu_to_le32(1); - - if (gaudi->multi_msi_mode) - msi_addr = mmPCIE_MSI_INTR_0 + msi_vec * 4; - else - msi_addr = mmPCIE_CORE_MSI_REQ; - + msi_addr = hdev->pdev ? 
mmPCIE_CORE_MSI_REQ : mmPCIE_MSI_INTR_0 + msi_vec * 4; cq_pkt->addr = cpu_to_le64(CFG_BASE + msi_addr); } @@ -7297,7 +7234,7 @@ static void gaudi_handle_qman_err(struct hl_device *hdev, u16 event_type, u64 *e } static void gaudi_print_irq_info(struct hl_device *hdev, u16 event_type, - bool razwi, u64 *event_mask) + bool check_razwi, u64 *event_mask) { bool is_read = false, is_write = false; u16 engine_id[2], num_of_razwi_eng = 0; @@ -7316,7 +7253,7 @@ static void gaudi_print_irq_info(struct hl_device *hdev, u16 event_type, dev_err_ratelimited(hdev->dev, "Received H/W interrupt %d [\"%s\"]\n", event_type, desc); - if (razwi) { + if (check_razwi) { gaudi_print_and_get_razwi_info(hdev, &engine_id[0], &engine_id[1], &is_read, &is_write); gaudi_print_and_get_mmu_error_info(hdev, &razwi_addr, event_mask); @@ -7333,8 +7270,9 @@ static void gaudi_print_irq_info(struct hl_device *hdev, u16 event_type, num_of_razwi_eng = 1; } - hl_handle_razwi(hdev, razwi_addr, engine_id, num_of_razwi_eng, razwi_flags, - event_mask); + if (razwi_flags) + hl_handle_razwi(hdev, razwi_addr, engine_id, num_of_razwi_eng, + razwi_flags, event_mask); } } @@ -7633,6 +7571,7 @@ static void gaudi_print_clk_change_info(struct hl_device *hdev, u16 event_type, static void gaudi_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry) { struct gaudi_device *gaudi = hdev->asic_specific; + struct hl_info_fw_err_info fw_err_info; u64 data = le64_to_cpu(eq_entry->data[0]), event_mask = 0; u32 ctl = le32_to_cpu(eq_entry->hdr.ctl); u32 fw_fatal_err_flag = 0, flags = 0; @@ -7911,7 +7850,10 @@ static void gaudi_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entr case GAUDI_EVENT_FW_ALIVE_S: gaudi_print_irq_info(hdev, event_type, false, &event_mask); gaudi_print_fw_alive_info(hdev, &eq_entry->fw_alive); - event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; + fw_err_info.err_type = HL_INFO_FW_REPORTED_ERR; + fw_err_info.event_id = event_type; + fw_err_info.event_mask = &event_mask; + hl_handle_fw_err(hdev, &fw_err_info); goto reset_device; default: @@ -7942,6 +7884,10 @@ reset_device: } if (reset_required) { + /* escalate general hw errors to critical/fatal error */ + if (event_mask & HL_NOTIFIER_EVENT_GENERAL_HW_ERR) + hl_handle_critical_hw_err(hdev, event_type, &event_mask); + hl_device_cond_reset(hdev, flags, event_mask); } else { hl_fw_unmask_irq(hdev, event_type); @@ -8403,19 +8349,26 @@ static int gaudi_internal_cb_pool_init(struct hl_device *hdev, } mutex_lock(&hdev->mmu_lock); + rc = hl_mmu_map_contiguous(ctx, hdev->internal_cb_va_base, hdev->internal_cb_pool_dma_addr, HOST_SPACE_INTERNAL_CB_SZ); - - hl_mmu_invalidate_cache(hdev, false, MMU_OP_USERPTR); - mutex_unlock(&hdev->mmu_lock); - if (rc) goto unreserve_internal_cb_pool; + rc = hl_mmu_invalidate_cache(hdev, false, MMU_OP_USERPTR); + if (rc) + goto unmap_internal_cb_pool; + + mutex_unlock(&hdev->mmu_lock); + return 0; +unmap_internal_cb_pool: + hl_mmu_unmap_contiguous(ctx, hdev->internal_cb_va_base, + HOST_SPACE_INTERNAL_CB_SZ); unreserve_internal_cb_pool: + mutex_unlock(&hdev->mmu_lock); hl_unreserve_va_block(hdev, ctx, hdev->internal_cb_va_base, HOST_SPACE_INTERNAL_CB_SZ); destroy_internal_cb_pool: diff --git a/drivers/accel/habanalabs/gaudi/gaudiP.h b/drivers/accel/habanalabs/gaudi/gaudiP.h index 3d88d56c8eb3..b8fa724be5a1 100644 --- a/drivers/accel/habanalabs/gaudi/gaudiP.h +++ b/drivers/accel/habanalabs/gaudi/gaudiP.h @@ -28,20 +28,8 @@ #define NUMBER_OF_COLLECTIVE_QUEUES 12 #define NUMBER_OF_SOBS_IN_GRP 11 -/* - * Number of MSI interrupts IDS: - * Each 
completion queue has 1 ID - * The event queue has 1 ID - */ -#define NUMBER_OF_INTERRUPTS (NUMBER_OF_CMPLT_QUEUES + \ - NUMBER_OF_CPU_HW_QUEUES) - #define GAUDI_STREAM_MASTER_ARR_SIZE 8 -#if (NUMBER_OF_INTERRUPTS > GAUDI_MSI_ENTRIES) -#error "Number of MSI interrupts must be smaller or equal to GAUDI_MSI_ENTRIES" -#endif - #define CORESIGHT_TIMEOUT_USEC 100000 /* 100 ms */ #define GAUDI_MAX_CLK_FREQ 2200000000ull /* 2200 MHz */ @@ -324,8 +312,6 @@ struct gaudi_internal_qman_info { * signal we can use this engine in later code paths. * Each bit is cleared upon reset of its corresponding H/W * engine. - * @multi_msi_mode: whether we are working in multi MSI single MSI mode. - * Multi MSI is possible only with IOMMU enabled. * @mmu_cache_inv_pi: PI for MMU cache invalidation flow. The H/W expects an * 8-bit value so use u8. */ @@ -345,7 +331,6 @@ struct gaudi_device { u32 events_stat[GAUDI_EVENT_SIZE]; u32 events_stat_aggregate[GAUDI_EVENT_SIZE]; u32 hw_cap_initialized; - u8 multi_msi_mode; u8 mmu_cache_inv_pi; }; diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2.c b/drivers/accel/habanalabs/gaudi2/gaudi2.c index 6f415fa94eee..b778cf764a68 100644 --- a/drivers/accel/habanalabs/gaudi2/gaudi2.c +++ b/drivers/accel/habanalabs/gaudi2/gaudi2.c @@ -23,7 +23,8 @@ #define GAUDI2_DMA_POOL_BLK_SIZE SZ_256 /* 256 bytes */ #define GAUDI2_RESET_TIMEOUT_MSEC 2000 /* 2000ms */ -#define GAUDI2_RESET_POLL_TIMEOUT_USEC 50000 /* 50ms */ + +#define GAUDI2_RESET_POLL_TIMEOUT_USEC 500000 /* 500ms */ #define GAUDI2_PLDM_HRESET_TIMEOUT_MSEC 25000 /* 25s */ #define GAUDI2_PLDM_SRESET_TIMEOUT_MSEC 25000 /* 25s */ #define GAUDI2_PLDM_RESET_POLL_TIMEOUT_USEC 3000000 /* 3s */ @@ -86,10 +87,11 @@ #define KDMA_TIMEOUT_USEC USEC_PER_SEC -#define IS_DMA_IDLE(dma_core_idle_ind_mask) \ - (!((dma_core_idle_ind_mask) & \ - ((DCORE0_EDMA0_CORE_IDLE_IND_MASK_DESC_CNT_STS_MASK) | \ - (DCORE0_EDMA0_CORE_IDLE_IND_MASK_COMP_MASK)))) +#define IS_DMA_IDLE(dma_core_sts0) \ + (!((dma_core_sts0) & (DCORE0_EDMA0_CORE_STS0_BUSY_MASK))) + +#define IS_DMA_HALTED(dma_core_sts1) \ + ((dma_core_sts1) & (DCORE0_EDMA0_CORE_STS1_IS_HALT_MASK)) #define IS_MME_IDLE(mme_arch_sts) (((mme_arch_sts) & MME_ARCH_IDLE_MASK) == MME_ARCH_IDLE_MASK) @@ -132,6 +134,282 @@ #define ENGINE_ID_DCORE_OFFSET (GAUDI2_DCORE1_ENGINE_ID_EDMA_0 - GAUDI2_DCORE0_ENGINE_ID_EDMA_0) +/* RAZWI initiator coordinates */ +#define RAZWI_GET_AXUSER_XY(x) \ + ((x & 0xF8001FF0) >> 4) + +#define RAZWI_GET_AXUSER_LOW_XY(x) \ + ((x & 0x00001FF0) >> 4) + +#define RAZWI_INITIATOR_AXUER_L_X_SHIFT 0 +#define RAZWI_INITIATOR_AXUER_L_X_MASK 0x1F +#define RAZWI_INITIATOR_AXUER_L_Y_SHIFT 5 +#define RAZWI_INITIATOR_AXUER_L_Y_MASK 0xF + +#define RAZWI_INITIATOR_AXUER_H_X_SHIFT 23 +#define RAZWI_INITIATOR_AXUER_H_X_MASK 0x1F + +#define RAZWI_INITIATOR_ID_X_Y_LOW(x, y) \ + ((((y) & RAZWI_INITIATOR_AXUER_L_Y_MASK) << RAZWI_INITIATOR_AXUER_L_Y_SHIFT) | \ + (((x) & RAZWI_INITIATOR_AXUER_L_X_MASK) << RAZWI_INITIATOR_AXUER_L_X_SHIFT)) + +#define RAZWI_INITIATOR_ID_X_HIGH(x) \ + (((x) & RAZWI_INITIATOR_AXUER_H_X_MASK) << RAZWI_INITIATOR_AXUER_H_X_SHIFT) + +#define RAZWI_INITIATOR_ID_X_Y(xl, yl, xh) \ + (RAZWI_INITIATOR_ID_X_Y_LOW(xl, yl) | RAZWI_INITIATOR_ID_X_HIGH(xh)) + +#define PSOC_RAZWI_ENG_STR_SIZE 128 +#define PSOC_RAZWI_MAX_ENG_PER_RTR 5 + +struct gaudi2_razwi_info { + u32 axuser_xy; + u32 rtr_ctrl; + u16 eng_id; + char *eng_name; +}; + +static struct gaudi2_razwi_info common_razwi_info[] = { + {RAZWI_INITIATOR_ID_X_Y(2, 4, 0), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_DEC_0, "DEC0"}, + 
{RAZWI_INITIATOR_ID_X_Y(2, 4, 4), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_DEC_1, "DEC1"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 18), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_DEC_0, "DEC2"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 14), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_DEC_1, "DEC3"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 0), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_DEC_0, "DEC4"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 4), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_DEC_1, "DEC5"}, + {RAZWI_INITIATOR_ID_X_Y(17, 11, 18), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_DEC_0, "DEC6"}, + {RAZWI_INITIATOR_ID_X_Y(17, 11, 14), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_DEC_1, "DEC7"}, + {RAZWI_INITIATOR_ID_X_Y(2, 4, 6), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_PCIE_ENGINE_ID_DEC_0, "DEC8"}, + {RAZWI_INITIATOR_ID_X_Y(2, 4, 7), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_PCIE_ENGINE_ID_DEC_0, "DEC9"}, + {RAZWI_INITIATOR_ID_X_Y(3, 4, 2), mmDCORE0_RTR1_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_TPC_0, "TPC0"}, + {RAZWI_INITIATOR_ID_X_Y(3, 4, 4), mmDCORE0_RTR1_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_TPC_1, "TPC1"}, + {RAZWI_INITIATOR_ID_X_Y(4, 4, 2), mmDCORE0_RTR2_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_TPC_2, "TPC2"}, + {RAZWI_INITIATOR_ID_X_Y(4, 4, 4), mmDCORE0_RTR2_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_TPC_3, "TPC3"}, + {RAZWI_INITIATOR_ID_X_Y(5, 4, 2), mmDCORE0_RTR3_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_TPC_4, "TPC4"}, + {RAZWI_INITIATOR_ID_X_Y(5, 4, 4), mmDCORE0_RTR3_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_TPC_5, "TPC5"}, + {RAZWI_INITIATOR_ID_X_Y(16, 4, 14), mmDCORE1_RTR6_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_TPC_0, "TPC6"}, + {RAZWI_INITIATOR_ID_X_Y(16, 4, 16), mmDCORE1_RTR6_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_TPC_1, "TPC7"}, + {RAZWI_INITIATOR_ID_X_Y(15, 4, 14), mmDCORE1_RTR5_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_TPC_2, "TPC8"}, + {RAZWI_INITIATOR_ID_X_Y(15, 4, 16), mmDCORE1_RTR5_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_TPC_3, "TPC9"}, + {RAZWI_INITIATOR_ID_X_Y(14, 4, 14), mmDCORE1_RTR4_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_TPC_4, "TPC10"}, + {RAZWI_INITIATOR_ID_X_Y(14, 4, 16), mmDCORE1_RTR4_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_TPC_5, "TPC11"}, + {RAZWI_INITIATOR_ID_X_Y(5, 11, 2), mmDCORE2_RTR3_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_TPC_0, "TPC12"}, + {RAZWI_INITIATOR_ID_X_Y(5, 11, 4), mmDCORE2_RTR3_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_TPC_1, "TPC13"}, + {RAZWI_INITIATOR_ID_X_Y(4, 11, 2), mmDCORE2_RTR2_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_TPC_2, "TPC14"}, + {RAZWI_INITIATOR_ID_X_Y(4, 11, 4), mmDCORE2_RTR2_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_TPC_3, "TPC15"}, + {RAZWI_INITIATOR_ID_X_Y(3, 11, 2), mmDCORE2_RTR1_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_TPC_4, "TPC16"}, + {RAZWI_INITIATOR_ID_X_Y(3, 11, 4), mmDCORE2_RTR1_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_TPC_5, "TPC17"}, + {RAZWI_INITIATOR_ID_X_Y(14, 11, 14), mmDCORE3_RTR4_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_TPC_0, "TPC18"}, + {RAZWI_INITIATOR_ID_X_Y(14, 11, 16), mmDCORE3_RTR4_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_TPC_1, "TPC19"}, + {RAZWI_INITIATOR_ID_X_Y(15, 11, 14), mmDCORE3_RTR5_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_TPC_2, "TPC20"}, + {RAZWI_INITIATOR_ID_X_Y(15, 11, 16), mmDCORE3_RTR5_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_TPC_3, "TPC21"}, + {RAZWI_INITIATOR_ID_X_Y(16, 11, 14), mmDCORE3_RTR6_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_TPC_4, "TPC22"}, + {RAZWI_INITIATOR_ID_X_Y(16, 11, 16), mmDCORE3_RTR6_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_TPC_5, "TPC23"}, + {RAZWI_INITIATOR_ID_X_Y(2, 4, 2), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_TPC_5, 
"TPC24"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 8), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC0_0, "NIC0"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 10), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC0_1, "NIC1"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 12), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC1_0, "NIC2"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 14), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC1_1, "NIC3"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 15), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC2_0, "NIC4"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 2), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC2_1, "NIC5"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 4), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC3_0, "NIC6"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 6), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC3_1, "NIC7"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 8), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC4_0, "NIC8"}, + {RAZWI_INITIATOR_ID_X_Y(17, 11, 12), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC4_1, "NIC9"}, + {RAZWI_INITIATOR_ID_X_Y(17, 11, 14), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC5_0, "NIC10"}, + {RAZWI_INITIATOR_ID_X_Y(17, 11, 16), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_NIC5_1, "NIC11"}, + {RAZWI_INITIATOR_ID_X_Y(2, 4, 2), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_PDMA_0, "PDMA0"}, + {RAZWI_INITIATOR_ID_X_Y(2, 4, 3), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_PDMA_1, "PDMA1"}, + {RAZWI_INITIATOR_ID_X_Y(2, 4, 4), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "PMMU"}, + {RAZWI_INITIATOR_ID_X_Y(2, 4, 5), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "PCIE"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 16), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_ARC_FARM, "ARC_FARM"}, + {RAZWI_INITIATOR_ID_X_Y(17, 4, 17), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_KDMA, "KDMA"}, + {RAZWI_INITIATOR_ID_X_Y(1, 5, 1), mmSFT0_HBW_RTR_IF1_RTR_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_EDMA_0, "EDMA0"}, + {RAZWI_INITIATOR_ID_X_Y(1, 5, 1), mmSFT0_HBW_RTR_IF0_RTR_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_EDMA_1, "EDMA1"}, + {RAZWI_INITIATOR_ID_X_Y(18, 5, 18), mmSFT1_HBW_RTR_IF1_RTR_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_EDMA_0, "EDMA2"}, + {RAZWI_INITIATOR_ID_X_Y(18, 5, 18), mmSFT1_HBW_RTR_IF0_RTR_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_EDMA_1, "EDMA3"}, + {RAZWI_INITIATOR_ID_X_Y(1, 10, 1), mmSFT2_HBW_RTR_IF0_RTR_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_EDMA_0, "EDMA4"}, + {RAZWI_INITIATOR_ID_X_Y(1, 10, 1), mmSFT2_HBW_RTR_IF1_RTR_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_EDMA_1, "EDMA5"}, + {RAZWI_INITIATOR_ID_X_Y(18, 10, 18), mmSFT2_HBW_RTR_IF0_RTR_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_EDMA_0, "EDMA6"}, + {RAZWI_INITIATOR_ID_X_Y(18, 10, 18), mmSFT2_HBW_RTR_IF1_RTR_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_EDMA_1, "EDMA7"}, + {RAZWI_INITIATOR_ID_X_Y(1, 5, 0), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU0"}, + {RAZWI_INITIATOR_ID_X_Y(18, 5, 19), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU1"}, + {RAZWI_INITIATOR_ID_X_Y(1, 5, 0), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU2"}, + {RAZWI_INITIATOR_ID_X_Y(18, 5, 19), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU3"}, + {RAZWI_INITIATOR_ID_X_Y(1, 5, 0), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU4"}, + {RAZWI_INITIATOR_ID_X_Y(18, 5, 19), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU5"}, + {RAZWI_INITIATOR_ID_X_Y(1, 5, 0), mmDCORE0_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU6"}, + {RAZWI_INITIATOR_ID_X_Y(18, 5, 19), mmDCORE1_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU7"}, + {RAZWI_INITIATOR_ID_X_Y(1, 10, 0), mmDCORE2_RTR0_CTRL_BASE, + 
GAUDI2_ENGINE_ID_SIZE, "HMMU8"}, + {RAZWI_INITIATOR_ID_X_Y(18, 10, 19), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU9"}, + {RAZWI_INITIATOR_ID_X_Y(1, 10, 0), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU10"}, + {RAZWI_INITIATOR_ID_X_Y(18, 10, 19), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU11"}, + {RAZWI_INITIATOR_ID_X_Y(1, 10, 0), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU12"}, + {RAZWI_INITIATOR_ID_X_Y(18, 10, 19), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU13"}, + {RAZWI_INITIATOR_ID_X_Y(1, 10, 0), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU14"}, + {RAZWI_INITIATOR_ID_X_Y(18, 10, 19), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_SIZE, "HMMU15"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 2), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_ROT_0, "ROT0"}, + {RAZWI_INITIATOR_ID_X_Y(17, 11, 16), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_ROT_1, "ROT1"}, + {RAZWI_INITIATOR_ID_X_Y(2, 11, 2), mmDCORE2_RTR0_CTRL_BASE, + GAUDI2_ENGINE_ID_PSOC, "CPU"}, + {RAZWI_INITIATOR_ID_X_Y(17, 11, 11), mmDCORE3_RTR7_CTRL_BASE, + GAUDI2_ENGINE_ID_PSOC, "PSOC"} +}; + +static struct gaudi2_razwi_info mme_razwi_info[] = { + /* MME X high coordinate is N/A, hence using only low coordinates */ + {RAZWI_INITIATOR_ID_X_Y_LOW(7, 4), mmDCORE0_RTR5_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_WAP0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(9, 4), mmDCORE0_RTR7_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_WAP1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(8, 4), mmDCORE0_RTR6_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_CTRL_WR"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(9, 4), mmDCORE0_RTR7_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_CTRL_RD"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(6, 4), mmDCORE0_RTR4_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_SBTE0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(6, 4), mmDCORE0_RTR4_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_SBTE1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(7, 4), mmDCORE0_RTR5_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_SBTE2"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(8, 4), mmDCORE0_RTR6_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_SBTE3"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(9, 4), mmDCORE0_RTR7_CTRL_BASE, + GAUDI2_DCORE0_ENGINE_ID_MME, "MME0_SBTE4"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(12, 4), mmDCORE1_RTR2_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_WAP0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(10, 4), mmDCORE1_RTR0_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_WAP1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(11, 4), mmDCORE1_RTR1_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_CTRL_WR"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(10, 4), mmDCORE1_RTR0_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_CTRL_RD"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(13, 4), mmDCORE1_RTR3_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_SBTE0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(13, 4), mmDCORE1_RTR3_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_SBTE1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(12, 4), mmDCORE1_RTR2_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_SBTE2"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(11, 4), mmDCORE1_RTR1_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_SBTE3"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(10, 4), mmDCORE1_RTR0_CTRL_BASE, + GAUDI2_DCORE1_ENGINE_ID_MME, "MME1_SBTE4"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(7, 11), mmDCORE2_RTR5_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_WAP0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(9, 11), mmDCORE2_RTR7_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_WAP1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(8, 11), mmDCORE2_RTR6_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_CTRL_WR"}, + 
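Taken together, the axuser_xy/eng_name pairs are enough to sketch how a RAZWI initiator lookup over these tables can work. The helper below is a hypothetical illustration (the driver's actual matching code is not part of this hunk), assuming only the common_razwi_info and mme_razwi_info arrays plus the RAZWI_GET_AXUSER_*() macros defined earlier; as the comment at the top of mme_razwi_info notes, MME entries carry only the low X/Y half, so they are matched with the narrower mask:

	/* Hypothetical helper: map a raw AXUSER register value to the name of
	 * the engine that initiated the access, using the tables above.
	 */
	static const char *razwi_initiator_name(u32 axuser_reg)
	{
		u32 xy = RAZWI_GET_AXUSER_XY(axuser_reg);
		u32 low_xy = RAZWI_GET_AXUSER_LOW_XY(axuser_reg);
		int i;

		for (i = 0 ; i < ARRAY_SIZE(common_razwi_info) ; i++)
			if (common_razwi_info[i].axuser_xy == xy)
				return common_razwi_info[i].eng_name;

		/* MME X high coordinate is N/A, match on the low part only */
		for (i = 0 ; i < ARRAY_SIZE(mme_razwi_info) ; i++)
			if (mme_razwi_info[i].axuser_xy == low_xy)
				return mme_razwi_info[i].eng_name;

		return "unknown";
	}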
{RAZWI_INITIATOR_ID_X_Y_LOW(9, 11), mmDCORE2_RTR7_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_CTRL_RD"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(6, 11), mmDCORE2_RTR4_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_SBTE0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(6, 11), mmDCORE2_RTR4_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_SBTE1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(7, 11), mmDCORE2_RTR5_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_SBTE2"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(8, 11), mmDCORE2_RTR6_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_SBTE3"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(9, 11), mmDCORE2_RTR7_CTRL_BASE, + GAUDI2_DCORE2_ENGINE_ID_MME, "MME2_SBTE4"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(12, 11), mmDCORE3_RTR2_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_WAP0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(10, 11), mmDCORE3_RTR0_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_WAP1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(11, 11), mmDCORE3_RTR1_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_CTRL_WR"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(10, 11), mmDCORE3_RTR0_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_CTRL_RD"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(13, 11), mmDCORE3_RTR3_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_SBTE0"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(13, 11), mmDCORE3_RTR3_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_SBTE1"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(12, 11), mmDCORE3_RTR2_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_SBTE2"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(11, 11), mmDCORE3_RTR1_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_SBTE3"}, + {RAZWI_INITIATOR_ID_X_Y_LOW(10, 11), mmDCORE3_RTR0_CTRL_BASE, + GAUDI2_DCORE3_ENGINE_ID_MME, "MME3_SBTE4"} +}; + enum hl_pmmu_fatal_cause { LATENCY_RD_OUT_FIFO_OVERRUN, LATENCY_WR_OUT_FIFO_OVERRUN, @@ -1437,6 +1715,34 @@ static const u32 gaudi2_tpc_cfg_blocks_bases[TPC_ID_SIZE] = { [TPC_ID_DCORE0_TPC6] = mmDCORE0_TPC6_CFG_BASE, }; +static const u32 gaudi2_tpc_eml_cfg_blocks_bases[TPC_ID_SIZE] = { + [TPC_ID_DCORE0_TPC0] = mmDCORE0_TPC0_EML_CFG_BASE, + [TPC_ID_DCORE0_TPC1] = mmDCORE0_TPC1_EML_CFG_BASE, + [TPC_ID_DCORE0_TPC2] = mmDCORE0_TPC2_EML_CFG_BASE, + [TPC_ID_DCORE0_TPC3] = mmDCORE0_TPC3_EML_CFG_BASE, + [TPC_ID_DCORE0_TPC4] = mmDCORE0_TPC4_EML_CFG_BASE, + [TPC_ID_DCORE0_TPC5] = mmDCORE0_TPC5_EML_CFG_BASE, + [TPC_ID_DCORE1_TPC0] = mmDCORE1_TPC0_EML_CFG_BASE, + [TPC_ID_DCORE1_TPC1] = mmDCORE1_TPC1_EML_CFG_BASE, + [TPC_ID_DCORE1_TPC2] = mmDCORE1_TPC2_EML_CFG_BASE, + [TPC_ID_DCORE1_TPC3] = mmDCORE1_TPC3_EML_CFG_BASE, + [TPC_ID_DCORE1_TPC4] = mmDCORE1_TPC4_EML_CFG_BASE, + [TPC_ID_DCORE1_TPC5] = mmDCORE1_TPC5_EML_CFG_BASE, + [TPC_ID_DCORE2_TPC0] = mmDCORE2_TPC0_EML_CFG_BASE, + [TPC_ID_DCORE2_TPC1] = mmDCORE2_TPC1_EML_CFG_BASE, + [TPC_ID_DCORE2_TPC2] = mmDCORE2_TPC2_EML_CFG_BASE, + [TPC_ID_DCORE2_TPC3] = mmDCORE2_TPC3_EML_CFG_BASE, + [TPC_ID_DCORE2_TPC4] = mmDCORE2_TPC4_EML_CFG_BASE, + [TPC_ID_DCORE2_TPC5] = mmDCORE2_TPC5_EML_CFG_BASE, + [TPC_ID_DCORE3_TPC0] = mmDCORE3_TPC0_EML_CFG_BASE, + [TPC_ID_DCORE3_TPC1] = mmDCORE3_TPC1_EML_CFG_BASE, + [TPC_ID_DCORE3_TPC2] = mmDCORE3_TPC2_EML_CFG_BASE, + [TPC_ID_DCORE3_TPC3] = mmDCORE3_TPC3_EML_CFG_BASE, + [TPC_ID_DCORE3_TPC4] = mmDCORE3_TPC4_EML_CFG_BASE, + [TPC_ID_DCORE3_TPC5] = mmDCORE3_TPC5_EML_CFG_BASE, + [TPC_ID_DCORE0_TPC6] = mmDCORE0_TPC6_EML_CFG_BASE, +}; + const u32 gaudi2_rot_blocks_bases[ROTATOR_ID_SIZE] = { [ROTATOR_ID_0] = mmROT0_BASE, [ROTATOR_ID_1] = mmROT1_BASE @@ -1475,6 +1781,56 @@ static const u32 gaudi2_rot_id_to_queue_id[ROTATOR_ID_SIZE] = { [ROTATOR_ID_1] = GAUDI2_QUEUE_ID_ROT_1_0, }; +static const u32 
gaudi2_tpc_engine_id_to_tpc_id[] = {
+	[GAUDI2_DCORE0_ENGINE_ID_TPC_0] = TPC_ID_DCORE0_TPC0,
+	[GAUDI2_DCORE0_ENGINE_ID_TPC_1] = TPC_ID_DCORE0_TPC1,
+	[GAUDI2_DCORE0_ENGINE_ID_TPC_2] = TPC_ID_DCORE0_TPC2,
+	[GAUDI2_DCORE0_ENGINE_ID_TPC_3] = TPC_ID_DCORE0_TPC3,
+	[GAUDI2_DCORE0_ENGINE_ID_TPC_4] = TPC_ID_DCORE0_TPC4,
+	[GAUDI2_DCORE0_ENGINE_ID_TPC_5] = TPC_ID_DCORE0_TPC5,
+	[GAUDI2_DCORE1_ENGINE_ID_TPC_0] = TPC_ID_DCORE1_TPC0,
+	[GAUDI2_DCORE1_ENGINE_ID_TPC_1] = TPC_ID_DCORE1_TPC1,
+	[GAUDI2_DCORE1_ENGINE_ID_TPC_2] = TPC_ID_DCORE1_TPC2,
+	[GAUDI2_DCORE1_ENGINE_ID_TPC_3] = TPC_ID_DCORE1_TPC3,
+	[GAUDI2_DCORE1_ENGINE_ID_TPC_4] = TPC_ID_DCORE1_TPC4,
+	[GAUDI2_DCORE1_ENGINE_ID_TPC_5] = TPC_ID_DCORE1_TPC5,
+	[GAUDI2_DCORE2_ENGINE_ID_TPC_0] = TPC_ID_DCORE2_TPC0,
+	[GAUDI2_DCORE2_ENGINE_ID_TPC_1] = TPC_ID_DCORE2_TPC1,
+	[GAUDI2_DCORE2_ENGINE_ID_TPC_2] = TPC_ID_DCORE2_TPC2,
+	[GAUDI2_DCORE2_ENGINE_ID_TPC_3] = TPC_ID_DCORE2_TPC3,
+	[GAUDI2_DCORE2_ENGINE_ID_TPC_4] = TPC_ID_DCORE2_TPC4,
+	[GAUDI2_DCORE2_ENGINE_ID_TPC_5] = TPC_ID_DCORE2_TPC5,
+	[GAUDI2_DCORE3_ENGINE_ID_TPC_0] = TPC_ID_DCORE3_TPC0,
+	[GAUDI2_DCORE3_ENGINE_ID_TPC_1] = TPC_ID_DCORE3_TPC1,
+	[GAUDI2_DCORE3_ENGINE_ID_TPC_2] = TPC_ID_DCORE3_TPC2,
+	[GAUDI2_DCORE3_ENGINE_ID_TPC_3] = TPC_ID_DCORE3_TPC3,
+	[GAUDI2_DCORE3_ENGINE_ID_TPC_4] = TPC_ID_DCORE3_TPC4,
+	[GAUDI2_DCORE3_ENGINE_ID_TPC_5] = TPC_ID_DCORE3_TPC5,
+	/* the PCI TPC is placed last (mapped like HW) */
+	[GAUDI2_DCORE0_ENGINE_ID_TPC_6] = TPC_ID_DCORE0_TPC6,
+};
+
+static const u32 gaudi2_mme_engine_id_to_mme_id[] = {
+	[GAUDI2_DCORE0_ENGINE_ID_MME] = MME_ID_DCORE0,
+	[GAUDI2_DCORE1_ENGINE_ID_MME] = MME_ID_DCORE1,
+	[GAUDI2_DCORE2_ENGINE_ID_MME] = MME_ID_DCORE2,
+	[GAUDI2_DCORE3_ENGINE_ID_MME] = MME_ID_DCORE3,
+};
+
+static const u32 gaudi2_edma_engine_id_to_edma_id[] = {
+	[GAUDI2_ENGINE_ID_PDMA_0] = DMA_CORE_ID_PDMA0,
+	[GAUDI2_ENGINE_ID_PDMA_1] = DMA_CORE_ID_PDMA1,
+	[GAUDI2_DCORE0_ENGINE_ID_EDMA_0] = DMA_CORE_ID_EDMA0,
+	[GAUDI2_DCORE0_ENGINE_ID_EDMA_1] = DMA_CORE_ID_EDMA1,
+	[GAUDI2_DCORE1_ENGINE_ID_EDMA_0] = DMA_CORE_ID_EDMA2,
+	[GAUDI2_DCORE1_ENGINE_ID_EDMA_1] = DMA_CORE_ID_EDMA3,
+	[GAUDI2_DCORE2_ENGINE_ID_EDMA_0] = DMA_CORE_ID_EDMA4,
+	[GAUDI2_DCORE2_ENGINE_ID_EDMA_1] = DMA_CORE_ID_EDMA5,
+	[GAUDI2_DCORE3_ENGINE_ID_EDMA_0] = DMA_CORE_ID_EDMA6,
+	[GAUDI2_DCORE3_ENGINE_ID_EDMA_1] = DMA_CORE_ID_EDMA7,
+	[GAUDI2_ENGINE_ID_KDMA] = DMA_CORE_ID_KDMA,
+};
+
 const u32 edma_stream_base[NUM_OF_EDMA_PER_DCORE * NUM_OF_DCORES] = {
 	GAUDI2_QUEUE_ID_DCORE0_EDMA_0_0,
 	GAUDI2_QUEUE_ID_DCORE0_EDMA_1_0,
@@ -1499,41 +1855,6 @@ static const char gaudi2_vdec_irq_name[GAUDI2_VDEC_MSIX_ENTRIES][GAUDI2_MAX_STRI
 	"gaudi2 vdec s_1", "gaudi2 vdec s_1 abnormal"
 };
 
-static const u32 rtr_coordinates_to_rtr_id[NUM_OF_RTR_PER_DCORE * NUM_OF_DCORES] = {
-	RTR_ID_X_Y(2, 4),
-	RTR_ID_X_Y(3, 4),
-	RTR_ID_X_Y(4, 4),
-	RTR_ID_X_Y(5, 4),
-	RTR_ID_X_Y(6, 4),
-	RTR_ID_X_Y(7, 4),
-	RTR_ID_X_Y(8, 4),
-	RTR_ID_X_Y(9, 4),
-	RTR_ID_X_Y(10, 4),
-	RTR_ID_X_Y(11, 4),
-	RTR_ID_X_Y(12, 4),
-	RTR_ID_X_Y(13, 4),
-	RTR_ID_X_Y(14, 4),
-	RTR_ID_X_Y(15, 4),
-	RTR_ID_X_Y(16, 4),
-	RTR_ID_X_Y(17, 4),
-	RTR_ID_X_Y(2, 11),
-	RTR_ID_X_Y(3, 11),
-	RTR_ID_X_Y(4, 11),
-	RTR_ID_X_Y(5, 11),
-	RTR_ID_X_Y(6, 11),
-	RTR_ID_X_Y(7, 11),
-	RTR_ID_X_Y(8, 11),
-	RTR_ID_X_Y(9, 11),
-	RTR_ID_X_Y(0, 0),/* 24 no id */
-	RTR_ID_X_Y(0, 0),/* 25 no id */
-	RTR_ID_X_Y(0, 0),/* 26 no id */
-	RTR_ID_X_Y(0, 0),/* 27 no id */
-	RTR_ID_X_Y(14, 11),
-	RTR_ID_X_Y(15, 11),
-	RTR_ID_X_Y(16, 11),
-	RTR_ID_X_Y(17, 11)
-};
-
 enum rtr_id {
 	DCORE0_RTR0,
 	DCORE0_RTR1,
@@ 
-1784,7 +2105,14 @@ static void gaudi2_set_arc_id_cap(struct hl_device *hdev, u64 arc_id); static void gaudi2_memset_device_lbw(struct hl_device *hdev, u32 addr, u32 size, u32 val); static int gaudi2_send_job_to_kdma(struct hl_device *hdev, u64 src_addr, u64 dst_addr, u32 size, bool is_memset); +static bool gaudi2_get_tpc_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e); +static bool gaudi2_get_mme_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e); +static bool gaudi2_get_edma_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e); static u64 gaudi2_mmu_scramble_addr(struct hl_device *hdev, u64 raw_addr); +static u64 gaudi2_mmu_descramble_addr(struct hl_device *hdev, u64 scrambled_addr); static void gaudi2_init_scrambler_hbm(struct hl_device *hdev) { @@ -1988,6 +2316,8 @@ static int gaudi2_set_fixed_properties(struct hl_device *hdev) prop->hints_range_reservation = true; + prop->rotator_enabled_mask = BIT(NUM_OF_ROT) - 1; + if (hdev->pldm) prop->mmu_pgt_size = 0x800000; /* 8MB */ else @@ -2011,7 +2341,6 @@ static int gaudi2_set_fixed_properties(struct hl_device *hdev) prop->dmmu.num_hops = MMU_ARCH_6_HOPS; prop->dmmu.last_mask = LAST_MASK; prop->dmmu.host_resident = 1; - /* TODO: will be duplicated until implementing per-MMU props */ prop->dmmu.hop_table_size = prop->mmu_hop_table_size; prop->dmmu.hop0_tables_total_size = prop->mmu_hop0_tables_total_size; @@ -2027,7 +2356,6 @@ static int gaudi2_set_fixed_properties(struct hl_device *hdev) prop->pmmu.host_resident = 1; prop->pmmu.num_hops = MMU_ARCH_6_HOPS; prop->pmmu.last_mask = LAST_MASK; - /* TODO: will be duplicated until implementing per-MMU props */ prop->pmmu.hop_table_size = prop->mmu_hop_table_size; prop->pmmu.hop0_tables_total_size = prop->mmu_hop0_tables_total_size; @@ -2084,11 +2412,14 @@ static int gaudi2_set_fixed_properties(struct hl_device *hdev) prop->pmmu_huge.end_addr = VA_HOST_SPACE_HPAGE_END; } + prop->max_num_of_engines = GAUDI2_ENGINE_ID_SIZE; prop->num_engine_cores = CPU_ID_MAX; prop->cfg_size = CFG_SIZE; prop->max_asid = MAX_ASID; prop->num_of_events = GAUDI2_EVENT_SIZE; + prop->supports_engine_modes = true; + prop->dc_power_default = DC_POWER_DEFAULT; prop->cb_pool_cb_cnt = GAUDI2_CB_POOL_CB_CNT; @@ -2107,6 +2438,8 @@ static int gaudi2_set_fixed_properties(struct hl_device *hdev) (num_sync_stream_queues * HL_RSVD_MONS); prop->first_available_user_interrupt = GAUDI2_IRQ_NUM_USER_FIRST; + prop->tpc_interrupt_id = GAUDI2_IRQ_NUM_TPC_ASSERT; + prop->eq_interrupt_id = GAUDI2_IRQ_NUM_EVENT_QUEUE; prop->first_available_cq[0] = GAUDI2_RESERVED_CQ_NUMBER; @@ -2555,6 +2888,10 @@ static int gaudi2_cpucp_info_get(struct hl_device *hdev) hdev->tpc_binning = le64_to_cpu(prop->cpucp_info.tpc_binning_mask); hdev->decoder_binning = lower_32_bits(le64_to_cpu(prop->cpucp_info.decoder_binning_mask)); + dev_dbg(hdev->dev, "Read binning masks: tpc: 0x%llx, dram: 0x%llx, edma: 0x%x, dec: 0x%x\n", + hdev->tpc_binning, hdev->dram_binning, hdev->edma_binning, + hdev->decoder_binning); + /* * at this point the DRAM parameters need to be updated according to data obtained * from the FW @@ -2644,13 +2981,18 @@ static int gaudi2_early_init(struct hl_device *hdev) rc = hl_fw_read_preboot_status(hdev); if (rc) { if (hdev->reset_on_preboot_fail) + /* we are already on failure flow, so don't check if hw_fini fails. 
*/ hdev->asic_funcs->hw_fini(hdev, true, false); goto pci_fini; } if (gaudi2_get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { dev_dbg(hdev->dev, "H/W state is dirty, must reset before initializing\n"); - hdev->asic_funcs->hw_fini(hdev, true, false); + rc = hdev->asic_funcs->hw_fini(hdev, true, false); + if (rc) { + dev_err(hdev->dev, "failed to reset HW in dirty state (%d)\n", rc); + goto pci_fini; + } } return 0; @@ -2692,6 +3034,7 @@ static bool gaudi2_is_arc_tpc_owned(u64 arc_id) static void gaudi2_init_arcs(struct hl_device *hdev) { + struct cpu_dyn_regs *dyn_regs = &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; struct gaudi2_device *gaudi2 = hdev->asic_specific; u64 arc_id; u32 i; @@ -2721,6 +3064,10 @@ static void gaudi2_init_arcs(struct hl_device *hdev) gaudi2_set_arc_id_cap(hdev, arc_id); } + + /* Fetch ARC scratchpad address */ + hdev->asic_prop.engine_core_interrupt_reg_addr = + CFG_BASE + le32_to_cpu(dyn_regs->eng_arc_irq_ctrl); } static int gaudi2_scrub_arc_dccm(struct hl_device *hdev, u32 cpu_id) @@ -2772,16 +3119,21 @@ static int gaudi2_scrub_arc_dccm(struct hl_device *hdev, u32 cpu_id) return 0; } -static void gaudi2_scrub_arcs_dccm(struct hl_device *hdev) +static int gaudi2_scrub_arcs_dccm(struct hl_device *hdev) { u16 arc_id; + int rc; for (arc_id = CPU_ID_SCHED_ARC0 ; arc_id < CPU_ID_MAX ; arc_id++) { if (!gaudi2_is_arc_enabled(hdev, arc_id)) continue; - gaudi2_scrub_arc_dccm(hdev, arc_id); + rc = gaudi2_scrub_arc_dccm(hdev, arc_id); + if (rc) + return rc; } + + return 0; } static int gaudi2_late_init(struct hl_device *hdev) @@ -2805,7 +3157,13 @@ static int gaudi2_late_init(struct hl_device *hdev) } gaudi2_init_arcs(hdev); - gaudi2_scrub_arcs_dccm(hdev); + + rc = gaudi2_scrub_arcs_dccm(hdev); + if (rc) { + dev_err(hdev->dev, "Failed to scrub arcs DCCM\n"); + goto disable_pci_access; + } + gaudi2_init_security(hdev); return 0; @@ -2989,6 +3347,13 @@ static void gaudi2_user_interrupt_setup(struct hl_device *hdev) struct asic_fixed_properties *prop = &hdev->asic_prop; int i, j, k; + /* Initialize TPC interrupt */ + HL_USR_INTR_STRUCT_INIT(hdev->tpc_interrupt, hdev, 0, HL_USR_INTERRUPT_TPC); + + /* Initialize unexpected error interrupt */ + HL_USR_INTR_STRUCT_INIT(hdev->unexpected_error_interrupt, hdev, 0, + HL_USR_INTERRUPT_UNEXPECTED); + /* Initialize common user CQ interrupt */ HL_USR_INTR_STRUCT_INIT(hdev->common_user_cq_interrupt, hdev, HL_COMMON_USER_CQ_INTERRUPT_ID, HL_USR_INTERRUPT_CQ); @@ -3115,6 +3480,48 @@ static int gaudi2_special_blocks_iterator_config(struct hl_device *hdev) return gaudi2_special_blocks_config(hdev); } +static void gaudi2_test_queues_msgs_free(struct hl_device *hdev) +{ + struct gaudi2_device *gaudi2 = hdev->asic_specific; + struct gaudi2_queues_test_info *msg_info = gaudi2->queues_test_info; + int i; + + for (i = 0 ; i < GAUDI2_NUM_TESTED_QS ; i++) { + /* bail-out if this is an allocation failure point */ + if (!msg_info[i].kern_addr) + break; + + hl_asic_dma_pool_free(hdev, msg_info[i].kern_addr, msg_info[i].dma_addr); + msg_info[i].kern_addr = NULL; + } +} + +static int gaudi2_test_queues_msgs_alloc(struct hl_device *hdev) +{ + struct gaudi2_device *gaudi2 = hdev->asic_specific; + struct gaudi2_queues_test_info *msg_info = gaudi2->queues_test_info; + int i, rc; + + /* allocate a message-short buf for each Q we intend to test */ + for (i = 0 ; i < GAUDI2_NUM_TESTED_QS ; i++) { + msg_info[i].kern_addr = + (void *)hl_asic_dma_pool_zalloc(hdev, sizeof(struct packet_msg_short), + GFP_KERNEL, &msg_info[i].dma_addr); + if 
(!msg_info[i].kern_addr) { + dev_err(hdev->dev, + "Failed to allocate dma memory for H/W queue %d testing\n", i); + rc = -ENOMEM; + goto err_exit; + } + } + + return 0; + +err_exit: + gaudi2_test_queues_msgs_free(hdev); + return rc; +} + static int gaudi2_sw_init(struct hl_device *hdev) { struct asic_fixed_properties *prop = &hdev->asic_prop; @@ -3214,8 +3621,14 @@ static int gaudi2_sw_init(struct hl_device *hdev) if (rc) goto free_scratchpad_mem; + rc = gaudi2_test_queues_msgs_alloc(hdev); + if (rc) + goto special_blocks_free; + return 0; +special_blocks_free: + gaudi2_special_blocks_iterator_free(hdev); free_scratchpad_mem: hl_asic_dma_pool_free(hdev, gaudi2->scratchpad_kernel_address, gaudi2->scratchpad_bus_address); @@ -3238,6 +3651,8 @@ static int gaudi2_sw_fini(struct hl_device *hdev) struct asic_fixed_properties *prop = &hdev->asic_prop; struct gaudi2_device *gaudi2 = hdev->asic_specific; + gaudi2_test_queues_msgs_free(hdev); + gaudi2_special_blocks_iterator_free(hdev); hl_cpu_accessible_dma_pool_free(hdev, prop->pmmu.page_size, gaudi2->virt_msix_db_cpu_addr); @@ -3646,6 +4061,10 @@ static const char *gaudi2_irq_name(u16 irq_number) return "gaudi2 completion"; case GAUDI2_IRQ_NUM_DCORE0_DEC0_NRM ... GAUDI2_IRQ_NUM_SHARED_DEC1_ABNRM: return gaudi2_vdec_irq_name[irq_number - GAUDI2_IRQ_NUM_DCORE0_DEC0_NRM]; + case GAUDI2_IRQ_NUM_TPC_ASSERT: + return "gaudi2 tpc assert"; + case GAUDI2_IRQ_NUM_UNEXPECTED_ERROR: + return "gaudi2 unexpected error"; case GAUDI2_IRQ_NUM_USER_FIRST ... GAUDI2_IRQ_NUM_USER_LAST: return "gaudi2 user completion"; default: @@ -3677,7 +4096,6 @@ static void gaudi2_dec_disable_msix(struct hl_device *hdev, u32 max_irq_num) static int gaudi2_dec_enable_msix(struct hl_device *hdev) { int rc, i, irq_init_cnt, irq, relative_idx; - irq_handler_t irq_handler; struct hl_dec *dec; for (i = GAUDI2_IRQ_NUM_DCORE0_DEC0_NRM, irq_init_cnt = 0; @@ -3687,20 +4105,24 @@ static int gaudi2_dec_enable_msix(struct hl_device *hdev) irq = pci_irq_vector(hdev->pdev, i); relative_idx = i - GAUDI2_IRQ_NUM_DCORE0_DEC0_NRM; - irq_handler = (relative_idx % 2) ? - hl_irq_handler_dec_abnrm : - hl_irq_handler_user_interrupt; - - dec = hdev->dec + relative_idx / 2; - /* We pass different structures depending on the irq handler. For the abnormal * interrupt we pass hl_dec and for the regular interrupt we pass the relevant * user_interrupt entry + * + * TODO: change the dec abnrm to threaded irq */ - rc = request_irq(irq, irq_handler, 0, gaudi2_irq_name(i), - ((relative_idx % 2) ? 
- (void *) dec : - (void *) &hdev->user_interrupt[dec->core_id])); + + dec = hdev->dec + relative_idx / 2; + if (relative_idx % 2) { + rc = request_irq(irq, hl_irq_handler_dec_abnrm, 0, + gaudi2_irq_name(i), (void *) dec); + } else { + rc = request_threaded_irq(irq, hl_irq_handler_user_interrupt, + hl_irq_user_interrupt_thread_handler, IRQF_ONESHOT, + gaudi2_irq_name(i), + (void *) &hdev->user_interrupt[dec->core_id]); + } + if (rc) { dev_err(hdev->dev, "Failed to request IRQ %d", irq); goto free_dec_irqs; @@ -3719,7 +4141,6 @@ static int gaudi2_enable_msix(struct hl_device *hdev) struct asic_fixed_properties *prop = &hdev->asic_prop; struct gaudi2_device *gaudi2 = hdev->asic_specific; int rc, irq, i, j, user_irq_init_cnt; - irq_handler_t irq_handler; struct hl_cq *cq; if (gaudi2->hw_cap_initialized & HW_CAP_MSIX) @@ -3755,14 +4176,33 @@ static int gaudi2_enable_msix(struct hl_device *hdev) goto free_event_irq; } + irq = pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_TPC_ASSERT); + rc = request_threaded_irq(irq, hl_irq_handler_user_interrupt, + hl_irq_user_interrupt_thread_handler, IRQF_ONESHOT, + gaudi2_irq_name(GAUDI2_IRQ_NUM_TPC_ASSERT), &hdev->tpc_interrupt); + if (rc) { + dev_err(hdev->dev, "Failed to request IRQ %d", irq); + goto free_dec_irq; + } + + irq = pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_UNEXPECTED_ERROR); + rc = request_irq(irq, hl_irq_handler_user_interrupt, 0, + gaudi2_irq_name(GAUDI2_IRQ_NUM_UNEXPECTED_ERROR), + &hdev->unexpected_error_interrupt); + if (rc) { + dev_err(hdev->dev, "Failed to request IRQ %d", irq); + goto free_tpc_irq; + } + for (i = GAUDI2_IRQ_NUM_USER_FIRST, j = prop->user_dec_intr_count, user_irq_init_cnt = 0; user_irq_init_cnt < prop->user_interrupt_count; i++, j++, user_irq_init_cnt++) { irq = pci_irq_vector(hdev->pdev, i); - irq_handler = hl_irq_handler_user_interrupt; + rc = request_threaded_irq(irq, hl_irq_handler_user_interrupt, + hl_irq_user_interrupt_thread_handler, IRQF_ONESHOT, + gaudi2_irq_name(i), &hdev->user_interrupt[j]); - rc = request_irq(irq, irq_handler, 0, gaudi2_irq_name(i), &hdev->user_interrupt[j]); if (rc) { dev_err(hdev->dev, "Failed to request IRQ %d", irq); goto free_user_irq; @@ -3780,9 +4220,13 @@ free_user_irq: irq = pci_irq_vector(hdev->pdev, i); free_irq(irq, &hdev->user_interrupt[j]); } - - gaudi2_dec_disable_msix(hdev, GAUDI2_IRQ_NUM_SHARED_DEC1_ABNRM + 1); - + irq = pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_UNEXPECTED_ERROR); + free_irq(irq, &hdev->unexpected_error_interrupt); +free_tpc_irq: + irq = pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_TPC_ASSERT); + free_irq(irq, &hdev->tpc_interrupt); +free_dec_irq: + gaudi2_dec_disable_msix(hdev, GAUDI2_IRQ_NUM_DEC_LAST + 1); free_event_irq: irq = pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_EVENT_QUEUE); free_irq(irq, cq); @@ -3814,6 +4258,9 @@ static void gaudi2_sync_irqs(struct hl_device *hdev) synchronize_irq(irq); } + synchronize_irq(pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_TPC_ASSERT)); + synchronize_irq(pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_UNEXPECTED_ERROR)); + for (i = GAUDI2_IRQ_NUM_USER_FIRST, j = 0 ; j < hdev->asic_prop.user_interrupt_count; i++, j++) { irq = pci_irq_vector(hdev->pdev, i); @@ -3840,6 +4287,12 @@ static void gaudi2_disable_msix(struct hl_device *hdev) gaudi2_dec_disable_msix(hdev, GAUDI2_IRQ_NUM_SHARED_DEC1_ABNRM + 1); + irq = pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_TPC_ASSERT); + free_irq(irq, &hdev->tpc_interrupt); + + irq = pci_irq_vector(hdev->pdev, GAUDI2_IRQ_NUM_UNEXPECTED_ERROR); + free_irq(irq, &hdev->unexpected_error_interrupt); + for (i = 
GAUDI2_IRQ_NUM_USER_FIRST, j = prop->user_dec_intr_count, k = 0; k < hdev->asic_prop.user_interrupt_count ; i++, j++, k++) { @@ -4037,7 +4490,6 @@ static int gaudi2_set_engine_cores(struct hl_device *hdev, u32 *core_ids, { int i, rc; - for (i = 0 ; i < num_cores ; i++) { if (gaudi2_is_arc_enabled(hdev, core_ids[i])) gaudi2_set_arc_running_mode(hdev, core_ids[i], core_command); @@ -4059,6 +4511,139 @@ static int gaudi2_set_engine_cores(struct hl_device *hdev, u32 *core_ids, return 0; } +static int gaudi2_set_tpc_engine_mode(struct hl_device *hdev, u32 engine_id, u32 engine_command) +{ + struct gaudi2_device *gaudi2 = hdev->asic_specific; + u32 reg_base, reg_addr, reg_val, tpc_id; + + if (!(gaudi2->tpc_hw_cap_initialized & HW_CAP_TPC_MASK)) + return 0; + + tpc_id = gaudi2_tpc_engine_id_to_tpc_id[engine_id]; + if (!(gaudi2->tpc_hw_cap_initialized & BIT_ULL(HW_CAP_TPC_SHIFT + tpc_id))) + return 0; + + reg_base = gaudi2_tpc_cfg_blocks_bases[tpc_id]; + reg_addr = reg_base + TPC_CFG_STALL_OFFSET; + reg_val = FIELD_PREP(DCORE0_TPC0_CFG_TPC_STALL_V_MASK, + !!(engine_command == HL_ENGINE_STALL)); + WREG32(reg_addr, reg_val); + + if (engine_command == HL_ENGINE_RESUME) { + reg_base = gaudi2_tpc_eml_cfg_blocks_bases[tpc_id]; + reg_addr = reg_base + TPC_EML_CFG_DBG_CNT_OFFSET; + RMWREG32(reg_addr, 0x1, DCORE0_TPC0_EML_CFG_DBG_CNT_DBG_EXIT_MASK); + } + + return 0; +} + +static int gaudi2_set_mme_engine_mode(struct hl_device *hdev, u32 engine_id, u32 engine_command) +{ + struct gaudi2_device *gaudi2 = hdev->asic_specific; + u32 reg_base, reg_addr, reg_val, mme_id; + + mme_id = gaudi2_mme_engine_id_to_mme_id[engine_id]; + if (!(gaudi2->hw_cap_initialized & BIT_ULL(HW_CAP_MME_SHIFT + mme_id))) + return 0; + + reg_base = gaudi2_mme_ctrl_lo_blocks_bases[mme_id]; + reg_addr = reg_base + MME_CTRL_LO_QM_STALL_OFFSET; + reg_val = FIELD_PREP(DCORE0_MME_CTRL_LO_QM_STALL_V_MASK, + !!(engine_command == HL_ENGINE_STALL)); + WREG32(reg_addr, reg_val); + + return 0; +} + +static int gaudi2_set_edma_engine_mode(struct hl_device *hdev, u32 engine_id, u32 engine_command) +{ + struct gaudi2_device *gaudi2 = hdev->asic_specific; + u32 reg_base, reg_addr, reg_val, edma_id; + + if (!(gaudi2->hw_cap_initialized & HW_CAP_EDMA_MASK)) + return 0; + + edma_id = gaudi2_edma_engine_id_to_edma_id[engine_id]; + if (!(gaudi2->hw_cap_initialized & BIT_ULL(HW_CAP_EDMA_SHIFT + edma_id))) + return 0; + + reg_base = gaudi2_dma_core_blocks_bases[edma_id]; + reg_addr = reg_base + EDMA_CORE_CFG_STALL_OFFSET; + reg_val = FIELD_PREP(DCORE0_EDMA0_CORE_CFG_1_HALT_MASK, + !!(engine_command == HL_ENGINE_STALL)); + WREG32(reg_addr, reg_val); + + if (engine_command == HL_ENGINE_STALL) { + reg_val = FIELD_PREP(DCORE0_EDMA0_CORE_CFG_1_HALT_MASK, 0x1) | + FIELD_PREP(DCORE0_EDMA0_CORE_CFG_1_FLUSH_MASK, 0x1); + WREG32(reg_addr, reg_val); + } + + return 0; +} + +static int gaudi2_set_engine_modes(struct hl_device *hdev, + u32 *engine_ids, u32 num_engines, u32 engine_command) +{ + int i, rc; + + for (i = 0 ; i < num_engines ; ++i) { + switch (engine_ids[i]) { + case GAUDI2_DCORE0_ENGINE_ID_TPC_0 ... GAUDI2_DCORE0_ENGINE_ID_TPC_5: + case GAUDI2_DCORE1_ENGINE_ID_TPC_0 ... GAUDI2_DCORE1_ENGINE_ID_TPC_5: + case GAUDI2_DCORE2_ENGINE_ID_TPC_0 ... GAUDI2_DCORE2_ENGINE_ID_TPC_5: + case GAUDI2_DCORE3_ENGINE_ID_TPC_0 ... 
GAUDI2_DCORE3_ENGINE_ID_TPC_5: + rc = gaudi2_set_tpc_engine_mode(hdev, engine_ids[i], engine_command); + if (rc) + return rc; + + break; + case GAUDI2_DCORE0_ENGINE_ID_MME: + case GAUDI2_DCORE1_ENGINE_ID_MME: + case GAUDI2_DCORE2_ENGINE_ID_MME: + case GAUDI2_DCORE3_ENGINE_ID_MME: + rc = gaudi2_set_mme_engine_mode(hdev, engine_ids[i], engine_command); + if (rc) + return rc; + + break; + case GAUDI2_DCORE0_ENGINE_ID_EDMA_0 ... GAUDI2_DCORE0_ENGINE_ID_EDMA_1: + case GAUDI2_DCORE1_ENGINE_ID_EDMA_0 ... GAUDI2_DCORE1_ENGINE_ID_EDMA_1: + case GAUDI2_DCORE2_ENGINE_ID_EDMA_0 ... GAUDI2_DCORE2_ENGINE_ID_EDMA_1: + case GAUDI2_DCORE3_ENGINE_ID_EDMA_0 ... GAUDI2_DCORE3_ENGINE_ID_EDMA_1: + rc = gaudi2_set_edma_engine_mode(hdev, engine_ids[i], engine_command); + if (rc) + return rc; + + break; + default: + dev_err(hdev->dev, "Invalid engine ID %u\n", engine_ids[i]); + return -EINVAL; + } + } + + return 0; +} + +static int gaudi2_set_engines(struct hl_device *hdev, u32 *engine_ids, + u32 num_engines, u32 engine_command) +{ + switch (engine_command) { + case HL_ENGINE_CORE_HALT: + case HL_ENGINE_CORE_RUN: + return gaudi2_set_engine_cores(hdev, engine_ids, num_engines, engine_command); + + case HL_ENGINE_STALL: + case HL_ENGINE_RESUME: + return gaudi2_set_engine_modes(hdev, engine_ids, num_engines, engine_command); + + default: + dev_err(hdev->dev, "failed to execute command id %u\n", engine_command); + return -EINVAL; + } +} + static void gaudi2_halt_engines(struct hl_device *hdev, bool hard_reset, bool fw_reset) { u32 wait_timeout_ms; @@ -5509,11 +6094,10 @@ static void gaudi2_send_hard_reset_cmd(struct hl_device *hdev) * gaudi2_execute_hard_reset - execute hard reset by driver/FW * * @hdev: pointer to the habanalabs device structure - * @reset_sleep_ms: sleep time in msec after reset * * This function executes hard reset based on if driver/FW should do the reset */ -static void gaudi2_execute_hard_reset(struct hl_device *hdev, u32 reset_sleep_ms) +static void gaudi2_execute_hard_reset(struct hl_device *hdev) { if (hdev->asic_prop.hard_reset_done_by_fw) { gaudi2_send_hard_reset_cmd(hdev); @@ -5531,17 +6115,37 @@ static void gaudi2_execute_hard_reset(struct hl_device *hdev, u32 reset_sleep_ms WREG32(mmPSOC_RESET_CONF_SW_ALL_RST, 1); } +static int gaudi2_get_soft_rst_done_indication(struct hl_device *hdev, u32 poll_timeout_us) +{ + int i, rc = 0; + u32 reg_val; + + for (i = 0 ; i < GAUDI2_RESET_POLL_CNT ; i++) + rc = hl_poll_timeout( + hdev, + mmCPU_RST_STATUS_TO_HOST, + reg_val, + reg_val == CPU_RST_STATUS_SOFT_RST_DONE, + 1000, + poll_timeout_us); + + if (rc) + dev_err(hdev->dev, "Timeout while waiting for FW to complete soft reset (0x%x)\n", + reg_val); + return rc; +} + /** * gaudi2_execute_soft_reset - execute soft reset by driver/FW * * @hdev: pointer to the habanalabs device structure - * @reset_sleep_ms: sleep time in msec after reset * @driver_performs_reset: true if driver should perform reset instead of f/w. + * @poll_timeout_us: time to wait for response from f/w. 
* * This function executes soft reset based on if driver/FW should do the reset */ -static void gaudi2_execute_soft_reset(struct hl_device *hdev, u32 reset_sleep_ms, - bool driver_performs_reset) +static int gaudi2_execute_soft_reset(struct hl_device *hdev, bool driver_performs_reset, + u32 poll_timeout_us) { struct cpu_dyn_regs *dyn_regs = &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; @@ -5554,7 +6158,8 @@ static void gaudi2_execute_soft_reset(struct hl_device *hdev, u32 reset_sleep_ms WREG32(le32_to_cpu(dyn_regs->gic_host_soft_rst_irq), gaudi2_irq_map_table[GAUDI2_EVENT_CPU_SOFT_RESET].cpu_id); - return; + + return gaudi2_get_soft_rst_done_indication(hdev, poll_timeout_us); } /* Block access to engines, QMANs and SM during reset, these @@ -5569,17 +6174,14 @@ static void gaudi2_execute_soft_reset(struct hl_device *hdev, u32 reset_sleep_ms mmPCIE_VDEC1_MSTR_IF_RR_SHRD_HBW_BASE + HL_BLOCK_SIZE); WREG32(mmPSOC_RESET_CONF_SOFT_RST, 1); + return 0; } -static void gaudi2_poll_btm_indication(struct hl_device *hdev, u32 reset_sleep_ms, - u32 poll_timeout_us) +static void gaudi2_poll_btm_indication(struct hl_device *hdev, u32 poll_timeout_us) { int i, rc = 0; u32 reg_val; - /* without this sleep reset will not work */ - msleep(reset_sleep_ms); - /* We poll the BTM done indication multiple times after reset due to * a HW errata 'GAUDI2_0300' */ @@ -5596,30 +6198,12 @@ static void gaudi2_poll_btm_indication(struct hl_device *hdev, u32 reset_sleep_m dev_err(hdev->dev, "Timeout while waiting for device to reset 0x%x\n", reg_val); } -static void gaudi2_get_soft_rst_done_indication(struct hl_device *hdev, u32 poll_timeout_us) -{ - int i, rc = 0; - u32 reg_val; - - for (i = 0 ; i < GAUDI2_RESET_POLL_CNT ; i++) - rc = hl_poll_timeout( - hdev, - mmCPU_RST_STATUS_TO_HOST, - reg_val, - reg_val == CPU_RST_STATUS_SOFT_RST_DONE, - 1000, - poll_timeout_us); - - if (rc) - dev_err(hdev->dev, "Timeout while waiting for FW to complete soft reset (0x%x)\n", - reg_val); -} - -static void gaudi2_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) +static int gaudi2_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) { struct gaudi2_device *gaudi2 = hdev->asic_specific; u32 poll_timeout_us, reset_sleep_ms; bool driver_performs_reset = false; + int rc; if (hdev->pldm) { reset_sleep_ms = hard_reset ? GAUDI2_PLDM_HRESET_TIMEOUT_MSEC : @@ -5637,7 +6221,7 @@ static void gaudi2_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_rese if (hard_reset) { driver_performs_reset = !hdev->asic_prop.hard_reset_done_by_fw; - gaudi2_execute_hard_reset(hdev, reset_sleep_ms); + gaudi2_execute_hard_reset(hdev); } else { /* * As we have to support also work with preboot only (which does not supports @@ -5647,11 +6231,13 @@ static void gaudi2_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_rese */ driver_performs_reset = (hdev->fw_components == FW_TYPE_PREBOOT_CPU && !hdev->asic_prop.fw_security_enabled); - gaudi2_execute_soft_reset(hdev, reset_sleep_ms, driver_performs_reset); + rc = gaudi2_execute_soft_reset(hdev, driver_performs_reset, poll_timeout_us); + if (rc) + return rc; } skip_reset: - if (driver_performs_reset || hard_reset) + if (driver_performs_reset || hard_reset) { /* * Instead of waiting for BTM indication we should wait for preboot ready: * Consider the below scenario: @@ -5671,17 +6257,18 @@ skip_reset: * communicate with FW that is during reset. 
* to overcome this we will always wait to preboot ready indication */ - if ((hdev->fw_components & FW_TYPE_PREBOOT_CPU)) { - msleep(reset_sleep_ms); + + /* without this sleep reset will not work */ + msleep(reset_sleep_ms); + + if (hdev->fw_components & FW_TYPE_PREBOOT_CPU) hl_fw_wait_preboot_ready(hdev); - } else { - gaudi2_poll_btm_indication(hdev, reset_sleep_ms, poll_timeout_us); - } - else - gaudi2_get_soft_rst_done_indication(hdev, poll_timeout_us); + else + gaudi2_poll_btm_indication(hdev, poll_timeout_us); + } if (!gaudi2) - return; + return 0; gaudi2->dec_hw_cap_initialized &= ~(HW_CAP_DEC_MASK); gaudi2->tpc_hw_cap_initialized &= ~(HW_CAP_TPC_MASK); @@ -5708,6 +6295,7 @@ skip_reset: HW_CAP_PDMA_MASK | HW_CAP_EDMA_MASK | HW_CAP_MME_MASK | HW_CAP_ROT_MASK); } + return 0; } static int gaudi2_suspend(struct hl_device *hdev) @@ -6259,28 +6847,29 @@ static void gaudi2_qman_set_test_mode(struct hl_device *hdev, u32 hw_queue_id, b } } -static int gaudi2_test_queue(struct hl_device *hdev, u32 hw_queue_id) +static inline u32 gaudi2_test_queue_hw_queue_id_to_sob_id(struct hl_device *hdev, u32 hw_queue_id) { - u32 sob_offset = hdev->asic_prop.first_available_user_sob[0] * 4; + return hdev->asic_prop.first_available_user_sob[0] + + hw_queue_id - GAUDI2_QUEUE_ID_PDMA_0_0; +} + +static void gaudi2_test_queue_clear(struct hl_device *hdev, u32 hw_queue_id) +{ + u32 sob_offset = gaudi2_test_queue_hw_queue_id_to_sob_id(hdev, hw_queue_id) * 4; u32 sob_addr = mmDCORE0_SYNC_MNGR_OBJS_SOB_OBJ_0 + sob_offset; - u32 timeout_usec, tmp, sob_base = 1, sob_val = 0x5a5a; - struct packet_msg_short *msg_short_pkt; - dma_addr_t pkt_dma_addr; - size_t pkt_size; - int rc; - if (hdev->pldm) - timeout_usec = GAUDI2_PLDM_TEST_QUEUE_WAIT_USEC; - else - timeout_usec = GAUDI2_TEST_QUEUE_WAIT_USEC; + /* Reset the SOB value */ + WREG32(sob_addr, 0); +} - pkt_size = sizeof(*msg_short_pkt); - msg_short_pkt = hl_asic_dma_pool_zalloc(hdev, pkt_size, GFP_KERNEL, &pkt_dma_addr); - if (!msg_short_pkt) { - dev_err(hdev->dev, "Failed to allocate packet for H/W queue %d testing\n", - hw_queue_id); - return -ENOMEM; - } +static int gaudi2_test_queue_send_msg_short(struct hl_device *hdev, u32 hw_queue_id, u32 sob_val, + struct gaudi2_queues_test_info *msg_info) +{ + u32 sob_offset = gaudi2_test_queue_hw_queue_id_to_sob_id(hdev, hw_queue_id) * 4; + u32 tmp, sob_base = 1; + struct packet_msg_short *msg_short_pkt = msg_info->kern_addr; + size_t pkt_size = sizeof(struct packet_msg_short); + int rc; tmp = (PACKET_MSG_SHORT << GAUDI2_PKT_CTL_OPCODE_SHIFT) | (1 << GAUDI2_PKT_CTL_EB_SHIFT) | @@ -6291,15 +6880,25 @@ static int gaudi2_test_queue(struct hl_device *hdev, u32 hw_queue_id) msg_short_pkt->value = cpu_to_le32(sob_val); msg_short_pkt->ctl = cpu_to_le32(tmp); - /* Reset the SOB value */ - WREG32(sob_addr, 0); + rc = hl_hw_queue_send_cb_no_cmpl(hdev, hw_queue_id, pkt_size, msg_info->dma_addr); + if (rc) + dev_err(hdev->dev, + "Failed to send msg_short packet to H/W queue %d\n", hw_queue_id); - rc = hl_hw_queue_send_cb_no_cmpl(hdev, hw_queue_id, pkt_size, pkt_dma_addr); - if (rc) { - dev_err(hdev->dev, "Failed to send msg_short packet to H/W queue %d\n", - hw_queue_id); - goto free_pkt; - } + return rc; +} + +static int gaudi2_test_queue_wait_completion(struct hl_device *hdev, u32 hw_queue_id, u32 sob_val) +{ + u32 sob_offset = gaudi2_test_queue_hw_queue_id_to_sob_id(hdev, hw_queue_id) * 4; + u32 sob_addr = mmDCORE0_SYNC_MNGR_OBJS_SOB_OBJ_0 + sob_offset; + u32 timeout_usec, tmp; + int rc; + + if (hdev->pldm) + timeout_usec = 
GAUDI2_PLDM_TEST_QUEUE_WAIT_USEC; + else + timeout_usec = GAUDI2_TEST_QUEUE_WAIT_USEC; rc = hl_poll_timeout( hdev, @@ -6315,11 +6914,6 @@ static int gaudi2_test_queue(struct hl_device *hdev, u32 hw_queue_id) rc = -EIO; } - /* Reset the SOB value */ - WREG32(sob_addr, 0); - -free_pkt: - hl_asic_dma_pool_free(hdev, (void *) msg_short_pkt, pkt_dma_addr); return rc; } @@ -6339,42 +6933,60 @@ static int gaudi2_test_cpu_queue(struct hl_device *hdev) static int gaudi2_test_queues(struct hl_device *hdev) { - int i, rc, ret_val = 0; + struct gaudi2_device *gaudi2 = hdev->asic_specific; + struct gaudi2_queues_test_info *msg_info; + u32 sob_val = 0x5a5a; + int i, rc; + /* send test message on all enabled Qs */ for (i = GAUDI2_QUEUE_ID_PDMA_0_0 ; i < GAUDI2_QUEUE_ID_CPU_PQ; i++) { if (!gaudi2_is_queue_enabled(hdev, i)) continue; + msg_info = &gaudi2->queues_test_info[i - GAUDI2_QUEUE_ID_PDMA_0_0]; gaudi2_qman_set_test_mode(hdev, i, true); - rc = gaudi2_test_queue(hdev, i); - gaudi2_qman_set_test_mode(hdev, i, false); - - if (rc) { - ret_val = -EINVAL; + gaudi2_test_queue_clear(hdev, i); + rc = gaudi2_test_queue_send_msg_short(hdev, i, sob_val, msg_info); + if (rc) goto done; - } } rc = gaudi2_test_cpu_queue(hdev); - if (rc) { - ret_val = -EINVAL; + if (rc) goto done; + + /* verify that all messages were processed */ + for (i = GAUDI2_QUEUE_ID_PDMA_0_0 ; i < GAUDI2_QUEUE_ID_CPU_PQ; i++) { + if (!gaudi2_is_queue_enabled(hdev, i)) + continue; + + rc = gaudi2_test_queue_wait_completion(hdev, i, sob_val); + if (rc) + /* chip is not usable, no need for cleanups, just bail-out with error */ + goto done; + + gaudi2_test_queue_clear(hdev, i); + gaudi2_qman_set_test_mode(hdev, i, false); } done: - return ret_val; + return rc; } static int gaudi2_compute_reset_late_init(struct hl_device *hdev) { struct gaudi2_device *gaudi2 = hdev->asic_specific; size_t irq_arr_size; + int rc; - /* TODO: missing gaudi2_nic_resume. - * Until implemented nic_hw_cap_initialized will remain zeroed - */ gaudi2_init_arcs(hdev); - gaudi2_scrub_arcs_dccm(hdev); + + rc = gaudi2_scrub_arcs_dccm(hdev); + if (rc) { + dev_err(hdev->dev, "Failed to scrub arcs DCCM\n"); + return rc; + } + gaudi2_init_security(hdev); /* Unmask all IRQs since some could have been received during the soft reset */ @@ -6382,74 +6994,21 @@ static int gaudi2_compute_reset_late_init(struct hl_device *hdev) return hl_fw_unmask_irq_arr(hdev, gaudi2->hw_events, irq_arr_size); } -static void gaudi2_is_tpc_engine_idle(struct hl_device *hdev, int dcore, int inst, u32 offset, - struct iterate_module_ctx *ctx) -{ - struct gaudi2_tpc_idle_data *idle_data = ctx->data; - u32 tpc_cfg_sts, qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts; - bool is_eng_idle; - int engine_idx; - - if ((dcore == 0) && (inst == (NUM_DCORE0_TPC - 1))) - engine_idx = GAUDI2_DCORE0_ENGINE_ID_TPC_6; - else - engine_idx = GAUDI2_DCORE0_ENGINE_ID_TPC_0 + - dcore * GAUDI2_ENGINE_ID_DCORE_OFFSET + inst; - - tpc_cfg_sts = RREG32(mmDCORE0_TPC0_CFG_STATUS + offset); - qm_glbl_sts0 = RREG32(mmDCORE0_TPC0_QM_GLBL_STS0 + offset); - qm_glbl_sts1 = RREG32(mmDCORE0_TPC0_QM_GLBL_STS1 + offset); - qm_cgm_sts = RREG32(mmDCORE0_TPC0_QM_CGM_STS + offset); - - is_eng_idle = IS_QM_IDLE(qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts) && - IS_TPC_IDLE(tpc_cfg_sts); - *(idle_data->is_idle) &= is_eng_idle; - - if (idle_data->mask && !is_eng_idle) - set_bit(engine_idx, idle_data->mask); - - if (idle_data->e) - hl_engine_data_sprintf(idle_data->e, - idle_data->tpc_fmt, dcore, inst, - is_eng_idle ? 
"Y" : "N", - qm_glbl_sts0, qm_cgm_sts, tpc_cfg_sts); -} - -static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, - struct engines_data *e) +static bool gaudi2_get_edma_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) { - u32 qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts, dma_core_idle_ind_mask, - mme_arch_sts, dec_swreg15, dec_enabled_bit; + u32 qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts, dma_core_sts0, dma_core_sts1; struct asic_fixed_properties *prop = &hdev->asic_prop; - const char *rot_fmt = "%-6d%-5d%-9s%#-14x%#-12x%s\n"; unsigned long *mask = (unsigned long *) mask_arr; - const char *edma_fmt = "%-6d%-6d%-9s%#-14x%#x\n"; - const char *mme_fmt = "%-5d%-6s%-9s%#-14x%#x\n"; - const char *nic_fmt = "%-5d%-9s%#-14x%#-12x\n"; - const char *pdma_fmt = "%-6d%-9s%#-14x%#x\n"; - const char *pcie_dec_fmt = "%-10d%-9s%#x\n"; - const char *dec_fmt = "%-6d%-5d%-9s%#x\n"; + const char *edma_fmt = "%-6d%-6d%-9s%#-14x%#-15x%#x\n"; bool is_idle = true, is_eng_idle; - u64 offset; - - struct gaudi2_tpc_idle_data tpc_idle_data = { - .tpc_fmt = "%-6d%-5d%-9s%#-14x%#-12x%#x\n", - .e = e, - .mask = mask, - .is_idle = &is_idle, - }; - struct iterate_module_ctx tpc_iter = { - .fn = &gaudi2_is_tpc_engine_idle, - .data = &tpc_idle_data, - }; - int engine_idx, i, j; + u64 offset; - /* EDMA, Two engines per Dcore */ if (e) hl_engine_data_sprintf(e, - "\nCORE EDMA is_idle QM_GLBL_STS0 DMA_CORE_IDLE_IND_MASK\n" - "---- ---- ------- ------------ ----------------------\n"); + "\nCORE EDMA is_idle QM_GLBL_STS0 DMA_CORE_STS0 DMA_CORE_STS1\n" + "---- ---- ------- ------------ ------------- -------------\n"); for (i = 0; i < NUM_OF_DCORES; i++) { for (j = 0 ; j < NUM_OF_EDMA_PER_DCORE ; j++) { @@ -6462,45 +7021,56 @@ static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask i * GAUDI2_ENGINE_ID_DCORE_OFFSET + j; offset = i * DCORE_OFFSET + j * DCORE_EDMA_OFFSET; - dma_core_idle_ind_mask = - RREG32(mmDCORE0_EDMA0_CORE_IDLE_IND_MASK + offset); + dma_core_sts0 = RREG32(mmDCORE0_EDMA0_CORE_STS0 + offset); + dma_core_sts1 = RREG32(mmDCORE0_EDMA0_CORE_STS1 + offset); qm_glbl_sts0 = RREG32(mmDCORE0_EDMA0_QM_GLBL_STS0 + offset); qm_glbl_sts1 = RREG32(mmDCORE0_EDMA0_QM_GLBL_STS1 + offset); qm_cgm_sts = RREG32(mmDCORE0_EDMA0_QM_CGM_STS + offset); is_eng_idle = IS_QM_IDLE(qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts) && - IS_DMA_IDLE(dma_core_idle_ind_mask); + IS_DMA_IDLE(dma_core_sts0) && !IS_DMA_HALTED(dma_core_sts1); is_idle &= is_eng_idle; if (mask && !is_eng_idle) set_bit(engine_idx, mask); if (e) - hl_engine_data_sprintf(e, edma_fmt, i, j, - is_eng_idle ? "Y" : "N", - qm_glbl_sts0, - dma_core_idle_ind_mask); + hl_engine_data_sprintf(e, edma_fmt, i, j, is_eng_idle ? 
"Y" : "N", + qm_glbl_sts0, dma_core_sts0, dma_core_sts1); } } - /* PDMA, Two engines in Full chip */ + return is_idle; +} + +static bool gaudi2_get_pdma_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) +{ + u32 qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts, dma_core_sts0, dma_core_sts1; + unsigned long *mask = (unsigned long *) mask_arr; + const char *pdma_fmt = "%-6d%-9s%#-14x%#-15x%#x\n"; + bool is_idle = true, is_eng_idle; + int engine_idx, i; + u64 offset; + if (e) hl_engine_data_sprintf(e, - "\nPDMA is_idle QM_GLBL_STS0 DMA_CORE_IDLE_IND_MASK\n" - "---- ------- ------------ ----------------------\n"); + "\nPDMA is_idle QM_GLBL_STS0 DMA_CORE_STS0 DMA_CORE_STS1\n" + "---- ------- ------------ ------------- -------------\n"); for (i = 0 ; i < NUM_OF_PDMA ; i++) { engine_idx = GAUDI2_ENGINE_ID_PDMA_0 + i; offset = i * PDMA_OFFSET; - dma_core_idle_ind_mask = RREG32(mmPDMA0_CORE_IDLE_IND_MASK + offset); + dma_core_sts0 = RREG32(mmPDMA0_CORE_STS0 + offset); + dma_core_sts1 = RREG32(mmPDMA0_CORE_STS1 + offset); qm_glbl_sts0 = RREG32(mmPDMA0_QM_GLBL_STS0 + offset); qm_glbl_sts1 = RREG32(mmPDMA0_QM_GLBL_STS1 + offset); qm_cgm_sts = RREG32(mmPDMA0_QM_CGM_STS + offset); is_eng_idle = IS_QM_IDLE(qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts) && - IS_DMA_IDLE(dma_core_idle_ind_mask); + IS_DMA_IDLE(dma_core_sts0) && !IS_DMA_HALTED(dma_core_sts1); is_idle &= is_eng_idle; if (mask && !is_eng_idle) @@ -6508,9 +7078,22 @@ static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask if (e) hl_engine_data_sprintf(e, pdma_fmt, i, is_eng_idle ? "Y" : "N", - qm_glbl_sts0, dma_core_idle_ind_mask); + qm_glbl_sts0, dma_core_sts0, dma_core_sts1); } + return is_idle; +} + +static bool gaudi2_get_nic_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) +{ + unsigned long *mask = (unsigned long *) mask_arr; + const char *nic_fmt = "%-5d%-9s%#-14x%#-12x\n"; + u32 qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts; + bool is_idle = true, is_eng_idle; + int engine_idx, i; + u64 offset = 0; + /* NIC, twelve macros in Full chip */ if (e && hdev->nic_ports_mask) hl_engine_data_sprintf(e, @@ -6544,6 +7127,19 @@ static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask qm_glbl_sts0, qm_cgm_sts); } + return is_idle; +} + +static bool gaudi2_get_mme_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) +{ + u32 qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts, mme_arch_sts; + unsigned long *mask = (unsigned long *) mask_arr; + const char *mme_fmt = "%-5d%-6s%-9s%#-14x%#x\n"; + bool is_idle = true, is_eng_idle; + int engine_idx, i; + u64 offset; + if (e) hl_engine_data_sprintf(e, "\nMME Stub is_idle QM_GLBL_STS0 MME_ARCH_STATUS\n" @@ -6574,16 +7170,82 @@ static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask set_bit(engine_idx, mask); } - /* - * TPC - */ + return is_idle; +} + +static void gaudi2_is_tpc_engine_idle(struct hl_device *hdev, int dcore, int inst, u32 offset, + struct iterate_module_ctx *ctx) +{ + struct gaudi2_tpc_idle_data *idle_data = ctx->data; + u32 tpc_cfg_sts, qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts; + bool is_eng_idle; + int engine_idx; + + if ((dcore == 0) && (inst == (NUM_DCORE0_TPC - 1))) + engine_idx = GAUDI2_DCORE0_ENGINE_ID_TPC_6; + else + engine_idx = GAUDI2_DCORE0_ENGINE_ID_TPC_0 + + dcore * GAUDI2_ENGINE_ID_DCORE_OFFSET + inst; + + tpc_cfg_sts = RREG32(mmDCORE0_TPC0_CFG_STATUS + offset); + qm_glbl_sts0 = RREG32(mmDCORE0_TPC0_QM_GLBL_STS0 + offset); + 
qm_glbl_sts1 = RREG32(mmDCORE0_TPC0_QM_GLBL_STS1 + offset); + qm_cgm_sts = RREG32(mmDCORE0_TPC0_QM_CGM_STS + offset); + + is_eng_idle = IS_QM_IDLE(qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts) && + IS_TPC_IDLE(tpc_cfg_sts); + *(idle_data->is_idle) &= is_eng_idle; + + if (idle_data->mask && !is_eng_idle) + set_bit(engine_idx, idle_data->mask); + + if (idle_data->e) + hl_engine_data_sprintf(idle_data->e, + idle_data->tpc_fmt, dcore, inst, + is_eng_idle ? "Y" : "N", + qm_glbl_sts0, qm_cgm_sts, tpc_cfg_sts); +} + +static bool gaudi2_get_tpc_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + unsigned long *mask = (unsigned long *) mask_arr; + bool is_idle = true; + + struct gaudi2_tpc_idle_data tpc_idle_data = { + .tpc_fmt = "%-6d%-5d%-9s%#-14x%#-12x%#x\n", + .e = e, + .mask = mask, + .is_idle = &is_idle, + }; + struct iterate_module_ctx tpc_iter = { + .fn = &gaudi2_is_tpc_engine_idle, + .data = &tpc_idle_data, + }; + if (e && prop->tpc_enabled_mask) hl_engine_data_sprintf(e, - "\nCORE TPC is_idle QM_GLBL_STS0 QM_CGM_STS DMA_CORE_IDLE_IND_MASK\n" - "---- --- -------- ------------ ---------- ----------------------\n"); + "\nCORE TPC is_idle QM_GLBL_STS0 QM_CGM_STS STATUS\n" + "---- --- ------- ------------ ---------- ------\n"); gaudi2_iterate_tpcs(hdev, &tpc_iter); + return tpc_idle_data.is_idle; +} + +static bool gaudi2_get_decoder_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + unsigned long *mask = (unsigned long *) mask_arr; + const char *pcie_dec_fmt = "%-10d%-9s%#x\n"; + const char *dec_fmt = "%-6d%-5d%-9s%#x\n"; + bool is_idle = true, is_eng_idle; + u32 dec_swreg15, dec_enabled_bit; + int engine_idx, i, j; + u64 offset; + /* Decoders, two each Dcore and two shared PCIe decoders */ if (e && (prop->decoder_enabled_mask & (~PCIE_DEC_EN_MASK))) hl_engine_data_sprintf(e, @@ -6638,10 +7300,23 @@ static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask is_eng_idle ? "Y" : "N", dec_swreg15); } + return is_idle; +} + +static bool gaudi2_get_rotator_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) +{ + const char *rot_fmt = "%-6d%-5d%-9s%#-14x%#-14x%#x\n"; + unsigned long *mask = (unsigned long *) mask_arr; + u32 qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts; + bool is_idle = true, is_eng_idle; + int engine_idx, i; + u64 offset; + if (e) hl_engine_data_sprintf(e, - "\nCORE ROT is_idle QM_GLBL_STS0 QM_CGM_STS DMA_CORE_STS0\n" - "---- ---- ------- ------------ ---------- -------------\n"); + "\nCORE ROT is_idle QM_GLBL_STS0 QM_GLBL_STS1 QM_CGM_STS\n" + "---- --- ------- ------------ ------------ ----------\n"); for (i = 0 ; i < NUM_OF_ROT ; i++) { engine_idx = GAUDI2_ENGINE_ID_ROT_0 + i; @@ -6660,12 +7335,28 @@ static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask if (e) hl_engine_data_sprintf(e, rot_fmt, i, 0, is_eng_idle ? 
"Y" : "N", - qm_glbl_sts0, qm_cgm_sts, "-"); + qm_glbl_sts0, qm_glbl_sts1, qm_cgm_sts); } return is_idle; } +static bool gaudi2_is_device_idle(struct hl_device *hdev, u64 *mask_arr, u8 mask_len, + struct engines_data *e) +{ + bool is_idle = true; + + is_idle &= gaudi2_get_edma_idle_status(hdev, mask_arr, mask_len, e); + is_idle &= gaudi2_get_pdma_idle_status(hdev, mask_arr, mask_len, e); + is_idle &= gaudi2_get_nic_idle_status(hdev, mask_arr, mask_len, e); + is_idle &= gaudi2_get_mme_idle_status(hdev, mask_arr, mask_len, e); + is_idle &= gaudi2_get_tpc_idle_status(hdev, mask_arr, mask_len, e); + is_idle &= gaudi2_get_decoder_idle_status(hdev, mask_arr, mask_len, e); + is_idle &= gaudi2_get_rotator_idle_status(hdev, mask_arr, mask_len, e); + + return is_idle; +} + static void gaudi2_hw_queues_lock(struct hl_device *hdev) __acquires(&gaudi2->hw_queues_lock) { @@ -7040,7 +7731,7 @@ static bool gaudi2_handle_ecc_event(struct hl_device *hdev, u16 event_type, memory_wrapper_idx = ecc_data->memory_wrapper_idx; gaudi2_print_event(hdev, event_type, !ecc_data->is_critical, - "ECC error detected. address: %#llx. Syndrom: %#llx. block id %u. critical %u.\n", + "ECC error detected. address: %#llx. Syndrom: %#llx. block id %u. critical %u.", ecc_address, ecc_syndrom, memory_wrapper_idx, ecc_data->is_critical); return !!ecc_data->is_critical; @@ -7352,10 +8043,8 @@ static void gaudi2_ack_module_razwi_event_handler(struct hl_device *hdev, case RAZWI_TPC: hbw_rtr_id = gaudi2_tpc_initiator_hbw_rtr_id[module_idx]; - /* TODO : remove this check and depend only on tpc routers table - * when SW-118828 is resolved - */ - if (!hdev->asic_prop.fw_security_enabled && + if (hl_is_fw_ver_below_1_9(hdev) && + !hdev->asic_prop.fw_security_enabled && ((module_idx == 0) || (module_idx == 1))) lbw_rtr_id = DCORE0_RTR0; else @@ -7526,297 +8215,115 @@ static void gaudi2_check_if_razwi_happened(struct hl_device *hdev) gaudi2_ack_module_razwi_event_handler(hdev, RAZWI_ROT, mod_idx, 0, NULL); } -static const char *gaudi2_get_initiators_name(u32 rtr_id) -{ - switch (rtr_id) { - case DCORE0_RTR0: - return "DEC0/1/8/9, TPC24, PDMA0/1, PMMU, PCIE_IF, EDMA0/2, HMMU0/2/4/6, CPU"; - case DCORE0_RTR1: - return "TPC0/1"; - case DCORE0_RTR2: - return "TPC2/3"; - case DCORE0_RTR3: - return "TPC4/5"; - case DCORE0_RTR4: - return "MME0_SBTE0/1"; - case DCORE0_RTR5: - return "MME0_WAP0/SBTE2"; - case DCORE0_RTR6: - return "MME0_CTRL_WR/SBTE3"; - case DCORE0_RTR7: - return "MME0_WAP1/CTRL_RD/SBTE4"; - case DCORE1_RTR0: - return "MME1_WAP1/CTRL_RD/SBTE4"; - case DCORE1_RTR1: - return "MME1_CTRL_WR/SBTE3"; - case DCORE1_RTR2: - return "MME1_WAP0/SBTE2"; - case DCORE1_RTR3: - return "MME1_SBTE0/1"; - case DCORE1_RTR4: - return "TPC10/11"; - case DCORE1_RTR5: - return "TPC8/9"; - case DCORE1_RTR6: - return "TPC6/7"; - case DCORE1_RTR7: - return "DEC2/3, NIC0/1/2/3/4, ARC_FARM, KDMA, EDMA1/3, HMMU1/3/5/7"; - case DCORE2_RTR0: - return "DEC4/5, NIC5/6/7/8, EDMA4/6, HMMU8/10/12/14, ROT0"; - case DCORE2_RTR1: - return "TPC16/17"; - case DCORE2_RTR2: - return "TPC14/15"; - case DCORE2_RTR3: - return "TPC12/13"; - case DCORE2_RTR4: - return "MME2_SBTE0/1"; - case DCORE2_RTR5: - return "MME2_WAP0/SBTE2"; - case DCORE2_RTR6: - return "MME2_CTRL_WR/SBTE3"; - case DCORE2_RTR7: - return "MME2_WAP1/CTRL_RD/SBTE4"; - case DCORE3_RTR0: - return "MME3_WAP1/CTRL_RD/SBTE4"; - case DCORE3_RTR1: - return "MME3_CTRL_WR/SBTE3"; - case DCORE3_RTR2: - return "MME3_WAP0/SBTE2"; - case DCORE3_RTR3: - return "MME3_SBTE0/1"; - case DCORE3_RTR4: - return "TPC18/19"; - case 
DCORE3_RTR5: - return "TPC20/21"; - case DCORE3_RTR6: - return "TPC22/23"; - case DCORE3_RTR7: - return "DEC6/7, NIC9/10/11, EDMA5/7, HMMU9/11/13/15, ROT1, PSOC"; - default: - return "N/A"; - } -} - -static u16 gaudi2_get_razwi_initiators(u32 rtr_id, u16 *engines) -{ - switch (rtr_id) { - case DCORE0_RTR0: - engines[0] = GAUDI2_DCORE0_ENGINE_ID_DEC_0; - engines[1] = GAUDI2_DCORE0_ENGINE_ID_DEC_1; - engines[2] = GAUDI2_PCIE_ENGINE_ID_DEC_0; - engines[3] = GAUDI2_PCIE_ENGINE_ID_DEC_1; - engines[4] = GAUDI2_DCORE0_ENGINE_ID_TPC_6; - engines[5] = GAUDI2_ENGINE_ID_PDMA_0; - engines[6] = GAUDI2_ENGINE_ID_PDMA_1; - engines[7] = GAUDI2_ENGINE_ID_PCIE; - engines[8] = GAUDI2_DCORE0_ENGINE_ID_EDMA_0; - engines[9] = GAUDI2_DCORE1_ENGINE_ID_EDMA_0; - engines[10] = GAUDI2_ENGINE_ID_PSOC; - return 11; - - case DCORE0_RTR1: - engines[0] = GAUDI2_DCORE0_ENGINE_ID_TPC_0; - engines[1] = GAUDI2_DCORE0_ENGINE_ID_TPC_1; - return 2; - - case DCORE0_RTR2: - engines[0] = GAUDI2_DCORE0_ENGINE_ID_TPC_2; - engines[1] = GAUDI2_DCORE0_ENGINE_ID_TPC_3; - return 2; - - case DCORE0_RTR3: - engines[0] = GAUDI2_DCORE0_ENGINE_ID_TPC_4; - engines[1] = GAUDI2_DCORE0_ENGINE_ID_TPC_5; - return 2; - - case DCORE0_RTR4: - case DCORE0_RTR5: - case DCORE0_RTR6: - case DCORE0_RTR7: - engines[0] = GAUDI2_DCORE0_ENGINE_ID_MME; - return 1; - - case DCORE1_RTR0: - case DCORE1_RTR1: - case DCORE1_RTR2: - case DCORE1_RTR3: - engines[0] = GAUDI2_DCORE1_ENGINE_ID_MME; - return 1; - - case DCORE1_RTR4: - engines[0] = GAUDI2_DCORE1_ENGINE_ID_TPC_4; - engines[1] = GAUDI2_DCORE1_ENGINE_ID_TPC_5; - return 2; - - case DCORE1_RTR5: - engines[0] = GAUDI2_DCORE1_ENGINE_ID_TPC_2; - engines[1] = GAUDI2_DCORE1_ENGINE_ID_TPC_3; - return 2; - - case DCORE1_RTR6: - engines[0] = GAUDI2_DCORE1_ENGINE_ID_TPC_0; - engines[1] = GAUDI2_DCORE1_ENGINE_ID_TPC_1; - return 2; - - case DCORE1_RTR7: - engines[0] = GAUDI2_DCORE1_ENGINE_ID_DEC_0; - engines[1] = GAUDI2_DCORE1_ENGINE_ID_DEC_1; - engines[2] = GAUDI2_ENGINE_ID_NIC0_0; - engines[3] = GAUDI2_ENGINE_ID_NIC1_0; - engines[4] = GAUDI2_ENGINE_ID_NIC2_0; - engines[5] = GAUDI2_ENGINE_ID_NIC3_0; - engines[6] = GAUDI2_ENGINE_ID_NIC4_0; - engines[7] = GAUDI2_ENGINE_ID_ARC_FARM; - engines[8] = GAUDI2_ENGINE_ID_KDMA; - engines[9] = GAUDI2_DCORE0_ENGINE_ID_EDMA_1; - engines[10] = GAUDI2_DCORE1_ENGINE_ID_EDMA_1; - return 11; - - case DCORE2_RTR0: - engines[0] = GAUDI2_DCORE2_ENGINE_ID_DEC_0; - engines[1] = GAUDI2_DCORE2_ENGINE_ID_DEC_1; - engines[2] = GAUDI2_ENGINE_ID_NIC5_0; - engines[3] = GAUDI2_ENGINE_ID_NIC6_0; - engines[4] = GAUDI2_ENGINE_ID_NIC7_0; - engines[5] = GAUDI2_ENGINE_ID_NIC8_0; - engines[6] = GAUDI2_DCORE2_ENGINE_ID_EDMA_0; - engines[7] = GAUDI2_DCORE3_ENGINE_ID_EDMA_0; - engines[8] = GAUDI2_ENGINE_ID_ROT_0; - return 9; - - case DCORE2_RTR1: - engines[0] = GAUDI2_DCORE2_ENGINE_ID_TPC_4; - engines[1] = GAUDI2_DCORE2_ENGINE_ID_TPC_5; - return 2; - - case DCORE2_RTR2: - engines[0] = GAUDI2_DCORE2_ENGINE_ID_TPC_2; - engines[1] = GAUDI2_DCORE2_ENGINE_ID_TPC_3; - return 2; - - case DCORE2_RTR3: - engines[0] = GAUDI2_DCORE2_ENGINE_ID_TPC_0; - engines[1] = GAUDI2_DCORE2_ENGINE_ID_TPC_1; - return 2; - - case DCORE2_RTR4: - case DCORE2_RTR5: - case DCORE2_RTR6: - case DCORE2_RTR7: - engines[0] = GAUDI2_DCORE2_ENGINE_ID_MME; - return 1; - case DCORE3_RTR0: - case DCORE3_RTR1: - case DCORE3_RTR2: - case DCORE3_RTR3: - engines[0] = GAUDI2_DCORE3_ENGINE_ID_MME; - return 1; - case DCORE3_RTR4: - engines[0] = GAUDI2_DCORE3_ENGINE_ID_TPC_0; - engines[1] = GAUDI2_DCORE3_ENGINE_ID_TPC_1; - return 2; - case DCORE3_RTR5: - 
engines[0] = GAUDI2_DCORE3_ENGINE_ID_TPC_2; - engines[1] = GAUDI2_DCORE3_ENGINE_ID_TPC_3; - return 2; - case DCORE3_RTR6: - engines[0] = GAUDI2_DCORE3_ENGINE_ID_TPC_4; - engines[1] = GAUDI2_DCORE3_ENGINE_ID_TPC_5; - return 2; - case DCORE3_RTR7: - engines[0] = GAUDI2_DCORE3_ENGINE_ID_DEC_0; - engines[1] = GAUDI2_DCORE3_ENGINE_ID_DEC_1; - engines[2] = GAUDI2_ENGINE_ID_NIC9_0; - engines[3] = GAUDI2_ENGINE_ID_NIC10_0; - engines[4] = GAUDI2_ENGINE_ID_NIC11_0; - engines[5] = GAUDI2_DCORE2_ENGINE_ID_EDMA_1; - engines[6] = GAUDI2_DCORE3_ENGINE_ID_EDMA_1; - engines[7] = GAUDI2_ENGINE_ID_ROT_1; - engines[8] = GAUDI2_ENGINE_ID_ROT_0; - return 9; - default: - return 0; - } -} - -static void gaudi2_razwi_unmapped_addr_hbw_printf_info(struct hl_device *hdev, u32 rtr_id, - u64 rtr_ctrl_base_addr, bool is_write, - u64 *event_mask) +static int gaudi2_psoc_razwi_get_engines(struct gaudi2_razwi_info *razwi_info, u32 array_size, + u32 axuser_xy, u32 *base, u16 *eng_id, + char *eng_name) { - u16 engines[HL_RAZWI_MAX_NUM_OF_ENGINES_PER_RTR], num_of_eng; - u32 razwi_hi, razwi_lo; - u8 rd_wr_flag; - num_of_eng = gaudi2_get_razwi_initiators(rtr_id, &engines[0]); + int i, num_of_eng = 0; + u16 str_size = 0; - if (is_write) { - razwi_hi = RREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AW_ADDR_HI); - razwi_lo = RREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AW_ADDR_LO); - rd_wr_flag = HL_RAZWI_WRITE; - - /* Clear set indication */ - WREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AW_SET, 0x1); - } else { - razwi_hi = RREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AR_ADDR_HI); - razwi_lo = RREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AR_ADDR_LO); - rd_wr_flag = HL_RAZWI_READ; + for (i = 0 ; i < array_size ; i++) { + if (axuser_xy != razwi_info[i].axuser_xy) + continue; - /* Clear set indication */ - WREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AR_SET, 0x1); + eng_id[num_of_eng] = razwi_info[i].eng_id; + base[num_of_eng] = razwi_info[i].rtr_ctrl; + if (!num_of_eng) + str_size += snprintf(eng_name + str_size, + PSOC_RAZWI_ENG_STR_SIZE - str_size, "%s", + razwi_info[i].eng_name); + else + str_size += snprintf(eng_name + str_size, + PSOC_RAZWI_ENG_STR_SIZE - str_size, " or %s", + razwi_info[i].eng_name); + num_of_eng++; } - hl_handle_razwi(hdev, (u64)razwi_hi << 32 | razwi_lo, &engines[0], num_of_eng, - rd_wr_flag | HL_RAZWI_HBW, event_mask); - dev_err_ratelimited(hdev->dev, - "RAZWI PSOC unmapped HBW %s error, rtr id %u, address %#llx\n", - is_write ? 
"WR" : "RD", rtr_id, (u64)razwi_hi << 32 | razwi_lo); - - dev_err_ratelimited(hdev->dev, - "Initiators: %s\n", gaudi2_get_initiators_name(rtr_id)); + return num_of_eng; } -static void gaudi2_razwi_unmapped_addr_lbw_printf_info(struct hl_device *hdev, u32 rtr_id, - u64 rtr_ctrl_base_addr, bool is_write, - u64 *event_mask) +static bool gaudi2_handle_psoc_razwi_happened(struct hl_device *hdev, u32 razwi_reg, + u64 *event_mask) { - u16 engines[HL_RAZWI_MAX_NUM_OF_ENGINES_PER_RTR], num_of_eng; - u64 razwi_addr = CFG_BASE; - u8 rd_wr_flag; + u32 axuser_xy = RAZWI_GET_AXUSER_XY(razwi_reg), addr_hi = 0, addr_lo = 0; + u32 base[PSOC_RAZWI_MAX_ENG_PER_RTR]; + u16 num_of_eng, eng_id[PSOC_RAZWI_MAX_ENG_PER_RTR]; + char eng_name_str[PSOC_RAZWI_ENG_STR_SIZE]; + bool razwi_happened = false; + int i; - num_of_eng = gaudi2_get_razwi_initiators(rtr_id, &engines[0]); + num_of_eng = gaudi2_psoc_razwi_get_engines(common_razwi_info, ARRAY_SIZE(common_razwi_info), + axuser_xy, base, eng_id, eng_name_str); - if (is_write) { - razwi_addr += RREG32(rtr_ctrl_base_addr + DEC_RAZWI_LBW_AW_ADDR); - rd_wr_flag = HL_RAZWI_WRITE; + /* If no match for XY coordinates, try to find it in MME razwi table */ + if (!num_of_eng) { + axuser_xy = RAZWI_GET_AXUSER_LOW_XY(razwi_reg); + num_of_eng = gaudi2_psoc_razwi_get_engines(mme_razwi_info, + ARRAY_SIZE(mme_razwi_info), + axuser_xy, base, eng_id, + eng_name_str); + } - /* Clear set indication */ - WREG32(rtr_ctrl_base_addr + DEC_RAZWI_LBW_AW_SET, 0x1); - } else { - razwi_addr += RREG32(rtr_ctrl_base_addr + DEC_RAZWI_LBW_AR_ADDR); - rd_wr_flag = HL_RAZWI_READ; + for (i = 0 ; i < num_of_eng ; i++) { + if (RREG32(base[i] + DEC_RAZWI_HBW_AW_SET)) { + addr_hi = RREG32(base[i] + DEC_RAZWI_HBW_AW_ADDR_HI); + addr_lo = RREG32(base[i] + DEC_RAZWI_HBW_AW_ADDR_LO); + dev_err(hdev->dev, + "PSOC HBW AW RAZWI: %s, address (aligned to 128 byte): 0x%llX\n", + eng_name_str, ((u64)addr_hi << 32) + addr_lo); + hl_handle_razwi(hdev, ((u64)addr_hi << 32) + addr_lo, &eng_id[0], + num_of_eng, HL_RAZWI_HBW | HL_RAZWI_WRITE, event_mask); + razwi_happened = true; + } - /* Clear set indication */ - WREG32(rtr_ctrl_base_addr + DEC_RAZWI_LBW_AR_SET, 0x1); - } + if (RREG32(base[i] + DEC_RAZWI_HBW_AR_SET)) { + addr_hi = RREG32(base[i] + DEC_RAZWI_HBW_AR_ADDR_HI); + addr_lo = RREG32(base[i] + DEC_RAZWI_HBW_AR_ADDR_LO); + dev_err(hdev->dev, + "PSOC HBW AR RAZWI: %s, address (aligned to 128 byte): 0x%llX\n", + eng_name_str, ((u64)addr_hi << 32) + addr_lo); + hl_handle_razwi(hdev, ((u64)addr_hi << 32) + addr_lo, &eng_id[0], + num_of_eng, HL_RAZWI_HBW | HL_RAZWI_READ, event_mask); + razwi_happened = true; + } - hl_handle_razwi(hdev, razwi_addr, &engines[0], num_of_eng, rd_wr_flag | HL_RAZWI_LBW, - event_mask); - dev_err_ratelimited(hdev->dev, - "RAZWI PSOC unmapped LBW %s error, rtr id %u, address 0x%llX\n", - is_write ? 
"WR" : "RD", rtr_id, razwi_addr); + if (RREG32(base[i] + DEC_RAZWI_LBW_AW_SET)) { + addr_lo = RREG32(base[i] + DEC_RAZWI_LBW_AW_ADDR); + dev_err(hdev->dev, + "PSOC LBW AW RAZWI: %s, address (aligned to 128 byte): 0x%X\n", + eng_name_str, addr_lo); + hl_handle_razwi(hdev, addr_lo, &eng_id[0], + num_of_eng, HL_RAZWI_LBW | HL_RAZWI_WRITE, event_mask); + razwi_happened = true; + } - dev_err_ratelimited(hdev->dev, - "Initiators: %s\n", gaudi2_get_initiators_name(rtr_id)); + if (RREG32(base[i] + DEC_RAZWI_LBW_AR_SET)) { + addr_lo = RREG32(base[i] + DEC_RAZWI_LBW_AR_ADDR); + dev_err(hdev->dev, + "PSOC LBW AR RAZWI: %s, address (aligned to 128 byte): 0x%X\n", + eng_name_str, addr_lo); + hl_handle_razwi(hdev, addr_lo, &eng_id[0], + num_of_eng, HL_RAZWI_LBW | HL_RAZWI_READ, event_mask); + razwi_happened = true; + } + /* In common case the loop will break, when there is only one engine id, or + * several engines with the same router. The exceptional case is with psoc razwi + * from EDMA, where it's possible to get axuser id which fits 2 routers (2 + * interfaces of sft router). In this case, maybe the first router won't hold info + * and we will need to iterate on the other router. + */ + if (razwi_happened) + break; + } + + return razwi_happened; } /* PSOC RAZWI interrupt occurs only when trying to access a bad address */ static int gaudi2_ack_psoc_razwi_event_handler(struct hl_device *hdev, u64 *event_mask) { - u32 hbw_aw_set, hbw_ar_set, lbw_aw_set, lbw_ar_set, rtr_id, dcore_id, dcore_rtr_id, xy, - razwi_mask_info, razwi_intr = 0, error_count = 0; - int rtr_map_arr_len = NUM_OF_RTR_PER_DCORE * NUM_OF_DCORES; - u64 rtr_ctrl_base_addr; + u32 razwi_mask_info, razwi_intr = 0, error_count = 0; if (hdev->pldm || !(hdev->fw_components & FW_TYPE_LINUX)) { razwi_intr = RREG32(mmPSOC_GLOBAL_CONF_RAZWI_INTERRUPT); @@ -7825,63 +8332,22 @@ static int gaudi2_ack_psoc_razwi_event_handler(struct hl_device *hdev, u64 *even } razwi_mask_info = RREG32(mmPSOC_GLOBAL_CONF_RAZWI_MASK_INFO); - xy = FIELD_GET(PSOC_GLOBAL_CONF_RAZWI_MASK_INFO_AXUSER_L_MASK, razwi_mask_info); dev_err_ratelimited(hdev->dev, "PSOC RAZWI interrupt: Mask %d, AR %d, AW %d, AXUSER_L 0x%x AXUSER_H 0x%x\n", FIELD_GET(PSOC_GLOBAL_CONF_RAZWI_MASK_INFO_MASK_MASK, razwi_mask_info), FIELD_GET(PSOC_GLOBAL_CONF_RAZWI_MASK_INFO_WAS_AR_MASK, razwi_mask_info), FIELD_GET(PSOC_GLOBAL_CONF_RAZWI_MASK_INFO_WAS_AW_MASK, razwi_mask_info), - xy, + FIELD_GET(PSOC_GLOBAL_CONF_RAZWI_MASK_INFO_AXUSER_L_MASK, razwi_mask_info), FIELD_GET(PSOC_GLOBAL_CONF_RAZWI_MASK_INFO_AXUSER_H_MASK, razwi_mask_info)); - if (xy == 0) { - dev_err_ratelimited(hdev->dev, - "PSOC RAZWI interrupt: received event from 0 rtr coordinates\n"); - goto clear; - } - - /* Find router id by router coordinates */ - for (rtr_id = 0 ; rtr_id < rtr_map_arr_len ; rtr_id++) - if (rtr_coordinates_to_rtr_id[rtr_id] == xy) - break; - - if (rtr_id == rtr_map_arr_len) { + if (gaudi2_handle_psoc_razwi_happened(hdev, razwi_mask_info, event_mask)) + error_count++; + else dev_err_ratelimited(hdev->dev, - "PSOC RAZWI interrupt: invalid rtr coordinates (0x%x)\n", xy); - goto clear; - } - - /* Find router mstr_if register base */ - dcore_id = rtr_id / NUM_OF_RTR_PER_DCORE; - dcore_rtr_id = rtr_id % NUM_OF_RTR_PER_DCORE; - rtr_ctrl_base_addr = mmDCORE0_RTR0_CTRL_BASE + dcore_id * DCORE_OFFSET + - dcore_rtr_id * DCORE_RTR_OFFSET; - - hbw_aw_set = RREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AW_SET); - hbw_ar_set = RREG32(rtr_ctrl_base_addr + DEC_RAZWI_HBW_AR_SET); - lbw_aw_set = RREG32(rtr_ctrl_base_addr + 
DEC_RAZWI_LBW_AW_SET); - lbw_ar_set = RREG32(rtr_ctrl_base_addr + DEC_RAZWI_LBW_AR_SET); - - if (hbw_aw_set) - gaudi2_razwi_unmapped_addr_hbw_printf_info(hdev, rtr_id, - rtr_ctrl_base_addr, true, event_mask); - - if (hbw_ar_set) - gaudi2_razwi_unmapped_addr_hbw_printf_info(hdev, rtr_id, - rtr_ctrl_base_addr, false, event_mask); - - if (lbw_aw_set) - gaudi2_razwi_unmapped_addr_lbw_printf_info(hdev, rtr_id, - rtr_ctrl_base_addr, true, event_mask); - - if (lbw_ar_set) - gaudi2_razwi_unmapped_addr_lbw_printf_info(hdev, rtr_id, - rtr_ctrl_base_addr, false, event_mask); + "PSOC RAZWI interrupt: invalid razwi info (0x%x)\n", + razwi_mask_info); - error_count++; - -clear: /* Clear Interrupts only on pldm or if f/w doesn't handle interrupts */ if (hdev->pldm || !(hdev->fw_components & FW_TYPE_LINUX)) WREG32(mmPSOC_GLOBAL_CONF_RAZWI_INTERRUPT, razwi_intr); @@ -7976,7 +8442,7 @@ static int gaudi2_handle_qman_err(struct hl_device *hdev, u16 event_type, u64 *e { u32 qid_base, error_count = 0; u64 qman_base; - u8 index; + u8 index = 0; switch (event_type) { case GAUDI2_EVENT_TPC0_QM ... GAUDI2_EVENT_TPC5_QM: @@ -8094,23 +8560,28 @@ static int gaudi2_handle_qman_err(struct hl_device *hdev, u16 event_type, u64 *e static int gaudi2_handle_arc_farm_sei_err(struct hl_device *hdev, u16 event_type) { - u32 i, sts_val, sts_clr_val = 0, error_count = 0; + u32 i, sts_val, sts_clr_val, error_count = 0, arc_farm; - sts_val = RREG32(mmARC_FARM_ARC0_AUX_ARC_SEI_INTR_STS); + for (arc_farm = 0 ; arc_farm < NUM_OF_ARC_FARMS_ARC ; arc_farm++) { + sts_clr_val = 0; + sts_val = RREG32(mmARC_FARM_ARC0_AUX_ARC_SEI_INTR_STS + + (arc_farm * ARC_FARM_OFFSET)); - for (i = 0 ; i < GAUDI2_NUM_OF_ARC_SEI_ERR_CAUSE ; i++) { - if (sts_val & BIT(i)) { - gaudi2_print_event(hdev, event_type, true, - "err cause: %s", gaudi2_arc_sei_error_cause[i]); - sts_clr_val |= BIT(i); - error_count++; + for (i = 0 ; i < GAUDI2_NUM_OF_ARC_SEI_ERR_CAUSE ; i++) { + if (sts_val & BIT(i)) { + gaudi2_print_event(hdev, event_type, true, + "ARC FARM ARC %u err cause: %s", + arc_farm, gaudi2_arc_sei_error_cause[i]); + sts_clr_val |= BIT(i); + error_count++; + } } + WREG32(mmARC_FARM_ARC0_AUX_ARC_SEI_INTR_CLR + (arc_farm * ARC_FARM_OFFSET), + sts_clr_val); } hl_check_for_glbl_errors(hdev); - WREG32(mmARC_FARM_ARC0_AUX_ARC_SEI_INTR_CLR, sts_clr_val); - return error_count; } @@ -8318,14 +8789,13 @@ static int gaudi2_handle_kdma_core_event(struct hl_device *hdev, u16 event_type, return error_count; } -static int gaudi2_handle_dma_core_event(struct hl_device *hdev, u16 event_type, - u64 intr_cause_data) +static int gaudi2_handle_dma_core_event(struct hl_device *hdev, u16 event_type, int sts_addr) { - u32 error_count = 0; + u32 error_count = 0, sts_val = RREG32(sts_addr); int i; for (i = 0 ; i < GAUDI2_NUM_OF_DMA_CORE_INTR_CAUSE ; i++) - if (intr_cause_data & BIT(i)) { + if (sts_val & BIT(i)) { gaudi2_print_event(hdev, event_type, true, "err cause: %s", gaudi2_dma_core_interrupts_cause[i]); error_count++; @@ -8336,6 +8806,27 @@ static int gaudi2_handle_dma_core_event(struct hl_device *hdev, u16 event_type, return error_count; } +static int gaudi2_handle_pdma_core_event(struct hl_device *hdev, u16 event_type, int pdma_idx) +{ + u32 sts_addr; + + sts_addr = mmPDMA0_CORE_ERR_CAUSE + pdma_idx * PDMA_OFFSET; + return gaudi2_handle_dma_core_event(hdev, event_type, sts_addr); +} + +static int gaudi2_handle_edma_core_event(struct hl_device *hdev, u16 event_type, int edma_idx) +{ + static const int edma_event_index_map[] = {2, 3, 0, 1, 6, 7, 4, 5}; + u32 sts_addr, index; + 
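+	/*
+	 * The HDMA*_CORE event numbering does not follow the (dcore, instance)
+	 * layout, hence the remap table above. As a worked illustration
+	 * (assuming NUM_OF_EDMA_PER_DCORE is 2, which is not visible in this
+	 * hunk): edma_idx 0 maps to entry 2, i.e. index / 2 = dcore 1 and
+	 * index % 2 = instance 0, so the computation below resolves to
+	 * mmDCORE0_EDMA0_CORE_ERR_CAUSE + 1 * DCORE_OFFSET + 0 * DCORE_EDMA_OFFSET.
+	 */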
+ index = edma_event_index_map[edma_idx]; + + sts_addr = mmDCORE0_EDMA0_CORE_ERR_CAUSE + + DCORE_OFFSET * (index / NUM_OF_EDMA_PER_DCORE) + + DCORE_EDMA_OFFSET * (index % NUM_OF_EDMA_PER_DCORE); + return gaudi2_handle_dma_core_event(hdev, event_type, sts_addr); +} + static void gaudi2_print_pcie_mstr_rr_mstr_if_razwi_info(struct hl_device *hdev, u64 *event_mask) { u32 mstr_if_base_addr = mmPCIE_MSTR_RR_MSTR_IF_RR_SHRD_HBW_BASE, razwi_happened_addr; @@ -8433,7 +8924,7 @@ static int gaudi2_handle_hif_fatal(struct hl_device *hdev, u16 event_type, u64 i static void gaudi2_handle_page_error(struct hl_device *hdev, u64 mmu_base, bool is_pmmu, u64 *event_mask) { - u32 valid, val, axid_l, axid_h; + u32 valid, val; u64 addr; valid = RREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_ACCESS_PAGE_ERROR_VALID)); @@ -8446,14 +8937,14 @@ static void gaudi2_handle_page_error(struct hl_device *hdev, u64 mmu_base, bool addr <<= 32; addr |= RREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_PAGE_ERROR_CAPTURE_VA)); - axid_l = RREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_PAGE_FAULT_ID_LSB)); - axid_h = RREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_PAGE_FAULT_ID_MSB)); + if (!is_pmmu) + addr = gaudi2_mmu_descramble_addr(hdev, addr); - dev_err_ratelimited(hdev->dev, "%s page fault on va 0x%llx, transaction id 0x%llX\n", - is_pmmu ? "PMMU" : "HMMU", addr, ((u64)axid_h << 32) + axid_l); + dev_err_ratelimited(hdev->dev, "%s page fault on va 0x%llx\n", + is_pmmu ? "PMMU" : "HMMU", addr); hl_handle_page_fault(hdev, addr, 0, is_pmmu, event_mask); - WREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_PAGE_ERROR_CAPTURE), 0); + WREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_ACCESS_PAGE_ERROR_VALID), 0); } static void gaudi2_handle_access_error(struct hl_device *hdev, u64 mmu_base, bool is_pmmu) @@ -8471,9 +8962,12 @@ static void gaudi2_handle_access_error(struct hl_device *hdev, u64 mmu_base, boo addr <<= 32; addr |= RREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_ACCESS_ERROR_CAPTURE_VA)); + if (!is_pmmu) + addr = gaudi2_mmu_descramble_addr(hdev, addr); + dev_err_ratelimited(hdev->dev, "%s access error on va 0x%llx\n", is_pmmu ? "PMMU" : "HMMU", addr); - WREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_ACCESS_ERROR_CAPTURE), 0); + WREG32(mmu_base + MMU_OFFSET(mmDCORE0_HMMU0_MMU_ACCESS_PAGE_ERROR_VALID), 0); } static int gaudi2_handle_mmu_spi_sei_generic(struct hl_device *hdev, u16 event_type, @@ -8534,7 +9028,7 @@ static int gaudi2_handle_sm_err(struct hl_device *hdev, u16 event_type, u8 sm_in continue; gaudi2_print_event(hdev, event_type, true, - "err cause: %s. %s: 0x%X\n", + "err cause: %s. %s: 0x%X", gaudi2_sm_sei_cause[i].cause_name, gaudi2_sm_sei_cause[i].log_name, sei_cause_log); @@ -8565,46 +9059,110 @@ static int gaudi2_handle_sm_err(struct hl_device *hdev, u16 event_type, u8 sm_in return error_count; } +static u64 get_hmmu_base(u16 event_type) +{ + u8 dcore, index_in_dcore; + + switch (event_type) { + case GAUDI2_EVENT_HMMU_0_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU0_SPI_BASE ... GAUDI2_EVENT_HMMU0_SECURITY_ERROR: + dcore = 0; + index_in_dcore = 0; + break; + case GAUDI2_EVENT_HMMU_1_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU1_SPI_BASE ... GAUDI2_EVENT_HMMU1_SECURITY_ERROR: + dcore = 1; + index_in_dcore = 0; + break; + case GAUDI2_EVENT_HMMU_2_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU2_SPI_BASE ... GAUDI2_EVENT_HMMU2_SECURITY_ERROR: + dcore = 0; + index_in_dcore = 1; + break; + case GAUDI2_EVENT_HMMU_3_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU3_SPI_BASE ... 
GAUDI2_EVENT_HMMU3_SECURITY_ERROR: + dcore = 1; + index_in_dcore = 1; + break; + case GAUDI2_EVENT_HMMU_4_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU4_SPI_BASE ... GAUDI2_EVENT_HMMU4_SECURITY_ERROR: + dcore = 3; + index_in_dcore = 2; + break; + case GAUDI2_EVENT_HMMU_5_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU5_SPI_BASE ... GAUDI2_EVENT_HMMU5_SECURITY_ERROR: + dcore = 2; + index_in_dcore = 2; + break; + case GAUDI2_EVENT_HMMU_6_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU6_SPI_BASE ... GAUDI2_EVENT_HMMU6_SECURITY_ERROR: + dcore = 3; + index_in_dcore = 3; + break; + case GAUDI2_EVENT_HMMU_7_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU7_SPI_BASE ... GAUDI2_EVENT_HMMU7_SECURITY_ERROR: + dcore = 2; + index_in_dcore = 3; + break; + case GAUDI2_EVENT_HMMU_8_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU8_SPI_BASE ... GAUDI2_EVENT_HMMU8_SECURITY_ERROR: + dcore = 0; + index_in_dcore = 2; + break; + case GAUDI2_EVENT_HMMU_9_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU9_SPI_BASE ... GAUDI2_EVENT_HMMU9_SECURITY_ERROR: + dcore = 1; + index_in_dcore = 2; + break; + case GAUDI2_EVENT_HMMU_10_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU10_SPI_BASE ... GAUDI2_EVENT_HMMU10_SECURITY_ERROR: + dcore = 0; + index_in_dcore = 3; + break; + case GAUDI2_EVENT_HMMU_11_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU11_SPI_BASE ... GAUDI2_EVENT_HMMU11_SECURITY_ERROR: + dcore = 1; + index_in_dcore = 3; + break; + case GAUDI2_EVENT_HMMU_12_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU12_SPI_BASE ... GAUDI2_EVENT_HMMU12_SECURITY_ERROR: + dcore = 3; + index_in_dcore = 0; + break; + case GAUDI2_EVENT_HMMU_13_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU13_SPI_BASE ... GAUDI2_EVENT_HMMU13_SECURITY_ERROR: + dcore = 2; + index_in_dcore = 0; + break; + case GAUDI2_EVENT_HMMU_14_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU14_SPI_BASE ... GAUDI2_EVENT_HMMU14_SECURITY_ERROR: + dcore = 3; + index_in_dcore = 1; + break; + case GAUDI2_EVENT_HMMU_15_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU15_SPI_BASE ... GAUDI2_EVENT_HMMU15_SECURITY_ERROR: + dcore = 2; + index_in_dcore = 1; + break; + default: + return ULONG_MAX; + } + + return mmDCORE0_HMMU0_MMU_BASE + dcore * DCORE_OFFSET + index_in_dcore * DCORE_HMMU_OFFSET; +} + static int gaudi2_handle_mmu_spi_sei_err(struct hl_device *hdev, u16 event_type, u64 *event_mask) { bool is_pmmu = false; u32 error_count = 0; u64 mmu_base; - u8 index; switch (event_type) { - case GAUDI2_EVENT_HMMU0_PAGE_FAULT_OR_WR_PERM ... GAUDI2_EVENT_HMMU3_SECURITY_ERROR: - index = (event_type - GAUDI2_EVENT_HMMU0_PAGE_FAULT_OR_WR_PERM) / 3; - mmu_base = mmDCORE0_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; - break; - case GAUDI2_EVENT_HMMU_0_AXI_ERR_RSP ... GAUDI2_EVENT_HMMU_3_AXI_ERR_RSP: - index = (event_type - GAUDI2_EVENT_HMMU_0_AXI_ERR_RSP); - mmu_base = mmDCORE0_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; - break; - case GAUDI2_EVENT_HMMU8_PAGE_FAULT_WR_PERM ... GAUDI2_EVENT_HMMU11_SECURITY_ERROR: - index = (event_type - GAUDI2_EVENT_HMMU8_PAGE_FAULT_WR_PERM) / 3; - mmu_base = mmDCORE1_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; - break; - case GAUDI2_EVENT_HMMU_8_AXI_ERR_RSP ... GAUDI2_EVENT_HMMU_11_AXI_ERR_RSP: - index = (event_type - GAUDI2_EVENT_HMMU_8_AXI_ERR_RSP); - mmu_base = mmDCORE1_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; - break; - case GAUDI2_EVENT_HMMU7_PAGE_FAULT_WR_PERM ... GAUDI2_EVENT_HMMU4_SECURITY_ERROR: - index = (event_type - GAUDI2_EVENT_HMMU7_PAGE_FAULT_WR_PERM) / 3; - mmu_base = mmDCORE2_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; - break; - case GAUDI2_EVENT_HMMU_7_AXI_ERR_RSP ... 
GAUDI2_EVENT_HMMU_4_AXI_ERR_RSP: - index = (event_type - GAUDI2_EVENT_HMMU_7_AXI_ERR_RSP); - mmu_base = mmDCORE2_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; - break; - case GAUDI2_EVENT_HMMU15_PAGE_FAULT_WR_PERM ... GAUDI2_EVENT_HMMU12_SECURITY_ERROR: - index = (event_type - GAUDI2_EVENT_HMMU15_PAGE_FAULT_WR_PERM) / 3; - mmu_base = mmDCORE3_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; - break; - case GAUDI2_EVENT_HMMU_15_AXI_ERR_RSP ... GAUDI2_EVENT_HMMU_12_AXI_ERR_RSP: - index = (event_type - GAUDI2_EVENT_HMMU_15_AXI_ERR_RSP); - mmu_base = mmDCORE3_HMMU0_MMU_BASE + index * DCORE_HMMU_OFFSET; + case GAUDI2_EVENT_HMMU_0_AXI_ERR_RSP ... GAUDI2_EVENT_HMMU_12_AXI_ERR_RSP: + case GAUDI2_EVENT_HMMU0_SPI_BASE ... GAUDI2_EVENT_HMMU12_SECURITY_ERROR: + mmu_base = get_hmmu_base(event_type); break; + case GAUDI2_EVENT_PMMU0_PAGE_FAULT_WR_PERM ... GAUDI2_EVENT_PMMU0_SECURITY_ERROR: case GAUDI2_EVENT_PMMU_AXI_ERR_RSP_0: is_pmmu = true; @@ -8614,6 +9172,9 @@ static int gaudi2_handle_mmu_spi_sei_err(struct hl_device *hdev, u16 event_type, return 0; } + if (mmu_base == ULONG_MAX) + return 0; + error_count = gaudi2_handle_mmu_spi_sei_generic(hdev, event_type, mmu_base, is_pmmu, event_mask); hl_check_for_glbl_errors(hdev); @@ -8740,12 +9301,12 @@ static bool gaudi2_handle_hbm_mc_sei_err(struct hl_device *hdev, u16 event_type, if (cause_idx > GAUDI2_NUM_OF_HBM_SEI_CAUSE - 1) { gaudi2_print_event(hdev, event_type, true, "err cause: %s", - "Invalid HBM SEI event cause (%d) provided by FW\n", cause_idx); + "Invalid HBM SEI event cause (%d) provided by FW", cause_idx); return true; } gaudi2_print_event(hdev, event_type, !sei_data->hdr.is_critical, - "System %s Error Interrupt - HBM(%u) MC(%u) MC_CH(%u) MC_PC(%u). Error cause: %s\n", + "System %s Error Interrupt - HBM(%u) MC(%u) MC_CH(%u) MC_PC(%u). Error cause: %s", sei_data->hdr.is_critical ? 
"Critical" : "Non-critical", hbm_id, mc_id, sei_data->hdr.mc_channel, sei_data->hdr.mc_pseudo_channel, hbm_mc_sei_cause[cause_idx]); @@ -8869,7 +9430,7 @@ static void gaudi2_print_out_of_sync_info(struct hl_device *hdev, u16 event_type struct hl_hw_queue *q = &hdev->kernel_queues[GAUDI2_QUEUE_ID_CPU_PQ]; gaudi2_print_event(hdev, event_type, false, - "FW: pi=%u, ci=%u, LKD: pi=%u, ci=%d\n", + "FW: pi=%u, ci=%u, LKD: pi=%u, ci=%d", le32_to_cpu(sync_err->pi), le32_to_cpu(sync_err->ci), q->pi, atomic_read(&q->ci)); } @@ -8883,7 +9444,7 @@ static int gaudi2_handle_pcie_p2p_msix(struct hl_device *hdev, u16 event_type) if (p2p_intr) { gaudi2_print_event(hdev, event_type, true, - "pcie p2p transaction terminated due to security, req_id(0x%x)\n", + "pcie p2p transaction terminated due to security, req_id(0x%x)", RREG32(mmPCIE_WRAP_P2P_REQ_ID)); WREG32(mmPCIE_WRAP_P2P_INTR, 0x1); @@ -8892,7 +9453,7 @@ static int gaudi2_handle_pcie_p2p_msix(struct hl_device *hdev, u16 event_type) if (msix_gw_intr) { gaudi2_print_event(hdev, event_type, true, - "pcie msi-x gen denied due to vector num check failure, vec(0x%X)\n", + "pcie msi-x gen denied due to vector num check failure, vec(0x%X)", RREG32(mmPCIE_WRAP_MSIX_GW_VEC)); WREG32(mmPCIE_WRAP_MSIX_GW_INTR, 0x1); @@ -8954,7 +9515,7 @@ static void gaudi2_print_cpu_pkt_failure_info(struct hl_device *hdev, u16 event_ struct hl_hw_queue *q = &hdev->kernel_queues[GAUDI2_QUEUE_ID_CPU_PQ]; gaudi2_print_event(hdev, event_type, false, - "FW reported sanity check failure, FW: pi=%u, ci=%u, LKD: pi=%u, ci=%d\n", + "FW reported sanity check failure, FW: pi=%u, ci=%u, LKD: pi=%u, ci=%d", le32_to_cpu(sync_err->pi), le32_to_cpu(sync_err->ci), q->pi, atomic_read(&q->ci)); } @@ -8974,11 +9535,11 @@ static int hl_arc_event_handle(struct hl_device *hdev, u16 event_type, q = (struct hl_engine_arc_dccm_queue_full_irq *) &payload; gaudi2_print_event(hdev, event_type, true, - "ARC DCCM Full event: EngId: %u, Intr_type: %u, Qidx: %u\n", + "ARC DCCM Full event: EngId: %u, Intr_type: %u, Qidx: %u", engine_id, intr_type, q->queue_index); return 1; default: - gaudi2_print_event(hdev, event_type, true, "Unknown ARC event type\n"); + gaudi2_print_event(hdev, event_type, true, "Unknown ARC event type"); return 0; } } @@ -8987,7 +9548,7 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent { struct gaudi2_device *gaudi2 = hdev->asic_specific; bool reset_required = false, is_critical = false; - u32 index, ctl, reset_flags = HL_DRV_RESET_HARD, error_count = 0; + u32 index, ctl, reset_flags = 0, error_count = 0; u64 event_mask = 0; u16 event_type; @@ -9024,19 +9585,18 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent break; case GAUDI2_EVENT_ARC_AXI_ERROR_RESPONSE_0: - reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; error_count = gaudi2_handle_arc_farm_sei_err(hdev, event_type); - event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; + event_mask |= HL_NOTIFIER_EVENT_USER_ENGINE_ERR; break; case GAUDI2_EVENT_CPU_AXI_ERR_RSP: error_count = gaudi2_handle_cpu_sei_err(hdev, event_type); - event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; + reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; + event_mask |= HL_NOTIFIER_EVENT_CRITICL_FW_ERR; break; case GAUDI2_EVENT_PDMA_CH0_AXI_ERR_RSP: case GAUDI2_EVENT_PDMA_CH1_AXI_ERR_RSP: - reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; error_count = gaudi2_handle_qm_sei_err(hdev, event_type, true, &event_mask); event_mask |= HL_NOTIFIER_EVENT_USER_ENGINE_ERR; break; @@ -9153,9 +9713,15 @@ static void gaudi2_handle_eqe(struct 
hl_device *hdev, struct hl_eq_entry *eq_ent event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; break; - case GAUDI2_EVENT_HDMA2_CORE ... GAUDI2_EVENT_PDMA1_CORE: - error_count = gaudi2_handle_dma_core_event(hdev, event_type, - le64_to_cpu(eq_entry->intr_cause.intr_cause_data)); + case GAUDI2_EVENT_HDMA2_CORE ... GAUDI2_EVENT_HDMA5_CORE: + index = event_type - GAUDI2_EVENT_HDMA2_CORE; + error_count = gaudi2_handle_edma_core_event(hdev, event_type, index); + event_mask |= HL_NOTIFIER_EVENT_USER_ENGINE_ERR; + break; + + case GAUDI2_EVENT_PDMA0_CORE ... GAUDI2_EVENT_PDMA1_CORE: + index = event_type - GAUDI2_EVENT_PDMA0_CORE; + error_count = gaudi2_handle_pdma_core_event(hdev, event_type, index); event_mask |= HL_NOTIFIER_EVENT_USER_ENGINE_ERR; break; @@ -9217,12 +9783,14 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent case GAUDI2_EVENT_PCIE_DRAIN_COMPLETE: error_count = gaudi2_handle_pcie_drain(hdev, &eq_entry->pcie_drain_ind_data); + reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; break; case GAUDI2_EVENT_PSOC59_RPM_ERROR_OR_DRAIN: error_count = gaudi2_handle_psoc_drain(hdev, le64_to_cpu(eq_entry->intr_cause.intr_cause_data)); + reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; break; @@ -9251,6 +9819,7 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent break; case GAUDI2_EVENT_PSOC_AXI_ERR_RSP: error_count = GAUDI2_NA_EVENT_CAUSE; + reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; break; case GAUDI2_EVENT_PSOC_PRSTN_FALL: @@ -9264,6 +9833,7 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent break; case GAUDI2_EVENT_PCIE_FATAL_ERR: error_count = GAUDI2_NA_EVENT_CAUSE; + reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; break; case GAUDI2_EVENT_TPC0_BMON_SPMU: @@ -9331,6 +9901,7 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent case GAUDI2_EVENT_CPU_PKT_QUEUE_OUT_SYNC: gaudi2_print_out_of_sync_info(hdev, event_type, &eq_entry->pkt_sync_err); error_count = GAUDI2_NA_EVENT_CAUSE; + reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; break; @@ -9372,6 +9943,7 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent case GAUDI2_EVENT_CPU_PKT_SANITY_FAILED: gaudi2_print_cpu_pkt_failure_info(hdev, event_type, &eq_entry->pkt_sync_err); error_count = GAUDI2_NA_EVENT_CAUSE; + reset_flags |= HL_DRV_RESET_FW_FATAL_ERR; event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; break; @@ -9381,7 +9953,7 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent break; case GAUDI2_EVENT_CPU_FP32_NOT_SUPPORTED: - case GAUDI2_EVENT_DEV_RESET_REQ: + case GAUDI2_EVENT_CPU_DEV_RESET_REQ: event_mask |= HL_NOTIFIER_EVENT_GENERAL_HW_ERR; error_count = GAUDI2_NA_EVENT_CAUSE; is_critical = true; @@ -9403,12 +9975,18 @@ static void gaudi2_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_ent gaudi2_print_event(hdev, event_type, true, "%d", event_type); else if (error_count == 0) gaudi2_print_event(hdev, event_type, true, - "No error cause for H/W event %u\n", event_type); + "No error cause for H/W event %u", event_type); - if ((gaudi2_irq_map_table[event_type].reset || reset_required) && - (hdev->hard_reset_on_fw_events || - (hdev->asic_prop.fw_security_enabled && is_critical))) - goto reset_device; + if 
((gaudi2_irq_map_table[event_type].reset != EVENT_RESET_TYPE_NONE) || + reset_required) { + if (reset_required || + (gaudi2_irq_map_table[event_type].reset == EVENT_RESET_TYPE_HARD)) + reset_flags |= HL_DRV_RESET_HARD; + + if (hdev->hard_reset_on_fw_events || + (hdev->asic_prop.fw_security_enabled && is_critical)) + goto reset_device; + } /* Send unmask irq only for interrupts not classified as MSG */ if (!gaudi2_irq_map_table[event_type].msg) @@ -9426,6 +10004,10 @@ reset_device: } else { reset_flags |= HL_DRV_RESET_DELAY; } + /* escalate general hw errors to critical/fatal error */ + if (event_mask & HL_NOTIFIER_EVENT_GENERAL_HW_ERR) + hl_handle_critical_hw_err(hdev, event_type, &event_mask); + event_mask |= HL_NOTIFIER_EVENT_DEVICE_RESET; hl_device_cond_reset(hdev, reset_flags, event_mask); } @@ -9832,16 +10414,23 @@ static int gaudi2_debugfs_read_dma(struct hl_device *hdev, u64 addr, u32 size, v /* Create mapping on asic side */ mutex_lock(&hdev->mmu_lock); + rc = hl_mmu_map_contiguous(ctx, reserved_va_base, host_mem_dma_addr, SZ_2M); - hl_mmu_invalidate_cache_range(hdev, false, + if (rc) { + dev_err(hdev->dev, "Failed to create mapping on asic mmu\n"); + goto unreserve_va; + } + + rc = hl_mmu_invalidate_cache_range(hdev, false, MMU_OP_USERPTR | MMU_OP_SKIP_LOW_CACHE_INV, ctx->asid, reserved_va_base, SZ_2M); - mutex_unlock(&hdev->mmu_lock); if (rc) { - dev_err(hdev->dev, "Failed to create mapping on asic mmu\n"); + hl_mmu_unmap_contiguous(ctx, reserved_va_base, SZ_2M); goto unreserve_va; } + mutex_unlock(&hdev->mmu_lock); + /* Enable MMU on KDMA */ gaudi2_kdma_set_mmbp_asid(hdev, false, ctx->asid); @@ -9870,11 +10459,16 @@ static int gaudi2_debugfs_read_dma(struct hl_device *hdev, u64 addr, u32 size, v gaudi2_kdma_set_mmbp_asid(hdev, true, HL_KERNEL_ASID_ID); mutex_lock(&hdev->mmu_lock); - hl_mmu_unmap_contiguous(ctx, reserved_va_base, SZ_2M); - hl_mmu_invalidate_cache_range(hdev, false, MMU_OP_USERPTR, + + rc = hl_mmu_unmap_contiguous(ctx, reserved_va_base, SZ_2M); + if (rc) + goto unreserve_va; + + rc = hl_mmu_invalidate_cache_range(hdev, false, MMU_OP_USERPTR, ctx->asid, reserved_va_base, SZ_2M); - mutex_unlock(&hdev->mmu_lock); + unreserve_va: + mutex_unlock(&hdev->mmu_lock); hl_unreserve_va_block(hdev, ctx, reserved_va_base, SZ_2M); free_data_buffer: hl_asic_dma_free_coherent(hdev, SZ_2M, host_mem_virtual_addr, host_mem_dma_addr); @@ -9927,17 +10521,24 @@ static int gaudi2_internal_cb_pool_init(struct hl_device *hdev, struct hl_ctx *c } mutex_lock(&hdev->mmu_lock); + rc = hl_mmu_map_contiguous(ctx, hdev->internal_cb_va_base, hdev->internal_cb_pool_dma_addr, HOST_SPACE_INTERNAL_CB_SZ); - hl_mmu_invalidate_cache(hdev, false, MMU_OP_USERPTR); - mutex_unlock(&hdev->mmu_lock); - if (rc) goto unreserve_internal_cb_pool; + rc = hl_mmu_invalidate_cache(hdev, false, MMU_OP_USERPTR); + if (rc) + goto unmap_internal_cb_pool; + + mutex_unlock(&hdev->mmu_lock); + return 0; +unmap_internal_cb_pool: + hl_mmu_unmap_contiguous(ctx, hdev->internal_cb_va_base, HOST_SPACE_INTERNAL_CB_SZ); unreserve_internal_cb_pool: + mutex_unlock(&hdev->mmu_lock); hl_unreserve_va_block(hdev, ctx, hdev->internal_cb_va_base, HOST_SPACE_INTERNAL_CB_SZ); destroy_internal_cb_pool: gen_pool_destroy(hdev->internal_cb_pool); @@ -10724,6 +11325,7 @@ static const struct hl_asic_funcs gaudi2_funcs = { .access_dev_mem = hl_access_dev_mem, .set_dram_bar_base = gaudi2_set_hbm_bar_base, .set_engine_cores = gaudi2_set_engine_cores, + .set_engines = gaudi2_set_engines, .send_device_activity = gaudi2_send_device_activity, 
 	.set_dram_properties = gaudi2_set_dram_properties,
 	.set_binning_masks = gaudi2_set_binning_masks,
diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2P.h b/drivers/accel/habanalabs/gaudi2/gaudi2P.h
index 2687404d9d21..1cebe707772e 100644
--- a/drivers/accel/habanalabs/gaudi2/gaudi2P.h
+++ b/drivers/accel/habanalabs/gaudi2/gaudi2P.h
@@ -240,6 +240,8 @@
 #define GAUDI2_SOB_INCREMENT_BY_ONE	(FIELD_PREP(DCORE0_SYNC_MNGR_OBJS_SOB_OBJ_VAL_MASK, 1) | \
 					FIELD_PREP(DCORE0_SYNC_MNGR_OBJS_SOB_OBJ_INC_MASK, 1))
 
+#define GAUDI2_NUM_TESTED_QS		(GAUDI2_QUEUE_ID_CPU_PQ - GAUDI2_QUEUE_ID_PDMA_0_0)
+
 #define GAUDI2_NUM_OF_GLBL_ERR_CAUSE	8
 
 enum gaudi2_reserved_sob_id {
@@ -387,6 +389,8 @@ enum gaudi2_edma_id {
  * We have 64 CQ's per dcore, CQ0 in dcore 0 is reserved for legacy mode
  */
 #define GAUDI2_NUM_USER_INTERRUPTS 255
+#define GAUDI2_NUM_RESERVED_INTERRUPTS 1
+#define GAUDI2_TOTAL_USER_INTERRUPTS (GAUDI2_NUM_USER_INTERRUPTS + GAUDI2_NUM_RESERVED_INTERRUPTS)
 
 enum gaudi2_irq_num {
 	GAUDI2_IRQ_NUM_EVENT_QUEUE = GAUDI2_EVENT_QUEUE_MSIX_IDX,
@@ -410,12 +414,15 @@ enum gaudi2_irq_num {
 	GAUDI2_IRQ_NUM_SHARED_DEC0_ABNRM,
 	GAUDI2_IRQ_NUM_SHARED_DEC1_NRM,
 	GAUDI2_IRQ_NUM_SHARED_DEC1_ABNRM,
+	GAUDI2_IRQ_NUM_DEC_LAST = GAUDI2_IRQ_NUM_SHARED_DEC1_ABNRM,
 	GAUDI2_IRQ_NUM_COMPLETION,
 	GAUDI2_IRQ_NUM_NIC_PORT_FIRST,
 	GAUDI2_IRQ_NUM_NIC_PORT_LAST = (GAUDI2_IRQ_NUM_NIC_PORT_FIRST + NIC_NUMBER_OF_PORTS - 1),
+	GAUDI2_IRQ_NUM_TPC_ASSERT,
 	GAUDI2_IRQ_NUM_RESERVED_FIRST,
-	GAUDI2_IRQ_NUM_RESERVED_LAST = (GAUDI2_MSIX_ENTRIES - GAUDI2_NUM_USER_INTERRUPTS - 1),
-	GAUDI2_IRQ_NUM_USER_FIRST,
+	GAUDI2_IRQ_NUM_RESERVED_LAST = (GAUDI2_MSIX_ENTRIES - GAUDI2_TOTAL_USER_INTERRUPTS - 1),
+	GAUDI2_IRQ_NUM_UNEXPECTED_ERROR = RESERVED_MSIX_UNEXPECTED_USER_ERROR_INTERRUPT,
+	GAUDI2_IRQ_NUM_USER_FIRST = GAUDI2_IRQ_NUM_UNEXPECTED_ERROR + 1,
 	GAUDI2_IRQ_NUM_USER_LAST = (GAUDI2_IRQ_NUM_USER_FIRST + GAUDI2_NUM_USER_INTERRUPTS - 1),
 	GAUDI2_IRQ_NUM_LAST = (GAUDI2_MSIX_ENTRIES - 1)
 };
@@ -448,6 +455,17 @@ struct dup_block_ctx {
 };
 
 /**
+ * struct gaudi2_queues_test_info - Holds the addresses of the messages used for testing the
+ * device queues.
+ * @dma_addr: the address used by the HW for accessing the message.
+ * @kern_addr: the address used by the driver for accessing the message.
+ */
+struct gaudi2_queues_test_info {
+	dma_addr_t dma_addr;
+	void *kern_addr;
+};
+
+/**
  * struct gaudi2_device - ASIC specific manage structure.
 * @cpucp_info_get: get information on device from CPU-CP
 * @mapped_blocks: array that holds the base address and size of all blocks
@@ -505,6 +523,7 @@ struct dup_block_ctx {
 * @flush_db_fifo: flag to force flush DB FIFO after a write.
 * @hbm_cfg: HBM subsystem settings
 * @hw_queues_lock_mutex: used by simulator instead of hw_queues_lock.
+ * @queues_test_info: information used by the driver when testing the HW queues.
 */
 struct gaudi2_device {
 	int (*cpucp_info_get)(struct hl_device *hdev);
@@ -532,6 +551,9 @@ struct gaudi2_device {
 	u32 events_stat[GAUDI2_EVENT_SIZE];
 	u32 events_stat_aggregate[GAUDI2_EVENT_SIZE];
 	u32 num_of_valid_hw_events;
+
+	/* Queue testing */
+	struct gaudi2_queues_test_info queues_test_info[GAUDI2_NUM_TESTED_QS];
 };
 
 /*
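Since GAUDI2_NUM_TESTED_QS is defined above as the distance from GAUDI2_QUEUE_ID_PDMA_0_0 to GAUDI2_QUEUE_ID_CPU_PQ, the per-queue test messages are evidently meant to be indexed by the hardware queue id relative to PDMA_0_0. A minimal sketch of such a lookup (hypothetical helper, not part of this patch):

	static struct gaudi2_queues_test_info *
	gaudi2_queue_test_info(struct gaudi2_device *gaudi2, u32 hw_queue_id)
	{
		/* only meaningful for the tested range PDMA_0_0 .. CPU_PQ - 1 */
		return &gaudi2->queues_test_info[hw_queue_id - GAUDI2_QUEUE_ID_PDMA_0_0];
	}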
diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2_coresight.c b/drivers/accel/habanalabs/gaudi2/gaudi2_coresight.c
index 1dfbe293ecec..25b5368f37dd 100644
--- a/drivers/accel/habanalabs/gaudi2/gaudi2_coresight.c
+++ b/drivers/accel/habanalabs/gaudi2/gaudi2_coresight.c
@@ -2657,7 +2657,7 @@ int gaudi2_coresight_init(struct hl_device *hdev)
 	/*
 	 * Mask out all the disabled binned offsets,
 	 * so when the user requests to configure a binned or masked-out component,
-	 * driver will ignore programing it ( happens when offset value is set to 0x0 )
+	 * the driver will ignore programming it (happens when the offset value is set to 0x0);
 	 * this is being set in gaudi2_coresight_set_disabled_components
 	 */
diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2_masks.h b/drivers/accel/habanalabs/gaudi2/gaudi2_masks.h
index e9ac87828221..e6664c4a2cf5 100644
--- a/drivers/accel/habanalabs/gaudi2/gaudi2_masks.h
+++ b/drivers/accel/habanalabs/gaudi2/gaudi2_masks.h
@@ -79,7 +79,6 @@
 		 DCORE0_MME_CTRL_LO_ARCH_STATUS_QM_RDY_MASK)
 
 #define TPC_IDLE_MASK		(DCORE0_TPC0_CFG_STATUS_SCALAR_PIPE_EMPTY_MASK | \
-				DCORE0_TPC0_CFG_STATUS_VECTOR_PIPE_EMPTY_MASK | \
 				DCORE0_TPC0_CFG_STATUS_IQ_EMPTY_MASK | \
 				DCORE0_TPC0_CFG_STATUS_SB_EMPTY_MASK | \
 				DCORE0_TPC0_CFG_STATUS_QM_IDLE_MASK | \
@@ -87,6 +86,8 @@
 
 #define DCORE0_TPC0_QM_CGM_STS_AGENT_IDLE_MASK	0x100
 
+#define DCORE0_TPC0_EML_CFG_DBG_CNT_DBG_EXIT_MASK	0x40
+
 /* CGM_IDLE_MASK is valid for all engines CGM idle check */
 #define CGM_IDLE_MASK		DCORE0_TPC0_QM_CGM_STS_AGENT_IDLE_MASK
diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2_security.c b/drivers/accel/habanalabs/gaudi2/gaudi2_security.c
index a212f82e6604..694735f9e6e6 100644
--- a/drivers/accel/habanalabs/gaudi2/gaudi2_security.c
+++ b/drivers/accel/habanalabs/gaudi2/gaudi2_security.c
@@ -1595,6 +1595,7 @@ static const u32 gaudi2_pb_dcr0_tpc0_unsecured_regs[] = {
 	mmDCORE0_TPC0_CFG_KERNEL_SRF_30,
 	mmDCORE0_TPC0_CFG_KERNEL_SRF_31,
 	mmDCORE0_TPC0_CFG_TPC_SB_L0CD,
+	mmDCORE0_TPC0_CFG_TPC_ID,
 	mmDCORE0_TPC0_CFG_QM_KERNEL_ID_INC,
 	mmDCORE0_TPC0_CFG_QM_TID_BASE_SIZE_HIGH_DIM_0,
 	mmDCORE0_TPC0_CFG_QM_TID_BASE_SIZE_HIGH_DIM_1,
diff --git a/drivers/accel/habanalabs/goya/goya.c b/drivers/accel/habanalabs/goya/goya.c
index df65e9bdc18a..fb0ac9df841a 100644
--- a/drivers/accel/habanalabs/goya/goya.c
+++ b/drivers/accel/habanalabs/goya/goya.c
@@ -472,6 +472,8 @@ int goya_set_fixed_properties(struct hl_device *hdev)
 	prop->max_pending_cs = GOYA_MAX_PENDING_CS;
 
 	prop->first_available_user_interrupt = USHRT_MAX;
+	prop->tpc_interrupt_id = USHRT_MAX;
+	prop->eq_interrupt_id = GOYA_EVENT_QUEUE_MSIX_IDX;
 
 	for (i = 0 ; i < HL_MAX_DCORES ; i++)
 		prop->first_available_cq[i] = USHRT_MAX;
@@ -668,13 +670,18 @@ pci_init:
 	rc = hl_fw_read_preboot_status(hdev);
 	if (rc) {
 		if (hdev->reset_on_preboot_fail)
+			/* we are already on a failure flow, so don't check if hw_fini fails.
*/ hdev->asic_funcs->hw_fini(hdev, true, false); goto pci_fini; } if (goya_get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { dev_dbg(hdev->dev, "H/W state is dirty, must reset before initializing\n"); - hdev->asic_funcs->hw_fini(hdev, true, false); + rc = hdev->asic_funcs->hw_fini(hdev, true, false); + if (rc) { + dev_err(hdev->dev, "failed to reset HW in dirty state (%d)\n", rc); + goto pci_fini; + } } if (!hdev->pldm) { @@ -2782,7 +2789,7 @@ disable_queues: return rc; } -static void goya_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) +static int goya_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) { struct goya_device *goya = hdev->asic_specific; u32 reset_timeout_ms, cpu_timeout_ms, status; @@ -2828,17 +2835,17 @@ static void goya_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) msleep(reset_timeout_ms); status = RREG32(mmPSOC_GLOBAL_CONF_BTM_FSM); - if (status & PSOC_GLOBAL_CONF_BTM_FSM_STATE_MASK) - dev_err(hdev->dev, - "Timeout while waiting for device to reset 0x%x\n", - status); + if (status & PSOC_GLOBAL_CONF_BTM_FSM_STATE_MASK) { + dev_err(hdev->dev, "Timeout while waiting for device to reset 0x%x\n", status); + return -ETIMEDOUT; + } if (!hard_reset && goya) { goya->hw_cap_initialized &= ~(HW_CAP_DMA | HW_CAP_MME | HW_CAP_GOLDEN | HW_CAP_TPC); WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, GOYA_ASYNC_EVENT_ID_SOFT_RESET); - return; + return 0; } /* Chicken bit to re-initiate boot sequencer flow */ @@ -2857,6 +2864,7 @@ static void goya_hw_fini(struct hl_device *hdev, bool hard_reset, bool fw_reset) memset(goya->events_stat, 0, sizeof(goya->events_stat)); } + return 0; } int goya_suspend(struct hl_device *hdev) diff --git a/drivers/accel/habanalabs/include/common/cpucp_if.h b/drivers/accel/habanalabs/include/common/cpucp_if.h index d713252a4f13..8bbe685458c4 100644 --- a/drivers/accel/habanalabs/include/common/cpucp_if.h +++ b/drivers/accel/habanalabs/include/common/cpucp_if.h @@ -357,6 +357,7 @@ struct hl_eq_addr_dec_intr_data { struct hl_eq_entry { struct hl_eq_header hdr; union { + __le64 data_placeholder; struct hl_eq_ecc_data ecc_data; struct hl_eq_hbm_ecc_data hbm_ecc_data; /* Gaudi1 HBM */ struct hl_eq_sm_sei_data sm_sei_data; @@ -661,6 +662,9 @@ enum pq_init_status { * CPUCP_PACKET_ACTIVE_STATUS_SET - * LKD sends FW indication whether device is free or in use, this indication is reported * also to the BMC. + * + * CPUCP_PACKET_REGISTER_INTERRUPTS - + * Packet to register interrupts indicating LKD is ready to receive events from FW. */ enum cpucp_packet_id { @@ -725,6 +729,8 @@ enum cpucp_packet_id { CPUCP_PACKET_RESERVED9, /* not used */ CPUCP_PACKET_RESERVED10, /* not used */ CPUCP_PACKET_RESERVED11, /* not used */ + CPUCP_PACKET_RESERVED12, /* internal */ + CPUCP_PACKET_REGISTER_INTERRUPTS, /* internal */ CPUCP_PACKET_ID_MAX /* must be last */ }; @@ -1127,6 +1133,7 @@ struct cpucp_security_info { * (0 = functional 1 = binned) * @interposer_version: Interposer version programmed in eFuse * @substrate_version: Substrate version programmed in eFuse + * @fw_hbm_region_size: Size in bytes of FW reserved region in HBM. 
* @fw_os_version: Firmware OS Version */ struct cpucp_info { @@ -1154,7 +1161,7 @@ struct cpucp_info { __u8 substrate_version; __u8 reserved2; struct cpucp_security_info sec_info; - __le32 reserved3; + __le32 fw_hbm_region_size; __u8 pll_map[PLL_MAP_LEN]; __le64 mme_binning_mask; __u8 fw_os_version[VERSION_MAX_LEN]; diff --git a/drivers/accel/habanalabs/include/common/hl_boot_if.h b/drivers/accel/habanalabs/include/common/hl_boot_if.h index 2256add057c5..c58d76a2705c 100644 --- a/drivers/accel/habanalabs/include/common/hl_boot_if.h +++ b/drivers/accel/habanalabs/include/common/hl_boot_if.h @@ -770,15 +770,23 @@ enum hl_components { HL_COMPONENTS_ARMCP, HL_COMPONENTS_CPLD, HL_COMPONENTS_UBOOT, + HL_COMPONENTS_FUSE, HL_COMPONENTS_MAX_NUM = 16 }; +#define NAME_MAX_LEN 32 /* bytes */ +struct hl_module_data { + __u8 name[NAME_MAX_LEN]; + __u8 version[VERSION_MAX_LEN]; +}; + /** * struct hl_component_versions - versions associated with hl component. * @struct_size: size of all the struct (including dynamic size of modules). * @modules_offset: offset of the modules field in this struct. * @component: version of the component itself. * @fw_os: Firmware OS Version. + * @comp_name: Name of the component. * @modules_mask: i'th bit (from LSB) is a flag - on if module i in enum * hl_modules is used. * @modules_counter: number of set bits in modules_mask. @@ -791,45 +799,14 @@ struct hl_component_versions { __le16 modules_offset; __u8 component[VERSION_MAX_LEN]; __u8 fw_os[VERSION_MAX_LEN]; + __u8 comp_name[NAME_MAX_LEN]; __le16 modules_mask; __u8 modules_counter; __u8 reserved[1]; - __u8 modules[][VERSION_MAX_LEN]; -}; - -/** - * struct hl_fw_versions - all versions (fuse, cpucp's components with their - * modules) - * @struct_size: size of all the struct (including dynamic size of components). - * @components_offset: offset of the components field in this struct. - * @fuse: silicon production FUSE information. - * @components_mask: i'th bit (from LSB) is a flag - on if component i in enum - * hl_components is used. - * @components_counter: number of set bits in components_mask. - * @reserved: reserved for future use. - * @components: versions of hl components. Index i corresponds to the i'th bit - * that is *on* in components_mask. For example, if - * components_mask=0b101, then *components represents arcpid and - * *(hl_component_versions*)((char*)components + 1') represents - * preboot, where 1' = components[0].struct_size. 
- */ -struct hl_fw_versions { - __le16 struct_size; - __le16 components_offset; - __u8 fuse[VERSION_MAX_LEN]; - __le16 components_mask; - __u8 components_counter; - __u8 reserved[1]; - struct hl_component_versions components[]; + struct hl_module_data modules[]; }; -/* Max size of struct hl_component_versions */ -#define HL_COMPONENT_VERSIONS_MAX_SIZE \ - (sizeof(struct hl_component_versions) + HL_MODULES_MAX_NUM * \ - VERSION_MAX_LEN) - -/* Max size of struct hl_fw_versions */ -#define HL_FW_VERSIONS_MAX_SIZE (sizeof(struct hl_fw_versions) + \ - HL_COMPONENTS_MAX_NUM * HL_COMPONENT_VERSIONS_MAX_SIZE) +/* Max size of fit size */ +#define HL_FW_VERSIONS_FIT_SIZE 4096 #endif /* HL_BOOT_IF_H */ diff --git a/drivers/accel/habanalabs/include/gaudi2/asic_reg/gaudi2_regs.h b/drivers/accel/habanalabs/include/gaudi2/asic_reg/gaudi2_regs.h index 0bf3092bfeea..6c58af614236 100644 --- a/drivers/accel/habanalabs/include/gaudi2/asic_reg/gaudi2_regs.h +++ b/drivers/accel/habanalabs/include/gaudi2/asic_reg/gaudi2_regs.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 * - * Copyright 2020-2022 HabanaLabs, Ltd. + * Copyright 2020-2023 HabanaLabs, Ltd. * All Rights Reserved. * */ @@ -164,6 +164,8 @@ #define mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR 0x4800040 +#define mmDCORE0_TPC0_EML_CFG_DBG_CNT 0x40000 + #define SM_OBJS_PROT_BITS_OFFS 0x14000 #define DCORE_OFFSET (mmDCORE1_TPC0_QM_BASE - mmDCORE0_TPC0_QM_BASE) @@ -185,7 +187,10 @@ #define TPC_CFG_STALL_ON_ERR_OFFSET (mmDCORE0_TPC0_CFG_STALL_ON_ERR - mmDCORE0_TPC0_CFG_BASE) #define TPC_CFG_TPC_INTR_MASK_OFFSET (mmDCORE0_TPC0_CFG_TPC_INTR_MASK - mmDCORE0_TPC0_CFG_BASE) #define TPC_CFG_MSS_CONFIG_OFFSET (mmDCORE0_TPC0_CFG_MSS_CONFIG - mmDCORE0_TPC0_CFG_BASE) +#define TPC_EML_CFG_DBG_CNT_OFFSET (mmDCORE0_TPC0_EML_CFG_DBG_CNT - mmDCORE0_TPC0_EML_CFG_BASE) +#define EDMA_CORE_CFG_STALL_OFFSET (mmDCORE0_EDMA0_CORE_CFG_1 - mmDCORE0_EDMA0_CORE_BASE) +#define MME_CTRL_LO_QM_STALL_OFFSET (mmDCORE0_MME_CTRL_LO_QM_STALL - mmDCORE0_MME_CTRL_LO_BASE) #define MME_ACC_INTR_MASK_OFFSET (mmDCORE0_MME_ACC_INTR_MASK - mmDCORE0_MME_ACC_BASE) #define MME_ACC_WR_AXI_AGG_COUT0_OFFSET (mmDCORE0_MME_ACC_WR_AXI_AGG_COUT0 - mmDCORE0_MME_ACC_BASE) #define MME_ACC_WR_AXI_AGG_COUT1_OFFSET (mmDCORE0_MME_ACC_WR_AXI_AGG_COUT1 - mmDCORE0_MME_ACC_BASE) @@ -538,6 +543,8 @@ #define HBM_MC_SPI_IEEE1500_COMP_MASK BIT(3) #define HBM_MC_SPI_IEEE1500_PAUSED_MASK BIT(4) +#define ARC_FARM_OFFSET (mmARC_FARM_ARC1_AUX_BASE - mmARC_FARM_ARC0_AUX_BASE) + #include "nic0_qpc0_regs.h" #include "nic0_qm0_regs.h" #include "nic0_qm_arc_aux0_regs.h" diff --git a/drivers/accel/habanalabs/include/gaudi2/gaudi2.h b/drivers/accel/habanalabs/include/gaudi2/gaudi2.h index 5b4f9e108798..0231d6c55b4a 100644 --- a/drivers/accel/habanalabs/include/gaudi2/gaudi2.h +++ b/drivers/accel/habanalabs/include/gaudi2/gaudi2.h @@ -63,6 +63,8 @@ #define RESERVED_VA_RANGE_FOR_ARC_ON_HOST_HPAGE_START 0xFFF0F80000000000ull #define RESERVED_VA_RANGE_FOR_ARC_ON_HOST_HPAGE_END 0xFFF0FFFFFFFFFFFFull +#define RESERVED_MSIX_UNEXPECTED_USER_ERROR_INTERRUPT 256 + #define GAUDI2_MSIX_ENTRIES 512 #define QMAN_PQ_ENTRY_SIZE 16 /* Bytes */ diff --git a/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_events.h b/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_events.h index 50852cc80373..f661068d0c5f 100644 --- a/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_events.h +++ b/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_events.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 * - * Copyright 2018-2021 HabanaLabs, Ltd. 
+ * Copyright 2018-2022 HabanaLabs, Ltd. * All Rights Reserved. * */ @@ -958,7 +958,7 @@ enum gaudi2_async_event_id { GAUDI2_EVENT_CPU11_STATUS_NIC11_ENG1 = 1318, GAUDI2_EVENT_ARC_DCCM_FULL = 1319, GAUDI2_EVENT_CPU_FP32_NOT_SUPPORTED = 1320, - GAUDI2_EVENT_DEV_RESET_REQ = 1321, + GAUDI2_EVENT_CPU_DEV_RESET_REQ = 1321, GAUDI2_EVENT_SIZE, }; diff --git a/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h b/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h index 82be01bea98e..ad01fc4e9940 100644 --- a/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h +++ b/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h @@ -10,6 +10,12 @@ ** DO NOT EDIT BELOW ** ************************************/ +enum event_reset_type { + EVENT_RESET_TYPE_NONE, + EVENT_RESET_TYPE_COMPUTE, + EVENT_RESET_TYPE_HARD, +}; + #ifndef __GAUDI2_ASYNC_IDS_MAP_EVENTS_EXT_H_ #define __GAUDI2_ASYNC_IDS_MAP_EVENTS_EXT_H_ @@ -23,2650 +29,2650 @@ struct gaudi2_async_events_ids_map { }; static struct gaudi2_async_events_ids_map gaudi2_irq_map_table[] = { - { .fc_id = 0, .cpu_id = 0, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 1, .cpu_id = 1, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 2, .cpu_id = 2, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 3, .cpu_id = 3, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 4, .cpu_id = 4, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 5, .cpu_id = 5, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 6, .cpu_id = 6, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 7, .cpu_id = 7, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 8, .cpu_id = 8, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 9, .cpu_id = 9, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 10, .cpu_id = 10, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 11, .cpu_id = 11, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 12, .cpu_id = 12, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 13, .cpu_id = 13, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 14, .cpu_id = 14, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 15, .cpu_id = 15, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 16, .cpu_id = 16, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 17, .cpu_id = 17, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 18, .cpu_id = 18, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 19, .cpu_id = 19, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 20, .cpu_id = 20, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 21, .cpu_id = 21, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 22, .cpu_id = 22, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 23, .cpu_id = 23, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 24, .cpu_id = 24, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 25, .cpu_id = 25, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 26, .cpu_id = 26, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 27, .cpu_id = 27, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 28, .cpu_id = 28, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 29, .cpu_id = 29, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 30, .cpu_id = 30, 
diff --git a/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h b/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h
index 82be01bea98e..ad01fc4e9940 100644
--- a/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h
+++ b/drivers/accel/habanalabs/include/gaudi2/gaudi2_async_ids_map_extended.h
@@ -10,6 +10,12 @@
 ** DO NOT EDIT BELOW **
 ************************************/
 
+enum event_reset_type {
+	EVENT_RESET_TYPE_NONE,
+	EVENT_RESET_TYPE_COMPUTE,
+	EVENT_RESET_TYPE_HARD,
+};
+
 #ifndef __GAUDI2_ASYNC_IDS_MAP_EVENTS_EXT_H_
 #define __GAUDI2_ASYNC_IDS_MAP_EVENTS_EXT_H_
 
@@ -23,2650 +29,2650 @@ struct gaudi2_async_events_ids_map {
 };
 
 static struct gaudi2_async_events_ids_map gaudi2_irq_map_table[] = {
-	{ .fc_id = 0, .cpu_id = 0, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1, .cpu_id = 1, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 2, .cpu_id = 2, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 3, .cpu_id = 3, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 4, .cpu_id = 4, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 5, .cpu_id = 5, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 6, .cpu_id = 6, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 7, .cpu_id = 7, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 8, .cpu_id = 8, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 9, .cpu_id = 9, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 10, .cpu_id = 10, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 11, .cpu_id = 11, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 12, .cpu_id = 12, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 13, .cpu_id = 13, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 14, .cpu_id = 14, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 15, .cpu_id = 15, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 16, .cpu_id = 16, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 17, .cpu_id = 17, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 18, .cpu_id = 18, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 19, .cpu_id = 19, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 20, .cpu_id = 20, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 21, .cpu_id = 21, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 22, .cpu_id = 22, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 23, .cpu_id = 23, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 24, .cpu_id = 24, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 25, .cpu_id = 25, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 26, .cpu_id = 26, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 27, .cpu_id = 27, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 28, .cpu_id = 28, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 29, .cpu_id = 29, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 30, .cpu_id = 30, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 31, .cpu_id = 31, .valid = 0,
-		.msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 32, .cpu_id = 32, .valid = 1,
-		.msg = 0, .reset = 0, .name = "PCIE_CORE_SERR" },
-	{ .fc_id = 33, .cpu_id = 33, .valid = 1,
-		.msg = 0, .reset = 1, .name = "PCIE_CORE_DERR" },
-	{ .fc_id = 34, .cpu_id = 34, .valid = 1,
-		.msg = 0, .reset = 0, .name = "PCIE_IF_SERR" },
-	{ .fc_id = 35, .cpu_id = 35, .valid = 1,
-		.msg = 0, .reset = 1, .name = "PCIE_IF_DERR" },
-	{ .fc_id = 36, .cpu_id = 36, .valid = 1,
-		.msg = 0, .reset = 0, .name = "PCIE_PHY_SERR" },
-	{ .fc_id = 37, .cpu_id = 37, .valid = 1,
-		.msg = 0, .reset = 1, .name = "PCIE_PHY_DERR" },
-	{ .fc_id = 38, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC0_ECC_SERR" },
-	{ .fc_id = 39, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC1_ECC_SERR" },
-	{ .fc_id = 40, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC2_ECC_SERR" },
-	{ .fc_id = 41, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC3_ECC_SERR" },
-	{ .fc_id = 42, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC4_ECC_SERR" },
-	{ .fc_id = 43, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC5_ECC_SERR" },
-	{ .fc_id = 44, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC6_ECC_SERR" },
-	{ .fc_id = 45, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC7_ECC_SERR" },
-	{ .fc_id = 46, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC8_ECC_SERR" },
-	{ .fc_id = 47, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC9_ECC_SERR" },
-	{ .fc_id = 48, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC10_ECC_SERR" },
-	{ .fc_id = 49, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC11_ECC_SERR" },
-	{ .fc_id = 50, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC12_ECC_SERR" },
-	{ .fc_id = 51, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC13_ECC_SERR" },
-	{ .fc_id = 52, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC14_ECC_SERR" },
-	{ .fc_id = 53, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC15_ECC_SERR" },
-	{ .fc_id = 54, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC16_ECC_SERR" },
-	{ .fc_id = 55, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC17_ECC_SERR" },
-	{ .fc_id = 56, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC18_ECC_SERR" },
-	{ .fc_id = 57, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC19_ECC_SERR" },
-	{ .fc_id = 58, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC20_ECC_SERR" },
-	{ .fc_id = 59, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC21_ECC_SERR" },
-	{ .fc_id = 60, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC22_ECC_SERR" },
-	{ .fc_id = 61, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC23_ECC_SERR" },
-	{ .fc_id = 62, .cpu_id = 38, .valid = 1,
-		.msg = 0, .reset = 0, .name = "TPC24_ECC_SERR" },
-	{ .fc_id = 63, .cpu_id = 39, .valid = 1,
-		.msg = 0, .reset = 1, .name = "TPC0_ECC_DERR" },
-	{ .fc_id = 64, .cpu_id = 39, .valid = 1,
-		.msg = 0, .reset = 1, .name = "TPC1_ECC_DERR" },
-	{ .fc_id = 65, .cpu_id = 39, .valid = 1,
-		.msg = 0, .reset = 1, .name = "TPC2_ECC_DERR" },
-	{ .fc_id = 66, .cpu_id = 39, .valid = 1,
-		.msg = 0, .reset = 1, .name = "TPC3_ECC_DERR" },
-	{ .fc_id = 67, .cpu_id = 39, .valid = 1,
-		.msg = 0, .reset = 1, .name = "TPC4_ECC_DERR" },
-	{ .fc_id = 68, .cpu_id
= 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC5_ECC_DERR" }, - { .fc_id = 69, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC6_ECC_DERR" }, - { .fc_id = 70, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC7_ECC_DERR" }, - { .fc_id = 71, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC8_ECC_DERR" }, - { .fc_id = 72, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC9_ECC_DERR" }, - { .fc_id = 73, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC10_ECC_DERR" }, - { .fc_id = 74, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC11_ECC_DERR" }, - { .fc_id = 75, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC12_ECC_DERR" }, - { .fc_id = 76, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC13_ECC_DERR" }, - { .fc_id = 77, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC14_ECC_DERR" }, - { .fc_id = 78, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC15_ECC_DERR" }, - { .fc_id = 79, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC16_ECC_DERR" }, - { .fc_id = 80, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC17_ECC_DERR" }, - { .fc_id = 81, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC18_ECC_DERR" }, - { .fc_id = 82, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC19_ECC_DERR" }, - { .fc_id = 83, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC20_ECC_DERR" }, - { .fc_id = 84, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC21_ECC_DERR" }, - { .fc_id = 85, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC22_ECC_DERR" }, - { .fc_id = 86, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC23_ECC_DERR" }, - { .fc_id = 87, .cpu_id = 39, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC24_ECC_DERR" }, - { .fc_id = 88, .cpu_id = 40, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_SBTE0_ECC_SERR" }, - { .fc_id = 89, .cpu_id = 40, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_SBTE1_ECC_SERR" }, - { .fc_id = 90, .cpu_id = 40, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_SBTE2_ECC_SERR" }, - { .fc_id = 91, .cpu_id = 40, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_SBTE3_ECC_SERR" }, - { .fc_id = 92, .cpu_id = 40, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_SBTE4_ECC_SERR" }, - { .fc_id = 93, .cpu_id = 40, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_CTRL_ECC_SERR" }, - { .fc_id = 94, .cpu_id = 40, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_WAP_ECC_SERR" }, - { .fc_id = 95, .cpu_id = 41, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_SBTE0_ECC_SERR" }, - { .fc_id = 96, .cpu_id = 41, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_SBTE1_ECC_SERR" }, - { .fc_id = 97, .cpu_id = 41, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_SBTE2_ECC_SERR" }, - { .fc_id = 98, .cpu_id = 41, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_SBTE3_ECC_SERR" }, - { .fc_id = 99, .cpu_id = 41, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_SBTE4_ECC_SERR" }, - { .fc_id = 100, .cpu_id = 41, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_CTRL_ECC_SERR" }, - { .fc_id = 101, .cpu_id = 41, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_WAP_ECC_SERR" }, - { .fc_id = 102, .cpu_id = 42, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_SBTE0_ECC_SERR" }, - { .fc_id = 103, .cpu_id = 42, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_SBTE1_ECC_SERR" }, - { .fc_id = 104, .cpu_id = 42, .valid = 1, - .msg = 0, .reset = 0, .name = 
"MME2_SBTE2_ECC_SERR" }, - { .fc_id = 105, .cpu_id = 42, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_SBTE3_ECC_SERR" }, - { .fc_id = 106, .cpu_id = 42, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_SBTE4_ECC_SERR" }, - { .fc_id = 107, .cpu_id = 42, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_CTRL_ECC_SERR" }, - { .fc_id = 108, .cpu_id = 42, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_WAP_ECC_SERR" }, - { .fc_id = 109, .cpu_id = 43, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_SBTE0_ECC_SERR" }, - { .fc_id = 110, .cpu_id = 43, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_SBTE1_ECC_SERR" }, - { .fc_id = 111, .cpu_id = 43, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_SBTE2_ECC_SERR" }, - { .fc_id = 112, .cpu_id = 43, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_SBTE3_ECC_SERR" }, - { .fc_id = 113, .cpu_id = 43, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_SBTE4_ECC_SERR" }, - { .fc_id = 114, .cpu_id = 43, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_CTRL_ECC_SERR" }, - { .fc_id = 115, .cpu_id = 43, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_WAP_ECC_SERR" }, - { .fc_id = 116, .cpu_id = 44, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE0_ECC_DERR" }, - { .fc_id = 117, .cpu_id = 44, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE1_ECC_DERR" }, - { .fc_id = 118, .cpu_id = 44, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE2_ECC_DERR" }, - { .fc_id = 119, .cpu_id = 44, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE3_ECC_DERR" }, - { .fc_id = 120, .cpu_id = 44, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE4_ECC_DERR" }, - { .fc_id = 121, .cpu_id = 44, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_CTRL_ECC_DERR" }, - { .fc_id = 122, .cpu_id = 44, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_WAP_ECC_DERR" }, - { .fc_id = 123, .cpu_id = 45, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE0_ECC_DERR" }, - { .fc_id = 124, .cpu_id = 45, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE1_ECC_DERR" }, - { .fc_id = 125, .cpu_id = 45, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE2_ECC_DERR" }, - { .fc_id = 126, .cpu_id = 45, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE3_ECC_DERR" }, - { .fc_id = 127, .cpu_id = 45, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE4_ECC_DERR" }, - { .fc_id = 128, .cpu_id = 45, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_CTRL_ECC_DERR" }, - { .fc_id = 129, .cpu_id = 45, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_WAP_ECC_DERR" }, - { .fc_id = 130, .cpu_id = 46, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE0_ECC_DERR" }, - { .fc_id = 131, .cpu_id = 46, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE1_ECC_DERR" }, - { .fc_id = 132, .cpu_id = 46, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE2_ECC_DERR" }, - { .fc_id = 133, .cpu_id = 46, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE3_ECC_DERR" }, - { .fc_id = 134, .cpu_id = 46, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE4_ECC_DERR" }, - { .fc_id = 135, .cpu_id = 46, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_CTRL_ECC_DERR" }, - { .fc_id = 136, .cpu_id = 46, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_WAP_ECC_DERR" }, - { .fc_id = 137, .cpu_id = 47, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE0_ECC_DERR" }, - { .fc_id = 138, .cpu_id = 47, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE1_ECC_DERR" }, - { .fc_id = 139, .cpu_id = 47, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE2_ECC_DERR" }, - { 
.fc_id = 140, .cpu_id = 47, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE3_ECC_DERR" }, - { .fc_id = 141, .cpu_id = 47, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE4_ECC_DERR" }, - { .fc_id = 142, .cpu_id = 47, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_CTRL_ECC_DERR" }, - { .fc_id = 143, .cpu_id = 47, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_WAP_ECC_DERR" }, - { .fc_id = 144, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA2_ECC_SERR" }, - { .fc_id = 145, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA3_ECC_SERR" }, - { .fc_id = 146, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA0_ECC_SERR" }, - { .fc_id = 147, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA1_ECC_SERR" }, - { .fc_id = 148, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA6_ECC_SERR" }, - { .fc_id = 149, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA7_ECC_SERR" }, - { .fc_id = 150, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA4_ECC_SERR" }, - { .fc_id = 151, .cpu_id = 48, .valid = 1, - .msg = 0, .reset = 0, .name = "HDMA5_ECC_SERR" }, - { .fc_id = 152, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA2_ECC_DERR" }, - { .fc_id = 153, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA3_ECC_DERR" }, - { .fc_id = 154, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA0_ECC_DERR" }, - { .fc_id = 155, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA1_ECC_DERR" }, - { .fc_id = 156, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA6_ECC_DERR" }, - { .fc_id = 157, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA7_ECC_DERR" }, - { .fc_id = 158, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA4_ECC_DERR" }, - { .fc_id = 159, .cpu_id = 49, .valid = 1, - .msg = 0, .reset = 1, .name = "HDMA5_ECC_DERR" }, - { .fc_id = 160, .cpu_id = 50, .valid = 1, - .msg = 0, .reset = 0, .name = "KDMA0_ECC_SERR" }, - { .fc_id = 161, .cpu_id = 51, .valid = 1, - .msg = 0, .reset = 0, .name = "PDMA0_ECC_SERR" }, - { .fc_id = 162, .cpu_id = 51, .valid = 1, - .msg = 0, .reset = 0, .name = "PDMA1_ECC_SERR" }, - { .fc_id = 163, .cpu_id = 52, .valid = 1, - .msg = 0, .reset = 1, .name = "KDMA0_ECC_DERR" }, - { .fc_id = 164, .cpu_id = 53, .valid = 1, - .msg = 0, .reset = 1, .name = "PDMA0_ECC_DERR" }, - { .fc_id = 165, .cpu_id = 53, .valid = 1, - .msg = 0, .reset = 1, .name = "PDMA1_ECC_DERR" }, - { .fc_id = 166, .cpu_id = 54, .valid = 1, - .msg = 0, .reset = 0, .name = "CPU_IF_ECC_SERR" }, - { .fc_id = 167, .cpu_id = 55, .valid = 1, - .msg = 0, .reset = 1, .name = "CPU_IF_ECC_DERR" }, - { .fc_id = 168, .cpu_id = 56, .valid = 1, - .msg = 0, .reset = 0, .name = "PSOC_MEM_SERR" }, - { .fc_id = 169, .cpu_id = 57, .valid = 1, - .msg = 0, .reset = 1, .name = "PSOC_MEM_DERR" }, - { .fc_id = 170, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM0_ECC_SERR" }, - { .fc_id = 171, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM1_ECC_SERR" }, - { .fc_id = 172, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM2_ECC_SERR" }, - { .fc_id = 173, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM3_ECC_SERR" }, - { .fc_id = 174, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM4_ECC_SERR" }, - { .fc_id = 175, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM5_ECC_SERR" }, - { .fc_id = 176, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = 
"SRAM6_ECC_SERR" }, - { .fc_id = 177, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM7_ECC_SERR" }, - { .fc_id = 178, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM8_ECC_SERR" }, - { .fc_id = 179, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM9_ECC_SERR" }, - { .fc_id = 180, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM10_ECC_SERR" }, - { .fc_id = 181, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM11_ECC_SERR" }, - { .fc_id = 182, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM12_ECC_SERR" }, - { .fc_id = 183, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM13_ECC_SERR" }, - { .fc_id = 184, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM14_ECC_SERR" }, - { .fc_id = 185, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM15_ECC_SERR" }, - { .fc_id = 186, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM16_ECC_SERR" }, - { .fc_id = 187, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM17_ECC_SERR" }, - { .fc_id = 188, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM18_ECC_SERR" }, - { .fc_id = 189, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM19_ECC_SERR" }, - { .fc_id = 190, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM20_ECC_SERR" }, - { .fc_id = 191, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM21_ECC_SERR" }, - { .fc_id = 192, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM22_ECC_SERR" }, - { .fc_id = 193, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM23_ECC_SERR" }, - { .fc_id = 194, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM24_ECC_SERR" }, - { .fc_id = 195, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM25_ECC_SERR" }, - { .fc_id = 196, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM26_ECC_SERR" }, - { .fc_id = 197, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM27_ECC_SERR" }, - { .fc_id = 198, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM28_ECC_SERR" }, - { .fc_id = 199, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM29_ECC_SERR" }, - { .fc_id = 200, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM30_ECC_SERR" }, - { .fc_id = 201, .cpu_id = 58, .valid = 1, - .msg = 0, .reset = 0, .name = "SRAM31_ECC_SERR" }, - { .fc_id = 202, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM0_ECC_DERR" }, - { .fc_id = 203, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM1_ECC_DERR" }, - { .fc_id = 204, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM2_ECC_DERR" }, - { .fc_id = 205, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM3_ECC_DERR" }, - { .fc_id = 206, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM4_ECC_DERR" }, - { .fc_id = 207, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM5_ECC_DERR" }, - { .fc_id = 208, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM6_ECC_DERR" }, - { .fc_id = 209, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM7_ECC_DERR" }, - { .fc_id = 210, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM8_ECC_DERR" }, - { .fc_id = 211, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM9_ECC_DERR" }, - { .fc_id = 212, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM10_ECC_DERR" }, - { .fc_id = 213, .cpu_id = 59, .valid = 1, - .msg = 0, 
.reset = 1, .name = "SRAM11_ECC_DERR" }, - { .fc_id = 214, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM12_ECC_DERR" }, - { .fc_id = 215, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM13_ECC_DERR" }, - { .fc_id = 216, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM14_ECC_DERR" }, - { .fc_id = 217, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM15_ECC_DERR" }, - { .fc_id = 218, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM16_ECC_DERR" }, - { .fc_id = 219, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM17_ECC_DERR" }, - { .fc_id = 220, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM18_ECC_DERR" }, - { .fc_id = 221, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM19_ECC_DERR" }, - { .fc_id = 222, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM20_ECC_DERR" }, - { .fc_id = 223, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM21_ECC_DERR" }, - { .fc_id = 224, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM22_ECC_DERR" }, - { .fc_id = 225, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM23_ECC_DERR" }, - { .fc_id = 226, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM24_ECC_DERR" }, - { .fc_id = 227, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM25_ECC_DERR" }, - { .fc_id = 228, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM26_ECC_DERR" }, - { .fc_id = 229, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM27_ECC_DERR" }, - { .fc_id = 230, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM28_ECC_DERR" }, - { .fc_id = 231, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM29_ECC_DERR" }, - { .fc_id = 232, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM30_ECC_DERR" }, - { .fc_id = 233, .cpu_id = 59, .valid = 1, - .msg = 0, .reset = 1, .name = "SRAM31_ECC_DERR" }, - { .fc_id = 234, .cpu_id = 60, .valid = 1, - .msg = 0, .reset = 1, .name = "GIC500" }, - { .fc_id = 235, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_0_MC0_ECC_SERR" }, - { .fc_id = 236, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_1_MC0_ECC_SERR" }, - { .fc_id = 237, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_2_MC0_ECC_SERR" }, - { .fc_id = 238, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_3_MC0_ECC_SERR" }, - { .fc_id = 239, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_4_MC0_ECC_SERR" }, - { .fc_id = 240, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_5_MC0_ECC_SERR" }, - { .fc_id = 241, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_0_MC1_ECC_SERR" }, - { .fc_id = 242, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_1_MC1_ECC_SERR" }, - { .fc_id = 243, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_2_MC1_ECC_SERR" }, - { .fc_id = 244, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_3_MC1_ECC_SERR" }, - { .fc_id = 245, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_4_MC1_ECC_SERR" }, - { .fc_id = 246, .cpu_id = 61, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM_5_MC1_ECC_SERR" }, - { .fc_id = 247, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_0_MC0_ECC_DERR" }, - { .fc_id = 248, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_1_MC0_ECC_DERR" }, - { .fc_id = 249, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = 
"HBM_2_MC0_ECC_DERR" }, - { .fc_id = 250, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_3_MC0_ECC_DERR" }, - { .fc_id = 251, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_4_MC0_ECC_DERR" }, - { .fc_id = 252, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_5_MC0_ECC_DERR" }, - { .fc_id = 253, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_0_MC1_ECC_DERR" }, - { .fc_id = 254, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_1_MC1_ECC_DERR" }, - { .fc_id = 255, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_2_MC1_ECC_DERR" }, - { .fc_id = 256, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_3_MC1_ECC_DERR" }, - { .fc_id = 257, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_4_MC1_ECC_DERR" }, - { .fc_id = 258, .cpu_id = 62, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_5_MC1_ECC_DERR" }, - { .fc_id = 259, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_0_ECC_SERR" }, - { .fc_id = 260, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_1_ECC_SERR" }, - { .fc_id = 261, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_2_ECC_SERR" }, - { .fc_id = 262, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_3_ECC_SERR" }, - { .fc_id = 263, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_8_ECC_SERR" }, - { .fc_id = 264, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_9_ECC_SERR" }, - { .fc_id = 265, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_10_ECC_SERR" }, - { .fc_id = 266, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_11_ECC_SERR" }, - { .fc_id = 267, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_7_ECC_SERR" }, - { .fc_id = 268, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_6_ECC_SERR" }, - { .fc_id = 269, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_5_ECC_SERR" }, - { .fc_id = 270, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_4_ECC_SERR" }, - { .fc_id = 271, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_15_ECC_SERR" }, - { .fc_id = 272, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_14_ECC_SERR" }, - { .fc_id = 273, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_13_ECC_SERR" }, - { .fc_id = 274, .cpu_id = 63, .valid = 1, - .msg = 0, .reset = 0, .name = "HMMU_12_ECC_SERR" }, - { .fc_id = 275, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_0_ECC_DERR" }, - { .fc_id = 276, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_1_ECC_DERR" }, - { .fc_id = 277, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_2_ECC_DERR" }, - { .fc_id = 278, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_3_ECC_DERR" }, - { .fc_id = 279, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_8_ECC_DERR" }, - { .fc_id = 280, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_9_ECC_DERR" }, - { .fc_id = 281, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_10_ECC_DERR" }, - { .fc_id = 282, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_11_ECC_DERR" }, - { .fc_id = 283, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_7_ECC_DERR" }, - { .fc_id = 284, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_6_ECC_DERR" }, - { .fc_id = 285, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_5_ECC_DERR" }, - { 
.fc_id = 286, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_4_ECC_DERR" }, - { .fc_id = 287, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_15_ECC_DERR" }, - { .fc_id = 288, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_14_ECC_DERR" }, - { .fc_id = 289, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_13_ECC_DERR" }, - { .fc_id = 290, .cpu_id = 64, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_12_ECC_DERR" }, - { .fc_id = 291, .cpu_id = 65, .valid = 1, - .msg = 0, .reset = 0, .name = "PMMU_ECC_SERR" }, - { .fc_id = 292, .cpu_id = 66, .valid = 1, - .msg = 0, .reset = 1, .name = "PMMU_ECC_DERR" }, - { .fc_id = 293, .cpu_id = 67, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 294, .cpu_id = 68, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 295, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC0_VCD_ECC_SERR" }, - { .fc_id = 296, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC1_VCD_ECC_SERR" }, - { .fc_id = 297, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC2_VCD_ECC_SERR" }, - { .fc_id = 298, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC3_VCD_ECC_SERR" }, - { .fc_id = 299, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC4_VCD_ECC_SERR" }, - { .fc_id = 300, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC5_VCD_ECC_SERR" }, - { .fc_id = 301, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC6_VCD_ECC_SERR" }, - { .fc_id = 302, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC7_VCD_ECC_SERR" }, - { .fc_id = 303, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC8_VCD_ECC_SERR" }, - { .fc_id = 304, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC9_VCD_ECC_SERR" }, - { .fc_id = 305, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC0_L2C_ECC_SERR" }, - { .fc_id = 306, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC1_L2C_ECC_SERR" }, - { .fc_id = 307, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC2_L2C_ECC_SERR" }, - { .fc_id = 308, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC3_L2C_ECC_SERR" }, - { .fc_id = 309, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC4_L2C_ECC_SERR" }, - { .fc_id = 310, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC5_L2C_ECC_SERR" }, - { .fc_id = 311, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC6_L2C_ECC_SERR" }, - { .fc_id = 312, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC7_L2C_ECC_SERR" }, - { .fc_id = 313, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC8_L2C_ECC_SERR" }, - { .fc_id = 314, .cpu_id = 69, .valid = 1, - .msg = 0, .reset = 0, .name = "DEC9_L2C_ECC_SERR" }, - { .fc_id = 315, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC0_VCD_ECC_DERR" }, - { .fc_id = 316, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC1_VCD_ECC_DERR" }, - { .fc_id = 317, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC2_VCD_ECC_DERR" }, - { .fc_id = 318, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC3_VCD_ECC_DERR" }, - { .fc_id = 319, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC4_VCD_ECC_DERR" }, - { .fc_id = 320, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC5_VCD_ECC_DERR" }, - { .fc_id = 321, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC6_VCD_ECC_DERR" }, - { .fc_id = 322, .cpu_id = 70, .valid = 
1, - .msg = 0, .reset = 1, .name = "DEC7_VCD_ECC_DERR" }, - { .fc_id = 323, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC8_VCD_ECC_DERR" }, - { .fc_id = 324, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC9_VCD_ECC_DERR" }, - { .fc_id = 325, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC0_L2C_ECC_DERR" }, - { .fc_id = 326, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC1_L2C_ECC_DERR" }, - { .fc_id = 327, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC2_L2C_ECC_DERR" }, - { .fc_id = 328, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC3_L2C_ECC_DERR" }, - { .fc_id = 329, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC4_L2C_ECC_DERR" }, - { .fc_id = 330, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC5_L2C_ECC_DERR" }, - { .fc_id = 331, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC6_L2C_ECC_DERR" }, - { .fc_id = 332, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC7_L2C_ECC_DERR" }, - { .fc_id = 333, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC8_L2C_ECC_DERR" }, - { .fc_id = 334, .cpu_id = 70, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC9_L2C_ECC_DERR" }, - { .fc_id = 335, .cpu_id = 71, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 336, .cpu_id = 72, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 337, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF0_ECC_SERR" }, - { .fc_id = 338, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF1_ECC_SERR" }, - { .fc_id = 339, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF2_ECC_SERR" }, - { .fc_id = 340, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF3_ECC_SERR" }, - { .fc_id = 341, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF8_ECC_SERR" }, - { .fc_id = 342, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF9_ECC_SERR" }, - { .fc_id = 343, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF10_ECC_SERR" }, - { .fc_id = 344, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF11_ECC_SERR" }, - { .fc_id = 345, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF7_ECC_SERR" }, - { .fc_id = 346, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF6_ECC_SERR" }, - { .fc_id = 347, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF5_ECC_SERR" }, - { .fc_id = 348, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF4_ECC_SERR" }, - { .fc_id = 349, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF15_ECC_SERR" }, - { .fc_id = 350, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF14_ECC_SERR" }, - { .fc_id = 351, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF13_ECC_SERR" }, - { .fc_id = 352, .cpu_id = 73, .valid = 1, - .msg = 0, .reset = 0, .name = "HIF12_ECC_SERR" }, - { .fc_id = 353, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF0_ECC_DERR" }, - { .fc_id = 354, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF1_ECC_DERR" }, - { .fc_id = 355, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF2_ECC_DERR" }, - { .fc_id = 356, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF3_ECC_DERR" }, - { .fc_id = 357, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF8_ECC_DERR" }, - { .fc_id = 358, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF9_ECC_DERR" }, - { .fc_id = 359, .cpu_id = 74, .valid = 1, - 
.msg = 0, .reset = 1, .name = "HIF10_ECC_DERR" }, - { .fc_id = 360, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF11_ECC_DERR" }, - { .fc_id = 361, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF7_ECC_DERR" }, - { .fc_id = 362, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF6_ECC_DERR" }, - { .fc_id = 363, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF5_ECC_DERR" }, - { .fc_id = 364, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF4_ECC_DERR" }, - { .fc_id = 365, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF15_ECC_DERR" }, - { .fc_id = 366, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF14_ECC_DERR" }, - { .fc_id = 367, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF13_ECC_DERR" }, - { .fc_id = 368, .cpu_id = 74, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF12_ECC_DERR" }, - { .fc_id = 369, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC0_ECC_SERR" }, - { .fc_id = 370, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC1_ECC_SERR" }, - { .fc_id = 371, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC2_ECC_SERR" }, - { .fc_id = 372, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC3_ECC_SERR" }, - { .fc_id = 373, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC4_ECC_SERR" }, - { .fc_id = 374, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC5_ECC_SERR" }, - { .fc_id = 375, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC6_ECC_SERR" }, - { .fc_id = 376, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC7_ECC_SERR" }, - { .fc_id = 377, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC8_ECC_SERR" }, - { .fc_id = 378, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC9_ECC_SERR" }, - { .fc_id = 379, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC10_ECC_SERR" }, - { .fc_id = 380, .cpu_id = 75, .valid = 1, - .msg = 0, .reset = 0, .name = "NIC11_ECC_SERR" }, - { .fc_id = 381, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC0_ECC_DERR" }, - { .fc_id = 382, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC1_ECC_DERR" }, - { .fc_id = 383, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC2_ECC_DERR" }, - { .fc_id = 384, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC3_ECC_DERR" }, - { .fc_id = 385, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC4_ECC_DERR" }, - { .fc_id = 386, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC5_ECC_DERR" }, - { .fc_id = 387, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC6_ECC_DERR" }, - { .fc_id = 388, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC7_ECC_DERR" }, - { .fc_id = 389, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC8_ECC_DERR" }, - { .fc_id = 390, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC9_ECC_DERR" }, - { .fc_id = 391, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC10_ECC_DERR" }, - { .fc_id = 392, .cpu_id = 76, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC11_ECC_DERR" }, - { .fc_id = 393, .cpu_id = 77, .valid = 1, - .msg = 0, .reset = 1, .name = "SM0_ECC_DERR" }, - { .fc_id = 394, .cpu_id = 77, .valid = 1, - .msg = 0, .reset = 1, .name = "SM1_ECC_DERR" }, - { .fc_id = 395, .cpu_id = 77, .valid = 1, - .msg = 0, .reset = 1, .name = "SM2_ECC_DERR" }, - { .fc_id = 396, .cpu_id = 77, .valid = 1, - .msg = 0, .reset = 1, .name = 
"SM3_ECC_DERR" }, - { .fc_id = 397, .cpu_id = 78, .valid = 1, - .msg = 0, .reset = 0, .name = "SM0_ECC_SERR" }, - { .fc_id = 398, .cpu_id = 78, .valid = 1, - .msg = 0, .reset = 0, .name = "SM1_ECC_SERR" }, - { .fc_id = 399, .cpu_id = 78, .valid = 1, - .msg = 0, .reset = 0, .name = "SM2_ECC_SERR" }, - { .fc_id = 400, .cpu_id = 78, .valid = 1, - .msg = 0, .reset = 0, .name = "SM3_ECC_SERR" }, - { .fc_id = 401, .cpu_id = 79, .valid = 1, - .msg = 0, .reset = 0, .name = "XBAR0_ECC_SERR" }, - { .fc_id = 402, .cpu_id = 79, .valid = 1, - .msg = 0, .reset = 0, .name = "XBAR1_ECC_SERR" }, - { .fc_id = 403, .cpu_id = 79, .valid = 1, - .msg = 0, .reset = 0, .name = "XBAR2_ECC_SERR" }, - { .fc_id = 404, .cpu_id = 79, .valid = 1, - .msg = 0, .reset = 0, .name = "XBAR3_ECC_SERR" }, - { .fc_id = 405, .cpu_id = 80, .valid = 1, - .msg = 0, .reset = 1, .name = "XBAR0_ECC_DERR" }, - { .fc_id = 406, .cpu_id = 80, .valid = 1, - .msg = 0, .reset = 1, .name = "XBAR1_ECC_DERR" }, - { .fc_id = 407, .cpu_id = 80, .valid = 1, - .msg = 0, .reset = 1, .name = "XBAR2_ECC_DERR" }, - { .fc_id = 408, .cpu_id = 80, .valid = 1, - .msg = 0, .reset = 1, .name = "XBAR3_ECC_DERR" }, - { .fc_id = 409, .cpu_id = 81, .valid = 1, - .msg = 0, .reset = 0, .name = "ARC0_ECC_SERR" }, - { .fc_id = 410, .cpu_id = 82, .valid = 1, - .msg = 0, .reset = 1, .name = "ARC0_ECC_DERR" }, - { .fc_id = 411, .cpu_id = 83, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 412, .cpu_id = 84, .valid = 1, - .msg = 0, .reset = 1, .name = "PCIE_ADDR_DEC_ERR" }, - { .fc_id = 413, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC0_AXI_ERR_RSP" }, - { .fc_id = 414, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC1_AXI_ERR_RSP" }, - { .fc_id = 415, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC2_AXI_ERR_RSP" }, - { .fc_id = 416, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC3_AXI_ERR_RSP" }, - { .fc_id = 417, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC4_AXI_ERR_RSP" }, - { .fc_id = 418, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC5_AXI_ERR_RSP" }, - { .fc_id = 419, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC6_AXI_ERR_RSP" }, - { .fc_id = 420, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC7_AXI_ERR_RSP" }, - { .fc_id = 421, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC8_AXI_ERR_RSP" }, - { .fc_id = 422, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC9_AXI_ERR_RSP" }, - { .fc_id = 423, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC10_AXI_ERR_RSP" }, - { .fc_id = 424, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC11_AXI_ERR_RSP" }, - { .fc_id = 425, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC12_AXI_ERR_RSP" }, - { .fc_id = 426, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC13_AXI_ERR_RSP" }, - { .fc_id = 427, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC14_AXI_ERR_RSP" }, - { .fc_id = 428, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC15_AXI_ERR_RSP" }, - { .fc_id = 429, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC16_AXI_ERR_RSP" }, - { .fc_id = 430, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC17_AXI_ERR_RSP" }, - { .fc_id = 431, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC18_AXI_ERR_RSP" }, - { .fc_id = 432, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC19_AXI_ERR_RSP" }, - { .fc_id = 433, .cpu_id = 85, .valid = 1, - .msg 
= 0, .reset = 1, .name = "TPC20_AXI_ERR_RSP" }, - { .fc_id = 434, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC21_AXI_ERR_RSP" }, - { .fc_id = 435, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC22_AXI_ERR_RSP" }, - { .fc_id = 436, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC23_AXI_ERR_RSP" }, - { .fc_id = 437, .cpu_id = 85, .valid = 1, - .msg = 0, .reset = 1, .name = "TPC24_AXI_ERR_RSP" }, - { .fc_id = 438, .cpu_id = 86, .valid = 1, - .msg = 0, .reset = 1, .name = "AXI_ECC" }, - { .fc_id = 439, .cpu_id = 87, .valid = 1, - .msg = 0, .reset = 1, .name = "L2_RAM_ECC" }, - { .fc_id = 440, .cpu_id = 88, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE0_AXI_ERR_RSP" }, - { .fc_id = 441, .cpu_id = 88, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE1_AXI_ERR_RSP" }, - { .fc_id = 442, .cpu_id = 88, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE2_AXI_ERR_RSP" }, - { .fc_id = 443, .cpu_id = 88, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE3_AXI_ERR_RSP" }, - { .fc_id = 444, .cpu_id = 88, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_SBTE4_AXI_ERR_RSP" }, - { .fc_id = 445, .cpu_id = 88, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_CTRL_AXI_ERROR_RESPONSE" }, - { .fc_id = 446, .cpu_id = 88, .valid = 1, - .msg = 0, .reset = 1, .name = "MME0_QMAN_SW_ERROR" }, - { .fc_id = 447, .cpu_id = 89, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE0_AXI_ERR_RSP" }, - { .fc_id = 448, .cpu_id = 89, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE1_AXI_ERR_RSP" }, - { .fc_id = 449, .cpu_id = 89, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE2_AXI_ERR_RSP" }, - { .fc_id = 450, .cpu_id = 89, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE3_AXI_ERR_RSP" }, - { .fc_id = 451, .cpu_id = 89, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_SBTE4_AXI_ERR_RSP" }, - { .fc_id = 452, .cpu_id = 89, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_CTRL_AXI_ERROR_RESPONSE" }, - { .fc_id = 453, .cpu_id = 89, .valid = 1, - .msg = 0, .reset = 1, .name = "MME1_QMAN_SW_ERROR" }, - { .fc_id = 454, .cpu_id = 90, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE0_AXI_ERR_RSP" }, - { .fc_id = 455, .cpu_id = 90, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE1_AXI_ERR_RSP" }, - { .fc_id = 456, .cpu_id = 90, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE2_AXI_ERR_RSP" }, - { .fc_id = 457, .cpu_id = 90, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE3_AXI_ERR_RSP" }, - { .fc_id = 458, .cpu_id = 90, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_SBTE4_AXI_ERR_RSP" }, - { .fc_id = 459, .cpu_id = 90, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_CTRL_AXI_ERROR_RESPONSE" }, - { .fc_id = 460, .cpu_id = 90, .valid = 1, - .msg = 0, .reset = 1, .name = "MME2_QMAN_SW_ERROR" }, - { .fc_id = 461, .cpu_id = 91, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE0_AXI_ERR_RSP" }, - { .fc_id = 462, .cpu_id = 91, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE1_AXI_ERR_RSP" }, - { .fc_id = 463, .cpu_id = 91, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE2_AXI_ERR_RSP" }, - { .fc_id = 464, .cpu_id = 91, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE3_AXI_ERR_RSP" }, - { .fc_id = 465, .cpu_id = 91, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_SBTE4_AXI_ERR_RSP" }, - { .fc_id = 466, .cpu_id = 91, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_CTRL_AXI_ERROR_RESPONSE" }, - { .fc_id = 467, .cpu_id = 91, .valid = 1, - .msg = 0, .reset = 1, .name = "MME3_QMAN_SW_ERROR" }, - { 
.fc_id = 468, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "PSOC_MME_PLL_LOCK_ERR" }, - { .fc_id = 469, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "PSOC_CPU_PLL_LOCK_ERR" }, - { .fc_id = 470, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE3_TPC_PLL_LOCK_ERR" }, - { .fc_id = 471, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE3_NIC_PLL_LOCK_ERR" }, - { .fc_id = 472, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE3_XBAR_MMU_PLL_LOCK_ERR" }, - { .fc_id = 473, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE3_XBAR_DMA_PLL_LOCK_ERR" }, - { .fc_id = 474, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE3_XBAR_IF_PLL_LOCK_ERR" }, - { .fc_id = 475, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE3_XBAR_BANK_PLL_LOCK_ERR" }, - { .fc_id = 476, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_XBAR_MMU_PLL_LOCK_ERR" }, - { .fc_id = 477, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_XBAR_DMA_PLL_LOCK_ERR" }, - { .fc_id = 478, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_XBAR_IF_PLL_LOCK_ERR" }, - { .fc_id = 479, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_XBAR_MESH_PLL_LOCK_ERR" }, - { .fc_id = 480, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_TPC_PLL_LOCK_ERR" }, - { .fc_id = 481, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_NIC_PLL_LOCK_ERR" }, - { .fc_id = 482, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "PMMU_MME_PLL_LOCK_ERR" }, - { .fc_id = 483, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE0_TPC_PLL_LOCK_ERR" }, - { .fc_id = 484, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE0_PCI_PLL_LOCK_ERR" }, - { .fc_id = 485, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE0_XBAR_MMU_PLL_LOCK_ERR" }, - { .fc_id = 486, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE0_XBAR_DMA_PLL_LOCK_ERR" }, - { .fc_id = 487, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE0_XBAR_IF_PLL_LOCK_ERR" }, - { .fc_id = 488, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE0_XBAR_MESH_PLL_LOCK_ERR" }, - { .fc_id = 489, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE2_XBAR_MMU_PLL_LOCK_ERR" }, - { .fc_id = 490, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE2_XBAR_DMA_PLL_LOCK_ERR" }, - { .fc_id = 491, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE2_XBAR_IF_PLL_LOCK_ERR" }, - { .fc_id = 492, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE2_XBAR_BANK_PLL_LOCK_ERR" }, - { .fc_id = 493, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE2_TPC_PLL_LOCK_ERR" }, - { .fc_id = 494, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "PSOC_VID_PLL_LOCK_ERR" }, - { .fc_id = 495, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "PMMU_VID_PLL_LOCK_ERR" }, - { .fc_id = 496, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE3_HBM_PLL_LOCK_ERR" }, - { .fc_id = 497, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_XBAR_HBM_PLL_LOCK_ERR" }, - { .fc_id = 498, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE1_HBM_PLL_LOCK_ERR" }, - { .fc_id = 499, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE0_HBM_PLL_LOCK_ERR" }, - { .fc_id = 500, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE2_XBAR_HBM_PLL_LOCK_ERR" }, - { .fc_id = 
501, .cpu_id = 92, .valid = 1, - .msg = 0, .reset = 1, .name = "DCORE2_HBM_PLL_LOCK_ERR" }, - { .fc_id = 502, .cpu_id = 93, .valid = 1, - .msg = 0, .reset = 1, .name = "CPU_AXI_ERR_RSP" }, - { .fc_id = 503, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_0_AXI_ERR_RSP" }, - { .fc_id = 504, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_1_AXI_ERR_RSP" }, - { .fc_id = 505, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_2_AXI_ERR_RSP" }, - { .fc_id = 506, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_3_AXI_ERR_RSP" }, - { .fc_id = 507, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_8_AXI_ERR_RSP" }, - { .fc_id = 508, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_9_AXI_ERR_RSP" }, - { .fc_id = 509, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_10_AXI_ERR_RSP" }, - { .fc_id = 510, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_11_AXI_ERR_RSP" }, - { .fc_id = 511, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_7_AXI_ERR_RSP" }, - { .fc_id = 512, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_6_AXI_ERR_RSP" }, - { .fc_id = 513, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_5_AXI_ERR_RSP" }, - { .fc_id = 514, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_4_AXI_ERR_RSP" }, - { .fc_id = 515, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_15_AXI_ERR_RSP" }, - { .fc_id = 516, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_14_AXI_ERR_RSP" }, - { .fc_id = 517, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_13_AXI_ERR_RSP" }, - { .fc_id = 518, .cpu_id = 94, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU_12_AXI_ERR_RSP" }, - { .fc_id = 519, .cpu_id = 95, .valid = 1, - .msg = 0, .reset = 1, .name = "PMMU_FATAL" }, - { .fc_id = 520, .cpu_id = 96, .valid = 1, - .msg = 0, .reset = 1, .name = "PMMU_AXI_ERR_RSP" }, - { .fc_id = 521, .cpu_id = 97, .valid = 1, - .msg = 0, .reset = 0, .name = "VM0_ALARM_A" }, - { .fc_id = 522, .cpu_id = 98, .valid = 1, - .msg = 0, .reset = 0, .name = "VM0_ALARM_B" }, - { .fc_id = 523, .cpu_id = 99, .valid = 1, - .msg = 0, .reset = 0, .name = "VM1_ALARM_A" }, - { .fc_id = 524, .cpu_id = 100, .valid = 1, - .msg = 0, .reset = 0, .name = "VM1_ALARM_B" }, - { .fc_id = 525, .cpu_id = 101, .valid = 1, - .msg = 0, .reset = 0, .name = "VM2_ALARM_A" }, - { .fc_id = 526, .cpu_id = 102, .valid = 1, - .msg = 0, .reset = 0, .name = "VM2_ALARM_B" }, - { .fc_id = 527, .cpu_id = 103, .valid = 1, - .msg = 0, .reset = 0, .name = "VM3_ALARM_A" }, - { .fc_id = 528, .cpu_id = 104, .valid = 1, - .msg = 0, .reset = 0, .name = "VM3_ALARM_B" }, - { .fc_id = 529, .cpu_id = 105, .valid = 1, - .msg = 0, .reset = 1, .name = "PSOC_AXI_ERR_RSP" }, - { .fc_id = 530, .cpu_id = 106, .valid = 1, - .msg = 0, .reset = 0, .name = "PSOC_PRSTN_FALL" }, - { .fc_id = 531, .cpu_id = 107, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 532, .cpu_id = 107, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 533, .cpu_id = 107, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 534, .cpu_id = 107, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 535, .cpu_id = 107, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 536, .cpu_id = 107, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 537, .cpu_id = 107, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 538, .cpu_id = 107, .valid = 
0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 539, .cpu_id = 108, .valid = 1, - .msg = 0, .reset = 1, .name = "KDMA_CH0_AXI_ERR_RSP" }, - { .fc_id = 540, .cpu_id = 109, .valid = 1, - .msg = 0, .reset = 1, .name = "PDMA_CH0_AXI_ERR_RSP" }, - { .fc_id = 541, .cpu_id = 109, .valid = 1, - .msg = 0, .reset = 1, .name = "PDMA_CH1_AXI_ERR_RSP" }, - { .fc_id = 542, .cpu_id = 110, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_CATTRIP_0" }, - { .fc_id = 543, .cpu_id = 111, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_CATTRIP_1" }, - { .fc_id = 544, .cpu_id = 112, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_CATTRIP_2" }, - { .fc_id = 545, .cpu_id = 113, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_CATTRIP_3" }, - { .fc_id = 546, .cpu_id = 114, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_CATTRIP_4" }, - { .fc_id = 547, .cpu_id = 115, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM_CATTRIP_5" }, - { .fc_id = 548, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM0_MC0_SEI_SEVERE" }, - { .fc_id = 549, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM0_MC0_SEI_NON_SEVERE" }, - { .fc_id = 550, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM0_MC1_SEI_SEVERE" }, - { .fc_id = 551, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM0_MC1_SEI_NON_SEVERE" }, - { .fc_id = 552, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM1_MC0_SEI_SEVERE" }, - { .fc_id = 553, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM1_MC0_SEI_NON_SEVERE" }, - { .fc_id = 554, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM1_MC1_SEI_SEVERE" }, - { .fc_id = 555, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM1_MC1_SEI_NON_SEVERE" }, - { .fc_id = 556, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM2_MC0_SEI_SEVERE" }, - { .fc_id = 557, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM2_MC0_SEI_NON_SEVERE" }, - { .fc_id = 558, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM2_MC1_SEI_SEVERE" }, - { .fc_id = 559, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM2_MC1_SEI_NON_SEVERE" }, - { .fc_id = 560, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM3_MC0_SEI_SEVERE" }, - { .fc_id = 561, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM3_MC0_SEI_NON_SEVERE" }, - { .fc_id = 562, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM3_MC1_SEI_SEVERE" }, - { .fc_id = 563, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM3_MC1_SEI_NON_SEVERE" }, - { .fc_id = 564, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM4_MC0_SEI_SEVERE" }, - { .fc_id = 565, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM4_MC0_SEI_NON_SEVERE" }, - { .fc_id = 566, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM4_MC1_SEI_SEVERE" }, - { .fc_id = 567, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM4_MC1_SEI_NON_SEVERE" }, - { .fc_id = 568, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM5_MC0_SEI_SEVERE" }, - { .fc_id = 569, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM5_MC0_SEI_NON_SEVERE" }, - { .fc_id = 570, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 1, .name = "HBM5_MC1_SEI_SEVERE" }, - { .fc_id = 571, .cpu_id = 116, .valid = 1, - .msg = 0, .reset = 0, .name = "HBM5_MC1_SEI_NON_SEVERE" }, - { .fc_id = 572, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC0_AXI_ERR_RSPONSE" }, - { .fc_id = 573, .cpu_id = 
117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC1_AXI_ERR_RSPONSE" }, - { .fc_id = 574, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC2_AXI_ERR_RSPONSE" }, - { .fc_id = 575, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC3_AXI_ERR_RSPONSE" }, - { .fc_id = 576, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC4_AXI_ERR_RSPONSE" }, - { .fc_id = 577, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC5_AXI_ERR_RSPONSE" }, - { .fc_id = 578, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC6_AXI_ERR_RSPONSE" }, - { .fc_id = 579, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC7_AXI_ERR_RSPONSE" }, - { .fc_id = 580, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC8_AXI_ERR_RSPONSE" }, - { .fc_id = 581, .cpu_id = 117, .valid = 1, - .msg = 0, .reset = 1, .name = "DEC9_AXI_ERR_RSPONSE" }, - { .fc_id = 582, .cpu_id = 118, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 583, .cpu_id = 119, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 584, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF0_FATAL" }, - { .fc_id = 585, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF1_FATAL" }, - { .fc_id = 586, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF2_FATAL" }, - { .fc_id = 587, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF3_FATAL" }, - { .fc_id = 588, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF8_FATAL" }, - { .fc_id = 589, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF9_FATAL" }, - { .fc_id = 590, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF10_FATAL" }, - { .fc_id = 591, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF11_FATAL" }, - { .fc_id = 592, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF7_FATAL" }, - { .fc_id = 593, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF6_FATAL" }, - { .fc_id = 594, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF5_FATAL" }, - { .fc_id = 595, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF4_FATAL" }, - { .fc_id = 596, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF15_FATAL" }, - { .fc_id = 597, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF14_FATAL" }, - { .fc_id = 598, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF13_FATAL" }, - { .fc_id = 599, .cpu_id = 120, .valid = 1, - .msg = 0, .reset = 1, .name = "HIF12_FATAL" }, - { .fc_id = 600, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC0_AXI_ERROR_RESPONSE" }, - { .fc_id = 601, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC1_AXI_ERROR_RESPONSE" }, - { .fc_id = 602, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC2_AXI_ERROR_RESPONSE" }, - { .fc_id = 603, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC3_AXI_ERROR_RESPONSE" }, - { .fc_id = 604, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC4_AXI_ERROR_RESPONSE" }, - { .fc_id = 605, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC5_AXI_ERROR_RESPONSE" }, - { .fc_id = 606, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC6_AXI_ERROR_RESPONSE" }, - { .fc_id = 607, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC7_AXI_ERROR_RESPONSE" }, - { .fc_id = 608, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC8_AXI_ERROR_RESPONSE" }, - { .fc_id = 609, .cpu_id = 121, .valid = 
1, - .msg = 0, .reset = 1, .name = "NIC9_AXI_ERROR_RESPONSE" }, - { .fc_id = 610, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC10_AXI_ERROR_RESPONSE" }, - { .fc_id = 611, .cpu_id = 121, .valid = 1, - .msg = 0, .reset = 1, .name = "NIC11_AXI_ERROR_RESPONSE" }, - { .fc_id = 612, .cpu_id = 122, .valid = 1, - .msg = 0, .reset = 1, .name = "SM0_AXI_ERROR_RESPONSE" }, - { .fc_id = 613, .cpu_id = 122, .valid = 1, - .msg = 0, .reset = 1, .name = "SM1_AXI_ERROR_RESPONSE" }, - { .fc_id = 614, .cpu_id = 122, .valid = 1, - .msg = 0, .reset = 1, .name = "SM2_AXI_ERROR_RESPONSE" }, - { .fc_id = 615, .cpu_id = 122, .valid = 1, - .msg = 0, .reset = 1, .name = "SM3_AXI_ERROR_RESPONSE" }, - { .fc_id = 616, .cpu_id = 123, .valid = 1, - .msg = 0, .reset = 1, .name = "ARC_AXI_ERROR_RESPONSE" }, - { .fc_id = 617, .cpu_id = 124, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 618, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 619, .cpu_id = 125, .valid = 1, - .msg = 0, .reset = 0, .name = "PCIE_FLR_REQUESTED" }, - { .fc_id = 620, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 621, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 622, .cpu_id = 125, .valid = 1, - .msg = 0, .reset = 1, .name = "PCIE_APB_TIMEOUT" }, - { .fc_id = 623, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 624, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 625, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 626, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 627, .cpu_id = 125, .valid = 1, - .msg = 0, .reset = 0, .name = "PCIE_FATAL_ERR" }, - { .fc_id = 628, .cpu_id = 125, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 629, .cpu_id = 126, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 630, .cpu_id = 127, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 631, .cpu_id = 128, .valid = 1, - .msg = 0, .reset = 0, .name = "PCIE_P2P_MSIX" }, - { .fc_id = 632, .cpu_id = 129, .valid = 1, - .msg = 0, .reset = 0, .name = "PCIE_DRAIN_COMPLETE" }, - { .fc_id = 633, .cpu_id = 130, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC0_BMON_SPMU" }, - { .fc_id = 634, .cpu_id = 131, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC0_KERNEL_ERR" }, - { .fc_id = 635, .cpu_id = 132, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC1_BMON_SPMU" }, - { .fc_id = 636, .cpu_id = 133, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC1_KERNEL_ERR" }, - { .fc_id = 637, .cpu_id = 134, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC2_BMON_SPMU" }, - { .fc_id = 638, .cpu_id = 135, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC2_KERNEL_ERR" }, - { .fc_id = 639, .cpu_id = 136, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC3_BMON_SPMU" }, - { .fc_id = 640, .cpu_id = 137, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC3_KERNEL_ERR" }, - { .fc_id = 641, .cpu_id = 138, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC4_BMON_SPMU" }, - { .fc_id = 642, .cpu_id = 139, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC4_KERNEL_ERR" }, - { .fc_id = 643, .cpu_id = 140, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC5_BMON_SPMU" }, - { .fc_id = 644, .cpu_id = 141, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC5_KERNEL_ERR" }, - { .fc_id = 645, .cpu_id = 150, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC6_BMON_SPMU" }, - { .fc_id = 646, .cpu_id = 151, .valid = 1, - .msg = 0, .reset = 0, .name = 
"TPC6_KERNEL_ERR" }, - { .fc_id = 647, .cpu_id = 152, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC7_BMON_SPMU" }, - { .fc_id = 648, .cpu_id = 153, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC7_KERNEL_ERR" }, - { .fc_id = 649, .cpu_id = 146, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC8_BMON_SPMU" }, - { .fc_id = 650, .cpu_id = 147, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC8_KERNEL_ERR" }, - { .fc_id = 651, .cpu_id = 148, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC9_BMON_SPMU" }, - { .fc_id = 652, .cpu_id = 149, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC9_KERNEL_ERR" }, - { .fc_id = 653, .cpu_id = 142, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC10_BMON_SPMU" }, - { .fc_id = 654, .cpu_id = 143, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC10_KERNEL_ERR" }, - { .fc_id = 655, .cpu_id = 144, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC11_BMON_SPMU" }, - { .fc_id = 656, .cpu_id = 145, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC11_KERNEL_ERR" }, - { .fc_id = 657, .cpu_id = 162, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC12_BMON_SPMU" }, - { .fc_id = 658, .cpu_id = 163, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC12_KERNEL_ERR" }, - { .fc_id = 659, .cpu_id = 164, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC13_BMON_SPMU" }, - { .fc_id = 660, .cpu_id = 165, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC13_KERNEL_ERR" }, - { .fc_id = 661, .cpu_id = 158, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC14_BMON_SPMU" }, - { .fc_id = 662, .cpu_id = 159, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC14_KERNEL_ERR" }, - { .fc_id = 663, .cpu_id = 160, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC15_BMON_SPMU" }, - { .fc_id = 664, .cpu_id = 161, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC15_KERNEL_ERR" }, - { .fc_id = 665, .cpu_id = 154, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC16_BMON_SPMU" }, - { .fc_id = 666, .cpu_id = 155, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC16_KERNEL_ERR" }, - { .fc_id = 667, .cpu_id = 156, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC17_BMON_SPMU" }, - { .fc_id = 668, .cpu_id = 157, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC17_KERNEL_ERR" }, - { .fc_id = 669, .cpu_id = 166, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC18_BMON_SPMU" }, - { .fc_id = 670, .cpu_id = 167, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC18_KERNEL_ERR" }, - { .fc_id = 671, .cpu_id = 168, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC19_BMON_SPMU" }, - { .fc_id = 672, .cpu_id = 169, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC19_KERNEL_ERR" }, - { .fc_id = 673, .cpu_id = 170, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC20_BMON_SPMU" }, - { .fc_id = 674, .cpu_id = 171, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC20_KERNEL_ERR" }, - { .fc_id = 675, .cpu_id = 172, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC21_BMON_SPMU" }, - { .fc_id = 676, .cpu_id = 173, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC21_KERNEL_ERR" }, - { .fc_id = 677, .cpu_id = 174, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC22_BMON_SPMU" }, - { .fc_id = 678, .cpu_id = 175, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC22_KERNEL_ERR" }, - { .fc_id = 679, .cpu_id = 176, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC23_BMON_SPMU" }, - { .fc_id = 680, .cpu_id = 177, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC23_KERNEL_ERR" }, - { .fc_id = 681, .cpu_id = 178, .valid = 1, - .msg = 0, .reset = 0, .name = "TPC24_BMON_SPMU" }, - { .fc_id = 682, .cpu_id = 179, .valid = 1, - .msg = 0, .reset = 0, .name = 
"TPC24_KERNEL_ERR" }, - { .fc_id = 683, .cpu_id = 180, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 684, .cpu_id = 180, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 685, .cpu_id = 180, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 686, .cpu_id = 180, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 687, .cpu_id = 180, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 688, .cpu_id = 180, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_CTRL_BMON_SPMU" }, - { .fc_id = 689, .cpu_id = 180, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_SBTE_BMON_SPMU" }, - { .fc_id = 690, .cpu_id = 180, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_WAP_BMON_SPMU" }, - { .fc_id = 691, .cpu_id = 180, .valid = 1, - .msg = 0, .reset = 0, .name = "MME0_WAP_SOURCE_RESULT_INVALID" }, - { .fc_id = 692, .cpu_id = 181, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 693, .cpu_id = 181, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 694, .cpu_id = 181, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 695, .cpu_id = 181, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 696, .cpu_id = 181, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 697, .cpu_id = 181, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_CTRL_BMON_SPMU" }, - { .fc_id = 698, .cpu_id = 181, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_SBTE_BMON_SPMU" }, - { .fc_id = 699, .cpu_id = 181, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_WAP_BMON_SPMU" }, - { .fc_id = 700, .cpu_id = 181, .valid = 1, - .msg = 0, .reset = 0, .name = "MME1_WAP_SOURCE_RESULT_INVALID" }, - { .fc_id = 701, .cpu_id = 182, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 702, .cpu_id = 182, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 703, .cpu_id = 182, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 704, .cpu_id = 182, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 705, .cpu_id = 182, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 706, .cpu_id = 182, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_CTRL_BMON_SPMU" }, - { .fc_id = 707, .cpu_id = 182, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_SBTE_BMON_SPMU" }, - { .fc_id = 708, .cpu_id = 182, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_WAP_BMON_SPMU" }, - { .fc_id = 709, .cpu_id = 182, .valid = 1, - .msg = 0, .reset = 0, .name = "MME2_WAP_SOURCE_RESULT_INVALID" }, - { .fc_id = 710, .cpu_id = 183, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 711, .cpu_id = 183, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 712, .cpu_id = 183, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 713, .cpu_id = 183, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 714, .cpu_id = 183, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 715, .cpu_id = 183, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_CTRL_BMON_SPMU" }, - { .fc_id = 716, .cpu_id = 183, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_SBTE_BMON_SPMU" }, - { .fc_id = 717, .cpu_id = 183, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_WAP_BMON_SPMU" }, - { .fc_id = 718, .cpu_id = 183, .valid = 1, - .msg = 0, .reset = 0, .name = "MME3_WAP_SOURCE_RESULT_INVALID" }, - { .fc_id = 719, .cpu_id = 184, .valid = 0, - .msg = 0, .reset = 0, .name = "" }, - { .fc_id = 720, .cpu_id = 184, .valid = 1, - .msg = 0, .reset = 1, .name = "HMMU0_PAGE_FAULT_OR_WR_PERM" }, - { 
-	{ .fc_id = 721, .cpu_id = 184, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU0_SECURITY_ERROR" },
-	{ .fc_id = 722, .cpu_id = 185, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 723, .cpu_id = 185, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU1_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 724, .cpu_id = 185, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU1_SECURITY_ERROR" },
-	{ .fc_id = 725, .cpu_id = 186, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 726, .cpu_id = 186, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU2_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 727, .cpu_id = 186, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU2_SECURITY_ERROR" },
-	{ .fc_id = 728, .cpu_id = 187, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 729, .cpu_id = 187, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU3_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 730, .cpu_id = 187, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU3_SECURITY_ERROR" },
-	{ .fc_id = 731, .cpu_id = 188, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 732, .cpu_id = 188, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU8_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 733, .cpu_id = 188, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU8_SECURITY_ERROR" },
-	{ .fc_id = 734, .cpu_id = 189, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 735, .cpu_id = 189, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU9_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 736, .cpu_id = 189, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU9_SECURITY_ERROR" },
-	{ .fc_id = 737, .cpu_id = 190, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 738, .cpu_id = 190, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU10_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 739, .cpu_id = 190, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU10_SECURITY_ERROR" },
-	{ .fc_id = 740, .cpu_id = 191, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 741, .cpu_id = 191, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU11_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 742, .cpu_id = 191, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU11_SECURITY_ERROR" },
-	{ .fc_id = 743, .cpu_id = 192, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 744, .cpu_id = 192, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU7_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 745, .cpu_id = 192, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU7_SECURITY_ERROR" },
-	{ .fc_id = 746, .cpu_id = 193, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 747, .cpu_id = 193, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU6_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 748, .cpu_id = 193, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU6_SECURITY_ERROR" },
-	{ .fc_id = 749, .cpu_id = 194, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 750, .cpu_id = 194, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU5_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 751, .cpu_id = 194, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU5_SECURITY_ERROR" },
-	{ .fc_id = 752, .cpu_id = 195, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 753, .cpu_id = 195, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU4_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 754, .cpu_id = 195, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU4_SECURITY_ERROR" },
-	{ .fc_id = 755, .cpu_id = 196, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 756, .cpu_id = 196, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU15_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 757, .cpu_id = 196, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU15_SECURITY_ERROR" },
-	{ .fc_id = 758, .cpu_id = 197, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 759, .cpu_id = 197, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU14_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 760, .cpu_id = 197, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU14_SECURITY_ERROR" },
-	{ .fc_id = 761, .cpu_id = 198, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 762, .cpu_id = 198, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU13_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 763, .cpu_id = 198, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU13_SECURITY_ERROR" },
-	{ .fc_id = 764, .cpu_id = 199, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 765, .cpu_id = 199, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU12_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 766, .cpu_id = 199, .valid = 1, .msg = 0, .reset = 1, .name = "HMMU12_SECURITY_ERROR" },
-	{ .fc_id = 767, .cpu_id = 200, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 768, .cpu_id = 201, .valid = 1, .msg = 0, .reset = 1, .name = "PMMU0_PAGE_FAULT_WR_PERM" },
-	{ .fc_id = 769, .cpu_id = 202, .valid = 1, .msg = 0, .reset = 1, .name = "PMMU0_SECURITY_ERROR" },
-	{ .fc_id = 770, .cpu_id = 203, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA2_BM_SPMU" },
-	{ .fc_id = 771, .cpu_id = 204, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 772, .cpu_id = 205, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA3_BM_SPMU" },
-	{ .fc_id = 773, .cpu_id = 206, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 774, .cpu_id = 207, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA0_BM_SPMU" },
-	{ .fc_id = 775, .cpu_id = 208, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 776, .cpu_id = 209, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA1_BM_SPMU" },
-	{ .fc_id = 777, .cpu_id = 210, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 778, .cpu_id = 211, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA6_BM_SPMU" },
-	{ .fc_id = 779, .cpu_id = 212, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 780, .cpu_id = 213, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA7_BM_SPMU" },
-	{ .fc_id = 781, .cpu_id = 214, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 782, .cpu_id = 215, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA4_BM_SPMU" },
-	{ .fc_id = 783, .cpu_id = 216, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 784, .cpu_id = 217, .valid = 1, .msg = 0, .reset = 0, .name = "HDMA5_BM_SPMU" },
-	{ .fc_id = 785, .cpu_id = 218, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 786, .cpu_id = 219, .valid = 1, .msg = 0, .reset = 0, .name = "KDMA_BM_SPMU" },
-	{ .fc_id = 787, .cpu_id = 220, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 788, .cpu_id = 221, .valid = 1, .msg = 0, .reset = 0, .name = "PDMA0_BM_SPMU" },
-	{ .fc_id = 789, .cpu_id = 222, .valid = 1, .msg = 0, .reset = 0, .name = "PDMA1_BM_SPMU" },
-	{ .fc_id = 790, .cpu_id = 223, .valid = 1, .msg = 0, .reset = 0, .name = "HBM0_MC0_SPI" },
-	{ .fc_id = 791, .cpu_id = 224, .valid = 1, .msg = 0, .reset = 0, .name = "HBM0_MC1_SPI" },
-	{ .fc_id = 792, .cpu_id = 225, .valid = 1, .msg = 0, .reset = 0, .name = "HBM1_MC0_SPI" },
-	{ .fc_id = 793, .cpu_id = 226, .valid = 1, .msg = 0, .reset = 0, .name = "HBM1_MC1_SPI" },
-	{ .fc_id = 794, .cpu_id = 227, .valid = 1, .msg = 0, .reset = 0, .name = "HBM2_MC0_SPI" },
-	{ .fc_id = 795, .cpu_id = 228, .valid = 1, .msg = 0, .reset = 0, .name = "HBM2_MC1_SPI" },
-	{ .fc_id = 796, .cpu_id = 229, .valid = 1, .msg = 0, .reset = 0, .name = "HBM3_MC0_SPI" },
-	{ .fc_id = 797, .cpu_id = 230, .valid = 1, .msg = 0, .reset = 0, .name = "HBM3_MC1_SPI" },
-	{ .fc_id = 798, .cpu_id = 231, .valid = 1, .msg = 0, .reset = 0, .name = "HBM4_MC0_SPI" },
-	{ .fc_id = 799, .cpu_id = 232, .valid = 1, .msg = 0, .reset = 0, .name = "HBM4_MC1_SPI" },
-	{ .fc_id = 800, .cpu_id = 233, .valid = 1, .msg = 0, .reset = 0, .name = "HBM5_MC0_SPI" },
-	{ .fc_id = 801, .cpu_id = 234, .valid = 1, .msg = 0, .reset = 0, .name = "HBM5_MC1_SPI" },
-	{ .fc_id = 802, .cpu_id = 235, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 803, .cpu_id = 236, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 804, .cpu_id = 237, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 805, .cpu_id = 238, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 806, .cpu_id = 239, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 807, .cpu_id = 240, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 808, .cpu_id = 241, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 809, .cpu_id = 242, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 810, .cpu_id = 243, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 811, .cpu_id = 244, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 812, .cpu_id = 245, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 813, .cpu_id = 246, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 814, .cpu_id = 247, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 815, .cpu_id = 248, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 816, .cpu_id = 249, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 817, .cpu_id = 250, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 818, .cpu_id = 251, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 819, .cpu_id = 252, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 820, .cpu_id = 253, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 821, .cpu_id = 254, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 822, .cpu_id = 255, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 823, .cpu_id = 256, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 824, .cpu_id = 257, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 825, .cpu_id = 258, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 826, .cpu_id = 259, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 827, .cpu_id = 260, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 828, .cpu_id = 261, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 829, .cpu_id = 262, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 830, .cpu_id = 263, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 831, .cpu_id = 264, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 832, .cpu_id = 265, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 833, .cpu_id = 266, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 834, .cpu_id = 267, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 835, .cpu_id = 268, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 836, .cpu_id = 269, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 837, .cpu_id = 270, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 838, .cpu_id = 271, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 839, .cpu_id = 272, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 840, .cpu_id = 273, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 841, .cpu_id = 274, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 842, .cpu_id = 275, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 843, .cpu_id = 276, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 844, .cpu_id = 277, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 845, .cpu_id = 278, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 846, .cpu_id = 279, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 847, .cpu_id = 280, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 848, .cpu_id = 281, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 849, .cpu_id = 282, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 850, .cpu_id = 283, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 851, .cpu_id = 284, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 852, .cpu_id = 285, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 853, .cpu_id = 286, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 854, .cpu_id = 287, .valid = 0, .msg = 0, .reset = 1, .name = "" },
-	{ .fc_id = 855, .cpu_id = 288, .valid = 0, .msg = 0, .reset = 1, .name = "" },
-	{ .fc_id = 856, .cpu_id = 289, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 857, .cpu_id = 290, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 858, .cpu_id = 291, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 859, .cpu_id = 292, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 860, .cpu_id = 293, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 861, .cpu_id = 294, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 862, .cpu_id = 295, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 863, .cpu_id = 296, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 864, .cpu_id = 297, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 865, .cpu_id = 298, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 866, .cpu_id = 299, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 867, .cpu_id = 300, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 868, .cpu_id = 301, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 869, .cpu_id = 302, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 870, .cpu_id = 303, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 871, .cpu_id = 304, .valid = 1, .msg = 0, .reset = 1, .name = "RPM_ERROR_OR_DRAIN" },
-	{ .fc_id = 872, .cpu_id = 305, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 873, .cpu_id = 306, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 874, .cpu_id = 307, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 875, .cpu_id = 308, .valid = 1, .msg = 0, .reset = 0, .name = "RAZWI_OR_PID_MIN_MAX_INTERRUPT" },
-	{ .fc_id = 876, .cpu_id = 309, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 877, .cpu_id = 310, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 878, .cpu_id = 311, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 879, .cpu_id = 312, .valid = 0, .msg = 0, .reset = 1, .name = "" },
-	{ .fc_id = 880, .cpu_id = 313, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 881, .cpu_id = 314, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 882, .cpu_id = 315, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 883, .cpu_id = 316, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 884, .cpu_id = 317, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 885, .cpu_id = 318, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 886, .cpu_id = 319, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 887, .cpu_id = 320, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 888, .cpu_id = 321, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 889, .cpu_id = 322, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 890, .cpu_id = 323, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 891, .cpu_id = 324, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 892, .cpu_id = 325, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 893, .cpu_id = 326, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 894, .cpu_id = 327, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 895, .cpu_id = 328, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 896, .cpu_id = 329, .valid = 1, .msg = 0, .reset = 0, .name = "DEC0_SPI" },
-	{ .fc_id = 897, .cpu_id = 329, .valid = 1, .msg = 0, .reset = 0, .name = "DEC0_BMON_SPMU" },
-	{ .fc_id = 898, .cpu_id = 330, .valid = 1, .msg = 0, .reset = 0, .name = "DEC1_SPI" },
-	{ .fc_id = 899, .cpu_id = 330, .valid = 1, .msg = 0, .reset = 0, .name = "DEC1_BMON_SPMU" },
-	{ .fc_id = 900, .cpu_id = 331, .valid = 1, .msg = 0, .reset = 0, .name = "DEC2_SPI" },
-	{ .fc_id = 901, .cpu_id = 331, .valid = 1, .msg = 0, .reset = 0, .name = "DEC2_BMON_SPMU" },
-	{ .fc_id = 902, .cpu_id = 332, .valid = 1, .msg = 0, .reset = 0, .name = "DEC3_SPI" },
-	{ .fc_id = 903, .cpu_id = 332, .valid = 1, .msg = 0, .reset = 0, .name = "DEC3_BMON_SPMU" },
-	{ .fc_id = 904, .cpu_id = 333, .valid = 1, .msg = 0, .reset = 0, .name = "DEC4_SPI" },
-	{ .fc_id = 905, .cpu_id = 333, .valid = 1, .msg = 0, .reset = 0, .name = "DEC4_BMON_SPMU" },
-	{ .fc_id = 906, .cpu_id = 334, .valid = 1, .msg = 0, .reset = 0, .name = "DEC5_SPI" },
-	{ .fc_id = 907, .cpu_id = 334, .valid = 1, .msg = 0, .reset = 0, .name = "DEC5_BMON_SPMU" },
-	{ .fc_id = 908, .cpu_id = 335, .valid = 1, .msg = 0, .reset = 0, .name = "DEC6_SPI" },
-	{ .fc_id = 909, .cpu_id = 335, .valid = 1, .msg = 0, .reset = 0, .name = "DEC6_BMON_SPMU" },
-	{ .fc_id = 910, .cpu_id = 336, .valid = 1, .msg = 0, .reset = 0, .name = "DEC7_SPI" },
-	{ .fc_id = 911, .cpu_id = 336, .valid = 1, .msg = 0, .reset = 0, .name = "DEC7_BMON_SPMU" },
-	{ .fc_id = 912, .cpu_id = 337, .valid = 1, .msg = 0, .reset = 0, .name = "DEC8_SPI" },
-	{ .fc_id = 913, .cpu_id = 337, .valid = 1, .msg = 0, .reset = 0, .name = "DEC8_BMON_SPMU" },
-	{ .fc_id = 914, .cpu_id = 338, .valid = 1, .msg = 0, .reset = 0, .name = "DEC9_SPI" },
-	{ .fc_id = 915, .cpu_id = 338, .valid = 1, .msg = 0, .reset = 0, .name = "DEC9_BMON_SPMU" },
-	{ .fc_id = 916, .cpu_id = 339, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 917, .cpu_id = 340, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 918, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 919, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 920, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 921, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 922, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 923, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 924, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 925, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 926, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 927, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 928, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 929, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 930, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 931, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 932, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 933, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 934, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 935, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 936, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 937, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 938, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 939, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 940, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 941, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 942, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 943, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 944, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 945, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 946, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 947, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 948, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 949, .cpu_id = 341, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 950, .cpu_id = 342, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 951, .cpu_id = 343, .valid = 1, .msg = 0, .reset = 0, .name = "NIC0_BMON_SPMU" },
-	{ .fc_id = 952, .cpu_id = 343, .valid = 1, .msg = 0, .reset = 0, .name = "NIC0_SW_ERROR" },
-	{ .fc_id = 953, .cpu_id = 343, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 954, .cpu_id = 343, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 955, .cpu_id = 344, .valid = 1, .msg = 0, .reset = 0, .name = "NIC1_BMON_SPMU" },
-	{ .fc_id = 956, .cpu_id = 344, .valid = 1, .msg = 0, .reset = 0, .name = "NIC1_SW_ERROR" },
-	{ .fc_id = 957, .cpu_id = 344, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 958, .cpu_id = 344, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 959, .cpu_id = 345, .valid = 1, .msg = 0, .reset = 0, .name = "NIC2_BMON_SPMU" },
-	{ .fc_id = 960, .cpu_id = 345, .valid = 1, .msg = 0, .reset = 0, .name = "NIC2_SW_ERROR" },
-	{ .fc_id = 961, .cpu_id = 345, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 962, .cpu_id = 345, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 963, .cpu_id = 346, .valid = 1, .msg = 0, .reset = 0, .name = "NIC3_BMON_SPMU" },
-	{ .fc_id = 964, .cpu_id = 346, .valid = 1, .msg = 0, .reset = 0, .name = "NIC3_SW_ERROR" },
-	{ .fc_id = 965, .cpu_id = 346, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 966, .cpu_id = 346, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 967, .cpu_id = 347, .valid = 1, .msg = 0, .reset = 0, .name = "NIC4_BMON_SPMU" },
-	{ .fc_id = 968, .cpu_id = 347, .valid = 1, .msg = 0, .reset = 0, .name = "NIC4_SW_ERROR" },
-	{ .fc_id = 969, .cpu_id = 347, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 970, .cpu_id = 347, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 971, .cpu_id = 348, .valid = 1, .msg = 0, .reset = 0, .name = "NIC5_BMON_SPMU" },
-	{ .fc_id = 972, .cpu_id = 348, .valid = 1, .msg = 0, .reset = 0, .name = "NIC5_SW_ERROR" },
-	{ .fc_id = 973, .cpu_id = 348, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 974, .cpu_id = 348, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 975, .cpu_id = 349, .valid = 1, .msg = 0, .reset = 0, .name = "NIC6_BMON_SPMU" },
-	{ .fc_id = 976, .cpu_id = 349, .valid = 1, .msg = 0, .reset = 0, .name = "NIC6_SW_ERROR" },
-	{ .fc_id = 977, .cpu_id = 349, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 978, .cpu_id = 349, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 979, .cpu_id = 350, .valid = 1, .msg = 0, .reset = 0, .name = "NIC7_BMON_SPMU" },
-	{ .fc_id = 980, .cpu_id = 350, .valid = 1, .msg = 0, .reset = 0, .name = "NIC7_SW_ERROR" },
-	{ .fc_id = 981, .cpu_id = 350, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 982, .cpu_id = 350, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 983, .cpu_id = 351, .valid = 1, .msg = 0, .reset = 0, .name = "NIC8_BMON_SPMU" },
-	{ .fc_id = 984, .cpu_id = 351, .valid = 1, .msg = 0, .reset = 0, .name = "NIC8_SW_ERROR" },
-	{ .fc_id = 985, .cpu_id = 351, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 986, .cpu_id = 351, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 987, .cpu_id = 352, .valid = 1, .msg = 0, .reset = 0, .name = "NIC9_BMON_SPMU" },
-	{ .fc_id = 988, .cpu_id = 352, .valid = 1, .msg = 0, .reset = 0, .name = "NIC9_SW_ERROR" },
-	{ .fc_id = 989, .cpu_id = 352, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 990, .cpu_id = 352, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 991, .cpu_id = 353, .valid = 1, .msg = 0, .reset = 0, .name = "NIC10_BMON_SPMU" },
-	{ .fc_id = 992, .cpu_id = 353, .valid = 1, .msg = 0, .reset = 0, .name = "NIC10_SW_ERROR" },
-	{ .fc_id = 993, .cpu_id = 353, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 994, .cpu_id = 353, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 995, .cpu_id = 354, .valid = 1, .msg = 0, .reset = 0, .name = "NIC11_BMON_SPMU" },
-	{ .fc_id = 996, .cpu_id = 354, .valid = 1, .msg = 0, .reset = 0, .name = "NIC11_SW_ERROR" },
-	{ .fc_id = 997, .cpu_id = 354, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 998, .cpu_id = 354, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 999, .cpu_id = 355, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1000, .cpu_id = 356, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1001, .cpu_id = 357, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1002, .cpu_id = 358, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1003, .cpu_id = 359, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1004, .cpu_id = 360, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1005, .cpu_id = 361, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1006, .cpu_id = 362, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1007, .cpu_id = 363, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1008, .cpu_id = 368, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1009, .cpu_id = 369, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1010, .cpu_id = 366, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1011, .cpu_id = 367, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1012, .cpu_id = 364, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1013, .cpu_id = 365, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1014, .cpu_id = 374, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1015, .cpu_id = 375, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1016, .cpu_id = 372, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1017, .cpu_id = 373, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1018, .cpu_id = 370, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1019, .cpu_id = 371, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1020, .cpu_id = 376, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1021, .cpu_id = 377, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1022, .cpu_id = 378, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1023, .cpu_id = 379, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1024, .cpu_id = 380, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1025, .cpu_id = 381, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1026, .cpu_id = 382, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1027, .cpu_id = 383, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1028, .cpu_id = 384, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1029, .cpu_id = 385, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1030, .cpu_id = 386, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1031, .cpu_id = 387, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1032, .cpu_id = 388, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1033, .cpu_id = 389, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1034, .cpu_id = 390, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1035, .cpu_id = 391, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1036, .cpu_id = 392, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1037, .cpu_id = 393, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1038, .cpu_id = 394, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1039, .cpu_id = 395, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1040, .cpu_id = 396, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1041, .cpu_id = 397, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1042, .cpu_id = 398, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1043, .cpu_id = 399, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1044, .cpu_id = 400, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1045, .cpu_id = 401, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1046, .cpu_id = 402, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1047, .cpu_id = 403, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1048, .cpu_id = 404, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1049, .cpu_id = 405, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1050, .cpu_id = 406, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1051, .cpu_id = 407, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1052, .cpu_id = 408, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1053, .cpu_id = 409, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1054, .cpu_id = 410, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1055, .cpu_id = 411, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1056, .cpu_id = 412, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1057, .cpu_id = 413, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1058, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1059, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1060, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1061, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1062, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1063, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1064, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1065, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1066, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1067, .cpu_id = 414, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1068, .cpu_id = 415, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1069, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1070, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1071, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1072, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1073, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1074, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1075, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1076, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1077, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1078, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1079, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1080, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1081, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1082, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1083, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1084, .cpu_id = 416, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1085, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1086, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1087, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1088, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1089, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1090, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1091, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1092, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1093, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1094, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1095, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1096, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1097, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1098, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1099, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1100, .cpu_id = 417, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1101, .cpu_id = 418, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1102, .cpu_id = 419, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1103, .cpu_id = 420, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1104, .cpu_id = 421, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1105, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1106, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1107, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1108, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1109, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1110, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1111, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1112, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1113, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1114, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1115, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1116, .cpu_id = 422, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1117, .cpu_id = 423, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1118, .cpu_id = 424, .valid = 1, .msg = 0, .reset = 0, .name = "ROTATOR0_SERR" },
-	{ .fc_id = 1119, .cpu_id = 425, .valid = 1, .msg = 0, .reset = 0, .name = "ROTATOR1_SERR" },
-	{ .fc_id = 1120, .cpu_id = 426, .valid = 1, .msg = 0, .reset = 1, .name = "ROTATOR0_DERR" },
-	{ .fc_id = 1121, .cpu_id = 427, .valid = 1, .msg = 0, .reset = 1, .name = "ROTATOR1_DERR" },
-	{ .fc_id = 1122, .cpu_id = 428, .valid = 1, .msg = 0, .reset = 1, .name = "ROTATOR0_AXI_ERROR_RESPONSE" },
-	{ .fc_id = 1123, .cpu_id = 429, .valid = 1, .msg = 0, .reset = 1, .name = "ROTATOR1_AXI_ERROR_RESPONSE" },
-	{ .fc_id = 1124, .cpu_id = 430, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1125, .cpu_id = 431, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1126, .cpu_id = 432, .valid = 1, .msg = 0, .reset = 0, .name = "ROTATOR0_BMON_SPMU" },
-	{ .fc_id = 1127, .cpu_id = 433, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1128, .cpu_id = 434, .valid = 1, .msg = 0, .reset = 0, .name = "ROTATOR1_BMON_SPMU" },
-	{ .fc_id = 1129, .cpu_id = 435, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1130, .cpu_id = 436, .valid = 1, .msg = 0, .reset = 0, .name = "SM0_BMON_SPMU" },
-	{ .fc_id = 1131, .cpu_id = 437, .valid = 1, .msg = 0, .reset = 0, .name = "SM1_BMON_SPMU" },
-	{ .fc_id = 1132, .cpu_id = 438, .valid = 1, .msg = 0, .reset = 0, .name = "SM2_BMON_SPMU" },
-	{ .fc_id = 1133, .cpu_id = 439, .valid = 1, .msg = 0, .reset = 0, .name = "SM3_BMON_SPMU" },
-	{ .fc_id = 1134, .cpu_id = 440, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1135, .cpu_id = 441, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1136, .cpu_id = 442, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1137, .cpu_id = 443, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1138, .cpu_id = 444, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1139, .cpu_id = 445, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1140, .cpu_id = 446, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1141, .cpu_id = 447, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1142, .cpu_id = 448, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1143, .cpu_id = 449, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1144, .cpu_id = 450, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1145, .cpu_id = 451, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1146, .cpu_id = 452, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1147, .cpu_id = 453, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1148, .cpu_id = 454, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1149, .cpu_id = 455, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1150, .cpu_id = 456, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1151, .cpu_id = 457, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1152, .cpu_id = 458, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1153, .cpu_id = 459, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1154, .cpu_id = 460, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1155, .cpu_id = 461, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1156, .cpu_id = 462, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1157, .cpu_id = 463, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1158, .cpu_id = 464, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1159, .cpu_id = 465, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1160, .cpu_id = 466, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1161, .cpu_id = 467, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1162, .cpu_id = 468, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1163, .cpu_id = 469, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1164, .cpu_id = 470, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1165, .cpu_id = 471, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1166, .cpu_id = 472, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1167, .cpu_id = 473, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1168, .cpu_id = 474, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1169, .cpu_id = 475, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1170, .cpu_id = 476, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1171, .cpu_id = 477, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1172, .cpu_id = 478, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1173, .cpu_id = 479, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1174, .cpu_id = 480, .valid = 1, .msg = 1, .reset = 0, .name = "PSOC_DMA_QM" },
-	{ .fc_id = 1175, .cpu_id = 481, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1176, .cpu_id = 482, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1177, .cpu_id = 483, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1178, .cpu_id = 484, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1179, .cpu_id = 485, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1180, .cpu_id = 486, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1181, .cpu_id = 487, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1182, .cpu_id = 488, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1183, .cpu_id = 489, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1184, .cpu_id = 490, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1185, .cpu_id = 491, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1186, .cpu_id = 492, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1187, .cpu_id = 493, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1188, .cpu_id = 494, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1189, .cpu_id = 495, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1190, .cpu_id = 496, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1191, .cpu_id = 497, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1192, .cpu_id = 498, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1193, .cpu_id = 499, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1194, .cpu_id = 500, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1195, .cpu_id = 501, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1196, .cpu_id = 502, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1197, .cpu_id = 503, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1198, .cpu_id = 504, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1199, .cpu_id = 505, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1200, .cpu_id = 506, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1201, .cpu_id = 507, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1202, .cpu_id = 508, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1203, .cpu_id = 509, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1204, .cpu_id = 510, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1205, .cpu_id = 511, .valid = 0, .msg = 0, .reset = 0, .name = "" },
-	{ .fc_id = 1206, .cpu_id = 512, .valid = 1, .msg = 1, .reset = 0, .name = "TPC0_QM" },
-	{ .fc_id = 1207, .cpu_id = 513, .valid = 1, .msg = 1, .reset = 0, .name = "TPC1_QM" },
-	{ .fc_id = 1208, .cpu_id = 514, .valid = 1, .msg = 1, .reset = 0, .name = "TPC2_QM" },
-	{ .fc_id = 1209, .cpu_id = 515, .valid = 1, .msg = 1, .reset = 0, .name = "TPC3_QM" },
"TPC3_QM" }, - { .fc_id = 1210, .cpu_id = 516, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC4_QM" }, - { .fc_id = 1211, .cpu_id = 517, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC5_QM" }, - { .fc_id = 1212, .cpu_id = 518, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC6_QM" }, - { .fc_id = 1213, .cpu_id = 519, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC7_QM" }, - { .fc_id = 1214, .cpu_id = 520, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC8_QM" }, - { .fc_id = 1215, .cpu_id = 521, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC9_QM" }, - { .fc_id = 1216, .cpu_id = 522, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC10_QM" }, - { .fc_id = 1217, .cpu_id = 523, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC11_QM" }, - { .fc_id = 1218, .cpu_id = 524, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC12_QM" }, - { .fc_id = 1219, .cpu_id = 525, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC13_QM" }, - { .fc_id = 1220, .cpu_id = 526, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC14_QM" }, - { .fc_id = 1221, .cpu_id = 527, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC15_QM" }, - { .fc_id = 1222, .cpu_id = 528, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC16_QM" }, - { .fc_id = 1223, .cpu_id = 529, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC17_QM" }, - { .fc_id = 1224, .cpu_id = 530, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC18_QM" }, - { .fc_id = 1225, .cpu_id = 531, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC19_QM" }, - { .fc_id = 1226, .cpu_id = 532, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC20_QM" }, - { .fc_id = 1227, .cpu_id = 533, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC21_QM" }, - { .fc_id = 1228, .cpu_id = 534, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC22_QM" }, - { .fc_id = 1229, .cpu_id = 535, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC23_QM" }, - { .fc_id = 1230, .cpu_id = 536, .valid = 1, - .msg = 1, .reset = 0, .name = "TPC24_QM" }, - { .fc_id = 1231, .cpu_id = 537, .valid = 0, - .msg = 1, .reset = 0, .name = "" }, - { .fc_id = 1232, .cpu_id = 538, .valid = 1, - .msg = 1, .reset = 0, .name = "MME0_QM" }, - { .fc_id = 1233, .cpu_id = 539, .valid = 1, - .msg = 1, .reset = 0, .name = "MME1_QM" }, - { .fc_id = 1234, .cpu_id = 540, .valid = 1, - .msg = 1, .reset = 0, .name = "MME2_QM" }, - { .fc_id = 1235, .cpu_id = 541, .valid = 1, - .msg = 1, .reset = 0, .name = "MME3_QM" }, - { .fc_id = 1236, .cpu_id = 542, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA2_QM" }, - { .fc_id = 1237, .cpu_id = 543, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA3_QM" }, - { .fc_id = 1238, .cpu_id = 544, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA0_QM" }, - { .fc_id = 1239, .cpu_id = 545, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA1_QM" }, - { .fc_id = 1240, .cpu_id = 546, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA6_QM" }, - { .fc_id = 1241, .cpu_id = 547, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA7_QM" }, - { .fc_id = 1242, .cpu_id = 548, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA4_QM" }, - { .fc_id = 1243, .cpu_id = 549, .valid = 1, - .msg = 1, .reset = 0, .name = "HDMA5_QM" }, - { .fc_id = 1244, .cpu_id = 550, .valid = 1, - .msg = 1, .reset = 0, .name = "PDMA0_QM" }, - { .fc_id = 1245, .cpu_id = 551, .valid = 1, - .msg = 1, .reset = 0, .name = "PDMA1_QM" }, - { .fc_id = 1246, .cpu_id = 552, .valid = 1, - .msg = 1, .reset = 0, .name = "PI_UPDATE" }, - { .fc_id = 1247, .cpu_id = 553, .valid = 1, - .msg = 1, .reset = 0, .name = "HALT_MACHINE" }, - { .fc_id = 1248, .cpu_id = 554, .valid = 1, - .msg = 1, 
-	{ .fc_id = 1249, .cpu_id = 555, .valid = 1, .msg = 1, .reset = 0, .name = "ROT0_QM" },
-	{ .fc_id = 1250, .cpu_id = 556, .valid = 1, .msg = 1, .reset = 0, .name = "ROT1_QM" },
-	{ .fc_id = 1251, .cpu_id = 557, .valid = 1, .msg = 1, .reset = 0, .name = "SOFT_RESET" },
-	{ .fc_id = 1252, .cpu_id = 558, .valid = 1, .msg = 1, .reset = 0, .name = "CPLD_SHUTDOWN_CAUSE" },
-	{ .fc_id = 1253, .cpu_id = 559, .valid = 1, .msg = 1, .reset = 0, .name = "FIX_POWER_ENV_S" },
-	{ .fc_id = 1254, .cpu_id = 560, .valid = 1, .msg = 1, .reset = 0, .name = "FIX_POWER_ENV_E" },
-	{ .fc_id = 1255, .cpu_id = 561, .valid = 1, .msg = 1, .reset = 0, .name = "FIX_THERMAL_ENV_S" },
-	{ .fc_id = 1256, .cpu_id = 562, .valid = 1, .msg = 1, .reset = 0, .name = "FIX_THERMAL_ENV_E" },
-	{ .fc_id = 1257, .cpu_id = 563, .valid = 1, .msg = 1, .reset = 0, .name = "CPLD_SHUTDOWN_EVENT" },
-	{ .fc_id = 1258, .cpu_id = 564, .valid = 1, .msg = 1, .reset = 0, .name = "PKT_QUEUE_OUT_SYNC" },
-	{ .fc_id = 1259, .cpu_id = 565, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA2_CORE" },
-	{ .fc_id = 1260, .cpu_id = 566, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA3_CORE" },
-	{ .fc_id = 1261, .cpu_id = 567, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA0_CORE" },
-	{ .fc_id = 1262, .cpu_id = 568, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA1_CORE" },
-	{ .fc_id = 1263, .cpu_id = 569, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA6_CORE" },
-	{ .fc_id = 1264, .cpu_id = 570, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA7_CORE" },
-	{ .fc_id = 1265, .cpu_id = 571, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA4_CORE" },
-	{ .fc_id = 1266, .cpu_id = 572, .valid = 1, .msg = 1, .reset = 0, .name = "HDMA5_CORE" },
-	{ .fc_id = 1267, .cpu_id = 573, .valid = 1, .msg = 1, .reset = 0, .name = "PDMA0_CORE" },
-	{ .fc_id = 1268, .cpu_id = 574, .valid = 1, .msg = 1, .reset = 0, .name = "PDMA1_CORE" },
-	{ .fc_id = 1269, .cpu_id = 575, .valid = 1, .msg = 1, .reset = 0, .name = "KDMA0_CORE" },
-	{ .fc_id = 1270, .cpu_id = 576, .valid = 1, .msg = 1, .reset = 0, .name = "NIC0_QM0" },
-	{ .fc_id = 1271, .cpu_id = 577, .valid = 1, .msg = 1, .reset = 0, .name = "NIC0_QM1" },
-	{ .fc_id = 1272, .cpu_id = 578, .valid = 1, .msg = 1, .reset = 0, .name = "NIC1_QM0" },
-	{ .fc_id = 1273, .cpu_id = 579, .valid = 1, .msg = 1, .reset = 0, .name = "NIC1_QM1" },
-	{ .fc_id = 1274, .cpu_id = 580, .valid = 1, .msg = 1, .reset = 0, .name = "NIC2_QM0" },
-	{ .fc_id = 1275, .cpu_id = 581, .valid = 1, .msg = 1, .reset = 0, .name = "NIC2_QM1" },
-	{ .fc_id = 1276, .cpu_id = 582, .valid = 1, .msg = 1, .reset = 0, .name = "NIC3_QM0" },
-	{ .fc_id = 1277, .cpu_id = 583, .valid = 1, .msg = 1, .reset = 0, .name = "NIC3_QM1" },
-	{ .fc_id = 1278, .cpu_id = 584, .valid = 1, .msg = 1, .reset = 0, .name = "NIC4_QM0" },
-	{ .fc_id = 1279, .cpu_id = 585, .valid = 1, .msg = 1, .reset = 0, .name = "NIC4_QM1" },
-	{ .fc_id = 1280, .cpu_id = 586, .valid = 1, .msg = 1, .reset = 0, .name = "NIC5_QM0" },
-	{ .fc_id = 1281, .cpu_id = 587, .valid = 1, .msg = 1, .reset = 0, .name = "NIC5_QM1" },
-	{ .fc_id = 1282, .cpu_id = 588, .valid = 1, .msg = 1, .reset = 0, .name = "NIC6_QM0" },
-	{ .fc_id = 1283, .cpu_id = 589, .valid = 1, .msg = 1, .reset = 0, .name = "NIC6_QM1" },
-	{ .fc_id = 1284, .cpu_id = 590, .valid = 1, .msg = 1, .reset = 0, .name = "NIC7_QM0" },
-	{ .fc_id = 1285, .cpu_id = 591, .valid = 1, .msg = 1, .reset = 0, .name = "NIC7_QM1" },
-	{ .fc_id = 1286, .cpu_id = 592, .valid = 1, .msg = 1, .reset = 0, .name = "NIC8_QM0" },
-	{ .fc_id = 1287, .cpu_id = 593, .valid = 1, .msg = 1, .reset = 0, .name = "NIC8_QM1" },
-	{ .fc_id = 1288, .cpu_id = 594, .valid = 1, .msg = 1, .reset = 0, .name = "NIC9_QM0" },
-	{ .fc_id = 1289, .cpu_id = 595, .valid = 1, .msg = 1, .reset = 0, .name = "NIC9_QM1" },
-	{ .fc_id = 1290, .cpu_id = 596, .valid = 1, .msg = 1, .reset = 0, .name = "NIC10_QM0" },
-	{ .fc_id = 1291, .cpu_id = 597, .valid = 1, .msg = 1, .reset = 0, .name = "NIC10_QM1" },
-	{ .fc_id = 1292, .cpu_id = 598, .valid = 1, .msg = 1, .reset = 0, .name = "NIC11_QM0" },
-	{ .fc_id = 1293, .cpu_id = 599, .valid = 1, .msg = 1, .reset = 0, .name = "NIC11_QM1" },
-	{ .fc_id = 1294, .cpu_id = 600, .valid = 1, .msg = 1, .reset = 0, .name = "CPU_PKT_SANITY_FAILED" },
-	{ .fc_id = 1295, .cpu_id = 601, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC0_ENG0" },
-	{ .fc_id = 1296, .cpu_id = 602, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC0_ENG1" },
-	{ .fc_id = 1297, .cpu_id = 603, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC1_ENG0" },
-	{ .fc_id = 1298, .cpu_id = 604, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC1_ENG1" },
-	{ .fc_id = 1299, .cpu_id = 605, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC2_ENG0" },
-	{ .fc_id = 1300, .cpu_id = 606, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC2_ENG1" },
-	{ .fc_id = 1301, .cpu_id = 607, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC3_ENG0" },
-	{ .fc_id = 1302, .cpu_id = 608, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC3_ENG1" },
-	{ .fc_id = 1303, .cpu_id = 609, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC4_ENG0" },
-	{ .fc_id = 1304, .cpu_id = 610, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC4_ENG1" },
-	{ .fc_id = 1305, .cpu_id = 611, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC5_ENG0" },
-	{ .fc_id = 1306, .cpu_id = 612, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC5_ENG1" },
-	{ .fc_id = 1307, .cpu_id = 613, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC6_ENG0" },
-	{ .fc_id = 1308, .cpu_id = 614, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC6_ENG1" },
-	{ .fc_id = 1309, .cpu_id = 615, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC7_ENG0" },
-	{ .fc_id = 1310, .cpu_id = 616, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC7_ENG1" },
-	{ .fc_id = 1311, .cpu_id = 617, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC8_ENG0" },
-	{ .fc_id = 1312, .cpu_id = 618, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC8_ENG1" },
-	{ .fc_id = 1313, .cpu_id = 619, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC9_ENG0" },
-	{ .fc_id = 1314, .cpu_id = 620, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC9_ENG1" },
-	{ .fc_id = 1315, .cpu_id = 621, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC10_ENG0" },
-	{ .fc_id = 1316, .cpu_id = 622, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC10_ENG1" },
-	{ .fc_id = 1317, .cpu_id = 623, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC11_ENG0" },
-	{ .fc_id = 1318, .cpu_id = 624, .valid = 1, .msg = 1, .reset = 0, .name = "STATUS_NIC11_ENG1" },
-	{ .fc_id = 1319, .cpu_id = 625, .valid = 1, .msg = 1, .reset = 0, .name = "ARC_DCCM_FULL" },
-	{ .fc_id = 1320, .cpu_id = 626, .valid = 1, .msg = 1, .reset = 1, .name = "FP32_NOT_SUPPORTED" },
-	{ .fc_id = 1321, .cpu_id = 627, .valid = 1, .msg = 1, .reset = 1, .name = "DEV_RESET_REQ" },
+	{ .fc_id = 0, .cpu_id = 0, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, .name = "" },
.valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1, .cpu_id = 1, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 2, .cpu_id = 2, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 3, .cpu_id = 3, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 4, .cpu_id = 4, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 5, .cpu_id = 5, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 6, .cpu_id = 6, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 7, .cpu_id = 7, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 8, .cpu_id = 8, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 9, .cpu_id = 9, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 10, .cpu_id = 10, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 11, .cpu_id = 11, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 12, .cpu_id = 12, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 13, .cpu_id = 13, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 14, .cpu_id = 14, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 15, .cpu_id = 15, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 16, .cpu_id = 16, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 17, .cpu_id = 17, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 18, .cpu_id = 18, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 19, .cpu_id = 19, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 20, .cpu_id = 20, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 21, .cpu_id = 21, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 22, .cpu_id = 22, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 23, .cpu_id = 23, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 24, .cpu_id = 24, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 25, .cpu_id = 25, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 26, .cpu_id = 26, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 27, .cpu_id = 27, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 28, .cpu_id = 28, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 29, .cpu_id = 29, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 30, .cpu_id = 30, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 31, .cpu_id = 31, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 32, .cpu_id = 32, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PCIE_CORE_SERR" }, + { .fc_id = 33, .cpu_id = 33, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PCIE_CORE_DERR" }, + { .fc_id = 34, .cpu_id = 34, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PCIE_IF_SERR" }, + { .fc_id = 35, .cpu_id = 
35, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PCIE_IF_DERR" }, + { .fc_id = 36, .cpu_id = 36, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PCIE_PHY_SERR" }, + { .fc_id = 37, .cpu_id = 37, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PCIE_PHY_DERR" }, + { .fc_id = 38, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC0_ECC_SERR" }, + { .fc_id = 39, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC1_ECC_SERR" }, + { .fc_id = 40, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC2_ECC_SERR" }, + { .fc_id = 41, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC3_ECC_SERR" }, + { .fc_id = 42, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC4_ECC_SERR" }, + { .fc_id = 43, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC5_ECC_SERR" }, + { .fc_id = 44, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC6_ECC_SERR" }, + { .fc_id = 45, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC7_ECC_SERR" }, + { .fc_id = 46, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC8_ECC_SERR" }, + { .fc_id = 47, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC9_ECC_SERR" }, + { .fc_id = 48, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC10_ECC_SERR" }, + { .fc_id = 49, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC11_ECC_SERR" }, + { .fc_id = 50, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC12_ECC_SERR" }, + { .fc_id = 51, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC13_ECC_SERR" }, + { .fc_id = 52, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC14_ECC_SERR" }, + { .fc_id = 53, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC15_ECC_SERR" }, + { .fc_id = 54, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC16_ECC_SERR" }, + { .fc_id = 55, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC17_ECC_SERR" }, + { .fc_id = 56, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC18_ECC_SERR" }, + { .fc_id = 57, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC19_ECC_SERR" }, + { .fc_id = 58, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC20_ECC_SERR" }, + { .fc_id = 59, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC21_ECC_SERR" }, + { .fc_id = 60, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC22_ECC_SERR" }, + { .fc_id = 61, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC23_ECC_SERR" }, + { .fc_id = 62, .cpu_id = 38, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC24_ECC_SERR" }, + { .fc_id = 63, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC0_ECC_DERR" }, + { .fc_id = 64, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC1_ECC_DERR" }, + { .fc_id = 65, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC2_ECC_DERR" }, + { .fc_id = 66, .cpu_id = 39, 
.valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC3_ECC_DERR" }, + { .fc_id = 67, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC4_ECC_DERR" }, + { .fc_id = 68, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC5_ECC_DERR" }, + { .fc_id = 69, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC6_ECC_DERR" }, + { .fc_id = 70, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC7_ECC_DERR" }, + { .fc_id = 71, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC8_ECC_DERR" }, + { .fc_id = 72, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC9_ECC_DERR" }, + { .fc_id = 73, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC10_ECC_DERR" }, + { .fc_id = 74, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC11_ECC_DERR" }, + { .fc_id = 75, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC12_ECC_DERR" }, + { .fc_id = 76, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC13_ECC_DERR" }, + { .fc_id = 77, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC14_ECC_DERR" }, + { .fc_id = 78, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC15_ECC_DERR" }, + { .fc_id = 79, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC16_ECC_DERR" }, + { .fc_id = 80, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC17_ECC_DERR" }, + { .fc_id = 81, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC18_ECC_DERR" }, + { .fc_id = 82, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC19_ECC_DERR" }, + { .fc_id = 83, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC20_ECC_DERR" }, + { .fc_id = 84, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC21_ECC_DERR" }, + { .fc_id = 85, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC22_ECC_DERR" }, + { .fc_id = 86, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC23_ECC_DERR" }, + { .fc_id = 87, .cpu_id = 39, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "TPC24_ECC_DERR" }, + { .fc_id = 88, .cpu_id = 40, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_SBTE0_ECC_SERR" }, + { .fc_id = 89, .cpu_id = 40, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_SBTE1_ECC_SERR" }, + { .fc_id = 90, .cpu_id = 40, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_SBTE2_ECC_SERR" }, + { .fc_id = 91, .cpu_id = 40, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_SBTE3_ECC_SERR" }, + { .fc_id = 92, .cpu_id = 40, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_SBTE4_ECC_SERR" }, + { .fc_id = 93, .cpu_id = 40, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_CTRL_ECC_SERR" }, + { .fc_id = 94, .cpu_id = 40, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_WAP_ECC_SERR" }, + { .fc_id = 95, .cpu_id = 41, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_SBTE0_ECC_SERR" }, + { .fc_id = 96, .cpu_id = 41, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = 
"MME1_SBTE1_ECC_SERR" }, + { .fc_id = 97, .cpu_id = 41, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_SBTE2_ECC_SERR" }, + { .fc_id = 98, .cpu_id = 41, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_SBTE3_ECC_SERR" }, + { .fc_id = 99, .cpu_id = 41, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_SBTE4_ECC_SERR" }, + { .fc_id = 100, .cpu_id = 41, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_CTRL_ECC_SERR" }, + { .fc_id = 101, .cpu_id = 41, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_WAP_ECC_SERR" }, + { .fc_id = 102, .cpu_id = 42, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_SBTE0_ECC_SERR" }, + { .fc_id = 103, .cpu_id = 42, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_SBTE1_ECC_SERR" }, + { .fc_id = 104, .cpu_id = 42, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_SBTE2_ECC_SERR" }, + { .fc_id = 105, .cpu_id = 42, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_SBTE3_ECC_SERR" }, + { .fc_id = 106, .cpu_id = 42, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_SBTE4_ECC_SERR" }, + { .fc_id = 107, .cpu_id = 42, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_CTRL_ECC_SERR" }, + { .fc_id = 108, .cpu_id = 42, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_WAP_ECC_SERR" }, + { .fc_id = 109, .cpu_id = 43, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_SBTE0_ECC_SERR" }, + { .fc_id = 110, .cpu_id = 43, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_SBTE1_ECC_SERR" }, + { .fc_id = 111, .cpu_id = 43, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_SBTE2_ECC_SERR" }, + { .fc_id = 112, .cpu_id = 43, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_SBTE3_ECC_SERR" }, + { .fc_id = 113, .cpu_id = 43, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_SBTE4_ECC_SERR" }, + { .fc_id = 114, .cpu_id = 43, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_CTRL_ECC_SERR" }, + { .fc_id = 115, .cpu_id = 43, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_WAP_ECC_SERR" }, + { .fc_id = 116, .cpu_id = 44, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME0_SBTE0_ECC_DERR" }, + { .fc_id = 117, .cpu_id = 44, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME0_SBTE1_ECC_DERR" }, + { .fc_id = 118, .cpu_id = 44, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME0_SBTE2_ECC_DERR" }, + { .fc_id = 119, .cpu_id = 44, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME0_SBTE3_ECC_DERR" }, + { .fc_id = 120, .cpu_id = 44, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME0_SBTE4_ECC_DERR" }, + { .fc_id = 121, .cpu_id = 44, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME0_CTRL_ECC_DERR" }, + { .fc_id = 122, .cpu_id = 44, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME0_WAP_ECC_DERR" }, + { .fc_id = 123, .cpu_id = 45, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME1_SBTE0_ECC_DERR" }, + { .fc_id = 124, .cpu_id = 45, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME1_SBTE1_ECC_DERR" }, + { .fc_id = 125, .cpu_id = 45, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME1_SBTE2_ECC_DERR" }, + { .fc_id = 126, .cpu_id = 45, 
.valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME1_SBTE3_ECC_DERR" }, + { .fc_id = 127, .cpu_id = 45, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME1_SBTE4_ECC_DERR" }, + { .fc_id = 128, .cpu_id = 45, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME1_CTRL_ECC_DERR" }, + { .fc_id = 129, .cpu_id = 45, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME1_WAP_ECC_DERR" }, + { .fc_id = 130, .cpu_id = 46, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME2_SBTE0_ECC_DERR" }, + { .fc_id = 131, .cpu_id = 46, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME2_SBTE1_ECC_DERR" }, + { .fc_id = 132, .cpu_id = 46, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME2_SBTE2_ECC_DERR" }, + { .fc_id = 133, .cpu_id = 46, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME2_SBTE3_ECC_DERR" }, + { .fc_id = 134, .cpu_id = 46, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME2_SBTE4_ECC_DERR" }, + { .fc_id = 135, .cpu_id = 46, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME2_CTRL_ECC_DERR" }, + { .fc_id = 136, .cpu_id = 46, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME2_WAP_ECC_DERR" }, + { .fc_id = 137, .cpu_id = 47, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME3_SBTE0_ECC_DERR" }, + { .fc_id = 138, .cpu_id = 47, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME3_SBTE1_ECC_DERR" }, + { .fc_id = 139, .cpu_id = 47, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME3_SBTE2_ECC_DERR" }, + { .fc_id = 140, .cpu_id = 47, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME3_SBTE3_ECC_DERR" }, + { .fc_id = 141, .cpu_id = 47, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME3_SBTE4_ECC_DERR" }, + { .fc_id = 142, .cpu_id = 47, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME3_CTRL_ECC_DERR" }, + { .fc_id = 143, .cpu_id = 47, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "MME3_WAP_ECC_DERR" }, + { .fc_id = 144, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA2_ECC_SERR" }, + { .fc_id = 145, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA3_ECC_SERR" }, + { .fc_id = 146, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA0_ECC_SERR" }, + { .fc_id = 147, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA1_ECC_SERR" }, + { .fc_id = 148, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA6_ECC_SERR" }, + { .fc_id = 149, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA7_ECC_SERR" }, + { .fc_id = 150, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HDMA4_ECC_SERR" }, + { .fc_id = 151, .cpu_id = 48, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HDMA5_ECC_SERR" }, + { .fc_id = 152, .cpu_id = 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA2_ECC_DERR" }, + { .fc_id = 153, .cpu_id = 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA3_ECC_DERR" }, + { .fc_id = 154, .cpu_id = 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA0_ECC_DERR" }, + { .fc_id = 155, .cpu_id = 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA1_ECC_DERR" }, + { .fc_id = 156, .cpu_id 
= 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA6_ECC_DERR" }, + { .fc_id = 157, .cpu_id = 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA7_ECC_DERR" }, + { .fc_id = 158, .cpu_id = 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA4_ECC_DERR" }, + { .fc_id = 159, .cpu_id = 49, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "EDMA5_ECC_DERR" }, + { .fc_id = 160, .cpu_id = 50, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "KDMA0_ECC_SERR" }, + { .fc_id = 161, .cpu_id = 51, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PDMA0_ECC_SERR" }, + { .fc_id = 162, .cpu_id = 51, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PDMA1_ECC_SERR" }, + { .fc_id = 163, .cpu_id = 52, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "KDMA0_ECC_DERR" }, + { .fc_id = 164, .cpu_id = 53, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PDMA0_ECC_DERR" }, + { .fc_id = 165, .cpu_id = 53, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PDMA1_ECC_DERR" }, + { .fc_id = 166, .cpu_id = 54, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "CPU_IF_ECC_SERR" }, + { .fc_id = 167, .cpu_id = 55, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "CPU_IF_ECC_DERR" }, + { .fc_id = 168, .cpu_id = 56, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PSOC_MEM_SERR" }, + { .fc_id = 169, .cpu_id = 57, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PSOC_MEM_DERR" }, + { .fc_id = 170, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM0_ECC_SERR" }, + { .fc_id = 171, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM1_ECC_SERR" }, + { .fc_id = 172, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM2_ECC_SERR" }, + { .fc_id = 173, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM3_ECC_SERR" }, + { .fc_id = 174, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM4_ECC_SERR" }, + { .fc_id = 175, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM5_ECC_SERR" }, + { .fc_id = 176, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM6_ECC_SERR" }, + { .fc_id = 177, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM7_ECC_SERR" }, + { .fc_id = 178, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM8_ECC_SERR" }, + { .fc_id = 179, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM9_ECC_SERR" }, + { .fc_id = 180, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM10_ECC_SERR" }, + { .fc_id = 181, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM11_ECC_SERR" }, + { .fc_id = 182, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM12_ECC_SERR" }, + { .fc_id = 183, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM13_ECC_SERR" }, + { .fc_id = 184, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM14_ECC_SERR" }, + { .fc_id = 185, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM15_ECC_SERR" }, + { .fc_id = 186, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name 
= "SRAM16_ECC_SERR" }, + { .fc_id = 187, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM17_ECC_SERR" }, + { .fc_id = 188, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM18_ECC_SERR" }, + { .fc_id = 189, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM19_ECC_SERR" }, + { .fc_id = 190, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM20_ECC_SERR" }, + { .fc_id = 191, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM21_ECC_SERR" }, + { .fc_id = 192, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM22_ECC_SERR" }, + { .fc_id = 193, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM23_ECC_SERR" }, + { .fc_id = 194, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM24_ECC_SERR" }, + { .fc_id = 195, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM25_ECC_SERR" }, + { .fc_id = 196, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM26_ECC_SERR" }, + { .fc_id = 197, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM27_ECC_SERR" }, + { .fc_id = 198, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM28_ECC_SERR" }, + { .fc_id = 199, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM29_ECC_SERR" }, + { .fc_id = 200, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM30_ECC_SERR" }, + { .fc_id = 201, .cpu_id = 58, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SRAM31_ECC_SERR" }, + { .fc_id = 202, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM0_ECC_DERR" }, + { .fc_id = 203, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM1_ECC_DERR" }, + { .fc_id = 204, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM2_ECC_DERR" }, + { .fc_id = 205, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM3_ECC_DERR" }, + { .fc_id = 206, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM4_ECC_DERR" }, + { .fc_id = 207, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM5_ECC_DERR" }, + { .fc_id = 208, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM6_ECC_DERR" }, + { .fc_id = 209, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM7_ECC_DERR" }, + { .fc_id = 210, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM8_ECC_DERR" }, + { .fc_id = 211, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM9_ECC_DERR" }, + { .fc_id = 212, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM10_ECC_DERR" }, + { .fc_id = 213, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM11_ECC_DERR" }, + { .fc_id = 214, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM12_ECC_DERR" }, + { .fc_id = 215, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM13_ECC_DERR" }, + { .fc_id = 216, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM14_ECC_DERR" }, + { .fc_id = 217, .cpu_id = 59, 
.valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM15_ECC_DERR" }, + { .fc_id = 218, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM16_ECC_DERR" }, + { .fc_id = 219, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM17_ECC_DERR" }, + { .fc_id = 220, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM18_ECC_DERR" }, + { .fc_id = 221, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM19_ECC_DERR" }, + { .fc_id = 222, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM20_ECC_DERR" }, + { .fc_id = 223, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM21_ECC_DERR" }, + { .fc_id = 224, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM22_ECC_DERR" }, + { .fc_id = 225, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM23_ECC_DERR" }, + { .fc_id = 226, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM24_ECC_DERR" }, + { .fc_id = 227, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM25_ECC_DERR" }, + { .fc_id = 228, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM26_ECC_DERR" }, + { .fc_id = 229, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM27_ECC_DERR" }, + { .fc_id = 230, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM28_ECC_DERR" }, + { .fc_id = 231, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM29_ECC_DERR" }, + { .fc_id = 232, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM30_ECC_DERR" }, + { .fc_id = 233, .cpu_id = 59, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SRAM31_ECC_DERR" }, + { .fc_id = 234, .cpu_id = 60, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "GIC500" }, + { .fc_id = 235, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_0_MC0_ECC_SERR" }, + { .fc_id = 236, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_1_MC0_ECC_SERR" }, + { .fc_id = 237, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_2_MC0_ECC_SERR" }, + { .fc_id = 238, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_3_MC0_ECC_SERR" }, + { .fc_id = 239, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_4_MC0_ECC_SERR" }, + { .fc_id = 240, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_5_MC0_ECC_SERR" }, + { .fc_id = 241, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_0_MC1_ECC_SERR" }, + { .fc_id = 242, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_1_MC1_ECC_SERR" }, + { .fc_id = 243, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_2_MC1_ECC_SERR" }, + { .fc_id = 244, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_3_MC1_ECC_SERR" }, + { .fc_id = 245, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_4_MC1_ECC_SERR" }, + { .fc_id = 246, .cpu_id = 61, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM_5_MC1_ECC_SERR" }, + { .fc_id = 247, .cpu_id = 62, .valid = 1, .msg 
= 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_0_MC0_ECC_DERR" }, + { .fc_id = 248, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_1_MC0_ECC_DERR" }, + { .fc_id = 249, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_2_MC0_ECC_DERR" }, + { .fc_id = 250, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_3_MC0_ECC_DERR" }, + { .fc_id = 251, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_4_MC0_ECC_DERR" }, + { .fc_id = 252, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_5_MC0_ECC_DERR" }, + { .fc_id = 253, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_0_MC1_ECC_DERR" }, + { .fc_id = 254, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_1_MC1_ECC_DERR" }, + { .fc_id = 255, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_2_MC1_ECC_DERR" }, + { .fc_id = 256, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_3_MC1_ECC_DERR" }, + { .fc_id = 257, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_4_MC1_ECC_DERR" }, + { .fc_id = 258, .cpu_id = 62, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_5_MC1_ECC_DERR" }, + { .fc_id = 259, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_0_ECC_SERR" }, + { .fc_id = 260, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_1_ECC_SERR" }, + { .fc_id = 261, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_2_ECC_SERR" }, + { .fc_id = 262, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_3_ECC_SERR" }, + { .fc_id = 263, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_8_ECC_SERR" }, + { .fc_id = 264, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_9_ECC_SERR" }, + { .fc_id = 265, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_10_ECC_SERR" }, + { .fc_id = 266, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_11_ECC_SERR" }, + { .fc_id = 267, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_7_ECC_SERR" }, + { .fc_id = 268, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_6_ECC_SERR" }, + { .fc_id = 269, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_5_ECC_SERR" }, + { .fc_id = 270, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_4_ECC_SERR" }, + { .fc_id = 271, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_15_ECC_SERR" }, + { .fc_id = 272, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_14_ECC_SERR" }, + { .fc_id = 273, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_13_ECC_SERR" }, + { .fc_id = 274, .cpu_id = 63, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU_12_ECC_SERR" }, + { .fc_id = 275, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_0_ECC_DERR" }, + { .fc_id = 276, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_1_ECC_DERR" }, + { .fc_id = 277, .cpu_id = 64, .valid = 1, .msg = 
0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_2_ECC_DERR" }, + { .fc_id = 278, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_3_ECC_DERR" }, + { .fc_id = 279, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_8_ECC_DERR" }, + { .fc_id = 280, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_9_ECC_DERR" }, + { .fc_id = 281, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_10_ECC_DERR" }, + { .fc_id = 282, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_11_ECC_DERR" }, + { .fc_id = 283, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_7_ECC_DERR" }, + { .fc_id = 284, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_6_ECC_DERR" }, + { .fc_id = 285, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_5_ECC_DERR" }, + { .fc_id = 286, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_4_ECC_DERR" }, + { .fc_id = 287, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_15_ECC_DERR" }, + { .fc_id = 288, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_14_ECC_DERR" }, + { .fc_id = 289, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_13_ECC_DERR" }, + { .fc_id = 290, .cpu_id = 64, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_12_ECC_DERR" }, + { .fc_id = 291, .cpu_id = 65, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PMMU_ECC_SERR" }, + { .fc_id = 292, .cpu_id = 66, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PMMU_ECC_DERR" }, + { .fc_id = 293, .cpu_id = 67, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 294, .cpu_id = 68, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 295, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC0_VCD_ECC_SERR" }, + { .fc_id = 296, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC1_VCD_ECC_SERR" }, + { .fc_id = 297, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC2_VCD_ECC_SERR" }, + { .fc_id = 298, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC3_VCD_ECC_SERR" }, + { .fc_id = 299, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC4_VCD_ECC_SERR" }, + { .fc_id = 300, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC5_VCD_ECC_SERR" }, + { .fc_id = 301, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC6_VCD_ECC_SERR" }, + { .fc_id = 302, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC7_VCD_ECC_SERR" }, + { .fc_id = 303, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC8_VCD_ECC_SERR" }, + { .fc_id = 304, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC9_VCD_ECC_SERR" }, + { .fc_id = 305, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC0_L2C_ECC_SERR" }, + { .fc_id = 306, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC1_L2C_ECC_SERR" }, + { .fc_id = 307, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = 
"DEC2_L2C_ECC_SERR" }, + { .fc_id = 308, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC3_L2C_ECC_SERR" }, + { .fc_id = 309, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC4_L2C_ECC_SERR" }, + { .fc_id = 310, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC5_L2C_ECC_SERR" }, + { .fc_id = 311, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC6_L2C_ECC_SERR" }, + { .fc_id = 312, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC7_L2C_ECC_SERR" }, + { .fc_id = 313, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC8_L2C_ECC_SERR" }, + { .fc_id = 314, .cpu_id = 69, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC9_L2C_ECC_SERR" }, + { .fc_id = 315, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC0_VCD_ECC_DERR" }, + { .fc_id = 316, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC1_VCD_ECC_DERR" }, + { .fc_id = 317, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC2_VCD_ECC_DERR" }, + { .fc_id = 318, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC3_VCD_ECC_DERR" }, + { .fc_id = 319, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC4_VCD_ECC_DERR" }, + { .fc_id = 320, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC5_VCD_ECC_DERR" }, + { .fc_id = 321, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC6_VCD_ECC_DERR" }, + { .fc_id = 322, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC7_VCD_ECC_DERR" }, + { .fc_id = 323, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC8_VCD_ECC_DERR" }, + { .fc_id = 324, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC9_VCD_ECC_DERR" }, + { .fc_id = 325, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC0_L2C_ECC_DERR" }, + { .fc_id = 326, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC1_L2C_ECC_DERR" }, + { .fc_id = 327, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC2_L2C_ECC_DERR" }, + { .fc_id = 328, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC3_L2C_ECC_DERR" }, + { .fc_id = 329, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC4_L2C_ECC_DERR" }, + { .fc_id = 330, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC5_L2C_ECC_DERR" }, + { .fc_id = 331, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC6_L2C_ECC_DERR" }, + { .fc_id = 332, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC7_L2C_ECC_DERR" }, + { .fc_id = 333, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC8_L2C_ECC_DERR" }, + { .fc_id = 334, .cpu_id = 70, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEC9_L2C_ECC_DERR" }, + { .fc_id = 335, .cpu_id = 71, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 336, .cpu_id = 72, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 337, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF0_ECC_SERR" }, 
+ { .fc_id = 338, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF1_ECC_SERR" }, + { .fc_id = 339, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF2_ECC_SERR" }, + { .fc_id = 340, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF3_ECC_SERR" }, + { .fc_id = 341, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF8_ECC_SERR" }, + { .fc_id = 342, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF9_ECC_SERR" }, + { .fc_id = 343, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF10_ECC_SERR" }, + { .fc_id = 344, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF11_ECC_SERR" }, + { .fc_id = 345, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF7_ECC_SERR" }, + { .fc_id = 346, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF6_ECC_SERR" }, + { .fc_id = 347, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF5_ECC_SERR" }, + { .fc_id = 348, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF4_ECC_SERR" }, + { .fc_id = 349, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF15_ECC_SERR" }, + { .fc_id = 350, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF14_ECC_SERR" }, + { .fc_id = 351, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF13_ECC_SERR" }, + { .fc_id = 352, .cpu_id = 73, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HIF12_ECC_SERR" }, + { .fc_id = 353, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF0_ECC_DERR" }, + { .fc_id = 354, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF1_ECC_DERR" }, + { .fc_id = 355, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF2_ECC_DERR" }, + { .fc_id = 356, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF3_ECC_DERR" }, + { .fc_id = 357, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF8_ECC_DERR" }, + { .fc_id = 358, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF9_ECC_DERR" }, + { .fc_id = 359, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF10_ECC_DERR" }, + { .fc_id = 360, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF11_ECC_DERR" }, + { .fc_id = 361, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF7_ECC_DERR" }, + { .fc_id = 362, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF6_ECC_DERR" }, + { .fc_id = 363, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF5_ECC_DERR" }, + { .fc_id = 364, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF4_ECC_DERR" }, + { .fc_id = 365, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF15_ECC_DERR" }, + { .fc_id = 366, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF14_ECC_DERR" }, + { .fc_id = 367, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF13_ECC_DERR" }, + { .fc_id = 368, .cpu_id = 74, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name 
= "HIF12_ECC_DERR" }, + { .fc_id = 369, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC0_ECC_SERR" }, + { .fc_id = 370, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC1_ECC_SERR" }, + { .fc_id = 371, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC2_ECC_SERR" }, + { .fc_id = 372, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC3_ECC_SERR" }, + { .fc_id = 373, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC4_ECC_SERR" }, + { .fc_id = 374, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC5_ECC_SERR" }, + { .fc_id = 375, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC6_ECC_SERR" }, + { .fc_id = 376, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC7_ECC_SERR" }, + { .fc_id = 377, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC8_ECC_SERR" }, + { .fc_id = 378, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC9_ECC_SERR" }, + { .fc_id = 379, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC10_ECC_SERR" }, + { .fc_id = 380, .cpu_id = 75, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC11_ECC_SERR" }, + { .fc_id = 381, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC0_ECC_DERR" }, + { .fc_id = 382, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC1_ECC_DERR" }, + { .fc_id = 383, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC2_ECC_DERR" }, + { .fc_id = 384, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC3_ECC_DERR" }, + { .fc_id = 385, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC4_ECC_DERR" }, + { .fc_id = 386, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC5_ECC_DERR" }, + { .fc_id = 387, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC6_ECC_DERR" }, + { .fc_id = 388, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC7_ECC_DERR" }, + { .fc_id = 389, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC8_ECC_DERR" }, + { .fc_id = 390, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC9_ECC_DERR" }, + { .fc_id = 391, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC10_ECC_DERR" }, + { .fc_id = 392, .cpu_id = 76, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC11_ECC_DERR" }, + { .fc_id = 393, .cpu_id = 77, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SM0_ECC_DERR" }, + { .fc_id = 394, .cpu_id = 77, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SM1_ECC_DERR" }, + { .fc_id = 395, .cpu_id = 77, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SM2_ECC_DERR" }, + { .fc_id = 396, .cpu_id = 77, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "SM3_ECC_DERR" }, + { .fc_id = 397, .cpu_id = 78, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SM0_ECC_SERR" }, + { .fc_id = 398, .cpu_id = 78, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SM1_ECC_SERR" }, + { .fc_id = 399, .cpu_id = 78, .valid = 1, .msg = 0, .reset = 
EVENT_RESET_TYPE_NONE, + .name = "SM2_ECC_SERR" }, + { .fc_id = 400, .cpu_id = 78, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SM3_ECC_SERR" }, + { .fc_id = 401, .cpu_id = 79, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "XBAR0_ECC_SERR" }, + { .fc_id = 402, .cpu_id = 79, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "XBAR1_ECC_SERR" }, + { .fc_id = 403, .cpu_id = 79, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "XBAR2_ECC_SERR" }, + { .fc_id = 404, .cpu_id = 79, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "XBAR3_ECC_SERR" }, + { .fc_id = 405, .cpu_id = 80, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "XBAR0_ECC_DERR" }, + { .fc_id = 406, .cpu_id = 80, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "XBAR1_ECC_DERR" }, + { .fc_id = 407, .cpu_id = 80, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "XBAR2_ECC_DERR" }, + { .fc_id = 408, .cpu_id = 80, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "XBAR3_ECC_DERR" }, + { .fc_id = 409, .cpu_id = 81, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "ARC0_ECC_SERR" }, + { .fc_id = 410, .cpu_id = 82, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "ARC0_ECC_DERR" }, + { .fc_id = 411, .cpu_id = 83, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 412, .cpu_id = 84, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PCIE_ADDR_DEC_ERR" }, + { .fc_id = 413, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC0_AXI_ERR_RSP" }, + { .fc_id = 414, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC1_AXI_ERR_RSP" }, + { .fc_id = 415, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC2_AXI_ERR_RSP" }, + { .fc_id = 416, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC3_AXI_ERR_RSP" }, + { .fc_id = 417, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC4_AXI_ERR_RSP" }, + { .fc_id = 418, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC5_AXI_ERR_RSP" }, + { .fc_id = 419, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC6_AXI_ERR_RSP" }, + { .fc_id = 420, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC7_AXI_ERR_RSP" }, + { .fc_id = 421, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC8_AXI_ERR_RSP" }, + { .fc_id = 422, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC9_AXI_ERR_RSP" }, + { .fc_id = 423, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC10_AXI_ERR_RSP" }, + { .fc_id = 424, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC11_AXI_ERR_RSP" }, + { .fc_id = 425, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC12_AXI_ERR_RSP" }, + { .fc_id = 426, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC13_AXI_ERR_RSP" }, + { .fc_id = 427, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC14_AXI_ERR_RSP" }, + { .fc_id = 428, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC15_AXI_ERR_RSP" }, + { .fc_id = 429, .cpu_id = 85, .valid = 1, .msg = 0, .reset = 
EVENT_RESET_TYPE_COMPUTE, + .name = "TPC16_AXI_ERR_RSP" }, + { .fc_id = 430, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC17_AXI_ERR_RSP" }, + { .fc_id = 431, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC18_AXI_ERR_RSP" }, + { .fc_id = 432, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC19_AXI_ERR_RSP" }, + { .fc_id = 433, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC20_AXI_ERR_RSP" }, + { .fc_id = 434, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC21_AXI_ERR_RSP" }, + { .fc_id = 435, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC22_AXI_ERR_RSP" }, + { .fc_id = 436, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC23_AXI_ERR_RSP" }, + { .fc_id = 437, .cpu_id = 85, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC24_AXI_ERR_RSP" }, + { .fc_id = 438, .cpu_id = 86, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "AXI_ECC" }, + { .fc_id = 439, .cpu_id = 87, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "L2_RAM_ECC" }, + { .fc_id = 440, .cpu_id = 88, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_SBTE0_AXI_ERR_RSP" }, + { .fc_id = 441, .cpu_id = 88, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_SBTE1_AXI_ERR_RSP" }, + { .fc_id = 442, .cpu_id = 88, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_SBTE2_AXI_ERR_RSP" }, + { .fc_id = 443, .cpu_id = 88, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_SBTE3_AXI_ERR_RSP" }, + { .fc_id = 444, .cpu_id = 88, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_SBTE4_AXI_ERR_RSP" }, + { .fc_id = 445, .cpu_id = 88, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_CTRL_AXI_ERROR_RESPONSE" }, + { .fc_id = 446, .cpu_id = 88, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_QMAN_SW_ERROR" }, + { .fc_id = 447, .cpu_id = 89, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_SBTE0_AXI_ERR_RSP" }, + { .fc_id = 448, .cpu_id = 89, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_SBTE1_AXI_ERR_RSP" }, + { .fc_id = 449, .cpu_id = 89, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_SBTE2_AXI_ERR_RSP" }, + { .fc_id = 450, .cpu_id = 89, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_SBTE3_AXI_ERR_RSP" }, + { .fc_id = 451, .cpu_id = 89, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_SBTE4_AXI_ERR_RSP" }, + { .fc_id = 452, .cpu_id = 89, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_CTRL_AXI_ERROR_RESPONSE" }, + { .fc_id = 453, .cpu_id = 89, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_QMAN_SW_ERROR" }, + { .fc_id = 454, .cpu_id = 90, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_SBTE0_AXI_ERR_RSP" }, + { .fc_id = 455, .cpu_id = 90, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_SBTE1_AXI_ERR_RSP" }, + { .fc_id = 456, .cpu_id = 90, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_SBTE2_AXI_ERR_RSP" }, + { .fc_id = 457, .cpu_id = 90, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_SBTE3_AXI_ERR_RSP" }, + { 
.fc_id = 458, .cpu_id = 90, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_SBTE4_AXI_ERR_RSP" }, + { .fc_id = 459, .cpu_id = 90, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_CTRL_AXI_ERROR_RESPONSE" }, + { .fc_id = 460, .cpu_id = 90, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_QMAN_SW_ERROR" }, + { .fc_id = 461, .cpu_id = 91, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_SBTE0_AXI_ERR_RSP" }, + { .fc_id = 462, .cpu_id = 91, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_SBTE1_AXI_ERR_RSP" }, + { .fc_id = 463, .cpu_id = 91, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_SBTE2_AXI_ERR_RSP" }, + { .fc_id = 464, .cpu_id = 91, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_SBTE3_AXI_ERR_RSP" }, + { .fc_id = 465, .cpu_id = 91, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_SBTE4_AXI_ERR_RSP" }, + { .fc_id = 466, .cpu_id = 91, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_CTRL_AXI_ERROR_RESPONSE" }, + { .fc_id = 467, .cpu_id = 91, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_QMAN_SW_ERROR" }, + { .fc_id = 468, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PSOC_MME_PLL_LOCK_ERR" }, + { .fc_id = 469, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PSOC_CPU_PLL_LOCK_ERR" }, + { .fc_id = 470, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE3_TPC_PLL_LOCK_ERR" }, + { .fc_id = 471, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE3_NIC_PLL_LOCK_ERR" }, + { .fc_id = 472, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE3_XBAR_MMU_PLL_LOCK_ERR" }, + { .fc_id = 473, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE3_XBAR_DMA_PLL_LOCK_ERR" }, + { .fc_id = 474, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE3_XBAR_IF_PLL_LOCK_ERR" }, + { .fc_id = 475, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE3_XBAR_BANK_PLL_LOCK_ERR" }, + { .fc_id = 476, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_XBAR_MMU_PLL_LOCK_ERR" }, + { .fc_id = 477, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_XBAR_DMA_PLL_LOCK_ERR" }, + { .fc_id = 478, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_XBAR_IF_PLL_LOCK_ERR" }, + { .fc_id = 479, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_XBAR_MESH_PLL_LOCK_ERR" }, + { .fc_id = 480, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_TPC_PLL_LOCK_ERR" }, + { .fc_id = 481, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_NIC_PLL_LOCK_ERR" }, + { .fc_id = 482, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PMMU_MME_PLL_LOCK_ERR" }, + { .fc_id = 483, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE0_TPC_PLL_LOCK_ERR" }, + { .fc_id = 484, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE0_PCI_PLL_LOCK_ERR" }, + { .fc_id = 485, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = 
"DCORE0_XBAR_MMU_PLL_LOCK_ERR" }, + { .fc_id = 486, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE0_XBAR_DMA_PLL_LOCK_ERR" }, + { .fc_id = 487, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE0_XBAR_IF_PLL_LOCK_ERR" }, + { .fc_id = 488, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE0_XBAR_MESH_PLL_LOCK_ERR" }, + { .fc_id = 489, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE2_XBAR_MMU_PLL_LOCK_ERR" }, + { .fc_id = 490, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE2_XBAR_DMA_PLL_LOCK_ERR" }, + { .fc_id = 491, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE2_XBAR_IF_PLL_LOCK_ERR" }, + { .fc_id = 492, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE2_XBAR_BANK_PLL_LOCK_ERR" }, + { .fc_id = 493, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE2_TPC_PLL_LOCK_ERR" }, + { .fc_id = 494, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PSOC_VID_PLL_LOCK_ERR" }, + { .fc_id = 495, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PMMU_VID_PLL_LOCK_ERR" }, + { .fc_id = 496, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE3_HBM_PLL_LOCK_ERR" }, + { .fc_id = 497, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_XBAR_HBM_PLL_LOCK_ERR" }, + { .fc_id = 498, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE1_HBM_PLL_LOCK_ERR" }, + { .fc_id = 499, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE0_HBM_PLL_LOCK_ERR" }, + { .fc_id = 500, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE2_XBAR_HBM_PLL_LOCK_ERR" }, + { .fc_id = 501, .cpu_id = 92, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DCORE2_HBM_PLL_LOCK_ERR" }, + { .fc_id = 502, .cpu_id = 93, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "CPU_AXI_ERR_RSP" }, + { .fc_id = 503, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_0_AXI_ERR_RSP" }, + { .fc_id = 504, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_1_AXI_ERR_RSP" }, + { .fc_id = 505, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_2_AXI_ERR_RSP" }, + { .fc_id = 506, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_3_AXI_ERR_RSP" }, + { .fc_id = 507, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_8_AXI_ERR_RSP" }, + { .fc_id = 508, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_9_AXI_ERR_RSP" }, + { .fc_id = 509, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_10_AXI_ERR_RSP" }, + { .fc_id = 510, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_11_AXI_ERR_RSP" }, + { .fc_id = 511, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_7_AXI_ERR_RSP" }, + { .fc_id = 512, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_6_AXI_ERR_RSP" }, + { .fc_id = 513, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_5_AXI_ERR_RSP" }, + { .fc_id = 514, .cpu_id = 94, .valid = 
1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_4_AXI_ERR_RSP" }, + { .fc_id = 515, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_15_AXI_ERR_RSP" }, + { .fc_id = 516, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_14_AXI_ERR_RSP" }, + { .fc_id = 517, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_13_AXI_ERR_RSP" }, + { .fc_id = 518, .cpu_id = 94, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU_12_AXI_ERR_RSP" }, + { .fc_id = 519, .cpu_id = 95, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PMMU_FATAL" }, + { .fc_id = 520, .cpu_id = 96, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PMMU_AXI_ERR_RSP" }, + { .fc_id = 521, .cpu_id = 97, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM0_ALARM_A" }, + { .fc_id = 522, .cpu_id = 98, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM0_ALARM_B" }, + { .fc_id = 523, .cpu_id = 99, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM1_ALARM_A" }, + { .fc_id = 524, .cpu_id = 100, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM1_ALARM_B" }, + { .fc_id = 525, .cpu_id = 101, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM2_ALARM_A" }, + { .fc_id = 526, .cpu_id = 102, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM2_ALARM_B" }, + { .fc_id = 527, .cpu_id = 103, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM3_ALARM_A" }, + { .fc_id = 528, .cpu_id = 104, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "VM3_ALARM_B" }, + { .fc_id = 529, .cpu_id = 105, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PSOC_AXI_ERR_RSP" }, + { .fc_id = 530, .cpu_id = 106, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PSOC_PRSTN_FALL" }, + { .fc_id = 531, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 532, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 533, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 534, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 535, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 536, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 537, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 538, .cpu_id = 107, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 539, .cpu_id = 108, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "KDMA_CH0_AXI_ERR_RSP" }, + { .fc_id = 540, .cpu_id = 109, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "PDMA_CH0_AXI_ERR_RSP" }, + { .fc_id = 541, .cpu_id = 109, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "PDMA_CH1_AXI_ERR_RSP" }, + { .fc_id = 542, .cpu_id = 110, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_CATTRIP_0" }, + { .fc_id = 543, .cpu_id = 111, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_CATTRIP_1" }, + { .fc_id = 544, .cpu_id = 112, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_CATTRIP_2" }, + { .fc_id = 545, .cpu_id = 113, .valid = 1, .msg = 0, .reset = 
EVENT_RESET_TYPE_HARD, + .name = "HBM_CATTRIP_3" }, + { .fc_id = 546, .cpu_id = 114, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_CATTRIP_4" }, + { .fc_id = 547, .cpu_id = 115, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM_CATTRIP_5" }, + { .fc_id = 548, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM0_MC0_SEI_SEVERE" }, + { .fc_id = 549, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM0_MC0_SEI_NON_SEVERE" }, + { .fc_id = 550, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM0_MC1_SEI_SEVERE" }, + { .fc_id = 551, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM0_MC1_SEI_NON_SEVERE" }, + { .fc_id = 552, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM1_MC0_SEI_SEVERE" }, + { .fc_id = 553, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM1_MC0_SEI_NON_SEVERE" }, + { .fc_id = 554, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM1_MC1_SEI_SEVERE" }, + { .fc_id = 555, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM1_MC1_SEI_NON_SEVERE" }, + { .fc_id = 556, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM2_MC0_SEI_SEVERE" }, + { .fc_id = 557, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM2_MC0_SEI_NON_SEVERE" }, + { .fc_id = 558, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM2_MC1_SEI_SEVERE" }, + { .fc_id = 559, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM2_MC1_SEI_NON_SEVERE" }, + { .fc_id = 560, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM3_MC0_SEI_SEVERE" }, + { .fc_id = 561, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM3_MC0_SEI_NON_SEVERE" }, + { .fc_id = 562, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM3_MC1_SEI_SEVERE" }, + { .fc_id = 563, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM3_MC1_SEI_NON_SEVERE" }, + { .fc_id = 564, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM4_MC0_SEI_SEVERE" }, + { .fc_id = 565, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM4_MC0_SEI_NON_SEVERE" }, + { .fc_id = 566, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM4_MC1_SEI_SEVERE" }, + { .fc_id = 567, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM4_MC1_SEI_NON_SEVERE" }, + { .fc_id = 568, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM5_MC0_SEI_SEVERE" }, + { .fc_id = 569, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM5_MC0_SEI_NON_SEVERE" }, + { .fc_id = 570, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HBM5_MC1_SEI_SEVERE" }, + { .fc_id = 571, .cpu_id = 116, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM5_MC1_SEI_NON_SEVERE" }, + { .fc_id = 572, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC0_AXI_ERR_RSPONSE" }, + { .fc_id = 573, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC1_AXI_ERR_RSPONSE" }, + { .fc_id = 574, .cpu_id = 117, .valid = 
1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC2_AXI_ERR_RSPONSE" }, + { .fc_id = 575, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC3_AXI_ERR_RSPONSE" }, + { .fc_id = 576, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC4_AXI_ERR_RSPONSE" }, + { .fc_id = 577, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC5_AXI_ERR_RSPONSE" }, + { .fc_id = 578, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC6_AXI_ERR_RSPONSE" }, + { .fc_id = 579, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC7_AXI_ERR_RSPONSE" }, + { .fc_id = 580, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC8_AXI_ERR_RSPONSE" }, + { .fc_id = 581, .cpu_id = 117, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC9_AXI_ERR_RSPONSE" }, + { .fc_id = 582, .cpu_id = 118, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 583, .cpu_id = 119, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 584, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF0_FATAL" }, + { .fc_id = 585, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF1_FATAL" }, + { .fc_id = 586, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF2_FATAL" }, + { .fc_id = 587, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF3_FATAL" }, + { .fc_id = 588, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF8_FATAL" }, + { .fc_id = 589, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF9_FATAL" }, + { .fc_id = 590, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF10_FATAL" }, + { .fc_id = 591, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF11_FATAL" }, + { .fc_id = 592, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF7_FATAL" }, + { .fc_id = 593, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF6_FATAL" }, + { .fc_id = 594, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF5_FATAL" }, + { .fc_id = 595, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF4_FATAL" }, + { .fc_id = 596, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF15_FATAL" }, + { .fc_id = 597, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF14_FATAL" }, + { .fc_id = 598, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF13_FATAL" }, + { .fc_id = 599, .cpu_id = 120, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HIF12_FATAL" }, + { .fc_id = 600, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC0_AXI_ERROR_RESPONSE" }, + { .fc_id = 601, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC1_AXI_ERROR_RESPONSE" }, + { .fc_id = 602, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC2_AXI_ERROR_RESPONSE" }, + { .fc_id = 603, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC3_AXI_ERROR_RESPONSE" }, + { .fc_id = 604, .cpu_id = 121, .valid = 1, .msg = 0, .reset = 
EVENT_RESET_TYPE_HARD, + .name = "NIC4_AXI_ERROR_RESPONSE" }, + { .fc_id = 605, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC5_AXI_ERROR_RESPONSE" }, + { .fc_id = 606, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC6_AXI_ERROR_RESPONSE" }, + { .fc_id = 607, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC7_AXI_ERROR_RESPONSE" }, + { .fc_id = 608, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC8_AXI_ERROR_RESPONSE" }, + { .fc_id = 609, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC9_AXI_ERROR_RESPONSE" }, + { .fc_id = 610, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC10_AXI_ERROR_RESPONSE" }, + { .fc_id = 611, .cpu_id = 121, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC11_AXI_ERROR_RESPONSE" }, + { .fc_id = 612, .cpu_id = 122, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "SM0_AXI_ERROR_RESPONSE" }, + { .fc_id = 613, .cpu_id = 122, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "SM1_AXI_ERROR_RESPONSE" }, + { .fc_id = 614, .cpu_id = 122, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "SM2_AXI_ERROR_RESPONSE" }, + { .fc_id = 615, .cpu_id = 122, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "SM3_AXI_ERROR_RESPONSE" }, + { .fc_id = 616, .cpu_id = 123, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "ARC_AXI_ERROR_RESPONSE" }, + { .fc_id = 617, .cpu_id = 124, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 618, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 619, .cpu_id = 125, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PCIE_FLR_REQUESTED" }, + { .fc_id = 620, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 621, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 622, .cpu_id = 125, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PCIE_APB_TIMEOUT" }, + { .fc_id = 623, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 624, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 625, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 626, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 627, .cpu_id = 125, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PCIE_FATAL_ERR" }, + { .fc_id = 628, .cpu_id = 125, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 629, .cpu_id = 126, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 630, .cpu_id = 127, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 631, .cpu_id = 128, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PCIE_P2P_MSIX" }, + { .fc_id = 632, .cpu_id = 129, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PCIE_DRAIN_COMPLETE" }, + { .fc_id = 633, .cpu_id = 130, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC0_BMON_SPMU" }, + { .fc_id = 634, .cpu_id = 131, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC0_KERNEL_ERR" }, + 
{ .fc_id = 635, .cpu_id = 132, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC1_BMON_SPMU" }, + { .fc_id = 636, .cpu_id = 133, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC1_KERNEL_ERR" }, + { .fc_id = 637, .cpu_id = 134, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC2_BMON_SPMU" }, + { .fc_id = 638, .cpu_id = 135, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC2_KERNEL_ERR" }, + { .fc_id = 639, .cpu_id = 136, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC3_BMON_SPMU" }, + { .fc_id = 640, .cpu_id = 137, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC3_KERNEL_ERR" }, + { .fc_id = 641, .cpu_id = 138, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC4_BMON_SPMU" }, + { .fc_id = 642, .cpu_id = 139, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC4_KERNEL_ERR" }, + { .fc_id = 643, .cpu_id = 140, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC5_BMON_SPMU" }, + { .fc_id = 644, .cpu_id = 141, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC5_KERNEL_ERR" }, + { .fc_id = 645, .cpu_id = 150, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC6_BMON_SPMU" }, + { .fc_id = 646, .cpu_id = 151, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC6_KERNEL_ERR" }, + { .fc_id = 647, .cpu_id = 152, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC7_BMON_SPMU" }, + { .fc_id = 648, .cpu_id = 153, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC7_KERNEL_ERR" }, + { .fc_id = 649, .cpu_id = 146, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC8_BMON_SPMU" }, + { .fc_id = 650, .cpu_id = 147, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC8_KERNEL_ERR" }, + { .fc_id = 651, .cpu_id = 148, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC9_BMON_SPMU" }, + { .fc_id = 652, .cpu_id = 149, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC9_KERNEL_ERR" }, + { .fc_id = 653, .cpu_id = 142, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC10_BMON_SPMU" }, + { .fc_id = 654, .cpu_id = 143, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC10_KERNEL_ERR" }, + { .fc_id = 655, .cpu_id = 144, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC11_BMON_SPMU" }, + { .fc_id = 656, .cpu_id = 145, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC11_KERNEL_ERR" }, + { .fc_id = 657, .cpu_id = 162, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC12_BMON_SPMU" }, + { .fc_id = 658, .cpu_id = 163, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC12_KERNEL_ERR" }, + { .fc_id = 659, .cpu_id = 164, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC13_BMON_SPMU" }, + { .fc_id = 660, .cpu_id = 165, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC13_KERNEL_ERR" }, + { .fc_id = 661, .cpu_id = 158, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC14_BMON_SPMU" }, + { .fc_id = 662, .cpu_id = 159, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC14_KERNEL_ERR" }, + { .fc_id = 663, .cpu_id = 160, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC15_BMON_SPMU" }, + { .fc_id = 664, .cpu_id = 161, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + 
.name = "TPC15_KERNEL_ERR" }, + { .fc_id = 665, .cpu_id = 154, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC16_BMON_SPMU" }, + { .fc_id = 666, .cpu_id = 155, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC16_KERNEL_ERR" }, + { .fc_id = 667, .cpu_id = 156, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC17_BMON_SPMU" }, + { .fc_id = 668, .cpu_id = 157, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC17_KERNEL_ERR" }, + { .fc_id = 669, .cpu_id = 166, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC18_BMON_SPMU" }, + { .fc_id = 670, .cpu_id = 167, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC18_KERNEL_ERR" }, + { .fc_id = 671, .cpu_id = 168, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC19_BMON_SPMU" }, + { .fc_id = 672, .cpu_id = 169, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC19_KERNEL_ERR" }, + { .fc_id = 673, .cpu_id = 170, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC20_BMON_SPMU" }, + { .fc_id = 674, .cpu_id = 171, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC20_KERNEL_ERR" }, + { .fc_id = 675, .cpu_id = 172, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC21_BMON_SPMU" }, + { .fc_id = 676, .cpu_id = 173, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC21_KERNEL_ERR" }, + { .fc_id = 677, .cpu_id = 174, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC22_BMON_SPMU" }, + { .fc_id = 678, .cpu_id = 175, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC22_KERNEL_ERR" }, + { .fc_id = 679, .cpu_id = 176, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC23_BMON_SPMU" }, + { .fc_id = 680, .cpu_id = 177, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC23_KERNEL_ERR" }, + { .fc_id = 681, .cpu_id = 178, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "TPC24_BMON_SPMU" }, + { .fc_id = 682, .cpu_id = 179, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC24_KERNEL_ERR" }, + { .fc_id = 683, .cpu_id = 180, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 684, .cpu_id = 180, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 685, .cpu_id = 180, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 686, .cpu_id = 180, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 687, .cpu_id = 180, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 688, .cpu_id = 180, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_CTRL_BMON_SPMU" }, + { .fc_id = 689, .cpu_id = 180, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_SBTE_BMON_SPMU" }, + { .fc_id = 690, .cpu_id = 180, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME0_WAP_BMON_SPMU" }, + { .fc_id = 691, .cpu_id = 180, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_WAP_SOURCE_RESULT_INVALID" }, + { .fc_id = 692, .cpu_id = 181, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 693, .cpu_id = 181, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 694, .cpu_id = 181, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 695, .cpu_id = 181, .valid 
= 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 696, .cpu_id = 181, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 697, .cpu_id = 181, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_CTRL_BMON_SPMU" }, + { .fc_id = 698, .cpu_id = 181, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_SBTE_BMON_SPMU" }, + { .fc_id = 699, .cpu_id = 181, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME1_WAP_BMON_SPMU" }, + { .fc_id = 700, .cpu_id = 181, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_WAP_SOURCE_RESULT_INVALID" }, + { .fc_id = 701, .cpu_id = 182, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 702, .cpu_id = 182, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 703, .cpu_id = 182, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 704, .cpu_id = 182, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 705, .cpu_id = 182, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 706, .cpu_id = 182, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_CTRL_BMON_SPMU" }, + { .fc_id = 707, .cpu_id = 182, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_SBTE_BMON_SPMU" }, + { .fc_id = 708, .cpu_id = 182, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME2_WAP_BMON_SPMU" }, + { .fc_id = 709, .cpu_id = 182, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_WAP_SOURCE_RESULT_INVALID" }, + { .fc_id = 710, .cpu_id = 183, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 711, .cpu_id = 183, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 712, .cpu_id = 183, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 713, .cpu_id = 183, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 714, .cpu_id = 183, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 715, .cpu_id = 183, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_CTRL_BMON_SPMU" }, + { .fc_id = 716, .cpu_id = 183, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_SBTE_BMON_SPMU" }, + { .fc_id = 717, .cpu_id = 183, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "MME3_WAP_BMON_SPMU" }, + { .fc_id = 718, .cpu_id = 183, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_WAP_SOURCE_RESULT_INVALID" }, + { .fc_id = 719, .cpu_id = 184, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 720, .cpu_id = 184, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU0_PAGE_FAULT_OR_WR_PERM" }, + { .fc_id = 721, .cpu_id = 184, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU0_SECURITY_ERROR" }, + { .fc_id = 722, .cpu_id = 185, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 723, .cpu_id = 185, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU1_PAGE_FAULT_WR_PERM" }, + { .fc_id = 724, .cpu_id = 185, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU1_SECURITY_ERROR" }, + { .fc_id = 725, .cpu_id = 186, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 726, .cpu_id = 186, .valid = 
1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU2_PAGE_FAULT_WR_PERM" }, + { .fc_id = 727, .cpu_id = 186, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU2_SECURITY_ERROR" }, + { .fc_id = 728, .cpu_id = 187, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 729, .cpu_id = 187, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU3_PAGE_FAULT_WR_PERM" }, + { .fc_id = 730, .cpu_id = 187, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU3_SECURITY_ERROR" }, + { .fc_id = 731, .cpu_id = 188, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 732, .cpu_id = 188, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU8_PAGE_FAULT_WR_PERM" }, + { .fc_id = 733, .cpu_id = 188, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU8_SECURITY_ERROR" }, + { .fc_id = 734, .cpu_id = 189, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 735, .cpu_id = 189, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU9_PAGE_FAULT_WR_PERM" }, + { .fc_id = 736, .cpu_id = 189, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU9_SECURITY_ERROR" }, + { .fc_id = 737, .cpu_id = 190, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 738, .cpu_id = 190, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU10_PAGE_FAULT_WR_PERM" }, + { .fc_id = 739, .cpu_id = 190, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU10_SECURITY_ERROR" }, + { .fc_id = 740, .cpu_id = 191, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 741, .cpu_id = 191, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU11_PAGE_FAULT_WR_PERM" }, + { .fc_id = 742, .cpu_id = 191, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU11_SECURITY_ERROR" }, + { .fc_id = 743, .cpu_id = 192, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 744, .cpu_id = 192, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU7_PAGE_FAULT_WR_PERM" }, + { .fc_id = 745, .cpu_id = 192, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU7_SECURITY_ERROR" }, + { .fc_id = 746, .cpu_id = 193, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 747, .cpu_id = 193, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU6_PAGE_FAULT_WR_PERM" }, + { .fc_id = 748, .cpu_id = 193, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU6_SECURITY_ERROR" }, + { .fc_id = 749, .cpu_id = 194, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 750, .cpu_id = 194, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU5_PAGE_FAULT_WR_PERM" }, + { .fc_id = 751, .cpu_id = 194, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU5_SECURITY_ERROR" }, + { .fc_id = 752, .cpu_id = 195, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 753, .cpu_id = 195, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU4_PAGE_FAULT_WR_PERM" }, + { .fc_id = 754, .cpu_id = 195, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU4_SECURITY_ERROR" }, + { .fc_id = 755, .cpu_id = 196, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 756, .cpu_id = 196, .valid = 1, .msg = 0, .reset 
= EVENT_RESET_TYPE_NONE, + .name = "HMMU15_PAGE_FAULT_WR_PERM" }, + { .fc_id = 757, .cpu_id = 196, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU15_SECURITY_ERROR" }, + { .fc_id = 758, .cpu_id = 197, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 759, .cpu_id = 197, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU14_PAGE_FAULT_WR_PERM" }, + { .fc_id = 760, .cpu_id = 197, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU14_SECURITY_ERROR" }, + { .fc_id = 761, .cpu_id = 198, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 762, .cpu_id = 198, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU13_PAGE_FAULT_WR_PERM" }, + { .fc_id = 763, .cpu_id = 198, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU13_SECURITY_ERROR" }, + { .fc_id = 764, .cpu_id = 199, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 765, .cpu_id = 199, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HMMU12_PAGE_FAULT_WR_PERM" }, + { .fc_id = 766, .cpu_id = 199, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "HMMU12_SECURITY_ERROR" }, + { .fc_id = 767, .cpu_id = 200, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 768, .cpu_id = 201, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PMMU0_PAGE_FAULT_WR_PERM" }, + { .fc_id = 769, .cpu_id = 202, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "PMMU0_SECURITY_ERROR" }, + { .fc_id = 770, .cpu_id = 203, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA2_BM_SPMU" }, + { .fc_id = 771, .cpu_id = 204, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 772, .cpu_id = 205, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA3_BM_SPMU" }, + { .fc_id = 773, .cpu_id = 206, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 774, .cpu_id = 207, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA0_BM_SPMU" }, + { .fc_id = 775, .cpu_id = 208, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 776, .cpu_id = 209, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA1_BM_SPMU" }, + { .fc_id = 777, .cpu_id = 210, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 778, .cpu_id = 211, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA6_BM_SPMU" }, + { .fc_id = 779, .cpu_id = 212, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 780, .cpu_id = 213, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA7_BM_SPMU" }, + { .fc_id = 781, .cpu_id = 214, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 782, .cpu_id = 215, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA4_BM_SPMU" }, + { .fc_id = 783, .cpu_id = 216, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 784, .cpu_id = 217, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "EDMA5_BM_SPMU" }, + { .fc_id = 785, .cpu_id = 218, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 786, .cpu_id = 219, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "KDMA_BM_SPMU" }, + { .fc_id = 787, .cpu_id = 220, .valid = 0, .msg = 0, .reset = 
EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 788, .cpu_id = 221, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PDMA0_BM_SPMU" }, + { .fc_id = 789, .cpu_id = 222, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "PDMA1_BM_SPMU" }, + { .fc_id = 790, .cpu_id = 223, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM0_MC0_SPI" }, + { .fc_id = 791, .cpu_id = 224, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM0_MC1_SPI" }, + { .fc_id = 792, .cpu_id = 225, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM1_MC0_SPI" }, + { .fc_id = 793, .cpu_id = 226, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM1_MC1_SPI" }, + { .fc_id = 794, .cpu_id = 227, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM2_MC0_SPI" }, + { .fc_id = 795, .cpu_id = 228, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM2_MC1_SPI" }, + { .fc_id = 796, .cpu_id = 229, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM3_MC0_SPI" }, + { .fc_id = 797, .cpu_id = 230, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM3_MC1_SPI" }, + { .fc_id = 798, .cpu_id = 231, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM4_MC0_SPI" }, + { .fc_id = 799, .cpu_id = 232, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM4_MC1_SPI" }, + { .fc_id = 800, .cpu_id = 233, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM5_MC0_SPI" }, + { .fc_id = 801, .cpu_id = 234, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "HBM5_MC1_SPI" }, + { .fc_id = 802, .cpu_id = 235, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 803, .cpu_id = 236, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 804, .cpu_id = 237, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 805, .cpu_id = 238, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 806, .cpu_id = 239, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 807, .cpu_id = 240, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 808, .cpu_id = 241, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 809, .cpu_id = 242, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 810, .cpu_id = 243, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 811, .cpu_id = 244, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 812, .cpu_id = 245, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 813, .cpu_id = 246, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 814, .cpu_id = 247, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 815, .cpu_id = 248, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 816, .cpu_id = 249, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 817, .cpu_id = 250, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 818, .cpu_id = 251, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 819, .cpu_id = 252, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 820, .cpu_id = 253, .valid = 0, .msg 
= 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 821, .cpu_id = 254, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 822, .cpu_id = 255, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 823, .cpu_id = 256, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 824, .cpu_id = 257, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 825, .cpu_id = 258, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 826, .cpu_id = 259, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 827, .cpu_id = 260, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 828, .cpu_id = 261, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 829, .cpu_id = 262, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 830, .cpu_id = 263, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 831, .cpu_id = 264, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 832, .cpu_id = 265, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 833, .cpu_id = 266, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 834, .cpu_id = 267, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 835, .cpu_id = 268, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 836, .cpu_id = 269, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 837, .cpu_id = 270, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 838, .cpu_id = 271, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 839, .cpu_id = 272, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 840, .cpu_id = 273, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 841, .cpu_id = 274, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 842, .cpu_id = 275, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 843, .cpu_id = 276, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 844, .cpu_id = 277, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 845, .cpu_id = 278, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 846, .cpu_id = 279, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 847, .cpu_id = 280, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 848, .cpu_id = 281, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 849, .cpu_id = 282, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 850, .cpu_id = 283, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 851, .cpu_id = 284, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 852, .cpu_id = 285, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 853, .cpu_id = 286, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 854, .cpu_id = 287, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "" }, 
+ { .fc_id = 855, .cpu_id = 288, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "" }, + { .fc_id = 856, .cpu_id = 289, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 857, .cpu_id = 290, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 858, .cpu_id = 291, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 859, .cpu_id = 292, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 860, .cpu_id = 293, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 861, .cpu_id = 294, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 862, .cpu_id = 295, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 863, .cpu_id = 296, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 864, .cpu_id = 297, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 865, .cpu_id = 298, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 866, .cpu_id = 299, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 867, .cpu_id = 300, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 868, .cpu_id = 301, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 869, .cpu_id = 302, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 870, .cpu_id = 303, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 871, .cpu_id = 304, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "RPM_ERROR_OR_DRAIN" }, + { .fc_id = 872, .cpu_id = 305, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 873, .cpu_id = 306, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 874, .cpu_id = 307, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 875, .cpu_id = 308, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "RAZWI_OR_PID_MIN_MAX_INTERRUPT" }, + { .fc_id = 876, .cpu_id = 309, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 877, .cpu_id = 310, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 878, .cpu_id = 311, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 879, .cpu_id = 312, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "" }, + { .fc_id = 880, .cpu_id = 313, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 881, .cpu_id = 314, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 882, .cpu_id = 315, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 883, .cpu_id = 316, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 884, .cpu_id = 317, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 885, .cpu_id = 318, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 886, .cpu_id = 319, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 887, .cpu_id = 320, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 888, .cpu_id = 321, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { 
.fc_id = 889, .cpu_id = 322, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 890, .cpu_id = 323, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 891, .cpu_id = 324, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 892, .cpu_id = 325, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 893, .cpu_id = 326, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 894, .cpu_id = 327, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 895, .cpu_id = 328, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 896, .cpu_id = 329, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC0_SPI" }, + { .fc_id = 897, .cpu_id = 329, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC0_BMON_SPMU" }, + { .fc_id = 898, .cpu_id = 330, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC1_SPI" }, + { .fc_id = 899, .cpu_id = 330, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC1_BMON_SPMU" }, + { .fc_id = 900, .cpu_id = 331, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC2_SPI" }, + { .fc_id = 901, .cpu_id = 331, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC2_BMON_SPMU" }, + { .fc_id = 902, .cpu_id = 332, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC3_SPI" }, + { .fc_id = 903, .cpu_id = 332, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC3_BMON_SPMU" }, + { .fc_id = 904, .cpu_id = 333, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC4_SPI" }, + { .fc_id = 905, .cpu_id = 333, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC4_BMON_SPMU" }, + { .fc_id = 906, .cpu_id = 334, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC5_SPI" }, + { .fc_id = 907, .cpu_id = 334, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC5_BMON_SPMU" }, + { .fc_id = 908, .cpu_id = 335, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC6_SPI" }, + { .fc_id = 909, .cpu_id = 335, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC6_BMON_SPMU" }, + { .fc_id = 910, .cpu_id = 336, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC7_SPI" }, + { .fc_id = 911, .cpu_id = 336, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC7_BMON_SPMU" }, + { .fc_id = 912, .cpu_id = 337, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC8_SPI" }, + { .fc_id = 913, .cpu_id = 337, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC8_BMON_SPMU" }, + { .fc_id = 914, .cpu_id = 338, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "DEC9_SPI" }, + { .fc_id = 915, .cpu_id = 338, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "DEC9_BMON_SPMU" }, + { .fc_id = 916, .cpu_id = 339, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 917, .cpu_id = 340, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 918, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 919, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 920, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 
921, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 922, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 923, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 924, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 925, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 926, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 927, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 928, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 929, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 930, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 931, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 932, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 933, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 934, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 935, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 936, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 937, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 938, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 939, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 940, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 941, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 942, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 943, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 944, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 945, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 946, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 947, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 948, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 949, .cpu_id = 341, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 950, .cpu_id = 342, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 951, .cpu_id = 343, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC0_BMON_SPMU" }, + { .fc_id = 952, .cpu_id = 343, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC0_SW_ERROR" }, + { .fc_id = 953, .cpu_id = 343, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 954, .cpu_id = 343, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 955, .cpu_id = 344, 
.valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC1_BMON_SPMU" }, + { .fc_id = 956, .cpu_id = 344, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC1_SW_ERROR" }, + { .fc_id = 957, .cpu_id = 344, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 958, .cpu_id = 344, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 959, .cpu_id = 345, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC2_BMON_SPMU" }, + { .fc_id = 960, .cpu_id = 345, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC2_SW_ERROR" }, + { .fc_id = 961, .cpu_id = 345, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 962, .cpu_id = 345, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 963, .cpu_id = 346, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC3_BMON_SPMU" }, + { .fc_id = 964, .cpu_id = 346, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC3_SW_ERROR" }, + { .fc_id = 965, .cpu_id = 346, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 966, .cpu_id = 346, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 967, .cpu_id = 347, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC4_BMON_SPMU" }, + { .fc_id = 968, .cpu_id = 347, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC4_SW_ERROR" }, + { .fc_id = 969, .cpu_id = 347, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 970, .cpu_id = 347, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 971, .cpu_id = 348, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC5_BMON_SPMU" }, + { .fc_id = 972, .cpu_id = 348, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC5_SW_ERROR" }, + { .fc_id = 973, .cpu_id = 348, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 974, .cpu_id = 348, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 975, .cpu_id = 349, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC6_BMON_SPMU" }, + { .fc_id = 976, .cpu_id = 349, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC6_SW_ERROR" }, + { .fc_id = 977, .cpu_id = 349, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 978, .cpu_id = 349, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 979, .cpu_id = 350, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC7_BMON_SPMU" }, + { .fc_id = 980, .cpu_id = 350, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC7_SW_ERROR" }, + { .fc_id = 981, .cpu_id = 350, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 982, .cpu_id = 350, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 983, .cpu_id = 351, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC8_BMON_SPMU" }, + { .fc_id = 984, .cpu_id = 351, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC8_SW_ERROR" }, + { .fc_id = 985, .cpu_id = 351, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 986, .cpu_id = 351, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 987, .cpu_id = 352, .valid = 1, .msg = 0, .reset = 
EVENT_RESET_TYPE_NONE, + .name = "NIC9_BMON_SPMU" }, + { .fc_id = 988, .cpu_id = 352, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC9_SW_ERROR" }, + { .fc_id = 989, .cpu_id = 352, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 990, .cpu_id = 352, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 991, .cpu_id = 353, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC10_BMON_SPMU" }, + { .fc_id = 992, .cpu_id = 353, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC10_SW_ERROR" }, + { .fc_id = 993, .cpu_id = 353, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 994, .cpu_id = 353, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 995, .cpu_id = 354, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC11_BMON_SPMU" }, + { .fc_id = 996, .cpu_id = 354, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "NIC11_SW_ERROR" }, + { .fc_id = 997, .cpu_id = 354, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 998, .cpu_id = 354, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 999, .cpu_id = 355, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1000, .cpu_id = 356, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1001, .cpu_id = 357, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1002, .cpu_id = 358, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1003, .cpu_id = 359, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1004, .cpu_id = 360, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1005, .cpu_id = 361, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1006, .cpu_id = 362, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1007, .cpu_id = 363, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1008, .cpu_id = 368, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1009, .cpu_id = 369, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1010, .cpu_id = 366, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1011, .cpu_id = 367, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1012, .cpu_id = 364, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1013, .cpu_id = 365, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1014, .cpu_id = 374, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1015, .cpu_id = 375, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1016, .cpu_id = 372, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1017, .cpu_id = 373, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1018, .cpu_id = 370, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1019, .cpu_id = 371, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1020, .cpu_id = 376, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id 
= 1021, .cpu_id = 377, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1022, .cpu_id = 378, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1023, .cpu_id = 379, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1024, .cpu_id = 380, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1025, .cpu_id = 381, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1026, .cpu_id = 382, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1027, .cpu_id = 383, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1028, .cpu_id = 384, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1029, .cpu_id = 385, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1030, .cpu_id = 386, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1031, .cpu_id = 387, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1032, .cpu_id = 388, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1033, .cpu_id = 389, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1034, .cpu_id = 390, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1035, .cpu_id = 391, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1036, .cpu_id = 392, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1037, .cpu_id = 393, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1038, .cpu_id = 394, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1039, .cpu_id = 395, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1040, .cpu_id = 396, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1041, .cpu_id = 397, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1042, .cpu_id = 398, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1043, .cpu_id = 399, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1044, .cpu_id = 400, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1045, .cpu_id = 401, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1046, .cpu_id = 402, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1047, .cpu_id = 403, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1048, .cpu_id = 404, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1049, .cpu_id = 405, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1050, .cpu_id = 406, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1051, .cpu_id = 407, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1052, .cpu_id = 408, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1053, .cpu_id = 409, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1054, .cpu_id = 410, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1055, .cpu_id = 
411, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1056, .cpu_id = 412, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1057, .cpu_id = 413, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1058, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1059, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1060, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1061, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1062, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1063, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1064, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1065, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1066, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1067, .cpu_id = 414, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1068, .cpu_id = 415, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1069, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1070, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1071, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1072, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1073, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1074, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1075, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1076, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1077, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1078, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1079, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1080, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1081, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1082, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1083, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1084, .cpu_id = 416, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1085, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1086, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1087, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1088, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1089, .cpu_id = 417, .valid = 0, 
.msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1090, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1091, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1092, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1093, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1094, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1095, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1096, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1097, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1098, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1099, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1100, .cpu_id = 417, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1101, .cpu_id = 418, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1102, .cpu_id = 419, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1103, .cpu_id = 420, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1104, .cpu_id = 421, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1105, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1106, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1107, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1108, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1109, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1110, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1111, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1112, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1113, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1114, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1115, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1116, .cpu_id = 422, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1117, .cpu_id = 423, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1118, .cpu_id = 424, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "ROTATOR0_SERR" }, + { .fc_id = 1119, .cpu_id = 425, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "ROTATOR1_SERR" }, + { .fc_id = 1120, .cpu_id = 426, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "ROTATOR0_DERR" }, + { .fc_id = 1121, .cpu_id = 427, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_HARD, + .name = "ROTATOR1_DERR" }, + { .fc_id = 1122, .cpu_id = 428, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = 
"ROTATOR0_AXI_ERROR_RESPONSE" }, + { .fc_id = 1123, .cpu_id = 429, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "ROTATOR1_AXI_ERROR_RESPONSE" }, + { .fc_id = 1124, .cpu_id = 430, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1125, .cpu_id = 431, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1126, .cpu_id = 432, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "ROTATOR0_BMON_SPMU" }, + { .fc_id = 1127, .cpu_id = 433, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1128, .cpu_id = 434, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "ROTATOR1_BMON_SPMU" }, + { .fc_id = 1129, .cpu_id = 435, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1130, .cpu_id = 436, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SM0_BMON_SPMU" }, + { .fc_id = 1131, .cpu_id = 437, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SM1_BMON_SPMU" }, + { .fc_id = 1132, .cpu_id = 438, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SM2_BMON_SPMU" }, + { .fc_id = 1133, .cpu_id = 439, .valid = 1, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "SM3_BMON_SPMU" }, + { .fc_id = 1134, .cpu_id = 440, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1135, .cpu_id = 441, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1136, .cpu_id = 442, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1137, .cpu_id = 443, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1138, .cpu_id = 444, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1139, .cpu_id = 445, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1140, .cpu_id = 446, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1141, .cpu_id = 447, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1142, .cpu_id = 448, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1143, .cpu_id = 449, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1144, .cpu_id = 450, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1145, .cpu_id = 451, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1146, .cpu_id = 452, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1147, .cpu_id = 453, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1148, .cpu_id = 454, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1149, .cpu_id = 455, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1150, .cpu_id = 456, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1151, .cpu_id = 457, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1152, .cpu_id = 458, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1153, .cpu_id = 459, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1154, .cpu_id = 460, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1155, .cpu_id = 461, .valid = 0, .msg = 0, .reset = 
EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1156, .cpu_id = 462, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1157, .cpu_id = 463, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1158, .cpu_id = 464, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1159, .cpu_id = 465, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1160, .cpu_id = 466, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1161, .cpu_id = 467, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1162, .cpu_id = 468, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1163, .cpu_id = 469, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1164, .cpu_id = 470, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1165, .cpu_id = 471, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1166, .cpu_id = 472, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1167, .cpu_id = 473, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1168, .cpu_id = 474, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1169, .cpu_id = 475, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1170, .cpu_id = 476, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1171, .cpu_id = 477, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1172, .cpu_id = 478, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1173, .cpu_id = 479, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1174, .cpu_id = 480, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1175, .cpu_id = 481, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1176, .cpu_id = 482, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1177, .cpu_id = 483, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1178, .cpu_id = 484, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1179, .cpu_id = 485, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1180, .cpu_id = 486, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1181, .cpu_id = 487, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1182, .cpu_id = 488, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1183, .cpu_id = 489, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1184, .cpu_id = 490, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1185, .cpu_id = 491, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1186, .cpu_id = 492, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1187, .cpu_id = 493, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1188, .cpu_id = 494, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1189, .cpu_id = 495, .valid = 0, .msg = 0, .reset = 
EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1190, .cpu_id = 496, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1191, .cpu_id = 497, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1192, .cpu_id = 498, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1193, .cpu_id = 499, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1194, .cpu_id = 500, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1195, .cpu_id = 501, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1196, .cpu_id = 502, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1197, .cpu_id = 503, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1198, .cpu_id = 504, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1199, .cpu_id = 505, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1200, .cpu_id = 506, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1201, .cpu_id = 507, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1202, .cpu_id = 508, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1203, .cpu_id = 509, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1204, .cpu_id = 510, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1205, .cpu_id = 511, .valid = 0, .msg = 0, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1206, .cpu_id = 512, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC0_QM" }, + { .fc_id = 1207, .cpu_id = 513, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC1_QM" }, + { .fc_id = 1208, .cpu_id = 514, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC2_QM" }, + { .fc_id = 1209, .cpu_id = 515, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC3_QM" }, + { .fc_id = 1210, .cpu_id = 516, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC4_QM" }, + { .fc_id = 1211, .cpu_id = 517, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC5_QM" }, + { .fc_id = 1212, .cpu_id = 518, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC6_QM" }, + { .fc_id = 1213, .cpu_id = 519, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC7_QM" }, + { .fc_id = 1214, .cpu_id = 520, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC8_QM" }, + { .fc_id = 1215, .cpu_id = 521, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC9_QM" }, + { .fc_id = 1216, .cpu_id = 522, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC10_QM" }, + { .fc_id = 1217, .cpu_id = 523, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC11_QM" }, + { .fc_id = 1218, .cpu_id = 524, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC12_QM" }, + { .fc_id = 1219, .cpu_id = 525, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC13_QM" }, + { .fc_id = 1220, .cpu_id = 526, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC14_QM" }, + { .fc_id = 1221, .cpu_id = 527, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC15_QM" }, + { .fc_id = 1222, 
.cpu_id = 528, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC16_QM" }, + { .fc_id = 1223, .cpu_id = 529, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC17_QM" }, + { .fc_id = 1224, .cpu_id = 530, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC18_QM" }, + { .fc_id = 1225, .cpu_id = 531, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC19_QM" }, + { .fc_id = 1226, .cpu_id = 532, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC20_QM" }, + { .fc_id = 1227, .cpu_id = 533, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC21_QM" }, + { .fc_id = 1228, .cpu_id = 534, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC22_QM" }, + { .fc_id = 1229, .cpu_id = 535, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC23_QM" }, + { .fc_id = 1230, .cpu_id = 536, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "TPC24_QM" }, + { .fc_id = 1231, .cpu_id = 537, .valid = 0, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "" }, + { .fc_id = 1232, .cpu_id = 538, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME0_QM" }, + { .fc_id = 1233, .cpu_id = 539, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME1_QM" }, + { .fc_id = 1234, .cpu_id = 540, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME2_QM" }, + { .fc_id = 1235, .cpu_id = 541, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "MME3_QM" }, + { .fc_id = 1236, .cpu_id = 542, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA2_QM" }, + { .fc_id = 1237, .cpu_id = 543, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA3_QM" }, + { .fc_id = 1238, .cpu_id = 544, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA0_QM" }, + { .fc_id = 1239, .cpu_id = 545, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA1_QM" }, + { .fc_id = 1240, .cpu_id = 546, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA6_QM" }, + { .fc_id = 1241, .cpu_id = 547, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA7_QM" }, + { .fc_id = 1242, .cpu_id = 548, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA4_QM" }, + { .fc_id = 1243, .cpu_id = 549, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA5_QM" }, + { .fc_id = 1244, .cpu_id = 550, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "PDMA0_QM" }, + { .fc_id = 1245, .cpu_id = 551, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "PDMA1_QM" }, + { .fc_id = 1246, .cpu_id = 552, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "PI_UPDATE" }, + { .fc_id = 1247, .cpu_id = 553, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "HALT_MACHINE" }, + { .fc_id = 1248, .cpu_id = 554, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "INTS_REGISTER" }, + { .fc_id = 1249, .cpu_id = 555, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "ROT0_QM" }, + { .fc_id = 1250, .cpu_id = 556, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "ROT1_QM" }, + { .fc_id = 1251, .cpu_id = 557, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "SOFT_RESET" }, + { .fc_id = 1252, .cpu_id = 558, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "CPLD_SHUTDOWN_CAUSE" }, + { .fc_id = 
1253, .cpu_id = 559, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "FIX_POWER_ENV_S" }, + { .fc_id = 1254, .cpu_id = 560, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "FIX_POWER_ENV_E" }, + { .fc_id = 1255, .cpu_id = 561, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "FIX_THERMAL_ENV_S" }, + { .fc_id = 1256, .cpu_id = 562, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "FIX_THERMAL_ENV_E" }, + { .fc_id = 1257, .cpu_id = 563, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "CPLD_SHUTDOWN_EVENT" }, + { .fc_id = 1258, .cpu_id = 564, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "PKT_QUEUE_OUT_SYNC" }, + { .fc_id = 1259, .cpu_id = 565, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA2_CORE" }, + { .fc_id = 1260, .cpu_id = 566, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA3_CORE" }, + { .fc_id = 1261, .cpu_id = 567, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA0_CORE" }, + { .fc_id = 1262, .cpu_id = 568, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA1_CORE" }, + { .fc_id = 1263, .cpu_id = 569, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA6_CORE" }, + { .fc_id = 1264, .cpu_id = 570, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA7_CORE" }, + { .fc_id = 1265, .cpu_id = 571, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA4_CORE" }, + { .fc_id = 1266, .cpu_id = 572, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "EDMA5_CORE" }, + { .fc_id = 1267, .cpu_id = 573, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "PDMA0_CORE" }, + { .fc_id = 1268, .cpu_id = 574, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "PDMA1_CORE" }, + { .fc_id = 1269, .cpu_id = 575, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "KDMA0_CORE" }, + { .fc_id = 1270, .cpu_id = 576, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC0_QM0" }, + { .fc_id = 1271, .cpu_id = 577, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC0_QM1" }, + { .fc_id = 1272, .cpu_id = 578, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC1_QM0" }, + { .fc_id = 1273, .cpu_id = 579, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC1_QM1" }, + { .fc_id = 1274, .cpu_id = 580, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC2_QM0" }, + { .fc_id = 1275, .cpu_id = 581, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC2_QM1" }, + { .fc_id = 1276, .cpu_id = 582, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC3_QM0" }, + { .fc_id = 1277, .cpu_id = 583, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC3_QM1" }, + { .fc_id = 1278, .cpu_id = 584, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC4_QM0" }, + { .fc_id = 1279, .cpu_id = 585, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC4_QM1" }, + { .fc_id = 1280, .cpu_id = 586, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC5_QM0" }, + { .fc_id = 1281, .cpu_id = 587, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC5_QM1" }, + { .fc_id = 1282, .cpu_id = 588, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC6_QM0" }, + { .fc_id = 1283, .cpu_id = 589, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = 
"NIC6_QM1" }, + { .fc_id = 1284, .cpu_id = 590, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC7_QM0" }, + { .fc_id = 1285, .cpu_id = 591, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC7_QM1" }, + { .fc_id = 1286, .cpu_id = 592, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC8_QM0" }, + { .fc_id = 1287, .cpu_id = 593, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC8_QM1" }, + { .fc_id = 1288, .cpu_id = 594, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC9_QM0" }, + { .fc_id = 1289, .cpu_id = 595, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC9_QM1" }, + { .fc_id = 1290, .cpu_id = 596, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC10_QM0" }, + { .fc_id = 1291, .cpu_id = 597, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC10_QM1" }, + { .fc_id = 1292, .cpu_id = 598, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC11_QM0" }, + { .fc_id = 1293, .cpu_id = 599, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "NIC11_QM1" }, + { .fc_id = 1294, .cpu_id = 600, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "CPU_PKT_SANITY_FAILED" }, + { .fc_id = 1295, .cpu_id = 601, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC0_ENG0" }, + { .fc_id = 1296, .cpu_id = 602, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC0_ENG1" }, + { .fc_id = 1297, .cpu_id = 603, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC1_ENG0" }, + { .fc_id = 1298, .cpu_id = 604, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC1_ENG1" }, + { .fc_id = 1299, .cpu_id = 605, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC2_ENG0" }, + { .fc_id = 1300, .cpu_id = 606, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC2_ENG1" }, + { .fc_id = 1301, .cpu_id = 607, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC3_ENG0" }, + { .fc_id = 1302, .cpu_id = 608, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC3_ENG1" }, + { .fc_id = 1303, .cpu_id = 609, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC4_ENG0" }, + { .fc_id = 1304, .cpu_id = 610, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC4_ENG1" }, + { .fc_id = 1305, .cpu_id = 611, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC5_ENG0" }, + { .fc_id = 1306, .cpu_id = 612, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC5_ENG1" }, + { .fc_id = 1307, .cpu_id = 613, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC6_ENG0" }, + { .fc_id = 1308, .cpu_id = 614, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC6_ENG1" }, + { .fc_id = 1309, .cpu_id = 615, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC7_ENG0" }, + { .fc_id = 1310, .cpu_id = 616, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC7_ENG1" }, + { .fc_id = 1311, .cpu_id = 617, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC8_ENG0" }, + { .fc_id = 1312, .cpu_id = 618, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC8_ENG1" }, + { .fc_id = 1313, .cpu_id = 619, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC9_ENG0" }, + { .fc_id 
= 1314, .cpu_id = 620, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC9_ENG1" }, + { .fc_id = 1315, .cpu_id = 621, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC10_ENG0" }, + { .fc_id = 1316, .cpu_id = 622, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC10_ENG1" }, + { .fc_id = 1317, .cpu_id = 623, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC11_ENG0" }, + { .fc_id = 1318, .cpu_id = 624, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_NONE, + .name = "STATUS_NIC11_ENG1" }, + { .fc_id = 1319, .cpu_id = 625, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_COMPUTE, + .name = "ARC_DCCM_FULL" }, + { .fc_id = 1320, .cpu_id = 626, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "FP32_NOT_SUPPORTED" }, + { .fc_id = 1321, .cpu_id = 627, .valid = 1, .msg = 1, .reset = EVENT_RESET_TYPE_HARD, + .name = "DEV_RESET_REQ" }, }; #endif /* __GAUDI2_ASYNC_IDS_MAP_EVENTS_EXT_H_ */ diff --git a/drivers/accel/habanalabs/include/gaudi2/gaudi2_fw_if.h b/drivers/accel/habanalabs/include/gaudi2/gaudi2_fw_if.h index 82f3ca2a3966..8522f24deac0 100644 --- a/drivers/accel/habanalabs/include/gaudi2/gaudi2_fw_if.h +++ b/drivers/accel/habanalabs/include/gaudi2/gaudi2_fw_if.h @@ -63,7 +63,10 @@ struct gaudi2_cold_rst_data { u32 fake_sig_validation_en : 1; u32 bist_skip_enable : 1; u32 bist_need_iatu_config : 1; - u32 reserved : 24; + u32 fake_bis_compliant : 1; + u32 wd_rst_cause_arm : 1; + u32 wd_rst_cause_arcpid : 1; + u32 reserved : 21; }; __le32 data; }; diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c index 6a320a73e3cc..8396db2b5203 100644 --- a/drivers/accel/ivpu/ivpu_drv.c +++ b/drivers/accel/ivpu/ivpu_drv.c @@ -437,6 +437,10 @@ static int ivpu_pci_init(struct ivpu_device *vdev) /* Clear any pending errors */ pcie_capability_clear_word(pdev, PCI_EXP_DEVSTA, 0x3f); + /* VPU MTL does not require the PCI spec 10 ms D3hot delay */ + if (ivpu_is_mtl(vdev)) + pdev->d3hot_delay = 0; + ret = pcim_enable_device(pdev); if (ret) { ivpu_err(vdev, "Failed to enable PCI device: %d\n", ret); diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c index bde42d6383da..aa4d56dc52b3 100644 --- a/drivers/accel/ivpu/ivpu_pm.c +++ b/drivers/accel/ivpu/ivpu_pm.c @@ -239,8 +239,6 @@ int ivpu_rpm_get(struct ivpu_device *vdev) { int ret; - ivpu_dbg(vdev, RPM, "rpm_get count %d\n", atomic_read(&vdev->drm.dev->power.usage_count)); - ret = pm_runtime_resume_and_get(vdev->drm.dev); if (!drm_WARN_ON(&vdev->drm, ret < 0)) vdev->pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT; @@ -250,8 +248,6 @@ int ivpu_rpm_get(struct ivpu_device *vdev) void ivpu_rpm_put(struct ivpu_device *vdev) { - ivpu_dbg(vdev, RPM, "rpm_put count %d\n", atomic_read(&vdev->drm.dev->power.usage_count)); - pm_runtime_mark_last_busy(vdev->drm.dev); pm_runtime_put_autosuspend(vdev->drm.dev); } @@ -321,16 +317,10 @@ void ivpu_pm_enable(struct ivpu_device *vdev) pm_runtime_allow(dev); pm_runtime_mark_last_busy(dev); pm_runtime_put_autosuspend(dev); - - ivpu_dbg(vdev, RPM, "Enable RPM count %d\n", atomic_read(&dev->power.usage_count)); } void ivpu_pm_disable(struct ivpu_device *vdev) { - struct device *dev = vdev->drm.dev; - - ivpu_dbg(vdev, RPM, "Disable RPM count %d\n", atomic_read(&dev->power.usage_count)); - pm_runtime_get_noresume(vdev->drm.dev); pm_runtime_forbid(vdev->drm.dev); }
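The gaudi2_fw_if.h hunk above carves three new flags out of the reserved bits while keeping the word shared with firmware exactly 32 bits wide. Below is a minimal sketch of the bitfield-in-union pattern that change relies on; the names cold_rst_example and wd_rst_was_arm are illustrative only, and the field set is reduced for brevity:

	/* Illustrative sketch, not from the patch: the same pattern as
	 * struct gaudi2_cold_rst_data, with a reduced field set. */
	#include <linux/types.h>

	union cold_rst_example {
		struct {
			u32 wd_rst_cause_arm : 1;	/* newly added flag */
			u32 wd_rst_cause_arcpid : 1;	/* newly added flag */
			u32 reserved : 30;		/* shrinks as flags are added */
		};
		__le32 data;	/* raw word exchanged with firmware */
	};

	static inline bool wd_rst_was_arm(union cold_rst_example rst)
	{
		/* Assumes a little-endian host, as on the platforms using this device. */
		return rst.wd_rst_cause_arm;
	}

Shrinking reserved from 24 to 21 bits while adding three 1-bit fields is what keeps the overall layout, and thus the firmware ABI, unchanged.

diff --git a/drivers/accel/qaic/Kconfig b/drivers/accel/qaic/Kconfig new file mode 100644 index 000000000000..a9f866230058 --- 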
/dev/null +++ b/drivers/accel/qaic/Kconfig @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# Qualcomm Cloud AI accelerators driver +# + +config DRM_ACCEL_QAIC + tristate "Qualcomm Cloud AI accelerators" + depends on DRM_ACCEL + depends on PCI && HAS_IOMEM + depends on MHI_BUS + depends on MMU + select CRC32 + help + Enables driver for Qualcomm's Cloud AI accelerator PCIe cards that are + designed to accelerate Deep Learning inference workloads. + + The driver manages the PCIe devices and provides an IOCTL interface + for users to submit workloads to the devices. + + If unsure, say N. + + To compile this driver as a module, choose M here: the + module will be called qaic. diff --git a/drivers/accel/qaic/Makefile b/drivers/accel/qaic/Makefile new file mode 100644 index 000000000000..d5f4952ae79a --- /dev/null +++ b/drivers/accel/qaic/Makefile @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# Makefile for Qualcomm Cloud AI accelerators driver +# + +obj-$(CONFIG_DRM_ACCEL_QAIC) := qaic.o + +qaic-y := \ + mhi_controller.o \ + mhi_qaic_ctrl.o \ + qaic_control.o \ + qaic_data.o \ + qaic_drv.o diff --git a/drivers/accel/qaic/mhi_controller.c b/drivers/accel/qaic/mhi_controller.c new file mode 100644 index 000000000000..5036e58e7235 --- /dev/null +++ b/drivers/accel/qaic/mhi_controller.c @@ -0,0 +1,563 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. */ + +#include <linux/delay.h> +#include <linux/err.h> +#include <linux/memblock.h> +#include <linux/mhi.h> +#include <linux/moduleparam.h> +#include <linux/pci.h> +#include <linux/sizes.h> + +#include "mhi_controller.h" +#include "qaic.h" + +#define MAX_RESET_TIME_SEC 25 + +static unsigned int mhi_timeout_ms = 2000; /* 2 sec default */ +module_param(mhi_timeout_ms, uint, 0600); +MODULE_PARM_DESC(mhi_timeout_ms, "MHI controller timeout value"); + +static struct mhi_channel_config aic100_channels[] = { + { + .name = "QAIC_LOOPBACK", + .num = 0, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_LOOPBACK", + .num = 1, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SAHARA", + .num = 2, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SAHARA", + .num = 3, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DIAG", + .num = 4, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir 
= DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DIAG", + .num = 5, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SSR", + .num = 6, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SSR", + .num = 7, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_QDSS", + .num = 8, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_QDSS", + .num = 9, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_CONTROL", + .num = 10, + .num_elements = 128, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_CONTROL", + .num = 11, + .num_elements = 128, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_LOGGING", + .num = 12, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_LOGGING", + .num = 13, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_STATUS", + .num = 14, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = 
MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_STATUS", + .num = 15, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_TELEMETRY", + .num = 16, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_TELEMETRY", + .num = 17, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DEBUG", + .num = 18, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DEBUG", + .num = 19, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_TIMESYNC", + .num = 20, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_SBL | MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .num = 21, + .name = "QAIC_TIMESYNC", + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_SBL | MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, +}; + +static struct mhi_event_config aic100_events[] = { + { + .num_elements = 32, + .irq_moderation_ms = 0, + .irq = 0, + .channel = U32_MAX, + .priority = 1, + .mode = MHI_DB_BRST_DISABLE, + .data_type = MHI_ER_CTRL, + .hardware_event = false, + .client_managed = false, + .offload_channel = false, + }, +}; + +static struct mhi_controller_config aic100_config = { + .max_channels = 128, + .timeout_ms = 0, /* controlled by mhi_timeout */ + .buf_len = 0, + .num_channels = ARRAY_SIZE(aic100_channels), + .ch_cfg = aic100_channels, + .num_events = ARRAY_SIZE(aic100_events), + .event_cfg = aic100_events, + .use_bounce_buf = false, + .m2_no_db = false, +}; + +static int mhi_read_reg(struct mhi_controller *mhi_cntrl, void __iomem *addr, u32 *out) +{ + u32 tmp = readl_relaxed(addr); + + if (tmp == U32_MAX) + return -EIO; + + *out = tmp; + + return 0; +} + +static void mhi_write_reg(struct mhi_controller 
*mhi_cntrl, void __iomem *addr, u32 val) +{ + writel_relaxed(val, addr); +} + +static int mhi_runtime_get(struct mhi_controller *mhi_cntrl) +{ + return 0; +} + +static void mhi_runtime_put(struct mhi_controller *mhi_cntrl) +{ +} + +static void mhi_status_cb(struct mhi_controller *mhi_cntrl, enum mhi_callback reason) +{ + struct qaic_device *qdev = pci_get_drvdata(to_pci_dev(mhi_cntrl->cntrl_dev)); + + /* this event occurs in atomic context */ + if (reason == MHI_CB_FATAL_ERROR) + pci_err(qdev->pdev, "Fatal error received from device. Attempting to recover\n"); + /* this event occurs in non-atomic context */ + if (reason == MHI_CB_SYS_ERROR) + qaic_dev_reset_clean_local_state(qdev, true); +} + +static int mhi_reset_and_async_power_up(struct mhi_controller *mhi_cntrl) +{ + u8 time_sec = 1; + int current_ee; + int ret; + + /* Reset the device to bring it into the PBL EE */ + mhi_soc_reset(mhi_cntrl); + + /* + * Keep checking the execution environment (EE) at one-second + * intervals. + */ + do { + msleep(1000); + current_ee = mhi_get_exec_env(mhi_cntrl); + } while (current_ee != MHI_EE_PBL && time_sec++ <= MAX_RESET_TIME_SEC); + + /* If the device is in the PBL EE, retry power up */ + if (current_ee == MHI_EE_PBL) + ret = mhi_async_power_up(mhi_cntrl); + else + ret = -EIO; + + return ret; +} + +struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, void __iomem *mhi_bar, + int mhi_irq) +{ + struct mhi_controller *mhi_cntrl; + int ret; + + mhi_cntrl = devm_kzalloc(&pci_dev->dev, sizeof(*mhi_cntrl), GFP_KERNEL); + if (!mhi_cntrl) + return ERR_PTR(-ENOMEM); + + mhi_cntrl->cntrl_dev = &pci_dev->dev; + + /* + * Covers the entire possible physical RAM region. The remote side will + * calculate the size of this range, so subtract 1 to prevent + * rollover. + */ + mhi_cntrl->iova_start = 0; + mhi_cntrl->iova_stop = PHYS_ADDR_MAX - 1; + mhi_cntrl->status_cb = mhi_status_cb; + mhi_cntrl->runtime_get = mhi_runtime_get; + mhi_cntrl->runtime_put = mhi_runtime_put; + mhi_cntrl->read_reg = mhi_read_reg; + mhi_cntrl->write_reg = mhi_write_reg; + mhi_cntrl->regs = mhi_bar; + mhi_cntrl->reg_len = SZ_4K; + mhi_cntrl->nr_irqs = 1; + mhi_cntrl->irq = devm_kmalloc(&pci_dev->dev, sizeof(*mhi_cntrl->irq), GFP_KERNEL); + + if (!mhi_cntrl->irq) + return ERR_PTR(-ENOMEM); + + mhi_cntrl->irq[0] = mhi_irq; + mhi_cntrl->fw_image = "qcom/aic100/sbl.bin"; + + /* use the latest configured timeout */ + aic100_config.timeout_ms = mhi_timeout_ms; + ret = mhi_register_controller(mhi_cntrl, &aic100_config); + if (ret) { + pci_err(pci_dev, "mhi_register_controller failed %d\n", ret); + return ERR_PTR(ret); + } + + ret = mhi_prepare_for_power_up(mhi_cntrl); + if (ret) { + pci_err(pci_dev, "mhi_prepare_for_power_up failed %d\n", ret); + goto prepare_power_up_fail; + } + + ret = mhi_async_power_up(mhi_cntrl); + /* + * If -EIO is returned, the device may be in the SBL EE, which is + * undesired. Reset the SoC and try to power up again. + */ + if (ret == -EIO && MHI_EE_SBL == mhi_get_exec_env(mhi_cntrl)) { + pci_err(pci_dev, "Found device in SBL at MHI init. 
Attempting a reset.\n"); + ret = mhi_reset_and_async_power_up(mhi_cntrl); + } + + if (ret) { + pci_err(pci_dev, "mhi_async_power_up failed %d\n", ret); + goto power_up_fail; + } + + return mhi_cntrl; + +power_up_fail: + mhi_unprepare_after_power_down(mhi_cntrl); +prepare_power_up_fail: + mhi_unregister_controller(mhi_cntrl); + return ERR_PTR(ret); +} + +void qaic_mhi_free_controller(struct mhi_controller *mhi_cntrl, bool link_up) +{ + mhi_power_down(mhi_cntrl, link_up); + mhi_unprepare_after_power_down(mhi_cntrl); + mhi_unregister_controller(mhi_cntrl); +} + +void qaic_mhi_start_reset(struct mhi_controller *mhi_cntrl) +{ + mhi_power_down(mhi_cntrl, true); +} + +void qaic_mhi_reset_done(struct mhi_controller *mhi_cntrl) +{ + struct pci_dev *pci_dev = container_of(mhi_cntrl->cntrl_dev, struct pci_dev, dev); + int ret; + + ret = mhi_async_power_up(mhi_cntrl); + if (ret) + pci_err(pci_dev, "mhi_async_power_up failed after reset %d\n", ret); +} diff --git a/drivers/accel/qaic/mhi_controller.h b/drivers/accel/qaic/mhi_controller.h new file mode 100644 index 000000000000..2ae45d768e24 --- /dev/null +++ b/drivers/accel/qaic/mhi_controller.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef MHICONTROLLERQAIC_H_ +#define MHICONTROLLERQAIC_H_ + +struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, void __iomem *mhi_bar, + int mhi_irq); +void qaic_mhi_free_controller(struct mhi_controller *mhi_cntrl, bool link_up); +void qaic_mhi_start_reset(struct mhi_controller *mhi_cntrl); +void qaic_mhi_reset_done(struct mhi_controller *mhi_cntrl); + +#endif /* MHICONTROLLERQAIC_H_ */ diff --git a/drivers/accel/qaic/mhi_qaic_ctrl.c b/drivers/accel/qaic/mhi_qaic_ctrl.c new file mode 100644 index 000000000000..0c7e571f1f12 --- /dev/null +++ b/drivers/accel/qaic/mhi_qaic_ctrl.c @@ -0,0 +1,569 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. */ + +#include <linux/kernel.h> +#include <linux/mhi.h> +#include <linux/mod_devicetable.h> +#include <linux/module.h> +#include <linux/poll.h> +#include <linux/xarray.h> +#include <uapi/linux/eventpoll.h> + +#include "mhi_qaic_ctrl.h" +#include "qaic.h" + +#define MHI_QAIC_CTRL_DRIVER_NAME "mhi_qaic_ctrl" +#define MHI_QAIC_CTRL_MAX_MINORS 128 +#define MHI_MAX_MTU 0xffff +static DEFINE_XARRAY_ALLOC(mqc_xa); +static struct class *mqc_dev_class; +static int mqc_dev_major; + +/** + * struct mqc_buf - Buffer structure used to receive data from device + * @data: Address of data to read from + * @odata: Original address returned from *alloc() API. Used to free this buf. 
+ * @len: Length of data in bytes + * @node: This buffer will be part of the list managed in struct mqc_dev + */ +struct mqc_buf { + void *data; + void *odata; + size_t len; + struct list_head node; +}; + +/** + * struct mqc_dev - MHI QAIC Control Device + * @minor: MQC device node minor number + * @mhi_dev: Associated mhi device object + * @mtu: Max TRE buffer length + * @enabled: Flag to track the state of the MQC device + * @lock: Mutex lock to serialize access to open_count + * @read_lock: Mutex lock to serialize readers + * @write_lock: Mutex lock to serialize writers + * @ul_wq: Wait queue for writers + * @dl_wq: Wait queue for readers + * @dl_queue_lock: Spin lock to serialize access to download queue + * @dl_queue: Queue of downloaded buffers + * @open_count: Track open counts + * @ref_count: Reference count for this structure + */ +struct mqc_dev { + u32 minor; + struct mhi_device *mhi_dev; + size_t mtu; + bool enabled; + struct mutex lock; + struct mutex read_lock; + struct mutex write_lock; + wait_queue_head_t ul_wq; + wait_queue_head_t dl_wq; + spinlock_t dl_queue_lock; + struct list_head dl_queue; + unsigned int open_count; + struct kref ref_count; +}; + +static void mqc_dev_release(struct kref *ref) +{ + struct mqc_dev *mqcdev = container_of(ref, struct mqc_dev, ref_count); + + mutex_destroy(&mqcdev->read_lock); + mutex_destroy(&mqcdev->write_lock); + mutex_destroy(&mqcdev->lock); + kfree(mqcdev); +} + +static int mhi_qaic_ctrl_fill_dl_queue(struct mqc_dev *mqcdev) +{ + struct mhi_device *mhi_dev = mqcdev->mhi_dev; + struct mqc_buf *ctrlbuf; + int rx_budget; + int ret = 0; + void *data; + + rx_budget = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE); + if (rx_budget < 0) + return -EIO; + + while (rx_budget--) { + data = kzalloc(mqcdev->mtu + sizeof(*ctrlbuf), GFP_KERNEL); + if (!data) + return -ENOMEM; + + ctrlbuf = data + mqcdev->mtu; + ctrlbuf->odata = data; + + ret = mhi_queue_buf(mhi_dev, DMA_FROM_DEVICE, data, mqcdev->mtu, MHI_EOT); + if (ret) { + kfree(data); + dev_err(&mhi_dev->dev, "Failed to queue buffer\n"); + return ret; + } + } + + return ret; +} + +static int mhi_qaic_ctrl_dev_start_chan(struct mqc_dev *mqcdev) +{ + struct device *dev = &mqcdev->mhi_dev->dev; + int ret = 0; + + ret = mutex_lock_interruptible(&mqcdev->lock); + if (ret) + return ret; + if (!mqcdev->enabled) { + ret = -ENODEV; + goto release_dev_lock; + } + if (!mqcdev->open_count) { + ret = mhi_prepare_for_transfer(mqcdev->mhi_dev); + if (ret) { + dev_err(dev, "Error starting transfer channels\n"); + goto release_dev_lock; + } + + ret = mhi_qaic_ctrl_fill_dl_queue(mqcdev); + if (ret) { + dev_err(dev, "Error filling download queue.\n"); + goto mhi_unprepare; + } + } + mqcdev->open_count++; + mutex_unlock(&mqcdev->lock); + + return 0; + +mhi_unprepare: + mhi_unprepare_from_transfer(mqcdev->mhi_dev); +release_dev_lock: + mutex_unlock(&mqcdev->lock); + return ret; +} + +static struct mqc_dev *mqc_dev_get_by_minor(unsigned int minor) +{ + struct mqc_dev *mqcdev; + + xa_lock(&mqc_xa); + mqcdev = xa_load(&mqc_xa, minor); + if (mqcdev) + kref_get(&mqcdev->ref_count); + xa_unlock(&mqc_xa); + + return mqcdev; +} + +static int mhi_qaic_ctrl_open(struct inode *inode, struct file *filp) +{ + struct mqc_dev *mqcdev; + int ret; + + mqcdev = mqc_dev_get_by_minor(iminor(inode)); + if (!mqcdev) { + pr_debug("mqc: minor %d not found\n", iminor(inode)); + return -EINVAL; + } + + ret = mhi_qaic_ctrl_dev_start_chan(mqcdev); + if (ret) { + kref_put(&mqcdev->ref_count, mqc_dev_release); + return ret; + } + 
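+ /* Channel start succeeded; stash the device for subsequent file operations. */
+ 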
filp->private_data = mqcdev; + + return 0; +} + +static void mhi_qaic_ctrl_buf_free(struct mqc_buf *ctrlbuf) +{ + list_del(&ctrlbuf->node); + kfree(ctrlbuf->odata); +} + +static void __mhi_qaic_ctrl_release(struct mqc_dev *mqcdev) +{ + struct mqc_buf *ctrlbuf, *tmp; + + mhi_unprepare_from_transfer(mqcdev->mhi_dev); + wake_up_interruptible(&mqcdev->ul_wq); + wake_up_interruptible(&mqcdev->dl_wq); + /* + * Free the dl_queue. Since we have already unprepared MHI transfers, no + * callbacks that update dl_queue are expected, so there is no need to + * grab the dl_queue lock. + */ + mutex_lock(&mqcdev->read_lock); + list_for_each_entry_safe(ctrlbuf, tmp, &mqcdev->dl_queue, node) + mhi_qaic_ctrl_buf_free(ctrlbuf); + mutex_unlock(&mqcdev->read_lock); +} + +static int mhi_qaic_ctrl_release(struct inode *inode, struct file *file) +{ + struct mqc_dev *mqcdev = file->private_data; + + mutex_lock(&mqcdev->lock); + mqcdev->open_count--; + if (!mqcdev->open_count && mqcdev->enabled) + __mhi_qaic_ctrl_release(mqcdev); + mutex_unlock(&mqcdev->lock); + + kref_put(&mqcdev->ref_count, mqc_dev_release); + + return 0; +} + +static __poll_t mhi_qaic_ctrl_poll(struct file *file, poll_table *wait) +{ + struct mqc_dev *mqcdev = file->private_data; + struct mhi_device *mhi_dev; + __poll_t mask = 0; + + mhi_dev = mqcdev->mhi_dev; + + poll_wait(file, &mqcdev->ul_wq, wait); + poll_wait(file, &mqcdev->dl_wq, wait); + + mutex_lock(&mqcdev->lock); + if (!mqcdev->enabled) { + mutex_unlock(&mqcdev->lock); + return EPOLLERR; + } + + spin_lock_bh(&mqcdev->dl_queue_lock); + if (!list_empty(&mqcdev->dl_queue)) + mask |= EPOLLIN | EPOLLRDNORM; + spin_unlock_bh(&mqcdev->dl_queue_lock); + + if (mutex_lock_interruptible(&mqcdev->write_lock)) { + mutex_unlock(&mqcdev->lock); + return EPOLLERR; + } + if (mhi_get_free_desc_count(mhi_dev, DMA_TO_DEVICE) > 0) + mask |= EPOLLOUT | EPOLLWRNORM; + mutex_unlock(&mqcdev->write_lock); + mutex_unlock(&mqcdev->lock); + + dev_dbg(&mhi_dev->dev, "Client attempted to poll, returning mask 0x%x\n", mask); + + return mask; +} + +static int mhi_qaic_ctrl_tx(struct mqc_dev *mqcdev) +{ + int ret; + + ret = wait_event_interruptible(mqcdev->ul_wq, !mqcdev->enabled || + mhi_get_free_desc_count(mqcdev->mhi_dev, DMA_TO_DEVICE) > 0); + + if (!mqcdev->enabled) + return -ENODEV; + + return ret; +} + +static ssize_t mhi_qaic_ctrl_write(struct file *file, const char __user *buf, size_t count, + loff_t *offp) +{ + struct mqc_dev *mqcdev = file->private_data; + struct mhi_device *mhi_dev; + size_t bytes_xfered = 0; + struct device *dev; + int ret, nr_desc; + + mhi_dev = mqcdev->mhi_dev; + dev = &mhi_dev->dev; + + if (!mhi_dev->ul_chan) + return -EOPNOTSUPP; + + if (!buf || !count) + return -EINVAL; + + dev_dbg(dev, "Request to transfer %zu bytes\n", count); + + ret = mhi_qaic_ctrl_tx(mqcdev); + if (ret) + return ret; + + if (mutex_lock_interruptible(&mqcdev->write_lock)) + return -EINTR; + + nr_desc = mhi_get_free_desc_count(mhi_dev, DMA_TO_DEVICE); + if (nr_desc * mqcdev->mtu < count) { + ret = -EMSGSIZE; + dev_dbg(dev, "Buffer too big to transfer\n"); + goto unlock_mutex; + } + + while (count != bytes_xfered) { + enum mhi_flags flags; + size_t to_copy; + void *kbuf; + + to_copy = min_t(size_t, count - bytes_xfered, mqcdev->mtu); + kbuf = kmalloc(to_copy, GFP_KERNEL); + if (!kbuf) { + ret = -ENOMEM; + goto unlock_mutex; + } + + ret = copy_from_user(kbuf, buf + bytes_xfered, to_copy); + if (ret) { + kfree(kbuf); + ret = -EFAULT; + goto unlock_mutex; + } + + if (bytes_xfered + to_copy == count) + flags = MHI_EOT; + 
else + flags = MHI_CHAIN; + + ret = mhi_queue_buf(mhi_dev, DMA_TO_DEVICE, kbuf, to_copy, flags); + if (ret) { + kfree(kbuf); + dev_err(dev, "Failed to queue buf of size %zu\n", to_copy); + goto unlock_mutex; + } + + bytes_xfered += to_copy; + } + + mutex_unlock(&mqcdev->write_lock); + dev_dbg(dev, "bytes xferred: %zu\n", bytes_xfered); + + return bytes_xfered; + +unlock_mutex: + mutex_unlock(&mqcdev->write_lock); + return ret; +} + +static int mhi_qaic_ctrl_rx(struct mqc_dev *mqcdev) +{ + int ret; + + ret = wait_event_interruptible(mqcdev->dl_wq, + !mqcdev->enabled || !list_empty(&mqcdev->dl_queue)); + + if (!mqcdev->enabled) + return -ENODEV; + + return ret; +} + +static ssize_t mhi_qaic_ctrl_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) +{ + struct mqc_dev *mqcdev = file->private_data; + struct mqc_buf *ctrlbuf; + size_t to_copy; + int ret; + + if (!mqcdev->mhi_dev->dl_chan) + return -EOPNOTSUPP; + + ret = mhi_qaic_ctrl_rx(mqcdev); + if (ret) + return ret; + + if (mutex_lock_interruptible(&mqcdev->read_lock)) + return -EINTR; + + ctrlbuf = list_first_entry_or_null(&mqcdev->dl_queue, struct mqc_buf, node); + if (!ctrlbuf) { + mutex_unlock(&mqcdev->read_lock); + ret = -ENODEV; + goto error_out; + } + + to_copy = min_t(size_t, count, ctrlbuf->len); + if (copy_to_user(buf, ctrlbuf->data, to_copy)) { + mutex_unlock(&mqcdev->read_lock); + dev_dbg(&mqcdev->mhi_dev->dev, "Failed to copy data to user buffer\n"); + ret = -EFAULT; + goto error_out; + } + + ctrlbuf->len -= to_copy; + ctrlbuf->data += to_copy; + + if (!ctrlbuf->len) { + spin_lock_bh(&mqcdev->dl_queue_lock); + mhi_qaic_ctrl_buf_free(ctrlbuf); + spin_unlock_bh(&mqcdev->dl_queue_lock); + mhi_qaic_ctrl_fill_dl_queue(mqcdev); + dev_dbg(&mqcdev->mhi_dev->dev, "Read buf freed\n"); + } + + mutex_unlock(&mqcdev->read_lock); + return to_copy; + +error_out: + mutex_unlock(&mqcdev->read_lock); + return ret; +} + +static const struct file_operations mhidev_fops = { + .owner = THIS_MODULE, + .open = mhi_qaic_ctrl_open, + .release = mhi_qaic_ctrl_release, + .read = mhi_qaic_ctrl_read, + .write = mhi_qaic_ctrl_write, + .poll = mhi_qaic_ctrl_poll, +}; + +static void mhi_qaic_ctrl_ul_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result) +{ + struct mqc_dev *mqcdev = dev_get_drvdata(&mhi_dev->dev); + + dev_dbg(&mhi_dev->dev, "%s: status: %d xfer_len: %zu\n", __func__, + mhi_result->transaction_status, mhi_result->bytes_xferd); + + kfree(mhi_result->buf_addr); + + if (!mhi_result->transaction_status) + wake_up_interruptible(&mqcdev->ul_wq); +} + +static void mhi_qaic_ctrl_dl_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result) +{ + struct mqc_dev *mqcdev = dev_get_drvdata(&mhi_dev->dev); + struct mqc_buf *ctrlbuf; + + dev_dbg(&mhi_dev->dev, "%s: status: %d receive_len: %zu\n", __func__, + mhi_result->transaction_status, mhi_result->bytes_xferd); + + if (mhi_result->transaction_status && + mhi_result->transaction_status != -EOVERFLOW) { + kfree(mhi_result->buf_addr); + return; + } + + ctrlbuf = mhi_result->buf_addr + mqcdev->mtu; + ctrlbuf->data = mhi_result->buf_addr; + ctrlbuf->len = mhi_result->bytes_xferd; + spin_lock_bh(&mqcdev->dl_queue_lock); + list_add_tail(&ctrlbuf->node, &mqcdev->dl_queue); + spin_unlock_bh(&mqcdev->dl_queue_lock); + + wake_up_interruptible(&mqcdev->dl_wq); +} + +static int mhi_qaic_ctrl_probe(struct mhi_device *mhi_dev, const struct mhi_device_id *id) +{ + struct mqc_dev *mqcdev; + struct device *dev; + int ret; + + mqcdev = kzalloc(sizeof(*mqcdev), GFP_KERNEL); + if (!mqcdev) 
+ return -ENOMEM; + + kref_init(&mqcdev->ref_count); + mutex_init(&mqcdev->lock); + mqcdev->mhi_dev = mhi_dev; + + ret = xa_alloc(&mqc_xa, &mqcdev->minor, mqcdev, XA_LIMIT(0, MHI_QAIC_CTRL_MAX_MINORS), + GFP_KERNEL); + if (ret) { + kfree(mqcdev); + return ret; + } + + init_waitqueue_head(&mqcdev->ul_wq); + init_waitqueue_head(&mqcdev->dl_wq); + mutex_init(&mqcdev->read_lock); + mutex_init(&mqcdev->write_lock); + spin_lock_init(&mqcdev->dl_queue_lock); + INIT_LIST_HEAD(&mqcdev->dl_queue); + mqcdev->mtu = min_t(size_t, id->driver_data, MHI_MAX_MTU); + mqcdev->enabled = true; + mqcdev->open_count = 0; + dev_set_drvdata(&mhi_dev->dev, mqcdev); + + dev = device_create(mqc_dev_class, &mhi_dev->dev, MKDEV(mqc_dev_major, mqcdev->minor), + mqcdev, "%s", dev_name(&mhi_dev->dev)); + if (IS_ERR(dev)) { + xa_erase(&mqc_xa, mqcdev->minor); + dev_set_drvdata(&mhi_dev->dev, NULL); + kfree(mqcdev); + return PTR_ERR(dev); + } + + return 0; +} + +static void mhi_qaic_ctrl_remove(struct mhi_device *mhi_dev) +{ + struct mqc_dev *mqcdev = dev_get_drvdata(&mhi_dev->dev); + + device_destroy(mqc_dev_class, MKDEV(mqc_dev_major, mqcdev->minor)); + + mutex_lock(&mqcdev->lock); + mqcdev->enabled = false; + if (mqcdev->open_count) + __mhi_qaic_ctrl_release(mqcdev); + mutex_unlock(&mqcdev->lock); + + xa_erase(&mqc_xa, mqcdev->minor); + kref_put(&mqcdev->ref_count, mqc_dev_release); +} + +/* .driver_data stores max mtu */ +static const struct mhi_device_id mhi_qaic_ctrl_match_table[] = { + { .chan = "QAIC_SAHARA", .driver_data = SZ_32K}, + {}, +}; +MODULE_DEVICE_TABLE(mhi, mhi_qaic_ctrl_match_table); + +static struct mhi_driver mhi_qaic_ctrl_driver = { + .id_table = mhi_qaic_ctrl_match_table, + .remove = mhi_qaic_ctrl_remove, + .probe = mhi_qaic_ctrl_probe, + .ul_xfer_cb = mhi_qaic_ctrl_ul_xfer_cb, + .dl_xfer_cb = mhi_qaic_ctrl_dl_xfer_cb, + .driver = { + .name = MHI_QAIC_CTRL_DRIVER_NAME, + }, +}; + +int mhi_qaic_ctrl_init(void) +{ + int ret; + + ret = register_chrdev(0, MHI_QAIC_CTRL_DRIVER_NAME, &mhidev_fops); + if (ret < 0) + return ret; + + mqc_dev_major = ret; + mqc_dev_class = class_create(THIS_MODULE, MHI_QAIC_CTRL_DRIVER_NAME); + if (IS_ERR(mqc_dev_class)) { + ret = PTR_ERR(mqc_dev_class); + goto unregister_chrdev; + } + + ret = mhi_driver_register(&mhi_qaic_ctrl_driver); + if (ret) + goto destroy_class; + + return 0; + +destroy_class: + class_destroy(mqc_dev_class); +unregister_chrdev: + unregister_chrdev(mqc_dev_major, MHI_QAIC_CTRL_DRIVER_NAME); + return ret; +} + +void mhi_qaic_ctrl_deinit(void) +{ + mhi_driver_unregister(&mhi_qaic_ctrl_driver); + class_destroy(mqc_dev_class); + unregister_chrdev(mqc_dev_major, MHI_QAIC_CTRL_DRIVER_NAME); + xa_destroy(&mqc_xa); +} diff --git a/drivers/accel/qaic/mhi_qaic_ctrl.h b/drivers/accel/qaic/mhi_qaic_ctrl.h new file mode 100644 index 000000000000..930b3ace1a59 --- /dev/null +++ b/drivers/accel/qaic/mhi_qaic_ctrl.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef __MHI_QAIC_CTRL_H__ +#define __MHI_QAIC_CTRL_H__ + +int mhi_qaic_ctrl_init(void); +void mhi_qaic_ctrl_deinit(void); + +#endif /* __MHI_QAIC_CTRL_H__ */ diff --git a/drivers/accel/qaic/qaic.h b/drivers/accel/qaic/qaic.h new file mode 100644 index 000000000000..f2bd637a0d4e --- /dev/null +++ b/drivers/accel/qaic/qaic.h @@ -0,0 +1,282 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2019-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _QAIC_H_ +#define _QAIC_H_ + +#include <linux/interrupt.h> +#include <linux/kref.h> +#include <linux/mhi.h> +#include <linux/mutex.h> +#include <linux/pci.h> +#include <linux/spinlock.h> +#include <linux/srcu.h> +#include <linux/wait.h> +#include <linux/workqueue.h> +#include <drm/drm_device.h> +#include <drm/drm_gem.h> + +#define QAIC_DBC_BASE SZ_128K +#define QAIC_DBC_SIZE SZ_4K + +#define QAIC_NO_PARTITION -1 + +#define QAIC_DBC_OFF(i) ((i) * QAIC_DBC_SIZE + QAIC_DBC_BASE) + +#define to_qaic_bo(obj) container_of(obj, struct qaic_bo, base) + +extern bool datapath_polling; + +struct qaic_user { + /* Uniquely identifies this user for the device */ + int handle; + struct kref ref_count; + /* Char device opened by this user */ + struct qaic_drm_device *qddev; + /* Node in list of users that opened this drm device */ + struct list_head node; + /* SRCU used to synchronize this user during cleanup */ + struct srcu_struct qddev_lock; + atomic_t chunk_id; +}; + +struct dma_bridge_chan { + /* Pointer to device struct maintained by driver */ + struct qaic_device *qdev; + /* ID of this DMA bridge channel (DBC) */ + unsigned int id; + /* Synchronizes access to xfer_list */ + spinlock_t xfer_lock; + /* Base address of request queue */ + void *req_q_base; + /* Base address of response queue */ + void *rsp_q_base; + /* + * Base bus address of request queue. Response queue bus address can be + * calculated by adding request queue size to this variable + */ + dma_addr_t dma_addr; + /* Total size of request and response queue in bytes */ + u32 total_size; + /* Capacity of request/response queue */ + u32 nelem; + /* The user that opened this DBC */ + struct qaic_user *usr; + /* + * Request ID of next memory handle that goes in request queue. One + * memory handle can enqueue more than one request element; all + * requests that belong to the same memory handle have the same request ID + */ + u16 next_req_id; + /* true: DBC is in use; false: DBC not in use */ + bool in_use; + /* + * Base address of device registers. Used to read/write request and + * response queue's head and tail pointer of this DBC. + */ + void __iomem *dbc_base; + /* Head of list where each node is a memory handle queued in request queue */ + struct list_head xfer_list; + /* Synchronizes DBC readers during cleanup */ + struct srcu_struct ch_lock; + /* + * When this DBC is released, any thread waiting on this wait queue is + * woken up + */ + wait_queue_head_t dbc_release; + /* Head of list where each node is a bo associated with this DBC */ + struct list_head bo_lists; + /* The irq line for this DBC. Used for polling */ + unsigned int irq; + /* Polling work item to simulate interrupts */ + struct work_struct poll_work; +}; + +struct qaic_device { + /* Pointer to base PCI device struct of our physical device */ + struct pci_dev *pdev; + /* Req.
ID of request that will be queued next in MHI control device */ + u32 next_seq_num; + /* Base address of bar 0 */ + void __iomem *bar_0; + /* Base address of bar 2 */ + void __iomem *bar_2; + /* Controller structure for MHI devices */ + struct mhi_controller *mhi_cntrl; + /* MHI control channel device */ + struct mhi_device *cntl_ch; + /* List of requests queued in MHI control device */ + struct list_head cntl_xfer_list; + /* Synchronizes MHI control device transactions and its xfer list */ + struct mutex cntl_mutex; + /* Array of DBC struct of this device */ + struct dma_bridge_chan *dbc; + /* Work queue for tasks related to MHI control device */ + struct workqueue_struct *cntl_wq; + /* Synchronizes all the users of device during cleanup */ + struct srcu_struct dev_lock; + /* true: Device under reset; false: Device not under reset */ + bool in_reset; + /* + * true: A tx MHI transaction has failed and an rx buffer is still queued + * in control device. Such a buffer is considered a lost rx buffer + * false: No rx buffer is lost in control device + */ + bool cntl_lost_buf; + /* Maximum number of DBCs supported by this device */ + u32 num_dbc; + /* Reference to the drm_device for this device when it is created */ + struct qaic_drm_device *qddev; + /* Generate the CRC of a control message */ + u32 (*gen_crc)(void *msg); + /* Validate the CRC of a control message */ + bool (*valid_crc)(void *msg); +}; + +struct qaic_drm_device { + /* Pointer to the root device struct driven by this driver */ + struct qaic_device *qdev; + /* + * The physical device can be partitioned into a number of logical + * devices, and each logical device is given a partition id. This member + * stores that id. QAIC_NO_PARTITION is a sentinel used to mark that this + * drm device is the actual physical device + */ + s32 partition_id; + /* Pointer to the drm device struct of this drm device */ + struct drm_device *ddev; + /* Head in list of users who have opened this drm device */ + struct list_head users; + /* Synchronizes access to users list */ + struct mutex users_mutex; +}; + +struct qaic_bo { + struct drm_gem_object base; + /* Scatter/gather table for allocated/imported BO */ + struct sg_table *sgt; + /* BO size requested by user. GEM object might be bigger in size. */ + u64 size; + /* Head in list of slices of this BO */ + struct list_head slices; + /* Total nents, for all slices of this BO */ + int total_slice_nents; + /* + * Direction of transfer. It can assume only two values, DMA_TO_DEVICE and + * DMA_FROM_DEVICE. + */ + int dir; + /* Pointer to the DBC that operates on this BO */ + struct dma_bridge_chan *dbc; + /* Number of slices that belong to this buffer */ + u32 nr_slice; + /* Number of slices that have been transferred by the DMA engine */ + u32 nr_slice_xfer_done; + /* true = BO is queued for execution, false = BO is not queued */ + bool queued; + /* + * If true then user has attached slicing information to this BO by + * calling DRM_IOCTL_QAIC_ATTACH_SLICE_BO ioctl. + */ + bool sliced; + /* Request ID of this BO if it is queued for execution */ + u16 req_id; + /* Handle assigned to this BO */ + u32 handle; + /* Wait on this for completion of DMA transfer of this BO */ + struct completion xfer_done; + /* + * Node in linked list where head is dbc->xfer_list. + * This linked list contains BOs that are queued for DMA transfer. + */ + struct list_head xfer_list; + /* + * Node in linked list where head is dbc->bo_lists. + * This linked list contains BOs that are associated with the DBC it is + * linked to.
+ */ + struct list_head bo_list; + struct { + /* + * Latest timestamp(ns) at which kernel received a request to + * execute this BO + */ + u64 req_received_ts; + /* + * Latest timestamp(ns) at which kernel enqueued requests of + * this BO for execution in DMA queue + */ + u64 req_submit_ts; + /* + * Latest timestamp(ns) at which kernel received a completion + * interrupt for requests of this BO + */ + u64 req_processed_ts; + /* + * Number of elements already enqueued in DMA queue before + * enqueuing requests of this BO + */ + u32 queue_level_before; + } perf_stats; + +}; + +struct bo_slice { + /* Mapped pages */ + struct sg_table *sgt; + /* Number of requests required to queue in DMA queue */ + int nents; + /* See enum dma_data_direction */ + int dir; + /* Actual requests that will be copied in DMA queue */ + struct dbc_req *reqs; + struct kref ref_count; + /* true: No DMA transfer required */ + bool no_xfer; + /* Pointer to the parent BO handle */ + struct qaic_bo *bo; + /* Node in list of slices maintained by parent BO */ + struct list_head slice; + /* Size of this slice in bytes */ + u64 size; + /* Offset of this slice in buffer */ + u64 offset; +}; + +int get_dbc_req_elem_size(void); +int get_dbc_rsp_elem_size(void); +int get_cntl_version(struct qaic_device *qdev, struct qaic_user *usr, u16 *major, u16 *minor); +int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +void qaic_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result); + +void qaic_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result); + +int qaic_control_open(struct qaic_device *qdev); +void qaic_control_close(struct qaic_device *qdev); +void qaic_release_usr(struct qaic_device *qdev, struct qaic_user *usr); + +irqreturn_t dbc_irq_threaded_fn(int irq, void *data); +irqreturn_t dbc_irq_handler(int irq, void *data); +int disable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr); +void enable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr); +void wakeup_dbc(struct qaic_device *qdev, u32 dbc_id); +void release_dbc(struct qaic_device *qdev, u32 dbc_id); + +void wake_all_cntl(struct qaic_device *qdev); +void qaic_dev_reset_clean_local_state(struct qaic_device *qdev, bool exit_reset); + +struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf); + +int qaic_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +int qaic_mmap_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +int qaic_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +int qaic_partial_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +void irq_polling_work(struct work_struct *work); + +#endif /* _QAIC_H_ */ diff --git a/drivers/accel/qaic/qaic_control.c b/drivers/accel/qaic/qaic_control.c new file mode 100644 index 000000000000..9f216eb6f76e --- /dev/null +++ b/drivers/accel/qaic/qaic_control.c @@ -0,0 +1,1526 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
*/ + +#include <asm/byteorder.h> +#include <linux/completion.h> +#include <linux/crc32.h> +#include <linux/delay.h> +#include <linux/dma-mapping.h> +#include <linux/kref.h> +#include <linux/list.h> +#include <linux/mhi.h> +#include <linux/mm.h> +#include <linux/moduleparam.h> +#include <linux/mutex.h> +#include <linux/pci.h> +#include <linux/scatterlist.h> +#include <linux/types.h> +#include <linux/uaccess.h> +#include <linux/workqueue.h> +#include <linux/wait.h> +#include <drm/drm_device.h> +#include <drm/drm_file.h> +#include <uapi/drm/qaic_accel.h> + +#include "qaic.h" + +#define MANAGE_MAGIC_NUMBER ((__force __le32)0x43494151) /* "QAIC" in little endian */ +#define QAIC_DBC_Q_GAP SZ_256 +#define QAIC_DBC_Q_BUF_ALIGN SZ_4K +#define QAIC_MANAGE_EXT_MSG_LENGTH SZ_64K /* Max DMA message length */ +#define QAIC_WRAPPER_MAX_SIZE SZ_4K +#define QAIC_MHI_RETRY_WAIT_MS 100 +#define QAIC_MHI_RETRY_MAX 20 + +static unsigned int control_resp_timeout_s = 60; /* 60 sec default */ +module_param(control_resp_timeout_s, uint, 0600); +MODULE_PARM_DESC(control_resp_timeout_s, "Timeout for NNC responses from QSM"); + +struct manage_msg { + u32 len; + u32 count; + u8 data[]; +}; + +/* + * Wire encoding structures for the manage protocol. + * All fields are little endian on the wire + */ +struct wire_msg_hdr { + __le32 crc32; /* crc of everything following this field in the message */ + __le32 magic_number; + __le32 sequence_number; + __le32 len; /* length of this message */ + __le32 count; /* number of transactions in this message */ + __le32 handle; /* unique id to track the resources consumed */ + __le32 partition_id; /* partition id for the request (signed) */ + __le32 padding; /* must be 0 */ +} __packed; + +struct wire_msg { + struct wire_msg_hdr hdr; + u8 data[]; +} __packed; + +struct wire_trans_hdr { + __le32 type; + __le32 len; +} __packed; + +/* Each message sent from driver to device is organized in a list of wrapper_msg */ +struct wrapper_msg { + struct list_head list; + struct kref ref_count; + u32 len; /* length of data to transfer */ + struct wrapper_list *head; + union { + struct wire_msg msg; + struct wire_trans_hdr trans; + }; +}; + +struct wrapper_list { + struct list_head list; + spinlock_t lock; /* Protects the list state during additions and removals */ +}; + +struct wire_trans_passthrough { + struct wire_trans_hdr hdr; + u8 data[]; +} __packed; + +struct wire_addr_size_pair { + __le64 addr; + __le64 size; +} __packed; + +struct wire_trans_dma_xfer { + struct wire_trans_hdr hdr; + __le32 tag; + __le32 count; + __le32 dma_chunk_id; + __le32 padding; + struct wire_addr_size_pair data[]; +} __packed; + +/* Initiated by device to continue the DMA xfer of a large piece of data */ +struct wire_trans_dma_xfer_cont { + struct wire_trans_hdr hdr; + __le32 dma_chunk_id; + __le32 padding; + __le64 xferred_size; +} __packed; + +struct wire_trans_activate_to_dev { + struct wire_trans_hdr hdr; + __le64 req_q_addr; + __le64 rsp_q_addr; + __le32 req_q_size; + __le32 rsp_q_size; + __le32 buf_len; + __le32 options; /* unused, but BIT(16) has meaning to the device */ +} __packed; + +struct wire_trans_activate_from_dev { + struct wire_trans_hdr hdr; + __le32 status; + __le32 dbc_id; + __le64 options; /* unused */ +} __packed; + +struct wire_trans_deactivate_from_dev { + struct wire_trans_hdr hdr; + __le32 status; + __le32 dbc_id; +} __packed; + +struct wire_trans_terminate_to_dev { + struct wire_trans_hdr hdr; + __le32 handle; + __le32 padding; +} __packed; + +struct wire_trans_terminate_from_dev { +
struct wire_trans_hdr hdr; + __le32 status; + __le32 padding; +} __packed; + +struct wire_trans_status_to_dev { + struct wire_trans_hdr hdr; +} __packed; + +struct wire_trans_status_from_dev { + struct wire_trans_hdr hdr; + __le16 major; + __le16 minor; + __le32 status; + __le64 status_flags; +} __packed; + +struct wire_trans_validate_part_to_dev { + struct wire_trans_hdr hdr; + __le32 part_id; + __le32 padding; +} __packed; + +struct wire_trans_validate_part_from_dev { + struct wire_trans_hdr hdr; + __le32 status; + __le32 padding; +} __packed; + +struct xfer_queue_elem { + /* + * Node in list of ongoing transfer requests on control channel. + * Maintained by root device struct. + */ + struct list_head list; + /* Sequence number of this transfer request */ + u32 seq_num; + /* This is used to wait on until completion of the transfer request */ + struct completion xfer_done; + /* Received data from device */ + void *buf; +}; + +struct dma_xfer { + /* Node in list of DMA transfers which is used for cleanup */ + struct list_head list; + /* SG table of memory used for DMA */ + struct sg_table *sgt; + /* Array of pages used for DMA */ + struct page **page_list; + /* Number of pages used for DMA */ + unsigned long nr_pages; +}; + +struct ioctl_resources { + /* List of all DMA transfers which is used later for cleanup */ + struct list_head dma_xfers; + /* Base address of request queue which belongs to a DBC */ + void *buf; + /* + * Base bus address of request queue which belongs to a DBC. Response + * queue base bus address can be calculated by adding size of request + * queue to base bus address of request queue. + */ + dma_addr_t dma_addr; + /* Total size of request queue and response queue in bytes */ + u32 total_size; + /* Total number of elements that can be queued in each of request and response queue */ + u32 nelem; + /* Base address of response queue which belongs to a DBC */ + void *rsp_q_base; + /* Status of the NNC message received */ + u32 status; + /* DBC id of the DBC received from device */ + u32 dbc_id; + /* + * DMA transfer request messages can be big in size and it may not be + * possible to send them in one shot. In such cases the messages are + * broken into chunks; this field stores ID of such chunks. + */ + u32 dma_chunk_id; + /* Total number of bytes transferred for a DMA xfer request */ + u64 xferred_dma_size; + /* Header of transaction message received from user. Used during DMA xfer request. */ + void *trans_hdr; +}; + +struct resp_work { + struct work_struct work; + struct qaic_device *qdev; + void *buf; +}; + +/* + * Since we're working with little endian messages, it's useful to be able to + * increment without filling a whole line with conversions back and forth just + * to add one (1) to a message count. + */ +static __le32 incr_le32(__le32 val) +{ + return cpu_to_le32(le32_to_cpu(val) + 1); +} + +static u32 gen_crc(void *msg) +{ + struct wrapper_list *wrappers = msg; + struct wrapper_msg *w; + u32 crc = ~0; + + list_for_each_entry(w, &wrappers->list, list) + crc = crc32(crc, &w->msg, w->len); + + return crc ^ ~0; +} + +static u32 gen_crc_stub(void *msg) +{ + return 0; +} + +static bool valid_crc(void *msg) +{ + struct wire_msg_hdr *hdr = msg; + bool ret; + u32 crc; + + /* + * The output of this algorithm is always converted to the native + * endianness.
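+ * The sender computes the CRC over the whole message with the crc32 + * field itself set to zero (gen_crc() runs while the kzalloc'd header + * still has crc32 == 0), so mirror that here: stash the received CRC, + * zero the field, CRC the entire message, and restore the field since + * the same buffer may be validated again later (see decode_status()).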
+ */ + crc = le32_to_cpu(hdr->crc32); + hdr->crc32 = 0; + ret = (crc32(~0, msg, le32_to_cpu(hdr->len)) ^ ~0) == crc; + hdr->crc32 = cpu_to_le32(crc); + return ret; +} + +static bool valid_crc_stub(void *msg) +{ + return true; +} + +static void free_wrapper(struct kref *ref) +{ + struct wrapper_msg *wrapper = container_of(ref, struct wrapper_msg, ref_count); + + list_del(&wrapper->list); + kfree(wrapper); +} + +static void save_dbc_buf(struct qaic_device *qdev, struct ioctl_resources *resources, + struct qaic_user *usr) +{ + u32 dbc_id = resources->dbc_id; + + if (resources->buf) { + wait_event_interruptible(qdev->dbc[dbc_id].dbc_release, !qdev->dbc[dbc_id].in_use); + qdev->dbc[dbc_id].req_q_base = resources->buf; + qdev->dbc[dbc_id].rsp_q_base = resources->rsp_q_base; + qdev->dbc[dbc_id].dma_addr = resources->dma_addr; + qdev->dbc[dbc_id].total_size = resources->total_size; + qdev->dbc[dbc_id].nelem = resources->nelem; + enable_dbc(qdev, dbc_id, usr); + qdev->dbc[dbc_id].in_use = true; + resources->buf = NULL; + } +} + +static void free_dbc_buf(struct qaic_device *qdev, struct ioctl_resources *resources) +{ + if (resources->buf) + dma_free_coherent(&qdev->pdev->dev, resources->total_size, resources->buf, + resources->dma_addr); + resources->buf = NULL; +} + +static void free_dma_xfers(struct qaic_device *qdev, struct ioctl_resources *resources) +{ + struct dma_xfer *xfer; + struct dma_xfer *x; + int i; + + list_for_each_entry_safe(xfer, x, &resources->dma_xfers, list) { + dma_unmap_sgtable(&qdev->pdev->dev, xfer->sgt, DMA_TO_DEVICE, 0); + sg_free_table(xfer->sgt); + kfree(xfer->sgt); + for (i = 0; i < xfer->nr_pages; ++i) + put_page(xfer->page_list[i]); + kfree(xfer->page_list); + list_del(&xfer->list); + kfree(xfer); + } +} + +static struct wrapper_msg *add_wrapper(struct wrapper_list *wrappers, u32 size) +{ + struct wrapper_msg *w = kzalloc(size, GFP_KERNEL); + + if (!w) + return NULL; + list_add_tail(&w->list, &wrappers->list); + kref_init(&w->ref_count); + w->head = wrappers; + return w; +} + +static int encode_passthrough(struct qaic_device *qdev, void *trans, struct wrapper_list *wrappers, + u32 *user_len) +{ + struct qaic_manage_trans_passthrough *in_trans = trans; + struct wire_trans_passthrough *out_trans; + struct wrapper_msg *trans_wrapper; + struct wrapper_msg *wrapper; + struct wire_msg *msg; + u32 msg_hdr_len; + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (in_trans->hdr.len % 8 != 0) + return -EINVAL; + + if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_EXT_MSG_LENGTH) + return -ENOSPC; + + trans_wrapper = add_wrapper(wrappers, + offsetof(struct wrapper_msg, trans) + in_trans->hdr.len); + if (!trans_wrapper) + return -ENOMEM; + trans_wrapper->len = in_trans->hdr.len; + out_trans = (struct wire_trans_passthrough *)&trans_wrapper->trans; + + memcpy(out_trans->data, in_trans->data, in_trans->hdr.len - sizeof(in_trans->hdr)); + msg->hdr.len = cpu_to_le32(msg_hdr_len + in_trans->hdr.len); + msg->hdr.count = incr_le32(msg->hdr.count); + *user_len += in_trans->hdr.len; + out_trans->hdr.type = cpu_to_le32(QAIC_TRANS_PASSTHROUGH_TO_DEV); + out_trans->hdr.len = cpu_to_le32(in_trans->hdr.len); + + return 0; +} + +/* returns error code for failure, 0 if enough pages alloc'd, 1 if dma_cont is needed */ +static int find_and_map_user_pages(struct qaic_device *qdev, + struct qaic_manage_trans_dma_xfer *in_trans, + struct ioctl_resources *resources, struct dma_xfer *xfer) +{ + unsigned long 
need_pages; + struct page **page_list; + unsigned long nr_pages; + struct sg_table *sgt; + u64 xfer_start_addr; + int ret; + int i; + + xfer_start_addr = in_trans->addr + resources->xferred_dma_size; + + need_pages = DIV_ROUND_UP(in_trans->size + offset_in_page(xfer_start_addr) - + resources->xferred_dma_size, PAGE_SIZE); + + nr_pages = need_pages; + + while (1) { + page_list = kmalloc_array(nr_pages, sizeof(*page_list), GFP_KERNEL | __GFP_NOWARN); + if (!page_list) { + nr_pages = nr_pages / 2; + if (!nr_pages) + return -ENOMEM; + } else { + break; + } + } + + ret = get_user_pages_fast(xfer_start_addr, nr_pages, 0, page_list); + if (ret < 0 || ret != nr_pages) { + ret = -EFAULT; + goto free_page_list; + } + + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL); + if (!sgt) { + ret = -ENOMEM; + goto put_pages; + } + + ret = sg_alloc_table_from_pages(sgt, page_list, nr_pages, + offset_in_page(xfer_start_addr), + in_trans->size - resources->xferred_dma_size, GFP_KERNEL); + if (ret) { + ret = -ENOMEM; + goto free_sgt; + } + + ret = dma_map_sgtable(&qdev->pdev->dev, sgt, DMA_TO_DEVICE, 0); + if (ret) + goto free_table; + + xfer->sgt = sgt; + xfer->page_list = page_list; + xfer->nr_pages = nr_pages; + + return need_pages > nr_pages ? 1 : 0; + +free_table: + sg_free_table(sgt); +free_sgt: + kfree(sgt); +put_pages: + for (i = 0; i < nr_pages; ++i) + put_page(page_list[i]); +free_page_list: + kfree(page_list); + return ret; +} + +/* returns error code for failure, 0 if everything was encoded, 1 if dma_cont is needed */ +static int encode_addr_size_pairs(struct dma_xfer *xfer, struct wrapper_list *wrappers, + struct ioctl_resources *resources, u32 msg_hdr_len, u32 *size, + struct wire_trans_dma_xfer **out_trans) +{ + struct wrapper_msg *trans_wrapper; + struct sg_table *sgt = xfer->sgt; + struct wire_addr_size_pair *asp; + struct scatterlist *sg; + struct wrapper_msg *w; + unsigned int dma_len; + u64 dma_chunk_len; + void *boundary; + int nents_dma; + int nents; + int i; + + nents = sgt->nents; + nents_dma = nents; + *size = QAIC_MANAGE_EXT_MSG_LENGTH - msg_hdr_len - sizeof(**out_trans); + for_each_sgtable_sg(sgt, sg, i) { + *size -= sizeof(*asp); + /* Save 1K for possible follow-up transactions. */ + if (*size < SZ_1K) { + nents_dma = i; + break; + } + } + + trans_wrapper = add_wrapper(wrappers, QAIC_WRAPPER_MAX_SIZE); + if (!trans_wrapper) + return -ENOMEM; + *out_trans = (struct wire_trans_dma_xfer *)&trans_wrapper->trans; + + asp = (*out_trans)->data; + boundary = (void *)trans_wrapper + QAIC_WRAPPER_MAX_SIZE; + *size = 0; + + dma_len = 0; + w = trans_wrapper; + dma_chunk_len = 0; + for_each_sg(sgt->sgl, sg, nents_dma, i) { + asp->size = cpu_to_le64(dma_len); + dma_chunk_len += dma_len; + if (dma_len) { + asp++; + if ((void *)asp + sizeof(*asp) > boundary) { + w->len = (void *)asp - (void *)&w->msg; + *size += w->len; + w = add_wrapper(wrappers, QAIC_WRAPPER_MAX_SIZE); + if (!w) + return -ENOMEM; + boundary = (void *)w + QAIC_WRAPPER_MAX_SIZE; + asp = (struct wire_addr_size_pair *)&w->msg; + } + } + asp->addr = cpu_to_le64(sg_dma_address(sg)); + dma_len = sg_dma_len(sg); + } + /* finalize the last segment */ + asp->size = cpu_to_le64(dma_len); + w->len = (void *)asp + sizeof(*asp) - (void *)&w->msg; + *size += w->len; + dma_chunk_len += dma_len; + resources->xferred_dma_size += dma_chunk_len; + + return nents_dma < nents ? 
1 : 0; +} + +static void cleanup_xfer(struct qaic_device *qdev, struct dma_xfer *xfer) +{ + int i; + + dma_unmap_sgtable(&qdev->pdev->dev, xfer->sgt, DMA_TO_DEVICE, 0); + sg_free_table(xfer->sgt); + kfree(xfer->sgt); + for (i = 0; i < xfer->nr_pages; ++i) + put_page(xfer->page_list[i]); + kfree(xfer->page_list); +} + +static int encode_dma(struct qaic_device *qdev, void *trans, struct wrapper_list *wrappers, + u32 *user_len, struct ioctl_resources *resources, struct qaic_user *usr) +{ + struct qaic_manage_trans_dma_xfer *in_trans = trans; + struct wire_trans_dma_xfer *out_trans; + struct wrapper_msg *wrapper; + struct dma_xfer *xfer; + struct wire_msg *msg; + bool need_cont_dma; + u32 msg_hdr_len; + u32 size; + int ret; + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (msg_hdr_len > (UINT_MAX - QAIC_MANAGE_EXT_MSG_LENGTH)) + return -EINVAL; + + /* There should be enough space to hold at least one ASP entry. */ + if (msg_hdr_len + sizeof(*out_trans) + sizeof(struct wire_addr_size_pair) > + QAIC_MANAGE_EXT_MSG_LENGTH) + return -ENOMEM; + + if (in_trans->addr + in_trans->size < in_trans->addr || !in_trans->size) + return -EINVAL; + + xfer = kmalloc(sizeof(*xfer), GFP_KERNEL); + if (!xfer) + return -ENOMEM; + + ret = find_and_map_user_pages(qdev, in_trans, resources, xfer); + if (ret < 0) + goto free_xfer; + + need_cont_dma = (bool)ret; + + ret = encode_addr_size_pairs(xfer, wrappers, resources, msg_hdr_len, &size, &out_trans); + if (ret < 0) + goto cleanup_xfer; + + need_cont_dma = need_cont_dma || (bool)ret; + + msg->hdr.len = cpu_to_le32(msg_hdr_len + size); + msg->hdr.count = incr_le32(msg->hdr.count); + + out_trans->hdr.type = cpu_to_le32(QAIC_TRANS_DMA_XFER_TO_DEV); + out_trans->hdr.len = cpu_to_le32(size); + out_trans->tag = cpu_to_le32(in_trans->tag); + out_trans->count = cpu_to_le32((size - sizeof(*out_trans)) / + sizeof(struct wire_addr_size_pair)); + + *user_len += in_trans->hdr.len; + + if (resources->dma_chunk_id) { + out_trans->dma_chunk_id = cpu_to_le32(resources->dma_chunk_id); + } else if (need_cont_dma) { + while (resources->dma_chunk_id == 0) + resources->dma_chunk_id = atomic_inc_return(&usr->chunk_id); + + out_trans->dma_chunk_id = cpu_to_le32(resources->dma_chunk_id); + } + resources->trans_hdr = trans; + + list_add(&xfer->list, &resources->dma_xfers); + return 0; + +cleanup_xfer: + cleanup_xfer(qdev, xfer); +free_xfer: + kfree(xfer); + return ret; +} + +static int encode_activate(struct qaic_device *qdev, void *trans, struct wrapper_list *wrappers, + u32 *user_len, struct ioctl_resources *resources) +{ + struct qaic_manage_trans_activate_to_dev *in_trans = trans; + struct wire_trans_activate_to_dev *out_trans; + struct wrapper_msg *trans_wrapper; + struct wrapper_msg *wrapper; + struct wire_msg *msg; + dma_addr_t dma_addr; + u32 msg_hdr_len; + void *buf; + u32 nelem; + u32 size; + int ret; + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (msg_hdr_len + sizeof(*out_trans) > QAIC_MANAGE_MAX_MSG_LENGTH) + return -ENOSPC; + + if (!in_trans->queue_size) + return -EINVAL; + + if (in_trans->pad) + return -EINVAL; + + nelem = in_trans->queue_size; + size = (get_dbc_req_elem_size() + get_dbc_rsp_elem_size()) * nelem; + if (size / nelem != get_dbc_req_elem_size() + get_dbc_rsp_elem_size()) + return -EINVAL; + + if (size + QAIC_DBC_Q_GAP + QAIC_DBC_Q_BUF_ALIGN < size) + return -EINVAL; + 
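+ /* + * Illustrative sizing walk-through (hypothetical queue_size of 10): + * struct dbc_req is 64 bytes and struct dbc_rsp is 4 bytes, so + * size = (64 + 4) * 10 = 680 bytes; adding the 256 byte gap and + * rounding up to the 4K alignment below yields one 4096 byte buffer, + * with the request queue at offset 0 and the response queue occupying + * the last nelem * 4 = 40 bytes. + */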
+ size = ALIGN((size + QAIC_DBC_Q_GAP), QAIC_DBC_Q_BUF_ALIGN); + + buf = dma_alloc_coherent(&qdev->pdev->dev, size, &dma_addr, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + trans_wrapper = add_wrapper(wrappers, + offsetof(struct wrapper_msg, trans) + sizeof(*out_trans)); + if (!trans_wrapper) { + ret = -ENOMEM; + goto free_dma; + } + trans_wrapper->len = sizeof(*out_trans); + out_trans = (struct wire_trans_activate_to_dev *)&trans_wrapper->trans; + + out_trans->hdr.type = cpu_to_le32(QAIC_TRANS_ACTIVATE_TO_DEV); + out_trans->hdr.len = cpu_to_le32(sizeof(*out_trans)); + out_trans->buf_len = cpu_to_le32(size); + out_trans->req_q_addr = cpu_to_le64(dma_addr); + out_trans->req_q_size = cpu_to_le32(nelem); + out_trans->rsp_q_addr = cpu_to_le64(dma_addr + size - nelem * get_dbc_rsp_elem_size()); + out_trans->rsp_q_size = cpu_to_le32(nelem); + out_trans->options = cpu_to_le32(in_trans->options); + + *user_len += in_trans->hdr.len; + msg->hdr.len = cpu_to_le32(msg_hdr_len + sizeof(*out_trans)); + msg->hdr.count = incr_le32(msg->hdr.count); + + resources->buf = buf; + resources->dma_addr = dma_addr; + resources->total_size = size; + resources->nelem = nelem; + resources->rsp_q_base = buf + size - nelem * get_dbc_rsp_elem_size(); + return 0; + +free_dma: + dma_free_coherent(&qdev->pdev->dev, size, buf, dma_addr); + return ret; +} + +static int encode_deactivate(struct qaic_device *qdev, void *trans, + u32 *user_len, struct qaic_user *usr) +{ + struct qaic_manage_trans_deactivate *in_trans = trans; + + if (in_trans->dbc_id >= qdev->num_dbc || in_trans->pad) + return -EINVAL; + + *user_len += in_trans->hdr.len; + + return disable_dbc(qdev, in_trans->dbc_id, usr); +} + +static int encode_status(struct qaic_device *qdev, void *trans, struct wrapper_list *wrappers, + u32 *user_len) +{ + struct qaic_manage_trans_status_to_dev *in_trans = trans; + struct wire_trans_status_to_dev *out_trans; + struct wrapper_msg *trans_wrapper; + struct wrapper_msg *wrapper; + struct wire_msg *msg; + u32 msg_hdr_len; + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_MAX_MSG_LENGTH) + return -ENOSPC; + + trans_wrapper = add_wrapper(wrappers, sizeof(*trans_wrapper)); + if (!trans_wrapper) + return -ENOMEM; + + trans_wrapper->len = sizeof(*out_trans); + out_trans = (struct wire_trans_status_to_dev *)&trans_wrapper->trans; + + out_trans->hdr.type = cpu_to_le32(QAIC_TRANS_STATUS_TO_DEV); + out_trans->hdr.len = cpu_to_le32(in_trans->hdr.len); + msg->hdr.len = cpu_to_le32(msg_hdr_len + in_trans->hdr.len); + msg->hdr.count = incr_le32(msg->hdr.count); + *user_len += in_trans->hdr.len; + + return 0; +} + +static int encode_message(struct qaic_device *qdev, struct manage_msg *user_msg, + struct wrapper_list *wrappers, struct ioctl_resources *resources, + struct qaic_user *usr) +{ + struct qaic_manage_trans_hdr *trans_hdr; + struct wrapper_msg *wrapper; + struct wire_msg *msg; + u32 user_len = 0; + int ret; + int i; + + if (!user_msg->count) { + ret = -EINVAL; + goto out; + } + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + + msg->hdr.len = cpu_to_le32(sizeof(msg->hdr)); + + if (resources->dma_chunk_id) { + ret = encode_dma(qdev, resources->trans_hdr, wrappers, &user_len, resources, usr); + msg->hdr.count = cpu_to_le32(1); + goto out; + } + + for (i = 0; i < user_msg->count; ++i) { + if (user_len >= user_msg->len) { + ret = -EINVAL; + break; 
+ } + trans_hdr = (struct qaic_manage_trans_hdr *)(user_msg->data + user_len); + if (user_len + trans_hdr->len > user_msg->len) { + ret = -EINVAL; + break; + } + + switch (trans_hdr->type) { + case QAIC_TRANS_PASSTHROUGH_FROM_USR: + ret = encode_passthrough(qdev, trans_hdr, wrappers, &user_len); + break; + case QAIC_TRANS_DMA_XFER_FROM_USR: + ret = encode_dma(qdev, trans_hdr, wrappers, &user_len, resources, usr); + break; + case QAIC_TRANS_ACTIVATE_FROM_USR: + ret = encode_activate(qdev, trans_hdr, wrappers, &user_len, resources); + break; + case QAIC_TRANS_DEACTIVATE_FROM_USR: + ret = encode_deactivate(qdev, trans_hdr, &user_len, usr); + break; + case QAIC_TRANS_STATUS_FROM_USR: + ret = encode_status(qdev, trans_hdr, wrappers, &user_len); + break; + default: + ret = -EINVAL; + break; + } + + if (ret) + break; + } + + if (user_len != user_msg->len) + ret = -EINVAL; +out: + if (ret) { + free_dma_xfers(qdev, resources); + free_dbc_buf(qdev, resources); + return ret; + } + + return 0; +} + +static int decode_passthrough(struct qaic_device *qdev, void *trans, struct manage_msg *user_msg, + u32 *msg_len) +{ + struct qaic_manage_trans_passthrough *out_trans; + struct wire_trans_passthrough *in_trans = trans; + u32 len; + + out_trans = (void *)user_msg->data + user_msg->len; + + len = le32_to_cpu(in_trans->hdr.len); + if (len % 8 != 0) + return -EINVAL; + + if (user_msg->len + len > QAIC_MANAGE_MAX_MSG_LENGTH) + return -ENOSPC; + + memcpy(out_trans->data, in_trans->data, len - sizeof(in_trans->hdr)); + user_msg->len += len; + *msg_len += len; + out_trans->hdr.type = le32_to_cpu(in_trans->hdr.type); + out_trans->hdr.len = len; + + return 0; +} + +static int decode_activate(struct qaic_device *qdev, void *trans, struct manage_msg *user_msg, + u32 *msg_len, struct ioctl_resources *resources, struct qaic_user *usr) +{ + struct qaic_manage_trans_activate_from_dev *out_trans; + struct wire_trans_activate_from_dev *in_trans = trans; + u32 len; + + out_trans = (void *)user_msg->data + user_msg->len; + + len = le32_to_cpu(in_trans->hdr.len); + if (user_msg->len + len > QAIC_MANAGE_MAX_MSG_LENGTH) + return -ENOSPC; + + user_msg->len += len; + *msg_len += len; + out_trans->hdr.type = le32_to_cpu(in_trans->hdr.type); + out_trans->hdr.len = len; + out_trans->status = le32_to_cpu(in_trans->status); + out_trans->dbc_id = le32_to_cpu(in_trans->dbc_id); + out_trans->options = le64_to_cpu(in_trans->options); + + if (!resources->buf) + /* how did we get an activate response without a request? */ + return -EINVAL; + + if (out_trans->dbc_id >= qdev->num_dbc) + /* + * The device assigned an invalid resource, which should never + * happen. Return an error so the user can try to recover. + */ + return -ENODEV; + + if (out_trans->status) + /* + * Allocating resources failed on device side. This is not an + * expected behaviour, user is expected to handle this situation. + */ + return -ECANCELED; + + resources->status = out_trans->status; + resources->dbc_id = out_trans->dbc_id; + save_dbc_buf(qdev, resources, usr); + + return 0; +} + +static int decode_deactivate(struct qaic_device *qdev, void *trans, u32 *msg_len, + struct qaic_user *usr) +{ + struct wire_trans_deactivate_from_dev *in_trans = trans; + u32 dbc_id = le32_to_cpu(in_trans->dbc_id); + u32 status = le32_to_cpu(in_trans->status); + + if (dbc_id >= qdev->num_dbc) + /* + * The device assigned an invalid resource, which should never + * happen. Inject an error so the user can try to recover. 
+ */ + return -ENODEV; + + if (status) { + /* + * Releasing resources failed on the device side, which puts + * us in a bind since they may still be in use, so enable the + * dbc. User is expected to retry deactivation. + */ + enable_dbc(qdev, dbc_id, usr); + return -ECANCELED; + } + + release_dbc(qdev, dbc_id); + *msg_len += sizeof(*in_trans); + + return 0; +} + +static int decode_status(struct qaic_device *qdev, void *trans, struct manage_msg *user_msg, + u32 *user_len, struct wire_msg *msg) +{ + struct qaic_manage_trans_status_from_dev *out_trans; + struct wire_trans_status_from_dev *in_trans = trans; + u32 len; + + out_trans = (void *)user_msg->data + user_msg->len; + + len = le32_to_cpu(in_trans->hdr.len); + if (user_msg->len + len > QAIC_MANAGE_MAX_MSG_LENGTH) + return -ENOSPC; + + out_trans->hdr.type = QAIC_TRANS_STATUS_FROM_DEV; + out_trans->hdr.len = len; + out_trans->major = le16_to_cpu(in_trans->major); + out_trans->minor = le16_to_cpu(in_trans->minor); + out_trans->status_flags = le64_to_cpu(in_trans->status_flags); + out_trans->status = le32_to_cpu(in_trans->status); + *user_len += le32_to_cpu(in_trans->hdr.len); + user_msg->len += len; + + if (out_trans->status) + return -ECANCELED; + if (out_trans->status_flags & BIT(0) && !valid_crc(msg)) + return -EPIPE; + + return 0; +} + +static int decode_message(struct qaic_device *qdev, struct manage_msg *user_msg, + struct wire_msg *msg, struct ioctl_resources *resources, + struct qaic_user *usr) +{ + u32 msg_hdr_len = le32_to_cpu(msg->hdr.len); + struct wire_trans_hdr *trans_hdr; + u32 msg_len = 0; + int ret; + int i; + + if (msg_hdr_len > QAIC_MANAGE_MAX_MSG_LENGTH) + return -EINVAL; + + user_msg->len = 0; + user_msg->count = le32_to_cpu(msg->hdr.count); + + for (i = 0; i < user_msg->count; ++i) { + trans_hdr = (struct wire_trans_hdr *)(msg->data + msg_len); + if (msg_len + le32_to_cpu(trans_hdr->len) > msg_hdr_len) + return -EINVAL; + + switch (le32_to_cpu(trans_hdr->type)) { + case QAIC_TRANS_PASSTHROUGH_FROM_DEV: + ret = decode_passthrough(qdev, trans_hdr, user_msg, &msg_len); + break; + case QAIC_TRANS_ACTIVATE_FROM_DEV: + ret = decode_activate(qdev, trans_hdr, user_msg, &msg_len, resources, usr); + break; + case QAIC_TRANS_DEACTIVATE_FROM_DEV: + ret = decode_deactivate(qdev, trans_hdr, &msg_len, usr); + break; + case QAIC_TRANS_STATUS_FROM_DEV: + ret = decode_status(qdev, trans_hdr, user_msg, &msg_len, msg); + break; + default: + return -EINVAL; + } + + if (ret) + return ret; + } + + if (msg_len != (msg_hdr_len - sizeof(msg->hdr))) + return -EINVAL; + + return 0; +} + +static void *msg_xfer(struct qaic_device *qdev, struct wrapper_list *wrappers, u32 seq_num, + bool ignore_signal) +{ + struct xfer_queue_elem elem; + struct wire_msg *out_buf; + struct wrapper_msg *w; + int retry_count; + long ret; + + if (qdev->in_reset) { + mutex_unlock(&qdev->cntl_mutex); + return ERR_PTR(-ENODEV); + } + + elem.seq_num = seq_num; + elem.buf = NULL; + init_completion(&elem.xfer_done); + if (likely(!qdev->cntl_lost_buf)) { + /* + * The max size of request to device is QAIC_MANAGE_EXT_MSG_LENGTH. + * The max size of response from device is QAIC_MANAGE_MAX_MSG_LENGTH. 
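+ * Since a response can never exceed QAIC_MANAGE_MAX_MSG_LENGTH, a + * receive buffer of that size is always sufficient, even when the + * request was an extended (chunked DMA) message.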
+ */ + out_buf = kmalloc(QAIC_MANAGE_MAX_MSG_LENGTH, GFP_KERNEL); + if (!out_buf) { + mutex_unlock(&qdev->cntl_mutex); + return ERR_PTR(-ENOMEM); + } + + ret = mhi_queue_buf(qdev->cntl_ch, DMA_FROM_DEVICE, out_buf, + QAIC_MANAGE_MAX_MSG_LENGTH, MHI_EOT); + if (ret) { + mutex_unlock(&qdev->cntl_mutex); + return ERR_PTR(ret); + } + } else { + /* + * We lost a buffer because we queued a recv buf, but then + * queuing the corresponding tx buf failed. To try to avoid + * a memory leak, let's reclaim it and use it for this + * transaction. + */ + qdev->cntl_lost_buf = false; + } + + list_for_each_entry(w, &wrappers->list, list) { + kref_get(&w->ref_count); + retry_count = 0; +retry: + ret = mhi_queue_buf(qdev->cntl_ch, DMA_TO_DEVICE, &w->msg, w->len, + list_is_last(&w->list, &wrappers->list) ? MHI_EOT : MHI_CHAIN); + if (ret) { + if (ret == -EAGAIN && retry_count++ < QAIC_MHI_RETRY_MAX) { + msleep_interruptible(QAIC_MHI_RETRY_WAIT_MS); + if (!signal_pending(current)) + goto retry; + } + + qdev->cntl_lost_buf = true; + kref_put(&w->ref_count, free_wrapper); + mutex_unlock(&qdev->cntl_mutex); + return ERR_PTR(ret); + } + } + + list_add_tail(&elem.list, &qdev->cntl_xfer_list); + mutex_unlock(&qdev->cntl_mutex); + + if (ignore_signal) + ret = wait_for_completion_timeout(&elem.xfer_done, control_resp_timeout_s * HZ); + else + ret = wait_for_completion_interruptible_timeout(&elem.xfer_done, + control_resp_timeout_s * HZ); + /* + * not using _interruptible because we have to clean up or we'll + * likely cause memory corruption + */ + mutex_lock(&qdev->cntl_mutex); + if (!list_empty(&elem.list)) + list_del(&elem.list); + if (!ret && !elem.buf) + ret = -ETIMEDOUT; + else if (ret > 0 && !elem.buf) + ret = -EIO; + mutex_unlock(&qdev->cntl_mutex); + + if (ret < 0) { + kfree(elem.buf); + return ERR_PTR(ret); + } else if (!qdev->valid_crc(elem.buf)) { + kfree(elem.buf); + return ERR_PTR(-EPIPE); + } + + return elem.buf; +} + +/* Add a transaction to abort the outstanding DMA continuation */ +static int abort_dma_cont(struct qaic_device *qdev, struct wrapper_list *wrappers, u32 dma_chunk_id) +{ + struct wire_trans_dma_xfer *out_trans; + u32 size = sizeof(*out_trans); + struct wrapper_msg *wrapper; + struct wrapper_msg *w; + struct wire_msg *msg; + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + + /* Remove all but the first wrapper which has the msg header */ + list_for_each_entry_safe(wrapper, w, &wrappers->list, list) + if (!list_is_first(&wrapper->list, &wrappers->list)) + kref_put(&wrapper->ref_count, free_wrapper); + + wrapper = add_wrapper(wrappers, offsetof(struct wrapper_msg, trans) + sizeof(*out_trans)); + + if (!wrapper) + return -ENOMEM; + + out_trans = (struct wire_trans_dma_xfer *)&wrapper->trans; + out_trans->hdr.type = cpu_to_le32(QAIC_TRANS_DMA_XFER_TO_DEV); + out_trans->hdr.len = cpu_to_le32(size); + out_trans->tag = cpu_to_le32(0); + out_trans->count = cpu_to_le32(0); + out_trans->dma_chunk_id = cpu_to_le32(dma_chunk_id); + + msg->hdr.len = cpu_to_le32(size + sizeof(*msg)); + msg->hdr.count = cpu_to_le32(1); + wrapper->len = size; + + return 0; +} + +static struct wrapper_list *alloc_wrapper_list(void) +{ + struct wrapper_list *wrappers; + + wrappers = kmalloc(sizeof(*wrappers), GFP_KERNEL); + if (!wrappers) + return NULL; + INIT_LIST_HEAD(&wrappers->list); + spin_lock_init(&wrappers->lock); + + return wrappers; +} + +static int qaic_manage_msg_xfer(struct qaic_device *qdev, struct qaic_user *usr, + struct manage_msg *user_msg, struct
ioctl_resources *resources, + struct wire_msg **rsp) +{ + struct wrapper_list *wrappers; + struct wrapper_msg *wrapper; + struct wrapper_msg *w; + bool all_done = false; + struct wire_msg *msg; + int ret; + + wrappers = alloc_wrapper_list(); + if (!wrappers) + return -ENOMEM; + + wrapper = add_wrapper(wrappers, sizeof(*wrapper)); + if (!wrapper) { + kfree(wrappers); + return -ENOMEM; + } + + msg = &wrapper->msg; + wrapper->len = sizeof(*msg); + + ret = encode_message(qdev, user_msg, wrappers, resources, usr); + if (ret && resources->dma_chunk_id) + ret = abort_dma_cont(qdev, wrappers, resources->dma_chunk_id); + if (ret) + goto encode_failed; + + ret = mutex_lock_interruptible(&qdev->cntl_mutex); + if (ret) + goto lock_failed; + + msg->hdr.magic_number = MANAGE_MAGIC_NUMBER; + msg->hdr.sequence_number = cpu_to_le32(qdev->next_seq_num++); + + if (usr) { + msg->hdr.handle = cpu_to_le32(usr->handle); + msg->hdr.partition_id = cpu_to_le32(usr->qddev->partition_id); + } else { + msg->hdr.handle = 0; + msg->hdr.partition_id = cpu_to_le32(QAIC_NO_PARTITION); + } + + msg->hdr.padding = cpu_to_le32(0); + msg->hdr.crc32 = cpu_to_le32(qdev->gen_crc(wrappers)); + + /* msg_xfer releases the mutex */ + *rsp = msg_xfer(qdev, wrappers, qdev->next_seq_num - 1, false); + if (IS_ERR(*rsp)) + ret = PTR_ERR(*rsp); + +lock_failed: + free_dma_xfers(qdev, resources); +encode_failed: + spin_lock(&wrappers->lock); + list_for_each_entry_safe(wrapper, w, &wrappers->list, list) + kref_put(&wrapper->ref_count, free_wrapper); + all_done = list_empty(&wrappers->list); + spin_unlock(&wrappers->lock); + if (all_done) + kfree(wrappers); + + return ret; +} + +static int qaic_manage(struct qaic_device *qdev, struct qaic_user *usr, struct manage_msg *user_msg) +{ + struct wire_trans_dma_xfer_cont *dma_cont = NULL; + struct ioctl_resources resources; + struct wire_msg *rsp = NULL; + int ret; + + memset(&resources, 0, sizeof(struct ioctl_resources)); + + INIT_LIST_HEAD(&resources.dma_xfers); + + if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH || + user_msg->count > QAIC_MANAGE_MAX_MSG_LENGTH / sizeof(struct qaic_manage_trans_hdr)) + return -EINVAL; + +dma_xfer_continue: + ret = qaic_manage_msg_xfer(qdev, usr, user_msg, &resources, &rsp); + if (ret) + return ret; + /* dma_cont should be the only transaction if present */ + if (le32_to_cpu(rsp->hdr.count) == 1) { + dma_cont = (struct wire_trans_dma_xfer_cont *)rsp->data; + if (le32_to_cpu(dma_cont->hdr.type) != QAIC_TRANS_DMA_XFER_CONT) + dma_cont = NULL; + } + if (dma_cont) { + if (le32_to_cpu(dma_cont->dma_chunk_id) == resources.dma_chunk_id && + le64_to_cpu(dma_cont->xferred_size) == resources.xferred_dma_size) { + kfree(rsp); + goto dma_xfer_continue; + } + + ret = -EINVAL; + goto dma_cont_failed; + } + + ret = decode_message(qdev, user_msg, rsp, &resources, usr); + +dma_cont_failed: + free_dbc_buf(qdev, &resources); + kfree(rsp); + return ret; +} + +int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct qaic_manage_msg *user_msg; + struct qaic_device *qdev; + struct manage_msg *msg; + struct qaic_user *usr; + u8 __user *user_data; + int qdev_rcu_id; + int usr_rcu_id; + int ret; + + usr = file_priv->driver_priv; + + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return -ENODEV; + } + + qdev = usr->qddev->qdev; + + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + 
srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return -ENODEV; + } + + user_msg = data; + + if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH) { + ret = -EINVAL; + goto out; + } + + msg = kzalloc(QAIC_MANAGE_MAX_MSG_LENGTH + sizeof(*msg), GFP_KERNEL); + if (!msg) { + ret = -ENOMEM; + goto out; + } + + msg->len = user_msg->len; + msg->count = user_msg->count; + + user_data = u64_to_user_ptr(user_msg->data); + + if (copy_from_user(msg->data, user_data, user_msg->len)) { + ret = -EFAULT; + goto free_msg; + } + + ret = qaic_manage(qdev, usr, msg); + + /* + * If qaic_manage() is successful then we copy the message back to + * userspace memory, with one exception: -ECANCELED. -ECANCELED means + * that the device has NACKed the message with a status error code, + * which userspace would like to know. + */ + if (ret == -ECANCELED || !ret) { + if (copy_to_user(user_data, msg->data, msg->len)) { + ret = -EFAULT; + } else { + user_msg->len = msg->len; + user_msg->count = msg->count; + } + } + +free_msg: + kfree(msg); +out: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +int get_cntl_version(struct qaic_device *qdev, struct qaic_user *usr, u16 *major, u16 *minor) +{ + struct qaic_manage_trans_status_from_dev *status_result; + struct qaic_manage_trans_status_to_dev *status_query; + struct manage_msg *user_msg; + int ret; + + user_msg = kmalloc(sizeof(*user_msg) + sizeof(*status_result), GFP_KERNEL); + if (!user_msg) { + ret = -ENOMEM; + goto out; + } + user_msg->len = sizeof(*status_query); + user_msg->count = 1; + + status_query = (struct qaic_manage_trans_status_to_dev *)user_msg->data; + status_query->hdr.type = QAIC_TRANS_STATUS_FROM_USR; + status_query->hdr.len = sizeof(status_query->hdr); + + ret = qaic_manage(qdev, usr, user_msg); + if (ret) + goto kfree_user_msg; + status_result = (struct qaic_manage_trans_status_from_dev *)user_msg->data; + *major = status_result->major; + *minor = status_result->minor; + + if (status_result->status_flags & BIT(0)) { /* device is using CRC */ + /* By default qdev->gen_crc is programmed to generate CRC */ + qdev->valid_crc = valid_crc; + } else { + /* By default qdev->valid_crc is programmed to bypass CRC */ + qdev->gen_crc = gen_crc_stub; + } + +kfree_user_msg: + kfree(user_msg); +out: + return ret; +} + +static void resp_worker(struct work_struct *work) +{ + struct resp_work *resp = container_of(work, struct resp_work, work); + struct qaic_device *qdev = resp->qdev; + struct wire_msg *msg = resp->buf; + struct xfer_queue_elem *elem; + struct xfer_queue_elem *i; + bool found = false; + + mutex_lock(&qdev->cntl_mutex); + list_for_each_entry_safe(elem, i, &qdev->cntl_xfer_list, list) { + if (elem->seq_num == le32_to_cpu(msg->hdr.sequence_number)) { + found = true; + list_del_init(&elem->list); + elem->buf = msg; + complete_all(&elem->xfer_done); + break; + } + } + mutex_unlock(&qdev->cntl_mutex); + + if (!found) + /* request must have timed out, drop packet */ + kfree(msg); + + kfree(resp); +} + +static void free_wrapper_from_list(struct wrapper_list *wrappers, struct wrapper_msg *wrapper) +{ + bool all_done = false; + + spin_lock(&wrappers->lock); + kref_put(&wrapper->ref_count, free_wrapper); + all_done = list_empty(&wrappers->list); + spin_unlock(&wrappers->lock); + + if (all_done) + kfree(wrappers); +} + +void qaic_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result) +{ + struct wire_msg *msg = mhi_result->buf_addr; + struct wrapper_msg *wrapper =
container_of(msg, struct wrapper_msg, msg); + + free_wrapper_from_list(wrapper->head, wrapper); +} + +void qaic_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, struct mhi_result *mhi_result) +{ + struct qaic_device *qdev = dev_get_drvdata(&mhi_dev->dev); + struct wire_msg *msg = mhi_result->buf_addr; + struct resp_work *resp; + + if (mhi_result->transaction_status || msg->hdr.magic_number != MANAGE_MAGIC_NUMBER) { + kfree(msg); + return; + } + + resp = kmalloc(sizeof(*resp), GFP_ATOMIC); + if (!resp) { + kfree(msg); + return; + } + + INIT_WORK(&resp->work, resp_worker); + resp->qdev = qdev; + resp->buf = msg; + queue_work(qdev->cntl_wq, &resp->work); +} + +int qaic_control_open(struct qaic_device *qdev) +{ + if (!qdev->cntl_ch) + return -ENODEV; + + qdev->cntl_lost_buf = false; + /* + * By default qaic should assume that the device has CRC enabled. + * qaic learns whether the device actually has CRC enabled or disabled + * during the device status transaction, which is the first transaction + * performed on the control channel. + * + * So CRC validation of the first device status transaction response is + * ignored (by calling valid_crc_stub) and is done later during decoding + * if the device has CRC enabled. + * Once qaic knows whether the device has CRC enabled or not, it acts + * accordingly. + */ + qdev->gen_crc = gen_crc; + qdev->valid_crc = valid_crc_stub; + + return mhi_prepare_for_transfer(qdev->cntl_ch); +} + +void qaic_control_close(struct qaic_device *qdev) +{ + mhi_unprepare_from_transfer(qdev->cntl_ch); +} + +void qaic_release_usr(struct qaic_device *qdev, struct qaic_user *usr) +{ + struct wire_trans_terminate_to_dev *trans; + struct wrapper_list *wrappers; + struct wrapper_msg *wrapper; + struct wire_msg *msg; + struct wire_msg *rsp; + + wrappers = alloc_wrapper_list(); + if (!wrappers) + return; + + wrapper = add_wrapper(wrappers, sizeof(*wrapper) + sizeof(*msg) + sizeof(*trans)); + if (!wrapper) + return; + + msg = &wrapper->msg; + + trans = (struct wire_trans_terminate_to_dev *)msg->data; + + trans->hdr.type = cpu_to_le32(QAIC_TRANS_TERMINATE_TO_DEV); + trans->hdr.len = cpu_to_le32(sizeof(*trans)); + trans->handle = cpu_to_le32(usr->handle); + + mutex_lock(&qdev->cntl_mutex); + wrapper->len = sizeof(msg->hdr) + sizeof(*trans); + msg->hdr.magic_number = MANAGE_MAGIC_NUMBER; + msg->hdr.sequence_number = cpu_to_le32(qdev->next_seq_num++); + msg->hdr.len = cpu_to_le32(wrapper->len); + msg->hdr.count = cpu_to_le32(1); + msg->hdr.handle = cpu_to_le32(usr->handle); + msg->hdr.padding = cpu_to_le32(0); + msg->hdr.crc32 = cpu_to_le32(qdev->gen_crc(wrappers)); + + /* + * msg_xfer releases the mutex. + * We don't care about the return of msg_xfer since we will not do + * anything different based on what happens. + * We ignore pending signals since one will be set if the user is + * killed, and we need to give the device a chance to clean up, otherwise + * DMA may still be in progress when we return.
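+ * Passing ignore_signal = true below makes msg_xfer() wait with the + * non-interruptible wait_for_completion_timeout() variant.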
+ */ + rsp = msg_xfer(qdev, wrappers, qdev->next_seq_num - 1, true); + if (!IS_ERR(rsp)) + kfree(rsp); + free_wrapper_from_list(wrappers, wrapper); +} + +void wake_all_cntl(struct qaic_device *qdev) +{ + struct xfer_queue_elem *elem; + struct xfer_queue_elem *i; + + mutex_lock(&qdev->cntl_mutex); + list_for_each_entry_safe(elem, i, &qdev->cntl_xfer_list, list) { + list_del_init(&elem->list); + complete_all(&elem->xfer_done); + } + mutex_unlock(&qdev->cntl_mutex); +} diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c new file mode 100644 index 000000000000..c0a574cd1b35 --- /dev/null +++ b/drivers/accel/qaic/qaic_data.c @@ -0,0 +1,1902 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. */ + +#include <linux/bitfield.h> +#include <linux/bits.h> +#include <linux/completion.h> +#include <linux/delay.h> +#include <linux/dma-buf.h> +#include <linux/dma-mapping.h> +#include <linux/interrupt.h> +#include <linux/kref.h> +#include <linux/list.h> +#include <linux/math64.h> +#include <linux/mm.h> +#include <linux/moduleparam.h> +#include <linux/scatterlist.h> +#include <linux/spinlock.h> +#include <linux/srcu.h> +#include <linux/types.h> +#include <linux/uaccess.h> +#include <linux/wait.h> +#include <drm/drm_file.h> +#include <drm/drm_gem.h> +#include <drm/drm_print.h> +#include <uapi/drm/qaic_accel.h> + +#include "qaic.h" + +#define SEM_VAL_MASK GENMASK_ULL(11, 0) +#define SEM_INDEX_MASK GENMASK_ULL(4, 0) +#define BULK_XFER BIT(3) +#define GEN_COMPLETION BIT(4) +#define INBOUND_XFER 1 +#define OUTBOUND_XFER 2 +#define REQHP_OFF 0x0 /* we read this */ +#define REQTP_OFF 0x4 /* we write this */ +#define RSPHP_OFF 0x8 /* we write this */ +#define RSPTP_OFF 0xc /* we read this */ + +#define ENCODE_SEM(val, index, sync, cmd, flags) \ + ({ \ + FIELD_PREP(GENMASK(11, 0), (val)) | \ + FIELD_PREP(GENMASK(20, 16), (index)) | \ + FIELD_PREP(BIT(22), (sync)) | \ + FIELD_PREP(GENMASK(26, 24), (cmd)) | \ + FIELD_PREP(GENMASK(30, 29), (flags)) | \ + FIELD_PREP(BIT(31), (cmd) ? 1 : 0); \ + }) +#define NUM_EVENTS 128 +#define NUM_DELAYS 10 + +static unsigned int wait_exec_default_timeout_ms = 5000; /* 5 sec default */ +module_param(wait_exec_default_timeout_ms, uint, 0600); +MODULE_PARM_DESC(wait_exec_default_timeout_ms, "Default timeout for DRM_IOCTL_QAIC_WAIT_BO"); + +static unsigned int datapath_poll_interval_us = 100; /* 100 usec default */ +module_param(datapath_poll_interval_us, uint, 0600); +MODULE_PARM_DESC(datapath_poll_interval_us, + "Amount of time to sleep between activity when datapath polling is enabled"); + +struct dbc_req { + /* + * A request ID is assigned to each memory handle going in DMA queue. + * As a single memory handle can enqueue multiple elements in DMA queue + * all of them will have the same request ID. 
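+ * For example, a BO slice that maps to three scatter-gather segments + * is encoded as three consecutive dbc_req elements, all carrying the + * same req_id (see encode_reqs()).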
+ */ + __le16 req_id; + /* Future use */ + __u8 seq_id; + /* + * Special encoded variable + * 7 0 - Do not force to generate MSI after DMA is completed + * 1 - Force to generate MSI after DMA is completed + * 6:5 Reserved + * 4 1 - Generate completion element in the response queue + * 0 - No Completion Code + * 3 0 - DMA request is a Link list transfer + * 1 - DMA request is a Bulk transfer + * 2 Reserved + * 1:0 00 - No DMA transfer involved + * 01 - DMA transfer is part of inbound transfer + * 10 - DMA transfer has outbound transfer + * 11 - NA + */ + __u8 cmd; + __le32 resv; + /* Source address for the transfer */ + __le64 src_addr; + /* Destination address for the transfer */ + __le64 dest_addr; + /* Length of transfer request */ + __le32 len; + __le32 resv2; + /* Doorbell address */ + __le64 db_addr; + /* + * Special encoded variable + * 7 1 - Doorbell(db) write + * 0 - No doorbell write + * 6:2 Reserved + * 1:0 00 - 32 bit access, db address must be aligned to 32bit-boundary + * 01 - 16 bit access, db address must be aligned to 16bit-boundary + * 10 - 8 bit access, db address must be aligned to 8bit-boundary + * 11 - Reserved + */ + __u8 db_len; + __u8 resv3; + __le16 resv4; + /* 32 bit data written to doorbell address */ + __le32 db_data; + /* + * Special encoded variable + * All the fields of sem_cmdX are passed from user and all are ORed + * together to form sem_cmd. + * 0:11 Semaphore value + * 15:12 Reserved + * 20:16 Semaphore index + * 21 Reserved + * 22 Semaphore Sync + * 23 Reserved + * 26:24 Semaphore command + * 28:27 Reserved + * 29 Semaphore DMA out bound sync fence + * 30 Semaphore DMA in bound sync fence + * 31 Enable semaphore command + */ + __le32 sem_cmd0; + __le32 sem_cmd1; + __le32 sem_cmd2; + __le32 sem_cmd3; +} __packed; + +struct dbc_rsp { + /* Request ID of the memory handle whose DMA transaction is completed */ + __le16 req_id; + /* Status of the DMA transaction. 0 : Success otherwise failure */ + __le16 status; +} __packed; + +inline int get_dbc_req_elem_size(void) +{ + return sizeof(struct dbc_req); +} + +inline int get_dbc_rsp_elem_size(void) +{ + return sizeof(struct dbc_rsp); +} + +static void free_slice(struct kref *kref) +{ + struct bo_slice *slice = container_of(kref, struct bo_slice, ref_count); + + list_del(&slice->slice); + drm_gem_object_put(&slice->bo->base); + sg_free_table(slice->sgt); + kfree(slice->sgt); + kfree(slice->reqs); + kfree(slice); +} + +static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_table **sgt_out, + struct sg_table *sgt_in, u64 size, u64 offset) +{ + int total_len, len, nents, offf = 0, offl = 0; + struct scatterlist *sg, *sgn, *sgf, *sgl; + struct sg_table *sgt; + int ret, j; + + /* find out number of relevant nents needed for this mem */ + total_len = 0; + sgf = NULL; + sgl = NULL; + nents = 0; + + size = size ? 
size : PAGE_SIZE; + for (sg = sgt_in->sgl; sg; sg = sg_next(sg)) { + len = sg_dma_len(sg); + + if (!len) + continue; + if (offset >= total_len && offset < total_len + len) { + sgf = sg; + offf = offset - total_len; + } + if (sgf) + nents++; + if (offset + size >= total_len && + offset + size <= total_len + len) { + sgl = sg; + offl = offset + size - total_len; + break; + } + total_len += len; + } + + if (!sgf || !sgl) { + ret = -EINVAL; + goto out; + } + + sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); + if (!sgt) { + ret = -ENOMEM; + goto out; + } + + ret = sg_alloc_table(sgt, nents, GFP_KERNEL); + if (ret) + goto free_sgt; + + /* copy relevant sg node and fix page and length */ + sgn = sgf; + for_each_sgtable_sg(sgt, sg, j) { + memcpy(sg, sgn, sizeof(*sg)); + if (sgn == sgf) { + sg_dma_address(sg) += offf; + sg_dma_len(sg) -= offf; + sg_set_page(sg, sg_page(sgn), sg_dma_len(sg), offf); + } else { + offf = 0; + } + if (sgn == sgl) { + sg_dma_len(sg) = offl - offf; + sg_set_page(sg, sg_page(sgn), offl - offf, offf); + sg_mark_end(sg); + break; + } + sgn = sg_next(sgn); + } + + *sgt_out = sgt; + return ret; + +free_sgt: + kfree(sgt); +out: + *sgt_out = NULL; + return ret; +} + +static int encode_reqs(struct qaic_device *qdev, struct bo_slice *slice, + struct qaic_attach_slice_entry *req) +{ + __le64 db_addr = cpu_to_le64(req->db_addr); + __le32 db_data = cpu_to_le32(req->db_data); + struct scatterlist *sg; + __u8 cmd = BULK_XFER; + int presync_sem; + u64 dev_addr; + __u8 db_len; + int i; + + if (!slice->no_xfer) + cmd |= (slice->dir == DMA_TO_DEVICE ? INBOUND_XFER : OUTBOUND_XFER); + + if (req->db_len && !IS_ALIGNED(req->db_addr, req->db_len / 8)) + return -EINVAL; + + presync_sem = req->sem0.presync + req->sem1.presync + req->sem2.presync + req->sem3.presync; + if (presync_sem > 1) + return -EINVAL; + + presync_sem = req->sem0.presync << 0 | req->sem1.presync << 1 | + req->sem2.presync << 2 | req->sem3.presync << 3; + + switch (req->db_len) { + case 32: + db_len = BIT(7); + break; + case 16: + db_len = BIT(7) | 1; + break; + case 8: + db_len = BIT(7) | 2; + break; + case 0: + db_len = 0; /* doorbell is not active for this command */ + break; + default: + return -EINVAL; /* should never hit this */ + } + + /* + * When we end up splitting up a single request (ie a buf slice) into + * multiple DMA requests, we have to manage the sync data carefully. + * There can only be one presync sem. That needs to be on every xfer + * so that the DMA engine doesn't transfer data before the receiver is + * ready. We only do the doorbell and postsync sems after the xfer. + * To guarantee previous xfers for the request are complete, we use a + * fence. + */ + dev_addr = req->dev_addr; + for_each_sgtable_sg(slice->sgt, sg, i) { + slice->reqs[i].cmd = cmd; + slice->reqs[i].src_addr = cpu_to_le64(slice->dir == DMA_TO_DEVICE ? + sg_dma_address(sg) : dev_addr); + slice->reqs[i].dest_addr = cpu_to_le64(slice->dir == DMA_TO_DEVICE ? + dev_addr : sg_dma_address(sg)); + /* + * sg_dma_len(sg) returns size of a DMA segment, maximum DMA + * segment size is set to UINT_MAX by qaic and hence return + * values of sg_dma_len(sg) can never exceed u32 range. So, + * by down sizing we are not corrupting the value. 
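+ * (That bound is established at probe time: init_pci() in qaic_drv.c + * calls dma_set_max_seg_size(&pdev->dev, UINT_MAX).)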
+ */ + slice->reqs[i].len = cpu_to_le32((u32)sg_dma_len(sg)); + switch (presync_sem) { + case BIT(0): + slice->reqs[i].sem_cmd0 = cpu_to_le32(ENCODE_SEM(req->sem0.val, + req->sem0.index, + req->sem0.presync, + req->sem0.cmd, + req->sem0.flags)); + break; + case BIT(1): + slice->reqs[i].sem_cmd1 = cpu_to_le32(ENCODE_SEM(req->sem1.val, + req->sem1.index, + req->sem1.presync, + req->sem1.cmd, + req->sem1.flags)); + break; + case BIT(2): + slice->reqs[i].sem_cmd2 = cpu_to_le32(ENCODE_SEM(req->sem2.val, + req->sem2.index, + req->sem2.presync, + req->sem2.cmd, + req->sem2.flags)); + break; + case BIT(3): + slice->reqs[i].sem_cmd3 = cpu_to_le32(ENCODE_SEM(req->sem3.val, + req->sem3.index, + req->sem3.presync, + req->sem3.cmd, + req->sem3.flags)); + break; + } + dev_addr += sg_dma_len(sg); + } + /* add post transfer stuff to last segment */ + i--; + slice->reqs[i].cmd |= GEN_COMPLETION; + slice->reqs[i].db_addr = db_addr; + slice->reqs[i].db_len = db_len; + slice->reqs[i].db_data = db_data; + /* + * Add a fence if we have more than one request going to the hardware + * representing the entirety of the user request, and the user request + * has no presync condition. + * Fences are expensive, so we try to avoid them. We rely on the + * hardware behavior to avoid needing one when there is a presync + * condition. When a presync exists, all requests for that same + * presync will be queued into a fifo. Thus, since we queue the + * post xfer activity only on the last request we queue, the hardware + * will ensure that the last queued request is processed last, thus + * making sure the post xfer activity happens at the right time without + * a fence. + */ + if (i && !presync_sem) + req->sem0.flags |= (slice->dir == DMA_TO_DEVICE ? + QAIC_SEM_INSYNCFENCE : QAIC_SEM_OUTSYNCFENCE); + slice->reqs[i].sem_cmd0 = cpu_to_le32(ENCODE_SEM(req->sem0.val, req->sem0.index, + req->sem0.presync, req->sem0.cmd, + req->sem0.flags)); + slice->reqs[i].sem_cmd1 = cpu_to_le32(ENCODE_SEM(req->sem1.val, req->sem1.index, + req->sem1.presync, req->sem1.cmd, + req->sem1.flags)); + slice->reqs[i].sem_cmd2 = cpu_to_le32(ENCODE_SEM(req->sem2.val, req->sem2.index, + req->sem2.presync, req->sem2.cmd, + req->sem2.flags)); + slice->reqs[i].sem_cmd3 = cpu_to_le32(ENCODE_SEM(req->sem3.val, req->sem3.index, + req->sem3.presync, req->sem3.cmd, + req->sem3.flags)); + + return 0; +} + +static int qaic_map_one_slice(struct qaic_device *qdev, struct qaic_bo *bo, + struct qaic_attach_slice_entry *slice_ent) +{ + struct sg_table *sgt = NULL; + struct bo_slice *slice; + int ret; + + ret = clone_range_of_sgt_for_slice(qdev, &sgt, bo->sgt, slice_ent->size, slice_ent->offset); + if (ret) + goto out; + + slice = kmalloc(sizeof(*slice), GFP_KERNEL); + if (!slice) { + ret = -ENOMEM; + goto free_sgt; + } + + slice->reqs = kcalloc(sgt->nents, sizeof(*slice->reqs), GFP_KERNEL); + if (!slice->reqs) { + ret = -ENOMEM; + goto free_slice; + } + + slice->no_xfer = !slice_ent->size; + slice->sgt = sgt; + slice->nents = sgt->nents; + slice->dir = bo->dir; + slice->bo = bo; + slice->size = slice_ent->size; + slice->offset = slice_ent->offset; + + ret = encode_reqs(qdev, slice, slice_ent); + if (ret) + goto free_req; + + bo->total_slice_nents += sgt->nents; + kref_init(&slice->ref_count); + drm_gem_object_get(&bo->base); + list_add_tail(&slice->slice, &bo->slices); + + return 0; + +free_req: + kfree(slice->reqs); +free_slice: + kfree(slice); +free_sgt: + sg_free_table(sgt); + kfree(sgt); +out: + return ret; +} + +static int create_sgt(struct qaic_device *qdev, 
struct sg_table **sgt_out, u64 size) +{ + struct scatterlist *sg; + struct sg_table *sgt; + struct page **pages; + int *pages_order; + int buf_extra; + int max_order; + int nr_pages; + int ret = 0; + int i, j, k; + int order; + + if (size) { + nr_pages = DIV_ROUND_UP(size, PAGE_SIZE); + /* + * calculate how much extra we are going to allocate, to remove + * later + */ + buf_extra = (PAGE_SIZE - size % PAGE_SIZE) % PAGE_SIZE; + max_order = min(MAX_ORDER - 1, get_order(size)); + } else { + /* allocate a single page for book keeping */ + nr_pages = 1; + buf_extra = 0; + max_order = 0; + } + + pages = kvmalloc_array(nr_pages, sizeof(*pages) + sizeof(*pages_order), GFP_KERNEL); + if (!pages) { + ret = -ENOMEM; + goto out; + } + pages_order = (void *)pages + sizeof(*pages) * nr_pages; + + /* + * Allocate requested memory using alloc_pages. It is possible to allocate + * the requested memory in multiple chunks by calling alloc_pages + * multiple times. Use SG table to handle multiple allocated pages. + */ + i = 0; + while (nr_pages > 0) { + order = min(get_order(nr_pages * PAGE_SIZE), max_order); + while (1) { + pages[i] = alloc_pages(GFP_KERNEL | GFP_HIGHUSER | + __GFP_NOWARN | __GFP_ZERO | + (order ? __GFP_NORETRY : __GFP_RETRY_MAYFAIL), + order); + if (pages[i]) + break; + if (!order--) { + ret = -ENOMEM; + goto free_partial_alloc; + } + } + + max_order = order; + pages_order[i] = order; + + nr_pages -= 1 << order; + if (nr_pages <= 0) + /* account for over allocation */ + buf_extra += abs(nr_pages) * PAGE_SIZE; + i++; + } + + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL); + if (!sgt) { + ret = -ENOMEM; + goto free_partial_alloc; + } + + if (sg_alloc_table(sgt, i, GFP_KERNEL)) { + ret = -ENOMEM; + goto free_sgt; + } + + /* Populate the SG table with the allocated memory pages */ + sg = sgt->sgl; + for (k = 0; k < i; k++, sg = sg_next(sg)) { + /* Last entry requires special handling */ + if (k < i - 1) { + sg_set_page(sg, pages[k], PAGE_SIZE << pages_order[k], 0); + } else { + sg_set_page(sg, pages[k], (PAGE_SIZE << pages_order[k]) - buf_extra, 0); + sg_mark_end(sg); + } + } + + kvfree(pages); + *sgt_out = sgt; + return ret; + +free_sgt: + kfree(sgt); +free_partial_alloc: + for (j = 0; j < i; j++) + __free_pages(pages[j], pages_order[j]); + kvfree(pages); +out: + *sgt_out = NULL; + return ret; +} + +static bool invalid_sem(struct qaic_sem *sem) +{ + if (sem->val & ~SEM_VAL_MASK || sem->index & ~SEM_INDEX_MASK || + !(sem->presync == 0 || sem->presync == 1) || sem->pad || + sem->flags & ~(QAIC_SEM_INSYNCFENCE | QAIC_SEM_OUTSYNCFENCE) || + sem->cmd > QAIC_SEM_WAIT_GT_0) + return true; + return false; +} + +static int qaic_validate_req(struct qaic_device *qdev, struct qaic_attach_slice_entry *slice_ent, + u32 count, u64 total_size) +{ + int i; + + for (i = 0; i < count; i++) { + if (!(slice_ent[i].db_len == 32 || slice_ent[i].db_len == 16 || + slice_ent[i].db_len == 8 || slice_ent[i].db_len == 0) || + invalid_sem(&slice_ent[i].sem0) || invalid_sem(&slice_ent[i].sem1) || + invalid_sem(&slice_ent[i].sem2) || invalid_sem(&slice_ent[i].sem3)) + return -EINVAL; + + if (slice_ent[i].offset + slice_ent[i].size > total_size) + return -EINVAL; + } + + return 0; +} + +static void qaic_free_sgt(struct sg_table *sgt) +{ + struct scatterlist *sg; + + for (sg = sgt->sgl; sg; sg = sg_next(sg)) + if (sg_page(sg)) + __free_pages(sg_page(sg), get_order(sg->length)); + sg_free_table(sgt); + kfree(sgt); +} + +static void qaic_gem_print_info(struct drm_printer *p, unsigned int indent, + const struct drm_gem_object *obj) +{ + 
struct qaic_bo *bo = to_qaic_bo(obj); + + drm_printf_indent(p, indent, "user requested size=%llu\n", bo->size); +} + +static const struct vm_operations_struct drm_vm_ops = { + .open = drm_gem_vm_open, + .close = drm_gem_vm_close, +}; + +static int qaic_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) +{ + struct qaic_bo *bo = to_qaic_bo(obj); + unsigned long offset = 0; + struct scatterlist *sg; + int ret; + + if (obj->import_attach) + return -EINVAL; + + for (sg = bo->sgt->sgl; sg; sg = sg_next(sg)) { + if (sg_page(sg)) { + ret = remap_pfn_range(vma, vma->vm_start + offset, page_to_pfn(sg_page(sg)), + sg->length, vma->vm_page_prot); + if (ret) + goto out; + offset += sg->length; + } + } + +out: + return ret; +} + +static void qaic_free_object(struct drm_gem_object *obj) +{ + struct qaic_bo *bo = to_qaic_bo(obj); + + if (obj->import_attach) { + /* DMABUF/PRIME Path */ + dma_buf_detach(obj->import_attach->dmabuf, obj->import_attach); + dma_buf_put(obj->import_attach->dmabuf); + } else { + /* Private buffer allocation path */ + qaic_free_sgt(bo->sgt); + } + + drm_gem_object_release(obj); + kfree(bo); +} + +static const struct drm_gem_object_funcs qaic_gem_funcs = { + .free = qaic_free_object, + .print_info = qaic_gem_print_info, + .mmap = qaic_gem_object_mmap, + .vm_ops = &drm_vm_ops, +}; + +static struct qaic_bo *qaic_alloc_init_bo(void) +{ + struct qaic_bo *bo; + + bo = kzalloc(sizeof(*bo), GFP_KERNEL); + if (!bo) + return ERR_PTR(-ENOMEM); + + INIT_LIST_HEAD(&bo->slices); + init_completion(&bo->xfer_done); + complete_all(&bo->xfer_done); + + return bo; +} + +int qaic_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct qaic_create_bo *args = data; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + struct qaic_user *usr; + struct qaic_bo *bo; + size_t size; + int ret; + + if (args->pad) + return -EINVAL; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto unlock_dev_srcu; + } + + size = PAGE_ALIGN(args->size); + if (size == 0) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + bo = qaic_alloc_init_bo(); + if (IS_ERR(bo)) { + ret = PTR_ERR(bo); + goto unlock_dev_srcu; + } + obj = &bo->base; + + drm_gem_private_object_init(dev, obj, size); + + obj->funcs = &qaic_gem_funcs; + ret = create_sgt(qdev, &bo->sgt, size); + if (ret) + goto free_bo; + + bo->size = args->size; + + ret = drm_gem_handle_create(file_priv, obj, &args->handle); + if (ret) + goto free_sgt; + + bo->handle = args->handle; + drm_gem_object_put(obj); + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + + return 0; + +free_sgt: + qaic_free_sgt(bo->sgt); +free_bo: + kfree(bo); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +int qaic_mmap_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct qaic_mmap_bo *args = data; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + struct qaic_user *usr; + int ret; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + 
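/* + * The usual qaic ioctl locking pattern: usr->qddev_lock (SRCU) keeps + * the user's device association stable, while qdev->dev_lock guards + * against a device reset tearing down qdev state mid-ioctl. + */ +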
qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto unlock_dev_srcu; + } + + obj = drm_gem_object_lookup(file_priv, args->handle); + if (!obj) { + ret = -ENOENT; + goto unlock_dev_srcu; + } + + ret = drm_gem_create_mmap_offset(obj); + if (ret == 0) + args->offset = drm_vma_node_offset_addr(&obj->vma_node); + + drm_gem_object_put(obj); + +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf) +{ + struct dma_buf_attachment *attach; + struct drm_gem_object *obj; + struct qaic_bo *bo; + size_t size; + int ret; + + bo = qaic_alloc_init_bo(); + if (IS_ERR(bo)) { + ret = PTR_ERR(bo); + goto out; + } + + obj = &bo->base; + get_dma_buf(dma_buf); + + attach = dma_buf_attach(dma_buf, dev->dev); + if (IS_ERR(attach)) { + ret = PTR_ERR(attach); + goto attach_fail; + } + + size = PAGE_ALIGN(attach->dmabuf->size); + if (size == 0) { + ret = -EINVAL; + goto size_align_fail; + } + + drm_gem_private_object_init(dev, obj, size); + /* + * skipping dma_buf_map_attachment() as we do not know the direction + * just yet. Once the direction is known in the subsequent IOCTL to + * attach slicing, we can do it then. + */ + + obj->funcs = &qaic_gem_funcs; + obj->import_attach = attach; + obj->resv = dma_buf->resv; + + return obj; + +size_align_fail: + dma_buf_detach(dma_buf, attach); +attach_fail: + dma_buf_put(dma_buf); + kfree(bo); +out: + return ERR_PTR(ret); +} + +static int qaic_prepare_import_bo(struct qaic_bo *bo, struct qaic_attach_slice_hdr *hdr) +{ + struct drm_gem_object *obj = &bo->base; + struct sg_table *sgt; + int ret; + + if (obj->import_attach->dmabuf->size < hdr->size) + return -EINVAL; + + sgt = dma_buf_map_attachment(obj->import_attach, hdr->dir); + if (IS_ERR(sgt)) { + ret = PTR_ERR(sgt); + return ret; + } + + bo->sgt = sgt; + bo->size = hdr->size; + + return 0; +} + +static int qaic_prepare_export_bo(struct qaic_device *qdev, struct qaic_bo *bo, + struct qaic_attach_slice_hdr *hdr) +{ + int ret; + + if (bo->size != hdr->size) + return -EINVAL; + + ret = dma_map_sgtable(&qdev->pdev->dev, bo->sgt, hdr->dir, 0); + if (ret) + return -EFAULT; + + return 0; +} + +static int qaic_prepare_bo(struct qaic_device *qdev, struct qaic_bo *bo, + struct qaic_attach_slice_hdr *hdr) +{ + int ret; + + if (bo->base.import_attach) + ret = qaic_prepare_import_bo(bo, hdr); + else + ret = qaic_prepare_export_bo(qdev, bo, hdr); + + if (ret == 0) + bo->dir = hdr->dir; + + return ret; +} + +static void qaic_unprepare_import_bo(struct qaic_bo *bo) +{ + dma_buf_unmap_attachment(bo->base.import_attach, bo->sgt, bo->dir); + bo->sgt = NULL; + bo->size = 0; +} + +static void qaic_unprepare_export_bo(struct qaic_device *qdev, struct qaic_bo *bo) +{ + dma_unmap_sgtable(&qdev->pdev->dev, bo->sgt, bo->dir, 0); +} + +static void qaic_unprepare_bo(struct qaic_device *qdev, struct qaic_bo *bo) +{ + if (bo->base.import_attach) + qaic_unprepare_import_bo(bo); + else + qaic_unprepare_export_bo(qdev, bo); + + bo->dir = 0; +} + +static void qaic_free_slices_bo(struct qaic_bo *bo) +{ + struct bo_slice *slice, *temp; + + list_for_each_entry_safe(slice, temp, &bo->slices, slice) + kref_put(&slice->ref_count, free_slice); +} + +static int qaic_attach_slicing_bo(struct qaic_device *qdev, struct qaic_bo *bo, + struct qaic_attach_slice_hdr *hdr, + struct qaic_attach_slice_entry *slice_ent) +{ + int ret, i; + + 
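/* + * Slicing is all-or-nothing: if any entry fails to map, every slice + * created so far is torn down before returning. + */ +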
for (i = 0; i < hdr->count; i++) { + ret = qaic_map_one_slice(qdev, bo, &slice_ent[i]); + if (ret) { + qaic_free_slices_bo(bo); + return ret; + } + } + + if (bo->total_slice_nents > qdev->dbc[hdr->dbc_id].nelem) { + qaic_free_slices_bo(bo); + return -ENOSPC; + } + + bo->sliced = true; + bo->nr_slice = hdr->count; + list_add_tail(&bo->bo_list, &qdev->dbc[hdr->dbc_id].bo_lists); + + return 0; +} + +int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct qaic_attach_slice_entry *slice_ent; + struct qaic_attach_slice *args = data; + struct dma_bridge_chan *dbc; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + unsigned long arg_size; + struct qaic_user *usr; + u8 __user *user_data; + struct qaic_bo *bo; + int ret; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto unlock_dev_srcu; + } + + if (args->hdr.count == 0) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + arg_size = args->hdr.count * sizeof(*slice_ent); + if (arg_size / args->hdr.count != sizeof(*slice_ent)) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + if (args->hdr.dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + if (args->hdr.size == 0) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + if (!(args->hdr.dir == DMA_TO_DEVICE || args->hdr.dir == DMA_FROM_DEVICE)) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + dbc = &qdev->dbc[args->hdr.dbc_id]; + if (dbc->usr != usr) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + if (args->data == 0) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + user_data = u64_to_user_ptr(args->data); + + slice_ent = kzalloc(arg_size, GFP_KERNEL); + if (!slice_ent) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + ret = copy_from_user(slice_ent, user_data, arg_size); + if (ret) { + ret = -EFAULT; + goto free_slice_ent; + } + + ret = qaic_validate_req(qdev, slice_ent, args->hdr.count, args->hdr.size); + if (ret) + goto free_slice_ent; + + obj = drm_gem_object_lookup(file_priv, args->hdr.handle); + if (!obj) { + ret = -ENOENT; + goto free_slice_ent; + } + + bo = to_qaic_bo(obj); + + ret = qaic_prepare_bo(qdev, bo, &args->hdr); + if (ret) + goto put_bo; + + ret = qaic_attach_slicing_bo(qdev, bo, &args->hdr, slice_ent); + if (ret) + goto unprepare_bo; + + if (args->hdr.dir == DMA_TO_DEVICE) + dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, args->hdr.dir); + + bo->dbc = dbc; + drm_gem_object_put(obj); + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + + return 0; + +unprepare_bo: + qaic_unprepare_bo(qdev, bo); +put_bo: + drm_gem_object_put(obj); +free_slice_ent: + kfree(slice_ent); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +static inline int copy_exec_reqs(struct qaic_device *qdev, struct bo_slice *slice, u32 dbc_id, + u32 head, u32 *ptail) +{ + struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id]; + struct dbc_req *reqs = slice->reqs; + u32 tail = *ptail; + u32 avail; + + avail = head - tail; + if (head <= tail) + avail += dbc->nelem; + + --avail; + + if (avail < slice->nents) + return -EAGAIN; + + if (tail + slice->nents > dbc->nelem) { + avail = dbc->nelem - tail; + avail = min_t(u32, avail, 
slice->nents); + memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(), reqs, + sizeof(*reqs) * avail); + reqs += avail; + avail = slice->nents - avail; + if (avail) + memcpy(dbc->req_q_base, reqs, sizeof(*reqs) * avail); + } else { + memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(), reqs, + sizeof(*reqs) * slice->nents); + } + + *ptail = (tail + slice->nents) % dbc->nelem; + + return 0; +} + +/* + * Based on the value of resize we may only need to transmit first_n + * entries and the last entry, with last_bytes to send from the last entry. + * Note that first_n could be 0. + */ +static inline int copy_partial_exec_reqs(struct qaic_device *qdev, struct bo_slice *slice, + u64 resize, u32 dbc_id, u32 head, u32 *ptail) +{ + struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id]; + struct dbc_req *reqs = slice->reqs; + struct dbc_req *last_req; + u32 tail = *ptail; + u64 total_bytes; + u64 last_bytes; + u32 first_n; + u32 avail; + int ret; + int i; + + avail = head - tail; + if (head <= tail) + avail += dbc->nelem; + + --avail; + + total_bytes = 0; + for (i = 0; i < slice->nents; i++) { + total_bytes += le32_to_cpu(reqs[i].len); + if (total_bytes >= resize) + break; + } + + if (total_bytes < resize) { + /* User space should have used the full buffer path. */ + ret = -EINVAL; + return ret; + } + + first_n = i; + last_bytes = i ? resize + le32_to_cpu(reqs[i].len) - total_bytes : resize; + + if (avail < (first_n + 1)) + return -EAGAIN; + + if (first_n) { + if (tail + first_n > dbc->nelem) { + avail = dbc->nelem - tail; + avail = min_t(u32, avail, first_n); + memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(), reqs, + sizeof(*reqs) * avail); + last_req = reqs + avail; + avail = first_n - avail; + if (avail) + memcpy(dbc->req_q_base, last_req, sizeof(*reqs) * avail); + } else { + memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(), reqs, + sizeof(*reqs) * first_n); + } + } + + /* Copy over the last entry. Here we need to adjust len to the left over + * size, and set src and dst to the entry it is copied to. + */ + last_req = dbc->req_q_base + (tail + first_n) % dbc->nelem * get_dbc_req_elem_size(); + memcpy(last_req, reqs + slice->nents - 1, sizeof(*reqs)); + + /* + * last_bytes holds size of a DMA segment, maximum DMA segment size is + * set to UINT_MAX by qaic and hence last_bytes can never exceed u32 + * range. So, by down sizing we are not corrupting the value. + */ + last_req->len = cpu_to_le32((u32)last_bytes); + last_req->src_addr = reqs[first_n].src_addr; + last_req->dest_addr = reqs[first_n].dest_addr; + + *ptail = (tail + first_n + 1) % dbc->nelem; + + return 0; +} + +static int send_bo_list_to_device(struct qaic_device *qdev, struct drm_file *file_priv, + struct qaic_execute_entry *exec, unsigned int count, + bool is_partial, struct dma_bridge_chan *dbc, u32 head, + u32 *tail) +{ + struct qaic_partial_execute_entry *pexec = (struct qaic_partial_execute_entry *)exec; + struct drm_gem_object *obj; + struct bo_slice *slice; + unsigned long flags; + struct qaic_bo *bo; + bool queued; + int i, j; + int ret; + + for (i = 0; i < count; i++) { + /* + * ref count will be decremented when the transfer of this + * buffer is complete. It is inside dbc_irq_threaded_fn(). + */ + obj = drm_gem_object_lookup(file_priv, + is_partial ? 
pexec[i].handle : exec[i].handle); + if (!obj) { + ret = -ENOENT; + goto failed_to_send_bo; + } + + bo = to_qaic_bo(obj); + + if (!bo->sliced) { + ret = -EINVAL; + goto failed_to_send_bo; + } + + if (is_partial && pexec[i].resize > bo->size) { + ret = -EINVAL; + goto failed_to_send_bo; + } + + spin_lock_irqsave(&dbc->xfer_lock, flags); + queued = bo->queued; + bo->queued = true; + if (queued) { + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + ret = -EINVAL; + goto failed_to_send_bo; + } + + bo->req_id = dbc->next_req_id++; + + list_for_each_entry(slice, &bo->slices, slice) { + /* + * If this slice does not fall within the given resize, + * skip it and continue with the next slice. + */ + if (is_partial && pexec[i].resize && pexec[i].resize <= slice->offset) + continue; + + for (j = 0; j < slice->nents; j++) + slice->reqs[j].req_id = cpu_to_le16(bo->req_id); + + /* + * For a partial execute ioctl, check whether the resize + * has cut this slice short; if so, do a partial copy, + * otherwise do a complete copy. + */ + if (is_partial && pexec[i].resize && + pexec[i].resize < slice->offset + slice->size) + ret = copy_partial_exec_reqs(qdev, slice, + pexec[i].resize - slice->offset, + dbc->id, head, tail); + else + ret = copy_exec_reqs(qdev, slice, dbc->id, head, tail); + if (ret) { + bo->queued = false; + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + goto failed_to_send_bo; + } + } + reinit_completion(&bo->xfer_done); + list_add_tail(&bo->xfer_list, &dbc->xfer_list); + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + dma_sync_sgtable_for_device(&qdev->pdev->dev, bo->sgt, bo->dir); + } + + return 0; + +failed_to_send_bo: + if (likely(obj)) + drm_gem_object_put(obj); + for (j = 0; j < i; j++) { + spin_lock_irqsave(&dbc->xfer_lock, flags); + bo = list_last_entry(&dbc->xfer_list, struct qaic_bo, xfer_list); + obj = &bo->base; + bo->queued = false; + list_del(&bo->xfer_list); + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir); + drm_gem_object_put(obj); + } + return ret; +} + +static void update_profiling_data(struct drm_file *file_priv, + struct qaic_execute_entry *exec, unsigned int count, + bool is_partial, u64 received_ts, u64 submit_ts, u32 queue_level) +{ + struct qaic_partial_execute_entry *pexec = (struct qaic_partial_execute_entry *)exec; + struct drm_gem_object *obj; + struct qaic_bo *bo; + int i; + + for (i = 0; i < count; i++) { + /* + * Since we already committed the BO to hardware, the only way + * this should fail is a pending signal. We can't cancel the + * submit to hardware, so we have to just skip the profiling + * data. In case the signal is not fatal to the process, we + * return success so that the user doesn't try to resubmit. + */ + obj = drm_gem_object_lookup(file_priv, + is_partial ? 
pexec[i].handle : exec[i].handle); + if (!obj) + break; + bo = to_qaic_bo(obj); + bo->perf_stats.req_received_ts = received_ts; + bo->perf_stats.req_submit_ts = submit_ts; + bo->perf_stats.queue_level_before = queue_level; + queue_level += bo->total_slice_nents; + drm_gem_object_put(obj); + } +} + +static int __qaic_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv, + bool is_partial) +{ + struct qaic_partial_execute_entry *pexec; + struct qaic_execute *args = data; + struct qaic_execute_entry *exec; + struct dma_bridge_chan *dbc; + int usr_rcu_id, qdev_rcu_id; + struct qaic_device *qdev; + struct qaic_user *usr; + u8 __user *user_data; + unsigned long n; + u64 received_ts; + u32 queue_level; + u64 submit_ts; + int rcu_id; + u32 head; + u32 tail; + u64 size; + int ret; + + received_ts = ktime_get_ns(); + + size = is_partial ? sizeof(*pexec) : sizeof(*exec); + + n = (unsigned long)size * args->hdr.count; + if (args->hdr.count == 0 || n / args->hdr.count != size) + return -EINVAL; + + user_data = u64_to_user_ptr(args->data); + + exec = kcalloc(args->hdr.count, size, GFP_KERNEL); + pexec = (struct qaic_partial_execute_entry *)exec; + if (!exec) + return -ENOMEM; + + if (copy_from_user(exec, user_data, n)) { + ret = -EFAULT; + goto free_exec; + } + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto unlock_dev_srcu; + } + + if (args->hdr.dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + dbc = &qdev->dbc[args->hdr.dbc_id]; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + if (!dbc->usr || dbc->usr->handle != usr->handle) { + ret = -EPERM; + goto release_ch_rcu; + } + + head = readl(dbc->dbc_base + REQHP_OFF); + tail = readl(dbc->dbc_base + REQTP_OFF); + + if (head == U32_MAX || tail == U32_MAX) { + /* PCI link error */ + ret = -ENODEV; + goto release_ch_rcu; + } + + queue_level = head <= tail ? tail - head : dbc->nelem - (head - tail); + + ret = send_bo_list_to_device(qdev, file_priv, exec, args->hdr.count, is_partial, dbc, + head, &tail); + if (ret) + goto release_ch_rcu; + + /* Finalize commit to hardware */ + submit_ts = ktime_get_ns(); + writel(tail, dbc->dbc_base + REQTP_OFF); + + update_profiling_data(file_priv, exec, args->hdr.count, is_partial, received_ts, + submit_ts, queue_level); + + if (datapath_polling) + schedule_work(&dbc->poll_work); + +release_ch_rcu: + srcu_read_unlock(&dbc->ch_lock, rcu_id); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); +free_exec: + kfree(exec); + return ret; +} + +int qaic_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + return __qaic_execute_bo_ioctl(dev, data, file_priv, false); +} + +int qaic_partial_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + return __qaic_execute_bo_ioctl(dev, data, file_priv, true); +} + +/* + * Our interrupt handling is a bit more complicated than a simple ideal, but + * sadly necessary. + * + * Each dbc has a completion queue. Entries in the queue correspond to DMA + * requests which the device has processed. The hardware already has a built + * in irq mitigation. When the device puts an entry into the queue, it will + * only trigger an interrupt if the queue was empty. 
Therefore, when adding + * the Nth event to a non-empty queue, the hardware doesn't trigger an + * interrupt. This means the host doesn't get additional interrupts signaling + * the same thing - the queue has something to process. + * This behavior can be overridden in the DMA request. + * This means that when the host receives an interrupt, it is required to + * drain the queue. + * + * This behavior is what NAPI attempts to accomplish, although we can't use + * NAPI as we don't have a netdev. We use threaded irqs instead. + * + * However, there is a situation where the host drains the queue fast enough + * that every event causes an interrupt. Typically this is not a problem as + * the rate of events would be low. However, that is not the case with + * lprnet for example. On an Intel Xeon D-2191 where we run 8 instances of + * lprnet, the host receives roughly 80k interrupts per second from the device + * (per /proc/interrupts). While NAPI documentation indicates the host should + * just chug along, sadly that behavior causes instability in some hosts. + * + * Therefore, we implement an interrupt disable scheme similar to NAPI. The + * key difference is that we will delay after draining the queue for a small + * time to allow additional events to come in via polling. Using the above + * lprnet workload, this reduces the number of interrupts processed from + * ~80k/sec to about 64 in 5 minutes and appears to solve the system + * instability. + */ +irqreturn_t dbc_irq_handler(int irq, void *data) +{ + struct dma_bridge_chan *dbc = data; + int rcu_id; + u32 head; + u32 tail; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + + if (!dbc->usr) { + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_HANDLED; + } + + head = readl(dbc->dbc_base + RSPHP_OFF); + if (head == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_NONE; + } + + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_NONE; + } + + if (head == tail) { /* queue empty */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_NONE; + } + + disable_irq_nosync(irq); + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_WAKE_THREAD; +} + +void irq_polling_work(struct work_struct *work) +{ + struct dma_bridge_chan *dbc = container_of(work, struct dma_bridge_chan, poll_work); + unsigned long flags; + int rcu_id; + u32 head; + u32 tail; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + + while (1) { + if (dbc->qdev->in_reset) { + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + if (!dbc->usr) { + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + spin_lock_irqsave(&dbc->xfer_lock, flags); + if (list_empty(&dbc->xfer_list)) { + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + + head = readl(dbc->dbc_base + RSPHP_OFF); + if (head == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + + if (head != tail) { + irq_wake_thread(dbc->irq, dbc); + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + + cond_resched(); + usleep_range(datapath_poll_interval_us, 2 * datapath_poll_interval_us); + } +} + +irqreturn_t dbc_irq_threaded_fn(int irq, void *data) +{ + struct dma_bridge_chan *dbc = data; + int event_count = 
NUM_EVENTS; + int delay_count = NUM_DELAYS; + struct qaic_device *qdev; + struct qaic_bo *bo, *i; + struct dbc_rsp *rsp; + unsigned long flags; + int rcu_id; + u16 status; + u16 req_id; + u32 head; + u32 tail; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + + head = readl(dbc->dbc_base + RSPHP_OFF); + if (head == U32_MAX) /* PCI link error */ + goto error_out; + + qdev = dbc->qdev; +read_fifo: + + if (!event_count) { + event_count = NUM_EVENTS; + cond_resched(); + } + + /* + * if this channel isn't assigned or gets unassigned during processing + * we have nothing further to do + */ + if (!dbc->usr) + goto error_out; + + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail == U32_MAX) /* PCI link error */ + goto error_out; + + if (head == tail) { /* queue empty */ + if (delay_count) { + --delay_count; + usleep_range(100, 200); + goto read_fifo; /* check for a new event */ + } + goto normal_out; + } + + delay_count = NUM_DELAYS; + while (head != tail) { + if (!event_count) + break; + --event_count; + rsp = dbc->rsp_q_base + head * sizeof(*rsp); + req_id = le16_to_cpu(rsp->req_id); + status = le16_to_cpu(rsp->status); + if (status) + pci_dbg(qdev->pdev, "req_id %d failed with status %d\n", req_id, status); + spin_lock_irqsave(&dbc->xfer_lock, flags); + /* + * A BO can receive multiple interrupts, since a BO can be + * divided into multiple slices and a buffer receives as many + * interrupts as slices. So until it receives interrupts for + * all the slices we cannot mark that buffer complete. + */ + list_for_each_entry_safe(bo, i, &dbc->xfer_list, xfer_list) { + if (bo->req_id == req_id) + bo->nr_slice_xfer_done++; + else + continue; + + if (bo->nr_slice_xfer_done < bo->nr_slice) + break; + + /* + * At this point we have received all the interrupts for + * BO, which means BO execution is complete. + */ + dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir); + bo->nr_slice_xfer_done = 0; + bo->queued = false; + list_del(&bo->xfer_list); + bo->perf_stats.req_processed_ts = ktime_get_ns(); + complete_all(&bo->xfer_done); + drm_gem_object_put(&bo->base); + break; + } + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + head = (head + 1) % dbc->nelem; + } + + /* + * Update the head pointer of response queue and let the device know + * that we have consumed elements from the queue. 
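+ * For example, with nelem = 4, head = 3 and tail = 1, the loop above + * consumed the entries at indexes 3 and 0, and the write below + * publishes head = 1 back to the device.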
+ */ + writel(head, dbc->dbc_base + RSPHP_OFF); + + /* elements might have been put in the queue while we were processing */ + goto read_fifo; + +normal_out: + if (likely(!datapath_polling)) + enable_irq(irq); + else + schedule_work(&dbc->poll_work); + /* checking the fifo and enabling irqs is a race, missed event check */ + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail != U32_MAX && head != tail) { + if (likely(!datapath_polling)) + disable_irq_nosync(irq); + goto read_fifo; + } + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_HANDLED; + +error_out: + srcu_read_unlock(&dbc->ch_lock, rcu_id); + if (likely(!datapath_polling)) + enable_irq(irq); + else + schedule_work(&dbc->poll_work); + + return IRQ_HANDLED; +} + +int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct qaic_wait *args = data; + int usr_rcu_id, qdev_rcu_id; + struct dma_bridge_chan *dbc; + struct drm_gem_object *obj; + struct qaic_device *qdev; + unsigned long timeout; + struct qaic_user *usr; + struct qaic_bo *bo; + int rcu_id; + int ret; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto unlock_dev_srcu; + } + + if (args->pad != 0) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + if (args->dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + dbc = &qdev->dbc[args->dbc_id]; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + if (dbc->usr != usr) { + ret = -EPERM; + goto unlock_ch_srcu; + } + + obj = drm_gem_object_lookup(file_priv, args->handle); + if (!obj) { + ret = -ENOENT; + goto unlock_ch_srcu; + } + + bo = to_qaic_bo(obj); + timeout = args->timeout ? 
args->timeout : wait_exec_default_timeout_ms; + timeout = msecs_to_jiffies(timeout); + ret = wait_for_completion_interruptible_timeout(&bo->xfer_done, timeout); + if (!ret) { + ret = -ETIMEDOUT; + goto put_obj; + } + if (ret > 0) + ret = 0; + + if (!dbc->usr) + ret = -EPERM; + +put_obj: + drm_gem_object_put(obj); +unlock_ch_srcu: + srcu_read_unlock(&dbc->ch_lock, rcu_id); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct qaic_perf_stats_entry *ent = NULL; + struct qaic_perf_stats *args = data; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + struct qaic_user *usr; + struct qaic_bo *bo; + int ret, i; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto unlock_dev_srcu; + } + + if (args->hdr.dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + goto unlock_dev_srcu; + } + + ent = kcalloc(args->hdr.count, sizeof(*ent), GFP_KERNEL); + if (!ent) { + ret = -ENOMEM; + goto unlock_dev_srcu; + } + + ret = copy_from_user(ent, u64_to_user_ptr(args->data), args->hdr.count * sizeof(*ent)); + if (ret) { + ret = -EFAULT; + goto free_ent; + } + + for (i = 0; i < args->hdr.count; i++) { + obj = drm_gem_object_lookup(file_priv, ent[i].handle); + if (!obj) { + ret = -ENOENT; + goto free_ent; + } + bo = to_qaic_bo(obj); + /* + * If the perf stats ioctl is called before the wait ioctl has + * completed, the latency information is invalid. + */ + if (bo->perf_stats.req_processed_ts < bo->perf_stats.req_submit_ts) { + ent[i].device_latency_us = 0; + } else { + ent[i].device_latency_us = div_u64((bo->perf_stats.req_processed_ts - + bo->perf_stats.req_submit_ts), 1000); + } + ent[i].submit_latency_us = div_u64((bo->perf_stats.req_submit_ts - + bo->perf_stats.req_received_ts), 1000); + ent[i].queue_level_before = bo->perf_stats.queue_level_before; + ent[i].num_queue_element = bo->total_slice_nents; + drm_gem_object_put(obj); + } + + if (copy_to_user(u64_to_user_ptr(args->data), ent, args->hdr.count * sizeof(*ent))) + ret = -EFAULT; + +free_ent: + kfree(ent); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +static void empty_xfer_list(struct qaic_device *qdev, struct dma_bridge_chan *dbc) +{ + unsigned long flags; + struct qaic_bo *bo; + + spin_lock_irqsave(&dbc->xfer_lock, flags); + while (!list_empty(&dbc->xfer_list)) { + bo = list_first_entry(&dbc->xfer_list, typeof(*bo), xfer_list); + bo->queued = false; + list_del(&bo->xfer_list); + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir); + complete_all(&bo->xfer_done); + drm_gem_object_put(&bo->base); + spin_lock_irqsave(&dbc->xfer_lock, flags); + } + spin_unlock_irqrestore(&dbc->xfer_lock, flags); +} + +int disable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr) +{ + if (!qdev->dbc[dbc_id].usr || qdev->dbc[dbc_id].usr->handle != usr->handle) + return -EPERM; + + qdev->dbc[dbc_id].usr = NULL; + synchronize_srcu(&qdev->dbc[dbc_id].ch_lock); + return 0; +} + +/** + * enable_dbc - Enable the DBC. 
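A DBC (DMA bridge channel, struct dma_bridge_chan) is the request and + * response queue pair that carries the datapath for a single user. + *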
DBCs are disabled by removing the user's context from them. Adding the + * user's context back to a DBC enables it. This function trusts the + * DBC ID passed and expects the DBC to be disabled. + * @qdev: qaic device handle + * @dbc_id: ID of the DBC + * @usr: User context + */ +void enable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr) +{ + qdev->dbc[dbc_id].usr = usr; +} + +void wakeup_dbc(struct qaic_device *qdev, u32 dbc_id) +{ + struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id]; + + dbc->usr = NULL; + empty_xfer_list(qdev, dbc); + synchronize_srcu(&dbc->ch_lock); +} + +void release_dbc(struct qaic_device *qdev, u32 dbc_id) +{ + struct bo_slice *slice, *slice_temp; + struct qaic_bo *bo, *bo_temp; + struct dma_bridge_chan *dbc; + + dbc = &qdev->dbc[dbc_id]; + if (!dbc->in_use) + return; + + wakeup_dbc(qdev, dbc_id); + + dma_free_coherent(&qdev->pdev->dev, dbc->total_size, dbc->req_q_base, dbc->dma_addr); + dbc->total_size = 0; + dbc->req_q_base = NULL; + dbc->dma_addr = 0; + dbc->nelem = 0; + dbc->usr = NULL; + + list_for_each_entry_safe(bo, bo_temp, &dbc->bo_lists, bo_list) { + list_for_each_entry_safe(slice, slice_temp, &bo->slices, slice) + kref_put(&slice->ref_count, free_slice); + bo->sliced = false; + INIT_LIST_HEAD(&bo->slices); + bo->total_slice_nents = 0; + bo->dir = 0; + bo->dbc = NULL; + bo->nr_slice = 0; + bo->nr_slice_xfer_done = 0; + bo->queued = false; + bo->req_id = 0; + init_completion(&bo->xfer_done); + complete_all(&bo->xfer_done); + list_del(&bo->bo_list); + bo->perf_stats.req_received_ts = 0; + bo->perf_stats.req_submit_ts = 0; + bo->perf_stats.req_processed_ts = 0; + bo->perf_stats.queue_level_before = 0; + } + + dbc->in_use = false; + wake_up(&dbc->dbc_release); +} diff --git a/drivers/accel/qaic/qaic_drv.c b/drivers/accel/qaic/qaic_drv.c new file mode 100644 index 000000000000..1106ad88a5b6 --- /dev/null +++ b/drivers/accel/qaic/qaic_drv.c @@ -0,0 +1,647 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
*/ + +#include <linux/delay.h> +#include <linux/dma-mapping.h> +#include <linux/idr.h> +#include <linux/interrupt.h> +#include <linux/list.h> +#include <linux/kref.h> +#include <linux/mhi.h> +#include <linux/module.h> +#include <linux/msi.h> +#include <linux/mutex.h> +#include <linux/pci.h> +#include <linux/spinlock.h> +#include <linux/workqueue.h> +#include <linux/wait.h> +#include <drm/drm_accel.h> +#include <drm/drm_drv.h> +#include <drm/drm_file.h> +#include <drm/drm_gem.h> +#include <drm/drm_ioctl.h> +#include <uapi/drm/qaic_accel.h> + +#include "mhi_controller.h" +#include "mhi_qaic_ctrl.h" +#include "qaic.h" + +MODULE_IMPORT_NS(DMA_BUF); + +#define PCI_DEV_AIC100 0xa100 +#define QAIC_NAME "qaic" +#define QAIC_DESC "Qualcomm Cloud AI Accelerators" +#define CNTL_MAJOR 5 +#define CNTL_MINOR 0 + +bool datapath_polling; +module_param(datapath_polling, bool, 0400); +MODULE_PARM_DESC(datapath_polling, "Operate the datapath in polling mode"); +static bool link_up; +static DEFINE_IDA(qaic_usrs); + +static int qaic_create_drm_device(struct qaic_device *qdev, s32 partition_id); +static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id); + +static void free_usr(struct kref *kref) +{ + struct qaic_user *usr = container_of(kref, struct qaic_user, ref_count); + + cleanup_srcu_struct(&usr->qddev_lock); + ida_free(&qaic_usrs, usr->handle); + kfree(usr); +} + +static int qaic_open(struct drm_device *dev, struct drm_file *file) +{ + struct qaic_drm_device *qddev = dev->dev_private; + struct qaic_device *qdev = qddev->qdev; + struct qaic_user *usr; + int rcu_id; + int ret; + + rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto dev_unlock; + } + + usr = kmalloc(sizeof(*usr), GFP_KERNEL); + if (!usr) { + ret = -ENOMEM; + goto dev_unlock; + } + + usr->handle = ida_alloc(&qaic_usrs, GFP_KERNEL); + if (usr->handle < 0) { + ret = usr->handle; + goto free_usr; + } + usr->qddev = qddev; + atomic_set(&usr->chunk_id, 0); + init_srcu_struct(&usr->qddev_lock); + kref_init(&usr->ref_count); + + ret = mutex_lock_interruptible(&qddev->users_mutex); + if (ret) + goto cleanup_usr; + + list_add(&usr->node, &qddev->users); + mutex_unlock(&qddev->users_mutex); + + file->driver_priv = usr; + + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return 0; + +cleanup_usr: + cleanup_srcu_struct(&usr->qddev_lock); +free_usr: + kfree(usr); +dev_unlock: + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return ret; +} + +static void qaic_postclose(struct drm_device *dev, struct drm_file *file) +{ + struct qaic_user *usr = file->driver_priv; + struct qaic_drm_device *qddev; + struct qaic_device *qdev; + int qdev_rcu_id; + int usr_rcu_id; + int i; + + qddev = usr->qddev; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (qddev) { + qdev = qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (!qdev->in_reset) { + qaic_release_usr(qdev, usr); + for (i = 0; i < qdev->num_dbc; ++i) + if (qdev->dbc[i].usr && qdev->dbc[i].usr->handle == usr->handle) + release_dbc(qdev, i); + } + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + + mutex_lock(&qddev->users_mutex); + if (!list_empty(&usr->node)) + list_del_init(&usr->node); + mutex_unlock(&qddev->users_mutex); + } + + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + kref_put(&usr->ref_count, free_usr); + + file->driver_priv = NULL; +} + +DEFINE_DRM_ACCEL_FOPS(qaic_accel_fops); + +static const struct drm_ioctl_desc qaic_drm_ioctls[] = { + DRM_IOCTL_DEF_DRV(QAIC_MANAGE, qaic_manage_ioctl, 0), + 
DRM_IOCTL_DEF_DRV(QAIC_CREATE_BO, qaic_create_bo_ioctl, 0), + DRM_IOCTL_DEF_DRV(QAIC_MMAP_BO, qaic_mmap_bo_ioctl, 0), + DRM_IOCTL_DEF_DRV(QAIC_ATTACH_SLICE_BO, qaic_attach_slice_bo_ioctl, 0), + DRM_IOCTL_DEF_DRV(QAIC_EXECUTE_BO, qaic_execute_bo_ioctl, 0), + DRM_IOCTL_DEF_DRV(QAIC_PARTIAL_EXECUTE_BO, qaic_partial_execute_bo_ioctl, 0), + DRM_IOCTL_DEF_DRV(QAIC_WAIT_BO, qaic_wait_bo_ioctl, 0), + DRM_IOCTL_DEF_DRV(QAIC_PERF_STATS_BO, qaic_perf_stats_bo_ioctl, 0), +}; + +static const struct drm_driver qaic_accel_driver = { + .driver_features = DRIVER_GEM | DRIVER_COMPUTE_ACCEL, + + .name = QAIC_NAME, + .desc = QAIC_DESC, + .date = "20190618", + + .fops = &qaic_accel_fops, + .open = qaic_open, + .postclose = qaic_postclose, + + .ioctls = qaic_drm_ioctls, + .num_ioctls = ARRAY_SIZE(qaic_drm_ioctls), + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, + .gem_prime_import = qaic_gem_prime_import, +}; + +static int qaic_create_drm_device(struct qaic_device *qdev, s32 partition_id) +{ + struct qaic_drm_device *qddev; + struct drm_device *ddev; + struct device *pdev; + int ret; + + /* Hold off implementing partitions until the uapi is determined */ + if (partition_id != QAIC_NO_PARTITION) + return -EINVAL; + + pdev = &qdev->pdev->dev; + + qddev = kzalloc(sizeof(*qddev), GFP_KERNEL); + if (!qddev) + return -ENOMEM; + + ddev = drm_dev_alloc(&qaic_accel_driver, pdev); + if (IS_ERR(ddev)) { + ret = PTR_ERR(ddev); + goto ddev_fail; + } + + ddev->dev_private = qddev; + qddev->ddev = ddev; + + qddev->qdev = qdev; + qddev->partition_id = partition_id; + INIT_LIST_HEAD(&qddev->users); + mutex_init(&qddev->users_mutex); + + qdev->qddev = qddev; + + ret = drm_dev_register(ddev, 0); + if (ret) { + pci_dbg(qdev->pdev, "%s: drm_dev_register failed %d\n", __func__, ret); + goto drm_reg_fail; + } + + return 0; + +drm_reg_fail: + mutex_destroy(&qddev->users_mutex); + qdev->qddev = NULL; + drm_dev_put(ddev); +ddev_fail: + kfree(qddev); + return ret; +} + +static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id) +{ + struct qaic_drm_device *qddev; + struct qaic_user *usr; + + qddev = qdev->qddev; + + /* + * Existing users get unresolvable errors till they close FDs. + * Need to sync carefully with users calling close(). The + * list of users can be modified elsewhere when the lock isn't + * held here, but the sync'ing the srcu with the mutex held + * could deadlock. Grab the mutex so that the list will be + * unmodified. The user we get will exist as long as the + * lock is held. Signal that the qcdev is going away, and + * grab a reference to the user so they don't go away for + * synchronize_srcu(). Then release the mutex to avoid + * deadlock and make sure the user has observed the signal. + * With the lock released, we cannot maintain any state of the + * user list. 
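+ * That is why the loop below re-acquires the mutex on every iteration + * and always restarts from the head of the list.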
+ */ + mutex_lock(&qddev->users_mutex); + while (!list_empty(&qddev->users)) { + usr = list_first_entry(&qddev->users, struct qaic_user, node); + list_del_init(&usr->node); + kref_get(&usr->ref_count); + usr->qddev = NULL; + mutex_unlock(&qddev->users_mutex); + synchronize_srcu(&usr->qddev_lock); + kref_put(&usr->ref_count, free_usr); + mutex_lock(&qddev->users_mutex); + } + mutex_unlock(&qddev->users_mutex); + + if (qddev->ddev) { + drm_dev_unregister(qddev->ddev); + drm_dev_put(qddev->ddev); + } + + kfree(qddev); +} + +static int qaic_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id *id) +{ + struct qaic_device *qdev; + u16 major, minor; + int ret; + + /* + * Invoking this function indicates that the control channel to the + * device is available. We use that as a signal to indicate that + * the device side firmware has booted. The device side firmware + * manages the device resources, so we need to communicate with it + * via the control channel in order to utilize the device. Therefore + * we wait until this signal to create the drm dev that userspace will + * use to control the device, because without the device side firmware, + * userspace can't do anything useful. + */ + + qdev = pci_get_drvdata(to_pci_dev(mhi_dev->mhi_cntrl->cntrl_dev)); + + qdev->in_reset = false; + + dev_set_drvdata(&mhi_dev->dev, qdev); + qdev->cntl_ch = mhi_dev; + + ret = qaic_control_open(qdev); + if (ret) { + pci_dbg(qdev->pdev, "%s: control_open failed %d\n", __func__, ret); + return ret; + } + + ret = get_cntl_version(qdev, NULL, &major, &minor); + if (ret || major != CNTL_MAJOR || minor > CNTL_MINOR) { + pci_err(qdev->pdev, "%s: Control protocol version (%d.%d) not supported. Supported version is (%d.%d). Ret: %d\n", + __func__, major, minor, CNTL_MAJOR, CNTL_MINOR, ret); + ret = -EINVAL; + goto close_control; + } + + ret = qaic_create_drm_device(qdev, QAIC_NO_PARTITION); + + return ret; + +close_control: + qaic_control_close(qdev); + return ret; +} + +static void qaic_mhi_remove(struct mhi_device *mhi_dev) +{ +/* This is redundant since we have already observed the device crash */ +} + +static void qaic_notify_reset(struct qaic_device *qdev) +{ + int i; + + qdev->in_reset = true; + /* wake up any waiters to avoid waiting for timeouts at sync */ + wake_all_cntl(qdev); + for (i = 0; i < qdev->num_dbc; ++i) + wakeup_dbc(qdev, i); + synchronize_srcu(&qdev->dev_lock); +} + +void qaic_dev_reset_clean_local_state(struct qaic_device *qdev, bool exit_reset) +{ + int i; + + qaic_notify_reset(qdev); + + /* remove drmdevs to prevent new users from coming in */ + qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION); + + /* start tearing things down */ + for (i = 0; i < qdev->num_dbc; ++i) + release_dbc(qdev, i); + + if (exit_reset) + qdev->in_reset = false; +} + +static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_device_id *id) +{ + struct qaic_device *qdev; + int i; + + qdev = devm_kzalloc(&pdev->dev, sizeof(*qdev), GFP_KERNEL); + if (!qdev) + return NULL; + + if (id->device == PCI_DEV_AIC100) { + qdev->num_dbc = 16; + qdev->dbc = devm_kcalloc(&pdev->dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL); + if (!qdev->dbc) + return NULL; + } + + qdev->cntl_wq = alloc_workqueue("qaic_cntl", WQ_UNBOUND, 0); + if (!qdev->cntl_wq) + return NULL; + + pci_set_drvdata(pdev, qdev); + qdev->pdev = pdev; + + mutex_init(&qdev->cntl_mutex); + INIT_LIST_HEAD(&qdev->cntl_xfer_list); + init_srcu_struct(&qdev->dev_lock); + + for (i = 0; i < qdev->num_dbc; ++i) { + 
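/* + * Per-DBC state: each channel has its own transfer-queue lock, its own + * SRCU domain for teardown synchronization, and its own BO lists. + */ +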
spin_lock_init(&qdev->dbc[i].xfer_lock); + qdev->dbc[i].qdev = qdev; + qdev->dbc[i].id = i; + INIT_LIST_HEAD(&qdev->dbc[i].xfer_list); + init_srcu_struct(&qdev->dbc[i].ch_lock); + init_waitqueue_head(&qdev->dbc[i].dbc_release); + INIT_LIST_HEAD(&qdev->dbc[i].bo_lists); + } + + return qdev; +} + +static void cleanup_qdev(struct qaic_device *qdev) +{ + int i; + + for (i = 0; i < qdev->num_dbc; ++i) + cleanup_srcu_struct(&qdev->dbc[i].ch_lock); + cleanup_srcu_struct(&qdev->dev_lock); + pci_set_drvdata(qdev->pdev, NULL); + destroy_workqueue(qdev->cntl_wq); +} + +static int init_pci(struct qaic_device *qdev, struct pci_dev *pdev) +{ + int bars; + int ret; + + bars = pci_select_bars(pdev, IORESOURCE_MEM); + + /* make sure the device has the expected BARs */ + if (bars != (BIT(0) | BIT(2) | BIT(4))) { + pci_dbg(pdev, "%s: expected BARs 0, 2, and 4 not found in device. Found 0x%x\n", + __func__, bars); + return -EINVAL; + } + + ret = pcim_enable_device(pdev); + if (ret) + return ret; + + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (ret) + return ret; + ret = dma_set_max_seg_size(&pdev->dev, UINT_MAX); + if (ret) + return ret; + + qdev->bar_0 = devm_ioremap_resource(&pdev->dev, &pdev->resource[0]); + if (IS_ERR(qdev->bar_0)) + return PTR_ERR(qdev->bar_0); + + qdev->bar_2 = devm_ioremap_resource(&pdev->dev, &pdev->resource[2]); + if (IS_ERR(qdev->bar_2)) + return PTR_ERR(qdev->bar_2); + + /* Managed release since we use pcim_enable_device above */ + pci_set_master(pdev); + + return 0; +} + +static int init_msi(struct qaic_device *qdev, struct pci_dev *pdev) +{ + int mhi_irq; + int ret; + int i; + + /* Managed release since we use pcim_enable_device */ + ret = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI); + if (ret < 0) + return ret; + + if (ret < 32) { + pci_err(pdev, "%s: Requested 32 MSIs. 
+
+static int qaic_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct qaic_device *qdev;
+	int mhi_irq;
+	int ret;
+	int i;
+
+	qdev = create_qdev(pdev, id);
+	if (!qdev)
+		return -ENOMEM;
+
+	ret = init_pci(qdev, pdev);
+	if (ret)
+		goto cleanup_qdev;
+
+	for (i = 0; i < qdev->num_dbc; ++i)
+		qdev->dbc[i].dbc_base = qdev->bar_2 + QAIC_DBC_OFF(i);
+
+	mhi_irq = init_msi(qdev, pdev);
+	if (mhi_irq < 0) {
+		ret = mhi_irq;
+		goto cleanup_qdev;
+	}
+
+	qdev->mhi_cntrl = qaic_mhi_register_controller(pdev, qdev->bar_0, mhi_irq);
+	if (IS_ERR(qdev->mhi_cntrl)) {
+		ret = PTR_ERR(qdev->mhi_cntrl);
+		goto cleanup_qdev;
+	}
+
+	return 0;
+
+cleanup_qdev:
+	cleanup_qdev(qdev);
+	return ret;
+}
+
+static void qaic_pci_remove(struct pci_dev *pdev)
+{
+	struct qaic_device *qdev = pci_get_drvdata(pdev);
+
+	if (!qdev)
+		return;
+
+	qaic_dev_reset_clean_local_state(qdev, false);
+	qaic_mhi_free_controller(qdev->mhi_cntrl, link_up);
+	cleanup_qdev(qdev);
+}
+
+static void qaic_pci_shutdown(struct pci_dev *pdev)
+{
+	/* see qaic_exit for what link_up is doing */
+	link_up = true;
+	qaic_pci_remove(pdev);
+}
+
+static pci_ers_result_t qaic_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t error)
+{
+	return PCI_ERS_RESULT_NEED_RESET;
+}
+
+static void qaic_pci_reset_prepare(struct pci_dev *pdev)
+{
+	struct qaic_device *qdev = pci_get_drvdata(pdev);
+
+	qaic_notify_reset(qdev);
+	qaic_mhi_start_reset(qdev->mhi_cntrl);
+	qaic_dev_reset_clean_local_state(qdev, false);
+}
+
+static void qaic_pci_reset_done(struct pci_dev *pdev)
+{
+	struct qaic_device *qdev = pci_get_drvdata(pdev);
+
+	qdev->in_reset = false;
+	qaic_mhi_reset_done(qdev->mhi_cntrl);
+}
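[Editor's aside] qaic_pci_error_detected() unconditionally asks the PCI core for a reset, after which the core drives the reset_prepare()/reset_done() pair above. A sketch of a slightly fuller policy, illustrative only (qaic's one-liner is the simplest valid choice):

#include <linux/pci.h>

/* Illustrative only; not what qaic does. */
static pci_ers_result_t example_error_detected(struct pci_dev *pdev,
					       pci_channel_state_t state)
{
	/* the link is permanently gone: tell the core to disconnect us */
	if (state == pci_channel_io_perm_failure)
		return PCI_ERS_RESULT_DISCONNECT;

	/* otherwise ask the core to reset the device and call reset_done() */
	return PCI_ERS_RESULT_NEED_RESET;
}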
+
+static const struct mhi_device_id qaic_mhi_match_table[] = {
+	{ .chan = "QAIC_CONTROL", },
+	{},
+};
+
+static struct mhi_driver qaic_mhi_driver = {
+	.id_table = qaic_mhi_match_table,
+	.remove = qaic_mhi_remove,
+	.probe = qaic_mhi_probe,
+	.ul_xfer_cb = qaic_mhi_ul_xfer_cb,
+	.dl_xfer_cb = qaic_mhi_dl_xfer_cb,
+	.driver = {
+		.name = "qaic_mhi",
+	},
+};
+
+static const struct pci_device_id qaic_ids[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC100), },
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, qaic_ids);
+
+static const struct pci_error_handlers qaic_pci_err_handler = {
+	.error_detected = qaic_pci_error_detected,
+	.reset_prepare = qaic_pci_reset_prepare,
+	.reset_done = qaic_pci_reset_done,
+};
+
+static struct pci_driver qaic_pci_driver = {
+	.name = QAIC_NAME,
+	.id_table = qaic_ids,
+	.probe = qaic_pci_probe,
+	.remove = qaic_pci_remove,
+	.shutdown = qaic_pci_shutdown,
+	.err_handler = &qaic_pci_err_handler,
+};
+
+static int __init qaic_init(void)
+{
+	int ret;
+
+	ret = mhi_driver_register(&qaic_mhi_driver);
+	if (ret) {
+		pr_debug("qaic: mhi_driver_register failed %d\n", ret);
+		return ret;
+	}
+
+	ret = pci_register_driver(&qaic_pci_driver);
+	if (ret) {
+		pr_debug("qaic: pci_register_driver failed %d\n", ret);
+		goto free_mhi;
+	}
+
+	ret = mhi_qaic_ctrl_init();
+	if (ret) {
+		pr_debug("qaic: mhi_qaic_ctrl_init failed %d\n", ret);
+		goto free_pci;
+	}
+
+	return 0;
+
+free_pci:
+	pci_unregister_driver(&qaic_pci_driver);
+free_mhi:
+	mhi_driver_unregister(&qaic_mhi_driver);
+	return ret;
+}
+
+static void __exit qaic_exit(void)
+{
+	/*
+	 * We assume that qaic_pci_remove() is called due to a hotplug event
+	 * which would mean that the link is down, and thus
+	 * qaic_mhi_free_controller() should not try to access the device
+	 * during cleanup.
+	 * We call pci_unregister_driver() below, which also triggers
+	 * qaic_pci_remove(), but since this is module exit, we expect the link
+	 * to the device to be up, in which case qaic_mhi_free_controller()
+	 * should try to access the device during cleanup to put the device in
+	 * a sane state.
+	 * For that reason, we set link_up here to let qaic_mhi_free_controller
+	 * know the expected link state. Since the module is going to be
+	 * removed at the end of this, we don't need to worry about
+	 * reinitializing the link_up state after the cleanup is done.
+	 */
+	link_up = true;
+	mhi_qaic_ctrl_deinit();
+	pci_unregister_driver(&qaic_pci_driver);
+	mhi_driver_unregister(&qaic_mhi_driver);
+}
+
+module_init(qaic_init);
+module_exit(qaic_exit);
+
+MODULE_AUTHOR(QAIC_DESC " Kernel Driver Team");
+MODULE_DESCRIPTION(QAIC_DESC " Accel Driver");
+MODULE_LICENSE("GPL");
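[Editor's aside] The qaic_exit() comment hinges on qaic_mhi_free_controller() honoring the link_up flag. That function lives in qaic's MHI controller code, outside this hunk; purely as an illustration of the pattern the comment describes, assuming the standard kernel MHI controller API:

#include <linux/mhi.h>

/* Illustrative sketch, not the driver's actual implementation. */
void example_free_controller(struct mhi_controller *mhi_cntrl, bool link_up)
{
	/*
	 * mhi_power_down()'s second argument selects a graceful shutdown,
	 * which requires talking to the device. Pass link_up through so the
	 * device is only touched when the PCI link is believed to be up;
	 * on a surprise removal, host-side state is torn down without MMIO.
	 */
	mhi_power_down(mhi_cntrl, link_up);
	mhi_unregister_controller(mhi_cntrl);
}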