691 files changed, 16753 insertions, 11135 deletions
@@ -68,6 +68,7 @@ Juha Yrjola <juha.yrjola@solidboot.com> Kay Sievers <kay.sievers@vrfy.org> Kenneth W Chen <kenneth.w.chen@intel.com> Koushik <raghavendra.koushik@neterion.com> +Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Leonid I Ananiev <leonid.i.ananiev@intel.com> Linas Vepstas <linas@austin.ibm.com> Mark Brown <broonie@sirena.org.uk> @@ -111,3 +112,4 @@ Uwe Kleine-König <ukl@pengutronix.de> Uwe Kleine-König <Uwe.Kleine-Koenig@digi.com> Valdis Kletnieks <Valdis.Kletnieks@vt.edu> Takashi YOSHII <takashi.yoshii.zj@renesas.com> +Yusuke Goda <goda.yusuke@renesas.com> diff --git a/Documentation/ABI/stable/sysfs-acpi-pmprofile b/Documentation/ABI/stable/sysfs-acpi-pmprofile new file mode 100644 index 000000000000..964c7a8afb26 --- /dev/null +++ b/Documentation/ABI/stable/sysfs-acpi-pmprofile @@ -0,0 +1,22 @@ +What: /sys/firmware/acpi/pm_profile +Date: 03-Nov-2011 +KernelVersion: v3.2 +Contact: linux-acpi@vger.kernel.org +Description: The ACPI pm_profile sysfs interface exports the platform + power management (and performance) requirement expectations + as provided by BIOS. The integer value is directly passed as + retrieved from the FADT ACPI table. +Values: For possible values see ACPI specification: + 5.2.9 Fixed ACPI Description Table (FADT) + Field: Preferred_PM_Profile + + Currently these values are defined by spec: + 0 Unspecified + 1 Desktop + 2 Mobile + 3 Workstation + 4 Enterprise Server + 5 SOHO Server + 6 Appliance PC + 7 Performance Server + >7 Reserved diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl index c27915893974..196b8b9dba11 100644 --- a/Documentation/DocBook/drm.tmpl +++ b/Documentation/DocBook/drm.tmpl @@ -32,7 +32,7 @@ The Linux DRM layer contains code intended to support the needs of complex graphics devices, usually containing programmable pipelines well suited to 3D graphics acceleration. Graphics - drivers in the kernel can make use of DRM functions to make + drivers in the kernel may make use of DRM functions to make tasks like memory management, interrupt handling and DMA easier, and provide a uniform interface to applications. </para> @@ -57,10 +57,10 @@ existing drivers. </para> <para> - First, we'll go over some typical driver initialization + First, we go over some typical driver initialization requirements, like setting up command buffers, creating an initial output configuration, and initializing core services. - Subsequent sections will cover core internals in more detail, + Subsequent sections cover core internals in more detail, providing implementation notes and examples. </para> <para> @@ -74,7 +74,7 @@ </para> <para> The core of every DRM driver is struct drm_driver. Drivers - will typically statically initialize a drm_driver structure, + typically statically initialize a drm_driver structure, then pass it to drm_init() at load time. </para> @@ -88,8 +88,8 @@ </para> <programlisting> static struct drm_driver driver = { - /* don't use mtrr's here, the Xserver or user space app should - * deal with them for intel hardware. + /* Don't use MTRRs here; the Xserver or userspace app should + * deal with them for Intel hardware. */ .driver_features = DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | @@ -154,8 +154,8 @@ </programlisting> <para> In the example above, taken from the i915 DRM driver, the driver - sets several flags indicating what core features it supports. - We'll go over the individual callbacks in later sections. 
Since + sets several flags indicating what core features it supports; + we go over the individual callbacks in later sections. Since flags indicate which features your driver supports to the DRM core, you need to set most of them prior to calling drm_init(). Some, like DRIVER_MODESET can be set later based on user supplied parameters, @@ -203,8 +203,8 @@ <term>DRIVER_HAVE_IRQ</term><term>DRIVER_IRQ_SHARED</term> <listitem> <para> - DRIVER_HAVE_IRQ indicates whether the driver has a IRQ - handler, DRIVER_IRQ_SHARED indicates whether the device & + DRIVER_HAVE_IRQ indicates whether the driver has an IRQ + handler. DRIVER_IRQ_SHARED indicates whether the device & handler support shared IRQs (note that this is required of PCI drivers). </para> @@ -214,8 +214,8 @@ <term>DRIVER_DMA_QUEUE</term> <listitem> <para> - If the driver queues DMA requests and completes them - asynchronously, this flag should be set. Deprecated. + Should be set if the driver queues DMA requests and completes them + asynchronously. Deprecated. </para> </listitem> </varlistentry> @@ -238,7 +238,7 @@ </variablelist> <para> In this specific case, the driver requires AGP and supports - IRQs. DMA, as we'll see, is handled by device specific ioctls + IRQs. DMA, as discussed later, is handled by device-specific ioctls in this case. It also supports the kernel mode setting APIs, though unlike in the actual i915 driver source, this example unconditionally exports KMS capability. @@ -269,36 +269,34 @@ initial output configuration. </para> <para> - Note that the tasks performed at driver load time must not - conflict with DRM client requirements. For instance, if user + If compatibility is a concern (e.g. with drivers converted over + to the new interfaces from the old ones), care must be taken to + prevent device initialization and control that is incompatible with + currently active userspace drivers. For instance, if user level mode setting drivers are in use, it would be problematic to perform output discovery & configuration at load time. - Likewise, if pre-memory management aware user level drivers are + Likewise, if user-level drivers unaware of memory management are in use, memory management and command buffer setup may need to - be omitted. These requirements are driver specific, and care + be omitted. These requirements are driver-specific, and care needs to be taken to keep both old and new applications and libraries working. The i915 driver supports the "modeset" module parameter to control whether advanced features are - enabled at load time or in legacy fashion. If compatibility is - a concern (e.g. with drivers converted over to the new interfaces - from the old ones), care must be taken to prevent incompatible - device initialization and control with the currently active - userspace drivers. + enabled at load time or in legacy fashion. </para> <sect2> <title>Driver private & performance counters</title> <para> The driver private hangs off the main drm_device structure and - can be used for tracking various device specific bits of + can be used for tracking various device-specific bits of information, like register offsets, command buffer status, register state for suspend/resume, etc. At load time, a - driver can simply allocate one and set drm_device.dev_priv - appropriately; at unload the driver can free it and set - drm_device.dev_priv to NULL. + driver may simply allocate one and set drm_device.dev_priv + appropriately; it should be freed and drm_device.dev_priv set + to NULL when the driver is unloaded. 
</para> <para> - The DRM supports several counters which can be used for rough + The DRM supports several counters which may be used for rough performance characterization. Note that the DRM stat counter system is not often used by applications, and supporting additional counters is completely optional. @@ -307,15 +305,15 @@ These interfaces are deprecated and should not be used. If performance monitoring is desired, the developer should investigate and potentially enhance the kernel perf and tracing infrastructure to export - GPU related performance information to performance monitoring - tools and applications. + GPU related performance information for consumption by performance + monitoring tools and applications. </para> </sect2> <sect2> <title>Configuring the device</title> <para> - Obviously, device configuration will be device specific. + Obviously, device configuration is device-specific. However, there are several common operations: finding a device's PCI resources, mapping them, and potentially setting up an IRQ handler. @@ -323,10 +321,10 @@ <para> Finding & mapping resources is fairly straightforward. The DRM wrapper functions, drm_get_resource_start() and - drm_get_resource_len() can be used to find BARs on the given + drm_get_resource_len(), may be used to find BARs on the given drm_device struct. Once those values have been retrieved, the driver load function can call drm_addmap() to create a new - mapping for the BAR in question. Note you'll probably want a + mapping for the BAR in question. Note that you probably want a drm_local_map_t in your driver private structure to track any mappings you create. <!-- !Fdrivers/gpu/drm/drm_bufs.c drm_get_resource_* --> @@ -335,20 +333,20 @@ <para> if compatibility with other operating systems isn't a concern (DRM drivers can run under various BSD variants and OpenSolaris), - native Linux calls can be used for the above, e.g. pci_resource_* + native Linux calls may be used for the above, e.g. pci_resource_* and iomap*/iounmap. See the Linux device driver book for more info. </para> <para> - Once you have a register map, you can use the DRM_READn() and + Once you have a register map, you may use the DRM_READn() and DRM_WRITEn() macros to access the registers on your device, or - use driver specific versions to offset into your MMIO space - relative to a driver specific base pointer (see I915_READ for - example). + use driver-specific versions to offset into your MMIO space + relative to a driver-specific base pointer (see I915_READ for + an example). </para> <para> If your device supports interrupt generation, you may want to - setup an interrupt handler at driver load time as well. This + set up an interrupt handler when the driver is loaded. This is done using the drm_irq_install() function. If your device supports vertical blank interrupts, it should call drm_vblank_init() to initialize the core vblank handling code before @@ -357,7 +355,7 @@ </para> <!--!Fdrivers/char/drm/drm_irq.c drm_irq_install--> <para> - Once your interrupt handler is registered (it'll use your + Once your interrupt handler is registered (it uses your drm_driver.irq_handler as the actual interrupt handling function), you can safely enable interrupts on your device, assuming any other state your interrupt handler uses is also @@ -371,10 +369,10 @@ using the pci_map_rom() call, a convenience function that takes care of mapping the actual ROM, whether it has been shadowed into memory (typically at address 0xc0000) or exists - on the PCI device in the ROM BAR. 
Note that once you've - mapped the ROM and extracted any necessary information, be - sure to unmap it; on many devices the ROM address decoder is - shared with other BARs, so leaving it mapped can cause + on the PCI device in the ROM BAR. Note that after the ROM + has been mapped and any necessary information has been extracted, + it should be unmapped; on many devices, the ROM address decoder is + shared with other BARs, so leaving it mapped could cause undesired behavior like hangs or memory corruption. <!--!Fdrivers/pci/rom.c pci_map_rom--> </para> @@ -389,9 +387,9 @@ should support a memory manager. </para> <para> - If your driver supports memory management (it should!), you'll + If your driver supports memory management (it should!), you need to set that up at load time as well. How you initialize - it depends on which memory manager you're using, TTM or GEM. + it depends on which memory manager you're using: TTM or GEM. </para> <sect3> <title>TTM initialization</title> @@ -401,7 +399,7 @@ and devices with dedicated video RAM (VRAM), i.e. most discrete graphics devices. If your device has dedicated RAM, supporting TTM is desirable. TTM also integrates tightly with your - driver specific buffer execution function. See the radeon + driver-specific buffer execution function. See the radeon driver for examples. </para> <para> @@ -429,21 +427,21 @@ created by the memory manager at runtime. Your global TTM should have a type of TTM_GLOBAL_TTM_MEM. The size field for the global object should be sizeof(struct ttm_mem_global), and the init and - release hooks should point at your driver specific init and - release routines, which will probably eventually call - ttm_mem_global_init and ttm_mem_global_release respectively. + release hooks should point at your driver-specific init and + release routines, which probably eventually call + ttm_mem_global_init and ttm_mem_global_release, respectively. </para> <para> Once your global TTM accounting structure is set up and initialized - (done by calling ttm_global_item_ref on the global object you - just created), you'll need to create a buffer object TTM to + by calling ttm_global_item_ref() on it, + you need to create a buffer object TTM to provide a pool for buffer object allocation by clients and the kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO, and its size should be sizeof(struct ttm_bo_global). Again, - driver specific init and release functions can be provided, - likely eventually calling ttm_bo_global_init and - ttm_bo_global_release, respectively. Also like the previous - object, ttm_global_item_ref is used to create an initial reference + driver-specific init and release functions may be provided, + likely eventually calling ttm_bo_global_init() and + ttm_bo_global_release(), respectively. Also, like the previous + object, ttm_global_item_ref() is used to create an initial reference count for the TTM, which will call your initialization function. </para> </sect3> @@ -453,27 +451,26 @@ GEM is an alternative to TTM, designed specifically for UMA devices. It has simpler initialization and execution requirements than TTM, but has no VRAM management capability. Core GEM - initialization is comprised of a basic drm_mm_init call to create + is initialized by calling drm_mm_init() to create a GTT DRM MM object, which provides an address space pool for - object allocation. In a KMS configuration, the driver will - need to allocate and initialize a command ring buffer following - basic GEM initialization. 
Most UMA devices have a so-called + object allocation. In a KMS configuration, the driver + needs to allocate and initialize a command ring buffer following + core GEM initialization. A UMA device usually has what is called a "stolen" memory region, which provides space for the initial framebuffer and large, contiguous memory regions required by the - device. This space is not typically managed by GEM, and must + device. This space is not typically managed by GEM, and it must be initialized separately into its own DRM MM object. </para> <para> - Initialization will be driver specific, and will depend on - the architecture of the device. In the case of Intel + Initialization is driver-specific. In the case of Intel integrated graphics chips like 965GM, GEM initialization can be done by calling the internal GEM init function, i915_gem_do_init(). Since the 965GM is a UMA device - (i.e. it doesn't have dedicated VRAM), GEM will manage + (i.e. it doesn't have dedicated VRAM), GEM manages making regular RAM available for GPU operations. Memory set aside by the BIOS (called "stolen" memory by the i915 - driver) will be managed by the DRM memrange allocator; the - rest of the aperture will be managed by GEM. + driver) is managed by the DRM memrange allocator; the + rest of the aperture is managed by GEM. <programlisting> /* Basic memrange allocator for stolen space (aka vram) */ drm_memrange_init(&dev_priv->vram, 0, prealloc_size); @@ -483,7 +480,7 @@ <!--!Edrivers/char/drm/drm_memrange.c--> </para> <para> - Once the memory manager has been set up, we can allocate the + Once the memory manager has been set up, we may allocate the command buffer. In the i915 case, this is also done with a GEM function, i915_gem_init_ringbuffer(). </para> @@ -493,16 +490,25 @@ <sect2> <title>Output configuration</title> <para> - The final initialization task is output configuration. This involves - finding and initializing the CRTCs, encoders and connectors - for your device, creating an initial configuration and - registering a framebuffer console driver. + The final initialization task is output configuration. This involves: + <itemizedlist> + <listitem> + Finding and initializing the CRTCs, encoders, and connectors + for the device. + </listitem> + <listitem> + Creating an initial configuration. + </listitem> + <listitem> + Registering a framebuffer console driver. + </listitem> + </itemizedlist> </para> <sect3> <title>Output discovery and initialization</title> <para> - Several core functions exist to create CRTCs, encoders and - connectors, namely drm_crtc_init(), drm_connector_init() and + Several core functions exist to create CRTCs, encoders, and + connectors, namely: drm_crtc_init(), drm_connector_init(), and drm_encoder_init(), along with several "helper" functions to perform common tasks. </para> @@ -555,10 +561,10 @@ void intel_crt_init(struct drm_device *dev) </programlisting> <para> In the example above (again, taken from the i915 driver), a - CRT connector and encoder combination is created. A device - specific i2c bus is also created, for fetching EDID data and + CRT connector and encoder combination is created. A device-specific + i2c bus is also created for fetching EDID data and performing monitor detection. Once the process is complete, - the new connector is registered with sysfs, to make its + the new connector is registered with sysfs to make its properties available to applications. 
</para> <sect4> @@ -567,12 +573,12 @@ void intel_crt_init(struct drm_device *dev) Since many PC-class graphics devices have similar display output designs, the DRM provides a set of helper functions to make output management easier. The core helper routines handle - encoder re-routing and disabling of unused functions following - mode set. Using the helpers is optional, but recommended for + encoder re-routing and the disabling of unused functions following + mode setting. Using the helpers is optional, but recommended for devices with PC-style architectures (i.e. a set of display planes for feeding pixels to encoders which are in turn routed to connectors). Devices with more complex requirements needing - finer grained management can opt to use the core callbacks + finer grained management may opt to use the core callbacks directly. </para> <para> @@ -580,17 +586,25 @@ void intel_crt_init(struct drm_device *dev) </para> </sect4> <para> - For each encoder, CRTC and connector, several functions must - be provided, depending on the object type. Encoder objects - need to provide a DPMS (basically on/off) function, mode fixup - (for converting requested modes into native hardware timings), - and prepare, set and commit functions for use by the core DRM - helper functions. Connector helpers need to provide mode fetch and - validity functions as well as an encoder matching function for - returning an ideal encoder for a given connector. The core - connector functions include a DPMS callback, (deprecated) - save/restore routines, detection, mode probing, property handling, - and cleanup functions. + Each encoder object needs to provide: + <itemizedlist> + <listitem> + A DPMS (basically on/off) function. + </listitem> + <listitem> + A mode-fixup function (for converting requested modes into + native hardware timings). + </listitem> + <listitem> + Functions (prepare, set, and commit) for use by the core DRM + helper functions. + </listitem> + </itemizedlist> + Connector helpers need to provide functions (mode-fetch, validity, + and encoder-matching) for returning an ideal encoder for a given + connector. The core connector functions include a DPMS callback, + save/restore routines (deprecated), detection, mode probing, + property handling, and cleanup functions. </para> <!--!Edrivers/char/drm/drm_crtc.h--> <!--!Edrivers/char/drm/drm_crtc.c--> @@ -605,23 +619,34 @@ void intel_crt_init(struct drm_device *dev) <title>VBlank event handling</title> <para> The DRM core exposes two vertical blank related ioctls: - DRM_IOCTL_WAIT_VBLANK and DRM_IOCTL_MODESET_CTL. + <variablelist> + <varlistentry> + <term>DRM_IOCTL_WAIT_VBLANK</term> + <listitem> + <para> + This takes a struct drm_wait_vblank structure as its argument, + and it is used to block or request a signal when a specified + vblank event occurs. + </para> + </listitem> + </varlistentry> + <varlistentry> + <term>DRM_IOCTL_MODESET_CTL</term> + <listitem> + <para> + This should be called by application level drivers before and + after mode setting, since on many devices the vertical blank + counter is reset at that time. Internally, the DRM snapshots + the last vblank count when the ioctl is called with the + _DRM_PRE_MODESET command, so that the counter won't go backwards + (which is dealt with when _DRM_POST_MODESET is used). 
+ </para> + </listitem> + </varlistentry> + </variablelist> <!--!Edrivers/char/drm/drm_irq.c--> </para> <para> - DRM_IOCTL_WAIT_VBLANK takes a struct drm_wait_vblank structure - as its argument, and is used to block or request a signal when a - specified vblank event occurs. - </para> - <para> - DRM_IOCTL_MODESET_CTL should be called by application level - drivers before and after mode setting, since on many devices the - vertical blank counter will be reset at that time. Internally, - the DRM snapshots the last vblank count when the ioctl is called - with the _DRM_PRE_MODESET command so that the counter won't go - backwards (which is dealt with when _DRM_POST_MODESET is used). - </para> - <para> To support the functions above, the DRM core provides several helper functions for tracking vertical blank counters, and requires drivers to provide several callbacks: @@ -632,24 +657,24 @@ void intel_crt_init(struct drm_device *dev) register. The enable and disable vblank callbacks should enable and disable vertical blank interrupts, respectively. In the absence of DRM clients waiting on vblank events, the core DRM - code will use the disable_vblank() function to disable - interrupts, which saves power. They'll be re-enabled again when + code uses the disable_vblank() function to disable + interrupts, which saves power. They are re-enabled again when a client calls the vblank wait ioctl above. </para> <para> - Devices that don't provide a count register can simply use an + A device that doesn't provide a count register may simply use an internal atomic counter incremented on every vertical blank - interrupt, and can make their enable and disable vblank - functions into no-ops. + interrupt (and then treat the enable_vblank() and disable_vblank() + callbacks as no-ops). </para> </sect1> <sect1> <title>Memory management</title> <para> - The memory manager lies at the heart of many DRM operations, and - is also required to support advanced client features like OpenGL - pbuffers. The DRM currently contains two memory managers, TTM + The memory manager lies at the heart of many DRM operations; it + is required to support advanced client features like OpenGL + pbuffers. The DRM currently contains two memory managers: TTM and GEM. </para> @@ -679,41 +704,46 @@ void intel_crt_init(struct drm_device *dev) <para> GEM-enabled drivers must provide gem_init_object() and gem_free_object() callbacks to support the core memory - allocation routines. They should also provide several driver - specific ioctls to support command execution, pinning, buffer + allocation routines. They should also provide several driver-specific + ioctls to support command execution, pinning, buffer read & write, mapping, and domain ownership transfers. </para> <para> - On a fundamental level, GEM involves several operations: memory - allocation and freeing, command execution, and aperture management - at command execution time. Buffer object allocation is relatively + On a fundamental level, GEM involves several operations: + <itemizedlist> + <listitem>Memory allocation and freeing</listitem> + <listitem>Command execution</listitem> + <listitem>Aperture management at command execution time</listitem> + </itemizedlist> + Buffer object allocation is relatively straightforward and largely provided by Linux's shmem layer, which provides memory to back each object. When mapped into the GTT or used in a command buffer, the backing pages for an object are flushed to memory and marked write combined so as to be coherent - with the GPU. 
Likewise, when the GPU finishes rendering to an object, - if the CPU accesses it, it must be made coherent with the CPU's view + with the GPU. Likewise, if the CPU accesses an object after the GPU + has finished rendering to the object, then the object must be made + coherent with the CPU's view of memory, usually involving GPU cache flushing of various kinds. - This core CPU<->GPU coherency management is provided by the GEM - set domain function, which evaluates an object's current domain and + This core CPU<->GPU coherency management is provided by a + device-specific ioctl, which evaluates an object's current domain and performs any necessary flushing or synchronization to put the object into the desired coherency domain (note that the object may be busy, - i.e. an active render target; in that case the set domain function - will block the client and wait for rendering to complete before + i.e. an active render target; in that case, setting the domain + blocks the client and waits for rendering to complete before performing any necessary flushing operations). </para> <para> Perhaps the most important GEM function is providing a command execution interface to clients. Client programs construct command - buffers containing references to previously allocated memory objects - and submit them to GEM. At that point, GEM will take care to bind + buffers containing references to previously allocated memory objects, + and then submit them to GEM. At that point, GEM takes care to bind all the objects into the GTT, execute the buffer, and provide necessary synchronization between clients accessing the same buffers. This often involves evicting some objects from the GTT and re-binding others (a fairly expensive operation), and providing relocation support which hides fixed GTT offsets from clients. Clients must take care not to submit command buffers that reference more objects - than can fit in the GTT or GEM will reject them and no rendering + than can fit in the GTT; otherwise, GEM will reject them and no rendering will occur. Similarly, if several objects in the buffer require fence registers to be allocated for correct rendering (e.g. 2D blits on pre-965 chips), care must be taken not to require more fence @@ -729,7 +759,7 @@ void intel_crt_init(struct drm_device *dev) <title>Output management</title> <para> At the core of the DRM output management code is a set of - structures representing CRTCs, encoders and connectors. + structures representing CRTCs, encoders, and connectors. </para> <para> A CRTC is an abstraction representing a part of the chip that @@ -765,21 +795,19 @@ void intel_crt_init(struct drm_device *dev) <sect1> <title>Framebuffer management</title> <para> - In order to set a mode on a given CRTC, encoder and connector - configuration, clients need to provide a framebuffer object which - will provide a source of pixels for the CRTC to deliver to the encoder(s) - and ultimately the connector(s) in the configuration. A framebuffer - is fundamentally a driver specific memory object, made into an opaque - handle by the DRM addfb function. Once an fb has been created this - way it can be passed to the KMS mode setting routines for use in - a configuration. + Clients need to provide a framebuffer object which provides a source + of pixels for a CRTC to deliver to the encoder(s) and ultimately the + connector(s). A framebuffer is fundamentally a driver-specific memory + object, made into an opaque handle by the DRM's addfb() function. 
+ Once a framebuffer has been created this way, it may be passed to the + KMS mode setting routines for use in a completed configuration. </para> </sect1> <sect1> <title>Command submission & fencing</title> <para> - This should cover a few device specific command submission + This should cover a few device-specific command submission implementations. </para> </sect1> @@ -789,7 +817,7 @@ void intel_crt_init(struct drm_device *dev) <para> The DRM core provides some suspend/resume code, but drivers wanting full suspend/resume support should provide save() and - restore() functions. These will be called at suspend, + restore() functions. These are called at suspend, hibernate, or resume time, and should perform any state save or restore required by your device across suspend or hibernate states. @@ -812,8 +840,8 @@ void intel_crt_init(struct drm_device *dev) <para> The DRM core exports several interfaces to applications, generally intended to be used through corresponding libdrm - wrapper functions. In addition, drivers export device specific - interfaces for use by userspace drivers & device aware + wrapper functions. In addition, drivers export device-specific + interfaces for use by userspace drivers & device-aware applications through ioctls and sysfs files. </para> <para> @@ -822,8 +850,8 @@ void intel_crt_init(struct drm_device *dev) management, memory management, and output management. </para> <para> - Cover generic ioctls and sysfs layout here. Only need high - level info, since man pages will cover the rest. + Cover generic ioctls and sysfs layout here. We only need high-level + info, since man pages should cover the rest. </para> </chapter> diff --git a/Documentation/DocBook/mtdnand.tmpl b/Documentation/DocBook/mtdnand.tmpl index 17910e2052ad..0c674be0d3c6 100644 --- a/Documentation/DocBook/mtdnand.tmpl +++ b/Documentation/DocBook/mtdnand.tmpl @@ -572,7 +572,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) </para> <para> The simplest way to activate the FLASH based bad block table support - is to set the option NAND_USE_FLASH_BBT in the option field of + is to set the option NAND_BBT_USE_FLASH in the bbt_option field of the nand chip structure before calling nand_scan(). For AG-AND chips is this done by default. This activates the default FLASH based bad block table functionality @@ -773,20 +773,6 @@ struct nand_oobinfo { done according to the default builtin scheme. </para> </sect2> - <sect2 id="User_space_placement_selection"> - <title>User space placement selection</title> - <para> - All non ecc functions like mtd->read and mtd->write use an internal - structure, which can be set by an ioctl. This structure is preset - to the autoplacement default. - <programlisting> - ioctl (fd, MEMSETOOBSEL, oobsel); - </programlisting> - oobsel is a pointer to a user supplied structure of type - nand_oobconfig. The contents of this structure must match the - criteria of the filesystem, which will be used. See an example in utils/nandwrite.c. - </para> - </sect2> </sect1> <sect1 id="Spare_area_autoplacement_default"> <title>Spare area autoplacement default schemes</title> @@ -1158,9 +1144,6 @@ in this page</entry> These constants are defined in nand.h. They are ored together to describe the functionality. <programlisting> -/* Use a flash based bad block table. This option is parsed by the - * default bad block table function (nand_default_bbt). 
*/ -#define NAND_USE_FLASH_BBT 0x00010000 /* The hw ecc generator provides a syndrome instead a ecc value on read * This can only work if we have the ecc bytes directly behind the * data bytes. Applies for DOC and AG-AND Renesas HW Reed Solomon generators */ diff --git a/Documentation/cgroups/freezer-subsystem.txt b/Documentation/cgroups/freezer-subsystem.txt index c21d77742a07..7e62de1e59ff 100644 --- a/Documentation/cgroups/freezer-subsystem.txt +++ b/Documentation/cgroups/freezer-subsystem.txt @@ -33,9 +33,9 @@ demonstrate this problem using nested bash shells: From a second, unrelated bash shell: $ kill -SIGSTOP 16690 - $ kill -SIGCONT 16990 + $ kill -SIGCONT 16690 - <at this point 16990 exits and causes 16644 to exit too> + <at this point 16690 exits and causes 16644 to exit too> This happens because bash can observe both signals and choose how it responds to them. diff --git a/Documentation/devicetree/bindings/mtd/atmel-dataflash.txt b/Documentation/devicetree/bindings/mtd/atmel-dataflash.txt new file mode 100644 index 000000000000..ef66ddd01da0 --- /dev/null +++ b/Documentation/devicetree/bindings/mtd/atmel-dataflash.txt @@ -0,0 +1,14 @@ +* Atmel Data Flash + +Required properties: +- compatible : "atmel,<model>", "atmel,<series>", "atmel,dataflash". + +Example: + +flash@1 { + #address-cells = <1>; + #size-cells = <1>; + compatible = "atmel,at45db321d", "atmel,at45", "atmel,dataflash"; + spi-max-frequency = <25000000>; + reg = <1>; +}; diff --git a/Documentation/sound/alsa/HD-Audio-Models.txt b/Documentation/sound/alsa/HD-Audio-Models.txt index 4f3443230d89..edad99abec21 100644 --- a/Documentation/sound/alsa/HD-Audio-Models.txt +++ b/Documentation/sound/alsa/HD-Audio-Models.txt @@ -349,6 +349,7 @@ STAC92HD83* ref Reference board mic-ref Reference board with power management for ports dell-s14 Dell laptop + dell-vostro-3500 Dell Vostro 3500 laptop hp HP laptops with (inverted) mute-LED hp-dv7-4000 HP dv-7 4000 auto BIOS setup (default) @@ -92,7 +92,7 @@ always += missing-syscalls targets += missing-syscalls quiet_cmd_syscalls = CALL $< - cmd_syscalls = $(CONFIG_SHELL) $< $(CC) $(c_flags) + cmd_syscalls = $(CONFIG_SHELL) $< $(CC) $(c_flags) $(missing_syscalls_flags) missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE $(call cmd,syscalls) diff --git a/MAINTAINERS b/MAINTAINERS index 6388a96dc1c4..071a99674347 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1032,6 +1032,7 @@ F: arch/arm/include/asm/hardware/ioc.h F: arch/arm/include/asm/hardware/iomd.h F: arch/arm/include/asm/hardware/memc.h F: arch/arm/mach-rpc/ +F: drivers/net/ethernet/8390/etherh.c F: drivers/net/ethernet/i825xx/ether1* F: drivers/net/ethernet/seeq/ether3* F: drivers/scsi/arm/ @@ -1105,6 +1106,7 @@ F: drivers/media/video/s5p-fimc/ ARM/SAMSUNG S5P SERIES Multi Format Codec (MFC) SUPPORT M: Kyungmin Park <kyungmin.park@samsung.com> M: Kamil Debski <k.debski@samsung.com> +M: Jeongtae Park <jtp.park@samsung.com> L: linux-arm-kernel@lists.infradead.org L: linux-media@vger.kernel.org S: Maintained @@ -2341,6 +2343,13 @@ S: Supported F: drivers/gpu/drm/i915 F: include/drm/i915* +DRM DRIVERS FOR EXYNOS +M: Inki Dae <inki.dae@samsung.com> +L: dri-devel@lists.freedesktop.org +S: Supported +F: drivers/gpu/drm/exynos +F: include/drm/exynos* + DSCC4 DRIVER M: Francois Romieu <romieu@fr.zoreil.com> L: netdev@vger.kernel.org @@ -4672,7 +4681,7 @@ L: linux-omap@vger.kernel.org W: http://www.muru.com/linux/omap/ W: http://linux.omap.com/ Q: http://patchwork.kernel.org/project/linux-omap/list/ -T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6.git +T: git git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap.git S: Maintained F: arch/arm/*omap*/ @@ -5470,7 +5479,7 @@ S: Maintained F: drivers/net/ethernet/rdc/r6040.c RDS - RELIABLE DATAGRAM SOCKETS -M: Andy Grover <andy.grover@oracle.com> +M: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> L: rds-devel@oss.oracle.com (moderated for non-subscribers) S: Supported F: net/rds/ @@ -6121,7 +6130,7 @@ F: sound/ SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEMENT (ASoC) M: Liam Girdwood <lrg@ti.com> M: Mark Brown <broonie@opensource.wolfsonmicro.com> -T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound-2.6.git +T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git L: alsa-devel@alsa-project.org (moderated for non-subscribers) W: http://alsa-project.org/main/index.php/ASoC S: Supported @@ -1,8 +1,8 @@ VERSION = 3 -PATCHLEVEL = 1 +PATCHLEVEL = 2 SUBLEVEL = 0 -EXTRAVERSION = -NAME = "Divemaster Edition" +EXTRAVERSION = -rc2 +NAME = Saber-toothed Squirrel # *DOCUMENTATION* # To see a list of typical targets execute "make help" diff --git a/arch/arm/boot/dts/tegra-ventana.dts b/arch/arm/boot/dts/tegra-ventana.dts index 9b29a623aaf1..3f9abd6b6964 100644 --- a/arch/arm/boot/dts/tegra-ventana.dts +++ b/arch/arm/boot/dts/tegra-ventana.dts @@ -22,11 +22,10 @@ sdhci@c8000400 { cd-gpios = <&gpio 69 0>; /* gpio PI5 */ wp-gpios = <&gpio 57 0>; /* gpio PH1 */ - power-gpios = <&gpio 155 0>; /* gpio PT3 */ + power-gpios = <&gpio 70 0>; /* gpio PI6 */ }; sdhci@c8000600 { - power-gpios = <&gpio 70 0>; /* gpio PI6 */ support-8bit; }; }; diff --git a/arch/arm/mach-at91/at91cap9.c b/arch/arm/mach-at91/at91cap9.c index ecdd54dd68c6..17632b82dd76 100644 --- a/arch/arm/mach-at91/at91cap9.c +++ b/arch/arm/mach-at91/at91cap9.c @@ -137,7 +137,7 @@ static struct clk pwm_clk = { .type = CLK_TYPE_PERIPHERAL, }; static struct clk macb_clk = { - .name = "macb_clk", + .name = "pclk", .pmc_mask = 1 << AT91CAP9_ID_EMAC, .type = CLK_TYPE_PERIPHERAL, }; @@ -210,6 +210,8 @@ static struct clk *periph_clocks[] __initdata = { }; static struct clk_lookup periph_clocks_lookups[] = { + /* One additional fake clock for macb_hclk */ + CLKDEV_CON_ID("hclk", &macb_clk), CLKDEV_CON_DEV_ID("hclk", "atmel_usba_udc", &utmi_clk), CLKDEV_CON_DEV_ID("pclk", "atmel_usba_udc", &udphs_clk), CLKDEV_CON_DEV_ID("mci_clk", "at91_mci.0", &mmc0_clk), diff --git a/arch/arm/mach-at91/at91cap9_devices.c b/arch/arm/mach-at91/at91cap9_devices.c index a4401d6b5b07..695aecab0a67 100644 --- a/arch/arm/mach-at91/at91cap9_devices.c +++ b/arch/arm/mach-at91/at91cap9_devices.c @@ -98,7 +98,7 @@ void __init at91_add_device_usbh(struct at91_usbh_data *data) {} * USB HS Device (Gadget) * -------------------------------------------------------------------- */ -#if defined(CONFIG_USB_GADGET_ATMEL_USBA) || defined(CONFIG_USB_GADGET_ATMEL_USBA_MODULE) +#if defined(CONFIG_USB_ATMEL_USBA) || defined(CONFIG_USB_ATMEL_USBA_MODULE) static struct resource usba_udc_resources[] = { [0] = { @@ -200,7 +200,7 @@ void __init at91_add_device_usba(struct usba_platform_data *data) {} #if defined(CONFIG_MACB) || defined(CONFIG_MACB_MODULE) static u64 eth_dmamask = DMA_BIT_MASK(32); -static struct at91_eth_data eth_data; +static struct macb_platform_data eth_data; static struct resource eth_resources[] = { [0] = { @@ -227,7 +227,7 @@ static struct platform_device at91cap9_eth_device = { .num_resources = ARRAY_SIZE(eth_resources), }; -void __init at91_add_device_eth(struct 
at91_eth_data *data) +void __init at91_add_device_eth(struct macb_platform_data *data) { if (!data) return; @@ -264,7 +264,7 @@ void __init at91_add_device_eth(struct at91_eth_data *data) platform_device_register(&at91cap9_eth_device); } #else -void __init at91_add_device_eth(struct at91_eth_data *data) {} +void __init at91_add_device_eth(struct macb_platform_data *data) {} #endif @@ -1021,8 +1021,8 @@ void __init at91_add_device_ssc(unsigned id, unsigned pins) {} #if defined(CONFIG_SERIAL_ATMEL) static struct resource dbgu_resources[] = { [0] = { - .start = AT91_VA_BASE_SYS + AT91_DBGU, - .end = AT91_VA_BASE_SYS + AT91_DBGU + SZ_512 - 1, + .start = AT91_BASE_SYS + AT91_DBGU, + .end = AT91_BASE_SYS + AT91_DBGU + SZ_512 - 1, .flags = IORESOURCE_MEM, }, [1] = { @@ -1035,7 +1035,6 @@ static struct resource dbgu_resources[] = { static struct atmel_uart_data dbgu_data = { .use_dma_tx = 0, .use_dma_rx = 0, /* DBGU not capable of receive DMA */ - .regs = (void __iomem *)(AT91_VA_BASE_SYS + AT91_DBGU), }; static u64 dbgu_dmamask = DMA_BIT_MASK(32); diff --git a/arch/arm/mach-at91/at91rm9200_devices.c b/arch/arm/mach-at91/at91rm9200_devices.c index 01d8bbd1468b..5610f14e342e 100644 --- a/arch/arm/mach-at91/at91rm9200_devices.c +++ b/arch/arm/mach-at91/at91rm9200_devices.c @@ -135,7 +135,7 @@ void __init at91_add_device_udc(struct at91_udc_data *data) {} #if defined(CONFIG_ARM_AT91_ETHER) || defined(CONFIG_ARM_AT91_ETHER_MODULE) static u64 eth_dmamask = DMA_BIT_MASK(32); -static struct at91_eth_data eth_data; +static struct macb_platform_data eth_data; static struct resource eth_resources[] = { [0] = { @@ -162,7 +162,7 @@ static struct platform_device at91rm9200_eth_device = { .num_resources = ARRAY_SIZE(eth_resources), }; -void __init at91_add_device_eth(struct at91_eth_data *data) +void __init at91_add_device_eth(struct macb_platform_data *data) { if (!data) return; @@ -199,7 +199,7 @@ void __init at91_add_device_eth(struct at91_eth_data *data) platform_device_register(&at91rm9200_eth_device); } #else -void __init at91_add_device_eth(struct at91_eth_data *data) {} +void __init at91_add_device_eth(struct macb_platform_data *data) {} #endif @@ -877,8 +877,8 @@ void __init at91_add_device_ssc(unsigned id, unsigned pins) {} #if defined(CONFIG_SERIAL_ATMEL) static struct resource dbgu_resources[] = { [0] = { - .start = AT91_VA_BASE_SYS + AT91_DBGU, - .end = AT91_VA_BASE_SYS + AT91_DBGU + SZ_512 - 1, + .start = AT91_BASE_SYS + AT91_DBGU, + .end = AT91_BASE_SYS + AT91_DBGU + SZ_512 - 1, .flags = IORESOURCE_MEM, }, [1] = { @@ -891,7 +891,6 @@ static struct resource dbgu_resources[] = { static struct atmel_uart_data dbgu_data = { .use_dma_tx = 0, .use_dma_rx = 0, /* DBGU not capable of receive DMA */ - .regs = (void __iomem *)(AT91_VA_BASE_SYS + AT91_DBGU), }; static u64 dbgu_dmamask = DMA_BIT_MASK(32); diff --git a/arch/arm/mach-at91/at91sam9260.c b/arch/arm/mach-at91/at91sam9260.c index b84a9f642f59..249ed1f5912d 100644 --- a/arch/arm/mach-at91/at91sam9260.c +++ b/arch/arm/mach-at91/at91sam9260.c @@ -120,7 +120,7 @@ static struct clk ohci_clk = { .type = CLK_TYPE_PERIPHERAL, }; static struct clk macb_clk = { - .name = "macb_clk", + .name = "pclk", .pmc_mask = 1 << AT91SAM9260_ID_EMAC, .type = CLK_TYPE_PERIPHERAL, }; @@ -190,6 +190,8 @@ static struct clk *periph_clocks[] __initdata = { }; static struct clk_lookup periph_clocks_lookups[] = { + /* One additional fake clock for macb_hclk */ + CLKDEV_CON_ID("hclk", &macb_clk), CLKDEV_CON_DEV_ID("spi_clk", "atmel_spi.0", &spi0_clk), 
CLKDEV_CON_DEV_ID("spi_clk", "atmel_spi.1", &spi1_clk), CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.0", &tc0_clk), diff --git a/arch/arm/mach-at91/at91sam9260_devices.c b/arch/arm/mach-at91/at91sam9260_devices.c index 24b6f8c0440d..ff75f7d4091b 100644 --- a/arch/arm/mach-at91/at91sam9260_devices.c +++ b/arch/arm/mach-at91/at91sam9260_devices.c @@ -136,7 +136,7 @@ void __init at91_add_device_udc(struct at91_udc_data *data) {} #if defined(CONFIG_MACB) || defined(CONFIG_MACB_MODULE) static u64 eth_dmamask = DMA_BIT_MASK(32); -static struct at91_eth_data eth_data; +static struct macb_platform_data eth_data; static struct resource eth_resources[] = { [0] = { @@ -163,7 +163,7 @@ static struct platform_device at91sam9260_eth_device = { .num_resources = ARRAY_SIZE(eth_resources), }; -void __init at91_add_device_eth(struct at91_eth_data *data) +void __init at91_add_device_eth(struct macb_platform_data *data) { if (!data) return; @@ -200,7 +200,7 @@ void __init at91_add_device_eth(struct at91_eth_data *data) platform_device_register(&at91sam9260_eth_device); } #else -void __init at91_add_device_eth(struct at91_eth_data *data) {} +void __init at91_add_device_eth(struct macb_platform_data *data) {} #endif @@ -837,8 +837,8 @@ void __init at91_add_device_ssc(unsigned id, unsigned pins) {} #if defined(CONFIG_SERIAL_ATMEL) static struct resource dbgu_resources[] = { [0] = { - .start = AT91_VA_BASE_SYS + AT91_DBGU, - .end = AT91_VA_BASE_SYS + AT91_DBGU + SZ_512 - 1, + .start = AT91_BASE_SYS + AT91_DBGU, + .end = AT91_BASE_SYS + AT91_DBGU + SZ_512 - 1, .flags = IORESOURCE_MEM, }, [1] = { @@ -851,7 +851,6 @@ static struct resource dbgu_resources[] = { static struct atmel_uart_data dbgu_data = { .use_dma_tx = 0, .use_dma_rx = 0, /* DBGU not capable of receive DMA */ - .regs = (void __iomem *)(AT91_VA_BASE_SYS + AT91_DBGU), }; static u64 dbgu_dmamask = DMA_BIT_MASK(32); diff --git a/arch/arm/mach-at91/at91sam9261_devices.c b/arch/arm/mach-at91/at91sam9261_devices.c index 3b70b3897d95..ae78f4d03b73 100644 --- a/arch/arm/mach-at91/at91sam9261_devices.c +++ b/arch/arm/mach-at91/at91sam9261_devices.c @@ -816,8 +816,8 @@ void __init at91_add_device_ssc(unsigned id, unsigned pins) {} #if defined(CONFIG_SERIAL_ATMEL) static struct resource dbgu_resources[] = { [0] = { - .start = AT91_VA_BASE_SYS + AT91_DBGU, - .end = AT91_VA_BASE_SYS + AT91_DBGU + SZ_512 - 1, + .start = AT91_BASE_SYS + AT91_DBGU, + .end = AT91_BASE_SYS + AT91_DBGU + SZ_512 - 1, .flags = IORESOURCE_MEM, }, [1] = { @@ -830,7 +830,6 @@ static struct resource dbgu_resources[] = { static struct atmel_uart_data dbgu_data = { .use_dma_tx = 0, .use_dma_rx = 0, /* DBGU not capable of receive DMA */ - .regs = (void __iomem *)(AT91_VA_BASE_SYS + AT91_DBGU), }; static u64 dbgu_dmamask = DMA_BIT_MASK(32); diff --git a/arch/arm/mach-at91/at91sam9263.c b/arch/arm/mach-at91/at91sam9263.c index f83fbb0ee0c5..182d112dc59d 100644 --- a/arch/arm/mach-at91/at91sam9263.c +++ b/arch/arm/mach-at91/at91sam9263.c @@ -118,7 +118,7 @@ static struct clk pwm_clk = { .type = CLK_TYPE_PERIPHERAL, }; static struct clk macb_clk = { - .name = "macb_clk", + .name = "pclk", .pmc_mask = 1 << AT91SAM9263_ID_EMAC, .type = CLK_TYPE_PERIPHERAL, }; @@ -182,6 +182,8 @@ static struct clk *periph_clocks[] __initdata = { }; static struct clk_lookup periph_clocks_lookups[] = { + /* One additional fake clock for macb_hclk */ + CLKDEV_CON_ID("hclk", &macb_clk), CLKDEV_CON_DEV_ID("pclk", "ssc.0", &ssc0_clk), CLKDEV_CON_DEV_ID("pclk", "ssc.1", &ssc1_clk), CLKDEV_CON_DEV_ID("mci_clk", "at91_mci.0", 
&mmc0_clk), diff --git a/arch/arm/mach-at91/at91sam9263_devices.c b/arch/arm/mach-at91/at91sam9263_devices.c index 3faa1fde9ad9..68562ce2af94 100644 --- a/arch/arm/mach-at91/at91sam9263_devices.c +++ b/arch/arm/mach-at91/at91sam9263_devices.c @@ -144,7 +144,7 @@ void __init at91_add_device_udc(struct at91_udc_data *data) {} #if defined(CONFIG_MACB) || defined(CONFIG_MACB_MODULE) static u64 eth_dmamask = DMA_BIT_MASK(32); -static struct at91_eth_data eth_data; +static struct macb_platform_data eth_data; static struct resource eth_resources[] = { [0] = { @@ -171,7 +171,7 @@ static struct platform_device at91sam9263_eth_device = { .num_resources = ARRAY_SIZE(eth_resources), }; -void __init at91_add_device_eth(struct at91_eth_data *data) +void __init at91_add_device_eth(struct macb_platform_data *data) { if (!data) return; @@ -208,7 +208,7 @@ void __init at91_add_device_eth(struct at91_eth_data *data) platform_device_register(&at91sam9263_eth_device); } #else -void __init at91_add_device_eth(struct at91_eth_data *data) {} +void __init at91_add_device_eth(struct macb_platform_data *data) {} #endif @@ -1196,8 +1196,8 @@ void __init at91_add_device_ssc(unsigned id, unsigned pins) {} static struct resource dbgu_resources[] = { [0] = { - .start = AT91_VA_BASE_SYS + AT91_DBGU, - .end = AT91_VA_BASE_SYS + AT91_DBGU + SZ_512 - 1, + .start = AT91_BASE_SYS + AT91_DBGU, + .end = AT91_BASE_SYS + AT91_DBGU + SZ_512 - 1, .flags = IORESOURCE_MEM, }, [1] = { @@ -1210,7 +1210,6 @@ static struct resource dbgu_resources[] = { static struct atmel_uart_data dbgu_data = { .use_dma_tx = 0, .use_dma_rx = 0, /* DBGU not capable of receive DMA */ - .regs = (void __iomem *)(AT91_VA_BASE_SYS + AT91_DBGU), }; static u64 dbgu_dmamask = DMA_BIT_MASK(32); diff --git a/arch/arm/mach-at91/at91sam9g45.c b/arch/arm/mach-at91/at91sam9g45.c index 318b0407ea04..5a0e522ffa94 100644 --- a/arch/arm/mach-at91/at91sam9g45.c +++ b/arch/arm/mach-at91/at91sam9g45.c @@ -150,7 +150,7 @@ static struct clk ac97_clk = { .type = CLK_TYPE_PERIPHERAL, }; static struct clk macb_clk = { - .name = "macb_clk", + .name = "pclk", .pmc_mask = 1 << AT91SAM9G45_ID_EMAC, .type = CLK_TYPE_PERIPHERAL, }; @@ -209,6 +209,8 @@ static struct clk *periph_clocks[] __initdata = { }; static struct clk_lookup periph_clocks_lookups[] = { + /* One additional fake clock for macb_hclk */ + CLKDEV_CON_ID("hclk", &macb_clk), /* One additional fake clock for ohci */ CLKDEV_CON_ID("ohci_clk", &uhphs_clk), CLKDEV_CON_DEV_ID("ehci_clk", "atmel-ehci", &uhphs_clk), diff --git a/arch/arm/mach-at91/at91sam9g45_devices.c b/arch/arm/mach-at91/at91sam9g45_devices.c index 000b5e1da965..e2cb835c4d7c 100644 --- a/arch/arm/mach-at91/at91sam9g45_devices.c +++ b/arch/arm/mach-at91/at91sam9g45_devices.c @@ -197,7 +197,7 @@ void __init at91_add_device_usbh_ehci(struct at91_usbh_data *data) {} * USB HS Device (Gadget) * -------------------------------------------------------------------- */ -#if defined(CONFIG_USB_GADGET_ATMEL_USBA) || defined(CONFIG_USB_GADGET_ATMEL_USBA_MODULE) +#if defined(CONFIG_USB_ATMEL_USBA) || defined(CONFIG_USB_ATMEL_USBA_MODULE) static struct resource usba_udc_resources[] = { [0] = { .start = AT91SAM9G45_UDPHS_FIFO, @@ -284,7 +284,7 @@ void __init at91_add_device_usba(struct usba_platform_data *data) {} #if defined(CONFIG_MACB) || defined(CONFIG_MACB_MODULE) static u64 eth_dmamask = DMA_BIT_MASK(32); -static struct at91_eth_data eth_data; +static struct macb_platform_data eth_data; static struct resource eth_resources[] = { [0] = { @@ -311,7 +311,7 @@ static struct 
platform_device at91sam9g45_eth_device = { .num_resources = ARRAY_SIZE(eth_resources), }; -void __init at91_add_device_eth(struct at91_eth_data *data) +void __init at91_add_device_eth(struct macb_platform_data *data) { if (!data) return; @@ -348,7 +348,7 @@ void __init at91_add_device_eth(struct at91_eth_data *data) platform_device_register(&at91sam9g45_eth_device); } #else -void __init at91_add_device_eth(struct at91_eth_data *data) {} +void __init at91_add_device_eth(struct macb_platform_data *data) {} #endif @@ -1332,8 +1332,8 @@ void __init at91_add_device_ssc(unsigned id, unsigned pins) {} #if defined(CONFIG_SERIAL_ATMEL) static struct resource dbgu_resources[] = { [0] = { - .start = AT91_VA_BASE_SYS + AT91_DBGU, - .end = AT91_VA_BASE_SYS + AT91_DBGU + SZ_512 - 1, + .start = AT91_BASE_SYS + AT91_DBGU, + .end = AT91_BASE_SYS + AT91_DBGU + SZ_512 - 1, .flags = IORESOURCE_MEM, }, [1] = { @@ -1346,7 +1346,6 @@ static struct resource dbgu_resources[] = { static struct atmel_uart_data dbgu_data = { .use_dma_tx = 0, .use_dma_rx = 0, - .regs = (void __iomem *)(AT91_VA_BASE_SYS + AT91_DBGU), }; static u64 dbgu_dmamask = DMA_BIT_MASK(32); diff --git a/arch/arm/mach-at91/at91sam9rl_devices.c b/arch/arm/mach-at91/at91sam9rl_devices.c index 305a851b5bff..628eb566d60c 100644 --- a/arch/arm/mach-at91/at91sam9rl_devices.c +++ b/arch/arm/mach-at91/at91sam9rl_devices.c @@ -75,7 +75,7 @@ void __init at91_add_device_hdmac(void) {} * USB HS Device (Gadget) * -------------------------------------------------------------------- */ -#if defined(CONFIG_USB_GADGET_ATMEL_USBA) || defined(CONFIG_USB_GADGET_ATMEL_USBA_MODULE) +#if defined(CONFIG_USB_ATMEL_USBA) || defined(CONFIG_USB_ATMEL_USBA_MODULE) static struct resource usba_udc_resources[] = { [0] = { @@ -908,8 +908,8 @@ void __init at91_add_device_ssc(unsigned id, unsigned pins) {} #if defined(CONFIG_SERIAL_ATMEL) static struct resource dbgu_resources[] = { [0] = { - .start = AT91_VA_BASE_SYS + AT91_DBGU, - .end = AT91_VA_BASE_SYS + AT91_DBGU + SZ_512 - 1, + .start = AT91_BASE_SYS + AT91_DBGU, + .end = AT91_BASE_SYS + AT91_DBGU + SZ_512 - 1, .flags = IORESOURCE_MEM, }, [1] = { @@ -922,7 +922,6 @@ static struct resource dbgu_resources[] = { static struct atmel_uart_data dbgu_data = { .use_dma_tx = 0, .use_dma_rx = 0, /* DBGU not capable of receive DMA */ - .regs = (void __iomem *)(AT91_VA_BASE_SYS + AT91_DBGU), }; static u64 dbgu_dmamask = DMA_BIT_MASK(32); diff --git a/arch/arm/mach-at91/board-1arm.c b/arch/arm/mach-at91/board-1arm.c index 367d5cd5e362..a60d98d7c3e2 100644 --- a/arch/arm/mach-at91/board-1arm.c +++ b/arch/arm/mach-at91/board-1arm.c @@ -63,7 +63,7 @@ static void __init onearm_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata onearm_eth_data = { +static struct macb_platform_data __initdata onearm_eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-afeb-9260v1.c b/arch/arm/mach-at91/board-afeb-9260v1.c index 0487ea10c2d6..17fc77925707 100644 --- a/arch/arm/mach-at91/board-afeb-9260v1.c +++ b/arch/arm/mach-at91/board-afeb-9260v1.c @@ -103,7 +103,7 @@ static struct spi_board_info afeb9260_spi_devices[] = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata afeb9260_macb_data = { +static struct macb_platform_data __initdata afeb9260_macb_data = { .phy_irq_pin = AT91_PIN_PA9, .is_rmii = 0, }; @@ -130,19 +130,14 @@ static struct mtd_partition __initdata afeb9260_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, 
int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(afeb9260_nand_partition); - return afeb9260_nand_partition; -} - static struct atmel_nand_data __initdata afeb9260_nand_data = { .ale = 21, .cle = 22, .rdy_pin = AT91_PIN_PC13, .enable_pin = AT91_PIN_PC14, - .partition_info = nand_partitions, .bus_width_16 = 0, + .parts = afeb9260_nand_partition, + .num_parts = ARRAY_SIZE(afeb9260_nand_partition), }; diff --git a/arch/arm/mach-at91/board-cam60.c b/arch/arm/mach-at91/board-cam60.c index 747b2eaa9737..2037d2c40eb4 100644 --- a/arch/arm/mach-at91/board-cam60.c +++ b/arch/arm/mach-at91/board-cam60.c @@ -115,7 +115,7 @@ static struct spi_board_info cam60_spi_devices[] __initdata = { /* * MACB Ethernet device */ -static struct __initdata at91_eth_data cam60_macb_data = { +static struct __initdata macb_platform_data cam60_macb_data = { .phy_irq_pin = AT91_PIN_PB5, .is_rmii = 0, }; @@ -132,19 +132,14 @@ static struct mtd_partition __initdata cam60_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(cam60_nand_partition); - return cam60_nand_partition; -} - static struct atmel_nand_data __initdata cam60_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... not there .rdy_pin = AT91_PIN_PA9, .enable_pin = AT91_PIN_PA7, - .partition_info = nand_partitions, + .parts = cam60_nand_partition, + .num_parts = ARRAY_SIZE(cam60_nand_partition), }; static struct sam9_smc_config __initdata cam60_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-cap9adk.c b/arch/arm/mach-at91/board-cap9adk.c index 062670351a6a..af5520c366fe 100644 --- a/arch/arm/mach-at91/board-cap9adk.c +++ b/arch/arm/mach-at91/board-cap9adk.c @@ -153,7 +153,7 @@ static struct at91_mmc_data __initdata cap9adk_mmc_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata cap9adk_macb_data = { +static struct macb_platform_data __initdata cap9adk_macb_data = { .is_rmii = 1, }; @@ -169,19 +169,14 @@ static struct mtd_partition __initdata cap9adk_nand_partitions[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(cap9adk_nand_partitions); - return cap9adk_nand_partitions; -} - static struct atmel_nand_data __initdata cap9adk_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... not connected // .rdy_pin = ... 
not connected .enable_pin = AT91_PIN_PD15, - .partition_info = nand_partitions, + .parts = cap9adk_nand_partitions, + .num_parts = ARRAY_SIZE(cap9adk_nand_partitions), }; static struct sam9_smc_config __initdata cap9adk_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-carmeva.c b/arch/arm/mach-at91/board-carmeva.c index 774c87fcbd5b..529b356cdb7d 100644 --- a/arch/arm/mach-at91/board-carmeva.c +++ b/arch/arm/mach-at91/board-carmeva.c @@ -57,7 +57,7 @@ static void __init carmeva_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata carmeva_eth_data = { +static struct macb_platform_data __initdata carmeva_eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-cpu9krea.c b/arch/arm/mach-at91/board-cpu9krea.c index fc885a4ce243..04d2b9b50464 100644 --- a/arch/arm/mach-at91/board-cpu9krea.c +++ b/arch/arm/mach-at91/board-cpu9krea.c @@ -99,7 +99,7 @@ static struct at91_udc_data __initdata cpu9krea_udc_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata cpu9krea_macb_data = { +static struct macb_platform_data __initdata cpu9krea_macb_data = { .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-cpuat91.c b/arch/arm/mach-at91/board-cpuat91.c index d35e65b08ccd..7a4c82e8da51 100644 --- a/arch/arm/mach-at91/board-cpuat91.c +++ b/arch/arm/mach-at91/board-cpuat91.c @@ -82,7 +82,7 @@ static void __init cpuat91_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata cpuat91_eth_data = { +static struct macb_platform_data __initdata cpuat91_eth_data = { .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-csb337.c b/arch/arm/mach-at91/board-csb337.c index c3936665e645..b004b20b8e42 100644 --- a/arch/arm/mach-at91/board-csb337.c +++ b/arch/arm/mach-at91/board-csb337.c @@ -58,7 +58,7 @@ static void __init csb337_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata csb337_eth_data = { +static struct macb_platform_data __initdata csb337_eth_data = { .phy_irq_pin = AT91_PIN_PC2, .is_rmii = 0, }; diff --git a/arch/arm/mach-at91/board-csb637.c b/arch/arm/mach-at91/board-csb637.c index 586100e2acbb..e966de5219c7 100644 --- a/arch/arm/mach-at91/board-csb637.c +++ b/arch/arm/mach-at91/board-csb637.c @@ -52,7 +52,7 @@ static void __init csb637_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata csb637_eth_data = { +static struct macb_platform_data __initdata csb637_eth_data = { .phy_irq_pin = AT91_PIN_PC0, .is_rmii = 0, }; diff --git a/arch/arm/mach-at91/board-eb9200.c b/arch/arm/mach-at91/board-eb9200.c index 45db7a3dbef0..3788fa527121 100644 --- a/arch/arm/mach-at91/board-eb9200.c +++ b/arch/arm/mach-at91/board-eb9200.c @@ -60,7 +60,7 @@ static void __init eb9200_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata eb9200_eth_data = { +static struct macb_platform_data __initdata eb9200_eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-ecbat91.c b/arch/arm/mach-at91/board-ecbat91.c index 2f9c16d29212..af7622eae1a9 100644 --- a/arch/arm/mach-at91/board-ecbat91.c +++ b/arch/arm/mach-at91/board-ecbat91.c @@ -64,7 +64,7 @@ static void __init ecb_at91init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata ecb_at91eth_data = { +static struct macb_platform_data __initdata ecb_at91eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 0, }; diff --git a/arch/arm/mach-at91/board-eco920.c 
b/arch/arm/mach-at91/board-eco920.c index 8252c722607b..8e75867d1d18 100644 --- a/arch/arm/mach-at91/board-eco920.c +++ b/arch/arm/mach-at91/board-eco920.c @@ -47,7 +47,7 @@ static void __init eco920_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata eco920_eth_data = { +static struct macb_platform_data __initdata eco920_eth_data = { .phy_irq_pin = AT91_PIN_PC2, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-foxg20.c b/arch/arm/mach-at91/board-foxg20.c index f27d1a780cfa..de8e09642f4e 100644 --- a/arch/arm/mach-at91/board-foxg20.c +++ b/arch/arm/mach-at91/board-foxg20.c @@ -135,7 +135,7 @@ static struct spi_board_info foxg20_spi_devices[] = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata foxg20_macb_data = { +static struct macb_platform_data __initdata foxg20_macb_data = { .phy_irq_pin = AT91_PIN_PA7, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-gsia18s.c b/arch/arm/mach-at91/board-gsia18s.c index 2e95949737e6..51c82f151119 100644 --- a/arch/arm/mach-at91/board-gsia18s.c +++ b/arch/arm/mach-at91/board-gsia18s.c @@ -93,7 +93,7 @@ static struct at91_udc_data __initdata udc_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata macb_data = { +static struct macb_platform_data __initdata macb_data = { .phy_irq_pin = AT91_PIN_PA28, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-kafa.c b/arch/arm/mach-at91/board-kafa.c index 3bae73e63633..9628a3defcf4 100644 --- a/arch/arm/mach-at91/board-kafa.c +++ b/arch/arm/mach-at91/board-kafa.c @@ -61,7 +61,7 @@ static void __init kafa_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata kafa_eth_data = { +static struct macb_platform_data __initdata kafa_eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 0, }; diff --git a/arch/arm/mach-at91/board-kb9202.c b/arch/arm/mach-at91/board-kb9202.c index 15a3f1a87ab0..5ba5244cb632 100644 --- a/arch/arm/mach-at91/board-kb9202.c +++ b/arch/arm/mach-at91/board-kb9202.c @@ -69,7 +69,7 @@ static void __init kb9202_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata kb9202_eth_data = { +static struct macb_platform_data __initdata kb9202_eth_data = { .phy_irq_pin = AT91_PIN_PB29, .is_rmii = 0, }; @@ -97,19 +97,14 @@ static struct mtd_partition __initdata kb9202_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(kb9202_nand_partition); - return kb9202_nand_partition; -} - static struct atmel_nand_data __initdata kb9202_nand_data = { .ale = 22, .cle = 21, // .det_pin = ... 
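
The same boards also switch their Ethernet platform data from the AT91-private at91_eth_data to the generic macb_platform_data, which mach/board.h now pulls in from <linux/platform_data/macb.h> (see the board.h hunk later in this series). A sketch of a converted declaration, with an illustrative PHY interrupt pin:

static struct macb_platform_data __initdata example_macb_data = {
	.phy_irq_pin	= AT91_PIN_PA7,	/* board-specific PHY interrupt line */
	.is_rmii	= 1,		/* PHY wired over RMII */
};
/* registered via at91_add_device_eth(&example_macb_data), as before */
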
not there .rdy_pin = AT91_PIN_PC29, .enable_pin = AT91_PIN_PC28, - .partition_info = nand_partitions, + .parts = kb9202_nand_partition, + .num_parts = ARRAY_SIZE(kb9202_nand_partition), }; static void __init kb9202_board_init(void) diff --git a/arch/arm/mach-at91/board-neocore926.c b/arch/arm/mach-at91/board-neocore926.c index 6094496f7edb..56e7aee11b59 100644 --- a/arch/arm/mach-at91/board-neocore926.c +++ b/arch/arm/mach-at91/board-neocore926.c @@ -155,7 +155,7 @@ static struct at91_mmc_data __initdata neocore926_mmc_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata neocore926_macb_data = { +static struct macb_platform_data __initdata neocore926_macb_data = { .phy_irq_pin = AT91_PIN_PE31, .is_rmii = 1, }; @@ -182,19 +182,14 @@ static struct mtd_partition __initdata neocore926_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(neocore926_nand_partition); - return neocore926_nand_partition; -} - static struct atmel_nand_data __initdata neocore926_nand_data = { .ale = 21, .cle = 22, .rdy_pin = AT91_PIN_PB19, .rdy_pin_active_low = 1, .enable_pin = AT91_PIN_PD15, - .partition_info = nand_partitions, + .parts = neocore926_nand_partition, + .num_parts = ARRAY_SIZE(neocore926_nand_partition), }; static struct sam9_smc_config __initdata neocore926_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-pcontrol-g20.c b/arch/arm/mach-at91/board-pcontrol-g20.c index 49e3f699b48e..c545a3e635a4 100644 --- a/arch/arm/mach-at91/board-pcontrol-g20.c +++ b/arch/arm/mach-at91/board-pcontrol-g20.c @@ -122,7 +122,7 @@ static struct at91_udc_data __initdata pcontrol_g20_udc_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata macb_data = { +static struct macb_platform_data __initdata macb_data = { .phy_irq_pin = AT91_PIN_PA28, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-picotux200.c b/arch/arm/mach-at91/board-picotux200.c index 0a8fe6a1b7c8..dc18759a24b4 100644 --- a/arch/arm/mach-at91/board-picotux200.c +++ b/arch/arm/mach-at91/board-picotux200.c @@ -60,7 +60,7 @@ static void __init picotux200_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata picotux200_eth_data = { +static struct macb_platform_data __initdata picotux200_eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-qil-a9260.c b/arch/arm/mach-at91/board-qil-a9260.c index 938cc390bea3..5444d6ac514a 100644 --- a/arch/arm/mach-at91/board-qil-a9260.c +++ b/arch/arm/mach-at91/board-qil-a9260.c @@ -104,7 +104,7 @@ static struct spi_board_info ek_spi_devices[] = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata ek_macb_data = { +static struct macb_platform_data __initdata ek_macb_data = { .phy_irq_pin = AT91_PIN_PA31, .is_rmii = 1, }; @@ -130,19 +130,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... 
not connected .rdy_pin = AT91_PIN_PC13, .enable_pin = AT91_PIN_PC14, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-rm9200dk.c b/arch/arm/mach-at91/board-rm9200dk.c index b4ac30e38a9e..022d0cebda9d 100644 --- a/arch/arm/mach-at91/board-rm9200dk.c +++ b/arch/arm/mach-at91/board-rm9200dk.c @@ -65,7 +65,7 @@ static void __init dk_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata dk_eth_data = { +static struct macb_platform_data __initdata dk_eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 1, }; @@ -138,19 +138,14 @@ static struct mtd_partition __initdata dk_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(dk_nand_partition); - return dk_nand_partition; -} - static struct atmel_nand_data __initdata dk_nand_data = { .ale = 22, .cle = 21, .det_pin = AT91_PIN_PB1, .rdy_pin = AT91_PIN_PC2, // .enable_pin = ... not there - .partition_info = nand_partitions, + .parts = dk_nand_partition, + .num_parts = ARRAY_SIZE(dk_nand_partition), }; #define DK_FLASH_BASE AT91_CHIPSELECT_0 diff --git a/arch/arm/mach-at91/board-rm9200ek.c b/arch/arm/mach-at91/board-rm9200ek.c index 99fd7f8aee0e..ed275861adef 100644 --- a/arch/arm/mach-at91/board-rm9200ek.c +++ b/arch/arm/mach-at91/board-rm9200ek.c @@ -65,7 +65,7 @@ static void __init ek_init_early(void) at91_set_serial_console(0); } -static struct at91_eth_data __initdata ek_eth_data = { +static struct macb_platform_data __initdata ek_eth_data = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-rsi-ews.c b/arch/arm/mach-at91/board-rsi-ews.c index e927df0175df..ed3b21f77674 100644 --- a/arch/arm/mach-at91/board-rsi-ews.c +++ b/arch/arm/mach-at91/board-rsi-ews.c @@ -60,7 +60,7 @@ static void __init rsi_ews_init_early(void) /* * Ethernet */ -static struct at91_eth_data rsi_ews_eth_data __initdata = { +static struct macb_platform_data rsi_ews_eth_data __initdata = { .phy_irq_pin = AT91_PIN_PC4, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-sam9-l9260.c b/arch/arm/mach-at91/board-sam9-l9260.c index 2a21e790250e..3e4b50e6f6ab 100644 --- a/arch/arm/mach-at91/board-sam9-l9260.c +++ b/arch/arm/mach-at91/board-sam9-l9260.c @@ -109,7 +109,7 @@ static struct spi_board_info ek_spi_devices[] = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata ek_macb_data = { +static struct macb_platform_data __initdata ek_macb_data = { .phy_irq_pin = AT91_PIN_PA7, .is_rmii = 0, }; @@ -131,19 +131,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... 
not connected .rdy_pin = AT91_PIN_PC13, .enable_pin = AT91_PIN_PC14, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-sam9260ek.c b/arch/arm/mach-at91/board-sam9260ek.c index 89c8b579bfda..13478e14a543 100644 --- a/arch/arm/mach-at91/board-sam9260ek.c +++ b/arch/arm/mach-at91/board-sam9260ek.c @@ -151,7 +151,7 @@ static struct spi_board_info ek_spi_devices[] = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata ek_macb_data = { +static struct macb_platform_data __initdata ek_macb_data = { .phy_irq_pin = AT91_PIN_PA7, .is_rmii = 1, }; @@ -173,19 +173,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... not connected .rdy_pin = AT91_PIN_PC13, .enable_pin = AT91_PIN_PC14, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-sam9261ek.c b/arch/arm/mach-at91/board-sam9261ek.c index 3741f43cdae9..b005b738e8ff 100644 --- a/arch/arm/mach-at91/board-sam9261ek.c +++ b/arch/arm/mach-at91/board-sam9261ek.c @@ -179,19 +179,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - static struct atmel_nand_data __initdata ek_nand_data = { .ale = 22, .cle = 21, // .det_pin = ... not connected .rdy_pin = AT91_PIN_PC15, .enable_pin = AT91_PIN_PC14, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-sam9263ek.c b/arch/arm/mach-at91/board-sam9263ek.c index a580dd451a41..fcf194e6e4fe 100644 --- a/arch/arm/mach-at91/board-sam9263ek.c +++ b/arch/arm/mach-at91/board-sam9263ek.c @@ -158,7 +158,7 @@ static struct at91_mmc_data __initdata ek_mmc_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata ek_macb_data = { +static struct macb_platform_data __initdata ek_macb_data = { .phy_irq_pin = AT91_PIN_PE31, .is_rmii = 1, }; @@ -180,19 +180,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... 
not connected .rdy_pin = AT91_PIN_PA22, .enable_pin = AT91_PIN_PD15, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-sam9g20ek.c b/arch/arm/mach-at91/board-sam9g20ek.c index 8d77c2ff96b2..78d27cc3cc09 100644 --- a/arch/arm/mach-at91/board-sam9g20ek.c +++ b/arch/arm/mach-at91/board-sam9g20ek.c @@ -123,7 +123,7 @@ static struct spi_board_info ek_spi_devices[] = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata ek_macb_data = { +static struct macb_platform_data __initdata ek_macb_data = { .phy_irq_pin = AT91_PIN_PA7, .is_rmii = 1, }; @@ -157,19 +157,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - /* det_pin is not connected */ static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, .rdy_pin = AT91_PIN_PC13, .enable_pin = AT91_PIN_PC14, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-sam9m10g45ek.c b/arch/arm/mach-at91/board-sam9m10g45ek.c index 2d6203ac1a42..4e1ee9d87096 100644 --- a/arch/arm/mach-at91/board-sam9m10g45ek.c +++ b/arch/arm/mach-at91/board-sam9m10g45ek.c @@ -115,7 +115,7 @@ static struct mci_platform_data __initdata mci1_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata ek_macb_data = { +static struct macb_platform_data __initdata ek_macb_data = { .phy_irq_pin = AT91_PIN_PD5, .is_rmii = 1, }; @@ -137,19 +137,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - /* det_pin is not connected */ static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, .rdy_pin = AT91_PIN_PC8, .enable_pin = AT91_PIN_PC14, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-sam9rlek.c b/arch/arm/mach-at91/board-sam9rlek.c index 39a28effc3df..b2b748239f36 100644 --- a/arch/arm/mach-at91/board-sam9rlek.c +++ b/arch/arm/mach-at91/board-sam9rlek.c @@ -88,19 +88,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { }, }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... 
not connected .rdy_pin = AT91_PIN_PD17, .enable_pin = AT91_PIN_PB6, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata ek_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-snapper9260.c b/arch/arm/mach-at91/board-snapper9260.c index c73d25e5faea..fbec934a7ce9 100644 --- a/arch/arm/mach-at91/board-snapper9260.c +++ b/arch/arm/mach-at91/board-snapper9260.c @@ -65,7 +65,7 @@ static struct at91_udc_data __initdata snapper9260_udc_data = { .vbus_polled = 1, }; -static struct at91_eth_data snapper9260_macb_data = { +static struct macb_platform_data snapper9260_macb_data = { .is_rmii = 1, }; @@ -97,18 +97,12 @@ static struct mtd_partition __initdata snapper9260_nand_partitions[] = { }, }; -static struct mtd_partition * __init -snapper9260_nand_partition_info(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(snapper9260_nand_partitions); - return snapper9260_nand_partitions; -} - static struct atmel_nand_data __initdata snapper9260_nand_data = { .ale = 21, .cle = 22, .rdy_pin = AT91_PIN_PC13, - .partition_info = snapper9260_nand_partition_info, + .parts = snapper9260_nand_partitions, + .num_parts = ARRAY_SIZE(snapper9260_nand_partitions), .bus_width_16 = 0, }; diff --git a/arch/arm/mach-at91/board-stamp9g20.c b/arch/arm/mach-at91/board-stamp9g20.c index 936e5fd7f406..7c06c07d872b 100644 --- a/arch/arm/mach-at91/board-stamp9g20.c +++ b/arch/arm/mach-at91/board-stamp9g20.c @@ -157,7 +157,7 @@ static struct at91_udc_data __initdata stamp9g20evb_udc_data = { /* * MACB Ethernet device */ -static struct at91_eth_data __initdata macb_data = { +static struct macb_platform_data __initdata macb_data = { .phy_irq_pin = AT91_PIN_PA28, .is_rmii = 1, }; diff --git a/arch/arm/mach-at91/board-usb-a926x.c b/arch/arm/mach-at91/board-usb-a926x.c index 5852d3d9890c..3d84233f78eb 100644 --- a/arch/arm/mach-at91/board-usb-a926x.c +++ b/arch/arm/mach-at91/board-usb-a926x.c @@ -146,7 +146,7 @@ static void __init ek_add_device_spi(void) /* * MACB Ethernet device */ -static struct at91_eth_data __initdata ek_macb_data = { +static struct macb_platform_data __initdata ek_macb_data = { .phy_irq_pin = AT91_PIN_PE31, .is_rmii = 1, }; @@ -190,19 +190,14 @@ static struct mtd_partition __initdata ek_nand_partition[] = { } }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(ek_nand_partition); - return ek_nand_partition; -} - static struct atmel_nand_data __initdata ek_nand_data = { .ale = 21, .cle = 22, // .det_pin = ... 
not connected .rdy_pin = AT91_PIN_PA22, .enable_pin = AT91_PIN_PD15, - .partition_info = nand_partitions, + .parts = ek_nand_partition, + .num_parts = ARRAY_SIZE(ek_nand_partition), }; static struct sam9_smc_config __initdata usb_a9260_nand_smc_config = { diff --git a/arch/arm/mach-at91/board-yl-9200.c b/arch/arm/mach-at91/board-yl-9200.c index 3c288b396fc4..2c40a21b2794 100644 --- a/arch/arm/mach-at91/board-yl-9200.c +++ b/arch/arm/mach-at91/board-yl-9200.c @@ -110,7 +110,7 @@ static struct gpio_led yl9200_leds[] = { /* * Ethernet */ -static struct at91_eth_data __initdata yl9200_eth_data = { +static struct macb_platform_data __initdata yl9200_eth_data = { .phy_irq_pin = AT91_PIN_PB28, .is_rmii = 1, }; @@ -172,19 +172,14 @@ static struct mtd_partition __initdata yl9200_nand_partition[] = { } }; -static struct mtd_partition * __init nand_partitions(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(yl9200_nand_partition); - return yl9200_nand_partition; -} - static struct atmel_nand_data __initdata yl9200_nand_data = { .ale = 6, .cle = 7, // .det_pin = ... not connected .rdy_pin = AT91_PIN_PC14, /* R/!B (Sheet10) */ .enable_pin = AT91_PIN_PC15, /* !CE (Sheet10) */ - .partition_info = nand_partitions, + .parts = yl9200_nand_partition, + .num_parts = ARRAY_SIZE(yl9200_nand_partition), }; /* @@ -389,7 +384,7 @@ static struct spi_board_info yl9200_spi_devices[] = { #include <video/s1d13xxxfb.h> -static void __init yl9200_init_video(void) +static void yl9200_init_video(void) { /* NWAIT Signal */ at91_set_A_periph(AT91_PIN_PC6, 0); diff --git a/arch/arm/mach-at91/cpuidle.c b/arch/arm/mach-at91/cpuidle.c index f474272c0eac..a851e6c98421 100644 --- a/arch/arm/mach-at91/cpuidle.c +++ b/arch/arm/mach-at91/cpuidle.c @@ -34,7 +34,8 @@ static struct cpuidle_driver at91_idle_driver = { /* Actual code that puts the SoC in different idle states */ static int at91_enter_idle(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { struct timeval before, after; int idle_time; @@ -42,10 +43,10 @@ static int at91_enter_idle(struct cpuidle_device *dev, local_irq_disable(); do_gettimeofday(&before); - if (state == &dev->states[0]) + if (index == 0) /* Wait for interrupt state */ cpu_do_idle(); - else if (state == &dev->states[1]) { + else if (index == 1) { asm("b 1f; .align 5; 1:"); asm("mcr p15, 0, r0, c7, c10, 4"); /* drain write buffer */ saved_lpr = sdram_selfrefresh_enable(); @@ -56,34 +57,38 @@ static int at91_enter_idle(struct cpuidle_device *dev, local_irq_enable(); idle_time = (after.tv_sec - before.tv_sec) * USEC_PER_SEC + (after.tv_usec - before.tv_usec); - return idle_time; + + dev->last_residency = idle_time; + return index; } /* Initialize CPU idle by registering the idle states */ static int at91_init_cpuidle(void) { struct cpuidle_device *device; - - cpuidle_register_driver(&at91_idle_driver); + struct cpuidle_driver *driver = &at91_idle_driver; device = &per_cpu(at91_cpuidle_device, smp_processor_id()); device->state_count = AT91_MAX_STATES; + driver->state_count = AT91_MAX_STATES; /* Wait for interrupt state */ - device->states[0].enter = at91_enter_idle; - device->states[0].exit_latency = 1; - device->states[0].target_residency = 10000; - device->states[0].flags = CPUIDLE_FLAG_TIME_VALID; - strcpy(device->states[0].name, "WFI"); - strcpy(device->states[0].desc, "Wait for interrupt"); + driver->states[0].enter = at91_enter_idle; + driver->states[0].exit_latency = 1; + driver->states[0].target_residency = 10000; + 
driver->states[0].flags = CPUIDLE_FLAG_TIME_VALID; + strcpy(driver->states[0].name, "WFI"); + strcpy(driver->states[0].desc, "Wait for interrupt"); /* Wait for interrupt and RAM self refresh state */ - device->states[1].enter = at91_enter_idle; - device->states[1].exit_latency = 10; - device->states[1].target_residency = 10000; - device->states[1].flags = CPUIDLE_FLAG_TIME_VALID; - strcpy(device->states[1].name, "RAM_SR"); - strcpy(device->states[1].desc, "WFI and RAM Self Refresh"); + driver->states[1].enter = at91_enter_idle; + driver->states[1].exit_latency = 10; + driver->states[1].target_residency = 10000; + driver->states[1].flags = CPUIDLE_FLAG_TIME_VALID; + strcpy(driver->states[1].name, "RAM_SR"); + strcpy(driver->states[1].desc, "WFI and RAM Self Refresh"); + + cpuidle_register_driver(&at91_idle_driver); if (cpuidle_register_device(device)) { printk(KERN_ERR "at91_init_cpuidle: Failed registering\n"); diff --git a/arch/arm/mach-at91/include/mach/board.h b/arch/arm/mach-at91/include/mach/board.h index d07767f4052e..e209a2992245 100644 --- a/arch/arm/mach-at91/include/mach/board.h +++ b/arch/arm/mach-at91/include/mach/board.h @@ -40,6 +40,7 @@ #include <linux/atmel-mci.h> #include <sound/atmel-ac97c.h> #include <linux/serial.h> +#include <linux/platform_data/macb.h> /* USB Device */ struct at91_udc_data { @@ -81,18 +82,7 @@ extern void __init at91_add_device_mmc(short mmc_id, struct at91_mmc_data *data) /* atmel-mci platform config */ extern void __init at91_add_device_mci(short mmc_id, struct mci_platform_data *data); - /* Ethernet (EMAC & MACB) */ -struct at91_eth_data { - u32 phy_mask; - u8 phy_irq_pin; /* PHY IRQ */ - u8 is_rmii; /* using RMII interface? */ -}; -extern void __init at91_add_device_eth(struct at91_eth_data *data); - -#if defined(CONFIG_ARCH_AT91SAM9260) || defined(CONFIG_ARCH_AT91SAM9263) || defined(CONFIG_ARCH_AT91SAM9G20) || defined(CONFIG_ARCH_AT91CAP9) \ - || defined(CONFIG_ARCH_AT91SAM9G45) -#define eth_platform_data at91_eth_data -#endif +extern void __init at91_add_device_eth(struct macb_platform_data *data); /* USB Host */ struct at91_usbh_data { @@ -117,7 +107,8 @@ struct atmel_nand_data { u8 ale; /* address line number connected to ALE */ u8 cle; /* address line number connected to CLE */ u8 bus_width_16; /* buswidth is 16 bit */ - struct mtd_partition* (*partition_info)(int, int*); + struct mtd_partition *parts; + unsigned int num_parts; }; extern void __init at91_add_device_nand(struct atmel_nand_data *data); diff --git a/arch/arm/mach-at91/include/mach/vmalloc.h b/arch/arm/mach-at91/include/mach/vmalloc.h index 8eb459f3f5b7..8e4a1bd0ab1d 100644 --- a/arch/arm/mach-at91/include/mach/vmalloc.h +++ b/arch/arm/mach-at91/include/mach/vmalloc.h @@ -21,6 +21,8 @@ #ifndef __ASM_ARCH_VMALLOC_H #define __ASM_ARCH_VMALLOC_H +#include <mach/hardware.h> + #define VMALLOC_END (AT91_VIRT_BASE & PGDIR_MASK) #endif diff --git a/arch/arm/mach-davinci/board-da830-evm.c b/arch/arm/mach-davinci/board-da830-evm.c index 26d94c0b555c..11c3db985285 100644 --- a/arch/arm/mach-davinci/board-da830-evm.c +++ b/arch/arm/mach-davinci/board-da830-evm.c @@ -377,7 +377,7 @@ static struct davinci_nand_pdata da830_evm_nand_pdata = { .nr_parts = ARRAY_SIZE(da830_evm_nand_partitions), .ecc_mode = NAND_ECC_HW, .ecc_bits = 4, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, .bbt_td = &da830_evm_nand_bbt_main_descr, .bbt_md = &da830_evm_nand_bbt_mirror_descr, .timing = &da830_evm_nandflash_timing, diff --git a/arch/arm/mach-davinci/board-da850-evm.c 
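
The cpuidle conversions in this series (at91 above; kirkwood, davinci and exynos below) all follow the same pattern: idle states are described in the cpuidle_driver rather than in each cpuidle_device, and the enter() hook now receives the driver plus a state index, records the measured residency in dev->last_residency, and returns the index it actually entered. A condensed single-state sketch, with "example" names standing in for the platform prefix:

/* needs <linux/cpuidle.h>, <linux/module.h>, <asm/proc-fns.h>, among others */
static struct cpuidle_driver example_idle_driver = {
	.name	= "example_idle",
	.owner	= THIS_MODULE,
};

static DEFINE_PER_CPU(struct cpuidle_device, example_cpuidle_device);

static int example_enter_idle(struct cpuidle_device *dev,
			      struct cpuidle_driver *drv, int index)
{
	struct timeval before, after;

	local_irq_disable();
	do_gettimeofday(&before);

	if (index == 0)
		cpu_do_idle();			/* plain WFI */

	do_gettimeofday(&after);
	local_irq_enable();

	/* report how long we actually stayed idle, then return the state */
	dev->last_residency =
		(after.tv_sec - before.tv_sec) * USEC_PER_SEC +
		(after.tv_usec - before.tv_usec);
	return index;
}

static int __init example_init_cpuidle(void)
{
	struct cpuidle_driver *drv = &example_idle_driver;
	struct cpuidle_device *dev;

	/* states now live in the driver, not the device */
	drv->states[0].enter = example_enter_idle;
	drv->states[0].exit_latency = 1;
	drv->states[0].target_residency = 10000;
	drv->states[0].flags = CPUIDLE_FLAG_TIME_VALID;
	strcpy(drv->states[0].name, "WFI");
	strcpy(drv->states[0].desc, "Wait for interrupt");
	drv->state_count = 1;

	cpuidle_register_driver(drv);

	dev = &per_cpu(example_cpuidle_device, smp_processor_id());
	dev->state_count = drv->state_count;
	return cpuidle_register_device(dev);
}
device_initcall(example_init_cpuidle);
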
b/arch/arm/mach-davinci/board-da850-evm.c index ec21663f8ddc..1d7d24995226 100644 --- a/arch/arm/mach-davinci/board-da850-evm.c +++ b/arch/arm/mach-davinci/board-da850-evm.c @@ -256,7 +256,7 @@ static struct davinci_nand_pdata da850_evm_nandflash_data = { .nr_parts = ARRAY_SIZE(da850_evm_nandflash_partition), .ecc_mode = NAND_ECC_HW, .ecc_bits = 4, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, .timing = &da850_evm_nandflash_timing, }; diff --git a/arch/arm/mach-davinci/board-dm355-evm.c b/arch/arm/mach-davinci/board-dm355-evm.c index 65566280b7c9..4e0e707c313d 100644 --- a/arch/arm/mach-davinci/board-dm355-evm.c +++ b/arch/arm/mach-davinci/board-dm355-evm.c @@ -77,7 +77,7 @@ static struct davinci_nand_pdata davinci_nand_data = { .parts = davinci_nand_partitions, .nr_parts = ARRAY_SIZE(davinci_nand_partitions), .ecc_mode = NAND_ECC_HW, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, .ecc_bits = 4, }; diff --git a/arch/arm/mach-davinci/board-dm355-leopard.c b/arch/arm/mach-davinci/board-dm355-leopard.c index b307470b071d..ff2d2413279a 100644 --- a/arch/arm/mach-davinci/board-dm355-leopard.c +++ b/arch/arm/mach-davinci/board-dm355-leopard.c @@ -74,7 +74,7 @@ static struct davinci_nand_pdata davinci_nand_data = { .parts = davinci_nand_partitions, .nr_parts = ARRAY_SIZE(davinci_nand_partitions), .ecc_mode = NAND_ECC_HW_SYNDROME, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, }; static struct resource davinci_nand_resources[] = { diff --git a/arch/arm/mach-davinci/board-dm365-evm.c b/arch/arm/mach-davinci/board-dm365-evm.c index 04c43abcca66..1918ae711428 100644 --- a/arch/arm/mach-davinci/board-dm365-evm.c +++ b/arch/arm/mach-davinci/board-dm365-evm.c @@ -139,7 +139,7 @@ static struct davinci_nand_pdata davinci_nand_data = { .parts = davinci_nand_partitions, .nr_parts = ARRAY_SIZE(davinci_nand_partitions), .ecc_mode = NAND_ECC_HW, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, .ecc_bits = 4, }; diff --git a/arch/arm/mach-davinci/board-dm644x-evm.c b/arch/arm/mach-davinci/board-dm644x-evm.c index 28fafa7819bc..0cf8abf78d33 100644 --- a/arch/arm/mach-davinci/board-dm644x-evm.c +++ b/arch/arm/mach-davinci/board-dm644x-evm.c @@ -151,7 +151,7 @@ static struct davinci_nand_pdata davinci_evm_nandflash_data = { .parts = davinci_evm_nandflash_partition, .nr_parts = ARRAY_SIZE(davinci_evm_nandflash_partition), .ecc_mode = NAND_ECC_HW, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, .timing = &davinci_evm_nandflash_timing, }; diff --git a/arch/arm/mach-davinci/board-mityomapl138.c b/arch/arm/mach-davinci/board-mityomapl138.c index 6efc84cceca0..3cfff555e8f2 100644 --- a/arch/arm/mach-davinci/board-mityomapl138.c +++ b/arch/arm/mach-davinci/board-mityomapl138.c @@ -396,7 +396,8 @@ static struct davinci_nand_pdata mityomapl138_nandflash_data = { .parts = mityomapl138_nandflash_partition, .nr_parts = ARRAY_SIZE(mityomapl138_nandflash_partition), .ecc_mode = NAND_ECC_HW, - .options = NAND_USE_FLASH_BBT | NAND_BUSWIDTH_16, + .bbt_options = NAND_BBT_USE_FLASH, + .options = NAND_BUSWIDTH_16, .ecc_bits = 1, /* 4 bit mode is not supported with 16 bit NAND */ }; diff --git a/arch/arm/mach-davinci/board-neuros-osd2.c b/arch/arm/mach-davinci/board-neuros-osd2.c index 38d6f644d8b9..e5f231aefee4 100644 --- a/arch/arm/mach-davinci/board-neuros-osd2.c +++ b/arch/arm/mach-davinci/board-neuros-osd2.c @@ -87,7 +87,7 @@ static struct davinci_nand_pdata davinci_ntosd2_nandflash_data = { .parts = 
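
On the DaVinci side the bad-block-table flag moves out of the generic NAND options word: NAND_USE_FLASH_BBT in .options becomes NAND_BBT_USE_FLASH in the new .bbt_options field (see the mach/nand.h hunk below). The mityomapl138 change shows the split most clearly; schematically, with a made-up partition table:

static struct mtd_partition example_nand_parts[] = {
	{
		.name		= "bootloader",
		.offset		= 0,
		.size		= SZ_512K,
		.mask_flags	= MTD_WRITEABLE,	/* force read-only */
	}, {
		.name		= "filesystem",
		.offset		= MTDPART_OFS_APPEND,
		.size		= MTDPART_SIZ_FULL,
	},
};

static struct davinci_nand_pdata example_nand_pdata = {
	.parts		= example_nand_parts,
	.nr_parts	= ARRAY_SIZE(example_nand_parts),
	.ecc_mode	= NAND_ECC_HW,
	.ecc_bits	= 4,
	/* was: .options = NAND_USE_FLASH_BBT | NAND_BUSWIDTH_16, */
	.bbt_options	= NAND_BBT_USE_FLASH,	/* BBT handling */
	.options	= NAND_BUSWIDTH_16,	/* remaining chip options */
};
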
davinci_ntosd2_nandflash_partition, .nr_parts = ARRAY_SIZE(davinci_ntosd2_nandflash_partition), .ecc_mode = NAND_ECC_HW, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, }; static struct resource davinci_ntosd2_nandflash_resource[] = { diff --git a/arch/arm/mach-davinci/board-tnetv107x-evm.c b/arch/arm/mach-davinci/board-tnetv107x-evm.c index 90ee7b5aabdc..f69e40a29e02 100644 --- a/arch/arm/mach-davinci/board-tnetv107x-evm.c +++ b/arch/arm/mach-davinci/board-tnetv107x-evm.c @@ -144,7 +144,7 @@ static struct davinci_nand_pdata nand_config = { .parts = nand_partitions, .nr_parts = ARRAY_SIZE(nand_partitions), .ecc_mode = NAND_ECC_HW, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, .ecc_bits = 1, }; diff --git a/arch/arm/mach-davinci/cpuidle.c b/arch/arm/mach-davinci/cpuidle.c index 60d2f4871afa..a30c7c5a6d83 100644 --- a/arch/arm/mach-davinci/cpuidle.c +++ b/arch/arm/mach-davinci/cpuidle.c @@ -79,9 +79,11 @@ static struct davinci_ops davinci_states[DAVINCI_CPUIDLE_MAX_STATES] = { /* Actual code that puts the SoC in different idle states */ static int davinci_enter_idle(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { - struct davinci_ops *ops = cpuidle_get_statedata(state); + struct cpuidle_state_usage *state_usage = &dev->states_usage[index]; + struct davinci_ops *ops = cpuidle_get_statedata(state_usage); struct timeval before, after; int idle_time; @@ -99,13 +101,17 @@ static int davinci_enter_idle(struct cpuidle_device *dev, local_irq_enable(); idle_time = (after.tv_sec - before.tv_sec) * USEC_PER_SEC + (after.tv_usec - before.tv_usec); - return idle_time; + + dev->last_residency = idle_time; + + return index; } static int __init davinci_cpuidle_probe(struct platform_device *pdev) { int ret; struct cpuidle_device *device; + struct cpuidle_driver *driver = &davinci_idle_driver; struct davinci_cpuidle_config *pdata = pdev->dev.platform_data; device = &per_cpu(davinci_cpuidle_device, smp_processor_id()); @@ -117,32 +123,33 @@ static int __init davinci_cpuidle_probe(struct platform_device *pdev) ddr2_reg_base = pdata->ddr2_ctlr_base; - ret = cpuidle_register_driver(&davinci_idle_driver); - if (ret) { - dev_err(&pdev->dev, "failed to register driver\n"); - return ret; - } - /* Wait for interrupt state */ - device->states[0].enter = davinci_enter_idle; - device->states[0].exit_latency = 1; - device->states[0].target_residency = 10000; - device->states[0].flags = CPUIDLE_FLAG_TIME_VALID; - strcpy(device->states[0].name, "WFI"); - strcpy(device->states[0].desc, "Wait for interrupt"); + driver->states[0].enter = davinci_enter_idle; + driver->states[0].exit_latency = 1; + driver->states[0].target_residency = 10000; + driver->states[0].flags = CPUIDLE_FLAG_TIME_VALID; + strcpy(driver->states[0].name, "WFI"); + strcpy(driver->states[0].desc, "Wait for interrupt"); /* Wait for interrupt and DDR self refresh state */ - device->states[1].enter = davinci_enter_idle; - device->states[1].exit_latency = 10; - device->states[1].target_residency = 10000; - device->states[1].flags = CPUIDLE_FLAG_TIME_VALID; - strcpy(device->states[1].name, "DDR SR"); - strcpy(device->states[1].desc, "WFI and DDR Self Refresh"); + driver->states[1].enter = davinci_enter_idle; + driver->states[1].exit_latency = 10; + driver->states[1].target_residency = 10000; + driver->states[1].flags = CPUIDLE_FLAG_TIME_VALID; + strcpy(driver->states[1].name, "DDR SR"); + strcpy(driver->states[1].desc, "WFI and DDR Self Refresh"); if 
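
The DaVinci cpuidle hunk also shows where per-state driver data lives after the API change: it is attached to the device's cpuidle_state_usage slots and looked up by index inside enter(). A reduced sketch, with a hypothetical per-state structure:

struct example_state_data {			/* hypothetical per-state data */
	bool	do_self_refresh;
};

static struct example_state_data example_states[] = {
	{ .do_self_refresh = false },		/* WFI only */
	{ .do_self_refresh = true },		/* WFI + memory self refresh */
};

static void example_attach_state_data(struct cpuidle_device *dev)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(example_states); i++)
		cpuidle_set_statedata(&dev->states_usage[i],
				      &example_states[i]);
}

static int example_enter(struct cpuidle_device *dev,
			 struct cpuidle_driver *drv, int index)
{
	struct cpuidle_state_usage *su = &dev->states_usage[index];
	struct example_state_data *sd = cpuidle_get_statedata(su);

	if (sd->do_self_refresh)
		pr_debug("would enter memory self refresh here\n");
	cpu_do_idle();

	dev->last_residency = 0;	/* a real driver measures the idle time */
	return index;
}
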
(pdata->ddr2_pdown) davinci_states[1].flags |= DAVINCI_CPUIDLE_FLAGS_DDR2_PWDN; - cpuidle_set_statedata(&device->states[1], &davinci_states[1]); + cpuidle_set_statedata(&device->states_usage[1], &davinci_states[1]); device->state_count = DAVINCI_CPUIDLE_MAX_STATES; + driver->state_count = DAVINCI_CPUIDLE_MAX_STATES; + + ret = cpuidle_register_driver(&davinci_idle_driver); + if (ret) { + dev_err(&pdev->dev, "failed to register driver\n"); + return ret; + } ret = cpuidle_register_device(device); if (ret) { diff --git a/arch/arm/mach-davinci/include/mach/nand.h b/arch/arm/mach-davinci/include/mach/nand.h index 025151049f05..1cf555aef896 100644 --- a/arch/arm/mach-davinci/include/mach/nand.h +++ b/arch/arm/mach-davinci/include/mach/nand.h @@ -74,8 +74,10 @@ struct davinci_nand_pdata { /* platform_data */ nand_ecc_modes_t ecc_mode; u8 ecc_bits; - /* e.g. NAND_BUSWIDTH_16 or NAND_USE_FLASH_BBT */ + /* e.g. NAND_BUSWIDTH_16 */ unsigned options; + /* e.g. NAND_BBT_USE_FLASH */ + unsigned bbt_options; /* Main and mirror bbt descriptor overrides */ struct nand_bbt_descr *bbt_td; diff --git a/arch/arm/mach-ep93xx/ts72xx.c b/arch/arm/mach-ep93xx/ts72xx.c index 1ade3c340507..8b2f1435bcac 100644 --- a/arch/arm/mach-ep93xx/ts72xx.c +++ b/arch/arm/mach-ep93xx/ts72xx.c @@ -116,8 +116,9 @@ static struct mtd_partition ts72xx_nand_parts[] = { .mask_flags = MTD_WRITEABLE, /* force read-only */ }, { .name = "Linux", - .offset = MTDPART_OFS_APPEND, - .size = 0, /* filled in later */ + .offset = MTDPART_OFS_RETAIN, + .size = TS72XX_REDBOOT_PART_SIZE, + /* leave so much for last partition */ }, { .name = "RedBoot", .offset = MTDPART_OFS_APPEND, @@ -126,28 +127,14 @@ static struct mtd_partition ts72xx_nand_parts[] = { }, }; -static void ts72xx_nand_set_parts(uint64_t size, - struct platform_nand_chip *chip) -{ - /* Factory TS-72xx boards only come with 32MiB or 128MiB NAND options */ - if (size == SZ_32M || size == SZ_128M) { - /* Set the "Linux" partition size */ - ts72xx_nand_parts[1].size = size - TS72XX_REDBOOT_PART_SIZE; - - chip->partitions = ts72xx_nand_parts; - chip->nr_partitions = ARRAY_SIZE(ts72xx_nand_parts); - } else { - pr_warning("Unknown nand disk size:%lluMiB\n", size >> 20); - } -} - static struct platform_nand_data ts72xx_nand_data = { .chip = { .nr_chips = 1, .chip_offset = 0, .chip_delay = 15, .part_probe_types = ts72xx_nand_part_probes, - .set_parts = ts72xx_nand_set_parts, + .partitions = ts72xx_nand_parts, + .nr_partitions = ARRAY_SIZE(ts72xx_nand_parts), }, .ctrl = { .cmd_ctrl = ts72xx_nand_hwcontrol, diff --git a/arch/arm/mach-exynos/cpuidle.c b/arch/arm/mach-exynos/cpuidle.c index bf7e96f2793a..35f6502144ae 100644 --- a/arch/arm/mach-exynos/cpuidle.c +++ b/arch/arm/mach-exynos/cpuidle.c @@ -16,7 +16,8 @@ #include <asm/proc-fns.h> static int exynos4_enter_idle(struct cpuidle_device *dev, - struct cpuidle_state *state); + struct cpuidle_driver *drv, + int index); static struct cpuidle_state exynos4_cpuidle_set[] = { [0] = { @@ -37,7 +38,8 @@ static struct cpuidle_driver exynos4_idle_driver = { }; static int exynos4_enter_idle(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { struct timeval before, after; int idle_time; @@ -52,29 +54,31 @@ static int exynos4_enter_idle(struct cpuidle_device *dev, idle_time = (after.tv_sec - before.tv_sec) * USEC_PER_SEC + (after.tv_usec - before.tv_usec); - return idle_time; + dev->last_residency = idle_time; + return index; } static int __init exynos4_init_cpuidle(void) { int i, max_cpuidle_state, 
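
The ts72xx change above is worth calling out: instead of a set_parts() callback that sized the "Linux" partition at probe time, the table now uses MTDPART_OFS_RETAIN, which makes a partition start right after the previous one and grow to fill the device while leaving .size bytes for the partitions that follow. A generic illustration (names and the reserved size are placeholders):

static struct mtd_partition example_parts[] = {
	{
		.name		= "boot",
		.offset		= 0,
		.size		= SZ_1M,
		.mask_flags	= MTD_WRITEABLE,	/* force read-only */
	}, {
		.name		= "rootfs",
		.offset		= MTDPART_OFS_RETAIN,
		/* with OFS_RETAIN, .size is the space to leave over for
		 * whatever comes after this partition, not its own size */
		.size		= SZ_4M,
	}, {
		.name		= "config",
		.offset		= MTDPART_OFS_APPEND,
		.size		= MTDPART_SIZ_FULL,	/* the retained 4 MiB */
	},
};
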
cpu_id; struct cpuidle_device *device; - + struct cpuidle_driver *drv = &exynos4_idle_driver; + + /* Setup cpuidle driver */ + drv->state_count = (sizeof(exynos4_cpuidle_set) / + sizeof(struct cpuidle_state)); + max_cpuidle_state = drv->state_count; + for (i = 0; i < max_cpuidle_state; i++) { + memcpy(&drv->states[i], &exynos4_cpuidle_set[i], + sizeof(struct cpuidle_state)); + } cpuidle_register_driver(&exynos4_idle_driver); for_each_cpu(cpu_id, cpu_online_mask) { device = &per_cpu(exynos4_cpuidle_device, cpu_id); device->cpu = cpu_id; - device->state_count = (sizeof(exynos4_cpuidle_set) / - sizeof(struct cpuidle_state)); - - max_cpuidle_state = device->state_count; - - for (i = 0; i < max_cpuidle_state; i++) { - memcpy(&device->states[i], &exynos4_cpuidle_set[i], - sizeof(struct cpuidle_state)); - } + device->state_count = drv->state_count; if (cpuidle_register_device(device)) { printk(KERN_ERR "CPUidle register device failed\n,"); diff --git a/arch/arm/mach-imx/Makefile.boot b/arch/arm/mach-imx/Makefile.boot index 22d85889f622..cfede5768aa0 100644 --- a/arch/arm/mach-imx/Makefile.boot +++ b/arch/arm/mach-imx/Makefile.boot @@ -1,22 +1,26 @@ -zreladdr-$(CONFIG_ARCH_MX1) += 0x08008000 -params_phys-$(CONFIG_ARCH_MX1) := 0x08000100 -initrd_phys-$(CONFIG_ARCH_MX1) := 0x08800000 +zreladdr-$(CONFIG_SOC_IMX1) += 0x08008000 +params_phys-$(CONFIG_SOC_IMX1) := 0x08000100 +initrd_phys-$(CONFIG_SOC_IMX1) := 0x08800000 -zreladdr-$(CONFIG_MACH_MX21) += 0xC0008000 -params_phys-$(CONFIG_MACH_MX21) := 0xC0000100 -initrd_phys-$(CONFIG_MACH_MX21) := 0xC0800000 +zreladdr-$(CONFIG_SOC_IMX21) += 0xC0008000 +params_phys-$(CONFIG_SOC_IMX21) := 0xC0000100 +initrd_phys-$(CONFIG_SOC_IMX21) := 0xC0800000 -zreladdr-$(CONFIG_ARCH_MX25) += 0x80008000 -params_phys-$(CONFIG_ARCH_MX25) := 0x80000100 -initrd_phys-$(CONFIG_ARCH_MX25) := 0x80800000 +zreladdr-$(CONFIG_SOC_IMX25) += 0x80008000 +params_phys-$(CONFIG_SOC_IMX25) := 0x80000100 +initrd_phys-$(CONFIG_SOC_IMX25) := 0x80800000 -zreladdr-$(CONFIG_MACH_MX27) += 0xA0008000 -params_phys-$(CONFIG_MACH_MX27) := 0xA0000100 -initrd_phys-$(CONFIG_MACH_MX27) := 0xA0800000 +zreladdr-$(CONFIG_SOC_IMX27) += 0xA0008000 +params_phys-$(CONFIG_SOC_IMX27) := 0xA0000100 +initrd_phys-$(CONFIG_SOC_IMX27) := 0xA0800000 -zreladdr-$(CONFIG_ARCH_MX3) += 0x80008000 -params_phys-$(CONFIG_ARCH_MX3) := 0x80000100 -initrd_phys-$(CONFIG_ARCH_MX3) := 0x80800000 +zreladdr-$(CONFIG_SOC_IMX31) += 0x80008000 +params_phys-$(CONFIG_SOC_IMX31) := 0x80000100 +initrd_phys-$(CONFIG_SOC_IMX31) := 0x80800000 + +zreladdr-$(CONFIG_SOC_IMX35) += 0x80008000 +params_phys-$(CONFIG_SOC_IMX35) := 0x80000100 +initrd_phys-$(CONFIG_SOC_IMX35) := 0x80800000 zreladdr-$(CONFIG_SOC_IMX6Q) += 0x10008000 params_phys-$(CONFIG_SOC_IMX6Q) := 0x10000100 diff --git a/arch/arm/mach-imx/clock-imx6q.c b/arch/arm/mach-imx/clock-imx6q.c index e0b926dfeced..613a1b993bff 100644 --- a/arch/arm/mach-imx/clock-imx6q.c +++ b/arch/arm/mach-imx/clock-imx6q.c @@ -1139,7 +1139,7 @@ static int _clk_set_rate(struct clk *clk, unsigned long rate) return -EINVAL; max_div = ((d->bm_pred >> d->bp_pred) + 1) * - ((d->bm_pred >> d->bp_pred) + 1); + ((d->bm_podf >> d->bp_podf) + 1); div = parent_rate / rate; if (div == 0) @@ -2002,6 +2002,21 @@ int __init mx6q_clocks_init(void) clk_set_rate(&asrc_serial_clk, 1500000); clk_set_rate(&enfc_clk, 11000000); + /* + * Before pinctrl API is available, we have to rely on the pad + * configuration set up by bootloader. 
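
The exynos4 variant of the cpuidle conversion registers several CPUs against one shared driver: the static state table is copied into drv->states[] once, then a cpuidle_device is registered per online CPU. Roughly, reusing the example driver and enter handler sketched earlier:

static struct cpuidle_state example_state_table[] = {
	[0] = {
		.enter			= example_enter_idle,
		.exit_latency		= 1,
		.target_residency	= 10000,
		.flags			= CPUIDLE_FLAG_TIME_VALID,
		.name			= "IDLE",
		.desc			= "ARM clock gated (WFI)",
	},
};

static int __init example_init_cpuidle_smp(void)
{
	struct cpuidle_driver *drv = &example_idle_driver;
	struct cpuidle_device *dev;
	int i, cpu_id;

	/* fill the driver's state array from the static table */
	drv->state_count = ARRAY_SIZE(example_state_table);
	for (i = 0; i < drv->state_count; i++)
		memcpy(&drv->states[i], &example_state_table[i],
		       sizeof(struct cpuidle_state));

	cpuidle_register_driver(drv);

	/* one device per online CPU, all sharing the driver's states */
	for_each_cpu(cpu_id, cpu_online_mask) {
		dev = &per_cpu(example_cpuidle_device, cpu_id);
		dev->cpu = cpu_id;
		dev->state_count = drv->state_count;
		if (cpuidle_register_device(dev))
			return -EIO;
	}
	return 0;
}
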
For usdhc example here, + * u-boot sets up the pads for 49.5 MHz case, and we have to lower + * the usdhc clock from 198 to 49.5 MHz to match the pad configuration. + * + * FIXME: This is should be removed after pinctrl API is available. + * At that time, usdhc driver can call pinctrl API to change pad + * configuration dynamically per different usdhc clock settings. + */ + clk_set_rate(&usdhc1_clk, 49500000); + clk_set_rate(&usdhc2_clk, 49500000); + clk_set_rate(&usdhc3_clk, 49500000); + clk_set_rate(&usdhc4_clk, 49500000); + np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-gpt"); base = of_iomap(np, 0); WARN_ON(!base); diff --git a/arch/arm/mach-kirkwood/cpuidle.c b/arch/arm/mach-kirkwood/cpuidle.c index 864e569f684e..7088180b018b 100644 --- a/arch/arm/mach-kirkwood/cpuidle.c +++ b/arch/arm/mach-kirkwood/cpuidle.c @@ -33,17 +33,18 @@ static DEFINE_PER_CPU(struct cpuidle_device, kirkwood_cpuidle_device); /* Actual code that puts the SoC in different idle states */ static int kirkwood_enter_idle(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { struct timeval before, after; int idle_time; local_irq_disable(); do_gettimeofday(&before); - if (state == &dev->states[0]) + if (index == 0) /* Wait for interrupt state */ cpu_do_idle(); - else if (state == &dev->states[1]) { + else if (index == 1) { /* * Following write will put DDR in self refresh. * Note that we have 256 cycles before DDR puts it @@ -58,35 +59,40 @@ static int kirkwood_enter_idle(struct cpuidle_device *dev, local_irq_enable(); idle_time = (after.tv_sec - before.tv_sec) * USEC_PER_SEC + (after.tv_usec - before.tv_usec); - return idle_time; + + /* Update last residency */ + dev->last_residency = idle_time; + + return index; } /* Initialize CPU idle by registering the idle states */ static int kirkwood_init_cpuidle(void) { struct cpuidle_device *device; - - cpuidle_register_driver(&kirkwood_idle_driver); + struct cpuidle_driver *driver = &kirkwood_idle_driver; device = &per_cpu(kirkwood_cpuidle_device, smp_processor_id()); device->state_count = KIRKWOOD_MAX_STATES; + driver->state_count = KIRKWOOD_MAX_STATES; /* Wait for interrupt state */ - device->states[0].enter = kirkwood_enter_idle; - device->states[0].exit_latency = 1; - device->states[0].target_residency = 10000; - device->states[0].flags = CPUIDLE_FLAG_TIME_VALID; - strcpy(device->states[0].name, "WFI"); - strcpy(device->states[0].desc, "Wait for interrupt"); + driver->states[0].enter = kirkwood_enter_idle; + driver->states[0].exit_latency = 1; + driver->states[0].target_residency = 10000; + driver->states[0].flags = CPUIDLE_FLAG_TIME_VALID; + strcpy(driver->states[0].name, "WFI"); + strcpy(driver->states[0].desc, "Wait for interrupt"); /* Wait for interrupt and DDR self refresh state */ - device->states[1].enter = kirkwood_enter_idle; - device->states[1].exit_latency = 10; - device->states[1].target_residency = 10000; - device->states[1].flags = CPUIDLE_FLAG_TIME_VALID; - strcpy(device->states[1].name, "DDR SR"); - strcpy(device->states[1].desc, "WFI and DDR Self Refresh"); + driver->states[1].enter = kirkwood_enter_idle; + driver->states[1].exit_latency = 10; + driver->states[1].target_residency = 10000; + driver->states[1].flags = CPUIDLE_FLAG_TIME_VALID; + strcpy(driver->states[1].name, "DDR SR"); + strcpy(driver->states[1].desc, "WFI and DDR Self Refresh"); + cpuidle_register_driver(&kirkwood_idle_driver); if (cpuidle_register_device(device)) { printk(KERN_ERR "kirkwood_init_cpuidle: Failed registering\n"); 
return -EIO; diff --git a/arch/arm/mach-mmp/aspenite.c b/arch/arm/mach-mmp/aspenite.c index edcbadad31c1..f0d236dfb02b 100644 --- a/arch/arm/mach-mmp/aspenite.c +++ b/arch/arm/mach-mmp/aspenite.c @@ -167,8 +167,9 @@ static struct mtd_partition aspenite_nand_partitions[] = { static struct pxa3xx_nand_platform_data aspenite_nand_info = { .enable_arbiter = 1, - .parts = aspenite_nand_partitions, - .nr_parts = ARRAY_SIZE(aspenite_nand_partitions), + .num_cs = 1, + .parts[0] = aspenite_nand_partitions, + .nr_parts[0] = ARRAY_SIZE(aspenite_nand_partitions), }; static struct i2c_board_info aspenite_i2c_info[] __initdata = { diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile index 4285dfd80b6f..4ad3969b9881 100644 --- a/arch/arm/mach-msm/Makefile +++ b/arch/arm/mach-msm/Makefile @@ -15,6 +15,8 @@ obj-$(CONFIG_MSM_SMD) += smd.o smd_debug.o obj-$(CONFIG_MSM_SMD) += last_radio_log.o obj-$(CONFIG_MSM_SCM) += scm.o scm-boot.o +CFLAGS_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1) + obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o obj-$(CONFIG_SMP) += headsmp.o platsmp.o diff --git a/arch/arm/mach-msm/board-msm7x30.c b/arch/arm/mach-msm/board-msm7x30.c index 71de5062c71e..db81ed531031 100644 --- a/arch/arm/mach-msm/board-msm7x30.c +++ b/arch/arm/mach-msm/board-msm7x30.c @@ -42,8 +42,8 @@ extern struct sys_timer msm_timer; -static void __init msm7x30_fixup(struct machine_desc *desc, struct tag *tag, - char **cmdline, struct meminfo *mi) +static void __init msm7x30_fixup(struct tag *tag, char **cmdline, + struct meminfo *mi) { for (; tag->hdr.size; tag = tag_next(tag)) if (tag->hdr.tag == ATAG_MEM && tag->u.mem.start == 0x200000) { diff --git a/arch/arm/mach-msm/board-msm8960.c b/arch/arm/mach-msm/board-msm8960.c index b04468e7d00e..6dc1cbd2a595 100644 --- a/arch/arm/mach-msm/board-msm8960.c +++ b/arch/arm/mach-msm/board-msm8960.c @@ -32,8 +32,8 @@ #include "devices.h" -static void __init msm8960_fixup(struct machine_desc *desc, struct tag *tag, - char **cmdline, struct meminfo *mi) +static void __init msm8960_fixup(struct tag *tag, char **cmdline, + struct meminfo *mi) { for (; tag->hdr.size; tag = tag_next(tag)) if (tag->hdr.tag == ATAG_MEM && diff --git a/arch/arm/mach-msm/board-msm8x60.c b/arch/arm/mach-msm/board-msm8x60.c index cf38e2284fa9..44bf71688373 100644 --- a/arch/arm/mach-msm/board-msm8x60.c +++ b/arch/arm/mach-msm/board-msm8x60.c @@ -28,8 +28,8 @@ #include <mach/board.h> #include <mach/msm_iomap.h> -static void __init msm8x60_fixup(struct machine_desc *desc, struct tag *tag, - char **cmdline, struct meminfo *mi) +static void __init msm8x60_fixup(struct tag *tag, char **cmdline, + struct meminfo *mi) { for (; tag->hdr.size; tag = tag_next(tag)) if (tag->hdr.tag == ATAG_MEM && diff --git a/arch/arm/mach-msm/scm.c b/arch/arm/mach-msm/scm.c index 232f97a04504..bafabb502580 100644 --- a/arch/arm/mach-msm/scm.c +++ b/arch/arm/mach-msm/scm.c @@ -180,6 +180,9 @@ static u32 smc(u32 cmd_addr) __asmeq("%1", "r0") __asmeq("%2", "r1") __asmeq("%3", "r2") +#ifdef REQUIRES_SEC + ".arch_extension sec\n" +#endif "smc #0 @ switch to secure world\n" : "=r" (r0) : "r" (r0), "r" (r1), "r" (r2) diff --git a/arch/arm/mach-mx5/clock-mx51-mx53.c b/arch/arm/mach-mx5/clock-mx51-mx53.c index 2aacf41c48e7..4cb276977190 100644 --- a/arch/arm/mach-mx5/clock-mx51-mx53.c +++ b/arch/arm/mach-mx5/clock-mx51-mx53.c @@ -1281,9 +1281,9 @@ DEFINE_CLOCK(gpt_clk, 0, MXC_CCM_CCGR2, MXC_CCM_CCGRx_CG9_OFFSET, NULL, NULL, &ipg_clk, &gpt_ipg_clk); DEFINE_CLOCK(pwm1_clk, 0, MXC_CCM_CCGR2, MXC_CCM_CCGRx_CG6_OFFSET, 
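
The aspenite change above reflects the pxa3xx_nand platform data growing per-chip-select partition tables: num_cs says how many chip selects are populated, and .parts[]/.nr_parts[] are indexed by chip select. A sketch with a made-up single-CS layout:

static struct mtd_partition example_cs0_partitions[] = {
	{
		.name	= "filesystem",
		.offset	= 0,
		.size	= MTDPART_SIZ_FULL,
	},
};

static struct pxa3xx_nand_platform_data example_nand_info = {
	.enable_arbiter	= 1,
	.num_cs		= 1,		/* only chip select 0 is wired */
	/* partition table and count are now arrays, one slot per CS */
	.parts[0]	= example_cs0_partitions,
	.nr_parts[0]	= ARRAY_SIZE(example_cs0_partitions),
};
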
- NULL, NULL, &ipg_clk, NULL); + NULL, NULL, &ipg_perclk, NULL); DEFINE_CLOCK(pwm2_clk, 0, MXC_CCM_CCGR2, MXC_CCM_CCGRx_CG8_OFFSET, - NULL, NULL, &ipg_clk, NULL); + NULL, NULL, &ipg_perclk, NULL); /* I2C */ DEFINE_CLOCK(i2c1_clk, 0, MXC_CCM_CCGR1, MXC_CCM_CCGRx_CG9_OFFSET, @@ -1634,6 +1634,7 @@ int __init mx53_clocks_init(unsigned long ckil, unsigned long osc, return 0; } +#ifdef CONFIG_OF static void __init clk_get_freq_dt(unsigned long *ckil, unsigned long *osc, unsigned long *ckih1, unsigned long *ckih2) { @@ -1671,3 +1672,4 @@ int __init mx53_clocks_init_dt(void) clk_get_freq_dt(&ckil, &osc, &ckih1, &ckih2); return mx53_clocks_init(ckil, osc, ckih1, ckih2); } +#endif diff --git a/arch/arm/mach-mxs/mach-mx28evk.c b/arch/arm/mach-mxs/mach-mx28evk.c index ac2316d53d3c..064ec5abaa55 100644 --- a/arch/arm/mach-mxs/mach-mx28evk.c +++ b/arch/arm/mach-mxs/mach-mx28evk.c @@ -471,7 +471,8 @@ static void __init mx28evk_init(void) "mmc0-slot-power"); if (ret) pr_warn("failed to request gpio mmc0-slot-power: %d\n", ret); - mx28_add_mxs_mmc(0, &mx28evk_mmc_pdata[0]); + else + mx28_add_mxs_mmc(0, &mx28evk_mmc_pdata[0]); ret = gpio_request_one(MX28EVK_MMC1_SLOT_POWER, GPIOF_OUT_INIT_LOW, "mmc1-slot-power"); @@ -480,7 +481,6 @@ static void __init mx28evk_init(void) else mx28_add_mxs_mmc(1, &mx28evk_mmc_pdata[1]); - mx28_add_mxs_mmc(1, &mx28evk_mmc_pdata[1]); mx28_add_rtc_stmp3xxx(); gpio_led_register_device(0, &mx28evk_led_data); diff --git a/arch/arm/mach-omap1/board-palmz71.c b/arch/arm/mach-omap1/board-palmz71.c index c6fe61dfe856..42061573e380 100644 --- a/arch/arm/mach-omap1/board-palmz71.c +++ b/arch/arm/mach-omap1/board-palmz71.c @@ -42,7 +42,6 @@ #include <plat/irda.h> #include <plat/keypad.h> #include <plat/common.h> -#include <plat/omap-alsa.h> #include <linux/spi/spi.h> #include <linux/spi/ads7846.h> diff --git a/arch/arm/mach-omap1/pm.c b/arch/arm/mach-omap1/pm.c index 495b3987d461..89ea20ca0ccc 100644 --- a/arch/arm/mach-omap1/pm.c +++ b/arch/arm/mach-omap1/pm.c @@ -116,7 +116,7 @@ void omap1_pm_idle(void) return; } -#ifdef CONFIG_OMAP_MPU_TIMER +#if defined(CONFIG_OMAP_MPU_TIMER) && !defined(CONFIG_OMAP_DM_TIMER) #warning Enable 32kHz OS timer in order to allow sleep states in idle use_idlect1 = use_idlect1 & ~(1 << 9); #else diff --git a/arch/arm/mach-omap2/board-devkit8000.c b/arch/arm/mach-omap2/board-devkit8000.c index 42918940c530..90154e411da0 100644 --- a/arch/arm/mach-omap2/board-devkit8000.c +++ b/arch/arm/mach-omap2/board-devkit8000.c @@ -226,7 +226,6 @@ static int devkit8000_twl_gpio_setup(struct device *dev, { int ret; - omap_mux_init_gpio(29, OMAP_PIN_INPUT); /* gpio + 0 is "mmc0_cd" (input/IRQ) */ mmc[0].gpio_cd = gpio + 0; omap2_hsmmc_init(mmc); diff --git a/arch/arm/mach-omap2/board-generic.c b/arch/arm/mach-omap2/board-generic.c index 0cc9094e5ee0..fb55fa3dad5a 100644 --- a/arch/arm/mach-omap2/board-generic.c +++ b/arch/arm/mach-omap2/board-generic.c @@ -28,6 +28,7 @@ * XXX: Still needed to boot until the i2c & twl driver is adapted to * device-tree */ +#ifdef CONFIG_ARCH_OMAP4 static struct twl4030_platform_data sdp4430_twldata = { .irq_base = TWL6030_IRQ_BASE, .irq_end = TWL6030_IRQ_END, @@ -37,7 +38,9 @@ static void __init omap4_i2c_init(void) { omap4_pmic_init("twl6030", &sdp4430_twldata); } +#endif +#ifdef CONFIG_ARCH_OMAP3 static struct twl4030_platform_data beagle_twldata = { .irq_base = TWL4030_IRQ_BASE, .irq_end = TWL4030_IRQ_END, @@ -47,6 +50,7 @@ static void __init omap3_i2c_init(void) { omap3_pmic_init("twl4030", &beagle_twldata); } +#endif static struct 
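
The mx28evk hunk above is a small behavioural fix: the MMC host is only registered when its slot-power GPIO could actually be claimed, and a duplicate registration of the second port is dropped. The resulting pattern, with a hypothetical GPIO define and platform data array:

static void __init example_add_mmc0(void)
{
	int ret;

	ret = gpio_request_one(EXAMPLE_MMC0_SLOT_POWER,	/* hypothetical pin */
			       GPIOF_OUT_INIT_LOW, "mmc0-slot-power");
	if (ret)
		pr_warn("failed to request gpio mmc0-slot-power: %d\n", ret);
	else
		/* example_mmc_pdata stands in for the board's MMC pdata */
		mx28_add_mxs_mmc(0, &example_mmc_pdata[0]);
}
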
of_device_id omap_dt_match_table[] __initdata = { { .compatible = "simple-bus", }, @@ -72,17 +76,21 @@ static void __init omap_generic_init(void) of_platform_populate(NULL, omap_dt_match_table, NULL, NULL); } +#ifdef CONFIG_ARCH_OMAP4 static void __init omap4_init(void) { omap4_i2c_init(); omap_generic_init(); } +#endif +#ifdef CONFIG_ARCH_OMAP3 static void __init omap3_init(void) { omap3_i2c_init(); omap_generic_init(); } +#endif #if defined(CONFIG_SOC_OMAP2420) static const char *omap242x_boards_compat[] __initdata = { diff --git a/arch/arm/mach-omap2/board-h4.c b/arch/arm/mach-omap2/board-h4.c index c12666ee7017..8b351d92a1cc 100644 --- a/arch/arm/mach-omap2/board-h4.c +++ b/arch/arm/mach-omap2/board-h4.c @@ -25,6 +25,7 @@ #include <linux/err.h> #include <linux/clk.h> #include <linux/io.h> +#include <linux/input/matrix_keypad.h> #include <mach/hardware.h> #include <asm/mach-types.h> @@ -34,7 +35,6 @@ #include <plat/usb.h> #include <plat/board.h> #include <plat/common.h> -#include <plat/keypad.h> #include <plat/menelaus.h> #include <plat/dma.h> #include <plat/gpmc.h> @@ -50,10 +50,8 @@ #define H4_ETHR_GPIO_IRQ 92 -static unsigned int row_gpios[6] = { 88, 89, 124, 11, 6, 96 }; -static unsigned int col_gpios[7] = { 90, 91, 100, 36, 12, 97, 98 }; - -static const unsigned int h4_keymap[] = { +#if defined(CONFIG_KEYBOARD_MATRIX) || defined(CONFIG_KEYBOARD_MATRIX_MODULE) +static const uint32_t board_matrix_keys[] = { KEY(0, 0, KEY_LEFT), KEY(1, 0, KEY_RIGHT), KEY(2, 0, KEY_A), @@ -86,6 +84,71 @@ static const unsigned int h4_keymap[] = { KEY(4, 5, KEY_ENTER), }; +static const struct matrix_keymap_data board_keymap_data = { + .keymap = board_matrix_keys, + .keymap_size = ARRAY_SIZE(board_matrix_keys), +}; + +static unsigned int board_keypad_row_gpios[] = { + 88, 89, 124, 11, 6, 96 +}; + +static unsigned int board_keypad_col_gpios[] = { + 90, 91, 100, 36, 12, 97, 98 +}; + +static struct matrix_keypad_platform_data board_keypad_platform_data = { + .keymap_data = &board_keymap_data, + .row_gpios = board_keypad_row_gpios, + .num_row_gpios = ARRAY_SIZE(board_keypad_row_gpios), + .col_gpios = board_keypad_col_gpios, + .num_col_gpios = ARRAY_SIZE(board_keypad_col_gpios), + .active_low = 1, + + .debounce_ms = 20, + .col_scan_delay_us = 5, +}; + +static struct platform_device board_keyboard = { + .name = "matrix-keypad", + .id = -1, + .dev = { + .platform_data = &board_keypad_platform_data, + }, +}; +static void __init board_mkp_init(void) +{ + omap_mux_init_gpio(88, OMAP_PULL_ENA | OMAP_PULL_UP); + omap_mux_init_gpio(89, OMAP_PULL_ENA | OMAP_PULL_UP); + omap_mux_init_gpio(124, OMAP_PULL_ENA | OMAP_PULL_UP); + omap_mux_init_signal("mcbsp2_dr.gpio_11", OMAP_PULL_ENA | OMAP_PULL_UP); + if (omap_has_menelaus()) { + omap_mux_init_signal("sdrc_a14.gpio0", + OMAP_PULL_ENA | OMAP_PULL_UP); + omap_mux_init_signal("vlynq_rx0.gpio_15", 0); + omap_mux_init_signal("gpio_98", 0); + board_keypad_row_gpios[5] = 0; + board_keypad_col_gpios[2] = 15; + board_keypad_col_gpios[6] = 18; + } else { + omap_mux_init_signal("gpio_96", OMAP_PULL_ENA | OMAP_PULL_UP); + omap_mux_init_signal("gpio_100", 0); + omap_mux_init_signal("gpio_98", 0); + } + omap_mux_init_signal("gpio_90", 0); + omap_mux_init_signal("gpio_91", 0); + omap_mux_init_signal("gpio_36", 0); + omap_mux_init_signal("mcbsp2_clkx.gpio_12", 0); + omap_mux_init_signal("gpio_97", 0); + + platform_device_register(&board_keyboard); +} +#else +static inline void board_mkp_init(void) +{ +} +#endif + static struct mtd_partition h4_partitions[] = { /* bootloader (U-Boot, etc) 
in first sector */ { @@ -137,31 +200,8 @@ static struct platform_device h4_flash_device = { .resource = &h4_flash_resource, }; -static const struct matrix_keymap_data h4_keymap_data = { - .keymap = h4_keymap, - .keymap_size = ARRAY_SIZE(h4_keymap), -}; - -static struct omap_kp_platform_data h4_kp_data = { - .rows = 6, - .cols = 7, - .keymap_data = &h4_keymap_data, - .rep = true, - .row_gpios = row_gpios, - .col_gpios = col_gpios, -}; - -static struct platform_device h4_kp_device = { - .name = "omap-keypad", - .id = -1, - .dev = { - .platform_data = &h4_kp_data, - }, -}; - static struct platform_device *h4_devices[] __initdata = { &h4_flash_device, - &h4_kp_device, }; static struct panel_generic_dpi_data h4_panel_data = { @@ -336,31 +376,7 @@ static void __init omap_h4_init(void) * if not needed. */ -#if defined(CONFIG_KEYBOARD_OMAP) || defined(CONFIG_KEYBOARD_OMAP_MODULE) - omap_mux_init_gpio(88, OMAP_PULL_ENA | OMAP_PULL_UP); - omap_mux_init_gpio(89, OMAP_PULL_ENA | OMAP_PULL_UP); - omap_mux_init_gpio(124, OMAP_PULL_ENA | OMAP_PULL_UP); - omap_mux_init_signal("mcbsp2_dr.gpio_11", OMAP_PULL_ENA | OMAP_PULL_UP); - if (omap_has_menelaus()) { - omap_mux_init_signal("sdrc_a14.gpio0", - OMAP_PULL_ENA | OMAP_PULL_UP); - omap_mux_init_signal("vlynq_rx0.gpio_15", 0); - omap_mux_init_signal("gpio_98", 0); - row_gpios[5] = 0; - col_gpios[2] = 15; - col_gpios[6] = 18; - } else { - omap_mux_init_signal("gpio_96", OMAP_PULL_ENA | OMAP_PULL_UP); - omap_mux_init_signal("gpio_100", 0); - omap_mux_init_signal("gpio_98", 0); - } - omap_mux_init_signal("gpio_90", 0); - omap_mux_init_signal("gpio_91", 0); - omap_mux_init_signal("gpio_36", 0); - omap_mux_init_signal("mcbsp2_clkx.gpio_12", 0); - omap_mux_init_signal("gpio_97", 0); -#endif - + board_mkp_init(); i2c_register_board_info(1, h4_i2c_board_info, ARRAY_SIZE(h4_i2c_board_info)); diff --git a/arch/arm/mach-omap2/clkt_dpll.c b/arch/arm/mach-omap2/clkt_dpll.c index bcffee001bfa..e069a9be93df 100644 --- a/arch/arm/mach-omap2/clkt_dpll.c +++ b/arch/arm/mach-omap2/clkt_dpll.c @@ -46,10 +46,19 @@ (DPLL_SCALE_FACTOR / DPLL_SCALE_BASE)) /* DPLL valid Fint frequency band limits - from 34xx TRM Section 4.7.6.2 */ -#define DPLL_FINT_BAND1_MIN 750000 -#define DPLL_FINT_BAND1_MAX 2100000 -#define DPLL_FINT_BAND2_MIN 7500000 -#define DPLL_FINT_BAND2_MAX 21000000 +#define OMAP3430_DPLL_FINT_BAND1_MIN 750000 +#define OMAP3430_DPLL_FINT_BAND1_MAX 2100000 +#define OMAP3430_DPLL_FINT_BAND2_MIN 7500000 +#define OMAP3430_DPLL_FINT_BAND2_MAX 21000000 + +/* + * DPLL valid Fint frequency range for OMAP36xx and OMAP4xxx. + * From device data manual section 4.3 "DPLL and DLL Specifications". 
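
The H4 keypad rework above drops the OMAP-specific omap-keypad platform device in favour of the generic "matrix-keypad" driver: the keymap goes into a matrix_keymap_data, the row/column GPIOs and scan parameters into matrix_keypad_platform_data, and the device is registered by the board code. A trimmed-down sketch with invented GPIO numbers and a 2x2 keymap:

/* needs <linux/input/matrix_keypad.h> and <linux/platform_device.h> */
static const uint32_t example_matrix_keys[] = {
	KEY(0, 0, KEY_UP),
	KEY(0, 1, KEY_DOWN),
	KEY(1, 0, KEY_LEFT),
	KEY(1, 1, KEY_RIGHT),
};

static const struct matrix_keymap_data example_keymap_data = {
	.keymap		= example_matrix_keys,
	.keymap_size	= ARRAY_SIZE(example_matrix_keys),
};

static unsigned int example_row_gpios[] = { 88, 89 };	/* illustrative */
static unsigned int example_col_gpios[] = { 90, 91 };

static struct matrix_keypad_platform_data example_keypad_pdata = {
	.keymap_data		= &example_keymap_data,
	.row_gpios		= example_row_gpios,
	.num_row_gpios		= ARRAY_SIZE(example_row_gpios),
	.col_gpios		= example_col_gpios,
	.num_col_gpios		= ARRAY_SIZE(example_col_gpios),
	.active_low		= 1,
	.debounce_ms		= 20,
	.col_scan_delay_us	= 5,
};

static struct platform_device example_keyboard = {
	.name	= "matrix-keypad",		/* generic driver */
	.id	= -1,
	.dev	= {
		.platform_data = &example_keypad_pdata,
	},
};
/* board init: mux the GPIOs, then platform_device_register(&example_keyboard) */
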
+ */ +#define OMAP3PLUS_DPLL_FINT_JTYPE_MIN 500000 +#define OMAP3PLUS_DPLL_FINT_JTYPE_MAX 2500000 +#define OMAP3PLUS_DPLL_FINT_MIN 32000 +#define OMAP3PLUS_DPLL_FINT_MAX 52000000 /* _dpll_test_fint() return codes */ #define DPLL_FINT_UNDERFLOW -1 @@ -71,33 +80,43 @@ static int _dpll_test_fint(struct clk *clk, u8 n) { struct dpll_data *dd; - long fint; + long fint, fint_min, fint_max; int ret = 0; dd = clk->dpll_data; /* DPLL divider must result in a valid jitter correction val */ fint = clk->parent->rate / n; - if (fint < DPLL_FINT_BAND1_MIN) { + if (cpu_is_omap24xx()) { + /* Should not be called for OMAP2, so warn if it is called */ + WARN(1, "No fint limits available for OMAP2!\n"); + return DPLL_FINT_INVALID; + } else if (cpu_is_omap3430()) { + fint_min = OMAP3430_DPLL_FINT_BAND1_MIN; + fint_max = OMAP3430_DPLL_FINT_BAND2_MAX; + } else if (dd->flags & DPLL_J_TYPE) { + fint_min = OMAP3PLUS_DPLL_FINT_JTYPE_MIN; + fint_max = OMAP3PLUS_DPLL_FINT_JTYPE_MAX; + } else { + fint_min = OMAP3PLUS_DPLL_FINT_MIN; + fint_max = OMAP3PLUS_DPLL_FINT_MAX; + } + + if (fint < fint_min) { pr_debug("rejecting n=%d due to Fint failure, " "lowering max_divider\n", n); dd->max_divider = n; ret = DPLL_FINT_UNDERFLOW; - - } else if (fint > DPLL_FINT_BAND1_MAX && - fint < DPLL_FINT_BAND2_MIN) { - - pr_debug("rejecting n=%d due to Fint failure\n", n); - ret = DPLL_FINT_INVALID; - - } else if (fint > DPLL_FINT_BAND2_MAX) { - + } else if (fint > fint_max) { pr_debug("rejecting n=%d due to Fint failure, " "boosting min_divider\n", n); dd->min_divider = n; ret = DPLL_FINT_INVALID; - + } else if (cpu_is_omap3430() && fint > OMAP3430_DPLL_FINT_BAND1_MAX && + fint < OMAP3430_DPLL_FINT_BAND2_MIN) { + pr_debug("rejecting n=%d due to Fint failure\n", n); + ret = DPLL_FINT_INVALID; } return ret; diff --git a/arch/arm/mach-omap2/clock.h b/arch/arm/mach-omap2/clock.h index 48ac568881bd..2311bc217226 100644 --- a/arch/arm/mach-omap2/clock.h +++ b/arch/arm/mach-omap2/clock.h @@ -66,6 +66,8 @@ void omap3_noncore_dpll_disable(struct clk *clk); int omap4_dpllmx_gatectrl_read(struct clk *clk); void omap4_dpllmx_allow_gatectrl(struct clk *clk); void omap4_dpllmx_deny_gatectrl(struct clk *clk); +long omap4_dpll_regm4xen_round_rate(struct clk *clk, unsigned long target_rate); +unsigned long omap4_dpll_regm4xen_recalc(struct clk *clk); #ifdef CONFIG_OMAP_RESET_CLOCKS void omap2_clk_disable_unused(struct clk *clk); diff --git a/arch/arm/mach-omap2/clock2420_data.c b/arch/arm/mach-omap2/clock2420_data.c index 14a6277dd184..61ad3855f10a 100644 --- a/arch/arm/mach-omap2/clock2420_data.c +++ b/arch/arm/mach-omap2/clock2420_data.c @@ -1898,18 +1898,6 @@ static struct omap_clk omap2420_clks[] = { CLK(NULL, "pka_ick", &pka_ick, CK_242X), CLK(NULL, "usb_fck", &usb_fck, CK_242X), CLK("musb-hdrc", "fck", &osc_ck, CK_242X), - CLK("omap_timer.1", "fck", &gpt1_fck, CK_242X), - CLK("omap_timer.2", "fck", &gpt2_fck, CK_242X), - CLK("omap_timer.3", "fck", &gpt3_fck, CK_242X), - CLK("omap_timer.4", "fck", &gpt4_fck, CK_242X), - CLK("omap_timer.5", "fck", &gpt5_fck, CK_242X), - CLK("omap_timer.6", "fck", &gpt6_fck, CK_242X), - CLK("omap_timer.7", "fck", &gpt7_fck, CK_242X), - CLK("omap_timer.8", "fck", &gpt8_fck, CK_242X), - CLK("omap_timer.9", "fck", &gpt9_fck, CK_242X), - CLK("omap_timer.10", "fck", &gpt10_fck, CK_242X), - CLK("omap_timer.11", "fck", &gpt11_fck, CK_242X), - CLK("omap_timer.12", "fck", &gpt12_fck, CK_242X), CLK("omap_timer.1", "32k_ck", &func_32k_ck, CK_243X), CLK("omap_timer.2", "32k_ck", &func_32k_ck, CK_243X), CLK("omap_timer.3", 
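
For the DPLL change just above, the net effect is easier to see with the divider-bound bookkeeping stripped out: pick the valid Fint window for the SoC family, then classify the candidate divider N. A simplified restatement using the constants and return codes introduced here (not a drop-in replacement for _dpll_test_fint):

static int example_check_fint(unsigned long parent_rate, u8 n,
			      bool is_omap3430, bool is_jtype_dpll)
{
	unsigned long fint = parent_rate / n;
	unsigned long fint_min, fint_max;

	if (is_omap3430) {
		fint_min = OMAP3430_DPLL_FINT_BAND1_MIN;
		fint_max = OMAP3430_DPLL_FINT_BAND2_MAX;
	} else if (is_jtype_dpll) {
		fint_min = OMAP3PLUS_DPLL_FINT_JTYPE_MIN;
		fint_max = OMAP3PLUS_DPLL_FINT_JTYPE_MAX;
	} else {
		fint_min = OMAP3PLUS_DPLL_FINT_MIN;
		fint_max = OMAP3PLUS_DPLL_FINT_MAX;
	}

	if (fint < fint_min)
		return DPLL_FINT_UNDERFLOW;	/* N too large */
	if (fint > fint_max)
		return DPLL_FINT_INVALID;	/* N too small */
	/* 3430 additionally forbids the gap between its two valid bands */
	if (is_omap3430 && fint > OMAP3430_DPLL_FINT_BAND1_MAX &&
	    fint < OMAP3430_DPLL_FINT_BAND2_MIN)
		return DPLL_FINT_INVALID;
	return 0;
}
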
"32k_ck", &func_32k_ck, CK_243X), diff --git a/arch/arm/mach-omap2/clock2430_data.c b/arch/arm/mach-omap2/clock2430_data.c index ea6717cfa3c8..0cc12879e7b9 100644 --- a/arch/arm/mach-omap2/clock2430_data.c +++ b/arch/arm/mach-omap2/clock2430_data.c @@ -1998,18 +1998,6 @@ static struct omap_clk omap2430_clks[] = { CLK(NULL, "mdm_intc_ick", &mdm_intc_ick, CK_243X), CLK("omap_hsmmc.0", "mmchsdb_fck", &mmchsdb1_fck, CK_243X), CLK("omap_hsmmc.1", "mmchsdb_fck", &mmchsdb2_fck, CK_243X), - CLK("omap_timer.1", "fck", &gpt1_fck, CK_243X), - CLK("omap_timer.2", "fck", &gpt2_fck, CK_243X), - CLK("omap_timer.3", "fck", &gpt3_fck, CK_243X), - CLK("omap_timer.4", "fck", &gpt4_fck, CK_243X), - CLK("omap_timer.5", "fck", &gpt5_fck, CK_243X), - CLK("omap_timer.6", "fck", &gpt6_fck, CK_243X), - CLK("omap_timer.7", "fck", &gpt7_fck, CK_243X), - CLK("omap_timer.8", "fck", &gpt8_fck, CK_243X), - CLK("omap_timer.9", "fck", &gpt9_fck, CK_243X), - CLK("omap_timer.10", "fck", &gpt10_fck, CK_243X), - CLK("omap_timer.11", "fck", &gpt11_fck, CK_243X), - CLK("omap_timer.12", "fck", &gpt12_fck, CK_243X), CLK("omap_timer.1", "32k_ck", &func_32k_ck, CK_243X), CLK("omap_timer.2", "32k_ck", &func_32k_ck, CK_243X), CLK("omap_timer.3", "32k_ck", &func_32k_ck, CK_243X), diff --git a/arch/arm/mach-omap2/clock3xxx_data.c b/arch/arm/mach-omap2/clock3xxx_data.c index 65dd363163bc..5d0064a4fb5a 100644 --- a/arch/arm/mach-omap2/clock3xxx_data.c +++ b/arch/arm/mach-omap2/clock3xxx_data.c @@ -3464,18 +3464,6 @@ static struct omap_clk omap3xxx_clks[] = { CLK("musb-am35x", "fck", &hsotgusb_fck_am35xx, CK_AM35XX), CLK(NULL, "hecc_ck", &hecc_ck, CK_AM35XX), CLK(NULL, "uart4_ick", &uart4_ick_am35xx, CK_AM35XX), - CLK("omap_timer.1", "fck", &gpt1_fck, CK_3XXX), - CLK("omap_timer.2", "fck", &gpt2_fck, CK_3XXX), - CLK("omap_timer.3", "fck", &gpt3_fck, CK_3XXX), - CLK("omap_timer.4", "fck", &gpt4_fck, CK_3XXX), - CLK("omap_timer.5", "fck", &gpt5_fck, CK_3XXX), - CLK("omap_timer.6", "fck", &gpt6_fck, CK_3XXX), - CLK("omap_timer.7", "fck", &gpt7_fck, CK_3XXX), - CLK("omap_timer.8", "fck", &gpt8_fck, CK_3XXX), - CLK("omap_timer.9", "fck", &gpt9_fck, CK_3XXX), - CLK("omap_timer.10", "fck", &gpt10_fck, CK_3XXX), - CLK("omap_timer.11", "fck", &gpt11_fck, CK_3XXX), - CLK("omap_timer.12", "fck", &gpt12_fck, CK_3XXX), CLK("omap_timer.1", "32k_ck", &omap_32k_fck, CK_3XXX), CLK("omap_timer.2", "32k_ck", &omap_32k_fck, CK_3XXX), CLK("omap_timer.3", "32k_ck", &omap_32k_fck, CK_3XXX), diff --git a/arch/arm/mach-omap2/clock44xx.h b/arch/arm/mach-omap2/clock44xx.h index 7ceb870e7ab8..287a46f78d97 100644 --- a/arch/arm/mach-omap2/clock44xx.h +++ b/arch/arm/mach-omap2/clock44xx.h @@ -8,6 +8,13 @@ #ifndef __ARCH_ARM_MACH_OMAP2_CLOCK44XX_H #define __ARCH_ARM_MACH_OMAP2_CLOCK44XX_H +/* + * OMAP4430_REGM4XEN_MULT: If the CM_CLKMODE_DPLL_ABE.DPLL_REGM4XEN bit is + * set, then the DPLL's lock frequency is multiplied by 4 (OMAP4430 TRM + * vV Section 3.6.3.3.1 "DPLLs Output Clocks Parameters") + */ +#define OMAP4430_REGM4XEN_MULT 4 + int omap4xxx_clk_init(void); #endif diff --git a/arch/arm/mach-omap2/clock44xx_data.c b/arch/arm/mach-omap2/clock44xx_data.c index 946bf04a956d..0798a802497a 100644 --- a/arch/arm/mach-omap2/clock44xx_data.c +++ b/arch/arm/mach-omap2/clock44xx_data.c @@ -270,8 +270,8 @@ static struct clk dpll_abe_ck = { .dpll_data = &dpll_abe_dd, .init = &omap2_init_dpll_parent, .ops = &clkops_omap3_noncore_dpll_ops, - .recalc = &omap3_dpll_recalc, - .round_rate = &omap2_dpll_round_rate, + .recalc = &omap4_dpll_regm4xen_recalc, + .round_rate = 
&omap4_dpll_regm4xen_round_rate, .set_rate = &omap3_noncore_dpll_set_rate, }; @@ -1195,11 +1195,25 @@ static struct clk l4_wkup_clk_mux_ck = { .recalc = &omap2_clksel_recalc, }; +static const struct clksel_rate div2_2to1_rates[] = { + { .div = 1, .val = 1, .flags = RATE_IN_4430 }, + { .div = 2, .val = 0, .flags = RATE_IN_4430 }, + { .div = 0 }, +}; + +static const struct clksel ocp_abe_iclk_div[] = { + { .parent = &aess_fclk, .rates = div2_2to1_rates }, + { .parent = NULL }, +}; + static struct clk ocp_abe_iclk = { .name = "ocp_abe_iclk", .parent = &aess_fclk, + .clksel = ocp_abe_iclk_div, + .clksel_reg = OMAP4430_CM1_ABE_AESS_CLKCTRL, + .clksel_mask = OMAP4430_CLKSEL_AESS_FCLK_MASK, .ops = &clkops_null, - .recalc = &followparent_recalc, + .recalc = &omap2_clksel_recalc, }; static struct clk per_abe_24m_fclk = { @@ -1398,9 +1412,9 @@ static struct clk dss_dss_clk = { }; static const struct clksel_rate div3_8to32_rates[] = { - { .div = 8, .val = 0, .flags = RATE_IN_44XX }, - { .div = 16, .val = 1, .flags = RATE_IN_44XX }, - { .div = 32, .val = 2, .flags = RATE_IN_44XX }, + { .div = 8, .val = 0, .flags = RATE_IN_4460 }, + { .div = 16, .val = 1, .flags = RATE_IN_4460 }, + { .div = 32, .val = 2, .flags = RATE_IN_4460 }, { .div = 0 }, }; @@ -3363,17 +3377,6 @@ static struct omap_clk omap44xx_clks[] = { CLK("usbhs-omap.0", "usbhost_ick", &dummy_ck, CK_443X), CLK("usbhs-omap.0", "usbtll_fck", &dummy_ck, CK_443X), CLK("omap_wdt", "ick", &dummy_ck, CK_443X), - CLK("omap_timer.1", "fck", &timer1_fck, CK_443X), - CLK("omap_timer.2", "fck", &timer2_fck, CK_443X), - CLK("omap_timer.3", "fck", &timer3_fck, CK_443X), - CLK("omap_timer.4", "fck", &timer4_fck, CK_443X), - CLK("omap_timer.5", "fck", &timer5_fck, CK_443X), - CLK("omap_timer.6", "fck", &timer6_fck, CK_443X), - CLK("omap_timer.7", "fck", &timer7_fck, CK_443X), - CLK("omap_timer.8", "fck", &timer8_fck, CK_443X), - CLK("omap_timer.9", "fck", &timer9_fck, CK_443X), - CLK("omap_timer.10", "fck", &timer10_fck, CK_443X), - CLK("omap_timer.11", "fck", &timer11_fck, CK_443X), CLK("omap_timer.1", "32k_ck", &sys_32k_ck, CK_443X), CLK("omap_timer.2", "32k_ck", &sys_32k_ck, CK_443X), CLK("omap_timer.3", "32k_ck", &sys_32k_ck, CK_443X), @@ -3403,12 +3406,12 @@ int __init omap4xxx_clk_init(void) struct omap_clk *c; u32 cpu_clkflg; - if (cpu_is_omap44xx()) { + if (cpu_is_omap443x()) { cpu_mask = RATE_IN_4430; cpu_clkflg = CK_443X; } else if (cpu_is_omap446x()) { - cpu_mask = RATE_IN_4460; - cpu_clkflg = CK_446X; + cpu_mask = RATE_IN_4460 | RATE_IN_4430; + cpu_clkflg = CK_446X | CK_443X; } else { return 0; } diff --git a/arch/arm/mach-omap2/cpuidle34xx.c b/arch/arm/mach-omap2/cpuidle34xx.c index 4bf6e6e8b100..1fe35c24fba2 100644 --- a/arch/arm/mach-omap2/cpuidle34xx.c +++ b/arch/arm/mach-omap2/cpuidle34xx.c @@ -88,17 +88,21 @@ static int _cpuidle_deny_idle(struct powerdomain *pwrdm, /** * omap3_enter_idle - Programs OMAP3 to enter the specified state * @dev: cpuidle device - * @state: The target state to be programmed + * @drv: cpuidle driver + * @index: the index of state to be entered * * Called from the CPUidle framework to program the device to the * specified target state selected by the governor. 
*/ static int omap3_enter_idle(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { - struct omap3_idle_statedata *cx = cpuidle_get_statedata(state); + struct omap3_idle_statedata *cx = + cpuidle_get_statedata(&dev->states_usage[index]); struct timespec ts_preidle, ts_postidle, ts_idle; u32 mpu_state = cx->mpu_state, core_state = cx->core_state; + int idle_time; /* Used to keep track of the total time in idle */ getnstimeofday(&ts_preidle); @@ -113,7 +117,7 @@ static int omap3_enter_idle(struct cpuidle_device *dev, goto return_sleep_time; /* Deny idle for C1 */ - if (state == &dev->states[0]) { + if (index == 0) { pwrdm_for_each_clkdm(mpu_pd, _cpuidle_deny_idle); pwrdm_for_each_clkdm(core_pd, _cpuidle_deny_idle); } @@ -122,7 +126,7 @@ static int omap3_enter_idle(struct cpuidle_device *dev, omap_sram_idle(); /* Re-allow idle for C1 */ - if (state == &dev->states[0]) { + if (index == 0) { pwrdm_for_each_clkdm(mpu_pd, _cpuidle_allow_idle); pwrdm_for_each_clkdm(core_pd, _cpuidle_allow_idle); } @@ -134,28 +138,38 @@ return_sleep_time: local_irq_enable(); local_fiq_enable(); - return ts_idle.tv_nsec / NSEC_PER_USEC + ts_idle.tv_sec * USEC_PER_SEC; + idle_time = ts_idle.tv_nsec / NSEC_PER_USEC + ts_idle.tv_sec * \ + USEC_PER_SEC; + + /* Update cpuidle counters */ + dev->last_residency = idle_time; + + return index; } /** * next_valid_state - Find next valid C-state * @dev: cpuidle device - * @state: Currently selected C-state + * @drv: cpuidle driver + * @index: Index of currently selected c-state * - * If the current state is valid, it is returned back to the caller. - * Else, this function searches for a lower c-state which is still - * valid. + * If the state corresponding to index is valid, index is returned back + * to the caller. Else, this function searches for a lower c-state which is + * still valid (as defined in omap3_power_states[]) and returns its index. * * A state is valid if the 'valid' field is enabled and * if it satisfies the enable_off_mode condition. */ -static struct cpuidle_state *next_valid_state(struct cpuidle_device *dev, - struct cpuidle_state *curr) +static int next_valid_state(struct cpuidle_device *dev, + struct cpuidle_driver *drv, + int index) { - struct cpuidle_state *next = NULL; - struct omap3_idle_statedata *cx = cpuidle_get_statedata(curr); + struct cpuidle_state_usage *curr_usage = &dev->states_usage[index]; + struct cpuidle_state *curr = &drv->states[index]; + struct omap3_idle_statedata *cx = cpuidle_get_statedata(curr_usage); u32 mpu_deepest_state = PWRDM_POWER_RET; u32 core_deepest_state = PWRDM_POWER_RET; + int next_index = -1; if (enable_off_mode) { mpu_deepest_state = PWRDM_POWER_OFF; @@ -172,20 +186,20 @@ static struct cpuidle_state *next_valid_state(struct cpuidle_device *dev, if ((cx->valid) && (cx->mpu_state >= mpu_deepest_state) && (cx->core_state >= core_deepest_state)) { - return curr; + return index; } else { int idx = OMAP3_NUM_STATES - 1; /* Reach the current state starting at highest C-state */ for (; idx >= 0; idx--) { - if (&dev->states[idx] == curr) { - next = &dev->states[idx]; + if (&drv->states[idx] == curr) { + next_index = idx; break; } } /* Should never hit this condition */ - WARN_ON(next == NULL); + WARN_ON(next_index == -1); /* * Drop to next valid state. 
@@ -193,41 +207,44 @@ static struct cpuidle_state *next_valid_state(struct cpuidle_device *dev, */ idx--; for (; idx >= 0; idx--) { - cx = cpuidle_get_statedata(&dev->states[idx]); + cx = cpuidle_get_statedata(&dev->states_usage[idx]); if ((cx->valid) && (cx->mpu_state >= mpu_deepest_state) && (cx->core_state >= core_deepest_state)) { - next = &dev->states[idx]; + next_index = idx; break; } } /* * C1 is always valid. - * So, no need to check for 'next==NULL' outside this loop. + * So, no need to check for 'next_index == -1' outside + * this loop. */ } - return next; + return next_index; } /** * omap3_enter_idle_bm - Checks for any bus activity * @dev: cpuidle device - * @state: The target state to be programmed + * @drv: cpuidle driver + * @index: array index of target state to be programmed * * This function checks for any pending activity and then programs * the device to the specified or a safer state. */ static int omap3_enter_idle_bm(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { - struct cpuidle_state *new_state; + int new_state_idx; u32 core_next_state, per_next_state = 0, per_saved_state = 0, cam_state; struct omap3_idle_statedata *cx; int ret; if (!omap3_can_sleep()) { - new_state = dev->safe_state; + new_state_idx = drv->safe_state_index; goto select_state; } @@ -237,7 +254,7 @@ static int omap3_enter_idle_bm(struct cpuidle_device *dev, */ cam_state = pwrdm_read_pwrst(cam_pd); if (cam_state == PWRDM_POWER_ON) { - new_state = dev->safe_state; + new_state_idx = drv->safe_state_index; goto select_state; } @@ -253,7 +270,7 @@ static int omap3_enter_idle_bm(struct cpuidle_device *dev, * Prevent PER off if CORE is not in retention or off as this * would disable PER wakeups completely. */ - cx = cpuidle_get_statedata(state); + cx = cpuidle_get_statedata(&dev->states_usage[index]); core_next_state = cx->core_state; per_next_state = per_saved_state = pwrdm_read_next_pwrst(per_pd); if ((per_next_state == PWRDM_POWER_OFF) && @@ -264,11 +281,10 @@ static int omap3_enter_idle_bm(struct cpuidle_device *dev, if (per_next_state != per_saved_state) pwrdm_set_next_pwrst(per_pd, per_next_state); - new_state = next_valid_state(dev, state); + new_state_idx = next_valid_state(dev, drv, index); select_state: - dev->last_state = new_state; - ret = omap3_enter_idle(dev, new_state); + ret = omap3_enter_idle(dev, drv, new_state_idx); /* Restore original PER state if it was modified */ if (per_next_state != per_saved_state) @@ -301,22 +317,31 @@ struct cpuidle_driver omap3_idle_driver = { .owner = THIS_MODULE, }; -/* Helper to fill the C-state common data and register the driver_data */ -static inline struct omap3_idle_statedata *_fill_cstate( - struct cpuidle_device *dev, +/* Helper to fill the C-state common data*/ +static inline void _fill_cstate(struct cpuidle_driver *drv, int idx, const char *descr) { - struct omap3_idle_statedata *cx = &omap3_idle_data[idx]; - struct cpuidle_state *state = &dev->states[idx]; + struct cpuidle_state *state = &drv->states[idx]; state->exit_latency = cpuidle_params_table[idx].exit_latency; state->target_residency = cpuidle_params_table[idx].target_residency; state->flags = CPUIDLE_FLAG_TIME_VALID; state->enter = omap3_enter_idle_bm; - cx->valid = cpuidle_params_table[idx].valid; sprintf(state->name, "C%d", idx + 1); strncpy(state->desc, descr, CPUIDLE_DESC_LEN); - cpuidle_set_statedata(state, cx); + +} + +/* Helper to register the driver_data */ +static inline struct omap3_idle_statedata *_fill_cstate_usage( + 
struct cpuidle_device *dev, + int idx) +{ + struct omap3_idle_statedata *cx = &omap3_idle_data[idx]; + struct cpuidle_state_usage *state_usage = &dev->states_usage[idx]; + + cx->valid = cpuidle_params_table[idx].valid; + cpuidle_set_statedata(state_usage, cx); return cx; } @@ -330,6 +355,7 @@ static inline struct omap3_idle_statedata *_fill_cstate( int __init omap3_idle_init(void) { struct cpuidle_device *dev; + struct cpuidle_driver *drv = &omap3_idle_driver; struct omap3_idle_statedata *cx; mpu_pd = pwrdm_lookup("mpu_pwrdm"); @@ -337,44 +363,52 @@ int __init omap3_idle_init(void) per_pd = pwrdm_lookup("per_pwrdm"); cam_pd = pwrdm_lookup("cam_pwrdm"); - cpuidle_register_driver(&omap3_idle_driver); + + drv->safe_state_index = -1; dev = &per_cpu(omap3_idle_dev, smp_processor_id()); /* C1 . MPU WFI + Core active */ - cx = _fill_cstate(dev, 0, "MPU ON + CORE ON"); - (&dev->states[0])->enter = omap3_enter_idle; - dev->safe_state = &dev->states[0]; + _fill_cstate(drv, 0, "MPU ON + CORE ON"); + (&drv->states[0])->enter = omap3_enter_idle; + drv->safe_state_index = 0; + cx = _fill_cstate_usage(dev, 0); cx->valid = 1; /* C1 is always valid */ cx->mpu_state = PWRDM_POWER_ON; cx->core_state = PWRDM_POWER_ON; /* C2 . MPU WFI + Core inactive */ - cx = _fill_cstate(dev, 1, "MPU ON + CORE ON"); + _fill_cstate(drv, 1, "MPU ON + CORE ON"); + cx = _fill_cstate_usage(dev, 1); cx->mpu_state = PWRDM_POWER_ON; cx->core_state = PWRDM_POWER_ON; /* C3 . MPU CSWR + Core inactive */ - cx = _fill_cstate(dev, 2, "MPU RET + CORE ON"); + _fill_cstate(drv, 2, "MPU RET + CORE ON"); + cx = _fill_cstate_usage(dev, 2); cx->mpu_state = PWRDM_POWER_RET; cx->core_state = PWRDM_POWER_ON; /* C4 . MPU OFF + Core inactive */ - cx = _fill_cstate(dev, 3, "MPU OFF + CORE ON"); + _fill_cstate(drv, 3, "MPU OFF + CORE ON"); + cx = _fill_cstate_usage(dev, 3); cx->mpu_state = PWRDM_POWER_OFF; cx->core_state = PWRDM_POWER_ON; /* C5 . MPU RET + Core RET */ - cx = _fill_cstate(dev, 4, "MPU RET + CORE RET"); + _fill_cstate(drv, 4, "MPU RET + CORE RET"); + cx = _fill_cstate_usage(dev, 4); cx->mpu_state = PWRDM_POWER_RET; cx->core_state = PWRDM_POWER_RET; /* C6 . MPU OFF + Core RET */ - cx = _fill_cstate(dev, 5, "MPU OFF + CORE RET"); + _fill_cstate(drv, 5, "MPU OFF + CORE RET"); + cx = _fill_cstate_usage(dev, 5); cx->mpu_state = PWRDM_POWER_OFF; cx->core_state = PWRDM_POWER_RET; /* C7 . MPU OFF + Core OFF */ - cx = _fill_cstate(dev, 6, "MPU OFF + CORE OFF"); + _fill_cstate(drv, 6, "MPU OFF + CORE OFF"); + cx = _fill_cstate_usage(dev, 6); /* * Erratum i583: implementation for ES rev < Es1.2 on 3630. We cannot * enable OFF mode in a stable form for previous revisions. 
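The omap3 cpuidle conversion above follows the generic split used throughout this series: the state table and safe_state_index move into struct cpuidle_driver, per-device driver_data hangs off dev->states_usage[], and the enter() callback now takes a state index, records dev->last_residency itself and returns the index it actually entered. Below is a minimal, hedged sketch of that callback shape only; example_enter and example_state_data are illustrative placeholders and not part of this patch set, and the idle entry is stubbed with cpu_do_idle().

#include <linux/kernel.h>
#include <linux/cpuidle.h>
#include <linux/ktime.h>
#include <asm/proc-fns.h>

/* hypothetical per-state bookkeeping hung off dev->states_usage[] */
struct example_state_data {
	u32 mpu_state;
};

static int example_enter(struct cpuidle_device *dev,
			 struct cpuidle_driver *drv, int index)
{
	/* driver_data moved from the state itself to the per-device usage slot */
	struct example_state_data *cx =
		cpuidle_get_statedata(&dev->states_usage[index]);
	ktime_t before = ktime_get();

	pr_debug("entering C%d (mpu_state %u)\n", index + 1, cx->mpu_state);
	cpu_do_idle();		/* stand-in for the real low-power entry path */

	/* the callback now reports residency itself ... */
	dev->last_residency = (int)ktime_to_us(ktime_sub(ktime_get(), before));

	/* ... and returns the index of the state actually entered */
	return index;
}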
@@ -388,6 +422,9 @@ int __init omap3_idle_init(void) cx->mpu_state = PWRDM_POWER_OFF; cx->core_state = PWRDM_POWER_OFF; + drv->state_count = OMAP3_NUM_STATES; + cpuidle_register_driver(&omap3_idle_driver); + dev->state_count = OMAP3_NUM_STATES; if (cpuidle_register_device(dev)) { printk(KERN_ERR "%s: CPUidle register device failed\n", diff --git a/arch/arm/mach-omap2/devices.c b/arch/arm/mach-omap2/devices.c index 68ec03152d5f..c15cfada5f13 100644 --- a/arch/arm/mach-omap2/devices.c +++ b/arch/arm/mach-omap2/devices.c @@ -318,18 +318,10 @@ static inline void omap_init_audio(void) {} #if defined(CONFIG_SND_OMAP_SOC_MCPDM) || \ defined(CONFIG_SND_OMAP_SOC_MCPDM_MODULE) -static struct omap_device_pm_latency omap_mcpdm_latency[] = { - { - .deactivate_func = omap_device_idle_hwmods, - .activate_func = omap_device_enable_hwmods, - .flags = OMAP_DEVICE_LATENCY_AUTO_ADJUST, - }, -}; - static void omap_init_mcpdm(void) { struct omap_hwmod *oh; - struct omap_device *od; + struct platform_device *pdev; oh = omap_hwmod_lookup("mcpdm"); if (!oh) { @@ -337,11 +329,8 @@ static void omap_init_mcpdm(void) return; } - od = omap_device_build("omap-mcpdm", -1, oh, NULL, 0, - omap_mcpdm_latency, - ARRAY_SIZE(omap_mcpdm_latency), 0); - if (IS_ERR(od)) - printk(KERN_ERR "Could not build omap_device for omap-mcpdm-dai\n"); + pdev = omap_device_build("omap-mcpdm", -1, oh, NULL, 0, NULL, 0, 0); + WARN(IS_ERR(pdev), "Can't build omap_device for omap-mcpdm.\n"); } #else static inline void omap_init_mcpdm(void) {} diff --git a/arch/arm/mach-omap2/dpll3xxx.c b/arch/arm/mach-omap2/dpll3xxx.c index f77022be783d..fc56745676fa 100644 --- a/arch/arm/mach-omap2/dpll3xxx.c +++ b/arch/arm/mach-omap2/dpll3xxx.c @@ -390,7 +390,8 @@ int omap3_noncore_dpll_enable(struct clk *clk) * propagating? */ if (!r) - clk->rate = omap2_get_dpll_rate(clk); + clk->rate = (clk->recalc) ? clk->recalc(clk) : + omap2_get_dpll_rate(clk); return r; } @@ -424,6 +425,7 @@ void omap3_noncore_dpll_disable(struct clk *clk) int omap3_noncore_dpll_set_rate(struct clk *clk, unsigned long rate) { struct clk *new_parent = NULL; + unsigned long hw_rate; u16 freqsel = 0; struct dpll_data *dd; int ret; @@ -435,7 +437,8 @@ int omap3_noncore_dpll_set_rate(struct clk *clk, unsigned long rate) if (!dd) return -EINVAL; - if (rate == omap2_get_dpll_rate(clk)) + hw_rate = (clk->recalc) ? clk->recalc(clk) : omap2_get_dpll_rate(clk); + if (rate == hw_rate) return 0; /* @@ -455,7 +458,7 @@ int omap3_noncore_dpll_set_rate(struct clk *clk, unsigned long rate) new_parent = dd->clk_bypass; } else { if (dd->last_rounded_rate != rate) - omap2_dpll_round_rate(clk, rate); + rate = clk->round_rate(clk, rate); if (dd->last_rounded_rate == 0) return -EINVAL; diff --git a/arch/arm/mach-omap2/dpll44xx.c b/arch/arm/mach-omap2/dpll44xx.c index 4e4da6160d05..9c6a296b3dc3 100644 --- a/arch/arm/mach-omap2/dpll44xx.c +++ b/arch/arm/mach-omap2/dpll44xx.c @@ -19,6 +19,7 @@ #include <plat/clock.h> #include "clock.h" +#include "clock44xx.h" #include "cm-regbits-44xx.h" /* Supported only on OMAP4 */ @@ -82,3 +83,71 @@ const struct clkops clkops_omap4_dpllmx_ops = { .deny_idle = omap4_dpllmx_deny_gatectrl, }; +/** + * omap4_dpll_regm4xen_recalc - compute DPLL rate, considering REGM4XEN bit + * @clk: struct clk * of the DPLL to compute the rate for + * + * Compute the output rate for the OMAP4 DPLL represented by @clk. + * Takes the REGM4XEN bit into consideration, which is needed for the + * OMAP4 ABE DPLL. Returns the DPLL's output rate (before M-dividers) + * upon success, or 0 upon error. 
+ */ +unsigned long omap4_dpll_regm4xen_recalc(struct clk *clk) +{ + u32 v; + unsigned long rate; + struct dpll_data *dd; + + if (!clk || !clk->dpll_data) + return 0; + + dd = clk->dpll_data; + + rate = omap2_get_dpll_rate(clk); + + /* regm4xen adds a multiplier of 4 to DPLL calculations */ + v = __raw_readl(dd->control_reg); + if (v & OMAP4430_DPLL_REGM4XEN_MASK) + rate *= OMAP4430_REGM4XEN_MULT; + + return rate; +} + +/** + * omap4_dpll_regm4xen_round_rate - round DPLL rate, considering REGM4XEN bit + * @clk: struct clk * of the DPLL to round a rate for + * @target_rate: the desired rate of the DPLL + * + * Compute the rate that would be programmed into the DPLL hardware + * for @clk if set_rate() were to be provided with the rate + * @target_rate. Takes the REGM4XEN bit into consideration, which is + * needed for the OMAP4 ABE DPLL. Returns the rounded rate (before + * M-dividers) upon success, -EINVAL if @clk is null or not a DPLL, or + * ~0 if an error occurred in omap2_dpll_round_rate(). + */ +long omap4_dpll_regm4xen_round_rate(struct clk *clk, unsigned long target_rate) +{ + u32 v; + struct dpll_data *dd; + long r; + + if (!clk || !clk->dpll_data) + return -EINVAL; + + dd = clk->dpll_data; + + /* regm4xen adds a multiplier of 4 to DPLL calculations */ + v = __raw_readl(dd->control_reg) & OMAP4430_DPLL_REGM4XEN_MASK; + + if (v) + target_rate = target_rate / OMAP4430_REGM4XEN_MULT; + + r = omap2_dpll_round_rate(clk, target_rate); + if (r == ~0) + return r; + + if (v) + clk->dpll_data->last_rounded_rate *= OMAP4430_REGM4XEN_MULT; + + return clk->dpll_data->last_rounded_rate; +} diff --git a/arch/arm/mach-omap2/dsp.c b/arch/arm/mach-omap2/dsp.c index 911cd2e68d46..74f18f2952df 100644 --- a/arch/arm/mach-omap2/dsp.c +++ b/arch/arm/mach-omap2/dsp.c @@ -18,6 +18,7 @@ * of the OMAP PM core code. */ +#include <linux/module.h> #include <linux/platform_device.h> #include "cm2xxx_3xxx.h" #include "prm2xxx_3xxx.h" diff --git a/arch/arm/mach-omap2/hsmmc.c b/arch/arm/mach-omap2/hsmmc.c index 77085847e4e7..f4a1020559a7 100644 --- a/arch/arm/mach-omap2/hsmmc.c +++ b/arch/arm/mach-omap2/hsmmc.c @@ -129,15 +129,11 @@ static void omap4_hsmmc1_before_set_reg(struct device *dev, int slot, * Assume we power both OMAP VMMC1 (for CMD, CLK, DAT0..3) and the * card with Vcc regulator (from twl4030 or whatever). OMAP has both * 1.8V and 3.0V modes, controlled by the PBIAS register. - * - * In 8-bit modes, OMAP VMMC1A (for DAT4..7) needs a supply, which - * is most naturally TWL VSIM; those pins also use PBIAS. - * - * FIXME handle VMMC1A as needed ... 
*/ reg = omap4_ctrl_pad_readl(control_pbias_offset); reg &= ~(OMAP4_MMC1_PBIASLITE_PWRDNZ_MASK | - OMAP4_MMC1_PWRDNZ_MASK); + OMAP4_MMC1_PWRDNZ_MASK | + OMAP4_MMC1_PBIASLITE_VMODE_MASK); omap4_ctrl_pad_writel(reg, control_pbias_offset); } @@ -172,12 +168,6 @@ static void omap4_hsmmc1_after_set_reg(struct device *dev, int slot, reg &= ~(OMAP4_MMC1_PWRDNZ_MASK); omap4_ctrl_pad_writel(reg, control_pbias_offset); } - } else { - reg = omap4_ctrl_pad_readl(control_pbias_offset); - reg |= (OMAP4_MMC1_PBIASLITE_PWRDNZ_MASK | - OMAP4_MMC1_PWRDNZ_MASK | - OMAP4_MMC1_PBIASLITE_VMODE_MASK); - omap4_ctrl_pad_writel(reg, control_pbias_offset); } } @@ -489,7 +479,7 @@ void __init omap2_hsmmc_init(struct omap2_hsmmc_info *controllers) OMAP4_SDMMC1_PUSTRENGTH_GRP1_MASK); reg &= ~(OMAP4_SDMMC1_PUSTRENGTH_GRP2_MASK | OMAP4_SDMMC1_PUSTRENGTH_GRP3_MASK); - reg |= (OMAP4_USBC1_DR0_SPEEDCTRL_MASK| + reg |= (OMAP4_SDMMC1_DR0_SPEEDCTRL_MASK | OMAP4_SDMMC1_DR1_SPEEDCTRL_MASK | OMAP4_SDMMC1_DR2_SPEEDCTRL_MASK); omap4_ctrl_pad_writel(reg, control_mmc1); diff --git a/arch/arm/mach-omap2/id.c b/arch/arm/mach-omap2/id.c index d27daf921c7e..7f47092a193f 100644 --- a/arch/arm/mach-omap2/id.c +++ b/arch/arm/mach-omap2/id.c @@ -187,8 +187,11 @@ static void __init omap3_check_features(void) OMAP3_CHECK_FEATURE(status, ISP); if (cpu_is_omap3630()) omap_features |= OMAP3_HAS_192MHZ_CLK; - if (!cpu_is_omap3505() && !cpu_is_omap3517()) + if (cpu_is_omap3430() || cpu_is_omap3630()) omap_features |= OMAP3_HAS_IO_WAKEUP; + if (cpu_is_omap3630() || omap_rev() == OMAP3430_REV_ES3_1 || + omap_rev() == OMAP3430_REV_ES3_1_2) + omap_features |= OMAP3_HAS_IO_CHAIN_CTRL; omap_features |= OMAP3_HAS_SDRC; diff --git a/arch/arm/mach-omap2/include/mach/ctrl_module_pad_core_44xx.h b/arch/arm/mach-omap2/include/mach/ctrl_module_pad_core_44xx.h index c88420de1151..1e2d3322f33e 100644 --- a/arch/arm/mach-omap2/include/mach/ctrl_module_pad_core_44xx.h +++ b/arch/arm/mach-omap2/include/mach/ctrl_module_pad_core_44xx.h @@ -941,10 +941,10 @@ #define OMAP4_DSI2_LANEENABLE_MASK (0x7 << 29) #define OMAP4_DSI1_LANEENABLE_SHIFT 24 #define OMAP4_DSI1_LANEENABLE_MASK (0x1f << 24) -#define OMAP4_DSI1_PIPD_SHIFT 19 -#define OMAP4_DSI1_PIPD_MASK (0x1f << 19) -#define OMAP4_DSI2_PIPD_SHIFT 14 -#define OMAP4_DSI2_PIPD_MASK (0x1f << 14) +#define OMAP4_DSI2_PIPD_SHIFT 19 +#define OMAP4_DSI2_PIPD_MASK (0x1f << 19) +#define OMAP4_DSI1_PIPD_SHIFT 14 +#define OMAP4_DSI1_PIPD_MASK (0x1f << 14) /* CONTROL_MCBSPLP */ #define OMAP4_ALBCTRLRX_FSX_SHIFT 31 diff --git a/arch/arm/mach-omap2/io.c b/arch/arm/mach-omap2/io.c index a5d8dce2a70b..25d20ced03e1 100644 --- a/arch/arm/mach-omap2/io.c +++ b/arch/arm/mach-omap2/io.c @@ -359,6 +359,7 @@ static void __init omap_hwmod_init_postsetup(void) omap_pm_if_early_init(); } +#ifdef CONFIG_ARCH_OMAP2 void __init omap2420_init_early(void) { omap2_set_globals_242x(); @@ -382,11 +383,13 @@ void __init omap2430_init_early(void) omap_hwmod_init_postsetup(); omap2430_clk_init(); } +#endif /* * Currently only board-omap3beagle.c should call this because of the * same machine_id for 34xx and 36xx beagle.. Will get fixed with DT. 
*/ +#ifdef CONFIG_ARCH_OMAP3 void __init omap3_init_early(void) { omap2_set_globals_3xxx(); @@ -430,7 +433,9 @@ void __init ti816x_init_early(void) omap_hwmod_init_postsetup(); omap3xxx_clk_init(); } +#endif +#ifdef CONFIG_ARCH_OMAP4 void __init omap4430_init_early(void) { omap2_set_globals_443x(); @@ -442,6 +447,7 @@ void __init omap4430_init_early(void) omap_hwmod_init_postsetup(); omap4xxx_clk_init(); } +#endif void __init omap_sdrc_init(struct omap_sdrc_params *sdrc_cs0, struct omap_sdrc_params *sdrc_cs1) diff --git a/arch/arm/mach-omap2/mailbox.c b/arch/arm/mach-omap2/mailbox.c index 86d564a640bb..609ea2ded7e3 100644 --- a/arch/arm/mach-omap2/mailbox.c +++ b/arch/arm/mach-omap2/mailbox.c @@ -10,6 +10,7 @@ * for more details. */ +#include <linux/module.h> #include <linux/clk.h> #include <linux/err.h> #include <linux/platform_device.h> diff --git a/arch/arm/mach-omap2/omap-iommu.c b/arch/arm/mach-omap2/omap-iommu.c index e61feadcda4e..b8822048e409 100644 --- a/arch/arm/mach-omap2/omap-iommu.c +++ b/arch/arm/mach-omap2/omap-iommu.c @@ -10,6 +10,7 @@ * published by the Free Software Foundation. */ +#include <linux/module.h> #include <linux/platform_device.h> #include <plat/iommu.h> diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c index d71380705080..6b3088db83b7 100644 --- a/arch/arm/mach-omap2/omap_hwmod.c +++ b/arch/arm/mach-omap2/omap_hwmod.c @@ -2625,7 +2625,7 @@ ohsps_unlock: * Returns the context loss count of the powerdomain assocated with @oh * upon success, or zero if no powerdomain exists for @oh. */ -u32 omap_hwmod_get_context_loss_count(struct omap_hwmod *oh) +int omap_hwmod_get_context_loss_count(struct omap_hwmod *oh) { struct powerdomain *pwrdm; int ret = 0; diff --git a/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c b/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c index 3008e1672c7a..bc9035ec87fc 100644 --- a/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c +++ b/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c @@ -3159,7 +3159,6 @@ static __initdata struct omap_hwmod *omap3xxx_hwmods[] = { &omap3xxx_mmc2_hwmod, &omap3xxx_mmc3_hwmod, &omap3xxx_mpu_hwmod, - &omap3xxx_iva_hwmod, &omap3xxx_timer1_hwmod, &omap3xxx_timer2_hwmod, @@ -3188,8 +3187,6 @@ static __initdata struct omap_hwmod *omap3xxx_hwmods[] = { &omap3xxx_i2c1_hwmod, &omap3xxx_i2c2_hwmod, &omap3xxx_i2c3_hwmod, - &omap34xx_sr1_hwmod, - &omap34xx_sr2_hwmod, /* gpio class */ &omap3xxx_gpio1_hwmod, @@ -3211,8 +3208,6 @@ static __initdata struct omap_hwmod *omap3xxx_hwmods[] = { &omap3xxx_mcbsp2_sidetone_hwmod, &omap3xxx_mcbsp3_sidetone_hwmod, - /* mailbox class */ - &omap3xxx_mailbox_hwmod, /* mcspi class */ &omap34xx_mcspi1, @@ -3225,31 +3220,39 @@ static __initdata struct omap_hwmod *omap3xxx_hwmods[] = { /* 3430ES1-only hwmods */ static __initdata struct omap_hwmod *omap3430es1_hwmods[] = { + &omap3xxx_iva_hwmod, &omap3430es1_dss_core_hwmod, + &omap3xxx_mailbox_hwmod, NULL }; /* 3430ES2+-only hwmods */ static __initdata struct omap_hwmod *omap3430es2plus_hwmods[] = { + &omap3xxx_iva_hwmod, &omap3xxx_dss_core_hwmod, &omap3xxx_usbhsotg_hwmod, + &omap3xxx_mailbox_hwmod, NULL }; /* 34xx-only hwmods (all ES revisions) */ static __initdata struct omap_hwmod *omap34xx_hwmods[] = { + &omap3xxx_iva_hwmod, &omap34xx_sr1_hwmod, &omap34xx_sr2_hwmod, + &omap3xxx_mailbox_hwmod, NULL }; /* 36xx-only hwmods (all ES revisions) */ static __initdata struct omap_hwmod *omap36xx_hwmods[] = { + &omap3xxx_iva_hwmod, &omap3xxx_uart4_hwmod, &omap3xxx_dss_core_hwmod, &omap36xx_sr1_hwmod, &omap36xx_sr2_hwmod, 
&omap3xxx_usbhsotg_hwmod, + &omap3xxx_mailbox_hwmod, NULL }; @@ -3267,7 +3270,7 @@ int __init omap3xxx_hwmod_init(void) /* Register hwmods common to all OMAP3 */ r = omap_hwmod_register(omap3xxx_hwmods); - if (!r) + if (r < 0) return r; rev = omap_rev(); @@ -3292,7 +3295,7 @@ int __init omap3xxx_hwmod_init(void) }; r = omap_hwmod_register(h); - if (!r) + if (r < 0) return r; /* diff --git a/arch/arm/mach-omap2/omap_l3_noc.c b/arch/arm/mach-omap2/omap_l3_noc.c index c8b1bef92e5a..6a66aa5e2a5b 100644 --- a/arch/arm/mach-omap2/omap_l3_noc.c +++ b/arch/arm/mach-omap2/omap_l3_noc.c @@ -20,6 +20,7 @@ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 * USA */ +#include <linux/module.h> #include <linux/init.h> #include <linux/io.h> #include <linux/platform_device.h> diff --git a/arch/arm/mach-omap2/pm34xx.c b/arch/arm/mach-omap2/pm34xx.c index c8cbd00a41af..efa66494c1e3 100644 --- a/arch/arm/mach-omap2/pm34xx.c +++ b/arch/arm/mach-omap2/pm34xx.c @@ -99,31 +99,27 @@ static void omap3_enable_io_chain(void) { int timeout = 0; - if (omap_rev() >= OMAP3430_REV_ES3_1) { - omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_CHAIN_MASK, WKUP_MOD, - PM_WKEN); - /* Do a readback to assure write has been done */ - omap2_prm_read_mod_reg(WKUP_MOD, PM_WKEN); - - while (!(omap2_prm_read_mod_reg(WKUP_MOD, PM_WKEN) & - OMAP3430_ST_IO_CHAIN_MASK)) { - timeout++; - if (timeout > 1000) { - printk(KERN_ERR "Wake up daisy chain " - "activation failed.\n"); - return; - } - omap2_prm_set_mod_reg_bits(OMAP3430_ST_IO_CHAIN_MASK, - WKUP_MOD, PM_WKEN); + omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_CHAIN_MASK, WKUP_MOD, + PM_WKEN); + /* Do a readback to assure write has been done */ + omap2_prm_read_mod_reg(WKUP_MOD, PM_WKEN); + + while (!(omap2_prm_read_mod_reg(WKUP_MOD, PM_WKEN) & + OMAP3430_ST_IO_CHAIN_MASK)) { + timeout++; + if (timeout > 1000) { + pr_err("Wake up daisy chain activation failed.\n"); + return; } + omap2_prm_set_mod_reg_bits(OMAP3430_ST_IO_CHAIN_MASK, + WKUP_MOD, PM_WKEN); } } static void omap3_disable_io_chain(void) { - if (omap_rev() >= OMAP3430_REV_ES3_1) - omap2_prm_clear_mod_reg_bits(OMAP3430_EN_IO_CHAIN_MASK, WKUP_MOD, - PM_WKEN); + omap2_prm_clear_mod_reg_bits(OMAP3430_EN_IO_CHAIN_MASK, WKUP_MOD, + PM_WKEN); } static void omap3_core_save_context(void) @@ -363,7 +359,6 @@ void omap_sram_idle(void) printk(KERN_ERR "Invalid mpu state in sram_idle\n"); return; } - pwrdm_pre_transition(); /* NEON control */ if (pwrdm_read_pwrst(neon_pwrdm) == PWRDM_POWER_ON) @@ -376,7 +371,8 @@ void omap_sram_idle(void) (per_next_state < PWRDM_POWER_ON || core_next_state < PWRDM_POWER_ON)) { omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD, PM_WKEN); - omap3_enable_io_chain(); + if (omap3_has_io_chain_ctrl()) + omap3_enable_io_chain(); } /* Block console output in case it is on one of the OMAP UARTs */ @@ -386,6 +382,8 @@ void omap_sram_idle(void) if (!console_trylock()) goto console_still_active; + pwrdm_pre_transition(); + /* PER */ if (per_next_state < PWRDM_POWER_ON) { per_going_off = (per_next_state == PWRDM_POWER_OFF) ? 1 : 0; @@ -409,13 +407,14 @@ void omap_sram_idle(void) omap3_intc_prepare_idle(); /* - * On EMU/HS devices ROM code restores a SRDC value - * from scratchpad which has automatic self refresh on timeout - * of AUTO_CNT = 1 enabled. This takes care of erratum ID i443. - * Hence store/restore the SDRC_POWER register here. 
- */ - if (omap_rev() >= OMAP3430_REV_ES3_0 && - omap_type() != OMAP2_DEVICE_TYPE_GP && + * On EMU/HS devices ROM code restores a SRDC value + * from scratchpad which has automatic self refresh on timeout + * of AUTO_CNT = 1 enabled. This takes care of erratum ID i443. + * Hence store/restore the SDRC_POWER register here. + */ + if (cpu_is_omap3430() && omap_rev() >= OMAP3430_REV_ES3_0 && + (omap_type() == OMAP2_DEVICE_TYPE_EMU || + omap_type() == OMAP2_DEVICE_TYPE_SEC) && core_next_state == PWRDM_POWER_OFF) sdrc_pwr = sdrc_read_reg(SDRC_POWER); @@ -432,8 +431,9 @@ void omap_sram_idle(void) omap34xx_do_sram_idle(save_state); /* Restore normal SDRC POWER settings */ - if (omap_rev() >= OMAP3430_REV_ES3_0 && - omap_type() != OMAP2_DEVICE_TYPE_GP && + if (cpu_is_omap3430() && omap_rev() >= OMAP3430_REV_ES3_0 && + (omap_type() == OMAP2_DEVICE_TYPE_EMU || + omap_type() == OMAP2_DEVICE_TYPE_SEC) && core_next_state == PWRDM_POWER_OFF) sdrc_write_reg(sdrc_pwr, SDRC_POWER); @@ -455,6 +455,8 @@ void omap_sram_idle(void) } omap3_intc_resume_idle(); + pwrdm_post_transition(); + /* PER */ if (per_next_state < PWRDM_POWER_ON) { per_prev_state = pwrdm_read_prev_pwrst(per_pwrdm); @@ -475,11 +477,10 @@ console_still_active: core_next_state < PWRDM_POWER_ON)) { omap2_prm_clear_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD, PM_WKEN); - omap3_disable_io_chain(); + if (omap3_has_io_chain_ctrl()) + omap3_disable_io_chain(); } - pwrdm_post_transition(); - clkdm_allow_idle(mpu_pwrdm->pwrdm_clkdms[0]); } @@ -870,6 +871,9 @@ static int __init omap3_pm_init(void) if (!cpu_is_omap34xx()) return -ENODEV; + if (!omap3_has_io_chain_ctrl()) + pr_warning("PM: no software I/O chain control; some wakeups may be lost\n"); + pm_errata_configure(); /* XXX prcm_setup_regs needs to be before enabling hw diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c index 5164d587ef52..8a18d1bd61c8 100644 --- a/arch/arm/mach-omap2/powerdomain.c +++ b/arch/arm/mach-omap2/powerdomain.c @@ -1002,16 +1002,16 @@ int pwrdm_post_transition(void) * @pwrdm: struct powerdomain * to wait for * * Context loss count is the sum of powerdomain off-mode counter, the - * logic off counter and the per-bank memory off counter. Returns 0 + * logic off counter and the per-bank memory off counter. Returns negative * (and WARNs) upon error, otherwise, returns the context loss count. */ -u32 pwrdm_get_context_loss_count(struct powerdomain *pwrdm) +int pwrdm_get_context_loss_count(struct powerdomain *pwrdm) { int i, count; if (!pwrdm) { WARN(1, "powerdomain: %s: pwrdm is null\n", __func__); - return 0; + return -ENODEV; } count = pwrdm->state_counter[PWRDM_POWER_OFF]; @@ -1020,7 +1020,13 @@ u32 pwrdm_get_context_loss_count(struct powerdomain *pwrdm) for (i = 0; i < pwrdm->banks; i++) count += pwrdm->ret_mem_off_counter[i]; - pr_debug("powerdomain: %s: context loss count = %u\n", + /* + * Context loss count has to be a non-negative value. Clear the sign + * bit to get a value range from 0 to INT_MAX. 
+ */ + count &= INT_MAX; + + pr_debug("powerdomain: %s: context loss count = %d\n", pwrdm->name, count); return count; diff --git a/arch/arm/mach-omap2/powerdomain.h b/arch/arm/mach-omap2/powerdomain.h index 42e6dd8f2a78..0d72a8a8ce4d 100644 --- a/arch/arm/mach-omap2/powerdomain.h +++ b/arch/arm/mach-omap2/powerdomain.h @@ -217,7 +217,7 @@ int pwrdm_clkdm_state_switch(struct clockdomain *clkdm); int pwrdm_pre_transition(void); int pwrdm_post_transition(void); int pwrdm_set_lowpwrstchange(struct powerdomain *pwrdm); -u32 pwrdm_get_context_loss_count(struct powerdomain *pwrdm); +int pwrdm_get_context_loss_count(struct powerdomain *pwrdm); bool pwrdm_can_ever_lose_context(struct powerdomain *pwrdm); extern void omap242x_powerdomains_init(void); diff --git a/arch/arm/mach-omap2/smartreflex.c b/arch/arm/mach-omap2/smartreflex.c index 0347b93211e6..6a4f6839a7d9 100644 --- a/arch/arm/mach-omap2/smartreflex.c +++ b/arch/arm/mach-omap2/smartreflex.c @@ -17,6 +17,7 @@ * published by the Free Software Foundation. */ +#include <linux/module.h> #include <linux/interrupt.h> #include <linux/clk.h> #include <linux/io.h> diff --git a/arch/arm/mach-omap2/timer.c b/arch/arm/mach-omap2/timer.c index e49fc7be2229..037b0d7d4e05 100644 --- a/arch/arm/mach-omap2/timer.c +++ b/arch/arm/mach-omap2/timer.c @@ -408,14 +408,6 @@ static int omap2_dm_timer_set_src(struct platform_device *pdev, int source) return ret; } -struct omap_device_pm_latency omap2_dmtimer_latency[] = { - { - .deactivate_func = omap_device_idle_hwmods, - .activate_func = omap_device_enable_hwmods, - .flags = OMAP_DEVICE_LATENCY_AUTO_ADJUST, - }, -}; - /** * omap_timer_init - build and register timer device with an * associated timer hwmod @@ -477,9 +469,7 @@ static int __init omap_timer_init(struct omap_hwmod *oh, void *unused) pdata->get_context_loss_count = omap_pm_get_dev_context_loss_count; #endif pdev = omap_device_build(name, id, oh, pdata, sizeof(*pdata), - omap2_dmtimer_latency, - ARRAY_SIZE(omap2_dmtimer_latency), - 0); + NULL, 0, 0); if (IS_ERR(pdev)) { pr_err("%s: Can't build omap_device for %s: %s.\n", diff --git a/arch/arm/mach-omap2/usb-musb.c b/arch/arm/mach-omap2/usb-musb.c index 47fb5d607630..267975086a7b 100644 --- a/arch/arm/mach-omap2/usb-musb.c +++ b/arch/arm/mach-omap2/usb-musb.c @@ -60,44 +60,6 @@ static struct musb_hdrc_platform_data musb_plat = { static u64 musb_dmamask = DMA_BIT_MASK(32); -static void usb_musb_mux_init(struct omap_musb_board_data *board_data) -{ - switch (board_data->interface_type) { - case MUSB_INTERFACE_UTMI: - omap_mux_init_signal("usba0_otg_dp", OMAP_PIN_INPUT); - omap_mux_init_signal("usba0_otg_dm", OMAP_PIN_INPUT); - break; - case MUSB_INTERFACE_ULPI: - omap_mux_init_signal("usba0_ulpiphy_clk", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_stp", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dir", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_nxt", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat0", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat1", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat2", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat3", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat4", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat5", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat6", - OMAP_PIN_INPUT_PULLDOWN); - omap_mux_init_signal("usba0_ulpiphy_dat7", - 
OMAP_PIN_INPUT_PULLDOWN); - break; - default: - break; - } -} - static struct omap_musb_board_data musb_default_board_data = { .interface_type = MUSB_INTERFACE_ULPI, .mode = MUSB_OTG, diff --git a/arch/arm/mach-orion5x/ts78xx-setup.c b/arch/arm/mach-orion5x/ts78xx-setup.c index 6c75cd35c4c8..b35e2005a348 100644 --- a/arch/arm/mach-orion5x/ts78xx-setup.c +++ b/arch/arm/mach-orion5x/ts78xx-setup.c @@ -275,7 +275,7 @@ static struct platform_nand_data ts78xx_ts_nand_data = { .partitions = ts78xx_ts_nand_parts, .nr_partitions = ARRAY_SIZE(ts78xx_ts_nand_parts), .chip_delay = 15, - .options = NAND_USE_FLASH_BBT, + .bbt_options = NAND_BBT_USE_FLASH, }, .ctrl = { /* diff --git a/arch/arm/mach-picoxcell/include/mach/debug-macro.S b/arch/arm/mach-picoxcell/include/mach/debug-macro.S index 8f2c234ed9d9..58d4ee3ae949 100644 --- a/arch/arm/mach-picoxcell/include/mach/debug-macro.S +++ b/arch/arm/mach-picoxcell/include/mach/debug-macro.S @@ -14,7 +14,7 @@ #define UART_SHIFT 2 - .macro addruart, rp, rv + .macro addruart, rp, rv, tmp ldr \rv, =PHYS_TO_IO(PICOXCELL_UART1_BASE) ldr \rp, =PICOXCELL_UART1_BASE .endm diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c index 21ac8e3e2f7a..684acf6ed3d5 100644 --- a/arch/arm/mach-pxa/cm-x300.c +++ b/arch/arm/mach-pxa/cm-x300.c @@ -424,8 +424,9 @@ static struct mtd_partition cm_x300_nand_partitions[] = { static struct pxa3xx_nand_platform_data cm_x300_nand_info = { .enable_arbiter = 1, .keep_config = 1, - .parts = cm_x300_nand_partitions, - .nr_parts = ARRAY_SIZE(cm_x300_nand_partitions), + .num_cs = 1, + .parts[0] = cm_x300_nand_partitions, + .nr_parts[0] = ARRAY_SIZE(cm_x300_nand_partitions), }; static void __init cm_x300_init_nand(void) diff --git a/arch/arm/mach-pxa/colibri-pxa3xx.c b/arch/arm/mach-pxa/colibri-pxa3xx.c index 3f9be419959d..2b8ca0de8a3d 100644 --- a/arch/arm/mach-pxa/colibri-pxa3xx.c +++ b/arch/arm/mach-pxa/colibri-pxa3xx.c @@ -139,8 +139,9 @@ static struct mtd_partition colibri_nand_partitions[] = { static struct pxa3xx_nand_platform_data colibri_nand_info = { .enable_arbiter = 1, .keep_config = 1, - .parts = colibri_nand_partitions, - .nr_parts = ARRAY_SIZE(colibri_nand_partitions), + .num_cs = 1, + .parts[0] = colibri_nand_partitions, + .nr_parts[0] = ARRAY_SIZE(colibri_nand_partitions), }; void __init colibri_pxa3xx_init_nand(void) diff --git a/arch/arm/mach-pxa/littleton.c b/arch/arm/mach-pxa/littleton.c index 27147f6ff7cb..d21e28b46d81 100644 --- a/arch/arm/mach-pxa/littleton.c +++ b/arch/arm/mach-pxa/littleton.c @@ -325,8 +325,9 @@ static struct mtd_partition littleton_nand_partitions[] = { static struct pxa3xx_nand_platform_data littleton_nand_info = { .enable_arbiter = 1, - .parts = littleton_nand_partitions, - .nr_parts = ARRAY_SIZE(littleton_nand_partitions), + .num_cs = 1, + .parts[0] = littleton_nand_partitions, + .nr_parts[0] = ARRAY_SIZE(littleton_nand_partitions), }; static void __init littleton_init_nand(void) diff --git a/arch/arm/mach-pxa/mxm8x10.c b/arch/arm/mach-pxa/mxm8x10.c index a13a1e365851..83570a79e7d2 100644 --- a/arch/arm/mach-pxa/mxm8x10.c +++ b/arch/arm/mach-pxa/mxm8x10.c @@ -389,10 +389,11 @@ static struct mtd_partition mxm_8x10_nand_partitions[] = { }; static struct pxa3xx_nand_platform_data mxm_8x10_nand_info = { - .enable_arbiter = 1, - .keep_config = 1, - .parts = mxm_8x10_nand_partitions, - .nr_parts = ARRAY_SIZE(mxm_8x10_nand_partitions) + .enable_arbiter = 1, + .keep_config = 1, + .num_cs = 1, + .parts[0] = mxm_8x10_nand_partitions, + .nr_parts[0] = ARRAY_SIZE(mxm_8x10_nand_partitions) }; 
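The pxa3xx board updates in this hunk and the surrounding ones (cm-x300, colibri, littleton, mxm8x10, raumfeld, zylonite) all apply the same platform-data change: partition tables are now supplied per chip select, with num_cs giving the number of populated chip selects. A hedged sketch of the resulting layout follows; the board and partition names and sizes are purely illustrative and not taken from any of these boards.

#include <linux/kernel.h>
#include <linux/mtd/partitions.h>
#include <plat/pxa3xx_nand.h>

static struct mtd_partition example_nand_parts[] = {
	{
		.name	= "bootloader",		/* illustrative layout only */
		.offset	= 0,
		.size	= 0x100000,
	}, {
		.name	= "filesystem",
		.offset	= MTDPART_OFS_APPEND,
		.size	= MTDPART_SIZ_FULL,
	},
};

static struct pxa3xx_nand_platform_data example_nand_info = {
	.enable_arbiter	= 1,
	.keep_config	= 1,
	.num_cs		= 1,			/* one populated chip select */
	.parts[0]	= example_nand_parts,	/* partitions are now per-CS arrays */
	.nr_parts[0]	= ARRAY_SIZE(example_nand_parts),
};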
static void __init mxm_8x10_nand_init(void) diff --git a/arch/arm/mach-pxa/raumfeld.c b/arch/arm/mach-pxa/raumfeld.c index 9c58e87f2b82..78d643783f99 100644 --- a/arch/arm/mach-pxa/raumfeld.c +++ b/arch/arm/mach-pxa/raumfeld.c @@ -346,8 +346,9 @@ static struct mtd_partition raumfeld_nand_partitions[] = { static struct pxa3xx_nand_platform_data raumfeld_nand_info = { .enable_arbiter = 1, .keep_config = 1, - .parts = raumfeld_nand_partitions, - .nr_parts = ARRAY_SIZE(raumfeld_nand_partitions), + .num_cs = 1, + .parts[0] = raumfeld_nand_partitions, + .nr_parts[0] = ARRAY_SIZE(raumfeld_nand_partitions), }; /** diff --git a/arch/arm/mach-pxa/zylonite.c b/arch/arm/mach-pxa/zylonite.c index 2406fd2727ef..a4c807527095 100644 --- a/arch/arm/mach-pxa/zylonite.c +++ b/arch/arm/mach-pxa/zylonite.c @@ -366,8 +366,9 @@ static struct mtd_partition zylonite_nand_partitions[] = { static struct pxa3xx_nand_platform_data zylonite_nand_info = { .enable_arbiter = 1, - .parts = zylonite_nand_partitions, - .nr_parts = ARRAY_SIZE(zylonite_nand_partitions), + .num_cs = 1, + .parts[0] = zylonite_nand_partitions, + .nr_parts[0] = ARRAY_SIZE(zylonite_nand_partitions), }; static void __init zylonite_init_nand(void) diff --git a/arch/arm/mach-shmobile/Makefile b/arch/arm/mach-shmobile/Makefile index 2aec2f732515..737bdc631b0d 100644 --- a/arch/arm/mach-shmobile/Makefile +++ b/arch/arm/mach-shmobile/Makefile @@ -3,7 +3,7 @@ # # Common objects -obj-y := timer.o console.o clock.o pm_runtime.o +obj-y := timer.o console.o clock.o # CPU objects obj-$(CONFIG_ARCH_SH7367) += setup-sh7367.o clock-sh7367.o intc-sh7367.o diff --git a/arch/arm/mach-shmobile/board-ag5evm.c b/arch/arm/mach-shmobile/board-ag5evm.c index 83624e26b884..b862e9f81e3e 100644 --- a/arch/arm/mach-shmobile/board-ag5evm.c +++ b/arch/arm/mach-shmobile/board-ag5evm.c @@ -515,14 +515,14 @@ static void __init ag5evm_init(void) /* enable MMCIF */ gpio_request(GPIO_FN_MMCCLK0, NULL); gpio_request(GPIO_FN_MMCCMD0_PU, NULL); - gpio_request(GPIO_FN_MMCD0_0, NULL); - gpio_request(GPIO_FN_MMCD0_1, NULL); - gpio_request(GPIO_FN_MMCD0_2, NULL); - gpio_request(GPIO_FN_MMCD0_3, NULL); - gpio_request(GPIO_FN_MMCD0_4, NULL); - gpio_request(GPIO_FN_MMCD0_5, NULL); - gpio_request(GPIO_FN_MMCD0_6, NULL); - gpio_request(GPIO_FN_MMCD0_7, NULL); + gpio_request(GPIO_FN_MMCD0_0_PU, NULL); + gpio_request(GPIO_FN_MMCD0_1_PU, NULL); + gpio_request(GPIO_FN_MMCD0_2_PU, NULL); + gpio_request(GPIO_FN_MMCD0_3_PU, NULL); + gpio_request(GPIO_FN_MMCD0_4_PU, NULL); + gpio_request(GPIO_FN_MMCD0_5_PU, NULL); + gpio_request(GPIO_FN_MMCD0_6_PU, NULL); + gpio_request(GPIO_FN_MMCD0_7_PU, NULL); gpio_request(GPIO_PORT208, NULL); /* Reset */ gpio_direction_output(GPIO_PORT208, 1); diff --git a/arch/arm/mach-shmobile/board-ap4evb.c b/arch/arm/mach-shmobile/board-ap4evb.c index a3aa0f6df964..4c865ece9ac4 100644 --- a/arch/arm/mach-shmobile/board-ap4evb.c +++ b/arch/arm/mach-shmobile/board-ap4evb.c @@ -201,7 +201,7 @@ static struct physmap_flash_data nor_flash_data = { static struct resource nor_flash_resources[] = { [0] = { .start = 0x20000000, /* CS0 shadow instead of regular CS0 */ - .end = 0x28000000 - 1, /* needed by USB MASK ROM boot */ + .end = 0x28000000 - 1, /* needed by USB MASK ROM boot */ .flags = IORESOURCE_MEM, } }; diff --git a/arch/arm/mach-shmobile/board-kota2.c b/arch/arm/mach-shmobile/board-kota2.c index adc73122bf20..bd9a78424d6b 100644 --- a/arch/arm/mach-shmobile/board-kota2.c +++ b/arch/arm/mach-shmobile/board-kota2.c @@ -48,6 +48,7 @@ #include <asm/hardware/cache-l2x0.h> #include 
<asm/traps.h> +/* SMSC 9220 */ static struct resource smsc9220_resources[] = { [0] = { .start = 0x14000000, /* CS5A */ @@ -77,6 +78,7 @@ static struct platform_device eth_device = { .num_resources = ARRAY_SIZE(smsc9220_resources), }; +/* KEYSC */ static struct sh_keysc_info keysc_platdata = { .mode = SH_KEYSC_MODE_6, .scan_timing = 3, @@ -120,6 +122,7 @@ static struct platform_device keysc_device = { }, }; +/* GPIO KEY */ #define GPIO_KEY(c, g, d) { .code = c, .gpio = g, .desc = d, .active_low = 1 } static struct gpio_keys_button gpio_buttons[] = { @@ -150,6 +153,7 @@ static struct platform_device gpio_keys_device = { }, }; +/* GPIO LED */ #define GPIO_LED(n, g) { .name = n, .gpio = g } static struct gpio_led gpio_leds[] = { @@ -175,6 +179,7 @@ static struct platform_device gpio_leds_device = { }, }; +/* MMCIF */ static struct resource mmcif_resources[] = { [0] = { .name = "MMCIF", @@ -207,6 +212,7 @@ static struct platform_device mmcif_device = { .resource = mmcif_resources, }; +/* SDHI0 */ static struct sh_mobile_sdhi_info sdhi0_info = { .tmio_caps = MMC_CAP_SD_HIGHSPEED, .tmio_flags = TMIO_MMC_WRPROTECT_DISABLE | TMIO_MMC_HAS_IDLE_WAIT, @@ -243,6 +249,7 @@ static struct platform_device sdhi0_device = { }, }; +/* SDHI1 */ static struct sh_mobile_sdhi_info sdhi1_info = { .tmio_caps = MMC_CAP_NONREMOVABLE | MMC_CAP_SDIO_IRQ, .tmio_flags = TMIO_MMC_WRPROTECT_DISABLE | TMIO_MMC_HAS_IDLE_WAIT, diff --git a/arch/arm/mach-shmobile/clock-sh7372.c b/arch/arm/mach-shmobile/clock-sh7372.c index 66975921e646..995a9c3aec8f 100644 --- a/arch/arm/mach-shmobile/clock-sh7372.c +++ b/arch/arm/mach-shmobile/clock-sh7372.c @@ -476,7 +476,7 @@ static struct clk_ops fsidiv_clk_ops = { .disable = fsidiv_disable, }; -static struct clk_mapping sh7372_fsidiva_clk_mapping = { +static struct clk_mapping fsidiva_clk_mapping = { .phys = FSIDIVA, .len = 8, }; @@ -484,10 +484,10 @@ static struct clk_mapping sh7372_fsidiva_clk_mapping = { struct clk sh7372_fsidiva_clk = { .ops = &fsidiv_clk_ops, .parent = &div6_reparent_clks[DIV6_FSIA], /* late install */ - .mapping = &sh7372_fsidiva_clk_mapping, + .mapping = &fsidiva_clk_mapping, }; -static struct clk_mapping sh7372_fsidivb_clk_mapping = { +static struct clk_mapping fsidivb_clk_mapping = { .phys = FSIDIVB, .len = 8, }; @@ -495,7 +495,7 @@ static struct clk_mapping sh7372_fsidivb_clk_mapping = { struct clk sh7372_fsidivb_clk = { .ops = &fsidiv_clk_ops, .parent = &div6_reparent_clks[DIV6_FSIB], /* late install */ - .mapping = &sh7372_fsidivb_clk_mapping, + .mapping = &fsidivb_clk_mapping, }; static struct clk *late_main_clks[] = { diff --git a/arch/arm/mach-shmobile/cpuidle.c b/arch/arm/mach-shmobile/cpuidle.c index 2e44f11f592e..1b2334277e85 100644 --- a/arch/arm/mach-shmobile/cpuidle.c +++ b/arch/arm/mach-shmobile/cpuidle.c @@ -26,65 +26,59 @@ void (*shmobile_cpuidle_modes[CPUIDLE_STATE_MAX])(void) = { }; static int shmobile_cpuidle_enter(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { ktime_t before, after; - int requested_state = state - &dev->states[0]; - dev->last_state = &dev->states[requested_state]; before = ktime_get(); local_irq_disable(); local_fiq_disable(); - shmobile_cpuidle_modes[requested_state](); + shmobile_cpuidle_modes[index](); local_irq_enable(); local_fiq_enable(); after = ktime_get(); - return ktime_to_ns(ktime_sub(after, before)) >> 10; + dev->last_residency = ktime_to_ns(ktime_sub(after, before)) >> 10; + + return index; } static struct cpuidle_device shmobile_cpuidle_dev; static struct 
cpuidle_driver shmobile_cpuidle_driver = { .name = "shmobile_cpuidle", .owner = THIS_MODULE, + .states[0] = { + .name = "C1", + .desc = "WFI", + .exit_latency = 1, + .target_residency = 1 * 2, + .flags = CPUIDLE_FLAG_TIME_VALID, + }, + .safe_state_index = 0, /* C1 */ + .state_count = 1, }; -void (*shmobile_cpuidle_setup)(struct cpuidle_device *dev); +void (*shmobile_cpuidle_setup)(struct cpuidle_driver *drv); static int shmobile_cpuidle_init(void) { struct cpuidle_device *dev = &shmobile_cpuidle_dev; - struct cpuidle_state *state; + struct cpuidle_driver *drv = &shmobile_cpuidle_driver; int i; - cpuidle_register_driver(&shmobile_cpuidle_driver); - - for (i = 0; i < CPUIDLE_STATE_MAX; i++) { - dev->states[i].name[0] = '\0'; - dev->states[i].desc[0] = '\0'; - dev->states[i].enter = shmobile_cpuidle_enter; - } - - i = CPUIDLE_DRIVER_STATE_START; - - state = &dev->states[i++]; - snprintf(state->name, CPUIDLE_NAME_LEN, "C1"); - strncpy(state->desc, "WFI", CPUIDLE_DESC_LEN); - state->exit_latency = 1; - state->target_residency = 1 * 2; - state->power_usage = 3; - state->flags = 0; - state->flags |= CPUIDLE_FLAG_TIME_VALID; - - dev->safe_state = state; - dev->state_count = i; + for (i = 0; i < CPUIDLE_STATE_MAX; i++) + drv->states[i].enter = shmobile_cpuidle_enter; if (shmobile_cpuidle_setup) - shmobile_cpuidle_setup(dev); + shmobile_cpuidle_setup(drv); + + cpuidle_register_driver(drv); + dev->state_count = drv->state_count; cpuidle_register_device(dev); return 0; diff --git a/arch/arm/mach-shmobile/include/mach/common.h b/arch/arm/mach-shmobile/include/mach/common.h index c0cdbf997c91..834bd6cd508f 100644 --- a/arch/arm/mach-shmobile/include/mach/common.h +++ b/arch/arm/mach-shmobile/include/mach/common.h @@ -9,9 +9,9 @@ extern int clk_init(void); extern void shmobile_handle_irq_intc(struct pt_regs *); extern void shmobile_handle_irq_gic(struct pt_regs *); extern struct platform_suspend_ops shmobile_suspend_ops; -struct cpuidle_device; +struct cpuidle_driver; extern void (*shmobile_cpuidle_modes[])(void); -extern void (*shmobile_cpuidle_setup)(struct cpuidle_device *dev); +extern void (*shmobile_cpuidle_setup)(struct cpuidle_driver *drv); extern void sh7367_init_irq(void); extern void sh7367_add_early_devices(void); diff --git a/arch/arm/mach-shmobile/include/mach/sh73a0.h b/arch/arm/mach-shmobile/include/mach/sh73a0.h index 18ae6a990bc2..881d515a9686 100644 --- a/arch/arm/mach-shmobile/include/mach/sh73a0.h +++ b/arch/arm/mach-shmobile/include/mach/sh73a0.h @@ -470,6 +470,14 @@ enum { GPIO_FN_SDHICMD2_PU, GPIO_FN_MMCCMD0_PU, GPIO_FN_MMCCMD1_PU, + GPIO_FN_MMCD0_0_PU, + GPIO_FN_MMCD0_1_PU, + GPIO_FN_MMCD0_2_PU, + GPIO_FN_MMCD0_3_PU, + GPIO_FN_MMCD0_4_PU, + GPIO_FN_MMCD0_5_PU, + GPIO_FN_MMCD0_6_PU, + GPIO_FN_MMCD0_7_PU, GPIO_FN_FSIACK_PU, GPIO_FN_FSIAILR_PU, GPIO_FN_FSIAIBT_PU, diff --git a/arch/arm/mach-shmobile/pfc-sh7367.c b/arch/arm/mach-shmobile/pfc-sh7367.c index 128555e76e43..e6e524654e67 100644 --- a/arch/arm/mach-shmobile/pfc-sh7367.c +++ b/arch/arm/mach-shmobile/pfc-sh7367.c @@ -21,68 +21,49 @@ #include <linux/gpio.h> #include <mach/sh7367.h> -#define _1(fn, pfx, sfx) fn(pfx, sfx) - -#define _10(fn, pfx, sfx) \ - _1(fn, pfx##0, sfx), _1(fn, pfx##1, sfx), \ - _1(fn, pfx##2, sfx), _1(fn, pfx##3, sfx), \ - _1(fn, pfx##4, sfx), _1(fn, pfx##5, sfx), \ - _1(fn, pfx##6, sfx), _1(fn, pfx##7, sfx), \ - _1(fn, pfx##8, sfx), _1(fn, pfx##9, sfx) - -#define _90(fn, pfx, sfx) \ - _10(fn, pfx##1, sfx), _10(fn, pfx##2, sfx), \ - _10(fn, pfx##3, sfx), _10(fn, pfx##4, sfx), \ - _10(fn, pfx##5, sfx), 
_10(fn, pfx##6, sfx), \ - _10(fn, pfx##7, sfx), _10(fn, pfx##8, sfx), \ - _10(fn, pfx##9, sfx) - -#define _273(fn, pfx, sfx) \ - _10(fn, pfx, sfx), _90(fn, pfx, sfx), \ - _10(fn, pfx##10, sfx), _90(fn, pfx##1, sfx), \ - _10(fn, pfx##20, sfx), _10(fn, pfx##21, sfx), \ - _10(fn, pfx##22, sfx), _10(fn, pfx##23, sfx), \ - _10(fn, pfx##24, sfx), _10(fn, pfx##25, sfx), \ - _10(fn, pfx##26, sfx), _1(fn, pfx##270, sfx), \ - _1(fn, pfx##271, sfx), _1(fn, pfx##272, sfx) - -#define _PORT(pfx, sfx) pfx##_##sfx -#define PORT_273(str) _273(_PORT, PORT, str) +#define CPU_ALL_PORT(fn, pfx, sfx) \ + PORT_10(fn, pfx, sfx), PORT_90(fn, pfx, sfx), \ + PORT_10(fn, pfx##10, sfx), PORT_90(fn, pfx##1, sfx), \ + PORT_10(fn, pfx##20, sfx), PORT_10(fn, pfx##21, sfx), \ + PORT_10(fn, pfx##22, sfx), PORT_10(fn, pfx##23, sfx), \ + PORT_10(fn, pfx##24, sfx), PORT_10(fn, pfx##25, sfx), \ + PORT_10(fn, pfx##26, sfx), PORT_1(fn, pfx##270, sfx), \ + PORT_1(fn, pfx##271, sfx), PORT_1(fn, pfx##272, sfx) enum { PINMUX_RESERVED = 0, PINMUX_DATA_BEGIN, - PORT_273(DATA), /* PORT0_DATA -> PORT272_DATA */ + PORT_ALL(DATA), /* PORT0_DATA -> PORT272_DATA */ PINMUX_DATA_END, PINMUX_INPUT_BEGIN, - PORT_273(IN), /* PORT0_IN -> PORT272_IN */ + PORT_ALL(IN), /* PORT0_IN -> PORT272_IN */ PINMUX_INPUT_END, PINMUX_INPUT_PULLUP_BEGIN, - PORT_273(IN_PU), /* PORT0_IN_PU -> PORT272_IN_PU */ + PORT_ALL(IN_PU), /* PORT0_IN_PU -> PORT272_IN_PU */ PINMUX_INPUT_PULLUP_END, PINMUX_INPUT_PULLDOWN_BEGIN, - PORT_273(IN_PD), /* PORT0_IN_PD -> PORT272_IN_PD */ + PORT_ALL(IN_PD), /* PORT0_IN_PD -> PORT272_IN_PD */ PINMUX_INPUT_PULLDOWN_END, PINMUX_OUTPUT_BEGIN, - PORT_273(OUT), /* PORT0_OUT -> PORT272_OUT */ + PORT_ALL(OUT), /* PORT0_OUT -> PORT272_OUT */ PINMUX_OUTPUT_END, PINMUX_FUNCTION_BEGIN, - PORT_273(FN_IN), /* PORT0_FN_IN -> PORT272_FN_IN */ - PORT_273(FN_OUT), /* PORT0_FN_OUT -> PORT272_FN_OUT */ - PORT_273(FN0), /* PORT0_FN0 -> PORT272_FN0 */ - PORT_273(FN1), /* PORT0_FN1 -> PORT272_FN1 */ - PORT_273(FN2), /* PORT0_FN2 -> PORT272_FN2 */ - PORT_273(FN3), /* PORT0_FN3 -> PORT272_FN3 */ - PORT_273(FN4), /* PORT0_FN4 -> PORT272_FN4 */ - PORT_273(FN5), /* PORT0_FN5 -> PORT272_FN5 */ - PORT_273(FN6), /* PORT0_FN6 -> PORT272_FN6 */ - PORT_273(FN7), /* PORT0_FN7 -> PORT272_FN7 */ + PORT_ALL(FN_IN), /* PORT0_FN_IN -> PORT272_FN_IN */ + PORT_ALL(FN_OUT), /* PORT0_FN_OUT -> PORT272_FN_OUT */ + PORT_ALL(FN0), /* PORT0_FN0 -> PORT272_FN0 */ + PORT_ALL(FN1), /* PORT0_FN1 -> PORT272_FN1 */ + PORT_ALL(FN2), /* PORT0_FN2 -> PORT272_FN2 */ + PORT_ALL(FN3), /* PORT0_FN3 -> PORT272_FN3 */ + PORT_ALL(FN4), /* PORT0_FN4 -> PORT272_FN4 */ + PORT_ALL(FN5), /* PORT0_FN5 -> PORT272_FN5 */ + PORT_ALL(FN6), /* PORT0_FN6 -> PORT272_FN6 */ + PORT_ALL(FN7), /* PORT0_FN7 -> PORT272_FN7 */ MSELBCR_MSEL2_1, MSELBCR_MSEL2_0, PINMUX_FUNCTION_END, @@ -327,41 +308,6 @@ enum { PINMUX_MARK_END, }; -#define PORT_DATA_I(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_IN) - -#define PORT_DATA_I_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PD) - -#define PORT_DATA_I_PU(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PU) - -#define PORT_DATA_I_PU_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PD, PORT##nr##_IN_PU) - -#define PORT_DATA_O(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT) - -#define PORT_DATA_IO(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN) - -#define PORT_DATA_IO_PD(nr) \ - 
PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN, PORT##nr##_IN_PD) - -#define PORT_DATA_IO_PU(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN, PORT##nr##_IN_PU) - -#define PORT_DATA_IO_PU_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN, PORT##nr##_IN_PD, PORT##nr##_IN_PU) - - static pinmux_enum_t pinmux_data[] = { /* specify valid pin states for each pin in GPIO mode */ @@ -1098,13 +1044,9 @@ static pinmux_enum_t pinmux_data[] = { PINMUX_DATA(DIVLOCK_MARK, PORT272_FN1), }; -#define _GPIO_PORT(pfx, sfx) PINMUX_GPIO(GPIO_PORT##pfx, PORT##pfx##_DATA) -#define GPIO_PORT_273() _273(_GPIO_PORT, , unused) -#define GPIO_FN(str) PINMUX_GPIO(GPIO_FN_##str, str##_MARK) - static struct pinmux_gpio pinmux_gpios[] = { /* 49-1 -> 49-6 (GPIO) */ - GPIO_PORT_273(), + GPIO_PORT_ALL(), /* Special Pull-up / Pull-down Functions */ GPIO_FN(PORT48_KEYIN0_PU), GPIO_FN(PORT49_KEYIN1_PU), @@ -1345,22 +1287,6 @@ static struct pinmux_gpio pinmux_gpios[] = { GPIO_FN(DIVLOCK), }; -/* helper for top 4 bits in PORTnCR */ -#define PCRH(in, in_pd, in_pu, out) \ - 0, (out), (in), 0, \ - 0, 0, 0, 0, \ - 0, 0, (in_pd), 0, \ - 0, 0, (in_pu), 0 - -#define PORTCR(nr, reg) \ - { PINMUX_CFG_REG("PORT" nr "CR", reg, 8, 4) { \ - PCRH(PORT##nr##_IN, PORT##nr##_IN_PD, \ - PORT##nr##_IN_PU, PORT##nr##_OUT), \ - PORT##nr##_FN0, PORT##nr##_FN1, PORT##nr##_FN2, \ - PORT##nr##_FN3, PORT##nr##_FN4, PORT##nr##_FN5, \ - PORT##nr##_FN6, PORT##nr##_FN7 } \ - } - static struct pinmux_cfg_reg pinmux_config_regs[] = { PORTCR(0, 0xe6050000), /* PORT0CR */ PORTCR(1, 0xe6050001), /* PORT1CR */ diff --git a/arch/arm/mach-shmobile/pfc-sh7372.c b/arch/arm/mach-shmobile/pfc-sh7372.c index 9c265dae138a..1bd6585a6acf 100644 --- a/arch/arm/mach-shmobile/pfc-sh7372.c +++ b/arch/arm/mach-shmobile/pfc-sh7372.c @@ -25,27 +25,13 @@ #include <linux/gpio.h> #include <mach/sh7372.h> -#define _1(fn, pfx, sfx) fn(pfx, sfx) - -#define _10(fn, pfx, sfx) \ - _1(fn, pfx##0, sfx), _1(fn, pfx##1, sfx), \ - _1(fn, pfx##2, sfx), _1(fn, pfx##3, sfx), \ - _1(fn, pfx##4, sfx), _1(fn, pfx##5, sfx), \ - _1(fn, pfx##6, sfx), _1(fn, pfx##7, sfx), \ - _1(fn, pfx##8, sfx), _1(fn, pfx##9, sfx) - -#define _80(fn, pfx, sfx) \ - _10(fn, pfx##1, sfx), _10(fn, pfx##2, sfx), \ - _10(fn, pfx##3, sfx), _10(fn, pfx##4, sfx), \ - _10(fn, pfx##5, sfx), _10(fn, pfx##6, sfx), \ - _10(fn, pfx##7, sfx), _10(fn, pfx##8, sfx) - -#define _190(fn, pfx, sfx) \ - _10(fn, pfx, sfx), _80(fn, pfx, sfx), _10(fn, pfx##9, sfx), \ - _10(fn, pfx##10, sfx), _80(fn, pfx##1, sfx), _1(fn, pfx##190, sfx) - -#define _PORT(pfx, sfx) pfx##_##sfx -#define PORT_ALL(str) _190(_PORT, PORT, str) +#define CPU_ALL_PORT(fn, pfx, sfx) \ + PORT_10(fn, pfx, sfx), PORT_90(fn, pfx, sfx), \ + PORT_10(fn, pfx##10, sfx), PORT_10(fn, pfx##11, sfx), \ + PORT_10(fn, pfx##12, sfx), PORT_10(fn, pfx##13, sfx), \ + PORT_10(fn, pfx##14, sfx), PORT_10(fn, pfx##15, sfx), \ + PORT_10(fn, pfx##16, sfx), PORT_10(fn, pfx##17, sfx), \ + PORT_10(fn, pfx##18, sfx), PORT_1(fn, pfx##190, sfx) enum { PINMUX_RESERVED = 0, @@ -381,108 +367,124 @@ enum { PINMUX_MARK_END, }; -/* PORT_DATA_I_PD(nr) */ -#define _I___D(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PD) - -/* PORT_DATA_I_PU(nr) */ -#define _I__U_(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PU) - -/* PORT_DATA_I_PU_PD(nr) */ -#define _I__UD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - 
PORT##nr##_IN, PORT##nr##_IN_PD, PORT##nr##_IN_PU) - -/* PORT_DATA_O(nr) */ -#define __O___(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT) - -/* PORT_DATA_IO(nr) */ -#define _IO___(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN) - -/* PORT_DATA_IO_PD(nr) */ -#define _IO__D(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN, PORT##nr##_IN_PD) - -/* PORT_DATA_IO_PU(nr) */ -#define _IO_U_(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN, PORT##nr##_IN_PU) - -/* PORT_DATA_IO_PU_PD(nr) */ -#define _IO_UD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ - PORT##nr##_IN, PORT##nr##_IN_PD, PORT##nr##_IN_PU) - - static pinmux_enum_t pinmux_data[] = { /* specify valid pin states for each pin in GPIO mode */ - - _IO__D(0), _IO__D(1), __O___(2), _I___D(3), _I___D(4), - _I___D(5), _IO_UD(6), _I___D(7), _IO__D(8), __O___(9), - - __O___(10), __O___(11), _IO_UD(12), _IO__D(13), _IO__D(14), - __O___(15), _IO__D(16), _IO__D(17), _I___D(18), _IO___(19), - - _IO___(20), _IO___(21), _IO___(22), _IO___(23), _IO___(24), - _IO___(25), _IO___(26), _IO___(27), _IO___(28), _IO___(29), - - _IO___(30), _IO___(31), _IO___(32), _IO___(33), _IO___(34), - _IO___(35), _IO___(36), _IO___(37), _IO___(38), _IO___(39), - - _IO___(40), _IO___(41), _IO___(42), _IO___(43), _IO___(44), - _IO___(45), _IO_U_(46), _IO_U_(47), _IO_U_(48), _IO_U_(49), - - _IO_U_(50), _IO_U_(51), _IO_U_(52), _IO_U_(53), _IO_U_(54), - _IO_U_(55), _IO_U_(56), _IO_U_(57), _IO_U_(58), _IO_U_(59), - - _IO_U_(60), _IO_U_(61), _IO___(62), __O___(63), __O___(64), - _IO_U_(65), __O___(66), _IO_U_(67), __O___(68), _IO___(69), /*66?*/ - - _IO___(70), _IO___(71), __O___(72), _I__U_(73), _I__UD(74), - _IO_UD(75), _IO_UD(76), _IO_UD(77), _IO_UD(78), _IO_UD(79), - - _IO_UD(80), _IO_UD(81), _IO_UD(82), _IO_UD(83), _IO_UD(84), - _IO_UD(85), _IO_UD(86), _IO_UD(87), _IO_UD(88), _IO_UD(89), - - _IO_UD(90), _IO_UD(91), _IO_UD(92), _IO_UD(93), _IO_UD(94), - _IO_UD(95), _IO_U_(96), _IO_UD(97), _IO_UD(98), __O___(99), /*99?*/ - - _IO__D(100), _IO__D(101), _IO__D(102), _IO__D(103), _IO__D(104), - _IO__D(105), _IO_U_(106), _IO_U_(107), _IO_U_(108), _IO_U_(109), - - _IO_U_(110), _IO_U_(111), _IO__D(112), _IO__D(113), _IO_U_(114), - _IO_U_(115), _IO_U_(116), _IO_U_(117), _IO_U_(118), _IO_U_(119), - - _IO_U_(120), _IO__D(121), _IO__D(122), _IO__D(123), _IO__D(124), - _IO__D(125), _IO__D(126), _IO__D(127), _IO__D(128), _IO_UD(129), - - _IO_UD(130), _IO_UD(131), _IO_UD(132), _IO_UD(133), _IO_UD(134), - _IO_UD(135), _IO__D(136), _IO__D(137), _IO__D(138), _IO__D(139), - - _IO__D(140), _IO__D(141), _IO__D(142), _IO_UD(143), _IO__D(144), - _IO__D(145), _IO__D(146), _IO__D(147), _IO__D(148), _IO__D(149), - - _IO__D(150), _IO__D(151), _IO_UD(152), _I___D(153), _IO_UD(154), - _I___D(155), _IO__D(156), _IO__D(157), _I___D(158), _IO__D(159), - - __O___(160), _IO__D(161), _IO__D(162), _IO__D(163), _I___D(164), - _IO__D(165), _I___D(166), _I___D(167), _I___D(168), _I___D(169), - - _I___D(170), __O___(171), _IO_UD(172), _IO_UD(173), _IO_UD(174), - _IO_UD(175), _IO_UD(176), _IO_UD(177), _IO_UD(178), __O___(179), - - _IO_UD(180), _IO_UD(181), _IO_UD(182), _IO_UD(183), _IO_UD(184), - __O___(185), _IO_UD(186), _IO_UD(187), _IO_UD(188), _IO_UD(189), - - _IO_UD(190), + PORT_DATA_IO_PD(0), PORT_DATA_IO_PD(1), + PORT_DATA_O(2), PORT_DATA_I_PD(3), + PORT_DATA_I_PD(4), PORT_DATA_I_PD(5), + PORT_DATA_IO_PU_PD(6), PORT_DATA_I_PD(7), + 
PORT_DATA_IO_PD(8), PORT_DATA_O(9), + + PORT_DATA_O(10), PORT_DATA_O(11), + PORT_DATA_IO_PU_PD(12), PORT_DATA_IO_PD(13), + PORT_DATA_IO_PD(14), PORT_DATA_O(15), + PORT_DATA_IO_PD(16), PORT_DATA_IO_PD(17), + PORT_DATA_I_PD(18), PORT_DATA_IO(19), + + PORT_DATA_IO(20), PORT_DATA_IO(21), + PORT_DATA_IO(22), PORT_DATA_IO(23), + PORT_DATA_IO(24), PORT_DATA_IO(25), + PORT_DATA_IO(26), PORT_DATA_IO(27), + PORT_DATA_IO(28), PORT_DATA_IO(29), + + PORT_DATA_IO(30), PORT_DATA_IO(31), + PORT_DATA_IO(32), PORT_DATA_IO(33), + PORT_DATA_IO(34), PORT_DATA_IO(35), + PORT_DATA_IO(36), PORT_DATA_IO(37), + PORT_DATA_IO(38), PORT_DATA_IO(39), + + PORT_DATA_IO(40), PORT_DATA_IO(41), + PORT_DATA_IO(42), PORT_DATA_IO(43), + PORT_DATA_IO(44), PORT_DATA_IO(45), + PORT_DATA_IO_PU(46), PORT_DATA_IO_PU(47), + PORT_DATA_IO_PU(48), PORT_DATA_IO_PU(49), + + PORT_DATA_IO_PU(50), PORT_DATA_IO_PU(51), + PORT_DATA_IO_PU(52), PORT_DATA_IO_PU(53), + PORT_DATA_IO_PU(54), PORT_DATA_IO_PU(55), + PORT_DATA_IO_PU(56), PORT_DATA_IO_PU(57), + PORT_DATA_IO_PU(58), PORT_DATA_IO_PU(59), + + PORT_DATA_IO_PU(60), PORT_DATA_IO_PU(61), + PORT_DATA_IO(62), PORT_DATA_O(63), + PORT_DATA_O(64), PORT_DATA_IO_PU(65), + PORT_DATA_O(66), PORT_DATA_IO_PU(67), /*66?*/ + PORT_DATA_O(68), PORT_DATA_IO(69), + + PORT_DATA_IO(70), PORT_DATA_IO(71), + PORT_DATA_O(72), PORT_DATA_I_PU(73), + PORT_DATA_I_PU_PD(74), PORT_DATA_IO_PU_PD(75), + PORT_DATA_IO_PU_PD(76), PORT_DATA_IO_PU_PD(77), + PORT_DATA_IO_PU_PD(78), PORT_DATA_IO_PU_PD(79), + + PORT_DATA_IO_PU_PD(80), PORT_DATA_IO_PU_PD(81), + PORT_DATA_IO_PU_PD(82), PORT_DATA_IO_PU_PD(83), + PORT_DATA_IO_PU_PD(84), PORT_DATA_IO_PU_PD(85), + PORT_DATA_IO_PU_PD(86), PORT_DATA_IO_PU_PD(87), + PORT_DATA_IO_PU_PD(88), PORT_DATA_IO_PU_PD(89), + + PORT_DATA_IO_PU_PD(90), PORT_DATA_IO_PU_PD(91), + PORT_DATA_IO_PU_PD(92), PORT_DATA_IO_PU_PD(93), + PORT_DATA_IO_PU_PD(94), PORT_DATA_IO_PU_PD(95), + PORT_DATA_IO_PU(96), PORT_DATA_IO_PU_PD(97), + PORT_DATA_IO_PU_PD(98), PORT_DATA_O(99), /*99?*/ + + PORT_DATA_IO_PD(100), PORT_DATA_IO_PD(101), + PORT_DATA_IO_PD(102), PORT_DATA_IO_PD(103), + PORT_DATA_IO_PD(104), PORT_DATA_IO_PD(105), + PORT_DATA_IO_PU(106), PORT_DATA_IO_PU(107), + PORT_DATA_IO_PU(108), PORT_DATA_IO_PU(109), + + PORT_DATA_IO_PU(110), PORT_DATA_IO_PU(111), + PORT_DATA_IO_PD(112), PORT_DATA_IO_PD(113), + PORT_DATA_IO_PU(114), PORT_DATA_IO_PU(115), + PORT_DATA_IO_PU(116), PORT_DATA_IO_PU(117), + PORT_DATA_IO_PU(118), PORT_DATA_IO_PU(119), + + PORT_DATA_IO_PU(120), PORT_DATA_IO_PD(121), + PORT_DATA_IO_PD(122), PORT_DATA_IO_PD(123), + PORT_DATA_IO_PD(124), PORT_DATA_IO_PD(125), + PORT_DATA_IO_PD(126), PORT_DATA_IO_PD(127), + PORT_DATA_IO_PD(128), PORT_DATA_IO_PU_PD(129), + + PORT_DATA_IO_PU_PD(130), PORT_DATA_IO_PU_PD(131), + PORT_DATA_IO_PU_PD(132), PORT_DATA_IO_PU_PD(133), + PORT_DATA_IO_PU_PD(134), PORT_DATA_IO_PU_PD(135), + PORT_DATA_IO_PD(136), PORT_DATA_IO_PD(137), + PORT_DATA_IO_PD(138), PORT_DATA_IO_PD(139), + + PORT_DATA_IO_PD(140), PORT_DATA_IO_PD(141), + PORT_DATA_IO_PD(142), PORT_DATA_IO_PU_PD(143), + PORT_DATA_IO_PD(144), PORT_DATA_IO_PD(145), + PORT_DATA_IO_PD(146), PORT_DATA_IO_PD(147), + PORT_DATA_IO_PD(148), PORT_DATA_IO_PD(149), + + PORT_DATA_IO_PD(150), PORT_DATA_IO_PD(151), + PORT_DATA_IO_PU_PD(152), PORT_DATA_I_PD(153), + PORT_DATA_IO_PU_PD(154), PORT_DATA_I_PD(155), + PORT_DATA_IO_PD(156), PORT_DATA_IO_PD(157), + PORT_DATA_I_PD(158), PORT_DATA_IO_PD(159), + + PORT_DATA_O(160), PORT_DATA_IO_PD(161), + PORT_DATA_IO_PD(162), PORT_DATA_IO_PD(163), + PORT_DATA_I_PD(164), PORT_DATA_IO_PD(165), + 
PORT_DATA_I_PD(166), PORT_DATA_I_PD(167), + PORT_DATA_I_PD(168), PORT_DATA_I_PD(169), + + PORT_DATA_I_PD(170), PORT_DATA_O(171), + PORT_DATA_IO_PU_PD(172), PORT_DATA_IO_PU_PD(173), + PORT_DATA_IO_PU_PD(174), PORT_DATA_IO_PU_PD(175), + PORT_DATA_IO_PU_PD(176), PORT_DATA_IO_PU_PD(177), + PORT_DATA_IO_PU_PD(178), PORT_DATA_O(179), + + PORT_DATA_IO_PU_PD(180), PORT_DATA_IO_PU_PD(181), + PORT_DATA_IO_PU_PD(182), PORT_DATA_IO_PU_PD(183), + PORT_DATA_IO_PU_PD(184), PORT_DATA_O(185), + PORT_DATA_IO_PU_PD(186), PORT_DATA_IO_PU_PD(187), + PORT_DATA_IO_PU_PD(188), PORT_DATA_IO_PU_PD(189), + + PORT_DATA_IO_PU_PD(190), /* IRQ */ PINMUX_DATA(IRQ0_6_MARK, PORT6_FN0, MSEL1CR_0_0), @@ -926,10 +928,6 @@ static pinmux_enum_t pinmux_data[] = { PINMUX_DATA(MFIv4_MARK, MSEL4CR_6_1), }; -#define _GPIO_PORT(pfx, sfx) PINMUX_GPIO(GPIO_PORT##pfx, PORT##pfx##_DATA) -#define GPIO_PORT_ALL() _190(_GPIO_PORT, , unused) -#define GPIO_FN(str) PINMUX_GPIO(GPIO_FN_##str, str##_MARK) - static struct pinmux_gpio pinmux_gpios[] = { /* PORT */ @@ -1201,22 +1199,6 @@ static struct pinmux_gpio pinmux_gpios[] = { GPIO_FN(SDENC_DV_CLKI), }; -/* helper for top 4 bits in PORTnCR */ -#define PCRH(in, in_pd, in_pu, out) \ - 0, (out), (in), 0, \ - 0, 0, 0, 0, \ - 0, 0, (in_pd), 0, \ - 0, 0, (in_pu), 0 - -#define PORTCR(nr, reg) \ - { PINMUX_CFG_REG("PORT" nr "CR", reg, 8, 4) { \ - PCRH(PORT##nr##_IN, PORT##nr##_IN_PD, \ - PORT##nr##_IN_PU, PORT##nr##_OUT), \ - PORT##nr##_FN0, PORT##nr##_FN1, PORT##nr##_FN2, \ - PORT##nr##_FN3, PORT##nr##_FN4, PORT##nr##_FN5, \ - PORT##nr##_FN6, PORT##nr##_FN7 } \ - } - static struct pinmux_cfg_reg pinmux_config_regs[] = { PORTCR(0, 0xE6051000), /* PORT0CR */ PORTCR(1, 0xE6051001), /* PORT1CR */ diff --git a/arch/arm/mach-shmobile/pfc-sh7377.c b/arch/arm/mach-shmobile/pfc-sh7377.c index 613e6842ad05..2f10511946ad 100644 --- a/arch/arm/mach-shmobile/pfc-sh7377.c +++ b/arch/arm/mach-shmobile/pfc-sh7377.c @@ -22,84 +22,65 @@ #include <linux/gpio.h> #include <mach/sh7377.h> -#define _1(fn, pfx, sfx) fn(pfx, sfx) - -#define _10(fn, pfx, sfx) \ - _1(fn, pfx##0, sfx), _1(fn, pfx##1, sfx), \ - _1(fn, pfx##2, sfx), _1(fn, pfx##3, sfx), \ - _1(fn, pfx##4, sfx), _1(fn, pfx##5, sfx), \ - _1(fn, pfx##6, sfx), _1(fn, pfx##7, sfx), \ - _1(fn, pfx##8, sfx), _1(fn, pfx##9, sfx) - -#define _90(fn, pfx, sfx) \ - _10(fn, pfx##1, sfx), _10(fn, pfx##2, sfx), \ - _10(fn, pfx##3, sfx), _10(fn, pfx##4, sfx), \ - _10(fn, pfx##5, sfx), _10(fn, pfx##6, sfx), \ - _10(fn, pfx##7, sfx), _10(fn, pfx##8, sfx), \ - _10(fn, pfx##9, sfx) - -#define _265(fn, pfx, sfx) \ - _10(fn, pfx, sfx), _90(fn, pfx, sfx), \ - _10(fn, pfx##10, sfx), \ - _1(fn, pfx##110, sfx), _1(fn, pfx##111, sfx), \ - _1(fn, pfx##112, sfx), _1(fn, pfx##113, sfx), \ - _1(fn, pfx##114, sfx), _1(fn, pfx##115, sfx), \ - _1(fn, pfx##116, sfx), _1(fn, pfx##117, sfx), \ - _1(fn, pfx##118, sfx), \ - _1(fn, pfx##128, sfx), _1(fn, pfx##129, sfx), \ - _10(fn, pfx##13, sfx), _10(fn, pfx##14, sfx), \ - _10(fn, pfx##15, sfx), \ - _1(fn, pfx##160, sfx), _1(fn, pfx##161, sfx), \ - _1(fn, pfx##162, sfx), _1(fn, pfx##163, sfx), \ - _1(fn, pfx##164, sfx), \ - _1(fn, pfx##192, sfx), _1(fn, pfx##193, sfx), \ - _1(fn, pfx##194, sfx), _1(fn, pfx##195, sfx), \ - _1(fn, pfx##196, sfx), _1(fn, pfx##197, sfx), \ - _1(fn, pfx##198, sfx), _1(fn, pfx##199, sfx), \ - _10(fn, pfx##20, sfx), _10(fn, pfx##21, sfx), \ - _10(fn, pfx##22, sfx), _10(fn, pfx##23, sfx), \ - _10(fn, pfx##24, sfx), _10(fn, pfx##25, sfx), \ - _1(fn, pfx##260, sfx), _1(fn, pfx##261, sfx), \ - _1(fn, pfx##262, sfx), _1(fn, 
pfx##263, sfx), \ - _1(fn, pfx##264, sfx) - -#define _PORT(pfx, sfx) pfx##_##sfx -#define PORT_265(str) _265(_PORT, PORT, str) +#define CPU_ALL_PORT(fn, pfx, sfx) \ + PORT_10(fn, pfx, sfx), PORT_90(fn, pfx, sfx), \ + PORT_10(fn, pfx##10, sfx), \ + PORT_1(fn, pfx##110, sfx), PORT_1(fn, pfx##111, sfx), \ + PORT_1(fn, pfx##112, sfx), PORT_1(fn, pfx##113, sfx), \ + PORT_1(fn, pfx##114, sfx), PORT_1(fn, pfx##115, sfx), \ + PORT_1(fn, pfx##116, sfx), PORT_1(fn, pfx##117, sfx), \ + PORT_1(fn, pfx##118, sfx), \ + PORT_1(fn, pfx##128, sfx), PORT_1(fn, pfx##129, sfx), \ + PORT_10(fn, pfx##13, sfx), PORT_10(fn, pfx##14, sfx), \ + PORT_10(fn, pfx##15, sfx), \ + PORT_1(fn, pfx##160, sfx), PORT_1(fn, pfx##161, sfx), \ + PORT_1(fn, pfx##162, sfx), PORT_1(fn, pfx##163, sfx), \ + PORT_1(fn, pfx##164, sfx), \ + PORT_1(fn, pfx##192, sfx), PORT_1(fn, pfx##193, sfx), \ + PORT_1(fn, pfx##194, sfx), PORT_1(fn, pfx##195, sfx), \ + PORT_1(fn, pfx##196, sfx), PORT_1(fn, pfx##197, sfx), \ + PORT_1(fn, pfx##198, sfx), PORT_1(fn, pfx##199, sfx), \ + PORT_10(fn, pfx##20, sfx), PORT_10(fn, pfx##21, sfx), \ + PORT_10(fn, pfx##22, sfx), PORT_10(fn, pfx##23, sfx), \ + PORT_10(fn, pfx##24, sfx), PORT_10(fn, pfx##25, sfx), \ + PORT_1(fn, pfx##260, sfx), PORT_1(fn, pfx##261, sfx), \ + PORT_1(fn, pfx##262, sfx), PORT_1(fn, pfx##263, sfx), \ + PORT_1(fn, pfx##264, sfx) enum { PINMUX_RESERVED = 0, PINMUX_DATA_BEGIN, - PORT_265(DATA), /* PORT0_DATA -> PORT264_DATA */ + PORT_ALL(DATA), /* PORT0_DATA -> PORT264_DATA */ PINMUX_DATA_END, PINMUX_INPUT_BEGIN, - PORT_265(IN), /* PORT0_IN -> PORT264_IN */ + PORT_ALL(IN), /* PORT0_IN -> PORT264_IN */ PINMUX_INPUT_END, PINMUX_INPUT_PULLUP_BEGIN, - PORT_265(IN_PU), /* PORT0_IN_PU -> PORT264_IN_PU */ + PORT_ALL(IN_PU), /* PORT0_IN_PU -> PORT264_IN_PU */ PINMUX_INPUT_PULLUP_END, PINMUX_INPUT_PULLDOWN_BEGIN, - PORT_265(IN_PD), /* PORT0_IN_PD -> PORT264_IN_PD */ + PORT_ALL(IN_PD), /* PORT0_IN_PD -> PORT264_IN_PD */ PINMUX_INPUT_PULLDOWN_END, PINMUX_OUTPUT_BEGIN, - PORT_265(OUT), /* PORT0_OUT -> PORT264_OUT */ + PORT_ALL(OUT), /* PORT0_OUT -> PORT264_OUT */ PINMUX_OUTPUT_END, PINMUX_FUNCTION_BEGIN, - PORT_265(FN_IN), /* PORT0_FN_IN -> PORT264_FN_IN */ - PORT_265(FN_OUT), /* PORT0_FN_OUT -> PORT264_FN_OUT */ - PORT_265(FN0), /* PORT0_FN0 -> PORT264_FN0 */ - PORT_265(FN1), /* PORT0_FN1 -> PORT264_FN1 */ - PORT_265(FN2), /* PORT0_FN2 -> PORT264_FN2 */ - PORT_265(FN3), /* PORT0_FN3 -> PORT264_FN3 */ - PORT_265(FN4), /* PORT0_FN4 -> PORT264_FN4 */ - PORT_265(FN5), /* PORT0_FN5 -> PORT264_FN5 */ - PORT_265(FN6), /* PORT0_FN6 -> PORT264_FN6 */ - PORT_265(FN7), /* PORT0_FN7 -> PORT264_FN7 */ + PORT_ALL(FN_IN), /* PORT0_FN_IN -> PORT264_FN_IN */ + PORT_ALL(FN_OUT), /* PORT0_FN_OUT -> PORT264_FN_OUT */ + PORT_ALL(FN0), /* PORT0_FN0 -> PORT264_FN0 */ + PORT_ALL(FN1), /* PORT0_FN1 -> PORT264_FN1 */ + PORT_ALL(FN2), /* PORT0_FN2 -> PORT264_FN2 */ + PORT_ALL(FN3), /* PORT0_FN3 -> PORT264_FN3 */ + PORT_ALL(FN4), /* PORT0_FN4 -> PORT264_FN4 */ + PORT_ALL(FN5), /* PORT0_FN5 -> PORT264_FN5 */ + PORT_ALL(FN6), /* PORT0_FN6 -> PORT264_FN6 */ + PORT_ALL(FN7), /* PORT0_FN7 -> PORT264_FN7 */ MSELBCR_MSEL17_1, MSELBCR_MSEL17_0, MSELBCR_MSEL16_1, MSELBCR_MSEL16_0, @@ -360,45 +341,6 @@ enum { PINMUX_MARK_END, }; -#define PORT_DATA_I(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_IN) - -#define PORT_DATA_I_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PD) - -#define PORT_DATA_I_PU(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, 
PORT##nr##_IN_PU) - -#define PORT_DATA_I_PU_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PD, \ - PORT##nr##_IN_PU) - -#define PORT_DATA_O(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT) - -#define PORT_DATA_IO(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN) - -#define PORT_DATA_IO_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN, \ - PORT##nr##_IN_PD) - -#define PORT_DATA_IO_PU(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN, \ - PORT##nr##_IN_PU) - -#define PORT_DATA_IO_PU_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN, \ - PORT##nr##_IN_PD, PORT##nr##_IN_PU) - static pinmux_enum_t pinmux_data[] = { /* specify valid pin states for each pin in GPIO mode */ /* 55-1 (GPIO) */ @@ -1078,13 +1020,9 @@ static pinmux_enum_t pinmux_data[] = { PINMUX_DATA(RESETOUTS_MARK, PORT264_FN1), }; -#define _GPIO_PORT(pfx, sfx) PINMUX_GPIO(GPIO_PORT##pfx, PORT##pfx##_DATA) -#define GPIO_PORT_265() _265(_GPIO_PORT, , unused) -#define GPIO_FN(str) PINMUX_GPIO(GPIO_FN_##str, str##_MARK) - static struct pinmux_gpio pinmux_gpios[] = { /* 55-1 -> 55-5 (GPIO) */ - GPIO_PORT_265(), + GPIO_PORT_ALL(), /* Special Pull-up / Pull-down Functions */ GPIO_FN(PORT66_KEYIN0_PU), GPIO_FN(PORT67_KEYIN1_PU), @@ -1362,23 +1300,6 @@ static struct pinmux_gpio pinmux_gpios[] = { GPIO_FN(RESETOUTS), }; -/* helper for top 4 bits in PORTnCR */ -#define PCRH(in, in_pd, in_pu, out) \ - 0, (out), (in), 0, \ - 0, 0, 0, 0, \ - 0, 0, (in_pd), 0, \ - 0, 0, (in_pu), 0 - -#define PORTCR(nr, reg) \ - { PINMUX_CFG_REG("PORT" nr "CR", reg, 8, 4) { \ - PCRH(PORT##nr##_IN, PORT##nr##_IN_PD, \ - PORT##nr##_IN_PU, PORT##nr##_OUT), \ - PORT##nr##_FN0, PORT##nr##_FN1, \ - PORT##nr##_FN2, PORT##nr##_FN3, \ - PORT##nr##_FN4, PORT##nr##_FN5, \ - PORT##nr##_FN6, PORT##nr##_FN7 } \ - } - static struct pinmux_cfg_reg pinmux_config_regs[] = { PORTCR(0, 0xe6050000), /* PORT0CR */ PORTCR(1, 0xe6050001), /* PORT1CR */ diff --git a/arch/arm/mach-shmobile/pfc-sh73a0.c b/arch/arm/mach-shmobile/pfc-sh73a0.c index 5abe02fbd6b9..e05634ce2e0d 100644 --- a/arch/arm/mach-shmobile/pfc-sh73a0.c +++ b/arch/arm/mach-shmobile/pfc-sh73a0.c @@ -24,83 +24,71 @@ #include <mach/sh73a0.h> #include <mach/irqs.h> -#define _1(fn, pfx, sfx) fn(pfx, sfx) - -#define _10(fn, pfx, sfx) \ - _1(fn, pfx##0, sfx), _1(fn, pfx##1, sfx), \ - _1(fn, pfx##2, sfx), _1(fn, pfx##3, sfx), \ - _1(fn, pfx##4, sfx), _1(fn, pfx##5, sfx), \ - _1(fn, pfx##6, sfx), _1(fn, pfx##7, sfx), \ - _1(fn, pfx##8, sfx), _1(fn, pfx##9, sfx) - -#define _310(fn, pfx, sfx) \ - _10(fn, pfx, sfx), _10(fn, pfx##1, sfx), \ - _10(fn, pfx##2, sfx), _10(fn, pfx##3, sfx), \ - _10(fn, pfx##4, sfx), _10(fn, pfx##5, sfx), \ - _10(fn, pfx##6, sfx), _10(fn, pfx##7, sfx), \ - _10(fn, pfx##8, sfx), _10(fn, pfx##9, sfx), \ - _10(fn, pfx##10, sfx), \ - _1(fn, pfx##110, sfx), _1(fn, pfx##111, sfx), \ - _1(fn, pfx##112, sfx), _1(fn, pfx##113, sfx), \ - _1(fn, pfx##114, sfx), _1(fn, pfx##115, sfx), \ - _1(fn, pfx##116, sfx), _1(fn, pfx##117, sfx), \ - _1(fn, pfx##118, sfx), \ - _1(fn, pfx##128, sfx), _1(fn, pfx##129, sfx), \ - _10(fn, pfx##13, sfx), _10(fn, pfx##14, sfx), \ - _10(fn, pfx##15, sfx), \ - _1(fn, pfx##160, sfx), _1(fn, pfx##161, sfx), \ - _1(fn, pfx##162, sfx), _1(fn, pfx##163, sfx), \ - _1(fn, pfx##164, sfx), \ - _1(fn, pfx##192, sfx), _1(fn, pfx##193, sfx), \ - _1(fn, pfx##194, sfx), _1(fn, pfx##195, 
sfx), \ - _1(fn, pfx##196, sfx), _1(fn, pfx##197, sfx), \ - _1(fn, pfx##198, sfx), _1(fn, pfx##199, sfx), \ - _10(fn, pfx##20, sfx), _10(fn, pfx##21, sfx), \ - _10(fn, pfx##22, sfx), _10(fn, pfx##23, sfx), \ - _10(fn, pfx##24, sfx), _10(fn, pfx##25, sfx), \ - _10(fn, pfx##26, sfx), _10(fn, pfx##27, sfx), \ - _1(fn, pfx##280, sfx), _1(fn, pfx##281, sfx), \ - _1(fn, pfx##282, sfx), \ - _1(fn, pfx##288, sfx), _1(fn, pfx##289, sfx), \ - _10(fn, pfx##29, sfx), _10(fn, pfx##30, sfx) - -#define _PORT(pfx, sfx) pfx##_##sfx -#define PORT_310(str) _310(_PORT, PORT, str) +#define CPU_ALL_PORT(fn, pfx, sfx) \ + PORT_10(fn, pfx, sfx), PORT_10(fn, pfx##1, sfx), \ + PORT_10(fn, pfx##2, sfx), PORT_10(fn, pfx##3, sfx), \ + PORT_10(fn, pfx##4, sfx), PORT_10(fn, pfx##5, sfx), \ + PORT_10(fn, pfx##6, sfx), PORT_10(fn, pfx##7, sfx), \ + PORT_10(fn, pfx##8, sfx), PORT_10(fn, pfx##9, sfx), \ + PORT_10(fn, pfx##10, sfx), \ + PORT_1(fn, pfx##110, sfx), PORT_1(fn, pfx##111, sfx), \ + PORT_1(fn, pfx##112, sfx), PORT_1(fn, pfx##113, sfx), \ + PORT_1(fn, pfx##114, sfx), PORT_1(fn, pfx##115, sfx), \ + PORT_1(fn, pfx##116, sfx), PORT_1(fn, pfx##117, sfx), \ + PORT_1(fn, pfx##118, sfx), \ + PORT_1(fn, pfx##128, sfx), PORT_1(fn, pfx##129, sfx), \ + PORT_10(fn, pfx##13, sfx), PORT_10(fn, pfx##14, sfx), \ + PORT_10(fn, pfx##15, sfx), \ + PORT_1(fn, pfx##160, sfx), PORT_1(fn, pfx##161, sfx), \ + PORT_1(fn, pfx##162, sfx), PORT_1(fn, pfx##163, sfx), \ + PORT_1(fn, pfx##164, sfx), \ + PORT_1(fn, pfx##192, sfx), PORT_1(fn, pfx##193, sfx), \ + PORT_1(fn, pfx##194, sfx), PORT_1(fn, pfx##195, sfx), \ + PORT_1(fn, pfx##196, sfx), PORT_1(fn, pfx##197, sfx), \ + PORT_1(fn, pfx##198, sfx), PORT_1(fn, pfx##199, sfx), \ + PORT_10(fn, pfx##20, sfx), PORT_10(fn, pfx##21, sfx), \ + PORT_10(fn, pfx##22, sfx), PORT_10(fn, pfx##23, sfx), \ + PORT_10(fn, pfx##24, sfx), PORT_10(fn, pfx##25, sfx), \ + PORT_10(fn, pfx##26, sfx), PORT_10(fn, pfx##27, sfx), \ + PORT_1(fn, pfx##280, sfx), PORT_1(fn, pfx##281, sfx), \ + PORT_1(fn, pfx##282, sfx), \ + PORT_1(fn, pfx##288, sfx), PORT_1(fn, pfx##289, sfx), \ + PORT_10(fn, pfx##29, sfx), PORT_10(fn, pfx##30, sfx) enum { PINMUX_RESERVED = 0, PINMUX_DATA_BEGIN, - PORT_310(DATA), /* PORT0_DATA -> PORT309_DATA */ + PORT_ALL(DATA), /* PORT0_DATA -> PORT309_DATA */ PINMUX_DATA_END, PINMUX_INPUT_BEGIN, - PORT_310(IN), /* PORT0_IN -> PORT309_IN */ + PORT_ALL(IN), /* PORT0_IN -> PORT309_IN */ PINMUX_INPUT_END, PINMUX_INPUT_PULLUP_BEGIN, - PORT_310(IN_PU), /* PORT0_IN_PU -> PORT309_IN_PU */ + PORT_ALL(IN_PU), /* PORT0_IN_PU -> PORT309_IN_PU */ PINMUX_INPUT_PULLUP_END, PINMUX_INPUT_PULLDOWN_BEGIN, - PORT_310(IN_PD), /* PORT0_IN_PD -> PORT309_IN_PD */ + PORT_ALL(IN_PD), /* PORT0_IN_PD -> PORT309_IN_PD */ PINMUX_INPUT_PULLDOWN_END, PINMUX_OUTPUT_BEGIN, - PORT_310(OUT), /* PORT0_OUT -> PORT309_OUT */ + PORT_ALL(OUT), /* PORT0_OUT -> PORT309_OUT */ PINMUX_OUTPUT_END, PINMUX_FUNCTION_BEGIN, - PORT_310(FN_IN), /* PORT0_FN_IN -> PORT309_FN_IN */ - PORT_310(FN_OUT), /* PORT0_FN_OUT -> PORT309_FN_OUT */ - PORT_310(FN0), /* PORT0_FN0 -> PORT309_FN0 */ - PORT_310(FN1), /* PORT0_FN1 -> PORT309_FN1 */ - PORT_310(FN2), /* PORT0_FN2 -> PORT309_FN2 */ - PORT_310(FN3), /* PORT0_FN3 -> PORT309_FN3 */ - PORT_310(FN4), /* PORT0_FN4 -> PORT309_FN4 */ - PORT_310(FN5), /* PORT0_FN5 -> PORT309_FN5 */ - PORT_310(FN6), /* PORT0_FN6 -> PORT309_FN6 */ - PORT_310(FN7), /* PORT0_FN7 -> PORT309_FN7 */ + PORT_ALL(FN_IN), /* PORT0_FN_IN -> PORT309_FN_IN */ + PORT_ALL(FN_OUT), /* PORT0_FN_OUT -> PORT309_FN_OUT */ + PORT_ALL(FN0), /* PORT0_FN0 -> 
PORT309_FN0 */ + PORT_ALL(FN1), /* PORT0_FN1 -> PORT309_FN1 */ + PORT_ALL(FN2), /* PORT0_FN2 -> PORT309_FN2 */ + PORT_ALL(FN3), /* PORT0_FN3 -> PORT309_FN3 */ + PORT_ALL(FN4), /* PORT0_FN4 -> PORT309_FN4 */ + PORT_ALL(FN5), /* PORT0_FN5 -> PORT309_FN5 */ + PORT_ALL(FN6), /* PORT0_FN6 -> PORT309_FN6 */ + PORT_ALL(FN7), /* PORT0_FN7 -> PORT309_FN7 */ MSEL2CR_MSEL19_0, MSEL2CR_MSEL19_1, MSEL2CR_MSEL18_0, MSEL2CR_MSEL18_1, @@ -508,6 +496,14 @@ enum { SDHICMD2_PU_MARK, MMCCMD0_PU_MARK, MMCCMD1_PU_MARK, + MMCD0_0_PU_MARK, + MMCD0_1_PU_MARK, + MMCD0_2_PU_MARK, + MMCD0_3_PU_MARK, + MMCD0_4_PU_MARK, + MMCD0_5_PU_MARK, + MMCD0_6_PU_MARK, + MMCD0_7_PU_MARK, FSIBISLD_PU_MARK, FSIACK_PU_MARK, FSIAILR_PU_MARK, @@ -517,45 +513,6 @@ enum { PINMUX_MARK_END, }; -#define PORT_DATA_I(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_IN) - -#define PORT_DATA_I_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PD) - -#define PORT_DATA_I_PU(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PU) - -#define PORT_DATA_I_PU_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_IN, PORT##nr##_IN_PD, \ - PORT##nr##_IN_PU) - -#define PORT_DATA_O(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT) - -#define PORT_DATA_IO(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN) - -#define PORT_DATA_IO_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN, \ - PORT##nr##_IN_PD) - -#define PORT_DATA_IO_PU(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN, \ - PORT##nr##_IN_PU) - -#define PORT_DATA_IO_PU_PD(nr) \ - PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ - PORT##nr##_OUT, PORT##nr##_IN, \ - PORT##nr##_IN_PD, PORT##nr##_IN_PU) - static pinmux_enum_t pinmux_data[] = { /* specify valid pin states for each pin in GPIO mode */ @@ -1561,6 +1518,24 @@ static pinmux_enum_t pinmux_data[] = { MSEL4CR_MSEL15_0), PINMUX_DATA(MMCCMD1_PU_MARK, PORT297_FN2, PORT297_IN_PU, MSEL4CR_MSEL15_1), + + PINMUX_DATA(MMCD0_0_PU_MARK, + PORT271_FN1, PORT271_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(MMCD0_1_PU_MARK, + PORT272_FN1, PORT272_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(MMCD0_2_PU_MARK, + PORT273_FN1, PORT273_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(MMCD0_3_PU_MARK, + PORT274_FN1, PORT274_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(MMCD0_4_PU_MARK, + PORT275_FN1, PORT275_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(MMCD0_5_PU_MARK, + PORT276_FN1, PORT276_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(MMCD0_6_PU_MARK, + PORT277_FN1, PORT277_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(MMCD0_7_PU_MARK, + PORT278_FN1, PORT278_IN_PU, MSEL4CR_MSEL15_0), + PINMUX_DATA(FSIBISLD_PU_MARK, PORT39_FN1, PORT39_IN_PU), PINMUX_DATA(FSIACK_PU_MARK, PORT49_FN1, PORT49_IN_PU), PINMUX_DATA(FSIAILR_PU_MARK, PORT50_FN5, PORT50_IN_PU), @@ -1568,12 +1543,8 @@ static pinmux_enum_t pinmux_data[] = { PINMUX_DATA(FSIAISLD_PU_MARK, PORT55_FN1, PORT55_IN_PU), }; -#define _GPIO_PORT(pfx, sfx) PINMUX_GPIO(GPIO_PORT##pfx, PORT##pfx##_DATA) -#define GPIO_PORT_310() _310(_GPIO_PORT, , unused) -#define GPIO_FN(str) PINMUX_GPIO(GPIO_FN_##str, str##_MARK) - static struct pinmux_gpio pinmux_gpios[] = { - GPIO_PORT_310(), + GPIO_PORT_ALL(), /* Table 25-1 (Functions 0-7) */ GPIO_FN(VBUS_0), @@ -2236,24 +2207,20 @@ static struct pinmux_gpio pinmux_gpios[] = { GPIO_FN(SDHICMD2_PU), GPIO_FN(MMCCMD0_PU), GPIO_FN(MMCCMD1_PU), + GPIO_FN(MMCD0_0_PU), + GPIO_FN(MMCD0_1_PU), + 
GPIO_FN(MMCD0_2_PU), + GPIO_FN(MMCD0_3_PU), + GPIO_FN(MMCD0_4_PU), + GPIO_FN(MMCD0_5_PU), + GPIO_FN(MMCD0_6_PU), + GPIO_FN(MMCD0_7_PU), GPIO_FN(FSIACK_PU), GPIO_FN(FSIAILR_PU), GPIO_FN(FSIAIBT_PU), GPIO_FN(FSIAISLD_PU), }; -#define PORTCR(nr, reg) \ - { PINMUX_CFG_REG("PORT" nr "CR", reg, 8, 4) { \ - 0, \ - /*0001*/ PORT##nr##_OUT , \ - /*0010*/ PORT##nr##_IN , 0, 0, 0, 0, 0, 0, 0, \ - /*1010*/ PORT##nr##_IN_PD, 0, 0, 0, \ - /*1110*/ PORT##nr##_IN_PU, 0, \ - PORT##nr##_FN0, PORT##nr##_FN1, PORT##nr##_FN2, \ - PORT##nr##_FN3, PORT##nr##_FN4, PORT##nr##_FN5, \ - PORT##nr##_FN6, PORT##nr##_FN7, 0, 0, 0, 0, 0, 0, 0, 0 } \ - } - static struct pinmux_cfg_reg pinmux_config_regs[] = { PORTCR(0, 0xe6050000), /* PORT0CR */ PORTCR(1, 0xe6050001), /* PORT1CR */ diff --git a/arch/arm/mach-shmobile/pm-sh7372.c b/arch/arm/mach-shmobile/pm-sh7372.c index 79612737c5b2..0a5b22942fd3 100644 --- a/arch/arm/mach-shmobile/pm-sh7372.c +++ b/arch/arm/mach-shmobile/pm-sh7372.c @@ -402,22 +402,18 @@ static void sh7372_setup_a3sm(unsigned long msk, unsigned long msk2) #ifdef CONFIG_CPU_IDLE -static void sh7372_cpuidle_setup(struct cpuidle_device *dev) +static void sh7372_cpuidle_setup(struct cpuidle_driver *drv) { - struct cpuidle_state *state; - int i = dev->state_count; + struct cpuidle_state *state = &drv->states[drv->state_count]; - state = &dev->states[i]; snprintf(state->name, CPUIDLE_NAME_LEN, "C2"); strncpy(state->desc, "Core Standby Mode", CPUIDLE_DESC_LEN); state->exit_latency = 10; state->target_residency = 20 + 10; - state->power_usage = 1; /* perhaps not */ - state->flags = 0; - state->flags |= CPUIDLE_FLAG_TIME_VALID; - shmobile_cpuidle_modes[i] = sh7372_enter_core_standby; + state->flags = CPUIDLE_FLAG_TIME_VALID; + shmobile_cpuidle_modes[drv->state_count] = sh7372_enter_core_standby; - dev->state_count = i + 1; + drv->state_count++; } static void sh7372_cpuidle_init(void) diff --git a/arch/arm/mach-tegra/board-dt.c b/arch/arm/mach-tegra/board-dt.c index d368f8dafcfd..74743ad3d2d3 100644 --- a/arch/arm/mach-tegra/board-dt.c +++ b/arch/arm/mach-tegra/board-dt.c @@ -101,6 +101,13 @@ static void __init tegra_dt_init(void) tegra_clk_init_from_table(tegra_dt_clk_init_table); + /* + * Finished with the static registrations now; fill in the missing + * devices + */ + of_platform_populate(NULL, tegra_dt_match_table, + tegra20_auxdata_lookup, NULL); + for (i = 0; i < ARRAY_SIZE(pinmux_configs); i++) { if (of_machine_is_compatible(pinmux_configs[i].machine)) { pinmux_configs[i].init(); @@ -110,12 +117,6 @@ static void __init tegra_dt_init(void) WARN(i == ARRAY_SIZE(pinmux_configs), "Unknown platform! 
Pinmuxing not initialized\n"); - - /* - * Finished with the static registrations now; fill in the missing - * devices - */ - of_platform_populate(NULL, tegra_dt_match_table, tegra20_auxdata_lookup, NULL); } static const char * tegra_dt_board_compat[] = { diff --git a/arch/arm/mach-tegra/board-harmony-pinmux.c b/arch/arm/mach-tegra/board-harmony-pinmux.c index e99b45618cd0..7a4a26d5174c 100644 --- a/arch/arm/mach-tegra/board-harmony-pinmux.c +++ b/arch/arm/mach-tegra/board-harmony-pinmux.c @@ -16,6 +16,8 @@ #include <linux/kernel.h> #include <linux/gpio.h> +#include <linux/of.h> + #include <mach/pinmux.h> #include "gpio-names.h" @@ -161,7 +163,9 @@ static struct tegra_gpio_table gpio_table[] = { void harmony_pinmux_init(void) { - platform_add_devices(pinmux_devices, ARRAY_SIZE(pinmux_devices)); + if (!of_machine_is_compatible("nvidia,tegra20")) + platform_add_devices(pinmux_devices, + ARRAY_SIZE(pinmux_devices)); tegra_pinmux_config_table(harmony_pinmux, ARRAY_SIZE(harmony_pinmux)); diff --git a/arch/arm/mach-tegra/board-paz00-pinmux.c b/arch/arm/mach-tegra/board-paz00-pinmux.c index fb20894862b0..be30e215f4b7 100644 --- a/arch/arm/mach-tegra/board-paz00-pinmux.c +++ b/arch/arm/mach-tegra/board-paz00-pinmux.c @@ -16,6 +16,8 @@ #include <linux/kernel.h> #include <linux/gpio.h> +#include <linux/of.h> + #include <mach/pinmux.h> #include "gpio-names.h" @@ -158,7 +160,9 @@ static struct tegra_gpio_table gpio_table[] = { void paz00_pinmux_init(void) { - platform_add_devices(pinmux_devices, ARRAY_SIZE(pinmux_devices)); + if (!of_machine_is_compatible("nvidia,tegra20")) + platform_add_devices(pinmux_devices, + ARRAY_SIZE(pinmux_devices)); tegra_pinmux_config_table(paz00_pinmux, ARRAY_SIZE(paz00_pinmux)); diff --git a/arch/arm/mach-tegra/board-seaboard-pinmux.c b/arch/arm/mach-tegra/board-seaboard-pinmux.c index fbce31daa3c9..b1c2972f62fe 100644 --- a/arch/arm/mach-tegra/board-seaboard-pinmux.c +++ b/arch/arm/mach-tegra/board-seaboard-pinmux.c @@ -16,6 +16,7 @@ #include <linux/kernel.h> #include <linux/init.h> #include <linux/gpio.h> +#include <linux/of.h> #include <mach/pinmux.h> #include <mach/pinmux-t2.h> @@ -191,6 +192,7 @@ static struct tegra_gpio_table common_gpio_table[] = { { .gpio = TEGRA_GPIO_SD2_POWER, .enable = true }, { .gpio = TEGRA_GPIO_LIDSWITCH, .enable = true }, { .gpio = TEGRA_GPIO_POWERKEY, .enable = true }, + { .gpio = TEGRA_GPIO_HP_DET, .enable = true }, { .gpio = TEGRA_GPIO_ISL29018_IRQ, .enable = true }, { .gpio = TEGRA_GPIO_CDC_IRQ, .enable = true }, { .gpio = TEGRA_GPIO_USB1, .enable = true }, @@ -218,7 +220,9 @@ static void __init update_pinmux(struct tegra_pingroup_config *newtbl, int size) void __init seaboard_common_pinmux_init(void) { - platform_add_devices(pinmux_devices, ARRAY_SIZE(pinmux_devices)); + if (!of_machine_is_compatible("nvidia,tegra20")) + platform_add_devices(pinmux_devices, + ARRAY_SIZE(pinmux_devices)); tegra_pinmux_config_table(seaboard_pinmux, ARRAY_SIZE(seaboard_pinmux)); diff --git a/arch/arm/mach-tegra/board-trimslice-pinmux.c b/arch/arm/mach-tegra/board-trimslice-pinmux.c index 4969dd28a04c..7ab719d46da0 100644 --- a/arch/arm/mach-tegra/board-trimslice-pinmux.c +++ b/arch/arm/mach-tegra/board-trimslice-pinmux.c @@ -16,6 +16,7 @@ #include <linux/gpio.h> #include <linux/kernel.h> #include <linux/init.h> +#include <linux/of.h> #include <mach/pinmux.h> @@ -157,7 +158,9 @@ static struct tegra_gpio_table gpio_table[] = { void __init trimslice_pinmux_init(void) { - platform_add_devices(pinmux_devices, ARRAY_SIZE(pinmux_devices)); + if 
(!of_machine_is_compatible("nvidia,tegra20")) + platform_add_devices(pinmux_devices, + ARRAY_SIZE(pinmux_devices)); tegra_pinmux_config_table(trimslice_pinmux, ARRAY_SIZE(trimslice_pinmux)); tegra_gpio_config(gpio_table, ARRAY_SIZE(gpio_table)); } diff --git a/arch/arm/plat-mxc/Kconfig b/arch/arm/plat-mxc/Kconfig index a08a95107a63..b3a1f2b3ada3 100644 --- a/arch/arm/plat-mxc/Kconfig +++ b/arch/arm/plat-mxc/Kconfig @@ -10,7 +10,7 @@ choice config ARCH_IMX_V4_V5 bool "i.MX1, i.MX21, i.MX25, i.MX27" - select AUTO_ZRELADDR + select AUTO_ZRELADDR if !ZBOOT_ROM select ARM_PATCH_PHYS_VIRT help This enables support for systems based on the Freescale i.MX ARMv4 @@ -26,7 +26,7 @@ config ARCH_IMX_V6_V7 config ARCH_MX5 bool "i.MX50, i.MX51, i.MX53" - select AUTO_ZRELADDR + select AUTO_ZRELADDR if !ZBOOT_ROM select ARM_PATCH_PHYS_VIRT help This enables support for machines using Freescale's i.MX50 and i.MX53 diff --git a/arch/arm/plat-mxc/avic.c b/arch/arm/plat-mxc/avic.c index 8875fb415f68..55f15699a383 100644 --- a/arch/arm/plat-mxc/avic.c +++ b/arch/arm/plat-mxc/avic.c @@ -22,6 +22,7 @@ #include <linux/io.h> #include <mach/common.h> #include <asm/mach/irq.h> +#include <asm/exception.h> #include <mach/hardware.h> #include "irq-common.h" diff --git a/arch/arm/plat-mxc/gic.c b/arch/arm/plat-mxc/gic.c index b3b8eed263b8..12f8f8109010 100644 --- a/arch/arm/plat-mxc/gic.c +++ b/arch/arm/plat-mxc/gic.c @@ -28,21 +28,14 @@ asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs) if (irqnr == 1023) break; - if (irqnr > 29 && irqnr < 1021) + if (irqnr > 15 && irqnr < 1021) handle_IRQ(irqnr, regs); #ifdef CONFIG_SMP - else if (irqnr < 16) { + else { writel_relaxed(irqstat, gic_cpu_base_addr + GIC_CPU_EOI); handle_IPI(irqnr, regs); } #endif -#ifdef CONFIG_LOCAL_TIMERS - else if (irqnr == 29) { - writel_relaxed(irqstat, gic_cpu_base_addr + - GIC_CPU_EOI); - handle_local_timer(regs); - } -#endif } while (1); } diff --git a/arch/arm/plat-mxc/include/mach/entry-macro.S b/arch/arm/plat-mxc/include/mach/entry-macro.S index 9fe0dfcf4e7e..ca5cf26a04b1 100644 --- a/arch/arm/plat-mxc/include/mach/entry-macro.S +++ b/arch/arm/plat-mxc/include/mach/entry-macro.S @@ -25,6 +25,3 @@ .macro test_for_ipi, irqnr, irqstat, base, tmp .endm - - .macro test_for_ltirq, irqnr, irqstat, base, tmp - .endm diff --git a/arch/arm/plat-mxc/tzic.c b/arch/arm/plat-mxc/tzic.c index e993a184189a..a3c164c7ba82 100644 --- a/arch/arm/plat-mxc/tzic.c +++ b/arch/arm/plat-mxc/tzic.c @@ -17,6 +17,7 @@ #include <linux/io.h> #include <asm/mach/irq.h> +#include <asm/exception.h> #include <mach/hardware.h> #include <mach/common.h> diff --git a/arch/arm/plat-omap/dmtimer.c b/arch/arm/plat-omap/dmtimer.c index 2def4e1990ed..af3b92be8459 100644 --- a/arch/arm/plat-omap/dmtimer.c +++ b/arch/arm/plat-omap/dmtimer.c @@ -35,6 +35,7 @@ * 675 Mass Ave, Cambridge, MA 02139, USA. 
*/ +#include <linux/module.h> #include <linux/io.h> #include <linux/slab.h> #include <linux/err.h> diff --git a/arch/arm/plat-omap/i2c.c b/arch/arm/plat-omap/i2c.c index 679cbd49c019..db071bc71c4d 100644 --- a/arch/arm/plat-omap/i2c.c +++ b/arch/arm/plat-omap/i2c.c @@ -184,7 +184,7 @@ static inline int omap2_i2c_add_bus(int bus_id) NULL, 0, 0); WARN(IS_ERR(pdev), "Could not build omap_device for %s\n", name); - return PTR_ERR(pdev); + return PTR_RET(pdev); } #else static inline int omap2_i2c_add_bus(int bus_id) diff --git a/arch/arm/plat-omap/include/plat/cpu.h b/arch/arm/plat-omap/include/plat/cpu.h index 2f9026942229..408a12f79205 100644 --- a/arch/arm/plat-omap/include/plat/cpu.h +++ b/arch/arm/plat-omap/include/plat/cpu.h @@ -399,6 +399,13 @@ void omap2_check_revision(void); /* * Runtime detection of OMAP3 features + * + * OMAP3_HAS_IO_CHAIN_CTRL: Some later members of the OMAP3 chip + * family have OS-level control over the I/O chain clock. This is + * to avoid a window during which wakeups could potentially be lost + * during powerdomain transitions. If this bit is set, it + * indicates that the chip does support OS-level control of this + * feature. */ extern u32 omap_features; @@ -410,9 +417,10 @@ extern u32 omap_features; #define OMAP3_HAS_192MHZ_CLK BIT(5) #define OMAP3_HAS_IO_WAKEUP BIT(6) #define OMAP3_HAS_SDRC BIT(7) -#define OMAP4_HAS_MPU_1GHZ BIT(8) -#define OMAP4_HAS_MPU_1_2GHZ BIT(9) -#define OMAP4_HAS_MPU_1_5GHZ BIT(10) +#define OMAP3_HAS_IO_CHAIN_CTRL BIT(8) +#define OMAP4_HAS_MPU_1GHZ BIT(9) +#define OMAP4_HAS_MPU_1_2GHZ BIT(10) +#define OMAP4_HAS_MPU_1_5GHZ BIT(11) #define OMAP3_HAS_FEATURE(feat,flag) \ @@ -429,12 +437,11 @@ OMAP3_HAS_FEATURE(isp, ISP) OMAP3_HAS_FEATURE(192mhz_clk, 192MHZ_CLK) OMAP3_HAS_FEATURE(io_wakeup, IO_WAKEUP) OMAP3_HAS_FEATURE(sdrc, SDRC) +OMAP3_HAS_FEATURE(io_chain_ctrl, IO_CHAIN_CTRL) /* * Runtime detection of OMAP4 features */ -extern u32 omap_features; - #define OMAP4_HAS_FEATURE(feat, flag) \ static inline unsigned int omap4_has_ ##feat(void) \ { \ diff --git a/arch/arm/plat-omap/include/plat/dmtimer.h b/arch/arm/plat-omap/include/plat/dmtimer.h index d11025e6e7a4..9418f00b6c38 100644 --- a/arch/arm/plat-omap/include/plat/dmtimer.h +++ b/arch/arm/plat-omap/include/plat/dmtimer.h @@ -104,7 +104,7 @@ struct dmtimer_platform_data { bool loses_context; - u32 (*get_context_loss_count)(struct device *dev); + int (*get_context_loss_count)(struct device *dev); }; struct omap_dm_timer *omap_dm_timer_request(void); @@ -279,7 +279,7 @@ struct omap_dm_timer { struct platform_device *pdev; struct list_head node; - u32 (*get_context_loss_count)(struct device *dev); + int (*get_context_loss_count)(struct device *dev); }; int omap_dm_timer_prepare(struct omap_dm_timer *timer); diff --git a/arch/arm/plat-omap/include/plat/omap-alsa.h b/arch/arm/plat-omap/include/plat/omap-alsa.h deleted file mode 100644 index b53055b390d0..000000000000 --- a/arch/arm/plat-omap/include/plat/omap-alsa.h +++ /dev/null @@ -1,123 +0,0 @@ -/* - * arch/arm/plat-omap/include/mach/omap-alsa.h - * - * Alsa Driver for AIC23 and TSC2101 codecs on OMAP platform boards. 
- * - * Copyright (C) 2006 Mika Laitio <lamikr@cc.jyu.fi> - * - * Copyright (C) 2005 Instituto Nokia de Tecnologia - INdT - Manaus Brazil - * Written by Daniel Petrini, David Cohen, Anderson Briglia - * {daniel.petrini, david.cohen, anderson.briglia}@indt.org.br - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2 of the License, or (at your - * option) any later version. - * - * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED - * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN - * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF - * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON - * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, write to the Free Software Foundation, Inc., - * 675 Mass Ave, Cambridge, MA 02139, USA. - * - * History - * ------- - * - * 2005/07/25 INdT-10LE Kernel Team - Alsa driver for omap osk, - * original version based in sa1100 driver - * and omap oss driver. - */ - -#ifndef __OMAP_ALSA_H -#define __OMAP_ALSA_H - -#include <plat/dma.h> -#include <sound/core.h> -#include <sound/pcm.h> -#include <plat/mcbsp.h> -#include <linux/platform_device.h> - -#define DMA_BUF_SIZE (1024 * 8) - -/* - * Buffer management for alsa and dma - */ -struct audio_stream { - char *id; /* identification string */ - int stream_id; /* numeric identification */ - int dma_dev; /* dma number of that device */ - int *lch; /* Chain of channels this stream is linked to */ - char started; /* to store if the chain was started or not */ - int dma_q_head; /* DMA Channel Q Head */ - int dma_q_tail; /* DMA Channel Q Tail */ - char dma_q_count; /* DMA Channel Q Count */ - int active:1; /* we are using this stream for transfer now */ - int period; /* current transfer period */ - int periods; /* current count of periods registerd in the DMA engine */ - spinlock_t dma_lock; /* for locking in DMA operations */ - struct snd_pcm_substream *stream; /* the pcm stream */ - unsigned linked:1; /* dma channels linked */ - int offset; /* store start position of the last period in the alsa buffer */ - int (*hw_start)(void); /* interface to start HW interface, e.g. McBSP */ - int (*hw_stop)(void); /* interface to stop HW interface, e.g. McBSP */ -}; - -/* - * Alsa card structure for aic23 - */ -struct snd_card_omap_codec { - struct snd_card *card; - struct snd_pcm *pcm; - long samplerate; - struct audio_stream s[2]; /* playback & capture */ -}; - -/* Codec specific information and function pointers. - * Codec (omap-alsa-aic23.c and omap-alsa-tsc2101.c) - * are responsible for defining the function pointers. 
- */ -struct omap_alsa_codec_config { - char *name; - struct omap_mcbsp_reg_cfg *mcbsp_regs_alsa; - struct snd_pcm_hw_constraint_list *hw_constraints_rates; - struct snd_pcm_hardware *snd_omap_alsa_playback; - struct snd_pcm_hardware *snd_omap_alsa_capture; - void (*codec_configure_dev)(void); - void (*codec_set_samplerate)(long); - void (*codec_clock_setup)(void); - int (*codec_clock_on)(void); - int (*codec_clock_off)(void); - int (*get_default_samplerate)(void); -}; - -/*********** Mixer function prototypes *************************/ -int snd_omap_mixer(struct snd_card_omap_codec *); -void snd_omap_init_mixer(void); - -#ifdef CONFIG_PM -void snd_omap_suspend_mixer(void); -void snd_omap_resume_mixer(void); -#endif - -int snd_omap_alsa_post_probe(struct platform_device *pdev, struct omap_alsa_codec_config *config); -int snd_omap_alsa_remove(struct platform_device *pdev); -#ifdef CONFIG_PM -int snd_omap_alsa_suspend(struct platform_device *pdev, pm_message_t state); -int snd_omap_alsa_resume(struct platform_device *pdev); -#else -#define snd_omap_alsa_suspend NULL -#define snd_omap_alsa_resume NULL -#endif - -void callback_omap_alsa_sound_dma(void *); - -#endif diff --git a/arch/arm/plat-omap/include/plat/omap-pm.h b/arch/arm/plat-omap/include/plat/omap-pm.h index 0840df813f4f..67faa7b8fe92 100644 --- a/arch/arm/plat-omap/include/plat/omap-pm.h +++ b/arch/arm/plat-omap/include/plat/omap-pm.h @@ -342,9 +342,9 @@ unsigned long omap_pm_cpu_get_freq(void); * driver must restore device context. If the number of context losses * exceeds the maximum positive integer, the function will wrap to 0 and * continue counting. Returns the number of context losses for this device, - * or zero upon error. + * or negative value upon error. */ -u32 omap_pm_get_dev_context_loss_count(struct device *dev); +int omap_pm_get_dev_context_loss_count(struct device *dev); void omap_pm_enable_off_mode(void); void omap_pm_disable_off_mode(void); diff --git a/arch/arm/plat-omap/include/plat/omap_device.h b/arch/arm/plat-omap/include/plat/omap_device.h index 12c5b0c345bf..51423d2727a5 100644 --- a/arch/arm/plat-omap/include/plat/omap_device.h +++ b/arch/arm/plat-omap/include/plat/omap_device.h @@ -107,7 +107,7 @@ struct device *omap_device_get_by_hwmod_name(const char *oh_name); int omap_device_align_pm_lat(struct platform_device *pdev, u32 new_wakeup_lat_limit); struct powerdomain *omap_device_get_pwrdm(struct omap_device *od); -u32 omap_device_get_context_loss_count(struct platform_device *pdev); +int omap_device_get_context_loss_count(struct platform_device *pdev); /* Other */ diff --git a/arch/arm/plat-omap/include/plat/omap_hwmod.h b/arch/arm/plat-omap/include/plat/omap_hwmod.h index 5419f1a2aaa4..8b372ede17c1 100644 --- a/arch/arm/plat-omap/include/plat/omap_hwmod.h +++ b/arch/arm/plat-omap/include/plat/omap_hwmod.h @@ -600,7 +600,7 @@ int omap_hwmod_for_each_by_class(const char *classname, void *user); int omap_hwmod_set_postsetup_state(struct omap_hwmod *oh, u8 state); -u32 omap_hwmod_get_context_loss_count(struct omap_hwmod *oh); +int omap_hwmod_get_context_loss_count(struct omap_hwmod *oh); int omap_hwmod_no_setup_reset(struct omap_hwmod *oh); diff --git a/arch/arm/plat-omap/omap-pm-noop.c b/arch/arm/plat-omap/omap-pm-noop.c index b0471bb2d47d..3dc3801aace4 100644 --- a/arch/arm/plat-omap/omap-pm-noop.c +++ b/arch/arm/plat-omap/omap-pm-noop.c @@ -27,7 +27,7 @@ #include <plat/omap_device.h> static bool off_mode_enabled; -static u32 dummy_context_loss_counter; +static int dummy_context_loss_counter; /* * 
Device-driver-originated constraints (via board-*.c files) @@ -311,22 +311,32 @@ void omap_pm_disable_off_mode(void) #ifdef CONFIG_ARCH_OMAP2PLUS -u32 omap_pm_get_dev_context_loss_count(struct device *dev) +int omap_pm_get_dev_context_loss_count(struct device *dev) { struct platform_device *pdev = to_platform_device(dev); - u32 count; + int count; if (WARN_ON(!dev)) - return 0; + return -ENODEV; if (dev->parent == &omap_device_parent) { count = omap_device_get_context_loss_count(pdev); } else { WARN_ONCE(off_mode_enabled, "omap_pm: using dummy context loss counter; device %s should be converted to omap_device", dev_name(dev)); - if (off_mode_enabled) - dummy_context_loss_counter++; + count = dummy_context_loss_counter; + + if (off_mode_enabled) { + count++; + /* + * Context loss count has to be a non-negative value. + * Clear the sign bit to get a value range from 0 to + * INT_MAX. + */ + count &= INT_MAX; + dummy_context_loss_counter = count; + } } pr_debug("OMAP PM: context loss count for dev %s = %d\n", @@ -337,7 +347,7 @@ u32 omap_pm_get_dev_context_loss_count(struct device *dev) #else -u32 omap_pm_get_dev_context_loss_count(struct device *dev) +int omap_pm_get_dev_context_loss_count(struct device *dev) { return dummy_context_loss_counter; } diff --git a/arch/arm/plat-omap/omap_device.c b/arch/arm/plat-omap/omap_device.c index cd90bedd9306..e8d98693d2dd 100644 --- a/arch/arm/plat-omap/omap_device.c +++ b/arch/arm/plat-omap/omap_device.c @@ -78,6 +78,7 @@ #undef DEBUG #include <linux/kernel.h> +#include <linux/export.h> #include <linux/platform_device.h> #include <linux/slab.h> #include <linux/err.h> @@ -426,7 +427,7 @@ static int _omap_device_notifier_call(struct notifier_block *nb, * return the context loss counter for that hwmod, otherwise return * zero. */ -u32 omap_device_get_context_loss_count(struct platform_device *pdev) +int omap_device_get_context_loss_count(struct platform_device *pdev) { struct omap_device *od; u32 ret = 0; diff --git a/arch/arm/plat-pxa/include/plat/pxa3xx_nand.h b/arch/arm/plat-pxa/include/plat/pxa3xx_nand.h index 442301fe48b4..c42f39f20195 100644 --- a/arch/arm/plat-pxa/include/plat/pxa3xx_nand.h +++ b/arch/arm/plat-pxa/include/plat/pxa3xx_nand.h @@ -41,6 +41,19 @@ struct pxa3xx_nand_flash { struct pxa3xx_nand_timing *timing; /* NAND Flash timing */ }; +/* + * Current pxa3xx_nand controller has two chip select which + * both be workable. + * + * Notice should be taken that: + * When you want to use this feature, you should not enable the + * keep configuration feature, for two chip select could be + * attached with different nand chip. The different page size + * and timing requirement make the keep configuration impossible. 
+ */ + +/* The max num of chip select current support */ +#define NUM_CHIP_SELECT (2) struct pxa3xx_nand_platform_data { /* the data flash bus is shared between the Static Memory @@ -52,8 +65,11 @@ struct pxa3xx_nand_platform_data { /* allow platform code to keep OBM/bootloader defined NFC config */ int keep_config; - const struct mtd_partition *parts; - unsigned int nr_parts; + /* indicate how many chip selects will be used */ + int num_cs; + + const struct mtd_partition *parts[NUM_CHIP_SELECT]; + unsigned int nr_parts[NUM_CHIP_SELECT]; const struct pxa3xx_nand_flash * flash; size_t num_flash; diff --git a/arch/arm/plat-samsung/dma-ops.c b/arch/arm/plat-samsung/dma-ops.c index 6e3d9abc9e2e..93a994a5dd8f 100644 --- a/arch/arm/plat-samsung/dma-ops.c +++ b/arch/arm/plat-samsung/dma-ops.c @@ -14,6 +14,7 @@ #include <linux/errno.h> #include <linux/amba/pl330.h> #include <linux/scatterlist.h> +#include <linux/export.h> #include <mach/dma.h> diff --git a/arch/arm/plat-samsung/s3c-dma-ops.c b/arch/arm/plat-samsung/s3c-dma-ops.c index 582333c70585..781494912827 100644 --- a/arch/arm/plat-samsung/s3c-dma-ops.c +++ b/arch/arm/plat-samsung/s3c-dma-ops.c @@ -14,6 +14,7 @@ #include <linux/errno.h> #include <linux/slab.h> #include <linux/types.h> +#include <linux/export.h> #include <mach/dma.h> diff --git a/arch/avr32/boards/atngw100/setup.c b/arch/avr32/boards/atngw100/setup.c index fafed4c38fd2..7c756fb189f7 100644 --- a/arch/avr32/boards/atngw100/setup.c +++ b/arch/avr32/boards/atngw100/setup.c @@ -90,11 +90,6 @@ static struct mtd_partition nand_partitions[] = { }, }; -static struct mtd_partition *nand_part_info(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(nand_partitions); - return nand_partitions; -} static struct atmel_nand_data atngw100mkii_nand_data __initdata = { .cle = 21, @@ -102,7 +97,8 @@ static struct atmel_nand_data atngw100mkii_nand_data __initdata = { .rdy_pin = GPIO_PIN_PB(28), .enable_pin = GPIO_PIN_PE(23), .bus_width_16 = true, - .partition_info = nand_part_info, + .parts = nand_partitions, + .num_parts = ARRAY_SIZE(nand_partitions), }; #endif @@ -113,7 +109,7 @@ struct eth_addr { u8 addr[6]; }; static struct eth_addr __initdata hw_addr[2]; -static struct eth_platform_data __initdata eth_data[2]; +static struct macb_platform_data __initdata eth_data[2]; static struct spi_board_info spi0_board_info[] __initdata = { { diff --git a/arch/avr32/boards/atstk1000/atstk1002.c b/arch/avr32/boards/atstk1000/atstk1002.c index 6ce30fb2ec94..c56ddac85d61 100644 --- a/arch/avr32/boards/atstk1000/atstk1002.c +++ b/arch/avr32/boards/atstk1000/atstk1002.c @@ -90,18 +90,13 @@ static struct mtd_partition nand_partitions[] = { }, }; -static struct mtd_partition *nand_part_info(int size, int *num_partitions) -{ - *num_partitions = ARRAY_SIZE(nand_partitions); - return nand_partitions; -} - static struct atmel_nand_data atstk1006_nand_data __initdata = { .cle = 21, .ale = 22, .rdy_pin = GPIO_PIN_PB(30), .enable_pin = GPIO_PIN_PB(29), - .partition_info = nand_part_info, + .parts = nand_partitions, + .num_parts = ARRAY_SIZE(num_partitions), }; #endif @@ -110,7 +105,7 @@ struct eth_addr { }; static struct eth_addr __initdata hw_addr[2]; -static struct eth_platform_data __initdata eth_data[2] = { +static struct macb_platform_data __initdata eth_data[2] = { { /* * The MDIO pullups on STK1000 are a bit too weak for diff --git a/arch/avr32/boards/favr-32/setup.c b/arch/avr32/boards/favr-32/setup.c index 86fab77a5a00..27bd6fbe21cb 100644 --- a/arch/avr32/boards/favr-32/setup.c +++ 
b/arch/avr32/boards/favr-32/setup.c @@ -50,7 +50,7 @@ struct eth_addr { u8 addr[6]; }; static struct eth_addr __initdata hw_addr[1]; -static struct eth_platform_data __initdata eth_data[1] = { +static struct macb_platform_data __initdata eth_data[1] = { { .phy_mask = ~(1U << 1), }, diff --git a/arch/avr32/boards/hammerhead/setup.c b/arch/avr32/boards/hammerhead/setup.c index da14fbdd4e8e..9d1efd1cd425 100644 --- a/arch/avr32/boards/hammerhead/setup.c +++ b/arch/avr32/boards/hammerhead/setup.c @@ -102,7 +102,7 @@ struct eth_addr { }; static struct eth_addr __initdata hw_addr[1]; -static struct eth_platform_data __initdata eth_data[1]; +static struct macb_platform_data __initdata eth_data[1]; /* * The next two functions should go away as the boot loader is diff --git a/arch/avr32/boards/merisc/setup.c b/arch/avr32/boards/merisc/setup.c index e61bc948f959..ed137e335796 100644 --- a/arch/avr32/boards/merisc/setup.c +++ b/arch/avr32/boards/merisc/setup.c @@ -52,7 +52,7 @@ struct eth_addr { }; static struct eth_addr __initdata hw_addr[2]; -static struct eth_platform_data __initdata eth_data[2]; +static struct macb_platform_data __initdata eth_data[2]; static int ads7846_get_pendown_state_PB26(void) { diff --git a/arch/avr32/boards/mimc200/setup.c b/arch/avr32/boards/mimc200/setup.c index c4da5cba2dbf..05358aa5ef7d 100644 --- a/arch/avr32/boards/mimc200/setup.c +++ b/arch/avr32/boards/mimc200/setup.c @@ -86,7 +86,7 @@ struct eth_addr { u8 addr[6]; }; static struct eth_addr __initdata hw_addr[2]; -static struct eth_platform_data __initdata eth_data[2]; +static struct macb_platform_data __initdata eth_data[2]; static struct spi_eeprom eeprom_25lc010 = { .name = "25lc010", diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c index 7fbf0dcb9afe..402a7bb72669 100644 --- a/arch/avr32/mach-at32ap/at32ap700x.c +++ b/arch/avr32/mach-at32ap/at32ap700x.c @@ -1067,7 +1067,7 @@ void __init at32_setup_serial_console(unsigned int usart_id) * -------------------------------------------------------------------- */ #ifdef CONFIG_CPU_AT32AP7000 -static struct eth_platform_data macb0_data; +static struct macb_platform_data macb0_data; static struct resource macb0_resource[] = { PBMEM(0xfff01800), IRQ(25), @@ -1076,7 +1076,7 @@ DEFINE_DEV_DATA(macb, 0); DEV_CLK(hclk, macb0, hsb, 8); DEV_CLK(pclk, macb0, pbb, 6); -static struct eth_platform_data macb1_data; +static struct macb_platform_data macb1_data; static struct resource macb1_resource[] = { PBMEM(0xfff01c00), IRQ(26), @@ -1086,7 +1086,7 @@ DEV_CLK(hclk, macb1, hsb, 9); DEV_CLK(pclk, macb1, pbb, 7); struct platform_device *__init -at32_add_device_eth(unsigned int id, struct eth_platform_data *data) +at32_add_device_eth(unsigned int id, struct macb_platform_data *data) { struct platform_device *pdev; u32 pin_mask; @@ -1163,7 +1163,7 @@ at32_add_device_eth(unsigned int id, struct eth_platform_data *data) return NULL; } - memcpy(pdev->dev.platform_data, data, sizeof(struct eth_platform_data)); + memcpy(pdev->dev.platform_data, data, sizeof(struct macb_platform_data)); platform_device_register(pdev); return pdev; diff --git a/arch/avr32/mach-at32ap/include/mach/board.h b/arch/avr32/mach-at32ap/include/mach/board.h index 679458d9a622..67b111ce332d 100644 --- a/arch/avr32/mach-at32ap/include/mach/board.h +++ b/arch/avr32/mach-at32ap/include/mach/board.h @@ -6,6 +6,7 @@ #include <linux/types.h> #include <linux/serial.h> +#include <linux/platform_data/macb.h> #define GPIO_PIN_NONE (-1) @@ -42,12 +43,8 @@ struct atmel_uart_data { void 
at32_map_usart(unsigned int hw_id, unsigned int line, int flags); struct platform_device *at32_add_device_usart(unsigned int id); -struct eth_platform_data { - u32 phy_mask; - u8 is_rmii; -}; struct platform_device * -at32_add_device_eth(unsigned int id, struct eth_platform_data *data); +at32_add_device_eth(unsigned int id, struct macb_platform_data *data); struct spi_board_info; struct platform_device * @@ -128,7 +125,8 @@ struct atmel_nand_data { u8 ale; /* address line number connected to ALE */ u8 cle; /* address line number connected to CLE */ u8 bus_width_16; /* buswidth is 16 bit */ - struct mtd_partition *(*partition_info)(int size, int *num_partitions); + struct mtd_partition *parts; + unsigned int num_parts; }; struct platform_device * at32_add_device_nand(unsigned int id, struct atmel_nand_data *data); diff --git a/arch/blackfin/include/asm/bfin_serial.h b/arch/blackfin/include/asm/bfin_serial.h index 7fd0ec7b5b0f..ecacdf34768b 100644 --- a/arch/blackfin/include/asm/bfin_serial.h +++ b/arch/blackfin/include/asm/bfin_serial.h @@ -32,6 +32,8 @@ struct work_struct; struct bfin_serial_port { struct uart_port port; unsigned int old_status; + int tx_irq; + int rx_irq; int status_irq; #ifndef BFIN_UART_BF54X_STYLE unsigned int lsr; diff --git a/arch/blackfin/mach-bf518/boards/ezbrd.c b/arch/blackfin/mach-bf518/boards/ezbrd.c index 1082e49f7a9f..d1c0c0cff3ef 100644 --- a/arch/blackfin/mach-bf518/boards/ezbrd.c +++ b/arch/blackfin/mach-bf518/boards/ezbrd.c @@ -373,8 +373,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -416,8 +421,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf518/boards/tcm-bf518.c b/arch/blackfin/mach-bf518/boards/tcm-bf518.c index 55c127908815..5470bf89e52e 100644 --- a/arch/blackfin/mach-bf518/boards/tcm-bf518.c +++ b/arch/blackfin/mach-bf518/boards/tcm-bf518.c @@ -309,8 +309,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -352,8 +357,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf527/boards/ad7160eval.c b/arch/blackfin/mach-bf527/boards/ad7160eval.c index 8d65d476f118..5bc6938157ad 100644 --- a/arch/blackfin/mach-bf527/boards/ad7160eval.c +++ b/arch/blackfin/mach-bf527/boards/ad7160eval.c @@ -381,8 +381,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -424,8 +429,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + 
.flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf527/boards/cm_bf527.c b/arch/blackfin/mach-bf527/boards/cm_bf527.c index 6410fc1af8ed..cd289698b4dd 100644 --- a/arch/blackfin/mach-bf527/boards/cm_bf527.c +++ b/arch/blackfin/mach-bf527/boards/cm_bf527.c @@ -8,6 +8,7 @@ */ #include <linux/device.h> +#include <linux/export.h> #include <linux/platform_device.h> #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> @@ -539,8 +540,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -582,8 +588,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -801,7 +812,6 @@ static struct platform_device bfin_sport1_uart_device = { #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE) #include <linux/input.h> #include <linux/gpio_keys.h> -#include <linux/export.h> static struct gpio_keys_button bfin_gpio_keys_table[] = { {BTN_0, GPIO_PF14, 1, "gpio-keys: BTN0"}, diff --git a/arch/blackfin/mach-bf527/boards/ezbrd.c b/arch/blackfin/mach-bf527/boards/ezbrd.c index 64f7278aba53..9f792eafd1cc 100644 --- a/arch/blackfin/mach-bf527/boards/ezbrd.c +++ b/arch/blackfin/mach-bf527/boards/ezbrd.c @@ -7,6 +7,7 @@ */ #include <linux/device.h> +#include <linux/export.h> #include <linux/platform_device.h> #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> @@ -417,8 +418,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -460,8 +466,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -674,7 +685,6 @@ static struct platform_device bfin_sport1_uart_device = { #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE) #include <linux/input.h> #include <linux/gpio_keys.h> -#include <linux/export.h> static struct gpio_keys_button bfin_gpio_keys_table[] = { {BTN_0, GPIO_PG0, 1, "gpio-keys: BTN0"}, diff --git a/arch/blackfin/mach-bf527/boards/ezkit.c b/arch/blackfin/mach-bf527/boards/ezkit.c index e4c6a122b66c..3ecafff5d2ef 100644 --- a/arch/blackfin/mach-bf527/boards/ezkit.c +++ b/arch/blackfin/mach-bf527/boards/ezkit.c @@ -711,8 +711,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -754,8 +759,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff 
--git a/arch/blackfin/mach-bf527/boards/tll6527m.c b/arch/blackfin/mach-bf527/boards/tll6527m.c index 76dbc03a8d4d..3a92c4318d2d 100644 --- a/arch/blackfin/mach-bf527/boards/tll6527m.c +++ b/arch/blackfin/mach-bf527/boards/tll6527m.c @@ -496,8 +496,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -540,8 +545,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf533/boards/H8606.c b/arch/blackfin/mach-bf533/boards/H8606.c index 5da5787fc4ef..47cadd316e76 100644 --- a/arch/blackfin/mach-bf533/boards/H8606.c +++ b/arch/blackfin/mach-bf533/boards/H8606.c @@ -238,8 +238,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX + 1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf533/boards/blackstamp.c b/arch/blackfin/mach-bf533/boards/blackstamp.c index b0ec825fb4ec..18817d57c7a1 100644 --- a/arch/blackfin/mach-bf533/boards/blackstamp.c +++ b/arch/blackfin/mach-bf533/boards/blackstamp.c @@ -193,8 +193,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX + 1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf533/boards/cm_bf533.c b/arch/blackfin/mach-bf533/boards/cm_bf533.c index 14f54a31e74c..2c8f30ef6a7b 100644 --- a/arch/blackfin/mach-bf533/boards/cm_bf533.c +++ b/arch/blackfin/mach-bf533/boards/cm_bf533.c @@ -221,8 +221,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX + 1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf533/boards/ezkit.c b/arch/blackfin/mach-bf533/boards/ezkit.c index ecd2801f050d..144556e14499 100644 --- a/arch/blackfin/mach-bf533/boards/ezkit.c +++ b/arch/blackfin/mach-bf533/boards/ezkit.c @@ -292,8 +292,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX + 1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf533/boards/ip0x.c b/arch/blackfin/mach-bf533/boards/ip0x.c index fbee77fa9211..b597d4e50d58 100644 --- a/arch/blackfin/mach-bf533/boards/ip0x.c +++ b/arch/blackfin/mach-bf533/boards/ip0x.c @@ -151,8 +151,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX + 1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf533/boards/stamp.c b/arch/blackfin/mach-bf533/boards/stamp.c index 964a8e5f79b4..2afd02e14bd1 100644 --- 
a/arch/blackfin/mach-bf533/boards/stamp.c +++ b/arch/blackfin/mach-bf533/boards/stamp.c @@ -298,8 +298,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX + 1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf537/boards/cm_bf537e.c b/arch/blackfin/mach-bf537/boards/cm_bf537e.c index 1471c51ea697..604a430038e1 100644 --- a/arch/blackfin/mach-bf537/boards/cm_bf537e.c +++ b/arch/blackfin/mach-bf537/boards/cm_bf537e.c @@ -8,6 +8,7 @@ */ #include <linux/device.h> +#include <linux/export.h> #include <linux/etherdevice.h> #include <linux/platform_device.h> #include <linux/mtd/mtd.h> @@ -305,8 +306,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -366,8 +372,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -569,7 +580,6 @@ static struct platform_device bfin_sport1_uart_device = { #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE) #include <linux/bfin_mac.h> -#include <linux/export.h> static const unsigned short bfin_mac_peripherals[] = P_MII0; static struct bfin_phydev_platform_data bfin_phydev_data[] = { diff --git a/arch/blackfin/mach-bf537/boards/cm_bf537u.c b/arch/blackfin/mach-bf537/boards/cm_bf537u.c index 47cf37de33ba..d916b46a44fe 100644 --- a/arch/blackfin/mach-bf537/boards/cm_bf537u.c +++ b/arch/blackfin/mach-bf537/boards/cm_bf537u.c @@ -9,6 +9,7 @@ #include <linux/device.h> #include <linux/etherdevice.h> +#include <linux/export.h> #include <linux/platform_device.h> #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> @@ -306,8 +307,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -349,8 +355,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -534,7 +545,6 @@ static struct platform_device bfin_sport1_uart_device = { #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE) #include <linux/bfin_mac.h> -#include <linux/export.h> static const unsigned short bfin_mac_peripherals[] = P_MII0; static struct bfin_phydev_platform_data bfin_phydev_data[] = { diff --git a/arch/blackfin/mach-bf537/boards/dnp5370.c b/arch/blackfin/mach-bf537/boards/dnp5370.c index 33e69e427e98..5f307228be63 100644 --- a/arch/blackfin/mach-bf537/boards/dnp5370.c +++ b/arch/blackfin/mach-bf537/boards/dnp5370.c @@ -12,6 +12,7 @@ */ #include <linux/device.h> +#include <linux/export.h> #include <linux/kernel.h> #include <linux/platform_device.h> #include <linux/io.h> @@ -49,7 +50,6 @@ static struct platform_device rtc_device = { #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE) #include 
<linux/bfin_mac.h> -#include <linux/export.h> static const unsigned short bfin_mac_peripherals[] = P_RMII0; static struct bfin_phydev_platform_data bfin_phydev_data[] = { @@ -237,8 +237,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -281,8 +286,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf537/boards/minotaur.c b/arch/blackfin/mach-bf537/boards/minotaur.c index c62f9dccd9f7..3901dd093b90 100644 --- a/arch/blackfin/mach-bf537/boards/minotaur.c +++ b/arch/blackfin/mach-bf537/boards/minotaur.c @@ -240,8 +240,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -283,8 +288,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf537/boards/pnav10.c b/arch/blackfin/mach-bf537/boards/pnav10.c index 3099e91114fc..aebd31c845f0 100644 --- a/arch/blackfin/mach-bf537/boards/pnav10.c +++ b/arch/blackfin/mach-bf537/boards/pnav10.c @@ -8,6 +8,7 @@ #include <linux/device.h> #include <linux/etherdevice.h> +#include <linux/export.h> #include <linux/platform_device.h> #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> @@ -309,8 +310,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -352,8 +358,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf537/boards/stamp.c b/arch/blackfin/mach-bf537/boards/stamp.c index 27f955db9976..7fbb0bbf8676 100644 --- a/arch/blackfin/mach-bf537/boards/stamp.c +++ b/arch/blackfin/mach-bf537/boards/stamp.c @@ -7,6 +7,7 @@ */ #include <linux/device.h> +#include <linux/export.h> #include <linux/kernel.h> #include <linux/platform_device.h> #include <linux/io.h> @@ -1566,8 +1567,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -1621,8 +1627,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -1992,7 +2003,6 @@ static struct 
adp8870_backlight_platform_data adp8870_pdata = { #if defined(CONFIG_BACKLIGHT_ADP8860) || defined(CONFIG_BACKLIGHT_ADP8860_MODULE) #include <linux/i2c/adp8860.h> -#include <linux/export.h> static struct led_info adp8860_leds[] = { { .name = "adp8860-led7", diff --git a/arch/blackfin/mach-bf537/boards/tcm_bf537.c b/arch/blackfin/mach-bf537/boards/tcm_bf537.c index 841803038d6f..6917ce2fa55e 100644 --- a/arch/blackfin/mach-bf537/boards/tcm_bf537.c +++ b/arch/blackfin/mach-bf537/boards/tcm_bf537.c @@ -9,6 +9,7 @@ #include <linux/device.h> #include <linux/etherdevice.h> +#include <linux/export.h> #include <linux/platform_device.h> #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> @@ -306,8 +307,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -349,8 +355,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -536,7 +547,6 @@ static struct platform_device bfin_sport1_uart_device = { #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE) #include <linux/bfin_mac.h> -#include <linux/export.h> static const unsigned short bfin_mac_peripherals[] = P_MII0; static struct bfin_phydev_platform_data bfin_phydev_data[] = { diff --git a/arch/blackfin/mach-bf538/boards/ezkit.c b/arch/blackfin/mach-bf538/boards/ezkit.c index 629f3c333415..8356eb599f19 100644 --- a/arch/blackfin/mach-bf538/boards/ezkit.c +++ b/arch/blackfin/mach-bf538/boards/ezkit.c @@ -49,8 +49,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -104,8 +109,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -147,8 +157,13 @@ static struct resource bfin_uart2_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART2_TX, + .end = IRQ_UART2_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART2_RX, - .end = IRQ_UART2_RX+1, + .end = IRQ_UART2_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf548/boards/cm_bf548.c b/arch/blackfin/mach-bf548/boards/cm_bf548.c index 212b9e0a08c8..0350eacec21b 100644 --- a/arch/blackfin/mach-bf548/boards/cm_bf548.c +++ b/arch/blackfin/mach-bf548/boards/cm_bf548.c @@ -135,8 +135,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -178,8 +183,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -237,8 +247,13 @@ static struct resource bfin_uart2_resources[] = { 
.flags = IORESOURCE_MEM, }, { + .start = IRQ_UART2_TX, + .end = IRQ_UART2_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART2_RX, - .end = IRQ_UART2_RX+1, + .end = IRQ_UART2_RX, .flags = IORESOURCE_IRQ, }, { @@ -280,8 +295,13 @@ static struct resource bfin_uart3_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART3_TX, + .end = IRQ_UART3_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART3_RX, - .end = IRQ_UART3_RX+1, + .end = IRQ_UART3_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf548/boards/ezkit.c b/arch/blackfin/mach-bf548/boards/ezkit.c index cd9cbb68de69..bb868ac0fe2d 100644 --- a/arch/blackfin/mach-bf548/boards/ezkit.c +++ b/arch/blackfin/mach-bf548/boards/ezkit.c @@ -241,8 +241,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART0_TX, + .end = IRQ_UART0_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART0_RX, - .end = IRQ_UART0_RX+1, + .end = IRQ_UART0_RX, .flags = IORESOURCE_IRQ, }, { @@ -284,8 +289,13 @@ static struct resource bfin_uart1_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART1_TX, + .end = IRQ_UART1_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART1_RX, - .end = IRQ_UART1_RX+1, + .end = IRQ_UART1_RX, .flags = IORESOURCE_IRQ, }, { @@ -343,8 +353,13 @@ static struct resource bfin_uart2_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART2_TX, + .end = IRQ_UART2_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART2_RX, - .end = IRQ_UART2_RX+1, + .end = IRQ_UART2_RX, .flags = IORESOURCE_IRQ, }, { @@ -386,8 +401,13 @@ static struct resource bfin_uart3_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART3_TX, + .end = IRQ_UART3_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART3_RX, - .end = IRQ_UART3_RX+1, + .end = IRQ_UART3_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf561/boards/acvilon.c b/arch/blackfin/mach-bf561/boards/acvilon.c index 972e1347c6bc..b1b7339b6ba7 100644 --- a/arch/blackfin/mach-bf561/boards/acvilon.c +++ b/arch/blackfin/mach-bf561/boards/acvilon.c @@ -203,8 +203,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART_TX, + .end = IRQ_UART_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART_RX, - .end = IRQ_UART_RX + 1, + .end = IRQ_UART_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf561/boards/cm_bf561.c b/arch/blackfin/mach-bf561/boards/cm_bf561.c index c1b72f2d6354..c017cf07ed4e 100644 --- a/arch/blackfin/mach-bf561/boards/cm_bf561.c +++ b/arch/blackfin/mach-bf561/boards/cm_bf561.c @@ -277,8 +277,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART_TX, + .end = IRQ_UART_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART_RX, - .end = IRQ_UART_RX+1, + .end = IRQ_UART_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf561/boards/ezkit.c b/arch/blackfin/mach-bf561/boards/ezkit.c index 9490dc800ca5..27f22ed381d9 100644 --- a/arch/blackfin/mach-bf561/boards/ezkit.c +++ b/arch/blackfin/mach-bf561/boards/ezkit.c @@ -172,8 +172,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART_TX, + .end = IRQ_UART_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART_RX, - .end = IRQ_UART_RX+1, + .end = IRQ_UART_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/blackfin/mach-bf561/boards/tepla.c b/arch/blackfin/mach-bf561/boards/tepla.c index bb056e60f6ed..1a57bc986aad 100644 
--- a/arch/blackfin/mach-bf561/boards/tepla.c +++ b/arch/blackfin/mach-bf561/boards/tepla.c @@ -51,8 +51,13 @@ static struct resource bfin_uart0_resources[] = { .flags = IORESOURCE_MEM, }, { + .start = IRQ_UART_TX, + .end = IRQ_UART_TX, + .flags = IORESOURCE_IRQ, + }, + { .start = IRQ_UART_RX, - .end = IRQ_UART_RX+1, + .end = IRQ_UART_RX, .flags = IORESOURCE_IRQ, }, { diff --git a/arch/cris/arch-v32/drivers/mach-a3/nandflash.c b/arch/cris/arch-v32/drivers/mach-a3/nandflash.c index f58f2c1c5295..7fb52128ddc9 100644 --- a/arch/cris/arch-v32/drivers/mach-a3/nandflash.c +++ b/arch/cris/arch-v32/drivers/mach-a3/nandflash.c @@ -163,7 +163,7 @@ struct mtd_info *__init crisv32_nand_flash_probe(void) this->ecc.mode = NAND_ECC_SOFT; /* Enable the following for a flash based bad block table */ - /* this->options = NAND_USE_FLASH_BBT; */ + /* this->bbt_options = NAND_BBT_USE_FLASH; */ /* Scan to find existence of the device */ if (nand_scan(crisv32_mtd, 1)) { diff --git a/arch/cris/arch-v32/drivers/mach-fs/nandflash.c b/arch/cris/arch-v32/drivers/mach-fs/nandflash.c index d5b0cc9f976b..e03238454b0e 100644 --- a/arch/cris/arch-v32/drivers/mach-fs/nandflash.c +++ b/arch/cris/arch-v32/drivers/mach-fs/nandflash.c @@ -154,7 +154,7 @@ struct mtd_info *__init crisv32_nand_flash_probe(void) this->ecc.mode = NAND_ECC_SOFT; /* Enable the following for a flash based bad block table */ - /* this->options = NAND_USE_FLASH_BBT; */ + /* this->bbt_options = NAND_BBT_USE_FLASH; */ /* Scan to find existence of the device */ if (nand_scan(crisv32_mtd, 1)) { diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig index 6c28582fb98f..361d54019bb0 100644 --- a/arch/m68k/Kconfig +++ b/arch/m68k/Kconfig @@ -4,8 +4,8 @@ config M68K select HAVE_IDE select HAVE_AOUT if MMU select GENERIC_ATOMIC64 if MMU - select HAVE_GENERIC_HARDIRQS if !MMU - select GENERIC_IRQ_SHOW if !MMU + select HAVE_GENERIC_HARDIRQS + select GENERIC_IRQ_SHOW select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS config RWSEM_GENERIC_SPINLOCK diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus index 8294f0c1785e..3adb499584fb 100644 --- a/arch/m68k/Kconfig.bus +++ b/arch/m68k/Kconfig.bus @@ -2,6 +2,15 @@ if MMU comment "Bus Support" +config DIO + bool "DIO bus support" + depends on HP300 + default y + help + Say Y here to enable support for the "DIO" expansion bus used in + HP300 machines. If you are using such a system you almost certainly + want this. + config NUBUS bool depends on MAC diff --git a/arch/m68k/Kconfig.devices b/arch/m68k/Kconfig.devices index d214034be6a6..6033f5d4e67e 100644 --- a/arch/m68k/Kconfig.devices +++ b/arch/m68k/Kconfig.devices @@ -24,6 +24,37 @@ config PROC_HARDWARE including the model, CPU, MMU, clock speed, BogoMIPS rating, and memory size. +config NATFEAT + bool "ARAnyM emulator support" + depends on ATARI + help + This option enables support for ARAnyM native features, such as + access to a disk image as /dev/hda. + +config NFBLOCK + tristate "NatFeat block device support" + depends on BLOCK && NATFEAT + help + Say Y to include support for the ARAnyM NatFeat block device + which allows direct access to the hard drives without using + the hardware emulation. + +config NFCON + tristate "NatFeat console driver" + depends on NATFEAT + help + Say Y to include support for the ARAnyM NatFeat console driver + which allows the console output to be redirected to the stderr + output of ARAnyM. 
+ +config NFETH + tristate "NatFeat Ethernet support" + depends on ETHERNET && NATFEAT + help + Say Y to include support for the ARAnyM NatFeat network device + which will emulate a regular ethernet device while presenting an + ethertap device to the host system. + endmenu menu "Character devices" diff --git a/arch/m68k/amiga/amiints.c b/arch/m68k/amiga/amiints.c index c5b5212cc3f9..47b5f90002ab 100644 --- a/arch/m68k/amiga/amiints.c +++ b/arch/m68k/amiga/amiints.c @@ -1,43 +1,15 @@ /* - * linux/arch/m68k/amiga/amiints.c -- Amiga Linux interrupt handling code + * Amiga Linux interrupt handling code * * This file is subject to the terms and conditions of the GNU General Public * License. See the file COPYING in the main directory of this archive * for more details. - * - * 11/07/96: rewritten interrupt handling, irq lists are exists now only for - * this sources where it makes sense (VERTB/PORTS/EXTER) and you must - * be careful that dev_id for this sources is unique since this the - * only possibility to distinguish between different handlers for - * free_irq. irq lists also have different irq flags: - * - IRQ_FLG_FAST: handler is inserted at top of list (after other - * fast handlers) - * - IRQ_FLG_SLOW: handler is inserted at bottom of list and before - * they're executed irq level is set to the previous - * one, but handlers don't need to be reentrant, if - * reentrance occurred, slow handlers will be just - * called again. - * The whole interrupt handling for CIAs is moved to cia.c - * /Roman Zippel - * - * 07/08/99: rewamp of the interrupt handling - we now have two types of - * interrupts, normal and fast handlers, fast handlers being - * marked with IRQF_DISABLED and runs with all other interrupts - * disabled. Normal interrupts disable their own source but - * run with all other interrupt sources enabled. - * PORTS and EXTER interrupts are always shared even if the - * drivers do not explicitly mark this when calling - * request_irq which they really should do. - * This is similar to the way interrupts are handled on all - * other architectures and makes a ton of sense besides - * having the advantage of making it easier to share - * drivers. - * /Jes */ #include <linux/init.h> #include <linux/interrupt.h> #include <linux/errno.h> +#include <linux/irq.h> #include <asm/irq.h> #include <asm/traps.h> @@ -45,56 +17,6 @@ #include <asm/amigaints.h> #include <asm/amipcmcia.h> -static void amiga_enable_irq(unsigned int irq); -static void amiga_disable_irq(unsigned int irq); -static irqreturn_t ami_int1(int irq, void *dev_id); -static irqreturn_t ami_int3(int irq, void *dev_id); -static irqreturn_t ami_int4(int irq, void *dev_id); -static irqreturn_t ami_int5(int irq, void *dev_id); - -static struct irq_controller amiga_irq_controller = { - .name = "amiga", - .lock = __SPIN_LOCK_UNLOCKED(amiga_irq_controller.lock), - .enable = amiga_enable_irq, - .disable = amiga_disable_irq, -}; - -/* - * void amiga_init_IRQ(void) - * - * Parameters: None - * - * Returns: Nothing - * - * This function should be called during kernel startup to initialize - * the amiga IRQ handling routines. 
- */ - -void __init amiga_init_IRQ(void) -{ - if (request_irq(IRQ_AUTO_1, ami_int1, 0, "int1", NULL)) - pr_err("Couldn't register int%d\n", 1); - if (request_irq(IRQ_AUTO_3, ami_int3, 0, "int3", NULL)) - pr_err("Couldn't register int%d\n", 3); - if (request_irq(IRQ_AUTO_4, ami_int4, 0, "int4", NULL)) - pr_err("Couldn't register int%d\n", 4); - if (request_irq(IRQ_AUTO_5, ami_int5, 0, "int5", NULL)) - pr_err("Couldn't register int%d\n", 5); - - m68k_setup_irq_controller(&amiga_irq_controller, IRQ_USER, AMI_STD_IRQS); - - /* turn off PCMCIA interrupts */ - if (AMIGAHW_PRESENT(PCMCIA)) - gayle.inten = GAYLE_IRQ_IDE; - - /* turn off all interrupts and enable the master interrupt bit */ - amiga_custom.intena = 0x7fff; - amiga_custom.intreq = 0x7fff; - amiga_custom.intena = IF_SETCLR | IF_INTEN; - - cia_init_IRQ(&ciaa_base); - cia_init_IRQ(&ciab_base); -} /* * Enable/disable a particular machine specific interrupt source. @@ -103,112 +25,150 @@ void __init amiga_init_IRQ(void) * internal data, that may not be changed by the interrupt at the same time. */ -static void amiga_enable_irq(unsigned int irq) +static void amiga_irq_enable(struct irq_data *data) { - amiga_custom.intena = IF_SETCLR | (1 << (irq - IRQ_USER)); + amiga_custom.intena = IF_SETCLR | (1 << (data->irq - IRQ_USER)); } -static void amiga_disable_irq(unsigned int irq) +static void amiga_irq_disable(struct irq_data *data) { - amiga_custom.intena = 1 << (irq - IRQ_USER); + amiga_custom.intena = 1 << (data->irq - IRQ_USER); } +static struct irq_chip amiga_irq_chip = { + .name = "amiga", + .irq_enable = amiga_irq_enable, + .irq_disable = amiga_irq_disable, +}; + + /* * The builtin Amiga hardware interrupt handlers. */ -static irqreturn_t ami_int1(int irq, void *dev_id) +static void ami_int1(unsigned int irq, struct irq_desc *desc) { unsigned short ints = amiga_custom.intreqr & amiga_custom.intenar; /* if serial transmit buffer empty, interrupt */ if (ints & IF_TBE) { amiga_custom.intreq = IF_TBE; - m68k_handle_int(IRQ_AMIGA_TBE); + generic_handle_irq(IRQ_AMIGA_TBE); } /* if floppy disk transfer complete, interrupt */ if (ints & IF_DSKBLK) { amiga_custom.intreq = IF_DSKBLK; - m68k_handle_int(IRQ_AMIGA_DSKBLK); + generic_handle_irq(IRQ_AMIGA_DSKBLK); } /* if software interrupt set, interrupt */ if (ints & IF_SOFT) { amiga_custom.intreq = IF_SOFT; - m68k_handle_int(IRQ_AMIGA_SOFT); + generic_handle_irq(IRQ_AMIGA_SOFT); } - return IRQ_HANDLED; } -static irqreturn_t ami_int3(int irq, void *dev_id) +static void ami_int3(unsigned int irq, struct irq_desc *desc) { unsigned short ints = amiga_custom.intreqr & amiga_custom.intenar; /* if a blitter interrupt */ if (ints & IF_BLIT) { amiga_custom.intreq = IF_BLIT; - m68k_handle_int(IRQ_AMIGA_BLIT); + generic_handle_irq(IRQ_AMIGA_BLIT); } /* if a copper interrupt */ if (ints & IF_COPER) { amiga_custom.intreq = IF_COPER; - m68k_handle_int(IRQ_AMIGA_COPPER); + generic_handle_irq(IRQ_AMIGA_COPPER); } /* if a vertical blank interrupt */ if (ints & IF_VERTB) { amiga_custom.intreq = IF_VERTB; - m68k_handle_int(IRQ_AMIGA_VERTB); + generic_handle_irq(IRQ_AMIGA_VERTB); } - return IRQ_HANDLED; } -static irqreturn_t ami_int4(int irq, void *dev_id) +static void ami_int4(unsigned int irq, struct irq_desc *desc) { unsigned short ints = amiga_custom.intreqr & amiga_custom.intenar; /* if audio 0 interrupt */ if (ints & IF_AUD0) { amiga_custom.intreq = IF_AUD0; - m68k_handle_int(IRQ_AMIGA_AUD0); + generic_handle_irq(IRQ_AMIGA_AUD0); } /* if audio 1 interrupt */ if (ints & IF_AUD1) { amiga_custom.intreq = IF_AUD1; - 
m68k_handle_int(IRQ_AMIGA_AUD1); + generic_handle_irq(IRQ_AMIGA_AUD1); } /* if audio 2 interrupt */ if (ints & IF_AUD2) { amiga_custom.intreq = IF_AUD2; - m68k_handle_int(IRQ_AMIGA_AUD2); + generic_handle_irq(IRQ_AMIGA_AUD2); } /* if audio 3 interrupt */ if (ints & IF_AUD3) { amiga_custom.intreq = IF_AUD3; - m68k_handle_int(IRQ_AMIGA_AUD3); + generic_handle_irq(IRQ_AMIGA_AUD3); } - return IRQ_HANDLED; } -static irqreturn_t ami_int5(int irq, void *dev_id) +static void ami_int5(unsigned int irq, struct irq_desc *desc) { unsigned short ints = amiga_custom.intreqr & amiga_custom.intenar; /* if serial receive buffer full interrupt */ if (ints & IF_RBF) { /* acknowledge of IF_RBF must be done by the serial interrupt */ - m68k_handle_int(IRQ_AMIGA_RBF); + generic_handle_irq(IRQ_AMIGA_RBF); } /* if a disk sync interrupt */ if (ints & IF_DSKSYN) { amiga_custom.intreq = IF_DSKSYN; - m68k_handle_int(IRQ_AMIGA_DSKSYN); + generic_handle_irq(IRQ_AMIGA_DSKSYN); } - return IRQ_HANDLED; +} + + +/* + * void amiga_init_IRQ(void) + * + * Parameters: None + * + * Returns: Nothing + * + * This function should be called during kernel startup to initialize + * the amiga IRQ handling routines. + */ + +void __init amiga_init_IRQ(void) +{ + m68k_setup_irq_controller(&amiga_irq_chip, handle_simple_irq, IRQ_USER, + AMI_STD_IRQS); + + irq_set_chained_handler(IRQ_AUTO_1, ami_int1); + irq_set_chained_handler(IRQ_AUTO_3, ami_int3); + irq_set_chained_handler(IRQ_AUTO_4, ami_int4); + irq_set_chained_handler(IRQ_AUTO_5, ami_int5); + + /* turn off PCMCIA interrupts */ + if (AMIGAHW_PRESENT(PCMCIA)) + gayle.inten = GAYLE_IRQ_IDE; + + /* turn off all interrupts and enable the master interrupt bit */ + amiga_custom.intena = 0x7fff; + amiga_custom.intreq = 0x7fff; + amiga_custom.intena = IF_SETCLR | IF_INTEN; + + cia_init_IRQ(&ciaa_base); + cia_init_IRQ(&ciab_base); } diff --git a/arch/m68k/amiga/cia.c b/arch/m68k/amiga/cia.c index ecd0f7ca6f0e..18c0e29976e3 100644 --- a/arch/m68k/amiga/cia.c +++ b/arch/m68k/amiga/cia.c @@ -93,13 +93,14 @@ static irqreturn_t cia_handler(int irq, void *dev_id) amiga_custom.intreq = base->int_mask; for (; ints; mach_irq++, ints >>= 1) { if (ints & 1) - m68k_handle_int(mach_irq); + generic_handle_irq(mach_irq); } return IRQ_HANDLED; } -static void cia_enable_irq(unsigned int irq) +static void cia_irq_enable(struct irq_data *data) { + unsigned int irq = data->irq; unsigned char mask; if (irq >= IRQ_AMIGA_CIAB) { @@ -113,19 +114,20 @@ static void cia_enable_irq(unsigned int irq) } } -static void cia_disable_irq(unsigned int irq) +static void cia_irq_disable(struct irq_data *data) { + unsigned int irq = data->irq; + if (irq >= IRQ_AMIGA_CIAB) cia_able_irq(&ciab_base, 1 << (irq - IRQ_AMIGA_CIAB)); else cia_able_irq(&ciaa_base, 1 << (irq - IRQ_AMIGA_CIAA)); } -static struct irq_controller cia_irq_controller = { +static struct irq_chip cia_irq_chip = { .name = "cia", - .lock = __SPIN_LOCK_UNLOCKED(cia_irq_controller.lock), - .enable = cia_enable_irq, - .disable = cia_disable_irq, + .irq_enable = cia_irq_enable, + .irq_disable = cia_irq_disable, }; /* @@ -134,9 +136,9 @@ static struct irq_controller cia_irq_controller = { * into this chain. 
*/ -static void auto_enable_irq(unsigned int irq) +static void auto_irq_enable(struct irq_data *data) { - switch (irq) { + switch (data->irq) { case IRQ_AUTO_2: amiga_custom.intena = IF_SETCLR | IF_PORTS; break; @@ -146,9 +148,9 @@ static void auto_enable_irq(unsigned int irq) } } -static void auto_disable_irq(unsigned int irq) +static void auto_irq_disable(struct irq_data *data) { - switch (irq) { + switch (data->irq) { case IRQ_AUTO_2: amiga_custom.intena = IF_PORTS; break; @@ -158,24 +160,25 @@ static void auto_disable_irq(unsigned int irq) } } -static struct irq_controller auto_irq_controller = { +static struct irq_chip auto_irq_chip = { .name = "auto", - .lock = __SPIN_LOCK_UNLOCKED(auto_irq_controller.lock), - .enable = auto_enable_irq, - .disable = auto_disable_irq, + .irq_enable = auto_irq_enable, + .irq_disable = auto_irq_disable, }; void __init cia_init_IRQ(struct ciabase *base) { - m68k_setup_irq_controller(&cia_irq_controller, base->cia_irq, CIA_IRQS); + m68k_setup_irq_controller(&cia_irq_chip, handle_simple_irq, + base->cia_irq, CIA_IRQS); /* clear any pending interrupt and turn off all interrupts */ cia_set_irq(base, CIA_ICR_ALL); cia_able_irq(base, CIA_ICR_ALL); /* override auto int and install CIA handler */ - m68k_setup_irq_controller(&auto_irq_controller, base->handler_irq, 1); - m68k_irq_startup(base->handler_irq); + m68k_setup_irq_controller(&auto_irq_chip, handle_simple_irq, + base->handler_irq, 1); + m68k_irq_startup_irq(base->handler_irq); if (request_irq(base->handler_irq, cia_handler, IRQF_SHARED, base->name, base)) pr_err("Couldn't register %s interrupt\n", base->name); diff --git a/arch/m68k/apollo/dn_ints.c b/arch/m68k/apollo/dn_ints.c index 5d47f3aa3810..17be1e7e2df2 100644 --- a/arch/m68k/apollo/dn_ints.c +++ b/arch/m68k/apollo/dn_ints.c @@ -1,19 +1,13 @@ #include <linux/interrupt.h> +#include <linux/irq.h> -#include <asm/irq.h> #include <asm/traps.h> #include <asm/apollohw.h> -void dn_process_int(unsigned int irq, struct pt_regs *fp) +unsigned int apollo_irq_startup(struct irq_data *data) { - __m68k_handle_int(irq, fp); + unsigned int irq = data->irq; - *(volatile unsigned char *)(pica)=0x20; - *(volatile unsigned char *)(picb)=0x20; -} - -int apollo_irq_startup(unsigned int irq) -{ if (irq < 8) *(volatile unsigned char *)(pica+1) &= ~(1 << irq); else @@ -21,24 +15,33 @@ int apollo_irq_startup(unsigned int irq) return 0; } -void apollo_irq_shutdown(unsigned int irq) +void apollo_irq_shutdown(struct irq_data *data) { + unsigned int irq = data->irq; + if (irq < 8) *(volatile unsigned char *)(pica+1) |= (1 << irq); else *(volatile unsigned char *)(picb+1) |= (1 << (irq - 8)); } -static struct irq_controller apollo_irq_controller = { +void apollo_irq_eoi(struct irq_data *data) +{ + *(volatile unsigned char *)(pica) = 0x20; + *(volatile unsigned char *)(picb) = 0x20; +} + +static struct irq_chip apollo_irq_chip = { .name = "apollo", - .lock = __SPIN_LOCK_UNLOCKED(apollo_irq_controller.lock), - .startup = apollo_irq_startup, - .shutdown = apollo_irq_shutdown, + .irq_startup = apollo_irq_startup, + .irq_shutdown = apollo_irq_shutdown, + .irq_eoi = apollo_irq_eoi, }; void __init dn_init_IRQ(void) { - m68k_setup_user_interrupt(VEC_USER + 96, 16, dn_process_int); - m68k_setup_irq_controller(&apollo_irq_controller, IRQ_APOLLO, 16); + m68k_setup_user_interrupt(VEC_USER + 96, 16); + m68k_setup_irq_controller(&apollo_irq_chip, handle_fasteoi_irq, + IRQ_APOLLO, 16); } diff --git a/arch/m68k/atari/ataints.c b/arch/m68k/atari/ataints.c index 26a804e67bce..6d196dadfdbc 100644 
--- a/arch/m68k/atari/ataints.c +++ b/arch/m68k/atari/ataints.c @@ -60,243 +60,7 @@ * <asm/atariints.h>): Autovector interrupts are 1..7, then follow ST-MFP, * TT-MFP, SCC, and finally VME interrupts. Vector numbers for the latter can * be allocated by atari_register_vme_int(). - * - * Each interrupt can be of three types: - * - * - SLOW: The handler runs with all interrupts enabled, except the one it - * was called by (to avoid reentering). This should be the usual method. - * But it is currently possible only for MFP ints, since only the MFP - * offers an easy way to mask interrupts. - * - * - FAST: The handler runs with all interrupts disabled. This should be used - * only for really fast handlers, that just do actions immediately - * necessary, and let the rest do a bottom half or task queue. - * - * - PRIORITIZED: The handler can be interrupted by higher-level ints - * (greater IPL, no MFP priorities!). This is the method of choice for ints - * which should be slow, but are not from a MFP. - * - * The feature of more than one handler for one int source is still there, but - * only applicable if all handers are of the same type. To not slow down - * processing of ints with only one handler by the chaining feature, the list - * calling function atari_call_irq_list() is only plugged in at the time the - * second handler is registered. - * - * Implementation notes: For fast-as-possible int handling, there are separate - * entry points for each type (slow/fast/prio). The assembler handler calls - * the irq directly in the usual case, no C wrapper is involved. In case of - * multiple handlers, atari_call_irq_list() is registered as handler and calls - * in turn the real irq's. To ease access from assembler level to the irq - * function pointer and accompanying data, these two are stored in a separate - * array, irq_handler[]. The rest of data (type, name) are put into a second - * array, irq_param, that is accessed from C only. For each slow interrupt (32 - * in all) there are separate handler functions, which makes it possible to - * hard-code the MFP register address and value, are necessary to mask the - * int. If there'd be only one generic function, lots of calculations would be - * needed to determine MFP register and int mask from the vector number :-( - * - * Furthermore, slow ints may not lower the IPL below its previous value - * (before the int happened). This is needed so that an int of class PRIO, on - * that this int may be stacked, cannot be reentered. This feature is - * implemented as follows: If the stack frame format is 1 (throwaway), the int - * is not stacked, and the IPL is anded with 0xfbff, resulting in a new level - * 2, which still blocks the HSYNC, but no interrupts of interest. If the - * frame format is 0, the int is nested, and the old IPL value can be found in - * the sr copy in the frame. - */ - -#if 0 - -#define NUM_INT_SOURCES (8 + NUM_ATARI_SOURCES) - -typedef void (*asm_irq_handler)(void); - -struct irqhandler { - irqreturn_t (*handler)(int, void *, struct pt_regs *); - void *dev_id; -}; - -struct irqparam { - unsigned long flags; - const char *devname; -}; - -/* - * Array with irq's and their parameter data. This array is accessed from low - * level assembler code, so an element size of 8 allows usage of index scaling - * addressing mode. */ -static struct irqhandler irq_handler[NUM_INT_SOURCES]; - -/* - * This array hold the rest of parameters of int handlers: type - * (slow,fast,prio) and the name of the handler. 
These values are only - * accessed from C - */ -static struct irqparam irq_param[NUM_INT_SOURCES]; - -/* check for valid int number (complex, sigh...) */ -#define IS_VALID_INTNO(n) \ - ((n) > 0 && \ - /* autovec and ST-MFP ok anyway */ \ - (((n) < TTMFP_SOURCE_BASE) || \ - /* TT-MFP ok if present */ \ - ((n) >= TTMFP_SOURCE_BASE && (n) < SCC_SOURCE_BASE && \ - ATARIHW_PRESENT(TT_MFP)) || \ - /* SCC ok if present and number even */ \ - ((n) >= SCC_SOURCE_BASE && (n) < VME_SOURCE_BASE && \ - !((n) & 1) && ATARIHW_PRESENT(SCC)) || \ - /* greater numbers ok if they are registered VME vectors */ \ - ((n) >= VME_SOURCE_BASE && (n) < VME_SOURCE_BASE + VME_MAX_SOURCES && \ - free_vme_vec_bitmap & (1 << ((n) - VME_SOURCE_BASE))))) - - -/* - * Here start the assembler entry points for interrupts - */ - -#define IRQ_NAME(nr) atari_slow_irq_##nr##_handler(void) - -#define BUILD_SLOW_IRQ(n) \ -asmlinkage void IRQ_NAME(n); \ -/* Dummy function to allow asm with operands. */ \ -void atari_slow_irq_##n##_dummy (void) { \ -__asm__ (__ALIGN_STR "\n" \ -"atari_slow_irq_" #n "_handler:\t" \ -" addl %6,%5\n" /* preempt_count() += HARDIRQ_OFFSET */ \ - SAVE_ALL_INT "\n" \ - GET_CURRENT(%%d0) "\n" \ -" andb #~(1<<(%c3&7)),%a4:w\n" /* mask this interrupt */ \ - /* get old IPL from stack frame */ \ -" bfextu %%sp@(%c2){#5,#3},%%d0\n" \ -" movew %%sr,%%d1\n" \ -" bfins %%d0,%%d1{#21,#3}\n" \ -" movew %%d1,%%sr\n" /* set IPL = previous value */ \ -" addql #1,%a0\n" \ -" lea %a1,%%a0\n" \ -" pea %%sp@\n" /* push addr of frame */ \ -" movel %%a0@(4),%%sp@-\n" /* push handler data */ \ -" pea (%c3+8)\n" /* push int number */ \ -" movel %%a0@,%%a0\n" \ -" jbsr %%a0@\n" /* call the handler */ \ -" addql #8,%%sp\n" \ -" addql #4,%%sp\n" \ -" orw #0x0600,%%sr\n" \ -" andw #0xfeff,%%sr\n" /* set IPL = 6 again */ \ -" orb #(1<<(%c3&7)),%a4:w\n" /* now unmask the int again */ \ -" jbra ret_from_interrupt\n" \ - : : "i" (&kstat_cpu(0).irqs[n+8]), "i" (&irq_handler[n+8]), \ - "n" (PT_OFF_SR), "n" (n), \ - "i" (n & 8 ? (n & 16 ? &tt_mfp.int_mk_a : &st_mfp.int_mk_a) \ - : (n & 16 ? 
&tt_mfp.int_mk_b : &st_mfp.int_mk_b)), \ - "m" (preempt_count()), "di" (HARDIRQ_OFFSET) \ -); \ - for (;;); /* fake noreturn */ \ -} - -BUILD_SLOW_IRQ(0); -BUILD_SLOW_IRQ(1); -BUILD_SLOW_IRQ(2); -BUILD_SLOW_IRQ(3); -BUILD_SLOW_IRQ(4); -BUILD_SLOW_IRQ(5); -BUILD_SLOW_IRQ(6); -BUILD_SLOW_IRQ(7); -BUILD_SLOW_IRQ(8); -BUILD_SLOW_IRQ(9); -BUILD_SLOW_IRQ(10); -BUILD_SLOW_IRQ(11); -BUILD_SLOW_IRQ(12); -BUILD_SLOW_IRQ(13); -BUILD_SLOW_IRQ(14); -BUILD_SLOW_IRQ(15); -BUILD_SLOW_IRQ(16); -BUILD_SLOW_IRQ(17); -BUILD_SLOW_IRQ(18); -BUILD_SLOW_IRQ(19); -BUILD_SLOW_IRQ(20); -BUILD_SLOW_IRQ(21); -BUILD_SLOW_IRQ(22); -BUILD_SLOW_IRQ(23); -BUILD_SLOW_IRQ(24); -BUILD_SLOW_IRQ(25); -BUILD_SLOW_IRQ(26); -BUILD_SLOW_IRQ(27); -BUILD_SLOW_IRQ(28); -BUILD_SLOW_IRQ(29); -BUILD_SLOW_IRQ(30); -BUILD_SLOW_IRQ(31); - -asm_irq_handler slow_handlers[32] = { - [0] = atari_slow_irq_0_handler, - [1] = atari_slow_irq_1_handler, - [2] = atari_slow_irq_2_handler, - [3] = atari_slow_irq_3_handler, - [4] = atari_slow_irq_4_handler, - [5] = atari_slow_irq_5_handler, - [6] = atari_slow_irq_6_handler, - [7] = atari_slow_irq_7_handler, - [8] = atari_slow_irq_8_handler, - [9] = atari_slow_irq_9_handler, - [10] = atari_slow_irq_10_handler, - [11] = atari_slow_irq_11_handler, - [12] = atari_slow_irq_12_handler, - [13] = atari_slow_irq_13_handler, - [14] = atari_slow_irq_14_handler, - [15] = atari_slow_irq_15_handler, - [16] = atari_slow_irq_16_handler, - [17] = atari_slow_irq_17_handler, - [18] = atari_slow_irq_18_handler, - [19] = atari_slow_irq_19_handler, - [20] = atari_slow_irq_20_handler, - [21] = atari_slow_irq_21_handler, - [22] = atari_slow_irq_22_handler, - [23] = atari_slow_irq_23_handler, - [24] = atari_slow_irq_24_handler, - [25] = atari_slow_irq_25_handler, - [26] = atari_slow_irq_26_handler, - [27] = atari_slow_irq_27_handler, - [28] = atari_slow_irq_28_handler, - [29] = atari_slow_irq_29_handler, - [30] = atari_slow_irq_30_handler, - [31] = atari_slow_irq_31_handler -}; - -asmlinkage void atari_fast_irq_handler( void ); -asmlinkage void atari_prio_irq_handler( void ); - -/* Dummy function to allow asm with operands. 
*/ -void atari_fast_prio_irq_dummy (void) { -__asm__ (__ALIGN_STR "\n" -"atari_fast_irq_handler:\n\t" - "orw #0x700,%%sr\n" /* disable all interrupts */ -"atari_prio_irq_handler:\n\t" - "addl %3,%2\n\t" /* preempt_count() += HARDIRQ_OFFSET */ - SAVE_ALL_INT "\n\t" - GET_CURRENT(%%d0) "\n\t" - /* get vector number from stack frame and convert to source */ - "bfextu %%sp@(%c1){#4,#10},%%d0\n\t" - "subw #(0x40-8),%%d0\n\t" - "jpl 1f\n\t" - "addw #(0x40-8-0x18),%%d0\n" - "1:\tlea %a0,%%a0\n\t" - "addql #1,%%a0@(%%d0:l:4)\n\t" - "lea irq_handler,%%a0\n\t" - "lea %%a0@(%%d0:l:8),%%a0\n\t" - "pea %%sp@\n\t" /* push frame address */ - "movel %%a0@(4),%%sp@-\n\t" /* push handler data */ - "movel %%d0,%%sp@-\n\t" /* push int number */ - "movel %%a0@,%%a0\n\t" - "jsr %%a0@\n\t" /* and call the handler */ - "addql #8,%%sp\n\t" - "addql #4,%%sp\n\t" - "jbra ret_from_interrupt" - : : "i" (&kstat_cpu(0).irqs), "n" (PT_OFF_FORMATVEC), - "m" (preempt_count()), "di" (HARDIRQ_OFFSET) -); - for (;;); -} -#endif /* * Bitmap for free interrupt vector numbers @@ -320,31 +84,44 @@ extern void atari_microwire_cmd(int cmd); extern int atari_SCC_reset_done; -static int atari_startup_irq(unsigned int irq) +static unsigned int atari_irq_startup(struct irq_data *data) { - m68k_irq_startup(irq); + unsigned int irq = data->irq; + + m68k_irq_startup(data); atari_turnon_irq(irq); atari_enable_irq(irq); return 0; } -static void atari_shutdown_irq(unsigned int irq) +static void atari_irq_shutdown(struct irq_data *data) { + unsigned int irq = data->irq; + atari_disable_irq(irq); atari_turnoff_irq(irq); - m68k_irq_shutdown(irq); + m68k_irq_shutdown(data); if (irq == IRQ_AUTO_4) vectors[VEC_INT4] = falcon_hblhandler; } -static struct irq_controller atari_irq_controller = { +static void atari_irq_enable(struct irq_data *data) +{ + atari_enable_irq(data->irq); +} + +static void atari_irq_disable(struct irq_data *data) +{ + atari_disable_irq(data->irq); +} + +static struct irq_chip atari_irq_chip = { .name = "atari", - .lock = __SPIN_LOCK_UNLOCKED(atari_irq_controller.lock), - .startup = atari_startup_irq, - .shutdown = atari_shutdown_irq, - .enable = atari_enable_irq, - .disable = atari_disable_irq, + .irq_startup = atari_irq_startup, + .irq_shutdown = atari_irq_shutdown, + .irq_enable = atari_irq_enable, + .irq_disable = atari_irq_disable, }; /* @@ -360,8 +137,9 @@ static struct irq_controller atari_irq_controller = { void __init atari_init_IRQ(void) { - m68k_setup_user_interrupt(VEC_USER, NUM_ATARI_SOURCES - IRQ_USER, NULL); - m68k_setup_irq_controller(&atari_irq_controller, 1, NUM_ATARI_SOURCES - 1); + m68k_setup_user_interrupt(VEC_USER, NUM_ATARI_SOURCES - IRQ_USER); + m68k_setup_irq_controller(&atari_irq_chip, handle_simple_irq, 1, + NUM_ATARI_SOURCES - 1); /* Initialize the MFP(s) */ diff --git a/arch/m68k/bvme6000/config.c b/arch/m68k/bvme6000/config.c index 1edd95095cb4..81286476f740 100644 --- a/arch/m68k/bvme6000/config.c +++ b/arch/m68k/bvme6000/config.c @@ -86,7 +86,7 @@ static void bvme6000_get_model(char *model) */ static void __init bvme6000_init_IRQ(void) { - m68k_setup_user_interrupt(VEC_USER, 192, NULL); + m68k_setup_user_interrupt(VEC_USER, 192); } void __init config_bvme6000(void) diff --git a/arch/m68k/hp300/time.c b/arch/m68k/hp300/time.c index f6312c7d8727..c87fe69b0728 100644 --- a/arch/m68k/hp300/time.c +++ b/arch/m68k/hp300/time.c @@ -70,7 +70,7 @@ void __init hp300_sched_init(irq_handler_t vector) asm volatile(" movpw %0,%1@(5)" : : "d" (INTVAL), "a" (CLOCKBASE)); - if (request_irq(IRQ_AUTO_6, 
hp300_tick, IRQ_FLG_STD, "timer tick", vector)) + if (request_irq(IRQ_AUTO_6, hp300_tick, 0, "timer tick", vector)) pr_err("Couldn't register timer interrupt\n"); out_8(CLOCKBASE + CLKCR2, 0x1); /* select CR1 */ diff --git a/arch/m68k/include/asm/hardirq.h b/arch/m68k/include/asm/hardirq.h index 870e5347155b..db30ed276878 100644 --- a/arch/m68k/include/asm/hardirq.h +++ b/arch/m68k/include/asm/hardirq.h @@ -18,6 +18,11 @@ #ifdef CONFIG_MMU +static inline void ack_bad_irq(unsigned int irq) +{ + pr_crit("unexpected IRQ trap at vector %02x\n", irq); +} + /* entry.S is sensitive to the offsets of these fields */ typedef struct { unsigned int __softirq_pending; diff --git a/arch/m68k/include/asm/irq.h b/arch/m68k/include/asm/irq.h index 69ed0d74d532..6198df5ff245 100644 --- a/arch/m68k/include/asm/irq.h +++ b/arch/m68k/include/asm/irq.h @@ -27,11 +27,6 @@ #ifdef CONFIG_MMU -#include <linux/linkage.h> -#include <linux/hardirq.h> -#include <linux/irqreturn.h> -#include <linux/spinlock_types.h> - /* * Interrupt source definitions * General interrupt sources are the level 1-7. @@ -54,10 +49,6 @@ #define IRQ_USER 8 -extern unsigned int irq_canonicalize(unsigned int irq); - -struct pt_regs; - /* * various flags for request_irq() - the Amiga now uses the standard * mechanism like all other architectures - IRQF_DISABLED and @@ -71,57 +62,27 @@ struct pt_regs; #define IRQ_FLG_STD (0x8000) /* internally used */ #endif -/* - * This structure is used to chain together the ISRs for a particular - * interrupt source (if it supports chaining). - */ -typedef struct irq_node { - irqreturn_t (*handler)(int, void *); - void *dev_id; - struct irq_node *next; - unsigned long flags; - const char *devname; -} irq_node_t; - -/* - * This structure has only 4 elements for speed reasons - */ -struct irq_handler { - int (*handler)(int, void *); - unsigned long flags; - void *dev_id; - const char *devname; -}; - -struct irq_controller { - const char *name; - spinlock_t lock; - int (*startup)(unsigned int irq); - void (*shutdown)(unsigned int irq); - void (*enable)(unsigned int irq); - void (*disable)(unsigned int irq); -}; - -extern int m68k_irq_startup(unsigned int); -extern void m68k_irq_shutdown(unsigned int); - -/* - * This function returns a new irq_node_t - */ -extern irq_node_t *new_irq_node(void); +struct irq_data; +struct irq_chip; +struct irq_desc; +extern unsigned int m68k_irq_startup(struct irq_data *data); +extern unsigned int m68k_irq_startup_irq(unsigned int irq); +extern void m68k_irq_shutdown(struct irq_data *data); +extern void m68k_setup_auto_interrupt(void (*handler)(unsigned int, + struct pt_regs *)); +extern void m68k_setup_user_interrupt(unsigned int vec, unsigned int cnt); +extern void m68k_setup_irq_controller(struct irq_chip *, + void (*handle)(unsigned int irq, + struct irq_desc *desc), + unsigned int irq, unsigned int cnt); -extern void m68k_setup_auto_interrupt(void (*handler)(unsigned int, struct pt_regs *)); -extern void m68k_setup_user_interrupt(unsigned int vec, unsigned int cnt, - void (*handler)(unsigned int, struct pt_regs *)); -extern void m68k_setup_irq_controller(struct irq_controller *, unsigned int, unsigned int); - -asmlinkage void m68k_handle_int(unsigned int); -asmlinkage void __m68k_handle_int(unsigned int, struct pt_regs *); +extern unsigned int irq_canonicalize(unsigned int irq); #else #define irq_canonicalize(irq) (irq) #endif /* CONFIG_MMU */ asmlinkage void do_IRQ(int irq, struct pt_regs *regs); +extern atomic_t irq_err_count; #endif /* _M68K_IRQ_H_ */ diff --git 
a/arch/m68k/include/asm/macintosh.h b/arch/m68k/include/asm/macintosh.h index c2a1c5eac1a6..12ebe43b008b 100644 --- a/arch/m68k/include/asm/macintosh.h +++ b/arch/m68k/include/asm/macintosh.h @@ -12,6 +12,8 @@ extern void mac_reset(void); extern void mac_poweroff(void); extern void mac_init_IRQ(void); extern int mac_irq_pending(unsigned int); +extern void mac_irq_enable(struct irq_data *data); +extern void mac_irq_disable(struct irq_data *data); /* * Floppy driver magic hook - probably shouldn't be here diff --git a/arch/m68k/include/asm/q40ints.h b/arch/m68k/include/asm/q40ints.h index 3d970afb708f..22f12c9eb910 100644 --- a/arch/m68k/include/asm/q40ints.h +++ b/arch/m68k/include/asm/q40ints.h @@ -24,6 +24,3 @@ #define Q40_IRQ10_MASK (1<<5) #define Q40_IRQ14_MASK (1<<6) #define Q40_IRQ15_MASK (1<<7) - -extern unsigned long q40_probe_irq_on (void); -extern int q40_probe_irq_off (unsigned long irqs); diff --git a/arch/m68k/kernel/Makefile b/arch/m68k/kernel/Makefile index e7f0f2e5ad44..c5696193281a 100644 --- a/arch/m68k/kernel/Makefile +++ b/arch/m68k/kernel/Makefile @@ -6,16 +6,15 @@ extra-$(CONFIG_MMU) := head.o extra-$(CONFIG_SUN3) := sun3-head.o extra-y += vmlinux.lds -obj-y := entry.o m68k_ksyms.o module.o process.o ptrace.o setup.o signal.o \ - sys_m68k.o syscalltable.o time.o traps.o +obj-y := entry.o irq.o m68k_ksyms.o module.o process.o ptrace.o setup.o \ + signal.o sys_m68k.o syscalltable.o time.o traps.o -obj-$(CONFIG_MMU) += ints.o devres.o vectors.o -devres-$(CONFIG_MMU) = ../../../kernel/irq/devres.o +obj-$(CONFIG_MMU) += ints.o vectors.o ifndef CONFIG_MMU_SUN3 obj-y += dma.o endif ifndef CONFIG_MMU -obj-y += init_task.o irq.o +obj-y += init_task.o endif diff --git a/arch/m68k/kernel/entry_mm.S b/arch/m68k/kernel/entry_mm.S index bd0ec05263b2..c713f514843d 100644 --- a/arch/m68k/kernel/entry_mm.S +++ b/arch/m68k/kernel/entry_mm.S @@ -48,7 +48,7 @@ .globl sys_fork, sys_clone, sys_vfork .globl ret_from_interrupt, bad_interrupt .globl auto_irqhandler_fixup -.globl user_irqvec_fixup, user_irqhandler_fixup +.globl user_irqvec_fixup .text ENTRY(buserr) @@ -207,7 +207,7 @@ ENTRY(auto_inthandler) movel %sp,%sp@- movel %d0,%sp@- | put vector # on stack auto_irqhandler_fixup = . + 2 - jsr __m68k_handle_int | process the IRQ + jsr do_IRQ | process the IRQ addql #8,%sp | pop parameters off stack ret_from_interrupt: @@ -240,8 +240,7 @@ user_irqvec_fixup = . + 2 movel %sp,%sp@- movel %d0,%sp@- | put vector # on stack -user_irqhandler_fixup = . + 2 - jsr __m68k_handle_int | process the IRQ + jsr do_IRQ | process the IRQ addql #8,%sp | pop parameters off stack subqb #1,%curptr@(TASK_INFO+TINFO_PREEMPT+1) diff --git a/arch/m68k/kernel/ints.c b/arch/m68k/kernel/ints.c index 761ee0440c99..74fefac00899 100644 --- a/arch/m68k/kernel/ints.c +++ b/arch/m68k/kernel/ints.c @@ -4,25 +4,6 @@ * This file is subject to the terms and conditions of the GNU General Public * License. See the file COPYING in the main directory of this archive * for more details. - * - * 07/03/96: Timer initialization, and thus mach_sched_init(), - * removed from request_irq() and moved to init_time(). - * We should therefore consider renaming our add_isr() and - * remove_isr() to request_irq() and free_irq() - * respectively, so they are compliant with the other - * architectures. /Jes - * 11/07/96: Changed all add_/remove_isr() to request_/free_irq() calls. 
- * Removed irq list support, if any machine needs an irq server - * it must implement this itself (as it's already done), instead - * only default handler are used with mach_default_handler. - * request_irq got some flags different from other architectures: - * - IRQ_FLG_REPLACE : Replace an existing handler (the default one - * can be replaced without this flag) - * - IRQ_FLG_LOCK : handler can't be replaced - * There are other machine depending flags, see there - * If you want to replace a default handler you should know what - * you're doing, since it might handle different other irq sources - * which must be served /Roman Zippel */ #include <linux/module.h> @@ -47,33 +28,22 @@ #endif extern u32 auto_irqhandler_fixup[]; -extern u32 user_irqhandler_fixup[]; extern u16 user_irqvec_fixup[]; -/* table for system interrupt handlers */ -static struct irq_node *irq_list[NR_IRQS]; -static struct irq_controller *irq_controller[NR_IRQS]; -static int irq_depth[NR_IRQS]; - static int m68k_first_user_vec; -static struct irq_controller auto_irq_controller = { +static struct irq_chip auto_irq_chip = { .name = "auto", - .lock = __SPIN_LOCK_UNLOCKED(auto_irq_controller.lock), - .startup = m68k_irq_startup, - .shutdown = m68k_irq_shutdown, + .irq_startup = m68k_irq_startup, + .irq_shutdown = m68k_irq_shutdown, }; -static struct irq_controller user_irq_controller = { +static struct irq_chip user_irq_chip = { .name = "user", - .lock = __SPIN_LOCK_UNLOCKED(user_irq_controller.lock), - .startup = m68k_irq_startup, - .shutdown = m68k_irq_shutdown, + .irq_startup = m68k_irq_startup, + .irq_shutdown = m68k_irq_shutdown, }; -#define NUM_IRQ_NODES 100 -static irq_node_t nodes[NUM_IRQ_NODES]; - /* * void init_IRQ(void) * @@ -96,7 +66,7 @@ void __init init_IRQ(void) } for (i = IRQ_AUTO_1; i <= IRQ_AUTO_7; i++) - irq_controller[i] = &auto_irq_controller; + irq_set_chip_and_handler(i, &auto_irq_chip, handle_simple_irq); mach_init_IRQ(); } @@ -106,7 +76,7 @@ void __init init_IRQ(void) * @handler: called from auto vector interrupts * * setup the handler to be called from auto vector interrupts instead of the - * standard __m68k_handle_int(), it will be called with irq numbers in the range + * standard do_IRQ(), it will be called with irq numbers in the range * from IRQ_AUTO_1 - IRQ_AUTO_7. */ void __init m68k_setup_auto_interrupt(void (*handler)(unsigned int, struct pt_regs *)) @@ -120,217 +90,49 @@ void __init m68k_setup_auto_interrupt(void (*handler)(unsigned int, struct pt_re * m68k_setup_user_interrupt * @vec: first user vector interrupt to handle * @cnt: number of active user vector interrupts - * @handler: called from user vector interrupts * * setup user vector interrupts, this includes activating the specified range * of interrupts, only then these interrupts can be requested (note: this is - * different from auto vector interrupts). An optional handler can be installed - * to be called instead of the default __m68k_handle_int(), it will be called - * with irq numbers starting from IRQ_USER. + * different from auto vector interrupts). 
*/ -void __init m68k_setup_user_interrupt(unsigned int vec, unsigned int cnt, - void (*handler)(unsigned int, struct pt_regs *)) +void __init m68k_setup_user_interrupt(unsigned int vec, unsigned int cnt) { int i; BUG_ON(IRQ_USER + cnt > NR_IRQS); m68k_first_user_vec = vec; for (i = 0; i < cnt; i++) - irq_controller[IRQ_USER + i] = &user_irq_controller; + irq_set_chip(IRQ_USER + i, &user_irq_chip); *user_irqvec_fixup = vec - IRQ_USER; - if (handler) - *user_irqhandler_fixup = (u32)handler; flush_icache(); } /** * m68k_setup_irq_controller - * @contr: irq controller which controls specified irq + * @chip: irq chip which controls specified irq + * @handle: flow handler which handles specified irq * @irq: first irq to be managed by the controller + * @cnt: number of irqs to be managed by the controller * * Change the controller for the specified range of irq, which will be used to * manage these irq. auto/user irq already have a default controller, which can * be changed as well, but the controller probably should use m68k_irq_startup/ * m68k_irq_shutdown. */ -void m68k_setup_irq_controller(struct irq_controller *contr, unsigned int irq, +void m68k_setup_irq_controller(struct irq_chip *chip, + irq_flow_handler_t handle, unsigned int irq, unsigned int cnt) { int i; - for (i = 0; i < cnt; i++) - irq_controller[irq + i] = contr; -} - -irq_node_t *new_irq_node(void) -{ - irq_node_t *node; - short i; - - for (node = nodes, i = NUM_IRQ_NODES-1; i >= 0; node++, i--) { - if (!node->handler) { - memset(node, 0, sizeof(*node)); - return node; - } + for (i = 0; i < cnt; i++) { + irq_set_chip(irq + i, chip); + if (handle) + irq_set_handler(irq + i, handle); } - - printk ("new_irq_node: out of nodes\n"); - return NULL; } -int setup_irq(unsigned int irq, struct irq_node *node) -{ - struct irq_controller *contr; - struct irq_node **prev; - unsigned long flags; - - if (irq >= NR_IRQS || !(contr = irq_controller[irq])) { - printk("%s: Incorrect IRQ %d from %s\n", - __func__, irq, node->devname); - return -ENXIO; - } - - spin_lock_irqsave(&contr->lock, flags); - - prev = irq_list + irq; - if (*prev) { - /* Can't share interrupts unless both agree to */ - if (!((*prev)->flags & node->flags & IRQF_SHARED)) { - spin_unlock_irqrestore(&contr->lock, flags); - return -EBUSY; - } - while (*prev) - prev = &(*prev)->next; - } - - if (!irq_list[irq]) { - if (contr->startup) - contr->startup(irq); - else - contr->enable(irq); - } - node->next = NULL; - *prev = node; - - spin_unlock_irqrestore(&contr->lock, flags); - - return 0; -} - -int request_irq(unsigned int irq, - irq_handler_t handler, - unsigned long flags, const char *devname, void *dev_id) -{ - struct irq_node *node; - int res; - - node = new_irq_node(); - if (!node) - return -ENOMEM; - - node->handler = handler; - node->flags = flags; - node->dev_id = dev_id; - node->devname = devname; - - res = setup_irq(irq, node); - if (res) - node->handler = NULL; - - return res; -} - -EXPORT_SYMBOL(request_irq); - -void free_irq(unsigned int irq, void *dev_id) -{ - struct irq_controller *contr; - struct irq_node **p, *node; - unsigned long flags; - - if (irq >= NR_IRQS || !(contr = irq_controller[irq])) { - printk("%s: Incorrect IRQ %d\n", __func__, irq); - return; - } - - spin_lock_irqsave(&contr->lock, flags); - - p = irq_list + irq; - while ((node = *p)) { - if (node->dev_id == dev_id) - break; - p = &node->next; - } - - if (node) { - *p = node->next; - node->handler = NULL; - } else - printk("%s: Removing probably wrong IRQ %d\n", - __func__, irq); - - if 
(!irq_list[irq]) { - if (contr->shutdown) - contr->shutdown(irq); - else - contr->disable(irq); - } - - spin_unlock_irqrestore(&contr->lock, flags); -} - -EXPORT_SYMBOL(free_irq); - -void enable_irq(unsigned int irq) -{ - struct irq_controller *contr; - unsigned long flags; - - if (irq >= NR_IRQS || !(contr = irq_controller[irq])) { - printk("%s: Incorrect IRQ %d\n", - __func__, irq); - return; - } - - spin_lock_irqsave(&contr->lock, flags); - if (irq_depth[irq]) { - if (!--irq_depth[irq]) { - if (contr->enable) - contr->enable(irq); - } - } else - WARN_ON(1); - spin_unlock_irqrestore(&contr->lock, flags); -} - -EXPORT_SYMBOL(enable_irq); - -void disable_irq(unsigned int irq) -{ - struct irq_controller *contr; - unsigned long flags; - - if (irq >= NR_IRQS || !(contr = irq_controller[irq])) { - printk("%s: Incorrect IRQ %d\n", - __func__, irq); - return; - } - - spin_lock_irqsave(&contr->lock, flags); - if (!irq_depth[irq]++) { - if (contr->disable) - contr->disable(irq); - } - spin_unlock_irqrestore(&contr->lock, flags); -} - -EXPORT_SYMBOL(disable_irq); - -void disable_irq_nosync(unsigned int irq) __attribute__((alias("disable_irq"))); - -EXPORT_SYMBOL(disable_irq_nosync); - -int m68k_irq_startup(unsigned int irq) +unsigned int m68k_irq_startup_irq(unsigned int irq) { if (irq <= IRQ_AUTO_7) vectors[VEC_SPUR + irq] = auto_inthandler; @@ -339,41 +141,21 @@ int m68k_irq_startup(unsigned int irq) return 0; } -void m68k_irq_shutdown(unsigned int irq) +unsigned int m68k_irq_startup(struct irq_data *data) { - if (irq <= IRQ_AUTO_7) - vectors[VEC_SPUR + irq] = bad_inthandler; - else - vectors[m68k_first_user_vec + irq - IRQ_USER] = bad_inthandler; + return m68k_irq_startup_irq(data->irq); } - -/* - * Do we need these probe functions on the m68k? - * - * ... 
may be useful with ISA devices - */ -unsigned long probe_irq_on (void) +void m68k_irq_shutdown(struct irq_data *data) { -#ifdef CONFIG_Q40 - if (MACH_IS_Q40) - return q40_probe_irq_on(); -#endif - return 0; -} + unsigned int irq = data->irq; -EXPORT_SYMBOL(probe_irq_on); - -int probe_irq_off (unsigned long irqs) -{ -#ifdef CONFIG_Q40 - if (MACH_IS_Q40) - return q40_probe_irq_off(irqs); -#endif - return 0; + if (irq <= IRQ_AUTO_7) + vectors[VEC_SPUR + irq] = bad_inthandler; + else + vectors[m68k_first_user_vec + irq - IRQ_USER] = bad_inthandler; } -EXPORT_SYMBOL(probe_irq_off); unsigned int irq_canonicalize(unsigned int irq) { @@ -386,52 +168,9 @@ unsigned int irq_canonicalize(unsigned int irq) EXPORT_SYMBOL(irq_canonicalize); -asmlinkage void m68k_handle_int(unsigned int irq) -{ - struct irq_node *node; - kstat_cpu(0).irqs[irq]++; - node = irq_list[irq]; - do { - node->handler(irq, node->dev_id); - node = node->next; - } while (node); -} - -asmlinkage void __m68k_handle_int(unsigned int irq, struct pt_regs *regs) -{ - struct pt_regs *old_regs; - old_regs = set_irq_regs(regs); - m68k_handle_int(irq); - set_irq_regs(old_regs); -} asmlinkage void handle_badint(struct pt_regs *regs) { - kstat_cpu(0).irqs[0]++; - printk("unexpected interrupt from %u\n", regs->vector); -} - -int show_interrupts(struct seq_file *p, void *v) -{ - struct irq_controller *contr; - struct irq_node *node; - int i = *(loff_t *) v; - - /* autovector interrupts */ - if (irq_list[i]) { - contr = irq_controller[i]; - node = irq_list[i]; - seq_printf(p, "%-8s %3u: %10u %s", contr->name, i, kstat_cpu(0).irqs[i], node->devname); - while ((node = node->next)) - seq_printf(p, ", %s", node->devname); - seq_puts(p, "\n"); - } - return 0; -} - -#ifdef CONFIG_PROC_FS -void init_irq_proc(void) -{ - /* Insert /proc/irq driver here */ + atomic_inc(&irq_err_count); + pr_warn("unexpected interrupt from %u\n", regs->vector); } -#endif diff --git a/arch/m68k/mac/baboon.c b/arch/m68k/mac/baboon.c index 2a96bebd8969..b403924a1cad 100644 --- a/arch/m68k/mac/baboon.c +++ b/arch/m68k/mac/baboon.c @@ -11,6 +11,7 @@ #include <linux/mm.h> #include <linux/delay.h> #include <linux/init.h> +#include <linux/irq.h> #include <asm/traps.h> #include <asm/bootinfo.h> @@ -20,9 +21,6 @@ /* #define DEBUG_IRQS */ -extern void mac_enable_irq(unsigned int); -extern void mac_disable_irq(unsigned int); - int baboon_present; static volatile struct baboon *baboon; static unsigned char baboon_disabled; @@ -53,7 +51,7 @@ void __init baboon_init(void) * Baboon interrupt handler. This works a lot like a VIA. 
*/ -static irqreturn_t baboon_irq(int irq, void *dev_id) +static void baboon_irq(unsigned int irq, struct irq_desc *desc) { int irq_bit, irq_num; unsigned char events; @@ -64,15 +62,16 @@ static irqreturn_t baboon_irq(int irq, void *dev_id) (uint) baboon->mb_status); #endif - if (!(events = baboon->mb_ifr & 0x07)) - return IRQ_NONE; + events = baboon->mb_ifr & 0x07; + if (!events) + return; irq_num = IRQ_BABOON_0; irq_bit = 1; do { if (events & irq_bit) { baboon->mb_ifr &= ~irq_bit; - m68k_handle_int(irq_num); + generic_handle_irq(irq_num); } irq_bit <<= 1; irq_num++; @@ -82,7 +81,6 @@ static irqreturn_t baboon_irq(int irq, void *dev_id) /* for now we need to smash all interrupts */ baboon->mb_ifr &= ~events; #endif - return IRQ_HANDLED; } /* @@ -92,8 +90,7 @@ static irqreturn_t baboon_irq(int irq, void *dev_id) void __init baboon_register_interrupts(void) { baboon_disabled = 0; - if (request_irq(IRQ_NUBUS_C, baboon_irq, 0, "baboon", (void *)baboon)) - pr_err("Couldn't register baboon interrupt\n"); + irq_set_chained_handler(IRQ_NUBUS_C, baboon_irq); } /* @@ -111,7 +108,7 @@ void baboon_irq_enable(int irq) baboon_disabled &= ~(1 << irq_idx); if (!baboon_disabled) - mac_enable_irq(IRQ_NUBUS_C); + mac_irq_enable(irq_get_irq_data(IRQ_NUBUS_C)); } void baboon_irq_disable(int irq) @@ -124,7 +121,7 @@ void baboon_irq_disable(int irq) baboon_disabled |= 1 << irq_idx; if (baboon_disabled) - mac_disable_irq(IRQ_NUBUS_C); + mac_irq_disable(irq_get_irq_data(IRQ_NUBUS_C)); } void baboon_irq_clear(int irq) diff --git a/arch/m68k/mac/iop.c b/arch/m68k/mac/iop.c index 1ad4e9d80eba..a5462cc0bfd6 100644 --- a/arch/m68k/mac/iop.c +++ b/arch/m68k/mac/iop.c @@ -305,15 +305,13 @@ void __init iop_register_interrupts(void) { if (iop_ism_present) { if (oss_present) { - if (request_irq(OSS_IRQLEV_IOPISM, iop_ism_irq, - IRQ_FLG_LOCK, "ISM IOP", - (void *) IOP_NUM_ISM)) + if (request_irq(OSS_IRQLEV_IOPISM, iop_ism_irq, 0, + "ISM IOP", (void *)IOP_NUM_ISM)) pr_err("Couldn't register ISM IOP interrupt\n"); oss_irq_enable(IRQ_MAC_ADB); } else { - if (request_irq(IRQ_VIA2_0, iop_ism_irq, - IRQ_FLG_LOCK|IRQ_FLG_FAST, "ISM IOP", - (void *) IOP_NUM_ISM)) + if (request_irq(IRQ_VIA2_0, iop_ism_irq, 0, "ISM IOP", + (void *)IOP_NUM_ISM)) pr_err("Couldn't register ISM IOP interrupt\n"); } if (!iop_alive(iop_base[IOP_NUM_ISM])) { diff --git a/arch/m68k/mac/macints.c b/arch/m68k/mac/macints.c index f92190c159b4..ba220b70ab8c 100644 --- a/arch/m68k/mac/macints.c +++ b/arch/m68k/mac/macints.c @@ -190,14 +190,10 @@ irqreturn_t mac_debug_handler(int, void *); /* #define DEBUG_MACINTS */ -void mac_enable_irq(unsigned int irq); -void mac_disable_irq(unsigned int irq); - -static struct irq_controller mac_irq_controller = { +static struct irq_chip mac_irq_chip = { .name = "mac", - .lock = __SPIN_LOCK_UNLOCKED(mac_irq_controller.lock), - .enable = mac_enable_irq, - .disable = mac_disable_irq, + .irq_enable = mac_irq_enable, + .irq_disable = mac_irq_disable, }; void __init mac_init_IRQ(void) @@ -205,7 +201,7 @@ void __init mac_init_IRQ(void) #ifdef DEBUG_MACINTS printk("mac_init_IRQ(): Setting things up...\n"); #endif - m68k_setup_irq_controller(&mac_irq_controller, IRQ_USER, + m68k_setup_irq_controller(&mac_irq_chip, handle_simple_irq, IRQ_USER, NUM_MAC_SOURCES - IRQ_USER); /* Make sure the SONIC interrupt is cleared or things get ugly */ #ifdef SHUTUP_SONIC @@ -241,16 +237,17 @@ void __init mac_init_IRQ(void) } /* - * mac_enable_irq - enable an interrupt source - * mac_disable_irq - disable an interrupt source + * mac_irq_enable - enable 
an interrupt source + * mac_irq_disable - disable an interrupt source * mac_clear_irq - clears a pending interrupt - * mac_pending_irq - Returns the pending status of an IRQ (nonzero = pending) + * mac_irq_pending - returns the pending status of an IRQ (nonzero = pending) * * These routines are just dispatchers to the VIA/OSS/PSC routines. */ -void mac_enable_irq(unsigned int irq) +void mac_irq_enable(struct irq_data *data) { + int irq = data->irq; int irq_src = IRQ_SRC(irq); switch(irq_src) { @@ -283,8 +280,9 @@ void mac_enable_irq(unsigned int irq) } } -void mac_disable_irq(unsigned int irq) +void mac_irq_disable(struct irq_data *data) { + int irq = data->irq; int irq_src = IRQ_SRC(irq); switch(irq_src) { diff --git a/arch/m68k/mac/oss.c b/arch/m68k/mac/oss.c index a9c0f5ab4cc0..a4c82dab9ff1 100644 --- a/arch/m68k/mac/oss.c +++ b/arch/m68k/mac/oss.c @@ -19,6 +19,7 @@ #include <linux/mm.h> #include <linux/delay.h> #include <linux/init.h> +#include <linux/irq.h> #include <asm/bootinfo.h> #include <asm/macintosh.h> @@ -29,10 +30,7 @@ int oss_present; volatile struct mac_oss *oss; -static irqreturn_t oss_irq(int, void *); -static irqreturn_t oss_nubus_irq(int, void *); - -extern irqreturn_t via1_irq(int, void *); +extern void via1_irq(unsigned int irq, struct irq_desc *desc); /* * Initialize the OSS @@ -60,26 +58,6 @@ void __init oss_init(void) } /* - * Register the OSS and NuBus interrupt dispatchers. - */ - -void __init oss_register_interrupts(void) -{ - if (request_irq(OSS_IRQLEV_SCSI, oss_irq, IRQ_FLG_LOCK, - "scsi", (void *) oss)) - pr_err("Couldn't register %s interrupt\n", "scsi"); - if (request_irq(OSS_IRQLEV_NUBUS, oss_nubus_irq, IRQ_FLG_LOCK, - "nubus", (void *) oss)) - pr_err("Couldn't register %s interrupt\n", "nubus"); - if (request_irq(OSS_IRQLEV_SOUND, oss_irq, IRQ_FLG_LOCK, - "sound", (void *) oss)) - pr_err("Couldn't register %s interrupt\n", "sound"); - if (request_irq(OSS_IRQLEV_VIA1, via1_irq, IRQ_FLG_LOCK, - "via1", (void *) via1)) - pr_err("Couldn't register %s interrupt\n", "via1"); -} - -/* * Initialize OSS for Nubus access */ @@ -92,17 +70,17 @@ void __init oss_nubus_init(void) * and SCSI; everything else is routed to its own autovector IRQ. */ -static irqreturn_t oss_irq(int irq, void *dev_id) +static void oss_irq(unsigned int irq, struct irq_desc *desc) { int events; events = oss->irq_pending & (OSS_IP_SOUND|OSS_IP_SCSI); if (!events) - return IRQ_NONE; + return; #ifdef DEBUG_IRQS if ((console_loglevel == 10) && !(events & OSS_IP_SCSI)) { - printk("oss_irq: irq %d events = 0x%04X\n", irq, + printk("oss_irq: irq %u events = 0x%04X\n", irq, (int) oss->irq_pending); } #endif @@ -113,11 +91,10 @@ static irqreturn_t oss_irq(int irq, void *dev_id) /* FIXME: call sound handler */ } else if (events & OSS_IP_SCSI) { oss->irq_pending &= ~OSS_IP_SCSI; - m68k_handle_int(IRQ_MAC_SCSI); + generic_handle_irq(IRQ_MAC_SCSI); } else { /* FIXME: error check here? */ } - return IRQ_HANDLED; } /* @@ -126,13 +103,13 @@ static irqreturn_t oss_irq(int irq, void *dev_id) * Unlike the VIA/RBV this is on its own autovector interrupt level. 
*/ -static irqreturn_t oss_nubus_irq(int irq, void *dev_id) +static void oss_nubus_irq(unsigned int irq, struct irq_desc *desc) { int events, irq_bit, i; events = oss->irq_pending & OSS_IP_NUBUS; if (!events) - return IRQ_NONE; + return; #ifdef DEBUG_NUBUS_INT if (console_loglevel > 7) { @@ -148,10 +125,21 @@ static irqreturn_t oss_nubus_irq(int irq, void *dev_id) irq_bit >>= 1; if (events & irq_bit) { oss->irq_pending &= ~irq_bit; - m68k_handle_int(NUBUS_SOURCE_BASE + i); + generic_handle_irq(NUBUS_SOURCE_BASE + i); } } while(events & (irq_bit - 1)); - return IRQ_HANDLED; +} + +/* + * Register the OSS and NuBus interrupt dispatchers. + */ + +void __init oss_register_interrupts(void) +{ + irq_set_chained_handler(OSS_IRQLEV_SCSI, oss_irq); + irq_set_chained_handler(OSS_IRQLEV_NUBUS, oss_nubus_irq); + irq_set_chained_handler(OSS_IRQLEV_SOUND, oss_irq); + irq_set_chained_handler(OSS_IRQLEV_VIA1, via1_irq); } /* diff --git a/arch/m68k/mac/psc.c b/arch/m68k/mac/psc.c index a4c3eb60706e..e6c2d20f328d 100644 --- a/arch/m68k/mac/psc.c +++ b/arch/m68k/mac/psc.c @@ -18,6 +18,7 @@ #include <linux/mm.h> #include <linux/delay.h> #include <linux/init.h> +#include <linux/irq.h> #include <asm/traps.h> #include <asm/bootinfo.h> @@ -30,8 +31,6 @@ int psc_present; volatile __u8 *psc; -irqreturn_t psc_irq(int, void *); - /* * Debugging dump, used in various places to see what's going on. */ @@ -112,52 +111,52 @@ void __init psc_init(void) } /* - * Register the PSC interrupt dispatchers for autovector interrupts 3-6. - */ - -void __init psc_register_interrupts(void) -{ - if (request_irq(IRQ_AUTO_3, psc_irq, 0, "psc3", (void *) 0x30)) - pr_err("Couldn't register psc%d interrupt\n", 3); - if (request_irq(IRQ_AUTO_4, psc_irq, 0, "psc4", (void *) 0x40)) - pr_err("Couldn't register psc%d interrupt\n", 4); - if (request_irq(IRQ_AUTO_5, psc_irq, 0, "psc5", (void *) 0x50)) - pr_err("Couldn't register psc%d interrupt\n", 5); - if (request_irq(IRQ_AUTO_6, psc_irq, 0, "psc6", (void *) 0x60)) - pr_err("Couldn't register psc%d interrupt\n", 6); -} - -/* * PSC interrupt handler. It's a lot like the VIA interrupt handler. */ -irqreturn_t psc_irq(int irq, void *dev_id) +static void psc_irq(unsigned int irq, struct irq_desc *desc) { - int pIFR = pIFRbase + ((int) dev_id); - int pIER = pIERbase + ((int) dev_id); + unsigned int offset = (unsigned int)irq_desc_get_handler_data(desc); + int pIFR = pIFRbase + offset; + int pIER = pIERbase + offset; int irq_num; unsigned char irq_bit, events; #ifdef DEBUG_IRQS - printk("psc_irq: irq %d pIFR = 0x%02X pIER = 0x%02X\n", + printk("psc_irq: irq %u pIFR = 0x%02X pIER = 0x%02X\n", irq, (int) psc_read_byte(pIFR), (int) psc_read_byte(pIER)); #endif events = psc_read_byte(pIFR) & psc_read_byte(pIER) & 0xF; if (!events) - return IRQ_NONE; + return; irq_num = irq << 3; irq_bit = 1; do { if (events & irq_bit) { psc_write_byte(pIFR, irq_bit); - m68k_handle_int(irq_num); + generic_handle_irq(irq_num); } irq_num++; irq_bit <<= 1; } while (events >= irq_bit); - return IRQ_HANDLED; +} + +/* + * Register the PSC interrupt dispatchers for autovector interrupts 3-6. 
+ */ + +void __init psc_register_interrupts(void) +{ + irq_set_chained_handler(IRQ_AUTO_3, psc_irq); + irq_set_handler_data(IRQ_AUTO_3, (void *)0x30); + irq_set_chained_handler(IRQ_AUTO_4, psc_irq); + irq_set_handler_data(IRQ_AUTO_4, (void *)0x40); + irq_set_chained_handler(IRQ_AUTO_5, psc_irq); + irq_set_handler_data(IRQ_AUTO_5, (void *)0x50); + irq_set_chained_handler(IRQ_AUTO_6, psc_irq); + irq_set_handler_data(IRQ_AUTO_6, (void *)0x60); } void psc_irq_enable(int irq) { diff --git a/arch/m68k/mac/via.c b/arch/m68k/mac/via.c index e71166daec6a..f1600ad26621 100644 --- a/arch/m68k/mac/via.c +++ b/arch/m68k/mac/via.c @@ -28,6 +28,7 @@ #include <linux/delay.h> #include <linux/init.h> #include <linux/module.h> +#include <linux/irq.h> #include <asm/bootinfo.h> #include <asm/macintosh.h> @@ -77,9 +78,6 @@ static int gIER,gIFR,gBufA,gBufB; static u8 nubus_disabled; void via_debug_dump(void); -irqreturn_t via1_irq(int, void *); -irqreturn_t via2_irq(int, void *); -irqreturn_t via_nubus_irq(int, void *); void via_irq_enable(int irq); void via_irq_disable(int irq); void via_irq_clear(int irq); @@ -281,40 +279,11 @@ void __init via_init_clock(irq_handler_t func) via1[vT1CL] = MAC_CLOCK_LOW; via1[vT1CH] = MAC_CLOCK_HIGH; - if (request_irq(IRQ_MAC_TIMER_1, func, IRQ_FLG_LOCK, "timer", func)) + if (request_irq(IRQ_MAC_TIMER_1, func, 0, "timer", func)) pr_err("Couldn't register %s interrupt\n", "timer"); } /* - * Register the interrupt dispatchers for VIA or RBV machines only. - */ - -void __init via_register_interrupts(void) -{ - if (via_alt_mapping) { - if (request_irq(IRQ_AUTO_1, via1_irq, - IRQ_FLG_LOCK|IRQ_FLG_FAST, "software", - (void *) via1)) - pr_err("Couldn't register %s interrupt\n", "software"); - if (request_irq(IRQ_AUTO_6, via1_irq, - IRQ_FLG_LOCK|IRQ_FLG_FAST, "via1", - (void *) via1)) - pr_err("Couldn't register %s interrupt\n", "via1"); - } else { - if (request_irq(IRQ_AUTO_1, via1_irq, - IRQ_FLG_LOCK|IRQ_FLG_FAST, "via1", - (void *) via1)) - pr_err("Couldn't register %s interrupt\n", "via1"); - } - if (request_irq(IRQ_AUTO_2, via2_irq, IRQ_FLG_LOCK|IRQ_FLG_FAST, - "via2", (void *) via2)) - pr_err("Couldn't register %s interrupt\n", "via2"); - if (request_irq(IRQ_MAC_NUBUS, via_nubus_irq, - IRQ_FLG_LOCK|IRQ_FLG_FAST, "nubus", (void *) via2)) - pr_err("Couldn't register %s interrupt\n", "nubus"); -} - -/* * Debugging dump, used in various places to see what's going on. */ @@ -446,48 +415,46 @@ void __init via_nubus_init(void) * via6522.c :-), disable/pending masks added. 
*/ -irqreturn_t via1_irq(int irq, void *dev_id) +void via1_irq(unsigned int irq, struct irq_desc *desc) { int irq_num; unsigned char irq_bit, events; events = via1[vIFR] & via1[vIER] & 0x7F; if (!events) - return IRQ_NONE; + return; irq_num = VIA1_SOURCE_BASE; irq_bit = 1; do { if (events & irq_bit) { via1[vIFR] = irq_bit; - m68k_handle_int(irq_num); + generic_handle_irq(irq_num); } ++irq_num; irq_bit <<= 1; } while (events >= irq_bit); - return IRQ_HANDLED; } -irqreturn_t via2_irq(int irq, void *dev_id) +static void via2_irq(unsigned int irq, struct irq_desc *desc) { int irq_num; unsigned char irq_bit, events; events = via2[gIFR] & via2[gIER] & 0x7F; if (!events) - return IRQ_NONE; + return; irq_num = VIA2_SOURCE_BASE; irq_bit = 1; do { if (events & irq_bit) { via2[gIFR] = irq_bit | rbv_clear; - m68k_handle_int(irq_num); + generic_handle_irq(irq_num); } ++irq_num; irq_bit <<= 1; } while (events >= irq_bit); - return IRQ_HANDLED; } /* @@ -495,7 +462,7 @@ irqreturn_t via2_irq(int irq, void *dev_id) * VIA2 dispatcher as a fast interrupt handler. */ -irqreturn_t via_nubus_irq(int irq, void *dev_id) +void via_nubus_irq(unsigned int irq, struct irq_desc *desc) { int slot_irq; unsigned char slot_bit, events; @@ -506,7 +473,7 @@ irqreturn_t via_nubus_irq(int irq, void *dev_id) else events &= ~via2[vDirA]; if (!events) - return IRQ_NONE; + return; do { slot_irq = IRQ_NUBUS_F; @@ -514,7 +481,7 @@ irqreturn_t via_nubus_irq(int irq, void *dev_id) do { if (events & slot_bit) { events &= ~slot_bit; - m68k_handle_int(slot_irq); + generic_handle_irq(slot_irq); } --slot_irq; slot_bit >>= 1; @@ -528,7 +495,24 @@ irqreturn_t via_nubus_irq(int irq, void *dev_id) else events &= ~via2[vDirA]; } while (events); - return IRQ_HANDLED; +} + +/* + * Register the interrupt dispatchers for VIA or RBV machines only. 
+ */ + +void __init via_register_interrupts(void) +{ + if (via_alt_mapping) { + /* software interrupt */ + irq_set_chained_handler(IRQ_AUTO_1, via1_irq); + /* via1 interrupt */ + irq_set_chained_handler(IRQ_AUTO_6, via1_irq); + } else { + irq_set_chained_handler(IRQ_AUTO_1, via1_irq); + } + irq_set_chained_handler(IRQ_AUTO_2, via2_irq); + irq_set_chained_handler(IRQ_MAC_NUBUS, via_nubus_irq); } void via_irq_enable(int irq) { diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c index 6cb9c3a9b6c9..5de924ef42ed 100644 --- a/arch/m68k/mvme147/config.c +++ b/arch/m68k/mvme147/config.c @@ -81,7 +81,7 @@ static void mvme147_get_model(char *model) void __init mvme147_init_IRQ(void) { - m68k_setup_user_interrupt(VEC_USER, 192, NULL); + m68k_setup_user_interrupt(VEC_USER, 192); } void __init config_mvme147(void) @@ -114,8 +114,7 @@ static irqreturn_t mvme147_timer_int (int irq, void *dev_id) void mvme147_sched_init (irq_handler_t timer_routine) { tick_handler = timer_routine; - if (request_irq(PCC_IRQ_TIMER1, mvme147_timer_int, IRQ_FLG_REPLACE, - "timer 1", NULL)) + if (request_irq(PCC_IRQ_TIMER1, mvme147_timer_int, 0, "timer 1", NULL)) pr_err("Couldn't register timer interrupt\n"); /* Init the clock with a value */ diff --git a/arch/m68k/mvme16x/config.c b/arch/m68k/mvme16x/config.c index 0b28e2621653..31a66d99cbca 100644 --- a/arch/m68k/mvme16x/config.c +++ b/arch/m68k/mvme16x/config.c @@ -117,7 +117,7 @@ static void mvme16x_get_hardware_list(struct seq_file *m) static void __init mvme16x_init_IRQ (void) { - m68k_setup_user_interrupt(VEC_USER, 192, NULL); + m68k_setup_user_interrupt(VEC_USER, 192); } #define pcc2chip ((volatile u_char *)0xfff42000) diff --git a/arch/m68k/q40/q40ints.c b/arch/m68k/q40/q40ints.c index 9f0e3d59bf92..2b888491f29a 100644 --- a/arch/m68k/q40/q40ints.c +++ b/arch/m68k/q40/q40ints.c @@ -15,10 +15,10 @@ #include <linux/kernel.h> #include <linux/errno.h> #include <linux/interrupt.h> +#include <linux/irq.h> #include <asm/ptrace.h> #include <asm/system.h> -#include <asm/irq.h> #include <asm/traps.h> #include <asm/q40_master.h> @@ -35,35 +35,36 @@ */ static void q40_irq_handler(unsigned int, struct pt_regs *fp); -static void q40_enable_irq(unsigned int); -static void q40_disable_irq(unsigned int); +static void q40_irq_enable(struct irq_data *data); +static void q40_irq_disable(struct irq_data *data); unsigned short q40_ablecount[35]; unsigned short q40_state[35]; -static int q40_irq_startup(unsigned int irq) +static unsigned int q40_irq_startup(struct irq_data *data) { + unsigned int irq = data->irq; + /* test for ISA ints not implemented by HW */ switch (irq) { case 1: case 2: case 8: case 9: case 11: case 12: case 13: printk("%s: ISA IRQ %d not implemented by HW\n", __func__, irq); - return -ENXIO; + /* FIXME return -ENXIO; */ } return 0; } -static void q40_irq_shutdown(unsigned int irq) +static void q40_irq_shutdown(struct irq_data *data) { } -static struct irq_controller q40_irq_controller = { +static struct irq_chip q40_irq_chip = { .name = "q40", - .lock = __SPIN_LOCK_UNLOCKED(q40_irq_controller.lock), - .startup = q40_irq_startup, - .shutdown = q40_irq_shutdown, - .enable = q40_enable_irq, - .disable = q40_disable_irq, + .irq_startup = q40_irq_startup, + .irq_shutdown = q40_irq_shutdown, + .irq_enable = q40_irq_enable, + .irq_disable = q40_irq_disable, }; /* @@ -81,13 +82,14 @@ static int disabled; void __init q40_init_IRQ(void) { - m68k_setup_irq_controller(&q40_irq_controller, 1, Q40_IRQ_MAX); + m68k_setup_irq_controller(&q40_irq_chip, 
handle_simple_irq, 1, + Q40_IRQ_MAX); /* setup handler for ISA ints */ m68k_setup_auto_interrupt(q40_irq_handler); - m68k_irq_startup(IRQ_AUTO_2); - m68k_irq_startup(IRQ_AUTO_4); + m68k_irq_startup_irq(IRQ_AUTO_2); + m68k_irq_startup_irq(IRQ_AUTO_4); /* now enable some ints.. */ master_outb(1, EXT_ENABLE_REG); /* ISA IRQ 5-15 */ @@ -218,11 +220,11 @@ static void q40_irq_handler(unsigned int irq, struct pt_regs *fp) switch (irq) { case 4: case 6: - __m68k_handle_int(Q40_IRQ_SAMPLE, fp); + do_IRQ(Q40_IRQ_SAMPLE, fp); return; } if (mir & Q40_IRQ_FRAME_MASK) { - __m68k_handle_int(Q40_IRQ_FRAME, fp); + do_IRQ(Q40_IRQ_FRAME, fp); master_outb(-1, FRAME_CLEAR_REG); } if ((mir & Q40_IRQ_SER_MASK) || (mir & Q40_IRQ_EXT_MASK)) { @@ -257,7 +259,7 @@ static void q40_irq_handler(unsigned int irq, struct pt_regs *fp) goto iirq; } q40_state[irq] |= IRQ_INPROGRESS; - __m68k_handle_int(irq, fp); + do_IRQ(irq, fp); q40_state[irq] &= ~IRQ_INPROGRESS; /* naively enable everything, if that fails than */ @@ -288,25 +290,29 @@ static void q40_irq_handler(unsigned int irq, struct pt_regs *fp) mir = master_inb(IIRQ_REG); /* should test whether keyboard irq is really enabled, doing it in defhand */ if (mir & Q40_IRQ_KEYB_MASK) - __m68k_handle_int(Q40_IRQ_KEYBOARD, fp); + do_IRQ(Q40_IRQ_KEYBOARD, fp); return; } -void q40_enable_irq(unsigned int irq) +void q40_irq_enable(struct irq_data *data) { + unsigned int irq = data->irq; + if (irq >= 5 && irq <= 15) { mext_disabled--; if (mext_disabled > 0) - printk("q40_enable_irq : nested disable/enable\n"); + printk("q40_irq_enable : nested disable/enable\n"); if (mext_disabled == 0) master_outb(1, EXT_ENABLE_REG); } } -void q40_disable_irq(unsigned int irq) +void q40_irq_disable(struct irq_data *data) { + unsigned int irq = data->irq; + /* disable ISA iqs : only do something if the driver has been * verified to be Q40 "compatible" - right now IDE, NE2K * Any driver should not attempt to sleep across disable_irq !! 
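
The q40, Mac (Baboon/OSS/PSC/VIA) and Sun3 hunks in this series all follow the same conversion: the old private irq_controller/irq_node framework is replaced by the generic irq layer, with an irq_chip per interrupt range, a chained flow handler on the parent line, and generic_handle_irq()/do_IRQ() instead of m68k_handle_int()/__m68k_handle_int(). A minimal illustrative sketch of that pattern (not part of the patch) is below; the example_* names, EXAMPLE_IRQ_BASE and the stub pending-register read are hypothetical, while irq_set_chip_and_handler(), irq_set_chained_handler(), generic_handle_irq(), handle_simple_irq and IRQ_AUTO_2 are the interfaces actually used in the hunks above.

#include <linux/init.h>
#include <linux/irq.h>

#define EXAMPLE_IRQ_BASE	64	/* hypothetical first cascaded source */
#define EXAMPLE_NUM_IRQS	8

/* Stand-in for reading a hardware "pending" register; hypothetical. */
static unsigned char example_read_pending(void)
{
	return 0;
}

static void example_irq_enable(struct irq_data *data)
{
	/* unmask bit (data->irq - EXAMPLE_IRQ_BASE) in the device's enable register */
}

static void example_irq_disable(struct irq_data *data)
{
	/* mask bit (data->irq - EXAMPLE_IRQ_BASE) in the device's enable register */
}

static struct irq_chip example_irq_chip = {
	.name		= "example",
	.irq_enable	= example_irq_enable,
	.irq_disable	= example_irq_disable,
};

/* Chained flow handler: demultiplex one pending bit per cascaded source. */
static void example_dispatch(unsigned int irq, struct irq_desc *desc)
{
	unsigned char events = example_read_pending();
	unsigned int n = EXAMPLE_IRQ_BASE;

	while (events) {
		if (events & 1)
			generic_handle_irq(n);
		events >>= 1;
		n++;
	}
}

static void __init example_init_IRQ(void)
{
	int i;

	/* Give each cascaded source the example chip and a simple flow handler. */
	for (i = 0; i < EXAMPLE_NUM_IRQS; i++)
		irq_set_chip_and_handler(EXAMPLE_IRQ_BASE + i,
					 &example_irq_chip, handle_simple_irq);

	/* Hook the demultiplexer onto the parent line; IRQ_AUTO_2 is only an example. */
	irq_set_chained_handler(IRQ_AUTO_2, example_dispatch);
}

The baboon_irq(), oss_nubus_irq(), psc_irq() and via1_irq() handlers changed above have essentially this shape, with the pending/enable accesses replaced by the real hardware registers and, where needed, per-line data passed via irq_set_handler_data()/irq_desc_get_handler_data().
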
@@ -319,13 +325,3 @@ void q40_disable_irq(unsigned int irq) printk("disable_irq nesting count %d\n",mext_disabled); } } - -unsigned long q40_probe_irq_on(void) -{ - printk("irq probing not working - reconfigure the driver to avoid this\n"); - return -1; -} -int q40_probe_irq_off(unsigned long irqs) -{ - return -1; -} diff --git a/arch/m68k/sun3/sun3ints.c b/arch/m68k/sun3/sun3ints.c index 6464ad3ae3e6..78b60f53e90a 100644 --- a/arch/m68k/sun3/sun3ints.c +++ b/arch/m68k/sun3/sun3ints.c @@ -51,25 +51,29 @@ void sun3_disable_irq(unsigned int irq) static irqreturn_t sun3_int7(int irq, void *dev_id) { - *sun3_intreg |= (1 << irq); - if (!(kstat_cpu(0).irqs[irq] % 2000)) - sun3_leds(led_pattern[(kstat_cpu(0).irqs[irq] % 16000) / 2000]); + unsigned int cnt; + + cnt = kstat_irqs_cpu(irq, 0); + if (!(cnt % 2000)) + sun3_leds(led_pattern[cnt % 16000 / 2000]); return IRQ_HANDLED; } static irqreturn_t sun3_int5(int irq, void *dev_id) { + unsigned int cnt; + #ifdef CONFIG_SUN3 intersil_clear(); #endif - *sun3_intreg |= (1 << irq); #ifdef CONFIG_SUN3 intersil_clear(); #endif xtime_update(1); update_process_times(user_mode(get_irq_regs())); - if (!(kstat_cpu(0).irqs[irq] % 20)) - sun3_leds(led_pattern[(kstat_cpu(0).irqs[irq] % 160) / 20]); + cnt = kstat_irqs_cpu(irq, 0); + if (!(cnt % 20)) + sun3_leds(led_pattern[cnt % 160 / 20]); return IRQ_HANDLED; } @@ -79,29 +83,33 @@ static irqreturn_t sun3_vec255(int irq, void *dev_id) return IRQ_HANDLED; } -static void sun3_inthandle(unsigned int irq, struct pt_regs *fp) +static void sun3_irq_enable(struct irq_data *data) { - *sun3_intreg &= ~(1 << irq); + sun3_enable_irq(data->irq); +}; - __m68k_handle_int(irq, fp); -} +static void sun3_irq_disable(struct irq_data *data) +{ + sun3_disable_irq(data->irq); +}; -static struct irq_controller sun3_irq_controller = { +static struct irq_chip sun3_irq_chip = { .name = "sun3", - .lock = __SPIN_LOCK_UNLOCKED(sun3_irq_controller.lock), - .startup = m68k_irq_startup, - .shutdown = m68k_irq_shutdown, - .enable = sun3_enable_irq, - .disable = sun3_disable_irq, + .irq_startup = m68k_irq_startup, + .irq_shutdown = m68k_irq_shutdown, + .irq_enable = sun3_irq_enable, + .irq_disable = sun3_irq_disable, + .irq_mask = sun3_irq_disable, + .irq_unmask = sun3_irq_enable, }; void __init sun3_init_IRQ(void) { *sun3_intreg = 1; - m68k_setup_auto_interrupt(sun3_inthandle); - m68k_setup_irq_controller(&sun3_irq_controller, IRQ_AUTO_1, 7); - m68k_setup_user_interrupt(VEC_USER, 128, NULL); + m68k_setup_irq_controller(&sun3_irq_chip, handle_level_irq, IRQ_AUTO_1, + 7); + m68k_setup_user_interrupt(VEC_USER, 128); if (request_irq(IRQ_AUTO_5, sun3_int5, 0, "int5", NULL)) pr_err("Couldn't register %s interrupt\n", "int5"); diff --git a/arch/mips/Makefile b/arch/mips/Makefile index 9b4cb00407d7..0be318609fc6 100644 --- a/arch/mips/Makefile +++ b/arch/mips/Makefile @@ -286,11 +286,11 @@ CLEAN_FILES += vmlinux.32 vmlinux.64 archprepare: ifdef CONFIG_MIPS32_N32 @echo ' Checking missing-syscalls for N32' - $(Q)$(MAKE) $(build)=. missing-syscalls ccflags-y="-mabi=n32" + $(Q)$(MAKE) $(build)=. missing-syscalls missing_syscalls_flags="-mabi=n32" endif ifdef CONFIG_MIPS32_O32 @echo ' Checking missing-syscalls for O32' - $(Q)$(MAKE) $(build)=. missing-syscalls ccflags-y="-mabi=32" + $(Q)$(MAKE) $(build)=. 
missing-syscalls missing_syscalls_flags="-mabi=32" endif install: diff --git a/arch/powerpc/boot/dts/charon.dts b/arch/powerpc/boot/dts/charon.dts new file mode 100644 index 000000000000..0e00e508eaa6 --- /dev/null +++ b/arch/powerpc/boot/dts/charon.dts @@ -0,0 +1,236 @@ +/* + * charon board Device Tree Source + * + * Copyright (C) 2007 Semihalf + * Marian Balakowicz <m8@semihalf.com> + * + * Copyright (C) 2010 DENX Software Engineering GmbH + * Heiko Schocher <hs@denx.de> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + */ + +/dts-v1/; + +/ { + model = "anon,charon"; + compatible = "anon,charon"; + #address-cells = <1>; + #size-cells = <1>; + interrupt-parent = <&mpc5200_pic>; + + cpus { + #address-cells = <1>; + #size-cells = <0>; + + PowerPC,5200@0 { + device_type = "cpu"; + reg = <0>; + d-cache-line-size = <32>; + i-cache-line-size = <32>; + d-cache-size = <0x4000>; // L1, 16K + i-cache-size = <0x4000>; // L1, 16K + timebase-frequency = <0>; // from bootloader + bus-frequency = <0>; // from bootloader + clock-frequency = <0>; // from bootloader + }; + }; + + memory { + device_type = "memory"; + reg = <0x00000000 0x08000000>; // 128MB + }; + + soc5200@f0000000 { + #address-cells = <1>; + #size-cells = <1>; + compatible = "fsl,mpc5200-immr"; + ranges = <0 0xf0000000 0x0000c000>; + reg = <0xf0000000 0x00000100>; + bus-frequency = <0>; // from bootloader + system-frequency = <0>; // from bootloader + + cdm@200 { + compatible = "fsl,mpc5200-cdm"; + reg = <0x200 0x38>; + }; + + mpc5200_pic: interrupt-controller@500 { + // 5200 interrupts are encoded into two levels; + interrupt-controller; + #interrupt-cells = <3>; + compatible = "fsl,mpc5200-pic"; + reg = <0x500 0x80>; + }; + + timer@600 { // General Purpose Timer + compatible = "fsl,mpc5200-gpt"; + reg = <0x600 0x10>; + interrupts = <1 9 0>; + fsl,has-wdt; + }; + + can@900 { + compatible = "fsl,mpc5200-mscan"; + interrupts = <2 17 0>; + reg = <0x900 0x80>; + }; + + can@980 { + compatible = "fsl,mpc5200-mscan"; + interrupts = <2 18 0>; + reg = <0x980 0x80>; + }; + + gpio_simple: gpio@b00 { + compatible = "fsl,mpc5200-gpio"; + reg = <0xb00 0x40>; + interrupts = <1 7 0>; + gpio-controller; + #gpio-cells = <2>; + }; + + usb@1000 { + compatible = "fsl,mpc5200-ohci","ohci-be"; + reg = <0x1000 0xff>; + interrupts = <2 6 0>; + }; + + dma-controller@1200 { + device_type = "dma-controller"; + compatible = "fsl,mpc5200-bestcomm"; + reg = <0x1200 0x80>; + interrupts = <3 0 0 3 1 0 3 2 0 3 3 0 + 3 4 0 3 5 0 3 6 0 3 7 0 + 3 8 0 3 9 0 3 10 0 3 11 0 + 3 12 0 3 13 0 3 14 0 3 15 0>; + }; + + xlb@1f00 { + compatible = "fsl,mpc5200-xlb"; + reg = <0x1f00 0x100>; + }; + + serial@2000 { // PSC1 + compatible = "fsl,mpc5200-psc-uart"; + reg = <0x2000 0x100>; + interrupts = <2 1 0>; + }; + + serial@2400 { // PSC3 + compatible = "fsl,mpc5200-psc-uart"; + reg = <0x2400 0x100>; + interrupts = <2 3 0>; + }; + + ethernet@3000 { + compatible = "fsl,mpc5200-fec"; + reg = <0x3000 0x400>; + local-mac-address = [ 00 00 00 00 00 00 ]; + interrupts = <2 5 0>; + fixed-link = <1 1 100 0 0>; + }; + + mdio@3000 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "fsl,mpc5200-mdio"; + reg = <0x3000 0x400>; // fec range, since we need to setup fec interrupts + interrupts = <2 5 0>; // these are for "mii command finished", not link changes & co. 
+ }; + + ata@3a00 { + compatible = "fsl,mpc5200-ata"; + reg = <0x3a00 0x100>; + interrupts = <2 7 0>; + }; + + i2c@3d00 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "fsl,mpc5200-i2c","fsl-i2c"; + reg = <0x3d00 0x40>; + interrupts = <2 15 0>; + }; + + + i2c@3d40 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "fsl,mpc5200-i2c","fsl-i2c"; + reg = <0x3d40 0x40>; + interrupts = <2 16 0>; + + dtt@28 { + compatible = "national,lm80"; + reg = <0x28>; + }; + + rtc@68 { + compatible = "dallas,ds1374"; + reg = <0x68>; + }; + }; + + sram@8000 { + compatible = "fsl,mpc5200-sram"; + reg = <0x8000 0x4000>; + }; + }; + + localbus { + compatible = "fsl,mpc5200-lpb","simple-bus"; + #address-cells = <2>; + #size-cells = <1>; + ranges = < 0 0 0xfc000000 0x02000000 + 1 0 0xe0000000 0x04000000 // CS1 range, SM501 + 3 0 0xe8000000 0x00080000>; + + flash@0,0 { + compatible = "cfi-flash"; + reg = <0 0 0x02000000>; + bank-width = <4>; + device-width = <2>; + #size-cells = <1>; + #address-cells = <1>; + }; + + display@1,0 { + compatible = "smi,sm501"; + reg = <1 0x00000000 0x00800000 + 1 0x03e00000 0x00200000>; + mode = "640x480-32@60"; + interrupts = <1 1 3>; + little-endian; + }; + + mram0@3,0 { + compatible = "mtd-ram"; + reg = <3 0x00000 0x80000>; + bank-width = <1>; + }; + }; + + pci@f0000d00 { + #interrupt-cells = <1>; + #size-cells = <2>; + #address-cells = <3>; + device_type = "pci"; + compatible = "fsl,mpc5200-pci"; + reg = <0xf0000d00 0x100>; + interrupt-map-mask = <0xf800 0 0 7>; + interrupt-map = <0xc000 0 0 1 &mpc5200_pic 0 0 3 + 0xc000 0 0 2 &mpc5200_pic 0 0 3 + 0xc000 0 0 3 &mpc5200_pic 0 0 3 + 0xc000 0 0 4 &mpc5200_pic 0 0 3>; + clock-frequency = <0>; // From boot loader + interrupts = <2 8 0 2 9 0 2 10 0>; + bus-range = <0 0>; + ranges = <0x42000000 0 0x80000000 0x80000000 0 0x10000000 + 0x02000000 0 0x90000000 0x90000000 0 0x10000000 + 0x01000000 0 0x00000000 0xa0000000 0 0x01000000>; + }; +}; diff --git a/arch/powerpc/configs/52xx/tqm5200_defconfig b/arch/powerpc/configs/52xx/tqm5200_defconfig index 959cd2cfc275..716a37be16e3 100644 --- a/arch/powerpc/configs/52xx/tqm5200_defconfig +++ b/arch/powerpc/configs/52xx/tqm5200_defconfig @@ -1,9 +1,10 @@ CONFIG_EXPERIMENTAL=y CONFIG_SYSVIPC=y +CONFIG_SPARSE_IRQ=y CONFIG_LOG_BUF_SHIFT=14 CONFIG_BLK_DEV_INITRD=y # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set -CONFIG_EXPERT=y +CONFIG_EMBEDDED=y # CONFIG_SYSCTL_SYSCALL is not set # CONFIG_KALLSYMS is not set # CONFIG_EPOLL is not set @@ -17,7 +18,6 @@ CONFIG_PPC_MPC5200_SIMPLE=y CONFIG_PPC_MPC5200_BUGFIX=y # CONFIG_PPC_PMAC is not set CONFIG_PPC_BESTCOMM=y -CONFIG_SPARSE_IRQ=y CONFIG_PM=y # CONFIG_PCI is not set CONFIG_NET=y @@ -38,17 +38,18 @@ CONFIG_MTD=y CONFIG_MTD_CONCAT=y CONFIG_MTD_PARTITIONS=y CONFIG_MTD_CMDLINE_PARTS=y +CONFIG_MTD_OF_PARTS=y CONFIG_MTD_CHAR=y CONFIG_MTD_BLOCK=y CONFIG_MTD_CFI=y CONFIG_MTD_CFI_AMDSTD=y CONFIG_MTD_ROM=y CONFIG_MTD_PHYSMAP_OF=y +CONFIG_MTD_PLATRAM=y CONFIG_PROC_DEVICETREE=y CONFIG_BLK_DEV_LOOP=y CONFIG_BLK_DEV_RAM=y CONFIG_BLK_DEV_RAM_SIZE=32768 -# CONFIG_MISC_DEVICES is not set CONFIG_BLK_DEV_SD=y CONFIG_CHR_DEV_SG=y CONFIG_ATA=y @@ -56,13 +57,11 @@ CONFIG_PATA_MPC52xx=y CONFIG_PATA_PLATFORM=y CONFIG_NETDEVICES=y CONFIG_LXT_PHY=y +CONFIG_FIXED_PHY=y CONFIG_NET_ETHERNET=y CONFIG_FEC_MPC52xx=y # CONFIG_NETDEV_1000 is not set # CONFIG_NETDEV_10000 is not set -# CONFIG_INPUT is not set -# CONFIG_SERIO is not set -# CONFIG_VT is not set CONFIG_SERIAL_MPC52xx=y CONFIG_SERIAL_MPC52xx_CONSOLE=y CONFIG_SERIAL_MPC52xx_CONSOLE_BAUD=115200 @@ -70,7 +69,13 
@@ CONFIG_SERIAL_MPC52xx_CONSOLE_BAUD=115200 CONFIG_I2C=y CONFIG_I2C_CHARDEV=y CONFIG_I2C_MPC=y +CONFIG_SENSORS_LM80=y CONFIG_WATCHDOG=y +CONFIG_MFD_SM501=y +CONFIG_FB=y +CONFIG_FB_FOREIGN_ENDIAN=y +CONFIG_FB_SM501=y +CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_USB=y CONFIG_USB_DEVICEFS=y # CONFIG_USB_DEVICE_CLASS is not set @@ -80,10 +85,10 @@ CONFIG_USB_OHCI_HCD_PPC_OF_BE=y CONFIG_USB_STORAGE=y CONFIG_RTC_CLASS=y CONFIG_RTC_DRV_DS1307=y +CONFIG_RTC_DRV_DS1374=y CONFIG_EXT2_FS=y CONFIG_EXT3_FS=y # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set -CONFIG_INOTIFY=y CONFIG_MSDOS_FS=y CONFIG_VFAT_FS=y CONFIG_PROC_KCORE=y @@ -102,7 +107,6 @@ CONFIG_DEBUG_KERNEL=y CONFIG_DETECT_HUNG_TASK=y # CONFIG_DEBUG_BUGVERBOSE is not set CONFIG_DEBUG_INFO=y -# CONFIG_RCU_CPU_STALL_DETECTOR is not set CONFIG_CRYPTO_ECB=y CONFIG_CRYPTO_PCBC=y # CONFIG_CRYPTO_ANSI_CPRNG is not set diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig index 84a685a505fe..535711fcb13c 100644 --- a/arch/powerpc/configs/ppc64_defconfig +++ b/arch/powerpc/configs/ppc64_defconfig @@ -485,3 +485,7 @@ CONFIG_CRYPTO_TWOFISH=m CONFIG_CRYPTO_LZO=m # CONFIG_CRYPTO_ANSI_CPRNG is not set # CONFIG_CRYPTO_HW is not set +CONFIG_VIRTUALIZATION=y +CONFIG_KVM_BOOK3S_64=m +CONFIG_KVM_BOOK3S_64_HV=y +CONFIG_VHOST_NET=m diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig index 96a58b709705..a72f2415a647 100644 --- a/arch/powerpc/configs/pseries_defconfig +++ b/arch/powerpc/configs/pseries_defconfig @@ -362,3 +362,7 @@ CONFIG_CRYPTO_TWOFISH=m CONFIG_CRYPTO_LZO=m # CONFIG_CRYPTO_ANSI_CPRNG is not set # CONFIG_CRYPTO_HW is not set +CONFIG_VIRTUALIZATION=y +CONFIG_KVM_BOOK3S_64=m +CONFIG_KVM_BOOK3S_64_HV=y +CONFIG_VHOST_NET=m diff --git a/arch/powerpc/include/asm/floppy.h b/arch/powerpc/include/asm/floppy.h index 24bd34c57e9d..936a904ae78c 100644 --- a/arch/powerpc/include/asm/floppy.h +++ b/arch/powerpc/include/asm/floppy.h @@ -108,10 +108,10 @@ static int fd_request_irq(void) { if (can_use_virtual_dma) return request_irq(FLOPPY_IRQ, floppy_hardint, - IRQF_DISABLED, "floppy", NULL); + 0, "floppy", NULL); else return request_irq(FLOPPY_IRQ, floppy_interrupt, - IRQF_DISABLED, "floppy", NULL); + 0, "floppy", NULL); } static int vdma_dma_setup(char *addr, unsigned long size, int mode, int io) diff --git a/arch/powerpc/include/asm/lv1call.h b/arch/powerpc/include/asm/lv1call.h index 9cd5fc828a37..f77c708c67a0 100644 --- a/arch/powerpc/include/asm/lv1call.h +++ b/arch/powerpc/include/asm/lv1call.h @@ -316,7 +316,7 @@ LV1_CALL(gpu_context_free, 1, 0, 218 ) LV1_CALL(gpu_context_iomap, 5, 0, 221 ) LV1_CALL(gpu_context_attribute, 6, 0, 225 ) LV1_CALL(gpu_context_intr, 1, 1, 227 ) -LV1_CALL(gpu_attribute, 5, 0, 228 ) +LV1_CALL(gpu_attribute, 3, 0, 228 ) LV1_CALL(get_rtc, 0, 2, 232 ) LV1_CALL(set_ppe_periodic_tracer_frequency, 1, 0, 240 ) LV1_CALL(start_ppe_periodic_tracer, 5, 0, 241 ) diff --git a/arch/powerpc/include/asm/xics.h b/arch/powerpc/include/asm/xics.h index bd6c401c0ee5..c48de98ba94e 100644 --- a/arch/powerpc/include/asm/xics.h +++ b/arch/powerpc/include/asm/xics.h @@ -15,8 +15,8 @@ #define DEFAULT_PRIORITY 5 /* - * Mark IPIs as higher priority so we can take them inside interrupts that - * arent marked IRQF_DISABLED + * Mark IPIs as higher priority so we can take them inside interrupts + * FIXME: still true now? 
*/ #define IPI_PRIORITY 4 diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index a54d92fec612..cf9c69b9189c 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -267,7 +267,7 @@ vsx_unavailable_pSeries_1: #ifdef CONFIG_CBE_RAS STD_EXCEPTION_HV(0x1200, 0x1202, cbe_system_error) - KVM_HANDLER_PR_SKIP(PACA_EXGEN, EXC_HV, 0x1202) + KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1202) #endif /* CONFIG_CBE_RAS */ STD_EXCEPTION_PSERIES(0x1300, 0x1300, instruction_breakpoint) @@ -275,7 +275,7 @@ vsx_unavailable_pSeries_1: #ifdef CONFIG_CBE_RAS STD_EXCEPTION_HV(0x1600, 0x1602, cbe_maintenance) - KVM_HANDLER_PR_SKIP(PACA_EXGEN, EXC_HV, 0x1602) + KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1602) #endif /* CONFIG_CBE_RAS */ STD_EXCEPTION_PSERIES(0x1700, 0x1700, altivec_assist) @@ -283,7 +283,7 @@ vsx_unavailable_pSeries_1: #ifdef CONFIG_CBE_RAS STD_EXCEPTION_HV(0x1800, 0x1802, cbe_thermal) - KVM_HANDLER_PR_SKIP(PACA_EXGEN, EXC_HV, 0x1802) + KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1802) #endif /* CONFIG_CBE_RAS */ . = 0x3000 diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 25ddbfc7dd36..6df70907d60a 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -187,7 +187,7 @@ int smp_request_message_ipi(int virq, int msg) return 1; } #endif - err = request_irq(virq, smp_ipi_action[msg], IRQF_DISABLED|IRQF_PERCPU, + err = request_irq(virq, smp_ipi_action[msg], IRQF_PERCPU, smp_ipi_name[msg], 0); WARN(err < 0, "unable to request_irq %d for %s (rc %d)\n", virq, smp_ipi_name[msg], err); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index f422231d9235..44d8829334ab 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1263,7 +1263,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_206) addi r6,r5,VCORE_NAPPING_THREADS 31: lwarx r4,0,r6 or r4,r4,r0 - popcntw r7,r4 + PPC_POPCNTW(r7,r4) cmpw r7,r8 bge 2f stwcx. r4,0,r6 diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 16da595ff402..2dd6bdd31fe1 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -34,6 +34,7 @@ #include <linux/suspend.h> #include <linux/memblock.h> #include <linux/hugetlb.h> +#include <linux/slab.h> #include <asm/pgalloc.h> #include <asm/prom.h> @@ -555,3 +556,32 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, book3e_hugetlb_preload(vma->vm_mm, address, *ptep); #endif } + +/* + * System memory should not be in /proc/iomem but various tools expect it + * (eg kdump). 
+ */ +static int add_system_ram_resources(void) +{ + struct memblock_region *reg; + + for_each_memblock(memory, reg) { + struct resource *res; + unsigned long base = reg->base; + unsigned long size = reg->size; + + res = kzalloc(sizeof(struct resource), GFP_KERNEL); + WARN_ON(!res); + + if (res) { + res->name = "System RAM"; + res->start = base; + res->end = base + size - 1; + res->flags = IORESOURCE_MEM; + WARN_ON(request_resource(&iomem_resource, res) < 0); + } + } + + return 0; +} +subsys_initcall(add_system_ram_resources); diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index c7dd4dec4df8..b22a83a91cb8 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -315,7 +315,10 @@ static int __init find_min_common_depth(void) struct device_node *root; const char *vec5; - root = of_find_node_by_path("/rtas"); + if (firmware_has_feature(FW_FEATURE_OPAL)) + root = of_find_node_by_path("/ibm,opal"); + else + root = of_find_node_by_path("/rtas"); if (!root) root = of_find_node_by_path("/"); @@ -344,12 +347,19 @@ static int __init find_min_common_depth(void) #define VEC5_AFFINITY_BYTE 5 #define VEC5_AFFINITY 0x80 - chosen = of_find_node_by_path("/chosen"); - if (chosen) { - vec5 = of_get_property(chosen, "ibm,architecture-vec-5", NULL); - if (vec5 && (vec5[VEC5_AFFINITY_BYTE] & VEC5_AFFINITY)) { - dbg("Using form 1 affinity\n"); - form1_affinity = 1; + + if (firmware_has_feature(FW_FEATURE_OPAL)) + form1_affinity = 1; + else { + chosen = of_find_node_by_path("/chosen"); + if (chosen) { + vec5 = of_get_property(chosen, + "ibm,architecture-vec-5", NULL); + if (vec5 && (vec5[VEC5_AFFINITY_BYTE] & + VEC5_AFFINITY)) { + dbg("Using form 1 affinity\n"); + form1_affinity = 1; + } } } diff --git a/arch/powerpc/platforms/52xx/mpc5200_simple.c b/arch/powerpc/platforms/52xx/mpc5200_simple.c index e36d6e232ae6..846b789fb195 100644 --- a/arch/powerpc/platforms/52xx/mpc5200_simple.c +++ b/arch/powerpc/platforms/52xx/mpc5200_simple.c @@ -50,6 +50,7 @@ static void __init mpc5200_simple_setup_arch(void) /* list of the supported boards */ static const char *board[] __initdata = { + "anon,charon", "intercontrol,digsy-mtc", "manroland,mucmc52", "manroland,uc101", diff --git a/arch/powerpc/platforms/cell/beat.c b/arch/powerpc/platforms/cell/beat.c index 232fc384e855..852592b2b712 100644 --- a/arch/powerpc/platforms/cell/beat.c +++ b/arch/powerpc/platforms/cell/beat.c @@ -230,7 +230,7 @@ static int __init beat_register_event(void) } ev->virq = virq; - rc = request_irq(virq, ev->handler, IRQF_DISABLED, + rc = request_irq(virq, ev->handler, 0, ev->typecode, NULL); if (rc != 0) { printk(KERN_ERR "Beat: failed to request virtual IRQ" diff --git a/arch/powerpc/platforms/cell/celleb_scc_pciex.c b/arch/powerpc/platforms/cell/celleb_scc_pciex.c index ae790ac4a589..14be2bd358b8 100644 --- a/arch/powerpc/platforms/cell/celleb_scc_pciex.c +++ b/arch/powerpc/platforms/cell/celleb_scc_pciex.c @@ -514,7 +514,7 @@ static __init int celleb_setup_pciex(struct device_node *node, virq = irq_create_of_mapping(oirq.controller, oirq.specifier, oirq.size); if (request_irq(virq, pciex_handle_internal_irq, - IRQF_DISABLED, "pciex", (void *)phb)) { + 0, "pciex", (void *)phb)) { pr_err("PCIEXC:Failed to request irq\n"); goto error; } diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c index fc46fcac3921..592c3d51b817 100644 --- a/arch/powerpc/platforms/cell/iommu.c +++ b/arch/powerpc/platforms/cell/iommu.c @@ -412,8 +412,7 @@ static void cell_iommu_enable_hardware(struct cbe_iommu 
*iommu) IIC_IRQ_IOEX_ATI | (iommu->nid << IIC_IRQ_NODE_SHIFT)); BUG_ON(virq == NO_IRQ); - ret = request_irq(virq, ioc_interrupt, IRQF_DISABLED, - iommu->name, iommu); + ret = request_irq(virq, ioc_interrupt, 0, iommu->name, iommu); BUG_ON(ret); /* set the IOC segment table origin register (and turn on the iommu) */ diff --git a/arch/powerpc/platforms/cell/pmu.c b/arch/powerpc/platforms/cell/pmu.c index 1acf36010423..59c1a1694104 100644 --- a/arch/powerpc/platforms/cell/pmu.c +++ b/arch/powerpc/platforms/cell/pmu.c @@ -392,7 +392,7 @@ static int __init cbe_init_pm_irq(void) } rc = request_irq(irq, cbe_pm_irq, - IRQF_DISABLED, "cbe-pmu-0", NULL); + 0, "cbe-pmu-0", NULL); if (rc) { printk("ERROR: Request for irq on node %d failed\n", node); diff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c index 3675da73623f..e94d3ecdd8bb 100644 --- a/arch/powerpc/platforms/cell/spu_base.c +++ b/arch/powerpc/platforms/cell/spu_base.c @@ -442,8 +442,7 @@ static int spu_request_irqs(struct spu *spu) snprintf(spu->irq_c0, sizeof (spu->irq_c0), "spe%02d.0", spu->number); ret = request_irq(spu->irqs[0], spu_irq_class_0, - IRQF_DISABLED, - spu->irq_c0, spu); + 0, spu->irq_c0, spu); if (ret) goto bail0; } @@ -451,8 +450,7 @@ static int spu_request_irqs(struct spu *spu) snprintf(spu->irq_c1, sizeof (spu->irq_c1), "spe%02d.1", spu->number); ret = request_irq(spu->irqs[1], spu_irq_class_1, - IRQF_DISABLED, - spu->irq_c1, spu); + 0, spu->irq_c1, spu); if (ret) goto bail1; } @@ -460,8 +458,7 @@ static int spu_request_irqs(struct spu *spu) snprintf(spu->irq_c2, sizeof (spu->irq_c2), "spe%02d.2", spu->number); ret = request_irq(spu->irqs[2], spu_irq_class_2, - IRQF_DISABLED, - spu->irq_c2, spu); + 0, spu->irq_c2, spu); if (ret) goto bail2; } diff --git a/arch/powerpc/platforms/powermac/pic.c b/arch/powerpc/platforms/powermac/pic.c index cb40e921a565..901bfbddc3dd 100644 --- a/arch/powerpc/platforms/powermac/pic.c +++ b/arch/powerpc/platforms/powermac/pic.c @@ -272,7 +272,6 @@ static struct irqaction xmon_action = { static struct irqaction gatwick_cascade_action = { .handler = gatwick_action, - .flags = IRQF_DISABLED, .name = "cascade", }; diff --git a/arch/powerpc/platforms/powermac/smp.c b/arch/powerpc/platforms/powermac/smp.c index 9a521dc8e485..9b6a820bdd7d 100644 --- a/arch/powerpc/platforms/powermac/smp.c +++ b/arch/powerpc/platforms/powermac/smp.c @@ -200,7 +200,7 @@ static int psurge_secondary_ipi_init(void) if (psurge_secondary_virq) rc = request_irq(psurge_secondary_virq, psurge_ipi_intr, - IRQF_DISABLED|IRQF_PERCPU, "IPI", NULL); + IRQF_PERCPU, "IPI", NULL); if (rc) pr_err("Failed to setup secondary cpu IPI\n"); @@ -408,7 +408,7 @@ static int __init smp_psurge_kick_cpu(int nr) static struct irqaction psurge_irqaction = { .handler = psurge_ipi_intr, - .flags = IRQF_DISABLED|IRQF_PERCPU, + .flags = IRQF_PERCPU, .name = "primary IPI", }; diff --git a/arch/powerpc/platforms/ps3/device-init.c b/arch/powerpc/platforms/ps3/device-init.c index 6c4b5837fc8a..3f175e8aedb4 100644 --- a/arch/powerpc/platforms/ps3/device-init.c +++ b/arch/powerpc/platforms/ps3/device-init.c @@ -825,7 +825,7 @@ static int ps3_probe_thread(void *data) spin_lock_init(&dev.lock); - res = request_irq(irq, ps3_notification_interrupt, IRQF_DISABLED, + res = request_irq(irq, ps3_notification_interrupt, 0, "ps3_notification", &dev); if (res) { pr_err("%s:%u: request_irq failed %d\n", __func__, __LINE__, diff --git a/arch/powerpc/platforms/ps3/repository.c b/arch/powerpc/platforms/ps3/repository.c index 
5e304c292f68..ca40f6afd35d 100644 --- a/arch/powerpc/platforms/ps3/repository.c +++ b/arch/powerpc/platforms/ps3/repository.c @@ -184,7 +184,7 @@ int ps3_repository_read_bus_type(unsigned int bus_index, enum ps3_bus_type *bus_type) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_PME, make_first_field("bus", bus_index), @@ -199,7 +199,7 @@ int ps3_repository_read_bus_num_dev(unsigned int bus_index, unsigned int *num_dev) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_PME, make_first_field("bus", bus_index), @@ -239,7 +239,7 @@ int ps3_repository_read_dev_type(unsigned int bus_index, unsigned int dev_index, enum ps3_dev_type *dev_type) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_PME, make_first_field("bus", bus_index), @@ -256,8 +256,8 @@ int ps3_repository_read_dev_intr(unsigned int bus_index, enum ps3_interrupt_type *intr_type, unsigned int *interrupt_id) { int result; - u64 v1; - u64 v2; + u64 v1 = 0; + u64 v2 = 0; result = read_node(PS3_LPAR_ID_PME, make_first_field("bus", bus_index), @@ -275,7 +275,7 @@ int ps3_repository_read_dev_reg_type(unsigned int bus_index, enum ps3_reg_type *reg_type) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_PME, make_first_field("bus", bus_index), @@ -615,7 +615,7 @@ int ps3_repository_read_stor_dev_num_regions(unsigned int bus_index, unsigned int dev_index, unsigned int *num_regions) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_PME, make_first_field("bus", bus_index), @@ -631,7 +631,7 @@ int ps3_repository_read_stor_dev_region_id(unsigned int bus_index, unsigned int *region_id) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_PME, make_first_field("bus", bus_index), @@ -786,7 +786,7 @@ int ps3_repository_read_mm_info(u64 *rm_base, u64 *rm_size, u64 *region_total) int ps3_repository_read_num_spu_reserved(unsigned int *num_spu_reserved) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_CURRENT, make_first_field("bi", 0), @@ -805,7 +805,7 @@ int ps3_repository_read_num_spu_reserved(unsigned int *num_spu_reserved) int ps3_repository_read_num_spu_resource_id(unsigned int *num_resource_id) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_CURRENT, make_first_field("bi", 0), @@ -827,8 +827,8 @@ int ps3_repository_read_spu_resource_id(unsigned int res_index, enum ps3_spu_resource_type *resource_type, unsigned int *resource_id) { int result; - u64 v1; - u64 v2; + u64 v1 = 0; + u64 v2 = 0; result = read_node(PS3_LPAR_ID_CURRENT, make_first_field("bi", 0), @@ -854,7 +854,7 @@ static int ps3_repository_read_boot_dat_address(u64 *address) int ps3_repository_read_boot_dat_size(unsigned int *size) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_CURRENT, make_first_field("bi", 0), @@ -869,7 +869,7 @@ int ps3_repository_read_boot_dat_size(unsigned int *size) int ps3_repository_read_vuart_av_port(unsigned int *port) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_CURRENT, make_first_field("bi", 0), @@ -884,7 +884,7 @@ int ps3_repository_read_vuart_av_port(unsigned int *port) int ps3_repository_read_vuart_sysmgr_port(unsigned int *port) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_CURRENT, make_first_field("bi", 0), @@ -919,7 +919,7 @@ int ps3_repository_read_boot_dat_info(u64 *lpar_addr, unsigned int *size) int ps3_repository_read_num_be(unsigned int *num_be) { int result; - u64 v1; + u64 v1 = 0; result = read_node(PS3_LPAR_ID_PME, 
make_first_field("ben", 0), diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c index de170fd5ba4e..22ffccd8bef5 100644 --- a/arch/powerpc/sysdev/fsl_rio.c +++ b/arch/powerpc/sysdev/fsl_rio.c @@ -23,7 +23,7 @@ */ #include <linux/init.h> -#include <linux/export.h> +#include <linux/module.h> #include <linux/types.h> #include <linux/dma-mapping.h> #include <linux/interrupt.h> diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c index 0842c6f8a3e6..8c7e8528e7c4 100644 --- a/arch/powerpc/sysdev/mpic.c +++ b/arch/powerpc/sysdev/mpic.c @@ -800,8 +800,6 @@ static void mpic_end_ipi(struct irq_data *d) * IPIs are marked IRQ_PER_CPU. This has the side effect of * preventing the IRQ_PENDING/IRQ_INPROGRESS logic from * applying to them. We EOI them late to avoid re-entering. - * We mark IPI's with IRQF_DISABLED as they must run with - * irqs disabled. */ mpic_eoi(mpic); } diff --git a/arch/powerpc/sysdev/ppc4xx_soc.c b/arch/powerpc/sysdev/ppc4xx_soc.c index d3d6ce3c33b4..0debcc31ad70 100644 --- a/arch/powerpc/sysdev/ppc4xx_soc.c +++ b/arch/powerpc/sysdev/ppc4xx_soc.c @@ -115,7 +115,7 @@ static int __init ppc4xx_l2c_probe(void) } /* Install error handler */ - if (request_irq(irq, l2c_error_handler, IRQF_DISABLED, "L2C", 0) < 0) { + if (request_irq(irq, l2c_error_handler, 0, "L2C", 0) < 0) { printk(KERN_ERR "Cannot install L2C error handler" ", cache is not enabled\n"); of_node_put(np); diff --git a/arch/powerpc/sysdev/xics/xics-common.c b/arch/powerpc/sysdev/xics/xics-common.c index 3d93a8ded0f8..63762c672a03 100644 --- a/arch/powerpc/sysdev/xics/xics-common.c +++ b/arch/powerpc/sysdev/xics/xics-common.c @@ -134,11 +134,10 @@ static void xics_request_ipi(void) BUG_ON(ipi == NO_IRQ); /* - * IPIs are marked IRQF_DISABLED as they must run with irqs - * disabled, and PERCPU. The handler was set in map. + * IPIs are marked IRQF_PERCPU. The handler was set in map. 
*/ BUG_ON(request_irq(ipi, icp_ops->ipi_action, - IRQF_DISABLED|IRQF_PERCPU, "IPI", NULL)); + IRQF_PERCPU, "IPI", NULL)); } int __init xics_smp_probe(void) diff --git a/arch/sh/include/asm/page.h b/arch/sh/include/asm/page.h index 0dca9a5c6be6..15d970328f71 100644 --- a/arch/sh/include/asm/page.h +++ b/arch/sh/include/asm/page.h @@ -151,8 +151,13 @@ typedef struct page *pgtable_t; #endif /* !__ASSEMBLY__ */ #ifdef CONFIG_UNCACHED_MAPPING +#if defined(CONFIG_29BIT) +#define UNCAC_ADDR(addr) P2SEGADDR(addr) +#define CAC_ADDR(addr) P1SEGADDR(addr) +#else #define UNCAC_ADDR(addr) ((addr) - PAGE_OFFSET + uncached_start) #define CAC_ADDR(addr) ((addr) - uncached_start + PAGE_OFFSET) +#endif #else #define UNCAC_ADDR(addr) ((addr)) #define CAC_ADDR(addr) ((addr)) diff --git a/arch/sh/include/asm/unistd_32.h b/arch/sh/include/asm/unistd_32.h index 3432008d2888..152b8627a184 100644 --- a/arch/sh/include/asm/unistd_32.h +++ b/arch/sh/include/asm/unistd_32.h @@ -375,8 +375,10 @@ #define __NR_syncfs 362 #define __NR_sendmmsg 363 #define __NR_setns 364 +#define __NR_process_vm_readv 365 +#define __NR_process_vm_writev 366 -#define NR_syscalls 365 +#define NR_syscalls 367 #ifdef __KERNEL__ diff --git a/arch/sh/include/asm/unistd_64.h b/arch/sh/include/asm/unistd_64.h index ec9898665f23..c330c23db5a0 100644 --- a/arch/sh/include/asm/unistd_64.h +++ b/arch/sh/include/asm/unistd_64.h @@ -396,10 +396,12 @@ #define __NR_syncfs 373 #define __NR_sendmmsg 374 #define __NR_setns 375 +#define __NR_process_vm_readv 376 +#define __NR_process_vm_writev 377 #ifdef __KERNEL__ -#define NR_syscalls 376 +#define NR_syscalls 378 #define __ARCH_WANT_IPC_PARSE_VERSION #define __ARCH_WANT_OLD_READDIR diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7203.c b/arch/sh/kernel/cpu/sh2a/setup-sh7203.c index a43124e608c3..0bd744f9a3b7 100644 --- a/arch/sh/kernel/cpu/sh2a/setup-sh7203.c +++ b/arch/sh/kernel/cpu/sh2a/setup-sh7203.c @@ -176,10 +176,12 @@ static DECLARE_INTC_DESC(intc_desc, "sh7203", vectors, groups, static struct plat_sci_port scif0_platform_data = { .mapbase = 0xfffe8000, .flags = UPF_BOOT_AUTOCONF, - .scscr = SCSCR_RE | SCSCR_TE | SCSCR_REIE, + .scscr = SCSCR_RIE | SCSCR_TIE | SCSCR_RE | SCSCR_TE | + SCSCR_REIE, .scbrr_algo_id = SCBRR_ALGO_2, .type = PORT_SCIF, .irqs = { 192, 192, 192, 192 }, + .regtype = SCIx_SH2_SCIF_FIFODATA_REGTYPE, }; static struct platform_device scif0_device = { @@ -193,10 +195,12 @@ static struct platform_device scif0_device = { static struct plat_sci_port scif1_platform_data = { .mapbase = 0xfffe8800, .flags = UPF_BOOT_AUTOCONF, - .scscr = SCSCR_RE | SCSCR_TE | SCSCR_REIE, + .scscr = SCSCR_RIE | SCSCR_TIE | SCSCR_RE | SCSCR_TE | + SCSCR_REIE, .scbrr_algo_id = SCBRR_ALGO_2, .type = PORT_SCIF, .irqs = { 196, 196, 196, 196 }, + .regtype = SCIx_SH2_SCIF_FIFODATA_REGTYPE, }; static struct platform_device scif1_device = { @@ -210,10 +214,12 @@ static struct platform_device scif1_device = { static struct plat_sci_port scif2_platform_data = { .mapbase = 0xfffe9000, .flags = UPF_BOOT_AUTOCONF, - .scscr = SCSCR_RE | SCSCR_TE | SCSCR_REIE, + .scscr = SCSCR_RIE | SCSCR_TIE | SCSCR_RE | SCSCR_TE | + SCSCR_REIE, .scbrr_algo_id = SCBRR_ALGO_2, .type = PORT_SCIF, .irqs = { 200, 200, 200, 200 }, + .regtype = SCIx_SH2_SCIF_FIFODATA_REGTYPE, }; static struct platform_device scif2_device = { @@ -227,10 +233,12 @@ static struct platform_device scif2_device = { static struct plat_sci_port scif3_platform_data = { .mapbase = 0xfffe9800, .flags = UPF_BOOT_AUTOCONF, - .scscr = SCSCR_RE | SCSCR_TE | SCSCR_REIE, + .scscr = 
SCSCR_RIE | SCSCR_TIE | SCSCR_RE | SCSCR_TE | + SCSCR_REIE, .scbrr_algo_id = SCBRR_ALGO_2, .type = PORT_SCIF, .irqs = { 204, 204, 204, 204 }, + .regtype = SCIx_SH2_SCIF_FIFODATA_REGTYPE, }; static struct platform_device scif3_device = { diff --git a/arch/sh/kernel/cpu/shmobile/cpuidle.c b/arch/sh/kernel/cpu/shmobile/cpuidle.c index 7d98f909a8ac..1cc257c9b1e3 100644 --- a/arch/sh/kernel/cpu/shmobile/cpuidle.c +++ b/arch/sh/kernel/cpu/shmobile/cpuidle.c @@ -26,11 +26,12 @@ static unsigned long cpuidle_mode[] = { }; static int cpuidle_sleep_enter(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, + int index) { unsigned long allowed_mode = arch_hwblk_sleep_mode(); ktime_t before, after; - int requested_state = state - &dev->states[0]; + int requested_state = index; int allowed_state; int k; @@ -47,11 +48,13 @@ static int cpuidle_sleep_enter(struct cpuidle_device *dev, */ k = min_t(int, allowed_state, requested_state); - dev->last_state = &dev->states[k]; before = ktime_get(); sh_mobile_call_standby(cpuidle_mode[k]); after = ktime_get(); - return ktime_to_ns(ktime_sub(after, before)) >> 10; + + dev->last_residency = (int)ktime_to_ns(ktime_sub(after, before)) >> 10; + + return k; } static struct cpuidle_device cpuidle_dev; @@ -63,19 +66,19 @@ static struct cpuidle_driver cpuidle_driver = { void sh_mobile_setup_cpuidle(void) { struct cpuidle_device *dev = &cpuidle_dev; + struct cpuidle_driver *drv = &cpuidle_driver; struct cpuidle_state *state; int i; - cpuidle_register_driver(&cpuidle_driver); for (i = 0; i < CPUIDLE_STATE_MAX; i++) { - dev->states[i].name[0] = '\0'; - dev->states[i].desc[0] = '\0'; + drv->states[i].name[0] = '\0'; + drv->states[i].desc[0] = '\0'; } i = CPUIDLE_DRIVER_STATE_START; - state = &dev->states[i++]; + state = &drv->states[i++]; snprintf(state->name, CPUIDLE_NAME_LEN, "C1"); strncpy(state->desc, "SuperH Sleep Mode", CPUIDLE_DESC_LEN); state->exit_latency = 1; @@ -85,10 +88,10 @@ void sh_mobile_setup_cpuidle(void) state->flags |= CPUIDLE_FLAG_TIME_VALID; state->enter = cpuidle_sleep_enter; - dev->safe_state = state; + drv->safe_state_index = i-1; if (sh_mobile_sleep_supported & SUSP_SH_SF) { - state = &dev->states[i++]; + state = &drv->states[i++]; snprintf(state->name, CPUIDLE_NAME_LEN, "C2"); strncpy(state->desc, "SuperH Sleep Mode [SF]", CPUIDLE_DESC_LEN); @@ -101,7 +104,7 @@ void sh_mobile_setup_cpuidle(void) } if (sh_mobile_sleep_supported & SUSP_SH_STANDBY) { - state = &dev->states[i++]; + state = &drv->states[i++]; snprintf(state->name, CPUIDLE_NAME_LEN, "C3"); strncpy(state->desc, "SuperH Mobile Standby Mode [SF]", CPUIDLE_DESC_LEN); @@ -113,7 +116,10 @@ void sh_mobile_setup_cpuidle(void) state->enter = cpuidle_sleep_enter; } + drv->state_count = i; dev->state_count = i; + cpuidle_register_driver(&cpuidle_driver); + cpuidle_register_device(dev); } diff --git a/arch/sh/kernel/syscalls_32.S b/arch/sh/kernel/syscalls_32.S index 293e39c59c00..ee56a9b1a981 100644 --- a/arch/sh/kernel/syscalls_32.S +++ b/arch/sh/kernel/syscalls_32.S @@ -382,3 +382,5 @@ ENTRY(sys_call_table) .long sys_syncfs .long sys_sendmmsg .long sys_setns + .long sys_process_vm_readv /* 365 */ + .long sys_process_vm_writev diff --git a/arch/sh/kernel/syscalls_64.S b/arch/sh/kernel/syscalls_64.S index ceb34b94afa9..9af7de26fb71 100644 --- a/arch/sh/kernel/syscalls_64.S +++ b/arch/sh/kernel/syscalls_64.S @@ -402,3 +402,5 @@ sys_call_table: .long sys_syncfs .long sys_sendmmsg .long sys_setns /* 375 */ + .long sys_process_vm_readv + .long sys_process_vm_writev diff 
--git a/arch/sparc/include/asm/unistd.h b/arch/sparc/include/asm/unistd.h index 6260d5deeabc..c7cb0af0eb59 100644 --- a/arch/sparc/include/asm/unistd.h +++ b/arch/sparc/include/asm/unistd.h @@ -406,8 +406,10 @@ #define __NR_syncfs 335 #define __NR_sendmmsg 336 #define __NR_setns 337 +#define __NR_process_vm_readv 338 +#define __NR_process_vm_writev 339 -#define NR_syscalls 338 +#define NR_syscalls 340 #ifdef __32bit_syscall_numbers__ /* Sparc 32-bit only has the "setresuid32", "getresuid32" variants, diff --git a/arch/sparc/kernel/systbls_32.S b/arch/sparc/kernel/systbls_32.S index 09d8ec454450..63402f9e9f51 100644 --- a/arch/sparc/kernel/systbls_32.S +++ b/arch/sparc/kernel/systbls_32.S @@ -84,4 +84,4 @@ sys_call_table: /*320*/ .long sys_dup3, sys_pipe2, sys_inotify_init1, sys_accept4, sys_preadv /*325*/ .long sys_pwritev, sys_rt_tgsigqueueinfo, sys_perf_event_open, sys_recvmmsg, sys_fanotify_init /*330*/ .long sys_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, sys_open_by_handle_at, sys_clock_adjtime -/*335*/ .long sys_syncfs, sys_sendmmsg, sys_setns +/*335*/ .long sys_syncfs, sys_sendmmsg, sys_setns, sys_process_vm_readv, sys_process_vm_writev diff --git a/arch/sparc/kernel/systbls_64.S b/arch/sparc/kernel/systbls_64.S index edbec45d4688..db86b1a0e9a9 100644 --- a/arch/sparc/kernel/systbls_64.S +++ b/arch/sparc/kernel/systbls_64.S @@ -85,7 +85,7 @@ sys_call_table32: /*320*/ .word sys_dup3, sys_pipe2, sys_inotify_init1, sys_accept4, compat_sys_preadv .word compat_sys_pwritev, compat_sys_rt_tgsigqueueinfo, sys_perf_event_open, compat_sys_recvmmsg, sys_fanotify_init /*330*/ .word sys32_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, compat_sys_open_by_handle_at, compat_sys_clock_adjtime - .word sys_syncfs, compat_sys_sendmmsg, sys_setns + .word sys_syncfs, compat_sys_sendmmsg, sys_setns, compat_sys_process_vm_readv, compat_sys_process_vm_writev #endif /* CONFIG_COMPAT */ @@ -162,4 +162,4 @@ sys_call_table: /*320*/ .word sys_dup3, sys_pipe2, sys_inotify_init1, sys_accept4, sys_preadv .word sys_pwritev, sys_rt_tgsigqueueinfo, sys_perf_event_open, sys_recvmmsg, sys_fanotify_init /*330*/ .word sys_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, sys_open_by_handle_at, sys_clock_adjtime - .word sys_syncfs, sys_sendmmsg, sys_setns + .word sys_syncfs, sys_sendmmsg, sys_setns, sys_process_vm_readv, sys_process_vm_writev diff --git a/arch/x86/platform/ce4100/ce4100.c b/arch/x86/platform/ce4100/ce4100.c index 28071bb31db7..4c61b52191eb 100644 --- a/arch/x86/platform/ce4100/ce4100.c +++ b/arch/x86/platform/ce4100/ce4100.c @@ -109,7 +109,7 @@ static __init void sdv_serial_fixup(void) } #else -static inline void sdv_serial_fixup(void); +static inline void sdv_serial_fixup(void) {}; #endif static void __init sdv_arch_setup(void) diff --git a/arch/x86/platform/mrst/mrst.c b/arch/x86/platform/mrst/mrst.c index 6ed7afdaf4af..541020df0da6 100644 --- a/arch/x86/platform/mrst/mrst.c +++ b/arch/x86/platform/mrst/mrst.c @@ -608,6 +608,7 @@ static void *msic_ocd_platform_data(void *info) } static const struct devs_id __initconst device_ids[] = { + {"bma023", SFI_DEV_TYPE_I2C, 1, &no_platform_data}, {"pmic_gpio", SFI_DEV_TYPE_SPI, 1, &pmic_gpio_platform_data}, {"spi_max3111", SFI_DEV_TYPE_SPI, 0, &max3111_platform_data}, {"i2c_max7315", SFI_DEV_TYPE_I2C, 1, &max7315_platform_data}, diff --git a/arch/x86/platform/mrst/pmu.c b/arch/x86/platform/mrst/pmu.c index 9281da7d91bd..c0ac06da57ac 100644 --- a/arch/x86/platform/mrst/pmu.c +++ b/arch/x86/platform/mrst/pmu.c @@ -70,7 +70,7 @@ static struct 
mrst_device mrst_devs[] = { /* 24 */ { 0x4110, 0 }, /* Lincroft */ }; -/* n.b. We ignore PCI-id 0x815 in LSS9 b/c MeeGo has no driver for it */ +/* n.b. We ignore PCI-id 0x815 in LSS9 b/c Linux has no driver for it */ static u16 mrst_lss9_pci_ids[] = {0x080a, 0x0814, 0}; static u16 mrst_lss10_pci_ids[] = {0x0800, 0x0801, 0x0802, 0x0803, 0x0804, 0x0805, 0x080f, 0}; diff --git a/arch/x86/platform/mrst/vrtc.c b/arch/x86/platform/mrst/vrtc.c index a8ac6f1eb66d..225bd0f0f675 100644 --- a/arch/x86/platform/mrst/vrtc.c +++ b/arch/x86/platform/mrst/vrtc.c @@ -76,8 +76,8 @@ unsigned long vrtc_get_time(void) spin_unlock_irqrestore(&rtc_lock, flags); - /* vRTC YEAR reg contains the offset to 1960 */ - year += 1960; + /* vRTC YEAR reg contains the offset to 1972 */ + year += 1972; printk(KERN_INFO "vRTC: sec: %d min: %d hour: %d day: %d " "mon: %d year: %d\n", sec, min, hour, mday, mon, year); diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c index a816f24f2d52..a0f768c1d9aa 100644 --- a/crypto/ablkcipher.c +++ b/crypto/ablkcipher.c @@ -383,6 +383,7 @@ static int crypto_init_ablkcipher_ops(struct crypto_tfm *tfm, u32 type, return 0; } +#ifdef CONFIG_NET static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_blkcipher rblkcipher; @@ -404,6 +405,12 @@ static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_ablkcipher_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); @@ -457,6 +464,7 @@ static int crypto_init_givcipher_ops(struct crypto_tfm *tfm, u32 type, return 0; } +#ifdef CONFIG_NET static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_blkcipher rblkcipher; @@ -478,6 +486,12 @@ static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); diff --git a/crypto/aead.c b/crypto/aead.c index 701556ffaaef..04add3dca6fe 100644 --- a/crypto/aead.c +++ b/crypto/aead.c @@ -111,6 +111,7 @@ static int crypto_init_aead_ops(struct crypto_tfm *tfm, u32 type, u32 mask) return 0; } +#ifdef CONFIG_NET static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_aead raead; @@ -132,6 +133,12 @@ static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); @@ -190,6 +197,7 @@ static int crypto_init_nivaead_ops(struct crypto_tfm *tfm, u32 type, u32 mask) return 0; } +#ifdef CONFIG_NET static int crypto_nivaead_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_aead raead; @@ -210,6 +218,12 @@ static int crypto_nivaead_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_nivaead_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_nivaead_show(struct seq_file *m, struct crypto_alg *alg) diff 
--git a/crypto/ahash.c b/crypto/ahash.c index a3e6ef99394a..ac93c99cfae8 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -399,6 +399,7 @@ static unsigned int crypto_ahash_extsize(struct crypto_alg *alg) return sizeof(struct crypto_shash *); } +#ifdef CONFIG_NET static int crypto_ahash_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_hash rhash; @@ -416,6 +417,12 @@ static int crypto_ahash_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_ahash_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c index 2572d2600136..1e61d1a888b2 100644 --- a/crypto/blkcipher.c +++ b/crypto/blkcipher.c @@ -494,6 +494,7 @@ static int crypto_init_blkcipher_ops(struct crypto_tfm *tfm, u32 type, u32 mask) return crypto_init_blkcipher_ops_async(tfm); } +#ifdef CONFIG_NET static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_blkcipher rblkcipher; @@ -515,6 +516,12 @@ static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_blkcipher_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c index 2abca780312d..0605a2bbba75 100644 --- a/crypto/crypto_user.c +++ b/crypto/crypto_user.c @@ -44,9 +44,6 @@ static struct crypto_alg *crypto_alg_match(struct crypto_user_alg *p, int exact) down_read(&crypto_alg_sem); - if (list_empty(&crypto_alg_list)) - return NULL; - list_for_each_entry(q, &crypto_alg_list, cra_list) { int match = 0; diff --git a/crypto/pcompress.c b/crypto/pcompress.c index fefda78a6a2a..2e458e5482d0 100644 --- a/crypto/pcompress.c +++ b/crypto/pcompress.c @@ -48,6 +48,7 @@ static int crypto_pcomp_init_tfm(struct crypto_tfm *tfm) return 0; } +#ifdef CONFIG_NET static int crypto_pcomp_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_comp rpcomp; @@ -62,6 +63,12 @@ static int crypto_pcomp_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_pcomp_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_pcomp_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); diff --git a/crypto/rng.c b/crypto/rng.c index feb7de00f437..64f864fa8043 100644 --- a/crypto/rng.c +++ b/crypto/rng.c @@ -60,6 +60,7 @@ static int crypto_init_rng_ops(struct crypto_tfm *tfm, u32 type, u32 mask) return 0; } +#ifdef CONFIG_NET static int crypto_rng_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_rng rrng; @@ -76,6 +77,12 @@ static int crypto_rng_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_rng_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_rng_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); diff --git a/crypto/shash.c b/crypto/shash.c index ea8a9c6e21e3..9100912716ae 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -524,6 +524,7 @@ static unsigned int crypto_shash_extsize(struct crypto_alg *alg) return 
alg->cra_ctxsize; } +#ifdef CONFIG_NET static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_hash rhash; @@ -541,6 +542,12 @@ static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg) nla_put_failure: return -EMSGSIZE; } +#else +static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg) +{ + return -ENOSYS; +} +#endif static void crypto_shash_show(struct seq_file *m, struct crypto_alg *alg) __attribute__ ((unused)); diff --git a/drivers/acpi/acpica/hwregs.c b/drivers/acpi/acpica/hwregs.c index 55accb7018bb..cc70f3fdcdd1 100644 --- a/drivers/acpi/acpica/hwregs.c +++ b/drivers/acpi/acpica/hwregs.c @@ -269,16 +269,17 @@ acpi_status acpi_hw_clear_acpi_status(void) status = acpi_hw_register_write(ACPI_REGISTER_PM1_STATUS, ACPI_BITMASK_ALL_FIXED_STATUS); - if (ACPI_FAILURE(status)) { - goto unlock_and_exit; - } + + acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags); + + if (ACPI_FAILURE(status)) + goto exit; /* Clear the GPE Bits in all GPE registers in all GPE blocks */ status = acpi_ev_walk_gpe_list(acpi_hw_clear_gpe_block, NULL); - unlock_and_exit: - acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags); +exit: return_ACPI_STATUS(status); } diff --git a/drivers/acpi/atomicio.c b/drivers/acpi/atomicio.c index 04ae1c88c03c..cfc0cc10af39 100644 --- a/drivers/acpi/atomicio.c +++ b/drivers/acpi/atomicio.c @@ -76,7 +76,7 @@ static void __iomem *__acpi_ioremap_fast(phys_addr_t paddr, { struct acpi_iomap *map; - map = __acpi_find_iomap(paddr, size); + map = __acpi_find_iomap(paddr, size/8); if (map) return map->vaddr + (paddr - map->paddr); else diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c index 437ddbf0c49a..9ecec98bc76e 100644 --- a/drivers/acpi/bus.c +++ b/drivers/acpi/bus.c @@ -911,10 +911,7 @@ void __init acpi_early_init(void) } #endif - status = - acpi_enable_subsystem(~ - (ACPI_NO_HARDWARE_INIT | - ACPI_NO_ACPI_ENABLE)); + status = acpi_enable_subsystem(~ACPI_NO_ACPI_ENABLE); if (ACPI_FAILURE(status)) { printk(KERN_ERR PREFIX "Unable to enable ACPI\n"); goto error0; @@ -935,8 +932,7 @@ static int __init acpi_bus_init(void) acpi_os_initialize1(); - status = - acpi_enable_subsystem(ACPI_NO_HARDWARE_INIT | ACPI_NO_ACPI_ENABLE); + status = acpi_enable_subsystem(ACPI_NO_ACPI_ENABLE); if (ACPI_FAILURE(status)) { printk(KERN_ERR PREFIX "Unable to start the ACPI Interpreter\n"); diff --git a/drivers/acpi/processor_driver.c b/drivers/acpi/processor_driver.c index a4e0f1ba6040..9d7bc9f6b6cc 100644 --- a/drivers/acpi/processor_driver.c +++ b/drivers/acpi/processor_driver.c @@ -426,7 +426,7 @@ static int acpi_cpu_soft_notify(struct notifier_block *nfb, if (action == CPU_ONLINE && pr) { acpi_processor_ppc_has_changed(pr, 0); - acpi_processor_cst_has_changed(pr); + acpi_processor_hotplug(pr); acpi_processor_reevaluate_tstate(pr, action); acpi_processor_tstate_has_changed(pr); } @@ -503,8 +503,7 @@ static int __cpuinit acpi_processor_add(struct acpi_device *device) acpi_processor_get_throttling_info(pr); acpi_processor_get_limit_info(pr); - - if (cpuidle_get_driver() == &acpi_idle_driver) + if (!cpuidle_get_driver() || cpuidle_get_driver() == &acpi_idle_driver) acpi_processor_power_init(pr, device); pr->cdev = thermal_cooling_device_register("Processor", device, @@ -800,17 +799,9 @@ static int __init acpi_processor_init(void) memset(&errata, 0, sizeof(errata)); - if (!cpuidle_register_driver(&acpi_idle_driver)) { - printk(KERN_DEBUG "ACPI: %s registered with cpuidle\n", - acpi_idle_driver.name); - } else { - 
printk(KERN_DEBUG "ACPI: acpi_idle yielding to %s\n", - cpuidle_get_driver()->name); - } - result = acpi_bus_register_driver(&acpi_processor_driver); if (result < 0) - goto out_cpuidle; + return result; acpi_processor_install_hotplug_notify(); @@ -821,11 +812,6 @@ static int __init acpi_processor_init(void) acpi_processor_throttling_init(); return 0; - -out_cpuidle: - cpuidle_unregister_driver(&acpi_idle_driver); - - return result; } static void __exit acpi_processor_exit(void) diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index 9b88f9828d8c..0e8e2de2ed3e 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -224,7 +224,6 @@ static void lapic_timer_state_broadcast(struct acpi_processor *pr, /* * Suspend / resume control */ -static int acpi_idle_suspend; static u32 saved_bm_rld; static void acpi_idle_bm_rld_save(void) @@ -243,21 +242,13 @@ static void acpi_idle_bm_rld_restore(void) int acpi_processor_suspend(struct acpi_device * device, pm_message_t state) { - if (acpi_idle_suspend == 1) - return 0; - acpi_idle_bm_rld_save(); - acpi_idle_suspend = 1; return 0; } int acpi_processor_resume(struct acpi_device * device) { - if (acpi_idle_suspend == 0) - return 0; - acpi_idle_bm_rld_restore(); - acpi_idle_suspend = 0; return 0; } @@ -741,66 +732,65 @@ static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx) /** * acpi_idle_enter_c1 - enters an ACPI C1 state-type * @dev: the target CPU - * @state: the state data + * @drv: cpuidle driver containing cpuidle state info + * @index: index of target state * * This is equivalent to the HALT instruction. */ static int acpi_idle_enter_c1(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, int index) { ktime_t kt1, kt2; s64 idle_time; struct acpi_processor *pr; - struct acpi_processor_cx *cx = cpuidle_get_statedata(state); + struct cpuidle_state_usage *state_usage = &dev->states_usage[index]; + struct acpi_processor_cx *cx = cpuidle_get_statedata(state_usage); pr = __this_cpu_read(processors); + dev->last_residency = 0; if (unlikely(!pr)) - return 0; + return -EINVAL; local_irq_disable(); - /* Do not access any ACPI IO ports in suspend path */ - if (acpi_idle_suspend) { - local_irq_enable(); - cpu_relax(); - return 0; - } - lapic_timer_state_broadcast(pr, cx, 1); kt1 = ktime_get_real(); acpi_idle_do_entry(cx); kt2 = ktime_get_real(); idle_time = ktime_to_us(ktime_sub(kt2, kt1)); + /* Update device last_residency*/ + dev->last_residency = (int)idle_time; + local_irq_enable(); cx->usage++; lapic_timer_state_broadcast(pr, cx, 0); - return idle_time; + return index; } /** * acpi_idle_enter_simple - enters an ACPI state without BM handling * @dev: the target CPU - * @state: the state data + * @drv: cpuidle driver with cpuidle state information + * @index: the index of suggested state */ static int acpi_idle_enter_simple(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, int index) { struct acpi_processor *pr; - struct acpi_processor_cx *cx = cpuidle_get_statedata(state); + struct cpuidle_state_usage *state_usage = &dev->states_usage[index]; + struct acpi_processor_cx *cx = cpuidle_get_statedata(state_usage); ktime_t kt1, kt2; s64 idle_time_ns; s64 idle_time; pr = __this_cpu_read(processors); + dev->last_residency = 0; if (unlikely(!pr)) - return 0; - - if (acpi_idle_suspend) - return(acpi_idle_enter_c1(dev, state)); + return -EINVAL; local_irq_disable(); @@ -815,7 +805,7 @@ static int acpi_idle_enter_simple(struct 
cpuidle_device *dev, if (unlikely(need_resched())) { current_thread_info()->status |= TS_POLLING; local_irq_enable(); - return 0; + return -EINVAL; } } @@ -837,6 +827,9 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev, idle_time = idle_time_ns; do_div(idle_time, NSEC_PER_USEC); + /* Update device last_residency*/ + dev->last_residency = (int)idle_time; + /* Tell the scheduler how much we idled: */ sched_clock_idle_wakeup_event(idle_time_ns); @@ -848,7 +841,7 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev, lapic_timer_state_broadcast(pr, cx, 0); cx->time += idle_time; - return idle_time; + return index; } static int c3_cpu_count; @@ -857,37 +850,37 @@ static DEFINE_RAW_SPINLOCK(c3_lock); /** * acpi_idle_enter_bm - enters C3 with proper BM handling * @dev: the target CPU - * @state: the state data + * @drv: cpuidle driver containing state data + * @index: the index of suggested state * * If BM is detected, the deepest non-C3 idle state is entered instead. */ static int acpi_idle_enter_bm(struct cpuidle_device *dev, - struct cpuidle_state *state) + struct cpuidle_driver *drv, int index) { struct acpi_processor *pr; - struct acpi_processor_cx *cx = cpuidle_get_statedata(state); + struct cpuidle_state_usage *state_usage = &dev->states_usage[index]; + struct acpi_processor_cx *cx = cpuidle_get_statedata(state_usage); ktime_t kt1, kt2; s64 idle_time_ns; s64 idle_time; pr = __this_cpu_read(processors); + dev->last_residency = 0; if (unlikely(!pr)) - return 0; - - if (acpi_idle_suspend) - return(acpi_idle_enter_c1(dev, state)); + return -EINVAL; if (!cx->bm_sts_skip && acpi_idle_bm_check()) { - if (dev->safe_state) { - dev->last_state = dev->safe_state; - return dev->safe_state->enter(dev, dev->safe_state); + if (drv->safe_state_index >= 0) { + return drv->states[drv->safe_state_index].enter(dev, + drv, drv->safe_state_index); } else { local_irq_disable(); acpi_safe_halt(); local_irq_enable(); - return 0; + return -EINVAL; } } @@ -904,7 +897,7 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev, if (unlikely(need_resched())) { current_thread_info()->status |= TS_POLLING; local_irq_enable(); - return 0; + return -EINVAL; } } @@ -954,6 +947,9 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev, idle_time = idle_time_ns; do_div(idle_time, NSEC_PER_USEC); + /* Update device last_residency*/ + dev->last_residency = (int)idle_time; + /* Tell the scheduler how much we idled: */ sched_clock_idle_wakeup_event(idle_time_ns); @@ -965,7 +961,7 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev, lapic_timer_state_broadcast(pr, cx, 0); cx->time += idle_time; - return idle_time; + return index; } struct cpuidle_driver acpi_idle_driver = { @@ -974,14 +970,16 @@ struct cpuidle_driver acpi_idle_driver = { }; /** - * acpi_processor_setup_cpuidle - prepares and configures CPUIDLE + * acpi_processor_setup_cpuidle_cx - prepares and configures CPUIDLE + * device i.e. 
per-cpu data + * * @pr: the ACPI processor */ -static int acpi_processor_setup_cpuidle(struct acpi_processor *pr) +static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr) { int i, count = CPUIDLE_DRIVER_STATE_START; struct acpi_processor_cx *cx; - struct cpuidle_state *state; + struct cpuidle_state_usage *state_usage; struct cpuidle_device *dev = &pr->power.dev; if (!pr->flags.power_setup_done) @@ -992,9 +990,62 @@ static int acpi_processor_setup_cpuidle(struct acpi_processor *pr) } dev->cpu = pr->id; + + if (max_cstate == 0) + max_cstate = 1; + + for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) { + cx = &pr->power.states[i]; + state_usage = &dev->states_usage[count]; + + if (!cx->valid) + continue; + +#ifdef CONFIG_HOTPLUG_CPU + if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) && + !pr->flags.has_cst && + !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) + continue; +#endif + + cpuidle_set_statedata(state_usage, cx); + + count++; + if (count == CPUIDLE_STATE_MAX) + break; + } + + dev->state_count = count; + + if (!count) + return -EINVAL; + + return 0; +} + +/** + * acpi_processor_setup_cpuidle states- prepares and configures cpuidle + * global state data i.e. idle routines + * + * @pr: the ACPI processor + */ +static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr) +{ + int i, count = CPUIDLE_DRIVER_STATE_START; + struct acpi_processor_cx *cx; + struct cpuidle_state *state; + struct cpuidle_driver *drv = &acpi_idle_driver; + + if (!pr->flags.power_setup_done) + return -EINVAL; + + if (pr->flags.power == 0) + return -EINVAL; + + drv->safe_state_index = -1; for (i = 0; i < CPUIDLE_STATE_MAX; i++) { - dev->states[i].name[0] = '\0'; - dev->states[i].desc[0] = '\0'; + drv->states[i].name[0] = '\0'; + drv->states[i].desc[0] = '\0'; } if (max_cstate == 0) @@ -1002,7 +1053,6 @@ static int acpi_processor_setup_cpuidle(struct acpi_processor *pr) for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) { cx = &pr->power.states[i]; - state = &dev->states[count]; if (!cx->valid) continue; @@ -1013,8 +1063,8 @@ static int acpi_processor_setup_cpuidle(struct acpi_processor *pr) !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) continue; #endif - cpuidle_set_statedata(state, cx); + state = &drv->states[count]; snprintf(state->name, CPUIDLE_NAME_LEN, "C%d", i); strncpy(state->desc, cx->desc, CPUIDLE_DESC_LEN); state->exit_latency = cx->latency; @@ -1027,13 +1077,13 @@ static int acpi_processor_setup_cpuidle(struct acpi_processor *pr) state->flags |= CPUIDLE_FLAG_TIME_VALID; state->enter = acpi_idle_enter_c1; - dev->safe_state = state; + drv->safe_state_index = count; break; case ACPI_STATE_C2: state->flags |= CPUIDLE_FLAG_TIME_VALID; state->enter = acpi_idle_enter_simple; - dev->safe_state = state; + drv->safe_state_index = count; break; case ACPI_STATE_C3: @@ -1049,7 +1099,7 @@ static int acpi_processor_setup_cpuidle(struct acpi_processor *pr) break; } - dev->state_count = count; + drv->state_count = count; if (!count) return -EINVAL; @@ -1057,7 +1107,7 @@ static int acpi_processor_setup_cpuidle(struct acpi_processor *pr) return 0; } -int acpi_processor_cst_has_changed(struct acpi_processor *pr) +int acpi_processor_hotplug(struct acpi_processor *pr) { int ret = 0; @@ -1078,7 +1128,7 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr) cpuidle_disable_device(&pr->power.dev); acpi_processor_get_power_info(pr); if (pr->flags.power) { - acpi_processor_setup_cpuidle(pr); + acpi_processor_setup_cpuidle_cx(pr); ret = 
cpuidle_enable_device(&pr->power.dev); } cpuidle_resume_and_unlock(); @@ -1086,10 +1136,72 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr) return ret; } +int acpi_processor_cst_has_changed(struct acpi_processor *pr) +{ + int cpu; + struct acpi_processor *_pr; + + if (disabled_by_idle_boot_param()) + return 0; + + if (!pr) + return -EINVAL; + + if (nocst) + return -ENODEV; + + if (!pr->flags.power_setup_done) + return -ENODEV; + + /* + * FIXME: Design the ACPI notification to make it once per + * system instead of once per-cpu. This condition is a hack + * to make the code that updates C-States be called once. + */ + + if (smp_processor_id() == 0 && + cpuidle_get_driver() == &acpi_idle_driver) { + + cpuidle_pause_and_lock(); + /* Protect against cpu-hotplug */ + get_online_cpus(); + + /* Disable all cpuidle devices */ + for_each_online_cpu(cpu) { + _pr = per_cpu(processors, cpu); + if (!_pr || !_pr->flags.power_setup_done) + continue; + cpuidle_disable_device(&_pr->power.dev); + } + + /* Populate Updated C-state information */ + acpi_processor_setup_cpuidle_states(pr); + + /* Enable all cpuidle devices */ + for_each_online_cpu(cpu) { + _pr = per_cpu(processors, cpu); + if (!_pr || !_pr->flags.power_setup_done) + continue; + acpi_processor_get_power_info(_pr); + if (_pr->flags.power) { + acpi_processor_setup_cpuidle_cx(_pr); + cpuidle_enable_device(&_pr->power.dev); + } + } + put_online_cpus(); + cpuidle_resume_and_unlock(); + } + + return 0; +} + +static int acpi_processor_registered; + int __cpuinit acpi_processor_power_init(struct acpi_processor *pr, struct acpi_device *device) { acpi_status status = 0; + int retval; static int first_run; if (disabled_by_idle_boot_param()) @@ -1126,9 +1238,26 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr, * platforms that only support C1. */ if (pr->flags.power) { - acpi_processor_setup_cpuidle(pr); - if (cpuidle_register_device(&pr->power.dev)) - return -EIO; + /* Register acpi_idle_driver if not already registered */ + if (!acpi_processor_registered) { + acpi_processor_setup_cpuidle_states(pr); + retval = cpuidle_register_driver(&acpi_idle_driver); + if (retval) + return retval; + printk(KERN_DEBUG "ACPI: %s registered with cpuidle\n", + acpi_idle_driver.name); + } + /* Register per-cpu cpuidle_device. 
Cpuidle driver + * must already be registered before registering device + */ + acpi_processor_setup_cpuidle_cx(pr); + retval = cpuidle_register_device(&pr->power.dev); + if (retval) { + if (acpi_processor_registered == 0) + cpuidle_unregister_driver(&acpi_idle_driver); + return retval; + } + acpi_processor_registered++; } return 0; } @@ -1139,8 +1268,13 @@ int acpi_processor_power_exit(struct acpi_processor *pr, if (disabled_by_idle_boot_param()) return 0; - cpuidle_unregister_device(&pr->power.dev); - pr->flags.power_setup_done = 0; + if (pr->flags.power) { + cpuidle_unregister_device(&pr->power.dev); + acpi_processor_registered--; + if (acpi_processor_registered == 0) + cpuidle_unregister_driver(&acpi_idle_driver); + } + pr->flags.power_setup_done = 0; return 0; } diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c index 449c556274c0..8ab80bafe3f1 100644 --- a/drivers/acpi/scan.c +++ b/drivers/acpi/scan.c @@ -1062,13 +1062,12 @@ static void acpi_add_id(struct acpi_device *device, const char *dev_id) if (!id) return; - id->id = kmalloc(strlen(dev_id) + 1, GFP_KERNEL); + id->id = kstrdup(dev_id, GFP_KERNEL); if (!id->id) { kfree(id); return; } - strcpy(id->id, dev_id); list_add_tail(&id->list, &device->pnp.ids); } diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c index c538d0ef10ff..9f66181c814e 100644 --- a/drivers/acpi/sysfs.c +++ b/drivers/acpi/sysfs.c @@ -706,11 +706,23 @@ static void __exit interrupt_stats_exit(void) return; } +static ssize_t +acpi_show_profile(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return sprintf(buf, "%d\n", acpi_gbl_FADT.preferred_profile); +} + +static const struct device_attribute pm_profile_attr = + __ATTR(pm_profile, S_IRUGO, acpi_show_profile, NULL); + int __init acpi_sysfs_init(void) { int result; result = acpi_tables_sysfs_init(); - + if (result) + return result; + result = sysfs_create_file(acpi_kobj, &pm_profile_attr.attr); return result; } diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index fb7b90b05922..cf26222a93c5 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -390,6 +390,9 @@ static const struct pci_device_id ahci_pci_tbl[] = { /* Promise */ { PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */ + /* Asmedia */ + { PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1061 */ + /* Generic, PCI class code for AHCI */ { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_STORAGE_SATA_AHCI, 0xffffff, board_ahci }, diff --git a/drivers/ata/ahci_platform.c b/drivers/ata/ahci_platform.c index 004f2ce3dc73..ec555951176e 100644 --- a/drivers/ata/ahci_platform.c +++ b/drivers/ata/ahci_platform.c @@ -65,7 +65,7 @@ static struct scsi_host_template ahci_platform_sht = { static int __init ahci_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; - struct ahci_platform_data *pdata = dev->platform_data; + struct ahci_platform_data *pdata = dev_get_platdata(dev); const struct platform_device_id *id = platform_get_device_id(pdev); struct ata_port_info pi = ahci_port_info[id->driver_data]; const struct ata_port_info *ppi[] = { &pi, NULL }; @@ -191,7 +191,7 @@ err0: static int __devexit ahci_remove(struct platform_device *pdev) { struct device *dev = &pdev->dev; - struct ahci_platform_data *pdata = dev->platform_data; + struct ahci_platform_data *pdata = dev_get_platdata(dev); struct ata_host *host = dev_get_drvdata(dev); ata_host_detach(host); diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c index f22957c2769a..a9b282038000 100644 --- a/drivers/ata/libata-eh.c +++ 
b/drivers/ata/libata-eh.c @@ -2883,7 +2883,7 @@ int ata_eh_reset(struct ata_link *link, int classify, sata_scr_read(link, SCR_STATUS, &sstatus)) rc = -ERESTART; - if (rc == -ERESTART || try >= max_tries) { + if (try >= max_tries) { /* * Thaw host port even if reset failed, so that the port * can be retried on the next phy event. This risks @@ -2909,6 +2909,16 @@ int ata_eh_reset(struct ata_link *link, int classify, ata_eh_acquire(ap); } + /* + * While disks spinup behind PMP, some controllers fail sending SRST. + * They need to be reset - as well as the PMP - before retrying. + */ + if (rc == -ERESTART) { + if (ata_is_host_link(link)) + ata_eh_thaw_port(ap); + goto out; + } + if (try == max_tries - 1) { sata_down_spd_limit(link, 0); if (slave) diff --git a/drivers/ata/libata-pmp.c b/drivers/ata/libata-pmp.c index 104462dbc524..21b80c555c60 100644 --- a/drivers/ata/libata-pmp.c +++ b/drivers/ata/libata-pmp.c @@ -389,12 +389,9 @@ static void sata_pmp_quirks(struct ata_port *ap) /* link reports offline after LPM */ link->flags |= ATA_LFLAG_NO_LPM; - /* Class code report is unreliable and SRST - * times out under certain configurations. - */ + /* Class code report is unreliable. */ if (link->pmp < 5) - link->flags |= ATA_LFLAG_NO_SRST | - ATA_LFLAG_ASSUME_ATA; + link->flags |= ATA_LFLAG_ASSUME_ATA; /* port 5 is for SEMB device and it doesn't like SRST */ if (link->pmp == 5) diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c index 72a9770ac42f..2a5412e7e9c1 100644 --- a/drivers/ata/libata-scsi.c +++ b/drivers/ata/libata-scsi.c @@ -1217,6 +1217,10 @@ void ata_scsi_slave_destroy(struct scsi_device *sdev) /** * __ata_change_queue_depth - helper for ata_scsi_change_queue_depth + * @ap: ATA port to which the device change the queue depth + * @sdev: SCSI device to configure queue depth for + * @queue_depth: new queue depth + * @reason: calling context * * libsas and libata have different approaches for associating a sdev to * its ata_port. 
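Before the remaining driver hunks, a few recurring patterns in this series are worth a short illustration. First, the crypto changes above wrap every crypto_*_report() helper in #ifdef CONFIG_NET and fall back to a stub returning -ENOSYS, so the userspace-reporting hooks still compile when netlink support is not built in. A condensed sketch of that pattern follows; crypto_foo_report() is a made-up name standing in for the real report helpers, and the netlink body is elided to a comment:

#include <linux/crypto.h>
#include <linux/errno.h>

struct sk_buff;		/* only used as an opaque pointer here */

#ifdef CONFIG_NET
static int crypto_foo_report(struct sk_buff *skb, struct crypto_alg *alg)
{
	/* ...fill a crypto_report_* blob and NLA_PUT() it into skb,
	 * as crypto_ahash_report()/crypto_rng_report() do above... */
	return 0;
}
#else
static int crypto_foo_report(struct sk_buff *skb, struct crypto_alg *alg)
{
	return -ENOSYS;	/* netlink reporting not built in */
}
#endif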
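Second, the cpuidle rework that threads through this series (the SH-Mobile and ACPI idle drivers above, and the cpuidle core, governors and sysfs code a little further below) moves the state table from struct cpuidle_device into struct cpuidle_driver; the ->enter() callback now receives the driver and a state index, stores the measured residency in dev->last_residency itself, and returns the index it actually entered. The sketch below shows what a platform driver looks like under the new convention. It is illustrative only: the toy_* names and the latency/residency numbers are invented; only the signatures and ordering are taken from the patches.

#include <linux/cpuidle.h>
#include <linux/init.h>
#include <linux/ktime.h>
#include <linux/module.h>

/* Hypothetical hardware idle entry; a real driver halts the CPU here,
 * the way sh_mobile_call_standby() or acpi_idle_do_entry() are used in
 * the hunks above. */
static void toy_hw_idle(void)
{
}

/* Post-rework ->enter(): receives the driver and a state index, fills
 * in dev->last_residency itself, and returns the index it entered. */
static int toy_enter(struct cpuidle_device *dev,
		     struct cpuidle_driver *drv, int index)
{
	ktime_t before = ktime_get();

	toy_hw_idle();

	dev->last_residency =
		(int)ktime_to_us(ktime_sub(ktime_get(), before));

	return index;
}

static struct cpuidle_driver toy_idle_driver = {
	.name			= "toy_idle",
	.owner			= THIS_MODULE,
	/* slot 0 is reserved for the core's poll loop when
	 * CONFIG_ARCH_HAS_CPU_RELAX is set */
	.states[CPUIDLE_DRIVER_STATE_START] = {
		.name			= "C1",
		.desc			= "illustrative shallow idle",
		.exit_latency		= 1,	/* us, made up */
		.target_residency	= 1,	/* us, made up */
		.flags			= CPUIDLE_FLAG_TIME_VALID,
		.enter			= toy_enter,
	},
	.state_count		= CPUIDLE_DRIVER_STATE_START + 1,
	.safe_state_index	= CPUIDLE_DRIVER_STATE_START,
};

static struct cpuidle_device toy_idle_dev;	/* one CPU is enough for a sketch */

static int __init toy_idle_init(void)
{
	int ret;

	/* Driver (state definitions) first, device (per-CPU usage
	 * counters) second -- the same ordering the SH-Mobile and
	 * ACPI patches establish. */
	ret = cpuidle_register_driver(&toy_idle_driver);
	if (ret)
		return ret;

	toy_idle_dev.cpu = 0;
	toy_idle_dev.state_count = toy_idle_driver.state_count;

	ret = cpuidle_register_device(&toy_idle_dev);
	if (ret)
		cpuidle_unregister_driver(&toy_idle_driver);

	return ret;
}
device_initcall(toy_idle_init);

Note the registration order: cpuidle_register_driver() must come before cpuidle_register_device(), which is exactly the reshuffle made in sh_mobile_setup_cpuidle() and acpi_processor_power_init() above.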
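Third, the governor side of the same rework (ladder.c and menu.c below) changes ->select() to take the cpuidle_driver, where the states now live, and ->reflect() to take the index returned by ->enter() instead of reading dev->last_state. A minimal, hypothetical governor with the new signatures is sketched here; the toy_* names and the trivial selection policy are not part of the series:

#include <linux/cpuidle.h>
#include <linux/init.h>
#include <linux/module.h>

/* Always pick the shallowest real state; a real governor would weigh
 * drv->states[i].exit_latency and target_residency as menu/ladder do. */
static int toy_gov_select(struct cpuidle_driver *drv,
			  struct cpuidle_device *dev)
{
	return CPUIDLE_DRIVER_STATE_START;
}

static void toy_gov_reflect(struct cpuidle_device *dev, int index)
{
	/* index < 0 means ->enter() failed; otherwise a real governor
	 * would feed dev->last_residency back into its prediction here */
}

static struct cpuidle_governor toy_governor = {
	.name	 = "toy",
	.rating	 = 1,		/* low, so menu/ladder still win */
	.select	 = toy_gov_select,
	.reflect = toy_gov_reflect,
	.owner	 = THIS_MODULE,
};

static int __init toy_governor_init(void)
{
	return cpuidle_register_governor(&toy_governor);
}
device_initcall(toy_governor_init);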
diff --git a/drivers/ata/pata_of_platform.c b/drivers/ata/pata_of_platform.c index a72ab0dde4e5..2a472c5bb7db 100644 --- a/drivers/ata/pata_of_platform.c +++ b/drivers/ata/pata_of_platform.c @@ -52,7 +52,7 @@ static int __devinit pata_of_platform_probe(struct platform_device *ofdev) } ret = of_irq_to_resource(dn, 0, &irq_res); - if (ret == NO_IRQ) + if (!ret) irq_res.start = irq_res.end = 0; else irq_res.flags = 0; diff --git a/drivers/ata/sata_sis.c b/drivers/ata/sata_sis.c index 447d9c05fb5a..95ec435f0eb4 100644 --- a/drivers/ata/sata_sis.c +++ b/drivers/ata/sata_sis.c @@ -104,7 +104,7 @@ static const struct ata_port_info sis_port_info = { }; MODULE_AUTHOR("Uwe Koziolek"); -MODULE_DESCRIPTION("low-level driver for Silicon Integratad Systems SATA controller"); +MODULE_DESCRIPTION("low-level driver for Silicon Integrated Systems SATA controller"); MODULE_LICENSE("GPL"); MODULE_DEVICE_TABLE(pci, sis_pci_tbl); MODULE_VERSION(DRV_VERSION); diff --git a/drivers/base/power/opp.c b/drivers/base/power/opp.c index 434a6c011675..95706fa24c73 100644 --- a/drivers/base/power/opp.c +++ b/drivers/base/power/opp.c @@ -669,7 +669,7 @@ struct srcu_notifier_head *opp_get_notifier(struct device *dev) struct device_opp *dev_opp = find_device_opp(dev); if (IS_ERR(dev_opp)) - return ERR_PTR(PTR_ERR(dev_opp)); /* matching type */ + return ERR_CAST(dev_opp); /* matching type */ return &dev_opp->head; } diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c index db7cb8111fbe..106beb194f3c 100644 --- a/drivers/bluetooth/ath3k.c +++ b/drivers/bluetooth/ath3k.c @@ -105,7 +105,7 @@ static int ath3k_load_firmware(struct usb_device *udev, pipe = usb_sndctrlpipe(udev, 0); - send_buf = kmalloc(BULK_SIZE, GFP_ATOMIC); + send_buf = kmalloc(BULK_SIZE, GFP_KERNEL); if (!send_buf) { BT_ERR("Can't allocate memory chunk for firmware"); return -ENOMEM; @@ -176,7 +176,7 @@ static int ath3k_load_fwfile(struct usb_device *udev, count = firmware->size; - send_buf = kmalloc(BULK_SIZE, GFP_ATOMIC); + send_buf = kmalloc(BULK_SIZE, GFP_KERNEL); if (!send_buf) { BT_ERR("Can't allocate memory chunk for firmware"); return -ENOMEM; diff --git a/drivers/bluetooth/bcm203x.c b/drivers/bluetooth/bcm203x.c index 8b1b643a519b..54952ab800b8 100644 --- a/drivers/bluetooth/bcm203x.c +++ b/drivers/bluetooth/bcm203x.c @@ -24,6 +24,7 @@ #include <linux/module.h> +#include <linux/atomic.h> #include <linux/kernel.h> #include <linux/init.h> #include <linux/slab.h> @@ -65,6 +66,7 @@ struct bcm203x_data { unsigned long state; struct work_struct work; + atomic_t shutdown; struct urb *urb; unsigned char *buffer; @@ -97,6 +99,7 @@ static void bcm203x_complete(struct urb *urb) data->state = BCM203X_SELECT_MEMORY; + /* use workqueue to have a small delay */ schedule_work(&data->work); break; @@ -155,7 +158,10 @@ static void bcm203x_work(struct work_struct *work) struct bcm203x_data *data = container_of(work, struct bcm203x_data, work); - if (usb_submit_urb(data->urb, GFP_ATOMIC) < 0) + if (atomic_read(&data->shutdown)) + return; + + if (usb_submit_urb(data->urb, GFP_KERNEL) < 0) BT_ERR("Can't submit URB"); } @@ -243,6 +249,7 @@ static int bcm203x_probe(struct usb_interface *intf, const struct usb_device_id usb_set_intfdata(intf, data); + /* use workqueue to have a small delay */ schedule_work(&data->work); return 0; @@ -254,6 +261,9 @@ static void bcm203x_disconnect(struct usb_interface *intf) BT_DBG("intf %p", intf); + atomic_inc(&data->shutdown); + cancel_work_sync(&data->work); + usb_kill_urb(data->urb); usb_set_intfdata(intf, NULL); diff --git 
a/drivers/bluetooth/bfusb.c b/drivers/bluetooth/bfusb.c index 005919ab043c..61b591470a90 100644 --- a/drivers/bluetooth/bfusb.c +++ b/drivers/bluetooth/bfusb.c @@ -568,22 +568,23 @@ static int bfusb_load_firmware(struct bfusb_data *data, BT_INFO("BlueFRITZ! USB loading firmware"); + buf = kmalloc(BFUSB_MAX_BLOCK_SIZE + 3, GFP_KERNEL); + if (!buf) { + BT_ERR("Can't allocate memory chunk for firmware"); + return -ENOMEM; + } + pipe = usb_sndctrlpipe(data->udev, 0); if (usb_control_msg(data->udev, pipe, USB_REQ_SET_CONFIGURATION, 0, 1, 0, NULL, 0, USB_CTRL_SET_TIMEOUT) < 0) { BT_ERR("Can't change to loading configuration"); + kfree(buf); return -EBUSY; } data->udev->toggle[0] = data->udev->toggle[1] = 0; - buf = kmalloc(BFUSB_MAX_BLOCK_SIZE + 3, GFP_ATOMIC); - if (!buf) { - BT_ERR("Can't allocate memory chunk for firmware"); - return -ENOMEM; - } - pipe = usb_sndbulkpipe(data->udev, data->bulk_out_ep); while (count) { diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c index 66cd0b8096ca..c92424ca1a55 100644 --- a/drivers/char/agp/intel-gtt.c +++ b/drivers/char/agp/intel-gtt.c @@ -1186,10 +1186,11 @@ static void gen6_cleanup(void) /* Certain Gen5 chipsets require require idling the GPU before * unmapping anything from the GTT when VT-d is enabled. */ -extern int intel_iommu_gfx_mapped; static inline int needs_idle_maps(void) { +#ifdef CONFIG_INTEL_IOMMU const unsigned short gpu_devid = intel_private.pcidev->device; + extern int intel_iommu_gfx_mapped; /* Query intel_iommu to see if we need the workaround. Presumably that * was loaded first. @@ -1198,7 +1199,7 @@ static inline int needs_idle_maps(void) gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG) && intel_iommu_gfx_mapped) return 1; - +#endif return 0; } @@ -1236,7 +1237,7 @@ static int i9xx_setup(void) intel_private.gtt_bus_addr = reg_addr + gtt_offset; } - if (needs_idle_maps()); + if (needs_idle_maps()) intel_private.base.do_idle_maps = 1; intel_i9xx_setup_flush(); diff --git a/drivers/cpufreq/db8500-cpufreq.c b/drivers/cpufreq/db8500-cpufreq.c index edaa987621ea..f5002015d82e 100644 --- a/drivers/cpufreq/db8500-cpufreq.c +++ b/drivers/cpufreq/db8500-cpufreq.c @@ -109,7 +109,7 @@ static unsigned int db8500_cpufreq_getspeed(unsigned int cpu) static int __cpuinit db8500_cpufreq_init(struct cpufreq_policy *policy) { - int res; + int i, res; BUILD_BUG_ON(ARRAY_SIZE(idx2opp) + 1 != ARRAY_SIZE(freq_table)); @@ -120,8 +120,8 @@ static int __cpuinit db8500_cpufreq_init(struct cpufreq_policy *policy) freq_table[3].frequency = 1000000; } pr_info("db8500-cpufreq : Available frequencies:\n"); - while (freq_table[i].frequency != CPUFREQ_TABLE_END) - pr_info(" %d Mhz\n", freq_table[i++].frequency/1000); + for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) + pr_info(" %d Mhz\n", freq_table[i].frequency/1000); /* get policy fields based on the table */ res = cpufreq_frequency_table_cpuinfo(policy, freq_table); diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c index becd6d99203b..06ce2680d00d 100644 --- a/drivers/cpuidle/cpuidle.c +++ b/drivers/cpuidle/cpuidle.c @@ -62,8 +62,9 @@ static int __cpuidle_register_device(struct cpuidle_device *dev); int cpuidle_idle_call(void) { struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices); + struct cpuidle_driver *drv = cpuidle_get_driver(); struct cpuidle_state *target_state; - int next_state; + int next_state, entered_state; if (off) return -ENODEV; @@ -84,45 +85,36 @@ int cpuidle_idle_call(void) hrtimer_peek_ahead_timers(); #endif - /* - * Call the 
device's prepare function before calling the - * governor's select function. ->prepare gives the device's - * cpuidle driver a chance to update any dynamic information - * of its cpuidle states for the current idle period, e.g. - * state availability, latencies, residencies, etc. - */ - if (dev->prepare) - dev->prepare(dev); - /* ask the governor for the next state */ - next_state = cpuidle_curr_governor->select(dev); + next_state = cpuidle_curr_governor->select(drv, dev); if (need_resched()) { local_irq_enable(); return 0; } - target_state = &dev->states[next_state]; - - /* enter the state and update stats */ - dev->last_state = target_state; + target_state = &drv->states[next_state]; trace_power_start(POWER_CSTATE, next_state, dev->cpu); trace_cpu_idle(next_state, dev->cpu); - dev->last_residency = target_state->enter(dev, target_state); + entered_state = target_state->enter(dev, drv, next_state); trace_power_end(dev->cpu); trace_cpu_idle(PWR_EVENT_EXIT, dev->cpu); - if (dev->last_state) - target_state = dev->last_state; - - target_state->time += (unsigned long long)dev->last_residency; - target_state->usage++; + if (entered_state >= 0) { + /* Update cpuidle counters */ + /* This can be moved to within driver enter routine + * but that results in multiple copies of same code. + */ + dev->states_usage[entered_state].time += + (unsigned long long)dev->last_residency; + dev->states_usage[entered_state].usage++; + } /* give the governor an opportunity to reflect on the outcome */ if (cpuidle_curr_governor->reflect) - cpuidle_curr_governor->reflect(dev); + cpuidle_curr_governor->reflect(dev, entered_state); return 0; } @@ -173,11 +165,11 @@ void cpuidle_resume_and_unlock(void) EXPORT_SYMBOL_GPL(cpuidle_resume_and_unlock); #ifdef CONFIG_ARCH_HAS_CPU_RELAX -static int poll_idle(struct cpuidle_device *dev, struct cpuidle_state *st) +static int poll_idle(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) { ktime_t t1, t2; s64 diff; - int ret; t1 = ktime_get(); local_irq_enable(); @@ -189,15 +181,14 @@ static int poll_idle(struct cpuidle_device *dev, struct cpuidle_state *st) if (diff > INT_MAX) diff = INT_MAX; - ret = (int) diff; - return ret; + dev->last_residency = (int) diff; + + return index; } -static void poll_idle_init(struct cpuidle_device *dev) +static void poll_idle_init(struct cpuidle_driver *drv) { - struct cpuidle_state *state = &dev->states[0]; - - cpuidle_set_statedata(state, NULL); + struct cpuidle_state *state = &drv->states[0]; snprintf(state->name, CPUIDLE_NAME_LEN, "POLL"); snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE"); @@ -208,7 +199,7 @@ static void poll_idle_init(struct cpuidle_device *dev) state->enter = poll_idle; } #else -static void poll_idle_init(struct cpuidle_device *dev) {} +static void poll_idle_init(struct cpuidle_driver *drv) {} #endif /* CONFIG_ARCH_HAS_CPU_RELAX */ /** @@ -235,21 +226,20 @@ int cpuidle_enable_device(struct cpuidle_device *dev) return ret; } - poll_idle_init(dev); + poll_idle_init(cpuidle_get_driver()); if ((ret = cpuidle_add_state_sysfs(dev))) return ret; if (cpuidle_curr_governor->enable && - (ret = cpuidle_curr_governor->enable(dev))) + (ret = cpuidle_curr_governor->enable(cpuidle_get_driver(), dev))) goto fail_sysfs; for (i = 0; i < dev->state_count; i++) { - dev->states[i].usage = 0; - dev->states[i].time = 0; + dev->states_usage[i].usage = 0; + dev->states_usage[i].time = 0; } dev->last_residency = 0; - dev->last_state = NULL; smp_wmb(); @@ -283,7 +273,7 @@ void cpuidle_disable_device(struct 
cpuidle_device *dev) dev->enabled = 0; if (cpuidle_curr_governor->disable) - cpuidle_curr_governor->disable(dev); + cpuidle_curr_governor->disable(cpuidle_get_driver(), dev); cpuidle_remove_state_sysfs(dev); enabled_devices--; @@ -311,26 +301,6 @@ static int __cpuidle_register_device(struct cpuidle_device *dev) init_completion(&dev->kobj_unregister); - /* - * cpuidle driver should set the dev->power_specified bit - * before registering the device if the driver provides - * power_usage numbers. - * - * For those devices whose ->power_specified is not set, - * we fill in power_usage with decreasing values as the - * cpuidle code has an implicit assumption that state Cn - * uses less power than C(n-1). - * - * With CONFIG_ARCH_HAS_CPU_RELAX, C0 is already assigned - * an power value of -1. So we use -2, -3, etc, for other - * c-states. - */ - if (!dev->power_specified) { - int i; - for (i = CPUIDLE_DRIVER_STATE_START; i < dev->state_count; i++) - dev->states[i].power_usage = -1 - i; - } - per_cpu(cpuidle_devices, dev->cpu) = dev; list_add(&dev->device_list, &cpuidle_detected_devices); if ((ret = cpuidle_add_sysfs(sys_dev))) { diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c index 3f7e3cedd133..284d7af5a9c8 100644 --- a/drivers/cpuidle/driver.c +++ b/drivers/cpuidle/driver.c @@ -17,6 +17,30 @@ static struct cpuidle_driver *cpuidle_curr_driver; DEFINE_SPINLOCK(cpuidle_driver_lock); +static void __cpuidle_register_driver(struct cpuidle_driver *drv) +{ + int i; + /* + * cpuidle driver should set the drv->power_specified bit + * before registering if the driver provides + * power_usage numbers. + * + * If power_specified is not set, + * we fill in power_usage with decreasing values as the + * cpuidle code has an implicit assumption that state Cn + * uses less power than C(n-1). + * + * With CONFIG_ARCH_HAS_CPU_RELAX, C0 is already assigned + * an power value of -1. So we use -2, -3, etc, for other + * c-states. 
+ */ + if (!drv->power_specified) { + for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) + drv->states[i].power_usage = -1 - i; + } +} + + /** * cpuidle_register_driver - registers a driver * @drv: the driver @@ -34,6 +58,7 @@ int cpuidle_register_driver(struct cpuidle_driver *drv) spin_unlock(&cpuidle_driver_lock); return -EBUSY; } + __cpuidle_register_driver(drv); cpuidle_curr_driver = drv; spin_unlock(&cpuidle_driver_lock); diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c index 3b8fce20f023..b6a09ea859b1 100644 --- a/drivers/cpuidle/governors/ladder.c +++ b/drivers/cpuidle/governors/ladder.c @@ -60,9 +60,11 @@ static inline void ladder_do_selection(struct ladder_device *ldev, /** * ladder_select_state - selects the next state to enter + * @drv: cpuidle driver * @dev: the CPU */ -static int ladder_select_state(struct cpuidle_device *dev) +static int ladder_select_state(struct cpuidle_driver *drv, + struct cpuidle_device *dev) { struct ladder_device *ldev = &__get_cpu_var(ladder_devices); struct ladder_device_state *last_state; @@ -77,15 +79,17 @@ static int ladder_select_state(struct cpuidle_device *dev) last_state = &ldev->states[last_idx]; - if (dev->states[last_idx].flags & CPUIDLE_FLAG_TIME_VALID) - last_residency = cpuidle_get_last_residency(dev) - dev->states[last_idx].exit_latency; + if (drv->states[last_idx].flags & CPUIDLE_FLAG_TIME_VALID) { + last_residency = cpuidle_get_last_residency(dev) - \ + drv->states[last_idx].exit_latency; + } else last_residency = last_state->threshold.promotion_time + 1; /* consider promotion */ - if (last_idx < dev->state_count - 1 && + if (last_idx < drv->state_count - 1 && last_residency > last_state->threshold.promotion_time && - dev->states[last_idx + 1].exit_latency <= latency_req) { + drv->states[last_idx + 1].exit_latency <= latency_req) { last_state->stats.promotion_count++; last_state->stats.demotion_count = 0; if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) { @@ -96,11 +100,11 @@ static int ladder_select_state(struct cpuidle_device *dev) /* consider demotion */ if (last_idx > CPUIDLE_DRIVER_STATE_START && - dev->states[last_idx].exit_latency > latency_req) { + drv->states[last_idx].exit_latency > latency_req) { int i; for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) { - if (dev->states[i].exit_latency <= latency_req) + if (drv->states[i].exit_latency <= latency_req) break; } ladder_do_selection(ldev, last_idx, i); @@ -123,9 +127,11 @@ static int ladder_select_state(struct cpuidle_device *dev) /** * ladder_enable_device - setup for the governor + * @drv: cpuidle driver * @dev: the CPU */ -static int ladder_enable_device(struct cpuidle_device *dev) +static int ladder_enable_device(struct cpuidle_driver *drv, + struct cpuidle_device *dev) { int i; struct ladder_device *ldev = &per_cpu(ladder_devices, dev->cpu); @@ -134,8 +140,8 @@ static int ladder_enable_device(struct cpuidle_device *dev) ldev->last_state_idx = CPUIDLE_DRIVER_STATE_START; - for (i = 0; i < dev->state_count; i++) { - state = &dev->states[i]; + for (i = 0; i < drv->state_count; i++) { + state = &drv->states[i]; lstate = &ldev->states[i]; lstate->stats.promotion_count = 0; @@ -144,7 +150,7 @@ static int ladder_enable_device(struct cpuidle_device *dev) lstate->threshold.promotion_count = PROMOTION_COUNT; lstate->threshold.demotion_count = DEMOTION_COUNT; - if (i < dev->state_count - 1) + if (i < drv->state_count - 1) lstate->threshold.promotion_time = state->exit_latency; if (i > 0) 
lstate->threshold.demotion_time = state->exit_latency; @@ -153,11 +159,24 @@ static int ladder_enable_device(struct cpuidle_device *dev) return 0; } +/** + * ladder_reflect - update the correct last_state_idx + * @dev: the CPU + * @index: the index of actual state entered + */ +static void ladder_reflect(struct cpuidle_device *dev, int index) +{ + struct ladder_device *ldev = &__get_cpu_var(ladder_devices); + if (index > 0) + ldev->last_state_idx = index; +} + static struct cpuidle_governor ladder_governor = { .name = "ladder", .rating = 10, .enable = ladder_enable_device, .select = ladder_select_state, + .reflect = ladder_reflect, .owner = THIS_MODULE, }; diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c index 00275244ce2f..ad0952601ae2 100644 --- a/drivers/cpuidle/governors/menu.c +++ b/drivers/cpuidle/governors/menu.c @@ -183,7 +183,7 @@ static inline int performance_multiplier(void) static DEFINE_PER_CPU(struct menu_device, menu_devices); -static void menu_update(struct cpuidle_device *dev); +static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev); /* This implements DIV_ROUND_CLOSEST but avoids 64 bit division */ static u64 div_round64(u64 dividend, u32 divisor) @@ -229,9 +229,10 @@ static void detect_repeating_patterns(struct menu_device *data) /** * menu_select - selects the next idle state to enter + * @drv: cpuidle driver containing state data * @dev: the CPU */ -static int menu_select(struct cpuidle_device *dev) +static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev) { struct menu_device *data = &__get_cpu_var(menu_devices); int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY); @@ -241,7 +242,7 @@ static int menu_select(struct cpuidle_device *dev) struct timespec t; if (data->needs_update) { - menu_update(dev); + menu_update(drv, dev); data->needs_update = 0; } @@ -286,11 +287,9 @@ static int menu_select(struct cpuidle_device *dev) * Find the idle state with the lowest power while satisfying * our constraints. */ - for (i = CPUIDLE_DRIVER_STATE_START; i < dev->state_count; i++) { - struct cpuidle_state *s = &dev->states[i]; + for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) { + struct cpuidle_state *s = &drv->states[i]; - if (s->flags & CPUIDLE_FLAG_IGNORE) - continue; if (s->target_residency > data->predicted_us) continue; if (s->exit_latency > latency_req) @@ -311,26 +310,30 @@ static int menu_select(struct cpuidle_device *dev) /** * menu_reflect - records that data structures need update * @dev: the CPU + * @index: the index of actual entered state * * NOTE: it's important to be fast here because this operation will add to * the overall exit latency. 
*/ -static void menu_reflect(struct cpuidle_device *dev) +static void menu_reflect(struct cpuidle_device *dev, int index) { struct menu_device *data = &__get_cpu_var(menu_devices); - data->needs_update = 1; + data->last_state_idx = index; + if (index >= 0) + data->needs_update = 1; } /** * menu_update - attempts to guess what happened after entry + * @drv: cpuidle driver containing state data * @dev: the CPU */ -static void menu_update(struct cpuidle_device *dev) +static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev) { struct menu_device *data = &__get_cpu_var(menu_devices); int last_idx = data->last_state_idx; unsigned int last_idle_us = cpuidle_get_last_residency(dev); - struct cpuidle_state *target = &dev->states[last_idx]; + struct cpuidle_state *target = &drv->states[last_idx]; unsigned int measured_us; u64 new_factor; @@ -384,9 +387,11 @@ static void menu_update(struct cpuidle_device *dev) /** * menu_enable_device - scans a CPU's states and does setup + * @drv: cpuidle driver * @dev: the CPU */ -static int menu_enable_device(struct cpuidle_device *dev) +static int menu_enable_device(struct cpuidle_driver *drv, + struct cpuidle_device *dev) { struct menu_device *data = &per_cpu(menu_devices, dev->cpu); diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c index be7917ec40c9..1e756e160dca 100644 --- a/drivers/cpuidle/sysfs.c +++ b/drivers/cpuidle/sysfs.c @@ -216,7 +216,8 @@ static struct kobj_type ktype_cpuidle = { struct cpuidle_state_attr { struct attribute attr; - ssize_t (*show)(struct cpuidle_state *, char *); + ssize_t (*show)(struct cpuidle_state *, \ + struct cpuidle_state_usage *, char *); ssize_t (*store)(struct cpuidle_state *, const char *, size_t); }; @@ -224,19 +225,22 @@ struct cpuidle_state_attr { static struct cpuidle_state_attr attr_##_name = __ATTR(_name, 0444, show, NULL) #define define_show_state_function(_name) \ -static ssize_t show_state_##_name(struct cpuidle_state *state, char *buf) \ +static ssize_t show_state_##_name(struct cpuidle_state *state, \ + struct cpuidle_state_usage *state_usage, char *buf) \ { \ return sprintf(buf, "%u\n", state->_name);\ } #define define_show_state_ull_function(_name) \ -static ssize_t show_state_##_name(struct cpuidle_state *state, char *buf) \ +static ssize_t show_state_##_name(struct cpuidle_state *state, \ + struct cpuidle_state_usage *state_usage, char *buf) \ { \ - return sprintf(buf, "%llu\n", state->_name);\ + return sprintf(buf, "%llu\n", state_usage->_name);\ } #define define_show_state_str_function(_name) \ -static ssize_t show_state_##_name(struct cpuidle_state *state, char *buf) \ +static ssize_t show_state_##_name(struct cpuidle_state *state, \ + struct cpuidle_state_usage *state_usage, char *buf) \ { \ if (state->_name[0] == '\0')\ return sprintf(buf, "<null>\n");\ @@ -269,16 +273,18 @@ static struct attribute *cpuidle_state_default_attrs[] = { #define kobj_to_state_obj(k) container_of(k, struct cpuidle_state_kobj, kobj) #define kobj_to_state(k) (kobj_to_state_obj(k)->state) +#define kobj_to_state_usage(k) (kobj_to_state_obj(k)->state_usage) #define attr_to_stateattr(a) container_of(a, struct cpuidle_state_attr, attr) static ssize_t cpuidle_state_show(struct kobject * kobj, struct attribute * attr ,char * buf) { int ret = -EIO; struct cpuidle_state *state = kobj_to_state(kobj); + struct cpuidle_state_usage *state_usage = kobj_to_state_usage(kobj); struct cpuidle_state_attr * cattr = attr_to_stateattr(attr); if (cattr->show) - ret = cattr->show(state, buf); + ret = 
cattr->show(state, state_usage, buf); return ret; } @@ -316,13 +322,15 @@ int cpuidle_add_state_sysfs(struct cpuidle_device *device) { int i, ret = -ENOMEM; struct cpuidle_state_kobj *kobj; + struct cpuidle_driver *drv = cpuidle_get_driver(); /* state statistics */ for (i = 0; i < device->state_count; i++) { kobj = kzalloc(sizeof(struct cpuidle_state_kobj), GFP_KERNEL); if (!kobj) goto error_state; - kobj->state = &device->states[i]; + kobj->state = &drv->states[i]; + kobj->state_usage = &device->states_usage[i]; init_completion(&kobj->kobj_unregister); ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle, &device->kobj, diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c index 0e49d87f6c60..0b0562979171 100644 --- a/drivers/gpio/gpio-omap.c +++ b/drivers/gpio/gpio-omap.c @@ -148,13 +148,17 @@ static int _get_gpio_dataout(struct gpio_bank *bank, int gpio) return (__raw_readl(reg) & GPIO_BIT(bank, gpio)) != 0; } -#define MOD_REG_BIT(reg, bit_mask, set) \ -do { \ - int l = __raw_readl(base + reg); \ - if (set) l |= bit_mask; \ - else l &= ~bit_mask; \ - __raw_writel(l, base + reg); \ -} while(0) +static inline void _gpio_rmw(void __iomem *base, u32 reg, u32 mask, bool set) +{ + int l = __raw_readl(base + reg); + + if (set) + l |= mask; + else + l &= ~mask; + + __raw_writel(l, base + reg); +} /** * _set_gpio_debounce - low level gpio debounce time @@ -210,28 +214,28 @@ static inline void set_24xx_gpio_triggering(struct gpio_bank *bank, int gpio, u32 gpio_bit = 1 << gpio; if (cpu_is_omap44xx()) { - MOD_REG_BIT(OMAP4_GPIO_LEVELDETECT0, gpio_bit, - trigger & IRQ_TYPE_LEVEL_LOW); - MOD_REG_BIT(OMAP4_GPIO_LEVELDETECT1, gpio_bit, - trigger & IRQ_TYPE_LEVEL_HIGH); - MOD_REG_BIT(OMAP4_GPIO_RISINGDETECT, gpio_bit, - trigger & IRQ_TYPE_EDGE_RISING); - MOD_REG_BIT(OMAP4_GPIO_FALLINGDETECT, gpio_bit, - trigger & IRQ_TYPE_EDGE_FALLING); + _gpio_rmw(base, OMAP4_GPIO_LEVELDETECT0, gpio_bit, + trigger & IRQ_TYPE_LEVEL_LOW); + _gpio_rmw(base, OMAP4_GPIO_LEVELDETECT1, gpio_bit, + trigger & IRQ_TYPE_LEVEL_HIGH); + _gpio_rmw(base, OMAP4_GPIO_RISINGDETECT, gpio_bit, + trigger & IRQ_TYPE_EDGE_RISING); + _gpio_rmw(base, OMAP4_GPIO_FALLINGDETECT, gpio_bit, + trigger & IRQ_TYPE_EDGE_FALLING); } else { - MOD_REG_BIT(OMAP24XX_GPIO_LEVELDETECT0, gpio_bit, - trigger & IRQ_TYPE_LEVEL_LOW); - MOD_REG_BIT(OMAP24XX_GPIO_LEVELDETECT1, gpio_bit, - trigger & IRQ_TYPE_LEVEL_HIGH); - MOD_REG_BIT(OMAP24XX_GPIO_RISINGDETECT, gpio_bit, - trigger & IRQ_TYPE_EDGE_RISING); - MOD_REG_BIT(OMAP24XX_GPIO_FALLINGDETECT, gpio_bit, - trigger & IRQ_TYPE_EDGE_FALLING); + _gpio_rmw(base, OMAP24XX_GPIO_LEVELDETECT0, gpio_bit, + trigger & IRQ_TYPE_LEVEL_LOW); + _gpio_rmw(base, OMAP24XX_GPIO_LEVELDETECT1, gpio_bit, + trigger & IRQ_TYPE_LEVEL_HIGH); + _gpio_rmw(base, OMAP24XX_GPIO_RISINGDETECT, gpio_bit, + trigger & IRQ_TYPE_EDGE_RISING); + _gpio_rmw(base, OMAP24XX_GPIO_FALLINGDETECT, gpio_bit, + trigger & IRQ_TYPE_EDGE_FALLING); } if (likely(!(bank->non_wakeup_gpios & gpio_bit))) { if (cpu_is_omap44xx()) { - MOD_REG_BIT(OMAP4_GPIO_IRQWAKEN0, gpio_bit, - trigger != 0); + _gpio_rmw(base, OMAP4_GPIO_IRQWAKEN0, gpio_bit, + trigger != 0); } else { /* * GPIO wakeup request can only be generated on edge @@ -1086,6 +1090,11 @@ omap_mpuio_alloc_gc(struct gpio_bank *bank, unsigned int irq_start, gc = irq_alloc_generic_chip("MPUIO", 1, irq_start, bank->base, handle_simple_irq); + if (!gc) { + dev_err(bank->dev, "Memory alloc failed for gc\n"); + return; + } + ct = gc->chip_types; /* NOTE: No ack required, reading IRQ status clears it. 
*/ diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c index 0550dcb85814..147df8ae79db 100644 --- a/drivers/gpio/gpio-pca953x.c +++ b/drivers/gpio/gpio-pca953x.c @@ -596,9 +596,6 @@ static int __devinit device_pca953x_init(struct pca953x_chip *chip, int invert) /* set platform specific polarity inversion */ ret = pca953x_write_reg(chip, PCA953X_INVERT, invert); - if (ret) - goto out; - return 0; out: return ret; } @@ -640,7 +637,7 @@ static int __devinit pca953x_probe(struct i2c_client *client, struct pca953x_platform_data *pdata; struct pca953x_chip *chip; int irq_base=0, invert=0; - int ret = 0; + int ret; chip = kzalloc(sizeof(struct pca953x_chip), GFP_KERNEL); if (chip == NULL) @@ -673,10 +670,10 @@ static int __devinit pca953x_probe(struct i2c_client *client, pca953x_setup_gpio(chip, id->driver_data & PCA_GPIO_MASK); if (chip->chip_type == PCA953X_TYPE) - device_pca953x_init(chip, invert); - else if (chip->chip_type == PCA957X_TYPE) - device_pca957x_init(chip, invert); + ret = device_pca953x_init(chip, invert); else + ret = device_pca957x_init(chip, invert); + if (ret) goto out_failed; ret = pca953x_irq_setup(chip, id, irq_base); diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 785127cb281b..1368826ef284 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -9,7 +9,6 @@ menuconfig DRM depends on (AGP || AGP=n) && !EMULATED_CMPXCHG && MMU select I2C select I2C_ALGOBIT - select SLOW_WORK help Kernel-level support for the Direct Rendering Infrastructure (DRI) introduced in XFree86 4.0. If you say Y here, you need to select @@ -96,6 +95,7 @@ config DRM_I915 select FB_CFB_IMAGEBLIT # i915 depends on ACPI_VIDEO when ACPI is enabled # but for select to work, need to select ACPI_VIDEO's dependencies, ick + select BACKLIGHT_LCD_SUPPORT if ACPI select BACKLIGHT_CLASS_DEVICE if ACPI select VIDEO_OUTPUT_CONTROL if ACPI select INPUT if ACPI diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c index 711d9653abd0..405c63b9d539 100644 --- a/drivers/gpu/drm/drm_crtc.c +++ b/drivers/gpu/drm/drm_crtc.c @@ -163,6 +163,7 @@ static struct drm_conn_prop_enum_list drm_connector_enum_list[] = { DRM_MODE_CONNECTOR_HDMIB, "HDMI-B", 0 }, { DRM_MODE_CONNECTOR_TV, "TV", 0 }, { DRM_MODE_CONNECTOR_eDP, "eDP", 0 }, + { DRM_MODE_CONNECTOR_VIRTUAL, "Virtual", 0}, }; static struct drm_prop_enum_list drm_encoder_enum_list[] = @@ -171,6 +172,7 @@ static struct drm_prop_enum_list drm_encoder_enum_list[] = { DRM_MODE_ENCODER_TMDS, "TMDS" }, { DRM_MODE_ENCODER_LVDS, "LVDS" }, { DRM_MODE_ENCODER_TVDAC, "TV" }, + { DRM_MODE_ENCODER_VIRTUAL, "Virtual" }, }; char *drm_get_encoder_name(struct drm_encoder *encoder) @@ -464,8 +466,10 @@ void drm_connector_init(struct drm_device *dev, list_add_tail(&connector->head, &dev->mode_config.connector_list); dev->mode_config.num_connector++; - drm_connector_attach_property(connector, - dev->mode_config.edid_property, 0); + if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL) + drm_connector_attach_property(connector, + dev->mode_config.edid_property, + 0); drm_connector_attach_property(connector, dev->mode_config.dpms_property, 0); @@ -2114,8 +2118,10 @@ struct drm_property *drm_property_create(struct drm_device *dev, int flags, property->num_values = num_values; INIT_LIST_HEAD(&property->enum_blob_list); - if (name) + if (name) { strncpy(property->name, name, DRM_PROP_NAME_LEN); + property->name[DRM_PROP_NAME_LEN-1] = '\0'; + } list_add_tail(&property->head, &dev->mode_config.property_list); return property; diff --git 
a/drivers/gpu/drm/drm_crtc_helper.c b/drivers/gpu/drm/drm_crtc_helper.c index 2957636161e8..3969f7553fe7 100644 --- a/drivers/gpu/drm/drm_crtc_helper.c +++ b/drivers/gpu/drm/drm_crtc_helper.c @@ -484,6 +484,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set) struct drm_connector *save_connectors, *connector; int count = 0, ro, fail = 0; struct drm_crtc_helper_funcs *crtc_funcs; + struct drm_mode_set save_set; int ret = 0; int i; @@ -556,6 +557,12 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set) save_connectors[count++] = *connector; } + save_set.crtc = set->crtc; + save_set.mode = &set->crtc->mode; + save_set.x = set->crtc->x; + save_set.y = set->crtc->y; + save_set.fb = set->crtc->fb; + /* We should be able to check here if the fb has the same properties * and then just flip_or_move it */ if (set->crtc->fb != set->fb) { @@ -721,6 +728,12 @@ fail: *connector = save_connectors[count++]; } + /* Try to restore the config */ + if (mode_changed && + !drm_crtc_helper_set_mode(save_set.crtc, save_set.mode, save_set.x, + save_set.y, save_set.fb)) + DRM_ERROR("failed to restore config after modeset failure\n"); + kfree(save_connectors); kfree(save_encoders); kfree(save_crtcs); diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c index d067c12ba940..1c7a1c0d3edd 100644 --- a/drivers/gpu/drm/drm_debugfs.c +++ b/drivers/gpu/drm/drm_debugfs.c @@ -118,7 +118,10 @@ int drm_debugfs_create_files(struct drm_info_list *files, int count, tmp->minor = minor; tmp->dent = ent; tmp->info_ent = &files[i]; - list_add(&(tmp->list), &(minor->debugfs_nodes.list)); + + mutex_lock(&minor->debugfs_lock); + list_add(&tmp->list, &minor->debugfs_list); + mutex_unlock(&minor->debugfs_lock); } return 0; @@ -146,7 +149,8 @@ int drm_debugfs_init(struct drm_minor *minor, int minor_id, char name[64]; int ret; - INIT_LIST_HEAD(&minor->debugfs_nodes.list); + INIT_LIST_HEAD(&minor->debugfs_list); + mutex_init(&minor->debugfs_lock); sprintf(name, "%d", minor_id); minor->debugfs_root = debugfs_create_dir(name, root); if (!minor->debugfs_root) { @@ -192,8 +196,9 @@ int drm_debugfs_remove_files(struct drm_info_list *files, int count, struct drm_info_node *tmp; int i; + mutex_lock(&minor->debugfs_lock); for (i = 0; i < count; i++) { - list_for_each_safe(pos, q, &minor->debugfs_nodes.list) { + list_for_each_safe(pos, q, &minor->debugfs_list) { tmp = list_entry(pos, struct drm_info_node, list); if (tmp->info_ent == &files[i]) { debugfs_remove(tmp->dent); @@ -202,6 +207,7 @@ int drm_debugfs_remove_files(struct drm_info_list *files, int count, } } } + mutex_unlock(&minor->debugfs_lock); return 0; } EXPORT_SYMBOL(drm_debugfs_remove_files); diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c index fc81af9dbf42..40c187c60f44 100644 --- a/drivers/gpu/drm/drm_drv.c +++ b/drivers/gpu/drm/drm_drv.c @@ -125,7 +125,7 @@ static struct drm_ioctl_desc drm_ioctls[] = { DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_sg_alloc_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), - DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, 0), + DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_modeset_ctl, 0), diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c index cb3794a00f98..68b756253f9f 100644 --- a/drivers/gpu/drm/drm_irq.c +++ b/drivers/gpu/drm/drm_irq.c @@ -407,13 +407,16 @@ int drm_irq_uninstall(struct drm_device *dev) /* * Wake up any waiters so they don't 
hang. */ - spin_lock_irqsave(&dev->vbl_lock, irqflags); - for (i = 0; i < dev->num_crtcs; i++) { - DRM_WAKEUP(&dev->vbl_queue[i]); - dev->vblank_enabled[i] = 0; - dev->last_vblank[i] = dev->driver->get_vblank_counter(dev, i); + if (dev->num_crtcs) { + spin_lock_irqsave(&dev->vbl_lock, irqflags); + for (i = 0; i < dev->num_crtcs; i++) { + DRM_WAKEUP(&dev->vbl_queue[i]); + dev->vblank_enabled[i] = 0; + dev->last_vblank[i] = + dev->driver->get_vblank_counter(dev, i); + } + spin_unlock_irqrestore(&dev->vbl_lock, irqflags); } - spin_unlock_irqrestore(&dev->vbl_lock, irqflags); if (!irq_enabled) return -EINVAL; @@ -1125,6 +1128,7 @@ static int drm_queue_vblank_event(struct drm_device *dev, int pipe, trace_drm_vblank_event_delivered(current->pid, pipe, vblwait->request.sequence); } else { + /* drm_handle_vblank_events will call drm_vblank_put */ list_add_tail(&e->base.link, &dev->vblank_event_list); vblwait->reply.sequence = vblwait->request.sequence; } @@ -1205,8 +1209,12 @@ int drm_wait_vblank(struct drm_device *dev, void *data, goto done; } - if (flags & _DRM_VBLANK_EVENT) + if (flags & _DRM_VBLANK_EVENT) { + /* must hold on to the vblank ref until the event fires + * drm_vblank_put will be called asynchronously + */ return drm_queue_vblank_event(dev, crtc, vblwait, file_priv); + } if ((flags & _DRM_VBLANK_NEXTONMISS) && (seq - vblwait->request.sequence) <= (1<<23)) { diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index d14b44e13f51..4f40f1ce1d8e 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -1506,7 +1506,10 @@ drm_add_fake_info_node(struct drm_minor *minor, node->minor = minor; node->dent = ent; node->info_ent = (void *) key; - list_add(&node->list, &minor->debugfs_nodes.list); + + mutex_lock(&minor->debugfs_lock); + list_add(&node->list, &minor->debugfs_list); + mutex_unlock(&minor->debugfs_lock); return 0; } diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index cc531bb59c26..e9c2cfe45daa 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -789,8 +789,8 @@ static struct vm_operations_struct i915_gem_vm_ops = { }; static struct drm_driver driver = { - /* don't use mtrr's here, the Xserver or user space app should - * deal with them for intel hardware. + /* Don't use MTRRs here; the Xserver or userspace app should + * deal with them for Intel hardware. 
*/ .driver_features = DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | /* DRIVER_USE_MTRR |*/ diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 6651c36b6e8a..d18b07adcffa 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1396,7 +1396,7 @@ i915_gem_mmap_gtt(struct drm_file *file, if (obj->base.size > dev_priv->mm.gtt_mappable_end) { ret = -E2BIG; - goto unlock; + goto out; } if (obj->madv != I915_MADV_WILLNEED) { diff --git a/drivers/gpu/drm/nouveau/nouveau_bios.c b/drivers/gpu/drm/nouveau/nouveau_bios.c index 032a82098136..5fc201b49d30 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bios.c +++ b/drivers/gpu/drm/nouveau/nouveau_bios.c @@ -640,10 +640,9 @@ static int nv50_pll_set(struct drm_device *dev, uint32_t reg, uint32_t clk) { struct drm_nouveau_private *dev_priv = dev->dev_private; - uint32_t reg0 = nv_rd32(dev, reg + 0); - uint32_t reg1 = nv_rd32(dev, reg + 4); struct nouveau_pll_vals pll; struct pll_lims pll_limits; + u32 ctrl, mask, coef; int ret; ret = get_pll_limits(dev, reg, &pll_limits); @@ -654,15 +653,20 @@ nv50_pll_set(struct drm_device *dev, uint32_t reg, uint32_t clk) if (!clk) return -ERANGE; - reg0 = (reg0 & 0xfff8ffff) | (pll.log2P << 16); - reg1 = (reg1 & 0xffff0000) | (pll.N1 << 8) | pll.M1; - - if (dev_priv->vbios.execute) { - still_alive(); - nv_wr32(dev, reg + 4, reg1); - nv_wr32(dev, reg + 0, reg0); + coef = pll.N1 << 8 | pll.M1; + ctrl = pll.log2P << 16; + mask = 0x00070000; + if (reg == 0x004008) { + mask |= 0x01f80000; + ctrl |= (pll_limits.log2p_bias << 19); + ctrl |= (pll.log2P << 22); } + if (!dev_priv->vbios.execute) + return 0; + + nv_mask(dev, reg + 0, mask, ctrl); + nv_wr32(dev, reg + 4, coef); return 0; } diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 7226f419e178..7cc37e690860 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -148,7 +148,7 @@ set_placement_range(struct nouveau_bo *nvbo, uint32_t type) if (dev_priv->card_type == NV_10 && nvbo->tile_mode && (type & TTM_PL_FLAG_VRAM) && - nvbo->bo.mem.num_pages < vram_pages / 2) { + nvbo->bo.mem.num_pages < vram_pages / 4) { /* * Make sure that the color and depth buffers are handled * by independent memory controller units. 
Up to a 9x diff --git a/drivers/gpu/drm/nouveau/nouveau_channel.c b/drivers/gpu/drm/nouveau/nouveau_channel.c index a319d5646ea9..bb6ec9ef8676 100644 --- a/drivers/gpu/drm/nouveau/nouveau_channel.c +++ b/drivers/gpu/drm/nouveau/nouveau_channel.c @@ -158,6 +158,7 @@ nouveau_channel_alloc(struct drm_device *dev, struct nouveau_channel **chan_ret, INIT_LIST_HEAD(&chan->nvsw.vbl_wait); INIT_LIST_HEAD(&chan->nvsw.flip); INIT_LIST_HEAD(&chan->fence.pending); + spin_lock_init(&chan->fence.lock); /* setup channel's memory and vm */ ret = nouveau_gpuobj_channel_init(chan, vram_handle, gart_handle); diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c index e0d275e1c96c..cea6696b1906 100644 --- a/drivers/gpu/drm/nouveau/nouveau_connector.c +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c @@ -710,7 +710,7 @@ nouveau_connector_mode_valid(struct drm_connector *connector, case OUTPUT_DP: max_clock = nv_encoder->dp.link_nr; max_clock *= nv_encoder->dp.link_bw; - clock = clock * nouveau_connector_bpp(connector) / 8; + clock = clock * nouveau_connector_bpp(connector) / 10; break; default: BUG_ON(1); diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c index 14a8627efe4d..3a4cc32b9e44 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c +++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c @@ -487,6 +487,7 @@ int nouveau_fbcon_init(struct drm_device *dev) { struct drm_nouveau_private *dev_priv = dev->dev_private; struct nouveau_fbdev *nfbdev; + int preferred_bpp; int ret; nfbdev = kzalloc(sizeof(struct nouveau_fbdev), GFP_KERNEL); @@ -505,7 +506,15 @@ int nouveau_fbcon_init(struct drm_device *dev) } drm_fb_helper_single_add_all_connectors(&nfbdev->helper); - drm_fb_helper_initial_config(&nfbdev->helper, 32); + + if (dev_priv->vram_size <= 32 * 1024 * 1024) + preferred_bpp = 8; + else if (dev_priv->vram_size <= 64 * 1024 * 1024) + preferred_bpp = 16; + else + preferred_bpp = 32; + + drm_fb_helper_initial_config(&nfbdev->helper, preferred_bpp); return 0; } diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c index 81116cfea275..2f6daae68b9d 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fence.c +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c @@ -539,8 +539,6 @@ nouveau_fence_channel_init(struct nouveau_channel *chan) return ret; } - INIT_LIST_HEAD(&chan->fence.pending); - spin_lock_init(&chan->fence.lock); atomic_set(&chan->fence.last_sequence_irq, 0); return 0; } diff --git a/drivers/gpu/drm/nouveau/nouveau_i2c.c b/drivers/gpu/drm/nouveau/nouveau_i2c.c index c6143df48b9f..d39b2202b197 100644 --- a/drivers/gpu/drm/nouveau/nouveau_i2c.c +++ b/drivers/gpu/drm/nouveau/nouveau_i2c.c @@ -333,7 +333,7 @@ nouveau_i2c_identify(struct drm_device *dev, const char *what, NV_DEBUG(dev, "Probing %ss on I2C bus: %d\n", what, index); - for (i = 0; info[i].addr; i++) { + for (i = 0; i2c && info[i].addr; i++) { if (nouveau_probe_i2c_addr(i2c, info[i].addr) && (!match || match(i2c, &info[i]))) { NV_INFO(dev, "Detected %s: %s\n", what, info[i].type); diff --git a/drivers/gpu/drm/nouveau/nouveau_perf.c b/drivers/gpu/drm/nouveau/nouveau_perf.c index 9f178aa94162..33d03fbf00df 100644 --- a/drivers/gpu/drm/nouveau/nouveau_perf.c +++ b/drivers/gpu/drm/nouveau/nouveau_perf.c @@ -239,7 +239,7 @@ nouveau_perf_init(struct drm_device *dev) if(version == 0x15) { memtimings->timing = kcalloc(entries, sizeof(*memtimings->timing), GFP_KERNEL); - if(!memtimings) { + if (!memtimings->timing) { NV_WARN(dev,"Could not 
allocate memtiming table\n"); return; } diff --git a/drivers/gpu/drm/nouveau/nouveau_state.c b/drivers/gpu/drm/nouveau/nouveau_state.c index 82478e0998e5..d8831ab42bb9 100644 --- a/drivers/gpu/drm/nouveau/nouveau_state.c +++ b/drivers/gpu/drm/nouveau/nouveau_state.c @@ -579,6 +579,14 @@ nouveau_card_init(struct drm_device *dev) if (ret) goto out_display_early; + /* workaround an odd issue on nvc1 by disabling the device's + * nosnoop capability. hopefully won't cause issues until a + * better fix is found - assuming there is one... + */ + if (dev_priv->chipset == 0xc1) { + nv_mask(dev, 0x00088080, 0x00000800, 0x00000000); + } + nouveau_pm_init(dev); ret = engine->vram.init(dev); @@ -1102,12 +1110,13 @@ int nouveau_load(struct drm_device *dev, unsigned long flags) dev_priv->noaccel = !!nouveau_noaccel; if (nouveau_noaccel == -1) { switch (dev_priv->chipset) { - case 0xc1: /* known broken */ - case 0xc8: /* never tested */ +#if 0 + case 0xXX: /* known broken */ NV_INFO(dev, "acceleration disabled by default, pass " "noaccel=0 to force enable\n"); dev_priv->noaccel = true; break; +#endif default: dev_priv->noaccel = false; break; diff --git a/drivers/gpu/drm/nouveau/nv40_pm.c b/drivers/gpu/drm/nouveau/nv40_pm.c index bbc0b9c7e1f7..e676b0d53478 100644 --- a/drivers/gpu/drm/nouveau/nv40_pm.c +++ b/drivers/gpu/drm/nouveau/nv40_pm.c @@ -57,12 +57,14 @@ read_pll_2(struct drm_device *dev, u32 reg) int P = (ctrl & 0x00070000) >> 16; u32 ref = 27000, clk = 0; - if (ctrl & 0x80000000) + if ((ctrl & 0x80000000) && M1) { clk = ref * N1 / M1; - - if (!(ctrl & 0x00000100)) { - if (ctrl & 0x40000000) - clk = clk * N2 / M2; + if ((ctrl & 0x40000100) == 0x40000000) { + if (M2) + clk = clk * N2 / M2; + else + clk = 0; + } } return clk >> P; @@ -177,6 +179,11 @@ nv40_pm_clocks_pre(struct drm_device *dev, struct nouveau_pm_level *perflvl) } /* memory clock */ + if (!perflvl->memory) { + info->mpll_ctrl = 0x00000000; + goto out; + } + ret = nv40_calc_pll(dev, 0x004020, &pll, perflvl->memory, &N1, &M1, &N2, &M2, &log2P); if (ret < 0) @@ -264,6 +271,9 @@ nv40_pm_clocks_set(struct drm_device *dev, void *pre_state) mdelay(5); nv_mask(dev, 0x00c040, 0x00000333, info->ctrl); + if (!info->mpll_ctrl) + goto resume; + /* wait for vblank start on active crtcs, disable memory access */ for (i = 0; i < 2; i++) { if (!(crtc_mask & (1 << i))) diff --git a/drivers/gpu/drm/nouveau/nv50_graph.c b/drivers/gpu/drm/nouveau/nv50_graph.c index 8c979b31ff61..ac601f7c4e1a 100644 --- a/drivers/gpu/drm/nouveau/nv50_graph.c +++ b/drivers/gpu/drm/nouveau/nv50_graph.c @@ -131,8 +131,8 @@ nv50_graph_init(struct drm_device *dev, int engine) NV_DEBUG(dev, "\n"); /* master reset */ - nv_mask(dev, 0x000200, 0x00200100, 0x00000000); - nv_mask(dev, 0x000200, 0x00200100, 0x00200100); + nv_mask(dev, 0x000200, 0x00201000, 0x00000000); + nv_mask(dev, 0x000200, 0x00201000, 0x00201000); nv_wr32(dev, 0x40008c, 0x00000004); /* HW_CTX_SWITCH_ENABLED */ /* reset/enable traps and interrupts */ diff --git a/drivers/gpu/drm/nouveau/nv50_grctx.c b/drivers/gpu/drm/nouveau/nv50_grctx.c index d05c2c3b2444..4b46d6968566 100644 --- a/drivers/gpu/drm/nouveau/nv50_grctx.c +++ b/drivers/gpu/drm/nouveau/nv50_grctx.c @@ -601,7 +601,7 @@ nv50_graph_construct_mmio(struct nouveau_grctx *ctx) gr_def(ctx, offset + 0x1c, 0x00880000); break; case 0x86: - gr_def(ctx, offset + 0x1c, 0x008c0000); + gr_def(ctx, offset + 0x1c, 0x018c0000); break; case 0x92: case 0x96: diff --git a/drivers/gpu/drm/nouveau/nv50_vram.c b/drivers/gpu/drm/nouveau/nv50_vram.c index 
9da23838e63e..2e45e57fd869 100644 --- a/drivers/gpu/drm/nouveau/nv50_vram.c +++ b/drivers/gpu/drm/nouveau/nv50_vram.c @@ -160,7 +160,7 @@ nv50_vram_rblock(struct drm_device *dev) colbits = (r4 & 0x0000f000) >> 12; rowbitsa = ((r4 & 0x000f0000) >> 16) + 8; rowbitsb = ((r4 & 0x00f00000) >> 20) + 8; - banks = ((r4 & 0x01000000) ? 8 : 4); + banks = 1 << (((r4 & 0x03000000) >> 24) + 2); rowsize = parts * banks * (1 << colbits) * 8; predicted = rowsize << rowbitsa; diff --git a/drivers/gpu/drm/nouveau/nvc0_graph.c b/drivers/gpu/drm/nouveau/nvc0_graph.c index bbdbc51830c8..a74e501afd25 100644 --- a/drivers/gpu/drm/nouveau/nvc0_graph.c +++ b/drivers/gpu/drm/nouveau/nvc0_graph.c @@ -157,8 +157,8 @@ nvc0_graph_create_context_mmio_list(struct nouveau_channel *chan) struct nvc0_graph_priv *priv = nv_engine(chan->dev, NVOBJ_ENGINE_GR); struct nvc0_graph_chan *grch = chan->engctx[NVOBJ_ENGINE_GR]; struct drm_device *dev = chan->dev; + struct drm_nouveau_private *dev_priv = dev->dev_private; int i = 0, gpc, tp, ret; - u32 magic; ret = nouveau_gpuobj_new(dev, chan, 0x2000, 256, NVOBJ_FLAG_VM, &grch->unk408004); @@ -207,14 +207,37 @@ nvc0_graph_create_context_mmio_list(struct nouveau_channel *chan) nv_wo32(grch->mmio, i++ * 4, 0x0041880c); nv_wo32(grch->mmio, i++ * 4, 0x80000018); - magic = 0x02180000; - nv_wo32(grch->mmio, i++ * 4, 0x00405830); - nv_wo32(grch->mmio, i++ * 4, magic); - for (gpc = 0; gpc < priv->gpc_nr; gpc++) { - for (tp = 0; tp < priv->tp_nr[gpc]; tp++, magic += 0x0324) { - u32 reg = 0x504520 + (gpc * 0x8000) + (tp * 0x0800); - nv_wo32(grch->mmio, i++ * 4, reg); - nv_wo32(grch->mmio, i++ * 4, magic); + if (dev_priv->chipset != 0xc1) { + u32 magic = 0x02180000; + nv_wo32(grch->mmio, i++ * 4, 0x00405830); + nv_wo32(grch->mmio, i++ * 4, magic); + for (gpc = 0; gpc < priv->gpc_nr; gpc++) { + for (tp = 0; tp < priv->tp_nr[gpc]; tp++) { + u32 reg = TP_UNIT(gpc, tp, 0x520); + nv_wo32(grch->mmio, i++ * 4, reg); + nv_wo32(grch->mmio, i++ * 4, magic); + magic += 0x0324; + } + } + } else { + u32 magic = 0x02180000; + nv_wo32(grch->mmio, i++ * 4, 0x00405830); + nv_wo32(grch->mmio, i++ * 4, magic | 0x0000218); + nv_wo32(grch->mmio, i++ * 4, 0x004064c4); + nv_wo32(grch->mmio, i++ * 4, 0x0086ffff); + for (gpc = 0; gpc < priv->gpc_nr; gpc++) { + for (tp = 0; tp < priv->tp_nr[gpc]; tp++) { + u32 reg = TP_UNIT(gpc, tp, 0x520); + nv_wo32(grch->mmio, i++ * 4, reg); + nv_wo32(grch->mmio, i++ * 4, (1 << 28) | magic); + magic += 0x0324; + } + for (tp = 0; tp < priv->tp_nr[gpc]; tp++) { + u32 reg = TP_UNIT(gpc, tp, 0x544); + nv_wo32(grch->mmio, i++ * 4, reg); + nv_wo32(grch->mmio, i++ * 4, magic); + magic += 0x0324; + } } } diff --git a/drivers/gpu/drm/nouveau/nvc0_grctx.c b/drivers/gpu/drm/nouveau/nvc0_grctx.c index dd0e6a736b3b..96b0b93d94ca 100644 --- a/drivers/gpu/drm/nouveau/nvc0_grctx.c +++ b/drivers/gpu/drm/nouveau/nvc0_grctx.c @@ -1812,6 +1812,7 @@ nvc0_grctx_generate(struct nouveau_channel *chan) /* calculate first set of magics */ memcpy(tpnr, priv->tp_nr, sizeof(priv->tp_nr)); + gpc = -1; for (tp = 0; tp < priv->tp_total; tp++) { do { gpc = (gpc + 1) % priv->gpc_nr; @@ -1861,30 +1862,26 @@ nvc0_grctx_generate(struct nouveau_channel *chan) if (1) { u32 tp_mask = 0, tp_set = 0; - u8 tpnr[GPC_MAX]; + u8 tpnr[GPC_MAX], a, b; memcpy(tpnr, priv->tp_nr, sizeof(priv->tp_nr)); for (gpc = 0; gpc < priv->gpc_nr; gpc++) tp_mask |= ((1 << priv->tp_nr[gpc]) - 1) << (gpc * 8); - gpc = -1; - for (i = 0, gpc = -1; i < 32; i++) { - int ltp = i * (priv->tp_total - 1) / 32; - - do { - gpc = (gpc + 1) % priv->gpc_nr; - } 
while (!tpnr[gpc]); - tp = priv->tp_nr[gpc] - tpnr[gpc]--; + for (i = 0, gpc = -1, b = -1; i < 32; i++) { + a = (i * (priv->tp_total - 1)) / 32; + if (a != b) { + b = a; + do { + gpc = (gpc + 1) % priv->gpc_nr; + } while (!tpnr[gpc]); + tp = priv->tp_nr[gpc] - tpnr[gpc]--; - tp_set |= 1 << ((gpc * 8) + tp); + tp_set |= 1 << ((gpc * 8) + tp); + } - do { - nv_wr32(dev, 0x406800 + (i * 0x20), tp_set); - tp_set ^= tp_mask; - nv_wr32(dev, 0x406c00 + (i * 0x20), tp_set); - tp_set ^= tp_mask; - } while (ltp == (++i * (priv->tp_total - 1) / 32)); - i--; + nv_wr32(dev, 0x406800 + (i * 0x20), tp_set); + nv_wr32(dev, 0x406c00 + (i * 0x20), tp_set ^ tp_mask); } } diff --git a/drivers/gpu/drm/nouveau/nvc0_vram.c b/drivers/gpu/drm/nouveau/nvc0_vram.c index edbfe9360ae2..ce984d573a51 100644 --- a/drivers/gpu/drm/nouveau/nvc0_vram.c +++ b/drivers/gpu/drm/nouveau/nvc0_vram.c @@ -43,7 +43,7 @@ static const u8 types[256] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, - 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, + 3, 3, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 3, 0, 3, 0, 3, 3, 0, 3, 3, 3, 3, 3, 0, 0, 3, 0, 3, 0, 3, 3, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 0, 1, 1, 0 @@ -110,22 +110,26 @@ nvc0_vram_init(struct drm_device *dev) u32 bsize = nv_rd32(dev, 0x10f20c); u32 offset, length; bool uniform = true; - int ret, i; + int ret, part; NV_DEBUG(dev, "0x100800: 0x%08x\n", nv_rd32(dev, 0x100800)); NV_DEBUG(dev, "parts 0x%08x bcast_mem_amount 0x%08x\n", parts, bsize); /* read amount of vram attached to each memory controller */ - for (i = 0; i < parts; i++) { - u32 psize = nv_rd32(dev, 0x11020c + (i * 0x1000)); + part = 0; + while (parts) { + u32 psize = nv_rd32(dev, 0x11020c + (part++ * 0x1000)); + if (psize == 0) + continue; + parts--; + if (psize != bsize) { if (psize < bsize) bsize = psize; uniform = false; } - NV_DEBUG(dev, "%d: mem_amount 0x%08x\n", i, psize); - + NV_DEBUG(dev, "%d: mem_amount 0x%08x\n", part, psize); dev_priv->vram_size += (u64)psize << 20; } diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile index 9f363e0c4b60..cf8b4bc3e73d 100644 --- a/drivers/gpu/drm/radeon/Makefile +++ b/drivers/gpu/drm/radeon/Makefile @@ -70,7 +70,7 @@ radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \ r200.o radeon_legacy_tv.o r600_cs.o r600_blit.o r600_blit_shaders.o \ r600_blit_kms.o radeon_pm.o atombios_dp.o r600_audio.o r600_hdmi.o \ evergreen.o evergreen_cs.o evergreen_blit_shaders.o evergreen_blit_kms.o \ - radeon_trace_points.o ni.o cayman_blit_shaders.o + radeon_trace_points.o ni.o cayman_blit_shaders.o atombios_encoders.o radeon-$(CONFIG_COMPAT) += radeon_ioc32.o radeon-$(CONFIG_VGA_SWITCHEROO) += radeon_atpx_handler.o diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c index a515b2a09d85..87631fede1f8 100644 --- a/drivers/gpu/drm/radeon/atombios_crtc.c +++ b/drivers/gpu/drm/radeon/atombios_crtc.c @@ -558,7 +558,7 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc, bpc = connector->display_info.bpc; encoder_mode = atombios_get_encoder_mode(encoder); if ((radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) || - radeon_encoder_is_dp_bridge(encoder)) { + (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE)) { if (connector) { struct radeon_connector *radeon_connector = to_radeon_connector(connector); struct radeon_connector_atom_dig 
*dig_connector = @@ -638,44 +638,29 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc, if (ss_enabled && ss->percentage) args.v3.sInput.ucDispPllConfig |= DISPPLL_CONFIG_SS_ENABLE; - if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT) || - radeon_encoder_is_dp_bridge(encoder)) { + if (ENCODER_MODE_IS_DP(encoder_mode)) { + args.v3.sInput.ucDispPllConfig |= + DISPPLL_CONFIG_COHERENT_MODE; + /* 16200 or 27000 */ + args.v3.sInput.usPixelClock = cpu_to_le16(dp_clock / 10); + } else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; - if (encoder_mode == ATOM_ENCODER_MODE_DP) { + if (encoder_mode == ATOM_ENCODER_MODE_HDMI) + /* deep color support */ + args.v3.sInput.usPixelClock = + cpu_to_le16((mode->clock * bpc / 8) / 10); + if (dig->coherent_mode) args.v3.sInput.ucDispPllConfig |= DISPPLL_CONFIG_COHERENT_MODE; - /* 16200 or 27000 */ - args.v3.sInput.usPixelClock = cpu_to_le16(dp_clock / 10); - } else { - if (encoder_mode == ATOM_ENCODER_MODE_HDMI) { - /* deep color support */ - args.v3.sInput.usPixelClock = - cpu_to_le16((mode->clock * bpc / 8) / 10); - } - if (dig->coherent_mode) - args.v3.sInput.ucDispPllConfig |= - DISPPLL_CONFIG_COHERENT_MODE; - if (mode->clock > 165000) - args.v3.sInput.ucDispPllConfig |= - DISPPLL_CONFIG_DUAL_LINK; - } - } else if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - if (encoder_mode == ATOM_ENCODER_MODE_DP) { + if (mode->clock > 165000) args.v3.sInput.ucDispPllConfig |= - DISPPLL_CONFIG_COHERENT_MODE; - /* 16200 or 27000 */ - args.v3.sInput.usPixelClock = cpu_to_le16(dp_clock / 10); - } else if (encoder_mode != ATOM_ENCODER_MODE_LVDS) { - if (mode->clock > 165000) - args.v3.sInput.ucDispPllConfig |= - DISPPLL_CONFIG_DUAL_LINK; - } + DISPPLL_CONFIG_DUAL_LINK; } - if (radeon_encoder_is_dp_bridge(encoder)) { - struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); - struct radeon_encoder *ext_radeon_encoder = to_radeon_encoder(ext_encoder); - args.v3.sInput.ucExtTransmitterID = ext_radeon_encoder->encoder_id; - } else + if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != + ENCODER_OBJECT_ID_NONE) + args.v3.sInput.ucExtTransmitterID = + radeon_encoder_get_dp_bridge_encoder_id(encoder); + else args.v3.sInput.ucExtTransmitterID = 0; atom_execute_table(rdev->mode_info.atom_context, @@ -945,6 +930,7 @@ static void atombios_crtc_set_pll(struct drm_crtc *crtc, struct drm_display_mode bpc = connector->display_info.bpc; switch (encoder_mode) { + case ATOM_ENCODER_MODE_DP_MST: case ATOM_ENCODER_MODE_DP: /* DP/eDP */ dp_clock = dig_connector->dp_clock / 10; @@ -1450,7 +1436,7 @@ static int radeon_atom_pick_pll(struct drm_crtc *crtc) * PPLL/DCPLL programming and only program the DP DTO for the * crtc virtual pixel clock. 
*/ - if (atombios_get_encoder_mode(test_encoder) == ATOM_ENCODER_MODE_DP) { + if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(test_encoder))) { if (ASIC_IS_DCE5(rdev) || rdev->clock.dp_extclk) return ATOM_PPLL_INVALID; } @@ -1536,12 +1522,6 @@ static bool atombios_crtc_mode_fixup(struct drm_crtc *crtc, struct drm_display_mode *mode, struct drm_display_mode *adjusted_mode) { - struct drm_device *dev = crtc->dev; - struct radeon_device *rdev = dev->dev_private; - - /* adjust pm to upcoming mode change */ - radeon_pm_compute_clocks(rdev); - if (!radeon_crtc_scaling_mode_fixup(crtc, mode, adjusted_mode)) return false; return true; diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c index 79e8ebc05307..6fb335a4fdda 100644 --- a/drivers/gpu/drm/radeon/atombios_dp.c +++ b/drivers/gpu/drm/radeon/atombios_dp.c @@ -283,7 +283,7 @@ int radeon_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode, } } - DRM_ERROR("aux i2c too many retries, giving up\n"); + DRM_DEBUG_KMS("aux i2c too many retries, giving up\n"); return -EREMOTEIO; } @@ -482,7 +482,8 @@ static int radeon_dp_get_dp_link_clock(struct drm_connector *connector, int bpp = convert_bpc_to_bpp(connector->display_info.bpc); int lane_num, max_pix_clock; - if (radeon_connector_encoder_is_dp_bridge(connector)) + if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == + ENCODER_OBJECT_ID_NUTMEG) return 270000; lane_num = radeon_dp_get_dp_lane_number(connector, dpcd, pix_clock); @@ -553,17 +554,32 @@ static void radeon_dp_set_panel_mode(struct drm_encoder *encoder, { struct drm_device *dev = encoder->dev; struct radeon_device *rdev = dev->dev_private; + struct radeon_connector *radeon_connector = to_radeon_connector(connector); int panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; if (!ASIC_IS_DCE4(rdev)) return; - if (radeon_connector_encoder_is_dp_bridge(connector)) + if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == + ENCODER_OBJECT_ID_NUTMEG) panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE; + else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == + ENCODER_OBJECT_ID_TRAVIS) + panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; + else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { + u8 tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); + if (tmp & 1) + panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; + } atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP_PANEL_MODE, panel_mode); + + if ((connector->connector_type == DRM_MODE_CONNECTOR_eDP) && + (panel_mode == DP_PANEL_MODE_INTERNAL_DP2_MODE)) { + radeon_write_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_SET, 1); + } } void radeon_dp_set_link_config(struct drm_connector *connector, diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c new file mode 100644 index 000000000000..39c04c1b8472 --- /dev/null +++ b/drivers/gpu/drm/radeon/atombios_encoders.c @@ -0,0 +1,2369 @@ +/* + * Copyright 2007-11 Advanced Micro Devices, Inc. + * Copyright 2008 Red Hat Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: Dave Airlie + * Alex Deucher + */ +#include "drmP.h" +#include "drm_crtc_helper.h" +#include "radeon_drm.h" +#include "radeon.h" +#include "atom.h" + +extern int atom_debug; + +/* evil but including atombios.h is much worse */ +bool radeon_atom_get_tv_timings(struct radeon_device *rdev, int index, + struct drm_display_mode *mode); + + +static inline bool radeon_encoder_is_digital(struct drm_encoder *encoder) +{ + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + case ENCODER_OBJECT_ID_INTERNAL_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_DDI: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + return true; + default: + return false; + } +} + +static struct drm_connector * +radeon_get_connector_for_encoder_init(struct drm_encoder *encoder) +{ + struct drm_device *dev = encoder->dev; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_connector *connector; + struct radeon_connector *radeon_connector; + + list_for_each_entry(connector, &dev->mode_config.connector_list, head) { + radeon_connector = to_radeon_connector(connector); + if (radeon_encoder->devices & radeon_connector->devices) + return connector; + } + return NULL; +} + +static bool radeon_atom_mode_fixup(struct drm_encoder *encoder, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + + /* set the active encoder to connector routing */ + radeon_encoder_set_active_device(encoder); + drm_mode_set_crtcinfo(adjusted_mode, 0); + + /* hw bug */ + if ((mode->flags & DRM_MODE_FLAG_INTERLACE) + && (mode->crtc_vsync_start < (mode->crtc_vdisplay + 2))) + adjusted_mode->crtc_vsync_start = adjusted_mode->crtc_vdisplay + 2; + + /* get the native mode for LVDS */ + if (radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT)) + radeon_panel_mode_fixup(encoder, adjusted_mode); + + /* get the native mode for TV */ + if (radeon_encoder->active_device & 
(ATOM_DEVICE_TV_SUPPORT)) { + struct radeon_encoder_atom_dac *tv_dac = radeon_encoder->enc_priv; + if (tv_dac) { + if (tv_dac->tv_std == TV_STD_NTSC || + tv_dac->tv_std == TV_STD_NTSC_J || + tv_dac->tv_std == TV_STD_PAL_M) + radeon_atom_get_tv_timings(rdev, 0, adjusted_mode); + else + radeon_atom_get_tv_timings(rdev, 1, adjusted_mode); + } + } + + if (ASIC_IS_DCE3(rdev) && + ((radeon_encoder->active_device & (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) || + (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE))) { + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); + radeon_dp_set_link_config(connector, mode); + } + + return true; +} + +static void +atombios_dac_setup(struct drm_encoder *encoder, int action) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + DAC_ENCODER_CONTROL_PS_ALLOCATION args; + int index = 0; + struct radeon_encoder_atom_dac *dac_info = radeon_encoder->enc_priv; + + memset(&args, 0, sizeof(args)); + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + index = GetIndexIntoMasterTable(COMMAND, DAC1EncoderControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + index = GetIndexIntoMasterTable(COMMAND, DAC2EncoderControl); + break; + } + + args.ucAction = action; + + if (radeon_encoder->active_device & (ATOM_DEVICE_CRT_SUPPORT)) + args.ucDacStandard = ATOM_DAC1_PS2; + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + args.ucDacStandard = ATOM_DAC1_CV; + else { + switch (dac_info->tv_std) { + case TV_STD_PAL: + case TV_STD_PAL_M: + case TV_STD_SCART_PAL: + case TV_STD_SECAM: + case TV_STD_PAL_CN: + args.ucDacStandard = ATOM_DAC1_PAL; + break; + case TV_STD_NTSC: + case TV_STD_NTSC_J: + case TV_STD_PAL_60: + default: + args.ucDacStandard = ATOM_DAC1_NTSC; + break; + } + } + args.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + +} + +static void +atombios_tv_setup(struct drm_encoder *encoder, int action) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + TV_ENCODER_CONTROL_PS_ALLOCATION args; + int index = 0; + struct radeon_encoder_atom_dac *dac_info = radeon_encoder->enc_priv; + + memset(&args, 0, sizeof(args)); + + index = GetIndexIntoMasterTable(COMMAND, TVEncoderControl); + + args.sTVEncoder.ucAction = action; + + if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + args.sTVEncoder.ucTvStandard = ATOM_TV_CV; + else { + switch (dac_info->tv_std) { + case TV_STD_NTSC: + args.sTVEncoder.ucTvStandard = ATOM_TV_NTSC; + break; + case TV_STD_PAL: + args.sTVEncoder.ucTvStandard = ATOM_TV_PAL; + break; + case TV_STD_PAL_M: + args.sTVEncoder.ucTvStandard = ATOM_TV_PALM; + break; + case TV_STD_PAL_60: + args.sTVEncoder.ucTvStandard = ATOM_TV_PAL60; + break; + case TV_STD_NTSC_J: + args.sTVEncoder.ucTvStandard = ATOM_TV_NTSCJ; + break; + case TV_STD_SCART_PAL: + args.sTVEncoder.ucTvStandard = ATOM_TV_PAL; /* ??? 
*/ + break; + case TV_STD_SECAM: + args.sTVEncoder.ucTvStandard = ATOM_TV_SECAM; + break; + case TV_STD_PAL_CN: + args.sTVEncoder.ucTvStandard = ATOM_TV_PALCN; + break; + default: + args.sTVEncoder.ucTvStandard = ATOM_TV_NTSC; + break; + } + } + + args.sTVEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + +} + +union dvo_encoder_control { + ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION ext_tmds; + DVO_ENCODER_CONTROL_PS_ALLOCATION dvo; + DVO_ENCODER_CONTROL_PS_ALLOCATION_V3 dvo_v3; +}; + +void +atombios_dvo_setup(struct drm_encoder *encoder, int action) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + union dvo_encoder_control args; + int index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl); + uint8_t frev, crev; + + memset(&args, 0, sizeof(args)); + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) + return; + + switch (frev) { + case 1: + switch (crev) { + case 1: + /* R4xx, R5xx */ + args.ext_tmds.sXTmdsEncoder.ucEnable = action; + + if (radeon_encoder->pixel_clock > 165000) + args.ext_tmds.sXTmdsEncoder.ucMisc |= PANEL_ENCODER_MISC_DUAL; + + args.ext_tmds.sXTmdsEncoder.ucMisc |= ATOM_PANEL_MISC_888RGB; + break; + case 2: + /* RS600/690/740 */ + args.dvo.sDVOEncoder.ucAction = action; + args.dvo.sDVOEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + /* DFP1, CRT1, TV1 depending on the type of port */ + args.dvo.sDVOEncoder.ucDeviceType = ATOM_DEVICE_DFP1_INDEX; + + if (radeon_encoder->pixel_clock > 165000) + args.dvo.sDVOEncoder.usDevAttr.sDigAttrib.ucAttribute |= PANEL_ENCODER_MISC_DUAL; + break; + case 3: + /* R6xx */ + args.dvo_v3.ucAction = action; + args.dvo_v3.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + args.dvo_v3.ucDVOConfig = 0; /* XXX */ + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); +} + +union lvds_encoder_control { + LVDS_ENCODER_CONTROL_PS_ALLOCATION v1; + LVDS_ENCODER_CONTROL_PS_ALLOCATION_V2 v2; +}; + +void +atombios_digital_setup(struct drm_encoder *encoder, int action) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; + union lvds_encoder_control args; + int index = 0; + int hdmi_detected = 0; + uint8_t frev, crev; + + if (!dig) + return; + + if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) + hdmi_detected = 1; + + memset(&args, 0, sizeof(args)); + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + index = GetIndexIntoMasterTable(COMMAND, LVDSEncoderControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + index = GetIndexIntoMasterTable(COMMAND, TMDS1EncoderControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) + index = GetIndexIntoMasterTable(COMMAND, LVDSEncoderControl); + else + index = GetIndexIntoMasterTable(COMMAND, TMDS2EncoderControl); + break; + } + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, 
&crev)) + return; + + switch (frev) { + case 1: + case 2: + switch (crev) { + case 1: + args.v1.ucMisc = 0; + args.v1.ucAction = action; + if (hdmi_detected) + args.v1.ucMisc |= PANEL_ENCODER_MISC_HDMI_TYPE; + args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { + if (dig->lcd_misc & ATOM_PANEL_MISC_DUAL) + args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL; + if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB) + args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB; + } else { + if (dig->linkb) + args.v1.ucMisc |= PANEL_ENCODER_MISC_TMDS_LINKB; + if (radeon_encoder->pixel_clock > 165000) + args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL; + /*if (pScrn->rgbBits == 8) */ + args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB; + } + break; + case 2: + case 3: + args.v2.ucMisc = 0; + args.v2.ucAction = action; + if (crev == 3) { + if (dig->coherent_mode) + args.v2.ucMisc |= PANEL_ENCODER_MISC_COHERENT; + } + if (hdmi_detected) + args.v2.ucMisc |= PANEL_ENCODER_MISC_HDMI_TYPE; + args.v2.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + args.v2.ucTruncate = 0; + args.v2.ucSpatial = 0; + args.v2.ucTemporal = 0; + args.v2.ucFRC = 0; + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { + if (dig->lcd_misc & ATOM_PANEL_MISC_DUAL) + args.v2.ucMisc |= PANEL_ENCODER_MISC_DUAL; + if (dig->lcd_misc & ATOM_PANEL_MISC_SPATIAL) { + args.v2.ucSpatial = PANEL_ENCODER_SPATIAL_DITHER_EN; + if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB) + args.v2.ucSpatial |= PANEL_ENCODER_SPATIAL_DITHER_DEPTH; + } + if (dig->lcd_misc & ATOM_PANEL_MISC_TEMPORAL) { + args.v2.ucTemporal = PANEL_ENCODER_TEMPORAL_DITHER_EN; + if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB) + args.v2.ucTemporal |= PANEL_ENCODER_TEMPORAL_DITHER_DEPTH; + if (((dig->lcd_misc >> ATOM_PANEL_MISC_GREY_LEVEL_SHIFT) & 0x3) == 2) + args.v2.ucTemporal |= PANEL_ENCODER_TEMPORAL_LEVEL_4; + } + } else { + if (dig->linkb) + args.v2.ucMisc |= PANEL_ENCODER_MISC_TMDS_LINKB; + if (radeon_encoder->pixel_clock > 165000) + args.v2.ucMisc |= PANEL_ENCODER_MISC_DUAL; + } + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); +} + +int +atombios_get_encoder_mode(struct drm_encoder *encoder) +{ + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct drm_connector *connector; + struct radeon_connector *radeon_connector; + struct radeon_connector_atom_dig *dig_connector; + + /* dp bridges are always DP */ + if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE) + return ATOM_ENCODER_MODE_DP; + + /* DVO is always DVO */ + if (radeon_encoder->encoder_id == ATOM_ENCODER_MODE_DVO) + return ATOM_ENCODER_MODE_DVO; + + connector = radeon_get_connector_for_encoder(encoder); + /* if we don't have an active device yet, just use one of + * the connectors tied to the encoder. 
+ */ + if (!connector) + connector = radeon_get_connector_for_encoder_init(encoder); + radeon_connector = to_radeon_connector(connector); + + switch (connector->connector_type) { + case DRM_MODE_CONNECTOR_DVII: + case DRM_MODE_CONNECTOR_HDMIB: /* HDMI-B is basically DL-DVI; analog works fine */ + if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { + /* fix me */ + if (ASIC_IS_DCE4(rdev)) + return ATOM_ENCODER_MODE_DVI; + else + return ATOM_ENCODER_MODE_HDMI; + } else if (radeon_connector->use_digital) + return ATOM_ENCODER_MODE_DVI; + else + return ATOM_ENCODER_MODE_CRT; + break; + case DRM_MODE_CONNECTOR_DVID: + case DRM_MODE_CONNECTOR_HDMIA: + default: + if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { + /* fix me */ + if (ASIC_IS_DCE4(rdev)) + return ATOM_ENCODER_MODE_DVI; + else + return ATOM_ENCODER_MODE_HDMI; + } else + return ATOM_ENCODER_MODE_DVI; + break; + case DRM_MODE_CONNECTOR_LVDS: + return ATOM_ENCODER_MODE_LVDS; + break; + case DRM_MODE_CONNECTOR_DisplayPort: + dig_connector = radeon_connector->con_priv; + if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) || + (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) + return ATOM_ENCODER_MODE_DP; + else if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { + /* fix me */ + if (ASIC_IS_DCE4(rdev)) + return ATOM_ENCODER_MODE_DVI; + else + return ATOM_ENCODER_MODE_HDMI; + } else + return ATOM_ENCODER_MODE_DVI; + break; + case DRM_MODE_CONNECTOR_eDP: + return ATOM_ENCODER_MODE_DP; + case DRM_MODE_CONNECTOR_DVIA: + case DRM_MODE_CONNECTOR_VGA: + return ATOM_ENCODER_MODE_CRT; + break; + case DRM_MODE_CONNECTOR_Composite: + case DRM_MODE_CONNECTOR_SVIDEO: + case DRM_MODE_CONNECTOR_9PinDIN: + /* fix me */ + return ATOM_ENCODER_MODE_TV; + /*return ATOM_ENCODER_MODE_CV;*/ + break; + } +} + +/* + * DIG Encoder/Transmitter Setup + * + * DCE 3.0/3.1 + * - 2 DIG transmitter blocks. UNIPHY (links A and B) and LVTMA. + * Supports up to 3 digital outputs + * - 2 DIG encoder blocks. + * DIG1 can drive UNIPHY link A or link B + * DIG2 can drive UNIPHY link B or LVTMA + * + * DCE 3.2 + * - 3 DIG transmitter blocks. UNIPHY0/1/2 (links A and B). + * Supports up to 5 digital outputs + * - 2 DIG encoder blocks. + * DIG1/2 can drive UNIPHY0/1/2 link A or link B + * + * DCE 4.0/5.0 + * - 3 DIG transmitter blocks UNIPHY0/1/2 (links A and B). + * Supports up to 6 digital outputs + * - 6 DIG encoder blocks. + * - DIG to PHY mapping is hardcoded + * DIG1 drives UNIPHY0 link A, A+B + * DIG2 drives UNIPHY0 link B + * DIG3 drives UNIPHY1 link A, A+B + * DIG4 drives UNIPHY1 link B + * DIG5 drives UNIPHY2 link A, A+B + * DIG6 drives UNIPHY2 link B + * + * DCE 4.1 + * - 3 DIG transmitter blocks UNIPHY0/1/2 (links A and B). + * Supports up to 6 digital outputs + * - 2 DIG encoder blocks. 
+ * DIG1/2 can drive UNIPHY0/1/2 link A or link B + * + * Routing + * crtc -> dig encoder -> UNIPHY/LVTMA (1 or 2 links) + * Examples: + * crtc0 -> dig2 -> LVTMA links A+B -> TMDS/HDMI + * crtc1 -> dig1 -> UNIPHY0 link B -> DP + * crtc0 -> dig1 -> UNIPHY2 link A -> LVDS + * crtc1 -> dig2 -> UNIPHY1 link B+A -> TMDS/HDMI + */ + +union dig_encoder_control { + DIG_ENCODER_CONTROL_PS_ALLOCATION v1; + DIG_ENCODER_CONTROL_PARAMETERS_V2 v2; + DIG_ENCODER_CONTROL_PARAMETERS_V3 v3; + DIG_ENCODER_CONTROL_PARAMETERS_V4 v4; +}; + +void +atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); + union dig_encoder_control args; + int index = 0; + uint8_t frev, crev; + int dp_clock = 0; + int dp_lane_count = 0; + int hpd_id = RADEON_HPD_NONE; + int bpc = 8; + + if (connector) { + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + struct radeon_connector_atom_dig *dig_connector = + radeon_connector->con_priv; + + dp_clock = dig_connector->dp_clock; + dp_lane_count = dig_connector->dp_lane_count; + hpd_id = radeon_connector->hpd.hpd; + bpc = connector->display_info.bpc; + } + + /* no dig encoder assigned */ + if (dig->dig_encoder == -1) + return; + + memset(&args, 0, sizeof(args)); + + if (ASIC_IS_DCE4(rdev)) + index = GetIndexIntoMasterTable(COMMAND, DIGxEncoderControl); + else { + if (dig->dig_encoder) + index = GetIndexIntoMasterTable(COMMAND, DIG2EncoderControl); + else + index = GetIndexIntoMasterTable(COMMAND, DIG1EncoderControl); + } + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) + return; + + switch (frev) { + case 1: + switch (crev) { + case 1: + args.v1.ucAction = action; + args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + if (action == ATOM_ENCODER_CMD_SETUP_PANEL_MODE) + args.v3.ucPanelMode = panel_mode; + else + args.v1.ucEncoderMode = atombios_get_encoder_mode(encoder); + + if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode)) + args.v1.ucLaneNum = dp_lane_count; + else if (radeon_encoder->pixel_clock > 165000) + args.v1.ucLaneNum = 8; + else + args.v1.ucLaneNum = 4; + + if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode) && (dp_clock == 270000)) + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ; + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER1; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER2; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER3; + break; + } + if (dig->linkb) + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKB; + else + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKA; + break; + case 2: + case 3: + args.v3.ucAction = action; + args.v3.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + if (action == ATOM_ENCODER_CMD_SETUP_PANEL_MODE) + args.v3.ucPanelMode = panel_mode; + else + args.v3.ucEncoderMode = atombios_get_encoder_mode(encoder); + + if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode)) + args.v3.ucLaneNum = dp_lane_count; + else if (radeon_encoder->pixel_clock > 165000) + args.v3.ucLaneNum = 8; + else + 
args.v3.ucLaneNum = 4; + + if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode) && (dp_clock == 270000)) + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ; + args.v3.acConfig.ucDigSel = dig->dig_encoder; + switch (bpc) { + case 0: + args.v3.ucBitPerColor = PANEL_BPC_UNDEFINE; + break; + case 6: + args.v3.ucBitPerColor = PANEL_6BIT_PER_COLOR; + break; + case 8: + default: + args.v3.ucBitPerColor = PANEL_8BIT_PER_COLOR; + break; + case 10: + args.v3.ucBitPerColor = PANEL_10BIT_PER_COLOR; + break; + case 12: + args.v3.ucBitPerColor = PANEL_12BIT_PER_COLOR; + break; + case 16: + args.v3.ucBitPerColor = PANEL_16BIT_PER_COLOR; + break; + } + break; + case 4: + args.v4.ucAction = action; + args.v4.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + if (action == ATOM_ENCODER_CMD_SETUP_PANEL_MODE) + args.v4.ucPanelMode = panel_mode; + else + args.v4.ucEncoderMode = atombios_get_encoder_mode(encoder); + + if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode)) + args.v4.ucLaneNum = dp_lane_count; + else if (radeon_encoder->pixel_clock > 165000) + args.v4.ucLaneNum = 8; + else + args.v4.ucLaneNum = 4; + + if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode)) { + if (dp_clock == 270000) + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V4_DPLINKRATE_2_70GHZ; + else if (dp_clock == 540000) + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V4_DPLINKRATE_5_40GHZ; + } + args.v4.acConfig.ucDigSel = dig->dig_encoder; + switch (bpc) { + case 0: + args.v4.ucBitPerColor = PANEL_BPC_UNDEFINE; + break; + case 6: + args.v4.ucBitPerColor = PANEL_6BIT_PER_COLOR; + break; + case 8: + default: + args.v4.ucBitPerColor = PANEL_8BIT_PER_COLOR; + break; + case 10: + args.v4.ucBitPerColor = PANEL_10BIT_PER_COLOR; + break; + case 12: + args.v4.ucBitPerColor = PANEL_12BIT_PER_COLOR; + break; + case 16: + args.v4.ucBitPerColor = PANEL_16BIT_PER_COLOR; + break; + } + if (hpd_id == RADEON_HPD_NONE) + args.v4.ucHPD_ID = 0; + else + args.v4.ucHPD_ID = hpd_id + 1; + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + +} + +union dig_transmitter_control { + DIG_TRANSMITTER_CONTROL_PS_ALLOCATION v1; + DIG_TRANSMITTER_CONTROL_PARAMETERS_V2 v2; + DIG_TRANSMITTER_CONTROL_PARAMETERS_V3 v3; + DIG_TRANSMITTER_CONTROL_PARAMETERS_V4 v4; +}; + +void +atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; + struct drm_connector *connector; + union dig_transmitter_control args; + int index = 0; + uint8_t frev, crev; + bool is_dp = false; + int pll_id = 0; + int dp_clock = 0; + int dp_lane_count = 0; + int connector_object_id = 0; + int igp_lane_info = 0; + int dig_encoder = dig->dig_encoder; + + if (action == ATOM_TRANSMITTER_ACTION_INIT) { + connector = radeon_get_connector_for_encoder_init(encoder); + /* just needed to avoid bailing in the encoder check. 
the encoder + * isn't used for init + */ + dig_encoder = 0; + } else + connector = radeon_get_connector_for_encoder(encoder); + + if (connector) { + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + struct radeon_connector_atom_dig *dig_connector = + radeon_connector->con_priv; + + dp_clock = dig_connector->dp_clock; + dp_lane_count = dig_connector->dp_lane_count; + connector_object_id = + (radeon_connector->connector_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT; + igp_lane_info = dig_connector->igp_lane_info; + } + + if (encoder->crtc) { + struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); + pll_id = radeon_crtc->pll_id; + } + + /* no dig encoder assigned */ + if (dig_encoder == -1) + return; + + if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder))) + is_dp = true; + + memset(&args, 0, sizeof(args)); + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + index = GetIndexIntoMasterTable(COMMAND, UNIPHYTransmitterControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + index = GetIndexIntoMasterTable(COMMAND, LVTMATransmitterControl); + break; + } + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) + return; + + switch (frev) { + case 1: + switch (crev) { + case 1: + args.v1.ucAction = action; + if (action == ATOM_TRANSMITTER_ACTION_INIT) { + args.v1.usInitInfo = cpu_to_le16(connector_object_id); + } else if (action == ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH) { + args.v1.asMode.ucLaneSel = lane_num; + args.v1.asMode.ucLaneSet = lane_set; + } else { + if (is_dp) + args.v1.usPixelClock = + cpu_to_le16(dp_clock / 10); + else if (radeon_encoder->pixel_clock > 165000) + args.v1.usPixelClock = cpu_to_le16((radeon_encoder->pixel_clock / 2) / 10); + else + args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + } + + args.v1.ucConfig = ATOM_TRANSMITTER_CONFIG_CLKSRC_PPLL; + + if (dig_encoder) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_DIG2_ENCODER; + else + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_DIG1_ENCODER; + + if ((rdev->flags & RADEON_IS_IGP) && + (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_UNIPHY)) { + if (is_dp || (radeon_encoder->pixel_clock <= 165000)) { + if (igp_lane_info & 0x1) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_3; + else if (igp_lane_info & 0x2) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_4_7; + else if (igp_lane_info & 0x4) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_11; + else if (igp_lane_info & 0x8) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_12_15; + } else { + if (igp_lane_info & 0x3) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_7; + else if (igp_lane_info & 0xc) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_15; + } + } + + if (dig->linkb) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LINKB; + else + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LINKA; + + if (is_dp) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_COHERENT; + else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { + if (dig->coherent_mode) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_COHERENT; + if (radeon_encoder->pixel_clock > 165000) + args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_8LANE_LINK; + } + break; + case 2: + args.v2.ucAction = action; + if (action == ATOM_TRANSMITTER_ACTION_INIT) { + 
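/* INIT passes the connector object id rather than clock/lane parameters */
+				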
args.v2.usInitInfo = cpu_to_le16(connector_object_id); + } else if (action == ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH) { + args.v2.asMode.ucLaneSel = lane_num; + args.v2.asMode.ucLaneSet = lane_set; + } else { + if (is_dp) + args.v2.usPixelClock = + cpu_to_le16(dp_clock / 10); + else if (radeon_encoder->pixel_clock > 165000) + args.v2.usPixelClock = cpu_to_le16((radeon_encoder->pixel_clock / 2) / 10); + else + args.v2.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + } + + args.v2.acConfig.ucEncoderSel = dig_encoder; + if (dig->linkb) + args.v2.acConfig.ucLinkSel = 1; + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + args.v2.acConfig.ucTransmitterSel = 0; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + args.v2.acConfig.ucTransmitterSel = 1; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + args.v2.acConfig.ucTransmitterSel = 2; + break; + } + + if (is_dp) { + args.v2.acConfig.fCoherentMode = 1; + args.v2.acConfig.fDPConnector = 1; + } else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { + if (dig->coherent_mode) + args.v2.acConfig.fCoherentMode = 1; + if (radeon_encoder->pixel_clock > 165000) + args.v2.acConfig.fDualLinkConnector = 1; + } + break; + case 3: + args.v3.ucAction = action; + if (action == ATOM_TRANSMITTER_ACTION_INIT) { + args.v3.usInitInfo = cpu_to_le16(connector_object_id); + } else if (action == ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH) { + args.v3.asMode.ucLaneSel = lane_num; + args.v3.asMode.ucLaneSet = lane_set; + } else { + if (is_dp) + args.v3.usPixelClock = + cpu_to_le16(dp_clock / 10); + else if (radeon_encoder->pixel_clock > 165000) + args.v3.usPixelClock = cpu_to_le16((radeon_encoder->pixel_clock / 2) / 10); + else + args.v3.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + } + + if (is_dp) + args.v3.ucLaneNum = dp_lane_count; + else if (radeon_encoder->pixel_clock > 165000) + args.v3.ucLaneNum = 8; + else + args.v3.ucLaneNum = 4; + + if (dig->linkb) + args.v3.acConfig.ucLinkSel = 1; + if (dig_encoder & 1) + args.v3.acConfig.ucEncoderSel = 1; + + /* Select the PLL for the PHY + * DP PHY should be clocked from external src if there is + * one. 
+ */ + /* On DCE4, if there is an external clock, it generates the DP ref clock */ + if (is_dp && rdev->clock.dp_extclk) + args.v3.acConfig.ucRefClkSource = 2; /* external src */ + else + args.v3.acConfig.ucRefClkSource = pll_id; + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + args.v3.acConfig.ucTransmitterSel = 0; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + args.v3.acConfig.ucTransmitterSel = 1; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + args.v3.acConfig.ucTransmitterSel = 2; + break; + } + + if (is_dp) + args.v3.acConfig.fCoherentMode = 1; /* DP requires coherent */ + else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { + if (dig->coherent_mode) + args.v3.acConfig.fCoherentMode = 1; + if (radeon_encoder->pixel_clock > 165000) + args.v3.acConfig.fDualLinkConnector = 1; + } + break; + case 4: + args.v4.ucAction = action; + if (action == ATOM_TRANSMITTER_ACTION_INIT) { + args.v4.usInitInfo = cpu_to_le16(connector_object_id); + } else if (action == ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH) { + args.v4.asMode.ucLaneSel = lane_num; + args.v4.asMode.ucLaneSet = lane_set; + } else { + if (is_dp) + args.v4.usPixelClock = + cpu_to_le16(dp_clock / 10); + else if (radeon_encoder->pixel_clock > 165000) + args.v4.usPixelClock = cpu_to_le16((radeon_encoder->pixel_clock / 2) / 10); + else + args.v4.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + } + + if (is_dp) + args.v4.ucLaneNum = dp_lane_count; + else if (radeon_encoder->pixel_clock > 165000) + args.v4.ucLaneNum = 8; + else + args.v4.ucLaneNum = 4; + + if (dig->linkb) + args.v4.acConfig.ucLinkSel = 1; + if (dig_encoder & 1) + args.v4.acConfig.ucEncoderSel = 1; + + /* Select the PLL for the PHY + * DP PHY should be clocked from external src if there is + * one. 
+ */ + /* On DCE5 DCPLL usually generates the DP ref clock */ + if (is_dp) { + if (rdev->clock.dp_extclk) + args.v4.acConfig.ucRefClkSource = ENCODER_REFCLK_SRC_EXTCLK; + else + args.v4.acConfig.ucRefClkSource = ENCODER_REFCLK_SRC_DCPLL; + } else + args.v4.acConfig.ucRefClkSource = pll_id; + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + args.v4.acConfig.ucTransmitterSel = 0; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + args.v4.acConfig.ucTransmitterSel = 1; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + args.v4.acConfig.ucTransmitterSel = 2; + break; + } + + if (is_dp) + args.v4.acConfig.fCoherentMode = 1; /* DP requires coherent */ + else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { + if (dig->coherent_mode) + args.v4.acConfig.fCoherentMode = 1; + if (radeon_encoder->pixel_clock > 165000) + args.v4.acConfig.fDualLinkConnector = 1; + } + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + break; + default: + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); + break; + } + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); +} + +bool +atombios_set_edp_panel_power(struct drm_connector *connector, int action) +{ + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + struct drm_device *dev = radeon_connector->base.dev; + struct radeon_device *rdev = dev->dev_private; + union dig_transmitter_control args; + int index = GetIndexIntoMasterTable(COMMAND, UNIPHYTransmitterControl); + uint8_t frev, crev; + + if (connector->connector_type != DRM_MODE_CONNECTOR_eDP) + goto done; + + if (!ASIC_IS_DCE4(rdev)) + goto done; + + if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) && + (action != ATOM_TRANSMITTER_ACTION_POWER_OFF)) + goto done; + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) + goto done; + + memset(&args, 0, sizeof(args)); + + args.v1.ucAction = action; + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + + /* wait for the panel to power up */ + if (action == ATOM_TRANSMITTER_ACTION_POWER_ON) { + int i; + + for (i = 0; i < 300; i++) { + if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) + return true; + mdelay(1); + } + return false; + } +done: + return true; +} + +union external_encoder_control { + EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION v1; + EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION_V3 v3; +}; + +static void +atombios_external_encoder_setup(struct drm_encoder *encoder, + struct drm_encoder *ext_encoder, + int action) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_encoder *ext_radeon_encoder = to_radeon_encoder(ext_encoder); + union external_encoder_control args; + struct drm_connector *connector; + int index = GetIndexIntoMasterTable(COMMAND, ExternalEncoderControl); + u8 frev, crev; + int dp_clock = 0; + int dp_lane_count = 0; + int connector_object_id = 0; + u32 ext_enum = (ext_radeon_encoder->encoder_enum & ENUM_ID_MASK) >> ENUM_ID_SHIFT; + int bpc = 8; + + if (action == EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT) + connector = radeon_get_connector_for_encoder_init(encoder); + else + connector = radeon_get_connector_for_encoder(encoder); + + if (connector) { + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + struct radeon_connector_atom_dig *dig_connector = + radeon_connector->con_priv; + + dp_clock = 
dig_connector->dp_clock; + dp_lane_count = dig_connector->dp_lane_count; + connector_object_id = + (radeon_connector->connector_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT; + bpc = connector->display_info.bpc; + } + + memset(&args, 0, sizeof(args)); + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) + return; + + switch (frev) { + case 1: + /* no params on frev 1 */ + break; + case 2: + switch (crev) { + case 1: + case 2: + args.v1.sDigEncoder.ucAction = action; + args.v1.sDigEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + args.v1.sDigEncoder.ucEncoderMode = atombios_get_encoder_mode(encoder); + + if (ENCODER_MODE_IS_DP(args.v1.sDigEncoder.ucEncoderMode)) { + if (dp_clock == 270000) + args.v1.sDigEncoder.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ; + args.v1.sDigEncoder.ucLaneNum = dp_lane_count; + } else if (radeon_encoder->pixel_clock > 165000) + args.v1.sDigEncoder.ucLaneNum = 8; + else + args.v1.sDigEncoder.ucLaneNum = 4; + break; + case 3: + args.v3.sExtEncoder.ucAction = action; + if (action == EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT) + args.v3.sExtEncoder.usConnectorId = cpu_to_le16(connector_object_id); + else + args.v3.sExtEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); + args.v3.sExtEncoder.ucEncoderMode = atombios_get_encoder_mode(encoder); + + if (ENCODER_MODE_IS_DP(args.v3.sExtEncoder.ucEncoderMode)) { + if (dp_clock == 270000) + args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ; + else if (dp_clock == 540000) + args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_5_40GHZ; + args.v3.sExtEncoder.ucLaneNum = dp_lane_count; + } else if (radeon_encoder->pixel_clock > 165000) + args.v3.sExtEncoder.ucLaneNum = 8; + else + args.v3.sExtEncoder.ucLaneNum = 4; + switch (ext_enum) { + case GRAPH_OBJECT_ENUM_ID1: + args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_ENCODER1; + break; + case GRAPH_OBJECT_ENUM_ID2: + args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_ENCODER2; + break; + case GRAPH_OBJECT_ENUM_ID3: + args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_ENCODER3; + break; + } + switch (bpc) { + case 0: + args.v3.sExtEncoder.ucBitPerColor = PANEL_BPC_UNDEFINE; + break; + case 6: + args.v3.sExtEncoder.ucBitPerColor = PANEL_6BIT_PER_COLOR; + break; + case 8: + default: + args.v3.sExtEncoder.ucBitPerColor = PANEL_8BIT_PER_COLOR; + break; + case 10: + args.v3.sExtEncoder.ucBitPerColor = PANEL_10BIT_PER_COLOR; + break; + case 12: + args.v3.sExtEncoder.ucBitPerColor = PANEL_12BIT_PER_COLOR; + break; + case 16: + args.v3.sExtEncoder.ucBitPerColor = PANEL_16BIT_PER_COLOR; + break; + } + break; + default: + DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); + return; + } + break; + default: + DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); + return; + } + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); +} + +static void +atombios_yuv_setup(struct drm_encoder *encoder, bool enable) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); + ENABLE_YUV_PS_ALLOCATION args; + int index = GetIndexIntoMasterTable(COMMAND, EnableYUV); + uint32_t temp, reg; + + memset(&args, 0, sizeof(args)); + + if (rdev->family >= CHIP_R600) + reg = R600_BIOS_3_SCRATCH; + else + reg = RADEON_BIOS_3_SCRATCH; + + /* XXX: fix up scratch reg 
handling */ + temp = RREG32(reg); + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) + WREG32(reg, (ATOM_S3_TV1_ACTIVE | + (radeon_crtc->crtc_id << 18))); + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + WREG32(reg, (ATOM_S3_CV_ACTIVE | (radeon_crtc->crtc_id << 24))); + else + WREG32(reg, 0); + + if (enable) + args.ucEnable = ATOM_ENABLE; + args.ucCRTC = radeon_crtc->crtc_id; + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + + WREG32(reg, temp); +} + +static void +radeon_atom_encoder_dpms_avivo(struct drm_encoder *encoder, int mode) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION args; + int index = 0; + + memset(&args, 0, sizeof(args)); + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + index = GetIndexIntoMasterTable(COMMAND, TMDSAOutputControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_DDI: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + index = GetIndexIntoMasterTable(COMMAND, LCD1OutputControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) + index = GetIndexIntoMasterTable(COMMAND, LCD1OutputControl); + else + index = GetIndexIntoMasterTable(COMMAND, LVTMAOutputControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) + index = GetIndexIntoMasterTable(COMMAND, TV1OutputControl); + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + index = GetIndexIntoMasterTable(COMMAND, CV1OutputControl); + else + index = GetIndexIntoMasterTable(COMMAND, DAC1OutputControl); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) + index = GetIndexIntoMasterTable(COMMAND, TV1OutputControl); + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + index = GetIndexIntoMasterTable(COMMAND, CV1OutputControl); + else + index = GetIndexIntoMasterTable(COMMAND, DAC2OutputControl); + break; + default: + return; + } + + switch (mode) { + case DRM_MODE_DPMS_ON: + args.ucAction = ATOM_ENABLE; + /* workaround for DVOOutputControl on some RS690 systems */ + if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DDI) { + u32 reg = RREG32(RADEON_BIOS_3_SCRATCH); + WREG32(RADEON_BIOS_3_SCRATCH, reg & ~ATOM_S3_DFP2I_ACTIVE); + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + WREG32(RADEON_BIOS_3_SCRATCH, reg); + } else + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { + args.ucAction = ATOM_LCD_BLON; + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + } + break; + case DRM_MODE_DPMS_STANDBY: + case DRM_MODE_DPMS_SUSPEND: + case DRM_MODE_DPMS_OFF: + args.ucAction = ATOM_DISABLE; + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { + args.ucAction = ATOM_LCD_BLOFF; + atom_execute_table(rdev->mode_info.atom_context, index, 
(uint32_t *)&args); + } + break; + } +} + +static void +radeon_atom_encoder_dpms_dig(struct drm_encoder *encoder, int mode) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); + struct radeon_connector *radeon_connector = NULL; + struct radeon_connector_atom_dig *radeon_dig_connector = NULL; + + if (connector) { + radeon_connector = to_radeon_connector(connector); + radeon_dig_connector = radeon_connector->con_priv; + } + + switch (mode) { + case DRM_MODE_DPMS_ON: + /* some early dce3.2 boards have a bug in their transmitter control table */ + if ((rdev->family == CHIP_RV710) || (rdev->family == CHIP_RV730)) + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); + else + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); + if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) { + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { + atombios_set_edp_panel_power(connector, + ATOM_TRANSMITTER_ACTION_POWER_ON); + radeon_dig_connector->edp_on = true; + } + if (ASIC_IS_DCE4(rdev)) + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); + radeon_dp_link_train(encoder, connector); + if (ASIC_IS_DCE4(rdev)) + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0); + } + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0); + break; + case DRM_MODE_DPMS_STANDBY: + case DRM_MODE_DPMS_SUSPEND: + case DRM_MODE_DPMS_OFF: + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0); + if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) { + if (ASIC_IS_DCE4(rdev)) + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { + atombios_set_edp_panel_power(connector, + ATOM_TRANSMITTER_ACTION_POWER_OFF); + radeon_dig_connector->edp_on = false; + } + } + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLOFF, 0, 0); + break; + } +} + +static void +radeon_atom_encoder_dpms_ext(struct drm_encoder *encoder, + struct drm_encoder *ext_encoder, + int mode) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + + switch (mode) { + case DRM_MODE_DPMS_ON: + default: + if (ASIC_IS_DCE41(rdev)) { + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_ENABLE_OUTPUT); + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING_OFF); + } else + atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); + break; + case DRM_MODE_DPMS_STANDBY: + case DRM_MODE_DPMS_SUSPEND: + case DRM_MODE_DPMS_OFF: + if (ASIC_IS_DCE41(rdev)) { + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING); + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_DISABLE_OUTPUT); + } else + atombios_external_encoder_setup(encoder, ext_encoder, ATOM_DISABLE); + break; + } +} + +static void +radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder 
*radeon_encoder = to_radeon_encoder(encoder); + struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); + + DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n", + radeon_encoder->encoder_id, mode, radeon_encoder->devices, + radeon_encoder->active_device); + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + case ENCODER_OBJECT_ID_INTERNAL_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_DDI: + case ENCODER_OBJECT_ID_INTERNAL_DAC2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + radeon_atom_encoder_dpms_avivo(encoder, mode); + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + radeon_atom_encoder_dpms_dig(encoder, mode); + break; + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + if (ASIC_IS_DCE5(rdev)) { + switch (mode) { + case DRM_MODE_DPMS_ON: + atombios_dvo_setup(encoder, ATOM_ENABLE); + break; + case DRM_MODE_DPMS_STANDBY: + case DRM_MODE_DPMS_SUSPEND: + case DRM_MODE_DPMS_OFF: + atombios_dvo_setup(encoder, ATOM_DISABLE); + break; + } + } else if (ASIC_IS_DCE3(rdev)) + radeon_atom_encoder_dpms_dig(encoder, mode); + else + radeon_atom_encoder_dpms_avivo(encoder, mode); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + if (ASIC_IS_DCE5(rdev)) { + switch (mode) { + case DRM_MODE_DPMS_ON: + atombios_dac_setup(encoder, ATOM_ENABLE); + break; + case DRM_MODE_DPMS_STANDBY: + case DRM_MODE_DPMS_SUSPEND: + case DRM_MODE_DPMS_OFF: + atombios_dac_setup(encoder, ATOM_DISABLE); + break; + } + } else + radeon_atom_encoder_dpms_avivo(encoder, mode); + break; + default: + return; + } + + if (ext_encoder) + radeon_atom_encoder_dpms_ext(encoder, ext_encoder, mode); + + radeon_atombios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? 
true : false); + +} + +union crtc_source_param { + SELECT_CRTC_SOURCE_PS_ALLOCATION v1; + SELECT_CRTC_SOURCE_PARAMETERS_V2 v2; +}; + +static void +atombios_set_encoder_crtc_source(struct drm_encoder *encoder) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); + union crtc_source_param args; + int index = GetIndexIntoMasterTable(COMMAND, SelectCRTC_Source); + uint8_t frev, crev; + struct radeon_encoder_atom_dig *dig; + + memset(&args, 0, sizeof(args)); + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) + return; + + switch (frev) { + case 1: + switch (crev) { + case 1: + default: + if (ASIC_IS_AVIVO(rdev)) + args.v1.ucCRTC = radeon_crtc->crtc_id; + else { + if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DAC1) { + args.v1.ucCRTC = radeon_crtc->crtc_id; + } else { + args.v1.ucCRTC = radeon_crtc->crtc_id << 2; + } + } + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + args.v1.ucDevice = ATOM_DEVICE_DFP1_INDEX; + break; + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + if (radeon_encoder->devices & ATOM_DEVICE_LCD1_SUPPORT) + args.v1.ucDevice = ATOM_DEVICE_LCD1_INDEX; + else + args.v1.ucDevice = ATOM_DEVICE_DFP3_INDEX; + break; + case ENCODER_OBJECT_ID_INTERNAL_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_DDI: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + args.v1.ucDevice = ATOM_DEVICE_DFP2_INDEX; + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) + args.v1.ucDevice = ATOM_DEVICE_TV1_INDEX; + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + args.v1.ucDevice = ATOM_DEVICE_CV_INDEX; + else + args.v1.ucDevice = ATOM_DEVICE_CRT1_INDEX; + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) + args.v1.ucDevice = ATOM_DEVICE_TV1_INDEX; + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + args.v1.ucDevice = ATOM_DEVICE_CV_INDEX; + else + args.v1.ucDevice = ATOM_DEVICE_CRT2_INDEX; + break; + } + break; + case 2: + args.v2.ucCRTC = radeon_crtc->crtc_id; + if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE) { + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); + + if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS) + args.v2.ucEncodeMode = ATOM_ENCODER_MODE_LVDS; + else if (connector->connector_type == DRM_MODE_CONNECTOR_VGA) + args.v2.ucEncodeMode = ATOM_ENCODER_MODE_CRT; + else + args.v2.ucEncodeMode = atombios_get_encoder_mode(encoder); + } else + args.v2.ucEncodeMode = atombios_get_encoder_mode(encoder); + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + dig = radeon_encoder->enc_priv; + switch (dig->dig_encoder) { + case 0: + args.v2.ucEncoderID = ASIC_INT_DIG1_ENCODER_ID; + break; + case 1: + args.v2.ucEncoderID = ASIC_INT_DIG2_ENCODER_ID; + break; + case 2: + args.v2.ucEncoderID = ASIC_INT_DIG3_ENCODER_ID; + break; + case 3: + args.v2.ucEncoderID = ASIC_INT_DIG4_ENCODER_ID; + break; + case 4: + 
args.v2.ucEncoderID = ASIC_INT_DIG5_ENCODER_ID; + break; + case 5: + args.v2.ucEncoderID = ASIC_INT_DIG6_ENCODER_ID; + break; + } + break; + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + args.v2.ucEncoderID = ASIC_INT_DVO_ENCODER_ID; + break; + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) + args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; + else + args.v2.ucEncoderID = ASIC_INT_DAC1_ENCODER_ID; + break; + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) + args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; + else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) + args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; + else + args.v2.ucEncoderID = ASIC_INT_DAC2_ENCODER_ID; + break; + } + break; + } + break; + default: + DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); + return; + } + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + + /* update scratch regs with new routing */ + radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id); +} + +static void +atombios_apply_encoder_quirks(struct drm_encoder *encoder, + struct drm_display_mode *mode) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); + + /* Funky macbooks */ + if ((dev->pdev->device == 0x71C5) && + (dev->pdev->subsystem_vendor == 0x106b) && + (dev->pdev->subsystem_device == 0x0080)) { + if (radeon_encoder->devices & ATOM_DEVICE_LCD1_SUPPORT) { + uint32_t lvtma_bit_depth_control = RREG32(AVIVO_LVTMA_BIT_DEPTH_CONTROL); + + lvtma_bit_depth_control &= ~AVIVO_LVTMA_BIT_DEPTH_CONTROL_TRUNCATE_EN; + lvtma_bit_depth_control &= ~AVIVO_LVTMA_BIT_DEPTH_CONTROL_SPATIAL_DITHER_EN; + + WREG32(AVIVO_LVTMA_BIT_DEPTH_CONTROL, lvtma_bit_depth_control); + } + } + + /* set scaler clears this on some chips */ + if (ASIC_IS_AVIVO(rdev) && + (!(radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)))) { + if (ASIC_IS_DCE4(rdev)) { + if (mode->flags & DRM_MODE_FLAG_INTERLACE) + WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset, + EVERGREEN_INTERLEAVE_EN); + else + WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset, 0); + } else { + if (mode->flags & DRM_MODE_FLAG_INTERLACE) + WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset, + AVIVO_D1MODE_INTERLEAVE_EN); + else + WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset, 0); + } + } +} + +static int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_encoder *test_encoder; + struct radeon_encoder_atom_dig *dig; + uint32_t dig_enc_in_use = 0; + + /* DCE4/5 */ + if (ASIC_IS_DCE4(rdev)) { + dig = radeon_encoder->enc_priv; + if (ASIC_IS_DCE41(rdev)) { + /* ontario follows DCE4 */ + if (rdev->family == CHIP_PALM) { + if (dig->linkb) + return 1; + else + return 0; + } else + /* llano follows DCE3.2 */ + return radeon_crtc->crtc_id; + } else { + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + if (dig->linkb) + return 1; + else + return 0; + 
break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + if (dig->linkb) + return 3; + else + return 2; + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + if (dig->linkb) + return 5; + else + return 4; + break; + } + } + } + + /* on DCE32 and encoder can driver any block so just crtc id */ + if (ASIC_IS_DCE32(rdev)) { + return radeon_crtc->crtc_id; + } + + /* on DCE3 - LVTMA can only be driven by DIGB */ + list_for_each_entry(test_encoder, &dev->mode_config.encoder_list, head) { + struct radeon_encoder *radeon_test_encoder; + + if (encoder == test_encoder) + continue; + + if (!radeon_encoder_is_digital(test_encoder)) + continue; + + radeon_test_encoder = to_radeon_encoder(test_encoder); + dig = radeon_test_encoder->enc_priv; + + if (dig->dig_encoder >= 0) + dig_enc_in_use |= (1 << dig->dig_encoder); + } + + if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA) { + if (dig_enc_in_use & 0x2) + DRM_ERROR("LVDS required digital encoder 2 but it was in use - stealing\n"); + return 1; + } + if (!(dig_enc_in_use & 1)) + return 0; + return 1; +} + +/* This only needs to be called once at startup */ +void +radeon_atom_encoder_init(struct radeon_device *rdev) +{ + struct drm_device *dev = rdev->ddev; + struct drm_encoder *encoder; + + list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_INIT, 0, 0); + break; + default: + break; + } + + if (ext_encoder && ASIC_IS_DCE41(rdev)) + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT); + } +} + +static void +radeon_atom_encoder_mode_set(struct drm_encoder *encoder, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); + + radeon_encoder->pixel_clock = adjusted_mode->clock; + + if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE4(rdev)) { + if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT | ATOM_DEVICE_TV_SUPPORT)) + atombios_yuv_setup(encoder, true); + else + atombios_yuv_setup(encoder, false); + } + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + atombios_digital_setup(encoder, PANEL_ENCODER_ACTION_ENABLE); + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + if (ASIC_IS_DCE4(rdev)) { + /* disable the transmitter */ + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); + /* setup and enable the encoder */ + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); + + /* enable the transmitter */ + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); + } else { + /* disable the encoder and transmitter */ + atombios_dig_transmitter_setup(encoder, 
ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); + atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); + + /* setup and enable the encoder and transmitter */ + atombios_dig_encoder_setup(encoder, ATOM_ENABLE, 0); + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0); + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); + } + break; + case ENCODER_OBJECT_ID_INTERNAL_DDI: + case ENCODER_OBJECT_ID_INTERNAL_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + atombios_dvo_setup(encoder, ATOM_ENABLE); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_DAC2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + atombios_dac_setup(encoder, ATOM_ENABLE); + if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) { + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) + atombios_tv_setup(encoder, ATOM_ENABLE); + else + atombios_tv_setup(encoder, ATOM_DISABLE); + } + break; + } + + if (ext_encoder) { + if (ASIC_IS_DCE41(rdev)) + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP); + else + atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); + } + + atombios_apply_encoder_quirks(encoder, adjusted_mode); + + if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) { + r600_hdmi_enable(encoder); + r600_hdmi_setmode(encoder, adjusted_mode); + } +} + +static bool +atombios_dac_load_detect(struct drm_encoder *encoder, struct drm_connector *connector) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + + if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | + ATOM_DEVICE_CV_SUPPORT | + ATOM_DEVICE_CRT_SUPPORT)) { + DAC_LOAD_DETECTION_PS_ALLOCATION args; + int index = GetIndexIntoMasterTable(COMMAND, DAC_LoadDetection); + uint8_t frev, crev; + + memset(&args, 0, sizeof(args)); + + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) + return false; + + args.sDacload.ucMisc = 0; + + if ((radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DAC1) || + (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1)) + args.sDacload.ucDacType = ATOM_DAC_A; + else + args.sDacload.ucDacType = ATOM_DAC_B; + + if (radeon_connector->devices & ATOM_DEVICE_CRT1_SUPPORT) + args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_CRT1_SUPPORT); + else if (radeon_connector->devices & ATOM_DEVICE_CRT2_SUPPORT) + args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_CRT2_SUPPORT); + else if (radeon_connector->devices & ATOM_DEVICE_CV_SUPPORT) { + args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_CV_SUPPORT); + if (crev >= 3) + args.sDacload.ucMisc = DAC_LOAD_MISC_YPrPb; + } else if (radeon_connector->devices & ATOM_DEVICE_TV1_SUPPORT) { + args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_TV1_SUPPORT); + if (crev >= 3) + args.sDacload.ucMisc = DAC_LOAD_MISC_YPrPb; + } + + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); + + return true; + } else + return false; +} + +static enum drm_connector_status +radeon_atom_dac_detect(struct drm_encoder *encoder, struct drm_connector *connector) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = 
to_radeon_encoder(encoder); + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + uint32_t bios_0_scratch; + + if (!atombios_dac_load_detect(encoder, connector)) { + DRM_DEBUG_KMS("detect returned false \n"); + return connector_status_unknown; + } + + if (rdev->family >= CHIP_R600) + bios_0_scratch = RREG32(R600_BIOS_0_SCRATCH); + else + bios_0_scratch = RREG32(RADEON_BIOS_0_SCRATCH); + + DRM_DEBUG_KMS("Bios 0 scratch %x %08x\n", bios_0_scratch, radeon_encoder->devices); + if (radeon_connector->devices & ATOM_DEVICE_CRT1_SUPPORT) { + if (bios_0_scratch & ATOM_S0_CRT1_MASK) + return connector_status_connected; + } + if (radeon_connector->devices & ATOM_DEVICE_CRT2_SUPPORT) { + if (bios_0_scratch & ATOM_S0_CRT2_MASK) + return connector_status_connected; + } + if (radeon_connector->devices & ATOM_DEVICE_CV_SUPPORT) { + if (bios_0_scratch & (ATOM_S0_CV_MASK|ATOM_S0_CV_MASK_A)) + return connector_status_connected; + } + if (radeon_connector->devices & ATOM_DEVICE_TV1_SUPPORT) { + if (bios_0_scratch & (ATOM_S0_TV1_COMPOSITE | ATOM_S0_TV1_COMPOSITE_A)) + return connector_status_connected; /* CTV */ + else if (bios_0_scratch & (ATOM_S0_TV1_SVIDEO | ATOM_S0_TV1_SVIDEO_A)) + return connector_status_connected; /* STV */ + } + return connector_status_disconnected; +} + +static enum drm_connector_status +radeon_atom_dig_detect(struct drm_encoder *encoder, struct drm_connector *connector) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); + u32 bios_0_scratch; + + if (!ASIC_IS_DCE4(rdev)) + return connector_status_unknown; + + if (!ext_encoder) + return connector_status_unknown; + + if ((radeon_connector->devices & ATOM_DEVICE_CRT_SUPPORT) == 0) + return connector_status_unknown; + + /* load detect on the dp bridge */ + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_DACLOAD_DETECTION); + + bios_0_scratch = RREG32(R600_BIOS_0_SCRATCH); + + DRM_DEBUG_KMS("Bios 0 scratch %x %08x\n", bios_0_scratch, radeon_encoder->devices); + if (radeon_connector->devices & ATOM_DEVICE_CRT1_SUPPORT) { + if (bios_0_scratch & ATOM_S0_CRT1_MASK) + return connector_status_connected; + } + if (radeon_connector->devices & ATOM_DEVICE_CRT2_SUPPORT) { + if (bios_0_scratch & ATOM_S0_CRT2_MASK) + return connector_status_connected; + } + if (radeon_connector->devices & ATOM_DEVICE_CV_SUPPORT) { + if (bios_0_scratch & (ATOM_S0_CV_MASK|ATOM_S0_CV_MASK_A)) + return connector_status_connected; + } + if (radeon_connector->devices & ATOM_DEVICE_TV1_SUPPORT) { + if (bios_0_scratch & (ATOM_S0_TV1_COMPOSITE | ATOM_S0_TV1_COMPOSITE_A)) + return connector_status_connected; /* CTV */ + else if (bios_0_scratch & (ATOM_S0_TV1_SVIDEO | ATOM_S0_TV1_SVIDEO_A)) + return connector_status_connected; /* STV */ + } + return connector_status_disconnected; +} + +void +radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder) +{ + struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); + + if (ext_encoder) + /* ddc_setup on the dp bridge */ + atombios_external_encoder_setup(encoder, ext_encoder, + EXTERNAL_ENCODER_ACTION_V3_DDC_SETUP); + +} + +static void radeon_atom_encoder_prepare(struct drm_encoder *encoder) +{ + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct drm_connector *connector = 
radeon_get_connector_for_encoder(encoder); + + if ((radeon_encoder->active_device & + (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) || + (radeon_encoder_get_dp_bridge_encoder_id(encoder) != + ENCODER_OBJECT_ID_NONE)) { + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; + if (dig) + dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder); + } + + radeon_atom_output_lock(encoder, true); + radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); + + if (connector) { + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + + /* select the clock/data port if it uses a router */ + if (radeon_connector->router.cd_valid) + radeon_router_select_cd_port(radeon_connector); + + /* turn eDP panel on for mode set */ + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) + atombios_set_edp_panel_power(connector, + ATOM_TRANSMITTER_ACTION_POWER_ON); + } + + /* this is needed for the pll/ss setup to work correctly in some cases */ + atombios_set_encoder_crtc_source(encoder); +} + +static void radeon_atom_encoder_commit(struct drm_encoder *encoder) +{ + radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_ON); + radeon_atom_output_lock(encoder, false); +} + +static void radeon_atom_encoder_disable(struct drm_encoder *encoder) +{ + struct drm_device *dev = encoder->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + struct radeon_encoder_atom_dig *dig; + + /* check for pre-DCE3 cards with shared encoders; + * can't really use the links individually, so don't disable + * the encoder if it's in use by another connector + */ + if (!ASIC_IS_DCE3(rdev)) { + struct drm_encoder *other_encoder; + struct radeon_encoder *other_radeon_encoder; + + list_for_each_entry(other_encoder, &dev->mode_config.encoder_list, head) { + other_radeon_encoder = to_radeon_encoder(other_encoder); + if ((radeon_encoder->encoder_id == other_radeon_encoder->encoder_id) && + drm_helper_encoder_in_use(other_encoder)) + goto disable_done; + } + } + + radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + atombios_digital_setup(encoder, PANEL_ENCODER_ACTION_DISABLE); + break; + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + if (ASIC_IS_DCE4(rdev)) + /* disable the transmitter */ + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); + else { + /* disable the encoder and transmitter */ + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); + atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); + } + break; + case ENCODER_OBJECT_ID_INTERNAL_DDI: + case ENCODER_OBJECT_ID_INTERNAL_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + atombios_dvo_setup(encoder, ATOM_DISABLE); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_DAC2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + atombios_dac_setup(encoder, ATOM_DISABLE); + if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) + atombios_tv_setup(encoder, ATOM_DISABLE); + break; + } + +disable_done: + if (radeon_encoder_is_digital(encoder)) { + if (atombios_get_encoder_mode(encoder) == 
ATOM_ENCODER_MODE_HDMI) + r600_hdmi_disable(encoder); + dig = radeon_encoder->enc_priv; + dig->dig_encoder = -1; + } + radeon_encoder->active_device = 0; +} + +/* these are handled by the primary encoders */ +static void radeon_atom_ext_prepare(struct drm_encoder *encoder) +{ + +} + +static void radeon_atom_ext_commit(struct drm_encoder *encoder) +{ + +} + +static void +radeon_atom_ext_mode_set(struct drm_encoder *encoder, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + +} + +static void radeon_atom_ext_disable(struct drm_encoder *encoder) +{ + +} + +static void +radeon_atom_ext_dpms(struct drm_encoder *encoder, int mode) +{ + +} + +static bool radeon_atom_ext_mode_fixup(struct drm_encoder *encoder, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + return true; +} + +static const struct drm_encoder_helper_funcs radeon_atom_ext_helper_funcs = { + .dpms = radeon_atom_ext_dpms, + .mode_fixup = radeon_atom_ext_mode_fixup, + .prepare = radeon_atom_ext_prepare, + .mode_set = radeon_atom_ext_mode_set, + .commit = radeon_atom_ext_commit, + .disable = radeon_atom_ext_disable, + /* no detect for TMDS/LVDS yet */ +}; + +static const struct drm_encoder_helper_funcs radeon_atom_dig_helper_funcs = { + .dpms = radeon_atom_encoder_dpms, + .mode_fixup = radeon_atom_mode_fixup, + .prepare = radeon_atom_encoder_prepare, + .mode_set = radeon_atom_encoder_mode_set, + .commit = radeon_atom_encoder_commit, + .disable = radeon_atom_encoder_disable, + .detect = radeon_atom_dig_detect, +}; + +static const struct drm_encoder_helper_funcs radeon_atom_dac_helper_funcs = { + .dpms = radeon_atom_encoder_dpms, + .mode_fixup = radeon_atom_mode_fixup, + .prepare = radeon_atom_encoder_prepare, + .mode_set = radeon_atom_encoder_mode_set, + .commit = radeon_atom_encoder_commit, + .detect = radeon_atom_dac_detect, +}; + +void radeon_enc_destroy(struct drm_encoder *encoder) +{ + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); + kfree(radeon_encoder->enc_priv); + drm_encoder_cleanup(encoder); + kfree(radeon_encoder); +} + +static const struct drm_encoder_funcs radeon_atom_enc_funcs = { + .destroy = radeon_enc_destroy, +}; + +struct radeon_encoder_atom_dac * +radeon_atombios_set_dac_info(struct radeon_encoder *radeon_encoder) +{ + struct drm_device *dev = radeon_encoder->base.dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_encoder_atom_dac *dac = kzalloc(sizeof(struct radeon_encoder_atom_dac), GFP_KERNEL); + + if (!dac) + return NULL; + + dac->tv_std = radeon_atombios_get_tv_info(rdev); + return dac; +} + +struct radeon_encoder_atom_dig * +radeon_atombios_set_dig_info(struct radeon_encoder *radeon_encoder) +{ + int encoder_enum = (radeon_encoder->encoder_enum & ENUM_ID_MASK) >> ENUM_ID_SHIFT; + struct radeon_encoder_atom_dig *dig = kzalloc(sizeof(struct radeon_encoder_atom_dig), GFP_KERNEL); + + if (!dig) + return NULL; + + /* coherent mode by default */ + dig->coherent_mode = true; + dig->dig_encoder = -1; + + if (encoder_enum == 2) + dig->linkb = true; + else + dig->linkb = false; + + return dig; +} + +void +radeon_add_atom_encoder(struct drm_device *dev, + uint32_t encoder_enum, + uint32_t supported_device, + u16 caps) +{ + struct radeon_device *rdev = dev->dev_private; + struct drm_encoder *encoder; + struct radeon_encoder *radeon_encoder; + + /* see if we already added it */ + list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { + radeon_encoder = to_radeon_encoder(encoder); + if 
(radeon_encoder->encoder_enum == encoder_enum) { + radeon_encoder->devices |= supported_device; + return; + } + + } + + /* add a new one */ + radeon_encoder = kzalloc(sizeof(struct radeon_encoder), GFP_KERNEL); + if (!radeon_encoder) + return; + + encoder = &radeon_encoder->base; + switch (rdev->num_crtc) { + case 1: + encoder->possible_crtcs = 0x1; + break; + case 2: + default: + encoder->possible_crtcs = 0x3; + break; + case 4: + encoder->possible_crtcs = 0xf; + break; + case 6: + encoder->possible_crtcs = 0x3f; + break; + } + + radeon_encoder->enc_priv = NULL; + + radeon_encoder->encoder_enum = encoder_enum; + radeon_encoder->encoder_id = (encoder_enum & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT; + radeon_encoder->devices = supported_device; + radeon_encoder->rmx_type = RMX_OFF; + radeon_encoder->underscan_type = UNDERSCAN_OFF; + radeon_encoder->is_ext_encoder = false; + radeon_encoder->caps = caps; + + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_LVDS: + case ENCODER_OBJECT_ID_INTERNAL_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { + radeon_encoder->rmx_type = RMX_FULL; + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); + radeon_encoder->enc_priv = radeon_atombios_get_lvds_info(radeon_encoder); + } else { + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); + radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); + } + drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC1: + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); + radeon_encoder->enc_priv = radeon_atombios_set_dac_info(radeon_encoder); + drm_encoder_helper_add(encoder, &radeon_atom_dac_helper_funcs); + break; + case ENCODER_OBJECT_ID_INTERNAL_DAC2: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TVDAC); + radeon_encoder->enc_priv = radeon_atombios_set_dac_info(radeon_encoder); + drm_encoder_helper_add(encoder, &radeon_atom_dac_helper_funcs); + break; + case ENCODER_OBJECT_ID_INTERNAL_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: + case ENCODER_OBJECT_ID_INTERNAL_DDI: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { + radeon_encoder->rmx_type = RMX_FULL; + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); + radeon_encoder->enc_priv = radeon_atombios_get_lvds_info(radeon_encoder); + } else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) { + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); + radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); + } else { + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); + radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); + } + drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs); + break; + case ENCODER_OBJECT_ID_SI170B: + case ENCODER_OBJECT_ID_CH7303: + case ENCODER_OBJECT_ID_EXTERNAL_SDVOA: + case ENCODER_OBJECT_ID_EXTERNAL_SDVOB: + case ENCODER_OBJECT_ID_TITFP513: + case ENCODER_OBJECT_ID_VT1623: + case ENCODER_OBJECT_ID_HDMI_SI1930: + case 
ENCODER_OBJECT_ID_TRAVIS: + case ENCODER_OBJECT_ID_NUTMEG: + /* these are handled by the primary encoders */ + radeon_encoder->is_ext_encoder = true; + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); + else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); + else + drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); + drm_encoder_helper_add(encoder, &radeon_atom_ext_helper_funcs); + break; + } +} diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c index ed406e8404a3..1d603a3335db 100644 --- a/drivers/gpu/drm/radeon/evergreen.c +++ b/drivers/gpu/drm/radeon/evergreen.c @@ -157,6 +157,57 @@ int sumo_get_temp(struct radeon_device *rdev) return actual_temp * 1000; } +void sumo_pm_init_profile(struct radeon_device *rdev) +{ + int idx; + + /* default */ + rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index; + rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index; + rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0; + + /* low,mid sh/mh */ + if (rdev->flags & RADEON_IS_MOBILITY) + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0); + else + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); + + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0; + + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0; + + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0; + + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0; + + /* high sh/mh */ + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); + rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = + rdev->pm.power_state[idx].num_clock_modes - 1; + + rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = + rdev->pm.power_state[idx].num_clock_modes - 1; +} + void evergreen_pm_misc(struct radeon_device *rdev) { int req_ps_idx = rdev->pm.requested_power_state_index; @@ -353,6 +404,7 @@ void evergreen_hpd_init(struct radeon_device *rdev) default: break; } + radeon_hpd_set_polarity(rdev, radeon_connector->hpd.hpd); } if 
(rdev->irq.installed) evergreen_irq_set(rdev); @@ -893,7 +945,7 @@ int evergreen_pcie_gart_enable(struct radeon_device *rdev) u32 tmp; int r; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { dev_err(rdev->dev, "No VRAM object for PCIE GART.\n"); return -EINVAL; } @@ -945,7 +997,6 @@ int evergreen_pcie_gart_enable(struct radeon_device *rdev) void evergreen_pcie_gart_disable(struct radeon_device *rdev) { u32 tmp; - int r; /* Disable all tables */ WREG32(VM_CONTEXT0_CNTL, 0); @@ -965,14 +1016,7 @@ void evergreen_pcie_gart_disable(struct radeon_device *rdev) WREG32(MC_VM_MB_L1_TLB1_CNTL, tmp); WREG32(MC_VM_MB_L1_TLB2_CNTL, tmp); WREG32(MC_VM_MB_L1_TLB3_CNTL, tmp); - if (rdev->gart.table.vram.robj) { - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); - if (likely(r == 0)) { - radeon_bo_kunmap(rdev->gart.table.vram.robj); - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); - } - } + radeon_gart_table_vram_unpin(rdev); } void evergreen_pcie_gart_fini(struct radeon_device *rdev) @@ -1226,7 +1270,7 @@ void evergreen_mc_program(struct radeon_device *rdev) WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR, rdev->mc.vram_end >> 12); } - WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, 0); + WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, rdev->vram_scratch.gpu_addr >> 12); if (rdev->flags & RADEON_IS_IGP) { tmp = RREG32(MC_FUS_VM_FB_OFFSET) & 0x000FFFFF; tmp |= ((rdev->mc.vram_end >> 20) & 0xF) << 24; @@ -3031,6 +3075,10 @@ static int evergreen_startup(struct radeon_device *rdev) } } + r = r600_vram_scratch_init(rdev); + if (r) + return r; + evergreen_mc_program(rdev); if (rdev->flags & RADEON_IS_AGP) { evergreen_agp_enable(rdev); @@ -3235,6 +3283,7 @@ void evergreen_fini(struct radeon_device *rdev) radeon_ib_pool_fini(rdev); radeon_irq_kms_fini(rdev); evergreen_pcie_gart_fini(rdev); + r600_vram_scratch_fini(rdev); radeon_gem_fini(rdev); radeon_fence_driver_fini(rdev); radeon_agp_fini(rdev); diff --git a/drivers/gpu/drm/radeon/evergreen_blit_kms.c b/drivers/gpu/drm/radeon/evergreen_blit_kms.c index dcf11bbc06d9..914e5af84163 100644 --- a/drivers/gpu/drm/radeon/evergreen_blit_kms.c +++ b/drivers/gpu/drm/radeon/evergreen_blit_kms.c @@ -94,6 +94,15 @@ cp_set_surface_sync(struct radeon_device *rdev, else cp_coher_size = ((size + 255) >> 8); + if (rdev->family >= CHIP_CAYMAN) { + /* CP_COHER_CNTL2 has to be set manually when submitting a surface_sync + * to the RB directly. For IBs, the CP programs this as part of the + * surface_sync packet. 
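+		 * Below this is done with a SET_CONFIG_REG write of 0 to
+		 * CP_COHER_CNTL2 (0x85e8).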
+ */ + radeon_ring_write(rdev, PACKET3(PACKET3_SET_CONFIG_REG, 1)); + radeon_ring_write(rdev, (0x85e8 - PACKET3_SET_CONFIG_REG_START) >> 2); + radeon_ring_write(rdev, 0); /* CP_COHER_CNTL2 */ + } radeon_ring_write(rdev, PACKET3(PACKET3_SURFACE_SYNC, 3)); radeon_ring_write(rdev, sync_type); radeon_ring_write(rdev, cp_coher_size); @@ -174,7 +183,7 @@ set_vtx_resource(struct radeon_device *rdev, u64 gpu_addr) static void set_tex_resource(struct radeon_device *rdev, int format, int w, int h, int pitch, - u64 gpu_addr) + u64 gpu_addr, u32 size) { u32 sq_tex_resource_word0, sq_tex_resource_word1; u32 sq_tex_resource_word4, sq_tex_resource_word7; @@ -196,6 +205,9 @@ set_tex_resource(struct radeon_device *rdev, sq_tex_resource_word7 = format | S__SQ_CONSTANT_TYPE(SQ_TEX_VTX_VALID_TEXTURE); + cp_set_surface_sync(rdev, + PACKET3_TC_ACTION_ENA, size, gpu_addr); + radeon_ring_write(rdev, PACKET3(PACKET3_SET_RESOURCE, 8)); radeon_ring_write(rdev, 0); radeon_ring_write(rdev, sq_tex_resource_word0); @@ -613,11 +625,13 @@ int evergreen_blit_init(struct radeon_device *rdev) rdev->r600_blit.primitives.set_default_state = set_default_state; rdev->r600_blit.ring_size_common = 55; /* shaders + def state */ - rdev->r600_blit.ring_size_common += 10; /* fence emit for VB IB */ + rdev->r600_blit.ring_size_common += 16; /* fence emit for VB IB */ rdev->r600_blit.ring_size_common += 5; /* done copy */ - rdev->r600_blit.ring_size_common += 10; /* fence emit for done copy */ + rdev->r600_blit.ring_size_common += 16; /* fence emit for done copy */ rdev->r600_blit.ring_size_per_loop = 74; + if (rdev->family >= CHIP_CAYMAN) + rdev->r600_blit.ring_size_per_loop += 9; /* additional DWs for surface sync */ rdev->r600_blit.max_dim = 16384; diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c index fdb93f884575..0e5799857465 100644 --- a/drivers/gpu/drm/radeon/ni.c +++ b/drivers/gpu/drm/radeon/ni.c @@ -262,8 +262,11 @@ int ni_mc_load_microcode(struct radeon_device *rdev) WREG32(MC_SEQ_SUP_CNTL, 0x00000001); /* wait for training to complete */ - while (!(RREG32(MC_IO_PAD_CNTL_D0) & MEM_FALL_OUT_CMD)) - udelay(10); + for (i = 0; i < rdev->usec_timeout; i++) { + if (RREG32(MC_IO_PAD_CNTL_D0) & MEM_FALL_OUT_CMD) + break; + udelay(1); + } if (running) WREG32(MC_SHARED_BLACKOUT_CNTL, blackout); @@ -933,7 +936,7 @@ int cayman_pcie_gart_enable(struct radeon_device *rdev) { int r; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { dev_err(rdev->dev, "No VRAM object for PCIE GART.\n"); return -EINVAL; } @@ -978,8 +981,6 @@ int cayman_pcie_gart_enable(struct radeon_device *rdev) void cayman_pcie_gart_disable(struct radeon_device *rdev) { - int r; - /* Disable all tables */ WREG32(VM_CONTEXT0_CNTL, 0); WREG32(VM_CONTEXT1_CNTL, 0); @@ -995,14 +996,7 @@ void cayman_pcie_gart_disable(struct radeon_device *rdev) WREG32(VM_L2_CNTL2, 0); WREG32(VM_L2_CNTL3, L2_CACHE_BIGK_ASSOCIATIVITY | L2_CACHE_BIGK_FRAGMENT_SIZE(6)); - if (rdev->gart.table.vram.robj) { - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); - if (likely(r == 0)) { - radeon_bo_kunmap(rdev->gart.table.vram.robj); - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); - } - } + radeon_gart_table_vram_unpin(rdev); } void cayman_pcie_gart_fini(struct radeon_device *rdev) @@ -1362,6 +1356,10 @@ static int cayman_startup(struct radeon_device *rdev) return r; } + r = r600_vram_scratch_init(rdev); + if (r) + return r; + evergreen_mc_program(rdev); r = cayman_pcie_gart_enable(rdev); if (r) @@ 
-1557,6 +1555,7 @@ void cayman_fini(struct radeon_device *rdev) radeon_ib_pool_fini(rdev); radeon_irq_kms_fini(rdev); cayman_pcie_gart_fini(rdev); + r600_vram_scratch_fini(rdev); radeon_gem_fini(rdev); radeon_fence_driver_fini(rdev); radeon_bo_fini(rdev); diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c index cbf49f4f408e..ad158ea49901 100644 --- a/drivers/gpu/drm/radeon/r100.c +++ b/drivers/gpu/drm/radeon/r100.c @@ -537,6 +537,7 @@ void r100_hpd_init(struct radeon_device *rdev) default: break; } + radeon_hpd_set_polarity(rdev, radeon_connector->hpd.hpd); } if (rdev->irq.installed) r100_irq_set(rdev); @@ -577,7 +578,7 @@ int r100_pci_gart_init(struct radeon_device *rdev) { int r; - if (rdev->gart.table.ram.ptr) { + if (rdev->gart.ptr) { WARN(1, "R100 PCI GART already initialized\n"); return 0; } @@ -636,10 +637,12 @@ void r100_pci_gart_disable(struct radeon_device *rdev) int r100_pci_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) { + u32 *gtt = rdev->gart.ptr; + if (i < 0 || i > rdev->gart.num_gpu_pages) { return -EINVAL; } - rdev->gart.table.ram.ptr[i] = cpu_to_le32(lower_32_bits(addr)); + gtt[i] = cpu_to_le32(lower_32_bits(addr)); return 0; } diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c index 33f2b68c680b..400b26df652a 100644 --- a/drivers/gpu/drm/radeon/r300.c +++ b/drivers/gpu/drm/radeon/r300.c @@ -74,7 +74,7 @@ void rv370_pcie_gart_tlb_flush(struct radeon_device *rdev) int rv370_pcie_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) { - void __iomem *ptr = (void *)rdev->gart.table.vram.ptr; + void __iomem *ptr = rdev->gart.ptr; if (i < 0 || i > rdev->gart.num_gpu_pages) { return -EINVAL; @@ -93,7 +93,7 @@ int rv370_pcie_gart_init(struct radeon_device *rdev) { int r; - if (rdev->gart.table.vram.robj) { + if (rdev->gart.robj) { WARN(1, "RV370 PCIE GART already initialized\n"); return 0; } @@ -116,7 +116,7 @@ int rv370_pcie_gart_enable(struct radeon_device *rdev) uint32_t tmp; int r; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { dev_err(rdev->dev, "No VRAM object for PCIE GART.\n"); return -EINVAL; } @@ -154,7 +154,6 @@ int rv370_pcie_gart_enable(struct radeon_device *rdev) void rv370_pcie_gart_disable(struct radeon_device *rdev) { u32 tmp; - int r; WREG32_PCIE(RADEON_PCIE_TX_GART_START_LO, 0); WREG32_PCIE(RADEON_PCIE_TX_GART_END_LO, 0); @@ -163,14 +162,7 @@ void rv370_pcie_gart_disable(struct radeon_device *rdev) tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL); tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD; WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp & ~RADEON_PCIE_TX_GART_EN); - if (rdev->gart.table.vram.robj) { - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); - if (likely(r == 0)) { - radeon_bo_kunmap(rdev->gart.table.vram.robj); - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); - } - } + radeon_gart_table_vram_unpin(rdev); } void rv370_pcie_gart_fini(struct radeon_device *rdev) diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c index 4e777c1e4b7b..9cdda0b3b081 100644 --- a/drivers/gpu/drm/radeon/r600.c +++ b/drivers/gpu/drm/radeon/r600.c @@ -288,24 +288,6 @@ void r600_pm_get_dynpm_state(struct radeon_device *rdev) pcie_lanes); } -static int r600_pm_get_type_index(struct radeon_device *rdev, - enum radeon_pm_state_type ps_type, - int instance) -{ - int i; - int found_instance = -1; - - for (i = 0; i < rdev->pm.num_power_states; i++) { - if (rdev->pm.power_state[i].type == ps_type) { - 
found_instance++; - if (found_instance == instance) - return i; - } - } - /* return default if no match */ - return rdev->pm.default_power_state_index; -} - void rs780_pm_init_profile(struct radeon_device *rdev) { if (rdev->pm.num_power_states == 2) { @@ -421,6 +403,8 @@ void rs780_pm_init_profile(struct radeon_device *rdev) void r600_pm_init_profile(struct radeon_device *rdev) { + int idx; + if (rdev->family == CHIP_R600) { /* XXX */ /* default */ @@ -502,81 +486,43 @@ void r600_pm_init_profile(struct radeon_device *rdev) rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0; rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 2; /* low sh */ - if (rdev->flags & RADEON_IS_MOBILITY) { - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0); - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0); - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0; - } else { - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0; - } + if (rdev->flags & RADEON_IS_MOBILITY) + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0); + else + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0; /* mid sh */ - if (rdev->flags & RADEON_IS_MOBILITY) { - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0); - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0); - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1; - } else { - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1; - } + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1; /* high sh */ - rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); - rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0); + rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = idx; rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0; 
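For reference, the shared radeon_pm_get_type_index() that these profile tables now call behaves like the r600-local helper removed above: walk the power states, count matches of the requested type, and fall back to the default state index. A minimal sketch mirroring the removed body (the actual shared copy is declared in radeon.h but implemented outside the hunks shown here, presumably in radeon_pm.c):

/* Sketch only: logic mirrors the removed static r600_pm_get_type_index(). */
int radeon_pm_get_type_index(struct radeon_device *rdev,
			     enum radeon_pm_state_type ps_type,
			     int instance)
{
	int i;
	int found_instance = -1;

	for (i = 0; i < rdev->pm.num_power_states; i++) {
		if (rdev->pm.power_state[i].type == ps_type) {
			found_instance++;
			if (found_instance == instance)
				return i;
		}
	}
	/* return default if no match */
	return rdev->pm.default_power_state_index;
}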
rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 2; /* low mh */ - if (rdev->flags & RADEON_IS_MOBILITY) { - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1); - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1); - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0; - } else { - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0; - } + if (rdev->flags & RADEON_IS_MOBILITY) + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1); + else + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0; /* mid mh */ - if (rdev->flags & RADEON_IS_MOBILITY) { - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1); - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1); - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1; - } else { - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0; - rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1; - } + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0; + rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1; /* high mh */ - rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); - rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = - r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); + idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1); + rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = idx; + rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = idx; rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0; rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 2; } @@ -763,13 +709,14 @@ void r600_hpd_init(struct radeon_device *rdev) struct drm_device *dev = rdev->ddev; struct drm_connector *connector; - if (ASIC_IS_DCE3(rdev)) { - u32 tmp = DC_HPDx_CONNECTION_TIMER(0x9c4) | DC_HPDx_RX_INT_TIMER(0xfa); - if (ASIC_IS_DCE32(rdev)) - tmp |= DC_HPDx_EN; + list_for_each_entry(connector, &dev->mode_config.connector_list, head) { + struct radeon_connector *radeon_connector = to_radeon_connector(connector); + + if (ASIC_IS_DCE3(rdev)) { + u32 tmp = DC_HPDx_CONNECTION_TIMER(0x9c4) | DC_HPDx_RX_INT_TIMER(0xfa); + if 
(ASIC_IS_DCE32(rdev)) + tmp |= DC_HPDx_EN; - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); switch (radeon_connector->hpd.hpd) { case RADEON_HPD_1: WREG32(DC_HPD1_CONTROL, tmp); @@ -799,10 +746,7 @@ void r600_hpd_init(struct radeon_device *rdev) default: break; } - } - } else { - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); + } else { switch (radeon_connector->hpd.hpd) { case RADEON_HPD_1: WREG32(DC_HOT_PLUG_DETECT1_CONTROL, DC_HOT_PLUG_DETECTx_EN); @@ -820,6 +764,7 @@ void r600_hpd_init(struct radeon_device *rdev) break; } } + radeon_hpd_set_polarity(rdev, radeon_connector->hpd.hpd); } if (rdev->irq.installed) r600_irq_set(rdev); @@ -897,7 +842,7 @@ void r600_pcie_gart_tlb_flush(struct radeon_device *rdev) /* flush hdp cache so updates hit vram */ if ((rdev->family >= CHIP_RV770) && (rdev->family <= CHIP_RV740) && !(rdev->flags & RADEON_IS_AGP)) { - void __iomem *ptr = (void *)rdev->gart.table.vram.ptr; + void __iomem *ptr = (void *)rdev->gart.ptr; u32 tmp; /* r7xx hw bug. write to HDP_DEBUG1 followed by fb read @@ -932,7 +877,7 @@ int r600_pcie_gart_init(struct radeon_device *rdev) { int r; - if (rdev->gart.table.vram.robj) { + if (rdev->gart.robj) { WARN(1, "R600 PCIE GART already initialized\n"); return 0; } @@ -949,7 +894,7 @@ int r600_pcie_gart_enable(struct radeon_device *rdev) u32 tmp; int r, i; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { dev_err(rdev->dev, "No VRAM object for PCIE GART.\n"); return -EINVAL; } @@ -1004,7 +949,7 @@ int r600_pcie_gart_enable(struct radeon_device *rdev) void r600_pcie_gart_disable(struct radeon_device *rdev) { u32 tmp; - int i, r; + int i; /* Disable all tables */ for (i = 0; i < 7; i++) @@ -1031,14 +976,7 @@ void r600_pcie_gart_disable(struct radeon_device *rdev) WREG32(MC_VM_L1_TLB_MCB_WR_SYS_CNTL, tmp); WREG32(MC_VM_L1_TLB_MCB_RD_HDP_CNTL, tmp); WREG32(MC_VM_L1_TLB_MCB_WR_HDP_CNTL, tmp); - if (rdev->gart.table.vram.robj) { - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); - if (likely(r == 0)) { - radeon_bo_kunmap(rdev->gart.table.vram.robj); - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); - } - } + radeon_gart_table_vram_unpin(rdev); } void r600_pcie_gart_fini(struct radeon_device *rdev) @@ -1138,7 +1076,7 @@ static void r600_mc_program(struct radeon_device *rdev) WREG32(MC_VM_SYSTEM_APERTURE_LOW_ADDR, rdev->mc.vram_start >> 12); WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR, rdev->mc.vram_end >> 12); } - WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, 0); + WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, rdev->vram_scratch.gpu_addr >> 12); tmp = ((rdev->mc.vram_end >> 24) & 0xFFFF) << 16; tmp |= ((rdev->mc.vram_start >> 24) & 0xFFFF); WREG32(MC_VM_FB_LOCATION, tmp); @@ -1277,6 +1215,53 @@ int r600_mc_init(struct radeon_device *rdev) return 0; } +int r600_vram_scratch_init(struct radeon_device *rdev) +{ + int r; + + if (rdev->vram_scratch.robj == NULL) { + r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE, + PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, + &rdev->vram_scratch.robj); + if (r) { + return r; + } + } + + r = radeon_bo_reserve(rdev->vram_scratch.robj, false); + if (unlikely(r != 0)) + return r; + r = radeon_bo_pin(rdev->vram_scratch.robj, + RADEON_GEM_DOMAIN_VRAM, &rdev->vram_scratch.gpu_addr); + if (r) { + radeon_bo_unreserve(rdev->vram_scratch.robj); + return 
r; + } + r = radeon_bo_kmap(rdev->vram_scratch.robj, + (void **)&rdev->vram_scratch.ptr); + if (r) + radeon_bo_unpin(rdev->vram_scratch.robj); + radeon_bo_unreserve(rdev->vram_scratch.robj); + + return r; +} + +void r600_vram_scratch_fini(struct radeon_device *rdev) +{ + int r; + + if (rdev->vram_scratch.robj == NULL) { + return; + } + r = radeon_bo_reserve(rdev->vram_scratch.robj, false); + if (likely(r == 0)) { + radeon_bo_kunmap(rdev->vram_scratch.robj); + radeon_bo_unpin(rdev->vram_scratch.robj); + radeon_bo_unreserve(rdev->vram_scratch.robj); + } + radeon_bo_unref(&rdev->vram_scratch.robj); +} + /* We don't check that the GPU really needs a reset; we simply do the * reset. It's up to the caller to determine if the GPU needs one. We * might add a helper function to check that. @@ -2332,6 +2317,14 @@ void r600_fence_ring_emit(struct radeon_device *rdev, if (rdev->wb.use_event) { u64 addr = rdev->wb.gpu_addr + R600_WB_EVENT_OFFSET + (u64)(rdev->fence_drv.scratch_reg - rdev->scratch.reg_base); + /* flush read cache over gart */ + radeon_ring_write(rdev, PACKET3(PACKET3_SURFACE_SYNC, 3)); + radeon_ring_write(rdev, PACKET3_TC_ACTION_ENA | + PACKET3_VC_ACTION_ENA | + PACKET3_SH_ACTION_ENA); + radeon_ring_write(rdev, 0xFFFFFFFF); + radeon_ring_write(rdev, 0); + radeon_ring_write(rdev, 10); /* poll interval */ /* EVENT_WRITE_EOP - flush caches, send int */ radeon_ring_write(rdev, PACKET3(PACKET3_EVENT_WRITE_EOP, 4)); radeon_ring_write(rdev, EVENT_TYPE(CACHE_FLUSH_AND_INV_EVENT_TS) | EVENT_INDEX(5)); @@ -2340,6 +2333,14 @@ void r600_fence_ring_emit(struct radeon_device *rdev, radeon_ring_write(rdev, fence->seq); radeon_ring_write(rdev, 0); } else { + /* flush read cache over gart */ + radeon_ring_write(rdev, PACKET3(PACKET3_SURFACE_SYNC, 3)); + radeon_ring_write(rdev, PACKET3_TC_ACTION_ENA | + PACKET3_VC_ACTION_ENA | + PACKET3_SH_ACTION_ENA); + radeon_ring_write(rdev, 0xFFFFFFFF); + radeon_ring_write(rdev, 0); + radeon_ring_write(rdev, 10); /* poll interval */ radeon_ring_write(rdev, PACKET3(PACKET3_EVENT_WRITE, 0)); radeon_ring_write(rdev, EVENT_TYPE(CACHE_FLUSH_AND_INV_EVENT) | EVENT_INDEX(0)); /* wait for 3D idle clean */ @@ -2421,6 +2422,10 @@ int r600_startup(struct radeon_device *rdev) } } + r = r600_vram_scratch_init(rdev); + if (r) + return r; + r600_mc_program(rdev); if (rdev->flags & RADEON_IS_AGP) { r600_agp_enable(rdev); @@ -2641,6 +2646,7 @@ void r600_fini(struct radeon_device *rdev) radeon_ib_pool_fini(rdev); radeon_irq_kms_fini(rdev); r600_pcie_gart_fini(rdev); + r600_vram_scratch_fini(rdev); radeon_agp_fini(rdev); radeon_gem_fini(rdev); radeon_fence_driver_fini(rdev); diff --git a/drivers/gpu/drm/radeon/r600_blit_kms.c b/drivers/gpu/drm/radeon/r600_blit_kms.c index c4cf1308d4a1..e09d2818f949 100644 --- a/drivers/gpu/drm/radeon/r600_blit_kms.c +++ b/drivers/gpu/drm/radeon/r600_blit_kms.c @@ -201,7 +201,7 @@ set_vtx_resource(struct radeon_device *rdev, u64 gpu_addr) static void set_tex_resource(struct radeon_device *rdev, int format, int w, int h, int pitch, - u64 gpu_addr) + u64 gpu_addr, u32 size) { uint32_t sq_tex_resource_word0, sq_tex_resource_word1, sq_tex_resource_word4; @@ -222,6 +222,9 @@ set_tex_resource(struct radeon_device *rdev, S_038010_DST_SEL_Z(SQ_SEL_Z) | S_038010_DST_SEL_W(SQ_SEL_W); + cp_set_surface_sync(rdev, + PACKET3_TC_ACTION_ENA, size, gpu_addr); + radeon_ring_write(rdev, PACKET3(PACKET3_SET_RESOURCE, 7)); radeon_ring_write(rdev, 0); radeon_ring_write(rdev, sq_tex_resource_word0); @@ -500,9 +503,9 @@ int r600_blit_init(struct radeon_device *rdev) 
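The VRAM scratch object added in these hunks gives the memory controller a real backing page: MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR is now programmed with rdev->vram_scratch.gpu_addr >> 12 instead of 0, and each ASIC's startup path allocates the page before programming the MC, with a matching fini in teardown. A minimal ordering sketch using the r600 helpers shown above; the surrounding example functions are illustrative, not driver code:

/* Illustrative only: the init/teardown ordering added by the hunks above. */
static int example_startup(struct radeon_device *rdev)
{
	int r;

	r = r600_vram_scratch_init(rdev);   /* create, pin and kmap the scratch BO */
	if (r)
		return r;

	r600_mc_program(rdev);              /* default aperture now points at the scratch page */
	return 0;
}

static void example_teardown(struct radeon_device *rdev)
{
	r600_pcie_gart_fini(rdev);
	r600_vram_scratch_fini(rdev);       /* unmap, unpin and unref the scratch BO */
}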
rdev->r600_blit.primitives.set_default_state = set_default_state; rdev->r600_blit.ring_size_common = 40; /* shaders + def state */ - rdev->r600_blit.ring_size_common += 10; /* fence emit for VB IB */ + rdev->r600_blit.ring_size_common += 16; /* fence emit for VB IB */ rdev->r600_blit.ring_size_common += 5; /* done copy */ - rdev->r600_blit.ring_size_common += 10; /* fence emit for done copy */ + rdev->r600_blit.ring_size_common += 16; /* fence emit for done copy */ rdev->r600_blit.ring_size_per_loop = 76; /* set_render_target emits 2 extra dwords on rv6xx */ @@ -760,10 +763,7 @@ void r600_kms_blit_copy(struct radeon_device *rdev, vb[11] = i2f(h); rdev->r600_blit.primitives.set_tex_resource(rdev, FMT_8_8_8_8, - w, h, w, src_gpu_addr); - rdev->r600_blit.primitives.cp_set_surface_sync(rdev, - PACKET3_TC_ACTION_ENA, - size_in_bytes, src_gpu_addr); + w, h, w, src_gpu_addr, size_in_bytes); rdev->r600_blit.primitives.set_render_target(rdev, COLOR_8_8_8_8, w, h, dst_gpu_addr); rdev->r600_blit.primitives.set_scissors(rdev, 0, 0, w, h); diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index e3170c794c1d..fc5a1d642cb5 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -93,6 +93,7 @@ extern int radeon_audio; extern int radeon_disp_priority; extern int radeon_hw_i2c; extern int radeon_pcie_gen2; +extern int radeon_msi; /* * Copy from radeon_drv.h so we don't have to include both and have conflicting @@ -306,30 +307,17 @@ int radeon_mode_dumb_destroy(struct drm_file *file_priv, */ struct radeon_mc; -struct radeon_gart_table_ram { - volatile uint32_t *ptr; -}; - -struct radeon_gart_table_vram { - struct radeon_bo *robj; - volatile uint32_t *ptr; -}; - -union radeon_gart_table { - struct radeon_gart_table_ram ram; - struct radeon_gart_table_vram vram; -}; - #define RADEON_GPU_PAGE_SIZE 4096 #define RADEON_GPU_PAGE_MASK (RADEON_GPU_PAGE_SIZE - 1) #define RADEON_GPU_PAGE_SHIFT 12 struct radeon_gart { dma_addr_t table_addr; + struct radeon_bo *robj; + void *ptr; unsigned num_gpu_pages; unsigned num_cpu_pages; unsigned table_size; - union radeon_gart_table table; struct page **pages; dma_addr_t *pages_addr; bool *ttm_alloced; @@ -340,6 +328,8 @@ int radeon_gart_table_ram_alloc(struct radeon_device *rdev); void radeon_gart_table_ram_free(struct radeon_device *rdev); int radeon_gart_table_vram_alloc(struct radeon_device *rdev); void radeon_gart_table_vram_free(struct radeon_device *rdev); +int radeon_gart_table_vram_pin(struct radeon_device *rdev); +void radeon_gart_table_vram_unpin(struct radeon_device *rdev); int radeon_gart_init(struct radeon_device *rdev); void radeon_gart_fini(struct radeon_device *rdev); void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset, @@ -347,6 +337,7 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset, int radeon_gart_bind(struct radeon_device *rdev, unsigned offset, int pages, struct page **pagelist, dma_addr_t *dma_addr); +void radeon_gart_restore(struct radeon_device *rdev); /* @@ -437,25 +428,26 @@ union radeon_irq_stat_regs { struct evergreen_irq_stat_regs evergreen; }; +#define RADEON_MAX_HPD_PINS 6 +#define RADEON_MAX_CRTCS 6 +#define RADEON_MAX_HDMI_BLOCKS 2 + struct radeon_irq { bool installed; bool sw_int; - /* FIXME: use a define max crtc rather than hardcode it */ - bool crtc_vblank_int[6]; - bool pflip[6]; + bool crtc_vblank_int[RADEON_MAX_CRTCS]; + bool pflip[RADEON_MAX_CRTCS]; wait_queue_head_t vblank_queue; - /* FIXME: use defines for max hpd/dacs */ - bool hpd[6]; + 
bool hpd[RADEON_MAX_HPD_PINS]; bool gui_idle; bool gui_idle_acked; wait_queue_head_t idle_queue; - /* FIXME: use defines for max HDMI blocks */ - bool hdmi[2]; + bool hdmi[RADEON_MAX_HDMI_BLOCKS]; spinlock_t sw_lock; int sw_refcount; union radeon_irq_stat_regs stat_regs; - spinlock_t pflip_lock[6]; - int pflip_refcount[6]; + spinlock_t pflip_lock[RADEON_MAX_CRTCS]; + int pflip_refcount[RADEON_MAX_CRTCS]; }; int radeon_irq_kms_init(struct radeon_device *rdev); @@ -533,7 +525,7 @@ struct r600_blit_cp_primitives { void (*set_vtx_resource)(struct radeon_device *rdev, u64 gpu_addr); void (*set_tex_resource)(struct radeon_device *rdev, int format, int w, int h, int pitch, - u64 gpu_addr); + u64 gpu_addr, u32 size); void (*set_scissors)(struct radeon_device *rdev, int x1, int y1, int x2, int y2); void (*draw_auto)(struct radeon_device *rdev); @@ -792,8 +784,7 @@ struct radeon_pm_clock_info { struct radeon_power_state { enum radeon_pm_state_type type; - /* XXX: use a define for num clock modes */ - struct radeon_pm_clock_info clock_info[8]; + struct radeon_pm_clock_info *clock_info; /* number of valid clock modes in this power state */ int num_clock_modes; struct radeon_pm_clock_info *default_clock_mode; @@ -863,6 +854,9 @@ struct radeon_pm { struct device *int_hwmon_dev; }; +int radeon_pm_get_type_index(struct radeon_device *rdev, + enum radeon_pm_state_type ps_type, + int instance); /* * Benchmarking @@ -1143,12 +1137,55 @@ int radeon_gem_set_tiling_ioctl(struct drm_device *dev, void *data, int radeon_gem_get_tiling_ioctl(struct drm_device *dev, void *data, struct drm_file *filp); -/* VRAM scratch page for HDP bug */ -struct r700_vram_scratch { +/* VRAM scratch page for HDP bug, default vram page */ +struct r600_vram_scratch { struct radeon_bo *robj; volatile uint32_t *ptr; + u64 gpu_addr; }; + +/* + * Mutex which allows recursive locking from the same process. + */ +struct radeon_mutex { + struct mutex mutex; + struct task_struct *owner; + int level; +}; + +static inline void radeon_mutex_init(struct radeon_mutex *mutex) +{ + mutex_init(&mutex->mutex); + mutex->owner = NULL; + mutex->level = 0; +} + +static inline void radeon_mutex_lock(struct radeon_mutex *mutex) +{ + if (mutex_trylock(&mutex->mutex)) { + /* The mutex was unlocked before, so it's ours now */ + mutex->owner = current; + } else if (mutex->owner != current) { + /* Another process locked the mutex, take it */ + mutex_lock(&mutex->mutex); + mutex->owner = current; + } + /* Otherwise the mutex was already locked by this process */ + + mutex->level++; +} + +static inline void radeon_mutex_unlock(struct radeon_mutex *mutex) +{ + if (--mutex->level > 0) + return; + + mutex->owner = NULL; + mutex_unlock(&mutex->mutex); +} + + /* * Core structure, functions and helpers. 
*/ @@ -1204,7 +1241,7 @@ struct radeon_device { struct radeon_gem gem; struct radeon_pm pm; uint32_t bios_scratch[RADEON_BIOS_NUM_SCRATCH]; - struct mutex cs_mutex; + struct radeon_mutex cs_mutex; struct radeon_wb wb; struct radeon_dummy_page dummy_page; bool gpu_lockup; @@ -1218,7 +1255,7 @@ struct radeon_device { const struct firmware *rlc_fw; /* r6/700 RLC firmware */ const struct firmware *mc_fw; /* NI MC firmware */ struct r600_blit r600_blit; - struct r700_vram_scratch vram_scratch; + struct r600_vram_scratch vram_scratch; int msi_enabled; /* msi enabled */ struct r600_ih ih; /* r6/700 interrupt ring */ struct work_struct hotplug_work; @@ -1442,8 +1479,6 @@ void radeon_ring_write(struct radeon_device *rdev, uint32_t v); /* AGP */ extern int radeon_gpu_reset(struct radeon_device *rdev); extern void radeon_agp_disable(struct radeon_device *rdev); -extern int radeon_gart_table_vram_pin(struct radeon_device *rdev); -extern void radeon_gart_restore(struct radeon_device *rdev); extern int radeon_modeset_init(struct radeon_device *rdev); extern void radeon_modeset_fini(struct radeon_device *rdev); extern bool radeon_card_posted(struct radeon_device *rdev); @@ -1467,6 +1502,12 @@ extern int radeon_suspend_kms(struct drm_device *dev, pm_message_t state); extern void radeon_ttm_set_active_vram_size(struct radeon_device *rdev, u64 size); /* + * R600 vram scratch functions + */ +int r600_vram_scratch_init(struct radeon_device *rdev); +void r600_vram_scratch_fini(struct radeon_device *rdev); + +/* * r600 functions used by radeon_encoder.c */ extern void r600_hdmi_enable(struct drm_encoder *encoder); diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c index e2944566ffea..a2e1eae114ef 100644 --- a/drivers/gpu/drm/radeon/radeon_asic.c +++ b/drivers/gpu/drm/radeon/radeon_asic.c @@ -834,7 +834,7 @@ static struct radeon_asic sumo_asic = { .pm_misc = &evergreen_pm_misc, .pm_prepare = &evergreen_pm_prepare, .pm_finish = &evergreen_pm_finish, - .pm_init_profile = &rs780_pm_init_profile, + .pm_init_profile = &sumo_pm_init_profile, .pm_get_dynpm_state = &r600_pm_get_dynpm_state, .pre_page_flip = &evergreen_pre_page_flip, .page_flip = &evergreen_page_flip, diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h index 85f14f0337e4..59914842a729 100644 --- a/drivers/gpu/drm/radeon/radeon_asic.h +++ b/drivers/gpu/drm/radeon/radeon_asic.h @@ -413,6 +413,7 @@ extern int evergreen_cs_parse(struct radeon_cs_parser *p); extern void evergreen_pm_misc(struct radeon_device *rdev); extern void evergreen_pm_prepare(struct radeon_device *rdev); extern void evergreen_pm_finish(struct radeon_device *rdev); +extern void sumo_pm_init_profile(struct radeon_device *rdev); extern void evergreen_pre_page_flip(struct radeon_device *rdev, int crtc); extern u32 evergreen_page_flip(struct radeon_device *rdev, int crtc, u64 crtc_base); extern void evergreen_post_page_flip(struct radeon_device *rdev, int crtc); diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c index 08d0b94332e6..d2d179267af3 100644 --- a/drivers/gpu/drm/radeon/radeon_atombios.c +++ b/drivers/gpu/drm/radeon/radeon_atombios.c @@ -1999,6 +1999,10 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev) rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE; switch (frev) { case 1: + rdev->pm.power_state[state_index].clock_info = + kzalloc(sizeof(struct radeon_pm_clock_info) * 1, GFP_KERNEL); + if 
(!rdev->pm.power_state[state_index].clock_info) + return state_index; rdev->pm.power_state[state_index].num_clock_modes = 1; rdev->pm.power_state[state_index].clock_info[0].mclk = le16_to_cpu(power_info->info.asPowerPlayInfo[i].usMemoryClock); @@ -2035,6 +2039,10 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev) state_index++; break; case 2: + rdev->pm.power_state[state_index].clock_info = + kzalloc(sizeof(struct radeon_pm_clock_info) * 1, GFP_KERNEL); + if (!rdev->pm.power_state[state_index].clock_info) + return state_index; rdev->pm.power_state[state_index].num_clock_modes = 1; rdev->pm.power_state[state_index].clock_info[0].mclk = le32_to_cpu(power_info->info_2.asPowerPlayInfo[i].ulMemoryClock); @@ -2072,6 +2080,10 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev) state_index++; break; case 3: + rdev->pm.power_state[state_index].clock_info = + kzalloc(sizeof(struct radeon_pm_clock_info) * 1, GFP_KERNEL); + if (!rdev->pm.power_state[state_index].clock_info) + return state_index; rdev->pm.power_state[state_index].num_clock_modes = 1; rdev->pm.power_state[state_index].clock_info[0].mclk = le32_to_cpu(power_info->info_3.asPowerPlayInfo[i].ulMemoryClock); @@ -2257,7 +2269,7 @@ static void radeon_atombios_parse_pplib_non_clock_info(struct radeon_device *rde rdev->pm.default_power_state_index = state_index; rdev->pm.power_state[state_index].default_clock_mode = &rdev->pm.power_state[state_index].clock_info[mode_index - 1]; - if (ASIC_IS_DCE5(rdev)) { + if (ASIC_IS_DCE5(rdev) && !(rdev->flags & RADEON_IS_IGP)) { /* NI chips post without MC ucode, so default clocks are strobe mode only */ rdev->pm.default_sclk = rdev->pm.power_state[state_index].clock_info[0].sclk; rdev->pm.default_mclk = rdev->pm.power_state[state_index].clock_info[0].mclk; @@ -2377,17 +2389,31 @@ static int radeon_atombios_parse_power_table_4_5(struct radeon_device *rdev) le16_to_cpu(power_info->pplib.usNonClockInfoArrayOffset) + (power_state->v1.ucNonClockStateIndex * power_info->pplib.ucNonClockSize)); - for (j = 0; j < (power_info->pplib.ucStateEntrySize - 1); j++) { - clock_info = (union pplib_clock_info *) - (mode_info->atom_context->bios + data_offset + - le16_to_cpu(power_info->pplib.usClockInfoArrayOffset) + - (power_state->v1.ucClockStateIndices[j] * - power_info->pplib.ucClockInfoSize)); - valid = radeon_atombios_parse_pplib_clock_info(rdev, - state_index, mode_index, - clock_info); - if (valid) - mode_index++; + rdev->pm.power_state[i].clock_info = kzalloc(sizeof(struct radeon_pm_clock_info) * + ((power_info->pplib.ucStateEntrySize - 1) ? 
+ (power_info->pplib.ucStateEntrySize - 1) : 1), + GFP_KERNEL); + if (!rdev->pm.power_state[i].clock_info) + return state_index; + if (power_info->pplib.ucStateEntrySize - 1) { + for (j = 0; j < (power_info->pplib.ucStateEntrySize - 1); j++) { + clock_info = (union pplib_clock_info *) + (mode_info->atom_context->bios + data_offset + + le16_to_cpu(power_info->pplib.usClockInfoArrayOffset) + + (power_state->v1.ucClockStateIndices[j] * + power_info->pplib.ucClockInfoSize)); + valid = radeon_atombios_parse_pplib_clock_info(rdev, + state_index, mode_index, + clock_info); + if (valid) + mode_index++; + } + } else { + rdev->pm.power_state[state_index].clock_info[0].mclk = + rdev->clock.default_mclk; + rdev->pm.power_state[state_index].clock_info[0].sclk = + rdev->clock.default_sclk; + mode_index++; } rdev->pm.power_state[state_index].num_clock_modes = mode_index; if (mode_index) { @@ -2456,18 +2482,32 @@ static int radeon_atombios_parse_power_table_6(struct radeon_device *rdev) non_clock_array_index = i; /* power_state->v2.nonClockInfoIndex */ non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *) &non_clock_info_array->nonClockInfo[non_clock_array_index]; - for (j = 0; j < power_state->v2.ucNumDPMLevels; j++) { - clock_array_index = power_state->v2.clockInfoIndex[j]; - /* XXX this might be an inagua bug... */ - if (clock_array_index >= clock_info_array->ucNumEntries) - continue; - clock_info = (union pplib_clock_info *) - &clock_info_array->clockInfo[clock_array_index]; - valid = radeon_atombios_parse_pplib_clock_info(rdev, - state_index, mode_index, - clock_info); - if (valid) - mode_index++; + rdev->pm.power_state[i].clock_info = kzalloc(sizeof(struct radeon_pm_clock_info) * + (power_state->v2.ucNumDPMLevels ? + power_state->v2.ucNumDPMLevels : 1), + GFP_KERNEL); + if (!rdev->pm.power_state[i].clock_info) + return state_index; + if (power_state->v2.ucNumDPMLevels) { + for (j = 0; j < power_state->v2.ucNumDPMLevels; j++) { + clock_array_index = power_state->v2.clockInfoIndex[j]; + /* XXX this might be an inagua bug... 
*/ + if (clock_array_index >= clock_info_array->ucNumEntries) + continue; + clock_info = (union pplib_clock_info *) + &clock_info_array->clockInfo[clock_array_index]; + valid = radeon_atombios_parse_pplib_clock_info(rdev, + state_index, mode_index, + clock_info); + if (valid) + mode_index++; + } + } else { + rdev->pm.power_state[state_index].clock_info[0].mclk = + rdev->clock.default_mclk; + rdev->pm.power_state[state_index].clock_info[0].sclk = + rdev->clock.default_sclk; + mode_index++; } rdev->pm.power_state[state_index].num_clock_modes = mode_index; if (mode_index) { @@ -2524,19 +2564,23 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev) } else { rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state), GFP_KERNEL); if (rdev->pm.power_state) { - /* add the default mode */ - rdev->pm.power_state[state_index].type = - POWER_STATE_TYPE_DEFAULT; - rdev->pm.power_state[state_index].num_clock_modes = 1; - rdev->pm.power_state[state_index].clock_info[0].mclk = rdev->clock.default_mclk; - rdev->pm.power_state[state_index].clock_info[0].sclk = rdev->clock.default_sclk; - rdev->pm.power_state[state_index].default_clock_mode = - &rdev->pm.power_state[state_index].clock_info[0]; - rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE; - rdev->pm.power_state[state_index].pcie_lanes = 16; - rdev->pm.default_power_state_index = state_index; - rdev->pm.power_state[state_index].flags = 0; - state_index++; + rdev->pm.power_state[0].clock_info = + kzalloc(sizeof(struct radeon_pm_clock_info) * 1, GFP_KERNEL); + if (rdev->pm.power_state[0].clock_info) { + /* add the default mode */ + rdev->pm.power_state[state_index].type = + POWER_STATE_TYPE_DEFAULT; + rdev->pm.power_state[state_index].num_clock_modes = 1; + rdev->pm.power_state[state_index].clock_info[0].mclk = rdev->clock.default_mclk; + rdev->pm.power_state[state_index].clock_info[0].sclk = rdev->clock.default_sclk; + rdev->pm.power_state[state_index].default_clock_mode = + &rdev->pm.power_state[state_index].clock_info[0]; + rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE; + rdev->pm.power_state[state_index].pcie_lanes = 16; + rdev->pm.default_power_state_index = state_index; + rdev->pm.power_state[state_index].flags = 0; + state_index++; + } } } diff --git a/drivers/gpu/drm/radeon/radeon_benchmark.c b/drivers/gpu/drm/radeon/radeon_benchmark.c index 5cafc90de7f8..17e1a9b2d8fb 100644 --- a/drivers/gpu/drm/radeon/radeon_benchmark.c +++ b/drivers/gpu/drm/radeon/radeon_benchmark.c @@ -98,7 +98,7 @@ static void radeon_benchmark_move(struct radeon_device *rdev, unsigned size, struct radeon_bo *sobj = NULL; uint64_t saddr, daddr; int r, n; - unsigned int time; + int time; n = RADEON_BENCHMARK_ITERATIONS; r = radeon_bo_create(rdev, size, PAGE_SIZE, true, sdomain, &sobj); diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c index 8bf83c4b4147..81fc100be7e1 100644 --- a/drivers/gpu/drm/radeon/radeon_combios.c +++ b/drivers/gpu/drm/radeon/radeon_combios.c @@ -2563,14 +2563,17 @@ void radeon_combios_get_power_modes(struct radeon_device *rdev) /* allocate 2 power states */ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) * 2, GFP_KERNEL); - if (!rdev->pm.power_state) { - rdev->pm.default_power_state_index = state_index; - rdev->pm.num_power_states = 0; - - rdev->pm.current_power_state_index = rdev->pm.default_power_state_index; - rdev->pm.current_clock_mode_index = 0; - return; - } + if (rdev->pm.power_state) { + /* allocate 1 clock 
mode per state */ + rdev->pm.power_state[0].clock_info = + kzalloc(sizeof(struct radeon_pm_clock_info) * 1, GFP_KERNEL); + rdev->pm.power_state[1].clock_info = + kzalloc(sizeof(struct radeon_pm_clock_info) * 1, GFP_KERNEL); + if (!rdev->pm.power_state[0].clock_info || + !rdev->pm.power_state[1].clock_info) + goto pm_failed; + } else + goto pm_failed; /* check for a thermal chip */ offset = combios_get_table_offset(dev, COMBIOS_OVERDRIVE_INFO_TABLE); @@ -2735,6 +2738,14 @@ default_mode: rdev->pm.current_power_state_index = rdev->pm.default_power_state_index; rdev->pm.current_clock_mode_index = 0; + return; + +pm_failed: + rdev->pm.default_power_state_index = state_index; + rdev->pm.num_power_states = 0; + + rdev->pm.current_power_state_index = rdev->pm.default_power_state_index; + rdev->pm.current_clock_mode_index = 0; } void radeon_external_tmds_setup(struct drm_encoder *encoder) diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c index dec6cbe6a0a6..e7cb3ab09243 100644 --- a/drivers/gpu/drm/radeon/radeon_connectors.c +++ b/drivers/gpu/drm/radeon/radeon_connectors.c @@ -44,8 +44,6 @@ extern void radeon_legacy_backlight_init(struct radeon_encoder *radeon_encoder, struct drm_connector *drm_connector); -bool radeon_connector_encoder_is_dp_bridge(struct drm_connector *connector); - void radeon_connector_hotplug(struct drm_connector *connector) { struct drm_device *dev = connector->dev; @@ -432,55 +430,6 @@ int radeon_connector_set_property(struct drm_connector *connector, struct drm_pr return 0; } -/* - * Some integrated ATI Radeon chipset implementations (e. g. - * Asus M2A-VM HDMI) may indicate the availability of a DDC, - * even when there's no monitor connected. For these connectors - * following DDC probe extension will be applied: check also for the - * availability of EDID with at least a correct EDID header. Only then, - * DDC is assumed to be available. This prevents drm_get_edid() and - * drm_edid_block_valid() from periodically dumping data and kernel - * errors into the logs and onto the terminal. - */ -static bool radeon_connector_needs_extended_probe(struct radeon_device *dev, - uint32_t supported_device, - int connector_type) -{ - /* Asus M2A-VM HDMI board sends data to i2c bus even, - * if HDMI add-on card is not plugged in or HDMI is disabled in - * BIOS. 
Valid DDC can only be assumed, if also a valid EDID header - * can be retrieved via i2c bus during DDC probe */ - if ((dev->pdev->device == 0x791e) && - (dev->pdev->subsystem_vendor == 0x1043) && - (dev->pdev->subsystem_device == 0x826d)) { - if ((connector_type == DRM_MODE_CONNECTOR_HDMIA) && - (supported_device == ATOM_DEVICE_DFP2_SUPPORT)) - return true; - } - /* ECS A740GM-M with ATI RADEON 2100 sends data to i2c bus - * for a DVI connector that is not implemented */ - if ((dev->pdev->device == 0x796e) && - (dev->pdev->subsystem_vendor == 0x1019) && - (dev->pdev->subsystem_device == 0x2615)) { - if ((connector_type == DRM_MODE_CONNECTOR_DVID) && - (supported_device == ATOM_DEVICE_DFP2_SUPPORT)) - return true; - } - /* TOSHIBA Satellite L300D with ATI Mobility Radeon x1100 - * (RS690M) sends data to i2c bus for a HDMI connector that - * is not implemented */ - if ((dev->pdev->device == 0x791f) && - (dev->pdev->subsystem_vendor == 0x1179) && - (dev->pdev->subsystem_device == 0xff68)) { - if ((connector_type == DRM_MODE_CONNECTOR_HDMIA) && - (supported_device == ATOM_DEVICE_DFP2_SUPPORT)) - return true; - } - - /* Default: no EDID header probe required for DDC probing */ - return false; -} - static void radeon_fixup_lvds_native_mode(struct drm_encoder *encoder, struct drm_connector *connector) { @@ -721,8 +670,7 @@ radeon_vga_detect(struct drm_connector *connector, bool force) ret = connector_status_disconnected; if (radeon_connector->ddc_bus) - dret = radeon_ddc_probe(radeon_connector, - radeon_connector->requires_extended_probe); + dret = radeon_ddc_probe(radeon_connector); if (dret) { radeon_connector->detected_by_load = false; if (radeon_connector->edid) { @@ -764,7 +712,7 @@ radeon_vga_detect(struct drm_connector *connector, bool force) if (radeon_connector->dac_load_detect && encoder) { encoder_funcs = encoder->helper_private; ret = encoder_funcs->detect(encoder, connector); - if (ret == connector_status_connected) + if (ret != connector_status_disconnected) radeon_connector->detected_by_load = true; } } @@ -904,8 +852,7 @@ radeon_dvi_detect(struct drm_connector *connector, bool force) bool dret = false; if (radeon_connector->ddc_bus) - dret = radeon_ddc_probe(radeon_connector, - radeon_connector->requires_extended_probe); + dret = radeon_ddc_probe(radeon_connector); if (dret) { radeon_connector->detected_by_load = false; if (radeon_connector->edid) { @@ -1005,8 +952,9 @@ radeon_dvi_detect(struct drm_connector *connector, bool force) ret = encoder_funcs->detect(encoder, connector); if (ret == connector_status_connected) { radeon_connector->use_digital = false; - radeon_connector->detected_by_load = true; } + if (ret != connector_status_disconnected) + radeon_connector->detected_by_load = true; } break; } @@ -1203,7 +1151,8 @@ static int radeon_dp_get_modes(struct drm_connector *connector) } } else { /* need to setup ddc on the bridge */ - if (radeon_connector_encoder_is_dp_bridge(connector)) { + if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) != + ENCODER_OBJECT_ID_NONE) { if (encoder) radeon_atom_ext_encoder_setup_ddc(encoder); } @@ -1213,13 +1162,12 @@ static int radeon_dp_get_modes(struct drm_connector *connector) return ret; } -bool radeon_connector_encoder_is_dp_bridge(struct drm_connector *connector) +u16 radeon_connector_encoder_get_dp_bridge_encoder_id(struct drm_connector *connector) { struct drm_mode_object *obj; struct drm_encoder *encoder; struct radeon_encoder *radeon_encoder; int i; - bool found = false; for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) 
{ if (connector->encoder_ids[i] == 0) @@ -1235,14 +1183,13 @@ bool radeon_connector_encoder_is_dp_bridge(struct drm_connector *connector) switch (radeon_encoder->encoder_id) { case ENCODER_OBJECT_ID_TRAVIS: case ENCODER_OBJECT_ID_NUTMEG: - found = true; - break; + return radeon_encoder->encoder_id; default: break; } } - return found; + return ENCODER_OBJECT_ID_NONE; } bool radeon_connector_encoder_is_hbr2(struct drm_connector *connector) @@ -1319,7 +1266,8 @@ radeon_dp_detect(struct drm_connector *connector, bool force) if (!radeon_dig_connector->edp_on) atombios_set_edp_panel_power(connector, ATOM_TRANSMITTER_ACTION_POWER_OFF); - } else if (radeon_connector_encoder_is_dp_bridge(connector)) { + } else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) != + ENCODER_OBJECT_ID_NONE) { /* DP bridges are always DP */ radeon_dig_connector->dp_sink_type = CONNECTOR_OBJECT_ID_DISPLAYPORT; /* get the DPCD from the bridge */ @@ -1328,8 +1276,7 @@ radeon_dp_detect(struct drm_connector *connector, bool force) if (encoder) { /* setup ddc on the bridge */ radeon_atom_ext_encoder_setup_ddc(encoder); - if (radeon_ddc_probe(radeon_connector, - radeon_connector->requires_extended_probe)) /* try DDC */ + if (radeon_ddc_probe(radeon_connector)) /* try DDC */ ret = connector_status_connected; else if (radeon_connector->dac_load_detect) { /* try load detection */ struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private; @@ -1347,8 +1294,7 @@ radeon_dp_detect(struct drm_connector *connector, bool force) if (radeon_dp_getdpcd(radeon_connector)) ret = connector_status_connected; } else { - if (radeon_ddc_probe(radeon_connector, - radeon_connector->requires_extended_probe)) + if (radeon_ddc_probe(radeon_connector)) ret = connector_status_connected; } } @@ -1493,9 +1439,7 @@ radeon_add_atom_connector(struct drm_device *dev, radeon_connector->shared_ddc = shared_ddc; radeon_connector->connector_object_id = connector_object_id; radeon_connector->hpd = *hpd; - radeon_connector->requires_extended_probe = - radeon_connector_needs_extended_probe(rdev, supported_device, - connector_type); + radeon_connector->router = *router; if (router->ddc_valid || router->cd_valid) { radeon_connector->router_bus = radeon_i2c_lookup(rdev, &router->i2c_info); @@ -1842,9 +1786,7 @@ radeon_add_legacy_connector(struct drm_device *dev, radeon_connector->devices = supported_device; radeon_connector->connector_object_id = connector_object_id; radeon_connector->hpd = *hpd; - radeon_connector->requires_extended_probe = - radeon_connector_needs_extended_probe(rdev, supported_device, - connector_type); + switch (connector_type) { case DRM_MODE_CONNECTOR_VGA: drm_connector_init(dev, &radeon_connector->base, &radeon_vga_connector_funcs, connector_type); diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c index fae00c0d75aa..ccaa243c1442 100644 --- a/drivers/gpu/drm/radeon/radeon_cs.c +++ b/drivers/gpu/drm/radeon/radeon_cs.c @@ -222,7 +222,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) struct radeon_cs_chunk *ib_chunk; int r; - mutex_lock(&rdev->cs_mutex); + radeon_mutex_lock(&rdev->cs_mutex); /* initialize parser */ memset(&parser, 0, sizeof(struct radeon_cs_parser)); parser.filp = filp; @@ -233,14 +233,14 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) if (r) { DRM_ERROR("Failed to initialize parser !\n"); radeon_cs_parser_fini(&parser, r); - mutex_unlock(&rdev->cs_mutex); + radeon_mutex_unlock(&rdev->cs_mutex); 
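The cs_mutex conversion in radeon_cs.c pairs with the radeon_device.c change below, where radeon_gpu_reset() now takes cs_mutex to keep the CS ioctl out during a reset. Because the radeon_mutex added to radeon.h earlier in this series lets the owning task re-acquire the lock, a reset entered from a path that already holds cs_mutex (presumably the motivation for the recursive lock) does not deadlock. A sketch of that interaction, assuming the inline helpers quoted above; the caller below is hypothetical, not driver code:

/* Hypothetical caller: shows same-task recursion on the new radeon_mutex. */
static void example_reset_while_holding_cs(struct radeon_device *rdev)
{
	radeon_mutex_lock(&rdev->cs_mutex);   /* level 0 -> 1, owner = current */

	/* ... CS processing decides the GPU is wedged ... */
	radeon_gpu_reset(rdev);               /* re-locks cs_mutex: owner matches,
	                                       * so only level is bumped 1 -> 2
	                                       * and dropped again on its unlock */

	radeon_mutex_unlock(&rdev->cs_mutex); /* level 1 -> 0, lock released */
}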
return r; } r = radeon_ib_get(rdev, &parser.ib); if (r) { DRM_ERROR("Failed to get ib !\n"); radeon_cs_parser_fini(&parser, r); - mutex_unlock(&rdev->cs_mutex); + radeon_mutex_unlock(&rdev->cs_mutex); return r; } r = radeon_cs_parser_relocs(&parser); @@ -248,7 +248,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) if (r != -ERESTARTSYS) DRM_ERROR("Failed to parse relocation %d!\n", r); radeon_cs_parser_fini(&parser, r); - mutex_unlock(&rdev->cs_mutex); + radeon_mutex_unlock(&rdev->cs_mutex); return r; } /* Copy the packet into the IB, the parser will read from the @@ -260,14 +260,14 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) if (r || parser.parser_error) { DRM_ERROR("Invalid command stream !\n"); radeon_cs_parser_fini(&parser, r); - mutex_unlock(&rdev->cs_mutex); + radeon_mutex_unlock(&rdev->cs_mutex); return r; } r = radeon_cs_finish_pages(&parser); if (r) { DRM_ERROR("Invalid command stream !\n"); radeon_cs_parser_fini(&parser, r); - mutex_unlock(&rdev->cs_mutex); + radeon_mutex_unlock(&rdev->cs_mutex); return r; } r = radeon_ib_schedule(rdev, parser.ib); @@ -275,7 +275,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) DRM_ERROR("Failed to schedule IB !\n"); } radeon_cs_parser_fini(&parser, r); - mutex_unlock(&rdev->cs_mutex); + radeon_mutex_unlock(&rdev->cs_mutex); return r; } diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c index c33bc914d93d..c4d00a171411 100644 --- a/drivers/gpu/drm/radeon/radeon_device.c +++ b/drivers/gpu/drm/radeon/radeon_device.c @@ -716,7 +716,7 @@ int radeon_device_init(struct radeon_device *rdev, /* mutex initialization are all done here so we * can recall function without having locking issues */ - mutex_init(&rdev->cs_mutex); + radeon_mutex_init(&rdev->cs_mutex); mutex_init(&rdev->ib_pool.mutex); mutex_init(&rdev->cp.mutex); mutex_init(&rdev->dc_hw_i2c_mutex); @@ -955,6 +955,9 @@ int radeon_gpu_reset(struct radeon_device *rdev) int r; int resched; + /* Prevent CS ioctl from interfering */ + radeon_mutex_lock(&rdev->cs_mutex); + radeon_save_bios_scratch_regs(rdev); /* block TTM */ resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev); @@ -967,10 +970,15 @@ int radeon_gpu_reset(struct radeon_device *rdev) radeon_restore_bios_scratch_regs(rdev); drm_helper_resume_force_mode(rdev->ddev); ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched); - return 0; } - /* bad news, how to tell it to userspace ? */ - dev_info(rdev->dev, "GPU reset failed\n"); + + radeon_mutex_unlock(&rdev->cs_mutex); + + if (r) { + /* bad news, how to tell it to userspace ? 
*/ + dev_info(rdev->dev, "GPU reset failed\n"); + } + return r; } diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c index 6adb3e58affd..a22d6e6a49a2 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c @@ -33,8 +33,6 @@ #include "drm_crtc_helper.h" #include "drm_edid.h" -static int radeon_ddc_dump(struct drm_connector *connector); - static void avivo_crtc_load_lut(struct drm_crtc *crtc) { struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); @@ -669,7 +667,6 @@ static void radeon_print_display_setup(struct drm_device *dev) static bool radeon_setup_enc_conn(struct drm_device *dev) { struct radeon_device *rdev = dev->dev_private; - struct drm_connector *drm_connector; bool ret = false; if (rdev->bios) { @@ -689,8 +686,6 @@ static bool radeon_setup_enc_conn(struct drm_device *dev) if (ret) { radeon_setup_encoder_clones(dev); radeon_print_display_setup(dev); - list_for_each_entry(drm_connector, &dev->mode_config.connector_list, head) - radeon_ddc_dump(drm_connector); } return ret; @@ -708,7 +703,8 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) if ((radeon_connector->base.connector_type == DRM_MODE_CONNECTOR_DisplayPort) || (radeon_connector->base.connector_type == DRM_MODE_CONNECTOR_eDP) || - radeon_connector_encoder_is_dp_bridge(&radeon_connector->base)) { + (radeon_connector_encoder_get_dp_bridge_encoder_id(&radeon_connector->base) != + ENCODER_OBJECT_ID_NONE)) { struct radeon_connector_atom_dig *dig = radeon_connector->con_priv; if ((dig->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT || @@ -743,34 +739,6 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) return 0; } -static int radeon_ddc_dump(struct drm_connector *connector) -{ - struct edid *edid; - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - int ret = 0; - - /* on hw with routers, select right port */ - if (radeon_connector->router.ddc_valid) - radeon_router_select_ddc_port(radeon_connector); - - if (!radeon_connector->ddc_bus) - return -1; - edid = drm_get_edid(connector, &radeon_connector->ddc_bus->adapter); - /* Log EDID retrieval status here. 
In particular with regard to - * connectors with requires_extended_probe flag set, that will prevent - * function radeon_dvi_detect() to fetch EDID on this connector, - * as long as there is no valid EDID header found */ - if (edid) { - DRM_INFO("Radeon display connector %s: Found valid EDID", - drm_get_connector_name(connector)); - kfree(edid); - } else { - DRM_INFO("Radeon display connector %s: No monitor connected or invalid EDID", - drm_get_connector_name(connector)); - } - return ret; -} - /* avivo */ static void avivo_get_fb_div(struct radeon_pll *pll, u32 target_clock, diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c index 969933833ccb..a0b35e909489 100644 --- a/drivers/gpu/drm/radeon/radeon_drv.c +++ b/drivers/gpu/drm/radeon/radeon_drv.c @@ -119,6 +119,7 @@ int radeon_audio = 0; int radeon_disp_priority = 0; int radeon_hw_i2c = 0; int radeon_pcie_gen2 = 0; +int radeon_msi = -1; MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers"); module_param_named(no_wb, radeon_no_wb, int, 0444); @@ -165,6 +166,9 @@ module_param_named(hw_i2c, radeon_hw_i2c, int, 0444); MODULE_PARM_DESC(pcie_gen2, "PCIE Gen2 mode (1 = enable)"); module_param_named(pcie_gen2, radeon_pcie_gen2, int, 0444); +MODULE_PARM_DESC(msi, "MSI support (1 = enable, 0 = disable, -1 = auto)"); +module_param_named(msi, radeon_msi, int, 0444); + static int radeon_suspend(struct drm_device *dev, pm_message_t state) { drm_radeon_private_t *dev_priv = dev->dev_private; diff --git a/drivers/gpu/drm/radeon/radeon_encoders.c b/drivers/gpu/drm/radeon/radeon_encoders.c index eb3f6dc6df83..06e413e6a920 100644 --- a/drivers/gpu/drm/radeon/radeon_encoders.c +++ b/drivers/gpu/drm/radeon/radeon_encoders.c @@ -29,12 +29,6 @@ #include "radeon.h" #include "atom.h" -extern int atom_debug; - -/* evil but including atombios.h is much worse */ -bool radeon_atom_get_tv_timings(struct radeon_device *rdev, int index, - struct drm_display_mode *mode); - static uint32_t radeon_encoder_clones(struct drm_encoder *encoder) { struct drm_device *dev = encoder->dev; @@ -156,27 +150,6 @@ radeon_get_encoder_enum(struct drm_device *dev, uint32_t supported_device, uint8 return ret; } -static inline bool radeon_encoder_is_digital(struct drm_encoder *encoder) -{ - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_LVDS: - case ENCODER_OBJECT_ID_INTERNAL_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: - case ENCODER_OBJECT_ID_INTERNAL_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_DDI: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - return true; - default: - return false; - } -} - void radeon_link_encoder_connector(struct drm_device *dev) { @@ -229,23 +202,7 @@ radeon_get_connector_for_encoder(struct drm_encoder *encoder) return NULL; } -static struct drm_connector * -radeon_get_connector_for_encoder_init(struct drm_encoder *encoder) -{ - struct drm_device *dev = encoder->dev; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_connector *connector; - struct radeon_connector *radeon_connector; - - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { - radeon_connector = to_radeon_connector(connector); - if (radeon_encoder->devices & radeon_connector->devices) 
- return connector; - } - return NULL; -} - -struct drm_encoder *radeon_atom_get_external_encoder(struct drm_encoder *encoder) +struct drm_encoder *radeon_get_external_encoder(struct drm_encoder *encoder) { struct drm_device *dev = encoder->dev; struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); @@ -266,9 +223,9 @@ struct drm_encoder *radeon_atom_get_external_encoder(struct drm_encoder *encoder return NULL; } -bool radeon_encoder_is_dp_bridge(struct drm_encoder *encoder) +u16 radeon_encoder_get_dp_bridge_encoder_id(struct drm_encoder *encoder) { - struct drm_encoder *other_encoder = radeon_atom_get_external_encoder(encoder); + struct drm_encoder *other_encoder = radeon_get_external_encoder(encoder); if (other_encoder) { struct radeon_encoder *radeon_encoder = to_radeon_encoder(other_encoder); @@ -332,2105 +289,3 @@ void radeon_panel_mode_fixup(struct drm_encoder *encoder, } -static bool radeon_atom_mode_fixup(struct drm_encoder *encoder, - struct drm_display_mode *mode, - struct drm_display_mode *adjusted_mode) -{ - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - - /* set the active encoder to connector routing */ - radeon_encoder_set_active_device(encoder); - drm_mode_set_crtcinfo(adjusted_mode, 0); - - /* hw bug */ - if ((mode->flags & DRM_MODE_FLAG_INTERLACE) - && (mode->crtc_vsync_start < (mode->crtc_vdisplay + 2))) - adjusted_mode->crtc_vsync_start = adjusted_mode->crtc_vdisplay + 2; - - /* get the native mode for LVDS */ - if (radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT)) - radeon_panel_mode_fixup(encoder, adjusted_mode); - - /* get the native mode for TV */ - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) { - struct radeon_encoder_atom_dac *tv_dac = radeon_encoder->enc_priv; - if (tv_dac) { - if (tv_dac->tv_std == TV_STD_NTSC || - tv_dac->tv_std == TV_STD_NTSC_J || - tv_dac->tv_std == TV_STD_PAL_M) - radeon_atom_get_tv_timings(rdev, 0, adjusted_mode); - else - radeon_atom_get_tv_timings(rdev, 1, adjusted_mode); - } - } - - if (ASIC_IS_DCE3(rdev) && - ((radeon_encoder->active_device & (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) || - radeon_encoder_is_dp_bridge(encoder))) { - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); - radeon_dp_set_link_config(connector, mode); - } - - return true; -} - -static void -atombios_dac_setup(struct drm_encoder *encoder, int action) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - DAC_ENCODER_CONTROL_PS_ALLOCATION args; - int index = 0; - struct radeon_encoder_atom_dac *dac_info = radeon_encoder->enc_priv; - - memset(&args, 0, sizeof(args)); - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: - index = GetIndexIntoMasterTable(COMMAND, DAC1EncoderControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: - index = GetIndexIntoMasterTable(COMMAND, DAC2EncoderControl); - break; - } - - args.ucAction = action; - - if (radeon_encoder->active_device & (ATOM_DEVICE_CRT_SUPPORT)) - args.ucDacStandard = ATOM_DAC1_PS2; - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - args.ucDacStandard = ATOM_DAC1_CV; - else { - switch (dac_info->tv_std) { - case TV_STD_PAL: - case TV_STD_PAL_M: - case TV_STD_SCART_PAL: - case 
TV_STD_SECAM: - case TV_STD_PAL_CN: - args.ucDacStandard = ATOM_DAC1_PAL; - break; - case TV_STD_NTSC: - case TV_STD_NTSC_J: - case TV_STD_PAL_60: - default: - args.ucDacStandard = ATOM_DAC1_NTSC; - break; - } - } - args.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - -} - -static void -atombios_tv_setup(struct drm_encoder *encoder, int action) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - TV_ENCODER_CONTROL_PS_ALLOCATION args; - int index = 0; - struct radeon_encoder_atom_dac *dac_info = radeon_encoder->enc_priv; - - memset(&args, 0, sizeof(args)); - - index = GetIndexIntoMasterTable(COMMAND, TVEncoderControl); - - args.sTVEncoder.ucAction = action; - - if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - args.sTVEncoder.ucTvStandard = ATOM_TV_CV; - else { - switch (dac_info->tv_std) { - case TV_STD_NTSC: - args.sTVEncoder.ucTvStandard = ATOM_TV_NTSC; - break; - case TV_STD_PAL: - args.sTVEncoder.ucTvStandard = ATOM_TV_PAL; - break; - case TV_STD_PAL_M: - args.sTVEncoder.ucTvStandard = ATOM_TV_PALM; - break; - case TV_STD_PAL_60: - args.sTVEncoder.ucTvStandard = ATOM_TV_PAL60; - break; - case TV_STD_NTSC_J: - args.sTVEncoder.ucTvStandard = ATOM_TV_NTSCJ; - break; - case TV_STD_SCART_PAL: - args.sTVEncoder.ucTvStandard = ATOM_TV_PAL; /* ??? */ - break; - case TV_STD_SECAM: - args.sTVEncoder.ucTvStandard = ATOM_TV_SECAM; - break; - case TV_STD_PAL_CN: - args.sTVEncoder.ucTvStandard = ATOM_TV_PALCN; - break; - default: - args.sTVEncoder.ucTvStandard = ATOM_TV_NTSC; - break; - } - } - - args.sTVEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - -} - -union dvo_encoder_control { - ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION ext_tmds; - DVO_ENCODER_CONTROL_PS_ALLOCATION dvo; - DVO_ENCODER_CONTROL_PS_ALLOCATION_V3 dvo_v3; -}; - -void -atombios_dvo_setup(struct drm_encoder *encoder, int action) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - union dvo_encoder_control args; - int index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl); - - memset(&args, 0, sizeof(args)); - - if (ASIC_IS_DCE3(rdev)) { - /* DCE3+ */ - args.dvo_v3.ucAction = action; - args.dvo_v3.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - args.dvo_v3.ucDVOConfig = 0; /* XXX */ - } else if (ASIC_IS_DCE2(rdev)) { - /* DCE2 (pre-DCE3 R6xx, RS600/690/740 */ - args.dvo.sDVOEncoder.ucAction = action; - args.dvo.sDVOEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - /* DFP1, CRT1, TV1 depending on the type of port */ - args.dvo.sDVOEncoder.ucDeviceType = ATOM_DEVICE_DFP1_INDEX; - - if (radeon_encoder->pixel_clock > 165000) - args.dvo.sDVOEncoder.usDevAttr.sDigAttrib.ucAttribute |= PANEL_ENCODER_MISC_DUAL; - } else { - /* R4xx, R5xx */ - args.ext_tmds.sXTmdsEncoder.ucEnable = action; - - if (radeon_encoder->pixel_clock > 165000) - args.ext_tmds.sXTmdsEncoder.ucMisc |= PANEL_ENCODER_MISC_DUAL; - - /*if (pScrn->rgbBits == 8)*/ - args.ext_tmds.sXTmdsEncoder.ucMisc |= ATOM_PANEL_MISC_888RGB; - } - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); -} - -union lvds_encoder_control { - LVDS_ENCODER_CONTROL_PS_ALLOCATION v1; - 
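/* Each AtomBIOS command table gets a union like this: the members mirror the
 * parameter layouts of the different table revisions, and the frev/crev pair
 * returned by atom_parse_cmd_header() below tells the caller which member to
 * fill before handing the block to atom_execute_table(). */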
LVDS_ENCODER_CONTROL_PS_ALLOCATION_V2 v2; -}; - -void -atombios_digital_setup(struct drm_encoder *encoder, int action) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; - union lvds_encoder_control args; - int index = 0; - int hdmi_detected = 0; - uint8_t frev, crev; - - if (!dig) - return; - - if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) - hdmi_detected = 1; - - memset(&args, 0, sizeof(args)); - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_LVDS: - index = GetIndexIntoMasterTable(COMMAND, LVDSEncoderControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: - index = GetIndexIntoMasterTable(COMMAND, TMDS1EncoderControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) - index = GetIndexIntoMasterTable(COMMAND, LVDSEncoderControl); - else - index = GetIndexIntoMasterTable(COMMAND, TMDS2EncoderControl); - break; - } - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - return; - - switch (frev) { - case 1: - case 2: - switch (crev) { - case 1: - args.v1.ucMisc = 0; - args.v1.ucAction = action; - if (hdmi_detected) - args.v1.ucMisc |= PANEL_ENCODER_MISC_HDMI_TYPE; - args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - if (dig->lcd_misc & ATOM_PANEL_MISC_DUAL) - args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL; - if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB) - args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB; - } else { - if (dig->linkb) - args.v1.ucMisc |= PANEL_ENCODER_MISC_TMDS_LINKB; - if (radeon_encoder->pixel_clock > 165000) - args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL; - /*if (pScrn->rgbBits == 8) */ - args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB; - } - break; - case 2: - case 3: - args.v2.ucMisc = 0; - args.v2.ucAction = action; - if (crev == 3) { - if (dig->coherent_mode) - args.v2.ucMisc |= PANEL_ENCODER_MISC_COHERENT; - } - if (hdmi_detected) - args.v2.ucMisc |= PANEL_ENCODER_MISC_HDMI_TYPE; - args.v2.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - args.v2.ucTruncate = 0; - args.v2.ucSpatial = 0; - args.v2.ucTemporal = 0; - args.v2.ucFRC = 0; - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - if (dig->lcd_misc & ATOM_PANEL_MISC_DUAL) - args.v2.ucMisc |= PANEL_ENCODER_MISC_DUAL; - if (dig->lcd_misc & ATOM_PANEL_MISC_SPATIAL) { - args.v2.ucSpatial = PANEL_ENCODER_SPATIAL_DITHER_EN; - if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB) - args.v2.ucSpatial |= PANEL_ENCODER_SPATIAL_DITHER_DEPTH; - } - if (dig->lcd_misc & ATOM_PANEL_MISC_TEMPORAL) { - args.v2.ucTemporal = PANEL_ENCODER_TEMPORAL_DITHER_EN; - if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB) - args.v2.ucTemporal |= PANEL_ENCODER_TEMPORAL_DITHER_DEPTH; - if (((dig->lcd_misc >> ATOM_PANEL_MISC_GREY_LEVEL_SHIFT) & 0x3) == 2) - args.v2.ucTemporal |= PANEL_ENCODER_TEMPORAL_LEVEL_4; - } - } else { - if (dig->linkb) - args.v2.ucMisc |= PANEL_ENCODER_MISC_TMDS_LINKB; - if (radeon_encoder->pixel_clock > 165000) - args.v2.ucMisc |= PANEL_ENCODER_MISC_DUAL; - } - break; - default: - DRM_ERROR("Unknown table version %d, %d\n", frev, crev); - break; - } - break; - default: - DRM_ERROR("Unknown table version %d, %d\n", frev, crev); - break; - } - - atom_execute_table(rdev->mode_info.atom_context, index, 
(uint32_t *)&args); -} - -int -atombios_get_encoder_mode(struct drm_encoder *encoder) -{ - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct drm_connector *connector; - struct radeon_connector *radeon_connector; - struct radeon_connector_atom_dig *dig_connector; - - /* dp bridges are always DP */ - if (radeon_encoder_is_dp_bridge(encoder)) - return ATOM_ENCODER_MODE_DP; - - /* DVO is always DVO */ - if (radeon_encoder->encoder_id == ATOM_ENCODER_MODE_DVO) - return ATOM_ENCODER_MODE_DVO; - - connector = radeon_get_connector_for_encoder(encoder); - /* if we don't have an active device yet, just use one of - * the connectors tied to the encoder. - */ - if (!connector) - connector = radeon_get_connector_for_encoder_init(encoder); - radeon_connector = to_radeon_connector(connector); - - switch (connector->connector_type) { - case DRM_MODE_CONNECTOR_DVII: - case DRM_MODE_CONNECTOR_HDMIB: /* HDMI-B is basically DL-DVI; analog works fine */ - if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { - /* fix me */ - if (ASIC_IS_DCE4(rdev)) - return ATOM_ENCODER_MODE_DVI; - else - return ATOM_ENCODER_MODE_HDMI; - } else if (radeon_connector->use_digital) - return ATOM_ENCODER_MODE_DVI; - else - return ATOM_ENCODER_MODE_CRT; - break; - case DRM_MODE_CONNECTOR_DVID: - case DRM_MODE_CONNECTOR_HDMIA: - default: - if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { - /* fix me */ - if (ASIC_IS_DCE4(rdev)) - return ATOM_ENCODER_MODE_DVI; - else - return ATOM_ENCODER_MODE_HDMI; - } else - return ATOM_ENCODER_MODE_DVI; - break; - case DRM_MODE_CONNECTOR_LVDS: - return ATOM_ENCODER_MODE_LVDS; - break; - case DRM_MODE_CONNECTOR_DisplayPort: - dig_connector = radeon_connector->con_priv; - if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) || - (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) - return ATOM_ENCODER_MODE_DP; - else if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { - /* fix me */ - if (ASIC_IS_DCE4(rdev)) - return ATOM_ENCODER_MODE_DVI; - else - return ATOM_ENCODER_MODE_HDMI; - } else - return ATOM_ENCODER_MODE_DVI; - break; - case DRM_MODE_CONNECTOR_eDP: - return ATOM_ENCODER_MODE_DP; - case DRM_MODE_CONNECTOR_DVIA: - case DRM_MODE_CONNECTOR_VGA: - return ATOM_ENCODER_MODE_CRT; - break; - case DRM_MODE_CONNECTOR_Composite: - case DRM_MODE_CONNECTOR_SVIDEO: - case DRM_MODE_CONNECTOR_9PinDIN: - /* fix me */ - return ATOM_ENCODER_MODE_TV; - /*return ATOM_ENCODER_MODE_CV;*/ - break; - } -} - -/* - * DIG Encoder/Transmitter Setup - * - * DCE 3.0/3.1 - * - 2 DIG transmitter blocks. UNIPHY (links A and B) and LVTMA. - * Supports up to 3 digital outputs - * - 2 DIG encoder blocks. - * DIG1 can drive UNIPHY link A or link B - * DIG2 can drive UNIPHY link B or LVTMA - * - * DCE 3.2 - * - 3 DIG transmitter blocks. UNIPHY0/1/2 (links A and B). - * Supports up to 5 digital outputs - * - 2 DIG encoder blocks. - * DIG1/2 can drive UNIPHY0/1/2 link A or link B - * - * DCE 4.0/5.0 - * - 3 DIG transmitter blocks UNIPHY0/1/2 (links A and B). - * Supports up to 6 digital outputs - * - 6 DIG encoder blocks. - * - DIG to PHY mapping is hardcoded - * DIG1 drives UNIPHY0 link A, A+B - * DIG2 drives UNIPHY0 link B - * DIG3 drives UNIPHY1 link A, A+B - * DIG4 drives UNIPHY1 link B - * DIG5 drives UNIPHY2 link A, A+B - * DIG6 drives UNIPHY2 link B - * - * DCE 4.1 - * - 3 DIG transmitter blocks UNIPHY0/1/2 (links A and B). 
- * Supports up to 6 digital outputs - * - 2 DIG encoder blocks. - * DIG1/2 can drive UNIPHY0/1/2 link A or link B - * - * Routing - * crtc -> dig encoder -> UNIPHY/LVTMA (1 or 2 links) - * Examples: - * crtc0 -> dig2 -> LVTMA links A+B -> TMDS/HDMI - * crtc1 -> dig1 -> UNIPHY0 link B -> DP - * crtc0 -> dig1 -> UNIPHY2 link A -> LVDS - * crtc1 -> dig2 -> UNIPHY1 link B+A -> TMDS/HDMI - */ - -union dig_encoder_control { - DIG_ENCODER_CONTROL_PS_ALLOCATION v1; - DIG_ENCODER_CONTROL_PARAMETERS_V2 v2; - DIG_ENCODER_CONTROL_PARAMETERS_V3 v3; - DIG_ENCODER_CONTROL_PARAMETERS_V4 v4; -}; - -void -atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); - union dig_encoder_control args; - int index = 0; - uint8_t frev, crev; - int dp_clock = 0; - int dp_lane_count = 0; - int hpd_id = RADEON_HPD_NONE; - int bpc = 8; - - if (connector) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct radeon_connector_atom_dig *dig_connector = - radeon_connector->con_priv; - - dp_clock = dig_connector->dp_clock; - dp_lane_count = dig_connector->dp_lane_count; - hpd_id = radeon_connector->hpd.hpd; - bpc = connector->display_info.bpc; - } - - /* no dig encoder assigned */ - if (dig->dig_encoder == -1) - return; - - memset(&args, 0, sizeof(args)); - - if (ASIC_IS_DCE4(rdev)) - index = GetIndexIntoMasterTable(COMMAND, DIGxEncoderControl); - else { - if (dig->dig_encoder) - index = GetIndexIntoMasterTable(COMMAND, DIG2EncoderControl); - else - index = GetIndexIntoMasterTable(COMMAND, DIG1EncoderControl); - } - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - return; - - args.v1.ucAction = action; - args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - if (action == ATOM_ENCODER_CMD_SETUP_PANEL_MODE) - args.v3.ucPanelMode = panel_mode; - else - args.v1.ucEncoderMode = atombios_get_encoder_mode(encoder); - - if ((args.v1.ucEncoderMode == ATOM_ENCODER_MODE_DP) || - (args.v1.ucEncoderMode == ATOM_ENCODER_MODE_DP_MST)) - args.v1.ucLaneNum = dp_lane_count; - else if (radeon_encoder->pixel_clock > 165000) - args.v1.ucLaneNum = 8; - else - args.v1.ucLaneNum = 4; - - if (ASIC_IS_DCE5(rdev)) { - if ((args.v1.ucEncoderMode == ATOM_ENCODER_MODE_DP) || - (args.v1.ucEncoderMode == ATOM_ENCODER_MODE_DP_MST)) { - if (dp_clock == 270000) - args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V4_DPLINKRATE_2_70GHZ; - else if (dp_clock == 540000) - args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V4_DPLINKRATE_5_40GHZ; - } - args.v4.acConfig.ucDigSel = dig->dig_encoder; - switch (bpc) { - case 0: - args.v4.ucBitPerColor = PANEL_BPC_UNDEFINE; - break; - case 6: - args.v4.ucBitPerColor = PANEL_6BIT_PER_COLOR; - break; - case 8: - default: - args.v4.ucBitPerColor = PANEL_8BIT_PER_COLOR; - break; - case 10: - args.v4.ucBitPerColor = PANEL_10BIT_PER_COLOR; - break; - case 12: - args.v4.ucBitPerColor = PANEL_12BIT_PER_COLOR; - break; - case 16: - args.v4.ucBitPerColor = PANEL_16BIT_PER_COLOR; - break; - } - if (hpd_id == RADEON_HPD_NONE) - args.v4.ucHPD_ID = 0; - else - args.v4.ucHPD_ID = hpd_id + 1; - } else if (ASIC_IS_DCE4(rdev)) { - if ((args.v1.ucEncoderMode == ATOM_ENCODER_MODE_DP) && (dp_clock == 270000)) - args.v1.ucConfig |= 
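/* dp_clock is carried in kHz, so 270000 selects the 2.70 GHz DP link-rate
 * flag here (and 540000 the 5.40 GHz flag in the DCE5 branch above).
 *
 * A minimal sketch of the lane-count rule used a few lines earlier, pulled
 * out only for clarity -- the helper name and signature are illustrative,
 * not part of this patch: */
static int dig_lane_count_sketch(bool is_dp, int dp_lane_count, u32 pixel_clock_khz)
{
	if (is_dp)
		return dp_lane_count;	/* DP/DP-MST: use the sink's lane count */
	if (pixel_clock_khz > 165000)
		return 8;		/* dual-link TMDS */
	return 4;			/* single-link */
}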
ATOM_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ; - args.v3.acConfig.ucDigSel = dig->dig_encoder; - switch (bpc) { - case 0: - args.v3.ucBitPerColor = PANEL_BPC_UNDEFINE; - break; - case 6: - args.v3.ucBitPerColor = PANEL_6BIT_PER_COLOR; - break; - case 8: - default: - args.v3.ucBitPerColor = PANEL_8BIT_PER_COLOR; - break; - case 10: - args.v3.ucBitPerColor = PANEL_10BIT_PER_COLOR; - break; - case 12: - args.v3.ucBitPerColor = PANEL_12BIT_PER_COLOR; - break; - case 16: - args.v3.ucBitPerColor = PANEL_16BIT_PER_COLOR; - break; - } - } else { - if ((args.v1.ucEncoderMode == ATOM_ENCODER_MODE_DP) && (dp_clock == 270000)) - args.v1.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ; - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER1; - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER2; - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER3; - break; - } - if (dig->linkb) - args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKB; - else - args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKA; - } - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - -} - -union dig_transmitter_control { - DIG_TRANSMITTER_CONTROL_PS_ALLOCATION v1; - DIG_TRANSMITTER_CONTROL_PARAMETERS_V2 v2; - DIG_TRANSMITTER_CONTROL_PARAMETERS_V3 v3; - DIG_TRANSMITTER_CONTROL_PARAMETERS_V4 v4; -}; - -void -atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; - struct drm_connector *connector; - union dig_transmitter_control args; - int index = 0; - uint8_t frev, crev; - bool is_dp = false; - int pll_id = 0; - int dp_clock = 0; - int dp_lane_count = 0; - int connector_object_id = 0; - int igp_lane_info = 0; - int dig_encoder = dig->dig_encoder; - - if (action == ATOM_TRANSMITTER_ACTION_INIT) { - connector = radeon_get_connector_for_encoder_init(encoder); - /* just needed to avoid bailing in the encoder check. 
the encoder - * isn't used for init - */ - dig_encoder = 0; - } else - connector = radeon_get_connector_for_encoder(encoder); - - if (connector) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct radeon_connector_atom_dig *dig_connector = - radeon_connector->con_priv; - - dp_clock = dig_connector->dp_clock; - dp_lane_count = dig_connector->dp_lane_count; - connector_object_id = - (radeon_connector->connector_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT; - igp_lane_info = dig_connector->igp_lane_info; - } - - /* no dig encoder assigned */ - if (dig_encoder == -1) - return; - - if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) - is_dp = true; - - memset(&args, 0, sizeof(args)); - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - index = GetIndexIntoMasterTable(COMMAND, UNIPHYTransmitterControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - index = GetIndexIntoMasterTable(COMMAND, LVTMATransmitterControl); - break; - } - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - return; - - args.v1.ucAction = action; - if (action == ATOM_TRANSMITTER_ACTION_INIT) { - args.v1.usInitInfo = cpu_to_le16(connector_object_id); - } else if (action == ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH) { - args.v1.asMode.ucLaneSel = lane_num; - args.v1.asMode.ucLaneSet = lane_set; - } else { - if (is_dp) - args.v1.usPixelClock = - cpu_to_le16(dp_clock / 10); - else if (radeon_encoder->pixel_clock > 165000) - args.v1.usPixelClock = cpu_to_le16((radeon_encoder->pixel_clock / 2) / 10); - else - args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - } - if (ASIC_IS_DCE4(rdev)) { - if (is_dp) - args.v3.ucLaneNum = dp_lane_count; - else if (radeon_encoder->pixel_clock > 165000) - args.v3.ucLaneNum = 8; - else - args.v3.ucLaneNum = 4; - - if (dig->linkb) - args.v3.acConfig.ucLinkSel = 1; - if (dig_encoder & 1) - args.v3.acConfig.ucEncoderSel = 1; - - /* Select the PLL for the PHY - * DP PHY should be clocked from external src if there is - * one. 
- */ - if (encoder->crtc) { - struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); - pll_id = radeon_crtc->pll_id; - } - - if (ASIC_IS_DCE5(rdev)) { - /* On DCE5 DCPLL usually generates the DP ref clock */ - if (is_dp) { - if (rdev->clock.dp_extclk) - args.v4.acConfig.ucRefClkSource = ENCODER_REFCLK_SRC_EXTCLK; - else - args.v4.acConfig.ucRefClkSource = ENCODER_REFCLK_SRC_DCPLL; - } else - args.v4.acConfig.ucRefClkSource = pll_id; - } else { - /* On DCE4, if there is an external clock, it generates the DP ref clock */ - if (is_dp && rdev->clock.dp_extclk) - args.v3.acConfig.ucRefClkSource = 2; /* external src */ - else - args.v3.acConfig.ucRefClkSource = pll_id; - } - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - args.v3.acConfig.ucTransmitterSel = 0; - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - args.v3.acConfig.ucTransmitterSel = 1; - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - args.v3.acConfig.ucTransmitterSel = 2; - break; - } - - if (is_dp) - args.v3.acConfig.fCoherentMode = 1; /* DP requires coherent */ - else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { - if (dig->coherent_mode) - args.v3.acConfig.fCoherentMode = 1; - if (radeon_encoder->pixel_clock > 165000) - args.v3.acConfig.fDualLinkConnector = 1; - } - } else if (ASIC_IS_DCE32(rdev)) { - args.v2.acConfig.ucEncoderSel = dig_encoder; - if (dig->linkb) - args.v2.acConfig.ucLinkSel = 1; - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - args.v2.acConfig.ucTransmitterSel = 0; - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - args.v2.acConfig.ucTransmitterSel = 1; - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - args.v2.acConfig.ucTransmitterSel = 2; - break; - } - - if (is_dp) { - args.v2.acConfig.fCoherentMode = 1; - args.v2.acConfig.fDPConnector = 1; - } else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { - if (dig->coherent_mode) - args.v2.acConfig.fCoherentMode = 1; - if (radeon_encoder->pixel_clock > 165000) - args.v2.acConfig.fDualLinkConnector = 1; - } - } else { - args.v1.ucConfig = ATOM_TRANSMITTER_CONFIG_CLKSRC_PPLL; - - if (dig_encoder) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_DIG2_ENCODER; - else - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_DIG1_ENCODER; - - if ((rdev->flags & RADEON_IS_IGP) && - (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_UNIPHY)) { - if (is_dp || (radeon_encoder->pixel_clock <= 165000)) { - if (igp_lane_info & 0x1) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_3; - else if (igp_lane_info & 0x2) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_4_7; - else if (igp_lane_info & 0x4) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_11; - else if (igp_lane_info & 0x8) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_12_15; - } else { - if (igp_lane_info & 0x3) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_7; - else if (igp_lane_info & 0xc) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_15; - } - } - - if (dig->linkb) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LINKB; - else - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LINKA; - - if (is_dp) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_COHERENT; - else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { - if (dig->coherent_mode) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_COHERENT; - if (radeon_encoder->pixel_clock > 165000) - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_8LANE_LINK; - } - } - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t 
*)&args); -} - -bool -atombios_set_edp_panel_power(struct drm_connector *connector, int action) -{ - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct drm_device *dev = radeon_connector->base.dev; - struct radeon_device *rdev = dev->dev_private; - union dig_transmitter_control args; - int index = GetIndexIntoMasterTable(COMMAND, UNIPHYTransmitterControl); - uint8_t frev, crev; - - if (connector->connector_type != DRM_MODE_CONNECTOR_eDP) - goto done; - - if (!ASIC_IS_DCE4(rdev)) - goto done; - - if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) && - (action != ATOM_TRANSMITTER_ACTION_POWER_OFF)) - goto done; - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - goto done; - - memset(&args, 0, sizeof(args)); - - args.v1.ucAction = action; - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - - /* wait for the panel to power up */ - if (action == ATOM_TRANSMITTER_ACTION_POWER_ON) { - int i; - - for (i = 0; i < 300; i++) { - if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) - return true; - mdelay(1); - } - return false; - } -done: - return true; -} - -union external_encoder_control { - EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION v1; - EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION_V3 v3; -}; - -static void -atombios_external_encoder_setup(struct drm_encoder *encoder, - struct drm_encoder *ext_encoder, - int action) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_encoder *ext_radeon_encoder = to_radeon_encoder(ext_encoder); - union external_encoder_control args; - struct drm_connector *connector; - int index = GetIndexIntoMasterTable(COMMAND, ExternalEncoderControl); - u8 frev, crev; - int dp_clock = 0; - int dp_lane_count = 0; - int connector_object_id = 0; - u32 ext_enum = (ext_radeon_encoder->encoder_enum & ENUM_ID_MASK) >> ENUM_ID_SHIFT; - int bpc = 8; - - if (action == EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT) - connector = radeon_get_connector_for_encoder_init(encoder); - else - connector = radeon_get_connector_for_encoder(encoder); - - if (connector) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct radeon_connector_atom_dig *dig_connector = - radeon_connector->con_priv; - - dp_clock = dig_connector->dp_clock; - dp_lane_count = dig_connector->dp_lane_count; - connector_object_id = - (radeon_connector->connector_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT; - bpc = connector->display_info.bpc; - } - - memset(&args, 0, sizeof(args)); - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - return; - - switch (frev) { - case 1: - /* no params on frev 1 */ - break; - case 2: - switch (crev) { - case 1: - case 2: - args.v1.sDigEncoder.ucAction = action; - args.v1.sDigEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - args.v1.sDigEncoder.ucEncoderMode = atombios_get_encoder_mode(encoder); - - if (args.v1.sDigEncoder.ucEncoderMode == ATOM_ENCODER_MODE_DP) { - if (dp_clock == 270000) - args.v1.sDigEncoder.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ; - args.v1.sDigEncoder.ucLaneNum = dp_lane_count; - } else if (radeon_encoder->pixel_clock > 165000) - args.v1.sDigEncoder.ucLaneNum = 8; - else - args.v1.sDigEncoder.ucLaneNum = 4; - break; - case 3: - args.v3.sExtEncoder.ucAction = action; - if (action == EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT) - args.v3.sExtEncoder.usConnectorId = 
cpu_to_le16(connector_object_id); - else - args.v3.sExtEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); - args.v3.sExtEncoder.ucEncoderMode = atombios_get_encoder_mode(encoder); - - if (args.v3.sExtEncoder.ucEncoderMode == ATOM_ENCODER_MODE_DP) { - if (dp_clock == 270000) - args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ; - else if (dp_clock == 540000) - args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_5_40GHZ; - args.v3.sExtEncoder.ucLaneNum = dp_lane_count; - } else if (radeon_encoder->pixel_clock > 165000) - args.v3.sExtEncoder.ucLaneNum = 8; - else - args.v3.sExtEncoder.ucLaneNum = 4; - switch (ext_enum) { - case GRAPH_OBJECT_ENUM_ID1: - args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_ENCODER1; - break; - case GRAPH_OBJECT_ENUM_ID2: - args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_ENCODER2; - break; - case GRAPH_OBJECT_ENUM_ID3: - args.v3.sExtEncoder.ucConfig |= EXTERNAL_ENCODER_CONFIG_V3_ENCODER3; - break; - } - switch (bpc) { - case 0: - args.v3.sExtEncoder.ucBitPerColor = PANEL_BPC_UNDEFINE; - break; - case 6: - args.v3.sExtEncoder.ucBitPerColor = PANEL_6BIT_PER_COLOR; - break; - case 8: - default: - args.v3.sExtEncoder.ucBitPerColor = PANEL_8BIT_PER_COLOR; - break; - case 10: - args.v3.sExtEncoder.ucBitPerColor = PANEL_10BIT_PER_COLOR; - break; - case 12: - args.v3.sExtEncoder.ucBitPerColor = PANEL_12BIT_PER_COLOR; - break; - case 16: - args.v3.sExtEncoder.ucBitPerColor = PANEL_16BIT_PER_COLOR; - break; - } - break; - default: - DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); - return; - } - break; - default: - DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); - return; - } - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); -} - -static void -atombios_yuv_setup(struct drm_encoder *encoder, bool enable) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); - ENABLE_YUV_PS_ALLOCATION args; - int index = GetIndexIntoMasterTable(COMMAND, EnableYUV); - uint32_t temp, reg; - - memset(&args, 0, sizeof(args)); - - if (rdev->family >= CHIP_R600) - reg = R600_BIOS_3_SCRATCH; - else - reg = RADEON_BIOS_3_SCRATCH; - - /* XXX: fix up scratch reg handling */ - temp = RREG32(reg); - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) - WREG32(reg, (ATOM_S3_TV1_ACTIVE | - (radeon_crtc->crtc_id << 18))); - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - WREG32(reg, (ATOM_S3_CV_ACTIVE | (radeon_crtc->crtc_id << 24))); - else - WREG32(reg, 0); - - if (enable) - args.ucEnable = ATOM_ENABLE; - args.ucCRTC = radeon_crtc->crtc_id; - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - - WREG32(reg, temp); -} - -static void -radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); - DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION args; - int index = 0; - bool is_dig = false; - bool is_dce5_dac = false; - bool is_dce5_dvo = false; - - memset(&args, 0, sizeof(args)); - - DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n", - radeon_encoder->encoder_id, mode, 
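/* radeon_atom_encoder_dpms() fans out three ways below: DIG encoders go
 * through the transmitter/encoder tables (with extra eDP panel-power and DP
 * link-training steps when the mode is DP), DCE5 DAC/DVO blocks use their
 * dedicated setup tables, and everything else falls back to the legacy
 * per-device *OutputControl tables. */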
radeon_encoder->devices, - radeon_encoder->active_device); - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: - index = GetIndexIntoMasterTable(COMMAND, TMDSAOutputControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - is_dig = true; - break; - case ENCODER_OBJECT_ID_INTERNAL_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_DDI: - index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - if (ASIC_IS_DCE5(rdev)) - is_dce5_dvo = true; - else if (ASIC_IS_DCE3(rdev)) - is_dig = true; - else - index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_LVDS: - index = GetIndexIntoMasterTable(COMMAND, LCD1OutputControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) - index = GetIndexIntoMasterTable(COMMAND, LCD1OutputControl); - else - index = GetIndexIntoMasterTable(COMMAND, LVTMAOutputControl); - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: - if (ASIC_IS_DCE5(rdev)) - is_dce5_dac = true; - else { - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) - index = GetIndexIntoMasterTable(COMMAND, TV1OutputControl); - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - index = GetIndexIntoMasterTable(COMMAND, CV1OutputControl); - else - index = GetIndexIntoMasterTable(COMMAND, DAC1OutputControl); - } - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) - index = GetIndexIntoMasterTable(COMMAND, TV1OutputControl); - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - index = GetIndexIntoMasterTable(COMMAND, CV1OutputControl); - else - index = GetIndexIntoMasterTable(COMMAND, DAC2OutputControl); - break; - } - - if (is_dig) { - switch (mode) { - case DRM_MODE_DPMS_ON: - /* some early dce3.2 boards have a bug in their transmitter control table */ - if ((rdev->family == CHIP_RV710) || (rdev->family == CHIP_RV730)) - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); - else - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); - if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) { - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); - - if (connector && - (connector->connector_type == DRM_MODE_CONNECTOR_eDP)) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct radeon_connector_atom_dig *radeon_dig_connector = - radeon_connector->con_priv; - atombios_set_edp_panel_power(connector, - ATOM_TRANSMITTER_ACTION_POWER_ON); - radeon_dig_connector->edp_on = true; - } - if (ASIC_IS_DCE4(rdev)) - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); - radeon_dp_link_train(encoder, connector); - if (ASIC_IS_DCE4(rdev)) - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0); - } - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0); - break; - case DRM_MODE_DPMS_STANDBY: - case DRM_MODE_DPMS_SUSPEND: - case DRM_MODE_DPMS_OFF: - atombios_dig_transmitter_setup(encoder, 
ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0); - if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) { - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); - - if (ASIC_IS_DCE4(rdev)) - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); - if (connector && - (connector->connector_type == DRM_MODE_CONNECTOR_eDP)) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct radeon_connector_atom_dig *radeon_dig_connector = - radeon_connector->con_priv; - atombios_set_edp_panel_power(connector, - ATOM_TRANSMITTER_ACTION_POWER_OFF); - radeon_dig_connector->edp_on = false; - } - } - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLOFF, 0, 0); - break; - } - } else if (is_dce5_dac) { - switch (mode) { - case DRM_MODE_DPMS_ON: - atombios_dac_setup(encoder, ATOM_ENABLE); - break; - case DRM_MODE_DPMS_STANDBY: - case DRM_MODE_DPMS_SUSPEND: - case DRM_MODE_DPMS_OFF: - atombios_dac_setup(encoder, ATOM_DISABLE); - break; - } - } else if (is_dce5_dvo) { - switch (mode) { - case DRM_MODE_DPMS_ON: - atombios_dvo_setup(encoder, ATOM_ENABLE); - break; - case DRM_MODE_DPMS_STANDBY: - case DRM_MODE_DPMS_SUSPEND: - case DRM_MODE_DPMS_OFF: - atombios_dvo_setup(encoder, ATOM_DISABLE); - break; - } - } else { - switch (mode) { - case DRM_MODE_DPMS_ON: - args.ucAction = ATOM_ENABLE; - /* workaround for DVOOutputControl on some RS690 systems */ - if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DDI) { - u32 reg = RREG32(RADEON_BIOS_3_SCRATCH); - WREG32(RADEON_BIOS_3_SCRATCH, reg & ~ATOM_S3_DFP2I_ACTIVE); - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - WREG32(RADEON_BIOS_3_SCRATCH, reg); - } else - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - args.ucAction = ATOM_LCD_BLON; - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - } - break; - case DRM_MODE_DPMS_STANDBY: - case DRM_MODE_DPMS_SUSPEND: - case DRM_MODE_DPMS_OFF: - args.ucAction = ATOM_DISABLE; - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - args.ucAction = ATOM_LCD_BLOFF; - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - } - break; - } - } - - if (ext_encoder) { - switch (mode) { - case DRM_MODE_DPMS_ON: - default: - if (ASIC_IS_DCE41(rdev)) { - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_ENABLE_OUTPUT); - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING_OFF); - } else - atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); - break; - case DRM_MODE_DPMS_STANDBY: - case DRM_MODE_DPMS_SUSPEND: - case DRM_MODE_DPMS_OFF: - if (ASIC_IS_DCE41(rdev)) { - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING); - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_DISABLE_OUTPUT); - } else - atombios_external_encoder_setup(encoder, ext_encoder, ATOM_DISABLE); - break; - } - } - - radeon_atombios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? 
true : false); - -} - -union crtc_source_param { - SELECT_CRTC_SOURCE_PS_ALLOCATION v1; - SELECT_CRTC_SOURCE_PARAMETERS_V2 v2; -}; - -static void -atombios_set_encoder_crtc_source(struct drm_encoder *encoder) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); - union crtc_source_param args; - int index = GetIndexIntoMasterTable(COMMAND, SelectCRTC_Source); - uint8_t frev, crev; - struct radeon_encoder_atom_dig *dig; - - memset(&args, 0, sizeof(args)); - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - return; - - switch (frev) { - case 1: - switch (crev) { - case 1: - default: - if (ASIC_IS_AVIVO(rdev)) - args.v1.ucCRTC = radeon_crtc->crtc_id; - else { - if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DAC1) { - args.v1.ucCRTC = radeon_crtc->crtc_id; - } else { - args.v1.ucCRTC = radeon_crtc->crtc_id << 2; - } - } - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: - args.v1.ucDevice = ATOM_DEVICE_DFP1_INDEX; - break; - case ENCODER_OBJECT_ID_INTERNAL_LVDS: - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: - if (radeon_encoder->devices & ATOM_DEVICE_LCD1_SUPPORT) - args.v1.ucDevice = ATOM_DEVICE_LCD1_INDEX; - else - args.v1.ucDevice = ATOM_DEVICE_DFP3_INDEX; - break; - case ENCODER_OBJECT_ID_INTERNAL_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_DDI: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - args.v1.ucDevice = ATOM_DEVICE_DFP2_INDEX; - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) - args.v1.ucDevice = ATOM_DEVICE_TV1_INDEX; - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - args.v1.ucDevice = ATOM_DEVICE_CV_INDEX; - else - args.v1.ucDevice = ATOM_DEVICE_CRT1_INDEX; - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) - args.v1.ucDevice = ATOM_DEVICE_TV1_INDEX; - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - args.v1.ucDevice = ATOM_DEVICE_CV_INDEX; - else - args.v1.ucDevice = ATOM_DEVICE_CRT2_INDEX; - break; - } - break; - case 2: - args.v2.ucCRTC = radeon_crtc->crtc_id; - if (radeon_encoder_is_dp_bridge(encoder)) { - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); - - if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS) - args.v2.ucEncodeMode = ATOM_ENCODER_MODE_LVDS; - else if (connector->connector_type == DRM_MODE_CONNECTOR_VGA) - args.v2.ucEncodeMode = ATOM_ENCODER_MODE_CRT; - else - args.v2.ucEncodeMode = atombios_get_encoder_mode(encoder); - } else - args.v2.ucEncodeMode = atombios_get_encoder_mode(encoder); - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - dig = radeon_encoder->enc_priv; - switch (dig->dig_encoder) { - case 0: - args.v2.ucEncoderID = ASIC_INT_DIG1_ENCODER_ID; - break; - case 1: - args.v2.ucEncoderID = ASIC_INT_DIG2_ENCODER_ID; - break; - case 2: - args.v2.ucEncoderID = ASIC_INT_DIG3_ENCODER_ID; - break; - case 3: - args.v2.ucEncoderID = ASIC_INT_DIG4_ENCODER_ID; - break; - case 4: - args.v2.ucEncoderID = 
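/* The dig_encoder index chosen by radeon_atom_pick_dig_encoder() is 0-based,
 * so DIG block N is reported to SelectCRTC_Source as
 * ASIC_INT_DIG<N+1>_ENCODER_ID. */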
ASIC_INT_DIG5_ENCODER_ID; - break; - case 5: - args.v2.ucEncoderID = ASIC_INT_DIG6_ENCODER_ID; - break; - } - break; - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - args.v2.ucEncoderID = ASIC_INT_DVO_ENCODER_ID; - break; - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) - args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; - else - args.v2.ucEncoderID = ASIC_INT_DAC1_ENCODER_ID; - break; - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) - args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; - else if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT)) - args.v2.ucEncoderID = ASIC_INT_TV_ENCODER_ID; - else - args.v2.ucEncoderID = ASIC_INT_DAC2_ENCODER_ID; - break; - } - break; - } - break; - default: - DRM_ERROR("Unknown table version: %d, %d\n", frev, crev); - return; - } - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - - /* update scratch regs with new routing */ - radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id); -} - -static void -atombios_apply_encoder_quirks(struct drm_encoder *encoder, - struct drm_display_mode *mode) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); - - /* Funky macbooks */ - if ((dev->pdev->device == 0x71C5) && - (dev->pdev->subsystem_vendor == 0x106b) && - (dev->pdev->subsystem_device == 0x0080)) { - if (radeon_encoder->devices & ATOM_DEVICE_LCD1_SUPPORT) { - uint32_t lvtma_bit_depth_control = RREG32(AVIVO_LVTMA_BIT_DEPTH_CONTROL); - - lvtma_bit_depth_control &= ~AVIVO_LVTMA_BIT_DEPTH_CONTROL_TRUNCATE_EN; - lvtma_bit_depth_control &= ~AVIVO_LVTMA_BIT_DEPTH_CONTROL_SPATIAL_DITHER_EN; - - WREG32(AVIVO_LVTMA_BIT_DEPTH_CONTROL, lvtma_bit_depth_control); - } - } - - /* set scaler clears this on some chips */ - if (ASIC_IS_AVIVO(rdev) && - (!(radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)))) { - if (ASIC_IS_DCE4(rdev)) { - if (mode->flags & DRM_MODE_FLAG_INTERLACE) - WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset, - EVERGREEN_INTERLEAVE_EN); - else - WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset, 0); - } else { - if (mode->flags & DRM_MODE_FLAG_INTERLACE) - WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset, - AVIVO_D1MODE_INTERLEAVE_EN); - else - WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset, 0); - } - } -} - -static int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_encoder *test_encoder; - struct radeon_encoder_atom_dig *dig; - uint32_t dig_enc_in_use = 0; - - /* DCE4/5 */ - if (ASIC_IS_DCE4(rdev)) { - dig = radeon_encoder->enc_priv; - if (ASIC_IS_DCE41(rdev)) { - /* ontario follows DCE4 */ - if (rdev->family == CHIP_PALM) { - if (dig->linkb) - return 1; - else - return 0; - } else - /* llano follows DCE3.2 */ - return radeon_crtc->crtc_id; - } else { - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - if (dig->linkb) - return 1; - else - return 0; - break; - case 
ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - if (dig->linkb) - return 3; - else - return 2; - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - if (dig->linkb) - return 5; - else - return 4; - break; - } - } - } - - /* on DCE32 and encoder can driver any block so just crtc id */ - if (ASIC_IS_DCE32(rdev)) { - return radeon_crtc->crtc_id; - } - - /* on DCE3 - LVTMA can only be driven by DIGB */ - list_for_each_entry(test_encoder, &dev->mode_config.encoder_list, head) { - struct radeon_encoder *radeon_test_encoder; - - if (encoder == test_encoder) - continue; - - if (!radeon_encoder_is_digital(test_encoder)) - continue; - - radeon_test_encoder = to_radeon_encoder(test_encoder); - dig = radeon_test_encoder->enc_priv; - - if (dig->dig_encoder >= 0) - dig_enc_in_use |= (1 << dig->dig_encoder); - } - - if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA) { - if (dig_enc_in_use & 0x2) - DRM_ERROR("LVDS required digital encoder 2 but it was in use - stealing\n"); - return 1; - } - if (!(dig_enc_in_use & 1)) - return 0; - return 1; -} - -/* This only needs to be called once at startup */ -void -radeon_atom_encoder_init(struct radeon_device *rdev) -{ - struct drm_device *dev = rdev->ddev; - struct drm_encoder *encoder; - - list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_INIT, 0, 0); - break; - default: - break; - } - - if (ext_encoder && ASIC_IS_DCE41(rdev)) - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT); - } -} - -static void -radeon_atom_encoder_mode_set(struct drm_encoder *encoder, - struct drm_display_mode *mode, - struct drm_display_mode *adjusted_mode) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); - - radeon_encoder->pixel_clock = adjusted_mode->clock; - - if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE4(rdev)) { - if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT | ATOM_DEVICE_TV_SUPPORT)) - atombios_yuv_setup(encoder, true); - else - atombios_yuv_setup(encoder, false); - } - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_LVDS: - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: - atombios_digital_setup(encoder, PANEL_ENCODER_ACTION_ENABLE); - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - if (ASIC_IS_DCE4(rdev)) { - /* disable the transmitter */ - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); - /* setup and enable the encoder */ - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); - - /* enable the transmitter */ - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); - } else { - /* disable the encoder and transmitter */ - atombios_dig_transmitter_setup(encoder, 
ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); - atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); - - /* setup and enable the encoder and transmitter */ - atombios_dig_encoder_setup(encoder, ATOM_ENABLE, 0); - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0); - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); - } - break; - case ENCODER_OBJECT_ID_INTERNAL_DDI: - case ENCODER_OBJECT_ID_INTERNAL_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - atombios_dvo_setup(encoder, ATOM_ENABLE); - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_DAC2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: - atombios_dac_setup(encoder, ATOM_ENABLE); - if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) { - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) - atombios_tv_setup(encoder, ATOM_ENABLE); - else - atombios_tv_setup(encoder, ATOM_DISABLE); - } - break; - } - - if (ext_encoder) { - if (ASIC_IS_DCE41(rdev)) - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP); - else - atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); - } - - atombios_apply_encoder_quirks(encoder, adjusted_mode); - - if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) { - r600_hdmi_enable(encoder); - r600_hdmi_setmode(encoder, adjusted_mode); - } -} - -static bool -atombios_dac_load_detect(struct drm_encoder *encoder, struct drm_connector *connector) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - - if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | - ATOM_DEVICE_CV_SUPPORT | - ATOM_DEVICE_CRT_SUPPORT)) { - DAC_LOAD_DETECTION_PS_ALLOCATION args; - int index = GetIndexIntoMasterTable(COMMAND, DAC_LoadDetection); - uint8_t frev, crev; - - memset(&args, 0, sizeof(args)); - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - return false; - - args.sDacload.ucMisc = 0; - - if ((radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DAC1) || - (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1)) - args.sDacload.ucDacType = ATOM_DAC_A; - else - args.sDacload.ucDacType = ATOM_DAC_B; - - if (radeon_connector->devices & ATOM_DEVICE_CRT1_SUPPORT) - args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_CRT1_SUPPORT); - else if (radeon_connector->devices & ATOM_DEVICE_CRT2_SUPPORT) - args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_CRT2_SUPPORT); - else if (radeon_connector->devices & ATOM_DEVICE_CV_SUPPORT) { - args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_CV_SUPPORT); - if (crev >= 3) - args.sDacload.ucMisc = DAC_LOAD_MISC_YPrPb; - } else if (radeon_connector->devices & ATOM_DEVICE_TV1_SUPPORT) { - args.sDacload.usDeviceID = cpu_to_le16(ATOM_DEVICE_TV1_SUPPORT); - if (crev >= 3) - args.sDacload.ucMisc = DAC_LOAD_MISC_YPrPb; - } - - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); - - return true; - } else - return false; -} - -static enum drm_connector_status -radeon_atom_dac_detect(struct drm_encoder *encoder, struct drm_connector *connector) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = 
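/* DAC load detection: atombios_dac_load_detect() above fires the
 * DAC_LoadDetection table for the device hanging off this connector, and the
 * result is read back here from the BIOS_0_SCRATCH register, whose ATOM_S0_*
 * bits say which CRT/TV/CV outputs actually saw a load. */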
to_radeon_encoder(encoder); - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - uint32_t bios_0_scratch; - - if (!atombios_dac_load_detect(encoder, connector)) { - DRM_DEBUG_KMS("detect returned false \n"); - return connector_status_unknown; - } - - if (rdev->family >= CHIP_R600) - bios_0_scratch = RREG32(R600_BIOS_0_SCRATCH); - else - bios_0_scratch = RREG32(RADEON_BIOS_0_SCRATCH); - - DRM_DEBUG_KMS("Bios 0 scratch %x %08x\n", bios_0_scratch, radeon_encoder->devices); - if (radeon_connector->devices & ATOM_DEVICE_CRT1_SUPPORT) { - if (bios_0_scratch & ATOM_S0_CRT1_MASK) - return connector_status_connected; - } - if (radeon_connector->devices & ATOM_DEVICE_CRT2_SUPPORT) { - if (bios_0_scratch & ATOM_S0_CRT2_MASK) - return connector_status_connected; - } - if (radeon_connector->devices & ATOM_DEVICE_CV_SUPPORT) { - if (bios_0_scratch & (ATOM_S0_CV_MASK|ATOM_S0_CV_MASK_A)) - return connector_status_connected; - } - if (radeon_connector->devices & ATOM_DEVICE_TV1_SUPPORT) { - if (bios_0_scratch & (ATOM_S0_TV1_COMPOSITE | ATOM_S0_TV1_COMPOSITE_A)) - return connector_status_connected; /* CTV */ - else if (bios_0_scratch & (ATOM_S0_TV1_SVIDEO | ATOM_S0_TV1_SVIDEO_A)) - return connector_status_connected; /* STV */ - } - return connector_status_disconnected; -} - -static enum drm_connector_status -radeon_atom_dig_detect(struct drm_encoder *encoder, struct drm_connector *connector) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); - u32 bios_0_scratch; - - if (!ASIC_IS_DCE4(rdev)) - return connector_status_unknown; - - if (!ext_encoder) - return connector_status_unknown; - - if ((radeon_connector->devices & ATOM_DEVICE_CRT_SUPPORT) == 0) - return connector_status_unknown; - - /* load detect on the dp bridge */ - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_DACLOAD_DETECTION); - - bios_0_scratch = RREG32(R600_BIOS_0_SCRATCH); - - DRM_DEBUG_KMS("Bios 0 scratch %x %08x\n", bios_0_scratch, radeon_encoder->devices); - if (radeon_connector->devices & ATOM_DEVICE_CRT1_SUPPORT) { - if (bios_0_scratch & ATOM_S0_CRT1_MASK) - return connector_status_connected; - } - if (radeon_connector->devices & ATOM_DEVICE_CRT2_SUPPORT) { - if (bios_0_scratch & ATOM_S0_CRT2_MASK) - return connector_status_connected; - } - if (radeon_connector->devices & ATOM_DEVICE_CV_SUPPORT) { - if (bios_0_scratch & (ATOM_S0_CV_MASK|ATOM_S0_CV_MASK_A)) - return connector_status_connected; - } - if (radeon_connector->devices & ATOM_DEVICE_TV1_SUPPORT) { - if (bios_0_scratch & (ATOM_S0_TV1_COMPOSITE | ATOM_S0_TV1_COMPOSITE_A)) - return connector_status_connected; /* CTV */ - else if (bios_0_scratch & (ATOM_S0_TV1_SVIDEO | ATOM_S0_TV1_SVIDEO_A)) - return connector_status_connected; /* STV */ - } - return connector_status_disconnected; -} - -void -radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder) -{ - struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder); - - if (ext_encoder) - /* ddc_setup on the dp bridge */ - atombios_external_encoder_setup(encoder, ext_encoder, - EXTERNAL_ENCODER_ACTION_V3_DDC_SETUP); - -} - -static void radeon_atom_encoder_prepare(struct drm_encoder *encoder) -{ - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct drm_connector 
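/* Mode-set preparation, in order: pick a DIG block for digital outputs,
 * lock the output and force it off, select the clock/data port when the
 * connector sits behind a DDC router, power up eDP panels so the panel is
 * active for the mode set, and finally program the CRTC-source routing that
 * the later PLL/SS setup depends on. */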
*connector = radeon_get_connector_for_encoder(encoder); - - if ((radeon_encoder->active_device & - (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) || - radeon_encoder_is_dp_bridge(encoder)) { - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; - if (dig) - dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder); - } - - radeon_atom_output_lock(encoder, true); - radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); - - if (connector) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - - /* select the clock/data port if it uses a router */ - if (radeon_connector->router.cd_valid) - radeon_router_select_cd_port(radeon_connector); - - /* turn eDP panel on for mode set */ - if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) - atombios_set_edp_panel_power(connector, - ATOM_TRANSMITTER_ACTION_POWER_ON); - } - - /* this is needed for the pll/ss setup to work correctly in some cases */ - atombios_set_encoder_crtc_source(encoder); -} - -static void radeon_atom_encoder_commit(struct drm_encoder *encoder) -{ - radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_ON); - radeon_atom_output_lock(encoder, false); -} - -static void radeon_atom_encoder_disable(struct drm_encoder *encoder) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_encoder_atom_dig *dig; - - /* check for pre-DCE3 cards with shared encoders; - * can't really use the links individually, so don't disable - * the encoder if it's in use by another connector - */ - if (!ASIC_IS_DCE3(rdev)) { - struct drm_encoder *other_encoder; - struct radeon_encoder *other_radeon_encoder; - - list_for_each_entry(other_encoder, &dev->mode_config.encoder_list, head) { - other_radeon_encoder = to_radeon_encoder(other_encoder); - if ((radeon_encoder->encoder_id == other_radeon_encoder->encoder_id) && - drm_helper_encoder_in_use(other_encoder)) - goto disable_done; - } - } - - radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_LVDS: - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: - atombios_digital_setup(encoder, PANEL_ENCODER_ACTION_DISABLE); - break; - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - if (ASIC_IS_DCE4(rdev)) - /* disable the transmitter */ - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); - else { - /* disable the encoder and transmitter */ - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); - atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); - } - break; - case ENCODER_OBJECT_ID_INTERNAL_DDI: - case ENCODER_OBJECT_ID_INTERNAL_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - atombios_dvo_setup(encoder, ATOM_DISABLE); - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_DAC2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: - atombios_dac_setup(encoder, ATOM_DISABLE); - if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) - atombios_tv_setup(encoder, ATOM_DISABLE); - break; - } - -disable_done: - if (radeon_encoder_is_digital(encoder)) { - if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) - 
r600_hdmi_disable(encoder); - dig = radeon_encoder->enc_priv; - dig->dig_encoder = -1; - } - radeon_encoder->active_device = 0; -} - -/* these are handled by the primary encoders */ -static void radeon_atom_ext_prepare(struct drm_encoder *encoder) -{ - -} - -static void radeon_atom_ext_commit(struct drm_encoder *encoder) -{ - -} - -static void -radeon_atom_ext_mode_set(struct drm_encoder *encoder, - struct drm_display_mode *mode, - struct drm_display_mode *adjusted_mode) -{ - -} - -static void radeon_atom_ext_disable(struct drm_encoder *encoder) -{ - -} - -static void -radeon_atom_ext_dpms(struct drm_encoder *encoder, int mode) -{ - -} - -static bool radeon_atom_ext_mode_fixup(struct drm_encoder *encoder, - struct drm_display_mode *mode, - struct drm_display_mode *adjusted_mode) -{ - return true; -} - -static const struct drm_encoder_helper_funcs radeon_atom_ext_helper_funcs = { - .dpms = radeon_atom_ext_dpms, - .mode_fixup = radeon_atom_ext_mode_fixup, - .prepare = radeon_atom_ext_prepare, - .mode_set = radeon_atom_ext_mode_set, - .commit = radeon_atom_ext_commit, - .disable = radeon_atom_ext_disable, - /* no detect for TMDS/LVDS yet */ -}; - -static const struct drm_encoder_helper_funcs radeon_atom_dig_helper_funcs = { - .dpms = radeon_atom_encoder_dpms, - .mode_fixup = radeon_atom_mode_fixup, - .prepare = radeon_atom_encoder_prepare, - .mode_set = radeon_atom_encoder_mode_set, - .commit = radeon_atom_encoder_commit, - .disable = radeon_atom_encoder_disable, - .detect = radeon_atom_dig_detect, -}; - -static const struct drm_encoder_helper_funcs radeon_atom_dac_helper_funcs = { - .dpms = radeon_atom_encoder_dpms, - .mode_fixup = radeon_atom_mode_fixup, - .prepare = radeon_atom_encoder_prepare, - .mode_set = radeon_atom_encoder_mode_set, - .commit = radeon_atom_encoder_commit, - .detect = radeon_atom_dac_detect, -}; - -void radeon_enc_destroy(struct drm_encoder *encoder) -{ - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - kfree(radeon_encoder->enc_priv); - drm_encoder_cleanup(encoder); - kfree(radeon_encoder); -} - -static const struct drm_encoder_funcs radeon_atom_enc_funcs = { - .destroy = radeon_enc_destroy, -}; - -struct radeon_encoder_atom_dac * -radeon_atombios_set_dac_info(struct radeon_encoder *radeon_encoder) -{ - struct drm_device *dev = radeon_encoder->base.dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder_atom_dac *dac = kzalloc(sizeof(struct radeon_encoder_atom_dac), GFP_KERNEL); - - if (!dac) - return NULL; - - dac->tv_std = radeon_atombios_get_tv_info(rdev); - return dac; -} - -struct radeon_encoder_atom_dig * -radeon_atombios_set_dig_info(struct radeon_encoder *radeon_encoder) -{ - int encoder_enum = (radeon_encoder->encoder_enum & ENUM_ID_MASK) >> ENUM_ID_SHIFT; - struct radeon_encoder_atom_dig *dig = kzalloc(sizeof(struct radeon_encoder_atom_dig), GFP_KERNEL); - - if (!dig) - return NULL; - - /* coherent mode by default */ - dig->coherent_mode = true; - dig->dig_encoder = -1; - - if (encoder_enum == 2) - dig->linkb = true; - else - dig->linkb = false; - - return dig; -} - -void -radeon_add_atom_encoder(struct drm_device *dev, - uint32_t encoder_enum, - uint32_t supported_device, - u16 caps) -{ - struct radeon_device *rdev = dev->dev_private; - struct drm_encoder *encoder; - struct radeon_encoder *radeon_encoder; - - /* see if we already added it */ - list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { - radeon_encoder = to_radeon_encoder(encoder); - if (radeon_encoder->encoder_enum == encoder_enum) { - 
radeon_encoder->devices |= supported_device; - return; - } - - } - - /* add a new one */ - radeon_encoder = kzalloc(sizeof(struct radeon_encoder), GFP_KERNEL); - if (!radeon_encoder) - return; - - encoder = &radeon_encoder->base; - switch (rdev->num_crtc) { - case 1: - encoder->possible_crtcs = 0x1; - break; - case 2: - default: - encoder->possible_crtcs = 0x3; - break; - case 4: - encoder->possible_crtcs = 0xf; - break; - case 6: - encoder->possible_crtcs = 0x3f; - break; - } - - radeon_encoder->enc_priv = NULL; - - radeon_encoder->encoder_enum = encoder_enum; - radeon_encoder->encoder_id = (encoder_enum & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT; - radeon_encoder->devices = supported_device; - radeon_encoder->rmx_type = RMX_OFF; - radeon_encoder->underscan_type = UNDERSCAN_OFF; - radeon_encoder->is_ext_encoder = false; - radeon_encoder->caps = caps; - - switch (radeon_encoder->encoder_id) { - case ENCODER_OBJECT_ID_INTERNAL_LVDS: - case ENCODER_OBJECT_ID_INTERNAL_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - radeon_encoder->rmx_type = RMX_FULL; - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); - radeon_encoder->enc_priv = radeon_atombios_get_lvds_info(radeon_encoder); - } else { - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); - radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); - } - drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs); - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC1: - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); - radeon_encoder->enc_priv = radeon_atombios_set_dac_info(radeon_encoder); - drm_encoder_helper_add(encoder, &radeon_atom_dac_helper_funcs); - break; - case ENCODER_OBJECT_ID_INTERNAL_DAC2: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TVDAC); - radeon_encoder->enc_priv = radeon_atombios_set_dac_info(radeon_encoder); - drm_encoder_helper_add(encoder, &radeon_atom_dac_helper_funcs); - break; - case ENCODER_OBJECT_ID_INTERNAL_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1: - case ENCODER_OBJECT_ID_INTERNAL_DDI: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - radeon_encoder->rmx_type = RMX_FULL; - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); - radeon_encoder->enc_priv = radeon_atombios_get_lvds_info(radeon_encoder); - } else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) { - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); - radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); - } else { - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); - radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder); - } - drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs); - break; - case ENCODER_OBJECT_ID_SI170B: - case ENCODER_OBJECT_ID_CH7303: - case ENCODER_OBJECT_ID_EXTERNAL_SDVOA: - case ENCODER_OBJECT_ID_EXTERNAL_SDVOB: - case ENCODER_OBJECT_ID_TITFP513: - case ENCODER_OBJECT_ID_VT1623: - case ENCODER_OBJECT_ID_HDMI_SI1930: - case ENCODER_OBJECT_ID_TRAVIS: - case 
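The switch on rdev->num_crtc above just builds a bitmask with one bit per CRTC the new encoder is allowed to drive. For the CRTC counts radeon hardware actually has (1, 2, 4 and 6) the mask equals (1 << num_crtc) - 1, and the driver falls back to 0x3 for anything unexpected; a small standalone check of that equivalence:

#include <assert.h>
#include <stdint.h>

/* One bit per CRTC the encoder may drive; matches the switch above for the
 * CRTC counts radeon hardware actually has (1, 2, 4, 6). The driver itself
 * falls back to 0x3 for any other count. */
static uint32_t possible_crtcs_mask(unsigned num_crtc)
{
    return (1u << num_crtc) - 1;
}

int main(void)
{
    assert(possible_crtcs_mask(1) == 0x1);
    assert(possible_crtcs_mask(2) == 0x3);
    assert(possible_crtcs_mask(4) == 0xf);
    assert(possible_crtcs_mask(6) == 0x3f);
    return 0;
}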
ENCODER_OBJECT_ID_NUTMEG: - /* these are handled by the primary encoders */ - radeon_encoder->is_ext_encoder = true; - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS); - else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC); - else - drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS); - drm_encoder_helper_add(encoder, &radeon_atom_ext_helper_funcs); - break; - } -} diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c index fdc3a9a54bf8..ba7ab79e12c1 100644 --- a/drivers/gpu/drm/radeon/radeon_gart.c +++ b/drivers/gpu/drm/radeon/radeon_gart.c @@ -49,27 +49,27 @@ int radeon_gart_table_ram_alloc(struct radeon_device *rdev) rdev->gart.table_size >> PAGE_SHIFT); } #endif - rdev->gart.table.ram.ptr = ptr; - memset((void *)rdev->gart.table.ram.ptr, 0, rdev->gart.table_size); + rdev->gart.ptr = ptr; + memset((void *)rdev->gart.ptr, 0, rdev->gart.table_size); return 0; } void radeon_gart_table_ram_free(struct radeon_device *rdev) { - if (rdev->gart.table.ram.ptr == NULL) { + if (rdev->gart.ptr == NULL) { return; } #ifdef CONFIG_X86 if (rdev->family == CHIP_RS400 || rdev->family == CHIP_RS480 || rdev->family == CHIP_RS690 || rdev->family == CHIP_RS740) { - set_memory_wb((unsigned long)rdev->gart.table.ram.ptr, + set_memory_wb((unsigned long)rdev->gart.ptr, rdev->gart.table_size >> PAGE_SHIFT); } #endif pci_free_consistent(rdev->pdev, rdev->gart.table_size, - (void *)rdev->gart.table.ram.ptr, + (void *)rdev->gart.ptr, rdev->gart.table_addr); - rdev->gart.table.ram.ptr = NULL; + rdev->gart.ptr = NULL; rdev->gart.table_addr = 0; } @@ -77,10 +77,10 @@ int radeon_gart_table_vram_alloc(struct radeon_device *rdev) { int r; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { r = radeon_bo_create(rdev, rdev->gart.table_size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, - &rdev->gart.table.vram.robj); + &rdev->gart.robj); if (r) { return r; } @@ -93,38 +93,46 @@ int radeon_gart_table_vram_pin(struct radeon_device *rdev) uint64_t gpu_addr; int r; - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); + r = radeon_bo_reserve(rdev->gart.robj, false); if (unlikely(r != 0)) return r; - r = radeon_bo_pin(rdev->gart.table.vram.robj, + r = radeon_bo_pin(rdev->gart.robj, RADEON_GEM_DOMAIN_VRAM, &gpu_addr); if (r) { - radeon_bo_unreserve(rdev->gart.table.vram.robj); + radeon_bo_unreserve(rdev->gart.robj); return r; } - r = radeon_bo_kmap(rdev->gart.table.vram.robj, - (void **)&rdev->gart.table.vram.ptr); + r = radeon_bo_kmap(rdev->gart.robj, &rdev->gart.ptr); if (r) - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); + radeon_bo_unpin(rdev->gart.robj); + radeon_bo_unreserve(rdev->gart.robj); rdev->gart.table_addr = gpu_addr; return r; } -void radeon_gart_table_vram_free(struct radeon_device *rdev) +void radeon_gart_table_vram_unpin(struct radeon_device *rdev) { int r; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { return; } - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); + r = radeon_bo_reserve(rdev->gart.robj, false); if (likely(r == 0)) { - radeon_bo_kunmap(rdev->gart.table.vram.robj); - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); + radeon_bo_kunmap(rdev->gart.robj); + radeon_bo_unpin(rdev->gart.robj); + 
radeon_bo_unreserve(rdev->gart.robj); + rdev->gart.ptr = NULL; } - radeon_bo_unref(&rdev->gart.table.vram.robj); +} + +void radeon_gart_table_vram_free(struct radeon_device *rdev) +{ + if (rdev->gart.robj == NULL) { + return; + } + radeon_gart_table_vram_unpin(rdev); + radeon_bo_unref(&rdev->gart.robj); } @@ -151,12 +159,14 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset, if (rdev->gart.pages[p]) { if (!rdev->gart.ttm_alloced[p]) pci_unmap_page(rdev->pdev, rdev->gart.pages_addr[p], - PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); + PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); rdev->gart.pages[p] = NULL; rdev->gart.pages_addr[p] = rdev->dummy_page.addr; page_base = rdev->gart.pages_addr[p]; for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) { - radeon_gart_set_page(rdev, t, page_base); + if (rdev->gart.ptr) { + radeon_gart_set_page(rdev, t, page_base); + } page_base += RADEON_GPU_PAGE_SIZE; } } @@ -199,10 +209,12 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset, } } rdev->gart.pages[p] = pagelist[i]; - page_base = rdev->gart.pages_addr[p]; - for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) { - radeon_gart_set_page(rdev, t, page_base); - page_base += RADEON_GPU_PAGE_SIZE; + if (rdev->gart.ptr) { + page_base = rdev->gart.pages_addr[p]; + for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) { + radeon_gart_set_page(rdev, t, page_base); + page_base += RADEON_GPU_PAGE_SIZE; + } } } mb(); @@ -215,6 +227,9 @@ void radeon_gart_restore(struct radeon_device *rdev) int i, j, t; u64 page_base; + if (!rdev->gart.ptr) { + return; + } for (i = 0, t = 0; i < rdev->gart.num_cpu_pages; i++) { page_base = rdev->gart.pages_addr[i]; for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) { diff --git a/drivers/gpu/drm/radeon/radeon_i2c.c b/drivers/gpu/drm/radeon/radeon_i2c.c index e6d110ce2331..7bb1b079f480 100644 --- a/drivers/gpu/drm/radeon/radeon_i2c.c +++ b/drivers/gpu/drm/radeon/radeon_i2c.c @@ -34,7 +34,7 @@ * radeon_ddc_probe * */ -bool radeon_ddc_probe(struct radeon_connector *radeon_connector, bool requires_extended_probe) +bool radeon_ddc_probe(struct radeon_connector *radeon_connector) { u8 out = 0x0; u8 buf[8]; @@ -49,15 +49,11 @@ bool radeon_ddc_probe(struct radeon_connector *radeon_connector, bool requires_e { .addr = 0x50, .flags = I2C_M_RD, - .len = 1, + .len = 8, .buf = buf, } }; - /* Read 8 bytes from i2c for extended probe of EDID header */ - if (requires_extended_probe) - msgs[1].len = 8; - /* on hw with routers, select right port */ if (radeon_connector->router.ddc_valid) radeon_router_select_ddc_port(radeon_connector); @@ -66,17 +62,15 @@ bool radeon_ddc_probe(struct radeon_connector *radeon_connector, bool requires_e if (ret != 2) /* Couldn't find an accessible DDC on this connector */ return false; - if (requires_extended_probe) { - /* Probe also for valid EDID header - * EDID header starts with: - * 0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00. - * Only the first 6 bytes must be valid as - * drm_edid_block_valid() can fix the last 2 bytes */ - if (drm_edid_header_is_valid(buf) < 6) { - /* Couldn't find an accessible EDID on this - * connector */ - return false; - } + /* Probe also for valid EDID header + * EDID header starts with: + * 0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00. 
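The radeon_gart.c hunks above collapse the old gart.table.ram / gart.table.vram split into a single gart.ptr plus gart.robj pair, add radeon_gart_table_vram_unpin(), and make bind, unbind and restore write page-table entries only while gart.ptr is mapped. The fragment below is a self-contained imitation of that guard, with a made-up struct standing in for the radeon GART state:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct fake_gart {
    uint64_t *ptr;            /* CPU mapping of the page table, may be NULL */
    unsigned  num_gpu_pages;
};

/* Write one entry only if the table is currently mapped. */
static int gart_set_page(struct fake_gart *gart, unsigned i, uint64_t addr)
{
    if (i >= gart->num_gpu_pages)
        return -1;
    if (!gart->ptr)           /* table unpinned (e.g. around suspend): skip the write */
        return 0;
    gart->ptr[i] = addr;
    return 0;
}

int main(void)
{
    uint64_t table[4] = {0};
    struct fake_gart gart = { .ptr = NULL, .num_gpu_pages = 4 };

    gart_set_page(&gart, 1, 0x1000);   /* silently skipped: not mapped yet */
    gart.ptr = table;                  /* "pin" the table */
    gart_set_page(&gart, 1, 0x1000);
    printf("%llx\n", (unsigned long long)table[1]);
    return 0;
}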
+ * Only the first 6 bytes must be valid as + * drm_edid_block_valid() can fix the last 2 bytes */ + if (drm_edid_header_is_valid(buf) < 6) { + /* Couldn't find an accessible EDID on this + * connector */ + return false; } return true; } diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c index 9ec830c77af0..8f86aeb26693 100644 --- a/drivers/gpu/drm/radeon/radeon_irq_kms.c +++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c @@ -67,10 +67,10 @@ void radeon_driver_irq_preinstall_kms(struct drm_device *dev) /* Disable *all* interrupts */ rdev->irq.sw_int = false; rdev->irq.gui_idle = false; - for (i = 0; i < rdev->num_crtc; i++) - rdev->irq.crtc_vblank_int[i] = false; - for (i = 0; i < 6; i++) { + for (i = 0; i < RADEON_MAX_HPD_PINS; i++) rdev->irq.hpd[i] = false; + for (i = 0; i < RADEON_MAX_CRTCS; i++) { + rdev->irq.crtc_vblank_int[i] = false; rdev->irq.pflip[i] = false; } radeon_irq_set(rdev); @@ -99,15 +99,55 @@ void radeon_driver_irq_uninstall_kms(struct drm_device *dev) /* Disable *all* interrupts */ rdev->irq.sw_int = false; rdev->irq.gui_idle = false; - for (i = 0; i < rdev->num_crtc; i++) - rdev->irq.crtc_vblank_int[i] = false; - for (i = 0; i < 6; i++) { + for (i = 0; i < RADEON_MAX_HPD_PINS; i++) rdev->irq.hpd[i] = false; + for (i = 0; i < RADEON_MAX_CRTCS; i++) { + rdev->irq.crtc_vblank_int[i] = false; rdev->irq.pflip[i] = false; } radeon_irq_set(rdev); } +static bool radeon_msi_ok(struct radeon_device *rdev) +{ + /* RV370/RV380 was first asic with MSI support */ + if (rdev->family < CHIP_RV380) + return false; + + /* MSIs don't work on AGP */ + if (rdev->flags & RADEON_IS_AGP) + return false; + + /* force MSI on */ + if (radeon_msi == 1) + return true; + else if (radeon_msi == 0) + return false; + + /* Quirks */ + /* HP RS690 only seems to work with MSIs. */ + if ((rdev->pdev->device == 0x791f) && + (rdev->pdev->subsystem_vendor == 0x103c) && + (rdev->pdev->subsystem_device == 0x30c2)) + return true; + + /* Dell RS690 only seems to work with MSIs. */ + if ((rdev->pdev->device == 0x791f) && + (rdev->pdev->subsystem_vendor == 0x1028) && + (rdev->pdev->subsystem_device == 0x01fd)) + return true; + + if (rdev->flags & RADEON_IS_IGP) { + /* APUs work fine with MSIs */ + if (rdev->family >= CHIP_PALM) + return true; + /* lots of IGPs have problems with MSIs */ + return false; + } + + return true; +} + int radeon_irq_kms_init(struct radeon_device *rdev) { int i; @@ -124,12 +164,8 @@ int radeon_irq_kms_init(struct radeon_device *rdev) } /* enable msi */ rdev->msi_enabled = 0; - /* MSIs don't seem to work reliably on all IGP - * chips. Disable MSI on them for now. 
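radeon_msi_ok() above gathers the whole MSI policy in one place: a hard no for pre-RV380 parts and AGP, a radeon_msi module-parameter override, two board-specific RS690 quirks keyed on PCI subsystem IDs, and a conservative default for IGPs. One common way to keep such per-board exceptions maintainable is a quirk table walk; the entries below reuse the IDs visible in the hunk purely as examples, everything else is invented for illustration:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct msi_quirk {
    uint16_t device, subvendor, subdevice;
    bool     msi_ok;
};

/* Example entries only; a real quirk list lives in the driver. */
static const struct msi_quirk quirks[] = {
    { 0x791f, 0x103c, 0x30c2, true },  /* an RS690 board that only works with MSI */
    { 0x791f, 0x1028, 0x01fd, true },
};

static bool quirk_lookup(uint16_t dev, uint16_t sv, uint16_t sd, bool def)
{
    for (size_t i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++)
        if (quirks[i].device == dev && quirks[i].subvendor == sv &&
            quirks[i].subdevice == sd)
            return quirks[i].msi_ok;
    return def;               /* no quirk: fall back to the family-based default */
}

int main(void)
{
    printf("%d\n", quirk_lookup(0x791f, 0x103c, 0x30c2, false));  /* prints 1 */
    return 0;
}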
- */ - if ((rdev->family >= CHIP_RV380) && - ((!(rdev->flags & RADEON_IS_IGP)) || (rdev->family >= CHIP_PALM)) && - (!(rdev->flags & RADEON_IS_AGP))) { + + if (radeon_msi_ok(rdev)) { int ret = pci_enable_msi(rdev->pdev); if (!ret) { rdev->msi_enabled = 1; diff --git a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c index 41a5d48e657b..daadf2111040 100644 --- a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c +++ b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c @@ -991,12 +991,6 @@ static bool radeon_crtc_mode_fixup(struct drm_crtc *crtc, struct drm_display_mode *mode, struct drm_display_mode *adjusted_mode) { - struct drm_device *dev = crtc->dev; - struct radeon_device *rdev = dev->dev_private; - - /* adjust pm to upcoming mode change */ - radeon_pm_compute_clocks(rdev); - if (!radeon_crtc_scaling_mode_fixup(crtc, mode, adjusted_mode)) return false; return true; diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h index ed0178f03235..2c2e75ef8a37 100644 --- a/drivers/gpu/drm/radeon/radeon_mode.h +++ b/drivers/gpu/drm/radeon/radeon_mode.h @@ -438,9 +438,6 @@ struct radeon_connector { struct radeon_i2c_chan *ddc_bus; /* some systems have an hdmi and vga port with a shared ddc line */ bool shared_ddc; - /* for some Radeon chip families we apply an additional EDID header - check as part of the DDC probe */ - bool requires_extended_probe; bool use_digital; /* we need to mind the EDID between detect and get modes due to analog/digital/tvencoder */ @@ -459,6 +456,8 @@ struct radeon_framebuffer { struct drm_gem_object *obj; }; +#define ENCODER_MODE_IS_DP(em) (((em) == ATOM_ENCODER_MODE_DP) || \ + ((em) == ATOM_ENCODER_MODE_DP_MST)) extern enum radeon_tv_std radeon_combios_get_tv_info(struct radeon_device *rdev); @@ -468,8 +467,8 @@ radeon_atombios_get_tv_info(struct radeon_device *rdev); extern struct drm_connector * radeon_get_connector_for_encoder(struct drm_encoder *encoder); -extern bool radeon_encoder_is_dp_bridge(struct drm_encoder *encoder); -extern bool radeon_connector_encoder_is_dp_bridge(struct drm_connector *connector); +extern u16 radeon_encoder_get_dp_bridge_encoder_id(struct drm_encoder *encoder); +extern u16 radeon_connector_encoder_get_dp_bridge_encoder_id(struct drm_connector *connector); extern bool radeon_connector_encoder_is_hbr2(struct drm_connector *connector); extern bool radeon_connector_is_dp12_capable(struct drm_connector *connector); @@ -489,7 +488,7 @@ extern void atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set); extern void radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder); -extern struct drm_encoder *radeon_atom_get_external_encoder(struct drm_encoder *encoder); +extern struct drm_encoder *radeon_get_external_encoder(struct drm_encoder *encoder); extern int radeon_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode, u8 write_byte, u8 *read_byte); @@ -519,8 +518,7 @@ extern void radeon_i2c_put_byte(struct radeon_i2c_chan *i2c, u8 val); extern void radeon_router_select_ddc_port(struct radeon_connector *radeon_connector); extern void radeon_router_select_cd_port(struct radeon_connector *radeon_connector); -extern bool radeon_ddc_probe(struct radeon_connector *radeon_connector, - bool requires_extended_probe); +extern bool radeon_ddc_probe(struct radeon_connector *radeon_connector); extern int radeon_ddc_get_modes(struct radeon_connector *radeon_connector); extern struct drm_encoder *radeon_best_encoder(struct drm_connector 
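radeon_mode.h also gains ENCODER_MODE_IS_DP(), which folds the DP and DP-MST comparisons into one macro so later code does not have to repeat both checks. Functionally it is nothing more than the following, shown here with stand-in constants rather than the real ATOM_ENCODER_MODE_* values:

#include <assert.h>

/* Stand-in values; the real ATOM_ENCODER_MODE_* constants come from the ATOM headers. */
enum { MODE_DP = 0, MODE_DVI = 2, MODE_DP_MST = 5 };

#define ENCODER_MODE_IS_DP(em) (((em) == MODE_DP) || ((em) == MODE_DP_MST))

int main(void)
{
    assert(ENCODER_MODE_IS_DP(MODE_DP));
    assert(ENCODER_MODE_IS_DP(MODE_DP_MST));
    assert(!ENCODER_MODE_IS_DP(MODE_DVI));
    return 0;
}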
*connector); diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c index 6fabe89fa6a1..78a665bd9519 100644 --- a/drivers/gpu/drm/radeon/radeon_pm.c +++ b/drivers/gpu/drm/radeon/radeon_pm.c @@ -53,6 +53,24 @@ static void radeon_pm_set_clocks(struct radeon_device *rdev); #define ACPI_AC_CLASS "ac_adapter" +int radeon_pm_get_type_index(struct radeon_device *rdev, + enum radeon_pm_state_type ps_type, + int instance) +{ + int i; + int found_instance = -1; + + for (i = 0; i < rdev->pm.num_power_states; i++) { + if (rdev->pm.power_state[i].type == ps_type) { + found_instance++; + if (found_instance == instance) + return i; + } + } + /* return default if no match */ + return rdev->pm.default_power_state_index; +} + #ifdef CONFIG_ACPI static int radeon_acpi_event(struct notifier_block *nb, unsigned long val, diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c index 89a6e1ecea8d..06b90c87f8f3 100644 --- a/drivers/gpu/drm/radeon/rs400.c +++ b/drivers/gpu/drm/radeon/rs400.c @@ -77,7 +77,7 @@ int rs400_gart_init(struct radeon_device *rdev) { int r; - if (rdev->gart.table.ram.ptr) { + if (rdev->gart.ptr) { WARN(1, "RS400 GART already initialized\n"); return 0; } @@ -212,6 +212,7 @@ void rs400_gart_fini(struct radeon_device *rdev) int rs400_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) { uint32_t entry; + u32 *gtt = rdev->gart.ptr; if (i < 0 || i > rdev->gart.num_gpu_pages) { return -EINVAL; @@ -221,7 +222,7 @@ int rs400_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) ((upper_32_bits(addr) & 0xff) << 4) | RS400_PTE_WRITEABLE | RS400_PTE_READABLE; entry = cpu_to_le32(entry); - rdev->gart.table.ram.ptr[i] = entry; + gtt[i] = entry; return 0; } diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c index 9320dd6404f6..481b99e89f65 100644 --- a/drivers/gpu/drm/radeon/rs600.c +++ b/drivers/gpu/drm/radeon/rs600.c @@ -287,6 +287,7 @@ void rs600_hpd_init(struct radeon_device *rdev) default: break; } + radeon_hpd_set_polarity(rdev, radeon_connector->hpd.hpd); } if (rdev->irq.installed) rs600_irq_set(rdev); @@ -413,7 +414,7 @@ int rs600_gart_init(struct radeon_device *rdev) { int r; - if (rdev->gart.table.vram.robj) { + if (rdev->gart.robj) { WARN(1, "RS600 GART already initialized\n"); return 0; } @@ -431,7 +432,7 @@ static int rs600_gart_enable(struct radeon_device *rdev) u32 tmp; int r, i; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { dev_err(rdev->dev, "No VRAM object for PCIE GART.\n"); return -EINVAL; } @@ -494,20 +495,12 @@ static int rs600_gart_enable(struct radeon_device *rdev) void rs600_gart_disable(struct radeon_device *rdev) { u32 tmp; - int r; /* FIXME: disable out of gart access */ WREG32_MC(R_000100_MC_PT0_CNTL, 0); tmp = RREG32_MC(R_000009_MC_CNTL1); WREG32_MC(R_000009_MC_CNTL1, tmp & C_000009_ENABLE_PAGE_TABLES); - if (rdev->gart.table.vram.robj) { - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); - if (r == 0) { - radeon_bo_kunmap(rdev->gart.table.vram.robj); - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); - } - } + radeon_gart_table_vram_unpin(rdev); } void rs600_gart_fini(struct radeon_device *rdev) @@ -525,7 +518,7 @@ void rs600_gart_fini(struct radeon_device *rdev) int rs600_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) { - void __iomem *ptr = (void *)rdev->gart.table.vram.ptr; + void __iomem *ptr = (void *)rdev->gart.ptr; if (i < 0 || i > rdev->gart.num_gpu_pages) { return 
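radeon_pm_get_type_index() above returns the index of the Nth power state of a requested type and falls back to the default power state index when there is no such match. A userspace-style sketch of the same search, over an invented state table:

#include <stdio.h>

enum pm_type { PM_BATTERY, PM_BALANCED, PM_PERFORMANCE };

static int get_type_index(const enum pm_type *states, int count,
                          enum pm_type type, int instance, int def_index)
{
    int found = -1;
    for (int i = 0; i < count; i++) {
        if (states[i] == type && ++found == instance)
            return i;
    }
    return def_index;   /* no Nth match: fall back to the default state */
}

int main(void)
{
    enum pm_type states[] = { PM_BALANCED, PM_PERFORMANCE, PM_BATTERY,
                              PM_PERFORMANCE };
    /* the second PERFORMANCE state (instance 1) lives at index 3 */
    printf("%d\n", get_type_index(states, 4, PM_PERFORMANCE, 1, 0));
    return 0;
}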
-EINVAL; diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c index 87cc1feee3ac..a983f410ab89 100644 --- a/drivers/gpu/drm/radeon/rv770.c +++ b/drivers/gpu/drm/radeon/rv770.c @@ -124,7 +124,7 @@ int rv770_pcie_gart_enable(struct radeon_device *rdev) u32 tmp; int r, i; - if (rdev->gart.table.vram.robj == NULL) { + if (rdev->gart.robj == NULL) { dev_err(rdev->dev, "No VRAM object for PCIE GART.\n"); return -EINVAL; } @@ -171,7 +171,7 @@ int rv770_pcie_gart_enable(struct radeon_device *rdev) void rv770_pcie_gart_disable(struct radeon_device *rdev) { u32 tmp; - int i, r; + int i; /* Disable all tables */ for (i = 0; i < 7; i++) @@ -191,14 +191,7 @@ void rv770_pcie_gart_disable(struct radeon_device *rdev) WREG32(MC_VM_MB_L1_TLB1_CNTL, tmp); WREG32(MC_VM_MB_L1_TLB2_CNTL, tmp); WREG32(MC_VM_MB_L1_TLB3_CNTL, tmp); - if (rdev->gart.table.vram.robj) { - r = radeon_bo_reserve(rdev->gart.table.vram.robj, false); - if (likely(r == 0)) { - radeon_bo_kunmap(rdev->gart.table.vram.robj); - radeon_bo_unpin(rdev->gart.table.vram.robj); - radeon_bo_unreserve(rdev->gart.table.vram.robj); - } - } + radeon_gart_table_vram_unpin(rdev); } void rv770_pcie_gart_fini(struct radeon_device *rdev) @@ -282,7 +275,7 @@ static void rv770_mc_program(struct radeon_device *rdev) WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR, rdev->mc.vram_end >> 12); } - WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, 0); + WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, rdev->vram_scratch.gpu_addr >> 12); tmp = ((rdev->mc.vram_end >> 24) & 0xFFFF) << 16; tmp |= ((rdev->mc.vram_start >> 24) & 0xFFFF); WREG32(MC_VM_FB_LOCATION, tmp); @@ -959,54 +952,6 @@ static void rv770_gpu_init(struct radeon_device *rdev) } -static int rv770_vram_scratch_init(struct radeon_device *rdev) -{ - int r; - u64 gpu_addr; - - if (rdev->vram_scratch.robj == NULL) { - r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE, - PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, - &rdev->vram_scratch.robj); - if (r) { - return r; - } - } - - r = radeon_bo_reserve(rdev->vram_scratch.robj, false); - if (unlikely(r != 0)) - return r; - r = radeon_bo_pin(rdev->vram_scratch.robj, - RADEON_GEM_DOMAIN_VRAM, &gpu_addr); - if (r) { - radeon_bo_unreserve(rdev->vram_scratch.robj); - return r; - } - r = radeon_bo_kmap(rdev->vram_scratch.robj, - (void **)&rdev->vram_scratch.ptr); - if (r) - radeon_bo_unpin(rdev->vram_scratch.robj); - radeon_bo_unreserve(rdev->vram_scratch.robj); - - return r; -} - -static void rv770_vram_scratch_fini(struct radeon_device *rdev) -{ - int r; - - if (rdev->vram_scratch.robj == NULL) { - return; - } - r = radeon_bo_reserve(rdev->vram_scratch.robj, false); - if (likely(r == 0)) { - radeon_bo_kunmap(rdev->vram_scratch.robj); - radeon_bo_unpin(rdev->vram_scratch.robj); - radeon_bo_unreserve(rdev->vram_scratch.robj); - } - radeon_bo_unref(&rdev->vram_scratch.robj); -} - void r700_vram_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc) { u64 size_bf, size_af; @@ -1106,6 +1051,10 @@ static int rv770_startup(struct radeon_device *rdev) } } + r = r600_vram_scratch_init(rdev); + if (r) + return r; + rv770_mc_program(rdev); if (rdev->flags & RADEON_IS_AGP) { rv770_agp_enable(rdev); @@ -1114,9 +1063,7 @@ static int rv770_startup(struct radeon_device *rdev) if (r) return r; } - r = rv770_vram_scratch_init(rdev); - if (r) - return r; + rv770_gpu_init(rdev); r = r600_blit_init(rdev); if (r) { @@ -1316,7 +1263,7 @@ void rv770_fini(struct radeon_device *rdev) radeon_ib_pool_fini(rdev); radeon_irq_kms_fini(rdev); rv770_pcie_gart_fini(rdev); - 
rv770_vram_scratch_fini(rdev); + r600_vram_scratch_fini(rdev); radeon_gem_fini(rdev); radeon_fence_driver_fini(rdev); radeon_agp_fini(rdev); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 1805b8c2a948..dff8fc767152 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -104,6 +104,9 @@ #define DRM_IOCTL_VMW_PRESENT_READBACK \ DRM_IOW(DRM_COMMAND_BASE + DRM_VMW_PRESENT_READBACK, \ struct drm_vmw_present_readback_arg) +#define DRM_IOCTL_VMW_UPDATE_LAYOUT \ + DRM_IOW(DRM_COMMAND_BASE + DRM_VMW_UPDATE_LAYOUT, \ + struct drm_vmw_update_layout_arg) /** * The core DRM version of this macro doesn't account for @@ -166,6 +169,9 @@ static struct drm_ioctl_desc vmw_ioctls[] = { VMW_IOCTL_DEF(VMW_PRESENT_READBACK, vmw_present_readback_ioctl, DRM_MASTER | DRM_AUTH | DRM_UNLOCKED), + VMW_IOCTL_DEF(VMW_UPDATE_LAYOUT, + vmw_kms_update_layout_ioctl, + DRM_MASTER | DRM_UNLOCKED), }; static struct pci_device_id vmw_pci_id_list[] = { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index 30589d0aecd9..8cca91a93bde 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -40,9 +40,9 @@ #include "ttm/ttm_module.h" #include "vmwgfx_fence.h" -#define VMWGFX_DRIVER_DATE "20111008" +#define VMWGFX_DRIVER_DATE "20111025" #define VMWGFX_DRIVER_MAJOR 2 -#define VMWGFX_DRIVER_MINOR 2 +#define VMWGFX_DRIVER_MINOR 3 #define VMWGFX_DRIVER_PATCHLEVEL 0 #define VMWGFX_FILE_PAGE_OFFSET 0x00100000 #define VMWGFX_FIFO_STATIC_SIZE (1024*1024) @@ -633,6 +633,8 @@ int vmw_kms_readback(struct vmw_private *dev_priv, struct drm_vmw_fence_rep __user *user_fence_rep, struct drm_vmw_rect *clips, uint32_t num_clips); +int vmw_kms_update_layout_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); /** * Overlay control - vmwgfx_overlay.c diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index 8b14dfd513a1..880e285d7578 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -105,12 +105,17 @@ int vmw_du_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv, struct vmw_dma_buffer *dmabuf = NULL; int ret; + /* A lot of the code assumes this */ + if (handle && (width != 64 || height != 64)) + return -EINVAL; + if (handle) { ret = vmw_user_surface_lookup_handle(dev_priv, tfile, handle, &surface); if (!ret) { if (!surface->snooper.image) { DRM_ERROR("surface not suitable for cursor\n"); + vmw_surface_unreference(&surface); return -EINVAL; } } else { @@ -176,7 +181,9 @@ err_unreserve: return 0; } - vmw_cursor_update_position(dev_priv, true, du->cursor_x, du->cursor_y); + vmw_cursor_update_position(dev_priv, true, + du->cursor_x + du->hotspot_x, + du->cursor_y + du->hotspot_y); return 0; } @@ -191,7 +198,8 @@ int vmw_du_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) du->cursor_y = y + crtc->y; vmw_cursor_update_position(dev_priv, shown, - du->cursor_x, du->cursor_y); + du->cursor_x + du->hotspot_x, + du->cursor_y + du->hotspot_y); return 0; } @@ -212,7 +220,7 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf, SVGA3dCmdHeader header; SVGA3dCmdSurfaceDMA dma; } *cmd; - int ret; + int i, ret; cmd = container_of(header, struct vmw_dma_cmd, header); @@ -234,16 +242,19 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf, box_count = (cmd->header.size - sizeof(SVGA3dCmdSurfaceDMA)) / sizeof(SVGA3dCopyBox); - if (cmd->dma.guest.pitch != (64 * 4) || - cmd->dma.guest.ptr.offset % 
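The vmw_kms_fb_create() fix visible above returns ERR_PTR(-ENOMEM) instead of a bare NULL when the requested mode does not fit in VRAM, so the DRM core can propagate a real errno to userspace. The idiom encodes small negative error numbers in the pointer value itself; the snippet below imitates the kernel's ERR_PTR/IS_ERR/PTR_ERR helpers in userspace just to show the mechanics:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Userspace imitation of the kernel helpers: small -errno values are encoded
 * at the very top of the address space, where no valid pointer can live. */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long err)        { return (void *)err; }
static inline long  PTR_ERR(const void *p)   { return (long)p; }
static inline int   IS_ERR(const void *p)
{
    return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

static void *create_framebuffer(size_t required, size_t vram)
{
    if (required > vram)
        return ERR_PTR(-ENOMEM);     /* the caller learns *why* it failed */
    return malloc(required);
}

int main(void)
{
    void *fb = create_framebuffer(1 << 20, 1 << 16);
    if (IS_ERR(fb))
        printf("fb_create failed: %ld\n", PTR_ERR(fb));
    else
        free(fb);
    return 0;
}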
PAGE_SIZE || + if (cmd->dma.guest.ptr.offset % PAGE_SIZE || box->x != 0 || box->y != 0 || box->z != 0 || box->srcx != 0 || box->srcy != 0 || box->srcz != 0 || - box->w != 64 || box->h != 64 || box->d != 1 || - box_count != 1) { + box->d != 1 || box_count != 1) { /* TODO handle none page aligned offsets */ - /* TODO handle partial uploads and pitch != 256 */ - /* TODO handle more then one copy (size != 64) */ - DRM_ERROR("lazy programmer, can't handle weird stuff\n"); + /* TODO handle more dst & src != 0 */ + /* TODO handle more then one copy */ + DRM_ERROR("Cant snoop dma request for cursor!\n"); + DRM_ERROR("(%u, %u, %u) (%u, %u, %u) (%ux%ux%u) %u %u\n", + box->srcx, box->srcy, box->srcz, + box->x, box->y, box->z, + box->w, box->h, box->d, box_count, + cmd->dma.guest.ptr.offset); return; } @@ -262,7 +273,16 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf, virtual = ttm_kmap_obj_virtual(&map, &dummy); - memcpy(srf->snooper.image, virtual, 64*64*4); + if (box->w == 64 && cmd->dma.guest.pitch == 64*4) { + memcpy(srf->snooper.image, virtual, 64*64*4); + } else { + /* Image is unsigned pointer. */ + for (i = 0; i < box->h; i++) + memcpy(srf->snooper.image + i * 64, + virtual + i * cmd->dma.guest.pitch, + box->w * 4); + } + srf->snooper.age++; /* we can't call this function from this function since execbuf has @@ -394,8 +414,9 @@ static int do_surface_dirty_sou(struct vmw_private *dev_priv, top = clips->y1; bottom = clips->y2; - clips_ptr = clips; - for (i = 1; i < num_clips; i++, clips_ptr += inc) { + /* skip the first clip rect */ + for (i = 1, clips_ptr = clips + inc; + i < num_clips; i++, clips_ptr += inc) { left = min_t(int, left, (int)clips_ptr->x1); right = max_t(int, right, (int)clips_ptr->x2); top = min_t(int, top, (int)clips_ptr->y1); @@ -994,7 +1015,7 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev, required_size = mode_cmd->pitch * mode_cmd->height; if (unlikely(required_size > (u64) dev_priv->vram_size)) { DRM_ERROR("VRAM size is too small for requested mode.\n"); - return NULL; + return ERR_PTR(-ENOMEM); } /* @@ -1307,7 +1328,10 @@ int vmw_kms_close(struct vmw_private *dev_priv) * drm_encoder_cleanup which takes the lock we deadlock. */ drm_mode_config_cleanup(dev_priv->dev); - vmw_kms_close_legacy_display_system(dev_priv); + if (dev_priv->sou_priv) + vmw_kms_close_screen_object_display(dev_priv); + else + vmw_kms_close_legacy_display_system(dev_priv); return 0; } @@ -1517,6 +1541,8 @@ int vmw_du_update_layout(struct vmw_private *dev_priv, unsigned num, du->pref_width = rects[du->unit].w; du->pref_height = rects[du->unit].h; du->pref_active = true; + du->gui_x = rects[du->unit].x; + du->gui_y = rects[du->unit].y; } else { du->pref_width = 800; du->pref_height = 600; @@ -1572,12 +1598,14 @@ vmw_du_connector_detect(struct drm_connector *connector, bool force) uint32_t num_displays; struct drm_device *dev = connector->dev; struct vmw_private *dev_priv = vmw_priv(dev); + struct vmw_display_unit *du = vmw_connector_to_du(connector); mutex_lock(&dev_priv->hw_mutex); num_displays = vmw_read(dev_priv, SVGA_REG_NUM_DISPLAYS); mutex_unlock(&dev_priv->hw_mutex); - return ((vmw_connector_to_du(connector)->unit < num_displays) ? + return ((vmw_connector_to_du(connector)->unit < num_displays && + du->pref_active) ? 
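The cursor-snoop change above drops the assumption of a 64x64 upload with a 256-byte pitch: when the guest supplies a narrower image or a different pitch, the data is copied row by row into the fixed 64-pixel-wide snooper image instead of with one big memcpy. In isolation the strided copy looks like this (the sizes in the example are arbitrary):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SNOOP_W 64   /* the destination is always a 64x64 ARGB image */

static void copy_cursor_rows(uint32_t *dst, const uint8_t *src,
                             unsigned w, unsigned h, unsigned src_pitch)
{
    for (unsigned i = 0; i < h; i++)
        memcpy(dst + i * SNOOP_W,          /* dst rows are SNOOP_W pixels apart */
               src + i * src_pitch,        /* src rows are src_pitch bytes apart */
               (size_t)w * 4);             /* copy w pixels of 4 bytes each */
}

int main(void)
{
    uint32_t dst[SNOOP_W * SNOOP_W] = {0};
    uint8_t src[32 * 40 * 4] = {0xff};     /* a 32x40 source with a 128-byte pitch */

    copy_cursor_rows(dst, src, 32, 40, 32 * 4);
    printf("%x\n", dst[0]);
    return 0;
}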
connector_status_connected : connector_status_disconnected); } @@ -1658,6 +1686,28 @@ static struct drm_display_mode vmw_kms_connector_builtin[] = { { DRM_MODE("", 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) }, }; +/** + * vmw_guess_mode_timing - Provide fake timings for a + * 60Hz vrefresh mode. + * + * @mode - Pointer to a struct drm_display_mode with hdisplay and vdisplay + * members filled in. + */ +static void vmw_guess_mode_timing(struct drm_display_mode *mode) +{ + mode->hsync_start = mode->hdisplay + 50; + mode->hsync_end = mode->hsync_start + 50; + mode->htotal = mode->hsync_end + 50; + + mode->vsync_start = mode->vdisplay + 50; + mode->vsync_end = mode->vsync_start + 50; + mode->vtotal = mode->vsync_end + 50; + + mode->clock = (u32)mode->htotal * (u32)mode->vtotal / 100 * 6; + mode->vrefresh = drm_mode_vrefresh(mode); +} + + int vmw_du_connector_fill_modes(struct drm_connector *connector, uint32_t max_width, uint32_t max_height) { @@ -1680,18 +1730,23 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector, return 0; mode->hdisplay = du->pref_width; mode->vdisplay = du->pref_height; - mode->vrefresh = drm_mode_vrefresh(mode); + vmw_guess_mode_timing(mode); + if (vmw_kms_validate_mode_vram(dev_priv, mode->hdisplay * 2, mode->vdisplay)) { drm_mode_probed_add(connector, mode); + } else { + drm_mode_destroy(dev, mode); + mode = NULL; + } - if (du->pref_mode) { - list_del_init(&du->pref_mode->head); - drm_mode_destroy(dev, du->pref_mode); - } - - du->pref_mode = mode; + if (du->pref_mode) { + list_del_init(&du->pref_mode->head); + drm_mode_destroy(dev, du->pref_mode); } + + /* mode might be null here, this is intended */ + du->pref_mode = mode; } for (i = 0; vmw_kms_connector_builtin[i].type != 0; i++) { @@ -1712,6 +1767,10 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector, drm_mode_probed_add(connector, mode); } + /* Move the prefered mode first, help apps pick the right mode. 
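vmw_guess_mode_timing() above fabricates sync and total values for a preferred mode so that it advertises roughly a 60 Hz refresh, since the virtual display has no real timings to report. The clock formula, htotal * vtotal / 100 * 6 in kHz, is just pixels-per-frame scaled so the refresh works out to about 60 Hz; a quick standalone check of the arithmetic for an 800x600 mode:

#include <stdio.h>

/* Reproduce the guessed-timing arithmetic for an 800x600 preferred mode. */
int main(void)
{
    unsigned hdisplay = 800, vdisplay = 600;

    unsigned htotal = hdisplay + 50 + 50 + 50;   /* +50 each for sync start, end, total */
    unsigned vtotal = vdisplay + 50 + 50 + 50;

    unsigned clock_khz = htotal * vtotal / 100 * 6;
    double   vrefresh  = (double)clock_khz * 1000.0 / (htotal * vtotal);

    printf("clock=%u kHz, vrefresh=%.1f Hz\n", clock_khz, vrefresh);  /* about 60 Hz */
    return 0;
}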
*/ + if (du->pref_mode) + list_move(&du->pref_mode->head, &connector->probed_modes); + drm_mode_connector_list_update(connector); return 1; @@ -1723,3 +1782,63 @@ int vmw_du_connector_set_property(struct drm_connector *connector, { return 0; } + + +int vmw_kms_update_layout_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct vmw_private *dev_priv = vmw_priv(dev); + struct drm_vmw_update_layout_arg *arg = + (struct drm_vmw_update_layout_arg *)data; + struct vmw_master *vmaster = vmw_master(file_priv->master); + void __user *user_rects; + struct drm_vmw_rect *rects; + unsigned rects_size; + int ret; + int i; + struct drm_mode_config *mode_config = &dev->mode_config; + + ret = ttm_read_lock(&vmaster->lock, true); + if (unlikely(ret != 0)) + return ret; + + if (!arg->num_outputs) { + struct drm_vmw_rect def_rect = {0, 0, 800, 600}; + vmw_du_update_layout(dev_priv, 1, &def_rect); + goto out_unlock; + } + + rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect); + rects = kzalloc(rects_size, GFP_KERNEL); + if (unlikely(!rects)) { + ret = -ENOMEM; + goto out_unlock; + } + + user_rects = (void __user *)(unsigned long)arg->rects; + ret = copy_from_user(rects, user_rects, rects_size); + if (unlikely(ret != 0)) { + DRM_ERROR("Failed to get rects.\n"); + ret = -EFAULT; + goto out_free; + } + + for (i = 0; i < arg->num_outputs; ++i) { + if (rects->x < 0 || + rects->y < 0 || + rects->x + rects->w > mode_config->max_width || + rects->y + rects->h > mode_config->max_height) { + DRM_ERROR("Invalid GUI layout.\n"); + ret = -EINVAL; + goto out_free; + } + } + + vmw_du_update_layout(dev_priv, arg->num_outputs, rects); + +out_free: + kfree(rects); +out_unlock: + ttm_read_unlock(&vmaster->lock); + return ret; +} diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h index db0b901f8c3f..af8e6e5bd964 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h @@ -96,6 +96,13 @@ struct vmw_display_unit { unsigned pref_height; bool pref_active; struct drm_display_mode *pref_mode; + + /* + * Gui positioning + */ + int gui_x; + int gui_y; + bool is_implicit; }; #define vmw_crtc_to_du(x) \ @@ -126,8 +133,7 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector, int vmw_du_connector_set_property(struct drm_connector *connector, struct drm_property *property, uint64_t val); -int vmw_du_update_layout(struct vmw_private *dev_priv, unsigned num, - struct drm_vmw_rect *rects); + /* * Legacy display unit functions - vmwgfx_ldu.c diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c index 92f56bc594eb..90c5e3928491 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c @@ -337,13 +337,14 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit) ldu->base.pref_width = 800; ldu->base.pref_height = 600; ldu->base.pref_mode = NULL; + ldu->base.is_implicit = true; drm_connector_init(dev, connector, &vmw_legacy_connector_funcs, - DRM_MODE_CONNECTOR_LVDS); + DRM_MODE_CONNECTOR_VIRTUAL); connector->status = vmw_du_connector_detect(connector, true); drm_encoder_init(dev, encoder, &vmw_legacy_encoder_funcs, - DRM_MODE_ENCODER_LVDS); + DRM_MODE_ENCODER_VIRTUAL); drm_mode_connector_attach_encoder(connector, encoder); encoder->possible_crtcs = (1 << unit); encoder->possible_clones = 0; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c index 477b2a9eb3c2..4defdcf1c72e 100644 --- 
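vmw_kms_update_layout_ioctl() above follows the usual shape of an array-carrying ioctl: size the buffer from num_outputs, kzalloc it, copy_from_user, range-check the rectangles against the mode_config limits, and only then call vmw_du_update_layout(), all under ttm_read_lock. One detail worth noting is that, as shown, the loop body dereferences rects rather than rects[i], so only the first rectangle is actually validated. A small userspace model of the intended per-rectangle check:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct rect { int32_t x, y; uint32_t w, h; };

/* Validate every rectangle against the limits; note the [i] indexing. */
static bool rects_valid(const struct rect *rects, unsigned n,
                        uint32_t max_w, uint32_t max_h)
{
    for (unsigned i = 0; i < n; i++) {
        if (rects[i].x < 0 || rects[i].y < 0 ||
            rects[i].x + rects[i].w > max_w ||
            rects[i].y + rects[i].h > max_h)
            return false;
    }
    return true;
}

int main(void)
{
    struct rect layout[2] = { { 0, 0, 1024, 768 }, { 1024, 0, 1024, 768 } };

    printf("%d\n", rects_valid(layout, 2, 2048, 768));   /* 1: fits side by side */
    printf("%d\n", rects_valid(layout, 2, 1024, 768));   /* 0: second screen overflows */
    return 0;
}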
a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c @@ -36,12 +36,9 @@ container_of(x, struct vmw_screen_object_unit, base.connector) struct vmw_screen_object_display { - struct list_head active; + unsigned num_implicit; - unsigned num_active; - unsigned last_num_active; - - struct vmw_framebuffer *fb; + struct vmw_framebuffer *implicit_fb; }; /** @@ -54,13 +51,11 @@ struct vmw_screen_object_unit { struct vmw_dma_buffer *buffer; /**< Backing store buffer */ bool defined; - - struct list_head active; + bool active_implicit; }; static void vmw_sou_destroy(struct vmw_screen_object_unit *sou) { - list_del_init(&sou->active); vmw_display_unit_cleanup(&sou->base); kfree(sou); } @@ -75,58 +70,31 @@ static void vmw_sou_crtc_destroy(struct drm_crtc *crtc) vmw_sou_destroy(vmw_crtc_to_sou(crtc)); } -static int vmw_sou_del_active(struct vmw_private *vmw_priv, +static void vmw_sou_del_active(struct vmw_private *vmw_priv, struct vmw_screen_object_unit *sou) { struct vmw_screen_object_display *ld = vmw_priv->sou_priv; - if (list_empty(&sou->active)) - return 0; - /* Must init otherwise list_empty(&sou->active) will not work. */ - list_del_init(&sou->active); - if (--(ld->num_active) == 0) { - BUG_ON(!ld->fb); - if (ld->fb->unpin) - ld->fb->unpin(ld->fb); - ld->fb = NULL; + if (sou->active_implicit) { + if (--(ld->num_implicit) == 0) + ld->implicit_fb = NULL; + sou->active_implicit = false; } - - return 0; } -static int vmw_sou_add_active(struct vmw_private *vmw_priv, +static void vmw_sou_add_active(struct vmw_private *vmw_priv, struct vmw_screen_object_unit *sou, struct vmw_framebuffer *vfb) { struct vmw_screen_object_display *ld = vmw_priv->sou_priv; - struct vmw_screen_object_unit *entry; - struct list_head *at; - - BUG_ON(!ld->num_active && ld->fb); - if (vfb != ld->fb) { - if (ld->fb && ld->fb->unpin) - ld->fb->unpin(ld->fb); - if (vfb->pin) - vfb->pin(vfb); - ld->fb = vfb; - } - - if (!list_empty(&sou->active)) - return 0; - at = &ld->active; - list_for_each_entry(entry, &ld->active, active) { - if (entry->base.unit > sou->base.unit) - break; + BUG_ON(!ld->num_implicit && ld->implicit_fb); - at = &entry->active; + if (!sou->active_implicit && sou->base.is_implicit) { + ld->implicit_fb = vfb; + sou->active_implicit = true; + ld->num_implicit++; } - - list_add(&sou->active, at); - - ld->num_active++; - - return 0; } /** @@ -164,8 +132,13 @@ static int vmw_sou_fifo_create(struct vmw_private *dev_priv, (sou->base.unit == 0 ? 
SVGA_SCREEN_IS_PRIMARY : 0); cmd->obj.size.width = mode->hdisplay; cmd->obj.size.height = mode->vdisplay; - cmd->obj.root.x = x; - cmd->obj.root.y = y; + if (sou->base.is_implicit) { + cmd->obj.root.x = x; + cmd->obj.root.y = y; + } else { + cmd->obj.root.x = sou->base.gui_x; + cmd->obj.root.y = sou->base.gui_y; + } /* Ok to assume that buffer is pinned in vram */ vmw_bo_get_guest_ptr(&sou->buffer->base, &cmd->obj.backingStore.ptr); @@ -312,10 +285,11 @@ static int vmw_sou_crtc_set_config(struct drm_mode_set *set) } /* sou only supports one fb active at the time */ - if (dev_priv->sou_priv->fb && vfb && - !(dev_priv->sou_priv->num_active == 1 && - !list_empty(&sou->active)) && - dev_priv->sou_priv->fb != vfb) { + if (sou->base.is_implicit && + dev_priv->sou_priv->implicit_fb && vfb && + !(dev_priv->sou_priv->num_implicit == 1 && + sou->active_implicit) && + dev_priv->sou_priv->implicit_fb != vfb) { DRM_ERROR("Multiple framebuffers not supported\n"); return -EINVAL; } @@ -471,19 +445,20 @@ static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit) encoder = &sou->base.encoder; connector = &sou->base.connector; - INIT_LIST_HEAD(&sou->active); + sou->active_implicit = false; sou->base.pref_active = (unit == 0); sou->base.pref_width = 800; sou->base.pref_height = 600; sou->base.pref_mode = NULL; + sou->base.is_implicit = true; drm_connector_init(dev, connector, &vmw_legacy_connector_funcs, - DRM_MODE_CONNECTOR_LVDS); + DRM_MODE_CONNECTOR_VIRTUAL); connector->status = vmw_du_connector_detect(connector, true); drm_encoder_init(dev, encoder, &vmw_screen_object_encoder_funcs, - DRM_MODE_ENCODER_LVDS); + DRM_MODE_ENCODER_VIRTUAL); drm_mode_connector_attach_encoder(connector, encoder); encoder->possible_crtcs = (1 << unit); encoder->possible_clones = 0; @@ -520,10 +495,8 @@ int vmw_kms_init_screen_object_display(struct vmw_private *dev_priv) if (unlikely(!dev_priv->sou_priv)) goto err_no_mem; - INIT_LIST_HEAD(&dev_priv->sou_priv->active); - dev_priv->sou_priv->num_active = 0; - dev_priv->sou_priv->last_num_active = 0; - dev_priv->sou_priv->fb = NULL; + dev_priv->sou_priv->num_implicit = 0; + dev_priv->sou_priv->implicit_fb = NULL; ret = drm_vblank_init(dev, VMWGFX_NUM_DISPLAY_UNITS); if (unlikely(ret != 0)) @@ -558,9 +531,6 @@ int vmw_kms_close_screen_object_display(struct vmw_private *dev_priv) drm_vblank_cleanup(dev); - if (!list_empty(&dev_priv->sou_priv->active)) - DRM_ERROR("Still have active outputs when unloading driver"); - kfree(dev_priv->sou_priv); return 0; diff --git a/drivers/hwspinlock/u8500_hsem.c b/drivers/hwspinlock/u8500_hsem.c index 143461a95ae4..86980fe04117 100644 --- a/drivers/hwspinlock/u8500_hsem.c +++ b/drivers/hwspinlock/u8500_hsem.c @@ -21,6 +21,7 @@ * General Public License for more details. 
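The screen-object rework above replaces the list of active units and its pin/unpin bookkeeping with a counter plus a single implicit_fb pointer: the first implicit unit to light up records the shared framebuffer, later units only bump num_implicit, and the pointer is cleared when the count drops back to zero. Reduced to its essentials, with invented stand-in structs, the bookkeeping is:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct display {                     /* stand-in for vmw_screen_object_display */
    unsigned num_implicit;
    void    *implicit_fb;
};

struct unit {                        /* stand-in for vmw_screen_object_unit */
    bool active_implicit;
    bool is_implicit;
};

static void add_active(struct display *d, struct unit *u, void *fb)
{
    if (!u->active_implicit && u->is_implicit) {
        d->implicit_fb = fb;         /* remember the one shared framebuffer */
        u->active_implicit = true;
        d->num_implicit++;
    }
}

static void del_active(struct display *d, struct unit *u)
{
    if (u->active_implicit) {
        if (--d->num_implicit == 0)
            d->implicit_fb = NULL;   /* last user gone: forget the fb */
        u->active_implicit = false;
    }
}

int main(void)
{
    struct display d = { 0, NULL };
    struct unit a = { false, true }, b = { false, true };
    int fb;

    add_active(&d, &a, &fb);
    add_active(&d, &b, &fb);
    assert(d.num_implicit == 2 && d.implicit_fb == &fb);
    del_active(&d, &a);
    del_active(&d, &b);
    assert(d.num_implicit == 0 && d.implicit_fb == NULL);
    return 0;
}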
*/ +#include <linux/module.h> #include <linux/delay.h> #include <linux/io.h> #include <linux/pm_runtime.h> @@ -108,10 +109,8 @@ static int __devinit u8500_hsem_probe(struct platform_device *pdev) return -ENODEV; io_base = ioremap(res->start, resource_size(res)); - if (!io_base) { - ret = -ENOMEM; - goto free_state; - } + if (!io_base) + return -ENOMEM; /* make sure protocol 1 is selected */ val = readl(io_base + HSEM_CTRL_REG); diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c index 04b09564bfa9..8126824daccb 100644 --- a/drivers/ide/ide-cd.c +++ b/drivers/ide/ide-cd.c @@ -43,7 +43,6 @@ /* For SCSI -> ATAPI command conversion */ #include <scsi/scsi.h> -#include <linux/irq.h> #include <linux/io.h> #include <asm/byteorder.h> #include <linux/uaccess.h> diff --git a/drivers/ide/ide-floppy.c b/drivers/ide/ide-floppy.c index 61fdf544fbd6..3d42043fec51 100644 --- a/drivers/ide/ide-floppy.c +++ b/drivers/ide/ide-floppy.c @@ -35,7 +35,6 @@ #include <scsi/scsi_ioctl.h> #include <asm/byteorder.h> -#include <linux/irq.h> #include <linux/uaccess.h> #include <linux/io.h> #include <asm/unaligned.h> diff --git a/drivers/ide/ide-tape.c b/drivers/ide/ide-tape.c index 7ecb1ade8874..ce8237d36159 100644 --- a/drivers/ide/ide-tape.c +++ b/drivers/ide/ide-tape.c @@ -41,7 +41,6 @@ #include <scsi/scsi.h> #include <asm/byteorder.h> -#include <linux/irq.h> #include <linux/uaccess.h> #include <linux/io.h> #include <asm/unaligned.h> diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c index 18767f8ab090..5d2f8e13cf0e 100644 --- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -82,7 +82,8 @@ static unsigned int mwait_substates; static unsigned int lapic_timer_reliable_states = (1 << 1); /* Default to only C1 */ static struct cpuidle_device __percpu *intel_idle_cpuidle_devices; -static int intel_idle(struct cpuidle_device *dev, struct cpuidle_state *state); +static int intel_idle(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index); static struct cpuidle_state *cpuidle_state_table; @@ -110,7 +111,6 @@ static struct cpuidle_state nehalem_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C1 */ .name = "C1-NHM", .desc = "MWAIT 0x00", - .driver_data = (void *) 0x00, .flags = CPUIDLE_FLAG_TIME_VALID, .exit_latency = 3, .target_residency = 6, @@ -118,7 +118,6 @@ static struct cpuidle_state nehalem_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C2 */ .name = "C3-NHM", .desc = "MWAIT 0x10", - .driver_data = (void *) 0x10, .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 20, .target_residency = 80, @@ -126,7 +125,6 @@ static struct cpuidle_state nehalem_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C3 */ .name = "C6-NHM", .desc = "MWAIT 0x20", - .driver_data = (void *) 0x20, .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 200, .target_residency = 800, @@ -138,7 +136,6 @@ static struct cpuidle_state snb_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C1 */ .name = "C1-SNB", .desc = "MWAIT 0x00", - .driver_data = (void *) 0x00, .flags = CPUIDLE_FLAG_TIME_VALID, .exit_latency = 1, .target_residency = 1, @@ -146,7 +143,6 @@ static struct cpuidle_state snb_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C2 */ .name = "C3-SNB", .desc = "MWAIT 0x10", - .driver_data = (void *) 0x10, .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 80, .target_residency = 211, @@ -154,7 +150,6 @@ static struct cpuidle_state snb_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C3 */ .name = "C6-SNB", .desc = "MWAIT 0x20", - 
.driver_data = (void *) 0x20, .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 104, .target_residency = 345, @@ -162,7 +157,6 @@ static struct cpuidle_state snb_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C4 */ .name = "C7-SNB", .desc = "MWAIT 0x30", - .driver_data = (void *) 0x30, .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 109, .target_residency = 345, @@ -174,7 +168,6 @@ static struct cpuidle_state atom_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C1 */ .name = "C1-ATM", .desc = "MWAIT 0x00", - .driver_data = (void *) 0x00, .flags = CPUIDLE_FLAG_TIME_VALID, .exit_latency = 1, .target_residency = 4, @@ -182,7 +175,6 @@ static struct cpuidle_state atom_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C2 */ .name = "C2-ATM", .desc = "MWAIT 0x10", - .driver_data = (void *) 0x10, .flags = CPUIDLE_FLAG_TIME_VALID, .exit_latency = 20, .target_residency = 80, @@ -191,7 +183,6 @@ static struct cpuidle_state atom_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C4 */ .name = "C4-ATM", .desc = "MWAIT 0x30", - .driver_data = (void *) 0x30, .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 100, .target_residency = 400, @@ -200,23 +191,55 @@ static struct cpuidle_state atom_cstates[MWAIT_MAX_NUM_CSTATES] = { { /* MWAIT C6 */ .name = "C6-ATM", .desc = "MWAIT 0x52", - .driver_data = (void *) 0x52, .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 140, .target_residency = 560, .enter = &intel_idle }, }; +static int get_driver_data(int cstate) +{ + int driver_data; + switch (cstate) { + + case 1: /* MWAIT C1 */ + driver_data = 0x00; + break; + case 2: /* MWAIT C2 */ + driver_data = 0x10; + break; + case 3: /* MWAIT C3 */ + driver_data = 0x20; + break; + case 4: /* MWAIT C4 */ + driver_data = 0x30; + break; + case 5: /* MWAIT C5 */ + driver_data = 0x40; + break; + case 6: /* MWAIT C6 */ + driver_data = 0x52; + break; + default: + driver_data = 0x00; + } + return driver_data; +} + /** * intel_idle * @dev: cpuidle_device - * @state: cpuidle state + * @drv: cpuidle driver + * @index: index of cpuidle state * */ -static int intel_idle(struct cpuidle_device *dev, struct cpuidle_state *state) +static int intel_idle(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) { unsigned long ecx = 1; /* break on interrupt flag */ - unsigned long eax = (unsigned long)cpuidle_get_statedata(state); + struct cpuidle_state *state = &drv->states[index]; + struct cpuidle_state_usage *state_usage = &dev->states_usage[index]; + unsigned long eax = (unsigned long)cpuidle_get_statedata(state_usage); unsigned int cstate; ktime_t kt_before, kt_after; s64 usec_delta; @@ -257,7 +280,10 @@ static int intel_idle(struct cpuidle_device *dev, struct cpuidle_state *state) if (!(lapic_timer_reliable_states & (1 << (cstate)))) clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu); - return usec_delta; + /* Update cpuidle counters */ + dev->last_residency = (int)usec_delta; + + return index; } static void __setup_broadcast_timer(void *arg) @@ -398,6 +424,60 @@ static void intel_idle_cpuidle_devices_uninit(void) return; } /* + * intel_idle_cpuidle_driver_init() + * allocate, initialize cpuidle_states + */ +static int intel_idle_cpuidle_driver_init(void) +{ + int cstate; + struct cpuidle_driver *drv = &intel_idle_driver; + + drv->state_count = 1; + + for (cstate = 1; cstate < MWAIT_MAX_NUM_CSTATES; ++cstate) { + int num_substates; + + if (cstate > max_cstate) { + printk(PREFIX "max_cstate %d reached\n", + 
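The intel_idle() changes above follow the cpuidle API rework visible in the same hunks: the enter callback now receives the driver and a state index, looks up its MWAIT hint in dev->states_usage[index] (filled in from get_driver_data()), stores the measured residency in dev->last_residency, and returns the index instead of a time. The model below mocks the kernel types just to show the new calling contract:

#include <stdio.h>

#define NSTATES 4

struct state_usage { unsigned long driver_data; };   /* per-device, per-state data */
struct device      { struct state_usage states_usage[NSTATES]; int last_residency; };
struct driver      { const char *names[NSTATES]; };

/* New-style enter: return the index actually entered, report time via the device. */
static int enter_state(struct device *dev, struct driver *drv, int index)
{
    unsigned long mwait_hint = dev->states_usage[index].driver_data;
    int usec = 100 + 10 * (int)mwait_hint;           /* pretend we idled this long */

    (void)drv;
    dev->last_residency = usec;
    return index;
}

int main(void)
{
    struct driver drv = { { "C1", "C3", "C6", "C7" } };
    struct device dev = { { { 0x00 }, { 0x10 }, { 0x20 }, { 0x30 } }, 0 };

    int idx = enter_state(&dev, &drv, 2);
    printf("entered %s, resided %d us\n", drv.names[idx], dev.last_residency);
    return 0;
}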
max_cstate); + break; + } + + /* does the state exist in CPUID.MWAIT? */ + num_substates = (mwait_substates >> ((cstate) * 4)) + & MWAIT_SUBSTATE_MASK; + if (num_substates == 0) + continue; + /* is the state not enabled? */ + if (cpuidle_state_table[cstate].enter == NULL) { + /* does the driver not know about the state? */ + if (*cpuidle_state_table[cstate].name == '\0') + pr_debug(PREFIX "unaware of model 0x%x" + " MWAIT %d please" + " contact lenb@kernel.org", + boot_cpu_data.x86_model, cstate); + continue; + } + + if ((cstate > 2) && + !boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) + mark_tsc_unstable("TSC halts in idle" + " states deeper than C2"); + + drv->states[drv->state_count] = /* structure copy */ + cpuidle_state_table[cstate]; + + drv->state_count += 1; + } + + if (auto_demotion_disable_flags) + smp_call_function(auto_demotion_disable, NULL, 1); + + return 0; +} + + +/* * intel_idle_cpuidle_devices_init() * allocate, initialize, register cpuidle_devices */ @@ -431,22 +511,11 @@ static int intel_idle_cpuidle_devices_init(void) continue; /* is the state not enabled? */ if (cpuidle_state_table[cstate].enter == NULL) { - /* does the driver not know about the state? */ - if (*cpuidle_state_table[cstate].name == '\0') - pr_debug(PREFIX "unaware of model 0x%x" - " MWAIT %d please" - " contact lenb@kernel.org", - boot_cpu_data.x86_model, cstate); continue; } - if ((cstate > 2) && - !boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) - mark_tsc_unstable("TSC halts in idle" - " states deeper than C2"); - - dev->states[dev->state_count] = /* structure copy */ - cpuidle_state_table[cstate]; + dev->states_usage[dev->state_count].driver_data = + (void *)get_driver_data(cstate); dev->state_count += 1; } @@ -459,8 +528,6 @@ static int intel_idle_cpuidle_devices_init(void) return -EIO; } } - if (auto_demotion_disable_flags) - smp_call_function(auto_demotion_disable, NULL, 1); return 0; } @@ -478,6 +545,7 @@ static int __init intel_idle_init(void) if (retval) return retval; + intel_idle_cpuidle_driver_init(); retval = cpuidle_register_driver(&intel_idle_driver); if (retval) { printk(KERN_DEBUG PREFIX "intel_idle yielding to %s", diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c index 9c192e79f806..288da5c1499d 100644 --- a/drivers/iommu/omap-iommu-debug.c +++ b/drivers/iommu/omap-iommu-debug.c @@ -10,6 +10,7 @@ * published by the Free Software Foundation. */ +#include <linux/module.h> #include <linux/err.h> #include <linux/clk.h> #include <linux/io.h> diff --git a/drivers/iommu/omap-iovmm.c b/drivers/iommu/omap-iovmm.c index e8fdb8830f69..46be456fcc00 100644 --- a/drivers/iommu/omap-iovmm.c +++ b/drivers/iommu/omap-iovmm.c @@ -10,6 +10,7 @@ * published by the Free Software Foundation. 
*/ +#include <linux/module.h> #include <linux/err.h> #include <linux/slab.h> #include <linux/vmalloc.h> diff --git a/drivers/macintosh/via-macii.c b/drivers/macintosh/via-macii.c index 817f37a875c9..c9570fcf1cce 100644 --- a/drivers/macintosh/via-macii.c +++ b/drivers/macintosh/via-macii.c @@ -159,7 +159,7 @@ int macii_init(void) err = macii_init_via(); if (err) goto out; - err = request_irq(IRQ_MAC_ADB, macii_interrupt, IRQ_FLG_LOCK, "ADB", + err = request_irq(IRQ_MAC_ADB, macii_interrupt, 0, "ADB", macii_interrupt); if (err) goto out; diff --git a/drivers/macintosh/via-maciisi.c b/drivers/macintosh/via-maciisi.c index 9ab5b0c34f0d..34d02a91b29f 100644 --- a/drivers/macintosh/via-maciisi.c +++ b/drivers/macintosh/via-maciisi.c @@ -122,8 +122,8 @@ maciisi_init(void) return err; } - if (request_irq(IRQ_MAC_ADB, maciisi_interrupt, IRQ_FLG_LOCK | IRQ_FLG_FAST, - "ADB", maciisi_interrupt)) { + if (request_irq(IRQ_MAC_ADB, maciisi_interrupt, 0, "ADB", + maciisi_interrupt)) { printk(KERN_ERR "maciisi_init: can't get irq %d\n", IRQ_MAC_ADB); return -EAGAIN; } diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c index cb246667dd52..0a6806f80ab5 100644 --- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -14,6 +14,7 @@ #include <linux/vmalloc.h> #include <linux/version.h> #include <linux/shrinker.h> +#include <linux/module.h> #define DM_MSG_PREFIX "bufio" diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c index 65fd85ec6514..023fbc2d389e 100644 --- a/drivers/md/persistent-data/dm-btree-remove.c +++ b/drivers/md/persistent-data/dm-btree-remove.c @@ -8,7 +8,7 @@ #include "dm-btree-internal.h" #include "dm-transaction-manager.h" -#include <linux/module.h> +#include <linux/export.h> /* * Removing an entry from a btree diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c index e0638be53ea4..bd1e7ffbe26c 100644 --- a/drivers/md/persistent-data/dm-btree.c +++ b/drivers/md/persistent-data/dm-btree.c @@ -8,7 +8,7 @@ #include "dm-space-map.h" #include "dm-transaction-manager.h" -#include <linux/module.h> +#include <linux/export.h> #include <linux/device-mapper.h> #define DM_MSG_PREFIX "btree" diff --git a/drivers/md/persistent-data/dm-space-map-checker.c b/drivers/md/persistent-data/dm-space-map-checker.c index bb44a937fe63..50ed53bf4aa2 100644 --- a/drivers/md/persistent-data/dm-space-map-checker.c +++ b/drivers/md/persistent-data/dm-space-map-checker.c @@ -7,6 +7,7 @@ #include "dm-space-map-checker.h" #include <linux/device-mapper.h> +#include <linux/export.h> #ifdef CONFIG_DM_DEBUG_SPACE_MAPS diff --git a/drivers/md/persistent-data/dm-space-map-disk.c b/drivers/md/persistent-data/dm-space-map-disk.c index aeff7852cf79..fc469ba9f627 100644 --- a/drivers/md/persistent-data/dm-space-map-disk.c +++ b/drivers/md/persistent-data/dm-space-map-disk.c @@ -12,7 +12,7 @@ #include <linux/list.h> #include <linux/slab.h> -#include <linux/module.h> +#include <linux/export.h> #include <linux/device-mapper.h> #define DM_MSG_PREFIX "space map disk" diff --git a/drivers/md/persistent-data/dm-transaction-manager.c b/drivers/md/persistent-data/dm-transaction-manager.c index 728e89a3f978..6f8d38747d7f 100644 --- a/drivers/md/persistent-data/dm-transaction-manager.c +++ b/drivers/md/persistent-data/dm-transaction-manager.c @@ -10,7 +10,7 @@ #include "dm-space-map-metadata.h" #include "dm-persistent-data-internal.h" -#include <linux/module.h> +#include <linux/export.h> #include <linux/slab.h> #include <linux/device-mapper.h> 
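The intel_idle hunks above track the cpuidle core rework that splits per-state data between the driver and the device: the MWAIT hint moves out of the static cpuidle_state tables (the deleted .driver_data initializers) into the per-CPU cpuidle_state_usage array filled in from get_driver_data(), the state tables are now registered once through intel_idle_cpuidle_driver_init(), and the ->enter() callback takes (dev, drv, index), stores the measured residency in dev->last_residency and returns the index it entered rather than the residency itself. The following is a minimal userspace sketch of that calling convention only, using mock structures and a hypothetical mock_enter() instead of the real <linux/cpuidle.h> types and the actual MWAIT/clockevents handling:

    /* Sketch of the reworked cpuidle enter convention: per-state
     * driver_data lives in the per-device states_usage[] array (the
     * patch reads it back through cpuidle_get_statedata()), and the
     * callback returns the index of the state it entered.
     * Mock types only -- not the kernel's <linux/cpuidle.h>. */
    #include <stdio.h>

    struct cpuidle_state_usage { void *driver_data; };
    struct cpuidle_device {
            struct cpuidle_state_usage states_usage[8];
            int last_residency;             /* filled in by ->enter() */
    };

    /* Same cstate -> MWAIT hint mapping as get_driver_data() above */
    static int get_driver_data(int cstate)
    {
            static const int hint[] = { 0x00, 0x00, 0x10, 0x20,
                                        0x30, 0x40, 0x52 };
            return (cstate >= 1 && cstate <= 6) ? hint[cstate] : 0x00;
    }

    /* New-style enter callback: record residency, return the index */
    static int mock_enter(struct cpuidle_device *dev, int index)
    {
            unsigned long eax =
                    (unsigned long)dev->states_usage[index].driver_data;
            int usec_delta = 100;           /* pretend we idled 100 us */

            printf("entering table slot %d, MWAIT hint 0x%02lx\n",
                   index, eax);
            dev->last_residency = usec_delta; /* was the old return value */
            return index;                     /* state actually entered */
    }

    int main(void)
    {
            struct cpuidle_device dev = { .last_residency = 0 };
            int index = 4;                  /* e.g. the MWAIT C4 slot */

            dev.states_usage[index].driver_data =
                    (void *)(long)get_driver_data(index);
            mock_enter(&dev, index);
            printf("last_residency = %d us\n", dev.last_residency);
            return 0;
    }

The sketch deliberately leaves out everything the real callback does around the measurement (monitor/mwait, TSC and LAPIC timer handling); it only shows why the per-state usage array and the index return value replace the old per-state driver_data and residency return.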
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 472aedfb07cf..297e26092178 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -3110,7 +3110,7 @@ static void handle_stripe(struct stripe_head *sh) struct r5dev *pdev, *qdev; clear_bit(STRIPE_HANDLE, &sh->state); - if (test_and_set_bit(STRIPE_ACTIVE, &sh->state)) { + if (test_and_set_bit_lock(STRIPE_ACTIVE, &sh->state)) { /* already being handled, ensure it gets handled * again when current action finishes */ set_bit(STRIPE_HANDLE, &sh->state); @@ -3159,10 +3159,14 @@ static void handle_stripe(struct stripe_head *sh) /* check if the array has lost more than max_degraded devices and, * if so, some requests might need to be failed. */ - if (s.failed > conf->max_degraded && s.to_read+s.to_write+s.written) - handle_failed_stripe(conf, sh, &s, disks, &s.return_bi); - if (s.failed > conf->max_degraded && s.syncing) - handle_failed_sync(conf, sh, &s); + if (s.failed > conf->max_degraded) { + sh->check_state = 0; + sh->reconstruct_state = 0; + if (s.to_read+s.to_write+s.written) + handle_failed_stripe(conf, sh, &s, disks, &s.return_bi); + if (s.syncing) + handle_failed_sync(conf, sh, &s); + } /* * might be able to return some write requests if the parity blocks @@ -3371,7 +3375,7 @@ finish: return_io(s.return_bi); - clear_bit(STRIPE_ACTIVE, &sh->state); + clear_bit_unlock(STRIPE_ACTIVE, &sh->state); } static void raid5_activate_delayed(struct r5conf *conf) diff --git a/drivers/media/dvb/dvb-usb/mxl111sf-i2c.c b/drivers/media/dvb/dvb-usb/mxl111sf-i2c.c index 2e8c288258a9..34434557ef65 100644 --- a/drivers/media/dvb/dvb-usb/mxl111sf-i2c.c +++ b/drivers/media/dvb/dvb-usb/mxl111sf-i2c.c @@ -398,7 +398,6 @@ static int mxl111sf_i2c_readagain(struct mxl111sf_state *state, u8 i2c_r_data[24]; u8 i = 0; u8 fifo_status = 0; - int ret; int status = 0; mxl_i2c("read %d bytes", count); @@ -418,7 +417,7 @@ static int mxl111sf_i2c_readagain(struct mxl111sf_state *state, i2c_w_data[4+(i*3)] = 0x00; } - ret = mxl111sf_i2c_get_data(state, 0, i2c_w_data, i2c_r_data); + mxl111sf_i2c_get_data(state, 0, i2c_w_data, i2c_r_data); /* Check for I2C NACK status */ if (mxl111sf_i2c_check_status(state) == 1) { diff --git a/drivers/media/dvb/dvb-usb/mxl111sf-phy.c b/drivers/media/dvb/dvb-usb/mxl111sf-phy.c index 91dc1fc2825b..b741b3a7a325 100644 --- a/drivers/media/dvb/dvb-usb/mxl111sf-phy.c +++ b/drivers/media/dvb/dvb-usb/mxl111sf-phy.c @@ -296,8 +296,7 @@ int mxl111sf_config_spi(struct mxl111sf_state *state, int onoff) goto fail; ret = mxl111sf_write_reg(state, 0x00, 0x00); - if (mxl_fail(ret)) - goto fail; + mxl_fail(ret); fail: return ret; } @@ -328,11 +327,13 @@ int mxl111sf_idac_config(struct mxl111sf_state *state, /* set hysteresis value reg: 0x0B<5:0> */ ret = mxl111sf_write_reg(state, V6_IDAC_HYSTERESIS_REG, (hysteresis_value & 0x3F)); + mxl_fail(ret); } ret = mxl111sf_write_reg(state, V6_IDAC_SETTINGS_REG, val); + mxl_fail(ret); - return val; + return ret; } /* diff --git a/drivers/media/video/s5k6aa.c b/drivers/media/video/s5k6aa.c index 2446736b7871..0df7f2a41814 100644 --- a/drivers/media/video/s5k6aa.c +++ b/drivers/media/video/s5k6aa.c @@ -19,6 +19,7 @@ #include <linux/gpio.h> #include <linux/i2c.h> #include <linux/media.h> +#include <linux/module.h> #include <linux/regulator/consumer.h> #include <linux/slab.h> diff --git a/drivers/media/video/s5p-mfc/s5p_mfc_dec.c b/drivers/media/video/s5p-mfc/s5p_mfc_dec.c index 725634d9736d..844a4d7797bc 100644 --- a/drivers/media/video/s5p-mfc/s5p_mfc_dec.c +++ b/drivers/media/video/s5p-mfc/s5p_mfc_dec.c @@ 
-220,8 +220,8 @@ static int vidioc_querycap(struct file *file, void *priv, strncpy(cap->card, dev->plat_dev->name, sizeof(cap->card) - 1); cap->bus_info[0] = 0; cap->version = KERNEL_VERSION(1, 0, 0); - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT - | V4L2_CAP_STREAMING; + cap->capabilities = V4L2_CAP_VIDEO_CAPTURE_MPLANE | + V4L2_CAP_VIDEO_OUTPUT_MPLANE | V4L2_CAP_STREAMING; return 0; } diff --git a/drivers/media/video/s5p-mfc/s5p_mfc_enc.c b/drivers/media/video/s5p-mfc/s5p_mfc_enc.c index ecef127dbc66..1e8cdb77d4b8 100644 --- a/drivers/media/video/s5p-mfc/s5p_mfc_enc.c +++ b/drivers/media/video/s5p-mfc/s5p_mfc_enc.c @@ -785,8 +785,8 @@ static int vidioc_querycap(struct file *file, void *priv, strncpy(cap->card, dev->plat_dev->name, sizeof(cap->card) - 1); cap->bus_info[0] = 0; cap->version = KERNEL_VERSION(1, 0, 0); - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE - | V4L2_CAP_VIDEO_OUTPUT + cap->capabilities = V4L2_CAP_VIDEO_CAPTURE_MPLANE + | V4L2_CAP_VIDEO_OUTPUT_MPLANE | V4L2_CAP_STREAMING; return 0; } diff --git a/drivers/media/video/uvc/uvc_ctrl.c b/drivers/media/video/uvc/uvc_ctrl.c index 10c2364f3e8a..254d32688843 100644 --- a/drivers/media/video/uvc/uvc_ctrl.c +++ b/drivers/media/video/uvc/uvc_ctrl.c @@ -1016,7 +1016,8 @@ int uvc_query_v4l2_menu(struct uvc_video_chain *chain, menu_info = &mapping->menu_info[query_menu->index]; - if (ctrl->info.flags & UVC_CTRL_FLAG_GET_RES) { + if (mapping->data_type == UVC_CTRL_DATA_TYPE_BITMASK && + (ctrl->info.flags & UVC_CTRL_FLAG_GET_RES)) { s32 bitmap; if (!ctrl->cached) { @@ -1225,7 +1226,8 @@ int uvc_ctrl_set(struct uvc_video_chain *chain, /* Valid menu indices are reported by the GET_RES request for * UVC controls that support it. */ - if (ctrl->info.flags & UVC_CTRL_FLAG_GET_RES) { + if (mapping->data_type == UVC_CTRL_DATA_TYPE_BITMASK && + (ctrl->info.flags & UVC_CTRL_FLAG_GET_RES)) { if (!ctrl->cached) { ret = uvc_ctrl_populate_cache(chain, ctrl); if (ret < 0) diff --git a/drivers/media/video/v4l2-ctrls.c b/drivers/media/video/v4l2-ctrls.c index f17f92b86a30..0f415dade05a 100644 --- a/drivers/media/video/v4l2-ctrls.c +++ b/drivers/media/video/v4l2-ctrls.c @@ -821,8 +821,8 @@ static void send_event(struct v4l2_fh *fh, struct v4l2_ctrl *ctrl, u32 changes) fill_event(&ev, ctrl, changes); list_for_each_entry(sev, &ctrl->ev_subs, node) - if (sev->fh && (sev->fh != fh || - (sev->flags & V4L2_EVENT_SUB_FL_ALLOW_FEEDBACK))) + if (sev->fh != fh || + (sev->flags & V4L2_EVENT_SUB_FL_ALLOW_FEEDBACK)) v4l2_event_queue_fh(sev->fh, &ev); } @@ -947,6 +947,7 @@ static void new_to_cur(struct v4l2_fh *fh, struct v4l2_ctrl *ctrl, if (ctrl->cluster[0]->has_volatiles) ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE; } + fh = NULL; } if (changed || update_inactive) { /* If a control was changed that was not one of the controls diff --git a/drivers/media/video/v4l2-event.c b/drivers/media/video/v4l2-event.c index 46037f225529..c26ad9637143 100644 --- a/drivers/media/video/v4l2-event.c +++ b/drivers/media/video/v4l2-event.c @@ -216,6 +216,9 @@ int v4l2_event_subscribe(struct v4l2_fh *fh, unsigned long flags; unsigned i; + if (sub->type == V4L2_EVENT_ALL) + return -EINVAL; + if (elems < 1) elems = 1; if (sub->type == V4L2_EVENT_CTRL) { @@ -283,6 +286,7 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh, { struct v4l2_subscribed_event *sev; unsigned long flags; + int i; if (sub->type == V4L2_EVENT_ALL) { v4l2_event_unsubscribe_all(fh); @@ -293,8 +297,12 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh, sev = v4l2_event_subscribed(fh, sub->type, 
sub->id); if (sev != NULL) { + /* Remove any pending events for this subscription */ + for (i = 0; i < sev->in_use; i++) { + list_del(&sev->events[sev_pos(sev, i)].list); + fh->navailable--; + } list_del(&sev->list); - sev->fh = NULL; } spin_unlock_irqrestore(&fh->vdev->fh_lock, flags); diff --git a/drivers/media/video/videobuf2-core.c b/drivers/media/video/videobuf2-core.c index 979e544388cb..95a3f5e82aef 100644 --- a/drivers/media/video/videobuf2-core.c +++ b/drivers/media/video/videobuf2-core.c @@ -131,6 +131,7 @@ static void __setup_offsets(struct vb2_queue *q, unsigned int n) continue; for (plane = 0; plane < vb->num_planes; ++plane) { + vb->v4l2_planes[plane].length = q->plane_sizes[plane]; vb->v4l2_planes[plane].m.mem_offset = off; dprintk(3, "Buffer %d, plane %d offset 0x%08lx\n", @@ -264,6 +265,7 @@ static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers) q->num_buffers -= buffers; if (!q->num_buffers) q->memory = 0; + INIT_LIST_HEAD(&q->queued_list); } /** @@ -296,14 +298,14 @@ static bool __buffer_in_use(struct vb2_queue *q, struct vb2_buffer *vb) { unsigned int plane; for (plane = 0; plane < vb->num_planes; ++plane) { + void *mem_priv = vb->planes[plane].mem_priv; /* * If num_users() has not been provided, call_memop * will return 0, apparently nobody cares about this * case anyway. If num_users() returns more than 1, * we are not the only user of the plane's memory. */ - if (call_memop(q, plane, num_users, - vb->planes[plane].mem_priv) > 1) + if (mem_priv && call_memop(q, plane, num_users, mem_priv) > 1) return true; } return false; diff --git a/drivers/mfd/ab5500-core.c b/drivers/mfd/ab5500-core.c index 4175544b491b..ec10629a0b0b 100644 --- a/drivers/mfd/ab5500-core.c +++ b/drivers/mfd/ab5500-core.c @@ -13,6 +13,7 @@ * TODO: Event handling with irq_chip. Waiting for PRCMU fw support. */ +#include <linux/module.h> #include <linux/mutex.h> #include <linux/err.h> #include <linux/platform_device.h> diff --git a/drivers/mfd/ab5500-debugfs.c b/drivers/mfd/ab5500-debugfs.c index 6be1fe6b5f9a..43c0ebb81956 100644 --- a/drivers/mfd/ab5500-debugfs.c +++ b/drivers/mfd/ab5500-debugfs.c @@ -4,6 +4,7 @@ * Debugfs support for the AB5500 MFD driver */ +#include <linux/export.h> #include <linux/debugfs.h> #include <linux/seq_file.h> #include <linux/mfd/ab5500/ab5500.h> diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c index ae57769ba50d..4b976f00ea85 100644 --- a/drivers/mmc/host/sdhci-esdhc-imx.c +++ b/drivers/mmc/host/sdhci-esdhc-imx.c @@ -32,6 +32,7 @@ /* VENDOR SPEC register */ #define SDHCI_VENDOR_SPEC 0xC0 #define SDHCI_VENDOR_SPEC_SDIO_QUIRK 0x00000002 +#define SDHCI_WTMK_LVL 0x44 #define SDHCI_MIX_CTRL 0x48 /* @@ -476,6 +477,13 @@ static int __devinit sdhci_esdhc_imx_probe(struct platform_device *pdev) if (is_imx53_esdhc(imx_data)) imx_data->flags |= ESDHC_FLAG_MULTIBLK_NO_INT; + /* + * The imx6q ROM code will change the default watermark level setting + * to something insane. Change it back here. + */ + if (is_imx6q_usdhc(imx_data)) + writel(0x08100810, host->ioaddr + SDHCI_WTMK_LVL); + boarddata = &imx_data->boarddata; if (sdhci_esdhc_imx_probe_dt(pdev, boarddata) < 0) { if (!host->mmc->parent->platform_data) { diff --git a/drivers/mtd/Kconfig b/drivers/mtd/Kconfig index 66b616ebe536..318a869286ab 100644 --- a/drivers/mtd/Kconfig +++ b/drivers/mtd/Kconfig @@ -12,27 +12,17 @@ menuconfig MTD if MTD -config MTD_DEBUG - bool "Debugging" - help - This turns on low-level debugging for the entire MTD sub-system. 
- Normally, you should say 'N'. - -config MTD_DEBUG_VERBOSE - int "Debugging verbosity (0 = quiet, 3 = noisy)" - depends on MTD_DEBUG - default "0" - help - Determines the verbosity level of the MTD debugging messages. - config MTD_TESTS - tristate "MTD tests support" + tristate "MTD tests support (DANGEROUS)" depends on m help This option includes various MTD tests into compilation. The tests should normally be compiled as kernel modules. The modules perform various checks and verifications when loaded. + WARNING: some of the tests will ERASE entire MTD device which they + test. Do not use these tests unless you really know what you do. + config MTD_REDBOOT_PARTS tristate "RedBoot partition table parsing" ---help--- @@ -137,7 +127,8 @@ config MTD_AFS_PARTS 'physmap' map driver (CONFIG_MTD_PHYSMAP) does this, for example. config MTD_OF_PARTS - def_bool y + tristate "OpenFirmware partitioning information support" + default Y depends on OF help This provides a partition parsing function which derives diff --git a/drivers/mtd/Makefile b/drivers/mtd/Makefile index 39664c4229ff..9aaac3ac89f3 100644 --- a/drivers/mtd/Makefile +++ b/drivers/mtd/Makefile @@ -5,8 +5,8 @@ # Core functionality. obj-$(CONFIG_MTD) += mtd.o mtd-y := mtdcore.o mtdsuper.o mtdconcat.o mtdpart.o -mtd-$(CONFIG_MTD_OF_PARTS) += ofpart.o +obj-$(CONFIG_MTD_OF_PARTS) += ofpart.o obj-$(CONFIG_MTD_REDBOOT_PARTS) += redboot.o obj-$(CONFIG_MTD_CMDLINE_PARTS) += cmdlinepart.o obj-$(CONFIG_MTD_AFS_PARTS) += afs.o diff --git a/drivers/mtd/afs.c b/drivers/mtd/afs.c index 302372c08b56..89a02f6f65dc 100644 --- a/drivers/mtd/afs.c +++ b/drivers/mtd/afs.c @@ -162,8 +162,8 @@ afs_read_iis(struct mtd_info *mtd, struct image_info_struct *iis, u_int ptr) } static int parse_afs_partitions(struct mtd_info *mtd, - struct mtd_partition **pparts, - unsigned long origin) + struct mtd_partition **pparts, + struct mtd_part_parser_data *data) { struct mtd_partition *parts; u_int mask, off, idx, sz; diff --git a/drivers/mtd/ar7part.c b/drivers/mtd/ar7part.c index 95949b97de6a..f40ea4547554 100644 --- a/drivers/mtd/ar7part.c +++ b/drivers/mtd/ar7part.c @@ -47,7 +47,7 @@ struct ar7_bin_rec { static int create_mtd_partitions(struct mtd_info *master, struct mtd_partition **pparts, - unsigned long origin) + struct mtd_part_parser_data *data) { struct ar7_bin_rec header; unsigned int offset; diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c index 23175edd5634..8d70895a58d6 100644 --- a/drivers/mtd/chips/cfi_cmdset_0002.c +++ b/drivers/mtd/chips/cfi_cmdset_0002.c @@ -145,8 +145,7 @@ static void fixup_amd_bootblock(struct mtd_info *mtd) if (((major << 8) | minor) < 0x3131) { /* CFI version 1.0 => don't trust bootloc */ - DEBUG(MTD_DEBUG_LEVEL1, - "%s: JEDEC Vendor ID is 0x%02X Device ID is 0x%02X\n", + pr_debug("%s: JEDEC Vendor ID is 0x%02X Device ID is 0x%02X\n", map->name, cfi->mfr, cfi->id); /* AFAICS all 29LV400 with a bottom boot block have a device ID @@ -166,8 +165,7 @@ static void fixup_amd_bootblock(struct mtd_info *mtd) * the 8-bit device ID. 
*/ (cfi->mfr == CFI_MFR_MACRONIX)) { - DEBUG(MTD_DEBUG_LEVEL1, - "%s: Macronix MX29LV400C with bottom boot block" + pr_debug("%s: Macronix MX29LV400C with bottom boot block" " detected\n", map->name); extp->TopBottom = 2; /* bottom boot */ } else @@ -178,8 +176,7 @@ static void fixup_amd_bootblock(struct mtd_info *mtd) extp->TopBottom = 2; /* bottom boot */ } - DEBUG(MTD_DEBUG_LEVEL1, - "%s: AMD CFI PRI V%c.%c has no boot block field;" + pr_debug("%s: AMD CFI PRI V%c.%c has no boot block field;" " deduced %s from Device ID\n", map->name, major, minor, extp->TopBottom == 2 ? "bottom" : "top"); } @@ -191,7 +188,7 @@ static void fixup_use_write_buffers(struct mtd_info *mtd) struct map_info *map = mtd->priv; struct cfi_private *cfi = map->fldrv_priv; if (cfi->cfiq->BufWriteTimeoutTyp) { - DEBUG(MTD_DEBUG_LEVEL1, "Using buffer write method\n" ); + pr_debug("Using buffer write method\n" ); mtd->write = cfi_amdstd_write_buffers; } } @@ -443,8 +440,8 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary) mtd->writesize = 1; mtd->writebufsize = cfi_interleave(cfi) << cfi->cfiq->MaxBufWriteSize; - DEBUG(MTD_DEBUG_LEVEL3, "MTD %s(): write buffer size %d\n", - __func__, mtd->writebufsize); + pr_debug("MTD %s(): write buffer size %d\n", __func__, + mtd->writebufsize); mtd->reboot_notifier.notifier_call = cfi_amdstd_reboot; @@ -1163,7 +1160,7 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip, return ret; } - DEBUG( MTD_DEBUG_LEVEL3, "MTD %s(): WRITE 0x%.8lx(0x%.8lx)\n", + pr_debug("MTD %s(): WRITE 0x%.8lx(0x%.8lx)\n", __func__, adr, datum.x[0] ); /* @@ -1174,7 +1171,7 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip, */ oldd = map_read(map, adr); if (map_word_equal(map, oldd, datum)) { - DEBUG( MTD_DEBUG_LEVEL3, "MTD %s(): NOP\n", + pr_debug("MTD %s(): NOP\n", __func__); goto op_done; } @@ -1400,7 +1397,7 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip, datum = map_word_load(map, buf); - DEBUG( MTD_DEBUG_LEVEL3, "MTD %s(): WRITE 0x%.8lx(0x%.8lx)\n", + pr_debug("MTD %s(): WRITE 0x%.8lx(0x%.8lx)\n", __func__, adr, datum.x[0] ); XIP_INVAL_CACHED_RANGE(map, adr, len); @@ -1587,7 +1584,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip) return ret; } - DEBUG( MTD_DEBUG_LEVEL3, "MTD %s(): ERASE 0x%.8lx\n", + pr_debug("MTD %s(): ERASE 0x%.8lx\n", __func__, chip->start ); XIP_INVAL_CACHED_RANGE(map, adr, map->size); @@ -1675,7 +1672,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip, return ret; } - DEBUG( MTD_DEBUG_LEVEL3, "MTD %s(): ERASE 0x%.8lx\n", + pr_debug("MTD %s(): ERASE 0x%.8lx\n", __func__, adr ); XIP_INVAL_CACHED_RANGE(map, adr, len); @@ -1801,8 +1798,7 @@ static int do_atmel_lock(struct map_info *map, struct flchip *chip, goto out_unlock; chip->state = FL_LOCKING; - DEBUG(MTD_DEBUG_LEVEL3, "MTD %s(): LOCK 0x%08lx len %d\n", - __func__, adr, len); + pr_debug("MTD %s(): LOCK 0x%08lx len %d\n", __func__, adr, len); cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); @@ -1837,8 +1833,7 @@ static int do_atmel_unlock(struct map_info *map, struct flchip *chip, goto out_unlock; chip->state = FL_UNLOCKING; - DEBUG(MTD_DEBUG_LEVEL3, "MTD %s(): LOCK 0x%08lx len %d\n", - __func__, adr, len); + pr_debug("MTD %s(): LOCK 0x%08lx len %d\n", __func__, adr, len); cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); diff --git a/drivers/mtd/chips/fwh_lock.h 
b/drivers/mtd/chips/fwh_lock.h index 5e3cc80128aa..89c6595454a5 100644 --- a/drivers/mtd/chips/fwh_lock.h +++ b/drivers/mtd/chips/fwh_lock.h @@ -34,8 +34,7 @@ static int fwh_xxlock_oneblock(struct map_info *map, struct flchip *chip, /* Refuse the operation if the we cannot look behind the chip */ if (chip->start < 0x400000) { - DEBUG( MTD_DEBUG_LEVEL3, - "MTD %s(): chip->start: %lx wanted >= 0x400000\n", + pr_debug( "MTD %s(): chip->start: %lx wanted >= 0x400000\n", __func__, chip->start ); return -EIO; } diff --git a/drivers/mtd/chips/jedec_probe.c b/drivers/mtd/chips/jedec_probe.c index ea832ea0e4aa..c443f527a53a 100644 --- a/drivers/mtd/chips/jedec_probe.c +++ b/drivers/mtd/chips/jedec_probe.c @@ -1914,11 +1914,10 @@ static void jedec_reset(u32 base, struct map_info *map, struct cfi_private *cfi) * (oh and incidentaly the jedec spec - 3.5.3.3) the reset * sequence is *supposed* to be 0xaa at 0x5555, 0x55 at * 0x2aaa, 0xF0 at 0x5555 this will not affect the AMD chips - * as they will ignore the writes and dont care what address + * as they will ignore the writes and don't care what address * the F0 is written to */ if (cfi->addr_unlock1) { - DEBUG( MTD_DEBUG_LEVEL3, - "reset unlock called %x %x \n", + pr_debug( "reset unlock called %x %x \n", cfi->addr_unlock1,cfi->addr_unlock2); cfi_send_gen_cmd(0xaa, cfi->addr_unlock1, base, map, cfi, cfi->device_type, NULL); cfi_send_gen_cmd(0x55, cfi->addr_unlock2, base, map, cfi, cfi->device_type, NULL); @@ -1941,7 +1940,7 @@ static int cfi_jedec_setup(struct map_info *map, struct cfi_private *cfi, int in uint8_t uaddr; if (!(jedec_table[index].devtypes & cfi->device_type)) { - DEBUG(MTD_DEBUG_LEVEL1, "Rejecting potential %s with incompatible %d-bit device type\n", + pr_debug("Rejecting potential %s with incompatible %d-bit device type\n", jedec_table[index].name, 4 * (1<<cfi->device_type)); return 0; } @@ -2021,7 +2020,7 @@ static inline int jedec_match( uint32_t base, * there aren't. 
*/ if (finfo->dev_id > 0xff) { - DEBUG( MTD_DEBUG_LEVEL3, "%s(): ID is not 8bit\n", + pr_debug("%s(): ID is not 8bit\n", __func__); goto match_done; } @@ -2045,12 +2044,10 @@ static inline int jedec_match( uint32_t base, } /* the part size must fit in the memory window */ - DEBUG( MTD_DEBUG_LEVEL3, - "MTD %s(): Check fit 0x%.8x + 0x%.8x = 0x%.8x\n", + pr_debug("MTD %s(): Check fit 0x%.8x + 0x%.8x = 0x%.8x\n", __func__, base, 1 << finfo->dev_size, base + (1 << finfo->dev_size) ); if ( base + cfi_interleave(cfi) * ( 1 << finfo->dev_size ) > map->size ) { - DEBUG( MTD_DEBUG_LEVEL3, - "MTD %s(): 0x%.4x 0x%.4x %dKiB doesn't fit\n", + pr_debug("MTD %s(): 0x%.4x 0x%.4x %dKiB doesn't fit\n", __func__, finfo->mfr_id, finfo->dev_id, 1 << finfo->dev_size ); goto match_done; @@ -2061,13 +2058,12 @@ static inline int jedec_match( uint32_t base, uaddr = finfo->uaddr; - DEBUG( MTD_DEBUG_LEVEL3, "MTD %s(): check unlock addrs 0x%.4x 0x%.4x\n", + pr_debug("MTD %s(): check unlock addrs 0x%.4x 0x%.4x\n", __func__, cfi->addr_unlock1, cfi->addr_unlock2 ); if ( MTD_UADDR_UNNECESSARY != uaddr && MTD_UADDR_DONT_CARE != uaddr && ( unlock_addrs[uaddr].addr1 / cfi->device_type != cfi->addr_unlock1 || unlock_addrs[uaddr].addr2 / cfi->device_type != cfi->addr_unlock2 ) ) { - DEBUG( MTD_DEBUG_LEVEL3, - "MTD %s(): 0x%.4x 0x%.4x did not match\n", + pr_debug("MTD %s(): 0x%.4x 0x%.4x did not match\n", __func__, unlock_addrs[uaddr].addr1, unlock_addrs[uaddr].addr2); @@ -2083,15 +2079,13 @@ static inline int jedec_match( uint32_t base, * FIXME - write a driver that takes all of the chip info as * module parameters, doesn't probe but forces a load. */ - DEBUG( MTD_DEBUG_LEVEL3, - "MTD %s(): check ID's disappear when not in ID mode\n", + pr_debug("MTD %s(): check ID's disappear when not in ID mode\n", __func__ ); jedec_reset( base, map, cfi ); mfr = jedec_read_mfr( map, base, cfi ); id = jedec_read_id( map, base, cfi ); if ( mfr == cfi->mfr && id == cfi->id ) { - DEBUG( MTD_DEBUG_LEVEL3, - "MTD %s(): ID 0x%.2x:0x%.2x did not change after reset:\n" + pr_debug("MTD %s(): ID 0x%.2x:0x%.2x did not change after reset:\n" "You might need to manually specify JEDEC parameters.\n", __func__, cfi->mfr, cfi->id ); goto match_done; @@ -2104,7 +2098,7 @@ static inline int jedec_match( uint32_t base, * Put the device back in ID mode - only need to do this if we * were truly frobbing a real device. 
*/ - DEBUG( MTD_DEBUG_LEVEL3, "MTD %s(): return to ID mode\n", __func__ ); + pr_debug("MTD %s(): return to ID mode\n", __func__ ); if (cfi->addr_unlock1) { cfi_send_gen_cmd(0xaa, cfi->addr_unlock1, base, map, cfi, cfi->device_type, NULL); cfi_send_gen_cmd(0x55, cfi->addr_unlock2, base, map, cfi, cfi->device_type, NULL); @@ -2167,13 +2161,11 @@ static int jedec_probe_chip(struct map_info *map, __u32 base, cfi->mfr = jedec_read_mfr(map, base, cfi); cfi->id = jedec_read_id(map, base, cfi); - DEBUG(MTD_DEBUG_LEVEL3, - "Search for id:(%02x %02x) interleave(%d) type(%d)\n", + pr_debug("Search for id:(%02x %02x) interleave(%d) type(%d)\n", cfi->mfr, cfi->id, cfi_interleave(cfi), cfi->device_type); for (i = 0; i < ARRAY_SIZE(jedec_table); i++) { if ( jedec_match( base, map, cfi, &jedec_table[i] ) ) { - DEBUG( MTD_DEBUG_LEVEL3, - "MTD %s(): matched device 0x%x,0x%x unlock_addrs: 0x%.4x 0x%.4x\n", + pr_debug("MTD %s(): matched device 0x%x,0x%x unlock_addrs: 0x%.4x 0x%.4x\n", __func__, cfi->mfr, cfi->id, cfi->addr_unlock1, cfi->addr_unlock2 ); if (!cfi_jedec_setup(map, cfi, i)) diff --git a/drivers/mtd/cmdlinepart.c b/drivers/mtd/cmdlinepart.c index 8cf667da2408..ddf9ec6d9168 100644 --- a/drivers/mtd/cmdlinepart.c +++ b/drivers/mtd/cmdlinepart.c @@ -189,10 +189,7 @@ static struct mtd_partition * newpart(char *s, extra_mem_size; parts = kzalloc(alloc_size, GFP_KERNEL); if (!parts) - { - printk(KERN_ERR ERRP "out of memory\n"); return NULL; - } extra_mem = (unsigned char *)(parts + *num_parts); } /* enter this partition (offset will be calculated later if it is zero at this point) */ @@ -317,8 +314,8 @@ static int mtdpart_setup_real(char *s) * the first one in the chain if a NULL mtd_id is passed in. */ static int parse_cmdline_partitions(struct mtd_info *master, - struct mtd_partition **pparts, - unsigned long origin) + struct mtd_partition **pparts, + struct mtd_part_parser_data *data) { unsigned long offset; int i; diff --git a/drivers/mtd/devices/Kconfig b/drivers/mtd/devices/Kconfig index 35081ce77fbd..283d887f7825 100644 --- a/drivers/mtd/devices/Kconfig +++ b/drivers/mtd/devices/Kconfig @@ -249,6 +249,16 @@ config MTD_DOC2001PLUS under "NAND Flash Device Drivers" (currently that driver does not support all Millennium Plus devices). +config MTD_DOCG3 + tristate "M-Systems Disk-On-Chip G3" + ---help--- + This provides an MTD device driver for the M-Systems DiskOnChip + G3 devices. + + The driver provides access to G3 DiskOnChip, distributed by + M-Systems and now Sandisk. The support is very experimental, + and doesn't give access to any write operations. + config MTD_DOCPROBE tristate select MTD_DOCECC @@ -268,8 +278,7 @@ config MTD_DOCPROBE_ADVANCED config MTD_DOCPROBE_ADDRESS hex "Physical address of DiskOnChip" if MTD_DOCPROBE_ADVANCED depends on MTD_DOCPROBE - default "0x0000" if MTD_DOCPROBE_ADVANCED - default "0" if !MTD_DOCPROBE_ADVANCED + default "0x0" ---help--- By default, the probe for DiskOnChip devices will look for a DiskOnChip at every multiple of 0x2000 between 0xC8000 and 0xEE000. 
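The cfi_cmdset_0002.c, fwh_lock.h, jedec_probe.c and DiskOnChip hunks above, together with the removal of MTD_DEBUG and MTD_DEBUG_VERBOSE from drivers/mtd/Kconfig, replace the MTD-private DEBUG(MTD_DEBUG_LEVEL*, ...) macro with the generic pr_debug(). pr_debug() emits nothing unless DEBUG is defined for the file or CONFIG_DYNAMIC_DEBUG enables the message at run time, so no subsystem-wide verbosity option is needed. Below is a minimal userspace sketch of the compile-time half of that gating, with a hypothetical my_pr_debug() standing in for the real macro in include/linux/printk.h:

    /* Why the MTD_DEBUG/MTD_DEBUG_VERBOSE knobs can go away: debug
     * logging is gated per compilation unit (or by dynamic debug in the
     * kernel) rather than by a subsystem-private verbosity level.
     * Userspace mock of the compile-time gating only. */
    #include <stdio.h>

    #ifdef DEBUG
    #define my_pr_debug(fmt, ...) \
            fprintf(stderr, "debug: " fmt, ##__VA_ARGS__)
    #else
    /* Arguments are still type-checked, but no code is emitted. */
    #define my_pr_debug(fmt, ...) \
            do { if (0) fprintf(stderr, fmt, ##__VA_ARGS__); } while (0)
    #endif

    int main(void)
    {
            int writebufsize = 512;

            /* Compare with the converted call sites above, e.g.
             *   pr_debug("MTD %s(): write buffer size %d\n", ...);
             */
            my_pr_debug("MTD %s(): write buffer size %d\n",
                        __func__, writebufsize);
            return 0;
    }

Building with -DDEBUG prints the message; building without it compiles the call away entirely, which mirrors why the converted drivers no longer need their own debug-level configuration.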
diff --git a/drivers/mtd/devices/Makefile b/drivers/mtd/devices/Makefile index f3226b1d38fc..56c7cd462f11 100644 --- a/drivers/mtd/devices/Makefile +++ b/drivers/mtd/devices/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_MTD_DOC2000) += doc2000.o obj-$(CONFIG_MTD_DOC2001) += doc2001.o obj-$(CONFIG_MTD_DOC2001PLUS) += doc2001plus.o +obj-$(CONFIG_MTD_DOCG3) += docg3.o obj-$(CONFIG_MTD_DOCPROBE) += docprobe.o obj-$(CONFIG_MTD_DOCECC) += docecc.o obj-$(CONFIG_MTD_SLRAM) += slram.o @@ -17,3 +18,5 @@ obj-$(CONFIG_MTD_BLOCK2MTD) += block2mtd.o obj-$(CONFIG_MTD_DATAFLASH) += mtd_dataflash.o obj-$(CONFIG_MTD_M25P80) += m25p80.o obj-$(CONFIG_MTD_SST25L) += sst25l.o + +CFLAGS_docg3.o += -I$(src)
\ No newline at end of file diff --git a/drivers/mtd/devices/doc2000.c b/drivers/mtd/devices/doc2000.c index f7fbf6025ef2..e9fad9151219 100644 --- a/drivers/mtd/devices/doc2000.c +++ b/drivers/mtd/devices/doc2000.c @@ -82,8 +82,7 @@ static int _DoC_WaitReady(struct DiskOnChip *doc) void __iomem *docptr = doc->virtadr; unsigned long timeo = jiffies + (HZ * 10); - DEBUG(MTD_DEBUG_LEVEL3, - "_DoC_WaitReady called for out-of-line wait\n"); + pr_debug("_DoC_WaitReady called for out-of-line wait\n"); /* Out-of-line routine to wait for chip response */ while (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B)) { @@ -92,7 +91,7 @@ static int _DoC_WaitReady(struct DiskOnChip *doc) DoC_Delay(doc, 2); if (time_after(jiffies, timeo)) { - DEBUG(MTD_DEBUG_LEVEL2, "_DoC_WaitReady timed out.\n"); + pr_debug("_DoC_WaitReady timed out.\n"); return -EIO; } udelay(1); @@ -323,8 +322,7 @@ static int DoC_IdentChip(struct DiskOnChip *doc, int floor, int chip) /* Reset the chip */ if (DoC_Command(doc, NAND_CMD_RESET, CDSN_CTRL_WP)) { - DEBUG(MTD_DEBUG_LEVEL2, - "DoC_Command (reset) for %d,%d returned true\n", + pr_debug("DoC_Command (reset) for %d,%d returned true\n", floor, chip); return 0; } @@ -332,8 +330,7 @@ static int DoC_IdentChip(struct DiskOnChip *doc, int floor, int chip) /* Read the NAND chip ID: 1. Send ReadID command */ if (DoC_Command(doc, NAND_CMD_READID, CDSN_CTRL_WP)) { - DEBUG(MTD_DEBUG_LEVEL2, - "DoC_Command (ReadID) for %d,%d returned true\n", + pr_debug("DoC_Command (ReadID) for %d,%d returned true\n", floor, chip); return 0; } @@ -699,7 +696,7 @@ static int doc_read(struct mtd_info *mtd, loff_t from, size_t len, #ifdef ECC_DEBUG printk(KERN_ERR "DiskOnChip ECC Error: Read at %lx\n", (long)from); #endif - /* Read the ECC syndrom through the DiskOnChip ECC + /* Read the ECC syndrome through the DiskOnChip ECC logic. These syndrome will be all ZERO when there is no error */ for (i = 0; i < 6; i++) { @@ -930,7 +927,7 @@ static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, uint8_t *buf = ops->oobbuf; size_t len = ops->len; - BUG_ON(ops->mode != MTD_OOB_PLACE); + BUG_ON(ops->mode != MTD_OPS_PLACE_OOB); ofs += ops->ooboffs; @@ -1094,7 +1091,7 @@ static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, struct DiskOnChip *this = mtd->priv; int ret; - BUG_ON(ops->mode != MTD_OOB_PLACE); + BUG_ON(ops->mode != MTD_OPS_PLACE_OOB); mutex_lock(&this->lock); ret = doc_write_oob_nolock(mtd, ofs + ops->ooboffs, ops->len, diff --git a/drivers/mtd/devices/doc2001.c b/drivers/mtd/devices/doc2001.c index 241192f05bc8..a3f7a27499be 100644 --- a/drivers/mtd/devices/doc2001.c +++ b/drivers/mtd/devices/doc2001.c @@ -55,15 +55,14 @@ static int _DoC_WaitReady(void __iomem * docptr) { unsigned short c = 0xffff; - DEBUG(MTD_DEBUG_LEVEL3, - "_DoC_WaitReady called for out-of-line wait\n"); + pr_debug("_DoC_WaitReady called for out-of-line wait\n"); /* Out-of-line routine to wait for chip response */ while (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B) && --c) ; if (c == 0) - DEBUG(MTD_DEBUG_LEVEL2, "_DoC_WaitReady timed out.\n"); + pr_debug("_DoC_WaitReady timed out.\n"); return (c == 0); } @@ -464,7 +463,7 @@ static int doc_read (struct mtd_info *mtd, loff_t from, size_t len, #ifdef ECC_DEBUG printk("DiskOnChip ECC Error: Read at %lx\n", (long)from); #endif - /* Read the ECC syndrom through the DiskOnChip ECC logic. + /* Read the ECC syndrome through the DiskOnChip ECC logic. 
These syndrome will be all ZERO when there is no error */ for (i = 0; i < 6; i++) { syndrome[i] = ReadDOC(docptr, ECCSyndrome0 + i); @@ -632,7 +631,7 @@ static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, uint8_t *buf = ops->oobbuf; size_t len = ops->len; - BUG_ON(ops->mode != MTD_OOB_PLACE); + BUG_ON(ops->mode != MTD_OPS_PLACE_OOB); ofs += ops->ooboffs; @@ -690,7 +689,7 @@ static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, uint8_t *buf = ops->oobbuf; size_t len = ops->len; - BUG_ON(ops->mode != MTD_OOB_PLACE); + BUG_ON(ops->mode != MTD_OPS_PLACE_OOB); ofs += ops->ooboffs; diff --git a/drivers/mtd/devices/doc2001plus.c b/drivers/mtd/devices/doc2001plus.c index 09ae0adc3ad0..99351bc3e0ed 100644 --- a/drivers/mtd/devices/doc2001plus.c +++ b/drivers/mtd/devices/doc2001plus.c @@ -61,15 +61,14 @@ static int _DoC_WaitReady(void __iomem * docptr) { unsigned int c = 0xffff; - DEBUG(MTD_DEBUG_LEVEL3, - "_DoC_WaitReady called for out-of-line wait\n"); + pr_debug("_DoC_WaitReady called for out-of-line wait\n"); /* Out-of-line routine to wait for chip response */ while (((ReadDOC(docptr, Mplus_FlashControl) & CDSN_CTRL_FR_B_MASK) != CDSN_CTRL_FR_B_MASK) && --c) ; if (c == 0) - DEBUG(MTD_DEBUG_LEVEL2, "_DoC_WaitReady timed out.\n"); + pr_debug("_DoC_WaitReady timed out.\n"); return (c == 0); } @@ -655,7 +654,7 @@ static int doc_read(struct mtd_info *mtd, loff_t from, size_t len, #ifdef ECC_DEBUG printk("DiskOnChip ECC Error: Read at %lx\n", (long)from); #endif - /* Read the ECC syndrom through the DiskOnChip ECC logic. + /* Read the ECC syndrome through the DiskOnChip ECC logic. These syndrome will be all ZERO when there is no error */ for (i = 0; i < 6; i++) syndrome[i] = ReadDOC(docptr, Mplus_ECCSyndrome0 + i); @@ -835,7 +834,7 @@ static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, uint8_t *buf = ops->oobbuf; size_t len = ops->len; - BUG_ON(ops->mode != MTD_OOB_PLACE); + BUG_ON(ops->mode != MTD_OPS_PLACE_OOB); ofs += ops->ooboffs; @@ -920,7 +919,7 @@ static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, uint8_t *buf = ops->oobbuf; size_t len = ops->len; - BUG_ON(ops->mode != MTD_OOB_PLACE); + BUG_ON(ops->mode != MTD_OPS_PLACE_OOB); ofs += ops->ooboffs; diff --git a/drivers/mtd/devices/docecc.c b/drivers/mtd/devices/docecc.c index 37ef29a73ee4..4a1c39b6f37d 100644 --- a/drivers/mtd/devices/docecc.c +++ b/drivers/mtd/devices/docecc.c @@ -2,7 +2,7 @@ * ECC algorithm for M-systems disk on chip. We use the excellent Reed * Solmon code of Phil Karn (karn@ka9q.ampr.org) available under the * GNU GPL License. The rest is simply to convert the disk on chip - * syndrom into a standard syndom. + * syndrome into a standard syndome. * * Author: Fabrice Bellard (fabrice.bellard@netgem.com) * Copyright (C) 2000 Netgem S.A. diff --git a/drivers/mtd/devices/docg3.c b/drivers/mtd/devices/docg3.c new file mode 100644 index 000000000000..bdcf5df982e8 --- /dev/null +++ b/drivers/mtd/devices/docg3.c @@ -0,0 +1,1114 @@ +/* + * Handles the M-Systems DiskOnChip G3 chip + * + * Copyright (C) 2011 Robert Jarzmik + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/errno.h> +#include <linux/platform_device.h> +#include <linux/string.h> +#include <linux/slab.h> +#include <linux/io.h> +#include <linux/delay.h> +#include <linux/mtd/mtd.h> +#include <linux/mtd/partitions.h> + +#include <linux/debugfs.h> +#include <linux/seq_file.h> + +#define CREATE_TRACE_POINTS +#include "docg3.h" + +/* + * This driver handles the DiskOnChip G3 flash memory. + * + * As no specification is available from M-Systems/Sandisk, this drivers lacks + * several functions available on the chip, as : + * - block erase + * - page write + * - IPL write + * - ECC fixing (lack of BCH algorith understanding) + * - powerdown / powerup + * + * The bus data width (8bits versus 16bits) is not handled (if_cfg flag), and + * the driver assumes a 16bits data bus. + * + * DocG3 relies on 2 ECC algorithms, which are handled in hardware : + * - a 1 byte Hamming code stored in the OOB for each page + * - a 7 bytes BCH code stored in the OOB for each page + * The BCH part is only used for check purpose, no correction is available as + * some information is missing. What is known is that : + * - BCH is in GF(2^14) + * - BCH is over data of 520 bytes (512 page + 7 page_info bytes + * + 1 hamming byte) + * - BCH can correct up to 4 bits (t = 4) + * - BCH syndroms are calculated in hardware, and checked in hardware as well + * + */ + +static inline u8 doc_readb(struct docg3 *docg3, u16 reg) +{ + u8 val = readb(docg3->base + reg); + + trace_docg3_io(0, 8, reg, (int)val); + return val; +} + +static inline u16 doc_readw(struct docg3 *docg3, u16 reg) +{ + u16 val = readw(docg3->base + reg); + + trace_docg3_io(0, 16, reg, (int)val); + return val; +} + +static inline void doc_writeb(struct docg3 *docg3, u8 val, u16 reg) +{ + writeb(val, docg3->base + reg); + trace_docg3_io(1, 16, reg, val); +} + +static inline void doc_writew(struct docg3 *docg3, u16 val, u16 reg) +{ + writew(val, docg3->base + reg); + trace_docg3_io(1, 16, reg, val); +} + +static inline void doc_flash_command(struct docg3 *docg3, u8 cmd) +{ + doc_writeb(docg3, cmd, DOC_FLASHCOMMAND); +} + +static inline void doc_flash_sequence(struct docg3 *docg3, u8 seq) +{ + doc_writeb(docg3, seq, DOC_FLASHSEQUENCE); +} + +static inline void doc_flash_address(struct docg3 *docg3, u8 addr) +{ + doc_writeb(docg3, addr, DOC_FLASHADDRESS); +} + +static char const *part_probes[] = { "cmdlinepart", "saftlpart", NULL }; + +static int doc_register_readb(struct docg3 *docg3, int reg) +{ + u8 val; + + doc_writew(docg3, reg, DOC_READADDRESS); + val = doc_readb(docg3, reg); + doc_vdbg("Read register %04x : %02x\n", reg, val); + return val; +} + +static int doc_register_readw(struct docg3 *docg3, int reg) +{ + u16 val; + + doc_writew(docg3, reg, DOC_READADDRESS); + val = doc_readw(docg3, reg); + doc_vdbg("Read register %04x : %04x\n", reg, val); + return val; +} + +/** + * doc_delay - delay docg3 operations + * @docg3: the device + * @nbNOPs: the number of NOPs to issue + * + * As no specification is available, the right timings between chip commands are + * unknown. The only available piece of information are the observed nops on a + * working docg3 chip. 
+ * Therefore, doc_delay relies on a busy loop of NOPs, instead of scheduler + * friendlier msleep() functions or blocking mdelay(). + */ +static void doc_delay(struct docg3 *docg3, int nbNOPs) +{ + int i; + + doc_dbg("NOP x %d\n", nbNOPs); + for (i = 0; i < nbNOPs; i++) + doc_writeb(docg3, 0, DOC_NOP); +} + +static int is_prot_seq_error(struct docg3 *docg3) +{ + int ctrl; + + ctrl = doc_register_readb(docg3, DOC_FLASHCONTROL); + return ctrl & (DOC_CTRL_PROTECTION_ERROR | DOC_CTRL_SEQUENCE_ERROR); +} + +static int doc_is_ready(struct docg3 *docg3) +{ + int ctrl; + + ctrl = doc_register_readb(docg3, DOC_FLASHCONTROL); + return ctrl & DOC_CTRL_FLASHREADY; +} + +static int doc_wait_ready(struct docg3 *docg3) +{ + int maxWaitCycles = 100; + + do { + doc_delay(docg3, 4); + cpu_relax(); + } while (!doc_is_ready(docg3) && maxWaitCycles--); + doc_delay(docg3, 2); + if (maxWaitCycles > 0) + return 0; + else + return -EIO; +} + +static int doc_reset_seq(struct docg3 *docg3) +{ + int ret; + + doc_writeb(docg3, 0x10, DOC_FLASHCONTROL); + doc_flash_sequence(docg3, DOC_SEQ_RESET); + doc_flash_command(docg3, DOC_CMD_RESET); + doc_delay(docg3, 2); + ret = doc_wait_ready(docg3); + + doc_dbg("doc_reset_seq() -> isReady=%s\n", ret ? "false" : "true"); + return ret; +} + +/** + * doc_read_data_area - Read data from data area + * @docg3: the device + * @buf: the buffer to fill in + * @len: the lenght to read + * @first: first time read, DOC_READADDRESS should be set + * + * Reads bytes from flash data. Handles the single byte / even bytes reads. + */ +static void doc_read_data_area(struct docg3 *docg3, void *buf, int len, + int first) +{ + int i, cdr, len4; + u16 data16, *dst16; + u8 data8, *dst8; + + doc_dbg("doc_read_data_area(buf=%p, len=%d)\n", buf, len); + cdr = len & 0x3; + len4 = len - cdr; + + if (first) + doc_writew(docg3, DOC_IOSPACE_DATA, DOC_READADDRESS); + dst16 = buf; + for (i = 0; i < len4; i += 2) { + data16 = doc_readw(docg3, DOC_IOSPACE_DATA); + *dst16 = data16; + dst16++; + } + + if (cdr) { + doc_writew(docg3, DOC_IOSPACE_DATA | DOC_READADDR_ONE_BYTE, + DOC_READADDRESS); + doc_delay(docg3, 1); + dst8 = (u8 *)dst16; + for (i = 0; i < cdr; i++) { + data8 = doc_readb(docg3, DOC_IOSPACE_DATA); + *dst8 = data8; + dst8++; + } + } +} + +/** + * doc_set_data_mode - Sets the flash to reliable data mode + * @docg3: the device + * + * The reliable data mode is a bit slower than the fast mode, but less errors + * occur. Entering the reliable mode cannot be done without entering the fast + * mode first. 
+ */ +static void doc_set_reliable_mode(struct docg3 *docg3) +{ + doc_dbg("doc_set_reliable_mode()\n"); + doc_flash_sequence(docg3, DOC_SEQ_SET_MODE); + doc_flash_command(docg3, DOC_CMD_FAST_MODE); + doc_flash_command(docg3, DOC_CMD_RELIABLE_MODE); + doc_delay(docg3, 2); +} + +/** + * doc_set_asic_mode - Set the ASIC mode + * @docg3: the device + * @mode: the mode + * + * The ASIC can work in 3 modes : + * - RESET: all registers are zeroed + * - NORMAL: receives and handles commands + * - POWERDOWN: minimal poweruse, flash parts shut off + */ +static void doc_set_asic_mode(struct docg3 *docg3, u8 mode) +{ + int i; + + for (i = 0; i < 12; i++) + doc_readb(docg3, DOC_IOSPACE_IPL); + + mode |= DOC_ASICMODE_MDWREN; + doc_dbg("doc_set_asic_mode(%02x)\n", mode); + doc_writeb(docg3, mode, DOC_ASICMODE); + doc_writeb(docg3, ~mode, DOC_ASICMODECONFIRM); + doc_delay(docg3, 1); +} + +/** + * doc_set_device_id - Sets the devices id for cascaded G3 chips + * @docg3: the device + * @id: the chip to select (amongst 0, 1, 2, 3) + * + * There can be 4 cascaded G3 chips. This function selects the one which will + * should be the active one. + */ +static void doc_set_device_id(struct docg3 *docg3, int id) +{ + u8 ctrl; + + doc_dbg("doc_set_device_id(%d)\n", id); + doc_writeb(docg3, id, DOC_DEVICESELECT); + ctrl = doc_register_readb(docg3, DOC_FLASHCONTROL); + + ctrl &= ~DOC_CTRL_VIOLATION; + ctrl |= DOC_CTRL_CE; + doc_writeb(docg3, ctrl, DOC_FLASHCONTROL); +} + +/** + * doc_set_extra_page_mode - Change flash page layout + * @docg3: the device + * + * Normally, the flash page is split into the data (512 bytes) and the out of + * band data (16 bytes). For each, 4 more bytes can be accessed, where the wear + * leveling counters are stored. To access this last area of 4 bytes, a special + * mode must be input to the flash ASIC. + * + * Returns 0 if no error occured, -EIO else. + */ +static int doc_set_extra_page_mode(struct docg3 *docg3) +{ + int fctrl; + + doc_dbg("doc_set_extra_page_mode()\n"); + doc_flash_sequence(docg3, DOC_SEQ_PAGE_SIZE_532); + doc_flash_command(docg3, DOC_CMD_PAGE_SIZE_532); + doc_delay(docg3, 2); + + fctrl = doc_register_readb(docg3, DOC_FLASHCONTROL); + if (fctrl & (DOC_CTRL_PROTECTION_ERROR | DOC_CTRL_SEQUENCE_ERROR)) + return -EIO; + else + return 0; +} + +/** + * doc_seek - Set both flash planes to the specified block, page for reading + * @docg3: the device + * @block0: the first plane block index + * @block1: the second plane block index + * @page: the page index within the block + * @wear: if true, read will occur on the 4 extra bytes of the wear area + * @ofs: offset in page to read + * + * Programs the flash even and odd planes to the specific block and page. + * Alternatively, programs the flash to the wear area of the specified page. 
+ */ +static int doc_read_seek(struct docg3 *docg3, int block0, int block1, int page, + int wear, int ofs) +{ + int sector, ret = 0; + + doc_dbg("doc_seek(blocks=(%d,%d), page=%d, ofs=%d, wear=%d)\n", + block0, block1, page, ofs, wear); + + if (!wear && (ofs < 2 * DOC_LAYOUT_PAGE_SIZE)) { + doc_flash_sequence(docg3, DOC_SEQ_SET_PLANE1); + doc_flash_command(docg3, DOC_CMD_READ_PLANE1); + doc_delay(docg3, 2); + } else { + doc_flash_sequence(docg3, DOC_SEQ_SET_PLANE2); + doc_flash_command(docg3, DOC_CMD_READ_PLANE2); + doc_delay(docg3, 2); + } + + doc_set_reliable_mode(docg3); + if (wear) + ret = doc_set_extra_page_mode(docg3); + if (ret) + goto out; + + sector = (block0 << DOC_ADDR_BLOCK_SHIFT) + (page & DOC_ADDR_PAGE_MASK); + doc_flash_sequence(docg3, DOC_SEQ_READ); + doc_flash_command(docg3, DOC_CMD_PROG_BLOCK_ADDR); + doc_delay(docg3, 1); + doc_flash_address(docg3, sector & 0xff); + doc_flash_address(docg3, (sector >> 8) & 0xff); + doc_flash_address(docg3, (sector >> 16) & 0xff); + doc_delay(docg3, 1); + + sector = (block1 << DOC_ADDR_BLOCK_SHIFT) + (page & DOC_ADDR_PAGE_MASK); + doc_flash_command(docg3, DOC_CMD_PROG_BLOCK_ADDR); + doc_delay(docg3, 1); + doc_flash_address(docg3, sector & 0xff); + doc_flash_address(docg3, (sector >> 8) & 0xff); + doc_flash_address(docg3, (sector >> 16) & 0xff); + doc_delay(docg3, 2); + +out: + return ret; +} + +/** + * doc_read_page_ecc_init - Initialize hardware ECC engine + * @docg3: the device + * @len: the number of bytes covered by the ECC (BCH covered) + * + * The function does initialize the hardware ECC engine to compute the Hamming + * ECC (on 1 byte) and the BCH Syndroms (on 7 bytes). + * + * Return 0 if succeeded, -EIO on error + */ +static int doc_read_page_ecc_init(struct docg3 *docg3, int len) +{ + doc_writew(docg3, DOC_ECCCONF0_READ_MODE + | DOC_ECCCONF0_BCH_ENABLE | DOC_ECCCONF0_HAMMING_ENABLE + | (len & DOC_ECCCONF0_DATA_BYTES_MASK), + DOC_ECCCONF0); + doc_delay(docg3, 4); + doc_register_readb(docg3, DOC_FLASHCONTROL); + return doc_wait_ready(docg3); +} + +/** + * doc_read_page_prepare - Prepares reading data from a flash page + * @docg3: the device + * @block0: the first plane block index on flash memory + * @block1: the second plane block index on flash memory + * @page: the page index in the block + * @offset: the offset in the page (must be a multiple of 4) + * + * Prepares the page to be read in the flash memory : + * - tell ASIC to map the flash pages + * - tell ASIC to be in read mode + * + * After a call to this method, a call to doc_read_page_finish is mandatory, + * to end the read cycle of the flash. + * + * Read data from a flash page. The length to be read must be between 0 and + * (page_size + oob_size + wear_size), ie. 532, and a multiple of 4 (because + * the extra bytes reading is not implemented). + * + * As pages are grouped by 2 (in 2 planes), reading from a page must be done + * in two steps: + * - one read of 512 bytes at offset 0 + * - one read of 512 bytes at offset 512 + 16 + * + * Returns 0 if successful, -EIO if a read error occured. 
+ */ +static int doc_read_page_prepare(struct docg3 *docg3, int block0, int block1, + int page, int offset) +{ + int wear_area = 0, ret = 0; + + doc_dbg("doc_read_page_prepare(blocks=(%d,%d), page=%d, ofsInPage=%d)\n", + block0, block1, page, offset); + if (offset >= DOC_LAYOUT_WEAR_OFFSET) + wear_area = 1; + if (!wear_area && offset > (DOC_LAYOUT_PAGE_OOB_SIZE * 2)) + return -EINVAL; + + doc_set_device_id(docg3, docg3->device_id); + ret = doc_reset_seq(docg3); + if (ret) + goto err; + + /* Program the flash address block and page */ + ret = doc_read_seek(docg3, block0, block1, page, wear_area, offset); + if (ret) + goto err; + + doc_flash_command(docg3, DOC_CMD_READ_ALL_PLANES); + doc_delay(docg3, 2); + doc_wait_ready(docg3); + + doc_flash_command(docg3, DOC_CMD_SET_ADDR_READ); + doc_delay(docg3, 1); + if (offset >= DOC_LAYOUT_PAGE_SIZE * 2) + offset -= 2 * DOC_LAYOUT_PAGE_SIZE; + doc_flash_address(docg3, offset >> 2); + doc_delay(docg3, 1); + doc_wait_ready(docg3); + + doc_flash_command(docg3, DOC_CMD_READ_FLASH); + + return 0; +err: + doc_writeb(docg3, 0, DOC_DATAEND); + doc_delay(docg3, 2); + return -EIO; +} + +/** + * doc_read_page_getbytes - Reads bytes from a prepared page + * @docg3: the device + * @len: the number of bytes to be read (must be a multiple of 4) + * @buf: the buffer to be filled in + * @first: 1 if first time read, DOC_READADDRESS should be set + * + */ +static int doc_read_page_getbytes(struct docg3 *docg3, int len, u_char *buf, + int first) +{ + doc_read_data_area(docg3, buf, len, first); + doc_delay(docg3, 2); + return len; +} + +/** + * doc_get_hw_bch_syndroms - Get hardware calculated BCH syndroms + * @docg3: the device + * @syns: the array of 7 integers where the syndroms will be stored + */ +static void doc_get_hw_bch_syndroms(struct docg3 *docg3, int *syns) +{ + int i; + + for (i = 0; i < DOC_ECC_BCH_SIZE; i++) + syns[i] = doc_register_readb(docg3, DOC_BCH_SYNDROM(i)); +} + +/** + * doc_read_page_finish - Ends reading of a flash page + * @docg3: the device + * + * As a side effect, resets the chip selector to 0. This ensures that after each + * read operation, the floor 0 is selected. Therefore, if the systems halts, the + * reboot will boot on floor 0, where the IPL is. + */ +static void doc_read_page_finish(struct docg3 *docg3) +{ + doc_writeb(docg3, 0, DOC_DATAEND); + doc_delay(docg3, 2); + doc_set_device_id(docg3, 0); +} + +/** + * calc_block_sector - Calculate blocks, pages and ofs. + + * @from: offset in flash + * @block0: first plane block index calculated + * @block1: second plane block index calculated + * @page: page calculated + * @ofs: offset in page + */ +static void calc_block_sector(loff_t from, int *block0, int *block1, int *page, + int *ofs) +{ + uint sector; + + sector = from / DOC_LAYOUT_PAGE_SIZE; + *block0 = sector / (DOC_LAYOUT_PAGES_PER_BLOCK * DOC_LAYOUT_NBPLANES) + * DOC_LAYOUT_NBPLANES; + *block1 = *block0 + 1; + *page = sector % (DOC_LAYOUT_PAGES_PER_BLOCK * DOC_LAYOUT_NBPLANES); + *page /= DOC_LAYOUT_NBPLANES; + if (sector % 2) + *ofs = DOC_LAYOUT_PAGE_OOB_SIZE; + else + *ofs = 0; +} + +/** + * doc_read - Read bytes from flash + * @mtd: the device + * @from: the offset from first block and first page, in bytes, aligned on page + * size + * @len: the number of bytes to read (must be a multiple of 4) + * @retlen: the number of bytes actually read + * @buf: the filled in buffer + * + * Reads flash memory pages. This function does not read the OOB chunk, but only + * the page data. 
+ * + * Returns 0 if read successfull, of -EIO, -EINVAL if an error occured + */ +static int doc_read(struct mtd_info *mtd, loff_t from, size_t len, + size_t *retlen, u_char *buf) +{ + struct docg3 *docg3 = mtd->priv; + int block0, block1, page, readlen, ret, ofs = 0; + int syn[DOC_ECC_BCH_SIZE], eccconf1; + u8 oob[DOC_LAYOUT_OOB_SIZE]; + + ret = -EINVAL; + doc_dbg("doc_read(from=%lld, len=%zu, buf=%p)\n", from, len, buf); + if (from % DOC_LAYOUT_PAGE_SIZE) + goto err; + if (len % 4) + goto err; + calc_block_sector(from, &block0, &block1, &page, &ofs); + if (block1 > docg3->max_block) + goto err; + + *retlen = 0; + ret = 0; + readlen = min_t(size_t, len, (size_t)DOC_LAYOUT_PAGE_SIZE); + while (!ret && len > 0) { + readlen = min_t(size_t, len, (size_t)DOC_LAYOUT_PAGE_SIZE); + ret = doc_read_page_prepare(docg3, block0, block1, page, ofs); + if (ret < 0) + goto err; + ret = doc_read_page_ecc_init(docg3, DOC_ECC_BCH_COVERED_BYTES); + if (ret < 0) + goto err_in_read; + ret = doc_read_page_getbytes(docg3, readlen, buf, 1); + if (ret < readlen) + goto err_in_read; + ret = doc_read_page_getbytes(docg3, DOC_LAYOUT_OOB_SIZE, + oob, 0); + if (ret < DOC_LAYOUT_OOB_SIZE) + goto err_in_read; + + *retlen += readlen; + buf += readlen; + len -= readlen; + + ofs ^= DOC_LAYOUT_PAGE_OOB_SIZE; + if (ofs == 0) + page += 2; + if (page > DOC_ADDR_PAGE_MASK) { + page = 0; + block0 += 2; + block1 += 2; + } + + /* + * There should be a BCH bitstream fixing algorithm here ... + * By now, a page read failure is triggered by BCH error + */ + doc_get_hw_bch_syndroms(docg3, syn); + eccconf1 = doc_register_readb(docg3, DOC_ECCCONF1); + + doc_dbg("OOB - INFO: %02x:%02x:%02x:%02x:%02x:%02x:%02x\n", + oob[0], oob[1], oob[2], oob[3], oob[4], + oob[5], oob[6]); + doc_dbg("OOB - HAMMING: %02x\n", oob[7]); + doc_dbg("OOB - BCH_ECC: %02x:%02x:%02x:%02x:%02x:%02x:%02x\n", + oob[8], oob[9], oob[10], oob[11], oob[12], + oob[13], oob[14]); + doc_dbg("OOB - UNUSED: %02x\n", oob[15]); + doc_dbg("ECC checks: ECCConf1=%x\n", eccconf1); + doc_dbg("ECC BCH syndrom: %02x:%02x:%02x:%02x:%02x:%02x:%02x\n", + syn[0], syn[1], syn[2], syn[3], syn[4], syn[5], syn[6]); + + ret = -EBADMSG; + if (block0 >= DOC_LAYOUT_BLOCK_FIRST_DATA) { + if (eccconf1 & DOC_ECCCONF1_BCH_SYNDROM_ERR) + goto err_in_read; + if (is_prot_seq_error(docg3)) + goto err_in_read; + } + doc_read_page_finish(docg3); + } + + return 0; +err_in_read: + doc_read_page_finish(docg3); +err: + return ret; +} + +/** + * doc_read_oob - Read out of band bytes from flash + * @mtd: the device + * @from: the offset from first block and first page, in bytes, aligned on page + * size + * @ops: the mtd oob structure + * + * Reads flash memory OOB area of pages. 
+ * + * Returns 0 if read successfull, of -EIO, -EINVAL if an error occured + */ +static int doc_read_oob(struct mtd_info *mtd, loff_t from, + struct mtd_oob_ops *ops) +{ + struct docg3 *docg3 = mtd->priv; + int block0, block1, page, ofs, ret; + u8 *buf = ops->oobbuf; + size_t len = ops->ooblen; + + doc_dbg("doc_read_oob(from=%lld, buf=%p, len=%zu)\n", from, buf, len); + if (len != DOC_LAYOUT_OOB_SIZE) + return -EINVAL; + + switch (ops->mode) { + case MTD_OPS_PLACE_OOB: + buf += ops->ooboffs; + break; + default: + break; + } + + calc_block_sector(from, &block0, &block1, &page, &ofs); + if (block1 > docg3->max_block) + return -EINVAL; + + ret = doc_read_page_prepare(docg3, block0, block1, page, + ofs + DOC_LAYOUT_PAGE_SIZE); + if (!ret) + ret = doc_read_page_ecc_init(docg3, DOC_LAYOUT_OOB_SIZE); + if (!ret) + ret = doc_read_page_getbytes(docg3, DOC_LAYOUT_OOB_SIZE, + buf, 1); + doc_read_page_finish(docg3); + + if (ret > 0) + ops->oobretlen = ret; + else + ops->oobretlen = 0; + return (ret > 0) ? 0 : ret; +} + +static int doc_reload_bbt(struct docg3 *docg3) +{ + int block = DOC_LAYOUT_BLOCK_BBT; + int ret = 0, nbpages, page; + u_char *buf = docg3->bbt; + + nbpages = DIV_ROUND_UP(docg3->max_block + 1, 8 * DOC_LAYOUT_PAGE_SIZE); + for (page = 0; !ret && (page < nbpages); page++) { + ret = doc_read_page_prepare(docg3, block, block + 1, + page + DOC_LAYOUT_PAGE_BBT, 0); + if (!ret) + ret = doc_read_page_ecc_init(docg3, + DOC_LAYOUT_PAGE_SIZE); + if (!ret) + doc_read_page_getbytes(docg3, DOC_LAYOUT_PAGE_SIZE, + buf, 1); + buf += DOC_LAYOUT_PAGE_SIZE; + } + doc_read_page_finish(docg3); + return ret; +} + +/** + * doc_block_isbad - Checks whether a block is good or not + * @mtd: the device + * @from: the offset to find the correct block + * + * Returns 1 if block is bad, 0 if block is good + */ +static int doc_block_isbad(struct mtd_info *mtd, loff_t from) +{ + struct docg3 *docg3 = mtd->priv; + int block0, block1, page, ofs, is_good; + + calc_block_sector(from, &block0, &block1, &page, &ofs); + doc_dbg("doc_block_isbad(from=%lld) => block=(%d,%d), page=%d, ofs=%d\n", + from, block0, block1, page, ofs); + + if (block0 < DOC_LAYOUT_BLOCK_FIRST_DATA) + return 0; + if (block1 > docg3->max_block) + return -EINVAL; + + is_good = docg3->bbt[block0 >> 3] & (1 << (block0 & 0x7)); + return !is_good; +} + +/** + * doc_get_erase_count - Get block erase count + * @docg3: the device + * @from: the offset in which the block is. + * + * Get the number of times a block was erased. The number is the maximum of + * erase times between first and second plane (which should be equal normally). + * + * Returns The number of erases, or -EINVAL or -EIO on error. 
+ */ +static int doc_get_erase_count(struct docg3 *docg3, loff_t from) +{ + u8 buf[DOC_LAYOUT_WEAR_SIZE]; + int ret, plane1_erase_count, plane2_erase_count; + int block0, block1, page, ofs; + + doc_dbg("doc_get_erase_count(from=%lld, buf=%p)\n", from, buf); + if (from % DOC_LAYOUT_PAGE_SIZE) + return -EINVAL; + calc_block_sector(from, &block0, &block1, &page, &ofs); + if (block1 > docg3->max_block) + return -EINVAL; + + ret = doc_reset_seq(docg3); + if (!ret) + ret = doc_read_page_prepare(docg3, block0, block1, page, + ofs + DOC_LAYOUT_WEAR_OFFSET); + if (!ret) + ret = doc_read_page_getbytes(docg3, DOC_LAYOUT_WEAR_SIZE, + buf, 1); + doc_read_page_finish(docg3); + + if (ret || (buf[0] != DOC_ERASE_MARK) || (buf[2] != DOC_ERASE_MARK)) + return -EIO; + plane1_erase_count = (u8)(~buf[1]) | ((u8)(~buf[4]) << 8) + | ((u8)(~buf[5]) << 16); + plane2_erase_count = (u8)(~buf[3]) | ((u8)(~buf[6]) << 8) + | ((u8)(~buf[7]) << 16); + + return max(plane1_erase_count, plane2_erase_count); +} + +/* + * Debug sysfs entries + */ +static int dbg_flashctrl_show(struct seq_file *s, void *p) +{ + struct docg3 *docg3 = (struct docg3 *)s->private; + + int pos = 0; + u8 fctrl = doc_register_readb(docg3, DOC_FLASHCONTROL); + + pos += seq_printf(s, + "FlashControl : 0x%02x (%s,CE# %s,%s,%s,flash %s)\n", + fctrl, + fctrl & DOC_CTRL_VIOLATION ? "protocol violation" : "-", + fctrl & DOC_CTRL_CE ? "active" : "inactive", + fctrl & DOC_CTRL_PROTECTION_ERROR ? "protection error" : "-", + fctrl & DOC_CTRL_SEQUENCE_ERROR ? "sequence error" : "-", + fctrl & DOC_CTRL_FLASHREADY ? "ready" : "not ready"); + return pos; +} +DEBUGFS_RO_ATTR(flashcontrol, dbg_flashctrl_show); + +static int dbg_asicmode_show(struct seq_file *s, void *p) +{ + struct docg3 *docg3 = (struct docg3 *)s->private; + + int pos = 0; + int pctrl = doc_register_readb(docg3, DOC_ASICMODE); + int mode = pctrl & 0x03; + + pos += seq_printf(s, + "%04x : RAM_WE=%d,RSTIN_RESET=%d,BDETCT_RESET=%d,WRITE_ENABLE=%d,POWERDOWN=%d,MODE=%d%d (", + pctrl, + pctrl & DOC_ASICMODE_RAM_WE ? 1 : 0, + pctrl & DOC_ASICMODE_RSTIN_RESET ? 1 : 0, + pctrl & DOC_ASICMODE_BDETCT_RESET ? 1 : 0, + pctrl & DOC_ASICMODE_MDWREN ? 1 : 0, + pctrl & DOC_ASICMODE_POWERDOWN ? 
1 : 0, + mode >> 1, mode & 0x1); + + switch (mode) { + case DOC_ASICMODE_RESET: + pos += seq_printf(s, "reset"); + break; + case DOC_ASICMODE_NORMAL: + pos += seq_printf(s, "normal"); + break; + case DOC_ASICMODE_POWERDOWN: + pos += seq_printf(s, "powerdown"); + break; + } + pos += seq_printf(s, ")\n"); + return pos; +} +DEBUGFS_RO_ATTR(asic_mode, dbg_asicmode_show); + +static int dbg_device_id_show(struct seq_file *s, void *p) +{ + struct docg3 *docg3 = (struct docg3 *)s->private; + int pos = 0; + int id = doc_register_readb(docg3, DOC_DEVICESELECT); + + pos += seq_printf(s, "DeviceId = %d\n", id); + return pos; +} +DEBUGFS_RO_ATTR(device_id, dbg_device_id_show); + +static int dbg_protection_show(struct seq_file *s, void *p) +{ + struct docg3 *docg3 = (struct docg3 *)s->private; + int pos = 0; + int protect = doc_register_readb(docg3, DOC_PROTECTION); + int dps0 = doc_register_readb(docg3, DOC_DPS0_STATUS); + int dps0_low = doc_register_readb(docg3, DOC_DPS0_ADDRLOW); + int dps0_high = doc_register_readb(docg3, DOC_DPS0_ADDRHIGH); + int dps1 = doc_register_readb(docg3, DOC_DPS1_STATUS); + int dps1_low = doc_register_readb(docg3, DOC_DPS1_ADDRLOW); + int dps1_high = doc_register_readb(docg3, DOC_DPS1_ADDRHIGH); + + pos += seq_printf(s, "Protection = 0x%02x (", + protect); + if (protect & DOC_PROTECT_FOUNDRY_OTP_LOCK) + pos += seq_printf(s, "FOUNDRY_OTP_LOCK,"); + if (protect & DOC_PROTECT_CUSTOMER_OTP_LOCK) + pos += seq_printf(s, "CUSTOMER_OTP_LOCK,"); + if (protect & DOC_PROTECT_LOCK_INPUT) + pos += seq_printf(s, "LOCK_INPUT,"); + if (protect & DOC_PROTECT_STICKY_LOCK) + pos += seq_printf(s, "STICKY_LOCK,"); + if (protect & DOC_PROTECT_PROTECTION_ENABLED) + pos += seq_printf(s, "PROTECTION ON,"); + if (protect & DOC_PROTECT_IPL_DOWNLOAD_LOCK) + pos += seq_printf(s, "IPL_DOWNLOAD_LOCK,"); + if (protect & DOC_PROTECT_PROTECTION_ERROR) + pos += seq_printf(s, "PROTECT_ERR,"); + else + pos += seq_printf(s, "NO_PROTECT_ERR"); + pos += seq_printf(s, ")\n"); + + pos += seq_printf(s, "DPS0 = 0x%02x : " + "Protected area [0x%x - 0x%x] : OTP=%d, READ=%d, " + "WRITE=%d, HW_LOCK=%d, KEY_OK=%d\n", + dps0, dps0_low, dps0_high, + !!(dps0 & DOC_DPS_OTP_PROTECTED), + !!(dps0 & DOC_DPS_READ_PROTECTED), + !!(dps0 & DOC_DPS_WRITE_PROTECTED), + !!(dps0 & DOC_DPS_HW_LOCK_ENABLED), + !!(dps0 & DOC_DPS_KEY_OK)); + pos += seq_printf(s, "DPS1 = 0x%02x : " + "Protected area [0x%x - 0x%x] : OTP=%d, READ=%d, " + "WRITE=%d, HW_LOCK=%d, KEY_OK=%d\n", + dps1, dps1_low, dps1_high, + !!(dps1 & DOC_DPS_OTP_PROTECTED), + !!(dps1 & DOC_DPS_READ_PROTECTED), + !!(dps1 & DOC_DPS_WRITE_PROTECTED), + !!(dps1 & DOC_DPS_HW_LOCK_ENABLED), + !!(dps1 & DOC_DPS_KEY_OK)); + return pos; +} +DEBUGFS_RO_ATTR(protection, dbg_protection_show); + +static int __init doc_dbg_register(struct docg3 *docg3) +{ + struct dentry *root, *entry; + + root = debugfs_create_dir("docg3", NULL); + if (!root) + return -ENOMEM; + + entry = debugfs_create_file("flashcontrol", S_IRUSR, root, docg3, + &flashcontrol_fops); + if (entry) + entry = debugfs_create_file("asic_mode", S_IRUSR, root, + docg3, &asic_mode_fops); + if (entry) + entry = debugfs_create_file("device_id", S_IRUSR, root, + docg3, &device_id_fops); + if (entry) + entry = debugfs_create_file("protection", S_IRUSR, root, + docg3, &protection_fops); + if (entry) { + docg3->debugfs_root = root; + return 0; + } else { + debugfs_remove_recursive(root); + return -ENOMEM; + } +} + +static void __exit doc_dbg_unregister(struct docg3 *docg3) +{ + debugfs_remove_recursive(docg3->debugfs_root); +} + +/** + * 
doc_set_driver_info - Fill the mtd_info structure and docg3 structure
+ * @chip_id: The chip ID of the supported chip
+ * @mtd: The structure to fill
+ */
+static void __init doc_set_driver_info(int chip_id, struct mtd_info *mtd)
+{
+        struct docg3 *docg3 = mtd->priv;
+        int cfg;
+
+        cfg = doc_register_readb(docg3, DOC_CONFIGURATION);
+        docg3->if_cfg = (cfg & DOC_CONF_IF_CFG ? 1 : 0);
+
+        switch (chip_id) {
+        case DOC_CHIPID_G3:
+                mtd->name = "DiskOnChip G3";
+                docg3->max_block = 2047;
+                break;
+        }
+        mtd->type = MTD_NANDFLASH;
+        /*
+         * Once write methods are added, the correct flags will be set.
+         * mtd->flags = MTD_CAP_NANDFLASH;
+         */
+        mtd->flags = MTD_CAP_ROM;
+        mtd->size = (docg3->max_block + 1) * DOC_LAYOUT_BLOCK_SIZE;
+        mtd->erasesize = DOC_LAYOUT_BLOCK_SIZE * DOC_LAYOUT_NBPLANES;
+        mtd->writesize = DOC_LAYOUT_PAGE_SIZE;
+        mtd->oobsize = DOC_LAYOUT_OOB_SIZE;
+        mtd->owner = THIS_MODULE;
+        mtd->erase = NULL;
+        mtd->point = NULL;
+        mtd->unpoint = NULL;
+        mtd->read = doc_read;
+        mtd->write = NULL;
+        mtd->read_oob = doc_read_oob;
+        mtd->write_oob = NULL;
+        mtd->sync = NULL;
+        mtd->block_isbad = doc_block_isbad;
+}
+
+/**
+ * docg3_probe - Probe the IO space for a DiskOnChip G3 chip
+ * @pdev: platform device
+ *
+ * Probes for a G3 chip at the specified IO space in the platform data
+ * resources.
+ *
+ * Returns 0 on success, or -ENOMEM, -ENXIO, -ENODEV on error
+ */
+static int __init docg3_probe(struct platform_device *pdev)
+{
+        struct device *dev = &pdev->dev;
+        struct docg3 *docg3;
+        struct mtd_info *mtd;
+        struct resource *ress;
+        int ret, bbt_nbpages;
+        u16 chip_id, chip_id_inv;
+
+        ret = -ENOMEM;
+        docg3 = kzalloc(sizeof(struct docg3), GFP_KERNEL);
+        if (!docg3)
+                goto nomem1;
+        mtd = kzalloc(sizeof(struct mtd_info), GFP_KERNEL);
+        if (!mtd)
+                goto nomem2;
+        mtd->priv = docg3;
+
+        ret = -ENXIO;
+        ress = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+        if (!ress) {
+                dev_err(dev, "No I/O memory resource defined\n");
+                goto noress;
+        }
+        docg3->base = ioremap(ress->start, DOC_IOSPACE_SIZE);
+
+        docg3->dev = &pdev->dev;
+        docg3->device_id = 0;
+        doc_set_device_id(docg3, docg3->device_id);
+        doc_set_asic_mode(docg3, DOC_ASICMODE_RESET);
+        doc_set_asic_mode(docg3, DOC_ASICMODE_NORMAL);
+
+        chip_id = doc_register_readw(docg3, DOC_CHIPID);
+        chip_id_inv = doc_register_readw(docg3, DOC_CHIPID_INV);
+
+        ret = -ENODEV;
+        if (chip_id != (u16)(~chip_id_inv)) {
+                doc_info("No device found at IO addr %p\n",
+                         (void *)ress->start);
+                goto nochipfound;
+        }
+
+        switch (chip_id) {
+        case DOC_CHIPID_G3:
+                doc_info("Found a G3 DiskOnChip at addr %p\n",
+                         (void *)ress->start);
+                break;
+        default:
+                doc_err("Chip id %04x is not a DiskOnChip G3 chip\n", chip_id);
+                goto nochipfound;
+        }
+
+        doc_set_driver_info(chip_id, mtd);
+        platform_set_drvdata(pdev, mtd);
+
+        ret = -ENOMEM;
+        bbt_nbpages = DIV_ROUND_UP(docg3->max_block + 1,
+                                   8 * DOC_LAYOUT_PAGE_SIZE);
+        docg3->bbt = kzalloc(bbt_nbpages * DOC_LAYOUT_PAGE_SIZE, GFP_KERNEL);
+        if (!docg3->bbt)
+                goto nochipfound;
+        doc_reload_bbt(docg3);
+
+        ret = mtd_device_parse_register(mtd, part_probes,
+                                        NULL, NULL, 0);
+        if (ret)
+                goto register_error;
+
+        doc_dbg_register(docg3);
+        return 0;
+
+register_error:
+        kfree(docg3->bbt);
+nochipfound:
+        iounmap(docg3->base);
+noress:
+        kfree(mtd);
+nomem2:
+        kfree(docg3);
+nomem1:
+        return ret;
+}
+
+/**
+ * docg3_release - Release the driver
+ * @pdev: the platform device
+ *
+ * Returns 0
+ */
+static int __exit docg3_release(struct platform_device *pdev)
+{
+        struct mtd_info *mtd = platform_get_drvdata(pdev);
+        struct docg3 
*docg3 = mtd->priv; + + doc_dbg_unregister(docg3); + mtd_device_unregister(mtd); + iounmap(docg3->base); + kfree(docg3->bbt); + kfree(docg3); + kfree(mtd); + return 0; +} + +static struct platform_driver g3_driver = { + .driver = { + .name = "docg3", + .owner = THIS_MODULE, + }, + .remove = __exit_p(docg3_release), +}; + +static int __init docg3_init(void) +{ + return platform_driver_probe(&g3_driver, docg3_probe); +} +module_init(docg3_init); + + +static void __exit docg3_exit(void) +{ + platform_driver_unregister(&g3_driver); +} +module_exit(docg3_exit); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Robert Jarzmik <robert.jarzmik@free.fr>"); +MODULE_DESCRIPTION("MTD driver for DiskOnChip G3"); diff --git a/drivers/mtd/devices/docg3.h b/drivers/mtd/devices/docg3.h new file mode 100644 index 000000000000..0d407be24594 --- /dev/null +++ b/drivers/mtd/devices/docg3.h @@ -0,0 +1,297 @@ +/* + * Handles the M-Systems DiskOnChip G3 chip + * + * Copyright (C) 2011 Robert Jarzmik + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#ifndef _MTD_DOCG3_H +#define _MTD_DOCG3_H + +/* + * Flash memory areas : + * - 0x0000 .. 0x07ff : IPL + * - 0x0800 .. 0x0fff : Data area + * - 0x1000 .. 0x17ff : Registers + * - 0x1800 .. 
0x1fff : Unknown + */ +#define DOC_IOSPACE_IPL 0x0000 +#define DOC_IOSPACE_DATA 0x0800 +#define DOC_IOSPACE_SIZE 0x2000 + +/* + * DOC G3 layout and adressing scheme + * A page address for the block "b", plane "P" and page "p": + * address = [bbbb bPpp pppp] + */ + +#define DOC_ADDR_PAGE_MASK 0x3f +#define DOC_ADDR_BLOCK_SHIFT 6 +#define DOC_LAYOUT_NBPLANES 2 +#define DOC_LAYOUT_PAGES_PER_BLOCK 64 +#define DOC_LAYOUT_PAGE_SIZE 512 +#define DOC_LAYOUT_OOB_SIZE 16 +#define DOC_LAYOUT_WEAR_SIZE 8 +#define DOC_LAYOUT_PAGE_OOB_SIZE \ + (DOC_LAYOUT_PAGE_SIZE + DOC_LAYOUT_OOB_SIZE) +#define DOC_LAYOUT_WEAR_OFFSET (DOC_LAYOUT_PAGE_OOB_SIZE * 2) +#define DOC_LAYOUT_BLOCK_SIZE \ + (DOC_LAYOUT_PAGES_PER_BLOCK * DOC_LAYOUT_PAGE_SIZE) +#define DOC_ECC_BCH_SIZE 7 +#define DOC_ECC_BCH_COVERED_BYTES \ + (DOC_LAYOUT_PAGE_SIZE + DOC_LAYOUT_OOB_PAGEINFO_SZ + \ + DOC_LAYOUT_OOB_HAMMING_SZ + DOC_LAYOUT_OOB_BCH_SZ) + +/* + * Blocks distribution + */ +#define DOC_LAYOUT_BLOCK_BBT 0 +#define DOC_LAYOUT_BLOCK_OTP 0 +#define DOC_LAYOUT_BLOCK_FIRST_DATA 6 + +#define DOC_LAYOUT_PAGE_BBT 4 + +/* + * Extra page OOB (16 bytes wide) layout + */ +#define DOC_LAYOUT_OOB_PAGEINFO_OFS 0 +#define DOC_LAYOUT_OOB_HAMMING_OFS 7 +#define DOC_LAYOUT_OOB_BCH_OFS 8 +#define DOC_LAYOUT_OOB_UNUSED_OFS 15 +#define DOC_LAYOUT_OOB_PAGEINFO_SZ 7 +#define DOC_LAYOUT_OOB_HAMMING_SZ 1 +#define DOC_LAYOUT_OOB_BCH_SZ 7 +#define DOC_LAYOUT_OOB_UNUSED_SZ 1 + + +#define DOC_CHIPID_G3 0x200 +#define DOC_ERASE_MARK 0xaa +/* + * Flash registers + */ +#define DOC_CHIPID 0x1000 +#define DOC_TEST 0x1004 +#define DOC_BUSLOCK 0x1006 +#define DOC_ENDIANCONTROL 0x1008 +#define DOC_DEVICESELECT 0x100a +#define DOC_ASICMODE 0x100c +#define DOC_CONFIGURATION 0x100e +#define DOC_INTERRUPTCONTROL 0x1010 +#define DOC_READADDRESS 0x101a +#define DOC_DATAEND 0x101e +#define DOC_INTERRUPTSTATUS 0x1020 + +#define DOC_FLASHSEQUENCE 0x1032 +#define DOC_FLASHCOMMAND 0x1034 +#define DOC_FLASHADDRESS 0x1036 +#define DOC_FLASHCONTROL 0x1038 +#define DOC_NOP 0x103e + +#define DOC_ECCCONF0 0x1040 +#define DOC_ECCCONF1 0x1042 +#define DOC_ECCPRESET 0x1044 +#define DOC_HAMMINGPARITY 0x1046 +#define DOC_BCH_SYNDROM(idx) (0x1048 + (idx << 1)) + +#define DOC_PROTECTION 0x1056 +#define DOC_DPS0_ADDRLOW 0x1060 +#define DOC_DPS0_ADDRHIGH 0x1062 +#define DOC_DPS1_ADDRLOW 0x1064 +#define DOC_DPS1_ADDRHIGH 0x1066 +#define DOC_DPS0_STATUS 0x106c +#define DOC_DPS1_STATUS 0x106e + +#define DOC_ASICMODECONFIRM 0x1072 +#define DOC_CHIPID_INV 0x1074 + +/* + * Flash sequences + * A sequence is preset before one or more commands are input to the chip. 
+ */ +#define DOC_SEQ_RESET 0x00 +#define DOC_SEQ_PAGE_SIZE_532 0x03 +#define DOC_SEQ_SET_MODE 0x09 +#define DOC_SEQ_READ 0x12 +#define DOC_SEQ_SET_PLANE1 0x0e +#define DOC_SEQ_SET_PLANE2 0x10 +#define DOC_SEQ_PAGE_SETUP 0x1d + +/* + * Flash commands + */ +#define DOC_CMD_READ_PLANE1 0x00 +#define DOC_CMD_SET_ADDR_READ 0x05 +#define DOC_CMD_READ_ALL_PLANES 0x30 +#define DOC_CMD_READ_PLANE2 0x50 +#define DOC_CMD_READ_FLASH 0xe0 +#define DOC_CMD_PAGE_SIZE_532 0x3c + +#define DOC_CMD_PROG_BLOCK_ADDR 0x60 +#define DOC_CMD_PROG_CYCLE1 0x80 +#define DOC_CMD_PROG_CYCLE2 0x10 +#define DOC_CMD_ERASECYCLE2 0xd0 + +#define DOC_CMD_RELIABLE_MODE 0x22 +#define DOC_CMD_FAST_MODE 0xa2 + +#define DOC_CMD_RESET 0xff + +/* + * Flash register : DOC_FLASHCONTROL + */ +#define DOC_CTRL_VIOLATION 0x20 +#define DOC_CTRL_CE 0x10 +#define DOC_CTRL_UNKNOWN_BITS 0x08 +#define DOC_CTRL_PROTECTION_ERROR 0x04 +#define DOC_CTRL_SEQUENCE_ERROR 0x02 +#define DOC_CTRL_FLASHREADY 0x01 + +/* + * Flash register : DOC_ASICMODE + */ +#define DOC_ASICMODE_RESET 0x00 +#define DOC_ASICMODE_NORMAL 0x01 +#define DOC_ASICMODE_POWERDOWN 0x02 +#define DOC_ASICMODE_MDWREN 0x04 +#define DOC_ASICMODE_BDETCT_RESET 0x08 +#define DOC_ASICMODE_RSTIN_RESET 0x10 +#define DOC_ASICMODE_RAM_WE 0x20 + +/* + * Flash register : DOC_ECCCONF0 + */ +#define DOC_ECCCONF0_READ_MODE 0x8000 +#define DOC_ECCCONF0_AUTO_ECC_ENABLE 0x4000 +#define DOC_ECCCONF0_HAMMING_ENABLE 0x1000 +#define DOC_ECCCONF0_BCH_ENABLE 0x0800 +#define DOC_ECCCONF0_DATA_BYTES_MASK 0x07ff + +/* + * Flash register : DOC_ECCCONF1 + */ +#define DOC_ECCCONF1_BCH_SYNDROM_ERR 0x80 +#define DOC_ECCCONF1_UNKOWN1 0x40 +#define DOC_ECCCONF1_UNKOWN2 0x20 +#define DOC_ECCCONF1_UNKOWN3 0x10 +#define DOC_ECCCONF1_HAMMING_BITS_MASK 0x0f + +/* + * Flash register : DOC_PROTECTION + */ +#define DOC_PROTECT_FOUNDRY_OTP_LOCK 0x01 +#define DOC_PROTECT_CUSTOMER_OTP_LOCK 0x02 +#define DOC_PROTECT_LOCK_INPUT 0x04 +#define DOC_PROTECT_STICKY_LOCK 0x08 +#define DOC_PROTECT_PROTECTION_ENABLED 0x10 +#define DOC_PROTECT_IPL_DOWNLOAD_LOCK 0x20 +#define DOC_PROTECT_PROTECTION_ERROR 0x80 + +/* + * Flash register : DOC_DPS0_STATUS and DOC_DPS1_STATUS + */ +#define DOC_DPS_OTP_PROTECTED 0x01 +#define DOC_DPS_READ_PROTECTED 0x02 +#define DOC_DPS_WRITE_PROTECTED 0x04 +#define DOC_DPS_HW_LOCK_ENABLED 0x08 +#define DOC_DPS_KEY_OK 0x80 + +/* + * Flash register : DOC_CONFIGURATION + */ +#define DOC_CONF_IF_CFG 0x80 +#define DOC_CONF_MAX_ID_MASK 0x30 +#define DOC_CONF_VCCQ_3V 0x01 + +/* + * Flash register : DOC_READADDRESS + */ +#define DOC_READADDR_INC 0x8000 +#define DOC_READADDR_ONE_BYTE 0x4000 +#define DOC_READADDR_ADDR_MASK 0x1fff + +/** + * struct docg3 - DiskOnChip driver private data + * @dev: the device currently under control + * @base: mapped IO space + * @device_id: number of the cascaded DoCG3 device (0, 1, 2 or 3) + * @if_cfg: if true, reads are on 16bits, else reads are on 8bits + * @bbt: bad block table cache + * @debugfs_root: debugfs root node + */ +struct docg3 { + struct device *dev; + void __iomem *base; + unsigned int device_id:4; + unsigned int if_cfg:1; + int max_block; + u8 *bbt; + struct dentry *debugfs_root; +}; + +#define doc_err(fmt, arg...) dev_err(docg3->dev, (fmt), ## arg) +#define doc_info(fmt, arg...) dev_info(docg3->dev, (fmt), ## arg) +#define doc_dbg(fmt, arg...) dev_dbg(docg3->dev, (fmt), ## arg) +#define doc_vdbg(fmt, arg...) 
dev_vdbg(docg3->dev, (fmt), ## arg) + +#define DEBUGFS_RO_ATTR(name, show_fct) \ + static int name##_open(struct inode *inode, struct file *file) \ + { return single_open(file, show_fct, inode->i_private); } \ + static const struct file_operations name##_fops = { \ + .owner = THIS_MODULE, \ + .open = name##_open, \ + .llseek = seq_lseek, \ + .read = seq_read, \ + .release = single_release \ + }; +#endif + +/* + * Trace events part + */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM docg3 + +#if !defined(_MTD_DOCG3_TRACE) || defined(TRACE_HEADER_MULTI_READ) +#define _MTD_DOCG3_TRACE + +#include <linux/tracepoint.h> + +TRACE_EVENT(docg3_io, + TP_PROTO(int op, int width, u16 reg, int val), + TP_ARGS(op, width, reg, val), + TP_STRUCT__entry( + __field(int, op) + __field(unsigned char, width) + __field(u16, reg) + __field(int, val)), + TP_fast_assign( + __entry->op = op; + __entry->width = width; + __entry->reg = reg; + __entry->val = val;), + TP_printk("docg3: %s%02d reg=%04x, val=%04x", + __entry->op ? "write" : "read", __entry->width, + __entry->reg, __entry->val) + ); +#endif + +/* This part must be outside protection */ +#undef TRACE_INCLUDE_PATH +#undef TRACE_INCLUDE_FILE +#define TRACE_INCLUDE_PATH . +#define TRACE_INCLUDE_FILE docg3 +#include <trace/define_trace.h> diff --git a/drivers/mtd/devices/docprobe.c b/drivers/mtd/devices/docprobe.c index d374603493a7..45116bb30297 100644 --- a/drivers/mtd/devices/docprobe.c +++ b/drivers/mtd/devices/docprobe.c @@ -50,11 +50,6 @@ #include <linux/mtd/nand.h> #include <linux/mtd/doc2000.h> -/* Where to look for the devices? */ -#ifndef CONFIG_MTD_DOCPROBE_ADDRESS -#define CONFIG_MTD_DOCPROBE_ADDRESS 0 -#endif - static unsigned long doc_config_location = CONFIG_MTD_DOCPROBE_ADDRESS; module_param(doc_config_location, ulong, 0); diff --git a/drivers/mtd/devices/lart.c b/drivers/mtd/devices/lart.c index 772a0ff89e0f..3a11ea628e58 100644 --- a/drivers/mtd/devices/lart.c +++ b/drivers/mtd/devices/lart.c @@ -34,9 +34,6 @@ /* debugging */ //#define LART_DEBUG -/* partition support */ -#define HAVE_PARTITIONS - #include <linux/kernel.h> #include <linux/module.h> #include <linux/types.h> @@ -44,9 +41,7 @@ #include <linux/errno.h> #include <linux/string.h> #include <linux/mtd/mtd.h> -#ifdef HAVE_PARTITIONS #include <linux/mtd/partitions.h> -#endif #ifndef CONFIG_SA1100_LART #error This is for LART architecture only @@ -598,7 +593,6 @@ static struct mtd_erase_region_info erase_regions[] = { } }; -#ifdef HAVE_PARTITIONS static struct mtd_partition lart_partitions[] = { /* blob */ { @@ -619,7 +613,7 @@ static struct mtd_partition lart_partitions[] = { .size = INITRD_LEN, /* MTDPART_SIZ_FULL */ } }; -#endif +#define NUM_PARTITIONS ARRAY_SIZE(lart_partitions) static int __init lart_flash_init (void) { @@ -668,7 +662,6 @@ static int __init lart_flash_init (void) result,mtd.eraseregions[result].erasesize,mtd.eraseregions[result].erasesize / 1024, result,mtd.eraseregions[result].numblocks); -#ifdef HAVE_PARTITIONS printk ("\npartitions = %d\n", ARRAY_SIZE(lart_partitions)); for (result = 0; result < ARRAY_SIZE(lart_partitions); result++) @@ -681,25 +674,16 @@ static int __init lart_flash_init (void) result,lart_partitions[result].offset, result,lart_partitions[result].size,lart_partitions[result].size / 1024); #endif -#endif -#ifndef HAVE_PARTITIONS - result = mtd_device_register(&mtd, NULL, 0); -#else result = mtd_device_register(&mtd, lart_partitions, ARRAY_SIZE(lart_partitions)); -#endif return (result); } static void __exit lart_flash_exit (void) { -#ifndef 
HAVE_PARTITIONS - mtd_device_unregister(&mtd); -#else mtd_device_unregister(&mtd); -#endif } module_init (lart_flash_init); diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c index 35180e475c4c..884904d3f9d2 100644 --- a/drivers/mtd/devices/m25p80.c +++ b/drivers/mtd/devices/m25p80.c @@ -30,6 +30,7 @@ #include <linux/mtd/cfi.h> #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> +#include <linux/of_platform.h> #include <linux/spi/spi.h> #include <linux/spi/flash.h> @@ -88,7 +89,6 @@ struct m25p { struct spi_device *spi; struct mutex lock; struct mtd_info mtd; - unsigned partitioned:1; u16 page_size; u16 addr_width; u8 erase_opcode; @@ -209,9 +209,8 @@ static int wait_till_ready(struct m25p *flash) */ static int erase_chip(struct m25p *flash) { - DEBUG(MTD_DEBUG_LEVEL3, "%s: %s %lldKiB\n", - dev_name(&flash->spi->dev), __func__, - (long long)(flash->mtd.size >> 10)); + pr_debug("%s: %s %lldKiB\n", dev_name(&flash->spi->dev), __func__, + (long long)(flash->mtd.size >> 10)); /* Wait until finished previous write command. */ if (wait_till_ready(flash)) @@ -250,9 +249,8 @@ static int m25p_cmdsz(struct m25p *flash) */ static int erase_sector(struct m25p *flash, u32 offset) { - DEBUG(MTD_DEBUG_LEVEL3, "%s: %s %dKiB at 0x%08x\n", - dev_name(&flash->spi->dev), __func__, - flash->mtd.erasesize / 1024, offset); + pr_debug("%s: %s %dKiB at 0x%08x\n", dev_name(&flash->spi->dev), + __func__, flash->mtd.erasesize / 1024, offset); /* Wait until finished previous write command. */ if (wait_till_ready(flash)) @@ -286,9 +284,9 @@ static int m25p80_erase(struct mtd_info *mtd, struct erase_info *instr) u32 addr,len; uint32_t rem; - DEBUG(MTD_DEBUG_LEVEL2, "%s: %s %s 0x%llx, len %lld\n", - dev_name(&flash->spi->dev), __func__, "at", - (long long)instr->addr, (long long)instr->len); + pr_debug("%s: %s at 0x%llx, len %lld\n", dev_name(&flash->spi->dev), + __func__, (long long)instr->addr, + (long long)instr->len); /* sanity checks */ if (instr->addr + instr->len > flash->mtd.size) @@ -348,9 +346,8 @@ static int m25p80_read(struct mtd_info *mtd, loff_t from, size_t len, struct spi_transfer t[2]; struct spi_message m; - DEBUG(MTD_DEBUG_LEVEL2, "%s: %s %s 0x%08x, len %zd\n", - dev_name(&flash->spi->dev), __func__, "from", - (u32)from, len); + pr_debug("%s: %s from 0x%08x, len %zd\n", dev_name(&flash->spi->dev), + __func__, (u32)from, len); /* sanity checks */ if (!len) @@ -417,9 +414,8 @@ static int m25p80_write(struct mtd_info *mtd, loff_t to, size_t len, struct spi_transfer t[2]; struct spi_message m; - DEBUG(MTD_DEBUG_LEVEL2, "%s: %s %s 0x%08x, len %zd\n", - dev_name(&flash->spi->dev), __func__, "to", - (u32)to, len); + pr_debug("%s: %s to 0x%08x, len %zd\n", dev_name(&flash->spi->dev), + __func__, (u32)to, len); *retlen = 0; @@ -510,9 +506,8 @@ static int sst_write(struct mtd_info *mtd, loff_t to, size_t len, size_t actual; int cmd_sz, ret; - DEBUG(MTD_DEBUG_LEVEL2, "%s: %s %s 0x%08x, len %zd\n", - dev_name(&flash->spi->dev), __func__, "to", - (u32)to, len); + pr_debug("%s: %s to 0x%08x, len %zd\n", dev_name(&flash->spi->dev), + __func__, (u32)to, len); *retlen = 0; @@ -661,6 +656,7 @@ static const struct spi_device_id m25p_ids[] = { { "at25fs040", INFO(0x1f6604, 0, 64 * 1024, 8, SECT_4K) }, { "at25df041a", INFO(0x1f4401, 0, 64 * 1024, 8, SECT_4K) }, + { "at25df321a", INFO(0x1f4701, 0, 64 * 1024, 64, SECT_4K) }, { "at25df641", INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) }, { "at26f004", INFO(0x1f0400, 0, 64 * 1024, 8, SECT_4K) }, @@ -671,6 +667,7 @@ static const struct spi_device_id 
m25p_ids[] = { /* EON -- en25xxx */ { "en25f32", INFO(0x1c3116, 0, 64 * 1024, 64, SECT_4K) }, { "en25p32", INFO(0x1c2016, 0, 64 * 1024, 64, 0) }, + { "en25q32b", INFO(0x1c3016, 0, 64 * 1024, 64, 0) }, { "en25p64", INFO(0x1c2017, 0, 64 * 1024, 128, 0) }, /* Intel/Numonyx -- xxxs33b */ @@ -788,8 +785,8 @@ static const struct spi_device_id *__devinit jedec_probe(struct spi_device *spi) */ tmp = spi_write_then_read(spi, &code, 1, id, 5); if (tmp < 0) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: error %d reading JEDEC ID\n", - dev_name(&spi->dev), tmp); + pr_debug("%s: error %d reading JEDEC ID\n", + dev_name(&spi->dev), tmp); return ERR_PTR(tmp); } jedec = id[0]; @@ -825,8 +822,12 @@ static int __devinit m25p_probe(struct spi_device *spi) struct m25p *flash; struct flash_info *info; unsigned i; - struct mtd_partition *parts = NULL; - int nr_parts = 0; + struct mtd_part_parser_data ppdata; + +#ifdef CONFIG_MTD_OF_PARTS + if (!of_device_is_available(spi->dev.of_node)) + return -ENODEV; +#endif /* Platform data helps sort out which chip type we have, as * well as how this board partitions it. If we don't have @@ -928,6 +929,7 @@ static int __devinit m25p_probe(struct spi_device *spi) if (info->flags & M25P_NO_ERASE) flash->mtd.flags |= MTD_NO_ERASE; + ppdata.of_node = spi->dev.of_node; flash->mtd.dev.parent = &spi->dev; flash->page_size = info->page_size; @@ -945,8 +947,7 @@ static int __devinit m25p_probe(struct spi_device *spi) dev_info(&spi->dev, "%s (%lld Kbytes)\n", id->name, (long long)flash->mtd.size >> 10); - DEBUG(MTD_DEBUG_LEVEL2, - "mtd .name = %s, .size = 0x%llx (%lldMiB) " + pr_debug("mtd .name = %s, .size = 0x%llx (%lldMiB) " ".erasesize = 0x%.8x (%uKiB) .numeraseregions = %d\n", flash->mtd.name, (long long)flash->mtd.size, (long long)(flash->mtd.size >> 20), @@ -955,8 +956,7 @@ static int __devinit m25p_probe(struct spi_device *spi) if (flash->mtd.numeraseregions) for (i = 0; i < flash->mtd.numeraseregions; i++) - DEBUG(MTD_DEBUG_LEVEL2, - "mtd.eraseregions[%d] = { .offset = 0x%llx, " + pr_debug("mtd.eraseregions[%d] = { .offset = 0x%llx, " ".erasesize = 0x%.8x (%uKiB), " ".numblocks = %d }\n", i, (long long)flash->mtd.eraseregions[i].offset, @@ -968,41 +968,9 @@ static int __devinit m25p_probe(struct spi_device *spi) /* partitions should match sector boundaries; and it may be good to * use readonly partitions for writeprotected sectors (BP2..BP0). */ - if (mtd_has_cmdlinepart()) { - static const char *part_probes[] - = { "cmdlinepart", NULL, }; - - nr_parts = parse_mtd_partitions(&flash->mtd, - part_probes, &parts, 0); - } - - if (nr_parts <= 0 && data && data->parts) { - parts = data->parts; - nr_parts = data->nr_parts; - } - -#ifdef CONFIG_MTD_OF_PARTS - if (nr_parts <= 0 && spi->dev.of_node) { - nr_parts = of_mtd_parse_partitions(&spi->dev, - spi->dev.of_node, &parts); - } -#endif - - if (nr_parts > 0) { - for (i = 0; i < nr_parts; i++) { - DEBUG(MTD_DEBUG_LEVEL2, "partitions[%d] = " - "{.name = %s, .offset = 0x%llx, " - ".size = 0x%llx (%lldKiB) }\n", - i, parts[i].name, - (long long)parts[i].offset, - (long long)parts[i].size, - (long long)(parts[i].size >> 10)); - } - flash->partitioned = 1; - } - - return mtd_device_register(&flash->mtd, parts, nr_parts) == 1 ? - -ENODEV : 0; + return mtd_device_parse_register(&flash->mtd, NULL, &ppdata, + data ? data->parts : NULL, + data ? 
data->nr_parts : 0); } diff --git a/drivers/mtd/devices/mtd_dataflash.c b/drivers/mtd/devices/mtd_dataflash.c index 13749d458a31..d75c7af18a63 100644 --- a/drivers/mtd/devices/mtd_dataflash.c +++ b/drivers/mtd/devices/mtd_dataflash.c @@ -17,6 +17,8 @@ #include <linux/mutex.h> #include <linux/err.h> #include <linux/math64.h> +#include <linux/of.h> +#include <linux/of_device.h> #include <linux/spi/spi.h> #include <linux/spi/flash.h> @@ -24,7 +26,6 @@ #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> - /* * DataFlash is a kind of SPI flash. Most AT45 chips have two buffers in * each chip, which may be used for double buffered I/O; but this driver @@ -98,6 +99,16 @@ struct dataflash { struct mtd_info mtd; }; +#ifdef CONFIG_OF +static const struct of_device_id dataflash_dt_ids[] = { + { .compatible = "atmel,at45", }, + { .compatible = "atmel,dataflash", }, + { /* sentinel */ } +}; +#else +#define dataflash_dt_ids NULL +#endif + /* ......................................................................... */ /* @@ -122,7 +133,7 @@ static int dataflash_waitready(struct spi_device *spi) for (;;) { status = dataflash_status(spi); if (status < 0) { - DEBUG(MTD_DEBUG_LEVEL1, "%s: status %d?\n", + pr_debug("%s: status %d?\n", dev_name(&spi->dev), status); status = 0; } @@ -149,7 +160,7 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr) uint8_t *command; uint32_t rem; - DEBUG(MTD_DEBUG_LEVEL2, "%s: erase addr=0x%llx len 0x%llx\n", + pr_debug("%s: erase addr=0x%llx len 0x%llx\n", dev_name(&spi->dev), (long long)instr->addr, (long long)instr->len); @@ -187,7 +198,7 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr) command[2] = (uint8_t)(pageaddr >> 8); command[3] = 0; - DEBUG(MTD_DEBUG_LEVEL3, "ERASE %s: (%x) %x %x %x [%i]\n", + pr_debug("ERASE %s: (%x) %x %x %x [%i]\n", do_block ? 
"block" : "page", command[0], command[1], command[2], command[3], pageaddr); @@ -238,8 +249,8 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len, uint8_t *command; int status; - DEBUG(MTD_DEBUG_LEVEL2, "%s: read 0x%x..0x%x\n", - dev_name(&priv->spi->dev), (unsigned)from, (unsigned)(from + len)); + pr_debug("%s: read 0x%x..0x%x\n", dev_name(&priv->spi->dev), + (unsigned)from, (unsigned)(from + len)); *retlen = 0; @@ -255,7 +266,7 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len, command = priv->command; - DEBUG(MTD_DEBUG_LEVEL3, "READ: (%x) %x %x %x\n", + pr_debug("READ: (%x) %x %x %x\n", command[0], command[1], command[2], command[3]); spi_message_init(&msg); @@ -287,7 +298,7 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len, *retlen = msg.actual_length - 8; status = 0; } else - DEBUG(MTD_DEBUG_LEVEL1, "%s: read %x..%x --> %d\n", + pr_debug("%s: read %x..%x --> %d\n", dev_name(&priv->spi->dev), (unsigned)from, (unsigned)(from + len), status); @@ -314,7 +325,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len, int status = -EINVAL; uint8_t *command; - DEBUG(MTD_DEBUG_LEVEL2, "%s: write 0x%x..0x%x\n", + pr_debug("%s: write 0x%x..0x%x\n", dev_name(&spi->dev), (unsigned)to, (unsigned)(to + len)); *retlen = 0; @@ -340,7 +351,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len, mutex_lock(&priv->lock); while (remaining > 0) { - DEBUG(MTD_DEBUG_LEVEL3, "write @ %i:%i len=%i\n", + pr_debug("write @ %i:%i len=%i\n", pageaddr, offset, writelen); /* REVISIT: @@ -368,12 +379,12 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len, command[2] = (addr & 0x0000FF00) >> 8; command[3] = 0; - DEBUG(MTD_DEBUG_LEVEL3, "TRANSFER: (%x) %x %x %x\n", + pr_debug("TRANSFER: (%x) %x %x %x\n", command[0], command[1], command[2], command[3]); status = spi_sync(spi, &msg); if (status < 0) - DEBUG(MTD_DEBUG_LEVEL1, "%s: xfer %u -> %d \n", + pr_debug("%s: xfer %u -> %d\n", dev_name(&spi->dev), addr, status); (void) dataflash_waitready(priv->spi); @@ -386,7 +397,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len, command[2] = (addr & 0x0000FF00) >> 8; command[3] = (addr & 0x000000FF); - DEBUG(MTD_DEBUG_LEVEL3, "PROGRAM: (%x) %x %x %x\n", + pr_debug("PROGRAM: (%x) %x %x %x\n", command[0], command[1], command[2], command[3]); x[1].tx_buf = writebuf; @@ -395,7 +406,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len, status = spi_sync(spi, &msg); spi_transfer_del(x + 1); if (status < 0) - DEBUG(MTD_DEBUG_LEVEL1, "%s: pgm %u/%u -> %d \n", + pr_debug("%s: pgm %u/%u -> %d\n", dev_name(&spi->dev), addr, writelen, status); (void) dataflash_waitready(priv->spi); @@ -410,12 +421,12 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len, command[2] = (addr & 0x0000FF00) >> 8; command[3] = 0; - DEBUG(MTD_DEBUG_LEVEL3, "COMPARE: (%x) %x %x %x\n", + pr_debug("COMPARE: (%x) %x %x %x\n", command[0], command[1], command[2], command[3]); status = spi_sync(spi, &msg); if (status < 0) - DEBUG(MTD_DEBUG_LEVEL1, "%s: compare %u -> %d \n", + pr_debug("%s: compare %u -> %d\n", dev_name(&spi->dev), addr, status); status = dataflash_waitready(priv->spi); @@ -634,11 +645,10 @@ add_dataflash_otp(struct spi_device *spi, char *name, { struct dataflash *priv; struct mtd_info *device; + struct mtd_part_parser_data ppdata; struct flash_platform_data *pdata = spi->dev.platform_data; char *otp_tag = ""; int err = 0; - struct mtd_partition *parts; - 
int nr_parts = 0; priv = kzalloc(sizeof *priv, GFP_KERNEL); if (!priv) @@ -677,28 +687,11 @@ add_dataflash_otp(struct spi_device *spi, char *name, pagesize, otp_tag); dev_set_drvdata(&spi->dev, priv); - if (mtd_has_cmdlinepart()) { - static const char *part_probes[] = { "cmdlinepart", NULL, }; - - nr_parts = parse_mtd_partitions(device, part_probes, &parts, - 0); - } + ppdata.of_node = spi->dev.of_node; + err = mtd_device_parse_register(device, NULL, &ppdata, + pdata ? pdata->parts : NULL, + pdata ? pdata->nr_parts : 0); - if (nr_parts <= 0 && pdata && pdata->parts) { - parts = pdata->parts; - nr_parts = pdata->nr_parts; - } - - if (nr_parts > 0) { - priv->partitioned = 1; - err = mtd_device_register(device, parts, nr_parts); - goto out; - } - - if (mtd_device_register(device, NULL, 0) == 1) - err = -ENODEV; - -out: if (!err) return 0; @@ -787,7 +780,7 @@ static struct flash_info *__devinit jedec_probe(struct spi_device *spi) */ tmp = spi_write_then_read(spi, &code, 1, id, 3); if (tmp < 0) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: error %d reading JEDEC ID\n", + pr_debug("%s: error %d reading JEDEC ID\n", dev_name(&spi->dev), tmp); return ERR_PTR(tmp); } @@ -804,7 +797,7 @@ static struct flash_info *__devinit jedec_probe(struct spi_device *spi) tmp < ARRAY_SIZE(dataflash_data); tmp++, info++) { if (info->jedec_id == jedec) { - DEBUG(MTD_DEBUG_LEVEL1, "%s: OTP, sector protect%s\n", + pr_debug("%s: OTP, sector protect%s\n", dev_name(&spi->dev), (info->flags & SUP_POW2PS) ? ", binary pagesize" : "" @@ -812,8 +805,7 @@ static struct flash_info *__devinit jedec_probe(struct spi_device *spi) if (info->flags & SUP_POW2PS) { status = dataflash_status(spi); if (status < 0) { - DEBUG(MTD_DEBUG_LEVEL1, - "%s: status error %d\n", + pr_debug("%s: status error %d\n", dev_name(&spi->dev), status); return ERR_PTR(status); } @@ -878,7 +870,7 @@ static int __devinit dataflash_probe(struct spi_device *spi) */ status = dataflash_status(spi); if (status <= 0 || status == 0xff) { - DEBUG(MTD_DEBUG_LEVEL1, "%s: status error %d\n", + pr_debug("%s: status error %d\n", dev_name(&spi->dev), status); if (status == 0 || status == 0xff) status = -ENODEV; @@ -914,14 +906,14 @@ static int __devinit dataflash_probe(struct spi_device *spi) break; /* obsolete AT45DB1282 not (yet?) 
supported */ default: - DEBUG(MTD_DEBUG_LEVEL1, "%s: unsupported device (%x)\n", - dev_name(&spi->dev), status & 0x3c); + pr_debug("%s: unsupported device (%x)\n", dev_name(&spi->dev), + status & 0x3c); status = -ENODEV; } if (status < 0) - DEBUG(MTD_DEBUG_LEVEL1, "%s: add_dataflash --> %d\n", - dev_name(&spi->dev), status); + pr_debug("%s: add_dataflash --> %d\n", dev_name(&spi->dev), + status); return status; } @@ -931,7 +923,7 @@ static int __devexit dataflash_remove(struct spi_device *spi) struct dataflash *flash = dev_get_drvdata(&spi->dev); int status; - DEBUG(MTD_DEBUG_LEVEL1, "%s: remove\n", dev_name(&spi->dev)); + pr_debug("%s: remove\n", dev_name(&spi->dev)); status = mtd_device_unregister(&flash->mtd); if (status == 0) { @@ -946,6 +938,7 @@ static struct spi_driver dataflash_driver = { .name = "mtd_dataflash", .bus = &spi_bus_type, .owner = THIS_MODULE, + .of_match_table = dataflash_dt_ids, }, .probe = dataflash_probe, diff --git a/drivers/mtd/devices/sst25l.c b/drivers/mtd/devices/sst25l.c index 83e80c65d6e7..d38ef3bffe8d 100644 --- a/drivers/mtd/devices/sst25l.c +++ b/drivers/mtd/devices/sst25l.c @@ -52,8 +52,6 @@ struct sst25l_flash { struct spi_device *spi; struct mutex lock; struct mtd_info mtd; - - int partitioned; }; struct flash_info { @@ -381,8 +379,6 @@ static int __devinit sst25l_probe(struct spi_device *spi) struct sst25l_flash *flash; struct flash_platform_data *data; int ret, i; - struct mtd_partition *parts = NULL; - int nr_parts = 0; flash_info = sst25l_match_device(spi); if (!flash_info) @@ -414,8 +410,7 @@ static int __devinit sst25l_probe(struct spi_device *spi) dev_info(&spi->dev, "%s (%lld KiB)\n", flash_info->name, (long long)flash->mtd.size >> 10); - DEBUG(MTD_DEBUG_LEVEL2, - "mtd .name = %s, .size = 0x%llx (%lldMiB) " + pr_debug("mtd .name = %s, .size = 0x%llx (%lldMiB) " ".erasesize = 0x%.8x (%uKiB) .numeraseregions = %d\n", flash->mtd.name, (long long)flash->mtd.size, (long long)(flash->mtd.size >> 20), @@ -423,37 +418,10 @@ static int __devinit sst25l_probe(struct spi_device *spi) flash->mtd.numeraseregions); - if (mtd_has_cmdlinepart()) { - static const char *part_probes[] = {"cmdlinepart", NULL}; - - nr_parts = parse_mtd_partitions(&flash->mtd, - part_probes, - &parts, 0); - } - - if (nr_parts <= 0 && data && data->parts) { - parts = data->parts; - nr_parts = data->nr_parts; - } - - if (nr_parts > 0) { - for (i = 0; i < nr_parts; i++) { - DEBUG(MTD_DEBUG_LEVEL2, "partitions[%d] = " - "{.name = %s, .offset = 0x%llx, " - ".size = 0x%llx (%lldKiB) }\n", - i, parts[i].name, - (long long)parts[i].offset, - (long long)parts[i].size, - (long long)(parts[i].size >> 10)); - } - - flash->partitioned = 1; - return mtd_device_register(&flash->mtd, parts, - nr_parts); - } - - ret = mtd_device_register(&flash->mtd, NULL, 0); - if (ret == 1) { + ret = mtd_device_parse_register(&flash->mtd, NULL, 0, + data ? data->parts : NULL, + data ? data->nr_parts : 0); + if (ret) { kfree(flash); dev_set_drvdata(&spi->dev, NULL); return -ENODEV; diff --git a/drivers/mtd/ftl.c b/drivers/mtd/ftl.c index 037b399df3f1..c7382bb686c6 100644 --- a/drivers/mtd/ftl.c +++ b/drivers/mtd/ftl.c @@ -339,7 +339,7 @@ static int erase_xfer(partition_t *part, struct erase_info *erase; xfer = &part->XferInfo[xfernum]; - DEBUG(1, "ftl_cs: erasing xfer unit at 0x%x\n", xfer->Offset); + pr_debug("ftl_cs: erasing xfer unit at 0x%x\n", xfer->Offset); xfer->state = XFER_ERASING; /* Is there a free erase slot? Always in MTD. 
*/ @@ -415,7 +415,7 @@ static int prepare_xfer(partition_t *part, int i) xfer = &part->XferInfo[i]; xfer->state = XFER_FAILED; - DEBUG(1, "ftl_cs: preparing xfer unit at 0x%x\n", xfer->Offset); + pr_debug("ftl_cs: preparing xfer unit at 0x%x\n", xfer->Offset); /* Write the transfer unit header */ header = part->header; @@ -476,7 +476,7 @@ static int copy_erase_unit(partition_t *part, uint16_t srcunit, eun = &part->EUNInfo[srcunit]; xfer = &part->XferInfo[xferunit]; - DEBUG(2, "ftl_cs: copying block 0x%x to 0x%x\n", + pr_debug("ftl_cs: copying block 0x%x to 0x%x\n", eun->Offset, xfer->Offset); @@ -598,7 +598,7 @@ static int copy_erase_unit(partition_t *part, uint16_t srcunit, unit with the fewest erases, and usually pick the data unit with the most deleted blocks. But with a small probability, pick the oldest data unit instead. This means that we generally postpone - the next reclaimation as long as possible, but shuffle static + the next reclamation as long as possible, but shuffle static stuff around a bit for wear leveling. ======================================================================*/ @@ -609,8 +609,8 @@ static int reclaim_block(partition_t *part) uint32_t best; int queued, ret; - DEBUG(0, "ftl_cs: reclaiming space...\n"); - DEBUG(3, "NumTransferUnits == %x\n", part->header.NumTransferUnits); + pr_debug("ftl_cs: reclaiming space...\n"); + pr_debug("NumTransferUnits == %x\n", part->header.NumTransferUnits); /* Pick the least erased transfer unit */ best = 0xffffffff; xfer = 0xffff; do { @@ -618,22 +618,22 @@ static int reclaim_block(partition_t *part) for (i = 0; i < part->header.NumTransferUnits; i++) { int n=0; if (part->XferInfo[i].state == XFER_UNKNOWN) { - DEBUG(3,"XferInfo[%d].state == XFER_UNKNOWN\n",i); + pr_debug("XferInfo[%d].state == XFER_UNKNOWN\n",i); n=1; erase_xfer(part, i); } if (part->XferInfo[i].state == XFER_ERASING) { - DEBUG(3,"XferInfo[%d].state == XFER_ERASING\n",i); + pr_debug("XferInfo[%d].state == XFER_ERASING\n",i); n=1; queued = 1; } else if (part->XferInfo[i].state == XFER_ERASED) { - DEBUG(3,"XferInfo[%d].state == XFER_ERASED\n",i); + pr_debug("XferInfo[%d].state == XFER_ERASED\n",i); n=1; prepare_xfer(part, i); } if (part->XferInfo[i].state == XFER_PREPARED) { - DEBUG(3,"XferInfo[%d].state == XFER_PREPARED\n",i); + pr_debug("XferInfo[%d].state == XFER_PREPARED\n",i); n=1; if (part->XferInfo[i].EraseCount <= best) { best = part->XferInfo[i].EraseCount; @@ -641,12 +641,12 @@ static int reclaim_block(partition_t *part) } } if (!n) - DEBUG(3,"XferInfo[%d].state == %x\n",i, part->XferInfo[i].state); + pr_debug("XferInfo[%d].state == %x\n",i, part->XferInfo[i].state); } if (xfer == 0xffff) { if (queued) { - DEBUG(1, "ftl_cs: waiting for transfer " + pr_debug("ftl_cs: waiting for transfer " "unit to be prepared...\n"); if (part->mbd.mtd->sync) part->mbd.mtd->sync(part->mbd.mtd); @@ -656,7 +656,7 @@ static int reclaim_block(partition_t *part) printk(KERN_NOTICE "ftl_cs: reclaim failed: no " "suitable transfer units!\n"); else - DEBUG(1, "ftl_cs: reclaim failed: no " + pr_debug("ftl_cs: reclaim failed: no " "suitable transfer units!\n"); return -EIO; @@ -666,7 +666,7 @@ static int reclaim_block(partition_t *part) eun = 0; if ((jiffies % shuffle_freq) == 0) { - DEBUG(1, "ftl_cs: recycling freshest block...\n"); + pr_debug("ftl_cs: recycling freshest block...\n"); best = 0xffffffff; for (i = 0; i < part->DataUnits; i++) if (part->EUNInfo[i].EraseCount <= best) { @@ -686,7 +686,7 @@ static int reclaim_block(partition_t *part) printk(KERN_NOTICE "ftl_cs: 
reclaim failed: " "no free blocks!\n"); else - DEBUG(1,"ftl_cs: reclaim failed: " + pr_debug("ftl_cs: reclaim failed: " "no free blocks!\n"); return -EIO; @@ -771,7 +771,7 @@ static uint32_t find_free(partition_t *part) printk(KERN_NOTICE "ftl_cs: bad free list!\n"); return 0; } - DEBUG(2, "ftl_cs: found free block at %d in %d\n", blk, eun); + pr_debug("ftl_cs: found free block at %d in %d\n", blk, eun); return blk; } /* find_free */ @@ -791,7 +791,7 @@ static int ftl_read(partition_t *part, caddr_t buffer, int ret; size_t offset, retlen; - DEBUG(2, "ftl_cs: ftl_read(0x%p, 0x%lx, %ld)\n", + pr_debug("ftl_cs: ftl_read(0x%p, 0x%lx, %ld)\n", part, sector, nblocks); if (!(part->state & FTL_FORMATTED)) { printk(KERN_NOTICE "ftl_cs: bad partition\n"); @@ -840,7 +840,7 @@ static int set_bam_entry(partition_t *part, uint32_t log_addr, int ret; size_t retlen, offset; - DEBUG(2, "ftl_cs: set_bam_entry(0x%p, 0x%x, 0x%x)\n", + pr_debug("ftl_cs: set_bam_entry(0x%p, 0x%x, 0x%x)\n", part, log_addr, virt_addr); bsize = 1 << part->header.EraseUnitSize; eun = log_addr / bsize; @@ -905,7 +905,7 @@ static int ftl_write(partition_t *part, caddr_t buffer, int ret; size_t retlen, offset; - DEBUG(2, "ftl_cs: ftl_write(0x%p, %ld, %ld)\n", + pr_debug("ftl_cs: ftl_write(0x%p, %ld, %ld)\n", part, sector, nblocks); if (!(part->state & FTL_FORMATTED)) { printk(KERN_NOTICE "ftl_cs: bad partition\n"); @@ -1011,7 +1011,7 @@ static int ftl_discardsect(struct mtd_blktrans_dev *dev, partition_t *part = (void *)dev; uint32_t bsize = 1 << part->header.EraseUnitSize; - DEBUG(1, "FTL erase sector %ld for %d sectors\n", + pr_debug("FTL erase sector %ld for %d sectors\n", sector, nr_sects); while (nr_sects) { diff --git a/drivers/mtd/inftlcore.c b/drivers/mtd/inftlcore.c index d7592e67d048..dd034efd1875 100644 --- a/drivers/mtd/inftlcore.c +++ b/drivers/mtd/inftlcore.c @@ -63,14 +63,12 @@ static void inftl_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) return; } - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: add_mtd for %s\n", mtd->name); + pr_debug("INFTL: add_mtd for %s\n", mtd->name); inftl = kzalloc(sizeof(*inftl), GFP_KERNEL); - if (!inftl) { - printk(KERN_WARNING "INFTL: Out of memory for data structures\n"); + if (!inftl) return; - } inftl->mbd.mtd = mtd; inftl->mbd.devnum = -1; @@ -133,7 +131,7 @@ static void inftl_remove_dev(struct mtd_blktrans_dev *dev) { struct INFTLrecord *inftl = (void *)dev; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: remove_dev (i=%d)\n", dev->devnum); + pr_debug("INFTL: remove_dev (i=%d)\n", dev->devnum); del_mtd_blktrans_dev(dev); @@ -154,7 +152,7 @@ int inftl_read_oob(struct mtd_info *mtd, loff_t offs, size_t len, struct mtd_oob_ops ops; int res; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = offs & (mtd->writesize - 1); ops.ooblen = len; ops.oobbuf = buf; @@ -174,7 +172,7 @@ int inftl_write_oob(struct mtd_info *mtd, loff_t offs, size_t len, struct mtd_oob_ops ops; int res; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = offs & (mtd->writesize - 1); ops.ooblen = len; ops.oobbuf = buf; @@ -194,7 +192,7 @@ static int inftl_write(struct mtd_info *mtd, loff_t offs, size_t len, struct mtd_oob_ops ops; int res; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = offs; ops.ooblen = mtd->oobsize; ops.oobbuf = oob; @@ -215,16 +213,16 @@ static u16 INFTL_findfreeblock(struct INFTLrecord *inftl, int desperate) u16 pot = inftl->LastFreeEUN; int silly = inftl->nb_blocks; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_findfreeblock(inftl=%p," - 
"desperate=%d)\n", inftl, desperate); + pr_debug("INFTL: INFTL_findfreeblock(inftl=%p,desperate=%d)\n", + inftl, desperate); /* * Normally, we force a fold to happen before we run out of free * blocks completely. */ if (!desperate && inftl->numfreeEUNs < 2) { - DEBUG(MTD_DEBUG_LEVEL1, "INFTL: there are too few free " - "EUNs (%d)\n", inftl->numfreeEUNs); + pr_debug("INFTL: there are too few free EUNs (%d)\n", + inftl->numfreeEUNs); return BLOCK_NIL; } @@ -259,8 +257,8 @@ static u16 INFTL_foldchain(struct INFTLrecord *inftl, unsigned thisVUC, unsigned struct inftl_oob oob; size_t retlen; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_foldchain(inftl=%p,thisVUC=%d," - "pending=%d)\n", inftl, thisVUC, pendingblock); + pr_debug("INFTL: INFTL_foldchain(inftl=%p,thisVUC=%d,pending=%d)\n", + inftl, thisVUC, pendingblock); memset(BlockMap, 0xff, sizeof(BlockMap)); memset(BlockDeleted, 0, sizeof(BlockDeleted)); @@ -323,8 +321,7 @@ static u16 INFTL_foldchain(struct INFTLrecord *inftl, unsigned thisVUC, unsigned * Chain, and the Erase Unit into which we are supposed to be copying. * Go for it. */ - DEBUG(MTD_DEBUG_LEVEL1, "INFTL: folding chain %d into unit %d\n", - thisVUC, targetEUN); + pr_debug("INFTL: folding chain %d into unit %d\n", thisVUC, targetEUN); for (block = 0; block < inftl->EraseSize/SECTORSIZE ; block++) { unsigned char movebuf[SECTORSIZE]; @@ -349,14 +346,13 @@ static u16 INFTL_foldchain(struct INFTLrecord *inftl, unsigned thisVUC, unsigned ret = mtd->read(mtd, (inftl->EraseSize * BlockMap[block]) + (block * SECTORSIZE), SECTORSIZE, &retlen, movebuf); - if (ret < 0 && ret != -EUCLEAN) { + if (ret < 0 && !mtd_is_bitflip(ret)) { ret = mtd->read(mtd, (inftl->EraseSize * BlockMap[block]) + (block * SECTORSIZE), SECTORSIZE, &retlen, movebuf); if (ret != -EIO) - DEBUG(MTD_DEBUG_LEVEL1, "INFTL: error went " - "away on retry?\n"); + pr_debug("INFTL: error went away on retry?\n"); } memset(&oob, 0xff, sizeof(struct inftl_oob)); oob.b.Status = oob.b.Status1 = SECTOR_USED; @@ -372,8 +368,7 @@ static u16 INFTL_foldchain(struct INFTLrecord *inftl, unsigned thisVUC, unsigned * is important, by doing oldest first if we crash/reboot then it * it is relatively simple to clean up the mess). */ - DEBUG(MTD_DEBUG_LEVEL1, "INFTL: want to erase virtual chain %d\n", - thisVUC); + pr_debug("INFTL: want to erase virtual chain %d\n", thisVUC); for (;;) { /* Find oldest unit in chain. 
*/ @@ -421,7 +416,7 @@ static u16 INFTL_makefreeblock(struct INFTLrecord *inftl, unsigned pendingblock) u16 ChainLength = 0, thislen; u16 chain, EUN; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_makefreeblock(inftl=%p," + pr_debug("INFTL: INFTL_makefreeblock(inftl=%p," "pending=%d)\n", inftl, pendingblock); for (chain = 0; chain < inftl->nb_blocks; chain++) { @@ -484,8 +479,8 @@ static inline u16 INFTL_findwriteunit(struct INFTLrecord *inftl, unsigned block) size_t retlen; int silly, silly2 = 3; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_findwriteunit(inftl=%p," - "block=%d)\n", inftl, block); + pr_debug("INFTL: INFTL_findwriteunit(inftl=%p,block=%d)\n", + inftl, block); do { /* @@ -501,8 +496,8 @@ static inline u16 INFTL_findwriteunit(struct INFTLrecord *inftl, unsigned block) blockofs, 8, &retlen, (char *)&bci); status = bci.Status | bci.Status1; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: status of block %d in " - "EUN %d is %x\n", block , writeEUN, status); + pr_debug("INFTL: status of block %d in EUN %d is %x\n", + block , writeEUN, status); switch(status) { case SECTOR_FREE: @@ -555,9 +550,9 @@ hitused: * Hopefully we free something, lets try again. * This time we are desperate... */ - DEBUG(MTD_DEBUG_LEVEL1, "INFTL: using desperate==1 " - "to find free EUN to accommodate write to " - "VUC %d\n", thisVUC); + pr_debug("INFTL: using desperate==1 to find free EUN " + "to accommodate write to VUC %d\n", + thisVUC); writeEUN = INFTL_findfreeblock(inftl, 1); if (writeEUN == BLOCK_NIL) { /* @@ -647,7 +642,7 @@ static void INFTL_trydeletechain(struct INFTLrecord *inftl, unsigned thisVUC) struct inftl_bci bci; size_t retlen; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_trydeletechain(inftl=%p," + pr_debug("INFTL: INFTL_trydeletechain(inftl=%p," "thisVUC=%d)\n", inftl, thisVUC); memset(BlockUsed, 0, sizeof(BlockUsed)); @@ -711,7 +706,7 @@ static void INFTL_trydeletechain(struct INFTLrecord *inftl, unsigned thisVUC) * For each block in the chain free it and make it available * for future use. Erase from the oldest unit first. */ - DEBUG(MTD_DEBUG_LEVEL1, "INFTL: deleting empty VUC %d\n", thisVUC); + pr_debug("INFTL: deleting empty VUC %d\n", thisVUC); for (;;) { u16 *prevEUN = &inftl->VUtable[thisVUC]; @@ -719,7 +714,7 @@ static void INFTL_trydeletechain(struct INFTLrecord *inftl, unsigned thisVUC) /* If the chain is all gone already, we're done */ if (thisEUN == BLOCK_NIL) { - DEBUG(MTD_DEBUG_LEVEL2, "INFTL: Empty VUC %d for deletion was already absent\n", thisEUN); + pr_debug("INFTL: Empty VUC %d for deletion was already absent\n", thisEUN); return; } @@ -731,7 +726,7 @@ static void INFTL_trydeletechain(struct INFTLrecord *inftl, unsigned thisVUC) thisEUN = *prevEUN; } - DEBUG(MTD_DEBUG_LEVEL3, "Deleting EUN %d from VUC %d\n", + pr_debug("Deleting EUN %d from VUC %d\n", thisEUN, thisVUC); if (INFTL_formatblock(inftl, thisEUN) < 0) { @@ -767,7 +762,7 @@ static int INFTL_deleteblock(struct INFTLrecord *inftl, unsigned block) size_t retlen; struct inftl_bci bci; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_deleteblock(inftl=%p," + pr_debug("INFTL: INFTL_deleteblock(inftl=%p," "block=%d)\n", inftl, block); while (thisEUN < inftl->nb_blocks) { @@ -826,7 +821,7 @@ static int inftl_writeblock(struct mtd_blktrans_dev *mbd, unsigned long block, struct inftl_oob oob; char *p, *pend; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: inftl_writeblock(inftl=%p,block=%ld," + pr_debug("INFTL: inftl_writeblock(inftl=%p,block=%ld," "buffer=%p)\n", inftl, block, buffer); /* Is block all zero? 
*/ @@ -876,7 +871,7 @@ static int inftl_readblock(struct mtd_blktrans_dev *mbd, unsigned long block, struct inftl_bci bci; size_t retlen; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: inftl_readblock(inftl=%p,block=%ld," + pr_debug("INFTL: inftl_readblock(inftl=%p,block=%ld," "buffer=%p)\n", inftl, block, buffer); while (thisEUN < inftl->nb_blocks) { @@ -922,7 +917,7 @@ foundit: int ret = mtd->read(mtd, ptr, SECTORSIZE, &retlen, buffer); /* Handle corrected bit flips gracefully */ - if (ret < 0 && ret != -EUCLEAN) + if (ret < 0 && !mtd_is_bitflip(ret)) return -EIO; } return 0; diff --git a/drivers/mtd/inftlmount.c b/drivers/mtd/inftlmount.c index 104052e774b0..2ff601f816ce 100644 --- a/drivers/mtd/inftlmount.c +++ b/drivers/mtd/inftlmount.c @@ -53,7 +53,7 @@ static int find_boot_record(struct INFTLrecord *inftl) struct INFTLPartition *ip; size_t retlen; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: find_boot_record(inftl=%p)\n", inftl); + pr_debug("INFTL: find_boot_record(inftl=%p)\n", inftl); /* * Assume logical EraseSize == physical erasesize for starting the @@ -139,24 +139,20 @@ static int find_boot_record(struct INFTLrecord *inftl) mh->FormatFlags = le32_to_cpu(mh->FormatFlags); mh->PercentUsed = le32_to_cpu(mh->PercentUsed); -#ifdef CONFIG_MTD_DEBUG_VERBOSE - if (CONFIG_MTD_DEBUG_VERBOSE >= 2) { - printk("INFTL: Media Header ->\n" - " bootRecordID = %s\n" - " NoOfBootImageBlocks = %d\n" - " NoOfBinaryPartitions = %d\n" - " NoOfBDTLPartitions = %d\n" - " BlockMultiplerBits = %d\n" - " FormatFlgs = %d\n" - " OsakVersion = 0x%x\n" - " PercentUsed = %d\n", - mh->bootRecordID, mh->NoOfBootImageBlocks, - mh->NoOfBinaryPartitions, - mh->NoOfBDTLPartitions, - mh->BlockMultiplierBits, mh->FormatFlags, - mh->OsakVersion, mh->PercentUsed); - } -#endif + pr_debug("INFTL: Media Header ->\n" + " bootRecordID = %s\n" + " NoOfBootImageBlocks = %d\n" + " NoOfBinaryPartitions = %d\n" + " NoOfBDTLPartitions = %d\n" + " BlockMultiplerBits = %d\n" + " FormatFlgs = %d\n" + " OsakVersion = 0x%x\n" + " PercentUsed = %d\n", + mh->bootRecordID, mh->NoOfBootImageBlocks, + mh->NoOfBinaryPartitions, + mh->NoOfBDTLPartitions, + mh->BlockMultiplierBits, mh->FormatFlags, + mh->OsakVersion, mh->PercentUsed); if (mh->NoOfBDTLPartitions == 0) { printk(KERN_WARNING "INFTL: Media Header sanity check " @@ -200,19 +196,15 @@ static int find_boot_record(struct INFTLrecord *inftl) ip->spareUnits = le32_to_cpu(ip->spareUnits); ip->Reserved0 = le32_to_cpu(ip->Reserved0); -#ifdef CONFIG_MTD_DEBUG_VERBOSE - if (CONFIG_MTD_DEBUG_VERBOSE >= 2) { - printk(" PARTITION[%d] ->\n" - " virtualUnits = %d\n" - " firstUnit = %d\n" - " lastUnit = %d\n" - " flags = 0x%x\n" - " spareUnits = %d\n", - i, ip->virtualUnits, ip->firstUnit, - ip->lastUnit, ip->flags, - ip->spareUnits); - } -#endif + pr_debug(" PARTITION[%d] ->\n" + " virtualUnits = %d\n" + " firstUnit = %d\n" + " lastUnit = %d\n" + " flags = 0x%x\n" + " spareUnits = %d\n", + i, ip->virtualUnits, ip->firstUnit, + ip->lastUnit, ip->flags, + ip->spareUnits); if (ip->Reserved0 != ip->firstUnit) { struct erase_info *instr = &inftl->instr; @@ -375,7 +367,7 @@ static int check_free_sectors(struct INFTLrecord *inftl, unsigned int address, * * Return: 0 when succeed, -1 on error. * - * ToDo: 1. Is it neceressary to check_free_sector after erasing ?? + * ToDo: 1. Is it necessary to check_free_sector after erasing ?? 
*/ int INFTL_formatblock(struct INFTLrecord *inftl, int block) { @@ -385,8 +377,7 @@ int INFTL_formatblock(struct INFTLrecord *inftl, int block) struct mtd_info *mtd = inftl->mbd.mtd; int physblock; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_formatblock(inftl=%p," - "block=%d)\n", inftl, block); + pr_debug("INFTL: INFTL_formatblock(inftl=%p,block=%d)\n", inftl, block); memset(instr, 0, sizeof(struct erase_info)); @@ -476,30 +467,30 @@ void INFTL_dumptables(struct INFTLrecord *s) { int i; - printk("-------------------------------------------" + pr_debug("-------------------------------------------" "----------------------------------\n"); - printk("VUtable[%d] ->", s->nb_blocks); + pr_debug("VUtable[%d] ->", s->nb_blocks); for (i = 0; i < s->nb_blocks; i++) { if ((i % 8) == 0) - printk("\n%04x: ", i); - printk("%04x ", s->VUtable[i]); + pr_debug("\n%04x: ", i); + pr_debug("%04x ", s->VUtable[i]); } - printk("\n-------------------------------------------" + pr_debug("\n-------------------------------------------" "----------------------------------\n"); - printk("PUtable[%d-%d=%d] ->", s->firstEUN, s->lastEUN, s->nb_blocks); + pr_debug("PUtable[%d-%d=%d] ->", s->firstEUN, s->lastEUN, s->nb_blocks); for (i = 0; i <= s->lastEUN; i++) { if ((i % 8) == 0) - printk("\n%04x: ", i); - printk("%04x ", s->PUtable[i]); + pr_debug("\n%04x: ", i); + pr_debug("%04x ", s->PUtable[i]); } - printk("\n-------------------------------------------" + pr_debug("\n-------------------------------------------" "----------------------------------\n"); - printk("INFTL ->\n" + pr_debug("INFTL ->\n" " EraseSize = %d\n" " h/s/c = %d/%d/%d\n" " numvunits = %d\n" @@ -513,7 +504,7 @@ void INFTL_dumptables(struct INFTLrecord *s) s->numvunits, s->firstEUN, s->lastEUN, s->numfreeEUNs, s->LastFreeEUN, s->nb_blocks, s->nb_boot_blocks); - printk("\n-------------------------------------------" + pr_debug("\n-------------------------------------------" "----------------------------------\n"); } @@ -521,25 +512,25 @@ void INFTL_dumpVUchains(struct INFTLrecord *s) { int logical, block, i; - printk("-------------------------------------------" + pr_debug("-------------------------------------------" "----------------------------------\n"); - printk("INFTL Virtual Unit Chains:\n"); + pr_debug("INFTL Virtual Unit Chains:\n"); for (logical = 0; logical < s->nb_blocks; logical++) { block = s->VUtable[logical]; if (block > s->nb_blocks) continue; - printk(" LOGICAL %d --> %d ", logical, block); + pr_debug(" LOGICAL %d --> %d ", logical, block); for (i = 0; i < s->nb_blocks; i++) { if (s->PUtable[block] == BLOCK_NIL) break; block = s->PUtable[block]; - printk("%d ", block); + pr_debug("%d ", block); } - printk("\n"); + pr_debug("\n"); } - printk("-------------------------------------------" + pr_debug("-------------------------------------------" "----------------------------------\n"); } @@ -555,7 +546,7 @@ int INFTL_mount(struct INFTLrecord *s) int i; u8 *ANACtable, ANAC; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: INFTL_mount(inftl=%p)\n", s); + pr_debug("INFTL: INFTL_mount(inftl=%p)\n", s); /* Search for INFTL MediaHeader and Spare INFTL Media Header */ if (find_boot_record(s) < 0) { @@ -585,7 +576,7 @@ int INFTL_mount(struct INFTLrecord *s) * NOTEXPLORED state. Then at the end we will try to format it and * mark it as free. 
*/ - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: pass 1, explore each unit\n"); + pr_debug("INFTL: pass 1, explore each unit\n"); for (first_block = s->firstEUN; first_block <= s->lastEUN; first_block++) { if (s->PUtable[first_block] != BLOCK_NOTEXPLORED) continue; @@ -717,17 +708,14 @@ int INFTL_mount(struct INFTLrecord *s) logical_block = BLOCK_NIL; } -#ifdef CONFIG_MTD_DEBUG_VERBOSE - if (CONFIG_MTD_DEBUG_VERBOSE >= 2) - INFTL_dumptables(s); -#endif + INFTL_dumptables(s); /* * Second pass, check for infinite loops in chains. These are * possible because we don't update the previous pointers when * we fold chains. No big deal, just fix them up in PUtable. */ - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: pass 2, validate virtual chains\n"); + pr_debug("INFTL: pass 2, validate virtual chains\n"); for (logical_block = 0; logical_block < s->numvunits; logical_block++) { block = s->VUtable[logical_block]; last_block = BLOCK_NIL; @@ -772,12 +760,8 @@ int INFTL_mount(struct INFTLrecord *s) } } -#ifdef CONFIG_MTD_DEBUG_VERBOSE - if (CONFIG_MTD_DEBUG_VERBOSE >= 2) - INFTL_dumptables(s); - if (CONFIG_MTD_DEBUG_VERBOSE >= 2) - INFTL_dumpVUchains(s); -#endif + INFTL_dumptables(s); + INFTL_dumpVUchains(s); /* * Third pass, format unreferenced blocks and init free block count. @@ -785,7 +769,7 @@ int INFTL_mount(struct INFTLrecord *s) s->numfreeEUNs = 0; s->LastFreeEUN = BLOCK_NIL; - DEBUG(MTD_DEBUG_LEVEL3, "INFTL: pass 3, format unused blocks\n"); + pr_debug("INFTL: pass 3, format unused blocks\n"); for (block = s->firstEUN; block <= s->lastEUN; block++) { if (s->PUtable[block] == BLOCK_NOTEXPLORED) { printk("INFTL: unreferenced block %d, formatting it\n", diff --git a/drivers/mtd/maps/Kconfig b/drivers/mtd/maps/Kconfig index c0c328c5b133..8e0c4bf9f7fb 100644 --- a/drivers/mtd/maps/Kconfig +++ b/drivers/mtd/maps/Kconfig @@ -41,8 +41,6 @@ config MTD_PHYSMAP_START are mapped on your particular target board. Refer to the memory map which should hopefully be in the documentation for your board. - Ignore this option if you use run-time physmap configuration - (i.e., run-time calling physmap_configure()). config MTD_PHYSMAP_LEN hex "Physical length of flash mapping" @@ -55,8 +53,6 @@ config MTD_PHYSMAP_LEN than the total amount of flash present. Refer to the memory map which should hopefully be in the documentation for your board. - Ignore this option if you use run-time physmap configuration - (i.e., run-time calling physmap_configure()). config MTD_PHYSMAP_BANKWIDTH int "Bank width in octets" @@ -67,8 +63,6 @@ config MTD_PHYSMAP_BANKWIDTH in octets. For example, if you have a data bus width of 32 bits, you would set the bus width octet value to 4. This is used internally by the CFI drivers. - Ignore this option if you use run-time physmap configuration - (i.e., run-time calling physmap_configure()). config MTD_PHYSMAP_OF tristate "Flash device in physical memory map based on OF description" @@ -260,7 +254,6 @@ config MTD_BCM963XX config MTD_LANTIQ tristate "Lantiq SoC NOR support" depends on LANTIQ - select MTD_PARTITIONS help Support for NOR flash attached to the Lantiq SoC's External Bus Unit. @@ -339,10 +332,6 @@ config MTD_SOLUTIONENGINE This enables access to the flash chips on the Hitachi SolutionEngine and similar boards. Say 'Y' if you are building a kernel for such a board. 
-config MTD_ARM_INTEGRATOR - tristate "CFI Flash device mapped on ARM Integrator/P720T" - depends on ARM && MTD_CFI - config MTD_CDB89712 tristate "Cirrus CDB89712 evaluation board mappings" depends on MTD_CFI && ARCH_CDB89712 @@ -398,13 +387,6 @@ config MTD_AUTCPU12 This enables access to the NV-RAM on autronix autcpu12 board. If you have such a board, say 'Y'. -config MTD_EDB7312 - tristate "CFI Flash device mapped on EDB7312" - depends on ARCH_EDB7312 && MTD_CFI - help - This enables access to the CFI Flash on the Cogent EDB7312 board. - If you have such a board, say 'Y' here. - config MTD_IMPA7 tristate "JEDEC Flash device mapped on impA7" depends on ARM && MTD_JEDECPROBE @@ -412,14 +394,6 @@ config MTD_IMPA7 This enables access to the NOR Flash on the impA7 board of implementa GmbH. If you have such a board, say 'Y' here. -config MTD_CEIVA - tristate "JEDEC Flash device mapped on Ceiva/Polaroid PhotoMax Digital Picture Frame" - depends on MTD_JEDECPROBE && ARCH_CEIVA - help - This enables access to the flash chips on the Ceiva/Polaroid - PhotoMax Digital Picture Frame. - If you have such a device, say 'Y'. - config MTD_H720X tristate "Hynix evaluation board mappings" depends on MTD_CFI && ( ARCH_H7201 || ARCH_H7202 ) diff --git a/drivers/mtd/maps/Makefile b/drivers/mtd/maps/Makefile index cb48b11affff..45dcb8b14f22 100644 --- a/drivers/mtd/maps/Makefile +++ b/drivers/mtd/maps/Makefile @@ -19,7 +19,6 @@ obj-$(CONFIG_MTD_CK804XROM) += ck804xrom.o obj-$(CONFIG_MTD_TSUNAMI) += tsunami_flash.o obj-$(CONFIG_MTD_PXA2XX) += pxa2xx-flash.o obj-$(CONFIG_MTD_MBX860) += mbx860.o -obj-$(CONFIG_MTD_CEIVA) += ceiva.o obj-$(CONFIG_MTD_OCTAGON) += octagon-5066.o obj-$(CONFIG_MTD_PHYSMAP) += physmap.o obj-$(CONFIG_MTD_PHYSMAP_OF) += physmap_of.o @@ -40,7 +39,6 @@ obj-$(CONFIG_MTD_DBOX2) += dbox2-flash.o obj-$(CONFIG_MTD_SOLUTIONENGINE)+= solutionengine.o obj-$(CONFIG_MTD_PCI) += pci.o obj-$(CONFIG_MTD_AUTCPU12) += autcpu12-nvram.o -obj-$(CONFIG_MTD_EDB7312) += edb7312.o obj-$(CONFIG_MTD_IMPA7) += impa7.o obj-$(CONFIG_MTD_FORTUNET) += fortunet.o obj-$(CONFIG_MTD_UCLINUX) += uclinux.o diff --git a/drivers/mtd/maps/bcm963xx-flash.c b/drivers/mtd/maps/bcm963xx-flash.c index 608967fe74c6..736ca10ca9f1 100644 --- a/drivers/mtd/maps/bcm963xx-flash.c +++ b/drivers/mtd/maps/bcm963xx-flash.c @@ -21,6 +21,7 @@ #include <linux/init.h> #include <linux/kernel.h> #include <linux/slab.h> +#include <linux/module.h> #include <linux/mtd/map.h> #include <linux/mtd/mtd.h> #include <linux/mtd/partitions.h> diff --git a/drivers/mtd/maps/bfin-async-flash.c b/drivers/mtd/maps/bfin-async-flash.c index 67815eed2f00..6d6b2b5674ee 100644 --- a/drivers/mtd/maps/bfin-async-flash.c +++ b/drivers/mtd/maps/bfin-async-flash.c @@ -41,7 +41,6 @@ struct async_state { uint32_t flash_ambctl0, flash_ambctl1; uint32_t save_ambctl0, save_ambctl1; unsigned long irq_flags; - struct mtd_partition *parts; }; static void switch_to_flash(struct async_state *state) @@ -165,18 +164,8 @@ static int __devinit bfin_flash_probe(struct platform_device *pdev) return -ENXIO; } - ret = parse_mtd_partitions(state->mtd, part_probe_types, &pdata->parts, 0); - if (ret > 0) { - pr_devinit(KERN_NOTICE DRIVER_NAME ": Using commandline partition definition\n"); - mtd_device_register(state->mtd, pdata->parts, ret); - state->parts = pdata->parts; - } else if (pdata->nr_parts) { - pr_devinit(KERN_NOTICE DRIVER_NAME ": Using board partition definition\n"); - mtd_device_register(state->mtd, pdata->parts, pdata->nr_parts); - } else { - pr_devinit(KERN_NOTICE DRIVER_NAME ": 
no partition info available, registering whole flash at once\n"); - mtd_device_register(state->mtd, NULL, 0); - } + mtd_device_parse_register(state->mtd, part_probe_types, 0, + pdata->parts, pdata->nr_parts); platform_set_drvdata(pdev, state); @@ -188,7 +177,6 @@ static int __devexit bfin_flash_remove(struct platform_device *pdev) struct async_state *state = platform_get_drvdata(pdev); gpio_free(state->enet_flash_pin); mtd_device_unregister(state->mtd); - kfree(state->parts); map_destroy(state->mtd); kfree(state); return 0; diff --git a/drivers/mtd/maps/ceiva.c b/drivers/mtd/maps/ceiva.c deleted file mode 100644 index 06f9c9815720..000000000000 --- a/drivers/mtd/maps/ceiva.c +++ /dev/null @@ -1,341 +0,0 @@ -/* - * Ceiva flash memory driver. - * Copyright (C) 2002 Rob Scott <rscott@mtrob.fdns.net> - * - * Note: this driver supports jedec compatible devices. Modification - * for CFI compatible devices should be straight forward: change - * jedec_probe to cfi_probe. - * - * Based on: sa1100-flash.c, which has the following copyright: - * Flash memory access on SA11x0 based devices - * - * (C) 2000 Nicolas Pitre <nico@fluxnic.net> - * - */ - -#include <linux/module.h> -#include <linux/types.h> -#include <linux/ioport.h> -#include <linux/kernel.h> -#include <linux/init.h> -#include <linux/slab.h> - -#include <linux/mtd/mtd.h> -#include <linux/mtd/map.h> -#include <linux/mtd/partitions.h> -#include <linux/mtd/concat.h> - -#include <mach/hardware.h> -#include <asm/mach-types.h> -#include <asm/io.h> -#include <asm/sizes.h> - -/* - * This isn't complete yet, so... - */ -#define CONFIG_MTD_CEIVA_STATICMAP - -#ifdef CONFIG_MTD_CEIVA_STATICMAP -/* - * See include/linux/mtd/partitions.h for definition of the mtd_partition - * structure. - * - * Please note: - * 1. The flash size given should be the largest flash size that can - * be accommodated. - * - * 2. The bus width must defined in clps_setup_flash. - * - * The MTD layer will detect flash chip aliasing and reduce the size of - * the map accordingly. 
- * - */ - -#ifdef CONFIG_ARCH_CEIVA -/* Flash / Partition sizing */ -/* For the 28F8003, we use the block mapping to calcuate the sizes */ -#define MAX_SIZE_KiB (16 + 8 + 8 + 96 + (7*128)) -#define BOOT_PARTITION_SIZE_KiB (16) -#define PARAMS_PARTITION_SIZE_KiB (8) -#define KERNEL_PARTITION_SIZE_KiB (4*128) -/* Use both remaining portion of first flash, and all of second flash */ -#define ROOT_PARTITION_SIZE_KiB (3*128) + (8*128) - -static struct mtd_partition ceiva_partitions[] = { - { - .name = "Ceiva BOOT partition", - .size = BOOT_PARTITION_SIZE_KiB*1024, - .offset = 0, - - },{ - .name = "Ceiva parameters partition", - .size = PARAMS_PARTITION_SIZE_KiB*1024, - .offset = (16 + 8) * 1024, - },{ - .name = "Ceiva kernel partition", - .size = (KERNEL_PARTITION_SIZE_KiB)*1024, - .offset = 0x20000, - - },{ - .name = "Ceiva root filesystem partition", - .offset = MTDPART_OFS_APPEND, - .size = (ROOT_PARTITION_SIZE_KiB)*1024, - } -}; -#endif - -static int __init clps_static_partitions(struct mtd_partition **parts) -{ - int nb_parts = 0; - -#ifdef CONFIG_ARCH_CEIVA - if (machine_is_ceiva()) { - *parts = ceiva_partitions; - nb_parts = ARRAY_SIZE(ceiva_partitions); - } -#endif - return nb_parts; -} -#endif - -struct clps_info { - unsigned long base; - unsigned long size; - int width; - void *vbase; - struct map_info *map; - struct mtd_info *mtd; - struct resource *res; -}; - -#define NR_SUBMTD 4 - -static struct clps_info info[NR_SUBMTD]; - -static int __init clps_setup_mtd(struct clps_info *clps, int nr, struct mtd_info **rmtd) -{ - struct mtd_info *subdev[nr]; - struct map_info *maps; - int i, found = 0, ret = 0; - - /* - * Allocate the map_info structs in one go. - */ - maps = kzalloc(sizeof(struct map_info) * nr, GFP_KERNEL); - if (!maps) - return -ENOMEM; - /* - * Claim and then map the memory regions. - */ - for (i = 0; i < nr; i++) { - if (clps[i].base == (unsigned long)-1) - break; - - clps[i].res = request_mem_region(clps[i].base, clps[i].size, "clps flash"); - if (!clps[i].res) { - ret = -EBUSY; - break; - } - - clps[i].map = maps + i; - - clps[i].map->name = "clps flash"; - clps[i].map->phys = clps[i].base; - - clps[i].vbase = ioremap(clps[i].base, clps[i].size); - if (!clps[i].vbase) { - ret = -ENOMEM; - break; - } - - clps[i].map->virt = (void __iomem *)clps[i].vbase; - clps[i].map->bankwidth = clps[i].width; - clps[i].map->size = clps[i].size; - - simple_map_init(&clps[i].map); - - clps[i].mtd = do_map_probe("jedec_probe", clps[i].map); - if (clps[i].mtd == NULL) { - ret = -ENXIO; - break; - } - clps[i].mtd->owner = THIS_MODULE; - subdev[i] = clps[i].mtd; - - printk(KERN_INFO "clps flash: JEDEC device at 0x%08lx, %dMiB, " - "%d-bit\n", clps[i].base, clps[i].mtd->size >> 20, - clps[i].width * 8); - found += 1; - } - - /* - * ENXIO is special. It means we didn't find a chip when - * we probed. We need to tear down the mapping, free the - * resource and mark it as such. - */ - if (ret == -ENXIO) { - iounmap(clps[i].vbase); - clps[i].vbase = NULL; - release_resource(clps[i].res); - clps[i].res = NULL; - } - - /* - * If we found one device, don't bother with concat support. - * If we found multiple devices, use concat if we have it - * available, otherwise fail. - */ - if (ret == 0 || ret == -ENXIO) { - if (found == 1) { - *rmtd = subdev[0]; - ret = 0; - } else if (found > 1) { - /* - * We detected multiple devices. Concatenate - * them together. - */ - *rmtd = mtd_concat_create(subdev, found, - "clps flash"); - if (*rmtd == NULL) - ret = -ENXIO; - } - } - - /* - * If we failed, clean up. 
- */ - if (ret) { - do { - if (clps[i].mtd) - map_destroy(clps[i].mtd); - if (clps[i].vbase) - iounmap(clps[i].vbase); - if (clps[i].res) - release_resource(clps[i].res); - } while (i--); - - kfree(maps); - } - - return ret; -} - -static void __exit clps_destroy_mtd(struct clps_info *clps, struct mtd_info *mtd) -{ - int i; - - mtd_device_unregister(mtd); - - if (mtd != clps[0].mtd) - mtd_concat_destroy(mtd); - - for (i = NR_SUBMTD; i >= 0; i--) { - if (clps[i].mtd) - map_destroy(clps[i].mtd); - if (clps[i].vbase) - iounmap(clps[i].vbase); - if (clps[i].res) - release_resource(clps[i].res); - } - kfree(clps[0].map); -} - -/* - * We define the memory space, size, and width for the flash memory - * space here. - */ - -static int __init clps_setup_flash(void) -{ - int nr = 0; - -#ifdef CONFIG_ARCH_CEIVA - if (machine_is_ceiva()) { - info[0].base = CS0_PHYS_BASE; - info[0].size = SZ_32M; - info[0].width = CEIVA_FLASH_WIDTH; - info[1].base = CS1_PHYS_BASE; - info[1].size = SZ_32M; - info[1].width = CEIVA_FLASH_WIDTH; - nr = 2; - } -#endif - return nr; -} - -static struct mtd_partition *parsed_parts; -static const char *probes[] = { "cmdlinepart", "RedBoot", NULL }; - -static void __init clps_locate_partitions(struct mtd_info *mtd) -{ - const char *part_type = NULL; - int nr_parts = 0; - do { - /* - * Partition selection stuff. - */ - nr_parts = parse_mtd_partitions(mtd, probes, &parsed_parts, 0); - if (nr_parts > 0) { - part_type = "command line"; - break; - } -#ifdef CONFIG_MTD_CEIVA_STATICMAP - nr_parts = clps_static_partitions(&parsed_parts); - if (nr_parts > 0) { - part_type = "static"; - break; - } - printk("found: %d partitions\n", nr_parts); -#endif - } while (0); - - if (nr_parts == 0) { - printk(KERN_NOTICE "clps flash: no partition info " - "available, registering whole flash\n"); - mtd_device_register(mtd, NULL, 0); - } else { - printk(KERN_NOTICE "clps flash: using %s partition " - "definition\n", part_type); - mtd_device_register(mtd, parsed_parts, nr_parts); - } - - /* Always succeeds. 
*/ -} - -static void __exit clps_destroy_partitions(void) -{ - kfree(parsed_parts); -} - -static struct mtd_info *mymtd; - -static int __init clps_mtd_init(void) -{ - int ret; - int nr; - - nr = clps_setup_flash(); - if (nr < 0) - return nr; - - ret = clps_setup_mtd(info, nr, &mymtd); - if (ret) - return ret; - - clps_locate_partitions(mymtd); - - return 0; -} - -static void __exit clps_mtd_cleanup(void) -{ - clps_destroy_mtd(info, mymtd); - clps_destroy_partitions(); -} - -module_init(clps_mtd_init); -module_exit(clps_mtd_cleanup); - -MODULE_AUTHOR("Rob Scott"); -MODULE_DESCRIPTION("Cirrus Logic JEDEC map driver"); -MODULE_LICENSE("GPL"); diff --git a/drivers/mtd/maps/dc21285.c b/drivers/mtd/maps/dc21285.c index 7a9e1989c977..f43b365b848c 100644 --- a/drivers/mtd/maps/dc21285.c +++ b/drivers/mtd/maps/dc21285.c @@ -145,14 +145,10 @@ static struct map_info dc21285_map = { /* Partition stuff */ -static struct mtd_partition *dc21285_parts; static const char *probes[] = { "RedBoot", "cmdlinepart", NULL }; static int __init init_dc21285(void) { - - int nrparts; - /* Determine bankwidth */ switch (*CSR_SA110_CNTL & (3<<14)) { case SA110_CNTL_ROMWIDTH_8: @@ -200,8 +196,7 @@ static int __init init_dc21285(void) dc21285_mtd->owner = THIS_MODULE; - nrparts = parse_mtd_partitions(dc21285_mtd, probes, &dc21285_parts, 0); - mtd_device_register(dc21285_mtd, dc21285_parts, nrparts); + mtd_device_parse_register(dc21285_mtd, probes, 0, NULL, 0); if(machine_is_ebsa285()) { /* @@ -224,8 +219,6 @@ static int __init init_dc21285(void) static void __exit cleanup_dc21285(void) { mtd_device_unregister(dc21285_mtd); - if (dc21285_parts) - kfree(dc21285_parts); map_destroy(dc21285_mtd); iounmap(dc21285_map.virt); } diff --git a/drivers/mtd/maps/edb7312.c b/drivers/mtd/maps/edb7312.c deleted file mode 100644 index fe42a212bb3e..000000000000 --- a/drivers/mtd/maps/edb7312.c +++ /dev/null @@ -1,134 +0,0 @@ -/* - * Handle mapping of the NOR flash on Cogent EDB7312 boards - * - * Copyright 2002 SYSGO Real-Time Solutions GmbH - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- */ - -#include <linux/module.h> -#include <linux/types.h> -#include <linux/kernel.h> -#include <linux/init.h> -#include <asm/io.h> -#include <linux/mtd/mtd.h> -#include <linux/mtd/map.h> -#include <linux/mtd/partitions.h> - -#define WINDOW_ADDR 0x00000000 /* physical properties of flash */ -#define WINDOW_SIZE 0x01000000 -#define BUSWIDTH 2 -#define FLASH_BLOCKSIZE_MAIN 0x20000 -#define FLASH_NUMBLOCKS_MAIN 128 -/* can be "cfi_probe", "jedec_probe", "map_rom", NULL }; */ -#define PROBETYPES { "cfi_probe", NULL } - -#define MSG_PREFIX "EDB7312-NOR:" /* prefix for our printk()'s */ -#define MTDID "edb7312-nor" /* for mtdparts= partitioning */ - -static struct mtd_info *mymtd; - -struct map_info edb7312nor_map = { - .name = "NOR flash on EDB7312", - .size = WINDOW_SIZE, - .bankwidth = BUSWIDTH, - .phys = WINDOW_ADDR, -}; - -/* - * MTD partitioning stuff - */ -static struct mtd_partition static_partitions[3] = -{ - { - .name = "ARMboot", - .size = 0x40000, - .offset = 0 - }, - { - .name = "Kernel", - .size = 0x200000, - .offset = 0x40000 - }, - { - .name = "RootFS", - .size = 0xDC0000, - .offset = 0x240000 - }, -}; - -static const char *probes[] = { "RedBoot", "cmdlinepart", NULL }; - -static int mtd_parts_nb = 0; -static struct mtd_partition *mtd_parts = 0; - -static int __init init_edb7312nor(void) -{ - static const char *rom_probe_types[] = PROBETYPES; - const char **type; - const char *part_type = 0; - - printk(KERN_NOTICE MSG_PREFIX "0x%08x at 0x%08x\n", - WINDOW_SIZE, WINDOW_ADDR); - edb7312nor_map.virt = ioremap(WINDOW_ADDR, WINDOW_SIZE); - - if (!edb7312nor_map.virt) { - printk(MSG_PREFIX "failed to ioremap\n"); - return -EIO; - } - - simple_map_init(&edb7312nor_map); - - mymtd = 0; - type = rom_probe_types; - for(; !mymtd && *type; type++) { - mymtd = do_map_probe(*type, &edb7312nor_map); - } - if (mymtd) { - mymtd->owner = THIS_MODULE; - - mtd_parts_nb = parse_mtd_partitions(mymtd, probes, &mtd_parts, MTDID); - if (mtd_parts_nb > 0) - part_type = "detected"; - - if (mtd_parts_nb == 0) { - mtd_parts = static_partitions; - mtd_parts_nb = ARRAY_SIZE(static_partitions); - part_type = "static"; - } - - if (mtd_parts_nb == 0) - printk(KERN_NOTICE MSG_PREFIX "no partition info available\n"); - else - printk(KERN_NOTICE MSG_PREFIX - "using %s partition definition\n", part_type); - /* Register the whole device first. 
*/ - mtd_device_register(mymtd, NULL, 0); - mtd_device_register(mymtd, mtd_parts, mtd_parts_nb); - return 0; - } - - iounmap((void *)edb7312nor_map.virt); - return -ENXIO; -} - -static void __exit cleanup_edb7312nor(void) -{ - if (mymtd) { - mtd_device_unregister(mymtd); - map_destroy(mymtd); - } - if (edb7312nor_map.virt) { - iounmap((void *)edb7312nor_map.virt); - edb7312nor_map.virt = 0; - } -} - -module_init(init_edb7312nor); -module_exit(cleanup_edb7312nor); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Marius Groeger <mag@sysgo.de>"); -MODULE_DESCRIPTION("Generic configurable MTD map driver"); diff --git a/drivers/mtd/maps/gpio-addr-flash.c b/drivers/mtd/maps/gpio-addr-flash.c index 7568c5f8b8ae..1ec66f031c51 100644 --- a/drivers/mtd/maps/gpio-addr-flash.c +++ b/drivers/mtd/maps/gpio-addr-flash.c @@ -187,7 +187,6 @@ static const char *part_probe_types[] = { "cmdlinepart", "RedBoot", NULL }; */ static int __devinit gpio_flash_probe(struct platform_device *pdev) { - int nr_parts; size_t i, arr_size; struct physmap_flash_data *pdata; struct resource *memory; @@ -252,20 +251,9 @@ static int __devinit gpio_flash_probe(struct platform_device *pdev) return -ENXIO; } - nr_parts = parse_mtd_partitions(state->mtd, part_probe_types, - &pdata->parts, 0); - if (nr_parts > 0) { - pr_devinit(KERN_NOTICE PFX "Using commandline partition definition\n"); - kfree(pdata->parts); - } else if (pdata->nr_parts) { - pr_devinit(KERN_NOTICE PFX "Using board partition definition\n"); - nr_parts = pdata->nr_parts; - } else { - pr_devinit(KERN_NOTICE PFX "no partition info available, registering whole flash at once\n"); - nr_parts = 0; - } - mtd_device_register(state->mtd, pdata->parts, nr_parts); + mtd_device_parse_register(state->mtd, part_probe_types, 0, + pdata->parts, pdata->nr_parts); return 0; } diff --git a/drivers/mtd/maps/h720x-flash.c b/drivers/mtd/maps/h720x-flash.c index 7f035860a36b..49c14187fc66 100644 --- a/drivers/mtd/maps/h720x-flash.c +++ b/drivers/mtd/maps/h720x-flash.c @@ -58,18 +58,11 @@ static struct mtd_partition h720x_partitions[] = { #define NUM_PARTITIONS ARRAY_SIZE(h720x_partitions) -static int nr_mtd_parts; -static struct mtd_partition *mtd_parts; -static const char *probes[] = { "cmdlinepart", NULL }; - /* * Initialize FLASH support */ static int __init h720x_mtd_init(void) { - - char *part_type = NULL; - h720x_map.virt = ioremap(h720x_map.phys, h720x_map.size); if (!h720x_map.virt) { @@ -92,16 +85,8 @@ static int __init h720x_mtd_init(void) if (mymtd) { mymtd->owner = THIS_MODULE; - nr_mtd_parts = parse_mtd_partitions(mymtd, probes, &mtd_parts, 0); - if (nr_mtd_parts > 0) - part_type = "command line"; - if (nr_mtd_parts <= 0) { - mtd_parts = h720x_partitions; - nr_mtd_parts = NUM_PARTITIONS; - part_type = "builtin"; - } - printk(KERN_INFO "Using %s partition table\n", part_type); - mtd_device_register(mymtd, mtd_parts, nr_mtd_parts); + mtd_device_parse_register(mymtd, NULL, 0, + h720x_partitions, NUM_PARTITIONS); return 0; } @@ -120,10 +105,6 @@ static void __exit h720x_mtd_cleanup(void) map_destroy(mymtd); } - /* Free partition info, if commandline partition was used */ - if (mtd_parts && (mtd_parts != h720x_partitions)) - kfree (mtd_parts); - if (h720x_map.virt) { iounmap((void *)h720x_map.virt); h720x_map.virt = 0; diff --git a/drivers/mtd/maps/impa7.c b/drivers/mtd/maps/impa7.c index 404a50cbafa0..f47aedb24366 100644 --- a/drivers/mtd/maps/impa7.c +++ b/drivers/mtd/maps/impa7.c @@ -49,7 +49,7 @@ static struct map_info impa7_map[NUM_FLASHBANKS] = { /* * MTD partitioning stuff */ 
-static struct mtd_partition static_partitions[] = +static struct mtd_partition partitions[] = { { .name = "FileSystem", @@ -58,16 +58,10 @@ static struct mtd_partition static_partitions[] = }, }; -static int mtd_parts_nb[NUM_FLASHBANKS]; -static struct mtd_partition *mtd_parts[NUM_FLASHBANKS]; - -static const char *probes[] = { "cmdlinepart", NULL }; - static int __init init_impa7(void) { static const char *rom_probe_types[] = PROBETYPES; const char **type; - const char *part_type = 0; int i; static struct { u_long addr; u_long size; } pt[NUM_FLASHBANKS] = { { WINDOW_ADDR0, WINDOW_SIZE0 }, @@ -97,23 +91,9 @@ static int __init init_impa7(void) if (impa7_mtd[i]) { impa7_mtd[i]->owner = THIS_MODULE; devicesfound++; - mtd_parts_nb[i] = parse_mtd_partitions(impa7_mtd[i], - probes, - &mtd_parts[i], - 0); - if (mtd_parts_nb[i] > 0) { - part_type = "command line"; - } else { - mtd_parts[i] = static_partitions; - mtd_parts_nb[i] = ARRAY_SIZE(static_partitions); - part_type = "static"; - } - - printk(KERN_NOTICE MSG_PREFIX - "using %s partition definition\n", - part_type); - mtd_device_register(impa7_mtd[i], - mtd_parts[i], mtd_parts_nb[i]); + mtd_device_parse_register(impa7_mtd[i], NULL, 0, + partitions, + ARRAY_SIZE(partitions)); } else iounmap((void *)impa7_map[i].virt); diff --git a/drivers/mtd/maps/intel_vr_nor.c b/drivers/mtd/maps/intel_vr_nor.c index d2f47be8754b..08c239604ee4 100644 --- a/drivers/mtd/maps/intel_vr_nor.c +++ b/drivers/mtd/maps/intel_vr_nor.c @@ -44,7 +44,6 @@ struct vr_nor_mtd { void __iomem *csr_base; struct map_info map; struct mtd_info *info; - int nr_parts; struct pci_dev *dev; }; @@ -71,13 +70,9 @@ static void __devexit vr_nor_destroy_partitions(struct vr_nor_mtd *p) static int __devinit vr_nor_init_partitions(struct vr_nor_mtd *p) { - struct mtd_partition *parts; - static const char *part_probes[] = { "cmdlinepart", NULL }; - /* register the flash bank */ /* partition the flash bank */ - p->nr_parts = parse_mtd_partitions(p->info, part_probes, &parts, 0); - return mtd_device_register(p->info, parts, p->nr_parts); + return mtd_device_parse_register(p->info, NULL, 0, NULL, 0); } static void __devexit vr_nor_destroy_mtd_setup(struct vr_nor_mtd *p) diff --git a/drivers/mtd/maps/ixp2000.c b/drivers/mtd/maps/ixp2000.c index 1594a802631d..437fcd2f352f 100644 --- a/drivers/mtd/maps/ixp2000.c +++ b/drivers/mtd/maps/ixp2000.c @@ -38,7 +38,6 @@ struct ixp2000_flash_info { struct mtd_info *mtd; struct map_info map; - struct mtd_partition *partitions; struct resource *res; }; @@ -125,8 +124,6 @@ static int ixp2000_flash_remove(struct platform_device *dev) if (info->map.map_priv_1) iounmap((void *) info->map.map_priv_1); - kfree(info->partitions); - if (info->res) { release_resource(info->res); kfree(info->res); @@ -229,13 +226,7 @@ static int ixp2000_flash_probe(struct platform_device *dev) } info->mtd->owner = THIS_MODULE; - err = parse_mtd_partitions(info->mtd, probes, &info->partitions, 0); - if (err > 0) { - err = mtd_device_register(info->mtd, info->partitions, err); - if(err) - dev_err(&dev->dev, "Could not parse partitions\n"); - } - + err = mtd_device_parse_register(info->mtd, probes, 0, NULL, 0); if (err) goto Error; diff --git a/drivers/mtd/maps/ixp4xx.c b/drivers/mtd/maps/ixp4xx.c index 155b21942f47..30409015a3de 100644 --- a/drivers/mtd/maps/ixp4xx.c +++ b/drivers/mtd/maps/ixp4xx.c @@ -145,7 +145,6 @@ static void ixp4xx_write16(struct map_info *map, map_word d, unsigned long adr) struct ixp4xx_flash_info { struct mtd_info *mtd; struct map_info map; - struct 
mtd_partition *partitions; struct resource *res; }; @@ -168,8 +167,6 @@ static int ixp4xx_flash_remove(struct platform_device *dev) if (info->map.virt) iounmap(info->map.virt); - kfree(info->partitions); - if (info->res) { release_resource(info->res); kfree(info->res); @@ -185,8 +182,6 @@ static int ixp4xx_flash_probe(struct platform_device *dev) { struct flash_platform_data *plat = dev->dev.platform_data; struct ixp4xx_flash_info *info; - const char *part_type = NULL; - int nr_parts = 0; int err = -1; if (!plat) @@ -252,28 +247,12 @@ static int ixp4xx_flash_probe(struct platform_device *dev) /* Use the fast version */ info->map.write = ixp4xx_write16; - nr_parts = parse_mtd_partitions(info->mtd, probes, &info->partitions, - dev->resource->start); - if (nr_parts > 0) { - part_type = "dynamic"; - } else { - info->partitions = plat->parts; - nr_parts = plat->nr_parts; - part_type = "static"; - } - if (nr_parts == 0) - printk(KERN_NOTICE "IXP4xx flash: no partition info " - "available, registering whole flash\n"); - else - printk(KERN_NOTICE "IXP4xx flash: using %s partition " - "definition\n", part_type); - - err = mtd_device_register(info->mtd, info->partitions, nr_parts); - if (err) + err = mtd_device_parse_register(info->mtd, probes, dev->resource->start, + plat->parts, plat->nr_parts); + if (err) { printk(KERN_ERR "Could not parse partitions\n"); - - if (err) goto Error; + } return 0; diff --git a/drivers/mtd/maps/lantiq-flash.c b/drivers/mtd/maps/lantiq-flash.c index 7e508969239e..4f10e27ada55 100644 --- a/drivers/mtd/maps/lantiq-flash.c +++ b/drivers/mtd/maps/lantiq-flash.c @@ -107,16 +107,12 @@ ltq_copy_to(struct map_info *map, unsigned long to, spin_unlock_irqrestore(&ebu_lock, flags); } -static const char const *part_probe_types[] = { "cmdlinepart", NULL }; - static int __init ltq_mtd_probe(struct platform_device *pdev) { struct physmap_flash_data *ltq_mtd_data = dev_get_platdata(&pdev->dev); struct ltq_mtd *ltq_mtd; - struct mtd_partition *parts; struct resource *res; - int nr_parts = 0; struct cfi_private *cfi; int err; @@ -172,17 +168,8 @@ ltq_mtd_probe(struct platform_device *pdev) cfi->addr_unlock1 ^= 1; cfi->addr_unlock2 ^= 1; - nr_parts = parse_mtd_partitions(ltq_mtd->mtd, - part_probe_types, &parts, 0); - if (nr_parts > 0) { - dev_info(&pdev->dev, - "using %d partitions from cmdline", nr_parts); - } else { - nr_parts = ltq_mtd_data->nr_parts; - parts = ltq_mtd_data->parts; - } - - err = mtd_device_register(ltq_mtd->mtd, parts, nr_parts); + err = mtd_device_parse_register(ltq_mtd->mtd, NULL, 0, + ltq_mtd_data->parts, ltq_mtd_data->nr_parts); if (err) { dev_err(&pdev->dev, "failed to add partitions\n"); goto err_destroy; diff --git a/drivers/mtd/maps/latch-addr-flash.c b/drivers/mtd/maps/latch-addr-flash.c index 5936c466e901..119baa7d7477 100644 --- a/drivers/mtd/maps/latch-addr-flash.c +++ b/drivers/mtd/maps/latch-addr-flash.c @@ -33,9 +33,6 @@ struct latch_addr_flash_info { /* cache; could be found out of res */ unsigned long win_mask; - int nr_parts; - struct mtd_partition *parts; - spinlock_t lock; }; @@ -97,8 +94,6 @@ static void lf_copy_from(struct map_info *map, void *to, static char *rom_probe_types[] = { "cfi_probe", NULL }; -static char *part_probe_types[] = { "cmdlinepart", NULL }; - static int latch_addr_flash_remove(struct platform_device *dev) { struct latch_addr_flash_info *info; @@ -112,8 +107,6 @@ static int latch_addr_flash_remove(struct platform_device *dev) latch_addr_data = dev->dev.platform_data; if (info->mtd != NULL) { - if (info->nr_parts) - 
kfree(info->parts); mtd_device_unregister(info->mtd); map_destroy(info->mtd); } @@ -206,21 +199,8 @@ static int __devinit latch_addr_flash_probe(struct platform_device *dev) } info->mtd->owner = THIS_MODULE; - err = parse_mtd_partitions(info->mtd, (const char **)part_probe_types, - &info->parts, 0); - if (err > 0) { - mtd_device_register(info->mtd, info->parts, err); - return 0; - } - if (latch_addr_data->nr_parts) { - pr_notice("Using latch-addr-flash partition information\n"); - mtd_device_register(info->mtd, - latch_addr_data->parts, - latch_addr_data->nr_parts); - return 0; - } - - mtd_device_register(info->mtd, NULL, 0); + mtd_device_parse_register(info->mtd, NULL, 0, + latch_addr_data->parts, latch_addr_data->nr_parts); return 0; iounmap: diff --git a/drivers/mtd/maps/pcmciamtd.c b/drivers/mtd/maps/pcmciamtd.c index bbe168b65c26..e8e9fec23553 100644 --- a/drivers/mtd/maps/pcmciamtd.c +++ b/drivers/mtd/maps/pcmciamtd.c @@ -22,22 +22,6 @@ #include <linux/mtd/map.h> #include <linux/mtd/mtd.h> -#ifdef CONFIG_MTD_DEBUG -static int debug = CONFIG_MTD_DEBUG_VERBOSE; -module_param(debug, int, 0); -MODULE_PARM_DESC(debug, "Set Debug Level 0=quiet, 5=noisy"); -#undef DEBUG -#define DEBUG(n, format, arg...) \ - if (n <= debug) { \ - printk(KERN_DEBUG __FILE__ ":%s(): " format "\n", __func__ , ## arg); \ - } - -#else -#undef DEBUG -#define DEBUG(n, arg...) -static const int debug = 0; -#endif - #define info(format, arg...) printk(KERN_INFO "pcmciamtd: " format "\n" , ## arg) #define DRIVER_DESC "PCMCIA Flash memory card driver" @@ -105,13 +89,13 @@ static caddr_t remap_window(struct map_info *map, unsigned long to) int ret; if (!pcmcia_dev_present(dev->p_dev)) { - DEBUG(1, "device removed"); + pr_debug("device removed\n"); return 0; } offset = to & ~(dev->win_size-1); if (offset != dev->offset) { - DEBUG(2, "Remapping window from 0x%8.8x to 0x%8.8x", + pr_debug("Remapping window from 0x%8.8x to 0x%8.8x\n", dev->offset, offset); ret = pcmcia_map_mem_page(dev->p_dev, win, offset); if (ret != 0) @@ -132,7 +116,7 @@ static map_word pcmcia_read8_remap(struct map_info *map, unsigned long ofs) return d; d.x[0] = readb(addr); - DEBUG(3, "ofs = 0x%08lx (%p) data = 0x%02lx", ofs, addr, d.x[0]); + pr_debug("ofs = 0x%08lx (%p) data = 0x%02lx\n", ofs, addr, d.x[0]); return d; } @@ -147,7 +131,7 @@ static map_word pcmcia_read16_remap(struct map_info *map, unsigned long ofs) return d; d.x[0] = readw(addr); - DEBUG(3, "ofs = 0x%08lx (%p) data = 0x%04lx", ofs, addr, d.x[0]); + pr_debug("ofs = 0x%08lx (%p) data = 0x%04lx\n", ofs, addr, d.x[0]); return d; } @@ -157,7 +141,7 @@ static void pcmcia_copy_from_remap(struct map_info *map, void *to, unsigned long struct pcmciamtd_dev *dev = (struct pcmciamtd_dev *)map->map_priv_1; unsigned long win_size = dev->win_size; - DEBUG(3, "to = %p from = %lu len = %zd", to, from, len); + pr_debug("to = %p from = %lu len = %zd\n", to, from, len); while(len) { int toread = win_size - (from & (win_size-1)); caddr_t addr; @@ -169,7 +153,7 @@ static void pcmcia_copy_from_remap(struct map_info *map, void *to, unsigned long if(!addr) return; - DEBUG(4, "memcpy from %p to %p len = %d", addr, to, toread); + pr_debug("memcpy from %p to %p len = %d\n", addr, to, toread); memcpy_fromio(to, addr, toread); len -= toread; to += toread; @@ -185,7 +169,7 @@ static void pcmcia_write8_remap(struct map_info *map, map_word d, unsigned long if(!addr) return; - DEBUG(3, "adr = 0x%08lx (%p) data = 0x%02lx", adr, addr, d.x[0]); + pr_debug("adr = 0x%08lx (%p) data = 0x%02lx\n", adr, addr, d.x[0]); 
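Most of the map-driver conversions above (bfin-async-flash, dc21285, gpio-addr-flash, h720x, impa7, intel_vr_nor, ixp2000, ixp4xx, lantiq, latch-addr-flash) replace the open-coded parse_mtd_partitions() + mtd_device_register() sequence, plus the driver-side bookkeeping needed to kfree() the parsed table on removal, with a single mtd_device_parse_register() call. A rough before/after sketch, with hypothetical function names and the third argument passed as 0 exactly as in the calls above:

#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>

static const char *probes[] = { "cmdlinepart", "RedBoot", NULL };

/* Old pattern: parse, fall back to board-supplied partitions, register,
 * and keep the parsed table around so remove() can kfree() it.
 */
static int old_register(struct mtd_info *mtd, struct mtd_partition *defparts,
			int nr_defparts, struct mtd_partition **saved)
{
	int nr = parse_mtd_partitions(mtd, probes, saved, 0);

	if (nr > 0)
		return mtd_device_register(mtd, *saved, nr);
	return mtd_device_register(mtd, defparts, nr_defparts);
}

/* New pattern: the MTD core runs the parsers, falls back to the default
 * partitions, and owns whatever table the parsers allocate.
 */
static int new_register(struct mtd_info *mtd, struct mtd_partition *defparts,
			int nr_defparts)
{
	return mtd_device_parse_register(mtd, probes, 0,
					 defparts, nr_defparts);
}
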
writeb(d.x[0], addr); } @@ -196,7 +180,7 @@ static void pcmcia_write16_remap(struct map_info *map, map_word d, unsigned long if(!addr) return; - DEBUG(3, "adr = 0x%08lx (%p) data = 0x%04lx", adr, addr, d.x[0]); + pr_debug("adr = 0x%08lx (%p) data = 0x%04lx\n", adr, addr, d.x[0]); writew(d.x[0], addr); } @@ -206,7 +190,7 @@ static void pcmcia_copy_to_remap(struct map_info *map, unsigned long to, const v struct pcmciamtd_dev *dev = (struct pcmciamtd_dev *)map->map_priv_1; unsigned long win_size = dev->win_size; - DEBUG(3, "to = %lu from = %p len = %zd", to, from, len); + pr_debug("to = %lu from = %p len = %zd\n", to, from, len); while(len) { int towrite = win_size - (to & (win_size-1)); caddr_t addr; @@ -218,7 +202,7 @@ static void pcmcia_copy_to_remap(struct map_info *map, unsigned long to, const v if(!addr) return; - DEBUG(4, "memcpy from %p to %p len = %d", from, addr, towrite); + pr_debug("memcpy from %p to %p len = %d\n", from, addr, towrite); memcpy_toio(addr, from, towrite); len -= towrite; to += towrite; @@ -240,7 +224,7 @@ static map_word pcmcia_read8(struct map_info *map, unsigned long ofs) return d; d.x[0] = readb(win_base + ofs); - DEBUG(3, "ofs = 0x%08lx (%p) data = 0x%02lx", + pr_debug("ofs = 0x%08lx (%p) data = 0x%02lx\n", ofs, win_base + ofs, d.x[0]); return d; } @@ -255,7 +239,7 @@ static map_word pcmcia_read16(struct map_info *map, unsigned long ofs) return d; d.x[0] = readw(win_base + ofs); - DEBUG(3, "ofs = 0x%08lx (%p) data = 0x%04lx", + pr_debug("ofs = 0x%08lx (%p) data = 0x%04lx\n", ofs, win_base + ofs, d.x[0]); return d; } @@ -268,7 +252,7 @@ static void pcmcia_copy_from(struct map_info *map, void *to, unsigned long from, if(DEV_REMOVED(map)) return; - DEBUG(3, "to = %p from = %lu len = %zd", to, from, len); + pr_debug("to = %p from = %lu len = %zd\n", to, from, len); memcpy_fromio(to, win_base + from, len); } @@ -280,7 +264,7 @@ static void pcmcia_write8(struct map_info *map, map_word d, unsigned long adr) if(DEV_REMOVED(map)) return; - DEBUG(3, "adr = 0x%08lx (%p) data = 0x%02lx", + pr_debug("adr = 0x%08lx (%p) data = 0x%02lx\n", adr, win_base + adr, d.x[0]); writeb(d.x[0], win_base + adr); } @@ -293,7 +277,7 @@ static void pcmcia_write16(struct map_info *map, map_word d, unsigned long adr) if(DEV_REMOVED(map)) return; - DEBUG(3, "adr = 0x%08lx (%p) data = 0x%04lx", + pr_debug("adr = 0x%08lx (%p) data = 0x%04lx\n", adr, win_base + adr, d.x[0]); writew(d.x[0], win_base + adr); } @@ -306,7 +290,7 @@ static void pcmcia_copy_to(struct map_info *map, unsigned long to, const void *f if(DEV_REMOVED(map)) return; - DEBUG(3, "to = %lu from = %p len = %zd", to, from, len); + pr_debug("to = %lu from = %p len = %zd\n", to, from, len); memcpy_toio(win_base + to, from, len); } @@ -316,7 +300,7 @@ static void pcmciamtd_set_vpp(struct map_info *map, int on) struct pcmciamtd_dev *dev = (struct pcmciamtd_dev *)map->map_priv_1; struct pcmcia_device *link = dev->p_dev; - DEBUG(2, "dev = %p on = %d vpp = %d\n", dev, on, dev->vpp); + pr_debug("dev = %p on = %d vpp = %d\n\n", dev, on, dev->vpp); pcmcia_fixup_vpp(link, on ? 
dev->vpp : 0); } @@ -325,7 +309,7 @@ static void pcmciamtd_release(struct pcmcia_device *link) { struct pcmciamtd_dev *dev = link->priv; - DEBUG(3, "link = 0x%p", link); + pr_debug("link = 0x%p\n", link); if (link->resource[2]->end) { if(dev->win_base) { @@ -337,7 +321,6 @@ static void pcmciamtd_release(struct pcmcia_device *link) } -#ifdef CONFIG_MTD_DEBUG static int pcmciamtd_cistpl_format(struct pcmcia_device *p_dev, tuple_t *tuple, void *priv_data) @@ -347,7 +330,7 @@ static int pcmciamtd_cistpl_format(struct pcmcia_device *p_dev, if (!pcmcia_parse_tuple(tuple, &parse)) { cistpl_format_t *t = &parse.format; (void)t; /* Shut up, gcc */ - DEBUG(2, "Format type: %u, Error Detection: %u, offset = %u, length =%u", + pr_debug("Format type: %u, Error Detection: %u, offset = %u, length =%u\n", t->type, t->edc, t->offset, t->length); } return -ENOSPC; @@ -363,12 +346,11 @@ static int pcmciamtd_cistpl_jedec(struct pcmcia_device *p_dev, if (!pcmcia_parse_tuple(tuple, &parse)) { cistpl_jedec_t *t = &parse.jedec; for (i = 0; i < t->nid; i++) - DEBUG(2, "JEDEC: 0x%02x 0x%02x", + pr_debug("JEDEC: 0x%02x 0x%02x\n", t->id[i].mfr, t->id[i].info); } return -ENOSPC; } -#endif static int pcmciamtd_cistpl_device(struct pcmcia_device *p_dev, tuple_t *tuple, @@ -382,14 +364,14 @@ static int pcmciamtd_cistpl_device(struct pcmcia_device *p_dev, if (pcmcia_parse_tuple(tuple, &parse)) return -EINVAL; - DEBUG(2, "Common memory:"); + pr_debug("Common memory:\n"); dev->pcmcia_map.size = t->dev[0].size; /* from here on: DEBUG only */ for (i = 0; i < t->ndev; i++) { - DEBUG(2, "Region %d, type = %u", i, t->dev[i].type); - DEBUG(2, "Region %d, wp = %u", i, t->dev[i].wp); - DEBUG(2, "Region %d, speed = %u ns", i, t->dev[i].speed); - DEBUG(2, "Region %d, size = %u bytes", i, t->dev[i].size); + pr_debug("Region %d, type = %u\n", i, t->dev[i].type); + pr_debug("Region %d, wp = %u\n", i, t->dev[i].wp); + pr_debug("Region %d, speed = %u ns\n", i, t->dev[i].speed); + pr_debug("Region %d, size = %u bytes\n", i, t->dev[i].size); } return 0; } @@ -409,12 +391,12 @@ static int pcmciamtd_cistpl_geo(struct pcmcia_device *p_dev, dev->pcmcia_map.bankwidth = t->geo[0].buswidth; /* from here on: DEBUG only */ for (i = 0; i < t->ngeo; i++) { - DEBUG(2, "region: %d bankwidth = %u", i, t->geo[i].buswidth); - DEBUG(2, "region: %d erase_block = %u", i, t->geo[i].erase_block); - DEBUG(2, "region: %d read_block = %u", i, t->geo[i].read_block); - DEBUG(2, "region: %d write_block = %u", i, t->geo[i].write_block); - DEBUG(2, "region: %d partition = %u", i, t->geo[i].partition); - DEBUG(2, "region: %d interleave = %u", i, t->geo[i].interleave); + pr_debug("region: %d bankwidth = %u\n", i, t->geo[i].buswidth); + pr_debug("region: %d erase_block = %u\n", i, t->geo[i].erase_block); + pr_debug("region: %d read_block = %u\n", i, t->geo[i].read_block); + pr_debug("region: %d write_block = %u\n", i, t->geo[i].write_block); + pr_debug("region: %d partition = %u\n", i, t->geo[i].partition); + pr_debug("region: %d interleave = %u\n", i, t->geo[i].interleave); } return 0; } @@ -432,13 +414,11 @@ static void card_settings(struct pcmciamtd_dev *dev, struct pcmcia_device *p_dev if (p_dev->prod_id[i]) strcat(dev->mtd_name, p_dev->prod_id[i]); } - DEBUG(2, "Found name: %s", dev->mtd_name); + pr_debug("Found name: %s\n", dev->mtd_name); } -#ifdef CONFIG_MTD_DEBUG pcmcia_loop_tuple(p_dev, CISTPL_FORMAT, pcmciamtd_cistpl_format, NULL); pcmcia_loop_tuple(p_dev, CISTPL_JEDEC_C, pcmciamtd_cistpl_jedec, NULL); -#endif pcmcia_loop_tuple(p_dev, CISTPL_DEVICE, 
pcmciamtd_cistpl_device, dev); pcmcia_loop_tuple(p_dev, CISTPL_DEVICE_GEO, pcmciamtd_cistpl_geo, dev); @@ -450,12 +430,12 @@ static void card_settings(struct pcmciamtd_dev *dev, struct pcmcia_device *p_dev if(force_size) { dev->pcmcia_map.size = force_size << 20; - DEBUG(2, "size forced to %dM", force_size); + pr_debug("size forced to %dM\n", force_size); } if(bankwidth) { dev->pcmcia_map.bankwidth = bankwidth; - DEBUG(2, "bankwidth forced to %d", bankwidth); + pr_debug("bankwidth forced to %d\n", bankwidth); } dev->pcmcia_map.name = dev->mtd_name; @@ -464,7 +444,7 @@ static void card_settings(struct pcmciamtd_dev *dev, struct pcmcia_device *p_dev *new_name = 1; } - DEBUG(1, "Device: Size: %lu Width:%d Name: %s", + pr_debug("Device: Size: %lu Width:%d Name: %s\n", dev->pcmcia_map.size, dev->pcmcia_map.bankwidth << 3, dev->mtd_name); } @@ -479,7 +459,7 @@ static int pcmciamtd_config(struct pcmcia_device *link) static char *probes[] = { "jedec_probe", "cfi_probe" }; int new_name = 0; - DEBUG(3, "link=0x%p", link); + pr_debug("link=0x%p\n", link); card_settings(dev, link, &new_name); @@ -512,11 +492,11 @@ static int pcmciamtd_config(struct pcmcia_device *link) do { int ret; - DEBUG(2, "requesting window with size = %luKiB memspeed = %d", + pr_debug("requesting window with size = %luKiB memspeed = %d\n", (unsigned long) resource_size(link->resource[2]) >> 10, mem_speed); ret = pcmcia_request_window(link, link->resource[2], mem_speed); - DEBUG(2, "ret = %d dev->win_size = %d", ret, dev->win_size); + pr_debug("ret = %d dev->win_size = %d\n", ret, dev->win_size); if(ret) { j++; link->resource[2]->start = 0; @@ -524,21 +504,21 @@ static int pcmciamtd_config(struct pcmcia_device *link) force_size << 20 : MAX_PCMCIA_ADDR; link->resource[2]->end >>= j; } else { - DEBUG(2, "Got window of size %luKiB", (unsigned long) + pr_debug("Got window of size %luKiB\n", (unsigned long) resource_size(link->resource[2]) >> 10); dev->win_size = resource_size(link->resource[2]); break; } } while (link->resource[2]->end >= 0x1000); - DEBUG(2, "dev->win_size = %d", dev->win_size); + pr_debug("dev->win_size = %d\n", dev->win_size); if(!dev->win_size) { dev_err(&dev->p_dev->dev, "Cannot allocate memory window\n"); pcmciamtd_release(link); return -ENODEV; } - DEBUG(1, "Allocated a window of %dKiB", dev->win_size >> 10); + pr_debug("Allocated a window of %dKiB\n", dev->win_size >> 10); /* Get write protect status */ dev->win_base = ioremap(link->resource[2]->start, @@ -549,7 +529,7 @@ static int pcmciamtd_config(struct pcmcia_device *link) pcmciamtd_release(link); return -ENODEV; } - DEBUG(1, "mapped window dev = %p @ %pR, base = %p", + pr_debug("mapped window dev = %p @ %pR, base = %p\n", dev, link->resource[2], dev->win_base); dev->offset = 0; @@ -564,7 +544,7 @@ static int pcmciamtd_config(struct pcmcia_device *link) } link->config_index = 0; - DEBUG(2, "Setting Configuration"); + pr_debug("Setting Configuration\n"); ret = pcmcia_enable_device(link); if (ret != 0) { if (dev->win_base) { @@ -580,17 +560,17 @@ static int pcmciamtd_config(struct pcmcia_device *link) mtd = do_map_probe("map_rom", &dev->pcmcia_map); } else { for(i = 0; i < ARRAY_SIZE(probes); i++) { - DEBUG(1, "Trying %s", probes[i]); + pr_debug("Trying %s\n", probes[i]); mtd = do_map_probe(probes[i], &dev->pcmcia_map); if(mtd) break; - DEBUG(1, "FAILED: %s", probes[i]); + pr_debug("FAILED: %s\n", probes[i]); } } if(!mtd) { - DEBUG(1, "Can not find an MTD"); + pr_debug("Can not find an MTD\n"); pcmciamtd_release(link); return -ENODEV; } @@ -617,7 +597,7 @@ 
static int pcmciamtd_config(struct pcmcia_device *link) /* If the memory found is fits completely into the mapped PCMCIA window, use the faster non-remapping read/write functions */ if(mtd->size <= dev->win_size) { - DEBUG(1, "Using non remapping memory functions"); + pr_debug("Using non remapping memory functions\n"); dev->pcmcia_map.map_priv_2 = (unsigned long)dev->win_base; if (dev->pcmcia_map.bankwidth == 1) { dev->pcmcia_map.read = pcmcia_read8; @@ -645,7 +625,7 @@ static int pcmciamtd_config(struct pcmcia_device *link) static int pcmciamtd_suspend(struct pcmcia_device *dev) { - DEBUG(2, "EVENT_PM_RESUME"); + pr_debug("EVENT_PM_RESUME\n"); /* get_lock(link); */ @@ -654,7 +634,7 @@ static int pcmciamtd_suspend(struct pcmcia_device *dev) static int pcmciamtd_resume(struct pcmcia_device *dev) { - DEBUG(2, "EVENT_PM_SUSPEND"); + pr_debug("EVENT_PM_SUSPEND\n"); /* free_lock(link); */ @@ -666,7 +646,7 @@ static void pcmciamtd_detach(struct pcmcia_device *link) { struct pcmciamtd_dev *dev = link->priv; - DEBUG(3, "link=0x%p", link); + pr_debug("link=0x%p\n", link); if(dev->mtd_info) { mtd_device_unregister(dev->mtd_info); @@ -686,7 +666,7 @@ static int pcmciamtd_probe(struct pcmcia_device *link) /* Create new memory card device */ dev = kzalloc(sizeof(*dev), GFP_KERNEL); if (!dev) return -ENOMEM; - DEBUG(1, "dev=0x%p", dev); + pr_debug("dev=0x%p\n", dev); dev->p_dev = link; link->priv = dev; @@ -755,7 +735,7 @@ static int __init init_pcmciamtd(void) static void __exit exit_pcmciamtd(void) { - DEBUG(1, DRIVER_DESC " unloading"); + pr_debug(DRIVER_DESC " unloading"); pcmcia_unregister_driver(&pcmciamtd_driver); } diff --git a/drivers/mtd/maps/physmap.c b/drivers/mtd/maps/physmap.c index f64cee4a3bfb..66e8200079c2 100644 --- a/drivers/mtd/maps/physmap.c +++ b/drivers/mtd/maps/physmap.c @@ -27,8 +27,6 @@ struct physmap_flash_info { struct mtd_info *mtd[MAX_RESOURCES]; struct mtd_info *cmtd; struct map_info map[MAX_RESOURCES]; - int nr_parts; - struct mtd_partition *parts; }; static int physmap_flash_remove(struct platform_device *dev) @@ -46,8 +44,6 @@ static int physmap_flash_remove(struct platform_device *dev) if (info->cmtd) { mtd_device_unregister(info->cmtd); - if (info->nr_parts) - kfree(info->parts); if (info->cmtd != info->mtd[0]) mtd_concat_destroy(info->cmtd); } @@ -175,23 +171,8 @@ static int physmap_flash_probe(struct platform_device *dev) if (err) goto err_out; - err = parse_mtd_partitions(info->cmtd, part_probe_types, - &info->parts, 0); - if (err > 0) { - mtd_device_register(info->cmtd, info->parts, err); - info->nr_parts = err; - return 0; - } - - if (physmap_data->nr_parts) { - printk(KERN_NOTICE "Using physmap partition information\n"); - mtd_device_register(info->cmtd, physmap_data->parts, - physmap_data->nr_parts); - return 0; - } - - mtd_device_register(info->cmtd, NULL, 0); - + mtd_device_parse_register(info->cmtd, part_probe_types, 0, + physmap_data->parts, physmap_data->nr_parts); return 0; err_out: @@ -245,21 +226,6 @@ static struct platform_device physmap_flash = { .num_resources = 1, .resource = &physmap_flash_resource, }; - -void physmap_configure(unsigned long addr, unsigned long size, - int bankwidth, void (*set_vpp)(struct map_info *, int)) -{ - physmap_flash_resource.start = addr; - physmap_flash_resource.end = addr + size - 1; - physmap_flash_data.width = bankwidth; - physmap_flash_data.set_vpp = set_vpp; -} - -void physmap_set_partitions(struct mtd_partition *parts, int num_parts) -{ - physmap_flash_data.nr_parts = num_parts; - physmap_flash_data.parts = parts; 
-} #endif static int __init physmap_init(void) diff --git a/drivers/mtd/maps/physmap_of.c b/drivers/mtd/maps/physmap_of.c index d251d1db129b..7d65f9d3e690 100644 --- a/drivers/mtd/maps/physmap_of.c +++ b/drivers/mtd/maps/physmap_of.c @@ -34,58 +34,10 @@ struct of_flash_list { struct of_flash { struct mtd_info *cmtd; - struct mtd_partition *parts; int list_size; /* number of elements in of_flash_list */ struct of_flash_list list[0]; }; -#define OF_FLASH_PARTS(info) ((info)->parts) -static int parse_obsolete_partitions(struct platform_device *dev, - struct of_flash *info, - struct device_node *dp) -{ - int i, plen, nr_parts; - const struct { - __be32 offset, len; - } *part; - const char *names; - - part = of_get_property(dp, "partitions", &plen); - if (!part) - return 0; /* No partitions found */ - - dev_warn(&dev->dev, "Device tree uses obsolete partition map binding\n"); - - nr_parts = plen / sizeof(part[0]); - - info->parts = kzalloc(nr_parts * sizeof(*info->parts), GFP_KERNEL); - if (!info->parts) - return -ENOMEM; - - names = of_get_property(dp, "partition-names", &plen); - - for (i = 0; i < nr_parts; i++) { - info->parts[i].offset = be32_to_cpu(part->offset); - info->parts[i].size = be32_to_cpu(part->len) & ~1; - if (be32_to_cpu(part->len) & 1) /* bit 0 set signifies read only partition */ - info->parts[i].mask_flags = MTD_WRITEABLE; - - if (names && (plen > 0)) { - int len = strlen(names) + 1; - - info->parts[i].name = (char *)names; - plen -= len; - names += len; - } else { - info->parts[i].name = "unnamed"; - } - - part++; - } - - return nr_parts; -} - static int of_flash_remove(struct platform_device *dev) { struct of_flash *info; @@ -101,11 +53,8 @@ static int of_flash_remove(struct platform_device *dev) mtd_concat_destroy(info->cmtd); } - if (info->cmtd) { - if (OF_FLASH_PARTS(info)) - kfree(OF_FLASH_PARTS(info)); + if (info->cmtd) mtd_device_unregister(info->cmtd); - } for (i = 0; i < info->list_size; i++) { if (info->list[i].mtd) @@ -165,7 +114,8 @@ static struct mtd_info * __devinit obsolete_probe(struct platform_device *dev, specifies the list of partition probers to use. If none is given then the default is use. These take precedence over other device tree information. 
*/ -static const char *part_probe_types_def[] = { "cmdlinepart", "RedBoot", NULL }; +static const char *part_probe_types_def[] = { "cmdlinepart", "RedBoot", + "ofpart", "ofoldpart", NULL }; static const char ** __devinit of_get_probes(struct device_node *dp) { const char *cp; @@ -218,6 +168,7 @@ static int __devinit of_flash_probe(struct platform_device *dev) int reg_tuple_size; struct mtd_info **mtd_list = NULL; resource_size_t res_size; + struct mtd_part_parser_data ppdata; match = of_match_device(of_flash_match, &dev->dev); if (!match) @@ -331,29 +282,12 @@ static int __devinit of_flash_probe(struct platform_device *dev) if (err) goto err_out; + ppdata.of_node = dp; part_probe_types = of_get_probes(dp); - err = parse_mtd_partitions(info->cmtd, part_probe_types, - &info->parts, 0); - if (err < 0) { - of_free_probes(part_probe_types); - goto err_out; - } + mtd_device_parse_register(info->cmtd, part_probe_types, &ppdata, + NULL, 0); of_free_probes(part_probe_types); - if (err == 0) { - err = of_mtd_parse_partitions(&dev->dev, dp, &info->parts); - if (err < 0) - goto err_out; - } - - if (err == 0) { - err = parse_obsolete_partitions(dev, info, dp); - if (err < 0) - goto err_out; - } - - mtd_device_register(info->cmtd, info->parts, err); - kfree(mtd_list); return 0; diff --git a/drivers/mtd/maps/plat-ram.c b/drivers/mtd/maps/plat-ram.c index 9ca1eccba4bc..94f553489725 100644 --- a/drivers/mtd/maps/plat-ram.c +++ b/drivers/mtd/maps/plat-ram.c @@ -44,8 +44,6 @@ struct platram_info { struct device *dev; struct mtd_info *mtd; struct map_info map; - struct mtd_partition *partitions; - bool free_partitions; struct resource *area; struct platdata_mtd_ram *pdata; }; @@ -95,10 +93,6 @@ static int platram_remove(struct platform_device *pdev) if (info->mtd) { mtd_device_unregister(info->mtd); - if (info->partitions) { - if (info->free_partitions) - kfree(info->partitions); - } map_destroy(info->mtd); } @@ -228,21 +222,8 @@ static int platram_probe(struct platform_device *pdev) /* check to see if there are any available partitions, or wether * to add this device whole */ - if (!pdata->nr_partitions) { - /* try to probe using the supplied probe type */ - if (pdata->probes) { - err = parse_mtd_partitions(info->mtd, pdata->probes, - &info->partitions, 0); - info->free_partitions = 1; - if (err > 0) - err = mtd_device_register(info->mtd, - info->partitions, err); - } - } - /* use the static mapping */ - else - err = mtd_device_register(info->mtd, pdata->partitions, - pdata->nr_partitions); + err = mtd_device_parse_register(info->mtd, pdata->probes, 0, + pdata->partitions, pdata->nr_partitions); if (!err) dev_info(&pdev->dev, "registered mtd device\n"); diff --git a/drivers/mtd/maps/pxa2xx-flash.c b/drivers/mtd/maps/pxa2xx-flash.c index 7ae137d4b998..411a17df9fc1 100644 --- a/drivers/mtd/maps/pxa2xx-flash.c +++ b/drivers/mtd/maps/pxa2xx-flash.c @@ -41,8 +41,6 @@ static void pxa2xx_map_inval_cache(struct map_info *map, unsigned long from, } struct pxa2xx_flash_info { - struct mtd_partition *parts; - int nr_parts; struct mtd_info *mtd; struct map_info map; }; @@ -55,9 +53,7 @@ static int __devinit pxa2xx_flash_probe(struct platform_device *pdev) { struct flash_platform_data *flash = pdev->dev.platform_data; struct pxa2xx_flash_info *info; - struct mtd_partition *parts; struct resource *res; - int ret = 0; res = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!res) @@ -71,8 +67,6 @@ static int __devinit pxa2xx_flash_probe(struct platform_device *pdev) info->map.bankwidth = flash->width; info->map.phys = 
res->start; info->map.size = resource_size(res); - info->parts = flash->parts; - info->nr_parts = flash->nr_parts; info->map.virt = ioremap(info->map.phys, info->map.size); if (!info->map.virt) { @@ -104,18 +98,7 @@ static int __devinit pxa2xx_flash_probe(struct platform_device *pdev) } info->mtd->owner = THIS_MODULE; - ret = parse_mtd_partitions(info->mtd, probes, &parts, 0); - - if (ret > 0) { - info->nr_parts = ret; - info->parts = parts; - } - - if (!info->nr_parts) - printk("Registering %s as whole device\n", - info->map.name); - - mtd_device_register(info->mtd, info->parts, info->nr_parts); + mtd_device_parse_register(info->mtd, probes, 0, NULL, 0); platform_set_drvdata(pdev, info); return 0; @@ -133,7 +116,6 @@ static int __devexit pxa2xx_flash_remove(struct platform_device *dev) iounmap(info->map.virt); if (info->map.cached) iounmap(info->map.cached); - kfree(info->parts); kfree(info); return 0; } diff --git a/drivers/mtd/maps/rbtx4939-flash.c b/drivers/mtd/maps/rbtx4939-flash.c index 761fb459d2c7..0237f197fd12 100644 --- a/drivers/mtd/maps/rbtx4939-flash.c +++ b/drivers/mtd/maps/rbtx4939-flash.c @@ -25,8 +25,6 @@ struct rbtx4939_flash_info { struct mtd_info *mtd; struct map_info map; - int nr_parts; - struct mtd_partition *parts; }; static int rbtx4939_flash_remove(struct platform_device *dev) @@ -41,8 +39,6 @@ static int rbtx4939_flash_remove(struct platform_device *dev) if (info->mtd) { struct rbtx4939_flash_data *pdata = dev->dev.platform_data; - if (info->nr_parts) - kfree(info->parts); mtd_device_unregister(info->mtd); map_destroy(info->mtd); } @@ -50,7 +46,6 @@ static int rbtx4939_flash_remove(struct platform_device *dev) } static const char *rom_probe_types[] = { "cfi_probe", "jedec_probe", NULL }; -static const char *part_probe_types[] = { "cmdlinepart", NULL }; static int rbtx4939_flash_probe(struct platform_device *dev) { @@ -107,22 +102,11 @@ static int rbtx4939_flash_probe(struct platform_device *dev) info->mtd->owner = THIS_MODULE; if (err) goto err_out; + err = mtd_device_parse_register(info->mtd, NULL, 0, + pdata->parts, pdata->nr_parts); - err = parse_mtd_partitions(info->mtd, part_probe_types, - &info->parts, 0); - if (err > 0) { - mtd_device_register(info->mtd, info->parts, err); - info->nr_parts = err; - return 0; - } - - if (pdata->nr_parts) { - pr_notice("Using rbtx4939 partition information\n"); - mtd_device_register(info->mtd, pdata->parts, pdata->nr_parts); - return 0; - } - - mtd_device_register(info->mtd, NULL, 0); + if (err) + goto err_out; return 0; err_out: diff --git a/drivers/mtd/maps/sa1100-flash.c b/drivers/mtd/maps/sa1100-flash.c index a9b5e0e5c4c5..fa9c0a9670cd 100644 --- a/drivers/mtd/maps/sa1100-flash.c +++ b/drivers/mtd/maps/sa1100-flash.c @@ -131,10 +131,8 @@ struct sa_subdev_info { }; struct sa_info { - struct mtd_partition *parts; struct mtd_info *mtd; int num_subdev; - unsigned int nr_parts; struct sa_subdev_info subdev[0]; }; @@ -231,8 +229,6 @@ static void sa1100_destroy(struct sa_info *info, struct flash_platform_data *pla mtd_concat_destroy(info->mtd); } - kfree(info->parts); - for (i = info->num_subdev - 1; i >= 0; i--) sa1100_destroy_subdev(&info->subdev[i]); kfree(info); @@ -341,10 +337,8 @@ static const char *part_probes[] = { "cmdlinepart", "RedBoot", NULL }; static int __devinit sa1100_mtd_probe(struct platform_device *pdev) { struct flash_platform_data *plat = pdev->dev.platform_data; - struct mtd_partition *parts; - const char *part_type = NULL; struct sa_info *info; - int err, nr_parts = 0; + int err; if (!plat) return 
-ENODEV; @@ -358,26 +352,8 @@ static int __devinit sa1100_mtd_probe(struct platform_device *pdev) /* * Partition selection stuff. */ - nr_parts = parse_mtd_partitions(info->mtd, part_probes, &parts, 0); - if (nr_parts > 0) { - info->parts = parts; - part_type = "dynamic"; - } else { - parts = plat->parts; - nr_parts = plat->nr_parts; - part_type = "static"; - } - - if (nr_parts == 0) - printk(KERN_NOTICE "SA1100 flash: no partition info " - "available, registering whole flash\n"); - else - printk(KERN_NOTICE "SA1100 flash: using %s partition " - "definition\n", part_type); - - mtd_device_register(info->mtd, parts, nr_parts); - - info->nr_parts = nr_parts; + mtd_device_parse_register(info->mtd, part_probes, 0, + plat->parts, plat->nr_parts); platform_set_drvdata(pdev, info); err = 0; diff --git a/drivers/mtd/maps/solutionengine.c b/drivers/mtd/maps/solutionengine.c index cbf6bade9354..496c40704aff 100644 --- a/drivers/mtd/maps/solutionengine.c +++ b/drivers/mtd/maps/solutionengine.c @@ -19,8 +19,6 @@ static struct mtd_info *flash_mtd; static struct mtd_info *eprom_mtd; -static struct mtd_partition *parsed_parts; - struct map_info soleng_eprom_map = { .name = "Solution Engine EPROM", .size = 0x400000, @@ -51,12 +49,14 @@ static struct mtd_partition superh_se_partitions[] = { .size = MTDPART_SIZ_FULL, } }; +#define NUM_PARTITIONS ARRAY_SIZE(superh_se_partitions) +#else +#define superh_se_partitions NULL +#define NUM_PARTITIONS 0 #endif /* CONFIG_MTD_SUPERH_RESERVE */ static int __init init_soleng_maps(void) { - int nr_parts = 0; - /* First probe at offset 0 */ soleng_flash_map.phys = 0; soleng_flash_map.virt = (void __iomem *)P2SEGADDR(0); @@ -92,21 +92,8 @@ static int __init init_soleng_maps(void) mtd_device_register(eprom_mtd, NULL, 0); } - nr_parts = parse_mtd_partitions(flash_mtd, probes, &parsed_parts, 0); - -#ifdef CONFIG_MTD_SUPERH_RESERVE - if (nr_parts <= 0) { - printk(KERN_NOTICE "Using configured partition at 0x%08x.\n", - CONFIG_MTD_SUPERH_RESERVE); - parsed_parts = superh_se_partitions; - nr_parts = sizeof(superh_se_partitions)/sizeof(*parsed_parts); - } -#endif /* CONFIG_MTD_SUPERH_RESERVE */ - - if (nr_parts > 0) - mtd_device_register(flash_mtd, parsed_parts, nr_parts); - else - mtd_device_register(flash_mtd, NULL, 0); + mtd_device_parse_register(flash_mtd, probes, 0, + superh_se_partitions, NUM_PARTITIONS); return 0; } @@ -118,10 +105,7 @@ static void __exit cleanup_soleng_maps(void) map_destroy(eprom_mtd); } - if (parsed_parts) - mtd_device_unregister(flash_mtd); - else - mtd_device_unregister(flash_mtd); + mtd_device_unregister(flash_mtd); map_destroy(flash_mtd); } diff --git a/drivers/mtd/maps/wr_sbc82xx_flash.c b/drivers/mtd/maps/wr_sbc82xx_flash.c index 901ce968efae..aa7e0cb2893c 100644 --- a/drivers/mtd/maps/wr_sbc82xx_flash.c +++ b/drivers/mtd/maps/wr_sbc82xx_flash.c @@ -20,7 +20,6 @@ #include <asm/immap_cpm2.h> static struct mtd_info *sbcmtd[3]; -static struct mtd_partition *sbcmtd_parts[3]; struct map_info sbc82xx_flash_map[3] = { {.name = "Boot flash"}, @@ -101,6 +100,7 @@ static int __init init_sbc82xx_flash(void) for (i=0; i<3; i++) { int8_t flashcs[3] = { 0, 6, 1 }; int nr_parts; + struct mtd_partition *defparts; printk(KERN_NOTICE "PowerQUICC II %s (%ld MiB on CS%d", sbc82xx_flash_map[i].name, @@ -113,7 +113,8 @@ static int __init init_sbc82xx_flash(void) } printk(" at %08lx)\n", sbc82xx_flash_map[i].phys); - sbc82xx_flash_map[i].virt = ioremap(sbc82xx_flash_map[i].phys, sbc82xx_flash_map[i].size); + sbc82xx_flash_map[i].virt = ioremap(sbc82xx_flash_map[i].phys, 
+ sbc82xx_flash_map[i].size); if (!sbc82xx_flash_map[i].virt) { printk("Failed to ioremap\n"); @@ -129,24 +130,20 @@ static int __init init_sbc82xx_flash(void) sbcmtd[i]->owner = THIS_MODULE; - nr_parts = parse_mtd_partitions(sbcmtd[i], part_probes, - &sbcmtd_parts[i], 0); - if (nr_parts > 0) { - mtd_device_register(sbcmtd[i], sbcmtd_parts[i], - nr_parts); - continue; - } - /* No partitioning detected. Use default */ if (i == 2) { - mtd_device_register(sbcmtd[i], NULL, 0); + defparts = NULL; + nr_parts = 0; } else if (i == bigflash) { - mtd_device_register(sbcmtd[i], bigflash_parts, - ARRAY_SIZE(bigflash_parts)); + defparts = bigflash_parts; + nr_parts = ARRAY_SIZE(bigflash_parts); } else { - mtd_device_register(sbcmtd[i], smallflash_parts, - ARRAY_SIZE(smallflash_parts)); + defparts = smallflash_parts; + nr_parts = ARRAY_SIZE(smallflash_parts); } + + mtd_device_parse_register(sbcmtd[i], part_probes, 0, + defparts, nr_parts); } return 0; } @@ -159,12 +156,8 @@ static void __exit cleanup_sbc82xx_flash(void) if (!sbcmtd[i]) continue; - if (i<2 || sbcmtd_parts[i]) - mtd_device_unregister(sbcmtd[i]); - else - mtd_device_unregister(sbcmtd[i]); + mtd_device_unregister(sbcmtd[i]); - kfree(sbcmtd_parts[i]); map_destroy(sbcmtd[i]); iounmap((void *)sbc82xx_flash_map[i].virt); diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c index ca385697446e..ed8b5e744b12 100644 --- a/drivers/mtd/mtd_blkdevs.c +++ b/drivers/mtd/mtd_blkdevs.c @@ -426,6 +426,8 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new) new->rq->queuedata = new; blk_queue_logical_block_size(new->rq, tr->blksize); + queue_flag_set_unlocked(QUEUE_FLAG_NONROT, new->rq); + if (tr->discard) { queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, new->rq); new->rq->limits.max_discard_sectors = UINT_MAX; diff --git a/drivers/mtd/mtdblock.c b/drivers/mtd/mtdblock.c index 3326615ad66b..7c1dc908a174 100644 --- a/drivers/mtd/mtdblock.c +++ b/drivers/mtd/mtdblock.c @@ -44,7 +44,7 @@ struct mtdblk_dev { enum { STATE_EMPTY, STATE_CLEAN, STATE_DIRTY } cache_state; }; -static struct mutex mtdblks_lock; +static DEFINE_MUTEX(mtdblks_lock); /* * Cache stuff... 
@@ -119,7 +119,7 @@ static int write_cached_data (struct mtdblk_dev *mtdblk) if (mtdblk->cache_state != STATE_DIRTY) return 0; - DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: writing cached data for \"%s\" " + pr_debug("mtdblock: writing cached data for \"%s\" " "at 0x%lx, size 0x%x\n", mtd->name, mtdblk->cache_offset, mtdblk->cache_size); @@ -148,7 +148,7 @@ static int do_cached_write (struct mtdblk_dev *mtdblk, unsigned long pos, size_t retlen; int ret; - DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: write on \"%s\" at 0x%lx, size 0x%x\n", + pr_debug("mtdblock: write on \"%s\" at 0x%lx, size 0x%x\n", mtd->name, pos, len); if (!sect_size) @@ -218,7 +218,7 @@ static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos, size_t retlen; int ret; - DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: read on \"%s\" at 0x%lx, size 0x%x\n", + pr_debug("mtdblock: read on \"%s\" at 0x%lx, size 0x%x\n", mtd->name, pos, len); if (!sect_size) @@ -283,7 +283,7 @@ static int mtdblock_open(struct mtd_blktrans_dev *mbd) { struct mtdblk_dev *mtdblk = container_of(mbd, struct mtdblk_dev, mbd); - DEBUG(MTD_DEBUG_LEVEL1,"mtdblock_open\n"); + pr_debug("mtdblock_open\n"); mutex_lock(&mtdblks_lock); if (mtdblk->count) { @@ -303,7 +303,7 @@ static int mtdblock_open(struct mtd_blktrans_dev *mbd) mutex_unlock(&mtdblks_lock); - DEBUG(MTD_DEBUG_LEVEL1, "ok\n"); + pr_debug("ok\n"); return 0; } @@ -312,7 +312,7 @@ static int mtdblock_release(struct mtd_blktrans_dev *mbd) { struct mtdblk_dev *mtdblk = container_of(mbd, struct mtdblk_dev, mbd); - DEBUG(MTD_DEBUG_LEVEL1, "mtdblock_release\n"); + pr_debug("mtdblock_release\n"); mutex_lock(&mtdblks_lock); @@ -329,7 +329,7 @@ static int mtdblock_release(struct mtd_blktrans_dev *mbd) mutex_unlock(&mtdblks_lock); - DEBUG(MTD_DEBUG_LEVEL1, "ok\n"); + pr_debug("ok\n"); return 0; } @@ -389,8 +389,6 @@ static struct mtd_blktrans_ops mtdblock_tr = { static int __init init_mtdblock(void) { - mutex_init(&mtdblks_lock); - return register_mtd_blktrans(&mtdblock_tr); } diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c index 61086ea3cc6b..e7dc732ddabc 100644 --- a/drivers/mtd/mtdchar.c +++ b/drivers/mtd/mtdchar.c @@ -43,7 +43,7 @@ static struct vfsmount *mtd_inode_mnt __read_mostly; /* * Data structure to hold the pointer to the mtd device as well - * as mode information ofr various use cases. + * as mode information of various use cases. 
*/ struct mtd_file_info { struct mtd_info *mtd; @@ -86,7 +86,7 @@ static int mtd_open(struct inode *inode, struct file *file) struct mtd_file_info *mfi; struct inode *mtd_ino; - DEBUG(MTD_DEBUG_LEVEL0, "MTD_open\n"); + pr_debug("MTD_open\n"); /* You can't open the RO devices RW */ if ((file->f_mode & FMODE_WRITE) && (minor & 1)) @@ -151,7 +151,7 @@ static int mtd_close(struct inode *inode, struct file *file) struct mtd_file_info *mfi = file->private_data; struct mtd_info *mtd = mfi->mtd; - DEBUG(MTD_DEBUG_LEVEL0, "MTD_close\n"); + pr_debug("MTD_close\n"); /* Only sync if opened RW */ if ((file->f_mode & FMODE_WRITE) && mtd->sync) @@ -195,7 +195,7 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t size_t size = count; char *kbuf; - DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n"); + pr_debug("MTD_read\n"); if (*ppos + count > mtd->size) count = mtd->size - *ppos; @@ -211,17 +211,17 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t len = min_t(size_t, count, size); switch (mfi->mode) { - case MTD_MODE_OTP_FACTORY: + case MTD_FILE_MODE_OTP_FACTORY: ret = mtd->read_fact_prot_reg(mtd, *ppos, len, &retlen, kbuf); break; - case MTD_MODE_OTP_USER: + case MTD_FILE_MODE_OTP_USER: ret = mtd->read_user_prot_reg(mtd, *ppos, len, &retlen, kbuf); break; - case MTD_MODE_RAW: + case MTD_FILE_MODE_RAW: { struct mtd_oob_ops ops; - ops.mode = MTD_OOB_RAW; + ops.mode = MTD_OPS_RAW; ops.datbuf = kbuf; ops.oobbuf = NULL; ops.len = len; @@ -233,16 +233,16 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t default: ret = mtd->read(mtd, *ppos, len, &retlen, kbuf); } - /* Nand returns -EBADMSG on ecc errors, but it returns + /* Nand returns -EBADMSG on ECC errors, but it returns * the data. For our userspace tools it is important - * to dump areas with ecc errors ! + * to dump areas with ECC errors! * For kernel internal usage it also might return -EUCLEAN * to signal the caller that a bitflip has occurred and has * been corrected by the ECC algorithm. 
* Userspace software which accesses NAND this way * must be aware of the fact that it deals with NAND */ - if (!ret || (ret == -EUCLEAN) || (ret == -EBADMSG)) { + if (!ret || mtd_is_bitflip_or_eccerr(ret)) { *ppos += retlen; if (copy_to_user(buf, kbuf, retlen)) { kfree(kbuf); @@ -278,7 +278,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count int ret=0; int len; - DEBUG(MTD_DEBUG_LEVEL0,"MTD_write\n"); + pr_debug("MTD_write\n"); if (*ppos == mtd->size) return -ENOSPC; @@ -302,10 +302,10 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count } switch (mfi->mode) { - case MTD_MODE_OTP_FACTORY: + case MTD_FILE_MODE_OTP_FACTORY: ret = -EROFS; break; - case MTD_MODE_OTP_USER: + case MTD_FILE_MODE_OTP_USER: if (!mtd->write_user_prot_reg) { ret = -EOPNOTSUPP; break; @@ -313,13 +313,14 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count ret = mtd->write_user_prot_reg(mtd, *ppos, len, &retlen, kbuf); break; - case MTD_MODE_RAW: + case MTD_FILE_MODE_RAW: { struct mtd_oob_ops ops; - ops.mode = MTD_OOB_RAW; + ops.mode = MTD_OPS_RAW; ops.datbuf = kbuf; ops.oobbuf = NULL; + ops.ooboffs = 0; ops.len = len; ret = mtd->write_oob(mtd, *ppos, &ops); @@ -367,13 +368,13 @@ static int otp_select_filemode(struct mtd_file_info *mfi, int mode) if (!mtd->read_fact_prot_reg) ret = -EOPNOTSUPP; else - mfi->mode = MTD_MODE_OTP_FACTORY; + mfi->mode = MTD_FILE_MODE_OTP_FACTORY; break; case MTD_OTP_USER: if (!mtd->read_fact_prot_reg) ret = -EOPNOTSUPP; else - mfi->mode = MTD_MODE_OTP_USER; + mfi->mode = MTD_FILE_MODE_OTP_USER; break; default: ret = -EINVAL; @@ -390,6 +391,7 @@ static int mtd_do_writeoob(struct file *file, struct mtd_info *mtd, uint64_t start, uint32_t length, void __user *ptr, uint32_t __user *retp) { + struct mtd_file_info *mfi = file->private_data; struct mtd_oob_ops ops; uint32_t retlen; int ret = 0; @@ -409,9 +411,10 @@ static int mtd_do_writeoob(struct file *file, struct mtd_info *mtd, return ret; ops.ooblen = length; - ops.ooboffs = start & (mtd->oobsize - 1); + ops.ooboffs = start & (mtd->writesize - 1); ops.datbuf = NULL; - ops.mode = MTD_OOB_PLACE; + ops.mode = (mfi->mode == MTD_FILE_MODE_RAW) ? MTD_OPS_RAW : + MTD_OPS_PLACE_OOB; if (ops.ooboffs && ops.ooblen > (mtd->oobsize - ops.ooboffs)) return -EINVAL; @@ -420,7 +423,7 @@ static int mtd_do_writeoob(struct file *file, struct mtd_info *mtd, if (IS_ERR(ops.oobbuf)) return PTR_ERR(ops.oobbuf); - start &= ~((uint64_t)mtd->oobsize - 1); + start &= ~((uint64_t)mtd->writesize - 1); ret = mtd->write_oob(mtd, start, &ops); if (ops.oobretlen > 0xFFFFFFFFU) @@ -433,9 +436,11 @@ static int mtd_do_writeoob(struct file *file, struct mtd_info *mtd, return ret; } -static int mtd_do_readoob(struct mtd_info *mtd, uint64_t start, - uint32_t length, void __user *ptr, uint32_t __user *retp) +static int mtd_do_readoob(struct file *file, struct mtd_info *mtd, + uint64_t start, uint32_t length, void __user *ptr, + uint32_t __user *retp) { + struct mtd_file_info *mfi = file->private_data; struct mtd_oob_ops ops; int ret = 0; @@ -451,9 +456,10 @@ static int mtd_do_readoob(struct mtd_info *mtd, uint64_t start, return ret; ops.ooblen = length; - ops.ooboffs = start & (mtd->oobsize - 1); + ops.ooboffs = start & (mtd->writesize - 1); ops.datbuf = NULL; - ops.mode = MTD_OOB_PLACE; + ops.mode = (mfi->mode == MTD_FILE_MODE_RAW) ? 
MTD_OPS_RAW : + MTD_OPS_PLACE_OOB; if (ops.ooboffs && ops.ooblen > (mtd->oobsize - ops.ooboffs)) return -EINVAL; @@ -462,7 +468,7 @@ static int mtd_do_readoob(struct mtd_info *mtd, uint64_t start, if (!ops.oobbuf) return -ENOMEM; - start &= ~((uint64_t)mtd->oobsize - 1); + start &= ~((uint64_t)mtd->writesize - 1); ret = mtd->read_oob(mtd, start, &ops); if (put_user(ops.oobretlen, retp)) @@ -472,13 +478,29 @@ static int mtd_do_readoob(struct mtd_info *mtd, uint64_t start, ret = -EFAULT; kfree(ops.oobbuf); + + /* + * NAND returns -EBADMSG on ECC errors, but it returns the OOB + * data. For our userspace tools it is important to dump areas + * with ECC errors! + * For kernel internal usage it also might return -EUCLEAN + * to signal the caller that a bitflip has occured and has + * been corrected by the ECC algorithm. + * + * Note: currently the standard NAND function, nand_read_oob_std, + * does not calculate ECC for the OOB area, so do not rely on + * this behavior unless you have replaced it with your own. + */ + if (mtd_is_bitflip_or_eccerr(ret)) + return 0; + return ret; } /* * Copies (and truncates, if necessary) data from the larger struct, * nand_ecclayout, to the smaller, deprecated layout struct, - * nand_ecclayout_user. This is necessary only to suppport the deprecated + * nand_ecclayout_user. This is necessary only to support the deprecated * API ioctl ECCGETLAYOUT while allowing all new functionality to use * nand_ecclayout flexibly (i.e. the struct may change size in new * releases without requiring major rewrites). @@ -544,6 +566,55 @@ static int mtd_blkpg_ioctl(struct mtd_info *mtd, } } +static int mtd_write_ioctl(struct mtd_info *mtd, + struct mtd_write_req __user *argp) +{ + struct mtd_write_req req; + struct mtd_oob_ops ops; + void __user *usr_data, *usr_oob; + int ret; + + if (copy_from_user(&req, argp, sizeof(req)) || + !access_ok(VERIFY_READ, req.usr_data, req.len) || + !access_ok(VERIFY_READ, req.usr_oob, req.ooblen)) + return -EFAULT; + if (!mtd->write_oob) + return -EOPNOTSUPP; + + ops.mode = req.mode; + ops.len = (size_t)req.len; + ops.ooblen = (size_t)req.ooblen; + ops.ooboffs = 0; + + usr_data = (void __user *)(uintptr_t)req.usr_data; + usr_oob = (void __user *)(uintptr_t)req.usr_oob; + + if (req.usr_data) { + ops.datbuf = memdup_user(usr_data, ops.len); + if (IS_ERR(ops.datbuf)) + return PTR_ERR(ops.datbuf); + } else { + ops.datbuf = NULL; + } + + if (req.usr_oob) { + ops.oobbuf = memdup_user(usr_oob, ops.ooblen); + if (IS_ERR(ops.oobbuf)) { + kfree(ops.datbuf); + return PTR_ERR(ops.oobbuf); + } + } else { + ops.oobbuf = NULL; + } + + ret = mtd->write_oob(mtd, (loff_t)req.start, &ops); + + kfree(ops.datbuf); + kfree(ops.oobbuf); + + return ret; +} + static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) { struct mtd_file_info *mfi = file->private_data; @@ -553,7 +624,7 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) u_long size; struct mtd_info_user info; - DEBUG(MTD_DEBUG_LEVEL0, "MTD_ioctl\n"); + pr_debug("MTD_ioctl\n"); size = (cmd & IOCSIZE_MASK) >> IOCSIZE_SHIFT; if (cmd & IOC_IN) { @@ -601,8 +672,8 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) info.erasesize = mtd->erasesize; info.writesize = mtd->writesize; info.oobsize = mtd->oobsize; - /* The below fields are obsolete */ - info.ecctype = -1; + /* The below field is obsolete */ + info.padding = 0; if (copy_to_user(argp, &info, sizeof(struct mtd_info_user))) return -EFAULT; break; @@ -698,7 +769,7 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) 
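The mtd_write_ioctl() helper added above backs the new MEMWRITE ioctl (dispatched from mtd_ioctl() further down). The sketch below shows how userspace might drive it; it is illustrative only, assuming the struct mtd_write_req fields handled above and the MEMWRITE/MTD_OPS_* definitions from this series are exported through <mtd/mtd-user.h>, and the write_page_with_oob() wrapper name is hypothetical.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>	/* assumed to provide struct mtd_write_req and MEMWRITE */

/* Illustrative wrapper, not part of the patch: write one page and its OOB
 * area in a single request instead of separate write()/MEMWRITEOOB calls. */
static int write_page_with_oob(int fd, uint64_t offs,
			       const void *data, uint32_t len,
			       const void *oob, uint32_t ooblen)
{
	struct mtd_write_req req;

	memset(&req, 0, sizeof(req));
	req.start = offs;			/* flash offset to write */
	req.len = len;				/* main-area length */
	req.ooblen = ooblen;			/* OOB length */
	req.usr_data = (uintptr_t)data;		/* user pointers are passed as integers */
	req.usr_oob = (uintptr_t)oob;
	req.mode = MTD_OPS_PLACE_OOB;		/* or MTD_OPS_AUTO_OOB / MTD_OPS_RAW */

	return ioctl(fd, MEMWRITE, &req);	/* handled by mtd_write_ioctl() */
}

On the kernel side, as the hunk above shows, the request is range-checked with access_ok() and the data and OOB buffers are duplicated with memdup_user() before mtd->write_oob() is called.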
if (copy_from_user(&buf, argp, sizeof(buf))) ret = -EFAULT; else - ret = mtd_do_readoob(mtd, buf.start, buf.length, + ret = mtd_do_readoob(file, mtd, buf.start, buf.length, buf.ptr, &buf_user->start); break; } @@ -725,12 +796,19 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) if (copy_from_user(&buf, argp, sizeof(buf))) ret = -EFAULT; else - ret = mtd_do_readoob(mtd, buf.start, buf.length, + ret = mtd_do_readoob(file, mtd, buf.start, buf.length, (void __user *)(uintptr_t)buf.usr_ptr, &buf_user->length); break; } + case MEMWRITE: + { + ret = mtd_write_ioctl(mtd, + (struct mtd_write_req __user *)arg); + break; + } + case MEMLOCK: { struct erase_info_user einfo; @@ -827,7 +905,7 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) if (copy_from_user(&mode, argp, sizeof(int))) return -EFAULT; - mfi->mode = MTD_MODE_NORMAL; + mfi->mode = MTD_FILE_MODE_NORMAL; ret = otp_select_filemode(mfi, mode); @@ -843,11 +921,11 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) return -ENOMEM; ret = -EOPNOTSUPP; switch (mfi->mode) { - case MTD_MODE_OTP_FACTORY: + case MTD_FILE_MODE_OTP_FACTORY: if (mtd->get_fact_prot_info) ret = mtd->get_fact_prot_info(mtd, buf, 4096); break; - case MTD_MODE_OTP_USER: + case MTD_FILE_MODE_OTP_USER: if (mtd->get_user_prot_info) ret = mtd->get_user_prot_info(mtd, buf, 4096); break; @@ -871,7 +949,7 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) { struct otp_info oinfo; - if (mfi->mode != MTD_MODE_OTP_USER) + if (mfi->mode != MTD_FILE_MODE_OTP_USER) return -EINVAL; if (copy_from_user(&oinfo, argp, sizeof(oinfo))) return -EFAULT; @@ -882,7 +960,7 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) } #endif - /* This ioctl is being deprecated - it truncates the ecc layout */ + /* This ioctl is being deprecated - it truncates the ECC layout */ case ECCGETLAYOUT: { struct nand_ecclayout_user *usrlay; @@ -915,17 +993,17 @@ static int mtd_ioctl(struct file *file, u_int cmd, u_long arg) mfi->mode = 0; switch(arg) { - case MTD_MODE_OTP_FACTORY: - case MTD_MODE_OTP_USER: + case MTD_FILE_MODE_OTP_FACTORY: + case MTD_FILE_MODE_OTP_USER: ret = otp_select_filemode(mfi, arg); break; - case MTD_MODE_RAW: + case MTD_FILE_MODE_RAW: if (!mtd->read_oob || !mtd->write_oob) return -EOPNOTSUPP; mfi->mode = arg; - case MTD_MODE_NORMAL: + case MTD_FILE_MODE_NORMAL: break; default: ret = -EINVAL; @@ -1011,7 +1089,7 @@ static long mtd_compat_ioctl(struct file *file, unsigned int cmd, if (copy_from_user(&buf, argp, sizeof(buf))) ret = -EFAULT; else - ret = mtd_do_readoob(mtd, buf.start, + ret = mtd_do_readoob(file, mtd, buf.start, buf.length, compat_ptr(buf.ptr), &buf_user->start); break; diff --git a/drivers/mtd/mtdconcat.c b/drivers/mtd/mtdconcat.c index e601672a5305..6df4d4d4eb92 100644 --- a/drivers/mtd/mtdconcat.c +++ b/drivers/mtd/mtdconcat.c @@ -95,10 +95,10 @@ concat_read(struct mtd_info *mtd, loff_t from, size_t len, /* Save information about bitflips! */ if (unlikely(err)) { - if (err == -EBADMSG) { + if (mtd_is_eccerr(err)) { mtd->ecc_stats.failed++; ret = err; - } else if (err == -EUCLEAN) { + } else if (mtd_is_bitflip(err)) { mtd->ecc_stats.corrected++; /* Do not overwrite -EBADMSG !! */ if (!ret) @@ -279,10 +279,10 @@ concat_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops) /* Save information about bitflips! 
*/ if (unlikely(err)) { - if (err == -EBADMSG) { + if (mtd_is_eccerr(err)) { mtd->ecc_stats.failed++; ret = err; - } else if (err == -EUCLEAN) { + } else if (mtd_is_bitflip(err)) { mtd->ecc_stats.corrected++; /* Do not overwrite -EBADMSG !! */ if (!ret) @@ -770,7 +770,7 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[], /* subdevices to c /* * Set up the new "super" device's MTD object structure, check for - * incompatibilites between the subdevices. + * incompatibilities between the subdevices. */ concat->mtd.type = subdev[0]->type; concat->mtd.flags = subdev[0]->flags; diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c index c510aff289a8..b01993ea260e 100644 --- a/drivers/mtd/mtdcore.c +++ b/drivers/mtd/mtdcore.c @@ -362,7 +362,7 @@ int add_mtd_device(struct mtd_info *mtd) MTD_DEVT(i) + 1, NULL, "mtd%dro", i); - DEBUG(0, "mtd: Giving out device %d to %s\n", i, mtd->name); + pr_debug("mtd: Giving out device %d to %s\n", i, mtd->name); /* No need to get a refcount on the module containing the notifier, since we hold the mtd_table_mutex */ list_for_each_entry(not, &mtd_notifiers, list) @@ -429,27 +429,63 @@ out_error: } /** - * mtd_device_register - register an MTD device. + * mtd_device_parse_register - parse partitions and register an MTD device. * - * @master: the MTD device to register - * @parts: the partitions to register - only valid if nr_parts > 0 - * @nr_parts: the number of partitions in parts. If zero then the full MTD - * device is registered + * @mtd: the MTD device to register + * @types: the list of MTD partition probes to try, see + * 'parse_mtd_partitions()' for more information + * @parser_data: MTD partition parser-specific data + * @parts: fallback partition information to register, if parsing fails; + * only valid if %nr_parts > %0 + * @nr_parts: the number of partitions in @parts; if zero, then the full + * MTD device is registered when no partition info is found * - * Register an MTD device with the system and optionally, a number of - * partitions. If nr_parts is 0 then the whole device is registered, otherwise - * only the partitions are registered. To register both the full device *and* - * the partitions, call mtd_device_register() twice, once with nr_parts == 0 - * and once equal to the number of partitions. + * This function aggregates MTD partition parsing (done by + * 'parse_mtd_partitions()') with MTD device and partition registration. It + * basically follows the most common pattern found in many MTD drivers: + * + * * It first tries to probe partitions on MTD device @mtd using parsers + * specified in @types (if @types is %NULL, then the default list of parsers + * is used, see 'parse_mtd_partitions()' for more information). If none are + * found, this function tries to fall back to the information specified in + * @parts/@nr_parts. + * * If any partitioning info was found, this function registers the found + * partitions. + * * If no partitions were found, this function just registers the MTD device + * @mtd and exits. + * + * Returns zero in case of success and a negative error code in case of failure. */ -int mtd_device_register(struct mtd_info *master, - const struct mtd_partition *parts, - int nr_parts) +int mtd_device_parse_register(struct mtd_info *mtd, const char **types, + struct mtd_part_parser_data *parser_data, + const struct mtd_partition *parts, + int nr_parts) { - return parts ?
add_mtd_partitions(master, parts, nr_parts) : - add_mtd_device(master); + int err; + struct mtd_partition *real_parts; + + err = parse_mtd_partitions(mtd, types, &real_parts, parser_data); + if (err <= 0 && nr_parts && parts) { + real_parts = kmemdup(parts, sizeof(*parts) * nr_parts, + GFP_KERNEL); + if (!real_parts) + err = -ENOMEM; + else + err = nr_parts; + } + + if (err > 0) { + err = add_mtd_partitions(mtd, real_parts, err); + kfree(real_parts); + } else if (err == 0) { + err = add_mtd_device(mtd); + if (err == 1) + err = -ENODEV; + } + + return err; } -EXPORT_SYMBOL_GPL(mtd_device_register); +EXPORT_SYMBOL_GPL(mtd_device_parse_register); /** * mtd_device_unregister - unregister an existing MTD device. diff --git a/drivers/mtd/mtdcore.h b/drivers/mtd/mtdcore.h index 0ed6126b4c1f..961a38408542 100644 --- a/drivers/mtd/mtdcore.h +++ b/drivers/mtd/mtdcore.h @@ -15,6 +15,9 @@ extern int del_mtd_device(struct mtd_info *mtd); extern int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int); extern int del_mtd_partitions(struct mtd_info *); +extern int parse_mtd_partitions(struct mtd_info *master, const char **types, + struct mtd_partition **pparts, + struct mtd_part_parser_data *data); #define mtd_for_each_device(mtd) \ for ((mtd) = __mtd_next_device(0); \ diff --git a/drivers/mtd/mtdoops.c b/drivers/mtd/mtdoops.c index e3e40f440323..1e2fa6236705 100644 --- a/drivers/mtd/mtdoops.c +++ b/drivers/mtd/mtdoops.c @@ -258,7 +258,7 @@ static void find_next_position(struct mtdoops_context *cxt) ret = mtd->read(mtd, page * record_size, MTDOOPS_HEADER_SIZE, &retlen, (u_char *) &count[0]); if (retlen != MTDOOPS_HEADER_SIZE || - (ret < 0 && ret != -EUCLEAN)) { + (ret < 0 && !mtd_is_bitflip(ret))) { printk(KERN_ERR "mtdoops: read failure at %ld (%td of %d read), err %d\n", page * record_size, retlen, MTDOOPS_HEADER_SIZE, ret); diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c index 630be3e7da04..a0bd2de4752b 100644 --- a/drivers/mtd/mtdpart.c +++ b/drivers/mtd/mtdpart.c @@ -73,9 +73,9 @@ static int part_read(struct mtd_info *mtd, loff_t from, size_t len, res = part->master->read(part->master, from + part->offset, len, retlen, buf); if (unlikely(res)) { - if (res == -EUCLEAN) + if (mtd_is_bitflip(res)) mtd->ecc_stats.corrected += part->master->ecc_stats.corrected - stats.corrected; - if (res == -EBADMSG) + if (mtd_is_eccerr(res)) mtd->ecc_stats.failed += part->master->ecc_stats.failed - stats.failed; } return res; @@ -130,7 +130,7 @@ static int part_read_oob(struct mtd_info *mtd, loff_t from, if (ops->oobbuf) { size_t len, pages; - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) len = mtd->oobavail; else len = mtd->oobsize; @@ -142,9 +142,9 @@ static int part_read_oob(struct mtd_info *mtd, loff_t from, res = part->master->read_oob(part->master, from + part->offset, ops); if (unlikely(res)) { - if (res == -EUCLEAN) + if (mtd_is_bitflip(res)) mtd->ecc_stats.corrected++; - if (res == -EBADMSG) + if (mtd_is_eccerr(res)) mtd->ecc_stats.failed++; } return res; @@ -479,6 +479,19 @@ static struct mtd_part *allocate_partition(struct mtd_info *master, (unsigned long long)cur_offset, (unsigned long long)slave->offset); } } + if (slave->offset == MTDPART_OFS_RETAIN) { + slave->offset = cur_offset; + if (master->size - slave->offset >= slave->mtd.size) { + slave->mtd.size = master->size - slave->offset + - slave->mtd.size; + } else { + printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n", + part->name, master->size - slave->offset, 
+ slave->mtd.size); + /* register to preserve ordering */ + goto out_register; + } + } if (slave->mtd.size == MTDPART_SIZ_FULL) slave->mtd.size = master->size - slave->offset; @@ -693,6 +706,8 @@ static struct mtd_part_parser *get_partition_parser(const char *name) return ret; } +#define put_partition_parser(p) do { module_put((p)->owner); } while (0) + int register_mtd_parser(struct mtd_part_parser *p) { spin_lock(&part_parser_lock); @@ -712,19 +727,51 @@ int deregister_mtd_parser(struct mtd_part_parser *p) } EXPORT_SYMBOL_GPL(deregister_mtd_parser); +/* + * Do not forget to update 'parse_mtd_partitions()' kerneldoc comment if you + * are changing this array! + */ +static const char *default_mtd_part_types[] = { + "cmdlinepart", + "ofpart", + NULL +}; + +/** + * parse_mtd_partitions - parse MTD partitions + * @master: the master partition (describes whole MTD device) + * @types: names of partition parsers to try or %NULL + * @pparts: array of partitions found is returned here + * @data: MTD partition parser-specific data + * + * This function tries to find partition on MTD device @master. It uses MTD + * partition parsers, specified in @types. However, if @types is %NULL, then + * the default list of parsers is used. The default list contains only the + * "cmdlinepart" and "ofpart" parsers ATM. + * + * This function may return: + * o a negative error code in case of failure + * o zero if no partitions were found + * o a positive number of found partitions, in which case on exit @pparts will + * point to an array containing this number of &struct mtd_info objects. + */ int parse_mtd_partitions(struct mtd_info *master, const char **types, - struct mtd_partition **pparts, unsigned long origin) + struct mtd_partition **pparts, + struct mtd_part_parser_data *data) { struct mtd_part_parser *parser; int ret = 0; + if (!types) + types = default_mtd_part_types; + for ( ; ret <= 0 && *types; types++) { parser = get_partition_parser(*types); if (!parser && !request_module("%s", *types)) parser = get_partition_parser(*types); if (!parser) continue; - ret = (*parser->parse_fn)(master, pparts, origin); + ret = (*parser->parse_fn)(master, pparts, data); if (ret > 0) { printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n", ret, parser->name, master->name); @@ -733,7 +780,6 @@ int parse_mtd_partitions(struct mtd_info *master, const char **types, } return ret; } -EXPORT_SYMBOL_GPL(parse_mtd_partitions); int mtd_is_partition(struct mtd_info *mtd) { diff --git a/drivers/mtd/mtdsuper.c b/drivers/mtd/mtdsuper.c index 89f8e66448ab..a90bfe79916d 100644 --- a/drivers/mtd/mtdsuper.c +++ b/drivers/mtd/mtdsuper.c @@ -27,12 +27,12 @@ static int get_sb_mtd_compare(struct super_block *sb, void *_mtd) struct mtd_info *mtd = _mtd; if (sb->s_mtd == mtd) { - DEBUG(2, "MTDSB: Match on device %d (\"%s\")\n", + pr_debug("MTDSB: Match on device %d (\"%s\")\n", mtd->index, mtd->name); return 1; } - DEBUG(2, "MTDSB: No match, device %d (\"%s\"), device %d (\"%s\")\n", + pr_debug("MTDSB: No match, device %d (\"%s\"), device %d (\"%s\")\n", sb->s_mtd->index, sb->s_mtd->name, mtd->index, mtd->name); return 0; } @@ -71,7 +71,7 @@ static struct dentry *mount_mtd_aux(struct file_system_type *fs_type, int flags, goto already_mounted; /* fresh new superblock */ - DEBUG(1, "MTDSB: New superblock for device %d (\"%s\")\n", + pr_debug("MTDSB: New superblock for device %d (\"%s\")\n", mtd->index, mtd->name); sb->s_flags = flags; @@ -88,7 +88,7 @@ static struct dentry *mount_mtd_aux(struct file_system_type *fs_type, int flags, /* 
new mountpoint for an already mounted superblock */ already_mounted: - DEBUG(1, "MTDSB: Device %d (\"%s\") is already mounted\n", + pr_debug("MTDSB: Device %d (\"%s\") is already mounted\n", mtd->index, mtd->name); put_mtd_device(mtd); return dget(sb->s_root); @@ -109,7 +109,7 @@ static struct dentry *mount_mtd_nr(struct file_system_type *fs_type, int flags, mtd = get_mtd_device(NULL, mtdnr); if (IS_ERR(mtd)) { - DEBUG(0, "MTDSB: Device #%u doesn't appear to exist\n", mtdnr); + pr_debug("MTDSB: Device #%u doesn't appear to exist\n", mtdnr); return ERR_CAST(mtd); } @@ -132,7 +132,7 @@ struct dentry *mount_mtd(struct file_system_type *fs_type, int flags, if (!dev_name) return ERR_PTR(-EINVAL); - DEBUG(2, "MTDSB: dev_name \"%s\"\n", dev_name); + pr_debug("MTDSB: dev_name \"%s\"\n", dev_name); /* the preferred way of mounting in future; especially when * CONFIG_BLOCK=n - we specify the underlying MTD device by number or @@ -143,7 +143,7 @@ struct dentry *mount_mtd(struct file_system_type *fs_type, int flags, struct mtd_info *mtd; /* mount by MTD device name */ - DEBUG(1, "MTDSB: mtd:%%s, name \"%s\"\n", + pr_debug("MTDSB: mtd:%%s, name \"%s\"\n", dev_name + 4); mtd = get_mtd_device_nm(dev_name + 4); @@ -164,7 +164,7 @@ struct dentry *mount_mtd(struct file_system_type *fs_type, int flags, mtdnr = simple_strtoul(dev_name + 3, &endptr, 0); if (!*endptr) { /* It was a valid number */ - DEBUG(1, "MTDSB: mtd%%d, mtdnr %d\n", + pr_debug("MTDSB: mtd%%d, mtdnr %d\n", mtdnr); return mount_mtd_nr(fs_type, flags, dev_name, data, @@ -180,10 +180,10 @@ struct dentry *mount_mtd(struct file_system_type *fs_type, int flags, bdev = lookup_bdev(dev_name); if (IS_ERR(bdev)) { ret = PTR_ERR(bdev); - DEBUG(1, "MTDSB: lookup_bdev() returned %d\n", ret); + pr_debug("MTDSB: lookup_bdev() returned %d\n", ret); return ERR_PTR(ret); } - DEBUG(1, "MTDSB: lookup_bdev() returned 0\n"); + pr_debug("MTDSB: lookup_bdev() returned 0\n"); ret = -EINVAL; diff --git a/drivers/mtd/mtdswap.c b/drivers/mtd/mtdswap.c index fd7885327611..bd9590c723e4 100644 --- a/drivers/mtd/mtdswap.c +++ b/drivers/mtd/mtdswap.c @@ -86,7 +86,7 @@ struct swap_eb { unsigned int flags; unsigned int active_count; unsigned int erase_count; - unsigned int pad; /* speeds up pointer decremtnt */ + unsigned int pad; /* speeds up pointer decrement */ }; #define MTDSWAP_ECNT_MIN(rbroot) (rb_entry(rb_first(rbroot), struct swap_eb, \ @@ -314,7 +314,7 @@ static int mtdswap_read_oob(struct mtdswap_dev *d, loff_t from, { int ret = d->mtd->read_oob(d->mtd, from, ops); - if (ret == -EUCLEAN) + if (mtd_is_bitflip(ret)) return ret; if (ret) { @@ -350,11 +350,11 @@ static int mtdswap_read_markers(struct mtdswap_dev *d, struct swap_eb *eb) ops.oobbuf = d->oob_buf; ops.ooboffs = 0; ops.datbuf = NULL; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ret = mtdswap_read_oob(d, offset, &ops); - if (ret && ret != -EUCLEAN) + if (ret && !mtd_is_bitflip(ret)) return ret; data = (struct mtdswap_oobdata *)d->oob_buf; @@ -363,7 +363,7 @@ static int mtdswap_read_markers(struct mtdswap_dev *d, struct swap_eb *eb) if (le16_to_cpu(data->magic) == MTDSWAP_MAGIC_CLEAN) { eb->erase_count = le32_to_cpu(data->count); - if (ret == -EUCLEAN) + if (mtd_is_bitflip(ret)) ret = MTDSWAP_SCANNED_BITFLIP; else { if (le16_to_cpu(data2->magic) == MTDSWAP_MAGIC_DIRTY) @@ -389,7 +389,7 @@ static int mtdswap_write_marker(struct mtdswap_dev *d, struct swap_eb *eb, ops.ooboffs = 0; ops.oobbuf = (uint8_t *)&n; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.datbuf = NULL; if (marker == 
MTDSWAP_TYPE_CLEAN) { @@ -408,7 +408,7 @@ static int mtdswap_write_marker(struct mtdswap_dev *d, struct swap_eb *eb, if (ret) { dev_warn(d->dev, "Write OOB failed for block at %08llx " "error %d\n", offset, ret); - if (ret == -EIO || ret == -EBADMSG) + if (ret == -EIO || mtd_is_eccerr(ret)) mtdswap_handle_write_error(d, eb); return ret; } @@ -628,7 +628,7 @@ static int mtdswap_map_free_block(struct mtdswap_dev *d, unsigned int page, TREE_COUNT(d, CLEAN)--; ret = mtdswap_write_marker(d, eb, MTDSWAP_TYPE_DIRTY); - } while (ret == -EIO || ret == -EBADMSG); + } while (ret == -EIO || mtd_is_eccerr(ret)); if (ret) return ret; @@ -678,7 +678,7 @@ retry: ret = mtdswap_map_free_block(d, page, bp); eb = d->eb_data + (*bp / d->pages_per_eblk); - if (ret == -EIO || ret == -EBADMSG) { + if (ret == -EIO || mtd_is_eccerr(ret)) { d->curr_write = NULL; eb->active_count--; d->revmap[*bp] = PAGE_UNDEF; @@ -690,7 +690,7 @@ retry: writepos = (loff_t)*bp << PAGE_SHIFT; ret = mtd->write(mtd, writepos, PAGE_SIZE, &retlen, buf); - if (ret == -EIO || ret == -EBADMSG) { + if (ret == -EIO || mtd_is_eccerr(ret)) { d->curr_write_pos--; eb->active_count--; d->revmap[*bp] = PAGE_UNDEF; @@ -738,7 +738,7 @@ static int mtdswap_move_block(struct mtdswap_dev *d, unsigned int oldblock, retry: ret = mtd->read(mtd, readpos, PAGE_SIZE, &retlen, d->page_buf); - if (ret < 0 && ret != -EUCLEAN) { + if (ret < 0 && !mtd_is_bitflip(ret)) { oldeb = d->eb_data + oldblock / d->pages_per_eblk; oldeb->flags |= EBLOCK_READERR; @@ -931,7 +931,7 @@ static unsigned int mtdswap_eblk_passes(struct mtdswap_dev *d, struct mtd_oob_ops ops; int ret; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = mtd->writesize; ops.ooblen = mtd->ecclayout->oobavail; ops.ooboffs = 0; @@ -1016,7 +1016,7 @@ static int mtdswap_gc(struct mtdswap_dev *d, unsigned int background) if (ret == 0) mtdswap_rb_add(d, eb, MTDSWAP_CLEAN); - else if (ret != -EIO && ret != -EBADMSG) + else if (ret != -EIO && !mtd_is_eccerr(ret)) mtdswap_rb_add(d, eb, MTDSWAP_DIRTY); return 0; @@ -1164,7 +1164,7 @@ retry: ret = mtd->read(mtd, readpos, PAGE_SIZE, &retlen, buf); d->mtd_read_count++; - if (ret == -EUCLEAN) { + if (mtd_is_bitflip(ret)) { eb->flags |= EBLOCK_BITFLIP; mtdswap_rb_add(d, eb, MTDSWAP_BITFLIP); ret = 0; @@ -1374,11 +1374,10 @@ static int mtdswap_init(struct mtdswap_dev *d, unsigned int eblocks, goto revmap_fail; eblk_bytes = sizeof(struct swap_eb)*d->eblks; - d->eb_data = vmalloc(eblk_bytes); + d->eb_data = vzalloc(eblk_bytes); if (!d->eb_data) goto eb_data_fail; - memset(d->eb_data, 0, eblk_bytes); for (i = 0; i < pages; i++) d->page_data[i] = BLOCK_UNDEF; diff --git a/drivers/mtd/nand/Kconfig b/drivers/mtd/nand/Kconfig index dbfa0f7fb464..cce7b70824c3 100644 --- a/drivers/mtd/nand/Kconfig +++ b/drivers/mtd/nand/Kconfig @@ -83,16 +83,9 @@ config MTD_NAND_DENALI_SCRATCH_REG_ADDR scratch register here to enable this feature. On Intel Moorestown boards, the scratch register is at 0xFF108018. -config MTD_NAND_EDB7312 - tristate "Support for Cirrus Logic EBD7312 evaluation board" - depends on ARCH_EDB7312 - help - This enables the driver for the Cirrus Logic EBD7312 evaluation - board to access the onboard NAND Flash. - config MTD_NAND_H1900 tristate "iPAQ H1900 flash" - depends on ARCH_PXA + depends on ARCH_PXA && BROKEN help This enables the driver for the iPAQ h1900 flash. @@ -116,10 +109,11 @@ config MTD_NAND_AMS_DELTA Support for NAND flash on Amstrad E3 (Delta). 
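The driver hunks further below (atmel_nand, bcm_umi_nand, cafe_nand, cmx270_nand, cs553x_nand, davinci_nand and the map drivers earlier) all repeat one conversion: the open-coded parse_mtd_partitions() + fallback table + mtd_device_register() sequence is collapsed into a single mtd_device_parse_register() call, whose kerneldoc appears in the mtdcore.c hunk above. A minimal sketch of the pattern follows; the myflash_* names are hypothetical stand-ins, not taken from any driver in this diff.

/* Hypothetical probe excerpt; only the registration step is shown. */
static const char *myflash_part_probes[] = { "cmdlinepart", NULL };

static int myflash_register(struct mtd_info *mtd,
			    const struct mtd_partition *fallback_parts,
			    int nr_fallback_parts)
{
	/*
	 * Old pattern, removed driver by driver in the hunks below:
	 *
	 *	nr = parse_mtd_partitions(mtd, myflash_part_probes, &parts, 0);
	 *	if (nr <= 0) {
	 *		parts = fallback_parts;
	 *		nr = nr_fallback_parts;
	 *	}
	 *	return mtd_device_register(mtd, parts, nr);
	 *
	 * New pattern: the parsers are tried first (passing NULL for the
	 * parser list selects the default "cmdlinepart"/"ofpart" set), and
	 * the fallback table is used only when parsing finds nothing.
	 */
	return mtd_device_parse_register(mtd, myflash_part_probes, NULL,
					 fallback_parts, nr_fallback_parts);
}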
config MTD_NAND_OMAP2 - tristate "NAND Flash device on OMAP2 and OMAP3" - depends on ARM && (ARCH_OMAP2 || ARCH_OMAP3) + tristate "NAND Flash device on OMAP2, OMAP3 and OMAP4" + depends on ARM && (ARCH_OMAP2 || ARCH_OMAP3 || ARCH_OMAP4) help - Support for NAND flash on Texas Instruments OMAP2 and OMAP3 platforms. + Support for NAND flash on Texas Instruments OMAP2, OMAP3 and OMAP4 + platforms. config MTD_NAND_IDS tristate @@ -423,6 +417,19 @@ config MTD_NAND_NANDSIM The simulator may simulate various NAND flash chips for the MTD nand layer. +config MTD_NAND_GPMI_NAND + bool "GPMI NAND Flash Controller driver" + depends on MTD_NAND && (SOC_IMX23 || SOC_IMX28) + select MTD_PARTITIONS + select MTD_CMDLINE_PARTS + help + Enables NAND Flash support for IMX23 or IMX28. + The GPMI controller is very powerful; with the help of the BCH + module, it can do hardware ECC. The GPMI supports several + NAND flashes at the same time. The GPMI may conflict with other + blocks, such as the SD card, so pay attention to it when you enable + the GPMI. + config MTD_NAND_PLATFORM tristate "Support for generic platform NAND driver" help diff --git a/drivers/mtd/nand/Makefile b/drivers/mtd/nand/Makefile index 5745d831168e..618f4ba23699 100644 --- a/drivers/mtd/nand/Makefile +++ b/drivers/mtd/nand/Makefile @@ -13,7 +13,6 @@ obj-$(CONFIG_MTD_NAND_SPIA) += spia.o obj-$(CONFIG_MTD_NAND_AMS_DELTA) += ams-delta.o obj-$(CONFIG_MTD_NAND_AUTCPU12) += autcpu12.o obj-$(CONFIG_MTD_NAND_DENALI) += denali.o -obj-$(CONFIG_MTD_NAND_EDB7312) += edb7312.o obj-$(CONFIG_MTD_NAND_AU1550) += au1550nd.o obj-$(CONFIG_MTD_NAND_BF5XX) += bf5xx_nand.o obj-$(CONFIG_MTD_NAND_PPCHAMELEONEVB) += ppchameleonevb.o @@ -49,5 +48,6 @@ obj-$(CONFIG_MTD_NAND_BCM_UMI) += bcm_umi_nand.o nand_bcm_umi.o obj-$(CONFIG_MTD_NAND_MPC5121_NFC) += mpc5121_nfc.o obj-$(CONFIG_MTD_NAND_RICOH) += r852.o obj-$(CONFIG_MTD_NAND_JZ4740) += jz4740_nand.o +obj-$(CONFIG_MTD_NAND_GPMI_NAND) += gpmi-nand/ nand-objs := nand_base.o nand_bbt.o diff --git a/drivers/mtd/nand/atmel_nand.c b/drivers/mtd/nand/atmel_nand.c index 55da20ccc7a8..23e5d77c39fc 100644 --- a/drivers/mtd/nand/atmel_nand.c +++ b/drivers/mtd/nand/atmel_nand.c @@ -161,37 +161,6 @@ static int atmel_nand_device_ready(struct mtd_info *mtd) !!host->board->rdy_pin_active_low; } -/* - * Minimal-overhead PIO for data access.
- */ -static void atmel_read_buf8(struct mtd_info *mtd, u8 *buf, int len) -{ - struct nand_chip *nand_chip = mtd->priv; - - __raw_readsb(nand_chip->IO_ADDR_R, buf, len); -} - -static void atmel_read_buf16(struct mtd_info *mtd, u8 *buf, int len) -{ - struct nand_chip *nand_chip = mtd->priv; - - __raw_readsw(nand_chip->IO_ADDR_R, buf, len / 2); -} - -static void atmel_write_buf8(struct mtd_info *mtd, const u8 *buf, int len) -{ - struct nand_chip *nand_chip = mtd->priv; - - __raw_writesb(nand_chip->IO_ADDR_W, buf, len); -} - -static void atmel_write_buf16(struct mtd_info *mtd, const u8 *buf, int len) -{ - struct nand_chip *nand_chip = mtd->priv; - - __raw_writesw(nand_chip->IO_ADDR_W, buf, len / 2); -} - static void dma_complete_func(void *completion) { complete(completion); @@ -266,33 +235,27 @@ err_buf: static void atmel_read_buf(struct mtd_info *mtd, u8 *buf, int len) { struct nand_chip *chip = mtd->priv; - struct atmel_nand_host *host = chip->priv; if (use_dma && len > mtd->oobsize) /* only use DMA for bigger than oob size: better performances */ if (atmel_nand_dma_op(mtd, buf, len, 1) == 0) return; - if (host->board->bus_width_16) - atmel_read_buf16(mtd, buf, len); - else - atmel_read_buf8(mtd, buf, len); + /* if no DMA operation possible, use PIO */ + memcpy_fromio(buf, chip->IO_ADDR_R, len); } static void atmel_write_buf(struct mtd_info *mtd, const u8 *buf, int len) { struct nand_chip *chip = mtd->priv; - struct atmel_nand_host *host = chip->priv; if (use_dma && len > mtd->oobsize) /* only use DMA for bigger than oob size: better performances */ if (atmel_nand_dma_op(mtd, (void *)buf, len, 0) == 0) return; - if (host->board->bus_width_16) - atmel_write_buf16(mtd, buf, len); - else - atmel_write_buf8(mtd, buf, len); + /* if no DMA operation possible, use PIO */ + memcpy_toio(chip->IO_ADDR_W, buf, len); } /* @@ -481,10 +444,6 @@ static void atmel_nand_hwctl(struct mtd_info *mtd, int mode) } } -#ifdef CONFIG_MTD_CMDLINE_PARTS -static const char *part_probes[] = { "cmdlinepart", NULL }; -#endif - /* * Probe for the NAND device. 
*/ @@ -496,8 +455,6 @@ static int __init atmel_nand_probe(struct platform_device *pdev) struct resource *regs; struct resource *mem; int res; - struct mtd_partition *partitions = NULL; - int num_partitions = 0; mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!mem) { @@ -583,7 +540,7 @@ static int __init atmel_nand_probe(struct platform_device *pdev) if (on_flash_bbt) { printk(KERN_INFO "atmel_nand: Use On Flash BBT\n"); - nand_chip->options |= NAND_USE_FLASH_BBT; + nand_chip->bbt_options |= NAND_BBT_USE_FLASH; } if (!cpu_has_dma()) @@ -594,7 +551,7 @@ static int __init atmel_nand_probe(struct platform_device *pdev) dma_cap_zero(mask); dma_cap_set(DMA_MEMCPY, mask); - host->dma_chan = dma_request_channel(mask, 0, NULL); + host->dma_chan = dma_request_channel(mask, NULL, NULL); if (!host->dma_chan) { dev_err(host->dev, "Failed to request DMA channel\n"); use_dma = 0; @@ -655,27 +612,12 @@ static int __init atmel_nand_probe(struct platform_device *pdev) goto err_scan_tail; } -#ifdef CONFIG_MTD_CMDLINE_PARTS mtd->name = "atmel_nand"; - num_partitions = parse_mtd_partitions(mtd, part_probes, - &partitions, 0); -#endif - if (num_partitions <= 0 && host->board->partition_info) - partitions = host->board->partition_info(mtd->size, - &num_partitions); - - if ((!partitions) || (num_partitions == 0)) { - printk(KERN_ERR "atmel_nand: No partitions defined, or unsupported device.\n"); - res = -ENXIO; - goto err_no_partitions; - } - - res = mtd_device_register(mtd, partitions, num_partitions); + res = mtd_device_parse_register(mtd, NULL, 0, + host->board->parts, host->board->num_parts); if (!res) return res; -err_no_partitions: - nand_release(mtd); err_scan_tail: err_scan_ident: err_no_card: diff --git a/drivers/mtd/nand/au1550nd.c b/drivers/mtd/nand/au1550nd.c index fa5736b9286c..7dd3700f2303 100644 --- a/drivers/mtd/nand/au1550nd.c +++ b/drivers/mtd/nand/au1550nd.c @@ -52,7 +52,7 @@ static const struct mtd_partition partition_info[] = { * au_read_byte - read one byte from the chip * @mtd: MTD device structure * - * read function for 8bit buswith + * read function for 8bit buswidth */ static u_char au_read_byte(struct mtd_info *mtd) { @@ -67,7 +67,7 @@ static u_char au_read_byte(struct mtd_info *mtd) * @mtd: MTD device structure * @byte: pointer to data byte to write * - * write function for 8it buswith + * write function for 8it buswidth */ static void au_write_byte(struct mtd_info *mtd, u_char byte) { @@ -77,11 +77,10 @@ static void au_write_byte(struct mtd_info *mtd, u_char byte) } /** - * au_read_byte16 - read one byte endianess aware from the chip + * au_read_byte16 - read one byte endianness aware from the chip * @mtd: MTD device structure * - * read function for 16bit buswith with - * endianess conversion + * read function for 16bit buswidth with endianness conversion */ static u_char au_read_byte16(struct mtd_info *mtd) { @@ -92,12 +91,11 @@ static u_char au_read_byte16(struct mtd_info *mtd) } /** - * au_write_byte16 - write one byte endianess aware to the chip + * au_write_byte16 - write one byte endianness aware to the chip * @mtd: MTD device structure * @byte: pointer to data byte to write * - * write function for 16bit buswith with - * endianess conversion + * write function for 16bit buswidth with endianness conversion */ static void au_write_byte16(struct mtd_info *mtd, u_char byte) { @@ -110,8 +108,7 @@ static void au_write_byte16(struct mtd_info *mtd, u_char byte) * au_read_word - read one word from the chip * @mtd: MTD device structure * - * read function for 16bit buswith 
without - * endianess conversion + * read function for 16bit buswidth without endianness conversion */ static u16 au_read_word(struct mtd_info *mtd) { @@ -127,7 +124,7 @@ static u16 au_read_word(struct mtd_info *mtd) * @buf: data buffer * @len: number of bytes to write * - * write function for 8bit buswith + * write function for 8bit buswidth */ static void au_write_buf(struct mtd_info *mtd, const u_char *buf, int len) { @@ -146,7 +143,7 @@ static void au_write_buf(struct mtd_info *mtd, const u_char *buf, int len) * @buf: buffer to store date * @len: number of bytes to read * - * read function for 8bit buswith + * read function for 8bit buswidth */ static void au_read_buf(struct mtd_info *mtd, u_char *buf, int len) { @@ -165,7 +162,7 @@ static void au_read_buf(struct mtd_info *mtd, u_char *buf, int len) * @buf: buffer containing the data to compare * @len: number of bytes to compare * - * verify function for 8bit buswith + * verify function for 8bit buswidth */ static int au_verify_buf(struct mtd_info *mtd, const u_char *buf, int len) { @@ -187,7 +184,7 @@ static int au_verify_buf(struct mtd_info *mtd, const u_char *buf, int len) * @buf: data buffer * @len: number of bytes to write * - * write function for 16bit buswith + * write function for 16bit buswidth */ static void au_write_buf16(struct mtd_info *mtd, const u_char *buf, int len) { @@ -209,7 +206,7 @@ static void au_write_buf16(struct mtd_info *mtd, const u_char *buf, int len) * @buf: buffer to store date * @len: number of bytes to read * - * read function for 16bit buswith + * read function for 16bit buswidth */ static void au_read_buf16(struct mtd_info *mtd, u_char *buf, int len) { @@ -230,7 +227,7 @@ static void au_read_buf16(struct mtd_info *mtd, u_char *buf, int len) * @buf: buffer containing the data to compare * @len: number of bytes to compare * - * verify function for 16bit buswith + * verify function for 16bit buswidth */ static int au_verify_buf16(struct mtd_info *mtd, const u_char *buf, int len) { diff --git a/drivers/mtd/nand/autcpu12.c b/drivers/mtd/nand/autcpu12.c index eddc9a224985..2e42ec2e8ff4 100644 --- a/drivers/mtd/nand/autcpu12.c +++ b/drivers/mtd/nand/autcpu12.c @@ -172,9 +172,9 @@ static int __init autcpu12_init(void) /* Enable the following for a flash based bad block table */ /* - this->options = NAND_USE_FLASH_BBT; + this->bbt_options = NAND_BBT_USE_FLASH; */ - this->options = NAND_USE_FLASH_BBT; + this->bbt_options = NAND_BBT_USE_FLASH; /* Scan to find existence of the device */ if (nand_scan(autcpu12_mtd, 1)) { diff --git a/drivers/mtd/nand/bcm_umi_nand.c b/drivers/mtd/nand/bcm_umi_nand.c index 8c569e454dc5..46b58d672847 100644 --- a/drivers/mtd/nand/bcm_umi_nand.c +++ b/drivers/mtd/nand/bcm_umi_nand.c @@ -52,8 +52,6 @@ static const __devinitconst char gBanner[] = KERN_INFO \ "BCM UMI MTD NAND Driver: 1.00\n"; -const char *part_probes[] = { "cmdlinepart", NULL }; - #if NAND_ECC_BCH static uint8_t scan_ff_pattern[] = { 0xff }; @@ -376,16 +374,18 @@ static int __devinit bcm_umi_nand_probe(struct platform_device *pdev) r = platform_get_resource(pdev, IORESOURCE_MEM, 0); - if (!r) - return -ENXIO; + if (!r) { + err = -ENXIO; + goto out_free; + } /* map physical address */ bcm_umi_io_base = ioremap(r->start, resource_size(r)); if (!bcm_umi_io_base) { printk(KERN_ERR "ioremap to access BCM UMI NAND chip failed\n"); - kfree(board_mtd); - return -EIO; + err = -EIO; + goto out_free; } /* Get pointer to private data */ @@ -401,9 +401,8 @@ static int __devinit bcm_umi_nand_probe(struct platform_device *pdev) /* 
Initialize the NAND hardware. */ if (bcm_umi_nand_inithw() < 0) { printk(KERN_ERR "BCM UMI NAND chip could not be initialized\n"); - iounmap(bcm_umi_io_base); - kfree(board_mtd); - return -EIO; + err = -EIO; + goto out_unmap; } /* Set address of NAND IO lines */ @@ -436,7 +435,7 @@ static int __devinit bcm_umi_nand_probe(struct platform_device *pdev) #if USE_DMA err = nand_dma_init(); if (err != 0) - return err; + goto out_unmap; #endif /* Figure out the size of the device that we have. @@ -447,9 +446,7 @@ static int __devinit bcm_umi_nand_probe(struct platform_device *pdev) err = nand_scan_ident(board_mtd, 1, NULL); if (err) { printk(KERN_ERR "nand_scan failed: %d\n", err); - iounmap(bcm_umi_io_base); - kfree(board_mtd); - return err; + goto out_unmap; } /* Now that we know the nand size, we can setup the ECC layout */ @@ -468,13 +465,14 @@ static int __devinit bcm_umi_nand_probe(struct platform_device *pdev) { printk(KERN_ERR "NAND - Unrecognized pagesize: %d\n", board_mtd->writesize); - return -EINVAL; + err = -EINVAL; + goto out_unmap; } } #if NAND_ECC_BCH if (board_mtd->writesize > 512) { - if (this->options & NAND_USE_FLASH_BBT) + if (this->bbt_options & NAND_BBT_USE_FLASH) largepage_bbt.options = NAND_BBT_SCAN2NDPAGE; this->badblock_pattern = &largepage_bbt; } @@ -485,33 +483,20 @@ static int __devinit bcm_umi_nand_probe(struct platform_device *pdev) err = nand_scan_tail(board_mtd); if (err) { printk(KERN_ERR "nand_scan failed: %d\n", err); - iounmap(bcm_umi_io_base); - kfree(board_mtd); - return err; + goto out_unmap; } /* Register the partitions */ - { - int nr_partitions; - struct mtd_partition *partition_info; - - board_mtd->name = "bcm_umi-nand"; - nr_partitions = - parse_mtd_partitions(board_mtd, part_probes, - &partition_info, 0); - - if (nr_partitions <= 0) { - printk(KERN_ERR "BCM UMI NAND: Too few partitions - %d\n", - nr_partitions); - iounmap(bcm_umi_io_base); - kfree(board_mtd); - return -EIO; - } - mtd_device_register(board_mtd, partition_info, nr_partitions); - } + board_mtd->name = "bcm_umi-nand"; + mtd_device_parse_register(board_mtd, NULL, 0, NULL, 0); /* Return happy */ return 0; +out_unmap: + iounmap(bcm_umi_io_base); +out_free: + kfree(board_mtd); + return err; } static int bcm_umi_nand_remove(struct platform_device *pdev) diff --git a/drivers/mtd/nand/cafe_nand.c b/drivers/mtd/nand/cafe_nand.c index 7c8df837d3b8..72d3f23490c5 100644 --- a/drivers/mtd/nand/cafe_nand.c +++ b/drivers/mtd/nand/cafe_nand.c @@ -58,7 +58,6 @@ struct cafe_priv { struct nand_chip nand; - struct mtd_partition *parts; struct pci_dev *pdev; void __iomem *mmio; struct rs_control *rs; @@ -372,7 +371,7 @@ static int cafe_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip, return 1; } /** - * cafe_nand_read_page_syndrome - {REPLACABLE] hardware ecc syndrom based page read + * cafe_nand_read_page_syndrome - [REPLACEABLE] hardware ecc syndrome based page read * @mtd: mtd info structure * @chip: nand chip info structure * @buf: buffer to store read data @@ -631,8 +630,6 @@ static int __devinit cafe_nand_probe(struct pci_dev *pdev, struct cafe_priv *cafe; uint32_t ctrl; int err = 0; - struct mtd_partition *parts; - int nr_parts; /* Very old versions shared the same PCI ident for all three functions on the chip. Verify the class too... 
*/ @@ -687,7 +684,8 @@ static int __devinit cafe_nand_probe(struct pci_dev *pdev, cafe->nand.chip_delay = 0; /* Enable the following for a flash based bad block table */ - cafe->nand.options = NAND_USE_FLASH_BBT | NAND_NO_AUTOINCR | NAND_OWN_BUFFERS; + cafe->nand.bbt_options = NAND_BBT_USE_FLASH; + cafe->nand.options = NAND_NO_AUTOINCR | NAND_OWN_BUFFERS; if (skipbbt) { cafe->nand.options |= NAND_SKIP_BBTSCAN; @@ -800,18 +798,9 @@ static int __devinit cafe_nand_probe(struct pci_dev *pdev, pci_set_drvdata(pdev, mtd); - /* We register the whole device first, separate from the partitions */ - mtd_device_register(mtd, NULL, 0); - -#ifdef CONFIG_MTD_CMDLINE_PARTS mtd->name = "cafe_nand"; -#endif - nr_parts = parse_mtd_partitions(mtd, part_probes, &parts, 0); - if (nr_parts > 0) { - cafe->parts = parts; - dev_info(&cafe->pdev->dev, "%d partitions found\n", nr_parts); - mtd_device_register(mtd, parts, nr_parts); - } + mtd_device_parse_register(mtd, part_probes, 0, NULL, 0); + goto out; out_irq: diff --git a/drivers/mtd/nand/cmx270_nand.c b/drivers/mtd/nand/cmx270_nand.c index be33b0f4634d..737ef9a04fdb 100644 --- a/drivers/mtd/nand/cmx270_nand.c +++ b/drivers/mtd/nand/cmx270_nand.c @@ -51,8 +51,6 @@ static struct mtd_partition partition_info[] = { }; #define NUM_PARTITIONS (ARRAY_SIZE(partition_info)) -const char *part_probes[] = { "cmdlinepart", NULL }; - static u_char cmx270_read_byte(struct mtd_info *mtd) { struct nand_chip *this = mtd->priv; @@ -152,9 +150,6 @@ static int cmx270_device_ready(struct mtd_info *mtd) static int __init cmx270_init(void) { struct nand_chip *this; - const char *part_type; - struct mtd_partition *mtd_parts; - int mtd_parts_nb = 0; int ret; if (!(machine_is_armcore() && cpu_is_pxa27x())) @@ -223,23 +218,9 @@ static int __init cmx270_init(void) goto err_scan; } -#ifdef CONFIG_MTD_CMDLINE_PARTS - mtd_parts_nb = parse_mtd_partitions(cmx270_nand_mtd, part_probes, - &mtd_parts, 0); - if (mtd_parts_nb > 0) - part_type = "command line"; - else - mtd_parts_nb = 0; -#endif - if (!mtd_parts_nb) { - mtd_parts = partition_info; - mtd_parts_nb = NUM_PARTITIONS; - part_type = "static"; - } - /* Register the partitions */ - pr_notice("Using %s partition definition\n", part_type); - ret = mtd_device_register(cmx270_nand_mtd, mtd_parts, mtd_parts_nb); + ret = mtd_device_parse_register(cmx270_nand_mtd, NULL, 0, + partition_info, NUM_PARTITIONS); if (ret) goto err_scan; diff --git a/drivers/mtd/nand/cs553x_nand.c b/drivers/mtd/nand/cs553x_nand.c index f59ad1f2d5db..414afa793563 100644 --- a/drivers/mtd/nand/cs553x_nand.c +++ b/drivers/mtd/nand/cs553x_nand.c @@ -239,7 +239,8 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr) this->ecc.correct = nand_correct_data; /* Enable the following for a flash based bad block table */ - this->options = NAND_USE_FLASH_BBT | NAND_NO_AUTOINCR; + this->bbt_options = NAND_BBT_USE_FLASH; + this->options = NAND_NO_AUTOINCR; /* Scan to find existence of the device */ if (nand_scan(new_mtd, 1)) { @@ -277,15 +278,11 @@ static int is_geode(void) return 0; } -static const char *part_probes[] = { "cmdlinepart", NULL }; - static int __init cs553x_init(void) { int err = -ENXIO; int i; uint64_t val; - int mtd_parts_nb = 0; - struct mtd_partition *mtd_parts = NULL; /* If the CPU isn't a Geode GX or LX, abort */ if (!is_geode()) @@ -315,13 +312,9 @@ static int __init cs553x_init(void) do mtdconcat etc. if we want to. */ for (i = 0; i < NR_CS553X_CONTROLLERS; i++) { if (cs553x_mtd[i]) { - /* If any devices registered, return success. 
Else the last error. */ - mtd_parts_nb = parse_mtd_partitions(cs553x_mtd[i], part_probes, &mtd_parts, 0); - if (mtd_parts_nb > 0) - printk(KERN_NOTICE "Using command line partition definition\n"); - mtd_device_register(cs553x_mtd[i], mtd_parts, - mtd_parts_nb); + mtd_device_parse_register(cs553x_mtd[i], NULL, 0, + NULL, 0); err = 0; } } diff --git a/drivers/mtd/nand/davinci_nand.c b/drivers/mtd/nand/davinci_nand.c index 1f34951ae1a7..c153e1f77f90 100644 --- a/drivers/mtd/nand/davinci_nand.c +++ b/drivers/mtd/nand/davinci_nand.c @@ -57,7 +57,6 @@ struct davinci_nand_info { struct device *dev; struct clk *clk; - bool partitioned; bool is_readmode; @@ -530,8 +529,6 @@ static int __init nand_davinci_probe(struct platform_device *pdev) int ret; uint32_t val; nand_ecc_modes_t ecc_mode; - struct mtd_partition *mtd_parts = NULL; - int mtd_parts_nb = 0; /* insist on board-specific configuration */ if (!pdata) @@ -581,7 +578,9 @@ static int __init nand_davinci_probe(struct platform_device *pdev) info->chip.chip_delay = 0; info->chip.select_chip = nand_davinci_select_chip; - /* options such as NAND_USE_FLASH_BBT or 16-bit widths */ + /* options such as NAND_BBT_USE_FLASH */ + info->chip.bbt_options = pdata->bbt_options; + /* options such as 16-bit widths */ info->chip.options = pdata->options; info->chip.bbt_td = pdata->bbt_td; info->chip.bbt_md = pdata->bbt_md; @@ -751,33 +750,8 @@ syndrome_done: if (ret < 0) goto err_scan; - if (mtd_has_cmdlinepart()) { - static const char *probes[] __initconst = { - "cmdlinepart", NULL - }; - - mtd_parts_nb = parse_mtd_partitions(&info->mtd, probes, - &mtd_parts, 0); - } - - if (mtd_parts_nb <= 0) { - mtd_parts = pdata->parts; - mtd_parts_nb = pdata->nr_parts; - } - - /* Register any partitions */ - if (mtd_parts_nb > 0) { - ret = mtd_device_register(&info->mtd, mtd_parts, - mtd_parts_nb); - if (ret == 0) - info->partitioned = true; - } - - /* If there's no partition info, just package the whole chip - * as a single MTD device. - */ - if (!info->partitioned) - ret = mtd_device_register(&info->mtd, NULL, 0) ? 
-ENODEV : 0; + ret = mtd_device_parse_register(&info->mtd, NULL, 0, + pdata->parts, pdata->nr_parts); if (ret < 0) goto err_scan; @@ -816,9 +790,6 @@ err_nomem: static int __exit nand_davinci_remove(struct platform_device *pdev) { struct davinci_nand_info *info = platform_get_drvdata(pdev); - int status; - - status = mtd_device_unregister(&info->mtd); spin_lock_irq(&davinci_nand_lock); if (info->chip.ecc.mode == NAND_ECC_HW_SYNDROME) diff --git a/drivers/mtd/nand/denali.c b/drivers/mtd/nand/denali.c index d5276218945f..3984d488f9ab 100644 --- a/drivers/mtd/nand/denali.c +++ b/drivers/mtd/nand/denali.c @@ -1346,6 +1346,7 @@ static void denali_hw_init(struct denali_nand_info *denali) * */ denali->bbtskipbytes = ioread32(denali->flash_reg + SPARE_AREA_SKIP_BYTES); + detect_max_banks(denali); denali_nand_reset(denali); iowrite32(0x0F, denali->flash_reg + RB_PIN_ENABLED); iowrite32(CHIP_EN_DONT_CARE__FLAG, @@ -1356,7 +1357,6 @@ static void denali_hw_init(struct denali_nand_info *denali) /* Should set value for these registers when init */ iowrite32(0, denali->flash_reg + TWO_ROW_ADDR_CYCLES); iowrite32(1, denali->flash_reg + ECC_ENABLE); - detect_max_banks(denali); denali_nand_timing_set(denali); denali_irq_init(denali); } @@ -1577,7 +1577,8 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) denali->nand.bbt_md = &bbt_mirror_descr; /* skip the scan for now until we have OOB read and write support */ - denali->nand.options |= NAND_USE_FLASH_BBT | NAND_SKIP_BBTSCAN; + denali->nand.bbt_options |= NAND_BBT_USE_FLASH; + denali->nand.options |= NAND_SKIP_BBTSCAN; denali->nand.ecc.mode = NAND_ECC_HW_SYNDROME; /* Denali Controller only support 15bit and 8bit ECC in MRST, @@ -1676,7 +1677,6 @@ static void denali_pci_remove(struct pci_dev *dev) struct denali_nand_info *denali = pci_get_drvdata(dev); nand_release(&denali->mtd); - mtd_device_unregister(&denali->mtd); denali_irq_cleanup(dev->irq, denali); diff --git a/drivers/mtd/nand/diskonchip.c b/drivers/mtd/nand/diskonchip.c index e1b84cb90f0d..5780dbab6113 100644 --- a/drivers/mtd/nand/diskonchip.c +++ b/drivers/mtd/nand/diskonchip.c @@ -133,7 +133,7 @@ static struct rs_control *rs_decoder; /* * The HW decoder in the DoC ASIC's provides us a error syndrome, - * which we must convert to a standard syndrom usable by the generic + * which we must convert to a standard syndrome usable by the generic * Reed-Solomon library code. * * Fabrice Bellard figured this out in the old docecc code. 
I added @@ -154,7 +154,7 @@ static int doc_ecc_decode(struct rs_control *rs, uint8_t *data, uint8_t *ecc) ds[3] = ((ecc[3] & 0xc0) >> 6) | ((ecc[0] & 0xff) << 2); parity = ecc[1]; - /* Initialize the syndrom buffer */ + /* Initialize the syndrome buffer */ for (i = 0; i < NROOTS; i++) s[i] = ds[0]; /* @@ -1032,7 +1032,7 @@ static int doc200x_correct_data(struct mtd_info *mtd, u_char *dat, WriteDOC(DOC_ECC_DIS, docptr, Mplus_ECCConf); else WriteDOC(DOC_ECC_DIS, docptr, ECCConf); - if (no_ecc_failures && (ret == -EBADMSG)) { + if (no_ecc_failures && mtd_is_eccerr(ret)) { printk(KERN_ERR "suppressing ECC failure\n"); ret = 0; } @@ -1653,7 +1653,7 @@ static int __init doc_probe(unsigned long physadr) nand->ecc.mode = NAND_ECC_HW_SYNDROME; nand->ecc.size = 512; nand->ecc.bytes = 6; - nand->options = NAND_USE_FLASH_BBT; + nand->bbt_options = NAND_BBT_USE_FLASH; doc->physadr = physadr; doc->virtadr = virtadr; diff --git a/drivers/mtd/nand/edb7312.c b/drivers/mtd/nand/edb7312.c deleted file mode 100644 index 8400d0f6dada..000000000000 --- a/drivers/mtd/nand/edb7312.c +++ /dev/null @@ -1,203 +0,0 @@ -/* - * drivers/mtd/nand/edb7312.c - * - * Copyright (C) 2002 Marius Gröger (mag@sysgo.de) - * - * Derived from drivers/mtd/nand/autcpu12.c - * Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de) - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * Overview: - * This is a device driver for the NAND flash device found on the - * CLEP7312 board which utilizes the Toshiba TC58V64AFT part. This is - * a 64Mibit (8MiB x 8 bits) NAND flash device. - */ - -#include <linux/slab.h> -#include <linux/module.h> -#include <linux/init.h> -#include <linux/mtd/mtd.h> -#include <linux/mtd/nand.h> -#include <linux/mtd/partitions.h> -#include <asm/io.h> -#include <mach/hardware.h> /* for CLPS7111_VIRT_BASE */ -#include <asm/sizes.h> -#include <asm/hardware/clps7111.h> - -/* - * MTD structure for EDB7312 board - */ -static struct mtd_info *ep7312_mtd = NULL; - -/* - * Values specific to the EDB7312 board (used with EP7312 processor) - */ -#define EP7312_FIO_PBASE 0x10000000 /* Phys address of flash */ -#define EP7312_PXDR 0x0001 /* - * IO offset to Port B data register - * where the CLE, ALE and NCE pins - * are wired to. - */ -#define EP7312_PXDDR 0x0041 /* - * IO offset to Port B data direction - * register so we can control the IO - * lines. - */ - -/* - * Module stuff - */ - -static unsigned long ep7312_fio_pbase = EP7312_FIO_PBASE; -static void __iomem *ep7312_pxdr = (void __iomem *)EP7312_PXDR; -static void __iomem *ep7312_pxddr = (void __iomem *)EP7312_PXDDR; - -/* - * Define static partitions for flash device - */ -static struct mtd_partition partition_info[] = { - {.name = "EP7312 Nand Flash", - .offset = 0, - .size = 8 * 1024 * 1024} -}; - -#define NUM_PARTITIONS 1 - -/* - * hardware specific access to control-lines - * - * NAND_NCE: bit 0 -> bit 6 (bit 7 = 1) - * NAND_CLE: bit 1 -> bit 4 - * NAND_ALE: bit 2 -> bit 5 - */ -static void ep7312_hwcontrol(struct mtd_info *mtd, int cmd, unsigned int ctrl) -{ - struct nand_chip *chip = mtd->priv; - - if (ctrl & NAND_CTRL_CHANGE) { - unsigned char bits = 0x80; - - bits |= (ctrl & (NAND_CLE | NAND_ALE)) << 3; - bits |= (ctrl & NAND_NCE) ? 
0x00 : 0x40; - - clps_writeb((clps_readb(ep7312_pxdr) & 0xF0) | bits, - ep7312_pxdr); - } - if (cmd != NAND_CMD_NONE) - writeb(cmd, chip->IO_ADDR_W); -} - -/* - * read device ready pin - */ -static int ep7312_device_ready(struct mtd_info *mtd) -{ - return 1; -} - -const char *part_probes[] = { "cmdlinepart", NULL }; - -/* - * Main initialization routine - */ -static int __init ep7312_init(void) -{ - struct nand_chip *this; - const char *part_type = 0; - int mtd_parts_nb = 0; - struct mtd_partition *mtd_parts = 0; - void __iomem *ep7312_fio_base; - - /* Allocate memory for MTD device structure and private data */ - ep7312_mtd = kmalloc(sizeof(struct mtd_info) + sizeof(struct nand_chip), GFP_KERNEL); - if (!ep7312_mtd) { - printk("Unable to allocate EDB7312 NAND MTD device structure.\n"); - return -ENOMEM; - } - - /* map physical address */ - ep7312_fio_base = ioremap(ep7312_fio_pbase, SZ_1K); - if (!ep7312_fio_base) { - printk("ioremap EDB7312 NAND flash failed\n"); - kfree(ep7312_mtd); - return -EIO; - } - - /* Get pointer to private data */ - this = (struct nand_chip *)(&ep7312_mtd[1]); - - /* Initialize structures */ - memset(ep7312_mtd, 0, sizeof(struct mtd_info)); - memset(this, 0, sizeof(struct nand_chip)); - - /* Link the private data with the MTD structure */ - ep7312_mtd->priv = this; - ep7312_mtd->owner = THIS_MODULE; - - /* - * Set GPIO Port B control register so that the pins are configured - * to be outputs for controlling the NAND flash. - */ - clps_writeb(0xf0, ep7312_pxddr); - - /* insert callbacks */ - this->IO_ADDR_R = ep7312_fio_base; - this->IO_ADDR_W = ep7312_fio_base; - this->cmd_ctrl = ep7312_hwcontrol; - this->dev_ready = ep7312_device_ready; - /* 15 us command delay time */ - this->chip_delay = 15; - - /* Scan to find existence of the device */ - if (nand_scan(ep7312_mtd, 1)) { - iounmap((void *)ep7312_fio_base); - kfree(ep7312_mtd); - return -ENXIO; - } - ep7312_mtd->name = "edb7312-nand"; - mtd_parts_nb = parse_mtd_partitions(ep7312_mtd, part_probes, &mtd_parts, 0); - if (mtd_parts_nb > 0) - part_type = "command line"; - else - mtd_parts_nb = 0; - if (mtd_parts_nb == 0) { - mtd_parts = partition_info; - mtd_parts_nb = NUM_PARTITIONS; - part_type = "static"; - } - - /* Register the partitions */ - printk(KERN_NOTICE "Using %s partition definition\n", part_type); - mtd_device_register(ep7312_mtd, mtd_parts, mtd_parts_nb); - - /* Return happy */ - return 0; -} - -module_init(ep7312_init); - -/* - * Clean up routine - */ -static void __exit ep7312_cleanup(void) -{ - struct nand_chip *this = (struct nand_chip *)&ep7312_mtd[1]; - - /* Release resources, unregister device */ - nand_release(ap7312_mtd); - - /* Release io resource */ - iounmap(this->IO_ADDR_R); - - /* Free the MTD device structure */ - kfree(ep7312_mtd); -} - -module_exit(ep7312_cleanup); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Marius Groeger <mag@sysgo.de>"); -MODULE_DESCRIPTION("MTD map driver for Cogent EDB7312 board"); diff --git a/drivers/mtd/nand/fsl_elbc_nand.c b/drivers/mtd/nand/fsl_elbc_nand.c index 33d8aad8bba5..eedd8ee2c9ac 100644 --- a/drivers/mtd/nand/fsl_elbc_nand.c +++ b/drivers/mtd/nand/fsl_elbc_nand.c @@ -75,7 +75,6 @@ struct fsl_elbc_fcm_ctrl { unsigned int use_mdr; /* Non zero if the MDR is to be set */ unsigned int oob; /* Non zero if operating on OOB data */ unsigned int counter; /* counter for the initializations */ - char *oob_poi; /* Place to write ECC after read back */ }; /* These map to the positions used by the FCM hardware ECC generator */ @@ -244,6 +243,25 @@ static int 
fsl_elbc_run_command(struct mtd_info *mtd) return -EIO; } + if (chip->ecc.mode != NAND_ECC_HW) + return 0; + + if (elbc_fcm_ctrl->read_bytes == mtd->writesize + mtd->oobsize) { + uint32_t lteccr = in_be32(&lbc->lteccr); + /* + * if command was a full page read and the ELBC + * has the LTECCR register, then bits 12-15 (ppc order) of + * LTECCR indicates which 512 byte sub-pages had fixed errors. + * bits 28-31 are uncorrectable errors, marked elsewhere. + * for small page nand only 1 bit is used. + * if the ELBC doesn't have the lteccr register it reads 0 + */ + if (lteccr & 0x000F000F) + out_be32(&lbc->lteccr, 0x000F000F); /* clear lteccr */ + if (lteccr & 0x000F0000) + mtd->ecc_stats.corrected++; + } + return 0; } @@ -435,7 +453,6 @@ static void fsl_elbc_cmdfunc(struct mtd_info *mtd, unsigned int command, /* PAGEPROG reuses all of the setup from SEQIN and adds the length */ case NAND_CMD_PAGEPROG: { - int full_page; dev_vdbg(priv->dev, "fsl_elbc_cmdfunc: NAND_CMD_PAGEPROG " "writing %d bytes.\n", elbc_fcm_ctrl->index); @@ -445,34 +462,12 @@ static void fsl_elbc_cmdfunc(struct mtd_info *mtd, unsigned int command, * write so the HW generates the ECC. */ if (elbc_fcm_ctrl->oob || elbc_fcm_ctrl->column != 0 || - elbc_fcm_ctrl->index != mtd->writesize + mtd->oobsize) { + elbc_fcm_ctrl->index != mtd->writesize + mtd->oobsize) out_be32(&lbc->fbcr, elbc_fcm_ctrl->index); - full_page = 0; - } else { + else out_be32(&lbc->fbcr, 0); - full_page = 1; - } fsl_elbc_run_command(mtd); - - /* Read back the page in order to fill in the ECC for the - * caller. Is this really needed? - */ - if (full_page && elbc_fcm_ctrl->oob_poi) { - out_be32(&lbc->fbcr, 3); - set_addr(mtd, 6, page_addr, 1); - - elbc_fcm_ctrl->read_bytes = mtd->writesize + 9; - - fsl_elbc_do_read(chip, 1); - fsl_elbc_run_command(mtd); - - memcpy_fromio(elbc_fcm_ctrl->oob_poi + 6, - &elbc_fcm_ctrl->addr[elbc_fcm_ctrl->index], 3); - elbc_fcm_ctrl->index += 3; - } - - elbc_fcm_ctrl->oob_poi = NULL; return; } @@ -752,13 +747,8 @@ static void fsl_elbc_write_page(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *buf) { - struct fsl_elbc_mtd *priv = chip->priv; - struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = priv->ctrl->nand; - fsl_elbc_write_buf(mtd, buf, mtd->writesize); fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize); - - elbc_fcm_ctrl->oob_poi = chip->oob_poi; } static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv) @@ -791,8 +781,8 @@ static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv) chip->bbt_md = &bbt_mirror_descr; /* set up nand options */ - chip->options = NAND_NO_READRDY | NAND_NO_AUTOINCR | - NAND_USE_FLASH_BBT; + chip->options = NAND_NO_READRDY | NAND_NO_AUTOINCR; + chip->bbt_options = NAND_BBT_USE_FLASH; chip->controller = &elbc_fcm_ctrl->controller; chip->priv = priv; @@ -829,7 +819,6 @@ static int fsl_elbc_chip_remove(struct fsl_elbc_mtd *priv) elbc_fcm_ctrl->chips[priv->bank] = NULL; kfree(priv); - kfree(elbc_fcm_ctrl); return 0; } @@ -842,13 +831,14 @@ static int __devinit fsl_elbc_nand_probe(struct platform_device *pdev) struct resource res; struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl; static const char *part_probe_types[] - = { "cmdlinepart", "RedBoot", NULL }; - struct mtd_partition *parts; + = { "cmdlinepart", "RedBoot", "ofpart", NULL }; int ret; int bank; struct device *dev; struct device_node *node = pdev->dev.of_node; + struct mtd_part_parser_data ppdata; + ppdata.of_node = pdev->dev.of_node; if (!fsl_lbc_ctrl_dev || !fsl_lbc_ctrl_dev->regs) return -ENODEV; lbc = fsl_lbc_ctrl_dev->regs; @@ -934,17 +924,8 
@@ static int __devinit fsl_elbc_nand_probe(struct platform_device *pdev) /* First look for RedBoot table or partitions on the command * line, these take precedence over device tree information */ - ret = parse_mtd_partitions(&priv->mtd, part_probe_types, &parts, 0); - if (ret < 0) - goto err; - - if (ret == 0) { - ret = of_mtd_parse_partitions(priv->dev, node, &parts); - if (ret < 0) - goto err; - } - - mtd_device_register(&priv->mtd, parts, ret); + mtd_device_parse_register(&priv->mtd, part_probe_types, &ppdata, + NULL, 0); printk(KERN_INFO "eLBC NAND device at 0x%llx, bank %d\n", (unsigned long long)res.start, priv->bank); diff --git a/drivers/mtd/nand/fsl_upm.c b/drivers/mtd/nand/fsl_upm.c index 23752fd5bc59..b4f3cc9f32fb 100644 --- a/drivers/mtd/nand/fsl_upm.c +++ b/drivers/mtd/nand/fsl_upm.c @@ -158,7 +158,7 @@ static int __devinit fun_chip_init(struct fsl_upm_nand *fun, { int ret; struct device_node *flash_np; - static const char *part_types[] = { "cmdlinepart", NULL, }; + struct mtd_part_parser_data ppdata; fun->chip.IO_ADDR_R = fun->io_base; fun->chip.IO_ADDR_W = fun->io_base; @@ -192,18 +192,12 @@ static int __devinit fun_chip_init(struct fsl_upm_nand *fun, if (ret) goto err; - ret = parse_mtd_partitions(&fun->mtd, part_types, &fun->parts, 0); - -#ifdef CONFIG_MTD_OF_PARTS - if (ret == 0) { - ret = of_mtd_parse_partitions(fun->dev, flash_np, &fun->parts); - if (ret < 0) - goto err; - } -#endif - ret = mtd_device_register(&fun->mtd, fun->parts, ret); + ppdata.of_node = flash_np; + ret = mtd_device_parse_register(&fun->mtd, NULL, &ppdata, NULL, 0); err: of_node_put(flash_np); + if (ret) + kfree(fun->mtd.name); return ret; } diff --git a/drivers/mtd/nand/fsmc_nand.c b/drivers/mtd/nand/fsmc_nand.c index e9b275ac381c..e53b76064133 100644 --- a/drivers/mtd/nand/fsmc_nand.c +++ b/drivers/mtd/nand/fsmc_nand.c @@ -146,7 +146,7 @@ static struct mtd_partition partition_info_16KB_blk[] = { { .name = "Root File System", .offset = 0x460000, - .size = 0, + .size = MTDPART_SIZ_FULL, }, }; @@ -173,13 +173,10 @@ static struct mtd_partition partition_info_128KB_blk[] = { { .name = "Root File System", .offset = 0x800000, - .size = 0, + .size = MTDPART_SIZ_FULL, }, }; -#ifdef CONFIG_MTD_CMDLINE_PARTS -const char *part_probes[] = { "cmdlinepart", NULL }; -#endif /** * struct fsmc_nand_data - structure for FSMC NAND device state @@ -187,8 +184,6 @@ const char *part_probes[] = { "cmdlinepart", NULL }; * @pid: Part ID on the AMBA PrimeCell format * @mtd: MTD info for a NAND flash. * @nand: Chip related info for a NAND flash. - * @partitions: Partition info for a NAND Flash. - * @nr_partitions: Total number of partition of a NAND flash. * * @ecc_place: ECC placing locations in oobfree type format. * @bank: Bank number for probed device. @@ -203,8 +198,6 @@ struct fsmc_nand_data { u32 pid; struct mtd_info mtd; struct nand_chip nand; - struct mtd_partition *partitions; - unsigned int nr_partitions; struct fsmc_eccplace *ecc_place; unsigned int bank; @@ -716,65 +709,17 @@ static int __init fsmc_nand_probe(struct platform_device *pdev) * platform data, * default partition information present in driver. 
*/ -#ifdef CONFIG_MTD_CMDLINE_PARTS /* - * Check if partition info passed via command line + * Check for partition info passed */ host->mtd.name = "nand"; - host->nr_partitions = parse_mtd_partitions(&host->mtd, part_probes, - &host->partitions, 0); - if (host->nr_partitions <= 0) { -#endif - /* - * Check if partition info passed via command line - */ - if (pdata->partitions) { - host->partitions = pdata->partitions; - host->nr_partitions = pdata->nr_partitions; - } else { - struct mtd_partition *partition; - int i; - - /* Select the default partitions info */ - switch (host->mtd.size) { - case 0x01000000: - case 0x02000000: - case 0x04000000: - host->partitions = partition_info_16KB_blk; - host->nr_partitions = - sizeof(partition_info_16KB_blk) / - sizeof(struct mtd_partition); - break; - case 0x08000000: - case 0x10000000: - case 0x20000000: - case 0x40000000: - host->partitions = partition_info_128KB_blk; - host->nr_partitions = - sizeof(partition_info_128KB_blk) / - sizeof(struct mtd_partition); - break; - default: - ret = -ENXIO; - pr_err("Unsupported NAND size\n"); - goto err_probe; - } - - partition = host->partitions; - for (i = 0; i < host->nr_partitions; i++, partition++) { - if (partition->size == 0) { - partition->size = host->mtd.size - - partition->offset; - break; - } - } - } -#ifdef CONFIG_MTD_CMDLINE_PARTS - } -#endif - - ret = mtd_device_register(&host->mtd, host->partitions, - host->nr_partitions); + ret = mtd_device_parse_register(&host->mtd, NULL, 0, + host->mtd.size <= 0x04000000 ? + partition_info_16KB_blk : + partition_info_128KB_blk, + host->mtd.size <= 0x04000000 ? + ARRAY_SIZE(partition_info_16KB_blk) : + ARRAY_SIZE(partition_info_128KB_blk)); if (ret) goto err_probe; @@ -822,7 +767,7 @@ static int fsmc_nand_remove(struct platform_device *pdev) platform_set_drvdata(pdev, NULL); if (host) { - mtd_device_unregister(&host->mtd); + nand_release(&host->mtd); clk_disable(host->clk); clk_put(host->clk); diff --git a/drivers/mtd/nand/gpmi-nand/Makefile b/drivers/mtd/nand/gpmi-nand/Makefile new file mode 100644 index 000000000000..3a462487c35e --- /dev/null +++ b/drivers/mtd/nand/gpmi-nand/Makefile @@ -0,0 +1,3 @@ +obj-$(CONFIG_MTD_NAND_GPMI_NAND) += gpmi_nand.o +gpmi_nand-objs += gpmi-nand.o +gpmi_nand-objs += gpmi-lib.o diff --git a/drivers/mtd/nand/gpmi-nand/bch-regs.h b/drivers/mtd/nand/gpmi-nand/bch-regs.h new file mode 100644 index 000000000000..4effb8c579db --- /dev/null +++ b/drivers/mtd/nand/gpmi-nand/bch-regs.h @@ -0,0 +1,84 @@ +/* + * Freescale GPMI NAND Flash Driver + * + * Copyright 2008-2011 Freescale Semiconductor, Inc. + * Copyright 2008 Embedded Alley Solutions, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
+ */ +#ifndef __GPMI_NAND_BCH_REGS_H +#define __GPMI_NAND_BCH_REGS_H + +#define HW_BCH_CTRL 0x00000000 +#define HW_BCH_CTRL_SET 0x00000004 +#define HW_BCH_CTRL_CLR 0x00000008 +#define HW_BCH_CTRL_TOG 0x0000000c + +#define BM_BCH_CTRL_COMPLETE_IRQ_EN (1 << 8) +#define BM_BCH_CTRL_COMPLETE_IRQ (1 << 0) + +#define HW_BCH_STATUS0 0x00000010 +#define HW_BCH_MODE 0x00000020 +#define HW_BCH_ENCODEPTR 0x00000030 +#define HW_BCH_DATAPTR 0x00000040 +#define HW_BCH_METAPTR 0x00000050 +#define HW_BCH_LAYOUTSELECT 0x00000070 + +#define HW_BCH_FLASH0LAYOUT0 0x00000080 + +#define BP_BCH_FLASH0LAYOUT0_NBLOCKS 24 +#define BM_BCH_FLASH0LAYOUT0_NBLOCKS (0xff << BP_BCH_FLASH0LAYOUT0_NBLOCKS) +#define BF_BCH_FLASH0LAYOUT0_NBLOCKS(v) \ + (((v) << BP_BCH_FLASH0LAYOUT0_NBLOCKS) & BM_BCH_FLASH0LAYOUT0_NBLOCKS) + +#define BP_BCH_FLASH0LAYOUT0_META_SIZE 16 +#define BM_BCH_FLASH0LAYOUT0_META_SIZE (0xff << BP_BCH_FLASH0LAYOUT0_META_SIZE) +#define BF_BCH_FLASH0LAYOUT0_META_SIZE(v) \ + (((v) << BP_BCH_FLASH0LAYOUT0_META_SIZE)\ + & BM_BCH_FLASH0LAYOUT0_META_SIZE) + +#define BP_BCH_FLASH0LAYOUT0_ECC0 12 +#define BM_BCH_FLASH0LAYOUT0_ECC0 (0xf << BP_BCH_FLASH0LAYOUT0_ECC0) +#define BF_BCH_FLASH0LAYOUT0_ECC0(v) \ + (((v) << BP_BCH_FLASH0LAYOUT0_ECC0) & BM_BCH_FLASH0LAYOUT0_ECC0) + +#define BP_BCH_FLASH0LAYOUT0_DATA0_SIZE 0 +#define BM_BCH_FLASH0LAYOUT0_DATA0_SIZE \ + (0xfff << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE) +#define BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(v) \ + (((v) << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE)\ + & BM_BCH_FLASH0LAYOUT0_DATA0_SIZE) + +#define HW_BCH_FLASH0LAYOUT1 0x00000090 + +#define BP_BCH_FLASH0LAYOUT1_PAGE_SIZE 16 +#define BM_BCH_FLASH0LAYOUT1_PAGE_SIZE \ + (0xffff << BP_BCH_FLASH0LAYOUT1_PAGE_SIZE) +#define BF_BCH_FLASH0LAYOUT1_PAGE_SIZE(v) \ + (((v) << BP_BCH_FLASH0LAYOUT1_PAGE_SIZE) \ + & BM_BCH_FLASH0LAYOUT1_PAGE_SIZE) + +#define BP_BCH_FLASH0LAYOUT1_ECCN 12 +#define BM_BCH_FLASH0LAYOUT1_ECCN (0xf << BP_BCH_FLASH0LAYOUT1_ECCN) +#define BF_BCH_FLASH0LAYOUT1_ECCN(v) \ + (((v) << BP_BCH_FLASH0LAYOUT1_ECCN) & BM_BCH_FLASH0LAYOUT1_ECCN) + +#define BP_BCH_FLASH0LAYOUT1_DATAN_SIZE 0 +#define BM_BCH_FLASH0LAYOUT1_DATAN_SIZE \ + (0xfff << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE) +#define BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(v) \ + (((v) << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE) \ + & BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) +#endif diff --git a/drivers/mtd/nand/gpmi-nand/gpmi-lib.c b/drivers/mtd/nand/gpmi-nand/gpmi-lib.c new file mode 100644 index 000000000000..de4db7604a3f --- /dev/null +++ b/drivers/mtd/nand/gpmi-nand/gpmi-lib.c @@ -0,0 +1,1057 @@ +/* + * Freescale GPMI NAND Flash Driver + * + * Copyright (C) 2008-2011 Freescale Semiconductor, Inc. + * Copyright (C) 2008 Embedded Alley Solutions, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
+ */ +#include <linux/mtd/gpmi-nand.h> +#include <linux/delay.h> +#include <linux/clk.h> +#include <mach/mxs.h> + +#include "gpmi-nand.h" +#include "gpmi-regs.h" +#include "bch-regs.h" + +struct timing_threshod timing_default_threshold = { + .max_data_setup_cycles = (BM_GPMI_TIMING0_DATA_SETUP >> + BP_GPMI_TIMING0_DATA_SETUP), + .internal_data_setup_in_ns = 0, + .max_sample_delay_factor = (BM_GPMI_CTRL1_RDN_DELAY >> + BP_GPMI_CTRL1_RDN_DELAY), + .max_dll_clock_period_in_ns = 32, + .max_dll_delay_in_ns = 16, +}; + +/* + * Clear the bit and poll it cleared. This is usually called with + * a reset address and mask being either SFTRST(bit 31) or CLKGATE + * (bit 30). + */ +static int clear_poll_bit(void __iomem *addr, u32 mask) +{ + int timeout = 0x400; + + /* clear the bit */ + __mxs_clrl(mask, addr); + + /* + * SFTRST needs 3 GPMI clocks to settle, the reference manual + * recommends to wait 1us. + */ + udelay(1); + + /* poll the bit becoming clear */ + while ((readl(addr) & mask) && --timeout) + /* nothing */; + + return !timeout; +} + +#define MODULE_CLKGATE (1 << 30) +#define MODULE_SFTRST (1 << 31) +/* + * The current mxs_reset_block() will do two things: + * [1] enable the module. + * [2] reset the module. + * + * In most of the cases, it's ok. But there is a hardware bug in the BCH block. + * If you try to soft reset the BCH block, it becomes unusable until + * the next hard reset. This case occurs in the NAND boot mode. When the board + * boots by NAND, the ROM of the chip will initialize the BCH blocks itself. + * So If the driver tries to reset the BCH again, the BCH will not work anymore. + * You will see a DMA timeout in this case. + * + * To avoid this bug, just add a new parameter `just_enable` for + * the mxs_reset_block(), and rewrite it here. + */ +int gpmi_reset_block(void __iomem *reset_addr, bool just_enable) +{ + int ret; + int timeout = 0x400; + + /* clear and poll SFTRST */ + ret = clear_poll_bit(reset_addr, MODULE_SFTRST); + if (unlikely(ret)) + goto error; + + /* clear CLKGATE */ + __mxs_clrl(MODULE_CLKGATE, reset_addr); + + if (!just_enable) { + /* set SFTRST to reset the block */ + __mxs_setl(MODULE_SFTRST, reset_addr); + udelay(1); + + /* poll CLKGATE becoming set */ + while ((!(readl(reset_addr) & MODULE_CLKGATE)) && --timeout) + /* nothing */; + if (unlikely(!timeout)) + goto error; + } + + /* clear and poll SFTRST */ + ret = clear_poll_bit(reset_addr, MODULE_SFTRST); + if (unlikely(ret)) + goto error; + + /* clear and poll CLKGATE */ + ret = clear_poll_bit(reset_addr, MODULE_CLKGATE); + if (unlikely(ret)) + goto error; + + return 0; + +error: + pr_err("%s(%p): module reset timeout\n", __func__, reset_addr); + return -ETIMEDOUT; +} + +int gpmi_init(struct gpmi_nand_data *this) +{ + struct resources *r = &this->resources; + int ret; + + ret = clk_enable(r->clock); + if (ret) + goto err_out; + ret = gpmi_reset_block(r->gpmi_regs, false); + if (ret) + goto err_out; + + /* Choose NAND mode. */ + writel(BM_GPMI_CTRL1_GPMI_MODE, r->gpmi_regs + HW_GPMI_CTRL1_CLR); + + /* Set the IRQ polarity. */ + writel(BM_GPMI_CTRL1_ATA_IRQRDY_POLARITY, + r->gpmi_regs + HW_GPMI_CTRL1_SET); + + /* Disable Write-Protection. */ + writel(BM_GPMI_CTRL1_DEV_RESET, r->gpmi_regs + HW_GPMI_CTRL1_SET); + + /* Select BCH ECC. */ + writel(BM_GPMI_CTRL1_BCH_MODE, r->gpmi_regs + HW_GPMI_CTRL1_SET); + + clk_disable(r->clock); + return 0; +err_out: + return ret; +} + +/* This function is very useful. It is called only when the bug occur. 
*/ +void gpmi_dump_info(struct gpmi_nand_data *this) +{ + struct resources *r = &this->resources; + struct bch_geometry *geo = &this->bch_geometry; + u32 reg; + int i; + + pr_err("Show GPMI registers :\n"); + for (i = 0; i <= HW_GPMI_DEBUG / 0x10 + 1; i++) { + reg = readl(r->gpmi_regs + i * 0x10); + pr_err("offset 0x%.3x : 0x%.8x\n", i * 0x10, reg); + } + + /* start to print out the BCH info */ + pr_err("BCH Geometry :\n"); + pr_err("GF length : %u\n", geo->gf_len); + pr_err("ECC Strength : %u\n", geo->ecc_strength); + pr_err("Page Size in Bytes : %u\n", geo->page_size); + pr_err("Metadata Size in Bytes : %u\n", geo->metadata_size); + pr_err("ECC Chunk Size in Bytes: %u\n", geo->ecc_chunk_size); + pr_err("ECC Chunk Count : %u\n", geo->ecc_chunk_count); + pr_err("Payload Size in Bytes : %u\n", geo->payload_size); + pr_err("Auxiliary Size in Bytes: %u\n", geo->auxiliary_size); + pr_err("Auxiliary Status Offset: %u\n", geo->auxiliary_status_offset); + pr_err("Block Mark Byte Offset : %u\n", geo->block_mark_byte_offset); + pr_err("Block Mark Bit Offset : %u\n", geo->block_mark_bit_offset); +} + +/* Configures the geometry for BCH. */ +int bch_set_geometry(struct gpmi_nand_data *this) +{ + struct resources *r = &this->resources; + struct bch_geometry *bch_geo = &this->bch_geometry; + unsigned int block_count; + unsigned int block_size; + unsigned int metadata_size; + unsigned int ecc_strength; + unsigned int page_size; + int ret; + + if (common_nfc_set_geometry(this)) + return !0; + + block_count = bch_geo->ecc_chunk_count - 1; + block_size = bch_geo->ecc_chunk_size; + metadata_size = bch_geo->metadata_size; + ecc_strength = bch_geo->ecc_strength >> 1; + page_size = bch_geo->page_size; + + ret = clk_enable(r->clock); + if (ret) + goto err_out; + + ret = gpmi_reset_block(r->bch_regs, true); + if (ret) + goto err_out; + + /* Configure layout 0. */ + writel(BF_BCH_FLASH0LAYOUT0_NBLOCKS(block_count) + | BF_BCH_FLASH0LAYOUT0_META_SIZE(metadata_size) + | BF_BCH_FLASH0LAYOUT0_ECC0(ecc_strength) + | BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(block_size), + r->bch_regs + HW_BCH_FLASH0LAYOUT0); + + writel(BF_BCH_FLASH0LAYOUT1_PAGE_SIZE(page_size) + | BF_BCH_FLASH0LAYOUT1_ECCN(ecc_strength) + | BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(block_size), + r->bch_regs + HW_BCH_FLASH0LAYOUT1); + + /* Set *all* chip selects to use layout 0. */ + writel(0, r->bch_regs + HW_BCH_LAYOUTSELECT); + + /* Enable interrupts. */ + writel(BM_BCH_CTRL_COMPLETE_IRQ_EN, + r->bch_regs + HW_BCH_CTRL_SET); + + clk_disable(r->clock); + return 0; +err_out: + return ret; +} + +/* Converts time in nanoseconds to cycles. */ +static unsigned int ns_to_cycles(unsigned int time, + unsigned int period, unsigned int min) +{ + unsigned int k; + + k = (time + period - 1) / period; + return max(k, min); +} + +/* Apply timing to current hardware conditions. 
*/ +static int gpmi_nfc_compute_hardware_timing(struct gpmi_nand_data *this, + struct gpmi_nfc_hardware_timing *hw) +{ + struct gpmi_nand_platform_data *pdata = this->pdata; + struct timing_threshod *nfc = &timing_default_threshold; + struct nand_chip *nand = &this->nand; + struct nand_timing target = this->timing; + bool improved_timing_is_available; + unsigned long clock_frequency_in_hz; + unsigned int clock_period_in_ns; + bool dll_use_half_periods; + unsigned int dll_delay_shift; + unsigned int max_sample_delay_in_ns; + unsigned int address_setup_in_cycles; + unsigned int data_setup_in_ns; + unsigned int data_setup_in_cycles; + unsigned int data_hold_in_cycles; + int ideal_sample_delay_in_ns; + unsigned int sample_delay_factor; + int tEYE; + unsigned int min_prop_delay_in_ns = pdata->min_prop_delay_in_ns; + unsigned int max_prop_delay_in_ns = pdata->max_prop_delay_in_ns; + + /* + * If there are multiple chips, we need to relax the timings to allow + * for signal distortion due to higher capacitance. + */ + if (nand->numchips > 2) { + target.data_setup_in_ns += 10; + target.data_hold_in_ns += 10; + target.address_setup_in_ns += 10; + } else if (nand->numchips > 1) { + target.data_setup_in_ns += 5; + target.data_hold_in_ns += 5; + target.address_setup_in_ns += 5; + } + + /* Check if improved timing information is available. */ + improved_timing_is_available = + (target.tREA_in_ns >= 0) && + (target.tRLOH_in_ns >= 0) && + (target.tRHOH_in_ns >= 0) ; + + /* Inspect the clock. */ + clock_frequency_in_hz = nfc->clock_frequency_in_hz; + clock_period_in_ns = 1000000000 / clock_frequency_in_hz; + + /* + * The NFC quantizes setup and hold parameters in terms of clock cycles. + * Here, we quantize the setup and hold timing parameters to the + * next-highest clock period to make sure we apply at least the + * specified times. + * + * For data setup and data hold, the hardware interprets a value of zero + * as the largest possible delay. This is not what's intended by a zero + * in the input parameter, so we impose a minimum of one cycle. + */ + data_setup_in_cycles = ns_to_cycles(target.data_setup_in_ns, + clock_period_in_ns, 1); + data_hold_in_cycles = ns_to_cycles(target.data_hold_in_ns, + clock_period_in_ns, 1); + address_setup_in_cycles = ns_to_cycles(target.address_setup_in_ns, + clock_period_in_ns, 0); + + /* + * The clock's period affects the sample delay in a number of ways: + * + * (1) The NFC HAL tells us the maximum clock period the sample delay + * DLL can tolerate. If the clock period is greater than half that + * maximum, we must configure the DLL to be driven by half periods. + * + * (2) We need to convert from an ideal sample delay, in ns, to a + * "sample delay factor," which the NFC uses. This factor depends on + * whether we're driving the DLL with full or half periods. + * Paraphrasing the reference manual: + * + * AD = SDF x 0.125 x RP + * + * where: + * + * AD is the applied delay, in ns. + * SDF is the sample delay factor, which is dimensionless. + * RP is the reference period, in ns, which is a full clock period + * if the DLL is being driven by full periods, or half that if + * the DLL is being driven by half periods. + * + * Let's re-arrange this in a way that's more useful to us: + * + * 8 + * SDF = AD x ---- + * RP + * + * The reference period is either the clock period or half that, so this + * is: + * + * 8 AD x DDF + * SDF = AD x ----- = -------- + * f x P P + * + * where: + * + * f is 1 or 1/2, depending on how we're driving the DLL. + * P is the clock period. 
+ * DDF is the DLL Delay Factor, a dimensionless value that + * incorporates all the constants in the conversion. + * + * DDF will be either 8 or 16, both of which are powers of two. We can + * reduce the cost of this conversion by using bit shifts instead of + * multiplication or division. Thus: + * + * AD << DDS + * SDF = --------- + * P + * + * or + * + * AD = (SDF >> DDS) x P + * + * where: + * + * DDS is the DLL Delay Shift, the logarithm to base 2 of the DDF. + */ + if (clock_period_in_ns > (nfc->max_dll_clock_period_in_ns >> 1)) { + dll_use_half_periods = true; + dll_delay_shift = 3 + 1; + } else { + dll_use_half_periods = false; + dll_delay_shift = 3; + } + + /* + * Compute the maximum sample delay the NFC allows, under current + * conditions. If the clock is running too slowly, no sample delay is + * possible. + */ + if (clock_period_in_ns > nfc->max_dll_clock_period_in_ns) + max_sample_delay_in_ns = 0; + else { + /* + * Compute the delay implied by the largest sample delay factor + * the NFC allows. + */ + max_sample_delay_in_ns = + (nfc->max_sample_delay_factor * clock_period_in_ns) >> + dll_delay_shift; + + /* + * Check if the implied sample delay larger than the NFC + * actually allows. + */ + if (max_sample_delay_in_ns > nfc->max_dll_delay_in_ns) + max_sample_delay_in_ns = nfc->max_dll_delay_in_ns; + } + + /* + * Check if improved timing information is available. If not, we have to + * use a less-sophisticated algorithm. + */ + if (!improved_timing_is_available) { + /* + * Fold the read setup time required by the NFC into the ideal + * sample delay. + */ + ideal_sample_delay_in_ns = target.gpmi_sample_delay_in_ns + + nfc->internal_data_setup_in_ns; + + /* + * The ideal sample delay may be greater than the maximum + * allowed by the NFC. If so, we can trade off sample delay time + * for more data setup time. + * + * In each iteration of the following loop, we add a cycle to + * the data setup time and subtract a corresponding amount from + * the sample delay until we've satisified the constraints or + * can't do any better. + */ + while ((ideal_sample_delay_in_ns > max_sample_delay_in_ns) && + (data_setup_in_cycles < nfc->max_data_setup_cycles)) { + + data_setup_in_cycles++; + ideal_sample_delay_in_ns -= clock_period_in_ns; + + if (ideal_sample_delay_in_ns < 0) + ideal_sample_delay_in_ns = 0; + + } + + /* + * Compute the sample delay factor that corresponds most closely + * to the ideal sample delay. If the result is too large for the + * NFC, use the maximum value. + * + * Notice that we use the ns_to_cycles function to compute the + * sample delay factor. We do this because the form of the + * computation is the same as that for calculating cycles. + */ + sample_delay_factor = + ns_to_cycles( + ideal_sample_delay_in_ns << dll_delay_shift, + clock_period_in_ns, 0); + + if (sample_delay_factor > nfc->max_sample_delay_factor) + sample_delay_factor = nfc->max_sample_delay_factor; + + /* Skip to the part where we return our results. */ + goto return_results; + } + + /* + * If control arrives here, we have more detailed timing information, + * so we can use a better algorithm. + */ + + /* + * Fold the read setup time required by the NFC into the maximum + * propagation delay. + */ + max_prop_delay_in_ns += nfc->internal_data_setup_in_ns; + + /* + * Earlier, we computed the number of clock cycles required to satisfy + * the data setup time. Now, we need to know the actual nanoseconds. 
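The SDF conversion spelled out above reduces to a shift and a rounded division; the standalone sketch below walks through it once with invented numbers (a 100 MHz clock and a 6 ns ideal delay are assumptions for illustration, not values taken from this driver):

#include <stdio.h>

/*
 * Illustration of SDF = (AD << DDS) / P with the same rounding the
 * driver applies.  The clock period and ideal delay are made-up values.
 */
static unsigned int ns_to_cycles(unsigned int time, unsigned int period,
				 unsigned int min)
{
	unsigned int k = (time + period - 1) / period;	/* round up */

	return k > min ? k : min;
}

int main(void)
{
	unsigned int clock_period_in_ns = 10;		/* ~100 MHz, hypothetical */
	unsigned int max_dll_clock_period_in_ns = 32;
	unsigned int ideal_sample_delay_in_ns = 6;	/* hypothetical */
	unsigned int dll_delay_shift;

	/* Period above half the DLL limit -> drive the DLL with half periods. */
	if (clock_period_in_ns > (max_dll_clock_period_in_ns >> 1))
		dll_delay_shift = 3 + 1;		/* DDF = 16 */
	else
		dll_delay_shift = 3;			/* DDF = 8 */

	/* SDF = (AD << DDS) / P, rounded up: (6 << 3) / 10 = 48/10 -> 5 */
	printf("sample delay factor = %u\n",
	       ns_to_cycles(ideal_sample_delay_in_ns << dll_delay_shift,
			    clock_period_in_ns, 0));
	return 0;
}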
+ */ + data_setup_in_ns = clock_period_in_ns * data_setup_in_cycles; + + /* + * Compute tEYE, the width of the data eye when reading from the NAND + * Flash. The eye width is fundamentally determined by the data setup + * time, perturbed by propagation delays and some characteristics of the + * NAND Flash device. + * + * start of the eye = max_prop_delay + tREA + * end of the eye = min_prop_delay + tRHOH + data_setup + */ + tEYE = (int)min_prop_delay_in_ns + (int)target.tRHOH_in_ns + + (int)data_setup_in_ns; + + tEYE -= (int)max_prop_delay_in_ns + (int)target.tREA_in_ns; + + /* + * The eye must be open. If it's not, we can try to open it by + * increasing its main forcer, the data setup time. + * + * In each iteration of the following loop, we increase the data setup + * time by a single clock cycle. We do this until either the eye is + * open or we run into NFC limits. + */ + while ((tEYE <= 0) && + (data_setup_in_cycles < nfc->max_data_setup_cycles)) { + /* Give a cycle to data setup. */ + data_setup_in_cycles++; + /* Synchronize the data setup time with the cycles. */ + data_setup_in_ns += clock_period_in_ns; + /* Adjust tEYE accordingly. */ + tEYE += clock_period_in_ns; + } + + /* + * When control arrives here, the eye is open. The ideal time to sample + * the data is in the center of the eye: + * + * end of the eye + start of the eye + * --------------------------------- - data_setup + * 2 + * + * After some algebra, this simplifies to the code immediately below. + */ + ideal_sample_delay_in_ns = + ((int)max_prop_delay_in_ns + + (int)target.tREA_in_ns + + (int)min_prop_delay_in_ns + + (int)target.tRHOH_in_ns - + (int)data_setup_in_ns) >> 1; + + /* + * The following figure illustrates some aspects of a NAND Flash read: + * + * + * __ _____________________________________ + * RDN \_________________/ + * + * <---- tEYE -----> + * /-----------------\ + * Read Data ----------------------------< >--------- + * \-----------------/ + * ^ ^ ^ ^ + * | | | | + * |<--Data Setup -->|<--Delay Time -->| | + * | | | | + * | | | + * | |<-- Quantized Delay Time -->| + * | | | + * + * + * We have some issues we must now address: + * + * (1) The *ideal* sample delay time must not be negative. If it is, we + * jam it to zero. + * + * (2) The *ideal* sample delay time must not be greater than that + * allowed by the NFC. If it is, we can increase the data setup + * time, which will reduce the delay between the end of the data + * setup and the center of the eye. It will also make the eye + * larger, which might help with the next issue... + * + * (3) The *quantized* sample delay time must not fall either before the + * eye opens or after it closes (the latter is the problem + * illustrated in the above figure). + */ + + /* Jam a negative ideal sample delay to zero. */ + if (ideal_sample_delay_in_ns < 0) + ideal_sample_delay_in_ns = 0; + + /* + * Extend the data setup as needed to reduce the ideal sample delay + * below the maximum permitted by the NFC. + */ + while ((ideal_sample_delay_in_ns > max_sample_delay_in_ns) && + (data_setup_in_cycles < nfc->max_data_setup_cycles)) { + + /* Give a cycle to data setup. */ + data_setup_in_cycles++; + /* Synchronize the data setup time with the cycles. */ + data_setup_in_ns += clock_period_in_ns; + /* Adjust tEYE accordingly. */ + tEYE += clock_period_in_ns; + + /* + * Decrease the ideal sample delay by one half cycle, to keep it + * in the middle of the eye. 
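Plugging invented timings into the eye formulas above shows how the numbers combine; the throwaway program below assumes 9/5 ns propagation delays, tREA = 20 ns, tRHOH = 15 ns and a 30 ns data setup purely for illustration:

#include <stdio.h>

/*
 * Standalone check of the data-eye arithmetic described above.
 * All timings (ns) are invented example values, not a real NAND part.
 */
int main(void)
{
	int max_prop_delay_in_ns = 9, min_prop_delay_in_ns = 5;
	int tREA_in_ns = 20, tRHOH_in_ns = 15;
	int data_setup_in_ns = 30;		/* e.g. 3 cycles at 10 ns */

	/* eye start = max_prop + tREA; eye end = min_prop + tRHOH + setup */
	int tEYE = min_prop_delay_in_ns + tRHOH_in_ns + data_setup_in_ns
		 - (max_prop_delay_in_ns + tREA_in_ns);

	/* sample in the middle of the eye, measured from end of data setup */
	int ideal_sample_delay_in_ns =
		(max_prop_delay_in_ns + tREA_in_ns +
		 min_prop_delay_in_ns + tRHOH_in_ns - data_setup_in_ns) >> 1;

	printf("tEYE = %d ns, ideal sample delay = %d ns\n",
	       tEYE, ideal_sample_delay_in_ns);	/* 21 ns and 9 ns here */
	return 0;
}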
+ */ + ideal_sample_delay_in_ns -= (clock_period_in_ns >> 1); + + /* Jam a negative ideal sample delay to zero. */ + if (ideal_sample_delay_in_ns < 0) + ideal_sample_delay_in_ns = 0; + } + + /* + * Compute the sample delay factor that corresponds to the ideal sample + * delay. If the result is too large, then use the maximum allowed + * value. + * + * Notice that we use the ns_to_cycles function to compute the sample + * delay factor. We do this because the form of the computation is the + * same as that for calculating cycles. + */ + sample_delay_factor = + ns_to_cycles(ideal_sample_delay_in_ns << dll_delay_shift, + clock_period_in_ns, 0); + + if (sample_delay_factor > nfc->max_sample_delay_factor) + sample_delay_factor = nfc->max_sample_delay_factor; + + /* + * These macros conveniently encapsulate a computation we'll use to + * continuously evaluate whether or not the data sample delay is inside + * the eye. + */ + #define IDEAL_DELAY ((int) ideal_sample_delay_in_ns) + + #define QUANTIZED_DELAY \ + ((int) ((sample_delay_factor * clock_period_in_ns) >> \ + dll_delay_shift)) + + #define DELAY_ERROR (abs(QUANTIZED_DELAY - IDEAL_DELAY)) + + #define SAMPLE_IS_NOT_WITHIN_THE_EYE (DELAY_ERROR > (tEYE >> 1)) + + /* + * While the quantized sample time falls outside the eye, reduce the + * sample delay or extend the data setup to move the sampling point back + * toward the eye. Do not allow the number of data setup cycles to + * exceed the maximum allowed by the NFC. + */ + while (SAMPLE_IS_NOT_WITHIN_THE_EYE && + (data_setup_in_cycles < nfc->max_data_setup_cycles)) { + /* + * If control arrives here, the quantized sample delay falls + * outside the eye. Check if it's before the eye opens, or after + * the eye closes. + */ + if (QUANTIZED_DELAY > IDEAL_DELAY) { + /* + * If control arrives here, the quantized sample delay + * falls after the eye closes. Decrease the quantized + * delay time and then go back to re-evaluate. + */ + if (sample_delay_factor != 0) + sample_delay_factor--; + continue; + } + + /* + * If control arrives here, the quantized sample delay falls + * before the eye opens. Shift the sample point by increasing + * data setup time. This will also make the eye larger. + */ + + /* Give a cycle to data setup. */ + data_setup_in_cycles++; + /* Synchronize the data setup time with the cycles. */ + data_setup_in_ns += clock_period_in_ns; + /* Adjust tEYE accordingly. */ + tEYE += clock_period_in_ns; + + /* + * Decrease the ideal sample delay by one half cycle, to keep it + * in the middle of the eye. + */ + ideal_sample_delay_in_ns -= (clock_period_in_ns >> 1); + + /* ...and one less period for the delay time. */ + ideal_sample_delay_in_ns -= clock_period_in_ns; + + /* Jam a negative ideal sample delay to zero. */ + if (ideal_sample_delay_in_ns < 0) + ideal_sample_delay_in_ns = 0; + + /* + * We have a new ideal sample delay, so re-compute the quantized + * delay. + */ + sample_delay_factor = + ns_to_cycles( + ideal_sample_delay_in_ns << dll_delay_shift, + clock_period_in_ns, 0); + + if (sample_delay_factor > nfc->max_sample_delay_factor) + sample_delay_factor = nfc->max_sample_delay_factor; + } + + /* Control arrives here when we're ready to return our results. */ +return_results: + hw->data_setup_in_cycles = data_setup_in_cycles; + hw->data_hold_in_cycles = data_hold_in_cycles; + hw->address_setup_in_cycles = address_setup_in_cycles; + hw->use_half_periods = dll_use_half_periods; + hw->sample_delay_factor = sample_delay_factor; + + /* Return success. 
*/ + return 0; +} + +/* Begin the I/O */ +void gpmi_begin(struct gpmi_nand_data *this) +{ + struct resources *r = &this->resources; + struct timing_threshod *nfc = &timing_default_threshold; + unsigned char *gpmi_regs = r->gpmi_regs; + unsigned int clock_period_in_ns; + uint32_t reg; + unsigned int dll_wait_time_in_us; + struct gpmi_nfc_hardware_timing hw; + int ret; + + /* Enable the clock. */ + ret = clk_enable(r->clock); + if (ret) { + pr_err("We failed in enable the clk\n"); + goto err_out; + } + + /* set ready/busy timeout */ + writel(0x500 << BP_GPMI_TIMING1_BUSY_TIMEOUT, + gpmi_regs + HW_GPMI_TIMING1); + + /* Get the timing information we need. */ + nfc->clock_frequency_in_hz = clk_get_rate(r->clock); + clock_period_in_ns = 1000000000 / nfc->clock_frequency_in_hz; + + gpmi_nfc_compute_hardware_timing(this, &hw); + + /* Set up all the simple timing parameters. */ + reg = BF_GPMI_TIMING0_ADDRESS_SETUP(hw.address_setup_in_cycles) | + BF_GPMI_TIMING0_DATA_HOLD(hw.data_hold_in_cycles) | + BF_GPMI_TIMING0_DATA_SETUP(hw.data_setup_in_cycles) ; + + writel(reg, gpmi_regs + HW_GPMI_TIMING0); + + /* + * DLL_ENABLE must be set to 0 when setting RDN_DELAY or HALF_PERIOD. + */ + writel(BM_GPMI_CTRL1_DLL_ENABLE, gpmi_regs + HW_GPMI_CTRL1_CLR); + + /* Clear out the DLL control fields. */ + writel(BM_GPMI_CTRL1_RDN_DELAY, gpmi_regs + HW_GPMI_CTRL1_CLR); + writel(BM_GPMI_CTRL1_HALF_PERIOD, gpmi_regs + HW_GPMI_CTRL1_CLR); + + /* If no sample delay is called for, return immediately. */ + if (!hw.sample_delay_factor) + return; + + /* Configure the HALF_PERIOD flag. */ + if (hw.use_half_periods) + writel(BM_GPMI_CTRL1_HALF_PERIOD, + gpmi_regs + HW_GPMI_CTRL1_SET); + + /* Set the delay factor. */ + writel(BF_GPMI_CTRL1_RDN_DELAY(hw.sample_delay_factor), + gpmi_regs + HW_GPMI_CTRL1_SET); + + /* Enable the DLL. */ + writel(BM_GPMI_CTRL1_DLL_ENABLE, gpmi_regs + HW_GPMI_CTRL1_SET); + + /* + * After we enable the GPMI DLL, we have to wait 64 clock cycles before + * we can use the GPMI. + * + * Calculate the amount of time we need to wait, in microseconds. + */ + dll_wait_time_in_us = (clock_period_in_ns * 64) / 1000; + + if (!dll_wait_time_in_us) + dll_wait_time_in_us = 1; + + /* Wait for the DLL to settle. */ + udelay(dll_wait_time_in_us); + +err_out: + return; +} + +void gpmi_end(struct gpmi_nand_data *this) +{ + struct resources *r = &this->resources; + clk_disable(r->clock); +} + +/* Clears a BCH interrupt. */ +void gpmi_clear_bch(struct gpmi_nand_data *this) +{ + struct resources *r = &this->resources; + writel(BM_BCH_CTRL_COMPLETE_IRQ, r->bch_regs + HW_BCH_CTRL_CLR); +} + +/* Returns the Ready/Busy status of the given chip. 
*/ +int gpmi_is_ready(struct gpmi_nand_data *this, unsigned chip) +{ + struct resources *r = &this->resources; + uint32_t mask = 0; + uint32_t reg = 0; + + if (GPMI_IS_MX23(this)) { + mask = MX23_BM_GPMI_DEBUG_READY0 << chip; + reg = readl(r->gpmi_regs + HW_GPMI_DEBUG); + } else if (GPMI_IS_MX28(this)) { + mask = MX28_BF_GPMI_STAT_READY_BUSY(1 << chip); + reg = readl(r->gpmi_regs + HW_GPMI_STAT); + } else + pr_err("unknow arch.\n"); + return reg & mask; +} + +static inline void set_dma_type(struct gpmi_nand_data *this, + enum dma_ops_type type) +{ + this->last_dma_type = this->dma_type; + this->dma_type = type; +} + +int gpmi_send_command(struct gpmi_nand_data *this) +{ + struct dma_chan *channel = get_dma_chan(this); + struct dma_async_tx_descriptor *desc; + struct scatterlist *sgl; + int chip = this->current_chip; + u32 pio[3]; + + /* [1] send out the PIO words */ + pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(BV_GPMI_CTRL0_COMMAND_MODE__WRITE) + | BM_GPMI_CTRL0_WORD_LENGTH + | BF_GPMI_CTRL0_CS(chip, this) + | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) + | BF_GPMI_CTRL0_ADDRESS(BV_GPMI_CTRL0_ADDRESS__NAND_CLE) + | BM_GPMI_CTRL0_ADDRESS_INCREMENT + | BF_GPMI_CTRL0_XFER_COUNT(this->command_length); + pio[1] = pio[2] = 0; + desc = channel->device->device_prep_slave_sg(channel, + (struct scatterlist *)pio, + ARRAY_SIZE(pio), DMA_NONE, 0); + if (!desc) { + pr_err("step 1 error\n"); + return -1; + } + + /* [2] send out the COMMAND + ADDRESS string stored in @buffer */ + sgl = &this->cmd_sgl; + + sg_init_one(sgl, this->cmd_buffer, this->command_length); + dma_map_sg(this->dev, sgl, 1, DMA_TO_DEVICE); + desc = channel->device->device_prep_slave_sg(channel, + sgl, 1, DMA_TO_DEVICE, 1); + if (!desc) { + pr_err("step 2 error\n"); + return -1; + } + + /* [3] submit the DMA */ + set_dma_type(this, DMA_FOR_COMMAND); + return start_dma_without_bch_irq(this, desc); +} + +int gpmi_send_data(struct gpmi_nand_data *this) +{ + struct dma_async_tx_descriptor *desc; + struct dma_chan *channel = get_dma_chan(this); + int chip = this->current_chip; + uint32_t command_mode; + uint32_t address; + u32 pio[2]; + + /* [1] PIO */ + command_mode = BV_GPMI_CTRL0_COMMAND_MODE__WRITE; + address = BV_GPMI_CTRL0_ADDRESS__NAND_DATA; + + pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(command_mode) + | BM_GPMI_CTRL0_WORD_LENGTH + | BF_GPMI_CTRL0_CS(chip, this) + | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) + | BF_GPMI_CTRL0_ADDRESS(address) + | BF_GPMI_CTRL0_XFER_COUNT(this->upper_len); + pio[1] = 0; + desc = channel->device->device_prep_slave_sg(channel, + (struct scatterlist *)pio, + ARRAY_SIZE(pio), DMA_NONE, 0); + if (!desc) { + pr_err("step 1 error\n"); + return -1; + } + + /* [2] send DMA request */ + prepare_data_dma(this, DMA_TO_DEVICE); + desc = channel->device->device_prep_slave_sg(channel, &this->data_sgl, + 1, DMA_TO_DEVICE, 1); + if (!desc) { + pr_err("step 2 error\n"); + return -1; + } + /* [3] submit the DMA */ + set_dma_type(this, DMA_FOR_WRITE_DATA); + return start_dma_without_bch_irq(this, desc); +} + +int gpmi_read_data(struct gpmi_nand_data *this) +{ + struct dma_async_tx_descriptor *desc; + struct dma_chan *channel = get_dma_chan(this); + int chip = this->current_chip; + u32 pio[2]; + + /* [1] : send PIO */ + pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(BV_GPMI_CTRL0_COMMAND_MODE__READ) + | BM_GPMI_CTRL0_WORD_LENGTH + | BF_GPMI_CTRL0_CS(chip, this) + | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) + | BF_GPMI_CTRL0_ADDRESS(BV_GPMI_CTRL0_ADDRESS__NAND_DATA) + | BF_GPMI_CTRL0_XFER_COUNT(this->upper_len); + pio[1] = 0; + desc = 
channel->device->device_prep_slave_sg(channel, + (struct scatterlist *)pio, + ARRAY_SIZE(pio), DMA_NONE, 0); + if (!desc) { + pr_err("step 1 error\n"); + return -1; + } + + /* [2] : send DMA request */ + prepare_data_dma(this, DMA_FROM_DEVICE); + desc = channel->device->device_prep_slave_sg(channel, &this->data_sgl, + 1, DMA_FROM_DEVICE, 1); + if (!desc) { + pr_err("step 2 error\n"); + return -1; + } + + /* [3] : submit the DMA */ + set_dma_type(this, DMA_FOR_READ_DATA); + return start_dma_without_bch_irq(this, desc); +} + +int gpmi_send_page(struct gpmi_nand_data *this, + dma_addr_t payload, dma_addr_t auxiliary) +{ + struct bch_geometry *geo = &this->bch_geometry; + uint32_t command_mode; + uint32_t address; + uint32_t ecc_command; + uint32_t buffer_mask; + struct dma_async_tx_descriptor *desc; + struct dma_chan *channel = get_dma_chan(this); + int chip = this->current_chip; + u32 pio[6]; + + /* A DMA descriptor that does an ECC page read. */ + command_mode = BV_GPMI_CTRL0_COMMAND_MODE__WRITE; + address = BV_GPMI_CTRL0_ADDRESS__NAND_DATA; + ecc_command = BV_GPMI_ECCCTRL_ECC_CMD__BCH_ENCODE; + buffer_mask = BV_GPMI_ECCCTRL_BUFFER_MASK__BCH_PAGE | + BV_GPMI_ECCCTRL_BUFFER_MASK__BCH_AUXONLY; + + pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(command_mode) + | BM_GPMI_CTRL0_WORD_LENGTH + | BF_GPMI_CTRL0_CS(chip, this) + | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) + | BF_GPMI_CTRL0_ADDRESS(address) + | BF_GPMI_CTRL0_XFER_COUNT(0); + pio[1] = 0; + pio[2] = BM_GPMI_ECCCTRL_ENABLE_ECC + | BF_GPMI_ECCCTRL_ECC_CMD(ecc_command) + | BF_GPMI_ECCCTRL_BUFFER_MASK(buffer_mask); + pio[3] = geo->page_size; + pio[4] = payload; + pio[5] = auxiliary; + + desc = channel->device->device_prep_slave_sg(channel, + (struct scatterlist *)pio, + ARRAY_SIZE(pio), DMA_NONE, 0); + if (!desc) { + pr_err("step 2 error\n"); + return -1; + } + set_dma_type(this, DMA_FOR_WRITE_ECC_PAGE); + return start_dma_with_bch_irq(this, desc); +} + +int gpmi_read_page(struct gpmi_nand_data *this, + dma_addr_t payload, dma_addr_t auxiliary) +{ + struct bch_geometry *geo = &this->bch_geometry; + uint32_t command_mode; + uint32_t address; + uint32_t ecc_command; + uint32_t buffer_mask; + struct dma_async_tx_descriptor *desc; + struct dma_chan *channel = get_dma_chan(this); + int chip = this->current_chip; + u32 pio[6]; + + /* [1] Wait for the chip to report ready. */ + command_mode = BV_GPMI_CTRL0_COMMAND_MODE__WAIT_FOR_READY; + address = BV_GPMI_CTRL0_ADDRESS__NAND_DATA; + + pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(command_mode) + | BM_GPMI_CTRL0_WORD_LENGTH + | BF_GPMI_CTRL0_CS(chip, this) + | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) + | BF_GPMI_CTRL0_ADDRESS(address) + | BF_GPMI_CTRL0_XFER_COUNT(0); + pio[1] = 0; + desc = channel->device->device_prep_slave_sg(channel, + (struct scatterlist *)pio, 2, DMA_NONE, 0); + if (!desc) { + pr_err("step 1 error\n"); + return -1; + } + + /* [2] Enable the BCH block and read. 
*/ + command_mode = BV_GPMI_CTRL0_COMMAND_MODE__READ; + address = BV_GPMI_CTRL0_ADDRESS__NAND_DATA; + ecc_command = BV_GPMI_ECCCTRL_ECC_CMD__BCH_DECODE; + buffer_mask = BV_GPMI_ECCCTRL_BUFFER_MASK__BCH_PAGE + | BV_GPMI_ECCCTRL_BUFFER_MASK__BCH_AUXONLY; + + pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(command_mode) + | BM_GPMI_CTRL0_WORD_LENGTH + | BF_GPMI_CTRL0_CS(chip, this) + | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) + | BF_GPMI_CTRL0_ADDRESS(address) + | BF_GPMI_CTRL0_XFER_COUNT(geo->page_size); + + pio[1] = 0; + pio[2] = BM_GPMI_ECCCTRL_ENABLE_ECC + | BF_GPMI_ECCCTRL_ECC_CMD(ecc_command) + | BF_GPMI_ECCCTRL_BUFFER_MASK(buffer_mask); + pio[3] = geo->page_size; + pio[4] = payload; + pio[5] = auxiliary; + desc = channel->device->device_prep_slave_sg(channel, + (struct scatterlist *)pio, + ARRAY_SIZE(pio), DMA_NONE, 1); + if (!desc) { + pr_err("step 2 error\n"); + return -1; + } + + /* [3] Disable the BCH block */ + command_mode = BV_GPMI_CTRL0_COMMAND_MODE__WAIT_FOR_READY; + address = BV_GPMI_CTRL0_ADDRESS__NAND_DATA; + + pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(command_mode) + | BM_GPMI_CTRL0_WORD_LENGTH + | BF_GPMI_CTRL0_CS(chip, this) + | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) + | BF_GPMI_CTRL0_ADDRESS(address) + | BF_GPMI_CTRL0_XFER_COUNT(geo->page_size); + pio[1] = 0; + desc = channel->device->device_prep_slave_sg(channel, + (struct scatterlist *)pio, 2, DMA_NONE, 1); + if (!desc) { + pr_err("step 3 error\n"); + return -1; + } + + /* [4] submit the DMA */ + set_dma_type(this, DMA_FOR_READ_ECC_PAGE); + return start_dma_with_bch_irq(this, desc); +} diff --git a/drivers/mtd/nand/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/gpmi-nand/gpmi-nand.c new file mode 100644 index 000000000000..071b63420f0e --- /dev/null +++ b/drivers/mtd/nand/gpmi-nand/gpmi-nand.c @@ -0,0 +1,1619 @@ +/* + * Freescale GPMI NAND Flash Driver + * + * Copyright (C) 2010-2011 Freescale Semiconductor, Inc. + * Copyright (C) 2008 Embedded Alley Solutions, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. + */ +#include <linux/clk.h> +#include <linux/slab.h> +#include <linux/interrupt.h> +#include <linux/mtd/gpmi-nand.h> +#include <linux/mtd/partitions.h> + +#include "gpmi-nand.h" + +/* add our owner bbt descriptor */ +static uint8_t scan_ff_pattern[] = { 0xff }; +static struct nand_bbt_descr gpmi_bbt_descr = { + .options = 0, + .offs = 0, + .len = 1, + .pattern = scan_ff_pattern +}; + +/* We will use all the (page + OOB). */ +static struct nand_ecclayout gpmi_hw_ecclayout = { + .eccbytes = 0, + .eccpos = { 0, }, + .oobfree = { {.offset = 0, .length = 0} } +}; + +static irqreturn_t bch_irq(int irq, void *cookie) +{ + struct gpmi_nand_data *this = cookie; + + gpmi_clear_bch(this); + complete(&this->bch_done); + return IRQ_HANDLED; +} + +/* + * Calculate the ECC strength by hand: + * E : The ECC strength. + * G : the length of Galois Field. 
+ * N : The chunk count of per page. + * O : the oobsize of the NAND chip. + * M : the metasize of per page. + * + * The formula is : + * E * G * N + * ------------ <= (O - M) + * 8 + * + * So, we get E by: + * (O - M) * 8 + * E <= ------------- + * G * N + */ +static inline int get_ecc_strength(struct gpmi_nand_data *this) +{ + struct bch_geometry *geo = &this->bch_geometry; + struct mtd_info *mtd = &this->mtd; + int ecc_strength; + + ecc_strength = ((mtd->oobsize - geo->metadata_size) * 8) + / (geo->gf_len * geo->ecc_chunk_count); + + /* We need the minor even number. */ + return round_down(ecc_strength, 2); +} + +int common_nfc_set_geometry(struct gpmi_nand_data *this) +{ + struct bch_geometry *geo = &this->bch_geometry; + struct mtd_info *mtd = &this->mtd; + unsigned int metadata_size; + unsigned int status_size; + unsigned int block_mark_bit_offset; + + /* + * The size of the metadata can be changed, though we set it to 10 + * bytes now. But it can't be too large, because we have to save + * enough space for BCH. + */ + geo->metadata_size = 10; + + /* The default for the length of Galois Field. */ + geo->gf_len = 13; + + /* The default for chunk size. There is no oobsize greater then 512. */ + geo->ecc_chunk_size = 512; + while (geo->ecc_chunk_size < mtd->oobsize) + geo->ecc_chunk_size *= 2; /* keep C >= O */ + + geo->ecc_chunk_count = mtd->writesize / geo->ecc_chunk_size; + + /* We use the same ECC strength for all chunks. */ + geo->ecc_strength = get_ecc_strength(this); + if (!geo->ecc_strength) { + pr_err("We get a wrong ECC strength.\n"); + return -EINVAL; + } + + geo->page_size = mtd->writesize + mtd->oobsize; + geo->payload_size = mtd->writesize; + + /* + * The auxiliary buffer contains the metadata and the ECC status. The + * metadata is padded to the nearest 32-bit boundary. The ECC status + * contains one byte for every ECC chunk, and is also padded to the + * nearest 32-bit boundary. + */ + metadata_size = ALIGN(geo->metadata_size, 4); + status_size = ALIGN(geo->ecc_chunk_count, 4); + + geo->auxiliary_size = metadata_size + status_size; + geo->auxiliary_status_offset = metadata_size; + + if (!this->swap_block_mark) + return 0; + + /* + * We need to compute the byte and bit offsets of + * the physical block mark within the ECC-based view of the page. + * + * NAND chip with 2K page shows below: + * (Block Mark) + * | | + * | D | + * |<---->| + * V V + * +---+----------+-+----------+-+----------+-+----------+-+ + * | M | data |E| data |E| data |E| data |E| + * +---+----------+-+----------+-+----------+-+----------+-+ + * + * The position of block mark moves forward in the ECC-based view + * of page, and the delta is: + * + * E * G * (N - 1) + * D = (---------------- + M) + * 8 + * + * With the formula to compute the ECC strength, and the condition + * : C >= O (C is the ecc chunk size) + * + * It's easy to deduce to the following result: + * + * E * G (O - M) C - M C - M + * ----------- <= ------- <= -------- < --------- + * 8 N N (N - 1) + * + * So, we get: + * + * E * G * (N - 1) + * D = (---------------- + M) < C + * 8 + * + * The above inequality means the position of block mark + * within the ECC-based view of the page is still in the data chunk, + * and it's NOT in the ECC bits of the chunk. 
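A worked instance makes the two formulas above concrete; the standalone check below assumes a hypothetical 2048+64 byte page together with the defaults used above (10 metadata bytes, GF(2^13), 512-byte chunks):

#include <stdio.h>

/*
 * Standalone check of the ECC strength and block mark formulas for an
 * example 2048+64 byte page.  Values chosen for illustration only.
 */
int main(void)
{
	unsigned int writesize = 2048, oobsize = 64;	/* example chip */
	unsigned int metadata_size = 10, gf_len = 13, ecc_chunk_size = 512;
	unsigned int ecc_chunk_count = writesize / ecc_chunk_size;	/* 4 */

	/* E <= (O - M) * 8 / (G * N), rounded down to an even number */
	unsigned int ecc_strength = ((oobsize - metadata_size) * 8) /
				    (gf_len * ecc_chunk_count);
	ecc_strength &= ~1u;					/* 8 */

	/* D = E * G * (N - 1) / 8 + M, in bytes; stays below C = 512 */
	unsigned int delta = ecc_strength * gf_len *
			     (ecc_chunk_count - 1) / 8 + metadata_size;

	/* block mark position in the ECC-based view of the page */
	unsigned int bit_offset = writesize * 8 -
		(ecc_strength * gf_len * (ecc_chunk_count - 1) +
		 metadata_size * 8);

	printf("E=%u D=%u byte_offset=%u bit_offset=%u\n",
	       ecc_strength, delta,
	       bit_offset / 8, bit_offset % 8);	/* E=8 D=49 byte=1999 bit=0 */
	return 0;
}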
+ * + * Use the following to compute the bit position of the + * physical block mark within the ECC-based view of the page: + * (page_size - D) * 8 + * + * --Huang Shijie + */ + block_mark_bit_offset = mtd->writesize * 8 - + (geo->ecc_strength * geo->gf_len * (geo->ecc_chunk_count - 1) + + geo->metadata_size * 8); + + geo->block_mark_byte_offset = block_mark_bit_offset / 8; + geo->block_mark_bit_offset = block_mark_bit_offset % 8; + return 0; +} + +struct dma_chan *get_dma_chan(struct gpmi_nand_data *this) +{ + int chipnr = this->current_chip; + + return this->dma_chans[chipnr]; +} + +/* Can we use the upper's buffer directly for DMA? */ +void prepare_data_dma(struct gpmi_nand_data *this, enum dma_data_direction dr) +{ + struct scatterlist *sgl = &this->data_sgl; + int ret; + + this->direct_dma_map_ok = true; + + /* first try to map the upper buffer directly */ + sg_init_one(sgl, this->upper_buf, this->upper_len); + ret = dma_map_sg(this->dev, sgl, 1, dr); + if (ret == 0) { + /* We have to use our own DMA buffer. */ + sg_init_one(sgl, this->data_buffer_dma, PAGE_SIZE); + + if (dr == DMA_TO_DEVICE) + memcpy(this->data_buffer_dma, this->upper_buf, + this->upper_len); + + ret = dma_map_sg(this->dev, sgl, 1, dr); + if (ret == 0) + pr_err("map failed.\n"); + + this->direct_dma_map_ok = false; + } +} + +/* This will be called after the DMA operation is finished. */ +static void dma_irq_callback(void *param) +{ + struct gpmi_nand_data *this = param; + struct completion *dma_c = &this->dma_done; + + complete(dma_c); + + switch (this->dma_type) { + case DMA_FOR_COMMAND: + dma_unmap_sg(this->dev, &this->cmd_sgl, 1, DMA_TO_DEVICE); + break; + + case DMA_FOR_READ_DATA: + dma_unmap_sg(this->dev, &this->data_sgl, 1, DMA_FROM_DEVICE); + if (this->direct_dma_map_ok == false) + memcpy(this->upper_buf, this->data_buffer_dma, + this->upper_len); + break; + + case DMA_FOR_WRITE_DATA: + dma_unmap_sg(this->dev, &this->data_sgl, 1, DMA_TO_DEVICE); + break; + + case DMA_FOR_READ_ECC_PAGE: + case DMA_FOR_WRITE_ECC_PAGE: + /* We have to wait the BCH interrupt to finish. */ + break; + + default: + pr_err("in wrong DMA operation.\n"); + } +} + +int start_dma_without_bch_irq(struct gpmi_nand_data *this, + struct dma_async_tx_descriptor *desc) +{ + struct completion *dma_c = &this->dma_done; + int err; + + init_completion(dma_c); + + desc->callback = dma_irq_callback; + desc->callback_param = this; + dmaengine_submit(desc); + + /* Wait for the interrupt from the DMA block. */ + err = wait_for_completion_timeout(dma_c, msecs_to_jiffies(1000)); + if (!err) { + pr_err("DMA timeout, last DMA :%d\n", this->last_dma_type); + gpmi_dump_info(this); + return -ETIMEDOUT; + } + return 0; +} + +/* + * This function is used in BCH reading or BCH writing pages. + * It will wait for the BCH interrupt as long as ONE second. + * Actually, we must wait for two interrupts : + * [1] firstly the DMA interrupt and + * [2] secondly the BCH interrupt. + */ +int start_dma_with_bch_irq(struct gpmi_nand_data *this, + struct dma_async_tx_descriptor *desc) +{ + struct completion *bch_c = &this->bch_done; + int err; + + /* Prepare to receive an interrupt from the BCH block. */ + init_completion(bch_c); + + /* start the DMA */ + start_dma_without_bch_irq(this, desc); + + /* Wait for the interrupt from the BCH block. 
*/ + err = wait_for_completion_timeout(bch_c, msecs_to_jiffies(1000)); + if (!err) { + pr_err("BCH timeout, last DMA :%d\n", this->last_dma_type); + gpmi_dump_info(this); + return -ETIMEDOUT; + } + return 0; +} + +static int __devinit +acquire_register_block(struct gpmi_nand_data *this, const char *res_name) +{ + struct platform_device *pdev = this->pdev; + struct resources *res = &this->resources; + struct resource *r; + void *p; + + r = platform_get_resource_byname(pdev, IORESOURCE_MEM, res_name); + if (!r) { + pr_err("Can't get resource for %s\n", res_name); + return -ENXIO; + } + + p = ioremap(r->start, resource_size(r)); + if (!p) { + pr_err("Can't remap %s\n", res_name); + return -ENOMEM; + } + + if (!strcmp(res_name, GPMI_NAND_GPMI_REGS_ADDR_RES_NAME)) + res->gpmi_regs = p; + else if (!strcmp(res_name, GPMI_NAND_BCH_REGS_ADDR_RES_NAME)) + res->bch_regs = p; + else + pr_err("unknown resource name : %s\n", res_name); + + return 0; +} + +static void release_register_block(struct gpmi_nand_data *this) +{ + struct resources *res = &this->resources; + if (res->gpmi_regs) + iounmap(res->gpmi_regs); + if (res->bch_regs) + iounmap(res->bch_regs); + res->gpmi_regs = NULL; + res->bch_regs = NULL; +} + +static int __devinit +acquire_bch_irq(struct gpmi_nand_data *this, irq_handler_t irq_h) +{ + struct platform_device *pdev = this->pdev; + struct resources *res = &this->resources; + const char *res_name = GPMI_NAND_BCH_INTERRUPT_RES_NAME; + struct resource *r; + int err; + + r = platform_get_resource_byname(pdev, IORESOURCE_IRQ, res_name); + if (!r) { + pr_err("Can't get resource for %s\n", res_name); + return -ENXIO; + } + + err = request_irq(r->start, irq_h, 0, res_name, this); + if (err) { + pr_err("Can't own %s\n", res_name); + return err; + } + + res->bch_low_interrupt = r->start; + res->bch_high_interrupt = r->end; + return 0; +} + +static void release_bch_irq(struct gpmi_nand_data *this) +{ + struct resources *res = &this->resources; + int i = res->bch_low_interrupt; + + for (; i <= res->bch_high_interrupt; i++) + free_irq(i, this); +} + +static bool gpmi_dma_filter(struct dma_chan *chan, void *param) +{ + struct gpmi_nand_data *this = param; + struct resource *r = this->private; + + if (!mxs_dma_is_apbh(chan)) + return false; + /* + * only catch the GPMI dma channels : + * for mx23 : MX23_DMA_GPMI0 ~ MX23_DMA_GPMI3 + * (These four channels share the same IRQ!) + * + * for mx28 : MX28_DMA_GPMI0 ~ MX28_DMA_GPMI7 + * (These eight channels share the same IRQ!) 
+ */ + if (r->start <= chan->chan_id && chan->chan_id <= r->end) { + chan->private = &this->dma_data; + return true; + } + return false; +} + +static void release_dma_channels(struct gpmi_nand_data *this) +{ + unsigned int i; + for (i = 0; i < DMA_CHANS; i++) + if (this->dma_chans[i]) { + dma_release_channel(this->dma_chans[i]); + this->dma_chans[i] = NULL; + } +} + +static int __devinit acquire_dma_channels(struct gpmi_nand_data *this) +{ + struct platform_device *pdev = this->pdev; + struct gpmi_nand_platform_data *pdata = this->pdata; + struct resources *res = &this->resources; + struct resource *r, *r_dma; + unsigned int i; + + r = platform_get_resource_byname(pdev, IORESOURCE_DMA, + GPMI_NAND_DMA_CHANNELS_RES_NAME); + r_dma = platform_get_resource_byname(pdev, IORESOURCE_IRQ, + GPMI_NAND_DMA_INTERRUPT_RES_NAME); + if (!r || !r_dma) { + pr_err("Can't get resource for DMA\n"); + return -ENXIO; + } + + /* used in gpmi_dma_filter() */ + this->private = r; + + for (i = r->start; i <= r->end; i++) { + struct dma_chan *dma_chan; + dma_cap_mask_t mask; + + if (i - r->start >= pdata->max_chip_count) + break; + + dma_cap_zero(mask); + dma_cap_set(DMA_SLAVE, mask); + + /* get the DMA interrupt */ + if (r_dma->start == r_dma->end) { + /* only register the first. */ + if (i == r->start) + this->dma_data.chan_irq = r_dma->start; + else + this->dma_data.chan_irq = NO_IRQ; + } else + this->dma_data.chan_irq = r_dma->start + (i - r->start); + + dma_chan = dma_request_channel(mask, gpmi_dma_filter, this); + if (!dma_chan) + goto acquire_err; + + /* fill the first empty item */ + this->dma_chans[i - r->start] = dma_chan; + } + + res->dma_low_channel = r->start; + res->dma_high_channel = i; + return 0; + +acquire_err: + pr_err("Can't acquire DMA channel %u\n", i); + release_dma_channels(this); + return -EINVAL; +} + +static int __devinit acquire_resources(struct gpmi_nand_data *this) +{ + struct resources *res = &this->resources; + int ret; + + ret = acquire_register_block(this, GPMI_NAND_GPMI_REGS_ADDR_RES_NAME); + if (ret) + goto exit_regs; + + ret = acquire_register_block(this, GPMI_NAND_BCH_REGS_ADDR_RES_NAME); + if (ret) + goto exit_regs; + + ret = acquire_bch_irq(this, bch_irq); + if (ret) + goto exit_regs; + + ret = acquire_dma_channels(this); + if (ret) + goto exit_dma_channels; + + res->clock = clk_get(&this->pdev->dev, NULL); + if (IS_ERR(res->clock)) { + pr_err("can not get the clock\n"); + ret = -ENOENT; + goto exit_clock; + } + return 0; + +exit_clock: + release_dma_channels(this); +exit_dma_channels: + release_bch_irq(this); +exit_regs: + release_register_block(this); + return ret; +} + +static void release_resources(struct gpmi_nand_data *this) +{ + struct resources *r = &this->resources; + + clk_put(r->clock); + release_register_block(this); + release_bch_irq(this); + release_dma_channels(this); +} + +static int __devinit init_hardware(struct gpmi_nand_data *this) +{ + int ret; + + /* + * This structure contains the "safe" GPMI timing that should succeed + * with any NAND Flash device + * (although, with less-than-optimal performance). + */ + struct nand_timing safe_timing = { + .data_setup_in_ns = 80, + .data_hold_in_ns = 60, + .address_setup_in_ns = 25, + .gpmi_sample_delay_in_ns = 6, + .tREA_in_ns = -1, + .tRLOH_in_ns = -1, + .tRHOH_in_ns = -1, + }; + + /* Initialize the hardwares. 
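+ * (The safe timing above is recorded as the current timing only after
+ * gpmi_init() succeeds.)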
*/ + ret = gpmi_init(this); + if (ret) + return ret; + + this->timing = safe_timing; + return 0; +} + +static int read_page_prepare(struct gpmi_nand_data *this, + void *destination, unsigned length, + void *alt_virt, dma_addr_t alt_phys, unsigned alt_size, + void **use_virt, dma_addr_t *use_phys) +{ + struct device *dev = this->dev; + + if (virt_addr_valid(destination)) { + dma_addr_t dest_phys; + + dest_phys = dma_map_single(dev, destination, + length, DMA_FROM_DEVICE); + if (dma_mapping_error(dev, dest_phys)) { + if (alt_size < length) { + pr_err("Alternate buffer is too small\n"); + return -ENOMEM; + } + goto map_failed; + } + *use_virt = destination; + *use_phys = dest_phys; + this->direct_dma_map_ok = true; + return 0; + } + +map_failed: + *use_virt = alt_virt; + *use_phys = alt_phys; + this->direct_dma_map_ok = false; + return 0; +} + +static inline void read_page_end(struct gpmi_nand_data *this, + void *destination, unsigned length, + void *alt_virt, dma_addr_t alt_phys, unsigned alt_size, + void *used_virt, dma_addr_t used_phys) +{ + if (this->direct_dma_map_ok) + dma_unmap_single(this->dev, used_phys, length, DMA_FROM_DEVICE); +} + +static inline void read_page_swap_end(struct gpmi_nand_data *this, + void *destination, unsigned length, + void *alt_virt, dma_addr_t alt_phys, unsigned alt_size, + void *used_virt, dma_addr_t used_phys) +{ + if (!this->direct_dma_map_ok) + memcpy(destination, alt_virt, length); +} + +static int send_page_prepare(struct gpmi_nand_data *this, + const void *source, unsigned length, + void *alt_virt, dma_addr_t alt_phys, unsigned alt_size, + const void **use_virt, dma_addr_t *use_phys) +{ + struct device *dev = this->dev; + + if (virt_addr_valid(source)) { + dma_addr_t source_phys; + + source_phys = dma_map_single(dev, (void *)source, length, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, source_phys)) { + if (alt_size < length) { + pr_err("Alternate buffer is too small\n"); + return -ENOMEM; + } + goto map_failed; + } + *use_virt = source; + *use_phys = source_phys; + return 0; + } +map_failed: + /* + * Copy the content of the source buffer into the alternate + * buffer and set up the return values accordingly. + */ + memcpy(alt_virt, source, length); + + *use_virt = alt_virt; + *use_phys = alt_phys; + return 0; +} + +static void send_page_end(struct gpmi_nand_data *this, + const void *source, unsigned length, + void *alt_virt, dma_addr_t alt_phys, unsigned alt_size, + const void *used_virt, dma_addr_t used_phys) +{ + struct device *dev = this->dev; + if (used_virt == source) + dma_unmap_single(dev, used_phys, length, DMA_TO_DEVICE); +} + +static void gpmi_free_dma_buffer(struct gpmi_nand_data *this) +{ + struct device *dev = this->dev; + + if (this->page_buffer_virt && virt_addr_valid(this->page_buffer_virt)) + dma_free_coherent(dev, this->page_buffer_size, + this->page_buffer_virt, + this->page_buffer_phys); + kfree(this->cmd_buffer); + kfree(this->data_buffer_dma); + + this->cmd_buffer = NULL; + this->data_buffer_dma = NULL; + this->page_buffer_virt = NULL; + this->page_buffer_size = 0; +} + +/* Allocate the DMA buffers */ +static int gpmi_alloc_dma_buffer(struct gpmi_nand_data *this) +{ + struct bch_geometry *geo = &this->bch_geometry; + struct device *dev = this->dev; + + /* [1] Allocate a command buffer. PAGE_SIZE is enough. */ + this->cmd_buffer = kzalloc(PAGE_SIZE, GFP_DMA); + if (this->cmd_buffer == NULL) + goto error_alloc; + + /* [2] Allocate a read/write data buffer. PAGE_SIZE is enough. 
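+ * It backs gpmi_read_byte() and also serves as the bounce buffer in
+ * prepare_data_dma() whenever the caller's buffer cannot be mapped
+ * for DMA directly.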
*/ + this->data_buffer_dma = kzalloc(PAGE_SIZE, GFP_DMA); + if (this->data_buffer_dma == NULL) + goto error_alloc; + + /* + * [3] Allocate the page buffer. + * + * Both the payload buffer and the auxiliary buffer must appear on + * 32-bit boundaries. We presume the size of the payload buffer is a + * power of two and is much larger than four, which guarantees the + * auxiliary buffer will appear on a 32-bit boundary. + */ + this->page_buffer_size = geo->payload_size + geo->auxiliary_size; + this->page_buffer_virt = dma_alloc_coherent(dev, this->page_buffer_size, + &this->page_buffer_phys, GFP_DMA); + if (!this->page_buffer_virt) + goto error_alloc; + + + /* Slice up the page buffer. */ + this->payload_virt = this->page_buffer_virt; + this->payload_phys = this->page_buffer_phys; + this->auxiliary_virt = this->payload_virt + geo->payload_size; + this->auxiliary_phys = this->payload_phys + geo->payload_size; + return 0; + +error_alloc: + gpmi_free_dma_buffer(this); + pr_err("allocate DMA buffer ret!!\n"); + return -ENOMEM; +} + +static void gpmi_cmd_ctrl(struct mtd_info *mtd, int data, unsigned int ctrl) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + int ret; + + /* + * Every operation begins with a command byte and a series of zero or + * more address bytes. These are distinguished by either the Address + * Latch Enable (ALE) or Command Latch Enable (CLE) signals being + * asserted. When MTD is ready to execute the command, it will deassert + * both latch enables. + * + * Rather than run a separate DMA operation for every single byte, we + * queue them up and run a single DMA operation for the entire series + * of command and data bytes. NAND_CMD_NONE means the END of the queue. + */ + if ((ctrl & (NAND_ALE | NAND_CLE))) { + if (data != NAND_CMD_NONE) + this->cmd_buffer[this->command_length++] = data; + return; + } + + if (!this->command_length) + return; + + ret = gpmi_send_command(this); + if (ret) + pr_err("Chip: %u, Error %d\n", this->current_chip, ret); + + this->command_length = 0; +} + +static int gpmi_dev_ready(struct mtd_info *mtd) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + + return gpmi_is_ready(this, this->current_chip); +} + +static void gpmi_select_chip(struct mtd_info *mtd, int chipnr) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + + if ((this->current_chip < 0) && (chipnr >= 0)) + gpmi_begin(this); + else if ((this->current_chip >= 0) && (chipnr < 0)) + gpmi_end(this); + + this->current_chip = chipnr; +} + +static void gpmi_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + + pr_debug("len is %d\n", len); + this->upper_buf = buf; + this->upper_len = len; + + gpmi_read_data(this); +} + +static void gpmi_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + + pr_debug("len is %d\n", len); + this->upper_buf = (uint8_t *)buf; + this->upper_len = len; + + gpmi_send_data(this); +} + +static uint8_t gpmi_read_byte(struct mtd_info *mtd) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + uint8_t *buf = this->data_buffer_dma; + + gpmi_read_buf(mtd, buf, 1); + return buf[0]; +} + +/* + * Handles block mark swapping. + * It can be called in swapping the block mark, or swapping it back, + * because the the operations are the same. 
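+ *
+ * For example, with block_mark_bit_offset == 3, the data byte that
+ * overlays the mark is rebuilt from bits 3..7 of the byte at
+ * block_mark_byte_offset and bits 0..2 of the byte after it, and is
+ * then exchanged with the first byte of the auxiliary (OOB) buffer.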
+ */ +static void block_mark_swapping(struct gpmi_nand_data *this, + void *payload, void *auxiliary) +{ + struct bch_geometry *nfc_geo = &this->bch_geometry; + unsigned char *p; + unsigned char *a; + unsigned int bit; + unsigned char mask; + unsigned char from_data; + unsigned char from_oob; + + if (!this->swap_block_mark) + return; + + /* + * If control arrives here, we're swapping. Make some convenience + * variables. + */ + bit = nfc_geo->block_mark_bit_offset; + p = payload + nfc_geo->block_mark_byte_offset; + a = auxiliary; + + /* + * Get the byte from the data area that overlays the block mark. Since + * the ECC engine applies its own view to the bits in the page, the + * physical block mark won't (in general) appear on a byte boundary in + * the data. + */ + from_data = (p[0] >> bit) | (p[1] << (8 - bit)); + + /* Get the byte from the OOB. */ + from_oob = a[0]; + + /* Swap them. */ + a[0] = from_data; + + mask = (0x1 << bit) - 1; + p[0] = (p[0] & mask) | (from_oob << bit); + + mask = ~0 << bit; + p[1] = (p[1] & mask) | (from_oob >> (8 - bit)); +} + +static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip, + uint8_t *buf, int page) +{ + struct gpmi_nand_data *this = chip->priv; + struct bch_geometry *nfc_geo = &this->bch_geometry; + void *payload_virt; + dma_addr_t payload_phys; + void *auxiliary_virt; + dma_addr_t auxiliary_phys; + unsigned int i; + unsigned char *status; + unsigned int failed; + unsigned int corrected; + int ret; + + pr_debug("page number is : %d\n", page); + ret = read_page_prepare(this, buf, mtd->writesize, + this->payload_virt, this->payload_phys, + nfc_geo->payload_size, + &payload_virt, &payload_phys); + if (ret) { + pr_err("Inadequate DMA buffer\n"); + ret = -ENOMEM; + return ret; + } + auxiliary_virt = this->auxiliary_virt; + auxiliary_phys = this->auxiliary_phys; + + /* go! */ + ret = gpmi_read_page(this, payload_phys, auxiliary_phys); + read_page_end(this, buf, mtd->writesize, + this->payload_virt, this->payload_phys, + nfc_geo->payload_size, + payload_virt, payload_phys); + if (ret) { + pr_err("Error in ECC-based read: %d\n", ret); + goto exit_nfc; + } + + /* handle the block mark swapping */ + block_mark_swapping(this, payload_virt, auxiliary_virt); + + /* Loop over status bytes, accumulating ECC status. */ + failed = 0; + corrected = 0; + status = auxiliary_virt + nfc_geo->auxiliary_status_offset; + + for (i = 0; i < nfc_geo->ecc_chunk_count; i++, status++) { + if ((*status == STATUS_GOOD) || (*status == STATUS_ERASED)) + continue; + + if (*status == STATUS_UNCORRECTABLE) { + failed++; + continue; + } + corrected += *status; + } + + /* + * Propagate ECC status to the owning MTD only when failed or + * corrected times nearly reaches our ECC correction threshold. + */ + if (failed || corrected >= (nfc_geo->ecc_strength - 1)) { + mtd->ecc_stats.failed += failed; + mtd->ecc_stats.corrected += corrected; + } + + /* + * It's time to deliver the OOB bytes. See gpmi_ecc_read_oob() for + * details about our policy for delivering the OOB. + * + * We fill the caller's buffer with set bits, and then copy the block + * mark to th caller's buffer. Note that, if block mark swapping was + * necessary, it has already been done, so we can rely on the first + * byte of the auxiliary buffer to contain the block mark. 
+ */ + memset(chip->oob_poi, ~0, mtd->oobsize); + chip->oob_poi[0] = ((uint8_t *) auxiliary_virt)[0]; + + read_page_swap_end(this, buf, mtd->writesize, + this->payload_virt, this->payload_phys, + nfc_geo->payload_size, + payload_virt, payload_phys); +exit_nfc: + return ret; +} + +static void gpmi_ecc_write_page(struct mtd_info *mtd, + struct nand_chip *chip, const uint8_t *buf) +{ + struct gpmi_nand_data *this = chip->priv; + struct bch_geometry *nfc_geo = &this->bch_geometry; + const void *payload_virt; + dma_addr_t payload_phys; + const void *auxiliary_virt; + dma_addr_t auxiliary_phys; + int ret; + + pr_debug("ecc write page.\n"); + if (this->swap_block_mark) { + /* + * If control arrives here, we're doing block mark swapping. + * Since we can't modify the caller's buffers, we must copy them + * into our own. + */ + memcpy(this->payload_virt, buf, mtd->writesize); + payload_virt = this->payload_virt; + payload_phys = this->payload_phys; + + memcpy(this->auxiliary_virt, chip->oob_poi, + nfc_geo->auxiliary_size); + auxiliary_virt = this->auxiliary_virt; + auxiliary_phys = this->auxiliary_phys; + + /* Handle block mark swapping. */ + block_mark_swapping(this, + (void *) payload_virt, (void *) auxiliary_virt); + } else { + /* + * If control arrives here, we're not doing block mark swapping, + * so we can to try and use the caller's buffers. + */ + ret = send_page_prepare(this, + buf, mtd->writesize, + this->payload_virt, this->payload_phys, + nfc_geo->payload_size, + &payload_virt, &payload_phys); + if (ret) { + pr_err("Inadequate payload DMA buffer\n"); + return; + } + + ret = send_page_prepare(this, + chip->oob_poi, mtd->oobsize, + this->auxiliary_virt, this->auxiliary_phys, + nfc_geo->auxiliary_size, + &auxiliary_virt, &auxiliary_phys); + if (ret) { + pr_err("Inadequate auxiliary DMA buffer\n"); + goto exit_auxiliary; + } + } + + /* Ask the NFC. */ + ret = gpmi_send_page(this, payload_phys, auxiliary_phys); + if (ret) + pr_err("Error in ECC-based write: %d\n", ret); + + if (!this->swap_block_mark) { + send_page_end(this, chip->oob_poi, mtd->oobsize, + this->auxiliary_virt, this->auxiliary_phys, + nfc_geo->auxiliary_size, + auxiliary_virt, auxiliary_phys); +exit_auxiliary: + send_page_end(this, buf, mtd->writesize, + this->payload_virt, this->payload_phys, + nfc_geo->payload_size, + payload_virt, payload_phys); + } +} + +/* + * There are several places in this driver where we have to handle the OOB and + * block marks. This is the function where things are the most complicated, so + * this is where we try to explain it all. All the other places refer back to + * here. + * + * These are the rules, in order of decreasing importance: + * + * 1) Nothing the caller does can be allowed to imperil the block mark. + * + * 2) In read operations, the first byte of the OOB we return must reflect the + * true state of the block mark, no matter where that block mark appears in + * the physical page. + * + * 3) ECC-based read operations return an OOB full of set bits (since we never + * allow ECC-based writes to the OOB, it doesn't matter what ECC-based reads + * return). + * + * 4) "Raw" read operations return a direct view of the physical bytes in the + * page, using the conventional definition of which bytes are data and which + * are OOB. This gives the caller a way to see the actual, physical bytes + * in the page, without the distortions applied by our ECC engine. 
+ *
+ *
+ * What we do for this specific read operation depends on two questions:
+ *
+ * 1) Are we doing a "raw" read, or an ECC-based read?
+ *
+ * 2) Are we using block mark swapping or transcription?
+ *
+ * There are four cases, illustrated by the following Karnaugh map:
+ *
+ *              |           Raw           |        ECC-based        |
+ * -------------+-------------------------+-------------------------+
+ *              | Read the conventional   |                         |
+ *              | OOB at the end of the   |                         |
+ * Swapping     | page and return it. It  |                         |
+ *              | contains exactly what   |                         |
+ *              | we want.                | Read the block mark and |
+ * -------------+-------------------------+ return it in a buffer   |
+ *              | Read the conventional   | full of set bits.       |
+ *              | OOB at the end of the   |                         |
+ *              | page and also the block |                         |
+ * Transcribing | mark in the metadata.   |                         |
+ *              | Copy the block mark     |                         |
+ *              | into the first byte of  |                         |
+ *              | the OOB.                |                         |
+ * -------------+-------------------------+-------------------------+
+ *
+ * Note that we break rule #4 in the Transcribing/Raw case because we're not
+ * giving an accurate view of the actual, physical bytes in the page (we're
+ * overwriting the block mark). That's OK because it's more important to follow
+ * rule #2.
+ *
+ * It turns out that knowing whether we want an "ECC-based" or "raw" read is not
+ * easy. When reading a page, for example, the NAND Flash MTD code calls our
+ * ecc.read_page or ecc.read_page_raw function. Thus, the fact that MTD wants an
+ * ECC-based or raw view of the page is implicit in which function it calls
+ * (there is a similar pair of ECC-based/raw functions for writing).
+ *
+ * Since MTD assumes the OOB is not covered by ECC, there is no pair of
+ * ECC-based/raw functions for reading or writing the OOB. The fact that the
+ * caller wants an ECC-based or raw view of the page is not propagated down to
+ * this driver.
+ */
+static int gpmi_ecc_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
+ int page, int sndcmd)
+{
+ struct gpmi_nand_data *this = chip->priv;
+
+ pr_debug("page number is %d\n", page);
+ /* clear the OOB buffer */
+ memset(chip->oob_poi, ~0, mtd->oobsize);
+
+ /* Read out the conventional OOB. */
+ chip->cmdfunc(mtd, NAND_CMD_READ0, mtd->writesize, page);
+ chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
+
+ /*
+ * Now, we want to make sure the block mark is correct. In the
+ * Swapping/Raw case, we already have it. Otherwise, we need to
+ * explicitly read it.
+ */
+ if (!this->swap_block_mark) {
+ /* Read the block mark into the first byte of the OOB buffer. */
+ chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
+ chip->oob_poi[0] = chip->read_byte(mtd);
+ }
+
+ /*
+ * Return true, indicating that the next call to this function must send
+ * a command.
+ */
+ return true;
+}
+
+static int
+gpmi_ecc_write_oob(struct mtd_info *mtd, struct nand_chip *chip, int page)
+{
+ /*
+ * The BCH will use all of the (page + oob).
+ * Our gpmi_hw_ecclayout can only prevent JFFS2 from writing the OOB,
+ * but it cannot stop some ioctls such as MEMWRITEOOB, which uses
+ * MTD_OPS_PLACE_OOB. So we have to implement this function to prohibit
+ * these ioctls too. 
+ */ + return -EPERM; +} + +static int gpmi_block_markbad(struct mtd_info *mtd, loff_t ofs) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + int block, ret = 0; + uint8_t *block_mark; + int column, page, status, chipnr; + + /* Get block number */ + block = (int)(ofs >> chip->bbt_erase_shift); + if (chip->bbt) + chip->bbt[block >> 2] |= 0x01 << ((block & 0x03) << 1); + + /* Do we have a flash based bad block table ? */ + if (chip->options & NAND_BBT_USE_FLASH) + ret = nand_update_bbt(mtd, ofs); + else { + chipnr = (int)(ofs >> chip->chip_shift); + chip->select_chip(mtd, chipnr); + + column = this->swap_block_mark ? mtd->writesize : 0; + + /* Write the block mark. */ + block_mark = this->data_buffer_dma; + block_mark[0] = 0; /* bad block marker */ + + /* Shift to get page */ + page = (int)(ofs >> chip->page_shift); + + chip->cmdfunc(mtd, NAND_CMD_SEQIN, column, page); + chip->write_buf(mtd, block_mark, 1); + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); + + status = chip->waitfunc(mtd, chip); + if (status & NAND_STATUS_FAIL) + ret = -EIO; + + chip->select_chip(mtd, -1); + } + if (!ret) + mtd->ecc_stats.badblocks++; + + return ret; +} + +static int __devinit nand_boot_set_geometry(struct gpmi_nand_data *this) +{ + struct boot_rom_geometry *geometry = &this->rom_geometry; + + /* + * Set the boot block stride size. + * + * In principle, we should be reading this from the OTP bits, since + * that's where the ROM is going to get it. In fact, we don't have any + * way to read the OTP bits, so we go with the default and hope for the + * best. + */ + geometry->stride_size_in_pages = 64; + + /* + * Set the search area stride exponent. + * + * In principle, we should be reading this from the OTP bits, since + * that's where the ROM is going to get it. In fact, we don't have any + * way to read the OTP bits, so we go with the default and hope for the + * best. + */ + geometry->search_area_stride_exponent = 2; + return 0; +} + +static const char *fingerprint = "STMP"; +static int __devinit mx23_check_transcription_stamp(struct gpmi_nand_data *this) +{ + struct boot_rom_geometry *rom_geo = &this->rom_geometry; + struct device *dev = this->dev; + struct mtd_info *mtd = &this->mtd; + struct nand_chip *chip = &this->nand; + unsigned int search_area_size_in_strides; + unsigned int stride; + unsigned int page; + loff_t byte; + uint8_t *buffer = chip->buffers->databuf; + int saved_chip_number; + int found_an_ncb_fingerprint = false; + + /* Compute the number of strides in a search area. */ + search_area_size_in_strides = 1 << rom_geo->search_area_stride_exponent; + + saved_chip_number = this->current_chip; + chip->select_chip(mtd, 0); + + /* + * Loop through the first search area, looking for the NCB fingerprint. + */ + dev_dbg(dev, "Scanning for an NCB fingerprint...\n"); + + for (stride = 0; stride < search_area_size_in_strides; stride++) { + /* Compute the page and byte addresses. */ + page = stride * rom_geo->stride_size_in_pages; + byte = page * mtd->writesize; + + dev_dbg(dev, "Looking for a fingerprint in page 0x%x\n", page); + + /* + * Read the NCB fingerprint. The fingerprint is four bytes long + * and starts in the 12th byte of the page. + */ + chip->cmdfunc(mtd, NAND_CMD_READ0, 12, page); + chip->read_buf(mtd, buffer, strlen(fingerprint)); + + /* Look for the fingerprint. 
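+ * The fingerprint is the four ASCII bytes "STMP", so a simple
+ * memcmp() against what was just read is sufficient.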
*/ + if (!memcmp(buffer, fingerprint, strlen(fingerprint))) { + found_an_ncb_fingerprint = true; + break; + } + + } + + chip->select_chip(mtd, saved_chip_number); + + if (found_an_ncb_fingerprint) + dev_dbg(dev, "\tFound a fingerprint\n"); + else + dev_dbg(dev, "\tNo fingerprint found\n"); + return found_an_ncb_fingerprint; +} + +/* Writes a transcription stamp. */ +static int __devinit mx23_write_transcription_stamp(struct gpmi_nand_data *this) +{ + struct device *dev = this->dev; + struct boot_rom_geometry *rom_geo = &this->rom_geometry; + struct mtd_info *mtd = &this->mtd; + struct nand_chip *chip = &this->nand; + unsigned int block_size_in_pages; + unsigned int search_area_size_in_strides; + unsigned int search_area_size_in_pages; + unsigned int search_area_size_in_blocks; + unsigned int block; + unsigned int stride; + unsigned int page; + loff_t byte; + uint8_t *buffer = chip->buffers->databuf; + int saved_chip_number; + int status; + + /* Compute the search area geometry. */ + block_size_in_pages = mtd->erasesize / mtd->writesize; + search_area_size_in_strides = 1 << rom_geo->search_area_stride_exponent; + search_area_size_in_pages = search_area_size_in_strides * + rom_geo->stride_size_in_pages; + search_area_size_in_blocks = + (search_area_size_in_pages + (block_size_in_pages - 1)) / + block_size_in_pages; + + dev_dbg(dev, "Search Area Geometry :\n"); + dev_dbg(dev, "\tin Blocks : %u\n", search_area_size_in_blocks); + dev_dbg(dev, "\tin Strides: %u\n", search_area_size_in_strides); + dev_dbg(dev, "\tin Pages : %u\n", search_area_size_in_pages); + + /* Select chip 0. */ + saved_chip_number = this->current_chip; + chip->select_chip(mtd, 0); + + /* Loop over blocks in the first search area, erasing them. */ + dev_dbg(dev, "Erasing the search area...\n"); + + for (block = 0; block < search_area_size_in_blocks; block++) { + /* Compute the page address. */ + page = block * block_size_in_pages; + + /* Erase this block. */ + dev_dbg(dev, "\tErasing block 0x%x\n", block); + chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page); + chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1); + + /* Wait for the erase to finish. */ + status = chip->waitfunc(mtd, chip); + if (status & NAND_STATUS_FAIL) + dev_err(dev, "[%s] Erase failed.\n", __func__); + } + + /* Write the NCB fingerprint into the page buffer. */ + memset(buffer, ~0, mtd->writesize); + memset(chip->oob_poi, ~0, mtd->oobsize); + memcpy(buffer + 12, fingerprint, strlen(fingerprint)); + + /* Loop through the first search area, writing NCB fingerprints. */ + dev_dbg(dev, "Writing NCB fingerprints...\n"); + for (stride = 0; stride < search_area_size_in_strides; stride++) { + /* Compute the page and byte addresses. */ + page = stride * rom_geo->stride_size_in_pages; + byte = page * mtd->writesize; + + /* Write the first page of the current stride. */ + dev_dbg(dev, "Writing an NCB fingerprint in page 0x%x\n", page); + chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); + chip->ecc.write_page_raw(mtd, chip, buffer); + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); + + /* Wait for the write to finish. */ + status = chip->waitfunc(mtd, chip); + if (status & NAND_STATUS_FAIL) + dev_err(dev, "[%s] Write failed.\n", __func__); + } + + /* Deselect chip 0. 
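+ * (i.e. restore whichever chip, if any, was selected on entry).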
*/ + chip->select_chip(mtd, saved_chip_number); + return 0; +} + +static int __devinit mx23_boot_init(struct gpmi_nand_data *this) +{ + struct device *dev = this->dev; + struct nand_chip *chip = &this->nand; + struct mtd_info *mtd = &this->mtd; + unsigned int block_count; + unsigned int block; + int chipnr; + int page; + loff_t byte; + uint8_t block_mark; + int ret = 0; + + /* + * If control arrives here, we can't use block mark swapping, which + * means we're forced to use transcription. First, scan for the + * transcription stamp. If we find it, then we don't have to do + * anything -- the block marks are already transcribed. + */ + if (mx23_check_transcription_stamp(this)) + return 0; + + /* + * If control arrives here, we couldn't find a transcription stamp, so + * so we presume the block marks are in the conventional location. + */ + dev_dbg(dev, "Transcribing bad block marks...\n"); + + /* Compute the number of blocks in the entire medium. */ + block_count = chip->chipsize >> chip->phys_erase_shift; + + /* + * Loop over all the blocks in the medium, transcribing block marks as + * we go. + */ + for (block = 0; block < block_count; block++) { + /* + * Compute the chip, page and byte addresses for this block's + * conventional mark. + */ + chipnr = block >> (chip->chip_shift - chip->phys_erase_shift); + page = block << (chip->phys_erase_shift - chip->page_shift); + byte = block << chip->phys_erase_shift; + + /* Send the command to read the conventional block mark. */ + chip->select_chip(mtd, chipnr); + chip->cmdfunc(mtd, NAND_CMD_READ0, mtd->writesize, page); + block_mark = chip->read_byte(mtd); + chip->select_chip(mtd, -1); + + /* + * Check if the block is marked bad. If so, we need to mark it + * again, but this time the result will be a mark in the + * location where we transcribe block marks. + */ + if (block_mark != 0xff) { + dev_dbg(dev, "Transcribing mark in block %u\n", block); + ret = chip->block_markbad(mtd, byte); + if (ret) + dev_err(dev, "Failed to mark block bad with " + "ret %d\n", ret); + } + } + + /* Write the stamp that indicates we've transcribed the block marks. */ + mx23_write_transcription_stamp(this); + return 0; +} + +static int __devinit nand_boot_init(struct gpmi_nand_data *this) +{ + nand_boot_set_geometry(this); + + /* This is ROM arch-specific initilization before the BBT scanning. */ + if (GPMI_IS_MX23(this)) + return mx23_boot_init(this); + return 0; +} + +static int __devinit gpmi_set_geometry(struct gpmi_nand_data *this) +{ + int ret; + + /* Free the temporary DMA memory for reading ID. */ + gpmi_free_dma_buffer(this); + + /* Set up the NFC geometry which is used by BCH. */ + ret = bch_set_geometry(this); + if (ret) { + pr_err("set geometry ret : %d\n", ret); + return ret; + } + + /* Alloc the new DMA buffers according to the pagesize and oobsize */ + return gpmi_alloc_dma_buffer(this); +} + +static int gpmi_pre_bbt_scan(struct gpmi_nand_data *this) +{ + int ret; + + /* Set up swap_block_mark, must be set before the gpmi_set_geometry() */ + if (GPMI_IS_MX23(this)) + this->swap_block_mark = false; + else + this->swap_block_mark = true; + + /* Set up the medium geometry */ + ret = gpmi_set_geometry(this); + if (ret) + return ret; + + /* NAND boot init, depends on the gpmi_set_geometry(). */ + return nand_boot_init(this); +} + +static int gpmi_scan_bbt(struct mtd_info *mtd) +{ + struct nand_chip *chip = mtd->priv; + struct gpmi_nand_data *this = chip->priv; + int ret; + + /* Prepare for the BBT scan. 
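+ * This picks the block mark handling scheme, sets up the BCH/medium
+ * geometry and runs the ROM-specific boot initialization before the
+ * generic scan takes over.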
*/ + ret = gpmi_pre_bbt_scan(this); + if (ret) + return ret; + + /* use the default BBT implementation */ + return nand_default_bbt(mtd); +} + +void gpmi_nfc_exit(struct gpmi_nand_data *this) +{ + nand_release(&this->mtd); + gpmi_free_dma_buffer(this); +} + +static int __devinit gpmi_nfc_init(struct gpmi_nand_data *this) +{ + struct gpmi_nand_platform_data *pdata = this->pdata; + struct mtd_info *mtd = &this->mtd; + struct nand_chip *chip = &this->nand; + int ret; + + /* init current chip */ + this->current_chip = -1; + + /* init the MTD data structures */ + mtd->priv = chip; + mtd->name = "gpmi-nand"; + mtd->owner = THIS_MODULE; + + /* init the nand_chip{}, we don't support a 16-bit NAND Flash bus. */ + chip->priv = this; + chip->select_chip = gpmi_select_chip; + chip->cmd_ctrl = gpmi_cmd_ctrl; + chip->dev_ready = gpmi_dev_ready; + chip->read_byte = gpmi_read_byte; + chip->read_buf = gpmi_read_buf; + chip->write_buf = gpmi_write_buf; + chip->ecc.read_page = gpmi_ecc_read_page; + chip->ecc.write_page = gpmi_ecc_write_page; + chip->ecc.read_oob = gpmi_ecc_read_oob; + chip->ecc.write_oob = gpmi_ecc_write_oob; + chip->scan_bbt = gpmi_scan_bbt; + chip->badblock_pattern = &gpmi_bbt_descr; + chip->block_markbad = gpmi_block_markbad; + chip->options |= NAND_NO_SUBPAGE_WRITE; + chip->ecc.mode = NAND_ECC_HW; + chip->ecc.size = 1; + chip->ecc.layout = &gpmi_hw_ecclayout; + + /* Allocate a temporary DMA buffer for reading ID in the nand_scan() */ + this->bch_geometry.payload_size = 1024; + this->bch_geometry.auxiliary_size = 128; + ret = gpmi_alloc_dma_buffer(this); + if (ret) + goto err_out; + + ret = nand_scan(mtd, pdata->max_chip_count); + if (ret) { + pr_err("Chip scan failed\n"); + goto err_out; + } + + ret = mtd_device_parse_register(mtd, NULL, NULL, + pdata->partitions, pdata->partition_count); + if (ret) + goto err_out; + return 0; + +err_out: + gpmi_nfc_exit(this); + return ret; +} + +static int __devinit gpmi_nand_probe(struct platform_device *pdev) +{ + struct gpmi_nand_platform_data *pdata = pdev->dev.platform_data; + struct gpmi_nand_data *this; + int ret; + + this = kzalloc(sizeof(*this), GFP_KERNEL); + if (!this) { + pr_err("Failed to allocate per-device memory\n"); + return -ENOMEM; + } + + platform_set_drvdata(pdev, this); + this->pdev = pdev; + this->dev = &pdev->dev; + this->pdata = pdata; + + if (pdata->platform_init) { + ret = pdata->platform_init(); + if (ret) + goto platform_init_error; + } + + ret = acquire_resources(this); + if (ret) + goto exit_acquire_resources; + + ret = init_hardware(this); + if (ret) + goto exit_nfc_init; + + ret = gpmi_nfc_init(this); + if (ret) + goto exit_nfc_init; + + return 0; + +exit_nfc_init: + release_resources(this); +platform_init_error: +exit_acquire_resources: + platform_set_drvdata(pdev, NULL); + kfree(this); + return ret; +} + +static int __exit gpmi_nand_remove(struct platform_device *pdev) +{ + struct gpmi_nand_data *this = platform_get_drvdata(pdev); + + gpmi_nfc_exit(this); + release_resources(this); + platform_set_drvdata(pdev, NULL); + kfree(this); + return 0; +} + +static const struct platform_device_id gpmi_ids[] = { + { + .name = "imx23-gpmi-nand", + .driver_data = IS_MX23, + }, { + .name = "imx28-gpmi-nand", + .driver_data = IS_MX28, + }, {}, +}; + +static struct platform_driver gpmi_nand_driver = { + .driver = { + .name = "gpmi-nand", + }, + .probe = gpmi_nand_probe, + .remove = __exit_p(gpmi_nand_remove), + .id_table = gpmi_ids, +}; + +static int __init gpmi_nand_init(void) +{ + int err; + + err = 
platform_driver_register(&gpmi_nand_driver); + if (err == 0) + printk(KERN_INFO "GPMI NAND driver registered. (IMX)\n"); + else + pr_err("i.MX GPMI NAND driver registration failed\n"); + return err; +} + +static void __exit gpmi_nand_exit(void) +{ + platform_driver_unregister(&gpmi_nand_driver); +} + +module_init(gpmi_nand_init); +module_exit(gpmi_nand_exit); + +MODULE_AUTHOR("Freescale Semiconductor, Inc."); +MODULE_DESCRIPTION("i.MX GPMI NAND Flash Controller Driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/mtd/nand/gpmi-nand/gpmi-nand.h b/drivers/mtd/nand/gpmi-nand/gpmi-nand.h new file mode 100644 index 000000000000..e023bccb7781 --- /dev/null +++ b/drivers/mtd/nand/gpmi-nand/gpmi-nand.h @@ -0,0 +1,273 @@ +/* + * Freescale GPMI NAND Flash Driver + * + * Copyright (C) 2010-2011 Freescale Semiconductor, Inc. + * Copyright (C) 2008 Embedded Alley Solutions, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ +#ifndef __DRIVERS_MTD_NAND_GPMI_NAND_H +#define __DRIVERS_MTD_NAND_GPMI_NAND_H + +#include <linux/mtd/nand.h> +#include <linux/platform_device.h> +#include <linux/dma-mapping.h> +#include <mach/dma.h> + +struct resources { + void *gpmi_regs; + void *bch_regs; + unsigned int bch_low_interrupt; + unsigned int bch_high_interrupt; + unsigned int dma_low_channel; + unsigned int dma_high_channel; + struct clk *clock; +}; + +/** + * struct bch_geometry - BCH geometry description. + * @gf_len: The length of Galois Field. (e.g., 13 or 14) + * @ecc_strength: A number that describes the strength of the ECC + * algorithm. + * @page_size: The size, in bytes, of a physical page, including + * both data and OOB. + * @metadata_size: The size, in bytes, of the metadata. + * @ecc_chunk_size: The size, in bytes, of a single ECC chunk. Note + * the first chunk in the page includes both data and + * metadata, so it's a bit larger than this value. + * @ecc_chunk_count: The number of ECC chunks in the page, + * @payload_size: The size, in bytes, of the payload buffer. + * @auxiliary_size: The size, in bytes, of the auxiliary buffer. + * @auxiliary_status_offset: The offset into the auxiliary buffer at which + * the ECC status appears. + * @block_mark_byte_offset: The byte offset in the ECC-based page view at + * which the underlying physical block mark appears. + * @block_mark_bit_offset: The bit offset into the ECC-based page view at + * which the underlying physical block mark appears. + */ +struct bch_geometry { + unsigned int gf_len; + unsigned int ecc_strength; + unsigned int page_size; + unsigned int metadata_size; + unsigned int ecc_chunk_size; + unsigned int ecc_chunk_count; + unsigned int payload_size; + unsigned int auxiliary_size; + unsigned int auxiliary_status_offset; + unsigned int block_mark_byte_offset; + unsigned int block_mark_bit_offset; +}; + +/** + * struct boot_rom_geometry - Boot ROM geometry description. + * @stride_size_in_pages: The size of a boot block stride, in pages. + * @search_area_stride_exponent: The logarithm to base 2 of the size of a + * search area in boot block strides. 
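+ * (An exponent of 2, the default chosen in
+ * nand_boot_set_geometry(), means each search
+ * area spans four strides.)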
+ */ +struct boot_rom_geometry { + unsigned int stride_size_in_pages; + unsigned int search_area_stride_exponent; +}; + +/* DMA operations types */ +enum dma_ops_type { + DMA_FOR_COMMAND = 1, + DMA_FOR_READ_DATA, + DMA_FOR_WRITE_DATA, + DMA_FOR_READ_ECC_PAGE, + DMA_FOR_WRITE_ECC_PAGE +}; + +/** + * struct nand_timing - Fundamental timing attributes for NAND. + * @data_setup_in_ns: The data setup time, in nanoseconds. Usually the + * maximum of tDS and tWP. A negative value + * indicates this characteristic isn't known. + * @data_hold_in_ns: The data hold time, in nanoseconds. Usually the + * maximum of tDH, tWH and tREH. A negative value + * indicates this characteristic isn't known. + * @address_setup_in_ns: The address setup time, in nanoseconds. Usually + * the maximum of tCLS, tCS and tALS. A negative + * value indicates this characteristic isn't known. + * @gpmi_sample_delay_in_ns: A GPMI-specific timing parameter. A negative value + * indicates this characteristic isn't known. + * @tREA_in_ns: tREA, in nanoseconds, from the data sheet. A + * negative value indicates this characteristic isn't + * known. + * @tRLOH_in_ns: tRLOH, in nanoseconds, from the data sheet. A + * negative value indicates this characteristic isn't + * known. + * @tRHOH_in_ns: tRHOH, in nanoseconds, from the data sheet. A + * negative value indicates this characteristic isn't + * known. + */ +struct nand_timing { + int8_t data_setup_in_ns; + int8_t data_hold_in_ns; + int8_t address_setup_in_ns; + int8_t gpmi_sample_delay_in_ns; + int8_t tREA_in_ns; + int8_t tRLOH_in_ns; + int8_t tRHOH_in_ns; +}; + +struct gpmi_nand_data { + /* System Interface */ + struct device *dev; + struct platform_device *pdev; + struct gpmi_nand_platform_data *pdata; + + /* Resources */ + struct resources resources; + + /* Flash Hardware */ + struct nand_timing timing; + + /* BCH */ + struct bch_geometry bch_geometry; + struct completion bch_done; + + /* NAND Boot issue */ + bool swap_block_mark; + struct boot_rom_geometry rom_geometry; + + /* MTD / NAND */ + struct nand_chip nand; + struct mtd_info mtd; + + /* General-use Variables */ + int current_chip; + unsigned int command_length; + + /* passed from upper layer */ + uint8_t *upper_buf; + int upper_len; + + /* for DMA operations */ + bool direct_dma_map_ok; + + struct scatterlist cmd_sgl; + char *cmd_buffer; + + struct scatterlist data_sgl; + char *data_buffer_dma; + + void *page_buffer_virt; + dma_addr_t page_buffer_phys; + unsigned int page_buffer_size; + + void *payload_virt; + dma_addr_t payload_phys; + + void *auxiliary_virt; + dma_addr_t auxiliary_phys; + + /* DMA channels */ +#define DMA_CHANS 8 + struct dma_chan *dma_chans[DMA_CHANS]; + struct mxs_dma_data dma_data; + enum dma_ops_type last_dma_type; + enum dma_ops_type dma_type; + struct completion dma_done; + + /* private */ + void *private; +}; + +/** + * struct gpmi_nfc_hardware_timing - GPMI hardware timing parameters. + * @data_setup_in_cycles: The data setup time, in cycles. + * @data_hold_in_cycles: The data hold time, in cycles. + * @address_setup_in_cycles: The address setup time, in cycles. + * @use_half_periods: Indicates the clock is running slowly, so the + * NFC DLL should use half-periods. + * @sample_delay_factor: The sample delay factor. 
+ */ +struct gpmi_nfc_hardware_timing { + uint8_t data_setup_in_cycles; + uint8_t data_hold_in_cycles; + uint8_t address_setup_in_cycles; + bool use_half_periods; + uint8_t sample_delay_factor; +}; + +/** + * struct timing_threshod - Timing threshold + * @max_data_setup_cycles: The maximum number of data setup cycles that + * can be expressed in the hardware. + * @internal_data_setup_in_ns: The time, in ns, that the NFC hardware requires + * for data read internal setup. In the Reference + * Manual, see the chapter "High-Speed NAND + * Timing" for more details. + * @max_sample_delay_factor: The maximum sample delay factor that can be + * expressed in the hardware. + * @max_dll_clock_period_in_ns: The maximum period of the GPMI clock that the + * sample delay DLL hardware can possibly work + * with (the DLL is unusable with longer periods). + * If the full-cycle period is greater than HALF + * this value, the DLL must be configured to use + * half-periods. + * @max_dll_delay_in_ns: The maximum amount of delay, in ns, that the + * DLL can implement. + * @clock_frequency_in_hz: The clock frequency, in Hz, during the current + * I/O transaction. If no I/O transaction is in + * progress, this is the clock frequency during + * the most recent I/O transaction. + */ +struct timing_threshod { + const unsigned int max_chip_count; + const unsigned int max_data_setup_cycles; + const unsigned int internal_data_setup_in_ns; + const unsigned int max_sample_delay_factor; + const unsigned int max_dll_clock_period_in_ns; + const unsigned int max_dll_delay_in_ns; + unsigned long clock_frequency_in_hz; + +}; + +/* Common Services */ +extern int common_nfc_set_geometry(struct gpmi_nand_data *); +extern struct dma_chan *get_dma_chan(struct gpmi_nand_data *); +extern void prepare_data_dma(struct gpmi_nand_data *, + enum dma_data_direction dr); +extern int start_dma_without_bch_irq(struct gpmi_nand_data *, + struct dma_async_tx_descriptor *); +extern int start_dma_with_bch_irq(struct gpmi_nand_data *, + struct dma_async_tx_descriptor *); + +/* GPMI-NAND helper function library */ +extern int gpmi_init(struct gpmi_nand_data *); +extern void gpmi_clear_bch(struct gpmi_nand_data *); +extern void gpmi_dump_info(struct gpmi_nand_data *); +extern int bch_set_geometry(struct gpmi_nand_data *); +extern int gpmi_is_ready(struct gpmi_nand_data *, unsigned chip); +extern int gpmi_send_command(struct gpmi_nand_data *); +extern void gpmi_begin(struct gpmi_nand_data *); +extern void gpmi_end(struct gpmi_nand_data *); +extern int gpmi_read_data(struct gpmi_nand_data *); +extern int gpmi_send_data(struct gpmi_nand_data *); +extern int gpmi_send_page(struct gpmi_nand_data *, + dma_addr_t payload, dma_addr_t auxiliary); +extern int gpmi_read_page(struct gpmi_nand_data *, + dma_addr_t payload, dma_addr_t auxiliary); + +/* BCH : Status Block Completion Codes */ +#define STATUS_GOOD 0x00 +#define STATUS_ERASED 0xff +#define STATUS_UNCORRECTABLE 0xfe + +/* Use the platform_id to distinguish different Archs. */ +#define IS_MX23 0x1 +#define IS_MX28 0x2 +#define GPMI_IS_MX23(x) ((x)->pdev->id_entry->driver_data == IS_MX23) +#define GPMI_IS_MX28(x) ((x)->pdev->id_entry->driver_data == IS_MX28) +#endif diff --git a/drivers/mtd/nand/gpmi-nand/gpmi-regs.h b/drivers/mtd/nand/gpmi-nand/gpmi-regs.h new file mode 100644 index 000000000000..83431240e2f2 --- /dev/null +++ b/drivers/mtd/nand/gpmi-nand/gpmi-regs.h @@ -0,0 +1,172 @@ +/* + * Freescale GPMI NAND Flash Driver + * + * Copyright 2008-2011 Freescale Semiconductor, Inc. 
+ * Copyright 2008 Embedded Alley Solutions, Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. + */ +#ifndef __GPMI_NAND_GPMI_REGS_H +#define __GPMI_NAND_GPMI_REGS_H + +#define HW_GPMI_CTRL0 0x00000000 +#define HW_GPMI_CTRL0_SET 0x00000004 +#define HW_GPMI_CTRL0_CLR 0x00000008 +#define HW_GPMI_CTRL0_TOG 0x0000000c + +#define BP_GPMI_CTRL0_COMMAND_MODE 24 +#define BM_GPMI_CTRL0_COMMAND_MODE (3 << BP_GPMI_CTRL0_COMMAND_MODE) +#define BF_GPMI_CTRL0_COMMAND_MODE(v) \ + (((v) << BP_GPMI_CTRL0_COMMAND_MODE) & BM_GPMI_CTRL0_COMMAND_MODE) +#define BV_GPMI_CTRL0_COMMAND_MODE__WRITE 0x0 +#define BV_GPMI_CTRL0_COMMAND_MODE__READ 0x1 +#define BV_GPMI_CTRL0_COMMAND_MODE__READ_AND_COMPARE 0x2 +#define BV_GPMI_CTRL0_COMMAND_MODE__WAIT_FOR_READY 0x3 + +#define BM_GPMI_CTRL0_WORD_LENGTH (1 << 23) +#define BV_GPMI_CTRL0_WORD_LENGTH__16_BIT 0x0 +#define BV_GPMI_CTRL0_WORD_LENGTH__8_BIT 0x1 + +/* + * Difference in LOCK_CS between imx23 and imx28 : + * This bit may impact the _POWER_ consumption. So some chips + * do not set it. + */ +#define MX23_BP_GPMI_CTRL0_LOCK_CS 22 +#define MX28_BP_GPMI_CTRL0_LOCK_CS 27 +#define LOCK_CS_ENABLE 0x1 +#define BF_GPMI_CTRL0_LOCK_CS(v, x) 0x0 + +/* Difference in CS between imx23 and imx28 */ +#define BP_GPMI_CTRL0_CS 20 +#define MX23_BM_GPMI_CTRL0_CS (3 << BP_GPMI_CTRL0_CS) +#define MX28_BM_GPMI_CTRL0_CS (7 << BP_GPMI_CTRL0_CS) +#define BF_GPMI_CTRL0_CS(v, x) (((v) << BP_GPMI_CTRL0_CS) & \ + (GPMI_IS_MX23((x)) \ + ? 
MX23_BM_GPMI_CTRL0_CS \ + : MX28_BM_GPMI_CTRL0_CS)) + +#define BP_GPMI_CTRL0_ADDRESS 17 +#define BM_GPMI_CTRL0_ADDRESS (3 << BP_GPMI_CTRL0_ADDRESS) +#define BF_GPMI_CTRL0_ADDRESS(v) \ + (((v) << BP_GPMI_CTRL0_ADDRESS) & BM_GPMI_CTRL0_ADDRESS) +#define BV_GPMI_CTRL0_ADDRESS__NAND_DATA 0x0 +#define BV_GPMI_CTRL0_ADDRESS__NAND_CLE 0x1 +#define BV_GPMI_CTRL0_ADDRESS__NAND_ALE 0x2 + +#define BM_GPMI_CTRL0_ADDRESS_INCREMENT (1 << 16) +#define BV_GPMI_CTRL0_ADDRESS_INCREMENT__DISABLED 0x0 +#define BV_GPMI_CTRL0_ADDRESS_INCREMENT__ENABLED 0x1 + +#define BP_GPMI_CTRL0_XFER_COUNT 0 +#define BM_GPMI_CTRL0_XFER_COUNT (0xffff << BP_GPMI_CTRL0_XFER_COUNT) +#define BF_GPMI_CTRL0_XFER_COUNT(v) \ + (((v) << BP_GPMI_CTRL0_XFER_COUNT) & BM_GPMI_CTRL0_XFER_COUNT) + +#define HW_GPMI_COMPARE 0x00000010 + +#define HW_GPMI_ECCCTRL 0x00000020 +#define HW_GPMI_ECCCTRL_SET 0x00000024 +#define HW_GPMI_ECCCTRL_CLR 0x00000028 +#define HW_GPMI_ECCCTRL_TOG 0x0000002c + +#define BP_GPMI_ECCCTRL_ECC_CMD 13 +#define BM_GPMI_ECCCTRL_ECC_CMD (3 << BP_GPMI_ECCCTRL_ECC_CMD) +#define BF_GPMI_ECCCTRL_ECC_CMD(v) \ + (((v) << BP_GPMI_ECCCTRL_ECC_CMD) & BM_GPMI_ECCCTRL_ECC_CMD) +#define BV_GPMI_ECCCTRL_ECC_CMD__BCH_DECODE 0x0 +#define BV_GPMI_ECCCTRL_ECC_CMD__BCH_ENCODE 0x1 + +#define BM_GPMI_ECCCTRL_ENABLE_ECC (1 << 12) +#define BV_GPMI_ECCCTRL_ENABLE_ECC__ENABLE 0x1 +#define BV_GPMI_ECCCTRL_ENABLE_ECC__DISABLE 0x0 + +#define BP_GPMI_ECCCTRL_BUFFER_MASK 0 +#define BM_GPMI_ECCCTRL_BUFFER_MASK (0x1ff << BP_GPMI_ECCCTRL_BUFFER_MASK) +#define BF_GPMI_ECCCTRL_BUFFER_MASK(v) \ + (((v) << BP_GPMI_ECCCTRL_BUFFER_MASK) & BM_GPMI_ECCCTRL_BUFFER_MASK) +#define BV_GPMI_ECCCTRL_BUFFER_MASK__BCH_AUXONLY 0x100 +#define BV_GPMI_ECCCTRL_BUFFER_MASK__BCH_PAGE 0x1FF + +#define HW_GPMI_ECCCOUNT 0x00000030 +#define HW_GPMI_PAYLOAD 0x00000040 +#define HW_GPMI_AUXILIARY 0x00000050 +#define HW_GPMI_CTRL1 0x00000060 +#define HW_GPMI_CTRL1_SET 0x00000064 +#define HW_GPMI_CTRL1_CLR 0x00000068 +#define HW_GPMI_CTRL1_TOG 0x0000006c + +#define BM_GPMI_CTRL1_BCH_MODE (1 << 18) + +#define BP_GPMI_CTRL1_DLL_ENABLE 17 +#define BM_GPMI_CTRL1_DLL_ENABLE (1 << BP_GPMI_CTRL1_DLL_ENABLE) + +#define BP_GPMI_CTRL1_HALF_PERIOD 16 +#define BM_GPMI_CTRL1_HALF_PERIOD (1 << BP_GPMI_CTRL1_HALF_PERIOD) + +#define BP_GPMI_CTRL1_RDN_DELAY 12 +#define BM_GPMI_CTRL1_RDN_DELAY (0xf << BP_GPMI_CTRL1_RDN_DELAY) +#define BF_GPMI_CTRL1_RDN_DELAY(v) \ + (((v) << BP_GPMI_CTRL1_RDN_DELAY) & BM_GPMI_CTRL1_RDN_DELAY) + +#define BM_GPMI_CTRL1_DEV_RESET (1 << 3) +#define BV_GPMI_CTRL1_DEV_RESET__ENABLED 0x0 +#define BV_GPMI_CTRL1_DEV_RESET__DISABLED 0x1 + +#define BM_GPMI_CTRL1_ATA_IRQRDY_POLARITY (1 << 2) +#define BV_GPMI_CTRL1_ATA_IRQRDY_POLARITY__ACTIVELOW 0x0 +#define BV_GPMI_CTRL1_ATA_IRQRDY_POLARITY__ACTIVEHIGH 0x1 + +#define BM_GPMI_CTRL1_CAMERA_MODE (1 << 1) +#define BV_GPMI_CTRL1_GPMI_MODE__NAND 0x0 +#define BV_GPMI_CTRL1_GPMI_MODE__ATA 0x1 + +#define BM_GPMI_CTRL1_GPMI_MODE (1 << 0) + +#define HW_GPMI_TIMING0 0x00000070 + +#define BP_GPMI_TIMING0_ADDRESS_SETUP 16 +#define BM_GPMI_TIMING0_ADDRESS_SETUP (0xff << BP_GPMI_TIMING0_ADDRESS_SETUP) +#define BF_GPMI_TIMING0_ADDRESS_SETUP(v) \ + (((v) << BP_GPMI_TIMING0_ADDRESS_SETUP) & BM_GPMI_TIMING0_ADDRESS_SETUP) + +#define BP_GPMI_TIMING0_DATA_HOLD 8 +#define BM_GPMI_TIMING0_DATA_HOLD (0xff << BP_GPMI_TIMING0_DATA_HOLD) +#define BF_GPMI_TIMING0_DATA_HOLD(v) \ + (((v) << BP_GPMI_TIMING0_DATA_HOLD) & BM_GPMI_TIMING0_DATA_HOLD) + +#define BP_GPMI_TIMING0_DATA_SETUP 0 +#define BM_GPMI_TIMING0_DATA_SETUP (0xff << BP_GPMI_TIMING0_DATA_SETUP) 
+#define BF_GPMI_TIMING0_DATA_SETUP(v) \ + (((v) << BP_GPMI_TIMING0_DATA_SETUP) & BM_GPMI_TIMING0_DATA_SETUP) + +#define HW_GPMI_TIMING1 0x00000080 +#define BP_GPMI_TIMING1_BUSY_TIMEOUT 16 + +#define HW_GPMI_TIMING2 0x00000090 +#define HW_GPMI_DATA 0x000000a0 + +/* MX28 uses this to detect READY. */ +#define HW_GPMI_STAT 0x000000b0 +#define MX28_BP_GPMI_STAT_READY_BUSY 24 +#define MX28_BM_GPMI_STAT_READY_BUSY (0xff << MX28_BP_GPMI_STAT_READY_BUSY) +#define MX28_BF_GPMI_STAT_READY_BUSY(v) \ + (((v) << MX28_BP_GPMI_STAT_READY_BUSY) & MX28_BM_GPMI_STAT_READY_BUSY) + +/* MX23 uses this to detect READY. */ +#define HW_GPMI_DEBUG 0x000000c0 +#define MX23_BP_GPMI_DEBUG_READY0 28 +#define MX23_BM_GPMI_DEBUG_READY0 (1 << MX23_BP_GPMI_DEBUG_READY0) +#endif diff --git a/drivers/mtd/nand/h1910.c b/drivers/mtd/nand/h1910.c index 02a03e67109c..5dc6f0d92f1a 100644 --- a/drivers/mtd/nand/h1910.c +++ b/drivers/mtd/nand/h1910.c @@ -81,9 +81,6 @@ static int h1910_device_ready(struct mtd_info *mtd) static int __init h1910_init(void) { struct nand_chip *this; - const char *part_type = 0; - int mtd_parts_nb = 0; - struct mtd_partition *mtd_parts = 0; void __iomem *nandaddr; if (!machine_is_h1900()) @@ -136,22 +133,10 @@ static int __init h1910_init(void) iounmap((void *)nandaddr); return -ENXIO; } -#ifdef CONFIG_MTD_CMDLINE_PARTS - mtd_parts_nb = parse_cmdline_partitions(h1910_nand_mtd, &mtd_parts, "h1910-nand"); - if (mtd_parts_nb > 0) - part_type = "command line"; - else - mtd_parts_nb = 0; -#endif - if (mtd_parts_nb == 0) { - mtd_parts = partition_info; - mtd_parts_nb = NUM_PARTITIONS; - part_type = "static"; - } /* Register the partitions */ - printk(KERN_NOTICE "Using %s partition definition\n", part_type); - mtd_device_register(h1910_nand_mtd, mtd_parts, mtd_parts_nb); + mtd_device_parse_register(h1910_nand_mtd, NULL, 0, + partition_info, NUM_PARTITIONS); /* Return happy */ return 0; diff --git a/drivers/mtd/nand/jz4740_nand.c b/drivers/mtd/nand/jz4740_nand.c index 6e813daed068..e2664073a89b 100644 --- a/drivers/mtd/nand/jz4740_nand.c +++ b/drivers/mtd/nand/jz4740_nand.c @@ -251,10 +251,6 @@ static int jz_nand_correct_ecc_rs(struct mtd_info *mtd, uint8_t *dat, return 0; } -#ifdef CONFIG_MTD_CMDLINE_PARTS -static const char *part_probes[] = {"cmdline", NULL}; -#endif - static int jz_nand_ioremap_resource(struct platform_device *pdev, const char *name, struct resource **res, void __iomem **base) { @@ -299,8 +295,6 @@ static int __devinit jz_nand_probe(struct platform_device *pdev) struct nand_chip *chip; struct mtd_info *mtd; struct jz_nand_platform_data *pdata = pdev->dev.platform_data; - struct mtd_partition *partition_info; - int num_partitions = 0; nand = kzalloc(sizeof(*nand), GFP_KERNEL); if (!nand) { @@ -373,15 +367,9 @@ static int __devinit jz_nand_probe(struct platform_device *pdev) goto err_gpio_free; } -#ifdef CONFIG_MTD_CMDLINE_PARTS - num_partitions = parse_mtd_partitions(mtd, part_probes, - &partition_info, 0); -#endif - if (num_partitions <= 0 && pdata) { - num_partitions = pdata->num_partitions; - partition_info = pdata->partitions; - } - ret = mtd_device_register(mtd, partition_info, num_partitions); + ret = mtd_device_parse_register(mtd, NULL, 0, + pdata ? pdata->partitions : NULL, + pdata ? 
pdata->num_partitions : 0); if (ret) { dev_err(&pdev->dev, "Failed to add mtd device\n"); diff --git a/drivers/mtd/nand/mpc5121_nfc.c b/drivers/mtd/nand/mpc5121_nfc.c index eb1fbac63eb6..5ede64706346 100644 --- a/drivers/mtd/nand/mpc5121_nfc.c +++ b/drivers/mtd/nand/mpc5121_nfc.c @@ -131,8 +131,6 @@ struct mpc5121_nfc_prv { static void mpc5121_nfc_done(struct mtd_info *mtd); -static const char *mpc5121_nfc_pprobes[] = { "cmdlinepart", NULL }; - /* Read NFC register */ static inline u16 nfc_read(struct mtd_info *mtd, uint reg) { @@ -656,13 +654,13 @@ static int __devinit mpc5121_nfc_probe(struct platform_device *op) struct mpc5121_nfc_prv *prv; struct resource res; struct mtd_info *mtd; - struct mtd_partition *parts; struct nand_chip *chip; unsigned long regs_paddr, regs_size; const __be32 *chips_no; int resettime = 0; int retval = 0; int rev, len; + struct mtd_part_parser_data ppdata; /* * Check SoC revision. This driver supports only NFC @@ -727,6 +725,7 @@ static int __devinit mpc5121_nfc_probe(struct platform_device *op) } mtd->name = "MPC5121 NAND"; + ppdata.of_node = dn; chip->dev_ready = mpc5121_nfc_dev_ready; chip->cmdfunc = mpc5121_nfc_command; chip->read_byte = mpc5121_nfc_read_byte; @@ -735,7 +734,8 @@ static int __devinit mpc5121_nfc_probe(struct platform_device *op) chip->write_buf = mpc5121_nfc_write_buf; chip->verify_buf = mpc5121_nfc_verify_buf; chip->select_chip = mpc5121_nfc_select_chip; - chip->options = NAND_NO_AUTOINCR | NAND_USE_FLASH_BBT; + chip->options = NAND_NO_AUTOINCR; + chip->bbt_options = NAND_BBT_USE_FLASH; chip->ecc.mode = NAND_ECC_SOFT; /* Support external chip-select logic on ADS5121 board */ @@ -837,19 +837,7 @@ static int __devinit mpc5121_nfc_probe(struct platform_device *op) dev_set_drvdata(dev, mtd); /* Register device in MTD */ - retval = parse_mtd_partitions(mtd, mpc5121_nfc_pprobes, &parts, 0); -#ifdef CONFIG_MTD_OF_PARTS - if (retval == 0) - retval = of_mtd_parse_partitions(dev, dn, &parts); -#endif - if (retval < 0) { - dev_err(dev, "Error parsing MTD partitions!\n"); - devm_free_irq(dev, prv->irq, mtd); - retval = -EINVAL; - goto error; - } - - retval = mtd_device_register(mtd, parts, retval); + retval = mtd_device_parse_register(mtd, NULL, &ppdata, NULL, 0); if (retval) { dev_err(dev, "Error adding MTD device!\n"); devm_free_irq(dev, prv->irq, mtd); diff --git a/drivers/mtd/nand/mxc_nand.c b/drivers/mtd/nand/mxc_nand.c index 90df34c4d26c..74a43b818d0e 100644 --- a/drivers/mtd/nand/mxc_nand.c +++ b/drivers/mtd/nand/mxc_nand.c @@ -41,7 +41,7 @@ #define nfc_is_v21() (cpu_is_mx25() || cpu_is_mx35()) #define nfc_is_v1() (cpu_is_mx31() || cpu_is_mx27() || cpu_is_mx21()) -#define nfc_is_v3_2() cpu_is_mx51() +#define nfc_is_v3_2() (cpu_is_mx51() || cpu_is_mx53()) #define nfc_is_v3() nfc_is_v3_2() /* Addresses for NFC registers */ @@ -143,7 +143,6 @@ struct mxc_nand_host { struct mtd_info mtd; struct nand_chip nand; - struct mtd_partition *parts; struct device *dev; void *spare0; @@ -350,8 +349,7 @@ static void wait_op_done(struct mxc_nand_host *host, int useirq) udelay(1); } if (max_retries < 0) - DEBUG(MTD_DEBUG_LEVEL0, "%s: INT not set\n", - __func__); + pr_debug("%s: INT not set\n", __func__); } } @@ -371,7 +369,7 @@ static void send_cmd_v3(struct mxc_nand_host *host, uint16_t cmd, int useirq) * waits for completion. 
*/ static void send_cmd_v1_v2(struct mxc_nand_host *host, uint16_t cmd, int useirq) { - DEBUG(MTD_DEBUG_LEVEL3, "send_cmd(host, 0x%x, %d)\n", cmd, useirq); + pr_debug("send_cmd(host, 0x%x, %d)\n", cmd, useirq); writew(cmd, NFC_V1_V2_FLASH_CMD); writew(NFC_CMD, NFC_V1_V2_CONFIG2); @@ -387,8 +385,7 @@ static void send_cmd_v1_v2(struct mxc_nand_host *host, uint16_t cmd, int useirq) udelay(1); } if (max_retries < 0) - DEBUG(MTD_DEBUG_LEVEL0, "%s: RESET failed\n", - __func__); + pr_debug("%s: RESET failed\n", __func__); } else { /* Wait for operation to complete */ wait_op_done(host, useirq); @@ -411,7 +408,7 @@ static void send_addr_v3(struct mxc_nand_host *host, uint16_t addr, int islast) * a NAND command. */ static void send_addr_v1_v2(struct mxc_nand_host *host, uint16_t addr, int islast) { - DEBUG(MTD_DEBUG_LEVEL3, "send_addr(host, 0x%x %d)\n", addr, islast); + pr_debug("send_addr(host, 0x%x %d)\n", addr, islast); writew(addr, NFC_V1_V2_FLASH_ADDR); writew(NFC_ADDR, NFC_V1_V2_CONFIG2); @@ -561,8 +558,7 @@ static int mxc_nand_correct_data_v1(struct mtd_info *mtd, u_char *dat, uint16_t ecc_status = readw(NFC_V1_V2_ECC_STATUS_RESULT); if (((ecc_status & 0x3) == 2) || ((ecc_status >> 2) == 2)) { - DEBUG(MTD_DEBUG_LEVEL0, - "MXC_NAND: HWECC uncorrectable 2-bit ECC error\n"); + pr_debug("MXC_NAND: HWECC uncorrectable 2-bit ECC error\n"); return -1; } @@ -849,7 +845,7 @@ static void preset_v1_v2(struct mtd_info *mtd) writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR3); } else if (nfc_is_v1()) { writew(0x0, NFC_V1_UNLOCKSTART_BLKADDR); - writew(0x4000, NFC_V1_UNLOCKEND_BLKADDR); + writew(0xffff, NFC_V1_UNLOCKEND_BLKADDR); } else BUG(); @@ -932,8 +928,7 @@ static void mxc_nand_command(struct mtd_info *mtd, unsigned command, struct nand_chip *nand_chip = mtd->priv; struct mxc_nand_host *host = nand_chip->priv; - DEBUG(MTD_DEBUG_LEVEL3, - "mxc_nand_command (cmd = 0x%x, col = 0x%x, page = 0x%x)\n", + pr_debug("mxc_nand_command (cmd = 0x%x, col = 0x%x, page = 0x%x)\n", command, column, page_addr); /* Reset command state information */ @@ -1044,7 +1039,7 @@ static int __init mxcnd_probe(struct platform_device *pdev) struct mxc_nand_platform_data *pdata = pdev->dev.platform_data; struct mxc_nand_host *host; struct resource *res; - int err = 0, __maybe_unused nr_parts = 0; + int err = 0; struct nand_ecclayout *oob_smallpage, *oob_largepage; /* Allocate memory for MTD device structure and private data */ @@ -1179,7 +1174,7 @@ static int __init mxcnd_probe(struct platform_device *pdev) this->bbt_td = &bbt_main_descr; this->bbt_md = &bbt_mirror_descr; /* update flash based bbt */ - this->options |= NAND_USE_FLASH_BBT; + this->bbt_options |= NAND_BBT_USE_FLASH; } init_completion(&host->op_completion); @@ -1231,16 +1226,8 @@ static int __init mxcnd_probe(struct platform_device *pdev) } /* Register the partitions */ - nr_parts = - parse_mtd_partitions(mtd, part_probes, &host->parts, 0); - if (nr_parts > 0) - mtd_device_register(mtd, host->parts, nr_parts); - else if (pdata->parts) - mtd_device_register(mtd, pdata->parts, pdata->nr_parts); - else { - pr_info("Registering %s as whole device\n", mtd->name); - mtd_device_register(mtd, NULL, 0); - } + mtd_device_parse_register(mtd, part_probes, 0, + pdata->parts, pdata->nr_parts); platform_set_drvdata(pdev, host); diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c index a46e9bb847bd..3ed9c5e4d34e 100644 --- a/drivers/mtd/nand/nand_base.c +++ b/drivers/mtd/nand/nand_base.c @@ -21,7 +21,7 @@ * TODO: * Enable cached programming for 2k page size chips * 
Check, if mtd->ecctype should be set to MTD_ECC_HW - * if we have HW ecc support. + * if we have HW ECC support. * The AG-AND chips have nice features for speed improvement, * which are not supported yet. Read / program 4 pages in one go. * BBT table is not serialized, has to be fixed @@ -113,21 +113,19 @@ static int check_offs_len(struct mtd_info *mtd, /* Start address must align on block boundary */ if (ofs & ((1 << chip->phys_erase_shift) - 1)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Unaligned address\n", __func__); + pr_debug("%s: unaligned address\n", __func__); ret = -EINVAL; } /* Length must align on block boundary */ if (len & ((1 << chip->phys_erase_shift) - 1)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Length not block aligned\n", - __func__); + pr_debug("%s: length not block aligned\n", __func__); ret = -EINVAL; } /* Do not allow past end of device */ if (ofs + len > mtd->size) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Past end of device\n", - __func__); + pr_debug("%s: past end of device\n", __func__); ret = -EINVAL; } @@ -136,9 +134,9 @@ static int check_offs_len(struct mtd_info *mtd, /** * nand_release_device - [GENERIC] release chip - * @mtd: MTD device structure + * @mtd: MTD device structure * - * Deselect, release chip lock and wake up anyone waiting on the device + * Deselect, release chip lock and wake up anyone waiting on the device. */ static void nand_release_device(struct mtd_info *mtd) { @@ -157,9 +155,9 @@ static void nand_release_device(struct mtd_info *mtd) /** * nand_read_byte - [DEFAULT] read one byte from the chip - * @mtd: MTD device structure + * @mtd: MTD device structure * - * Default read function for 8bit buswith + * Default read function for 8bit buswidth */ static uint8_t nand_read_byte(struct mtd_info *mtd) { @@ -169,10 +167,11 @@ static uint8_t nand_read_byte(struct mtd_info *mtd) /** * nand_read_byte16 - [DEFAULT] read one byte endianess aware from the chip - * @mtd: MTD device structure + * nand_read_byte16 - [DEFAULT] read one byte endianness aware from the chip + * @mtd: MTD device structure + * + * Default read function for 16bit buswidth with endianness conversion. * - * Default read function for 16bit buswith with - * endianess conversion */ static uint8_t nand_read_byte16(struct mtd_info *mtd) { @@ -182,10 +181,9 @@ static uint8_t nand_read_byte16(struct mtd_info *mtd) /** * nand_read_word - [DEFAULT] read one word from the chip - * @mtd: MTD device structure + * @mtd: MTD device structure * - * Default read function for 16bit buswith without - * endianess conversion + * Default read function for 16bit buswidth without endianness conversion. */ static u16 nand_read_word(struct mtd_info *mtd) { @@ -195,8 +193,8 @@ static u16 nand_read_word(struct mtd_info *mtd) /** * nand_select_chip - [DEFAULT] control CE line - * @mtd: MTD device structure - * @chipnr: chipnumber to select, -1 for deselect + * @mtd: MTD device structure + * @chipnr: chipnumber to select, -1 for deselect * * Default select function for 1 chip devices. */ @@ -218,11 +216,11 @@ static void nand_select_chip(struct mtd_info *mtd, int chipnr) /** * nand_write_buf - [DEFAULT] write buffer to chip - * @mtd: MTD device structure - * @buf: data buffer - * @len: number of bytes to write + * @mtd: MTD device structure + * @buf: data buffer + * @len: number of bytes to write * - * Default write function for 8bit buswith + * Default write function for 8bit buswidth. 
*/ static void nand_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len) { @@ -235,11 +233,11 @@ static void nand_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len) /** * nand_read_buf - [DEFAULT] read chip data into buffer - * @mtd: MTD device structure - * @buf: buffer to store date - * @len: number of bytes to read + * @mtd: MTD device structure + * @buf: buffer to store data + * @len: number of bytes to read * - * Default read function for 8bit buswith + * Default read function for 8bit buswidth. */ static void nand_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) { @@ -252,11 +250,11 @@ static void nand_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) /** * nand_verify_buf - [DEFAULT] Verify chip data against buffer - * @mtd: MTD device structure - * @buf: buffer containing the data to compare - * @len: number of bytes to compare + * @mtd: MTD device structure + * @buf: buffer containing the data to compare + * @len: number of bytes to compare * - * Default verify function for 8bit buswith + * Default verify function for 8bit buswidth. */ static int nand_verify_buf(struct mtd_info *mtd, const uint8_t *buf, int len) { @@ -271,11 +269,11 @@ static int nand_verify_buf(struct mtd_info *mtd, const uint8_t *buf, int len) /** * nand_write_buf16 - [DEFAULT] write buffer to chip - * @mtd: MTD device structure - * @buf: data buffer - * @len: number of bytes to write + * @mtd: MTD device structure + * @buf: data buffer + * @len: number of bytes to write * - * Default write function for 16bit buswith + * Default write function for 16bit buswidth. */ static void nand_write_buf16(struct mtd_info *mtd, const uint8_t *buf, int len) { @@ -291,11 +289,11 @@ static void nand_write_buf16(struct mtd_info *mtd, const uint8_t *buf, int len) /** * nand_read_buf16 - [DEFAULT] read chip data into buffer - * @mtd: MTD device structure - * @buf: buffer to store date - * @len: number of bytes to read + * @mtd: MTD device structure + * @buf: buffer to store data + * @len: number of bytes to read * - * Default read function for 16bit buswith + * Default read function for 16bit buswidth. */ static void nand_read_buf16(struct mtd_info *mtd, uint8_t *buf, int len) { @@ -310,11 +308,11 @@ static void nand_read_buf16(struct mtd_info *mtd, uint8_t *buf, int len) /** * nand_verify_buf16 - [DEFAULT] Verify chip data against buffer - * @mtd: MTD device structure - * @buf: buffer containing the data to compare - * @len: number of bytes to compare + * @mtd: MTD device structure + * @buf: buffer containing the data to compare + * @len: number of bytes to compare * - * Default verify function for 16bit buswith + * Default verify function for 16bit buswidth. */ static int nand_verify_buf16(struct mtd_info *mtd, const uint8_t *buf, int len) { @@ -332,9 +330,9 @@ static int nand_verify_buf16(struct mtd_info *mtd, const uint8_t *buf, int len) /** * nand_block_bad - [DEFAULT] Read bad block marker from the chip - * @mtd: MTD device structure - * @ofs: offset from device start - * @getchip: 0, if the chip is already selected + * @mtd: MTD device structure + * @ofs: offset from device start + * @getchip: 0, if the chip is already selected * * Check, if the block is bad.
*/ @@ -344,7 +342,7 @@ static int nand_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip) struct nand_chip *chip = mtd->priv; u16 bad; - if (chip->options & NAND_BBT_SCANLASTPAGE) + if (chip->bbt_options & NAND_BBT_SCANLASTPAGE) ofs += mtd->erasesize - mtd->writesize; page = (int)(ofs >> chip->page_shift) & chip->pagemask; @@ -384,11 +382,11 @@ static int nand_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip) /** * nand_default_block_markbad - [DEFAULT] mark a block bad - * @mtd: MTD device structure - * @ofs: offset from device start + * @mtd: MTD device structure + * @ofs: offset from device start * - * This is the default implementation, which can be overridden by - * a hardware specific driver. + * This is the default implementation, which can be overridden by a hardware + * specific driver. */ static int nand_default_block_markbad(struct mtd_info *mtd, loff_t ofs) { @@ -396,7 +394,7 @@ static int nand_default_block_markbad(struct mtd_info *mtd, loff_t ofs) uint8_t buf[2] = { 0, 0 }; int block, ret, i = 0; - if (chip->options & NAND_BBT_SCANLASTPAGE) + if (chip->bbt_options & NAND_BBT_SCANLASTPAGE) ofs += mtd->erasesize - mtd->writesize; /* Get block number */ @@ -404,33 +402,31 @@ static int nand_default_block_markbad(struct mtd_info *mtd, loff_t ofs) if (chip->bbt) chip->bbt[block >> 2] |= 0x01 << ((block & 0x03) << 1); - /* Do we have a flash based bad block table ? */ - if (chip->options & NAND_USE_FLASH_BBT) + /* Do we have a flash based bad block table? */ + if (chip->bbt_options & NAND_BBT_USE_FLASH) ret = nand_update_bbt(mtd, ofs); else { + struct mtd_oob_ops ops; + nand_get_device(chip, mtd, FL_WRITING); - /* Write to first two pages and to byte 1 and 6 if necessary. - * If we write to more than one location, the first error - * encountered quits the procedure. We write two bytes per - * location, so we dont have to mess with 16 bit access. + /* + * Write to first two pages if necessary. If we write to more + * than one location, the first error encountered quits the + * procedure. We write two bytes per location, so we dont have + * to mess with 16 bit access. */ + ops.len = ops.ooblen = 2; + ops.datbuf = NULL; + ops.oobbuf = buf; + ops.ooboffs = chip->badblockpos & ~0x01; + ops.mode = MTD_OPS_PLACE_OOB; do { - chip->ops.len = chip->ops.ooblen = 2; - chip->ops.datbuf = NULL; - chip->ops.oobbuf = buf; - chip->ops.ooboffs = chip->badblockpos & ~0x01; - - ret = nand_do_write_oob(mtd, ofs, &chip->ops); + ret = nand_do_write_oob(mtd, ofs, &ops); - if (!ret && (chip->options & NAND_BBT_SCANBYTE1AND6)) { - chip->ops.ooboffs = NAND_SMALL_BADBLOCK_POS - & ~0x01; - ret = nand_do_write_oob(mtd, ofs, &chip->ops); - } i++; ofs += mtd->writesize; - } while (!ret && (chip->options & NAND_BBT_SCAN2NDPAGE) && + } while (!ret && (chip->bbt_options & NAND_BBT_SCAN2NDPAGE) && i < 2); nand_release_device(mtd); @@ -443,16 +439,16 @@ static int nand_default_block_markbad(struct mtd_info *mtd, loff_t ofs) /** * nand_check_wp - [GENERIC] check if the chip is write protected - * @mtd: MTD device structure - * Check, if the device is write protected + * @mtd: MTD device structure * - * The function expects, that the device is already selected + * Check, if the device is write protected. The function expects, that the + * device is already selected. 
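The nand_block_bad()/nand_default_block_markbad() hunks above follow the split of bad block table flags out of chip->options into the new chip->bbt_options field: NAND_USE_FLASH_BBT becomes NAND_BBT_USE_FLASH, and NAND_BBT_SCANLASTPAGE is now tested on bbt_options. A minimal sketch of what a controller driver sets before nand_scan() under the new scheme (foo_nand_init_chip() is hypothetical):

#include <linux/mtd/nand.h>

static void foo_nand_init_chip(struct nand_chip *chip)
{
        /* Previously: chip->options |= NAND_USE_FLASH_BBT; */
        chip->bbt_options |= NAND_BBT_USE_FLASH;

        /* Non-BBT behaviour flags stay in chip->options */
        chip->options |= NAND_NO_AUTOINCR;
}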
*/ static int nand_check_wp(struct mtd_info *mtd) { struct nand_chip *chip = mtd->priv; - /* broken xD cards report WP despite being writable */ + /* Broken xD cards report WP despite being writable */ if (chip->options & NAND_BROKEN_XD) return 0; @@ -463,10 +459,10 @@ static int nand_check_wp(struct mtd_info *mtd) /** * nand_block_checkbad - [GENERIC] Check if a block is marked bad - * @mtd: MTD device structure - * @ofs: offset from device start - * @getchip: 0, if the chip is already selected - * @allowbbt: 1, if its allowed to access the bbt area + * @mtd: MTD device structure + * @ofs: offset from device start + * @getchip: 0, if the chip is already selected + * @allowbbt: 1, if its allowed to access the bbt area * * Check, if the block is bad. Either by reading the bad block table or * calling of the scan function. @@ -485,8 +481,8 @@ static int nand_block_checkbad(struct mtd_info *mtd, loff_t ofs, int getchip, /** * panic_nand_wait_ready - [GENERIC] Wait for the ready pin after commands. - * @mtd: MTD device structure - * @timeo: Timeout + * @mtd: MTD device structure + * @timeo: Timeout * * Helper function for nand_wait_ready used when needing to wait in interrupt * context. @@ -505,10 +501,7 @@ static void panic_nand_wait_ready(struct mtd_info *mtd, unsigned long timeo) } } -/* - * Wait for the ready pin, after a command - * The timeout is catched later. - */ +/* Wait for the ready pin, after a command. The timeout is caught later. */ void nand_wait_ready(struct mtd_info *mtd) { struct nand_chip *chip = mtd->priv; @@ -519,7 +512,7 @@ void nand_wait_ready(struct mtd_info *mtd) return panic_nand_wait_ready(mtd, 400); led_trigger_event(nand_led_trigger, LED_FULL); - /* wait until command is processed or timeout occures */ + /* Wait until command is processed or timeout occurs */ do { if (chip->dev_ready(mtd)) break; @@ -531,13 +524,13 @@ EXPORT_SYMBOL_GPL(nand_wait_ready); /** * nand_command - [DEFAULT] Send command to NAND device - * @mtd: MTD device structure - * @command: the command to be sent - * @column: the column address for this command, -1 if none - * @page_addr: the page address for this command, -1 if none + * @mtd: MTD device structure + * @command: the command to be sent + * @column: the column address for this command, -1 if none + * @page_addr: the page address for this command, -1 if none * - * Send command to NAND device. This function is used for small page - * devices (256/512 Bytes per page) + * Send command to NAND device. This function is used for small page devices + * (256/512 Bytes per page). */ static void nand_command(struct mtd_info *mtd, unsigned int command, int column, int page_addr) @@ -545,9 +538,7 @@ static void nand_command(struct mtd_info *mtd, unsigned int command, register struct nand_chip *chip = mtd->priv; int ctrl = NAND_CTRL_CLE | NAND_CTRL_CHANGE; - /* - * Write out the command to the device. 
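The DEBUG(MTD_DEBUG_LEVEL*) calls above become plain pr_debug(), which compiles away unless DEBUG is defined or CONFIG_DYNAMIC_DEBUG lets the messages be enabled at run time. A small sketch of the resulting idiom, with an optional per-file prefix; foo_nand is a hypothetical module name:

/* Must be defined before the first include to take effect */
#define pr_fmt(fmt) "foo_nand: " fmt

#include <linux/kernel.h>

static void foo_nand_trace_cmd(unsigned int cmd, int column, int page)
{
        /* No MTD_DEBUG_LEVELn argument any more; severity is implied */
        pr_debug("cmd = 0x%x, col = 0x%x, page = 0x%x\n", cmd, column, page);
}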
- */ + /* Write out the command to the device */ if (command == NAND_CMD_SEQIN) { int readcmd; @@ -567,9 +558,7 @@ static void nand_command(struct mtd_info *mtd, unsigned int command, } chip->cmd_ctrl(mtd, command, ctrl); - /* - * Address cycle, when necessary - */ + /* Address cycle, when necessary */ ctrl = NAND_CTRL_ALE | NAND_CTRL_CHANGE; /* Serially input address */ if (column != -1) { @@ -590,8 +579,8 @@ static void nand_command(struct mtd_info *mtd, unsigned int command, chip->cmd_ctrl(mtd, NAND_CMD_NONE, NAND_NCE | NAND_CTRL_CHANGE); /* - * program and erase have their own busy handlers - * status and sequential in needs no delay + * Program and erase have their own busy handlers status and sequential + * in needs no delay */ switch (command) { @@ -625,8 +614,10 @@ static void nand_command(struct mtd_info *mtd, unsigned int command, return; } } - /* Apply this short delay always to ensure that we do wait tWB in - * any case on any machine. */ + /* + * Apply this short delay always to ensure that we do wait tWB in + * any case on any machine. + */ ndelay(100); nand_wait_ready(mtd); @@ -634,14 +625,14 @@ static void nand_command(struct mtd_info *mtd, unsigned int command, /** * nand_command_lp - [DEFAULT] Send command to NAND large page device - * @mtd: MTD device structure - * @command: the command to be sent - * @column: the column address for this command, -1 if none - * @page_addr: the page address for this command, -1 if none + * @mtd: MTD device structure + * @command: the command to be sent + * @column: the column address for this command, -1 if none + * @page_addr: the page address for this command, -1 if none * * Send command to NAND device. This is the version for the new large page - * devices We dont have the separate regions as we have in the small page - * devices. We must emulate NAND_CMD_READOOB to keep the code compatible. + * devices. We don't have the separate regions as we have in the small page + * devices. We must emulate NAND_CMD_READOOB to keep the code compatible. */ static void nand_command_lp(struct mtd_info *mtd, unsigned int command, int column, int page_addr) @@ -683,8 +674,8 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command, chip->cmd_ctrl(mtd, NAND_CMD_NONE, NAND_NCE | NAND_CTRL_CHANGE); /* - * program and erase have their own busy handlers - * status, sequential in, and deplete1 need no delay + * Program and erase have their own busy handlers status, sequential + * in, and deplete1 need no delay. */ switch (command) { @@ -698,14 +689,12 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command, case NAND_CMD_DEPLETE1: return; - /* - * read error status commands require only a short delay - */ case NAND_CMD_STATUS_ERROR: case NAND_CMD_STATUS_ERROR0: case NAND_CMD_STATUS_ERROR1: case NAND_CMD_STATUS_ERROR2: case NAND_CMD_STATUS_ERROR3: + /* Read error status commands require only a short delay */ udelay(chip->chip_delay); return; @@ -739,7 +728,7 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command, default: /* * If we don't have access to the busy pin, we apply the given - * command delay + * command delay. */ if (!chip->dev_ready) { udelay(chip->chip_delay); @@ -747,8 +736,10 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command, } } - /* Apply this short delay always to ensure that we do wait tWB in - * any case on any machine. */ + /* + * Apply this short delay always to ensure that we do wait tWB in + * any case on any machine. 
+ */ ndelay(100); nand_wait_ready(mtd); @@ -756,25 +747,25 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command, /** * panic_nand_get_device - [GENERIC] Get chip for selected access - * @chip: the nand chip descriptor - * @mtd: MTD device structure - * @new_state: the state which is requested + * @chip: the nand chip descriptor + * @mtd: MTD device structure + * @new_state: the state which is requested * * Used when in panic, no locks are taken. */ static void panic_nand_get_device(struct nand_chip *chip, struct mtd_info *mtd, int new_state) { - /* Hardware controller shared among independend devices */ + /* Hardware controller shared among independent devices */ chip->controller->active = chip; chip->state = new_state; } /** * nand_get_device - [GENERIC] Get chip for selected access - * @chip: the nand chip descriptor - * @mtd: MTD device structure - * @new_state: the state which is requested + * @chip: the nand chip descriptor + * @mtd: MTD device structure + * @new_state: the state which is requested * * Get the device and lock it for exclusive access */ @@ -812,10 +803,10 @@ retry: } /** - * panic_nand_wait - [GENERIC] wait until the command is done - * @mtd: MTD device structure - * @chip: NAND chip structure - * @timeo: Timeout + * panic_nand_wait - [GENERIC] wait until the command is done + * @mtd: MTD device structure + * @chip: NAND chip structure + * @timeo: timeout * * Wait for command done. This is a helper function for nand_wait used when * we are in interrupt context. May happen when in panic and trying to write @@ -838,13 +829,13 @@ static void panic_nand_wait(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_wait - [DEFAULT] wait until the command is done - * @mtd: MTD device structure - * @chip: NAND chip structure + * nand_wait - [DEFAULT] wait until the command is done + * @mtd: MTD device structure + * @chip: NAND chip structure * - * Wait for command done. This applies to erase and program only - * Erase can take up to 400ms and program up to 20ms according to - * general NAND and SmartMedia specs + * Wait for command done. This applies to erase and program only. Erase can + * take up to 400ms and program up to 20ms according to general NAND and + * SmartMedia specs. */ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip) { @@ -859,8 +850,10 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip) led_trigger_event(nand_led_trigger, LED_FULL); - /* Apply this short delay always to ensure that we do wait tWB in - * any case on any machine. */ + /* + * Apply this short delay always to ensure that we do wait tWB in any + * case on any machine. + */ ndelay(100); if ((state == FL_ERASING) && (chip->options & NAND_IS_AND)) @@ -890,16 +883,15 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip) /** * __nand_unlock - [REPLACEABLE] unlocks specified locked blocks - * * @mtd: mtd info * @ofs: offset to start unlock from * @len: length to unlock - * @invert: when = 0, unlock the range of blocks within the lower and - * upper boundary address - * when = 1, unlock the range of blocks outside the boundaries - * of the lower and upper boundary address + * @invert: when = 0, unlock the range of blocks within the lower and + * upper boundary address + * when = 1, unlock the range of blocks outside the boundaries + * of the lower and upper boundary address * - * return - unlock status + * Returns unlock status.
*/ static int __nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len, int invert) @@ -919,10 +911,9 @@ static int __nand_unlock(struct mtd_info *mtd, loff_t ofs, /* Call wait ready function */ status = chip->waitfunc(mtd, chip); - udelay(1000); /* See if device thinks it succeeded */ if (status & 0x01) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Error status = 0x%08x\n", + pr_debug("%s: error status = 0x%08x\n", __func__, status); ret = -EIO; } @@ -932,12 +923,11 @@ static int __nand_unlock(struct mtd_info *mtd, loff_t ofs, /** * nand_unlock - [REPLACEABLE] unlocks specified locked blocks - * * @mtd: mtd info * @ofs: offset to start unlock from * @len: length to unlock * - * return - unlock status + * Returns unlock status. */ int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len) { @@ -945,7 +935,7 @@ int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len) int chipnr; struct nand_chip *chip = mtd->priv; - DEBUG(MTD_DEBUG_LEVEL3, "%s: start = 0x%012llx, len = %llu\n", + pr_debug("%s: start = 0x%012llx, len = %llu\n", __func__, (unsigned long long)ofs, len); if (check_offs_len(mtd, ofs, len)) @@ -964,7 +954,7 @@ int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len) /* Check, if it is write protected */ if (nand_check_wp(mtd)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Device is write protected!!!\n", + pr_debug("%s: device is write protected!\n", __func__); ret = -EIO; goto out; @@ -981,18 +971,16 @@ EXPORT_SYMBOL(nand_unlock); /** * nand_lock - [REPLACEABLE] locks all blocks present in the device - * * @mtd: mtd info * @ofs: offset to start unlock from * @len: length to unlock * - * return - lock status + * This feature is not supported in many NAND parts. 'Micron' NAND parts do + * have this feature, but it allows only to lock all blocks, not for specified + * range for block. Implementing 'lock' feature by making use of 'unlock', for + * now. * - * This feature is not supported in many NAND parts. 'Micron' NAND parts - * do have this feature, but it allows only to lock all blocks, not for - * specified range for block. - * - * Implementing 'lock' feature by making use of 'unlock', for now. + * Returns lock status. 
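nand_unlock() above is exported for controller or board code whose parts power up with blocks locked; the matching nand_lock() is emulated through the unlock mechanism, as the rewritten comment notes. A hypothetical caller unlocking the whole device after nand_scan(), assuming the prototype is visible to it:

#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>

static int foo_nand_unlock_all(struct mtd_info *mtd)
{
        /* Offset and length must be erase-block aligned; 0..size always is */
        int ret = nand_unlock(mtd, 0, mtd->size);

        if (ret)
                pr_warn("foo_nand: unlock failed (%d)\n", ret);
        return ret;
}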
*/ int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) { @@ -1000,7 +988,7 @@ int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) int chipnr, status, page; struct nand_chip *chip = mtd->priv; - DEBUG(MTD_DEBUG_LEVEL3, "%s: start = 0x%012llx, len = %llu\n", + pr_debug("%s: start = 0x%012llx, len = %llu\n", __func__, (unsigned long long)ofs, len); if (check_offs_len(mtd, ofs, len)) @@ -1015,7 +1003,7 @@ int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) /* Check, if it is write protected */ if (nand_check_wp(mtd)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Device is write protected!!!\n", + pr_debug("%s: device is write protected!\n", __func__); status = MTD_ERASE_FAILED; ret = -EIO; @@ -1028,10 +1016,9 @@ int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) /* Call wait ready function */ status = chip->waitfunc(mtd, chip); - udelay(1000); /* See if device thinks it succeeded */ if (status & 0x01) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Error status = 0x%08x\n", + pr_debug("%s: error status = 0x%08x\n", __func__, status); ret = -EIO; goto out; @@ -1047,13 +1034,13 @@ out: EXPORT_SYMBOL(nand_lock); /** - * nand_read_page_raw - [Intern] read raw page data without ecc - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: buffer to store read data - * @page: page number to read + * nand_read_page_raw - [INTERN] read raw page data without ecc + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: buffer to store read data + * @page: page number to read * - * Not for syndrome calculating ecc controllers, which use a special oob layout + * Not for syndrome calculating ECC controllers, which use a special oob layout. */ static int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, uint8_t *buf, int page) @@ -1064,11 +1051,11 @@ static int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_read_page_raw_syndrome - [Intern] read raw page data without ecc - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: buffer to store read data - * @page: page number to read + * nand_read_page_raw_syndrome - [INTERN] read raw page data without ecc + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: buffer to store read data + * @page: page number to read * * We need a special oob layout and handling even when OOB isn't used. 
*/ @@ -1107,11 +1094,11 @@ static int nand_read_page_raw_syndrome(struct mtd_info *mtd, } /** - * nand_read_page_swecc - [REPLACABLE] software ecc based page read function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: buffer to store read data - * @page: page number to read + * nand_read_page_swecc - [REPLACEABLE] software ECC based page read function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: buffer to store read data + * @page: page number to read */ static int nand_read_page_swecc(struct mtd_info *mtd, struct nand_chip *chip, uint8_t *buf, int page) @@ -1148,12 +1135,12 @@ static int nand_read_page_swecc(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_read_subpage - [REPLACABLE] software ecc based sub-page read function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @data_offs: offset of requested data within the page - * @readlen: data length - * @bufpoi: buffer to store read data + * nand_read_subpage - [REPLACEABLE] software ECC based sub-page read function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @data_offs: offset of requested data within the page + * @readlen: data length + * @bufpoi: buffer to store read data */ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip, uint32_t data_offs, uint32_t readlen, uint8_t *bufpoi) @@ -1166,12 +1153,12 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip, int busw = (chip->options & NAND_BUSWIDTH_16) ? 2 : 1; int index = 0; - /* Column address wihin the page aligned to ECC size (256bytes). */ + /* Column address within the page aligned to ECC size (256bytes) */ start_step = data_offs / chip->ecc.size; end_step = (data_offs + readlen - 1) / chip->ecc.size; num_steps = end_step - start_step + 1; - /* Data size aligned to ECC ecc.size*/ + /* Data size aligned to ECC ecc.size */ datafrag_len = num_steps * chip->ecc.size; eccfrag_len = num_steps * chip->ecc.bytes; @@ -1183,13 +1170,14 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip, p = bufpoi + data_col_addr; chip->read_buf(mtd, p, datafrag_len); - /* Calculate ECC */ + /* Calculate ECC */ for (i = 0; i < eccfrag_len ; i += chip->ecc.bytes, p += chip->ecc.size) chip->ecc.calculate(mtd, p, &chip->buffers->ecccalc[i]); - /* The performance is faster if to position offsets - according to ecc.pos. Let make sure here that - there are no gaps in ecc positions */ + /* + * The performance is faster if we position offsets according to + * ecc.pos. Let's make sure that there are no gaps in ECC positions. + */ for (i = 0; i < eccfrag_len - 1; i++) { if (eccpos[i + start_step * chip->ecc.bytes] + 1 != eccpos[i + start_step * chip->ecc.bytes + 1]) { @@ -1201,8 +1189,10 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip, chip->cmdfunc(mtd, NAND_CMD_RNDOUT, mtd->writesize, -1); chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); } else { - /* send the command to read the particular ecc bytes */ - /* take care about buswidth alignment in read_buf */ + /* + * Send the command to read the particular ECC bytes take care + * about buswidth alignment in read_buf. 
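nand_read_subpage() above works out which ECC steps a partial read touches from data_offs and readlen. A standalone worked example of that arithmetic only (not kernel code), assuming the 256-byte step size mentioned in the comment and the 3 ECC bytes per step of the classic software ECC layout:

#include <stdio.h>

/* Mirrors the start_step/end_step math in nand_read_subpage() */
static void subpage_bounds(unsigned int ecc_size, unsigned int ecc_bytes,
                           unsigned int data_offs, unsigned int readlen)
{
        unsigned int start_step = data_offs / ecc_size;
        unsigned int end_step = (data_offs + readlen - 1) / ecc_size;
        unsigned int num_steps = end_step - start_step + 1;

        printf("bytes %u..%u -> steps %u..%u: %u data + %u ECC bytes from column %u\n",
               data_offs, data_offs + readlen - 1, start_step, end_step,
               num_steps * ecc_size, num_steps * ecc_bytes,
               start_step * ecc_size);
}

int main(void)
{
        subpage_bounds(256, 3, 200, 100);  /* spans steps 0..1: 512 data, 6 ECC */
        subpage_bounds(256, 3, 300, 100);  /* stays in step 1: 256 data, 3 ECC */
        return 0;
}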
+ */ index = start_step * chip->ecc.bytes; aligned_pos = eccpos[index] & ~(busw - 1); @@ -1235,13 +1225,13 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_read_page_hwecc - [REPLACABLE] hardware ecc based page read function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: buffer to store read data - * @page: page number to read + * nand_read_page_hwecc - [REPLACEABLE] hardware ECC based page read function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: buffer to store read data + * @page: page number to read * - * Not for syndrome calculating ecc controllers which need a special oob layout + * Not for syndrome calculating ECC controllers which need a special oob layout. */ static int nand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, uint8_t *buf, int page) @@ -1280,18 +1270,17 @@ static int nand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_read_page_hwecc_oob_first - [REPLACABLE] hw ecc, read oob first - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: buffer to store read data - * @page: page number to read + * nand_read_page_hwecc_oob_first - [REPLACEABLE] hw ecc, read oob first + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: buffer to store read data + * @page: page number to read * - * Hardware ECC for large page chips, require OOB to be read first. - * For this ECC mode, the write_page method is re-used from ECC_HW. - * These methods read/write ECC from the OOB area, unlike the - * ECC_HW_SYNDROME support with multiple ECC steps, follows the - * "infix ECC" scheme and reads/writes ECC from the data area, by - * overwriting the NAND manufacturer bad block markings. + * Hardware ECC for large page chips, require OOB to be read first. For this + * ECC mode, the write_page method is re-used from ECC_HW. These methods + * read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME support with + * multiple ECC steps, follows the "infix ECC" scheme and reads/writes ECC from + * the data area, by overwriting the NAND manufacturer bad block markings. */ static int nand_read_page_hwecc_oob_first(struct mtd_info *mtd, struct nand_chip *chip, uint8_t *buf, int page) @@ -1329,14 +1318,14 @@ static int nand_read_page_hwecc_oob_first(struct mtd_info *mtd, } /** - * nand_read_page_syndrome - [REPLACABLE] hardware ecc syndrom based page read - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: buffer to store read data - * @page: page number to read + * nand_read_page_syndrome - [REPLACEABLE] hardware ECC syndrome based page read + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: buffer to store read data + * @page: page number to read * - * The hw generator calculates the error syndrome automatically. Therefor - * we need a special oob layout and handling. + * The hw generator calculates the error syndrome automatically. Therefore we + * need a special oob layout and handling. 
*/ static int nand_read_page_syndrome(struct mtd_info *mtd, struct nand_chip *chip, uint8_t *buf, int page) @@ -1384,29 +1373,29 @@ static int nand_read_page_syndrome(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_transfer_oob - [Internal] Transfer oob to client buffer - * @chip: nand chip structure - * @oob: oob destination address - * @ops: oob ops structure - * @len: size of oob to transfer + * nand_transfer_oob - [INTERN] Transfer oob to client buffer + * @chip: nand chip structure + * @oob: oob destination address + * @ops: oob ops structure + * @len: size of oob to transfer */ static uint8_t *nand_transfer_oob(struct nand_chip *chip, uint8_t *oob, struct mtd_oob_ops *ops, size_t len) { switch (ops->mode) { - case MTD_OOB_PLACE: - case MTD_OOB_RAW: + case MTD_OPS_PLACE_OOB: + case MTD_OPS_RAW: memcpy(oob, chip->oob_poi + ops->ooboffs, len); return oob + len; - case MTD_OOB_AUTO: { + case MTD_OPS_AUTO_OOB: { struct nand_oobfree *free = chip->ecc.layout->oobfree; uint32_t boffs = 0, roffs = ops->ooboffs; size_t bytes = 0; for (; free->length && len; free++, len -= bytes) { - /* Read request not from offset 0 ? */ + /* Read request not from offset 0? */ if (unlikely(roffs)) { if (roffs >= free->length) { roffs -= free->length; @@ -1432,11 +1421,10 @@ static uint8_t *nand_transfer_oob(struct nand_chip *chip, uint8_t *oob, } /** - * nand_do_read_ops - [Internal] Read data with ECC - * - * @mtd: MTD device structure - * @from: offset to read from - * @ops: oob ops structure + * nand_do_read_ops - [INTERN] Read data with ECC + * @mtd: MTD device structure + * @from: offset to read from + * @ops: oob ops structure * * Internal function. Called with chip held. */ @@ -1451,7 +1439,7 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, int ret = 0; uint32_t readlen = ops->len; uint32_t oobreadlen = ops->ooblen; - uint32_t max_oobsize = ops->mode == MTD_OOB_AUTO ? + uint32_t max_oobsize = ops->mode == MTD_OPS_AUTO_OOB ? mtd->oobavail : mtd->oobsize; uint8_t *bufpoi, *oob, *buf; @@ -1473,7 +1461,7 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, bytes = min(mtd->writesize - col, readlen); aligned = (bytes == mtd->writesize); - /* Is the current page in the buffer ? */ + /* Is the current page in the buffer? */ if (realpage != chip->pagebuf || oob) { bufpoi = aligned ? buf : chip->buffers->databuf; @@ -1483,7 +1471,7 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, } /* Now read the page into the buffer */ - if (unlikely(ops->mode == MTD_OOB_RAW)) + if (unlikely(ops->mode == MTD_OPS_RAW)) ret = chip->ecc.read_page_raw(mtd, chip, bufpoi, page); else if (!aligned && NAND_SUBPAGE_READ(chip) && !oob) @@ -1492,14 +1480,22 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, else ret = chip->ecc.read_page(mtd, chip, bufpoi, page); - if (ret < 0) + if (ret < 0) { + if (!aligned) + /* Invalidate page cache */ + chip->pagebuf = -1; break; + } /* Transfer not aligned data */ if (!aligned) { if (!NAND_SUBPAGE_READ(chip) && !oob && - !(mtd->ecc_stats.failed - stats.failed)) + !(mtd->ecc_stats.failed - stats.failed) && + (ops->mode != MTD_OPS_RAW)) chip->pagebuf = realpage; + else + /* Invalidate page cache */ + chip->pagebuf = -1; memcpy(buf, chip->buffers->databuf + col, bytes); } @@ -1539,7 +1535,7 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, if (!readlen) break; - /* For subsequent reads align to page boundary. 
*/ + /* For subsequent reads align to page boundary */ col = 0; /* Increment page address */ realpage++; @@ -1552,8 +1548,9 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, chip->select_chip(mtd, chipnr); } - /* Check, if the chip supports auto page increment - * or if we have hit a block boundary. + /* + * Check, if the chip supports auto page increment or if we + * have hit a block boundary. */ if (!NAND_CANAUTOINCR(chip) || !(page & blkcheck)) sndcmd = 1; @@ -1574,18 +1571,19 @@ static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, /** * nand_read - [MTD Interface] MTD compatibility function for nand_do_read_ecc - * @mtd: MTD device structure - * @from: offset to read from - * @len: number of bytes to read - * @retlen: pointer to variable to store the number of read bytes - * @buf: the databuffer to put data + * @mtd: MTD device structure + * @from: offset to read from + * @len: number of bytes to read + * @retlen: pointer to variable to store the number of read bytes + * @buf: the databuffer to put data * - * Get hold of the chip and call nand_do_read + * Get hold of the chip and call nand_do_read. */ static int nand_read(struct mtd_info *mtd, loff_t from, size_t len, size_t *retlen, uint8_t *buf) { struct nand_chip *chip = mtd->priv; + struct mtd_oob_ops ops; int ret; /* Do not allow reads past end of device */ @@ -1596,13 +1594,14 @@ static int nand_read(struct mtd_info *mtd, loff_t from, size_t len, nand_get_device(chip, mtd, FL_READING); - chip->ops.len = len; - chip->ops.datbuf = buf; - chip->ops.oobbuf = NULL; + ops.len = len; + ops.datbuf = buf; + ops.oobbuf = NULL; + ops.mode = 0; - ret = nand_do_read_ops(mtd, from, &chip->ops); + ret = nand_do_read_ops(mtd, from, &ops); - *retlen = chip->ops.retlen; + *retlen = ops.retlen; nand_release_device(mtd); @@ -1610,11 +1609,11 @@ static int nand_read(struct mtd_info *mtd, loff_t from, size_t len, } /** - * nand_read_oob_std - [REPLACABLE] the most common OOB data read function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @page: page number to read - * @sndcmd: flag whether to issue read command or not + * nand_read_oob_std - [REPLACEABLE] the most common OOB data read function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @page: page number to read + * @sndcmd: flag whether to issue read command or not */ static int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page, int sndcmd) @@ -1628,12 +1627,12 @@ static int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_read_oob_syndrome - [REPLACABLE] OOB data read function for HW ECC + * nand_read_oob_syndrome - [REPLACEABLE] OOB data read function for HW ECC * with syndromes - * @mtd: mtd info structure - * @chip: nand chip info structure - * @page: page number to read - * @sndcmd: flag whether to issue read command or not + * @mtd: mtd info structure + * @chip: nand chip info structure + * @page: page number to read + * @sndcmd: flag whether to issue read command or not */ static int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip, int page, int sndcmd) @@ -1667,10 +1666,10 @@ static int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_write_oob_std - [REPLACABLE] the most common OOB data write function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @page: page number to write + * nand_write_oob_std - [REPLACEABLE] the most common OOB data write function + * @mtd: mtd info structure + * 
@chip: nand chip info structure + * @page: page number to write */ static int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page) @@ -1690,11 +1689,11 @@ static int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_write_oob_syndrome - [REPLACABLE] OOB data write function for HW ECC - * with syndrome - only for large page flash ! - * @mtd: mtd info structure - * @chip: nand chip info structure - * @page: page number to write + * nand_write_oob_syndrome - [REPLACEABLE] OOB data write function for HW ECC + * with syndrome - only for large page flash + * @mtd: mtd info structure + * @chip: nand chip info structure + * @page: page number to write */ static int nand_write_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip, int page) @@ -1749,34 +1748,37 @@ static int nand_write_oob_syndrome(struct mtd_info *mtd, } /** - * nand_do_read_oob - [Intern] NAND read out-of-band - * @mtd: MTD device structure - * @from: offset to read from - * @ops: oob operations description structure + * nand_do_read_oob - [INTERN] NAND read out-of-band + * @mtd: MTD device structure + * @from: offset to read from + * @ops: oob operations description structure * - * NAND read out-of-band data from the spare area + * NAND read out-of-band data from the spare area. */ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops) { int page, realpage, chipnr, sndcmd = 1; struct nand_chip *chip = mtd->priv; + struct mtd_ecc_stats stats; int blkcheck = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1; int readlen = ops->ooblen; int len; uint8_t *buf = ops->oobbuf; - DEBUG(MTD_DEBUG_LEVEL3, "%s: from = 0x%08Lx, len = %i\n", + pr_debug("%s: from = 0x%08Lx, len = %i\n", __func__, (unsigned long long)from, readlen); - if (ops->mode == MTD_OOB_AUTO) + stats = mtd->ecc_stats; + + if (ops->mode == MTD_OPS_AUTO_OOB) len = chip->ecc.layout->oobavail; else len = mtd->oobsize; if (unlikely(ops->ooboffs >= len)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Attempt to start read " - "outside oob\n", __func__); + pr_debug("%s: attempt to start read outside oob\n", + __func__); return -EINVAL; } @@ -1784,8 +1786,8 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from, if (unlikely(from >= mtd->size || ops->ooboffs + readlen > ((mtd->size >> chip->page_shift) - (from >> chip->page_shift)) * len)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Attempt read beyond end " - "of device\n", __func__); + pr_debug("%s: attempt to read beyond end of device\n", + __func__); return -EINVAL; } @@ -1797,7 +1799,10 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from, page = realpage & chip->pagemask; while (1) { - sndcmd = chip->ecc.read_oob(mtd, chip, page, sndcmd); + if (ops->mode == MTD_OPS_RAW) + sndcmd = chip->ecc.read_oob_raw(mtd, chip, page, sndcmd); + else + sndcmd = chip->ecc.read_oob(mtd, chip, page, sndcmd); len = min(len, readlen); buf = nand_transfer_oob(chip, buf, ops, len); @@ -1830,24 +1835,29 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from, chip->select_chip(mtd, chipnr); } - /* Check, if the chip supports auto page increment - * or if we have hit a block boundary. + /* + * Check, if the chip supports auto page increment or if we + * have hit a block boundary. */ if (!NAND_CANAUTOINCR(chip) || !(page & blkcheck)) sndcmd = 1; } ops->oobretlen = ops->ooblen; - return 0; + + if (mtd->ecc_stats.failed - stats.failed) + return -EBADMSG; + + return mtd->ecc_stats.corrected - stats.corrected ? 
-EUCLEAN : 0; } /** * nand_read_oob - [MTD Interface] NAND read data and/or out-of-band - * @mtd: MTD device structure - * @from: offset to read from - * @ops: oob operation description structure + * @mtd: MTD device structure + * @from: offset to read from + * @ops: oob operation description structure * - * NAND read data and/or out-of-band data + * NAND read data and/or out-of-band data. */ static int nand_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops) @@ -1859,17 +1869,17 @@ static int nand_read_oob(struct mtd_info *mtd, loff_t from, /* Do not allow reads past end of device */ if (ops->datbuf && (from + ops->len) > mtd->size) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Attempt read " - "beyond end of device\n", __func__); + pr_debug("%s: attempt to read beyond end of device\n", + __func__); return -EINVAL; } nand_get_device(chip, mtd, FL_READING); switch (ops->mode) { - case MTD_OOB_PLACE: - case MTD_OOB_AUTO: - case MTD_OOB_RAW: + case MTD_OPS_PLACE_OOB: + case MTD_OPS_AUTO_OOB: + case MTD_OPS_RAW: break; default: @@ -1888,12 +1898,12 @@ out: /** - * nand_write_page_raw - [Intern] raw page write function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: data buffer + * nand_write_page_raw - [INTERN] raw page write function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: data buffer * - * Not for syndrome calculating ecc controllers, which use a special oob layout + * Not for syndrome calculating ECC controllers, which use a special oob layout. */ static void nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *buf) @@ -1903,10 +1913,10 @@ static void nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_write_page_raw_syndrome - [Intern] raw page write function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: data buffer + * nand_write_page_raw_syndrome - [INTERN] raw page write function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: data buffer * * We need a special oob layout and handling even when ECC isn't checked. 
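With the MTD_OOB_* mode constants renamed to MTD_OPS_* and nand_do_read_oob() now feeding ECC statistics back as -EUCLEAN/-EBADMSG (hunks above), an in-kernel caller of this era would look roughly like the sketch below; foo_read_page() is hypothetical and goes through the mtd->read_oob() method directly:

#include <linux/mtd/mtd.h>

static int foo_read_page(struct mtd_info *mtd, loff_t from, u8 *data, u8 *oob)
{
        struct mtd_oob_ops ops = {
                .mode   = MTD_OPS_AUTO_OOB,   /* pack only the free OOB bytes */
                .len    = mtd->writesize,
                .datbuf = data,
                .ooblen = mtd->oobavail,
                .oobbuf = oob,
        };
        int ret = mtd->read_oob(mtd, from, &ops);

        if (ret == -EUCLEAN)
                pr_warn("foo_nand: bitflips corrected, consider scrubbing\n");
        else if (ret == -EBADMSG)
                pr_err("foo_nand: uncorrectable ECC error\n");
        return ret;   /* ops.retlen / ops.oobretlen hold the amounts read */
}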
*/ @@ -1942,10 +1952,10 @@ static void nand_write_page_raw_syndrome(struct mtd_info *mtd, chip->write_buf(mtd, oob, size); } /** - * nand_write_page_swecc - [REPLACABLE] software ecc based page write function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: data buffer + * nand_write_page_swecc - [REPLACEABLE] software ECC based page write function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: data buffer */ static void nand_write_page_swecc(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *buf) @@ -1957,7 +1967,7 @@ static void nand_write_page_swecc(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *p = buf; uint32_t *eccpos = chip->ecc.layout->eccpos; - /* Software ecc calculation */ + /* Software ECC calculation */ for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) chip->ecc.calculate(mtd, p, &ecc_calc[i]); @@ -1968,10 +1978,10 @@ static void nand_write_page_swecc(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_write_page_hwecc - [REPLACABLE] hardware ecc based page write function - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: data buffer + * nand_write_page_hwecc - [REPLACEABLE] hardware ECC based page write function + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: data buffer */ static void nand_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *buf) @@ -1996,13 +2006,13 @@ static void nand_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_write_page_syndrome - [REPLACABLE] hardware ecc syndrom based page write - * @mtd: mtd info structure - * @chip: nand chip info structure - * @buf: data buffer + * nand_write_page_syndrome - [REPLACEABLE] hardware ECC syndrome based page write + * @mtd: mtd info structure + * @chip: nand chip info structure + * @buf: data buffer * - * The hw generator calculates the error syndrome automatically. Therefor - * we need a special oob layout and handling. + * The hw generator calculates the error syndrome automatically. Therefore we + * need a special oob layout and handling. */ static void nand_write_page_syndrome(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *buf) @@ -2041,12 +2051,12 @@ static void nand_write_page_syndrome(struct mtd_info *mtd, /** * nand_write_page - [REPLACEABLE] write one page - * @mtd: MTD device structure - * @chip: NAND chip descriptor - * @buf: the data to write - * @page: page number to write - * @cached: cached programming - * @raw: use _raw version of write_page + * @mtd: MTD device structure + * @chip: NAND chip descriptor + * @buf: the data to write + * @page: page number to write + * @cached: cached programming + * @raw: use _raw version of write_page */ static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *buf, int page, int cached, int raw) @@ -2061,8 +2071,8 @@ static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip, chip->ecc.write_page(mtd, chip, buf); /* - * Cached progamming disabled for now, Not sure if its worth the - * trouble. The speed gain is not very impressive. (2.3->2.6Mib/s) + * Cached programming disabled for now. Not sure if it's worth the + * trouble. The speed gain is not very impressive. (2.3->2.6Mib/s).
*/ cached = 0; @@ -2072,7 +2082,7 @@ static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip, status = chip->waitfunc(mtd, chip); /* * See if operation failed and additional status checks are - * available + * available. */ if ((status & NAND_STATUS_FAIL) && (chip->errstat)) status = chip->errstat(mtd, chip, FL_WRITING, status, @@ -2096,29 +2106,37 @@ static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip, } /** - * nand_fill_oob - [Internal] Transfer client buffer to oob - * @chip: nand chip structure - * @oob: oob data buffer - * @len: oob data write length - * @ops: oob ops structure + * nand_fill_oob - [INTERN] Transfer client buffer to oob + * @mtd: MTD device structure + * @oob: oob data buffer + * @len: oob data write length + * @ops: oob ops structure */ -static uint8_t *nand_fill_oob(struct nand_chip *chip, uint8_t *oob, size_t len, - struct mtd_oob_ops *ops) +static uint8_t *nand_fill_oob(struct mtd_info *mtd, uint8_t *oob, size_t len, + struct mtd_oob_ops *ops) { + struct nand_chip *chip = mtd->priv; + + /* + * Initialise to all 0xFF, to avoid the possibility of left over OOB + * data from a previous OOB read. + */ + memset(chip->oob_poi, 0xff, mtd->oobsize); + switch (ops->mode) { - case MTD_OOB_PLACE: - case MTD_OOB_RAW: + case MTD_OPS_PLACE_OOB: + case MTD_OPS_RAW: memcpy(chip->oob_poi + ops->ooboffs, oob, len); return oob + len; - case MTD_OOB_AUTO: { + case MTD_OPS_AUTO_OOB: { struct nand_oobfree *free = chip->ecc.layout->oobfree; uint32_t boffs = 0, woffs = ops->ooboffs; size_t bytes = 0; for (; free->length && len; free++, len -= bytes) { - /* Write request not from offset 0 ? */ + /* Write request not from offset 0? */ if (unlikely(woffs)) { if (woffs >= free->length) { woffs -= free->length; @@ -2146,12 +2164,12 @@ static uint8_t *nand_fill_oob(struct nand_chip *chip, uint8_t *oob, size_t len, #define NOTALIGNED(x) ((x & (chip->subpagesize - 1)) != 0) /** - * nand_do_write_ops - [Internal] NAND write with ECC - * @mtd: MTD device structure - * @to: offset to write to - * @ops: oob operations description structure + * nand_do_write_ops - [INTERN] NAND write with ECC + * @mtd: MTD device structure + * @to: offset to write to + * @ops: oob operations description structure * - * NAND write with ECC + * NAND write with ECC. */ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, struct mtd_oob_ops *ops) @@ -2161,7 +2179,7 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, uint32_t writelen = ops->len; uint32_t oobwritelen = ops->ooblen; - uint32_t oobmaxlen = ops->mode == MTD_OOB_AUTO ? + uint32_t oobmaxlen = ops->mode == MTD_OPS_AUTO_OOB ? 
mtd->oobavail : mtd->oobsize; uint8_t *oob = ops->oobbuf; @@ -2172,10 +2190,10 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, if (!writelen) return 0; - /* reject writes, which are not page aligned */ + /* Reject writes, which are not page aligned */ if (NOTALIGNED(to) || NOTALIGNED(ops->len)) { - printk(KERN_NOTICE "%s: Attempt to write not " - "page aligned data\n", __func__); + pr_notice("%s: attempt to write non page aligned data\n", + __func__); return -EINVAL; } @@ -2201,10 +2219,6 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, (chip->pagebuf << chip->page_shift) < (to + ops->len)) chip->pagebuf = -1; - /* If we're not given explicit OOB data, let it be 0xFF */ - if (likely(!oob)) - memset(chip->oob_poi, 0xff, mtd->oobsize); - /* Don't allow multipage oob writes with offset */ if (oob && ops->ooboffs && (ops->ooboffs + ops->ooblen > oobmaxlen)) return -EINVAL; @@ -2214,7 +2228,7 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, int cached = writelen > bytes && page != blockmask; uint8_t *wbuf = buf; - /* Partial page write ? */ + /* Partial page write? */ if (unlikely(column || writelen < (mtd->writesize - 1))) { cached = 0; bytes = min_t(int, bytes - column, (int) writelen); @@ -2226,12 +2240,15 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, if (unlikely(oob)) { size_t len = min(oobwritelen, oobmaxlen); - oob = nand_fill_oob(chip, oob, len, ops); + oob = nand_fill_oob(mtd, oob, len, ops); oobwritelen -= len; + } else { + /* We still need to erase leftover OOB data */ + memset(chip->oob_poi, 0xff, mtd->oobsize); } ret = chip->write_page(mtd, chip, wbuf, page, cached, - (ops->mode == MTD_OOB_RAW)); + (ops->mode == MTD_OPS_RAW)); if (ret) break; @@ -2260,11 +2277,11 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, /** * panic_nand_write - [MTD Interface] NAND write with ECC - * @mtd: MTD device structure - * @to: offset to write to - * @len: number of bytes to write - * @retlen: pointer to variable to store the number of written bytes - * @buf: the data to write + * @mtd: MTD device structure + * @to: offset to write to + * @len: number of bytes to write + * @retlen: pointer to variable to store the number of written bytes + * @buf: the data to write * * NAND write with ECC. Used when performing writes in interrupt context, this * may for example be called by mtdoops when writing an oops while in panic. @@ -2273,6 +2290,7 @@ static int panic_nand_write(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen, const uint8_t *buf) { struct nand_chip *chip = mtd->priv; + struct mtd_oob_ops ops; int ret; /* Do not allow reads past end of device */ @@ -2281,36 +2299,38 @@ static int panic_nand_write(struct mtd_info *mtd, loff_t to, size_t len, if (!len) return 0; - /* Wait for the device to get ready. */ + /* Wait for the device to get ready */ panic_nand_wait(mtd, chip, 400); - /* Grab the device. 
*/ + /* Grab the device */ panic_nand_get_device(chip, mtd, FL_WRITING); - chip->ops.len = len; - chip->ops.datbuf = (uint8_t *)buf; - chip->ops.oobbuf = NULL; + ops.len = len; + ops.datbuf = (uint8_t *)buf; + ops.oobbuf = NULL; + ops.mode = 0; - ret = nand_do_write_ops(mtd, to, &chip->ops); + ret = nand_do_write_ops(mtd, to, &ops); - *retlen = chip->ops.retlen; + *retlen = ops.retlen; return ret; } /** * nand_write - [MTD Interface] NAND write with ECC - * @mtd: MTD device structure - * @to: offset to write to - * @len: number of bytes to write - * @retlen: pointer to variable to store the number of written bytes - * @buf: the data to write + * @mtd: MTD device structure + * @to: offset to write to + * @len: number of bytes to write + * @retlen: pointer to variable to store the number of written bytes + * @buf: the data to write * - * NAND write with ECC + * NAND write with ECC. */ static int nand_write(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen, const uint8_t *buf) { struct nand_chip *chip = mtd->priv; + struct mtd_oob_ops ops; int ret; /* Do not allow reads past end of device */ @@ -2321,13 +2341,14 @@ static int nand_write(struct mtd_info *mtd, loff_t to, size_t len, nand_get_device(chip, mtd, FL_WRITING); - chip->ops.len = len; - chip->ops.datbuf = (uint8_t *)buf; - chip->ops.oobbuf = NULL; + ops.len = len; + ops.datbuf = (uint8_t *)buf; + ops.oobbuf = NULL; + ops.mode = 0; - ret = nand_do_write_ops(mtd, to, &chip->ops); + ret = nand_do_write_ops(mtd, to, &ops); - *retlen = chip->ops.retlen; + *retlen = ops.retlen; nand_release_device(mtd); @@ -2336,11 +2357,11 @@ static int nand_write(struct mtd_info *mtd, loff_t to, size_t len, /** * nand_do_write_oob - [MTD Interface] NAND write out-of-band - * @mtd: MTD device structure - * @to: offset to write to - * @ops: oob operation description structure + * @mtd: MTD device structure + * @to: offset to write to + * @ops: oob operation description structure * - * NAND write out-of-band + * NAND write out-of-band. 
*/ static int nand_do_write_oob(struct mtd_info *mtd, loff_t to, struct mtd_oob_ops *ops) @@ -2348,24 +2369,24 @@ static int nand_do_write_oob(struct mtd_info *mtd, loff_t to, int chipnr, page, status, len; struct nand_chip *chip = mtd->priv; - DEBUG(MTD_DEBUG_LEVEL3, "%s: to = 0x%08x, len = %i\n", + pr_debug("%s: to = 0x%08x, len = %i\n", __func__, (unsigned int)to, (int)ops->ooblen); - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) len = chip->ecc.layout->oobavail; else len = mtd->oobsize; /* Do not allow write past end of page */ if ((ops->ooboffs + ops->ooblen) > len) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Attempt to write " - "past end of page\n", __func__); + pr_debug("%s: attempt to write past end of page\n", + __func__); return -EINVAL; } if (unlikely(ops->ooboffs >= len)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Attempt to start " - "write outside oob\n", __func__); + pr_debug("%s: attempt to start write outside oob\n", + __func__); return -EINVAL; } @@ -2374,8 +2395,8 @@ static int nand_do_write_oob(struct mtd_info *mtd, loff_t to, ops->ooboffs + ops->ooblen > ((mtd->size >> chip->page_shift) - (to >> chip->page_shift)) * len)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Attempt write beyond " - "end of device\n", __func__); + pr_debug("%s: attempt to write beyond end of device\n", + __func__); return -EINVAL; } @@ -2401,10 +2422,12 @@ static int nand_do_write_oob(struct mtd_info *mtd, loff_t to, if (page == chip->pagebuf) chip->pagebuf = -1; - memset(chip->oob_poi, 0xff, mtd->oobsize); - nand_fill_oob(chip, ops->oobbuf, ops->ooblen, ops); - status = chip->ecc.write_oob(mtd, chip, page & chip->pagemask); - memset(chip->oob_poi, 0xff, mtd->oobsize); + nand_fill_oob(mtd, ops->oobbuf, ops->ooblen, ops); + + if (ops->mode == MTD_OPS_RAW) + status = chip->ecc.write_oob_raw(mtd, chip, page & chip->pagemask); + else + status = chip->ecc.write_oob(mtd, chip, page & chip->pagemask); if (status) return status; @@ -2416,9 +2439,9 @@ static int nand_do_write_oob(struct mtd_info *mtd, loff_t to, /** * nand_write_oob - [MTD Interface] NAND write data and/or out-of-band - * @mtd: MTD device structure - * @to: offset to write to - * @ops: oob operation description structure + * @mtd: MTD device structure + * @to: offset to write to + * @ops: oob operation description structure */ static int nand_write_oob(struct mtd_info *mtd, loff_t to, struct mtd_oob_ops *ops) @@ -2430,17 +2453,17 @@ static int nand_write_oob(struct mtd_info *mtd, loff_t to, /* Do not allow writes past end of device */ if (ops->datbuf && (to + ops->len) > mtd->size) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Attempt write beyond " - "end of device\n", __func__); + pr_debug("%s: attempt to write beyond end of device\n", + __func__); return -EINVAL; } nand_get_device(chip, mtd, FL_WRITING); switch (ops->mode) { - case MTD_OOB_PLACE: - case MTD_OOB_AUTO: - case MTD_OOB_RAW: + case MTD_OPS_PLACE_OOB: + case MTD_OPS_AUTO_OOB: + case MTD_OPS_RAW: break; default: @@ -2458,11 +2481,11 @@ out: } /** - * single_erease_cmd - [GENERIC] NAND standard block erase command function - * @mtd: MTD device structure - * @page: the page address of the block which will be erased + * single_erase_cmd - [GENERIC] NAND standard block erase command function + * @mtd: MTD device structure + * @page: the page address of the block which will be erased * - * Standard erase command for NAND chips + * Standard erase command for NAND chips. 
*/ static void single_erase_cmd(struct mtd_info *mtd, int page) { @@ -2473,12 +2496,11 @@ static void single_erase_cmd(struct mtd_info *mtd, int page) } /** - * multi_erease_cmd - [GENERIC] AND specific block erase command function - * @mtd: MTD device structure - * @page: the page address of the block which will be erased + * multi_erase_cmd - [GENERIC] AND specific block erase command function + * @mtd: MTD device structure + * @page: the page address of the block which will be erased * - * AND multi block erase command function - * Erase 4 consecutive blocks + * AND multi block erase command function. Erase 4 consecutive blocks. */ static void multi_erase_cmd(struct mtd_info *mtd, int page) { @@ -2493,10 +2515,10 @@ static void multi_erase_cmd(struct mtd_info *mtd, int page) /** * nand_erase - [MTD Interface] erase block(s) - * @mtd: MTD device structure - * @instr: erase instruction + * @mtd: MTD device structure + * @instr: erase instruction * - * Erase one ore more blocks + * Erase one or more blocks. */ static int nand_erase(struct mtd_info *mtd, struct erase_info *instr) { @@ -2505,12 +2527,12 @@ static int nand_erase(struct mtd_info *mtd, struct erase_info *instr) #define BBT_PAGE_MASK 0xffffff3f /** - * nand_erase_nand - [Internal] erase block(s) - * @mtd: MTD device structure - * @instr: erase instruction - * @allowbbt: allow erasing the bbt area + * nand_erase_nand - [INTERN] erase block(s) + * @mtd: MTD device structure + * @instr: erase instruction + * @allowbbt: allow erasing the bbt area * - * Erase one ore more blocks + * Erase one or more blocks. */ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, int allowbbt) @@ -2521,9 +2543,9 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, unsigned int bbt_masked_page = 0xffffffff; loff_t len; - DEBUG(MTD_DEBUG_LEVEL3, "%s: start = 0x%012llx, len = %llu\n", - __func__, (unsigned long long)instr->addr, - (unsigned long long)instr->len); + pr_debug("%s: start = 0x%012llx, len = %llu\n", + __func__, (unsigned long long)instr->addr, + (unsigned long long)instr->len); if (check_offs_len(mtd, instr->addr, instr->len)) return -EINVAL; @@ -2545,8 +2567,8 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, /* Check, if it is write protected */ if (nand_check_wp(mtd)) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Device is write protected!!!\n", - __func__); + pr_debug("%s: device is write protected!\n", + __func__); instr->state = MTD_ERASE_FAILED; goto erase_exit; } @@ -2555,7 +2577,7 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, * If BBT requires refresh, set the BBT page mask to see if the BBT * should be rewritten. Otherwise the mask is set to 0xffffffff which * can not be matched. This is also done when the bbt is actually * erased to avoid recusrsive updates + * erased to avoid recursive updates. */ if (chip->options & BBT_AUTO_REFRESH && !allowbbt) bbt_masked_page = chip->bbt_td->pages[chipnr] & BBT_PAGE_MASK; @@ -2566,20 +2588,18 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, instr->state = MTD_ERASING; while (len) { - /* - * heck if we have a bad block, we do not erase bad blocks ! - */ + /* Check if we have a bad block, we do not erase bad blocks!
*/ if (nand_block_checkbad(mtd, ((loff_t) page) << chip->page_shift, 0, allowbbt)) { - printk(KERN_WARNING "%s: attempt to erase a bad block " - "at page 0x%08x\n", __func__, page); + pr_warn("%s: attempt to erase a bad block at page 0x%08x\n", + __func__, page); instr->state = MTD_ERASE_FAILED; goto erase_exit; } /* * Invalidate the page cache, if we erase the block which - * contains the current cached page + * contains the current cached page. */ if (page <= chip->pagebuf && chip->pagebuf < (page + pages_per_block)) @@ -2599,8 +2619,8 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, /* See if block erase succeeded */ if (status & NAND_STATUS_FAIL) { - DEBUG(MTD_DEBUG_LEVEL0, "%s: Failed erase, " - "page 0x%08x\n", __func__, page); + pr_debug("%s: failed erase, page 0x%08x\n", + __func__, page); instr->state = MTD_ERASE_FAILED; instr->fail_addr = ((loff_t)page << chip->page_shift); @@ -2609,7 +2629,7 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, /* * If BBT requires refresh, set the BBT rewrite flag to the - * page being erased + * page being erased. */ if (bbt_masked_page != 0xffffffff && (page & BBT_PAGE_MASK) == bbt_masked_page) @@ -2628,7 +2648,7 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, /* * If BBT requires refresh and BBT-PERCHIP, set the BBT - * page mask to see if this BBT should be rewritten + * page mask to see if this BBT should be rewritten. */ if (bbt_masked_page != 0xffffffff && (chip->bbt_td->options & NAND_BBT_PERCHIP)) @@ -2651,7 +2671,7 @@ erase_exit: /* * If BBT requires refresh and erase was successful, rewrite any - * selected bad block tables + * selected bad block tables. */ if (bbt_masked_page == 0xffffffff || ret) return ret; @@ -2659,10 +2679,10 @@ erase_exit: for (chipnr = 0; chipnr < chip->numchips; chipnr++) { if (!rewrite_bbt[chipnr]) continue; - /* update the BBT for chip */ - DEBUG(MTD_DEBUG_LEVEL0, "%s: nand_update_bbt " - "(%d:0x%0llx 0x%0x)\n", __func__, chipnr, - rewrite_bbt[chipnr], chip->bbt_td->pages[chipnr]); + /* Update the BBT for chip */ + pr_debug("%s: nand_update_bbt (%d:0x%0llx 0x%0x)\n", + __func__, chipnr, rewrite_bbt[chipnr], + chip->bbt_td->pages[chipnr]); nand_update_bbt(mtd, rewrite_bbt[chipnr]); } @@ -2672,15 +2692,15 @@ erase_exit: /** * nand_sync - [MTD Interface] sync - * @mtd: MTD device structure + * @mtd: MTD device structure * - * Sync is actually a wait for chip ready function + * Sync is actually a wait for chip ready function. 
*/ static void nand_sync(struct mtd_info *mtd) { struct nand_chip *chip = mtd->priv; - DEBUG(MTD_DEBUG_LEVEL3, "%s: called\n", __func__); + pr_debug("%s: called\n", __func__); /* Grab the lock and see if the device is available */ nand_get_device(chip, mtd, FL_SYNCING); @@ -2690,8 +2710,8 @@ static void nand_sync(struct mtd_info *mtd) /** * nand_block_isbad - [MTD Interface] Check if block at offset is bad - * @mtd: MTD device structure - * @offs: offset relative to mtd start + * @mtd: MTD device structure + * @offs: offset relative to mtd start */ static int nand_block_isbad(struct mtd_info *mtd, loff_t offs) { @@ -2704,8 +2724,8 @@ static int nand_block_isbad(struct mtd_info *mtd, loff_t offs) /** * nand_block_markbad - [MTD Interface] Mark block at the given offset as bad - * @mtd: MTD device structure - * @ofs: offset relative to mtd start + * @mtd: MTD device structure + * @ofs: offset relative to mtd start */ static int nand_block_markbad(struct mtd_info *mtd, loff_t ofs) { @@ -2714,7 +2734,7 @@ static int nand_block_markbad(struct mtd_info *mtd, loff_t ofs) ret = nand_block_isbad(mtd, ofs); if (ret) { - /* If it was bad already, return success and do nothing. */ + /* If it was bad already, return success and do nothing */ if (ret > 0) return 0; return ret; @@ -2725,7 +2745,7 @@ static int nand_block_markbad(struct mtd_info *mtd, loff_t ofs) /** * nand_suspend - [MTD Interface] Suspend the NAND flash - * @mtd: MTD device structure + * @mtd: MTD device structure */ static int nand_suspend(struct mtd_info *mtd) { @@ -2736,7 +2756,7 @@ static int nand_suspend(struct mtd_info *mtd) /** * nand_resume - [MTD Interface] Resume the NAND flash - * @mtd: MTD device structure + * @mtd: MTD device structure */ static void nand_resume(struct mtd_info *mtd) { @@ -2745,13 +2765,11 @@ static void nand_resume(struct mtd_info *mtd) if (chip->state == FL_PM_SUSPENDED) nand_release_device(mtd); else - printk(KERN_ERR "%s called for a chip which is not " - "in suspended state\n", __func__); + pr_err("%s called for a chip which is not in suspended state\n", + __func__); } -/* - * Set default functions - */ +/* Set default functions */ static void nand_set_defaults(struct nand_chip *chip, int busw) { /* check for proper chip_delay setup, set 20us if not */ @@ -2793,23 +2811,21 @@ static void nand_set_defaults(struct nand_chip *chip, int busw) } -/* - * sanitize ONFI strings so we can safely print them - */ +/* Sanitize ONFI strings so we can safely print them */ static void sanitize_string(uint8_t *s, size_t len) { ssize_t i; - /* null terminate */ + /* Null terminate */ s[len - 1] = 0; - /* remove non printable chars */ + /* Remove non printable chars */ for (i = 0; i < len - 1; i++) { if (s[i] < ' ' || s[i] > 127) s[i] = '?'; } - /* remove trailing spaces */ + /* Remove trailing spaces */ strim(s); } @@ -2826,28 +2842,28 @@ static u16 onfi_crc16(u16 crc, u8 const *p, size_t len) } /* - * Check if the NAND chip is ONFI compliant, returns 1 if it is, 0 otherwise + * Check if the NAND chip is ONFI compliant, returns 1 if it is, 0 otherwise. 
*/ static int nand_flash_detect_onfi(struct mtd_info *mtd, struct nand_chip *chip, - int busw) + int *busw) { struct nand_onfi_params *p = &chip->onfi_params; int i; int val; - /* try ONFI for unknow chip or LP */ + /* Try ONFI for unknown chip or LP */ chip->cmdfunc(mtd, NAND_CMD_READID, 0x20, -1); if (chip->read_byte(mtd) != 'O' || chip->read_byte(mtd) != 'N' || chip->read_byte(mtd) != 'F' || chip->read_byte(mtd) != 'I') return 0; - printk(KERN_INFO "ONFI flash detected\n"); + pr_info("ONFI flash detected\n"); chip->cmdfunc(mtd, NAND_CMD_PARAM, 0, -1); for (i = 0; i < 3; i++) { chip->read_buf(mtd, (uint8_t *)p, sizeof(*p)); if (onfi_crc16(ONFI_CRC_BASE, (uint8_t *)p, 254) == le16_to_cpu(p->crc)) { - printk(KERN_INFO "ONFI param page %d valid\n", i); + pr_info("ONFI param page %d valid\n", i); break; } } @@ -2855,7 +2871,7 @@ static int nand_flash_detect_onfi(struct mtd_info *mtd, struct nand_chip *chip, if (i == 3) return 0; - /* check version */ + /* Check version */ val = le16_to_cpu(p->revision); if (val & (1 << 5)) chip->onfi_version = 23; @@ -2871,8 +2887,7 @@ static int nand_flash_detect_onfi(struct mtd_info *mtd, struct nand_chip *chip, chip->onfi_version = 0; if (!chip->onfi_version) { - printk(KERN_INFO "%s: unsupported ONFI version: %d\n", - __func__, val); + pr_info("%s: unsupported ONFI version: %d\n", __func__, val); return 0; } @@ -2884,9 +2899,9 @@ static int nand_flash_detect_onfi(struct mtd_info *mtd, struct nand_chip *chip, mtd->erasesize = le32_to_cpu(p->pages_per_block) * mtd->writesize; mtd->oobsize = le16_to_cpu(p->spare_bytes_per_page); chip->chipsize = (uint64_t)le32_to_cpu(p->blocks_per_lun) * mtd->erasesize; - busw = 0; + *busw = 0; if (le16_to_cpu(p->features) & 1) - busw = NAND_BUSWIDTH_16; + *busw = NAND_BUSWIDTH_16; chip->options &= ~NAND_CHIPOPTIONS_MSK; chip->options |= (NAND_NO_READRDY | @@ -2896,7 +2911,7 @@ static int nand_flash_detect_onfi(struct mtd_info *mtd, struct nand_chip *chip, } /* - * Get the flash and manufacturer id and lookup if the type is supported + * Get the flash and manufacturer id and lookup if the type is supported. */ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, struct nand_chip *chip, @@ -2913,7 +2928,7 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, /* * Reset the chip, required by some chips (e.g. Micron MT29FxGxxxxx) - * after power-up + * after power-up. */ chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); @@ -2924,7 +2939,8 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, *maf_id = chip->read_byte(mtd); *dev_id = chip->read_byte(mtd); - /* Try again to make sure, as some systems the bus-hold or other + /* + * Try again to make sure, as some systems the bus-hold or other * interface concerns can cause random data which looks like a * possibly credible NAND flash to appear. If the two results do * not match, ignore the device completely. 
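The nand_flash_detect_onfi() change in the hunks above, from an int busw parameter to int *busw, is a behavioural fix rather than pure cleanup: the function derives the bus width from bit 0 of the ONFI features field, and with a by-value parameter that result never reached the caller. Below is a minimal, standalone C sketch of the difference; the function names, the BUSWIDTH_16 value and the features word are hypothetical stand-ins, not the real MTD structures or ONFI parameter page.

#include <stdio.h>

#define BUSWIDTH_16 0x1   /* stand-in for NAND_BUSWIDTH_16 */

/* By-value parameter: the caller's variable is never updated (old behaviour). */
static int detect_by_value(int busw, unsigned int features)
{
	if (features & 1)
		busw = BUSWIDTH_16;   /* lost on return */
	return 1;                     /* "ONFI device found" */
}

/* By-pointer parameter: the detected width is visible to the caller (the fix). */
static int detect_by_pointer(int *busw, unsigned int features)
{
	*busw = 0;
	if (features & 1)
		*busw = BUSWIDTH_16;
	return 1;
}

int main(void)
{
	unsigned int features = 0x1;  /* pretend the parameter page reports x16 */
	int busw = 0;

	detect_by_value(busw, features);
	printf("by value:   busw = %#x\n", busw);   /* still 0 */

	detect_by_pointer(&busw, features);
	printf("by pointer: busw = %#x\n", busw);   /* 0x1 */
	return 0;
}

The caller side of the patch matches this: nand_get_flash_type() now passes &busw, so the later NAND_BUSWIDTH_16 consistency check operates on the ONFI-reported width instead of a stale value.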
@@ -2936,9 +2952,9 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, id_data[i] = chip->read_byte(mtd); if (id_data[0] != *maf_id || id_data[1] != *dev_id) { - printk(KERN_INFO "%s: second ID read did not match " - "%02x,%02x against %02x,%02x\n", __func__, - *maf_id, *dev_id, id_data[0], id_data[1]); + pr_info("%s: second ID read did not match " + "%02x,%02x against %02x,%02x\n", __func__, + *maf_id, *dev_id, id_data[0], id_data[1]); return ERR_PTR(-ENODEV); } @@ -2952,7 +2968,7 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, chip->onfi_version = 0; if (!type->name || !type->pagesize) { /* Check is chip is ONFI compliant */ - ret = nand_flash_detect_onfi(mtd, chip, busw); + ret = nand_flash_detect_onfi(mtd, chip, &busw); if (ret) goto ident_done; } @@ -2973,7 +2989,7 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, chip->chipsize = (uint64_t)type->chipsize << 20; if (!type->pagesize && chip->init_size) { - /* set the pagesize, oobsize, erasesize by the driver*/ + /* Set the pagesize, oobsize, erasesize by the driver */ busw = chip->init_size(mtd, chip, id_data); } else if (!type->pagesize) { int extid; @@ -3033,7 +3049,7 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, } } else { /* - * Old devices have chip data hardcoded in the device id table + * Old devices have chip data hardcoded in the device id table. */ mtd->erasesize = type->erasesize; mtd->writesize = type->pagesize; @@ -3043,7 +3059,7 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, /* * Check for Spansion/AMD ID + repeating 5th, 6th byte since * some Spansion chips have erasesize that conflicts with size - * listed in nand_ids table + * listed in nand_ids table. * Data sheet (5 byte ID): Spansion S30ML-P ORNAND (p.39) */ if (*maf_id == NAND_MFR_AMD && id_data[4] != 0x00 && @@ -3057,15 +3073,16 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd, chip->options &= ~NAND_CHIPOPTIONS_MSK; chip->options |= type->options & NAND_CHIPOPTIONS_MSK; - /* Check if chip is a not a samsung device. Do not clear the - * options for chips which are not having an extended id. + /* + * Check if chip is not a Samsung device. Do not clear the + * options for chips which do not have an extended id. */ if (*maf_id != NAND_MFR_SAMSUNG && !type->pagesize) chip->options &= ~NAND_SAMSUNG_LP_OPTIONS; ident_done: /* - * Set chip as a default. Board drivers can override it, if necessary + * Set chip as a default. Board drivers can override it, if necessary. */ chip->options |= NAND_NO_AUTOINCR; @@ -3077,21 +3094,21 @@ ident_done: /* * Check, if buswidth is correct. Hardware drivers should set - * chip correct ! + * chip correct! */ if (busw != (chip->options & NAND_BUSWIDTH_16)) { - printk(KERN_INFO "NAND device: Manufacturer ID:" - " 0x%02x, Chip ID: 0x%02x (%s %s)\n", *maf_id, - *dev_id, nand_manuf_ids[maf_idx].name, mtd->name); - printk(KERN_WARNING "NAND bus width %d instead %d bit\n", - (chip->options & NAND_BUSWIDTH_16) ? 16 : 8, - busw ? 16 : 8); + pr_info("NAND device: Manufacturer ID:" + " 0x%02x, Chip ID: 0x%02x (%s %s)\n", *maf_id, + *dev_id, nand_manuf_ids[maf_idx].name, mtd->name); + pr_warn("NAND bus width %d instead %d bit\n", + (chip->options & NAND_BUSWIDTH_16) ? 16 : 8, + busw ? 16 : 8); return ERR_PTR(-EINVAL); } /* Calculate the address shift from the page size */ chip->page_shift = ffs(mtd->writesize) - 1; - /* Convert chipsize to number of pages per chip -1. 
*/ + /* Convert chipsize to number of pages per chip -1 */ chip->pagemask = (chip->chipsize >> chip->page_shift) - 1; chip->bbt_erase_shift = chip->phys_erase_shift = @@ -3121,7 +3138,7 @@ ident_done: if ((chip->cellinfo & NAND_CI_CELLTYPE_MSK) && (*maf_id == NAND_MFR_SAMSUNG || *maf_id == NAND_MFR_HYNIX)) - chip->options |= NAND_BBT_SCANLASTPAGE; + chip->bbt_options |= NAND_BBT_SCANLASTPAGE; else if ((!(chip->cellinfo & NAND_CI_CELLTYPE_MSK) && (*maf_id == NAND_MFR_SAMSUNG || *maf_id == NAND_MFR_HYNIX || @@ -3129,17 +3146,7 @@ ident_done: *maf_id == NAND_MFR_AMD)) || (mtd->writesize == 2048 && *maf_id == NAND_MFR_MICRON)) - chip->options |= NAND_BBT_SCAN2NDPAGE; - - /* - * Numonyx/ST 2K pages, x8 bus use BOTH byte 1 and 6 - */ - if (!(busw & NAND_BUSWIDTH_16) && - *maf_id == NAND_MFR_STMICRO && - mtd->writesize == 2048) { - chip->options |= NAND_BBT_SCANBYTE1AND6; - chip->badblockpos = 0; - } + chip->bbt_options |= NAND_BBT_SCAN2NDPAGE; /* Check for AND chips with 4 page planes */ if (chip->options & NAND_4PAGE_ARRAY) @@ -3147,12 +3154,11 @@ ident_done: else chip->erase_cmd = single_erase_cmd; - /* Do not replace user supplied command function ! */ + /* Do not replace user supplied command function! */ if (mtd->writesize > 512 && chip->cmdfunc == nand_command) chip->cmdfunc = nand_command_lp; - /* TODO onfi flash name */ - printk(KERN_INFO "NAND device: Manufacturer ID:" + pr_info("NAND device: Manufacturer ID:" " 0x%02x, Chip ID: 0x%02x (%s %s)\n", *maf_id, *dev_id, nand_manuf_ids[maf_idx].name, chip->onfi_version ? chip->onfi_params.model : type->name); @@ -3162,12 +3168,12 @@ ident_done: /** * nand_scan_ident - [NAND Interface] Scan for the NAND device - * @mtd: MTD device structure - * @maxchips: Number of chips to scan for - * @table: Alternative NAND ID table + * @mtd: MTD device structure + * @maxchips: number of chips to scan for + * @table: alternative NAND ID table * - * This is the first phase of the normal nand_scan() function. It - * reads the flash ID and sets up MTD fields accordingly. + * This is the first phase of the normal nand_scan() function. It reads the + * flash ID and sets up MTD fields accordingly. * * The mtd->owner field must be set to the module of the caller. */ @@ -3189,7 +3195,7 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips, if (IS_ERR(type)) { if (!(chip->options & NAND_SCAN_SILENT_NODEV)) - printk(KERN_WARNING "No NAND device found.\n"); + pr_warn("No NAND device found\n"); chip->select_chip(mtd, -1); return PTR_ERR(type); } @@ -3207,7 +3213,7 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips, break; } if (i > 1) - printk(KERN_INFO "%d NAND chips detected\n", i); + pr_info("%d NAND chips detected\n", i); /* Store the number of chips and calc total size for mtd */ chip->numchips = i; @@ -3220,11 +3226,11 @@ EXPORT_SYMBOL(nand_scan_ident); /** * nand_scan_tail - [NAND Interface] Scan for the NAND device - * @mtd: MTD device structure + * @mtd: MTD device structure * - * This is the second phase of the normal nand_scan() function. It - * fills out all the uninitialized function pointers with the defaults - * and scans for a bad block table if appropriate. + * This is the second phase of the normal nand_scan() function. It fills out + * all the uninitialized function pointers with the defaults and scans for a + * bad block table if appropriate. 
*/ int nand_scan_tail(struct mtd_info *mtd) { @@ -3240,7 +3246,7 @@ int nand_scan_tail(struct mtd_info *mtd) chip->oob_poi = chip->buffers->databuf + mtd->writesize; /* - * If no default placement scheme is given, select an appropriate one + * If no default placement scheme is given, select an appropriate one. */ if (!chip->ecc.layout && (chip->ecc.mode != NAND_ECC_SOFT_BCH)) { switch (mtd->oobsize) { @@ -3257,8 +3263,8 @@ int nand_scan_tail(struct mtd_info *mtd) chip->ecc.layout = &nand_oob_128; break; default: - printk(KERN_WARNING "No oob scheme defined for " - "oobsize %d\n", mtd->oobsize); + pr_warn("No oob scheme defined for oobsize %d\n", + mtd->oobsize); BUG(); } } @@ -3267,7 +3273,7 @@ int nand_scan_tail(struct mtd_info *mtd) chip->write_page = nand_write_page; /* - * check ECC mode, default to software if 3byte/512byte hardware ECC is + * Check ECC mode, default to software if 3byte/512byte hardware ECC is * selected and we have 256 byte pagesize fallback to software ECC */ @@ -3276,15 +3282,15 @@ int nand_scan_tail(struct mtd_info *mtd) /* Similar to NAND_ECC_HW, but a separate read_page handle */ if (!chip->ecc.calculate || !chip->ecc.correct || !chip->ecc.hwctl) { - printk(KERN_WARNING "No ECC functions supplied; " - "Hardware ECC not possible\n"); + pr_warn("No ECC functions supplied; " + "hardware ECC not possible\n"); BUG(); } if (!chip->ecc.read_page) chip->ecc.read_page = nand_read_page_hwecc_oob_first; case NAND_ECC_HW: - /* Use standard hwecc read page function ? */ + /* Use standard hwecc read page function? */ if (!chip->ecc.read_page) chip->ecc.read_page = nand_read_page_hwecc; if (!chip->ecc.write_page) @@ -3305,11 +3311,11 @@ int nand_scan_tail(struct mtd_info *mtd) chip->ecc.read_page == nand_read_page_hwecc || !chip->ecc.write_page || chip->ecc.write_page == nand_write_page_hwecc)) { - printk(KERN_WARNING "No ECC functions supplied; " - "Hardware ECC not possible\n"); + pr_warn("No ECC functions supplied; " + "hardware ECC not possible\n"); BUG(); } - /* Use standard syndrome read/write page function ? */ + /* Use standard syndrome read/write page function? */ if (!chip->ecc.read_page) chip->ecc.read_page = nand_read_page_syndrome; if (!chip->ecc.write_page) @@ -3325,9 +3331,9 @@ int nand_scan_tail(struct mtd_info *mtd) if (mtd->writesize >= chip->ecc.size) break; - printk(KERN_WARNING "%d byte HW ECC not possible on " - "%d byte page size, fallback to SW ECC\n", - chip->ecc.size, mtd->writesize); + pr_warn("%d byte HW ECC not possible on " + "%d byte page size, fallback to SW ECC\n", + chip->ecc.size, mtd->writesize); chip->ecc.mode = NAND_ECC_SOFT; case NAND_ECC_SOFT: @@ -3347,7 +3353,7 @@ int nand_scan_tail(struct mtd_info *mtd) case NAND_ECC_SOFT_BCH: if (!mtd_nand_has_bch()) { - printk(KERN_WARNING "CONFIG_MTD_ECC_BCH not enabled\n"); + pr_warn("CONFIG_MTD_ECC_BCH not enabled\n"); BUG(); } chip->ecc.calculate = nand_bch_calculate_ecc; @@ -3362,8 +3368,8 @@ int nand_scan_tail(struct mtd_info *mtd) /* * Board driver should supply ecc.size and ecc.bytes values to * select how many bits are correctable; see nand_bch_init() - * for details. - * Otherwise, default to 4 bits for large page devices + * for details. Otherwise, default to 4 bits for large page + * devices. 
*/ if (!chip->ecc.size && (mtd->oobsize >= 64)) { chip->ecc.size = 512; @@ -3374,14 +3380,14 @@ int nand_scan_tail(struct mtd_info *mtd) chip->ecc.bytes, &chip->ecc.layout); if (!chip->ecc.priv) { - printk(KERN_WARNING "BCH ECC initialization failed!\n"); + pr_warn("BCH ECC initialization failed!\n"); BUG(); } break; case NAND_ECC_NONE: - printk(KERN_WARNING "NAND_ECC_NONE selected by board driver. " - "This is not recommended !!\n"); + pr_warn("NAND_ECC_NONE selected by board driver. " + "This is not recommended!\n"); chip->ecc.read_page = nand_read_page_raw; chip->ecc.write_page = nand_write_page_raw; chip->ecc.read_oob = nand_read_oob_std; @@ -3393,14 +3399,19 @@ int nand_scan_tail(struct mtd_info *mtd) break; default: - printk(KERN_WARNING "Invalid NAND_ECC_MODE %d\n", - chip->ecc.mode); + pr_warn("Invalid NAND_ECC_MODE %d\n", chip->ecc.mode); BUG(); } + /* For many systems, the standard OOB write also works for raw */ + if (!chip->ecc.read_oob_raw) + chip->ecc.read_oob_raw = chip->ecc.read_oob; + if (!chip->ecc.write_oob_raw) + chip->ecc.write_oob_raw = chip->ecc.write_oob; + /* * The number of bytes available for a client to place data into - * the out of band area + * the out of band area. */ chip->ecc.layout->oobavail = 0; for (i = 0; chip->ecc.layout->oobfree[i].length @@ -3411,19 +3422,16 @@ int nand_scan_tail(struct mtd_info *mtd) /* * Set the number of read / write steps for one page depending on ECC - * mode + * mode. */ chip->ecc.steps = mtd->writesize / chip->ecc.size; if (chip->ecc.steps * chip->ecc.size != mtd->writesize) { - printk(KERN_WARNING "Invalid ecc parameters\n"); + pr_warn("Invalid ECC parameters\n"); BUG(); } chip->ecc.total = chip->ecc.steps * chip->ecc.bytes; - /* - * Allow subpage writes up to ecc.steps. Not possible for MLC - * FLASH. - */ + /* Allow subpage writes up to ecc.steps. Not possible for MLC flash */ if (!(chip->options & NAND_NO_SUBPAGE_WRITE) && !(chip->cellinfo & NAND_CI_CELLTYPE_MSK)) { switch (chip->ecc.steps) { @@ -3481,9 +3489,11 @@ int nand_scan_tail(struct mtd_info *mtd) } EXPORT_SYMBOL(nand_scan_tail); -/* is_module_text_address() isn't exported, and it's mostly a pointless +/* + * is_module_text_address() isn't exported, and it's mostly a pointless * test if this is a module _anyway_ -- they'd have to try _really_ hard - * to call us from in-kernel code if the core NAND support is modular. */ + * to call us from in-kernel code if the core NAND support is modular. + */ #ifdef MODULE #define caller_is_module() (1) #else @@ -3493,15 +3503,13 @@ EXPORT_SYMBOL(nand_scan_tail); /** * nand_scan - [NAND Interface] Scan for the NAND device - * @mtd: MTD device structure - * @maxchips: Number of chips to scan for - * - * This fills out all the uninitialized function pointers - * with the defaults. - * The flash ID is read and the mtd/chip structures are - * filled with the appropriate values. - * The mtd->owner field must be set to the module of the caller + * @mtd: MTD device structure + * @maxchips: number of chips to scan for * + * This fills out all the uninitialized function pointers with the defaults. + * The flash ID is read and the mtd/chip structures are filled with the + * appropriate values. The mtd->owner field must be set to the module of the + * caller. */ int nand_scan(struct mtd_info *mtd, int maxchips) { @@ -3509,8 +3517,7 @@ int nand_scan(struct mtd_info *mtd, int maxchips) /* Many callers got this wrong, so check for it for a while... 
*/ if (!mtd->owner && caller_is_module()) { - printk(KERN_CRIT "%s called with NULL mtd->owner!\n", - __func__); + pr_crit("%s called with NULL mtd->owner!\n", __func__); BUG(); } @@ -3523,8 +3530,8 @@ EXPORT_SYMBOL(nand_scan); /** * nand_release - [NAND Interface] Free resources held by the NAND device - * @mtd: MTD device structure -*/ + * @mtd: MTD device structure + */ void nand_release(struct mtd_info *mtd) { struct nand_chip *chip = mtd->priv; diff --git a/drivers/mtd/nand/nand_bbt.c b/drivers/mtd/nand/nand_bbt.c index 4165857752ca..69148ae3bf58 100644 --- a/drivers/mtd/nand/nand_bbt.c +++ b/drivers/mtd/nand/nand_bbt.c @@ -14,7 +14,7 @@ * * When nand_scan_bbt is called, then it tries to find the bad block table * depending on the options in the BBT descriptor(s). If no flash based BBT - * (NAND_USE_FLASH_BBT) is specified then the device is scanned for factory + * (NAND_BBT_USE_FLASH) is specified then the device is scanned for factory * marked good / bad blocks. This information is used to create a memory BBT. * Once a new bad block is discovered then the "factory" information is updated * on the device. @@ -36,9 +36,9 @@ * The table is marked in the OOB area with an ident pattern and a version * number which indicates which of both tables is more up to date. If the NAND * controller needs the complete OOB area for the ECC information then the - * option NAND_USE_FLASH_BBT_NO_OOB should be used: it moves the ident pattern - * and the version byte into the data area and the OOB area will remain - * untouched. + * option NAND_BBT_NO_OOB should be used (along with NAND_BBT_USE_FLASH, of + * course): it moves the ident pattern and the version byte into the data area + * and the OOB area will remain untouched. * * The table uses 2 bits per block * 11b: block is good @@ -81,17 +81,15 @@ static int check_pattern_no_oob(uint8_t *buf, struct nand_bbt_descr *td) /** * check_pattern - [GENERIC] check if a pattern is in the buffer - * @buf: the buffer to search - * @len: the length of buffer to search - * @paglen: the pagelength - * @td: search pattern descriptor + * @buf: the buffer to search + * @len: the length of buffer to search + * @paglen: the pagelength + * @td: search pattern descriptor * - * Check for a pattern at the given place. Used to search bad block - * tables and good / bad block identifiers. - * If the SCAN_EMPTY option is set then check, if all bytes except the - * pattern area contain 0xff - * -*/ + * Check for a pattern at the given place. Used to search bad block tables and + * good / bad block identifiers. If the SCAN_EMPTY option is set then check, if + * all bytes except the pattern area contain 0xff. + */ static int check_pattern(uint8_t *buf, int len, int paglen, struct nand_bbt_descr *td) { int i, end = 0; @@ -110,32 +108,8 @@ static int check_pattern(uint8_t *buf, int len, int paglen, struct nand_bbt_desc p += end; /* Compare the pattern */ - for (i = 0; i < td->len; i++) { - if (p[i] != td->pattern[i]) - return -1; - } - - /* Check both positions 1 and 6 for pattern? 
*/ - if (td->options & NAND_BBT_SCANBYTE1AND6) { - if (td->options & NAND_BBT_SCANEMPTY) { - p += td->len; - end += NAND_SMALL_BADBLOCK_POS - td->offs; - /* Check region between positions 1 and 6 */ - for (i = 0; i < NAND_SMALL_BADBLOCK_POS - td->offs - td->len; - i++) { - if (*p++ != 0xff) - return -1; - } - } - else { - p += NAND_SMALL_BADBLOCK_POS - td->offs; - } - /* Compare the pattern */ - for (i = 0; i < td->len; i++) { - if (p[i] != td->pattern[i]) - return -1; - } - } + if (memcmp(p, td->pattern, td->len)) + return -1; if (td->options & NAND_BBT_SCANEMPTY) { p += td->len; @@ -150,14 +124,13 @@ static int check_pattern(uint8_t *buf, int len, int paglen, struct nand_bbt_desc /** * check_short_pattern - [GENERIC] check if a pattern is in the buffer - * @buf: the buffer to search - * @td: search pattern descriptor - * - * Check for a pattern at the given place. Used to search bad block - * tables and good / bad block identifiers. Same as check_pattern, but - * no optional empty check + * @buf: the buffer to search + * @td: search pattern descriptor * -*/ + * Check for a pattern at the given place. Used to search bad block tables and + * good / bad block identifiers. Same as check_pattern, but no optional empty + * check. + */ static int check_short_pattern(uint8_t *buf, struct nand_bbt_descr *td) { int i; @@ -168,21 +141,14 @@ static int check_short_pattern(uint8_t *buf, struct nand_bbt_descr *td) if (p[td->offs + i] != td->pattern[i]) return -1; } - /* Need to check location 1 AND 6? */ - if (td->options & NAND_BBT_SCANBYTE1AND6) { - for (i = 0; i < td->len; i++) { - if (p[NAND_SMALL_BADBLOCK_POS + i] != td->pattern[i]) - return -1; - } - } return 0; } /** * add_marker_len - compute the length of the marker in data area - * @td: BBT descriptor used for computation + * @td: BBT descriptor used for computation * - * The length will be 0 if the markeris located in OOB area. + * The length will be 0 if the marker is located in OOB area. */ static u32 add_marker_len(struct nand_bbt_descr *td) { @@ -199,34 +165,33 @@ /** * read_bbt - [GENERIC] Read the bad block table starting from page - * @mtd: MTD device structure - * @buf: temporary buffer - * @page: the starting page - * @num: the number of bbt descriptors to read - * @td: the bbt describtion table - * @offs: offset in the memory table + * @mtd: MTD device structure + * @buf: temporary buffer + * @page: the starting page + * @num: the number of bbt descriptors to read + * @td: the bbt description table + * @offs: offset in the memory table * * Read the bad block table starting from page.
- * */ static int read_bbt(struct mtd_info *mtd, uint8_t *buf, int page, int num, struct nand_bbt_descr *td, int offs) { - int res, i, j, act = 0; + int res, ret = 0, i, j, act = 0; struct nand_chip *this = mtd->priv; size_t retlen, len, totlen; loff_t from; int bits = td->options & NAND_BBT_NRBITS_MSK; - uint8_t msk = (uint8_t) ((1 << bits) - 1); + uint8_t msk = (uint8_t)((1 << bits) - 1); u32 marker_len; int reserved_block_code = td->reserved_block_code; totlen = (num * bits) >> 3; marker_len = add_marker_len(td); - from = ((loff_t) page) << this->page_shift; + from = ((loff_t)page) << this->page_shift; while (totlen) { - len = min(totlen, (size_t) (1 << this->bbt_erase_shift)); + len = min(totlen, (size_t)(1 << this->bbt_erase_shift)); if (marker_len) { /* * In case the BBT marker is not in the OOB area it @@ -238,11 +203,18 @@ static int read_bbt(struct mtd_info *mtd, uint8_t *buf, int page, int num, } res = mtd->read(mtd, from, len, &retlen, buf); if (res < 0) { - if (retlen != len) { - printk(KERN_INFO "nand_bbt: Error reading bad block table\n"); + if (mtd_is_eccerr(res)) { + pr_info("nand_bbt: ECC error in BBT at " + "0x%012llx\n", from & ~mtd->writesize); + return res; + } else if (mtd_is_bitflip(res)) { + pr_info("nand_bbt: corrected error in BBT at " + "0x%012llx\n", from & ~mtd->writesize); + ret = res; + } else { + pr_info("nand_bbt: error reading BBT\n"); return res; } - printk(KERN_WARNING "nand_bbt: ECC error while reading bad block table\n"); } /* Analyse data */ @@ -253,17 +225,19 @@ static int read_bbt(struct mtd_info *mtd, uint8_t *buf, int page, int num, if (tmp == msk) continue; if (reserved_block_code && (tmp == reserved_block_code)) { - printk(KERN_DEBUG "nand_read_bbt: Reserved block at 0x%012llx\n", - (loff_t)((offs << 2) + (act >> 1)) << this->bbt_erase_shift); + pr_info("nand_read_bbt: reserved block at 0x%012llx\n", + (loff_t)((offs << 2) + (act >> 1)) << this->bbt_erase_shift); this->bbt[offs + (act >> 3)] |= 0x2 << (act & 0x06); mtd->ecc_stats.bbtblocks++; continue; } - /* Leave it for now, if its matured we can move this - * message to MTD_DEBUG_LEVEL0 */ - printk(KERN_DEBUG "nand_read_bbt: Bad block at 0x%012llx\n", - (loff_t)((offs << 2) + (act >> 1)) << this->bbt_erase_shift); - /* Factory marked bad or worn out ? */ + /* + * Leave it for now, if it's matured we can + * move this message to pr_debug. + */ + pr_info("nand_read_bbt: bad block at 0x%012llx\n", + (loff_t)((offs << 2) + (act >> 1)) << this->bbt_erase_shift); + /* Factory marked bad or worn out? */ if (tmp == 0) this->bbt[offs + (act >> 3)] |= 0x3 << (act & 0x06); else @@ -274,20 +248,20 @@ static int read_bbt(struct mtd_info *mtd, uint8_t *buf, int page, int num, totlen -= len; from += len; } - return 0; + return ret; } /** * read_abs_bbt - [GENERIC] Read the bad block table starting at a given page - * @mtd: MTD device structure - * @buf: temporary buffer - * @td: descriptor for the bad block table - * @chip: read the table for a specific chip, -1 read all chips. - * Applies only if NAND_BBT_PERCHIP option is set + * @mtd: MTD device structure + * @buf: temporary buffer + * @td: descriptor for the bad block table + * @chip: read the table for a specific chip, -1 read all chips; applies only if + * NAND_BBT_PERCHIP option is set * - * Read the bad block table for all chips starting at a given page - * We assume that the bbt bits are in consecutive order. -*/ + * Read the bad block table for all chips starting at a given page. We assume + * that the bbt bits are in consecutive order. 
+ */ static int read_abs_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td, int chip) { struct nand_chip *this = mtd->priv; @@ -313,9 +287,7 @@ static int read_abs_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_desc return 0; } -/* - * BBT marker is in the first page, no OOB. - */ +/* BBT marker is in the first page, no OOB */ static int scan_read_raw_data(struct mtd_info *mtd, uint8_t *buf, loff_t offs, struct nand_bbt_descr *td) { @@ -329,35 +301,26 @@ static int scan_read_raw_data(struct mtd_info *mtd, uint8_t *buf, loff_t offs, return mtd->read(mtd, offs, len, &retlen, buf); } -/* - * Scan read raw data from flash - */ +/* Scan read raw data from flash */ static int scan_read_raw_oob(struct mtd_info *mtd, uint8_t *buf, loff_t offs, size_t len) { struct mtd_oob_ops ops; int res; - ops.mode = MTD_OOB_RAW; + ops.mode = MTD_OPS_RAW; ops.ooboffs = 0; ops.ooblen = mtd->oobsize; - while (len > 0) { - if (len <= mtd->writesize) { - ops.oobbuf = buf + len; - ops.datbuf = buf; - ops.len = len; - return mtd->read_oob(mtd, offs, &ops); - } else { - ops.oobbuf = buf + mtd->writesize; - ops.datbuf = buf; - ops.len = mtd->writesize; - res = mtd->read_oob(mtd, offs, &ops); + ops.datbuf = buf; + ops.len = min(len, (size_t)mtd->writesize); + ops.oobbuf = buf + ops.len; - if (res) - return res; - } + res = mtd->read_oob(mtd, offs, &ops); + + if (res) + return res; buf += mtd->oobsize + mtd->writesize; len -= mtd->writesize; @@ -374,15 +337,13 @@ static int scan_read_raw(struct mtd_info *mtd, uint8_t *buf, loff_t offs, return scan_read_raw_oob(mtd, buf, offs, len); } -/* - * Scan write data with oob to flash - */ +/* Scan write data with oob to flash */ static int scan_write_bbt(struct mtd_info *mtd, loff_t offs, size_t len, uint8_t *buf, uint8_t *oob) { struct mtd_oob_ops ops; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = 0; ops.ooblen = mtd->oobsize; ops.datbuf = buf; @@ -403,15 +364,14 @@ static u32 bbt_get_ver_offs(struct mtd_info *mtd, struct nand_bbt_descr *td) /** * read_abs_bbts - [GENERIC] Read the bad block table(s) for all chips starting at a given page - * @mtd: MTD device structure - * @buf: temporary buffer - * @td: descriptor for the bad block table - * @md: descriptor for the bad block table mirror + * @mtd: MTD device structure + * @buf: temporary buffer + * @td: descriptor for the bad block table + * @md: descriptor for the bad block table mirror * - * Read the bad block table(s) for all chips starting at a given page - * We assume that the bbt bits are in consecutive order. - * -*/ + * Read the bad block table(s) for all chips starting at a given page. We + * assume that the bbt bits are in consecutive order. 
+ */ static int read_abs_bbts(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td, struct nand_bbt_descr *md) { @@ -422,8 +382,8 @@ static int read_abs_bbts(struct mtd_info *mtd, uint8_t *buf, scan_read_raw(mtd, buf, (loff_t)td->pages[0] << this->page_shift, mtd->writesize, td); td->version[0] = buf[bbt_get_ver_offs(mtd, td)]; - printk(KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", - td->pages[0], td->version[0]); + pr_info("Bad block table at page %d, version 0x%02X\n", + td->pages[0], td->version[0]); } /* Read the mirror version, if available */ @@ -431,15 +391,13 @@ static int read_abs_bbts(struct mtd_info *mtd, uint8_t *buf, scan_read_raw(mtd, buf, (loff_t)md->pages[0] << this->page_shift, mtd->writesize, td); md->version[0] = buf[bbt_get_ver_offs(mtd, md)]; - printk(KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", - md->pages[0], md->version[0]); + pr_info("Bad block table at page %d, version 0x%02X\n", + md->pages[0], md->version[0]); } return 1; } -/* - * Scan a given block full - */ +/* Scan a given block full */ static int scan_block_full(struct mtd_info *mtd, struct nand_bbt_descr *bd, loff_t offs, uint8_t *buf, size_t readlen, int scanlen, int len) @@ -447,7 +405,8 @@ static int scan_block_full(struct mtd_info *mtd, struct nand_bbt_descr *bd, int ret, j; ret = scan_read_raw_oob(mtd, buf, offs, readlen); - if (ret) + /* Ignore ECC errors when checking for BBM */ + if (ret && !mtd_is_bitflip_or_eccerr(ret)) return ret; for (j = 0; j < len; j++, buf += scanlen) { @@ -457,9 +416,7 @@ static int scan_block_full(struct mtd_info *mtd, struct nand_bbt_descr *bd, return 0; } -/* - * Scan a given block partially - */ +/* Scan a given block partially */ static int scan_block_fast(struct mtd_info *mtd, struct nand_bbt_descr *bd, loff_t offs, uint8_t *buf, int len) { @@ -470,16 +427,16 @@ static int scan_block_fast(struct mtd_info *mtd, struct nand_bbt_descr *bd, ops.oobbuf = buf; ops.ooboffs = 0; ops.datbuf = NULL; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; for (j = 0; j < len; j++) { /* - * Read the full oob until read_oob is fixed to - * handle single byte reads for 16 bit - * buswidth + * Read the full oob until read_oob is fixed to handle single + * byte reads for 16 bit buswidth. */ ret = mtd->read_oob(mtd, offs, &ops); - if (ret) + /* Ignore ECC errors when checking for BBM */ + if (ret && !mtd_is_bitflip_or_eccerr(ret)) return ret; if (check_short_pattern(buf, bd)) @@ -492,14 +449,14 @@ static int scan_block_fast(struct mtd_info *mtd, struct nand_bbt_descr *bd, /** * create_bbt - [GENERIC] Create a bad block table by scanning the device - * @mtd: MTD device structure - * @buf: temporary buffer - * @bd: descriptor for the good/bad block search pattern - * @chip: create the table for a specific chip, -1 read all chips. - * Applies only if NAND_BBT_PERCHIP option is set + * @mtd: MTD device structure + * @buf: temporary buffer + * @bd: descriptor for the good/bad block search pattern + * @chip: create the table for a specific chip, -1 read all chips; applies only + * if NAND_BBT_PERCHIP option is set * - * Create a bad block table by scanning the device - * for the given good/bad block identify pattern + * Create a bad block table by scanning the device for the given good/bad block + * identify pattern. 
*/ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *bd, int chip) { @@ -510,7 +467,7 @@ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, loff_t from; size_t readlen; - printk(KERN_INFO "Scanning device for bad blocks\n"); + pr_info("Scanning device for bad blocks\n"); if (bd->options & NAND_BBT_SCANALLPAGES) len = 1 << (this->bbt_erase_shift - this->page_shift); @@ -530,14 +487,16 @@ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, } if (chip == -1) { - /* Note that numblocks is 2 * (real numblocks) here, see i+=2 - * below as it makes shifting and masking less painful */ + /* + * Note that numblocks is 2 * (real numblocks) here, see i+=2 + * below as it makes shifting and masking less painful + */ numblocks = mtd->size >> (this->bbt_erase_shift - 1); startblock = 0; from = 0; } else { if (chip >= this->numchips) { - printk(KERN_WARNING "create_bbt(): chipnr (%d) > available chips (%d)\n", + pr_warn("create_bbt(): chipnr (%d) > available chips (%d)\n", chip + 1, this->numchips); return -EINVAL; } @@ -547,7 +506,7 @@ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, from = (loff_t)startblock << (this->bbt_erase_shift - 1); } - if (this->options & NAND_BBT_SCANLASTPAGE) + if (this->bbt_options & NAND_BBT_SCANLASTPAGE) from += mtd->erasesize - (mtd->writesize * len); for (i = startblock; i < numblocks;) { @@ -566,8 +525,8 @@ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, if (ret) { this->bbt[i >> 3] |= 0x03 << (i & 0x6); - printk(KERN_WARNING "Bad eraseblock %d at 0x%012llx\n", - i >> 1, (unsigned long long)from); + pr_warn("Bad eraseblock %d at 0x%012llx\n", + i >> 1, (unsigned long long)from); mtd->ecc_stats.badblocks++; } @@ -579,20 +538,18 @@ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, /** * search_bbt - [GENERIC] scan the device for a specific bad block table - * @mtd: MTD device structure - * @buf: temporary buffer - * @td: descriptor for the bad block table + * @mtd: MTD device structure + * @buf: temporary buffer + * @td: descriptor for the bad block table * - * Read the bad block table by searching for a given ident pattern. - * Search is preformed either from the beginning up or from the end of - * the device downwards. The search starts always at the start of a - * block. - * If the option NAND_BBT_PERCHIP is given, each chip is searched - * for a bbt, which contains the bad block information of this chip. - * This is necessary to provide support for certain DOC devices. + * Read the bad block table by searching for a given ident pattern. Search is + * performed either from the beginning up or from the end of the device + * downwards. The search starts always at the start of a block. If the option + * NAND_BBT_PERCHIP is given, each chip is searched for a bbt, which contains + * the bad block information of this chip. This is necessary to provide support + * for certain DOC devices. * - * The bbt ident pattern resides in the oob area of the first page - * in a block. + * The bbt ident pattern resides in the oob area of the first page in a block. */ static int search_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td) { @@ -603,7 +560,7 @@ static int search_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr int bbtblocks; int blocktopage = this->bbt_erase_shift - this->page_shift; - /* Search direction top -> down ? */ + /* Search direction top -> down?
*/ if (td->options & NAND_BBT_LASTBLOCK) { startblock = (mtd->size >> this->bbt_erase_shift) - 1; dir = -1; @@ -612,7 +569,7 @@ static int search_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr dir = 1; } - /* Do we have a bbt per chip ? */ + /* Do we have a bbt per chip? */ if (td->options & NAND_BBT_PERCHIP) { chips = this->numchips; bbtblocks = this->chipsize >> this->bbt_erase_shift; @@ -651,23 +608,23 @@ static int search_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr /* Check, if we found a bbt for each requested chip */ for (i = 0; i < chips; i++) { if (td->pages[i] == -1) - printk(KERN_WARNING "Bad block table not found for chip %d\n", i); + pr_warn("Bad block table not found for chip %d\n", i); else - printk(KERN_DEBUG "Bad block table found at page %d, version 0x%02X\n", td->pages[i], - td->version[i]); + pr_info("Bad block table found at page %d, version " + "0x%02X\n", td->pages[i], td->version[i]); } return 0; } /** * search_read_bbts - [GENERIC] scan the device for bad block table(s) - * @mtd: MTD device structure - * @buf: temporary buffer - * @td: descriptor for the bad block table - * @md: descriptor for the bad block table mirror + * @mtd: MTD device structure + * @buf: temporary buffer + * @td: descriptor for the bad block table + * @md: descriptor for the bad block table mirror * - * Search and read the bad block table(s) -*/ + * Search and read the bad block table(s). + */ static int search_read_bbts(struct mtd_info *mtd, uint8_t * buf, struct nand_bbt_descr *td, struct nand_bbt_descr *md) { /* Search the primary table */ @@ -683,16 +640,14 @@ static int search_read_bbts(struct mtd_info *mtd, uint8_t * buf, struct nand_bbt /** * write_bbt - [GENERIC] (Re)write the bad block table + * @mtd: MTD device structure + * @buf: temporary buffer + * @td: descriptor for the bad block table + * @md: descriptor for the bad block table mirror + * @chipsel: selector for a specific chip, -1 for all * - * @mtd: MTD device structure - * @buf: temporary buffer - * @td: descriptor for the bad block table - * @md: descriptor for the bad block table mirror - * @chipsel: selector for a specific chip, -1 for all - * - * (Re)write the bad block table - * -*/ + * (Re)write the bad block table. + */ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td, struct nand_bbt_descr *md, int chipsel) @@ -711,14 +666,14 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, ops.ooblen = mtd->oobsize; ops.ooboffs = 0; ops.datbuf = NULL; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; if (!rcode) rcode = 0xff; - /* Write bad block table per chip rather than per device ? */ + /* Write bad block table per chip rather than per device? */ if (td->options & NAND_BBT_PERCHIP) { numblocks = (int)(this->chipsize >> this->bbt_erase_shift); - /* Full device write or specific chip ? */ + /* Full device write or specific chip? */ if (chipsel == -1) { nrchips = this->numchips; } else { @@ -732,8 +687,8 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, /* Loop through the chips */ for (; chip < nrchips; chip++) { - - /* There was already a version of the table, reuse the page + /* + * There was already a version of the table, reuse the page * This applies for absolute placement too, as we have the * page nr. in td->pages. */ @@ -742,8 +697,10 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, goto write; } - /* Automatic placement of the bad block table */ - /* Search direction top -> down ? 
*/ + /* + * Automatic placement of the bad block table. Search direction + * top -> down? + */ if (td->options & NAND_BBT_LASTBLOCK) { startblock = numblocks * (chip + 1) - 1; dir = -1; @@ -767,7 +724,7 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, if (!md || md->pages[chip] != page) goto write; } - printk(KERN_ERR "No space left to write bad block table\n"); + pr_err("No space left to write bad block table\n"); return -ENOSPC; write: @@ -792,24 +749,22 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, bbtoffs = chip * (numblocks >> 2); - to = ((loff_t) page) << this->page_shift; + to = ((loff_t)page) << this->page_shift; - /* Must we save the block contents ? */ + /* Must we save the block contents? */ if (td->options & NAND_BBT_SAVECONTENT) { /* Make it block aligned */ - to &= ~((loff_t) ((1 << this->bbt_erase_shift) - 1)); + to &= ~((loff_t)((1 << this->bbt_erase_shift) - 1)); len = 1 << this->bbt_erase_shift; res = mtd->read(mtd, to, len, &retlen, buf); if (res < 0) { if (retlen != len) { - printk(KERN_INFO "nand_bbt: Error " - "reading block for writing " - "the bad block table\n"); + pr_info("nand_bbt: error reading block " + "for writing the bad block table\n"); return res; } - printk(KERN_WARNING "nand_bbt: ECC error " - "while reading block for writing " - "bad block table\n"); + pr_warn("nand_bbt: ECC error while reading " + "block for writing bad block table\n"); } /* Read oob data */ ops.ooblen = (len >> this->page_shift) * mtd->oobsize; @@ -822,19 +777,19 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, pageoffs = page - (int)(to >> this->page_shift); offs = pageoffs << this->page_shift; /* Preset the bbt area with 0xff */ - memset(&buf[offs], 0xff, (size_t) (numblocks >> sft)); + memset(&buf[offs], 0xff, (size_t)(numblocks >> sft)); ooboffs = len + (pageoffs * mtd->oobsize); } else if (td->options & NAND_BBT_NO_OOB) { ooboffs = 0; offs = td->len; - /* the version byte */ + /* The version byte */ if (td->options & NAND_BBT_VERSION) offs++; /* Calc length */ - len = (size_t) (numblocks >> sft); + len = (size_t)(numblocks >> sft); len += offs; - /* Make it page aligned ! */ + /* Make it page aligned! */ len = ALIGN(len, mtd->writesize); /* Preset the buffer with 0xff */ memset(buf, 0xff, len); @@ -842,8 +797,8 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, memcpy(buf, td->pattern, td->len); } else { /* Calc length */ - len = (size_t) (numblocks >> sft); - /* Make it page aligned ! */ + len = (size_t)(numblocks >> sft); + /* Make it page aligned! */ len = ALIGN(len, mtd->writesize); /* Preset the buffer with 0xff */ memset(buf, 0xff, len + @@ -857,13 +812,13 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, if (td->options & NAND_BBT_VERSION) buf[ooboffs + td->veroffs] = td->version[chip]; - /* walk through the memory table */ + /* Walk through the memory table */ for (i = 0; i < numblocks;) { uint8_t dat; dat = this->bbt[bbtoffs + (i >> 2)]; for (j = 0; j < 4; j++, i++) { int sftcnt = (i << (3 - sft)) & sftmsk; - /* Do not store the reserved bbt blocks ! */ + /* Do not store the reserved bbt blocks! 
*/ buf[offs + (i >> sft)] &= ~(msk[dat & 0x03] << sftcnt); dat >>= 2; @@ -884,8 +839,8 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, if (res < 0) goto outerr; - printk(KERN_DEBUG "Bad block table written to 0x%012llx, version " - "0x%02X\n", (unsigned long long)to, td->version[chip]); + pr_info("Bad block table written to 0x%012llx, version 0x%02X\n", + (unsigned long long)to, td->version[chip]); /* Mark it as used */ td->pages[chip] = page; @@ -893,19 +848,18 @@ static int write_bbt(struct mtd_info *mtd, uint8_t *buf, return 0; outerr: - printk(KERN_WARNING - "nand_bbt: Error while writing bad block table %d\n", res); + pr_warn("nand_bbt: error while writing bad block table %d\n", res); return res; } /** * nand_memory_bbt - [GENERIC] create a memory based bad block table - * @mtd: MTD device structure - * @bd: descriptor for the good/bad block search pattern + * @mtd: MTD device structure + * @bd: descriptor for the good/bad block search pattern * - * The function creates a memory based bbt by scanning the device - * for manufacturer / software marked good / bad blocks -*/ + * The function creates a memory based bbt by scanning the device for + * manufacturer / software marked good / bad blocks. + */ static inline int nand_memory_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) { struct nand_chip *this = mtd->priv; @@ -916,25 +870,24 @@ static inline int nand_memory_bbt(struct mtd_info *mtd, struct nand_bbt_descr *b /** * check_create - [GENERIC] create and write bbt(s) if necessary - * @mtd: MTD device structure - * @buf: temporary buffer - * @bd: descriptor for the good/bad block search pattern + * @mtd: MTD device structure + * @buf: temporary buffer + * @bd: descriptor for the good/bad block search pattern * - * The function checks the results of the previous call to read_bbt - * and creates / updates the bbt(s) if necessary - * Creation is necessary if no bbt was found for the chip/device - * Update is necessary if one of the tables is missing or the - * version nr. of one table is less than the other -*/ + * The function checks the results of the previous call to read_bbt and creates + * / updates the bbt(s) if necessary. Creation is necessary if no bbt was found + * for the chip/device. Update is necessary if one of the tables is missing or + * the version nr. of one table is less than the other. + */ static int check_create(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *bd) { - int i, chips, writeops, chipsel, res; + int i, chips, writeops, create, chipsel, res, res2; struct nand_chip *this = mtd->priv; struct nand_bbt_descr *td = this->bbt_td; struct nand_bbt_descr *md = this->bbt_md; struct nand_bbt_descr *rd, *rd2; - /* Do we have a bbt per chip ? */ + /* Do we have a bbt per chip? */ if (td->options & NAND_BBT_PERCHIP) chips = this->numchips; else @@ -942,86 +895,98 @@ static int check_create(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_desc for (i = 0; i < chips; i++) { writeops = 0; + create = 0; rd = NULL; rd2 = NULL; - /* Per chip or per device ? */ + res = res2 = 0; + /* Per chip or per device? */ chipsel = (td->options & NAND_BBT_PERCHIP) ? i : -1; - /* Mirrored table available ? */ + /* Mirrored table available? 
*/ if (md) { if (td->pages[i] == -1 && md->pages[i] == -1) { + create = 1; writeops = 0x03; - goto create; - } - - if (td->pages[i] == -1) { + } else if (td->pages[i] == -1) { rd = md; - td->version[i] = md->version[i]; - writeops = 1; - goto writecheck; - } - - if (md->pages[i] == -1) { + writeops = 0x01; + } else if (md->pages[i] == -1) { rd = td; - md->version[i] = td->version[i]; - writeops = 2; - goto writecheck; - } - - if (td->version[i] == md->version[i]) { + writeops = 0x02; + } else if (td->version[i] == md->version[i]) { rd = td; if (!(td->options & NAND_BBT_VERSION)) rd2 = md; - goto writecheck; - } - - if (((int8_t) (td->version[i] - md->version[i])) > 0) { + } else if (((int8_t)(td->version[i] - md->version[i])) > 0) { rd = td; - md->version[i] = td->version[i]; - writeops = 2; + writeops = 0x02; } else { rd = md; - td->version[i] = md->version[i]; - writeops = 1; + writeops = 0x01; } - - goto writecheck; - } else { if (td->pages[i] == -1) { + create = 1; writeops = 0x01; - goto create; + } else { + rd = td; } - rd = td; - goto writecheck; } - create: - /* Create the bad block table by scanning the device ? */ - if (!(td->options & NAND_BBT_CREATE)) - continue; - /* Create the table in memory by scanning the chip(s) */ - if (!(this->options & NAND_CREATE_EMPTY_BBT)) - create_bbt(mtd, buf, bd, chipsel); - - td->version[i] = 1; - if (md) - md->version[i] = 1; - writecheck: - /* read back first ? */ - if (rd) - read_abs_bbt(mtd, buf, rd, chipsel); - /* If they weren't versioned, read both. */ - if (rd2) - read_abs_bbt(mtd, buf, rd2, chipsel); - - /* Write the bad block table to the device ? */ + if (create) { + /* Create the bad block table by scanning the device? */ + if (!(td->options & NAND_BBT_CREATE)) + continue; + + /* Create the table in memory by scanning the chip(s) */ + if (!(this->bbt_options & NAND_BBT_CREATE_EMPTY)) + create_bbt(mtd, buf, bd, chipsel); + + td->version[i] = 1; + if (md) + md->version[i] = 1; + } + + /* Read back first? */ + if (rd) { + res = read_abs_bbt(mtd, buf, rd, chipsel); + if (mtd_is_eccerr(res)) { + /* Mark table as invalid */ + rd->pages[i] = -1; + rd->version[i] = 0; + i--; + continue; + } + } + /* If they weren't versioned, read both */ + if (rd2) { + res2 = read_abs_bbt(mtd, buf, rd2, chipsel); + if (mtd_is_eccerr(res2)) { + /* Mark table as invalid */ + rd2->pages[i] = -1; + rd2->version[i] = 0; + i--; + continue; + } + } + + /* Scrub the flash table(s)? */ + if (mtd_is_bitflip(res) || mtd_is_bitflip(res2)) + writeops = 0x03; + + /* Update version numbers before writing */ + if (md) { + td->version[i] = max(td->version[i], md->version[i]); + md->version[i] = td->version[i]; + } + + /* Write the bad block table to the device? */ if ((writeops & 0x01) && (td->options & NAND_BBT_WRITE)) { res = write_bbt(mtd, buf, td, md, chipsel); if (res < 0) return res; } - /* Write the mirror bad block table to the device ? */ + /* Write the mirror bad block table to the device? */ if ((writeops & 0x02) && md && (md->options & NAND_BBT_WRITE)) { res = write_bbt(mtd, buf, md, td, chipsel); if (res < 0) @@ -1033,20 +998,19 @@ static int check_create(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_desc /** * mark_bbt_regions - [GENERIC] mark the bad block table regions - * @mtd: MTD device structure - * @td: bad block table descriptor + * @mtd: MTD device structure + * @td: bad block table descriptor * - * The bad block table regions are marked as "bad" to prevent - * accidental erasures / writes. The regions are identified by - * the mark 0x02. 
-*/ + * The bad block table regions are marked as "bad" to prevent accidental + * erasures / writes. The regions are identified by the mark 0x02. + */ static void mark_bbt_region(struct mtd_info *mtd, struct nand_bbt_descr *td) { struct nand_chip *this = mtd->priv; int i, j, chips, block, nrblocks, update; uint8_t oldval, newval; - /* Do we have a bbt per chip ? */ + /* Do we have a bbt per chip? */ if (td->options & NAND_BBT_PERCHIP) { chips = this->numchips; nrblocks = (int)(this->chipsize >> this->bbt_erase_shift); @@ -1083,9 +1047,11 @@ static void mark_bbt_region(struct mtd_info *mtd, struct nand_bbt_descr *td) update = 1; block += 2; } - /* If we want reserved blocks to be recorded to flash, and some - new ones have been marked, then we need to update the stored - bbts. This should only happen once. */ + /* + * If we want reserved blocks to be recorded to flash, and some + * new ones have been marked, then we need to update the stored + * bbts. This should only happen once. + */ if (update && td->reserved_block_code) nand_update_bbt(mtd, (loff_t)(block - 2) << (this->bbt_erase_shift - 1)); } @@ -1093,8 +1059,8 @@ static void mark_bbt_region(struct mtd_info *mtd, struct nand_bbt_descr *td) /** * verify_bbt_descr - verify the bad block description - * @mtd: MTD device structure - * @bd: the table to verify + * @mtd: MTD device structure + * @bd: the table to verify * * This functions performs a few sanity checks on the bad block description * table. @@ -1112,16 +1078,16 @@ static void verify_bbt_descr(struct mtd_info *mtd, struct nand_bbt_descr *bd) pattern_len = bd->len; bits = bd->options & NAND_BBT_NRBITS_MSK; - BUG_ON((this->options & NAND_USE_FLASH_BBT_NO_OOB) && - !(this->options & NAND_USE_FLASH_BBT)); + BUG_ON((this->bbt_options & NAND_BBT_NO_OOB) && + !(this->bbt_options & NAND_BBT_USE_FLASH)); BUG_ON(!bits); if (bd->options & NAND_BBT_VERSION) pattern_len++; if (bd->options & NAND_BBT_NO_OOB) { - BUG_ON(!(this->options & NAND_USE_FLASH_BBT)); - BUG_ON(!(this->options & NAND_USE_FLASH_BBT_NO_OOB)); + BUG_ON(!(this->bbt_options & NAND_BBT_USE_FLASH)); + BUG_ON(!(this->bbt_options & NAND_BBT_NO_OOB)); BUG_ON(bd->offs); if (bd->options & NAND_BBT_VERSION) BUG_ON(bd->veroffs != bd->len); @@ -1141,18 +1107,16 @@ static void verify_bbt_descr(struct mtd_info *mtd, struct nand_bbt_descr *bd) /** * nand_scan_bbt - [NAND Interface] scan, find, read and maybe create bad block table(s) - * @mtd: MTD device structure - * @bd: descriptor for the good/bad block search pattern - * - * The function checks, if a bad block table(s) is/are already - * available. If not it scans the device for manufacturer - * marked good / bad blocks and writes the bad block table(s) to - * the selected place. + * @mtd: MTD device structure + * @bd: descriptor for the good/bad block search pattern * - * The bad block table memory is allocated here. It must be freed - * by calling the nand_free_bbt function. + * The function checks, if a bad block table(s) is/are already available. If + * not it scans the device for manufacturer marked good / bad blocks and writes + * the bad block table(s) to the selected place. * -*/ + * The bad block table memory is allocated here. It must be freed by calling + * the nand_free_bbt function. 
+ */ int nand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) { struct nand_chip *this = mtd->priv; @@ -1162,19 +1126,21 @@ int nand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) struct nand_bbt_descr *md = this->bbt_md; len = mtd->size >> (this->bbt_erase_shift + 2); - /* Allocate memory (2bit per block) and clear the memory bad block table */ + /* + * Allocate memory (2bit per block) and clear the memory bad block + * table. + */ this->bbt = kzalloc(len, GFP_KERNEL); - if (!this->bbt) { - printk(KERN_ERR "nand_scan_bbt: Out of memory\n"); + if (!this->bbt) return -ENOMEM; - } - /* If no primary table decriptor is given, scan the device - * to build a memory based bad block table + /* + * If no primary table decriptor is given, scan the device to build a + * memory based bad block table. */ if (!td) { if ((res = nand_memory_bbt(mtd, bd))) { - printk(KERN_ERR "nand_bbt: Can't scan flash and build the RAM-based BBT\n"); + pr_err("nand_bbt: can't scan flash and build the RAM-based BBT\n"); kfree(this->bbt); this->bbt = NULL; } @@ -1188,13 +1154,12 @@ int nand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) len += (len >> this->page_shift) * mtd->oobsize; buf = vmalloc(len); if (!buf) { - printk(KERN_ERR "nand_bbt: Out of memory\n"); kfree(this->bbt); this->bbt = NULL; return -ENOMEM; } - /* Is the bbt at a given page ? */ + /* Is the bbt at a given page? */ if (td->options & NAND_BBT_ABSPAGE) { res = read_abs_bbts(mtd, buf, td, md); } else { @@ -1216,15 +1181,15 @@ int nand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) /** * nand_update_bbt - [NAND Interface] update bad block table(s) - * @mtd: MTD device structure - * @offs: the offset of the newly marked block + * @mtd: MTD device structure + * @offs: the offset of the newly marked block * - * The function updates the bad block table(s) -*/ + * The function updates the bad block table(s). + */ int nand_update_bbt(struct mtd_info *mtd, loff_t offs) { struct nand_chip *this = mtd->priv; - int len, res = 0, writeops = 0; + int len, res = 0; int chip, chipsel; uint8_t *buf; struct nand_bbt_descr *td = this->bbt_td; @@ -1237,14 +1202,10 @@ int nand_update_bbt(struct mtd_info *mtd, loff_t offs) len = (1 << this->bbt_erase_shift); len += (len >> this->page_shift) * mtd->oobsize; buf = kmalloc(len, GFP_KERNEL); - if (!buf) { - printk(KERN_ERR "nand_update_bbt: Out of memory\n"); + if (!buf) return -ENOMEM; - } - - writeops = md != NULL ? 0x03 : 0x01; - /* Do we have a bbt per chip ? */ + /* Do we have a bbt per chip? */ if (td->options & NAND_BBT_PERCHIP) { chip = (int)(offs >> this->chip_shift); chipsel = chip; @@ -1257,14 +1218,14 @@ int nand_update_bbt(struct mtd_info *mtd, loff_t offs) if (md) md->version[chip]++; - /* Write the bad block table to the device ? */ - if ((writeops & 0x01) && (td->options & NAND_BBT_WRITE)) { + /* Write the bad block table to the device? */ + if (td->options & NAND_BBT_WRITE) { res = write_bbt(mtd, buf, td, md, chipsel); if (res < 0) goto out; } - /* Write the mirror bad block table to the device ? */ - if ((writeops & 0x02) && md && (md->options & NAND_BBT_WRITE)) { + /* Write the mirror bad block table to the device? */ + if (md && (md->options & NAND_BBT_WRITE)) { res = write_bbt(mtd, buf, md, td, chipsel); } @@ -1273,8 +1234,10 @@ int nand_update_bbt(struct mtd_info *mtd, loff_t offs) return res; } -/* Define some generic bad / good block scan pattern which are used - * while scanning a device for factory marked good / bad blocks. 
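A note on the in-memory table that nand_scan_bbt() allocates above: it stores two bits per erase block, four blocks per byte, which is why the allocation is mtd->size >> (bbt_erase_shift + 2) bytes and why nand_isbad_bbt() extracts the code with shift-and-mask arithmetic. Code 0x00 stands for a good block, and the table regions themselves are tagged 0x02, as the mark_bbt_region() comment notes. A rough user-space sketch of the packing; the names and the code values used in main() are illustrative:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_BLOCKS	256		/* toy device: 256 erase blocks */
static uint8_t bbt[NUM_BLOCKS / 4];	/* two bits per block */

static void bbt_mark(unsigned int block, uint8_t code)
{
	unsigned int shift = (block & 0x03) << 1;

	bbt[block >> 2] &= ~(0x03 << shift);
	bbt[block >> 2] |= (code & 0x03) << shift;
}

static uint8_t bbt_code(unsigned int block)
{
	return (bbt[block >> 2] >> ((block & 0x03) << 1)) & 0x03;
}

int main(void)
{
	memset(bbt, 0x00, sizeof(bbt));	/* everything good to start with */
	bbt_mark(5, 0x03);		/* some bad-block code */
	bbt_mark(8, 0x02);		/* the mark used for BBT regions */
	printf("%u %u %u\n", bbt_code(5), bbt_code(8), bbt_code(9));	/* 3 2 0 */
	return 0;
}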
*/ +/* + * Define some generic bad / good block scan pattern which are used + * while scanning a device for factory marked good / bad blocks. + */ static uint8_t scan_ff_pattern[] = { 0xff, 0xff }; static uint8_t scan_agand_pattern[] = { 0x1C, 0x71, 0xC7, 0x1C, 0x71, 0xC7 }; @@ -1286,8 +1249,7 @@ static struct nand_bbt_descr agand_flashbased = { .pattern = scan_agand_pattern }; -/* Generic flash bbt decriptors -*/ +/* Generic flash bbt descriptors */ static uint8_t bbt_pattern[] = {'B', 'b', 't', '0' }; static uint8_t mirror_pattern[] = {'1', 't', 'b', 'B' }; @@ -1331,31 +1293,27 @@ static struct nand_bbt_descr bbt_mirror_no_bbt_descr = { .pattern = mirror_pattern }; -#define BBT_SCAN_OPTIONS (NAND_BBT_SCANLASTPAGE | NAND_BBT_SCAN2NDPAGE | \ - NAND_BBT_SCANBYTE1AND6) +#define BADBLOCK_SCAN_MASK (~NAND_BBT_NO_OOB) /** - * nand_create_default_bbt_descr - [Internal] Creates a BBT descriptor structure - * @this: NAND chip to create descriptor for + * nand_create_badblock_pattern - [INTERN] Creates a BBT descriptor structure + * @this: NAND chip to create descriptor for * * This function allocates and initializes a nand_bbt_descr for BBM detection - * based on the properties of "this". The new descriptor is stored in + * based on the properties of @this. The new descriptor is stored in * this->badblock_pattern. Thus, this->badblock_pattern should be NULL when * passed to this function. - * */ -static int nand_create_default_bbt_descr(struct nand_chip *this) +static int nand_create_badblock_pattern(struct nand_chip *this) { struct nand_bbt_descr *bd; if (this->badblock_pattern) { - printk(KERN_WARNING "BBT descr already allocated; not replacing.\n"); + pr_warn("Bad block pattern already allocated; not replacing\n"); return -EINVAL; } bd = kzalloc(sizeof(*bd), GFP_KERNEL); - if (!bd) { - printk(KERN_ERR "nand_create_default_bbt_descr: Out of memory\n"); + if (!bd) return -ENOMEM; - } - bd->options = this->options & BBT_SCAN_OPTIONS; + bd->options = this->bbt_options & BADBLOCK_SCAN_MASK; bd->offs = this->badblockpos; bd->len = (this->options & NAND_BUSWIDTH_16) ? 2 : 1; bd->pattern = scan_ff_pattern; @@ -1366,22 +1324,20 @@ static int nand_create_default_bbt_descr(struct nand_chip *this) /** * nand_default_bbt - [NAND Interface] Select a default bad block table for the device - * @mtd: MTD device structure - * - * This function selects the default bad block table - * support for the device and calls the nand_scan_bbt function + * @mtd: MTD device structure * -*/ + * This function selects the default bad block table support for the device and + * calls the nand_scan_bbt function. + */ int nand_default_bbt(struct mtd_info *mtd) { struct nand_chip *this = mtd->priv; - /* Default for AG-AND. We must use a flash based - * bad block table as the devices have factory marked - * _good_ blocks. Erasing those blocks leads to loss - * of the good / bad information, so we _must_ store - * this information in a good / bad table during - * startup + /* + * Default for AG-AND. We must use a flash based bad block table as the + * devices have factory marked _good_ blocks. Erasing those blocks + * leads to loss of the good / bad information, so we _must_ store this + * information in a good / bad table during startup. 
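Flash-based tables like the one this comment insists on are now requested through the new chip->bbt_options field instead of chip->options, which is the conversion visible in the driver hunks further down (pasemi_nand, nandsim, s3c2410 and others): NAND_USE_FLASH_BBT becomes NAND_BBT_USE_FLASH and NAND_USE_FLASH_BBT_NO_OOB becomes NAND_BBT_NO_OOB. A hedged fragment of what a converted board driver sets; example_chip_setup is illustrative and not part of the patch:

#include <linux/mtd/nand.h>

static void example_chip_setup(struct nand_chip *chip)
{
	/* Previously: chip->options |= NAND_USE_FLASH_BBT; */
	chip->bbt_options |= NAND_BBT_USE_FLASH;

	/*
	 * Previously NAND_USE_FLASH_BBT_NO_OOB in chip->options; stores the
	 * table pattern in the data area rather than in OOB.
	 */
	chip->bbt_options |= NAND_BBT_NO_OOB;

	/* Flags unrelated to the BBT stay in chip->options, for example: */
	chip->options |= NAND_SKIP_BBTSCAN;
}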
*/ if (this->options & NAND_IS_AND) { /* Use the default pattern descriptors */ @@ -1389,15 +1345,15 @@ int nand_default_bbt(struct mtd_info *mtd) this->bbt_td = &bbt_main_descr; this->bbt_md = &bbt_mirror_descr; } - this->options |= NAND_USE_FLASH_BBT; + this->bbt_options |= NAND_BBT_USE_FLASH; return nand_scan_bbt(mtd, &agand_flashbased); } - /* Is a flash based bad block table requested ? */ - if (this->options & NAND_USE_FLASH_BBT) { + /* Is a flash based bad block table requested? */ + if (this->bbt_options & NAND_BBT_USE_FLASH) { /* Use the default pattern descriptors */ if (!this->bbt_td) { - if (this->options & NAND_USE_FLASH_BBT_NO_OOB) { + if (this->bbt_options & NAND_BBT_NO_OOB) { this->bbt_td = &bbt_main_no_bbt_descr; this->bbt_md = &bbt_mirror_no_bbt_descr; } else { @@ -1411,18 +1367,17 @@ int nand_default_bbt(struct mtd_info *mtd) } if (!this->badblock_pattern) - nand_create_default_bbt_descr(this); + nand_create_badblock_pattern(this); return nand_scan_bbt(mtd, this->badblock_pattern); } /** * nand_isbad_bbt - [NAND Interface] Check if a block is bad - * @mtd: MTD device structure - * @offs: offset in the device - * @allowbbt: allow access to bad block table region - * -*/ + * @mtd: MTD device structure + * @offs: offset in the device + * @allowbbt: allow access to bad block table region + */ int nand_isbad_bbt(struct mtd_info *mtd, loff_t offs, int allowbbt) { struct nand_chip *this = mtd->priv; @@ -1433,8 +1388,9 @@ int nand_isbad_bbt(struct mtd_info *mtd, loff_t offs, int allowbbt) block = (int)(offs >> (this->bbt_erase_shift - 1)); res = (this->bbt[block >> 3] >> (block & 0x06)) & 0x03; - DEBUG(MTD_DEBUG_LEVEL2, "nand_isbad_bbt(): bbt info for offs 0x%08x: (block %d) 0x%02x\n", - (unsigned int)offs, block >> 1, res); + pr_debug("nand_isbad_bbt(): bbt info for offs 0x%08x: " + "(block %d) 0x%02x\n", + (unsigned int)offs, block >> 1, res); switch ((int)res) { case 0x00: diff --git a/drivers/mtd/nand/nand_bch.c b/drivers/mtd/nand/nand_bch.c index 0f931e757116..3803e0bba23b 100644 --- a/drivers/mtd/nand/nand_bch.c +++ b/drivers/mtd/nand/nand_bch.c @@ -93,8 +93,8 @@ int nand_bch_correct_data(struct mtd_info *mtd, unsigned char *buf, buf[errloc[i] >> 3] ^= (1 << (errloc[i] & 7)); /* else error in ecc, no action needed */ - DEBUG(MTD_DEBUG_LEVEL0, "%s: corrected bitflip %u\n", - __func__, errloc[i]); + pr_debug("%s: corrected bitflip %u\n", __func__, + errloc[i]); } } else if (count < 0) { printk(KERN_ERR "ecc unrecoverable error\n"); diff --git a/drivers/mtd/nand/nand_ecc.c b/drivers/mtd/nand/nand_ecc.c index 271b8e735e8f..b7cfe0d37121 100644 --- a/drivers/mtd/nand/nand_ecc.c +++ b/drivers/mtd/nand/nand_ecc.c @@ -110,7 +110,7 @@ static const char bitsperbyte[256] = { /* * addressbits is a lookup table to filter out the bits from the xor-ed - * ecc data that identify the faulty location. + * ECC data that identify the faulty location. * this is only used for repairing parity * see the comments in nand_correct_data for more details */ @@ -153,7 +153,7 @@ static const char addressbits[256] = { * __nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256/512-byte * block * @buf: input buffer with raw data - * @eccsize: data bytes per ecc step (256 or 512) + * @eccsize: data bytes per ECC step (256 or 512) * @code: output buffer with ECC */ void __nand_calculate_ecc(const unsigned char *buf, unsigned int eccsize, @@ -348,7 +348,7 @@ void __nand_calculate_ecc(const unsigned char *buf, unsigned int eccsize, rp17 = (par ^ rp16) & 0xff; /* - * Finally calculate the ecc bits. 
+ * Finally calculate the ECC bits. * Again here it might seem that there are performance optimisations * possible, but benchmarks showed that on the system this is developed * the code below is the fastest @@ -436,7 +436,7 @@ EXPORT_SYMBOL(nand_calculate_ecc); * @buf: raw data read from the chip * @read_ecc: ECC from the chip * @calc_ecc: the ECC calculated from raw data - * @eccsize: data bytes per ecc step (256 or 512) + * @eccsize: data bytes per ECC step (256 or 512) * * Detect and correct a 1 bit error for eccsize byte block */ @@ -505,7 +505,7 @@ int __nand_correct_data(unsigned char *buf, } /* count nr of bits; use table lookup, faster than calculating it */ if ((bitsperbyte[b0] + bitsperbyte[b1] + bitsperbyte[b2]) == 1) - return 1; /* error in ecc data; no action needed */ + return 1; /* error in ECC data; no action needed */ printk(KERN_ERR "uncorrectable error : "); return -1; diff --git a/drivers/mtd/nand/nandsim.c b/drivers/mtd/nand/nandsim.c index 357e8c5252a8..34c03be77301 100644 --- a/drivers/mtd/nand/nandsim.c +++ b/drivers/mtd/nand/nandsim.c @@ -2273,9 +2273,9 @@ static int __init ns_init_module(void) switch (bbt) { case 2: - chip->options |= NAND_USE_FLASH_BBT_NO_OOB; + chip->bbt_options |= NAND_BBT_NO_OOB; case 1: - chip->options |= NAND_USE_FLASH_BBT; + chip->bbt_options |= NAND_BBT_USE_FLASH; case 0: break; default: diff --git a/drivers/mtd/nand/ndfc.c b/drivers/mtd/nand/ndfc.c index ea2dea8a9c88..ee1713907b92 100644 --- a/drivers/mtd/nand/ndfc.c +++ b/drivers/mtd/nand/ndfc.c @@ -42,7 +42,6 @@ struct ndfc_controller { struct nand_chip chip; int chip_select; struct nand_hw_control ndfc_control; - struct mtd_partition *parts; }; static struct ndfc_controller ndfc_ctrl[NDFC_MAX_CS]; @@ -159,13 +158,9 @@ static int ndfc_verify_buf(struct mtd_info *mtd, const uint8_t *buf, int len) static int ndfc_chip_init(struct ndfc_controller *ndfc, struct device_node *node) { -#ifdef CONFIG_MTD_CMDLINE_PARTS - static const char *part_types[] = { "cmdlinepart", NULL }; -#else - static const char *part_types[] = { NULL }; -#endif struct device_node *flash_np; struct nand_chip *chip = &ndfc->chip; + struct mtd_part_parser_data ppdata; int ret; chip->IO_ADDR_R = ndfc->ndfcbase + NDFC_DATA; @@ -193,6 +188,7 @@ static int ndfc_chip_init(struct ndfc_controller *ndfc, if (!flash_np) return -ENODEV; + ppdata->of_node = flash_np; ndfc->mtd.name = kasprintf(GFP_KERNEL, "%s.%s", dev_name(&ndfc->ofdev->dev), flash_np->name); if (!ndfc->mtd.name) { @@ -204,18 +200,7 @@ static int ndfc_chip_init(struct ndfc_controller *ndfc, if (ret) goto err; - ret = parse_mtd_partitions(&ndfc->mtd, part_types, &ndfc->parts, 0); - if (ret < 0) - goto err; - - if (ret == 0) { - ret = of_mtd_parse_partitions(&ndfc->ofdev->dev, flash_np, - &ndfc->parts); - if (ret < 0) - goto err; - } - - ret = mtd_device_register(&ndfc->mtd, ndfc->parts, ret); + ret = mtd_device_parse_register(&ndfc->mtd, NULL, &ppdata, NULL, 0); err: of_node_put(flash_np); @@ -288,6 +273,7 @@ static int __devexit ndfc_remove(struct platform_device *ofdev) struct ndfc_controller *ndfc = dev_get_drvdata(&ofdev->dev); nand_release(&ndfc->mtd); + kfree(ndfc->mtd.name); return 0; } diff --git a/drivers/mtd/nand/nomadik_nand.c b/drivers/mtd/nand/nomadik_nand.c index b6a5c86ab31e..b463ecfb4c1a 100644 --- a/drivers/mtd/nand/nomadik_nand.c +++ b/drivers/mtd/nand/nomadik_nand.c @@ -187,6 +187,7 @@ static int nomadik_nand_remove(struct platform_device *pdev) pdata->exit(); if (host) { + nand_release(&host->mtd); iounmap(host->cmd_va); iounmap(host->data_va); 
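Returning briefly to the nand_ecc.c hunks above: the software ECC correction path XORs the stored and recalculated ECC bytes and classifies the result with the bitsperbyte popcount table. An all-zero result means no error, and exactly one set bit (the case shown above returning 1) means the flip hit the ECC bytes themselves, so the data needs no action. A small user-space sketch of that classification; it deliberately leaves out the single-data-bit correction branch handled via addressbits:

#include <stdint.h>
#include <stdio.h>

static char bitsperbyte[256];	/* the driver uses a static const table */

static void init_bitsperbyte(void)
{
	int i, j;

	for (i = 0; i < 256; i++)
		for (j = 0; j < 8; j++)
			bitsperbyte[i] += (i >> j) & 1;
}

int main(void)
{
	uint8_t read_ecc[3] = { 0x5a, 0x3c, 0xff };	/* ECC read from flash */
	uint8_t calc_ecc[3] = { 0x5a, 0x3c, 0xfb };	/* ECC computed from data */
	uint8_t b0 = read_ecc[0] ^ calc_ecc[0];
	uint8_t b1 = read_ecc[1] ^ calc_ecc[1];
	uint8_t b2 = read_ecc[2] ^ calc_ecc[2];

	init_bitsperbyte();
	if ((b0 | b1 | b2) == 0)
		printf("no error\n");
	else if (bitsperbyte[b0] + bitsperbyte[b1] + bitsperbyte[b2] == 1)
		printf("single bit error in the ECC bytes, data is fine\n");
	else
		printf("data error or uncorrectable, see __nand_correct_data()\n");
	return 0;
}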
iounmap(host->addr_va); diff --git a/drivers/mtd/nand/nuc900_nand.c b/drivers/mtd/nand/nuc900_nand.c index 9c30a0b03171..fa8faedfad6e 100644 --- a/drivers/mtd/nand/nuc900_nand.c +++ b/drivers/mtd/nand/nuc900_nand.c @@ -339,6 +339,7 @@ static int __devexit nuc900_nand_remove(struct platform_device *pdev) struct nuc900_nand *nuc900_nand = platform_get_drvdata(pdev); struct resource *res; + nand_release(&nuc900_nand->mtd); iounmap(nuc900_nand->reg); res = platform_get_resource(pdev, IORESOURCE_MEM, 0); diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c index ec22a5aab038..f745f00f3167 100644 --- a/drivers/mtd/nand/omap2.c +++ b/drivers/mtd/nand/omap2.c @@ -95,8 +95,6 @@ #define P4e_s(a) (TF(a & NAND_Ecc_P4e) << 0) #define P4o_s(a) (TF(a & NAND_Ecc_P4o) << 1) -static const char *part_probes[] = { "cmdlinepart", NULL }; - /* oob info generated runtime depending on ecc algorithm and layout selected */ static struct nand_ecclayout omap_oobinfo; /* Define some generic bad / good block scan pattern which are used @@ -115,7 +113,6 @@ struct omap_nand_info { struct nand_hw_control controller; struct omap_nand_platform_data *pdata; struct mtd_info mtd; - struct mtd_partition *parts; struct nand_chip nand; struct platform_device *pdev; @@ -745,12 +742,12 @@ static int omap_compare_ecc(u8 *ecc_data1, /* read from NAND memory */ case 1: /* Uncorrectable error */ - DEBUG(MTD_DEBUG_LEVEL0, "ECC UNCORRECTED_ERROR 1\n"); + pr_debug("ECC UNCORRECTED_ERROR 1\n"); return -1; case 11: /* UN-Correctable error */ - DEBUG(MTD_DEBUG_LEVEL0, "ECC UNCORRECTED_ERROR B\n"); + pr_debug("ECC UNCORRECTED_ERROR B\n"); return -1; case 12: @@ -767,8 +764,8 @@ static int omap_compare_ecc(u8 *ecc_data1, /* read from NAND memory */ find_bit = (ecc_bit[5] << 2) + (ecc_bit[3] << 1) + ecc_bit[1]; - DEBUG(MTD_DEBUG_LEVEL0, "Correcting single bit ECC error at " - "offset: %d, bit: %d\n", find_byte, find_bit); + pr_debug("Correcting single bit ECC error at offset: " + "%d, bit: %d\n", find_byte, find_bit); page_data[find_byte] ^= (1 << find_bit); @@ -780,7 +777,7 @@ static int omap_compare_ecc(u8 *ecc_data1, /* read from NAND memory */ ecc_data2[2] == 0) return 0; } - DEBUG(MTD_DEBUG_LEVEL0, "UNCORRECTED_ERROR default\n"); + pr_debug("UNCORRECTED_ERROR default\n"); return -1; } } @@ -1104,13 +1101,8 @@ static int __devinit omap_nand_probe(struct platform_device *pdev) goto out_release_mem_region; } - err = parse_mtd_partitions(&info->mtd, part_probes, &info->parts, 0); - if (err > 0) - mtd_device_register(&info->mtd, info->parts, err); - else if (pdata->parts) - mtd_device_register(&info->mtd, pdata->parts, pdata->nr_parts); - else - mtd_device_register(&info->mtd, NULL, 0); + mtd_device_parse_register(&info->mtd, NULL, 0, + pdata->parts, pdata->nr_parts); platform_set_drvdata(pdev, &info->mtd); diff --git a/drivers/mtd/nand/orion_nand.c b/drivers/mtd/nand/orion_nand.c index 7794d0680f91..29f505adaf84 100644 --- a/drivers/mtd/nand/orion_nand.c +++ b/drivers/mtd/nand/orion_nand.c @@ -21,8 +21,6 @@ #include <mach/hardware.h> #include <plat/orion_nand.h> -static const char *part_probes[] = { "cmdlinepart", NULL }; - static void orion_nand_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl) { struct nand_chip *nc = mtd->priv; @@ -81,8 +79,6 @@ static int __init orion_nand_probe(struct platform_device *pdev) struct resource *res; void __iomem *io_base; int ret = 0; - struct mtd_partition *partitions = NULL; - int num_part = 0; nc = kzalloc(sizeof(struct nand_chip) + sizeof(struct mtd_info), GFP_KERNEL); if (!nc) { @@ 
-132,17 +128,9 @@ static int __init orion_nand_probe(struct platform_device *pdev) goto no_dev; } -#ifdef CONFIG_MTD_CMDLINE_PARTS mtd->name = "orion_nand"; - num_part = parse_mtd_partitions(mtd, part_probes, &partitions, 0); -#endif - /* If cmdline partitions have been passed, let them be used */ - if (num_part <= 0) { - num_part = board->nr_parts; - partitions = board->parts; - } - - ret = mtd_device_register(mtd, partitions, num_part); + ret = mtd_device_parse_register(mtd, NULL, 0, + board->parts, board->nr_parts); if (ret) { nand_release(mtd); goto no_dev; diff --git a/drivers/mtd/nand/pasemi_nand.c b/drivers/mtd/nand/pasemi_nand.c index b1aa41b8a4eb..a97264ececdb 100644 --- a/drivers/mtd/nand/pasemi_nand.c +++ b/drivers/mtd/nand/pasemi_nand.c @@ -155,7 +155,8 @@ static int __devinit pasemi_nand_probe(struct platform_device *ofdev) chip->ecc.mode = NAND_ECC_SOFT; /* Enable the following for a flash based bad block table */ - chip->options = NAND_USE_FLASH_BBT | NAND_NO_AUTOINCR; + chip->options = NAND_NO_AUTOINCR; + chip->bbt_options = NAND_BBT_USE_FLASH; /* Scan to find existence of the device */ if (nand_scan(pasemi_nand_mtd, 1)) { diff --git a/drivers/mtd/nand/plat_nand.c b/drivers/mtd/nand/plat_nand.c index 633c04bf76f6..ea8e1234e0e2 100644 --- a/drivers/mtd/nand/plat_nand.c +++ b/drivers/mtd/nand/plat_nand.c @@ -21,8 +21,6 @@ struct plat_nand_data { struct nand_chip chip; struct mtd_info mtd; void __iomem *io_base; - int nr_parts; - struct mtd_partition *parts; }; /* @@ -79,6 +77,7 @@ static int __devinit plat_nand_probe(struct platform_device *pdev) data->chip.read_buf = pdata->ctrl.read_buf; data->chip.chip_delay = pdata->chip.chip_delay; data->chip.options |= pdata->chip.options; + data->chip.bbt_options |= pdata->chip.bbt_options; data->chip.ecc.hwctl = pdata->ctrl.hwcontrol; data->chip.ecc.layout = pdata->chip.ecclayout; @@ -99,23 +98,9 @@ static int __devinit plat_nand_probe(struct platform_device *pdev) goto out; } - if (pdata->chip.part_probe_types) { - err = parse_mtd_partitions(&data->mtd, - pdata->chip.part_probe_types, - &data->parts, 0); - if (err > 0) { - mtd_device_register(&data->mtd, data->parts, err); - return 0; - } - } - if (pdata->chip.set_parts) - pdata->chip.set_parts(data->mtd.size, &pdata->chip); - if (pdata->chip.partitions) { - data->parts = pdata->chip.partitions; - err = mtd_device_register(&data->mtd, data->parts, - pdata->chip.nr_partitions); - } else - err = mtd_device_register(&data->mtd, NULL, 0); + err = mtd_device_parse_register(&data->mtd, + pdata->chip.part_probe_types, 0, + pdata->chip.partitions, pdata->chip.nr_partitions); if (!err) return err; @@ -145,8 +130,6 @@ static int __devexit plat_nand_remove(struct platform_device *pdev) res = platform_get_resource(pdev, IORESOURCE_MEM, 0); nand_release(&data->mtd); - if (data->parts && data->parts != pdata->chip.partitions) - kfree(data->parts); if (pdata->ctrl.remove) pdata->ctrl.remove(pdev); iounmap(data->io_base); diff --git a/drivers/mtd/nand/ppchameleonevb.c b/drivers/mtd/nand/ppchameleonevb.c index 3bbb796b451c..7e52af51a198 100644 --- a/drivers/mtd/nand/ppchameleonevb.c +++ b/drivers/mtd/nand/ppchameleonevb.c @@ -99,8 +99,6 @@ static struct mtd_partition partition_info_evb[] = { #define NUM_PARTITIONS 1 -extern int parse_cmdline_partitions(struct mtd_info *master, struct mtd_partition **pparts, const char *mtd_id); - /* * hardware specific access to control-lines */ @@ -187,18 +185,12 @@ static int ppchameleonevb_device_ready(struct mtd_info *minfo) } #endif -const char *part_probes[] = 
{ "cmdlinepart", NULL }; -const char *part_probes_evb[] = { "cmdlinepart", NULL }; - /* * Main initialization routine */ static int __init ppchameleonevb_init(void) { struct nand_chip *this; - const char *part_type = 0; - int mtd_parts_nb = 0; - struct mtd_partition *mtd_parts = 0; void __iomem *ppchameleon_fio_base; void __iomem *ppchameleonevb_fio_base; @@ -281,24 +273,13 @@ static int __init ppchameleonevb_init(void) #endif ppchameleon_mtd->name = "ppchameleon-nand"; - mtd_parts_nb = parse_mtd_partitions(ppchameleon_mtd, part_probes, &mtd_parts, 0); - if (mtd_parts_nb > 0) - part_type = "command line"; - else - mtd_parts_nb = 0; - - if (mtd_parts_nb == 0) { - if (ppchameleon_mtd->size == NAND_SMALL_SIZE) - mtd_parts = partition_info_me; - else - mtd_parts = partition_info_hi; - mtd_parts_nb = NUM_PARTITIONS; - part_type = "static"; - } /* Register the partitions */ - printk(KERN_NOTICE "Using %s partition definition\n", part_type); - mtd_device_register(ppchameleon_mtd, mtd_parts, mtd_parts_nb); + mtd_device_parse_register(ppchameleon_mtd, NULL, 0, + ppchameleon_mtd->size == NAND_SMALL_SIZE ? + partition_info_me : + partition_info_hi, + NUM_PARTITIONS); nand_evb_init: /**************************** @@ -382,21 +363,13 @@ static int __init ppchameleonevb_init(void) } ppchameleonevb_mtd->name = NAND_EVB_MTD_NAME; - mtd_parts_nb = parse_mtd_partitions(ppchameleonevb_mtd, part_probes_evb, &mtd_parts, 0); - if (mtd_parts_nb > 0) - part_type = "command line"; - else - mtd_parts_nb = 0; - - if (mtd_parts_nb == 0) { - mtd_parts = partition_info_evb; - mtd_parts_nb = NUM_PARTITIONS; - part_type = "static"; - } /* Register the partitions */ - printk(KERN_NOTICE "Using %s partition definition\n", part_type); - mtd_device_register(ppchameleonevb_mtd, mtd_parts, mtd_parts_nb); + mtd_device_parse_register(ppchameleonevb_mtd, NULL, 0, + ppchameleon_mtd->size == NAND_SMALL_SIZE ? 
+ partition_info_me : + partition_info_hi, + NUM_PARTITIONS); /* Return happy */ return 0; diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c index 1fb3b3a80581..9eb7f879969e 100644 --- a/drivers/mtd/nand/pxa3xx_nand.c +++ b/drivers/mtd/nand/pxa3xx_nand.c @@ -110,6 +110,7 @@ enum { enum { STATE_IDLE = 0, + STATE_PREPARED, STATE_CMD_HANDLE, STATE_DMA_READING, STATE_DMA_WRITING, @@ -120,21 +121,40 @@ enum { STATE_READY, }; -struct pxa3xx_nand_info { - struct nand_chip nand_chip; +struct pxa3xx_nand_host { + struct nand_chip chip; + struct pxa3xx_nand_cmdset *cmdset; + struct mtd_info *mtd; + void *info_data; + + /* page size of attached chip */ + unsigned int page_size; + int use_ecc; + int cs; + /* calculated from pxa3xx_nand_flash data */ + unsigned int col_addr_cycles; + unsigned int row_addr_cycles; + size_t read_id_bytes; + + /* cached register value */ + uint32_t reg_ndcr; + uint32_t ndtr0cs0; + uint32_t ndtr1cs0; +}; + +struct pxa3xx_nand_info { struct nand_hw_control controller; struct platform_device *pdev; - struct pxa3xx_nand_cmdset *cmdset; struct clk *clk; void __iomem *mmio_base; unsigned long mmio_phys; + struct completion cmd_complete; unsigned int buf_start; unsigned int buf_count; - struct mtd_info *mtd; /* DMA information */ int drcmr_dat; int drcmr_cmd; @@ -142,44 +162,27 @@ struct pxa3xx_nand_info { unsigned char *data_buff; unsigned char *oob_buff; dma_addr_t data_buff_phys; - size_t data_buff_size; int data_dma_ch; struct pxa_dma_desc *data_desc; dma_addr_t data_desc_addr; - uint32_t reg_ndcr; - - /* saved column/page_addr during CMD_SEQIN */ - int seqin_column; - int seqin_page_addr; - - /* relate to the command */ + struct pxa3xx_nand_host *host[NUM_CHIP_SELECT]; unsigned int state; + int cs; int use_ecc; /* use HW ECC ? */ int use_dma; /* use DMA ? 
*/ int is_ready; unsigned int page_size; /* page size of attached chip */ unsigned int data_size; /* data size in FIFO */ + unsigned int oob_size; int retcode; - struct completion cmd_complete; /* generated NDCBx register values */ uint32_t ndcb0; uint32_t ndcb1; uint32_t ndcb2; - - /* timing calcuted from setting */ - uint32_t ndtr0cs0; - uint32_t ndtr1cs0; - - /* calculated from pxa3xx_nand_flash data */ - size_t oob_size; - size_t read_id_bytes; - - unsigned int col_addr_cycles; - unsigned int row_addr_cycles; }; static int use_dma = 1; @@ -225,7 +228,7 @@ static struct pxa3xx_nand_flash builtin_flash_types[] = { /* Define a default flash type setting serve as flash detecting only */ #define DEFAULT_FLASH_TYPE (&builtin_flash_types[0]) -const char *mtd_names[] = {"pxa3xx_nand-0", NULL}; +const char *mtd_names[] = {"pxa3xx_nand-0", "pxa3xx_nand-1", NULL}; #define NDTR0_tCH(c) (min((c), 7) << 19) #define NDTR0_tCS(c) (min((c), 7) << 16) @@ -241,9 +244,10 @@ const char *mtd_names[] = {"pxa3xx_nand-0", NULL}; /* convert nano-seconds to nand flash controller clock cycles */ #define ns2cycle(ns, clk) (int)((ns) * (clk / 1000000) / 1000) -static void pxa3xx_nand_set_timing(struct pxa3xx_nand_info *info, +static void pxa3xx_nand_set_timing(struct pxa3xx_nand_host *host, const struct pxa3xx_nand_timing *t) { + struct pxa3xx_nand_info *info = host->info_data; unsigned long nand_clk = clk_get_rate(info->clk); uint32_t ndtr0, ndtr1; @@ -258,23 +262,24 @@ static void pxa3xx_nand_set_timing(struct pxa3xx_nand_info *info, NDTR1_tWHR(ns2cycle(t->tWHR, nand_clk)) | NDTR1_tAR(ns2cycle(t->tAR, nand_clk)); - info->ndtr0cs0 = ndtr0; - info->ndtr1cs0 = ndtr1; + host->ndtr0cs0 = ndtr0; + host->ndtr1cs0 = ndtr1; nand_writel(info, NDTR0CS0, ndtr0); nand_writel(info, NDTR1CS0, ndtr1); } static void pxa3xx_set_datasize(struct pxa3xx_nand_info *info) { - int oob_enable = info->reg_ndcr & NDCR_SPARE_EN; + struct pxa3xx_nand_host *host = info->host[info->cs]; + int oob_enable = host->reg_ndcr & NDCR_SPARE_EN; - info->data_size = info->page_size; + info->data_size = host->page_size; if (!oob_enable) { info->oob_size = 0; return; } - switch (info->page_size) { + switch (host->page_size) { case 2048: info->oob_size = (info->use_ecc) ? 40 : 64; break; @@ -292,9 +297,10 @@ static void pxa3xx_set_datasize(struct pxa3xx_nand_info *info) */ static void pxa3xx_nand_start(struct pxa3xx_nand_info *info) { + struct pxa3xx_nand_host *host = info->host[info->cs]; uint32_t ndcr; - ndcr = info->reg_ndcr; + ndcr = host->reg_ndcr; ndcr |= info->use_ecc ? NDCR_ECC_EN : 0; ndcr |= info->use_dma ? 
NDCR_DMA_EN : 0; ndcr |= NDCR_ND_RUN; @@ -359,7 +365,7 @@ static void handle_data_pio(struct pxa3xx_nand_info *info) DIV_ROUND_UP(info->oob_size, 4)); break; default: - printk(KERN_ERR "%s: invalid state %d\n", __func__, + dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__, info->state); BUG(); } @@ -385,7 +391,7 @@ static void start_data_dma(struct pxa3xx_nand_info *info) desc->dcmd |= DCMD_INCTRGADDR | DCMD_FLOWSRC; break; default: - printk(KERN_ERR "%s: invalid state %d\n", __func__, + dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__, info->state); BUG(); } @@ -416,6 +422,15 @@ static irqreturn_t pxa3xx_nand_irq(int irq, void *devid) { struct pxa3xx_nand_info *info = devid; unsigned int status, is_completed = 0; + unsigned int ready, cmd_done; + + if (info->cs == 0) { + ready = NDSR_FLASH_RDY; + cmd_done = NDSR_CS0_CMDD; + } else { + ready = NDSR_RDY; + cmd_done = NDSR_CS1_CMDD; + } status = nand_readl(info, NDSR); @@ -437,11 +452,11 @@ static irqreturn_t pxa3xx_nand_irq(int irq, void *devid) handle_data_pio(info); } } - if (status & NDSR_CS0_CMDD) { + if (status & cmd_done) { info->state = STATE_CMD_DONE; is_completed = 1; } - if (status & NDSR_FLASH_RDY) { + if (status & ready) { info->is_ready = 1; info->state = STATE_READY; } @@ -463,12 +478,6 @@ NORMAL_IRQ_EXIT: return IRQ_HANDLED; } -static int pxa3xx_nand_dev_ready(struct mtd_info *mtd) -{ - struct pxa3xx_nand_info *info = mtd->priv; - return (nand_readl(info, NDSR) & NDSR_RDY) ? 1 : 0; -} - static inline int is_buf_blank(uint8_t *buf, size_t len) { for (; len > 0; len--) @@ -481,10 +490,12 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, uint16_t column, int page_addr) { uint16_t cmd; - int addr_cycle, exec_cmd, ndcb0; - struct mtd_info *mtd = info->mtd; + int addr_cycle, exec_cmd; + struct pxa3xx_nand_host *host; + struct mtd_info *mtd; - ndcb0 = 0; + host = info->host[info->cs]; + mtd = host->mtd; addr_cycle = 0; exec_cmd = 1; @@ -495,6 +506,10 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, info->use_ecc = 0; info->is_ready = 0; info->retcode = ERR_NONE; + if (info->cs != 0) + info->ndcb0 = NDCB0_CSEL; + else + info->ndcb0 = 0; switch (command) { case NAND_CMD_READ0: @@ -512,20 +527,19 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, break; } - info->ndcb0 = ndcb0; - addr_cycle = NDCB0_ADDR_CYC(info->row_addr_cycles - + info->col_addr_cycles); + addr_cycle = NDCB0_ADDR_CYC(host->row_addr_cycles + + host->col_addr_cycles); switch (command) { case NAND_CMD_READOOB: case NAND_CMD_READ0: - cmd = info->cmdset->read1; + cmd = host->cmdset->read1; if (command == NAND_CMD_READOOB) info->buf_start = mtd->writesize + column; else info->buf_start = column; - if (unlikely(info->page_size < PAGE_CHUNK_SIZE)) + if (unlikely(host->page_size < PAGE_CHUNK_SIZE)) info->ndcb0 |= NDCB0_CMD_TYPE(0) | addr_cycle | (cmd & NDCB0_CMD1_MASK); @@ -537,7 +551,7 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, case NAND_CMD_SEQIN: /* small page addr setting */ - if (unlikely(info->page_size < PAGE_CHUNK_SIZE)) { + if (unlikely(host->page_size < PAGE_CHUNK_SIZE)) { info->ndcb1 = ((page_addr & 0xFFFFFF) << 8) | (column & 0xFF); @@ -564,7 +578,7 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, break; } - cmd = info->cmdset->program; + cmd = host->cmdset->program; info->ndcb0 |= NDCB0_CMD_TYPE(0x1) | NDCB0_AUTO_RS | NDCB0_ST_ROW_EN @@ -574,8 +588,8 @@ static int 
prepare_command_pool(struct pxa3xx_nand_info *info, int command, break; case NAND_CMD_READID: - cmd = info->cmdset->read_id; - info->buf_count = info->read_id_bytes; + cmd = host->cmdset->read_id; + info->buf_count = host->read_id_bytes; info->ndcb0 |= NDCB0_CMD_TYPE(3) | NDCB0_ADDR_CYC(1) | cmd; @@ -583,7 +597,7 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, info->data_size = 8; break; case NAND_CMD_STATUS: - cmd = info->cmdset->read_status; + cmd = host->cmdset->read_status; info->buf_count = 1; info->ndcb0 |= NDCB0_CMD_TYPE(4) | NDCB0_ADDR_CYC(1) @@ -593,7 +607,7 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, break; case NAND_CMD_ERASE1: - cmd = info->cmdset->erase; + cmd = host->cmdset->erase; info->ndcb0 |= NDCB0_CMD_TYPE(2) | NDCB0_AUTO_RS | NDCB0_ADDR_CYC(3) @@ -604,7 +618,7 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, break; case NAND_CMD_RESET: - cmd = info->cmdset->reset; + cmd = host->cmdset->reset; info->ndcb0 |= NDCB0_CMD_TYPE(5) | cmd; @@ -616,8 +630,8 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, default: exec_cmd = 0; - printk(KERN_ERR "pxa3xx-nand: non-supported" - " command %x\n", command); + dev_err(&info->pdev->dev, "non-supported command %x\n", + command); break; } @@ -627,7 +641,8 @@ static int prepare_command_pool(struct pxa3xx_nand_info *info, int command, static void pxa3xx_nand_cmdfunc(struct mtd_info *mtd, unsigned command, int column, int page_addr) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; int ret, exec_cmd; /* @@ -635,9 +650,21 @@ static void pxa3xx_nand_cmdfunc(struct mtd_info *mtd, unsigned command, * "byte" address into a "word" address appropriate * for indexing a word-oriented device */ - if (info->reg_ndcr & NDCR_DWIDTH_M) + if (host->reg_ndcr & NDCR_DWIDTH_M) column /= 2; + /* + * There may be different NAND chip hooked to + * different chip select, so check whether + * chip select has been changed, if yes, reset the timing + */ + if (info->cs != host->cs) { + info->cs = host->cs; + nand_writel(info, NDTR0CS0, host->ndtr0cs0); + nand_writel(info, NDTR1CS0, host->ndtr1cs0); + } + + info->state = STATE_PREPARED; exec_cmd = prepare_command_pool(info, command, column, page_addr); if (exec_cmd) { init_completion(&info->cmd_complete); @@ -646,12 +673,12 @@ static void pxa3xx_nand_cmdfunc(struct mtd_info *mtd, unsigned command, ret = wait_for_completion_timeout(&info->cmd_complete, CHIP_DELAY_TIMEOUT); if (!ret) { - printk(KERN_ERR "Wait time out!!!\n"); + dev_err(&info->pdev->dev, "Wait time out!!!\n"); /* Stop State Machine for next command cycle */ pxa3xx_nand_stop(info); } - info->state = STATE_IDLE; } + info->state = STATE_IDLE; } static void pxa3xx_nand_write_page_hwecc(struct mtd_info *mtd, @@ -664,7 +691,8 @@ static void pxa3xx_nand_write_page_hwecc(struct mtd_info *mtd, static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, uint8_t *buf, int page) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; chip->read_buf(mtd, buf, mtd->writesize); chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); @@ -685,6 +713,8 @@ static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd, * OOB, ignore such double bit errors */ if (is_buf_blank(buf, mtd->writesize)) + info->retcode = ERR_NONE; + else mtd->ecc_stats.failed++; 
} @@ -693,7 +723,8 @@ static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd, static uint8_t pxa3xx_nand_read_byte(struct mtd_info *mtd) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; char retval = 0xFF; if (info->buf_start < info->buf_count) @@ -705,7 +736,8 @@ static uint8_t pxa3xx_nand_read_byte(struct mtd_info *mtd) static u16 pxa3xx_nand_read_word(struct mtd_info *mtd) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; u16 retval = 0xFFFF; if (!(info->buf_start & 0x01) && info->buf_start < info->buf_count) { @@ -717,7 +749,8 @@ static u16 pxa3xx_nand_read_word(struct mtd_info *mtd) static void pxa3xx_nand_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; int real_len = min_t(size_t, len, info->buf_count - info->buf_start); memcpy(buf, info->data_buff + info->buf_start, real_len); @@ -727,7 +760,8 @@ static void pxa3xx_nand_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) static void pxa3xx_nand_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; int real_len = min_t(size_t, len, info->buf_count - info->buf_start); memcpy(info->data_buff + info->buf_start, buf, real_len); @@ -747,7 +781,8 @@ static void pxa3xx_nand_select_chip(struct mtd_info *mtd, int chip) static int pxa3xx_nand_waitfunc(struct mtd_info *mtd, struct nand_chip *this) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; /* pxa3xx_nand_send_command has waited for command complete */ if (this->state == FL_WRITING || this->state == FL_ERASING) { @@ -770,54 +805,70 @@ static int pxa3xx_nand_config_flash(struct pxa3xx_nand_info *info, { struct platform_device *pdev = info->pdev; struct pxa3xx_nand_platform_data *pdata = pdev->dev.platform_data; + struct pxa3xx_nand_host *host = info->host[info->cs]; uint32_t ndcr = 0x0; /* enable all interrupts */ - if (f->page_size != 2048 && f->page_size != 512) + if (f->page_size != 2048 && f->page_size != 512) { + dev_err(&pdev->dev, "Current only support 2048 and 512 size\n"); return -EINVAL; + } - if (f->flash_width != 16 && f->flash_width != 8) + if (f->flash_width != 16 && f->flash_width != 8) { + dev_err(&pdev->dev, "Only support 8bit and 16 bit!\n"); return -EINVAL; + } /* calculate flash information */ - info->cmdset = &default_cmdset; - info->page_size = f->page_size; - info->read_id_bytes = (f->page_size == 2048) ? 4 : 2; + host->cmdset = &default_cmdset; + host->page_size = f->page_size; + host->read_id_bytes = (f->page_size == 2048) ? 4 : 2; /* calculate addressing information */ - info->col_addr_cycles = (f->page_size == 2048) ? 2 : 1; + host->col_addr_cycles = (f->page_size == 2048) ? 2 : 1; if (f->num_blocks * f->page_per_block > 65536) - info->row_addr_cycles = 3; + host->row_addr_cycles = 3; else - info->row_addr_cycles = 2; + host->row_addr_cycles = 2; ndcr |= (pdata->enable_arbiter) ? NDCR_ND_ARB_EN : 0; - ndcr |= (info->col_addr_cycles == 2) ? NDCR_RA_START : 0; + ndcr |= (host->col_addr_cycles == 2) ? NDCR_RA_START : 0; ndcr |= (f->page_per_block == 64) ? 
NDCR_PG_PER_BLK : 0; ndcr |= (f->page_size == 2048) ? NDCR_PAGE_SZ : 0; ndcr |= (f->flash_width == 16) ? NDCR_DWIDTH_M : 0; ndcr |= (f->dfc_width == 16) ? NDCR_DWIDTH_C : 0; - ndcr |= NDCR_RD_ID_CNT(info->read_id_bytes); + ndcr |= NDCR_RD_ID_CNT(host->read_id_bytes); ndcr |= NDCR_SPARE_EN; /* enable spare by default */ - info->reg_ndcr = ndcr; + host->reg_ndcr = ndcr; - pxa3xx_nand_set_timing(info, f->timing); + pxa3xx_nand_set_timing(host, f->timing); return 0; } static int pxa3xx_nand_detect_config(struct pxa3xx_nand_info *info) { + /* + * We set 0 by hard coding here, for we don't support keep_config + * when there is more than one chip attached to the controller + */ + struct pxa3xx_nand_host *host = info->host[0]; uint32_t ndcr = nand_readl(info, NDCR); - info->page_size = ndcr & NDCR_PAGE_SZ ? 2048 : 512; - /* set info fields needed to read id */ - info->read_id_bytes = (info->page_size == 2048) ? 4 : 2; - info->reg_ndcr = ndcr; - info->cmdset = &default_cmdset; - info->ndtr0cs0 = nand_readl(info, NDTR0CS0); - info->ndtr1cs0 = nand_readl(info, NDTR1CS0); + if (ndcr & NDCR_PAGE_SZ) { + host->page_size = 2048; + host->read_id_bytes = 4; + } else { + host->page_size = 512; + host->read_id_bytes = 2; + } + + host->reg_ndcr = ndcr & ~NDCR_INT_MASK; + host->cmdset = &default_cmdset; + + host->ndtr0cs0 = nand_readl(info, NDTR0CS0); + host->ndtr1cs0 = nand_readl(info, NDTR1CS0); return 0; } @@ -847,7 +898,6 @@ static int pxa3xx_nand_init_buff(struct pxa3xx_nand_info *info) return -ENOMEM; } - info->data_buff_size = MAX_BUFF_SIZE; info->data_desc = (void *)info->data_buff + data_desc_offset; info->data_desc_addr = info->data_buff_phys + data_desc_offset; @@ -855,7 +905,7 @@ static int pxa3xx_nand_init_buff(struct pxa3xx_nand_info *info) pxa3xx_nand_data_dma_irq, info); if (info->data_dma_ch < 0) { dev_err(&pdev->dev, "failed to request data dma\n"); - dma_free_coherent(&pdev->dev, info->data_buff_size, + dma_free_coherent(&pdev->dev, MAX_BUFF_SIZE, info->data_buff, info->data_buff_phys); return info->data_dma_ch; } @@ -865,24 +915,28 @@ static int pxa3xx_nand_init_buff(struct pxa3xx_nand_info *info) static int pxa3xx_nand_sensing(struct pxa3xx_nand_info *info) { - struct mtd_info *mtd = info->mtd; - struct nand_chip *chip = mtd->priv; - + struct mtd_info *mtd; + int ret; + mtd = info->host[info->cs]->mtd; /* use the common timing to make a try */ - pxa3xx_nand_config_flash(info, &builtin_flash_types[0]); - chip->cmdfunc(mtd, NAND_CMD_RESET, 0, 0); + ret = pxa3xx_nand_config_flash(info, &builtin_flash_types[0]); + if (ret) + return ret; + + pxa3xx_nand_cmdfunc(mtd, NAND_CMD_RESET, 0, 0); if (info->is_ready) - return 1; - else return 0; + + return -ENODEV; } static int pxa3xx_nand_scan(struct mtd_info *mtd) { - struct pxa3xx_nand_info *info = mtd->priv; + struct pxa3xx_nand_host *host = mtd->priv; + struct pxa3xx_nand_info *info = host->info_data; struct platform_device *pdev = info->pdev; struct pxa3xx_nand_platform_data *pdata = pdev->dev.platform_data; - struct nand_flash_dev pxa3xx_flash_ids[2] = { {NULL,}, {NULL,} }; + struct nand_flash_dev pxa3xx_flash_ids[2], *def = NULL; const struct pxa3xx_nand_flash *f = NULL; struct nand_chip *chip = mtd->priv; uint32_t id = -1; @@ -893,22 +947,20 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd) goto KEEP_CONFIG; ret = pxa3xx_nand_sensing(info); - if (!ret) { - kfree(mtd); - info->mtd = NULL; - printk(KERN_INFO "There is no nand chip on cs 0!\n"); + if (ret) { + dev_info(&info->pdev->dev, "There is no chip on cs %d!\n", + info->cs); - return 
-EINVAL; + return ret; } chip->cmdfunc(mtd, NAND_CMD_READID, 0, 0); id = *((uint16_t *)(info->data_buff)); if (id != 0) - printk(KERN_INFO "Detect a flash id %x\n", id); + dev_info(&info->pdev->dev, "Detect a flash id %x\n", id); else { - kfree(mtd); - info->mtd = NULL; - printk(KERN_WARNING "Read out ID 0, potential timing set wrong!!\n"); + dev_warn(&info->pdev->dev, + "Read out ID 0, potential timing set wrong!!\n"); return -EINVAL; } @@ -926,14 +978,17 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd) } if (i >= (ARRAY_SIZE(builtin_flash_types) + pdata->num_flash - 1)) { - kfree(mtd); - info->mtd = NULL; - printk(KERN_ERR "ERROR!! flash not defined!!!\n"); + dev_err(&info->pdev->dev, "ERROR!! flash not defined!!!\n"); return -EINVAL; } - pxa3xx_nand_config_flash(info, f); + ret = pxa3xx_nand_config_flash(info, f); + if (ret) { + dev_err(&info->pdev->dev, "ERROR! Configure failed\n"); + return ret; + } + pxa3xx_flash_ids[0].name = f->name; pxa3xx_flash_ids[0].id = (f->chip_id >> 8) & 0xffff; pxa3xx_flash_ids[0].pagesize = f->page_size; @@ -942,62 +997,78 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd) pxa3xx_flash_ids[0].erasesize = f->page_size * f->page_per_block; if (f->flash_width == 16) pxa3xx_flash_ids[0].options = NAND_BUSWIDTH_16; + pxa3xx_flash_ids[1].name = NULL; + def = pxa3xx_flash_ids; KEEP_CONFIG: - if (nand_scan_ident(mtd, 1, pxa3xx_flash_ids)) + chip->ecc.mode = NAND_ECC_HW; + chip->ecc.size = host->page_size; + + chip->options = NAND_NO_AUTOINCR; + chip->options |= NAND_NO_READRDY; + if (host->reg_ndcr & NDCR_DWIDTH_M) + chip->options |= NAND_BUSWIDTH_16; + + if (nand_scan_ident(mtd, 1, def)) return -ENODEV; /* calculate addressing information */ - info->col_addr_cycles = (mtd->writesize >= 2048) ? 2 : 1; + if (mtd->writesize >= 2048) + host->col_addr_cycles = 2; + else + host->col_addr_cycles = 1; + info->oob_buff = info->data_buff + mtd->writesize; if ((mtd->size >> chip->page_shift) > 65536) - info->row_addr_cycles = 3; + host->row_addr_cycles = 3; else - info->row_addr_cycles = 2; - mtd->name = mtd_names[0]; - chip->ecc.mode = NAND_ECC_HW; - chip->ecc.size = f->page_size; - - chip->options = (f->flash_width == 16) ? 
NAND_BUSWIDTH_16 : 0; - chip->options |= NAND_NO_AUTOINCR; - chip->options |= NAND_NO_READRDY; + host->row_addr_cycles = 2; + mtd->name = mtd_names[0]; return nand_scan_tail(mtd); } -static -struct pxa3xx_nand_info *alloc_nand_resource(struct platform_device *pdev) +static int alloc_nand_resource(struct platform_device *pdev) { + struct pxa3xx_nand_platform_data *pdata; struct pxa3xx_nand_info *info; + struct pxa3xx_nand_host *host; struct nand_chip *chip; struct mtd_info *mtd; struct resource *r; - int ret, irq; + int ret, irq, cs; - mtd = kzalloc(sizeof(struct mtd_info) + sizeof(struct pxa3xx_nand_info), - GFP_KERNEL); - if (!mtd) { + pdata = pdev->dev.platform_data; + info = kzalloc(sizeof(*info) + (sizeof(*mtd) + + sizeof(*host)) * pdata->num_cs, GFP_KERNEL); + if (!info) { dev_err(&pdev->dev, "failed to allocate memory\n"); - return NULL; + return -ENOMEM; } - info = (struct pxa3xx_nand_info *)(&mtd[1]); - chip = (struct nand_chip *)(&mtd[1]); info->pdev = pdev; - info->mtd = mtd; - mtd->priv = info; - mtd->owner = THIS_MODULE; - - chip->ecc.read_page = pxa3xx_nand_read_page_hwecc; - chip->ecc.write_page = pxa3xx_nand_write_page_hwecc; - chip->controller = &info->controller; - chip->waitfunc = pxa3xx_nand_waitfunc; - chip->select_chip = pxa3xx_nand_select_chip; - chip->dev_ready = pxa3xx_nand_dev_ready; - chip->cmdfunc = pxa3xx_nand_cmdfunc; - chip->read_word = pxa3xx_nand_read_word; - chip->read_byte = pxa3xx_nand_read_byte; - chip->read_buf = pxa3xx_nand_read_buf; - chip->write_buf = pxa3xx_nand_write_buf; - chip->verify_buf = pxa3xx_nand_verify_buf; + for (cs = 0; cs < pdata->num_cs; cs++) { + mtd = (struct mtd_info *)((unsigned int)&info[1] + + (sizeof(*mtd) + sizeof(*host)) * cs); + chip = (struct nand_chip *)(&mtd[1]); + host = (struct pxa3xx_nand_host *)chip; + info->host[cs] = host; + host->mtd = mtd; + host->cs = cs; + host->info_data = info; + mtd->priv = host; + mtd->owner = THIS_MODULE; + + chip->ecc.read_page = pxa3xx_nand_read_page_hwecc; + chip->ecc.write_page = pxa3xx_nand_write_page_hwecc; + chip->controller = &info->controller; + chip->waitfunc = pxa3xx_nand_waitfunc; + chip->select_chip = pxa3xx_nand_select_chip; + chip->cmdfunc = pxa3xx_nand_cmdfunc; + chip->read_word = pxa3xx_nand_read_word; + chip->read_byte = pxa3xx_nand_read_byte; + chip->read_buf = pxa3xx_nand_read_buf; + chip->write_buf = pxa3xx_nand_write_buf; + chip->verify_buf = pxa3xx_nand_verify_buf; + } spin_lock_init(&chip->controller->lock); init_waitqueue_head(&chip->controller->wq); @@ -1070,13 +1141,13 @@ struct pxa3xx_nand_info *alloc_nand_resource(struct platform_device *pdev) platform_set_drvdata(pdev, info); - return info; + return 0; fail_free_buf: free_irq(irq, info); if (use_dma) { pxa_free_dma(info->data_dma_ch); - dma_free_coherent(&pdev->dev, info->data_buff_size, + dma_free_coherent(&pdev->dev, MAX_BUFF_SIZE, info->data_buff, info->data_buff_phys); } else kfree(info->data_buff); @@ -1088,17 +1159,21 @@ fail_put_clk: clk_disable(info->clk); clk_put(info->clk); fail_free_mtd: - kfree(mtd); - return NULL; + kfree(info); + return ret; } static int pxa3xx_nand_remove(struct platform_device *pdev) { struct pxa3xx_nand_info *info = platform_get_drvdata(pdev); - struct mtd_info *mtd = info->mtd; + struct pxa3xx_nand_platform_data *pdata; struct resource *r; - int irq; + int irq, cs; + if (!info) + return 0; + + pdata = pdev->dev.platform_data; platform_set_drvdata(pdev, NULL); irq = platform_get_irq(pdev, 0); @@ -1106,7 +1181,7 @@ static int pxa3xx_nand_remove(struct platform_device *pdev) 
free_irq(irq, info); if (use_dma) { pxa_free_dma(info->data_dma_ch); - dma_free_writecombine(&pdev->dev, info->data_buff_size, + dma_free_writecombine(&pdev->dev, MAX_BUFF_SIZE, info->data_buff, info->data_buff_phys); } else kfree(info->data_buff); @@ -1118,10 +1193,9 @@ static int pxa3xx_nand_remove(struct platform_device *pdev) clk_disable(info->clk); clk_put(info->clk); - if (mtd) { - mtd_device_unregister(mtd); - kfree(mtd); - } + for (cs = 0; cs < pdata->num_cs; cs++) + nand_release(info->host[cs]->mtd); + kfree(info); return 0; } @@ -1129,6 +1203,7 @@ static int pxa3xx_nand_probe(struct platform_device *pdev) { struct pxa3xx_nand_platform_data *pdata; struct pxa3xx_nand_info *info; + int ret, cs, probe_success; pdata = pdev->dev.platform_data; if (!pdata) { @@ -1136,52 +1211,88 @@ static int pxa3xx_nand_probe(struct platform_device *pdev) return -ENODEV; } - info = alloc_nand_resource(pdev); - if (info == NULL) - return -ENOMEM; - - if (pxa3xx_nand_scan(info->mtd)) { - dev_err(&pdev->dev, "failed to scan nand\n"); - pxa3xx_nand_remove(pdev); - return -ENODEV; + ret = alloc_nand_resource(pdev); + if (ret) { + dev_err(&pdev->dev, "alloc nand resource failed\n"); + return ret; } - if (mtd_has_cmdlinepart()) { - const char *probes[] = { "cmdlinepart", NULL }; - struct mtd_partition *parts; - int nr_parts; + info = platform_get_drvdata(pdev); + probe_success = 0; + for (cs = 0; cs < pdata->num_cs; cs++) { + info->cs = cs; + ret = pxa3xx_nand_scan(info->host[cs]->mtd); + if (ret) { + dev_warn(&pdev->dev, "failed to scan nand at cs %d\n", + cs); + continue; + } - nr_parts = parse_mtd_partitions(info->mtd, probes, &parts, 0); + ret = mtd_device_parse_register(info->host[cs]->mtd, NULL, 0, + pdata->parts[cs], pdata->nr_parts[cs]); + if (!ret) + probe_success = 1; + } - if (nr_parts) - return mtd_device_register(info->mtd, parts, nr_parts); + if (!probe_success) { + pxa3xx_nand_remove(pdev); + return -ENODEV; } - return mtd_device_register(info->mtd, pdata->parts, pdata->nr_parts); + return 0; } #ifdef CONFIG_PM static int pxa3xx_nand_suspend(struct platform_device *pdev, pm_message_t state) { struct pxa3xx_nand_info *info = platform_get_drvdata(pdev); - struct mtd_info *mtd = info->mtd; + struct pxa3xx_nand_platform_data *pdata; + struct mtd_info *mtd; + int cs; + pdata = pdev->dev.platform_data; if (info->state) { dev_err(&pdev->dev, "driver busy, state = %d\n", info->state); return -EAGAIN; } + for (cs = 0; cs < pdata->num_cs; cs++) { + mtd = info->host[cs]->mtd; + mtd->suspend(mtd); + } + return 0; } static int pxa3xx_nand_resume(struct platform_device *pdev) { struct pxa3xx_nand_info *info = platform_get_drvdata(pdev); - struct mtd_info *mtd = info->mtd; + struct pxa3xx_nand_platform_data *pdata; + struct mtd_info *mtd; + int cs; - nand_writel(info, NDTR0CS0, info->ndtr0cs0); - nand_writel(info, NDTR1CS0, info->ndtr1cs0); - clk_enable(info->clk); + pdata = pdev->dev.platform_data; + /* We don't want to handle interrupt without calling mtd routine */ + disable_int(info, NDCR_INT_MASK); + + /* + * Directly set the chip select to a invalid value, + * then the driver would reset the timing according + * to current chip select at the beginning of cmdfunc + */ + info->cs = 0xff; + + /* + * As the spec says, the NDSR would be updated to 0x1800 when + * doing the nand_clk disable/enable. 
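One structural detail of the pxa3xx rework above that is easy to miss: alloc_nand_resource() now makes a single allocation holding the pxa3xx_nand_info controller state followed by one (mtd_info plus host) pair per chip select, and carves it up by offset; the casts work because struct pxa3xx_nand_host starts with its embedded nand_chip. A toy user-space sketch of that layout, with hypothetical struct names standing in for the real ones:

#include <stdio.h>
#include <stdlib.h>

struct toy_mtd  { int writesize; };		/* stands in for mtd_info */
struct toy_host { int cs; void *info_data; };	/* stands in for pxa3xx_nand_host */
struct toy_info { struct toy_host *host[2]; };	/* stands in for pxa3xx_nand_info */

int main(void)
{
	int num_cs = 2, cs;
	size_t per_cs = sizeof(struct toy_mtd) + sizeof(struct toy_host);
	struct toy_info *info = calloc(1, sizeof(*info) + per_cs * num_cs);

	if (!info)
		return 1;

	for (cs = 0; cs < num_cs; cs++) {
		struct toy_mtd *mtd =
			(struct toy_mtd *)((char *)(info + 1) + per_cs * cs);
		struct toy_host *host = (struct toy_host *)(mtd + 1);

		host->cs = cs;
		host->info_data = info;
		info->host[cs] = host;
	}
	printf("host[0]->cs=%d host[1]->cs=%d\n",
	       info->host[0]->cs, info->host[1]->cs);
	free(info);
	return 0;
}

The same per chip select indexing is what lets probe, suspend and resume loop over pdata->num_cs and treat each attached chip as its own mtd_info.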
+ * To prevent it damaging state machine of the driver, clear + * all status before resume + */ + nand_writel(info, NDSR, NDSR_MASK); + for (cs = 0; cs < pdata->num_cs; cs++) { + mtd = info->host[cs]->mtd; + mtd->resume(mtd); + } return 0; } diff --git a/drivers/mtd/nand/r852.c b/drivers/mtd/nand/r852.c index cae2e013c986..f20f393bfda6 100644 --- a/drivers/mtd/nand/r852.c +++ b/drivers/mtd/nand/r852.c @@ -1027,7 +1027,7 @@ void r852_shutdown(struct pci_dev *pci_dev) } #ifdef CONFIG_PM -int r852_suspend(struct device *device) +static int r852_suspend(struct device *device) { struct r852_device *dev = pci_get_drvdata(to_pci_dev(device)); @@ -1048,7 +1048,7 @@ int r852_suspend(struct device *device) return 0; } -int r852_resume(struct device *device) +static int r852_resume(struct device *device) { struct r852_device *dev = pci_get_drvdata(to_pci_dev(device)); @@ -1092,7 +1092,7 @@ static const struct pci_device_id r852_pci_id_tbl[] = { MODULE_DEVICE_TABLE(pci, r852_pci_id_tbl); -SIMPLE_DEV_PM_OPS(r852_pm_ops, r852_suspend, r852_resume); +static SIMPLE_DEV_PM_OPS(r852_pm_ops, r852_suspend, r852_resume); static struct pci_driver r852_pci_driver = { .name = DRV_NAME, diff --git a/drivers/mtd/nand/rtc_from4.c b/drivers/mtd/nand/rtc_from4.c index c9f9127ff770..f309addc2fa0 100644 --- a/drivers/mtd/nand/rtc_from4.c +++ b/drivers/mtd/nand/rtc_from4.c @@ -351,7 +351,7 @@ static int rtc_from4_correct_data(struct mtd_info *mtd, const u_char *buf, u_cha return 0; } - /* Read the syndrom pattern from the FPGA and correct the bitorder */ + /* Read the syndrome pattern from the FPGA and correct the bitorder */ rs_ecc = (volatile unsigned short *)(rtc_from4_fio_base + RTC_FROM4_RS_ECC); for (i = 0; i < 8; i++) { ecc[i] = bitrev8(*rs_ecc); @@ -380,7 +380,7 @@ static int rtc_from4_correct_data(struct mtd_info *mtd, const u_char *buf, u_cha /* Let the library code do its magic. 
*/ res = decode_rs8(rs_decoder, (uint8_t *) buf, par, 512, syn, 0, NULL, 0xff, NULL); if (res > 0) { - DEBUG(MTD_DEBUG_LEVEL0, "rtc_from4_correct_data: " "ECC corrected %d errors on read\n", res); + pr_debug("rtc_from4_correct_data: " "ECC corrected %d errors on read\n", res); } return res; } @@ -444,7 +444,6 @@ static int rtc_from4_errstat(struct mtd_info *mtd, struct nand_chip *this, len = mtd->writesize; buf = kmalloc(len, GFP_KERNEL); if (!buf) { - printk(KERN_ERR "rtc_from4_errstat: Out of memory!\n"); er_stat = 1; goto out; } diff --git a/drivers/mtd/nand/s3c2410.c b/drivers/mtd/nand/s3c2410.c index 4405468f196b..868685db6712 100644 --- a/drivers/mtd/nand/s3c2410.c +++ b/drivers/mtd/nand/s3c2410.c @@ -723,7 +723,7 @@ static int s3c24xx_nand_remove(struct platform_device *pdev) /* free the common resources */ - if (info->clk != NULL && !IS_ERR(info->clk)) { + if (!IS_ERR(info->clk)) { s3c2410_nand_clk_set_state(info, CLOCK_DISABLE); clk_put(info->clk); } @@ -744,26 +744,15 @@ static int s3c24xx_nand_remove(struct platform_device *pdev) return 0; } -const char *part_probes[] = { "cmdlinepart", NULL }; static int s3c2410_nand_add_partition(struct s3c2410_nand_info *info, struct s3c2410_nand_mtd *mtd, struct s3c2410_nand_set *set) { - struct mtd_partition *part_info; - int nr_part = 0; + if (set) + mtd->mtd.name = set->name; - if (set == NULL) - return mtd_device_register(&mtd->mtd, NULL, 0); - - mtd->mtd.name = set->name; - nr_part = parse_mtd_partitions(&mtd->mtd, part_probes, &part_info, 0); - - if (nr_part <= 0 && set->nr_partitions > 0) { - nr_part = set->nr_partitions; - part_info = set->partitions; - } - - return mtd_device_register(&mtd->mtd, part_info, nr_part); + return mtd_device_parse_register(&mtd->mtd, NULL, 0, + set->partitions, set->nr_partitions); } /** @@ -880,8 +869,10 @@ static void s3c2410_nand_init_chip(struct s3c2410_nand_info *info, /* If you use u-boot BBT creation code, specifying this flag will * let the kernel fish out the BBT from the NAND, and also skip the * full NAND scan that can take 1/2s or so. Little things... 
*/ - if (set->flash_bbt) - chip->options |= NAND_USE_FLASH_BBT | NAND_SKIP_BBTSCAN; + if (set->flash_bbt) { + chip->bbt_options |= NAND_BBT_USE_FLASH; + chip->options |= NAND_SKIP_BBTSCAN; + } } /** diff --git a/drivers/mtd/nand/sharpsl.c b/drivers/mtd/nand/sharpsl.c index 19e24ed089ea..619d2a504788 100644 --- a/drivers/mtd/nand/sharpsl.c +++ b/drivers/mtd/nand/sharpsl.c @@ -103,16 +103,12 @@ static int sharpsl_nand_calculate_ecc(struct mtd_info *mtd, const u_char * dat, return readb(sharpsl->io + ECCCNTR) != 0; } -static const char *part_probes[] = { "cmdlinepart", NULL }; - /* * Main initialization routine */ static int __devinit sharpsl_nand_probe(struct platform_device *pdev) { struct nand_chip *this; - struct mtd_partition *sharpsl_partition_info; - int nr_partitions; struct resource *r; int err = 0; struct sharpsl_nand *sharpsl; @@ -184,14 +180,9 @@ static int __devinit sharpsl_nand_probe(struct platform_device *pdev) /* Register the partitions */ sharpsl->mtd.name = "sharpsl-nand"; - nr_partitions = parse_mtd_partitions(&sharpsl->mtd, part_probes, &sharpsl_partition_info, 0); - if (nr_partitions <= 0) { - nr_partitions = data->nr_partitions; - sharpsl_partition_info = data->partitions; - } - err = mtd_device_register(&sharpsl->mtd, sharpsl_partition_info, - nr_partitions); + err = mtd_device_parse_register(&sharpsl->mtd, NULL, 0, + data->partitions, data->nr_partitions); if (err) goto err_add; diff --git a/drivers/mtd/nand/sm_common.c b/drivers/mtd/nand/sm_common.c index 43469715b3fa..32ae5af7444f 100644 --- a/drivers/mtd/nand/sm_common.c +++ b/drivers/mtd/nand/sm_common.c @@ -48,7 +48,7 @@ static int sm_block_markbad(struct mtd_info *mtd, loff_t ofs) /* As long as this function is called on erase block boundaries it will work correctly for 256 byte nand */ - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = 0; ops.ooblen = mtd->oobsize; ops.oobbuf = (void *)&oob; diff --git a/drivers/mtd/nand/socrates_nand.c b/drivers/mtd/nand/socrates_nand.c index ca2d0555729e..0fb24f9c2327 100644 --- a/drivers/mtd/nand/socrates_nand.c +++ b/drivers/mtd/nand/socrates_nand.c @@ -155,8 +155,6 @@ static int socrates_nand_device_ready(struct mtd_info *mtd) return 1; } -static const char *part_probes[] = { "cmdlinepart", NULL }; - /* * Probe for the NAND device. 
*/ @@ -166,8 +164,7 @@ static int __devinit socrates_nand_probe(struct platform_device *ofdev) struct mtd_info *mtd; struct nand_chip *nand_chip; int res; - struct mtd_partition *partitions = NULL; - int num_partitions = 0; + struct mtd_part_parser_data ppdata; /* Allocate memory for the device structure (and zero it) */ host = kzalloc(sizeof(struct socrates_nand_host), GFP_KERNEL); @@ -193,6 +190,7 @@ static int __devinit socrates_nand_probe(struct platform_device *ofdev) mtd->name = "socrates_nand"; mtd->owner = THIS_MODULE; mtd->dev.parent = &ofdev->dev; + ppdata.of_node = ofdev->dev.of_node; /*should never be accessed directly */ nand_chip->IO_ADDR_R = (void *)0xdeadbeef; @@ -225,30 +223,10 @@ static int __devinit socrates_nand_probe(struct platform_device *ofdev) goto out; } -#ifdef CONFIG_MTD_CMDLINE_PARTS - num_partitions = parse_mtd_partitions(mtd, part_probes, - &partitions, 0); - if (num_partitions < 0) { - res = num_partitions; - goto release; - } -#endif - - if (num_partitions == 0) { - num_partitions = of_mtd_parse_partitions(&ofdev->dev, - ofdev->dev.of_node, - &partitions); - if (num_partitions < 0) { - res = num_partitions; - goto release; - } - } - - res = mtd_device_register(mtd, partitions, num_partitions); + res = mtd_device_parse_register(mtd, NULL, &ppdata, NULL, 0); if (!res) return res; -release: nand_release(mtd); out: diff --git a/drivers/mtd/nand/tmio_nand.c b/drivers/mtd/nand/tmio_nand.c index 11e8371b5683..beebd95f7690 100644 --- a/drivers/mtd/nand/tmio_nand.c +++ b/drivers/mtd/nand/tmio_nand.c @@ -121,9 +121,6 @@ struct tmio_nand { #define mtd_to_tmio(m) container_of(m, struct tmio_nand, mtd) -#ifdef CONFIG_MTD_CMDLINE_PARTS -static const char *part_probes[] = { "cmdlinepart", NULL }; -#endif /*--------------------------------------------------------------------------*/ @@ -381,8 +378,6 @@ static int tmio_probe(struct platform_device *dev) struct tmio_nand *tmio; struct mtd_info *mtd; struct nand_chip *nand_chip; - struct mtd_partition *parts; - int nbparts = 0; int retval; if (data == NULL) @@ -461,15 +456,9 @@ static int tmio_probe(struct platform_device *dev) goto err_scan; } /* Register the partitions */ -#ifdef CONFIG_MTD_CMDLINE_PARTS - nbparts = parse_mtd_partitions(mtd, part_probes, &parts, 0); -#endif - if (nbparts <= 0 && data) { - parts = data->partition; - nbparts = data->num_partitions; - } - - retval = mtd_device_register(mtd, parts, nbparts); + retval = mtd_device_parse_register(mtd, NULL, 0, + data ? data->partition : NULL, + data ? 
data->num_partitions : 0); if (!retval) return retval; diff --git a/drivers/mtd/nand/txx9ndfmc.c b/drivers/mtd/nand/txx9ndfmc.c index bfba4e39a6c5..ace46fdaef58 100644 --- a/drivers/mtd/nand/txx9ndfmc.c +++ b/drivers/mtd/nand/txx9ndfmc.c @@ -74,7 +74,6 @@ struct txx9ndfmc_drvdata { unsigned char hold; /* in gbusclock */ unsigned char spw; /* in gbusclock */ struct nand_hw_control hw_control; - struct mtd_partition *parts[MAX_TXX9NDFMC_DEV]; }; static struct platform_device *mtd_to_platdev(struct mtd_info *mtd) @@ -287,7 +286,6 @@ static int txx9ndfmc_nand_scan(struct mtd_info *mtd) static int __init txx9ndfmc_probe(struct platform_device *dev) { struct txx9ndfmc_platform_data *plat = dev->dev.platform_data; - static const char *probes[] = { "cmdlinepart", NULL }; int hold, spw; int i; struct txx9ndfmc_drvdata *drvdata; @@ -333,7 +331,6 @@ static int __init txx9ndfmc_probe(struct platform_device *dev) struct txx9ndfmc_priv *txx9_priv; struct nand_chip *chip; struct mtd_info *mtd; - int nr_parts; if (!(plat->ch_mask & (1 << i))) continue; @@ -393,9 +390,7 @@ static int __init txx9ndfmc_probe(struct platform_device *dev) } mtd->name = txx9_priv->mtdname; - nr_parts = parse_mtd_partitions(mtd, probes, - &drvdata->parts[i], 0); - mtd_device_register(mtd, drvdata->parts[i], nr_parts); + mtd_device_parse_register(mtd, NULL, 0, NULL, 0); drvdata->mtds[i] = mtd; } @@ -421,7 +416,6 @@ static int __exit txx9ndfmc_remove(struct platform_device *dev) txx9_priv = chip->priv; nand_release(mtd); - kfree(drvdata->parts[i]); kfree(txx9_priv->mtdname); kfree(txx9_priv); } diff --git a/drivers/mtd/nftlcore.c b/drivers/mtd/nftlcore.c index b155666acfbe..cda77b562ad4 100644 --- a/drivers/mtd/nftlcore.c +++ b/drivers/mtd/nftlcore.c @@ -63,14 +63,12 @@ static void nftl_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) return; } - DEBUG(MTD_DEBUG_LEVEL1, "NFTL: add_mtd for %s\n", mtd->name); + pr_debug("NFTL: add_mtd for %s\n", mtd->name); nftl = kzalloc(sizeof(struct NFTLrecord), GFP_KERNEL); - if (!nftl) { - printk(KERN_WARNING "NFTL: out of memory for data structures\n"); + if (!nftl) return; - } nftl->mbd.mtd = mtd; nftl->mbd.devnum = -1; @@ -132,7 +130,7 @@ static void nftl_remove_dev(struct mtd_blktrans_dev *dev) { struct NFTLrecord *nftl = (void *)dev; - DEBUG(MTD_DEBUG_LEVEL1, "NFTL: remove_dev (i=%d)\n", dev->devnum); + pr_debug("NFTL: remove_dev (i=%d)\n", dev->devnum); del_mtd_blktrans_dev(dev); kfree(nftl->ReplUnitTable); @@ -149,7 +147,7 @@ int nftl_read_oob(struct mtd_info *mtd, loff_t offs, size_t len, struct mtd_oob_ops ops; int res; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = offs & mask; ops.ooblen = len; ops.oobbuf = buf; @@ -170,7 +168,7 @@ int nftl_write_oob(struct mtd_info *mtd, loff_t offs, size_t len, struct mtd_oob_ops ops; int res; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = offs & mask; ops.ooblen = len; ops.oobbuf = buf; @@ -193,7 +191,7 @@ static int nftl_write(struct mtd_info *mtd, loff_t offs, size_t len, struct mtd_oob_ops ops; int res; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooboffs = offs & mask; ops.ooblen = mtd->oobsize; ops.oobbuf = oob; @@ -220,7 +218,7 @@ static u16 NFTL_findfreeblock(struct NFTLrecord *nftl, int desperate ) /* Normally, we force a fold to happen before we run out of free blocks completely */ if (!desperate && nftl->numfreeEUNs < 2) { - DEBUG(MTD_DEBUG_LEVEL1, "NFTL_findfreeblock: there are too few free EUNs\n"); + pr_debug("NFTL_findfreeblock: there are too few free 
EUNs\n"); return BLOCK_NIL; } @@ -291,8 +289,7 @@ static u16 NFTL_foldchain (struct NFTLrecord *nftl, unsigned thisVUC, unsigned p if (block == 2) { foldmark = oob.u.c.FoldMark | oob.u.c.FoldMark1; if (foldmark == FOLD_MARK_IN_PROGRESS) { - DEBUG(MTD_DEBUG_LEVEL1, - "Write Inhibited on EUN %d\n", thisEUN); + pr_debug("Write Inhibited on EUN %d\n", thisEUN); inplace = 0; } else { /* There's no other reason not to do inplace, @@ -357,7 +354,7 @@ static u16 NFTL_foldchain (struct NFTLrecord *nftl, unsigned thisVUC, unsigned p if (BlockLastState[block] != SECTOR_FREE && BlockMap[block] != BLOCK_NIL && BlockMap[block] != targetEUN) { - DEBUG(MTD_DEBUG_LEVEL1, "Setting inplace to 0. VUC %d, " + pr_debug("Setting inplace to 0. VUC %d, " "block %d was %x lastEUN, " "and is in EUN %d (%s) %d\n", thisVUC, block, BlockLastState[block], @@ -373,14 +370,14 @@ static u16 NFTL_foldchain (struct NFTLrecord *nftl, unsigned thisVUC, unsigned p pendingblock < ((thisVUC + 1)* (nftl->EraseSize / 512)) && BlockLastState[pendingblock - (thisVUC * (nftl->EraseSize / 512))] != SECTOR_FREE) { - DEBUG(MTD_DEBUG_LEVEL1, "Pending write not free in EUN %d. " + pr_debug("Pending write not free in EUN %d. " "Folding out of place.\n", targetEUN); inplace = 0; } } if (!inplace) { - DEBUG(MTD_DEBUG_LEVEL1, "Cannot fold Virtual Unit Chain %d in place. " + pr_debug("Cannot fold Virtual Unit Chain %d in place. " "Trying out-of-place\n", thisVUC); /* We need to find a targetEUN to fold into. */ targetEUN = NFTL_findfreeblock(nftl, 1); @@ -410,7 +407,7 @@ static u16 NFTL_foldchain (struct NFTLrecord *nftl, unsigned thisVUC, unsigned p and the Erase Unit into which we are supposed to be copying. Go for it. */ - DEBUG(MTD_DEBUG_LEVEL1,"Folding chain %d into unit %d\n", thisVUC, targetEUN); + pr_debug("Folding chain %d into unit %d\n", thisVUC, targetEUN); for (block = 0; block < nftl->EraseSize / 512 ; block++) { unsigned char movebuf[512]; int ret; @@ -428,7 +425,7 @@ static u16 NFTL_foldchain (struct NFTLrecord *nftl, unsigned thisVUC, unsigned p ret = mtd->read(mtd, (nftl->EraseSize * BlockMap[block]) + (block * 512), 512, &retlen, movebuf); - if (ret < 0 && ret != -EUCLEAN) { + if (ret < 0 && !mtd_is_bitflip(ret)) { ret = mtd->read(mtd, (nftl->EraseSize * BlockMap[block]) + (block * 512), 512, &retlen, movebuf); @@ -457,7 +454,7 @@ static u16 NFTL_foldchain (struct NFTLrecord *nftl, unsigned thisVUC, unsigned p has duplicate chains, we need to free one of the chains because it's not necessary any more. */ thisEUN = nftl->EUNtable[thisVUC]; - DEBUG(MTD_DEBUG_LEVEL1,"Want to erase\n"); + pr_debug("Want to erase\n"); /* For each block in the old chain (except the targetEUN of course), free it and make it available for future use */ @@ -570,7 +567,7 @@ static inline u16 NFTL_findwriteunit(struct NFTLrecord *nftl, unsigned block) (writeEUN * nftl->EraseSize) + blockofs, 8, &retlen, (char *)&bci); - DEBUG(MTD_DEBUG_LEVEL2, "Status of block %d in EUN %d is %x\n", + pr_debug("Status of block %d in EUN %d is %x\n", block , writeEUN, le16_to_cpu(bci.Status)); status = bci.Status | bci.Status1; @@ -623,7 +620,7 @@ static inline u16 NFTL_findwriteunit(struct NFTLrecord *nftl, unsigned block) but they are reserved for when we're desperate. Well, now we're desperate. 
*/ - DEBUG(MTD_DEBUG_LEVEL1, "Using desperate==1 to find free EUN to accommodate write to VUC %d\n", thisVUC); + pr_debug("Using desperate==1 to find free EUN to accommodate write to VUC %d\n", thisVUC); writeEUN = NFTL_findfreeblock(nftl, 1); } if (writeEUN == BLOCK_NIL) { @@ -776,7 +773,7 @@ static int nftl_readblock(struct mtd_blktrans_dev *mbd, unsigned long block, size_t retlen; int res = mtd->read(mtd, ptr, 512, &retlen, buffer); - if (res < 0 && res != -EUCLEAN) + if (res < 0 && !mtd_is_bitflip(res)) return -EIO; } return 0; diff --git a/drivers/mtd/nftlmount.c b/drivers/mtd/nftlmount.c index e3cd1ffad2f6..ac4092591aea 100644 --- a/drivers/mtd/nftlmount.c +++ b/drivers/mtd/nftlmount.c @@ -32,7 +32,7 @@ /* find_boot_record: Find the NFTL Media Header and its Spare copy which contains the * various device information of the NFTL partition and Bad Unit Table. Update - * the ReplUnitTable[] table accroding to the Bad Unit Table. ReplUnitTable[] + * the ReplUnitTable[] table according to the Bad Unit Table. ReplUnitTable[] * is used for management of Erase Unit in other routines in nftl.c and nftlmount.c */ static int find_boot_record(struct NFTLrecord *nftl) @@ -297,7 +297,7 @@ static int check_free_sectors(struct NFTLrecord *nftl, unsigned int address, int * * Return: 0 when succeed, -1 on error. * - * ToDo: 1. Is it neceressary to check_free_sector after erasing ?? + * ToDo: 1. Is it necessary to check_free_sector after erasing ?? */ int NFTL_formatblock(struct NFTLrecord *nftl, int block) { @@ -337,7 +337,7 @@ int NFTL_formatblock(struct NFTLrecord *nftl, int block) nb_erases = le32_to_cpu(uci.WearInfo); nb_erases++; - /* wrap (almost impossible with current flashs) or free block */ + /* wrap (almost impossible with current flash) or free block */ if (nb_erases == 0) nb_erases = 1; @@ -363,10 +363,10 @@ fail: * Mark as 'IGNORE' each incorrect sector. This check is only done if the chain * was being folded when NFTL was interrupted. * - * The check_free_sectors in this function is neceressary. There is a possible + * The check_free_sectors in this function is necessary. There is a possible * situation that after writing the Data area, the Block Control Information is * not updated according (due to power failure or something) which leaves the block - * in an umconsistent state. So we have to check if a block is really FREE in this + * in an inconsistent state. So we have to check if a block is really FREE in this * case. */ static void check_sectors_in_chain(struct NFTLrecord *nftl, unsigned int first_block) { @@ -428,7 +428,7 @@ static int calc_chain_length(struct NFTLrecord *nftl, unsigned int first_block) for (;;) { length++; - /* avoid infinite loops, although this is guaranted not to + /* avoid infinite loops, although this is guaranteed not to happen because of the previous checks */ if (length >= nftl->nb_blocks) { printk("nftl: length too long %d !\n", length); @@ -447,11 +447,11 @@ static int calc_chain_length(struct NFTLrecord *nftl, unsigned int first_block) /* format_chain: Format an invalid Virtual Unit chain. It frees all the Erase Units in a * Virtual Unit Chain, i.e. all the units are disconnected. * - * It is not stricly correct to begin from the first block of the chain because + * It is not strictly correct to begin from the first block of the chain because * if we stop the code, we may see again a valid chain if there was a first_block * flag in a block inside it. But is it really a problem ? * - * FixMe: Figure out what the last statesment means. 
What if power failure when we are + * FixMe: Figure out what the last statement means. What if power failure when we are * in the for (;;) loop formatting blocks ?? */ static void format_chain(struct NFTLrecord *nftl, unsigned int first_block) @@ -485,7 +485,7 @@ static void format_chain(struct NFTLrecord *nftl, unsigned int first_block) * totally free (only 0xff). * * Definition: Free Erase Unit -- A properly erased/formatted Free Erase Unit should have meet the - * following critia: + * following criteria: * 1. */ static int check_and_mark_free_block(struct NFTLrecord *nftl, int block) { @@ -502,7 +502,7 @@ static int check_and_mark_free_block(struct NFTLrecord *nftl, int block) erase_mark = le16_to_cpu ((h1.EraseMark | h1.EraseMark1)); if (erase_mark != ERASE_MARK) { /* if no erase mark, the block must be totally free. This is - possible in two cases : empty filsystem or interrupted erase (very unlikely) */ + possible in two cases : empty filesystem or interrupted erase (very unlikely) */ if (check_free_sectors (nftl, block * nftl->EraseSize, nftl->EraseSize, 1) != 0) return -1; @@ -544,7 +544,7 @@ static int check_and_mark_free_block(struct NFTLrecord *nftl, int block) /* get_fold_mark: Read fold mark from Unit Control Information #2, we use FOLD_MARK_IN_PROGRESS * to indicate that we are in the progression of a Virtual Unit Chain folding. If the UCI #2 * is FOLD_MARK_IN_PROGRESS when mounting the NFTL, the (previous) folding process is interrupted - * for some reason. A clean up/check of the VUC is neceressary in this case. + * for some reason. A clean up/check of the VUC is necessary in this case. * * WARNING: return 0 if read error */ @@ -657,7 +657,7 @@ int NFTL_mount(struct NFTLrecord *s) printk("Block %d: incorrect logical block: %d expected: %d\n", block, logical_block, first_logical_block); /* the chain is incorrect : we must format it, - but we need to read it completly */ + but we need to read it completely */ do_format_chain = 1; } if (is_first_block) { @@ -669,7 +669,7 @@ int NFTL_mount(struct NFTLrecord *s) printk("Block %d: incorrectly marked as first block in chain\n", block); /* the chain is incorrect : we must format it, - but we need to read it completly */ + but we need to read it completely */ do_format_chain = 1; } else { printk("Block %d: folding in progress - ignoring first block flag\n", diff --git a/drivers/mtd/ofpart.c b/drivers/mtd/ofpart.c index a996718fa6b0..64be8f0848b0 100644 --- a/drivers/mtd/ofpart.c +++ b/drivers/mtd/ofpart.c @@ -20,14 +20,23 @@ #include <linux/slab.h> #include <linux/mtd/partitions.h> -int __devinit of_mtd_parse_partitions(struct device *dev, - struct device_node *node, - struct mtd_partition **pparts) +static int parse_ofpart_partitions(struct mtd_info *master, + struct mtd_partition **pparts, + struct mtd_part_parser_data *data) { + struct device_node *node; const char *partname; struct device_node *pp; int nr_parts, i; + + if (!data) + return 0; + + node = data->of_node; + if (!node) + return 0; + /* First count the subnodes */ pp = NULL; nr_parts = 0; @@ -69,7 +78,7 @@ int __devinit of_mtd_parse_partitions(struct device *dev, if (!i) { of_node_put(pp); - dev_err(dev, "No valid partition found on %s\n", node->full_name); + pr_err("No valid partition found on %s\n", node->full_name); kfree(*pparts); *pparts = NULL; return -EINVAL; @@ -77,6 +86,99 @@ int __devinit of_mtd_parse_partitions(struct device *dev, return nr_parts; } -EXPORT_SYMBOL(of_mtd_parse_partitions); + +static struct mtd_part_parser ofpart_parser = { + .owner = 
THIS_MODULE, + .parse_fn = parse_ofpart_partitions, + .name = "ofpart", +}; + +static int parse_ofoldpart_partitions(struct mtd_info *master, + struct mtd_partition **pparts, + struct mtd_part_parser_data *data) +{ + struct device_node *dp; + int i, plen, nr_parts; + const struct { + __be32 offset, len; + } *part; + const char *names; + + if (!data) + return 0; + + dp = data->of_node; + if (!dp) + return 0; + + part = of_get_property(dp, "partitions", &plen); + if (!part) + return 0; /* No partitions found */ + + pr_warning("Device tree uses obsolete partition map binding: %s\n", + dp->full_name); + + nr_parts = plen / sizeof(part[0]); + + *pparts = kzalloc(nr_parts * sizeof(*(*pparts)), GFP_KERNEL); + if (!*pparts) + return -ENOMEM; + + names = of_get_property(dp, "partition-names", &plen); + + for (i = 0; i < nr_parts; i++) { + (*pparts)[i].offset = be32_to_cpu(part->offset); + (*pparts)[i].size = be32_to_cpu(part->len) & ~1; + /* bit 0 set signifies read only partition */ + if (be32_to_cpu(part->len) & 1) + (*pparts)[i].mask_flags = MTD_WRITEABLE; + + if (names && (plen > 0)) { + int len = strlen(names) + 1; + + (*pparts)[i].name = (char *)names; + plen -= len; + names += len; + } else { + (*pparts)[i].name = "unnamed"; + } + + part++; + } + + return nr_parts; +} + +static struct mtd_part_parser ofoldpart_parser = { + .owner = THIS_MODULE, + .parse_fn = parse_ofoldpart_partitions, + .name = "ofoldpart", +}; + +static int __init ofpart_parser_init(void) +{ + int rc; + rc = register_mtd_parser(&ofpart_parser); + if (rc) + goto out; + + rc = register_mtd_parser(&ofoldpart_parser); + if (!rc) + return 0; + + deregister_mtd_parser(&ofpart_parser); +out: + return rc; +} + +module_init(ofpart_parser_init); MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Parser for MTD partitioning information in device tree"); +MODULE_AUTHOR("Vitaly Wool, David Gibson"); +/* + * When MTD core cannot find the requested parser, it tries to load the module + * with the same name. Since we provide the ofoldpart parser, we should have + * the corresponding alias. + */ +MODULE_ALIAS("ofoldpart"); diff --git a/drivers/mtd/onenand/generic.c b/drivers/mtd/onenand/generic.c index 2d70d354d846..7813095264a5 100644 --- a/drivers/mtd/onenand/generic.c +++ b/drivers/mtd/onenand/generic.c @@ -30,11 +30,8 @@ */ #define DRIVER_NAME "onenand-flash" -static const char *part_probes[] = { "cmdlinepart", NULL, }; - struct onenand_info { struct mtd_info mtd; - struct mtd_partition *parts; struct onenand_chip onenand; }; @@ -73,13 +70,9 @@ static int __devinit generic_onenand_probe(struct platform_device *pdev) goto out_iounmap; } - err = parse_mtd_partitions(&info->mtd, part_probes, &info->parts, 0); - if (err > 0) - mtd_device_register(&info->mtd, info->parts, err); - else if (err <= 0 && pdata && pdata->parts) - mtd_device_register(&info->mtd, pdata->parts, pdata->nr_parts); - else - err = mtd_device_register(&info->mtd, NULL, 0); + err = mtd_device_parse_register(&info->mtd, NULL, 0, + pdata ? pdata->parts : NULL, + pdata ? 
pdata->nr_parts : 0); platform_set_drvdata(pdev, info); @@ -104,7 +97,6 @@ static int __devexit generic_onenand_remove(struct platform_device *pdev) platform_set_drvdata(pdev, NULL); if (info) { - mtd_device_unregister(&info->mtd); onenand_release(&info->mtd); release_mem_region(res->start, size); iounmap(info->onenand.base); diff --git a/drivers/mtd/onenand/omap2.c b/drivers/mtd/onenand/omap2.c index 6a1d6d9a2df9..7e9ea6852b67 100644 --- a/drivers/mtd/onenand/omap2.c +++ b/drivers/mtd/onenand/omap2.c @@ -57,7 +57,6 @@ struct omap2_onenand { unsigned long phys_base; int gpio_irq; struct mtd_info mtd; - struct mtd_partition *parts; struct onenand_chip onenand; struct completion irq_done; struct completion dma_done; @@ -67,8 +66,6 @@ struct omap2_onenand { struct regulator *regulator; }; -static const char *part_probes[] = { "cmdlinepart", NULL, }; - static void omap2_onenand_dma_cb(int lch, u16 ch_status, void *data) { struct omap2_onenand *c = data; @@ -741,6 +738,7 @@ static int __devinit omap2_onenand_probe(struct platform_device *pdev) c->regulator = regulator_get(&pdev->dev, "vonenand"); if (IS_ERR(c->regulator)) { dev_err(&pdev->dev, "Failed to get regulator\n"); + r = PTR_ERR(c->regulator); goto err_release_dma; } c->onenand.enable = omap2_onenand_enable; @@ -753,13 +751,9 @@ static int __devinit omap2_onenand_probe(struct platform_device *pdev) if ((r = onenand_scan(&c->mtd, 1)) < 0) goto err_release_regulator; - r = parse_mtd_partitions(&c->mtd, part_probes, &c->parts, 0); - if (r > 0) - r = mtd_device_register(&c->mtd, c->parts, r); - else if (pdata->parts != NULL) - r = mtd_device_register(&c->mtd, pdata->parts, pdata->nr_parts); - else - r = mtd_device_register(&c->mtd, NULL, 0); + r = mtd_device_parse_register(&c->mtd, NULL, 0, + pdata ? pdata->parts : NULL, + pdata ? 
pdata->nr_parts : 0); if (r) goto err_release_onenand; @@ -786,7 +780,6 @@ err_release_mem_region: err_free_cs: gpmc_cs_free(c->gpmc_cs); err_kfree: - kfree(c->parts); kfree(c); return r; @@ -809,7 +802,6 @@ static int __devexit omap2_onenand_remove(struct platform_device *pdev) iounmap(c->onenand.base); release_mem_region(c->phys_base, ONENAND_IO_SIZE); gpmc_cs_free(c->gpmc_cs); - kfree(c->parts); kfree(c); return 0; diff --git a/drivers/mtd/onenand/onenand_base.c b/drivers/mtd/onenand/onenand_base.c index ac9e959802a7..a8394730b4b6 100644 --- a/drivers/mtd/onenand/onenand_base.c +++ b/drivers/mtd/onenand/onenand_base.c @@ -1015,7 +1015,7 @@ static void onenand_release_device(struct mtd_info *mtd) } /** - * onenand_transfer_auto_oob - [Internal] oob auto-placement transfer + * onenand_transfer_auto_oob - [INTERN] oob auto-placement transfer * @param mtd MTD device structure * @param buf destination address * @param column oob offset to read from @@ -1079,7 +1079,7 @@ static int onenand_recover_lsb(struct mtd_info *mtd, loff_t addr, int status) return status; /* check if we failed due to uncorrectable error */ - if (status != -EBADMSG && status != ONENAND_BBT_READ_ECC_ERROR) + if (!mtd_is_eccerr(status) && status != ONENAND_BBT_READ_ECC_ERROR) return status; /* check if address lies in MLC region */ @@ -1122,10 +1122,10 @@ static int onenand_mlc_read_ops_nolock(struct mtd_info *mtd, loff_t from, int ret = 0; int writesize = this->writesize; - DEBUG(MTD_DEBUG_LEVEL3, "%s: from = 0x%08x, len = %i\n", - __func__, (unsigned int) from, (int) len); + pr_debug("%s: from = 0x%08x, len = %i\n", __func__, (unsigned int)from, + (int)len); - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) oobsize = this->ecclayout->oobavail; else oobsize = mtd->oobsize; @@ -1159,7 +1159,7 @@ static int onenand_mlc_read_ops_nolock(struct mtd_info *mtd, loff_t from, if (unlikely(ret)) ret = onenand_recover_lsb(mtd, from, ret); onenand_update_bufferram(mtd, from, !ret); - if (ret == -EBADMSG) + if (mtd_is_eccerr(ret)) ret = 0; if (ret) break; @@ -1170,7 +1170,7 @@ static int onenand_mlc_read_ops_nolock(struct mtd_info *mtd, loff_t from, thisooblen = oobsize - oobcolumn; thisooblen = min_t(int, thisooblen, ooblen - oobread); - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) onenand_transfer_auto_oob(mtd, oobbuf, oobcolumn, thisooblen); else this->read_bufferram(mtd, ONENAND_SPARERAM, oobbuf, oobcolumn, thisooblen); @@ -1226,10 +1226,10 @@ static int onenand_read_ops_nolock(struct mtd_info *mtd, loff_t from, int ret = 0, boundary = 0; int writesize = this->writesize; - DEBUG(MTD_DEBUG_LEVEL3, "%s: from = 0x%08x, len = %i\n", - __func__, (unsigned int) from, (int) len); + pr_debug("%s: from = 0x%08x, len = %i\n", __func__, (unsigned int)from, + (int)len); - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) oobsize = this->ecclayout->oobavail; else oobsize = mtd->oobsize; @@ -1255,7 +1255,7 @@ static int onenand_read_ops_nolock(struct mtd_info *mtd, loff_t from, this->command(mtd, ONENAND_CMD_READ, from, writesize); ret = this->wait(mtd, FL_READING); onenand_update_bufferram(mtd, from, !ret); - if (ret == -EBADMSG) + if (mtd_is_eccerr(ret)) ret = 0; } } @@ -1291,7 +1291,7 @@ static int onenand_read_ops_nolock(struct mtd_info *mtd, loff_t from, thisooblen = oobsize - oobcolumn; thisooblen = min_t(int, thisooblen, ooblen - oobread); - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) onenand_transfer_auto_oob(mtd, oobbuf, oobcolumn, thisooblen); else 
this->read_bufferram(mtd, ONENAND_SPARERAM, oobbuf, oobcolumn, thisooblen); @@ -1315,7 +1315,7 @@ static int onenand_read_ops_nolock(struct mtd_info *mtd, loff_t from, /* Now wait for load */ ret = this->wait(mtd, FL_READING); onenand_update_bufferram(mtd, from, !ret); - if (ret == -EBADMSG) + if (mtd_is_eccerr(ret)) ret = 0; } @@ -1351,19 +1351,19 @@ static int onenand_read_oob_nolock(struct mtd_info *mtd, loff_t from, struct mtd_ecc_stats stats; int read = 0, thislen, column, oobsize; size_t len = ops->ooblen; - mtd_oob_mode_t mode = ops->mode; + unsigned int mode = ops->mode; u_char *buf = ops->oobbuf; int ret = 0, readcmd; from += ops->ooboffs; - DEBUG(MTD_DEBUG_LEVEL3, "%s: from = 0x%08x, len = %i\n", - __func__, (unsigned int) from, (int) len); + pr_debug("%s: from = 0x%08x, len = %i\n", __func__, (unsigned int)from, + (int)len); /* Initialize return length value */ ops->oobretlen = 0; - if (mode == MTD_OOB_AUTO) + if (mode == MTD_OPS_AUTO_OOB) oobsize = this->ecclayout->oobavail; else oobsize = mtd->oobsize; @@ -1403,13 +1403,13 @@ static int onenand_read_oob_nolock(struct mtd_info *mtd, loff_t from, if (unlikely(ret)) ret = onenand_recover_lsb(mtd, from, ret); - if (ret && ret != -EBADMSG) { + if (ret && !mtd_is_eccerr(ret)) { printk(KERN_ERR "%s: read failed = 0x%x\n", __func__, ret); break; } - if (mode == MTD_OOB_AUTO) + if (mode == MTD_OPS_AUTO_OOB) onenand_transfer_auto_oob(mtd, buf, column, thislen); else this->read_bufferram(mtd, ONENAND_SPARERAM, buf, column, thislen); @@ -1487,10 +1487,10 @@ static int onenand_read_oob(struct mtd_info *mtd, loff_t from, int ret; switch (ops->mode) { - case MTD_OOB_PLACE: - case MTD_OOB_AUTO: + case MTD_OPS_PLACE_OOB: + case MTD_OPS_AUTO_OOB: break; - case MTD_OOB_RAW: + case MTD_OPS_RAW: /* Not implemented yet */ default: return -EINVAL; @@ -1576,8 +1576,8 @@ int onenand_bbt_read_oob(struct mtd_info *mtd, loff_t from, size_t len = ops->ooblen; u_char *buf = ops->oobbuf; - DEBUG(MTD_DEBUG_LEVEL3, "%s: from = 0x%08x, len = %zi\n", - __func__, (unsigned int) from, len); + pr_debug("%s: from = 0x%08x, len = %zi\n", __func__, (unsigned int)from, + len); /* Initialize return value */ ops->oobretlen = 0; @@ -1750,8 +1750,8 @@ static int onenand_panic_write(struct mtd_info *mtd, loff_t to, size_t len, /* Wait for any existing operation to clear */ onenand_panic_wait(mtd); - DEBUG(MTD_DEBUG_LEVEL3, "%s: to = 0x%08x, len = %i\n", - __func__, (unsigned int) to, (int) len); + pr_debug("%s: to = 0x%08x, len = %i\n", __func__, (unsigned int)to, + (int)len); /* Initialize retlen, in case of early exit */ *retlen = 0; @@ -1821,7 +1821,7 @@ static int onenand_panic_write(struct mtd_info *mtd, loff_t to, size_t len, } /** - * onenand_fill_auto_oob - [Internal] oob auto-placement transfer + * onenand_fill_auto_oob - [INTERN] oob auto-placement transfer * @param mtd MTD device structure * @param oob_buf oob buffer * @param buf source address @@ -1883,8 +1883,8 @@ static int onenand_write_ops_nolock(struct mtd_info *mtd, loff_t to, u_char *oobbuf; int ret = 0, cmd; - DEBUG(MTD_DEBUG_LEVEL3, "%s: to = 0x%08x, len = %i\n", - __func__, (unsigned int) to, (int) len); + pr_debug("%s: to = 0x%08x, len = %i\n", __func__, (unsigned int)to, + (int)len); /* Initialize retlen, in case of early exit */ ops->retlen = 0; @@ -1908,7 +1908,7 @@ static int onenand_write_ops_nolock(struct mtd_info *mtd, loff_t to, if (!len) return 0; - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) oobsize = this->ecclayout->oobavail; else oobsize = mtd->oobsize; @@ -1945,7 
+1945,7 @@ static int onenand_write_ops_nolock(struct mtd_info *mtd, loff_t to, /* We send data to spare ram with oobsize * to prevent byte access */ memset(oobbuf, 0xff, mtd->oobsize); - if (ops->mode == MTD_OOB_AUTO) + if (ops->mode == MTD_OPS_AUTO_OOB) onenand_fill_auto_oob(mtd, oobbuf, oob, oobcolumn, thisooblen); else memcpy(oobbuf + oobcolumn, oob, thisooblen); @@ -2055,7 +2055,7 @@ static int onenand_write_ops_nolock(struct mtd_info *mtd, loff_t to, /** - * onenand_write_oob_nolock - [Internal] OneNAND write out-of-band + * onenand_write_oob_nolock - [INTERN] OneNAND write out-of-band * @param mtd MTD device structure * @param to offset to write to * @param len number of bytes to write @@ -2074,17 +2074,17 @@ static int onenand_write_oob_nolock(struct mtd_info *mtd, loff_t to, u_char *oobbuf; size_t len = ops->ooblen; const u_char *buf = ops->oobbuf; - mtd_oob_mode_t mode = ops->mode; + unsigned int mode = ops->mode; to += ops->ooboffs; - DEBUG(MTD_DEBUG_LEVEL3, "%s: to = 0x%08x, len = %i\n", - __func__, (unsigned int) to, (int) len); + pr_debug("%s: to = 0x%08x, len = %i\n", __func__, (unsigned int)to, + (int)len); /* Initialize retlen, in case of early exit */ ops->oobretlen = 0; - if (mode == MTD_OOB_AUTO) + if (mode == MTD_OPS_AUTO_OOB) oobsize = this->ecclayout->oobavail; else oobsize = mtd->oobsize; @@ -2128,7 +2128,7 @@ static int onenand_write_oob_nolock(struct mtd_info *mtd, loff_t to, /* We send data to spare ram with oobsize * to prevent byte access */ memset(oobbuf, 0xff, mtd->oobsize); - if (mode == MTD_OOB_AUTO) + if (mode == MTD_OPS_AUTO_OOB) onenand_fill_auto_oob(mtd, oobbuf, buf, column, thislen); else memcpy(oobbuf + column, buf, thislen); @@ -2217,10 +2217,10 @@ static int onenand_write_oob(struct mtd_info *mtd, loff_t to, int ret; switch (ops->mode) { - case MTD_OOB_PLACE: - case MTD_OOB_AUTO: + case MTD_OPS_PLACE_OOB: + case MTD_OPS_AUTO_OOB: break; - case MTD_OOB_RAW: + case MTD_OPS_RAW: /* Not implemented yet */ default: return -EINVAL; @@ -2281,7 +2281,7 @@ static int onenand_multiblock_erase_verify(struct mtd_info *mtd, } /** - * onenand_multiblock_erase - [Internal] erase block(s) using multiblock erase + * onenand_multiblock_erase - [INTERN] erase block(s) using multiblock erase * @param mtd MTD device structure * @param instr erase instruction * @param region erase region @@ -2397,7 +2397,7 @@ static int onenand_multiblock_erase(struct mtd_info *mtd, /** - * onenand_block_by_block_erase - [Internal] erase block(s) using regular erase + * onenand_block_by_block_erase - [INTERN] erase block(s) using regular erase * @param mtd MTD device structure * @param instr erase instruction * @param region erase region @@ -2489,8 +2489,9 @@ static int onenand_erase(struct mtd_info *mtd, struct erase_info *instr) struct mtd_erase_region_info *region = NULL; loff_t region_offset = 0; - DEBUG(MTD_DEBUG_LEVEL3, "%s: start=0x%012llx, len=%llu\n", __func__, - (unsigned long long) instr->addr, (unsigned long long) instr->len); + pr_debug("%s: start=0x%012llx, len=%llu\n", __func__, + (unsigned long long)instr->addr, + (unsigned long long)instr->len); /* Do not allow erase past end of device */ if (unlikely((len + addr) > mtd->size)) { @@ -2558,7 +2559,7 @@ static int onenand_erase(struct mtd_info *mtd, struct erase_info *instr) */ static void onenand_sync(struct mtd_info *mtd) { - DEBUG(MTD_DEBUG_LEVEL3, "%s: called\n", __func__); + pr_debug("%s: called\n", __func__); /* Grab the lock and see if the device is available */ onenand_get_device(mtd, FL_SYNCING); @@ -2602,7 
+2603,7 @@ static int onenand_default_block_markbad(struct mtd_info *mtd, loff_t ofs) struct bbm_info *bbm = this->bbm; u_char buf[2] = {0, 0}; struct mtd_oob_ops ops = { - .mode = MTD_OOB_PLACE, + .mode = MTD_OPS_PLACE_OOB, .ooblen = 2, .oobbuf = buf, .ooboffs = 0, @@ -2922,7 +2923,7 @@ static int onenand_otp_command(struct mtd_info *mtd, int cmd, loff_t addr, } /** - * onenand_otp_write_oob_nolock - [Internal] OneNAND write out-of-band, specific to OTP + * onenand_otp_write_oob_nolock - [INTERN] OneNAND write out-of-band, specific to OTP * @param mtd MTD device structure * @param to offset to write to * @param len number of bytes to write @@ -3170,7 +3171,7 @@ static int do_otp_lock(struct mtd_info *mtd, loff_t from, size_t len, this->command(mtd, ONENAND_CMD_RESET, 0, 0); this->wait(mtd, FL_RESETING); } else { - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooblen = len; ops.oobbuf = buf; ops.ooboffs = 0; @@ -3429,6 +3430,19 @@ static void onenand_check_features(struct mtd_info *mtd) else if (numbufs == 1) { this->options |= ONENAND_HAS_4KB_PAGE; this->options |= ONENAND_HAS_CACHE_PROGRAM; + /* + * There are two different 4KiB-pagesize chips, + * and there is no way to tell them apart from the H/W config values. + * + * To detect the correct NOP for each chip, + * check the version ID as a workaround. + * + * Currently known: + * KFM4G16Q4M has NOP 4 with version ID 0x0131 + * KFM4G16Q5M has NOP 1 with version ID 0x013e + */ + if ((this->version_id & 0xf) == 0xe) + this->options |= ONENAND_HAS_NOP_1; } case ONENAND_DEVICE_DENSITY_2Gb: @@ -3663,7 +3677,7 @@ static int flexonenand_check_blocks_erased(struct mtd_info *mtd, int start, int int i, ret; int block; struct mtd_oob_ops ops = { - .mode = MTD_OOB_PLACE, + .mode = MTD_OPS_PLACE_OOB, .ooboffs = 0, .ooblen = mtd->oobsize, .datbuf = NULL, @@ -4054,6 +4068,8 @@ int onenand_scan(struct mtd_info *mtd, int maxchips) this->ecclayout = &onenand_oob_128; mtd->subpage_sft = 2; } + if (ONENAND_IS_NOP_1(this)) + mtd->subpage_sft = 0; break; case 64: this->ecclayout = &onenand_oob_64; diff --git a/drivers/mtd/onenand/onenand_bbt.c b/drivers/mtd/onenand/onenand_bbt.c index b2d7fc5ea25d..66fe3b7e7851 100644 --- a/drivers/mtd/onenand/onenand_bbt.c +++ b/drivers/mtd/onenand/onenand_bbt.c @@ -81,7 +81,7 @@ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr startblock = 0; from = 0; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.ooblen = readlen; ops.oobbuf = buf; ops.len = ops.ooboffs = ops.retlen = ops.oobretlen = 0; @@ -154,7 +154,7 @@ static int onenand_isbad_bbt(struct mtd_info *mtd, loff_t offs, int allowbbt) block = (int) (onenand_block(this, offs) << 1); res = (bbm->bbt[block >> 3] >> (block & 0x06)) & 0x03; - DEBUG(MTD_DEBUG_LEVEL2, "onenand_isbad_bbt: bbt info for offs 0x%08x: (block %d) 0x%02x\n", + pr_debug("onenand_isbad_bbt: bbt info for offs 0x%08x: (block %d) 0x%02x\n", (unsigned int) offs, block >> 1, res); switch ((int) res) { @@ -189,10 +189,8 @@ int onenand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) len = this->chipsize >> (this->erase_shift + 2); /* Allocate memory (2bit per block) and clear the memory bad block table */ bbm->bbt = kzalloc(len, GFP_KERNEL); - if (!bbm->bbt) { - printk(KERN_ERR "onenand_scan_bbt: Out of memory\n"); + if (!bbm->bbt) return -ENOMEM; - } /* Set the bad block position */ bbm->badblockpos = ONENAND_BADBLOCK_POS; diff --git a/drivers/mtd/onenand/samsung.c b/drivers/mtd/onenand/samsung.c index 3306b5b3c736..5474547eafc2 100644 --- 
a/drivers/mtd/onenand/samsung.c +++ b/drivers/mtd/onenand/samsung.c @@ -147,7 +147,6 @@ struct s3c_onenand { struct resource *dma_res; unsigned long phys_base; struct completion complete; - struct mtd_partition *parts; }; #define CMD_MAP_00(dev, addr) (dev->cmd_map(MAP_00, ((addr) << 1))) @@ -157,8 +156,6 @@ struct s3c_onenand { static struct s3c_onenand *onenand; -static const char *part_probes[] = { "cmdlinepart", NULL, }; - static inline int s3c_read_reg(int offset) { return readl(onenand->base + offset); @@ -1017,13 +1014,9 @@ static int s3c_onenand_probe(struct platform_device *pdev) if (s3c_read_reg(MEM_CFG_OFFSET) & ONENAND_SYS_CFG1_SYNC_READ) dev_info(&onenand->pdev->dev, "OneNAND Sync. Burst Read enabled\n"); - err = parse_mtd_partitions(mtd, part_probes, &onenand->parts, 0); - if (err > 0) - mtd_device_register(mtd, onenand->parts, err); - else if (err <= 0 && pdata && pdata->parts) - mtd_device_register(mtd, pdata->parts, pdata->nr_parts); - else - err = mtd_device_register(mtd, NULL, 0); + err = mtd_device_parse_register(mtd, NULL, 0, + pdata ? pdata->parts : NULL, + pdata ? pdata->nr_parts : 0); platform_set_drvdata(pdev, mtd); diff --git a/drivers/mtd/redboot.c b/drivers/mtd/redboot.c index 84b4dda023f4..e366b1d84ead 100644 --- a/drivers/mtd/redboot.c +++ b/drivers/mtd/redboot.c @@ -57,8 +57,8 @@ static inline int redboot_checksum(struct fis_image_desc *img) } static int parse_redboot_partitions(struct mtd_info *master, - struct mtd_partition **pparts, - unsigned long fis_origin) + struct mtd_partition **pparts, + struct mtd_part_parser_data *data) { int nrparts = 0; struct fis_image_desc *buf; @@ -198,11 +198,10 @@ static int parse_redboot_partitions(struct mtd_info *master, goto out; } new_fl->img = &buf[i]; - if (fis_origin) { - buf[i].flash_base -= fis_origin; - } else { - buf[i].flash_base &= master->size-1; - } + if (data && data->origin) + buf[i].flash_base -= data->origin; + else + buf[i].flash_base &= master->size-1; /* I'm sure the JFFS2 code has done me permanent damage. * I now think the following is _normal_ @@ -298,6 +297,9 @@ static struct mtd_part_parser redboot_parser = { .name = "RedBoot", }; +/* mtd parsers will request the module by parser name */ +MODULE_ALIAS("RedBoot"); + static int __init redboot_parser_init(void) { return register_mtd_parser(&redboot_parser); diff --git a/drivers/mtd/sm_ftl.c b/drivers/mtd/sm_ftl.c index ed3d6cd2c6dc..fddb714e323c 100644 --- a/drivers/mtd/sm_ftl.c +++ b/drivers/mtd/sm_ftl.c @@ -34,7 +34,7 @@ module_param(debug, int, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(debug, "Debug level (0-2)"); -/* ------------------- sysfs attributtes ---------------------------------- */ +/* ------------------- sysfs attributes ---------------------------------- */ struct sm_sysfs_attribute { struct device_attribute dev_attr; char *data; @@ -138,7 +138,7 @@ static int sm_get_lba(uint8_t *lba) if ((lba[0] & 0xF8) != 0x10) return -2; - /* check parity - endianess doesn't matter */ + /* check parity - endianness doesn't matter */ if (hweight16(*(uint16_t *)lba) & 1) return -2; @@ -147,7 +147,7 @@ static int sm_get_lba(uint8_t *lba) /* - * Read LBA asscociated with block + * Read LBA associated with block * returns -1, if block is erased * returns -2 if error happens */ @@ -252,11 +252,11 @@ static int sm_read_sector(struct sm_ftl *ftl, return 0; } - /* User might not need the oob, but we do for data vertification */ + /* User might not need the oob, but we do for data verification */ if (!oob) oob = &tmp_oob; - ops.mode = ftl->smallpagenand ? 
MTD_OOB_RAW : MTD_OOB_PLACE; + ops.mode = ftl->smallpagenand ? MTD_OPS_RAW : MTD_OPS_PLACE_OOB; ops.ooboffs = 0; ops.ooblen = SM_OOB_SIZE; ops.oobbuf = (void *)oob; @@ -276,12 +276,12 @@ again: return ret; } - /* Unfortunelly, oob read will _always_ succeed, + /* Unfortunately, oob read will _always_ succeed, despite card removal..... */ ret = mtd->read_oob(mtd, sm_mkoffset(ftl, zone, block, boffset), &ops); /* Test for unknown errors */ - if (ret != 0 && ret != -EUCLEAN && ret != -EBADMSG) { + if (ret != 0 && !mtd_is_bitflip_or_eccerr(ret)) { dbg("read of block %d at zone %d, failed due to error (%d)", block, zone, ret); goto again; @@ -306,7 +306,7 @@ again: } /* Test ECC*/ - if (ret == -EBADMSG || + if (mtd_is_eccerr(ret) || (ftl->smallpagenand && sm_correct_sector(buffer, oob))) { dbg("read of block %d at zone %d, failed due to ECC error", @@ -336,7 +336,7 @@ static int sm_write_sector(struct sm_ftl *ftl, if (ftl->unstable) return -EIO; - ops.mode = ftl->smallpagenand ? MTD_OOB_RAW : MTD_OOB_PLACE; + ops.mode = ftl->smallpagenand ? MTD_OPS_RAW : MTD_OPS_PLACE_OOB; ops.len = SM_SECTOR_SIZE; ops.datbuf = buffer; ops.ooboffs = 0; @@ -447,14 +447,14 @@ static void sm_mark_block_bad(struct sm_ftl *ftl, int zone, int block) /* We aren't checking the return value, because we don't care */ /* This also fails on fake xD cards, but I guess these won't expose - any bad blocks till fail completly */ + any bad blocks till fail completely */ for (boffset = 0; boffset < ftl->block_size; boffset += SM_SECTOR_SIZE) sm_write_sector(ftl, zone, block, boffset, NULL, &oob); } /* * Erase a block within a zone - * If erase succedes, it updates free block fifo, otherwise marks block as bad + * If erase succeeds, it updates free block fifo, otherwise marks block as bad */ static int sm_erase_block(struct sm_ftl *ftl, int zone_num, uint16_t block, int put_free) @@ -510,7 +510,7 @@ static void sm_erase_callback(struct erase_info *self) complete(&ftl->erase_completion); } -/* Throughtly test that block is valid. */ +/* Thoroughly test that block is valid. 
*/ static int sm_check_block(struct sm_ftl *ftl, int zone, int block) { int boffset; @@ -526,7 +526,7 @@ static int sm_check_block(struct sm_ftl *ftl, int zone, int block) for (boffset = 0; boffset < ftl->block_size; boffset += SM_SECTOR_SIZE) { - /* This shoudn't happen anyway */ + /* This shouldn't happen anyway */ if (sm_read_sector(ftl, zone, block, boffset, NULL, &oob)) return -2; diff --git a/drivers/mtd/ssfdc.c b/drivers/mtd/ssfdc.c index 5cd189793332..976e3d28b962 100644 --- a/drivers/mtd/ssfdc.c +++ b/drivers/mtd/ssfdc.c @@ -135,8 +135,7 @@ static int get_valid_cis_sector(struct mtd_info *mtd) /* Found */ cis_sector = (int)(offset >> SECTOR_SHIFT); } else { - DEBUG(MTD_DEBUG_LEVEL1, - "SSFDC_RO: CIS/IDI sector not found" + pr_debug("SSFDC_RO: CIS/IDI sector not found" " on %s (mtd%d)\n", mtd->name, mtd->index); } @@ -170,7 +169,7 @@ static int read_raw_oob(struct mtd_info *mtd, loff_t offs, uint8_t *buf) struct mtd_oob_ops ops; int ret; - ops.mode = MTD_OOB_RAW; + ops.mode = MTD_OPS_RAW; ops.ooboffs = 0; ops.ooblen = OOB_SIZE; ops.oobbuf = buf; @@ -221,8 +220,7 @@ static int get_logical_address(uint8_t *oob_buf) block_address >>= 1; if (get_parity(block_address, 10) != parity) { - DEBUG(MTD_DEBUG_LEVEL0, - "SSFDC_RO: logical address field%d" + pr_debug("SSFDC_RO: logical address field%d" "parity error(0x%04X)\n", j+1, block_address); } else { @@ -235,7 +233,7 @@ static int get_logical_address(uint8_t *oob_buf) if (!ok) block_address = -2; - DEBUG(MTD_DEBUG_LEVEL3, "SSFDC_RO: get_logical_address() %d\n", + pr_debug("SSFDC_RO: get_logical_address() %d\n", block_address); return block_address; @@ -249,7 +247,7 @@ static int build_logical_block_map(struct ssfdcr_record *ssfdc) int ret, block_address, phys_block; struct mtd_info *mtd = ssfdc->mbd.mtd; - DEBUG(MTD_DEBUG_LEVEL1, "SSFDC_RO: build_block_map() nblks=%d (%luK)\n", + pr_debug("SSFDC_RO: build_block_map() nblks=%d (%luK)\n", ssfdc->map_len, (unsigned long)ssfdc->map_len * ssfdc->erase_size / 1024); @@ -262,8 +260,7 @@ static int build_logical_block_map(struct ssfdcr_record *ssfdc) ret = read_raw_oob(mtd, offset, oob_buf); if (ret < 0) { - DEBUG(MTD_DEBUG_LEVEL0, - "SSFDC_RO: mtd read_oob() failed at %lu\n", + pr_debug("SSFDC_RO: mtd read_oob() failed at %lu\n", offset); return -1; } @@ -279,8 +276,7 @@ static int build_logical_block_map(struct ssfdcr_record *ssfdc) ssfdc->logic_block_map[block_address] = (unsigned short)phys_block; - DEBUG(MTD_DEBUG_LEVEL2, - "SSFDC_RO: build_block_map() phys_block=%d," + pr_debug("SSFDC_RO: build_block_map() phys_block=%d," "logic_block_addr=%d, zone=%d\n", phys_block, block_address, zone_index); } @@ -304,11 +300,8 @@ static void ssfdcr_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) return; ssfdc = kzalloc(sizeof(struct ssfdcr_record), GFP_KERNEL); - if (!ssfdc) { - printk(KERN_WARNING - "SSFDC_RO: out of memory for data structures\n"); + if (!ssfdc) return; - } ssfdc->mbd.mtd = mtd; ssfdc->mbd.devnum = -1; @@ -319,8 +312,7 @@ static void ssfdcr_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) ssfdc->erase_size = mtd->erasesize; ssfdc->map_len = (u32)mtd->size / mtd->erasesize; - DEBUG(MTD_DEBUG_LEVEL1, - "SSFDC_RO: cis_block=%d,erase_size=%d,map_len=%d,n_zones=%d\n", + pr_debug("SSFDC_RO: cis_block=%d,erase_size=%d,map_len=%d,n_zones=%d\n", ssfdc->cis_block, ssfdc->erase_size, ssfdc->map_len, DIV_ROUND_UP(ssfdc->map_len, MAX_PHYS_BLK_PER_ZONE)); @@ -331,7 +323,7 @@ static void ssfdcr_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) ssfdc->cylinders = 
(unsigned short)(((u32)mtd->size >> SECTOR_SHIFT) / ((long)ssfdc->sectors * (long)ssfdc->heads)); - DEBUG(MTD_DEBUG_LEVEL1, "SSFDC_RO: using C:%d H:%d S:%d == %ld sects\n", + pr_debug("SSFDC_RO: using C:%d H:%d S:%d == %ld sects\n", ssfdc->cylinders, ssfdc->heads , ssfdc->sectors, (long)ssfdc->cylinders * (long)ssfdc->heads * (long)ssfdc->sectors); @@ -342,11 +334,8 @@ static void ssfdcr_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) /* Allocate logical block map */ ssfdc->logic_block_map = kmalloc(sizeof(ssfdc->logic_block_map[0]) * ssfdc->map_len, GFP_KERNEL); - if (!ssfdc->logic_block_map) { - printk(KERN_WARNING - "SSFDC_RO: out of memory for data structures\n"); + if (!ssfdc->logic_block_map) goto out_err; - } memset(ssfdc->logic_block_map, 0xff, sizeof(ssfdc->logic_block_map[0]) * ssfdc->map_len); @@ -371,7 +360,7 @@ static void ssfdcr_remove_dev(struct mtd_blktrans_dev *dev) { struct ssfdcr_record *ssfdc = (struct ssfdcr_record *)dev; - DEBUG(MTD_DEBUG_LEVEL1, "SSFDC_RO: remove_dev (i=%d)\n", dev->devnum); + pr_debug("SSFDC_RO: remove_dev (i=%d)\n", dev->devnum); del_mtd_blktrans_dev(dev); kfree(ssfdc->logic_block_map); @@ -387,8 +376,7 @@ static int ssfdcr_readsect(struct mtd_blktrans_dev *dev, offset = (int)(logic_sect_no % sectors_per_block); block_address = (int)(logic_sect_no / sectors_per_block); - DEBUG(MTD_DEBUG_LEVEL3, - "SSFDC_RO: ssfdcr_readsect(%lu) sec_per_blk=%d, ofst=%d," + pr_debug("SSFDC_RO: ssfdcr_readsect(%lu) sec_per_blk=%d, ofst=%d," " block_addr=%d\n", logic_sect_no, sectors_per_block, offset, block_address); @@ -397,8 +385,7 @@ static int ssfdcr_readsect(struct mtd_blktrans_dev *dev, block_address = ssfdc->logic_block_map[block_address]; - DEBUG(MTD_DEBUG_LEVEL3, - "SSFDC_RO: ssfdcr_readsect() phys_block_addr=%d\n", + pr_debug("SSFDC_RO: ssfdcr_readsect() phys_block_addr=%d\n", block_address); if (block_address < 0xffff) { @@ -407,8 +394,7 @@ static int ssfdcr_readsect(struct mtd_blktrans_dev *dev, sect_no = (unsigned long)block_address * sectors_per_block + offset; - DEBUG(MTD_DEBUG_LEVEL3, - "SSFDC_RO: ssfdcr_readsect() phys_sect_no=%lu\n", + pr_debug("SSFDC_RO: ssfdcr_readsect() phys_sect_no=%lu\n", sect_no); if (read_physical_sector(ssfdc->mbd.mtd, buf, sect_no) < 0) @@ -424,7 +410,7 @@ static int ssfdcr_getgeo(struct mtd_blktrans_dev *dev, struct hd_geometry *geo) { struct ssfdcr_record *ssfdc = (struct ssfdcr_record *)dev; - DEBUG(MTD_DEBUG_LEVEL1, "SSFDC_RO: ssfdcr_getgeo() C=%d, H=%d, S=%d\n", + pr_debug("SSFDC_RO: ssfdcr_getgeo() C=%d, H=%d, S=%d\n", ssfdc->cylinders, ssfdc->heads, ssfdc->sectors); geo->heads = ssfdc->heads; diff --git a/drivers/mtd/tests/mtd_oobtest.c b/drivers/mtd/tests/mtd_oobtest.c index dec92ae6111a..933f7e5f32d3 100644 --- a/drivers/mtd/tests/mtd_oobtest.c +++ b/drivers/mtd/tests/mtd_oobtest.c @@ -30,7 +30,7 @@ #define PRINT_PREF KERN_INFO "mtd_oobtest: " -static int dev; +static int dev = -EINVAL; module_param(dev, int, S_IRUGO); MODULE_PARM_DESC(dev, "MTD device number to use"); @@ -131,7 +131,7 @@ static int write_eraseblock(int ebnum) for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) { set_random_data(writebuf, use_len); - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = use_len; @@ -184,7 +184,7 @@ static int verify_eraseblock(int ebnum) for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) { set_random_data(writebuf, use_len); - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = use_len; @@ -211,7 +211,7 @@ static 
int verify_eraseblock(int ebnum) if (use_offset != 0 || use_len < mtd->ecclayout->oobavail) { int k; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = mtd->ecclayout->oobavail; @@ -276,7 +276,7 @@ static int verify_eraseblock_in_one_go(int ebnum) size_t len = mtd->ecclayout->oobavail * pgcnt; set_random_data(writebuf, len); - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = len; @@ -366,6 +366,13 @@ static int __init mtd_oobtest_init(void) printk(KERN_INFO "\n"); printk(KERN_INFO "=================================================\n"); + + if (dev < 0) { + printk(PRINT_PREF "Please specify a valid mtd-device via module paramter\n"); + printk(KERN_CRIT "CAREFUL: This test wipes all data on the specified MTD device!\n"); + return -EINVAL; + } + printk(PRINT_PREF "MTD device: %d\n", dev); mtd = get_mtd_device(NULL, dev); @@ -507,7 +514,7 @@ static int __init mtd_oobtest_init(void) addr0 += mtd->erasesize; /* Attempt to write off end of OOB */ - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = 1; @@ -527,7 +534,7 @@ static int __init mtd_oobtest_init(void) } /* Attempt to read off end of OOB */ - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = 1; @@ -551,7 +558,7 @@ static int __init mtd_oobtest_init(void) "block is bad\n"); else { /* Attempt to write off end of device */ - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = mtd->ecclayout->oobavail + 1; @@ -571,7 +578,7 @@ static int __init mtd_oobtest_init(void) } /* Attempt to read off end of device */ - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = mtd->ecclayout->oobavail + 1; @@ -595,7 +602,7 @@ static int __init mtd_oobtest_init(void) goto out; /* Attempt to write off end of device */ - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = mtd->ecclayout->oobavail; @@ -615,7 +622,7 @@ static int __init mtd_oobtest_init(void) } /* Attempt to read off end of device */ - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = mtd->ecclayout->oobavail; @@ -655,7 +662,7 @@ static int __init mtd_oobtest_init(void) addr = (i + 1) * mtd->erasesize - mtd->writesize; for (pg = 0; pg < cnt; ++pg) { set_random_data(writebuf, sz); - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = sz; @@ -683,7 +690,7 @@ static int __init mtd_oobtest_init(void) continue; set_random_data(writebuf, mtd->ecclayout->oobavail * 2); addr = (i + 1) * mtd->erasesize - mtd->writesize; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = mtd->ecclayout->oobavail * 2; diff --git a/drivers/mtd/tests/mtd_pagetest.c b/drivers/mtd/tests/mtd_pagetest.c index 00b937e38c1d..afafb6935fd0 100644 --- a/drivers/mtd/tests/mtd_pagetest.c +++ b/drivers/mtd/tests/mtd_pagetest.c @@ -30,7 +30,7 @@ #define PRINT_PREF KERN_INFO "mtd_pagetest: " -static int dev; +static int dev = -EINVAL; module_param(dev, int, S_IRUGO); MODULE_PARM_DESC(dev, "MTD device number to use"); @@ -128,7 +128,7 @@ static int verify_eraseblock(int ebnum) for (j = 0; j < pgcnt - 1; ++j, addr += pgsize) { /* Do a read to set the internal dataRAMs to different data */ err = mtd->read(mtd, addr0, bufsize, &read, twopages); - if (err == -EUCLEAN) + if 
(mtd_is_bitflip(err)) err = 0; if (err || read != bufsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -136,7 +136,7 @@ static int verify_eraseblock(int ebnum) return err; } err = mtd->read(mtd, addrn - bufsize, bufsize, &read, twopages); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != bufsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -146,7 +146,7 @@ static int verify_eraseblock(int ebnum) memset(twopages, 0, bufsize); read = 0; err = mtd->read(mtd, addr, bufsize, &read, twopages); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != bufsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -164,7 +164,7 @@ static int verify_eraseblock(int ebnum) unsigned long oldnext = next; /* Do a read to set the internal dataRAMs to different data */ err = mtd->read(mtd, addr0, bufsize, &read, twopages); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != bufsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -172,7 +172,7 @@ static int verify_eraseblock(int ebnum) return err; } err = mtd->read(mtd, addrn - bufsize, bufsize, &read, twopages); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != bufsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -182,7 +182,7 @@ static int verify_eraseblock(int ebnum) memset(twopages, 0, bufsize); read = 0; err = mtd->read(mtd, addr, bufsize, &read, twopages); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != bufsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -231,7 +231,7 @@ static int crosstest(void) read = 0; addr = addrn - pgsize - pgsize; err = mtd->read(mtd, addr, pgsize, &read, pp1); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -244,7 +244,7 @@ static int crosstest(void) read = 0; addr = addrn - pgsize - pgsize - pgsize; err = mtd->read(mtd, addr, pgsize, &read, pp1); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -258,7 +258,7 @@ static int crosstest(void) addr = addr0; printk(PRINT_PREF "reading page at %#llx\n", (long long)addr); err = mtd->read(mtd, addr, pgsize, &read, pp2); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -272,7 +272,7 @@ static int crosstest(void) addr = addrn - pgsize; printk(PRINT_PREF "reading page at %#llx\n", (long long)addr); err = mtd->read(mtd, addr, pgsize, &read, pp3); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -286,7 +286,7 @@ static int crosstest(void) addr = addr0; printk(PRINT_PREF "reading page at %#llx\n", (long long)addr); err = mtd->read(mtd, addr, pgsize, &read, pp4); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -345,7 +345,7 @@ static int erasecrosstest(void) printk(PRINT_PREF "reading 1st page of block %d\n", ebnum); memset(readbuf, 0, pgsize); err = mtd->read(mtd, addr0, pgsize, &read, readbuf); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -383,7 +383,7 @@ static int erasecrosstest(void) printk(PRINT_PREF "reading 1st page of block %d\n", ebnum); 
memset(readbuf, 0, pgsize); err = mtd->read(mtd, addr0, pgsize, &read, readbuf); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -439,7 +439,7 @@ static int erasetest(void) printk(PRINT_PREF "reading 1st page of block %d\n", ebnum); err = mtd->read(mtd, addr0, pgsize, &read, twopages); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -504,6 +504,13 @@ static int __init mtd_pagetest_init(void) printk(KERN_INFO "\n"); printk(KERN_INFO "=================================================\n"); + + if (dev < 0) { + printk(PRINT_PREF "Please specify a valid mtd-device via module parameter\n"); + printk(KERN_CRIT "CAREFUL: This test wipes all data on the specified MTD device!\n"); + return -EINVAL; + } + printk(PRINT_PREF "MTD device: %d\n", dev); mtd = get_mtd_device(NULL, dev); diff --git a/drivers/mtd/tests/mtd_readtest.c b/drivers/mtd/tests/mtd_readtest.c index afe71aa15c4b..550fe51225a7 100644 --- a/drivers/mtd/tests/mtd_readtest.c +++ b/drivers/mtd/tests/mtd_readtest.c @@ -29,7 +29,7 @@ #define PRINT_PREF KERN_INFO "mtd_readtest: " -static int dev; +static int dev = -EINVAL; module_param(dev, int, S_IRUGO); MODULE_PARM_DESC(dev, "MTD device number to use"); @@ -66,7 +66,7 @@ static int read_eraseblock_by_page(int ebnum) if (mtd->oobsize) { struct mtd_oob_ops ops; - ops.mode = MTD_OOB_PLACE; + ops.mode = MTD_OPS_PLACE_OOB; ops.len = 0; ops.retlen = 0; ops.ooblen = mtd->oobsize; @@ -75,7 +75,8 @@ static int read_eraseblock_by_page(int ebnum) ops.datbuf = NULL; ops.oobbuf = oobbuf; ret = mtd->read_oob(mtd, addr, &ops); - if (ret || ops.oobretlen != mtd->oobsize) { + if ((ret && !mtd_is_bitflip(ret)) || + ops.oobretlen != mtd->oobsize) { printk(PRINT_PREF "error: read oob failed at " "%#llx\n", (long long)addr); if (!err) @@ -169,6 +170,12 @@ static int __init mtd_readtest_init(void) printk(KERN_INFO "\n"); printk(KERN_INFO "=================================================\n"); + + if (dev < 0) { + printk(PRINT_PREF "Please specify a valid mtd-device via module parameter\n"); + return -EINVAL; + } + printk(PRINT_PREF "MTD device: %d\n", dev); mtd = get_mtd_device(NULL, dev); diff --git a/drivers/mtd/tests/mtd_speedtest.c b/drivers/mtd/tests/mtd_speedtest.c index 627d4e2466a3..493b367bdd35 100644 --- a/drivers/mtd/tests/mtd_speedtest.c +++ b/drivers/mtd/tests/mtd_speedtest.c @@ -29,7 +29,7 @@ #define PRINT_PREF KERN_INFO "mtd_speedtest: " -static int dev; +static int dev = -EINVAL; module_param(dev, int, S_IRUGO); MODULE_PARM_DESC(dev, "MTD device number to use"); @@ -216,7 +216,7 @@ static int read_eraseblock(int ebnum) err = mtd->read(mtd, addr, mtd->erasesize, &read, iobuf); /* Ignore corrected ECC errors */ - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != mtd->erasesize) { printk(PRINT_PREF "error: read failed at %#llx\n", addr); @@ -237,7 +237,7 @@ static int read_eraseblock_by_page(int ebnum) for (i = 0; i < pgcnt; i++) { err = mtd->read(mtd, addr, pgsize, &read, buf); /* Ignore corrected ECC errors */ - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -263,7 +263,7 @@ static int read_eraseblock_by_2pages(int ebnum) for (i = 0; i < n; i++) { err = mtd->read(mtd, addr, sz, &read, buf); /* Ignore corrected ECC errors */ - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read 
!= sz) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -278,7 +278,7 @@ static int read_eraseblock_by_2pages(int ebnum) if (pgcnt % 2) { err = mtd->read(mtd, addr, pgsize, &read, buf); /* Ignore corrected ECC errors */ - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (err || read != pgsize) { printk(PRINT_PREF "error: read failed at %#llx\n", @@ -361,6 +361,13 @@ static int __init mtd_speedtest_init(void) printk(KERN_INFO "\n"); printk(KERN_INFO "=================================================\n"); + + if (dev < 0) { + printk(PRINT_PREF "Please specify a valid mtd-device via module parameter\n"); + printk(KERN_CRIT "CAREFUL: This test wipes all data on the specified MTD device!\n"); + return -EINVAL; + } + if (count) printk(PRINT_PREF "MTD device: %d count: %d\n", dev, count); else diff --git a/drivers/mtd/tests/mtd_stresstest.c b/drivers/mtd/tests/mtd_stresstest.c index 531625fc9259..52ffd9120e0d 100644 --- a/drivers/mtd/tests/mtd_stresstest.c +++ b/drivers/mtd/tests/mtd_stresstest.c @@ -30,7 +30,7 @@ #define PRINT_PREF KERN_INFO "mtd_stresstest: " -static int dev; +static int dev = -EINVAL; module_param(dev, int, S_IRUGO); MODULE_PARM_DESC(dev, "MTD device number to use"); @@ -154,7 +154,7 @@ static int do_read(void) } addr = eb * mtd->erasesize + offs; err = mtd->read(mtd, addr, len, &read, readbuf); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) err = 0; if (unlikely(err || read != len)) { printk(PRINT_PREF "error: read failed at 0x%llx\n", @@ -250,6 +250,13 @@ static int __init mtd_stresstest_init(void) printk(KERN_INFO "\n"); printk(KERN_INFO "=================================================\n"); + + if (dev < 0) { + printk(PRINT_PREF "Please specify a valid mtd-device via module parameter\n"); + printk(KERN_CRIT "CAREFUL: This test wipes all data on the specified MTD device!\n"); + return -EINVAL; + } + printk(PRINT_PREF "MTD device: %d\n", dev); mtd = get_mtd_device(NULL, dev); diff --git a/drivers/mtd/tests/mtd_subpagetest.c b/drivers/mtd/tests/mtd_subpagetest.c index 334eae53a3db..1a05bfac4eee 100644 --- a/drivers/mtd/tests/mtd_subpagetest.c +++ b/drivers/mtd/tests/mtd_subpagetest.c @@ -29,7 +29,7 @@ #define PRINT_PREF KERN_INFO "mtd_subpagetest: " -static int dev; +static int dev = -EINVAL; module_param(dev, int, S_IRUGO); MODULE_PARM_DESC(dev, "MTD device number to use"); @@ -198,7 +198,7 @@ static int verify_eraseblock(int ebnum) read = 0; err = mtd->read(mtd, addr, subpgsize, &read, readbuf); if (unlikely(err || read != subpgsize)) { - if (err == -EUCLEAN && read == subpgsize) { + if (mtd_is_bitflip(err) && read == subpgsize) { printk(PRINT_PREF "ECC correction at %#llx\n", (long long)addr); err = 0; @@ -226,7 +226,7 @@ static int verify_eraseblock(int ebnum) read = 0; err = mtd->read(mtd, addr, subpgsize, &read, readbuf); if (unlikely(err || read != subpgsize)) { - if (err == -EUCLEAN && read == subpgsize) { + if (mtd_is_bitflip(err) && read == subpgsize) { printk(PRINT_PREF "ECC correction at %#llx\n", (long long)addr); err = 0; @@ -264,7 +264,7 @@ static int verify_eraseblock2(int ebnum) read = 0; err = mtd->read(mtd, addr, subpgsize * k, &read, readbuf); if (unlikely(err || read != subpgsize * k)) { - if (err == -EUCLEAN && read == subpgsize * k) { + if (mtd_is_bitflip(err) && read == subpgsize * k) { printk(PRINT_PREF "ECC correction at %#llx\n", (long long)addr); err = 0; @@ -298,7 +298,7 @@ static int verify_eraseblock_ff(int ebnum) read = 0; err = mtd->read(mtd, addr, subpgsize, &read, readbuf); if (unlikely(err || read != subpgsize)) { - if 
(err == -EUCLEAN && read == subpgsize) { + if (mtd_is_bitflip(err) && read == subpgsize) { printk(PRINT_PREF "ECC correction at %#llx\n", (long long)addr); err = 0; @@ -379,6 +379,13 @@ static int __init mtd_subpagetest_init(void) printk(KERN_INFO "\n"); printk(KERN_INFO "=================================================\n"); + + if (dev < 0) { + printk(PRINT_PREF "Please specify a valid mtd-device via module parameter\n"); + printk(KERN_CRIT "CAREFUL: This test wipes all data on the specified MTD device!\n"); + return -EINVAL; + } + printk(PRINT_PREF "MTD device: %d\n", dev); mtd = get_mtd_device(NULL, dev); diff --git a/drivers/mtd/tests/mtd_torturetest.c b/drivers/mtd/tests/mtd_torturetest.c index 5c6c3d248901..03ab649a6964 100644 --- a/drivers/mtd/tests/mtd_torturetest.c +++ b/drivers/mtd/tests/mtd_torturetest.c @@ -46,7 +46,7 @@ static int pgcnt; module_param(pgcnt, int, S_IRUGO); MODULE_PARM_DESC(pgcnt, "number of pages per eraseblock to torture (0 => all)"); -static int dev; +static int dev = -EINVAL; module_param(dev, int, S_IRUGO); MODULE_PARM_DESC(dev, "MTD device number to use"); @@ -138,7 +138,7 @@ static inline int check_eraseblock(int ebnum, unsigned char *buf) retry: err = mtd->read(mtd, addr, len, &read, check_buf); - if (err == -EUCLEAN) + if (mtd_is_bitflip(err)) printk(PRINT_PREF "single bit flip occurred at EB %d " "MTD reported that it was fixed.\n", ebnum); else if (err) { @@ -213,6 +213,13 @@ static int __init tort_init(void) printk(KERN_INFO "=================================================\n"); printk(PRINT_PREF "Warning: this program is trying to wear out your " "flash, stop it if this is not wanted.\n"); + + if (dev < 0) { + printk(PRINT_PREF "Please specify a valid mtd-device via module parameter\n"); + printk(KERN_CRIT "CAREFUL: This test wipes all data on the specified MTD device!\n"); + return -EINVAL; + } + printk(PRINT_PREF "MTD device: %d\n", dev); printk(PRINT_PREF "torture %d eraseblocks (%d-%d) of mtd%d\n", ebcnt, eb, eb + ebcnt - 1, dev); diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c index 4be671815014..fb7f19b62d91 100644 --- a/drivers/mtd/ubi/eba.c +++ b/drivers/mtd/ubi/eba.c @@ -443,7 +443,7 @@ retry: if (err == UBI_IO_BITFLIPS) { scrub = 1; err = 0; - } else if (err == -EBADMSG) { + } else if (mtd_is_eccerr(err)) { if (vol->vol_type == UBI_DYNAMIC_VOLUME) goto out_unlock; scrub = 1; diff --git a/drivers/mtd/ubi/io.c b/drivers/mtd/ubi/io.c index 6ba55c235873..f20b6f22f240 100644 --- a/drivers/mtd/ubi/io.c +++ b/drivers/mtd/ubi/io.c @@ -172,9 +172,9 @@ int ubi_io_read(const struct ubi_device *ubi, void *buf, int pnum, int offset, retry: err = ubi->mtd->read(ubi->mtd, addr, len, &read, buf); if (err) { - const char *errstr = (err == -EBADMSG) ? " (ECC error)" : ""; + const char *errstr = mtd_is_eccerr(err) ? " (ECC error)" : ""; - if (err == -EUCLEAN) { + if (mtd_is_bitflip(err)) { /* * -EUCLEAN is reported if there was a bit-flip which * was corrected, so this is harmless. @@ -205,7 +205,7 @@ retry: * all the requested data. But some buggy drivers might do * this, so we change it to -EIO. 
*/ - if (read != len && err == -EBADMSG) { + if (read != len && mtd_is_eccerr(err)) { ubi_assert(0); err = -EIO; } @@ -469,7 +469,7 @@ static int torture_peb(struct ubi_device *ubi, int pnum) out: mutex_unlock(&ubi->buf_mutex); - if (err == UBI_IO_BITFLIPS || err == -EBADMSG) { + if (err == UBI_IO_BITFLIPS || mtd_is_eccerr(err)) { /* * If a bit-flip or data integrity error was detected, the test * has not passed because it happened on a freshly erased @@ -760,7 +760,7 @@ int ubi_io_read_ec_hdr(struct ubi_device *ubi, int pnum, read_err = ubi_io_read(ubi, ec_hdr, pnum, 0, UBI_EC_HDR_SIZE); if (read_err) { - if (read_err != UBI_IO_BITFLIPS && read_err != -EBADMSG) + if (read_err != UBI_IO_BITFLIPS && !mtd_is_eccerr(read_err)) return read_err; /* @@ -776,7 +776,7 @@ int ubi_io_read_ec_hdr(struct ubi_device *ubi, int pnum, magic = be32_to_cpu(ec_hdr->magic); if (magic != UBI_EC_HDR_MAGIC) { - if (read_err == -EBADMSG) + if (mtd_is_eccerr(read_err)) return UBI_IO_BAD_HDR_EBADMSG; /* @@ -1032,12 +1032,12 @@ int ubi_io_read_vid_hdr(struct ubi_device *ubi, int pnum, p = (char *)vid_hdr - ubi->vid_hdr_shift; read_err = ubi_io_read(ubi, p, pnum, ubi->vid_hdr_aloffset, ubi->vid_hdr_alsize); - if (read_err && read_err != UBI_IO_BITFLIPS && read_err != -EBADMSG) + if (read_err && read_err != UBI_IO_BITFLIPS && !mtd_is_eccerr(read_err)) return read_err; magic = be32_to_cpu(vid_hdr->magic); if (magic != UBI_VID_HDR_MAGIC) { - if (read_err == -EBADMSG) + if (mtd_is_eccerr(read_err)) return UBI_IO_BAD_HDR_EBADMSG; if (ubi_check_pattern(vid_hdr, 0xFF, UBI_VID_HDR_SIZE)) { @@ -1219,7 +1219,7 @@ static int paranoid_check_peb_ec_hdr(const struct ubi_device *ubi, int pnum) return -ENOMEM; err = ubi_io_read(ubi, ec_hdr, pnum, 0, UBI_EC_HDR_SIZE); - if (err && err != UBI_IO_BITFLIPS && err != -EBADMSG) + if (err && err != UBI_IO_BITFLIPS && !mtd_is_eccerr(err)) goto exit; crc = crc32(UBI_CRC32_INIT, ec_hdr, UBI_EC_HDR_SIZE_CRC); @@ -1306,7 +1306,7 @@ static int paranoid_check_peb_vid_hdr(const struct ubi_device *ubi, int pnum) p = (char *)vid_hdr - ubi->vid_hdr_shift; err = ubi_io_read(ubi, p, pnum, ubi->vid_hdr_aloffset, ubi->vid_hdr_alsize); - if (err && err != UBI_IO_BITFLIPS && err != -EBADMSG) + if (err && err != UBI_IO_BITFLIPS && !mtd_is_eccerr(err)) goto exit; crc = crc32(UBI_CRC32_INIT, vid_hdr, UBI_EC_HDR_SIZE_CRC); @@ -1358,7 +1358,7 @@ int ubi_dbg_check_write(struct ubi_device *ubi, const void *buf, int pnum, } err = ubi->mtd->read(ubi->mtd, addr, len, &read, buf1); - if (err && err != -EUCLEAN) + if (err && !mtd_is_bitflip(err)) goto out_free; for (i = 0; i < len; i++) { @@ -1422,7 +1422,7 @@ int ubi_dbg_check_all_ff(struct ubi_device *ubi, int pnum, int offset, int len) } err = ubi->mtd->read(ubi->mtd, addr, len, &read, buf); - if (err && err != -EUCLEAN) { + if (err && !mtd_is_bitflip(err)) { ubi_err("error %d while reading %d bytes from PEB %d:%d, " "read %zd bytes", err, len, pnum, offset, read); goto error; diff --git a/drivers/mtd/ubi/kapi.c b/drivers/mtd/ubi/kapi.c index d39716e5b204..1a35fc5e3b40 100644 --- a/drivers/mtd/ubi/kapi.c +++ b/drivers/mtd/ubi/kapi.c @@ -410,7 +410,7 @@ int ubi_leb_read(struct ubi_volume_desc *desc, int lnum, char *buf, int offset, return 0; err = ubi_eba_read_leb(ubi, vol, lnum, buf, offset, len, check); - if (err && err == -EBADMSG && vol->vol_type == UBI_STATIC_VOLUME) { + if (err && mtd_is_eccerr(err) && vol->vol_type == UBI_STATIC_VOLUME) { ubi_warn("mark volume %d as corrupted", vol_id); vol->corrupted = 1; } diff --git a/drivers/mtd/ubi/misc.c 
b/drivers/mtd/ubi/misc.c index ff2a65c37f69..f6a7d7ac4b98 100644 --- a/drivers/mtd/ubi/misc.c +++ b/drivers/mtd/ubi/misc.c @@ -81,7 +81,7 @@ int ubi_check_volume(struct ubi_device *ubi, int vol_id) err = ubi_eba_read_leb(ubi, vol, i, buf, 0, size, 1); if (err) { - if (err == -EBADMSG) + if (mtd_is_eccerr(err)) err = 1; break; } diff --git a/drivers/mtd/ubi/scan.c b/drivers/mtd/ubi/scan.c index a3a198f9b98d..0cb17d936b5a 100644 --- a/drivers/mtd/ubi/scan.c +++ b/drivers/mtd/ubi/scan.c @@ -395,7 +395,7 @@ static int compare_lebs(struct ubi_device *ubi, const struct ubi_scan_leb *seb, } err = ubi_io_read_data(ubi, buf, pnum, 0, len); - if (err && err != UBI_IO_BITFLIPS && err != -EBADMSG) + if (err && err != UBI_IO_BITFLIPS && !mtd_is_eccerr(err)) goto out_free_buf; data_crc = be32_to_cpu(vid_hdr->data_crc); @@ -793,7 +793,7 @@ static int check_corruption(struct ubi_device *ubi, struct ubi_vid_hdr *vid_hdr, err = ubi_io_read(ubi, ubi->peb_buf1, pnum, ubi->leb_start, ubi->leb_size); - if (err == UBI_IO_BITFLIPS || err == -EBADMSG) { + if (err == UBI_IO_BITFLIPS || mtd_is_eccerr(err)) { /* * Bit-flips or integrity errors while reading the data area. * It is difficult to say for sure what type of corruption is diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c index 4b50a3029b84..9ad18da1891d 100644 --- a/drivers/mtd/ubi/vtbl.c +++ b/drivers/mtd/ubi/vtbl.c @@ -423,7 +423,7 @@ static struct ubi_vtbl_record *process_lvol(struct ubi_device *ubi, err = ubi_io_read_data(ubi, leb[seb->lnum], seb->pnum, 0, ubi->vtbl_size); - if (err == UBI_IO_BITFLIPS || err == -EBADMSG) + if (err == UBI_IO_BITFLIPS || mtd_is_eccerr(err)) /* * Scrub the PEB later. Note, -EBADMSG indicates an * uncorrectable ECC error, but we have our own CRC and diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index b2b9109b6712..b0c577256487 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -560,8 +560,8 @@ static int bond_update_speed_duplex(struct slave *slave) u32 slave_speed; int res; - slave->speed = -1; - slave->duplex = -1; + slave->speed = SPEED_UNKNOWN; + slave->duplex = DUPLEX_UNKNOWN; res = __ethtool_get_settings(slave_dev, &ecmd); if (res < 0) diff --git a/drivers/net/bonding/bond_procfs.c b/drivers/net/bonding/bond_procfs.c index 2acf0b080169..ad284baafe87 100644 --- a/drivers/net/bonding/bond_procfs.c +++ b/drivers/net/bonding/bond_procfs.c @@ -158,12 +158,12 @@ static void bond_info_show_slave(struct seq_file *seq, seq_printf(seq, "\nSlave Interface: %s\n", slave->dev->name); seq_printf(seq, "MII Status: %s\n", (slave->link == BOND_LINK_UP) ? "up" : "down"); - if (slave->speed == -1) + if (slave->speed == SPEED_UNKNOWN) seq_printf(seq, "Speed: %s\n", "Unknown"); else seq_printf(seq, "Speed: %d Mbps\n", slave->speed); - if (slave->duplex == -1) + if (slave->duplex == DUPLEX_UNKNOWN) seq_printf(seq, "Duplex: %s\n", "Unknown"); else seq_printf(seq, "Duplex: %s\n", slave->duplex ? 
"full" : "half"); diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile index be5dde040261..94b7f287d6c5 100644 --- a/drivers/net/ethernet/Makefile +++ b/drivers/net/ethernet/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_NET_VENDOR_ALTEON) += alteon/ obj-$(CONFIG_NET_VENDOR_AMD) += amd/ obj-$(CONFIG_NET_VENDOR_APPLE) += apple/ obj-$(CONFIG_NET_VENDOR_ATHEROS) += atheros/ -obj-$(CONFIG_NET_ATMEL) += cadence/ +obj-$(CONFIG_NET_CADENCE) += cadence/ obj-$(CONFIG_NET_BFIN) += adi/ obj-$(CONFIG_NET_VENDOR_BROADCOM) += broadcom/ obj-$(CONFIG_NET_VENDOR_BROCADE) += brocade/ diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index 161cbbb4814a..bf4074167d6a 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -89,10 +89,10 @@ static inline void _tg3_flag_clear(enum TG3_FLAGS flag, unsigned long *bits) #define DRV_MODULE_NAME "tg3" #define TG3_MAJ_NUM 3 -#define TG3_MIN_NUM 120 +#define TG3_MIN_NUM 121 #define DRV_MODULE_VERSION \ __stringify(TG3_MAJ_NUM) "." __stringify(TG3_MIN_NUM) -#define DRV_MODULE_RELDATE "August 18, 2011" +#define DRV_MODULE_RELDATE "November 2, 2011" #define RESET_KIND_SHUTDOWN 0 #define RESET_KIND_INIT 1 @@ -628,19 +628,23 @@ static void tg3_ape_lock_init(struct tg3 *tp) regbase = TG3_APE_PER_LOCK_GRANT; /* Make sure the driver hasn't any stale locks. */ - for (i = 0; i < 8; i++) { - if (i == TG3_APE_LOCK_GPIO) - continue; - tg3_ape_write32(tp, regbase + 4 * i, APE_LOCK_GRANT_DRIVER); + for (i = TG3_APE_LOCK_PHY0; i <= TG3_APE_LOCK_GPIO; i++) { + switch (i) { + case TG3_APE_LOCK_PHY0: + case TG3_APE_LOCK_PHY1: + case TG3_APE_LOCK_PHY2: + case TG3_APE_LOCK_PHY3: + bit = APE_LOCK_GRANT_DRIVER; + break; + default: + if (!tp->pci_fn) + bit = APE_LOCK_GRANT_DRIVER; + else + bit = 1 << tp->pci_fn; + } + tg3_ape_write32(tp, regbase + 4 * i, bit); } - /* Clear the correct bit of the GPIO lock too. */ - if (!tp->pci_fn) - bit = APE_LOCK_GRANT_DRIVER; - else - bit = 1 << tp->pci_fn; - - tg3_ape_write32(tp, regbase + 4 * TG3_APE_LOCK_GPIO, bit); } static int tg3_ape_lock(struct tg3 *tp, int locknum) @@ -658,6 +662,10 @@ static int tg3_ape_lock(struct tg3 *tp, int locknum) return 0; case TG3_APE_LOCK_GRC: case TG3_APE_LOCK_MEM: + if (!tp->pci_fn) + bit = APE_LOCK_REQ_DRIVER; + else + bit = 1 << tp->pci_fn; break; default: return -EINVAL; @@ -673,11 +681,6 @@ static int tg3_ape_lock(struct tg3 *tp, int locknum) off = 4 * locknum; - if (locknum != TG3_APE_LOCK_GPIO || !tp->pci_fn) - bit = APE_LOCK_REQ_DRIVER; - else - bit = 1 << tp->pci_fn; - tg3_ape_write32(tp, req + off, bit); /* Wait for up to 1 millisecond to acquire lock. 
*/ @@ -710,6 +713,10 @@ static void tg3_ape_unlock(struct tg3 *tp, int locknum) return; case TG3_APE_LOCK_GRC: case TG3_APE_LOCK_MEM: + if (!tp->pci_fn) + bit = APE_LOCK_GRANT_DRIVER; + else + bit = 1 << tp->pci_fn; break; default: return; @@ -720,11 +727,6 @@ static void tg3_ape_unlock(struct tg3 *tp, int locknum) else gnt = TG3_APE_PER_LOCK_GRANT; - if (locknum != TG3_APE_LOCK_GPIO || !tp->pci_fn) - bit = APE_LOCK_GRANT_DRIVER; - else - bit = 1 << tp->pci_fn; - tg3_ape_write32(tp, gnt + 4 * locknum, bit); } @@ -5927,6 +5929,18 @@ static int tg3_poll_work(struct tg3_napi *tnapi, int work_done, int budget) return work_done; } +static inline void tg3_reset_task_schedule(struct tg3 *tp) +{ + if (!test_and_set_bit(TG3_FLAG_RESET_TASK_PENDING, tp->tg3_flags)) + schedule_work(&tp->reset_task); +} + +static inline void tg3_reset_task_cancel(struct tg3 *tp) +{ + cancel_work_sync(&tp->reset_task); + tg3_flag_clear(tp, RESET_TASK_PENDING); +} + static int tg3_poll_msix(struct napi_struct *napi, int budget) { struct tg3_napi *tnapi = container_of(napi, struct tg3_napi, napi); @@ -5967,7 +5981,7 @@ static int tg3_poll_msix(struct napi_struct *napi, int budget) tx_recovery: /* work_done is guaranteed to be less than budget. */ napi_complete(napi); - schedule_work(&tp->reset_task); + tg3_reset_task_schedule(tp); return work_done; } @@ -6002,7 +6016,7 @@ static void tg3_process_error(struct tg3 *tp) tg3_dump_state(tp); tg3_flag_set(tp, ERROR_PROCESSED); - schedule_work(&tp->reset_task); + tg3_reset_task_schedule(tp); } static int tg3_poll(struct napi_struct *napi, int budget) @@ -6049,7 +6063,7 @@ static int tg3_poll(struct napi_struct *napi, int budget) tx_recovery: /* work_done is guaranteed to be less than budget. */ napi_complete(napi); - schedule_work(&tp->reset_task); + tg3_reset_task_schedule(tp); return work_done; } @@ -6338,11 +6352,11 @@ static void tg3_reset_task(struct work_struct *work) { struct tg3 *tp = container_of(work, struct tg3, reset_task); int err; - unsigned int restart_timer; tg3_full_lock(tp, 0); if (!netif_running(tp->dev)) { + tg3_flag_clear(tp, RESET_TASK_PENDING); tg3_full_unlock(tp); return; } @@ -6355,9 +6369,6 @@ static void tg3_reset_task(struct work_struct *work) tg3_full_lock(tp, 1); - restart_timer = tg3_flag(tp, RESTART_TIMER); - tg3_flag_clear(tp, RESTART_TIMER); - if (tg3_flag(tp, TX_RECOVERY_PENDING)) { tp->write32_tx_mbox = tg3_write32_tx_mbox; tp->write32_rx_mbox = tg3_write_flush_reg32; @@ -6372,14 +6383,13 @@ static void tg3_reset_task(struct work_struct *work) tg3_netif_start(tp); - if (restart_timer) - mod_timer(&tp->timer, jiffies + 1); - out: tg3_full_unlock(tp); if (!err) tg3_phy_start(tp); + + tg3_flag_clear(tp, RESET_TASK_PENDING); } static void tg3_tx_timeout(struct net_device *dev) @@ -6391,7 +6401,7 @@ static void tg3_tx_timeout(struct net_device *dev) tg3_dump_state(tp); } - schedule_work(&tp->reset_task); + tg3_reset_task_schedule(tp); } /* Test for DMA buffers crossing any 4GB boundaries: 4G, 8G, etc */ @@ -6442,31 +6452,26 @@ static bool tg3_tx_frag_set(struct tg3_napi *tnapi, u32 *entry, u32 *budget, hwbug = 1; if (tg3_flag(tp, 4K_FIFO_LIMIT)) { + u32 prvidx = *entry; u32 tmp_flag = flags & ~TXD_FLAG_END; - while (len > TG3_TX_BD_DMA_MAX) { + while (len > TG3_TX_BD_DMA_MAX && *budget) { u32 frag_len = TG3_TX_BD_DMA_MAX; len -= TG3_TX_BD_DMA_MAX; - if (len) { - tnapi->tx_buffers[*entry].fragmented = true; - /* Avoid the 8byte DMA problem */ - if (len <= 8) { - len += TG3_TX_BD_DMA_MAX / 2; - frag_len = TG3_TX_BD_DMA_MAX / 2; - } - } else - 
tmp_flag = flags; - - if (*budget) { - tg3_tx_set_bd(&tnapi->tx_ring[*entry], map, - frag_len, tmp_flag, mss, vlan); - (*budget)--; - *entry = NEXT_TX(*entry); - } else { - hwbug = 1; - break; + /* Avoid the 8byte DMA problem */ + if (len <= 8) { + len += TG3_TX_BD_DMA_MAX / 2; + frag_len = TG3_TX_BD_DMA_MAX / 2; } + tnapi->tx_buffers[*entry].fragmented = true; + + tg3_tx_set_bd(&tnapi->tx_ring[*entry], map, + frag_len, tmp_flag, mss, vlan); + *budget -= 1; + prvidx = *entry; + *entry = NEXT_TX(*entry); + map += frag_len; } @@ -6474,10 +6479,11 @@ static bool tg3_tx_frag_set(struct tg3_napi *tnapi, u32 *entry, u32 *budget, if (*budget) { tg3_tx_set_bd(&tnapi->tx_ring[*entry], map, len, flags, mss, vlan); - (*budget)--; + *budget -= 1; *entry = NEXT_TX(*entry); } else { hwbug = 1; + tnapi->tx_buffers[prvidx].fragmented = false; } } } else { @@ -6509,7 +6515,7 @@ static void tg3_tx_skb_unmap(struct tg3_napi *tnapi, u32 entry, int last) txb = &tnapi->tx_buffers[entry]; } - for (i = 0; i < last; i++) { + for (i = 0; i <= last; i++) { const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; entry = NEXT_TX(entry); @@ -6559,6 +6565,8 @@ static int tigon3_dma_hwbug_workaround(struct tg3_napi *tnapi, dev_kfree_skb(new_skb); ret = -1; } else { + u32 save_entry = *entry; + base_flags |= TXD_FLAG_END; tnapi->tx_buffers[*entry].skb = new_skb; @@ -6568,7 +6576,7 @@ static int tigon3_dma_hwbug_workaround(struct tg3_napi *tnapi, if (tg3_tx_frag_set(tnapi, entry, budget, new_addr, new_skb->len, base_flags, mss, vlan)) { - tg3_tx_skb_unmap(tnapi, *entry, 0); + tg3_tx_skb_unmap(tnapi, save_entry, -1); dev_kfree_skb(new_skb); ret = -1; } @@ -6758,11 +6766,10 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev) if (tg3_tx_frag_set(tnapi, &entry, &budget, mapping, len, base_flags | ((skb_shinfo(skb)->nr_frags == 0) ? TXD_FLAG_END : 0), - mss, vlan)) + mss, vlan)) { would_hit_hwbug = 1; - /* Now loop through additional data fragments, and queue them. */ - if (skb_shinfo(skb)->nr_frags > 0) { + } else if (skb_shinfo(skb)->nr_frags > 0) { u32 tmp_mss = mss; if (!tg3_flag(tp, HW_TSO_1) && @@ -6784,11 +6791,14 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev) if (dma_mapping_error(&tp->pdev->dev, mapping)) goto dma_error; - if (tg3_tx_frag_set(tnapi, &entry, &budget, mapping, + if (!budget || + tg3_tx_frag_set(tnapi, &entry, &budget, mapping, len, base_flags | ((i == last) ? 
TXD_FLAG_END : 0), - tmp_mss, vlan)) + tmp_mss, vlan)) { would_hit_hwbug = 1; + break; + } } } @@ -6828,7 +6838,7 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; dma_error: - tg3_tx_skb_unmap(tnapi, tnapi->tx_prod, i); + tg3_tx_skb_unmap(tnapi, tnapi->tx_prod, --i); tnapi->tx_buffers[tnapi->tx_prod].skb = NULL; drop: dev_kfree_skb(skb); @@ -7281,7 +7291,8 @@ static void tg3_free_rings(struct tg3 *tp) if (!skb) continue; - tg3_tx_skb_unmap(tnapi, i, skb_shinfo(skb)->nr_frags); + tg3_tx_skb_unmap(tnapi, i, + skb_shinfo(skb)->nr_frags - 1); dev_kfree_skb_any(skb); } @@ -9200,7 +9211,7 @@ static void tg3_timer(unsigned long __opaque) { struct tg3 *tp = (struct tg3 *) __opaque; - if (tp->irq_sync) + if (tp->irq_sync || tg3_flag(tp, RESET_TASK_PENDING)) goto restart_timer; spin_lock(&tp->lock); @@ -9223,10 +9234,9 @@ static void tg3_timer(unsigned long __opaque) } if (!(tr32(WDMAC_MODE) & WDMAC_MODE_ENABLE)) { - tg3_flag_set(tp, RESTART_TIMER); spin_unlock(&tp->lock); - schedule_work(&tp->reset_task); - return; + tg3_reset_task_schedule(tp); + goto restart_timer; } } @@ -9674,15 +9684,14 @@ static int tg3_open(struct net_device *dev) struct tg3_napi *tnapi = &tp->napi[i]; err = tg3_request_irq(tp, i); if (err) { - for (i--; i >= 0; i--) + for (i--; i >= 0; i--) { + tnapi = &tp->napi[i]; free_irq(tnapi->irq_vec, tnapi); - break; + } + goto err_out2; } } - if (err) - goto err_out2; - tg3_full_lock(tp, 0); err = tg3_init_hw(tp, 1); @@ -9783,7 +9792,7 @@ static int tg3_close(struct net_device *dev) struct tg3 *tp = netdev_priv(dev); tg3_napi_disable(tp); - cancel_work_sync(&tp->reset_task); + tg3_reset_task_cancel(tp); netif_tx_stop_all_queues(dev); @@ -11520,7 +11529,7 @@ static int tg3_run_loopback(struct tg3 *tp, u32 pktsz, bool tso_loopback) break; } - tg3_tx_skb_unmap(tnapi, tnapi->tx_prod - 1, 0); + tg3_tx_skb_unmap(tnapi, tnapi->tx_prod - 1, -1); dev_kfree_skb(skb); if (tx_idx != tnapi->tx_prod) @@ -14228,12 +14237,30 @@ static int __devinit tg3_get_invariants(struct tg3 *tp) val = tr32(MEMARB_MODE); tw32(MEMARB_MODE, val | MEMARB_MODE_ENABLE); - if (tg3_flag(tp, PCIX_MODE)) { - pci_read_config_dword(tp->pdev, - tp->pcix_cap + PCI_X_STATUS, &val); - tp->pci_fn = val & 0x7; - } else { - tp->pci_fn = PCI_FUNC(tp->pdev->devfn) & 3; + tp->pci_fn = PCI_FUNC(tp->pdev->devfn) & 3; + if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5704 || + tg3_flag(tp, 5780_CLASS)) { + if (tg3_flag(tp, PCIX_MODE)) { + pci_read_config_dword(tp->pdev, + tp->pcix_cap + PCI_X_STATUS, + &val); + tp->pci_fn = val & 0x7; + } + } else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717) { + tg3_read_mem(tp, NIC_SRAM_CPMU_STATUS, &val); + if ((val & NIC_SRAM_CPMUSTAT_SIG_MSK) == + NIC_SRAM_CPMUSTAT_SIG) { + tp->pci_fn = val & TG3_CPMU_STATUS_FMSK_5717; + tp->pci_fn = tp->pci_fn ? 1 : 0; + } + } else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719 || + GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5720) { + tg3_read_mem(tp, NIC_SRAM_CPMU_STATUS, &val); + if ((val & NIC_SRAM_CPMUSTAT_SIG_MSK) == + NIC_SRAM_CPMUSTAT_SIG) { + tp->pci_fn = (val & TG3_CPMU_STATUS_FMSK_5719) >> + TG3_CPMU_STATUS_FSHFT_5719; + } } /* Get eeprom hw config before calling tg3_set_power_state(). 
@@ -15665,7 +15692,7 @@ static void __devexit tg3_remove_one(struct pci_dev *pdev) if (tp->fw) release_firmware(tp->fw); - cancel_work_sync(&tp->reset_task); + tg3_reset_task_cancel(tp); if (tg3_flag(tp, USE_PHYLIB)) { tg3_phy_fini(tp); @@ -15699,7 +15726,7 @@ static int tg3_suspend(struct device *device) if (!netif_running(dev)) return 0; - flush_work_sync(&tp->reset_task); + tg3_reset_task_cancel(tp); tg3_phy_stop(tp); tg3_netif_stop(tp); @@ -15812,12 +15839,10 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev, tg3_netif_stop(tp); del_timer_sync(&tp->timer); - tg3_flag_clear(tp, RESTART_TIMER); /* Want to make sure that the reset task doesn't run */ - cancel_work_sync(&tp->reset_task); + tg3_reset_task_cancel(tp); tg3_flag_clear(tp, TX_RECOVERY_PENDING); - tg3_flag_clear(tp, RESTART_TIMER); netif_device_detach(netdev); diff --git a/drivers/net/ethernet/broadcom/tg3.h b/drivers/net/ethernet/broadcom/tg3.h index f32f288134c7..94b4bd049a33 100644 --- a/drivers/net/ethernet/broadcom/tg3.h +++ b/drivers/net/ethernet/broadcom/tg3.h @@ -1095,6 +1095,11 @@ #define TG3_CPMU_CLCK_ORIDE 0x00003624 #define CPMU_CLCK_ORIDE_MAC_ORIDE_EN 0x80000000 +#define TG3_CPMU_STATUS 0x0000362c +#define TG3_CPMU_STATUS_FMSK_5717 0x20000000 +#define TG3_CPMU_STATUS_FMSK_5719 0xc0000000 +#define TG3_CPMU_STATUS_FSHFT_5719 30 + #define TG3_CPMU_CLCK_STAT 0x00003630 #define CPMU_CLCK_STAT_MAC_CLCK_MASK 0x001f0000 #define CPMU_CLCK_STAT_MAC_CLCK_62_5 0x00000000 @@ -2128,6 +2133,10 @@ #define NIC_SRAM_RGMII_EXT_IBND_RX_EN 0x00000008 #define NIC_SRAM_RGMII_EXT_IBND_TX_EN 0x00000010 +#define NIC_SRAM_CPMU_STATUS 0x00000e00 +#define NIC_SRAM_CPMUSTAT_SIG 0x0000362c +#define NIC_SRAM_CPMUSTAT_SIG_MSK 0x0000ffff + #define NIC_SRAM_RX_MINI_BUFFER_DESC 0x00001000 #define NIC_SRAM_DMA_DESC_POOL_BASE 0x00002000 @@ -2344,9 +2353,13 @@ #define APE_PER_LOCK_GRANT_DRIVER 0x00001000 /* APE convenience enumerations. */ -#define TG3_APE_LOCK_GRC 1 -#define TG3_APE_LOCK_MEM 4 -#define TG3_APE_LOCK_GPIO 7 +#define TG3_APE_LOCK_PHY0 0 +#define TG3_APE_LOCK_GRC 1 +#define TG3_APE_LOCK_PHY1 2 +#define TG3_APE_LOCK_PHY2 3 +#define TG3_APE_LOCK_MEM 4 +#define TG3_APE_LOCK_PHY3 5 +#define TG3_APE_LOCK_GPIO 7 #define TG3_EEPROM_SB_F1R2_MBA_OFF 0x10 @@ -2866,7 +2879,6 @@ enum TG3_FLAGS { TG3_FLAG_JUMBO_CAPABLE, TG3_FLAG_CHIP_RESETTING, TG3_FLAG_INIT_COMPLETE, - TG3_FLAG_RESTART_TIMER, TG3_FLAG_TSO_BUG, TG3_FLAG_IS_5788, TG3_FLAG_MAX_RXPEND_64, @@ -2909,6 +2921,7 @@ enum TG3_FLAGS { TG3_FLAG_APE_HAS_NCSI, TG3_FLAG_5717_PLUS, TG3_FLAG_4K_FIFO_LIMIT, + TG3_FLAG_RESET_TASK_PENDING, /* Add new flags before this comment and TG3_FLAG_NUMBER_OF_FLAGS */ TG3_FLAG_NUMBER_OF_FLAGS, /* Last entry in enum TG3_FLAGS */ diff --git a/drivers/net/ethernet/cadence/Kconfig b/drivers/net/ethernet/cadence/Kconfig index 98849a1fc749..a2e150059bc7 100644 --- a/drivers/net/ethernet/cadence/Kconfig +++ b/drivers/net/ethernet/cadence/Kconfig @@ -5,8 +5,8 @@ config HAVE_NET_MACB bool -config NET_ATMEL - bool "Atmel devices" +config NET_CADENCE + bool "Cadence devices" depends on HAVE_NET_MACB || (ARM && ARCH_AT91RM9200) ---help--- If you have a network (Ethernet) card belonging to this class, say Y. @@ -20,7 +20,7 @@ config NET_ATMEL the remaining Atmel network card questions. If you say Y, you will be asked for your specific card in the following questions. -if NET_ATMEL +if NET_CADENCE config ARM_AT91_ETHER tristate "AT91RM9200 Ethernet support" @@ -32,14 +32,16 @@ config ARM_AT91_ETHER ethernet support, then you should always answer Y to this. 
config MACB - tristate "Atmel MACB support" + tristate "Cadence MACB/GEM support" depends on HAVE_NET_MACB select PHYLIB ---help--- - The Atmel MACB ethernet interface is found on many AT32 and AT91 - parts. Say Y to include support for the MACB chip. + The Cadence MACB ethernet interface is found on many Atmel AT32 and + AT91 parts. This driver also supports the Cadence GEM (Gigabit + Ethernet MAC found in some ARM SoC devices). Note: the Gigabit mode + is not yet supported. Say Y to include support for the MACB/GEM chip. To compile this driver as a module, choose M here: the module will be called macb. -endif # NET_ATMEL +endif # NET_CADENCE diff --git a/drivers/net/ethernet/cadence/at91_ether.c b/drivers/net/ethernet/cadence/at91_ether.c index 56624d303487..dfeb46cb3f74 100644 --- a/drivers/net/ethernet/cadence/at91_ether.c +++ b/drivers/net/ethernet/cadence/at91_ether.c @@ -26,6 +26,7 @@ #include <linux/skbuff.h> #include <linux/dma-mapping.h> #include <linux/ethtool.h> +#include <linux/platform_data/macb.h> #include <linux/platform_device.h> #include <linux/clk.h> #include <linux/gfp.h> @@ -984,7 +985,7 @@ static const struct net_device_ops at91ether_netdev_ops = { static int __init at91ether_setup(unsigned long phy_type, unsigned short phy_address, struct platform_device *pdev, struct clk *ether_clk) { - struct at91_eth_data *board_data = pdev->dev.platform_data; + struct macb_platform_data *board_data = pdev->dev.platform_data; struct net_device *dev; struct at91_private *lp; unsigned int val; diff --git a/drivers/net/ethernet/cadence/at91_ether.h b/drivers/net/ethernet/cadence/at91_ether.h index 353f4dab62be..3725fbb0defe 100644 --- a/drivers/net/ethernet/cadence/at91_ether.h +++ b/drivers/net/ethernet/cadence/at91_ether.h @@ -85,7 +85,9 @@ struct recv_desc_bufs struct at91_private { struct mii_if_info mii; /* ethtool support */ - struct at91_eth_data board_data; /* board-specific configuration */ + struct macb_platform_data board_data; /* board-specific + * configuration (shared with + * macb for common data */ struct clk *ether_clk; /* clock */ /* PHY */ diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c index a437b46e5490..64d61461bdc7 100644 --- a/drivers/net/ethernet/cadence/macb.c +++ b/drivers/net/ethernet/cadence/macb.c @@ -1,5 +1,5 @@ /* - * Atmel MACB Ethernet Controller driver + * Cadence MACB/GEM Ethernet Controller driver * * Copyright (C) 2004-2006 Atmel Corporation * @@ -8,6 +8,7 @@ * published by the Free Software Foundation. 
*/ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/clk.h> #include <linux/module.h> #include <linux/moduleparam.h> @@ -19,12 +20,10 @@ #include <linux/netdevice.h> #include <linux/etherdevice.h> #include <linux/dma-mapping.h> +#include <linux/platform_data/macb.h> #include <linux/platform_device.h> #include <linux/phy.h> -#include <mach/board.h> -#include <mach/cpu.h> - #include "macb.h" #define RX_BUFFER_SIZE 128 @@ -60,9 +59,9 @@ static void __macb_set_hwaddr(struct macb *bp) u16 top; bottom = cpu_to_le32(*((u32 *)bp->dev->dev_addr)); - macb_writel(bp, SA1B, bottom); + macb_or_gem_writel(bp, SA1B, bottom); top = cpu_to_le16(*((u16 *)(bp->dev->dev_addr + 4))); - macb_writel(bp, SA1T, top); + macb_or_gem_writel(bp, SA1T, top); } static void __init macb_get_hwaddr(struct macb *bp) @@ -71,8 +70,8 @@ static void __init macb_get_hwaddr(struct macb *bp) u16 top; u8 addr[6]; - bottom = macb_readl(bp, SA1B); - top = macb_readl(bp, SA1T); + bottom = macb_or_gem_readl(bp, SA1B); + top = macb_or_gem_readl(bp, SA1T); addr[0] = bottom & 0xff; addr[1] = (bottom >> 8) & 0xff; @@ -84,7 +83,7 @@ static void __init macb_get_hwaddr(struct macb *bp) if (is_valid_ether_addr(addr)) { memcpy(bp->dev->dev_addr, addr, sizeof(addr)); } else { - dev_info(&bp->pdev->dev, "invalid hw address, using random\n"); + netdev_info(bp->dev, "invalid hw address, using random\n"); random_ether_addr(bp->dev->dev_addr); } } @@ -178,11 +177,12 @@ static void macb_handle_link_change(struct net_device *dev) if (status_change) { if (phydev->link) - printk(KERN_INFO "%s: link up (%d/%s)\n", - dev->name, phydev->speed, - DUPLEX_FULL == phydev->duplex ? "Full":"Half"); + netdev_info(dev, "link up (%d/%s)\n", + phydev->speed, + phydev->duplex == DUPLEX_FULL ? + "Full" : "Half"); else - printk(KERN_INFO "%s: link down\n", dev->name); + netdev_info(dev, "link down\n"); } } @@ -191,12 +191,12 @@ static int macb_mii_probe(struct net_device *dev) { struct macb *bp = netdev_priv(dev); struct phy_device *phydev; - struct eth_platform_data *pdata; + struct macb_platform_data *pdata; int ret; phydev = phy_find_first(bp->mii_bus); if (!phydev) { - printk (KERN_ERR "%s: no PHY found\n", dev->name); + netdev_err(dev, "no PHY found\n"); return -1; } @@ -209,7 +209,7 @@ static int macb_mii_probe(struct net_device *dev) PHY_INTERFACE_MODE_RMII : PHY_INTERFACE_MODE_MII); if (ret) { - printk(KERN_ERR "%s: Could not attach to PHY\n", dev->name); + netdev_err(dev, "Could not attach to PHY\n"); return ret; } @@ -228,7 +228,7 @@ static int macb_mii_probe(struct net_device *dev) static int macb_mii_init(struct macb *bp) { - struct eth_platform_data *pdata; + struct macb_platform_data *pdata; int err = -ENXIO, i; /* Enable management port */ @@ -285,8 +285,8 @@ err_out: static void macb_update_stats(struct macb *bp) { u32 __iomem *reg = bp->regs + MACB_PFR; - u32 *p = &bp->hw_stats.rx_pause_frames; - u32 *end = &bp->hw_stats.tx_pause_frames + 1; + u32 *p = &bp->hw_stats.macb.rx_pause_frames; + u32 *end = &bp->hw_stats.macb.tx_pause_frames + 1; WARN_ON((unsigned long)(end - p - 1) != (MACB_TPF - MACB_PFR) / 4); @@ -303,14 +303,13 @@ static void macb_tx(struct macb *bp) status = macb_readl(bp, TSR); macb_writel(bp, TSR, status); - dev_dbg(&bp->pdev->dev, "macb_tx status = %02lx\n", - (unsigned long)status); + netdev_dbg(bp->dev, "macb_tx status = %02lx\n", (unsigned long)status); if (status & (MACB_BIT(UND) | MACB_BIT(TSR_RLE))) { int i; - printk(KERN_ERR "%s: TX %s, resetting buffers\n", - bp->dev->name, status & MACB_BIT(UND) ? 
- "underrun" : "retry limit exceeded"); + netdev_err(bp->dev, "TX %s, resetting buffers\n", + status & MACB_BIT(UND) ? + "underrun" : "retry limit exceeded"); /* Transfer ongoing, disable transmitter, to avoid confusion */ if (status & MACB_BIT(TGO)) @@ -369,8 +368,8 @@ static void macb_tx(struct macb *bp) if (!(bufstat & MACB_BIT(TX_USED))) break; - dev_dbg(&bp->pdev->dev, "skb %u (data %p) TX complete\n", - tail, skb->data); + netdev_dbg(bp->dev, "skb %u (data %p) TX complete\n", + tail, skb->data); dma_unmap_single(&bp->pdev->dev, rp->mapping, skb->len, DMA_TO_DEVICE); bp->stats.tx_packets++; @@ -395,8 +394,8 @@ static int macb_rx_frame(struct macb *bp, unsigned int first_frag, len = MACB_BFEXT(RX_FRMLEN, bp->rx_ring[last_frag].ctrl); - dev_dbg(&bp->pdev->dev, "macb_rx_frame frags %u - %u (len %u)\n", - first_frag, last_frag, len); + netdev_dbg(bp->dev, "macb_rx_frame frags %u - %u (len %u)\n", + first_frag, last_frag, len); skb = dev_alloc_skb(len + RX_OFFSET); if (!skb) { @@ -437,8 +436,8 @@ static int macb_rx_frame(struct macb *bp, unsigned int first_frag, bp->stats.rx_packets++; bp->stats.rx_bytes += len; - dev_dbg(&bp->pdev->dev, "received skb of length %u, csum: %08x\n", - skb->len, skb->csum); + netdev_dbg(bp->dev, "received skb of length %u, csum: %08x\n", + skb->len, skb->csum); netif_receive_skb(skb); return 0; @@ -515,8 +514,8 @@ static int macb_poll(struct napi_struct *napi, int budget) work_done = 0; - dev_dbg(&bp->pdev->dev, "poll: status = %08lx, budget = %d\n", - (unsigned long)status, budget); + netdev_dbg(bp->dev, "poll: status = %08lx, budget = %d\n", + (unsigned long)status, budget); work_done = macb_rx(bp, budget); if (work_done < budget) { @@ -565,8 +564,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id) macb_writel(bp, IDR, MACB_RX_INT_FLAGS); if (napi_schedule_prep(&bp->napi)) { - dev_dbg(&bp->pdev->dev, - "scheduling RX softirq\n"); + netdev_dbg(bp->dev, "scheduling RX softirq\n"); __napi_schedule(&bp->napi); } } @@ -582,16 +580,19 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id) if (status & MACB_BIT(ISR_ROVR)) { /* We missed at least one packet */ - bp->hw_stats.rx_overruns++; + if (macb_is_gem(bp)) + bp->hw_stats.gem.rx_overruns++; + else + bp->hw_stats.macb.rx_overruns++; } if (status & MACB_BIT(HRESP)) { /* - * TODO: Reset the hardware, and maybe move the printk - * to a lower-priority context as well (work queue?) + * TODO: Reset the hardware, and maybe move the + * netdev_err to a lower-priority context as well + * (work queue?) 
*/ - printk(KERN_ERR "%s: DMA bus error: HRESP not OK\n", - dev->name); + netdev_err(dev, "DMA bus error: HRESP not OK\n"); } status = macb_readl(bp, ISR); @@ -626,16 +627,12 @@ static int macb_start_xmit(struct sk_buff *skb, struct net_device *dev) unsigned long flags; #ifdef DEBUG - int i; - dev_dbg(&bp->pdev->dev, - "start_xmit: len %u head %p data %p tail %p end %p\n", - skb->len, skb->head, skb->data, - skb_tail_pointer(skb), skb_end_pointer(skb)); - dev_dbg(&bp->pdev->dev, - "data:"); - for (i = 0; i < 16; i++) - printk(" %02x", (unsigned int)skb->data[i]); - printk("\n"); + netdev_dbg(bp->dev, + "start_xmit: len %u head %p data %p tail %p end %p\n", + skb->len, skb->head, skb->data, + skb_tail_pointer(skb), skb_end_pointer(skb)); + print_hex_dump(KERN_DEBUG, "data: ", DUMP_PREFIX_OFFSET, 16, 1, + skb->data, 16, true); #endif len = skb->len; @@ -645,21 +642,20 @@ static int macb_start_xmit(struct sk_buff *skb, struct net_device *dev) if (TX_BUFFS_AVAIL(bp) < 1) { netif_stop_queue(dev); spin_unlock_irqrestore(&bp->lock, flags); - dev_err(&bp->pdev->dev, - "BUG! Tx Ring full when queue awake!\n"); - dev_dbg(&bp->pdev->dev, "tx_head = %u, tx_tail = %u\n", - bp->tx_head, bp->tx_tail); + netdev_err(bp->dev, "BUG! Tx Ring full when queue awake!\n"); + netdev_dbg(bp->dev, "tx_head = %u, tx_tail = %u\n", + bp->tx_head, bp->tx_tail); return NETDEV_TX_BUSY; } entry = bp->tx_head; - dev_dbg(&bp->pdev->dev, "Allocated ring entry %u\n", entry); + netdev_dbg(bp->dev, "Allocated ring entry %u\n", entry); mapping = dma_map_single(&bp->pdev->dev, skb->data, len, DMA_TO_DEVICE); bp->tx_skb[entry].skb = skb; bp->tx_skb[entry].mapping = mapping; - dev_dbg(&bp->pdev->dev, "Mapped skb data %p to DMA addr %08lx\n", - skb->data, (unsigned long)mapping); + netdev_dbg(bp->dev, "Mapped skb data %p to DMA addr %08lx\n", + skb->data, (unsigned long)mapping); ctrl = MACB_BF(TX_FRMLEN, len); ctrl |= MACB_BIT(TX_LAST); @@ -723,27 +719,27 @@ static int macb_alloc_consistent(struct macb *bp) &bp->rx_ring_dma, GFP_KERNEL); if (!bp->rx_ring) goto out_err; - dev_dbg(&bp->pdev->dev, - "Allocated RX ring of %d bytes at %08lx (mapped %p)\n", - size, (unsigned long)bp->rx_ring_dma, bp->rx_ring); + netdev_dbg(bp->dev, + "Allocated RX ring of %d bytes at %08lx (mapped %p)\n", + size, (unsigned long)bp->rx_ring_dma, bp->rx_ring); size = TX_RING_BYTES; bp->tx_ring = dma_alloc_coherent(&bp->pdev->dev, size, &bp->tx_ring_dma, GFP_KERNEL); if (!bp->tx_ring) goto out_err; - dev_dbg(&bp->pdev->dev, - "Allocated TX ring of %d bytes at %08lx (mapped %p)\n", - size, (unsigned long)bp->tx_ring_dma, bp->tx_ring); + netdev_dbg(bp->dev, + "Allocated TX ring of %d bytes at %08lx (mapped %p)\n", + size, (unsigned long)bp->tx_ring_dma, bp->tx_ring); size = RX_RING_SIZE * RX_BUFFER_SIZE; bp->rx_buffers = dma_alloc_coherent(&bp->pdev->dev, size, &bp->rx_buffers_dma, GFP_KERNEL); if (!bp->rx_buffers) goto out_err; - dev_dbg(&bp->pdev->dev, - "Allocated RX buffers of %d bytes at %08lx (mapped %p)\n", - size, (unsigned long)bp->rx_buffers_dma, bp->rx_buffers); + netdev_dbg(bp->dev, + "Allocated RX buffers of %d bytes at %08lx (mapped %p)\n", + size, (unsigned long)bp->rx_buffers_dma, bp->rx_buffers); return 0; @@ -797,6 +793,84 @@ static void macb_reset_hw(struct macb *bp) macb_readl(bp, ISR); } +static u32 gem_mdc_clk_div(struct macb *bp) +{ + u32 config; + unsigned long pclk_hz = clk_get_rate(bp->pclk); + + if (pclk_hz <= 20000000) + config = GEM_BF(CLK, GEM_CLK_DIV8); + else if (pclk_hz <= 40000000) + config = GEM_BF(CLK, GEM_CLK_DIV16); + else 
if (pclk_hz <= 80000000) + config = GEM_BF(CLK, GEM_CLK_DIV32); + else if (pclk_hz <= 120000000) + config = GEM_BF(CLK, GEM_CLK_DIV48); + else if (pclk_hz <= 160000000) + config = GEM_BF(CLK, GEM_CLK_DIV64); + else + config = GEM_BF(CLK, GEM_CLK_DIV96); + + return config; +} + +static u32 macb_mdc_clk_div(struct macb *bp) +{ + u32 config; + unsigned long pclk_hz; + + if (macb_is_gem(bp)) + return gem_mdc_clk_div(bp); + + pclk_hz = clk_get_rate(bp->pclk); + if (pclk_hz <= 20000000) + config = MACB_BF(CLK, MACB_CLK_DIV8); + else if (pclk_hz <= 40000000) + config = MACB_BF(CLK, MACB_CLK_DIV16); + else if (pclk_hz <= 80000000) + config = MACB_BF(CLK, MACB_CLK_DIV32); + else + config = MACB_BF(CLK, MACB_CLK_DIV64); + + return config; +} + +/* + * Get the DMA bus width field of the network configuration register that we + * should program. We find the width from decoding the design configuration + * register to find the maximum supported data bus width. + */ +static u32 macb_dbw(struct macb *bp) +{ + if (!macb_is_gem(bp)) + return 0; + + switch (GEM_BFEXT(DBWDEF, gem_readl(bp, DCFG1))) { + case 4: + return GEM_BF(DBW, GEM_DBW128); + case 2: + return GEM_BF(DBW, GEM_DBW64); + case 1: + default: + return GEM_BF(DBW, GEM_DBW32); + } +} + +/* + * Configure the receive DMA engine to use the correct receive buffer size. + * This is a configurable parameter for GEM. + */ +static void macb_configure_dma(struct macb *bp) +{ + u32 dmacfg; + + if (macb_is_gem(bp)) { + dmacfg = gem_readl(bp, DMACFG) & ~GEM_BF(RXBS, -1L); + dmacfg |= GEM_BF(RXBS, RX_BUFFER_SIZE / 64); + gem_writel(bp, DMACFG, dmacfg); + } +} + static void macb_init_hw(struct macb *bp) { u32 config; @@ -804,7 +878,7 @@ static void macb_init_hw(struct macb *bp) macb_reset_hw(bp); __macb_set_hwaddr(bp); - config = macb_readl(bp, NCFGR) & MACB_BF(CLK, -1L); + config = macb_mdc_clk_div(bp); config |= MACB_BIT(PAE); /* PAuse Enable */ config |= MACB_BIT(DRFCS); /* Discard Rx FCS */ config |= MACB_BIT(BIG); /* Receive oversized frames */ @@ -812,8 +886,11 @@ static void macb_init_hw(struct macb *bp) config |= MACB_BIT(CAF); /* Copy All Frames */ if (!(bp->dev->flags & IFF_BROADCAST)) config |= MACB_BIT(NBC); /* No BroadCast */ + config |= macb_dbw(bp); macb_writel(bp, NCFGR, config); + macb_configure_dma(bp); + /* Initialize TX and RX buffers */ macb_writel(bp, RBQP, bp->rx_ring_dma); macb_writel(bp, TBQP, bp->tx_ring_dma); @@ -909,8 +986,8 @@ static void macb_sethashtable(struct net_device *dev) mc_filter[bitnr >> 5] |= 1 << (bitnr & 31); } - macb_writel(bp, HRB, mc_filter[0]); - macb_writel(bp, HRT, mc_filter[1]); + macb_or_gem_writel(bp, HRB, mc_filter[0]); + macb_or_gem_writel(bp, HRT, mc_filter[1]); } /* @@ -932,8 +1009,8 @@ static void macb_set_rx_mode(struct net_device *dev) if (dev->flags & IFF_ALLMULTI) { /* Enable all multicast mode */ - macb_writel(bp, HRB, -1); - macb_writel(bp, HRT, -1); + macb_or_gem_writel(bp, HRB, -1); + macb_or_gem_writel(bp, HRT, -1); cfg |= MACB_BIT(NCFGR_MTI); } else if (!netdev_mc_empty(dev)) { /* Enable specific multicasts */ @@ -941,8 +1018,8 @@ static void macb_set_rx_mode(struct net_device *dev) cfg |= MACB_BIT(NCFGR_MTI); } else if (dev->flags & (~IFF_ALLMULTI)) { /* Disable all multicast mode */ - macb_writel(bp, HRB, 0); - macb_writel(bp, HRT, 0); + macb_or_gem_writel(bp, HRB, 0); + macb_or_gem_writel(bp, HRT, 0); cfg &= ~MACB_BIT(NCFGR_MTI); } @@ -954,7 +1031,7 @@ static int macb_open(struct net_device *dev) struct macb *bp = netdev_priv(dev); int err; - dev_dbg(&bp->pdev->dev, "open\n"); + 
netdev_dbg(bp->dev, "open\n"); /* if the phy is not yet register, retry later*/ if (!bp->phy_dev) @@ -965,9 +1042,8 @@ static int macb_open(struct net_device *dev) err = macb_alloc_consistent(bp); if (err) { - printk(KERN_ERR - "%s: Unable to allocate DMA memory (error %d)\n", - dev->name, err); + netdev_err(dev, "Unable to allocate DMA memory (error %d)\n", + err); return err; } @@ -1005,11 +1081,62 @@ static int macb_close(struct net_device *dev) return 0; } +static void gem_update_stats(struct macb *bp) +{ + u32 __iomem *reg = bp->regs + GEM_OTX; + u32 *p = &bp->hw_stats.gem.tx_octets_31_0; + u32 *end = &bp->hw_stats.gem.rx_udp_checksum_errors + 1; + + for (; p < end; p++, reg++) + *p += __raw_readl(reg); +} + +static struct net_device_stats *gem_get_stats(struct macb *bp) +{ + struct gem_stats *hwstat = &bp->hw_stats.gem; + struct net_device_stats *nstat = &bp->stats; + + gem_update_stats(bp); + + nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors + + hwstat->rx_alignment_errors + + hwstat->rx_resource_errors + + hwstat->rx_overruns + + hwstat->rx_oversize_frames + + hwstat->rx_jabbers + + hwstat->rx_undersized_frames + + hwstat->rx_length_field_frame_errors); + nstat->tx_errors = (hwstat->tx_late_collisions + + hwstat->tx_excessive_collisions + + hwstat->tx_underrun + + hwstat->tx_carrier_sense_errors); + nstat->multicast = hwstat->rx_multicast_frames; + nstat->collisions = (hwstat->tx_single_collision_frames + + hwstat->tx_multiple_collision_frames + + hwstat->tx_excessive_collisions); + nstat->rx_length_errors = (hwstat->rx_oversize_frames + + hwstat->rx_jabbers + + hwstat->rx_undersized_frames + + hwstat->rx_length_field_frame_errors); + nstat->rx_over_errors = hwstat->rx_resource_errors; + nstat->rx_crc_errors = hwstat->rx_frame_check_sequence_errors; + nstat->rx_frame_errors = hwstat->rx_alignment_errors; + nstat->rx_fifo_errors = hwstat->rx_overruns; + nstat->tx_aborted_errors = hwstat->tx_excessive_collisions; + nstat->tx_carrier_errors = hwstat->tx_carrier_sense_errors; + nstat->tx_fifo_errors = hwstat->tx_underrun; + + return nstat; +} + static struct net_device_stats *macb_get_stats(struct net_device *dev) { struct macb *bp = netdev_priv(dev); struct net_device_stats *nstat = &bp->stats; - struct macb_stats *hwstat = &bp->hw_stats; + struct macb_stats *hwstat = &bp->hw_stats.macb; + + if (macb_is_gem(bp)) + return gem_get_stats(bp); /* read stats from hardware */ macb_update_stats(bp); @@ -1119,12 +1246,11 @@ static const struct net_device_ops macb_netdev_ops = { static int __init macb_probe(struct platform_device *pdev) { - struct eth_platform_data *pdata; + struct macb_platform_data *pdata; struct resource *regs; struct net_device *dev; struct macb *bp; struct phy_device *phydev; - unsigned long pclk_hz; u32 config; int err = -ENXIO; @@ -1152,28 +1278,19 @@ static int __init macb_probe(struct platform_device *pdev) spin_lock_init(&bp->lock); -#if defined(CONFIG_ARCH_AT91) - bp->pclk = clk_get(&pdev->dev, "macb_clk"); + bp->pclk = clk_get(&pdev->dev, "pclk"); if (IS_ERR(bp->pclk)) { dev_err(&pdev->dev, "failed to get macb_clk\n"); goto err_out_free_dev; } clk_enable(bp->pclk); -#else - bp->pclk = clk_get(&pdev->dev, "pclk"); - if (IS_ERR(bp->pclk)) { - dev_err(&pdev->dev, "failed to get pclk\n"); - goto err_out_free_dev; - } + bp->hclk = clk_get(&pdev->dev, "hclk"); if (IS_ERR(bp->hclk)) { dev_err(&pdev->dev, "failed to get hclk\n"); goto err_out_put_pclk; } - - clk_enable(bp->pclk); clk_enable(bp->hclk); -#endif bp->regs = ioremap(regs->start, resource_size(regs)); 
if (!bp->regs) { @@ -1185,9 +1302,8 @@ static int __init macb_probe(struct platform_device *pdev) dev->irq = platform_get_irq(pdev, 0); err = request_irq(dev->irq, macb_interrupt, 0, dev->name, dev); if (err) { - printk(KERN_ERR - "%s: Unable to request IRQ %d (error %d)\n", - dev->name, dev->irq, err); + dev_err(&pdev->dev, "Unable to request IRQ %d (error %d)\n", + dev->irq, err); goto err_out_iounmap; } @@ -1198,15 +1314,8 @@ static int __init macb_probe(struct platform_device *pdev) dev->base_addr = regs->start; /* Set MII management clock divider */ - pclk_hz = clk_get_rate(bp->pclk); - if (pclk_hz <= 20000000) - config = MACB_BF(CLK, MACB_CLK_DIV8); - else if (pclk_hz <= 40000000) - config = MACB_BF(CLK, MACB_CLK_DIV16); - else if (pclk_hz <= 80000000) - config = MACB_BF(CLK, MACB_CLK_DIV32); - else - config = MACB_BF(CLK, MACB_CLK_DIV64); + config = macb_mdc_clk_div(bp); + config |= macb_dbw(bp); macb_writel(bp, NCFGR, config); macb_get_hwaddr(bp); @@ -1214,15 +1323,16 @@ static int __init macb_probe(struct platform_device *pdev) if (pdata && pdata->is_rmii) #if defined(CONFIG_ARCH_AT91) - macb_writel(bp, USRIO, (MACB_BIT(RMII) | MACB_BIT(CLKEN)) ); + macb_or_gem_writel(bp, USRIO, (MACB_BIT(RMII) | + MACB_BIT(CLKEN))); #else - macb_writel(bp, USRIO, 0); + macb_or_gem_writel(bp, USRIO, 0); #endif else #if defined(CONFIG_ARCH_AT91) - macb_writel(bp, USRIO, MACB_BIT(CLKEN)); + macb_or_gem_writel(bp, USRIO, MACB_BIT(CLKEN)); #else - macb_writel(bp, USRIO, MACB_BIT(MII)); + macb_or_gem_writel(bp, USRIO, MACB_BIT(MII)); #endif bp->tx_pending = DEF_TX_RING_PENDING; @@ -1239,13 +1349,13 @@ static int __init macb_probe(struct platform_device *pdev) platform_set_drvdata(pdev, dev); - printk(KERN_INFO "%s: Atmel MACB at 0x%08lx irq %d (%pM)\n", - dev->name, dev->base_addr, dev->irq, dev->dev_addr); + netdev_info(dev, "Cadence %s at 0x%08lx irq %d (%pM)\n", + macb_is_gem(bp) ? 
"GEM" : "MACB", dev->base_addr, + dev->irq, dev->dev_addr); phydev = bp->phy_dev; - printk(KERN_INFO "%s: attached PHY driver [%s] " - "(mii_bus:phy_addr=%s, irq=%d)\n", dev->name, - phydev->drv->name, dev_name(&phydev->dev), phydev->irq); + netdev_info(dev, "attached PHY driver [%s] (mii_bus:phy_addr=%s, irq=%d)\n", + phydev->drv->name, dev_name(&phydev->dev), phydev->irq); return 0; @@ -1256,14 +1366,10 @@ err_out_free_irq: err_out_iounmap: iounmap(bp->regs); err_out_disable_clocks: -#ifndef CONFIG_ARCH_AT91 clk_disable(bp->hclk); clk_put(bp->hclk); -#endif clk_disable(bp->pclk); -#ifndef CONFIG_ARCH_AT91 err_out_put_pclk: -#endif clk_put(bp->pclk); err_out_free_dev: free_netdev(dev); @@ -1289,10 +1395,8 @@ static int __exit macb_remove(struct platform_device *pdev) unregister_netdev(dev); free_irq(dev->irq, dev); iounmap(bp->regs); -#ifndef CONFIG_ARCH_AT91 clk_disable(bp->hclk); clk_put(bp->hclk); -#endif clk_disable(bp->pclk); clk_put(bp->pclk); free_netdev(dev); @@ -1310,9 +1414,7 @@ static int macb_suspend(struct platform_device *pdev, pm_message_t state) netif_device_detach(netdev); -#ifndef CONFIG_ARCH_AT91 clk_disable(bp->hclk); -#endif clk_disable(bp->pclk); return 0; @@ -1324,9 +1426,7 @@ static int macb_resume(struct platform_device *pdev) struct macb *bp = netdev_priv(netdev); clk_enable(bp->pclk); -#ifndef CONFIG_ARCH_AT91 clk_enable(bp->hclk); -#endif netif_device_attach(netdev); @@ -1361,6 +1461,6 @@ module_init(macb_init); module_exit(macb_exit); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Atmel MACB Ethernet driver"); +MODULE_DESCRIPTION("Cadence MACB/GEM Ethernet driver"); MODULE_AUTHOR("Haavard Skinnemoen (Atmel)"); MODULE_ALIAS("platform:macb"); diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h index d3212f6db703..193107884a5a 100644 --- a/drivers/net/ethernet/cadence/macb.h +++ b/drivers/net/ethernet/cadence/macb.h @@ -59,6 +59,24 @@ #define MACB_TPQ 0x00bc #define MACB_USRIO 0x00c0 #define MACB_WOL 0x00c4 +#define MACB_MID 0x00fc + +/* GEM register offsets. */ +#define GEM_NCFGR 0x0004 +#define GEM_USRIO 0x000c +#define GEM_DMACFG 0x0010 +#define GEM_HRB 0x0080 +#define GEM_HRT 0x0084 +#define GEM_SA1B 0x0088 +#define GEM_SA1T 0x008C +#define GEM_OTX 0x0100 +#define GEM_DCFG1 0x0280 +#define GEM_DCFG2 0x0284 +#define GEM_DCFG3 0x0288 +#define GEM_DCFG4 0x028c +#define GEM_DCFG5 0x0290 +#define GEM_DCFG6 0x0294 +#define GEM_DCFG7 0x0298 /* Bitfields in NCR */ #define MACB_LB_OFFSET 0 @@ -126,6 +144,21 @@ #define MACB_IRXFCS_OFFSET 19 #define MACB_IRXFCS_SIZE 1 +/* GEM specific NCFGR bitfields. */ +#define GEM_CLK_OFFSET 18 +#define GEM_CLK_SIZE 3 +#define GEM_DBW_OFFSET 21 +#define GEM_DBW_SIZE 2 + +/* Constants for data bus width. */ +#define GEM_DBW32 0 +#define GEM_DBW64 1 +#define GEM_DBW128 2 + +/* Bitfields in DMACFG. */ +#define GEM_RXBS_OFFSET 16 +#define GEM_RXBS_SIZE 8 + /* Bitfields in NSR */ #define MACB_NSR_LINK_OFFSET 0 #define MACB_NSR_LINK_SIZE 1 @@ -228,12 +261,30 @@ #define MACB_WOL_MTI_OFFSET 19 #define MACB_WOL_MTI_SIZE 1 +/* Bitfields in MID */ +#define MACB_IDNUM_OFFSET 16 +#define MACB_IDNUM_SIZE 16 +#define MACB_REV_OFFSET 0 +#define MACB_REV_SIZE 16 + +/* Bitfields in DCFG1. */ +#define GEM_DBWDEF_OFFSET 25 +#define GEM_DBWDEF_SIZE 3 + /* Constants for CLK */ #define MACB_CLK_DIV8 0 #define MACB_CLK_DIV16 1 #define MACB_CLK_DIV32 2 #define MACB_CLK_DIV64 3 +/* GEM specific constants for CLK. 
*/ +#define GEM_CLK_DIV8 0 +#define GEM_CLK_DIV16 1 +#define GEM_CLK_DIV32 2 +#define GEM_CLK_DIV48 3 +#define GEM_CLK_DIV64 4 +#define GEM_CLK_DIV96 5 + /* Constants for MAN register */ #define MACB_MAN_SOF 1 #define MACB_MAN_WRITE 1 @@ -254,11 +305,52 @@ << MACB_##name##_OFFSET)) \ | MACB_BF(name,value)) +#define GEM_BIT(name) \ + (1 << GEM_##name##_OFFSET) +#define GEM_BF(name, value) \ + (((value) & ((1 << GEM_##name##_SIZE) - 1)) \ + << GEM_##name##_OFFSET) +#define GEM_BFEXT(name, value)\ + (((value) >> GEM_##name##_OFFSET) \ + & ((1 << GEM_##name##_SIZE) - 1)) +#define GEM_BFINS(name, value, old) \ + (((old) & ~(((1 << GEM_##name##_SIZE) - 1) \ + << GEM_##name##_OFFSET)) \ + | GEM_BF(name, value)) + /* Register access macros */ #define macb_readl(port,reg) \ __raw_readl((port)->regs + MACB_##reg) #define macb_writel(port,reg,value) \ __raw_writel((value), (port)->regs + MACB_##reg) +#define gem_readl(port, reg) \ + __raw_readl((port)->regs + GEM_##reg) +#define gem_writel(port, reg, value) \ + __raw_writel((value), (port)->regs + GEM_##reg) + +/* + * Conditional GEM/MACB macros. These perform the operation to the correct + * register dependent on whether the device is a GEM or a MACB. For registers + * and bitfields that are common across both devices, use macb_{read,write}l + * to avoid the cost of the conditional. + */ +#define macb_or_gem_writel(__bp, __reg, __value) \ + ({ \ + if (macb_is_gem((__bp))) \ + gem_writel((__bp), __reg, __value); \ + else \ + macb_writel((__bp), __reg, __value); \ + }) + +#define macb_or_gem_readl(__bp, __reg) \ + ({ \ + u32 __v; \ + if (macb_is_gem((__bp))) \ + __v = gem_readl((__bp), __reg); \ + else \ + __v = macb_readl((__bp), __reg); \ + __v; \ + }) struct dma_desc { u32 addr; @@ -358,6 +450,54 @@ struct macb_stats { u32 tx_pause_frames; }; +struct gem_stats { + u32 tx_octets_31_0; + u32 tx_octets_47_32; + u32 tx_frames; + u32 tx_broadcast_frames; + u32 tx_multicast_frames; + u32 tx_pause_frames; + u32 tx_64_byte_frames; + u32 tx_65_127_byte_frames; + u32 tx_128_255_byte_frames; + u32 tx_256_511_byte_frames; + u32 tx_512_1023_byte_frames; + u32 tx_1024_1518_byte_frames; + u32 tx_greater_than_1518_byte_frames; + u32 tx_underrun; + u32 tx_single_collision_frames; + u32 tx_multiple_collision_frames; + u32 tx_excessive_collisions; + u32 tx_late_collisions; + u32 tx_deferred_frames; + u32 tx_carrier_sense_errors; + u32 rx_octets_31_0; + u32 rx_octets_47_32; + u32 rx_frames; + u32 rx_broadcast_frames; + u32 rx_multicast_frames; + u32 rx_pause_frames; + u32 rx_64_byte_frames; + u32 rx_65_127_byte_frames; + u32 rx_128_255_byte_frames; + u32 rx_256_511_byte_frames; + u32 rx_512_1023_byte_frames; + u32 rx_1024_1518_byte_frames; + u32 rx_greater_than_1518_byte_frames; + u32 rx_undersized_frames; + u32 rx_oversize_frames; + u32 rx_jabbers; + u32 rx_frame_check_sequence_errors; + u32 rx_length_field_frame_errors; + u32 rx_symbol_errors; + u32 rx_alignment_errors; + u32 rx_resource_errors; + u32 rx_overruns; + u32 rx_ip_header_checksum_errors; + u32 rx_tcp_checksum_errors; + u32 rx_udp_checksum_errors; +}; + struct macb { void __iomem *regs; @@ -376,7 +516,10 @@ struct macb { struct net_device *dev; struct napi_struct napi; struct net_device_stats stats; - struct macb_stats hw_stats; + union { + struct macb_stats macb; + struct gem_stats gem; + } hw_stats; dma_addr_t rx_ring_dma; dma_addr_t tx_ring_dma; @@ -391,4 +534,9 @@ struct macb { unsigned int duplex; }; +static inline bool macb_is_gem(struct macb *bp) +{ + return MACB_BFEXT(IDNUM, macb_readl(bp, MID)) 
== 0x2; +} + #endif /* _MACB_H */ diff --git a/drivers/net/ethernet/freescale/Kconfig b/drivers/net/ethernet/freescale/Kconfig index 1cf671643d1f..c520cfd3b298 100644 --- a/drivers/net/ethernet/freescale/Kconfig +++ b/drivers/net/ethernet/freescale/Kconfig @@ -7,8 +7,7 @@ config NET_VENDOR_FREESCALE default y depends on FSL_SOC || QUICC_ENGINE || CPM1 || CPM2 || PPC_MPC512x || \ M523x || M527x || M5272 || M528x || M520x || M532x || \ - ARCH_MXC || ARCH_MXS || \ - (PPC_MPC52xx && PPC_BESTCOMM) + ARCH_MXC || ARCH_MXS || (PPC_MPC52xx && PPC_BESTCOMM) ---help--- If you have a network (Ethernet) card belonging to this class, say Y and read the Ethernet-HOWTO, available from diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 61029dc7fa6f..76213162fbe3 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -5,7 +5,11 @@ config NET_VENDOR_INTEL bool "Intel devices" default y - depends on PCI || PCI_MSI + depends on PCI || PCI_MSI || ISA || ISA_DMA_API || ARM || \ + ARCH_ACORN || MCA || MCA_LEGACY || SNI_RM || SUN3 || \ + GSC || BVME6000 || MVME16x || ARCH_ENP2611 || \ + (ARM && ARCH_IXP4XX && IXP4XX_NPE && IXP4XX_QMGR) || \ + EXPERIMENTAL ---help--- If you have a network (Ethernet) card belonging to this class, say Y and read the Ethernet-HOWTO, available from diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index db95731863d7..00fcd39ad666 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -442,12 +442,14 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter, int ixgbe_check_vf_assignment(struct ixgbe_adapter *adapter) { +#ifdef CONFIG_PCI_IOV int i; for (i = 0; i < adapter->num_vfs; i++) { if (adapter->vfinfo[i].vfdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED) return true; } +#endif return false; } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h index 4a5d8897faab..df04f1a3857c 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h @@ -42,11 +42,11 @@ int ixgbe_ndo_set_vf_spoofchk(struct net_device *netdev, int vf, bool setting); int ixgbe_ndo_get_vf_config(struct net_device *netdev, int vf, struct ifla_vf_info *ivi); void ixgbe_check_vf_rate_limit(struct ixgbe_adapter *adapter); -#ifdef CONFIG_PCI_IOV void ixgbe_disable_sriov(struct ixgbe_adapter *adapter); +int ixgbe_check_vf_assignment(struct ixgbe_adapter *adapter); +#ifdef CONFIG_PCI_IOV void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, const struct ixgbe_info *ii); -int ixgbe_check_vf_assignment(struct ixgbe_adapter *adapter); #endif diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c index cbd026f3bc57..fdc6c394c683 100644 --- a/drivers/net/ethernet/marvell/sky2.c +++ b/drivers/net/ethernet/marvell/sky2.c @@ -366,17 +366,6 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) gm_phy_write(hw, port, PHY_MARV_FE_SPEC_2, spec); } } else { - if (hw->chip_id >= CHIP_ID_YUKON_OPT) { - u16 ctrl2 = gm_phy_read(hw, port, PHY_MARV_EXT_CTRL_2); - - /* enable PHY Reverse Auto-Negotiation */ - ctrl2 |= 1u << 13; - - /* Write PHY changes (SW-reset must follow) */ - gm_phy_write(hw, port, PHY_MARV_EXT_CTRL_2, ctrl2); - } - - /* disable energy detect */ ctrl &= ~PHY_M_PC_EN_DET_MSK; diff --git a/drivers/net/ethernet/natsemi/Kconfig b/drivers/net/ethernet/natsemi/Kconfig index 
4a6b9fd073b6..eb836f770f50 100644 --- a/drivers/net/ethernet/natsemi/Kconfig +++ b/drivers/net/ethernet/natsemi/Kconfig @@ -5,7 +5,10 @@ config NET_VENDOR_NATSEMI bool "National Semi-conductor devices" default y - depends on MCA || MAC || MACH_JAZZ || PCI || XTENSA_PLATFORM_XT2000 + depends on AMIGA_PCMCIA || ARM || EISA || EXPERIMENTAL || H8300 || \ + ISA || M32R || MAC || MACH_JAZZ || MACH_TX49XX || MCA || \ + MCA_LEGACY || MIPS || PCI || PCMCIA || SUPERH || \ + XTENSA_PLATFORM_XT2000 || ZORRO ---help--- If you have a network (Ethernet) card belonging to this class, say Y and read the Ethernet-HOWTO, available from diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c index 1e37eb98c4e2..1dca57013cb2 100644 --- a/drivers/net/ethernet/nvidia/forcedeth.c +++ b/drivers/net/ethernet/nvidia/forcedeth.c @@ -1682,6 +1682,7 @@ static void nv_get_hw_stats(struct net_device *dev) np->estats.tx_pause += readl(base + NvRegTxPause); np->estats.rx_pause += readl(base + NvRegRxPause); np->estats.rx_drop_frame += readl(base + NvRegRxDropFrame); + np->estats.rx_errors_total += np->estats.rx_drop_frame; } if (np->driver_data & DEV_HAS_STATISTICS_V3) { @@ -1706,11 +1707,14 @@ static struct net_device_stats *nv_get_stats(struct net_device *dev) nv_get_hw_stats(dev); /* copy to net_device stats */ + dev->stats.tx_packets = np->estats.tx_packets; + dev->stats.rx_bytes = np->estats.rx_bytes; dev->stats.tx_bytes = np->estats.tx_bytes; dev->stats.tx_fifo_errors = np->estats.tx_fifo_errors; dev->stats.tx_carrier_errors = np->estats.tx_carrier_errors; dev->stats.rx_crc_errors = np->estats.rx_crc_errors; dev->stats.rx_over_errors = np->estats.rx_over_errors; + dev->stats.rx_fifo_errors = np->estats.rx_drop_frame; dev->stats.rx_errors = np->estats.rx_errors_total; dev->stats.tx_errors = np->estats.tx_errors_total; } @@ -2099,10 +2103,10 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev) /* add fragments to entries count */ for (i = 0; i < fragments; i++) { - u32 size = skb_frag_size(&skb_shinfo(skb)->frags[i]); + u32 frag_size = skb_frag_size(&skb_shinfo(skb)->frags[i]); - entries += (size >> NV_TX2_TSO_MAX_SHIFT) + - ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); + entries += (frag_size >> NV_TX2_TSO_MAX_SHIFT) + + ((frag_size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); } spin_lock_irqsave(&np->lock, flags); @@ -2141,13 +2145,13 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev) /* setup the fragments */ for (i = 0; i < fragments; i++) { const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; - u32 size = skb_frag_size(frag); + u32 frag_size = skb_frag_size(frag); offset = 0; do { prev_tx = put_tx; prev_tx_ctx = np->put_tx_ctx; - bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; + bcnt = (frag_size > NV_TX2_TSO_MAX_SIZE) ? 
NV_TX2_TSO_MAX_SIZE : frag_size; np->put_tx_ctx->dma = skb_frag_dma_map( &np->pci_dev->dev, frag, offset, @@ -2159,12 +2163,12 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev) put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); offset += bcnt; - size -= bcnt; + frag_size -= bcnt; if (unlikely(put_tx++ == np->last_tx.orig)) put_tx = np->first_tx.orig; if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) np->put_tx_ctx = np->first_tx_ctx; - } while (size); + } while (frag_size); } /* set last fragment flag */ @@ -2213,10 +2217,10 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb, /* add fragments to entries count */ for (i = 0; i < fragments; i++) { - u32 size = skb_frag_size(&skb_shinfo(skb)->frags[i]); + u32 frag_size = skb_frag_size(&skb_shinfo(skb)->frags[i]); - entries += (size >> NV_TX2_TSO_MAX_SHIFT) + - ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); + entries += (frag_size >> NV_TX2_TSO_MAX_SHIFT) + + ((frag_size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); } spin_lock_irqsave(&np->lock, flags); @@ -2257,13 +2261,13 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb, /* setup the fragments */ for (i = 0; i < fragments; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; - u32 size = skb_frag_size(frag); + u32 frag_size = skb_frag_size(frag); offset = 0; do { prev_tx = put_tx; prev_tx_ctx = np->put_tx_ctx; - bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; + bcnt = (frag_size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : frag_size; np->put_tx_ctx->dma = skb_frag_dma_map( &np->pci_dev->dev, frag, offset, @@ -2276,12 +2280,12 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb, put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); offset += bcnt; - size -= bcnt; + frag_size -= bcnt; if (unlikely(put_tx++ == np->last_tx.ex)) put_tx = np->first_tx.ex; if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) np->put_tx_ctx = np->first_tx_ctx; - } while (size); + } while (frag_size); } /* set last fragment flag */ @@ -2374,16 +2378,8 @@ static int nv_tx_done(struct net_device *dev, int limit) if (np->desc_ver == DESC_VER_1) { if (flags & NV_TX_LASTPACKET) { if (flags & NV_TX_ERROR) { - if (flags & NV_TX_UNDERFLOW) - dev->stats.tx_fifo_errors++; - if (flags & NV_TX_CARRIERLOST) - dev->stats.tx_carrier_errors++; if ((flags & NV_TX_RETRYERROR) && !(flags & NV_TX_RETRYCOUNT_MASK)) nv_legacybackoff_reseed(dev); - dev->stats.tx_errors++; - } else { - dev->stats.tx_packets++; - dev->stats.tx_bytes += np->get_tx_ctx->skb->len; } dev_kfree_skb_any(np->get_tx_ctx->skb); np->get_tx_ctx->skb = NULL; @@ -2392,16 +2388,8 @@ static int nv_tx_done(struct net_device *dev, int limit) } else { if (flags & NV_TX2_LASTPACKET) { if (flags & NV_TX2_ERROR) { - if (flags & NV_TX2_UNDERFLOW) - dev->stats.tx_fifo_errors++; - if (flags & NV_TX2_CARRIERLOST) - dev->stats.tx_carrier_errors++; if ((flags & NV_TX2_RETRYERROR) && !(flags & NV_TX2_RETRYCOUNT_MASK)) nv_legacybackoff_reseed(dev); - dev->stats.tx_errors++; - } else { - dev->stats.tx_packets++; - dev->stats.tx_bytes += np->get_tx_ctx->skb->len; } dev_kfree_skb_any(np->get_tx_ctx->skb); np->get_tx_ctx->skb = NULL; @@ -2434,9 +2422,7 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit) nv_unmap_txskb(np, np->get_tx_ctx); if (flags & NV_TX2_LASTPACKET) { - if (!(flags & NV_TX2_ERROR)) - dev->stats.tx_packets++; - else { + if (flags & NV_TX2_ERROR) { if ((flags & NV_TX2_RETRYERROR) && !(flags & NV_TX2_RETRYCOUNT_MASK)) { if (np->driver_data & DEV_HAS_GEAR_MODE) 
nv_gear_backoff_reseed(dev); @@ -2636,7 +2622,6 @@ static int nv_rx_process(struct net_device *dev, int limit) if ((flags & NV_RX_ERROR_MASK) == NV_RX_ERROR4) { len = nv_getlen(dev, skb->data, len); if (len < 0) { - dev->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } @@ -2650,11 +2635,6 @@ static int nv_rx_process(struct net_device *dev, int limit) else { if (flags & NV_RX_MISSEDFRAME) dev->stats.rx_missed_errors++; - if (flags & NV_RX_CRCERR) - dev->stats.rx_crc_errors++; - if (flags & NV_RX_OVERFLOW) - dev->stats.rx_over_errors++; - dev->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } @@ -2670,7 +2650,6 @@ static int nv_rx_process(struct net_device *dev, int limit) if ((flags & NV_RX2_ERROR_MASK) == NV_RX2_ERROR4) { len = nv_getlen(dev, skb->data, len); if (len < 0) { - dev->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } @@ -2682,11 +2661,6 @@ static int nv_rx_process(struct net_device *dev, int limit) } /* the rest are hard errors */ else { - if (flags & NV_RX2_CRCERR) - dev->stats.rx_crc_errors++; - if (flags & NV_RX2_OVERFLOW) - dev->stats.rx_over_errors++; - dev->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } @@ -2704,7 +2678,6 @@ static int nv_rx_process(struct net_device *dev, int limit) skb->protocol = eth_type_trans(skb, dev); napi_gro_receive(&np->napi, skb); dev->stats.rx_packets++; - dev->stats.rx_bytes += len; next_pkt: if (unlikely(np->get_rx.orig++ == np->last_rx.orig)) np->get_rx.orig = np->first_rx.orig; @@ -2787,9 +2760,7 @@ static int nv_rx_process_optimized(struct net_device *dev, int limit) __vlan_hwaccel_put_tag(skb, vid); } napi_gro_receive(&np->napi, skb); - dev->stats.rx_packets++; - dev->stats.rx_bytes += len; } else { dev_kfree_skb(skb); } @@ -2962,11 +2933,11 @@ static void nv_set_multicast(struct net_device *dev) struct netdev_hw_addr *ha; netdev_for_each_mc_addr(ha, dev) { - unsigned char *addr = ha->addr; + unsigned char *hw_addr = ha->addr; u32 a, b; - a = le32_to_cpu(*(__le32 *) addr); - b = le16_to_cpu(*(__le16 *) (&addr[4])); + a = le32_to_cpu(*(__le32 *) hw_addr); + b = le16_to_cpu(*(__le16 *) (&hw_addr[4])); alwaysOn[0] &= a; alwaysOff[0] &= ~a; alwaysOn[1] &= b; @@ -3398,7 +3369,8 @@ static irqreturn_t nv_nic_irq_tx(int foo, void *data) for (i = 0;; i++) { events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL; - writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus); + writel(events, base + NvRegMSIXIrqStatus); + netdev_dbg(dev, "tx irq events: %08x\n", events); if (!(events & np->irqmask)) break; @@ -3509,7 +3481,8 @@ static irqreturn_t nv_nic_irq_rx(int foo, void *data) for (i = 0;; i++) { events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL; - writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus); + writel(events, base + NvRegMSIXIrqStatus); + netdev_dbg(dev, "rx irq events: %08x\n", events); if (!(events & np->irqmask)) break; @@ -3553,7 +3526,8 @@ static irqreturn_t nv_nic_irq_other(int foo, void *data) for (i = 0;; i++) { events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER; - writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus); + writel(events, base + NvRegMSIXIrqStatus); + netdev_dbg(dev, "irq events: %08x\n", events); if (!(events & np->irqmask)) break; @@ -3617,10 +3591,10 @@ static irqreturn_t nv_nic_irq_test(int foo, void *data) if (!(np->msi_flags & NV_MSI_X_ENABLED)) { events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK; - writel(NVREG_IRQ_TIMER, base + NvRegIrqStatus); + writel(events & NVREG_IRQ_TIMER, base + NvRegIrqStatus); } else { events = readl(base + NvRegMSIXIrqStatus) & 
NVREG_IRQSTAT_MASK; - writel(NVREG_IRQ_TIMER, base + NvRegMSIXIrqStatus); + writel(events & NVREG_IRQ_TIMER, base + NvRegMSIXIrqStatus); } pci_push(base); if (!(events & NVREG_IRQ_TIMER)) @@ -4566,7 +4540,7 @@ static void nv_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *e struct fe_priv *np = netdev_priv(dev); /* update stats */ - nv_do_stats_poll((unsigned long)dev); + nv_get_hw_stats(dev); memcpy(buffer, &np->estats, nv_get_sset_count(dev, ETH_SS_STATS)*sizeof(u64)); } diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c index a3ce3d4561ed..74134970b709 100644 --- a/drivers/net/macvlan.c +++ b/drivers/net/macvlan.c @@ -192,6 +192,13 @@ static rx_handler_result_t macvlan_handle_frame(struct sk_buff **pskb) */ macvlan_broadcast(skb, port, src->dev, MACVLAN_MODE_VEPA); + else { + /* forward to original port. */ + vlan = src; + ret = macvlan_broadcast_one(skb, vlan, eth, 0); + goto out; + } + return RX_HANDLER_PASS; } diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index 7d6082160bcc..fae0fbd8bc88 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -1057,7 +1057,8 @@ netdev_tx_t usbnet_start_xmit (struct sk_buff *skb, unsigned long flags; int retval; - skb_tx_timestamp(skb); + if (skb) + skb_tx_timestamp(skb); // some devices want funky USB-level framing, for // win32 driver (usually) and/or hardware quirks diff --git a/drivers/net/wireless/ath/ath9k/ar9002_calib.c b/drivers/net/wireless/ath/ath9k/ar9002_calib.c index e0ab0657cc3a..88279e325dca 100644 --- a/drivers/net/wireless/ath/ath9k/ar9002_calib.c +++ b/drivers/net/wireless/ath/ath9k/ar9002_calib.c @@ -868,10 +868,6 @@ static bool ar9002_hw_init_cal(struct ath_hw *ah, struct ath9k_channel *chan) /* Do PA Calibration */ ar9002_hw_pa_cal(ah, true); - /* Do NF Calibration after DC offset and other calibrations */ - ath9k_hw_loadnf(ah, chan); - ath9k_hw_start_nfcal(ah, true); - if (ah->caldata) ah->caldata->nfcal_pending = true; diff --git a/drivers/net/wireless/ath/ath9k/ar9003_calib.c b/drivers/net/wireless/ath/ath9k/ar9003_calib.c index 16851cb109a6..12a730dcb500 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_calib.c +++ b/drivers/net/wireless/ath/ath9k/ar9003_calib.c @@ -908,12 +908,15 @@ static bool ar9003_hw_rtt_restore(struct ath_hw *ah, struct ath9k_channel *chan) int i; bool restore; - if (!(ah->caps.hw_caps & ATH9K_HW_CAP_RTT) || !ah->caldata) + if (!ah->caldata) return false; hist = &ah->caldata->rtt_hist; + if (!hist->num_readings) + return false; + ar9003_hw_rtt_enable(ah); - ar9003_hw_rtt_set_mask(ah, 0x10); + ar9003_hw_rtt_set_mask(ah, 0x00); for (i = 0; i < AR9300_MAX_CHAINS; i++) { if (!(ah->rxchainmask & (1 << i))) continue; @@ -1070,6 +1073,7 @@ skip_tx_iqcal: if (is_reusable && (hist->num_readings < RTT_HIST_MAX)) { u32 *table; + hist->num_readings++; for (i = 0; i < AR9300_MAX_CHAINS; i++) { if (!(ah->rxchainmask & (1 << i))) continue; @@ -1081,9 +1085,6 @@ skip_tx_iqcal: ar9003_hw_rtt_disable(ah); } - ath9k_hw_loadnf(ah, chan); - ath9k_hw_start_nfcal(ah, true); - /* Initialize list pointers */ ah->cal_list = ah->cal_list_last = ah->cal_list_curr = NULL; ah->supp_cals = IQ_MISMATCH_CAL; diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h index 2f4023e66081..4114fe752c6b 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h +++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h @@ -572,14 +572,14 @@ #define AR_PHY_TXGAIN_TABLE (AR_SM_BASE + 0x300) -#define AR_PHY_TX_IQCAL_CONTROL_0 (AR_SM_BASE + 
AR_SREV_9485(ah) ? \ - 0x3c4 : 0x444) -#define AR_PHY_TX_IQCAL_CONTROL_1 (AR_SM_BASE + AR_SREV_9485(ah) ? \ - 0x3c8 : 0x448) -#define AR_PHY_TX_IQCAL_START (AR_SM_BASE + AR_SREV_9485(ah) ? \ - 0x3c4 : 0x440) -#define AR_PHY_TX_IQCAL_STATUS_B0 (AR_SM_BASE + AR_SREV_9485(ah) ? \ - 0x3f0 : 0x48c) +#define AR_PHY_TX_IQCAL_CONTROL_0 (AR_SM_BASE + (AR_SREV_9485(ah) ? \ + 0x3c4 : 0x444)) +#define AR_PHY_TX_IQCAL_CONTROL_1 (AR_SM_BASE + (AR_SREV_9485(ah) ? \ + 0x3c8 : 0x448)) +#define AR_PHY_TX_IQCAL_START (AR_SM_BASE + (AR_SREV_9485(ah) ? \ + 0x3c4 : 0x440)) +#define AR_PHY_TX_IQCAL_STATUS_B0 (AR_SM_BASE + (AR_SREV_9485(ah) ? \ + 0x3f0 : 0x48c)) #define AR_PHY_TX_IQCAL_CORR_COEFF_B0(_i) (AR_SM_BASE + \ (AR_SREV_9485(ah) ? \ 0x3d0 : 0x450) + ((_i) << 2)) @@ -651,7 +651,7 @@ #define AR_SWITCH_TABLE_ALL_S (0) #define AR_PHY_65NM_CH0_THERM (AR_SREV_9300(ah) ? 0x16290 :\ - (AR_SREV_9485(ah) ? 0x1628c : 0x16294)) + (AR_SREV_9462(ah) ? 0x16294 : 0x1628c)) #define AR_PHY_65NM_CH0_THERM_LOCAL 0x80000000 #define AR_PHY_65NM_CH0_THERM_LOCAL_S 31 @@ -668,12 +668,12 @@ #define AR_PHY_65NM_CH2_RXTX2 0x16904 #define AR_CH0_TOP2 (AR_SREV_9300(ah) ? 0x1628c : \ - (AR_SREV_9485(ah) ? 0x16284 : 0x16290)) + (AR_SREV_9462(ah) ? 0x16290 : 0x16284)) #define AR_CH0_TOP2_XPABIASLVL 0xf000 #define AR_CH0_TOP2_XPABIASLVL_S 12 #define AR_CH0_XTAL (AR_SREV_9300(ah) ? 0x16294 : \ - (AR_SREV_9485(ah) ? 0x16290 : 0x16298)) + (AR_SREV_9462(ah) ? 0x16298 : 0x16290)) #define AR_CH0_XTAL_CAPINDAC 0x7f000000 #define AR_CH0_XTAL_CAPINDAC_S 24 #define AR_CH0_XTAL_CAPOUTDAC 0x00fe0000 @@ -908,8 +908,8 @@ #define AR_PHY_TPC_5_B1 (AR_SM1_BASE + 0x208) #define AR_PHY_TPC_6_B1 (AR_SM1_BASE + 0x20c) #define AR_PHY_TPC_11_B1 (AR_SM1_BASE + 0x220) -#define AR_PHY_PDADC_TAB_1 (AR_SM1_BASE + (AR_SREV_AR9300(ah) ? \ - 0x240 : 0x280)) +#define AR_PHY_PDADC_TAB_1 (AR_SM1_BASE + (AR_SREV_AR9462(ah) ? \ + 0x280 : 0x240)) #define AR_PHY_TPC_19_B1 (AR_SM1_BASE + 0x240) #define AR_PHY_TPC_19_B1_ALPHA_THERM 0xff #define AR_PHY_TPC_19_B1_ALPHA_THERM_S 0 @@ -931,10 +931,10 @@ #define AR_PHY_AIC_SRAM_ADDR_B1 (AR_SM1_BASE + 0x5f0) #define AR_PHY_AIC_SRAM_DATA_B1 (AR_SM1_BASE + 0x5f4) -#define AR_PHY_RTT_TABLE_SW_INTF_B(i) (0x384 + (i) ? \ - AR_SM1_BASE : AR_SM_BASE) -#define AR_PHY_RTT_TABLE_SW_INTF_1_B(i) (0x388 + (i) ? \ - AR_SM1_BASE : AR_SM_BASE) +#define AR_PHY_RTT_TABLE_SW_INTF_B(i) (0x384 + ((i) ? \ + AR_SM1_BASE : AR_SM_BASE)) +#define AR_PHY_RTT_TABLE_SW_INTF_1_B(i) (0x388 + ((i) ? 
\ + AR_SM1_BASE : AR_SM_BASE)) /* * Channel 2 Register Map */ diff --git a/drivers/net/wireless/ath/ath9k/ar9485_initvals.h b/drivers/net/wireless/ath/ath9k/ar9485_initvals.h index 611ea6ce8508..d16d029f81a9 100644 --- a/drivers/net/wireless/ath/ath9k/ar9485_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9485_initvals.h @@ -521,7 +521,7 @@ static const u32 ar9485_1_1_radio_postamble[][2] = { {0x000160ac, 0x24611800}, {0x000160b0, 0x03284f3e}, {0x0001610c, 0x00170000}, - {0x00016140, 0x10804008}, + {0x00016140, 0x50804008}, }; static const u32 ar9485_1_1_mac_postamble[][5] = { @@ -603,7 +603,7 @@ static const u32 ar9485_1_1_radio_core[][2] = { static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_enable_L1[][2] = { /* Addr allmodes */ - {0x00018c00, 0x10052e5e}, + {0x00018c00, 0x18052e5e}, {0x00018c04, 0x000801d8}, {0x00018c08, 0x0000080c}, }; @@ -776,7 +776,7 @@ static const u32 ar9485_modes_green_ob_db_tx_gain_1_1[][5] = { static const u32 ar9485_1_1_pcie_phy_clkreq_disable_L1[][2] = { /* Addr allmodes */ - {0x00018c00, 0x10013e5e}, + {0x00018c00, 0x18013e5e}, {0x00018c04, 0x000801d8}, {0x00018c08, 0x0000080c}, }; @@ -882,7 +882,7 @@ static const u32 ar9485_fast_clock_1_1_baseband_postamble[][3] = { static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_disable_L1[][2] = { /* Addr allmodes */ - {0x00018c00, 0x10012e5e}, + {0x00018c00, 0x18012e5e}, {0x00018c04, 0x000801d8}, {0x00018c08, 0x0000080c}, }; @@ -1021,7 +1021,7 @@ static const u32 ar9485_common_rx_gain_1_1[][2] = { static const u32 ar9485_1_1_pcie_phy_clkreq_enable_L1[][2] = { /* Addr allmodes */ - {0x00018c00, 0x10053e5e}, + {0x00018c00, 0x18053e5e}, {0x00018c04, 0x000801d8}, {0x00018c08, 0x0000080c}, }; diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c index 4952ad8c4e8c..2f91acccb7db 100644 --- a/drivers/net/wireless/ath/ath9k/hw.c +++ b/drivers/net/wireless/ath/ath9k/hw.c @@ -1725,6 +1725,9 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, if (!ath9k_hw_init_cal(ah, chan)) return -EIO; + ath9k_hw_loadnf(ah, chan); + ath9k_hw_start_nfcal(ah, true); + ENABLE_REGWRITE_BUFFER(ah); ath9k_hw_restore_chainmask(ah); diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c index d20946939cd8..59472e1605cd 100644 --- a/drivers/net/wireless/ath/carl9170/tx.c +++ b/drivers/net/wireless/ath/carl9170/tx.c @@ -296,7 +296,8 @@ static void carl9170_tx_release(struct kref *ref) super = (void *)skb->data; txinfo->status.ampdu_len = super->s.rix; txinfo->status.ampdu_ack_len = super->s.cnt; - } else if (txinfo->flags & IEEE80211_TX_STAT_ACK) { + } else if ((txinfo->flags & IEEE80211_TX_STAT_ACK) && + !(txinfo->flags & IEEE80211_TX_CTL_REQ_TX_STATUS)) { /* * drop redundant tx_status reports: * @@ -308,15 +309,17 @@ static void carl9170_tx_release(struct kref *ref) * * 3. minstrel_ht is picky, it only accepts * reports of frames with the TX_STATUS_AMPDU flag. + * + * 4. mac80211 is not particularly interested in + * feedback either [CTL_REQ_TX_STATUS not set] */ dev_kfree_skb_any(skb); return; } else { /* - * Frame has failed, but we want to keep it in - * case it was lost due to a power-state - * transition. + * Either the frame transmission has failed or + * mac80211 requested tx status. 
*/ } } diff --git a/drivers/net/wireless/b43/xmit.c b/drivers/net/wireless/b43/xmit.c index c73e8600d218..58ea0e5fabfd 100644 --- a/drivers/net/wireless/b43/xmit.c +++ b/drivers/net/wireless/b43/xmit.c @@ -827,7 +827,6 @@ void b43_rx(struct b43_wldev *dev, struct sk_buff *skb, const void *_rxhdr) #endif return; drop: - b43dbg(dev->wl, "RX: Packet dropped\n"); dev_kfree_skb_any(skb); } diff --git a/drivers/net/wireless/iwlwifi/iwl-core.c b/drivers/net/wireless/iwlwifi/iwl-core.c index b247a56d5135..001fdf140abb 100644 --- a/drivers/net/wireless/iwlwifi/iwl-core.c +++ b/drivers/net/wireless/iwlwifi/iwl-core.c @@ -1755,16 +1755,6 @@ static inline int iwl_check_stuck_queue(struct iwl_priv *priv, int txq) { if (iwl_trans_check_stuck_queue(trans(priv), txq)) { int ret; - if (txq == priv->shrd->cmd_queue) { - /* - * validate command queue still working - * by sending "ECHO" command - */ - if (!iwl_cmd_echo_test(priv)) - return 0; - else - IWL_DEBUG_HC(priv, "echo testing fail\n"); - } ret = iwl_force_reset(priv, IWL_FW_RESET, false); return (ret == -EAGAIN) ? 0 : 1; } diff --git a/drivers/net/wireless/iwlwifi/iwl-pci.c b/drivers/net/wireless/iwlwifi/iwl-pci.c index f0c623ade3ff..1800029911ad 100644 --- a/drivers/net/wireless/iwlwifi/iwl-pci.c +++ b/drivers/net/wireless/iwlwifi/iwl-pci.c @@ -446,10 +446,9 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) pci_write_config_byte(pdev, PCI_CFG_RETRY_TIMEOUT, 0x00); err = pci_enable_msi(pdev); - if (err) { - dev_printk(KERN_ERR, &pdev->dev, "pci_enable_msi failed"); - goto out_iounmap; - } + if (err) + dev_printk(KERN_ERR, &pdev->dev, + "pci_enable_msi failed(0X%x)", err); /* TODO: Move this away, not needed if not MSI */ /* enable rfkill interrupt: hw bug w/a */ @@ -470,7 +469,6 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) out_disable_msi: pci_disable_msi(pdev); -out_iounmap: pci_iounmap(pdev, pci_bus->hw_base); out_pci_release_regions: pci_set_drvdata(pdev, NULL); diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c index 8e8c75c997ee..da3411057afc 100644 --- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c +++ b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c @@ -407,6 +407,7 @@ static void iwl_tx_queue_unmap(struct iwl_trans *trans, int txq_id) struct iwl_queue *q = &txq->q; enum dma_data_direction dma_dir; unsigned long flags; + spinlock_t *lock; if (!q->n_bd) return; @@ -414,19 +415,22 @@ static void iwl_tx_queue_unmap(struct iwl_trans *trans, int txq_id) /* In the command queue, all the TBs are mapped as BIDI * so unmap them as such. 
*/ - if (txq_id == trans->shrd->cmd_queue) + if (txq_id == trans->shrd->cmd_queue) { dma_dir = DMA_BIDIRECTIONAL; - else + lock = &trans->hcmd_lock; + } else { dma_dir = DMA_TO_DEVICE; + lock = &trans->shrd->sta_lock; + } - spin_lock_irqsave(&trans->shrd->sta_lock, flags); + spin_lock_irqsave(lock, flags); while (q->write_ptr != q->read_ptr) { /* The read_ptr needs to bound by q->n_window */ iwlagn_txq_free_tfd(trans, txq, get_cmd_index(q, q->read_ptr), dma_dir); q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd); } - spin_unlock_irqrestore(&trans->shrd->sta_lock, flags); + spin_unlock_irqrestore(lock, flags); } /** diff --git a/drivers/net/wireless/libertas/cfg.c b/drivers/net/wireless/libertas/cfg.c index ff6378276ff0..4fcd653bddc4 100644 --- a/drivers/net/wireless/libertas/cfg.c +++ b/drivers/net/wireless/libertas/cfg.c @@ -728,15 +728,9 @@ static void lbs_scan_worker(struct work_struct *work) le16_to_cpu(scan_cmd->hdr.size), lbs_ret_scan, 0); - if (priv->scan_channel >= priv->scan_req->n_channels) { + if (priv->scan_channel >= priv->scan_req->n_channels) /* Mark scan done */ - if (priv->internal_scan) - kfree(priv->scan_req); - else - cfg80211_scan_done(priv->scan_req, false); - - priv->scan_req = NULL; - } + lbs_scan_done(priv); /* Restart network */ if (carrier) @@ -774,6 +768,21 @@ static void _internal_start_scan(struct lbs_private *priv, bool internal, lbs_deb_leave(LBS_DEB_CFG80211); } +/* + * Clean up priv->scan_req. Should be used to handle the allocation details. + */ +void lbs_scan_done(struct lbs_private *priv) +{ + WARN_ON(!priv->scan_req); + + if (priv->internal_scan) + kfree(priv->scan_req); + else + cfg80211_scan_done(priv->scan_req, false); + + priv->scan_req = NULL; +} + static int lbs_cfg_scan(struct wiphy *wiphy, struct net_device *dev, struct cfg80211_scan_request *request) diff --git a/drivers/net/wireless/libertas/cfg.h b/drivers/net/wireless/libertas/cfg.h index a02ee151710e..558168ce634d 100644 --- a/drivers/net/wireless/libertas/cfg.h +++ b/drivers/net/wireless/libertas/cfg.h @@ -16,6 +16,7 @@ int lbs_reg_notifier(struct wiphy *wiphy, void lbs_send_disconnect_notification(struct lbs_private *priv); void lbs_send_mic_failureevent(struct lbs_private *priv, u32 event); +void lbs_scan_done(struct lbs_private *priv); void lbs_scan_deinit(struct lbs_private *priv); int lbs_disconnect(struct lbs_private *priv, u16 reason); diff --git a/drivers/net/wireless/libertas/main.c b/drivers/net/wireless/libertas/main.c index 4ae99a40dbf7..957681dede17 100644 --- a/drivers/net/wireless/libertas/main.c +++ b/drivers/net/wireless/libertas/main.c @@ -255,10 +255,8 @@ static int lbs_eth_stop(struct net_device *dev) lbs_update_mcast(priv); cancel_delayed_work_sync(&priv->scan_work); - if (priv->scan_req) { - cfg80211_scan_done(priv->scan_req, false); - priv->scan_req = NULL; - } + if (priv->scan_req) + lbs_scan_done(priv); netif_carrier_off(priv->dev); diff --git a/drivers/pinctrl/Kconfig b/drivers/pinctrl/Kconfig index ef566443f945..e17e2f8001d2 100644 --- a/drivers/pinctrl/Kconfig +++ b/drivers/pinctrl/Kconfig @@ -2,23 +2,17 @@ # PINCTRL infrastructure and drivers # -menuconfig PINCTRL - bool "PINCTRL Support" +config PINCTRL + bool depends on EXPERIMENTAL - help - This enables the PINCTRL subsystem for controlling pins - on chip packages, for example multiplexing pins on primarily - PGA and BGA packages for systems on chip. - - If unsure, say N. 
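The libertas hunk above consolidates every priv->scan_req teardown into lbs_scan_done(), so the ownership rule -- kfree() for internally generated scan requests, cfg80211_scan_done() for requests that came from cfg80211 -- lives in exactly one place, and both the scan worker and lbs_eth_stop() reuse it. A minimal user-space sketch of that ownership split follows; the struct fields and the printed stand-in for cfg80211_scan_done() are simplifications for illustration, not the driver's real types.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct scan_request { int n_channels; };

struct lbs_private {
	struct scan_request *scan_req;
	bool internal_scan;
};

/* Shape of lbs_scan_done(): one helper decides who owns scan_req. */
static void scan_done(struct lbs_private *priv)
{
	if (!priv->scan_req)
		return;

	if (priv->internal_scan)
		free(priv->scan_req);                   /* driver-allocated request */
	else
		puts("cfg80211_scan_done() stand-in");  /* cfg80211 owns the request */

	priv->scan_req = NULL;
}

int main(void)
{
	struct lbs_private priv = { malloc(sizeof(struct scan_request)), true };

	scan_done(&priv);   /* scan worker: last channel finished */
	scan_done(&priv);   /* stop path: already NULL, nothing to do */
	return 0;
}

In the driver the same helper replaces the duplicated cleanup that previously sat in both lbs_scan_worker() and lbs_eth_stop().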
if PINCTRL +menu "Pin controllers" + depends on PINCTRL + config PINMUX bool "Support pinmux controllers" - help - Say Y here if you want the pincontrol subsystem to handle pin - multiplexing drivers. config DEBUG_PINCTRL bool "Debug PINCTRL calls" @@ -30,14 +24,12 @@ config PINMUX_SIRF bool "CSR SiRFprimaII pinmux driver" depends on ARCH_PRIMA2 select PINMUX - help - Say Y here to enable the SiRFprimaII pinmux driver config PINMUX_U300 bool "U300 pinmux driver" depends on ARCH_U300 select PINMUX - help - Say Y here to enable the U300 pinmux driver + +endmenu endif diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig index f4e3d82379d7..7f43cf86d776 100644 --- a/drivers/platform/x86/Kconfig +++ b/drivers/platform/x86/Kconfig @@ -83,8 +83,10 @@ config DELL_LAPTOP depends on EXPERIMENTAL depends on BACKLIGHT_CLASS_DEVICE depends on RFKILL || RFKILL = n - depends on POWER_SUPPLY depends on SERIO_I8042 + select POWER_SUPPLY + select LEDS_CLASS + select NEW_LEDS default n ---help--- This driver adds support for rfkill and backlight control to Dell diff --git a/drivers/pnp/pnpacpi/rsparser.c b/drivers/pnp/pnpacpi/rsparser.c index bbf3edd85beb..5be4a392a3ae 100644 --- a/drivers/pnp/pnpacpi/rsparser.c +++ b/drivers/pnp/pnpacpi/rsparser.c @@ -509,15 +509,12 @@ static __init void pnpacpi_parse_dma_option(struct pnp_dev *dev, struct acpi_resource_dma *p) { int i; - unsigned char map = 0, flags = 0; - - if (p->channel_count == 0) - flags |= IORESOURCE_DISABLED; + unsigned char map = 0, flags; for (i = 0; i < p->channel_count; i++) map |= 1 << p->channels[i]; - flags |= dma_flags(dev, p->type, p->bus_master, p->transfer); + flags = dma_flags(dev, p->type, p->bus_master, p->transfer); pnp_register_dma_resource(dev, option_flags, map, flags); } @@ -527,17 +524,14 @@ static __init void pnpacpi_parse_irq_option(struct pnp_dev *dev, { int i; pnp_irq_mask_t map; - unsigned char flags = 0; - - if (p->interrupt_count == 0) - flags |= IORESOURCE_DISABLED; + unsigned char flags; bitmap_zero(map.bits, PNP_IRQ_NR); for (i = 0; i < p->interrupt_count; i++) if (p->interrupts[i]) __set_bit(p->interrupts[i], map.bits); - flags |= irq_flags(p->triggering, p->polarity, p->sharable); + flags = irq_flags(p->triggering, p->polarity, p->sharable); pnp_register_irq_resource(dev, option_flags, &map, flags); } @@ -547,10 +541,7 @@ static __init void pnpacpi_parse_ext_irq_option(struct pnp_dev *dev, { int i; pnp_irq_mask_t map; - unsigned char flags = 0; - - if (p->interrupt_count == 0) - flags |= IORESOURCE_DISABLED; + unsigned char flags; bitmap_zero(map.bits, PNP_IRQ_NR); for (i = 0; i < p->interrupt_count; i++) { @@ -564,7 +555,7 @@ static __init void pnpacpi_parse_ext_irq_option(struct pnp_dev *dev, } } - flags |= irq_flags(p->triggering, p->polarity, p->sharable); + flags = irq_flags(p->triggering, p->polarity, p->sharable); pnp_register_irq_resource(dev, option_flags, &map, flags); } @@ -574,11 +565,8 @@ static __init void pnpacpi_parse_port_option(struct pnp_dev *dev, { unsigned char flags = 0; - if (io->address_length == 0) - flags |= IORESOURCE_DISABLED; - if (io->io_decode == ACPI_DECODE_16) - flags |= IORESOURCE_IO_16BIT_ADDR; + flags = IORESOURCE_IO_16BIT_ADDR; pnp_register_port_resource(dev, option_flags, io->minimum, io->maximum, io->alignment, io->address_length, flags); } @@ -587,13 +575,8 @@ static __init void pnpacpi_parse_fixed_port_option(struct pnp_dev *dev, unsigned int option_flags, struct acpi_resource_fixed_io *io) { - unsigned char flags = 0; - - if (io->address_length == 0) - 
flags |= IORESOURCE_DISABLED; - pnp_register_port_resource(dev, option_flags, io->address, io->address, - 0, io->address_length, flags | IORESOURCE_IO_FIXED); + 0, io->address_length, IORESOURCE_IO_FIXED); } static __init void pnpacpi_parse_mem24_option(struct pnp_dev *dev, @@ -602,11 +585,8 @@ static __init void pnpacpi_parse_mem24_option(struct pnp_dev *dev, { unsigned char flags = 0; - if (p->address_length == 0) - flags |= IORESOURCE_DISABLED; - if (p->write_protect == ACPI_READ_WRITE_MEMORY) - flags |= IORESOURCE_MEM_WRITEABLE; + flags = IORESOURCE_MEM_WRITEABLE; pnp_register_mem_resource(dev, option_flags, p->minimum, p->maximum, p->alignment, p->address_length, flags); } @@ -617,11 +597,8 @@ static __init void pnpacpi_parse_mem32_option(struct pnp_dev *dev, { unsigned char flags = 0; - if (p->address_length == 0) - flags |= IORESOURCE_DISABLED; - if (p->write_protect == ACPI_READ_WRITE_MEMORY) - flags |= IORESOURCE_MEM_WRITEABLE; + flags = IORESOURCE_MEM_WRITEABLE; pnp_register_mem_resource(dev, option_flags, p->minimum, p->maximum, p->alignment, p->address_length, flags); } @@ -632,11 +609,8 @@ static __init void pnpacpi_parse_fixed_mem32_option(struct pnp_dev *dev, { unsigned char flags = 0; - if (p->address_length == 0) - flags |= IORESOURCE_DISABLED; - if (p->write_protect == ACPI_READ_WRITE_MEMORY) - flags |= IORESOURCE_MEM_WRITEABLE; + flags = IORESOURCE_MEM_WRITEABLE; pnp_register_mem_resource(dev, option_flags, p->address, p->address, 0, p->address_length, flags); } @@ -656,19 +630,16 @@ static __init void pnpacpi_parse_address_option(struct pnp_dev *dev, return; } - if (p->address_length == 0) - flags |= IORESOURCE_DISABLED; - if (p->resource_type == ACPI_MEMORY_RANGE) { if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY) - flags |= IORESOURCE_MEM_WRITEABLE; + flags = IORESOURCE_MEM_WRITEABLE; pnp_register_mem_resource(dev, option_flags, p->minimum, p->minimum, 0, p->address_length, flags); } else if (p->resource_type == ACPI_IO_RANGE) pnp_register_port_resource(dev, option_flags, p->minimum, p->minimum, 0, p->address_length, - flags | IORESOURCE_IO_FIXED); + IORESOURCE_IO_FIXED); } static __init void pnpacpi_parse_ext_address_option(struct pnp_dev *dev, @@ -678,19 +649,16 @@ static __init void pnpacpi_parse_ext_address_option(struct pnp_dev *dev, struct acpi_resource_extended_address64 *p = &r->data.ext_address64; unsigned char flags = 0; - if (p->address_length == 0) - flags |= IORESOURCE_DISABLED; - if (p->resource_type == ACPI_MEMORY_RANGE) { if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY) - flags |= IORESOURCE_MEM_WRITEABLE; + flags = IORESOURCE_MEM_WRITEABLE; pnp_register_mem_resource(dev, option_flags, p->minimum, p->minimum, 0, p->address_length, flags); } else if (p->resource_type == ACPI_IO_RANGE) pnp_register_port_resource(dev, option_flags, p->minimum, p->minimum, 0, p->address_length, - flags | IORESOURCE_IO_FIXED); + IORESOURCE_IO_FIXED); } struct acpipnp_parse_option_s { diff --git a/drivers/ps3/ps3-vuart.c b/drivers/ps3/ps3-vuart.c index d9fb729535a1..fb7300837fee 100644 --- a/drivers/ps3/ps3-vuart.c +++ b/drivers/ps3/ps3-vuart.c @@ -952,7 +952,7 @@ static int ps3_vuart_bus_interrupt_get(void) } result = request_irq(vuart_bus_priv.virq, ps3_vuart_irq_handler, - IRQF_DISABLED, "vuart", &vuart_bus_priv); + 0, "vuart", &vuart_bus_priv); if (result) { pr_debug("%s:%d: request_irq failed (%d)\n", diff --git a/drivers/ps3/ps3stor_lib.c b/drivers/ps3/ps3stor_lib.c index cc328dec946b..8c3f5adf1bc6 100644 --- a/drivers/ps3/ps3stor_lib.c +++ 
b/drivers/ps3/ps3stor_lib.c @@ -167,7 +167,7 @@ int ps3stor_setup(struct ps3_storage_device *dev, irq_handler_t handler) goto fail_close_device; } - error = request_irq(dev->irq, handler, IRQF_DISABLED, + error = request_irq(dev->irq, handler, 0, dev->sbd.core.driver->name, dev); if (error) { dev_err(&dev->sbd.core, "%s:%u: request_irq failed %d\n", diff --git a/drivers/rtc/rtc-mrst.c b/drivers/rtc/rtc-mrst.c index d33544802a2e..bb21f443fb70 100644 --- a/drivers/rtc/rtc-mrst.c +++ b/drivers/rtc/rtc-mrst.c @@ -76,12 +76,15 @@ static inline unsigned char vrtc_is_updating(void) /* * rtc_time's year contains the increment over 1900, but vRTC's YEAR * register can't be programmed to value larger than 0x64, so vRTC - * driver chose to use 1960 (1970 is UNIX time start point) as the base, + * driver chose to use 1972 (1970 is UNIX time start point) as the base, * and does the translation at read/write time. * - * Why not just use 1970 as the offset? it's because using 1960 will + * Why not just use 1970 as the offset? it's because using 1972 will * make it consistent in leap year setting for both vrtc and low-level - * physical rtc devices. + * physical rtc devices. Then why not use 1960 as the offset? If we use + * 1960, for a device's first use, its YEAR register is 0 and the system + * year will be parsed as 1960 which is not a valid UNIX time and will + * cause many applications to fail mysteriously. */ static int mrst_read_time(struct device *dev, struct rtc_time *time) { @@ -99,10 +102,10 @@ static int mrst_read_time(struct device *dev, struct rtc_time *time) time->tm_year = vrtc_cmos_read(RTC_YEAR); spin_unlock_irqrestore(&rtc_lock, flags); - /* Adjust for the 1960/1900 */ - time->tm_year += 60; + /* Adjust for the 1972/1900 */ + time->tm_year += 72; time->tm_mon--; - return RTC_24H; + return rtc_valid_tm(time); } static int mrst_set_time(struct device *dev, struct rtc_time *time) @@ -119,9 +122,9 @@ static int mrst_set_time(struct device *dev, struct rtc_time *time) min = time->tm_min; sec = time->tm_sec; - if (yrs < 70 || yrs > 138) + if (yrs < 72 || yrs > 138) return -EINVAL; - yrs -= 60; + yrs -= 72; spin_lock_irqsave(&rtc_lock, flags); diff --git a/drivers/sh/Makefile b/drivers/sh/Makefile index 24e6cec0ae8d..67e272ab1623 100644 --- a/drivers/sh/Makefile +++ b/drivers/sh/Makefile @@ -7,3 +7,11 @@ obj-$(CONFIG_HAVE_CLK) += clk/ obj-$(CONFIG_MAPLE) += maple/ obj-$(CONFIG_SUPERHYWAY) += superhyway/ obj-$(CONFIG_GENERIC_GPIO) += pfc.o + +# +# For the moment we only use this framework for ARM-based SH/R-Mobile +# platforms and generic SH. SH-based SH-Mobile platforms are still using +# an older framework that is pending up-porting, at which point this +# special casing can go away. 
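The rtc-mrst hunk above changes the driver's epoch arithmetic: the vRTC YEAR register holds only a small offset (it cannot be programmed above 0x64), struct rtc_time counts years from 1900, and basing the offset at 1972 keeps leap-year handling consistent with the physical RTC while letting a factory-zeroed register decode to a valid post-1970 date. A small stand-alone sketch of that conversion, using the same 72/138 limits as the patch; the helper names are hypothetical and exist only for illustration.

#include <stdio.h>

#define MRST_YEAR_BASE 72	/* 1972 - 1900, mirroring the patch */

/* Register value -> rtc_time.tm_year (years since 1900). */
static int vrtc_year_to_tm(int vrtc_year)
{
	return vrtc_year + MRST_YEAR_BASE;	/* register 0 decodes to 1972 */
}

/* rtc_time.tm_year -> register value; reject years the hardware can't hold. */
static int tm_year_to_vrtc(int tm_year, int *vrtc_year)
{
	if (tm_year < 72 || tm_year > 138)	/* outside 1972..2038: -EINVAL in the driver */
		return -1;
	*vrtc_year = tm_year - MRST_YEAR_BASE;
	return 0;
}

int main(void)
{
	int reg;

	printf("register 40 -> year %d\n", 1900 + vrtc_year_to_tm(40));	/* 2012 */
	if (!tm_year_to_vrtc(112, &reg))
		printf("tm_year 112 -> register %d\n", reg);		/* 40 */
	return 0;
}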
+# +obj-$(CONFIG_SUPERH)$(CONFIG_ARCH_SHMOBILE) += pm_runtime.o diff --git a/drivers/sh/clk/core.c b/drivers/sh/clk/core.c index dc8d022c07a1..db257a35e71a 100644 --- a/drivers/sh/clk/core.c +++ b/drivers/sh/clk/core.c @@ -25,7 +25,6 @@ #include <linux/seq_file.h> #include <linux/err.h> #include <linux/io.h> -#include <linux/debugfs.h> #include <linux/cpufreq.h> #include <linux/clk.h> #include <linux/sh_clk.h> @@ -173,6 +172,26 @@ long clk_rate_div_range_round(struct clk *clk, unsigned int div_min, return clk_rate_round_helper(&div_range_round); } +static long clk_rate_mult_range_iter(unsigned int pos, + struct clk_rate_round_data *rounder) +{ + return clk_get_rate(rounder->arg) * pos; +} + +long clk_rate_mult_range_round(struct clk *clk, unsigned int mult_min, + unsigned int mult_max, unsigned long rate) +{ + struct clk_rate_round_data mult_range_round = { + .min = mult_min, + .max = mult_max, + .func = clk_rate_mult_range_iter, + .arg = clk_get_parent(clk), + .rate = rate, + }; + + return clk_rate_round_helper(&mult_range_round); +} + int clk_rate_table_find(struct clk *clk, struct cpufreq_frequency_table *freq_table, unsigned long rate) @@ -205,9 +224,6 @@ int clk_reparent(struct clk *child, struct clk *parent) list_add(&child->sibling, &parent->children); child->parent = parent; - /* now do the debugfs renaming to reattach the child - to the proper parent */ - return 0; } @@ -665,89 +681,6 @@ static int __init clk_syscore_init(void) subsys_initcall(clk_syscore_init); #endif -/* - * debugfs support to trace clock tree hierarchy and attributes - */ -static struct dentry *clk_debugfs_root; - -static int clk_debugfs_register_one(struct clk *c) -{ - int err; - struct dentry *d; - struct clk *pa = c->parent; - char s[255]; - char *p = s; - - p += sprintf(p, "%p", c); - d = debugfs_create_dir(s, pa ? 
pa->dentry : clk_debugfs_root); - if (!d) - return -ENOMEM; - c->dentry = d; - - d = debugfs_create_u8("usecount", S_IRUGO, c->dentry, (u8 *)&c->usecount); - if (!d) { - err = -ENOMEM; - goto err_out; - } - d = debugfs_create_u32("rate", S_IRUGO, c->dentry, (u32 *)&c->rate); - if (!d) { - err = -ENOMEM; - goto err_out; - } - d = debugfs_create_x32("flags", S_IRUGO, c->dentry, (u32 *)&c->flags); - if (!d) { - err = -ENOMEM; - goto err_out; - } - return 0; - -err_out: - debugfs_remove_recursive(c->dentry); - return err; -} - -static int clk_debugfs_register(struct clk *c) -{ - int err; - struct clk *pa = c->parent; - - if (pa && !pa->dentry) { - err = clk_debugfs_register(pa); - if (err) - return err; - } - - if (!c->dentry) { - err = clk_debugfs_register_one(c); - if (err) - return err; - } - return 0; -} - -static int __init clk_debugfs_init(void) -{ - struct clk *c; - struct dentry *d; - int err; - - d = debugfs_create_dir("clock", NULL); - if (!d) - return -ENOMEM; - clk_debugfs_root = d; - - list_for_each_entry(c, &clock_list, node) { - err = clk_debugfs_register(c); - if (err) - goto err_out; - } - return 0; -err_out: - debugfs_remove_recursive(clk_debugfs_root); - return err; -} -late_initcall(clk_debugfs_init); - static int __init clk_late_init(void) { unsigned long flags; diff --git a/arch/arm/mach-shmobile/pm_runtime.c b/drivers/sh/pm_runtime.c index bd5c6a3b8c55..afe9282629b9 100644 --- a/arch/arm/mach-shmobile/pm_runtime.c +++ b/drivers/sh/pm_runtime.c @@ -1,7 +1,5 @@ /* - * arch/arm/mach-shmobile/pm_runtime.c - * - * Runtime PM support code for SuperH Mobile ARM + * Runtime PM support code * * Copyright (C) 2009-2010 Magnus Damm * diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c index 79665e2e6ec5..16d6a839c7fa 100644 --- a/drivers/spi/spi-atmel.c +++ b/drivers/spi/spi-atmel.c @@ -907,7 +907,7 @@ static void atmel_spi_cleanup(struct spi_device *spi) /*-------------------------------------------------------------------------*/ -static int __init atmel_spi_probe(struct platform_device *pdev) +static int __devinit atmel_spi_probe(struct platform_device *pdev) { struct resource *regs; int irq; @@ -1003,7 +1003,7 @@ out_free: return ret; } -static int __exit atmel_spi_remove(struct platform_device *pdev) +static int __devexit atmel_spi_remove(struct platform_device *pdev) { struct spi_master *master = platform_get_drvdata(pdev); struct atmel_spi *as = spi_master_get_devdata(master); @@ -1072,6 +1072,7 @@ static struct platform_driver atmel_spi_driver = { }, .suspend = atmel_spi_suspend, .resume = atmel_spi_resume, + .probe = atmel_spi_probe, .remove = __exit_p(atmel_spi_remove), }; module_platform_driver(atmel_spi_driver); diff --git a/drivers/staging/spectra/lld_mtd.c b/drivers/staging/spectra/lld_mtd.c index 2bd34662beb5..a9c309a167c2 100644 --- a/drivers/staging/spectra/lld_mtd.c +++ b/drivers/staging/spectra/lld_mtd.c @@ -340,7 +340,7 @@ u16 mtd_Read_Page_Main_Spare(u8 *read_data, u32 Block, struct mtd_oob_ops ops; int ret; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.datbuf = read_data; ops.len = DeviceInfo.wPageDataSize; ops.oobbuf = read_data + DeviceInfo.wPageDataSize + BTSIG_OFFSET; @@ -400,7 +400,7 @@ u16 mtd_Write_Page_Main_Spare(u8 *write_data, u32 Block, struct mtd_oob_ops ops; int ret; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.datbuf = write_data; ops.len = DeviceInfo.wPageDataSize; ops.oobbuf = write_data + DeviceInfo.wPageDataSize + BTSIG_OFFSET; @@ -473,7 +473,7 @@ u16 mtd_Read_Page_Spare(u8 *read_data, u32 Block, 
struct mtd_oob_ops ops; int ret; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.datbuf = NULL; ops.len = 0; ops.oobbuf = read_data; diff --git a/drivers/thermal/thermal_sys.c b/drivers/thermal/thermal_sys.c index 708f8e92771a..dd9a5743fa99 100644 --- a/drivers/thermal/thermal_sys.c +++ b/drivers/thermal/thermal_sys.c @@ -678,10 +678,10 @@ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz, return; if (delay > 1000) - schedule_delayed_work(&(tz->poll_queue), + queue_delayed_work(system_freezable_wq, &(tz->poll_queue), round_jiffies(msecs_to_jiffies(delay))); else - schedule_delayed_work(&(tz->poll_queue), + queue_delayed_work(system_freezable_wq, &(tz->poll_queue), msecs_to_jiffies(delay)); } diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c index 4cb0d0a3e57b..fc7bbba585ce 100644 --- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -66,14 +66,16 @@ static int debug; module_param(debug, int, 0600); -#define T1 (HZ/10) -#define T2 (HZ/3) -#define N2 3 +/* Defaults: these are from the specification */ + +#define T1 10 /* 100mS */ +#define T2 34 /* 333mS */ +#define N2 3 /* Retry 3 times */ /* Use long timers for testing at low speed with debug on */ #ifdef DEBUG_TIMING -#define T1 HZ -#define T2 (2 * HZ) +#define T1 100 +#define T2 200 #endif /* diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index 1945c70539c2..aff9d612dff0 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -207,6 +207,25 @@ static struct plat_sci_reg sci_regmap[SCIx_NR_REGTYPES][SCIx_NR_REGS] = { }, /* + * Common SH-2(A) SCIF definitions for ports with FIFO data + * count registers. + */ + [SCIx_SH2_SCIF_FIFODATA_REGTYPE] = { + [SCSMR] = { 0x00, 16 }, + [SCBRR] = { 0x04, 8 }, + [SCSCR] = { 0x08, 16 }, + [SCxTDR] = { 0x0c, 8 }, + [SCxSR] = { 0x10, 16 }, + [SCxRDR] = { 0x14, 8 }, + [SCFCR] = { 0x18, 16 }, + [SCFDR] = { 0x1c, 16 }, + [SCTFDR] = sci_reg_invalid, + [SCRFDR] = sci_reg_invalid, + [SCSPTR] = { 0x20, 16 }, + [SCLSR] = { 0x24, 16 }, + }, + + /* * Common SH-3 SCIF definitions. */ [SCIx_SH3_SCIF_REGTYPE] = { diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index 79a31e5b4b68..3d1bf41e8892 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -594,11 +594,11 @@ static struct virtio_config_ops virtio_pci_config_ops = { static void virtio_pci_release_dev(struct device *_d) { - struct virtio_device *dev = container_of(_d, struct virtio_device, - dev); - struct virtio_pci_device *vp_dev = to_vp_device(dev); - - kfree(vp_dev); + /* + * No need for a release method as we allocate/free + * all devices together with the pci devices. + * Provide an empty one to avoid getting a warning from core. + */ } /* the PCI probing function */ @@ -686,6 +686,7 @@ static void __devexit virtio_pci_remove(struct pci_dev *pci_dev) pci_iounmap(pci_dev, vp_dev->ioaddr); pci_release_regions(pci_dev); pci_disable_device(pci_dev); + kfree(vp_dev); } #ifdef CONFIG_PM diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h index 5a5d325a3935..634608d2a6d0 100644 --- a/fs/btrfs/btrfs_inode.h +++ b/fs/btrfs/btrfs_inode.h @@ -147,14 +147,12 @@ struct btrfs_inode { * the btrfs file release call will add this inode to the * ordered operations list so that we make sure to flush out any * new data the application may have written before commit. - * - * yes, its silly to have a single bitflag, but we might grow more - * of these. 
*/ unsigned ordered_data_close:1; unsigned orphan_meta_reserved:1; unsigned dummy_inode:1; unsigned in_defrag:1; + unsigned delalloc_meta_reserved:1; /* * always compress this one file diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c index 3a1b939c9ae2..5b163572e0ca 100644 --- a/fs/btrfs/delayed-inode.c +++ b/fs/btrfs/delayed-inode.c @@ -617,12 +617,14 @@ static void btrfs_delayed_item_release_metadata(struct btrfs_root *root, static int btrfs_delayed_inode_reserve_metadata( struct btrfs_trans_handle *trans, struct btrfs_root *root, + struct inode *inode, struct btrfs_delayed_node *node) { struct btrfs_block_rsv *src_rsv; struct btrfs_block_rsv *dst_rsv; u64 num_bytes; int ret; + int release = false; src_rsv = trans->block_rsv; dst_rsv = &root->fs_info->delayed_block_rsv; @@ -652,12 +654,65 @@ static int btrfs_delayed_inode_reserve_metadata( if (!ret) node->bytes_reserved = num_bytes; return ret; + } else if (src_rsv == &root->fs_info->delalloc_block_rsv) { + spin_lock(&BTRFS_I(inode)->lock); + if (BTRFS_I(inode)->delalloc_meta_reserved) { + BTRFS_I(inode)->delalloc_meta_reserved = 0; + spin_unlock(&BTRFS_I(inode)->lock); + release = true; + goto migrate; + } + spin_unlock(&BTRFS_I(inode)->lock); + + /* Ok we didn't have space pre-reserved. This shouldn't happen + * too often but it can happen if we do delalloc to an existing + * inode which gets dirtied because of the time update, and then + * isn't touched again until after the transaction commits and + * then we try to write out the data. First try to be nice and + * reserve something strictly for us. If not be a pain and try + * to steal from the delalloc block rsv. + */ + ret = btrfs_block_rsv_add_noflush(root, dst_rsv, num_bytes); + if (!ret) + goto out; + + ret = btrfs_block_rsv_migrate(src_rsv, dst_rsv, num_bytes); + if (!ret) + goto out; + + /* + * Ok this is a problem, let's just steal from the global rsv + * since this really shouldn't happen that often. + */ + WARN_ON(1); + ret = btrfs_block_rsv_migrate(&root->fs_info->global_block_rsv, + dst_rsv, num_bytes); + goto out; } +migrate: ret = btrfs_block_rsv_migrate(src_rsv, dst_rsv, num_bytes); + +out: + /* + * Migrate only takes a reservation, it doesn't touch the size of the + * block_rsv. This is to simplify people who don't normally have things + * migrated from their block rsv. If they go to release their + * reservation, that will decrease the size as well, so if migrate + * reduced size we'd end up with a negative size. But for the + * delalloc_meta_reserved stuff we will only know to drop 1 reservation, + * but we could in fact do this reserve/migrate dance several times + * between the time we did the original reservation and we'd clean it + * up. So to take care of this, release the space for the meta + * reservation here. I think it may be time for a documentation page on + * how block rsvs. work. 
+ */ if (!ret) node->bytes_reserved = num_bytes; + if (release) + btrfs_block_rsv_release(root, src_rsv, num_bytes); + return ret; } @@ -1708,7 +1763,8 @@ int btrfs_delayed_update_inode(struct btrfs_trans_handle *trans, goto release_node; } - ret = btrfs_delayed_inode_reserve_metadata(trans, root, delayed_node); + ret = btrfs_delayed_inode_reserve_metadata(trans, root, inode, + delayed_node); if (ret) goto release_node; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 102c176fc29c..62afe5c5694e 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -1890,31 +1890,32 @@ struct btrfs_root *open_ctree(struct super_block *sb, u64 features; struct btrfs_key location; struct buffer_head *bh; - struct btrfs_root *extent_root = kzalloc(sizeof(struct btrfs_root), - GFP_NOFS); - struct btrfs_root *csum_root = kzalloc(sizeof(struct btrfs_root), - GFP_NOFS); + struct btrfs_super_block *disk_super; struct btrfs_root *tree_root = btrfs_sb(sb); - struct btrfs_fs_info *fs_info = NULL; - struct btrfs_root *chunk_root = kzalloc(sizeof(struct btrfs_root), - GFP_NOFS); - struct btrfs_root *dev_root = kzalloc(sizeof(struct btrfs_root), - GFP_NOFS); + struct btrfs_fs_info *fs_info = tree_root->fs_info; + struct btrfs_root *extent_root; + struct btrfs_root *csum_root; + struct btrfs_root *chunk_root; + struct btrfs_root *dev_root; struct btrfs_root *log_tree_root; - int ret; int err = -EINVAL; int num_backups_tried = 0; int backup_index = 0; - struct btrfs_super_block *disk_super; + extent_root = fs_info->extent_root = + kzalloc(sizeof(struct btrfs_root), GFP_NOFS); + csum_root = fs_info->csum_root = + kzalloc(sizeof(struct btrfs_root), GFP_NOFS); + chunk_root = fs_info->chunk_root = + kzalloc(sizeof(struct btrfs_root), GFP_NOFS); + dev_root = fs_info->dev_root = + kzalloc(sizeof(struct btrfs_root), GFP_NOFS); - if (!extent_root || !tree_root || !tree_root->fs_info || - !chunk_root || !dev_root || !csum_root) { + if (!extent_root || !csum_root || !chunk_root || !dev_root) { err = -ENOMEM; goto fail; } - fs_info = tree_root->fs_info; ret = init_srcu_struct(&fs_info->subvol_srcu); if (ret) { @@ -1954,12 +1955,6 @@ struct btrfs_root *open_ctree(struct super_block *sb, mutex_init(&fs_info->reloc_mutex); init_completion(&fs_info->kobj_unregister); - fs_info->tree_root = tree_root; - fs_info->extent_root = extent_root; - fs_info->csum_root = csum_root; - fs_info->chunk_root = chunk_root; - fs_info->dev_root = dev_root; - fs_info->fs_devices = fs_devices; INIT_LIST_HEAD(&fs_info->dirty_cowonly_roots); INIT_LIST_HEAD(&fs_info->space_info); btrfs_mapping_init(&fs_info->mapping_tree); @@ -2465,21 +2460,20 @@ fail_sb_buffer: btrfs_stop_workers(&fs_info->caching_workers); fail_alloc: fail_iput: + btrfs_mapping_tree_free(&fs_info->mapping_tree); + invalidate_inode_pages2(fs_info->btree_inode->i_mapping); iput(fs_info->btree_inode); - - btrfs_close_devices(fs_info->fs_devices); - btrfs_mapping_tree_free(&fs_info->mapping_tree); fail_bdi: bdi_destroy(&fs_info->bdi); fail_srcu: cleanup_srcu_struct(&fs_info->subvol_srcu); fail: + btrfs_close_devices(fs_info->fs_devices); free_fs_info(fs_info); return ERR_PTR(err); recovery_tree_root: - if (!btrfs_test_opt(tree_root, RECOVERY)) goto fail_tree_roots; diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c index 9879bd474632..b232150b5b6b 100644 --- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -3797,16 +3797,16 @@ void btrfs_free_block_rsv(struct btrfs_root *root, kfree(rsv); } -int btrfs_block_rsv_add(struct btrfs_root *root, - struct 
btrfs_block_rsv *block_rsv, - u64 num_bytes) +static inline int __block_rsv_add(struct btrfs_root *root, + struct btrfs_block_rsv *block_rsv, + u64 num_bytes, int flush) { int ret; if (num_bytes == 0) return 0; - ret = reserve_metadata_bytes(root, block_rsv, num_bytes, 1); + ret = reserve_metadata_bytes(root, block_rsv, num_bytes, flush); if (!ret) { block_rsv_add_bytes(block_rsv, num_bytes, 1); return 0; @@ -3815,22 +3815,18 @@ int btrfs_block_rsv_add(struct btrfs_root *root, return ret; } +int btrfs_block_rsv_add(struct btrfs_root *root, + struct btrfs_block_rsv *block_rsv, + u64 num_bytes) +{ + return __block_rsv_add(root, block_rsv, num_bytes, 1); +} + int btrfs_block_rsv_add_noflush(struct btrfs_root *root, struct btrfs_block_rsv *block_rsv, u64 num_bytes) { - int ret; - - if (num_bytes == 0) - return 0; - - ret = reserve_metadata_bytes(root, block_rsv, num_bytes, 0); - if (!ret) { - block_rsv_add_bytes(block_rsv, num_bytes, 1); - return 0; - } - - return ret; + return __block_rsv_add(root, block_rsv, num_bytes, 0); } int btrfs_block_rsv_check(struct btrfs_root *root, @@ -4064,23 +4060,30 @@ int btrfs_snap_reserve_metadata(struct btrfs_trans_handle *trans, */ static unsigned drop_outstanding_extent(struct inode *inode) { + unsigned drop_inode_space = 0; unsigned dropped_extents = 0; BUG_ON(!BTRFS_I(inode)->outstanding_extents); BTRFS_I(inode)->outstanding_extents--; + if (BTRFS_I(inode)->outstanding_extents == 0 && + BTRFS_I(inode)->delalloc_meta_reserved) { + drop_inode_space = 1; + BTRFS_I(inode)->delalloc_meta_reserved = 0; + } + /* * If we have more or the same amount of outsanding extents than we have * reserved then we need to leave the reserved extents count alone. */ if (BTRFS_I(inode)->outstanding_extents >= BTRFS_I(inode)->reserved_extents) - return 0; + return drop_inode_space; dropped_extents = BTRFS_I(inode)->reserved_extents - BTRFS_I(inode)->outstanding_extents; BTRFS_I(inode)->reserved_extents -= dropped_extents; - return dropped_extents; + return dropped_extents + drop_inode_space; } /** @@ -4166,9 +4169,18 @@ int btrfs_delalloc_reserve_metadata(struct inode *inode, u64 num_bytes) nr_extents = BTRFS_I(inode)->outstanding_extents - BTRFS_I(inode)->reserved_extents; BTRFS_I(inode)->reserved_extents += nr_extents; + } - to_reserve = btrfs_calc_trans_metadata_size(root, nr_extents); + /* + * Add an item to reserve for updating the inode when we complete the + * delalloc io. 
+ */ + if (!BTRFS_I(inode)->delalloc_meta_reserved) { + nr_extents++; + BTRFS_I(inode)->delalloc_meta_reserved = 1; } + + to_reserve = btrfs_calc_trans_metadata_size(root, nr_extents); to_reserve += calc_csum_metadata_size(inode, num_bytes, 1); spin_unlock(&BTRFS_I(inode)->lock); diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c index 7a15fcfb3e1f..181760f9d2ab 100644 --- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -537,6 +537,13 @@ static int io_ctl_read_entry(struct io_ctl *io_ctl, struct btrfs_free_space *entry, u8 *type) { struct btrfs_free_space_entry *e; + int ret; + + if (!io_ctl->cur) { + ret = io_ctl_check_crc(io_ctl, io_ctl->index); + if (ret) + return ret; + } e = io_ctl->cur; entry->offset = le64_to_cpu(e->offset); @@ -550,10 +557,7 @@ static int io_ctl_read_entry(struct io_ctl *io_ctl, io_ctl_unmap_page(io_ctl); - if (io_ctl->index >= io_ctl->num_pages) - return 0; - - return io_ctl_check_crc(io_ctl, io_ctl->index); + return 0; } static int io_ctl_read_bitmap(struct io_ctl *io_ctl, @@ -561,9 +565,6 @@ static int io_ctl_read_bitmap(struct io_ctl *io_ctl, { int ret; - if (io_ctl->cur && io_ctl->cur != io_ctl->orig) - io_ctl_unmap_page(io_ctl); - ret = io_ctl_check_crc(io_ctl, io_ctl->index); if (ret) return ret; @@ -699,6 +700,8 @@ int __load_free_space_cache(struct btrfs_root *root, struct inode *inode, num_entries--; } + io_ctl_unmap_page(&io_ctl); + /* * We add the bitmaps at the end of the entries in order that * the bitmap entries are added to the cache. diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c index 53dcbdf446cd..f8962a957d65 100644 --- a/fs/btrfs/inode-map.c +++ b/fs/btrfs/inode-map.c @@ -398,6 +398,8 @@ int btrfs_save_ino_cache(struct btrfs_root *root, struct btrfs_free_space_ctl *ctl = root->free_ino_ctl; struct btrfs_path *path; struct inode *inode; + struct btrfs_block_rsv *rsv; + u64 num_bytes; u64 alloc_hint = 0; int ret; int prealloc; @@ -421,11 +423,26 @@ int btrfs_save_ino_cache(struct btrfs_root *root, if (!path) return -ENOMEM; + rsv = trans->block_rsv; + trans->block_rsv = &root->fs_info->trans_block_rsv; + + num_bytes = trans->bytes_reserved; + /* + * 1 item for inode item insertion if need + * 3 items for inode item update (in the worst case) + * 1 item for free space object + * 3 items for pre-allocation + */ + trans->bytes_reserved = btrfs_calc_trans_metadata_size(root, 8); + ret = btrfs_block_rsv_add_noflush(root, trans->block_rsv, + trans->bytes_reserved); + if (ret) + goto out; again: inode = lookup_free_ino_inode(root, path); if (IS_ERR(inode) && PTR_ERR(inode) != -ENOENT) { ret = PTR_ERR(inode); - goto out; + goto out_release; } if (IS_ERR(inode)) { @@ -434,7 +451,7 @@ again: ret = create_free_ino_inode(root, trans, path); if (ret) - goto out; + goto out_release; goto again; } @@ -477,11 +494,14 @@ again: } btrfs_free_reserved_data_space(inode, prealloc); + ret = btrfs_write_out_ino_cache(root, trans, path); out_put: iput(inode); +out_release: + btrfs_block_rsv_release(root, trans->block_rsv, trans->bytes_reserved); out: - if (ret == 0) - ret = btrfs_write_out_ino_cache(root, trans, path); + trans->block_rsv = rsv; + trans->bytes_reserved = num_bytes; btrfs_free_path(path); return ret; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 966ddcc4c63d..116ab67a06df 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -93,6 +93,8 @@ static noinline int cow_file_range(struct inode *inode, struct page *locked_page, u64 start, u64 end, int *page_started, unsigned long *nr_written, int 
unlock); +static noinline int btrfs_update_inode_fallback(struct btrfs_trans_handle *trans, + struct btrfs_root *root, struct inode *inode); static int btrfs_init_inode_security(struct btrfs_trans_handle *trans, struct inode *inode, struct inode *dir, @@ -1741,7 +1743,7 @@ static int btrfs_finish_ordered_io(struct inode *inode, u64 start, u64 end) trans = btrfs_join_transaction(root); BUG_ON(IS_ERR(trans)); trans->block_rsv = &root->fs_info->delalloc_block_rsv; - ret = btrfs_update_inode(trans, root, inode); + ret = btrfs_update_inode_fallback(trans, root, inode); BUG_ON(ret); } goto out; @@ -1791,7 +1793,7 @@ static int btrfs_finish_ordered_io(struct inode *inode, u64 start, u64 end) ret = btrfs_ordered_update_i_size(inode, 0, ordered_extent); if (!ret || !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags)) { - ret = btrfs_update_inode(trans, root, inode); + ret = btrfs_update_inode_fallback(trans, root, inode); BUG_ON(ret); } ret = 0; @@ -2199,6 +2201,9 @@ int btrfs_orphan_cleanup(struct btrfs_root *root) if (ret) goto out; } + /* release the path since we're done with it */ + btrfs_release_path(path); + root->orphan_cleanup_state = ORPHAN_CLEANUP_DONE; if (root->orphan_block_rsv) @@ -2426,7 +2431,7 @@ static void fill_inode_item(struct btrfs_trans_handle *trans, /* * copy everything in the in-memory inode into the btree. */ -noinline int btrfs_update_inode(struct btrfs_trans_handle *trans, +static noinline int btrfs_update_inode_item(struct btrfs_trans_handle *trans, struct btrfs_root *root, struct inode *inode) { struct btrfs_inode_item *inode_item; @@ -2434,21 +2439,6 @@ noinline int btrfs_update_inode(struct btrfs_trans_handle *trans, struct extent_buffer *leaf; int ret; - /* - * If the inode is a free space inode, we can deadlock during commit - * if we put it into the delayed code. - * - * The data relocation inode should also be directly updated - * without delay - */ - if (!btrfs_is_free_space_inode(root, inode) - && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID) { - ret = btrfs_delayed_update_inode(trans, root, inode); - if (!ret) - btrfs_set_inode_last_trans(trans, inode); - return ret; - } - path = btrfs_alloc_path(); if (!path) return -ENOMEM; @@ -2477,6 +2467,43 @@ failed: } /* + * copy everything in the in-memory inode into the btree. + */ +noinline int btrfs_update_inode(struct btrfs_trans_handle *trans, + struct btrfs_root *root, struct inode *inode) +{ + int ret; + + /* + * If the inode is a free space inode, we can deadlock during commit + * if we put it into the delayed code. + * + * The data relocation inode should also be directly updated + * without delay + */ + if (!btrfs_is_free_space_inode(root, inode) + && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID) { + ret = btrfs_delayed_update_inode(trans, root, inode); + if (!ret) + btrfs_set_inode_last_trans(trans, inode); + return ret; + } + + return btrfs_update_inode_item(trans, root, inode); +} + +static noinline int btrfs_update_inode_fallback(struct btrfs_trans_handle *trans, + struct btrfs_root *root, struct inode *inode) +{ + int ret; + + ret = btrfs_update_inode(trans, root, inode); + if (ret == -ENOSPC) + return btrfs_update_inode_item(trans, root, inode); + return ret; +} + +/* * unlink helper that gets used here in inode.c and in the tree logging * recovery code. 
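btrfs_update_inode_fallback(), added just above, is a thin wrapper: try the normal btrfs_update_inode() (which may route through the delayed-inode code and fail with -ENOSPC) and only in that case retry via the direct btrfs_update_inode_item() path. Below is a self-contained sketch of the same try-then-fall-back shape; update_fast()/update_direct() and the budget counter are made-up stand-ins, not btrfs functions.

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical fast path: needs reserved space, may return -ENOSPC. */
    static int update_fast(int *budget)
    {
        if (*budget <= 0)
            return -ENOSPC;
        (*budget)--;
        return 0;
    }

    /* Hypothetical direct path: slower, but needs no extra reservation. */
    static int update_direct(int *budget)
    {
        (void)budget;
        return 0;
    }

    /* Fall back only on -ENOSPC; success and every other error pass through. */
    static int update_with_fallback(int *budget)
    {
        int ret = update_fast(budget);

        if (ret == -ENOSPC)
            return update_direct(budget);
        return ret;
    }

    int main(void)
    {
        int budget = 0;

        printf("ret=%d\n", update_with_fallback(&budget));    /* 0: fell back */
        return 0;
    }

The ordered-io completion paths above are switched to the fallback variant for exactly this situation, where the delayed path can legitimately run out of reserved space.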
It remove a link in a directory with a given name, and * also drops the back refs in the inode to the directory @@ -5632,7 +5659,7 @@ again: if (test_bit(BTRFS_ORDERED_NOCOW, &ordered->flags)) { ret = btrfs_ordered_update_i_size(inode, 0, ordered); if (!ret) - err = btrfs_update_inode(trans, root, inode); + err = btrfs_update_inode_fallback(trans, root, inode); goto out; } @@ -5670,7 +5697,7 @@ again: add_pending_csums(trans, inode, ordered->file_offset, &ordered->list); ret = btrfs_ordered_update_i_size(inode, 0, ordered); if (!ret || !test_bit(BTRFS_ORDERED_PREALLOC, &ordered->flags)) - btrfs_update_inode(trans, root, inode); + btrfs_update_inode_fallback(trans, root, inode); ret = 0; out_unlock: unlock_extent_cached(&BTRFS_I(inode)->io_tree, ordered->file_offset, @@ -6529,14 +6556,16 @@ end_trans: ret = btrfs_orphan_del(NULL, inode); } - trans->block_rsv = &root->fs_info->trans_block_rsv; - ret = btrfs_update_inode(trans, root, inode); - if (ret && !err) - err = ret; + if (trans) { + trans->block_rsv = &root->fs_info->trans_block_rsv; + ret = btrfs_update_inode(trans, root, inode); + if (ret && !err) + err = ret; - nr = trans->blocks_used; - ret = btrfs_end_transaction_throttle(trans, root); - btrfs_btree_balance_dirty(root, nr); + nr = trans->blocks_used; + ret = btrfs_end_transaction_throttle(trans, root); + btrfs_btree_balance_dirty(root, nr); + } out: btrfs_free_block_rsv(root, rsv); @@ -6605,6 +6634,7 @@ struct inode *btrfs_alloc_inode(struct super_block *sb) ei->orphan_meta_reserved = 0; ei->dummy_inode = 0; ei->in_defrag = 0; + ei->delalloc_meta_reserved = 0; ei->force_compress = BTRFS_COMPRESS_NONE; ei->delayed_node = NULL; diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index 24d654ce7a06..dff29d5e151a 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -1174,6 +1174,8 @@ static int clone_backref_node(struct btrfs_trans_handle *trans, list_add_tail(&new_edge->list[UPPER], &new_node->lower); } + } else { + list_add_tail(&new_node->lower, &cache->leaves); } rb_node = tree_insert(&cache->rb_root, new_node->bytenr, diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index ed11d3866afd..f4190f22edfb 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -944,50 +944,18 @@ static int scrub_checksum_super(struct scrub_bio *sbio, void *buffer) static int scrub_submit(struct scrub_dev *sdev) { struct scrub_bio *sbio; - struct bio *bio; - int i; if (sdev->curr == -1) return 0; sbio = sdev->bios[sdev->curr]; - - bio = bio_alloc(GFP_NOFS, sbio->count); - if (!bio) - goto nomem; - - bio->bi_private = sbio; - bio->bi_end_io = scrub_bio_end_io; - bio->bi_bdev = sdev->dev->bdev; - bio->bi_sector = sbio->physical >> 9; - - for (i = 0; i < sbio->count; ++i) { - struct page *page; - int ret; - - page = alloc_page(GFP_NOFS); - if (!page) - goto nomem; - - ret = bio_add_page(bio, page, PAGE_SIZE, 0); - if (!ret) { - __free_page(page); - goto nomem; - } - } - sbio->err = 0; sdev->curr = -1; atomic_inc(&sdev->in_flight); - submit_bio(READ, bio); + submit_bio(READ, sbio->bio); return 0; - -nomem: - scrub_free_bio(bio); - - return -ENOMEM; } static int scrub_page(struct scrub_dev *sdev, u64 logical, u64 len, @@ -995,6 +963,8 @@ static int scrub_page(struct scrub_dev *sdev, u64 logical, u64 len, u8 *csum, int force) { struct scrub_bio *sbio; + struct page *page; + int ret; again: /* @@ -1015,12 +985,22 @@ again: } sbio = sdev->bios[sdev->curr]; if (sbio->count == 0) { + struct bio *bio; + sbio->physical = physical; sbio->logical = logical; + bio = bio_alloc(GFP_NOFS, 
SCRUB_PAGES_PER_BIO); + if (!bio) + return -ENOMEM; + + bio->bi_private = sbio; + bio->bi_end_io = scrub_bio_end_io; + bio->bi_bdev = sdev->dev->bdev; + bio->bi_sector = sbio->physical >> 9; + sbio->err = 0; + sbio->bio = bio; } else if (sbio->physical + sbio->count * PAGE_SIZE != physical || sbio->logical + sbio->count * PAGE_SIZE != logical) { - int ret; - ret = scrub_submit(sdev); if (ret) return ret; @@ -1030,6 +1010,20 @@ again: sbio->spag[sbio->count].generation = gen; sbio->spag[sbio->count].have_csum = 0; sbio->spag[sbio->count].mirror_num = mirror_num; + + page = alloc_page(GFP_NOFS); + if (!page) + return -ENOMEM; + + ret = bio_add_page(sbio->bio, page, PAGE_SIZE, 0); + if (!ret) { + __free_page(page); + ret = scrub_submit(sdev); + if (ret) + return ret; + goto again; + } + if (csum) { sbio->spag[sbio->count].have_csum = 1; memcpy(sbio->spag[sbio->count].csum, csum, sdev->csum_size); diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 57080dffdfc6..8bd9d6d0e07a 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -197,7 +197,7 @@ static match_table_t tokens = { {Opt_subvolrootid, "subvolrootid=%d"}, {Opt_defrag, "autodefrag"}, {Opt_inode_cache, "inode_cache"}, - {Opt_no_space_cache, "no_space_cache"}, + {Opt_no_space_cache, "nospace_cache"}, {Opt_recovery, "recovery"}, {Opt_err, NULL}, }; @@ -448,6 +448,7 @@ static int btrfs_parse_early_options(const char *options, fmode_t flags, token = match_token(p, tokens, args); switch (token) { case Opt_subvol: + kfree(*subvol_name); *subvol_name = match_strdup(&args[0]); break; case Opt_subvolid: @@ -710,7 +711,7 @@ static int btrfs_show_options(struct seq_file *seq, struct vfsmount *vfs) if (btrfs_test_opt(root, SPACE_CACHE)) seq_puts(seq, ",space_cache"); else - seq_puts(seq, ",no_space_cache"); + seq_puts(seq, ",nospace_cache"); if (btrfs_test_opt(root, CLEAR_CACHE)) seq_puts(seq, ",clear_cache"); if (btrfs_test_opt(root, USER_SUBVOL_RM_ALLOWED)) @@ -890,7 +891,6 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags, struct super_block *s; struct dentry *root; struct btrfs_fs_devices *fs_devices = NULL; - struct btrfs_root *tree_root = NULL; struct btrfs_fs_info *fs_info = NULL; fmode_t mode = FMODE_READ; char *subvol_name = NULL; @@ -904,8 +904,10 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags, error = btrfs_parse_early_options(data, mode, fs_type, &subvol_name, &subvol_objectid, &subvol_rootid, &fs_devices); - if (error) + if (error) { + kfree(subvol_name); return ERR_PTR(error); + } if (subvol_name) { root = mount_subvol(subvol_name, flags, device_name, data); @@ -917,15 +919,6 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags, if (error) return ERR_PTR(error); - error = btrfs_open_devices(fs_devices, mode, fs_type); - if (error) - return ERR_PTR(error); - - if (!(flags & MS_RDONLY) && fs_devices->rw_devices == 0) { - error = -EACCES; - goto error_close_devices; - } - /* * Setup a dummy root and fs_info for test/set super. This is because * we don't actually fill this stuff out until open_ctree, but we need @@ -933,24 +926,36 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags, * then open_ctree will properly initialize everything later. 
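The fs/btrfs/scrub.c hunks a little further up move bio and page allocation out of scrub_submit() and into scrub_page(): the bio is created lazily when a batch starts, pages are attached one at a time with bio_add_page(), and the batch is submitted early when the bio is full or the next page is not physically contiguous. The kernel-style sketch below isolates that pattern; struct scrub_batch, MAX_BATCH_PAGES and the helper names are invented, while bio_alloc()/bio_add_page()/submit_bio() are the block-layer calls the hunk itself uses in this kernel generation.

    #include <linux/bio.h>
    #include <linux/fs.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/types.h>

    #define MAX_BATCH_PAGES 16    /* arbitrary for the sketch */

    struct scrub_batch {
        struct bio *bio;    /* NULL until the first page of a batch */
        u64 physical;       /* byte offset of the first page in the batch */
        int count;          /* pages attached so far */
    };

    static void batch_end_io(struct bio *bio, int err)
    {
        int i;

        /* A real implementation would also record err per page. */
        for (i = 0; i < bio->bi_vcnt; i++)
            __free_page(bio->bi_io_vec[i].bv_page);
        bio_put(bio);
    }

    static void batch_submit(struct scrub_batch *b)
    {
        if (!b->bio)
            return;
        submit_bio(READ, b->bio);
        b->bio = NULL;
        b->count = 0;
    }

    static int batch_add_page(struct scrub_batch *b, struct block_device *bdev,
                              u64 physical)
    {
        struct page *page;

    again:
        if (!b->bio) {
            /* First page of a new batch: set the bio up now. */
            b->bio = bio_alloc(GFP_NOFS, MAX_BATCH_PAGES);
            if (!b->bio)
                return -ENOMEM;
            b->bio->bi_bdev = bdev;
            b->bio->bi_sector = physical >> 9;
            b->bio->bi_private = b;
            b->bio->bi_end_io = batch_end_io;
            b->physical = physical;
        } else if (b->physical + b->count * PAGE_SIZE != physical) {
            /* Not contiguous with the batch: flush and start over. */
            batch_submit(b);
            goto again;
        }

        page = alloc_page(GFP_NOFS);
        if (!page)
            return -ENOMEM;

        if (!bio_add_page(b->bio, page, PAGE_SIZE, 0)) {
            /* bio full: submit what we have and retry with a fresh one. */
            __free_page(page);
            batch_submit(b);
            goto again;
        }
        b->count++;
        return 0;
    }

Allocating as pages are queued means an -ENOMEM surfaces while the caller can still back off, rather than at submit time when the batch is already committed.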
*/ fs_info = kzalloc(sizeof(struct btrfs_fs_info), GFP_NOFS); - tree_root = kzalloc(sizeof(struct btrfs_root), GFP_NOFS); - if (!fs_info || !tree_root) { + if (!fs_info) + return ERR_PTR(-ENOMEM); + + fs_info->tree_root = kzalloc(sizeof(struct btrfs_root), GFP_NOFS); + if (!fs_info->tree_root) { error = -ENOMEM; - goto error_close_devices; + goto error_fs_info; } - fs_info->tree_root = tree_root; + fs_info->tree_root->fs_info = fs_info; fs_info->fs_devices = fs_devices; - tree_root->fs_info = fs_info; fs_info->super_copy = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_NOFS); fs_info->super_for_commit = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_NOFS); if (!fs_info->super_copy || !fs_info->super_for_commit) { error = -ENOMEM; + goto error_fs_info; + } + + error = btrfs_open_devices(fs_devices, mode, fs_type); + if (error) + goto error_fs_info; + + if (!(flags & MS_RDONLY) && fs_devices->rw_devices == 0) { + error = -EACCES; goto error_close_devices; } bdev = fs_devices->latest_bdev; - s = sget(fs_type, btrfs_test_super, btrfs_set_super, tree_root); + s = sget(fs_type, btrfs_test_super, btrfs_set_super, + fs_info->tree_root); if (IS_ERR(s)) { error = PTR_ERR(s); goto error_close_devices; @@ -959,12 +964,12 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags, if (s->s_root) { if ((flags ^ s->s_flags) & MS_RDONLY) { deactivate_locked_super(s); - return ERR_PTR(-EBUSY); + error = -EBUSY; + goto error_close_devices; } btrfs_close_devices(fs_devices); free_fs_info(fs_info); - kfree(tree_root); } else { char b[BDEVNAME_SIZE]; @@ -991,8 +996,8 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags, error_close_devices: btrfs_close_devices(fs_devices); +error_fs_info: free_fs_info(fs_info); - kfree(tree_root); return ERR_PTR(error); } diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index 960835eaf4da..6a0574e923bc 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -882,8 +882,8 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans, btrfs_reloc_pre_snapshot(trans, pending, &to_reserve); if (to_reserve > 0) { - ret = btrfs_block_rsv_add(root, &pending->block_rsv, - to_reserve); + ret = btrfs_block_rsv_add_noflush(root, &pending->block_rsv, + to_reserve); if (ret) { pending->error = ret; goto fail; diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index f8e2943101a1..c37433d3cd82 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -999,7 +999,7 @@ static int btrfs_free_dev_extent(struct btrfs_trans_handle *trans, key.objectid = device->devid; key.offset = start; key.type = BTRFS_DEV_EXTENT_KEY; - +again: ret = btrfs_search_slot(trans, root, &key, path, -1, 1); if (ret > 0) { ret = btrfs_previous_item(root, path, key.objectid, @@ -1012,6 +1012,9 @@ static int btrfs_free_dev_extent(struct btrfs_trans_handle *trans, struct btrfs_dev_extent); BUG_ON(found_key.offset > start || found_key.offset + btrfs_dev_extent_length(leaf, extent) < start); + key = found_key; + btrfs_release_path(path); + goto again; } else if (ret == 0) { leaf = path->nodes[0]; extent = btrfs_item_ptr(leaf, path->slots[0], diff --git a/fs/cifs/file.c b/fs/cifs/file.c index c1f063cd1b0c..cf0b1539b321 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -645,20 +645,20 @@ int cifs_closedir(struct inode *inode, struct file *file) } static struct cifsLockInfo * -cifs_lock_init(__u64 len, __u64 offset, __u8 type, __u16 netfid) +cifs_lock_init(__u64 offset, __u64 length, __u8 type, __u16 netfid) { - struct cifsLockInfo *li = + struct 
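The btrfs_mount() rework above also changes the order of setup so that purely local allocations (fs_info, tree_root, the superblock copies) happen before btrfs_open_devices(), and the error labels then unwind strictly in reverse: error_close_devices closes the devices and falls through into error_fs_info, which only frees memory. Here is that "acquire in order, unwind in reverse with goto" convention in a minimal, self-contained form; the resource names are hypothetical.

    #include <stdlib.h>

    struct ctx {
        int *a;
        int *b;
    };

    static int open_device(void) { return 0; }    /* stand-in: 0 on success */
    static void close_device(void) { }

    static int ctx_setup(struct ctx *c)
    {
        int err = -1;

        c->a = malloc(sizeof(*c->a));    /* cheap local allocations first */
        if (!c->a)
            return -1;

        c->b = malloc(sizeof(*c->b));
        if (!c->b)
            goto err_free_a;

        err = open_device();             /* the external resource comes last */
        if (err)
            goto err_free_b;

        return 0;

        /* unwind in the reverse order of acquisition */
    err_free_b:
        free(c->b);
    err_free_a:
        free(c->a);
        return err;
    }

    int main(void)
    {
        struct ctx c;

        if (ctx_setup(&c))
            return 1;
        close_device();                  /* normal teardown, reverse order again */
        free(c.b);
        free(c.a);
        return 0;
    }

Putting the failure-prone external step last keeps every error label trivial.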
cifsLockInfo *lock = kmalloc(sizeof(struct cifsLockInfo), GFP_KERNEL); - if (!li) - return li; - li->netfid = netfid; - li->offset = offset; - li->length = len; - li->type = type; - li->pid = current->tgid; - INIT_LIST_HEAD(&li->blist); - init_waitqueue_head(&li->block_q); - return li; + if (!lock) + return lock; + lock->offset = offset; + lock->length = length; + lock->type = type; + lock->netfid = netfid; + lock->pid = current->tgid; + INIT_LIST_HEAD(&lock->blist); + init_waitqueue_head(&lock->block_q); + return lock; } static void @@ -672,7 +672,7 @@ cifs_del_lock_waiters(struct cifsLockInfo *lock) } static bool -cifs_find_lock_conflict(struct cifsInodeInfo *cinode, __u64 offset, +__cifs_find_lock_conflict(struct cifsInodeInfo *cinode, __u64 offset, __u64 length, __u8 type, __u16 netfid, struct cifsLockInfo **conf_lock) { @@ -694,6 +694,14 @@ cifs_find_lock_conflict(struct cifsInodeInfo *cinode, __u64 offset, return false; } +static bool +cifs_find_lock_conflict(struct cifsInodeInfo *cinode, struct cifsLockInfo *lock, + struct cifsLockInfo **conf_lock) +{ + return __cifs_find_lock_conflict(cinode, lock->offset, lock->length, + lock->type, lock->netfid, conf_lock); +} + static int cifs_lock_test(struct cifsInodeInfo *cinode, __u64 offset, __u64 length, __u8 type, __u16 netfid, struct file_lock *flock) @@ -704,8 +712,8 @@ cifs_lock_test(struct cifsInodeInfo *cinode, __u64 offset, __u64 length, mutex_lock(&cinode->lock_mutex); - exist = cifs_find_lock_conflict(cinode, offset, length, type, netfid, - &conf_lock); + exist = __cifs_find_lock_conflict(cinode, offset, length, type, netfid, + &conf_lock); if (exist) { flock->fl_start = conf_lock->offset; flock->fl_end = conf_lock->offset + conf_lock->length - 1; @@ -723,40 +731,27 @@ cifs_lock_test(struct cifsInodeInfo *cinode, __u64 offset, __u64 length, return rc; } -static int -cifs_lock_add(struct cifsInodeInfo *cinode, __u64 len, __u64 offset, - __u8 type, __u16 netfid) +static void +cifs_lock_add(struct cifsInodeInfo *cinode, struct cifsLockInfo *lock) { - struct cifsLockInfo *li; - - li = cifs_lock_init(len, offset, type, netfid); - if (!li) - return -ENOMEM; - mutex_lock(&cinode->lock_mutex); - list_add_tail(&li->llist, &cinode->llist); + list_add_tail(&lock->llist, &cinode->llist); mutex_unlock(&cinode->lock_mutex); - return 0; } static int -cifs_lock_add_if(struct cifsInodeInfo *cinode, __u64 offset, __u64 length, - __u8 type, __u16 netfid, bool wait) +cifs_lock_add_if(struct cifsInodeInfo *cinode, struct cifsLockInfo *lock, + bool wait) { - struct cifsLockInfo *lock, *conf_lock; + struct cifsLockInfo *conf_lock; bool exist; int rc = 0; - lock = cifs_lock_init(length, offset, type, netfid); - if (!lock) - return -ENOMEM; - try_again: exist = false; mutex_lock(&cinode->lock_mutex); - exist = cifs_find_lock_conflict(cinode, offset, length, type, netfid, - &conf_lock); + exist = cifs_find_lock_conflict(cinode, lock, &conf_lock); if (!exist && cinode->can_cache_brlcks) { list_add_tail(&lock->llist, &cinode->llist); mutex_unlock(&cinode->lock_mutex); @@ -775,13 +770,10 @@ try_again: (lock->blist.next == &lock->blist)); if (!rc) goto try_again; - else { - mutex_lock(&cinode->lock_mutex); - list_del_init(&lock->blist); - } + mutex_lock(&cinode->lock_mutex); + list_del_init(&lock->blist); } - kfree(lock); mutex_unlock(&cinode->lock_mutex); return rc; } @@ -933,7 +925,7 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile) else type = CIFS_WRLCK; - lck = cifs_lock_init(length, flock->fl_start, type, + lck = cifs_lock_init(flock->fl_start, 
length, type, cfile->netfid); if (!lck) { rc = -ENOMEM; @@ -1070,14 +1062,12 @@ cifs_getlk(struct file *file, struct file_lock *flock, __u8 type, if (rc != 0) cERROR(1, "Error unlocking previously locked " "range %d during test of lock", rc); - rc = 0; - return rc; + return 0; } if (type & LOCKING_ANDX_SHARED_LOCK) { flock->fl_type = F_WRLCK; - rc = 0; - return rc; + return 0; } rc = CIFSSMBLock(xid, tcon, netfid, current->tgid, length, @@ -1095,8 +1085,7 @@ cifs_getlk(struct file *file, struct file_lock *flock, __u8 type, } else flock->fl_type = F_WRLCK; - rc = 0; - return rc; + return 0; } static void @@ -1254,20 +1243,26 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u8 type, } if (lock) { - rc = cifs_lock_add_if(cinode, flock->fl_start, length, - type, netfid, wait_flag); + struct cifsLockInfo *lock; + + lock = cifs_lock_init(flock->fl_start, length, type, netfid); + if (!lock) + return -ENOMEM; + + rc = cifs_lock_add_if(cinode, lock, wait_flag); if (rc < 0) - return rc; - else if (!rc) + kfree(lock); + if (rc <= 0) goto out; rc = CIFSSMBLock(xid, tcon, netfid, current->tgid, length, flock->fl_start, 0, 1, type, wait_flag, 0); - if (rc == 0) { - /* For Windows locks we must store them. */ - rc = cifs_lock_add(cinode, length, flock->fl_start, - type, netfid); + if (rc) { + kfree(lock); + goto out; } + + cifs_lock_add(cinode, lock); } else if (unlock) rc = cifs_unlock_range(cfile, flock, xid); diff --git a/fs/dcache.c b/fs/dcache.c index 274f13e2f094..a901c6901bce 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -546,9 +546,11 @@ int d_invalidate(struct dentry * dentry) * would make it unreachable from the root, * we might still populate it if it was a * working directory or similar). + * We also need to leave mountpoints alone, + * directory or not. */ - if (dentry->d_count > 1) { - if (dentry->d_inode && S_ISDIR(dentry->d_inode->i_mode)) { + if (dentry->d_count > 1 && dentry->d_inode) { + if (S_ISDIR(dentry->d_inode->i_mode) || d_mountpoint(dentry)) { spin_unlock(&dentry->d_lock); return -EBUSY; } diff --git a/fs/hfs/trans.c b/fs/hfs/trans.c index e673a88b8ae7..b1ce4c7ad3fb 100644 --- a/fs/hfs/trans.c +++ b/fs/hfs/trans.c @@ -40,6 +40,8 @@ int hfs_mac2asc(struct super_block *sb, char *out, const struct hfs_name *in) src = in->name; srclen = in->len; + if (srclen > HFS_NAMELEN) + srclen = HFS_NAMELEN; dst = out; dstlen = HFS_MAX_NAMELEN; if (nls_io) { diff --git a/fs/jffs2/compr.c b/fs/jffs2/compr.c index de4247021d25..5b6c9d1a2fb9 100644 --- a/fs/jffs2/compr.c +++ b/fs/jffs2/compr.c @@ -53,6 +53,78 @@ static int jffs2_is_best_compression(struct jffs2_compressor *this, return 0; } +/* + * jffs2_selected_compress: + * @compr: Explicit compression type to use (ie, JFFS2_COMPR_ZLIB). + * If 0, just take the first available compression mode. + * @data_in: Pointer to uncompressed data + * @cpage_out: Pointer to returned pointer to buffer for compressed data + * @datalen: On entry, holds the amount of data available for compression. + * On exit, expected to hold the amount of data actually compressed. + * @cdatalen: On entry, holds the amount of space available for compressed + * data. On exit, expected to hold the actual size of the compressed + * data. + * + * Returns: the compression type used. Zero is used to show that the data + * could not be compressed; probably because we couldn't find the requested + * compression mode. 
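In the fs/cifs/file.c changes above, cifs_setlk() now allocates the cifsLockInfo once and passes the object around: cifs_lock_add_if() links it into the inode's list when the lock can be cached locally, otherwise ownership stays with the caller, which must kfree() it if the server call fails and cifs_lock_add() it on success. The sketch below mirrors that ownership rule with invented names (lock_obj, try_add_local, send_to_server); it is an analogy, not the cifs implementation.

    #include <errno.h>
    #include <stdlib.h>

    struct lock_obj {
        long long offset, length;
        struct lock_obj *next;
    };

    static struct lock_obj *cached;       /* locally cached locks */

    /* Returns 1 when the list took ownership of the lock, 0 otherwise. */
    static int try_add_local(struct lock_obj *l, int can_cache)
    {
        if (!can_cache)
            return 0;
        l->next = cached;
        cached = l;
        return 1;
    }

    static int send_to_server(struct lock_obj *l)
    {
        (void)l;
        return 0;                         /* pretend the server accepted it */
    }

    static int set_lock(long long offset, long long length, int can_cache)
    {
        struct lock_obj *l = malloc(sizeof(*l));
        int rc;

        if (!l)
            return -ENOMEM;
        l->offset = offset;
        l->length = length;
        l->next = NULL;

        if (try_add_local(l, can_cache))
            return 0;                     /* the list owns the lock now */

        rc = send_to_server(l);
        if (rc) {
            free(l);                      /* still ours on failure */
            return rc;
        }
        try_add_local(l, 1);              /* server accepted: cache it */
        return 0;
    }

    int main(void)
    {
        return set_lock(0, 4096, 0) ? 1 : 0;
    }

Allocating once up front also removes the duplicated initialisation that the old cifs_lock_add() carried.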
+ */ +static int jffs2_selected_compress(u8 compr, unsigned char *data_in, + unsigned char **cpage_out, u32 *datalen, u32 *cdatalen) +{ + struct jffs2_compressor *this; + int err, ret = JFFS2_COMPR_NONE; + uint32_t orig_slen, orig_dlen; + char *output_buf; + + output_buf = kmalloc(*cdatalen, GFP_KERNEL); + if (!output_buf) { + printk(KERN_WARNING "JFFS2: No memory for compressor allocation. Compression failed.\n"); + return ret; + } + orig_slen = *datalen; + orig_dlen = *cdatalen; + spin_lock(&jffs2_compressor_list_lock); + list_for_each_entry(this, &jffs2_compressor_list, list) { + /* Skip decompress-only and disabled modules */ + if (!this->compress || this->disabled) + continue; + + /* Skip if not the desired compression type */ + if (compr && (compr != this->compr)) + continue; + + /* + * Either compression type was unspecified, or we found our + * compressor; either way, we're good to go. + */ + this->usecount++; + spin_unlock(&jffs2_compressor_list_lock); + + *datalen = orig_slen; + *cdatalen = orig_dlen; + err = this->compress(data_in, output_buf, datalen, cdatalen); + + spin_lock(&jffs2_compressor_list_lock); + this->usecount--; + if (!err) { + /* Success */ + ret = this->compr; + this->stat_compr_blocks++; + this->stat_compr_orig_size += *datalen; + this->stat_compr_new_size += *cdatalen; + break; + } + } + spin_unlock(&jffs2_compressor_list_lock); + if (ret == JFFS2_COMPR_NONE) + kfree(output_buf); + else + *cpage_out = output_buf; + + return ret; +} + /* jffs2_compress: * @data_in: Pointer to uncompressed data * @cpage_out: Pointer to returned pointer to buffer for compressed data @@ -76,47 +148,23 @@ uint16_t jffs2_compress(struct jffs2_sb_info *c, struct jffs2_inode_info *f, uint32_t *datalen, uint32_t *cdatalen) { int ret = JFFS2_COMPR_NONE; - int compr_ret; + int mode, compr_ret; struct jffs2_compressor *this, *best=NULL; unsigned char *output_buf = NULL, *tmp_buf; uint32_t orig_slen, orig_dlen; uint32_t best_slen=0, best_dlen=0; - switch (jffs2_compression_mode) { + if (c->mount_opts.override_compr) + mode = c->mount_opts.compr; + else + mode = jffs2_compression_mode; + + switch (mode) { case JFFS2_COMPR_MODE_NONE: break; case JFFS2_COMPR_MODE_PRIORITY: - output_buf = kmalloc(*cdatalen,GFP_KERNEL); - if (!output_buf) { - printk(KERN_WARNING "JFFS2: No memory for compressor allocation. 
Compression failed.\n"); - goto out; - } - orig_slen = *datalen; - orig_dlen = *cdatalen; - spin_lock(&jffs2_compressor_list_lock); - list_for_each_entry(this, &jffs2_compressor_list, list) { - /* Skip decompress-only backwards-compatibility and disabled modules */ - if ((!this->compress)||(this->disabled)) - continue; - - this->usecount++; - spin_unlock(&jffs2_compressor_list_lock); - *datalen = orig_slen; - *cdatalen = orig_dlen; - compr_ret = this->compress(data_in, output_buf, datalen, cdatalen); - spin_lock(&jffs2_compressor_list_lock); - this->usecount--; - if (!compr_ret) { - ret = this->compr; - this->stat_compr_blocks++; - this->stat_compr_orig_size += *datalen; - this->stat_compr_new_size += *cdatalen; - break; - } - } - spin_unlock(&jffs2_compressor_list_lock); - if (ret == JFFS2_COMPR_NONE) - kfree(output_buf); + ret = jffs2_selected_compress(0, data_in, cpage_out, datalen, + cdatalen); break; case JFFS2_COMPR_MODE_SIZE: case JFFS2_COMPR_MODE_FAVOURLZO: @@ -174,22 +222,28 @@ uint16_t jffs2_compress(struct jffs2_sb_info *c, struct jffs2_inode_info *f, best->stat_compr_orig_size += best_slen; best->stat_compr_new_size += best_dlen; ret = best->compr; + *cpage_out = output_buf; } spin_unlock(&jffs2_compressor_list_lock); break; + case JFFS2_COMPR_MODE_FORCELZO: + ret = jffs2_selected_compress(JFFS2_COMPR_LZO, data_in, + cpage_out, datalen, cdatalen); + break; + case JFFS2_COMPR_MODE_FORCEZLIB: + ret = jffs2_selected_compress(JFFS2_COMPR_ZLIB, data_in, + cpage_out, datalen, cdatalen); + break; default: printk(KERN_ERR "JFFS2: unknown compression mode.\n"); } - out: + if (ret == JFFS2_COMPR_NONE) { *cpage_out = data_in; *datalen = *cdatalen; none_stat_compr_blocks++; none_stat_compr_size += *datalen; } - else { - *cpage_out = output_buf; - } return ret; } diff --git a/fs/jffs2/compr.h b/fs/jffs2/compr.h index 13bb7597ab39..5e91d578f4ed 100644 --- a/fs/jffs2/compr.h +++ b/fs/jffs2/compr.h @@ -40,6 +40,8 @@ #define JFFS2_COMPR_MODE_PRIORITY 1 #define JFFS2_COMPR_MODE_SIZE 2 #define JFFS2_COMPR_MODE_FAVOURLZO 3 +#define JFFS2_COMPR_MODE_FORCELZO 4 +#define JFFS2_COMPR_MODE_FORCEZLIB 5 #define FAVOUR_LZO_PERCENT 80 diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c index 7286e44ac665..4b8afe39a87f 100644 --- a/fs/jffs2/fs.c +++ b/fs/jffs2/fs.c @@ -379,7 +379,7 @@ void jffs2_dirty_inode(struct inode *inode, int flags) jffs2_do_setattr(inode, &iattr); } -int jffs2_remount_fs (struct super_block *sb, int *flags, char *data) +int jffs2_do_remount_fs(struct super_block *sb, int *flags, char *data) { struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); diff --git a/fs/jffs2/jffs2_fs_sb.h b/fs/jffs2/jffs2_fs_sb.h index 0bc6a6c80a56..55a0c1dceadf 100644 --- a/fs/jffs2/jffs2_fs_sb.h +++ b/fs/jffs2/jffs2_fs_sb.h @@ -29,6 +29,11 @@ struct jffs2_inodirty; +struct jffs2_mount_opts { + bool override_compr; + unsigned int compr; +}; + /* A struct for the overall file system control. Pointers to jffs2_sb_info structs are named `c' in the source code. 
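With jffs2_selected_compress() factored out above, jffs2_compress() becomes a dispatcher: PRIORITY mode passes 0 ("first compressor that works"), the new FORCELZO/FORCEZLIB modes pass an explicit compressor id, and the mode itself comes from c->mount_opts.compr when override_compr is set, falling back to the global jffs2_compression_mode otherwise. The condensed, user-space sketch below keeps only that dispatch logic; the JFFS2_COMPR_MODE_* values match the compr.h hunk, while the compressor-id values and all other names are illustrative stand-ins.

    #include <stdio.h>

    #define JFFS2_COMPR_MODE_NONE       0
    #define JFFS2_COMPR_MODE_PRIORITY   1
    #define JFFS2_COMPR_MODE_SIZE       2
    #define JFFS2_COMPR_MODE_FAVOURLZO  3
    #define JFFS2_COMPR_MODE_FORCELZO   4
    #define JFFS2_COMPR_MODE_FORCEZLIB  5

    /* Compressor ids; the values here are placeholders for the sketch. */
    #define COMPR_NONE 0
    #define COMPR_ZLIB 1
    #define COMPR_LZO  2

    static int global_mode = JFFS2_COMPR_MODE_PRIORITY;   /* jffs2_compression_mode stand-in */

    struct mount_opts {
        int override_compr;
        unsigned int compr;
    };

    /* Stand-in for jffs2_selected_compress(); 0 means "first one that works". */
    static unsigned int selected_compress(unsigned int requested)
    {
        return requested ? requested : COMPR_ZLIB;
    }

    static unsigned int pick_compressor(const struct mount_opts *opts)
    {
        int mode = opts->override_compr ? (int)opts->compr : global_mode;

        switch (mode) {
        case JFFS2_COMPR_MODE_NONE:
            return COMPR_NONE;
        case JFFS2_COMPR_MODE_PRIORITY:
            return selected_compress(0);
        case JFFS2_COMPR_MODE_FORCELZO:
            return selected_compress(COMPR_LZO);
        case JFFS2_COMPR_MODE_FORCEZLIB:
            return selected_compress(COMPR_ZLIB);
        default:    /* SIZE / FAVOURLZO compare every compressor; collapsed here */
            return selected_compress(0);
        }
    }

    int main(void)
    {
        struct mount_opts o = { .override_compr = 1, .compr = JFFS2_COMPR_MODE_FORCELZO };

        printf("compressor id: %u\n", pick_compressor(&o));
        return 0;
    }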
Nee jffs_control @@ -126,6 +131,7 @@ struct jffs2_sb_info { #endif struct jffs2_summary *summary; /* Summary information */ + struct jffs2_mount_opts mount_opts; #ifdef CONFIG_JFFS2_FS_XATTR #define XATTRINDEX_HASHSIZE (57) diff --git a/fs/jffs2/os-linux.h b/fs/jffs2/os-linux.h index 6c1755c59c0f..ab65ee3ec858 100644 --- a/fs/jffs2/os-linux.h +++ b/fs/jffs2/os-linux.h @@ -176,7 +176,7 @@ void jffs2_dirty_inode(struct inode *inode, int flags); struct inode *jffs2_new_inode (struct inode *dir_i, umode_t mode, struct jffs2_raw_inode *ri); int jffs2_statfs (struct dentry *, struct kstatfs *); -int jffs2_remount_fs (struct super_block *, int *, char *); +int jffs2_do_remount_fs(struct super_block *, int *, char *); int jffs2_do_fill_super(struct super_block *sb, void *data, int silent); void jffs2_gc_release_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f); diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c index 8d8cd3419d02..28107ca136e4 100644 --- a/fs/jffs2/scan.c +++ b/fs/jffs2/scan.c @@ -275,9 +275,7 @@ int jffs2_scan_medium(struct jffs2_sb_info *c) else c->mtd->unpoint(c->mtd, 0, c->mtd->size); #endif - if (s) - kfree(s); - + kfree(s); return ret; } diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c index 853b8e300084..e7e974454115 100644 --- a/fs/jffs2/super.c +++ b/fs/jffs2/super.c @@ -17,11 +17,13 @@ #include <linux/fs.h> #include <linux/err.h> #include <linux/mount.h> +#include <linux/parser.h> #include <linux/jffs2.h> #include <linux/pagemap.h> #include <linux/mtd/super.h> #include <linux/ctype.h> #include <linux/namei.h> +#include <linux/seq_file.h> #include <linux/exportfs.h> #include "compr.h" #include "nodelist.h" @@ -75,6 +77,37 @@ static void jffs2_write_super(struct super_block *sb) unlock_super(sb); } +static const char *jffs2_compr_name(unsigned int compr) +{ + switch (compr) { + case JFFS2_COMPR_MODE_NONE: + return "none"; +#ifdef CONFIG_JFFS2_LZO + case JFFS2_COMPR_MODE_FORCELZO: + return "lzo"; +#endif +#ifdef CONFIG_JFFS2_ZLIB + case JFFS2_COMPR_MODE_FORCEZLIB: + return "zlib"; +#endif + default: + /* should never happen; programmer error */ + WARN_ON(1); + return ""; + } +} + +static int jffs2_show_options(struct seq_file *s, struct vfsmount *mnt) +{ + struct jffs2_sb_info *c = JFFS2_SB_INFO(mnt->mnt_sb); + struct jffs2_mount_opts *opts = &c->mount_opts; + + if (opts->override_compr) + seq_printf(s, ",compr=%s", jffs2_compr_name(opts->compr)); + + return 0; +} + static int jffs2_sync_fs(struct super_block *sb, int wait) { struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); @@ -133,6 +166,85 @@ static const struct export_operations jffs2_export_ops = { .fh_to_parent = jffs2_fh_to_parent, }; +/* + * JFFS2 mount options. 
+ * + * Opt_override_compr: override default compressor + * Opt_err: just end of array marker + */ +enum { + Opt_override_compr, + Opt_err, +}; + +static const match_table_t tokens = { + {Opt_override_compr, "compr=%s"}, + {Opt_err, NULL}, +}; + +static int jffs2_parse_options(struct jffs2_sb_info *c, char *data) +{ + substring_t args[MAX_OPT_ARGS]; + char *p, *name; + + if (!data) + return 0; + + while ((p = strsep(&data, ","))) { + int token; + + if (!*p) + continue; + + token = match_token(p, tokens, args); + switch (token) { + case Opt_override_compr: + name = match_strdup(&args[0]); + + if (!name) + return -ENOMEM; + if (!strcmp(name, "none")) + c->mount_opts.compr = JFFS2_COMPR_MODE_NONE; +#ifdef CONFIG_JFFS2_LZO + else if (!strcmp(name, "lzo")) + c->mount_opts.compr = JFFS2_COMPR_MODE_FORCELZO; +#endif +#ifdef CONFIG_JFFS2_ZLIB + else if (!strcmp(name, "zlib")) + c->mount_opts.compr = + JFFS2_COMPR_MODE_FORCEZLIB; +#endif + else { + printk(KERN_ERR "JFFS2 Error: unknown compressor \"%s\"", + name); + kfree(name); + return -EINVAL; + } + kfree(name); + c->mount_opts.override_compr = true; + break; + default: + printk(KERN_ERR "JFFS2 Error: unrecognized mount option '%s' or missing value\n", + p); + return -EINVAL; + } + } + + return 0; +} + +static int jffs2_remount_fs(struct super_block *sb, int *flags, char *data) +{ + struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); + int err; + + err = jffs2_parse_options(c, data); + if (err) + return -EINVAL; + + return jffs2_do_remount_fs(sb, flags, data); +} + static const struct super_operations jffs2_super_operations = { .alloc_inode = jffs2_alloc_inode, @@ -143,6 +255,7 @@ static const struct super_operations jffs2_super_operations = .remount_fs = jffs2_remount_fs, .evict_inode = jffs2_evict_inode, .dirty_inode = jffs2_dirty_inode, + .show_options = jffs2_show_options, .sync_fs = jffs2_sync_fs, }; @@ -166,6 +279,12 @@ static int jffs2_fill_super(struct super_block *sb, void *data, int silent) c->os_priv = sb; sb->s_fs_info = c; + ret = jffs2_parse_options(c, data); + if (ret) { + kfree(c); + return -EINVAL; + } + /* Initialize JFFS2 superblock locks, the further initialization will * be done later */ mutex_init(&c->alloc_sem); diff --git a/fs/jffs2/wbuf.c b/fs/jffs2/wbuf.c index 4515bea0268f..b09e51d2f81f 100644 --- a/fs/jffs2/wbuf.c +++ b/fs/jffs2/wbuf.c @@ -578,8 +578,7 @@ static int __jffs2_flush_wbuf(struct jffs2_sb_info *c, int pad) if (!jffs2_is_writebuffered(c)) return 0; - if (mutex_trylock(&c->alloc_sem)) { - mutex_unlock(&c->alloc_sem); + if (!mutex_is_locked(&c->alloc_sem)) { printk(KERN_CRIT "jffs2_flush_wbuf() called with alloc_sem not locked!\n"); BUG(); } @@ -1026,7 +1025,7 @@ int jffs2_check_oob_empty(struct jffs2_sb_info *c, int cmlen = min_t(int, c->oobavail, OOB_CM_SIZE); struct mtd_oob_ops ops; - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.ooblen = NR_OOB_SCAN_PAGES * c->oobavail; ops.oobbuf = c->oobbuf; ops.len = ops.ooboffs = ops.retlen = ops.oobretlen = 0; @@ -1069,7 +1068,7 @@ int jffs2_check_nand_cleanmarker(struct jffs2_sb_info *c, struct mtd_oob_ops ops; int ret, cmlen = min_t(int, c->oobavail, OOB_CM_SIZE); - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.ooblen = cmlen; ops.oobbuf = c->oobbuf; ops.len = ops.ooboffs = ops.retlen = ops.oobretlen = 0; @@ -1095,7 +1094,7 @@ int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c, struct mtd_oob_ops ops; int cmlen = min_t(int, c->oobavail, OOB_CM_SIZE); - ops.mode = MTD_OOB_AUTO; + ops.mode = MTD_OPS_AUTO_OOB; ops.ooblen = cmlen; ops.oobbuf 
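jffs2_parse_options() above follows the usual kernel idiom: strsep() splits the comma-separated option string, match_token() classifies each piece against a match_table_t, and match_strdup() copies out the %s argument, which the caller kfree()s in every branch. For readers who want to poke at the control flow outside the kernel, here is a rough user-space approximation that swaps the parser library for strncmp(); it only approximates the flow and is not the in-kernel API.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct opts {
        int override_compr;
        char compr[8];
    };

    /* Parse "compr=<none|lzo|zlib>" out of a mount-style option string. */
    static int parse_options(char *data, struct opts *o)
    {
        char *p;

        while ((p = strsep(&data, ",")) != NULL) {
            if (!*p)
                continue;            /* skip empty fields, as the kernel does */

            if (strncmp(p, "compr=", 6) == 0) {
                const char *name = p + 6;

                if (strcmp(name, "none") && strcmp(name, "lzo") &&
                    strcmp(name, "zlib")) {
                    fprintf(stderr, "unknown compressor \"%s\"\n", name);
                    return -1;
                }
                snprintf(o->compr, sizeof(o->compr), "%s", name);
                o->override_compr = 1;
            } else {
                fprintf(stderr, "unrecognized option '%s'\n", p);
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        char data[] = "compr=lzo";
        struct opts o = { 0 };

        if (parse_options(data, &o))
            return 1;
        printf("override=%d compr=%s\n", o.override_compr, o.compr);
        return 0;
    }

jffs2_show_options() is the mirror image: whatever is accepted here has to be reproduced there so /proc/mounts round-trips, which is why the patch adds both together.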
= (uint8_t *)&oob_cleanmarker; ops.len = ops.ooboffs = ops.retlen = ops.oobretlen = 0; diff --git a/fs/namei.c b/fs/namei.c index ac6d214da827..5008f01787f5 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -852,7 +852,7 @@ static int follow_managed(struct path *path, unsigned flags) mntput(path->mnt); if (ret == -EISDIR) ret = 0; - return ret; + return ret < 0 ? ret : need_mntput; } int follow_down_one(struct path *path) @@ -900,6 +900,7 @@ static bool __follow_mount_rcu(struct nameidata *nd, struct path *path, break; path->mnt = mounted; path->dentry = mounted->mnt_root; + nd->flags |= LOOKUP_JUMPED; nd->seq = read_seqcount_begin(&path->dentry->d_seq); /* * Update the inode too. We don't need to re-check the @@ -1213,6 +1214,8 @@ retry: path_put_conditional(path, nd); return err; } + if (err) + nd->flags |= LOOKUP_JUMPED; *inode = path->dentry->d_inode; return 0; } @@ -2146,6 +2149,10 @@ static struct file *do_last(struct nameidata *nd, struct path *path, } /* create side of things */ + /* + * This will *only* deal with leaving RCU mode - LOOKUP_JUMPED has been + * cleared when we got to the last component we are about to look up + */ error = complete_walk(nd); if (error) return ERR_PTR(error); @@ -2214,6 +2221,9 @@ static struct file *do_last(struct nameidata *nd, struct path *path, if (error < 0) goto exit_dput; + if (error) + nd->flags |= LOOKUP_JUMPED; + error = -ENOENT; if (!path->dentry->d_inode) goto exit_dput; @@ -2223,6 +2233,10 @@ static struct file *do_last(struct nameidata *nd, struct path *path, path_to_nameidata(path, nd); nd->inode = path->dentry->d_inode; + /* Why this, you ask? _Now_ we might have grown LOOKUP_JUMPED... */ + error = complete_walk(nd); + if (error) + goto exit; error = -EISDIR; if (S_ISDIR(nd->inode->i_mode)) goto exit; diff --git a/fs/proc/base.c b/fs/proc/base.c index 2db1bd3173b2..851ba3dcdc29 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -1652,46 +1652,12 @@ out: return error; } -static int proc_pid_fd_link_getattr(struct vfsmount *mnt, struct dentry *dentry, - struct kstat *stat) -{ - struct inode *inode = dentry->d_inode; - struct task_struct *task = get_proc_task(inode); - int rc; - - if (task == NULL) - return -ESRCH; - - rc = -EACCES; - if (lock_trace(task)) - goto out_task; - - generic_fillattr(inode, stat); - unlock_trace(task); - rc = 0; -out_task: - put_task_struct(task); - return rc; -} - static const struct inode_operations proc_pid_link_inode_operations = { .readlink = proc_pid_readlink, .follow_link = proc_pid_follow_link, .setattr = proc_setattr, }; -static const struct inode_operations proc_fdinfo_link_inode_operations = { - .setattr = proc_setattr, - .getattr = proc_pid_fd_link_getattr, -}; - -static const struct inode_operations proc_fd_link_inode_operations = { - .readlink = proc_pid_readlink, - .follow_link = proc_pid_follow_link, - .setattr = proc_setattr, - .getattr = proc_pid_fd_link_getattr, -}; - /* building an inode */ @@ -1923,61 +1889,49 @@ out: static int proc_fd_info(struct inode *inode, struct path *path, char *info) { - struct task_struct *task; - struct files_struct *files; + struct task_struct *task = get_proc_task(inode); + struct files_struct *files = NULL; struct file *file; int fd = proc_fd(inode); - int rc; - - task = get_proc_task(inode); - if (!task) - return -ENOENT; - - rc = -EACCES; - if (lock_trace(task)) - goto out_task; - - rc = -ENOENT; - files = get_files_struct(task); - if (files == NULL) - goto out_unlock; - /* - * We are not taking a ref to the file structure, so we must - * hold ->file_lock. 
- */ - spin_lock(&files->file_lock); - file = fcheck_files(files, fd); - if (file) { - unsigned int f_flags; - struct fdtable *fdt; - - fdt = files_fdtable(files); - f_flags = file->f_flags & ~O_CLOEXEC; - if (FD_ISSET(fd, fdt->close_on_exec)) - f_flags |= O_CLOEXEC; - - if (path) { - *path = file->f_path; - path_get(&file->f_path); + if (task) { + files = get_files_struct(task); + put_task_struct(task); + } + if (files) { + /* + * We are not taking a ref to the file structure, so we must + * hold ->file_lock. + */ + spin_lock(&files->file_lock); + file = fcheck_files(files, fd); + if (file) { + unsigned int f_flags; + struct fdtable *fdt; + + fdt = files_fdtable(files); + f_flags = file->f_flags & ~O_CLOEXEC; + if (FD_ISSET(fd, fdt->close_on_exec)) + f_flags |= O_CLOEXEC; + + if (path) { + *path = file->f_path; + path_get(&file->f_path); + } + if (info) + snprintf(info, PROC_FDINFO_MAX, + "pos:\t%lli\n" + "flags:\t0%o\n", + (long long) file->f_pos, + f_flags); + spin_unlock(&files->file_lock); + put_files_struct(files); + return 0; } - if (info) - snprintf(info, PROC_FDINFO_MAX, - "pos:\t%lli\n" - "flags:\t0%o\n", - (long long) file->f_pos, - f_flags); - rc = 0; - } else - rc = -ENOENT; - spin_unlock(&files->file_lock); - put_files_struct(files); - -out_unlock: - unlock_trace(task); -out_task: - put_task_struct(task); - return rc; + spin_unlock(&files->file_lock); + put_files_struct(files); + } + return -ENOENT; } static int proc_fd_link(struct inode *inode, struct path *path) @@ -2072,7 +2026,7 @@ static struct dentry *proc_fd_instantiate(struct inode *dir, spin_unlock(&files->file_lock); put_files_struct(files); - inode->i_op = &proc_fd_link_inode_operations; + inode->i_op = &proc_pid_link_inode_operations; inode->i_size = 64; ei->op.proc_get_link = proc_fd_link; d_set_d_op(dentry, &tid_fd_dentry_operations); @@ -2104,12 +2058,7 @@ static struct dentry *proc_lookupfd_common(struct inode *dir, if (fd == ~0U) goto out; - result = ERR_PTR(-EACCES); - if (lock_trace(task)) - goto out; - result = instantiate(dir, dentry, task, &fd); - unlock_trace(task); out: put_task_struct(task); out_no_task: @@ -2129,28 +2078,23 @@ static int proc_readfd_common(struct file * filp, void * dirent, retval = -ENOENT; if (!p) goto out_no_task; - - retval = -EACCES; - if (lock_trace(p)) - goto out; - retval = 0; fd = filp->f_pos; switch (fd) { case 0: if (filldir(dirent, ".", 1, 0, inode->i_ino, DT_DIR) < 0) - goto out_unlock; + goto out; filp->f_pos++; case 1: ino = parent_ino(dentry); if (filldir(dirent, "..", 2, 1, ino, DT_DIR) < 0) - goto out_unlock; + goto out; filp->f_pos++; default: files = get_files_struct(p); if (!files) - goto out_unlock; + goto out; rcu_read_lock(); for (fd = filp->f_pos-2; fd < files_fdtable(files)->max_fds; @@ -2174,9 +2118,6 @@ static int proc_readfd_common(struct file * filp, void * dirent, rcu_read_unlock(); put_files_struct(files); } - -out_unlock: - unlock_trace(p); out: put_task_struct(p); out_no_task: @@ -2254,7 +2195,6 @@ static struct dentry *proc_fdinfo_instantiate(struct inode *dir, ei->fd = fd; inode->i_mode = S_IFREG | S_IRUSR; inode->i_fop = &proc_fdinfo_file_operations; - inode->i_op = &proc_fdinfo_link_inode_operations; d_set_d_op(dentry, &tid_fd_dentry_operations); d_add(dentry, inode); /* Close the race of the process dying before we return the dentry */ diff --git a/fs/ubifs/debug.c b/fs/ubifs/debug.c index eef109a1a927..b09ba2dd8b62 100644 --- a/fs/ubifs/debug.c +++ b/fs/ubifs/debug.c @@ -870,6 +870,22 @@ void dbg_dump_lpt_info(struct ubifs_info *c) 
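proc_fd_info() above is what generates /proc/<pid>/fdinfo/<fd>: it looks up the struct file under ->file_lock and prints the current position and the open flags (with O_CLOEXEC folded back in from the fdtable) as "pos:" and "flags:" lines; the format string is visible in the hunk. The small user-space program below simply reads that file back for one of its own descriptors to show the output format; the file name and buffer sizes are arbitrary.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char path[64], buf[256];
        ssize_t n;
        int fd, info;

        fd = open("/etc/hostname", O_RDONLY | O_CLOEXEC);    /* any readable file */
        if (fd < 0)
            return 1;
        lseek(fd, 3, SEEK_SET);            /* move the offset so "pos:" is non-zero */

        snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
        info = open(path, O_RDONLY);
        if (info < 0)
            return 1;

        n = read(info, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* Typically prints something like:
             *   pos:    3
             *   flags:  02100000
             */
            fputs(buf, stdout);
        }
        close(info);
        close(fd);
        return 0;
    }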
spin_unlock(&dbg_lock); } +void dbg_dump_sleb(const struct ubifs_info *c, + const struct ubifs_scan_leb *sleb, int offs) +{ + struct ubifs_scan_node *snod; + + printk(KERN_DEBUG "(pid %d) start dumping scanned data from LEB %d:%d\n", + current->pid, sleb->lnum, offs); + + list_for_each_entry(snod, &sleb->nodes, list) { + cond_resched(); + printk(KERN_DEBUG "Dumping node at LEB %d:%d len %d\n", sleb->lnum, + snod->offs, snod->len); + dbg_dump_node(c, snod->node); + } +} + void dbg_dump_leb(const struct ubifs_info *c, int lnum) { struct ubifs_scan_leb *sleb; diff --git a/fs/ubifs/debug.h b/fs/ubifs/debug.h index feb361e252ac..8d9c46810189 100644 --- a/fs/ubifs/debug.h +++ b/fs/ubifs/debug.h @@ -269,6 +269,8 @@ void dbg_dump_lprop(const struct ubifs_info *c, const struct ubifs_lprops *lp); void dbg_dump_lprops(struct ubifs_info *c); void dbg_dump_lpt_info(struct ubifs_info *c); void dbg_dump_leb(const struct ubifs_info *c, int lnum); +void dbg_dump_sleb(const struct ubifs_info *c, + const struct ubifs_scan_leb *sleb, int offs); void dbg_dump_znode(const struct ubifs_info *c, const struct ubifs_znode *znode); void dbg_dump_heap(struct ubifs_info *c, struct ubifs_lpt_heap *heap, int cat); @@ -387,6 +389,9 @@ static inline void dbg_dump_lpt_info(struct ubifs_info *c) { return; } static inline void dbg_dump_leb(const struct ubifs_info *c, int lnum) { return; } static inline void +dbg_dump_sleb(const struct ubifs_info *c, + const struct ubifs_scan_leb *sleb, int offs) { return; } +static inline void dbg_dump_znode(const struct ubifs_info *c, const struct ubifs_znode *znode) { return; } static inline void dbg_dump_heap(struct ubifs_info *c, diff --git a/fs/ubifs/recovery.c b/fs/ubifs/recovery.c index af02790d9328..ee4f43f4bb99 100644 --- a/fs/ubifs/recovery.c +++ b/fs/ubifs/recovery.c @@ -983,7 +983,7 @@ int ubifs_recover_inl_heads(struct ubifs_info *c, void *sbuf) } /** - * clean_an_unclean_leb - read and write a LEB to remove corruption. + * clean_an_unclean_leb - read and write a LEB to remove corruption. * @c: UBIFS file-system description object * @ucleb: unclean LEB information * @sbuf: LEB-sized buffer to use diff --git a/fs/ubifs/sb.c b/fs/ubifs/sb.c index 93d938ad3d2a..6094c5a5d7a8 100644 --- a/fs/ubifs/sb.c +++ b/fs/ubifs/sb.c @@ -247,7 +247,7 @@ static int create_default_filesystem(struct ubifs_info *c) mst->total_dirty = cpu_to_le64(tmp64); /* The indexing LEB does not contribute to dark space */ - tmp64 = (c->main_lebs - 1) * c->dark_wm; + tmp64 = ((long long)(c->main_lebs - 1) * c->dark_wm); mst->total_dark = cpu_to_le64(tmp64); mst->total_used = cpu_to_le64(UBIFS_INO_NODE_SZ); diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 33b13310ee0c..574d4ee9b625 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -189,7 +189,7 @@ xfs_end_io( int error = 0; if (XFS_FORCED_SHUTDOWN(ip->i_mount)) { - error = -EIO; + ioend->io_error = -EIO; goto done; } if (ioend->io_error) diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 1a3513881bce..eac97ef81e2a 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -656,7 +656,7 @@ xfs_buf_item_committing( /* * This is the ops vector shared by all buf log items. 
*/ -static struct xfs_item_ops xfs_buf_item_ops = { +static const struct xfs_item_ops xfs_buf_item_ops = { .iop_size = xfs_buf_item_size, .iop_format = xfs_buf_item_format, .iop_pin = xfs_buf_item_pin, diff --git a/fs/xfs/xfs_dquot_item.c b/fs/xfs/xfs_dquot_item.c index bb3f71d236d2..0dee0b71029d 100644 --- a/fs/xfs/xfs_dquot_item.c +++ b/fs/xfs/xfs_dquot_item.c @@ -295,7 +295,7 @@ xfs_qm_dquot_logitem_committing( /* * This is the ops vector for dquots */ -static struct xfs_item_ops xfs_dquot_item_ops = { +static const struct xfs_item_ops xfs_dquot_item_ops = { .iop_size = xfs_qm_dquot_logitem_size, .iop_format = xfs_qm_dquot_logitem_format, .iop_pin = xfs_qm_dquot_logitem_pin, @@ -483,7 +483,7 @@ xfs_qm_qoff_logitem_committing( { } -static struct xfs_item_ops xfs_qm_qoffend_logitem_ops = { +static const struct xfs_item_ops xfs_qm_qoffend_logitem_ops = { .iop_size = xfs_qm_qoff_logitem_size, .iop_format = xfs_qm_qoff_logitem_format, .iop_pin = xfs_qm_qoff_logitem_pin, @@ -498,7 +498,7 @@ static struct xfs_item_ops xfs_qm_qoffend_logitem_ops = { /* * This is the ops vector shared by all quotaoff-start log items. */ -static struct xfs_item_ops xfs_qm_qoff_logitem_ops = { +static const struct xfs_item_ops xfs_qm_qoff_logitem_ops = { .iop_size = xfs_qm_qoff_logitem_size, .iop_format = xfs_qm_qoff_logitem_format, .iop_pin = xfs_qm_qoff_logitem_pin, diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c index d22e62623437..35c2aff38b20 100644 --- a/fs/xfs/xfs_extfree_item.c +++ b/fs/xfs/xfs_extfree_item.c @@ -217,7 +217,7 @@ xfs_efi_item_committing( /* * This is the ops vector shared by all efi log items. */ -static struct xfs_item_ops xfs_efi_item_ops = { +static const struct xfs_item_ops xfs_efi_item_ops = { .iop_size = xfs_efi_item_size, .iop_format = xfs_efi_item_format, .iop_pin = xfs_efi_item_pin, @@ -477,7 +477,7 @@ xfs_efd_item_committing( /* * This is the ops vector shared by all efd log items. */ -static struct xfs_item_ops xfs_efd_item_ops = { +static const struct xfs_item_ops xfs_efd_item_ops = { .iop_size = xfs_efd_item_size, .iop_format = xfs_efd_item_format, .iop_pin = xfs_efd_item_pin, diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c index b7cf21ba240f..abaafdbb3e65 100644 --- a/fs/xfs/xfs_inode_item.c +++ b/fs/xfs/xfs_inode_item.c @@ -795,7 +795,7 @@ xfs_inode_item_committing( /* * This is the ops vector shared by all buf log items. 
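The xfs hunks above (and the xfs_inode_item.c one this comment introduces) constify every statically defined xfs_item_ops table and switch xfs_log_item_init() and li_ops over to const struct xfs_item_ops *, which lets the compiler place the function-pointer vectors in read-only data. The same pattern in miniature, with made-up demo_* names:

    #include <stdio.h>

    struct demo_item;

    struct demo_ops {
        unsigned int (*size)(struct demo_item *);
        void (*push)(struct demo_item *);
    };

    struct demo_item {
        const struct demo_ops *ops;    /* const pointer: the table is read-only */
    };

    static unsigned int demo_size(struct demo_item *item) { (void)item; return 42; }
    static void demo_push(struct demo_item *item) { (void)item; }

    /* 'static const' lets the linker place the table in .rodata. */
    static const struct demo_ops demo_item_ops = {
        .size = demo_size,
        .push = demo_push,
    };

    static void item_init(struct demo_item *item, const struct demo_ops *ops)
    {
        item->ops = ops;
    }

    int main(void)
    {
        struct demo_item item;

        item_init(&item, &demo_item_ops);
        printf("%u\n", item.ops->size(&item));
        return 0;
    }

Read-only ops tables cannot be scribbled on at run time, and beyond the pointer type nothing about the callers changes, which is why the sweep is mechanical.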
*/ -static struct xfs_item_ops xfs_inode_item_ops = { +static const struct xfs_item_ops xfs_inode_item_ops = { .iop_size = xfs_inode_item_size, .iop_format = xfs_inode_item_format, .iop_pin = xfs_inode_item_pin, diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index 2758a6277c52..a14cd89fe465 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -626,7 +626,7 @@ xfs_log_item_init( struct xfs_mount *mp, struct xfs_log_item *item, int type, - struct xfs_item_ops *ops) + const struct xfs_item_ops *ops) { item->li_mountp = mp; item->li_ailp = mp->m_ail; diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h index 78c9039994af..3f7bf451c034 100644 --- a/fs/xfs/xfs_log.h +++ b/fs/xfs/xfs_log.h @@ -137,7 +137,7 @@ struct xfs_trans; void xfs_log_item_init(struct xfs_mount *mp, struct xfs_log_item *item, int type, - struct xfs_item_ops *ops); + const struct xfs_item_ops *ops); xfs_lsn_t xfs_log_done(struct xfs_mount *mp, struct xlog_ticket *ticket, diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h index 603f3eb52041..3ae713c0abd9 100644 --- a/fs/xfs/xfs_trans.h +++ b/fs/xfs/xfs_trans.h @@ -326,7 +326,7 @@ typedef struct xfs_log_item { struct xfs_log_item *); /* buffer item iodone */ /* callback func */ - struct xfs_item_ops *li_ops; /* function list */ + const struct xfs_item_ops *li_ops; /* function list */ /* delayed logging */ struct list_head li_cil; /* CIL pointers */ @@ -341,7 +341,7 @@ typedef struct xfs_log_item { { XFS_LI_IN_AIL, "IN_AIL" }, \ { XFS_LI_ABORTED, "ABORTED" } -typedef struct xfs_item_ops { +struct xfs_item_ops { uint (*iop_size)(xfs_log_item_t *); void (*iop_format)(xfs_log_item_t *, struct xfs_log_iovec *); void (*iop_pin)(xfs_log_item_t *); @@ -352,7 +352,7 @@ typedef struct xfs_item_ops { void (*iop_push)(xfs_log_item_t *); bool (*iop_pushbuf)(xfs_log_item_t *); void (*iop_committing)(xfs_log_item_t *, xfs_lsn_t); -} xfs_item_ops_t; +}; #define IOP_SIZE(ip) (*(ip)->li_ops->iop_size)(ip) #define IOP_FORMAT(ip,vp) (*(ip)->li_ops->iop_format)(ip, vp) diff --git a/fs/xfs/xfs_vnodeops.c b/fs/xfs/xfs_vnodeops.c index 4ecf2a549060..ce9268a2f56b 100644 --- a/fs/xfs/xfs_vnodeops.c +++ b/fs/xfs/xfs_vnodeops.c @@ -112,7 +112,7 @@ xfs_readlink( char *link) { xfs_mount_t *mp = ip->i_mount; - int pathlen; + xfs_fsize_t pathlen; int error = 0; trace_xfs_readlink(ip); @@ -122,13 +122,19 @@ xfs_readlink( xfs_ilock(ip, XFS_ILOCK_SHARED); - ASSERT(S_ISLNK(ip->i_d.di_mode)); - ASSERT(ip->i_d.di_size <= MAXPATHLEN); - pathlen = ip->i_d.di_size; if (!pathlen) goto out; + if (pathlen < 0 || pathlen > MAXPATHLEN) { + xfs_alert(mp, "%s: inode (%llu) bad symlink length (%lld)", + __func__, (unsigned long long) ip->i_ino, + (long long) pathlen); + ASSERT(0); + return XFS_ERROR(EFSCORRUPTED); + } + + if (ip->i_df.if_flags & XFS_IFINLINE) { memcpy(link, ip->i_df.if_u1.if_data, pathlen); link[pathlen] = '\0'; diff --git a/include/acpi/acpi_drivers.h b/include/acpi/acpi_drivers.h index e49c36d38d7e..bb145e4b935e 100644 --- a/include/acpi/acpi_drivers.h +++ b/include/acpi/acpi_drivers.h @@ -144,7 +144,7 @@ static inline void unregister_dock_notifier(struct notifier_block *nb) { } static inline int register_hotplug_dock_device(acpi_handle handle, - struct acpi_dock_ops *ops, + const struct acpi_dock_ops *ops, void *context) { return -ENODEV; diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h index b67231bef632..ed73f6705c86 100644 --- a/include/acpi/actypes.h +++ b/include/acpi/actypes.h @@ -470,7 +470,6 @@ typedef u64 acpi_integer; */ #define ACPI_FULL_INITIALIZATION 0x00 #define 
ACPI_NO_ADDRESS_SPACE_INIT 0x01 -#define ACPI_NO_HARDWARE_INIT 0x02 #define ACPI_NO_EVENT_INIT 0x04 #define ACPI_NO_HANDLER_INIT 0x08 #define ACPI_NO_ACPI_ENABLE 0x10 diff --git a/include/acpi/processor.h b/include/acpi/processor.h index 67055f180330..610f6fb1bbc2 100644 --- a/include/acpi/processor.h +++ b/include/acpi/processor.h @@ -329,6 +329,7 @@ extern void acpi_processor_throttling_init(void); int acpi_processor_power_init(struct acpi_processor *pr, struct acpi_device *device); int acpi_processor_cst_has_changed(struct acpi_processor *pr); +int acpi_processor_hotplug(struct acpi_processor *pr); int acpi_processor_power_exit(struct acpi_processor *pr, struct acpi_device *device); int acpi_processor_suspend(struct acpi_device * device, pm_message_t state); diff --git a/include/drm/drmP.h b/include/drm/drmP.h index cf399495d38f..1f9e9516e2b7 100644 --- a/include/drm/drmP.h +++ b/include/drm/drmP.h @@ -990,7 +990,9 @@ struct drm_minor { struct proc_dir_entry *proc_root; /**< proc directory entry */ struct drm_info_node proc_nodes; struct dentry *debugfs_root; - struct drm_info_node debugfs_nodes; + + struct list_head debugfs_list; + struct mutex debugfs_lock; /* Protects debugfs_list. */ struct drm_master *master; /* currently active master for this node */ struct list_head master_list; diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h index 0d2f727e96be..93df2d72750b 100644 --- a/include/drm/drm_dp_helper.h +++ b/include/drm/drm_dp_helper.h @@ -72,6 +72,7 @@ #define DP_MAIN_LINK_CHANNEL_CODING 0x006 +#define DP_EDP_CONFIGURATION_CAP 0x00d #define DP_TRAINING_AUX_RD_INTERVAL 0x00e #define DP_PSR_SUPPORT 0x070 @@ -159,6 +160,8 @@ # define DP_CP_IRQ (1 << 2) # define DP_SINK_SPECIFIC_IRQ (1 << 6) +#define DP_EDP_CONFIGURATION_SET 0x10a + #define DP_LANE0_1_STATUS 0x202 #define DP_LANE2_3_STATUS 0x203 # define DP_LANE_CR_DONE (1 << 0) diff --git a/include/drm/drm_mode.h b/include/drm/drm_mode.h index c4961ea50a49..d30bedfeb7ef 100644 --- a/include/drm/drm_mode.h +++ b/include/drm/drm_mode.h @@ -120,11 +120,12 @@ struct drm_mode_crtc { struct drm_mode_modeinfo mode; }; -#define DRM_MODE_ENCODER_NONE 0 -#define DRM_MODE_ENCODER_DAC 1 -#define DRM_MODE_ENCODER_TMDS 2 -#define DRM_MODE_ENCODER_LVDS 3 -#define DRM_MODE_ENCODER_TVDAC 4 +#define DRM_MODE_ENCODER_NONE 0 +#define DRM_MODE_ENCODER_DAC 1 +#define DRM_MODE_ENCODER_TMDS 2 +#define DRM_MODE_ENCODER_LVDS 3 +#define DRM_MODE_ENCODER_TVDAC 4 +#define DRM_MODE_ENCODER_VIRTUAL 5 struct drm_mode_get_encoder { __u32 encoder_id; @@ -162,6 +163,7 @@ struct drm_mode_get_encoder { #define DRM_MODE_CONNECTOR_HDMIB 12 #define DRM_MODE_CONNECTOR_TV 13 #define DRM_MODE_CONNECTOR_eDP 14 +#define DRM_MODE_CONNECTOR_VIRTUAL 15 struct drm_mode_get_connector { diff --git a/include/drm/exynos_drm.h b/include/drm/exynos_drm.h index 874c4d271328..1d161cb3aca5 100644 --- a/include/drm/exynos_drm.h +++ b/include/drm/exynos_drm.h @@ -36,11 +36,13 @@ * - this size value would be page-aligned internally. * @flags: user request for setting memory type or cache attributes. * @handle: returned handle for the object. + * @pad: just padding to be 64-bit aligned. 
*/ struct drm_exynos_gem_create { unsigned int size; unsigned int flags; unsigned int handle; + unsigned int pad; }; /** diff --git a/include/drm/vmwgfx_drm.h b/include/drm/vmwgfx_drm.h index cd7cd8162ed6..bcb0912afe7a 100644 --- a/include/drm/vmwgfx_drm.h +++ b/include/drm/vmwgfx_drm.h @@ -54,7 +54,7 @@ #define DRM_VMW_FENCE_EVENT 17 #define DRM_VMW_PRESENT 18 #define DRM_VMW_PRESENT_READBACK 19 - +#define DRM_VMW_UPDATE_LAYOUT 20 /*************************************************************************/ /** @@ -552,31 +552,6 @@ struct drm_vmw_get_3d_cap_arg { /*************************************************************************/ /** - * DRM_VMW_UPDATE_LAYOUT - Update layout - * - * Updates the preferred modes and connection status for connectors. The - * command conisits of one drm_vmw_update_layout_arg pointing out a array - * of num_outputs drm_vmw_rect's. - */ - -/** - * struct drm_vmw_update_layout_arg - * - * @num_outputs: number of active - * @rects: pointer to array of drm_vmw_rect - * - * Input argument to the DRM_VMW_UPDATE_LAYOUT Ioctl. - */ - -struct drm_vmw_update_layout_arg { - uint32_t num_outputs; - uint32_t pad64; - uint64_t rects; -}; - - -/*************************************************************************/ -/** * DRM_VMW_FENCE_WAIT * * Waits for a fence object to signal. The wait is interruptible, so that @@ -788,4 +763,28 @@ struct drm_vmw_present_readback_arg { uint64_t clips_ptr; uint64_t fence_rep; }; + +/*************************************************************************/ +/** + * DRM_VMW_UPDATE_LAYOUT - Update layout + * + * Updates the preferred modes and connection status for connectors. The + * command consists of one drm_vmw_update_layout_arg pointing to an array + * of num_outputs drm_vmw_rect's. + */ + +/** + * struct drm_vmw_update_layout_arg + * + * @num_outputs: number of active connectors + * @rects: pointer to array of drm_vmw_rect cast to an uint64_t + * + * Input argument to the DRM_VMW_UPDATE_LAYOUT Ioctl. + */ +struct drm_vmw_update_layout_arg { + uint32_t num_outputs; + uint32_t pad64; + uint64_t rects; +}; + #endif diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h index 583baf22cad2..7408af843b8a 100644 --- a/include/linux/cpuidle.h +++ b/include/linux/cpuidle.h @@ -23,57 +23,62 @@ struct module; struct cpuidle_device; +struct cpuidle_driver; /**************************** * CPUIDLE DEVICE INTERFACE * ****************************/ +struct cpuidle_state_usage { + void *driver_data; + + unsigned long long usage; + unsigned long long time; /* in US */ +}; + struct cpuidle_state { char name[CPUIDLE_NAME_LEN]; char desc[CPUIDLE_DESC_LEN]; - void *driver_data; unsigned int flags; unsigned int exit_latency; /* in US */ unsigned int power_usage; /* in mW */ unsigned int target_residency; /* in US */ - unsigned long long usage; - unsigned long long time; /* in US */ - int (*enter) (struct cpuidle_device *dev, - struct cpuidle_state *state); + struct cpuidle_driver *drv, + int index); }; /* Idle State Flags */ #define CPUIDLE_FLAG_TIME_VALID (0x01) /* is residency time measurable? 
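The cpuidle.h diff that continues below splits what used to be a single per-device struct cpuidle_state into static state descriptions, which move into struct cpuidle_driver, and per-CPU statistics in the new struct cpuidle_state_usage; the ->enter() callback correspondingly receives the device, the driver and a state index instead of a state pointer. Here is a skeleton of what a platform idle driver's state table and enter hook could look like under the new layout; every name and number in it is invented, only the field layout follows the header changes shown here.

    #include <linux/cpuidle.h>
    #include <linux/module.h>

    static int demo_enter(struct cpuidle_device *dev,
                          struct cpuidle_driver *drv, int index)
    {
        /* A real driver executes the low-power instruction for
         * drv->states[index] here and measures how long it slept. */
        dev->last_residency = drv->states[index].target_residency;
        return index;    /* index of the state actually entered */
    }

    static struct cpuidle_driver demo_idle_driver = {
        .name             = "demo_idle",
        .owner            = THIS_MODULE,
        .state_count      = 2,
        .safe_state_index = 0,
        .states = {
            {    /* shallow state; all numbers are made up */
                .name             = "C1",
                .desc             = "demo clock gating",
                .flags            = CPUIDLE_FLAG_TIME_VALID,
                .exit_latency     = 1,      /* us */
                .target_residency = 2,      /* us */
                .enter            = demo_enter,
            },
            {    /* deeper state */
                .name             = "C2",
                .desc             = "demo power gating",
                .flags            = CPUIDLE_FLAG_TIME_VALID,
                .exit_latency     = 100,
                .target_residency = 300,
                .enter            = demo_enter,
            },
        },
    };

    static int __init demo_idle_init(void)
    {
        /* A real driver would also register a cpuidle_device per CPU. */
        return cpuidle_register_driver(&demo_idle_driver);
    }
    module_init(demo_idle_init);
    MODULE_LICENSE("GPL");

Because the state table now lives in the driver, it is defined once system-wide instead of being copied into every per-CPU device, while the usage counters stay per CPU in states_usage[].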
*/ -#define CPUIDLE_FLAG_IGNORE (0x100) /* ignore during this idle period */ #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000) /** * cpuidle_get_statedata - retrieves private driver state data - * @state: the state + * @st_usage: the state usage statistics */ -static inline void * cpuidle_get_statedata(struct cpuidle_state *state) +static inline void *cpuidle_get_statedata(struct cpuidle_state_usage *st_usage) { - return state->driver_data; + return st_usage->driver_data; } /** * cpuidle_set_statedata - stores private driver state data - * @state: the state + * @st_usage: the state usage statistics * @data: the private data */ static inline void -cpuidle_set_statedata(struct cpuidle_state *state, void *data) +cpuidle_set_statedata(struct cpuidle_state_usage *st_usage, void *data) { - state->driver_data = data; + st_usage->driver_data = data; } struct cpuidle_state_kobj { struct cpuidle_state *state; + struct cpuidle_state_usage *state_usage; struct completion kobj_unregister; struct kobject kobj; }; @@ -81,22 +86,17 @@ struct cpuidle_state_kobj { struct cpuidle_device { unsigned int registered:1; unsigned int enabled:1; - unsigned int power_specified:1; unsigned int cpu; int last_residency; int state_count; - struct cpuidle_state states[CPUIDLE_STATE_MAX]; + struct cpuidle_state_usage states_usage[CPUIDLE_STATE_MAX]; struct cpuidle_state_kobj *kobjs[CPUIDLE_STATE_MAX]; - struct cpuidle_state *last_state; struct list_head device_list; struct kobject kobj; struct completion kobj_unregister; void *governor_data; - struct cpuidle_state *safe_state; - - int (*prepare) (struct cpuidle_device *dev); }; DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices); @@ -120,6 +120,11 @@ static inline int cpuidle_get_last_residency(struct cpuidle_device *dev) struct cpuidle_driver { char name[CPUIDLE_NAME_LEN]; struct module *owner; + + unsigned int power_specified:1; + struct cpuidle_state states[CPUIDLE_STATE_MAX]; + int state_count; + int safe_state_index; }; #ifdef CONFIG_CPU_IDLE @@ -166,11 +171,14 @@ struct cpuidle_governor { struct list_head governor_list; unsigned int rating; - int (*enable) (struct cpuidle_device *dev); - void (*disable) (struct cpuidle_device *dev); + int (*enable) (struct cpuidle_driver *drv, + struct cpuidle_device *dev); + void (*disable) (struct cpuidle_driver *drv, + struct cpuidle_device *dev); - int (*select) (struct cpuidle_device *dev); - void (*reflect) (struct cpuidle_device *dev); + int (*select) (struct cpuidle_driver *drv, + struct cpuidle_device *dev); + void (*reflect) (struct cpuidle_device *dev, int index); struct module *owner; }; diff --git a/include/linux/devfreq.h b/include/linux/devfreq.h index afb94583960c..98ce8124b1cc 100644 --- a/include/linux/devfreq.h +++ b/include/linux/devfreq.h @@ -41,7 +41,7 @@ struct devfreq_dev_status { unsigned long total_time; unsigned long busy_time; unsigned long current_frequency; - void *private_date; + void *private_data; }; /** diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index 45f00b61c096..de33de1e2052 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h @@ -1097,10 +1097,12 @@ struct ethtool_ops { #define SPEED_1000 1000 #define SPEED_2500 2500 #define SPEED_10000 10000 +#define SPEED_UNKNOWN -1 /* Duplex, half or full. */ #define DUPLEX_HALF 0x00 #define DUPLEX_FULL 0x01 +#define DUPLEX_UNKNOWN 0xff /* Which connector port. 
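SPEED_UNKNOWN and DUPLEX_UNKNOWN above give drivers a defined way to report "no link / not negotiated" through ethtool instead of a made-up 0 or -1 of their own. A hedged sketch of how a driver's get_settings() might use them when the carrier is down; the supported/advertised bits and the demo_* names are invented for the example, and ethtool_cmd_speed_set() is the existing helper for the split speed fields.

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static int demo_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
    {
        cmd->supported   = SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg;
        cmd->advertising = ADVERTISED_1000baseT_Full | ADVERTISED_Autoneg;
        cmd->port        = PORT_TP;
        cmd->autoneg     = AUTONEG_ENABLE;

        if (netif_carrier_ok(dev)) {
            ethtool_cmd_speed_set(cmd, SPEED_1000);
            cmd->duplex = DUPLEX_FULL;
        } else {
            /* Link down: report the new "don't know" values. */
            ethtool_cmd_speed_set(cmd, SPEED_UNKNOWN);
            cmd->duplex = DUPLEX_UNKNOWN;
        }
        return 0;
    }

    static const struct ethtool_ops demo_ethtool_ops = {
        .get_settings = demo_get_settings,
    };

The ops table is assigned to the net_device's ethtool_ops at probe time; with the shared constants, user space can report the speed as unknown instead of printing a junk value.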
*/ #define PORT_TP 0x00 diff --git a/include/linux/hwspinlock.h b/include/linux/hwspinlock.h index 08a2fee40659..aad6bd4b3efd 100644 --- a/include/linux/hwspinlock.h +++ b/include/linux/hwspinlock.h @@ -118,7 +118,6 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags) static inline void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags) { - return 0; } static inline int hwspin_lock_get_id(struct hwspinlock *hwlock) diff --git a/include/linux/mfd/wm8994/registers.h b/include/linux/mfd/wm8994/registers.h index fae295048a8b..83a9caec0e43 100644 --- a/include/linux/mfd/wm8994/registers.h +++ b/include/linux/mfd/wm8994/registers.h @@ -1963,6 +1963,21 @@ #define WM8958_MICB2_DISCH_WIDTH 1 /* MICB2_DISCH */ /* + * R210 (0xD2) - Mic Detect 3 + */ +#define WM8958_MICD_LVL_MASK 0x07FC /* MICD_LVL - [10:2] */ +#define WM8958_MICD_LVL_SHIFT 2 /* MICD_LVL - [10:2] */ +#define WM8958_MICD_LVL_WIDTH 9 /* MICD_LVL - [10:2] */ +#define WM8958_MICD_VALID 0x0002 /* MICD_VALID */ +#define WM8958_MICD_VALID_MASK 0x0002 /* MICD_VALID */ +#define WM8958_MICD_VALID_SHIFT 1 /* MICD_VALID */ +#define WM8958_MICD_VALID_WIDTH 1 /* MICD_VALID */ +#define WM8958_MICD_STS 0x0001 /* MICD_STS */ +#define WM8958_MICD_STS_MASK 0x0001 /* MICD_STS */ +#define WM8958_MICD_STS_SHIFT 0 /* MICD_STS */ +#define WM8958_MICD_STS_WIDTH 1 /* MICD_STS */ + +/* * R76 (0x4C) - Charge Pump (1) */ #define WM8994_CP_ENA 0x8000 /* CP_ENA */ diff --git a/include/linux/mtd/bbm.h b/include/linux/mtd/bbm.h index 57cc0e63714f..c4eec228eef9 100644 --- a/include/linux/mtd/bbm.h +++ b/include/linux/mtd/bbm.h @@ -86,24 +86,39 @@ struct nand_bbt_descr { #define NAND_BBT_VERSION 0x00000100 /* Create a bbt if none exists */ #define NAND_BBT_CREATE 0x00000200 +/* + * Create an empty BBT with no vendor information. Vendor's information may be + * unavailable, for example, if the NAND controller has a different data and OOB + * layout or if this information is already purged. Must be used in conjunction + * with NAND_BBT_CREATE. + */ +#define NAND_BBT_CREATE_EMPTY 0x00000400 /* Search good / bad pattern through all pages of a block */ -#define NAND_BBT_SCANALLPAGES 0x00000400 +#define NAND_BBT_SCANALLPAGES 0x00000800 /* Scan block empty during good / bad block scan */ -#define NAND_BBT_SCANEMPTY 0x00000800 +#define NAND_BBT_SCANEMPTY 0x00001000 /* Write bbt if neccecary */ -#define NAND_BBT_WRITE 0x00001000 +#define NAND_BBT_WRITE 0x00002000 /* Read and write back block contents when writing bbt */ -#define NAND_BBT_SAVECONTENT 0x00002000 +#define NAND_BBT_SAVECONTENT 0x00004000 /* Search good / bad pattern on the first and the second page */ -#define NAND_BBT_SCAN2NDPAGE 0x00004000 +#define NAND_BBT_SCAN2NDPAGE 0x00008000 /* Search good / bad pattern on the last page of the eraseblock */ -#define NAND_BBT_SCANLASTPAGE 0x00008000 -/* Chip stores bad block marker on BOTH 1st and 6th bytes of OOB */ -#define NAND_BBT_SCANBYTE1AND6 0x00100000 -/* The nand_bbt_descr was created dynamicaly and must be freed */ -#define NAND_BBT_DYNAMICSTRUCT 0x00200000 -/* The bad block table does not OOB for marker */ -#define NAND_BBT_NO_OOB 0x00400000 +#define NAND_BBT_SCANLASTPAGE 0x00010000 +/* + * Use a flash based bad block table. By default, OOB identifier is saved in + * OOB area. This option is passed to the default bad block table function. 
+ */ +#define NAND_BBT_USE_FLASH 0x00020000 +/* Do not store flash based bad block table in OOB area; store it in-band */ +#define NAND_BBT_NO_OOB 0x00040000 + +/* + * Flag set by nand_create_default_bbt_descr(), marking that the nand_bbt_descr + * was allocated dynamicaly and must be freed in nand_release(). Has no meaning + * in nand_chip.bbt_options. + */ +#define NAND_BBT_DYNAMICSTRUCT 0x80000000 /* The maximum number of blocks to scan for a bbt */ #define NAND_BBT_SCAN_MAXBLOCKS 4 diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h index 37be05bbfbc8..9f5b312af783 100644 --- a/include/linux/mtd/mtd.h +++ b/include/linux/mtd/mtd.h @@ -32,17 +32,19 @@ #define MTD_CHAR_MAJOR 90 #define MTD_BLOCK_MAJOR 31 -#define MTD_ERASE_PENDING 0x01 +#define MTD_ERASE_PENDING 0x01 #define MTD_ERASING 0x02 #define MTD_ERASE_SUSPEND 0x04 -#define MTD_ERASE_DONE 0x08 -#define MTD_ERASE_FAILED 0x10 +#define MTD_ERASE_DONE 0x08 +#define MTD_ERASE_FAILED 0x10 #define MTD_FAIL_ADDR_UNKNOWN -1LL -/* If the erase fails, fail_addr might indicate exactly which block failed. If - fail_addr = MTD_FAIL_ADDR_UNKNOWN, the failure was not at the device level or was not - specific to any particular block. */ +/* + * If the erase fails, fail_addr might indicate exactly which block failed. If + * fail_addr = MTD_FAIL_ADDR_UNKNOWN, the failure was not at the device level + * or was not specific to any particular block. + */ struct erase_info { struct mtd_info *mtd; uint64_t addr; @@ -59,26 +61,12 @@ struct erase_info { }; struct mtd_erase_region_info { - uint64_t offset; /* At which this region starts, from the beginning of the MTD */ + uint64_t offset; /* At which this region starts, from the beginning of the MTD */ uint32_t erasesize; /* For this region */ uint32_t numblocks; /* Number of blocks of erasesize in this region */ unsigned long *lockmap; /* If keeping bitmap of locks */ }; -/* - * oob operation modes - * - * MTD_OOB_PLACE: oob data are placed at the given offset - * MTD_OOB_AUTO: oob data are automatically placed at the free areas - * which are defined by the ecclayout - * MTD_OOB_RAW: mode to read oob and data without doing ECC checking - */ -typedef enum { - MTD_OOB_PLACE, - MTD_OOB_AUTO, - MTD_OOB_RAW, -} mtd_oob_mode_t; - /** * struct mtd_oob_ops - oob operation operands * @mode: operation mode @@ -90,7 +78,7 @@ typedef enum { * @ooblen: number of oob bytes to write/read * @oobretlen: number of oob bytes written/read * @ooboffs: offset of oob data in the oob area (only relevant when - * mode = MTD_OOB_PLACE) + * mode = MTD_OPS_PLACE_OOB or MTD_OPS_RAW) * @datbuf: data buffer - if NULL only oob data are read/written * @oobbuf: oob data buffer * @@ -99,7 +87,7 @@ typedef enum { * OOB area. */ struct mtd_oob_ops { - mtd_oob_mode_t mode; + unsigned int mode; size_t len; size_t retlen; size_t ooblen; @@ -173,7 +161,7 @@ struct mtd_info { const char *name; int index; - /* ecc layout structure pointer - read only ! */ + /* ECC layout structure pointer - read only! */ struct nand_ecclayout *ecclayout; /* Data for variable erase regions. 
If numeraseregions is zero, @@ -324,10 +312,15 @@ static inline uint32_t mtd_mod_by_ws(uint64_t sz, struct mtd_info *mtd) /* Kernel-side ioctl definitions */ struct mtd_partition; - -extern int mtd_device_register(struct mtd_info *master, - const struct mtd_partition *parts, - int nr_parts); +struct mtd_part_parser_data; + +extern int mtd_device_parse_register(struct mtd_info *mtd, + const char **part_probe_types, + struct mtd_part_parser_data *parser_data, + const struct mtd_partition *defparts, + int defnr_parts); +#define mtd_device_register(master, parts, nr_parts) \ + mtd_device_parse_register(master, NULL, NULL, parts, nr_parts) extern int mtd_device_unregister(struct mtd_info *master); extern struct mtd_info *get_mtd_device(struct mtd_info *mtd, int num); extern int __get_mtd_device(struct mtd_info *mtd); @@ -356,27 +349,16 @@ void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size); void mtd_erase_callback(struct erase_info *instr); -/* - * Debugging macro and defines - */ -#define MTD_DEBUG_LEVEL0 (0) /* Quiet */ -#define MTD_DEBUG_LEVEL1 (1) /* Audible */ -#define MTD_DEBUG_LEVEL2 (2) /* Loud */ -#define MTD_DEBUG_LEVEL3 (3) /* Noisy */ - -#ifdef CONFIG_MTD_DEBUG -#define DEBUG(n, args...) \ - do { \ - if (n <= CONFIG_MTD_DEBUG_VERBOSE) \ - printk(KERN_INFO args); \ - } while(0) -#else /* CONFIG_MTD_DEBUG */ -#define DEBUG(n, args...) \ - do { \ - if (0) \ - printk(KERN_INFO args); \ - } while(0) - -#endif /* CONFIG_MTD_DEBUG */ +static inline int mtd_is_bitflip(int err) { + return err == -EUCLEAN; +} + +static inline int mtd_is_eccerr(int err) { + return err == -EBADMSG; +} + +static inline int mtd_is_bitflip_or_eccerr(int err) { + return mtd_is_bitflip(err) || mtd_is_eccerr(err); +} #endif /* __MTD_MTD_H__ */ diff --git a/include/linux/mtd/nand.h b/include/linux/mtd/nand.h index c2b9ac4fbc4a..904131bab501 100644 --- a/include/linux/mtd/nand.h +++ b/include/linux/mtd/nand.h @@ -42,10 +42,10 @@ extern void nand_release(struct mtd_info *mtd); /* Internal helper for board drivers which need to override command function */ extern void nand_wait_ready(struct mtd_info *mtd); -/* locks all blockes present in the device */ +/* locks all blocks present in the device */ extern int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len); -/* unlocks specified locked blockes */ +/* unlocks specified locked blocks */ extern int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len); /* The maximum number of NAND chips in an array */ @@ -150,7 +150,7 @@ typedef enum { #define NAND_ECC_READ 0 /* Reset Hardware ECC for write */ #define NAND_ECC_WRITE 1 -/* Enable Hardware ECC before syndrom is read back from flash */ +/* Enable Hardware ECC before syndrome is read back from flash */ #define NAND_ECC_READSYN 2 /* Bit mask for flags passed to do_nand_read_ecc */ @@ -163,7 +163,7 @@ typedef enum { */ /* Chip can not auto increment pages */ #define NAND_NO_AUTOINCR 0x00000001 -/* Buswitdh is 16 bit */ +/* Buswidth is 16 bit */ #define NAND_BUSWIDTH_16 0x00000002 /* Device supports partial programming without padding */ #define NAND_NO_PADDING 0x00000004 @@ -219,27 +219,15 @@ typedef enum { #define NAND_CHIPOPTIONS_MSK (0x0000ffff & ~NAND_NO_AUTOINCR) /* Non chip related options */ -/* - * Use a flash based bad block table. OOB identifier is saved in OOB area. - * This option is passed to the default bad block table function. - */ -#define NAND_USE_FLASH_BBT 0x00010000 /* This option skips the bbt scan during initialization. 
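The mtd.h changes above drop the mtd_oob_mode_t enum in favour of the plain MTD_OPS_* values (added to mtd-abi.h further down in this patch) and introduce mtd_is_bitflip()/mtd_is_eccerr() so callers no longer open-code -EUCLEAN/-EBADMSG comparisons. A hedged sketch of a caller using both; the function name and surrounding driver context are invented, and the direct mtd->read_oob() call style reflects this kernel generation.

#include <linux/mtd/mtd.h>

static int example_read_with_oob(struct mtd_info *mtd, loff_t from,
				 u_char *buf, u_char *oobbuf)
{
	struct mtd_oob_ops ops = {
		.mode	= MTD_OPS_AUTO_OOB,	/* formerly MTD_OOB_AUTO */
		.len	= mtd->writesize,
		.ooblen	= mtd->oobavail,
		.datbuf	= buf,
		.oobbuf	= oobbuf,
	};
	int ret = mtd->read_oob(mtd, from, &ops);

	if (mtd_is_bitflip(ret))
		return 0;	/* -EUCLEAN: bitflips were corrected, data is good */
	if (mtd_is_eccerr(ret))
		return ret;	/* -EBADMSG: uncorrectable, caller must handle */
	return ret;
}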
*/ -#define NAND_SKIP_BBTSCAN 0x00020000 +#define NAND_SKIP_BBTSCAN 0x00010000 /* * This option is defined if the board driver allocates its own buffers * (e.g. because it needs them DMA-coherent). */ -#define NAND_OWN_BUFFERS 0x00040000 +#define NAND_OWN_BUFFERS 0x00020000 /* Chip may not exist, so silence any errors in scan */ -#define NAND_SCAN_SILENT_NODEV 0x00080000 -/* - * If passed additionally to NAND_USE_FLASH_BBT then BBT code will not touch - * the OOB area. - */ -#define NAND_USE_FLASH_BBT_NO_OOB 0x00800000 -/* Create an empty BBT with no vendor information if the BBT is available */ -#define NAND_CREATE_EMPTY_BBT 0x01000000 +#define NAND_SCAN_SILENT_NODEV 0x00040000 /* Options set by nand scan */ /* Nand scan has allocated controller struct */ @@ -331,27 +319,29 @@ struct nand_hw_control { }; /** - * struct nand_ecc_ctrl - Control structure for ecc - * @mode: ecc mode - * @steps: number of ecc steps per page - * @size: data bytes per ecc step - * @bytes: ecc bytes per step - * @total: total number of ecc bytes per page - * @prepad: padding information for syndrome based ecc generators - * @postpad: padding information for syndrome based ecc generators + * struct nand_ecc_ctrl - Control structure for ECC + * @mode: ECC mode + * @steps: number of ECC steps per page + * @size: data bytes per ECC step + * @bytes: ECC bytes per step + * @total: total number of ECC bytes per page + * @prepad: padding information for syndrome based ECC generators + * @postpad: padding information for syndrome based ECC generators * @layout: ECC layout control struct pointer - * @priv: pointer to private ecc control data - * @hwctl: function to control hardware ecc generator. Must only + * @priv: pointer to private ECC control data + * @hwctl: function to control hardware ECC generator. Must only * be provided if an hardware ECC is available - * @calculate: function for ecc calculation or readback from ecc hardware - * @correct: function for ecc correction, matching to ecc generator (sw/hw) + * @calculate: function for ECC calculation or readback from ECC hardware + * @correct: function for ECC correction, matching to ECC generator (sw/hw) * @read_page_raw: function to read a raw page without ECC * @write_page_raw: function to write a raw page without ECC - * @read_page: function to read a page according to the ecc generator + * @read_page: function to read a page according to the ECC generator * requirements. * @read_subpage: function to read parts of the page covered by ECC. - * @write_page: function to write a page according to the ecc generator + * @write_page: function to write a page according to the ECC generator * requirements. 
+ * @write_oob_raw: function to write chip OOB data without ECC + * @read_oob_raw: function to read chip OOB data without ECC * @read_oob: function to read chip OOB data * @write_oob: function to write chip OOB data */ @@ -380,6 +370,10 @@ struct nand_ecc_ctrl { uint32_t offs, uint32_t len, uint8_t *buf); void (*write_page)(struct mtd_info *mtd, struct nand_chip *chip, const uint8_t *buf); + int (*write_oob_raw)(struct mtd_info *mtd, struct nand_chip *chip, + int page); + int (*read_oob_raw)(struct mtd_info *mtd, struct nand_chip *chip, + int page, int sndcmd); int (*read_oob)(struct mtd_info *mtd, struct nand_chip *chip, int page, int sndcmd); int (*write_oob)(struct mtd_info *mtd, struct nand_chip *chip, @@ -388,8 +382,8 @@ struct nand_ecc_ctrl { /** * struct nand_buffers - buffer structure for read/write - * @ecccalc: buffer for calculated ecc - * @ecccode: buffer for ecc read from flash + * @ecccalc: buffer for calculated ECC + * @ecccode: buffer for ECC read from flash * @databuf: buffer for data - dynamically sized * * Do not change the order of buffers. databuf and oobrbuf must be in @@ -422,7 +416,7 @@ struct nand_buffers { * mtd->oobsize, mtd->writesize and so on. * @id_data contains the 8 bytes values of NAND_CMD_READID. * Return with the bus width. - * @dev_ready: [BOARDSPECIFIC] hardwarespecific function for accesing + * @dev_ready: [BOARDSPECIFIC] hardwarespecific function for accessing * device ready/busy line. If set to NULL no access to * ready/busy is available and the ready/busy information * is read from the chip status register. @@ -430,17 +424,17 @@ struct nand_buffers { * commands to the chip. * @waitfunc: [REPLACEABLE] hardwarespecific function for wait on * ready. - * @ecc: [BOARDSPECIFIC] ecc control ctructure + * @ecc: [BOARDSPECIFIC] ECC control structure * @buffers: buffer structure for read/write * @hwcontrol: platform-specific hardware control structure - * @ops: oob operation operands * @erase_cmd: [INTERN] erase command write function, selectable due * to AND support. * @scan_bbt: [REPLACEABLE] function to scan bad block table * @chip_delay: [BOARDSPECIFIC] chip dependent delay for transferring * data from array to read regs (tR). * @state: [INTERN] the current state of the NAND device - * @oob_poi: poison value buffer + * @oob_poi: "poison value buffer," used for laying out OOB data + * before writing * @page_shift: [INTERN] number of address bits in a page (column * address bits). * @phys_erase_shift: [INTERN] number of address bits in a physical eraseblock @@ -449,6 +443,9 @@ struct nand_buffers { * @options: [BOARDSPECIFIC] various chip options. They can partly * be set to inform nand_scan about special functionality. * See the defines for further explanation. + * @bbt_options: [INTERN] bad block specific options. All options used + * here must come from bbm.h. By default, these options + * will be copied to the appropriate nand_bbt_descr's. * @badblockpos: [INTERN] position of the bad block marker in the oob * area. * @badblockbits: [INTERN] number of bits to left-shift the bad block @@ -464,7 +461,7 @@ struct nand_buffers { * non 0 if ONFI supported. * @onfi_params: [INTERN] holds the ONFI page parameter when ONFI is * supported, 0 otherwise. - * @ecclayout: [REPLACEABLE] the default ecc placement scheme + * @ecclayout: [REPLACEABLE] the default ECC placement scheme * @bbt: [INTERN] bad block table pointer * @bbt_td: [REPLACEABLE] bad block table descriptor for flash * lookup. 
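Taken together, the bbm.h and nand.h hunks move the flash-based BBT controls out of nand_chip.options and into the new nand_chip.bbt_options field: NAND_USE_FLASH_BBT becomes NAND_BBT_USE_FLASH, NAND_USE_FLASH_BBT_NO_OOB becomes NAND_BBT_NO_OOB, and NAND_CREATE_EMPTY_BBT becomes NAND_BBT_CREATE_EMPTY, while chip-behaviour flags such as NAND_BUSWIDTH_16 stay in ->options. A minimal conversion sketch for a hypothetical board driver (the exampleboard_* name is not from this patch):

#include <linux/mtd/nand.h>	/* pulls in <linux/mtd/bbm.h> for the NAND_BBT_* flags */

static void exampleboard_nand_init_chip(struct nand_chip *chip)
{
	/* was: chip->options |= NAND_USE_FLASH_BBT | NAND_USE_FLASH_BBT_NO_OOB; */
	chip->bbt_options |= NAND_BBT_USE_FLASH | NAND_BBT_NO_OOB;

	/* bus-/chip-level behaviour stays in ->options */
	chip->options |= NAND_BUSWIDTH_16;
}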
@@ -472,9 +469,9 @@ struct nand_buffers { * @badblock_pattern: [REPLACEABLE] bad block scan pattern used for initial * bad block scan. * @controller: [REPLACEABLE] a pointer to a hardware controller - * structure which is shared among multiple independend + * structure which is shared among multiple independent * devices. - * @priv: [OPTIONAL] pointer to private chip date + * @priv: [OPTIONAL] pointer to private chip data * @errstat: [OPTIONAL] hardware specific function to perform * additional error status checks (determine if errors are * correctable). @@ -509,6 +506,7 @@ struct nand_chip { int chip_delay; unsigned int options; + unsigned int bbt_options; int page_shift; int phys_erase_shift; @@ -536,8 +534,6 @@ struct nand_chip { struct nand_buffers *buffers; struct nand_hw_control hwcontrol; - struct mtd_oob_ops ops; - uint8_t *bbt; struct nand_bbt_descr *bbt_td; struct nand_bbt_descr *bbt_md; @@ -611,10 +607,9 @@ extern int nand_do_read(struct mtd_info *mtd, loff_t from, size_t len, * @partitions: mtd partition list * @chip_delay: R/B delay value in us * @options: Option flags, e.g. 16bit buswidth - * @ecclayout: ecc layout info structure + * @bbt_options: BBT option flags, e.g. NAND_BBT_USE_FLASH + * @ecclayout: ECC layout info structure * @part_probe_types: NULL-terminated array of probe types - * @set_parts: platform specific function to set partitions - * @priv: hardware controller specific settings */ struct platform_nand_chip { int nr_chips; @@ -624,9 +619,8 @@ struct platform_nand_chip { struct nand_ecclayout *ecclayout; int chip_delay; unsigned int options; + unsigned int bbt_options; const char **part_probe_types; - void (*set_parts)(uint64_t size, struct platform_nand_chip *chip); - void *priv; }; /* Keep gcc happy */ diff --git a/include/linux/mtd/onenand.h b/include/linux/mtd/onenand.h index 52b6f187bf49..4596503c9da9 100644 --- a/include/linux/mtd/onenand.h +++ b/include/linux/mtd/onenand.h @@ -184,6 +184,9 @@ struct onenand_chip { #define ONENAND_IS_CACHE_PROGRAM(this) \ (this->options & ONENAND_HAS_CACHE_PROGRAM) +#define ONENAND_IS_NOP_1(this) \ + (this->options & ONENAND_HAS_NOP_1) + /* Check byte access in OneNAND */ #define ONENAND_CHECK_BYTE_ACCESS(addr) (addr & 0x1) @@ -195,6 +198,7 @@ struct onenand_chip { #define ONENAND_HAS_2PLANE (0x0004) #define ONENAND_HAS_4KB_PAGE (0x0008) #define ONENAND_HAS_CACHE_PROGRAM (0x0010) +#define ONENAND_HAS_NOP_1 (0x0020) #define ONENAND_SKIP_UNLOCK_CHECK (0x0100) #define ONENAND_PAGEBUF_ALLOC (0x1000) #define ONENAND_OOBBUF_ALLOC (0x2000) diff --git a/include/linux/mtd/partitions.h b/include/linux/mtd/partitions.h index 3a6f0372fc96..2475228c1158 100644 --- a/include/linux/mtd/partitions.h +++ b/include/linux/mtd/partitions.h @@ -24,7 +24,9 @@ * will extend to the end of the master MTD device. * offset: absolute starting position within the master MTD device; if * defined as MTDPART_OFS_APPEND, the partition will start where the - * previous one ended; if MTDPART_OFS_NXTBLK, at the next erase block. + * previous one ended; if MTDPART_OFS_NXTBLK, at the next erase block; + * if MTDPART_OFS_RETAIN, consume as much as possible, leaving size + * after the end of partition. * mask_flags: contains flags that have to be masked (removed) from the * master MTD flag set for the corresponding MTD partition. 
* For example, to force a read-only partition, simply adding @@ -42,12 +44,25 @@ struct mtd_partition { struct nand_ecclayout *ecclayout; /* out of band layout for this partition (NAND only) */ }; +#define MTDPART_OFS_RETAIN (-3) #define MTDPART_OFS_NXTBLK (-2) #define MTDPART_OFS_APPEND (-1) #define MTDPART_SIZ_FULL (0) struct mtd_info; +struct device_node; + +/** + * struct mtd_part_parser_data - used to pass data to MTD partition parsers. + * @origin: for RedBoot, start address of MTD device + * @of_node: for OF parsers, device node containing partitioning information + */ +struct mtd_part_parser_data { + unsigned long origin; + struct device_node *of_node; +}; + /* * Functions dealing with the various ways of partitioning the space @@ -57,37 +72,12 @@ struct mtd_part_parser { struct list_head list; struct module *owner; const char *name; - int (*parse_fn)(struct mtd_info *, struct mtd_partition **, unsigned long); + int (*parse_fn)(struct mtd_info *, struct mtd_partition **, + struct mtd_part_parser_data *); }; extern int register_mtd_parser(struct mtd_part_parser *parser); extern int deregister_mtd_parser(struct mtd_part_parser *parser); -extern int parse_mtd_partitions(struct mtd_info *master, const char **types, - struct mtd_partition **pparts, unsigned long origin); - -#define put_partition_parser(p) do { module_put((p)->owner); } while(0) - -struct device; -struct device_node; - -#ifdef CONFIG_MTD_OF_PARTS -int __devinit of_mtd_parse_partitions(struct device *dev, - struct device_node *node, - struct mtd_partition **pparts); -#else -static inline int of_mtd_parse_partitions(struct device *dev, - struct device_node *node, - struct mtd_partition **pparts) -{ - return 0; -} -#endif - -#ifdef CONFIG_MTD_CMDLINE_PARTS -static inline int mtd_has_cmdlinepart(void) { return 1; } -#else -static inline int mtd_has_cmdlinepart(void) { return 0; } -#endif int mtd_is_partition(struct mtd_info *mtd); int mtd_add_partition(struct mtd_info *master, char *name, diff --git a/include/linux/mtd/physmap.h b/include/linux/mtd/physmap.h index e5f21d293c70..04e018160e2b 100644 --- a/include/linux/mtd/physmap.h +++ b/include/linux/mtd/physmap.h @@ -32,21 +32,4 @@ struct physmap_flash_data { struct mtd_partition *parts; }; -/* - * Board needs to specify the exact mapping during their setup time. - */ -void physmap_configure(unsigned long addr, unsigned long size, - int bankwidth, void (*set_vpp)(struct map_info *, int) ); - -/* - * Machines that wish to do flash partition may want to call this function in - * their setup routine. - * - * physmap_set_partitions(mypartitions, num_parts); - * - * Note that one can always override this hard-coded partition with - * command line partition (you need to enable CONFIG_MTD_CMDLINE_PARTS). 
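The partitions.h and physmap.h hunks around here replace the old parse_mtd_partitions()/physmap_set_partitions() flow: partition parsers now receive a struct mtd_part_parser_data carrying OF or RedBoot hints, and drivers register through mtd_device_parse_register() with a built-in table as fallback. The new MTDPART_OFS_RETAIN offset value lets a partition consume everything except 'size' bytes left free after it. Below is a sketch of the new flow for a hypothetical driver; the partition names, sizes and example_* identifiers are illustrative, and the probe-type strings are those used by the in-tree cmdline and OF parsers.

#include <linux/kernel.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>

static struct mtd_partition example_default_parts[] = {
	{
		.name		= "bootloader",
		.offset		= 0,
		.size		= 256 * 1024,
		.mask_flags	= MTD_WRITEABLE,	/* force read-only */
	}, {
		.name		= "kernel",
		.offset		= MTDPART_OFS_APPEND,
		.size		= 4 * 1024 * 1024,
	}, {
		.name		= "rootfs",
		.offset		= MTDPART_OFS_APPEND,
		.size		= MTDPART_SIZ_FULL,	/* rest of the device */
	},
};

static const char *example_probe_types[] = { "cmdlinepart", "ofpart", NULL };

static int example_register_mtd(struct mtd_info *mtd, struct device_node *np)
{
	struct mtd_part_parser_data ppdata = { .of_node = np };

	/* Parsers run first; example_default_parts is only the fallback. */
	return mtd_device_parse_register(mtd, example_probe_types, &ppdata,
					 example_default_parts,
					 ARRAY_SIZE(example_default_parts));
}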
- */ -void physmap_set_partitions(struct mtd_partition *parts, int num_parts); - #endif /* __LINUX_MTD_PHYSMAP__ */ diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 3fdf251389de..172ba70306d1 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -2405,6 +2405,8 @@ #define PCI_VENDOR_ID_AZWAVE 0x1a3b +#define PCI_VENDOR_ID_ASMEDIA 0x1b21 + #define PCI_VENDOR_ID_TEKRAM 0x1de1 #define PCI_DEVICE_ID_TEKRAM_DC290 0xdc29 diff --git a/include/linux/pinctrl/pinctrl.h b/include/linux/pinctrl/pinctrl.h index 3605e947fa90..04c011038f32 100644 --- a/include/linux/pinctrl/pinctrl.h +++ b/include/linux/pinctrl/pinctrl.h @@ -121,6 +121,7 @@ extern const char *pinctrl_dev_get_name(struct pinctrl_dev *pctldev); extern void *pinctrl_dev_get_drvdata(struct pinctrl_dev *pctldev); #else +struct pinctrl_dev; /* Sufficiently stupid default function when pinctrl is not in use */ static inline bool pin_is_valid(struct pinctrl_dev *pctldev, int pin) diff --git a/include/linux/platform_data/macb.h b/include/linux/platform_data/macb.h new file mode 100644 index 000000000000..e7c748fb6053 --- /dev/null +++ b/include/linux/platform_data/macb.h @@ -0,0 +1,17 @@ +/* + * Copyright (C) 2004-2006 Atmel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ +#ifndef __MACB_PDATA_H__ +#define __MACB_PDATA_H__ + +struct macb_platform_data { + u32 phy_mask; + u8 phy_irq_pin; /* PHY IRQ */ + u8 is_rmii; /* using RMII interface? */ +}; + +#endif /* __MACB_PDATA_H__ */ diff --git a/include/linux/serial_sci.h b/include/linux/serial_sci.h index 0efa1f10bc2b..369273a52679 100644 --- a/include/linux/serial_sci.h +++ b/include/linux/serial_sci.h @@ -67,6 +67,7 @@ enum { SCIx_IRDA_REGTYPE, SCIx_SCIFA_REGTYPE, SCIx_SCIFB_REGTYPE, + SCIx_SH2_SCIF_FIFODATA_REGTYPE, SCIx_SH3_SCIF_REGTYPE, SCIx_SH4_SCIF_REGTYPE, SCIx_SH4_SCIF_NO_SCSPTR_REGTYPE, diff --git a/include/linux/sh_clk.h b/include/linux/sh_clk.h index 3ccf18648d0a..a20831cf336a 100644 --- a/include/linux/sh_clk.h +++ b/include/linux/sh_clk.h @@ -52,7 +52,6 @@ struct clk { unsigned long arch_flags; void *priv; - struct dentry *dentry; struct clk_mapping *mapping; struct cpufreq_frequency_table *freq_table; unsigned int nr_freqs; @@ -94,6 +93,9 @@ int clk_rate_table_find(struct clk *clk, long clk_rate_div_range_round(struct clk *clk, unsigned int div_min, unsigned int div_max, unsigned long rate); +long clk_rate_mult_range_round(struct clk *clk, unsigned int mult_min, + unsigned int mult_max, unsigned long rate); + long clk_round_parent(struct clk *clk, unsigned long target, unsigned long *best_freq, unsigned long *parent_freq, unsigned int div_min, unsigned int div_max); diff --git a/include/linux/sh_pfc.h b/include/linux/sh_pfc.h index bc8c9208f7e2..8446789216e5 100644 --- a/include/linux/sh_pfc.h +++ b/include/linux/sh_pfc.h @@ -104,4 +104,80 @@ struct pinmux_info { int register_pinmux(struct pinmux_info *pip); int unregister_pinmux(struct pinmux_info *pip); +/* helper macro for port */ +#define PORT_1(fn, pfx, sfx) fn(pfx, sfx) + +#define PORT_10(fn, pfx, sfx) \ + PORT_1(fn, pfx##0, sfx), PORT_1(fn, pfx##1, sfx), \ + PORT_1(fn, pfx##2, sfx), PORT_1(fn, pfx##3, sfx), \ + PORT_1(fn, pfx##4, sfx), PORT_1(fn, pfx##5, sfx), \ + PORT_1(fn, pfx##6, sfx), PORT_1(fn, pfx##7, sfx), \ + PORT_1(fn, pfx##8, sfx), PORT_1(fn, pfx##9, sfx) + +#define PORT_90(fn, pfx, sfx) \ + PORT_10(fn, pfx##1, sfx), 
PORT_10(fn, pfx##2, sfx), \ + PORT_10(fn, pfx##3, sfx), PORT_10(fn, pfx##4, sfx), \ + PORT_10(fn, pfx##5, sfx), PORT_10(fn, pfx##6, sfx), \ + PORT_10(fn, pfx##7, sfx), PORT_10(fn, pfx##8, sfx), \ + PORT_10(fn, pfx##9, sfx) + +#define _PORT_ALL(pfx, sfx) pfx##_##sfx +#define _GPIO_PORT(pfx, sfx) PINMUX_GPIO(GPIO_PORT##pfx, PORT##pfx##_DATA) +#define PORT_ALL(str) CPU_ALL_PORT(_PORT_ALL, PORT, str) +#define GPIO_PORT_ALL() CPU_ALL_PORT(_GPIO_PORT, , unused) +#define GPIO_FN(str) PINMUX_GPIO(GPIO_FN_##str, str##_MARK) + +/* helper macro for pinmux_enum_t */ +#define PORT_DATA_I(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_IN) + +#define PORT_DATA_I_PD(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ + PORT##nr##_IN, PORT##nr##_IN_PD) + +#define PORT_DATA_I_PU(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ + PORT##nr##_IN, PORT##nr##_IN_PU) + +#define PORT_DATA_I_PU_PD(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, \ + PORT##nr##_IN, PORT##nr##_IN_PD, PORT##nr##_IN_PU) + +#define PORT_DATA_O(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT) + +#define PORT_DATA_IO(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ + PORT##nr##_IN) + +#define PORT_DATA_IO_PD(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ + PORT##nr##_IN, PORT##nr##_IN_PD) + +#define PORT_DATA_IO_PU(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ + PORT##nr##_IN, PORT##nr##_IN_PU) + +#define PORT_DATA_IO_PU_PD(nr) \ + PINMUX_DATA(PORT##nr##_DATA, PORT##nr##_FN0, PORT##nr##_OUT, \ + PORT##nr##_IN, PORT##nr##_IN_PD, PORT##nr##_IN_PU) + +/* helper macro for top 4 bits in PORTnCR */ +#define _PCRH(in, in_pd, in_pu, out) \ + 0, (out), (in), 0, \ + 0, 0, 0, 0, \ + 0, 0, (in_pd), 0, \ + 0, 0, (in_pu), 0 + +#define PORTCR(nr, reg) \ + { \ + PINMUX_CFG_REG("PORT" nr "CR", reg, 8, 4) { \ + _PCRH(PORT##nr##_IN, PORT##nr##_IN_PD, \ + PORT##nr##_IN_PU, PORT##nr##_OUT), \ + PORT##nr##_FN0, PORT##nr##_FN1, \ + PORT##nr##_FN2, PORT##nr##_FN3, \ + PORT##nr##_FN4, PORT##nr##_FN5, \ + PORT##nr##_FN6, PORT##nr##_FN7 } \ + } + #endif /* __SH_PFC_H */ diff --git a/include/mtd/mtd-abi.h b/include/mtd/mtd-abi.h index 2f7d45bcbd24..1a7e1d20adf9 100644 --- a/include/mtd/mtd-abi.h +++ b/include/mtd/mtd-abi.h @@ -45,6 +45,51 @@ struct mtd_oob_buf64 { __u64 usr_ptr; }; +/** + * MTD operation modes + * + * @MTD_OPS_PLACE_OOB: OOB data are placed at the given offset (default) + * @MTD_OPS_AUTO_OOB: OOB data are automatically placed at the free areas + * which are defined by the internal ecclayout + * @MTD_OPS_RAW: data are transferred as-is, with no error correction; + * this mode implies %MTD_OPS_PLACE_OOB + * + * These modes can be passed to ioctl(MEMWRITE) and are also used internally. + * See notes on "MTD file modes" for discussion on %MTD_OPS_RAW vs. + * %MTD_FILE_MODE_RAW. + */ +enum { + MTD_OPS_PLACE_OOB = 0, + MTD_OPS_AUTO_OOB = 1, + MTD_OPS_RAW = 2, +}; + +/** + * struct mtd_write_req - data structure for requesting a write operation + * + * @start: start address + * @len: length of data buffer + * @ooblen: length of OOB buffer + * @usr_data: user-provided data buffer + * @usr_oob: user-provided OOB buffer + * @mode: MTD mode (see "MTD operation modes") + * @padding: reserved, must be set to 0 + * + * This structure supports ioctl(MEMWRITE) operations, allowing data and/or OOB + * writes in various modes. To write to OOB-only, set @usr_data == NULL, and to + * write data-only, set @usr_oob == NULL. 
However, setting both @usr_data and + * @usr_oob to NULL is not allowed. + */ +struct mtd_write_req { + __u64 start; + __u64 len; + __u64 ooblen; + __u64 usr_data; + __u64 usr_oob; + __u8 mode; + __u8 padding[7]; +}; + #define MTD_ABSENT 0 #define MTD_RAM 1 #define MTD_ROM 2 @@ -59,13 +104,13 @@ struct mtd_oob_buf64 { #define MTD_NO_ERASE 0x1000 /* No erase necessary */ #define MTD_POWERUP_LOCK 0x2000 /* Always locked after reset */ -// Some common devices / combinations of capabilities +/* Some common devices / combinations of capabilities */ #define MTD_CAP_ROM 0 #define MTD_CAP_RAM (MTD_WRITEABLE | MTD_BIT_WRITEABLE | MTD_NO_ERASE) #define MTD_CAP_NORFLASH (MTD_WRITEABLE | MTD_BIT_WRITEABLE) #define MTD_CAP_NANDFLASH (MTD_WRITEABLE) -/* ECC byte placement */ +/* Obsolete ECC byte placement modes (used with obsolete MEMGETOOBSEL) */ #define MTD_NANDECC_OFF 0 // Switch off ECC (Not recommended) #define MTD_NANDECC_PLACE 1 // Use the given placement in the structure (YAFFS1 legacy mode) #define MTD_NANDECC_AUTOPLACE 2 // Use the default placement scheme @@ -80,21 +125,18 @@ struct mtd_oob_buf64 { struct mtd_info_user { __u8 type; __u32 flags; - __u32 size; // Total size of the MTD + __u32 size; /* Total size of the MTD */ __u32 erasesize; __u32 writesize; - __u32 oobsize; // Amount of OOB data per block (e.g. 16) - /* The below two fields are obsolete and broken, do not use them - * (TODO: remove at some point) */ - __u32 ecctype; - __u32 eccsize; + __u32 oobsize; /* Amount of OOB data per block (e.g. 16) */ + __u64 padding; /* Old obsolete field; do not use */ }; struct region_info_user { __u32 offset; /* At which this region starts, - * from the beginning of the MTD */ - __u32 erasesize; /* For this region */ - __u32 numblocks; /* Number of blocks in this region */ + * from the beginning of the MTD */ + __u32 erasesize; /* For this region */ + __u32 numblocks; /* Number of blocks in this region */ __u32 regionindex; }; @@ -104,29 +146,61 @@ struct otp_info { __u32 locked; }; +/* + * Note, the following ioctl existed in the past and was removed: + * #define MEMSETOOBSEL _IOW('M', 9, struct nand_oobinfo) + * Try to avoid adding a new ioctl with the same ioctl number. + */ + +/* Get basic MTD characteristics info (better to use sysfs) */ #define MEMGETINFO _IOR('M', 1, struct mtd_info_user) +/* Erase segment of MTD */ #define MEMERASE _IOW('M', 2, struct erase_info_user) +/* Write out-of-band data from MTD */ #define MEMWRITEOOB _IOWR('M', 3, struct mtd_oob_buf) +/* Read out-of-band data from MTD */ #define MEMREADOOB _IOWR('M', 4, struct mtd_oob_buf) +/* Lock a chip (for MTD that supports it) */ #define MEMLOCK _IOW('M', 5, struct erase_info_user) +/* Unlock a chip (for MTD that supports it) */ #define MEMUNLOCK _IOW('M', 6, struct erase_info_user) +/* Get the number of different erase regions */ #define MEMGETREGIONCOUNT _IOR('M', 7, int) +/* Get information about the erase region for a specific index */ #define MEMGETREGIONINFO _IOWR('M', 8, struct region_info_user) -#define MEMSETOOBSEL _IOW('M', 9, struct nand_oobinfo) +/* Get info about OOB modes (e.g., RAW, PLACE, AUTO) - legacy interface */ #define MEMGETOOBSEL _IOR('M', 10, struct nand_oobinfo) +/* Check if an eraseblock is bad */ #define MEMGETBADBLOCK _IOW('M', 11, __kernel_loff_t) +/* Mark an eraseblock as bad */ #define MEMSETBADBLOCK _IOW('M', 12, __kernel_loff_t) +/* Set OTP (One-Time Programmable) mode (factory vs. 
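struct mtd_write_req above is the argument of the new MEMWRITE ioctl ('M', 24) defined a few lines further down, which writes data and/or OOB in a single request. A hedged userspace sketch, not part of the patch; the helper name and the choice of MTD_OPS_AUTO_OOB are illustrative, and error handling is omitted.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>

static int write_page_and_oob(int fd, uint64_t offs,
			      const void *data, uint32_t datalen,
			      const void *oob, uint32_t ooblen)
{
	struct mtd_write_req req;

	memset(&req, 0, sizeof(req));		/* req.padding must be zero */
	req.start    = offs;
	req.len      = datalen;
	req.ooblen   = ooblen;
	req.usr_data = (uint64_t)(uintptr_t)data;	/* NULL => OOB-only write */
	req.usr_oob  = (uint64_t)(uintptr_t)oob;	/* NULL => data-only write */
	req.mode     = MTD_OPS_AUTO_OOB;	/* or MTD_OPS_PLACE_OOB / MTD_OPS_RAW */

	return ioctl(fd, MEMWRITE, &req);
}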
user) */ #define OTPSELECT _IOR('M', 13, int) +/* Get number of OTP (One-Time Programmable) regions */ #define OTPGETREGIONCOUNT _IOW('M', 14, int) +/* Get all OTP (One-Time Programmable) info about MTD */ #define OTPGETREGIONINFO _IOW('M', 15, struct otp_info) +/* Lock a given range of user data (must be in mode %MTD_FILE_MODE_OTP_USER) */ #define OTPLOCK _IOR('M', 16, struct otp_info) +/* Get ECC layout (deprecated) */ #define ECCGETLAYOUT _IOR('M', 17, struct nand_ecclayout_user) +/* Get statistics about corrected/uncorrected errors */ #define ECCGETSTATS _IOR('M', 18, struct mtd_ecc_stats) +/* Set MTD mode on a per-file-descriptor basis (see "MTD file modes") */ #define MTDFILEMODE _IO('M', 19) +/* Erase segment of MTD (supports 64-bit address) */ #define MEMERASE64 _IOW('M', 20, struct erase_info_user64) +/* Write data to OOB (64-bit version) */ #define MEMWRITEOOB64 _IOWR('M', 21, struct mtd_oob_buf64) +/* Read data from OOB (64-bit version) */ #define MEMREADOOB64 _IOWR('M', 22, struct mtd_oob_buf64) +/* Check if chip is locked (for MTD that supports it) */ #define MEMISLOCKED _IOR('M', 23, struct erase_info_user) +/* + * Most generic write interface; can write in-band and/or out-of-band in various + * modes (see "struct mtd_write_req") + */ +#define MEMWRITE _IOWR('M', 24, struct mtd_write_req) /* * Obsolete legacy interface. Keep it in order not to break userspace @@ -177,13 +251,27 @@ struct mtd_ecc_stats { }; /* - * Read/write file modes for access to MTD + * MTD file modes - for read/write access to MTD + * + * @MTD_FILE_MODE_NORMAL: OTP disabled, ECC enabled + * @MTD_FILE_MODE_OTP_FACTORY: OTP enabled in factory mode + * @MTD_FILE_MODE_OTP_USER: OTP enabled in user mode + * @MTD_FILE_MODE_RAW: OTP disabled, ECC disabled + * + * These modes can be set via ioctl(MTDFILEMODE). The mode mode will be retained + * separately for each open file descriptor. + * + * Note: %MTD_FILE_MODE_RAW provides the same functionality as %MTD_OPS_RAW - + * raw access to the flash, without error correction or autoplacement schemes. + * Wherever possible, the MTD_OPS_* mode will override the MTD_FILE_MODE_* mode + * (e.g., when using ioctl(MEMWRITE)), but in some cases, the MTD_FILE_MODE is + * used out of necessity (e.g., `write()', ioctl(MEMWRITEOOB64)). */ enum mtd_file_modes { - MTD_MODE_NORMAL = MTD_OTP_OFF, - MTD_MODE_OTP_FACTORY = MTD_OTP_FACTORY, - MTD_MODE_OTP_USER = MTD_OTP_USER, - MTD_MODE_RAW, + MTD_FILE_MODE_NORMAL = MTD_OTP_OFF, + MTD_FILE_MODE_OTP_FACTORY = MTD_OTP_FACTORY, + MTD_FILE_MODE_OTP_USER = MTD_OTP_USER, + MTD_FILE_MODE_RAW, }; #endif /* __MTD_ABI_H__ */ diff --git a/include/net/bluetooth/rfcomm.h b/include/net/bluetooth/rfcomm.h index d5eee2093b1e..e2e3ecad1008 100644 --- a/include/net/bluetooth/rfcomm.h +++ b/include/net/bluetooth/rfcomm.h @@ -211,6 +211,7 @@ struct rfcomm_dlc { #define RFCOMM_AUTH_ACCEPT 6 #define RFCOMM_AUTH_REJECT 7 #define RFCOMM_DEFER_SETUP 8 +#define RFCOMM_ENC_DROP 9 /* Scheduling flags and events */ #define RFCOMM_SCHED_WAKEUP 31 diff --git a/include/net/mac80211.h b/include/net/mac80211.h index dc1123aa8181..72eddd1b410b 100644 --- a/include/net/mac80211.h +++ b/include/net/mac80211.h @@ -3567,8 +3567,9 @@ rate_lowest_index(struct ieee80211_supported_band *sband, return i; /* warn when we cannot find a rate. 
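The MTD_MODE_* to MTD_FILE_MODE_* rename above is visible to userspace; tools that switch a descriptor into raw (ECC-off) access do so through the MTDFILEMODE ioctl documented in the same hunk. A small illustrative snippet in the style of the mtd-utils tools; the function name is made up and error handling is omitted.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>

static int open_mtd_raw(const char *devnode)
{
	int fd = open(devnode, O_RDWR);

	if (fd >= 0)
		/* was MTD_MODE_RAW before this rename */
		ioctl(fd, MTDFILEMODE, MTD_FILE_MODE_RAW);

	return fd;
}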
*/ - WARN_ON(1); + WARN_ON_ONCE(1); + /* and return 0 (the lowest index) */ return 0; } diff --git a/include/net/netlink.h b/include/net/netlink.h index 98c185441bee..cb1f3504687f 100644 --- a/include/net/netlink.h +++ b/include/net/netlink.h @@ -192,8 +192,15 @@ enum { * NLA_NUL_STRING Maximum length of string (excluding NUL) * NLA_FLAG Unused * NLA_BINARY Maximum length of attribute payload - * NLA_NESTED_COMPAT Exact length of structure payload - * All other Exact length of attribute payload + * NLA_NESTED Don't use `len' field -- length verification is + * done by checking len of nested header (or empty) + * NLA_NESTED_COMPAT Minimum length of structure payload + * NLA_U8, NLA_U16, + * NLA_U32, NLA_U64, + * NLA_MSECS Leaving the length field zero will verify the + * given type fits, using it verifies minimum length + * just like "All other" + * All other Minimum length of attribute payload * * Example: * static const struct nla_policy my_policy[ATTR_MAX+1] = { diff --git a/kernel/power/qos.c b/kernel/power/qos.c index 56db75147186..995e3bd3417b 100644 --- a/kernel/power/qos.c +++ b/kernel/power/qos.c @@ -70,6 +70,7 @@ static struct pm_qos_constraints cpu_dma_constraints = { }; static struct pm_qos_object cpu_dma_pm_qos = { .constraints = &cpu_dma_constraints, + .name = "cpu_dma_latency", }; static BLOCKING_NOTIFIER_HEAD(network_lat_notifier); diff --git a/lib/nlattr.c b/lib/nlattr.c index ac09f2226dc7..a8408b6cacdf 100644 --- a/lib/nlattr.c +++ b/lib/nlattr.c @@ -20,6 +20,7 @@ static const u16 nla_attr_minlen[NLA_TYPE_MAX+1] = { [NLA_U16] = sizeof(u16), [NLA_U32] = sizeof(u32), [NLA_U64] = sizeof(u64), + [NLA_MSECS] = sizeof(u64), [NLA_NESTED] = NLA_HDRLEN, }; diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 0360d1b5a1dd..a3278f005230 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1097,13 +1097,13 @@ static void balance_dirty_pages(struct address_space *mapping, pos_ratio = bdi_position_ratio(bdi, dirty_thresh, background_thresh, nr_dirty, bdi_thresh, bdi_dirty); - if (unlikely(pos_ratio == 0)) { + task_ratelimit = ((u64)dirty_ratelimit * pos_ratio) >> + RATELIMIT_CALC_SHIFT; + if (unlikely(task_ratelimit == 0)) { pause = max_pause; goto pause; } - task_ratelimit = (u64)dirty_ratelimit * - pos_ratio >> RATELIMIT_CALC_SHIFT; - pause = (HZ * pages_dirtied) / (task_ratelimit | 1); + pause = HZ * pages_dirtied / task_ratelimit; if (unlikely(pause <= 0)) { trace_balance_dirty_pages(bdi, dirty_thresh, diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index b84458dcc226..be84ae33ae36 100644 --- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -613,7 +613,7 @@ static int hci_dev_do_close(struct hci_dev *hdev) if (!test_bit(HCI_RAW, &hdev->flags)) { set_bit(HCI_INIT, &hdev->flags); __hci_request(hdev, hci_reset_req, 0, - msecs_to_jiffies(250)); + msecs_to_jiffies(HCI_INIT_TIMEOUT)); clear_bit(HCI_INIT, &hdev->flags); } diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index 940858a48cbd..2c7634296866 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -148,8 +148,6 @@ static int read_index_list(struct sock *sk) hci_del_off_timer(d); - set_bit(HCI_MGMT, &d->flags); - if (test_bit(HCI_SETUP, &d->flags)) continue; diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c index 38b618c96de6..4e32e18211f9 100644 --- a/net/bluetooth/rfcomm/core.c +++ b/net/bluetooth/rfcomm/core.c @@ -1802,6 +1802,11 @@ static inline void rfcomm_process_dlcs(struct rfcomm_session *s) continue; } + if (test_bit(RFCOMM_ENC_DROP, 
&d->flags)) { + __rfcomm_dlc_close(d, ECONNREFUSED); + continue; + } + if (test_and_clear_bit(RFCOMM_AUTH_ACCEPT, &d->flags)) { rfcomm_dlc_clear_timer(d); if (d->out) { @@ -2077,7 +2082,7 @@ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt) if (test_and_clear_bit(RFCOMM_SEC_PENDING, &d->flags)) { rfcomm_dlc_clear_timer(d); if (status || encrypt == 0x00) { - __rfcomm_dlc_close(d, ECONNREFUSED); + set_bit(RFCOMM_ENC_DROP, &d->flags); continue; } } @@ -2088,7 +2093,7 @@ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt) rfcomm_dlc_set_timer(d, RFCOMM_AUTH_TIMEOUT); continue; } else if (d->sec_level == BT_SECURITY_HIGH) { - __rfcomm_dlc_close(d, ECONNREFUSED); + set_bit(RFCOMM_ENC_DROP, &d->flags); continue; } } diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c index ebd7fb101fbf..d06c65fa5526 100644 --- a/net/mac80211/cfg.c +++ b/net/mac80211/cfg.c @@ -832,6 +832,12 @@ static int ieee80211_add_station(struct wiphy *wiphy, struct net_device *dev, if (is_multicast_ether_addr(mac)) return -EINVAL; + /* Only TDLS-supporting stations can add TDLS peers */ + if ((params->sta_flags_set & BIT(NL80211_STA_FLAG_TDLS_PEER)) && + !((wiphy->flags & WIPHY_FLAG_SUPPORTS_TDLS) && + sdata->vif.type == NL80211_IFTYPE_STATION)) + return -ENOTSUPP; + sta = sta_info_alloc(sdata, mac, GFP_KERNEL); if (!sta) return -ENOMEM; @@ -841,12 +847,6 @@ static int ieee80211_add_station(struct wiphy *wiphy, struct net_device *dev, sta_apply_parameters(local, sta, params); - /* Only TDLS-supporting stations can add TDLS peers */ - if (test_sta_flag(sta, WLAN_STA_TDLS_PEER) && - !((wiphy->flags & WIPHY_FLAG_SUPPORTS_TDLS) && - sdata->vif.type == NL80211_IFTYPE_STATION)) - return -ENOTSUPP; - rate_control_rate_init(sta); layer2_update = sdata->vif.type == NL80211_IFTYPE_AP_VLAN || diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 4c3d1f591bec..ea10a51babda 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -389,6 +389,7 @@ struct ieee80211_if_managed { unsigned long timers_running; /* used for quiesce/restart */ bool powersave; /* powersave requested for this iface */ + bool broken_ap; /* AP is broken -- turn off powersave */ enum ieee80211_smps_mode req_smps, /* requested smps mode */ ap_smps, /* smps mode AP thinks we're in */ driver_smps_mode; /* smps mode request */ diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 96f9fae32495..72c8bea81a6c 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -639,6 +639,9 @@ static bool ieee80211_powersave_allowed(struct ieee80211_sub_if_data *sdata) if (!mgd->powersave) return false; + if (mgd->broken_ap) + return false; + if (!mgd->associated) return false; @@ -1491,10 +1494,21 @@ static bool ieee80211_assoc_success(struct ieee80211_work *wk, capab_info = le16_to_cpu(mgmt->u.assoc_resp.capab_info); if ((aid & (BIT(15) | BIT(14))) != (BIT(15) | BIT(14))) - printk(KERN_DEBUG "%s: invalid aid value %d; bits 15:14 not " - "set\n", sdata->name, aid); + printk(KERN_DEBUG + "%s: invalid AID value 0x%x; bits 15:14 not set\n", + sdata->name, aid); aid &= ~(BIT(15) | BIT(14)); + ifmgd->broken_ap = false; + + if (aid == 0 || aid > IEEE80211_MAX_AID) { + printk(KERN_DEBUG + "%s: invalid AID value %d (out of range), turn off PS\n", + sdata->name, aid); + aid = 0; + ifmgd->broken_ap = true; + } + pos = mgmt->u.assoc_resp.variable; ieee802_11_parse_elems(pos, len - (pos - (u8 *) mgmt), &elems); diff --git a/net/mac80211/work.c b/net/mac80211/work.c index 
94472eb34d76..6c53b6d1002b 100644 --- a/net/mac80211/work.c +++ b/net/mac80211/work.c @@ -1084,14 +1084,13 @@ static void ieee80211_work_work(struct work_struct *work) continue; if (wk->chan != local->tmp_channel) continue; - if (ieee80211_work_ct_coexists(wk->chan_type, - local->tmp_channel_type)) + if (!ieee80211_work_ct_coexists(wk->chan_type, + local->tmp_channel_type)) continue; remain_off_channel = true; } if (!remain_off_channel && local->tmp_channel) { - bool on_oper_chan = ieee80211_cfg_on_oper_channel(local); local->tmp_channel = NULL; /* If tmp_channel wasn't operating channel, then * we need to go back on-channel. @@ -1101,7 +1100,7 @@ static void ieee80211_work_work(struct work_struct *work) * we still need to do a hardware config. Currently, * we cannot be here while scanning, however. */ - if (ieee80211_cfg_on_oper_channel(local) && !on_oper_chan) + if (!ieee80211_cfg_on_oper_channel(local)) ieee80211_hw_config(local, 0); /* At the least, we need to disable offchannel_ps, diff --git a/net/wanrouter/wanproc.c b/net/wanrouter/wanproc.c index f346395314ba..c43612ee96bb 100644 --- a/net/wanrouter/wanproc.c +++ b/net/wanrouter/wanproc.c @@ -81,7 +81,6 @@ static struct proc_dir_entry *proc_router; * Iterator */ static void *r_start(struct seq_file *m, loff_t *pos) - __acquires(kernel_lock) { struct wan_device *wandev; loff_t l = *pos; @@ -103,7 +102,6 @@ static void *r_next(struct seq_file *m, void *v, loff_t *pos) } static void r_stop(struct seq_file *m, void *v) - __releases(kernel_lock) { mutex_unlock(&config_mutex); } diff --git a/sound/core/vmaster.c b/sound/core/vmaster.c index 5dbab38d04af..130cfe677d60 100644 --- a/sound/core/vmaster.c +++ b/sound/core/vmaster.c @@ -52,6 +52,7 @@ struct link_slave { struct link_ctl_info info; int vals[2]; /* current values */ unsigned int flags; + struct snd_kcontrol *kctl; /* original kcontrol pointer */ struct snd_kcontrol slave; /* the copy of original control entry */ }; @@ -252,6 +253,7 @@ int _snd_ctl_add_slave(struct snd_kcontrol *master, struct snd_kcontrol *slave, slave->count * sizeof(*slave->vd), GFP_KERNEL); if (!srec) return -ENOMEM; + srec->kctl = slave; srec->slave = *slave; memcpy(srec->slave.vd, slave->vd, slave->count * sizeof(*slave->vd)); srec->master = master_link; @@ -333,10 +335,18 @@ static int master_put(struct snd_kcontrol *kcontrol, static void master_free(struct snd_kcontrol *kcontrol) { struct link_master *master = snd_kcontrol_chip(kcontrol); - struct link_slave *slave; - - list_for_each_entry(slave, &master->slaves, list) - slave->master = NULL; + struct link_slave *slave, *n; + + /* free all slave links and retore the original slave kctls */ + list_for_each_entry_safe(slave, n, &master->slaves, list) { + struct snd_kcontrol *sctl = slave->kctl; + struct list_head olist = sctl->list; + memcpy(sctl, &slave->slave, sizeof(*sctl)); + memcpy(sctl->vd, slave->slave.vd, + sctl->count * sizeof(*sctl->vd)); + sctl->list = olist; /* keep the current linked-list */ + kfree(slave); + } kfree(master); } diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c index 916a1863af73..e44b107fdc75 100644 --- a/sound/pci/hda/hda_codec.c +++ b/sound/pci/hda/hda_codec.c @@ -2331,6 +2331,39 @@ int snd_hda_codec_reset(struct hda_codec *codec) return 0; } +typedef int (*map_slave_func_t)(void *, struct snd_kcontrol *); + +/* apply the function to all matching slave ctls in the mixer list */ +static int map_slaves(struct hda_codec *codec, const char * const *slaves, + map_slave_func_t func, void *data) +{ + struct 
hda_nid_item *items; + const char * const *s; + int i, err; + + items = codec->mixers.list; + for (i = 0; i < codec->mixers.used; i++) { + struct snd_kcontrol *sctl = items[i].kctl; + if (!sctl || !sctl->id.name || + sctl->id.iface != SNDRV_CTL_ELEM_IFACE_MIXER) + continue; + for (s = slaves; *s; s++) { + if (!strcmp(sctl->id.name, *s)) { + err = func(data, sctl); + if (err) + return err; + break; + } + } + } + return 0; +} + +static int check_slave_present(void *data, struct snd_kcontrol *sctl) +{ + return 1; +} + /** * snd_hda_add_vmaster - create a virtual master control and add slaves * @codec: HD-audio codec @@ -2351,12 +2384,10 @@ int snd_hda_add_vmaster(struct hda_codec *codec, char *name, unsigned int *tlv, const char * const *slaves) { struct snd_kcontrol *kctl; - const char * const *s; int err; - for (s = slaves; *s && !snd_hda_find_mixer_ctl(codec, *s); s++) - ; - if (!*s) { + err = map_slaves(codec, slaves, check_slave_present, NULL); + if (err != 1) { snd_printdd("No slave found for %s\n", name); return 0; } @@ -2367,23 +2398,10 @@ int snd_hda_add_vmaster(struct hda_codec *codec, char *name, if (err < 0) return err; - for (s = slaves; *s; s++) { - struct snd_kcontrol *sctl; - int i = 0; - for (;;) { - sctl = _snd_hda_find_mixer_ctl(codec, *s, i); - if (!sctl) { - if (!i) - snd_printdd("Cannot find slave %s, " - "skipped\n", *s); - break; - } - err = snd_ctl_add_slave(kctl, sctl); - if (err < 0) - return err; - i++; - } - } + err = map_slaves(codec, slaves, (map_slave_func_t)snd_ctl_add_slave, + kctl); + if (err < 0) + return err; return 0; } EXPORT_SYMBOL_HDA(snd_hda_add_vmaster); @@ -4752,6 +4770,7 @@ int snd_hda_parse_pin_defcfg(struct hda_codec *codec, memset(sequences_hp, 0, sizeof(sequences_hp)); assoc_line_out = 0; + codec->ignore_misc_bit = true; end_nid = codec->start_nid + codec->num_nodes; for (nid = codec->start_nid; nid < end_nid; nid++) { unsigned int wid_caps = get_wcaps(codec, nid); @@ -4767,6 +4786,9 @@ int snd_hda_parse_pin_defcfg(struct hda_codec *codec, continue; def_conf = snd_hda_codec_get_pincfg(codec, nid); + if (!(get_defcfg_misc(snd_hda_codec_get_pincfg(codec, nid)) & + AC_DEFCFG_MISC_NO_PRESENCE)) + codec->ignore_misc_bit = false; conn = get_defcfg_connect(def_conf); if (conn == AC_JACK_PORT_NONE) continue; diff --git a/sound/pci/hda/hda_codec.h b/sound/pci/hda/hda_codec.h index 755f2b0f9d8e..564471169cae 100644 --- a/sound/pci/hda/hda_codec.h +++ b/sound/pci/hda/hda_codec.h @@ -854,6 +854,7 @@ struct hda_codec { unsigned int no_sticky_stream:1; /* no sticky-PCM stream assignment */ unsigned int pins_shutup:1; /* pins are shut up */ unsigned int no_trigger_sense:1; /* don't trigger at pin-sensing */ + unsigned int ignore_misc_bit:1; /* ignore MISC_NO_PRESENCE bit */ #ifdef CONFIG_SND_HDA_POWER_SAVE unsigned int power_on :1; /* current (global) power-state */ unsigned int power_transition :1; /* power-state in transition */ diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h index dcbea0da0fa2..6579e0f2bb57 100644 --- a/sound/pci/hda/hda_local.h +++ b/sound/pci/hda/hda_local.h @@ -510,13 +510,15 @@ int snd_hda_jack_detect(struct hda_codec *codec, hda_nid_t nid); static inline bool is_jack_detectable(struct hda_codec *codec, hda_nid_t nid) { - return (snd_hda_query_pin_caps(codec, nid) & AC_PINCAP_PRES_DETECT) && - /* disable MISC_NO_PRESENCE check because it may break too - * many devices - */ - /*(get_defcfg_misc(snd_hda_codec_get_pincfg(codec, nid) & - AC_DEFCFG_MISC_NO_PRESENCE)) &&*/ - (get_wcaps(codec, nid) & AC_WCAP_UNSOL_CAP); + 
if (!(snd_hda_query_pin_caps(codec, nid) & AC_PINCAP_PRES_DETECT)) + return false; + if (!codec->ignore_misc_bit && + (get_defcfg_misc(snd_hda_codec_get_pincfg(codec, nid)) & + AC_DEFCFG_MISC_NO_PRESENCE)) + return false; + if (!(get_wcaps(codec, nid) & AC_WCAP_UNSOL_CAP)) + return false; + return true; } /* flags for hda_nid_item */ diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index 5e706e4d1737..0de21193a2b0 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -3062,7 +3062,6 @@ static const struct snd_pci_quirk cxt5066_cfg_tbl[] = { SND_PCI_QUIRK(0x1043, 0x1993, "Asus U50F", CXT5066_ASUS), SND_PCI_QUIRK(0x1179, 0xff1e, "Toshiba Satellite C650D", CXT5066_IDEAPAD), SND_PCI_QUIRK(0x1179, 0xff50, "Toshiba Satellite P500-PSPGSC-01800T", CXT5066_OLPC_XO_1_5), - SND_PCI_QUIRK(0x1179, 0xffe0, "Toshiba Satellite Pro T130-15F", CXT5066_OLPC_XO_1_5), SND_PCI_QUIRK(0x14f1, 0x0101, "Conexant Reference board", CXT5066_LAPTOP), SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT5066_OLPC_XO_1_5), diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index a24e068a021b..308bb575bc06 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -284,7 +284,7 @@ static int alc_mux_select(struct hda_codec *codec, unsigned int adc_idx, struct alc_spec *spec = codec->spec; const struct hda_input_mux *imux; unsigned int mux_idx; - int i, type; + int i, type, num_conns; hda_nid_t nid; mux_idx = adc_idx >= spec->num_mux_defs ? 0 : adc_idx; @@ -307,16 +307,17 @@ static int alc_mux_select(struct hda_codec *codec, unsigned int adc_idx, spec->capsrc_nids[adc_idx] : spec->adc_nids[adc_idx]; /* no selection? */ - if (snd_hda_get_conn_list(codec, nid, NULL) <= 1) + num_conns = snd_hda_get_conn_list(codec, nid, NULL); + if (num_conns <= 1) return 1; type = get_wcaps_type(get_wcaps(codec, nid)); if (type == AC_WID_AUD_MIX) { /* Matrix-mixer style (e.g. ALC882) */ - for (i = 0; i < imux->num_items; i++) { - unsigned int v = (i == idx) ? 0 : HDA_AMP_MUTE; - snd_hda_codec_amp_stereo(codec, nid, HDA_INPUT, - imux->items[i].index, + int active = imux->items[idx].index; + for (i = 0; i < num_conns; i++) { + unsigned int v = (i == active) ? 
0 : HDA_AMP_MUTE; + snd_hda_codec_amp_stereo(codec, nid, HDA_INPUT, i, HDA_AMP_MUTE, v); } } else { diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c index 4e715fefebef..edc2b7bc177c 100644 --- a/sound/pci/hda/patch_sigmatel.c +++ b/sound/pci/hda/patch_sigmatel.c @@ -95,6 +95,7 @@ enum { STAC_92HD83XXX_REF, STAC_92HD83XXX_PWR_REF, STAC_DELL_S14, + STAC_DELL_VOSTRO_3500, STAC_92HD83XXX_HP, STAC_92HD83XXX_HP_cNB11_INTQUAD, STAC_HP_DV7_4000, @@ -1659,6 +1660,12 @@ static const unsigned int dell_s14_pin_configs[10] = { 0x40f000f0, 0x40f000f0, }; +static const unsigned int dell_vostro_3500_pin_configs[10] = { + 0x02a11020, 0x0221101f, 0x400000f0, 0x90170110, + 0x400000f1, 0x400000f2, 0x400000f3, 0x90a60160, + 0x400000f4, 0x400000f5, +}; + static const unsigned int hp_dv7_4000_pin_configs[10] = { 0x03a12050, 0x0321201f, 0x40f000f0, 0x90170110, 0x40f000f0, 0x40f000f0, 0x90170110, 0xd5a30140, @@ -1675,6 +1682,7 @@ static const unsigned int *stac92hd83xxx_brd_tbl[STAC_92HD83XXX_MODELS] = { [STAC_92HD83XXX_REF] = ref92hd83xxx_pin_configs, [STAC_92HD83XXX_PWR_REF] = ref92hd83xxx_pin_configs, [STAC_DELL_S14] = dell_s14_pin_configs, + [STAC_DELL_VOSTRO_3500] = dell_vostro_3500_pin_configs, [STAC_92HD83XXX_HP_cNB11_INTQUAD] = hp_cNB11_intquad_pin_configs, [STAC_HP_DV7_4000] = hp_dv7_4000_pin_configs, }; @@ -1684,6 +1692,7 @@ static const char * const stac92hd83xxx_models[STAC_92HD83XXX_MODELS] = { [STAC_92HD83XXX_REF] = "ref", [STAC_92HD83XXX_PWR_REF] = "mic-ref", [STAC_DELL_S14] = "dell-s14", + [STAC_DELL_VOSTRO_3500] = "dell-vostro-3500", [STAC_92HD83XXX_HP] = "hp", [STAC_92HD83XXX_HP_cNB11_INTQUAD] = "hp_cNB11_intquad", [STAC_HP_DV7_4000] = "hp-dv7-4000", @@ -1697,6 +1706,8 @@ static const struct snd_pci_quirk stac92hd83xxx_cfg_tbl[] = { "DFI LanParty", STAC_92HD83XXX_REF), SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x02ba, "unknown Dell", STAC_DELL_S14), + SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x1028, + "Dell Vostro 3500", STAC_DELL_VOSTRO_3500), SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xff00, 0x3600, "HP", STAC_92HD83XXX_HP), SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1656, diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c index 29e312597f20..11718b49b2e2 100644 --- a/sound/pci/intel8x0.c +++ b/sound/pci/intel8x0.c @@ -1077,6 +1077,13 @@ static snd_pcm_uframes_t snd_intel8x0_pcm_pointer(struct snd_pcm_substream *subs } if (civ != igetbyte(chip, ichdev->reg_offset + ICH_REG_OFF_CIV)) continue; + + /* IO read operation is very expensive inside virtual machine + * as it is emulated. The probability that subsequent PICB read + * will return different result is high enough to loop till + * timeout here. + * Same CIV is strict enough condition to be sure that PICB + * is valid inside VM on emulated card. */ if (chip->inside_vm) break; if (ptr1 == igetword(chip, ichdev->reg_offset + ichdev->roff_picb)) @@ -2930,6 +2937,45 @@ static unsigned int sis_codec_bits[3] = { ICH_PCR, ICH_SCR, ICH_SIS_TCR }; +static int __devinit snd_intel8x0_inside_vm(struct pci_dev *pci) +{ + int result = inside_vm; + char *msg = NULL; + + /* check module parameter first (override detection) */ + if (result >= 0) { + msg = result ? 
"enable (forced) VM" : "disable (forced) VM"; + goto fini; + } + + /* detect KVM and Parallels virtual environments */ + result = kvm_para_available(); +#ifdef X86_FEATURE_HYPERVISOR + result = result || boot_cpu_has(X86_FEATURE_HYPERVISOR); +#endif + if (!result) + goto fini; + + /* check for known (emulated) devices */ + if (pci->subsystem_vendor == 0x1af4 && + pci->subsystem_device == 0x1100) { + /* KVM emulated sound, PCI SSID: 1af4:1100 */ + msg = "enable KVM"; + } else if (pci->subsystem_vendor == 0x1ab8) { + /* Parallels VM emulated sound, PCI SSID: 1ab8:xxxx */ + msg = "enable Parallels VM"; + } else { + msg = "disable (unknown or VT-d) VM"; + result = 0; + } + +fini: + if (msg != NULL) + printk(KERN_INFO "intel8x0: %s optimization\n", msg); + + return result; +} + static int __devinit snd_intel8x0_create(struct snd_card *card, struct pci_dev *pci, unsigned long device_type, @@ -2997,9 +3043,7 @@ static int __devinit snd_intel8x0_create(struct snd_card *card, if (xbox) chip->xbox = 1; - chip->inside_vm = inside_vm; - if (inside_vm) - printk(KERN_INFO "intel8x0: enable KVM optimization\n"); + chip->inside_vm = snd_intel8x0_inside_vm(pci); if (pci->vendor == PCI_VENDOR_ID_INTEL && pci->device == PCI_DEVICE_ID_INTEL_440MX) @@ -3243,14 +3287,6 @@ static int __devinit snd_intel8x0_probe(struct pci_dev *pci, buggy_irq = 0; } - if (inside_vm < 0) { - /* detect KVM and Parallels virtual environments */ - inside_vm = kvm_para_available(); -#if defined(__i386__) || defined(__x86_64__) - inside_vm = inside_vm || boot_cpu_has(X86_FEATURE_HYPERVISOR); -#endif - } - if ((err = snd_intel8x0_create(card, pci, pci_id->driver_data, &chip)) < 0) { snd_card_free(card); diff --git a/sound/ppc/snd_ps3.c b/sound/ppc/snd_ps3.c index a3ce1b22620d..1aa52eff526a 100644 --- a/sound/ppc/snd_ps3.c +++ b/sound/ppc/snd_ps3.c @@ -876,7 +876,7 @@ static void __devinit snd_ps3_audio_set_base_addr(uint64_t ioaddr_start) (0x0fUL << 12) | (PS3_AUDIO_IOID); - ret = lv1_gpu_attribute(0x100, 0x007, val, 0, 0); + ret = lv1_gpu_attribute(0x100, 0x007, val); if (ret) pr_info("%s: gpu_attribute failed %d\n", __func__, ret); diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c index 6b73efd26991..9c982e47eb99 100644 --- a/sound/soc/codecs/wm8994.c +++ b/sound/soc/codecs/wm8994.c @@ -56,7 +56,7 @@ static int wm8994_retune_mobile_base[] = { static int wm8994_readable(struct snd_soc_codec *codec, unsigned int reg) { struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec); - struct wm8994 *control = wm8994->control_data; + struct wm8994 *control = codec->control_data; switch (reg) { case WM8994_GPIO_1: @@ -3030,19 +3030,34 @@ static irqreturn_t wm8958_mic_irq(int irq, void *data) { struct wm8994_priv *wm8994 = data; struct snd_soc_codec *codec = wm8994->codec; - int reg; + int reg, count; - reg = snd_soc_read(codec, WM8958_MIC_DETECT_3); - if (reg < 0) { - dev_err(codec->dev, "Failed to read mic detect status: %d\n", - reg); - return IRQ_NONE; - } + /* We may occasionally read a detection without an impedence + * range being provided - if that happens loop again. 
+ */ + count = 10; + do { + reg = snd_soc_read(codec, WM8958_MIC_DETECT_3); + if (reg < 0) { + dev_err(codec->dev, + "Failed to read mic detect status: %d\n", + reg); + return IRQ_NONE; + } - if (!(reg & WM8958_MICD_VALID)) { - dev_dbg(codec->dev, "Mic detect data not valid\n"); - goto out; - } + if (!(reg & WM8958_MICD_VALID)) { + dev_dbg(codec->dev, "Mic detect data not valid\n"); + goto out; + } + + if (!(reg & WM8958_MICD_STS) || (reg & WM8958_MICD_LVL_MASK)) + break; + + msleep(1); + } while (count--); + + if (count == 0) + dev_warn(codec->dev, "No impedence range reported for jack\n"); #ifndef CONFIG_SND_SOC_WM8994_MODULE trace_snd_soc_jack_irq(dev_name(codec->dev)); @@ -3180,9 +3195,9 @@ static int wm8994_codec_probe(struct snd_soc_codec *codec) wm8994_request_irq(codec->control_data, WM8994_IRQ_FIFOS_ERR, wm8994_fifo_error, "FIFO error", codec); - wm8994_request_irq(wm8994->control_data, WM8994_IRQ_TEMP_WARN, + wm8994_request_irq(codec->control_data, WM8994_IRQ_TEMP_WARN, wm8994_temp_warn, "Thermal warning", codec); - wm8994_request_irq(wm8994->control_data, WM8994_IRQ_TEMP_SHUT, + wm8994_request_irq(codec->control_data, WM8994_IRQ_TEMP_SHUT, wm8994_temp_shut, "Thermal shutdown", codec); ret = wm8994_request_irq(codec->control_data, WM8994_IRQ_DCS_DONE, diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c index 60f65ace7474..ab23869c01bb 100644 --- a/sound/usb/mixer.c +++ b/sound/usb/mixer.c @@ -765,10 +765,61 @@ static void usb_mixer_elem_free(struct snd_kcontrol *kctl) * interface to ALSA control for feature/mixer units */ +/* volume control quirks */ +static void volume_control_quirks(struct usb_mixer_elem_info *cval, + struct snd_kcontrol *kctl) +{ + switch (cval->mixer->chip->usb_id) { + case USB_ID(0x0471, 0x0101): + case USB_ID(0x0471, 0x0104): + case USB_ID(0x0471, 0x0105): + case USB_ID(0x0672, 0x1041): + /* quirk for UDA1321/N101. + * note that detection between firmware 2.1.1.7 (N101) + * and later 2.1.1.21 is not very clear from datasheets. + * I hope that the min value is -15360 for newer firmware --jk + */ + if (!strcmp(kctl->id.name, "PCM Playback Volume") && + cval->min == -15616) { + snd_printk(KERN_INFO + "set volume quirk for UDA1321/N101 chip\n"); + cval->max = -256; + } + break; + + case USB_ID(0x046d, 0x09a4): + if (!strcmp(kctl->id.name, "Mic Capture Volume")) { + snd_printk(KERN_INFO + "set volume quirk for QuickCam E3500\n"); + cval->min = 6080; + cval->max = 8768; + cval->res = 192; + } + break; + + case USB_ID(0x046d, 0x0808): + case USB_ID(0x046d, 0x0809): + case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */ + case USB_ID(0x046d, 0x0991): + /* Most audio usb devices lie about volume resolution. + * Most Logitech webcams have res = 384. 
+ * Proboly there is some logitech magic behind this number --fishor + */ + if (!strcmp(kctl->id.name, "Mic Capture Volume")) { + snd_printk(KERN_INFO + "set resolution quirk: cval->res = 384\n"); + cval->res = 384; + } + break; + + } +} + /* * retrieve the minimum and maximum values for the specified control */ -static int get_min_max(struct usb_mixer_elem_info *cval, int default_min) +static int get_min_max_with_quirks(struct usb_mixer_elem_info *cval, + int default_min, struct snd_kcontrol *kctl) { /* for failsafe */ cval->min = default_min; @@ -844,6 +895,9 @@ static int get_min_max(struct usb_mixer_elem_info *cval, int default_min) cval->initialized = 1; } + if (kctl) + volume_control_quirks(cval, kctl); + /* USB descriptions contain the dB scale in 1/256 dB unit * while ALSA TLV contains in 1/100 dB unit */ @@ -864,6 +918,7 @@ static int get_min_max(struct usb_mixer_elem_info *cval, int default_min) return 0; } +#define get_min_max(cval, def) get_min_max_with_quirks(cval, def, NULL) /* get a feature/mixer unit info */ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_info *uinfo) @@ -882,7 +937,7 @@ static int mixer_ctl_feature_info(struct snd_kcontrol *kcontrol, struct snd_ctl_ uinfo->value.integer.max = 1; } else { if (!cval->initialized) { - get_min_max(cval, 0); + get_min_max_with_quirks(cval, 0, kcontrol); if (cval->initialized && cval->dBmin >= cval->dBmax) { kcontrol->vd[0].access &= ~(SNDRV_CTL_ELEM_ACCESS_TLV_READ | @@ -1045,9 +1100,6 @@ static void build_feature_ctl(struct mixer_build *state, void *raw_desc, cval->ch_readonly = readonly_mask; } - /* get min/max values */ - get_min_max(cval, 0); - /* if all channels in the mask are marked read-only, make the control * read-only. set_cur_mix_value() will check the mask again and won't * issue write commands to read-only channels. */ @@ -1069,6 +1121,9 @@ static void build_feature_ctl(struct mixer_build *state, void *raw_desc, len = snd_usb_copy_string_desc(state, nameid, kctl->id.name, sizeof(kctl->id.name)); + /* get min/max values */ + get_min_max_with_quirks(cval, 0, kctl); + switch (control) { case UAC_FU_MUTE: case UAC_FU_VOLUME: @@ -1118,51 +1173,6 @@ static void build_feature_ctl(struct mixer_build *state, void *raw_desc, break; } - /* volume control quirks */ - switch (state->chip->usb_id) { - case USB_ID(0x0471, 0x0101): - case USB_ID(0x0471, 0x0104): - case USB_ID(0x0471, 0x0105): - case USB_ID(0x0672, 0x1041): - /* quirk for UDA1321/N101. - * note that detection between firmware 2.1.1.7 (N101) - * and later 2.1.1.21 is not very clear from datasheets. - * I hope that the min value is -15360 for newer firmware --jk - */ - if (!strcmp(kctl->id.name, "PCM Playback Volume") && - cval->min == -15616) { - snd_printk(KERN_INFO - "set volume quirk for UDA1321/N101 chip\n"); - cval->max = -256; - } - break; - - case USB_ID(0x046d, 0x09a4): - if (!strcmp(kctl->id.name, "Mic Capture Volume")) { - snd_printk(KERN_INFO - "set volume quirk for QuickCam E3500\n"); - cval->min = 6080; - cval->max = 8768; - cval->res = 192; - } - break; - - case USB_ID(0x046d, 0x0808): - case USB_ID(0x046d, 0x0809): - case USB_ID(0x046d, 0x0991): - /* Most audio usb devices lie about volume resolution. - * Most Logitech webcams have res = 384. 
- * Proboly there is some logitech magic behind this number --fishor - */ - if (!strcmp(kctl->id.name, "Mic Capture Volume")) { - snd_printk(KERN_INFO - "set resolution quirk: cval->res = 384\n"); - cval->res = 384; - } - break; - - } - range = (cval->max - cval->min) / cval->res; /* Are there devices with volume range more than 255? I use a bit more * to be sure. 384 is a resolution magic number found on Logitech diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c index 2e5bc7344026..a3ddac0deffd 100644 --- a/sound/usb/quirks.c +++ b/sound/usb/quirks.c @@ -137,12 +137,12 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip, return -ENOMEM; } if (fp->nr_rates > 0) { - rate_table = kmalloc(sizeof(int) * fp->nr_rates, GFP_KERNEL); + rate_table = kmemdup(fp->rate_table, + sizeof(int) * fp->nr_rates, GFP_KERNEL); if (!rate_table) { kfree(fp); return -ENOMEM; } - memcpy(rate_table, fp->rate_table, sizeof(int) * fp->nr_rates); fp->rate_table = rate_table; } @@ -224,10 +224,9 @@ static int create_uaxx_quirk(struct snd_usb_audio *chip, if (altsd->bNumEndpoints != 1) return -ENXIO; - fp = kmalloc(sizeof(*fp), GFP_KERNEL); + fp = kmemdup(&ua_format, sizeof(*fp), GFP_KERNEL); if (!fp) return -ENOMEM; - memcpy(fp, &ua_format, sizeof(*fp)); fp->iface = altsd->bInterfaceNumber; fp->endpoint = get_endpoint(alts, 0)->bEndpointAddress; diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c index f82480fa7f27..6ab58cc99d53 100644 --- a/tools/perf/builtin-record.c +++ b/tools/perf/builtin-record.c @@ -262,13 +262,16 @@ static bool perf_evlist__equal(struct perf_evlist *evlist, static void open_counters(struct perf_evlist *evlist) { - struct perf_evsel *pos; + struct perf_evsel *pos, *first; if (evlist->cpus->map[0] < 0) no_inherit = true; + first = list_entry(evlist->entries.next, struct perf_evsel, node); + list_for_each_entry(pos, &evlist->entries, node) { struct perf_event_attr *attr = &pos->attr; + struct xyarray *group_fd = NULL; /* * Check if parse_single_tracepoint_event has already asked for * PERF_SAMPLE_TIME. @@ -283,15 +286,19 @@ static void open_counters(struct perf_evlist *evlist) */ bool time_needed = attr->sample_type & PERF_SAMPLE_TIME; + if (group && pos != first) + group_fd = first->fd; + config_attr(pos, evlist); retry_sample_id: attr->sample_id_all = sample_id_all_avail ? 
1 : 0; try_again: - if (perf_evsel__open(pos, evlist->cpus, evlist->threads, group) < 0) { + if (perf_evsel__open(pos, evlist->cpus, evlist->threads, group, + group_fd) < 0) { int err = errno; if (err == EPERM || err == EACCES) { - ui__warning_paranoid(); + ui__error_paranoid(); exit(EXIT_FAILURE); } else if (err == ENODEV && cpu_list) { die("No such device - did you specify" diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index 7ce65f52415e..7d98676808d8 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -278,9 +278,14 @@ struct stats runtime_itlb_cache_stats[MAX_NR_CPUS]; struct stats runtime_dtlb_cache_stats[MAX_NR_CPUS]; struct stats walltime_nsecs_stats; -static int create_perf_stat_counter(struct perf_evsel *evsel) +static int create_perf_stat_counter(struct perf_evsel *evsel, + struct perf_evsel *first) { struct perf_event_attr *attr = &evsel->attr; + struct xyarray *group_fd = NULL; + + if (group && evsel != first) + group_fd = first->fd; if (scale) attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED | @@ -289,14 +294,15 @@ static int create_perf_stat_counter(struct perf_evsel *evsel) attr->inherit = !no_inherit; if (system_wide) - return perf_evsel__open_per_cpu(evsel, evsel_list->cpus, group); - + return perf_evsel__open_per_cpu(evsel, evsel_list->cpus, + group, group_fd); if (target_pid == -1 && target_tid == -1) { attr->disabled = 1; attr->enable_on_exec = 1; } - return perf_evsel__open_per_thread(evsel, evsel_list->threads, group); + return perf_evsel__open_per_thread(evsel, evsel_list->threads, + group, group_fd); } /* @@ -396,7 +402,7 @@ static int read_counter(struct perf_evsel *counter) static int run_perf_stat(int argc __used, const char **argv) { unsigned long long t0, t1; - struct perf_evsel *counter; + struct perf_evsel *counter, *first; int status = 0; int child_ready_pipe[2], go_pipe[2]; const bool forks = (argc > 0); @@ -453,8 +459,10 @@ static int run_perf_stat(int argc __used, const char **argv) close(child_ready_pipe[0]); } + first = list_entry(evsel_list->entries.next, struct perf_evsel, node); + list_for_each_entry(counter, &evsel_list->entries, node) { - if (create_perf_stat_counter(counter) < 0) { + if (create_perf_stat_counter(counter, first) < 0) { if (errno == EINVAL || errno == ENOSYS || errno == ENOENT) { if (verbose) ui__warning("%s event is not supported by the kernel.\n", diff --git a/tools/perf/builtin-test.c b/tools/perf/builtin-test.c index efe696f936e2..831d1baeac37 100644 --- a/tools/perf/builtin-test.c +++ b/tools/perf/builtin-test.c @@ -291,7 +291,7 @@ static int test__open_syscall_event(void) goto out_thread_map_delete; } - if (perf_evsel__open_per_thread(evsel, threads, false) < 0) { + if (perf_evsel__open_per_thread(evsel, threads, false, NULL) < 0) { pr_debug("failed to open counter: %s, " "tweak /proc/sys/kernel/perf_event_paranoid?\n", strerror(errno)); @@ -366,7 +366,7 @@ static int test__open_syscall_event_on_all_cpus(void) goto out_thread_map_delete; } - if (perf_evsel__open(evsel, cpus, threads, false) < 0) { + if (perf_evsel__open(evsel, cpus, threads, false, NULL) < 0) { pr_debug("failed to open counter: %s, " "tweak /proc/sys/kernel/perf_event_paranoid?\n", strerror(errno)); @@ -531,7 +531,7 @@ static int test__basic_mmap(void) perf_evlist__add(evlist, evsels[i]); - if (perf_evsel__open(evsels[i], cpus, threads, false) < 0) { + if (perf_evsel__open(evsels[i], cpus, threads, false, NULL) < 0) { pr_debug("failed to open counter: %s, " "tweak /proc/sys/kernel/perf_event_paranoid?\n", 
strerror(errno)); diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c index 7a871714d44e..c9cdedb58134 100644 --- a/tools/perf/builtin-top.c +++ b/tools/perf/builtin-top.c @@ -89,6 +89,7 @@ static bool vmlinux_warned; static bool inherit = false; static int realtime_prio = 0; static bool group = false; +static bool sample_id_all_avail = true; static unsigned int mmap_pages = 128; static bool dump_symtab = false; @@ -199,7 +200,8 @@ static void record_precise_ip(struct hist_entry *he, int counter, u64 ip) struct symbol *sym; if (he == NULL || he->ms.sym == NULL || - (he != top.sym_filter_entry && use_browser != 1)) + ((top.sym_filter_entry == NULL || + top.sym_filter_entry->ms.sym != he->ms.sym) && use_browser != 1)) return; sym = he->ms.sym; @@ -289,11 +291,13 @@ static void print_sym_table(void) printf("%-*.*s\n", win_width, win_width, graph_dotted_line); - if (top.total_lost_warned != top.session->hists.stats.total_lost) { - top.total_lost_warned = top.session->hists.stats.total_lost; - color_fprintf(stdout, PERF_COLOR_RED, "WARNING:"); - printf(" LOST %" PRIu64 " events, Check IO/CPU overload\n", - top.total_lost_warned); + if (top.sym_evsel->hists.stats.nr_lost_warned != + top.sym_evsel->hists.stats.nr_events[PERF_RECORD_LOST]) { + top.sym_evsel->hists.stats.nr_lost_warned = + top.sym_evsel->hists.stats.nr_events[PERF_RECORD_LOST]; + color_fprintf(stdout, PERF_COLOR_RED, + "WARNING: LOST %d chunks, Check IO/CPU overload", + top.sym_evsel->hists.stats.nr_lost_warned); ++printed; } @@ -561,7 +565,6 @@ static void perf_top__sort_new_samples(void *arg) hists__decay_entries_threaded(&t->sym_evsel->hists, top.hide_user_symbols, top.hide_kernel_symbols); - hists__output_recalc_col_len(&t->sym_evsel->hists, winsize.ws_row - 3); } static void *display_thread_tui(void *arg __used) @@ -671,6 +674,7 @@ static int symbol_filter(struct map *map __used, struct symbol *sym) } static void perf_event__process_sample(const union perf_event *event, + struct perf_evsel *evsel, struct perf_sample *sample, struct perf_session *session) { @@ -770,12 +774,8 @@ static void perf_event__process_sample(const union perf_event *event, } if (al.sym == NULL || !al.sym->ignore) { - struct perf_evsel *evsel; struct hist_entry *he; - evsel = perf_evlist__id2evsel(top.evlist, sample->id); - assert(evsel != NULL); - if ((sort__has_parent || symbol_conf.use_callchain) && sample->callchain) { err = perf_session__resolve_callchain(session, al.thread, @@ -807,6 +807,7 @@ static void perf_event__process_sample(const union perf_event *event, static void perf_session__mmap_read_idx(struct perf_session *self, int idx) { struct perf_sample sample; + struct perf_evsel *evsel; union perf_event *event; int ret; @@ -817,10 +818,16 @@ static void perf_session__mmap_read_idx(struct perf_session *self, int idx) continue; } + evsel = perf_evlist__id2evsel(self->evlist, sample.id); + assert(evsel != NULL); + if (event->header.type == PERF_RECORD_SAMPLE) - perf_event__process_sample(event, &sample, self); - else + perf_event__process_sample(event, evsel, &sample, self); + else if (event->header.type < PERF_RECORD_MAX) { + hists__inc_nr_events(&evsel->hists, event->header.type); perf_event__process(event, &sample, self); + } else + ++self->hists.stats.nr_unknown_events; } } @@ -834,10 +841,16 @@ static void perf_session__mmap_read(struct perf_session *self) static void start_counters(struct perf_evlist *evlist) { - struct perf_evsel *counter; + struct perf_evsel *counter, *first; + + first = list_entry(evlist->entries.next, 
struct perf_evsel, node); list_for_each_entry(counter, &evlist->entries, node) { struct perf_event_attr *attr = &counter->attr; + struct xyarray *group_fd = NULL; + + if (group && counter != first) + group_fd = first->fd; attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID; @@ -858,14 +871,23 @@ static void start_counters(struct perf_evlist *evlist) attr->mmap = 1; attr->comm = 1; attr->inherit = inherit; +retry_sample_id: + attr->sample_id_all = sample_id_all_avail ? 1 : 0; try_again: if (perf_evsel__open(counter, top.evlist->cpus, - top.evlist->threads, group) < 0) { + top.evlist->threads, group, + group_fd) < 0) { int err = errno; if (err == EPERM || err == EACCES) { - ui__warning_paranoid(); + ui__error_paranoid(); goto out_err; + } else if (err == EINVAL && sample_id_all_avail) { + /* + * Old kernel, no attr->sample_id_type_all field + */ + sample_id_all_avail = false; + goto retry_sample_id; } /* * If it's cycles then fall back to hrtimer diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c index bc8f4773d4d8..119e996035c8 100644 --- a/tools/perf/util/annotate.c +++ b/tools/perf/util/annotate.c @@ -310,9 +310,12 @@ fallback: } err = -ENOENT; dso->annotate_warned = 1; - pr_err("Can't annotate %s: No vmlinux file%s was found in the " - "path.\nPlease use 'perf buildid-cache -av vmlinux' or " - "--vmlinux vmlinux.\n", + pr_err("Can't annotate %s:\n\n" + "No vmlinux file%s\nwas found in the path.\n\n" + "Please use:\n\n" + " perf buildid-cache -av vmlinux\n\n" + "or:\n\n" + " --vmlinux vmlinux", sym->name, build_id_msg ?: ""); goto out_free_filename; } diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c index 155749d74350..26817daa2961 100644 --- a/tools/perf/util/debug.c +++ b/tools/perf/util/debug.c @@ -47,19 +47,20 @@ int dump_printf(const char *fmt, ...) } #ifdef NO_NEWT_SUPPORT -void ui__warning(const char *format, ...) +int ui__warning(const char *format, ...) { va_list args; va_start(args, format); vfprintf(stderr, format, args); va_end(args); + return 0; } #endif -void ui__warning_paranoid(void) +int ui__error_paranoid(void) { - ui__warning("Permission error - are you root?\n" + return ui__error("Permission error - are you root?\n" "Consider tweaking /proc/sys/kernel/perf_event_paranoid:\n" " -1 - Not paranoid at all\n" " 0 - Disallow raw tracepoint access for unpriv\n" diff --git a/tools/perf/util/debug.h b/tools/perf/util/debug.h index fd53db47e3de..f2ce88d04f54 100644 --- a/tools/perf/util/debug.h +++ b/tools/perf/util/debug.h @@ -19,23 +19,18 @@ static inline int ui_helpline__show_help(const char *format __used, va_list ap _ return 0; } -static inline struct ui_progress *ui_progress__new(const char *title __used, - u64 total __used) -{ - return (struct ui_progress *)1; -} - -static inline void ui_progress__update(struct ui_progress *self __used, - u64 curr __used) {} +static inline void ui_progress__update(u64 curr __used, u64 total __used, + const char *title __used) {} -static inline void ui_progress__delete(struct ui_progress *self __used) {} +#define ui__error(format, arg...) ui__warning(format, ##arg) #else extern char ui_helpline__last_msg[]; int ui_helpline__show_help(const char *format, va_list ap); #include "ui/progress.h" +int ui__error(const char *format, ...) __attribute__((format(printf, 1, 2))); #endif -void ui__warning(const char *format, ...) __attribute__((format(printf, 1, 2))); -void ui__warning_paranoid(void); +int ui__warning(const char *format, ...) 
__attribute__((format(printf, 1, 2))); +int ui__error_paranoid(void); #endif /* __PERF_DEBUG_H */ diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c index 2f6bc89027da..fbb4b4ab9cc6 100644 --- a/tools/perf/util/evlist.c +++ b/tools/perf/util/evlist.c @@ -539,3 +539,33 @@ void perf_evlist__set_selected(struct perf_evlist *evlist, { evlist->selected = evsel; } + +int perf_evlist__open(struct perf_evlist *evlist, bool group) +{ + struct perf_evsel *evsel, *first; + int err, ncpus, nthreads; + + first = list_entry(evlist->entries.next, struct perf_evsel, node); + + list_for_each_entry(evsel, &evlist->entries, node) { + struct xyarray *group_fd = NULL; + + if (group && evsel != first) + group_fd = first->fd; + + err = perf_evsel__open(evsel, evlist->cpus, evlist->threads, + group, group_fd); + if (err < 0) + goto out_err; + } + + return 0; +out_err: + ncpus = evlist->cpus ? evlist->cpus->nr : 1; + nthreads = evlist->threads ? evlist->threads->nr : 1; + + list_for_each_entry_reverse(evsel, &evlist->entries, node) + perf_evsel__close(evsel, ncpus, nthreads); + + return err; +} diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h index 6be71fc57794..1779ffef7828 100644 --- a/tools/perf/util/evlist.h +++ b/tools/perf/util/evlist.h @@ -50,6 +50,8 @@ struct perf_evsel *perf_evlist__id2evsel(struct perf_evlist *evlist, u64 id); union perf_event *perf_evlist__mmap_read(struct perf_evlist *self, int idx); +int perf_evlist__open(struct perf_evlist *evlist, bool group); + int perf_evlist__alloc_mmap(struct perf_evlist *evlist); int perf_evlist__mmap(struct perf_evlist *evlist, int pages, bool overwrite); void perf_evlist__munmap(struct perf_evlist *evlist); diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c index b46f6e4bff3c..e42626422587 100644 --- a/tools/perf/util/evsel.c +++ b/tools/perf/util/evsel.c @@ -16,6 +16,7 @@ #include "thread_map.h" #define FD(e, x, y) (*(int *)xyarray__entry(e->fd, x, y)) +#define GROUP_FD(group_fd, cpu) (*(int *)xyarray__entry(group_fd, cpu, 0)) int __perf_evsel__sample_size(u64 sample_type) { @@ -204,15 +205,16 @@ int __perf_evsel__read(struct perf_evsel *evsel, } static int __perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus, - struct thread_map *threads, bool group) + struct thread_map *threads, bool group, + struct xyarray *group_fds) { int cpu, thread; unsigned long flags = 0; - int pid = -1; + int pid = -1, err; if (evsel->fd == NULL && perf_evsel__alloc_fd(evsel, cpus->nr, threads->nr) < 0) - return -1; + return -ENOMEM; if (evsel->cgrp) { flags = PERF_FLAG_PID_CGROUP; @@ -220,7 +222,7 @@ static int __perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus, } for (cpu = 0; cpu < cpus->nr; cpu++) { - int group_fd = -1; + int group_fd = group_fds ? 
GROUP_FD(group_fds, cpu) : -1; for (thread = 0; thread < threads->nr; thread++) { @@ -231,8 +233,10 @@ static int __perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus, pid, cpus->map[cpu], group_fd, flags); - if (FD(evsel, cpu, thread) < 0) + if (FD(evsel, cpu, thread) < 0) { + err = -errno; goto out_close; + } if (group && group_fd == -1) group_fd = FD(evsel, cpu, thread); @@ -249,7 +253,17 @@ out_close: } thread = threads->nr; } while (--cpu >= 0); - return -1; + return err; +} + +void perf_evsel__close(struct perf_evsel *evsel, int ncpus, int nthreads) +{ + if (evsel->fd == NULL) + return; + + perf_evsel__close_fd(evsel, ncpus, nthreads); + perf_evsel__free_fd(evsel); + evsel->fd = NULL; } static struct { @@ -269,7 +283,8 @@ static struct { }; int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus, - struct thread_map *threads, bool group) + struct thread_map *threads, bool group, + struct xyarray *group_fd) { if (cpus == NULL) { /* Work around old compiler warnings about strict aliasing */ @@ -279,19 +294,23 @@ int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus, if (threads == NULL) threads = &empty_thread_map.map; - return __perf_evsel__open(evsel, cpus, threads, group); + return __perf_evsel__open(evsel, cpus, threads, group, group_fd); } int perf_evsel__open_per_cpu(struct perf_evsel *evsel, - struct cpu_map *cpus, bool group) + struct cpu_map *cpus, bool group, + struct xyarray *group_fd) { - return __perf_evsel__open(evsel, cpus, &empty_thread_map.map, group); + return __perf_evsel__open(evsel, cpus, &empty_thread_map.map, group, + group_fd); } int perf_evsel__open_per_thread(struct perf_evsel *evsel, - struct thread_map *threads, bool group) + struct thread_map *threads, bool group, + struct xyarray *group_fd) { - return __perf_evsel__open(evsel, &empty_cpu_map.map, threads, group); + return __perf_evsel__open(evsel, &empty_cpu_map.map, threads, group, + group_fd); } static int perf_event__parse_id_sample(const union perf_event *event, u64 type, diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h index e9a31554e265..b1d15e6f7ae3 100644 --- a/tools/perf/util/evsel.h +++ b/tools/perf/util/evsel.h @@ -82,11 +82,15 @@ void perf_evsel__free_id(struct perf_evsel *evsel); void perf_evsel__close_fd(struct perf_evsel *evsel, int ncpus, int nthreads); int perf_evsel__open_per_cpu(struct perf_evsel *evsel, - struct cpu_map *cpus, bool group); + struct cpu_map *cpus, bool group, + struct xyarray *group_fds); int perf_evsel__open_per_thread(struct perf_evsel *evsel, - struct thread_map *threads, bool group); + struct thread_map *threads, bool group, + struct xyarray *group_fds); int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus, - struct thread_map *threads, bool group); + struct thread_map *threads, bool group, + struct xyarray *group_fds); +void perf_evsel__close(struct perf_evsel *evsel, int ncpus, int nthreads); #define perf_evsel__match(evsel, t, c) \ (evsel->attr.type == PERF_TYPE_##t && \ diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c index 76c0b2c49eb8..bcd05d05b4f0 100644 --- a/tools/perf/util/header.c +++ b/tools/perf/util/header.c @@ -1,5 +1,6 @@ #define _FILE_OFFSET_BITS 64 +#include "util.h" #include <sys/types.h> #include <byteswap.h> #include <unistd.h> @@ -11,7 +12,6 @@ #include "evlist.h" #include "evsel.h" -#include "util.h" #include "header.h" #include "../perf.h" #include "trace-event.h" diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c index f6a993963a1e..a36a3fa81ffb 
100644 --- a/tools/perf/util/hist.c +++ b/tools/perf/util/hist.c @@ -365,7 +365,6 @@ static void __hists__collapse_resort(struct hists *hists, bool threaded) root = hists__get_rotate_entries_in(hists); next = rb_first(root); - hists->stats.total_period = 0; while (next) { n = rb_entry(next, struct hist_entry, rb_node_in); @@ -379,7 +378,6 @@ static void __hists__collapse_resort(struct hists *hists, bool threaded) * been set by, say, the hist_browser. */ hists__apply_filters(hists, n); - hists__inc_nr_entries(hists, n); } } } @@ -442,6 +440,7 @@ static void __hists__output_resort(struct hists *hists, bool threaded) hists->entries = RB_ROOT; hists->nr_entries = 0; + hists->stats.total_period = 0; hists__reset_col_len(hists); while (next) { diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h index ff93ddc91c5c..c86c1d27bd1e 100644 --- a/tools/perf/util/hist.h +++ b/tools/perf/util/hist.h @@ -28,6 +28,7 @@ struct events_stats { u64 total_lost; u64 total_invalid_chains; u32 nr_events[PERF_RECORD_HEADER_MAX]; + u32 nr_lost_warned; u32 nr_unknown_events; u32 nr_invalid_chains; u32 nr_unknown_id; diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c index 7624324efad4..9dd47a4f2596 100644 --- a/tools/perf/util/python.c +++ b/tools/perf/util/python.c @@ -623,7 +623,11 @@ static PyObject *pyrf_evsel__open(struct pyrf_evsel *pevsel, cpus = ((struct pyrf_cpu_map *)pcpus)->cpus; evsel->attr.inherit = inherit; - if (perf_evsel__open(evsel, cpus, threads, group) < 0) { + /* + * This will group just the fds for this single evsel, to group + * multiple events, use evlist.open(). + */ + if (perf_evsel__open(evsel, cpus, threads, group, NULL) < 0) { PyErr_SetFromErrno(PyExc_OSError); return NULL; } @@ -814,6 +818,25 @@ static PyObject *pyrf_evlist__read_on_cpu(struct pyrf_evlist *pevlist, return Py_None; } +static PyObject *pyrf_evlist__open(struct pyrf_evlist *pevlist, + PyObject *args, PyObject *kwargs) +{ + struct perf_evlist *evlist = &pevlist->evlist; + int group = 0; + static char *kwlist[] = { "group", NULL }; + + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OOii", kwlist, &group)) + return NULL; + + if (perf_evlist__open(evlist, group) < 0) { + PyErr_SetFromErrno(PyExc_OSError); + return NULL; + } + + Py_INCREF(Py_None); + return Py_None; +} + static PyMethodDef pyrf_evlist__methods[] = { { .ml_name = "mmap", @@ -822,6 +845,12 @@ static PyMethodDef pyrf_evlist__methods[] = { .ml_doc = PyDoc_STR("mmap the file descriptor table.") }, { + .ml_name = "open", + .ml_meth = (PyCFunction)pyrf_evlist__open, + .ml_flags = METH_VARARGS | METH_KEYWORDS, + .ml_doc = PyDoc_STR("open the file descriptors.") + }, + { .ml_name = "poll", .ml_meth = (PyCFunction)pyrf_evlist__poll, .ml_flags = METH_VARARGS | METH_KEYWORDS, diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index 20e011c99a94..85c1e6b76f0a 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -502,6 +502,7 @@ static void flush_sample_queue(struct perf_session *s, struct perf_sample sample; u64 limit = os->next_flush; u64 last_ts = os->last_sample ? 
os->last_sample->timestamp : 0ULL; + unsigned idx = 0, progress_next = os->nr_samples / 16; int ret; if (!ops->ordered_samples || !limit) @@ -521,6 +522,11 @@ static void flush_sample_queue(struct perf_session *s, os->last_flush = iter->timestamp; list_del(&iter->list); list_add(&iter->list, &os->sample_cache); + if (++idx >= progress_next) { + progress_next += os->nr_samples / 16; + ui_progress__update(idx, os->nr_samples, + "Processing time ordered events..."); + } } if (list_empty(head)) { @@ -529,6 +535,8 @@ static void flush_sample_queue(struct perf_session *s, os->last_sample = list_entry(head->prev, struct sample_queue, list); } + + os->nr_samples = 0; } /* @@ -588,6 +596,7 @@ static void __queue_event(struct sample_queue *new, struct perf_session *s) u64 timestamp = new->timestamp; struct list_head *p; + ++os->nr_samples; os->last_sample = new; if (!sample) { @@ -738,10 +747,27 @@ static int perf_session_deliver_event(struct perf_session *session, dump_event(session, event, file_offset, sample); + evsel = perf_evlist__id2evsel(session->evlist, sample->id); + if (evsel != NULL && event->header.type != PERF_RECORD_SAMPLE) { + /* + * XXX We're leaving PERF_RECORD_SAMPLE unnacounted here + * because the tools right now may apply filters, discarding + * some of the samples. For consistency, in the future we + * should have something like nr_filtered_samples and remove + * the sample->period from total_sample_period, etc, KISS for + * now tho. + * + * Also testing against NULL allows us to handle files without + * attr.sample_id_all and/or without PERF_SAMPLE_ID. In the + * future probably it'll be a good idea to restrict event + * processing via perf_session to files with both set. + */ + hists__inc_nr_events(&evsel->hists, event->header.type); + } + switch (event->header.type) { case PERF_RECORD_SAMPLE: dump_sample(session, event, sample); - evsel = perf_evlist__id2evsel(session->evlist, sample->id); if (evsel == NULL) { ++session->hists.stats.nr_unknown_id; return -1; @@ -874,11 +900,11 @@ static void perf_session__warn_about_errors(const struct perf_session *session, const struct perf_event_ops *ops) { if (ops->lost == perf_event__process_lost && - session->hists.stats.total_lost != 0) { - ui__warning("Processed %" PRIu64 " events and LOST %" PRIu64 - "!\n\nCheck IO/CPU overload!\n\n", - session->hists.stats.total_period, - session->hists.stats.total_lost); + session->hists.stats.nr_events[PERF_RECORD_LOST] != 0) { + ui__warning("Processed %d events and lost %d chunks!\n\n" + "Check IO/CPU overload!\n\n", + session->hists.stats.nr_events[0], + session->hists.stats.nr_events[PERF_RECORD_LOST]); } if (session->hists.stats.nr_unknown_events != 0) { @@ -1012,7 +1038,6 @@ int __perf_session__process_events(struct perf_session *session, { u64 head, page_offset, file_offset, file_pos, progress_next; int err, mmap_prot, mmap_flags, map_idx = 0; - struct ui_progress *progress; size_t page_size, mmap_size; char *buf, *mmaps[8]; union perf_event *event; @@ -1030,9 +1055,6 @@ int __perf_session__process_events(struct perf_session *session, file_size = data_offset + data_size; progress_next = file_size / 16; - progress = ui_progress__new("Processing events...", file_size); - if (progress == NULL) - return -1; mmap_size = session->mmap_window; if (mmap_size > file_size) @@ -1095,7 +1117,8 @@ more: if (file_pos >= progress_next) { progress_next += file_size / 16; - ui_progress__update(progress, file_pos); + ui_progress__update(file_pos, file_size, + "Processing events..."); } if (file_pos < 
file_size) @@ -1106,7 +1129,6 @@ more: session->ordered_samples.next_flush = ULLONG_MAX; flush_sample_queue(session, ops); out_err: - ui_progress__delete(progress); perf_session__warn_about_errors(session, ops); perf_session_free_sample_buffers(session); return err; diff --git a/tools/perf/util/session.h b/tools/perf/util/session.h index 514b06d41f05..6e393c98eb34 100644 --- a/tools/perf/util/session.h +++ b/tools/perf/util/session.h @@ -23,6 +23,7 @@ struct ordered_samples { struct sample_queue *sample_buffer; struct sample_queue *last_sample; int sample_buffer_idx; + unsigned int nr_samples; }; struct perf_session { diff --git a/tools/perf/util/top.h b/tools/perf/util/top.h index 01d1057f3074..399650967958 100644 --- a/tools/perf/util/top.h +++ b/tools/perf/util/top.h @@ -19,7 +19,6 @@ struct perf_top { u64 kernel_samples, us_samples; u64 exact_samples; u64 guest_us_samples, guest_kernel_samples; - u64 total_lost_warned; int print_entries, count_filter, delay_secs; int freq; pid_t target_pid, target_tid; diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c index 2d530cf74f43..d2655f08bcc0 100644 --- a/tools/perf/util/trace-event-info.c +++ b/tools/perf/util/trace-event-info.c @@ -80,7 +80,7 @@ static void die(const char *fmt, ...) int ret = errno; if (errno) - perror("trace-cmd"); + perror("perf"); else ret = -1; diff --git a/tools/perf/util/ui/browser.c b/tools/perf/util/ui/browser.c index 5359f371d30a..556829124b02 100644 --- a/tools/perf/util/ui/browser.c +++ b/tools/perf/util/ui/browser.c @@ -4,6 +4,7 @@ #include "libslang.h" #include <newt.h> #include "ui.h" +#include "util.h" #include <linux/compiler.h> #include <linux/list.h> #include <linux/rbtree.h> @@ -168,6 +169,59 @@ void ui_browser__refresh_dimensions(struct ui_browser *self) self->x = 0; } +void ui_browser__handle_resize(struct ui_browser *browser) +{ + ui__refresh_dimensions(false); + ui_browser__show(browser, browser->title, ui_helpline__current); + ui_browser__refresh(browser); +} + +int ui_browser__warning(struct ui_browser *browser, int timeout, + const char *format, ...) +{ + va_list args; + char *text; + int key = 0, err; + + va_start(args, format); + err = vasprintf(&text, format, args); + va_end(args); + + if (err < 0) { + va_start(args, format); + ui_helpline__vpush(format, args); + va_end(args); + } else { + while ((key == ui__question_window("Warning!", text, + "Press any key...", + timeout)) == K_RESIZE) + ui_browser__handle_resize(browser); + free(text); + } + + return key; +} + +int ui_browser__help_window(struct ui_browser *browser, const char *text) +{ + int key; + + while ((key = ui__help_window(text)) == K_RESIZE) + ui_browser__handle_resize(browser); + + return key; +} + +bool ui_browser__dialog_yesno(struct ui_browser *browser, const char *text) +{ + int key; + + while ((key = ui__dialog_yesno(text)) == K_RESIZE) + ui_browser__handle_resize(browser); + + return key == K_ENTER || toupper(key) == 'Y'; +} + void ui_browser__reset_index(struct ui_browser *self) { self->index = self->top_idx = 0; @@ -230,13 +284,15 @@ static void ui_browser__scrollbar_set(struct ui_browser *browser) (browser->nr_entries - 1)); } + SLsmg_set_char_set(1); + while (h < height) { ui_browser__gotorc(browser, row++, col); - SLsmg_set_char_set(1); - SLsmg_write_char(h == pct ? SLSMG_DIAMOND_CHAR : SLSMG_BOARD_CHAR); - SLsmg_set_char_set(0); + SLsmg_write_char(h == pct ? 
SLSMG_DIAMOND_CHAR : SLSMG_CKBRD_CHAR); ++h; } + + SLsmg_set_char_set(0); } static int __ui_browser__refresh(struct ui_browser *browser) @@ -291,53 +347,10 @@ void ui_browser__update_nr_entries(struct ui_browser *browser, u32 nr_entries) browser->seek(browser, browser->top_idx, SEEK_SET); } -static int ui__getch(int delay_secs) -{ - struct timeval timeout, *ptimeout = delay_secs ? &timeout : NULL; - fd_set read_set; - int err, key; - - FD_ZERO(&read_set); - FD_SET(0, &read_set); - - if (delay_secs) { - timeout.tv_sec = delay_secs; - timeout.tv_usec = 0; - } - - err = select(1, &read_set, NULL, NULL, ptimeout); - - if (err == 0) - return K_TIMER; - - if (err == -1) { - if (errno == EINTR) - return K_RESIZE; - return K_ERROR; - } - - key = SLang_getkey(); - if (key != K_ESC) - return key; - - FD_ZERO(&read_set); - FD_SET(0, &read_set); - timeout.tv_sec = 0; - timeout.tv_usec = 20; - err = select(1, &read_set, NULL, NULL, &timeout); - if (err == 0) - return K_ESC; - - SLang_ungetkey(key); - return SLkp_getkey(); -} - int ui_browser__run(struct ui_browser *self, int delay_secs) { int err, key; - pthread__unblock_sigwinch(); - while (1) { off_t offset; @@ -351,10 +364,7 @@ int ui_browser__run(struct ui_browser *self, int delay_secs) key = ui__getch(delay_secs); if (key == K_RESIZE) { - pthread_mutex_lock(&ui__lock); - SLtt_get_screen_size(); - SLsmg_reinit_smg(); - pthread_mutex_unlock(&ui__lock); + ui__refresh_dimensions(false); ui_browser__refresh_dimensions(self); __ui_browser__show_title(self, self->title); ui_helpline__puts(self->helpline); @@ -533,6 +543,47 @@ static int ui_browser__color_config(const char *var, const char *value, return -1; } +void ui_browser__argv_seek(struct ui_browser *browser, off_t offset, int whence) +{ + switch (whence) { + case SEEK_SET: + browser->top = browser->entries; + break; + case SEEK_CUR: + browser->top = browser->top + browser->top_idx + offset; + break; + case SEEK_END: + browser->top = browser->top + browser->nr_entries + offset; + break; + default: + return; + } +} + +unsigned int ui_browser__argv_refresh(struct ui_browser *browser) +{ + unsigned int row = 0, idx = browser->top_idx; + char **pos; + + if (browser->top == NULL) + browser->top = browser->entries; + + pos = (char **)browser->top; + while (idx < browser->nr_entries) { + if (!browser->filter || !browser->filter(browser, *pos)) { + ui_browser__gotorc(browser, row, 0); + browser->write(browser, pos, row); + if (++row == browser->height) + break; + } + + ++idx; + ++pos; + } + + return row; +} + void ui_browser__init(void) { int i = 0; diff --git a/tools/perf/util/ui/browser.h b/tools/perf/util/ui/browser.h index a2c707d33c5e..84d761b730c1 100644 --- a/tools/perf/util/ui/browser.h +++ b/tools/perf/util/ui/browser.h @@ -43,6 +43,15 @@ void ui_browser__hide(struct ui_browser *self); int ui_browser__refresh(struct ui_browser *self); int ui_browser__run(struct ui_browser *browser, int delay_secs); void ui_browser__update_nr_entries(struct ui_browser *browser, u32 nr_entries); +void ui_browser__handle_resize(struct ui_browser *browser); + +int ui_browser__warning(struct ui_browser *browser, int timeout, + const char *format, ...); +int ui_browser__help_window(struct ui_browser *browser, const char *text); +bool ui_browser__dialog_yesno(struct ui_browser *browser, const char *text); + +void ui_browser__argv_seek(struct ui_browser *browser, off_t offset, int whence); +unsigned int ui_browser__argv_refresh(struct ui_browser *browser); void ui_browser__rb_tree_seek(struct ui_browser *self, off_t 
offset, int whence); unsigned int ui_browser__rb_tree_refresh(struct ui_browser *self); diff --git a/tools/perf/util/ui/browsers/annotate.c b/tools/perf/util/ui/browsers/annotate.c index 4e0cb7fea7d9..0575905d1205 100644 --- a/tools/perf/util/ui/browsers/annotate.c +++ b/tools/perf/util/ui/browsers/annotate.c @@ -1,6 +1,9 @@ +#include "../../util.h" #include "../browser.h" #include "../helpline.h" #include "../libslang.h" +#include "../ui.h" +#include "../util.h" #include "../../annotate.h" #include "../../hist.h" #include "../../sort.h" @@ -8,15 +11,6 @@ #include <pthread.h> #include <newt.h> -static void ui__error_window(const char *fmt, ...) -{ - va_list ap; - - va_start(ap, fmt); - newtWinMessagev((char *)"Error", (char *)"Ok", (char *)fmt, ap); - va_end(ap); -} - struct annotate_browser { struct ui_browser b; struct rb_root entries; @@ -400,7 +394,7 @@ int symbol__tui_annotate(struct symbol *sym, struct map *map, int evidx, return -1; if (symbol__annotate(sym, map, sizeof(struct objdump_line_rb_node)) < 0) { - ui__error_window(ui_helpline__last_msg); + ui__error("%s", ui_helpline__last_msg); return -1; } diff --git a/tools/perf/util/ui/browsers/hists.c b/tools/perf/util/ui/browsers/hists.c index 4663dcb2a19b..d0c94b459685 100644 --- a/tools/perf/util/ui/browsers/hists.c +++ b/tools/perf/util/ui/browsers/hists.c @@ -17,6 +17,7 @@ #include "../browser.h" #include "../helpline.h" #include "../util.h" +#include "../ui.h" #include "map.h" struct hist_browser { @@ -294,6 +295,15 @@ static void hist_browser__set_folding(struct hist_browser *self, bool unfold) ui_browser__reset_index(&self->b); } +static void ui_browser__warn_lost_events(struct ui_browser *browser) +{ + ui_browser__warning(browser, 4, + "Events are being lost, check IO/CPU overload!\n\n" + "You may want to run 'perf' using a RT scheduler policy:\n\n" + " perf top -r 80\n\n" + "Or reduce the sampling frequency."); +} + static int hist_browser__run(struct hist_browser *self, const char *ev_name, void(*timer)(void *arg), void *arg, int delay_secs) { @@ -314,12 +324,18 @@ static int hist_browser__run(struct hist_browser *self, const char *ev_name, key = ui_browser__run(&self->b, delay_secs); switch (key) { - case -1: - /* FIXME we need to check if it was es.reason == NEWT_EXIT_TIMER */ + case K_TIMER: timer(arg); ui_browser__update_nr_entries(&self->b, self->hists->nr_entries); - hists__browser_title(self->hists, title, sizeof(title), - ev_name); + + if (self->hists->stats.nr_lost_warned != + self->hists->stats.nr_events[PERF_RECORD_LOST]) { + self->hists->stats.nr_lost_warned = + self->hists->stats.nr_events[PERF_RECORD_LOST]; + ui_browser__warn_lost_events(&self->b); + } + + hists__browser_title(self->hists, title, sizeof(title), ev_name); ui_browser__show_title(&self->b, title); continue; case 'D': { /* Debug */ @@ -883,7 +899,7 @@ static int perf_evsel__hists_browse(struct perf_evsel *evsel, int nr_events, goto out_free_stack; case 'a': if (!browser->has_symbols) { - ui__warning( + ui_browser__warning(&browser->b, delay_secs * 2, "Annotation is only available for symbolic views, " "include \"sym\" in --sort to use it."); continue; @@ -901,7 +917,8 @@ static int perf_evsel__hists_browse(struct perf_evsel *evsel, int nr_events, case K_F1: case 'h': case '?': - ui__help_window("h/?/F1 Show this window\n" + ui_browser__help_window(&browser->b, + "h/?/F1 Show this window\n" "UP/DOWN/PGUP\n" "PGDN/SPACE Navigate\n" "q/ESC/CTRL+C Exit browser\n\n" @@ -914,7 +931,7 @@ static int perf_evsel__hists_browse(struct perf_evsel *evsel, int 
nr_events, "C Collapse all callchains\n" "E Expand all callchains\n" "d Zoom into current DSO\n" - "t Zoom into current Thread\n"); + "t Zoom into current Thread"); continue; case K_ENTER: case K_RIGHT: @@ -940,7 +957,8 @@ static int perf_evsel__hists_browse(struct perf_evsel *evsel, int nr_events, } case K_ESC: if (!left_exits && - !ui__dialog_yesno("Do you really want to exit?")) + !ui_browser__dialog_yesno(&browser->b, + "Do you really want to exit?")) continue; /* Fall thru */ case 'q': @@ -993,6 +1011,7 @@ add_exit_option: if (choice == annotate) { struct hist_entry *he; + int err; do_annotate: he = hist_browser__selected_entry(browser); if (he == NULL) @@ -1001,10 +1020,12 @@ do_annotate: * Don't let this be freed, say, by hists__decay_entry. */ he->used = true; - hist_entry__tui_annotate(he, evsel->idx, nr_events, - timer, arg, delay_secs); + err = hist_entry__tui_annotate(he, evsel->idx, nr_events, + timer, arg, delay_secs); he->used = false; ui_browser__update_nr_entries(&browser->b, browser->hists->nr_entries); + if (err) + ui_browser__handle_resize(&browser->b); } else if (choice == browse_map) map__browse(browser->selection->map); else if (choice == zoom_dso) { @@ -1056,6 +1077,7 @@ out: struct perf_evsel_menu { struct ui_browser b; struct perf_evsel *selection; + bool lost_events, lost_events_warned; }; static void perf_evsel_menu__write(struct ui_browser *browser, @@ -1068,14 +1090,29 @@ static void perf_evsel_menu__write(struct ui_browser *browser, unsigned long nr_events = evsel->hists.stats.nr_events[PERF_RECORD_SAMPLE]; const char *ev_name = event_name(evsel); char bf[256], unit; + const char *warn = " "; + size_t printed; ui_browser__set_color(browser, current_entry ? HE_COLORSET_SELECTED : HE_COLORSET_NORMAL); nr_events = convert_unit(nr_events, &unit); - snprintf(bf, sizeof(bf), "%lu%c%s%s", nr_events, - unit, unit == ' ' ? "" : " ", ev_name); - slsmg_write_nstring(bf, browser->width); + printed = snprintf(bf, sizeof(bf), "%lu%c%s%s", nr_events, + unit, unit == ' ' ? "" : " ", ev_name); + slsmg_printf("%s", bf); + + nr_events = evsel->hists.stats.nr_events[PERF_RECORD_LOST]; + if (nr_events != 0) { + menu->lost_events = true; + if (!current_entry) + ui_browser__set_color(browser, HE_COLORSET_TOP); + nr_events = convert_unit(nr_events, &unit); + snprintf(bf, sizeof(bf), ": %ld%c%schunks LOST!", nr_events, + unit, unit == ' ' ? 
"" : " "); + warn = bf; + } + + slsmg_write_nstring(warn, browser->width - printed); if (current_entry) menu->selection = evsel; @@ -1100,6 +1137,11 @@ static int perf_evsel_menu__run(struct perf_evsel_menu *menu, switch (key) { case K_TIMER: timer(arg); + + if (!menu->lost_events_warned && menu->lost_events) { + ui_browser__warn_lost_events(&menu->b); + menu->lost_events_warned = true; + } continue; case K_RIGHT: case K_ENTER: @@ -1133,7 +1175,8 @@ browse_hists: pos = list_entry(pos->node.prev, struct perf_evsel, node); goto browse_hists; case K_ESC: - if (!ui__dialog_yesno("Do you really want to exit?")) + if (!ui_browser__dialog_yesno(&menu->b, + "Do you really want to exit?")) continue; /* Fall thru */ case 'q': @@ -1145,7 +1188,8 @@ browse_hists: case K_LEFT: continue; case K_ESC: - if (!ui__dialog_yesno("Do you really want to exit?")) + if (!ui_browser__dialog_yesno(&menu->b, + "Do you really want to exit?")) continue; /* Fall thru */ case 'q': diff --git a/tools/perf/util/ui/helpline.c b/tools/perf/util/ui/helpline.c index f36d2ff509ed..6ef3c5691762 100644 --- a/tools/perf/util/ui/helpline.c +++ b/tools/perf/util/ui/helpline.c @@ -1,20 +1,28 @@ #define _GNU_SOURCE #include <stdio.h> #include <stdlib.h> -#include <newt.h> +#include <string.h> #include "../debug.h" #include "helpline.h" #include "ui.h" +#include "libslang.h" void ui_helpline__pop(void) { - newtPopHelpLine(); } +char ui_helpline__current[512]; + void ui_helpline__push(const char *msg) { - newtPushHelpLine(msg); + const size_t sz = sizeof(ui_helpline__current); + + SLsmg_gotorc(SLtt_Screen_Rows - 1, 0); + SLsmg_set_color(0); + SLsmg_write_nstring((char *)msg, SLtt_Screen_Cols); + SLsmg_refresh(); + strncpy(ui_helpline__current, msg, sz)[sz - 1] = '\0'; } void ui_helpline__vpush(const char *fmt, va_list ap) @@ -63,7 +71,7 @@ int ui_helpline__show_help(const char *format, va_list ap) if (ui_helpline__last_msg[backlog - 1] == '\n') { ui_helpline__puts(ui_helpline__last_msg); - newtRefresh(); + SLsmg_refresh(); backlog = 0; } pthread_mutex_unlock(&ui__lock); diff --git a/tools/perf/util/ui/helpline.h b/tools/perf/util/ui/helpline.h index fdcbc0270acd..7bab6b34e35e 100644 --- a/tools/perf/util/ui/helpline.h +++ b/tools/perf/util/ui/helpline.h @@ -11,4 +11,6 @@ void ui_helpline__vpush(const char *fmt, va_list ap); void ui_helpline__fpush(const char *fmt, ...); void ui_helpline__puts(const char *msg); +extern char ui_helpline__current[]; + #endif /* _PERF_UI_HELPLINE_H_ */ diff --git a/tools/perf/util/ui/progress.c b/tools/perf/util/ui/progress.c index d7fc399d36b3..295e366b6311 100644 --- a/tools/perf/util/ui/progress.c +++ b/tools/perf/util/ui/progress.c @@ -1,60 +1,29 @@ -#include <stdlib.h> -#include <newt.h> #include "../cache.h" #include "progress.h" +#include "libslang.h" +#include "ui.h" +#include "browser.h" -struct ui_progress { - newtComponent form, scale; -}; - -struct ui_progress *ui_progress__new(const char *title, u64 total) -{ - struct ui_progress *self = malloc(sizeof(*self)); - - if (self != NULL) { - int cols; - - if (use_browser <= 0) - return self; - newtGetScreenSize(&cols, NULL); - cols -= 4; - newtCenteredWindow(cols, 1, title); - self->form = newtForm(NULL, NULL, 0); - if (self->form == NULL) - goto out_free_self; - self->scale = newtScale(0, 0, cols, total); - if (self->scale == NULL) - goto out_free_form; - newtFormAddComponent(self->form, self->scale); - newtRefresh(); - } - - return self; - -out_free_form: - newtFormDestroy(self->form); -out_free_self: - free(self); - return NULL; -} - -void 
ui_progress__update(struct ui_progress *self, u64 curr) +void ui_progress__update(u64 curr, u64 total, const char *title) { + int bar, y; /* * FIXME: We should have a per UI backend way of showing progress, * stdio will just show a percentage as NN%, etc. */ if (use_browser <= 0) return; - newtScaleSet(self->scale, curr); - newtRefresh(); -} -void ui_progress__delete(struct ui_progress *self) -{ - if (use_browser > 0) { - newtFormDestroy(self->form); - newtPopWindow(); - } - free(self); + ui__refresh_dimensions(true); + pthread_mutex_lock(&ui__lock); + y = SLtt_Screen_Rows / 2 - 2; + SLsmg_set_color(0); + SLsmg_draw_box(y, 0, 3, SLtt_Screen_Cols); + SLsmg_gotorc(y++, 1); + SLsmg_write_string((char *)title); + SLsmg_set_color(HE_COLORSET_SELECTED); + bar = ((SLtt_Screen_Cols - 2) * curr) / total; + SLsmg_fill_region(y, 1, 1, bar, ' '); + SLsmg_refresh(); + pthread_mutex_unlock(&ui__lock); } diff --git a/tools/perf/util/ui/progress.h b/tools/perf/util/ui/progress.h index a3820a0beb5b..d9c205b59aa1 100644 --- a/tools/perf/util/ui/progress.h +++ b/tools/perf/util/ui/progress.h @@ -1,11 +1,8 @@ #ifndef _PERF_UI_PROGRESS_H_ #define _PERF_UI_PROGRESS_H_ 1 -struct ui_progress; +#include <../types.h> -struct ui_progress *ui_progress__new(const char *title, u64 total); -void ui_progress__delete(struct ui_progress *self); - -void ui_progress__update(struct ui_progress *self, u64 curr); +void ui_progress__update(u64 curr, u64 total, const char *title); #endif diff --git a/tools/perf/util/ui/setup.c b/tools/perf/util/ui/setup.c index 1e6ba06980c4..85a69faa09aa 100644 --- a/tools/perf/util/ui/setup.c +++ b/tools/perf/util/ui/setup.c @@ -7,10 +7,85 @@ #include "browser.h" #include "helpline.h" #include "ui.h" +#include "util.h" #include "libslang.h" +#include "keysyms.h" pthread_mutex_t ui__lock = PTHREAD_MUTEX_INITIALIZER; +static volatile int ui__need_resize; + +void ui__refresh_dimensions(bool force) +{ + if (force || ui__need_resize) { + ui__need_resize = 0; + pthread_mutex_lock(&ui__lock); + SLtt_get_screen_size(); + SLsmg_reinit_smg(); + pthread_mutex_unlock(&ui__lock); + } +} + +static void ui__sigwinch(int sig __used) +{ + ui__need_resize = 1; +} + +static void ui__setup_sigwinch(void) +{ + static bool done; + + if (done) + return; + + done = true; + pthread__unblock_sigwinch(); + signal(SIGWINCH, ui__sigwinch); +} + +int ui__getch(int delay_secs) +{ + struct timeval timeout, *ptimeout = delay_secs ? 
&timeout : NULL; + fd_set read_set; + int err, key; + + ui__setup_sigwinch(); + + FD_ZERO(&read_set); + FD_SET(0, &read_set); + + if (delay_secs) { + timeout.tv_sec = delay_secs; + timeout.tv_usec = 0; + } + + err = select(1, &read_set, NULL, NULL, ptimeout); + + if (err == 0) + return K_TIMER; + + if (err == -1) { + if (errno == EINTR) + return K_RESIZE; + return K_ERROR; + } + + key = SLang_getkey(); + if (key != K_ESC) + return key; + + FD_ZERO(&read_set); + FD_SET(0, &read_set); + timeout.tv_sec = 0; + timeout.tv_usec = 20; + err = select(1, &read_set, NULL, NULL, &timeout); + if (err == 0) + return K_ESC; + + SLang_ungetkey(key); + return SLkp_getkey(); +} + static void newt_suspend(void *d __used) { newtSuspend(); @@ -71,10 +146,10 @@ void setup_browser(bool fallback_to_pager) void exit_browser(bool wait_for_ok) { if (use_browser > 0) { - if (wait_for_ok) { - char title[] = "Fatal Error", ok[] = "Ok"; - newtWinMessage(title, ok, ui_helpline__last_msg); - } + if (wait_for_ok) + ui__question_window("Fatal Error", + ui_helpline__last_msg, + "Press any key...", 0); ui__exit(); } } diff --git a/tools/perf/util/ui/ui.h b/tools/perf/util/ui/ui.h index d264e059c829..7b67045479f6 100644 --- a/tools/perf/util/ui/ui.h +++ b/tools/perf/util/ui/ui.h @@ -2,7 +2,10 @@ #define _PERF_UI_H_ 1 #include <pthread.h> +#include <stdbool.h> extern pthread_mutex_t ui__lock; +void ui__refresh_dimensions(bool force); + #endif /* _PERF_UI_H_ */ diff --git a/tools/perf/util/ui/util.c b/tools/perf/util/ui/util.c index fdf1fc8f08bc..45daa7c41dad 100644 --- a/tools/perf/util/ui/util.c +++ b/tools/perf/util/ui/util.c @@ -1,6 +1,5 @@ -#include <newt.h> +#include "../util.h" #include <signal.h> -#include <stdio.h> #include <stdbool.h> #include <string.h> #include <sys/ttydefaults.h> @@ -8,72 +7,75 @@ #include "../cache.h" #include "../debug.h" #include "browser.h" +#include "keysyms.h" #include "helpline.h" #include "ui.h" #include "util.h" +#include "libslang.h" -static void newt_form__set_exit_keys(newtComponent self) +static void ui_browser__argv_write(struct ui_browser *browser, + void *entry, int row) { - newtFormAddHotKey(self, NEWT_KEY_LEFT); - newtFormAddHotKey(self, NEWT_KEY_ESCAPE); - newtFormAddHotKey(self, 'Q'); - newtFormAddHotKey(self, 'q'); - newtFormAddHotKey(self, CTRL('c')); -} + char **arg = entry; + bool current_entry = ui_browser__is_current_entry(browser, row); -static newtComponent newt_form__new(void) -{ - newtComponent self = newtForm(NULL, NULL, 0); - if (self) - newt_form__set_exit_keys(self); - return self; + ui_browser__set_color(browser, current_entry ? 
HE_COLORSET_SELECTED : + HE_COLORSET_NORMAL); + slsmg_write_nstring(*arg, browser->width); } -int ui__popup_menu(int argc, char * const argv[]) +static int popup_menu__run(struct ui_browser *menu) { - struct newtExitStruct es; - int i, rc = -1, max_len = 5; - newtComponent listbox, form = newt_form__new(); + int key; - if (form == NULL) + if (ui_browser__show(menu, " ", "ESC: exit, ENTER|->: Select option") < 0) return -1; - listbox = newtListbox(0, 0, argc, NEWT_FLAG_RETURNEXIT); - if (listbox == NULL) - goto out_destroy_form; + while (1) { + key = ui_browser__run(menu, 0); - newtFormAddComponent(form, listbox); + switch (key) { + case K_RIGHT: + case K_ENTER: + key = menu->index; + break; + case K_LEFT: + case K_ESC: + case 'q': + case CTRL('c'): + key = -1; + break; + default: + continue; + } - for (i = 0; i < argc; ++i) { - int len = strlen(argv[i]); - if (len > max_len) - max_len = len; - if (newtListboxAddEntry(listbox, argv[i], (void *)(long)i)) - goto out_destroy_form; + break; } - newtCenteredWindow(max_len, argc, NULL); - newtFormRun(form, &es); - rc = newtListboxGetCurrent(listbox) - NULL; - if (es.reason == NEWT_EXIT_HOTKEY) - rc = -1; - newtPopWindow(); -out_destroy_form: - newtFormDestroy(form); - return rc; + ui_browser__hide(menu); + return key; } -int ui__help_window(const char *text) +int ui__popup_menu(int argc, char * const argv[]) { - struct newtExitStruct es; - newtComponent tb, form = newt_form__new(); - int rc = -1; + struct ui_browser menu = { + .entries = (void *)argv, + .refresh = ui_browser__argv_refresh, + .seek = ui_browser__argv_seek, + .write = ui_browser__argv_write, + .nr_entries = argc, + }; + + return popup_menu__run(&menu); +} + +int ui__question_window(const char *title, const char *text, + const char *exit_msg, int delay_secs) +{ + int x, y; int max_len = 0, nr_lines = 0; const char *t; - if (form == NULL) - return -1; - t = text; while (1) { const char *sep = strchr(t, '\n'); @@ -90,41 +92,77 @@ int ui__help_window(const char *text) t = sep + 1; } - tb = newtTextbox(0, 0, max_len, nr_lines, 0); - if (tb == NULL) - goto out_destroy_form; - - newtTextboxSetText(tb, text); - newtFormAddComponent(form, tb); - newtCenteredWindow(max_len, nr_lines, NULL); - newtFormRun(form, &es); - newtPopWindow(); - rc = 0; -out_destroy_form: - newtFormDestroy(form); - return rc; + max_len += 2; + nr_lines += 4; + y = SLtt_Screen_Rows / 2 - nr_lines / 2, + x = SLtt_Screen_Cols / 2 - max_len / 2; + + SLsmg_set_color(0); + SLsmg_draw_box(y, x++, nr_lines, max_len); + if (title) { + SLsmg_gotorc(y, x + 1); + SLsmg_write_string((char *)title); + } + SLsmg_gotorc(++y, x); + nr_lines -= 2; + max_len -= 2; + SLsmg_write_wrapped_string((unsigned char *)text, y, x, + nr_lines, max_len, 1); + SLsmg_gotorc(y + nr_lines - 2, x); + SLsmg_write_nstring((char *)" ", max_len); + SLsmg_gotorc(y + nr_lines - 1, x); + SLsmg_write_nstring((char *)exit_msg, max_len); + SLsmg_refresh(); + return ui__getch(delay_secs); } -static const char yes[] = "Yes", no[] = "No", - warning_str[] = "Warning!", ok[] = "Ok"; +int ui__help_window(const char *text) +{ + return ui__question_window("Help", text, "Press any key...", 0); +} -bool ui__dialog_yesno(const char *msg) +int ui__dialog_yesno(const char *msg) { - /* newtWinChoice should really be accepting const char pointers... */ - return newtWinChoice(NULL, (char *)yes, (char *)no, (char *)msg) == 1; + return ui__question_window(NULL, msg, "Enter: Yes, ESC: No", 0); } -void ui__warning(const char *format, ...) 
+int __ui__warning(const char *title, const char *format, va_list args) { - va_list args; + char *s; + + if (use_browser > 0 && vasprintf(&s, format, args) > 0) { + int key; - va_start(args, format); - if (use_browser > 0) { pthread_mutex_lock(&ui__lock); - newtWinMessagev((char *)warning_str, (char *)ok, - (char *)format, args); + key = ui__question_window(title, s, "Press any key...", 0); pthread_mutex_unlock(&ui__lock); - } else - vfprintf(stderr, format, args); + free(s); + return key; + } + + fprintf(stderr, "%s:\n", title); + vfprintf(stderr, format, args); + return K_ESC; +} + +int ui__warning(const char *format, ...) +{ + int key; + va_list args; + + va_start(args, format); + key = __ui__warning("Warning", format, args); + va_end(args); + return key; +} + +int ui__error(const char *format, ...) +{ + int key; + va_list args; + + va_start(args, format); + key = __ui__warning("Error", format, args); va_end(args); + return key; } diff --git a/tools/perf/util/ui/util.h b/tools/perf/util/ui/util.h index afcbc1d99531..2d1738bd71c8 100644 --- a/tools/perf/util/ui/util.h +++ b/tools/perf/util/ui/util.h @@ -1,10 +1,14 @@ #ifndef _PERF_UI_UTIL_H_ #define _PERF_UI_UTIL_H_ 1 -#include <stdbool.h> +#include <stdarg.h> +int ui__getch(int delay_secs); int ui__popup_menu(int argc, char * const argv[]); int ui__help_window(const char *text); -bool ui__dialog_yesno(const char *msg); +int ui__dialog_yesno(const char *msg); +int ui__question_window(const char *title, const char *text, + const char *exit_msg, int delay_secs); +int __ui__warning(const char *title, const char *format, va_list args); #endif /* _PERF_UI_UTIL_H_ */ diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c index 8b2d37b59c9e..3c6f7808efae 100644 --- a/tools/power/x86/turbostat/turbostat.c +++ b/tools/power/x86/turbostat/turbostat.c @@ -162,19 +162,21 @@ void print_header(void) void dump_cnt(struct counters *cnt) { - fprintf(stderr, "package: %d ", cnt->pkg); - fprintf(stderr, "core:: %d ", cnt->core); - fprintf(stderr, "CPU: %d ", cnt->cpu); - fprintf(stderr, "TSC: %016llX\n", cnt->tsc); - fprintf(stderr, "c3: %016llX\n", cnt->c3); - fprintf(stderr, "c6: %016llX\n", cnt->c6); - fprintf(stderr, "c7: %016llX\n", cnt->c7); - fprintf(stderr, "aperf: %016llX\n", cnt->aperf); - fprintf(stderr, "pc2: %016llX\n", cnt->pc2); - fprintf(stderr, "pc3: %016llX\n", cnt->pc3); - fprintf(stderr, "pc6: %016llX\n", cnt->pc6); - fprintf(stderr, "pc7: %016llX\n", cnt->pc7); - fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, cnt->extra_msr); + if (!cnt) + return; + if (cnt->pkg) fprintf(stderr, "package: %d ", cnt->pkg); + if (cnt->core) fprintf(stderr, "core:: %d ", cnt->core); + if (cnt->cpu) fprintf(stderr, "CPU: %d ", cnt->cpu); + if (cnt->tsc) fprintf(stderr, "TSC: %016llX\n", cnt->tsc); + if (cnt->c3) fprintf(stderr, "c3: %016llX\n", cnt->c3); + if (cnt->c6) fprintf(stderr, "c6: %016llX\n", cnt->c6); + if (cnt->c7) fprintf(stderr, "c7: %016llX\n", cnt->c7); + if (cnt->aperf) fprintf(stderr, "aperf: %016llX\n", cnt->aperf); + if (cnt->pc2) fprintf(stderr, "pc2: %016llX\n", cnt->pc2); + if (cnt->pc3) fprintf(stderr, "pc3: %016llX\n", cnt->pc3); + if (cnt->pc6) fprintf(stderr, "pc6: %016llX\n", cnt->pc6); + if (cnt->pc7) fprintf(stderr, "pc7: %016llX\n", cnt->pc7); + if (cnt->extra_msr) fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, cnt->extra_msr); } void dump_list(struct counters *cnt) diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl index 8d02ccb10c59..30e2befd6f2a 
100755 --- a/tools/testing/ktest/ktest.pl +++ b/tools/testing/ktest/ktest.pl @@ -42,6 +42,7 @@ $default{"BISECT_MANUAL"} = 0; $default{"BISECT_SKIP"} = 1; $default{"SUCCESS_LINE"} = "login:"; $default{"DETECT_TRIPLE_FAULT"} = 1; +$default{"NO_INSTALL"} = 0; $default{"BOOTED_TIMEOUT"} = 1; $default{"DIE_ON_FAILURE"} = 1; $default{"SSH_EXEC"} = "ssh \$SSH_USER\@\$MACHINE \$SSH_COMMAND"; @@ -84,6 +85,7 @@ my $grub_number; my $target; my $make; my $post_install; +my $no_install; my $noclean; my $minconfig; my $start_minconfig; @@ -115,6 +117,7 @@ my $timeout; my $booted_timeout; my $detect_triplefault; my $console; +my $reboot_success_line; my $success_line; my $stop_after_success; my $stop_after_failure; @@ -130,6 +133,12 @@ my %config_help; my %variable; my %force_config; +# do not force reboots on config problems +my $no_reboot = 1; + +# default variables that can be used +chomp ($variable{"PWD"} = `pwd`); + $config_help{"MACHINE"} = << "EOF" The machine hostname that you will test. EOF @@ -241,6 +250,7 @@ sub read_yn { sub get_ktest_config { my ($config) = @_; + my $ans; return if (defined($opt{$config})); @@ -254,16 +264,17 @@ sub get_ktest_config { if (defined($default{$config})) { print "\[$default{$config}\] "; } - $entered_configs{$config} = <STDIN>; - $entered_configs{$config} =~ s/^\s*(.*\S)\s*$/$1/; - if ($entered_configs{$config} =~ /^\s*$/) { + $ans = <STDIN>; + $ans =~ s/^\s*(.*\S)\s*$/$1/; + if ($ans =~ /^\s*$/) { if ($default{$config}) { - $entered_configs{$config} = $default{$config}; + $ans = $default{$config}; } else { print "Your answer can not be blank\n"; next; } } + $entered_configs{$config} = process_variables($ans); last; } } @@ -298,7 +309,7 @@ sub get_ktest_configs { } sub process_variables { - my ($value) = @_; + my ($value, $remove_undef) = @_; my $retval = ""; # We want to check for '\', and it is just easier @@ -316,6 +327,10 @@ sub process_variables { $retval = "$retval$begin"; if (defined($variable{$var})) { $retval = "$retval$variable{$var}"; + } elsif (defined($remove_undef) && $remove_undef) { + # for if statements, any variable that is not defined, + # we simple convert to 0 + $retval = "${retval}0"; } else { # put back the origin piece. 
$retval = "$retval\$\{$var\}"; @@ -331,10 +346,17 @@ sub process_variables { } sub set_value { - my ($lvalue, $rvalue) = @_; + my ($lvalue, $rvalue, $override, $overrides, $name) = @_; if (defined($opt{$lvalue})) { - die "Error: Option $lvalue defined more than once!\n"; + if (!$override || defined(${$overrides}{$lvalue})) { + my $extra = ""; + if ($override) { + $extra = "In the same override section!\n"; + } + die "$name: $.: Option $lvalue defined more than once!\n$extra"; + } + ${$overrides}{$lvalue} = $rvalue; } if ($rvalue =~ /^\s*$/) { delete $opt{$lvalue}; @@ -355,86 +377,274 @@ sub set_variable { } } -sub read_config { - my ($config) = @_; +sub process_compare { + my ($lval, $cmp, $rval) = @_; + + # remove whitespace + + $lval =~ s/^\s*//; + $lval =~ s/\s*$//; + + $rval =~ s/^\s*//; + $rval =~ s/\s*$//; + + if ($cmp eq "==") { + return $lval eq $rval; + } elsif ($cmp eq "!=") { + return $lval ne $rval; + } + + my $statement = "$lval $cmp $rval"; + my $ret = eval $statement; + + # $@ stores error of eval + if ($@) { + return -1; + } + + return $ret; +} + +sub value_defined { + my ($val) = @_; + + return defined($variable{$2}) || + defined($opt{$2}); +} + +my $d = 0; +sub process_expression { + my ($name, $val) = @_; + + my $c = $d++; + + while ($val =~ s/\(([^\(]*?)\)/\&\&\&\&VAL\&\&\&\&/) { + my $express = $1; + + if (process_expression($name, $express)) { + $val =~ s/\&\&\&\&VAL\&\&\&\&/ 1 /; + } else { + $val =~ s/\&\&\&\&VAL\&\&\&\&/ 0 /; + } + } + + $d--; + my $OR = "\\|\\|"; + my $AND = "\\&\\&"; + + while ($val =~ s/^(.*?)($OR|$AND)//) { + my $express = $1; + my $op = $2; + + if (process_expression($name, $express)) { + if ($op eq "||") { + return 1; + } + } else { + if ($op eq "&&") { + return 0; + } + } + } + + if ($val =~ /(.*)(==|\!=|>=|<=|>|<)(.*)/) { + my $ret = process_compare($1, $2, $3); + if ($ret < 0) { + die "$name: $.: Unable to process comparison\n"; + } + return $ret; + } + + if ($val =~ /^\s*(NOT\s*)?DEFINED\s+(\S+)\s*$/) { + if (defined $1) { + return !value_defined($2); + } else { + return value_defined($2); + } + } + + if ($val =~ /^\s*0\s*$/) { + return 0; + } elsif ($val =~ /^\s*\d+\s*$/) { + return 1; + } + + die ("$name: $.: Undefined content $val in if statement\n"); +} + +sub process_if { + my ($name, $value) = @_; + + # Convert variables and replace undefined ones with 0 + my $val = process_variables($value, 1); + my $ret = process_expression $name, $val; + + return $ret; +} - open(IN, $config) || die "can't read file $config"; +sub __read_config { + my ($config, $current_test_num) = @_; + + my $in; + open($in, $config) || die "can't read file $config"; my $name = $config; $name =~ s,.*/(.*),$1,; - my $test_num = 0; + my $test_num = $$current_test_num; my $default = 1; my $repeat = 1; my $num_tests_set = 0; my $skip = 0; my $rest; + my $line; my $test_case = 0; + my $if = 0; + my $if_set = 0; + my $override = 0; - while (<IN>) { + my %overrides; + + while (<$in>) { # ignore blank lines and comments next if (/^\s*$/ || /\s*\#/); - if (/^\s*TEST_START(.*)/) { + if (/^\s*(TEST_START|DEFAULTS)\b(.*)/) { - $rest = $1; + my $type = $1; + $rest = $2; + $line = $2; - if ($num_tests_set) { - die "$name: $.: Can not specify both NUM_TESTS and TEST_START\n"; - } + my $old_test_num; + my $old_repeat; + $override = 0; + + if ($type eq "TEST_START") { + + if ($num_tests_set) { + die "$name: $.: Can not specify both NUM_TESTS and TEST_START\n"; + } - my $old_test_num = $test_num; - my $old_repeat = $repeat; + $old_test_num = $test_num; + $old_repeat = $repeat; - 
$test_num += $repeat; - $default = 0; - $repeat = 1; + $test_num += $repeat; + $default = 0; + $repeat = 1; + } else { + $default = 1; + } - if ($rest =~ /\s+SKIP(.*)/) { - $rest = $1; + # If SKIP is anywhere in the line, the command will be skipped + if ($rest =~ s/\s+SKIP\b//) { $skip = 1; } else { $test_case = 1; $skip = 0; } - if ($rest =~ /\s+ITERATE\s+(\d+)(.*)$/) { - $repeat = $1; - $rest = $2; - $repeat_tests{"$test_num"} = $repeat; + if ($rest =~ s/\sELSE\b//) { + if (!$if) { + die "$name: $.: ELSE found with out matching IF section\n$_"; + } + $if = 0; + + if ($if_set) { + $skip = 1; + } else { + $skip = 0; + } } - if ($rest =~ /\s+SKIP(.*)/) { - $rest = $1; - $skip = 1; + if ($rest =~ s/\sIF\s+(.*)//) { + if (process_if($name, $1)) { + $if_set = 1; + } else { + $skip = 1; + } + $if = 1; + } else { + $if = 0; + $if_set = 0; } - if ($rest !~ /^\s*$/) { - die "$name: $.: Gargbage found after TEST_START\n$_"; + if (!$skip) { + if ($type eq "TEST_START") { + if ($rest =~ s/\s+ITERATE\s+(\d+)//) { + $repeat = $1; + $repeat_tests{"$test_num"} = $repeat; + } + } elsif ($rest =~ s/\sOVERRIDE\b//) { + # DEFAULT only + $override = 1; + # Clear previous overrides + %overrides = (); + } + } + + if (!$skip && $rest !~ /^\s*$/) { + die "$name: $.: Gargbage found after $type\n$_"; } - if ($skip) { + if ($skip && $type eq "TEST_START") { $test_num = $old_test_num; $repeat = $old_repeat; } - } elsif (/^\s*DEFAULTS(.*)$/) { - $default = 1; - + } elsif (/^\s*ELSE\b(.*)$/) { + if (!$if) { + die "$name: $.: ELSE found with out matching IF section\n$_"; + } $rest = $1; - - if ($rest =~ /\s+SKIP(.*)/) { - $rest = $1; + if ($if_set) { $skip = 1; + $rest = ""; } else { $skip = 0; + + if ($rest =~ /\sIF\s+(.*)/) { + # May be a ELSE IF section. + if (!process_if($name, $1)) { + $skip = 1; + } + $rest = ""; + } else { + $if = 0; + } } if ($rest !~ /^\s*$/) { die "$name: $.: Gargbage found after DEFAULTS\n$_"; } + } elsif (/^\s*INCLUDE\s+(\S+)/) { + + next if ($skip); + + if (!$default) { + die "$name: $.: INCLUDE can only be done in default sections\n$_"; + } + + my $file = process_variables($1); + + if ($file !~ m,^/,) { + # check the path of the config file first + if ($config =~ m,(.*)/,) { + if (-f "$1/$file") { + $file = "$1/$file"; + } + } + } + + if ( ! 
-r $file ) { + die "$name: $.: Can't read file $file\n$_"; + } + + if (__read_config($file, \$test_num)) { + $test_case = 1; + } + } elsif (/^\s*([A-Z_\[\]\d]+)\s*=\s*(.*?)\s*$/) { next if ($skip); @@ -460,10 +670,10 @@ sub read_config { } if ($default || $lvalue =~ /\[\d+\]$/) { - set_value($lvalue, $rvalue); + set_value($lvalue, $rvalue, $override, \%overrides, $name); } else { my $val = "$lvalue\[$test_num\]"; - set_value($val, $rvalue); + set_value($val, $rvalue, $override, \%overrides, $name); if ($repeat > 1) { $repeats{$val} = $repeat; @@ -490,13 +700,26 @@ sub read_config { } } - close(IN); - if ($test_num) { $test_num += $repeat - 1; $opt{"NUM_TESTS"} = $test_num; } + close($in); + + $$current_test_num = $test_num; + + return $test_case; +} + +sub read_config { + my ($config) = @_; + + my $test_case; + my $test_num = 0; + + $test_case = __read_config $config, \$test_num; + # make sure we have all mandatory configs get_ktest_configs; @@ -603,8 +826,20 @@ sub doprint { } sub run_command; +sub start_monitor; +sub end_monitor; +sub wait_for_monitor; sub reboot { + my ($time) = @_; + + if (defined($time)) { + start_monitor; + # flush out current monitor + # May contain the reboot success line + wait_for_monitor 1; + } + # try to reboot normally if (run_command $reboot) { if (defined($powercycle_after_reboot)) { @@ -615,12 +850,17 @@ sub reboot { # nope? power cycle it. run_command "$power_cycle"; } + + if (defined($time)) { + wait_for_monitor($time, $reboot_success_line); + end_monitor; + } } sub do_not_reboot { my $i = $iteration; - return $test_type eq "build" || + return $test_type eq "build" || $no_reboot || ($test_type eq "patchcheck" && $opt{"PATCHCHECK_TYPE[$i]"} eq "build") || ($test_type eq "bisect" && $opt{"BISECT_TYPE[$i]"} eq "build"); } @@ -693,16 +933,29 @@ sub end_monitor { } sub wait_for_monitor { - my ($time) = @_; + my ($time, $stop) = @_; + my $full_line = ""; my $line; + my $booted = 0; doprint "** Wait for monitor to settle down **\n"; # read the monitor and wait for the system to calm down - do { + while (!$booted) { $line = wait_for_input($monitor_fp, $time); - print "$line" if (defined($line)); - } while (defined($line)); + last if (!defined($line)); + print "$line"; + $full_line .= $line; + + if (defined($stop) && $full_line =~ /$stop/) { + doprint "wait for monitor detected $stop\n"; + $booted = 1; + } + + if ($line =~ /\n/) { + $full_line = ""; + } + } print "** Monitor flushed **\n"; } @@ -719,10 +972,7 @@ sub fail { # no need to reboot for just building. 
if (!do_not_reboot) { doprint "REBOOTING\n"; - reboot; - start_monitor; - wait_for_monitor $sleep_time; - end_monitor; + reboot $sleep_time; } my $name = ""; @@ -854,9 +1104,12 @@ sub get_grub_index { open(IN, "$ssh_grub |") or die "unable to get menu.lst"; + my $found = 0; + while (<IN>) { if (/^\s*title\s+$grub_menu\s*$/) { $grub_number++; + $found = 1; last; } elsif (/^\s*title\s/) { $grub_number++; @@ -865,7 +1118,7 @@ sub get_grub_index { close(IN); die "Could not find '$grub_menu' in /boot/grub/menu on $machine" - if ($grub_number < 0); + if (!$found); doprint "$grub_number\n"; } @@ -902,7 +1155,8 @@ sub wait_for_input sub reboot_to { if ($reboot_type eq "grub") { - run_ssh "'(echo \"savedefault --default=$grub_number --once\" | grub --batch && reboot)'"; + run_ssh "'(echo \"savedefault --default=$grub_number --once\" | grub --batch)'"; + reboot; return; } @@ -1083,6 +1337,8 @@ sub do_post_install { sub install { + return if ($no_install); + run_scp "$outputdir/$build_target", "$target_image" or dodie "failed to copy image"; @@ -1140,6 +1396,11 @@ sub get_version { } sub start_monitor_and_boot { + # Make sure the stable kernel has finished booting + start_monitor; + wait_for_monitor 5; + end_monitor; + get_grub_index; get_version; install; @@ -1250,6 +1511,10 @@ sub build { unlink $buildlog; + # Failed builds should not reboot the target + my $save_no_reboot = $no_reboot; + $no_reboot = 1; + if (defined($pre_build)) { my $ret = run_command $pre_build; if (!$ret && defined($pre_build_die) && @@ -1272,15 +1537,15 @@ sub build { # allow for empty configs run_command "touch $output_config"; - run_command "mv $output_config $outputdir/config_temp" or - dodie "moving .config"; + if (!$noclean) { + run_command "mv $output_config $outputdir/config_temp" or + dodie "moving .config"; - if (!$noclean && !run_command "$make mrproper") { - dodie "make mrproper"; - } + run_command "$make mrproper" or dodie "make mrproper"; - run_command "mv $outputdir/config_temp $output_config" or - dodie "moving config_temp"; + run_command "mv $outputdir/config_temp $output_config" or + dodie "moving config_temp"; + } } elsif (!$noclean) { unlink "$output_config"; @@ -1318,10 +1583,15 @@ sub build { if (!$build_ret) { # bisect may need this to pass - return 0 if ($in_bisect); + if ($in_bisect) { + $no_reboot = $save_no_reboot; + return 0; + } fail "failed build" and return 0; } + $no_reboot = $save_no_reboot; + return 1; } @@ -1356,10 +1626,7 @@ sub success { if ($i != $opt{"NUM_TESTS"} && !do_not_reboot) { doprint "Reboot and wait $sleep_time seconds\n"; - reboot; - start_monitor; - wait_for_monitor $sleep_time; - end_monitor; + reboot $sleep_time; } } @@ -1500,10 +1767,7 @@ sub run_git_bisect { sub bisect_reboot { doprint "Reboot and sleep $bisect_sleep_time seconds\n"; - reboot; - start_monitor; - wait_for_monitor $bisect_sleep_time; - end_monitor; + reboot $bisect_sleep_time; } # returns 1 on success, 0 on failure, -1 on skip @@ -2066,10 +2330,7 @@ sub config_bisect { sub patchcheck_reboot { doprint "Reboot and sleep $patchcheck_sleep_time seconds\n"; - reboot; - start_monitor; - wait_for_monitor $patchcheck_sleep_time; - end_monitor; + reboot $patchcheck_sleep_time; } sub patchcheck { @@ -2178,12 +2439,31 @@ sub patchcheck { } my %depends; +my %depcount; my $iflevel = 0; my @ifdeps; # prevent recursion my %read_kconfigs; +sub add_dep { + # $config depends on $dep + my ($config, $dep) = @_; + + if (defined($depends{$config})) { + $depends{$config} .= " " . 
$dep; + } else { + $depends{$config} = $dep; + } + + # record the number of configs depending on $dep + if (defined $depcount{$dep}) { + $depcount{$dep}++; + } else { + $depcount{$dep} = 1; + } +} + # taken from streamline_config.pl sub read_kconfig { my ($kconfig) = @_; @@ -2230,30 +2510,19 @@ sub read_kconfig { $config = $2; for (my $i = 0; $i < $iflevel; $i++) { - if ($i) { - $depends{$config} .= " " . $ifdeps[$i]; - } else { - $depends{$config} = $ifdeps[$i]; - } - $state = "DEP"; + add_dep $config, $ifdeps[$i]; } # collect the depends for the config } elsif ($state eq "NEW" && /^\s*depends\s+on\s+(.*)$/) { - if (defined($depends{$1})) { - $depends{$config} .= " " . $1; - } else { - $depends{$config} = $1; - } + add_dep $config, $1; # Get the configs that select this config - } elsif ($state ne "NONE" && /^\s*select\s+(\S+)/) { - if (defined($depends{$1})) { - $depends{$1} .= " " . $config; - } else { - $depends{$1} = $config; - } + } elsif ($state eq "NEW" && /^\s*select\s+(\S+)/) { + + # selected by depends on config + add_dep $1, $config; # Check for if statements } elsif (/^if\s+(.*\S)\s*$/) { @@ -2365,11 +2634,18 @@ sub make_new_config { close OUT; } +sub chomp_config { + my ($config) = @_; + + $config =~ s/CONFIG_//; + + return $config; +} + sub get_depends { my ($dep) = @_; - my $kconfig = $dep; - $kconfig =~ s/CONFIG_//; + my $kconfig = chomp_config $dep; $dep = $depends{"$kconfig"}; @@ -2419,8 +2695,7 @@ sub test_this_config { return undef; } - my $kconfig = $config; - $kconfig =~ s/CONFIG_//; + my $kconfig = chomp_config $config; # Test dependencies first if (defined($depends{"$kconfig"})) { @@ -2510,6 +2785,14 @@ sub make_min_config { my @config_keys = keys %min_configs; + # All configs need a depcount + foreach my $config (@config_keys) { + my $kconfig = chomp_config $config; + if (!defined $depcount{$kconfig}) { + $depcount{$kconfig} = 0; + } + } + # Remove anything that was set by the make allnoconfig # we shouldn't need them as they get set for us anyway. foreach my $config (@config_keys) { @@ -2548,8 +2831,13 @@ sub make_min_config { # Now disable each config one by one and do a make oldconfig # till we find a config that changes our list. - # Put configs that did not modify the config at the end. my @test_configs = keys %min_configs; + + # Sort keys by who is most dependent on + @test_configs = sort { $depcount{chomp_config($b)} <=> $depcount{chomp_config($a)} } + @test_configs ; + + # Put configs that did not modify the config at the end. 
my $reset = 1; for (my $i = 0; $i < $#test_configs; $i++) { if (!defined($nochange_config{$test_configs[0]})) { @@ -2659,10 +2947,7 @@ sub make_min_config { } doprint "Reboot and wait $sleep_time seconds\n"; - reboot; - start_monitor; - wait_for_monitor $sleep_time; - end_monitor; + reboot $sleep_time; } success $i; @@ -2783,6 +3068,9 @@ sub set_test_option { # First we need to do is the builds for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) { + # Do not reboot on failing test options + $no_reboot = 1; + $iteration = $i; my $makecmd = set_test_option("MAKE_CMD", $i); @@ -2811,6 +3099,7 @@ for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) { $reboot_type = set_test_option("REBOOT_TYPE", $i); $grub_menu = set_test_option("GRUB_MENU", $i); $post_install = set_test_option("POST_INSTALL", $i); + $no_install = set_test_option("NO_INSTALL", $i); $reboot_script = set_test_option("REBOOT_SCRIPT", $i); $reboot_on_error = set_test_option("REBOOT_ON_ERROR", $i); $poweroff_on_error = set_test_option("POWEROFF_ON_ERROR", $i); @@ -2832,6 +3121,7 @@ for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) { $console = set_test_option("CONSOLE", $i); $detect_triplefault = set_test_option("DETECT_TRIPLE_FAULT", $i); $success_line = set_test_option("SUCCESS_LINE", $i); + $reboot_success_line = set_test_option("REBOOT_SUCCESS_LINE", $i); $stop_after_success = set_test_option("STOP_AFTER_SUCCESS", $i); $stop_after_failure = set_test_option("STOP_AFTER_FAILURE", $i); $stop_test_after = set_test_option("STOP_TEST_AFTER", $i); @@ -2850,9 +3140,11 @@ for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) { chdir $builddir || die "can't change directory to $builddir"; - if (!-d $tmpdir) { - mkpath($tmpdir) or - die "can't create $tmpdir"; + foreach my $dir ($tmpdir, $outputdir) { + if (!-d $dir) { + mkpath($dir) or + die "can't create $dir"; + } } $ENV{"SSH_USER"} = $ssh_user; @@ -2889,8 +3181,11 @@ for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) { $run_type = "ERROR"; } + my $installme = ""; + $installme = " no_install" if ($no_install); + doprint "\n\n"; - doprint "RUNNING TEST $i of $opt{NUM_TESTS} with option $test_type $run_type\n\n"; + doprint "RUNNING TEST $i of $opt{NUM_TESTS} with option $test_type $run_type$installme\n\n"; unlink $dmesg; unlink $buildlog; @@ -2911,6 +3206,9 @@ for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) { die "failed to checkout $checkout"; } + $no_reboot = 0; + + if ($test_type eq "bisect") { bisect $i; next; @@ -2929,6 +3227,13 @@ for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) { build $build_type or next; } + if ($test_type eq "install") { + get_version; + install; + success $i; + next; + } + if ($test_type ne "build") { my $failed = 0; start_monitor_and_boot or $failed = 1; diff --git a/tools/testing/ktest/sample.conf b/tools/testing/ktest/sample.conf index b8bcd14b5a4d..dbedfa196727 100644 --- a/tools/testing/ktest/sample.conf +++ b/tools/testing/ktest/sample.conf @@ -72,6 +72,128 @@ # the same option name under the same test or as default # ktest will fail to execute, and no tests will run. # +# DEFAULTS OVERRIDE +# +# Options defined in the DEFAULTS section can not be duplicated +# even if they are defined in two different DEFAULT sections. +# This is done to catch mistakes where an option is added but +# the previous option was forgotten about and not commented. +# +# The OVERRIDE keyword can be added to a section to allow this +# section to override other DEFAULT sections values that have +# been defined previously. It will only override options that +# have been defined before its use. 
Options defined later
+# in a non-override section will still error. The same option
+# cannot be defined in the same section even if that section
+# is marked OVERRIDE.
+#
+#
+#
+# Both TEST_START and DEFAULTS sections can also have the IF keyword.
+# The value after the IF must evaluate to 0 or a non-zero positive
+# integer, and can use the config variables (explained below).
+#
+# DEFAULTS IF ${IS_X86_32}
+#
+# The above will process the DEFAULTS section if the config
+# variable IS_X86_32 evaluates to a non-zero positive integer;
+# otherwise, if it evaluates to zero, the section will act the same
+# as if the SKIP keyword was used.
+#
+# The ELSE keyword can be used directly after a section with
+# an IF statement.
+#
+# TEST_START IF ${RUN_NET_TESTS}
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-network
+#
+# ELSE
+#
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-normal
+#
+#
+# The ELSE keyword can also contain an IF statement to allow multiple
+# if-then-else sections. But all the sections must be either
+# DEFAULTS or TEST_START; they cannot be a mixture.
+#
+# TEST_START IF ${RUN_NET_TESTS}
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-network
+#
+# ELSE IF ${RUN_DISK_TESTS}
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-tests
+#
+# ELSE IF ${RUN_CPU_TESTS}
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-cpu
+#
+# ELSE
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-network
+#
+# The IF statement may also contain comparisons. For the == and !=
+# operators, strings may be used on both sides.
+#
+# BOX_TYPE := x86_32
+#
+# DEFAULTS IF ${BOX_TYPE} == x86_32
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-32
+# ELSE
+# BUILD_TYPE = useconfig:${CONFIG_DIR}/config-64
+#
+# The DEFINED keyword can be used by the IF statements too.
+# It returns true if the given config variable or option has been defined
+# and false otherwise.
+#
+#
+# DEFAULTS IF DEFINED USE_CC
+# CC := ${USE_CC}
+# ELSE
+# CC := gcc
+#
+#
+# As well as NOT DEFINED.
+#
+# DEFAULTS IF NOT DEFINED MAKE_CMD
+# MAKE_CMD := make ARCH=x86
+#
+#
+# And/or operators (&&, ||) may also be used to make complex conditionals.
+#
+# TEST_START IF (DEFINED ALL_TESTS || ${MYTEST} == boottest) && ${MACHINE} == gandalf
+#
+# Notice the use of parentheses. Without any parentheses the above would be
+# processed the same as:
+#
+# TEST_START IF DEFINED ALL_TESTS || (${MYTEST} == boottest && ${MACHINE} == gandalf)
+#
+#
+#
+# INCLUDE file
+#
+# The INCLUDE keyword may be used in DEFAULTS sections. It will
+# read another config file and process that file as well. The included
+# file can include other files, add new test cases or default
+# statements. Config variables will be passed to these files and changes
+# to config variables will be seen by top-level config files. Including
+# a file is processed as if the contents of that file were cut and pasted
+# into the top-level file, except that an include file ending with a
+# TEST_START section will have that section closed at the end of
+# the include file. That is, an included file is treated as if it were
+# followed by a DEFAULTS keyword.
+#
+# Unlike other files referenced in this config, the file path does not need
+# to be absolute. If the file does not start with '/', then the directory
+# that the current config file was located in is used. If no config by the
+# given name is found there, then the current directory is searched.
+#
+# INCLUDE myfile
+# DEFAULTS
+#
+# is the same as:
+#
+# INCLUDE myfile
+#
+# Note: if the include file does not contain a full path, the file is
+# searched for first in the directory of the original config file, and then
+# in the directory that ktest.pl was executed in.
+#
#### Config variables ####
#
@@ -253,9 +375,10 @@
# The default test type (default test)
# The test types may be:
-# build - only build the kernel, do nothing else
-# boot - build and boot the kernel
-# test - build, boot and if TEST is set, run the test script
+# build - only build the kernel, do nothing else
+# install - build and install, but do nothing else (does not reboot)
+# boot - build, install, and boot the kernel
+# test - build, boot, and if TEST is set, run the test script
# (If TEST is not set, it defaults back to boot)
# bisect - Perform a bisect on the kernel (see BISECT_TYPE below)
# patchcheck - Do a test on a series of commits in git (see PATCHCHECK below)
@@ -293,6 +416,13 @@
# or on some systems:
#POST_INSTALL = ssh user@target /sbin/dracut -f /boot/initramfs-test.img $KERNEL_VERSION
+# If for some reason you just want to boot the kernel and do not want
+# the test to install anything new, you can set this option to 1.
+# For example, you may just want to boot-test the same kernel over and
+# over without going through the hassle of installing anything.
+# (default 0)
+#NO_INSTALL = 1
+
# If there is a script that you require to run before the build is done
# you can specify it with PRE_BUILD.
#
@@ -415,6 +545,14 @@
# (default "login:")
#SUCCESS_LINE = login:
+# To speed up reboots between tests, define a line that the default
+# (stable) kernel produces once it has successfully booted; ktest.pl can
+# then pass a new test kernel to it as soon as that line is seen.
+# Otherwise ktest.pl will wait SLEEP_TIME before continuing.
+# (default undefined)
+#REBOOT_SUCCESS_LINE = login:
+
# In case the console constantly fills the screen, having
# a specified time to stop the test after success is recommended.
# (in seconds)
@@ -480,6 +618,8 @@
# another test. If a reboot to the reliable kernel happens,
# we wait SLEEP_TIME for the console to stop producing output
# before starting the next test.
+#
+# You can speed up reboot times even more by setting REBOOT_SUCCESS_LINE.
# (default 60)
#SLEEP_TIME = 60
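To tie the new sample.conf keywords together, here is a minimal sketch of a config that uses them. It is illustrative only: the machine name, the configs directory, the included common.conf, the QUICK_BOOT variable and the run-tests script are hypothetical names, and the semantics assumed are exactly those documented in the sample.conf comments above.

    # Illustrative sketch only: names and paths below are made up.
    MACHINE = gandalf
    BOX_TYPE := x86_32
    CONFIG_DIR := ${PWD}/configs

    INCLUDE common.conf

    DEFAULTS
    SLEEP_TIME = 60

    # OVERRIDE lets this section redefine an option set above;
    # without it, the duplicate SLEEP_TIME would be an error.
    DEFAULTS OVERRIDE IF ${BOX_TYPE} == x86_32
    SLEEP_TIME = 25

    DEFAULTS IF NOT DEFINED MAKE_CMD
    MAKE_CMD := make ARCH=x86

    # Quick boot test without reinstalling when QUICK_BOOT is defined.
    TEST_START IF DEFINED QUICK_BOOT
    TEST_TYPE = boot
    NO_INSTALL = 1
    REBOOT_SUCCESS_LINE = login:
    ELSE
    TEST_TYPE = test
    TEST = ${PWD}/bin/run-tests

Note that the OVERRIDE section only takes effect because SLEEP_TIME was already defined in an earlier DEFAULTS section, and the ELSE branch fills in the same test entry as the TEST_START it follows, mirroring the documented IF/ELSE pattern.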