path: root/fs/cifs/transport.c
Commit message / Author / Date / Files / Lines
* cifs, freezer: add wait_event_freezekillable and have cifs use it (Jeff Layton, 2011-10-19, 1 file, -1/+2)
  CIFS currently uses wait_event_killable to put tasks to sleep while they await replies from the server. That function, however, does not allow the freezer to run. In many cases the network interface may be going down anyway, in which case the reply will never come, and the client then ends up blocking the computer from suspending.
  Fix this by adding a new wait_event_freezable variant -- wait_event_freezekillable. The idea is to combine the behavior of wait_event_killable and wait_event_freezable: put the task to sleep and only allow it to be awoken by fatal signals, but also allow the freezer to do its job.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
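  A minimal sketch of how such a helper can be built from existing primitives (freezer_do_not_count()/freezer_count() from include/linux/freezer.h wrapped around wait_event_killable()); close to, but not necessarily identical to, the macro that was merged:

      #include <linux/freezer.h>
      #include <linux/wait.h>

      /* Sleep killably, but let the freezer treat this task as already frozen
       * while it waits, so suspend is not blocked by an unanswered SMB. */
      #define wait_event_freezekillable(wq, condition)          \
      ({                                                        \
              int __retval;                                     \
              freezer_do_not_count();                           \
              __retval = wait_event_killable(wq, (condition));  \
              freezer_count();                                  \
              __retval;                                         \
      })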
* cifs: add a callback function to receive the rest of the frame (Jeff Layton, 2011-10-19, 1 file, -2/+3)
  In order to handle larger SMBs for readpages and other calls, we want to be able to read into a preallocated set of buffers. Rather than changing all of the existing code to preallocate buffers, however, we instead add a receive callback function to the MID. cifsd will call this function once the mid_q_entry has been identified in order to receive the rest of the SMB. If the mid can't be identified or the receive pointer is unset, then the standard 3rd phase receive function will be called.
  Reviewed-and-Tested-by: Pavel Shilovsky <piastry@etersoft.ru>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
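  Roughly the shape such a receive hook can take; the typedef name mirrors the cifs code of that era, but the dispatch helper and standard_receive name below are illustrative assumptions, not the exact code:

      struct TCP_Server_Info;
      struct mid_q_entry;

      /* called by cifsd once the mid_q_entry has been matched, so the rest of
       * the SMB can be read straight into caller-provided buffers */
      typedef int (mid_receive_t)(struct TCP_Server_Info *server,
                                  struct mid_q_entry *mid);

      /* hypothetical name for the existing 3rd-phase receive path */
      int standard_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid);

      /* dispatch as described in the commit: prefer the per-mid hook if set */
      static int receive_remainder(struct TCP_Server_Info *server,
                                   struct mid_q_entry *mid,
                                   mid_receive_t *receive)
      {
              if (receive)
                      return receive(server, mid);
              return standard_receive(server, mid);
      }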
* cifs: consolidate signature generating code (Jeff Layton, 2011-10-13, 1 file, -3/+8)
  We have two versions of signature generating code: a vectorized and a non-vectorized version. Eliminate a large chunk of cut-and-paste code by turning the non-vectorized version into a wrapper around the vectorized one.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <smfrench@gmail.com>
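  A sketch of the wrapper pattern: the non-vectorized routine packs the SMB into a one-element kvec and delegates to the vectorized routine (here assumed to be called cifs_sign_smbv); signatures are approximations of the cifs code:

      #include <linux/uio.h>          /* struct kvec */

      int cifs_sign_smbv(struct kvec *iov, int n_vec, struct TCP_Server_Info *server,
                         __u32 *pexpected_response_sequence_number);

      int cifs_sign_smb(struct smb_hdr *cifs_pdu, struct TCP_Server_Info *server,
                        __u32 *pexpected_response_sequence_number)
      {
              struct kvec iov;

              iov.iov_base = cifs_pdu;
              /* whole frame: 4-byte RFC1001 length header plus the SMB itself */
              iov.iov_len = be32_to_cpu(cifs_pdu->smb_buf_length) + 4;

              return cifs_sign_smbv(&iov, 1, server,
                                    pexpected_response_sequence_number);
      }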
* [CIFS] Cleanup use of CONFIG_CIFS_STATS2 ifdef to make transport routines more readable (Steve French, 2011-08-11, 1 file, -34/+17)
  Christoph had requested that the stats related code (in CONFIG_CIFS_STATS2) be moved into helpers to make code flow more readable. This patch should help. For example, the following section from transport.c:

      spin_unlock(&GlobalMid_Lock);
      atomic_inc(&ses->server->num_waiters);
      wait_event(ses->server->request_q,
                 atomic_read(&ses->server->inFlight) < cifs_max_pending);
      atomic_dec(&ses->server->num_waiters);
      spin_lock(&GlobalMid_Lock);

  becomes simpler (with the patch below):

      spin_unlock(&GlobalMid_Lock);
      cifs_num_waiters_inc(server);
      wait_event(server->request_q,
                 atomic_read(&server->inFlight) < cifs_max_pending);
      cifs_num_waiters_dec(server);
      spin_lock(&GlobalMid_Lock);

  Reviewed-by: Jeff Layton <jlayton@redhat.com>
  CC: Christoph Hellwig <hch@infradead.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
  Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
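  The helpers compile away when CONFIG_CIFS_STATS2 is off; a hedged sketch of how such wrappers are typically written (field and struct names follow the commit text above, but treat the exact definitions as illustrative):

      #ifdef CONFIG_CIFS_STATS2
      static inline void cifs_num_waiters_inc(struct TCP_Server_Info *server)
      {
              atomic_inc(&server->num_waiters);
      }

      static inline void cifs_num_waiters_dec(struct TCP_Server_Info *server)
      {
              atomic_dec(&server->num_waiters);
      }
      #else
      /* stats disabled: the call sites stay readable and cost nothing */
      static inline void cifs_num_waiters_inc(struct TCP_Server_Info *server) {}
      static inline void cifs_num_waiters_dec(struct TCP_Server_Info *server) {}
      #endif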
* CIFS: Fix missing a decrement of inFlight value (Pavel Shilovsky, 2011-08-03, 1 file, -0/+2)
  The inFlight counter was not decremented if we failed to get a mid entry in cifs_call_async.
  Cc: stable@kernel.org
  Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* [CIFS] Rename three structures to avoid camel case (Steve French, 2011-05-27, 1 file, -10/+10)
  secMode to sec_mode, cifsTconInfo to cifs_tcon, and cifsSesInfo to cifs_ses.
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: don't call mid_q_entry->callback under the Global_MidLock (try #5) (Jeff Layton, 2011-05-24, 1 file, -15/+8)
  Minor revision to the last version of this patch -- the only difference is the fix to the cFYI statement in cifs_reconnect.
  Holding the spinlock while we call this function means that it can't sleep, which really limits what it can do. Taking it out from under the spinlock also means less contention for this global lock.
  Change the semantics such that the Global_MidLock is not held when the callback is called. To do this requires that we take extra care not to have sync_mid_result remove the mid from the list when the mid is in a state where that has already happened. This prevents list corruption when the mid is sitting on a private list for reconnect or when cifsd is coming down.
  Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
  Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: add ignore_pend flag to cifs_call_async (Jeff Layton, 2011-05-23, 1 file, -2/+3)
  The current code always ignores the max_pending limit. Have it instead only optionally ignore the pending limit. For CIFSSMBEcho, we need to ignore it to make sure echoes can always go out. For async reads, writes, and potentially other calls, we need to respect it.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: make cifs_send_async take a kvec array (Jeff Layton, 2011-05-23, 1 file, -6/+7)
  We'll need this for async writes, so convert the call to take a kvec array. CIFSSMBEcho is changed to put a kvec on the stack and pass in the SMB buffer using that.
  Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: consolidate SendReceive response checks (Jeff Layton, 2011-05-23, 1 file, -116/+41)
  Further consolidate the SendReceive code by moving the checks run over the packet into a separate function that all the SendReceive variants can call.
  We can also eliminate the check for a receive_len that's too big or too small. cifs_demultiplex_thread already checks that and disconnects the socket if that occurs, while setting the midStatus to MALFORMED. It'll never call this code if that's the case.
  Finally, do a little cleanup. Use "goto out" on errors so that the flow of code in the normal case is more evident. Also switch the logErr variable in map_smb_to_linux_error to a bool.
  Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: keep BCC in little-endian format (Jeff Layton, 2011-05-19, 1 file, -18/+1)
  This is the same patch as originally posted, just with some merge conflicts fixed up.
  Currently, the ByteCount is usually converted to host-endian on receive. This is confusing, however, as we need to keep two sets of routines for accessing it and keep track of when to use each routine. Munging received packets like this also limits when the signature can be calculated.
  Simplify the code by keeping the received ByteCount in little-endian format. This allows us to eliminate a set of routines for accessing it, and we can now drop the *_le suffixes from the accessor functions since that's now implied. While we're at it, switch all of the places that read the ByteCount directly to use the get_bcc inline, which should also clean up some unaligned accesses.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
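  A hedged sketch of a get_bcc-style accessor combining this change with the unaligned accessors from the 2011-01-20 commit further down; the offset computation and helper names are illustrative, not the exact cifs code:

      #include <asm/unaligned.h>

      /* The ByteCount follows the WordCount byte and WordCount 16-bit
       * parameter words, so it is frequently not naturally aligned. */
      static inline unsigned char *bcc_ptr_sketch(struct smb_hdr *hdr)
      {
              return (unsigned char *)&hdr->WordCount + 1 + 2 * hdr->WordCount;
      }

      static inline __u16 get_bcc_sketch(struct smb_hdr *hdr)
      {
              /* stays little-endian in memory; convert only when read */
              return get_unaligned_le16(bcc_ptr_sketch(hdr));
      }

      static inline void put_bcc_sketch(__u16 count, struct smb_hdr *hdr)
      {
              put_unaligned_le16(count, bcc_ptr_sketch(hdr));
      }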
* consistently use smb_buf_length as be32 for cifs (try 3) (Steve French, 2011-05-19, 1 file, -26/+21)
  There is one big-endian field in the cifs protocol, the RFC1001 length, which the cifs code (unlike the smb2 code) had been handling as a u32 until the last possible moment, when it was converted to be32 (its native form) before being sent on the wire.
  To remove the last sparse endian warning, and to make this consistent with the smb2 implementation (which always treats the fields in their native size and endianness), convert all uses of smb_buf_length to be32.
  This version incorporates Christoph's comment about using be32_add_cpu, and fixes a typo in the second version of the patch.
  Signed-off-by: Steve French <sfrench@us.ibm.com>
  Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
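  A small sketch of what "keep it be32 end to end" looks like in practice; be32_add_cpu and be32_to_cpu are real kernel byteorder helpers, while the struct and function names here are illustrative:

      /* the RFC1001 length is big-endian both on the wire and in memory */
      struct smb_hdr_sketch {
              __be32 smb_buf_length;
              __u8   protocol[4];
              /* ... rest of the SMB header ... */
      };

      static void grow_frame(struct smb_hdr_sketch *hdr, unsigned int extra)
      {
              /* adjust the big-endian field in place; no host-endian round trip */
              be32_add_cpu(&hdr->smb_buf_length, extra);
      }

      static unsigned int frame_len(const struct smb_hdr_sketch *hdr)
      {
              /* convert only where a host-endian value is actually needed */
              return be32_to_cpu(hdr->smb_buf_length);
      }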
* cifs: don't always drop malformed replies on the floor (try #3) (Jeff Layton, 2011-02-11, 1 file, -0/+3)
  Slight revision to this patch: use min_t() instead of conditional assignment. Also, remove the FIXME comment and replace it with the explanation that Steve gave earlier.
  After receiving a packet, we currently check the header. If it's no good, then we toss it out and continue the loop, leaving the caller waiting on that response. In cases where the packet has length inconsistencies but the MID is valid, this leads to unneeded delays. That's especially problematic now that the client waits indefinitely for responses.
  Instead, don't immediately discard the packet if checkSMB fails. Try to find a matching mid_q_entry, mark it as having a malformed response and issue the callback.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: enable signing flag in SMB header when server has it on (Jeff Layton, 2011-02-04, 1 file, -0/+4)
  cifs_sign_smb only generates a signature if the correct Flags2 bit is set. Make sure that it gets set correctly if we're sending an async call.
  This patch fixes: https://bugzilla.kernel.org/show_bug.cgi?id=28142
  Reported-and-Tested-by: JG <jg@cms.ac>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: don't pop a printk when sending on a socket is interrupted (Jeff Layton, 2011-01-31, 1 file, -2/+2)
  If we kill the process while it's sending on a socket, then kernel_sendmsg will return -EINTR. This is normal; there is no need to spam the ring buffer with this info.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: send an NT_CANCEL request when a process is signalled (Jeff Layton, 2011-01-31, 1 file, -3/+12)
  Use the new send_nt_cancel function to send an NT_CANCEL when the process is delivered a fatal signal. This is a "best effort" enterprise, however, so don't bother to check the return code; there's nothing we can reasonably do if it fails anyway.
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: handle cancelled requests better (Jeff Layton, 2011-01-31, 1 file, -7/+36)
  Currently, when a request is cancelled via signal, we delete the mid immediately. If the request was already transmitted, however, the client is still likely to receive a response. When it does, it won't recognize it and will pop a printk.
  It's also a little dangerous to just delete the mid entry like this. We may end up reusing that mid, and if we do we could potentially get the response from the first request confused with the later one.
  Prevent the reuse of mids by marking them as cancelled and keeping them on the pending_mid_q list. If the reply comes in, we'll delete the mid from the list then. If it never comes, then we'll delete it at reconnect or when cifsd comes down.
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: use get/put_unaligned functions to access ByteCount (Jeff Layton, 2011-01-20, 1 file, -5/+4)
  It's possible that when we access the ByteCount the alignment will be off. Most CPUs deal with that transparently, but there's usually some performance impact. Some CPUs raise an exception on unaligned accesses.
  Fix this by accessing the byte count using the get_unaligned and put_unaligned inlined functions. While we're at it, fix the types of some of the variables that end up getting returns from these functions.
  Acked-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: mangle existing header for SMB_COM_NT_CANCEL (Jeff Layton, 2011-01-20, 1 file, -25/+38)
  The NT_CANCEL command looks just like the original command, except for a few small differences. The send_nt_cancel function however currently takes a tcon, which we don't have in SendReceive and SendReceive2.
  Instead of "respinning" the entire header for an NT_CANCEL, just mangle the existing header by replacing just the fields we need. This means we don't need a tcon and allows us to call it from other places.
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: remove code for setting timeouts on requests (Jeff Layton, 2011-01-20, 1 file, -1/+1)
  Since we don't time out individual requests anymore, remove the code that we used to use for setting timeouts on different requests.
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: add ability to send an echo request (Jeff Layton, 2011-01-20, 1 file, -1/+1)
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: add cifs_call_async (Jeff Layton, 2011-01-20, 1 file, -1/+57)
  Add a function that will send a request and set up the mid for an async reply.
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: allow for different handling of received response (Jeff Layton, 2011-01-20, 1 file, -2/+17)
  In order to incorporate async requests, we need to allow for a more general way to do things on receive, rather than just waking up a process.
  Turn the task pointer in the mid_q_entry into a callback function and a generic data pointer. When a response comes in, or the socket is reconnected, cifsd can call the callback function in order to wake up the process. The default is to just wake up the current process, which should mean no change in behavior for existing code.
  Also, clean up the locking in cifs_reconnect. There doesn't seem to be any need to hold both the srv_mutex and GlobalMid_Lock when walking the list of mids.
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
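  A sketch of the shape this change takes; field and function names are approximations of the cifs code, not the exact patch:

      #include <linux/sched.h>

      struct mid_sketch {
              void (*callback)(struct mid_sketch *mid); /* run by cifsd on response or reconnect */
              void *callback_data;                      /* default case: the waiting task */
              /* ... other mid_q_entry fields ... */
      };

      /* default callback keeps the old behavior: wake the submitting process */
      static void wake_up_task_callback(struct mid_sketch *mid)
      {
              struct task_struct *task = mid->callback_data;

              wake_up_process(task);
      }

  Async callers can then plug in their own callback (for example, one that queues a write-completion work item) without cifsd needing to know the difference.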
* cifs: clean up sync_mid_result (Jeff Layton, 2011-01-20, 1 file, -17/+18)
  Make it use a switch statement based on the value of the midStatus. If the resp_buf is set, then MID_RESPONSE_RECEIVED is too.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: don't reconnect server when we don't get a response (Jeff Layton, 2011-01-20, 1 file, -3/+1)
  We only want to force a reconnect to the server under very limited and specific circumstances. Now that we have processes waiting indefinitely for responses, we shouldn't reach this point unless a reconnect is already in process. Thus, there's no reason to re-mark the server for reconnect here.
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: wait indefinitely for responses (Jeff Layton, 2011-01-20, 1 file, -93/+17)
  The client should not be timing out on individual SMB requests. Too much of the state between client and server is tied to the state of the socket. If we time out requests and issue spurious disconnects, then that compromises data integrity.
  Instead of doing this complicated dance where we try to decide how long to wait for a response for particular requests, have the client instead wait indefinitely for a response. Also, use a TASK_KILLABLE sleep here so that fatal signals will break out of this waiting.
  Later patches will add support for detecting dead peers and forcing reconnects based on that.
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: move mid result processing into common function (Jeff Layton, 2011-01-19, 1 file, -78/+43)
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: move locked sections out of DeleteMidQEntry and AllocMidQEntry (Jeff Layton, 2011-01-19, 1 file, -17/+24)
  In later patches, we're going to need to have finer-grained control over the addition and removal of these structs from the pending_mid_q, and we'll need to be able to call the destructor while holding the spinlock. Move the locked sections out of both routines and into the callers. Fix up current callers of DeleteMidQEntry to call a new routine that dequeues the entry and then destroys it.
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
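  The "new routine that dequeues the entry and then destroys it" roughly follows the pattern below; a hedged sketch with approximate names (the later cifs code calls something like this delete_mid), not the exact patch:

      #include <linux/list.h>
      #include <linux/spinlock.h>

      extern spinlock_t GlobalMid_Lock;              /* cifs global mid-queue lock */
      struct mid_q_entry { struct list_head qhead; /* ... */ };
      void DeleteMidQEntry(struct mid_q_entry *mid); /* destructor, no locking inside */

      static void delete_mid_sketch(struct mid_q_entry *mid)
      {
              spin_lock(&GlobalMid_Lock);
              list_del(&mid->qhead);          /* unlink from pending_mid_q */
              spin_unlock(&GlobalMid_Lock);

              DeleteMidQEntry(mid);           /* free outside the lock */
      }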
* cifs: clean up accesses to midCount (Jeff Layton, 2011-01-19, 1 file, -3/+3)
  It's an atomic_t, and the code accesses the "counter" field in it directly instead of using atomic_read(). It is also sometimes accessed under a spinlock and sometimes not. Move it out of the spinlock since we don't need belt-and-suspenders protection for something that's just informational.
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: make wait_for_free_request take a TCP_Server_Info pointer (Jeff Layton, 2011-01-19, 1 file, -13/+13)
  The cifsSesInfo pointer is only used to get at the server.
  Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
  Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* CIFS: Simplify ipv*_connect functions into one (try #4) (Pavel Shilovsky, 2011-01-06, 1 file, -1/+1)
  Make the connect logic more ip-protocol independent and move the RFC1001 stuff into a separate function. Also replace union addr in the TCP_Server_Info structure with sockaddr_storage.
  Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com>
  Reviewed-and-Tested-by: Jeff Layton <jlayton@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* NTLM auth and sign - Allocate session key/client response dynamically (Shirish Pargaonkar, 2010-10-26, 1 file, -3/+3)
  Start calculating the auth response within a session. Move/add pertinent data structures such as the session key, server challenge and ntlmv2_hash into a session structure. We should do the calculations within a session before copying the session key and response over to the server data structures, because a session setup can fail. Only after the very first smb session succeeds does its session key become the session key of the smb connection; this key stays with the smb connection throughout its life. sequence_number within the server is set to 0x2.
  The authentication Message Authentication Key (mak), which consists of the session key followed by the client response within structure session_key, is now dynamic. Every authentication type allocates key + response sized memory within its session structure and later either assigns it (if the session's session key becomes the connection's session key) or frees it once the client response is sent.
  ntlm/ntlmi authentication functions are rearranged. A function named setup_ntlm_resp(), similar to setup_ntlmv2_resp(), replaces the function cifs_calculate_session_key(). The size of CIFS_SESS_KEY_SIZE is changed to 16, to reflect the byte size of the key it holds.
  Reviewed-by: Jeff Layton <jlayton@samba.org>
  Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs NTLMv2/NTLMSSP Change variable name mac_key to session key to reflect the key it holds (Shirish Pargaonkar, 2010-09-29, 1 file, -3/+3)
  Change the name of the variable mac_key to session_key. The reason for the change is that this structure does not hold a message authentication code; it holds the session key (for ntlmv2, ntlmv1, etc.). The mac is generated as a signature in the cifs_calc* functions.
  Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* Revert "[CIFS] Fix ntlmv2 auth with ntlmssp" (Steve French, 2010-09-08, 1 file, -3/+3)
  This reverts commit 9fbc590860e75785bdaf8b83e48fabfe4d4f7d58.
  The change to kernel crypto and the fixes to the ntlmv2 and ntlmssp series introduced a regression. Deferring this patch series to 2.6.37 after Shirish fixes it.
  Signed-off-by: Steve French <sfrench@us.ibm.com>
  Acked-by: Jeff Layton <jlayton@redhat.com>
  CC: Shirish Pargaonkar <shirishp@us.ibm.com>
* [CIFS] Fix ntlmv2 auth with ntlmssp (Steve French, 2010-08-20, 1 file, -3/+3)
  Make ntlmv2 the authentication mechanism within ntlmssp instead of ntlmv1. Parse the type 2 response in the ntlmssp negotiation to pluck the AV pairs and use them to calculate the ntlmv2 response token. Also, assign the domain name from the server response in the type 2 packet of ntlmssp and use that (netbios) domain name in the calculation of the response.
  Enable cifs/smb signing using rc4 and md5. Changed the name of the structure mac_key to session_key to reflect the type of key it holds. Use the kernel crypto_shash_* APIs instead of the equivalent cifs functions.
  Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
  Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* [CIFS] Remove unused cifs_oplock_cachep (Steve French, 2010-05-06, 1 file, -1/+0)
  CC: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* [CIFS] Neaten cERROR and cFYI macros, reduce text space (Joe Perches, 2010-04-21, 1 file, -46/+45)
  Neaten the cERROR and cFYI macros, reducing text space by ~2.5K.
  Convert '__FILE__ ": " fmt' to '"%s: " fmt', __FILE__' to save text space.
  Surround macros with do {} while.
  Add parentheses to macros.
  Make a statement expression macro from the macro with an assignment.
  Remove now unnecessary parentheses from cFYI and cERROR uses.

  defconfig with CIFS support, old:
  $ size fs/cifs/built-in.o
     text    data     bss     dec     hex filename
   156012    1760     148  157920   268e0 fs/cifs/built-in.o

  defconfig with CIFS support, new:
  $ size fs/cifs/built-in.o
     text    data     bss     dec     hex filename
   153508    1760     148  155416   25f18 fs/cifs/built-in.o

  allyesconfig, old:
  $ size fs/cifs/built-in.o
     text    data     bss     dec     hex filename
   309138    3864   74824  387826   5eaf2 fs/cifs/built-in.o

  allyesconfig, new:
  $ size fs/cifs/built-in.o
     text    data     bss     dec     hex filename
   305655    3864   74824  384343   5dd57 fs/cifs/built-in.o

  Signed-off-by: Joe Perches <joe@perches.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
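  An illustrative sketch of the macro-neatening pattern the commit describes (do {} while (0) wrapping plus passing __FILE__ as a "%s" argument instead of pasting it into the format string); this is not the exact cifs macro definition:

      #ifdef CONFIG_CIFS_DEBUG2
      #define cFYI_sketch(set, fmt, ...)                              \
      do {                                                            \
              if (set)                                                \
                      printk(KERN_DEBUG "%s: " fmt "\n",              \
                             __FILE__, ##__VA_ARGS__);                \
      } while (0)
      #else
      /* still swallows its arguments cleanly when debugging is off */
      #define cFYI_sketch(set, fmt, ...) do { } while (0)
      #endif

  Emitting __FILE__ as an argument rather than string-pasting it into every format string is what yields the text-size savings shown above.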
* include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h (Tejun Heo, 2010-03-30, 1 file, -0/+1)
  percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.
  The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py
  The script does the following:
  * Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.
  * When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.
  * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.
  The conversion was done in the following steps.
  1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
  2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
  3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
  4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.
  5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
  6. percpu.h was updated not to include slab.h.
  7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).
     * x86 and x86_64 UP and SMP allmodconfig and a custom test config
     * powerpc and powerpc64 SMP allmodconfig
     * sparc and sparc64 SMP allmodconfig
     * ia64 SMP allmodconfig
     * s390 SMP allmodconfig
     * alpha SMP allmodconfig
     * um on x86_64 SMP allmodconfig
  8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.
  Given the fact that I had only a couple of failures from the tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
* cifs: convert oplock breaks to use slow_work facility (try #4) (Jeff Layton, 2009-09-24, 1 file, -50/+0)
  This is the fourth respin of the patch to convert oplock breaks to use the slow_work facility.
  A customer of ours was testing a backport of one of the earlier patchsets and hit a "Busy inodes after umount..." problem. An oplock break job had raced with a umount, and the superblock got torn down and its memory reused. When the oplock break job tried to dereference the inode->i_sb, the kernel oopsed.
  This patchset has the oplock break job hold an inode and vfsmount reference until the oplock break completes. With this, there should be no need to take a tcon reference (the vfsmount implicitly holds one already).
  Currently, when an oplock break comes in, there's a chance that the oplock break job won't occur if the allocation of the oplock_q_entry fails. There are also some rather nasty races in the allocation and handling of these structs.
  Rather than allocating oplock queue entries when an oplock break comes in, add a few extra fields to the cifsFileInfo struct. Get rid of the dedicated cifs_oplock_thread as well and queue the oplock break job to the slow_work thread pool. This approach also has the advantage that the oplock break jobs can potentially run in parallel rather than being serialized like they are today.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: protect GlobalOplock_Q with its own spinlock (Jeff Layton, 2009-09-02, 1 file, -9/+8)
  Right now, the GlobalOplock_Q is protected by the GlobalMid_Lock. That lock is also used for completely unrelated purposes (mostly for managing the global mid queue). Give the list its own dedicated spinlock (cifs_oplock_lock) and rename the list to cifs_oplock_list to eliminate the camel-case.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* [CIFS] Make socket retry timeouts consistent between blocking and nonblocking cases (Steve French, 2009-01-29, 1 file, -1/+19)
  We have used approximately 15 second timeouts on nonblocking sends in the past, and also a 15 second SMB timeout (waiting for server responses, for most request types). Now that we can do blocking tcp sends, make the blocking send timeout approximately the same (15 seconds).
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: turn smb_send into a wrapper around smb_sendv (Jeff Layton, 2009-01-29, 1 file, -88/+19)
  Rename smb_send2 to smb_sendv to make it consistent with kernel naming conventions for functions that take a vector.
  There's no need to have two functions to handle sending SMB calls. Turn smb_send into a wrapper around smb_sendv. This also allows us to properly mark the socket as needing to be reconnected when there's a partial send from smb_send.
  Also, in practice we always use the address and noblocksnd flag that's attached to the TCP_Server_Info. There's no need to pass them in as separate args to smb_sendv.
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Acked-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* Remove an already-checked error condition in SendReceiveBlockingLock (Volker Lendecke, 2008-12-26, 1 file, -2/+1)
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* Streamline SendReceiveBlockingLock: Use "goto out:" in an error condition (Volker Lendecke, 2008-12-26, 1 file, -30/+31)
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* Streamline SendReceiveBlockingLock: Use "goto out:" in an error condition (Volker Lendecke, 2008-12-26, 1 file, -33/+37)
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* [CIFS] Streamline SendReceive[2] by using "goto out:" in an error condition (Steve French, 2008-12-26, 1 file, -67/+72)
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
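  The streamlining in this group of commits is the classic kernel "goto out" error-path style: keep the success path linear and funnel all cleanup through one label. A generic sketch of the pattern (the context struct and helper names below are hypothetical placeholders, not the actual SendReceive code):

      struct xchg_ctx;                               /* hypothetical request context */

      int setup_xchg(struct xchg_ctx *ctx);          /* hypothetical helpers */
      int send_xchg(struct xchg_ctx *ctx);
      int wait_xchg(struct xchg_ctx *ctx);
      void teardown_xchg(struct xchg_ctx *ctx);

      static int do_xchg(struct xchg_ctx *ctx)
      {
              int rc;

              rc = setup_xchg(ctx);
              if (rc)
                      goto out;               /* nothing sent yet, still clean up */

              rc = send_xchg(ctx);
              if (rc)
                      goto out;

              rc = wait_xchg(ctx);            /* fall through to common cleanup */
      out:
              teardown_xchg(ctx);
              return rc;
      }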
* Slightly streamline SendReceive[2] (Volker Lendecke, 2008-12-26, 1 file, -8/+9)
  Remove an else branch by naming the error condition what it is.
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* Check the return value of cifs_sign_smb[2] (Volker Lendecke, 2008-12-26, 1 file, -0/+14)
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* [CIFS] Slightly simplify wait_for_free_request(), remove an unnecessary "else" branch (Volker Lendecke, 2008-12-26, 1 file, -25/+26)
  This is no functional change, because in the "if" branch we do an early "return 0;".
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* Simplify allocate_mid() slightly: Remove some unnecessary "else" branches (Volker Lendecke, 2008-12-26, 1 file, -2/+6)
  Signed-off-by: Volker Lendecke <vl@samba.org>
  Acked-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <sfrench@us.ibm.com>