path: root/drivers/bluetooth/btmrvl_main.c
Date  Commit message  (Author, files changed, -/+ lines)
2015-12-10  Bluetooth: Remove unnecessary HCI_ADVERTISING_INSTANCE flag  (Johan Hedberg, 3 files, -19/+9)
This flag just tells us whether hdev->adv_instances is empty or not. We can equally well use the list_empty() function to get this information. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
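A minimal illustration of the equivalent check, assuming the existing hdev->adv_instances list (the helper name here is made up):

#include <net/bluetooth/hci_core.h>

/* Illustrative only: the removed flag carried no information beyond
 * whether the hdev->adv_instances list is empty. */
static bool example_adv_instance_configured(struct hci_dev *hdev)
{
        return !list_empty(&hdev->adv_instances);
}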
2015-12-10  Bluetooth: Simplify read_adv_features code  (Johan Hedberg, 1 file, -20/+8)
The code in the Read Advertising Features mgmt command handler is unnecessarily complicated. Clean it up and remove unnecessary variables & branches. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Perform HCI update for power on synchronously  (Johan Hedberg, 5 files, -134/+128)
The request to update HCI during power on is always coming either from hdev->req_workqueue or through an ioctl, so it's safe to use hci_req_sync for it. This way we also eliminate potential races with incoming mgmt commands or other actions while powering on. Part of this refactoring is the splitting of mgmt_powered() into mgmt_power_on() and __mgmt_power_off() functions. The main reason is the different requirements as far as hdev locking is concerned, as highlighted with the __ prefix of the power off API. Since the power on in the case of clearing the AUTO_OFF flag cannot be done synchronously in the set_powered mgmt handler, the hci_power_on work callback is extended to cover this (which also simplifies the set_powered helper a lot). Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move fast connectable code to hci_request.c  (Johan Hedberg, 3 files, -39/+40)
We'll soon need this both in hci_request.c and mgmt.c so move it to hci_request.c as a generic helper. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move EIR update to hci_request.c  (Johan Hedberg, 3 files, -195/+198)
We'll soon need to update the EIR both from hci_request.c and mgmt.c so move update_eir() as a more generic request helper to hci_request.c. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move HCI name update to hci_request.c  (Johan Hedberg, 3 files, -12/+14)
We'll soon need this both from hci_request.c and mgmt.c so move it as a request helper function to hci_request.c. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move discoverable timeout behind hdev->req_workqueue  (Johan Hedberg, 4 files, -53/+28)
Since the other discoverable changes are now behind req_workqueue, it only makes sense to move the discoverable timeout there as well. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move discoverable changes to hdev->req_workqueue  (Johan Hedberg, 3 files, -77/+79)
The discoverable mode is intrinsically linked with the connectable mode, e.g. through sharing the same HCI command (Write Scan Enable) for BR/EDR. It therefore makes sense to move it to hci_request.c and run the changes through the same hdev->req_workqueue. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Perform Class of Device changes through hdev->req_workqueue  (Johan Hedberg, 3 files, -47/+49)
The Class of Device needs to be changed e.g. for limited discoverable mode. In preparation of moving the discoverable mode to hci_request.c and hdev->req_workqueue, move the Class of Device helpers there first. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move connectable changes to hdev->req_workqueue  (Johan Hedberg, 3 files, -74/+53)
This way the connectable changes are synchronized against each other, which helps avoid potential races. The connectable mode is also linked together with LE advertising, which makes it more convenient to have it behind the same workqueue. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move advertising instance management to hci_request.c  (Johan Hedberg, 7 files, -566/+589)
This paves the way for eventually performing advertising changes through the hdev->req_workqueue. Some new APIs need to be exposed from mgmt.c to hci_request.c and vice-versa, but many of them will go away once hdev->req_workqueue gets used. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Move __hci_update_background_scan up in hci_request.c  (Johan Hedberg, 1 file, -73/+73)
This way we avoid the need to do a forward declaration in later patches. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-10  Bluetooth: Run page scan updates through hdev->req_workqueue  (Johan Hedberg, 5 files, -21/+35)
Since Add/Remove Device perform the page scan updates independently from the HCI command completion we've introduced a potential race when multiple mgmt commands are queued. Doing the page scan updates through the req_workqueue ensures that the state changes are performed in a race-free manner. At the same time, to make the request helper more widely usable, extend it to also cover Inquiry Scan changes since those are behind the same HCI command. This is also reflected in the new name of the API as well as the work struct name. Signed-off-by: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-12-09  cgroup: fix sock_cgroup_data initialization on earlier compilers  (Tejun Heo, 1 file, -2/+2)
sock_cgroup_data is a struct containing an anonymous union. sock_cgroup_set_prioidx() and sock_cgroup_set_classid() were initializing a field inside the anonymous union as follows: struct sock_cgroup_data skcd_buf = { .val = VAL }; While this is fine on more recent compilers, gcc-4.4.7 triggers the following errors. include/linux/cgroup-defs.h: In function ‘sock_cgroup_set_prioidx’: include/linux/cgroup-defs.h:619: error: unknown field ‘val’ specified in initializer include/linux/cgroup-defs.h:619: warning: missing braces around initializer include/linux/cgroup-defs.h:619: warning: (near initialization for ‘skcd_buf.<anonymous>’) This is because .val belongs to the anonymous union nested inside the struct but the initializer is missing the nesting. Fix it by adding an extra pair of braces. Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Alaa Hleihel <alaa@dev.mellanox.co.il> Fixes: bd1060a1d671 ("sock, cgroup: add sock->sk_cgroup") Signed-off-by: David S. Miller <davem@davemloft.net>
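A minimal sketch of the problem and the fix, using a simplified stand-in for the real structure (the actual definition lives in include/linux/cgroup-defs.h):

#include <linux/types.h>

struct example_cgroup_data {            /* simplified stand-in */
        union {
                struct {
                        u16 prioidx;
                        u16 classid;
                };
                u64 val;
        };
};

static void example_set_val(u64 v)
{
        /* gcc-4.4.7 rejects the un-nested form:
         *      struct example_cgroup_data buf = { .val = v };
         * because .val lives inside the anonymous union. The extra pair of
         * braces below makes the nesting explicit and builds everywhere. */
        struct example_cgroup_data buf = { { .val = v } };

        (void)buf;
}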
2015-12-09  chelsio: constify cmac_ops structures  (Julia Lawall, 2 files, -2/+2)
The cmac_ops structures are never modified, so declare them as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
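A generic illustration of such a constification (placeholder names; the real cmac_ops has many more callbacks): marking the ops table const lets it be placed in read-only data.

struct example_ops {
        int (*enable)(void *mac);
        int (*disable)(void *mac);
};

static int example_enable(void *mac)  { return 0; }
static int example_disable(void *mac) { return 0; }

/* const: the table is never written at run time, so it can live in rodata. */
static const struct example_ops example_mac_ops = {
        .enable  = example_enable,
        .disable = example_disable,
};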
2015-12-09  bnx2x: remove rx_pkt/rx_calls  (Eric Dumazet, 2 files, -6/+0)
These fields are updated but never read. Remove the overhead. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  bnx2x: avoid soft lockup in bnx2x_poll()  (Eric Dumazet, 1 file, -30/+21)
Under heavy TX load, bnx2x_poll() can loop forever and trigger soft lockup bugs. A NAPI poll handler must yield after one TX completion round; the risk of livelock is too high otherwise. The bug is very easy to trigger with a debug build and a UDP flood: the extra CPU cycles spent in TX completion mean we do not receive enough packets to break the loop. Reported-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Ariel Elior <ariel.elior@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
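A hedged sketch of the resulting poll shape; every example_* name is made up and this is not the actual bnx2x code:

#include <linux/netdevice.h>

struct example_queue {
        struct napi_struct napi;
        /* ring state, stats, ... */
};

/* Hypothetical helpers standing in for the driver's TX/RX handling. */
bool example_tx_pending(struct example_queue *q);
void example_tx_complete(struct example_queue *q);
bool example_rx_pending(struct example_queue *q);
int  example_rx_process(struct example_queue *q, int budget);
void example_enable_irq(struct example_queue *q);

static int example_napi_poll(struct napi_struct *napi, int budget)
{
        struct example_queue *q = container_of(napi, struct example_queue, napi);
        int rx_done = 0;

        /* One TX completion round only: looping here until the TX ring
         * drains is what allowed the soft lockup under heavy TX load. */
        if (example_tx_pending(q))
                example_tx_complete(q);

        if (example_rx_pending(q))
                rx_done = example_rx_process(q, budget);

        /* Re-arm interrupts only when truly idle; otherwise return budget
         * so the NAPI core schedules us again and other work can run. */
        if (rx_done < budget && !example_tx_pending(q)) {
                napi_complete(napi);
                example_enable_irq(q);
                return rx_done;
        }

        return budget;
}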
2015-12-09  rhashtable: Remove unnecessary wmb for future_tbl  (Herbert Xu, 1 file, -3/+0)
The patch 9497df88ab5567daa001829051c5f87161a81ff0 ("rhashtable: Fix reader/rehash race") added a pair of barriers. In fact the wmb is superfluous because every subsequent write to the old or new hash table uses rcu_assign_pointer, which itself carries a full barrier prior to the assignment. Therefore we may remove the explicit wmb. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
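A minimal sketch of why the explicit barrier is redundant, with illustrative names rather than the real rhashtable fields:

#include <linux/rcupdate.h>

struct example_table {
        unsigned int size;
        struct example_table __rcu *future_tbl;
};

static void example_publish(struct example_table *old_tbl,
                            struct example_table *new_tbl)
{
        new_tbl->size = old_tbl->size * 2;      /* plain initialisation */

        /* No explicit smp_wmb() needed: rcu_assign_pointer() orders the
         * stores above before the pointer becomes visible to readers that
         * use rcu_dereference(old_tbl->future_tbl). */
        rcu_assign_pointer(old_tbl->future_tbl, new_tbl);
}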
2015-12-09  cxgb4: Adds PCI device id for new T5 adapters  (Hariprasad Shenai, 1 file, -0/+3)
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  cxgb4: Add FL DMA mapping error and low counter  (Hariprasad Shenai, 3 files, -0/+11)
Add Free List DMA Mapping Errors to the SGE queue info for Free Lists. Add a Free List "Low" counter that counts the number of times the number of pointers we _think_ the hardware sees in the Free List drops below the Egress Threshold. Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  cxgb4: Deal with wrap-around of queue for Work request  (Hariprasad Shenai, 1 file, -4/+53)
The WR headers may not fit within one descriptor. So we need to deal with wrap-around here. Based on original patch by Pranjal Joshi <pjoshi@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
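A generic sketch of such wrap-around handling, not the actual cxgb4 descriptor code: if the data would run past the end of the ring, it is copied in two pieces.

#include <linux/kernel.h>
#include <linux/string.h>

static void example_copy_wrapped(void *ring, size_t ring_size,
                                 size_t offset, const void *src, size_t len)
{
        size_t first = min(len, ring_size - offset);

        memcpy(ring + offset, src, first);
        if (first < len)                        /* wrapped past the end */
                memcpy(ring, src + first, len - first);
}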
2015-12-09  cxgb4: prevent simultaneous execution of service_ofldq()  (Hariprasad Shenai, 2 files, -6/+51)
Change the mutual exclusion mechanism to prevent multiple threads of execution from running in service_ofldq() at the same time. The old mechanism used an implicit guard on the down-call path, none on the restart path, and wasn't working. The new explicit check makes the mechanism much easier to understand as a result. Based on original work by Casey Leedom <leedom@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
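A hedged sketch of the explicit-guard idea; the structure and field names below are assumptions, not the actual cxgb4 code:

#include <linux/spinlock.h>
#include <linux/types.h>

struct example_ofld_txq {
        spinlock_t lock;                /* protects the queue and the flag */
        bool service_running;
        /* pending work-request list, ring state, ... */
};

/* Called with q->lock held. Only the first caller services the queue;
 * concurrent callers just leave their work queued and return. */
static void example_service_ofldq(struct example_ofld_txq *q)
{
        if (q->service_running)
                return;
        q->service_running = true;

        /* ... drain the queue, dropping and re-taking q->lock around the
         * parts that may sleep or touch hardware ... */

        q->service_running = false;
}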
2015-12-09  cxgb4: Use ACCESS_ONCE macro to read queue's consumer index  (Hariprasad Shenai, 1 file, -2/+2)
Use the helper macro ACCESS_ONCE() to load from the SGE status page, preventing the compiler from loading it multiple times. Based on original work by Mike Werner <werner@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
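A small illustration of the pattern; the status-page layout below is an assumption, not the real SGE structure:

#include <linux/compiler.h>
#include <linux/types.h>

struct example_status_page {
        u16 pidx;
        u16 cidx;       /* consumer index written back by the hardware */
};

static u16 example_read_cidx(const struct example_status_page *sp)
{
        /* Exactly one load: without ACCESS_ONCE() the compiler may reload
         * the DMA-updated field every time it is used. */
        return ACCESS_ONCE(sp->cidx);
}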
2015-12-09  cxgb4/cxgb4vf: update Kconfig file to include T6 adapter  (Hariprasad Shenai, 1 file, -8/+9)
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  cxgb4: Align rest of the ethtool get stats  (Hariprasad Shenai, 1 file, -73/+73)
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  net: hns: optimize XGE capability by reducing cpu usage  (yankejian, 3 files, -30/+55)
This patch raises XGE performance by: 1) changing the page management method for enet memory, 2) reducing the number of rmb calls, and 3) adding memory prefetching. Signed-off-by: Kejian Yan <yankejian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  sock, cgroup: add sock->sk_cgroup  (Tejun Heo, 6 files, -9/+191)
In cgroup v1, dealing with cgroup membership was difficult because the number of membership associations was unbound. As a result, cgroup v1 grew several controllers whose primary purpose is either tagging membership or pulling in configuration knobs from other subsystems so that cgroup membership tests can be avoided. net_cls and net_prio controllers are examples of the latter. They allow configuring network-specific attributes from the cgroup side so that the network subsystem can avoid testing cgroup membership; unfortunately, these are not only cumbersome but also problematic. Both net_cls and net_prio aren't properly hierarchical. Both inherit configuration from the parent on creation but there's no interaction afterwards. An ancestor doesn't restrict the behavior in its subtree in any way and configuration changes aren't propagated downwards. Especially when combined with cgroup delegation, this is problematic because delegatees can mess up whatever network configuration is implemented at the system level. net_prio would allow the delegatees to set whatever priority value regardless of CAP_NET_ADMIN, and net_cls the same for classid. While it is possible to solve these issues from the controller side by implementing hierarchical allowable ranges in both controllers, it would involve quite a bit of complexity in the controllers and further obfuscate network configuration, as it becomes even more difficult to tell what's actually being configured looking from the network side. While not much can be done for v1 at this point, as membership handling is sane on cgroup v2, it'd be better to make cgroup matching behave like other network matches and classifiers rather than introducing further complications. In preparation, this patch updates sock->sk_cgrp_data handling so that it points to the v2 cgroup that the sock was created in until either net_prio or net_cls is used. Once either of the two is used, sock->sk_cgrp_data reverts to its previous role of carrying prioidx and classid. This is to avoid adding yet another cgroup-related field to struct sock. As the mode switching can happen at most once per boot, the switching mechanism is aimed at lowering hot path overhead. It may leak a finite, likely small, number of cgroup refs and report spurious prioidx or classid on switching; however, dynamic updates of prioidx and classid have always been racy and lossy - socks between creation and fd installation are never updated, config changes don't update existing sockets at all, and prioidx may index with dead and recycled cgroup IDs. Non-critical inaccuracies from small race windows won't make any noticeable difference. This patch doesn't make use of the pointer yet. The following patch will implement a netfilter match for cgroup2 membership. v2: Use sock_cgroup_data to avoid inflating struct sock w/ another cgroup specific field. v3: Add comments explaining why sock_data_prioidx() and sock_data_classid() use different fallback values. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Daniel Wagner <daniel.wagner@bmw-carit.de> CC: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  net: wrap sock->sk_cgrp_prioidx and ->sk_classid inside a struct  (Tejun Heo, 12 files, -38/+76)
Introduce sock->sk_cgrp_data which is a struct sock_cgroup_data. ->sk_cgroup_prioidx and ->sk_classid are moved into it. The struct and its accessors are defined in cgroup-defs.h. This is to prepare for overloading the fields with a cgroup pointer. This patch mostly performs equivalent conversions, but the following points are noteworthy. * The equality test before updating classid is removed from sock_update_classid(). This shouldn't make any noticeable difference and a similar test will be implemented on the helper side later. * sock_update_netprioidx() now takes struct sock_cgroup_data and can be moved to netprio_cgroup.h without causing an include dependency loop. Moved. * The dummy version of sock_update_netprioidx() is converted to a static inline function while at it. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  netprio_cgroup: limit the maximum css->id to USHRT_MAX  (Tejun Heo, 2 files, -5/+14)
netprio builds a per-netdev contiguous priomap array which is indexed by css->id. The array is allocated using kzalloc(), effectively limiting the maximum supported ID to the few-thousand range. This patch caps the maximum supported css->id to USHRT_MAX, which should be way above what is actually usable. This allows reducing sock->sk_cgrp_prioidx to u16 from u32. The freed up part will be used to overload the cgroup related fields. sock->sk_cgrp_prioidx's position is swapped with sk_mark so that the two cgroup related fields are adjacent. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Daniel Wagner <daniel.wagner@bmw-carit.de> Cc: Daniel Borkmann <daniel@iogearbox.net> CC: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
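A hedged sketch of the cap; the real check sits in netprio's css setup path, and the function below is purely illustrative:

#include <linux/cgroup.h>
#include <linux/errno.h>
#include <linux/kernel.h>

static int example_check_css_id(struct cgroup_subsys_state *css)
{
        /* sock->sk_cgrp_prioidx is now a u16, so refuse ids that would not fit. */
        if (css->id > USHRT_MAX)
                return -ENOSPC;
        return 0;
}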
2015-12-09  Revert "Merge branch 'vsock-virtio'"  (Stefan Hajnoczi, 14 files, -2778/+0)
This reverts commit 0d76d6e8b2507983a2cae4c09880798079007421 and merge commit c402293bd76fbc93e52ef8c0947ab81eea3ae019, reversing changes made to c89359a42e2a49656451569c382eed63e781153c. The virtio-vsock device specification is not finalized yet. Michael Tsirkin voiced concerns about merging this code when the hardware interface (and possibly the userspace interface) could still change. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  sh_eth: get rid of bb_{set|clr|read}()  (Sergei Shtylyov, 1 file, -21/+6)
After the MDIO bitbang code consolidation, there's no need anymore for bb_{set|clr}() as well as bb_read() -- just expand them inline, thus saving more LoCs... Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Acked-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  sh_eth: factor out common code from MDIO bitbang methods  (Sergei Shtylyov, 1 file, -23/+12)
sh_mm[cd]_ctrl() and sh_set_mdio() all look mostly the same -- factor out their common code and put it into sh_mdio_ctrl(). Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Acked-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  sh_eth: remove mask fields from 'struct bb_info'  (Sergei Shtylyov, 1 file, -15/+7)
The MDIO control bits are always mapped to the same bits of the same register (PIR), so there's no need to store their masks in the 'struct bb_info'... Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Acked-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-09  drivers: net: xgene: constify xgene_mac_ops and xgene_port_ops structures  (Julia Lawall, 8 files, -18/+18)
The xgene_mac_ops and xgene_port_ops structures are never modified, so declare them as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-08  net: Fix inverted test in __skb_recv_datagram  (Rainer Weikusat, 1 file, -1/+1)
As the kernel generally uses negated error numbers, *err needs to be compared with -EAGAIN (d'oh). Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com> Fixes: ea3793ee29d3 ("core: enable more fine-grained datagram reception control") Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  cxgb3: Convert simple_strtoul to kstrtox  (LABBE Corentin, 2 files, -13/+29)
The simple_strtoul function is obsolete. This patch replaces it with the kstrtox helpers. Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
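A hedged example of this kind of conversion, not the exact cxgb3 hunk: kstrtoul() validates the whole string and returns an error code instead of silently stopping at the first bad character.

#include <linux/kernel.h>

static int example_parse_count(const char *buf, unsigned long *val)
{
        int err;

        err = kstrtoul(buf, 10, val);   /* base 10, full-string validation */
        if (err)
                return err;             /* -EINVAL or -ERANGE */
        return 0;
}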
2015-12-07  net: dsa: move dsa slave destroy code to slave.c  (Neil Armstrong, 3 files, -2/+13)
Move dsa slave dedicated code from dsa_switch_destroy to a new dsa_slave_destroy function in slave.c. Add the netif_carrier_off and phy_disconnect calls in order to correctly cleanup the netdev state and PHY state machine. Signed-off-by: Frode Isaksen <fisaksen@baylibre.com> Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  net: dsa: Add missing master netdev dev_put() calls  (Neil Armstrong, 1 file, -1/+5)
Upon probe failure or unbinding, add missing dev_put() calls. Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  net: dsa: cleanup resources upon module removal  (Neil Armstrong, 1 file, -0/+8)
Make sure that we unassign the master_netdev dsa_ptr to make the packet processing go through the regular Ethernet receive path. Suggested-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  net: dsa: remove DSA link polling  (Neil Armstrong, 2 files, -55/+0)
Since no DSA driver uses the polling callback anymore, and since phylib handles the link detection, remove the link polling work and timer code. Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  net, thunderx: Remove unnecessary rcv buffer start address management  (Sunil Goutham, 2 files, -48/+11)
Since we have moved on to carving receive buffers out of allocated pages instead of using netdev_alloc_skb(), there is no need to store any pointers for later retrieval. Earlier we had to store the skb and skb->data pointers, which were later used to hand over the received packet to the network stack. This also avoids an unnecessary cache miss. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  net: thunderx: nicvf_queues: nivc_*_intr: remove duplication  (Yury Norov, 1 file, -100/+40)
The same switch-case is repeated in the nivc_*_intr functions. In this patch it is moved into a helper, nicvf_int_type_to_mask(). Along the way: - the unneeded write to the NICVF register is dropped if int_type is unknown, and - netdev_dbg() is used instead of netdev_err(). Signed-off-by: Yury Norov <yury.norov@auriga.com> Signed-off-by: Aleksey Makarov <aleksey.makarov@caviumnetworks.com> Acked-by: Vadim Lomovtsev <Vadim.Lomovtsev@caiumnetworks.com> Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
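A hedged sketch of the helper idea; the interrupt types and bit layout below are made up, not the real ThunderX register map:

#include <linux/bitops.h>
#include <linux/types.h>

enum example_int_type { EXAMPLE_INTR_CQ, EXAMPLE_INTR_SQ, EXAMPLE_INTR_RBDR };

static u64 example_int_type_to_mask(enum example_int_type type, int q_idx)
{
        switch (type) {
        case EXAMPLE_INTR_CQ:
                return BIT_ULL(q_idx);
        case EXAMPLE_INTR_SQ:
                return BIT_ULL(8 + q_idx);
        case EXAMPLE_INTR_RBDR:
                return BIT_ULL(16 + q_idx);
        }
        return 0;       /* unknown type: callers skip the register write */
}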
2015-12-07  mac80211: handle HW ROC expired properly  (Ilan Peer, 1 file, -1/+5)
In the case of HW ROC, when the driver reports that the ROC expired, it is not sufficient to purge the ROCs based on the remaining time, as it is possible that the device finished the ROC session before the actual requested duration. To handle such cases, in case of a ROC expired notification from the driver, complete all the ROCs which are marked with hw_begun, regardless of the remaining duration. Signed-off-by: Ilan Peer <ilan.peer@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2015-12-07  af_unix: fix unix_dgram_recvmsg entry locking  (Rainer Weikusat, 1 file, -15/+20)
The current unix_dgram_recvmsg code acquires the u->readlock mutex in order to protect access to the peek offset prior to calling __skb_recv_datagram for actually receiving data. This implies that a blocking reader will go to sleep with this mutex held if there's presently no data to return to userspace. Two undesirable side effects of this are that a later non-blocking read call on the same socket will block on the ->readlock mutex until the earlier blocking call releases it (or the reader is interrupted) and that later blocking read calls will wait longer than the effective socket read timeout says they should: the timeout will only start 'ticking' once such a reader hits the schedule_timeout in wait_for_more_packets (core.c), while the time it already had to wait until it could acquire the mutex is unaccounted for. The patch avoids both by using the __skb_try_recv_datagram and __skb_wait_for_more_packets functions created by the first patch to implement a unix_dgram_recvmsg read loop which releases the readlock mutex prior to going to sleep and reacquires it as needed afterwards. Non-blocking readers will thus immediately return with -EAGAIN if there's no data available regardless of any concurrent blocking readers, and all blocking readers will end up sleeping via schedule_timeout, thus honouring the configured socket receive timeout. Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com> Signed-off-by: David S. Miller <davem@davemloft.net>
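A simplified sketch of that read-loop shape; the real __skb_try_recv_datagram()/__skb_wait_for_more_packets() take more arguments (peek offset, error pointer, last-skb cookie), so the helper signatures here are assumptions:

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/skbuff.h>
#include <net/sock.h>

/* Hypothetical stand-ins for the non-sleeping receive and the wait helper. */
struct sk_buff *example_try_recv(struct sock *sk, unsigned int flags, int *err);
int example_wait_for_more(struct sock *sk, long *timeo, int *err);

static struct sk_buff *example_dgram_recv(struct sock *sk, struct mutex *readlock,
                                          unsigned int flags, long *timeo, int *err)
{
        struct sk_buff *skb;

        for (;;) {
                mutex_lock(readlock);
                skb = example_try_recv(sk, flags, err); /* does not sleep */
                mutex_unlock(readlock);

                if (skb || *err != -EAGAIN || !*timeo)
                        return skb;     /* data, hard error, or non-blocking */

                /* Sleep without holding readlock, so other (including
                 * non-blocking) readers are not stuck behind us and the
                 * socket receive timeout starts ticking right away. */
                if (example_wait_for_more(sk, timeo, err))
                        return NULL;    /* signal or timeout */
        }
}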
2015-12-07  core: enable more fine-grained datagram reception control  (Rainer Weikusat, 2 files, -29/+54)
The __skb_recv_datagram routine in core/datagram.c provides a general skb reception facility supposed to be utilized by protocol modules providing datagram sockets. It encompasses both the actual recvmsg code and a surrounding 'sleep until data is available' loop. This is inconvenient if a protocol module has to use additional locking in order to maintain some per-socket state the generic datagram socket code is unaware of (as the af_unix code does). The patch below moves the recvmsg proper code into a new __skb_try_recv_datagram routine which doesn't sleep and renames wait_for_more_packets to __skb_wait_for_more_packets, both routines being exported interfaces. The original __skb_recv_datagram routine is reimplemented on top of these two functions such that its user-visible behaviour remains unchanged. Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  PHY: DP83867: Remove looking in parent device for OF properties  (Andrew Lunn, 1 file, -4/+1)
Device tree properties for a phy device are expected to be in the phy node. The current code for the DP83867 also tries to look in the parent node. The device's binding documentation does not mention this, no current device tree file makes use of it, and it is not behaviour we want. So remove the lookup in the parent device. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  net: cdc_ncm: add "ndp_to_end" sysfs attribute  (Bjørn Mork, 2 files, -0/+62)
Adding a writable sysfs attribute for the "NDP to end" quirk flag. This makes it easier for end users to test new devices for this firmware bug. We've been lucky so far, but we should not depend on reporters capable of rebuilding the driver. Cc: Enrico Mioso <mrkiko.rs@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
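A hedged, generic sketch of such a writable quirk attribute; the real cdc_ncm code hangs the flag off the usbnet driver context rather than a file-scope bool:

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/string.h>

static bool example_ndp_to_end;         /* stands in for the per-device quirk flag */

static ssize_t ndp_to_end_show(struct device *d, struct device_attribute *attr,
                               char *buf)
{
        return sprintf(buf, "%c\n", example_ndp_to_end ? 'Y' : 'N');
}

static ssize_t ndp_to_end_store(struct device *d, struct device_attribute *attr,
                                const char *buf, size_t len)
{
        bool enable;

        if (strtobool(buf, &enable))
                return -EINVAL;
        example_ndp_to_end = enable;
        return len;
}
static DEVICE_ATTR_RW(ndp_to_end);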
2015-12-07  net/mlx4_core: Support the HA mode for SRIOV VFs too  (Moni Shoua, 1 file, -10/+97)
When the mlx4 driver runs in HA mode, and all VFs are single ported ones, we make their single port Highly-Available. This is done by taking advantage of the HA mode properties (following bonding changes with programming the port V2P map, etc) and adding the missing parts which are unique to SRIOV such as mirroring VF steering rules on both ports. Due to limits on the MAC and VLAN table this mode is enabled only when number of total VFs is under 64. Signed-off-by: Moni Shoua <monis@mellanox.com> Reviewed-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  IB/mlx4: Use the VF base-port when demuxing mad from wire  (Or Gerlitz, 1 file, -3/+14)
Under HA mode, it's possible that the VF registered its GID (and expects to get mads through the PV scheme) on a port which is different from the one this mad arrived on, due to HA fail over. Therefore, if the gid is not matched on the port that the packet arrived on, check for a match on the other port if HA mode is active -- and if a match is found on the other port, continue processing the mad using that other port. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-12-07  net/mlx4_core: Keep VLAN/MAC tables mirrored in multifunc HA mode  (Moni Shoua, 3 files, -23/+586)
Due to HW limitations, indexes to the MAC and VLAN tables are always taken from the table of the actual port. So, if a resource holds an index to a table, it may refer to different values during the lifetime of the resource, unless the tables are mirrored. Also, even when the driver is not in HA mode, the policy for allocating an index in these tables tries to make sure, as much as possible, that the mirroring will be successful when the time comes. This means that in multifunction mode the allocation of a free index in a port's table tries to make sure that the same index in the other port's table is also free. Signed-off-by: Moni Shoua <monis@mellanox.com> Reviewed-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>