Merge tag 'thunderbolt-for-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt into usb-next
Mika writes:
thunderbolt: Changes for v5.14 merge window
This includes the following Thunderbolt/USB4 changes for the v5.14 merge
window:
* Add self-authenticate quirk for a new Dell dock
* NVM improvements
* Align wake configuration with the USB4 connection manager guide
* USB4 buffer allocation support
* Retimer NVM firmware upgrade support when there is no device
attached
* Support for Intel Alder Lake integrated Thunderbolt/USB4 controller
* A couple of miscellaneous cleanups.
All these have been in linux-next with no reported issues.
* tag 'thunderbolt-for-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt: (29 commits)
thunderbolt: Fix DROM handling for USB4 DROM
thunderbolt: Add support for Intel Alder Lake
thunderbolt: No need to include <linux/acpi.h> in usb4_port.c
thunderbolt: Poll 10ms for REG_FW_STS_NVM_AUTH_DONE to be set
thunderbolt: Add device links only when software connection manager is used
thunderbolt: Bond lanes only when dual_link_port != NULL in alloc_dev_default()
thunderbolt: Check for NVM authentication status after the operation started
thunderbolt: Add WRITE_ONLY and AUTHENTICATE_ONLY NVM operations for retimers
thunderbolt: Allow router NVM authenticate separately
thunderbolt: Move nvm_write_ops to tb.h
thunderbolt: Add support for retimer NVM upgrade when there is no link
thunderbolt: Add additional USB4 port operations for retimer access
thunderbolt: Add support for ACPI _DSM to power on/off retimers
thunderbolt: Add USB4 port devices
thunderbolt: Log the link as TBT instead of TBT3
thunderbolt: Add KUnit tests for credit allocation
thunderbolt: Add quirk for Intel Goshen Ridge DP credits
thunderbolt: Allocate credits according to router preferences
thunderbolt: Update port credits after bonding is enabled/disabled
thunderbolt: Read router preferred credit allocation information
...
This commit is contained in: commit 00a738b86e
26 changed files with 2472 additions and 462 deletions
@@ -1,4 +1,4 @@
 What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl
 Date: Jun 2018
 KernelVersion: 4.17
 Contact: thunderbolt-software@lists.01.org
@@ -21,7 +21,7 @@ Description: Holds a comma separated list of device unique_ids that
 If a device is authorized automatically during boot its
 boot attribute is set to 1.

 What: /sys/bus/thunderbolt/devices/.../domainX/deauthorization
 Date: May 2021
 KernelVersion: 5.12
 Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
@@ -30,7 +30,7 @@ Description: This attribute tells whether the system supports
 de-authorize PCIe tunnel by writing 0 to authorized
 attribute under each device.

 What: /sys/bus/thunderbolt/devices/.../domainX/iommu_dma_protection
 Date: Mar 2019
 KernelVersion: 4.21
 Contact: thunderbolt-software@lists.01.org
@@ -39,7 +39,7 @@ Description: This attribute tells whether the system uses IOMMU
 it is not (DMA protection is solely based on Thunderbolt
 security levels).

 What: /sys/bus/thunderbolt/devices/.../domainX/security
 Date: Sep 2017
 KernelVersion: 4.13
 Contact: thunderbolt-software@lists.01.org
@@ -61,7 +61,7 @@ Description: This attribute holds current Thunderbolt security level
 the BIOS.
 ======= ==================================================

 What: /sys/bus/thunderbolt/devices/.../authorized
 Date: Sep 2017
 KernelVersion: 4.13
 Contact: thunderbolt-software@lists.01.org
@@ -95,14 +95,14 @@ Description: This attribute is used to authorize Thunderbolt devices
 EKEYREJECTED if the challenge response did not match.
 == ========================================================

 What: /sys/bus/thunderbolt/devices/.../boot
 Date: Jun 2018
 KernelVersion: 4.17
 Contact: thunderbolt-software@lists.01.org
 Description: This attribute contains 1 if Thunderbolt device was already
 authorized on boot and 0 otherwise.

 What: /sys/bus/thunderbolt/devices/.../generation
 Date: Jan 2020
 KernelVersion: 5.5
 Contact: Christian Kellner <christian@kellner.me>
@@ -110,7 +110,7 @@ Description: This attribute contains the generation of the Thunderbolt
 controller associated with the device. It will contain 4
 for USB4.

 What: /sys/bus/thunderbolt/devices/.../key
 Date: Sep 2017
 KernelVersion: 4.13
 Contact: thunderbolt-software@lists.01.org
@@ -213,12 +213,15 @@ Description: When new NVM image is written to the non-active NVM
 restarted with the new NVM firmware. If the image
 verification fails an error code is returned instead.

-This file will accept writing values "1" or "2"
+This file will accept writing values "1", "2" or "3".

 - Writing "1" will flush the image to the storage
 area and authenticate the image in one action.
 - Writing "2" will run some basic validation on the image
 and flush it to the storage area.
+- Writing "3" will authenticate the image that is
+currently written in the storage area. This is only
+supported with USB4 devices and retimers.

 When read holds status of the last authentication
 operation if an error occurred during the process. This
@@ -226,6 +229,20 @@ Description: When new NVM image is written to the non-active NVM
 based mailbox before the device is power cycled. Writing
 0 here clears the status.

+What: /sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect
+Date: Oct 2020
+KernelVersion: v5.9
+Contact: Mario Limonciello <mario.limonciello@dell.com>
+Description: For supported devices, automatically authenticate the new Thunderbolt
+image when the device is disconnected from the host system.
+
+This file will accept writing values "1" or "2"
+
+- Writing "1" will flush the image to the storage
+area and prepare the device for authentication on disconnect.
+- Writing "2" will run some basic validation on the image
+and flush it to the storage area.

 What: /sys/bus/thunderbolt/devices/<xdomain>.<service>/key
 Date: Jan 2018
 KernelVersion: 4.15
@@ -276,6 +293,39 @@ Contact: thunderbolt-software@lists.01.org
 Description: This contains XDomain service specific settings as
 bitmask. Format: %x

+What: /sys/bus/thunderbolt/devices/usb4_portX/link
+Date: Sep 2021
+KernelVersion: v5.14
+Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
+Description: Returns the current link mode. Possible values are
+"usb4", "tbt" and "none".
+
+What: /sys/bus/thunderbolt/devices/usb4_portX/offline
+Date: Sep 2021
+KernelVersion: v5.14
+Contact: Rajmohan Mani <rajmohan.mani@intel.com>
+Description: Writing 1 to this attribute puts the USB4 port into
+offline mode. Only allowed when there is nothing
+connected to the port (link attribute returns "none").
+Once the port is in offline mode it does not receive any
+hotplug events. This is used to update NVM firmware of
+on-board retimers. Writing 0 puts the port back to
+online mode.
+
+This attribute is only visible if the platform supports
+powering on retimers when there is no cable connected.
+
+What: /sys/bus/thunderbolt/devices/usb4_portX/rescan
+Date: Sep 2021
+KernelVersion: v5.14
+Contact: Rajmohan Mani <rajmohan.mani@intel.com>
+Description: When the USB4 port is in offline mode writing 1 to this
+attribute forces rescan of the sideband for on-board
+retimers. Each retimer appears under the USB4 port as if
+the USB4 link was up. These retimers act in the same way
+as if the cable was connected so upgrading their NVM
+firmware can be done the usual way.

 What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/device
 Date: Oct 2020
 KernelVersion: v5.9
@@ -308,17 +358,3 @@ Date: Oct 2020
 KernelVersion: v5.9
 Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
 Description: Retimer vendor identifier read from the hardware.
-
-What: /sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect
-Date: Oct 2020
-KernelVersion: v5.9
-Contact: Mario Limonciello <mario.limonciello@dell.com>
-Description: For supported devices, automatically authenticate the new Thunderbolt
-image when the device is disconnected from the host system.
-
-This file will accept writing values "1" or "2"
-
-- Writing "1" will flush the image to the storage
-area and prepare the device for authentication on disconnect.
-- Writing "2" will run some basic validation on the image
-and flush it to the storage area.
@@ -256,6 +256,35 @@ Note names of the NVMem devices ``nvm_activeN`` and ``nvm_non_activeN``
 depend on the order they are registered in the NVMem subsystem. N in
 the name is the identifier added by the NVMem subsystem.

+Upgrading on-board retimer NVM when there is no cable connected
+---------------------------------------------------------------
+If the platform supports it, it may be possible to upgrade the retimer
+NVM firmware even when there is nothing connected to the USB4
+ports. When this is the case the ``usb4_portX`` devices have two special
+attributes: ``offline`` and ``rescan``. The way to upgrade the firmware
+is to first put the USB4 port into offline mode::
+
+  # echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/offline
+
+This step makes sure the port does not respond to any hotplug events,
+and also ensures the retimers are powered on. The next step is to scan
+for the retimers::
+
+  # echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/rescan
+
+This enumerates and adds the on-board retimers. Now retimer NVM can be
+upgraded in the same way as with a cable connected (see the previous
+section). However, the retimer is not disconnected (as we are in offline
+mode), so after writing ``1`` to ``nvm_authenticate`` one should wait for
+5 or more seconds before running rescan again::
+
+  # echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/rescan
+
+At this point, if everything went fine, the port can be put back into
+the functional state again::
+
+  # echo 0 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/offline
+
 Upgrading NVM when host controller is in safe mode
 --------------------------------------------------
 If the existing NVM is not properly authenticated (or is missing) the
@@ -2,7 +2,7 @@
 obj-${CONFIG_USB4} := thunderbolt.o
 thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
 thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
-thunderbolt-objs += nvm.o retimer.o quirks.o
+thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o

 thunderbolt-${CONFIG_ACPI} += acpi.o
 thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
@@ -180,3 +180,209 @@ bool tb_acpi_is_xdomain_allowed(void)
 	return osc_sb_native_usb4_control & OSC_USB_XDOMAIN;
 	return true;
 }
+
+/* UUID for retimer _DSM: e0053122-795b-4122-8a5e-57be1d26acb3 */
+static const guid_t retimer_dsm_guid =
+	GUID_INIT(0xe0053122, 0x795b, 0x4122,
+		  0x8a, 0x5e, 0x57, 0xbe, 0x1d, 0x26, 0xac, 0xb3);
+
+#define RETIMER_DSM_QUERY_ONLINE_STATE	1
+#define RETIMER_DSM_SET_ONLINE_STATE	2
+
+static int tb_acpi_retimer_set_power(struct tb_port *port, bool power)
+{
+	struct usb4_port *usb4 = port->usb4;
+	union acpi_object argv4[2];
+	struct acpi_device *adev;
+	union acpi_object *obj;
+	int ret;
+
+	if (!usb4->can_offline)
+		return 0;
+
+	adev = ACPI_COMPANION(&usb4->dev);
+	if (WARN_ON(!adev))
+		return 0;
+
+	/* Check if we are already powered on (and in correct mode) */
+	obj = acpi_evaluate_dsm_typed(adev->handle, &retimer_dsm_guid, 1,
+				      RETIMER_DSM_QUERY_ONLINE_STATE, NULL,
+				      ACPI_TYPE_INTEGER);
+	if (!obj) {
+		tb_port_warn(port, "ACPI: query online _DSM failed\n");
+		return -EIO;
+	}
+
+	ret = obj->integer.value;
+	ACPI_FREE(obj);
+
+	if (power == ret)
+		return 0;
+
+	tb_port_dbg(port, "ACPI: calling _DSM to power %s retimers\n",
+		    power ? "on" : "off");
+
+	argv4[0].type = ACPI_TYPE_PACKAGE;
+	argv4[0].package.count = 1;
+	argv4[0].package.elements = &argv4[1];
+	argv4[1].integer.type = ACPI_TYPE_INTEGER;
+	argv4[1].integer.value = power;
+
+	obj = acpi_evaluate_dsm_typed(adev->handle, &retimer_dsm_guid, 1,
+				      RETIMER_DSM_SET_ONLINE_STATE, argv4,
+				      ACPI_TYPE_INTEGER);
+	if (!obj) {
+		tb_port_warn(port,
+			     "ACPI: set online state _DSM evaluation failed\n");
+		return -EIO;
+	}
+
+	ret = obj->integer.value;
+	ACPI_FREE(obj);
+
+	if (ret >= 0) {
+		if (power)
+			return ret == 1 ? 0 : -EBUSY;
+		return 0;
+	}
+
+	tb_port_warn(port, "ACPI: set online state _DSM failed with error %d\n", ret);
+	return -EIO;
+}
+
+/**
+ * tb_acpi_power_on_retimers() - Call platform to power on retimers
+ * @port: USB4 port
+ *
+ * Calls platform to turn on power to all retimers behind this USB4
+ * port. After this function returns successfully the caller can
+ * continue with the normal retimer flows (as specified in the USB4
+ * spec). Note if this returns %-EBUSY it means the type-C port is in
+ * non-USB4/TBT mode (there is a non-USB4/TBT device connected).
+ *
+ * This should only be called if the USB4/TBT link is not up.
+ *
+ * Returns %0 on success.
+ */
+int tb_acpi_power_on_retimers(struct tb_port *port)
+{
+	return tb_acpi_retimer_set_power(port, true);
+}
+
+/**
+ * tb_acpi_power_off_retimers() - Call platform to power off retimers
+ * @port: USB4 port
+ *
+ * This is the opposite of tb_acpi_power_on_retimers(). After returning
+ * successfully the normal operations with the @port can continue.
+ *
+ * Returns %0 on success.
+ */
+int tb_acpi_power_off_retimers(struct tb_port *port)
+{
+	return tb_acpi_retimer_set_power(port, false);
+}
+
+static bool tb_acpi_bus_match(struct device *dev)
+{
+	return tb_is_switch(dev) || tb_is_usb4_port_device(dev);
+}
+
+static struct acpi_device *tb_acpi_find_port(struct acpi_device *adev,
+					     const struct tb_port *port)
+{
+	struct acpi_device *port_adev;
+
+	if (!adev)
+		return NULL;
+
+	/*
+	 * Device routers exist under the downstream facing USB4 port
+	 * of the parent router. Their _ADR is always 0.
+	 */
+	list_for_each_entry(port_adev, &adev->children, node) {
+		if (acpi_device_adr(port_adev) == port->port)
+			return port_adev;
+	}
+
+	return NULL;
+}
+
+static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
+{
+	struct acpi_device *adev = NULL;
+	struct tb_switch *parent_sw;
+
+	parent_sw = tb_switch_parent(sw);
+	if (parent_sw) {
+		struct tb_port *port = tb_port_at(tb_route(sw), parent_sw);
+		struct acpi_device *port_adev;
+
+		port_adev = tb_acpi_find_port(ACPI_COMPANION(&parent_sw->dev), port);
+		if (port_adev)
+			adev = acpi_find_child_device(port_adev, 0, false);
+	} else {
+		struct tb_nhi *nhi = sw->tb->nhi;
+		struct acpi_device *parent_adev;
+
+		parent_adev = ACPI_COMPANION(&nhi->pdev->dev);
+		if (parent_adev)
+			adev = acpi_find_child_device(parent_adev, 0, false);
+	}
+
+	return adev;
+}
+
+static struct acpi_device *tb_acpi_find_companion(struct device *dev)
+{
+	/*
+	 * The Thunderbolt/USB4 hierarchy looks like the following:
+	 *
+	 * Device (NHI)
+	 *   Device (HR)		// Host router _ADR == 0
+	 *     Device (DFP0)		// Downstream port _ADR == lane 0 adapter
+	 *       Device (DR)		// Device router _ADR == 0
+	 *         Device (UFP)		// Upstream port _ADR == lane 0 adapter
+	 *     Device (DFP1)		// Downstream port _ADR == lane 0 adapter number
+	 *
+	 * At the moment we bind the host router to the corresponding
+	 * Linux device.
+	 */
+	if (tb_is_switch(dev))
+		return tb_acpi_switch_find_companion(tb_to_switch(dev));
+	else if (tb_is_usb4_port_device(dev))
+		return tb_acpi_find_port(ACPI_COMPANION(dev->parent),
+					 tb_to_usb4_port_device(dev)->port);
+	return NULL;
+}
+
+static void tb_acpi_setup(struct device *dev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(dev);
+	struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
+
+	if (!adev || !usb4)
+		return;
+
+	if (acpi_check_dsm(adev->handle, &retimer_dsm_guid, 1,
+			   BIT(RETIMER_DSM_QUERY_ONLINE_STATE) |
+			   BIT(RETIMER_DSM_SET_ONLINE_STATE)))
+		usb4->can_offline = true;
+}
+
+static struct acpi_bus_type tb_acpi_bus = {
+	.name = "thunderbolt",
+	.match = tb_acpi_bus_match,
+	.find_companion = tb_acpi_find_companion,
+	.setup = tb_acpi_setup,
+};
+
+int tb_acpi_init(void)
+{
+	return register_acpi_bus_type(&tb_acpi_bus);
+}
+
+void tb_acpi_exit(void)
+{
+	unregister_acpi_bus_type(&tb_acpi_bus);
+}
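The `tb_acpi_setup()` hook above gates `can_offline` on `acpi_check_dsm()` with an OR of `BIT()` values, following the ACPI convention that _DSM function 0 reports a bitmask of implemented functions. A minimal standalone sketch of that mask check (the macro and helper below are illustrative stand-ins, not the kernel's ACPI API):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the kernel's BIT() macro. */
#define BIT(n) (1U << (n))

/* Function indices as used by the retimer _DSM above. */
#define RETIMER_DSM_QUERY_ONLINE_STATE	1
#define RETIMER_DSM_SET_ONLINE_STATE	2

/* Returns nonzero when every function in @wanted is present in the
 * @supported mask that _DSM function 0 would report. */
static int dsm_supports(uint32_t supported, uint32_t wanted)
{
	return (supported & wanted) == wanted;
}
```

Both query and set must be present for the offline flow to be usable, which is why the driver passes the combined mask in one call.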
@@ -299,15 +299,13 @@ static int dma_port_request(struct tb_dma_port *dma, u32 in,
 	return status_to_errno(out);
 }

-static int dma_port_flash_read_block(struct tb_dma_port *dma, u32 address,
-				     void *buf, u32 size)
+static int dma_port_flash_read_block(void *data, unsigned int dwaddress,
+				     void *buf, size_t dwords)
 {
+	struct tb_dma_port *dma = data;
 	struct tb_switch *sw = dma->sw;
-	u32 in, dwaddress, dwords;
 	int ret;
-
-	dwaddress = address / 4;
-	dwords = size / 4;
+	u32 in;

 	in = MAIL_IN_CMD_FLASH_READ << MAIL_IN_CMD_SHIFT;
 	if (dwords < MAIL_DATA_DWORDS)
@@ -323,14 +321,13 @@ static int dma_port_flash_read_block(struct tb_dma_port *dma, u32 address,
 		dma->base + MAIL_DATA, dwords, DMA_PORT_TIMEOUT);
 }

-static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
-				      const void *buf, u32 size)
+static int dma_port_flash_write_block(void *data, unsigned int dwaddress,
+				      const void *buf, size_t dwords)
 {
+	struct tb_dma_port *dma = data;
 	struct tb_switch *sw = dma->sw;
-	u32 in, dwaddress, dwords;
 	int ret;
-
-	dwords = size / 4;
+	u32 in;

 	/* Write the block to MAIL_DATA registers */
 	ret = dma_port_write(sw->tb->ctl, buf, tb_route(sw), dma->port,
@@ -341,12 +338,8 @@ static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
 	in = MAIL_IN_CMD_FLASH_WRITE << MAIL_IN_CMD_SHIFT;

 	/* CSS header write is always done to the same magic address */
-	if (address >= DMA_PORT_CSS_ADDRESS) {
-		dwaddress = DMA_PORT_CSS_ADDRESS;
+	if (dwaddress >= DMA_PORT_CSS_ADDRESS)
 		in |= MAIL_IN_CSS;
-	} else {
-		dwaddress = address / 4;
-	}

 	in |= ((dwords - 1) << MAIL_IN_DWORDS_SHIFT) & MAIL_IN_DWORDS_MASK;
 	in |= (dwaddress << MAIL_IN_ADDRESS_SHIFT) & MAIL_IN_ADDRESS_MASK;
@@ -365,36 +358,8 @@ static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
 int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
 			void *buf, size_t size)
 {
-	unsigned int retries = DMA_PORT_RETRIES;
-
-	do {
-		unsigned int offset;
-		size_t nbytes;
-		int ret;
-
-		offset = address & 3;
-		nbytes = min_t(size_t, size + offset, MAIL_DATA_DWORDS * 4);
-
-		ret = dma_port_flash_read_block(dma, address, dma->buf,
-						ALIGN(nbytes, 4));
-		if (ret) {
-			if (ret == -ETIMEDOUT) {
-				if (retries--)
-					continue;
-				ret = -EIO;
-			}
-			return ret;
-		}
-
-		nbytes -= offset;
-		memcpy(buf, dma->buf + offset, nbytes);
-
-		size -= nbytes;
-		address += nbytes;
-		buf += nbytes;
-	} while (size > 0);
-
-	return 0;
+	return tb_nvm_read_data(address, buf, size, DMA_PORT_RETRIES,
+				dma_port_flash_read_block, dma);
 }

 /**
@@ -411,40 +376,11 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
 int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address,
 			 const void *buf, size_t size)
 {
-	unsigned int retries = DMA_PORT_RETRIES;
-	unsigned int offset;
-
-	if (address >= DMA_PORT_CSS_ADDRESS) {
-		offset = 0;
-		if (size > DMA_PORT_CSS_MAX_SIZE)
-			return -E2BIG;
-	} else {
-		offset = address & 3;
-		address = address & ~3;
-	}
-
-	do {
-		u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
-		int ret;
-
-		memcpy(dma->buf + offset, buf, nbytes);
-
-		ret = dma_port_flash_write_block(dma, address, buf, nbytes);
-		if (ret) {
-			if (ret == -ETIMEDOUT) {
-				if (retries--)
-					continue;
-				ret = -EIO;
-			}
-			return ret;
-		}
-
-		size -= nbytes;
-		address += nbytes;
-		buf += nbytes;
-	} while (size > 0);
-
-	return 0;
+	if (address >= DMA_PORT_CSS_ADDRESS && size > DMA_PORT_CSS_MAX_SIZE)
+		return -E2BIG;
+
+	return tb_nvm_write_data(address, buf, size, DMA_PORT_RETRIES,
+				 dma_port_flash_write_block, dma);
 }

 /**
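The refactor above replaces the open-coded read and write loops with the shared `tb_nvm_read_data()`/`tb_nvm_write_data()` helpers; the block callbacks now work purely in dword addresses. The read side's chunking idea can be sketched self-contained against a fake in-memory flash (window size and names are illustrative, and the retry-on-timeout path is omitted for brevity):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical mailbox window, standing in for MAIL_DATA_DWORDS. */
#define DATA_DWORDS 16

static unsigned char flash[256];	/* fake flash contents */

static void fill_flash(void)
{
	for (unsigned int i = 0; i < sizeof(flash); i++)
		flash[i] = (unsigned char)i;
}

/* read_block callback in the tb_nvm_read_data() style: dword address
 * in, up to DATA_DWORDS dwords out. */
static int read_block(void *data, unsigned int dwaddress, void *buf,
		      size_t dwords)
{
	(void)data;
	memcpy(buf, flash + dwaddress * 4, dwords * 4);
	return 0;
}

/* Align the byte address down to a dword boundary, read whole dwords
 * into a bounce buffer, then copy out only the requested bytes. */
static int nvm_read(unsigned int address, void *buf, size_t size)
{
	unsigned char block[DATA_DWORDS * 4];

	while (size > 0) {
		unsigned int offset = address & 3;	/* misalignment */
		size_t nbytes = size + offset;

		if (nbytes > sizeof(block))
			nbytes = sizeof(block);

		/* round up to whole dwords, like ALIGN(nbytes, 4) */
		if (read_block(NULL, address / 4, block, (nbytes + 3) / 4))
			return -1;

		nbytes -= offset;
		memcpy(buf, block + offset, nbytes);

		size -= nbytes;
		address += nbytes;
		buf = (unsigned char *)buf + nbytes;
	}
	return 0;
}
```

The bounce buffer is what lets callers pass arbitrary byte addresses even though the hardware mailbox only moves whole dwords.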
@@ -881,11 +881,12 @@ int tb_domain_init(void)
 	int ret;

 	tb_test_init();

 	tb_debugfs_init();
+	tb_acpi_init();

 	ret = tb_xdomain_init();
 	if (ret)
-		goto err_debugfs;
+		goto err_acpi;
 	ret = bus_register(&tb_bus_type);
 	if (ret)
 		goto err_xdomain;
@@ -894,7 +895,8 @@ int tb_domain_init(void)

 err_xdomain:
 	tb_xdomain_exit();
-err_debugfs:
+err_acpi:
+	tb_acpi_exit();
 	tb_debugfs_exit();
 	tb_test_exit();

@@ -907,6 +909,7 @@ void tb_domain_exit(void)
 	ida_destroy(&tb_domain_ida);
 	tb_nvm_exit();
 	tb_xdomain_exit();
+	tb_acpi_exit();
 	tb_debugfs_exit();
 	tb_test_exit();
 }
@@ -214,7 +214,10 @@ static u32 tb_crc32(void *data, size_t len)
 	return ~__crc32c_le(~0, data, len);
 }

 #define TB_DROM_DATA_START		13
+#define TB_DROM_HEADER_SIZE		22
+#define USB4_DROM_HEADER_SIZE		16

 struct tb_drom_header {
 	/* BYTE 0 */
 	u8 uid_crc8; /* checksum for uid */
@@ -224,9 +227,9 @@ struct tb_drom_header {
 	u32 data_crc32; /* checksum for data_len bytes starting at byte 13 */
 	/* BYTE 13 */
 	u8 device_rom_revision; /* should be <= 1 */
-	u16 data_len:10;
-	u8 __unknown1:6;
-	/* BYTES 16-21 */
+	u16 data_len:12;
+	u8 reserved:4;
+	/* BYTES 16-21 - Only for TBT DROM, nonexistent in USB4 DROM */
 	u16 vendor_id;
 	u16 model_id;
 	u8 model_rev;
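`tb_crc32()` in the hunk above is plain CRC-32C: seeding `__crc32c_le()` with `~0` and inverting the result matches the standard init/final-XOR convention for the Castagnoli polynomial. A standalone bitwise version for illustration (the kernel uses the library or hardware-accelerated implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
 * Equivalent to ~__crc32c_le(~0, data, len): init 0xFFFFFFFF,
 * final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const void *data, size_t len)
{
	const uint8_t *p = data;
	uint32_t crc = 0xFFFFFFFF;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
	}
	return ~crc;
}
```

The standard check value for CRC-32C over the ASCII string "123456789" is 0xE3069283, which is a quick way to validate any reimplementation.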
@@ -401,10 +404,10 @@ static int tb_drom_parse_entry_port(struct tb_switch *sw,
  *
  * Drom must have been copied to sw->drom.
  */
-static int tb_drom_parse_entries(struct tb_switch *sw)
+static int tb_drom_parse_entries(struct tb_switch *sw, size_t header_size)
 {
 	struct tb_drom_header *header = (void *) sw->drom;
-	u16 pos = sizeof(*header);
+	u16 pos = header_size;
 	u16 drom_size = header->data_len + TB_DROM_DATA_START;
 	int res;

@@ -566,7 +569,7 @@ static int tb_drom_parse(struct tb_switch *sw)
 			header->data_crc32, crc);
 	}

-	return tb_drom_parse_entries(sw);
+	return tb_drom_parse_entries(sw, TB_DROM_HEADER_SIZE);
 }

 static int usb4_drom_parse(struct tb_switch *sw)
@@ -583,7 +586,7 @@ static int usb4_drom_parse(struct tb_switch *sw)
 		return -EINVAL;
 	}

-	return tb_drom_parse_entries(sw);
+	return tb_drom_parse_entries(sw, USB4_DROM_HEADER_SIZE);
 }

 /**
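The struct change earlier in this file widens `data_len` from 10 to 12 bits and renames the remaining bits `reserved`. Assuming the layout the bitfield implies (an assumption for illustration: `data_len` in the low 12 bits of the little-endian u16 following the byte-13 revision field; the driver reads it through the bitfield, not a hand-rolled accessor), the extraction looks like:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical accessor: pull the 12-bit data_len out of the two DROM
 * bytes after device_rom_revision (byte 13), little endian. The low 12
 * bits are the length; the top 4 bits are reserved. */
static unsigned int drom_data_len(const uint8_t *drom)
{
	uint16_t raw = drom[14] | (uint16_t)(drom[15] << 8);

	return raw & 0xFFF;	/* data_len:12, reserved:4 */
}
```

The widening matters for USB4 DROMs, whose data section can exceed the 1023-byte ceiling a 10-bit field imposed.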
@@ -1677,14 +1677,18 @@ static void icm_icl_rtd3_veto(struct tb *tb, const struct icm_pkg_header *hdr)

 static bool icm_tgl_is_supported(struct tb *tb)
 {
-	u32 val;
+	unsigned long end = jiffies + msecs_to_jiffies(10);

 	/*
 	 * If the firmware is not running use software CM. This platform
 	 * should fully support both.
 	 */
-	val = ioread32(tb->nhi->iobase + REG_FW_STS);
-	return !!(val & REG_FW_STS_NVM_AUTH_DONE);
+	do {
+		u32 val;
+
+		val = ioread32(tb->nhi->iobase + REG_FW_STS);
+		if (val & REG_FW_STS_NVM_AUTH_DONE)
+			return true;
+		usleep_range(100, 500);
+	} while (time_before(jiffies, end));
+
+	return false;
 }

 static void icm_handle_notification(struct work_struct *work)
@@ -2505,6 +2509,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)
 	case PCI_DEVICE_ID_INTEL_TGL_NHI1:
 	case PCI_DEVICE_ID_INTEL_TGL_H_NHI0:
 	case PCI_DEVICE_ID_INTEL_TGL_H_NHI1:
+	case PCI_DEVICE_ID_INTEL_ADL_NHI0:
+	case PCI_DEVICE_ID_INTEL_ADL_NHI1:
 		icm->is_supported = icm_tgl_is_supported;
 		icm->driver_ready = icm_icl_driver_ready;
 		icm->set_uuid = icm_icl_set_uuid;
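The new `icm_tgl_is_supported()` polls `REG_FW_STS` for up to 10 ms instead of sampling it once, because the firmware may still be setting `NVM_AUTH_DONE` at probe time. The bounded-poll pattern can be sketched hardware-free with an injected register reader (an attempt count stands in for the jiffies deadline, and the sleep between reads is omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical register-read callback so the loop can be exercised
 * without hardware; in the driver this is ioread32() on REG_FW_STS. */
typedef uint32_t (*read_reg_fn)(void *ctx);

/* Poll until @bit is set or @max_attempts reads have been made. */
static int poll_bit(read_reg_fn read_reg, void *ctx, uint32_t bit,
		    int max_attempts)
{
	for (int i = 0; i < max_attempts; i++) {
		if (read_reg(ctx) & bit)
			return 1;	/* firmware CM is running */
	}
	return 0;	/* timed out; fall back to the software CM */
}

/* Stub that reports the bit set only from the third read onwards,
 * mimicking firmware that finishes authentication mid-poll. */
static uint32_t stub_read(void *ctx)
{
	int *calls = ctx;

	return ++(*calls) >= 3 ? 0x1 : 0x0;
}
```

Returning false here is not an error: the caller simply selects the software connection manager, which these platforms fully support.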
@@ -208,8 +208,8 @@ static int tb_lc_set_wake_one(struct tb_switch *sw, unsigned int offset,
 	if (ret)
 		return ret;

-	ctrl &= ~(TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD | TB_LC_SX_CTRL_WOP |
-		  TB_LC_SX_CTRL_WOU4);
+	ctrl &= ~(TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD | TB_LC_SX_CTRL_WODPC |
+		  TB_LC_SX_CTRL_WODPD | TB_LC_SX_CTRL_WOP | TB_LC_SX_CTRL_WOU4);

 	if (flags & TB_WAKE_ON_CONNECT)
 		ctrl |= TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD;
@@ -217,6 +217,8 @@ static int tb_lc_set_wake_one(struct tb_switch *sw, unsigned int offset,
 		ctrl |= TB_LC_SX_CTRL_WOU4;
 	if (flags & TB_WAKE_ON_PCIE)
 		ctrl |= TB_LC_SX_CTRL_WOP;
+	if (flags & TB_WAKE_ON_DP)
+		ctrl |= TB_LC_SX_CTRL_WODPC | TB_LC_SX_CTRL_WODPD;

 	return tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, offset + TB_LC_SX_CTRL, 1);
 }
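`tb_lc_set_wake_one()` first clears the full set of wake bits and only then applies the requested flags, so a stale configuration from a previous suspend never survives. A sketch of that clear-then-set flow with made-up bit values (the real `TB_LC_SX_CTRL_*` and `TB_WAKE_ON_*` constants live in the driver headers and differ from these):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit assignments only. */
#define WOC   (1U << 0)	/* wake on connect */
#define WOD   (1U << 1)	/* wake on disconnect */
#define WODPC (1U << 2)	/* wake on DP connect */
#define WODPD (1U << 3)	/* wake on DP disconnect */
#define WOU4  (1U << 4)	/* wake on USB4 */
#define WOP   (1U << 5)	/* wake on PCIe */

#define WAKE_ON_CONNECT (1U << 0)
#define WAKE_ON_USB4    (1U << 1)
#define WAKE_ON_PCIE    (1U << 2)
#define WAKE_ON_DP      (1U << 3)

/* Mirrors the tb_lc_set_wake_one() flow: clear every wake bit first,
 * then set only the ones the requested flags ask for. */
static uint32_t set_wake_bits(uint32_t ctrl, uint32_t flags)
{
	ctrl &= ~(WOC | WOD | WODPC | WODPD | WOU4 | WOP);

	if (flags & WAKE_ON_CONNECT)
		ctrl |= WOC | WOD;
	if (flags & WAKE_ON_USB4)
		ctrl |= WOU4;
	if (flags & WAKE_ON_PCIE)
		ctrl |= WOP;
	if (flags & WAKE_ON_DP)
		ctrl |= WODPC | WODPD;
	return ctrl;
}
```

Note how one abstract flag can map to a pair of hardware bits: connect covers both connect and disconnect events, and DP covers both DP plug directions, which is exactly what the hunk above adds.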
@@ -17,7 +17,6 @@
 #include <linux/module.h>
 #include <linux/delay.h>
 #include <linux/property.h>
-#include <linux/platform_data/x86/apple.h>

 #include "nhi.h"
 #include "nhi_regs.h"
@@ -1127,69 +1126,6 @@ static bool nhi_imr_valid(struct pci_dev *pdev)
 	return true;
 }

-/*
- * During suspend the Thunderbolt controller is reset and all PCIe
- * tunnels are lost. The NHI driver will try to reestablish all tunnels
- * during resume. This adds device links between the tunneled PCIe
- * downstream ports and the NHI so that the device core will make sure
- * NHI is resumed first before the rest.
- */
-static void tb_apple_add_links(struct tb_nhi *nhi)
-{
-	struct pci_dev *upstream, *pdev;
-
-	if (!x86_apple_machine)
-		return;
-
-	switch (nhi->pdev->device) {
-	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
-	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
-	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
-	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
-		break;
-	default:
-		return;
-	}
-
-	upstream = pci_upstream_bridge(nhi->pdev);
-	while (upstream) {
-		if (!pci_is_pcie(upstream))
-			return;
-		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
-			break;
-		upstream = pci_upstream_bridge(upstream);
-	}
-
-	if (!upstream)
-		return;
-
-	/*
-	 * For each hotplug downstream port, create add device link
-	 * back to NHI so that PCIe tunnels can be re-established after
-	 * sleep.
-	 */
-	for_each_pci_bridge(pdev, upstream->subordinate) {
-		const struct device_link *link;
-
-		if (!pci_is_pcie(pdev))
-			continue;
-		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
-		    !pdev->is_hotplug_bridge)
-			continue;
-
-		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
-				       DL_FLAG_AUTOREMOVE_SUPPLIER |
-				       DL_FLAG_PM_RUNTIME);
-		if (link) {
-			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
-				dev_name(&pdev->dev));
-		} else {
-			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
-				 dev_name(&pdev->dev));
-		}
-	}
-}
-
 static struct tb *nhi_select_cm(struct tb_nhi *nhi)
 {
 	struct tb *tb;
@@ -1278,9 +1214,6 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		return res;
 	}

-	tb_apple_add_links(nhi);
-	tb_acpi_add_links(nhi);
-
 	tb = nhi_select_cm(nhi);
 	if (!tb) {
 		dev_err(&nhi->pdev->dev,
@@ -1400,6 +1333,10 @@ static struct pci_device_id nhi_ids[] = {
 	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1),
 	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },

 	/* Any USB4 compliant host */
 	{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
@ -72,6 +72,8 @@ extern const struct tb_nhi_ops icl_nhi_ops;
|
|||
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE 0x15ea
|
||||
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI 0x15eb
|
||||
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE 0x15ef
|
||||
#define PCI_DEVICE_ID_INTEL_ADL_NHI0 0x463e
|
||||
#define PCI_DEVICE_ID_INTEL_ADL_NHI1 0x466d
|
||||
#define PCI_DEVICE_ID_INTEL_ICL_NHI1 0x8a0d
|
||||
#define PCI_DEVICE_ID_INTEL_ICL_NHI0 0x8a17
|
||||
#define PCI_DEVICE_ID_INTEL_TGL_NHI0 0x9a1b
|
||||
|
|
|
|||
|
|
@@ -164,6 +164,101 @@ void tb_nvm_free(struct tb_nvm *nvm)
        kfree(nvm);
}

/**
 * tb_nvm_read_data() - Read data from NVM
 * @address: Start address on the flash
 * @buf: Buffer where the read data is copied
 * @size: Size of the buffer in bytes
 * @retries: Number of retries if block read fails
 * @read_block: Function that reads block from the flash
 * @read_block_data: Data passed to @read_block
 *
 * This is a generic function that reads data from NVM or NVM like
 * device.
 *
 * Returns %0 on success and negative errno otherwise.
 */
int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
                     unsigned int retries, read_block_fn read_block,
                     void *read_block_data)
{
        do {
                unsigned int dwaddress, dwords, offset;
                u8 data[NVM_DATA_DWORDS * 4];
                size_t nbytes;
                int ret;

                offset = address & 3;
                nbytes = min_t(size_t, size + offset, NVM_DATA_DWORDS * 4);

                dwaddress = address / 4;
                dwords = ALIGN(nbytes, 4) / 4;

                ret = read_block(read_block_data, dwaddress, data, dwords);
                if (ret) {
                        if (ret != -ENODEV && retries--)
                                continue;
                        return ret;
                }

                nbytes -= offset;
                memcpy(buf, data + offset, nbytes);

                size -= nbytes;
                address += nbytes;
                buf += nbytes;
        } while (size > 0);

        return 0;
}
/**
 * tb_nvm_write_data() - Write data to NVM
 * @address: Start address on the flash
 * @buf: Buffer where the data is copied from
 * @size: Size of the buffer in bytes
 * @retries: Number of retries if the block write fails
 * @write_block: Function that writes block to the flash
 * @write_block_data: Data passed to @write_block
 *
 * This is a generic function that writes data to NVM or NVM like device.
 *
 * Returns %0 on success and negative errno otherwise.
 */
int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
                      unsigned int retries, write_block_fn write_block,
                      void *write_block_data)
{
        do {
                unsigned int offset, dwaddress;
                u8 data[NVM_DATA_DWORDS * 4];
                size_t nbytes;
                int ret;

                offset = address & 3;
                nbytes = min_t(u32, size + offset, NVM_DATA_DWORDS * 4);

                memcpy(data + offset, buf, nbytes);

                dwaddress = address / 4;
                ret = write_block(write_block_data, dwaddress, data, nbytes / 4);
                if (ret) {
                        if (ret == -ETIMEDOUT) {
                                if (retries--)
                                        continue;
                                ret = -EIO;
                        }
                        return ret;
                }

                size -= nbytes;
                address += nbytes;
                buf += nbytes;
        } while (size > 0);

        return 0;
}

void tb_nvm_exit(void)
{
        ida_destroy(&nvm_ida);
@@ -367,7 +367,7 @@ static void __tb_path_deallocate_nfc(struct tb_path *path, int first_hop)
        int i, res;
        for (i = first_hop; i < path->path_length; i++) {
                res = tb_port_add_nfc_credits(path->hops[i].in_port,
-                                             -path->nfc_credits);
+                                             -path->hops[i].nfc_credits);
                if (res)
                        tb_port_warn(path->hops[i].in_port,
                                     "nfc credits deallocation failed for hop %d\n",

@@ -502,7 +502,7 @@ int tb_path_activate(struct tb_path *path)
        /* Add non flow controlled credits. */
        for (i = path->path_length - 1; i >= 0; i--) {
                res = tb_port_add_nfc_credits(path->hops[i].in_port,
-                                             path->nfc_credits);
+                                             path->hops[i].nfc_credits);
                if (res) {
                        __tb_path_deallocate_nfc(path, i);
                        goto err;

@@ -12,7 +12,17 @@ static void quirk_force_power_link(struct tb_switch *sw)
        sw->quirks |= QUIRK_FORCE_POWER_LINK_CONTROLLER;
}

+static void quirk_dp_credit_allocation(struct tb_switch *sw)
+{
+       if (sw->credit_allocation && sw->min_dp_main_credits == 56) {
+               sw->min_dp_main_credits = 18;
+               tb_sw_dbg(sw, "quirked DP main: %u\n", sw->min_dp_main_credits);
+       }
+}
+
struct tb_quirk {
+       u16 hw_vendor_id;
+       u16 hw_device_id;
        u16 vendor;
        u16 device;
        void (*hook)(struct tb_switch *sw);

@@ -20,7 +30,13 @@ struct tb_quirk {

static const struct tb_quirk tb_quirks[] = {
        /* Dell WD19TB supports self-authentication on unplug */
-       { 0x00d4, 0xb070, quirk_force_power_link },
+       { 0x0000, 0x0000, 0x00d4, 0xb070, quirk_force_power_link },
+       { 0x0000, 0x0000, 0x00d4, 0xb071, quirk_force_power_link },
+       /*
+        * Intel Goshen Ridge NVM 27 and before report wrong number of
+        * DP buffers.
+        */
+       { 0x8087, 0x0b26, 0x0000, 0x0000, quirk_dp_credit_allocation },
};

/**

@@ -36,7 +52,15 @@ void tb_check_quirks(struct tb_switch *sw)
        for (i = 0; i < ARRAY_SIZE(tb_quirks); i++) {
                const struct tb_quirk *q = &tb_quirks[i];

-               if (sw->device == q->device && sw->vendor == q->vendor)
-                       q->hook(sw);
+               if (q->hw_vendor_id && q->hw_vendor_id != sw->config.vendor_id)
+                       continue;
+               if (q->hw_device_id && q->hw_device_id != sw->config.device_id)
+                       continue;
+               if (q->vendor && q->vendor != sw->vendor)
+                       continue;
+               if (q->device && q->device != sw->device)
+                       continue;
+
+               q->hook(sw);
        }
}
@@ -103,6 +103,7 @@ static int tb_retimer_nvm_validate_and_write(struct tb_retimer *rt)
        unsigned int image_size, hdr_size;
        const u8 *buf = rt->nvm->buf;
        u16 ds_size, device;
+       int ret;

        image_size = rt->nvm->buf_data_size;
        if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)

@@ -140,8 +141,43 @@ static int tb_retimer_nvm_validate_and_write(struct tb_retimer *rt)
        buf += hdr_size;
        image_size -= hdr_size;

-       return usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf,
-                                          image_size);
+       ret = usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf,
+                                         image_size);
+       if (!ret)
+               rt->nvm->flushed = true;
+
+       return ret;
}

static int tb_retimer_nvm_authenticate(struct tb_retimer *rt, bool auth_only)
{
        u32 status;
        int ret;

        if (auth_only) {
                ret = usb4_port_retimer_nvm_set_offset(rt->port, rt->index, 0);
                if (ret)
                        return ret;
        }

        ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index);
        if (ret)
                return ret;

        usleep_range(100, 150);

        /*
         * Check the status now if we can still access the retimer. It
         * is expected that the below fails.
         */
        ret = usb4_port_retimer_nvm_authenticate_status(rt->port, rt->index,
                                                        &status);
        if (!ret) {
                rt->auth_status = status;
                return status ? -EINVAL : 0;
        }

        return 0;
}

static ssize_t device_show(struct device *dev, struct device_attribute *attr,

@@ -176,8 +212,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
        struct device_attribute *attr, const char *buf, size_t count)
{
        struct tb_retimer *rt = tb_to_retimer(dev);
-       bool val;
-       int ret;
+       int val, ret;

        pm_runtime_get_sync(&rt->dev);

@@ -191,7 +226,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
                goto exit_unlock;
        }

-       ret = kstrtobool(buf, &val);
+       ret = kstrtoint(buf, 10, &val);
        if (ret)
                goto exit_unlock;

@@ -199,16 +234,22 @@ static ssize_t nvm_authenticate_store(struct device *dev,
        rt->auth_status = 0;

        if (val) {
-               if (!rt->nvm->buf) {
-                       ret = -EINVAL;
-                       goto exit_unlock;
-               }
-
-               ret = tb_retimer_nvm_validate_and_write(rt);
-               if (ret)
-                       goto exit_unlock;
-
-               ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index);
+               if (val == AUTHENTICATE_ONLY) {
+                       ret = tb_retimer_nvm_authenticate(rt, true);
+               } else {
+                       if (!rt->nvm->flushed) {
+                               if (!rt->nvm->buf) {
+                                       ret = -EINVAL;
+                                       goto exit_unlock;
+                               }
+
+                               ret = tb_retimer_nvm_validate_and_write(rt);
+                               if (ret || val == WRITE_ONLY)
+                                       goto exit_unlock;
+                       }
+                       if (val == WRITE_AND_AUTHENTICATE)
+                               ret = tb_retimer_nvm_authenticate(rt, false);
+               }
        }

exit_unlock:

@@ -283,11 +324,13 @@ struct device_type tb_retimer_type = {

static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
{
+       struct usb4_port *usb4;
        struct tb_retimer *rt;
        u32 vendor, device;
        int ret;

-       if (!port->cap_usb4)
+       usb4 = port->usb4;
+       if (!usb4)
                return -EINVAL;

        ret = usb4_port_retimer_read(port, index, USB4_SB_VENDOR_ID, &vendor,

@@ -331,7 +374,7 @@ static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
        rt->port = port;
        rt->tb = port->sw->tb;

-       rt->dev.parent = &port->sw->dev;
+       rt->dev.parent = &usb4->dev;
        rt->dev.bus = &tb_bus_type;
        rt->dev.type = &tb_retimer_type;
        dev_set_name(&rt->dev, "%s:%u.%u", dev_name(&port->sw->dev),

@@ -389,7 +432,7 @@ static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
        struct tb_retimer_lookup lookup = { .port = port, .index = index };
        struct device *dev;

-       dev = device_find_child(&port->sw->dev, &lookup, retimer_match);
+       dev = device_find_child(&port->usb4->dev, &lookup, retimer_match);
        if (dev)
                return tb_to_retimer(dev);

@@ -399,19 +442,18 @@ static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
/**
 * tb_retimer_scan() - Scan for on-board retimers under port
 * @port: USB4 port to scan
+ * @add: If true also registers found retimers
 *
- * Tries to enumerate on-board retimers connected to @port. Found
- * retimers are registered as children of @port. Does not scan for cable
- * retimers for now.
+ * Brings the sideband into a state where retimers can be accessed.
+ * Then tries to enumerate on-board retimers connected to @port. Found
+ * retimers are registered as children of @port if @add is set. Does
+ * not scan for cable retimers for now.
 */
-int tb_retimer_scan(struct tb_port *port)
+int tb_retimer_scan(struct tb_port *port, bool add)
{
        u32 status[TB_MAX_RETIMER_INDEX + 1] = {};
        int ret, i, last_idx = 0;

-       if (!port->cap_usb4)
-               return 0;
-
        /*
         * Send broadcast RT to make sure retimer indices facing this
         * port are set.

@@ -420,6 +462,13 @@ int tb_retimer_scan(struct tb_port *port)
        if (ret)
                return ret;

+       /*
+        * Enable sideband channel for each retimer. We can do this
+        * regardless of whether there is a device connected or not.
+        */
+       for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
+               usb4_port_retimer_set_inbound_sbtx(port, i);
+
        /*
         * Before doing anything else, read the authentication status.
         * If the retimer has it set, store it for the new retimer

@@ -451,10 +500,10 @@ int tb_retimer_scan(struct tb_port *port)
                rt = tb_port_find_retimer(port, i);
                if (rt) {
                        put_device(&rt->dev);
-               } else {
+               } else if (add) {
                        ret = tb_retimer_add(port, i, status[i]);
                        if (ret && ret != -EOPNOTSUPP)
                                return ret;
                        break;
                }
        }

@@ -479,7 +528,10 @@ static int remove_retimer(struct device *dev, void *data)
 */
void tb_retimer_remove_all(struct tb_port *port)
{
-       if (port->cap_usb4)
-               device_for_each_child_reverse(&port->sw->dev, port,
+       struct usb4_port *usb4;
+
+       usb4 = port->usb4;
+       if (usb4)
+               device_for_each_child_reverse(&usb4->dev, port,
                                              remove_retimer);
}
@@ -17,7 +17,9 @@
enum usb4_sb_opcode {
        USB4_SB_OPCODE_ERR = 0x20525245,                        /* "ERR " */
        USB4_SB_OPCODE_ONS = 0x444d4321,                        /* "!CMD" */
+       USB4_SB_OPCODE_ROUTER_OFFLINE = 0x4e45534c,             /* "LSEN" */
        USB4_SB_OPCODE_ENUMERATE_RETIMERS = 0x4d554e45,         /* "ENUM" */
+       USB4_SB_OPCODE_SET_INBOUND_SBTX = 0x5055534c,           /* "LSUP" */
        USB4_SB_OPCODE_QUERY_LAST_RETIMER = 0x5453414c,         /* "LAST" */
        USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE = 0x53534e47,        /* "GNSS" */
        USB4_SB_OPCODE_NVM_SET_OFFSET = 0x53504f42,             /* "BOPS" */
@@ -26,11 +26,6 @@ struct nvm_auth_status {
        u32 status;
};

-enum nvm_write_ops {
-       WRITE_AND_AUTHENTICATE = 1,
-       WRITE_ONLY = 2,
-};
-
/*
 * Hold NVM authentication failure status per switch. This information
 * needs to stay around even when the switch gets power cycled so we

@@ -308,13 +303,23 @@ static inline int nvm_read(struct tb_switch *sw, unsigned int address,
        return dma_port_flash_read(sw->dma_port, address, buf, size);
}

-static int nvm_authenticate(struct tb_switch *sw)
+static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
{
        int ret;

-       if (tb_switch_is_usb4(sw))
+       if (tb_switch_is_usb4(sw)) {
+               if (auth_only) {
+                       ret = usb4_switch_nvm_set_offset(sw, 0);
+                       if (ret)
+                               return ret;
+               }
+               sw->nvm->authenticating = true;
                return usb4_switch_nvm_authenticate(sw);
+       } else if (auth_only) {
+               return -EOPNOTSUPP;
+       }

+       sw->nvm->authenticating = true;
        if (!tb_route(sw)) {
                nvm_authenticate_start_dma_port(sw);
                ret = nvm_authenticate_host_dma_port(sw);

@@ -459,7 +464,7 @@ static void tb_switch_nvm_remove(struct tb_switch *sw)

/* port utility functions */

-static const char *tb_port_type(struct tb_regs_port_header *port)
+static const char *tb_port_type(const struct tb_regs_port_header *port)
{
        switch (port->type >> 16) {
        case 0:

@@ -488,17 +493,21 @@ static const char *tb_port_type(struct tb_regs_port_header *port)
        }
}

-static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port)
+static void tb_dump_port(struct tb *tb, const struct tb_port *port)
{
+       const struct tb_regs_port_header *regs = &port->config;
+
        tb_dbg(tb,
               " Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n",
-              port->port_number, port->vendor_id, port->device_id,
-              port->revision, port->thunderbolt_version, tb_port_type(port),
-              port->type);
+              regs->port_number, regs->vendor_id, regs->device_id,
+              regs->revision, regs->thunderbolt_version, tb_port_type(regs),
+              regs->type);
        tb_dbg(tb, "  Max hop id (in/out): %d/%d\n",
-              port->max_in_hop_id, port->max_out_hop_id);
-       tb_dbg(tb, "  Max counters: %d\n", port->max_counters);
-       tb_dbg(tb, "  NFC Credits: %#x\n", port->nfc_credits);
+              regs->max_in_hop_id, regs->max_out_hop_id);
+       tb_dbg(tb, "  Max counters: %d\n", regs->max_counters);
+       tb_dbg(tb, "  NFC Credits: %#x\n", regs->nfc_credits);
+       tb_dbg(tb, "  Credits (total/control): %u/%u\n", port->total_credits,
+              port->ctl_credits);
}

/**

@@ -738,13 +747,32 @@ static int tb_init_port(struct tb_port *port)
                cap = tb_port_find_cap(port, TB_PORT_CAP_USB4);
                if (cap > 0)
                        port->cap_usb4 = cap;
+
+               /*
+                * For USB4 ports the number of buffers allocated for
+                * the control path can be read from the path config
+                * space. For legacy devices we use a hard-coded value.
+                */
+               if (tb_switch_is_usb4(port->sw)) {
+                       struct tb_regs_hop hop;
+
+                       if (!tb_port_read(port, &hop, TB_CFG_HOPS, 0, 2))
+                               port->ctl_credits = hop.initial_credits;
+               }
+               if (!port->ctl_credits)
+                       port->ctl_credits = 2;
+
        } else if (port->port != 0) {
                cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
                if (cap > 0)
                        port->cap_adap = cap;
        }

-       tb_dump_port(port->sw->tb, &port->config);
+       port->total_credits =
+               (port->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
+               ADP_CS_4_TOTAL_BUFFERS_SHIFT;
+
+       tb_dump_port(port->sw->tb, port);

        INIT_LIST_HEAD(&port->list);
        return 0;

@@ -991,8 +1019,11 @@ static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
 * tb_port_lane_bonding_enable() - Enable bonding on port
 * @port: port to enable
 *
- * Enable bonding by setting the link width of the port and the
- * other port in case of dual link port.
+ * Enable bonding by setting the link width of the port and the other
+ * port in case of dual link port. Does not wait for the link to
+ * actually reach the bonded state so caller needs to call
+ * tb_port_wait_for_link_width() before enabling any paths through the
+ * link to make sure the link is in expected state.
 *
 * Return: %0 in case of success and negative errno in case of error
 */
@@ -1043,6 +1074,79 @@ void tb_port_lane_bonding_disable(struct tb_port *port)
        tb_port_set_link_width(port, 1);
}

/**
 * tb_port_wait_for_link_width() - Wait until link reaches specific width
 * @port: Port to wait for
 * @width: Expected link width (%1 or %2)
 * @timeout_msec: Timeout in ms how long to wait
 *
 * Should be used after both ends of the link have been bonded (or
 * bonding has been disabled) to wait until the link actually reaches
 * the expected state. Returns %-ETIMEDOUT if the @width was not reached
 * within the given timeout, %0 if it did.
 */
int tb_port_wait_for_link_width(struct tb_port *port, int width,
                                int timeout_msec)
{
        ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
        int ret;

        do {
                ret = tb_port_get_link_width(port);
                if (ret < 0)
                        return ret;
                else if (ret == width)
                        return 0;

                usleep_range(1000, 2000);
        } while (ktime_before(ktime_get(), timeout));

        return -ETIMEDOUT;
}
static int tb_port_do_update_credits(struct tb_port *port)
{
        u32 nfc_credits;
        int ret;

        ret = tb_port_read(port, &nfc_credits, TB_CFG_PORT, ADP_CS_4, 1);
        if (ret)
                return ret;

        if (nfc_credits != port->config.nfc_credits) {
                u32 total;

                total = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
                        ADP_CS_4_TOTAL_BUFFERS_SHIFT;

                tb_port_dbg(port, "total credits changed %u -> %u\n",
                            port->total_credits, total);

                port->config.nfc_credits = nfc_credits;
                port->total_credits = total;
        }

        return 0;
}

/**
 * tb_port_update_credits() - Re-read port total credits
 * @port: Port to update
 *
 * After the link is bonded (or bonding was disabled) the port total
 * credits may change, so this function needs to be called to re-read
 * the credits. Updates also the second lane adapter.
 */
int tb_port_update_credits(struct tb_port *port)
{
        int ret;

        ret = tb_port_do_update_credits(port);
        if (ret)
                return ret;
        return tb_port_do_update_credits(port->dual_link_port);
}

static int tb_port_start_lane_initialization(struct tb_port *port)
{
        int ret;

@@ -1054,6 +1158,33 @@ static int tb_port_start_lane_initialization(struct tb_port *port)
        return ret == -EINVAL ? 0 : ret;
}

/*
 * Returns true if the port had something (router, XDomain) connected
 * before suspend.
 */
static bool tb_port_resume(struct tb_port *port)
{
        bool has_remote = tb_port_has_remote(port);

        if (port->usb4) {
                usb4_port_device_resume(port->usb4);
        } else if (!has_remote) {
                /*
                 * For disconnected downstream lane adapters start lane
                 * initialization now so we detect future connects.
                 *
                 * For XDomain start the lane initialization now so the
                 * link gets re-established.
                 *
                 * This is only needed for non-USB4 ports.
                 */
                if (!tb_is_upstream_port(port) || port->xdomain)
                        tb_port_start_lane_initialization(port);
        }

        return has_remote || port->xdomain;
}

/**
 * tb_port_is_enabled() - Is the adapter port enabled
 * @port: Port to check
@@ -1592,8 +1723,7 @@ static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
                                      bool disconnect)
{
        struct tb_switch *sw = tb_to_switch(dev);
-       int val;
-       int ret;
+       int val, ret;

        pm_runtime_get_sync(&sw->dev);

@@ -1616,22 +1746,27 @@ static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
        nvm_clear_auth_status(sw);

        if (val > 0) {
-               if (!sw->nvm->flushed) {
-                       if (!sw->nvm->buf) {
-                               ret = -EINVAL;
-                               goto exit_unlock;
-                       }
-
-                       ret = nvm_validate_and_write(sw);
-                       if (ret || val == WRITE_ONLY)
-                               goto exit_unlock;
-               }
-               if (val == WRITE_AND_AUTHENTICATE) {
-                       if (disconnect) {
-                               ret = tb_lc_force_power(sw);
-                       } else {
-                               sw->nvm->authenticating = true;
-                               ret = nvm_authenticate(sw);
-                       }
+               if (val == AUTHENTICATE_ONLY) {
+                       if (disconnect)
+                               ret = -EINVAL;
+                       else
+                               ret = nvm_authenticate(sw, true);
+               } else {
+                       if (!sw->nvm->flushed) {
+                               if (!sw->nvm->buf) {
+                                       ret = -EINVAL;
+                                       goto exit_unlock;
+                               }
+
+                               ret = nvm_validate_and_write(sw);
+                               if (ret || val == WRITE_ONLY)
+                                       goto exit_unlock;
+                       }
+                       if (val == WRITE_AND_AUTHENTICATE) {
+                               if (disconnect)
+                                       ret = tb_lc_force_power(sw);
+                               else
+                                       ret = nvm_authenticate(sw, false);
+                       }
                }
        }
@@ -2432,6 +2567,14 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
                return ret;
        }

+       ret = tb_port_wait_for_link_width(down, 2, 100);
+       if (ret) {
+               tb_port_warn(down, "timeout enabling lane bonding\n");
+               return ret;
+       }
+
+       tb_port_update_credits(down);
+       tb_port_update_credits(up);
        tb_switch_update_link_attributes(sw);

        tb_sw_dbg(sw, "lane bonding enabled\n");

@@ -2462,7 +2605,17 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
        tb_port_lane_bonding_disable(up);
        tb_port_lane_bonding_disable(down);

+       /*
+        * It is fine if we get other errors as the router might have
+        * been unplugged.
+        */
+       if (tb_port_wait_for_link_width(down, 1, 100) == -ETIMEDOUT)
+               tb_sw_warn(sw, "timeout disabling lane bonding\n");
+
+       tb_port_update_credits(down);
+       tb_port_update_credits(up);
        tb_switch_update_link_attributes(sw);

        tb_sw_dbg(sw, "lane bonding disabled\n");
}

@@ -2529,6 +2682,16 @@ void tb_switch_unconfigure_link(struct tb_switch *sw)
                tb_lc_unconfigure_port(down);
}

+static void tb_switch_credits_init(struct tb_switch *sw)
+{
+       if (tb_switch_is_icm(sw))
+               return;
+       if (!tb_switch_is_usb4(sw))
+               return;
+       if (usb4_switch_credits_init(sw))
+               tb_sw_info(sw, "failed to determine preferred buffer allocation, using defaults\n");
+}
+
/**
 * tb_switch_add() - Add a switch to the domain
 * @sw: Switch to add

@@ -2559,6 +2722,8 @@ int tb_switch_add(struct tb_switch *sw)
        }

        if (!sw->safe_mode) {
+               tb_switch_credits_init(sw);
+
                /* read drom */
                ret = tb_drom_read(sw);
                if (ret) {

@@ -2612,11 +2777,16 @@ int tb_switch_add(struct tb_switch *sw)
                          sw->device_name);
        }

+       ret = usb4_switch_add_ports(sw);
+       if (ret) {
+               dev_err(&sw->dev, "failed to add USB4 ports\n");
+               goto err_del;
+       }
+
        ret = tb_switch_nvm_add(sw);
        if (ret) {
                dev_err(&sw->dev, "failed to add NVM devices\n");
-               device_del(&sw->dev);
-               return ret;
+               goto err_ports;
        }

        /*

@@ -2637,6 +2807,13 @@ int tb_switch_add(struct tb_switch *sw)

        tb_switch_debugfs_init(sw);
        return 0;
+
+err_ports:
+       usb4_switch_remove_ports(sw);
+err_del:
+       device_del(&sw->dev);
+
+       return ret;
}

/**

@@ -2676,6 +2853,7 @@ void tb_switch_remove(struct tb_switch *sw)
        tb_plug_events_active(sw, false);

        tb_switch_nvm_remove(sw);
+       usb4_switch_remove_ports(sw);

        if (tb_route(sw))
                dev_info(&sw->dev, "device disconnected\n");
@@ -2773,22 +2951,11 @@ int tb_switch_resume(struct tb_switch *sw)

        /* check for surviving downstream switches */
        tb_switch_for_each_port(sw, port) {
-               if (!tb_port_has_remote(port) && !port->xdomain) {
-                       /*
-                        * For disconnected downstream lane adapters
-                        * start lane initialization now so we detect
-                        * future connects.
-                        */
-                       if (!tb_is_upstream_port(port) && tb_port_is_null(port))
-                               tb_port_start_lane_initialization(port);
-               } else if (port->xdomain) {
-                       /*
-                        * Start lane initialization for XDomain so the
-                        * link gets re-established.
-                        */
-                       tb_port_start_lane_initialization(port);
-               }
+               if (!tb_port_is_null(port))
+                       continue;
+
+               if (!tb_port_resume(port))
+                       continue;

                if (tb_wait_for_port(port, true) <= 0) {
                        tb_port_warn(port,

@@ -2797,7 +2964,7 @@ int tb_switch_resume(struct tb_switch *sw)
                        tb_sw_set_unplugged(port->remote->sw);
                else if (port->xdomain)
                        port->xdomain->is_unplugged = true;
-               } else if (tb_port_has_remote(port) || port->xdomain) {
+               } else {
                        /*
                         * Always unlock the port so the downstream
                         * switch/domain is accessible.

@@ -2844,7 +3011,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
        if (runtime) {
                /* Trigger wake when something is plugged in/out */
                flags |= TB_WAKE_ON_CONNECT | TB_WAKE_ON_DISCONNECT;
-               flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
+               flags |= TB_WAKE_ON_USB4;
+               flags |= TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE | TB_WAKE_ON_DP;
        } else if (device_may_wakeup(&sw->dev)) {
                flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
        }
@@ -10,6 +10,7 @@
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/pm_runtime.h>
+#include <linux/platform_data/x86/apple.h>

#include "tb.h"
#include "tb_regs.h"

@@ -595,7 +596,7 @@ static void tb_scan_port(struct tb_port *port)
                return;
        }

-       tb_retimer_scan(port);
+       tb_retimer_scan(port, true);

        sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
                             tb_downstream_route(port));

@@ -662,7 +663,7 @@ static void tb_scan_port(struct tb_port *port)
                tb_sw_warn(sw, "failed to enable TMU\n");

        /* Scan upstream retimers */
-       tb_retimer_scan(upstream_port);
+       tb_retimer_scan(upstream_port, true);

        /*
         * Create USB 3.x tunnels only when the switch is plugged to the

@@ -1571,6 +1572,69 @@ static const struct tb_cm_ops tb_cm_ops = {
        .disconnect_xdomain_paths = tb_disconnect_xdomain_paths,
};

/*
 * During suspend the Thunderbolt controller is reset and all PCIe
 * tunnels are lost. The NHI driver will try to reestablish all tunnels
 * during resume. This adds device links between the tunneled PCIe
 * downstream ports and the NHI so that the device core will make sure
 * NHI is resumed first before the rest.
 */
static void tb_apple_add_links(struct tb_nhi *nhi)
{
        struct pci_dev *upstream, *pdev;

        if (!x86_apple_machine)
                return;

        switch (nhi->pdev->device) {
        case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
        case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
        case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
        case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
                break;
        default:
                return;
        }

        upstream = pci_upstream_bridge(nhi->pdev);
        while (upstream) {
                if (!pci_is_pcie(upstream))
                        return;
                if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
                        break;
                upstream = pci_upstream_bridge(upstream);
        }

        if (!upstream)
                return;

        /*
         * For each hotplug downstream port, add a device link back to
         * NHI so that PCIe tunnels can be re-established after sleep.
         */
        for_each_pci_bridge(pdev, upstream->subordinate) {
                const struct device_link *link;

                if (!pci_is_pcie(pdev))
                        continue;
                if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
                    !pdev->is_hotplug_bridge)
                        continue;

                link = device_link_add(&pdev->dev, &nhi->pdev->dev,
                                       DL_FLAG_AUTOREMOVE_SUPPLIER |
                                       DL_FLAG_PM_RUNTIME);
                if (link) {
                        dev_dbg(&nhi->pdev->dev, "created link from %s\n",
                                dev_name(&pdev->dev));
                } else {
                        dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
                                 dev_name(&pdev->dev));
                }
        }
}

struct tb *tb_probe(struct tb_nhi *nhi)
{
        struct tb_cm *tcm;

@@ -1594,5 +1658,8 @@ struct tb *tb_probe(struct tb_nhi *nhi)

        tb_dbg(tb, "using software connection manager\n");

+       tb_apple_add_links(nhi);
+       tb_acpi_add_links(nhi);
+
        return tb;
}
@@ -20,6 +20,7 @@

#define NVM_MIN_SIZE            SZ_32K
#define NVM_MAX_SIZE            SZ_512K
+#define NVM_DATA_DWORDS         16

/* Intel specific NVM offsets */
#define NVM_DEVID               0x05
@ -57,6 +58,12 @@ struct tb_nvm {
|
|||
bool flushed;
|
||||
};
|
||||
|
||||
enum tb_nvm_write_ops {
|
||||
WRITE_AND_AUTHENTICATE = 1,
|
||||
WRITE_ONLY = 2,
|
||||
AUTHENTICATE_ONLY = 3,
|
||||
};
|
||||
|
||||
#define TB_SWITCH_KEY_SIZE 32
|
||||
#define TB_SWITCH_MAX_DEPTH 6
|
||||
#define USB4_SWITCH_MAX_DEPTH 5
|
||||
|
|
@ -135,6 +142,12 @@ struct tb_switch_tmu {
|
|||
* @rpm_complete: Completion used to wait for runtime resume to
|
||||
* complete (ICM only)
|
||||
* @quirks: Quirks used for this Thunderbolt switch
|
||||
* @credit_allocation: Are the below buffer allocation parameters valid
|
||||
* @max_usb3_credits: Router preferred number of buffers for USB 3.x
|
||||
* @min_dp_aux_credits: Router preferred minimum number of buffers for DP AUX
|
||||
* @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
|
||||
* @max_pcie_credits: Router preferred number of buffers for PCIe
|
||||
* @max_dma_credits: Router preferred number of buffers for DMA/P2P
|
||||
*
|
||||
* When the switch is being added or removed to the domain (other
|
||||
* switches) you need to have domain lock held.
|
||||
|
|
@ -177,6 +190,12 @@ struct tb_switch {
|
|||
u8 depth;
|
||||
struct completion rpm_complete;
|
||||
unsigned long quirks;
|
||||
bool credit_allocation;
|
||||
unsigned int max_usb3_credits;
|
||||
unsigned int min_dp_aux_credits;
|
||||
unsigned int min_dp_main_credits;
|
||||
unsigned int max_pcie_credits;
|
||||
unsigned int max_dma_credits;
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
@ -189,6 +208,7 @@ struct tb_switch {
|
|||
* @cap_tmu: Offset of the adapter specific TMU capability (%0 if not present)
|
||||
* @cap_adap: Offset of the adapter specific capability (%0 if not present)
|
||||
* @cap_usb4: Offset to the USB4 port capability (%0 if not present)
|
||||
* @usb4: Pointer to the USB4 port structure (only if @cap_usb4 is != %0)
|
||||
* @port: Port number on switch
|
||||
* @disabled: Disabled by eeprom or enabled but not implemented
|
||||
* @bonded: true if the port is bonded (two lanes combined as one)
|
||||
|
|
@ -198,6 +218,10 @@ struct tb_switch {
|
|||
* @in_hopids: Currently allocated input HopIDs
|
||||
* @out_hopids: Currently allocated output HopIDs
|
||||
* @list: Used to link ports to DP resources list
|
||||
* @total_credits: Total number of buffers available for this port
|
||||
* @ctl_credits: Buffers reserved for control path
|
||||
* @dma_credits: Number of credits allocated for DMA tunneling for all
|
||||
* DMA paths through this port.
|
||||
*
|
||||
* In USB4 terminology this structure represents an adapter (protocol or
|
||||
* lane adapter).
|
||||
|
|
@ -211,6 +235,7 @@ struct tb_port {
|
|||
int cap_tmu;
|
||||
int cap_adap;
|
||||
int cap_usb4;
|
||||
struct usb4_port *usb4;
|
||||
u8 port;
|
||||
bool disabled;
|
||||
bool bonded;
|
||||
|
|
@ -219,6 +244,24 @@ struct tb_port {
|
|||
struct ida in_hopids;
|
||||
struct ida out_hopids;
|
||||
struct list_head list;
|
||||
unsigned int total_credits;
|
||||
unsigned int ctl_credits;
|
||||
unsigned int dma_credits;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct usb4_port - USB4 port device
|
||||
* @dev: Device for the port
|
||||
* @port: Pointer to the lane 0 adapter
|
||||
* @can_offline: Does the port have necessary platform support to moved
|
||||
* it into offline mode and back
|
||||
* @offline: The port is currently in offline mode
|
||||
*/
|
||||
struct usb4_port {
|
||||
struct device dev;
|
||||
struct tb_port *port;
|
||||
bool can_offline;
|
||||
bool offline;
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
@ -255,6 +298,8 @@ struct tb_retimer {
|
|||
* @next_hop_index: HopID of the packet when it is routed out from @out_port
|
||||
* @initial_credits: Number of initial flow control credits allocated for
|
||||
* the path
|
||||
* @nfc_credits: Number of non-flow controlled buffers allocated for the
|
||||
* @in_port.
|
||||
*
|
||||
* Hop configuration is always done on the IN port of a switch.
|
||||
* in_port and out_port have to be on the same switch. Packets arriving on
|
||||
|
|
@ -274,6 +319,7 @@ struct tb_path_hop {
|
|||
int in_counter_index;
|
||||
int next_hop_index;
|
||||
unsigned int initial_credits;
|
||||
unsigned int nfc_credits;
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
@ -296,7 +342,6 @@ enum tb_path_port {
|
|||
* struct tb_path - a unidirectional path between two ports
|
||||
* @tb: Pointer to the domain structure
|
||||
* @name: Name of the path (used for debugging)
|
||||
* @nfc_credits: Number of non flow controlled credits allocated for the path
|
||||
* @ingress_shared_buffer: Shared buffering used for ingress ports on the path
|
||||
* @egress_shared_buffer: Shared buffering used for egress ports on the path
|
||||
* @ingress_fc_enable: Flow control for ingress ports on the path
|
||||
|
|
@ -317,7 +362,6 @@ enum tb_path_port {
|
|||
struct tb_path {
|
||||
struct tb *tb;
|
||||
const char *name;
|
||||
int nfc_credits;
|
||||
enum tb_path_port ingress_shared_buffer;
|
||||
enum tb_path_port egress_shared_buffer;
|
||||
enum tb_path_port ingress_fc_enable;
|
||||
|
|
@ -346,6 +390,7 @@ struct tb_path {
|
|||
#define TB_WAKE_ON_USB4 BIT(2)
|
||||
#define TB_WAKE_ON_USB3 BIT(3)
|
||||
#define TB_WAKE_ON_PCIE BIT(4)
|
||||
#define TB_WAKE_ON_DP BIT(5)
|
||||
|
||||
/**
|
||||
* struct tb_cm_ops - Connection manager specific operations vector
|
||||
|
|
@ -623,6 +668,7 @@ struct tb *tb_probe(struct tb_nhi *nhi);
|
|||
extern struct device_type tb_domain_type;
|
||||
extern struct device_type tb_retimer_type;
|
||||
extern struct device_type tb_switch_type;
|
||||
extern struct device_type usb4_port_device_type;
|
||||
|
||||
int tb_domain_init(void);
|
||||
void tb_domain_exit(void);
|
||||
|
|
@ -674,6 +720,16 @@ int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
|
|||
void tb_nvm_free(struct tb_nvm *nvm);
|
||||
void tb_nvm_exit(void);
|
||||
|
||||
typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
|
||||
typedef int (*write_block_fn)(void *, unsigned int, const void *, size_t);
|
||||
|
||||
int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
|
||||
unsigned int retries, read_block_fn read_block,
|
||||
void *read_block_data);
|
||||
int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
|
||||
unsigned int retries, write_block_fn write_next_block,
|
||||
void *write_block_data);
|
||||
|
||||
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
|
||||
u64 route);
|
||||
struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
|
||||
|
|
@ -853,6 +909,11 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
|
|||
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
|
||||
struct tb_port *prev);
|
||||
|
||||
static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
|
||||
{
|
||||
return tb_port_is_null(port) && port->sw->credit_allocation;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_for_each_port_on_path() - Iterate over each port on path
|
||||
* @src: Source port
|
||||
|
|
@ -870,6 +931,9 @@ int tb_port_get_link_width(struct tb_port *port);
|
|||
int tb_port_state(struct tb_port *port);
|
||||
int tb_port_lane_bonding_enable(struct tb_port *port);
|
||||
void tb_port_lane_bonding_disable(struct tb_port *port);
|
||||
int tb_port_wait_for_link_width(struct tb_port *port, int width,
|
||||
int timeout_msec);
|
||||
int tb_port_update_credits(struct tb_port *port);
|
||||
|
||||
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
|
||||
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
|
||||
|
|
@ -904,6 +968,17 @@ bool tb_path_is_invalid(struct tb_path *path);
|
|||
bool tb_path_port_on_path(const struct tb_path *path,
|
||||
const struct tb_port *port);
|
||||
|
||||
/**
|
||||
* tb_path_for_each_hop() - Iterate over each hop on path
|
||||
* @path: Path whose hops to iterate
|
||||
* @hop: Hop used as iterator
|
||||
*
|
||||
* Iterates over each hop on path.
|
||||
*/
|
||||
#define tb_path_for_each_hop(path, hop) \
|
||||
for ((hop) = &(path)->hops[0]; \
|
||||
(hop) <= &(path)->hops[(path)->path_length - 1]; (hop)++)
|
||||
|
||||
int tb_drom_read(struct tb_switch *sw);
|
||||
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
|
||||
|
||||
|
|
@ -950,7 +1025,7 @@ void tb_xdomain_remove(struct tb_xdomain *xd);
|
|||
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
|
||||
u8 depth);
|
||||
|
||||
int tb_retimer_scan(struct tb_port *port);
|
||||
int tb_retimer_scan(struct tb_port *port, bool add);
|
||||
void tb_retimer_remove_all(struct tb_port *port);
|
||||
|
||||
static inline bool tb_is_retimer(const struct device *dev)
|
||||
|
|
@ -975,10 +1050,12 @@ int usb4_switch_set_sleep(struct tb_switch *sw);
|
|||
int usb4_switch_nvm_sector_size(struct tb_switch *sw);
|
||||
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
|
||||
size_t size);
|
||||
int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address);
|
||||
int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
|
||||
const void *buf, size_t size);
|
||||
int usb4_switch_nvm_authenticate(struct tb_switch *sw);
|
||||
int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status);
|
||||
int usb4_switch_credits_init(struct tb_switch *sw);
|
||||
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
|
||||
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
|
||||
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
|
||||
|
|
@ -986,20 +1063,27 @@ struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
|
|||
const struct tb_port *port);
|
||||
struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
|
||||
const struct tb_port *port);
|
||||
int usb4_switch_add_ports(struct tb_switch *sw);
|
||||
void usb4_switch_remove_ports(struct tb_switch *sw);
|
||||
|
||||
int usb4_port_unlock(struct tb_port *port);
|
||||
int usb4_port_configure(struct tb_port *port);
|
||||
void usb4_port_unconfigure(struct tb_port *port);
|
||||
int usb4_port_configure_xdomain(struct tb_port *port);
|
||||
void usb4_port_unconfigure_xdomain(struct tb_port *port);
|
||||
int usb4_port_router_offline(struct tb_port *port);
|
||||
int usb4_port_router_online(struct tb_port *port);
|
||||
int usb4_port_enumerate_retimers(struct tb_port *port);
|
||||
|
||||
int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index);
|
||||
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
|
||||
u8 size);
|
||||
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
|
||||
const void *buf, u8 size);
|
||||
int usb4_port_retimer_is_last(struct tb_port *port, u8 index);
|
||||
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index);
|
||||
int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
|
||||
unsigned int address);
|
||||
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index,
|
||||
unsigned int address, const void *buf,
|
||||
size_t size);
|
||||
|
|
@ -1018,6 +1102,22 @@ int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
|
|||
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
|
||||
int *downstream_bw);
|
||||
|
||||
static inline bool tb_is_usb4_port_device(const struct device *dev)
|
||||
{
|
||||
return dev->type == &usb4_port_device_type;
|
||||
}
|
||||
|
||||
static inline struct usb4_port *tb_to_usb4_port_device(struct device *dev)
|
||||
{
|
||||
if (tb_is_usb4_port_device(dev))
|
||||
return container_of(dev, struct usb4_port, dev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
struct usb4_port *usb4_port_device_add(struct tb_port *port);
|
||||
void usb4_port_device_remove(struct usb4_port *usb4);
|
||||
int usb4_port_device_resume(struct usb4_port *usb4);
|
||||
|
||||
/* Keep link controller awake during update */
|
||||
#define QUIRK_FORCE_POWER_LINK_CONTROLLER BIT(0)
|
||||
|
||||
|
|
@ -1031,6 +1131,11 @@ bool tb_acpi_may_tunnel_usb3(void);
|
|||
bool tb_acpi_may_tunnel_dp(void);
|
||||
bool tb_acpi_may_tunnel_pcie(void);
|
||||
bool tb_acpi_is_xdomain_allowed(void);
|
||||
|
||||
int tb_acpi_init(void);
|
||||
void tb_acpi_exit(void);
|
||||
int tb_acpi_power_on_retimers(struct tb_port *port);
|
||||
int tb_acpi_power_off_retimers(struct tb_port *port);
|
||||
#else
|
||||
static inline void tb_acpi_add_links(struct tb_nhi *nhi) { }
|
||||
|
||||
|
|
@ -1039,6 +1144,11 @@ static inline bool tb_acpi_may_tunnel_usb3(void) { return true; }
|
|||
static inline bool tb_acpi_may_tunnel_dp(void) { return true; }
|
||||
static inline bool tb_acpi_may_tunnel_pcie(void) { return true; }
|
||||
static inline bool tb_acpi_is_xdomain_allowed(void) { return true; }
|
||||
|
||||
static inline int tb_acpi_init(void) { return 0; }
|
||||
static inline void tb_acpi_exit(void) { }
|
||||
static inline int tb_acpi_power_on_retimers(struct tb_port *port) { return 0; }
|
||||
static inline int tb_acpi_power_off_retimers(struct tb_port *port) { return 0; }
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_DEBUG_FS
|
||||
|
|
|
|||
|
|
@@ -195,6 +195,7 @@ struct tb_regs_switch_header {
#define ROUTER_CS_5_SLP			BIT(0)
#define ROUTER_CS_5_WOP			BIT(1)
#define ROUTER_CS_5_WOU			BIT(2)
#define ROUTER_CS_5_WOD			BIT(3)
#define ROUTER_CS_5_C3S			BIT(23)
#define ROUTER_CS_5_PTO			BIT(24)
#define ROUTER_CS_5_UTO			BIT(25)

@@ -228,6 +229,7 @@ enum usb4_switch_op {
	USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23,
	USB4_SWITCH_OP_DROM_READ = 0x24,
	USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
	USB4_SWITCH_OP_BUFFER_ALLOC = 0x33,
};

/* Router TMU configuration */

@@ -458,6 +460,8 @@ struct tb_regs_hop {
#define TB_LC_SX_CTRL			0x96
#define TB_LC_SX_CTRL_WOC		BIT(1)
#define TB_LC_SX_CTRL_WOD		BIT(2)
#define TB_LC_SX_CTRL_WODPC		BIT(3)
#define TB_LC_SX_CTRL_WODPD		BIT(4)
#define TB_LC_SX_CTRL_WOU4		BIT(5)
#define TB_LC_SX_CTRL_WOP		BIT(6)
#define TB_LC_SX_CTRL_L1C		BIT(16)
@@ -87,22 +87,30 @@ static struct tb_switch *alloc_host(struct kunit *test)
	sw->ports[1].config.type = TB_TYPE_PORT;
	sw->ports[1].config.max_in_hop_id = 19;
	sw->ports[1].config.max_out_hop_id = 19;
	sw->ports[1].total_credits = 60;
	sw->ports[1].ctl_credits = 2;
	sw->ports[1].dual_link_port = &sw->ports[2];

	sw->ports[2].config.type = TB_TYPE_PORT;
	sw->ports[2].config.max_in_hop_id = 19;
	sw->ports[2].config.max_out_hop_id = 19;
	sw->ports[2].total_credits = 60;
	sw->ports[2].ctl_credits = 2;
	sw->ports[2].dual_link_port = &sw->ports[1];
	sw->ports[2].link_nr = 1;

	sw->ports[3].config.type = TB_TYPE_PORT;
	sw->ports[3].config.max_in_hop_id = 19;
	sw->ports[3].config.max_out_hop_id = 19;
	sw->ports[3].total_credits = 60;
	sw->ports[3].ctl_credits = 2;
	sw->ports[3].dual_link_port = &sw->ports[4];

	sw->ports[4].config.type = TB_TYPE_PORT;
	sw->ports[4].config.max_in_hop_id = 19;
	sw->ports[4].config.max_out_hop_id = 19;
	sw->ports[4].total_credits = 60;
	sw->ports[4].ctl_credits = 2;
	sw->ports[4].dual_link_port = &sw->ports[3];
	sw->ports[4].link_nr = 1;

@@ -143,6 +151,25 @@ static struct tb_switch *alloc_host(struct kunit *test)
	return sw;
}

static struct tb_switch *alloc_host_usb4(struct kunit *test)
{
	struct tb_switch *sw;

	sw = alloc_host(test);
	if (!sw)
		return NULL;

	sw->generation = 4;
	sw->credit_allocation = true;
	sw->max_usb3_credits = 32;
	sw->min_dp_aux_credits = 1;
	sw->min_dp_main_credits = 0;
	sw->max_pcie_credits = 64;
	sw->max_dma_credits = 14;

	return sw;
}

static struct tb_switch *alloc_dev_default(struct kunit *test,
					   struct tb_switch *parent,
					   u64 route, bool bonded)
@@ -164,44 +191,60 @@ static struct tb_switch *alloc_dev_default(struct kunit *test,
	sw->ports[1].config.type = TB_TYPE_PORT;
	sw->ports[1].config.max_in_hop_id = 19;
	sw->ports[1].config.max_out_hop_id = 19;
	sw->ports[1].total_credits = 60;
	sw->ports[1].ctl_credits = 2;
	sw->ports[1].dual_link_port = &sw->ports[2];

	sw->ports[2].config.type = TB_TYPE_PORT;
	sw->ports[2].config.max_in_hop_id = 19;
	sw->ports[2].config.max_out_hop_id = 19;
	sw->ports[2].total_credits = 60;
	sw->ports[2].ctl_credits = 2;
	sw->ports[2].dual_link_port = &sw->ports[1];
	sw->ports[2].link_nr = 1;

	sw->ports[3].config.type = TB_TYPE_PORT;
	sw->ports[3].config.max_in_hop_id = 19;
	sw->ports[3].config.max_out_hop_id = 19;
	sw->ports[3].total_credits = 60;
	sw->ports[3].ctl_credits = 2;
	sw->ports[3].dual_link_port = &sw->ports[4];

	sw->ports[4].config.type = TB_TYPE_PORT;
	sw->ports[4].config.max_in_hop_id = 19;
	sw->ports[4].config.max_out_hop_id = 19;
	sw->ports[4].total_credits = 60;
	sw->ports[4].ctl_credits = 2;
	sw->ports[4].dual_link_port = &sw->ports[3];
	sw->ports[4].link_nr = 1;

	sw->ports[5].config.type = TB_TYPE_PORT;
	sw->ports[5].config.max_in_hop_id = 19;
	sw->ports[5].config.max_out_hop_id = 19;
	sw->ports[5].total_credits = 60;
	sw->ports[5].ctl_credits = 2;
	sw->ports[5].dual_link_port = &sw->ports[6];

	sw->ports[6].config.type = TB_TYPE_PORT;
	sw->ports[6].config.max_in_hop_id = 19;
	sw->ports[6].config.max_out_hop_id = 19;
	sw->ports[6].total_credits = 60;
	sw->ports[6].ctl_credits = 2;
	sw->ports[6].dual_link_port = &sw->ports[5];
	sw->ports[6].link_nr = 1;

	sw->ports[7].config.type = TB_TYPE_PORT;
	sw->ports[7].config.max_in_hop_id = 19;
	sw->ports[7].config.max_out_hop_id = 19;
	sw->ports[7].total_credits = 60;
	sw->ports[7].ctl_credits = 2;
	sw->ports[7].dual_link_port = &sw->ports[8];

	sw->ports[8].config.type = TB_TYPE_PORT;
	sw->ports[8].config.max_in_hop_id = 19;
	sw->ports[8].config.max_out_hop_id = 19;
	sw->ports[8].total_credits = 60;
	sw->ports[8].ctl_credits = 2;
	sw->ports[8].dual_link_port = &sw->ports[7];
	sw->ports[8].link_nr = 1;

@@ -260,14 +303,18 @@ static struct tb_switch *alloc_dev_default(struct kunit *test,
	if (port->dual_link_port && upstream_port->dual_link_port) {
		port->dual_link_port->remote = upstream_port->dual_link_port;
		upstream_port->dual_link_port->remote = port->dual_link_port;
-	}
-
-	if (bonded) {
-		/* Bonding is used */
-		port->bonded = true;
-		port->dual_link_port->bonded = true;
-		upstream_port->bonded = true;
-		upstream_port->dual_link_port->bonded = true;
+
+		if (bonded) {
+			/* Bonding is used */
+			port->bonded = true;
+			port->total_credits *= 2;
+			port->dual_link_port->bonded = true;
+			port->dual_link_port->total_credits = 0;
+			upstream_port->bonded = true;
+			upstream_port->total_credits *= 2;
+			upstream_port->dual_link_port->bonded = true;
+			upstream_port->dual_link_port->total_credits = 0;
+		}
	}

	return sw;

@@ -294,6 +341,27 @@ static struct tb_switch *alloc_dev_with_dpin(struct kunit *test,
	return sw;
}

static struct tb_switch *alloc_dev_usb4(struct kunit *test,
					struct tb_switch *parent,
					u64 route, bool bonded)
{
	struct tb_switch *sw;

	sw = alloc_dev_default(test, parent, route, bonded);
	if (!sw)
		return NULL;

	sw->generation = 4;
	sw->credit_allocation = true;
	sw->max_usb3_credits = 14;
	sw->min_dp_aux_credits = 1;
	sw->min_dp_main_credits = 18;
	sw->max_pcie_credits = 32;
	sw->max_dma_credits = 14;

	return sw;
}

static void tb_test_path_basic(struct kunit *test)
{
	struct tb_port *src_port, *dst_port, *p;
@@ -1829,6 +1897,475 @@ static void tb_test_tunnel_dma_match(struct kunit *test)
	tb_tunnel_free(tunnel);
}

static void tb_test_credit_alloc_legacy_not_bonded(struct kunit *test)
{
	struct tb_switch *host, *dev;
	struct tb_port *up, *down;
	struct tb_tunnel *tunnel;
	struct tb_path *path;

	host = alloc_host(test);
	dev = alloc_dev_default(test, host, 0x1, false);

	down = &host->ports[8];
	up = &dev->ports[9];
	tunnel = tb_tunnel_alloc_pci(NULL, up, down);
	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);

	path = tunnel->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U);

	path = tunnel->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U);

	tb_tunnel_free(tunnel);
}

static void tb_test_credit_alloc_legacy_bonded(struct kunit *test)
{
	struct tb_switch *host, *dev;
	struct tb_port *up, *down;
	struct tb_tunnel *tunnel;
	struct tb_path *path;

	host = alloc_host(test);
	dev = alloc_dev_default(test, host, 0x1, true);

	down = &host->ports[8];
	up = &dev->ports[9];
	tunnel = tb_tunnel_alloc_pci(NULL, up, down);
	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);

	path = tunnel->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);

	path = tunnel->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);

	tb_tunnel_free(tunnel);
}

static void tb_test_credit_alloc_pcie(struct kunit *test)
{
	struct tb_switch *host, *dev;
	struct tb_port *up, *down;
	struct tb_tunnel *tunnel;
	struct tb_path *path;

	host = alloc_host_usb4(test);
	dev = alloc_dev_usb4(test, host, 0x1, true);

	down = &host->ports[8];
	up = &dev->ports[9];
	tunnel = tb_tunnel_alloc_pci(NULL, up, down);
	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);

	path = tunnel->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);

	path = tunnel->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U);

	tb_tunnel_free(tunnel);
}

static void tb_test_credit_alloc_dp(struct kunit *test)
{
	struct tb_switch *host, *dev;
	struct tb_port *in, *out;
	struct tb_tunnel *tunnel;
	struct tb_path *path;

	host = alloc_host_usb4(test);
	dev = alloc_dev_usb4(test, host, 0x1, true);

	in = &host->ports[5];
	out = &dev->ports[14];

	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);

	/* Video (main) path */
	path = tunnel->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U);

	/* AUX TX */
	path = tunnel->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);

	/* AUX RX */
	path = tunnel->paths[2];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);

	tb_tunnel_free(tunnel);
}

static void tb_test_credit_alloc_usb3(struct kunit *test)
{
	struct tb_switch *host, *dev;
	struct tb_port *up, *down;
	struct tb_tunnel *tunnel;
	struct tb_path *path;

	host = alloc_host_usb4(test);
	dev = alloc_dev_usb4(test, host, 0x1, true);

	down = &host->ports[12];
	up = &dev->ports[16];
	tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);

	path = tunnel->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);

	path = tunnel->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);

	tb_tunnel_free(tunnel);
}

static void tb_test_credit_alloc_dma(struct kunit *test)
{
	struct tb_switch *host, *dev;
	struct tb_port *nhi, *port;
	struct tb_tunnel *tunnel;
	struct tb_path *path;

	host = alloc_host_usb4(test);
	dev = alloc_dev_usb4(test, host, 0x1, true);

	nhi = &host->ports[7];
	port = &dev->ports[3];

	tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);

	/* DMA RX */
	path = tunnel->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);

	/* DMA TX */
	path = tunnel->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);

	tb_tunnel_free(tunnel);
}

static void tb_test_credit_alloc_dma_multiple(struct kunit *test)
{
	struct tb_tunnel *tunnel1, *tunnel2, *tunnel3;
	struct tb_switch *host, *dev;
	struct tb_port *nhi, *port;
	struct tb_path *path;

	host = alloc_host_usb4(test);
	dev = alloc_dev_usb4(test, host, 0x1, true);

	nhi = &host->ports[7];
	port = &dev->ports[3];

	/*
	 * Create three DMA tunnels through the same ports. With the
	 * default buffers we should be able to create two and the last
	 * one fails.
	 *
	 * For the default host we have the following buffers for DMA:
	 *
	 *   120 - (2 + 2 * (1 + 0) + 32 + 64 + spare) = 20
	 *
	 * For the device we have the following:
	 *
	 *  120 - (2 + 2 * (1 + 18) + 14 + 32 + spare) = 34
	 *
	 * spare = 14 + 1 = 15
	 *
	 * So on the host the first tunnel gets 14 and the second gets the
	 * remaining 1 and then we run out of buffers.
	 */
	tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
	KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
	KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);

	path = tunnel1->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);

	path = tunnel1->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);

	tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2);
	KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
	KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);

	path = tunnel2->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);

	path = tunnel2->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);

	tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3);
	KUNIT_ASSERT_TRUE(test, tunnel3 == NULL);

	/*
	 * Release the first DMA tunnel. That should make 14 buffers
	 * available for the next tunnel.
	 */
	tb_tunnel_free(tunnel1);

	tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3);
	KUNIT_ASSERT_TRUE(test, tunnel3 != NULL);

	path = tunnel3->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);

	path = tunnel3->paths[1];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);

	tb_tunnel_free(tunnel3);
	tb_tunnel_free(tunnel2);
}

static void tb_test_credit_alloc_all(struct kunit *test)
{
	struct tb_port *up, *down, *in, *out, *nhi, *port;
	struct tb_tunnel *pcie_tunnel, *dp_tunnel1, *dp_tunnel2, *usb3_tunnel;
	struct tb_tunnel *dma_tunnel1, *dma_tunnel2;
	struct tb_switch *host, *dev;
	struct tb_path *path;

	/*
	 * Create PCIe, 2 x DP, USB 3.x and two DMA tunnels from host to
	 * device. The expectation is that all of these can be established
	 * with the default credit allocation found in Intel hardware.
	 */

	host = alloc_host_usb4(test);
	dev = alloc_dev_usb4(test, host, 0x1, true);

	down = &host->ports[8];
	up = &dev->ports[9];
	pcie_tunnel = tb_tunnel_alloc_pci(NULL, up, down);
	KUNIT_ASSERT_TRUE(test, pcie_tunnel != NULL);
	KUNIT_ASSERT_EQ(test, pcie_tunnel->npaths, (size_t)2);

	path = pcie_tunnel->paths[0];
	KUNIT_ASSERT_EQ(test, path->path_length, 2);
	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);

	path = pcie_tunnel->paths[1];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U);
|
||||
|
||||
in = &host->ports[5];
|
||||
out = &dev->ports[13];
|
||||
|
||||
dp_tunnel1 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
|
||||
KUNIT_ASSERT_TRUE(test, dp_tunnel1 != NULL);
|
||||
KUNIT_ASSERT_EQ(test, dp_tunnel1->npaths, (size_t)3);
|
||||
|
||||
path = dp_tunnel1->paths[0];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U);
|
||||
|
||||
path = dp_tunnel1->paths[1];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
|
||||
|
||||
path = dp_tunnel1->paths[2];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
|
||||
|
||||
in = &host->ports[6];
|
||||
out = &dev->ports[14];
|
||||
|
||||
dp_tunnel2 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
|
||||
KUNIT_ASSERT_TRUE(test, dp_tunnel2 != NULL);
|
||||
KUNIT_ASSERT_EQ(test, dp_tunnel2->npaths, (size_t)3);
|
||||
|
||||
path = dp_tunnel2->paths[0];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U);
|
||||
|
||||
path = dp_tunnel2->paths[1];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
|
||||
|
||||
path = dp_tunnel2->paths[2];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
|
||||
|
||||
down = &host->ports[12];
|
||||
up = &dev->ports[16];
|
||||
usb3_tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
|
||||
KUNIT_ASSERT_TRUE(test, usb3_tunnel != NULL);
|
||||
KUNIT_ASSERT_EQ(test, usb3_tunnel->npaths, (size_t)2);
|
||||
|
||||
path = usb3_tunnel->paths[0];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
|
||||
|
||||
path = usb3_tunnel->paths[1];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
|
||||
|
||||
nhi = &host->ports[7];
|
||||
port = &dev->ports[3];
|
||||
|
||||
dma_tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
|
||||
KUNIT_ASSERT_TRUE(test, dma_tunnel1 != NULL);
|
||||
KUNIT_ASSERT_EQ(test, dma_tunnel1->npaths, (size_t)2);
|
||||
|
||||
path = dma_tunnel1->paths[0];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
|
||||
|
||||
path = dma_tunnel1->paths[1];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
|
||||
|
||||
dma_tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2);
|
||||
KUNIT_ASSERT_TRUE(test, dma_tunnel2 != NULL);
|
||||
KUNIT_ASSERT_EQ(test, dma_tunnel2->npaths, (size_t)2);
|
||||
|
||||
path = dma_tunnel2->paths[0];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
|
||||
|
||||
path = dma_tunnel2->paths[1];
|
||||
KUNIT_ASSERT_EQ(test, path->path_length, 2);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
|
||||
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
|
||||
|
||||
tb_tunnel_free(dma_tunnel2);
|
||||
tb_tunnel_free(dma_tunnel1);
|
||||
tb_tunnel_free(usb3_tunnel);
|
||||
tb_tunnel_free(dp_tunnel2);
|
||||
tb_tunnel_free(dp_tunnel1);
|
||||
tb_tunnel_free(pcie_tunnel);
|
||||
}
|
||||
|
||||
static const u32 root_directory[] = {
|
||||
0x55584401, /* "UXD" v1 */
|
||||
0x00000018, /* Root directory length */
|
||||
|
|
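The credit totals these tests assert (14 for a DMA hop, the bound on concurrent DP streams, and so on) all fall out of one piece of arithmetic in `tb_available_credits()`. A standalone userspace sketch of that arithmetic, with purely illustrative parameter values (not real hardware limits), may help when reading the assertions:

```c
#include <assert.h>

/*
 * Sketch (not the kernel code) of the tb_available_credits() math:
 * start from the buffers usable for tunneling (total minus the
 * control-channel credits), set aside USB3, PCIe and spare DMA
 * credits, let the remainder bound the number of DP streams, and
 * return what is left for PCIe/DMA. All names and values here are
 * hypothetical stand-ins for the switch fields in the patch.
 */
static unsigned int available_credits(unsigned int total, unsigned int ctl,
				      unsigned int usb3, unsigned int pcie,
				      unsigned int dma_spare,
				      unsigned int dp_aux, unsigned int dp_main,
				      unsigned int *max_dp_streams)
{
	int credits = (int)(total - ctl);
	unsigned int ndp;

	/* How many DP streams still fit after the fixed reservations */
	ndp = (unsigned int)((credits - (int)(usb3 + pcie + dma_spare)) /
			     (int)(dp_aux + dp_main));

	/* The rest is available for PCIe and DMA paths */
	credits -= (int)(ndp * (dp_aux + dp_main));
	credits -= (int)usb3;

	if (max_dp_streams)
		*max_dp_streams = ndp;
	return credits > 0 ? (unsigned int)credits : 0;
}
```

For example, with 60 total credits, 2 reserved for control, 20 for USB3, 6 for PCIe, 15 DMA spare and a DP stream costing 1 AUX + 12 main credits, one DP stream fits and 25 credits remain for PCIe/DMA.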
@@ -2105,6 +2642,14 @@ static struct kunit_case tb_test_cases[] = {
 	KUNIT_CASE(tb_test_tunnel_dma_tx),
 	KUNIT_CASE(tb_test_tunnel_dma_chain),
 	KUNIT_CASE(tb_test_tunnel_dma_match),
+	KUNIT_CASE(tb_test_credit_alloc_legacy_not_bonded),
+	KUNIT_CASE(tb_test_credit_alloc_legacy_bonded),
+	KUNIT_CASE(tb_test_credit_alloc_pcie),
+	KUNIT_CASE(tb_test_credit_alloc_dp),
+	KUNIT_CASE(tb_test_credit_alloc_usb3),
+	KUNIT_CASE(tb_test_credit_alloc_dma),
+	KUNIT_CASE(tb_test_credit_alloc_dma_multiple),
+	KUNIT_CASE(tb_test_credit_alloc_all),
 	KUNIT_CASE(tb_test_property_parse),
 	KUNIT_CASE(tb_test_property_format),
 	KUNIT_CASE(tb_test_property_copy),
@@ -34,6 +34,16 @@
 #define TB_DP_AUX_PATH_OUT		1
 #define TB_DP_AUX_PATH_IN		2

+/* Minimum number of credits needed for PCIe path */
+#define TB_MIN_PCIE_CREDITS		6U
+/*
+ * Number of credits we try to allocate for each DMA path if not limited
+ * by the host router baMaxHI.
+ */
+#define TB_DMA_CREDITS			14U
+/* Minimum number of credits for DMA path */
+#define TB_MIN_DMA_CREDITS		1U
+
 static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };

 #define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...)                   \
@@ -57,6 +67,55 @@ static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
 #define tb_tunnel_dbg(tunnel, fmt, arg...) \
 	__TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg)

+static inline unsigned int tb_usable_credits(const struct tb_port *port)
+{
+	return port->total_credits - port->ctl_credits;
+}
+
+/**
+ * tb_available_credits() - Available credits for PCIe and DMA
+ * @port: Lane adapter to check
+ * @max_dp_streams: If non-%NULL stores maximum number of simultaneous DP
+ *		    streams possible through this lane adapter
+ */
+static unsigned int tb_available_credits(const struct tb_port *port,
+					 size_t *max_dp_streams)
+{
+	const struct tb_switch *sw = port->sw;
+	int credits, usb3, pcie, spare;
+	size_t ndp;
+
+	usb3 = tb_acpi_may_tunnel_usb3() ? sw->max_usb3_credits : 0;
+	pcie = tb_acpi_may_tunnel_pcie() ? sw->max_pcie_credits : 0;
+
+	if (tb_acpi_is_xdomain_allowed()) {
+		spare = min_not_zero(sw->max_dma_credits, TB_DMA_CREDITS);
+		/* Add some credits for potential second DMA tunnel */
+		spare += TB_MIN_DMA_CREDITS;
+	} else {
+		spare = 0;
+	}
+
+	credits = tb_usable_credits(port);
+	if (tb_acpi_may_tunnel_dp()) {
+		/*
+		 * Maximum number of DP streams possible through the
+		 * lane adapter.
+		 */
+		ndp = (credits - (usb3 + pcie + spare)) /
+		      (sw->min_dp_aux_credits + sw->min_dp_main_credits);
+	} else {
+		ndp = 0;
+	}
+	credits -= ndp * (sw->min_dp_aux_credits + sw->min_dp_main_credits);
+	credits -= usb3;
+
+	if (max_dp_streams)
+		*max_dp_streams = ndp;
+
+	return credits > 0 ? credits : 0;
+}
+
 static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
 					 enum tb_tunnel_type type)
 {
@@ -94,24 +153,37 @@ static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate)
 	return 0;
 }

-static int tb_initial_credits(const struct tb_switch *sw)
+static int tb_pci_init_credits(struct tb_path_hop *hop)
 {
-	/* If the path is complete sw is not NULL */
-	if (sw) {
-		/* More credits for faster link */
-		switch (sw->link_speed * sw->link_width) {
-		case 40:
-			return 32;
-		case 20:
-			return 24;
-		}
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+	unsigned int credits;
+
+	if (tb_port_use_credit_allocation(port)) {
+		unsigned int available;
+
+		available = tb_available_credits(port, NULL);
+		credits = min(sw->max_pcie_credits, available);
+
+		if (credits < TB_MIN_PCIE_CREDITS)
+			return -ENOSPC;
+
+		credits = max(TB_MIN_PCIE_CREDITS, credits);
+	} else {
+		if (tb_port_is_null(port))
+			credits = port->bonded ? 32 : 16;
+		else
+			credits = 7;
 	}

-	return 16;
+	hop->initial_credits = credits;
+	return 0;
 }

-static void tb_pci_init_path(struct tb_path *path)
+static int tb_pci_init_path(struct tb_path *path)
 {
+	struct tb_path_hop *hop;
+
 	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
 	path->egress_shared_buffer = TB_PATH_NONE;
 	path->ingress_fc_enable = TB_PATH_ALL;
@@ -119,11 +191,16 @@ static void tb_pci_init_path(struct tb_path *path)
 	path->priority = 3;
 	path->weight = 1;
 	path->drop_packages = 0;
-	path->nfc_credits = 0;
-	path->hops[0].initial_credits = 7;
-	if (path->path_length > 1)
-		path->hops[1].initial_credits =
-			tb_initial_credits(path->hops[1].in_port->sw);
+
+	tb_path_for_each_hop(path, hop) {
+		int ret;
+
+		ret = tb_pci_init_credits(hop);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }

 /**
@@ -163,14 +240,16 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
 		goto err_free;
 	}
 	tunnel->paths[TB_PCI_PATH_UP] = path;
-	tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]);
+	if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]))
+		goto err_free;

 	path = tb_path_discover(tunnel->dst_port, -1, down, TB_PCI_HOPID, NULL,
 				"PCIe Down");
 	if (!path)
 		goto err_deactivate;
 	tunnel->paths[TB_PCI_PATH_DOWN] = path;
-	tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]);
+	if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]))
+		goto err_deactivate;

 	/* Validate that the tunnel is complete */
 	if (!tb_port_is_pcie_up(tunnel->dst_port)) {
@@ -228,23 +307,25 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,

 	path = tb_path_alloc(tb, down, TB_PCI_HOPID, up, TB_PCI_HOPID, 0,
 			     "PCIe Down");
-	if (!path) {
-		tb_tunnel_free(tunnel);
-		return NULL;
-	}
-	tb_pci_init_path(path);
+	if (!path)
+		goto err_free;
 	tunnel->paths[TB_PCI_PATH_DOWN] = path;
+	if (tb_pci_init_path(path))
+		goto err_free;

 	path = tb_path_alloc(tb, up, TB_PCI_HOPID, down, TB_PCI_HOPID, 0,
 			     "PCIe Up");
-	if (!path) {
-		tb_tunnel_free(tunnel);
-		return NULL;
-	}
-	tb_pci_init_path(path);
+	if (!path)
+		goto err_free;
 	tunnel->paths[TB_PCI_PATH_UP] = path;
+	if (tb_pci_init_path(path))
+		goto err_free;

 	return tunnel;
+
+err_free:
+	tb_tunnel_free(tunnel);
+	return NULL;
 }

 static bool tb_dp_is_usb4(const struct tb_switch *sw)
@@ -599,9 +680,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
 	return 0;
 }

+static void tb_dp_init_aux_credits(struct tb_path_hop *hop)
+{
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+
+	if (tb_port_use_credit_allocation(port))
+		hop->initial_credits = sw->min_dp_aux_credits;
+	else
+		hop->initial_credits = 1;
+}
+
 static void tb_dp_init_aux_path(struct tb_path *path)
 {
-	int i;
+	struct tb_path_hop *hop;

 	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
 	path->egress_shared_buffer = TB_PATH_NONE;
@@ -610,13 +702,42 @@ static void tb_dp_init_aux_path(struct tb_path *path)
 	path->priority = 2;
 	path->weight = 1;

-	for (i = 0; i < path->path_length; i++)
-		path->hops[i].initial_credits = 1;
+	tb_path_for_each_hop(path, hop)
+		tb_dp_init_aux_credits(hop);
 }

-static void tb_dp_init_video_path(struct tb_path *path, bool discover)
+static int tb_dp_init_video_credits(struct tb_path_hop *hop)
 {
-	u32 nfc_credits = path->hops[0].in_port->config.nfc_credits;
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+
+	if (tb_port_use_credit_allocation(port)) {
+		unsigned int nfc_credits;
+		size_t max_dp_streams;
+
+		tb_available_credits(port, &max_dp_streams);
+		/*
+		 * Read the number of currently allocated NFC credits
+		 * from the lane adapter. Since we only use them for DP
+		 * tunneling we can use that to figure out how many DP
+		 * tunnels already go through the lane adapter.
+		 */
+		nfc_credits = port->config.nfc_credits &
+				ADP_CS_4_NFC_BUFFERS_MASK;
+		if (nfc_credits / sw->min_dp_main_credits > max_dp_streams)
+			return -ENOSPC;
+
+		hop->nfc_credits = sw->min_dp_main_credits;
+	} else {
+		hop->nfc_credits = min(port->total_credits - 2, 12U);
+	}
+
+	return 0;
+}
+
+static int tb_dp_init_video_path(struct tb_path *path)
+{
+	struct tb_path_hop *hop;
+
 	path->egress_fc_enable = TB_PATH_NONE;
 	path->egress_shared_buffer = TB_PATH_NONE;
@@ -625,16 +746,15 @@ static void tb_dp_init_video_path(struct tb_path *path, bool discover)
 	path->priority = 1;
 	path->weight = 1;

-	if (discover) {
-		path->nfc_credits = nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
-	} else {
-		u32 max_credits;
+	tb_path_for_each_hop(path, hop) {
+		int ret;

-		max_credits = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
-			ADP_CS_4_TOTAL_BUFFERS_SHIFT;
-		/* Leave some credits for AUX path */
-		path->nfc_credits = min(max_credits - 2, 12U);
+		ret = tb_dp_init_video_credits(hop);
+		if (ret)
+			return ret;
 	}
+
+	return 0;
 }

 /**
@@ -674,7 +794,8 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
 		goto err_free;
 	}
 	tunnel->paths[TB_DP_VIDEO_PATH_OUT] = path;
-	tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT], true);
+	if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT]))
+		goto err_free;

 	path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX");
 	if (!path)
@@ -761,7 +882,7 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
 			     1, "Video");
 	if (!path)
 		goto err_free;
-	tb_dp_init_video_path(path, false);
+	tb_dp_init_video_path(path);
 	paths[TB_DP_VIDEO_PATH_OUT] = path;

 	path = tb_path_alloc(tb, in, TB_DP_AUX_TX_HOPID, out,
@@ -785,20 +906,58 @@ err_free:
 	return NULL;
 }

-static u32 tb_dma_credits(struct tb_port *nhi)
+static unsigned int tb_dma_available_credits(const struct tb_port *port)
 {
-	u32 max_credits;
+	const struct tb_switch *sw = port->sw;
+	int credits;

-	max_credits = (nhi->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
-		ADP_CS_4_TOTAL_BUFFERS_SHIFT;
-	return min(max_credits, 13U);
+	credits = tb_available_credits(port, NULL);
+	if (tb_acpi_may_tunnel_pcie())
+		credits -= sw->max_pcie_credits;
+	credits -= port->dma_credits;
+
+	return credits > 0 ? credits : 0;
 }

-static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits)
+static int tb_dma_reserve_credits(struct tb_path_hop *hop, unsigned int credits)
 {
-	int i;
+	struct tb_port *port = hop->in_port;

-	path->egress_fc_enable = efc;
+	if (tb_port_use_credit_allocation(port)) {
+		unsigned int available = tb_dma_available_credits(port);
+
+		/*
+		 * Need to have at least TB_MIN_DMA_CREDITS, otherwise
+		 * DMA path cannot be established.
+		 */
+		if (available < TB_MIN_DMA_CREDITS)
+			return -ENOSPC;
+
+		while (credits > available)
+			credits--;
+
+		tb_port_dbg(port, "reserving %u credits for DMA path\n",
+			    credits);
+
+		port->dma_credits += credits;
+	} else {
+		if (tb_port_is_null(port))
+			credits = port->bonded ? 14 : 6;
+		else
+			credits = min(port->total_credits, credits);
+	}
+
+	hop->initial_credits = credits;
+	return 0;
+}
+
+/* Path from lane adapter to NHI */
+static int tb_dma_init_rx_path(struct tb_path *path, unsigned int credits)
+{
+	struct tb_path_hop *hop;
+	unsigned int i, tmp;
+
+	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
 	path->ingress_fc_enable = TB_PATH_ALL;
 	path->egress_shared_buffer = TB_PATH_NONE;
 	path->ingress_shared_buffer = TB_PATH_NONE;
@@ -806,8 +965,80 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits
 	path->weight = 1;
 	path->clear_fc = true;

-	for (i = 0; i < path->path_length; i++)
-		path->hops[i].initial_credits = credits;
+	/*
+	 * First lane adapter is the one connected to the remote host.
+	 * We don't tunnel other traffic over this link so can use all
+	 * the credits (except the ones reserved for control traffic).
+	 */
+	hop = &path->hops[0];
+	tmp = min(tb_usable_credits(hop->in_port), credits);
+	hop->initial_credits = tmp;
+	hop->in_port->dma_credits += tmp;
+
+	for (i = 1; i < path->path_length; i++) {
+		int ret;
+
+		ret = tb_dma_reserve_credits(&path->hops[i], credits);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }

+/* Path from NHI to lane adapter */
+static int tb_dma_init_tx_path(struct tb_path *path, unsigned int credits)
+{
+	struct tb_path_hop *hop;
+
+	path->egress_fc_enable = TB_PATH_ALL;
+	path->ingress_fc_enable = TB_PATH_ALL;
+	path->egress_shared_buffer = TB_PATH_NONE;
+	path->ingress_shared_buffer = TB_PATH_NONE;
+	path->priority = 5;
+	path->weight = 1;
+	path->clear_fc = true;
+
+	tb_path_for_each_hop(path, hop) {
+		int ret;
+
+		ret = tb_dma_reserve_credits(hop, credits);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void tb_dma_release_credits(struct tb_path_hop *hop)
+{
+	struct tb_port *port = hop->in_port;
+
+	if (tb_port_use_credit_allocation(port)) {
+		port->dma_credits -= hop->initial_credits;
+
+		tb_port_dbg(port, "released %u DMA path credits\n",
+			    hop->initial_credits);
+	}
+}
+
+static void tb_dma_deinit_path(struct tb_path *path)
+{
+	struct tb_path_hop *hop;
+
+	tb_path_for_each_hop(path, hop)
+		tb_dma_release_credits(hop);
+}
+
+static void tb_dma_deinit(struct tb_tunnel *tunnel)
+{
+	int i;
+
+	for (i = 0; i < tunnel->npaths; i++) {
+		if (!tunnel->paths[i])
+			continue;
+		tb_dma_deinit_path(tunnel->paths[i]);
+	}
+}
+
 /**
@@ -832,7 +1063,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
 	struct tb_tunnel *tunnel;
 	size_t npaths = 0, i = 0;
 	struct tb_path *path;
-	u32 credits;
+	int credits;

 	if (receive_ring > 0)
 		npaths++;
@@ -848,32 +1079,39 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,

 	tunnel->src_port = nhi;
 	tunnel->dst_port = dst;
+	tunnel->deinit = tb_dma_deinit;

-	credits = tb_dma_credits(nhi);
+	credits = min_not_zero(TB_DMA_CREDITS, nhi->sw->max_dma_credits);

 	if (receive_ring > 0) {
 		path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
 				     "DMA RX");
-		if (!path) {
-			tb_tunnel_free(tunnel);
-			return NULL;
-		}
-		tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits);
+		if (!path)
+			goto err_free;
 		tunnel->paths[i++] = path;
+		if (tb_dma_init_rx_path(path, credits)) {
+			tb_tunnel_dbg(tunnel, "not enough buffers for RX path\n");
+			goto err_free;
+		}
 	}

 	if (transmit_ring > 0) {
 		path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
 				     "DMA TX");
-		if (!path) {
-			tb_tunnel_free(tunnel);
-			return NULL;
-		}
-		tb_dma_init_path(path, TB_PATH_ALL, credits);
+		if (!path)
+			goto err_free;
 		tunnel->paths[i++] = path;
+		if (tb_dma_init_tx_path(path, credits)) {
+			tb_tunnel_dbg(tunnel, "not enough buffers for TX path\n");
+			goto err_free;
+		}
 	}

 	return tunnel;
+
+err_free:
+	tb_tunnel_free(tunnel);
+	return NULL;
 }

 /**
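The DMA hunk above caps the per-path credits with `min_not_zero(TB_DMA_CREDITS, nhi->sw->max_dma_credits)`: the kernel's `min_not_zero()` takes the smaller of two values but treats a zero argument as "no limit", which matters here because a router may not report baMaxHI at all. A minimal userspace sketch of that semantic (the function name is a made-up stand-in, not the kernel macro):

```c
#include <assert.h>

/*
 * Userspace sketch of the min_not_zero() semantics used above:
 * the smaller of the two values, except that a zero argument
 * means "no limit" and is ignored.
 */
static unsigned int min_not_zero_u(unsigned int x, unsigned int y)
{
	if (x == 0)
		return y;
	if (y == 0)
		return x;
	return x < y ? x : y;
}
```

So with `TB_DMA_CREDITS` of 14 and an unreported (zero) `max_dma_credits`, the cap stays at 14; a reported limit of 10 lowers it to 10.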
@@ -1067,8 +1305,28 @@ static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
 		      tunnel->allocated_up, tunnel->allocated_down);
 }

+static void tb_usb3_init_credits(struct tb_path_hop *hop)
+{
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+	unsigned int credits;
+
+	if (tb_port_use_credit_allocation(port)) {
+		credits = sw->max_usb3_credits;
+	} else {
+		if (tb_port_is_null(port))
+			credits = port->bonded ? 32 : 16;
+		else
+			credits = 7;
+	}
+
+	hop->initial_credits = credits;
+}
+
 static void tb_usb3_init_path(struct tb_path *path)
 {
+	struct tb_path_hop *hop;
+
 	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
 	path->egress_shared_buffer = TB_PATH_NONE;
 	path->ingress_fc_enable = TB_PATH_ALL;
@@ -1076,11 +1334,9 @@ static void tb_usb3_init_path(struct tb_path *path)
 	path->priority = 3;
 	path->weight = 3;
 	path->drop_packages = 0;
-	path->nfc_credits = 0;
-	path->hops[0].initial_credits = 7;
-	if (path->path_length > 1)
-		path->hops[1].initial_credits =
-			tb_initial_credits(path->hops[1].in_port->sw);
+
+	tb_path_for_each_hop(path, hop)
+		tb_usb3_init_credits(hop);
 }

 /**
@@ -1280,6 +1536,9 @@ void tb_tunnel_free(struct tb_tunnel *tunnel)
 	if (!tunnel)
 		return;

+	if (tunnel->deinit)
+		tunnel->deinit(tunnel);
+
 	for (i = 0; i < tunnel->npaths; i++) {
 		if (tunnel->paths[i])
 			tb_path_free(tunnel->paths[i]);
@@ -27,6 +27,7 @@ enum tb_tunnel_type {
  * @paths: All paths required by the tunnel
  * @npaths: Number of paths in @paths
  * @init: Optional tunnel specific initialization
+ * @deinit: Optional tunnel specific de-initialization
  * @activate: Optional tunnel specific activation/deactivation
  * @consumed_bandwidth: Return how much bandwidth the tunnel consumes
  * @release_unused_bandwidth: Release all unused bandwidth

@@ -47,6 +48,7 @@ struct tb_tunnel {
 	struct tb_path **paths;
 	size_t npaths;
 	int (*init)(struct tb_tunnel *tunnel);
+	void (*deinit)(struct tb_tunnel *tunnel);
 	int (*activate)(struct tb_tunnel *tunnel, bool activate);
 	int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
 				  int *consumed_down);
@@ -13,7 +13,6 @@
 #include "sb_regs.h"
 #include "tb.h"

-#define USB4_DATA_DWORDS		16
 #define USB4_DATA_RETRIES		3

 enum usb4_sb_target {

@@ -37,8 +36,19 @@ enum usb4_sb_target {

 #define USB4_NVM_SECTOR_SIZE_MASK	GENMASK(23, 0)

-typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
-typedef int (*write_block_fn)(void *, const void *, size_t);
+#define USB4_BA_LENGTH_MASK		GENMASK(7, 0)
+#define USB4_BA_INDEX_MASK		GENMASK(15, 0)
+
+enum usb4_ba_index {
+	USB4_BA_MAX_USB3 = 0x1,
+	USB4_BA_MIN_DP_AUX = 0x2,
+	USB4_BA_MIN_DP_MAIN = 0x3,
+	USB4_BA_MAX_PCIE = 0x4,
+	USB4_BA_MAX_HI = 0x5,
+};
+
+#define USB4_BA_VALUE_MASK		GENMASK(31, 16)
+#define USB4_BA_VALUE_SHIFT		16

 static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
 				    u32 value, int timeout_msec)
@@ -62,76 +72,6 @@ static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
 	return -ETIMEDOUT;
 }

-static int usb4_do_read_data(u16 address, void *buf, size_t size,
-			     read_block_fn read_block, void *read_block_data)
-{
-	unsigned int retries = USB4_DATA_RETRIES;
-	unsigned int offset;
-
-	do {
-		unsigned int dwaddress, dwords;
-		u8 data[USB4_DATA_DWORDS * 4];
-		size_t nbytes;
-		int ret;
-
-		offset = address & 3;
-		nbytes = min_t(size_t, size + offset, USB4_DATA_DWORDS * 4);
-
-		dwaddress = address / 4;
-		dwords = ALIGN(nbytes, 4) / 4;
-
-		ret = read_block(read_block_data, dwaddress, data, dwords);
-		if (ret) {
-			if (ret != -ENODEV && retries--)
-				continue;
-			return ret;
-		}
-
-		nbytes -= offset;
-		memcpy(buf, data + offset, nbytes);
-
-		size -= nbytes;
-		address += nbytes;
-		buf += nbytes;
-	} while (size > 0);
-
-	return 0;
-}
-
-static int usb4_do_write_data(unsigned int address, const void *buf, size_t size,
-			      write_block_fn write_next_block, void *write_block_data)
-{
-	unsigned int retries = USB4_DATA_RETRIES;
-	unsigned int offset;
-
-	offset = address & 3;
-	address = address & ~3;
-
-	do {
-		u32 nbytes = min_t(u32, size, USB4_DATA_DWORDS * 4);
-		u8 data[USB4_DATA_DWORDS * 4];
-		int ret;
-
-		memcpy(data + offset, buf, nbytes);
-
-		ret = write_next_block(write_block_data, data, nbytes / 4);
-		if (ret) {
-			if (ret == -ETIMEDOUT) {
-				if (retries--)
-					continue;
-				ret = -EIO;
-			}
-			return ret;
-		}
-
-		size -= nbytes;
-		address += nbytes;
-		buf += nbytes;
-	} while (size > 0);
-
-	return 0;
-}
-
 static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode,
 				 u32 *metadata, u8 *status,
 				 const void *tx_data, size_t tx_dwords,
@@ -193,7 +133,7 @@ static int __usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata,
 {
 	const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;

-	if (tx_dwords > USB4_DATA_DWORDS || rx_dwords > USB4_DATA_DWORDS)
+	if (tx_dwords > NVM_DATA_DWORDS || rx_dwords > NVM_DATA_DWORDS)
 		return -EINVAL;

 	/*
@@ -320,7 +260,7 @@ int usb4_switch_setup(struct tb_switch *sw)
 	parent = tb_switch_parent(sw);
 	downstream_port = tb_port_at(tb_route(sw), parent);
 	sw->link_usb4 = link_is_usb4(downstream_port);
-	tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT3");
+	tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT");

 	xhci = val & ROUTER_CS_6_HCI;
 	tbt3 = !(val & ROUTER_CS_6_TNS);
@@ -414,8 +354,8 @@ static int usb4_switch_drom_read_block(void *data,
 int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
 			  size_t size)
 {
-	return usb4_do_read_data(address, buf, size,
-				 usb4_switch_drom_read_block, sw);
+	return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES,
+				usb4_switch_drom_read_block, sw);
 }

 /**
@@ -473,12 +413,18 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
 
 		val &= ~(PORT_CS_19_WOC | PORT_CS_19_WOD | PORT_CS_19_WOU4);
 
-		if (flags & TB_WAKE_ON_CONNECT)
-			val |= PORT_CS_19_WOC;
-		if (flags & TB_WAKE_ON_DISCONNECT)
-			val |= PORT_CS_19_WOD;
-		if (flags & TB_WAKE_ON_USB4)
-			val |= PORT_CS_19_WOU4;
+		if (tb_is_upstream_port(port)) {
+			val |= PORT_CS_19_WOU4;
+		} else {
+			bool configured = val & PORT_CS_19_PC;
+
+			if ((flags & TB_WAKE_ON_CONNECT) && !configured)
+				val |= PORT_CS_19_WOC;
+			if ((flags & TB_WAKE_ON_DISCONNECT) && configured)
+				val |= PORT_CS_19_WOD;
+			if ((flags & TB_WAKE_ON_USB4) && configured)
+				val |= PORT_CS_19_WOU4;
+		}
 
 		ret = tb_port_write(port, &val, TB_CFG_PORT,
 				    port->cap_usb4 + PORT_CS_19, 1);
@@ -487,7 +433,7 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
 	}
 
 	/*
-	 * Enable wakes from PCIe and USB 3.x on this router. Only
+	 * Enable wakes from PCIe, USB 3.x and DP on this router. Only
 	 * needed for device routers.
 	 */
 	if (route) {
@@ -495,11 +441,13 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
 		if (ret)
 			return ret;
 
-		val &= ~(ROUTER_CS_5_WOP | ROUTER_CS_5_WOU);
+		val &= ~(ROUTER_CS_5_WOP | ROUTER_CS_5_WOU | ROUTER_CS_5_WOD);
 		if (flags & TB_WAKE_ON_USB3)
 			val |= ROUTER_CS_5_WOU;
 		if (flags & TB_WAKE_ON_PCIE)
 			val |= ROUTER_CS_5_WOP;
+		if (flags & TB_WAKE_ON_DP)
+			val |= ROUTER_CS_5_WOD;
 
 		ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
 		if (ret)
@@ -595,12 +543,21 @@ static int usb4_switch_nvm_read_block(void *data,
 int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
 			 size_t size)
 {
-	return usb4_do_read_data(address, buf, size,
-				 usb4_switch_nvm_read_block, sw);
+	return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES,
+				usb4_switch_nvm_read_block, sw);
 }
 
-static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
-				      unsigned int address)
+/**
+ * usb4_switch_nvm_set_offset() - Set NVM write offset
+ * @sw: USB4 router
+ * @address: Start offset
+ *
+ * Explicitly sets NVM write offset. Normally when writing to NVM this
+ * is done automatically by usb4_switch_nvm_write().
+ *
+ * Returns %0 in success and negative errno if there was a failure.
+ */
+int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address)
 {
 	u32 metadata, dwaddress;
 	u8 status = 0;
@@ -618,8 +575,8 @@ static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
 	return status ? -EIO : 0;
 }
 
-static int usb4_switch_nvm_write_next_block(void *data, const void *buf,
-					    size_t dwords)
+static int usb4_switch_nvm_write_next_block(void *data, unsigned int dwaddress,
+					    const void *buf, size_t dwords)
 {
 	struct tb_switch *sw = data;
 	u8 status;
@@ -652,8 +609,8 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
 	if (ret)
 		return ret;
 
-	return usb4_do_write_data(address, buf, size,
-				  usb4_switch_nvm_write_next_block, sw);
+	return tb_nvm_write_data(address, buf, size, USB4_DATA_RETRIES,
+				 usb4_switch_nvm_write_next_block, sw);
 }
 
 /**
@@ -735,6 +692,147 @@ int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status)
 	return 0;
 }
 
+/**
+ * usb4_switch_credits_init() - Read buffer allocation parameters
+ * @sw: USB4 router
+ *
+ * Reads @sw buffer allocation parameters and initializes @sw buffer
+ * allocation fields accordingly. Specifically @sw->credit_allocation
+ * is set to %true if these parameters can be used in tunneling.
+ *
+ * Returns %0 on success and negative errno otherwise.
+ */
+int usb4_switch_credits_init(struct tb_switch *sw)
+{
+	int max_usb3, min_dp_aux, min_dp_main, max_pcie, max_dma;
+	int ret, length, i, nports;
+	const struct tb_port *port;
+	u32 data[NVM_DATA_DWORDS];
+	u32 metadata = 0;
+	u8 status = 0;
+
+	memset(data, 0, sizeof(data));
+	ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_BUFFER_ALLOC, &metadata,
+				  &status, NULL, 0, data, ARRAY_SIZE(data));
+	if (ret)
+		return ret;
+	if (status)
+		return -EIO;
+
+	length = metadata & USB4_BA_LENGTH_MASK;
+	if (WARN_ON(length > ARRAY_SIZE(data)))
+		return -EMSGSIZE;
+
+	max_usb3 = -1;
+	min_dp_aux = -1;
+	min_dp_main = -1;
+	max_pcie = -1;
+	max_dma = -1;
+
+	tb_sw_dbg(sw, "credit allocation parameters:\n");
+
+	for (i = 0; i < length; i++) {
+		u16 index, value;
+
+		index = data[i] & USB4_BA_INDEX_MASK;
+		value = (data[i] & USB4_BA_VALUE_MASK) >> USB4_BA_VALUE_SHIFT;
+
+		switch (index) {
+		case USB4_BA_MAX_USB3:
+			tb_sw_dbg(sw, " USB3: %u\n", value);
+			max_usb3 = value;
+			break;
+		case USB4_BA_MIN_DP_AUX:
+			tb_sw_dbg(sw, " DP AUX: %u\n", value);
+			min_dp_aux = value;
+			break;
+		case USB4_BA_MIN_DP_MAIN:
+			tb_sw_dbg(sw, " DP main: %u\n", value);
+			min_dp_main = value;
+			break;
+		case USB4_BA_MAX_PCIE:
+			tb_sw_dbg(sw, " PCIe: %u\n", value);
+			max_pcie = value;
+			break;
+		case USB4_BA_MAX_HI:
+			tb_sw_dbg(sw, " DMA: %u\n", value);
+			max_dma = value;
+			break;
+		default:
+			tb_sw_dbg(sw, " unknown credit allocation index %#x, skipping\n",
+				  index);
+			break;
+		}
+	}
+
+	/*
+	 * Validate the buffer allocation preferences. If we find
+	 * issues, log a warning and fall back using the hard-coded
+	 * values.
+	 */
+
+	/* Host router must report baMaxHI */
+	if (!tb_route(sw) && max_dma < 0) {
+		tb_sw_warn(sw, "host router is missing baMaxHI\n");
+		goto err_invalid;
+	}
+
+	nports = 0;
+	tb_switch_for_each_port(sw, port) {
+		if (tb_port_is_null(port))
+			nports++;
+	}
+
+	/* Must have DP buffer allocation (multiple USB4 ports) */
+	if (nports > 2 && (min_dp_aux < 0 || min_dp_main < 0)) {
+		tb_sw_warn(sw, "multiple USB4 ports require baMinDPaux/baMinDPmain\n");
+		goto err_invalid;
+	}
+
+	tb_switch_for_each_port(sw, port) {
+		if (tb_port_is_dpout(port) && min_dp_main < 0) {
+			tb_sw_warn(sw, "missing baMinDPmain");
+			goto err_invalid;
+		}
+		if ((tb_port_is_dpin(port) || tb_port_is_dpout(port)) &&
+		    min_dp_aux < 0) {
+			tb_sw_warn(sw, "missing baMinDPaux");
+			goto err_invalid;
+		}
+		if ((tb_port_is_usb3_down(port) || tb_port_is_usb3_up(port)) &&
+		    max_usb3 < 0) {
+			tb_sw_warn(sw, "missing baMaxUSB3");
+			goto err_invalid;
+		}
+		if ((tb_port_is_pcie_down(port) || tb_port_is_pcie_up(port)) &&
+		    max_pcie < 0) {
+			tb_sw_warn(sw, "missing baMaxPCIe");
+			goto err_invalid;
+		}
+	}
+
+	/*
+	 * Buffer allocation passed the validation so we can use it in
+	 * path creation.
+	 */
+	sw->credit_allocation = true;
+	if (max_usb3 > 0)
+		sw->max_usb3_credits = max_usb3;
+	if (min_dp_aux > 0)
+		sw->min_dp_aux_credits = min_dp_aux;
+	if (min_dp_main > 0)
+		sw->min_dp_main_credits = min_dp_main;
+	if (max_pcie > 0)
+		sw->max_pcie_credits = max_pcie;
+	if (max_dma > 0)
+		sw->max_dma_credits = max_dma;
+
+	return 0;
+
+err_invalid:
+	return -EINVAL;
+}
+
 /**
  * usb4_switch_query_dp_resource() - Query availability of DP IN resource
  * @sw: USB4 router
@@ -896,6 +994,60 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
 	return NULL;
 }
 
+/**
+ * usb4_switch_add_ports() - Add USB4 ports for this router
+ * @sw: USB4 router
+ *
+ * For a USB4 router, finds all USB4 ports and registers a device for
+ * each. Can be called for any router.
+ *
+ * Returns %0 in case of success and negative errno in case of failure.
+ */
+int usb4_switch_add_ports(struct tb_switch *sw)
+{
+	struct tb_port *port;
+
+	if (tb_switch_is_icm(sw) || !tb_switch_is_usb4(sw))
+		return 0;
+
+	tb_switch_for_each_port(sw, port) {
+		struct usb4_port *usb4;
+
+		if (!tb_port_is_null(port))
+			continue;
+		if (!port->cap_usb4)
+			continue;
+
+		usb4 = usb4_port_device_add(port);
+		if (IS_ERR(usb4)) {
+			usb4_switch_remove_ports(sw);
+			return PTR_ERR(usb4);
+		}
+
+		port->usb4 = usb4;
+	}
+
+	return 0;
+}
+
+/**
+ * usb4_switch_remove_ports() - Removes USB4 ports from this router
+ * @sw: USB4 router
+ *
+ * Unregisters previously registered USB4 ports.
+ */
+void usb4_switch_remove_ports(struct tb_switch *sw)
+{
+	struct tb_port *port;
+
+	tb_switch_for_each_port(sw, port) {
+		if (port->usb4) {
+			usb4_port_device_remove(port->usb4);
+			port->usb4 = NULL;
+		}
+	}
+}
+
 /**
  * usb4_port_unlock() - Unlock USB4 downstream port
  * @port: USB4 port to unlock
@@ -1029,7 +1181,7 @@ static int usb4_port_wait_for_bit(struct tb_port *port, u32 offset, u32 bit,
 
 static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
 {
-	if (dwords > USB4_DATA_DWORDS)
+	if (dwords > NVM_DATA_DWORDS)
 		return -EINVAL;
 
 	return tb_port_read(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@@ -1039,7 +1191,7 @@ static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
 static int usb4_port_write_data(struct tb_port *port, const void *data,
 				size_t dwords)
 {
-	if (dwords > USB4_DATA_DWORDS)
+	if (dwords > NVM_DATA_DWORDS)
 		return -EINVAL;
 
 	return tb_port_write(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@@ -1175,6 +1327,48 @@ static int usb4_port_sb_op(struct tb_port *port, enum usb4_sb_target target,
 	return -ETIMEDOUT;
 }
 
+static int usb4_port_set_router_offline(struct tb_port *port, bool offline)
+{
+	u32 val = !offline;
+	int ret;
+
+	ret = usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0,
+				 USB4_SB_METADATA, &val, sizeof(val));
+	if (ret)
+		return ret;
+
+	val = USB4_SB_OPCODE_ROUTER_OFFLINE;
+	return usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0,
+				 USB4_SB_OPCODE, &val, sizeof(val));
+}
+
+/**
+ * usb4_port_router_offline() - Put the USB4 port to offline mode
+ * @port: USB4 port
+ *
+ * This function puts the USB4 port into offline mode. In this mode the
+ * port does not react to hotplug events anymore. This needs to be
+ * called before retimer access is done when the USB4 link is not up.
+ *
+ * Returns %0 in case of success and negative errno if there was an
+ * error.
+ */
+int usb4_port_router_offline(struct tb_port *port)
+{
+	return usb4_port_set_router_offline(port, true);
+}
+
+/**
+ * usb4_port_router_online() - Put the USB4 port back to online
+ * @port: USB4 port
+ *
+ * Makes the USB4 port functional again.
+ */
+int usb4_port_router_online(struct tb_port *port)
+{
+	return usb4_port_set_router_offline(port, false);
+}
+
 /**
  * usb4_port_enumerate_retimers() - Send RT broadcast transaction
  * @port: USB4 port
@@ -1200,6 +1394,33 @@ static inline int usb4_port_retimer_op(struct tb_port *port, u8 index,
 			 timeout_msec);
 }
 
+/**
+ * usb4_port_retimer_set_inbound_sbtx() - Enable sideband channel transactions
+ * @port: USB4 port
+ * @index: Retimer index
+ *
+ * Enables sideband channel transactions on SBTX. Can be used when the
+ * USB4 link does not go up, for example if there is no device connected.
+ */
+int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index)
+{
+	int ret;
+
+	ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_SET_INBOUND_SBTX,
+				   500);
+
+	if (ret != -ENODEV)
+		return ret;
+
+	/*
+	 * Per the USB4 retimer spec, the retimer is not required to
+	 * send an RT (Retimer Transaction) response for the first
+	 * SET_INBOUND_SBTX command.
+	 */
+	return usb4_port_retimer_op(port, index, USB4_SB_OPCODE_SET_INBOUND_SBTX,
+				    500);
+}
+
 /**
  * usb4_port_retimer_read() - Read from retimer sideband registers
  * @port: USB4 port
@@ -1292,8 +1513,19 @@ int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index)
 	return ret ? ret : metadata & USB4_NVM_SECTOR_SIZE_MASK;
 }
 
-static int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
-					    unsigned int address)
+/**
+ * usb4_port_retimer_nvm_set_offset() - Set NVM write offset
+ * @port: USB4 port
+ * @index: Retimer index
+ * @address: Start offset
+ *
+ * Explicitly sets NVM write offset. Normally when writing to NVM this is
+ * done automatically by usb4_port_retimer_nvm_write().
+ *
+ * Returns %0 in success and negative errno if there was a failure.
+ */
+int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
+				     unsigned int address)
 {
 	u32 metadata, dwaddress;
 	int ret;
@@ -1316,8 +1548,8 @@ struct retimer_info {
 	u8 index;
 };
 
-static int usb4_port_retimer_nvm_write_next_block(void *data, const void *buf,
-						  size_t dwords)
+static int usb4_port_retimer_nvm_write_next_block(void *data,
+	unsigned int dwaddress, const void *buf, size_t dwords)
 {
 	const struct retimer_info *info = data;
 
@@ -1357,8 +1589,8 @@ int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index, unsigned int add
 	if (ret)
 		return ret;
 
-	return usb4_do_write_data(address, buf, size,
-				  usb4_port_retimer_nvm_write_next_block, &info);
+	return tb_nvm_write_data(address, buf, size, USB4_DATA_RETRIES,
+				 usb4_port_retimer_nvm_write_next_block, &info);
 }
 
 /**
@@ -1442,7 +1674,7 @@ static int usb4_port_retimer_nvm_read_block(void *data, unsigned int dwaddress,
 	int ret;
 
 	metadata = dwaddress << USB4_NVM_READ_OFFSET_SHIFT;
-	if (dwords < USB4_DATA_DWORDS)
+	if (dwords < NVM_DATA_DWORDS)
 		metadata |= dwords << USB4_NVM_READ_LENGTH_SHIFT;
 
 	ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
@@ -1475,8 +1707,8 @@ int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
 {
 	struct retimer_info info = { .port = port, .index = index };
 
-	return usb4_do_read_data(address, buf, size,
-				 usb4_port_retimer_nvm_read_block, &info);
+	return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES,
+				usb4_port_retimer_nvm_read_block, &info);
 }
 
 /**
 drivers/thunderbolt/usb4_port.c | 280 lines (new file)

@@ -0,0 +1,280 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * USB4 port device
+ *
+ * Copyright (C) 2021, Intel Corporation
+ * Author: Mika Westerberg <mika.westerberg@linux.intel.com>
+ */
+
+#include <linux/pm_runtime.h>
+
+#include "tb.h"
+
+static ssize_t link_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
+	struct tb_port *port = usb4->port;
+	struct tb *tb = port->sw->tb;
+	const char *link;
+
+	if (mutex_lock_interruptible(&tb->lock))
+		return -ERESTARTSYS;
+
+	if (tb_is_upstream_port(port))
+		link = port->sw->link_usb4 ? "usb4" : "tbt";
+	else if (tb_port_has_remote(port))
+		link = port->remote->sw->link_usb4 ? "usb4" : "tbt";
+	else
+		link = "none";
+
+	mutex_unlock(&tb->lock);
+
+	return sysfs_emit(buf, "%s\n", link);
+}
+static DEVICE_ATTR_RO(link);
+
+static struct attribute *common_attrs[] = {
+	&dev_attr_link.attr,
+	NULL
+};
+
+static const struct attribute_group common_group = {
+	.attrs = common_attrs,
+};
+
+static int usb4_port_offline(struct usb4_port *usb4)
+{
+	struct tb_port *port = usb4->port;
+	int ret;
+
+	ret = tb_acpi_power_on_retimers(port);
+	if (ret)
+		return ret;
+
+	ret = usb4_port_router_offline(port);
+	if (ret) {
+		tb_acpi_power_off_retimers(port);
+		return ret;
+	}
+
+	ret = tb_retimer_scan(port, false);
+	if (ret) {
+		usb4_port_router_online(port);
+		tb_acpi_power_off_retimers(port);
+	}
+
+	return ret;
+}
+
+static void usb4_port_online(struct usb4_port *usb4)
+{
+	struct tb_port *port = usb4->port;
+
+	usb4_port_router_online(port);
+	tb_acpi_power_off_retimers(port);
+}
+
+static ssize_t offline_show(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
+
+	return sysfs_emit(buf, "%d\n", usb4->offline);
+}
+
+static ssize_t offline_store(struct device *dev,
+			     struct device_attribute *attr, const char *buf,
+			     size_t count)
+{
+	struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
+	struct tb_port *port = usb4->port;
+	struct tb *tb = port->sw->tb;
+	bool val;
+	int ret;
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		return ret;
+
+	pm_runtime_get_sync(&usb4->dev);
+
+	if (mutex_lock_interruptible(&tb->lock)) {
+		ret = -ERESTARTSYS;
+		goto out_rpm;
+	}
+
+	if (val == usb4->offline)
+		goto out_unlock;
+
+	/* Offline mode works only for ports that are not connected */
+	if (tb_port_has_remote(port)) {
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+
+	if (val) {
+		ret = usb4_port_offline(usb4);
+		if (ret)
+			goto out_unlock;
+	} else {
+		usb4_port_online(usb4);
+		tb_retimer_remove_all(port);
+	}
+
+	usb4->offline = val;
+	tb_port_dbg(port, "%s offline mode\n", val ? "enter" : "exit");
+
+out_unlock:
+	mutex_unlock(&tb->lock);
+out_rpm:
+	pm_runtime_mark_last_busy(&usb4->dev);
+	pm_runtime_put_autosuspend(&usb4->dev);
+
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(offline);
+
+static ssize_t rescan_store(struct device *dev,
+			    struct device_attribute *attr, const char *buf,
+			    size_t count)
+{
+	struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
+	struct tb_port *port = usb4->port;
+	struct tb *tb = port->sw->tb;
+	bool val;
+	int ret;
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		return ret;
+
+	if (!val)
+		return count;
+
+	pm_runtime_get_sync(&usb4->dev);
+
+	if (mutex_lock_interruptible(&tb->lock)) {
+		ret = -ERESTARTSYS;
+		goto out_rpm;
+	}
+
+	/* Must be in offline mode already */
+	if (!usb4->offline) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	tb_retimer_remove_all(port);
+	ret = tb_retimer_scan(port, true);
+
+out_unlock:
+	mutex_unlock(&tb->lock);
+out_rpm:
+	pm_runtime_mark_last_busy(&usb4->dev);
+	pm_runtime_put_autosuspend(&usb4->dev);
+
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_WO(rescan);
+
+static struct attribute *service_attrs[] = {
+	&dev_attr_offline.attr,
+	&dev_attr_rescan.attr,
+	NULL
+};
+
+static umode_t service_attr_is_visible(struct kobject *kobj,
+				       struct attribute *attr, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
+
+	/*
+	 * Always need some platform help to cycle the modes so that
+	 * retimers can be accessed through the sideband.
+	 */
+	return usb4->can_offline ? attr->mode : 0;
+}
+
+static const struct attribute_group service_group = {
+	.attrs = service_attrs,
+	.is_visible = service_attr_is_visible,
+};
+
+static const struct attribute_group *usb4_port_device_groups[] = {
+	&common_group,
+	&service_group,
+	NULL
+};
+
+static void usb4_port_device_release(struct device *dev)
+{
+	struct usb4_port *usb4 = container_of(dev, struct usb4_port, dev);
+
+	kfree(usb4);
+}
+
+struct device_type usb4_port_device_type = {
+	.name = "usb4_port",
+	.groups = usb4_port_device_groups,
+	.release = usb4_port_device_release,
+};
+
+/**
+ * usb4_port_device_add() - Add USB4 port device
+ * @port: Lane 0 adapter port to add the USB4 port
+ *
+ * Creates and registers a USB4 port device for @port. Returns the new
+ * USB4 port device pointer or ERR_PTR() in case of error.
+ */
+struct usb4_port *usb4_port_device_add(struct tb_port *port)
+{
+	struct usb4_port *usb4;
+	int ret;
+
+	usb4 = kzalloc(sizeof(*usb4), GFP_KERNEL);
+	if (!usb4)
+		return ERR_PTR(-ENOMEM);
+
+	usb4->port = port;
+	usb4->dev.type = &usb4_port_device_type;
+	usb4->dev.parent = &port->sw->dev;
+	dev_set_name(&usb4->dev, "usb4_port%d", port->port);
+
+	ret = device_register(&usb4->dev);
+	if (ret) {
+		put_device(&usb4->dev);
+		return ERR_PTR(ret);
+	}
+
+	pm_runtime_no_callbacks(&usb4->dev);
+	pm_runtime_set_active(&usb4->dev);
+	pm_runtime_enable(&usb4->dev);
+	pm_runtime_set_autosuspend_delay(&usb4->dev, TB_AUTOSUSPEND_DELAY);
+	pm_runtime_mark_last_busy(&usb4->dev);
+	pm_runtime_use_autosuspend(&usb4->dev);
+
+	return usb4;
+}
+
+/**
+ * usb4_port_device_remove() - Removes USB4 port device
+ * @usb4: USB4 port device
+ *
+ * Unregisters the USB4 port device from the system. The device will be
+ * released when the last reference is dropped.
+ */
+void usb4_port_device_remove(struct usb4_port *usb4)
+{
+	device_unregister(&usb4->dev);
+}
+
+/**
+ * usb4_port_device_resume() - Resumes USB4 port device
+ * @usb4: USB4 port device
+ *
+ * Used to resume USB4 port device after sleep state.
+ */
+int usb4_port_device_resume(struct usb4_port *usb4)
+{
+	return usb4->offline ? usb4_port_offline(usb4) : 0;
+}
@@ -1527,6 +1527,13 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
 		return ret;
 	}
 
+	ret = tb_port_wait_for_link_width(port, 2, 100);
+	if (ret) {
+		tb_port_warn(port, "timeout enabling lane bonding\n");
+		return ret;
+	}
+
+	tb_port_update_credits(port);
 	tb_xdomain_update_link_attributes(xd);
 
 	dev_dbg(&xd->dev, "lane bonding enabled\n");
@@ -1548,7 +1555,10 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
 	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
 	if (port->dual_link_port) {
 		tb_port_lane_bonding_disable(port);
+		if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
+			tb_port_warn(port, "timeout disabling lane bonding\n");
 		tb_port_disable(port->dual_link_port);
+		tb_port_update_credits(port);
 		tb_xdomain_update_link_attributes(xd);
 
 		dev_dbg(&xd->dev, "lane bonding disabled\n");