Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ (synced 2025-04-19 20:58:31 +09:00)
SCSI misc on 20250326
Updates to the usual drivers (scsi_debug, ufs, lpfc, st, fnic, mpi3mr,
mpt3sas) and the removal of cxlflash. The only non-trivial core change
is an addition to unit attention handling to recognize UAs for power
on/reset and new media so the tape driver can use it.

Signed-off-by: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

-----BEGIN PGP SIGNATURE-----

iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCZ+RQ2yYcamFtZXMuYm90
dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishe6DAQCdW/21
S1Y6BDlJLQfpWChGv6GIzanC+5sMfylw4d6ULgEA8upOE5L3fC29IY958jXig0o1
uLjxylwYEfVLDf8gwJ0=
=mkM+
-----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "Updates to the usual drivers (scsi_debug, ufs, lpfc, st, fnic, mpi3mr,
  mpt3sas) and the removal of cxlflash. The only non-trivial core change
  is an addition to unit attention handling to recognize UAs for power
  on/reset and new media so the tape driver can use it"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (107 commits)
  scsi: st: Tighten the page format heuristics with MODE SELECT
  scsi: st: ERASE does not change tape location
  scsi: st: Fix array overflow in st_setup()
  scsi: target: tcm_loop: Fix wrong abort tag
  scsi: lpfc: Restore clearing of NLP_UNREG_INP in ndlp->nlp_flag
  scsi: hisi_sas: Fixed failure to issue vendor specific commands
  scsi: fnic: Remove unnecessary NUL-terminations
  scsi: fnic: Remove redundant flush_workqueue() calls
  scsi: core: Use a switch statement when attaching VPD pages
  scsi: ufs: renesas: Add initialization code for R-Car S4-8 ES1.2
  scsi: ufs: renesas: Add reusable functions
  scsi: ufs: renesas: Refactor 0x10ad/0x10af PHY settings
  scsi: ufs: renesas: Remove register control helper function
  scsi: ufs: renesas: Add register read to remove save/set/restore
  scsi: ufs: renesas: Replace init data by init code
  scsi: ufs: dt-bindings: renesas,ufs: Add calibration data
  scsi: mpi3mr: Task Abort EH Support
  scsi: storvsc: Don't report the host packet status as the hv status
  scsi: isci: Make most module parameters static
  scsi: megaraid_sas: Make most module parameters static
  ...
commit 2e3fcbcc3b
@ -1559,3 +1559,48 @@ Description:
		Symbol - HCMID. This file shows the UFSHCD manufacturer id.
		The Manufacturer ID is defined by JEDEC in JEDEC-JEP106.
		The file is read only.

What:		/sys/bus/platform/drivers/ufshcd/*/critical_health
What:		/sys/bus/platform/devices/*.ufs/critical_health
Date:		February 2025
Contact:	Avri Altman <avri.altman@wdc.com>
Description:	Report the number of times a critical health event has been
		reported by a UFS device. Further insight into the specific
		issue can be gained by reading one of: bPreEOLInfo,
		bDeviceLifeTimeEstA, bDeviceLifeTimeEstB,
		bWriteBoosterBufferLifeTimeEst, and bRPMBLifeTimeEst.

		The file is read only.
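Attributes such as critical_health are plain text files, so a userspace monitor can poll them with ordinary file I/O. A minimal sketch of that pattern follows; the real path comes from the `What:` patterns above, and a temporary file stands in for it here since the sysfs node only exists on a system with a UFS host controller:

```python
import os
import tempfile

def read_sysfs_int(path):
    """Read a sysfs attribute that holds a single integer value."""
    with open(path) as f:
        return int(f.read().strip())

# Stand-in for /sys/.../critical_health (assumption: the attribute
# prints one decimal integer followed by a newline, as sysfs counters do).
fd, demo = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("3\n")
assert read_sysfs_int(demo) == 3
os.remove(demo)
```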
What:		/sys/bus/platform/drivers/ufshcd/*/clkscale_enable
What:		/sys/bus/platform/devices/*.ufs/clkscale_enable
Date:		January 2025
Contact:	Ziqi Chen <quic_ziqichen@quicinc.com>
Description:
		This attribute shows whether UFS clock scaling is enabled.
		Writing 1 or 0 to it enables or disables clock scaling.

		The attribute is read/write.

What:		/sys/bus/platform/drivers/ufshcd/*/clkgate_enable
What:		/sys/bus/platform/devices/*.ufs/clkgate_enable
Date:		January 2025
Contact:	Ziqi Chen <quic_ziqichen@quicinc.com>
Description:
		This attribute shows whether UFS clock gating is enabled.
		Writing 1 or 0 to it enables or disables clock gating.

		The attribute is read/write.

What:		/sys/bus/platform/drivers/ufshcd/*/clkgate_delay_ms
What:		/sys/bus/platform/devices/*.ufs/clkgate_delay_ms
Date:		January 2025
Contact:	Ziqi Chen <quic_ziqichen@quicinc.com>
Description:
		This attribute shows and sets the number of milliseconds of
		idle time before the UFS driver starts to perform clock
		gating. This keeps the UFS controller from gating and
		ungating its clocks too frequently.

		The attribute is read/write.
@ -1,433 +0,0 @@
================================
Coherent Accelerator (CXL) Flash
================================

Introduction
============

The IBM Power architecture provides support for CAPI (Coherent
Accelerator Power Interface), which is available to certain PCIe slots
on Power 8 systems. CAPI can be thought of as a special tunneling
protocol through PCIe that allows PCIe adapters to look like special
purpose co-processors which can read or write an application's
memory and generate page faults. As a result, the host interface to
an adapter running in CAPI mode does not require the data buffers to
be mapped to the device's memory (IOMMU bypass) nor does it require
memory to be pinned.

On Linux, Coherent Accelerator (CXL) kernel services present CAPI
devices as a PCI device by implementing a virtual PCI host bridge.
This abstraction simplifies the infrastructure and programming
model, allowing for drivers to look similar to other native PCI
device drivers.

CXL provides a mechanism by which user space applications can
directly talk to a device (network or storage) bypassing the typical
kernel/device driver stack. The CXL Flash Adapter Driver enables a
user space application direct access to Flash storage.

The CXL Flash Adapter Driver is a kernel module that sits in the
SCSI stack as a low level device driver (below the SCSI disk and
protocol drivers) for the IBM CXL Flash Adapter. This driver is
responsible for the initialization of the adapter, setting up the
special path for user space access, and performing error recovery. It
communicates directly with the Flash Accelerator Functional Unit (AFU)
as described in Documentation/arch/powerpc/cxl.rst.

The cxlflash driver supports two, mutually exclusive, modes of
operation at the device (LUN) level:

    - Any flash device (LUN) can be configured to be accessed as a
      regular disk device (i.e.: /dev/sdc). This is the default mode.

    - Any flash device (LUN) can be configured to be accessed from
      user space with a special block library. This mode further
      specifies the means of accessing the device and provides for
      either raw access to the entire LUN (referred to as direct
      or physical LUN access) or access to a kernel/AFU-mediated
      partition of the LUN (referred to as virtual LUN access). The
      segmentation of a disk device into virtual LUNs is assisted
      by special translation services provided by the Flash AFU.

Overview
========

The Coherent Accelerator Interface Architecture (CAIA) introduces a
concept of a master context. A master typically has special privileges
granted to it by the kernel or hypervisor allowing it to perform AFU
wide management and control. The master may or may not be involved
directly in each user I/O, but at the minimum is involved in the
initial setup before the user application is allowed to send requests
directly to the AFU.

The CXL Flash Adapter Driver establishes a master context with the
AFU. It uses memory mapped I/O (MMIO) for this control and setup. The
Adapter Problem Space Memory Map looks like this::

             +-------------------------------+
             |    512 * 64 KB User MMIO      |
             |        (per context)          |
             |       User Accessible         |
             +-------------------------------+
             |    512 * 128 B per context    |
             |    Provisioning and Control   |
             |   Trusted Process accessible  |
             +-------------------------------+
             |         64 KB Global          |
             |   Trusted Process accessible  |
             +-------------------------------+
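The layout above implies fixed offsets within the problem space: 512 user MMIO areas of 64 KB each, followed by 512 per-context control areas of 128 B, followed by a 64 KB global area. A sketch of that arithmetic (illustrative only; the real driver derives the mapping from the hardware):

```python
USER_MMIO_SIZE = 64 * 1024   # per-context user MMIO area, from the map above
CTRL_SIZE = 128              # per-context provisioning/control area
NUM_CONTEXTS = 512

def user_mmio_offset(ctx):
    # The user-accessible region starts at offset 0 of the problem space.
    return ctx * USER_MMIO_SIZE

def ctrl_offset(ctx):
    # Control areas follow all 512 user MMIO areas.
    return NUM_CONTEXTS * USER_MMIO_SIZE + ctx * CTRL_SIZE

def global_offset():
    # The global area follows all the control areas.
    return NUM_CONTEXTS * USER_MMIO_SIZE + NUM_CONTEXTS * CTRL_SIZE
```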
This driver configures itself into the SCSI software stack as an
adapter driver. The driver is the only entity that is considered a
Trusted Process to program the Provisioning and Control and Global
areas in the MMIO Space shown above. The master context driver
discovers all LUNs attached to the CXL Flash adapter and instantiates
scsi block devices (/dev/sdb, /dev/sdc etc.) for each unique LUN
seen from each path.

Once these scsi block devices are instantiated, an application
written to a specification provided by the block library may get
access to the Flash from user space (without requiring a system call).

This master context driver also provides a series of ioctls for this
block library to enable this user space access. The driver supports
two modes for accessing the block device.

The first mode is called a virtual mode. In this mode a single scsi
block device (/dev/sdb) may be carved up into any number of distinct
virtual LUNs. The virtual LUNs may be resized as long as the sum of
the sizes of all the virtual LUNs, along with their associated
meta-data, does not exceed the physical capacity.
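The resize constraint amounts to simple accounting: a new size is accepted only if the resulting total, including the meta-data overhead, still fits within the physical capacity. A hypothetical sketch of that check (the real bookkeeping lives in the driver's per-LUN allocation table):

```python
def can_resize(vlun_sizes, vlun_id, new_size, metadata, capacity):
    """Check whether one virtual LUN can be resized to new_size.

    vlun_sizes maps virtual LUN id -> current size; metadata and
    capacity are in the same units. All values here are illustrative.
    """
    sizes = dict(vlun_sizes)
    sizes[vlun_id] = new_size
    # Sum of all virtual LUN sizes plus meta-data must not exceed
    # the physical capacity of the backing LUN.
    return sum(sizes.values()) + metadata <= capacity
```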
The second mode is called the physical mode. In this mode a single
block device (/dev/sdb) may be opened directly by the block library
and the entire space for the LUN is available to the application.

Only the physical mode provides persistence of the data, i.e. the
data written to the block device will survive application exit and
restart and also reboot. The virtual LUNs do not persist (i.e. do
not survive after the application terminates or the system reboots).


Block library API
=================

Applications intending to get access to the CXL Flash from user
space should use the block library, as it abstracts the details of
interfacing directly with the cxlflash driver that are necessary for
performing administrative actions (i.e.: setup, tear down, resize).
The block library can be thought of as a 'user' of services,
implemented as IOCTLs, that are provided by the cxlflash driver
specifically for devices (LUNs) operating in user space access
mode. While it is not a requirement that applications understand
the interface between the block library and the cxlflash driver,
a high-level overview of each supported service (IOCTL) is provided
below.

The block library can be found on GitHub:
http://github.com/open-power/capiflash

CXL Flash Driver LUN IOCTLs
===========================

Users, such as the block library, that wish to interface with a flash
device (LUN) via user space access need to use the services provided
by the cxlflash driver. As these services are implemented as ioctls,
a file descriptor handle must first be obtained in order to establish
the communication channel between a user and the kernel. This file
descriptor is obtained by opening the device special file associated
with the scsi disk device (/dev/sdb) that was created during LUN
discovery. Because of where the cxlflash driver sits within the
SCSI protocol stack, this open is not actually seen by the cxlflash
driver. Upon successful open, the user receives a file descriptor
(herein referred to as fd1) that should be used for issuing the
subsequent ioctls listed below.

The structure definitions for these IOCTLs are available in:
uapi/scsi/cxlflash_ioctl.h
DK_CXLFLASH_ATTACH
------------------

This ioctl obtains, initializes, and starts a context using the CXL
kernel services. These services specify a context id (u16) by which
to uniquely identify the context and its allocated resources. The
services additionally provide a second file descriptor (herein
referred to as fd2) that is used by the block library to initiate
memory mapped I/O (via mmap()) to the CXL flash device and poll for
completion events. This file descriptor is intentionally installed by
this driver and not the CXL kernel services to allow for intermediary
notification and access in the event of a non-user-initiated close(),
such as a killed process. This design point is described in further
detail in the description for the DK_CXLFLASH_DETACH ioctl.

There are a few important aspects regarding the "tokens" (context id
and fd2) that are provided back to the user:

    - These tokens are only valid for the process under which they
      were created. The child of a forked process cannot continue
      to use the context id or file descriptor created by its parent
      (see DK_CXLFLASH_VLUN_CLONE for further details).

    - These tokens are only valid for the lifetime of the context and
      the process under which they were created. Once either is
      destroyed, the tokens are to be considered stale and subsequent
      usage will result in errors.

    - A valid adapter file descriptor (fd2 >= 0) is only returned on
      the initial attach for a context. Subsequent attaches to an
      existing context (DK_CXLFLASH_ATTACH_REUSE_CONTEXT flag present)
      do not provide the adapter file descriptor as it was previously
      made known to the application.

    - When a context is no longer needed, the user shall detach from
      the context via the DK_CXLFLASH_DETACH ioctl. When this ioctl
      returns with a valid adapter file descriptor and the return flag
      DK_CXLFLASH_APP_CLOSE_ADAP_FD is present, the application _must_
      close the adapter file descriptor following a successful detach.

    - When this ioctl returns with a valid fd2 and the return flag
      DK_CXLFLASH_APP_CLOSE_ADAP_FD is present, the application _must_
      close fd2 in the following circumstances:

        + Following a successful detach of the last user of the context
        + Following a successful recovery on the context's original fd2
        + In the child process of a fork(), following a clone ioctl,
          on the fd2 associated with the source context

    - At any time, a close on fd2 will invalidate the tokens. Applications
      should exercise caution to only close fd2 when appropriate (outlined
      in the previous bullet) to avoid premature loss of I/O.
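The token rules above boil down to a small lifetime state machine: attach yields a (context id, fd2) pair, and any close of fd2 invalidates both tokens. A toy model of just those rules (not kernel code; the id and fd values are stand-ins):

```python
class ContextTokens:
    """Toy model of the context-id/fd2 lifetime rules."""

    def __init__(self):
        self._next_id = 0
        self._live = set()

    def attach(self):
        # Initial attach: both tokens are handed back to the caller.
        ctx_id = self._next_id
        self._next_id += 1
        self._live.add(ctx_id)
        fd2 = 100 + ctx_id        # stand-in for the adapter fd
        return ctx_id, fd2

    def close_fd2(self, ctx_id):
        # Any close on fd2 invalidates the tokens for that context.
        self._live.discard(ctx_id)

    def valid(self, ctx_id):
        return ctx_id in self._live
```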

DK_CXLFLASH_USER_DIRECT
-----------------------
This ioctl is responsible for transitioning the LUN to direct
(physical) mode access and configuring the AFU for direct access from
user space on a per-context basis. Additionally, the block size and
last logical block address (LBA) are returned to the user.

As mentioned previously, when operating in user space access mode,
LUNs may be accessed in whole or in part. Only one mode is allowed
at a time and if one mode is active (outstanding references exist),
requests to use the LUN in a different mode are denied.

The AFU is configured for direct access from user space by adding an
entry to the AFU's resource handle table. The index of the entry is
treated as a resource handle that is returned to the user. The user
is then able to use the handle to reference the LUN during I/O.
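Since a resource handle is just the index of an entry in the AFU's resource handle table, the allocation scheme can be sketched as claiming the first free slot. This is an illustrative model only (the table size is hypothetical and the real table is programmed by the driver over MMIO):

```python
def alloc_handle(table):
    """Claim the first free entry and return its index as the handle."""
    for i, entry in enumerate(table):
        if entry is None:
            table[i] = "in-use"
            return i
    raise RuntimeError("resource handle table full")

def free_handle(table, handle):
    """Release an entry so the index can be handed out again."""
    table[handle] = None
```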

DK_CXLFLASH_USER_VIRTUAL
------------------------
This ioctl is responsible for transitioning the LUN to virtual mode
of access and configuring the AFU for virtual access from user space
on a per-context basis. Additionally, the block size and last logical
block address (LBA) are returned to the user.

As mentioned previously, when operating in user space access mode,
LUNs may be accessed in whole or in part. Only one mode is allowed
at a time and if one mode is active (outstanding references exist),
requests to use the LUN in a different mode are denied.

The AFU is configured for virtual access from user space by adding
an entry to the AFU's resource handle table. The index of the entry
is treated as a resource handle that is returned to the user. The
user is then able to use the handle to reference the LUN during I/O.

By default, the virtual LUN is created with a size of 0. The user
would need to use the DK_CXLFLASH_VLUN_RESIZE ioctl to grow the
virtual LUN to a desired size. To avoid having to perform this
resize for the initial creation of the virtual LUN, the user has the
option of specifying a size as part of the DK_CXLFLASH_USER_VIRTUAL
ioctl, such that when success is returned to the user, the
resource handle that is provided is already referencing provisioned
storage. This is reflected by the last LBA being a non-zero value.

When a LUN is accessible from more than one port, this ioctl will
return with the DK_CXLFLASH_ALL_PORTS_ACTIVE return flag set. This
provides the user with a hint that I/O can be retried in the event
of an I/O error as the LUN can be reached over multiple paths.

DK_CXLFLASH_VLUN_RESIZE
-----------------------
This ioctl is responsible for resizing a previously created virtual
LUN and will fail if invoked upon a LUN that is not in virtual
mode. Upon success, an updated last LBA is returned to the user
indicating the new size of the virtual LUN associated with the
resource handle.

The partitioning of virtual LUNs is jointly mediated by the cxlflash
driver and the AFU. An allocation table is kept for each LUN that is
operating in the virtual mode and used to program a LUN translation
table that the AFU references when provided with a resource handle.

This ioctl can return -EAGAIN if an AFU sync operation takes too long.
In addition to returning a failure to the user, cxlflash will also
schedule an asynchronous AFU reset. If this ioctl fails with -EAGAIN,
the user can either retry the operation, which is then expected to
succeed, or treat it as a failure.
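Because -EAGAIN here means "an AFU reset has been scheduled, retry later", callers typically wrap such an ioctl in a bounded retry loop. A sketch of that pattern, with a stand-in callable in place of the actual ioctl:

```python
import errno

def with_retry(op, retries=3):
    """Call op(), retrying while it fails with EAGAIN."""
    last = None
    for _ in range(retries):
        try:
            return op()
        except OSError as e:
            if e.errno != errno.EAGAIN:
                raise          # any other error is a real failure
            last = e           # EAGAIN: the reset may still be pending
    raise last                 # still busy after all retries
```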

DK_CXLFLASH_RELEASE
-------------------
This ioctl is responsible for releasing a previously obtained
reference to either a physical or virtual LUN. This can be
thought of as the inverse of the DK_CXLFLASH_USER_DIRECT or
DK_CXLFLASH_USER_VIRTUAL ioctls. Upon success, the resource handle
is no longer valid and the entry in the resource handle table is
made available to be used again.

As part of the release process for virtual LUNs, the virtual LUN
is first resized to 0 to clear out and free the translation tables
associated with the virtual LUN reference.

DK_CXLFLASH_DETACH
------------------
This ioctl is responsible for unregistering a context with the
cxlflash driver and releasing outstanding resources that were
not explicitly released via the DK_CXLFLASH_RELEASE ioctl. Upon
success, all "tokens" which had been provided to the user from the
DK_CXLFLASH_ATTACH onward are no longer valid.

When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
attach, the application _must_ close the fd2 associated with the context
following the detach of the final user of the context.

DK_CXLFLASH_VLUN_CLONE
----------------------
This ioctl is responsible for cloning a previously created
context to a more recently created context. It exists solely to
support maintaining user space access to storage after a process
forks. Upon success, the child process (which invoked the ioctl)
will have access to the same LUNs via the same resource handle(s)
as the parent, but under a different context.

Context sharing across processes is not supported with CXL and
therefore each fork must be met with establishing a new context
for the child process. This ioctl simplifies the state management
and playback required by a user in such a scenario. When a process
forks, the child process can clone the parent's context by first
creating a context (via DK_CXLFLASH_ATTACH) and then using this
ioctl to perform the clone from the parent to the child.

The clone itself is fairly simple. The resource handle and lun
translation tables are copied from the parent context to the child's
and then synced with the AFU.

When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
attach, the application _must_ close the fd2 associated with the source
context (still resident/accessible in the parent process) following the
clone. This is to avoid a stale entry in the file descriptor table of the
child process.

This ioctl can return -EAGAIN if an AFU sync operation takes too long.
In addition to returning a failure to the user, cxlflash will also
schedule an asynchronous AFU reset. If this ioctl fails with -EAGAIN,
the user can either retry the operation, which is then expected to
succeed, or treat it as a failure.

DK_CXLFLASH_VERIFY
------------------
This ioctl is used to detect various changes such as the capacity of
the disk changing, the number of LUNs visible changing, etc. In cases
where the changes affect the application (such as a LUN resize), the
cxlflash driver will report the changed state to the application.

The user calls in when they want to validate that a LUN hasn't been
changed in response to a check condition. As the user is operating out
of band from the kernel, they will see these types of events without
the kernel's knowledge. When encountered, the user's architected
behavior is to call in to this ioctl, indicating what they want to
verify and passing along any appropriate information. For now, only
verifying a LUN change (i.e.: size different) with sense data is
supported.

DK_CXLFLASH_RECOVER_AFU
-----------------------
This ioctl is used to drive recovery (if such an action is warranted)
of a specified user context. Any state associated with the user context
is re-established upon successful recovery.

User contexts are put into an error condition when the device needs to
be reset or is terminating. Users are notified of this error condition
by seeing all 0xF's on an MMIO read. Upon encountering this, the
architected behavior for a user is to call into this ioctl to recover
their context. A user may also call into this ioctl at any time to
check if the device is operating normally. If a failure is returned
from this ioctl, the user is expected to gracefully clean up their
context via the release/detach ioctls. Until they do, the context they
hold is not relinquished. The user may also optionally exit the process
at which time the context/resources they held will be freed as part of
the release fop.

When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
attach, the application _must_ unmap and close the fd2 associated with the
original context following this ioctl returning success and indicating that
the context was recovered (DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET).

DK_CXLFLASH_MANAGE_LUN
----------------------
This ioctl is used to switch a LUN from a mode where it is available
for file-system access (legacy), to a mode where it is set aside for
exclusive user space access (superpipe). In case a LUN is visible
across multiple ports and adapters, this ioctl is used to uniquely
identify each LUN by its World Wide Node Name (WWNN).


CXL Flash Driver Host IOCTLs
============================

Each host adapter instance that is supported by the cxlflash driver
has a special character device associated with it to enable a set of
host management functions. These character devices are hosted in a
class dedicated for cxlflash and can be accessed via `/dev/cxlflash/*`.

Applications can be written to perform various functions using the
host ioctl APIs below.

The structure definitions for these IOCTLs are available in:
uapi/scsi/cxlflash_ioctl.h

HT_CXLFLASH_LUN_PROVISION
-------------------------
This ioctl is used to create and delete persistent LUNs on cxlflash
devices that lack an external LUN management interface. It is only
valid when used with AFUs that support the LUN provision capability.

When sufficient space is available, LUNs can be created by specifying
the target port to host the LUN and a desired size in 4K blocks. Upon
success, the LUN ID and WWID of the created LUN will be returned and
the SCSI bus can be scanned to detect the change in LUN topology. Note
that partial allocations are not supported. Should a creation fail due
to a space issue, the target port can be queried for its current LUN
geometry.

To remove a LUN, the device must first be disassociated from the Linux
SCSI subsystem. The LUN deletion can then be initiated by specifying a
target port and LUN ID. Upon success, the LUN geometry associated with
the port will be updated to reflect the new number of provisioned LUNs
and available capacity.

To query the LUN geometry of a port, the target port is specified and
upon success, the following information is presented:

    - Maximum number of provisioned LUNs allowed for the port
    - Current number of provisioned LUNs for the port
    - Maximum total capacity of provisioned LUNs for the port (4K blocks)
    - Current total capacity of provisioned LUNs for the port (4K blocks)

With this information, the number of available LUNs and the available
capacity can be calculated.
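Given the four values returned by a geometry query, the free LUN slots and free capacity fall out by subtraction (capacities are in 4K blocks, so bytes are the block count times 4096). A small sketch of that arithmetic:

```python
def port_free(max_luns, cur_luns, max_cap_4k, cur_cap_4k):
    """Derive what can still be provisioned on a port.

    Inputs mirror the four values a geometry query reports:
    max/current LUN counts and max/current capacity in 4K blocks.
    """
    free_blocks = max_cap_4k - cur_cap_4k
    return {
        "free_luns": max_luns - cur_luns,
        "free_blocks_4k": free_blocks,
        "free_bytes": free_blocks * 4096,
    }
```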

HT_CXLFLASH_AFU_DEBUG
---------------------
This ioctl is used to debug AFUs by supporting a command pass-through
interface. It is only valid when used with AFUs that support the AFU
debug capability.

With the exception of buffer management, AFU debug commands are opaque
to cxlflash and treated as pass-through. For debug commands that do
require data transfer, the user supplies an adequately sized data buffer
and must specify the data transfer direction with respect to the host.
There is a maximum transfer size of 256K imposed. Note that partial read
completions are not supported - when errors are experienced with a host
read data transfer, the data buffer is not copied back to the user.
@ -13,7 +13,6 @@ powerpc
    cpu_families
    cpu_features
    cxl
    cxlflash
    dawr-power9
    dexcr
    dscr
@ -33,6 +33,16 @@ properties:
  resets:
    maxItems: 1

  nvmem-cells:
    maxItems: 1

  nvmem-cell-names:
    items:
      - const: calibration

dependencies:
  nvmem-cells: [ nvmem-cell-names ]

required:
  - compatible
  - reg
@ -58,4 +68,6 @@ examples:
        freq-table-hz = <200000000 200000000>, <38400000 38400000>;
        power-domains = <&sysc R8A779F0_PD_ALWAYS_ON>;
        resets = <&cpg 1514>;
        nvmem-cells = <&ufs_tune>;
        nvmem-cell-names = "calibration";
    };
Documentation/devicetree/bindings/ufs/rockchip,rk3576-ufshc.yaml (new file, 105 lines)
@ -0,0 +1,105 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/ufs/rockchip,rk3576-ufshc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Rockchip UFS Host Controller

maintainers:
  - Shawn Lin <shawn.lin@rock-chips.com>

allOf:
  - $ref: ufs-common.yaml

properties:
  compatible:
    const: rockchip,rk3576-ufshc

  reg:
    maxItems: 5

  reg-names:
    items:
      - const: hci
      - const: mphy
      - const: hci_grf
      - const: mphy_grf
      - const: hci_apb

  clocks:
    maxItems: 4

  clock-names:
    items:
      - const: core
      - const: pclk
      - const: pclk_mphy
      - const: ref_out

  power-domains:
    maxItems: 1

  resets:
    maxItems: 4

  reset-names:
    items:
      - const: biu
      - const: sys
      - const: ufs
      - const: grf

  reset-gpios:
    maxItems: 1
    description: |
      GPIO specifiers for the host to reset the whole UFS device,
      including PHY and memory. This GPIO is active low; per the UFS
      spec, choose a pin whose high-level output voltage is below 1.5 V.

required:
  - compatible
  - reg
  - reg-names
  - clocks
  - clock-names
  - interrupts
  - power-domains
  - resets
  - reset-names
  - reset-gpios

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/rockchip,rk3576-cru.h>
    #include <dt-bindings/reset/rockchip,rk3576-cru.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/power/rockchip,rk3576-power.h>
    #include <dt-bindings/pinctrl/rockchip.h>
    #include <dt-bindings/gpio/gpio.h>

    soc {
        #address-cells = <2>;
        #size-cells = <2>;

        ufshc: ufshc@2a2d0000 {
            compatible = "rockchip,rk3576-ufshc";
            reg = <0x0 0x2a2d0000 0x0 0x10000>,
                  <0x0 0x2b040000 0x0 0x10000>,
                  <0x0 0x2601f000 0x0 0x1000>,
                  <0x0 0x2603c000 0x0 0x1000>,
                  <0x0 0x2a2e0000 0x0 0x10000>;
            reg-names = "hci", "mphy", "hci_grf", "mphy_grf", "hci_apb";
            clocks = <&cru ACLK_UFS_SYS>, <&cru PCLK_USB_ROOT>, <&cru PCLK_MPHY>,
                     <&cru CLK_REF_UFS_CLKOUT>;
            clock-names = "core", "pclk", "pclk_mphy", "ref_out";
            interrupts = <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH>;
            power-domains = <&power RK3576_PD_USB>;
            resets = <&cru SRST_A_UFS_BIU>, <&cru SRST_A_UFS_SYS>, <&cru SRST_A_UFS>,
                     <&cru SRST_P_UFS_GRF>;
            reset-names = "biu", "sys", "ufs", "grf";
            reset-gpios = <&gpio4 RK_PD0 GPIO_ACTIVE_LOW>;
        };
    };
@ -157,6 +157,11 @@ enabled driver and mode options. The value in the file is a bit mask where the
bit definitions are the same as those used with MTSETDRVBUFFER in setting the
options.

Each directory contains the entry 'position_lost_in_reset'. If this value is
one, reading and writing to the device is blocked after device reset. Most
devices rewind the tape after reset, so reads and writes would not access
the tape position the user expects.

A link named 'tape' is made from the SCSI device directory to the class
directory corresponding to the mode 0 auto-rewind device (e.g., st0).

@@ -377,7 +377,7 @@ Code  Seq#    Include File                                           Comments
0xC0  00-0F  linux/usb/iowarrior.h
0xCA  00-0F  uapi/misc/cxl.h
0xCA  10-2F  uapi/misc/ocxl.h
0xCA  80-BF  uapi/scsi/cxlflash_ioctl.h
0xCA  80-BF  uapi/scsi/cxlflash_ioctl.h                              Dead since 6.14
0xCB  00-1F                                                          CBM serial IEC bus in development:
                                                                     <mailto:michael.klein@puffin.lb.shuttle.de>
0xCC  00-0F  drivers/misc/ibmvmc.h                                   pseries VMC driver

@@ -6354,15 +6354,6 @@ F: drivers/misc/cxl/
F:      include/misc/cxl*
F:      include/uapi/misc/cxl.h

CXLFLASH (IBM Coherent Accelerator Processor Interface CAPI Flash) SCSI DRIVER
M:      Manoj N. Kumar <manoj@linux.ibm.com>
M:      Uma Krishnan <ukrishn@linux.ibm.com>
L:      linux-scsi@vger.kernel.org
S:      Obsolete
F:      Documentation/arch/powerpc/cxlflash.rst
F:      drivers/scsi/cxlflash/
F:      include/uapi/scsi/cxlflash_ioctl.h

CYBERPRO FB DRIVER
M:      Russell King <linux@armlinux.org.uk>
L:      linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)

@@ -1221,6 +1221,30 @@
                };
        };

        ufshc: ufshc@2a2d0000 {
                compatible = "rockchip,rk3576-ufshc";
                reg = <0x0 0x2a2d0000 0x0 0x10000>,
                      <0x0 0x2b040000 0x0 0x10000>,
                      <0x0 0x2601f000 0x0 0x1000>,
                      <0x0 0x2603c000 0x0 0x1000>,
                      <0x0 0x2a2e0000 0x0 0x10000>;
                reg-names = "hci", "mphy", "hci_grf", "mphy_grf", "hci_apb";
                clocks = <&cru ACLK_UFS_SYS>, <&cru PCLK_USB_ROOT>, <&cru PCLK_MPHY>,
                         <&cru CLK_REF_UFS_CLKOUT>;
                clock-names = "core", "pclk", "pclk_mphy", "ref_out";
                assigned-clocks = <&cru CLK_REF_OSC_MPHY>;
                assigned-clock-parents = <&cru CLK_REF_MPHY_26M>;
                interrupts = <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH>;
                power-domains = <&power RK3576_PD_USB>;
                pinctrl-0 = <&ufs_refclk>;
                pinctrl-names = "default";
                resets = <&cru SRST_A_UFS_BIU>, <&cru SRST_A_UFS_SYS>,
                         <&cru SRST_A_UFS>, <&cru SRST_P_UFS_GRF>;
                reset-names = "biu", "sys", "ufs", "grf";
                reset-gpios = <&gpio4 RK_PD0 GPIO_ACTIVE_LOW>;
                status = "disabled";
        };

        sdmmc: mmc@2a310000 {
                compatible = "rockchip,rk3576-dw-mshc";
                reg = <0x0 0x2a310000 0x0 0x4000>;
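Because the SoC-level node above is left with `status = "disabled"`, a board device tree would typically enable it via a label override like the following (a minimal sketch; any board-specific supply or pinctrl properties are omitted):

```dts
/* Board .dts fragment enabling the rk3576 UFS host controller */
&ufshc {
        status = "okay";
};
```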
@@ -1843,65 +1843,6 @@ mptscsih_dev_reset(struct scsi_cmnd * SCpnt)
        return FAILED;
}

/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
/**
 *      mptscsih_target_reset - Perform a SCSI TARGET_RESET!
 *      @SCpnt: Pointer to scsi_cmnd structure, IO which reset is due to
 *
 *      (linux scsi_host_template.eh_target_reset_handler routine)
 *
 *      Returns SUCCESS or FAILED.
 **/
int
mptscsih_target_reset(struct scsi_cmnd * SCpnt)
{
        MPT_SCSI_HOST   *hd;
        int             retval;
        VirtDevice      *vdevice;
        MPT_ADAPTER     *ioc;

        /* If we can't locate our host adapter structure, return FAILED status.
         */
        if ((hd = shost_priv(SCpnt->device->host)) == NULL){
                printk(KERN_ERR MYNAM ": target reset: "
                    "Can't locate host! (sc=%p)\n", SCpnt);
                return FAILED;
        }

        ioc = hd->ioc;
        printk(MYIOC_s_INFO_FMT "attempting target reset! (sc=%p)\n",
            ioc->name, SCpnt);
        scsi_print_command(SCpnt);

        vdevice = SCpnt->device->hostdata;
        if (!vdevice || !vdevice->vtarget) {
                retval = 0;
                goto out;
        }

        /* Target reset to hidden raid component is not supported
         */
        if (vdevice->vtarget->tflags & MPT_TARGET_FLAGS_RAID_COMPONENT) {
                retval = FAILED;
                goto out;
        }

        retval = mptscsih_IssueTaskMgmt(hd,
            MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
            vdevice->vtarget->channel,
            vdevice->vtarget->id, 0, 0,
            mptscsih_get_tm_timeout(ioc));

 out:
        printk (MYIOC_s_INFO_FMT "target reset: %s (sc=%p)\n",
            ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);

        if (retval == 0)
                return SUCCESS;
        else
                return FAILED;
}


/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
/**
@@ -2915,14 +2856,14 @@ mptscsih_do_cmd(MPT_SCSI_HOST *hd, INTERNAL_CMD *io)
                timeout = 10;
                break;

        case RESERVE:
        case RESERVE_6:
                cmdLen = 6;
                dir = MPI_SCSIIO_CONTROL_READ;
                CDB[0] = cmd;
                timeout = 10;
                break;

        case RELEASE:
        case RELEASE_6:
                cmdLen = 6;
                dir = MPI_SCSIIO_CONTROL_READ;
                CDB[0] = cmd;
@@ -3306,7 +3247,6 @@ EXPORT_SYMBOL(mptscsih_sdev_destroy);
EXPORT_SYMBOL(mptscsih_sdev_configure);
EXPORT_SYMBOL(mptscsih_abort);
EXPORT_SYMBOL(mptscsih_dev_reset);
EXPORT_SYMBOL(mptscsih_target_reset);
EXPORT_SYMBOL(mptscsih_bus_reset);
EXPORT_SYMBOL(mptscsih_host_reset);
EXPORT_SYMBOL(mptscsih_bios_param);

@@ -121,7 +121,6 @@ extern int mptscsih_sdev_configure(struct scsi_device *device,
                                   struct queue_limits *lim);
extern int mptscsih_abort(struct scsi_cmnd * SCpnt);
extern int mptscsih_dev_reset(struct scsi_cmnd * SCpnt);
extern int mptscsih_target_reset(struct scsi_cmnd * SCpnt);
extern int mptscsih_bus_reset(struct scsi_cmnd * SCpnt);
extern int mptscsih_host_reset(struct scsi_cmnd *SCpnt);
extern int mptscsih_bios_param(struct scsi_device * sdev, struct block_device *bdev, sector_t capacity, int geom[]);

@@ -303,9 +303,9 @@ if SCSI_LOWLEVEL && SCSI
config ISCSI_TCP
        tristate "iSCSI Initiator over TCP/IP"
        depends on SCSI && INET
        select CRC32
        select CRYPTO
        select CRYPTO_MD5
        select CRYPTO_CRC32C
        select SCSI_ISCSI_ATTRS
        help
          The iSCSI Driver provides a host with the ability to access storage

@@ -336,7 +336,6 @@ source "drivers/scsi/cxgbi/Kconfig"
source "drivers/scsi/bnx2i/Kconfig"
source "drivers/scsi/bnx2fc/Kconfig"
source "drivers/scsi/be2iscsi/Kconfig"
source "drivers/scsi/cxlflash/Kconfig"

config SGIWD93_SCSI
        tristate "SGI WD93C93 SCSI Driver"

@@ -96,7 +96,6 @@ obj-$(CONFIG_SCSI_SYM53C8XX_2)  += sym53c8xx_2/
obj-$(CONFIG_SCSI_ZALON)        += zalon7xx.o
obj-$(CONFIG_SCSI_DC395x)       += dc395x.o
obj-$(CONFIG_SCSI_AM53C974)     += esp_scsi.o am53c974.o
obj-$(CONFIG_CXLFLASH)          += cxlflash/
obj-$(CONFIG_MEGARAID_LEGACY)   += megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN)   += megaraid/
obj-$(CONFIG_MEGARAID_SAS)      += megaraid/

@@ -3221,8 +3221,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
                        break;
                }
                fallthrough;
        case RESERVE:
        case RELEASE:
        case RESERVE_6:
        case RELEASE_6:
        case REZERO_UNIT:
        case REASSIGN_BLOCKS:
        case SEEK_10:

@@ -2029,7 +2029,7 @@ static void aac_pci_resume(struct pci_dev *pdev)
        dev_err(&pdev->dev, "aacraid: PCI error - resume\n");
}

static struct pci_error_handlers aac_pci_err_handler = {
static const struct pci_error_handlers aac_pci_err_handler = {
        .error_detected         = aac_pci_error_detected,
        .mmio_enabled           = aac_pci_mmio_enabled,
        .slot_reset             = aac_pci_slot_reset,

@@ -591,7 +591,7 @@ datadir_t acornscsi_datadirection(int command)
        case CHANGE_DEFINITION: case COMPARE:           case COPY:
        case COPY_VERIFY:       case LOG_SELECT:        case MODE_SELECT:
        case MODE_SELECT_10:    case SEND_DIAGNOSTIC:   case WRITE_BUFFER:
        case FORMAT_UNIT:       case REASSIGN_BLOCKS:   case RESERVE:
        case FORMAT_UNIT:       case REASSIGN_BLOCKS:   case RESERVE_6:
        case SEARCH_EQUAL:      case SEARCH_HIGH:       case SEARCH_LOW:
        case WRITE_6:           case WRITE_10:          case WRITE_VERIFY:
        case UPDATE_BLOCK:      case WRITE_LONG:        case WRITE_SAME:

@@ -5776,7 +5776,7 @@ static void beiscsi_remove(struct pci_dev *pcidev)
}


static struct pci_error_handlers beiscsi_eeh_handlers = {
static const struct pci_error_handlers beiscsi_eeh_handlers = {
        .error_detected = beiscsi_eeh_err_detected,
        .slot_reset = beiscsi_eeh_reset,
        .resume = beiscsi_eeh_resume,

@@ -1642,7 +1642,7 @@ MODULE_DEVICE_TABLE(pci, bfad_id_table);
/*
 * PCI error recovery handlers.
 */
static struct pci_error_handlers bfad_err_handler = {
static const struct pci_error_handlers bfad_err_handler = {
        .error_detected = bfad_pci_error_detected,
        .slot_reset = bfad_pci_slot_reset,
        .mmio_enabled = bfad_pci_mmio_enabled,

@@ -1162,7 +1162,7 @@ err_resume_exit:
        dev_err(&pdev->dev, "resume of device failed: %d\n", rv);
}

static struct pci_error_handlers csio_err_handler = {
static const struct pci_error_handlers csio_err_handler = {
        .error_detected = csio_pci_error_detected,
        .slot_reset     = csio_pci_slot_reset,
        .resume         = csio_pci_resume,

@@ -1,15 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# IBM CXL-attached Flash Accelerator SCSI Driver
#

config CXLFLASH
        tristate "Support for IBM CAPI Flash (DEPRECATED)"
        depends on PCI && SCSI && (CXL || OCXL) && EEH
        select IRQ_POLL
        help
          The cxlflash driver is deprecated and will be removed in a future
          kernel release.

          Allows CAPI Accelerated IO to Flash
          If unsure, say N.
@@ -1,5 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_CXLFLASH) += cxlflash.o
cxlflash-y += main.o superpipe.o lunmgt.o vlun.o
cxlflash-$(CONFIG_CXL) += cxl_hw.o
cxlflash-$(CONFIG_OCXL) += ocxl_hw.o
@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * CXL Flash Device Driver
 *
 * Written by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *             Uma Krishnan <ukrishn@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2018 IBM Corporation
 */

#ifndef _CXLFLASH_BACKEND_H
#define _CXLFLASH_BACKEND_H

extern const struct cxlflash_backend_ops cxlflash_cxl_ops;
extern const struct cxlflash_backend_ops cxlflash_ocxl_ops;

struct cxlflash_backend_ops {
        struct module *module;
        void __iomem * (*psa_map)(void *ctx_cookie);
        void (*psa_unmap)(void __iomem *addr);
        int (*process_element)(void *ctx_cookie);
        int (*map_afu_irq)(void *ctx_cookie, int num, irq_handler_t handler,
                           void *cookie, char *name);
        void (*unmap_afu_irq)(void *ctx_cookie, int num, void *cookie);
        u64 (*get_irq_objhndl)(void *ctx_cookie, int irq);
        int (*start_context)(void *ctx_cookie);
        int (*stop_context)(void *ctx_cookie);
        int (*afu_reset)(void *ctx_cookie);
        void (*set_master)(void *ctx_cookie);
        void * (*get_context)(struct pci_dev *dev, void *afu_cookie);
        void * (*dev_context_init)(struct pci_dev *dev, void *afu_cookie);
        int (*release_context)(void *ctx_cookie);
        void (*perst_reloads_same_image)(void *afu_cookie, bool image);
        ssize_t (*read_adapter_vpd)(struct pci_dev *dev, void *buf,
                                    size_t count);
        int (*allocate_afu_irqs)(void *ctx_cookie, int num);
        void (*free_afu_irqs)(void *ctx_cookie);
        void * (*create_afu)(struct pci_dev *dev);
        void (*destroy_afu)(void *afu_cookie);
        struct file * (*get_fd)(void *ctx_cookie, struct file_operations *fops,
                                int *fd);
        void * (*fops_get_context)(struct file *file);
        int (*start_work)(void *ctx_cookie, u64 irqs);
        int (*fd_mmap)(struct file *file, struct vm_area_struct *vm);
        int (*fd_release)(struct inode *inode, struct file *file);
};

#endif /* _CXLFLASH_BACKEND_H */
@@ -1,340 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * CXL Flash Device Driver
 *
 * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
 *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2015 IBM Corporation
 */

#ifndef _CXLFLASH_COMMON_H
#define _CXLFLASH_COMMON_H

#include <linux/async.h>
#include <linux/cdev.h>
#include <linux/irq_poll.h>
#include <linux/list.h>
#include <linux/rwsem.h>
#include <linux/types.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

#include "backend.h"

extern const struct file_operations cxlflash_cxl_fops;

#define MAX_CONTEXT     CXLFLASH_MAX_CONTEXT    /* num contexts per afu */
#define MAX_FC_PORTS    CXLFLASH_MAX_FC_PORTS   /* max ports per AFU */
#define LEGACY_FC_PORTS 2                       /* legacy ports per AFU */

#define CHAN2PORTBANK(_x)       ((_x) >> ilog2(CXLFLASH_NUM_FC_PORTS_PER_BANK))
#define CHAN2BANKPORT(_x)       ((_x) & (CXLFLASH_NUM_FC_PORTS_PER_BANK - 1))

#define CHAN2PORTMASK(_x)       (1 << (_x))     /* channel to port mask */
#define PORTMASK2CHAN(_x)       (ilog2((_x)))   /* port mask to channel */
#define PORTNUM2CHAN(_x)        ((_x) - 1)      /* port number to channel */

#define CXLFLASH_BLOCK_SIZE     4096            /* 4K blocks */
#define CXLFLASH_MAX_XFER_SIZE  16777216        /* 16MB transfer */
#define CXLFLASH_MAX_SECTORS    (CXLFLASH_MAX_XFER_SIZE/512)    /* SCSI wants
                                                                 * max_sectors
                                                                 * in units of
                                                                 * 512 byte
                                                                 * sectors
                                                                 */

#define MAX_RHT_PER_CONTEXT (PAGE_SIZE / sizeof(struct sisl_rht_entry))

/* AFU command retry limit */
#define MC_RETRY_CNT    5       /* Sufficient for SCSI and certain AFU errors */

/* Command management definitions */
#define CXLFLASH_MAX_CMDS               256
#define CXLFLASH_MAX_CMDS_PER_LUN       CXLFLASH_MAX_CMDS

/* RRQ for master issued cmds */
#define NUM_RRQ_ENTRY                   CXLFLASH_MAX_CMDS

/* SQ for master issued cmds */
#define NUM_SQ_ENTRY                    CXLFLASH_MAX_CMDS

/* Hardware queue definitions */
#define CXLFLASH_DEF_HWQS               1
#define CXLFLASH_MAX_HWQS               8
#define PRIMARY_HWQ                     0


static inline void check_sizes(void)
{
        BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_NUM_FC_PORTS_PER_BANK);
        BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_MAX_CMDS);
}

/* AFU defines a fixed size of 4K for command buffers (borrow 4K page define) */
#define CMD_BUFSIZE     SIZE_4K

enum cxlflash_lr_state {
        LINK_RESET_INVALID,
        LINK_RESET_REQUIRED,
        LINK_RESET_COMPLETE
};

enum cxlflash_init_state {
        INIT_STATE_NONE,
        INIT_STATE_PCI,
        INIT_STATE_AFU,
        INIT_STATE_SCSI,
        INIT_STATE_CDEV
};

enum cxlflash_state {
        STATE_PROBING,  /* Initial state during probe */
        STATE_PROBED,   /* Temporary state, probe completed but EEH occurred */
        STATE_NORMAL,   /* Normal running state, everything good */
        STATE_RESET,    /* Reset state, trying to reset/recover */
        STATE_FAILTERM  /* Failed/terminating state, error out users/threads */
};

enum cxlflash_hwq_mode {
        HWQ_MODE_RR,    /* Roundrobin (default) */
        HWQ_MODE_TAG,   /* Distribute based on block MQ tag */
        HWQ_MODE_CPU,   /* CPU affinity */
        MAX_HWQ_MODE
};

/*
 * Each context has its own set of resource handles that is visible
 * only from that context.
 */

struct cxlflash_cfg {
        struct afu *afu;

        const struct cxlflash_backend_ops *ops;
        struct pci_dev *dev;
        struct pci_device_id *dev_id;
        struct Scsi_Host *host;
        int num_fc_ports;
        struct cdev cdev;
        struct device *chardev;

        ulong cxlflash_regs_pci;

        struct work_struct work_q;
        enum cxlflash_init_state init_state;
        enum cxlflash_lr_state lr_state;
        int lr_port;
        atomic_t scan_host_needed;

        void *afu_cookie;

        atomic_t recovery_threads;
        struct mutex ctx_recovery_mutex;
        struct mutex ctx_tbl_list_mutex;
        struct rw_semaphore ioctl_rwsem;
        struct ctx_info *ctx_tbl[MAX_CONTEXT];
        struct list_head ctx_err_recovery; /* contexts w/ recovery pending */
        struct file_operations cxl_fops;

        /* Parameters that are LUN table related */
        int last_lun_index[MAX_FC_PORTS];
        int promote_lun_index;
        struct list_head lluns; /* list of llun_info structs */

        wait_queue_head_t tmf_waitq;
        spinlock_t tmf_slock;
        bool tmf_active;
        bool ws_unmap;          /* Write-same unmap supported */
        wait_queue_head_t reset_waitq;
        enum cxlflash_state state;
        async_cookie_t async_reset_cookie;
};

struct afu_cmd {
        struct sisl_ioarcb rcb; /* IOARCB (cache line aligned) */
        struct sisl_ioasa sa;   /* IOASA must follow IOARCB */
        struct afu *parent;
        struct scsi_cmnd *scp;
        struct completion cevent;
        struct list_head queue;
        u32 hwq_index;

        u8 cmd_tmf:1,
           cmd_aborted:1;

        struct list_head list;  /* Pending commands link */

        /* As per the SISLITE spec the IOARCB EA has to be 16-byte aligned.
         * However for performance reasons the IOARCB/IOASA should be
         * cache line aligned.
         */
} __aligned(cache_line_size());

static inline struct afu_cmd *sc_to_afuc(struct scsi_cmnd *sc)
{
        return PTR_ALIGN(scsi_cmd_priv(sc), __alignof__(struct afu_cmd));
}

static inline struct afu_cmd *sc_to_afuci(struct scsi_cmnd *sc)
{
        struct afu_cmd *afuc = sc_to_afuc(sc);

        INIT_LIST_HEAD(&afuc->queue);
        return afuc;
}

static inline struct afu_cmd *sc_to_afucz(struct scsi_cmnd *sc)
{
        struct afu_cmd *afuc = sc_to_afuc(sc);

        memset(afuc, 0, sizeof(*afuc));
        return sc_to_afuci(sc);
}

struct hwq {
        /* Stuff requiring alignment go first. */
        struct sisl_ioarcb sq[NUM_SQ_ENTRY];    /* 16K SQ */
        u64 rrq_entry[NUM_RRQ_ENTRY];           /* 2K RRQ */

        /* Beware of alignment till here. Preferably introduce new
         * fields after this point
         */
        struct afu *afu;
        void *ctx_cookie;
        struct sisl_host_map __iomem *host_map;         /* MC host map */
        struct sisl_ctrl_map __iomem *ctrl_map;         /* MC control map */
        ctx_hndl_t ctx_hndl;    /* master's context handle */
        u32 index;              /* Index of this hwq */
        int num_irqs;           /* Number of interrupts requested for context */
        struct list_head pending_cmds;  /* Commands pending completion */

        atomic_t hsq_credits;
        spinlock_t hsq_slock;   /* Hardware send queue lock */
        struct sisl_ioarcb *hsq_start;
        struct sisl_ioarcb *hsq_end;
        struct sisl_ioarcb *hsq_curr;
        spinlock_t hrrq_slock;
        u64 *hrrq_start;
        u64 *hrrq_end;
        u64 *hrrq_curr;
        bool toggle;
        bool hrrq_online;

        s64 room;

        struct irq_poll irqpoll;
} __aligned(cache_line_size());

struct afu {
        struct hwq hwqs[CXLFLASH_MAX_HWQS];
        int (*send_cmd)(struct afu *afu, struct afu_cmd *cmd);
        int (*context_reset)(struct hwq *hwq);

        /* AFU HW */
        struct cxlflash_afu_map __iomem *afu_map;       /* entire MMIO map */

        atomic_t cmds_active;   /* Number of currently active AFU commands */
        struct mutex sync_active;       /* Mutex to serialize AFU commands */
        u64 hb;
        u32 internal_lun;       /* User-desired LUN mode for this AFU */

        u32 num_hwqs;           /* Number of hardware queues */
        u32 desired_hwqs;       /* Desired h/w queues, effective on AFU reset */
        enum cxlflash_hwq_mode hwq_mode; /* Steering mode for h/w queues */
        u32 hwq_rr_count;       /* Count to distribute traffic for roundrobin */

        char version[16];
        u64 interface_version;

        u32 irqpoll_weight;
        struct cxlflash_cfg *parent; /* Pointer back to parent cxlflash_cfg */
};

static inline struct hwq *get_hwq(struct afu *afu, u32 index)
{
        WARN_ON(index >= CXLFLASH_MAX_HWQS);

        return &afu->hwqs[index];
}

static inline bool afu_is_irqpoll_enabled(struct afu *afu)
{
        return !!afu->irqpoll_weight;
}

static inline bool afu_has_cap(struct afu *afu, u64 cap)
{
        u64 afu_cap = afu->interface_version >> SISL_INTVER_CAP_SHIFT;

        return afu_cap & cap;
}

static inline bool afu_is_ocxl_lisn(struct afu *afu)
{
        return afu_has_cap(afu, SISL_INTVER_CAP_OCXL_LISN);
}

static inline bool afu_is_afu_debug(struct afu *afu)
{
        return afu_has_cap(afu, SISL_INTVER_CAP_AFU_DEBUG);
}

static inline bool afu_is_lun_provision(struct afu *afu)
{
        return afu_has_cap(afu, SISL_INTVER_CAP_LUN_PROVISION);
}

static inline bool afu_is_sq_cmd_mode(struct afu *afu)
{
        return afu_has_cap(afu, SISL_INTVER_CAP_SQ_CMD_MODE);
}

static inline bool afu_is_ioarrin_cmd_mode(struct afu *afu)
{
        return afu_has_cap(afu, SISL_INTVER_CAP_IOARRIN_CMD_MODE);
}

static inline u64 lun_to_lunid(u64 lun)
{
        __be64 lun_id;

        int_to_scsilun(lun, (struct scsi_lun *)&lun_id);
        return be64_to_cpu(lun_id);
}

static inline struct fc_port_bank __iomem *get_fc_port_bank(
                                            struct cxlflash_cfg *cfg, int i)
{
        struct afu *afu = cfg->afu;

        return &afu->afu_map->global.bank[CHAN2PORTBANK(i)];
}

static inline __be64 __iomem *get_fc_port_regs(struct cxlflash_cfg *cfg, int i)
{
        struct fc_port_bank __iomem *fcpb = get_fc_port_bank(cfg, i);

        return &fcpb->fc_port_regs[CHAN2BANKPORT(i)][0];
}

static inline __be64 __iomem *get_fc_port_luns(struct cxlflash_cfg *cfg, int i)
{
        struct fc_port_bank __iomem *fcpb = get_fc_port_bank(cfg, i);

        return &fcpb->fc_port_luns[CHAN2BANKPORT(i)][0];
}

int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t c, res_hndl_t r, u8 mode);
void cxlflash_list_init(void);
void cxlflash_term_global_luns(void);
void cxlflash_free_errpage(void);
int cxlflash_ioctl(struct scsi_device *sdev, unsigned int cmd,
                   void __user *arg);
void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *cfg);
int cxlflash_mark_contexts_error(struct cxlflash_cfg *cfg);
void cxlflash_term_local_luns(struct cxlflash_cfg *cfg);
void cxlflash_restore_luntable(struct cxlflash_cfg *cfg);

#endif /* ifndef _CXLFLASH_COMMON_H */
@@ -1,177 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * CXL Flash Device Driver
 *
 * Written by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *             Uma Krishnan <ukrishn@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2018 IBM Corporation
 */

#include <misc/cxl.h>

#include "backend.h"

/*
 * The following routines map the cxlflash backend operations to existing CXL
 * kernel API function and are largely simple shims that provide an abstraction
 * for converting generic context and AFU cookies into cxl_context or cxl_afu
 * pointers.
 */

static void __iomem *cxlflash_psa_map(void *ctx_cookie)
{
        return cxl_psa_map(ctx_cookie);
}

static void cxlflash_psa_unmap(void __iomem *addr)
{
        cxl_psa_unmap(addr);
}

static int cxlflash_process_element(void *ctx_cookie)
{
        return cxl_process_element(ctx_cookie);
}

static int cxlflash_map_afu_irq(void *ctx_cookie, int num,
                                irq_handler_t handler, void *cookie, char *name)
{
        return cxl_map_afu_irq(ctx_cookie, num, handler, cookie, name);
}

static void cxlflash_unmap_afu_irq(void *ctx_cookie, int num, void *cookie)
{
        cxl_unmap_afu_irq(ctx_cookie, num, cookie);
}

static u64 cxlflash_get_irq_objhndl(void *ctx_cookie, int irq)
{
        /* Dummy fop for cxl */
        return 0;
}

static int cxlflash_start_context(void *ctx_cookie)
{
        return cxl_start_context(ctx_cookie, 0, NULL);
}

static int cxlflash_stop_context(void *ctx_cookie)
{
        return cxl_stop_context(ctx_cookie);
}

static int cxlflash_afu_reset(void *ctx_cookie)
{
        return cxl_afu_reset(ctx_cookie);
}

static void cxlflash_set_master(void *ctx_cookie)
{
        cxl_set_master(ctx_cookie);
}

static void *cxlflash_get_context(struct pci_dev *dev, void *afu_cookie)
{
        return cxl_get_context(dev);
}

static void *cxlflash_dev_context_init(struct pci_dev *dev, void *afu_cookie)
{
        return cxl_dev_context_init(dev);
}

static int cxlflash_release_context(void *ctx_cookie)
{
        return cxl_release_context(ctx_cookie);
}

static void cxlflash_perst_reloads_same_image(void *afu_cookie, bool image)
{
        cxl_perst_reloads_same_image(afu_cookie, image);
}

static ssize_t cxlflash_read_adapter_vpd(struct pci_dev *dev,
                                         void *buf, size_t count)
{
        return cxl_read_adapter_vpd(dev, buf, count);
}

static int cxlflash_allocate_afu_irqs(void *ctx_cookie, int num)
{
        return cxl_allocate_afu_irqs(ctx_cookie, num);
}

static void cxlflash_free_afu_irqs(void *ctx_cookie)
{
        cxl_free_afu_irqs(ctx_cookie);
}

static void *cxlflash_create_afu(struct pci_dev *dev)
{
        return cxl_pci_to_afu(dev);
}

static void cxlflash_destroy_afu(void *afu)
{
        /* Dummy fop for cxl */
}

static struct file *cxlflash_get_fd(void *ctx_cookie,
                                    struct file_operations *fops, int *fd)
{
        return cxl_get_fd(ctx_cookie, fops, fd);
}

static void *cxlflash_fops_get_context(struct file *file)
{
        return cxl_fops_get_context(file);
}

static int cxlflash_start_work(void *ctx_cookie, u64 irqs)
{
        struct cxl_ioctl_start_work work = { 0 };

        work.num_interrupts = irqs;
        work.flags = CXL_START_WORK_NUM_IRQS;

        return cxl_start_work(ctx_cookie, &work);
}

static int cxlflash_fd_mmap(struct file *file, struct vm_area_struct *vm)
{
        return cxl_fd_mmap(file, vm);
}

static int cxlflash_fd_release(struct inode *inode, struct file *file)
{
        return cxl_fd_release(inode, file);
}

const struct cxlflash_backend_ops cxlflash_cxl_ops = {
        .module                 = THIS_MODULE,
        .psa_map                = cxlflash_psa_map,
        .psa_unmap              = cxlflash_psa_unmap,
        .process_element        = cxlflash_process_element,
        .map_afu_irq            = cxlflash_map_afu_irq,
        .unmap_afu_irq          = cxlflash_unmap_afu_irq,
        .get_irq_objhndl        = cxlflash_get_irq_objhndl,
        .start_context          = cxlflash_start_context,
        .stop_context           = cxlflash_stop_context,
        .afu_reset              = cxlflash_afu_reset,
        .set_master             = cxlflash_set_master,
        .get_context            = cxlflash_get_context,
        .dev_context_init       = cxlflash_dev_context_init,
        .release_context        = cxlflash_release_context,
        .perst_reloads_same_image = cxlflash_perst_reloads_same_image,
        .read_adapter_vpd       = cxlflash_read_adapter_vpd,
        .allocate_afu_irqs      = cxlflash_allocate_afu_irqs,
        .free_afu_irqs          = cxlflash_free_afu_irqs,
        .create_afu             = cxlflash_create_afu,
        .destroy_afu            = cxlflash_destroy_afu,
        .get_fd                 = cxlflash_get_fd,
        .fops_get_context       = cxlflash_fops_get_context,
        .start_work             = cxlflash_start_work,
        .fd_mmap                = cxlflash_fd_mmap,
        .fd_release             = cxlflash_fd_release,
};
@@ -1,278 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * CXL Flash Device Driver
 *
 * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
 *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2015 IBM Corporation
 */

#include <linux/unaligned.h>

#include <linux/interrupt.h>
#include <linux/pci.h>

#include <scsi/scsi_host.h>
#include <uapi/scsi/cxlflash_ioctl.h>

#include "sislite.h"
#include "common.h"
#include "vlun.h"
#include "superpipe.h"

/**
 * create_local() - allocate and initialize a local LUN information structure
 * @sdev:       SCSI device associated with LUN.
 * @wwid:       World Wide Node Name for LUN.
 *
 * Return: Allocated local llun_info structure on success, NULL on failure
 */
static struct llun_info *create_local(struct scsi_device *sdev, u8 *wwid)
{
        struct cxlflash_cfg *cfg = shost_priv(sdev->host);
        struct device *dev = &cfg->dev->dev;
        struct llun_info *lli = NULL;

        lli = kzalloc(sizeof(*lli), GFP_KERNEL);
        if (unlikely(!lli)) {
                dev_err(dev, "%s: could not allocate lli\n", __func__);
                goto out;
        }

        lli->sdev = sdev;
        lli->host_no = sdev->host->host_no;
        lli->in_table = false;

        memcpy(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN);
out:
        return lli;
}

/**
 * create_global() - allocate and initialize a global LUN information structure
 * @sdev:       SCSI device associated with LUN.
 * @wwid:       World Wide Node Name for LUN.
 *
 * Return: Allocated global glun_info structure on success, NULL on failure
 */
static struct glun_info *create_global(struct scsi_device *sdev, u8 *wwid)
{
        struct cxlflash_cfg *cfg = shost_priv(sdev->host);
        struct device *dev = &cfg->dev->dev;
        struct glun_info *gli = NULL;

        gli = kzalloc(sizeof(*gli), GFP_KERNEL);
        if (unlikely(!gli)) {
                dev_err(dev, "%s: could not allocate gli\n", __func__);
                goto out;
        }

        mutex_init(&gli->mutex);
        memcpy(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN);
out:
        return gli;
}

/**
 * lookup_local() - find a local LUN information structure by WWID
 * @cfg:        Internal structure associated with the host.
 * @wwid:       WWID associated with LUN.
 *
 * Return: Found local lun_info structure on success, NULL on failure
 */
static struct llun_info *lookup_local(struct cxlflash_cfg *cfg, u8 *wwid)
{
        struct llun_info *lli, *temp;

        list_for_each_entry_safe(lli, temp, &cfg->lluns, list)
                if (!memcmp(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN))
                        return lli;

        return NULL;
}

/**
 * lookup_global() - find a global LUN information structure by WWID
 * @wwid:       WWID associated with LUN.
 *
 * Return: Found global lun_info structure on success, NULL on failure
 */
static struct glun_info *lookup_global(u8 *wwid)
{
        struct glun_info *gli, *temp;

        list_for_each_entry_safe(gli, temp, &global.gluns, list)
                if (!memcmp(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN))
                        return gli;

        return NULL;
}

/**
 * find_and_create_lun() - find or create a local LUN information structure
 * @sdev:       SCSI device associated with LUN.
 * @wwid:       WWID associated with LUN.
 *
 * The LUN is kept both in a local list (per adapter) and in a global list
 * (across all adapters). Certain attributes of the LUN are local to the
 * adapter (such as index, port selection mask, etc.).
 *
 * The block allocation map is shared across all adapters (i.e. associated
 * wih the global list). Since different attributes are associated with
 * the per adapter and global entries, allocate two separate structures for each
 * LUN (one local, one global).
 *
 * Keep a pointer back from the local to the global entry.
 *
 * This routine assumes the caller holds the global mutex.
 *
 * Return: Found/Allocated local lun_info structure on success, NULL on failure
 */
static struct llun_info *find_and_create_lun(struct scsi_device *sdev, u8 *wwid)
{
        struct cxlflash_cfg *cfg = shost_priv(sdev->host);
        struct device *dev = &cfg->dev->dev;
        struct llun_info *lli = NULL;
        struct glun_info *gli = NULL;
|
||||
|
||||
if (unlikely(!wwid))
|
||||
goto out;
|
||||
|
||||
lli = lookup_local(cfg, wwid);
|
||||
if (lli)
|
||||
goto out;
|
||||
|
||||
lli = create_local(sdev, wwid);
|
||||
if (unlikely(!lli))
|
||||
goto out;
|
||||
|
||||
gli = lookup_global(wwid);
|
||||
if (gli) {
|
||||
lli->parent = gli;
|
||||
list_add(&lli->list, &cfg->lluns);
|
||||
goto out;
|
||||
}
|
||||
|
||||
gli = create_global(sdev, wwid);
|
||||
if (unlikely(!gli)) {
|
||||
kfree(lli);
|
||||
lli = NULL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
lli->parent = gli;
|
||||
list_add(&lli->list, &cfg->lluns);
|
||||
|
||||
list_add(&gli->list, &global.gluns);
|
||||
|
||||
out:
|
||||
dev_dbg(dev, "%s: returning lli=%p, gli=%p\n", __func__, lli, gli);
|
||||
return lli;
|
||||
}

/**
 * cxlflash_term_local_luns() - Delete all entries from local LUN list, free.
 * @cfg:	Internal structure associated with the host.
 */
void cxlflash_term_local_luns(struct cxlflash_cfg *cfg)
{
	struct llun_info *lli, *temp;

	mutex_lock(&global.mutex);
	list_for_each_entry_safe(lli, temp, &cfg->lluns, list) {
		list_del(&lli->list);
		kfree(lli);
	}
	mutex_unlock(&global.mutex);
}

/**
 * cxlflash_list_init() - initializes the global LUN list
 */
void cxlflash_list_init(void)
{
	INIT_LIST_HEAD(&global.gluns);
	mutex_init(&global.mutex);
	global.err_page = NULL;
}

/**
 * cxlflash_term_global_luns() - frees resources associated with global LUN list
 */
void cxlflash_term_global_luns(void)
{
	struct glun_info *gli, *temp;

	mutex_lock(&global.mutex);
	list_for_each_entry_safe(gli, temp, &global.gluns, list) {
		list_del(&gli->list);
		cxlflash_ba_terminate(&gli->blka.ba_lun);
		kfree(gli);
	}
	mutex_unlock(&global.mutex);
}

/**
 * cxlflash_manage_lun() - handles LUN management activities
 * @sdev:	SCSI device associated with LUN.
 * @arg:	Manage ioctl data structure.
 *
 * This routine is used to notify the driver about a LUN's WWID and associate
 * SCSI devices (sdev) with a global LUN instance. Additionally it serves to
 * change a LUN's operating mode: legacy or superpipe.
 *
 * Return: 0 on success, -errno on failure
 */
int cxlflash_manage_lun(struct scsi_device *sdev, void *arg)
{
	struct dk_cxlflash_manage_lun *manage = arg;
	struct cxlflash_cfg *cfg = shost_priv(sdev->host);
	struct device *dev = &cfg->dev->dev;
	struct llun_info *lli = NULL;
	int rc = 0;
	u64 flags = manage->hdr.flags;
	u32 chan = sdev->channel;

	mutex_lock(&global.mutex);
	lli = find_and_create_lun(sdev, manage->wwid);
	dev_dbg(dev, "%s: WWID=%016llx%016llx, flags=%016llx lli=%p\n",
		__func__, get_unaligned_be64(&manage->wwid[0]),
		get_unaligned_be64(&manage->wwid[8]), manage->hdr.flags, lli);
	if (unlikely(!lli)) {
		rc = -ENOMEM;
		goto out;
	}

	if (flags & DK_CXLFLASH_MANAGE_LUN_ENABLE_SUPERPIPE) {
		/*
		 * Update port selection mask based upon channel, store off LUN
		 * in unpacked, AFU-friendly format, and hang LUN reference in
		 * the sdev.
		 */
		lli->port_sel |= CHAN2PORTMASK(chan);
		lli->lun_id[chan] = lun_to_lunid(sdev->lun);
		sdev->hostdata = lli;
	} else if (flags & DK_CXLFLASH_MANAGE_LUN_DISABLE_SUPERPIPE) {
		if (lli->parent->mode != MODE_NONE)
			rc = -EBUSY;
		else {
			/*
			 * Clean up local LUN for this port and reset table
			 * tracking when no more references exist.
			 */
			sdev->hostdata = NULL;
			lli->port_sel &= ~CHAN2PORTMASK(chan);
			if (lli->port_sel == 0U)
				lli->in_table = false;
		}
	}

	dev_dbg(dev, "%s: port_sel=%08x chan=%u lun_id=%016llx\n",
		__func__, lli->port_sel, chan, lli->lun_id[chan]);

out:
	mutex_unlock(&global.mutex);
	dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
	return rc;
}
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * CXL Flash Device Driver
 *
 * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
 *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2015 IBM Corporation
 */

#ifndef _CXLFLASH_MAIN_H
#define _CXLFLASH_MAIN_H

#include <linux/list.h>
#include <linux/types.h>
#include <scsi/scsi.h>
#include <scsi/scsi_device.h>

#include "backend.h"

#define CXLFLASH_NAME		"cxlflash"
#define CXLFLASH_ADAPTER_NAME	"IBM POWER CXL Flash Adapter"
#define CXLFLASH_MAX_ADAPTERS	32

#define PCI_DEVICE_ID_IBM_CORSA		0x04F0
#define PCI_DEVICE_ID_IBM_FLASH_GT	0x0600
#define PCI_DEVICE_ID_IBM_BRIARD	0x0624

/* Since there is only one target, make it 0 */
#define CXLFLASH_TARGET		0
#define CXLFLASH_MAX_CDB_LEN	16

/* Really only one target per bus since the Texan is directly attached */
#define CXLFLASH_MAX_NUM_TARGETS_PER_BUS	1
#define CXLFLASH_MAX_NUM_LUNS_PER_TARGET	65536

#define CXLFLASH_PCI_ERROR_RECOVERY_TIMEOUT	(120 * HZ)

/* FC defines */
#define FC_MTIP_CMDCONFIG	0x010
#define FC_MTIP_STATUS		0x018
#define FC_MAX_NUM_LUNS	0x080	/* Max LUNs host can provision for port */
#define FC_CUR_NUM_LUNS	0x088	/* Cur number LUNs provisioned for port */
#define FC_MAX_CAP_PORT	0x090	/* Max capacity all LUNs for port (4K blocks) */
#define FC_CUR_CAP_PORT	0x098	/* Cur capacity all LUNs for port (4K blocks) */

#define FC_PNAME	0x300
#define FC_CONFIG	0x320
#define FC_CONFIG2	0x328
#define FC_STATUS	0x330
#define FC_ERROR	0x380
#define FC_ERRCAP	0x388
#define FC_ERRMSK	0x390
#define FC_CNT_CRCERR	0x538
#define FC_CRC_THRESH	0x580

#define FC_MTIP_CMDCONFIG_ONLINE	0x20ULL
#define FC_MTIP_CMDCONFIG_OFFLINE	0x40ULL

#define FC_MTIP_STATUS_MASK	0x30ULL
#define FC_MTIP_STATUS_ONLINE	0x20ULL
#define FC_MTIP_STATUS_OFFLINE	0x10ULL

/* TIMEOUT and RETRY definitions */

/* AFU command timeout values */
#define MC_AFU_SYNC_TIMEOUT	5	/* 5 secs */
#define MC_LUN_PROV_TIMEOUT	5	/* 5 secs */
#define MC_AFU_DEBUG_TIMEOUT	5	/* 5 secs */

/* AFU command room retry limit */
#define MC_ROOM_RETRY_CNT	10

/* FC CRC clear periodic timer */
#define MC_CRC_THRESH		100	/* threshold in 5 mins */

#define FC_PORT_STATUS_RETRY_CNT	100	/* 100 100ms retries = 10 seconds */
#define FC_PORT_STATUS_RETRY_INTERVAL_US 100000	/* microseconds */

/* VPD defines */
#define CXLFLASH_VPD_LEN	256
#define WWPN_LEN		16
#define WWPN_BUF_LEN		(WWPN_LEN + 1)

enum undo_level {
	UNDO_NOOP = 0,
	FREE_IRQ,
	UNMAP_ONE,
	UNMAP_TWO,
	UNMAP_THREE
};

struct dev_dependent_vals {
	u64 max_sectors;
	u64 flags;
#define CXLFLASH_NOTIFY_SHUTDOWN	0x0000000000000001ULL
#define CXLFLASH_WWPN_VPD_REQUIRED	0x0000000000000002ULL
#define CXLFLASH_OCXL_DEV		0x0000000000000004ULL
};

static inline const struct cxlflash_backend_ops *
cxlflash_assign_ops(struct dev_dependent_vals *ddv)
{
	const struct cxlflash_backend_ops *ops = NULL;

#ifdef CONFIG_OCXL_BASE
	if (ddv->flags & CXLFLASH_OCXL_DEV)
		ops = &cxlflash_ocxl_ops;
#endif

#ifdef CONFIG_CXL_BASE
	if (!(ddv->flags & CXLFLASH_OCXL_DEV))
		ops = &cxlflash_cxl_ops;
#endif

	return ops;
}

struct asyc_intr_info {
	u64 status;
	char *desc;
	u8 port;
	u8 action;
#define CLR_FC_ERROR	0x01
#define LINK_RESET	0x02
#define SCAN_HOST	0x04
};

#endif /* _CXLFLASH_MAIN_H */
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * CXL Flash Device Driver
 *
 * Written by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *             Uma Krishnan <ukrishn@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2018 IBM Corporation
 */

#define OCXL_MAX_IRQS	4	/* Max interrupts per process */

struct ocxlflash_irqs {
	int hwirq;
	u32 virq;
	void __iomem *vtrig;
};

/* OCXL hardware AFU associated with the host */
struct ocxl_hw_afu {
	struct ocxlflash_context *ocxl_ctx;	/* Host context */
	struct pci_dev *pdev;			/* PCI device */
	struct device *dev;			/* Generic device */
	bool perst_same_image;			/* Same image loaded on perst */

	struct ocxl_fn_config fcfg;	/* DVSEC config of the function */
	struct ocxl_afu_config acfg;	/* AFU configuration data */

	int fn_actag_base;		/* Function acTag base */
	int fn_actag_enabled;		/* Function acTag number enabled */
	int afu_actag_base;		/* AFU acTag base */
	int afu_actag_enabled;		/* AFU acTag number enabled */

	phys_addr_t ppmmio_phys;	/* Per process MMIO space */
	phys_addr_t gmmio_phys;		/* Global AFU MMIO space */
	void __iomem *gmmio_virt;	/* Global MMIO map */

	void *link_token;		/* Link token for the SPA */
	struct idr idr;			/* IDR to manage contexts */
	int max_pasid;			/* Maximum number of contexts */
	bool is_present;		/* Function has AFUs defined */
};

enum ocxlflash_ctx_state {
	CLOSED,
	OPENED,
	STARTED
};

struct ocxlflash_context {
	struct ocxl_hw_afu *hw_afu;	/* HW AFU back pointer */
	struct address_space *mapping;	/* Mapping for pseudo filesystem */
	bool master;			/* Whether this is a master context */
	int pe;				/* Process element */

	phys_addr_t psn_phys;		/* Process mapping */
	u64 psn_size;			/* Process mapping size */

	spinlock_t slock;		/* Protects irq/fault/event updates */
	wait_queue_head_t wq;		/* Wait queue for poll and interrupts */
	struct mutex state_mutex;	/* Mutex to update context state */
	enum ocxlflash_ctx_state state;	/* Context state */

	struct ocxlflash_irqs *irqs;	/* Pointer to array of structures */
	int num_irqs;			/* Number of interrupts */
	bool pending_irq;		/* Pending interrupt on the context */
	ulong irq_bitmap;		/* Bits indicating pending irq num */

	u64 fault_addr;			/* Address that triggered the fault */
	u64 fault_dsisr;		/* Value of dsisr register at fault */
	bool pending_fault;		/* Pending translation fault */
};
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * CXL Flash Device Driver
 *
 * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
 *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2015 IBM Corporation
 */

#ifndef _SISLITE_H
#define _SISLITE_H

#include <linux/types.h>

typedef u16 ctx_hndl_t;
typedef u32 res_hndl_t;

#define SIZE_4K		4096
#define SIZE_64K	65536

/*
 * IOARCB: 64 bytes, min 16 byte alignment required, host native endianness
 * except for SCSI CDB which remains big endian per SCSI standards.
 */
struct sisl_ioarcb {
	u16 ctx_id;	/* ctx_hndl_t */
	u16 req_flags;
#define SISL_REQ_FLAGS_RES_HNDL		0x8000U	/* bit 0 (MSB) */
#define SISL_REQ_FLAGS_PORT_LUN_ID	0x0000U

#define SISL_REQ_FLAGS_SUP_UNDERRUN	0x4000U	/* bit 1 */

#define SISL_REQ_FLAGS_TIMEOUT_SECS	0x0000U	/* bits 8,9 */
#define SISL_REQ_FLAGS_TIMEOUT_MSECS	0x0040U
#define SISL_REQ_FLAGS_TIMEOUT_USECS	0x0080U
#define SISL_REQ_FLAGS_TIMEOUT_CYCLES	0x00C0U

#define SISL_REQ_FLAGS_TMF_CMD		0x0004u	/* bit 13 */

#define SISL_REQ_FLAGS_AFU_CMD		0x0002U	/* bit 14 */

#define SISL_REQ_FLAGS_HOST_WRITE	0x0001U	/* bit 15 (LSB) */
#define SISL_REQ_FLAGS_HOST_READ	0x0000U

	union {
		u32 res_hndl;	/* res_hndl_t */
		u32 port_sel;	/* this is a selection mask:
				 * 0x1 -> port#0 can be selected,
				 * 0x2 -> port#1 can be selected.
				 * Can be bitwise ORed.
				 */
	};
	u64 lun_id;
	u32 data_len;	/* 4K for read/write */
	u32 ioadl_len;
	union {
		u64 data_ea;	/* min 16 byte aligned */
		u64 ioadl_ea;
	};
	u8 msi;		/* LISN to send on RRQ write */
#define SISL_MSI_CXL_PFAULT	0	/* reserved for CXL page faults */
#define SISL_MSI_SYNC_ERROR	1	/* recommended for AFU sync error */
#define SISL_MSI_RRQ_UPDATED	2	/* recommended for IO completion */
#define SISL_MSI_ASYNC_ERROR	3	/* master only - for AFU async error */

	u8 rrq;		/* 0 for a single RRQ */
	u16 timeout;	/* in units specified by req_flags */
	u32 rsvd1;
	u8 cdb[16];	/* must be in big endian */
#define SISL_AFU_CMD_SYNC		0xC0	/* AFU sync command */
#define SISL_AFU_CMD_LUN_PROVISION	0xD0	/* AFU LUN provision command */
#define SISL_AFU_CMD_DEBUG		0xE0	/* AFU debug command */

#define SISL_AFU_LUN_PROVISION_CREATE	0x00	/* LUN provision create type */
#define SISL_AFU_LUN_PROVISION_DELETE	0x01	/* LUN provision delete type */

	union {
		u64 reserved;			/* Reserved for IOARRIN mode */
		struct sisl_ioasa *ioasa;	/* IOASA EA for SQ Mode */
	};
} __packed;

struct sisl_rc {
	u8 flags;
#define SISL_RC_FLAGS_SENSE_VALID		0x80U
#define SISL_RC_FLAGS_FCP_RSP_CODE_VALID	0x40U
#define SISL_RC_FLAGS_OVERRUN			0x20U
#define SISL_RC_FLAGS_UNDERRUN			0x10U

	u8 afu_rc;
#define SISL_AFU_RC_RHT_INVALID		0x01U	/* user error */
#define SISL_AFU_RC_RHT_UNALIGNED	0x02U	/* should never happen */
#define SISL_AFU_RC_RHT_OUT_OF_BOUNDS	0x03u	/* user error */
#define SISL_AFU_RC_RHT_DMA_ERR		0x04u	/* see afu_extra
						 * may retry if afu_retry is off
						 * possible on master exit
						 */
#define SISL_AFU_RC_RHT_RW_PERM		0x05u	/* no RW perms, user error */
#define SISL_AFU_RC_LXT_UNALIGNED	0x12U	/* should never happen */
#define SISL_AFU_RC_LXT_OUT_OF_BOUNDS	0x13u	/* user error */
#define SISL_AFU_RC_LXT_DMA_ERR		0x14u	/* see afu_extra
						 * may retry if afu_retry is off
						 * possible on master exit
						 */
#define SISL_AFU_RC_LXT_RW_PERM		0x15u	/* no RW perms, user error */

#define SISL_AFU_RC_NOT_XLATE_HOST	0x1au	/* possible if master exited */

	/* NO_CHANNELS means the FC ports selected by dest_port in
	 * IOARCB or in the LXT entry are down when the AFU tried to select
	 * a FC port. If the port went down on an active IO, it will set
	 * fc_rc to 0x54 (NOLOGI) or 0x57 (LINKDOWN) instead.
	 */
#define SISL_AFU_RC_NO_CHANNELS		0x20U	/* see afu_extra, may retry */
#define SISL_AFU_RC_CAP_VIOLATION	0x21U	/* either user error or
						 * afu reset/master restart
						 */
#define SISL_AFU_RC_OUT_OF_DATA_BUFS	0x30U	/* always retry */
#define SISL_AFU_RC_DATA_DMA_ERR	0x31U	/* see afu_extra
						 * may retry if afu_retry is off
						 */

	u8 scsi_rc;	/* SCSI status byte, retry as appropriate */
#define SISL_SCSI_RC_CHECK	0x02U
#define SISL_SCSI_RC_BUSY	0x08u

	u8 fc_rc;	/* retry */
	/*
	 * We should only see fc_rc=0x57 (LINKDOWN) or 0x54 (NOLOGI) for
	 * commands that are in flight when a link goes down or is logged out.
	 * If the link is down or logged out before AFU selects the port, either
	 * it will choose the other port or we will get afu_rc=0x20 (no_channel)
	 * if there is no valid port to use.
	 *
	 * ABORTPEND/ABORTOK/ABORTFAIL/TGTABORT can be retried, typically these
	 * would happen if a frame is dropped and something times out.
	 * NOLOGI or LINKDOWN can be retried if the other port is up.
	 * RESIDERR can be retried as well.
	 *
	 * ABORTFAIL might indicate that lots of frames are getting CRC errors.
	 * So it may be retried once and the link reset if it happens again.
	 * The link can also be reset on the CRC error threshold interrupt.
	 */
#define SISL_FC_RC_ABORTPEND	0x52	/* exchange timeout or abort request */
#define SISL_FC_RC_WRABORTPEND	0x53	/* due to write XFER_RDY invalid */
#define SISL_FC_RC_NOLOGI	0x54	/* port not logged in, in-flight cmds */
#define SISL_FC_RC_NOEXP	0x55	/* FC protocol error or HW bug */
#define SISL_FC_RC_INUSE	0x56	/* tag already in use, HW bug */
#define SISL_FC_RC_LINKDOWN	0x57	/* link down, in-flight cmds */
#define SISL_FC_RC_ABORTOK	0x58	/* pending abort completed w/success */
#define SISL_FC_RC_ABORTFAIL	0x59	/* pending abort completed w/fail */
#define SISL_FC_RC_RESID	0x5A	/* ioasa underrun/overrun flags set */
#define SISL_FC_RC_RESIDERR	0x5B	/* actual data len does not match SCSI
					 * reported len, possibly due to dropped
					 * frames
					 */
#define SISL_FC_RC_TGTABORT	0x5C	/* command aborted by target */
};

#define SISL_SENSE_DATA_LEN	20	/* Sense data length */
#define SISL_WWID_DATA_LEN	16	/* WWID data length */

/*
 * IOASA: 64 bytes & must follow IOARCB, min 16 byte alignment required,
 * host native endianness
 */
struct sisl_ioasa {
	union {
		struct sisl_rc rc;
		u32 ioasc;
#define SISL_IOASC_GOOD_COMPLETION	0x00000000U
	};

	union {
		u32 resid;
		u32 lunid_hi;
	};

	u8 port;
	u8 afu_extra;
	/* when afu_rc=0x04, 0x14, 0x31 (_xxx_DMA_ERR):
	 * afu_extra contains PSL response code. Useful codes are:
	 */
#define SISL_AFU_DMA_ERR_PAGE_IN	0x0A	/* AFU_retry_on_pagein Action
						 * Enabled  N/A
						 * Disabled retry
						 */
#define SISL_AFU_DMA_ERR_INVALID_EA	0x0B	/* this is a hard error
						 * afu_rc	Implies
						 * 0x04, 0x14	master exit.
						 * 0x31		user error.
						 */
	/* when afu_rc=0x20 (no channels):
	 * afu_extra bits [4:5]: available portmask, [6:7]: requested portmask.
	 */
#define SISL_AFU_NO_CLANNELS_AMASK(afu_extra)	(((afu_extra) & 0x0C) >> 2)
#define SISL_AFU_NO_CLANNELS_RMASK(afu_extra)	((afu_extra) & 0x03)

	u8 scsi_extra;
	u8 fc_extra;

	union {
		u8 sense_data[SISL_SENSE_DATA_LEN];
		struct {
			u32 lunid_lo;
			u8 wwid[SISL_WWID_DATA_LEN];
		};
	};

	/* These fields are defined by the SISlite architecture for the
	 * host to use as they see fit for their implementation.
	 */
	union {
		u64 host_use[4];
		u8 host_use_b[32];
	};
} __packed;

#define SISL_RESP_HANDLE_T_BIT	0x1ULL	/* Toggle bit */

/* MMIO space is required to support only 64-bit access */

/*
 * This AFU has two mechanisms to deal with endian-ness.
 * One is a global configuration (in the afu_config) register
 * below that specifies the endian-ness of the host.
 * The other is a per context (i.e. application) specification
 * controlled by the endian_ctrl field here. Since the master
 * context is one such application the master context's
 * endian-ness is set to be the same as the host.
 *
 * As per the SISlite spec, the MMIO registers are always
 * big endian.
 */
#define SISL_ENDIAN_CTRL_BE	0x8000000000000080ULL
#define SISL_ENDIAN_CTRL_LE	0x0000000000000000ULL

#ifdef __BIG_ENDIAN
#define SISL_ENDIAN_CTRL	SISL_ENDIAN_CTRL_BE
#else
#define SISL_ENDIAN_CTRL	SISL_ENDIAN_CTRL_LE
#endif

/* per context host transport MMIO */
struct sisl_host_map {
	__be64 endian_ctrl;	/* Per context Endian Control. The AFU will
				 * operate on whatever the context is of the
				 * host application.
				 */

	__be64 intr_status;	/* this sends LISN# programmed in ctx_ctrl.
				 * Only recovery in a PERM_ERR is a context
				 * exit since there is no way to tell which
				 * command caused the error.
				 */
#define SISL_ISTATUS_PERM_ERR_LISN_3_EA		0x0400ULL /* b53, user error */
#define SISL_ISTATUS_PERM_ERR_LISN_2_EA		0x0200ULL /* b54, user error */
#define SISL_ISTATUS_PERM_ERR_LISN_1_EA		0x0100ULL /* b55, user error */
#define SISL_ISTATUS_PERM_ERR_LISN_3_PASID	0x0080ULL /* b56, user error */
#define SISL_ISTATUS_PERM_ERR_LISN_2_PASID	0x0040ULL /* b57, user error */
#define SISL_ISTATUS_PERM_ERR_LISN_1_PASID	0x0020ULL /* b58, user error */
#define SISL_ISTATUS_PERM_ERR_CMDROOM		0x0010ULL /* b59, user error */
#define SISL_ISTATUS_PERM_ERR_RCB_READ		0x0008ULL /* b60, user error */
#define SISL_ISTATUS_PERM_ERR_SA_WRITE		0x0004ULL /* b61, user error */
#define SISL_ISTATUS_PERM_ERR_RRQ_WRITE		0x0002ULL /* b62, user error */
	/* Page in wait accessing RCB/IOASA/RRQ is reported in b63.
	 * Same error in data/LXT/RHT access is reported via IOASA.
	 */
#define SISL_ISTATUS_TEMP_ERR_PAGEIN		0x0001ULL /* b63, can only be
							   * generated when AFU
							   * auto retry is
							   * disabled. If user
							   * can determine the
							   * command that caused
							   * the error, it can
							   * be retried.
							   */
#define SISL_ISTATUS_UNMASK	(0x07FFULL)		/* 1 means unmasked */
#define SISL_ISTATUS_MASK	~(SISL_ISTATUS_UNMASK)	/* 1 means masked */

	__be64 intr_clear;
	__be64 intr_mask;
	__be64 ioarrin;		/* only write what cmd_room permits */
	__be64 rrq_start;	/* start & end are both inclusive */
	__be64 rrq_end;		/* write sequence: start followed by end */
	__be64 cmd_room;
	__be64 ctx_ctrl;	/* least significant byte or b56:63 is LISN# */
#define SISL_CTX_CTRL_UNMAP_SECTOR	0x8000000000000000ULL /* b0 */
#define SISL_CTX_CTRL_LISN_MASK		(0xFFULL)
	__be64 mbox_w;		/* restricted use */
	__be64 sq_start;	/* Submission Queue (R/W): write sequence and */
	__be64 sq_end;		/* inclusion semantics are the same as RRQ */
	__be64 sq_head;		/* Submission Queue Head (R): for debugging */
	__be64 sq_tail;		/* Submission Queue TAIL (R/W): next IOARCB */
	__be64 sq_ctx_reset;	/* Submission Queue Context Reset (R/W) */
};

/* per context provisioning & control MMIO */
struct sisl_ctrl_map {
	__be64 rht_start;
	__be64 rht_cnt_id;
	/* both cnt & ctx_id args must be ULL */
#define SISL_RHT_CNT_ID(cnt, ctx_id)	(((cnt) << 48) | ((ctx_id) << 32))

	__be64 ctx_cap;	/* afu_rc below is when the capability is violated */
#define SISL_CTX_CAP_PROXY_ISSUE	0x8000000000000000ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_REAL_MODE		0x4000000000000000ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_HOST_XLATE		0x2000000000000000ULL /* afu_rc 0x1a */
#define SISL_CTX_CAP_PROXY_TARGET	0x1000000000000000ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_AFU_CMD		0x0000000000000008ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_GSCSI_CMD		0x0000000000000004ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_WRITE_CMD		0x0000000000000002ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_READ_CMD		0x0000000000000001ULL /* afu_rc 0x21 */
	__be64 mbox_r;
	__be64 lisn_pasid[2];
	/* pasid _a arg must be ULL */
#define SISL_LISN_PASID(_a, _b)	(((_a) << 32) | (_b))
	__be64 lisn_ea[3];
};

/* single copy global regs */
struct sisl_global_regs {
	__be64 aintr_status;
	/*
	 * In cxlflash, FC port/link are arranged in port pairs, each
	 * gets a byte of status:
	 *
	 * *_OTHER: other err, FC_ERRCAP[31:20]
	 * *_LOGO: target sent FLOGI/PLOGI/LOGO while logged in
	 * *_CRC_T: CRC threshold exceeded
	 * *_LOGI_R: login state machine timed out and retrying
	 * *_LOGI_F: login failed, FC_ERROR[19:0]
	 * *_LOGI_S: login succeeded
	 * *_LINK_DN: link online to offline
	 * *_LINK_UP: link offline to online
	 */
#define SISL_ASTATUS_FC2_OTHER		0x80000000ULL /* b32 */
#define SISL_ASTATUS_FC2_LOGO		0x40000000ULL /* b33 */
#define SISL_ASTATUS_FC2_CRC_T		0x20000000ULL /* b34 */
#define SISL_ASTATUS_FC2_LOGI_R		0x10000000ULL /* b35 */
#define SISL_ASTATUS_FC2_LOGI_F		0x08000000ULL /* b36 */
#define SISL_ASTATUS_FC2_LOGI_S		0x04000000ULL /* b37 */
#define SISL_ASTATUS_FC2_LINK_DN	0x02000000ULL /* b38 */
#define SISL_ASTATUS_FC2_LINK_UP	0x01000000ULL /* b39 */

#define SISL_ASTATUS_FC3_OTHER		0x00800000ULL /* b40 */
#define SISL_ASTATUS_FC3_LOGO		0x00400000ULL /* b41 */
#define SISL_ASTATUS_FC3_CRC_T		0x00200000ULL /* b42 */
#define SISL_ASTATUS_FC3_LOGI_R		0x00100000ULL /* b43 */
#define SISL_ASTATUS_FC3_LOGI_F		0x00080000ULL /* b44 */
#define SISL_ASTATUS_FC3_LOGI_S		0x00040000ULL /* b45 */
#define SISL_ASTATUS_FC3_LINK_DN	0x00020000ULL /* b46 */
#define SISL_ASTATUS_FC3_LINK_UP	0x00010000ULL /* b47 */

#define SISL_ASTATUS_FC0_OTHER		0x00008000ULL /* b48 */
#define SISL_ASTATUS_FC0_LOGO		0x00004000ULL /* b49 */
#define SISL_ASTATUS_FC0_CRC_T		0x00002000ULL /* b50 */
#define SISL_ASTATUS_FC0_LOGI_R		0x00001000ULL /* b51 */
#define SISL_ASTATUS_FC0_LOGI_F		0x00000800ULL /* b52 */
#define SISL_ASTATUS_FC0_LOGI_S		0x00000400ULL /* b53 */
#define SISL_ASTATUS_FC0_LINK_DN	0x00000200ULL /* b54 */
#define SISL_ASTATUS_FC0_LINK_UP	0x00000100ULL /* b55 */

#define SISL_ASTATUS_FC1_OTHER		0x00000080ULL /* b56 */
#define SISL_ASTATUS_FC1_LOGO		0x00000040ULL /* b57 */
#define SISL_ASTATUS_FC1_CRC_T		0x00000020ULL /* b58 */
#define SISL_ASTATUS_FC1_LOGI_R		0x00000010ULL /* b59 */
#define SISL_ASTATUS_FC1_LOGI_F		0x00000008ULL /* b60 */
#define SISL_ASTATUS_FC1_LOGI_S		0x00000004ULL /* b61 */
#define SISL_ASTATUS_FC1_LINK_DN	0x00000002ULL /* b62 */
#define SISL_ASTATUS_FC1_LINK_UP	0x00000001ULL /* b63 */

#define SISL_FC_INTERNAL_UNMASK	0x0000000300000000ULL /* 1 means unmasked */
#define SISL_FC_INTERNAL_MASK	~(SISL_FC_INTERNAL_UNMASK)
#define SISL_FC_INTERNAL_SHIFT	32

#define SISL_FC_SHUTDOWN_NORMAL		0x0000000000000010ULL
#define SISL_FC_SHUTDOWN_ABRUPT		0x0000000000000020ULL

#define SISL_STATUS_SHUTDOWN_ACTIVE	0x0000000000000010ULL
#define SISL_STATUS_SHUTDOWN_COMPLETE	0x0000000000000020ULL

#define SISL_ASTATUS_UNMASK	0xFFFFFFFFULL		/* 1 means unmasked */
#define SISL_ASTATUS_MASK	~(SISL_ASTATUS_UNMASK)	/* 1 means masked */

	__be64 aintr_clear;
	__be64 aintr_mask;
	__be64 afu_ctrl;
	__be64 afu_hb;
	__be64 afu_scratch_pad;
	__be64 afu_port_sel;
#define SISL_AFUCONF_AR_IOARCB	0x4000ULL
#define SISL_AFUCONF_AR_LXT	0x2000ULL
#define SISL_AFUCONF_AR_RHT	0x1000ULL
#define SISL_AFUCONF_AR_DATA	0x0800ULL
#define SISL_AFUCONF_AR_RSRC	0x0400ULL
#define SISL_AFUCONF_AR_IOASA	0x0200ULL
#define SISL_AFUCONF_AR_RRQ	0x0100ULL
/* Aggregate all Auto Retry Bits */
#define SISL_AFUCONF_AR_ALL	(SISL_AFUCONF_AR_IOARCB|SISL_AFUCONF_AR_LXT| \
				 SISL_AFUCONF_AR_RHT|SISL_AFUCONF_AR_DATA| \
				 SISL_AFUCONF_AR_RSRC|SISL_AFUCONF_AR_IOASA| \
				 SISL_AFUCONF_AR_RRQ)
#ifdef __BIG_ENDIAN
#define SISL_AFUCONF_ENDIAN	0x0000ULL
#else
#define SISL_AFUCONF_ENDIAN	0x0020ULL
#endif
#define SISL_AFUCONF_MBOX_CLR_READ	0x0010ULL
	__be64 afu_config;
	__be64 rsvd[0xf8];
	__le64 afu_version;
	__be64 interface_version;
#define SISL_INTVER_CAP_SHIFT			16
#define SISL_INTVER_MAJ_SHIFT			8
#define SISL_INTVER_CAP_MASK			0xFFFFFFFF00000000ULL
#define SISL_INTVER_MAJ_MASK			0x00000000FFFF0000ULL
#define SISL_INTVER_MIN_MASK			0x000000000000FFFFULL
#define SISL_INTVER_CAP_IOARRIN_CMD_MODE	0x800000000000ULL
#define SISL_INTVER_CAP_SQ_CMD_MODE		0x400000000000ULL
#define SISL_INTVER_CAP_RESERVED_CMD_MODE_A	0x200000000000ULL
#define SISL_INTVER_CAP_RESERVED_CMD_MODE_B	0x100000000000ULL
#define SISL_INTVER_CAP_LUN_PROVISION		0x080000000000ULL
#define SISL_INTVER_CAP_AFU_DEBUG		0x040000000000ULL
#define SISL_INTVER_CAP_OCXL_LISN		0x020000000000ULL
};

#define CXLFLASH_NUM_FC_PORTS_PER_BANK	2	/* fixed # of ports per bank */
#define CXLFLASH_MAX_FC_BANKS		2	/* max # of banks supported */
#define CXLFLASH_MAX_FC_PORTS	(CXLFLASH_NUM_FC_PORTS_PER_BANK *	\
				 CXLFLASH_MAX_FC_BANKS)
#define CXLFLASH_MAX_CONTEXT	512	/* number of contexts per AFU */
#define CXLFLASH_NUM_VLUNS	512	/* number of vluns per AFU/port */
#define CXLFLASH_NUM_REGS	512	/* number of registers per port */

struct fc_port_bank {
	__be64 fc_port_regs[CXLFLASH_NUM_FC_PORTS_PER_BANK][CXLFLASH_NUM_REGS];
	__be64 fc_port_luns[CXLFLASH_NUM_FC_PORTS_PER_BANK][CXLFLASH_NUM_VLUNS];
};

struct sisl_global_map {
	union {
		struct sisl_global_regs regs;
		char page0[SIZE_4K];	/* page 0 */
	};

	char page1[SIZE_4K];	/* page 1 */

	struct fc_port_bank bank[CXLFLASH_MAX_FC_BANKS]; /* pages 2 - 9 */

	/* pages 10 - 15 are reserved */

};

/*
 * CXL Flash Memory Map
 *
 *	+-------------------------------+
 *	|    512 * 64 KB User MMIO	|
 *	|	(per context)		|
 *	|	User Accessible		|
 *	+-------------------------------+
 *	|    512 * 128 B per context	|
 *	|    Provisioning and Control	|
 *	|   Trusted Process accessible	|
 *	+-------------------------------+
 *	|	 64 KB Global		|
 *	|   Trusted Process accessible	|
 *	+-------------------------------+
 */
struct cxlflash_afu_map {
	union {
		struct sisl_host_map host;
		char harea[SIZE_64K];	/* 64KB each */
	} hosts[CXLFLASH_MAX_CONTEXT];

	union {
		struct sisl_ctrl_map ctrl;
		char carea[cache_line_size()];	/* 128B each */
	} ctrls[CXLFLASH_MAX_CONTEXT];

	union {
		struct sisl_global_map global;
		char garea[SIZE_64K];	/* 64KB single block */
	};
};

/*
 * LXT - LBA Translation Table
 * LXT control blocks
 */
struct sisl_lxt_entry {
	u64 rlba_base;	/* bits 0:47 is base
			 * b48:55 is lun index
			 * b58:59 is write & read perms
			 * (if no perm, afu_rc=0x15)
			 * b60:63 is port_sel mask
			 */
};

/*
 * RHT - Resource Handle Table
 * Per the SISlite spec, RHT entries are to be 16-byte aligned
|
||||
*/
|
||||
struct sisl_rht_entry {
|
||||
struct sisl_lxt_entry *lxt_start;
|
||||
u32 lxt_cnt;
|
||||
u16 rsvd;
|
||||
u8 fp; /* format & perm nibbles.
|
||||
* (if no perm, afu_rc=0x05)
|
||||
*/
|
||||
u8 nmask;
|
||||
} __packed __aligned(16);
|
||||
|
||||
struct sisl_rht_entry_f1 {
|
||||
u64 lun_id;
|
||||
union {
|
||||
struct {
|
||||
u8 valid;
|
||||
u8 rsvd[5];
|
||||
u8 fp;
|
||||
u8 port_sel;
|
||||
};
|
||||
|
||||
u64 dw;
|
||||
};
|
||||
} __packed __aligned(16);
|
||||
|
||||
/* make the fp byte */
|
||||
#define SISL_RHT_FP(fmt, perm) (((fmt) << 4) | (perm))
|
||||
|
||||
/* make the fp byte for a clone from a source fp and clone flags
|
||||
* flags must be only 2 LSB bits.
|
||||
*/
|
||||
#define SISL_RHT_FP_CLONE(src_fp, cln_flags) ((src_fp) & (0xFC | (cln_flags)))
|
||||
|
||||
#define RHT_PERM_READ 0x01U
|
||||
#define RHT_PERM_WRITE 0x02U
|
||||
#define RHT_PERM_RW (RHT_PERM_READ | RHT_PERM_WRITE)
|
||||
|
||||
/* extract the perm bits from a fp */
|
||||
#define SISL_RHT_PERM(fp) ((fp) & RHT_PERM_RW)
|
||||
|
||||
#define PORT0 0x01U
|
||||
#define PORT1 0x02U
|
||||
#define PORT2 0x04U
|
||||
#define PORT3 0x08U
|
||||
#define PORT_MASK(_n) ((1 << (_n)) - 1)
|
||||
|
||||
/* AFU Sync Mode byte */
|
||||
#define AFU_LW_SYNC 0x0U
|
||||
#define AFU_HW_SYNC 0x1U
|
||||
#define AFU_GSYNC 0x2U
|
||||
|
||||
/* Special Task Management Function CDB */
|
||||
#define TMF_LUN_RESET 0x1U
|
||||
#define TMF_CLEAR_ACA 0x2U
|
||||
|
||||
#endif /* _SISLITE_H */
|
@@ -1,150 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * CXL Flash Device Driver
 *
 * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
 *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2015 IBM Corporation
 */

#ifndef _CXLFLASH_SUPERPIPE_H
#define _CXLFLASH_SUPERPIPE_H

extern struct cxlflash_global global;

/*
 * Terminology: use afu (and not adapter) to refer to the HW.
 * Adapter is the entire slot and includes PSL out of which
 * only the AFU is visible to user space.
 */

/* Chunk size parms: note sislite minimum chunk size is
 * 0x10000 LBAs corresponding to a NMASK of 16.
 */
#define MC_CHUNK_SIZE	(1 << MC_RHT_NMASK)	/* in LBAs */

#define CMD_TIMEOUT	30	/* 30 secs */
#define CMD_RETRIES	5	/* 5 retries for scsi_execute */

#define MAX_SECTOR_UNIT	512	/* max_sector is in 512 byte multiples */

enum lun_mode {
	MODE_NONE = 0,
	MODE_VIRTUAL,
	MODE_PHYSICAL
};

/* Global (entire driver, spans adapters) lun_info structure */
struct glun_info {
	u64 max_lba;		/* from read cap(16) */
	u32 blk_len;		/* from read cap(16) */
	enum lun_mode mode;	/* NONE, VIRTUAL, PHYSICAL */
	int users;		/* Number of users w/ references to LUN */

	u8 wwid[16];

	struct mutex mutex;

	struct blka blka;
	struct list_head list;
};

/* Local (per-adapter) lun_info structure */
struct llun_info {
	u64 lun_id[MAX_FC_PORTS];	/* from REPORT_LUNS */
	u32 lun_index;		/* Index in the LUN table */
	u32 host_no;		/* host_no from Scsi_host */
	u32 port_sel;		/* What port to use for this LUN */
	bool in_table;		/* Whether a LUN table entry was created */

	u8 wwid[16];		/* Keep a duplicate copy here? */

	struct glun_info *parent; /* Pointer to entry in global LUN structure */
	struct scsi_device *sdev;
	struct list_head list;
};

struct lun_access {
	struct llun_info *lli;
	struct scsi_device *sdev;
	struct list_head list;
};

enum ctx_ctrl {
	CTX_CTRL_CLONE		= (1 << 1),
	CTX_CTRL_ERR		= (1 << 2),
	CTX_CTRL_ERR_FALLBACK	= (1 << 3),
	CTX_CTRL_NOPID		= (1 << 4),
	CTX_CTRL_FILE		= (1 << 5)
};

#define ENCODE_CTXID(_ctx, _id)	(((((u64)_ctx) & 0xFFFFFFFF0ULL) << 28) | _id)
#define DECODE_CTXID(_val)	(_val & 0xFFFFFFFF)

struct ctx_info {
	struct sisl_ctrl_map __iomem *ctrl_map; /* initialized at startup */
	struct sisl_rht_entry *rht_start; /* 1 page (req'd for alignment),
					   * alloc/free on attach/detach
					   */
	u32 rht_out;		/* Number of checked out RHT entries */
	u32 rht_perms;		/* User-defined permissions for RHT entries */
	struct llun_info **rht_lun;	/* Mapping of RHT entries to LUNs */
	u8 *rht_needs_ws;	/* User-desired write-same function per RHTE */

	u64 ctxid;
	u64 irqs;		/* Number of interrupts requested for context */
	pid_t pid;
	bool initialized;
	bool unavail;
	bool err_recovery_active;
	struct mutex mutex;	/* Context protection */
	struct kref kref;
	void *ctx;
	struct cxlflash_cfg *cfg;
	struct list_head luns;	/* LUNs attached to this context */
	const struct vm_operations_struct *cxl_mmap_vmops;
	struct file *file;
	struct list_head list;	/* Link contexts in error recovery */
};

struct cxlflash_global {
	struct mutex mutex;
	struct list_head gluns;	/* list of glun_info structs */
	struct page *err_page;	/* One page of all 0xF for error notification */
};

int cxlflash_vlun_resize(struct scsi_device *sdev, void *resize);
int _cxlflash_vlun_resize(struct scsi_device *sdev, struct ctx_info *ctxi,
			  struct dk_cxlflash_resize *resize);

int cxlflash_disk_release(struct scsi_device *sdev,
			  void *release);
int _cxlflash_disk_release(struct scsi_device *sdev, struct ctx_info *ctxi,
			   struct dk_cxlflash_release *release);

int cxlflash_disk_clone(struct scsi_device *sdev, void *arg);

int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg);

int cxlflash_lun_attach(struct glun_info *gli, enum lun_mode mode, bool locked);
void cxlflash_lun_detach(struct glun_info *gli);

struct ctx_info *get_context(struct cxlflash_cfg *cfg, u64 rctxit, void *arg,
			     enum ctx_ctrl ctrl);
void put_context(struct ctx_info *ctxi);

struct sisl_rht_entry *get_rhte(struct ctx_info *ctxi, res_hndl_t rhndl,
				struct llun_info *lli);

struct sisl_rht_entry *rhte_checkout(struct ctx_info *ctxi,
				     struct llun_info *lli);
void rhte_checkin(struct ctx_info *ctxi, struct sisl_rht_entry *rhte);

void cxlflash_ba_terminate(struct ba_lun *ba_lun);

int cxlflash_manage_lun(struct scsi_device *sdev, void *manage);

int check_state(struct cxlflash_cfg *cfg);

#endif /* ifndef _CXLFLASH_SUPERPIPE_H */
@@ -1,82 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * CXL Flash Device Driver
 *
 * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
 *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2015 IBM Corporation
 */

#ifndef _CXLFLASH_VLUN_H
#define _CXLFLASH_VLUN_H

/* RHT - Resource Handle Table */
#define MC_RHT_NMASK	16	/* in bits */
#define MC_CHUNK_SHIFT	MC_RHT_NMASK	/* shift to go from LBA to chunk# */

#define HIBIT	(BITS_PER_LONG - 1)

#define MAX_AUN_CLONE_CNT	0xFF

/*
 * LXT - LBA Translation Table
 *
 * +-------+-------+-------+-------+-------+-------+-------+---+---+
 * | RLBA_BASE                                     |LUN_IDX| P |SEL|
 * +-------+-------+-------+-------+-------+-------+-------+---+---+
 *
 * The LXT Entry contains the physical LBA where the chunk starts (RLBA_BASE).
 * AFU ORes the low order bits from the virtual LBA (offset into the chunk)
 * with RLBA_BASE. The result is the physical LBA to be sent to storage.
 * The LXT Entry also contains an index to a LUN TBL and a bitmask of which
 * outgoing (FC) ports can be selected. The port select bit-mask is ANDed
 * with a global port select bit-mask maintained by the driver.
 * In addition, it has permission bits that are ANDed with the
 * RHT permissions to arrive at the final permissions for the chunk.
 *
 * LXT tables are allocated dynamically in groups. This is done to avoid
 * a malloc/free overhead each time the LXT has to grow or shrink.
 *
 * Based on the current lxt_cnt (used), it is always possible to know
 * how many are allocated (used+free). The number of allocated entries is
 * not stored anywhere.
 *
 * The LXT table is re-allocated whenever it needs to cross into another group.
 */
#define LXT_GROUP_SIZE		8
#define LXT_NUM_GROUPS(lxt_cnt)	(((lxt_cnt) + 7)/8)	/* alloc'ed groups */
#define LXT_LUNIDX_SHIFT	8	/* LXT entry, shift for LUN index */
#define LXT_PERM_SHIFT		4	/* LXT entry, shift for permission bits */

struct ba_lun_info {
	u64 *lun_alloc_map;
	u32 lun_bmap_size;
	u32 total_aus;
	u64 free_aun_cnt;

	/* indices to be used for elevator lookup of free map */
	u32 free_low_idx;
	u32 free_curr_idx;
	u32 free_high_idx;

	u8 *aun_clone_map;
};

struct ba_lun {
	u64 lun_id;
	u64 wwpn;
	size_t lsize;		/* LUN size in number of LBAs */
	size_t lba_size;	/* LBA size in number of bytes */
	size_t au_size;		/* Allocation Unit size in number of LBAs */
	struct ba_lun_info *ba_lun_handle;
};

/* Block Allocator */
struct blka {
	struct ba_lun ba_lun;
	u64 nchunk;		/* number of chunks */
	struct mutex mutex;
};

#endif /* ifndef _CXLFLASH_VLUN_H */
@@ -735,7 +735,7 @@ efct_pci_io_resume(struct pci_dev *pdev)

 MODULE_DEVICE_TABLE(pci, efct_pci_table);

-static struct pci_error_handlers efct_pci_err_handler = {
+static const struct pci_error_handlers efct_pci_err_handler = {
 	.error_detected = efct_pci_io_error_detected,
 	.slot_reset = efct_pci_io_slot_reset,
 	.resume = efct_pci_io_resume,
@@ -308,43 +308,29 @@ void fdls_schedule_oxid_free_retry_work(struct work_struct *work)
	struct fnic *fnic = iport->fnic;
	struct reclaim_entry_s *reclaim_entry;
	unsigned long delay_j = msecs_to_jiffies(OXID_RECLAIM_TOV(iport));
	unsigned long flags;
	int idx;

	spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags);

	for_each_set_bit(idx, oxid_pool->pending_schedule_free, FNIC_OXID_POOL_SZ) {

		FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
			     "Schedule oxid free. oxid idx: %d\n", idx);

		spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags);
		reclaim_entry = (struct reclaim_entry_s *)
			kzalloc(sizeof(struct reclaim_entry_s), GFP_KERNEL);
		spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags);

		reclaim_entry = kzalloc(sizeof(*reclaim_entry), GFP_KERNEL);
		if (!reclaim_entry) {
			FNIC_FCS_DBG(KERN_WARNING, fnic->host, fnic->fnic_num,
				     "Failed to allocate memory for reclaim struct for oxid idx: 0x%x\n",
				     idx);

			schedule_delayed_work(&oxid_pool->schedule_oxid_free_retry,
					      msecs_to_jiffies(SCHEDULE_OXID_FREE_RETRY_TIME));
			spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags);
			return;
		}

		if (test_and_clear_bit(idx, oxid_pool->pending_schedule_free)) {
			reclaim_entry->oxid_idx = idx;
			reclaim_entry->expires = round_jiffies(jiffies + delay_j);
			list_add_tail(&reclaim_entry->links, &oxid_pool->oxid_reclaim_list);
			schedule_delayed_work(&oxid_pool->oxid_reclaim_work, delay_j);
		} else {
			/* unlikely scenario, free the allocated memory and continue */
			kfree(reclaim_entry);
		}
	}

	spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags);
		clear_bit(idx, oxid_pool->pending_schedule_free);
		reclaim_entry->oxid_idx = idx;
		reclaim_entry->expires = round_jiffies(jiffies + delay_j);
		spin_lock_irqsave(&fnic->fnic_lock, flags);
		list_add_tail(&reclaim_entry->links, &oxid_pool->oxid_reclaim_list);
		spin_unlock_irqrestore(&fnic->fnic_lock, flags);
		schedule_delayed_work(&oxid_pool->oxid_reclaim_work, delay_j);
	}
}

static bool fdls_is_oxid_fabric_req(uint16_t oxid)
@@ -1567,9 +1553,9 @@ void fdls_send_fabric_logo(struct fnic_iport_s *iport)

	iport->fabric.flags &= ~FNIC_FDLS_FABRIC_ABORT_ISSUED;

	FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
		     "0x%x: FDLS send fabric LOGO with oxid: 0x%x",
		     iport->fcid, oxid);
	FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
		     "0x%x: FDLS send fabric LOGO with oxid: 0x%x",
		     iport->fcid, oxid);

	fnic_send_fcoe_frame(iport, frame, frame_size);

@@ -1898,7 +1884,6 @@ static void fdls_fdmi_register_hba(struct fnic_iport_s *iport)
	if (fnic->subsys_desc_len >= FNIC_FDMI_MODEL_LEN)
		fnic->subsys_desc_len = FNIC_FDMI_MODEL_LEN - 1;
	strscpy_pad(data, fnic->subsys_desc, FNIC_FDMI_MODEL_LEN);
	data[FNIC_FDMI_MODEL_LEN - 1] = 0;
	fnic_fdmi_attr_set(fdmi_attr, FNIC_FDMI_TYPE_MODEL, FNIC_FDMI_MODEL_LEN,
			   data, &attr_off_bytes);

@@ -2061,7 +2046,6 @@ static void fdls_fdmi_register_pa(struct fnic_iport_s *iport)
	snprintf(tmp_data, FNIC_FDMI_OS_NAME_LEN - 1, "host%d",
		 fnic->host->host_no);
	strscpy_pad(data, tmp_data, FNIC_FDMI_OS_NAME_LEN);
	data[FNIC_FDMI_OS_NAME_LEN - 1] = 0;
	fnic_fdmi_attr_set(fdmi_attr, FNIC_FDMI_TYPE_OS_NAME,
			   FNIC_FDMI_OS_NAME_LEN, data, &attr_off_bytes);

@@ -2071,7 +2055,6 @@ static void fdls_fdmi_register_pa(struct fnic_iport_s *iport)
	sprintf(fc_host_system_hostname(fnic->host), "%s", utsname()->nodename);
	strscpy_pad(data, fc_host_system_hostname(fnic->host),
		    FNIC_FDMI_HN_LEN);
	data[FNIC_FDMI_HN_LEN - 1] = 0;
	fnic_fdmi_attr_set(fdmi_attr, FNIC_FDMI_TYPE_HOST_NAME,
			   FNIC_FDMI_HN_LEN, data, &attr_off_bytes);

@@ -4659,13 +4642,13 @@ fnic_fdls_validate_and_get_frame_type(struct fnic_iport_s *iport,
	d_id = ntoh24(fchdr->fh_d_id);

	/* some common validation */
	if (fdls_get_state(fabric) > FDLS_STATE_FABRIC_FLOGI) {
		if ((iport->fcid != d_id) || (!FNIC_FC_FRAME_CS_CTL(fchdr))) {
			FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
				     "invalid frame received. Dropping frame");
			return -1;
		}
	if (fdls_get_state(fabric) > FDLS_STATE_FABRIC_FLOGI) {
		if (iport->fcid != d_id || (!FNIC_FC_FRAME_CS_CTL(fchdr))) {
			FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
				     "invalid frame received. Dropping frame");
			return -1;
		}
	}

	/* BLS ABTS response */
	if ((fchdr->fh_r_ctl == FC_RCTL_BA_ACC)
@@ -4682,7 +4665,7 @@ fnic_fdls_validate_and_get_frame_type(struct fnic_iport_s *iport,
				     "Received unexpected ABTS RSP(oxid:0x%x) from 0x%x. Dropping frame",
				     oxid, s_id);
				return -1;
			}
		}
		return FNIC_FABRIC_BLS_ABTS_RSP;
	} else if (fdls_is_oxid_fdmi_req(oxid)) {
		return FNIC_FDMI_BLS_ABTS_RSP;

@@ -1365,10 +1365,9 @@ static void __exit fnic_cleanup_module(void)
	if (pc_rscn_handling_feature_flag == PC_RSCN_HANDLING_FEATURE_ON)
		destroy_workqueue(reset_fnic_work_queue);

	if (fnic_fip_queue) {
		flush_workqueue(fnic_fip_queue);
	if (fnic_fip_queue)
		destroy_workqueue(fnic_fip_queue);
	}

	kmem_cache_destroy(fnic_sgl_cache[FNIC_SGL_CACHE_MAX]);
	kmem_cache_destroy(fnic_sgl_cache[FNIC_SGL_CACHE_DFLT]);
	kmem_cache_destroy(fnic_io_req_cache);
@@ -633,8 +633,7 @@ extern struct dentry *hisi_sas_debugfs_dir;
extern void hisi_sas_stop_phys(struct hisi_hba *hisi_hba);
extern int hisi_sas_alloc(struct hisi_hba *hisi_hba);
extern void hisi_sas_free(struct hisi_hba *hisi_hba);
extern u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis,
				    int direction);
extern u8 hisi_sas_get_ata_protocol(struct sas_task *task);
extern struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port);
extern void hisi_sas_sata_done(struct sas_task *task,
			       struct hisi_sas_slot *slot);

@@ -21,8 +21,32 @@ struct hisi_sas_internal_abort_data {
	bool rst_ha_timeout; /* reset the HA for timeout */
};

u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis, int direction)
static u8 hisi_sas_get_ata_protocol_from_tf(struct ata_queued_cmd *qc)
{
	if (!qc)
		return HISI_SAS_SATA_PROTOCOL_PIO;

	switch (qc->tf.protocol) {
	case ATA_PROT_NODATA:
		return HISI_SAS_SATA_PROTOCOL_NONDATA;
	case ATA_PROT_PIO:
		return HISI_SAS_SATA_PROTOCOL_PIO;
	case ATA_PROT_DMA:
		return HISI_SAS_SATA_PROTOCOL_DMA;
	case ATA_PROT_NCQ_NODATA:
	case ATA_PROT_NCQ:
		return HISI_SAS_SATA_PROTOCOL_FPDMA;
	default:
		return HISI_SAS_SATA_PROTOCOL_PIO;
	}
}

u8 hisi_sas_get_ata_protocol(struct sas_task *task)
{
	struct host_to_dev_fis *fis = &task->ata_task.fis;
	struct ata_queued_cmd *qc = task->uldd_task;
	int direction = task->data_dir;

	switch (fis->command) {
	case ATA_CMD_FPDMA_WRITE:
	case ATA_CMD_FPDMA_READ:

@@ -93,7 +117,7 @@ u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis, int direction)
	{
		if (direction == DMA_NONE)
			return HISI_SAS_SATA_PROTOCOL_NONDATA;
		return HISI_SAS_SATA_PROTOCOL_PIO;
		return hisi_sas_get_ata_protocol_from_tf(qc);
	}
	}
}

@@ -1806,7 +1806,7 @@ static struct platform_driver hisi_sas_v1_driver = {
	.driver = {
		.name = DRV_NAME,
		.of_match_table = sas_v1_of_match,
		.acpi_match_table = ACPI_PTR(sas_v1_acpi_match),
		.acpi_match_table = sas_v1_acpi_match,
	},
};

@@ -2538,9 +2538,7 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
	    (task->ata_task.fis.control & ATA_SRST))
		dw1 |= 1 << CMD_HDR_RESET_OFF;

	dw1 |= (hisi_sas_get_ata_protocol(
		&task->ata_task.fis, task->data_dir))
		<< CMD_HDR_FRAME_TYPE_OFF;
	dw1 |= (hisi_sas_get_ata_protocol(task)) << CMD_HDR_FRAME_TYPE_OFF;
	dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF;
	hdr->dw1 = cpu_to_le32(dw1);

@@ -3653,7 +3651,7 @@ static struct platform_driver hisi_sas_v2_driver = {
	.driver = {
		.name = DRV_NAME,
		.of_match_table = sas_v2_of_match,
		.acpi_match_table = ACPI_PTR(sas_v2_acpi_match),
		.acpi_match_table = sas_v2_acpi_match,
	},
};

@@ -1456,9 +1456,7 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba,
	    (task->ata_task.fis.control & ATA_SRST))
		dw1 |= 1 << CMD_HDR_RESET_OFF;

	dw1 |= (hisi_sas_get_ata_protocol(
		&task->ata_task.fis, task->data_dir))
		<< CMD_HDR_FRAME_TYPE_OFF;
	dw1 |= (hisi_sas_get_ata_protocol(task)) << CMD_HDR_FRAME_TYPE_OFF;
	dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF;

	if (FIS_CMD_IS_UNCONSTRAINED(task->ata_task.fis))
@@ -453,17 +453,13 @@ static ssize_t host_store_hp_ssd_smart_path_status(struct device *dev,
					 struct device_attribute *attr,
					 const char *buf, size_t count)
{
	int status, len;
	int status;
	struct ctlr_info *h;
	struct Scsi_Host *shost = class_to_shost(dev);
	char tmpbuf[10];

	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
		return -EACCES;
	len = count > sizeof(tmpbuf) - 1 ? sizeof(tmpbuf) - 1 : count;
	strncpy(tmpbuf, buf, len);
	tmpbuf[len] = '\0';
	if (sscanf(tmpbuf, "%d", &status) != 1)
	if (kstrtoint(buf, 10, &status))
		return -EINVAL;
	h = shost_to_hba(shost);
	h->acciopath_status = !!status;

@@ -477,17 +473,13 @@ static ssize_t host_store_raid_offload_debug(struct device *dev,
					 struct device_attribute *attr,
					 const char *buf, size_t count)
{
	int debug_level, len;
	int debug_level;
	struct ctlr_info *h;
	struct Scsi_Host *shost = class_to_shost(dev);
	char tmpbuf[10];

	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
		return -EACCES;
	len = count > sizeof(tmpbuf) - 1 ? sizeof(tmpbuf) - 1 : count;
	strncpy(tmpbuf, buf, len);
	tmpbuf[len] = '\0';
	if (sscanf(tmpbuf, "%d", &debug_level) != 1)
	if (kstrtoint(buf, 10, &debug_level))
		return -EINVAL;
	if (debug_level < 0)
		debug_level = 0;

@@ -7238,8 +7230,7 @@ static int hpsa_controller_hard_reset(struct pci_dev *pdev,

static void init_driver_version(char *driver_version, int len)
{
	memset(driver_version, 0, len);
	strncpy(driver_version, HPSA " " HPSA_DRIVER_VERSION, len - 1);
	strscpy_pad(driver_version, HPSA " " HPSA_DRIVER_VERSION, len);
}

static int write_driver_ver_to_cfgtable(struct CfgTable __iomem *cfgtable)
@@ -3631,8 +3631,8 @@ ips_send_cmd(ips_ha_t * ha, ips_scb_t * scb)

			break;

		case RESERVE:
		case RELEASE:
		case RESERVE_6:
		case RELEASE_6:
			scb->scsi_cmd->result = DID_OK << 16;
			break;

@@ -3899,8 +3899,8 @@ ips_chkstatus(ips_ha_t * ha, IPS_STATUS * pstatus)
		case WRITE_6:
		case READ_10:
		case WRITE_10:
		case RESERVE:
		case RELEASE:
		case RESERVE_6:
		case RELEASE_6:
			break;

		case MODE_SENSE:
@@ -91,31 +91,31 @@ MODULE_DEVICE_TABLE(pci, isci_id_table);

/* linux isci specific settings */

unsigned char no_outbound_task_to = 2;
static unsigned char no_outbound_task_to = 2;
module_param(no_outbound_task_to, byte, 0);
MODULE_PARM_DESC(no_outbound_task_to, "No Outbound Task Timeout (1us incr)");

u16 ssp_max_occ_to = 20;
static u16 ssp_max_occ_to = 20;
module_param(ssp_max_occ_to, ushort, 0);
MODULE_PARM_DESC(ssp_max_occ_to, "SSP Max occupancy timeout (100us incr)");

u16 stp_max_occ_to = 5;
static u16 stp_max_occ_to = 5;
module_param(stp_max_occ_to, ushort, 0);
MODULE_PARM_DESC(stp_max_occ_to, "STP Max occupancy timeout (100us incr)");

u16 ssp_inactive_to = 5;
static u16 ssp_inactive_to = 5;
module_param(ssp_inactive_to, ushort, 0);
MODULE_PARM_DESC(ssp_inactive_to, "SSP inactivity timeout (100us incr)");

u16 stp_inactive_to = 5;
static u16 stp_inactive_to = 5;
module_param(stp_inactive_to, ushort, 0);
MODULE_PARM_DESC(stp_inactive_to, "STP inactivity timeout (100us incr)");

unsigned char phy_gen = SCIC_SDS_PARM_GEN2_SPEED;
static unsigned char phy_gen = SCIC_SDS_PARM_GEN2_SPEED;
module_param(phy_gen, byte, 0);
MODULE_PARM_DESC(phy_gen, "PHY generation (1: 1.5Gbps 2: 3.0Gbps 3: 6.0Gbps)");

unsigned char max_concurr_spinup;
static unsigned char max_concurr_spinup;
module_param(max_concurr_spinup, byte, 0);
MODULE_PARM_DESC(max_concurr_spinup, "Max concurrent device spinup");

@@ -473,13 +473,6 @@ static inline void sci_swab32_cpy(void *_dest, void *_src, ssize_t word_cnt)
		dest[word_cnt] = swab32(src[word_cnt]);
}

extern unsigned char no_outbound_task_to;
extern u16 ssp_max_occ_to;
extern u16 stp_max_occ_to;
extern u16 ssp_inactive_to;
extern u16 stp_inactive_to;
extern unsigned char phy_gen;
extern unsigned char max_concurr_spinup;
extern uint cable_selection_override;

irqreturn_t isci_msix_isr(int vec, void *data);

@@ -198,7 +198,7 @@ enum sci_status sci_remote_device_reset(
 * device. When there are no active IO for the device it is is in this
 * state.
 *
 * @SCI_STP_DEV_CMD: This is the command state for for the STP remote
 * @SCI_STP_DEV_CMD: This is the command state for the STP remote
 * device. This state is entered when the device is processing a
 * non-NCQ command. The device object will fail any new start IO
 * requests until this command is complete.
@ -17,7 +17,6 @@
|
||||
* Zhenyu Wang
|
||||
*/
|
||||
|
||||
#include <crypto/hash.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/inet.h>
|
||||
#include <linux/slab.h>
|
||||
@ -468,8 +467,7 @@ static void iscsi_sw_tcp_send_hdr_prep(struct iscsi_conn *conn, void *hdr,
|
||||
* sufficient room.
|
||||
*/
|
||||
if (conn->hdrdgst_en) {
|
||||
iscsi_tcp_dgst_header(tcp_sw_conn->tx_hash, hdr, hdrlen,
|
||||
hdr + hdrlen);
|
||||
iscsi_tcp_dgst_header(hdr, hdrlen, hdr + hdrlen);
|
||||
hdrlen += ISCSI_DIGEST_SIZE;
|
||||
}
|
||||
|
||||
@ -494,7 +492,7 @@ iscsi_sw_tcp_send_data_prep(struct iscsi_conn *conn, struct scatterlist *sg,
|
||||
{
|
||||
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
|
||||
struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
|
||||
struct ahash_request *tx_hash = NULL;
|
||||
u32 *tx_crcp = NULL;
|
||||
unsigned int hdr_spec_len;
|
||||
|
||||
ISCSI_SW_TCP_DBG(conn, "offset=%d, datalen=%d %s\n", offset, len,
|
||||
@ -507,11 +505,10 @@ iscsi_sw_tcp_send_data_prep(struct iscsi_conn *conn, struct scatterlist *sg,
|
||||
WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len));
|
||||
|
||||
if (conn->datadgst_en)
|
||||
tx_hash = tcp_sw_conn->tx_hash;
|
||||
tx_crcp = &tcp_sw_conn->tx_crc;
|
||||
|
||||
return iscsi_segment_seek_sg(&tcp_sw_conn->out.data_segment,
|
||||
sg, count, offset, len,
|
||||
NULL, tx_hash);
|
||||
sg, count, offset, len, NULL, tx_crcp);
|
||||
}
|
||||
|
||||
static void
|
||||
@ -520,7 +517,7 @@ iscsi_sw_tcp_send_linear_data_prep(struct iscsi_conn *conn, void *data,
|
||||
{
|
||||
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
|
||||
struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
|
||||
struct ahash_request *tx_hash = NULL;
|
||||
u32 *tx_crcp = NULL;
|
||||
unsigned int hdr_spec_len;
|
||||
|
||||
ISCSI_SW_TCP_DBG(conn, "datalen=%zd %s\n", len, conn->datadgst_en ?
|
||||
@ -532,10 +529,10 @@ iscsi_sw_tcp_send_linear_data_prep(struct iscsi_conn *conn, void *data,
|
||||
WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len));
|
||||
|
||||
if (conn->datadgst_en)
|
||||
tx_hash = tcp_sw_conn->tx_hash;
|
||||
tx_crcp = &tcp_sw_conn->tx_crc;
|
||||
|
||||
iscsi_segment_init_linear(&tcp_sw_conn->out.data_segment,
|
||||
data, len, NULL, tx_hash);
|
||||
data, len, NULL, tx_crcp);
|
||||
}
|
||||
|
||||
static int iscsi_sw_tcp_pdu_init(struct iscsi_task *task,
|
||||
@ -583,7 +580,6 @@ iscsi_sw_tcp_conn_create(struct iscsi_cls_session *cls_session,
|
||||
struct iscsi_cls_conn *cls_conn;
|
||||
struct iscsi_tcp_conn *tcp_conn;
|
||||
struct iscsi_sw_tcp_conn *tcp_sw_conn;
|
||||
struct crypto_ahash *tfm;
|
||||
|
||||
cls_conn = iscsi_tcp_conn_setup(cls_session, sizeof(*tcp_sw_conn),
|
||||
conn_idx);
|
||||
@ -596,37 +592,9 @@ iscsi_sw_tcp_conn_create(struct iscsi_cls_session *cls_session,
|
||||
tcp_sw_conn->queue_recv = iscsi_recv_from_iscsi_q;
|
||||
|
||||
mutex_init(&tcp_sw_conn->sock_lock);
|
||||
|
||||
tfm = crypto_alloc_ahash("crc32c", 0, CRYPTO_ALG_ASYNC);
|
||||
if (IS_ERR(tfm))
|
||||
goto free_conn;
|
||||
|
||||
tcp_sw_conn->tx_hash = ahash_request_alloc(tfm, GFP_KERNEL);
|
||||
if (!tcp_sw_conn->tx_hash)
|
||||
goto free_tfm;
|
||||
ahash_request_set_callback(tcp_sw_conn->tx_hash, 0, NULL, NULL);
|
||||
|
||||
tcp_sw_conn->rx_hash = ahash_request_alloc(tfm, GFP_KERNEL);
|
||||
if (!tcp_sw_conn->rx_hash)
|
||||
goto free_tx_hash;
|
||||
ahash_request_set_callback(tcp_sw_conn->rx_hash, 0, NULL, NULL);
|
||||
|
||||
tcp_conn->rx_hash = tcp_sw_conn->rx_hash;
|
||||
tcp_conn->rx_crcp = &tcp_sw_conn->rx_crc;
|
||||
|
||||
return cls_conn;
|
||||
|
||||
free_tx_hash:
|
||||
ahash_request_free(tcp_sw_conn->tx_hash);
|
||||
free_tfm:
|
||||
crypto_free_ahash(tfm);
|
||||
free_conn:
|
||||
iscsi_conn_printk(KERN_ERR, conn,
|
||||
"Could not create connection due to crc32c "
|
||||
"loading error. Make sure the crc32c "
|
||||
"module is built as a module or into the "
|
||||
"kernel\n");
|
||||
iscsi_tcp_conn_teardown(cls_conn);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void iscsi_sw_tcp_release_conn(struct iscsi_conn *conn)
|
||||
@ -664,20 +632,8 @@ static void iscsi_sw_tcp_release_conn(struct iscsi_conn *conn)
|
||||
static void iscsi_sw_tcp_conn_destroy(struct iscsi_cls_conn *cls_conn)
|
||||
{
|
||||
struct iscsi_conn *conn = cls_conn->dd_data;
|
||||
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
|
||||
struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
|
||||
|
||||
iscsi_sw_tcp_release_conn(conn);
|
||||
|
||||
ahash_request_free(tcp_sw_conn->rx_hash);
|
||||
if (tcp_sw_conn->tx_hash) {
|
||||
struct crypto_ahash *tfm;
|
||||
|
||||
tfm = crypto_ahash_reqtfm(tcp_sw_conn->tx_hash);
|
||||
ahash_request_free(tcp_sw_conn->tx_hash);
|
||||
crypto_free_ahash(tfm);
|
||||
}
|
||||
|
||||
iscsi_tcp_conn_teardown(cls_conn);
|
||||
}
|
||||
|
||||
|
@@ -41,8 +41,8 @@ struct iscsi_sw_tcp_conn {
void (*old_write_space)(struct sock *);

/* data and header digests */
struct ahash_request *tx_hash; /* CRC32C (Tx) */
struct ahash_request *rx_hash; /* CRC32C (Rx) */
u32 tx_crc; /* CRC32C (Tx) */
u32 rx_crc; /* CRC32C (Rx) */

/* MIB custom statistics */
uint32_t sendpage_failures_cnt;
@@ -15,7 +15,7 @@
* Zhenyu Wang
*/

#include <crypto/hash.h>
#include <linux/crc32c.h>
#include <linux/types.h>
#include <linux/list.h>
#include <linux/inet.h>
@@ -168,7 +168,7 @@ iscsi_tcp_segment_splice_digest(struct iscsi_segment *segment, void *digest)
segment->size = ISCSI_DIGEST_SIZE;
segment->copied = 0;
segment->sg = NULL;
segment->hash = NULL;
segment->crcp = NULL;
}

/**
@@ -191,29 +191,27 @@ int iscsi_tcp_segment_done(struct iscsi_tcp_conn *tcp_conn,
struct iscsi_segment *segment, int recv,
unsigned copied)
{
struct scatterlist sg;
unsigned int pad;

ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "copied %u %u size %u %s\n",
segment->copied, copied, segment->size,
recv ? "recv" : "xmit");
if (segment->hash && copied) {
/*
* If a segment is kmapd we must unmap it before sending
* to the crypto layer since that will try to kmap it again.
*/
iscsi_tcp_segment_unmap(segment);
if (segment->crcp && copied) {
if (segment->data) {
*segment->crcp = crc32c(*segment->crcp,
segment->data + segment->copied,
copied);
} else {
const void *data;

if (!segment->data) {
sg_init_table(&sg, 1);
sg_set_page(&sg, sg_page(segment->sg), copied,
segment->copied + segment->sg_offset +
segment->sg->offset);
} else
sg_init_one(&sg, segment->data + segment->copied,
copied);
ahash_request_set_crypt(segment->hash, &sg, NULL, copied);
crypto_ahash_update(segment->hash);
data = kmap_local_page(sg_page(segment->sg));
*segment->crcp = crc32c(*segment->crcp,
data + segment->copied +
segment->sg_offset +
segment->sg->offset,
copied);
kunmap_local(data);
}
}

segment->copied += copied;
@@ -258,10 +256,8 @@ int iscsi_tcp_segment_done(struct iscsi_tcp_conn *tcp_conn,
* Set us up for transferring the data digest. hdr digest
* is completely handled in hdr done function.
*/
if (segment->hash) {
ahash_request_set_crypt(segment->hash, NULL,
segment->digest, 0);
crypto_ahash_final(segment->hash);
if (segment->crcp) {
put_unaligned_le32(~*segment->crcp, segment->digest);
iscsi_tcp_segment_splice_digest(segment,
recv ? segment->recv_digest : segment->digest);
return 0;
@@ -282,8 +278,7 @@ EXPORT_SYMBOL_GPL(iscsi_tcp_segment_done);
* given buffer, and returns the number of bytes
* consumed, which can actually be less than @len.
*
* If hash digest is enabled, the function will update the
* hash while copying.
* If CRC is enabled, the function will update the CRC while copying.
* Combining these two operations doesn't buy us a lot (yet),
* but in the future we could implement combined copy+crc,
* just the way we do for network layer checksums.
@@ -311,14 +306,10 @@ iscsi_tcp_segment_recv(struct iscsi_tcp_conn *tcp_conn,
}

inline void
iscsi_tcp_dgst_header(struct ahash_request *hash, const void *hdr,
size_t hdrlen, unsigned char digest[ISCSI_DIGEST_SIZE])
iscsi_tcp_dgst_header(const void *hdr, size_t hdrlen,
unsigned char digest[ISCSI_DIGEST_SIZE])
{
struct scatterlist sg;

sg_init_one(&sg, hdr, hdrlen);
ahash_request_set_crypt(hash, &sg, digest, hdrlen);
crypto_ahash_digest(hash);
put_unaligned_le32(~crc32c(~0, hdr, hdrlen), digest);
}
EXPORT_SYMBOL_GPL(iscsi_tcp_dgst_header);
@@ -343,24 +334,23 @@ iscsi_tcp_dgst_verify(struct iscsi_tcp_conn *tcp_conn,
*/
static inline void
__iscsi_segment_init(struct iscsi_segment *segment, size_t size,
iscsi_segment_done_fn_t *done, struct ahash_request *hash)
iscsi_segment_done_fn_t *done, u32 *crcp)
{
memset(segment, 0, sizeof(*segment));
segment->total_size = size;
segment->done = done;

if (hash) {
segment->hash = hash;
crypto_ahash_init(hash);
if (crcp) {
segment->crcp = crcp;
*crcp = ~0;
}
}

inline void
iscsi_segment_init_linear(struct iscsi_segment *segment, void *data,
size_t size, iscsi_segment_done_fn_t *done,
struct ahash_request *hash)
size_t size, iscsi_segment_done_fn_t *done, u32 *crcp)
{
__iscsi_segment_init(segment, size, done, hash);
__iscsi_segment_init(segment, size, done, crcp);
segment->data = data;
segment->size = size;
}
@@ -370,13 +360,12 @@ inline int
iscsi_segment_seek_sg(struct iscsi_segment *segment,
struct scatterlist *sg_list, unsigned int sg_count,
unsigned int offset, size_t size,
iscsi_segment_done_fn_t *done,
struct ahash_request *hash)
iscsi_segment_done_fn_t *done, u32 *crcp)
{
struct scatterlist *sg;
unsigned int i;

__iscsi_segment_init(segment, size, done, hash);
__iscsi_segment_init(segment, size, done, crcp);
for_each_sg(sg_list, sg, sg_count, i) {
if (offset < sg->length) {
iscsi_tcp_segment_init_sg(segment, sg, offset);
@@ -393,7 +382,7 @@ EXPORT_SYMBOL_GPL(iscsi_segment_seek_sg);
* iscsi_tcp_hdr_recv_prep - prep segment for hdr reception
* @tcp_conn: iscsi connection to prep for
*
* This function always passes NULL for the hash argument, because when this
* This function always passes NULL for the crcp argument, because when this
* function is called we do not yet know the final size of the header and want
* to delay the digest processing until we know that.
*/
@@ -434,15 +423,15 @@ static void
iscsi_tcp_data_recv_prep(struct iscsi_tcp_conn *tcp_conn)
{
struct iscsi_conn *conn = tcp_conn->iscsi_conn;
struct ahash_request *rx_hash = NULL;
u32 *rx_crcp = NULL;

if (conn->datadgst_en &&
!(conn->session->tt->caps & CAP_DIGEST_OFFLOAD))
rx_hash = tcp_conn->rx_hash;
rx_crcp = tcp_conn->rx_crcp;

iscsi_segment_init_linear(&tcp_conn->in.segment,
conn->data, tcp_conn->in.datalen,
iscsi_tcp_data_recv_done, rx_hash);
iscsi_tcp_data_recv_done, rx_crcp);
}

/**
@@ -730,7 +719,7 @@ iscsi_tcp_hdr_dissect(struct iscsi_conn *conn, struct iscsi_hdr *hdr)

if (tcp_conn->in.datalen) {
struct iscsi_tcp_task *tcp_task = task->dd_data;
struct ahash_request *rx_hash = NULL;
u32 *rx_crcp = NULL;
struct scsi_data_buffer *sdb = &task->sc->sdb;

/*
@@ -743,7 +732,7 @@ iscsi_tcp_hdr_dissect(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
*/
if (conn->datadgst_en &&
!(conn->session->tt->caps & CAP_DIGEST_OFFLOAD))
rx_hash = tcp_conn->rx_hash;
rx_crcp = tcp_conn->rx_crcp;

ISCSI_DBG_TCP(conn, "iscsi_tcp_begin_data_in( "
"offset=%d, datalen=%d)\n",
@@ -756,7 +745,7 @@ iscsi_tcp_hdr_dissect(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
tcp_task->data_offset,
tcp_conn->in.datalen,
iscsi_tcp_process_data_in,
rx_hash);
rx_crcp);
spin_unlock(&conn->session->back_lock);
return rc;
}
@@ -878,7 +867,7 @@ iscsi_tcp_hdr_recv_done(struct iscsi_tcp_conn *tcp_conn,
return 0;
}

iscsi_tcp_dgst_header(tcp_conn->rx_hash, hdr,
iscsi_tcp_dgst_header(hdr,
segment->total_copied - ISCSI_DIGEST_SIZE,
segment->digest);
@@ -74,8 +74,7 @@ struct lpfc_sli2_slim;
* queue depths when there are driver resource error or Firmware
* resource error.
*/
/* 1 Second */
#define QUEUE_RAMP_DOWN_INTERVAL (msecs_to_jiffies(1000 * 1))
#define QUEUE_RAMP_DOWN_INTERVAL (secs_to_jiffies(1))

/* Number of exchanges reserved for discovery to complete */
#define LPFC_DISC_IOCB_BUFF_COUNT 20
@@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@@ -8045,8 +8045,7 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
if (test_bit(FC_DISC_TMO, &vport->fc_flag)) {
tmo = ((phba->fc_ratov * 3) + 3);
mod_timer(&vport->fc_disctmo,
jiffies +
msecs_to_jiffies(1000 * tmo));
jiffies + secs_to_jiffies(tmo));
}
return 0;
}
@@ -8081,7 +8080,7 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
if (test_bit(FC_DISC_TMO, &vport->fc_flag)) {
tmo = ((phba->fc_ratov * 3) + 3);
mod_timer(&vport->fc_disctmo,
jiffies + msecs_to_jiffies(1000 * tmo));
jiffies + secs_to_jiffies(tmo));
}
if ((rscn_cnt < FC_MAX_HOLD_RSCN) &&
!test_bit(FC_RSCN_DISCOVERY, &vport->fc_flag)) {
@@ -9511,7 +9510,7 @@ lpfc_els_timeout_handler(struct lpfc_vport *vport)
if (!list_empty(&pring->txcmplq))
if (!test_bit(FC_UNLOADING, &phba->pport->load_flag))
mod_timer(&vport->els_tmofunc,
jiffies + msecs_to_jiffies(1000 * timeout));
jiffies + secs_to_jiffies(timeout));
}

/**
@@ -9569,18 +9568,16 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
mbx_tmo_err = test_bit(MBX_TMO_ERR, &phba->bit_flags);
/* First we need to issue aborts to outstanding cmds on txcmpl */
list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"2243 iotag = 0x%x cmd_flag = 0x%x "
"ulp_command = 0x%x this_vport %x "
"sli_flag = 0x%x\n",
piocb->iotag, piocb->cmd_flag,
get_job_cmnd(phba, piocb),
(piocb->vport == vport),
phba->sli.sli_flag);

if (piocb->vport != vport)
continue;

lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"2243 iotag = 0x%x cmd_flag = 0x%x "
"ulp_command = 0x%x sli_flag = 0x%x\n",
piocb->iotag, piocb->cmd_flag,
get_job_cmnd(phba, piocb),
phba->sli.sli_flag);

if ((phba->sli.sli_flag & LPFC_SLI_ACTIVE) && !mbx_tmo_err) {
if (piocb->cmd_flag & LPFC_IO_LIBDFC)
continue;
@@ -10899,7 +10896,7 @@ lpfc_do_scr_ns_plogi(struct lpfc_hba *phba, struct lpfc_vport *vport)
"3334 Delay fc port discovery for %d secs\n",
phba->fc_ratov);
mod_timer(&vport->delayed_disc_tmo,
jiffies + msecs_to_jiffies(1000 * phba->fc_ratov));
jiffies + secs_to_jiffies(phba->fc_ratov));
return;
}

@@ -11156,7 +11153,7 @@ lpfc_retry_pport_discovery(struct lpfc_hba *phba)
if (!ndlp)
return;

mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000));
mod_timer(&ndlp->nlp_delayfunc, jiffies + secs_to_jiffies(1));
set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag);
ndlp->nlp_last_elscmd = ELS_CMD_FLOGI;
phba->pport->port_state = LPFC_FLOGI;
@@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@@ -228,10 +228,16 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
return;

/* check for recovered fabric node */
if (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE &&
ndlp->nlp_DID == Fabric_DID)
/* Ignore callback for a mismatched (stale) rport */
if (ndlp->rport != rport) {
lpfc_vlog_msg(vport, KERN_WARNING, LOG_NODE,
"6788 fc rport mismatch: d_id x%06x ndlp x%px "
"fc rport x%px node rport x%px state x%x "
"refcnt %u\n",
ndlp->nlp_DID, ndlp, rport, ndlp->rport,
ndlp->nlp_state, kref_read(&ndlp->kref));
return;
}

if (rport->port_name != wwn_to_u64(ndlp->nlp_portname.u.wwn))
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
@@ -3518,7 +3524,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
if (phba->fc_topology &&
phba->fc_topology != bf_get(lpfc_mbx_read_top_topology, la)) {
lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
"3314 Toplogy changed was 0x%x is 0x%x\n",
"3314 Topology changed was 0x%x is 0x%x\n",
phba->fc_topology,
bf_get(lpfc_mbx_read_top_topology, la));
phba->fc_topology_changed = 1;
@@ -4973,7 +4979,7 @@ lpfc_set_disctmo(struct lpfc_vport *vport)
tmo, vport->port_state, vport->fc_flag);
}

mod_timer(&vport->fc_disctmo, jiffies + msecs_to_jiffies(1000 * tmo));
mod_timer(&vport->fc_disctmo, jiffies + secs_to_jiffies(tmo));
set_bit(FC_DISC_TMO, &vport->fc_flag);

/* Start Discovery Timer state <hba_state> */
@@ -5564,6 +5570,7 @@ static struct lpfc_nodelist *
__lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
{
struct lpfc_nodelist *ndlp;
struct lpfc_nodelist *np = NULL;
uint32_t data1;

list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
@@ -5578,14 +5585,20 @@ __lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
ndlp, ndlp->nlp_DID,
ndlp->nlp_flag, data1, ndlp->nlp_rpi,
ndlp->active_rrqs_xri_bitmap);
return ndlp;

/* Check for new or potentially stale node */
if (ndlp->nlp_state != NLP_STE_UNUSED_NODE)
return ndlp;
np = ndlp;
}
}

/* FIND node did <did> NOT FOUND */
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
"0932 FIND node did x%x NOT FOUND.\n", did);
return NULL;
if (!np)
/* FIND node did <did> NOT FOUND */
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
"0932 FIND node did x%x NOT FOUND.\n", did);

return np;
}

struct lpfc_nodelist *
@@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@@ -595,7 +595,7 @@ lpfc_config_port_post(struct lpfc_hba *phba)
/* Set up ring-0 (ELS) timer */
timeout = phba->fc_ratov * 2;
mod_timer(&vport->els_tmofunc,
jiffies + msecs_to_jiffies(1000 * timeout));
jiffies + secs_to_jiffies(timeout));
/* Set up heart beat (HB) timer */
mod_timer(&phba->hb_tmofunc,
jiffies + secs_to_jiffies(LPFC_HB_MBOX_INTERVAL));
@@ -604,7 +604,7 @@ lpfc_config_port_post(struct lpfc_hba *phba)
phba->last_completion_time = jiffies;
/* Set up error attention (ERATT) polling timer */
mod_timer(&phba->eratt_poll,
jiffies + msecs_to_jiffies(1000 * phba->eratt_poll_interval));
jiffies + secs_to_jiffies(phba->eratt_poll_interval));

if (test_bit(LINK_DISABLED, &phba->hba_flag)) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -3361,8 +3361,8 @@ lpfc_block_mgmt_io(struct lpfc_hba *phba, int mbx_action)
/* Determine how long we might wait for the active mailbox
* command to be gracefully completed by firmware.
*/
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active) * 1000) + jiffies;
timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active)) + jiffies;
}
spin_unlock_irqrestore(&phba->hbalock, iflag);

@@ -6909,7 +6909,7 @@ lpfc_sli4_async_fip_evt(struct lpfc_hba *phba,
* re-instantiate the Vlink using FDISC.
*/
mod_timer(&ndlp->nlp_delayfunc,
jiffies + msecs_to_jiffies(1000));
jiffies + secs_to_jiffies(1));
set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag);
ndlp->nlp_last_elscmd = ELS_CMD_FDISC;
vport->port_state = LPFC_FDISC;
@@ -13169,6 +13169,7 @@ lpfc_sli4_enable_msi(struct lpfc_hba *phba)
eqhdl = lpfc_get_eq_hdl(0);
rc = pci_irq_vector(phba->pcidev, 0);
if (rc < 0) {
free_irq(phba->pcidev->irq, phba);
pci_free_irq_vectors(phba->pcidev);
lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
"0496 MSI pci_irq_vec failed (%d)\n", rc);
@@ -13249,6 +13250,7 @@ lpfc_sli4_enable_intr(struct lpfc_hba *phba, uint32_t cfg_mode)
eqhdl = lpfc_get_eq_hdl(0);
retval = pci_irq_vector(phba->pcidev, 0);
if (retval < 0) {
free_irq(phba->pcidev->irq, phba);
lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
"0502 INTR pci_irq_vec failed (%d)\n",
retval);
@@ -5645,9 +5645,8 @@ wait_for_cmpl:
* cmd_flag is set to LPFC_DRIVER_ABORTED before we wait
* for abort to complete.
*/
wait_event_timeout(waitq,
(lpfc_cmd->pCmd != cmnd),
msecs_to_jiffies(2*vport->cfg_devloss_tmo*1000));
wait_event_timeout(waitq, (lpfc_cmd->pCmd != cmnd),
secs_to_jiffies(2*vport->cfg_devloss_tmo));

spin_lock(&lpfc_cmd->buf_lock);

@@ -5911,7 +5910,7 @@ lpfc_chk_tgt_mapped(struct lpfc_vport *vport, struct fc_rport *rport)
* If target is not in a MAPPED state, delay until
* target is rediscovered or devloss timeout expires.
*/
later = msecs_to_jiffies(2 * vport->cfg_devloss_tmo * 1000) + jiffies;
later = secs_to_jiffies(2 * vport->cfg_devloss_tmo) + jiffies;
while (time_after(later, jiffies)) {
if (!pnode)
return FAILED;
@@ -5957,7 +5956,7 @@ lpfc_reset_flush_io_context(struct lpfc_vport *vport, uint16_t tgt_id,
lpfc_sli_abort_taskmgmt(vport,
&phba->sli.sli3_ring[LPFC_FCP_RING],
tgt_id, lun_id, context);
later = msecs_to_jiffies(2 * vport->cfg_devloss_tmo * 1000) + jiffies;
later = secs_to_jiffies(2 * vport->cfg_devloss_tmo) + jiffies;
while (time_after(later, jiffies) && cnt) {
schedule_timeout_uninterruptible(msecs_to_jiffies(20));
cnt = lpfc_sli_sum_iocb(vport, tgt_id, lun_id, context);
@@ -6137,8 +6136,7 @@ lpfc_target_reset_handler(struct scsi_cmnd *cmnd)
wait_event_timeout(waitq,
!test_bit(NLP_WAIT_FOR_LOGO,
&pnode->save_flags),
msecs_to_jiffies(dev_loss_tmo *
1000));
secs_to_jiffies(dev_loss_tmo));

if (test_and_clear_bit(NLP_WAIT_FOR_LOGO,
&pnode->save_flags))
@@ -1025,7 +1025,7 @@ lpfc_handle_rrq_active(struct lpfc_hba *phba)
LIST_HEAD(send_rrq);

clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag);
next_time = jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov + 1));
next_time = jiffies + secs_to_jiffies(phba->fc_ratov + 1);
spin_lock_irqsave(&phba->rrq_list_lock, iflags);
list_for_each_entry_safe(rrq, nextrrq,
&phba->active_rrq_list, list) {
@@ -1208,8 +1208,7 @@ lpfc_set_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
else
rrq->send_rrq = 0;
rrq->xritag = xritag;
rrq->rrq_stop_time = jiffies +
msecs_to_jiffies(1000 * (phba->fc_ratov + 1));
rrq->rrq_stop_time = jiffies + secs_to_jiffies(phba->fc_ratov + 1);
rrq->nlp_DID = ndlp->nlp_DID;
rrq->vport = ndlp->vport;
rrq->rxid = rxid;
@@ -1736,8 +1735,7 @@ lpfc_sli_ringtxcmpl_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
BUG_ON(!piocb->vport);
if (!test_bit(FC_UNLOADING, &piocb->vport->load_flag))
mod_timer(&piocb->vport->els_tmofunc,
jiffies +
msecs_to_jiffies(1000 * (phba->fc_ratov << 1)));
jiffies + secs_to_jiffies(phba->fc_ratov << 1));
}

return 0;
@@ -2923,6 +2921,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag);
ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING;
lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
} else {
clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag);
}

/* The unreg_login mailbox is complete and had a
@@ -3956,8 +3956,7 @@ void lpfc_poll_eratt(struct timer_list *t)
else
/* Restart the timer for next eratt poll */
mod_timer(&phba->eratt_poll,
jiffies +
msecs_to_jiffies(1000 * phba->eratt_poll_interval));
jiffies + secs_to_jiffies(phba->eratt_poll_interval));
return;
}

@@ -9008,7 +9007,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)

/* Start the ELS watchdog timer */
mod_timer(&vport->els_tmofunc,
jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov * 2)));
jiffies + secs_to_jiffies(phba->fc_ratov * 2));

/* Start heart beat timer */
mod_timer(&phba->hb_tmofunc,
@@ -9027,7 +9026,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)

/* Start error attention (ERATT) polling timer */
mod_timer(&phba->eratt_poll,
jiffies + msecs_to_jiffies(1000 * phba->eratt_poll_interval));
jiffies + secs_to_jiffies(phba->eratt_poll_interval));

/*
* The port is ready, set the host's link state to LINK_DOWN
@@ -9504,8 +9503,7 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox,
goto out_not_finished;
}
/* timeout active mbox command */
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox) *
1000);
timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox));
mod_timer(&psli->mbox_tmo, jiffies + timeout);
}

@@ -9629,8 +9627,7 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox,
drvr_flag);
goto out_not_finished;
}
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox) *
1000) + jiffies;
timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox)) + jiffies;
i = 0;
/* Wait for command to complete */
while (((word0 & OWN_CHIP) == OWN_CHIP) ||
@@ -9756,9 +9753,8 @@ lpfc_sli4_async_mbox_block(struct lpfc_hba *phba)
* command to be gracefully completed by firmware.
*/
if (phba->sli.mbox_active)
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active) *
1000) + jiffies;
timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active)) + jiffies;
spin_unlock_irq(&phba->hbalock);

/* Make sure the mailbox is really active */
@@ -9881,8 +9877,7 @@ lpfc_sli4_wait_bmbx_ready(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
}
}

timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq)
* 1000) + jiffies;
timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq)) + jiffies;

do {
bmbx_reg.word0 = readl(phba->sli4_hba.BMBXregaddr);
@@ -10230,7 +10225,7 @@ lpfc_sli4_post_async_mbox(struct lpfc_hba *phba)

/* Start timer for the mbox_tmo and log some mailbox post messages */
mod_timer(&psli->mbox_tmo, (jiffies +
msecs_to_jiffies(1000 * lpfc_mbox_tmo_val(phba, mboxq))));
secs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq))));

lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
"(%d):0355 Mailbox cmd x%x (x%x/x%x) issue Data: "
@@ -13159,7 +13154,7 @@ lpfc_sli_issue_iocb_wait(struct lpfc_hba *phba,
retval = lpfc_sli_issue_iocb(phba, ring_number, piocb,
SLI_IOCB_RET_IOCB);
if (retval == IOCB_SUCCESS) {
timeout_req = msecs_to_jiffies(timeout * 1000);
timeout_req = secs_to_jiffies(timeout);
timeleft = wait_event_timeout(done_q,
lpfc_chk_iocb_flg(phba, piocb, LPFC_IO_WAKE),
timeout_req);
@@ -13275,8 +13270,7 @@ lpfc_sli_issue_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq,
/* now issue the command */
retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT);
if (retval == MBX_BUSY || retval == MBX_SUCCESS) {
wait_for_completion_timeout(&mbox_done,
msecs_to_jiffies(timeout * 1000));
wait_for_completion_timeout(&mbox_done, secs_to_jiffies(timeout));

spin_lock_irqsave(&phba->hbalock, flag);
pmboxq->ctx_u.mbox_wait = NULL;
@@ -13336,9 +13330,8 @@ lpfc_sli_mbox_sys_shutdown(struct lpfc_hba *phba, int mbx_action)
* command to be gracefully completed by firmware.
*/
if (phba->sli.mbox_active)
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active) *
1000) + jiffies;
timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active)) + jiffies;
spin_unlock_irq(&phba->hbalock);

/* Enable softirqs again, done with phba->hbalock */
@@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term *
* “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@@ -20,7 +20,7 @@
* included with this package. *
*******************************************************************/

#define LPFC_DRIVER_VERSION "14.4.0.7"
#define LPFC_DRIVER_VERSION "14.4.0.8"
#define LPFC_DRIVER_NAME "lpfc"

/* Used for SLI 2/3 */
@@ -32,6 +32,6 @@

#define LPFC_MODULE_DESC "Emulex LightPulse Fibre Channel SCSI driver " \
LPFC_DRIVER_VERSION
#define LPFC_COPYRIGHT "Copyright (C) 2017-2024 Broadcom. All Rights " \
#define LPFC_COPYRIGHT "Copyright (C) 2017-2025 Broadcom. All Rights " \
"Reserved. The term \"Broadcom\" refers to Broadcom Inc. " \
"and/or its subsidiaries."
@@ -246,7 +246,7 @@ static void lpfc_discovery_wait(struct lpfc_vport *vport)
* fabric RA_TOV value and dev_loss tmo. The driver's
* devloss_tmo is 10 giving this loop a 3x multiplier minimally.
*/
wait_time_max = msecs_to_jiffies(((phba->fc_ratov * 3) + 3) * 1000);
wait_time_max = secs_to_jiffies((phba->fc_ratov * 3) + 3);
wait_time_max += jiffies;
start_time = jiffies;
while (time_before(jiffies, wait_time_max)) {
@@ -855,8 +855,8 @@ mega_build_cmd(adapter_t *adapter, struct scsi_cmnd *cmd, int *busy)
return scb;

#if MEGA_HAVE_CLUSTERING
case RESERVE:
case RELEASE:
case RESERVE_6:
case RELEASE_6:

/*
* Do we support clustering and is the support enabled
@@ -875,7 +875,7 @@ mega_build_cmd(adapter_t *adapter, struct scsi_cmnd *cmd, int *busy)
}

scb->raw_mbox[0] = MEGA_CLUSTER_CMD;
scb->raw_mbox[2] = ( *cmd->cmnd == RESERVE ) ?
scb->raw_mbox[2] = *cmd->cmnd == RESERVE_6 ?
MEGA_RESERVE_LD : MEGA_RELEASE_LD;

scb->raw_mbox[3] = ldrv_num;
@@ -1618,8 +1618,8 @@ mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status)
* failed or the input parameter is invalid
*/
if( status == 1 &&
(cmd->cmnd[0] == RESERVE ||
cmd->cmnd[0] == RELEASE) ) {
(cmd->cmnd[0] == RESERVE_6 ||
cmd->cmnd[0] == RELEASE_6) ) {

cmd->result |= (DID_ERROR << 16) |
SAM_STAT_RESERVATION_CONFLICT;
@@ -1725,8 +1725,8 @@ megaraid_mbox_build_cmd(adapter_t *adapter, struct scsi_cmnd *scp, int *busy)

return scb;

case RESERVE:
case RELEASE:
case RESERVE_6:
case RELEASE_6:
/*
* Do we support clustering and is the support enabled
*/
@@ -1748,7 +1748,7 @@ megaraid_mbox_build_cmd(adapter_t *adapter, struct scsi_cmnd *scp, int *busy)
scb->dev_channel = 0xFF;
scb->dev_target = target;
ccb->raw_mbox[0] = CLUSTER_CMD;
ccb->raw_mbox[2] = (scp->cmnd[0] == RESERVE) ?
ccb->raw_mbox[2] = scp->cmnd[0] == RESERVE_6 ?
RESERVE_LD : RELEASE_LD;

ccb->raw_mbox[3] = target;
@@ -2334,8 +2334,8 @@ megaraid_mbox_dpc(unsigned long devp)
* Error code returned is 1 if Reserve or Release
* failed or the input parameter is invalid
*/
if (status == 1 && (scp->cmnd[0] == RESERVE ||
scp->cmnd[0] == RELEASE)) {
if (status == 1 && (scp->cmnd[0] == RESERVE_6 ||
scp->cmnd[0] == RELEASE_6)) {

scp->result = DID_ERROR << 16 |
SAM_STAT_RESERVATION_CONFLICT;
@@ -93,7 +93,7 @@ static unsigned int scmd_timeout = MEGASAS_DEFAULT_CMD_TIMEOUT;
module_param(scmd_timeout, int, 0444);
MODULE_PARM_DESC(scmd_timeout, "scsi command timeout (10-90s), default 90s. See megasas_reset_timer.");

int perf_mode = -1;
static int perf_mode = -1;
module_param(perf_mode, int, 0444);
MODULE_PARM_DESC(perf_mode, "Performance mode (only for Aero adapters), options:\n\t\t"
"0 - balanced: High iops and low latency queues are allocated &\n\t\t"
@@ -105,15 +105,15 @@ MODULE_PARM_DESC(perf_mode, "Performance mode (only for Aero adapters), options:
"default mode is 'balanced'"
);

int event_log_level = MFI_EVT_CLASS_CRITICAL;
static int event_log_level = MFI_EVT_CLASS_CRITICAL;
module_param(event_log_level, int, 0644);
MODULE_PARM_DESC(event_log_level, "Asynchronous event logging level- range is: -2(CLASS_DEBUG) to 4(CLASS_DEAD), Default: 2(CLASS_CRITICAL)");

unsigned int enable_sdev_max_qd;
static unsigned int enable_sdev_max_qd;
module_param(enable_sdev_max_qd, int, 0444);
MODULE_PARM_DESC(enable_sdev_max_qd, "Enable sdev max qd as can_queue. Default: 0");

int poll_queues;
static int poll_queues;
module_param(poll_queues, int, 0444);
MODULE_PARM_DESC(poll_queues, "Number of queues to be use for io_uring poll mode.\n\t\t"
"This parameter is effective only if host_tagset_enable=1 &\n\t\t"
@@ -122,7 +122,7 @@ MODULE_PARM_DESC(poll_queues, "Number of queues to be use for io_uring poll mode
"High iops queues are not allocated &\n\t\t"
);

int host_tagset_enable = 1;
static int host_tagset_enable = 1;
module_param(host_tagset_enable, int, 0444);
MODULE_PARM_DESC(host_tagset_enable, "Shared host tagset enable/disable Default: enable(1)");
@@ -19,6 +19,7 @@
#define MPI3_CONFIG_PAGETYPE_PCIE_SWITCH (0x31)
#define MPI3_CONFIG_PAGETYPE_PCIE_LINK (0x33)
#define MPI3_CONFIG_PAGEATTR_MASK (0xf0)
#define MPI3_CONFIG_PAGEATTR_SHIFT (4)
#define MPI3_CONFIG_PAGEATTR_READ_ONLY (0x00)
#define MPI3_CONFIG_PAGEATTR_CHANGEABLE (0x10)
#define MPI3_CONFIG_PAGEATTR_PERSISTENT (0x20)
@@ -29,10 +30,13 @@
#define MPI3_CONFIG_ACTION_READ_PERSISTENT (0x04)
#define MPI3_CONFIG_ACTION_WRITE_PERSISTENT (0x05)
#define MPI3_DEVICE_PGAD_FORM_MASK (0xf0000000)
#define MPI3_DEVICE_PGAD_FORM_SHIFT (28)
#define MPI3_DEVICE_PGAD_FORM_GET_NEXT_HANDLE (0x00000000)
#define MPI3_DEVICE_PGAD_FORM_HANDLE (0x20000000)
#define MPI3_DEVICE_PGAD_HANDLE_MASK (0x0000ffff)
#define MPI3_DEVICE_PGAD_HANDLE_SHIFT (0)
#define MPI3_SAS_EXPAND_PGAD_FORM_MASK (0xf0000000)
#define MPI3_SAS_EXPAND_PGAD_FORM_SHIFT (28)
#define MPI3_SAS_EXPAND_PGAD_FORM_GET_NEXT_HANDLE (0x00000000)
#define MPI3_SAS_EXPAND_PGAD_FORM_HANDLE_PHY_NUM (0x10000000)
#define MPI3_SAS_EXPAND_PGAD_FORM_HANDLE (0x20000000)
@@ -66,7 +66,12 @@ struct mpi3_component_image_header {
#define MPI3_IMAGE_HEADER_SIGNATURE1_SMM (0x204d4d53)
#define MPI3_IMAGE_HEADER_SIGNATURE1_PSW (0x20575350)
#define MPI3_IMAGE_HEADER_SIGNATURE2_VALUE (0x50584546)
#define MPI3_IMAGE_HEADER_FLAGS_SIGNED_UEFI_MASK (0x00000300)
#define MPI3_IMAGE_HEADER_FLAGS_SIGNED_UEFI_SHIFT (8)
#define MPI3_IMAGE_HEADER_FLAGS_CERT_CHAIN_FORMAT_MASK (0x000000c0)
#define MPI3_IMAGE_HEADER_FLAGS_CERT_CHAIN_FORMAT_SHIFT (6)
#define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_MASK (0x00000030)
#define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_SHIFT (4)
#define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_CDI (0x00000000)
#define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_DI (0x00000010)
#define MPI3_IMAGE_HEADER_FLAGS_SIGNED_NVDATA (0x00000008)
@@ -214,11 +219,13 @@ struct mpi3_encrypted_hash_entry {
#define MPI3_HASH_IMAGE_TYPE_KEY_WITH_HASH_1_OF_2 (0x04)
#define MPI3_HASH_IMAGE_TYPE_KEY_WITH_HASH_2_OF_2 (0x05)
#define MPI3_HASH_ALGORITHM_VERSION_MASK (0xe0)
#define MPI3_HASH_ALGORITHM_VERSION_SHIFT (5)
#define MPI3_HASH_ALGORITHM_VERSION_NONE (0x00)
#define MPI3_HASH_ALGORITHM_VERSION_SHA1 (0x20)
#define MPI3_HASH_ALGORITHM_VERSION_SHA2 (0x40)
#define MPI3_HASH_ALGORITHM_VERSION_SHA3 (0x60)
#define MPI3_HASH_ALGORITHM_SIZE_MASK (0x1f)
#define MPI3_HASH_ALGORITHM_SIZE_SHIFT (0)
#define MPI3_HASH_ALGORITHM_SIZE_UNUSED (0x00)
#define MPI3_HASH_ALGORITHM_SIZE_SHA256 (0x01)
#define MPI3_HASH_ALGORITHM_SIZE_SHA512 (0x02)
@@ -236,6 +243,7 @@ struct mpi3_encrypted_hash_entry {
#define MPI3_ENCRYPTION_ALGORITHM_ML_DSA_65 (0x0c)
#define MPI3_ENCRYPTION_ALGORITHM_ML_DSA_44 (0x0d)
#define MPI3_ENCRYPTED_HASH_ENTRY_FLAGS_PAIRED_KEY_MASK (0x0f)
#define MPI3_ENCRYPTED_HASH_ENTRY_FLAGS_PAIRED_KEY_SHIFT (0)

#ifndef MPI3_ENCRYPTED_HASH_ENTRY_MAX
#define MPI3_ENCRYPTED_HASH_ENTRY_MAX (1)
@@ -38,23 +38,31 @@ struct mpi3_scsi_io_request {
#define MPI3_SCSIIO_MSGFLAGS_METASGL_VALID (0x80)
#define MPI3_SCSIIO_MSGFLAGS_DIVERT_TO_FIRMWARE (0x40)
-#define MPI3_SCSIIO_FLAGS_LARGE_CDB (0x60000000)
+#define MPI3_SCSIIO_FLAGS_LARGE_CDB_MASK (0x60000000)
#define MPI3_SCSIIO_FLAGS_LARGE_CDB_SHIFT (29)
#define MPI3_SCSIIO_FLAGS_IOC_USE_ONLY_27_MASK (0x18000000)
#define MPI3_SCSIIO_FLAGS_IOC_USE_ONLY_27_SHIFT (27)
#define MPI3_SCSIIO_FLAGS_CDB_16_OR_LESS (0x00000000)
#define MPI3_SCSIIO_FLAGS_CDB_GREATER_THAN_16 (0x20000000)
#define MPI3_SCSIIO_FLAGS_CDB_IN_SEPARATE_BUFFER (0x40000000)
#define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_MASK (0x07000000)
#define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_SHIFT (24)
#define MPI3_SCSIIO_FLAGS_DATADIRECTION_MASK (0x000c0000)
#define MPI3_SCSIIO_FLAGS_DATADIRECTION_SHIFT (18)
#define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_SIMPLEQ (0x00000000)
#define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_HEADOFQ (0x01000000)
#define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_ORDEREDQ (0x02000000)
#define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_ACAQ (0x04000000)
#define MPI3_SCSIIO_FLAGS_CMDPRI_MASK (0x00f00000)
#define MPI3_SCSIIO_FLAGS_CMDPRI_SHIFT (20)
#define MPI3_SCSIIO_FLAGS_DATADIRECTION_MASK (0x000c0000)
#define MPI3_SCSIIO_FLAGS_DATADIRECTION_NO_DATA_TRANSFER (0x00000000)
#define MPI3_SCSIIO_FLAGS_DATADIRECTION_WRITE (0x00040000)
#define MPI3_SCSIIO_FLAGS_DATADIRECTION_READ (0x00080000)
#define MPI3_SCSIIO_FLAGS_DMAOPERATION_MASK (0x00030000)
#define MPI3_SCSIIO_FLAGS_DMAOPERATION_SHIFT (16)
#define MPI3_SCSIIO_FLAGS_DMAOPERATION_HOST_PI (0x00010000)
#define MPI3_SCSIIO_FLAGS_DIVERT_REASON_MASK (0x000000f0)
#define MPI3_SCSIIO_FLAGS_DIVERT_REASON_SHIFT (4)
#define MPI3_SCSIIO_FLAGS_DIVERT_REASON_IO_THROTTLING (0x00000010)
#define MPI3_SCSIIO_FLAGS_DIVERT_REASON_WRITE_SAME_TOO_LARGE (0x00000020)
#define MPI3_SCSIIO_FLAGS_DIVERT_REASON_PROD_SPECIFIC (0x00000080)
@@ -99,6 +107,7 @@ struct mpi3_scsi_io_reply {
#define MPI3_SCSI_STATUS_ACA_ACTIVE (0x30)
#define MPI3_SCSI_STATUS_TASK_ABORTED (0x40)
#define MPI3_SCSI_STATE_SENSE_MASK (0x03)
#define MPI3_SCSI_STATE_SENSE_SHIFT (0)
#define MPI3_SCSI_STATE_SENSE_VALID (0x00)
#define MPI3_SCSI_STATE_SENSE_FAILED (0x01)
#define MPI3_SCSI_STATE_SENSE_BUFF_Q_EMPTY (0x02)
@@ -30,6 +30,7 @@ struct mpi3_ioc_init_request {
#define MPI3_IOCINIT_MSGFLAGS_WRITESAMEDIVERT_SUPPORTED (0x08)
#define MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED (0x04)
#define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_MASK (0x03)
#define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_SHIFT (0)
#define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_NOT_USED (0x00)
#define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_SEPARATED (0x01)
#define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_INLINE (0x02)
@@ -40,6 +41,7 @@ struct mpi3_ioc_init_request {
#define MPI3_WHOINIT_MANUFACTURER (0x04)

#define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_MASK (0x00000003)
#define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_SHIFT (0)
#define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_NO_GUIDANCE (0x00000000)
#define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_NO_SPECIAL (0x00000001)
#define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_REPORT_AS_HDD (0x00000002)
@@ -111,9 +113,11 @@ struct mpi3_ioc_facts_data {
	__le32 diag_tty_size;
};
#define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_MASK (0x80000000)
#define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_SHIFT (31)
#define MPI3_IOCFACTS_CAPABILITY_SUPERVISOR_IOC (0x00000000)
#define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_IOC (0x80000000)
#define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_MASK (0x00000600)
#define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_SHIFT (9)
#define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_FIXED_THRESHOLD (0x00000000)
#define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_OUTSTANDING_IO (0x00000200)
#define MPI3_IOCFACTS_CAPABILITY_COMPLETE_RESET_SUPPORTED (0x00000100)
@@ -134,6 +138,7 @@ struct mpi3_ioc_facts_data {
#define MPI3_IOCFACTS_EXCEPT_SAS_DISABLED (0x1000)
#define MPI3_IOCFACTS_EXCEPT_SAFE_MODE (0x0800)
#define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_MASK (0x0700)
#define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_SHIFT (8)
#define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_NONE (0x0000)
#define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_LOCAL_VIA_MGMT (0x0100)
#define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_EXT_VIA_MGMT (0x0200)
@@ -149,6 +154,7 @@ struct mpi3_ioc_facts_data {
#define MPI3_IOCFACTS_EXCEPT_BLOCKING_BOOT_EVENT (0x0004)
#define MPI3_IOCFACTS_EXCEPT_SECURITY_SELFTEST_FAILURE (0x0002)
#define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_MASK (0x0001)
#define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_SHIFT (0)
#define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_PRIMARY (0x0000)
#define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_SECONDARY (0x0001)
#define MPI3_IOCFACTS_PROTOCOL_SAS (0x0010)
@@ -161,10 +167,12 @@ struct mpi3_ioc_facts_data {
#define MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK (0x0000ff00)
#define MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT (8)
#define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_MASK (0x00000030)
#define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_SHIFT (4)
#define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_NOT_STARTED (0x00000000)
#define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_IN_PROGRESS (0x00000010)
#define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_COMPLETE (0x00000020)
#define MPI3_IOCFACTS_FLAGS_PERSONALITY_MASK (0x0000000f)
#define MPI3_IOCFACTS_FLAGS_PERSONALITY_SHIFT (0)
#define MPI3_IOCFACTS_FLAGS_PERSONALITY_EHBA (0x00000000)
#define MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR (0x00000002)
#define MPI3_IOCFACTS_IO_THROTTLE_DATA_LENGTH_NOT_REQUIRED (0x0000)
@@ -204,6 +212,7 @@ struct mpi3_create_request_queue_request {
};

#define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_MASK (0x80)
#define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_SHIFT (7)
#define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_SEGMENTED (0x80)
#define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_CONTIGUOUS (0x00)
#define MPI3_CREATE_REQUEST_QUEUE_SIZE_MINIMUM (2)
@@ -237,10 +246,12 @@ struct mpi3_create_reply_queue_request {
};

#define MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_MASK (0x80)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_SHIFT (7)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_SEGMENTED (0x80)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_CONTIGUOUS (0x00)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_COALESCE_DISABLE (0x02)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_MASK (0x01)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_SHIFT (0)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_DISABLE (0x00)
#define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_ENABLE (0x01)
#define MPI3_CREATE_REPLY_QUEUE_SIZE_MINIMUM (2)
@@ -326,9 +337,11 @@ struct mpi3_event_notification_reply {
};

#define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_MASK (0x01)
#define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_SHIFT (0)
#define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_REQUIRED (0x01)
#define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_NOT_REQUIRED (0x00)
#define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_MASK (0x02)
#define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_SHIFT (1)
#define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_ORIGINAL (0x00)
#define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_REPLAY (0x02)
struct mpi3_event_data_gpio_interrupt {
@@ -487,6 +500,7 @@ struct mpi3_event_sas_topo_phy_entry {
#define MPI3_EVENT_SAS_TOPO_PHY_STATUS_NO_EXIST (0x40)
#define MPI3_EVENT_SAS_TOPO_PHY_STATUS_VACANT (0x80)
#define MPI3_EVENT_SAS_TOPO_PHY_RC_MASK (0x0f)
#define MPI3_EVENT_SAS_TOPO_PHY_RC_SHIFT (0)
#define MPI3_EVENT_SAS_TOPO_PHY_RC_TARG_NOT_RESPONDING (0x02)
#define MPI3_EVENT_SAS_TOPO_PHY_RC_PHY_CHANGED (0x03)
#define MPI3_EVENT_SAS_TOPO_PHY_RC_NO_CHANGE (0x04)
@@ -566,6 +580,7 @@ struct mpi3_event_pcie_topo_port_entry {
#define MPI3_EVENT_PCIE_TOPO_PS_DELAY_NOT_RESPONDING (0x05)
#define MPI3_EVENT_PCIE_TOPO_PS_RESPONDING (0x06)
#define MPI3_EVENT_PCIE_TOPO_PI_LANES_MASK (0xf0)
#define MPI3_EVENT_PCIE_TOPO_PI_LANES_SHIFT (4)
#define MPI3_EVENT_PCIE_TOPO_PI_LANES_UNKNOWN (0x00)
#define MPI3_EVENT_PCIE_TOPO_PI_LANES_1 (0x10)
#define MPI3_EVENT_PCIE_TOPO_PI_LANES_2 (0x20)
@@ -573,6 +588,7 @@ struct mpi3_event_pcie_topo_port_entry {
#define MPI3_EVENT_PCIE_TOPO_PI_LANES_8 (0x40)
#define MPI3_EVENT_PCIE_TOPO_PI_LANES_16 (0x50)
#define MPI3_EVENT_PCIE_TOPO_PI_RATE_MASK (0x0f)
#define MPI3_EVENT_PCIE_TOPO_PI_RATE_SHIFT (0)
#define MPI3_EVENT_PCIE_TOPO_PI_RATE_UNKNOWN (0x00)
#define MPI3_EVENT_PCIE_TOPO_PI_RATE_DISABLED (0x01)
#define MPI3_EVENT_PCIE_TOPO_PI_RATE_2_5 (0x02)
@@ -881,6 +897,7 @@ struct mpi3_pel_req_action_acknowledge {
};

#define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_MASK (0x03)
#define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_SHIFT (0)
#define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_NO_GUIDANCE (0x00)
#define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_CONTINUE_OP (0x01)
#define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_TRANSITION_TO_FAULT (0x02)
@@ -924,6 +941,7 @@ struct mpi3_ci_download_request {
#define MPI3_CI_DOWNLOAD_MSGFLAGS_FORCE_FMC_ENABLE (0x40)
#define MPI3_CI_DOWNLOAD_MSGFLAGS_SIGNED_NVDATA (0x20)
#define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_MASK (0x03)
#define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_SHIFT (0)
#define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_FAST (0x00)
#define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_MEDIUM (0x01)
#define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_SLOW (0x02)
@@ -953,6 +971,7 @@ struct mpi3_ci_download_reply {
#define MPI3_CI_DOWNLOAD_FLAGS_OFFLINE_ACTIVATION_REQUIRED (0x20)
#define MPI3_CI_DOWNLOAD_FLAGS_KEY_UPDATE_PENDING (0x10)
#define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_MASK (0x0e)
#define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_SHIFT (1)
#define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_NOT_NEEDED (0x00)
#define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_AWAITING (0x02)
#define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_ONLINE_PENDING (0x04)
@@ -976,9 +995,11 @@ struct mpi3_ci_upload_request {
};

#define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_MASK (0x01)
#define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_SHIFT (0)
#define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_PRIMARY (0x00)
#define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_SECONDARY (0x01)
#define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_MASK (0x02)
#define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_SHIFT (1)
#define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_FLASH (0x00)
#define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_EXECUTABLE (0x02)
#define MPI3_CTRL_OP_FORCE_FULL_DISCOVERY (0x01)
@@ -9,6 +9,7 @@
#define MPI3_DIAG_BUFFER_TYPE_FW (0x02)
#define MPI3_DIAG_BUFFER_ACTION_RELEASE (0x01)

#define MPI3_DIAG_BUFFER_POST_MSGFLAGS_SEGMENTED (0x01)
struct mpi3_diag_buffer_post_request {
	__le16 host_tag;
	u8 ioc_use_only02;
@@ -18,7 +18,7 @@ union mpi3_version_union {

#define MPI3_VERSION_MAJOR (3)
#define MPI3_VERSION_MINOR (0)
-#define MPI3_VERSION_UNIT (34)
+#define MPI3_VERSION_UNIT (35)
#define MPI3_VERSION_DEV (0)
#define MPI3_DEVHANDLE_INVALID (0xffff)
struct mpi3_sysif_oper_queue_indexes {
@@ -80,6 +80,7 @@ struct mpi3_sysif_registers {
#define MPI3_SYSIF_IOC_CONFIG_OPER_RPY_ENT_SZ_SHIFT (20)
#define MPI3_SYSIF_IOC_CONFIG_OPER_REQ_ENT_SZ (0x000f0000)
#define MPI3_SYSIF_IOC_CONFIG_OPER_REQ_ENT_SZ_SHIFT (16)
#define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_SHIFT (14)
#define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_MASK (0x0000c000)
#define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_NO (0x00000000)
#define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_NORMAL (0x00004000)
@@ -97,6 +98,7 @@ struct mpi3_sysif_registers {
#define MPI3_SYSIF_IOC_STATUS_READY (0x00000001)
#define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_OFFSET (0x00000024)
#define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REQ_MASK (0x0fff)
#define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REQ_SHIFT (0)
#define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REPLY_OFFSET (0x00000026)
#define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REPLY_MASK (0x0fff0000)
#define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REPLY_SHIFT (16)
@@ -106,6 +108,7 @@ struct mpi3_sysif_registers {
#define MPI3_SYSIF_ADMIN_REPLY_Q_ADDR_HIGH_OFFSET (0x00000034)
#define MPI3_SYSIF_COALESCE_CONTROL_OFFSET (0x00000040)
#define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_MASK (0xc0000000)
#define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_SHIFT (30)
#define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_NO_CHANGE (0x00000000)
#define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_DISABLE (0x40000000)
#define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_ENABLE (0xc0000000)
@@ -124,6 +127,7 @@ struct mpi3_sysif_registers {
#define MPI3_SYSIF_OPER_REPLY_Q_N_CI_OFFSET(N) (MPI3_SYSIF_OPER_REPLY_Q_CI_OFFSET + (((N) - 1) * 8))
#define MPI3_SYSIF_WRITE_SEQUENCE_OFFSET (0x00001c04)
#define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_MASK (0x0000000f)
#define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_SHIFT (0)
#define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_FLUSH (0x0)
#define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_1ST (0xf)
#define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_2ND (0x4)
@@ -133,6 +137,7 @@ struct mpi3_sysif_registers {
#define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_6TH (0xd)
#define MPI3_SYSIF_HOST_DIAG_OFFSET (0x00001c08)
#define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_MASK (0x00000700)
#define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SHIFT (8)
#define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_NO_RESET (0x00000000)
#define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SOFT_RESET (0x00000100)
#define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_HOST_CONTROL_BOOT_RESET (0x00000200)
@@ -151,6 +156,7 @@ struct mpi3_sysif_registers {
#define MPI3_SYSIF_FAULT_FUNC_AREA_SHIFT (24)
#define MPI3_SYSIF_FAULT_FUNC_AREA_MPI_DEFINED (0x00000000)
#define MPI3_SYSIF_FAULT_CODE_MASK (0x0000ffff)
#define MPI3_SYSIF_FAULT_CODE_SHIFT (0)
#define MPI3_SYSIF_FAULT_CODE_DIAG_FAULT_RESET (0x0000f000)
#define MPI3_SYSIF_FAULT_CODE_CI_ACTIVATION_RESET (0x0000f001)
#define MPI3_SYSIF_FAULT_CODE_SOFT_RESET_IN_PROGRESS (0x0000f002)
@@ -176,17 +182,20 @@ struct mpi3_sysif_registers {
#define MPI3_SYSIF_DIAG_RW_ADDRESS_HIGH_OFFSET (0x00001c5c)
#define MPI3_SYSIF_DIAG_RW_CONTROL_OFFSET (0x00001c60)
#define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_MASK (0x00000030)
#define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_SHIFT (4)
#define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_1BYTE (0x00000000)
#define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_2BYTES (0x00000010)
#define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_4BYTES (0x00000020)
#define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_8BYTES (0x00000030)
#define MPI3_SYSIF_DIAG_RW_CONTROL_RESET (0x00000004)
#define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_MASK (0x00000002)
#define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_SHIFT (1)
#define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_READ (0x00000000)
#define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_WRITE (0x00000002)
#define MPI3_SYSIF_DIAG_RW_CONTROL_START (0x00000001)
#define MPI3_SYSIF_DIAG_RW_STATUS_OFFSET (0x00001c62)
#define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_MASK (0x0000000e)
#define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_SHIFT (1)
#define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_SUCCESS (0x00000000)
#define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_INV_ADDR (0x00000002)
#define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_ACC_ERR (0x00000004)
@@ -207,7 +216,9 @@ struct mpi3_default_reply_descriptor {
};

#define MPI3_REPLY_DESCRIPT_FLAGS_PHASE_MASK (0x0001)
#define MPI3_REPLY_DESCRIPT_FLAGS_PHASE_SHIFT (0)
#define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_MASK (0xf000)
#define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_SHIFT (12)
#define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_ADDRESS_REPLY (0x0000)
#define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_SUCCESS (0x1000)
#define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_TARGET_COMMAND_BUFFER (0x2000)
@@ -301,6 +312,7 @@ union mpi3_sge_union {
};

#define MPI3_SGE_FLAGS_ELEMENT_TYPE_MASK (0xf0)
#define MPI3_SGE_FLAGS_ELEMENT_TYPE_SHIFT (4)
#define MPI3_SGE_FLAGS_ELEMENT_TYPE_SIMPLE (0x00)
#define MPI3_SGE_FLAGS_ELEMENT_TYPE_BIT_BUCKET (0x10)
#define MPI3_SGE_FLAGS_ELEMENT_TYPE_CHAIN (0x20)
@@ -309,6 +321,7 @@ union mpi3_sge_union {
#define MPI3_SGE_FLAGS_END_OF_LIST (0x08)
#define MPI3_SGE_FLAGS_END_OF_BUFFER (0x04)
#define MPI3_SGE_FLAGS_DLAS_MASK (0x03)
#define MPI3_SGE_FLAGS_DLAS_SHIFT (0)
#define MPI3_SGE_FLAGS_DLAS_SYSTEM (0x00)
#define MPI3_SGE_FLAGS_DLAS_IOC_UDP (0x01)
#define MPI3_SGE_FLAGS_DLAS_IOC_CTL (0x02)
@@ -322,15 +335,18 @@ union mpi3_sge_union {
#define MPI3_EEDPFLAGS_CHK_APP_TAG (0x0200)
#define MPI3_EEDPFLAGS_CHK_GUARD (0x0100)
#define MPI3_EEDPFLAGS_ESC_MODE_MASK (0x00c0)
#define MPI3_EEDPFLAGS_ESC_MODE_SHIFT (6)
#define MPI3_EEDPFLAGS_ESC_MODE_DO_NOT_DISABLE (0x0040)
#define MPI3_EEDPFLAGS_ESC_MODE_APPTAG_DISABLE (0x0080)
#define MPI3_EEDPFLAGS_ESC_MODE_APPTAG_REFTAG_DISABLE (0x00c0)
#define MPI3_EEDPFLAGS_HOST_GUARD_MASK (0x0030)
#define MPI3_EEDPFLAGS_HOST_GUARD_SHIFT (4)
#define MPI3_EEDPFLAGS_HOST_GUARD_T10_CRC (0x0000)
#define MPI3_EEDPFLAGS_HOST_GUARD_IP_CHKSUM (0x0010)
#define MPI3_EEDPFLAGS_HOST_GUARD_OEM_SPECIFIC (0x0020)
#define MPI3_EEDPFLAGS_PT_REF_TAG (0x0008)
#define MPI3_EEDPFLAGS_EEDP_OP_MASK (0x0007)
#define MPI3_EEDPFLAGS_EEDP_OP_SHIFT (0)
#define MPI3_EEDPFLAGS_EEDP_OP_CHECK (0x0001)
#define MPI3_EEDPFLAGS_EEDP_OP_STRIP (0x0002)
#define MPI3_EEDPFLAGS_EEDP_OP_CHECK_REMOVE (0x0003)
@@ -403,6 +419,7 @@ struct mpi3_default_reply {
#define MPI3_IOCSTATUS_LOG_INFO_AVAIL_MASK (0x8000)
#define MPI3_IOCSTATUS_LOG_INFO_AVAILABLE (0x8000)
#define MPI3_IOCSTATUS_STATUS_MASK (0x7fff)
#define MPI3_IOCSTATUS_STATUS_SHIFT (0)
#define MPI3_IOCSTATUS_SUCCESS (0x0000)
#define MPI3_IOCSTATUS_INVALID_FUNCTION (0x0001)
#define MPI3_IOCSTATUS_BUSY (0x0002)
@@ -469,4 +486,5 @@ struct mpi3_default_reply {
#define MPI3_IOCLOGINFO_TYPE_NONE (0x0)
#define MPI3_IOCLOGINFO_TYPE_SAS (0x3)
#define MPI3_IOCLOGINFO_LOG_DATA_MASK (0x0fffffff)
#define MPI3_IOCLOGINFO_LOG_DATA_SHIFT (0)
#endif
@@ -56,8 +56,8 @@ extern struct list_head mrioc_list;
extern int prot_mask;
extern atomic64_t event_counter;

-#define MPI3MR_DRIVER_VERSION "8.12.0.3.50"
-#define MPI3MR_DRIVER_RELDATE "11-November-2024"
+#define MPI3MR_DRIVER_VERSION "8.13.0.5.50"
+#define MPI3MR_DRIVER_RELDATE "20-February-2025"

#define MPI3MR_DRIVER_NAME "mpi3mr"
#define MPI3MR_DRIVER_LICENSE "GPL"
@@ -80,13 +80,14 @@ extern atomic64_t event_counter;

/* Admin queue management definitions */
#define MPI3MR_ADMIN_REQ_Q_SIZE (2 * MPI3MR_PAGE_SIZE_4K)
-#define MPI3MR_ADMIN_REPLY_Q_SIZE (4 * MPI3MR_PAGE_SIZE_4K)
+#define MPI3MR_ADMIN_REPLY_Q_SIZE (8 * MPI3MR_PAGE_SIZE_4K)
#define MPI3MR_ADMIN_REQ_FRAME_SZ 128
#define MPI3MR_ADMIN_REPLY_FRAME_SZ 16

/* Operational queue management definitions */
#define MPI3MR_OP_REQ_Q_QD 512
#define MPI3MR_OP_REP_Q_QD 1024
#define MPI3MR_OP_REP_Q_QD2K 2048
#define MPI3MR_OP_REP_Q_QD4K 4096
#define MPI3MR_OP_REQ_Q_SEG_SIZE 4096
#define MPI3MR_OP_REP_Q_SEG_SIZE 4096
@@ -328,6 +329,7 @@ enum mpi3mr_reset_reason {
#define MPI3MR_RESET_REASON_OSTYPE_SHIFT 28
#define MPI3MR_RESET_REASON_IOCNUM_SHIFT 20


/* Queue type definitions */
enum queue_type {
	MPI3MR_DEFAULT_QUEUE = 0,
@@ -387,6 +389,7 @@ struct mpi3mr_ioc_facts {
	u16 max_msix_vectors;
	u8 personality;
	u8 dma_mask;
	bool max_req_limit;
	u8 protocol_flags;
	u8 sge_mod_mask;
	u8 sge_mod_value;
@@ -456,6 +459,8 @@ struct op_req_qinfo {
 * @enable_irq_poll: Flag to indicate polling is enabled
 * @in_use: Queue is handled by poll/ISR
 * @qtype: Type of queue (types defined in enum queue_type)
 * @qfull_watermark: Watermark defined in reply queue to avoid
 *                   reply queue full
 */
struct op_reply_qinfo {
	u16 ci;
@@ -471,6 +476,7 @@ struct op_reply_qinfo {
	bool enable_irq_poll;
	atomic_t in_use;
	enum queue_type qtype;
	u16 qfull_watermark;
};

/**
@@ -928,6 +934,8 @@ struct trigger_event_data {
 * @size: Buffer size
 * @addr: Virtual address
 * @dma_addr: Buffer DMA address
 * @is_segmented: The buffer is segmented or not
 * @disabled_after_reset: The buffer is disabled after reset
 */
struct diag_buffer_desc {
	u8 type;
@@ -937,6 +945,8 @@ struct diag_buffer_desc {
	u32 size;
	void *addr;
	dma_addr_t dma_addr;
	bool is_segmented;
	bool disabled_after_reset;
};
/**
@@ -1022,6 +1032,8 @@ struct scmd_priv {
 * @admin_reply_base: Admin reply queue base virtual address
 * @admin_reply_dma: Admin reply queue base dma address
 * @admin_reply_q_in_use: Queue is handled by poll/ISR
 * @admin_pend_isr: Count of unprocessed admin ISR/poll calls
 *                  due to another thread processing replies
 * @ready_timeout: Controller ready timeout
 * @intr_info: Interrupt cookie pointer
 * @intr_info_count: Number of interrupt cookies
@@ -1090,6 +1102,7 @@ struct scmd_priv {
 * @ts_update_interval: Timestamp update interval
 * @reset_in_progress: Reset in progress flag
 * @unrecoverable: Controller unrecoverable flag
 * @io_admin_reset_sync: Manage state of I/O ops during an admin reset process
 * @prev_reset_result: Result of previous reset
 * @reset_mutex: Controller reset mutex
 * @reset_waitq: Controller reset wait queue
@@ -1153,6 +1166,12 @@ struct scmd_priv {
 * @snapdump_trigger_active: Snapdump trigger active flag
 * @pci_err_recovery: PCI error recovery in progress
 * @block_on_pci_err: Block IO during PCI error recovery
 * @reply_qfull_count: Occurences of reply queue full avoidance kicking-in
 * @prevent_reply_qfull: Enable reply queue prevention
 * @seg_tb_support: Segmented trace buffer support
 * @num_tb_segs: Number of Segments in Trace buffer
 * @trace_buf_pool: DMA pool for Segmented trace buffer segments
 * @trace_buf: Trace buffer segments memory descriptor
 */
struct mpi3mr_ioc {
	struct list_head list;
@@ -1189,6 +1208,7 @@ struct mpi3mr_ioc {
	void *admin_reply_base;
	dma_addr_t admin_reply_dma;
	atomic_t admin_reply_q_in_use;
	atomic_t admin_pend_isr;

	u32 ready_timeout;

@@ -1276,6 +1296,7 @@ struct mpi3mr_ioc {
	u16 ts_update_interval;
	u8 reset_in_progress;
	u8 unrecoverable;
	u8 io_admin_reset_sync;
	int prev_reset_result;
	struct mutex reset_mutex;
	wait_queue_head_t reset_waitq;
@@ -1351,6 +1372,13 @@ struct mpi3mr_ioc {
	bool fw_release_trigger_active;
	bool pci_err_recovery;
	bool block_on_pci_err;
	atomic_t reply_qfull_count;
	bool prevent_reply_qfull;
	bool seg_tb_support;
	u32 num_tb_segs;
	struct dma_pool *trace_buf_pool;
	struct segments *trace_buf;

};

/**
@@ -12,23 +12,98 @@
#include <uapi/scsi/scsi_bsg_mpi3mr.h>

/**
- * mpi3mr_alloc_trace_buffer: Allocate trace buffer
+ * mpi3mr_alloc_trace_buffer: Allocate segmented trace buffer
 * @mrioc: Adapter instance reference
 * @trace_size: Trace buffer size
 *
- * Allocate trace buffer
+ * Allocate either segmented memory pools or contiguous buffer
+ * based on the controller capability for the host trace
+ * buffer.
 *
 * Return: 0 on success, non-zero on failure.
 */
static int mpi3mr_alloc_trace_buffer(struct mpi3mr_ioc *mrioc, u32 trace_size)
{
	struct diag_buffer_desc *diag_buffer = &mrioc->diag_buffers[0];
	int i, sz;
	u64 *diag_buffer_list = NULL;
	dma_addr_t diag_buffer_list_dma;
	u32 seg_count;

-	diag_buffer->addr = dma_alloc_coherent(&mrioc->pdev->dev,
-	    trace_size, &diag_buffer->dma_addr, GFP_KERNEL);
-	if (diag_buffer->addr) {
-		dprint_init(mrioc, "trace diag buffer is allocated successfully\n");
	if (mrioc->seg_tb_support) {
		seg_count = (trace_size) / MPI3MR_PAGE_SIZE_4K;
		trace_size = seg_count * MPI3MR_PAGE_SIZE_4K;

		diag_buffer_list = dma_alloc_coherent(&mrioc->pdev->dev,
		    sizeof(u64) * seg_count,
		    &diag_buffer_list_dma, GFP_KERNEL);
		if (!diag_buffer_list)
			return -1;

		mrioc->num_tb_segs = seg_count;

		sz = sizeof(struct segments) * seg_count;
		mrioc->trace_buf = kzalloc(sz, GFP_KERNEL);
		if (!mrioc->trace_buf)
			goto trace_buf_failed;

		mrioc->trace_buf_pool = dma_pool_create("trace_buf pool",
		    &mrioc->pdev->dev, MPI3MR_PAGE_SIZE_4K, MPI3MR_PAGE_SIZE_4K,
		    0);
		if (!mrioc->trace_buf_pool) {
			ioc_err(mrioc, "trace buf pool: dma_pool_create failed\n");
			goto trace_buf_pool_failed;
		}

		for (i = 0; i < seg_count; i++) {
			mrioc->trace_buf[i].segment =
			    dma_pool_zalloc(mrioc->trace_buf_pool, GFP_KERNEL,
			    &mrioc->trace_buf[i].segment_dma);
			diag_buffer_list[i] =
			    (u64) mrioc->trace_buf[i].segment_dma;
			if (!diag_buffer_list[i])
				goto tb_seg_alloc_failed;
		}

		diag_buffer->addr = diag_buffer_list;
		diag_buffer->dma_addr = diag_buffer_list_dma;
		diag_buffer->is_segmented = true;

		dprint_init(mrioc, "segmented trace diag buffer\n"
		    "is allocated successfully seg_count:%d\n", seg_count);
		return 0;
	} else {
		diag_buffer->addr = dma_alloc_coherent(&mrioc->pdev->dev,
		    trace_size, &diag_buffer->dma_addr, GFP_KERNEL);
		if (diag_buffer->addr) {
			dprint_init(mrioc, "trace diag buffer is allocated successfully\n");
			return 0;
		}
		return -1;
	}

tb_seg_alloc_failed:
	if (mrioc->trace_buf_pool) {
		for (i = 0; i < mrioc->num_tb_segs; i++) {
			if (mrioc->trace_buf[i].segment) {
				dma_pool_free(mrioc->trace_buf_pool,
				    mrioc->trace_buf[i].segment,
				    mrioc->trace_buf[i].segment_dma);
				mrioc->trace_buf[i].segment = NULL;
			}
			mrioc->trace_buf[i].segment = NULL;
		}
		dma_pool_destroy(mrioc->trace_buf_pool);
		mrioc->trace_buf_pool = NULL;
	}
trace_buf_pool_failed:
	kfree(mrioc->trace_buf);
	mrioc->trace_buf = NULL;
trace_buf_failed:
	if (diag_buffer_list)
		dma_free_coherent(&mrioc->pdev->dev,
		    sizeof(u64) * mrioc->num_tb_segs,
		    diag_buffer_list, diag_buffer_list_dma);
	return -1;
}
@@ -100,8 +175,9 @@ retry_trace:
		dprint_init(mrioc,
		    "trying to allocate trace diag buffer of size = %dKB\n",
		    trace_size / 1024);
-		if (get_order(trace_size) > MAX_PAGE_ORDER ||
+		if ((!mrioc->seg_tb_support && (get_order(trace_size) > MAX_PAGE_ORDER)) ||
		    mpi3mr_alloc_trace_buffer(mrioc, trace_size)) {

			retry = true;
			trace_size -= trace_dec_size;
			dprint_init(mrioc, "trace diag buffer allocation failed\n"

@@ -161,6 +237,12 @@ int mpi3mr_issue_diag_buf_post(struct mpi3mr_ioc *mrioc,
	u8 prev_status;
	int retval = 0;

+	if (diag_buffer->disabled_after_reset) {
+		dprint_bsg_err(mrioc, "%s: skipping diag buffer posting\n"
+		    "as it is disabled after reset\n", __func__);
+		return -1;
+	}
+
	memset(&diag_buf_post_req, 0, sizeof(diag_buf_post_req));
	mutex_lock(&mrioc->init_cmds.mutex);
	if (mrioc->init_cmds.state & MPI3MR_CMD_PENDING) {

@@ -177,8 +259,12 @@ int mpi3mr_issue_diag_buf_post(struct mpi3mr_ioc *mrioc,
	diag_buf_post_req.address = le64_to_cpu(diag_buffer->dma_addr);
	diag_buf_post_req.length = le32_to_cpu(diag_buffer->size);

-	dprint_bsg_info(mrioc, "%s: posting diag buffer type %d\n", __func__,
-	    diag_buffer->type);
+	if (diag_buffer->is_segmented)
+		diag_buf_post_req.msg_flags |= MPI3_DIAG_BUFFER_POST_MSGFLAGS_SEGMENTED;
+
+	dprint_bsg_info(mrioc, "%s: posting diag buffer type %d segmented:%d\n", __func__,
+	    diag_buffer->type, diag_buffer->is_segmented);

	prev_status = diag_buffer->status;
	diag_buffer->status = MPI3MR_HDB_BUFSTATUS_POSTED_UNPAUSED;
	init_completion(&mrioc->init_cmds.done);

@@ -2339,6 +2425,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
	}

	if (!mrioc->ioctl_sges_allocated) {
		mutex_unlock(&mrioc->bsg_cmds.mutex);
		dprint_bsg_err(mrioc, "%s: DMA memory was not allocated\n",
		    __func__);
		return -ENOMEM;

@@ -3060,6 +3147,29 @@ reply_queue_count_show(struct device *dev, struct device_attribute *attr,

static DEVICE_ATTR_RO(reply_queue_count);

+/**
+ * reply_qfull_count_show - Show reply qfull count
+ * @dev: class device
+ * @attr: Device attributes
+ * @buf: Buffer to copy
+ *
+ * Retrieves the current value of the reply_qfull_count from the mrioc structure and
+ * formats it as a string for display.
+ *
+ * Return: sysfs_emit() return
+ */
+static ssize_t
+reply_qfull_count_show(struct device *dev, struct device_attribute *attr,
+	char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(dev);
+	struct mpi3mr_ioc *mrioc = shost_priv(shost);
+
+	return sysfs_emit(buf, "%u\n", atomic_read(&mrioc->reply_qfull_count));
+}
+
+static DEVICE_ATTR_RO(reply_qfull_count);
+
/**
 * logging_level_show - Show controller debug level
 * @dev: class device

@@ -3152,6 +3262,7 @@ static struct attribute *mpi3mr_host_attrs[] = {
	&dev_attr_fw_queue_depth.attr,
	&dev_attr_op_req_q_count.attr,
	&dev_attr_reply_queue_count.attr,
+	&dev_attr_reply_qfull_count.attr,
	&dev_attr_logging_level.attr,
	&dev_attr_adp_state.attr,
	NULL,

@@ -17,7 +17,7 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
	struct mpi3_ioc_facts_data *facts_data);
static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc,
	struct mpi3mr_drv_cmd *drv_cmd);

+static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc);
static int poll_queues;
module_param(poll_queues, int, 0444);
MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. (Range 1 - 126)");

@@ -446,8 +446,10 @@ int mpi3mr_process_admin_reply_q(struct mpi3mr_ioc *mrioc)
	u16 threshold_comps = 0;
	struct mpi3_default_reply_descriptor *reply_desc;

-	if (!atomic_add_unless(&mrioc->admin_reply_q_in_use, 1, 1))
+	if (!atomic_add_unless(&mrioc->admin_reply_q_in_use, 1, 1)) {
+		atomic_inc(&mrioc->admin_pend_isr);
		return 0;
+	}

	reply_desc = (struct mpi3_default_reply_descriptor *)mrioc->admin_reply_base +
		admin_reply_ci;

@@ -459,7 +461,7 @@ int mpi3mr_process_admin_reply_q(struct mpi3mr_ioc *mrioc)
	}

	do {
-		if (mrioc->unrecoverable)
+		if (mrioc->unrecoverable || mrioc->io_admin_reset_sync)
			break;

		mrioc->admin_req_ci = le16_to_cpu(reply_desc->request_queue_ci);

@@ -554,7 +556,7 @@ int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,
	}

	do {
-		if (mrioc->unrecoverable)
+		if (mrioc->unrecoverable || mrioc->io_admin_reset_sync)
			break;

		req_q_idx = le16_to_cpu(reply_desc->request_queue_id) - 1;

@@ -1302,7 +1304,7 @@ static int mpi3mr_issue_and_process_mur(struct mpi3mr_ioc *mrioc,
	    (ioc_config & MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC)))
		retval = 0;

-	ioc_info(mrioc, "Base IOC Sts/Config after %s MUR is (0x%x)/(0x%x)\n",
+	ioc_info(mrioc, "Base IOC Sts/Config after %s MUR is (0x%08x)/(0x%08x)\n",
	    (!retval) ? "successful" : "failed", ioc_status, ioc_config);
	return retval;
}

@@ -1355,6 +1357,19 @@ mpi3mr_revalidate_factsdata(struct mpi3mr_ioc *mrioc)
		"\tcontroller while sas transport support is enabled at the\n"
		"\tdriver, please reboot the system or reload the driver\n");

+	if (mrioc->seg_tb_support) {
+		if (!(mrioc->facts.ioc_capabilities &
+		    MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_SUPPORTED)) {
+			ioc_err(mrioc,
+			    "critical error: previously enabled segmented trace\n"
+			    " buffer capability is disabled after reset. Please\n"
+			    " update the firmware or reboot the system or\n"
+			    " reload the driver to enable trace diag buffer\n");
+			mrioc->diag_buffers[0].disabled_after_reset = true;
+		} else
+			mrioc->diag_buffers[0].disabled_after_reset = false;
+	}
+
	if (mrioc->facts.max_devhandle > mrioc->dev_handle_bitmap_bits) {
		removepend_bitmap = bitmap_zalloc(mrioc->facts.max_devhandle,
		    GFP_KERNEL);

@@ -1717,7 +1732,7 @@ static int mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type,
	ioc_config = readl(&mrioc->sysif_regs->ioc_configuration);
	ioc_status = readl(&mrioc->sysif_regs->ioc_status);
	ioc_info(mrioc,
-	    "ioc_status/ioc_onfig after %s reset is (0x%x)/(0x%x)\n",
+	    "ioc_status/ioc_config after %s reset is (0x%08x)/(0x%08x)\n",
	    (!retval)?"successful":"failed", ioc_status,
	    ioc_config);
	if (retval)

@@ -2104,15 +2119,22 @@ static int mpi3mr_create_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)
	}

	reply_qid = qidx + 1;
-	op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
-	if ((mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) &&
-	    !mrioc->pdev->revision)
-		op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
+
+	if (mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) {
+		if (mrioc->pdev->revision)
+			op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
+		else
+			op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
+	} else
+		op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD2K;
+
	op_reply_q->ci = 0;
	op_reply_q->ephase = 1;
	atomic_set(&op_reply_q->pend_ios, 0);
	atomic_set(&op_reply_q->in_use, 0);
	op_reply_q->enable_irq_poll = false;
+	op_reply_q->qfull_watermark =
+		op_reply_q->num_replies - (MPI3MR_THRESHOLD_REPLY_COUNT * 2);

	if (!op_reply_q->q_segments) {
		retval = mpi3mr_alloc_op_reply_q_segments(mrioc, qidx);

@@ -2416,8 +2438,10 @@ int mpi3mr_op_request_post(struct mpi3mr_ioc *mrioc,
	void *segment_base_addr;
	u16 req_sz = mrioc->facts.op_req_sz;
	struct segments *segments = op_req_q->q_segments;
+	struct op_reply_qinfo *op_reply_q = NULL;

	reply_qidx = op_req_q->reply_qid - 1;
+	op_reply_q = mrioc->op_reply_qinfo + reply_qidx;

	if (mrioc->unrecoverable)
		return -EFAULT;

@@ -2448,6 +2472,15 @@ int mpi3mr_op_request_post(struct mpi3mr_ioc *mrioc,
		goto out;
	}

+	/* Reply queue is nearing to get full, push back IOs to SML */
+	if ((mrioc->prevent_reply_qfull == true) &&
+	    (atomic_read(&op_reply_q->pend_ios) >
+	    (op_reply_q->qfull_watermark))) {
+		atomic_inc(&mrioc->reply_qfull_count);
+		retval = -EAGAIN;
+		goto out;
+	}
+
	segment_base_addr = segments[pi / op_req_q->segment_qd].segment;
	req_entry = (u8 *)segment_base_addr +
		((pi % op_req_q->segment_qd) * req_sz);

@@ -2726,7 +2759,16 @@ static void mpi3mr_watchdog_work(struct work_struct *work)
		return;
	}

-	if (mrioc->ts_update_counter++ >= mrioc->ts_update_interval) {
+	if (atomic_read(&mrioc->admin_pend_isr)) {
+		ioc_err(mrioc, "Unprocessed admin ISR instance found\n"
+			"flush admin replies\n");
+		mpi3mr_process_admin_reply_q(mrioc);
+	}
+
+	if (!(mrioc->facts.ioc_capabilities &
+	    MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_IOC) &&
+	    (mrioc->ts_update_counter++ >= mrioc->ts_update_interval)) {
+
		mrioc->ts_update_counter = 0;
		mpi3mr_sync_timestamp(mrioc);
	}

@@ -3091,6 +3133,9 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
	mrioc->facts.dma_mask = (facts_flags &
	    MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >>
	    MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT;
	mrioc->facts.protocol_flags = facts_data->protocol_flags;
	mrioc->facts.mpi_version = le32_to_cpu(facts_data->mpi_version.word);
	mrioc->facts.max_reqs = le16_to_cpu(facts_data->max_outstanding_requests);

@@ -4214,6 +4259,13 @@ retry_init:
		mrioc->shost->transportt = mpi3mr_transport_template;
	}

+	if (mrioc->facts.max_req_limit)
+		mrioc->prevent_reply_qfull = true;
+
+	if (mrioc->facts.ioc_capabilities &
+	    MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_SUPPORTED)
+		mrioc->seg_tb_support = true;
+
	mrioc->reply_sz = mrioc->facts.reply_sz;

	retval = mpi3mr_check_reset_dma_mask(mrioc);

@@ -4370,6 +4422,7 @@ retry_init:
		goto out_failed_noretry;
	}

+	mrioc->io_admin_reset_sync = 0;
	if (is_resume || mrioc->block_on_pci_err) {
		dprint_reset(mrioc, "setting up single ISR\n");
		retval = mpi3mr_setup_isr(mrioc, 1);

@@ -4671,7 +4724,7 @@ void mpi3mr_memset_buffers(struct mpi3mr_ioc *mrioc)
 */
void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)
{
-	u16 i;
+	u16 i, j;
	struct mpi3mr_intr_info *intr_info;
	struct diag_buffer_desc *diag_buffer;

@@ -4806,6 +4859,26 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)

	for (i = 0; i < MPI3MR_MAX_NUM_HDB; i++) {
		diag_buffer = &mrioc->diag_buffers[i];
+		if ((i == 0) && mrioc->seg_tb_support) {
+			if (mrioc->trace_buf_pool) {
+				for (j = 0; j < mrioc->num_tb_segs; j++) {
+					if (mrioc->trace_buf[j].segment) {
+						dma_pool_free(mrioc->trace_buf_pool,
+						    mrioc->trace_buf[j].segment,
+						    mrioc->trace_buf[j].segment_dma);
+						mrioc->trace_buf[j].segment = NULL;
+					}
+
+					mrioc->trace_buf[j].segment = NULL;
+				}
+				dma_pool_destroy(mrioc->trace_buf_pool);
+				mrioc->trace_buf_pool = NULL;
+			}
+
+			kfree(mrioc->trace_buf);
+			mrioc->trace_buf = NULL;
+			diag_buffer->size = sizeof(u64) * mrioc->num_tb_segs;
+		}
		if (diag_buffer->addr) {
			dma_free_coherent(&mrioc->pdev->dev,
			    diag_buffer->size, diag_buffer->addr,

@@ -4883,7 +4956,7 @@ static void mpi3mr_issue_ioc_shutdown(struct mpi3mr_ioc *mrioc)
	}

	ioc_info(mrioc,
-	    "Base IOC Sts/Config after %s shutdown is (0x%x)/(0x%x)\n",
+	    "Base IOC Sts/Config after %s shutdown is (0x%08x)/(0x%08x)\n",
	    (!retval) ? "successful" : "failed", ioc_status,
	    ioc_config);
}

@@ -5228,6 +5301,55 @@ cleanup_drv_cmd:
	drv_cmd->retry_count = 0;
}

+/**
+ * mpi3mr_check_op_admin_proc -
+ * @mrioc: Adapter instance reference
+ *
+ * Check if any of the operation reply queues
+ * or the admin reply queue are currently in use.
+ * If any queue is in use, this function waits for
+ * a maximum of 10 seconds for them to become available.
+ *
+ * Return: 0 on success, non-zero on failure.
+ */
+static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc)
+{
+
+	u16 timeout = 10 * 10;
+	u16 elapsed_time = 0;
+	bool op_admin_in_use = false;
+
+	do {
+		op_admin_in_use = false;
+
+		/* Check admin_reply queue first to exit early */
+		if (atomic_read(&mrioc->admin_reply_q_in_use) == 1)
+			op_admin_in_use = true;
+		else {
+			/* Check op_reply queues */
+			int i;
+
+			for (i = 0; i < mrioc->num_queues; i++) {
+				if (atomic_read(&mrioc->op_reply_qinfo[i].in_use) == 1) {
+					op_admin_in_use = true;
+					break;
+				}
+			}
+		}
+
+		if (!op_admin_in_use)
+			break;
+
+		msleep(100);
+
+	} while (++elapsed_time < timeout);
+
+	if (op_admin_in_use)
+		return 1;
+
+	return 0;
+}
+
/**
 * mpi3mr_soft_reset_handler - Reset the controller
 * @mrioc: Adapter instance reference

@@ -5308,6 +5430,7 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
	mpi3mr_wait_for_host_io(mrioc, MPI3MR_RESET_HOST_IOWAIT_TIMEOUT);

	mpi3mr_ioc_disable_intr(mrioc);
+	mrioc->io_admin_reset_sync = 1;

	if (snapdump) {
		mpi3mr_set_diagsave(mrioc);

@@ -5335,6 +5458,16 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
		ioc_err(mrioc, "Failed to issue soft reset to the ioc\n");
		goto out;
	}

+	retval = mpi3mr_check_op_admin_proc(mrioc);
+	if (retval) {
+		ioc_err(mrioc, "Soft reset failed due to an Admin or I/O queue polling\n"
+			"thread still processing replies even after a 10 second\n"
+			"timeout. Marking the controller as unrecoverable!\n");
+
+		goto out;
+	}
+
	if (mrioc->num_io_throttle_group !=
	    mrioc->facts.max_io_throttle_group) {
		ioc_err(mrioc,

@@ -3839,6 +3839,18 @@ int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type,
	tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, handle);

	if (scmd) {
+		if (tm_type == MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK) {
+			cmd_priv = scsi_cmd_priv(scmd);
+			if (!cmd_priv)
+				goto out_unlock;
+
+			struct op_req_qinfo *op_req_q;
+
+			op_req_q = &mrioc->req_qinfo[cmd_priv->req_q_idx];
+			tm_req.task_host_tag = cpu_to_le16(cmd_priv->host_tag);
+			tm_req.task_request_queue_id =
+				cpu_to_le16(op_req_q->qid);
+		}
		sdev = scmd->device;
		sdev_priv_data = sdev->hostdata;
		scsi_tgt_priv_data = ((sdev_priv_data) ?

@@ -4387,6 +4399,92 @@ out:
	return retval;
}

+/**
+ * mpi3mr_eh_abort - Callback function for abort error handling
+ * @scmd: SCSI command reference
+ *
+ * Issues Abort Task Management if the command is in LLD scope
+ * and verifies if it is aborted successfully, and return status
+ * accordingly.
+ *
+ * Return: SUCCESS if the abort was successful, otherwise FAILED
+ */
+static int mpi3mr_eh_abort(struct scsi_cmnd *scmd)
+{
+	struct mpi3mr_ioc *mrioc = shost_priv(scmd->device->host);
+	struct mpi3mr_stgt_priv_data *stgt_priv_data;
+	struct mpi3mr_sdev_priv_data *sdev_priv_data;
+	struct scmd_priv *cmd_priv;
+	u16 dev_handle, timeout = MPI3MR_ABORTTM_TIMEOUT;
+	u8 resp_code = 0;
+	int retval = FAILED, ret = 0;
+	struct request *rq = scsi_cmd_to_rq(scmd);
+	unsigned long scmd_age_ms = jiffies_to_msecs(jiffies - scmd->jiffies_at_alloc);
+	unsigned long scmd_age_sec = scmd_age_ms / HZ;
+
+	sdev_printk(KERN_INFO, scmd->device,
+	    "%s: attempting abort task for scmd(%p)\n", mrioc->name, scmd);
+
+	sdev_printk(KERN_INFO, scmd->device,
+	    "%s: scmd(0x%p) is outstanding for %lus %lums, timeout %us, retries %d, allowed %d\n",
+	    mrioc->name, scmd, scmd_age_sec, scmd_age_ms % HZ, rq->timeout / HZ,
+	    scmd->retries, scmd->allowed);
+
+	scsi_print_command(scmd);
+
+	sdev_priv_data = scmd->device->hostdata;
+	if (!sdev_priv_data || !sdev_priv_data->tgt_priv_data) {
+		sdev_printk(KERN_INFO, scmd->device,
+		    "%s: Device not available, Skip issuing abort task\n",
+		    mrioc->name);
+		retval = SUCCESS;
+		goto out;
+	}
+
+	stgt_priv_data = sdev_priv_data->tgt_priv_data;
+	dev_handle = stgt_priv_data->dev_handle;
+
+	cmd_priv = scsi_cmd_priv(scmd);
+	if (!cmd_priv->in_lld_scope ||
+	    cmd_priv->host_tag == MPI3MR_HOSTTAG_INVALID) {
+		sdev_printk(KERN_INFO, scmd->device,
+		    "%s: scmd (0x%p) not in LLD scope, Skip issuing Abort Task\n",
+		    mrioc->name, scmd);
+		retval = SUCCESS;
+		goto out;
+	}
+
+	if (stgt_priv_data->dev_removed) {
+		sdev_printk(KERN_INFO, scmd->device,
+		    "%s: Device (handle = 0x%04x) removed, Skip issuing Abort Task\n",
+		    mrioc->name, dev_handle);
+		retval = FAILED;
+		goto out;
+	}
+
+	ret = mpi3mr_issue_tm(mrioc, MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK,
+	    dev_handle, sdev_priv_data->lun_id, MPI3MR_HOSTTAG_BLK_TMS,
+	    timeout, &mrioc->host_tm_cmds, &resp_code, scmd);
+
+	if (ret)
+		goto out;
+
+	if (cmd_priv->in_lld_scope) {
+		sdev_printk(KERN_INFO, scmd->device,
+		    "%s: Abort task failed. scmd (0x%p) was not terminated\n",
+		    mrioc->name, scmd);
+		goto out;
+	}
+
+	retval = SUCCESS;
+out:
+	sdev_printk(KERN_INFO, scmd->device,
+	    "%s: Abort Task %s for scmd (0x%p)\n", mrioc->name,
+	    ((retval == SUCCESS) ? "SUCCEEDED" : "FAILED"), scmd);
+
+	return retval;
+}
+
/**
 * mpi3mr_scan_start - Scan start callback handler
 * @shost: SCSI host reference

@@ -5069,6 +5167,7 @@ static const struct scsi_host_template mpi3mr_driver_template = {
	.scan_finished = mpi3mr_scan_finished,
	.scan_start = mpi3mr_scan_start,
	.change_queue_depth = mpi3mr_change_queue_depth,
+	.eh_abort_handler = mpi3mr_eh_abort,
	.eh_device_reset_handler = mpi3mr_eh_dev_reset,
	.eh_target_reset_handler = mpi3mr_eh_target_reset,
	.eh_bus_reset_handler = mpi3mr_eh_bus_reset,

@@ -5803,7 +5902,7 @@ static const struct pci_device_id mpi3mr_pci_id_table[] = {
};
MODULE_DEVICE_TABLE(pci, mpi3mr_pci_id_table);

-static struct pci_error_handlers mpi3mr_err_handler = {
+static const struct pci_error_handlers mpi3mr_err_handler = {
	.error_detected = mpi3mr_pcierr_error_detected,
	.mmio_enabled = mpi3mr_pcierr_mmio_enabled,
	.slot_reset = mpi3mr_pcierr_slot_reset,

@@ -125,6 +125,12 @@
 * 06-24-19  02.00.55 Bumped MPI2_HEADER_VERSION_UNIT
 * 08-01-19  02.00.56 Bumped MPI2_HEADER_VERSION_UNIT
 * 10-02-19  02.00.57 Bumped MPI2_HEADER_VERSION_UNIT
+ * 07-20-20  02.00.58 Bumped MPI2_HEADER_VERSION_UNIT
+ * 03-30-21  02.00.59 Bumped MPI2_HEADER_VERSION_UNIT
+ * 06-03-22  02.00.60 Bumped MPI2_HEADER_VERSION_UNIT
+ * 09-20-23  02.00.61 Bumped MPI2_HEADER_VERSION_UNIT
+ * 09-13-24  02.00.62 Bumped MPI2_HEADER_VERSION_UNIT
+ *                    Added MPI2_FUNCTION_MCTP_PASSTHROUGH
 * --------------------------------------------------------------------------
 */

@@ -165,7 +171,7 @@


/* Unit and Dev versioning for this MPI header set */
-#define MPI2_HEADER_VERSION_UNIT            (0x39)
+#define MPI2_HEADER_VERSION_UNIT            (0x3E)
#define MPI2_HEADER_VERSION_DEV             (0x00)
#define MPI2_HEADER_VERSION_UNIT_MASK       (0xFF00)
#define MPI2_HEADER_VERSION_UNIT_SHIFT      (8)

@@ -669,6 +675,7 @@ typedef union _MPI2_REPLY_DESCRIPTORS_UNION {
#define MPI2_FUNCTION_PWR_MGMT_CONTROL              (0x30)
#define MPI2_FUNCTION_SEND_HOST_MESSAGE             (0x31)
#define MPI2_FUNCTION_NVME_ENCAPSULATED             (0x33)
+#define MPI2_FUNCTION_MCTP_PASSTHROUGH              (0x34)
#define MPI2_FUNCTION_MIN_PRODUCT_SPECIFIC          (0xF0)
#define MPI2_FUNCTION_MAX_PRODUCT_SPECIFIC          (0xFF)

@@ -251,6 +251,7 @@
 * 12-17-18  02.00.47 Swap locations of Slotx2 and Slotx4 in ManPage 7.
 * 08-01-19  02.00.49 Add MPI26_MANPAGE7_FLAG_X2_X4_SLOT_INFO_VALID
 *                    Add MPI26_IOUNITPAGE1_NVME_WRCACHE_SHIFT
+ * 09-13-24  02.00.50 Added PCIe 32 GT/s link rate
 */

#ifndef MPI2_CNFG_H

@@ -1121,6 +1122,7 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_7 {
#define MPI2_IOUNITPAGE7_PCIE_SPEED_5_0_GBPS        (0x01)
#define MPI2_IOUNITPAGE7_PCIE_SPEED_8_0_GBPS        (0x02)
#define MPI2_IOUNITPAGE7_PCIE_SPEED_16_0_GBPS       (0x03)
+#define MPI2_IOUNITPAGE7_PCIE_SPEED_32_0_GBPS       (0x04)

/*defines for IO Unit Page 7 ProcessorState field */
#define MPI2_IOUNITPAGE7_PSTATE_MASK_SECOND         (0x0000000F)

@@ -2301,6 +2303,7 @@ typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_1 {
#define MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION                   (0x0001)

/*values for SAS IO Unit Page 1 AdditionalControlFlags */
+#define MPI2_SASIOUNIT1_ACONTROL_PROD_SPECIFIC_1                    (0x8000)
#define MPI2_SASIOUNIT1_ACONTROL_DA_PERSIST_CONNECT                 (0x0100)
#define MPI2_SASIOUNIT1_ACONTROL_MULTI_PORT_DOMAIN_ILLEGAL          (0x0080)
#define MPI2_SASIOUNIT1_ACONTROL_SATA_ASYNCHROUNOUS_NOTIFICATION    (0x0040)

@@ -3591,6 +3594,7 @@ typedef struct _MPI2_CONFIG_PAGE_EXT_MAN_PS {
#define MPI26_PCIE_NEG_LINK_RATE_5_0               (0x03)
#define MPI26_PCIE_NEG_LINK_RATE_8_0               (0x04)
#define MPI26_PCIE_NEG_LINK_RATE_16_0              (0x05)
+#define MPI26_PCIE_NEG_LINK_RATE_32_0              (0x06)


/****************************************************************************

@@ -3700,6 +3704,7 @@ typedef struct _MPI26_CONFIG_PAGE_PIOUNIT_1 {
#define MPI26_PCIEIOUNIT1_MAX_RATE_5_0             (0x30)
#define MPI26_PCIEIOUNIT1_MAX_RATE_8_0             (0x40)
#define MPI26_PCIEIOUNIT1_MAX_RATE_16_0            (0x50)
+#define MPI26_PCIEIOUNIT1_MAX_RATE_32_0            (0x60)

/*values for PCIe IO Unit Page 1 DMDReportPCIe */
#define MPI26_PCIEIOUNIT1_DMDRPT_UNIT_MASK         (0x80)

@@ -179,6 +179,7 @@
 *                    Added MPI26_IOCFACTS_CAPABILITY_COREDUMP_ENABLED
 *                    Added MPI2_FW_DOWNLOAD_ITYPE_COREDUMP
 *                    Added MPI2_FW_UPLOAD_ITYPE_COREDUMP
+ * 9-13-24   02.00.39 Added MPI26_MCTP_PASSTHROUGH messages
 * --------------------------------------------------------------------------
 */

@@ -382,6 +383,7 @@ typedef struct _MPI2_IOC_FACTS_REPLY {
/*ProductID field uses MPI2_FW_HEADER_PID_ */

/*IOCCapabilities */
+#define MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU             (0x00800000)
#define MPI26_IOCFACTS_CAPABILITY_COREDUMP_ENABLED          (0x00200000)
#define MPI26_IOCFACTS_CAPABILITY_PCIE_SRIOV                (0x00100000)
#define MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ                (0x00080000)

@@ -1798,5 +1800,57 @@ typedef struct _MPI26_IOUNIT_CONTROL_REPLY {
	Mpi26IoUnitControlReply_t,
	*pMpi26IoUnitControlReply_t;

+/****************************************************************************
+ * MCTP Passthrough messages (MPI v2.6 and later only.)
+ ****************************************************************************/
+
+/* MCTP Passthrough Request Message */
+typedef struct _MPI26_MCTP_PASSTHROUGH_REQUEST {
+	U8                      MsgContext;     /* 0x00 */
+	U8                      Reserved1[2];   /* 0x01 */
+	U8                      Function;       /* 0x03 */
+	U8                      Reserved2[3];   /* 0x04 */
+	U8                      MsgFlags;       /* 0x07 */
+	U8                      VP_ID;          /* 0x08 */
+	U8                      VF_ID;          /* 0x09 */
+	U16                     Reserved3;      /* 0x0A */
+	U32                     Reserved4;      /* 0x0C */
+	U8                      Flags;          /* 0x10 */
+	U8                      Reserved5[3];   /* 0x11 */
+	U32                     Reserved6;      /* 0x14 */
+	U32                     H2DLength;      /* 0x18 */
+	U32                     D2HLength;      /* 0x1C */
+	MPI25_SGE_IO_UNION      H2DSGL;         /* 0x20 */
+	MPI25_SGE_IO_UNION      D2HSGL;         /* 0x30 */
+} MPI26_MCTP_PASSTHROUGH_REQUEST,
+	*PTR_MPI26_MCTP_PASSTHROUGH_REQUEST,
+	Mpi26MctpPassthroughRequest_t,
+	*pMpi26MctpPassthroughRequest_t;
+
+/* values for the MsgContext field */
+#define MPI26_MCTP_MSG_CONEXT_UNUSED            (0x00)
+
+/* values for the Flags field */
+#define MPI26_MCTP_FLAGS_MSG_FORMAT_MPT         (0x01)
+
+/* MCTP Passthrough Reply Message */
+typedef struct _MPI26_MCTP_PASSTHROUGH_REPLY {
+	U8                      MsgContext;     /* 0x00 */
+	U8                      Reserved1;      /* 0x01 */
+	U8                      MsgLength;      /* 0x02 */
+	U8                      Function;       /* 0x03 */
+	U8                      Reserved2[3];   /* 0x04 */
+	U8                      MsgFlags;       /* 0x07 */
+	U8                      VP_ID;          /* 0x08 */
+	U8                      VF_ID;          /* 0x09 */
+	U16                     Reserved3;      /* 0x0A */
+	U16                     Reserved4;      /* 0x0C */
+	U16                     IOCStatus;      /* 0x0E */
+	U32                     IOCLogInfo;     /* 0x10 */
+	U32                     ResponseDataLength; /* 0x14 */
+} MPI26_MCTP_PASSTHROUGH_REPLY,
+	*PTR_MPI26_MCTP_PASSTHROUGH_REPLY,
+	Mpi26MctpPassthroughReply_t,
+	*pMpi26MctpPassthroughReply_t;
+
#endif

@@ -1202,6 +1202,11 @@ _base_sas_ioc_info(struct MPT3SAS_ADAPTER *ioc, MPI2DefaultReply_t *mpi_reply,
		    ioc->sge_size;
		func_str = "nvme_encapsulated";
		break;
+	case MPI2_FUNCTION_MCTP_PASSTHROUGH:
+		frame_sz = sizeof(Mpi26MctpPassthroughRequest_t) +
+		    ioc->sge_size;
+		func_str = "mctp_passthru";
+		break;
	default:
		frame_sz = 32;
		func_str = "unknown";

@@ -4874,6 +4879,12 @@ _base_display_ioc_capabilities(struct MPT3SAS_ADAPTER *ioc)
		i++;
	}

+	if (ioc->facts.IOCCapabilities &
+	    MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) {
+		pr_cont("%sMCTP Passthru", i ? "," : "");
+		i++;
+	}
+
	iounit_pg1_flags = le32_to_cpu(ioc->iounit_pg1.Flags);
	if (!(iounit_pg1_flags & MPI2_IOUNITPAGE1_NATIVE_COMMAND_Q_DISABLE)) {
		pr_cont("%sNCQ", i ? "," : "");

@@ -8018,7 +8029,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)

	mutex_lock(&ioc->hostdiag_unlock_mutex);
	if (mpt3sas_base_unlock_and_get_host_diagnostic(ioc, &host_diagnostic))
-		goto out;
+		goto unlock;

	hcb_size = ioc->base_readl(&ioc->chip->HCBSize);
	drsprintk(ioc, ioc_info(ioc, "diag reset: issued\n"));

@@ -8038,7 +8049,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
			ioc_info(ioc,
			    "Invalid host diagnostic register value\n");
			_base_dump_reg_set(ioc);
-			goto out;
+			goto unlock;
		}
		if (!(host_diagnostic & MPI2_DIAG_RESET_ADAPTER))
			break;

@@ -8074,17 +8085,19 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
		ioc_err(ioc, "%s: failed going to ready state (ioc_state=0x%x)\n",
		    __func__, ioc_state);
		_base_dump_reg_set(ioc);
-		goto out;
+		goto fail;
	}

	pci_cfg_access_unlock(ioc->pdev);
	ioc_info(ioc, "diag reset: SUCCESS\n");
	return 0;

-out:
+unlock:
	mutex_unlock(&ioc->hostdiag_unlock_mutex);

+fail:
	pci_cfg_access_unlock(ioc->pdev);
	ioc_err(ioc, "diag reset: FAILED\n");
-	mutex_unlock(&ioc->hostdiag_unlock_mutex);
	return -EFAULT;
}

@@ -77,8 +77,8 @@
#define MPT3SAS_DRIVER_NAME		"mpt3sas"
#define MPT3SAS_AUTHOR	"Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>"
#define MPT3SAS_DESCRIPTION	"LSI MPT Fusion SAS 3.0 Device Driver"
-#define MPT3SAS_DRIVER_VERSION		"51.100.00.00"
-#define MPT3SAS_MAJOR_VERSION		51
+#define MPT3SAS_DRIVER_VERSION		"52.100.00.00"
+#define MPT3SAS_MAJOR_VERSION		52
#define MPT3SAS_MINOR_VERSION		100
#define MPT3SAS_BUILD_VERSION		00
#define MPT3SAS_RELEASE_VERSION	00

@@ -1858,9 +1858,6 @@ int mpt3sas_config_get_manufacturing_pg0(struct MPT3SAS_ADAPTER *ioc,
int mpt3sas_config_get_manufacturing_pg1(struct MPT3SAS_ADAPTER *ioc,
	Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage1_t *config_page);

-int mpt3sas_config_get_manufacturing_pg7(struct MPT3SAS_ADAPTER *ioc,
-	Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage7_t *config_page,
-	u16 sz);
int mpt3sas_config_get_manufacturing_pg10(struct MPT3SAS_ADAPTER *ioc,
	Mpi2ConfigReply_t *mpi_reply,
	struct Mpi2ManufacturingPage10_t *config_page);

@@ -1887,9 +1884,6 @@ int mpt3sas_config_get_iounit_pg0(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
int mpt3sas_config_get_sas_device_pg0(struct MPT3SAS_ADAPTER *ioc,
	Mpi2ConfigReply_t *mpi_reply, Mpi2SasDevicePage0_t *config_page,
	u32 form, u32 handle);
-int mpt3sas_config_get_sas_device_pg1(struct MPT3SAS_ADAPTER *ioc,
-	Mpi2ConfigReply_t *mpi_reply, Mpi2SasDevicePage1_t *config_page,
-	u32 form, u32 handle);
int mpt3sas_config_get_pcie_device_pg0(struct MPT3SAS_ADAPTER *ioc,
	Mpi2ConfigReply_t *mpi_reply, Mpi26PCIeDevicePage0_t *config_page,
	u32 form, u32 handle);

@@ -576,44 +576,6 @@ mpt3sas_config_get_manufacturing_pg1(struct MPT3SAS_ADAPTER *ioc,
 	return r;
 }
 
-/**
- * mpt3sas_config_get_manufacturing_pg7 - obtain manufacturing page 7
- * @ioc: per adapter object
- * @mpi_reply: reply mf payload returned from firmware
- * @config_page: contents of the config page
- * @sz: size of buffer passed in config_page
- * Context: sleep.
- *
- * Return: 0 for success, non-zero for failure.
- */
-int
-mpt3sas_config_get_manufacturing_pg7(struct MPT3SAS_ADAPTER *ioc,
-	Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage7_t *config_page,
-	u16 sz)
-{
-	Mpi2ConfigRequest_t mpi_request;
-	int r;
-
-	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
-	mpi_request.Function = MPI2_FUNCTION_CONFIG;
-	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
-	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_MANUFACTURING;
-	mpi_request.Header.PageNumber = 7;
-	mpi_request.Header.PageVersion = MPI2_MANUFACTURING7_PAGEVERSION;
-	ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
-	r = _config_request(ioc, &mpi_request, mpi_reply,
-	    MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
-	if (r)
-		goto out;
-
-	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
-	r = _config_request(ioc, &mpi_request, mpi_reply,
-	    MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
-	    sz);
- out:
-	return r;
-}
-
 /**
  * mpt3sas_config_get_manufacturing_pg10 - obtain manufacturing page 10
  * @ioc: per adapter object
@@ -1213,47 +1175,6 @@ mpt3sas_config_get_sas_device_pg0(struct MPT3SAS_ADAPTER *ioc,
 	return r;
 }
 
-/**
- * mpt3sas_config_get_sas_device_pg1 - obtain sas device page 1
- * @ioc: per adapter object
- * @mpi_reply: reply mf payload returned from firmware
- * @config_page: contents of the config page
- * @form: GET_NEXT_HANDLE or HANDLE
- * @handle: device handle
- * Context: sleep.
- *
- * Return: 0 for success, non-zero for failure.
- */
-int
-mpt3sas_config_get_sas_device_pg1(struct MPT3SAS_ADAPTER *ioc,
-	Mpi2ConfigReply_t *mpi_reply, Mpi2SasDevicePage1_t *config_page,
-	u32 form, u32 handle)
-{
-	Mpi2ConfigRequest_t mpi_request;
-	int r;
-
-	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
-	mpi_request.Function = MPI2_FUNCTION_CONFIG;
-	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
-	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
-	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE;
-	mpi_request.Header.PageVersion = MPI2_SASDEVICE1_PAGEVERSION;
-	mpi_request.Header.PageNumber = 1;
-	ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
-	r = _config_request(ioc, &mpi_request, mpi_reply,
-	    MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
-	if (r)
-		goto out;
-
-	mpi_request.PageAddress = cpu_to_le32(form | handle);
-	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
-	r = _config_request(ioc, &mpi_request, mpi_reply,
-	    MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
-	    sizeof(*config_page));
- out:
-	return r;
-}
-
 /**
  * mpt3sas_config_get_pcie_device_pg0 - obtain pcie device page 0
  * @ioc: per adapter object
@@ -186,6 +186,9 @@ _ctl_display_some_debug(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 	case MPI2_FUNCTION_NVME_ENCAPSULATED:
 		desc = "nvme_encapsulated";
 		break;
+	case MPI2_FUNCTION_MCTP_PASSTHROUGH:
+		desc = "mctp_passthrough";
+		break;
 	}
 
 	if (!desc)
@@ -652,6 +655,40 @@ _ctl_set_task_mid(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command *karg,
 	return 0;
 }
 
+/**
+ * _ctl_send_mctp_passthru_req - Send an MCTP passthru request
+ * @ioc: per adapter object
+ * @mctp_passthru_req: MPI MCTP passthru request from caller
+ * @psge: pointer to the H2DSGL
+ * @data_out_dma: DMA buffer for H2D SGL
+ * @data_out_sz: H2D length
+ * @data_in_dma: DMA buffer for D2H SGL
+ * @data_in_sz: D2H length
+ * @smid: SMID to submit the request
+ *
+ */
+static void
+_ctl_send_mctp_passthru_req(
+	struct MPT3SAS_ADAPTER *ioc,
+	Mpi26MctpPassthroughRequest_t *mctp_passthru_req, void *psge,
+	dma_addr_t data_out_dma, int data_out_sz,
+	dma_addr_t data_in_dma, int data_in_sz,
+	u16 smid)
+{
+	mctp_passthru_req->H2DLength = data_out_sz;
+	mctp_passthru_req->D2HLength = data_in_sz;
+
+	/* Build the H2D SGL from the data out buffer */
+	ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, 0, 0);
+
+	psge += ioc->sge_size_ieee;
+
+	/* Build the D2H SGL for the data in buffer */
+	ioc->build_sg(ioc, psge, 0, 0, data_in_dma, data_in_sz);
+
+	ioc->put_smid_default(ioc, smid);
+}
+
 /**
  * _ctl_do_mpt_command - main handler for MPT3COMMAND opcode
  * @ioc: per adapter object
@@ -679,6 +716,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
 	size_t data_in_sz = 0;
 	long ret;
 	u16 device_handle = MPT3SAS_INVALID_DEVICE_HANDLE;
+	int tm_ret;
 
 	issue_reset = 0;
 
@@ -792,6 +830,23 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
 
 	init_completion(&ioc->ctl_cmds.done);
 	switch (mpi_request->Function) {
+	case MPI2_FUNCTION_MCTP_PASSTHROUGH:
+	{
+		Mpi26MctpPassthroughRequest_t *mctp_passthru_req =
+		    (Mpi26MctpPassthroughRequest_t *)request;
+
+		if (!(ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU)) {
+			ioc_err(ioc, "%s: MCTP Passthrough request not supported\n",
+			    __func__);
+			mpt3sas_base_free_smid(ioc, smid);
+			ret = -EINVAL;
+			goto out;
+		}
+
+		_ctl_send_mctp_passthru_req(ioc, mctp_passthru_req, psge, data_out_dma,
+		    data_out_sz, data_in_dma, data_in_sz, smid);
+		break;
+	}
 	case MPI2_FUNCTION_NVME_ENCAPSULATED:
 	{
 		nvme_encap_request = (Mpi26NVMeEncapsulatedRequest_t *)request;
@@ -1120,18 +1175,25 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
 		if (pcie_device && (!ioc->tm_custom_handling) &&
 		    (!(mpt3sas_scsih_is_pcie_scsi_device(
 		    pcie_device->device_info))))
-			mpt3sas_scsih_issue_locked_tm(ioc,
+			tm_ret = mpt3sas_scsih_issue_locked_tm(ioc,
 			  le16_to_cpu(mpi_request->FunctionDependent1),
 			  0, 0, 0,
 			  MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0,
 			  0, pcie_device->reset_timeout,
 			  MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE);
 		else
-			mpt3sas_scsih_issue_locked_tm(ioc,
+			tm_ret = mpt3sas_scsih_issue_locked_tm(ioc,
 			  le16_to_cpu(mpi_request->FunctionDependent1),
 			  0, 0, 0,
 			  MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0,
 			  0, 30, MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET);
+
+		if (tm_ret != SUCCESS) {
+			ioc_info(ioc,
+			    "target reset failed, issue hard reset: handle (0x%04x)\n",
+			    le16_to_cpu(mpi_request->FunctionDependent1));
+			mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
+		}
 	} else
 		mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
 }
@@ -1200,6 +1262,8 @@ _ctl_getiocinfo(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
 	}
 	karg.bios_version = le32_to_cpu(ioc->bios_pg3.BiosVersion);
 
+	karg.driver_capability |= MPT3_IOCTL_IOCINFO_DRIVER_CAP_MCTP_PASSTHRU;
+
 	if (copy_to_user(arg, &karg, sizeof(karg))) {
 		pr_err("failure at %s:%d/%s()!\n",
 		    __FILE__, __LINE__, __func__);
@@ -2786,6 +2850,217 @@ out_unlock_pciaccess:
 	return ret;
 }
 
+/**
+ * _ctl_get_mpt_mctp_passthru_adapter - Traverse the IOC list and return the
+ * IOC at dev_index position that supports MCTP passthru
+ * @dev_index: position in the mpt3sas_ioc_list to search for
+ * Return pointer to the IOC on success
+ *	NULL if device not found error
+ */
+static struct MPT3SAS_ADAPTER *
+_ctl_get_mpt_mctp_passthru_adapter(int dev_index)
+{
+	struct MPT3SAS_ADAPTER *ioc = NULL;
+	int count = 0;
+
+	spin_lock(&gioc_lock);
+	/* Traverse ioc list and return the IOC at dev_index that supports MCTP passthru */
+	list_for_each_entry(ioc, &mpt3sas_ioc_list, list) {
+		if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) {
+			if (count == dev_index) {
+				spin_unlock(&gioc_lock);
+				return ioc;
+			}
+			count++;
+		}
+	}
+	spin_unlock(&gioc_lock);
+
+	return NULL;
+}
+
+/**
+ * mpt3sas_get_device_count - Retrieve the count of MCTP passthrough
+ * capable devices managed by the driver.
+ *
+ * Returns number of devices that support MCTP passthrough.
+ */
+int
+mpt3sas_get_device_count(void)
+{
+	int count = 0;
+	struct MPT3SAS_ADAPTER *ioc = NULL;
+
+	spin_lock(&gioc_lock);
+	/* Traverse ioc list and count the IOCs that support MCTP passthru */
+	list_for_each_entry(ioc, &mpt3sas_ioc_list, list)
+		if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU)
+			count++;
+
+	spin_unlock(&gioc_lock);
+
+	return count;
+}
+EXPORT_SYMBOL(mpt3sas_get_device_count);
+
+/**
+ * mpt3sas_send_mctp_passthru_req - Send an MPI MCTP passthrough command to
+ * firmware
+ * @command: The MPI MCTP passthrough command to send to firmware
+ *
+ * Returns 0 on success, anything else is error.
+ */
+int mpt3sas_send_mctp_passthru_req(struct mpt3_passthru_command *command)
+{
+	struct MPT3SAS_ADAPTER *ioc;
+	MPI2RequestHeader_t *mpi_request = NULL, *request;
+	MPI2DefaultReply_t *mpi_reply;
+	Mpi26MctpPassthroughRequest_t *mctp_passthru_req;
+	u16 smid;
+	unsigned long timeout;
+	u8 issue_reset = 0;
+	u32 sz;
+	void *psge;
+	void *data_out = NULL;
+	dma_addr_t data_out_dma = 0;
+	size_t data_out_sz = 0;
+	void *data_in = NULL;
+	dma_addr_t data_in_dma = 0;
+	size_t data_in_sz = 0;
+	long ret;
+
+	/* Retrieve ioc from dev_index */
+	ioc = _ctl_get_mpt_mctp_passthru_adapter(command->dev_index);
+	if (!ioc)
+		return -ENODEV;
+
+	mutex_lock(&ioc->pci_access_mutex);
+	if (ioc->shost_recovery ||
+	    ioc->pci_error_recovery || ioc->is_driver_loading ||
+	    ioc->remove_host) {
+		ret = -EAGAIN;
+		goto unlock_pci_access;
+	}
+
+	/* Lock the ctl_cmds mutex to ensure a single ctl cmd is pending */
+	if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex)) {
+		ret = -ERESTARTSYS;
+		goto unlock_pci_access;
+	}
+
+	if (ioc->ctl_cmds.status != MPT3_CMD_NOT_USED) {
+		ioc_err(ioc, "%s: ctl_cmd in use\n", __func__);
+		ret = -EAGAIN;
+		goto unlock_ctl_cmds;
+	}
+
+	ret = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT);
+	if (ret)
+		goto unlock_ctl_cmds;
+
+	mpi_request = (MPI2RequestHeader_t *)command->mpi_request;
+	if (mpi_request->Function != MPI2_FUNCTION_MCTP_PASSTHROUGH) {
+		ioc_err(ioc, "%s: Invalid request received, Function 0x%x\n",
+		    __func__, mpi_request->Function);
+		ret = -EINVAL;
+		goto unlock_ctl_cmds;
+	}
+
+	/* Use first reserved smid for passthrough commands */
+	smid = ioc->scsiio_depth - INTERNAL_SCSIIO_CMDS_COUNT + 1;
+	ret = 0;
+	ioc->ctl_cmds.status = MPT3_CMD_PENDING;
+	memset(ioc->ctl_cmds.reply, 0, ioc->reply_sz);
+	request = mpt3sas_base_get_msg_frame(ioc, smid);
+	memset(request, 0, ioc->request_sz);
+	memcpy(request, command->mpi_request, sizeof(Mpi26MctpPassthroughRequest_t));
+	ioc->ctl_cmds.smid = smid;
+	data_out_sz = command->data_out_size;
+	data_in_sz = command->data_in_size;
+
+	/* obtain dma-able memory for data transfer */
+	if (data_out_sz) /* WRITE */ {
+		data_out = dma_alloc_coherent(&ioc->pdev->dev, data_out_sz,
+		    &data_out_dma, GFP_ATOMIC);
+		if (!data_out) {
+			ret = -ENOMEM;
+			mpt3sas_base_free_smid(ioc, smid);
+			goto out;
+		}
+		memcpy(data_out, command->data_out_buf_ptr, data_out_sz);
+	}
+
+	if (data_in_sz) /* READ */ {
+		data_in = dma_alloc_coherent(&ioc->pdev->dev, data_in_sz,
+		    &data_in_dma, GFP_ATOMIC);
+		if (!data_in) {
+			ret = -ENOMEM;
+			mpt3sas_base_free_smid(ioc, smid);
+			goto out;
+		}
+	}
+
+	psge = &((Mpi26MctpPassthroughRequest_t *)request)->H2DSGL;
+
+	init_completion(&ioc->ctl_cmds.done);
+
+	mctp_passthru_req = (Mpi26MctpPassthroughRequest_t *)request;
+
+	_ctl_send_mctp_passthru_req(ioc, mctp_passthru_req, psge, data_out_dma,
+	    data_out_sz, data_in_dma, data_in_sz, smid);
+
+	timeout = command->timeout;
+	if (timeout < MPT3_IOCTL_DEFAULT_TIMEOUT)
+		timeout = MPT3_IOCTL_DEFAULT_TIMEOUT;
+
+	wait_for_completion_timeout(&ioc->ctl_cmds.done, timeout*HZ);
+	if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
+		mpt3sas_check_cmd_timeout(ioc,
+		    ioc->ctl_cmds.status, mpi_request,
+		    sizeof(Mpi26MctpPassthroughRequest_t) / 4, issue_reset);
+		goto issue_host_reset;
+	}
+
+	mpi_reply = ioc->ctl_cmds.reply;
+
+	/* copy out xdata to user */
+	if (data_in_sz)
+		memcpy(command->data_in_buf_ptr, data_in, data_in_sz);
+
+	/* copy out reply message frame to user */
+	if (command->max_reply_bytes) {
+		sz = min_t(u32, command->max_reply_bytes, ioc->reply_sz);
+		memcpy(command->reply_frame_buf_ptr, ioc->ctl_cmds.reply, sz);
+	}
+
+ issue_host_reset:
+	if (issue_reset) {
+		ret = -ENODATA;
+		mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
+	}
+
+ out:
+	/* free memory associated with sg buffers */
+	if (data_in)
+		dma_free_coherent(&ioc->pdev->dev, data_in_sz, data_in,
+		    data_in_dma);
+
+	if (data_out)
+		dma_free_coherent(&ioc->pdev->dev, data_out_sz, data_out,
+		    data_out_dma);
+
+	ioc->ctl_cmds.status = MPT3_CMD_NOT_USED;
+
+ unlock_ctl_cmds:
+	mutex_unlock(&ioc->ctl_cmds.mutex);
+
+ unlock_pci_access:
+	mutex_unlock(&ioc->pci_access_mutex);
+	return ret;
+}
+EXPORT_SYMBOL(mpt3sas_send_mctp_passthru_req);
+
 /**
  * _ctl_ioctl - mpt3ctl main ioctl entry point (unlocked)
  * @file: (struct file)
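The adapter-lookup pattern above (walk a global list under a lock and return the Nth element whose capability bit is set, or NULL) can be sketched standalone; `struct adapter`, `find_capable` and `CAP_MCTP` are illustrative stand-ins over a plain array, not the driver's list/locking API:

```c
#include <assert.h>
#include <stddef.h>

#define CAP_MCTP 0x1	/* stand-in for MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU */

struct adapter { unsigned int caps; };

/* Return the dev_index'th adapter with the capability bit set, or NULL. */
static struct adapter *find_capable(struct adapter *list, size_t n,
				    unsigned int cap, int dev_index)
{
	int count = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (list[i].caps & cap) {
			if (count == dev_index)
				return &list[i];
			count++;	/* only capable entries advance the index */
		}
	}
	return NULL;
}
```

The same `count`/`dev_index` convention is what lets `mpt3sas_get_device_count()` and the lookup stay consistent: indices 0..count-1 are valid.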
@@ -160,6 +160,9 @@ struct mpt3_ioctl_pci_info {
 #define MPT3_IOCTL_INTERFACE_SAS35	(0x07)
 #define MPT2_IOCTL_VERSION_LENGTH	(32)
 
+/* Bits set for mpt3_ioctl_iocinfo.driver_cap */
+#define MPT3_IOCTL_IOCINFO_DRIVER_CAP_MCTP_PASSTHRU 0x1
+
 /**
  * struct mpt3_ioctl_iocinfo - generic controller info
  * @hdr - generic header
@@ -175,6 +178,7 @@ struct mpt3_ioctl_pci_info {
  * @driver_version - driver version - 32 ASCII characters
  * @rsvd1 - reserved
  * @scsi_id - scsi id of adapter 0
+ * @driver_capability - driver capabilities
  * @rsvd2 - reserved
  * @pci_information - pci info (2nd revision)
  */
@@ -192,7 +196,8 @@ struct mpt3_ioctl_iocinfo {
 	uint8_t driver_version[MPT2_IOCTL_VERSION_LENGTH];
 	uint8_t rsvd1;
 	uint8_t scsi_id;
-	uint16_t rsvd2;
+	uint8_t driver_capability;
+	uint8_t rsvd2;
 	struct mpt3_ioctl_pci_info pci_information;
 };
 
@@ -458,4 +463,46 @@ struct mpt3_enable_diag_sbr_reload {
 	struct mpt3_ioctl_header hdr;
 };
 
+/**
+ * struct mpt3_passthru_command - generic mpt firmware passthru command
+ * @dev_index - device index
+ * @timeout - command timeout in seconds. (if zero then use driver default
+ *  value).
+ * @reply_frame_buf_ptr - MPI reply location
+ * @data_in_buf_ptr - destination for read
+ * @data_out_buf_ptr - data source for write
+ * @max_reply_bytes - maximum number of reply bytes to be sent to app.
+ * @data_in_size - number bytes for data transfer in (read)
+ * @data_out_size - number bytes for data transfer out (write)
+ * @mpi_request - request frame
+ */
+struct mpt3_passthru_command {
+	u8 dev_index;
+	uint32_t timeout;
+	void *reply_frame_buf_ptr;
+	void *data_in_buf_ptr;
+	void *data_out_buf_ptr;
+	uint32_t max_reply_bytes;
+	uint32_t data_in_size;
+	uint32_t data_out_size;
+	Mpi26MctpPassthroughRequest_t *mpi_request;
+};
+
+/*
+ * mpt3sas_get_device_count - Retrieve the count of MCTP passthrough
+ * capable devices managed by the driver.
+ *
+ * Returns number of devices that support MCTP passthrough.
+ */
+int mpt3sas_get_device_count(void);
+
+/*
+ * mpt3sas_send_mctp_passthru_req - Send an MPI MCTP passthrough command to
+ * firmware
+ * @command: The MPI MCTP passthrough command to send to firmware
+ *
+ * Returns 0 on success, anything else is error.
+ */
+int mpt3sas_send_mctp_passthru_req(struct mpt3_passthru_command *command);
+
 #endif /* MPT3SAS_CTL_H_INCLUDED */
@@ -2703,7 +2703,7 @@ scsih_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
 			ssp_target = 1;
 			if (sas_device->device_info &
 			    MPI2_SAS_DEVICE_INFO_SEP) {
-				sdev_printk(KERN_WARNING, sdev,
+				sdev_printk(KERN_INFO, sdev,
 				    "set ignore_delay_remove for handle(0x%04x)\n",
 				    sas_device_priv_data->sas_target->handle);
 				sas_device_priv_data->ignore_delay_remove = 1;
@@ -12710,7 +12710,7 @@ static const struct pci_device_id mpt3sas_pci_table[] = {
 };
 MODULE_DEVICE_TABLE(pci, mpt3sas_pci_table);
 
-static struct pci_error_handlers _mpt3sas_err_handler = {
+static const struct pci_error_handlers _mpt3sas_err_handler = {
 	.error_detected	= scsih_pci_error_detected,
 	.mmio_enabled	= scsih_pci_mmio_enabled,
 	.slot_reset	= scsih_pci_slot_reset,
@@ -151,16 +151,6 @@ static inline u8 mvs_assign_reg_set(struct mvs_info *mvi,
 	return MVS_CHIP_DISP->assign_reg_set(mvi, &dev->taskfileset);
 }
 
-void mvs_phys_reset(struct mvs_info *mvi, u32 phy_mask, int hard)
-{
-	u32 no;
-	for_each_phy(phy_mask, phy_mask, no) {
-		if (!(phy_mask & 1))
-			continue;
-		MVS_CHIP_DISP->phy_reset(mvi, no, hard);
-	}
-}
-
 int mvs_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
 			void *funcdata)
 {
@@ -425,7 +425,6 @@ struct mvs_task_exec_info {
 void mvs_get_sas_addr(void *buf, u32 buflen);
 void mvs_iounmap(void __iomem *regs);
 int mvs_ioremap(struct mvs_info *mvi, int bar, int bar_ex);
-void mvs_phys_reset(struct mvs_info *mvi, u32 phy_mask, int hard);
 int mvs_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
 			void *funcdata);
 void mvs_set_sas_addr(struct mvs_info *mvi, int port_id, u32 off_lo,
@@ -2876,7 +2876,7 @@ MODULE_DEVICE_TABLE(pci, qedi_pci_tbl);
 
 static enum cpuhp_state qedi_cpuhp_state;
 
-static struct pci_error_handlers qedi_err_handler = {
+static const struct pci_error_handlers qedi_err_handler = {
 	.error_detected = qedi_io_error_detected,
 };
 
@@ -2136,8 +2136,8 @@ qla2x00_write_flash_byte(struct qla_hw_data *ha, uint32_t addr, uint8_t data)
  * @flash_id: Flash ID
  *
  * This function polls the device until bit 7 of what is read matches data
- * bit 7 or until data bit 5 becomes a 1.  If that hapens, the flash ROM timed
- * out (a fatal error).  The flash book recommeds reading bit 7 again after
+ * bit 7 or until data bit 5 becomes a 1.  If that happens, the flash ROM timed
+ * out (a fatal error).  The flash book recommends reading bit 7 again after
  * reading bit 5 as a 1.
  *
  * Returns 0 on success, else non-zero.
@@ -510,22 +510,34 @@ void scsi_attach_vpd(struct scsi_device *sdev)
 		return;
 
 	for (i = 4; i < vpd_buf->len; i++) {
-		if (vpd_buf->data[i] == 0x0)
+		switch (vpd_buf->data[i]) {
+		case 0x0:
 			scsi_update_vpd_page(sdev, 0x0, &sdev->vpd_pg0);
-		if (vpd_buf->data[i] == 0x80)
+			break;
+		case 0x80:
 			scsi_update_vpd_page(sdev, 0x80, &sdev->vpd_pg80);
-		if (vpd_buf->data[i] == 0x83)
+			break;
+		case 0x83:
 			scsi_update_vpd_page(sdev, 0x83, &sdev->vpd_pg83);
-		if (vpd_buf->data[i] == 0x89)
+			break;
+		case 0x89:
 			scsi_update_vpd_page(sdev, 0x89, &sdev->vpd_pg89);
-		if (vpd_buf->data[i] == 0xb0)
+			break;
+		case 0xb0:
 			scsi_update_vpd_page(sdev, 0xb0, &sdev->vpd_pgb0);
-		if (vpd_buf->data[i] == 0xb1)
+			break;
+		case 0xb1:
 			scsi_update_vpd_page(sdev, 0xb1, &sdev->vpd_pgb1);
-		if (vpd_buf->data[i] == 0xb2)
+			break;
+		case 0xb2:
 			scsi_update_vpd_page(sdev, 0xb2, &sdev->vpd_pgb2);
-		if (vpd_buf->data[i] == 0xb7)
+			break;
+		case 0xb7:
 			scsi_update_vpd_page(sdev, 0xb7, &sdev->vpd_pgb7);
+			break;
+		default:
+			break;
+		}
 	}
 	kfree(vpd_buf);
 }
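The rewritten loop dispatches on the VPD page code with a single switch rather than a chain of ifs, which also makes the "unknown page" path explicit. A standalone sketch of the same dispatch, mapping the page codes handled above to their SPC/SBC page names (`vpd_page_name` is an illustrative helper, not a kernel function):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Map an INQUIRY VPD page code to its name per SPC/SBC, NULL if unhandled. */
static const char *vpd_page_name(unsigned char page)
{
	switch (page) {
	case 0x00: return "Supported VPD Pages";
	case 0x80: return "Unit Serial Number";
	case 0x83: return "Device Identification";
	case 0x89: return "ATA Information";
	case 0xb0: return "Block Limits";
	case 0xb1: return "Block Device Characteristics";
	case 0xb2: return "Logical Block Provisioning";
	case 0xb7: return "Block Limits Extension";
	default:   return NULL;	/* mirrors the switch's default: skip */
	}
}
```

As in `scsi_attach_vpd()`, each supported code gets exactly one action and everything else falls through to the default case.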
(File diff suppressed because it is too large)
@@ -547,6 +547,18 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd)
 
 	scsi_report_sense(sdev, &sshdr);
 
+	if (sshdr.sense_key == UNIT_ATTENTION) {
+		/*
+		 * Increment the counters for Power on/Reset or New Media so
+		 * that all ULDs interested in these can see that those have
+		 * happened, even if someone else gets the sense data.
+		 */
+		if (sshdr.asc == 0x28)
+			scmd->device->ua_new_media_ctr++;
+		else if (sshdr.asc == 0x29)
+			scmd->device->ua_por_ctr++;
+	}
+
 	if (scsi_sense_is_deferred(&sshdr))
 		return NEEDS_RETRY;
 
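This hunk is the core change the pull request describes: instead of relying on which command happens to receive the unit-attention sense data, the midlevel bumps a per-device counter, and any interested ULD keeps a saved copy and treats a difference as "the event happened". A minimal userspace sketch of that handshake (`struct dev_ctr`, `struct uld_state` and the function names are illustrative, not the kernel structures):

```c
#include <assert.h>

struct dev_ctr { unsigned int ua_por_ctr; };	/* midlevel side */
struct uld_state {				/* ULD side, e.g. the st driver */
	unsigned int por_ctr;			/* saved counter value */
	int pos_unknown;
};

/* Midlevel: a UA with ASC 0x29 (power on / reset) was seen on any command. */
static void midlevel_sees_por(struct dev_ctr *d)
{
	d->ua_por_ctr++;
}

/* ULD: compare saved copy against the device counter; a mismatch means a
 * reset happened since we last looked, even if another command got the sense. */
static void uld_check(struct uld_state *s, const struct dev_ctr *d)
{
	if (d->ua_por_ctr != s->por_ctr) {
		s->por_ctr = d->ua_por_ctr;
		s->pos_unknown = 1;	/* tape position is no longer trusted */
	}
}
```

The `st_chk_result()` and `test_ready()` hunks further down apply exactly this compare-and-save pattern via `scsi_get_ua_por_ctr()` and `scsi_get_ua_new_media_ctr()`.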
@@ -711,6 +723,13 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd)
 		return SUCCESS;
 
 	case COMPLETED:
+		/*
+		 * A command using command duration limits (CDL) with a
+		 * descriptor set with policy 0xD may be completed with success
+		 * and the sense data DATA CURRENTLY UNAVAILABLE, indicating
+		 * that the command was in fact aborted because it exceeded its
+		 * duration limit. Never retry these commands.
+		 */
 		if (sshdr.asc == 0x55 && sshdr.ascq == 0x0a) {
 			set_scsi_ml_byte(scmd, SCSIML_STAT_DL_TIMEOUT);
 			req->cmd_flags |= REQ_FAILFAST_DEV;
@@ -151,8 +151,9 @@ int scsi_complete_async_scans(void)
 	struct async_scan_data *data;
 
 	do {
-		if (list_empty(&scanning_hosts))
-			return 0;
+		scoped_guard(spinlock, &async_scan_lock)
+			if (list_empty(&scanning_hosts))
+				return 0;
 		/* If we can't get memory immediately, that's OK.  Just
 		 * sleep a little.  Even if we never get memory, the async
 		 * scans will finish eventually.
@@ -17,7 +17,9 @@ static const struct ctl_table scsi_table[] = {
 	  .data		= &scsi_logging_level,
 	  .maxlen	= sizeof(scsi_logging_level),
 	  .mode		= 0644,
-	  .proc_handler	= proc_dointvec },
+	  .proc_handler	= proc_dointvec_minmax,
+	  .extra1	= SYSCTL_ZERO,
+	  .extra2	= SYSCTL_INT_MAX },
 };
 
 static struct ctl_table_header *scsi_table_header;
@@ -163,9 +163,11 @@ static const char *st_formats[] = {
 
 static int debugging = DEBUG;
 
+/* Setting these non-zero may risk recognizing resets */
 #define MAX_RETRIES 0
 #define MAX_WRITE_RETRIES 0
 #define MAX_READY_RETRIES 0
+
 #define NO_TAPE  NOT_READY
 
 #define ST_TIMEOUT (900 * HZ)
@@ -357,10 +359,18 @@ static int st_chk_result(struct scsi_tape *STp, struct st_request * SRpnt)
 {
 	int result = SRpnt->result;
 	u8 scode;
+	unsigned int ctr;
 	DEB(const char *stp;)
 	char *name = STp->name;
 	struct st_cmdstatus *cmdstatp;
 
+	ctr = scsi_get_ua_por_ctr(STp->device);
+	if (ctr != STp->por_ctr) {
+		STp->por_ctr = ctr;
+		STp->pos_unknown = 1; /* ASC => power on / reset */
+		st_printk(KERN_WARNING, STp, "Power on/reset recognized.");
+	}
+
 	if (!result)
 		return 0;
 
@@ -413,10 +423,11 @@ static int st_chk_result(struct scsi_tape *STp, struct st_request * SRpnt)
 	if (cmdstatp->have_sense &&
 	    cmdstatp->sense_hdr.asc == 0 && cmdstatp->sense_hdr.ascq == 0x17)
 		STp->cleaning_req = 1; /* ASC and ASCQ => cleaning requested */
-	if (cmdstatp->have_sense && scode == UNIT_ATTENTION && cmdstatp->sense_hdr.asc == 0x29)
-		STp->pos_unknown |= STp->device->was_reset;
+	if (cmdstatp->have_sense && scode == UNIT_ATTENTION &&
+	    cmdstatp->sense_hdr.asc == 0x29 && !STp->pos_unknown) {
+		STp->pos_unknown = 1; /* ASC => power on / reset */
+		st_printk(KERN_WARNING, STp, "Power on/reset recognized.");
+	}
 
 	if (cmdstatp->have_sense &&
 	    scode == RECOVERED_ERROR
@@ -952,7 +963,6 @@ static void reset_state(struct scsi_tape *STp)
 		STp->partition = find_partition(STp);
 		if (STp->partition < 0)
 			STp->partition = 0;
-		STp->new_partition = STp->partition;
 	}
 }
 
@@ -969,6 +979,7 @@ static int test_ready(struct scsi_tape *STp, int do_wait)
 {
 	int attentions, waits, max_wait, scode;
 	int retval = CHKRES_READY, new_session = 0;
+	unsigned int ctr;
 	unsigned char cmd[MAX_COMMAND_SIZE];
 	struct st_request *SRpnt = NULL;
 	struct st_cmdstatus *cmdstatp = &STp->buffer->cmdstat;
@@ -1025,6 +1036,13 @@ static int test_ready(struct scsi_tape *STp, int do_wait)
 		}
 	}
 
+	ctr = scsi_get_ua_new_media_ctr(STp->device);
+	if (ctr != STp->new_media_ctr) {
+		STp->new_media_ctr = ctr;
+		new_session = 1;
+		DEBC_printk(STp, "New tape session.");
+	}
+
 	retval = (STp->buffer)->syscall_result;
 	if (!retval)
 		retval = new_session ? CHKRES_NEW_SESSION : CHKRES_READY;
@@ -2897,7 +2915,6 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
 		timeout = STp->long_timeout * 8;
 
 		DEBC_printk(STp, "Erasing tape.\n");
-		fileno = blkno = at_sm = 0;
 		break;
 	case MTSETBLK:		/* Set block length */
 	case MTSETDENSITY:	/* Set tape density */
@@ -2930,14 +2947,17 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
 		if (cmd_in == MTSETDENSITY) {
 			(STp->buffer)->b_data[4] = arg;
 			STp->density_changed = 1;	/* At least we tried ;-) */
+			STp->changed_density = arg;
 		} else if (cmd_in == SET_DENS_AND_BLK)
 			(STp->buffer)->b_data[4] = arg >> 24;
 		else
 			(STp->buffer)->b_data[4] = STp->density;
 		if (cmd_in == MTSETBLK || cmd_in == SET_DENS_AND_BLK) {
 			ltmp = arg & MT_ST_BLKSIZE_MASK;
-			if (cmd_in == MTSETBLK)
+			if (cmd_in == MTSETBLK) {
 				STp->blksize_changed = 1; /* At least we tried ;-) */
+				STp->changed_blksize = arg;
+			}
 		} else
 			ltmp = STp->block_size;
 		(STp->buffer)->b_data[9] = (ltmp >> 16);
@@ -3084,7 +3104,9 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
 		    cmd_in == MTSETDRVBUFFER ||
 		    cmd_in == SET_DENS_AND_BLK) {
 			if (cmdstatp->sense_hdr.sense_key == ILLEGAL_REQUEST &&
-			    !(STp->use_pf & PF_TESTED)) {
+			    cmdstatp->sense_hdr.asc == 0x24 &&
+			    (STp->device)->scsi_level <= SCSI_2 &&
+			    !(STp->use_pf & PF_TESTED)) {
 				/* Try the other possible state of Page Format if not
 				   already tried */
 				STp->use_pf = (STp->use_pf ^ USE_PF) | PF_TESTED;
@@ -3636,9 +3658,23 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
 				retval = (-EIO);
 				goto out;
 			}
-			reset_state(STp);
-			/* remove this when the midlevel properly clears was_reset */
-			STp->device->was_reset = 0;
+			reset_state(STp); /* Clears pos_unknown */
+
+			/* Fix the device settings after reset, ignore errors */
+			if (mtc.mt_op == MTREW || mtc.mt_op == MTSEEK ||
+			    mtc.mt_op == MTEOM) {
+				if (STp->can_partitions) {
+					/* STp->new_partition contains the
+					 * latest partition set
+					 */
+					STp->partition = 0;
+					switch_partition(STp);
+				}
+				if (STp->density_changed)
+					st_int_ioctl(STp, MTSETDENSITY, STp->changed_density);
+				if (STp->blksize_changed)
+					st_int_ioctl(STp, MTSETBLK, STp->changed_blksize);
+			}
 		}
 
 		if (mtc.mt_op != MTNOP && mtc.mt_op != MTSETBLK &&
@@ -4122,7 +4158,7 @@ static void validate_options(void)
  */
 static int __init st_setup(char *str)
 {
-	int i, len, ints[5];
+	int i, len, ints[ARRAY_SIZE(parms) + 1];
 	char *stp;
 
 	stp = get_options(str, ARRAY_SIZE(ints), ints);
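The `st_setup()` fix sizes `ints[]` from the parameter table instead of the literal 5, so `get_options()` can never write past the array when `parms` grows. The kernel's `get_options()` convention is: slot 0 receives the number of values parsed, the values follow from slot 1. A userspace sketch of the same convention under that assumption (`NPARMS` stands in for `ARRAY_SIZE(parms)`; `parse_ints` is illustrative, not the kernel helper):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NPARMS 4	/* stand-in for ARRAY_SIZE(parms) */

/* Parse up to NPARMS comma-separated integers: count in ints[0],
 * values in ints[1..]; the array must have NPARMS + 1 slots. */
static void parse_ints(const char *str, int ints[NPARMS + 1])
{
	int n = 0;
	char buf[64], *p;

	strncpy(buf, str, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	for (p = strtok(buf, ","); p && n < NPARMS; p = strtok(NULL, ","))
		ints[++n] = atoi(p);	/* values start at ints[1] */
	ints[0] = n;
}
```

With the array sized `NPARMS + 1`, the highest index ever written is `NPARMS`, which is exactly what the `ARRAY_SIZE(parms) + 1` declaration guarantees in the driver.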
@@ -4384,6 +4420,9 @@ static int st_probe(struct device *dev)
 		goto out_idr_remove;
 	}
 
+	tpnt->new_media_ctr = scsi_get_ua_new_media_ctr(SDp);
+	tpnt->por_ctr = scsi_get_ua_por_ctr(SDp);
+
 	dev_set_drvdata(dev, tpnt);
 
 
@@ -4665,6 +4704,24 @@ options_show(struct device *dev, struct device_attribute *attr, char *buf)
 }
 static DEVICE_ATTR_RO(options);
 
+/**
+ * position_lost_in_reset_show - Value 1 indicates that reads, writes, etc.
+ * are blocked because a device reset has occurred and no operation positioning
+ * the tape has been issued.
+ * @dev: struct device
+ * @attr: attribute structure
+ * @buf: buffer to return formatted data in
+ */
+static ssize_t position_lost_in_reset_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct st_modedef *STm = dev_get_drvdata(dev);
+	struct scsi_tape *STp = STm->tape;
+
+	return sprintf(buf, "%d", STp->pos_unknown);
+}
+static DEVICE_ATTR_RO(position_lost_in_reset);
+
 /* Support for tape stats */
 
 /**
@@ -4849,6 +4906,7 @@ static struct attribute *st_dev_attrs[] = {
 	&dev_attr_default_density.attr,
 	&dev_attr_default_compression.attr,
 	&dev_attr_options.attr,
+	&dev_attr_position_lost_in_reset.attr,
 	NULL,
 };
 
@@ -165,6 +165,7 @@ struct scsi_tape {
 	unsigned char compression_changed;
 	unsigned char drv_buffer;
 	unsigned char density;
+	unsigned char changed_density;
 	unsigned char door_locked;
 	unsigned char autorew_dev;   /* auto-rewind device */
 	unsigned char rew_at_close;  /* rewind necessary at close */
@@ -172,11 +173,16 @@ struct scsi_tape {
 	unsigned char cleaning_req;  /* cleaning requested? */
 	unsigned char first_tur;     /* first TEST UNIT READY */
 	int block_size;
+	int changed_blksize;
 	int min_block;
 	int max_block;
 	int recover_count;     /* From tape opening */
 	int recover_reg;       /* From last status call */
+
+	/* The saved values of midlevel counters */
+	unsigned int new_media_ctr;
+	unsigned int por_ctr;
 
 #if DEBUG
 	unsigned char write_pending;
 	int nbr_finished;
@@ -776,7 +776,7 @@ static void handle_multichannel_storage(struct hv_device *device, int max_chns)
 
 	if (vstor_packet->operation != VSTOR_OPERATION_COMPLETE_IO ||
 	    vstor_packet->status != 0) {
-		dev_err(dev, "Failed to create sub-channel: op=%d, sts=%d\n",
+		dev_err(dev, "Failed to create sub-channel: op=%d, host=0x%x\n",
 			vstor_packet->operation, vstor_packet->status);
 		return;
 	}
@@ -1183,7 +1183,7 @@ static void storvsc_on_io_completion(struct storvsc_device *stor_device,
 			STORVSC_LOGGING_WARN : STORVSC_LOGGING_ERROR;
 
 		storvsc_log_ratelimited(device, loglevel,
-			"tag#%d cmd 0x%x status: scsi 0x%x srb 0x%x hv 0x%x\n",
+			"tag#%d cmd 0x%x status: scsi 0x%x srb 0x%x host 0x%x\n",
 			scsi_cmd_to_rq(request->cmd)->tag,
 			stor_pkt->vm_srb.cdb[0],
 			vstor_packet->vm_srb.scsi_status,
@ -212,7 +212,7 @@ int iscsi_target_check_login_request(
|
||||
|
||||
if ((login_req->max_version != login->version_max) ||
|
||||
(login_req->min_version != login->version_min)) {
|
||||
pr_err("Login request changed Version Max/Nin"
|
||||
pr_err("Login request changed Version Max/Min"
|
||||
" unexpectedly to 0x%02x/0x%02x, protocol error\n",
|
||||
login_req->max_version, login_req->min_version);
|
||||
iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR,
|
||||
@@ -557,7 +557,7 @@ static void iscsi_target_do_login_rx(struct work_struct *work)
 	 * before initial PDU processing in iscsi_target_start_negotiation()
 	 * has completed, go ahead and retry until it's cleared.
 	 *
-	 * Otherwise if the TCP connection drops while this is occuring,
+	 * Otherwise if the TCP connection drops while this is occurring,
 	 * iscsi_target_start_negotiation() will detect the failure, call
 	 * cancel_delayed_work_sync(&conn->login_work), and cleanup the
 	 * remaining iscsi connection resources from iscsi_np process context.
@@ -1050,7 +1050,7 @@ static int iscsi_target_do_login(struct iscsit_conn *conn, struct iscsi_login *l
 		/*
 		 * Check to make sure the TCP connection has not
 		 * dropped asynchronously while session reinstatement
-		 * was occuring in this kthread context, before
+		 * was occurring in this kthread context, before
 		 * transitioning to full feature phase operation.
 		 */
 		if (iscsi_target_sk_check_close(conn))
@@ -176,7 +176,7 @@ static int tcm_loop_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *sc)

 	memset(tl_cmd, 0, sizeof(*tl_cmd));
 	tl_cmd->sc = sc;
-	tl_cmd->sc_cmd_tag = scsi_cmd_to_rq(sc)->tag;
+	tl_cmd->sc_cmd_tag = blk_mq_unique_tag(scsi_cmd_to_rq(sc));

 	tcm_loop_target_queue_cmd(tl_cmd);
 	return 0;
@@ -242,7 +242,8 @@ static int tcm_loop_abort_task(struct scsi_cmnd *sc)
 	tl_hba = *(struct tcm_loop_hba **)shost_priv(sc->device->host);
 	tl_tpg = &tl_hba->tl_hba_tpgs[sc->device->id];
 	ret = tcm_loop_issue_tmr(tl_tpg, sc->device->lun,
-				 scsi_cmd_to_rq(sc)->tag, TMR_ABORT_TASK);
+				 blk_mq_unique_tag(scsi_cmd_to_rq(sc)),
+				 TMR_ABORT_TASK);
 	return (ret == TMR_FUNCTION_COMPLETE) ? SUCCESS : FAILED;
 }

@@ -123,7 +123,7 @@ static ssize_t target_core_item_dbroot_store(struct config_item *item,
 		goto unlock;
 	}

-	read_bytes = snprintf(db_root_stage, DB_ROOT_LEN, "%s", page);
+	read_bytes = scnprintf(db_root_stage, DB_ROOT_LEN, "%s", page);
 	if (!read_bytes)
 		goto unlock;

@@ -143,7 +143,7 @@ static ssize_t target_core_item_dbroot_store(struct config_item *item,
 	}
 	filp_close(fp, NULL);

-	strncpy(db_root, db_root_stage, read_bytes);
+	strscpy(db_root, db_root_stage);
 	pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root);

 	r = read_bytes;
@@ -3664,7 +3664,7 @@ static void target_init_dbroot(void)
 	}
 	filp_close(fp, NULL);

-	strncpy(db_root, db_root_stage, DB_ROOT_LEN);
+	strscpy(db_root, db_root_stage);
 	pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root);
 }

@@ -1078,8 +1078,8 @@ passthrough_parse_cdb(struct se_cmd *cmd,
 	if (!dev->dev_attrib.emulate_pr &&
 	    ((cdb[0] == PERSISTENT_RESERVE_IN) ||
 	     (cdb[0] == PERSISTENT_RESERVE_OUT) ||
-	     (cdb[0] == RELEASE || cdb[0] == RELEASE_10) ||
-	     (cdb[0] == RESERVE || cdb[0] == RESERVE_10))) {
+	     (cdb[0] == RELEASE_6 || cdb[0] == RELEASE_10) ||
+	     (cdb[0] == RESERVE_6 || cdb[0] == RESERVE_10))) {
 		return TCM_UNSUPPORTED_SCSI_OPCODE;
 	}

@@ -1101,7 +1101,7 @@ passthrough_parse_cdb(struct se_cmd *cmd,
 		return target_cmd_size_check(cmd, size);
 	}

-	if (cdb[0] == RELEASE || cdb[0] == RELEASE_10) {
+	if (cdb[0] == RELEASE_6 || cdb[0] == RELEASE_10) {
 		cmd->execute_cmd = target_scsi2_reservation_release;
 		if (cdb[0] == RELEASE_10)
 			size = get_unaligned_be16(&cdb[7]);
@@ -1109,7 +1109,7 @@ passthrough_parse_cdb(struct se_cmd *cmd,
 			size = cmd->data_length;
 		return target_cmd_size_check(cmd, size);
 	}
-	if (cdb[0] == RESERVE || cdb[0] == RESERVE_10) {
+	if (cdb[0] == RESERVE_6 || cdb[0] == RESERVE_10) {
 		cmd->execute_cmd = target_scsi2_reservation_reserve;
 		if (cdb[0] == RESERVE_10)
 			size = get_unaligned_be16(&cdb[7]);
@@ -91,7 +91,7 @@ target_scsi2_reservation_check(struct se_cmd *cmd)

 	switch (cmd->t_task_cdb[0]) {
 	case INQUIRY:
-	case RELEASE:
+	case RELEASE_6:
 	case RELEASE_10:
 		return 0;
 	default:
@@ -418,12 +418,12 @@ static int core_scsi3_pr_seq_non_holder(struct se_cmd *cmd, u32 pr_reg_type,
 			return -EINVAL;
 		}
 		break;
-	case RELEASE:
+	case RELEASE_6:
 	case RELEASE_10:
 		/* Handled by CRH=1 in target_scsi2_reservation_release() */
 		ret = 0;
 		break;
-	case RESERVE:
+	case RESERVE_6:
 	case RESERVE_10:
 		/* Handled by CRH=1 in target_scsi2_reservation_reserve() */
 		ret = 0;
@@ -1674,9 +1674,9 @@ static bool tcm_is_pr_enabled(struct target_opcode_descriptor *descr,
 		return true;

 	switch (descr->opcode) {
-	case RESERVE:
+	case RESERVE_6:
 	case RESERVE_10:
-	case RELEASE:
+	case RELEASE_6:
 	case RELEASE_10:
 		/*
 		 * The pr_ops which are used by the backend modules don't
@@ -1828,9 +1828,9 @@ static struct target_opcode_descriptor tcm_opcode_pro_register_move = {

 static struct target_opcode_descriptor tcm_opcode_release = {
 	.support = SCSI_SUPPORT_FULL,
-	.opcode = RELEASE,
+	.opcode = RELEASE_6,
 	.cdb_size = 6,
-	.usage_bits = {RELEASE, 0x00, 0x00, 0x00,
+	.usage_bits = {RELEASE_6, 0x00, 0x00, 0x00,
 		       0x00, SCSI_CONTROL_MASK},
 	.enabled = tcm_is_pr_enabled,
 };
@@ -1847,9 +1847,9 @@ static struct target_opcode_descriptor tcm_opcode_release10 = {

 static struct target_opcode_descriptor tcm_opcode_reserve = {
 	.support = SCSI_SUPPORT_FULL,
-	.opcode = RESERVE,
+	.opcode = RESERVE_6,
 	.cdb_size = 6,
-	.usage_bits = {RESERVE, 0x00, 0x00, 0x00,
+	.usage_bits = {RESERVE_6, 0x00, 0x00, 0x00,
 		       0x00, SCSI_CONTROL_MASK},
 	.enabled = tcm_is_pr_enabled,
 };
@@ -2151,8 +2151,10 @@ spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
 		if (descr->serv_action_valid)
 			return TCM_INVALID_CDB_FIELD;

-		if (!descr->enabled || descr->enabled(descr, cmd))
+		if (!descr->enabled || descr->enabled(descr, cmd)) {
 			*opcode = descr;
+			return TCM_NO_SENSE;
+		}
 		break;
 	case 0x2:
 		/*
@@ -2166,8 +2168,10 @@ spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
 			if (descr->serv_action_valid &&
 			    descr->service_action == requested_sa) {
 				if (!descr->enabled || descr->enabled(descr,
-								      cmd))
+								      cmd)) {
 					*opcode = descr;
+					return TCM_NO_SENSE;
+				}
 			} else if (!descr->serv_action_valid)
 				return TCM_INVALID_CDB_FIELD;
 			break;
@@ -2180,13 +2184,15 @@ spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
 			 */
 			if (descr->service_action == requested_sa)
 				if (!descr->enabled || descr->enabled(descr,
-								      cmd))
+								      cmd)) {
 					*opcode = descr;
+					return TCM_NO_SENSE;
+				}
 			break;
 		}
 	}

-	return 0;
+	return TCM_NO_SENSE;
 }

 static sense_reason_t
@@ -2243,7 +2249,7 @@ spc_emulate_report_supp_op_codes(struct se_cmd *cmd)
 			response_length += spc_rsoc_encode_command_descriptor(
 					&buf[response_length], rctd, descr);
 		}
-		put_unaligned_be32(response_length - 3, buf);
+		put_unaligned_be32(response_length - 4, buf);
 	} else {
 		response_length = spc_rsoc_encode_one_command_descriptor(
 				&buf[response_length], rctd, descr,
@@ -2267,9 +2273,9 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)
 	unsigned char *cdb = cmd->t_task_cdb;

 	switch (cdb[0]) {
-	case RESERVE:
+	case RESERVE_6:
 	case RESERVE_10:
-	case RELEASE:
+	case RELEASE_6:
 	case RELEASE_10:
 		if (!dev->dev_attrib.emulate_pr)
 			return TCM_UNSUPPORTED_SCSI_OPCODE;
@@ -2313,7 +2319,7 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)
 		*size = get_unaligned_be32(&cdb[5]);
 		cmd->execute_cmd = target_scsi3_emulate_pr_out;
 		break;
-	case RELEASE:
+	case RELEASE_6:
 	case RELEASE_10:
 		if (cdb[0] == RELEASE_10)
 			*size = get_unaligned_be16(&cdb[7]);
@@ -2322,7 +2328,7 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)

 		cmd->execute_cmd = target_scsi2_reservation_release;
 		break;
-	case RESERVE:
+	case RESERVE_6:
 	case RESERVE_10:
 		/*
 		 * The SPC-2 RESERVE does not contain a size in the SCSI CDB.
@@ -458,6 +458,14 @@ static ssize_t pm_qos_enable_store(struct device *dev,
 	return count;
 }

+static ssize_t critical_health_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%d\n", hba->critical_health_count);
+}
+
 static DEVICE_ATTR_RW(rpm_lvl);
 static DEVICE_ATTR_RO(rpm_target_dev_state);
 static DEVICE_ATTR_RO(rpm_target_link_state);
@@ -470,6 +478,7 @@ static DEVICE_ATTR_RW(enable_wb_buf_flush);
 static DEVICE_ATTR_RW(wb_flush_threshold);
 static DEVICE_ATTR_RW(rtc_update_ms);
 static DEVICE_ATTR_RW(pm_qos_enable);
+static DEVICE_ATTR_RO(critical_health);

 static struct attribute *ufs_sysfs_ufshcd_attrs[] = {
 	&dev_attr_rpm_lvl.attr,
@@ -484,6 +493,7 @@ static struct attribute *ufs_sysfs_ufshcd_attrs[] = {
 	&dev_attr_wb_flush_threshold.attr,
 	&dev_attr_rtc_update_ms.attr,
 	&dev_attr_pm_qos_enable.attr,
+	&dev_attr_critical_health.attr,
 	NULL
 };
@@ -83,34 +83,34 @@ UFS_CMD_TRACE_TSF_TYPES

 TRACE_EVENT(ufshcd_clk_gating,

-	TP_PROTO(const char *dev_name, int state),
+	TP_PROTO(struct ufs_hba *hba, int state),

-	TP_ARGS(dev_name, state),
+	TP_ARGS(hba, state),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(int, state)
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->state = state;
 	),

 	TP_printk("%s: gating state changed to %s",
-		__get_str(dev_name),
+		dev_name(__entry->hba->dev),
 		__print_symbolic(__entry->state, UFSCHD_CLK_GATING_STATES))
 );

 TRACE_EVENT(ufshcd_clk_scaling,

-	TP_PROTO(const char *dev_name, const char *state, const char *clk,
+	TP_PROTO(struct ufs_hba *hba, const char *state, const char *clk,
 		 u32 prev_state, u32 curr_state),

-	TP_ARGS(dev_name, state, clk, prev_state, curr_state),
+	TP_ARGS(hba, state, clk, prev_state, curr_state),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__string(state, state)
 		__string(clk, clk)
 		__field(u32, prev_state)
@@ -118,7 +118,7 @@ TRACE_EVENT(ufshcd_clk_scaling,
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__assign_str(state);
 		__assign_str(clk);
 		__entry->prev_state = prev_state;
@@ -126,80 +126,80 @@ TRACE_EVENT(ufshcd_clk_scaling,
 	),

 	TP_printk("%s: %s %s from %u to %u Hz",
-		__get_str(dev_name), __get_str(state), __get_str(clk),
+		dev_name(__entry->hba->dev), __get_str(state), __get_str(clk),
 		__entry->prev_state, __entry->curr_state)
 );

 TRACE_EVENT(ufshcd_auto_bkops_state,

-	TP_PROTO(const char *dev_name, const char *state),
+	TP_PROTO(struct ufs_hba *hba, const char *state),

-	TP_ARGS(dev_name, state),
+	TP_ARGS(hba, state),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__string(state, state)
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__assign_str(state);
 	),

 	TP_printk("%s: auto bkops - %s",
-		__get_str(dev_name), __get_str(state))
+		dev_name(__entry->hba->dev), __get_str(state))
 );

 DECLARE_EVENT_CLASS(ufshcd_profiling_template,
-	TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us,
+	TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us,
 		 int err),

-	TP_ARGS(dev_name, profile_info, time_us, err),
+	TP_ARGS(hba, profile_info, time_us, err),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__string(profile_info, profile_info)
 		__field(s64, time_us)
 		__field(int, err)
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__assign_str(profile_info);
 		__entry->time_us = time_us;
 		__entry->err = err;
 	),

 	TP_printk("%s: %s: took %lld usecs, err %d",
-		__get_str(dev_name), __get_str(profile_info),
+		dev_name(__entry->hba->dev), __get_str(profile_info),
 		__entry->time_us, __entry->err)
 );

 DEFINE_EVENT(ufshcd_profiling_template, ufshcd_profile_hibern8,
-	TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us,
+	TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us,
 		 int err),
-	TP_ARGS(dev_name, profile_info, time_us, err));
+	TP_ARGS(hba, profile_info, time_us, err));

 DEFINE_EVENT(ufshcd_profiling_template, ufshcd_profile_clk_gating,
-	TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us,
+	TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us,
 		 int err),
-	TP_ARGS(dev_name, profile_info, time_us, err));
+	TP_ARGS(hba, profile_info, time_us, err));

 DEFINE_EVENT(ufshcd_profiling_template, ufshcd_profile_clk_scaling,
-	TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us,
+	TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us,
 		 int err),
-	TP_ARGS(dev_name, profile_info, time_us, err));
+	TP_ARGS(hba, profile_info, time_us, err));

 DECLARE_EVENT_CLASS(ufshcd_template,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),

-	TP_ARGS(dev_name, err, usecs, dev_state, link_state),
+	TP_ARGS(hba, err, usecs, dev_state, link_state),

 	TP_STRUCT__entry(
 		__field(s64, usecs)
 		__field(int, err)
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(int, dev_state)
 		__field(int, link_state)
 	),
@@ -207,14 +207,14 @@ DECLARE_EVENT_CLASS(ufshcd_template,
 	TP_fast_assign(
 		__entry->usecs = usecs;
 		__entry->err = err;
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->dev_state = dev_state;
 		__entry->link_state = link_state;
 	),

 	TP_printk(
 		"%s: took %lld usecs, dev_state: %s, link_state: %s, err %d",
-		__get_str(dev_name),
+		dev_name(__entry->hba->dev),
 		__entry->usecs,
 		__print_symbolic(__entry->dev_state, UFS_PWR_MODES),
 		__print_symbolic(__entry->link_state, UFS_LINK_STATES),
@@ -223,60 +223,62 @@ DECLARE_EVENT_CLASS(ufshcd_template,
 );

 DEFINE_EVENT(ufshcd_template, ufshcd_system_suspend,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_system_resume,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_runtime_suspend,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_runtime_resume,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_init,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_wl_suspend,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_wl_resume,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_wl_runtime_suspend,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_wl_runtime_resume,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 TRACE_EVENT(ufshcd_command,
-	TP_PROTO(struct scsi_device *sdev, enum ufs_trace_str_t str_t,
+	TP_PROTO(struct scsi_device *sdev, struct ufs_hba *hba,
+		 enum ufs_trace_str_t str_t,
 		 unsigned int tag, u32 doorbell, u32 hwq_id, int transfer_len,
 		 u32 intr, u64 lba, u8 opcode, u8 group_id),

-	TP_ARGS(sdev, str_t, tag, doorbell, hwq_id, transfer_len, intr, lba,
+	TP_ARGS(sdev, hba, str_t, tag, doorbell, hwq_id, transfer_len, intr, lba,
 		opcode, group_id),

 	TP_STRUCT__entry(
 		__field(struct scsi_device *, sdev)
+		__field(struct ufs_hba *, hba)
 		__field(enum ufs_trace_str_t, str_t)
 		__field(unsigned int, tag)
 		__field(u32, doorbell)
@@ -290,6 +292,7 @@ TRACE_EVENT(ufshcd_command,

 	TP_fast_assign(
 		__entry->sdev = sdev;
+		__entry->hba = hba;
 		__entry->str_t = str_t;
 		__entry->tag = tag;
 		__entry->doorbell = doorbell;
@@ -312,13 +315,13 @@ TRACE_EVENT(ufshcd_command,
 );

 TRACE_EVENT(ufshcd_uic_command,
-	TP_PROTO(const char *dev_name, enum ufs_trace_str_t str_t, u32 cmd,
+	TP_PROTO(struct ufs_hba *hba, enum ufs_trace_str_t str_t, u32 cmd,
 		 u32 arg1, u32 arg2, u32 arg3),

-	TP_ARGS(dev_name, str_t, cmd, arg1, arg2, arg3),
+	TP_ARGS(hba, str_t, cmd, arg1, arg2, arg3),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(enum ufs_trace_str_t, str_t)
 		__field(u32, cmd)
 		__field(u32, arg1)
@@ -327,7 +330,7 @@ TRACE_EVENT(ufshcd_uic_command,
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->str_t = str_t;
 		__entry->cmd = cmd;
 		__entry->arg1 = arg1;
@@ -337,19 +340,19 @@ TRACE_EVENT(ufshcd_uic_command,

 	TP_printk(
 		"%s: %s: cmd: 0x%x, arg1: 0x%x, arg2: 0x%x, arg3: 0x%x",
-		show_ufs_cmd_trace_str(__entry->str_t), __get_str(dev_name),
+		show_ufs_cmd_trace_str(__entry->str_t), dev_name(__entry->hba->dev),
 		__entry->cmd, __entry->arg1, __entry->arg2, __entry->arg3
 	)
 );

 TRACE_EVENT(ufshcd_upiu,
-	TP_PROTO(const char *dev_name, enum ufs_trace_str_t str_t, void *hdr,
+	TP_PROTO(struct ufs_hba *hba, enum ufs_trace_str_t str_t, void *hdr,
 		 void *tsf, enum ufs_trace_tsf_t tsf_t),

-	TP_ARGS(dev_name, str_t, hdr, tsf, tsf_t),
+	TP_ARGS(hba, str_t, hdr, tsf, tsf_t),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(enum ufs_trace_str_t, str_t)
 		__array(unsigned char, hdr, 12)
 		__array(unsigned char, tsf, 16)
@@ -357,7 +360,7 @@ TRACE_EVENT(ufshcd_upiu,
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->str_t = str_t;
 		memcpy(__entry->hdr, hdr, sizeof(__entry->hdr));
 		memcpy(__entry->tsf, tsf, sizeof(__entry->tsf));
@@ -366,7 +369,7 @@ TRACE_EVENT(ufshcd_upiu,

 	TP_printk(
 		"%s: %s: HDR:%s, %s:%s",
-		show_ufs_cmd_trace_str(__entry->str_t), __get_str(dev_name),
+		show_ufs_cmd_trace_str(__entry->str_t), dev_name(__entry->hba->dev),
 		__print_hex(__entry->hdr, sizeof(__entry->hdr)),
 		show_ufs_cmd_trace_tsf(__entry->tsf_t),
 		__print_hex(__entry->tsf, sizeof(__entry->tsf))
@@ -375,22 +378,22 @@ TRACE_EVENT(ufshcd_upiu,

 TRACE_EVENT(ufshcd_exception_event,

-	TP_PROTO(const char *dev_name, u16 status),
+	TP_PROTO(struct ufs_hba *hba, u16 status),

-	TP_ARGS(dev_name, status),
+	TP_ARGS(hba, status),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(u16, status)
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->status = status;
 	),

 	TP_printk("%s: status 0x%x",
-		__get_str(dev_name), __entry->status
+		dev_name(__entry->hba->dev), __entry->status
 	)
 );
