Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.13-rc8).

Conflicts:

drivers/net/ethernet/realtek/r8169_main.c
  1f691a1fc4be ("r8169: remove redundant hwmon support")
  152d00a91396 ("r8169: simplify setting hwmon attribute visibility")
https://lore.kernel.org/20250115122152.760b4e8d@canb.auug.org.au

Adjacent changes:

drivers/net/ethernet/broadcom/bnxt/bnxt.c
  152f4da05aee ("bnxt_en: add support for rx-copybreak ethtool command")
  f0aa6a37a3db ("eth: bnxt: always recalculate features after XDP clearing, fix null-deref")

drivers/net/ethernet/intel/ice/ice_type.h
  50327223a8bb ("ice: add lock to protect low latency interface")
  dc26548d729e ("ice: Fix quad registers read on E825")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>


@ -121,6 +121,8 @@ Ben Widawsky <bwidawsk@kernel.org> <benjamin.widawsky@intel.com>
Benjamin Poirier <benjamin.poirier@gmail.com> <bpoirier@suse.de>
Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@gmail.com>
Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@redhat.com>
Bingwu Zhang <xtex@aosc.io> <xtexchooser@duck.com>
Bingwu Zhang <xtex@aosc.io> <xtex@xtexx.eu.org>
Bjorn Andersson <andersson@kernel.org> <bjorn@kryo.se>
Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@linaro.org>
Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@sonymobile.com>


@ -269,27 +269,7 @@ Namely, when invoked to select an idle state for a CPU (i.e. an idle state that
the CPU will ask the processor hardware to enter), it attempts to predict the
idle duration and uses the predicted value for idle state selection.
It first obtains the time until the closest timer event with the assumption
that the scheduler tick will be stopped. That time, referred to as the *sleep
length* in what follows, is the upper bound on the time before the next CPU
wakeup. It is used to determine the sleep length range, which in turn is needed
to get the sleep length correction factor.
The ``menu`` governor maintains two arrays of sleep length correction factors.
One of them is used when tasks previously running on the given CPU are waiting
for some I/O operations to complete and the other one is used when that is not
the case. Each array contains several correction factor values that correspond
to different sleep length ranges organized so that each range represented in the
array is approximately 10 times wider than the previous one.
The correction factor for the given sleep length range (determined before
selecting the idle state for the CPU) is updated after the CPU has been woken
up and the closer the sleep length is to the observed idle duration, the closer
to 1 the correction factor becomes (it must fall between 0 and 1 inclusive).
The sleep length is multiplied by the correction factor for the range that it
falls into to obtain the first approximation of the predicted idle duration.
Next, the governor uses a simple pattern recognition algorithm to refine its
It first uses a simple pattern recognition algorithm to obtain a preliminary
idle duration prediction. Namely, it saves the last 8 observed idle duration
values and, when predicting the idle duration next time, it computes the average
and variance of them. If the variance is small (smaller than 400 square
@ -301,29 +281,39 @@ Again, if the variance of them is small (in the above sense), the average is
taken as the "typical interval" value and so on, until either the "typical
interval" is determined or too many data points are disregarded, in which case
the "typical interval" is assumed to equal "infinity" (the maximum unsigned
integer value). The "typical interval" computed this way is compared with the
sleep length multiplied by the correction factor and the minimum of the two is
taken as the predicted idle duration.
integer value).
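To make the pattern-recognition step concrete, here is a minimal C sketch of it. This is illustrative only, not the kernel's implementation (the real logic is ``get_typical_interval()`` in ``drivers/cpuidle/governors/menu.c``); the microsecond units, the reading of the 400-square-milliseconds threshold, the relative-variance test, and the give-up point are all assumptions::

   #include <stdint.h>

   #define NSAMPLES          8
   #define NO_TYPICAL        UINT64_MAX  /* "infinity": no typical interval */
   /* 400 square milliseconds expressed in us^2; units are an assumption. */
   #define VAR_THRESHOLD_US2 (400ULL * 1000 * 1000)

   static uint64_t typical_interval_us(const uint64_t samples[NSAMPLES])
   {
           uint64_t live[NSAMPLES];
           int n = NSAMPLES;

           for (int i = 0; i < NSAMPLES; i++)
                   live[i] = samples[i];

           /* Assumed give-up point: half of the data points disregarded. */
           while (n > NSAMPLES / 2) {
                   uint64_t sum = 0, var = 0, avg, max = 0;
                   int max_idx = 0;

                   for (int i = 0; i < n; i++)
                           sum += live[i];
                   avg = sum / n;

                   for (int i = 0; i < n; i++) {
                           uint64_t d = live[i] > avg ? live[i] - avg : avg - live[i];
                           var += d * d;
                           if (live[i] >= max) {
                                   max = live[i];
                                   max_idx = i;
                           }
                   }
                   var /= n;

                   /* Small variance, absolute or (assumed) relative to the average. */
                   if (var < VAR_THRESHOLD_US2 || avg * avg > 36 * var)
                           return avg;

                   /* Otherwise discard the longest value and repeat. */
                   live[max_idx] = live[--n];
           }

           return NO_TYPICAL;
   }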
Then, the governor computes an extra latency limit to help "interactive"
workloads. It uses the observation that if the exit latency of the selected
idle state is comparable with the predicted idle duration, the total time spent
in that state probably will be very short and the amount of energy to save by
entering it will be relatively small, so likely it is better to avoid the
overhead related to entering that state and exiting it. Thus selecting a
shallower state is likely to be a better option then. The first approximation
of the extra latency limit is the predicted idle duration itself which
additionally is divided by a value depending on the number of tasks that
previously ran on the given CPU and now they are waiting for I/O operations to
complete. The result of that division is compared with the latency limit coming
from the power management quality of service, or `PM QoS <cpu-pm-qos_>`_,
framework and the minimum of the two is taken as the limit for the idle states'
exit latency.
If the "typical interval" computed this way is long enough, the governor obtains
the time until the closest timer event with the assumption that the scheduler
tick will be stopped. That time, referred to as the *sleep length* in what follows,
is the upper bound on the time before the next CPU wakeup. It is used to determine
the sleep length range, which in turn is needed to get the sleep length correction
factor.
The ``menu`` governor maintains an array containing several correction factor
values that correspond to different sleep length ranges organized so that each
range represented in the array is approximately 10 times wider than the previous
one.
The correction factor for the given sleep length range (determined before
selecting the idle state for the CPU) is updated after the CPU has been woken
up and the closer the sleep length is to the observed idle duration, the closer
to 1 the correction factor becomes (it must fall between 0 and 1 inclusive).
The sleep length is multiplied by the correction factor for the range that it
falls into to obtain an approximation of the predicted idle duration that is
compared to the "typical interval" determined previously and the minimum of
the two is taken as the idle duration prediction.
If the "typical interval" value is small, which means that the CPU is likely
to be woken up soon enough, the sleep length computation is skipped as it may
be costly and the idle duration is simply predicted to equal the "typical
interval" value.
Now, the governor is ready to walk the list of idle states and choose one of
them. For this purpose, it compares the target residency of each state with
the predicted idle duration and the exit latency of it with the computed latency
limit. It selects the state with the target residency closest to the predicted
the predicted idle duration and the exit latency of it with the latency
limit coming from the power management quality of service, or `PM QoS <cpu-pm-qos_>`_,
framework. It selects the state with the target residency closest to the predicted
idle duration, but still below it, and exit latency that does not exceed the
limit.
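As a rough illustration of that walk, assuming the list is ordered from the shallowest state to the deepest one (as in cpuidle driver tables) and ignoring details the real governor handles, such as disabled states, polling, and the scheduler tick::

   #include <stdint.h>

   struct idle_state {
           uint64_t target_residency_us;  /* minimum idle time to pay off */
           uint64_t exit_latency_us;      /* worst-case wakeup latency */
   };

   static int select_idle_state(const struct idle_state *states, int count,
                                uint64_t predicted_us, uint64_t latency_limit_us)
   {
           int best = 0;

           for (int i = 0; i < count; i++) {
                   if (states[i].target_residency_us > predicted_us)
                           break;  /* residency no longer below the prediction */
                   if (states[i].exit_latency_us > latency_limit_us)
                           break;  /* would exceed the PM QoS latency limit */
                   best = i;       /* deepest state satisfying both constraints */
           }

           return best;
   }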


@ -42,6 +42,9 @@ properties:
interrupts:
maxItems: 1
'#sound-dai-cells':
const: 0
ports:
$ref: /schemas/graph.yaml#/properties/ports
properties:
@ -85,7 +88,21 @@ required:
- ports
- max-linkrate-mhz
additionalProperties: false
allOf:
- $ref: /schemas/sound/dai-common.yaml#
- if:
not:
properties:
compatible:
contains:
enum:
- mediatek,mt8188-dp-tx
- mediatek,mt8195-dp-tx
then:
properties:
'#sound-dai-cells': false
unevaluatedProperties: false
examples:
- |


@ -65,6 +65,7 @@ properties:
- st,lsm9ds0-gyro
- description: STMicroelectronics Magnetometers
enum:
- st,iis2mdc
- st,lis2mdl
- st,lis3mdl-magn
- st,lsm303agr-magn


@ -0,0 +1,292 @@
.. SPDX-License-Identifier: GPL-2.0-only
=====================================================================
Audio drivers for Cirrus Logic CS35L54/56/57 Boosted Smart Amplifiers
=====================================================================
:Copyright: 2025 Cirrus Logic, Inc. and
Cirrus Logic International Semiconductor Ltd.
Contact: patches@opensource.cirrus.com
Summary
=======
The high-level summary of this document is:
**If you have a laptop that uses CS35L54/56/57 amplifiers but audio is not
working, DO NOT ATTEMPT TO USE FIRMWARE AND SETTINGS FROM ANOTHER LAPTOP,
EVEN IF THAT LAPTOP SEEMS SIMILAR.**
The CS35L54/56/57 amplifiers must be correctly configured for the power
supply voltage, speaker impedance, maximum speaker voltage/current, and
other external hardware connections.
The amplifiers feature advanced boost technology that increases the voltage
used to drive the speakers, while proprietary speaker protection algorithms
allow these boosted amplifiers to push the limits of the speakers without
causing damage. These **must** be configured correctly.
Supported Cirrus Logic amplifiers
---------------------------------
The cs35l56 drivers support:
* CS35L54
* CS35L56
* CS35L57
There are two drivers in the kernel:
*For systems using SoundWire*: sound/soc/codecs/cs35l56.c and associated files
*For systems using HDA*: sound/pci/hda/cs35l56_hda.c
Firmware
========
The amplifier is controlled and managed by firmware running on the internal
DSP. Firmware files are essential to enable the full capabilities of the
amplifier.
Firmware is distributed in the linux-firmware repository:
https://gitlab.com/kernel-firmware/linux-firmware.git
On most SoundWire systems the amplifier has a default minimum capability to
produce audio. However, this will be:
* at low volume, to protect the speakers, since the speaker specifications
and power supply voltages are unknown.
* a mono mix of left and right channels.
On some SoundWire systems that have both CS42L43 and CS35L56/57 the CS35L56/57
receive their audio from the CS42L43 instead of directly from the host
SoundWire interface. These systems can be identified by the CS42L43 showing
in dmesg as a SoundWire device, but the CS35L56/57 as SPI. On these systems
the firmware is *mandatory* to enable receiving the audio from the CS42L43.
On HDA systems the firmware is *mandatory* to enable HDA bridge mode. There
will not be any audio from the amplifiers without firmware.
Cirrus Logic firmware files
---------------------------
Each amplifier requires two firmware files. One file has a .wmfw suffix, the
other has a .bin suffix.
The firmware is customized by the OEM to match the hardware of each laptop,
and the firmware is specific to that laptop. Because of this, there are many
firmware files in linux-firmware for these amplifiers. Firmware files are
**not interchangeable between laptops**.
Cirrus Logic submits files for known laptops to the upstream linux-firmware
repository. Providing Cirrus Logic is aware of a particular laptop and has
permission from the manufacturer to publish the firmware, it will be pushed
to linux-firmware. You may need to upgrade to a newer release of
linux-firmware to obtain the firmware for your laptop.
**Important:** the Makefile for linux-firmware creates symlinks that are listed
in the WHENCE file. These symlinks are required for the CS35L56 driver to be
able to load the firmware.
How do I know which firmware file I should have?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All firmware file names are qualified with a unique "system ID". On normal
x86 PCs with PCI audio this is the Vendor Subsystem ID (SSID) of the host
PCI audio interface.
The SSID can be viewed using the lspci tool::
lspci -v -nn | grep -A2 -i audio
0000:00:1f.3 Audio device [0403]: Intel Corporation Meteor Lake-P HD Audio Controller [8086:7e28]
Subsystem: Dell Meteor Lake-P HD Audio Controller [1028:0c63]
In this example the SSID is 10280c63.
The format of the firmware file names is:
cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN
Where:
* cs35lxx-b0 is the amplifier model and silicon revision. This information
is logged by the driver during initialization.
* SSID is the 8-digit hexadecimal SSID value.
* ampN is the amplifier number (for example amp1). This is the same as
the prefix on the ALSA control names except that it is always lower-case
in the file name.
* spkidX is an optional part, used for laptops that have firmware
configurations for different makes and models of internal speakers.
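Putting the format together with the ``lspci`` example above, a CS35L56 rev B0 amplifier in that Dell laptop would, purely as an illustration, use files named::

  cirrus/cs35l56-b0-dsp1-misc-10280c63.wmfw
  cirrus/cs35l56-b0-dsp1-misc-10280c63-amp1.bin

(plus further -ampN .bin files if the machine has more than one amplifier).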
Sound Open Firmware and ALSA topology files
-------------------------------------------
All SoundWire systems will require a Sound Open Firmware (SOF) for the
host CPU audio DSP, together with an ALSA topology file (.tplg).
The SOF firmware will usually be provided by the manufacturer of the host
CPU (i.e. Intel or AMD). The .tplg file is normally part of the SOF firmware
release.
SOF binary builds are available from: https://github.com/thesofproject/sof-bin/releases
The main SOF source is here: https://github.com/thesofproject
ALSA-ucm configurations
-----------------------
Typically an appropriate ALSA-ucm configuration file is needed for
use-case managers and audio servers such as PipeWire.
Configuration files are available from the alsa-ucm-conf repository:
https://git.alsa-project.org/?p=alsa-ucm-conf.git
Kernel log messages
===================
SoundWire
---------
A successful initialization will look like this (this will be repeated for
each amplifier)::
[ 7.568374] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_P not found, using dummy regulator
[ 7.605208] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_IO not found, using dummy regulator
[ 7.605313] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_A not found, using dummy regulator
[ 7.939279] cs35l56 sdw:0:0:01fa:3556:01:0: Cirrus Logic CS35L56 Rev B0 OTP3 fw:3.4.4 (patched=0)
[ 7.947844] cs35l56 sdw:0:0:01fa:3556:01:0: Slave 4 state check1: UNATTACHED, status was 1
[ 8.740280] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_B not found, using dummy regulator
[ 8.740552] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_AMP not found, using dummy regulator
[ 9.242164] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: format 3 timestamp 0x66b2b872
[ 9.242173] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: Tue 05 Dec 2023 21:37:21 GMT Standard Time
[ 9.991709] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: Firmware: 1a00d6 vendor: 0x2 v3.11.23, 41 algorithms
[10.039098] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx-amp1.bin: v3.11.23
[10.879235] cs35l56 sdw:0:0:01fa:3556:01:0: Slave 4 state check1: UNATTACHED, status was 1
[11.401536] cs35l56 sdw:0:0:01fa:3556:01:0: Calibration applied
HDA
---
A successful initialization will look like this (this will be repeated for
each amplifier)::
[ 6.306475] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: Cirrus Logic CS35L56 Rev B0 OTP3 fw:3.4.4 (patched=0)
[ 6.613892] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP system name: 'xxxxxxxx', amp name: 'AMP1'
[ 8.266660] snd_hda_codec_cs8409 ehdaudio0D0: bound i2c-CSC3556:00-cs35l56-hda.0 (ops cs35l56_hda_comp_ops [snd_hda_scodec_cs35l56])
[ 8.287525] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: format 3 timestamp 0x66b2b872
[ 8.287528] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: Tue 05 Dec 2023 21:37:21 GMT Standard Time
[ 9.984335] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: Firmware: 1a00d6 vendor: 0x2 v3.11.23, 41 algorithms
[10.085797] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx-amp1.bin: v3.11.23
[10.655237] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: Calibration applied
Important messages
~~~~~~~~~~~~~~~~~~
Cirrus Logic CS35L56 Rev B0 OTP3 fw:3.4.4 (patched=0)
Shows that the driver has been able to read device ID registers from the
amplifier.
* The actual amplifier type and silicon revision (CS35L56 B0 in this
example) is shown, as read from the amplifier identification registers.
* (patched=0) is normal, and indicates that the amplifier has been hard
reset and is running default ROM firmware.
* (patched=1) means that something has previously downloaded firmware
to the amplifier and the driver does not have control of the RESET
signal to be able to replace this preloaded firmware. This is normal
for systems where the BIOS downloads firmware to the amplifiers
before OS boot.
This status can also be seen if the cs35l56 kernel module is unloaded
and reloaded on a system where the driver does not have control of
RESET. SoundWire systems typically do not give the driver control of
RESET and only a BIOS (re)boot can reset the amplifiers.
DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw
Shows that a .wmfw firmware file was found and downloaded.
DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx-amp1.bin
Shows that a .bin firmware file was found and downloaded.
Calibration applied
Factory calibration data in EFI was written to the amplifier.
Error messages
==============
This section explains some of the error messages that the driver can log.
Algorithm coefficient version %d.%d.%d but expected %d.%d.%d
The version of the .bin file content does not match the loaded firmware.
Caused by mismatched .wmfw and .bin file, or .bin file was found but
.wmfw was not.
No %s for algorithm %x
The version of the .bin file content does not match the loaded firmware.
Caused by mismatched .wmfw and .bin file, or .bin file was found but
.wmfw was not.
.bin file required but not found
HDA driver did not find a .bin file that matches this hardware.
Calibration disabled due to missing firmware controls
Driver was not able to write EFI calibration data to firmware registers.
This typically means that either:
* The driver did not find a suitable wmfw for this hardware, or
* The amplifier has already been patched with firmware by something
previously, and the driver does not have control of a hard RESET line
to be able to reset the amplifier and download the firmware files it
found. This situation is indicated by the device identification
string in the kernel log showing "(patched=1)".
Failed to write calibration
Same meaning and cause as "Calibration disabled due to missing firmware
controls"
Failed to read calibration data from EFI
Factory calibration data in EFI is missing, empty or corrupt.
This is most likely to be caused by accidentally deleting the file from
the EFI filesystem.
No calibration for silicon ID
The factory calibration data in EFI does not match this hardware.
The most likely cause is that an amplifier has been replaced on the
motherboard without going through the manufacturer calibration process to
generate calibration data for the new amplifier.
Did not find any buses for CSCxxxx
Only on HDA systems. The HDA codec driver found an ACPI entry for
Cirrus Logic companion amps, but could not enumerate the ACPI entries for
the I2C/SPI buses. The most likely cause of this is that:
* The relevant bus driver (I2C or SPI) is not part of the kernel.
* The HDA codec driver was built-in to the kernel but the I2C/SPI
bus driver is a module and so the HDA codec driver cannot call the
bus driver functions.
init_completion timed out
The SoundWire bus controller (host end) did not enumerate the amplifier.
In other words, the ACPI says there is an amplifier but for some reason
it was not detected on the bus.
No AF01 node
Indicates an error in ACPI. A SoundWire system should have a Device()
node named "AF01" but it was not found.
Failed to get spk-id-gpios
ACPI says that the driver should request a GPIO but the driver was not
able to get that GPIO. The most likely cause is that the kernel does not
include the correct GPIO or PINCTRL driver for this system.
Failed to read spk-id
ACPI says that the driver should request a GPIO but the driver was not
able to read that GPIO.
Unexpected spk-id element count
AF01 contains more speaker ID GPIO entries than the driver supports.
Overtemp error
Amplifier overheat protection was triggered and the amplifier shut down
to protect itself.
Amp short error
Amplifier detected a short-circuit on the speaker output pins and shut
down for protection. This would normally indicate a damaged speaker.
Hibernate wake failed
The driver tried to wake the amplifier from its power-saving state but
did not see the expected responses from the amplifier. This can be caused
by using firmware that does not match the hardware.


@ -0,0 +1,9 @@
.. SPDX-License-Identifier: GPL-2.0
Codec-Specific Information
==========================
.. toctree::
:maxdepth: 2
cs35l56


@ -13,6 +13,7 @@ Sound Subsystem Documentation
alsa-configuration
hd-audio/index
cards/index
codecs/index
utimers
.. only:: subproject and html


@ -1914,6 +1914,9 @@ No flags are specified so far, the corresponding field must be set to zero.
#define KVM_IRQ_ROUTING_HV_SINT 4
#define KVM_IRQ_ROUTING_XEN_EVTCHN 5
On s390, adding a KVM_IRQ_ROUTING_S390_ADAPTER is rejected on ucontrol VMs with
error -EINVAL.
flags:
- KVM_MSI_VALID_DEVID: used along with KVM_IRQ_ROUTING_MSI routing entry


@ -58,11 +58,15 @@ Groups:
Enables async page faults for the guest. So in case of a major page fault
the host is allowed to handle this asynchronously and continue the guest.
-EINVAL is returned when called on the FLIC of a ucontrol VM.
KVM_DEV_FLIC_APF_DISABLE_WAIT
Disables async page faults for the guest and waits until already pending
async page faults are done. This is necessary to trigger a completion interrupt
for every init interrupt before migrating the interrupt list.
-EINVAL is returned when called on the FLIC of a ucontrol VM.
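For illustration, a hedged userspace sketch of enabling async page faults on s390, assuming ``flic_fd`` was obtained via ``KVM_CREATE_DEVICE`` with type ``KVM_DEV_TYPE_FLIC``; on a ucontrol VM the ioctl now fails and errno is set to EINVAL::

   #include <errno.h>
   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   /* flic_fd: fd returned by KVM_CREATE_DEVICE for KVM_DEV_TYPE_FLIC */
   static int enable_async_page_faults(int flic_fd)
   {
           struct kvm_device_attr attr = {
                   .group = KVM_DEV_FLIC_APF_ENABLE,
           };

           /* Fails with -EINVAL when called on the FLIC of a ucontrol VM. */
           if (ioctl(flic_fd, KVM_SET_DEVICE_ATTR, &attr) < 0)
                   return -errno;

           return 0;
   }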
KVM_DEV_FLIC_ADAPTER_REGISTER
Register an I/O adapter interrupt source. Takes a kvm_s390_io_adapter
describing the adapter to register::


@ -4135,7 +4135,6 @@ S: Odd Fixes
F: drivers/net/ethernet/netronome/nfp/bpf/
BPF JIT for POWERPC (32-BIT AND 64-BIT)
M: Michael Ellerman <mpe@ellerman.id.au>
M: Hari Bathini <hbathini@linux.ibm.com>
M: Christophe Leroy <christophe.leroy@csgroup.eu>
R: Naveen N Rao <naveen@kernel.org>
@ -5475,6 +5474,7 @@ L: linux-sound@vger.kernel.org
L: patches@opensource.cirrus.com
S: Maintained
F: Documentation/devicetree/bindings/sound/cirrus,cs*
F: Documentation/sound/codecs/cs*
F: drivers/mfd/cs42l43*
F: drivers/pinctrl/cirrus/pinctrl-cs42l43*
F: drivers/spi/spi-cs42l43*
@ -12637,7 +12637,7 @@ F: arch/mips/include/uapi/asm/kvm*
F: arch/mips/kvm/
KERNEL VIRTUAL MACHINE FOR POWERPC (KVM/powerpc)
M: Michael Ellerman <mpe@ellerman.id.au>
M: Madhavan Srinivasan <maddy@linux.ibm.com>
R: Nicholas Piggin <npiggin@gmail.com>
L: linuxppc-dev@lists.ozlabs.org
L: kvm@vger.kernel.org
@ -13216,11 +13216,11 @@ X: drivers/macintosh/adb-iop.c
X: drivers/macintosh/via-macii.c
LINUX FOR POWERPC (32-BIT AND 64-BIT)
M: Madhavan Srinivasan <maddy@linux.ibm.com>
M: Michael Ellerman <mpe@ellerman.id.au>
R: Nicholas Piggin <npiggin@gmail.com>
R: Christophe Leroy <christophe.leroy@csgroup.eu>
R: Naveen N Rao <naveen@kernel.org>
M: Madhavan Srinivasan <maddy@linux.ibm.com>
L: linuxppc-dev@lists.ozlabs.org
S: Supported
W: https://github.com/linuxppc/wiki/wiki
@ -22000,6 +22000,7 @@ W: https://github.com/thesofproject/linux/
F: sound/soc/sof/
SOUND - GENERIC SOUND CARD (Simple-Audio-Card, Audio-Graph-Card)
M: Mark Brown <broonie@kernel.org>
M: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
S: Supported
L: linux-sound@vger.kernel.org


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 13
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -87,7 +87,7 @@
reg = <0x402c0000 0x4000>;
interrupts = <110>;
clocks = <&clks IMXRT1050_CLK_IPG_PDOF>,
<&clks IMXRT1050_CLK_OSC>,
<&clks IMXRT1050_CLK_AHB_PODF>,
<&clks IMXRT1050_CLK_USDHC1>;
clock-names = "ipg", "ahb", "per";
bus-width = <4>;


@ -323,6 +323,7 @@ CONFIG_SND_SOC_IMX_SGTL5000=y
CONFIG_SND_SOC_FSL_ASOC_CARD=y
CONFIG_SND_SOC_AC97_CODEC=y
CONFIG_SND_SOC_CS42XX8_I2C=y
CONFIG_SND_SOC_SPDIF=y
CONFIG_SND_SOC_TLV320AIC3X_I2C=y
CONFIG_SND_SOC_WM8960=y
CONFIG_SND_SOC_WM8962=y


@ -165,7 +165,7 @@ audio_subsys: bus@59000000 {
};
esai0: esai@59010000 {
compatible = "fsl,imx8qm-esai";
compatible = "fsl,imx8qm-esai", "fsl,imx6ull-esai";
reg = <0x59010000 0x10000>;
interrupts = <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&esai0_lpcg IMX_LPCG_CLK_4>,


@ -134,7 +134,7 @@
};
esai1: esai@59810000 {
compatible = "fsl,imx8qm-esai";
compatible = "fsl,imx8qm-esai", "fsl,imx6ull-esai";
reg = <0x59810000 0x10000>;
interrupts = <GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&esai1_lpcg IMX_LPCG_CLK_0>,


@ -1673,7 +1673,7 @@
netcmix_blk_ctrl: syscon@4c810000 {
compatible = "nxp,imx95-netcmix-blk-ctrl", "syscon";
reg = <0x0 0x4c810000 0x0 0x10000>;
reg = <0x0 0x4c810000 0x0 0x8>;
#clock-cells = <1>;
clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;
assigned-clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;


@ -2440,6 +2440,7 @@
qcom,cmb-element-bits = <32>;
qcom,cmb-msrs-num = <32>;
status = "disabled";
out-ports {
port {
@ -6092,7 +6093,7 @@
<0x0 0x40000000 0x0 0xf20>,
<0x0 0x40000f20 0x0 0xa8>,
<0x0 0x40001000 0x0 0x4000>,
<0x0 0x40200000 0x0 0x100000>,
<0x0 0x40200000 0x0 0x1fe00000>,
<0x0 0x01c03000 0x0 0x1000>,
<0x0 0x40005000 0x0 0x2000>;
reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
@ -6250,7 +6251,7 @@
<0x0 0x60000000 0x0 0xf20>,
<0x0 0x60000f20 0x0 0xa8>,
<0x0 0x60001000 0x0 0x4000>,
<0x0 0x60200000 0x0 0x100000>,
<0x0 0x60200000 0x0 0x1fe00000>,
<0x0 0x01c13000 0x0 0x1000>,
<0x0 0x60005000 0x0 0x2000>;
reg-names = "parf", "dbi", "elbi", "atu", "addr_space",


@ -773,6 +773,10 @@
status = "okay";
};
&usb_1_ss0_dwc3 {
dr_mode = "host";
};
&usb_1_ss0_dwc3_hs {
remote-endpoint = <&pmic_glink_ss0_hs_in>;
};
@ -801,6 +805,10 @@
status = "okay";
};
&usb_1_ss1_dwc3 {
dr_mode = "host";
};
&usb_1_ss1_dwc3_hs {
remote-endpoint = <&pmic_glink_ss1_hs_in>;
};


@ -1197,6 +1197,10 @@
status = "okay";
};
&usb_1_ss0_dwc3 {
dr_mode = "host";
};
&usb_1_ss0_dwc3_hs {
remote-endpoint = <&pmic_glink_ss0_hs_in>;
};
@ -1225,6 +1229,10 @@
status = "okay";
};
&usb_1_ss1_dwc3 {
dr_mode = "host";
};
&usb_1_ss1_dwc3_hs {
remote-endpoint = <&pmic_glink_ss1_hs_in>;
};
@ -1253,6 +1261,10 @@
status = "okay";
};
&usb_1_ss2_dwc3 {
dr_mode = "host";
};
&usb_1_ss2_dwc3_hs {
remote-endpoint = <&pmic_glink_ss2_hs_in>;
};


@ -2924,7 +2924,7 @@
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x01000000 0x0 0x00000000 0x0 0x70200000 0x0 0x100000>,
<0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x1d00000>;
<0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x3d00000>;
bus-range = <0x00 0xff>;
dma-coherent;
@ -4066,8 +4066,6 @@
dma-coherent;
usb-role-switch;
ports {
#address-cells = <1>;
#size-cells = <0>;
@ -4321,8 +4319,6 @@
dma-coherent;
usb-role-switch;
ports {
#address-cells = <1>;
#size-cells = <0>;
@ -4421,8 +4417,6 @@
dma-coherent;
usb-role-switch;
ports {
#address-cells = <1>;
#size-cells = <0>;


@ -333,6 +333,7 @@
power-domain@RK3328_PD_HEVC {
reg = <RK3328_PD_HEVC>;
clocks = <&cru SCLK_VENC_CORE>;
#power-domain-cells = <0>;
};
power-domain@RK3328_PD_VIDEO {


@ -350,6 +350,7 @@
assigned-clocks = <&pmucru CLK_PCIEPHY0_REF>;
assigned-clock-rates = <100000000>;
resets = <&cru SRST_PIPEPHY0>;
reset-names = "phy";
rockchip,pipe-grf = <&pipegrf>;
rockchip,pipe-phy-grf = <&pipe_phy_grf0>;
#phy-cells = <1>;


@ -1681,6 +1681,7 @@
assigned-clocks = <&pmucru CLK_PCIEPHY1_REF>;
assigned-clock-rates = <100000000>;
resets = <&cru SRST_PIPEPHY1>;
reset-names = "phy";
rockchip,pipe-grf = <&pipegrf>;
rockchip,pipe-phy-grf = <&pipe_phy_grf1>;
#phy-cells = <1>;
@ -1697,6 +1698,7 @@
assigned-clocks = <&pmucru CLK_PCIEPHY2_REF>;
assigned-clock-rates = <100000000>;
resets = <&cru SRST_PIPEPHY2>;
reset-names = "phy";
rockchip,pipe-grf = <&pipegrf>;
rockchip,pipe-phy-grf = <&pipe_phy_grf2>;
#phy-cells = <1>;


@ -72,7 +72,7 @@
rfkill {
compatible = "rfkill-gpio";
label = "rfkill-pcie-wlan";
label = "rfkill-m2-wlan";
radio-type = "wlan";
shutdown-gpios = <&gpio4 RK_PA2 GPIO_ACTIVE_HIGH>;
};


@ -434,6 +434,7 @@
&sdmmc {
bus-width = <4>;
cap-sd-highspeed;
cd-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_LOW>;
disable-wp;
max-frequency = <150000000>;
no-mmc;


@ -783,9 +783,6 @@ static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx)
if (tx->initiator.id == PKVM_ID_HOST && hyp_page_count((void *)addr))
return -EBUSY;
if (__hyp_ack_skip_pgtable_check(tx))
return 0;
return __hyp_check_page_state_range(addr, size,
PKVM_PAGE_SHARED_BORROWED);
}


@ -24,6 +24,7 @@ static DEFINE_MUTEX(arm_pmus_lock);
static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc);
static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc);
static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc);
static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc)
{
@ -327,48 +328,25 @@ u64 kvm_pmu_implemented_counter_mask(struct kvm_vcpu *vcpu)
return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX);
}
/**
* kvm_pmu_enable_counter_mask - enable selected PMU counters
* @vcpu: The vcpu pointer
* @val: the value guest writes to PMCNTENSET register
*
* Call perf_event_enable to start counting the perf event
*/
void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc)
{
int i;
if (!kvm_vcpu_has_pmu(vcpu))
if (!pmc->perf_event) {
kvm_pmu_create_perf_event(pmc);
return;
if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
return;
for (i = 0; i < KVM_ARMV8_PMU_MAX_COUNTERS; i++) {
struct kvm_pmc *pmc;
if (!(val & BIT(i)))
continue;
pmc = kvm_vcpu_idx_to_pmc(vcpu, i);
if (!pmc->perf_event) {
kvm_pmu_create_perf_event(pmc);
} else {
perf_event_enable(pmc->perf_event);
if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
kvm_debug("fail to enable perf event\n");
}
}
perf_event_enable(pmc->perf_event);
if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
kvm_debug("fail to enable perf event\n");
}
/**
* kvm_pmu_disable_counter_mask - disable selected PMU counters
* @vcpu: The vcpu pointer
* @val: the value guest writes to PMCNTENCLR register
*
* Call perf_event_disable to stop counting the perf event
*/
void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
static void kvm_pmc_disable_perf_event(struct kvm_pmc *pmc)
{
if (pmc->perf_event)
perf_event_disable(pmc->perf_event);
}
void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val)
{
int i;
@ -376,16 +354,18 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
return;
for (i = 0; i < KVM_ARMV8_PMU_MAX_COUNTERS; i++) {
struct kvm_pmc *pmc;
struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, i);
if (!(val & BIT(i)))
continue;
pmc = kvm_vcpu_idx_to_pmc(vcpu, i);
if (pmc->perf_event)
perf_event_disable(pmc->perf_event);
if (kvm_pmu_counter_is_enabled(pmc))
kvm_pmc_enable_perf_event(pmc);
else
kvm_pmc_disable_perf_event(pmc);
}
kvm_vcpu_pmu_restore_guest(vcpu);
}
/*
@ -626,27 +606,28 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
if (!kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5))
val &= ~ARMV8_PMU_PMCR_LP;
/* Request a reload of the PMU to enable/disable affected counters */
if ((__vcpu_sys_reg(vcpu, PMCR_EL0) ^ val) & ARMV8_PMU_PMCR_E)
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
/* The reset bits don't indicate any state, and shouldn't be saved. */
__vcpu_sys_reg(vcpu, PMCR_EL0) = val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P);
if (val & ARMV8_PMU_PMCR_E) {
kvm_pmu_enable_counter_mask(vcpu,
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
} else {
kvm_pmu_disable_counter_mask(vcpu,
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
}
if (val & ARMV8_PMU_PMCR_C)
kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
if (val & ARMV8_PMU_PMCR_P) {
unsigned long mask = kvm_pmu_accessible_counter_mask(vcpu);
mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
/*
* Unlike other PMU sysregs, the controls in PMCR_EL0 always apply
* to the 'guest' range of counters and never the 'hyp' range.
*/
unsigned long mask = kvm_pmu_implemented_counter_mask(vcpu) &
~kvm_pmu_hyp_counter_mask(vcpu) &
~BIT(ARMV8_PMU_CYCLE_IDX);
for_each_set_bit(i, &mask, 32)
kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
}
kvm_vcpu_pmu_restore_guest(vcpu);
}
static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
@ -910,11 +891,11 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
{
u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask;
__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask;
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask;
kvm_pmu_reprogram_counter_mask(vcpu, mask);
}
int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)


@ -1208,16 +1208,14 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
mask = kvm_pmu_accessible_counter_mask(vcpu);
if (p->is_write) {
val = p->regval & mask;
if (r->Op2 & 0x1) {
if (r->Op2 & 0x1)
/* accessing PMCNTENSET_EL0 */
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
kvm_pmu_enable_counter_mask(vcpu, val);
kvm_vcpu_pmu_restore_guest(vcpu);
} else {
else
/* accessing PMCNTENCLR_EL0 */
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
kvm_pmu_disable_counter_mask(vcpu, val);
}
kvm_pmu_reprogram_counter_mask(vcpu, val);
} else {
p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
}
@ -2450,6 +2448,26 @@ static unsigned int s1pie_el2_visibility(const struct kvm_vcpu *vcpu,
return __el2_visibility(vcpu, rd, s1pie_visibility);
}
static bool access_mdcr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
u64 old = __vcpu_sys_reg(vcpu, MDCR_EL2);
if (!access_rw(vcpu, p, r))
return false;
/*
* Request a reload of the PMU to enable/disable the counters affected
* by HPME.
*/
if ((old ^ __vcpu_sys_reg(vcpu, MDCR_EL2)) & MDCR_EL2_HPME)
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
return true;
}
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@ -2983,7 +3001,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(SCTLR_EL2, access_rw, reset_val, SCTLR_EL2_RES1),
EL2_REG(ACTLR_EL2, access_rw, reset_val, 0),
EL2_REG_VNCR(HCR_EL2, reset_hcr, 0),
EL2_REG(MDCR_EL2, access_rw, reset_val, 0),
EL2_REG(MDCR_EL2, access_mdcr, reset_val, 0),
EL2_REG(CPTR_EL2, access_rw, reset_val, CPTR_NVHE_EL2_RES1),
EL2_REG_VNCR(HSTR_EL2, reset_val, 0),
EL2_REG_VNCR(HFGRTR_EL2, reset_val, 0),


@ -34,6 +34,8 @@ enum vcpu_ftr {
#define E500_TLB_BITMAP (1 << 30)
/* TLB1 entry is mapped by host TLB0 */
#define E500_TLB_TLB0 (1 << 29)
/* entry is writable on the host */
#define E500_TLB_WRITABLE (1 << 28)
/* bits [6-5] MAS2_X1 and MAS2_X0 and [4-0] bits for WIMGE */
#define E500_TLB_MAS2_ATTR (0x7f)


@ -45,11 +45,14 @@ static inline unsigned int tlb1_max_shadow_size(void)
return host_tlb_params[1].entries - tlbcam_index - 1;
}
static inline u32 e500_shadow_mas3_attrib(u32 mas3, int usermode)
static inline u32 e500_shadow_mas3_attrib(u32 mas3, bool writable, int usermode)
{
/* Mask off reserved bits. */
mas3 &= MAS3_ATTRIB_MASK;
if (!writable)
mas3 &= ~(MAS3_UW|MAS3_SW);
#ifndef CONFIG_KVM_BOOKE_HV
if (!usermode) {
/* Guest is in supervisor mode,
@ -242,17 +245,18 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);
}
static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
struct kvm_book3e_206_tlb_entry *gtlbe,
kvm_pfn_t pfn, unsigned int wimg)
kvm_pfn_t pfn, unsigned int wimg,
bool writable)
{
ref->pfn = pfn;
ref->flags = E500_TLB_VALID;
if (writable)
ref->flags |= E500_TLB_WRITABLE;
/* Use guest supplied MAS2_G and MAS2_E */
ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
return tlbe_is_writable(gtlbe);
}
static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
@ -305,6 +309,7 @@ static void kvmppc_e500_setup_stlbe(
{
kvm_pfn_t pfn = ref->pfn;
u32 pr = vcpu->arch.shared->msr & MSR_PR;
bool writable = !!(ref->flags & E500_TLB_WRITABLE);
BUG_ON(!(ref->flags & E500_TLB_VALID));
@ -312,7 +317,7 @@ static void kvmppc_e500_setup_stlbe(
stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_MAS2_ATTR);
stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
e500_shadow_mas3_attrib(gtlbe->mas7_3, writable, pr);
}
static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
@ -321,15 +326,14 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
struct tlbe_ref *ref)
{
struct kvm_memory_slot *slot;
unsigned long pfn = 0; /* silence GCC warning */
unsigned int psize;
unsigned long pfn;
struct page *page = NULL;
unsigned long hva;
int pfnmap = 0;
int tsize = BOOK3E_PAGESZ_4K;
int ret = 0;
unsigned long mmu_seq;
struct kvm *kvm = vcpu_e500->vcpu.kvm;
unsigned long tsize_pages = 0;
pte_t *ptep;
unsigned int wimg = 0;
pgd_t *pgdir;
@ -351,110 +355,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
slot = gfn_to_memslot(vcpu_e500->vcpu.kvm, gfn);
hva = gfn_to_hva_memslot(slot, gfn);
if (tlbsel == 1) {
struct vm_area_struct *vma;
mmap_read_lock(kvm->mm);
vma = find_vma(kvm->mm, hva);
if (vma && hva >= vma->vm_start &&
(vma->vm_flags & VM_PFNMAP)) {
/*
* This VMA is a physically contiguous region (e.g.
* /dev/mem) that bypasses normal Linux page
* management. Find the overlap between the
* vma and the memslot.
*/
unsigned long start, end;
unsigned long slot_start, slot_end;
pfnmap = 1;
start = vma->vm_pgoff;
end = start +
vma_pages(vma);
pfn = start + ((hva - vma->vm_start) >> PAGE_SHIFT);
slot_start = pfn - (gfn - slot->base_gfn);
slot_end = slot_start + slot->npages;
if (start < slot_start)
start = slot_start;
if (end > slot_end)
end = slot_end;
tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >>
MAS1_TSIZE_SHIFT;
/*
* e500 doesn't implement the lowest tsize bit,
* or 1K pages.
*/
tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1);
/*
* Now find the largest tsize (up to what the guest
* requested) that will cover gfn, stay within the
* range, and for which gfn and pfn are mutually
* aligned.
*/
for (; tsize > BOOK3E_PAGESZ_4K; tsize -= 2) {
unsigned long gfn_start, gfn_end;
tsize_pages = 1UL << (tsize - 2);
gfn_start = gfn & ~(tsize_pages - 1);
gfn_end = gfn_start + tsize_pages;
if (gfn_start + pfn - gfn < start)
continue;
if (gfn_end + pfn - gfn > end)
continue;
if ((gfn & (tsize_pages - 1)) !=
(pfn & (tsize_pages - 1)))
continue;
gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
pfn &= ~(tsize_pages - 1);
break;
}
} else if (vma && hva >= vma->vm_start &&
is_vm_hugetlb_page(vma)) {
unsigned long psize = vma_kernel_pagesize(vma);
tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >>
MAS1_TSIZE_SHIFT;
/*
* Take the largest page size that satisfies both host
* and guest mapping
*/
tsize = min(__ilog2(psize) - 10, tsize);
/*
* e500 doesn't implement the lowest tsize bit,
* or 1K pages.
*/
tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1);
}
mmap_read_unlock(kvm->mm);
}
if (likely(!pfnmap)) {
tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT);
pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);
if (is_error_noslot_pfn(pfn)) {
if (printk_ratelimit())
pr_err("%s: real page not found for gfn %lx\n",
__func__, (long)gfn);
return -EINVAL;
}
/* Align guest and physical address to page map boundaries */
pfn &= ~(tsize_pages - 1);
gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, &writable, &page);
if (is_error_noslot_pfn(pfn)) {
if (printk_ratelimit())
pr_err("%s: real page not found for gfn %lx\n",
__func__, (long)gfn);
return -EINVAL;
}
spin_lock(&kvm->mmu_lock);
@ -472,14 +378,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
* can't run hence pfn won't change.
*/
local_irq_save(flags);
ptep = find_linux_pte(pgdir, hva, NULL, NULL);
ptep = find_linux_pte(pgdir, hva, NULL, &psize);
if (ptep) {
pte_t pte = READ_ONCE(*ptep);
if (pte_present(pte)) {
wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) &
MAS2_WIMGE_MASK;
local_irq_restore(flags);
} else {
local_irq_restore(flags);
pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n",
@ -488,10 +393,72 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
goto out;
}
}
writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
local_irq_restore(flags);
if (psize && tlbsel == 1) {
unsigned long psize_pages, tsize_pages;
unsigned long start, end;
unsigned long slot_start, slot_end;
psize_pages = 1UL << (psize - PAGE_SHIFT);
start = pfn & ~(psize_pages - 1);
end = start + psize_pages;
slot_start = pfn - (gfn - slot->base_gfn);
slot_end = slot_start + slot->npages;
if (start < slot_start)
start = slot_start;
if (end > slot_end)
end = slot_end;
tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >>
MAS1_TSIZE_SHIFT;
/*
* Any page size that doesn't satisfy the host mapping
* will fail the start and end tests.
*/
tsize = min(psize - PAGE_SHIFT + BOOK3E_PAGESZ_4K, tsize);
/*
* e500 doesn't implement the lowest tsize bit,
* or 1K pages.
*/
tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1);
/*
* Now find the largest tsize (up to what the guest
* requested) that will cover gfn, stay within the
* range, and for which gfn and pfn are mutually
* aligned.
*/
for (; tsize > BOOK3E_PAGESZ_4K; tsize -= 2) {
unsigned long gfn_start, gfn_end;
tsize_pages = 1UL << (tsize - 2);
gfn_start = gfn & ~(tsize_pages - 1);
gfn_end = gfn_start + tsize_pages;
if (gfn_start + pfn - gfn < start)
continue;
if (gfn_end + pfn - gfn > end)
continue;
if ((gfn & (tsize_pages - 1)) !=
(pfn & (tsize_pages - 1)))
continue;
gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
pfn &= ~(tsize_pages - 1);
break;
}
}
kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg, writable);
kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
ref, gvaddr, stlbe);
writable = tlbe_is_writable(stlbe);
/* Clear i-cache for new pages */
kvmppc_mmu_flush_icache(pfn);


@ -122,6 +122,7 @@ struct kernel_mapping {
extern struct kernel_mapping kernel_map;
extern phys_addr_t phys_ram_base;
extern unsigned long vmemmap_start_pfn;
#define is_kernel_mapping(x) \
((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))


@ -87,7 +87,7 @@
* Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
* is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
*/
#define vmemmap ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT))
#define vmemmap ((struct page *)VMEMMAP_START - vmemmap_start_pfn)
#define PCI_IO_SIZE SZ_16M
#define PCI_IO_END VMEMMAP_START


@ -159,6 +159,7 @@ struct riscv_pmu_snapshot_data {
};
#define RISCV_PMU_RAW_EVENT_MASK GENMASK_ULL(47, 0)
#define RISCV_PMU_PLAT_FW_EVENT_MASK GENMASK_ULL(61, 0)
#define RISCV_PMU_RAW_EVENT_IDX 0x20000
#define RISCV_PLAT_FW_EVENT 0xFFFF


@ -3,8 +3,11 @@
#ifndef __ASM_RISCV_SPINLOCK_H
#define __ASM_RISCV_SPINLOCK_H
#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
#ifdef CONFIG_QUEUED_SPINLOCKS
#define _Q_PENDING_LOOPS (1 << 9)
#endif
#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
#define __no_arch_spinlock_redefine
#include <asm/ticket_spinlock.h>


@ -23,21 +23,21 @@
REG_S a0, TASK_TI_A0(tp)
csrr a0, CSR_CAUSE
/* Exclude IRQs */
blt a0, zero, _new_vmalloc_restore_context_a0
blt a0, zero, .Lnew_vmalloc_restore_context_a0
REG_S a1, TASK_TI_A1(tp)
/* Only check new_vmalloc if we are in page/protection fault */
li a1, EXC_LOAD_PAGE_FAULT
beq a0, a1, _new_vmalloc_kernel_address
beq a0, a1, .Lnew_vmalloc_kernel_address
li a1, EXC_STORE_PAGE_FAULT
beq a0, a1, _new_vmalloc_kernel_address
beq a0, a1, .Lnew_vmalloc_kernel_address
li a1, EXC_INST_PAGE_FAULT
bne a0, a1, _new_vmalloc_restore_context_a1
bne a0, a1, .Lnew_vmalloc_restore_context_a1
_new_vmalloc_kernel_address:
.Lnew_vmalloc_kernel_address:
/* Is it a kernel address? */
csrr a0, CSR_TVAL
bge a0, zero, _new_vmalloc_restore_context_a1
bge a0, zero, .Lnew_vmalloc_restore_context_a1
/* Check if a new vmalloc mapping appeared that could explain the trap */
REG_S a2, TASK_TI_A2(tp)
@ -69,7 +69,7 @@ _new_vmalloc_kernel_address:
/* Check the value of new_vmalloc for this cpu */
REG_L a2, 0(a0)
and a2, a2, a1
beq a2, zero, _new_vmalloc_restore_context
beq a2, zero, .Lnew_vmalloc_restore_context
/* Atomically reset the current cpu bit in new_vmalloc */
amoxor.d a0, a1, (a0)
@ -83,11 +83,11 @@ _new_vmalloc_kernel_address:
csrw CSR_SCRATCH, x0
sret
_new_vmalloc_restore_context:
.Lnew_vmalloc_restore_context:
REG_L a2, TASK_TI_A2(tp)
_new_vmalloc_restore_context_a1:
.Lnew_vmalloc_restore_context_a1:
REG_L a1, TASK_TI_A1(tp)
_new_vmalloc_restore_context_a0:
.Lnew_vmalloc_restore_context_a0:
REG_L a0, TASK_TI_A0(tp)
.endm
@ -278,6 +278,7 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
#else
sret
#endif
SYM_INNER_LABEL(ret_from_exception_end, SYM_L_GLOBAL)
SYM_CODE_END(ret_from_exception)
ASM_NOKPROBE(ret_from_exception)


@ -23,7 +23,7 @@ struct used_bucket {
struct relocation_head {
struct hlist_node node;
struct list_head *rel_entry;
struct list_head rel_entry;
void *location;
};
@ -634,7 +634,7 @@ process_accumulated_relocations(struct module *me,
location = rel_head_iter->location;
list_for_each_entry_safe(rel_entry_iter,
rel_entry_iter_tmp,
rel_head_iter->rel_entry,
&rel_head_iter->rel_entry,
head) {
curr_type = rel_entry_iter->type;
reloc_handlers[curr_type].reloc_handler(
@ -704,16 +704,7 @@ static int add_relocation_to_accumulate(struct module *me, int type,
return -ENOMEM;
}
rel_head->rel_entry =
kmalloc(sizeof(struct list_head), GFP_KERNEL);
if (!rel_head->rel_entry) {
kfree(entry);
kfree(rel_head);
return -ENOMEM;
}
INIT_LIST_HEAD(rel_head->rel_entry);
INIT_LIST_HEAD(&rel_head->rel_entry);
rel_head->location = location;
INIT_HLIST_NODE(&rel_head->node);
if (!current_head->first) {
@ -722,7 +713,6 @@ static int add_relocation_to_accumulate(struct module *me, int type,
if (!bucket) {
kfree(entry);
kfree(rel_head->rel_entry);
kfree(rel_head);
return -ENOMEM;
}
@ -735,7 +725,7 @@ static int add_relocation_to_accumulate(struct module *me, int type,
}
/* Add relocation to head of discovered rel_head */
list_add_tail(&entry->head, rel_head->rel_entry);
list_add_tail(&entry->head, &rel_head->rel_entry);
return 0;
}


@ -30,7 +30,7 @@ static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
p->ainsn.api.restore = (unsigned long)p->addr + len;
patch_text_nosync(p->ainsn.api.insn, &p->opcode, len);
patch_text_nosync(p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn));
patch_text_nosync((void *)p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn));
}
static void __kprobes arch_prepare_simulate(struct kprobe *p)


@ -17,6 +17,7 @@
#ifdef CONFIG_FRAME_POINTER
extern asmlinkage void handle_exception(void);
extern unsigned long ret_from_exception_end;
static inline int fp_is_valid(unsigned long fp, unsigned long sp)
{
@ -71,7 +72,8 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
fp = frame->fp;
pc = ftrace_graph_ret_addr(current, &graph_idx, frame->ra,
&frame->ra);
if (pc == (unsigned long)handle_exception) {
if (pc >= (unsigned long)handle_exception &&
pc < (unsigned long)&ret_from_exception_end) {
if (unlikely(!__kernel_text_address(pc) || !fn(arg, pc)))
break;


@ -35,7 +35,7 @@
int show_unhandled_signals = 1;
static DEFINE_SPINLOCK(die_lock);
static DEFINE_RAW_SPINLOCK(die_lock);
static int copy_code(struct pt_regs *regs, u16 *val, const u16 *insns)
{
@ -81,7 +81,7 @@ void die(struct pt_regs *regs, const char *str)
oops_enter();
spin_lock_irqsave(&die_lock, flags);
raw_spin_lock_irqsave(&die_lock, flags);
console_verbose();
bust_spinlocks(1);
@ -100,7 +100,7 @@ void die(struct pt_regs *regs, const char *str)
bust_spinlocks(0);
add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
spin_unlock_irqrestore(&die_lock, flags);
raw_spin_unlock_irqrestore(&die_lock, flags);
oops_exit();
if (in_interrupt())


@ -33,6 +33,7 @@
#include <asm/pgtable.h>
#include <asm/sections.h>
#include <asm/soc.h>
#include <asm/sparsemem.h>
#include <asm/tlbflush.h>
#include "../kernel/head.h"
@ -62,6 +63,13 @@ EXPORT_SYMBOL(pgtable_l5_enabled);
phys_addr_t phys_ram_base __ro_after_init;
EXPORT_SYMBOL(phys_ram_base);
#ifdef CONFIG_SPARSEMEM_VMEMMAP
#define VMEMMAP_ADDR_ALIGN (1ULL << SECTION_SIZE_BITS)
unsigned long vmemmap_start_pfn __ro_after_init;
EXPORT_SYMBOL(vmemmap_start_pfn);
#endif
unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
__page_aligned_bss;
EXPORT_SYMBOL(empty_zero_page);
@ -240,8 +248,12 @@ static void __init setup_bootmem(void)
* Make sure we align the start of the memory on a PMD boundary so that
* at worst, we map the linear mapping with PMD mappings.
*/
if (!IS_ENABLED(CONFIG_XIP_KERNEL))
if (!IS_ENABLED(CONFIG_XIP_KERNEL)) {
phys_ram_base = memblock_start_of_DRAM() & PMD_MASK;
#ifdef CONFIG_SPARSEMEM_VMEMMAP
vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
#endif
}
/*
* In 64-bit, any use of __va/__pa before this point is wrong as we
@ -1101,6 +1113,9 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom);
phys_ram_base = CONFIG_PHYS_RAM_BASE;
#ifdef CONFIG_SPARSEMEM_VMEMMAP
vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
#endif
kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start);


@ -2678,9 +2678,13 @@ static int flic_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
kvm_s390_clear_float_irqs(dev->kvm);
break;
case KVM_DEV_FLIC_APF_ENABLE:
if (kvm_is_ucontrol(dev->kvm))
return -EINVAL;
dev->kvm->arch.gmap->pfault_enabled = 1;
break;
case KVM_DEV_FLIC_APF_DISABLE_WAIT:
if (kvm_is_ucontrol(dev->kvm))
return -EINVAL;
dev->kvm->arch.gmap->pfault_enabled = 0;
/*
* Make sure no async faults are in transition when
@ -2894,6 +2898,8 @@ int kvm_set_routing_entry(struct kvm *kvm,
switch (ue->type) {
/* we store the userspace addresses instead of the guest addresses */
case KVM_IRQ_ROUTING_S390_ADAPTER:
if (kvm_is_ucontrol(kvm))
return -EINVAL;
e->set = set_adapter_int;
uaddr = gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);
if (uaddr == -EFAULT)


@ -854,7 +854,7 @@ unpin:
static void unpin_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page,
gpa_t gpa)
{
hpa_t hpa = (hpa_t) vsie_page->scb_o;
hpa_t hpa = virt_to_phys(vsie_page->scb_o);
if (hpa)
unpin_guest_page(vcpu->kvm, gpa, hpa);


@ -190,7 +190,8 @@ int ssp_get(struct task_struct *target, const struct user_regset *regset,
struct fpu *fpu = &target->thread.fpu;
struct cet_user_state *cetregs;
if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) ||
!ssp_active(target, regset))
return -ENODEV;
sync_fpstate(fpu);


@ -175,7 +175,6 @@ EXPORT_SYMBOL_GPL(arch_static_call_transform);
noinstr void __static_call_update_early(void *tramp, void *func)
{
BUG_ON(system_state != SYSTEM_BOOTING);
BUG_ON(!early_boot_irqs_disabled);
BUG_ON(static_call_initialized);
__text_gen_insn(tramp, JMP32_INSN_OPCODE, tramp, func, JMP32_INSN_SIZE);
sync_core();


@ -1080,7 +1080,8 @@ struct execmem_info __init *execmem_arch_setup(void)
start = MODULES_VADDR + offset;
if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX)) {
if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) &&
cpu_feature_enabled(X86_FEATURE_PSE)) {
pgprot = PAGE_KERNEL_ROX;
flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE;
} else {


@ -6844,16 +6844,24 @@ static struct bfq_queue *bfq_waker_bfqq(struct bfq_queue *bfqq)
if (new_bfqq == waker_bfqq) {
/*
* If waker_bfqq is in the merge chain, and current
* is the only procress.
* is the only process, waker_bfqq can be freed.
*/
if (bfqq_process_refs(waker_bfqq) == 1)
return NULL;
break;
return waker_bfqq;
}
new_bfqq = new_bfqq->new_bfqq;
}
/*
* If waker_bfqq is not in the merge chain, and its process reference
* is 0, waker_bfqq can be freed.
*/
if (bfqq_process_refs(waker_bfqq) == 0)
return NULL;
return waker_bfqq;
}


@ -610,16 +610,28 @@ acpi_video_device_lcd_get_level_current(struct acpi_video_device *device,
return 0;
}
/**
* acpi_video_device_EDID() - Get EDID from ACPI _DDC
* @device: video output device (LCD, CRT, ..)
* @edid: address for returned EDID pointer
* @length: _DDC length to request (must be a multiple of 128)
*
* Get EDID from ACPI _DDC. On success, a pointer to the EDID data is written
* to the @edid address, and the length of the EDID is returned. The caller is
* responsible for freeing the edid pointer.
*
* Return the length of EDID (positive value) on success or error (negative
* value).
*/
static int
acpi_video_device_EDID(struct acpi_video_device *device,
union acpi_object **edid, int length)
acpi_video_device_EDID(struct acpi_video_device *device, void **edid, int length)
{
int status;
acpi_status status;
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
union acpi_object arg0 = { ACPI_TYPE_INTEGER };
struct acpi_object_list args = { 1, &arg0 };
int ret;
*edid = NULL;
@ -636,16 +648,17 @@ acpi_video_device_EDID(struct acpi_video_device *device,
obj = buffer.pointer;
if (obj && obj->type == ACPI_TYPE_BUFFER)
*edid = obj;
else {
if (obj && obj->type == ACPI_TYPE_BUFFER) {
*edid = kmemdup(obj->buffer.pointer, obj->buffer.length, GFP_KERNEL);
ret = *edid ? obj->buffer.length : -ENOMEM;
} else {
acpi_handle_debug(device->dev->handle,
"Invalid _DDC data for length %d\n", length);
status = -EFAULT;
kfree(obj);
ret = -EFAULT;
}
return status;
kfree(obj);
return ret;
}
/* bus */
@ -1435,9 +1448,7 @@ int acpi_video_get_edid(struct acpi_device *device, int type, int device_id,
{
struct acpi_video_bus *video;
struct acpi_video_device *video_device;
union acpi_object *buffer = NULL;
acpi_status status;
int i, length;
int i, length, ret;
if (!device || !acpi_driver_data(device))
return -EINVAL;
@ -1477,16 +1488,10 @@ int acpi_video_get_edid(struct acpi_device *device, int type, int device_id,
}
for (length = 512; length > 0; length -= 128) {
status = acpi_video_device_EDID(video_device, &buffer,
length);
if (ACPI_SUCCESS(status))
break;
ret = acpi_video_device_EDID(video_device, edid, length);
if (ret > 0)
return ret;
}
if (!length)
continue;
*edid = buffer->buffer.pointer;
return length;
}
return -ENODEV;


@ -440,6 +440,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
},
},
{
/* Asus Vivobook X1504VAP */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "X1504VAP"),
},
},
{
/* Asus Vivobook X1704VAP */
.matches = {
@ -646,6 +653,17 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"),
},
},
{
/*
* TongFang GM5HG0A: in the case of the SKIKK Vanaheim relabel the
* board-name is changed, so check OEM strings instead. Note
* OEM string matches are always exact matches.
* https://bugzilla.kernel.org/show_bug.cgi?id=219614
*/
.matches = {
DMI_EXACT_MATCH(DMI_OEM_STRING, "GM5HG0A"),
},
},
{ }
};
@ -671,11 +689,11 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
for (i = 0; i < ARRAY_SIZE(override_table); i++) {
const struct irq_override_cmp *entry = &override_table[i];
if (dmi_check_system(entry->system) &&
entry->irq == gsi &&
if (entry->irq == gsi &&
entry->triggering == triggering &&
entry->polarity == polarity &&
entry->shareable == shareable)
entry->shareable == shareable &&
dmi_check_system(entry->system))
return entry->override;
}


@ -27,9 +27,17 @@ static ssize_t name##_read(struct file *file, struct kobject *kobj, \
loff_t off, size_t count) \
{ \
struct device *dev = kobj_to_dev(kobj); \
cpumask_var_t mask; \
ssize_t n; \
\
return cpumap_print_bitmask_to_buf(buf, topology_##mask(dev->id), \
off, count); \
if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \
return -ENOMEM; \
\
cpumask_copy(mask, topology_##mask(dev->id)); \
n = cpumap_print_bitmask_to_buf(buf, mask, off, count); \
free_cpumask_var(mask); \
\
return n; \
} \
\
static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
@ -37,9 +45,17 @@ static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
loff_t off, size_t count) \
{ \
struct device *dev = kobj_to_dev(kobj); \
cpumask_var_t mask; \
ssize_t n; \
\
return cpumap_print_list_to_buf(buf, topology_##mask(dev->id), \
off, count); \
if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \
return -ENOMEM; \
\
cpumask_copy(mask, topology_##mask(dev->id)); \
n = cpumap_print_list_to_buf(buf, mask, off, count); \
free_cpumask_var(mask); \
\
return n; \
}
define_id_show_func(physical_package_id, "%d");


@ -1468,6 +1468,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
zram->mem_pool = zs_create_pool(zram->disk->disk_name);
if (!zram->mem_pool) {
vfree(zram->table);
zram->table = NULL;
return false;
}
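Clearing zram->table right after vfree() makes later teardown paths idempotent: in the kernel, vfree(NULL) and kfree(NULL) are no-ops, so a NULLed pointer can be freed again harmlessly instead of double-freeing stale memory. A small userspace model of the idiom, relying on free(NULL) being a no-op as well:

#include <stdlib.h>

struct zram_like {
	void *table;
};

static void teardown(struct zram_like *z)
{
	free(z->table);	/* free(NULL) is a no-op, like vfree(NULL) */
	z->table = NULL;
}

int main(void)
{
	struct zram_like z = { .table = malloc(64) };

	teardown(&z);
	teardown(&z);	/* safe: second call frees NULL */
	return 0;
}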

View File

@ -917,7 +917,7 @@ static int mhi_pci_claim(struct mhi_controller *mhi_cntrl,
return err;
}
mhi_cntrl->regs = pcim_iomap_region(pdev, 1 << bar_num, pci_name(pdev));
mhi_cntrl->regs = pcim_iomap_region(pdev, bar_num, pci_name(pdev));
if (IS_ERR(mhi_cntrl->regs)) {
err = PTR_ERR(mhi_cntrl->regs);
dev_err(&pdev->dev, "failed to map pci region: %d\n", err);

View File

@ -325,8 +325,6 @@ config QORIQ_CPUFREQ
This adds the CPUFreq driver support for Freescale QorIQ SoCs
which are capable of changing the CPU's frequency dynamically.
endif
config ACPI_CPPC_CPUFREQ
tristate "CPUFreq driver based on the ACPI CPPC spec"
depends on ACPI_PROCESSOR
@ -355,4 +353,6 @@ config ACPI_CPPC_CPUFREQ_FIE
If in doubt, say N.
endif
endmenu

View File

@ -504,12 +504,12 @@ static int sbi_cpuidle_probe(struct platform_device *pdev)
int cpu, ret;
struct cpuidle_driver *drv;
struct cpuidle_device *dev;
struct device_node *np, *pds_node;
struct device_node *pds_node;
/* Detect OSI support based on CPU DT nodes */
sbi_cpuidle_use_osi = true;
for_each_possible_cpu(cpu) {
np = of_cpu_device_node_get(cpu);
struct device_node *np __free(device_node) = of_cpu_device_node_get(cpu);
if (np &&
of_property_present(np, "power-domains") &&
of_property_present(np, "power-domain-names")) {

View File

@ -10,25 +10,27 @@
* DOC: teo-description
*
* The idea of this governor is based on the observation that on many systems
* timer events are two or more orders of magnitude more frequent than any
* other interrupts, so they are likely to be the most significant cause of CPU
* wakeups from idle states. Moreover, information about what happened in the
* (relatively recent) past can be used to estimate whether or not the deepest
* idle state with target residency within the (known) time till the closest
* timer event, referred to as the sleep length, is likely to be suitable for
* the upcoming CPU idle period and, if not, then which of the shallower idle
* states to choose instead of it.
* timer interrupts are two or more orders of magnitude more frequent than any
* other interrupt types, so they are likely to dominate CPU wakeup patterns.
* Moreover, in principle, the time when the next timer event is going to occur
* can be determined at the idle state selection time, although doing that may
* be costly, so it can be regarded as the most reliable source of information
* for idle state selection.
*
* Of course, non-timer wakeup sources are more important in some use cases
* which can be covered by taking a few most recent idle time intervals of the
* CPU into account. However, even in that context it is not necessary to
* consider idle duration values greater than the sleep length, because the
* closest timer will ultimately wake up the CPU anyway unless it is woken up
* earlier.
* Of course, non-timer wakeup sources are more important in some use cases,
* but even then it is generally unnecessary to consider idle duration values
* greater than the time till the next timer event, referred to as the sleep
* length in what follows, because the closest timer will ultimately wake up the
* CPU anyway unless it is woken up earlier.
*
* Thus this governor estimates whether or not the prospective idle duration of
* a CPU is likely to be significantly shorter than the sleep length and selects
* an idle state for it accordingly.
* However, since obtaining the sleep length may be costly, the governor first
* checks if it can select a shallow idle state using wakeup pattern information
* from recent times, in which case it can do without knowing the sleep length
* at all. For this purpose, it counts CPU wakeup events and looks for an idle
* state whose target residency has not exceeded the idle duration (measured
* after wakeup) in the majority of relevant recent cases. If the target
* residency of that state is small enough, it may be used right away and the
* sleep length need not be determined.
*
* The computations carried out by this governor are based on using bins whose
* boundaries are aligned with the target residency parameter values of the CPU
@ -39,7 +41,11 @@
* idle state 2, the third bin spans from the target residency of idle state 2
* up to, but not including, the target residency of idle state 3 and so on.
* The last bin spans from the target residency of the deepest idle state
* supplied by the driver to infinity.
* supplied by the driver to the scheduler tick period length or to infinity if
* the tick period length is less than the target residency of that state. In
* the latter case, the governor also counts events with the measured idle
* duration between the tick period length and the target residency of the
* deepest idle state.
*
* Two metrics called "hits" and "intercepts" are associated with each bin.
* They are updated every time before selecting an idle state for the given CPU
@ -49,47 +55,46 @@
* sleep length and the idle duration measured after CPU wakeup fall into the
* same bin (that is, the CPU appears to wake up "on time" relative to the sleep
* length). In turn, the "intercepts" metric reflects the relative frequency of
* situations in which the measured idle duration is so much shorter than the
* sleep length that the bin it falls into corresponds to an idle state
* shallower than the one whose bin is fallen into by the sleep length (these
* situations are referred to as "intercepts" below).
* non-timer wakeup events for which the measured idle duration falls into a bin
* that corresponds to an idle state shallower than the one whose bin is fallen
* into by the sleep length (these events are also referred to as "intercepts"
* below).
*
* In order to select an idle state for a CPU, the governor takes the following
* steps (modulo the possible latency constraint that must be taken into account
* too):
*
* 1. Find the deepest CPU idle state whose target residency does not exceed
* the current sleep length (the candidate idle state) and compute 2 sums as
* follows:
* 1. Find the deepest enabled CPU idle state (the candidate idle state) and
* compute 2 sums as follows:
*
* - The sum of the "hits" and "intercepts" metrics for the candidate state
* and all of the deeper idle states (it represents the cases in which the
* CPU was idle long enough to avoid being intercepted if the sleep length
* had been equal to the current one).
* - The sum of the "hits" metric for all of the idle states shallower than
* the candidate one (it represents the cases in which the CPU was likely
* woken up by a timer).
*
* - The sum of the "intercepts" metrics for all of the idle states shallower
* than the candidate one (it represents the cases in which the CPU was not
* idle long enough to avoid being intercepted if the sleep length had been
* equal to the current one).
* - The sum of the "intercepts" metric for all of the idle states shallower
* than the candidate one (it represents the cases in which the CPU was
* likely woken up by a non-timer wakeup source).
*
* 2. If the second sum is greater than the first one the CPU is likely to wake
* up early, so look for an alternative idle state to select.
* 2. If the second sum computed in step 1 is greater than a half of the sum of
* both metrics for the candidate state bin and all subsequent bins (if any),
* a shallower idle state is likely to be more suitable, so look for it.
*
* - Traverse the idle states shallower than the candidate one in the
* - Traverse the enabled idle states shallower than the candidate one in the
* descending order.
*
* - For each of them compute the sum of the "intercepts" metrics over all
* of the idle states between it and the candidate one (including the
* former and excluding the latter).
*
* - If each of these sums that needs to be taken into account (because the
* check related to it has indicated that the CPU is likely to wake up
* early) is greater than a half of the corresponding sum computed in step
* 1 (which means that the target residency of the state in question had
* not exceeded the idle duration in over a half of the relevant cases),
* select the given idle state instead of the candidate one.
* - If this sum is greater than a half of the second sum computed in step 1,
* use the given idle state as the new candidate one.
*
* 3. By default, select the candidate state.
* 3. If the current candidate state is state 0 or its target residency is short
* enough, return it and prevent the scheduler tick from being stopped.
*
* 4. Obtain the sleep length value and check if it is below the target
* residency of the current candidate state, in which case a new shallower
* candidate state needs to be found, so look for it.
*/
#include <linux/cpuidle.h>
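The selection steps documented above reduce to a few sums over the per-bin counters. A standalone sketch of one simplified reading of steps 1 and 2, with made-up counter values and ignoring latency constraints, disabled states, and the tick handling of steps 3 and 4:

#include <stdio.h>

#define NR_STATES 4

static const unsigned int hits[NR_STATES]       = { 10, 5, 3, 2 };
static const unsigned int intercepts[NR_STATES] = { 2, 14, 6, 1 };

int main(void)
{
	int candidate = NR_STATES - 1;	/* step 1: deepest enabled state */
	unsigned int idx_intercepts = 0, tail = 0, sum = 0;
	int i;

	for (i = 0; i < candidate; i++)
		idx_intercepts += intercepts[i];
	for (i = candidate; i < NR_STATES; i++)
		tail += hits[i] + intercepts[i];

	/* Step 2: prefer a shallower state when intercepts dominate. */
	if (idx_intercepts > tail / 2) {
		for (i = candidate - 1; i >= 0; i--) {
			sum += intercepts[i];
			if (sum > idx_intercepts / 2) {
				candidate = i;
				break;
			}
		}
	}
	printf("selected idle state %d\n", candidate);
	return 0;
}

With these sample counters the intercepts dominate, so the sketch walks down from the deepest state and settles on state 1.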

View File

@ -237,9 +237,9 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data1 = {
static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data2 = {
.label = "ls2k2000_gpio",
.mode = BIT_CTRL_MODE,
.conf_offset = 0x84,
.in_offset = 0x88,
.out_offset = 0x80,
.conf_offset = 0x4,
.in_offset = 0x8,
.out_offset = 0x0,
};
static const struct loongson_gpio_chip_data loongson_gpio_ls3a5000_data = {

View File

@ -1027,6 +1027,30 @@ static void gpio_sim_device_deactivate(struct gpio_sim_device *dev)
dev->pdev = NULL;
}
static void
gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock)
{
struct configfs_subsystem *subsys = dev->group.cg_subsys;
struct gpio_sim_bank *bank;
struct gpio_sim_line *line;
/*
* The device only needs to depend on leaf line entries. This is
* sufficient to lock up all the configfs entries that the
* instantiated, alive device depends on.
*/
list_for_each_entry(bank, &dev->bank_list, siblings) {
list_for_each_entry(line, &bank->line_list, siblings) {
if (lock)
WARN_ON(configfs_depend_item_unlocked(
subsys, &line->group.cg_item));
else
configfs_undepend_item_unlocked(
&line->group.cg_item);
}
}
}
static ssize_t
gpio_sim_device_config_live_store(struct config_item *item,
const char *page, size_t count)
@ -1039,14 +1063,24 @@ gpio_sim_device_config_live_store(struct config_item *item,
if (ret)
return ret;
guard(mutex)(&dev->lock);
if (live)
gpio_sim_device_lockup_configfs(dev, true);
if (live == gpio_sim_device_is_live(dev))
ret = -EPERM;
else if (live)
ret = gpio_sim_device_activate(dev);
else
gpio_sim_device_deactivate(dev);
scoped_guard(mutex, &dev->lock) {
if (live == gpio_sim_device_is_live(dev))
ret = -EPERM;
else if (live)
ret = gpio_sim_device_activate(dev);
else
gpio_sim_device_deactivate(dev);
}
/*
* Undepend is required only if device disablement (live == 0)
* succeeds or if device enablement (live == 1) fails.
*/
if (live == !!ret)
gpio_sim_device_lockup_configfs(dev, false);
return ret ?: count;
}

View File

@ -1410,7 +1410,7 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
size_t num_entries = gpio_virtuser_get_lookup_count(dev);
struct gpio_virtuser_lookup_entry *entry;
struct gpio_virtuser_lookup *lookup;
unsigned int i = 0;
unsigned int i = 0, idx;
lockdep_assert_held(&dev->lock);
@ -1424,12 +1424,12 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
return -ENOMEM;
list_for_each_entry(lookup, &dev->lookup_list, siblings) {
idx = 0;
list_for_each_entry(entry, &lookup->entry_list, siblings) {
table->table[i] =
table->table[i++] =
GPIO_LOOKUP_IDX(entry->key,
entry->offset < 0 ? U16_MAX : entry->offset,
lookup->con_id, i, entry->flags);
i++;
lookup->con_id, idx++, entry->flags);
}
}
@ -1439,6 +1439,15 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
return 0;
}
static void
gpio_virtuser_remove_lookup_table(struct gpio_virtuser_device *dev)
{
gpiod_remove_lookup_table(dev->lookup_table);
kfree(dev->lookup_table->dev_id);
kfree(dev->lookup_table);
dev->lookup_table = NULL;
}
static struct fwnode_handle *
gpio_virtuser_make_device_swnode(struct gpio_virtuser_device *dev)
{
@ -1487,10 +1496,8 @@ gpio_virtuser_device_activate(struct gpio_virtuser_device *dev)
pdevinfo.fwnode = swnode;
ret = gpio_virtuser_make_lookup_table(dev);
if (ret) {
fwnode_remove_software_node(swnode);
return ret;
}
if (ret)
goto err_remove_swnode;
reinit_completion(&dev->probe_completion);
dev->driver_bound = false;
@ -1498,23 +1505,31 @@ gpio_virtuser_device_activate(struct gpio_virtuser_device *dev)
pdev = platform_device_register_full(&pdevinfo);
if (IS_ERR(pdev)) {
ret = PTR_ERR(pdev);
bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier);
fwnode_remove_software_node(swnode);
return PTR_ERR(pdev);
goto err_remove_lookup_table;
}
wait_for_completion(&dev->probe_completion);
bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier);
if (!dev->driver_bound) {
platform_device_unregister(pdev);
fwnode_remove_software_node(swnode);
return -ENXIO;
ret = -ENXIO;
goto err_unregister_pdev;
}
dev->pdev = pdev;
return 0;
err_unregister_pdev:
platform_device_unregister(pdev);
err_remove_lookup_table:
gpio_virtuser_remove_lookup_table(dev);
err_remove_swnode:
fwnode_remove_software_node(swnode);
return ret;
}
static void
@ -1526,10 +1541,33 @@ gpio_virtuser_device_deactivate(struct gpio_virtuser_device *dev)
swnode = dev_fwnode(&dev->pdev->dev);
platform_device_unregister(dev->pdev);
gpio_virtuser_remove_lookup_table(dev);
fwnode_remove_software_node(swnode);
dev->pdev = NULL;
gpiod_remove_lookup_table(dev->lookup_table);
kfree(dev->lookup_table);
}
static void
gpio_virtuser_device_lockup_configfs(struct gpio_virtuser_device *dev, bool lock)
{
struct configfs_subsystem *subsys = dev->group.cg_subsys;
struct gpio_virtuser_lookup_entry *entry;
struct gpio_virtuser_lookup *lookup;
/*
* The device only needs to depend on leaf lookup entries. This is
* sufficient to lock up all the configfs entries that the
* instantiated, alive device depends on.
*/
list_for_each_entry(lookup, &dev->lookup_list, siblings) {
list_for_each_entry(entry, &lookup->entry_list, siblings) {
if (lock)
WARN_ON(configfs_depend_item_unlocked(
subsys, &entry->group.cg_item));
else
configfs_undepend_item_unlocked(
&entry->group.cg_item);
}
}
}
static ssize_t
@ -1544,15 +1582,24 @@ gpio_virtuser_device_config_live_store(struct config_item *item,
if (ret)
return ret;
guard(mutex)(&dev->lock);
if (live == gpio_virtuser_device_is_live(dev))
return -EPERM;
if (live)
ret = gpio_virtuser_device_activate(dev);
else
gpio_virtuser_device_deactivate(dev);
gpio_virtuser_device_lockup_configfs(dev, true);
scoped_guard(mutex, &dev->lock) {
if (live == gpio_virtuser_device_is_live(dev))
ret = -EPERM;
else if (live)
ret = gpio_virtuser_device_activate(dev);
else
gpio_virtuser_device_deactivate(dev);
}
/*
* Undepend is required only if device disablement (live == 0)
* succeeds or if device enablement (live == 1) fails.
*/
if (live == !!ret)
gpio_virtuser_device_lockup_configfs(dev, false);
return ret ?: count;
}
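The live == !!ret test encodes the comment's rule as a single comparison; tabulating it shows the configfs pin is dropped exactly when the device ends up not live. A standalone check of the four cases:

#include <stdbool.h>
#include <stdio.h>

/* Models the "live == !!ret" undepend condition from the hunk above. */
static bool must_undepend(bool live, int ret)
{
	return live == !!ret;
}

int main(void)
{
	printf("disable ok:   %d\n", must_undepend(false, 0));   /* 1 */
	printf("disable fail: %d\n", must_undepend(false, -1));  /* 0 */
	printf("enable ok:    %d\n", must_undepend(true, 0));    /* 0 */
	printf("enable fail:  %d\n", must_undepend(true, -1));   /* 1 */
	return 0;
}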

View File

@ -567,7 +567,6 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
else
remaining_size -= size;
}
mutex_unlock(&mgr->lock);
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
struct drm_buddy_block *dcc_block;
@ -584,6 +583,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
(u64)vres->base.size,
&vres->blocks);
}
mutex_unlock(&mgr->lock);
vres->base.start = 0;
size = max_t(u64, amdgpu_vram_mgr_blocks_size(&vres->blocks),

View File

@ -350,10 +350,27 @@ int kfd_dbg_set_mes_debug_mode(struct kfd_process_device *pdd, bool sq_trap_en)
{
uint32_t spi_dbg_cntl = pdd->spi_dbg_override | pdd->spi_dbg_launch_mode;
uint32_t flags = pdd->process->dbg_flags;
struct amdgpu_device *adev = pdd->dev->adev;
int r;
if (!kfd_dbg_is_per_vmid_supported(pdd->dev))
return 0;
if (!pdd->proc_ctx_cpu_ptr) {
r = amdgpu_amdkfd_alloc_gtt_mem(adev,
AMDGPU_MES_PROC_CTX_SIZE,
&pdd->proc_ctx_bo,
&pdd->proc_ctx_gpu_addr,
&pdd->proc_ctx_cpu_ptr,
false);
if (r) {
dev_err(adev->dev,
"failed to allocate process context bo\n");
return r;
}
memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
}
return amdgpu_mes_set_shader_debugger(pdd->dev->adev, pdd->proc_ctx_gpu_addr, spi_dbg_cntl,
pdd->watch_points, flags, sq_trap_en);
}

View File

@ -1160,7 +1160,8 @@ static void kfd_process_wq_release(struct work_struct *work)
*/
synchronize_rcu();
ef = rcu_access_pointer(p->ef);
dma_fence_signal(ef);
if (ef)
dma_fence_signal(ef);
kfd_process_remove_sysfs(p);

View File

@ -8400,16 +8400,6 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
struct amdgpu_crtc *acrtc,
struct dm_crtc_state *acrtc_state)
{
/*
* We have no guarantee that the frontend index maps to the same
* backend index - some even map to more than one.
*
* TODO: Use a different interrupt or check DC itself for the mapping.
*/
int irq_type =
amdgpu_display_crtc_idx_to_irq_type(
adev,
acrtc->crtc_id);
struct drm_vblank_crtc_config config = {0};
struct dc_crtc_timing *timing;
int offdelay;
@ -8435,28 +8425,7 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
drm_crtc_vblank_on_config(&acrtc->base,
&config);
amdgpu_irq_get(
adev,
&adev->pageflip_irq,
irq_type);
#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
amdgpu_irq_get(
adev,
&adev->vline0_irq,
irq_type);
#endif
} else {
#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
amdgpu_irq_put(
adev,
&adev->vline0_irq,
irq_type);
#endif
amdgpu_irq_put(
adev,
&adev->pageflip_irq,
irq_type);
drm_crtc_vblank_off(&acrtc->base);
}
}
@ -11155,8 +11124,8 @@ dm_get_plane_scale(struct drm_plane_state *plane_state,
int plane_src_w, plane_src_h;
dm_get_oriented_plane_size(plane_state, &plane_src_w, &plane_src_h);
*out_plane_scale_w = plane_state->crtc_w * 1000 / plane_src_w;
*out_plane_scale_h = plane_state->crtc_h * 1000 / plane_src_h;
*out_plane_scale_w = plane_src_w ? plane_state->crtc_w * 1000 / plane_src_w : 0;
*out_plane_scale_h = plane_src_h ? plane_state->crtc_h * 1000 / plane_src_h : 0;
}
/*

View File

@ -4510,7 +4510,7 @@ static bool commit_minimal_transition_based_on_current_context(struct dc *dc,
struct pipe_split_policy_backup policy;
struct dc_state *intermediate_context;
struct dc_state *old_current_state = dc->current_state;
struct dc_surface_update srf_updates[MAX_SURFACE_NUM] = {0};
struct dc_surface_update srf_updates[MAX_SURFACES] = {0};
int surface_count;
/*

View File

@ -483,9 +483,9 @@ bool dc_state_add_plane(
if (stream_status == NULL) {
dm_error("Existing stream not found; failed to attach surface!\n");
goto out;
} else if (stream_status->plane_count == MAX_SURFACE_NUM) {
} else if (stream_status->plane_count == MAX_SURFACES) {
dm_error("Surface: can not attach plane_state %p! Maximum is: %d\n",
plane_state, MAX_SURFACE_NUM);
plane_state, MAX_SURFACES);
goto out;
} else if (!otg_master_pipe) {
goto out;
@ -600,7 +600,7 @@ bool dc_state_rem_all_planes_for_stream(
{
int i, old_plane_count;
struct dc_stream_status *stream_status = NULL;
struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
for (i = 0; i < state->stream_count; i++)
if (state->streams[i] == stream) {
@ -875,7 +875,7 @@ bool dc_state_rem_all_phantom_planes_for_stream(
{
int i, old_plane_count;
struct dc_stream_status *stream_status = NULL;
struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
for (i = 0; i < state->stream_count; i++)
if (state->streams[i] == phantom_stream) {

View File

@ -57,7 +57,7 @@ struct dmub_notification;
#define DC_VER "3.2.310"
#define MAX_SURFACES 3
#define MAX_SURFACES 4
#define MAX_PLANES 6
#define MAX_STREAMS 6
#define MIN_VIEWPORT_SIZE 12
@ -1398,7 +1398,7 @@ struct dc_scratch_space {
* store current value in plane states so we can still recover
* a valid current state during dc update.
*/
struct dc_plane_state plane_states[MAX_SURFACE_NUM];
struct dc_plane_state plane_states[MAX_SURFACES];
struct dc_stream_state stream_state;
};

View File

@ -56,7 +56,7 @@ struct dc_stream_status {
int plane_count;
int audio_inst;
struct timing_sync_info timing_sync_info;
struct dc_plane_state *plane_states[MAX_SURFACE_NUM];
struct dc_plane_state *plane_states[MAX_SURFACES];
bool is_abm_supported;
struct mall_stream_config mall_stream_config;
bool fpo_in_use;

View File

@ -76,7 +76,6 @@ struct dc_perf_trace {
unsigned long last_entry_write;
};
#define MAX_SURFACE_NUM 6
#define NUM_PIXEL_FORMATS 10
enum tiling_mode {

View File

@ -66,11 +66,15 @@ static inline double dml_max5(double a, double b, double c, double d, double e)
static inline double dml_ceil(double a, double granularity)
{
if (granularity == 0)
return 0;
return (double) dcn_bw_ceil2(a, granularity);
}
static inline double dml_floor(double a, double granularity)
{
if (granularity == 0)
return 0;
return (double) dcn_bw_floor2(a, granularity);
}
@ -114,11 +118,15 @@ static inline double dml_ceil_2(double f)
static inline double dml_ceil_ex(double x, double granularity)
{
if (granularity == 0)
return 0;
return (double) dcn_bw_ceil2(x, granularity);
}
static inline double dml_floor_ex(double x, double granularity)
{
if (granularity == 0)
return 0;
return (double) dcn_bw_floor2(x, granularity);
}

View File

@ -813,7 +813,7 @@ static bool remove_all_phantom_planes_for_stream(struct dml2_context *ctx, struc
{
int i, old_plane_count;
struct dc_stream_status *stream_status = NULL;
struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
for (i = 0; i < context->stream_count; i++)
if (context->streams[i] == stream) {

View File

@ -303,5 +303,7 @@ int smu_v13_0_set_wbrf_exclusion_ranges(struct smu_context *smu,
int smu_v13_0_get_boot_freq_by_index(struct smu_context *smu,
enum smu_clk_type clk_type,
uint32_t *value);
void smu_v13_0_interrupt_work(struct smu_context *smu);
#endif
#endif

View File

@ -1320,11 +1320,11 @@ static int smu_v13_0_set_irq_state(struct amdgpu_device *adev,
return 0;
}
static int smu_v13_0_ack_ac_dc_interrupt(struct smu_context *smu)
void smu_v13_0_interrupt_work(struct smu_context *smu)
{
return smu_cmn_send_smc_msg(smu,
SMU_MSG_ReenableAcDcInterrupt,
NULL);
smu_cmn_send_smc_msg(smu,
SMU_MSG_ReenableAcDcInterrupt,
NULL);
}
#define THM_11_0__SRCID__THM_DIG_THERM_L2H 0 /* ASIC_TEMP > CG_THERMAL_INT.DIG_THERM_INTH */
@ -1377,12 +1377,12 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
switch (ctxid) {
case SMU_IH_INTERRUPT_CONTEXT_ID_AC:
dev_dbg(adev->dev, "Switched to AC mode!\n");
smu_v13_0_ack_ac_dc_interrupt(smu);
schedule_work(&smu->interrupt_work);
adev->pm.ac_power = true;
break;
case SMU_IH_INTERRUPT_CONTEXT_ID_DC:
dev_dbg(adev->dev, "Switched to DC mode!\n");
smu_v13_0_ack_ac_dc_interrupt(smu);
schedule_work(&smu->interrupt_work);
adev->pm.ac_power = false;
break;
case SMU_IH_INTERRUPT_CONTEXT_ID_THERMAL_THROTTLING:

View File

@ -3219,6 +3219,7 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
.is_asic_wbrf_supported = smu_v13_0_0_wbrf_support_check,
.enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
.set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
.interrupt_work = smu_v13_0_interrupt_work,
};
void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)

View File

@ -2797,6 +2797,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
.is_asic_wbrf_supported = smu_v13_0_7_wbrf_support_check,
.enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
.set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
.interrupt_work = smu_v13_0_interrupt_work,
};
void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)

View File

@ -1158,9 +1158,15 @@ static int intel_hdcp_check_link(struct intel_connector *connector)
goto out;
}
intel_hdcp_update_value(connector,
DRM_MODE_CONTENT_PROTECTION_DESIRED,
true);
ret = intel_hdcp1_enable(connector);
if (ret) {
drm_err(display->drm, "Failed to enable hdcp (%d)\n", ret);
intel_hdcp_update_value(connector,
DRM_MODE_CONTENT_PROTECTION_DESIRED,
true);
goto out;
}
out:
mutex_unlock(&dig_port->hdcp_mutex);
mutex_unlock(&hdcp->mutex);

View File

@ -14,9 +14,6 @@ config DRM_MEDIATEK
select DRM_BRIDGE_CONNECTOR
select DRM_MIPI_DSI
select DRM_PANEL
select MEMORY
select MTK_SMI
select PHY_MTK_MIPI_DSI
select VIDEOMODE_HELPERS
help
Choose this option if you have a MediaTek SoC.
@ -27,7 +24,6 @@ config DRM_MEDIATEK
config DRM_MEDIATEK_DP
tristate "DRM DPTX Support for MediaTek SoCs"
depends on DRM_MEDIATEK
select PHY_MTK_DP
select DRM_DISPLAY_HELPER
select DRM_DISPLAY_DP_HELPER
select DRM_DISPLAY_DP_AUX_BUS
@ -38,6 +34,5 @@ config DRM_MEDIATEK_HDMI
tristate "DRM HDMI Support for Mediatek SoCs"
depends on DRM_MEDIATEK
select SND_SOC_HDMI_CODEC if SND_SOC
select PHY_MTK_HDMI
help
DRM/KMS HDMI driver for Mediatek SoCs

View File

@ -112,6 +112,11 @@ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
drm_crtc_handle_vblank(&mtk_crtc->base);
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
if (mtk_crtc->cmdq_client.chan)
return;
#endif
spin_lock_irqsave(&mtk_crtc->config_lock, flags);
if (!mtk_crtc->config_updating && mtk_crtc->pending_needs_vblank) {
mtk_crtc_finish_page_flip(mtk_crtc);
@ -284,10 +289,8 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
state = to_mtk_crtc_state(mtk_crtc->base.state);
spin_lock_irqsave(&mtk_crtc->config_lock, flags);
if (mtk_crtc->config_updating) {
spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
if (mtk_crtc->config_updating)
goto ddp_cmdq_cb_out;
}
state->pending_config = false;
@ -315,10 +318,15 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
mtk_crtc->pending_async_planes = false;
}
spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
ddp_cmdq_cb_out:
if (mtk_crtc->pending_needs_vblank) {
mtk_crtc_finish_page_flip(mtk_crtc);
mtk_crtc->pending_needs_vblank = false;
}
spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
mtk_crtc->cmdq_vblank_cnt = 0;
wake_up(&mtk_crtc->cb_blocking_queue);
}
@ -606,13 +614,18 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
*/
mtk_crtc->cmdq_vblank_cnt = 3;
spin_lock_irqsave(&mtk_crtc->config_lock, flags);
mtk_crtc->config_updating = false;
spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
}
#endif
#else
spin_lock_irqsave(&mtk_crtc->config_lock, flags);
mtk_crtc->config_updating = false;
spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
#endif
mutex_unlock(&mtk_crtc->hw_lock);
}

View File

@ -460,6 +460,29 @@ static unsigned int mtk_ovl_fmt_convert(struct mtk_disp_ovl *ovl,
}
}
static void mtk_ovl_afbc_layer_config(struct mtk_disp_ovl *ovl,
unsigned int idx,
struct mtk_plane_pending_state *pending,
struct cmdq_pkt *cmdq_pkt)
{
unsigned int pitch_msb = pending->pitch >> 16;
unsigned int hdr_pitch = pending->hdr_pitch;
unsigned int hdr_addr = pending->hdr_addr;
if (pending->modifier != DRM_FORMAT_MOD_LINEAR) {
mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs,
DISP_REG_OVL_HDR_ADDR(ovl, idx));
mtk_ddp_write_relaxed(cmdq_pkt,
OVL_PITCH_MSB_2ND_SUBBUF | pitch_msb,
&ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs,
DISP_REG_OVL_HDR_PITCH(ovl, idx));
} else {
mtk_ddp_write_relaxed(cmdq_pkt, pitch_msb,
&ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
}
}
void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
struct mtk_plane_state *state,
struct cmdq_pkt *cmdq_pkt)
@ -467,25 +490,14 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
struct mtk_plane_pending_state *pending = &state->pending;
unsigned int addr = pending->addr;
unsigned int hdr_addr = pending->hdr_addr;
unsigned int pitch = pending->pitch;
unsigned int hdr_pitch = pending->hdr_pitch;
unsigned int pitch_lsb = pending->pitch & GENMASK(15, 0);
unsigned int fmt = pending->format;
unsigned int rotation = pending->rotation;
unsigned int offset = (pending->y << 16) | pending->x;
unsigned int src_size = (pending->height << 16) | pending->width;
unsigned int blend_mode = state->base.pixel_blend_mode;
unsigned int ignore_pixel_alpha = 0;
unsigned int con;
bool is_afbc = pending->modifier != DRM_FORMAT_MOD_LINEAR;
union overlay_pitch {
struct split_pitch {
u16 lsb;
u16 msb;
} split_pitch;
u32 pitch;
} overlay_pitch;
overlay_pitch.pitch = pitch;
if (!pending->enable) {
mtk_ovl_layer_off(dev, idx, cmdq_pkt);
@ -513,22 +525,30 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
ignore_pixel_alpha = OVL_CONST_BLEND;
}
if (pending->rotation & DRM_MODE_REFLECT_Y) {
/*
* Treat rotate 180 as flip x + flip y, and XOR the original rotation value
* to flip x + flip y to support both at the same time.
*/
if (rotation & DRM_MODE_ROTATE_180)
rotation ^= DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y;
if (rotation & DRM_MODE_REFLECT_Y) {
con |= OVL_CON_VIRT_FLIP;
addr += (pending->height - 1) * pending->pitch;
}
if (pending->rotation & DRM_MODE_REFLECT_X) {
if (rotation & DRM_MODE_REFLECT_X) {
con |= OVL_CON_HORZ_FLIP;
addr += pending->pitch - 1;
}
if (ovl->data->supports_afbc)
mtk_ovl_set_afbc(ovl, cmdq_pkt, idx, is_afbc);
mtk_ovl_set_afbc(ovl, cmdq_pkt, idx,
pending->modifier != DRM_FORMAT_MOD_LINEAR);
mtk_ddp_write_relaxed(cmdq_pkt, con, &ovl->cmdq_reg, ovl->regs,
DISP_REG_OVL_CON(idx));
mtk_ddp_write_relaxed(cmdq_pkt, overlay_pitch.split_pitch.lsb | ignore_pixel_alpha,
mtk_ddp_write_relaxed(cmdq_pkt, pitch_lsb | ignore_pixel_alpha,
&ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH(idx));
mtk_ddp_write_relaxed(cmdq_pkt, src_size, &ovl->cmdq_reg, ovl->regs,
DISP_REG_OVL_SRC_SIZE(idx));
@ -537,19 +557,8 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs,
DISP_REG_OVL_ADDR(ovl, idx));
if (is_afbc) {
mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs,
DISP_REG_OVL_HDR_ADDR(ovl, idx));
mtk_ddp_write_relaxed(cmdq_pkt,
OVL_PITCH_MSB_2ND_SUBBUF | overlay_pitch.split_pitch.msb,
&ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs,
DISP_REG_OVL_HDR_PITCH(ovl, idx));
} else {
mtk_ddp_write_relaxed(cmdq_pkt,
overlay_pitch.split_pitch.msb,
&ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
}
if (ovl->data->supports_afbc)
mtk_ovl_afbc_layer_config(ovl, idx, pending, cmdq_pkt);
mtk_ovl_set_bit_depth(dev, idx, fmt, cmdq_pkt);
mtk_ovl_layer_on(dev, idx, cmdq_pkt);

View File

@ -543,18 +543,16 @@ static int mtk_dp_set_color_format(struct mtk_dp *mtk_dp,
enum dp_pixelformat color_format)
{
u32 val;
/* update MISC0 */
mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034,
color_format << DP_TEST_COLOR_FORMAT_SHIFT,
DP_TEST_COLOR_FORMAT_MASK);
u32 misc0_color;
switch (color_format) {
case DP_PIXELFORMAT_YUV422:
val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422;
misc0_color = DP_COLOR_FORMAT_YCbCr422;
break;
case DP_PIXELFORMAT_RGB:
val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB;
misc0_color = DP_COLOR_FORMAT_RGB;
break;
default:
drm_warn(mtk_dp->drm_dev, "Unsupported color format: %d\n",
@ -562,6 +560,11 @@ static int mtk_dp_set_color_format(struct mtk_dp *mtk_dp,
return -EINVAL;
}
/* update MISC0 */
mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034,
misc0_color,
DP_TEST_COLOR_FORMAT_MASK);
mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C,
val, PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK);
return 0;
@ -2100,7 +2103,6 @@ static enum drm_connector_status mtk_dp_bdg_detect(struct drm_bridge *bridge)
struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge);
enum drm_connector_status ret = connector_status_disconnected;
bool enabled = mtk_dp->enabled;
u8 sink_count = 0;
if (!mtk_dp->train_info.cable_plugged_in)
return ret;
@ -2115,8 +2117,8 @@ static enum drm_connector_status mtk_dp_bdg_detect(struct drm_bridge *bridge)
* function, we just need to check the HPD connection to check
* whether we connect to a sink device.
*/
drm_dp_dpcd_readb(&mtk_dp->aux, DP_SINK_COUNT, &sink_count);
if (DP_GET_SINK_COUNT(sink_count))
if (drm_dp_read_sink_count(&mtk_dp->aux) > 0)
ret = connector_status_connected;
if (!enabled)
@ -2408,12 +2410,19 @@ mtk_dp_bridge_mode_valid(struct drm_bridge *bridge,
{
struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge);
u32 bpp = info->color_formats & DRM_COLOR_FORMAT_YCBCR422 ? 16 : 24;
u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) *
drm_dp_max_lane_count(mtk_dp->rx_cap),
drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) *
mtk_dp->max_lanes);
u32 lane_count_min = mtk_dp->train_info.lane_count;
u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) *
lane_count_min;
if (rate < mode->clock * bpp / 8)
/*
* FEC overhead is approximately 2.4% from DP 1.4a spec 2.2.1.4.2.
* The down-spread amplitude shall either be disabled (0.0%) or up
* to 0.5% from 1.4a 3.5.2.6. Add up to approximately 3% total overhead.
*
* Because rate is already divided by 10,
* mode->clock does not need to be multiplied by 10.
*/
if ((rate * 97 / 100) < (mode->clock * bpp / 8))
return MODE_CLOCK_HIGH;
return MODE_OK;
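Folding in the 3% margin turns the check into rate * 97 / 100 >= clock * bpp / 8. A worked example, assuming HBR2 (drm_dp_bw_code_to_link_rate() yields 540000) on 4 lanes against a 594 MHz 4K@60 pixel clock at 24 bpp:

#include <stdio.h>

int main(void)
{
	unsigned int rate = 540000 * 4;	/* HBR2, 4 lanes */
	unsigned int clock = 594000;	/* 4K@60 pixel clock in kHz */
	unsigned int bpp = 24;

	/* 3% FEC + down-spread margin, as in the check above. */
	if (rate * 97 / 100 < clock * bpp / 8)
		puts("MODE_CLOCK_HIGH");
	else
		puts("MODE_OK");
	return 0;
}

Here 2160000 * 97 / 100 = 2095200 still clears 594000 * 24 / 8 = 1782000, so the mode survives the added overhead.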
@ -2454,10 +2463,9 @@ static u32 *mtk_dp_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
struct drm_display_mode *mode = &crtc_state->adjusted_mode;
struct drm_display_info *display_info =
&conn_state->connector->display_info;
u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) *
drm_dp_max_lane_count(mtk_dp->rx_cap),
drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) *
mtk_dp->max_lanes);
u32 lane_count_min = mtk_dp->train_info.lane_count;
u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) *
lane_count_min;
*num_input_fmts = 0;
@ -2466,8 +2474,8 @@ static u32 *mtk_dp_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
* datarate of YUV422 and sink device supports YUV422, we output YUV422
* format. Use this condition, we can support more resolution.
*/
if ((rate < (mode->clock * 24 / 8)) &&
(rate > (mode->clock * 16 / 8)) &&
if (((rate * 97 / 100) < (mode->clock * 24 / 8)) &&
((rate * 97 / 100) > (mode->clock * 16 / 8)) &&
(display_info->color_formats & DRM_COLOR_FORMAT_YCBCR422)) {
input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL);
if (!input_fmts)

View File

@ -373,11 +373,12 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
struct mtk_drm_private *temp_drm_priv;
struct device_node *phandle = dev->parent->of_node;
const struct of_device_id *of_id;
struct device_node *node;
struct device *drm_dev;
unsigned int cnt = 0;
int i, j;
for_each_child_of_node_scoped(phandle->parent, node) {
for_each_child_of_node(phandle->parent, node) {
struct platform_device *pdev;
of_id = of_match_node(mtk_drm_of_ids, node);
@ -406,8 +407,10 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
if (temp_drm_priv->mtk_drm_bound)
cnt++;
if (cnt == MAX_CRTC)
if (cnt == MAX_CRTC) {
of_node_put(node);
break;
}
}
if (drm_priv->data->mmsys_dev_num == cnt) {
@ -673,6 +676,8 @@ err_deinit:
err_free:
private->drm = NULL;
drm_dev_put(drm);
for (i = 0; i < private->data->mmsys_dev_num; i++)
private->all_drm_private[i]->drm = NULL;
return ret;
}
@ -900,7 +905,7 @@ static int mtk_drm_of_ddp_path_build_one(struct device *dev, enum mtk_crtc_path
const unsigned int **out_path,
unsigned int *out_path_len)
{
struct device_node *next, *prev, *vdo = dev->parent->of_node;
struct device_node *next = NULL, *prev, *vdo = dev->parent->of_node;
unsigned int temp_path[DDP_COMPONENT_DRM_ID_MAX] = { 0 };
unsigned int *final_ddp_path;
unsigned short int idx = 0;
@ -1089,7 +1094,7 @@ static int mtk_drm_probe(struct platform_device *pdev)
/* No devicetree graphs support: go with hardcoded paths if present */
dev_dbg(dev, "Using hardcoded paths for MMSYS %u\n", mtk_drm_data->mmsys_id);
private->data = mtk_drm_data;
};
}
private->all_drm_private = devm_kmalloc_array(dev, private->data->mmsys_dev_num,
sizeof(*private->all_drm_private),

View File

@ -139,11 +139,11 @@
#define CLK_HS_POST GENMASK(15, 8)
#define CLK_HS_EXIT GENMASK(23, 16)
#define DSI_VM_CMD_CON 0x130
/* DSI_VM_CMD_CON */
#define VM_CMD_EN BIT(0)
#define TS_VFP_EN BIT(5)
#define DSI_SHADOW_DEBUG 0x190U
/* DSI_SHADOW_DEBUG */
#define FORCE_COMMIT BIT(0)
#define BYPASS_SHADOW BIT(1)
@ -187,6 +187,8 @@ struct phy;
struct mtk_dsi_driver_data {
const u32 reg_cmdq_off;
const u32 reg_vm_cmd_off;
const u32 reg_shadow_dbg_off;
bool has_shadow_ctl;
bool has_size_ctl;
bool cmdq_long_packet_ctl;
@ -246,23 +248,22 @@ static void mtk_dsi_phy_timconfig(struct mtk_dsi *dsi)
u32 data_rate_mhz = DIV_ROUND_UP(dsi->data_rate, HZ_PER_MHZ);
struct mtk_phy_timing *timing = &dsi->phy_timing;
timing->lpx = (80 * data_rate_mhz / (8 * 1000)) + 1;
timing->da_hs_prepare = (59 * data_rate_mhz + 4 * 1000) / 8000 + 1;
timing->da_hs_zero = (163 * data_rate_mhz + 11 * 1000) / 8000 + 1 -
timing->lpx = (60 * data_rate_mhz / (8 * 1000)) + 1;
timing->da_hs_prepare = (80 * data_rate_mhz + 4 * 1000) / 8000;
timing->da_hs_zero = (170 * data_rate_mhz + 10 * 1000) / 8000 + 1 -
timing->da_hs_prepare;
timing->da_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1;
timing->da_hs_trail = timing->da_hs_prepare + 1;
timing->ta_go = 4 * timing->lpx;
timing->ta_sure = 3 * timing->lpx / 2;
timing->ta_get = 5 * timing->lpx;
timing->da_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1;
timing->ta_go = 4 * timing->lpx - 2;
timing->ta_sure = timing->lpx + 2;
timing->ta_get = 4 * timing->lpx;
timing->da_hs_exit = 2 * timing->lpx + 1;
timing->clk_hs_prepare = (57 * data_rate_mhz / (8 * 1000)) + 1;
timing->clk_hs_post = (65 * data_rate_mhz + 53 * 1000) / 8000 + 1;
timing->clk_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1;
timing->clk_hs_zero = (330 * data_rate_mhz / (8 * 1000)) + 1 -
timing->clk_hs_prepare;
timing->clk_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1;
timing->clk_hs_prepare = 70 * data_rate_mhz / (8 * 1000);
timing->clk_hs_post = timing->clk_hs_prepare + 8;
timing->clk_hs_trail = timing->clk_hs_prepare;
timing->clk_hs_zero = timing->clk_hs_trail * 4;
timing->clk_hs_exit = 2 * timing->clk_hs_trail;
timcon0 = FIELD_PREP(LPX, timing->lpx) |
FIELD_PREP(HS_PREP, timing->da_hs_prepare) |
@ -367,8 +368,8 @@ static void mtk_dsi_set_mode(struct mtk_dsi *dsi)
static void mtk_dsi_set_vm_cmd(struct mtk_dsi *dsi)
{
mtk_dsi_mask(dsi, DSI_VM_CMD_CON, VM_CMD_EN, VM_CMD_EN);
mtk_dsi_mask(dsi, DSI_VM_CMD_CON, TS_VFP_EN, TS_VFP_EN);
mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, VM_CMD_EN, VM_CMD_EN);
mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, TS_VFP_EN, TS_VFP_EN);
}
static void mtk_dsi_rxtx_control(struct mtk_dsi *dsi)
@ -714,7 +715,7 @@ static int mtk_dsi_poweron(struct mtk_dsi *dsi)
if (dsi->driver_data->has_shadow_ctl)
writel(FORCE_COMMIT | BYPASS_SHADOW,
dsi->regs + DSI_SHADOW_DEBUG);
dsi->regs + dsi->driver_data->reg_shadow_dbg_off);
mtk_dsi_reset_engine(dsi);
mtk_dsi_phy_timconfig(dsi);
@ -1263,26 +1264,36 @@ static void mtk_dsi_remove(struct platform_device *pdev)
static const struct mtk_dsi_driver_data mt8173_dsi_driver_data = {
.reg_cmdq_off = 0x200,
.reg_vm_cmd_off = 0x130,
.reg_shadow_dbg_off = 0x190
};
static const struct mtk_dsi_driver_data mt2701_dsi_driver_data = {
.reg_cmdq_off = 0x180,
.reg_vm_cmd_off = 0x130,
.reg_shadow_dbg_off = 0x190
};
static const struct mtk_dsi_driver_data mt8183_dsi_driver_data = {
.reg_cmdq_off = 0x200,
.reg_vm_cmd_off = 0x130,
.reg_shadow_dbg_off = 0x190,
.has_shadow_ctl = true,
.has_size_ctl = true,
};
static const struct mtk_dsi_driver_data mt8186_dsi_driver_data = {
.reg_cmdq_off = 0xd00,
.reg_vm_cmd_off = 0x200,
.reg_shadow_dbg_off = 0xc00,
.has_shadow_ctl = true,
.has_size_ctl = true,
};
static const struct mtk_dsi_driver_data mt8188_dsi_driver_data = {
.reg_cmdq_off = 0xd00,
.reg_vm_cmd_off = 0x200,
.reg_shadow_dbg_off = 0xc00,
.has_shadow_ctl = true,
.has_size_ctl = true,
.cmdq_long_packet_ctl = true,

View File

@ -384,7 +384,7 @@ nouveau_acpi_edid(struct drm_device *dev, struct drm_connector *connector)
if (ret < 0)
return NULL;
return kmemdup(edid, EDID_LENGTH, GFP_KERNEL);
return edid;
}
bool nouveau_acpi_video_backlight_use_native(void)

View File

@ -387,6 +387,10 @@ int xe_gt_init_early(struct xe_gt *gt)
xe_force_wake_init_gt(gt, gt_to_fw(gt));
spin_lock_init(&gt->global_invl_lock);
err = xe_gt_tlb_invalidation_init_early(gt);
if (err)
return err;
return 0;
}
@ -588,10 +592,6 @@ int xe_gt_init(struct xe_gt *gt)
xe_hw_fence_irq_init(&gt->fence_irq[i]);
}
err = xe_gt_tlb_invalidation_init(gt);
if (err)
return err;
err = xe_gt_pagefault_init(gt);
if (err)
return err;

View File

@ -122,10 +122,12 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
if (!xe_gt_is_media_type(gt))
gtidle->powergate_enable |= RENDER_POWERGATE_ENABLE;
for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) {
if ((gt->info.engine_mask & BIT(i)))
gtidle->powergate_enable |= (VDN_HCP_POWERGATE_ENABLE(j) |
VDN_MFXVDENC_POWERGATE_ENABLE(j));
if (xe->info.platform != XE_DG1) {
for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) {
if ((gt->info.engine_mask & BIT(i)))
gtidle->powergate_enable |= (VDN_HCP_POWERGATE_ENABLE(j) |
VDN_MFXVDENC_POWERGATE_ENABLE(j));
}
}
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);

View File

@ -106,7 +106,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
}
/**
* xe_gt_tlb_invalidation_init - Initialize GT TLB invalidation state
* xe_gt_tlb_invalidation_init_early - Initialize GT TLB invalidation state
* @gt: graphics tile
*
* Initialize GT TLB invalidation state, purely software initialization, should
@ -114,7 +114,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
*
* Return: 0 on success, negative error code on error.
*/
int xe_gt_tlb_invalidation_init(struct xe_gt *gt)
int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
{
gt->tlb_invalidation.seqno = 1;
INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);

View File

@ -14,7 +14,8 @@ struct xe_gt;
struct xe_guc;
struct xe_vma;
int xe_gt_tlb_invalidation_init(struct xe_gt *gt);
int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,

View File

@ -165,6 +165,7 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
{
u8 scsi_cmd[MAX_COMMAND_SIZE];
enum req_op op;
int err;
memset(scsi_cmd, 0, sizeof(scsi_cmd));
scsi_cmd[0] = ATA_16;
@ -192,8 +193,11 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
scsi_cmd[12] = lba_high;
scsi_cmd[14] = ata_command;
return scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
ATA_SECT_SIZE, HZ, 5, NULL);
err = scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
ATA_SECT_SIZE, HZ, 5, NULL);
if (err > 0)
err = -EIO;
return err;
}
static int drivetemp_ata_command(struct drivetemp_data *st, u8 feature,
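scsi_execute_cmd() returns a negative errno for submission failures but a positive result word when the device itself reports a problem; hwmon callers expect only zero or a negative errno, hence the err > 0 squash to -EIO. A standalone sketch of that mapping (fake_execute is hypothetical):

#include <errno.h>
#include <stdio.h>

/* Stand-in: <0 errno, 0 success, >0 device-reported status. */
static int fake_execute(int outcome) { return outcome; }

static int scsi_command(int outcome)
{
	int err = fake_execute(outcome);

	if (err > 0)	/* map device status to a plain errno */
		err = -EIO;
	return err;
}

int main(void)
{
	printf("%d %d %d\n", scsi_command(0), scsi_command(2),
	       scsi_command(-ENODEV));
	return 0;
}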

View File

@ -91,6 +91,7 @@
#define AD4695_T_WAKEUP_SW_MS 3
#define AD4695_T_REFBUF_MS 100
#define AD4695_T_REGCONFIG_NS 20
#define AD4695_T_SCK_CNV_DELAY_NS 80
#define AD4695_REG_ACCESS_SCLK_HZ (10 * MEGA)
/* Max number of voltage input channels. */
@ -132,8 +133,13 @@ struct ad4695_state {
unsigned int vref_mv;
/* Common mode input pin voltage. */
unsigned int com_mv;
/* 1 per voltage and temperature chan plus 1 xfer to trigger 1st CNV */
struct spi_transfer buf_read_xfer[AD4695_MAX_CHANNELS + 2];
/*
* 2 per voltage and temperature chan plus 1 xfer to trigger 1st
* CNV. Excluding the trigger xfer, every 2nd xfer only serves
* to control CS and add a delay between the last SCLK and next
* CNV rising edges.
*/
struct spi_transfer buf_read_xfer[AD4695_MAX_CHANNELS * 2 + 3];
struct spi_message buf_read_msg;
/* Raw conversion data received. */
u8 buf[ALIGN((AD4695_MAX_CHANNELS + 2) * AD4695_MAX_CHANNEL_SIZE,
@ -423,7 +429,7 @@ static int ad4695_buffer_preenable(struct iio_dev *indio_dev)
u8 temp_chan_bit = st->chip_info->num_voltage_inputs;
u32 bit, num_xfer, num_slots;
u32 temp_en = 0;
int ret;
int ret, rx_buf_offset = 0;
/*
* We are using the advanced sequencer since it is the only way to read
@ -449,11 +455,9 @@ static int ad4695_buffer_preenable(struct iio_dev *indio_dev)
iio_for_each_active_channel(indio_dev, bit) {
xfer = &st->buf_read_xfer[num_xfer];
xfer->bits_per_word = 16;
xfer->rx_buf = &st->buf[(num_xfer - 1) * 2];
xfer->rx_buf = &st->buf[rx_buf_offset];
xfer->len = 2;
xfer->cs_change = 1;
xfer->cs_change_delay.value = AD4695_T_CONVERT_NS;
xfer->cs_change_delay.unit = SPI_DELAY_UNIT_NSECS;
rx_buf_offset += xfer->len;
if (bit == temp_chan_bit) {
temp_en = 1;
@ -468,21 +472,44 @@ static int ad4695_buffer_preenable(struct iio_dev *indio_dev)
}
num_xfer++;
/*
* We need to add a blank xfer in data reads, to meet the timing
* requirement of a minimum delay between the last SCLK rising
* edge and the CS deassert.
*/
xfer = &st->buf_read_xfer[num_xfer];
xfer->delay.value = AD4695_T_SCK_CNV_DELAY_NS;
xfer->delay.unit = SPI_DELAY_UNIT_NSECS;
xfer->cs_change = 1;
xfer->cs_change_delay.value = AD4695_T_CONVERT_NS;
xfer->cs_change_delay.unit = SPI_DELAY_UNIT_NSECS;
num_xfer++;
}
/*
* The advanced sequencer requires that at least 2 slots are enabled.
* Since slot 0 is always used for other purposes, we need only 1
* enabled voltage channel to meet this requirement. If the temperature
* channel is the only enabled channel, we need to add one more slot
* in the sequence but not read from it.
* enabled voltage channel to meet this requirement. If the temperature
* channel is the only enabled channel, we need to add one more slot in
* the sequence but not read from it. This is because the temperature
* sensor is sampled at the end of the channel sequence in advanced
* sequencer mode (see datasheet page 38).
*
* From the iio_for_each_active_channel() block above, we now have an
* xfer with data followed by a blank xfer to allow us to meet the
* timing spec, so move both of those up before adding an extra to
* handle the temperature-only case.
*/
if (num_slots < 2) {
/* move last xfer so we can insert one more xfer before it */
st->buf_read_xfer[num_xfer] = *xfer;
/* Move last two xfers */
st->buf_read_xfer[num_xfer] = st->buf_read_xfer[num_xfer - 1];
st->buf_read_xfer[num_xfer - 1] = st->buf_read_xfer[num_xfer - 2];
num_xfer++;
/* modify 2nd to last xfer for extra slot */
/* Modify inserted xfer for extra slot. */
xfer = &st->buf_read_xfer[num_xfer - 3];
memset(xfer, 0, sizeof(*xfer));
xfer->cs_change = 1;
xfer->delay.value = st->chip_info->t_acq_ns;
@ -499,6 +526,12 @@ static int ad4695_buffer_preenable(struct iio_dev *indio_dev)
return ret;
num_slots++;
/*
* We still want to point at the last xfer when finished, so
* update the pointer.
*/
xfer = &st->buf_read_xfer[num_xfer - 1];
}
/*
@ -583,8 +616,20 @@ out:
*/
static int ad4695_read_one_sample(struct ad4695_state *st, unsigned int address)
{
struct spi_transfer xfer[2] = { };
int ret, i = 0;
struct spi_transfer xfers[2] = {
{
.speed_hz = AD4695_REG_ACCESS_SCLK_HZ,
.bits_per_word = 16,
.tx_buf = &st->cnv_cmd,
.len = 2,
},
{
/* Required delay between last SCLK and CNV/CS */
.delay.value = AD4695_T_SCK_CNV_DELAY_NS,
.delay.unit = SPI_DELAY_UNIT_NSECS,
}
};
int ret;
ret = ad4695_set_single_cycle_mode(st, address);
if (ret)
@ -592,29 +637,22 @@ static int ad4695_read_one_sample(struct ad4695_state *st, unsigned int address)
/*
* Setting the first channel to the temperature channel isn't supported
* in single-cycle mode, so we have to do an extra xfer to read the
* temperature.
* in single-cycle mode, so we have to do an extra conversion to read
* the temperature.
*/
if (address == AD4695_CMD_TEMP_CHAN) {
/* We aren't reading, so we can make this a short xfer. */
st->cnv_cmd2 = AD4695_CMD_TEMP_CHAN << 3;
xfer[0].tx_buf = &st->cnv_cmd2;
xfer[0].len = 1;
xfer[0].cs_change = 1;
xfer[0].cs_change_delay.value = AD4695_T_CONVERT_NS;
xfer[0].cs_change_delay.unit = SPI_DELAY_UNIT_NSECS;
st->cnv_cmd = AD4695_CMD_TEMP_CHAN << 11;
i = 1;
ret = spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers));
if (ret)
return ret;
}
/* Then read the result and exit conversion mode. */
st->cnv_cmd = AD4695_CMD_EXIT_CNV_MODE << 11;
xfer[i].bits_per_word = 16;
xfer[i].tx_buf = &st->cnv_cmd;
xfer[i].rx_buf = &st->raw_data;
xfer[i].len = 2;
xfers[0].rx_buf = &st->raw_data;
return spi_sync_transfer(st->spi, xfer, i + 1);
return spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers));
}
static int ad4695_read_raw(struct iio_dev *indio_dev,
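The fix leans on per-transfer delays: an spi_transfer with no buffers but a .delay set keeps CS asserted and inserts the required 80 ns between the last SCLK edge and the next CNV edge. A toy standalone model of a message ending in such a delay-only transfer:

#include <stdio.h>
#include <time.h>

struct xfer {
	const void *tx;
	void *rx;
	unsigned int len;
	unsigned int delay_ns;	/* applied after the transfer, CS held */
};

/* Toy controller: a transfer with no buffers still inserts its delay,
 * mirroring the driver's buffer-less spi_transfer used purely for the
 * 80 ns gap between the last SCLK and the next CNV rising edge. */
static void run(const struct xfer *x, unsigned int n)
{
	for (unsigned int i = 0; i < n; i++) {
		if (x[i].len)
			printf("xfer %u: %u byte(s)\n", i, x[i].len);
		if (x[i].delay_ns) {
			struct timespec ts = { 0, x[i].delay_ns };

			nanosleep(&ts, NULL);
			printf("xfer %u: %u ns delay\n", i, x[i].delay_ns);
		}
	}
}

int main(void)
{
	unsigned short cmd = 0;
	struct xfer msg[] = {
		{ .tx = &cmd, .len = 2 },
		{ .delay_ns = 80 },	/* delay-only transfer */
	};

	run(msg, 2);
	return 0;
}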

View File

@ -917,6 +917,9 @@ static int ad7124_setup(struct ad7124_state *st)
* set all channels to this default value.
*/
ad7124_set_channel_odr(st, i, 10);
/* Disable all channels to prevent unintended conversions. */
ad_sd_write_reg(&st->sd, AD7124_CHANNEL(i), 2, 0);
}
ret = ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control);

View File

@ -200,6 +200,7 @@ struct ad7173_channel {
struct ad7173_state {
struct ad_sigma_delta sd;
struct ad_sigma_delta_info sigma_delta_info;
const struct ad7173_device_info *info;
struct ad7173_channel *channels;
struct regulator_bulk_data regulators[3];
@ -753,7 +754,7 @@ static int ad7173_disable_one(struct ad_sigma_delta *sd, unsigned int chan)
return ad_sd_write_reg(sd, AD7173_REG_CH(chan), 2, 0);
}
static struct ad_sigma_delta_info ad7173_sigma_delta_info = {
static const struct ad_sigma_delta_info ad7173_sigma_delta_info = {
.set_channel = ad7173_set_channel,
.append_status = ad7173_append_status,
.disable_all = ad7173_disable_all,
@ -1403,7 +1404,7 @@ static int ad7173_fw_parse_device_config(struct iio_dev *indio_dev)
if (ret < 0)
return dev_err_probe(dev, ret, "Interrupt 'rdy' is required\n");
ad7173_sigma_delta_info.irq_line = ret;
st->sigma_delta_info.irq_line = ret;
return ad7173_fw_parse_channel_config(indio_dev);
}
@ -1436,8 +1437,9 @@ static int ad7173_probe(struct spi_device *spi)
spi->mode = SPI_MODE_3;
spi_setup(spi);
ad7173_sigma_delta_info.num_slots = st->info->num_configs;
ret = ad_sd_init(&st->sd, indio_dev, spi, &ad7173_sigma_delta_info);
st->sigma_delta_info = ad7173_sigma_delta_info;
st->sigma_delta_info.num_slots = st->info->num_configs;
ret = ad_sd_init(&st->sd, indio_dev, spi, &st->sigma_delta_info);
if (ret)
return ret;

View File

@ -895,7 +895,7 @@ static int ad9467_update_scan_mode(struct iio_dev *indio_dev,
return 0;
}
static struct iio_info ad9467_info = {
static const struct iio_info ad9467_info = {
.read_raw = ad9467_read_raw,
.write_raw = ad9467_write_raw,
.update_scan_mode = ad9467_update_scan_mode,
@ -903,6 +903,14 @@ static struct iio_info ad9467_info = {
.read_avail = ad9467_read_avail,
};
/* Same as above, but without .read_avail */
static const struct iio_info ad9467_info_no_read_avail = {
.read_raw = ad9467_read_raw,
.write_raw = ad9467_write_raw,
.update_scan_mode = ad9467_update_scan_mode,
.debugfs_reg_access = ad9467_reg_access,
};
static int ad9467_scale_fill(struct ad9467_state *st)
{
const struct ad9467_chip_info *info = st->info;
@ -1214,11 +1222,12 @@ static int ad9467_probe(struct spi_device *spi)
}
if (st->info->num_scales > 1)
ad9467_info.read_avail = ad9467_read_avail;
indio_dev->info = &ad9467_info;
else
indio_dev->info = &ad9467_info_no_read_avail;
indio_dev->name = st->info->name;
indio_dev->channels = st->info->channels;
indio_dev->num_channels = st->info->num_channels;
indio_dev->info = &ad9467_info;
ret = ad9467_iio_backend_get(st);
if (ret)
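Splitting one mutable iio_info into two const tables is the usual fix for patching a shared static at probe time: it stays correct with multiple device instances and lets the ops live in rodata. A standalone sketch of the pattern (names hypothetical):

#include <stdio.h>

struct ops {
	int (*read_avail)(void);
};

static int read_avail(void) { return 42; }

/* Two const tables instead of editing a shared static in probe(). */
static const struct ops ops_full = { .read_avail = read_avail };
static const struct ops ops_basic = { /* no .read_avail */ };

int main(void)
{
	int num_scales = 1;
	const struct ops *o = num_scales > 1 ? &ops_full : &ops_basic;

	printf("read_avail %s\n", o->read_avail ? "present" : "absent");
	return 0;
}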

View File

@ -979,7 +979,7 @@ static int at91_ts_register(struct iio_dev *idev,
return ret;
err:
input_free_device(st->ts_input);
input_free_device(input);
return ret;
}

View File

@ -368,6 +368,8 @@ static irqreturn_t rockchip_saradc_trigger_handler(int irq, void *p)
int ret;
int i, j = 0;
memset(&data, 0, sizeof(data));
mutex_lock(&info->lock);
iio_for_each_active_channel(i_dev, i) {

View File

@ -691,11 +691,14 @@ static int stm32_dfsdm_generic_channel_parse_of(struct stm32_dfsdm *dfsdm,
return -EINVAL;
}
ret = fwnode_property_read_string(node, "label", &ch->datasheet_name);
if (ret < 0) {
dev_err(&indio_dev->dev,
" Error parsing 'label' for idx %d\n", ch->channel);
return ret;
if (fwnode_property_present(node, "label")) {
/* label is optional */
ret = fwnode_property_read_string(node, "label", &ch->datasheet_name);
if (ret < 0) {
dev_err(&indio_dev->dev,
" Error parsing 'label' for idx %d\n", ch->channel);
return ret;
}
}
df_ch = &dfsdm->ch_list[ch->channel];

View File

@ -500,12 +500,14 @@ static irqreturn_t ads1119_trigger_handler(int irq, void *private)
struct iio_dev *indio_dev = pf->indio_dev;
struct ads1119_state *st = iio_priv(indio_dev);
struct {
unsigned int sample;
s16 sample;
s64 timestamp __aligned(8);
} scan;
unsigned int index;
int ret;
memset(&scan, 0, sizeof(scan));
if (!iio_trigger_using_own(indio_dev)) {
index = find_first_bit(indio_dev->active_scan_mask,
iio_get_masklength(indio_dev));
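The scan struct is copied wholesale into the IIO buffer and on to userspace, so the compiler-inserted padding between the 2-byte sample and the 8-byte-aligned timestamp must be zeroed rather than left holding stack contents. A standalone demonstration of where that padding lives; the ads8688 and dummy-driver hunks below fix the same class of leak with = { } and kzalloc():

#include <stdio.h>
#include <string.h>

struct scan {
	short sample;		/* 2 bytes of data ... */
	long long timestamp;	/* ... then implicit padding before this */
} __attribute__((aligned(8)));

int main(void)
{
	struct scan s;

	/* Zero first so the 6 padding bytes never leak stack contents
	 * when the whole struct is copied out. */
	memset(&s, 0, sizeof(s));
	s.sample = 123;
	s.timestamp = 456;
	printf("struct size %zu, sample %d\n", sizeof(s), s.sample);
	return 0;
}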

View File

@ -183,9 +183,9 @@ static int ads124s_reset(struct iio_dev *indio_dev)
struct ads124s_private *priv = iio_priv(indio_dev);
if (priv->reset_gpio) {
gpiod_set_value(priv->reset_gpio, 0);
gpiod_set_value_cansleep(priv->reset_gpio, 0);
udelay(200);
gpiod_set_value(priv->reset_gpio, 1);
gpiod_set_value_cansleep(priv->reset_gpio, 1);
} else {
return ads124s_write_cmd(indio_dev, ADS124S08_CMD_RESET);
}

View File

@ -613,6 +613,8 @@ static int ads1298_init(struct iio_dev *indio_dev)
}
indio_dev->name = devm_kasprintf(dev, GFP_KERNEL, "ads129%u%s",
indio_dev->num_channels, suffix);
if (!indio_dev->name)
return -ENOMEM;
/* Enable internal test signal, double amplitude, double frequency */
ret = regmap_write(priv->regmap, ADS1298_REG_CONFIG2,

View File

@ -381,7 +381,7 @@ static irqreturn_t ads8688_trigger_handler(int irq, void *p)
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
/* Ensure naturally aligned timestamp */
u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8);
u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8) = { };
int i, j = 0;
iio_for_each_active_channel(indio_dev, i) {

View File

@ -48,7 +48,7 @@ static irqreturn_t iio_simple_dummy_trigger_h(int irq, void *p)
int i = 0, j;
u16 *data;
data = kmalloc(indio_dev->scan_bytes, GFP_KERNEL);
data = kzalloc(indio_dev->scan_bytes, GFP_KERNEL);
if (!data)
goto done;

View File

@ -730,14 +730,21 @@ static irqreturn_t fxas21002c_trigger_handler(int irq, void *p)
int ret;
mutex_lock(&data->lock);
ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB,
data->buffer, CHANNEL_SCAN_MAX * sizeof(s16));
ret = fxas21002c_pm_get(data);
if (ret < 0)
goto out_unlock;
ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB,
data->buffer, CHANNEL_SCAN_MAX * sizeof(s16));
if (ret < 0)
goto out_pm_put;
iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
data->timestamp);
out_pm_put:
fxas21002c_pm_put(data);
out_unlock:
mutex_unlock(&data->lock);
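The handler now takes a runtime-PM reference before the bulk register read so the device cannot autosuspend mid-transaction, and the goto labels unwind in reverse order of acquisition. A standalone sketch of the get/do/put shape:

#include <stdio.h>

static int pm_get(void)  { puts("resume");  return 0; }
static void pm_put(void) { puts("suspend allowed"); }
static int bulk_read(void) { return 0; }

static int handler(void)
{
	int ret;

	ret = pm_get();		/* keep the device awake for the I/O */
	if (ret < 0)
		goto out;

	ret = bulk_read();
	if (ret < 0)
		goto out_pm_put;

	puts("pushed sample");
out_pm_put:
	pm_put();		/* drop the reference in reverse order */
out:
	return ret;
}

int main(void) { return handler(); }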

View File

@ -403,6 +403,7 @@ struct inv_icm42600_sensor_state {
typedef int (*inv_icm42600_bus_setup)(struct inv_icm42600_state *);
extern const struct regmap_config inv_icm42600_regmap_config;
extern const struct regmap_config inv_icm42600_spi_regmap_config;
extern const struct dev_pm_ops inv_icm42600_pm_ops;
const struct iio_mount_matrix *

View File

@ -87,6 +87,21 @@ const struct regmap_config inv_icm42600_regmap_config = {
};
EXPORT_SYMBOL_NS_GPL(inv_icm42600_regmap_config, "IIO_ICM42600");
/* define specific regmap for SPI not supporting burst write */
const struct regmap_config inv_icm42600_spi_regmap_config = {
.name = "inv_icm42600",
.reg_bits = 8,
.val_bits = 8,
.max_register = 0x4FFF,
.ranges = inv_icm42600_regmap_ranges,
.num_ranges = ARRAY_SIZE(inv_icm42600_regmap_ranges),
.volatile_table = inv_icm42600_regmap_volatile_accesses,
.rd_noinc_table = inv_icm42600_regmap_rd_noinc_accesses,
.cache_type = REGCACHE_RBTREE,
.use_single_write = true,
};
EXPORT_SYMBOL_NS_GPL(inv_icm42600_spi_regmap_config, "IIO_ICM42600");
struct inv_icm42600_hw {
uint8_t whoami;
const char *name;
@ -814,6 +829,8 @@ out_unlock:
static int inv_icm42600_resume(struct device *dev)
{
struct inv_icm42600_state *st = dev_get_drvdata(dev);
struct inv_icm42600_sensor_state *gyro_st = iio_priv(st->indio_gyro);
struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel);
int ret;
mutex_lock(&st->lock);
@ -834,9 +851,12 @@ static int inv_icm42600_resume(struct device *dev)
goto out_unlock;
/* restore FIFO data streaming */
if (st->fifo.on)
if (st->fifo.on) {
inv_sensors_timestamp_reset(&gyro_st->ts);
inv_sensors_timestamp_reset(&accel_st->ts);
ret = regmap_write(st->map, INV_ICM42600_REG_FIFO_CONFIG,
INV_ICM42600_FIFO_CONFIG_STREAM);
}
out_unlock:
mutex_unlock(&st->lock);
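Resetting the timestamp state on resume matters because FIFO timing kept across suspend would otherwise be extrapolated over the sleep gap. A toy model of the idea behind inv_sensors_timestamp_reset(); the real helper's internals are not shown here:

#include <stdio.h>

struct ts_state {
	long long last_ns;
	int valid;
};

/* After resume, old FIFO timing is meaningless: start fresh instead of
 * extrapolating across the suspended gap. */
static void ts_reset(struct ts_state *ts) { ts->valid = 0; }

static long long ts_update(struct ts_state *ts, long long now_ns)
{
	long long delta = ts->valid ? now_ns - ts->last_ns : 0;

	ts->last_ns = now_ns;
	ts->valid = 1;
	return delta;
}

int main(void)
{
	struct ts_state ts = { 0 };

	ts_update(&ts, 1000);
	ts_update(&ts, 2000);
	ts_reset(&ts);			/* resume path */
	printf("delta after reset: %lld\n", ts_update(&ts, 900000));
	return 0;
}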

Some files were not shown because too many files have changed in this diff