Merge tag 'drm-misc-next-2025-02-12' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for v6.15:

UAPI Changes:

fourcc:
- Add modifiers for MediaTek tiled formats

Cross-subsystem Changes:

bus:
- mhi: Enable image transfer via BHIe in PBL

dma-buf:
- Add fast-path for single-fence merging

Core Changes:

atomic helper:
- Allow full modeset on connector changes
- Clarify semantics of allow_modeset
- Clarify semantics of drm_atomic_helper_check()

buddy allocator:
- Fix multi-root cleanup

ci:
- Update IGT

display:
- dp: Support Extended Wake Timeout
- dp_mst: Fix RAD-to-string conversion

panic:
- Encode QR code according to FIDO 2.2

probe helper:
- Cleanups

scheduler:
- Cleanups

ttm:
- Refactor pool-allocation code
- Cleanups

Driver Changes:

amdxdma:
- Fix error handling
- Cleanups

ast:
- Refactor detection of transmitter chips
- Refactor support of VBIOS display-mode handling
- astdp: Fix connection status; Filter unsupported display modes

bridge:
- adv7511: Report correct capabilities
- it6505: Fix HDCP V compare
- sn65dsi86: Fix device IDs
- Cleanups

i915:
- Enable Extended Wake Timeout

imagination:
- Check job dependencies with DRM-sched helper

ivpu:
- Improve command-queue handling
- Use workqueue for IRQ handling
- Add support for HW fault injection
- Locking fixes
- Cleanups

mgag200:
- Add support for G200eH5 chips

msm:
- dpu: Add concurrent writeback support for DPU 10.x+

nouveau:
- Move drm_encoder_slave interface into driver
- nvkm: Refactor GSP RPC

omapdrm:
- Cleanups

panel:
- Convert several panels to multi-style functions to improve error
  handling
- edp: Add support for B140UAN04.4, BOE NV140FHM-NZ, CSW MNB601LS1-3,
  LG LP079QX1-SP0V, MNE007QS3-7, STA 116QHD024002, Starry 116KHD024006,
  Lenovo T14s Gen6 Snapdragon
- himax-hx83102: Add support for CSOT PNA957QT1-1, Kingdisplay
  kd110n11-51ie, Starry 2082109qfh040022-50e

panthor:
- Expose sizes of internal BOs via fdinfo
- Fix race between reset and suspend
- Cleanups

qaic:
- Add support for AIC200
- Cleanups

renesas:
- Fix limits in DT bindings

rockchip:
- rk3576: Add HDMI support
- vop2: Add new display modes on RK3588 HDMI0 up to 4K
- Don't change HDMI reference clock rate
- Fix DT bindings

solomon:
- Set SPI device table to silence warnings
- Fix pixel and scanline encoding

v3d:
- Cleanups

vc4:
- Use drm_exec
- Use dma-resv for wait-BO ioctl
- Remove seqno infrastructure

virtgpu:
- Support partial mappings of GEM objects
- Reserve VGA resources during initialization
- Fix UAF in virtgpu_dma_buf_free_obj()
- Add panic support

vkms:
- Switch to a managed modesetting pipeline
- Add support for ARGB8888

xlnx:
- Set correct DMA segment size
- Fix error handling
- Fix docs

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20250212090625.GA24865@linux.fritz.box
commit 0ed1356af8
Author: Dave Airlie
Date:   2025-02-14 10:17:26 +10:00

309 changed files with 7951 additions and 3189 deletions

View File

@ -3946,6 +3946,10 @@ S: 1 Amherst Street
S: Cambridge, Massachusetts 02139
S: USA
N: Luben Tuikov
E: Luben Tuikov <ltuikov89@gmail.com>
D: Maintainer of the DRM GPU Scheduler
N: Simmule Turner
E: sturner@tele-tv.com
D: Added swapping to filesystem

View File

@ -18,8 +18,14 @@ properties:
- enum:
# Boe nv110wum-l60 11.0" WUXGA TFT LCD panel
- boe,nv110wum-l60
# CSOT pna957qt1-1 10.95" WUXGA TFT LCD panel
- csot,pna957qt1-1
# IVO t109nw41 11.0" WUXGA TFT LCD panel
- ivo,t109nw41
# KINGDISPLAY KD110N11-51IE 10.95" WUXGA TFT LCD panel
- kingdisplay,kd110n11-51ie
# STARRY 2082109QFH040022-50E 10.95" WUXGA TFT LCD panel
- starry,2082109qfh040022-50e
# STARRY himax83102-j02 10.51" WUXGA TFT LCD panel
- starry,himax83102-j02
- const: himax,hx83102

View File

@ -47,12 +47,26 @@ properties:
maxItems: 1
# See compatible-specific constraints below.
clocks: true
clock-names: true
clocks:
minItems: 1
maxItems: 8
clock-names:
minItems: 1
maxItems: 8
interrupts:
minItems: 1
maxItems: 4
description: Interrupt specifiers, one per DU channel
resets: true
reset-names: true
resets:
minItems: 1
maxItems: 2
reset-names:
minItems: 1
maxItems: 2
power-domains:
maxItems: 1
@ -74,7 +88,7 @@ properties:
renesas,cmms:
$ref: /schemas/types.yaml#/definitions/phandle-array
minItems: 1
minItems: 2
maxItems: 4
items:
maxItems: 1
@ -174,6 +188,7 @@ allOf:
- pattern: '^dclkin\.[01]$'
interrupts:
minItems: 2
maxItems: 2
resets:
@ -229,6 +244,7 @@ allOf:
- pattern: '^dclkin\.[01]$'
interrupts:
minItems: 2
maxItems: 2
resets:
@ -282,6 +298,7 @@ allOf:
- pattern: '^dclkin\.[01]$'
interrupts:
minItems: 2
maxItems: 2
resets:
@ -336,6 +353,7 @@ allOf:
- pattern: '^dclkin\.[01]$'
interrupts:
minItems: 2
maxItems: 2
resets:
@ -397,6 +415,7 @@ allOf:
- pattern: '^dclkin\.[012]$'
interrupts:
minItems: 3
maxItems: 3
resets:
@ -461,9 +480,11 @@ allOf:
- pattern: '^dclkin\.[0123]$'
interrupts:
minItems: 4
maxItems: 4
resets:
minItems: 2
maxItems: 2
reset-names:
@ -534,9 +555,11 @@ allOf:
- pattern: '^dclkin\.[012]$'
interrupts:
minItems: 3
maxItems: 3
resets:
minItems: 2
maxItems: 2
reset-names:
@ -605,9 +628,11 @@ allOf:
- pattern: '^dclkin\.[013]$'
interrupts:
minItems: 3
maxItems: 3
resets:
minItems: 2
maxItems: 2
reset-names:
@ -726,6 +751,7 @@ allOf:
- pattern: '^dclkin\.[01]$'
interrupts:
minItems: 2
maxItems: 2
resets:

View File

@ -29,6 +29,7 @@ allOf:
properties:
compatible:
enum:
- rockchip,rk3576-dw-hdmi-qp
- rockchip,rk3588-dw-hdmi-qp
reg:
@ -156,7 +157,7 @@ examples:
<GIC_SPI 172 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH 0>;
interrupt-names = "avp", "cec", "earc", "main", "hpd";
phys = <&hdptxphy_hdmi0>;
phys = <&hdptxphy0>;
power-domains = <&power RK3588_PD_VO1>;
resets = <&cru SRST_HDMITX0_REF>, <&cru SRST_HDMIHDP0>;
reset-names = "ref", "hdp";

View File

@ -53,6 +53,8 @@ properties:
- description: Pixel clock for video port 2.
- description: Pixel clock for video port 3.
- description: Peripheral(vop grf/dsi) clock.
- description: Alternative pixel clock provided by HDMI0 PHY PLL.
- description: Alternative pixel clock provided by HDMI1 PHY PLL.
clock-names:
minItems: 5
@ -64,6 +66,8 @@ properties:
- const: dclk_vp2
- const: dclk_vp3
- const: pclk_vop
- const: pll_hdmiphy0
- const: pll_hdmiphy1
rockchip,grf:
$ref: /schemas/types.yaml#/definitions/phandle

View File

@ -338,6 +338,8 @@ patternProperties:
description: Crystalfontz America, Inc.
"^csky,.*":
description: Hangzhou C-SKY Microsystems Co., Ltd
"^csot,.*":
description: Guangzhou China Star Optoelectronics Technology Co., Ltd
"^csq,.*":
description: Shenzen Chuangsiqi Technology Co.,Ltd.
"^ctera,.*":

View File

@ -10,6 +10,7 @@ GPU Driver Documentation
imagination/index
mcde
meson
nouveau
pl111
tegra
tve200

View File

@ -21,7 +21,10 @@ File format specification
- File shall contain one key value pair per one line of text.
- Colon character (`:`) must be used to delimit keys and values.
- All keys shall be prefixed with `drm-`.
- All standardised keys shall be prefixed with `drm-`.
- Driver-specific keys shall be prefixed with `driver_name-`, where
driver_name should ideally be the same as the `name` field in
`struct drm_driver`, although this is not mandatory.
- Whitespace between the delimiter and first non-whitespace character shall be
ignored when parsing.
- Keys are not allowed to contain whitespace characters.
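As an illustrative sketch (not part of the patch), a conforming fdinfo section
for a hypothetical driver named `foo` could look like::

	drm-driver:	foo
	drm-client-id:	42
	drm-engine-render:	123456789 ns
	foo-ring-memory:	1024 KiB

Here `drm-driver`, `drm-client-id`, and `drm-engine-<keystr>` are standardised
keys, while the `foo-` prefix marks a driver-specific key following the rule
added above; the driver name and values are made up for illustration.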

View File

@ -0,0 +1,29 @@
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
===============================
drm/nouveau NVIDIA GPU Driver
===============================
The drm/nouveau driver provides support for a wide range of NVIDIA GPUs,
covering GeForce, Quadro, and Tesla series, from the NV04 architecture up
to the latest Turing, Ampere, Ada families.
NVKM: NVIDIA Kernel Manager
===========================
The NVKM component serves as the core abstraction layer within the nouveau
driver, responsible for managing NVIDIA GPU hardware at the kernel level.
NVKM provides a unified interface for handling various GPU architectures.
It enables resource management, power control, memory handling, and command
submission required for the proper functioning of NVIDIA GPUs under the
nouveau driver.
NVKM plays a critical role in abstracting hardware complexities and
providing a consistent API to upper layers of the driver stack.
GSP Support
------------------------
.. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
:doc: GSP message queue element

View File

@ -26,6 +26,8 @@ the currently possible format options:
drm-cycles-panthor: 94439687187
drm-maxfreq-panthor: 1000000000 Hz
drm-curfreq-panthor: 1000000000 Hz
panthor-resident-memory: 10396 KiB
panthor-active-memory: 10396 KiB
drm-total-memory: 16480 KiB
drm-shared-memory: 0
drm-active-memory: 16200 KiB
@ -44,3 +46,11 @@ driver by writing into the appropriate sysfs node::
Where `N` is a bit mask where cycle and timestamp sampling are respectively
enabled by the first and second bits.
Possible `panthor-*-memory` keys are: `active` and `resident`.
These values convey the sizes of the internal driver-owned shmem BO's that
aren't exposed to user-space through a DRM handle, like queue ring buffers,
sync object arrays and heap chunks. Because they are all allocated and pinned
at creation time, only `panthor-resident-memory` is necessary to tell us their
size. `panthor-active-memory` shows the size of kernel BO's associated with
VM's and groups currently being scheduled for execution by the GPU.

View File

@ -7128,7 +7128,7 @@ F: include/linux/power/smartreflex.h
DRM ACCEL DRIVERS FOR INTEL VPU
M: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
M: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
M: Maciej Falkowski <maciej.falkowski@linux.intel.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
@ -7247,8 +7247,7 @@ F: Documentation/devicetree/bindings/display/panel/panel-edp.yaml
F: drivers/gpu/drm/panel/panel-edp.c
DRM DRIVER FOR GENERIC USB DISPLAY
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained
S: Orphan
W: https://github.com/notro/gud/wiki
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/gud/
@ -7353,15 +7352,13 @@ T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/mgag200/
DRM DRIVER FOR MI0283QT
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained
S: Orphan
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: Documentation/devicetree/bindings/display/multi-inno,mi0283qt.txt
F: drivers/gpu/drm/tiny/mi0283qt.c
DRM DRIVER FOR MIPI DBI compatible panels
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained
S: Orphan
W: https://github.com/notro/panel-mipi-dbi/wiki
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: Documentation/devicetree/bindings/display/panel/panel-mipi-dbi-spi.yaml
@ -7458,8 +7455,7 @@ F: Documentation/devicetree/bindings/display/bridge/ps8640.yaml
F: drivers/gpu/drm/bridge/parade-ps8640.c
DRM DRIVER FOR PERVASIVE DISPLAYS REPAPER PANELS
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained
S: Orphan
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: Documentation/devicetree/bindings/display/repaper.txt
F: drivers/gpu/drm/tiny/repaper.c
@ -7682,7 +7678,6 @@ X: drivers/gpu/drm/mediatek/
X: drivers/gpu/drm/msm/
X: drivers/gpu/drm/nouveau/
X: drivers/gpu/drm/radeon/
X: drivers/gpu/drm/renesas/rcar-du/
X: drivers/gpu/drm/tegra/
DRM DRIVERS FOR ALLWINNER A10
@ -7837,12 +7832,13 @@ F: include/linux/host1x.h
F: include/uapi/drm/tegra_drm.h
DRM DRIVERS FOR RENESAS R-CAR
M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
M: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
M: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
M: Tomi Valkeinen <tomi.valkeinen+renesas@ideasonboard.com>
R: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
L: dri-devel@lists.freedesktop.org
L: linux-renesas-soc@vger.kernel.org
S: Supported
T: git git://linuxtv.org/pinchartl/media drm/du/next
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: Documentation/devicetree/bindings/display/bridge/renesas,dsi-csi2-tx.yaml
F: Documentation/devicetree/bindings/display/bridge/renesas,dw-hdmi.yaml
F: Documentation/devicetree/bindings/display/bridge/renesas,lvds.yaml
@ -7979,12 +7975,12 @@ F: Documentation/gpu/zynqmp.rst
F: drivers/gpu/drm/xlnx/
DRM GPU SCHEDULER
M: Luben Tuikov <ltuikov89@gmail.com>
M: Matthew Brost <matthew.brost@intel.com>
M: Danilo Krummrich <dakr@kernel.org>
M: Philipp Stanner <pstanner@redhat.com>
M: Philipp Stanner <phasta@kernel.org>
R: Christian König <ckoenig.leichtzumerken@gmail.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
S: Supported
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/scheduler/
F: include/drm/gpu_scheduler.h

View File

@ -714,10 +714,10 @@ CONFIG_VIDEO_ADV7604_CEC=y
CONFIG_VIDEO_ML86V7667=m
CONFIG_IMX_IPUV3_CORE=m
CONFIG_DRM=y
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
CONFIG_DRM_I2C_NXP_TDA998X=m
CONFIG_DRM_NOUVEAU=m
# CONFIG_DRM_NOUVEAU_CH7006 is not set
# CONFIG_DRM_NOUVEAU_SIL164 is not set
CONFIG_DRM_EXYNOS=m
CONFIG_DRM_EXYNOS_FIMD=y
CONFIG_DRM_EXYNOS_MIXER=y

View File

@ -132,11 +132,11 @@ CONFIG_I2C=y
CONFIG_HWMON=m
CONFIG_DRM=m
CONFIG_DRM_DISPLAY_DP_AUX_CEC=y
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
CONFIG_DRM_RADEON=m
CONFIG_DRM_NOUVEAU=m
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set
# CONFIG_DRM_NOUVEAU_CH7006 is not set
# CONFIG_DRM_NOUVEAU_SIL164 is not set
CONFIG_DRM_VGEM=m
CONFIG_DRM_UDL=m
CONFIG_DRM_MGAG200=m

View File

@ -193,11 +193,11 @@ CONFIG_MEDIA_SUPPORT=m
CONFIG_AGP=y
CONFIG_AGP_PARISC=y
CONFIG_DRM=y
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
CONFIG_DRM_RADEON=y
CONFIG_DRM_NOUVEAU=m
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set
# CONFIG_DRM_NOUVEAU_CH7006 is not set
# CONFIG_DRM_NOUVEAU_SIL164 is not set
CONFIG_DRM_MGAG200=m
CONFIG_FB=y
CONFIG_FB_PM2=m

View File

@ -185,7 +185,7 @@ aie2_sched_notify(struct amdxdna_sched_job *job)
}
static int
aie2_sched_resp_handler(void *handle, const u32 *data, size_t size)
aie2_sched_resp_handler(void *handle, void __iomem *data, size_t size)
{
struct amdxdna_sched_job *job = handle;
struct amdxdna_gem_obj *cmd_abo;
@ -203,7 +203,7 @@ aie2_sched_resp_handler(void *handle, const u32 *data, size_t size)
goto out;
}
status = *data;
status = readl(data);
XDNA_DBG(job->hwctx->client->xdna, "Resp status 0x%x", status);
if (status == AIE2_STATUS_SUCCESS)
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_COMPLETED);
@ -216,7 +216,7 @@ out:
}
static int
aie2_sched_nocmd_resp_handler(void *handle, const u32 *data, size_t size)
aie2_sched_nocmd_resp_handler(void *handle, void __iomem *data, size_t size)
{
struct amdxdna_sched_job *job = handle;
u32 ret = 0;
@ -230,7 +230,7 @@ aie2_sched_nocmd_resp_handler(void *handle, const u32 *data, size_t size)
goto out;
}
status = *data;
status = readl(data);
XDNA_DBG(job->hwctx->client->xdna, "Resp status 0x%x", status);
out:
@ -239,14 +239,14 @@ out:
}
static int
aie2_sched_cmdlist_resp_handler(void *handle, const u32 *data, size_t size)
aie2_sched_cmdlist_resp_handler(void *handle, void __iomem *data, size_t size)
{
struct amdxdna_sched_job *job = handle;
struct amdxdna_gem_obj *cmd_abo;
struct cmd_chain_resp *resp;
struct amdxdna_dev *xdna;
u32 fail_cmd_status;
u32 fail_cmd_idx;
u32 cmd_status;
u32 ret = 0;
cmd_abo = job->cmd_bo;
@ -256,17 +256,17 @@ aie2_sched_cmdlist_resp_handler(void *handle, const u32 *data, size_t size)
goto out;
}
resp = (struct cmd_chain_resp *)data;
cmd_status = readl(data + offsetof(struct cmd_chain_resp, status));
xdna = job->hwctx->client->xdna;
XDNA_DBG(xdna, "Status 0x%x", resp->status);
if (resp->status == AIE2_STATUS_SUCCESS) {
XDNA_DBG(xdna, "Status 0x%x", cmd_status);
if (cmd_status == AIE2_STATUS_SUCCESS) {
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_COMPLETED);
goto out;
}
/* Slow path to handle error, read from ringbuf on BAR */
fail_cmd_idx = resp->fail_cmd_idx;
fail_cmd_status = resp->fail_cmd_status;
fail_cmd_idx = readl(data + offsetof(struct cmd_chain_resp, fail_cmd_idx));
fail_cmd_status = readl(data + offsetof(struct cmd_chain_resp, fail_cmd_status));
XDNA_DBG(xdna, "Failed cmd idx %d, status 0x%x",
fail_cmd_idx, fail_cmd_status);
@ -361,7 +361,7 @@ aie2_sched_job_timedout(struct drm_sched_job *sched_job)
return DRM_GPU_SCHED_STAT_NOMINAL;
}
const struct drm_sched_backend_ops sched_ops = {
static const struct drm_sched_backend_ops sched_ops = {
.run_job = aie2_sched_job_run,
.free_job = aie2_sched_job_free,
.timedout_job = aie2_sched_job_timedout,

View File

@ -209,16 +209,14 @@ static u32 aie2_error_backtrack(struct amdxdna_dev_hdl *ndev, void *err_info, u3
return err_col;
}
static int aie2_error_async_cb(void *handle, const u32 *data, size_t size)
static int aie2_error_async_cb(void *handle, void __iomem *data, size_t size)
{
struct async_event_msg_resp *resp;
struct async_event *e = handle;
if (data) {
resp = (struct async_event_msg_resp *)data;
e->resp.type = resp->type;
e->resp.type = readl(data + offsetof(struct async_event_msg_resp, type));
wmb(); /* Update status in the end, so that no lock for here */
e->resp.status = resp->status;
e->resp.status = readl(data + offsetof(struct async_event_msg_resp, status));
}
queue_work(e->wq, &e->work);
return 0;

View File

@ -356,7 +356,7 @@ fail:
}
int aie2_register_asyn_event_msg(struct amdxdna_dev_hdl *ndev, dma_addr_t addr, u32 size,
void *handle, int (*cb)(void*, const u32 *, size_t))
void *handle, int (*cb)(void*, void __iomem *, size_t))
{
struct async_event_msg_req req = { 0 };
struct xdna_mailbox_msg msg = {
@ -435,7 +435,7 @@ int aie2_config_cu(struct amdxdna_hwctx *hwctx)
}
int aie2_execbuf(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t))
int (*notify_cb)(void *, void __iomem *, size_t))
{
struct mailbox_channel *chann = hwctx->priv->mbox_chann;
struct amdxdna_dev *xdna = hwctx->client->xdna;
@ -640,7 +640,7 @@ aie2_cmd_op_to_msg_op(u32 op)
int aie2_cmdlist_multi_execbuf(struct amdxdna_hwctx *hwctx,
struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t))
int (*notify_cb)(void *, void __iomem *, size_t))
{
struct amdxdna_gem_obj *cmdbuf_abo = aie2_cmdlist_get_cmd_buf(job);
struct mailbox_channel *chann = hwctx->priv->mbox_chann;
@ -705,7 +705,7 @@ int aie2_cmdlist_multi_execbuf(struct amdxdna_hwctx *hwctx,
int aie2_cmdlist_single_execbuf(struct amdxdna_hwctx *hwctx,
struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t))
int (*notify_cb)(void *, void __iomem *, size_t))
{
struct amdxdna_gem_obj *cmdbuf_abo = aie2_cmdlist_get_cmd_buf(job);
struct mailbox_channel *chann = hwctx->priv->mbox_chann;
@ -740,7 +740,7 @@ int aie2_cmdlist_single_execbuf(struct amdxdna_hwctx *hwctx,
}
int aie2_sync_bo(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t))
int (*notify_cb)(void *, void __iomem *, size_t))
{
struct mailbox_channel *chann = hwctx->priv->mbox_chann;
struct amdxdna_gem_obj *abo = to_xdna_obj(job->bos[0]);

View File

@ -271,18 +271,18 @@ int aie2_destroy_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwctx *hwc
int aie2_map_host_buf(struct amdxdna_dev_hdl *ndev, u32 context_id, u64 addr, u64 size);
int aie2_query_status(struct amdxdna_dev_hdl *ndev, char __user *buf, u32 size, u32 *cols_filled);
int aie2_register_asyn_event_msg(struct amdxdna_dev_hdl *ndev, dma_addr_t addr, u32 size,
void *handle, int (*cb)(void*, const u32 *, size_t));
void *handle, int (*cb)(void*, void __iomem *, size_t));
int aie2_config_cu(struct amdxdna_hwctx *hwctx);
int aie2_execbuf(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t));
int (*notify_cb)(void *, void __iomem *, size_t));
int aie2_cmdlist_single_execbuf(struct amdxdna_hwctx *hwctx,
struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t));
int (*notify_cb)(void *, void __iomem *, size_t));
int aie2_cmdlist_multi_execbuf(struct amdxdna_hwctx *hwctx,
struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t));
int (*notify_cb)(void *, void __iomem *, size_t));
int aie2_sync_bo(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
int (*notify_cb)(void *, const u32 *, size_t));
int (*notify_cb)(void *, void __iomem *, size_t));
/* aie2_hwctx.c */
int aie2_hwctx_init(struct amdxdna_hwctx *hwctx);

View File

@ -64,6 +64,7 @@ int npu1_set_dpm(struct amdxdna_dev_hdl *ndev, u32 dpm_level)
if (ret) {
XDNA_ERR(ndev->xdna, "Set npu clock to %d failed, ret %d\n",
ndev->priv->dpm_clk_tbl[dpm_level].npuclk, ret);
return ret;
}
ndev->npuclk_freq = freq;
@ -72,6 +73,7 @@ int npu1_set_dpm(struct amdxdna_dev_hdl *ndev, u32 dpm_level)
if (ret) {
XDNA_ERR(ndev->xdna, "Set h clock to %d failed, ret %d\n",
ndev->priv->dpm_clk_tbl[dpm_level].hclk, ret);
return ret;
}
ndev->hclk_freq = freq;
ndev->dpm_level = dpm_level;

View File

@ -90,7 +90,7 @@ struct mailbox_pkg {
struct mailbox_msg {
void *handle;
int (*notify_cb)(void *handle, const u32 *data, size_t size);
int (*notify_cb)(void *handle, void __iomem *data, size_t size);
size_t pkg_size; /* package size in bytes */
struct mailbox_pkg pkg;
};
@ -243,7 +243,7 @@ no_space:
static int
mailbox_get_resp(struct mailbox_channel *mb_chann, struct xdna_msg_header *header,
void *data)
void __iomem *data)
{
struct mailbox_msg *mb_msg;
int msg_id;
@ -331,7 +331,7 @@ static int mailbox_get_msg(struct mailbox_channel *mb_chann)
memcpy_fromio((u32 *)&header + 1, read_addr, rest);
read_addr += rest;
ret = mailbox_get_resp(mb_chann, &header, (u32 *)read_addr);
ret = mailbox_get_resp(mb_chann, &header, read_addr);
mailbox_set_headptr(mb_chann, head + msg_size);
/* After update head, it can equal to ringbuf_size. This is expected. */

View File

@ -25,7 +25,7 @@ struct mailbox_channel;
struct xdna_mailbox_msg {
u32 opcode;
void *handle;
int (*notify_cb)(void *handle, const u32 *data, size_t size);
int (*notify_cb)(void *handle, void __iomem *data, size_t size);
u8 *send_data;
size_t send_size;
};

View File

@ -16,7 +16,7 @@
#include "amdxdna_mailbox_helper.h"
#include "amdxdna_pci_drv.h"
int xdna_msg_cb(void *handle, const u32 *data, size_t size)
int xdna_msg_cb(void *handle, void __iomem *data, size_t size)
{
struct xdna_notify *cb_arg = handle;
int ret;
@ -29,9 +29,9 @@ int xdna_msg_cb(void *handle, const u32 *data, size_t size)
goto out;
}
memcpy_fromio(cb_arg->data, data, cb_arg->size);
print_hex_dump_debug("resp data: ", DUMP_PREFIX_OFFSET,
16, 4, data, cb_arg->size, true);
memcpy(cb_arg->data, data, cb_arg->size);
16, 4, cb_arg->data, cb_arg->size, true);
out:
ret = cb_arg->error;
complete(&cb_arg->comp);

View File

@ -35,7 +35,7 @@ struct xdna_notify {
.notify_cb = xdna_msg_cb, \
}
int xdna_msg_cb(void *handle, const u32 *data, size_t size);
int xdna_msg_cb(void *handle, void __iomem *data, size_t size);
int xdna_send_msg_wait(struct amdxdna_dev *xdna, struct mailbox_channel *chann,
struct xdna_mailbox_msg *msg);

View File

@ -4,6 +4,7 @@
*/
#include <linux/debugfs.h>
#include <linux/fault-inject.h>
#include <drm/drm_debugfs.h>
#include <drm/drm_file.h>
@ -397,6 +398,88 @@ static int dct_active_set(void *data, u64 active_percent)
DEFINE_DEBUGFS_ATTRIBUTE(ivpu_dct_fops, dct_active_get, dct_active_set, "%llu\n");
static int priority_bands_show(struct seq_file *s, void *v)
{
struct ivpu_device *vdev = s->private;
struct ivpu_hw_info *hw = vdev->hw;
for (int band = VPU_JOB_SCHEDULING_PRIORITY_BAND_IDLE;
band < VPU_JOB_SCHEDULING_PRIORITY_BAND_COUNT; band++) {
switch (band) {
case VPU_JOB_SCHEDULING_PRIORITY_BAND_IDLE:
seq_puts(s, "Idle: ");
break;
case VPU_JOB_SCHEDULING_PRIORITY_BAND_NORMAL:
seq_puts(s, "Normal: ");
break;
case VPU_JOB_SCHEDULING_PRIORITY_BAND_FOCUS:
seq_puts(s, "Focus: ");
break;
case VPU_JOB_SCHEDULING_PRIORITY_BAND_REALTIME:
seq_puts(s, "Realtime: ");
break;
}
seq_printf(s, "grace_period %9u process_grace_period %9u process_quantum %9u\n",
hw->hws.grace_period[band], hw->hws.process_grace_period[band],
hw->hws.process_quantum[band]);
}
return 0;
}
static int priority_bands_fops_open(struct inode *inode, struct file *file)
{
return single_open(file, priority_bands_show, inode->i_private);
}
static ssize_t
priority_bands_fops_write(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
{
struct seq_file *s = file->private_data;
struct ivpu_device *vdev = s->private;
char buf[64];
u32 grace_period;
u32 process_grace_period;
u32 process_quantum;
u32 band;
int ret;
if (size >= sizeof(buf))
return -EINVAL;
ret = simple_write_to_buffer(buf, sizeof(buf) - 1, pos, user_buf, size);
if (ret < 0)
return ret;
buf[size] = '\0';
ret = sscanf(buf, "%u %u %u %u", &band, &grace_period, &process_grace_period,
&process_quantum);
if (ret != 4)
return -EINVAL;
if (band >= VPU_JOB_SCHEDULING_PRIORITY_BAND_COUNT)
return -EINVAL;
vdev->hw->hws.grace_period[band] = grace_period;
vdev->hw->hws.process_grace_period[band] = process_grace_period;
vdev->hw->hws.process_quantum[band] = process_quantum;
return size;
}
static const struct file_operations ivpu_hws_priority_bands_fops = {
.owner = THIS_MODULE,
.open = priority_bands_fops_open,
.write = priority_bands_fops_write,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
void ivpu_debugfs_init(struct ivpu_device *vdev)
{
struct dentry *debugfs_root = vdev->drm.debugfs_root;
@ -419,6 +502,8 @@ void ivpu_debugfs_init(struct ivpu_device *vdev)
&fw_trace_hw_comp_mask_fops);
debugfs_create_file("fw_trace_level", 0200, debugfs_root, vdev,
&fw_trace_level_fops);
debugfs_create_file("hws_priority_bands", 0200, debugfs_root, vdev,
&ivpu_hws_priority_bands_fops);
debugfs_create_file("reset_engine", 0200, debugfs_root, vdev,
&ivpu_reset_engine_fops);
@ -430,4 +515,8 @@ void ivpu_debugfs_init(struct ivpu_device *vdev)
debugfs_root, vdev, &fw_profiling_freq_fops);
debugfs_create_file("dct", 0644, debugfs_root, vdev, &ivpu_dct_fops);
}
#ifdef CONFIG_FAULT_INJECTION
fault_create_debugfs_attr("fail_hw", debugfs_root, &ivpu_hw_failure);
#endif
}
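As a usage sketch for the new debugfs file (not part of the patch): the write
handler above parses four unsigned integers per write,
"<band> <grace_period> <process_grace_period> <process_quantum>". A minimal
userspace example follows; the debugfs mount point and device directory are
assumptions, and band 1 is taken to be the Normal band per the enum ordering
shown in priority_bands_show().

/*
 * Hypothetical sketch: tune the Normal band's scheduling parameters via
 * the hws_priority_bands debugfs file. The path below is an assumption.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/accel/0/hws_priority_bands", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* "<band> <grace_period> <process_grace_period> <process_quantum>" */
	if (fprintf(f, "1 50000 50000 300000\n") < 0)
		perror("fprintf");

	fclose(f);
	return 0;
}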

View File

@ -7,6 +7,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/workqueue.h>
#include <generated/utsrelease.h>
#include <drm/drm_accel.h>
@ -36,8 +37,6 @@
#define DRIVER_VERSION_STR "1.0.0 " UTS_RELEASE
#endif
static struct lock_class_key submitted_jobs_xa_lock_class_key;
int ivpu_dbg_mask;
module_param_named(dbg_mask, ivpu_dbg_mask, int, 0644);
MODULE_PARM_DESC(dbg_mask, "Driver debug mask. See IVPU_DBG_* macros.");
@ -128,20 +127,18 @@ void ivpu_file_priv_put(struct ivpu_file_priv **link)
kref_put(&file_priv->ref, file_priv_release);
}
static int ivpu_get_capabilities(struct ivpu_device *vdev, struct drm_ivpu_param *args)
bool ivpu_is_capable(struct ivpu_device *vdev, u32 capability)
{
switch (args->index) {
switch (capability) {
case DRM_IVPU_CAP_METRIC_STREAMER:
args->value = 1;
break;
return true;
case DRM_IVPU_CAP_DMA_MEMORY_RANGE:
args->value = 1;
break;
return true;
case DRM_IVPU_CAP_MANAGE_CMDQ:
return vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW;
default:
return -EINVAL;
return false;
}
return 0;
}
static int ivpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
@ -201,7 +198,7 @@ static int ivpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_f
args->value = vdev->hw->sku;
break;
case DRM_IVPU_PARAM_CAPABILITIES:
ret = ivpu_get_capabilities(vdev, args);
args->value = ivpu_is_capable(vdev, args->index);
break;
default:
ret = -EINVAL;
@ -310,6 +307,9 @@ static const struct drm_ioctl_desc ivpu_drm_ioctls[] = {
DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_GET_DATA, ivpu_ms_get_data_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_STOP, ivpu_ms_stop_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_GET_INFO, ivpu_ms_get_info_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_CMDQ_CREATE, ivpu_cmdq_create_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_CMDQ_DESTROY, ivpu_cmdq_destroy_ioctl, 0),
DRM_IOCTL_DEF_DRV(IVPU_CMDQ_SUBMIT, ivpu_cmdq_submit_ioctl, 0),
};
static int ivpu_wait_for_ready(struct ivpu_device *vdev)
@ -421,6 +421,9 @@ void ivpu_prepare_for_reset(struct ivpu_device *vdev)
{
ivpu_hw_irq_disable(vdev);
disable_irq(vdev->irq);
cancel_work_sync(&vdev->irq_ipc_work);
cancel_work_sync(&vdev->irq_dct_work);
cancel_work_sync(&vdev->context_abort_work);
ivpu_ipc_disable(vdev);
ivpu_mmu_disable(vdev);
}
@ -453,7 +456,7 @@ static const struct drm_driver driver = {
.postclose = ivpu_postclose,
.gem_create_object = ivpu_gem_create_object,
.gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
.gem_prime_import = ivpu_gem_prime_import,
.ioctls = ivpu_drm_ioctls,
.num_ioctls = ARRAY_SIZE(ivpu_drm_ioctls),
@ -465,54 +468,6 @@ static const struct drm_driver driver = {
.major = 1,
};
static void ivpu_context_abort_invalid(struct ivpu_device *vdev)
{
struct ivpu_file_priv *file_priv;
unsigned long ctx_id;
mutex_lock(&vdev->context_list_lock);
xa_for_each(&vdev->context_xa, ctx_id, file_priv) {
if (!file_priv->has_mmu_faults || file_priv->aborted)
continue;
mutex_lock(&file_priv->lock);
ivpu_context_abort_locked(file_priv);
file_priv->aborted = true;
mutex_unlock(&file_priv->lock);
}
mutex_unlock(&vdev->context_list_lock);
}
static irqreturn_t ivpu_irq_thread_handler(int irq, void *arg)
{
struct ivpu_device *vdev = arg;
u8 irq_src;
if (kfifo_is_empty(&vdev->hw->irq.fifo))
return IRQ_NONE;
while (kfifo_get(&vdev->hw->irq.fifo, &irq_src)) {
switch (irq_src) {
case IVPU_HW_IRQ_SRC_IPC:
ivpu_ipc_irq_thread_handler(vdev);
break;
case IVPU_HW_IRQ_SRC_MMU_EVTQ:
ivpu_context_abort_invalid(vdev);
break;
case IVPU_HW_IRQ_SRC_DCT:
ivpu_pm_dct_irq_thread_handler(vdev);
break;
default:
ivpu_err_ratelimited(vdev, "Unknown IRQ source: %u\n", irq_src);
break;
}
}
return IRQ_HANDLED;
}
static int ivpu_irq_init(struct ivpu_device *vdev)
{
struct pci_dev *pdev = to_pci_dev(vdev->drm.dev);
@ -524,12 +479,16 @@ static int ivpu_irq_init(struct ivpu_device *vdev)
return ret;
}
INIT_WORK(&vdev->irq_ipc_work, ivpu_ipc_irq_work_fn);
INIT_WORK(&vdev->irq_dct_work, ivpu_pm_irq_dct_work_fn);
INIT_WORK(&vdev->context_abort_work, ivpu_context_abort_work_fn);
ivpu_irq_handlers_init(vdev);
vdev->irq = pci_irq_vector(pdev, 0);
ret = devm_request_threaded_irq(vdev->drm.dev, vdev->irq, ivpu_hw_irq_handler,
ivpu_irq_thread_handler, IRQF_NO_AUTOEN, DRIVER_NAME, vdev);
ret = devm_request_irq(vdev->drm.dev, vdev->irq, ivpu_hw_irq_handler,
IRQF_NO_AUTOEN, DRIVER_NAME, vdev);
if (ret)
ivpu_err(vdev, "Failed to request an IRQ %d\n", ret);
@ -617,7 +576,6 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
xa_init_flags(&vdev->context_xa, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
xa_init_flags(&vdev->submitted_jobs_xa, XA_FLAGS_ALLOC1);
xa_init_flags(&vdev->db_xa, XA_FLAGS_ALLOC1);
lockdep_set_class(&vdev->submitted_jobs_xa.xa_lock, &submitted_jobs_xa_lock_class_key);
INIT_LIST_HEAD(&vdev->bo_list);
vdev->db_limit.min = IVPU_MIN_DB;
@ -627,6 +585,10 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
if (ret)
goto err_xa_destroy;
ret = drmm_mutex_init(&vdev->drm, &vdev->submitted_jobs_lock);
if (ret)
goto err_xa_destroy;
ret = drmm_mutex_init(&vdev->drm, &vdev->bo_list_lock);
if (ret)
goto err_xa_destroy;

View File

@ -58,6 +58,7 @@
#define IVPU_PLATFORM_SILICON 0
#define IVPU_PLATFORM_SIMICS 2
#define IVPU_PLATFORM_FPGA 3
#define IVPU_PLATFORM_HSLE 4
#define IVPU_PLATFORM_INVALID 8
#define IVPU_SCHED_MODE_AUTO -1
@ -110,6 +111,7 @@ struct ivpu_wa_table {
bool disable_clock_relinquish;
bool disable_d0i3_msg;
bool wp0_during_power_up;
bool disable_d0i2;
};
struct ivpu_hw_info;
@ -142,9 +144,14 @@ struct ivpu_device {
struct xa_limit db_limit;
u32 db_next;
struct work_struct irq_ipc_work;
struct work_struct irq_dct_work;
struct work_struct context_abort_work;
struct mutex bo_list_lock; /* Protects bo_list */
struct list_head bo_list;
struct mutex submitted_jobs_lock; /* Protects submitted_jobs */
struct xarray submitted_jobs_xa;
struct ivpu_ipc_consumer job_done_consumer;
@ -200,6 +207,9 @@ extern bool ivpu_force_snoop;
#define IVPU_TEST_MODE_MIP_DISABLE BIT(6)
#define IVPU_TEST_MODE_DISABLE_TIMEOUTS BIT(8)
#define IVPU_TEST_MODE_TURBO BIT(9)
#define IVPU_TEST_MODE_CLK_RELINQ_DISABLE BIT(10)
#define IVPU_TEST_MODE_CLK_RELINQ_ENABLE BIT(11)
#define IVPU_TEST_MODE_D0I2_DISABLE BIT(12)
extern int ivpu_test_mode;
struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv);
@ -208,6 +218,7 @@ void ivpu_file_priv_put(struct ivpu_file_priv **link);
int ivpu_boot(struct ivpu_device *vdev);
int ivpu_shutdown(struct ivpu_device *vdev);
void ivpu_prepare_for_reset(struct ivpu_device *vdev);
bool ivpu_is_capable(struct ivpu_device *vdev, u32 capability);
static inline u8 ivpu_revision(struct ivpu_device *vdev)
{
@ -282,7 +293,8 @@ static inline bool ivpu_is_simics(struct ivpu_device *vdev)
static inline bool ivpu_is_fpga(struct ivpu_device *vdev)
{
return ivpu_get_platform(vdev) == IVPU_PLATFORM_FPGA;
return ivpu_get_platform(vdev) == IVPU_PLATFORM_FPGA ||
ivpu_get_platform(vdev) == IVPU_PLATFORM_HSLE;
}
static inline bool ivpu_is_force_snoop_enabled(struct ivpu_device *vdev)

View File

@ -145,7 +145,10 @@ ivpu_fw_sched_mode_select(struct ivpu_device *vdev, const struct vpu_firmware_he
if (ivpu_sched_mode != IVPU_SCHED_MODE_AUTO)
return ivpu_sched_mode;
return VPU_SCHEDULING_MODE_OS;
if (IVPU_FW_CHECK_API_VER_LT(vdev, fw_hdr, JSM, 3, 24))
return VPU_SCHEDULING_MODE_OS;
return VPU_SCHEDULING_MODE_HW;
}
static int ivpu_fw_parse(struct ivpu_device *vdev)
@ -531,6 +534,8 @@ static void ivpu_fw_boot_params_print(struct ivpu_device *vdev, struct vpu_boot_
boot_params->d0i3_entry_vpu_ts);
ivpu_dbg(vdev, FW_BOOT, "boot_params.system_time_us = %llu\n",
boot_params->system_time_us);
ivpu_dbg(vdev, FW_BOOT, "boot_params.power_profile = %u\n",
boot_params->power_profile);
}
void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params *boot_params)
@ -631,6 +636,8 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
boot_params->d0i3_delayed_entry = 1;
boot_params->d0i3_residency_time_us = 0;
boot_params->d0i3_entry_vpu_ts = 0;
if (IVPU_WA(disable_d0i2))
boot_params->power_profile = 1;
boot_params->system_time_us = ktime_to_us(ktime_get_real());
wmb(); /* Flush WC buffers after writing bootparams */

View File

@ -20,6 +20,8 @@
#include "ivpu_mmu.h"
#include "ivpu_mmu_context.h"
MODULE_IMPORT_NS("DMA_BUF");
static const struct drm_gem_object_funcs ivpu_gem_funcs;
static inline void ivpu_dbg_bo(struct ivpu_device *vdev, struct ivpu_bo *bo, const char *action)
@ -172,6 +174,47 @@ struct drm_gem_object *ivpu_gem_create_object(struct drm_device *dev, size_t siz
return &bo->base.base;
}
struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf)
{
struct device *attach_dev = dev->dev;
struct dma_buf_attachment *attach;
struct sg_table *sgt;
struct drm_gem_object *obj;
int ret;
attach = dma_buf_attach(dma_buf, attach_dev);
if (IS_ERR(attach))
return ERR_CAST(attach);
get_dma_buf(dma_buf);
sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
if (IS_ERR(sgt)) {
ret = PTR_ERR(sgt);
goto fail_detach;
}
obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(obj)) {
ret = PTR_ERR(obj);
goto fail_unmap;
}
obj->import_attach = attach;
obj->resv = dma_buf->resv;
return obj;
fail_unmap:
dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
fail_detach:
dma_buf_detach(dma_buf, attach);
dma_buf_put(dma_buf);
return ERR_PTR(ret);
}
static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags)
{
struct drm_gem_shmem_object *shmem;

View File

@ -28,6 +28,7 @@ int ivpu_bo_pin(struct ivpu_bo *bo);
void ivpu_bo_unbind_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx);
struct drm_gem_object *ivpu_gem_create_object(struct drm_device *dev, size_t size);
struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf);
struct ivpu_bo *ivpu_bo_create(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
struct ivpu_addr_range *range, u64 size, u32 flags);
struct ivpu_bo *ivpu_bo_create_global(struct ivpu_device *vdev, u64 size, u32 flags);

View File

@ -9,6 +9,16 @@
#include "ivpu_hw_ip.h"
#include <linux/dmi.h>
#include <linux/fault-inject.h>
#include <linux/pm_runtime.h>
#ifdef CONFIG_FAULT_INJECTION
DECLARE_FAULT_ATTR(ivpu_hw_failure);
static char *ivpu_fail_hw;
module_param_named_unsafe(fail_hw, ivpu_fail_hw, charp, 0444);
MODULE_PARM_DESC(fail_hw, "<interval>,<probability>,<space>,<times>");
#endif
static char *platform_to_str(u32 platform)
{
@ -19,43 +29,36 @@ static char *platform_to_str(u32 platform)
return "SIMICS";
case IVPU_PLATFORM_FPGA:
return "FPGA";
case IVPU_PLATFORM_HSLE:
return "HSLE";
default:
return "Invalid platform";
}
}
static const struct dmi_system_id dmi_platform_simulation[] = {
{
.ident = "Intel Simics",
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "lnlrvp"),
DMI_MATCH(DMI_BOARD_VERSION, "1.0"),
DMI_MATCH(DMI_BOARD_SERIAL, "123456789"),
},
},
{
.ident = "Intel Simics",
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "Simics"),
},
},
{ }
};
static void platform_init(struct ivpu_device *vdev)
{
if (dmi_check_system(dmi_platform_simulation))
vdev->platform = IVPU_PLATFORM_SIMICS;
else
vdev->platform = IVPU_PLATFORM_SILICON;
int platform = ivpu_hw_btrs_platform_read(vdev);
ivpu_dbg(vdev, MISC, "Platform type: %s (%d)\n",
platform_to_str(vdev->platform), vdev->platform);
ivpu_dbg(vdev, MISC, "Platform type: %s (%d)\n", platform_to_str(platform), platform);
switch (platform) {
case IVPU_PLATFORM_SILICON:
case IVPU_PLATFORM_SIMICS:
case IVPU_PLATFORM_FPGA:
case IVPU_PLATFORM_HSLE:
vdev->platform = platform;
break;
default:
ivpu_err(vdev, "Invalid platform type: %d\n", platform);
break;
}
}
static void wa_init(struct ivpu_device *vdev)
{
vdev->wa.punit_disabled = ivpu_is_fpga(vdev);
vdev->wa.punit_disabled = false;
vdev->wa.clear_runtime_mem = false;
if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
@ -65,14 +68,24 @@ static void wa_init(struct ivpu_device *vdev)
ivpu_revision(vdev) < IVPU_HW_IP_REV_LNL_B0)
vdev->wa.disable_clock_relinquish = true;
if (ivpu_test_mode & IVPU_TEST_MODE_CLK_RELINQ_ENABLE)
vdev->wa.disable_clock_relinquish = false;
if (ivpu_test_mode & IVPU_TEST_MODE_CLK_RELINQ_DISABLE)
vdev->wa.disable_clock_relinquish = true;
if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX)
vdev->wa.wp0_during_power_up = true;
if (ivpu_test_mode & IVPU_TEST_MODE_D0I2_DISABLE)
vdev->wa.disable_d0i2 = true;
IVPU_PRINT_WA(punit_disabled);
IVPU_PRINT_WA(clear_runtime_mem);
IVPU_PRINT_WA(interrupt_clear_with_0);
IVPU_PRINT_WA(disable_clock_relinquish);
IVPU_PRINT_WA(wp0_during_power_up);
IVPU_PRINT_WA(disable_d0i2);
}
static void timeouts_init(struct ivpu_device *vdev)
@ -84,12 +97,12 @@ static void timeouts_init(struct ivpu_device *vdev)
vdev->timeout.autosuspend = -1;
vdev->timeout.d0i3_entry_msg = -1;
} else if (ivpu_is_fpga(vdev)) {
vdev->timeout.boot = 100000;
vdev->timeout.jsm = 50000;
vdev->timeout.tdr = 2000000;
vdev->timeout.boot = 50;
vdev->timeout.jsm = 15000;
vdev->timeout.tdr = 30000;
vdev->timeout.autosuspend = -1;
vdev->timeout.d0i3_entry_msg = 500;
vdev->timeout.state_dump_msg = 10;
vdev->timeout.state_dump_msg = 10000;
} else if (ivpu_is_simics(vdev)) {
vdev->timeout.boot = 50;
vdev->timeout.jsm = 500;
@ -110,6 +123,26 @@ static void timeouts_init(struct ivpu_device *vdev)
}
}
static void priority_bands_init(struct ivpu_device *vdev)
{
/* Idle */
vdev->hw->hws.grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_IDLE] = 0;
vdev->hw->hws.process_grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_IDLE] = 50000;
vdev->hw->hws.process_quantum[VPU_JOB_SCHEDULING_PRIORITY_BAND_IDLE] = 160000;
/* Normal */
vdev->hw->hws.grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_NORMAL] = 50000;
vdev->hw->hws.process_grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_NORMAL] = 50000;
vdev->hw->hws.process_quantum[VPU_JOB_SCHEDULING_PRIORITY_BAND_NORMAL] = 300000;
/* Focus */
vdev->hw->hws.grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_FOCUS] = 50000;
vdev->hw->hws.process_grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_FOCUS] = 50000;
vdev->hw->hws.process_quantum[VPU_JOB_SCHEDULING_PRIORITY_BAND_FOCUS] = 200000;
/* Realtime */
vdev->hw->hws.grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_REALTIME] = 0;
vdev->hw->hws.process_grace_period[VPU_JOB_SCHEDULING_PRIORITY_BAND_REALTIME] = 50000;
vdev->hw->hws.process_quantum[VPU_JOB_SCHEDULING_PRIORITY_BAND_REALTIME] = 200000;
}
static void memory_ranges_init(struct ivpu_device *vdev)
{
if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) {
@ -248,12 +281,18 @@ int ivpu_hw_init(struct ivpu_device *vdev)
{
ivpu_hw_btrs_info_init(vdev);
ivpu_hw_btrs_freq_ratios_init(vdev);
priority_bands_init(vdev);
memory_ranges_init(vdev);
platform_init(vdev);
wa_init(vdev);
timeouts_init(vdev);
atomic_set(&vdev->hw->firewall_irq_counter, 0);
#ifdef CONFIG_FAULT_INJECTION
if (ivpu_fail_hw)
setup_fault_attr(&ivpu_hw_failure, ivpu_fail_hw);
#endif
return 0;
}
@ -285,8 +324,6 @@ void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable)
void ivpu_irq_handlers_init(struct ivpu_device *vdev)
{
INIT_KFIFO(vdev->hw->irq.fifo);
if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX)
vdev->hw->irq.ip_irq_handler = ivpu_hw_ip_irq_handler_37xx;
else
@ -300,7 +337,6 @@ void ivpu_irq_handlers_init(struct ivpu_device *vdev)
void ivpu_hw_irq_enable(struct ivpu_device *vdev)
{
kfifo_reset(&vdev->hw->irq.fifo);
ivpu_hw_ip_irq_enable(vdev);
ivpu_hw_btrs_irq_enable(vdev);
}
@ -327,9 +363,9 @@ irqreturn_t ivpu_hw_irq_handler(int irq, void *ptr)
/* Re-enable global interrupts to re-trigger MSI for pending interrupts */
ivpu_hw_btrs_global_int_enable(vdev);
if (!kfifo_is_empty(&vdev->hw->irq.fifo))
return IRQ_WAKE_THREAD;
if (ip_handled || btrs_handled)
return IRQ_HANDLED;
return IRQ_NONE;
if (!ip_handled && !btrs_handled)
return IRQ_NONE;
pm_runtime_mark_last_busy(vdev->drm.dev);
return IRQ_HANDLED;
}

View File

@ -6,18 +6,10 @@
#ifndef __IVPU_HW_H__
#define __IVPU_HW_H__
#include <linux/kfifo.h>
#include "ivpu_drv.h"
#include "ivpu_hw_btrs.h"
#include "ivpu_hw_ip.h"
#define IVPU_HW_IRQ_FIFO_LENGTH 1024
#define IVPU_HW_IRQ_SRC_IPC 1
#define IVPU_HW_IRQ_SRC_MMU_EVTQ 2
#define IVPU_HW_IRQ_SRC_DCT 3
struct ivpu_addr_range {
resource_size_t start;
resource_size_t end;
@ -27,7 +19,6 @@ struct ivpu_hw_info {
struct {
bool (*btrs_irq_handler)(struct ivpu_device *vdev, int irq);
bool (*ip_irq_handler)(struct ivpu_device *vdev, int irq);
DECLARE_KFIFO(fifo, u8, IVPU_HW_IRQ_FIFO_LENGTH);
} irq;
struct {
struct ivpu_addr_range global;
@ -45,6 +36,11 @@ struct ivpu_hw_info {
u8 pn_ratio;
u32 profiling_freq;
} pll;
struct {
u32 grace_period[VPU_HWS_NUM_PRIORITY_BANDS];
u32 process_quantum[VPU_HWS_NUM_PRIORITY_BANDS];
u32 process_grace_period[VPU_HWS_NUM_PRIORITY_BANDS];
} hws;
u32 tile_fuse;
u32 sku;
u16 config;

View File

@ -630,8 +630,7 @@ bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq)
if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) {
ivpu_dbg(vdev, IRQ, "Survivability IRQ\n");
if (!kfifo_put(&vdev->hw->irq.fifo, IVPU_HW_IRQ_SRC_DCT))
ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
queue_work(system_wq, &vdev->irq_dct_work);
}
if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status))
@ -888,3 +887,10 @@ void ivpu_hw_btrs_diagnose_failure(struct ivpu_device *vdev)
else
return diagnose_failure_lnl(vdev);
}
int ivpu_hw_btrs_platform_read(struct ivpu_device *vdev)
{
u32 reg = REGB_RD32(VPU_HW_BTRS_LNL_VPU_STATUS);
return REG_GET_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, PLATFORM, reg);
}

View File

@ -46,5 +46,6 @@ void ivpu_hw_btrs_global_int_disable(struct ivpu_device *vdev);
void ivpu_hw_btrs_irq_enable(struct ivpu_device *vdev);
void ivpu_hw_btrs_irq_disable(struct ivpu_device *vdev);
void ivpu_hw_btrs_diagnose_failure(struct ivpu_device *vdev);
int ivpu_hw_btrs_platform_read(struct ivpu_device *vdev);
#endif /* __IVPU_HW_BTRS_H__ */

View File

@ -86,6 +86,7 @@
#define VPU_HW_BTRS_LNL_VPU_STATUS_POWER_RESOURCE_OWN_ACK_MASK BIT_MASK(7)
#define VPU_HW_BTRS_LNL_VPU_STATUS_PERF_CLK_MASK BIT_MASK(11)
#define VPU_HW_BTRS_LNL_VPU_STATUS_DISABLE_CLK_RELINQUISH_MASK BIT_MASK(12)
#define VPU_HW_BTRS_LNL_VPU_STATUS_PLATFORM_MASK GENMASK(31, 29)
#define VPU_HW_BTRS_LNL_IP_RESET 0x00000160u
#define VPU_HW_BTRS_LNL_IP_RESET_TRIGGER_MASK BIT_MASK(0)

View File

@ -968,14 +968,14 @@ void ivpu_hw_ip_wdt_disable(struct ivpu_device *vdev)
static u32 ipc_rx_count_get_37xx(struct ivpu_device *vdev)
{
u32 count = REGV_RD32_SILENT(VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT);
u32 count = readl(vdev->regv + VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT);
return REG_GET_FLD(VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT, FILL_LEVEL, count);
}
static u32 ipc_rx_count_get_40xx(struct ivpu_device *vdev)
{
u32 count = REGV_RD32_SILENT(VPU_40XX_HOST_SS_TIM_IPC_FIFO_STAT);
u32 count = readl(vdev->regv + VPU_40XX_HOST_SS_TIM_IPC_FIFO_STAT);
return REG_GET_FLD(VPU_40XX_HOST_SS_TIM_IPC_FIFO_STAT, FILL_LEVEL, count);
}

View File

@ -7,6 +7,7 @@
#define __IVPU_HW_REG_IO_H__
#include <linux/bitfield.h>
#include <linux/fault-inject.h>
#include <linux/io.h>
#include <linux/iopoll.h>
@ -16,13 +17,11 @@
#define REG_IO_ERROR 0xffffffff
#define REGB_RD32(reg) ivpu_hw_reg_rd32(vdev, vdev->regb, (reg), #reg, __func__)
#define REGB_RD32_SILENT(reg) readl(vdev->regb + (reg))
#define REGB_RD64(reg) ivpu_hw_reg_rd64(vdev, vdev->regb, (reg), #reg, __func__)
#define REGB_WR32(reg, val) ivpu_hw_reg_wr32(vdev, vdev->regb, (reg), (val), #reg, __func__)
#define REGB_WR64(reg, val) ivpu_hw_reg_wr64(vdev, vdev->regb, (reg), (val), #reg, __func__)
#define REGV_RD32(reg) ivpu_hw_reg_rd32(vdev, vdev->regv, (reg), #reg, __func__)
#define REGV_RD32_SILENT(reg) readl(vdev->regv + (reg))
#define REGV_RD64(reg) ivpu_hw_reg_rd64(vdev, vdev->regv, (reg), #reg, __func__)
#define REGV_WR32(reg, val) ivpu_hw_reg_wr32(vdev, vdev->regv, (reg), (val), #reg, __func__)
#define REGV_WR64(reg, val) ivpu_hw_reg_wr64(vdev, vdev->regv, (reg), (val), #reg, __func__)
@ -47,31 +46,42 @@
#define REG_TEST_FLD_NUM(REG, FLD, num, val) \
((num) == FIELD_GET(REG##_##FLD##_MASK, val))
#define REGB_POLL_FLD(reg, fld, val, timeout_us) \
({ \
u32 var; \
int r; \
ivpu_dbg(vdev, REG, "%s : %s (0x%08x) Polling field %s started (expected 0x%x)\n", \
__func__, #reg, reg, #fld, val); \
r = read_poll_timeout(REGB_RD32_SILENT, var, (FIELD_GET(reg##_##fld##_MASK, var) == (val)),\
REG_POLL_SLEEP_US, timeout_us, false, (reg)); \
ivpu_dbg(vdev, REG, "%s : %s (0x%08x) Polling field %s %s (reg val 0x%08x)\n", \
__func__, #reg, reg, #fld, r ? "ETIMEDOUT" : "OK", var); \
r; \
})
#define REGB_POLL_FLD(reg, fld, exp_fld_val, timeout_us) \
ivpu_hw_reg_poll_fld(vdev, vdev->regb, reg, reg##_##fld##_MASK, \
FIELD_PREP(reg##_##fld##_MASK, exp_fld_val), timeout_us, \
__func__, #reg, #fld)
#define REGV_POLL_FLD(reg, fld, val, timeout_us) \
({ \
u32 var; \
int r; \
ivpu_dbg(vdev, REG, "%s : %s (0x%08x) Polling field %s started (expected 0x%x)\n", \
__func__, #reg, reg, #fld, val); \
r = read_poll_timeout(REGV_RD32_SILENT, var, (FIELD_GET(reg##_##fld##_MASK, var) == (val)),\
REG_POLL_SLEEP_US, timeout_us, false, (reg)); \
ivpu_dbg(vdev, REG, "%s : %s (0x%08x) Polling field %s %s (reg val 0x%08x)\n", \
__func__, #reg, reg, #fld, r ? "ETIMEDOUT" : "OK", var); \
r; \
})
#define REGV_POLL_FLD(reg, fld, exp_fld_val, timeout_us) \
ivpu_hw_reg_poll_fld(vdev, vdev->regv, reg, reg##_##fld##_MASK, \
FIELD_PREP(reg##_##fld##_MASK, exp_fld_val), timeout_us, \
__func__, #reg, #fld)
extern struct fault_attr ivpu_hw_failure;
static inline int __must_check
ivpu_hw_reg_poll_fld(struct ivpu_device *vdev, void __iomem *base,
u32 reg_offset, u32 reg_mask, u32 exp_masked_val, u32 timeout_us,
const char *func_name, const char *reg_name, const char *fld_name)
{
u32 reg_val;
int ret;
ivpu_dbg(vdev, REG, "%s : %s (0x%08x) POLL %s started (exp_val 0x%x)\n",
func_name, reg_name, reg_offset, fld_name, exp_masked_val);
ret = read_poll_timeout(readl, reg_val, (reg_val & reg_mask) == exp_masked_val,
REG_POLL_SLEEP_US, timeout_us, false, base + reg_offset);
#ifdef CONFIG_FAULT_INJECTION
if (should_fail(&ivpu_hw_failure, 1))
ret = -ETIMEDOUT;
#endif
ivpu_dbg(vdev, REG, "%s : %s (0x%08x) POLL %s %s (reg_val 0x%08x)\n",
func_name, reg_name, reg_offset, fld_name, ret ? "ETIMEDOUT" : "OK", reg_val);
return ret;
}
static inline u32
ivpu_hw_reg_rd32(struct ivpu_device *vdev, void __iomem *base, u32 reg,

View File

@ -459,13 +459,12 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev)
}
}
if (!list_empty(&ipc->cb_msg_list))
if (!kfifo_put(&vdev->hw->irq.fifo, IVPU_HW_IRQ_SRC_IPC))
ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
queue_work(system_wq, &vdev->irq_ipc_work);
}
void ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev)
void ivpu_ipc_irq_work_fn(struct work_struct *work)
{
struct ivpu_device *vdev = container_of(work, struct ivpu_device, irq_ipc_work);
struct ivpu_ipc_info *ipc = vdev->ipc;
struct ivpu_ipc_rx_msg *rx_msg, *r;
struct list_head cb_msg_list;

View File

@ -90,7 +90,7 @@ void ivpu_ipc_disable(struct ivpu_device *vdev);
void ivpu_ipc_reset(struct ivpu_device *vdev);
void ivpu_ipc_irq_handler(struct ivpu_device *vdev);
void ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev);
void ivpu_ipc_irq_work_fn(struct work_struct *work);
void ivpu_ipc_consumer_add(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
u32 channel, ivpu_ipc_rx_callback_t callback);

View File

@ -8,6 +8,7 @@
#include <linux/bitfield.h>
#include <linux/highmem.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/module.h>
#include <uapi/drm/ivpu_accel.h>
@ -17,6 +18,7 @@
#include "ivpu_ipc.h"
#include "ivpu_job.h"
#include "ivpu_jsm_msg.h"
#include "ivpu_mmu.h"
#include "ivpu_pm.h"
#include "ivpu_trace.h"
#include "vpu_boot_api.h"
@ -83,23 +85,9 @@ static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv)
if (!cmdq)
return NULL;
ret = xa_alloc_cyclic(&vdev->db_xa, &cmdq->db_id, NULL, vdev->db_limit, &vdev->db_next,
GFP_KERNEL);
if (ret < 0) {
ivpu_err(vdev, "Failed to allocate doorbell id: %d\n", ret);
goto err_free_cmdq;
}
ret = xa_alloc_cyclic(&file_priv->cmdq_xa, &cmdq->id, cmdq, file_priv->cmdq_limit,
&file_priv->cmdq_id_next, GFP_KERNEL);
if (ret < 0) {
ivpu_err(vdev, "Failed to allocate command queue id: %d\n", ret);
goto err_erase_db_xa;
}
cmdq->mem = ivpu_bo_create_global(vdev, SZ_4K, DRM_IVPU_BO_WC | DRM_IVPU_BO_MAPPABLE);
if (!cmdq->mem)
goto err_erase_cmdq_xa;
goto err_free_cmdq;
ret = ivpu_preemption_buffers_create(vdev, file_priv, cmdq);
if (ret)
@ -107,10 +95,6 @@ static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv)
return cmdq;
err_erase_cmdq_xa:
xa_erase(&file_priv->cmdq_xa, cmdq->id);
err_erase_db_xa:
xa_erase(&vdev->db_xa, cmdq->db_id);
err_free_cmdq:
kfree(cmdq);
return NULL;
@ -118,15 +102,44 @@ err_free_cmdq:
static void ivpu_cmdq_free(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
{
if (!cmdq)
return;
ivpu_preemption_buffers_free(file_priv->vdev, file_priv, cmdq);
ivpu_bo_free(cmdq->mem);
xa_erase(&file_priv->vdev->db_xa, cmdq->db_id);
kfree(cmdq);
}
static struct ivpu_cmdq *ivpu_cmdq_create(struct ivpu_file_priv *file_priv, u8 priority,
bool is_legacy)
{
struct ivpu_device *vdev = file_priv->vdev;
struct ivpu_cmdq *cmdq = NULL;
int ret;
lockdep_assert_held(&file_priv->lock);
cmdq = ivpu_cmdq_alloc(file_priv);
if (!cmdq) {
ivpu_err(vdev, "Failed to allocate command queue\n");
return NULL;
}
cmdq->priority = priority;
cmdq->is_legacy = is_legacy;
ret = xa_alloc_cyclic(&file_priv->cmdq_xa, &cmdq->id, cmdq, file_priv->cmdq_limit,
&file_priv->cmdq_id_next, GFP_KERNEL);
if (ret < 0) {
ivpu_err(vdev, "Failed to allocate command queue ID: %d\n", ret);
goto err_free_cmdq;
}
ivpu_dbg(vdev, JOB, "Command queue %d created, ctx %d\n", cmdq->id, file_priv->ctx.id);
return cmdq;
err_free_cmdq:
ivpu_cmdq_free(file_priv, cmdq);
return NULL;
}
static int ivpu_hws_cmdq_init(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq, u16 engine,
u8 priority)
{
@ -152,6 +165,13 @@ static int ivpu_register_db(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *
struct ivpu_device *vdev = file_priv->vdev;
int ret;
ret = xa_alloc_cyclic(&vdev->db_xa, &cmdq->db_id, NULL, vdev->db_limit, &vdev->db_next,
GFP_KERNEL);
if (ret < 0) {
ivpu_err(vdev, "Failed to allocate doorbell ID: %d\n", ret);
return ret;
}
if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW)
ret = ivpu_jsm_hws_register_db(vdev, file_priv->ctx.id, cmdq->id, cmdq->db_id,
cmdq->mem->vpu_addr, ivpu_bo_size(cmdq->mem));
@ -160,41 +180,52 @@ static int ivpu_register_db(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *
cmdq->mem->vpu_addr, ivpu_bo_size(cmdq->mem));
if (!ret)
ivpu_dbg(vdev, JOB, "DB %d registered to cmdq %d ctx %d\n",
cmdq->db_id, cmdq->id, file_priv->ctx.id);
ivpu_dbg(vdev, JOB, "DB %d registered to cmdq %d ctx %d priority %d\n",
cmdq->db_id, cmdq->id, file_priv->ctx.id, cmdq->priority);
else
xa_erase(&vdev->db_xa, cmdq->db_id);
return ret;
}
static int
ivpu_cmdq_init(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq, u8 priority)
static void ivpu_cmdq_jobq_init(struct ivpu_device *vdev, struct vpu_job_queue *jobq)
{
jobq->header.engine_idx = VPU_ENGINE_COMPUTE;
jobq->header.head = 0;
jobq->header.tail = 0;
if (ivpu_test_mode & IVPU_TEST_MODE_TURBO) {
ivpu_dbg(vdev, JOB, "Turbo mode enabled");
jobq->header.flags = VPU_JOB_QUEUE_FLAGS_TURBO_MODE;
}
wmb(); /* Flush WC buffer for jobq->header */
}
static inline u32 ivpu_cmdq_get_entry_count(struct ivpu_cmdq *cmdq)
{
size_t size = ivpu_bo_size(cmdq->mem) - sizeof(struct vpu_job_queue_header);
return size / sizeof(struct vpu_job_queue_entry);
}
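A standalone worked example of the computation above; the header and entry sizes are illustrative assumptions, not the real sizeof values from vpu_jsm_api.h:

#include <stdio.h>

/* Assumed sizes -- the real values come from vpu_jsm_api.h */
#define QUEUE_BUF_SIZE     4096u /* SZ_4K, as allocated in ivpu_cmdq_alloc() */
#define HDR_SIZE_ASSUMED     32u /* assumed sizeof(struct vpu_job_queue_header) */
#define ENTRY_SIZE_ASSUMED   64u /* assumed sizeof(struct vpu_job_queue_entry) */

int main(void)
{
	/* Same computation as ivpu_cmdq_get_entry_count() */
	unsigned int entries = (QUEUE_BUF_SIZE - HDR_SIZE_ASSUMED) / ENTRY_SIZE_ASSUMED;

	printf("entry_count = %u\n", entries); /* (4096 - 32) / 64 = 63 */
	return 0;
}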
static int ivpu_cmdq_register(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
{
struct ivpu_device *vdev = file_priv->vdev;
struct vpu_job_queue_header *jobq_header;
int ret;
lockdep_assert_held(&file_priv->lock);
if (cmdq->db_registered)
if (cmdq->db_id)
return 0;
cmdq->entry_count = (u32)((ivpu_bo_size(cmdq->mem) - sizeof(struct vpu_job_queue_header)) /
sizeof(struct vpu_job_queue_entry));
cmdq->entry_count = ivpu_cmdq_get_entry_count(cmdq);
cmdq->jobq = (struct vpu_job_queue *)ivpu_bo_vaddr(cmdq->mem);
jobq_header = &cmdq->jobq->header;
jobq_header->engine_idx = VPU_ENGINE_COMPUTE;
jobq_header->head = 0;
jobq_header->tail = 0;
if (ivpu_test_mode & IVPU_TEST_MODE_TURBO) {
ivpu_dbg(vdev, JOB, "Turbo mode enabled");
jobq_header->flags = VPU_JOB_QUEUE_FLAGS_TURBO_MODE;
}
wmb(); /* Flush WC buffer for jobq->header */
ivpu_cmdq_jobq_init(vdev, cmdq->jobq);
if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW) {
ret = ivpu_hws_cmdq_init(file_priv, cmdq, VPU_ENGINE_COMPUTE, priority);
ret = ivpu_hws_cmdq_init(file_priv, cmdq, VPU_ENGINE_COMPUTE, cmdq->priority);
if (ret)
return ret;
}
@@ -203,58 +234,83 @@ ivpu_cmdq_init(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq, u8 prio
if (ret)
return ret;
cmdq->db_registered = true;
return 0;
}
static int ivpu_cmdq_fini(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
static int ivpu_cmdq_unregister(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
{
struct ivpu_device *vdev = file_priv->vdev;
int ret;
lockdep_assert_held(&file_priv->lock);
if (!cmdq->db_registered)
if (!cmdq->db_id)
return 0;
cmdq->db_registered = false;
if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW) {
ret = ivpu_jsm_hws_destroy_cmdq(vdev, file_priv->ctx.id, cmdq->id);
if (!ret)
ivpu_dbg(vdev, JOB, "Command queue %d destroyed\n", cmdq->id);
ivpu_dbg(vdev, JOB, "Command queue %d destroyed, ctx %d\n",
cmdq->id, file_priv->ctx.id);
}
ret = ivpu_jsm_unregister_db(vdev, cmdq->db_id);
if (!ret)
ivpu_dbg(vdev, JOB, "DB %d unregistered\n", cmdq->db_id);
xa_erase(&file_priv->vdev->db_xa, cmdq->db_id);
cmdq->db_id = 0;
return 0;
}
static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u8 priority)
static inline u8 ivpu_job_to_jsm_priority(u8 priority)
{
if (priority == DRM_IVPU_JOB_PRIORITY_DEFAULT)
return VPU_JOB_SCHEDULING_PRIORITY_BAND_NORMAL;
return priority - 1;
}
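/*
 * Resulting mapping, assuming the usual UAPI numbering (DEFAULT = 0, IDLE = 1,
 * NORMAL = 2, FOCUS = 3, REALTIME = 4) and scheduling bands starting at 0:
 *
 *   DRM_IVPU_JOB_PRIORITY_DEFAULT  -> VPU_JOB_SCHEDULING_PRIORITY_BAND_NORMAL
 *   DRM_IVPU_JOB_PRIORITY_IDLE     -> band 0 (1 - 1)
 *   DRM_IVPU_JOB_PRIORITY_NORMAL   -> band 1 (2 - 1)
 *   DRM_IVPU_JOB_PRIORITY_FOCUS    -> band 2 (3 - 1)
 *   DRM_IVPU_JOB_PRIORITY_REALTIME -> band 3 (4 - 1)
 */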
static void ivpu_cmdq_destroy(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
{
ivpu_cmdq_unregister(file_priv, cmdq);
xa_erase(&file_priv->cmdq_xa, cmdq->id);
ivpu_cmdq_free(file_priv, cmdq);
}
static struct ivpu_cmdq *ivpu_cmdq_acquire_legacy(struct ivpu_file_priv *file_priv, u8 priority)
{
struct ivpu_cmdq *cmdq;
unsigned long cmdq_id;
int ret;
unsigned long id;
lockdep_assert_held(&file_priv->lock);
xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq)
if (cmdq->priority == priority)
xa_for_each(&file_priv->cmdq_xa, id, cmdq)
if (cmdq->is_legacy && cmdq->priority == priority)
break;
if (!cmdq) {
cmdq = ivpu_cmdq_alloc(file_priv);
cmdq = ivpu_cmdq_create(file_priv, priority, true);
if (!cmdq)
return NULL;
cmdq->priority = priority;
}
ret = ivpu_cmdq_init(file_priv, cmdq, priority);
if (ret)
return cmdq;
}
static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u32 cmdq_id)
{
struct ivpu_device *vdev = file_priv->vdev;
struct ivpu_cmdq *cmdq;
lockdep_assert_held(&file_priv->lock);
cmdq = xa_load(&file_priv->cmdq_xa, cmdq_id);
if (!cmdq) {
ivpu_warn_ratelimited(vdev, "Failed to find command queue with ID: %u\n", cmdq_id);
return NULL;
}
return cmdq;
}
@@ -266,11 +322,8 @@ void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv)
lockdep_assert_held(&file_priv->lock);
xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq) {
xa_erase(&file_priv->cmdq_xa, cmdq_id);
ivpu_cmdq_fini(file_priv, cmdq);
ivpu_cmdq_free(file_priv, cmdq);
}
xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq)
ivpu_cmdq_destroy(file_priv, cmdq);
}
/*
@@ -286,8 +339,10 @@ static void ivpu_cmdq_reset(struct ivpu_file_priv *file_priv)
mutex_lock(&file_priv->lock);
xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq)
cmdq->db_registered = false;
xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq) {
xa_erase(&file_priv->vdev->db_xa, cmdq->db_id);
cmdq->db_id = 0;
}
mutex_unlock(&file_priv->lock);
}
@@ -305,25 +360,24 @@ void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev)
mutex_unlock(&vdev->context_list_lock);
}
static void ivpu_cmdq_fini_all(struct ivpu_file_priv *file_priv)
{
struct ivpu_cmdq *cmdq;
unsigned long cmdq_id;
xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq)
ivpu_cmdq_fini(file_priv, cmdq);
}
void ivpu_context_abort_locked(struct ivpu_file_priv *file_priv)
{
struct ivpu_device *vdev = file_priv->vdev;
struct ivpu_cmdq *cmdq;
unsigned long cmdq_id;
lockdep_assert_held(&file_priv->lock);
ivpu_dbg(vdev, JOB, "Context ID: %u abort\n", file_priv->ctx.id);
ivpu_cmdq_fini_all(file_priv);
xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq)
ivpu_cmdq_unregister(file_priv, cmdq);
if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_OS)
ivpu_jsm_context_release(vdev, file_priv->ctx.id);
ivpu_mmu_disable_ssid_events(vdev, file_priv->ctx.id);
file_priv->aborted = true;
}
static int ivpu_cmdq_push_job(struct ivpu_cmdq *cmdq, struct ivpu_job *job)
@@ -462,16 +516,14 @@ static struct ivpu_job *ivpu_job_remove_from_submitted_jobs(struct ivpu_device *
{
struct ivpu_job *job;
xa_lock(&vdev->submitted_jobs_xa);
job = __xa_erase(&vdev->submitted_jobs_xa, job_id);
lockdep_assert_held(&vdev->submitted_jobs_lock);
job = xa_erase(&vdev->submitted_jobs_xa, job_id);
if (xa_empty(&vdev->submitted_jobs_xa) && job) {
vdev->busy_time = ktime_add(ktime_sub(ktime_get(), vdev->busy_start_ts),
vdev->busy_time);
}
xa_unlock(&vdev->submitted_jobs_xa);
return job;
}
@@ -479,6 +531,28 @@ static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32
{
struct ivpu_job *job;
lockdep_assert_held(&vdev->submitted_jobs_lock);
job = xa_load(&vdev->submitted_jobs_xa, job_id);
if (!job)
return -ENOENT;
if (job_status == VPU_JSM_STATUS_MVNCI_CONTEXT_VIOLATION_HW) {
guard(mutex)(&job->file_priv->lock);
if (job->file_priv->has_mmu_faults)
return 0;
/*
 * Mark the context as faulty and defer destruction of the job to the
 * job-abort handler, so that MMU faults and jobs returning a
 * context-violation status are synchronized and handled the same way.
 */
job->file_priv->has_mmu_faults = true;
queue_work(system_wq, &vdev->context_abort_work);
return 0;
}
job = ivpu_job_remove_from_submitted_jobs(vdev, job_id);
if (!job)
return -ENOENT;
@@ -497,6 +571,10 @@ static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32
ivpu_stop_job_timeout_detection(vdev);
ivpu_rpm_put(vdev);
if (!xa_empty(&vdev->submitted_jobs_xa))
ivpu_start_job_timeout_detection(vdev);
return 0;
}
@@ -505,11 +583,29 @@ void ivpu_jobs_abort_all(struct ivpu_device *vdev)
struct ivpu_job *job;
unsigned long id;
mutex_lock(&vdev->submitted_jobs_lock);
xa_for_each(&vdev->submitted_jobs_xa, id, job)
ivpu_job_signal_and_destroy(vdev, id, DRM_IVPU_JOB_STATUS_ABORTED);
mutex_unlock(&vdev->submitted_jobs_lock);
}
static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
void ivpu_cmdq_abort_all_jobs(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id)
{
struct ivpu_job *job;
unsigned long id;
mutex_lock(&vdev->submitted_jobs_lock);
xa_for_each(&vdev->submitted_jobs_xa, id, job)
if (job->file_priv->ctx.id == ctx_id && job->cmdq_id == cmdq_id)
ivpu_job_signal_and_destroy(vdev, id, DRM_IVPU_JOB_STATUS_ABORTED);
mutex_unlock(&vdev->submitted_jobs_lock);
}
static int ivpu_job_submit(struct ivpu_job *job, u8 priority, u32 cmdq_id)
{
struct ivpu_file_priv *file_priv = job->file_priv;
struct ivpu_device *vdev = job->vdev;
@@ -521,25 +617,35 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
if (ret < 0)
return ret;
mutex_lock(&vdev->submitted_jobs_lock);
mutex_lock(&file_priv->lock);
cmdq = ivpu_cmdq_acquire(file_priv, priority);
if (cmdq_id == 0)
cmdq = ivpu_cmdq_acquire_legacy(file_priv, priority);
else
cmdq = ivpu_cmdq_acquire(file_priv, cmdq_id);
if (!cmdq) {
ivpu_warn_ratelimited(vdev, "Failed to get job queue, ctx %d engine %d prio %d\n",
file_priv->ctx.id, job->engine_idx, priority);
ivpu_warn_ratelimited(vdev, "Failed to get job queue, ctx %d\n", file_priv->ctx.id);
ret = -EINVAL;
goto err_unlock_file_priv;
goto err_unlock;
}
xa_lock(&vdev->submitted_jobs_xa);
ret = ivpu_cmdq_register(file_priv, cmdq);
if (ret) {
ivpu_err(vdev, "Failed to register command queue: %d\n", ret);
goto err_unlock;
}
job->cmdq_id = cmdq->id;
is_first_job = xa_empty(&vdev->submitted_jobs_xa);
ret = __xa_alloc_cyclic(&vdev->submitted_jobs_xa, &job->job_id, job, file_priv->job_limit,
&file_priv->job_id_next, GFP_KERNEL);
ret = xa_alloc_cyclic(&vdev->submitted_jobs_xa, &job->job_id, job, file_priv->job_limit,
&file_priv->job_id_next, GFP_KERNEL);
if (ret < 0) {
ivpu_dbg(vdev, JOB, "Too many active jobs in ctx %d\n",
file_priv->ctx.id);
ret = -EBUSY;
goto err_unlock_submitted_jobs_xa;
goto err_unlock;
}
ret = ivpu_cmdq_push_job(cmdq, job);
@@ -559,23 +665,23 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
trace_job("submit", job);
ivpu_dbg(vdev, JOB, "Job submitted: id %3u ctx %2d engine %d prio %d addr 0x%llx next %d\n",
job->job_id, file_priv->ctx.id, job->engine_idx, priority,
job->job_id, file_priv->ctx.id, job->engine_idx, cmdq->priority,
job->cmd_buf_vpu_addr, cmdq->jobq->header.tail);
xa_unlock(&vdev->submitted_jobs_xa);
mutex_unlock(&file_priv->lock);
if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW))
if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW)) {
ivpu_job_signal_and_destroy(vdev, job->job_id, VPU_JSM_STATUS_SUCCESS);
}
mutex_unlock(&vdev->submitted_jobs_lock);
return 0;
err_erase_xa:
__xa_erase(&vdev->submitted_jobs_xa, job->job_id);
err_unlock_submitted_jobs_xa:
xa_unlock(&vdev->submitted_jobs_xa);
err_unlock_file_priv:
xa_erase(&vdev->submitted_jobs_xa, job->job_id);
err_unlock:
mutex_unlock(&vdev->submitted_jobs_lock);
mutex_unlock(&file_priv->lock);
ivpu_rpm_put(vdev);
return ret;
@@ -585,7 +691,7 @@ static int
ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32 *buf_handles,
u32 buf_count, u32 commands_offset)
{
struct ivpu_file_priv *file_priv = file->driver_priv;
struct ivpu_file_priv *file_priv = job->file_priv;
struct ivpu_device *vdev = file_priv->vdev;
struct ww_acquire_ctx acquire_ctx;
enum dma_resv_usage usage;
@@ -647,49 +753,20 @@ unlock_reservations:
return ret;
}
static inline u8 ivpu_job_to_hws_priority(struct ivpu_file_priv *file_priv, u8 priority)
static int ivpu_submit(struct drm_file *file, struct ivpu_file_priv *file_priv, u32 cmdq_id,
u32 buffer_count, u32 engine, void __user *buffers_ptr, u32 cmds_offset,
u8 priority)
{
if (priority == DRM_IVPU_JOB_PRIORITY_DEFAULT)
return DRM_IVPU_JOB_PRIORITY_NORMAL;
return priority - 1;
}
int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
struct ivpu_file_priv *file_priv = file->driver_priv;
struct ivpu_device *vdev = file_priv->vdev;
struct drm_ivpu_submit *params = data;
struct ivpu_job *job;
u32 *buf_handles;
int idx, ret;
u8 priority;
if (params->engine != DRM_IVPU_ENGINE_COMPUTE)
return -EINVAL;
if (params->priority > DRM_IVPU_JOB_PRIORITY_REALTIME)
return -EINVAL;
if (params->buffer_count == 0 || params->buffer_count > JOB_MAX_BUFFER_COUNT)
return -EINVAL;
if (!IS_ALIGNED(params->commands_offset, 8))
return -EINVAL;
if (!file_priv->ctx.id)
return -EINVAL;
if (file_priv->has_mmu_faults)
return -EBADFD;
buf_handles = kcalloc(params->buffer_count, sizeof(u32), GFP_KERNEL);
buf_handles = kcalloc(buffer_count, sizeof(u32), GFP_KERNEL);
if (!buf_handles)
return -ENOMEM;
ret = copy_from_user(buf_handles,
(void __user *)params->buffers_ptr,
params->buffer_count * sizeof(u32));
ret = copy_from_user(buf_handles, buffers_ptr, buffer_count * sizeof(u32));
if (ret) {
ret = -EFAULT;
goto err_free_handles;
@@ -700,27 +777,23 @@ int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
goto err_free_handles;
}
ivpu_dbg(vdev, JOB, "Submit ioctl: ctx %u buf_count %u\n",
file_priv->ctx.id, params->buffer_count);
ivpu_dbg(vdev, JOB, "Submit ioctl: ctx %u buf_count %u\n", file_priv->ctx.id, buffer_count);
job = ivpu_job_create(file_priv, params->engine, params->buffer_count);
job = ivpu_job_create(file_priv, engine, buffer_count);
if (!job) {
ivpu_err(vdev, "Failed to create job\n");
ret = -ENOMEM;
goto err_exit_dev;
}
ret = ivpu_job_prepare_bos_for_submit(file, job, buf_handles, params->buffer_count,
params->commands_offset);
ret = ivpu_job_prepare_bos_for_submit(file, job, buf_handles, buffer_count, cmds_offset);
if (ret) {
ivpu_err(vdev, "Failed to prepare job: %d\n", ret);
goto err_destroy_job;
}
priority = ivpu_job_to_hws_priority(file_priv, params->priority);
down_read(&vdev->pm->reset_lock);
ret = ivpu_job_submit(job, priority);
ret = ivpu_job_submit(job, priority, cmdq_id);
up_read(&vdev->pm->reset_lock);
if (ret)
goto err_signal_fence;
@@ -740,12 +813,122 @@ err_free_handles:
return ret;
}
int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
struct ivpu_file_priv *file_priv = file->driver_priv;
struct drm_ivpu_submit *args = data;
u8 priority;
if (args->engine != DRM_IVPU_ENGINE_COMPUTE)
return -EINVAL;
if (args->priority > DRM_IVPU_JOB_PRIORITY_REALTIME)
return -EINVAL;
if (args->buffer_count == 0 || args->buffer_count > JOB_MAX_BUFFER_COUNT)
return -EINVAL;
if (!IS_ALIGNED(args->commands_offset, 8))
return -EINVAL;
if (!file_priv->ctx.id)
return -EINVAL;
if (file_priv->has_mmu_faults)
return -EBADFD;
priority = ivpu_job_to_jsm_priority(args->priority);
return ivpu_submit(file, file_priv, 0, args->buffer_count, args->engine,
(void __user *)args->buffers_ptr, args->commands_offset, priority);
}
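For context, a minimal userspace sketch of the legacy submit path validated above; the struct layout and header path follow the ivpu UAPI (include/uapi/drm/ivpu_accel.h) as I understand it, so treat the details as assumptions rather than a verified recipe:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/ivpu_accel.h>

/* Submits one command-buffer BO at offset 0; error handling trimmed */
static int submit_one(int fd, uint32_t bo_handle)
{
	uint32_t handles[1] = { bo_handle };
	struct drm_ivpu_submit args;

	memset(&args, 0, sizeof(args));
	args.buffers_ptr = (uintptr_t)handles;
	args.buffer_count = 1;
	args.engine = DRM_IVPU_ENGINE_COMPUTE;
	args.commands_offset = 0; /* must stay 8-byte aligned */
	args.priority = DRM_IVPU_JOB_PRIORITY_DEFAULT;

	return ioctl(fd, DRM_IOCTL_IVPU_SUBMIT, &args);
}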
int ivpu_cmdq_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
struct ivpu_file_priv *file_priv = file->driver_priv;
struct drm_ivpu_cmdq_submit *args = data;
if (!ivpu_is_capable(file_priv->vdev, DRM_IVPU_CAP_MANAGE_CMDQ))
return -ENODEV;
if (args->cmdq_id < IVPU_CMDQ_MIN_ID || args->cmdq_id > IVPU_CMDQ_MAX_ID)
return -EINVAL;
if (args->buffer_count == 0 || args->buffer_count > JOB_MAX_BUFFER_COUNT)
return -EINVAL;
if (!IS_ALIGNED(args->commands_offset, 8))
return -EINVAL;
if (!file_priv->ctx.id)
return -EINVAL;
if (file_priv->has_mmu_faults)
return -EBADFD;
return ivpu_submit(file, file_priv, args->cmdq_id, args->buffer_count, VPU_ENGINE_COMPUTE,
(void __user *)args->buffers_ptr, args->commands_offset, 0);
}
int ivpu_cmdq_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
struct ivpu_file_priv *file_priv = file->driver_priv;
struct drm_ivpu_cmdq_create *args = data;
struct ivpu_cmdq *cmdq;
if (!ivpu_is_capable(file_priv->vdev, DRM_IVPU_CAP_MANAGE_CMDQ))
return -ENODEV;
if (args->priority > DRM_IVPU_JOB_PRIORITY_REALTIME)
return -EINVAL;
mutex_lock(&file_priv->lock);
cmdq = ivpu_cmdq_create(file_priv, ivpu_job_to_jsm_priority(args->priority), false);
if (cmdq)
args->cmdq_id = cmdq->id;
mutex_unlock(&file_priv->lock);
return cmdq ? 0 : -ENOMEM;
}
int ivpu_cmdq_destroy_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
struct ivpu_file_priv *file_priv = file->driver_priv;
struct ivpu_device *vdev = file_priv->vdev;
struct drm_ivpu_cmdq_destroy *args = data;
struct ivpu_cmdq *cmdq;
u32 cmdq_id;
int ret;
if (!ivpu_is_capable(vdev, DRM_IVPU_CAP_MANAGE_CMDQ))
return -ENODEV;
mutex_lock(&file_priv->lock);
cmdq = xa_load(&file_priv->cmdq_xa, args->cmdq_id);
if (!cmdq || cmdq->is_legacy) {
ret = -ENOENT;
goto err_unlock;
}
cmdq_id = cmdq->id;
ivpu_cmdq_destroy(file_priv, cmdq);
mutex_unlock(&file_priv->lock);
ivpu_cmdq_abort_all_jobs(vdev, file_priv->ctx.id, cmdq_id);
return 0;
err_unlock:
mutex_unlock(&file_priv->lock);
return ret;
}
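/*
 * Hedged usage sketch for the managed-queue ioctls above; the DRM_IOCTL_*
 * macro names and struct fields are inferred from the handlers and may
 * differ in the final UAPI header:
 *
 *   struct drm_ivpu_cmdq_create c = { .priority = DRM_IVPU_JOB_PRIORITY_NORMAL };
 *   ioctl(fd, DRM_IOCTL_IVPU_CMDQ_CREATE, &c);   // on success c.cmdq_id is set
 *
 *   struct drm_ivpu_cmdq_submit s = { .cmdq_id = c.cmdq_id };
 *   ioctl(fd, DRM_IOCTL_IVPU_CMDQ_SUBMIT, &s);   // plus buffers, offsets, ...
 *
 *   struct drm_ivpu_cmdq_destroy d = { .cmdq_id = c.cmdq_id };
 *   ioctl(fd, DRM_IOCTL_IVPU_CMDQ_DESTROY, &d);  // remaining jobs are aborted
 */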
static void
ivpu_job_done_callback(struct ivpu_device *vdev, struct ivpu_ipc_hdr *ipc_hdr,
struct vpu_jsm_msg *jsm_msg)
{
struct vpu_ipc_msg_payload_job_done *payload;
int ret;
if (!jsm_msg) {
ivpu_err(vdev, "IPC message has no JSM payload\n");
@@ -758,9 +941,10 @@ ivpu_job_done_callback(struct ivpu_device *vdev, struct ivpu_ipc_hdr *ipc_hdr,
}
payload = (struct vpu_ipc_msg_payload_job_done *)&jsm_msg->payload;
ret = ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status);
if (!ret && !xa_empty(&vdev->submitted_jobs_xa))
ivpu_start_job_timeout_detection(vdev);
mutex_lock(&vdev->submitted_jobs_lock);
ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status);
mutex_unlock(&vdev->submitted_jobs_lock);
}
void ivpu_job_done_consumer_init(struct ivpu_device *vdev)
@@ -773,3 +957,55 @@ void ivpu_job_done_consumer_fini(struct ivpu_device *vdev)
{
ivpu_ipc_consumer_del(vdev, &vdev->job_done_consumer);
}
void ivpu_context_abort_work_fn(struct work_struct *work)
{
struct ivpu_device *vdev = container_of(work, struct ivpu_device, context_abort_work);
struct ivpu_file_priv *file_priv;
struct ivpu_job *job;
unsigned long ctx_id;
unsigned long id;
if (drm_WARN_ON(&vdev->drm, pm_runtime_get_if_active(vdev->drm.dev) <= 0))
return;
if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW)
ivpu_jsm_reset_engine(vdev, 0);
mutex_lock(&vdev->context_list_lock);
xa_for_each(&vdev->context_xa, ctx_id, file_priv) {
if (!file_priv->has_mmu_faults || file_priv->aborted)
continue;
mutex_lock(&file_priv->lock);
ivpu_context_abort_locked(file_priv);
mutex_unlock(&file_priv->lock);
}
mutex_unlock(&vdev->context_list_lock);
/*
 * We will not receive new MMU event interrupts until the existing events
 * are discarded. However, we want to discard them only after aborting the
 * faulty context, to avoid generating new faults from that context.
 */
ivpu_mmu_discard_events(vdev);
if (vdev->fw->sched_mode != VPU_SCHEDULING_MODE_HW)
goto runtime_put;
ivpu_jsm_hws_resume_engine(vdev, 0);
/*
 * In hardware scheduling mode the NPU has already stopped processing jobs
 * and won't send any further notifications, so we have to free job-related
 * resources and notify userspace ourselves.
 */
mutex_lock(&vdev->submitted_jobs_lock);
xa_for_each(&vdev->submitted_jobs_xa, id, job)
if (job->file_priv->aborted)
ivpu_job_signal_and_destroy(vdev, job->job_id, DRM_IVPU_JOB_STATUS_ABORTED);
mutex_unlock(&vdev->submitted_jobs_lock);
runtime_put:
pm_runtime_mark_last_busy(vdev->drm.dev);
pm_runtime_put_autosuspend(vdev->drm.dev);
}
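/*
 * Sequence implemented above, in short:
 *  1. the MMU event handler marks file_priv->has_mmu_faults and queues this work
 *  2. (HW scheduling only) reset the engine so the NPU stops running jobs
 *  3. abort every faulty context (unregister its queues, release the SSID)
 *  4. discard stale MMU events only after the abort, so the faulty context
 *     cannot generate new ones
 *  5. (HW scheduling only) resume the engine, then signal and free the jobs
 *     of aborted contexts, since no further notifications will arrive
 */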


@@ -30,8 +30,8 @@ struct ivpu_cmdq {
u32 entry_count;
u32 id;
u32 db_id;
bool db_registered;
u8 priority;
bool is_legacy;
};
/**
@@ -51,6 +51,7 @@ struct ivpu_job {
struct ivpu_file_priv *file_priv;
struct dma_fence *done_fence;
u64 cmd_buf_vpu_addr;
u32 cmdq_id;
u32 job_id;
u32 engine_idx;
size_t bo_count;
@@ -58,14 +59,19 @@ };
};
int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
int ivpu_cmdq_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
int ivpu_cmdq_destroy_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
int ivpu_cmdq_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
void ivpu_context_abort_locked(struct ivpu_file_priv *file_priv);
void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv);
void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev);
void ivpu_cmdq_abort_all_jobs(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id);
void ivpu_job_done_consumer_init(struct ivpu_device *vdev);
void ivpu_job_done_consumer_fini(struct ivpu_device *vdev);
void ivpu_context_abort_work_fn(struct work_struct *work);
void ivpu_jobs_abort_all(struct ivpu_device *vdev);


@@ -7,6 +7,7 @@
#include "ivpu_hw.h"
#include "ivpu_ipc.h"
#include "ivpu_jsm_msg.h"
#include "vpu_jsm_api.h"
const char *ivpu_jsm_msg_type_to_str(enum vpu_ipc_msg_type type)
{
@@ -407,26 +408,18 @@ int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
{
struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP };
struct vpu_jsm_msg resp;
struct ivpu_hw_info *hw = vdev->hw;
struct vpu_ipc_msg_payload_hws_priority_band_setup *setup =
&req.payload.hws_priority_band_setup;
int ret;
/* Idle */
req.payload.hws_priority_band_setup.grace_period[0] = 0;
req.payload.hws_priority_band_setup.process_grace_period[0] = 50000;
req.payload.hws_priority_band_setup.process_quantum[0] = 160000;
/* Normal */
req.payload.hws_priority_band_setup.grace_period[1] = 50000;
req.payload.hws_priority_band_setup.process_grace_period[1] = 50000;
req.payload.hws_priority_band_setup.process_quantum[1] = 300000;
/* Focus */
req.payload.hws_priority_band_setup.grace_period[2] = 50000;
req.payload.hws_priority_band_setup.process_grace_period[2] = 50000;
req.payload.hws_priority_band_setup.process_quantum[2] = 200000;
/* Realtime */
req.payload.hws_priority_band_setup.grace_period[3] = 0;
req.payload.hws_priority_band_setup.process_grace_period[3] = 50000;
req.payload.hws_priority_band_setup.process_quantum[3] = 200000;
req.payload.hws_priority_band_setup.normal_band_percentage = 10;
for (int band = VPU_JOB_SCHEDULING_PRIORITY_BAND_IDLE;
band < VPU_JOB_SCHEDULING_PRIORITY_BAND_COUNT; band++) {
setup->grace_period[band] = hw->hws.grace_period[band];
setup->process_grace_period[band] = hw->hws.process_grace_period[band];
setup->process_quantum[band] = hw->hws.process_quantum[band];
}
setup->normal_band_percentage = 10;
ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
&resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
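/*
 * For reference, the per-band triples that the removed open-coded block
 * programmed (grace_period / process_grace_period / process_quantum):
 *   idle:     0     / 50000 / 160000
 *   normal:   50000 / 50000 / 300000
 *   focus:    50000 / 50000 / 200000
 *   realtime: 0     / 50000 / 200000
 * The loop now takes the per-band values from vdev->hw->hws, presumably
 * initialized to the same defaults elsewhere in this series.
 */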


@@ -20,6 +20,12 @@
#define IVPU_MMU_REG_CR0 0x00200020u
#define IVPU_MMU_REG_CR0ACK 0x00200024u
#define IVPU_MMU_REG_CR0ACK_VAL_MASK GENMASK(31, 0)
#define IVPU_MMU_REG_CR0_ATSCHK_MASK BIT(4)
#define IVPU_MMU_REG_CR0_CMDQEN_MASK BIT(3)
#define IVPU_MMU_REG_CR0_EVTQEN_MASK BIT(2)
#define IVPU_MMU_REG_CR0_PRIQEN_MASK BIT(1)
#define IVPU_MMU_REG_CR0_SMMUEN_MASK BIT(0)
#define IVPU_MMU_REG_CR1 0x00200028u
#define IVPU_MMU_REG_CR2 0x0020002cu
#define IVPU_MMU_REG_IRQ_CTRL 0x00200050u
@@ -141,12 +147,6 @@
#define IVPU_MMU_IRQ_EVTQ_EN BIT(2)
#define IVPU_MMU_IRQ_GERROR_EN BIT(0)
#define IVPU_MMU_CR0_ATSCHK BIT(4)
#define IVPU_MMU_CR0_CMDQEN BIT(3)
#define IVPU_MMU_CR0_EVTQEN BIT(2)
#define IVPU_MMU_CR0_PRIQEN BIT(1)
#define IVPU_MMU_CR0_SMMUEN BIT(0)
#define IVPU_MMU_CR1_TABLE_SH GENMASK(11, 10)
#define IVPU_MMU_CR1_TABLE_OC GENMASK(9, 8)
#define IVPU_MMU_CR1_TABLE_IC GENMASK(7, 6)
@@ -596,7 +596,7 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, 0);
REGV_WR32(IVPU_MMU_REG_CMDQ_CONS, 0);
val = IVPU_MMU_CR0_CMDQEN;
val = REG_SET_FLD(IVPU_MMU_REG_CR0, CMDQEN, 0);
ret = ivpu_mmu_reg_write_cr0(vdev, val);
if (ret)
return ret;
@@ -617,12 +617,12 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
REGV_WR32(IVPU_MMU_REG_EVTQ_PROD_SEC, 0);
REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, 0);
val |= IVPU_MMU_CR0_EVTQEN;
val = REG_SET_FLD(IVPU_MMU_REG_CR0, EVTQEN, val);
ret = ivpu_mmu_reg_write_cr0(vdev, val);
if (ret)
return ret;
val |= IVPU_MMU_CR0_ATSCHK;
val = REG_SET_FLD(IVPU_MMU_REG_CR0, ATSCHK, val);
ret = ivpu_mmu_reg_write_cr0(vdev, val);
if (ret)
return ret;
@@ -631,7 +631,7 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
if (ret)
return ret;
val |= IVPU_MMU_CR0_SMMUEN;
val = REG_SET_FLD(IVPU_MMU_REG_CR0, SMMUEN, val);
return ivpu_mmu_reg_write_cr0(vdev, val);
}
@@ -725,8 +725,8 @@ static int ivpu_mmu_cdtab_entry_set(struct ivpu_device *vdev, u32 ssid, u64 cd_d
cd[2] = 0;
cd[3] = 0x0000000000007444;
/* For global context generate memory fault on VPU */
if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID)
/* For global and reserved contexts generate memory fault on VPU */
if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID || ssid == IVPU_RESERVED_CONTEXT_MMU_SSID)
cd[0] |= IVPU_MMU_CD_0_A;
if (valid)
@@ -870,28 +870,107 @@ static u32 *ivpu_mmu_get_event(struct ivpu_device *vdev)
return evt;
}
static int ivpu_mmu_evtq_set(struct ivpu_device *vdev, bool enable)
{
u32 val = REGV_RD32(IVPU_MMU_REG_CR0);
if (enable)
val = REG_SET_FLD(IVPU_MMU_REG_CR0, EVTQEN, val);
else
val = REG_CLR_FLD(IVPU_MMU_REG_CR0, EVTQEN, val);
REGV_WR32(IVPU_MMU_REG_CR0, val);
return REGV_POLL_FLD(IVPU_MMU_REG_CR0ACK, VAL, val, IVPU_MMU_REG_TIMEOUT_US);
}
static int ivpu_mmu_evtq_enable(struct ivpu_device *vdev)
{
return ivpu_mmu_evtq_set(vdev, true);
}
static int ivpu_mmu_evtq_disable(struct ivpu_device *vdev)
{
return ivpu_mmu_evtq_set(vdev, false);
}
void ivpu_mmu_discard_events(struct ivpu_device *vdev)
{
struct ivpu_mmu_info *mmu = vdev->mmu;
mutex_lock(&mmu->lock);
/*
* Disable event queue (stop MMU from updating the producer)
* to allow synchronization of consumer and producer indexes
*/
ivpu_mmu_evtq_disable(vdev);
vdev->mmu->evtq.cons = REGV_RD32(IVPU_MMU_REG_EVTQ_PROD_SEC);
REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, vdev->mmu->evtq.cons);
vdev->mmu->evtq.prod = REGV_RD32(IVPU_MMU_REG_EVTQ_PROD_SEC);
ivpu_mmu_evtq_enable(vdev);
drm_WARN_ON_ONCE(&vdev->drm, vdev->mmu->evtq.cons != vdev->mmu->evtq.prod);
mutex_unlock(&mmu->lock);
}
int ivpu_mmu_disable_ssid_events(struct ivpu_device *vdev, u32 ssid)
{
struct ivpu_mmu_info *mmu = vdev->mmu;
struct ivpu_mmu_cdtab *cdtab = &mmu->cdtab;
u64 *entry;
u64 val;
if (ssid > IVPU_MMU_CDTAB_ENT_COUNT)
return -EINVAL;
mutex_lock(&mmu->lock);
entry = cdtab->base + (ssid * IVPU_MMU_CDTAB_ENT_SIZE);
val = READ_ONCE(entry[0]);
val &= ~IVPU_MMU_CD_0_R;
WRITE_ONCE(entry[0], val);
if (!ivpu_is_force_snoop_enabled(vdev))
clflush_cache_range(entry, IVPU_MMU_CDTAB_ENT_SIZE);
ivpu_mmu_cmdq_write_cfgi_all(vdev);
ivpu_mmu_cmdq_sync(vdev);
mutex_unlock(&mmu->lock);
return 0;
}
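/*
 * Background (an SMMUv3-spec reading, not stated in this patch): the CD R bit
 * enables fault recording for the substream, so clearing IVPU_MMU_CD_0_R
 * above stops further event-queue entries for this SSID without touching its
 * translation setup; CFGI_ALL plus the sync makes the SMMU re-fetch the CD.
 */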
void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev)
{
struct ivpu_file_priv *file_priv;
u32 *event;
u32 ssid;
ivpu_dbg(vdev, IRQ, "MMU event queue\n");
while ((event = ivpu_mmu_get_event(vdev)) != NULL) {
ivpu_mmu_dump_event(vdev, event);
ssid = FIELD_GET(IVPU_MMU_EVT_SSID_MASK, event[0]);
if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID) {
while ((event = ivpu_mmu_get_event(vdev))) {
ssid = FIELD_GET(IVPU_MMU_EVT_SSID_MASK, *event);
if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID ||
ssid == IVPU_RESERVED_CONTEXT_MMU_SSID) {
ivpu_mmu_dump_event(vdev, event);
ivpu_pm_trigger_recovery(vdev, "MMU event");
return;
}
ivpu_mmu_user_context_mark_invalid(vdev, ssid);
REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, vdev->mmu->evtq.cons);
file_priv = xa_load(&vdev->context_xa, ssid);
if (file_priv) {
if (!READ_ONCE(file_priv->has_mmu_faults)) {
ivpu_mmu_dump_event(vdev, event);
WRITE_ONCE(file_priv->has_mmu_faults, true);
}
}
}
if (!kfifo_put(&vdev->hw->irq.fifo, IVPU_HW_IRQ_SRC_MMU_EVTQ))
ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
queue_work(system_wq, &vdev->context_abort_work);
}
void ivpu_mmu_evtq_dump(struct ivpu_device *vdev)


@@ -47,5 +47,7 @@ int ivpu_mmu_invalidate_tlb(struct ivpu_device *vdev, u16 ssid);
void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev);
void ivpu_mmu_irq_gerr_handler(struct ivpu_device *vdev);
void ivpu_mmu_evtq_dump(struct ivpu_device *vdev);
void ivpu_mmu_discard_events(struct ivpu_device *vdev);
int ivpu_mmu_disable_ssid_events(struct ivpu_device *vdev, u32 ssid);
#endif /* __IVPU_MMU_H__ */


@@ -635,16 +635,3 @@ void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev)
ivpu_mmu_cd_clear(vdev, vdev->rctx.id);
ivpu_mmu_context_fini(vdev, &vdev->rctx);
}
void ivpu_mmu_user_context_mark_invalid(struct ivpu_device *vdev, u32 ssid)
{
struct ivpu_file_priv *file_priv;
xa_lock(&vdev->context_xa);
file_priv = xa_load(&vdev->context_xa, ssid);
if (file_priv)
file_priv->has_mmu_faults = true;
xa_unlock(&vdev->context_xa);
}


@@ -37,8 +37,6 @@ void ivpu_mmu_global_context_fini(struct ivpu_device *vdev);
int ivpu_mmu_reserved_context_init(struct ivpu_device *vdev);
void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev);
void ivpu_mmu_user_context_mark_invalid(struct ivpu_device *vdev, u32 ssid);
int ivpu_mmu_context_insert_node(struct ivpu_mmu_context *ctx, const struct ivpu_addr_range *range,
u64 size, struct drm_mm_node *node);
void ivpu_mmu_context_remove_node(struct ivpu_mmu_context *ctx, struct drm_mm_node *node);


@@ -177,16 +177,11 @@ void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason)
return;
}
if (ivpu_is_fpga(vdev)) {
ivpu_err(vdev, "Recovery not available on FPGA\n");
return;
}
/* Trigger recovery if it's not in progress */
if (atomic_cmpxchg(&vdev->pm->reset_pending, 0, 1) == 0) {
ivpu_hw_diagnose_failure(vdev);
ivpu_hw_irq_disable(vdev); /* Disable IRQ early to protect from IRQ storm */
queue_work(system_long_wq, &vdev->pm->recovery_work);
queue_work(system_unbound_wq, &vdev->pm->recovery_work);
}
}
@@ -462,8 +457,9 @@ int ivpu_pm_dct_disable(struct ivpu_device *vdev)
return 0;
}
void ivpu_pm_dct_irq_thread_handler(struct ivpu_device *vdev)
void ivpu_pm_irq_dct_work_fn(struct work_struct *work)
{
struct ivpu_device *vdev = container_of(work, struct ivpu_device, irq_dct_work);
bool enable;
int ret;


@@ -45,6 +45,6 @@ void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev);
int ivpu_pm_dct_init(struct ivpu_device *vdev);
int ivpu_pm_dct_enable(struct ivpu_device *vdev, u8 active_percent);
int ivpu_pm_dct_disable(struct ivpu_device *vdev);
void ivpu_pm_dct_irq_thread_handler(struct ivpu_device *vdev);
void ivpu_pm_irq_dct_work_fn(struct work_struct *work);
#endif /* __IVPU_PM_H__ */


@@ -7,11 +7,14 @@
#include <linux/err.h>
#include "ivpu_drv.h"
#include "ivpu_gem.h"
#include "ivpu_fw.h"
#include "ivpu_hw.h"
#include "ivpu_sysfs.h"
/*
/**
* DOC: npu_busy_time_us
*
* npu_busy_time_us is the time the device has spent executing jobs.
* The time is accumulated only while there are jobs submitted to the firmware.
*
@@ -30,17 +33,42 @@ npu_busy_time_us_show(struct device *dev, struct device_attribute *attr, char *b
struct ivpu_device *vdev = to_ivpu_device(drm);
ktime_t total, now = 0;
xa_lock(&vdev->submitted_jobs_xa);
mutex_lock(&vdev->submitted_jobs_lock);
total = vdev->busy_time;
if (!xa_empty(&vdev->submitted_jobs_xa))
now = ktime_sub(ktime_get(), vdev->busy_start_ts);
xa_unlock(&vdev->submitted_jobs_xa);
mutex_unlock(&vdev->submitted_jobs_lock);
return sysfs_emit(buf, "%lld\n", ktime_to_us(ktime_add(total, now)));
}
static DEVICE_ATTR_RO(npu_busy_time_us);
/**
* DOC: npu_memory_utilization
*
* npu_memory_utilization reports the current NPU memory utilization, in bytes.
*
*/
static ssize_t
npu_memory_utilization_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct drm_device *drm = dev_get_drvdata(dev);
struct ivpu_device *vdev = to_ivpu_device(drm);
struct ivpu_bo *bo;
u64 total_npu_memory = 0;
mutex_lock(&vdev->bo_list_lock);
list_for_each_entry(bo, &vdev->bo_list, bo_list_node)
total_npu_memory += bo->base.base.size;
mutex_unlock(&vdev->bo_list_lock);
return sysfs_emit(buf, "%lld\n", total_npu_memory);
}
static DEVICE_ATTR_RO(npu_memory_utilization);
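A small self-contained sketch of reading the new attribute from userspace; the sysfs path is illustrative, since it depends on the PCI address of the NPU:

#include <stdio.h>

int main(void)
{
	/* Illustrative path; substitute the real PCI address of the NPU */
	FILE *f = fopen("/sys/bus/pci/devices/0000:00:0b.0/npu_memory_utilization", "r");
	unsigned long long bytes;

	if (!f)
		return 1;
	if (fscanf(f, "%llu", &bytes) == 1)
		printf("NPU memory in use: %llu bytes\n", bytes);
	fclose(f);
	return 0;
}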
/**
* DOC: sched_mode
*
@@ -64,6 +92,7 @@ static DEVICE_ATTR_RO(sched_mode);
static struct attribute *ivpu_dev_attrs[] = {
&dev_attr_npu_busy_time_us.attr,
&dev_attr_npu_memory_utilization.attr,
&dev_attr_sched_mode.attr,
NULL,
};


@@ -20,6 +20,11 @@ static unsigned int mhi_timeout_ms = 2000; /* 2 sec default */
module_param(mhi_timeout_ms, uint, 0600);
MODULE_PARM_DESC(mhi_timeout_ms, "MHI controller timeout value");
static const char *fw_image_paths[FAMILY_MAX] = {
[FAMILY_AIC100] = "qcom/aic100/sbl.bin",
[FAMILY_AIC200] = "qcom/aic200/sbl.bin",
};
static const struct mhi_channel_config aic100_channels[] = {
{
.name = "QAIC_LOOPBACK",
@@ -439,6 +444,297 @@ },
},
};
static const struct mhi_channel_config aic200_channels[] = {
{
.name = "QAIC_LOOPBACK",
.num = 0,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_LOOPBACK",
.num = 1,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_SAHARA",
.num = 2,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_SBL,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_SAHARA",
.num = 3,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_SBL,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_SSR",
.num = 6,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_SSR",
.num = 7,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_CONTROL",
.num = 10,
.num_elements = 128,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_CONTROL",
.num = 11,
.num_elements = 128,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_LOGGING",
.num = 12,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_SBL,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_LOGGING",
.num = 13,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_SBL,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_STATUS",
.num = 14,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_STATUS",
.num = 15,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_TELEMETRY",
.num = 16,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_TELEMETRY",
.num = 17,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_TIMESYNC_PERIODIC",
.num = 22,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "QAIC_TIMESYNC_PERIODIC",
.num = 23,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "IPCR",
.num = 24,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
.wake_capable = false,
},
{
.name = "IPCR",
.num = 25,
.num_elements = 32,
.local_elements = 0,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = MHI_CH_EE_AMSS,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = true,
.wake_capable = false,
},
};
static struct mhi_event_config aic100_events[] = {
{
.num_elements = 32,
@@ -454,16 +750,44 @@ static struct mhi_event_config aic100_events[] = {
},
};
static struct mhi_controller_config aic100_config = {
.max_channels = 128,
.timeout_ms = 0, /* controlled by mhi_timeout */
.buf_len = 0,
.num_channels = ARRAY_SIZE(aic100_channels),
.ch_cfg = aic100_channels,
.num_events = ARRAY_SIZE(aic100_events),
.event_cfg = aic100_events,
.use_bounce_buf = false,
.m2_no_db = false,
static struct mhi_event_config aic200_events[] = {
{
.num_elements = 32,
.irq_moderation_ms = 0,
.irq = 0,
.channel = U32_MAX,
.priority = 1,
.mode = MHI_DB_BRST_DISABLE,
.data_type = MHI_ER_CTRL,
.hardware_event = false,
.client_managed = false,
.offload_channel = false,
},
};
static struct mhi_controller_config mhi_cntrl_configs[] = {
[FAMILY_AIC100] = {
.max_channels = 128,
.timeout_ms = 0, /* controlled by mhi_timeout */
.buf_len = 0,
.num_channels = ARRAY_SIZE(aic100_channels),
.ch_cfg = aic100_channels,
.num_events = ARRAY_SIZE(aic100_events),
.event_cfg = aic100_events,
.use_bounce_buf = false,
.m2_no_db = false,
},
[FAMILY_AIC200] = {
.max_channels = 128,
.timeout_ms = 0, /* controlled by mhi_timeout */
.buf_len = 0,
.num_channels = ARRAY_SIZE(aic200_channels),
.ch_cfg = aic200_channels,
.num_events = ARRAY_SIZE(aic200_events),
.event_cfg = aic200_events,
.use_bounce_buf = false,
.m2_no_db = false,
},
};
static int mhi_read_reg(struct mhi_controller *mhi_cntrl, void __iomem *addr, u32 *out)
@@ -545,8 +869,9 @@ static int mhi_reset_and_async_power_up(struct mhi_controller *mhi_cntrl)
}
struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, void __iomem *mhi_bar,
int mhi_irq, bool shared_msi)
int mhi_irq, bool shared_msi, int family)
{
struct mhi_controller_config mhi_config = mhi_cntrl_configs[family];
struct mhi_controller *mhi_cntrl;
int ret;
@@ -581,11 +906,18 @@ struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, voi
if (shared_msi) /* MSI shared with data path, no IRQF_NO_SUSPEND */
mhi_cntrl->irq_flags = IRQF_SHARED;
mhi_cntrl->fw_image = "qcom/aic100/sbl.bin";
mhi_cntrl->fw_image = fw_image_paths[family];
if (family == FAMILY_AIC200) {
mhi_cntrl->name = "AIC200";
mhi_cntrl->seg_len = SZ_512K;
} else {
mhi_cntrl->name = "AIC100";
}
/* use latest configured timeout */
aic100_config.timeout_ms = mhi_timeout_ms;
ret = mhi_register_controller(mhi_cntrl, &aic100_config);
mhi_config.timeout_ms = mhi_timeout_ms;
ret = mhi_register_controller(mhi_cntrl, &mhi_config);
if (ret) {
pci_err(pci_dev, "mhi_register_controller failed %d\n", ret);
return ERR_PTR(ret);


@@ -8,7 +8,7 @@
#define MHICONTROLLERQAIC_H_
struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, void __iomem *mhi_bar,
int mhi_irq, bool shared_msi);
int mhi_irq, bool shared_msi, int family);
void qaic_mhi_free_controller(struct mhi_controller *mhi_cntrl, bool link_up);
void qaic_mhi_start_reset(struct mhi_controller *mhi_cntrl);
void qaic_mhi_reset_done(struct mhi_controller *mhi_cntrl);


@@ -32,6 +32,12 @@
#define to_accel_kdev(qddev) (to_drm(qddev)->accel->kdev) /* Return Linux device of accel node */
#define to_qaic_device(dev) (to_qaic_drm_device((dev))->qdev)
enum aic_families {
FAMILY_AIC100,
FAMILY_AIC200,
FAMILY_MAX,
};
enum __packed dev_states {
/* Device is offline or will be very soon */
QAIC_OFFLINE,
@@ -113,10 +119,10 @@ struct qaic_device {
struct pci_dev *pdev;
/* Req. ID of request that will be queued next in MHI control device */
u32 next_seq_num;
/* Base address of bar 0 */
void __iomem *bar_0;
/* Base address of bar 2 */
void __iomem *bar_2;
/* Base address of the MHI bar */
void __iomem *bar_mhi;
/* Base address of the DBCs bar */
void __iomem *bar_dbc;
/* Controller structure for MHI devices */
struct mhi_controller *mhi_cntrl;
/* MHI control channel device */


@@ -34,13 +34,46 @@
MODULE_IMPORT_NS("DMA_BUF");
#define PCI_DEV_AIC080 0xa080
#define PCI_DEV_AIC100 0xa100
#define PCI_DEVICE_ID_QCOM_AIC080 0xa080
#define PCI_DEVICE_ID_QCOM_AIC100 0xa100
#define PCI_DEVICE_ID_QCOM_AIC200 0xa110
#define QAIC_NAME "qaic"
#define QAIC_DESC "Qualcomm Cloud AI Accelerators"
#define CNTL_MAJOR 5
#define CNTL_MINOR 0
struct qaic_device_config {
/* Indicates the AIC family the device belongs to */
int family;
/* A bitmask representing the available BARs */
int bar_mask;
/* An index value used to identify the MHI controller BAR */
unsigned int mhi_bar_idx;
/* An index value used to identify the DBCs BAR */
unsigned int dbc_bar_idx;
};
static const struct qaic_device_config aic080_config = {
.family = FAMILY_AIC100,
.bar_mask = BIT(0) | BIT(2) | BIT(4),
.mhi_bar_idx = 0,
.dbc_bar_idx = 2,
};
static const struct qaic_device_config aic100_config = {
.family = FAMILY_AIC100,
.bar_mask = BIT(0) | BIT(2) | BIT(4),
.mhi_bar_idx = 0,
.dbc_bar_idx = 2,
};
static const struct qaic_device_config aic200_config = {
.family = FAMILY_AIC200,
.bar_mask = BIT(0) | BIT(1) | BIT(2) | BIT(4),
.mhi_bar_idx = 1,
.dbc_bar_idx = 2,
};
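/*
 * The masks above, spelled out (bit n set means BAR n must be present):
 *   AIC080/AIC100: BIT(0) | BIT(2) | BIT(4)          = 0x15 -> BARs 0, 2, 4
 *   AIC200:        BIT(0) | BIT(1) | BIT(2) | BIT(4) = 0x17 -> BARs 0, 1, 2, 4
 * init_pci() compares this against pci_select_bars(...) & 0x3f (BARs 0-5).
 */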
bool datapath_polling;
module_param(datapath_polling, bool, 0400);
MODULE_PARM_DESC(datapath_polling, "Operate the datapath in polling mode");
@@ -352,7 +385,8 @@ void qaic_dev_reset_clean_local_state(struct qaic_device *qdev)
release_dbc(qdev, i);
}
static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_device_id *id)
static struct qaic_device *create_qdev(struct pci_dev *pdev,
const struct qaic_device_config *config)
{
struct device *dev = &pdev->dev;
struct qaic_drm_device *qddev;
@@ -365,12 +399,10 @@ static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_de
return NULL;
qdev->dev_state = QAIC_OFFLINE;
if (id->device == PCI_DEV_AIC080 || id->device == PCI_DEV_AIC100) {
qdev->num_dbc = 16;
qdev->dbc = devm_kcalloc(dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL);
if (!qdev->dbc)
return NULL;
}
qdev->num_dbc = 16;
qdev->dbc = devm_kcalloc(dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL);
if (!qdev->dbc)
return NULL;
qddev = devm_drm_dev_alloc(&pdev->dev, &qaic_accel_driver, struct qaic_drm_device, drm);
if (IS_ERR(qddev))
@@ -426,17 +458,18 @@ static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_de
return qdev;
}
static int init_pci(struct qaic_device *qdev, struct pci_dev *pdev)
static int init_pci(struct qaic_device *qdev, struct pci_dev *pdev,
const struct qaic_device_config *config)
{
int bars;
int ret;
bars = pci_select_bars(pdev, IORESOURCE_MEM);
bars = pci_select_bars(pdev, IORESOURCE_MEM) & 0x3f;
/* make sure the device has the expected BARs */
if (bars != (BIT(0) | BIT(2) | BIT(4))) {
pci_dbg(pdev, "%s: expected BARs 0, 2, and 4 not found in device. Found 0x%x\n",
__func__, bars);
if (bars != config->bar_mask) {
pci_dbg(pdev, "%s: expected BARs %#x not found in device. Found %#x\n",
__func__, config->bar_mask, bars);
return -EINVAL;
}
@@ -449,13 +482,13 @@ static int init_pci(struct qaic_device *qdev, struct pci_dev *pdev)
return ret;
dma_set_max_seg_size(&pdev->dev, UINT_MAX);
qdev->bar_0 = devm_ioremap_resource(&pdev->dev, &pdev->resource[0]);
if (IS_ERR(qdev->bar_0))
return PTR_ERR(qdev->bar_0);
qdev->bar_mhi = devm_ioremap_resource(&pdev->dev, &pdev->resource[config->mhi_bar_idx]);
if (IS_ERR(qdev->bar_mhi))
return PTR_ERR(qdev->bar_mhi);
qdev->bar_2 = devm_ioremap_resource(&pdev->dev, &pdev->resource[2]);
if (IS_ERR(qdev->bar_2))
return PTR_ERR(qdev->bar_2);
qdev->bar_dbc = devm_ioremap_resource(&pdev->dev, &pdev->resource[config->dbc_bar_idx]);
if (IS_ERR(qdev->bar_dbc))
return PTR_ERR(qdev->bar_dbc);
/* Managed release since we use pcim_enable_device above */
pci_set_master(pdev);
@@ -465,14 +498,15 @@ static int init_pci(struct qaic_device *qdev, struct pci_dev *pdev)
static int init_msi(struct qaic_device *qdev, struct pci_dev *pdev)
{
int irq_count = qdev->num_dbc + 1;
int mhi_irq;
int ret;
int i;
/* Managed release since we use pcim_enable_device */
ret = pci_alloc_irq_vectors(pdev, 32, 32, PCI_IRQ_MSI);
ret = pci_alloc_irq_vectors(pdev, irq_count, irq_count, PCI_IRQ_MSI | PCI_IRQ_MSIX);
if (ret == -ENOSPC) {
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_MSIX);
if (ret < 0)
return ret;
@@ -485,7 +519,8 @@ static int init_msi(struct qaic_device *qdev, struct pci_dev *pdev)
* interrupted, it shouldn't race with itself.
*/
qdev->single_msi = true;
pci_info(pdev, "Allocating 32 MSIs failed, operating in 1 MSI mode. Performance may be impacted.\n");
pci_info(pdev, "Allocating %d MSIs failed, operating in 1 MSI mode. Performance may be impacted.\n",
irq_count);
} else if (ret < 0) {
return ret;
}
@@ -515,21 +550,22 @@ static int init_msi(struct qaic_device *qdev, struct pci_dev *pdev)
static int qaic_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct qaic_device_config *config = (struct qaic_device_config *)id->driver_data;
struct qaic_device *qdev;
int mhi_irq;
int ret;
int i;
qdev = create_qdev(pdev, id);
qdev = create_qdev(pdev, config);
if (!qdev)
return -ENOMEM;
ret = init_pci(qdev, pdev);
ret = init_pci(qdev, pdev, config);
if (ret)
return ret;
for (i = 0; i < qdev->num_dbc; ++i)
qdev->dbc[i].dbc_base = qdev->bar_2 + QAIC_DBC_OFF(i);
qdev->dbc[i].dbc_base = qdev->bar_dbc + QAIC_DBC_OFF(i);
mhi_irq = init_msi(qdev, pdev);
if (mhi_irq < 0)
@@ -539,8 +575,8 @@ static int qaic_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (ret)
return ret;
qdev->mhi_cntrl = qaic_mhi_register_controller(pdev, qdev->bar_0, mhi_irq,
qdev->single_msi);
qdev->mhi_cntrl = qaic_mhi_register_controller(pdev, qdev->bar_mhi, mhi_irq,
qdev->single_msi, config->family);
if (IS_ERR(qdev->mhi_cntrl)) {
ret = PTR_ERR(qdev->mhi_cntrl);
qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION);
@@ -607,8 +643,9 @@ static struct mhi_driver qaic_mhi_driver = {
};
static const struct pci_device_id qaic_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC080), },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC100), },
{ PCI_DEVICE_DATA(QCOM, AIC080, (kernel_ulong_t)&aic080_config), },
{ PCI_DEVICE_DATA(QCOM, AIC100, (kernel_ulong_t)&aic100_config), },
{ PCI_DEVICE_DATA(QCOM, AIC200, (kernel_ulong_t)&aic200_config), },
{ }
};
MODULE_DEVICE_TABLE(pci, qaic_ids);


@@ -201,7 +201,7 @@ static int qaic_timesync_probe(struct mhi_device *mhi_dev, const struct mhi_devi
goto free_sync_msg;
/* Qtimer register pointer */
mqtsdev->qtimer_addr = qdev->bar_0 + QTIMER_REG_OFFSET;
mqtsdev->qtimer_addr = qdev->bar_mhi + QTIMER_REG_OFFSET;
timer_setup(timer, qaic_timesync_timer, 0);
timer->expires = jiffies + msecs_to_jiffies(timesync_delay_ms);
add_timer(timer);


@@ -160,7 +160,7 @@ struct sahara_context {
struct work_struct fw_work;
struct work_struct dump_work;
struct mhi_device *mhi_dev;
const char **image_table;
const char * const *image_table;
u32 table_size;
u32 active_image_id;
const struct firmware *firmware;
@@ -177,7 +177,7 @@ struct sahara_context {
bool is_mem_dump_mode;
};
static const char *aic100_image_table[] = {
static const char * const aic100_image_table[] = {
[1] = "qcom/aic100/fw1.bin",
[2] = "qcom/aic100/fw2.bin",
[4] = "qcom/aic100/fw4.bin",
@@ -188,6 +188,34 @@ static const char *aic100_image_table[] = {
[10] = "qcom/aic100/fw10.bin",
};
static const char * const aic200_image_table[] = {
[5] = "qcom/aic200/uefi.elf",
[12] = "qcom/aic200/aic200-nsp.bin",
[23] = "qcom/aic200/aop.mbn",
[32] = "qcom/aic200/tz.mbn",
[33] = "qcom/aic200/hypvm.mbn",
[39] = "qcom/aic200/aic200_abl.elf",
[40] = "qcom/aic200/apdp.mbn",
[41] = "qcom/aic200/devcfg.mbn",
[42] = "qcom/aic200/sec.elf",
[43] = "qcom/aic200/aic200-hlos.elf",
[49] = "qcom/aic200/shrm.elf",
[50] = "qcom/aic200/cpucp.elf",
[51] = "qcom/aic200/aop_devcfg.mbn",
[57] = "qcom/aic200/cpucp_dtbs.elf",
[62] = "qcom/aic200/uefi_dtbs.elf",
[63] = "qcom/aic200/xbl_ac_config.mbn",
[64] = "qcom/aic200/tz_ac_config.mbn",
[65] = "qcom/aic200/hyp_ac_config.mbn",
[66] = "qcom/aic200/pdp.elf",
[67] = "qcom/aic200/pdp_cdb.elf",
[68] = "qcom/aic200/sdi.mbn",
[69] = "qcom/aic200/dcd.mbn",
[73] = "qcom/aic200/gearvm.mbn",
[74] = "qcom/aic200/sti.bin",
[75] = "qcom/aic200/pvs.bin",
};
static int sahara_find_image(struct sahara_context *context, u32 image_id)
{
int ret;
@@ -748,8 +776,15 @@ static int sahara_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_
context->mhi_dev = mhi_dev;
INIT_WORK(&context->fw_work, sahara_processing);
INIT_WORK(&context->dump_work, sahara_dump_processing);
context->image_table = aic100_image_table;
context->table_size = ARRAY_SIZE(aic100_image_table);
if (!strcmp(mhi_dev->mhi_cntrl->name, "AIC200")) {
context->image_table = aic200_image_table;
context->table_size = ARRAY_SIZE(aic200_image_table);
} else {
context->image_table = aic100_image_table;
context->table_size = ARRAY_SIZE(aic100_image_table);
}
context->active_image_id = SAHARA_IMAGE_ID_NONE;
dev_set_drvdata(&mhi_dev->dev, context);


@@ -177,6 +177,36 @@ int mhi_download_rddm_image(struct mhi_controller *mhi_cntrl, bool in_panic)
}
EXPORT_SYMBOL_GPL(mhi_download_rddm_image);
static void mhi_fw_load_error_dump(struct mhi_controller *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
void __iomem *base = mhi_cntrl->bhi;
int ret, i;
u32 val;
struct {
char *name;
u32 offset;
} error_reg[] = {
{ "ERROR_CODE", BHI_ERRCODE },
{ "ERROR_DBG1", BHI_ERRDBG1 },
{ "ERROR_DBG2", BHI_ERRDBG2 },
{ "ERROR_DBG3", BHI_ERRDBG3 },
{ NULL },
};
read_lock_bh(pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
for (i = 0; error_reg[i].name; i++) {
ret = mhi_read_reg(mhi_cntrl, base, error_reg[i].offset, &val);
if (ret)
break;
dev_err(dev, "Reg: %s value: 0x%x\n", error_reg[i].name, val);
}
}
read_unlock_bh(pm_lock);
}
static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
const struct mhi_buf *mhi_buf)
{
@@ -226,24 +256,13 @@ }
}
static int mhi_fw_load_bhi(struct mhi_controller *mhi_cntrl,
dma_addr_t dma_addr,
size_t size)
const struct mhi_buf *mhi_buf)
{
u32 tx_status, val, session_id;
int i, ret;
void __iomem *base = mhi_cntrl->bhi;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct {
char *name;
u32 offset;
} error_reg[] = {
{ "ERROR_CODE", BHI_ERRCODE },
{ "ERROR_DBG1", BHI_ERRDBG1 },
{ "ERROR_DBG2", BHI_ERRDBG2 },
{ "ERROR_DBG3", BHI_ERRDBG3 },
{ NULL },
};
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
void __iomem *base = mhi_cntrl->bhi;
u32 tx_status, session_id;
int ret;
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
@@ -255,11 +274,9 @@ static int mhi_fw_load_bhi(struct mhi_controller *mhi_cntrl,
dev_dbg(dev, "Starting image download via BHI. Session ID: %u\n",
session_id);
mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
upper_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
lower_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH, upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW, lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, mhi_buf->len);
mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
read_unlock_bh(pm_lock);
@@ -274,18 +291,7 @@ static int mhi_fw_load_bhi(struct mhi_controller *mhi_cntrl,
if (tx_status == BHI_STATUS_ERROR) {
dev_err(dev, "Image transfer failed\n");
read_lock_bh(pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
for (i = 0; error_reg[i].name; i++) {
ret = mhi_read_reg(mhi_cntrl, base,
error_reg[i].offset, &val);
if (ret)
break;
dev_err(dev, "Reg: %s value: 0x%x\n",
error_reg[i].name, val);
}
}
read_unlock_bh(pm_lock);
mhi_fw_load_error_dump(mhi_cntrl);
goto invalid_pm_state;
}
@@ -296,6 +302,16 @@ invalid_pm_state:
return -EIO;
}
static void mhi_free_bhi_buffer(struct mhi_controller *mhi_cntrl,
struct image_info *image_info)
{
struct mhi_buf *mhi_buf = image_info->mhi_buf;
dma_free_coherent(mhi_cntrl->cntrl_dev, mhi_buf->len, mhi_buf->buf, mhi_buf->dma_addr);
kfree(image_info->mhi_buf);
kfree(image_info);
}
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info)
{
@@ -310,6 +326,45 @@ void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
kfree(image_info);
}
static int mhi_alloc_bhi_buffer(struct mhi_controller *mhi_cntrl,
struct image_info **image_info,
size_t alloc_size)
{
struct image_info *img_info;
struct mhi_buf *mhi_buf;
img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
if (!img_info)
return -ENOMEM;
/* Allocate memory for entry */
img_info->mhi_buf = kzalloc(sizeof(*img_info->mhi_buf), GFP_KERNEL);
if (!img_info->mhi_buf)
goto error_alloc_mhi_buf;
/* Allocate the single image buffer (plain BHI uses no vector table) */
mhi_buf = img_info->mhi_buf;
mhi_buf->len = alloc_size;
mhi_buf->buf = dma_alloc_coherent(mhi_cntrl->cntrl_dev, mhi_buf->len,
&mhi_buf->dma_addr, GFP_KERNEL);
if (!mhi_buf->buf)
goto error_alloc_segment;
img_info->bhi_vec = NULL;
img_info->entries = 1;
*image_info = img_info;
return 0;
error_alloc_segment:
kfree(mhi_buf);
error_alloc_mhi_buf:
kfree(img_info);
return -ENOMEM;
}
int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info **image_info,
size_t alloc_size)
@@ -365,9 +420,9 @@ error_alloc_mhi_buf:
return -ENOMEM;
}
static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
const u8 *buf, size_t remainder,
struct image_info *img_info)
static void mhi_firmware_copy_bhie(struct mhi_controller *mhi_cntrl,
const u8 *buf, size_t remainder,
struct image_info *img_info)
{
size_t to_cpy;
struct mhi_buf *mhi_buf = img_info->mhi_buf;
@@ -386,15 +441,61 @@ static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
}
}
static enum mhi_fw_load_type mhi_fw_load_type_get(const struct mhi_controller *mhi_cntrl)
{
if (mhi_cntrl->fbc_download) {
return MHI_FW_LOAD_FBC;
} else {
if (mhi_cntrl->seg_len)
return MHI_FW_LOAD_BHIE;
else
return MHI_FW_LOAD_BHI;
}
}
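/*
 * Decision table implemented above:
 *
 *   fbc_download  seg_len  ->  load type
 *   true          any          MHI_FW_LOAD_FBC  (BHI in PBL, then BHIe in SBL)
 *   false         != 0         MHI_FW_LOAD_BHIE (BHIe directly in PBL)
 *   false         == 0         MHI_FW_LOAD_BHI  (BHI only in PBL)
 *
 * This is why qaic sets mhi_cntrl->seg_len for AIC200: it opts the device
 * into the new BHIe-in-PBL path.
 */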
static int mhi_load_image_bhi(struct mhi_controller *mhi_cntrl, const u8 *fw_data, size_t size)
{
struct image_info *image;
int ret;
ret = mhi_alloc_bhi_buffer(mhi_cntrl, &image, size);
if (ret)
return ret;
/* Copy the firmware into the single BHI image buffer */
memcpy(image->mhi_buf->buf, fw_data, size);
ret = mhi_fw_load_bhi(mhi_cntrl, &image->mhi_buf[image->entries - 1]);
mhi_free_bhi_buffer(mhi_cntrl, image);
return ret;
}
static int mhi_load_image_bhie(struct mhi_controller *mhi_cntrl, const u8 *fw_data, size_t size)
{
struct image_info *image;
int ret;
ret = mhi_alloc_bhie_table(mhi_cntrl, &image, size);
if (ret)
return ret;
mhi_firmware_copy_bhie(mhi_cntrl, fw_data, size, image);
ret = mhi_fw_load_bhie(mhi_cntrl, &image->mhi_buf[image->entries - 1]);
mhi_free_bhie_table(mhi_cntrl, image);
return ret;
}
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
{
const struct firmware *firmware = NULL;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_fw_load_type fw_load_type;
enum mhi_pm_state new_state;
const char *fw_name;
const u8 *fw_data;
void *buf;
dma_addr_t dma_addr;
size_t size, fw_sz;
int ret;
@@ -453,21 +554,17 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
fw_sz = firmware->size;
skip_req_fw:
buf = dma_alloc_coherent(mhi_cntrl->cntrl_dev, size, &dma_addr,
GFP_KERNEL);
if (!buf) {
release_firmware(firmware);
goto error_fw_load;
}
/* Download image using BHI */
memcpy(buf, fw_data, size);
ret = mhi_fw_load_bhi(mhi_cntrl, dma_addr, size);
dma_free_coherent(mhi_cntrl->cntrl_dev, size, buf, dma_addr);
fw_load_type = mhi_fw_load_type_get(mhi_cntrl);
if (fw_load_type == MHI_FW_LOAD_BHIE)
ret = mhi_load_image_bhie(mhi_cntrl, fw_data, size);
else
ret = mhi_load_image_bhi(mhi_cntrl, fw_data, size);
/* Error or in EDL mode, we're done */
if (ret) {
dev_err(dev, "MHI did not load image over BHI, ret: %d\n", ret);
dev_err(dev, "MHI did not load image over BHI%s, ret: %d\n",
fw_load_type == MHI_FW_LOAD_BHIE ? "e" : "",
ret);
release_firmware(firmware);
goto error_fw_load;
}
@@ -486,7 +583,7 @@ skip_req_fw:
* If we're doing fbc, populate vector tables while
* device transitioning into MHI READY state
*/
if (mhi_cntrl->fbc_download) {
if (fw_load_type == MHI_FW_LOAD_FBC) {
ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image, fw_sz);
if (ret) {
release_firmware(firmware);
@@ -494,7 +591,7 @@ }
}
/* Load the firmware into BHIE vec table */
mhi_firmware_copy(mhi_cntrl, fw_data, fw_sz, mhi_cntrl->fbc_image);
mhi_firmware_copy_bhie(mhi_cntrl, fw_data, fw_sz, mhi_cntrl->fbc_image);
}
release_firmware(firmware);
@@ -511,7 +608,7 @@ fw_load_ready_state:
return;
error_ready_state:
if (mhi_cntrl->fbc_download) {
if (fw_load_type == MHI_FW_LOAD_FBC) {
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
}

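As a rough sketch of how the three load paths above get selected (the helper function is hypothetical; fbc_download and seg_len are the struct mhi_controller fields that mhi_fw_load_type_get() consults):

#include <linux/mhi.h>
#include <linux/sizes.h>

static void example_configure_fw_load(struct mhi_controller *mhi_cntrl)
{
	/* BHI only in PBL: no FBC download and no BHIe segment size */
	mhi_cntrl->fbc_download = false;
	mhi_cntrl->seg_len = 0;

	/* BHIe only in PBL: a segment size without FBC selects MHI_FW_LOAD_BHIE */
	mhi_cntrl->seg_len = SZ_512K;

	/* BHI in PBL followed by BHIe in SBL: fbc_download takes precedence (FBC) */
	mhi_cntrl->fbc_download = true;
}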
@@ -1144,7 +1144,7 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
}
mhi_cntrl->bhi = mhi_cntrl->regs + bhi_off;
if (mhi_cntrl->fbc_download || mhi_cntrl->rddm_size) {
if (mhi_cntrl->fbc_download || mhi_cntrl->rddm_size || mhi_cntrl->seg_len) {
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF,
&bhie_off);
if (ret) {

@@ -29,6 +29,13 @@ struct bhi_vec_entry {
u64 size;
};
enum mhi_fw_load_type {
MHI_FW_LOAD_BHI, /* BHI only in PBL */
MHI_FW_LOAD_BHIE, /* BHIe only in PBL */
MHI_FW_LOAD_FBC, /* BHI in PBL followed by BHIe in SBL */
MHI_FW_LOAD_MAX,
};
enum mhi_ch_state_type {
MHI_CH_STATE_TYPE_RESET,
MHI_CH_STATE_TYPE_STOP,

@@ -84,8 +84,8 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
struct dma_fence **fences,
struct dma_fence_unwrap *iter)
{
struct dma_fence *tmp, *unsignaled = NULL, **array;
struct dma_fence_array *result;
struct dma_fence *tmp, **array;
ktime_t timestamp;
int i, j, count;
@@ -94,6 +94,8 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
for (i = 0; i < num_fences; ++i) {
dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
if (!dma_fence_is_signaled(tmp)) {
dma_fence_put(unsignaled);
unsignaled = dma_fence_get(tmp);
++count;
} else {
ktime_t t = dma_fence_timestamp(tmp);
@@ -107,9 +109,16 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
/*
* If we couldn't find a pending fence just return a private signaled
* fence with the timestamp of the last signaled one.
*
* Or if there was a single unsignaled fence left we can return it
* directly and early since that is a major path on many workloads.
*/
if (count == 0)
return dma_fence_allocate_private_stub(timestamp);
else if (count == 1)
return unsignaled;
dma_fence_put(unsignaled);
array = kmalloc_array(count, sizeof(*array), GFP_KERNEL);
if (!array)

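A hypothetical caller-side sketch of the new fast path: when all but one input fence has already signaled, dma_fence_unwrap_merge() now returns a reference to the remaining unsignaled fence instead of allocating a single-entry dma_fence_array:

#include <linux/dma-fence.h>
#include <linux/dma-fence-unwrap.h>

static struct dma_fence *example_merge(struct dma_fence *signaled,
				       struct dma_fence *pending)
{
	/* With the fast path this is simply a new reference to @pending. */
	struct dma_fence *merged = dma_fence_unwrap_merge(signaled, pending);

	if (!merged)
		return NULL;	/* allocation failure in the array path */
	return merged;
}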
@@ -28,7 +28,7 @@ static const struct dma_fence_ops mock_ops = {
.get_timeline_name = mock_name,
};
static struct dma_fence *mock_fence(void)
static struct dma_fence *__mock_fence(u64 context, u64 seqno)
{
struct mock_fence *f;
@@ -37,12 +37,16 @@ static struct dma_fence *mock_fence(void)
return NULL;
spin_lock_init(&f->lock);
dma_fence_init(&f->base, &mock_ops, &f->lock,
dma_fence_context_alloc(1), 1);
dma_fence_init(&f->base, &mock_ops, &f->lock, context, seqno);
return &f->base;
}
static struct dma_fence *mock_fence(void)
{
return __mock_fence(dma_fence_context_alloc(1), 1);
}
static struct dma_fence *mock_array(unsigned int num_fences, ...)
{
struct dma_fence_array *array;
@@ -304,6 +308,177 @@ error_put_f1:
return err;
}
static int unwrap_merge_duplicate(void *arg)
{
struct dma_fence *fence, *f1, *f2;
struct dma_fence_unwrap iter;
int err = 0;
f1 = mock_fence();
if (!f1)
return -ENOMEM;
dma_fence_enable_sw_signaling(f1);
f2 = dma_fence_unwrap_merge(f1, f1);
if (!f2) {
err = -ENOMEM;
goto error_put_f1;
}
dma_fence_unwrap_for_each(fence, &iter, f2) {
if (fence == f1) {
dma_fence_put(f1);
f1 = NULL;
} else {
pr_err("Unexpected fence!\n");
err = -EINVAL;
}
}
if (f1) {
pr_err("Not all fences seen!\n");
err = -EINVAL;
}
dma_fence_put(f2);
error_put_f1:
dma_fence_put(f1);
return err;
}
static int unwrap_merge_seqno(void *arg)
{
struct dma_fence *fence, *f1, *f2, *f3, *f4;
struct dma_fence_unwrap iter;
int err = 0;
u64 ctx[2];
ctx[0] = dma_fence_context_alloc(1);
ctx[1] = dma_fence_context_alloc(1);
f1 = __mock_fence(ctx[1], 1);
if (!f1)
return -ENOMEM;
dma_fence_enable_sw_signaling(f1);
f2 = __mock_fence(ctx[1], 2);
if (!f2) {
err = -ENOMEM;
goto error_put_f1;
}
dma_fence_enable_sw_signaling(f2);
f3 = __mock_fence(ctx[0], 1);
if (!f3) {
err = -ENOMEM;
goto error_put_f2;
}
dma_fence_enable_sw_signaling(f3);
f4 = dma_fence_unwrap_merge(f1, f2, f3);
if (!f4) {
err = -ENOMEM;
goto error_put_f3;
}
dma_fence_unwrap_for_each(fence, &iter, f4) {
if (fence == f3 && f2) {
dma_fence_put(f3);
f3 = NULL;
} else if (fence == f2 && !f3) {
dma_fence_put(f2);
f2 = NULL;
} else {
pr_err("Unexpected fence!\n");
err = -EINVAL;
}
}
if (f2 || f3) {
pr_err("Not all fences seen!\n");
err = -EINVAL;
}
dma_fence_put(f4);
error_put_f3:
dma_fence_put(f3);
error_put_f2:
dma_fence_put(f2);
error_put_f1:
dma_fence_put(f1);
return err;
}
static int unwrap_merge_order(void *arg)
{
struct dma_fence *fence, *f1, *f2, *a1, *a2, *c1, *c2;
struct dma_fence_unwrap iter;
int err = 0;
f1 = mock_fence();
if (!f1)
return -ENOMEM;
dma_fence_enable_sw_signaling(f1);
f2 = mock_fence();
if (!f2) {
dma_fence_put(f1);
return -ENOMEM;
}
dma_fence_enable_sw_signaling(f2);
a1 = mock_array(2, f1, f2);
if (!a1)
return -ENOMEM;
c1 = mock_chain(NULL, dma_fence_get(f1));
if (!c1)
goto error_put_a1;
c2 = mock_chain(c1, dma_fence_get(f2));
if (!c2)
goto error_put_a1;
/*
* The fences in the chain are the same as in a1 but in opposite order,
* the dma_fence_unwrap_merge() function should be able to handle that.
*/
a2 = dma_fence_unwrap_merge(a1, c2);
dma_fence_unwrap_for_each(fence, &iter, a2) {
if (fence == f1) {
f1 = NULL;
if (!f2)
pr_err("Unexpected order!\n");
} else if (fence == f2) {
f2 = NULL;
if (f1)
pr_err("Unexpected order!\n");
} else {
pr_err("Unexpected fence!\n");
err = -EINVAL;
}
}
if (f1 || f2) {
pr_err("Not all fences seen!\n");
err = -EINVAL;
}
dma_fence_put(a2);
return err;
error_put_a1:
dma_fence_put(a1);
return -ENOMEM;
}
static int unwrap_merge_complex(void *arg)
{
struct dma_fence *fence, *f1, *f2, *f3, *f4, *f5;
@@ -327,7 +502,7 @@ static int unwrap_merge_complex(void *arg)
goto error_put_f2;
/* The resulting array has the fences in reverse */
f4 = dma_fence_unwrap_merge(f2, f1);
f4 = mock_array(2, dma_fence_get(f2), dma_fence_get(f1));
if (!f4)
goto error_put_f3;
@@ -367,6 +542,87 @@ error_put_f1:
return err;
}
static int unwrap_merge_complex_seqno(void *arg)
{
struct dma_fence *fence, *f1, *f2, *f3, *f4, *f5, *f6, *f7;
struct dma_fence_unwrap iter;
int err = -ENOMEM;
u64 ctx[2];
ctx[0] = dma_fence_context_alloc(1);
ctx[1] = dma_fence_context_alloc(1);
f1 = __mock_fence(ctx[0], 2);
if (!f1)
return -ENOMEM;
dma_fence_enable_sw_signaling(f1);
f2 = __mock_fence(ctx[1], 1);
if (!f2)
goto error_put_f1;
dma_fence_enable_sw_signaling(f2);
f3 = __mock_fence(ctx[0], 1);
if (!f3)
goto error_put_f2;
dma_fence_enable_sw_signaling(f3);
f4 = __mock_fence(ctx[1], 2);
if (!f4)
goto error_put_f3;
dma_fence_enable_sw_signaling(f4);
f5 = mock_array(2, dma_fence_get(f1), dma_fence_get(f2));
if (!f5)
goto error_put_f4;
f6 = mock_array(2, dma_fence_get(f3), dma_fence_get(f4));
if (!f6)
goto error_put_f5;
f7 = dma_fence_unwrap_merge(f5, f6);
if (!f7)
goto error_put_f6;
err = 0;
dma_fence_unwrap_for_each(fence, &iter, f7) {
if (fence == f1 && f4) {
dma_fence_put(f1);
f1 = NULL;
} else if (fence == f4 && !f1) {
dma_fence_put(f4);
f4 = NULL;
} else {
pr_err("Unexpected fence!\n");
err = -EINVAL;
}
}
if (f1 || f4) {
pr_err("Not all fences seen!\n");
err = -EINVAL;
}
dma_fence_put(f7);
error_put_f6:
dma_fence_put(f6);
error_put_f5:
dma_fence_put(f5);
error_put_f4:
dma_fence_put(f4);
error_put_f3:
dma_fence_put(f3);
error_put_f2:
dma_fence_put(f2);
error_put_f1:
dma_fence_put(f1);
return err;
}
int dma_fence_unwrap(void)
{
static const struct subtest tests[] = {
@@ -375,7 +631,11 @@ int dma_fence_unwrap(void)
SUBTEST(unwrap_chain),
SUBTEST(unwrap_chain_array),
SUBTEST(unwrap_merge),
SUBTEST(unwrap_merge_duplicate),
SUBTEST(unwrap_merge_seqno),
SUBTEST(unwrap_merge_order),
SUBTEST(unwrap_merge_complex),
SUBTEST(unwrap_merge_complex_seqno),
};
return subtests(tests, NULL);

@@ -135,7 +135,6 @@ drm_kms_helper-y := \
drm_atomic_state_helper.o \
drm_crtc_helper.o \
drm_damage_helper.o \
drm_encoder_slave.o \
drm_flip_work.o \
drm_format_helper.o \
drm_gem_atomic_helper.o \

@@ -674,7 +674,7 @@ static int amdgpu_connector_lvds_get_modes(struct drm_connector *connector)
}
static enum drm_mode_status amdgpu_connector_lvds_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct drm_encoder *encoder = amdgpu_connector_best_single_encoder(connector);
@@ -839,7 +839,7 @@ }
}
static enum drm_mode_status amdgpu_connector_vga_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = drm_to_adev(dev);
@@ -1196,7 +1196,7 @@ static void amdgpu_connector_dvi_force(struct drm_connector *connector)
}
static enum drm_mode_status amdgpu_connector_dvi_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
struct amdgpu_device *adev = drm_to_adev(dev);
@@ -1464,7 +1464,7 @@ out:
}
static enum drm_mode_status amdgpu_connector_dp_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
struct amdgpu_connector_atom_dig *amdgpu_dig_connector = amdgpu_connector->con_priv;

@@ -430,7 +430,7 @@ void amdgpu_atombios_dp_set_link_config(struct drm_connector *connector,
}
int amdgpu_atombios_dp_mode_valid_helper(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
struct amdgpu_connector_atom_dig *dig_connector;

@@ -32,7 +32,7 @@ int amdgpu_atombios_dp_get_panel_mode(struct drm_encoder *encoder,
void amdgpu_atombios_dp_set_link_config(struct drm_connector *connector,
const struct drm_display_mode *mode);
int amdgpu_atombios_dp_mode_valid_helper(struct drm_connector *connector,
struct drm_display_mode *mode);
const struct drm_display_mode *mode);
bool amdgpu_atombios_dp_needs_link_train(struct amdgpu_connector *amdgpu_connector);
void amdgpu_atombios_dp_set_rx_power_state(struct drm_connector *connector,
u8 power_state);

@@ -7472,10 +7472,11 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
}
enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
int result = MODE_ERROR;
struct dc_sink *dc_sink;
struct drm_display_mode *test_mode;
/* TODO: Unhardcode stream count */
struct dc_stream_state *stream;
struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
@@ -7500,11 +7501,16 @@ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connec
goto fail;
}
drm_mode_set_crtcinfo(mode, 0);
test_mode = drm_mode_duplicate(connector->dev, mode);
if (!test_mode)
goto fail;
stream = create_validate_stream_for_sink(aconnector, mode,
drm_mode_set_crtcinfo(test_mode, 0);
stream = create_validate_stream_for_sink(aconnector, test_mode,
to_dm_connector_state(connector->state),
NULL);
drm_mode_destroy(connector->dev, test_mode);
if (stream) {
dc_stream_release(stream);
result = MODE_OK;

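The duplication above follows from the constified hook: .mode_valid now receives a const mode, so drivers that call drm_mode_set_crtcinfo() must operate on a mutable copy. A generic sketch of the pattern (example_mode_valid() is a made-up name):

static enum drm_mode_status
example_mode_valid(struct drm_connector *connector,
		   const struct drm_display_mode *mode)
{
	struct drm_display_mode *dup;

	/* drm_mode_set_crtcinfo() writes to the mode, so duplicate it first. */
	dup = drm_mode_duplicate(connector->dev, mode);
	if (!dup)
		return MODE_ERROR;
	drm_mode_set_crtcinfo(dup, 0);

	/* ... validate dup->crtc_* timings against hardware limits ... */

	drm_mode_destroy(connector->dev, dup);
	return MODE_OK;
}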
@@ -949,7 +949,7 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm,
int link_index);
enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode);
const struct drm_display_mode *mode);
void dm_restore_drm_connector_state(struct drm_device *dev,
struct drm_connector *connector);

@@ -88,7 +88,7 @@ komeda_wb_connector_get_modes(struct drm_connector *connector)
static enum drm_mode_status
komeda_wb_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
struct drm_mode_config *mode_config = &dev->mode_config;

@@ -43,7 +43,7 @@ static int malidp_mw_connector_get_modes(struct drm_connector *connector)
static enum drm_mode_status
malidp_mw_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;
struct drm_mode_config *mode_config = &dev->mode_config;

@@ -13,6 +13,7 @@ ast-y := \
ast_mode.o \
ast_post.o \
ast_sil164.o \
ast_vbios.o \
ast_vga.o
obj-$(CONFIG_DRM_AST) := ast.o

@@ -5,6 +5,7 @@
#include <linux/firmware.h>
#include <linux/delay.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_state_helper.h>
#include <drm/drm_edid.h>
#include <drm/drm_modeset_helper_vtables.h>
@@ -12,11 +13,71 @@
#include <drm/drm_probe_helper.h>
#include "ast_drv.h"
#include "ast_vbios.h"
struct ast_astdp_mode_index_table_entry {
unsigned int hdisplay;
unsigned int vdisplay;
unsigned int mode_index;
};
/* FIXME: Do refresh rate and flags actually matter? */
static const struct ast_astdp_mode_index_table_entry ast_astdp_mode_index_table[] = {
{ 320, 240, ASTDP_320x240_60 },
{ 400, 300, ASTDP_400x300_60 },
{ 512, 384, ASTDP_512x384_60 },
{ 640, 480, ASTDP_640x480_60 },
{ 800, 600, ASTDP_800x600_56 },
{ 1024, 768, ASTDP_1024x768_60 },
{ 1152, 864, ASTDP_1152x864_75 },
{ 1280, 800, ASTDP_1280x800_60_RB },
{ 1280, 1024, ASTDP_1280x1024_60 },
{ 1360, 768, ASTDP_1366x768_60 }, // same as 1366x768
{ 1366, 768, ASTDP_1366x768_60 },
{ 1440, 900, ASTDP_1440x900_60_RB },
{ 1600, 900, ASTDP_1600x900_60_RB },
{ 1600, 1200, ASTDP_1600x1200_60 },
{ 1680, 1050, ASTDP_1680x1050_60_RB },
{ 1920, 1080, ASTDP_1920x1080_60 },
{ 1920, 1200, ASTDP_1920x1200_60 },
{ 0 }
};
struct ast_astdp_connector_state {
struct drm_connector_state base;
int mode_index;
};
static struct ast_astdp_connector_state *
to_ast_astdp_connector_state(const struct drm_connector_state *state)
{
return container_of(state, struct ast_astdp_connector_state, base);
}
static int ast_astdp_get_mode_index(unsigned int hdisplay, unsigned int vdisplay)
{
const struct ast_astdp_mode_index_table_entry *entry = ast_astdp_mode_index_table;
while (entry->hdisplay && entry->vdisplay) {
if (entry->hdisplay == hdisplay && entry->vdisplay == vdisplay)
return entry->mode_index;
++entry;
}
return -EINVAL;
}
static bool ast_astdp_is_connected(struct ast_device *ast)
{
if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, AST_IO_VGACRDF_HPD))
return false;
/*
* HPD might be set even if no monitor is connected, so also check that
* the link training was successful.
*/
if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, AST_IO_VGACRDC_LINK_SUCCESS))
return false;
return true;
}
@@ -221,80 +282,6 @@ static void ast_dp_set_enable(struct ast_device *ast, bool enabled)
drm_WARN_ON(dev, !__ast_dp_wait_enable(ast, enabled));
}
static void ast_dp_set_mode(struct drm_crtc *crtc, struct ast_vbios_mode_info *vbios_mode)
{
struct ast_device *ast = to_ast_device(crtc->dev);
u32 ulRefreshRateIndex;
u8 ModeIdx;
ulRefreshRateIndex = vbios_mode->enh_table->refresh_rate_index - 1;
switch (crtc->mode.crtc_hdisplay) {
case 320:
ModeIdx = ASTDP_320x240_60;
break;
case 400:
ModeIdx = ASTDP_400x300_60;
break;
case 512:
ModeIdx = ASTDP_512x384_60;
break;
case 640:
ModeIdx = (ASTDP_640x480_60 + (u8) ulRefreshRateIndex);
break;
case 800:
ModeIdx = (ASTDP_800x600_56 + (u8) ulRefreshRateIndex);
break;
case 1024:
ModeIdx = (ASTDP_1024x768_60 + (u8) ulRefreshRateIndex);
break;
case 1152:
ModeIdx = ASTDP_1152x864_75;
break;
case 1280:
if (crtc->mode.crtc_vdisplay == 800)
ModeIdx = (ASTDP_1280x800_60_RB - (u8) ulRefreshRateIndex);
else // 1024
ModeIdx = (ASTDP_1280x1024_60 + (u8) ulRefreshRateIndex);
break;
case 1360:
case 1366:
ModeIdx = ASTDP_1366x768_60;
break;
case 1440:
ModeIdx = (ASTDP_1440x900_60_RB - (u8) ulRefreshRateIndex);
break;
case 1600:
if (crtc->mode.crtc_vdisplay == 900)
ModeIdx = (ASTDP_1600x900_60_RB - (u8) ulRefreshRateIndex);
else //1200
ModeIdx = ASTDP_1600x1200_60;
break;
case 1680:
ModeIdx = (ASTDP_1680x1050_60_RB - (u8) ulRefreshRateIndex);
break;
case 1920:
if (crtc->mode.crtc_vdisplay == 1080)
ModeIdx = ASTDP_1920x1080_60;
else //1200
ModeIdx = ASTDP_1920x1200_60;
break;
default:
return;
}
/*
* CRE0[7:0]: MISC0 (0x00: 18-bpp or 0x20: 24-bpp)
* CRE1[7:0]: MISC1 (default: 0x00)
* CRE2[7:0]: video format index (0x00 ~ 0x20 or 0x40 ~ 0x50)
*/
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE0, ASTDP_AND_CLEAR_MASK,
ASTDP_MISC0_24bpp);
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE1, ASTDP_AND_CLEAR_MASK, ASTDP_MISC1);
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE2, ASTDP_AND_CLEAR_MASK, ModeIdx);
}
static void ast_wait_for_vretrace(struct ast_device *ast)
{
unsigned long timeout = jiffies + HZ;
@@ -313,15 +300,62 @@ static const struct drm_encoder_funcs ast_astdp_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
static enum drm_mode_status
ast_astdp_encoder_helper_mode_valid(struct drm_encoder *encoder,
const struct drm_display_mode *mode)
{
int res;
res = ast_astdp_get_mode_index(mode->hdisplay, mode->vdisplay);
if (res < 0)
return MODE_NOMODE;
return MODE_OK;
}
static void ast_astdp_encoder_helper_atomic_mode_set(struct drm_encoder *encoder,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
struct drm_crtc *crtc = crtc_state->crtc;
struct drm_device *dev = encoder->dev;
struct ast_device *ast = to_ast_device(dev);
struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state);
struct ast_vbios_mode_info *vbios_mode_info = &ast_crtc_state->vbios_mode_info;
const struct ast_vbios_enhtable *vmode = ast_crtc_state->vmode;
struct ast_astdp_connector_state *astdp_conn_state =
to_ast_astdp_connector_state(conn_state);
int mode_index = astdp_conn_state->mode_index;
u8 refresh_rate_index;
u8 vgacre0, vgacre1, vgacre2;
ast_dp_set_mode(crtc, vbios_mode_info);
if (drm_WARN_ON(dev, vmode->refresh_rate_index < 1 || vmode->refresh_rate_index > 255))
return;
refresh_rate_index = vmode->refresh_rate_index - 1;
/* FIXME: Why are we doing this? */
switch (mode_index) {
case ASTDP_1280x800_60_RB:
case ASTDP_1440x900_60_RB:
case ASTDP_1600x900_60_RB:
case ASTDP_1680x1050_60_RB:
mode_index = (u8)(mode_index - (u8)refresh_rate_index);
break;
default:
mode_index = (u8)(mode_index + (u8)refresh_rate_index);
break;
}
/*
* CRE0[7:0]: MISC0 (0x00: 18-bpp or 0x20: 24-bpp)
* CRE1[7:0]: MISC1 (default: 0x00)
* CRE2[7:0]: video format index (0x00 ~ 0x20 or 0x40 ~ 0x50)
*/
vgacre0 = AST_IO_VGACRE0_24BPP;
vgacre1 = 0x00;
vgacre2 = mode_index & 0xff;
ast_set_index_reg(ast, AST_IO_VGACRI, 0xe0, vgacre0);
ast_set_index_reg(ast, AST_IO_VGACRI, 0xe1, vgacre1);
ast_set_index_reg(ast, AST_IO_VGACRI, 0xe2, vgacre2);
}
static void ast_astdp_encoder_helper_atomic_enable(struct drm_encoder *encoder,
@@ -348,10 +382,31 @@ static void ast_astdp_encoder_helper_atomic_disable(struct drm_encoder *encoder,
ast_dp_set_phy_sleep(ast, true);
}
static int ast_astdp_encoder_helper_atomic_check(struct drm_encoder *encoder,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
const struct drm_display_mode *mode = &crtc_state->mode;
struct ast_astdp_connector_state *astdp_conn_state =
to_ast_astdp_connector_state(conn_state);
int res;
if (drm_atomic_crtc_needs_modeset(crtc_state)) {
res = ast_astdp_get_mode_index(mode->hdisplay, mode->vdisplay);
if (res < 0)
return res;
astdp_conn_state->mode_index = res;
}
return 0;
}
static const struct drm_encoder_helper_funcs ast_astdp_encoder_helper_funcs = {
.mode_valid = ast_astdp_encoder_helper_mode_valid,
.atomic_mode_set = ast_astdp_encoder_helper_atomic_mode_set,
.atomic_enable = ast_astdp_encoder_helper_atomic_enable,
.atomic_disable = ast_astdp_encoder_helper_atomic_disable,
.atomic_check = ast_astdp_encoder_helper_atomic_check,
};
/*
@@ -422,18 +477,62 @@ static const struct drm_connector_helper_funcs ast_astdp_connector_helper_funcs
.detect_ctx = ast_astdp_connector_helper_detect_ctx,
};
static void ast_astdp_connector_reset(struct drm_connector *connector)
{
struct ast_astdp_connector_state *astdp_state =
kzalloc(sizeof(*astdp_state), GFP_KERNEL);
if (connector->state)
connector->funcs->atomic_destroy_state(connector, connector->state);
if (astdp_state)
__drm_atomic_helper_connector_reset(connector, &astdp_state->base);
else
__drm_atomic_helper_connector_reset(connector, NULL);
}
static struct drm_connector_state *
ast_astdp_connector_atomic_duplicate_state(struct drm_connector *connector)
{
struct ast_astdp_connector_state *new_astdp_state, *astdp_state;
struct drm_device *dev = connector->dev;
if (drm_WARN_ON(dev, !connector->state))
return NULL;
new_astdp_state = kmalloc(sizeof(*new_astdp_state), GFP_KERNEL);
if (!new_astdp_state)
return NULL;
__drm_atomic_helper_connector_duplicate_state(connector, &new_astdp_state->base);
astdp_state = to_ast_astdp_connector_state(connector->state);
new_astdp_state->mode_index = astdp_state->mode_index;
return &new_astdp_state->base;
}
static void ast_astdp_connector_atomic_destroy_state(struct drm_connector *connector,
struct drm_connector_state *state)
{
struct ast_astdp_connector_state *astdp_state = to_ast_astdp_connector_state(state);
__drm_atomic_helper_connector_destroy_state(&astdp_state->base);
kfree(astdp_state);
}
static const struct drm_connector_funcs ast_astdp_connector_funcs = {
.reset = ast_astdp_connector_reset,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = drm_connector_cleanup,
.atomic_duplicate_state = ast_astdp_connector_atomic_duplicate_state,
.atomic_destroy_state = ast_astdp_connector_atomic_destroy_state,
};
/*
* Output
*/
static const struct drm_connector_funcs ast_astdp_connector_funcs = {
.reset = drm_atomic_helper_connector_reset,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = drm_connector_cleanup,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
int ast_astdp_output_init(struct ast_device *ast)
{
struct drm_device *dev = &ast->base;

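The connector code above follows the usual DRM pattern for subclassing atomic state: embed drm_connector_state, override .reset, .atomic_duplicate_state and .atomic_destroy_state, and recover the subclass via container_of(). A condensed sketch of the read side, with a hypothetical helper that retrieves the mode index stored by atomic_check:

static int example_read_mode_index(const struct drm_connector_state *conn_state)
{
	const struct ast_astdp_connector_state *astdp_state =
		to_ast_astdp_connector_state(conn_state);

	/* Validated and stored by ast_astdp_encoder_helper_atomic_check() */
	return astdp_state->mode_index;
}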
@@ -170,7 +170,7 @@ static int ast_detect_chip(struct pci_dev *pdev,
/* Patch AST2500/AST2510 */
if ((pdev->revision & 0xf0) == 0x40) {
if (!(vgacrd0 & AST_VRAM_INIT_STATUS_MASK))
if (!(vgacrd0 & AST_IO_VGACRD0_VRAM_INIT_STATUS_MASK))
ast_patch_ahb_2500(regs);
}
@@ -393,11 +393,15 @@ static int ast_drm_freeze(struct drm_device *dev)
static int ast_drm_thaw(struct drm_device *dev)
{
struct ast_device *ast = to_ast_device(dev);
int ret;
ast_enable_vga(ast->ioregs);
ast_open_key(ast->ioregs);
ast_enable_mmio(dev->dev, ast->ioregs);
ast_post_gpu(ast);
ret = ast_post_gpu(ast);
if (ret)
return ret;
return drm_mode_config_helper_resume(dev);
}

@@ -39,6 +39,8 @@
#include "ast_reg.h"
struct ast_vbios_enhtable;
#define DRIVER_AUTHOR "Dave Airlie"
#define DRIVER_NAME "ast"
@@ -205,7 +207,9 @@ struct ast_device {
} astdp;
} output;
bool support_wide_screen;
bool support_wsxga_p; /* 1680x1050 */
bool support_fullhd; /* 1920x1080 */
bool support_wuxga; /* 1920x1200 */
u8 *dp501_fw_addr;
const struct firmware *dp501_fw; /* dp501 fw */
@@ -348,40 +352,20 @@ struct ast_vbios_stdtable {
u8 gr[9];
};
struct ast_vbios_enhtable {
u32 ht;
u32 hde;
u32 hfp;
u32 hsync;
u32 vt;
u32 vde;
u32 vfp;
u32 vsync;
u32 dclk_index;
u32 flags;
u32 refresh_rate;
u32 refresh_rate_index;
u32 mode_id;
};
struct ast_vbios_dclk_info {
u8 param1;
u8 param2;
u8 param3;
};
struct ast_vbios_mode_info {
const struct ast_vbios_stdtable *std_table;
const struct ast_vbios_enhtable *enh_table;
};
struct ast_crtc_state {
struct drm_crtc_state base;
/* Last known format of primary plane */
const struct drm_format_info *format;
struct ast_vbios_mode_info vbios_mode_info;
const struct ast_vbios_stdtable *std_table;
const struct ast_vbios_enhtable *vmode;
};
#define to_ast_crtc_state(state) container_of(state, struct ast_crtc_state, base)
@@ -445,7 +429,7 @@ int ast_mode_config_init(struct ast_device *ast);
int ast_mm_init(struct ast_device *ast);
/* ast post */
void ast_post_gpu(struct ast_device *ast);
int ast_post_gpu(struct ast_device *ast);
u32 ast_mindwm(struct ast_device *ast, u32 r);
void ast_moutdwm(struct ast_device *ast, u32 r, u32 v);
void ast_patch_ahb_2500(void __iomem *regs);

@@ -36,33 +36,89 @@
#include "ast_drv.h"
/* Try to detect WSXGA+ on Gen2+ */
static bool __ast_2100_detect_wsxga_p(struct ast_device *ast)
{
u8 vgacrd0 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd0);
if (!(vgacrd0 & AST_IO_VGACRD0_VRAM_INIT_BY_BMC))
return true;
if (vgacrd0 & AST_IO_VGACRD0_IKVM_WIDESCREEN)
return true;
return false;
}
/* Try to detect WUXGA on Gen2+ */
static bool __ast_2100_detect_wuxga(struct ast_device *ast)
{
u8 vgacrd1;
if (ast->support_fullhd) {
vgacrd1 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd1);
if (!(vgacrd1 & AST_IO_VGACRD1_SUPPORTS_WUXGA))
return true;
}
return false;
}
static void ast_detect_widescreen(struct ast_device *ast)
{
u8 jreg;
ast->support_wsxga_p = false;
ast->support_fullhd = false;
ast->support_wuxga = false;
/* Check if we support wide screen */
switch (AST_GEN(ast)) {
case 1:
ast->support_wide_screen = false;
break;
default:
jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd0, 0xff);
if (!(jreg & 0x80))
ast->support_wide_screen = true;
else if (jreg & 0x01)
ast->support_wide_screen = true;
else {
ast->support_wide_screen = false;
if (ast->chip == AST1300)
ast->support_wide_screen = true;
if (ast->chip == AST1400)
ast->support_wide_screen = true;
if (ast->chip == AST2510)
ast->support_wide_screen = true;
if (IS_AST_GEN7(ast))
ast->support_wide_screen = true;
if (AST_GEN(ast) >= 7) {
ast->support_wsxga_p = true;
ast->support_fullhd = true;
if (__ast_2100_detect_wuxga(ast))
ast->support_wuxga = true;
} else if (AST_GEN(ast) >= 6) {
if (__ast_2100_detect_wsxga_p(ast))
ast->support_wsxga_p = true;
else if (ast->chip == AST2510)
ast->support_wsxga_p = true;
if (ast->support_wsxga_p)
ast->support_fullhd = true;
if (__ast_2100_detect_wuxga(ast))
ast->support_wuxga = true;
} else if (AST_GEN(ast) >= 5) {
if (__ast_2100_detect_wsxga_p(ast))
ast->support_wsxga_p = true;
else if (ast->chip == AST1400)
ast->support_wsxga_p = true;
if (ast->support_wsxga_p)
ast->support_fullhd = true;
if (__ast_2100_detect_wuxga(ast))
ast->support_wuxga = true;
} else if (AST_GEN(ast) >= 4) {
if (__ast_2100_detect_wsxga_p(ast))
ast->support_wsxga_p = true;
else if (ast->chip == AST1300)
ast->support_wsxga_p = true;
if (ast->support_wsxga_p)
ast->support_fullhd = true;
if (__ast_2100_detect_wuxga(ast))
ast->support_wuxga = true;
} else if (AST_GEN(ast) >= 3) {
if (__ast_2100_detect_wsxga_p(ast))
ast->support_wsxga_p = true;
if (ast->support_wsxga_p) {
if (ast->chip == AST2200)
ast->support_fullhd = true;
}
break;
if (__ast_2100_detect_wuxga(ast))
ast->support_wuxga = true;
} else if (AST_GEN(ast) >= 2) {
if (__ast_2100_detect_wsxga_p(ast))
ast->support_wsxga_p = true;
if (ast->support_wsxga_p) {
if (ast->chip == AST2100)
ast->support_fullhd = true;
}
if (__ast_2100_detect_wuxga(ast))
ast->support_wuxga = true;
}
}
@@ -76,49 +132,37 @@ static void ast_detect_tx_chip(struct ast_device *ast, bool need_post)
};
struct drm_device *dev = &ast->base;
u8 jreg, vgacrd1;
/*
* Several of the listed TX chips are not explicitly supported
* by the ast driver. If these exist in real-world devices, they
* are most likely reported as VGA or SIL164 outputs. We warn here
* to get bug reports for these devices. If none come in for some
* time, we can begin to fail device probing on these values.
*/
vgacrd1 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, AST_IO_VGACRD1_TX_TYPE_MASK);
drm_WARN(dev, vgacrd1 == AST_IO_VGACRD1_TX_ITE66121_VBIOS,
"ITE IT66121 detected, 0x%x, Gen%lu\n", vgacrd1, AST_GEN(ast));
drm_WARN(dev, vgacrd1 == AST_IO_VGACRD1_TX_CH7003_VBIOS,
"Chrontel CH7003 detected, 0x%x, Gen%lu\n", vgacrd1, AST_GEN(ast));
drm_WARN(dev, vgacrd1 == AST_IO_VGACRD1_TX_ANX9807_VBIOS,
"Analogix ANX9807 detected, 0x%x, Gen%lu\n", vgacrd1, AST_GEN(ast));
u8 vgacra3, vgacrd1;
/* Check 3rd Tx option (digital output afaik) */
ast->tx_chip = AST_TX_NONE;
/*
* VGACRA3 Enhanced Color Mode Register, check if DVO is already
* enabled, in that case, assume we have a SIL164 TMDS transmitter
*
* Don't make that assumption if the chip wasn't enabled and
* is at power-on reset, otherwise we'll incorrectly "detect" a
* SIL164 when there is none.
*/
if (!need_post) {
jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xff);
if (jreg & 0x80)
ast->tx_chip = AST_TX_SIL164;
}
if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast) || IS_AST_GEN6(ast)) {
if (AST_GEN(ast) <= 3) {
/*
* On AST GEN4+, look the configuration set by the SoC in
* VGACRA3 Enhanced Color Mode Register, check if DVO is already
* enabled, in that case, assume we have a SIL164 TMDS transmitter
*
* Don't make that assumption if the chip wasn't enabled and
* is at power-on reset, otherwise we'll incorrectly "detect" a
* SIL164 when there is none.
*/
if (!need_post) {
vgacra3 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xff);
if (vgacra3 & AST_IO_VGACRA3_DVO_ENABLED)
ast->tx_chip = AST_TX_SIL164;
}
} else {
/*
* On AST GEN4+, look at the configuration set by the SoC in
* the SOC scratch register #1 bits 11:8 (interestingly marked
* as "reserved" in the spec)
*/
jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1,
AST_IO_VGACRD1_TX_TYPE_MASK);
switch (jreg) {
vgacrd1 = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1,
AST_IO_VGACRD1_TX_TYPE_MASK);
switch (vgacrd1) {
/*
* GEN4 to GEN6
*/
case AST_IO_VGACRD1_TX_SIL164_VBIOS:
ast->tx_chip = AST_TX_SIL164;
break;
@@ -134,14 +178,32 @@ static void ast_detect_tx_chip(struct ast_device *ast, bool need_post)
fallthrough;
case AST_IO_VGACRD1_TX_FW_EMBEDDED_FW:
ast->tx_chip = AST_TX_DP501;
}
} else if (IS_AST_GEN7(ast)) {
if (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, AST_IO_VGACRD1_TX_TYPE_MASK) ==
AST_IO_VGACRD1_TX_ASTDP) {
int ret = ast_dp_launch(ast);
if (!ret)
ast->tx_chip = AST_TX_ASTDP;
break;
/*
* GEN7+
*/
case AST_IO_VGACRD1_TX_ASTDP:
ast->tx_chip = AST_TX_ASTDP;
break;
/*
* Several of the listed TX chips are not explicitly supported
* by the ast driver. If these exist in real-world devices, they
* are most likely reported as VGA or SIL164 outputs. We warn here
* to get bug reports for these devices. If none come in for some
* time, we can begin to fail device probing on these values.
*/
case AST_IO_VGACRD1_TX_ITE66121_VBIOS:
drm_warn(dev, "ITE IT66121 detected, 0x%x, Gen%lu\n",
vgacrd1, AST_GEN(ast));
break;
case AST_IO_VGACRD1_TX_CH7003_VBIOS:
drm_warn(dev, "Chrontel CH7003 detected, 0x%x, Gen%lu\n",
vgacrd1, AST_GEN(ast));
break;
case AST_IO_VGACRD1_TX_ANX9807_VBIOS:
drm_warn(dev, "Analogix ANX9807 detected, 0x%x, Gen%lu\n",
vgacrd1, AST_GEN(ast));
break;
}
}
@@ -290,18 +352,25 @@ struct drm_device *ast_device_create(struct pci_dev *pdev,
ast->regs = regs;
ast->ioregs = ioregs;
ast_detect_widescreen(ast);
ast_detect_tx_chip(ast, need_post);
ret = ast_get_dram_info(ast);
if (ret)
return ERR_PTR(ret);
drm_info(dev, "dram MCLK=%u Mhz type=%d bus_width=%d\n",
ast->mclk, ast->dram_type, ast->dram_bus_width);
if (need_post)
ast_post_gpu(ast);
ast_detect_tx_chip(ast, need_post);
switch (ast->tx_chip) {
case AST_TX_ASTDP:
ret = ast_post_gpu(ast);
break;
default:
ret = 0;
if (need_post)
ret = ast_post_gpu(ast);
break;
}
if (ret)
return ERR_PTR(ret);
ret = ast_mm_init(ast);
if (ret)
@@ -315,6 +384,8 @@ struct drm_device *ast_device_create(struct pci_dev *pdev,
drm_info(dev, "failed to map reserved buffer!\n");
}
ast_detect_widescreen(ast);
ret = ast_mode_config_init(ast);
if (ret)
return ERR_PTR(ret);

@@ -47,6 +47,7 @@
#include "ast_drv.h"
#include "ast_tables.h"
#include "ast_vbios.h"
#define AST_LUT_SIZE 256
@@ -106,134 +107,9 @@ }
}
}
static bool ast_get_vbios_mode_info(const struct drm_format_info *format,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
struct ast_vbios_mode_info *vbios_mode)
{
u32 refresh_rate_index = 0, refresh_rate;
const struct ast_vbios_enhtable *best = NULL;
u32 hborder, vborder;
bool check_sync;
switch (format->cpp[0] * 8) {
case 8:
vbios_mode->std_table = &vbios_stdtable[VGAModeIndex];
break;
case 16:
vbios_mode->std_table = &vbios_stdtable[HiCModeIndex];
break;
case 24:
case 32:
vbios_mode->std_table = &vbios_stdtable[TrueCModeIndex];
break;
default:
return false;
}
switch (mode->crtc_hdisplay) {
case 640:
vbios_mode->enh_table = &res_640x480[refresh_rate_index];
break;
case 800:
vbios_mode->enh_table = &res_800x600[refresh_rate_index];
break;
case 1024:
vbios_mode->enh_table = &res_1024x768[refresh_rate_index];
break;
case 1152:
vbios_mode->enh_table = &res_1152x864[refresh_rate_index];
break;
case 1280:
if (mode->crtc_vdisplay == 800)
vbios_mode->enh_table = &res_1280x800[refresh_rate_index];
else
vbios_mode->enh_table = &res_1280x1024[refresh_rate_index];
break;
case 1360:
vbios_mode->enh_table = &res_1360x768[refresh_rate_index];
break;
case 1440:
vbios_mode->enh_table = &res_1440x900[refresh_rate_index];
break;
case 1600:
if (mode->crtc_vdisplay == 900)
vbios_mode->enh_table = &res_1600x900[refresh_rate_index];
else
vbios_mode->enh_table = &res_1600x1200[refresh_rate_index];
break;
case 1680:
vbios_mode->enh_table = &res_1680x1050[refresh_rate_index];
break;
case 1920:
if (mode->crtc_vdisplay == 1080)
vbios_mode->enh_table = &res_1920x1080[refresh_rate_index];
else
vbios_mode->enh_table = &res_1920x1200[refresh_rate_index];
break;
default:
return false;
}
refresh_rate = drm_mode_vrefresh(mode);
check_sync = vbios_mode->enh_table->flags & WideScreenMode;
while (1) {
const struct ast_vbios_enhtable *loop = vbios_mode->enh_table;
while (loop->refresh_rate != 0xff) {
if ((check_sync) &&
(((mode->flags & DRM_MODE_FLAG_NVSYNC) &&
(loop->flags & PVSync)) ||
((mode->flags & DRM_MODE_FLAG_PVSYNC) &&
(loop->flags & NVSync)) ||
((mode->flags & DRM_MODE_FLAG_NHSYNC) &&
(loop->flags & PHSync)) ||
((mode->flags & DRM_MODE_FLAG_PHSYNC) &&
(loop->flags & NHSync)))) {
loop++;
continue;
}
if (loop->refresh_rate <= refresh_rate
&& (!best || loop->refresh_rate > best->refresh_rate))
best = loop;
loop++;
}
if (best || !check_sync)
break;
check_sync = 0;
}
if (best)
vbios_mode->enh_table = best;
hborder = (vbios_mode->enh_table->flags & HBorder) ? 8 : 0;
vborder = (vbios_mode->enh_table->flags & VBorder) ? 8 : 0;
adjusted_mode->crtc_htotal = vbios_mode->enh_table->ht;
adjusted_mode->crtc_hblank_start = vbios_mode->enh_table->hde + hborder;
adjusted_mode->crtc_hblank_end = vbios_mode->enh_table->ht - hborder;
adjusted_mode->crtc_hsync_start = vbios_mode->enh_table->hde + hborder +
vbios_mode->enh_table->hfp;
adjusted_mode->crtc_hsync_end = (vbios_mode->enh_table->hde + hborder +
vbios_mode->enh_table->hfp +
vbios_mode->enh_table->hsync);
adjusted_mode->crtc_vtotal = vbios_mode->enh_table->vt;
adjusted_mode->crtc_vblank_start = vbios_mode->enh_table->vde + vborder;
adjusted_mode->crtc_vblank_end = vbios_mode->enh_table->vt - vborder;
adjusted_mode->crtc_vsync_start = vbios_mode->enh_table->vde + vborder +
vbios_mode->enh_table->vfp;
adjusted_mode->crtc_vsync_end = (vbios_mode->enh_table->vde + vborder +
vbios_mode->enh_table->vfp +
vbios_mode->enh_table->vsync);
return true;
}
static void ast_set_vbios_color_reg(struct ast_device *ast,
const struct drm_format_info *format,
const struct ast_vbios_mode_info *vbios_mode)
const struct ast_vbios_enhtable *vmode)
{
u32 color_index;
@@ -256,7 +132,7 @@ static void ast_set_vbios_color_reg(struct ast_device *ast,
ast_set_index_reg(ast, AST_IO_VGACRI, 0x91, 0x00);
if (vbios_mode->enh_table->flags & NewModeInfo) {
if (vmode->flags & NewModeInfo) {
ast_set_index_reg(ast, AST_IO_VGACRI, 0x91, 0xa8);
ast_set_index_reg(ast, AST_IO_VGACRI, 0x92, format->cpp[0] * 8);
}
@@ -264,19 +140,19 @@ }
static void ast_set_vbios_mode_reg(struct ast_device *ast,
const struct drm_display_mode *adjusted_mode,
const struct ast_vbios_mode_info *vbios_mode)
const struct ast_vbios_enhtable *vmode)
{
u32 refresh_rate_index, mode_id;
refresh_rate_index = vbios_mode->enh_table->refresh_rate_index;
mode_id = vbios_mode->enh_table->mode_id;
refresh_rate_index = vmode->refresh_rate_index;
mode_id = vmode->mode_id;
ast_set_index_reg(ast, AST_IO_VGACRI, 0x8d, refresh_rate_index & 0xff);
ast_set_index_reg(ast, AST_IO_VGACRI, 0x8e, mode_id & 0xff);
ast_set_index_reg(ast, AST_IO_VGACRI, 0x91, 0x00);
if (vbios_mode->enh_table->flags & NewModeInfo) {
if (vmode->flags & NewModeInfo) {
ast_set_index_reg(ast, AST_IO_VGACRI, 0x91, 0xa8);
ast_set_index_reg(ast, AST_IO_VGACRI, 0x93, adjusted_mode->clock / 1000);
ast_set_index_reg(ast, AST_IO_VGACRI, 0x94, adjusted_mode->crtc_hdisplay);
@@ -288,14 +164,11 @@ static void ast_set_vbios_mode_reg(struct ast_device *ast,
static void ast_set_std_reg(struct ast_device *ast,
struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode)
const struct ast_vbios_stdtable *stdtable)
{
const struct ast_vbios_stdtable *stdtable;
u32 i;
u8 jreg;
stdtable = vbios_mode->std_table;
jreg = stdtable->misc;
ast_io_write8(ast, AST_IO_VGAMR_W, jreg);
@@ -336,13 +209,13 @@ }
static void ast_set_crtc_reg(struct ast_device *ast,
struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode)
const struct ast_vbios_enhtable *vmode)
{
u8 jreg05 = 0, jreg07 = 0, jreg09 = 0, jregAC = 0, jregAD = 0, jregAE = 0;
u16 temp, precache = 0;
if ((IS_AST_GEN6(ast) || IS_AST_GEN7(ast)) &&
(vbios_mode->enh_table->flags & AST2500PreCatchCRT))
(vmode->flags & AST2500PreCatchCRT))
precache = 40;
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x11, 0x7f, 0x00);
@@ -461,14 +334,14 @@ static void ast_set_offset_reg(struct ast_device *ast,
static void ast_set_dclk_reg(struct ast_device *ast,
struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode)
const struct ast_vbios_enhtable *vmode)
{
const struct ast_vbios_dclk_info *clk_info;
if (IS_AST_GEN6(ast) || IS_AST_GEN7(ast))
clk_info = &dclk_table_ast2500[vbios_mode->enh_table->dclk_index];
clk_info = &dclk_table_ast2500[vmode->dclk_index];
else
clk_info = &dclk_table[vbios_mode->enh_table->dclk_index];
clk_info = &dclk_table[vmode->dclk_index];
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xc0, 0x00, clk_info->param1);
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xc1, 0x00, clk_info->param2);
@@ -526,15 +399,15 @@ static void ast_set_crtthd_reg(struct ast_device *ast)
static void ast_set_sync_reg(struct ast_device *ast,
struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode)
const struct ast_vbios_enhtable *vmode)
{
u8 jreg;
jreg = ast_io_read8(ast, AST_IO_VGAMR_R);
jreg &= ~0xC0;
if (vbios_mode->enh_table->flags & NVSync)
if (vmode->flags & NVSync)
jreg |= 0x80;
if (vbios_mode->enh_table->flags & NHSync)
if (vmode->flags & NHSync)
jreg |= 0x40;
ast_io_write8(ast, AST_IO_VGAMR_W, jreg);
}
@@ -654,10 +527,9 @@ static void ast_primary_plane_helper_atomic_update(struct drm_plane *plane,
if (!old_fb || (fb->format != old_fb->format) || crtc_state->mode_changed) {
struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state);
struct ast_vbios_mode_info *vbios_mode_info = &ast_crtc_state->vbios_mode_info;
ast_set_color_reg(ast, fb->format);
ast_set_vbios_color_reg(ast, fb->format, vbios_mode_info);
ast_set_vbios_color_reg(ast, fb->format, ast_crtc_state->vmode);
}
drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state);
@@ -1021,72 +893,13 @@ static enum drm_mode_status
ast_crtc_helper_mode_valid(struct drm_crtc *crtc, const struct drm_display_mode *mode)
{
struct ast_device *ast = to_ast_device(crtc->dev);
enum drm_mode_status status;
uint32_t jtemp;
const struct ast_vbios_enhtable *vmode;
if (ast->support_wide_screen) {
if ((mode->hdisplay == 1680) && (mode->vdisplay == 1050))
return MODE_OK;
if ((mode->hdisplay == 1280) && (mode->vdisplay == 800))
return MODE_OK;
if ((mode->hdisplay == 1440) && (mode->vdisplay == 900))
return MODE_OK;
if ((mode->hdisplay == 1360) && (mode->vdisplay == 768))
return MODE_OK;
if ((mode->hdisplay == 1600) && (mode->vdisplay == 900))
return MODE_OK;
if ((mode->hdisplay == 1152) && (mode->vdisplay == 864))
return MODE_OK;
vmode = ast_vbios_find_mode(ast, mode);
if (!vmode)
return MODE_NOMODE;
if ((ast->chip == AST2100) || // GEN2, but not AST1100 (?)
(ast->chip == AST2200) || // GEN3, but not AST2150 (?)
IS_AST_GEN4(ast) || IS_AST_GEN5(ast) ||
IS_AST_GEN6(ast) || IS_AST_GEN7(ast)) {
if ((mode->hdisplay == 1920) && (mode->vdisplay == 1080))
return MODE_OK;
if ((mode->hdisplay == 1920) && (mode->vdisplay == 1200)) {
jtemp = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd1, 0xff);
if (jtemp & 0x01)
return MODE_NOMODE;
else
return MODE_OK;
}
}
}
status = MODE_NOMODE;
switch (mode->hdisplay) {
case 640:
if (mode->vdisplay == 480)
status = MODE_OK;
break;
case 800:
if (mode->vdisplay == 600)
status = MODE_OK;
break;
case 1024:
if (mode->vdisplay == 768)
status = MODE_OK;
break;
case 1152:
if (mode->vdisplay == 864)
status = MODE_OK;
break;
case 1280:
if (mode->vdisplay == 1024)
status = MODE_OK;
break;
case 1600:
if (mode->vdisplay == 1200)
status = MODE_OK;
break;
default:
break;
}
return status;
return MODE_OK;
}
static void ast_crtc_helper_mode_set_nofb(struct drm_crtc *crtc)
@@ -1095,8 +908,8 @@ static void ast_crtc_helper_mode_set_nofb(struct drm_crtc *crtc)
struct ast_device *ast = to_ast_device(dev);
struct drm_crtc_state *crtc_state = crtc->state;
struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state);
struct ast_vbios_mode_info *vbios_mode_info =
&ast_crtc_state->vbios_mode_info;
const struct ast_vbios_stdtable *std_table = ast_crtc_state->std_table;
const struct ast_vbios_enhtable *vmode = ast_crtc_state->vmode;
struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
/*
@@ -1107,25 +920,29 @@ */
*/
ast_wait_for_vretrace(ast);
ast_set_vbios_mode_reg(ast, adjusted_mode, vbios_mode_info);
ast_set_vbios_mode_reg(ast, adjusted_mode, vmode);
ast_set_index_reg(ast, AST_IO_VGACRI, 0xa1, 0x06);
ast_set_std_reg(ast, adjusted_mode, vbios_mode_info);
ast_set_crtc_reg(ast, adjusted_mode, vbios_mode_info);
ast_set_dclk_reg(ast, adjusted_mode, vbios_mode_info);
ast_set_std_reg(ast, adjusted_mode, std_table);
ast_set_crtc_reg(ast, adjusted_mode, vmode);
ast_set_dclk_reg(ast, adjusted_mode, vmode);
ast_set_crtthd_reg(ast);
ast_set_sync_reg(ast, adjusted_mode, vbios_mode_info);
ast_set_sync_reg(ast, adjusted_mode, vmode);
}
static int ast_crtc_helper_atomic_check(struct drm_crtc *crtc,
struct drm_atomic_state *state)
{
struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
struct ast_crtc_state *old_ast_crtc_state = to_ast_crtc_state(old_crtc_state);
struct drm_device *dev = crtc->dev;
struct ast_device *ast = to_ast_device(dev);
struct ast_crtc_state *ast_state;
const struct drm_format_info *format;
bool succ;
const struct ast_vbios_enhtable *vmode;
unsigned int hborder = 0;
unsigned int vborder = 0;
int ret;
if (!crtc_state->enable)
@@ -1157,11 +974,56 @@ static int ast_crtc_helper_atomic_check(struct drm_crtc *crtc,
}
}
succ = ast_get_vbios_mode_info(format, &crtc_state->mode,
&crtc_state->adjusted_mode,
&ast_state->vbios_mode_info);
if (!succ)
/*
* Set register tables.
*
* TODO: These tables mix all kinds of fields and should
* probably be resolved into various helper functions.
*/
switch (format->format) {
case DRM_FORMAT_C8:
ast_state->std_table = &vbios_stdtable[VGAModeIndex];
break;
case DRM_FORMAT_RGB565:
ast_state->std_table = &vbios_stdtable[HiCModeIndex];
break;
case DRM_FORMAT_RGB888:
case DRM_FORMAT_XRGB8888:
ast_state->std_table = &vbios_stdtable[TrueCModeIndex];
break;
default:
return -EINVAL;
}
/*
* Find the VBIOS mode and adjust the DRM display mode accordingly
* if a full modeset is required. Otherwise keep the existing values.
*/
if (drm_atomic_crtc_needs_modeset(crtc_state)) {
vmode = ast_vbios_find_mode(ast, &crtc_state->mode);
if (!vmode)
return -EINVAL;
ast_state->vmode = vmode;
if (vmode->flags & HBorder)
hborder = 8;
if (vmode->flags & VBorder)
vborder = 8;
adjusted_mode->crtc_hdisplay = vmode->hde;
adjusted_mode->crtc_hblank_start = vmode->hde + hborder;
adjusted_mode->crtc_hblank_end = vmode->ht - hborder;
adjusted_mode->crtc_hsync_start = vmode->hde + hborder + vmode->hfp;
adjusted_mode->crtc_hsync_end = vmode->hde + hborder + vmode->hfp + vmode->hsync;
adjusted_mode->crtc_htotal = vmode->ht;
adjusted_mode->crtc_vdisplay = vmode->vde;
adjusted_mode->crtc_vblank_start = vmode->vde + vborder;
adjusted_mode->crtc_vblank_end = vmode->vt - vborder;
adjusted_mode->crtc_vsync_start = vmode->vde + vborder + vmode->vfp;
adjusted_mode->crtc_vsync_end = vmode->vde + vborder + vmode->vfp + vmode->vsync;
adjusted_mode->crtc_vtotal = vmode->vt;
}
return 0;
}
@@ -1263,8 +1125,8 @@ ast_crtc_atomic_duplicate_state(struct drm_crtc *crtc)
ast_state = to_ast_crtc_state(crtc->state);
new_ast_state->format = ast_state->format;
memcpy(&new_ast_state->vbios_mode_info, &ast_state->vbios_mode_info,
sizeof(new_ast_state->vbios_mode_info));
new_ast_state->std_table = ast_state->std_table;
new_ast_state->vmode = ast_state->vmode;
return &new_ast_state->base;
}
@@ -1373,12 +1235,7 @@ int ast_mode_config_init(struct ast_device *ast)
dev->mode_config.min_height = 0;
dev->mode_config.preferred_depth = 24;
if (ast->chip == AST2100 || // GEN2, but not AST1100 (?)
ast->chip == AST2200 || // GEN3, but not AST2150 (?)
IS_AST_GEN7(ast) ||
IS_AST_GEN6(ast) ||
IS_AST_GEN5(ast) ||
IS_AST_GEN4(ast)) {
if (ast->support_fullhd) {
dev->mode_config.max_width = 1920;
dev->mode_config.max_height = 2048;
} else {

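As a worked example of the timing derivation in ast_crtc_helper_atomic_check() above, take the 60 Hz entry of res_1024x768 from the mode tables ({1344, 1024, 24, 136, 806, 768, 3, 6, ...}; no border flags, so hborder = vborder = 0):

	/* ht=1344 hde=1024 hfp=24 hsync=136, vt=806 vde=768 vfp=3 vsync=6 */
	adjusted_mode->crtc_hdisplay    = 1024;		/* hde */
	adjusted_mode->crtc_hsync_start = 1024 + 24;	/* hde + hfp = 1048 */
	adjusted_mode->crtc_hsync_end   = 1048 + 136;	/* + hsync = 1184 */
	adjusted_mode->crtc_htotal      = 1344;		/* ht */
	adjusted_mode->crtc_vdisplay    = 768;		/* vde */
	adjusted_mode->crtc_vsync_start = 768 + 3;	/* vde + vfp = 771 */
	adjusted_mode->crtc_vsync_end   = 771 + 6;	/* + vsync = 777 */
	adjusted_mode->crtc_vtotal      = 806;		/* vt */

which matches the standard VESA timings for 1024x768 at 60 Hz.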
@@ -340,26 +340,49 @@ static void ast_init_dram_reg(struct ast_device *ast)
} while ((j & 0x40) == 0);
}
void ast_post_gpu(struct ast_device *ast)
int ast_post_gpu(struct ast_device *ast)
{
int ret;
ast_set_def_ext_reg(ast);
if (IS_AST_GEN7(ast)) {
if (ast->tx_chip == AST_TX_ASTDP)
ast_dp_launch(ast);
} else if (ast->config_mode == ast_use_p2a) {
if (IS_AST_GEN6(ast))
if (AST_GEN(ast) >= 7) {
if (ast->tx_chip == AST_TX_ASTDP) {
ret = ast_dp_launch(ast);
if (ret)
return ret;
}
} else if (AST_GEN(ast) >= 6) {
if (ast->config_mode == ast_use_p2a) {
ast_post_chip_2500(ast);
else if (IS_AST_GEN5(ast) || IS_AST_GEN4(ast))
} else {
if (ast->tx_chip == AST_TX_SIL164) {
/* Enable DVO */
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xcf, 0x80);
}
}
} else if (AST_GEN(ast) >= 4) {
if (ast->config_mode == ast_use_p2a) {
ast_post_chip_2300(ast);
else
ast_init_3rdtx(ast);
} else {
if (ast->tx_chip == AST_TX_SIL164) {
/* Enable DVO */
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xcf, 0x80);
}
}
} else {
if (ast->config_mode == ast_use_p2a) {
ast_init_dram_reg(ast);
ast_init_3rdtx(ast);
} else {
if (ast->tx_chip == AST_TX_SIL164)
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xcf, 0x80); /* Enable DVO */
} else {
if (ast->tx_chip == AST_TX_SIL164) {
/* Enable DVO */
ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xcf, 0x80);
}
}
}
return 0;
}
/* AST 2300 DRAM settings */
@@ -2039,7 +2062,7 @@ void ast_post_chip_2500(struct ast_device *ast)
u8 reg;
reg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xd0, 0xff);
if ((reg & AST_VRAM_INIT_STATUS_MASK) == 0) {/* vga only */
if ((reg & AST_IO_VGACRD0_VRAM_INIT_STATUS_MASK) == 0) {/* vga only */
/* Clear bus lock condition */
ast_patch_ahb_2500(ast->regs);

@@ -32,11 +32,18 @@
#define AST_IO_VGACR80_PASSWORD (0xa8)
#define AST_IO_VGACRA1_VGAIO_DISABLED BIT(1)
#define AST_IO_VGACRA1_MMIO_ENABLED BIT(2)
#define AST_IO_VGACRA3_DVO_ENABLED BIT(7)
#define AST_IO_VGACRB6_HSYNC_OFF BIT(0)
#define AST_IO_VGACRB6_VSYNC_OFF BIT(1)
#define AST_IO_VGACRCB_HWC_16BPP BIT(0) /* set: ARGB4444, cleared: 2bpp palette */
#define AST_IO_VGACRCB_HWC_ENABLED BIT(1)
/* mirrors SCU100[7:0] */
#define AST_IO_VGACRD0_VRAM_INIT_STATUS_MASK GENMASK(7, 6)
#define AST_IO_VGACRD0_VRAM_INIT_BY_BMC BIT(7)
#define AST_IO_VGACRD0_VRAM_INIT_READY BIT(6)
#define AST_IO_VGACRD0_IKVM_WIDESCREEN BIT(0)
#define AST_IO_VGACRD1_MCU_FW_EXECUTING BIT(5)
/* Display Transmitter Type */
#define AST_IO_VGACRD1_TX_TYPE_MASK GENMASK(3, 1)
@@ -48,11 +55,16 @@
#define AST_IO_VGACRD1_TX_ANX9807_VBIOS 0x0a
#define AST_IO_VGACRD1_TX_FW_EMBEDDED_FW 0x0c /* special case of DP501 */
#define AST_IO_VGACRD1_TX_ASTDP 0x0e
#define AST_IO_VGACRD1_SUPPORTS_WUXGA BIT(0)
/*
* AST DisplayPort
*/
#define AST_IO_VGACRD7_EDID_VALID_FLAG BIT(0)
#define AST_IO_VGACRDC_LINK_SUCCESS BIT(0)
#define AST_IO_VGACRDF_HPD BIT(0)
#define AST_IO_VGACRDF_DP_VIDEO_ENABLE BIT(4) /* mirrors AST_IO_VGACRE3_DP_VIDEO_ENABLE */
#define AST_IO_VGACRE0_24BPP BIT(5) /* 18 bpp, if unset */
#define AST_IO_VGACRE3_DP_VIDEO_ENABLE BIT(0)
#define AST_IO_VGACRE3_DP_PHY_SLEEP BIT(4)
#define AST_IO_VGACRE5_EDID_READ_DONE BIT(0)
@@ -60,23 +72,4 @@
#define AST_IO_VGAIR1_R (0x5A)
#define AST_IO_VGAIR1_VREFRESH BIT(3)
#define AST_VRAM_INIT_STATUS_MASK GENMASK(7, 6)
//#define AST_VRAM_INIT_BY_BMC BIT(7)
//#define AST_VRAM_INIT_READY BIT(6)
/*
* AST DisplayPort
*/
/*
* ASTDP setmode registers:
* CRE0[7:0]: MISC0 (0x00: 18-bpp or 0x20: 24-bpp)
* CRE1[7:0]: MISC1 (default: 0x00)
* CRE2[7:0]: video format index (0x00 ~ 0x20 or 0x40 ~ 0x50)
*/
#define ASTDP_MISC0_24bpp BIT(5)
#define ASTDP_MISC1 0
#define ASTDP_AND_CLEAR_MASK 0x00
#endif

@@ -24,6 +24,8 @@
#ifndef AST_TABLES_H
#define AST_TABLES_H
#include "ast_drv.h"
/* Std. Table Index Definition */
#define TextModeIndex 0
#define EGAModeIndex 1
@@ -31,54 +33,6 @@
#define HiCModeIndex 3
#define TrueCModeIndex 4
#define Charx8Dot 0x00000001
#define HalfDCLK 0x00000002
#define DoubleScanMode 0x00000004
#define LineCompareOff 0x00000008
#define HBorder 0x00000020
#define VBorder 0x00000010
#define WideScreenMode 0x00000100
#define NewModeInfo 0x00000200
#define NHSync 0x00000400
#define PHSync 0x00000800
#define NVSync 0x00001000
#define PVSync 0x00002000
#define SyncPP (PVSync | PHSync)
#define SyncPN (PVSync | NHSync)
#define SyncNP (NVSync | PHSync)
#define SyncNN (NVSync | NHSync)
#define AST2500PreCatchCRT 0x00004000
/* DCLK Index */
#define VCLK25_175 0x00
#define VCLK28_322 0x01
#define VCLK31_5 0x02
#define VCLK36 0x03
#define VCLK40 0x04
#define VCLK49_5 0x05
#define VCLK50 0x06
#define VCLK56_25 0x07
#define VCLK65 0x08
#define VCLK75 0x09
#define VCLK78_75 0x0A
#define VCLK94_5 0x0B
#define VCLK108 0x0C
#define VCLK135 0x0D
#define VCLK157_5 0x0E
#define VCLK162 0x0F
/* #define VCLK193_25 0x10 */
#define VCLK154 0x10
#define VCLK83_5 0x11
#define VCLK106_5 0x12
#define VCLK146_25 0x13
#define VCLK148_5 0x14
#define VCLK71 0x15
#define VCLK88_75 0x16
#define VCLK119 0x17
#define VCLK85_5 0x18
#define VCLK97_75 0x19
#define VCLK118_25 0x1A
static const struct ast_vbios_dclk_info dclk_table[] = {
{0x2C, 0xE7, 0x03}, /* 00: VCLK25_175 */
{0x95, 0x62, 0x03}, /* 01: VCLK28_322 */
@@ -212,141 +166,4 @@ static const struct ast_vbios_stdtable vbios_stdtable[] = {
},
};
static const struct ast_vbios_enhtable res_640x480[] = {
{ 800, 640, 8, 96, 525, 480, 2, 2, VCLK25_175, /* 60Hz */
(SyncNN | HBorder | VBorder | Charx8Dot), 60, 1, 0x2E },
{ 832, 640, 16, 40, 520, 480, 1, 3, VCLK31_5, /* 72Hz */
(SyncNN | HBorder | VBorder | Charx8Dot), 72, 2, 0x2E },
{ 840, 640, 16, 64, 500, 480, 1, 3, VCLK31_5, /* 75Hz */
(SyncNN | Charx8Dot) , 75, 3, 0x2E },
{ 832, 640, 56, 56, 509, 480, 1, 3, VCLK36, /* 85Hz */
(SyncNN | Charx8Dot) , 85, 4, 0x2E },
{ 832, 640, 56, 56, 509, 480, 1, 3, VCLK36, /* end */
(SyncNN | Charx8Dot) , 0xFF, 4, 0x2E },
};
static const struct ast_vbios_enhtable res_800x600[] = {
{1024, 800, 24, 72, 625, 600, 1, 2, VCLK36, /* 56Hz */
(SyncPP | Charx8Dot), 56, 1, 0x30 },
{1056, 800, 40, 128, 628, 600, 1, 4, VCLK40, /* 60Hz */
(SyncPP | Charx8Dot), 60, 2, 0x30 },
{1040, 800, 56, 120, 666, 600, 37, 6, VCLK50, /* 72Hz */
(SyncPP | Charx8Dot), 72, 3, 0x30 },
{1056, 800, 16, 80, 625, 600, 1, 3, VCLK49_5, /* 75Hz */
(SyncPP | Charx8Dot), 75, 4, 0x30 },
{1048, 800, 32, 64, 631, 600, 1, 3, VCLK56_25, /* 85Hz */
(SyncPP | Charx8Dot), 84, 5, 0x30 },
{1048, 800, 32, 64, 631, 600, 1, 3, VCLK56_25, /* end */
(SyncPP | Charx8Dot), 0xFF, 5, 0x30 },
};
static const struct ast_vbios_enhtable res_1024x768[] = {
{1344, 1024, 24, 136, 806, 768, 3, 6, VCLK65, /* 60Hz */
(SyncNN | Charx8Dot), 60, 1, 0x31 },
{1328, 1024, 24, 136, 806, 768, 3, 6, VCLK75, /* 70Hz */
(SyncNN | Charx8Dot), 70, 2, 0x31 },
{1312, 1024, 16, 96, 800, 768, 1, 3, VCLK78_75, /* 75Hz */
(SyncPP | Charx8Dot), 75, 3, 0x31 },
{1376, 1024, 48, 96, 808, 768, 1, 3, VCLK94_5, /* 85Hz */
(SyncPP | Charx8Dot), 84, 4, 0x31 },
{1376, 1024, 48, 96, 808, 768, 1, 3, VCLK94_5, /* end */
(SyncPP | Charx8Dot), 0xFF, 4, 0x31 },
};
static const struct ast_vbios_enhtable res_1280x1024[] = {
{1688, 1280, 48, 112, 1066, 1024, 1, 3, VCLK108, /* 60Hz */
(SyncPP | Charx8Dot), 60, 1, 0x32 },
{1688, 1280, 16, 144, 1066, 1024, 1, 3, VCLK135, /* 75Hz */
(SyncPP | Charx8Dot), 75, 2, 0x32 },
{1728, 1280, 64, 160, 1072, 1024, 1, 3, VCLK157_5, /* 85Hz */
(SyncPP | Charx8Dot), 85, 3, 0x32 },
{1728, 1280, 64, 160, 1072, 1024, 1, 3, VCLK157_5, /* end */
(SyncPP | Charx8Dot), 0xFF, 3, 0x32 },
};
static const struct ast_vbios_enhtable res_1600x1200[] = {
{2160, 1600, 64, 192, 1250, 1200, 1, 3, VCLK162, /* 60Hz */
(SyncPP | Charx8Dot), 60, 1, 0x33 },
{2160, 1600, 64, 192, 1250, 1200, 1, 3, VCLK162, /* end */
(SyncPP | Charx8Dot), 0xFF, 1, 0x33 },
};
static const struct ast_vbios_enhtable res_1152x864[] = {
{1600, 1152, 64, 128, 900, 864, 1, 3, VCLK108, /* 75Hz */
(SyncPP | Charx8Dot | NewModeInfo), 75, 1, 0x3B },
{1600, 1152, 64, 128, 900, 864, 1, 3, VCLK108, /* end */
(SyncPP | Charx8Dot | NewModeInfo), 0xFF, 1, 0x3B },
};
/* 16:9 */
static const struct ast_vbios_enhtable res_1360x768[] = {
{1792, 1360, 64, 112, 795, 768, 3, 6, VCLK85_5, /* 60Hz */
(SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x39 },
{1792, 1360, 64, 112, 795, 768, 3, 6, VCLK85_5, /* end */
(SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 0xFF, 1, 0x39 },
};
static const struct ast_vbios_enhtable res_1600x900[] = {
{1760, 1600, 48, 32, 926, 900, 3, 5, VCLK97_75, /* 60Hz CVT RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x3A },
{2112, 1600, 88, 168, 934, 900, 3, 5, VCLK118_25, /* 60Hz CVT */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x3A },
{2112, 1600, 88, 168, 934, 900, 3, 5, VCLK118_25, /* 60Hz CVT */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x3A },
};
static const struct ast_vbios_enhtable res_1920x1080[] = {
{2200, 1920, 88, 44, 1125, 1080, 4, 5, VCLK148_5, /* 60Hz */
(SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x38 },
{2200, 1920, 88, 44, 1125, 1080, 4, 5, VCLK148_5, /* 60Hz */
(SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 0xFF, 1, 0x38 },
};
/* 16:10 */
static const struct ast_vbios_enhtable res_1280x800[] = {
{1440, 1280, 48, 32, 823, 800, 3, 6, VCLK71, /* 60Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x35 },
{1680, 1280, 72,128, 831, 800, 3, 6, VCLK83_5, /* 60Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x35 },
{1680, 1280, 72,128, 831, 800, 3, 6, VCLK83_5, /* 60Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x35 },
};
static const struct ast_vbios_enhtable res_1440x900[] = {
{1600, 1440, 48, 32, 926, 900, 3, 6, VCLK88_75, /* 60Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x36 },
{1904, 1440, 80,152, 934, 900, 3, 6, VCLK106_5, /* 60Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x36 },
{1904, 1440, 80,152, 934, 900, 3, 6, VCLK106_5, /* 60Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x36 },
};
static const struct ast_vbios_enhtable res_1680x1050[] = {
{1840, 1680, 48, 32, 1080, 1050, 3, 6, VCLK119, /* 60Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x37 },
{2240, 1680,104,176, 1089, 1050, 3, 6, VCLK146_25, /* 60Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x37 },
{2240, 1680,104,176, 1089, 1050, 3, 6, VCLK146_25, /* 60Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 0xFF, 2, 0x37 },
};
static const struct ast_vbios_enhtable res_1920x1200[] = {
{2080, 1920, 48, 32, 1235, 1200, 3, 6, VCLK154, /* 60Hz RB*/
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x34 },
{2080, 1920, 48, 32, 1235, 1200, 3, 6, VCLK154, /* 60Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 0xFF, 1, 0x34 },
};
#endif

@@ -0,0 +1,241 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2005 ASPEED Technology Inc.
*
* Permission to use, copy, modify, distribute, and sell this software and its
* documentation for any purpose is hereby granted without fee, provided that
* the above copyright notice appear in all copies and that both that
* copyright notice and this permission notice appear in supporting
* documentation, and that the name of the authors not be used in
* advertising or publicity pertaining to distribution of the software without
* specific, written prior permission. The authors makes no representations
* about the suitability of this software for any purpose. It is provided
* "as is" without express or implied warranty.
*
* THE AUTHORS DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
* INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
* EVENT SHALL THE AUTHORS BE LIABLE FOR ANY SPECIAL, INDIRECT OR
* CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
* DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
* TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
* PERFORMANCE OF THIS SOFTWARE.
*/
#include "ast_drv.h"
#include "ast_vbios.h"
/* 4:3 */
static const struct ast_vbios_enhtable res_640x480[] = {
{ 800, 640, 8, 96, 525, 480, 2, 2, VCLK25_175, /* 60 Hz */
(SyncNN | HBorder | VBorder | Charx8Dot), 60, 1, 0x2e },
{ 832, 640, 16, 40, 520, 480, 1, 3, VCLK31_5, /* 72 Hz */
(SyncNN | HBorder | VBorder | Charx8Dot), 72, 2, 0x2e },
{ 840, 640, 16, 64, 500, 480, 1, 3, VCLK31_5, /* 75 Hz */
(SyncNN | Charx8Dot), 75, 3, 0x2e },
{ 832, 640, 56, 56, 509, 480, 1, 3, VCLK36, /* 85 Hz */
(SyncNN | Charx8Dot), 85, 4, 0x2e },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_800x600[] = {
{ 1024, 800, 24, 72, 625, 600, 1, 2, VCLK36, /* 56 Hz */
(SyncPP | Charx8Dot), 56, 1, 0x30 },
{ 1056, 800, 40, 128, 628, 600, 1, 4, VCLK40, /* 60 Hz */
(SyncPP | Charx8Dot), 60, 2, 0x30 },
{ 1040, 800, 56, 120, 666, 600, 37, 6, VCLK50, /* 72 Hz */
(SyncPP | Charx8Dot), 72, 3, 0x30 },
{ 1056, 800, 16, 80, 625, 600, 1, 3, VCLK49_5, /* 75 Hz */
(SyncPP | Charx8Dot), 75, 4, 0x30 },
{ 1048, 800, 32, 64, 631, 600, 1, 3, VCLK56_25, /* 85 Hz */
(SyncPP | Charx8Dot), 84, 5, 0x30 },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1024x768[] = {
{ 1344, 1024, 24, 136, 806, 768, 3, 6, VCLK65, /* 60 Hz */
(SyncNN | Charx8Dot), 60, 1, 0x31 },
{ 1328, 1024, 24, 136, 806, 768, 3, 6, VCLK75, /* 70 Hz */
(SyncNN | Charx8Dot), 70, 2, 0x31 },
{ 1312, 1024, 16, 96, 800, 768, 1, 3, VCLK78_75, /* 75 Hz */
(SyncPP | Charx8Dot), 75, 3, 0x31 },
{ 1376, 1024, 48, 96, 808, 768, 1, 3, VCLK94_5, /* 85 Hz */
(SyncPP | Charx8Dot), 84, 4, 0x31 },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1152x864[] = {
{ 1600, 1152, 64, 128, 900, 864, 1, 3, VCLK108, /* 75 Hz */
(SyncPP | Charx8Dot | NewModeInfo), 75, 1, 0x3b },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1280x1024[] = {
{ 1688, 1280, 48, 112, 1066, 1024, 1, 3, VCLK108, /* 60 Hz */
(SyncPP | Charx8Dot), 60, 1, 0x32 },
{ 1688, 1280, 16, 144, 1066, 1024, 1, 3, VCLK135, /* 75 Hz */
(SyncPP | Charx8Dot), 75, 2, 0x32 },
{ 1728, 1280, 64, 160, 1072, 1024, 1, 3, VCLK157_5, /* 85 Hz */
(SyncPP | Charx8Dot), 85, 3, 0x32 },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1600x1200[] = {
{ 2160, 1600, 64, 192, 1250, 1200, 1, 3, VCLK162, /* 60 Hz */
(SyncPP | Charx8Dot), 60, 1, 0x33 },
AST_VBIOS_INVALID_MODE, /* end */
};
/* 16:9 */
static const struct ast_vbios_enhtable res_1360x768[] = {
{ 1792, 1360, 64, 112, 795, 768, 3, 6, VCLK85_5, /* 60 Hz */
(SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 1, 0x39 },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1600x900[] = {
{ 1760, 1600, 48, 32, 926, 900, 3, 5, VCLK97_75, /* 60 Hz CVT RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x3a },
{ 2112, 1600, 88, 168, 934, 900, 3, 5, VCLK118_25, /* 60 Hz CVT */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x3a },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1920x1080[] = {
{ 2200, 1920, 88, 44, 1125, 1080, 4, 5, VCLK148_5, /* 60 Hz */
(SyncPP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x38 },
AST_VBIOS_INVALID_MODE, /* end */
};
/* 16:10 */
static const struct ast_vbios_enhtable res_1280x800[] = {
{ 1440, 1280, 48, 32, 823, 800, 3, 6, VCLK71, /* 60 Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x35 },
{ 1680, 1280, 72, 128, 831, 800, 3, 6, VCLK83_5, /* 60 Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x35 },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1440x900[] = {
{ 1600, 1440, 48, 32, 926, 900, 3, 6, VCLK88_75, /* 60 Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x36 },
{ 1904, 1440, 80, 152, 934, 900, 3, 6, VCLK106_5, /* 60 Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x36 },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1680x1050[] = {
{ 1840, 1680, 48, 32, 1080, 1050, 3, 6, VCLK119, /* 60 Hz RB */
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x37 },
{ 2240, 1680, 104, 176, 1089, 1050, 3, 6, VCLK146_25, /* 60 Hz */
(SyncPN | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo), 60, 2, 0x37 },
AST_VBIOS_INVALID_MODE, /* end */
};
static const struct ast_vbios_enhtable res_1920x1200[] = {
{ 2080, 1920, 48, 32, 1235, 1200, 3, 6, VCLK154, /* 60 Hz RB*/
(SyncNP | Charx8Dot | LineCompareOff | WideScreenMode | NewModeInfo |
AST2500PreCatchCRT), 60, 1, 0x34 },
AST_VBIOS_INVALID_MODE, /* end */
};
/*
* VBIOS mode tables
*/
static const struct ast_vbios_enhtable *res_table_wuxga[] = {
&res_1920x1200[0],
NULL,
};
static const struct ast_vbios_enhtable *res_table_fullhd[] = {
&res_1920x1080[0],
NULL,
};
static const struct ast_vbios_enhtable *res_table_wsxga_p[] = {
&res_1280x800[0],
&res_1360x768[0],
&res_1440x900[0],
&res_1600x900[0],
&res_1680x1050[0],
NULL,
};
static const struct ast_vbios_enhtable *res_table[] = {
&res_640x480[0],
&res_800x600[0],
&res_1024x768[0],
&res_1152x864[0],
&res_1280x1024[0],
&res_1600x1200[0],
NULL,
};
static const struct ast_vbios_enhtable *
__ast_vbios_find_mode_table(const struct ast_vbios_enhtable **vmode_tables,
unsigned int hdisplay,
unsigned int vdisplay)
{
while (*vmode_tables) {
if ((*vmode_tables)->hde == hdisplay && (*vmode_tables)->vde == vdisplay)
return *vmode_tables;
++vmode_tables;
}
return NULL;
}
static const struct ast_vbios_enhtable *ast_vbios_find_mode_table(const struct ast_device *ast,
unsigned int hdisplay,
unsigned int vdisplay)
{
const struct ast_vbios_enhtable *vmode_table = NULL;
if (ast->support_wuxga)
vmode_table = __ast_vbios_find_mode_table(res_table_wuxga, hdisplay, vdisplay);
if (!vmode_table && ast->support_fullhd)
vmode_table = __ast_vbios_find_mode_table(res_table_fullhd, hdisplay, vdisplay);
if (!vmode_table && ast->support_wsxga_p)
vmode_table = __ast_vbios_find_mode_table(res_table_wsxga_p, hdisplay, vdisplay);
if (!vmode_table)
vmode_table = __ast_vbios_find_mode_table(res_table, hdisplay, vdisplay);
return vmode_table;
}
const struct ast_vbios_enhtable *ast_vbios_find_mode(const struct ast_device *ast,
const struct drm_display_mode *mode)
{
const struct ast_vbios_enhtable *best_vmode = NULL;
const struct ast_vbios_enhtable *vmode_table;
const struct ast_vbios_enhtable *vmode;
u32 refresh_rate;
vmode_table = ast_vbios_find_mode_table(ast, mode->hdisplay, mode->vdisplay);
if (!vmode_table)
return NULL;
refresh_rate = drm_mode_vrefresh(mode);
for (vmode = vmode_table; ast_vbios_mode_is_valid(vmode); ++vmode) {
if (((mode->flags & DRM_MODE_FLAG_NVSYNC) && (vmode->flags & PVSync)) ||
((mode->flags & DRM_MODE_FLAG_PVSYNC) && (vmode->flags & NVSync)) ||
((mode->flags & DRM_MODE_FLAG_NHSYNC) && (vmode->flags & PHSync)) ||
((mode->flags & DRM_MODE_FLAG_PHSYNC) && (vmode->flags & NHSync))) {
continue;
}
if (vmode->refresh_rate <= refresh_rate &&
(!best_vmode || vmode->refresh_rate > best_vmode->refresh_rate))
best_vmode = vmode;
}
return best_vmode;
}
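
The lookup above walks the per-chip tables from most to least capable and then
picks the entry with the highest refresh rate that does not exceed the
requested one, skipping entries whose sync polarities contradict the requested
mode. As an illustration only (example_check_mode() is a hypothetical caller,
not part of this series), a call site could look like this:

        /* Hypothetical caller: resolve a VBIOS entry for a requested mode. */
        static int example_check_mode(const struct ast_device *ast,
                                      const struct drm_display_mode *mode)
        {
                const struct ast_vbios_enhtable *vmode;

                vmode = ast_vbios_find_mode(ast, mode);
                if (!vmode)
                        return -EINVAL; /* no VBIOS entry for this resolution */

                /*
                 * vmode->dclk_index and vmode->mode_id would then be used to
                 * program the CRT controller.
                 */
                return 0;
        }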

@@ -0,0 +1,108 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2005 ASPEED Technology Inc.
*
* Permission to use, copy, modify, distribute, and sell this software and its
* documentation for any purpose is hereby granted without fee, provided that
* the above copyright notice appear in all copies and that both that
* copyright notice and this permission notice appear in supporting
* documentation, and that the name of the authors not be used in
* advertising or publicity pertaining to distribution of the software without
* specific, written prior permission. The authors makes no representations
* about the suitability of this software for any purpose. It is provided
* "as is" without express or implied warranty.
*
* THE AUTHORS DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
* INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
* EVENT SHALL THE AUTHORS BE LIABLE FOR ANY SPECIAL, INDIRECT OR
* CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
* DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
* TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
* PERFORMANCE OF THIS SOFTWARE.
*/
/* Ported from xf86-video-ast driver */
#ifndef AST_VBIOS_H
#define AST_VBIOS_H
#include <linux/types.h>
struct ast_device;
struct drm_display_mode;
#define Charx8Dot 0x00000001
#define HalfDCLK 0x00000002
#define DoubleScanMode 0x00000004
#define LineCompareOff 0x00000008
#define HBorder 0x00000020
#define VBorder 0x00000010
#define WideScreenMode 0x00000100
#define NewModeInfo 0x00000200
#define NHSync 0x00000400
#define PHSync 0x00000800
#define NVSync 0x00001000
#define PVSync 0x00002000
#define SyncPP (PVSync | PHSync)
#define SyncPN (PVSync | NHSync)
#define SyncNP (NVSync | PHSync)
#define SyncNN (NVSync | NHSync)
#define AST2500PreCatchCRT 0x00004000
/* DCLK Index */
#define VCLK25_175 0x00
#define VCLK28_322 0x01
#define VCLK31_5 0x02
#define VCLK36 0x03
#define VCLK40 0x04
#define VCLK49_5 0x05
#define VCLK50 0x06
#define VCLK56_25 0x07
#define VCLK65 0x08
#define VCLK75 0x09
#define VCLK78_75 0x0a
#define VCLK94_5 0x0b
#define VCLK108 0x0c
#define VCLK135 0x0d
#define VCLK157_5 0x0e
#define VCLK162 0x0f
/* #define VCLK193_25 0x10 */
#define VCLK154 0x10
#define VCLK83_5 0x11
#define VCLK106_5 0x12
#define VCLK146_25 0x13
#define VCLK148_5 0x14
#define VCLK71 0x15
#define VCLK88_75 0x16
#define VCLK119 0x17
#define VCLK85_5 0x18
#define VCLK97_75 0x19
#define VCLK118_25 0x1a
struct ast_vbios_enhtable {
u32 ht;
u32 hde;
u32 hfp;
u32 hsync;
u32 vt;
u32 vde;
u32 vfp;
u32 vsync;
u32 dclk_index;
u32 flags;
u32 refresh_rate;
u32 refresh_rate_index;
u32 mode_id;
};
#define AST_VBIOS_INVALID_MODE \
{0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u}
static inline bool ast_vbios_mode_is_valid(const struct ast_vbios_enhtable *vmode)
{
return vmode->ht && vmode->vt && vmode->refresh_rate;
}
const struct ast_vbios_enhtable *ast_vbios_find_mode(const struct ast_device *ast,
const struct drm_display_mode *mode);
#endif
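
Each table in ast_vbios.c is terminated by AST_VBIOS_INVALID_MODE, an all-zero
entry that ast_vbios_mode_is_valid() rejects (it requires non-zero ht, vt and
refresh_rate), so callers can walk a table without knowing its length. A
minimal sketch of that iteration pattern (illustrative, not code from this
series):

        /* Count the valid entries in a sentinel-terminated mode table. */
        static unsigned int example_count_modes(const struct ast_vbios_enhtable *table)
        {
                unsigned int n = 0;

                for (; ast_vbios_mode_is_valid(table); ++table)
                        ++n;

                return n;
        }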

@@ -243,9 +243,14 @@ static const struct hdmi_codec_ops adv7511_codec_ops = {
static const struct hdmi_codec_pdata codec_data = {
.ops = &adv7511_codec_ops,
.i2s_formats = (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S20_3LE |
SNDRV_PCM_FMTBIT_S24_3LE | SNDRV_PCM_FMTBIT_S24_LE |
SNDRV_PCM_FMTBIT_IEC958_SUBFRAME_LE),
.max_i2s_channels = 2,
.i2s = 1,
.no_i2s_capture = 1,
.spdif = 1,
.no_spdif_capture = 1,
};
int adv7511_audio_init(struct device *dev, struct adv7511 *adv7511)

@@ -847,7 +847,7 @@ static int adv7511_connector_get_modes(struct drm_connector *connector)
static enum drm_mode_status
adv7511_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct adv7511 *adv = connector_to_adv7511(connector);
@@ -910,14 +910,16 @@ static struct adv7511 *bridge_to_adv7511(struct drm_bridge *bridge)
return container_of(bridge, struct adv7511, bridge);
}
static void adv7511_bridge_enable(struct drm_bridge *bridge)
static void adv7511_bridge_atomic_enable(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state)
{
struct adv7511 *adv = bridge_to_adv7511(bridge);
adv7511_power_on(adv);
}
static void adv7511_bridge_disable(struct drm_bridge *bridge)
static void adv7511_bridge_atomic_disable(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state)
{
struct adv7511 *adv = bridge_to_adv7511(bridge);
@@ -996,14 +998,18 @@ static void adv7511_bridge_hpd_notify(struct drm_bridge *bridge,
}
static const struct drm_bridge_funcs adv7511_bridge_funcs = {
.enable = adv7511_bridge_enable,
.disable = adv7511_bridge_disable,
.mode_set = adv7511_bridge_mode_set,
.mode_valid = adv7511_bridge_mode_valid,
.attach = adv7511_bridge_attach,
.detect = adv7511_bridge_detect,
.edid_read = adv7511_bridge_edid_read,
.hpd_notify = adv7511_bridge_hpd_notify,
.atomic_enable = adv7511_bridge_atomic_enable,
.atomic_disable = adv7511_bridge_atomic_disable,
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.atomic_reset = drm_atomic_helper_bridge_reset,
};
/* -----------------------------------------------------------------------------

@@ -1619,7 +1619,7 @@ bool cdns_mhdp_bandwidth_ok(struct cdns_mhdp_device *mhdp,
static
enum drm_mode_status cdns_mhdp_mode_valid(struct drm_connector *conn,
struct drm_display_mode *mode)
const struct drm_display_mode *mode)
{
struct cdns_mhdp_device *mhdp = connector_to_mhdp(conn);

@@ -2250,12 +2250,13 @@ static bool it6505_hdcp_part2_ksvlist_check(struct it6505 *it6505)
continue;
}
for (i = 0; i < 5; i++) {
for (i = 0; i < 5; i++)
if (bv[i][3] != av[i][0] || bv[i][2] != av[i][1] ||
av[i][1] != av[i][2] || bv[i][0] != av[i][3])
bv[i][1] != av[i][2] || bv[i][0] != av[i][3])
break;
DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d, %d", retry, i);
if (i == 5) {
DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d", retry);
return true;
}
}

@@ -115,16 +115,9 @@ static int ge_b850v3_lvds_get_modes(struct drm_connector *connector)
return num_modes;
}
static enum drm_mode_status ge_b850v3_lvds_mode_valid(
struct drm_connector *connector, struct drm_display_mode *mode)
{
return MODE_OK;
}
static const struct
drm_connector_helper_funcs ge_b850v3_lvds_connector_helper_funcs = {
.get_modes = ge_b850v3_lvds_get_modes,
.mode_valid = ge_b850v3_lvds_mode_valid,
};
static enum drm_connector_status ge_b850v3_lvds_bridge_detect(struct drm_bridge *bridge)

@@ -162,8 +162,7 @@ static int mchp_lvds_probe(struct platform_device *pdev)
lvds->dev = dev;
lvds->regs = devm_ioremap_resource(lvds->dev,
platform_get_resource(pdev, IORESOURCE_MEM, 0));
lvds->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(lvds->regs))
return PTR_ERR(lvds->regs);

@@ -15,7 +15,6 @@
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>

@@ -19,7 +19,6 @@
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>

@@ -20,7 +20,6 @@
#include <drm/drm_edid.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#define PAGE0_AUXCH_CFG3 0x76

@@ -2621,6 +2621,7 @@ static int dw_hdmi_connector_create(struct dw_hdmi *hdmi)
* - MEDIA_BUS_FMT_UYYVYY12_0_5X36,
* - MEDIA_BUS_FMT_UYYVYY10_0_5X30,
* - MEDIA_BUS_FMT_UYYVYY8_0_5X24,
* - MEDIA_BUS_FMT_RGB888_1X24,
* - MEDIA_BUS_FMT_YUV16_1X48,
* - MEDIA_BUS_FMT_RGB161616_1X48,
* - MEDIA_BUS_FMT_UYVY12_1X24,
@@ -2631,7 +2632,6 @@ static int dw_hdmi_connector_create(struct dw_hdmi *hdmi)
* - MEDIA_BUS_FMT_RGB101010_1X30,
* - MEDIA_BUS_FMT_UYVY8_1X16,
* - MEDIA_BUS_FMT_YUV8_1X24,
* - MEDIA_BUS_FMT_RGB888_1X24,
*/
/* Can return a maximum of 11 possible output formats for a mode/connector */
@@ -2669,7 +2669,7 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
}
/*
* If the current mode enforces 4:2:0, force the output but format
* If the current mode enforces 4:2:0, force the output bus format
* to 4:2:0 and do not add the YUV422/444/RGB formats
*/
if (conn->ycbcr_420_allowed &&

@@ -20,10 +20,10 @@
#include <video/mipi_display.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>

@@ -26,7 +26,6 @@
#include <drm/drm_bridge.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_probe_helper.h>
#define FLD_VAL(val, start, end) FIELD_PREP(GENMASK(start, end), val)

@@ -40,7 +40,6 @@
#include <drm/drm_bridge.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>

@@ -32,7 +32,6 @@
#include <drm/drm_edid.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
@@ -480,6 +479,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
const char *name)
{
struct device *dev = pdata->dev;
const struct i2c_client *client = to_i2c_client(dev);
struct auxiliary_device *aux;
int ret;
@@ -488,6 +488,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
return -ENOMEM;
aux->name = name;
aux->id = (client->adapter->nr << 10) | client->addr;
aux->dev.parent = dev;
aux->dev.release = ti_sn65dsi86_aux_device_release;
device_set_of_node_from_dev(&aux->dev, dev);

@@ -132,7 +132,7 @@ fi
# Pass needed files to the test stage
mkdir -p install
cp -rfv .gitlab-ci/* install/.
cp -rfv ci/* install/.
cp -rfv bin/ci/* install/.
cp -rfv install/common install/ci-common
cp -rfv drivers/gpu/drm/ci/* install/.

@@ -1,8 +1,7 @@
.build:
extends:
- .build-rules
- .container+build-rules
stage: build
stage: build-only
artifacts:
paths:
- artifacts
@@ -110,3 +109,104 @@ build-nodebugfs:arm64:
build:x86_64:
extends: .build:x86_64
# Disable build jobs that we won't use
alpine-build-testing:
rules:
- when: never
debian-android:
rules:
- when: never
debian-arm32:
rules:
- when: never
debian-arm32-asan:
rules:
- when: never
debian-arm64:
rules:
- when: never
debian-arm64-asan:
rules:
- when: never
debian-arm64-build-test:
rules:
- when: never
debian-arm64-release:
rules:
- when: never
debian-build-testing:
rules:
- when: never
debian-clang:
rules:
- when: never
debian-clang-release:
rules:
- when: never
debian-no-libdrm:
rules:
- when: never
debian-ppc64el:
rules:
- when: never
debian-release:
rules:
- when: never
debian-s390x:
rules:
- when: never
debian-testing:
rules:
- when: never
debian-testing-asan:
rules:
- when: never
debian-testing-msan:
rules:
- when: never
debian-vulkan:
rules:
- when: never
debian-x86_32:
rules:
- when: never
fedora-release:
rules:
- when: never
rustfmt:
rules:
- when: never
shader-db:
rules:
- when: never
windows-msvc:
rules:
- when: never
yaml-toml-shell-py-test:
rules:
- when: never

@@ -24,7 +24,7 @@ alpine/x86_64_build:
rules:
- when: never
debian/x86_64_test-vk:
debian/arm64_test-gl:
rules:
- when: never
@@ -32,7 +32,15 @@ debian/arm64_test-vk:
rules:
- when: never
debian/arm64_test-gl:
debian/ppc64el_build:
rules:
- when: never
debian/s390x_build:
rules:
- when: never
debian/x86_64_test-vk:
rules:
- when: never
@@ -56,14 +64,6 @@ windows_test_msvc:
rules:
- when: never
.debian/x86_64_build-mingw:
rules:
- when: never
rustfmt:
rules:
- when: never
windows_msvc:
rules:
- when: never
- when: never

@@ -1,11 +1,11 @@
variables:
DRM_CI_PROJECT_PATH: &drm-ci-project-path mesa/mesa
DRM_CI_COMMIT_SHA: &drm-ci-commit-sha c6a9a9c3bce90923f7700219354e0b6e5a3c9ba6
DRM_CI_COMMIT_SHA: &drm-ci-commit-sha 7d3062470f3ccc6cb40540e772e902c7e2248024
UPSTREAM_REPO: https://gitlab.freedesktop.org/drm/kernel.git
TARGET_BRANCH: drm-next
IGT_VERSION: a73311079a5d8ac99eb25336a8369a2c3c6b519b
IGT_VERSION: 33adea9ebafd059ac88a5ccfec60536394f36c7c
DEQP_RUNNER_GIT_URL: https://gitlab.freedesktop.org/mesa/deqp-runner.git
DEQP_RUNNER_GIT_TAG: v0.20.0
@@ -20,6 +20,9 @@ variables:
rm download-git-cache.sh
set +o xtrace
S3_JWT_FILE: /s3_jwt
S3_JWT_FILE_SCRIPT: |-
echo -n '${S3_JWT}' > '${S3_JWT_FILE}' &&
unset CI_JOB_JWT S3_JWT # Unsetting vulnerable env variables
S3_HOST: s3.freedesktop.org
# This bucket is used to fetch the kernel image
S3_KERNEL_BUCKET: mesa-rootfs
@@ -31,17 +34,14 @@ variables:
PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/${S3_ARTIFACTS_BUCKET}/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO
JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
# default kernel for rootfs before injecting the current kernel tree
KERNEL_REPO: "gfx-ci/linux"
KERNEL_TAG: "v6.6.21-mesa-f8ea"
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG}
PKG_REPO_REV: "3cc12a2a"
LAVA_TAGS: subset-1-gfx
LAVA_JOB_PRIORITY: 30
ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts
# Python scripts for structured logger
PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install"
default:
id_tokens:
S3_JWT:
@@ -50,16 +50,13 @@ default:
- export SCRIPTS_DIR=$(mktemp -d)
- curl -L -s --retry 4 -f --retry-all-errors --retry-delay 60 -O --output-dir "${SCRIPTS_DIR}" "${DRM_CI_PROJECT_URL}/-/raw/${DRM_CI_COMMIT_SHA}/.gitlab-ci/setup-test-env.sh"
- source ${SCRIPTS_DIR}/setup-test-env.sh
- echo -e "\e[0Ksection_start:$(date +%s):unset_env_vars_section[collapsed=true]\r\e[0KUnsetting vulnerable environment variables"
- echo -n "${S3_JWT}" > "${S3_JWT_FILE}"
- unset CI_JOB_JWT S3_JWT
- echo -e "\e[0Ksection_end:$(date +%s):unset_env_vars_section\r\e[0K"
- eval "$S3_JWT_FILE_SCRIPT"
- echo -e "\e[0Ksection_start:$(date +%s):drm_ci_download_section[collapsed=true]\r\e[0KDownloading mesa from $DRM_CI_PROJECT_URL/-/archive/$DRM_CI_COMMIT_SHA/mesa-$DRM_CI_COMMIT_SHA.tar.gz"
- cd $CI_PROJECT_DIR
- curl --output - $DRM_CI_PROJECT_URL/-/archive/$DRM_CI_COMMIT_SHA/mesa-$DRM_CI_COMMIT_SHA.tar.gz | tar -xz
- mv mesa-$DRM_CI_COMMIT_SHA/.gitlab-ci* .
- mv mesa-$DRM_CI_COMMIT_SHA/bin/ci .
- mv mesa-$DRM_CI_COMMIT_SHA/bin .
- rm -rf mesa-$DRM_CI_COMMIT_SHA/
- echo -e "\e[0Ksection_end:$(date +%s):drm_ci_download_section\r\e[0K"
@@ -71,6 +68,7 @@ default:
export S3_JWT="$(<${S3_JWT_FILE})" &&
rm "${S3_JWT_FILE}"
include:
- project: 'freedesktop/ci-templates'
ref: 16bc29078de5e0a067ff84a1a199a3760d3b3811
@@ -85,6 +83,7 @@ include:
- project: *drm-ci-project-path
ref: *drm-ci-commit-sha
file:
- '/.gitlab-ci/build/gitlab-ci.yml'
- '/.gitlab-ci/container/gitlab-ci.yml'
- '/.gitlab-ci/farm-rules.yml'
- '/.gitlab-ci/lava/lava-gitlab-ci.yml'
@@ -115,9 +114,10 @@ include:
stages:
- sanity
- container
- code-validation
- git-archive
- build
- build-for-tests
- build-only
- code-validation
- amdgpu
- i915
- mediatek
@@ -128,33 +128,27 @@ stages:
- rockchip
- software-driver
# YAML anchors for rule conditions
# --------------------------------
.rules-anchors:
rules:
# Pipeline for forked project branch
- if: &is-forked-branch '$CI_COMMIT_BRANCH && $CI_PROJECT_NAMESPACE != "mesa"'
when: manual
# Forked project branch / pre-merge pipeline not for Marge bot
- if: &is-forked-branch-or-pre-merge-not-for-marge '$CI_PROJECT_NAMESPACE != "mesa" || ($GITLAB_USER_LOGIN != "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event")'
when: manual
# Pipeline runs for the main branch of the upstream Mesa project
- if: &is-mesa-main '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH && $CI_COMMIT_BRANCH'
when: always
# Post-merge pipeline
- if: &is-post-merge '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_BRANCH'
when: on_success
# Post-merge pipeline, not for Marge Bot
- if: &is-post-merge-not-for-marge '$CI_PROJECT_NAMESPACE == "mesa" && $GITLAB_USER_LOGIN != "marge-bot" && $CI_COMMIT_BRANCH'
when: on_success
# do not duplicate pipelines on merge pipelines
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS && $CI_PIPELINE_SOURCE == "push"
when: never
# merge pipeline
- if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
# post-merge pipeline
- if: &is-post-merge $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "push"
# Pre-merge pipeline
- if: &is-pre-merge '$CI_PIPELINE_SOURCE == "merge_request_event"'
when: on_success
# Pre-merge pipeline for Marge Bot
- if: &is-pre-merge-for-marge '$GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"'
when: on_success
- if: &is-pre-merge $CI_PIPELINE_SOURCE == "merge_request_event"
# Push to a branch on a fork
- &is-fork-push '$CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"'
- if: &is-fork-push $CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"
# nightly pipeline
- if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
# pipeline for direct pushes that bypassed the CI
- if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
# Rules applied to every job in the pipeline
.common-rules:
@@ -162,42 +156,51 @@ stages:
- if: *is-fork-push
when: manual
.never-post-merge-rules:
rules:
- if: *is-post-merge
when: never
# Rule to filter for only scheduled pipelines.
.scheduled_pipeline-rules:
rules:
- if: &is-scheduled-pipeline '$CI_PIPELINE_SOURCE == "schedule"'
when: on_success
# Generic rule to not run the job during scheduled pipelines. Jobs that aren't
# something like a nightly run should include this rule.
.no_scheduled_pipelines-rules:
rules:
- if: *is-scheduled-pipeline
when: never
# When to automatically run the CI for build jobs
.build-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- !reference [.never-post-merge-rules, rules]
# Run automatically once all dependency jobs have passed
- when: on_success
# When to automatically run the CI for container jobs
.container+build-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Build everything in merge pipelines, if any files affecting the pipeline
# were changed
- if: *is-merge-attempt
changes: &all_paths
- drivers/gpu/drm/ci/**/*
when: on_success
# Same as above, but for pre-merge pipelines
- if: *is-pre-merge
changes:
*all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-pre-merge
when: never
# Build everything after someone bypassed the CI
- if: *is-direct-push
when: on_success
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
when: on_success
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
.ci-deqp-artifacts:
artifacts:
name: "mesa_${CI_JOB_NAME}"
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME}"
when: always
untracked: false
paths:
@@ -208,31 +211,7 @@ stages:
- _build/meson-logs/strace
.container-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- !reference [.never-post-merge-rules, rules]
# Run pipeline by default in the main project if any CI pipeline
# configuration files were changed, to ensure docker images are up to date
- if: *is-post-merge
changes:
- drivers/gpu/drm/ci/**/*
when: on_success
# Run pipeline by default if it was triggered by Marge Bot, is for a
# merge request, and any files affecting the pipeline were changed
- if: *is-pre-merge-for-marge
when: on_success
# Run pipeline by default in the main project if it was not triggered by
# Marge Bot, and any files affecting the pipeline were changed
- if: *is-post-merge-not-for-marge
when: on_success
# Allow triggering jobs manually in other cases
- when: manual
# Git archive
make git archive:
extends:
- .fdo.ci-fairy
@@ -264,30 +243,64 @@ sanity:
rules:
- if: *is-pre-merge
when: on_success
# Other cases default to never
- when: never
variables:
GIT_STRATEGY: none
script:
# ci-fairy check-commits --junit-xml=check-commits.xml
- ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
- |
set -eu
image_tags=(
ALPINE_X86_64_LAVA_SSH_TAG
CONTAINER_TAG
DEBIAN_BASE_TAG
DEBIAN_BUILD_TAG
DEBIAN_PYUTILS_TAG
DEBIAN_TEST_GL_TAG
KERNEL_ROOTFS_TAG
KERNEL_TAG
PKG_REPO_REV
)
for var in "${image_tags[@]}"
do
if [ "$(echo -n "${!var}" | wc -c)" -gt 20 ]
then
echo "$var is too long; please make sure it is at most 20 chars."
exit 1
fi
done
artifacts:
when: on_failure
reports:
junit: check-*.xml
tags:
- placeholder-job
# Rules for tests that should not block merging, but should be available to
# optionally run with the "play" button in the UI in pre-merge non-marge
# pipelines. This should appear in "extends:" after any includes of
# test-source-dep.yml rules, so that these rules replace those.
.test-manual-mr:
mr-label-maker-test:
extends:
- .fdo.ci-fairy
stage: sanity
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- if: *is-forked-branch-or-pre-merge-not-for-marge
when: manual
- !reference [.mr-label-maker-rules, rules]
variables:
JOB_TIMEOUT: 80
GIT_STRATEGY: fetch
timeout: 10m
script:
- set -eu
- python3 -m venv .venv
- source .venv/bin/activate
- pip install git+https://gitlab.freedesktop.org/freedesktop/mr-label-maker
- mr-label-maker --dry-run --mr $CI_MERGE_REQUEST_IID
# Jobs that need to pass before spending hardware resources on further testing
.required-for-hardware-jobs:
needs: []
needs:
- job: clang-format
optional: true
- job: rustfmt
optional: true
- job: toml-lint
optional: true

@@ -47,7 +47,7 @@ else
ARCH="x86_64"
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 -s ${FDO_HTTP_CACHE_URI:-}$PIPELINE_ARTIFACTS_BASE/$ARCH/igt.tar.gz | tar --zstd -v -x -C /
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 -s $PIPELINE_ARTIFACTS_BASE/$ARCH/igt.tar.gz | tar --zstd -v -x -C /
TESTLIST="/igt/libexec/igt-gpu-tools/ci-testlist.txt"
@@ -69,7 +69,7 @@ igt-runner \
run \
--igt-folder /igt/libexec/igt-gpu-tools \
--caselist $TESTLIST \
--output /results \
--output $RESULTS_DIR \
-vvvv \
$IGT_SKIPS \
$IGT_FLAKES \
@@ -80,13 +80,10 @@ set -e
deqp-runner junit \
--testsuite IGT \
--results /results/failures.csv \
--output /results/junit.xml \
--results $RESULTS_DIR/failures.csv \
--output $RESULTS_DIR/junit.xml \
--limit 50 \
--template "See https://$CI_PROJECT_ROOT_NAMESPACE.pages.freedesktop.org/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/results/{{testcase}}.xml"
# Store the results also in the simpler format used by the runner in ChromeOS CI
#sed -r 's/(dmesg-warn|pass)/success/g' /results/results.txt > /results/results_simple.txt
--template "See $ARTIFACTS_BASE_URL/results/{{testcase}}.xml"
cd $oldpath
exit $ret

@@ -1,5 +1,5 @@ variables:
variables:
CONTAINER_TAG: "2024-09-09-uprevs"
CONTAINER_TAG: "20250204-mesa-uprev"
DEBIAN_X86_64_BUILD_BASE_IMAGE: "debian/x86_64_build-base"
DEBIAN_BASE_TAG: "${CONTAINER_TAG}"
@@ -7,9 +7,16 @@ variables:
DEBIAN_BUILD_TAG: "${CONTAINER_TAG}"
KERNEL_ROOTFS_TAG: "${CONTAINER_TAG}"
# default kernel for rootfs before injecting the current kernel tree
KERNEL_TAG: "v6.13-rc4-mesa-5e77"
KERNEL_REPO: "gfx-ci/linux"
PKG_REPO_REV: "bca9635d"
DEBIAN_X86_64_TEST_BASE_IMAGE: "debian/x86_64_test-base"
DEBIAN_X86_64_TEST_IMAGE_GL_PATH: "debian/x86_64_test-gl"
DEBIAN_TEST_GL_TAG: "${CONTAINER_TAG}"
ALPINE_X86_64_LAVA_SSH_TAG: "${CONTAINER_TAG}"
DEBIAN_PYUTILS_IMAGE: "debian/x86_64_pyutils"
DEBIAN_PYUTILS_TAG: "${CONTAINER_TAG}"
ALPINE_X86_64_LAVA_SSH_TAG: "${CONTAINER_TAG}"

@@ -1,58 +1,102 @@
#!/bin/bash
#!/usr/bin/env bash
# SPDX-License-Identifier: MIT
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
set -e
set -x
# If we run in the fork (not from mesa or Marge-bot), reuse mainline kernel and rootfs, if exist.
_check_artifact_path() {
_url="https://${1}/${2}"
if curl -s -o /dev/null -I -L -f --retry 4 --retry-delay 15 "${_url}"; then
echo -n "${_url}"
fi
}
# Try to use the kernel and rootfs built in mainline first, so we're more
# likely to hit cache
if curl -L --retry 4 -f --retry-all-errors --retry-delay 60 -s "https://${BASE_SYSTEM_MAINLINE_HOST_PATH}/done"; then
BASE_SYSTEM_HOST_PATH="${BASE_SYSTEM_MAINLINE_HOST_PATH}"
else
BASE_SYSTEM_HOST_PATH="${BASE_SYSTEM_FORK_HOST_PATH}"
fi
get_path_to_artifact() {
_mainline_artifact="$(_check_artifact_path ${BASE_SYSTEM_MAINLINE_HOST_PATH} ${1})"
if [ -n "${_mainline_artifact}" ]; then
echo -n "${_mainline_artifact}"
return
fi
_fork_artifact="$(_check_artifact_path ${BASE_SYSTEM_FORK_HOST_PATH} ${1})"
if [ -n "${_fork_artifact}" ]; then
echo -n "${_fork_artifact}"
return
fi
set +x
error "Sorry, I couldn't find a viable built path for ${1} in either mainline or a fork." >&2
echo "" >&2
echo "If you're working on CI, this probably means that you're missing a dependency:" >&2
echo "this job ran ahead of the job which was supposed to upload that artifact." >&2
echo "" >&2
echo "If you aren't working on CI, please ping @mesa/ci-helpers to see if we can help." >&2
echo "" >&2
echo "This job is going to fail, because I can't find the resources I need. Sorry." >&2
set -x
exit 1
}
. "${SCRIPTS_DIR}/setup-test-env.sh"
section_start prepare_rootfs "Preparing root filesystem"
set -ex
section_switch rootfs "Assembling root filesystem"
ROOTFS_URL="$(get_path_to_artifact lava-rootfs.tar.zst)"
[ $? != 1 ] || exit 1
rm -rf results
mkdir -p results/job-rootfs-overlay/
cp artifacts/ci-common/capture-devcoredump.sh results/job-rootfs-overlay/
artifacts/ci-common/generate-env.sh > results/job-rootfs-overlay/set-job-env-vars.sh
cp artifacts/ci-common/init-*.sh results/job-rootfs-overlay/
cp artifacts/ci-common/intel-gpu-freq.sh results/job-rootfs-overlay/
cp "$SCRIPTS_DIR"/setup-test-env.sh results/job-rootfs-overlay/
# Prepare env vars for upload.
section_start variables "Variables passed through:"
KERNEL_IMAGE_BASE="https://${BASE_SYSTEM_HOST_PATH}" \
artifacts/ci-common/generate-env.sh | tee results/job-rootfs-overlay/set-job-env-vars.sh
section_end variables
tar zcf job-rootfs-overlay.tar.gz -C results/job-rootfs-overlay/ .
ci-fairy s3cp --token-file "${S3_JWT_FILE}" job-rootfs-overlay.tar.gz "https://${JOB_ROOTFS_OVERLAY_PATH}"
# Prepare env vars for upload.
section_switch variables "Environment variables passed through to device:"
cat results/job-rootfs-overlay/set-job-env-vars.sh
section_switch lava_submit "Submitting job for scheduling"
touch results/lava.log
tail -f results/lava.log &
PYTHONPATH=artifacts/ artifacts/lava/lava_job_submitter.py \
submit \
--farm "${FARM}" \
--device-type "${DEVICE_TYPE}" \
--boot-method "${BOOT_METHOD}" \
--job-timeout-min $((CI_JOB_TIMEOUT/60 - 5)) \
--dump-yaml \
--pipeline-info "$CI_JOB_NAME: $CI_PIPELINE_URL on $CI_COMMIT_REF_NAME ${CI_NODE_INDEX}/${CI_NODE_TOTAL}" \
--rootfs-url-prefix "https://${BASE_SYSTEM_HOST_PATH}" \
--rootfs-url "${ROOTFS_URL}" \
--kernel-url-prefix "https://${PIPELINE_ARTIFACTS_BASE}/${DEBIAN_ARCH}" \
--build-url "${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${DEBIAN_ARCH}/kernel-files.tar.zst" \
--job-rootfs-overlay-url "${FDO_HTTP_CACHE_URI:-}https://${JOB_ROOTFS_OVERLAY_PATH}" \
--job-timeout-min ${JOB_TIMEOUT:-80} \
--kernel-external "${EXTERNAL_KERNEL_TAG}" \
--first-stage-init artifacts/ci-common/init-stage1.sh \
--ci-project-dir "${CI_PROJECT_DIR}" \
--device-type "${DEVICE_TYPE}" \
--farm "${FARM}" \
--dtb-filename "${DTB}" \
--jwt-file "${S3_JWT_FILE}" \
--kernel-image-name "${KERNEL_IMAGE_NAME}" \
--kernel-image-type "${KERNEL_IMAGE_TYPE}" \
--boot-method "${BOOT_METHOD}" \
--visibility-group "${VISIBILITY_GROUP}" \
--lava-tags "${LAVA_TAGS}" \
--mesa-job-name "$CI_JOB_NAME" \
--structured-log-file "results/lava_job_detail.json" \
--ssh-client-image "${LAVA_SSH_CLIENT_IMAGE}" \
--project-name "${CI_PROJECT_NAME}" \
--starting-section "${CURRENT_SECTION}" \
--job-submitted-at "${CI_JOB_STARTED_AT}" \
- append-overlay \
--name=kernel-build \
--url="${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${DEBIAN_ARCH}/kernel-files.tar.zst" \
--compression=zstd \
--path="${CI_PROJECT_DIR}" \
--format=tar \
- append-overlay \
--name=job-overlay \
--url="https://${JOB_ROOTFS_OVERLAY_PATH}" \
--compression=gz \
--path="/" \
--format=tar \
- submit \
>> results/lava.log

@@ -1,21 +1,16 @@
.test-rules:
rules:
- if: '$FD_FARM == "offline" && $RUNNER_TAG =~ /^google-freedreno-/'
when: never
- if: '$COLLABORA_FARM == "offline" && $RUNNER_TAG =~ /^mesa-ci-x86-64-lava-/'
when: never
- !reference [.no_scheduled_pipelines-rules, rules]
- when: on_success
.lava-test:
extends:
- .test-rules
- .container+build-rules
timeout: "1h30m"
rules:
- !reference [.scheduled_pipeline-rules, rules]
- !reference [.collabora-farm-rules, rules]
- when: on_success
script:
# Note: Build dir (and thus install) may be dirty due to GIT_STRATEGY
- rm -rf install
- tar -xf artifacts/install.tar
- mv install/* artifacts/.
- mv -n install/* artifacts/.
# Override it with our lava-submit.sh script
- ./artifacts/lava-submit.sh
@@ -32,6 +27,7 @@
- alpine/x86_64_lava_ssh_client
- kernel+rootfs_arm32
- debian/x86_64_build
- python-artifacts
- testing:arm32
- igt:arm32
@@ -48,6 +44,7 @@
- alpine/x86_64_lava_ssh_client
- kernel+rootfs_arm64
- debian/x86_64_build
- python-artifacts
- testing:arm64
- igt:arm64
@@ -64,6 +61,7 @@
- alpine/x86_64_lava_ssh_client
- kernel+rootfs_x86_64
- debian/x86_64_build
- python-artifacts
- testing:x86_64
- igt:x86_64
@@ -71,8 +69,11 @@
extends:
- .baremetal-test-arm64
- .use-debian/baremetal_arm64_test
- .test-rules
timeout: "1h30m"
rules:
- !reference [.scheduled_pipeline-rules, rules]
- !reference [.google-freedreno-farm-rules, rules]
- when: on_success
variables:
FDO_CI_CONCURRENT: 10
HWCI_TEST_SCRIPT: "/install/igt_runner.sh"
@@ -441,20 +442,20 @@ panfrost:g12b:
virtio_gpu:none:
stage: software-driver
timeout: "1h30m"
rules:
- !reference [.scheduled_pipeline-rules, rules]
- when: on_success
variables:
CROSVM_GALLIUM_DRIVER: llvmpipe
DRIVER_NAME: virtio_gpu
GPU_VERSION: none
extends:
- .test-gl
- .test-rules
tags:
- kvm
script:
- ln -sf $CI_PROJECT_DIR/install /install
- mv install/bzImage /lava-files/bzImage
- mkdir -p $CI_PROJECT_DIR/results
- ln -sf $CI_PROJECT_DIR/results /results
- install/crosvm-runner.sh install/igt_runner.sh
needs:
- debian/x86_64_test-gl
@@ -464,20 +465,20 @@ virtio_gpu:none:
vkms:none:
stage: software-driver
timeout: "1h30m"
rules:
- !reference [.scheduled_pipeline-rules, rules]
- when: on_success
variables:
DRIVER_NAME: vkms
GPU_VERSION: none
extends:
- .test-gl
- .test-rules
tags:
- kvm
script:
- ln -sf $CI_PROJECT_DIR/install /install
- mv install/bzImage /lava-files/bzImage
- mkdir -p /lib/modules
- mkdir -p $CI_PROJECT_DIR/results
- ln -sf $CI_PROJECT_DIR/results /results
- ./install/crosvm-runner.sh ./install/igt_runner.sh
needs:
- debian/x86_64_test-gl

Some files were not shown because too many files have changed in this diff.