github.com/torvalds/linux.git
author    Dave Airlie <airlied@redhat.com>  2020-11-04 03:55:11 +0300
committer Dave Airlie <airlied@redhat.com>  2020-11-04 04:49:10 +0300
commit    1cd260a7905e3ba2e5dfa39b110ad6cf8f466f49 (patch)
tree      bfb701fdb0fcb32f8e6e53fb1692361c8fa33a6a /drivers/gpu/drm/amd/amdgpu
parent    3cea11cd5e3b00d91caf0b4730194039b45c5891 (diff)
parent    4dfec0d1d7b9970f36931de714b379dbeaed83f8 (diff)
Merge tag 'drm-misc-next-2020-10-27' of git://anongit.freedesktop.org/drm/drm-misc into drm-next
drm-misc-next for 5.11:

UAPI Changes:
- doc: rules for EBUSY on non-blocking commits; requirements for fourcc modifiers; on parsing EDID
- fbdev/sbuslib: Remove unused FBIOSCURSOR32
- fourcc: deprecate DRM_FORMAT_MOD_NONE
- virtio: Support blob resources for memory allocations; Expose host-visible and cross-device features

Cross-subsystem Changes:
- devicetree: Add vendor prefix for Yes Optoelectronics, Shanghai Top Display Optoelectronics
- dma-buf: Add struct dma_buf_map that stores DMA pointer and I/O-memory flag; dma_buf_vmap()/vunmap() return address in dma_buf_map; Use struct_size() macro

Core Changes:
- atomic: Pass full state to CRTC atomic enable/disable; warn for EBUSY during non-blocking commits
- dp: Prepare for DP 2.0 DPCD
- dp_mst: Receive extended DPCD caps
- dma-buf: Documentation
- doc: Format modifiers; dma-buf-map; Cleanups
- fbdev: Don't use compat_alloc_user_space(); mark as orphaned
- fb-helper: Take lock in drm_fb_helper_restore_work_fb()
- gem: Convert implementation and drivers to GEM object functions, remove GEM callbacks from struct drm_driver (except gem_prime_mmap)
- panel: Cleanups
- pci: Add legacy infix to drm_irq_by_busid()
- sched: Avoid infinite waits in drm_sched_entity_destroy()
- switcheroo: Cleanups
- ttm: Remove AGP support; Don't modify caching during swapout; Major refactoring of the implementation and API that affects all depending drivers; Add ttm_bo_wait_ctx(); Add ttm_bo_pin()/unpin() in favor of TTM_PL_FLAG_NO_EVICT; Remove ttm_bo_create(); Remove fault_reserve_notify() callback; Push move() implementation into drivers; Remove TTM_PAGE_FLAG_WRITE; Replace caching flags with init-time cache setting; Push ttm_tt_bind() into drivers; Replace move_notify() with delete_mem_notify(); No overlapping memcpy(); no more ttm_set_populated()
- vram-helper: Fix BO top-down placement; TTM-related changes; Init GEM object functions with defaults; Default placement in system memory; Cleanups

Driver Changes:
- amdgpu: Use GEM object functions
- armada: Use GEM object functions
- aspeed: Configure output via sysfs; Init struct drm_driver with DRM_GEM_CMA_DRIVER_OPS
- ast: Reload LUT after FB format changes
- bridge: Add driver and DT bindings for anx7625; Cleanups
- bridge/dw-hdmi: Constify ops
- bridge/ti-sn65dsi86: Add retries for link training
- bridge/lvds-codec: Add support for regulator
- bridge/tc358768: Restore connector support; Cleanups
- display/ti,j721e-dss: Add DT properties assigned-clocks, assigned-clocks-parent and dma-coherent
- display/ti,am65x-dss: Add DT properties assigned-clocks, assigned-clocks-parent and dma-coherent
- etnaviv: Use GEM object functions
- exynos: Use GEM object functions
- fbdev: Cleanups and compiler fixes throughout framebuffer drivers
- fbdev/cirrusfb: Avoid division by 0
- gma500: Use GEM object functions; Fix double-free of connector; Cleanups
- hisilicon/hibmc: I2C-based DDC support; Use to_hibmc_drm_device(); Cleanups
- i915: Use GEM object functions
- imx/dcss: Init driver with DRM_GEM_CMA_DRIVER_OPS; Cleanups
- ingenic: Reset pixel clock when parent clock changes; support reserved memory; Alloc F0 and F1 DMA channels at once; Support different pixel formats; Revert support for cached mmap buffers on F0/F1; support 30-bit/24-bit/8-bit-palette modes
- komeda: Use DEFINE_SHOW_ATTRIBUTE
- mcde: Detect platform_get_irq() errors
- mediatek: Use GEM object functions
- msm: Use GEM object functions
- nouveau: Cleanups; TTM-related changes; Use GEM object functions
- omapdrm: Use GEM object functions
- panel: Add driver and DT bindings for Novatek nt36672a; Add driver and DT bindings for YTC700TLAG-05-201C; Add driver and DT bindings for TDO TL070WSH30; Cleanups
- panel/mantix: Fix reset; Fix deref of NULL pointer in mantix_get_modes()
- panel/otm8009a: Allow non-continuous dsi clock; Cleanups
- panel/rm68200: Allow non-continuous dsi clock; Fix mode to 50 FPS
- panfrost: Fix job timeout handling; Cleanups
- pl111: Use GEM object functions
- qxl: Cleanups; TTM-related changes; Pin new BOs with ttm_bo_init_reserved()
- radeon: Cleanups; TTM-related changes; Use GEM object functions
- rockchip: Use GEM object functions
- shmobile: Cleanups
- tegra: Use GEM object functions
- tidss: Set drm_plane_helper_funcs.prepare_fb
- tilcdc: Don't keep vblank interrupt enabled all the time
- tve200: Detect platform_get_irq() errors
- vc4: Use GEM object functions; Only register components once DSI is attached; Add Maxime as maintainer
- vgem: Use GEM object functions
- via: Simplify critical section in via_mem_alloc()
- virtgpu: Use GEM object functions
- virtio: Implement blob resources, host-visible and cross-device features; Support mapping of host-allocated resources; Use UUID API; Cleanups
- vkms: Use GEM object functions; Switch to SHMEM
- vmwgfx: TTM-related changes; Inline ttm_bo_swapout_all()
- xen: Use GEM object functions
- xlnx: Use GEM object functions

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20201027100936.GA4858@linux-uq9g
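The recurring driver change in this pull is the switch from struct drm_driver GEM callbacks to per-object callbacks; the amdgpu hunks below are one instance of it. As a hedged illustration only (the foo_* names are hypothetical, not from this patch), the conversion has this shape:

#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
#include <drm/drm_prime.h>

static void foo_gem_object_free(struct drm_gem_object *obj);
static int foo_gem_object_open(struct drm_gem_object *obj,
			       struct drm_file *file_priv);
static void foo_gem_object_close(struct drm_gem_object *obj,
				 struct drm_file *file_priv);

/* Callbacks now live on each GEM object... */
static const struct drm_gem_object_funcs foo_gem_object_funcs = {
	.free  = foo_gem_object_free,
	.open  = foo_gem_object_open,
	.close = foo_gem_object_close,
};

/* ...and are assigned at object-creation time... */
static void foo_gem_init(struct drm_gem_object *obj)
{
	obj->funcs = &foo_gem_object_funcs;
}

/* ...so struct drm_driver no longer carries them (only gem_prime_mmap stays): */
static struct drm_driver foo_driver = {
	.gem_prime_mmap = drm_gem_prime_mmap,
};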
Diffstat (limited to 'drivers/gpu/drm/amd/amdgpu')
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |    5
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c            |    2
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c       |    5
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c       |    8
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c       |    5
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c           |    6
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c           |   25
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h           |    5
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c           |   12
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c       |    2
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c        |   87
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h        |    5
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c           |  157
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c            |    9
-rw-r--r--  drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c      |    2
15 files changed, 186 insertions, 149 deletions
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 5da487b64a66..054a1c2d5054 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1479,7 +1479,7 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
}
}
- if (!amdgpu_ttm_tt_get_usermm(bo->tbo.ttm) && !bo->pin_count)
+ if (!amdgpu_ttm_tt_get_usermm(bo->tbo.ttm) && !bo->tbo.pin_count)
amdgpu_bo_fence(bo,
&avm->process_info->eviction_fence->base,
true);
@@ -1558,7 +1558,8 @@ int amdgpu_amdkfd_gpuvm_unmap_memory_from_gpu(
* required.
*/
if (mem->mapped_to_gpu_memory == 0 &&
- !amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm) && !mem->bo->pin_count)
+ !amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm) &&
+ !mem->bo->tbo.pin_count)
amdgpu_amdkfd_remove_eviction_fence(mem->bo,
process_info->eviction_fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 12598a4b5c78..d50b63a93d37 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -410,7 +410,7 @@ static int amdgpu_cs_bo_validate(struct amdgpu_cs_parser *p,
uint32_t domain;
int r;
- if (bo->pin_count)
+ if (bo->tbo.pin_count)
return 0;
/* Don't move this buffer if we have depleted our allowance
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 2d125b8b15ee..065937482239 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1319,6 +1319,7 @@ static int amdgpu_debugfs_evict_gtt(struct seq_file *m, void *data)
struct drm_info_node *node = (struct drm_info_node *)m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = drm_to_adev(dev);
+ struct ttm_resource_manager *man;
int r;
r = pm_runtime_get_sync(dev->dev);
@@ -1327,7 +1328,9 @@ static int amdgpu_debugfs_evict_gtt(struct seq_file *m, void *data)
return r;
}
- seq_printf(m, "(%d)\n", ttm_bo_evict_mm(&adev->mman.bdev, TTM_PL_TT));
+ man = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT);
+ r = ttm_resource_manager_evict_all(&adev->mman.bdev, man);
+ seq_printf(m, "(%d)\n", r);
pm_runtime_mark_last_busy(dev->dev);
pm_runtime_put_autosuspend(dev->dev);
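For reference, the ttm_bo_evict_mm() call removed above has no direct one-line replacement; eviction now goes through the per-type resource manager. A minimal sketch of the new sequence, assuming the 5.11-era TTM API used in this series:

#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_resource.h>

static int foo_evict_gtt(struct ttm_bo_device *bdev)
{
	/* Look up the manager for the memory type first... */
	struct ttm_resource_manager *man = ttm_manager_type(bdev, TTM_PL_TT);

	/* ...then evict everything it holds; returns 0 or a negative errno. */
	return ttm_resource_manager_evict_all(bdev, man);
}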
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 7cc7af2a6822..b25faaee6f0e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -132,10 +132,7 @@ static void amdgpu_display_unpin_work_func(struct work_struct *__work)
/* unpin of the old buffer */
r = amdgpu_bo_reserve(work->old_abo, true);
if (likely(r == 0)) {
- r = amdgpu_bo_unpin(work->old_abo);
- if (unlikely(r != 0)) {
- DRM_ERROR("failed to unpin buffer after flip\n");
- }
+ amdgpu_bo_unpin(work->old_abo);
amdgpu_bo_unreserve(work->old_abo);
} else
DRM_ERROR("failed to reserve buffer after flip\n");
@@ -249,8 +246,7 @@ pflip_cleanup:
}
unpin:
if (!adev->enable_virtual_display)
- if (unlikely(amdgpu_bo_unpin(new_abo) != 0))
- DRM_ERROR("failed to unpin new abo in error path\n");
+ amdgpu_bo_unpin(new_abo);
unreserve:
amdgpu_bo_unreserve(new_abo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 957934926b24..5b465ab774d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -281,7 +281,7 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
struct sg_table *sgt;
long r;
- if (!bo->pin_count) {
+ if (!bo->tbo.pin_count) {
/* move buffer into GTT or VRAM */
struct ttm_operation_ctx ctx = { false, false };
unsigned domains = AMDGPU_GEM_DOMAIN_GTT;
@@ -390,7 +390,8 @@ static int amdgpu_dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
if (unlikely(ret != 0))
return ret;
- if (!bo->pin_count && (bo->allowed_domains & AMDGPU_GEM_DOMAIN_GTT)) {
+ if (!bo->tbo.pin_count &&
+ (bo->allowed_domains & AMDGPU_GEM_DOMAIN_GTT)) {
amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_GTT);
ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 42d9748921f5..8b30915aa972 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -1520,19 +1520,13 @@ static struct drm_driver kms_driver = {
.lastclose = amdgpu_driver_lastclose_kms,
.irq_handler = amdgpu_irq_handler,
.ioctls = amdgpu_ioctls_kms,
- .gem_free_object_unlocked = amdgpu_gem_object_free,
- .gem_open_object = amdgpu_gem_object_open,
- .gem_close_object = amdgpu_gem_object_close,
.dumb_create = amdgpu_mode_dumb_create,
.dumb_map_offset = amdgpu_mode_dumb_mmap,
.fops = &amdgpu_driver_kms_fops,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
- .gem_prime_export = amdgpu_gem_prime_export,
.gem_prime_import = amdgpu_gem_prime_import,
- .gem_prime_vmap = amdgpu_gem_prime_vmap,
- .gem_prime_vunmap = amdgpu_gem_prime_vunmap,
.gem_prime_mmap = amdgpu_gem_prime_mmap,
.name = DRIVER_NAME,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 7e8265da9f25..8ea6fc745769 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -36,9 +36,12 @@
#include "amdgpu.h"
#include "amdgpu_display.h"
+#include "amdgpu_dma_buf.h"
#include "amdgpu_xgmi.h"
-void amdgpu_gem_object_free(struct drm_gem_object *gobj)
+static const struct drm_gem_object_funcs amdgpu_gem_object_funcs;
+
+static void amdgpu_gem_object_free(struct drm_gem_object *gobj)
{
struct amdgpu_bo *robj = gem_to_amdgpu_bo(gobj);
@@ -87,6 +90,7 @@ retry:
return r;
}
*obj = &bo->tbo.base;
+ (*obj)->funcs = &amdgpu_gem_object_funcs;
return 0;
}
@@ -119,8 +123,8 @@ void amdgpu_gem_force_release(struct amdgpu_device *adev)
* Call from drm_gem_handle_create which appear in both new and open ioctl
* case.
*/
-int amdgpu_gem_object_open(struct drm_gem_object *obj,
- struct drm_file *file_priv)
+static int amdgpu_gem_object_open(struct drm_gem_object *obj,
+ struct drm_file *file_priv)
{
struct amdgpu_bo *abo = gem_to_amdgpu_bo(obj);
struct amdgpu_device *adev = amdgpu_ttm_adev(abo->tbo.bdev);
@@ -152,8 +156,8 @@ int amdgpu_gem_object_open(struct drm_gem_object *obj,
return 0;
}
-void amdgpu_gem_object_close(struct drm_gem_object *obj,
- struct drm_file *file_priv)
+static void amdgpu_gem_object_close(struct drm_gem_object *obj,
+ struct drm_file *file_priv)
{
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
@@ -211,6 +215,15 @@ out_unlock:
ttm_eu_backoff_reservation(&ticket, &list);
}
+static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
+ .free = amdgpu_gem_object_free,
+ .open = amdgpu_gem_object_open,
+ .close = amdgpu_gem_object_close,
+ .export = amdgpu_gem_prime_export,
+ .vmap = amdgpu_gem_prime_vmap,
+ .vunmap = amdgpu_gem_prime_vunmap,
+};
+
/*
* GEM ioctls.
*/
@@ -870,7 +883,7 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
seq_printf(m, "\t0x%08x: %12ld byte %s",
id, amdgpu_bo_size(bo), placement);
- pin_count = READ_ONCE(bo->pin_count);
+ pin_count = READ_ONCE(bo->tbo.pin_count);
if (pin_count)
seq_printf(m, " pin count %d", pin_count);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
index e0f025dd1b14..637bf51dbf06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
@@ -33,11 +33,6 @@
#define AMDGPU_GEM_DOMAIN_MAX 0x3
#define gem_to_amdgpu_bo(gobj) container_of((gobj), struct amdgpu_bo, tbo.base)
-void amdgpu_gem_object_free(struct drm_gem_object *obj);
-int amdgpu_gem_object_open(struct drm_gem_object *obj,
- struct drm_file *file_priv);
-void amdgpu_gem_object_close(struct drm_gem_object *obj,
- struct drm_file *file_priv);
unsigned long amdgpu_gem_timeout(uint64_t timeout_ns);
/*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 36604d751d62..cc86f431a3d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -45,12 +45,10 @@ void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
uint64_t *addr, uint64_t *flags)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
- struct ttm_dma_tt *ttm;
switch (bo->tbo.mem.mem_type) {
case TTM_PL_TT:
- ttm = container_of(bo->tbo.ttm, struct ttm_dma_tt, ttm);
- *addr = ttm->dma_address[0];
+ *addr = bo->tbo.ttm->dma_address[0];
break;
case TTM_PL_VRAM:
*addr = amdgpu_bo_gpu_offset(bo);
@@ -122,16 +120,14 @@ int amdgpu_gmc_set_pte_pde(struct amdgpu_device *adev, void *cpu_pt_addr,
uint64_t amdgpu_gmc_agp_addr(struct ttm_buffer_object *bo)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
- struct ttm_dma_tt *ttm;
- if (bo->num_pages != 1 || bo->ttm->caching_state == tt_cached)
+ if (bo->num_pages != 1 || bo->ttm->caching == ttm_cached)
return AMDGPU_BO_INVALID_OFFSET;
- ttm = container_of(bo->ttm, struct ttm_dma_tt, ttm);
- if (ttm->dma_address[0] + PAGE_SIZE >= adev->gmc.agp_size)
+ if (bo->ttm->dma_address[0] + PAGE_SIZE >= adev->gmc.agp_size)
return AMDGPU_BO_INVALID_OFFSET;
- return adev->gmc.agp_start + ttm->dma_address[0];
+ return adev->gmc.agp_start + bo->ttm->dma_address[0];
}
/**
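The container_of() dance deleted above reflects struct ttm_dma_tt being folded into struct ttm_tt in this cycle. A sketch under that assumption (foo_* is hypothetical):

#include <linux/dma-mapping.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_tt.h>

static dma_addr_t foo_first_page_dma_addr(struct ttm_buffer_object *bo)
{
	/* Before: the wrapper had to be recovered first:
	 *   struct ttm_dma_tt *dma = container_of(bo->ttm, struct ttm_dma_tt, ttm);
	 *   return dma->dma_address[0];
	 * After: dma_address[] lives directly on struct ttm_tt.
	 */
	return bo->ttm->dma_address[0];
}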
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index f203e4a6a3f2..1721739def84 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -136,7 +136,7 @@ void amdgpu_gtt_mgr_fini(struct amdgpu_device *adev)
ttm_resource_manager_set_used(man, false);
- ret = ttm_resource_manager_force_list_clean(&adev->mman.bdev, man);
+ ret = ttm_resource_manager_evict_all(&adev->mman.bdev, man);
if (ret)
return;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index ac043baac05d..1aa516429c80 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -78,7 +78,7 @@ static void amdgpu_bo_destroy(struct ttm_buffer_object *tbo)
struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo);
- if (bo->pin_count > 0)
+ if (bo->tbo.pin_count > 0)
amdgpu_bo_subtract_pin_size(bo);
amdgpu_bo_kunmap(bo);
@@ -137,7 +137,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].mem_type = TTM_PL_VRAM;
- places[c].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED;
+ places[c].flags = 0;
if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)
places[c].lpfn = visible_pfn;
@@ -154,11 +154,6 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].lpfn = 0;
places[c].mem_type = TTM_PL_TT;
places[c].flags = 0;
- if (flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC)
- places[c].flags |= TTM_PL_FLAG_WC |
- TTM_PL_FLAG_UNCACHED;
- else
- places[c].flags |= TTM_PL_FLAG_CACHED;
c++;
}
@@ -167,11 +162,6 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].lpfn = 0;
places[c].mem_type = TTM_PL_SYSTEM;
places[c].flags = 0;
- if (flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC)
- places[c].flags |= TTM_PL_FLAG_WC |
- TTM_PL_FLAG_UNCACHED;
- else
- places[c].flags |= TTM_PL_FLAG_CACHED;
c++;
}
@@ -179,7 +169,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].mem_type = AMDGPU_PL_GDS;
- places[c].flags = TTM_PL_FLAG_UNCACHED;
+ places[c].flags = 0;
c++;
}
@@ -187,7 +177,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].mem_type = AMDGPU_PL_GWS;
- places[c].flags = TTM_PL_FLAG_UNCACHED;
+ places[c].flags = 0;
c++;
}
@@ -195,7 +185,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].mem_type = AMDGPU_PL_OA;
- places[c].flags = TTM_PL_FLAG_UNCACHED;
+ places[c].flags = 0;
c++;
}
@@ -203,7 +193,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
places[c].fpfn = 0;
places[c].lpfn = 0;
places[c].mem_type = TTM_PL_SYSTEM;
- places[c].flags = TTM_PL_MASK_CACHING;
+ places[c].flags = 0;
c++;
}
@@ -721,7 +711,7 @@ int amdgpu_bo_validate(struct amdgpu_bo *bo)
uint32_t domain;
int r;
- if (bo->pin_count)
+ if (bo->tbo.pin_count)
return 0;
domain = bo->preferred_domains;
@@ -918,13 +908,13 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
*/
domain = amdgpu_bo_get_preferred_pin_domain(adev, domain);
- if (bo->pin_count) {
+ if (bo->tbo.pin_count) {
uint32_t mem_type = bo->tbo.mem.mem_type;
if (!(domain & amdgpu_mem_type_to_domain(mem_type)))
return -EINVAL;
- bo->pin_count++;
+ ttm_bo_pin(&bo->tbo);
if (max_offset != 0) {
u64 domain_start = amdgpu_ttm_domain_start(adev,
@@ -955,7 +945,6 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
if (!bo->placements[i].lpfn ||
(lpfn && lpfn < bo->placements[i].lpfn))
bo->placements[i].lpfn = lpfn;
- bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
}
r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
@@ -964,7 +953,7 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
goto error;
}
- bo->pin_count = 1;
+ ttm_bo_pin(&bo->tbo);
domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
if (domain == AMDGPU_GEM_DOMAIN_VRAM) {
@@ -1006,34 +995,16 @@ int amdgpu_bo_pin(struct amdgpu_bo *bo, u32 domain)
* Returns:
* 0 for success or a negative error code on failure.
*/
-int amdgpu_bo_unpin(struct amdgpu_bo *bo)
+void amdgpu_bo_unpin(struct amdgpu_bo *bo)
{
- struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
- struct ttm_operation_ctx ctx = { false, false };
- int r, i;
-
- if (WARN_ON_ONCE(!bo->pin_count)) {
- dev_warn(adev->dev, "%p unpin not necessary\n", bo);
- return 0;
- }
- bo->pin_count--;
- if (bo->pin_count)
- return 0;
+ ttm_bo_unpin(&bo->tbo);
+ if (bo->tbo.pin_count)
+ return;
amdgpu_bo_subtract_pin_size(bo);
if (bo->tbo.base.import_attach)
dma_buf_unpin(bo->tbo.base.import_attach);
-
- for (i = 0; i < bo->placement.num_placement; i++) {
- bo->placements[i].lpfn = 0;
- bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
- }
- r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
- if (unlikely(r))
- dev_err(adev->dev, "%p validate failed for unpin\n", bo);
-
- return r;
}
/**
@@ -1048,6 +1019,8 @@ int amdgpu_bo_unpin(struct amdgpu_bo *bo)
*/
int amdgpu_bo_evict_vram(struct amdgpu_device *adev)
{
+ struct ttm_resource_manager *man;
+
/* late 2.6.33 fix IGP hibernate - we need pm ops to do this correct */
#ifndef CONFIG_HIBERNATION
if (adev->flags & AMD_IS_APU) {
@@ -1055,7 +1028,9 @@ int amdgpu_bo_evict_vram(struct amdgpu_device *adev)
return 0;
}
#endif
- return ttm_bo_evict_mm(&adev->mman.bdev, TTM_PL_VRAM);
+
+ man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
+ return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
}
static const char *amdgpu_vram_names[] = {
@@ -1360,19 +1335,14 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
* Returns:
* 0 for success or a negative error code on failure.
*/
-int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
+vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
struct ttm_operation_ctx ctx = { false, false };
- struct amdgpu_bo *abo;
+ struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
unsigned long offset, size;
int r;
- if (!amdgpu_bo_is_amdgpu_bo(bo))
- return 0;
-
- abo = ttm_to_amdgpu_bo(bo);
-
/* Remember that this BO was accessed by the CPU */
abo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
@@ -1385,8 +1355,8 @@ int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
return 0;
/* Can't move a pinned BO to visible VRAM */
- if (abo->pin_count > 0)
- return -EINVAL;
+ if (abo->tbo.pin_count > 0)
+ return VM_FAULT_SIGBUS;
/* hurrah the memory is not visible ! */
atomic64_inc(&adev->num_vram_cpu_page_faults);
@@ -1398,15 +1368,18 @@ int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
abo->placement.busy_placement = &abo->placements[1];
r = ttm_bo_validate(bo, &abo->placement, &ctx);
- if (unlikely(r != 0))
- return r;
+ if (unlikely(r == -EBUSY || r == -ERESTARTSYS))
+ return VM_FAULT_NOPAGE;
+ else if (unlikely(r))
+ return VM_FAULT_SIGBUS;
offset = bo->mem.start << PAGE_SHIFT;
/* this should never happen */
if (bo->mem.mem_type == TTM_PL_VRAM &&
(offset + size) > adev->gmc.visible_vram_size)
- return -EINVAL;
+ return VM_FAULT_SIGBUS;
+ ttm_bo_move_to_lru_tail_unlocked(bo);
return 0;
}
@@ -1489,7 +1462,7 @@ u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo)
{
WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_SYSTEM);
WARN_ON_ONCE(!dma_resv_is_locked(bo->tbo.base.resv) &&
- !bo->pin_count && bo->tbo.type != ttm_bo_type_kernel);
+ !bo->tbo.pin_count && bo->tbo.type != ttm_bo_type_kernel);
WARN_ON_ONCE(bo->tbo.mem.start == AMDGPU_BO_INVALID_OFFSET);
WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_VRAM &&
!(bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS));
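Pinning above no longer sets TTM_PL_FLAG_NO_EVICT in the placements; TTM itself now reference-counts pins in tbo.pin_count. A minimal sketch of the 5.11 helpers (per the TTM contract, the caller must hold the BO's dma_resv lock):

#include <drm/ttm/ttm_bo_api.h>

static void foo_hold_bo(struct ttm_buffer_object *bo)
{
	ttm_bo_pin(bo);		/* ++bo->pin_count; pinned BOs are never evicted */
}

static void foo_release_bo(struct ttm_buffer_object *bo)
{
	ttm_bo_unpin(bo);	/* --bo->pin_count */
	if (!bo->pin_count) {
		/* Last pin dropped: driver accounting goes here, as
		 * amdgpu_bo_unpin() does with its pin-size counters. */
	}
}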
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 5ddb6cf96030..132e5f955180 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -89,7 +89,6 @@ struct amdgpu_bo {
struct ttm_buffer_object tbo;
struct ttm_bo_kmap_obj kmap;
u64 flags;
- unsigned pin_count;
u64 tiling_flags;
u64 metadata_flags;
void *metadata;
@@ -267,7 +266,7 @@ void amdgpu_bo_unref(struct amdgpu_bo **bo);
int amdgpu_bo_pin(struct amdgpu_bo *bo, u32 domain);
int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
u64 min_offset, u64 max_offset);
-int amdgpu_bo_unpin(struct amdgpu_bo *bo);
+void amdgpu_bo_unpin(struct amdgpu_bo *bo);
int amdgpu_bo_evict_vram(struct amdgpu_device *adev);
int amdgpu_bo_init(struct amdgpu_device *adev);
int amdgpu_bo_late_init(struct amdgpu_device *adev);
@@ -285,7 +284,7 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
bool evict,
struct ttm_resource *new_mem);
void amdgpu_bo_release_notify(struct ttm_buffer_object *bo);
-int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
+vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,
bool shared);
int amdgpu_bo_sync_wait_resv(struct amdgpu_device *adev, struct dma_resv *resv,
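A related consequence visible in the amdgpu_object.c hunks above: the TTM_PL_FLAG_WC/UNCACHED/CACHED placement bits are gone, and caching is instead fixed when the TT backend is created. A sketch assuming the 5.11 ttm_sg_tt_init() signature (foo_* is hypothetical):

#include <linux/slab.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_tt.h>

static struct ttm_tt *foo_ttm_tt_create(struct ttm_buffer_object *bo,
					uint32_t page_flags)
{
	struct ttm_tt *tt = kzalloc(sizeof(*tt), GFP_KERNEL);
	/* Caching is chosen once, at init time, not via placement flags. */
	enum ttm_caching caching = ttm_write_combined;

	if (!tt)
		return NULL;
	if (ttm_sg_tt_init(tt, bo, page_flags, caching)) {
		kfree(tt);
		return NULL;
	}
	return tt;
}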
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 8039d2399584..ddb1c8e9eea4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -66,6 +66,8 @@
static int amdgpu_ttm_backend_bind(struct ttm_bo_device *bdev,
struct ttm_tt *ttm,
struct ttm_resource *bo_mem);
+static void amdgpu_ttm_backend_unbind(struct ttm_bo_device *bdev,
+ struct ttm_tt *ttm);
static int amdgpu_ttm_init_on_chip(struct amdgpu_device *adev,
unsigned int type,
@@ -92,7 +94,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
.fpfn = 0,
.lpfn = 0,
.mem_type = TTM_PL_SYSTEM,
- .flags = TTM_PL_MASK_CACHING
+ .flags = 0
};
/* Don't handle scatter gather BOs */
@@ -292,11 +294,9 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
cpu_addr = &job->ibs[0].ptr[num_dw];
if (mem->mem_type == TTM_PL_TT) {
- struct ttm_dma_tt *dma;
dma_addr_t *dma_address;
- dma = container_of(bo->ttm, struct ttm_dma_tt, ttm);
- dma_address = &dma->dma_address[offset >> PAGE_SHIFT];
+ dma_address = &bo->ttm->dma_address[offset >> PAGE_SHIFT];
r = amdgpu_gart_map(adev, 0, num_pages, dma_address, flags,
cpu_addr);
if (r)
@@ -538,19 +538,13 @@ static int amdgpu_move_vram_ram(struct ttm_buffer_object *bo, bool evict,
placements.fpfn = 0;
placements.lpfn = 0;
placements.mem_type = TTM_PL_TT;
- placements.flags = TTM_PL_MASK_CACHING;
+ placements.flags = 0;
r = ttm_bo_mem_space(bo, &placement, &tmp_mem, ctx);
if (unlikely(r)) {
pr_err("Failed to find GTT space for blit from VRAM\n");
return r;
}
- /* set caching flags */
- r = ttm_tt_set_placement_caching(bo->ttm, tmp_mem.placement);
- if (unlikely(r)) {
- goto out_cleanup;
- }
-
r = ttm_tt_populate(bo->bdev, bo->ttm, ctx);
if (unlikely(r))
goto out_cleanup;
@@ -567,8 +561,13 @@ static int amdgpu_move_vram_ram(struct ttm_buffer_object *bo, bool evict,
goto out_cleanup;
}
- /* move BO (in tmp_mem) to new_mem */
- r = ttm_bo_move_ttm(bo, ctx, new_mem);
+ r = ttm_bo_wait_ctx(bo, ctx);
+ if (unlikely(r))
+ goto out_cleanup;
+
+ amdgpu_ttm_backend_unbind(bo->bdev, bo->ttm);
+ ttm_resource_free(bo, &bo->mem);
+ ttm_bo_assign_mem(bo, new_mem);
out_cleanup:
ttm_resource_free(bo, &tmp_mem);
return r;
@@ -599,7 +598,7 @@ static int amdgpu_move_ram_vram(struct ttm_buffer_object *bo, bool evict,
placements.fpfn = 0;
placements.lpfn = 0;
placements.mem_type = TTM_PL_TT;
- placements.flags = TTM_PL_MASK_CACHING;
+ placements.flags = 0;
r = ttm_bo_mem_space(bo, &placement, &tmp_mem, ctx);
if (unlikely(r)) {
pr_err("Failed to find GTT space for blit to VRAM\n");
@@ -607,11 +606,16 @@ static int amdgpu_move_ram_vram(struct ttm_buffer_object *bo, bool evict,
}
/* move/bind old memory to GTT space */
- r = ttm_bo_move_ttm(bo, ctx, &tmp_mem);
+ r = ttm_tt_populate(bo->bdev, bo->ttm, ctx);
+ if (unlikely(r))
+ return r;
+
+ r = amdgpu_ttm_backend_bind(bo->bdev, bo->ttm, &tmp_mem);
if (unlikely(r)) {
goto out_cleanup;
}
+ ttm_bo_assign_mem(bo, &tmp_mem);
/* copy to VRAM */
r = amdgpu_move_blit(bo, evict, new_mem, old_mem);
if (unlikely(r)) {
@@ -660,9 +664,17 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
struct ttm_resource *old_mem = &bo->mem;
int r;
+ if (new_mem->mem_type == TTM_PL_TT) {
+ r = amdgpu_ttm_backend_bind(bo->bdev, bo->ttm, new_mem);
+ if (r)
+ return r;
+ }
+
+ amdgpu_bo_move_notify(bo, evict, new_mem);
+
/* Can't move a pinned BO */
abo = ttm_to_amdgpu_bo(bo);
- if (WARN_ON_ONCE(abo->pin_count > 0))
+ if (WARN_ON_ONCE(abo->tbo.pin_count > 0))
return -EINVAL;
adev = amdgpu_ttm_adev(bo->bdev);
@@ -671,14 +683,24 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
ttm_bo_move_null(bo, new_mem);
return 0;
}
- if ((old_mem->mem_type == TTM_PL_TT &&
- new_mem->mem_type == TTM_PL_SYSTEM) ||
- (old_mem->mem_type == TTM_PL_SYSTEM &&
- new_mem->mem_type == TTM_PL_TT)) {
- /* bind is enough */
+ if (old_mem->mem_type == TTM_PL_SYSTEM &&
+ new_mem->mem_type == TTM_PL_TT) {
ttm_bo_move_null(bo, new_mem);
return 0;
}
+
+ if (old_mem->mem_type == TTM_PL_TT &&
+ new_mem->mem_type == TTM_PL_SYSTEM) {
+ r = ttm_bo_wait_ctx(bo, ctx);
+ if (r)
+ goto fail;
+
+ amdgpu_ttm_backend_unbind(bo->bdev, bo->ttm);
+ ttm_resource_free(bo, &bo->mem);
+ ttm_bo_assign_mem(bo, new_mem);
+ return 0;
+ }
+
if (old_mem->mem_type == AMDGPU_PL_GDS ||
old_mem->mem_type == AMDGPU_PL_GWS ||
old_mem->mem_type == AMDGPU_PL_OA ||
@@ -712,12 +734,12 @@ memcpy:
if (!amdgpu_mem_visible(adev, old_mem) ||
!amdgpu_mem_visible(adev, new_mem)) {
pr_err("Move buffer fallback to memcpy unavailable\n");
- return r;
+ goto fail;
}
r = ttm_bo_move_memcpy(bo, ctx, new_mem);
if (r)
- return r;
+ goto fail;
}
if (bo->type == ttm_bo_type_device &&
@@ -732,6 +754,11 @@ memcpy:
/* update statistics */
atomic64_add((u64)bo->num_pages << PAGE_SHIFT, &adev->num_bytes_moved);
return 0;
+fail:
+ swap(*new_mem, bo->mem);
+ amdgpu_bo_move_notify(bo, false, new_mem);
+ swap(*new_mem, bo->mem);
+ return r;
}
/**
@@ -767,6 +794,7 @@ static int amdgpu_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_reso
mem->bus.offset += adev->gmc.aper_base;
mem->bus.is_iomem = true;
+ mem->bus.caching = ttm_write_combined;
break;
default:
return -EINVAL;
@@ -811,7 +839,7 @@ uint64_t amdgpu_ttm_domain_start(struct amdgpu_device *adev, uint32_t type)
* TTM backend functions.
*/
struct amdgpu_ttm_tt {
- struct ttm_dma_tt ttm;
+ struct ttm_tt ttm;
struct drm_gem_object *gobj;
u64 offset;
uint64_t userptr;
@@ -943,7 +971,7 @@ bool amdgpu_ttm_tt_get_user_pages_done(struct ttm_tt *ttm)
if (!gtt || !gtt->userptr)
return false;
- DRM_DEBUG_DRIVER("user_pages_done 0x%llx pages 0x%lx\n",
+ DRM_DEBUG_DRIVER("user_pages_done 0x%llx pages 0x%x\n",
gtt->userptr, ttm->num_pages);
WARN_ONCE(!gtt->range || !gtt->range->hmm_pfns,
@@ -1095,7 +1123,7 @@ static int amdgpu_ttm_gart_bind(struct amdgpu_device *adev,
gart_bind_fail:
if (r)
- DRM_ERROR("failed to bind %lu pages at 0x%08llX\n",
+ DRM_ERROR("failed to bind %u pages at 0x%08llX\n",
ttm->num_pages, gtt->offset);
return r;
@@ -1130,7 +1158,7 @@ static int amdgpu_ttm_backend_bind(struct ttm_bo_device *bdev,
}
}
if (!ttm->num_pages) {
- WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n",
+ WARN(1, "nothing to bind %u pages for mreg %p back %p!\n",
ttm->num_pages, bo_mem, ttm);
}
@@ -1153,7 +1181,7 @@ static int amdgpu_ttm_backend_bind(struct ttm_bo_device *bdev,
ttm->pages, gtt->ttm.dma_address, flags);
if (r)
- DRM_ERROR("failed to bind %lu pages at 0x%08llX\n",
+ DRM_ERROR("failed to bind %u pages at 0x%08llX\n",
ttm->num_pages, gtt->offset);
gtt->bound = true;
return r;
@@ -1267,8 +1295,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_bo_device *bdev,
/* unbind shouldn't be done for GDS/GWS/OA in ttm_bo_clean_mm */
r = amdgpu_gart_unbind(adev, gtt->offset, ttm->num_pages);
if (r)
- DRM_ERROR("failed to unbind %lu pages at 0x%08llX\n",
- gtt->ttm.ttm.num_pages, gtt->offset);
+ DRM_ERROR("failed to unbind %u pages at 0x%08llX\n",
+ gtt->ttm.num_pages, gtt->offset);
gtt->bound = false;
}
@@ -1282,7 +1310,7 @@ static void amdgpu_ttm_backend_destroy(struct ttm_bo_device *bdev,
if (gtt->usertask)
put_task_struct(gtt->usertask);
- ttm_dma_tt_fini(&gtt->ttm);
+ ttm_tt_fini(&gtt->ttm);
kfree(gtt);
}
@@ -1296,7 +1324,9 @@ static void amdgpu_ttm_backend_destroy(struct ttm_bo_device *bdev,
static struct ttm_tt *amdgpu_ttm_tt_create(struct ttm_buffer_object *bo,
uint32_t page_flags)
{
+ struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
struct amdgpu_ttm_tt *gtt;
+ enum ttm_caching caching;
gtt = kzalloc(sizeof(struct amdgpu_ttm_tt), GFP_KERNEL);
if (gtt == NULL) {
@@ -1304,12 +1334,17 @@ static struct ttm_tt *amdgpu_ttm_tt_create(struct ttm_buffer_object *bo,
}
gtt->gobj = &bo->base;
+ if (abo->flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC)
+ caching = ttm_write_combined;
+ else
+ caching = ttm_cached;
+
/* allocate space for the uninitialized page entries */
- if (ttm_sg_tt_init(&gtt->ttm, bo, page_flags)) {
+ if (ttm_sg_tt_init(&gtt->ttm, bo, page_flags, caching)) {
kfree(gtt);
return NULL;
}
- return &gtt->ttm.ttm;
+ return &gtt->ttm;
}
/**
@@ -1332,7 +1367,6 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
return -ENOMEM;
ttm->page_flags |= TTM_PAGE_FLAG_SG;
- ttm_tt_set_populated(ttm);
return 0;
}
@@ -1352,7 +1386,6 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
gtt->ttm.dma_address,
ttm->num_pages);
- ttm_tt_set_populated(ttm);
return 0;
}
@@ -1478,7 +1511,7 @@ bool amdgpu_ttm_tt_affect_userptr(struct ttm_tt *ttm, unsigned long start,
/* Return false if no part of the ttm_tt object lies within
* the range
*/
- size = (unsigned long)gtt->ttm.ttm.num_pages * PAGE_SIZE;
+ size = (unsigned long)gtt->ttm.num_pages * PAGE_SIZE;
if (gtt->userptr > end || gtt->userptr + size <= start)
return false;
@@ -1529,7 +1562,7 @@ uint64_t amdgpu_ttm_tt_pde_flags(struct ttm_tt *ttm, struct ttm_resource *mem)
if (mem && mem->mem_type == TTM_PL_TT) {
flags |= AMDGPU_PTE_SYSTEM;
- if (ttm->caching_state == tt_cached)
+ if (ttm->caching == ttm_cached)
flags |= AMDGPU_PTE_SNOOPED;
}
@@ -1699,20 +1732,23 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
return ret;
}
+static void
+amdgpu_bo_delete_mem_notify(struct ttm_buffer_object *bo)
+{
+ amdgpu_bo_move_notify(bo, false, NULL);
+}
+
static struct ttm_bo_driver amdgpu_bo_driver = {
.ttm_tt_create = &amdgpu_ttm_tt_create,
.ttm_tt_populate = &amdgpu_ttm_tt_populate,
.ttm_tt_unpopulate = &amdgpu_ttm_tt_unpopulate,
- .ttm_tt_bind = &amdgpu_ttm_backend_bind,
- .ttm_tt_unbind = &amdgpu_ttm_backend_unbind,
.ttm_tt_destroy = &amdgpu_ttm_backend_destroy,
.eviction_valuable = amdgpu_ttm_bo_eviction_valuable,
.evict_flags = &amdgpu_evict_flags,
.move = &amdgpu_bo_move,
.verify_access = &amdgpu_verify_access,
- .move_notify = &amdgpu_bo_move_notify,
+ .delete_mem_notify = &amdgpu_bo_delete_mem_notify,
.release_notify = &amdgpu_bo_release_notify,
- .fault_reserve_notify = &amdgpu_bo_fault_reserve_notify,
.io_mem_reserve = &amdgpu_ttm_io_mem_reserve,
.io_mem_pfn = amdgpu_ttm_io_mem_pfn,
.access_memory = &amdgpu_ttm_access_memory,
@@ -2092,15 +2128,48 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
adev->mman.buffer_funcs_enabled = enable;
}
+static vm_fault_t amdgpu_ttm_fault(struct vm_fault *vmf)
+{
+ struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
+ vm_fault_t ret;
+
+ ret = ttm_bo_vm_reserve(bo, vmf);
+ if (ret)
+ return ret;
+
+ ret = amdgpu_bo_fault_reserve_notify(bo);
+ if (ret)
+ goto unlock;
+
+ ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
+ TTM_BO_VM_NUM_PREFAULT, 1);
+ if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+ return ret;
+
+unlock:
+ dma_resv_unlock(bo->base.resv);
+ return ret;
+}
+
+static struct vm_operations_struct amdgpu_ttm_vm_ops = {
+ .fault = amdgpu_ttm_fault,
+ .open = ttm_bo_vm_open,
+ .close = ttm_bo_vm_close,
+ .access = ttm_bo_vm_access
+};
+
int amdgpu_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *file_priv = filp->private_data;
struct amdgpu_device *adev = drm_to_adev(file_priv->minor->dev);
+ int r;
- if (adev == NULL)
- return -EINVAL;
+ r = ttm_bo_mmap(filp, vma, &adev->mman.bdev);
+ if (unlikely(r != 0))
+ return r;
- return ttm_bo_mmap(filp, vma, &adev->mman.bdev);
+ vma->vm_ops = &amdgpu_ttm_vm_ops;
+ return 0;
}
int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
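With fault_reserve_notify() removed from struct ttm_bo_driver, every TTM driver now wraps the ttm_bo_vm_*() helpers in its own vm_operations_struct, as amdgpu does above. The generic shape, sketched with hypothetical foo_* names:

#include <linux/mm.h>
#include <linux/dma-resv.h>
#include <drm/ttm/ttm_bo_api.h>

static vm_fault_t foo_ttm_fault(struct vm_fault *vmf)
{
	struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
	vm_fault_t ret;

	ret = ttm_bo_vm_reserve(bo, vmf);	/* takes the resv lock (or retries) */
	if (ret)
		return ret;

	/* Driver work that previously lived in fault_reserve_notify() goes here. */

	ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
				       TTM_BO_VM_NUM_PREFAULT, 1);
	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
		return ret;	/* the helper already dropped the lock */

	dma_resv_unlock(bo->base.resv);
	return ret;
}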
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index df110afa97bf..38b59a4fc04c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -609,7 +609,7 @@ void amdgpu_vm_del_from_lru_notify(struct ttm_buffer_object *bo)
if (!amdgpu_bo_is_amdgpu_bo(bo))
return;
- if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT)
+ if (bo->pin_count)
return;
abo = ttm_to_amdgpu_bo(bo);
@@ -1790,7 +1790,6 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va,
resv = vm->root.base.bo->tbo.base.resv;
} else {
struct drm_gem_object *obj = &bo->tbo.base;
- struct ttm_dma_tt *ttm;
resv = bo->tbo.base.resv;
if (obj->import_attach && bo_va->is_xgmi) {
@@ -1803,10 +1802,8 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va,
}
mem = &bo->tbo.mem;
nodes = mem->mm_node;
- if (mem->mem_type == TTM_PL_TT) {
- ttm = container_of(bo->tbo.ttm, struct ttm_dma_tt, ttm);
- pages_addr = ttm->dma_address;
- }
+ if (mem->mem_type == TTM_PL_TT)
+ pages_addr = bo->tbo.ttm->dma_address;
}
if (bo) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 01c1171afbe0..7747be644dd0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -212,7 +212,7 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
ttm_resource_manager_set_used(man, false);
- ret = ttm_resource_manager_force_list_clean(&adev->mman.bdev, man);
+ ret = ttm_resource_manager_evict_all(&adev->mman.bdev, man);
if (ret)
return;