If we want to allow submitting multiple pieces of work to the GPU at once while still requiring CPU synchronization, we'll need to track all past fence cycles associated with a resource alongside the current one. To solve this, the concept of chaining fences has been introduced: fences from past usages can be chained to the latest fence, which will then recursively forward operations to the chained fences.
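A minimal sketch of the chaining idea (the class and member names here are illustrative, not Skyline's exact implementation):

```cpp
#include <memory>
#include <mutex>
#include <vector>

// Illustrative sketch of fence chaining: waiting on the latest cycle
// recursively waits on every cycle chained before it
class FenceCycle {
  public:
    void ChainCycle(std::shared_ptr<FenceCycle> cycle) {
        std::scoped_lock lock{mutex};
        chainedCycles.push_back(std::move(cycle));
    }

    void Wait() {
        // Forward the wait to all past cycles first so every prior usage
        // of the resource is known to have completed
        std::scoped_lock lock{mutex};
        for (auto &cycle : chainedCycles)
            cycle->Wait();
        WaitOnFence();
    }

  private:
    void WaitOnFence() { /* Wait on this cycle's own fence, e.g. vkWaitForFences */ }

    std::mutex mutex;
    std::vector<std::shared_ptr<FenceCycle>> chainedCycles;
};
```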
This change also mandates a move away from `FenceCycleDependency`, as it would prevent fences from concurrently locking the same resources, which is required for chaining to work: two fences being chained fundamentally means they're locking the same resources. `AtomicForwardList` is therefore used as the new container.
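For illustration, a stripped-down lock-free forward list along these lines might look like the following; this is a sketch of the general technique, not the actual `AtomicForwardList`, and it omits removal and cleanup:

```cpp
#include <atomic>
#include <utility>

// Minimal lock-free singly-linked list: Append() may be called concurrently
// from multiple threads (e.g. multiple fences tracking the same resource)
template<typename T>
class AtomicForwardListSketch {
    struct Node {
        T item;
        Node *next;
    };
    std::atomic<Node *> head{nullptr};

  public:
    void Append(T item) {
        auto *node{new Node{std::move(item), head.load(std::memory_order_relaxed)}};
        // Retry the CAS until the new node is successfully linked at the head
        while (!head.compare_exchange_weak(node->next, node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed));
    }

    template<typename F>
    void ForEach(F &&callback) {
        for (auto *node{head.load(std::memory_order_acquire)}; node; node = node->next)
            callback(node->item);
    }
};
```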
A `Buffer` class was created to hold any generic Vulkan buffer object with `span` semantics; `StagingBuffer` was implemented atop it as a wrapper around `Buffer` that inherits from `FenceCycleDependency` and can be used as such.
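A rough outline of how these classes might fit together, assuming a host-mapped allocation behind the span (member names are illustrative, not the actual class layout):

```cpp
#include <cstddef>
#include <span>
#include <vulkan/vulkan.h>

// Sketch only: Buffer exposes its mapped backing memory through std::span,
// StagingBuffer adds fence tracking so it stays alive until the GPU is done with it
struct FenceCycleDependency { virtual ~FenceCycleDependency() = default; };

class Buffer {
  public:
    VkBuffer vkBuffer{};
    std::span<std::byte> data; // Span over the host-mapped backing memory

    std::span<std::byte> subspan(std::size_t offset, std::size_t size) {
        return data.subspan(offset, size);
    }
};

class StagingBuffer : public Buffer, public FenceCycleDependency {};
```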
Fixes bugs in the Texture Manager lookup, fixes `RenderTarget` address extraction (the `low`/`high` halves were previously flipped), refactors `Maxwell3D::CallMethod` to use a case for the register being modified and to skip redundant method calls when no new value is being written to the register, and fixes the behavior of shadow RAM, which was previously broken and led to incorrect arguments being used for methods.
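A rough illustration of the `CallMethod` structure described above; the register offsets and names here are placeholders rather than the real Maxwell3D register map:

```cpp
#include <array>
#include <cstdint>

using u32 = std::uint32_t;

// Sketch: a flat copy of the register file lets us skip methods whose register
// value is unchanged, and a switch on the register offset dispatches any side
// effects of the write
struct Maxwell3DSketch {
    std::array<u32, 0xE00> registers{};

    void CallMethod(u32 method, u32 argument) {
        constexpr u32 RenderTargetOffset{0x200}; // Hypothetical register offset

        if (registers.at(method) == argument)
            return; // Redundant write, no new value for the register
        registers.at(method) = argument;

        switch (method) {
            case RenderTargetOffset:
                // Re-extract render target state (low/high address halves, etc.)
                break;
            default:
                break;
        }
    }
};
```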
We had issues when combining host and guest presentation, since certain guest presentation configurations such as double buffering were highly suboptimal for the host and would significantly reduce FPS. As a result, host presentation now has its own presentation textures, which are copied into from the guest at presentation time, allowing us to change parameters of host presentation independently of the guest.
We've implemented the infrastructure for this, which includes the ability to create images from host GPU memory using VMA, an optimized linear texture sync, and a method for on-GPU texture-to-texture copies.
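As a hedged sketch, creating such an image through VMA's `vmaCreateImage` might look roughly like this; the format, usage flags, and memory usage are placeholder choices rather than the actual parameters used:

```cpp
#include <vk_mem_alloc.h>

// Sketch: allocate a device-local presentation image through VMA
VkImage CreatePresentationImage(VmaAllocator allocator, VkExtent2D extent, VmaAllocation *allocation) {
    VkImageCreateInfo imageInfo{
        .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .imageType = VK_IMAGE_TYPE_2D,
        .format = VK_FORMAT_R8G8B8A8_UNORM,
        .extent = {extent.width, extent.height, 1},
        .mipLevels = 1,
        .arrayLayers = 1,
        .samples = VK_SAMPLE_COUNT_1_BIT,
        .tiling = VK_IMAGE_TILING_OPTIMAL,
        .usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    };

    VmaAllocationCreateInfo allocInfo{
        .usage = VMA_MEMORY_USAGE_GPU_ONLY, // Device-local memory for the host-side copy target
    };

    VkImage image{};
    if (vmaCreateImage(allocator, &imageInfo, &allocInfo, &image, allocation, nullptr) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    return image;
}
```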
We've also moved to driving the V-Sync event using AChoreographer on its own thread in this PR, which more accurately mirrors HOS behavior and allows games such as ARMS to boot, as they depend on the V-Sync event being signalled even when the game isn't presenting.
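A simplified sketch of a dedicated V-Sync thread built on the NDK's AChoreographer API; `SignalVsyncEvent` is a hypothetical stand-in for signalling the guest-facing event:

```cpp
#include <android/choreographer.h>
#include <android/looper.h>

// Hypothetical stand-in for signalling the event the guest waits on
void SignalVsyncEvent() { /* signal the guest-visible V-Sync event */ }

void VsyncFrameCallback(long frameTimeNanos, void *data) {
    SignalVsyncEvent();
    // Re-arm the callback so it fires on every display refresh
    AChoreographer_postFrameCallback(static_cast<AChoreographer *>(data), VsyncFrameCallback, data);
}

// Entry point for a dedicated V-Sync thread
void VsyncThread() {
    ALooper_prepare(ALOOPER_PREPARE_ALLOW_NON_CALLBACKS); // AChoreographer needs a looper on this thread
    auto *choreographer{AChoreographer_getInstance()};
    AChoreographer_postFrameCallback(choreographer, VsyncFrameCallback, choreographer);

    while (true)
        ALooper_pollAll(-1, nullptr, nullptr, nullptr); // Dispatch the frame callbacks
}
```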
Implements a wrapper over fences to track a single cycle of activation, a Vulkan memory manager that wraps the Vulkan-Memory-Allocator library, and a command scheduler for scheduling Vulkan command buffers.
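As a loose illustration of the command scheduling flow (not the actual scheduler's API), a one-shot submission associated with a fence might look like the following; error handling is elided:

```cpp
#include <functional>
#include <vulkan/vulkan.h>

// Sketch: record a one-shot command buffer via a callback, submit it and
// return a fence that a FenceCycle-style wrapper could later wait on
VkFence SubmitOnce(VkDevice device, VkQueue queue, VkCommandPool pool,
                   const std::function<void(VkCommandBuffer)> &record) {
    VkCommandBufferAllocateInfo allocateInfo{
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
        .commandPool = pool,
        .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
        .commandBufferCount = 1,
    };
    VkCommandBuffer commandBuffer{};
    vkAllocateCommandBuffers(device, &allocateInfo, &commandBuffer);

    VkCommandBufferBeginInfo beginInfo{
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
        .flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT,
    };
    vkBeginCommandBuffer(commandBuffer, &beginInfo);
    record(commandBuffer);
    vkEndCommandBuffer(commandBuffer);

    VkFenceCreateInfo fenceInfo{.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO};
    VkFence fence{};
    vkCreateFence(device, &fenceInfo, nullptr, &fence);

    VkSubmitInfo submitInfo{
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &commandBuffer,
    };
    vkQueueSubmit(queue, 1, &submitInfo, fence);
    return fence; // The caller (e.g. a FenceCycle) waits on this fence
}
```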
We decided to restructure Skyline to draw a layer of separation between the guest and host GPU. We're reserving the `gpu` namespace and directory purely for the host GPU and creating a new `soc` directory and namespace for emulation of parts of the X1 SoC; this is currently limited to the guest GPU but will be expanded to contain components such as the audio DSP down the line.
* Fix alignment handling in NvHostAsGpu::AllocSpace
* Implement Ioctl{2,3} ioctls
These were added in HOS 3.0.0 in order to ease the handling of ioctl buffers.
* Introduce support for GPU address space remapping
* Fix nvdrv and am service bugs
Syncpoints are supposed to be allocated starting from ID 1, but they were previously allocated from 0. The ioctl functions were also missing from the service map.
* Fix friend:u service name
* Stub NVGPU_IOCTL_CHANNEL_SET_TIMESLICE
* Stub IManagerForApplication::CheckAvailability
* Add OsFileSystem Directory support and add a size field to directory entries
The size field will be needed by the upcoming HOS IDirectory support.
* Implement support for IDirectory
This is used by applications to list the contents of a directory.
* Address feedback
The GPU has its own address space, separate from the CPU's. It can address 40-bit addresses and accesses system memory. A sorted vector has been used to store the blocks, as insertions are not very frequent.
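A simplified sketch of the sorted-vector approach (illustrative only, not the actual implementation): blocks are kept sorted by their guest GPU virtual address so lookups are a binary search, while the occasional insertion pays the cost of shifting the vector.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

using u64 = std::uint64_t;

struct Block {
    u64 virtualAddress;       // 40-bit GPU virtual address the block starts at
    u64 size;
    std::uint8_t *cpuPointer; // Backing system memory, nullptr if unmapped
};

class AddressSpaceSketch {
    std::vector<Block> blocks; // Always kept sorted by virtualAddress

  public:
    void MapBlock(Block block) {
        auto it{std::lower_bound(blocks.begin(), blocks.end(), block.virtualAddress,
                                 [](const Block &entry, u64 address) { return entry.virtualAddress < address; })};
        blocks.insert(it, block); // Insertions are rare, so O(n) shifting is acceptable
    }

    Block *FindBlock(u64 address) {
        auto it{std::upper_bound(blocks.begin(), blocks.end(), address,
                                 [](u64 address, const Block &entry) { return address < entry.virtualAddress; })};
        if (it == blocks.begin())
            return nullptr;
        --it; // The last block starting at or before the address
        return (address < it->virtualAddress + it->size) ? &*it : nullptr;
    }
};
```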