I believe that in today’s world, e-mail newsletters still make a lot of sense. Back in the early days of the Internet - before search engines like Google became truly effective - there were websites, such as the Yahoo! directory, that provided manually curated catalogs of links, organized into categories and subcategories. Later, full-text search engines such as Google took over, making it easy to find almost anything online.
But now, with the overwhelming flood of new content published every day, aggressive Search Engine Optimization (SEO) tactics, and the rise of AI-generated noise, I find it valuable to rely on trusted people who periodically curate and share the most interesting articles and videos within a specific field.
So here’s my question for you: Which programming-related newsletters do you recommend? I’m especially interested in those covering C++, graphics rendering, game development, and similar topics.
Here is my current list:
EDIT: There are additional newsletters recommended in comments under my social media posts on X/Twitter and LinkedIn.
By the way, I still use RSS/Atom feeds to follow interesting websites and blogs. Not every site offers one, but when they do, it’s a convenient way to aggregate recent posts in a single place. For this, I use the free online service Feedly.
If you also follow news feeds this way, you can subscribe to the Atom feed of my blog.
I also use the social bookmarking service Pinboard. You can browse my public links about graphics under the tags rendering and graphics. Some of these links point to individual articles, while others lead to entire websites or blogs.
If you’re programming graphics using modern APIs like DirectX 12 or Vulkan and you're working with an AMD GPU, you may already be familiar with the Radeon Developer Tool Suite. In this article, I’d like to highlight one of the tools it includes - Driver Experiments - and specifically focus on two experiments that can help you debug AMD-specific issues in your application, such as visual glitches.
Not an actual screenshot from a game, just an illustration.
Before diving into the details, let’s start with the basics. Driver Experiments is one of the tabs available in the Radeon Developer Panel, part of the Radeon Developer Tool Suite. To get started:
The Driver Experiments tool provides a range of toggles that control low-level driver behavior. These settings are normally inaccessible to anyone outside AMD and are certainly not intended for end users or gamers. However, in a development or testing environment - which is our focus here - they can be extremely valuable.
Comprehensive documentation for the tool and its individual experiments is available at GPUOpen.com: Radeon Developer Panel > Features > Driver Experiments.
When using these settings, please keep in mind the following limitations:
Among the many available experiments, some relate to enabling or disabling specific API features (such as ray tracing or mesh shaders), while others target internal driver optimizations. These toggles can help diagnose bugs in your code, uncover optimization opportunities, or even verify suspected driver issues. In the next section, I’ll describe two experiments that I find especially helpful when debugging problems that tend to affect AMD hardware more frequently than other vendors.
This is about a topic I already warned about back in 2015, right after DirectX 12 was released, in my article "Direct3D 12 - Watch out for non-uniform resource index!". To recap: when writing shaders that perform dynamic indexing of an array of descriptors (buffers, textures, samplers), the index is assumed to be scalar - that is, to have the same value across all threads in a wave. For an explanation of what that means, see my old post: "Which Values Are Scalar in a Shader?" When it is not scalar (e.g. it varies from pixel to pixel), we need to decorate it with the NonUniformResourceIndex qualifier in HLSL or the nonuniformEXT qualifier in GLSL:
Texture2D<float4> allTextures[400] : register(t3);
...
float4 color = allTextures[NonUniformResourceIndex(materialIndex)].Sample(
    mySampler, texCoords);
The worst thing is that if we forget about NonUniformResourceIndex while the index is indeed non-uniform, we may get undefined behavior, which typically means indexing into the wrong descriptor and results in visual glitches. It won't be reported as an error by the D3D Debug Layer. (EDIT: But PIX can help detect it.) It typically affects only AMD GPUs, while working fine on NVIDIA. This is because in the AMD GPU assembly (ISA) (which is publicly available - see the AMD GPU architecture programming documentation) descriptors are scalar, so when the index is non-uniform, the shader compiler needs to generate instructions for a "waterfall loop", which have some performance overhead.
I think that whoever designed the NonUniformResourceIndex qualifier in shader languages is guilty of hours of debugging and frustration for countless developers who stumbled upon this problem. This approach of "performance by default, correctness as opt-in" is not a good design. A better language design would be to do the opposite:

- Treat every dynamic descriptor index as potentially non-uniform by default, so the code is always correct.
- Let the shader compiler detect cases where the index is provably uniform (e.g. an expression like myCB.myConstIndex + 10) and then optimize it.
- Offer an opt-in UniformResourceIndex() qualifier, thus declaring that we know what we are doing and we agree to introduce a bug if we don't keep our promise to ensure the index is really scalar.

But the reality is what it is, and no one seems to be working on fixing this. (EDIT: Not fully true, there is some discussion.) That's where Driver Experiments can help. When you activate the "Force NonUniformResourceIndex" experiment, all shaders are compiled as if every dynamic descriptor index were annotated with NonUniformResourceIndex. This may incur a performance cost, but it can also resolve visual bugs. If enabling it fixes the issue, you've likely found a missing NonUniformResourceIndex somewhere in your shaders - you just need to identify which one.
This relates to a topic I touched on in my older post: "Texture Compression: What Can It Mean?". "Compression" in the context of textures can mean many different things. Here, I’m not referring to packing textures in a ZIP file or even using compressed pixel formats like BC7 or ASTC. I’m talking about internal compression formats that GPUs sometimes apply to textures in video memory. These formats are opaque to the developer, lossless, and specific to the GPU vendor and model. They’re not intended to reduce memory usage - in fact, they may slightly increase it due to additional metadata - but they can improve performance when the texture is used. This kind of compression is typically applied to render-target (DX12) or color-attachment (Vulkan) and depth-stencil textures. The decision of when and how to apply such compression is made by the driver and depends on factors like pixel format, MSAA usage, and even texture dimensions.
The problem with this form of compression is that, while invisible to the developer, it can introduce bugs that wouldn’t occur if the texture were stored as a plain, uncompressed pixel array. Two issues in particular come to mind:
(1) Missing or incorrect barrier. Some GPUs may not support certain compression formats for all types of texture usage. Imagine a texture that is first bound as a render target. Rendering triangles to it is optimized thanks to the specialized internal compression. Later, we want to use that texture in a screen-space post-processing pass, sampling it as an SRV (shader resource). In DX12 and Vulkan, this requires inserting a barrier between the two usages. A barrier typically ensures correct execution order - so that the next draw call starts only after the previous one finishes - and flushes or invalidates relevant caches. However, if the GPU doesn't support the render-target compression format for SRV usage, the barrier must also trigger decompression, converting the entire texture into a different internal format. This step may be slow, but it's necessary for rendering to work correctly. That's exactly what the D3D12_RESOURCE_STATES and VkImageLayout enums are designed to control.
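As a reminder of what such a barrier looks like, here is a minimal DX12 sketch of the transition discussed above; cmdList and texture are assumed to exist in the surrounding code:

// Transition from render-target to pixel-shader-resource usage.
// This is the point where the driver may also decompress the texture.
D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Transition.pResource = texture;
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
cmdList->ResourceBarrier(1, &barrier);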
Now, imagine what happens if we forget to issue this barrier or issue an incorrect one. The texture remains in its compressed render-target format but is then sampled as a shader resource. As a result, we read incorrect data - leading to completely broken output, such as the kind of visual garbage shown in the image above. In contrast, if the driver hadn’t applied any compression, the missing barrier would be less critical because there’d be no format transition required.
(2) Missing or incorrect clear. I discussed this in detail in my older articles: "Initializing DX12 Textures After Allocation and Aliasing" and the follow-up "States and Barriers of Aliasing Render Targets". To recap: when a texture is placed in memory that may contain garbage data, it needs to be properly initialized before use. This situation can occur when the texture is created as placed in a larger memory block (using the CreatePlacedResource function) and that memory was previously used for something else, or when the texture aliases other resources. Proper initialization usually involves a Clear operation. However, if we don't care about the contents, we can also use the DiscardResource function (in DX12) or transition the texture from VK_IMAGE_LAYOUT_UNDEFINED (in Vulkan).
Here comes the tricky part. What if we're going to overwrite the entire texture by using it as a render target or UAV / storage image? Surprisingly, that is not considered proper initialization. If the texture were uncompressed, everything might work fine. But when an internal compression format is applied, visual artifacts can appear - and sometimes persist - even after a full overwrite as an RT or UAV. This issue frequently shows up on AMD GPUs while going unnoticed on NVIDIA. The root cause is that the texture's metadata wasn't properly initialized. The DiscardResource function handles this correctly: it initializes the metadata while leaving the actual pixel values undefined.
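As a minimal DX12 sketch of proper initialization of a freshly placed or aliased render-target texture (cmdList, rtvHandle, and texture are assumed to exist elsewhere in the code):

// Option A: clear it - initializes both the metadata and the pixel values.
const FLOAT clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
cmdList->ClearRenderTargetView(rtvHandle, clearColor, 0, nullptr);
// Option B: when the old contents don't matter, initialize only the metadata.
cmdList->DiscardResource(texture, nullptr); // nullptr = discard the whole resource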
The Driver Experiments tool can also help with debugging this type of issue by providing the "Disable color texture compression" experiment (and in DX12, also "Disable depth-stencil texture compression"). When enabled, the driver skips applying internal compression formats to textures in video memory. While this may result in reduced performance, it can also eliminate rendering bugs. If enabling this experiment resolves the issue, it’s a strong indicator that the problem lies in a missing or incorrect initialization (typically a Clear operation) or a barrier involving a render-target or depth-stencil texture. The next step is to identify the affected texture and insert the appropriate command at the right place in the rendering process.
The Driver Experiments tab in the Radeon Developer Panel is a collection of toggles for the AMD graphics driver, useful for debugging and performance tuning. I've focused on two of them in this article, but there are many more, each potentially useful in different situations. Over the years, I’ve encountered various issues across many games. For example:
This will be a beginner-level article for programmers working in C, C++, or other languages that use a similar preprocessor - such as shader languages like HLSL or GLSL. The preprocessor is a powerful feature. While it can be misused in ways that make code more complex and error-prone, it can also be a valuable tool for building programs and libraries that work across multiple platforms and external environments.
In this post, I’ll focus specifically on conditional compilation using the #if and #ifdef directives. These allow you to include or exclude parts of your code at compile time, which is much more powerful than a typical runtime if() condition. For example, you can completely remove a piece of code that might not even compile in certain configurations. This is especially useful when targeting specific platforms, external libraries, or particular versions of them.
When it comes to enabling or disabling a feature in your code, there are generally two common approaches:
Solution 1: Define or don’t define a macro and use #ifdef:
// To disable the feature: leave the macro undefined.
// To enable the feature: define the macro (with or without a value).
#define M
// Later in the code...
#ifdef M
// Use the feature...
#else
// Use fallback path...
#endif
Solution 2: Define a macro with a numeric value (0 or 1), and use #if:
// To disable the feature: define the macro as 0.
#define M 0
// To enable the feature: define the macro as a non-zero value.
#define M 1
// Later in the code...
#if M
// Use the feature...
#else
// Use fallback path...
#endif
There are more possibilities to consider, so let’s summarize how different macro definitions behave with #ifdef and #if in the table below:
| Macro definition | #ifdef M | #if M |
|---|---|---|
| (Undefined) | No | No |
| #define M | Yes | ERROR |
| #define M 0 | Yes | No |
| #define M 1 | Yes | Yes |
| #define M (1) | Yes | Yes |
| #define M FOO | Yes | No |
| #define M "FOO" | Yes | ERROR |
The #ifdef M directive simply checks whether the macro M is defined, no matter whether it has an empty value or any other value. On the other hand, #if M attempts to evaluate the value of M as an integer constant expression. This means it works correctly if M is defined as a literal number like 1 or even as an arithmetic expression like (OTHER_MACRO + 1). Interestingly, an undefined symbol used in #if evaluates to 0, but defining a macro with an empty value or a non-numeric token (like a string) will cause a compilation error - such as “error C1017: invalid integer constant expression” in Visual Studio.
It's also worth noting that #if can be used to check whether a macro is defined by writing #if defined(M). While this is more verbose than #ifdef M, it’s also more flexible and robust. It allows you to combine multiple conditions using logical operators like && and ||, enabling more complex preprocessor logic. It is also the only option when doing #elif defined(OTHER_M), unless you are using C++23, which adds the missing #elifdef and #elifndef directives.
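For example, a small sketch of conditions that #if/defined() can express but #ifdef alone cannot (MY_LIB_FORCE_PORTABLE is a hypothetical macro; the platform macros are the standard predefined ones):

#if defined(_WIN32) && !defined(MY_LIB_FORCE_PORTABLE)
    // Use the Windows-specific fast path...
#elif defined(__linux__) || defined(__APPLE__)
    // Use the POSIX path...
#else
    // Generic fallback...
#endif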
So, which of the two approaches should you choose? We may argue for one or the other, but when developing the Vulkan Memory Allocator and D3D12 Memory Allocator libraries, I decided to treat some configuration macros as having three distinct states: defined as 1 (the user force-enables the feature), defined as 0 (the user force-disables it), or left undefined (the library decides automatically).
To support this pattern, I use the following structure:
#ifndef M
#if (MY OWN CONDITION...)
#define M 1
#else
#define M 0
#endif
#endif
// Somewhere later...
#if M
// Use the feature...
#else
// Use fallback path...
#endif
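For example, a user of such a library can override the automatic detection before including the header (the header name here is hypothetical):

#define M 0              // Force-disable the feature, skipping the auto-detection above.
#include "the_library.h" // Leaving M undefined would let the library decide on its own.

This way the library works out of the box, but still gives full control to whoever needs it.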
Today I would like to present my new article: "The Secrets of Floating-Point Numbers". It can be helpful to any programmer, no matter what programming language they use. In this article, I discuss floating-point numbers compliant with the IEEE 754 standard, which are available in most programming languages. I describe their structure, capabilities, and limitations. I also address the common belief that these numbers are inaccurate or nondeterministic. Furthermore, I highlight many non-obvious pitfalls that await developers who use them.
The article was first published a few months ago in Polish, in issue 5/2024 (115) (November/December 2024) of the Programista magazine. Now I have the right to share it publicly for free, so I am publishing it in two language versions:
This post is about the D3d12info open-source project that I'm involved in. The project is under continuous development, but I noticed I haven't blogged about it since I first announced it in 2022. Here, I describe the story behind it and its current state. The post may be interesting to you if you are a programmer writing graphics code for Windows using DirectX 12.
Various GPUs (discrete graphics cards, processor-integrated graphics chips) from various vendors (AMD, Intel, Nvidia, …) have various capabilities. Even when a GPU supports a specific API (OpenGL, DirectX 11, DirectX 12, Vulkan), some of the features may not be supported. These features span from the big ones that even non-programmers recognize, like ray tracing, to the most obscure, like the lengthy D3D12_FEATURE_DATA_D3D12_OPTIONS::VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation and the even lengthier Vulkan VkPhysicalDeviceShaderIntegerDotProductProperties::integerDotProductAccumulatingSaturating4x8BitPackedMixedSignednessAccelerated 🙂
Before using any of these features in our apps, we need to query whether the feature is supported on the current GPU. Checking it programmatically is relatively simple. Graphics APIs offer functions for that purpose, like ID3D12Device::CheckFeatureSupport and vkGetPhysicalDeviceProperties2. When the feature is not supported, the app should either fall back to some other implementation (e.g. using screen-space reflections instead of ray-traced reflections) or display an error telling the user that the GPU doesn’t meet our minimum hardware requirements.
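For instance, a minimal sketch of such a check in DX12, querying the ray tracing tier (device is assumed to be a valid ID3D12Device):

D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
if (SUCCEEDED(device->CheckFeatureSupport(
        D3D12_FEATURE_D3D12_OPTIONS5, &options5, sizeof(options5))) &&
    options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
{
    // Ray-traced reflections...
}
else
{
    // Fall back to screen-space reflections or report unsupported hardware...
}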
However, when we plan to use some optional feature of the API and think about testing it on a variety of platforms and eventually shipping it to end users, we may ask:
For Vulkan, the answers to these questions are: yes & yes. For querying the capabilities of the local GPU, the Vulkan SDK comes with a small command-line program called “vulkaninfo”. After running it, we can see all the extensions, properties, features, and limits of the GPU in a human-readable text format. JSON and HTML output formats are also available.
For the database of GPUs, Sascha Willems maintains Vulkan Hardware Database and an accompanying GUI app Vulkan Hardware Capability Viewer that presents the capabilities of the local GPU and also allows submitting this report to the database.
For Direct3D 12, however, I wasn’t aware of any such application or database. The Windows SDK comes with a GUI app that can be found in "c:\Program Files (x86)\Windows Kits\10\bin\*\x64\dxcapsviewer.exe". It presents some features of DirectDraw, Direct3D 9 and 11, and DXGI, as well as some options of Direct3D 12, but it doesn’t seem to be complete in terms of all the latest options available. There is no updated version of it distributed with the DirectX 12 Agility SDK, and there is no way to use it from the command line. At least Microsoft open-sourced it: DxCapsViewer @ GitHub.
This is why I decided to develop D3d12info, to become a DX12 equivalent of vulkaninfo. Written in C++, this small Windows console app prints all the capabilities of DX12 to the standard output, in text format. The project is open source, under MIT license, but you can also download precompiled binaries by picking the latest release.
JSON is also available as the output format, which makes the app suitable for automated processing as part of some larger pipeline.
I published the first “draft” version of D3d12info in 2018, but it wasn’t until July 2022 that I released the first version I considered complete and marked as 1.0.0. The app has had many releases since then. I update it as Microsoft ships new versions of the Agility SDK, to fetch newly added capabilities (including ones from the “preview” version of the SDK).
There is some other information fetched and printed by the app apart from the DX12 features. The pieces I consider most important are:
However, I try to limit the scope of the project to avoid feature creep, so I refuse some feature requests. For example, I decided not to include capabilities queried from DirectX Video or WDDM.
D3d12info would have remained only a command-line tool if not for Dmytro Bulatov “Devaniti” - a Ukrainian developer working at ZibraAI who joined the project and developed D3d12infoGUI. This app is a convenient overlay that unpacks the command-line D3d12info, launches it, and converts its output into a nice-looking HTML page, which is then saved to a temporary file and opened in a web browser. This allows browsing the capabilities of the current GPU in a convenient way. Dmytro also contributed significantly to the code of my D3d12info project.
If you scroll down the report, you can see a table with texture formats and the capabilities they support. Many of them are mandatory for every GPU supporting feature level 12_0; these are marked with a hollow check mark. However, as you can see below, my GPU supports some additional formats as “UAV Typed Load”:
The web page with the report also offers a large green button near the top that submits it to the online database. Here comes the last part of the ecosystem: D3d12infoDB. This is something I was dreaming about for years, but I couldn’t make it since I am not a proficient web developer. Now, Dmytro along with other contributors from the open source community developed a website that gathers reports about various GPUs, offering multiple ways of browsing, searching, and filtering them.
One great feature they’ve added recently is the Feature Table. It gathers DX12 capabilities as rows, while the columns are subsequent generations of GPUs from AMD, Nvidia, Intel, and Qualcomm. This way, we can easily see which features are supported by older GPU generations, to make a better decision about the minimum feature set required by the game we develop. For example, we can see that ray tracing (DXR 1.1) and mesh shaders have been supported by Nvidia since the Turing architecture (GeForce RTX 2000 series, released in 2018), while support from AMD is more recent, since the RDNA2 architecture (Radeon RX 6000 series, released in 2020).
As I mentioned above, I keep the D3d12info tool updated to the latest DirectX 12 Agility SDK, to fetch and print newly added capabilities. This also includes major features like DirectSR or metacommands. D3d12infoGUI app and D3d12infoDB website are also updated frequently.
I want to avoid expanding the app too much. One major feature I am considering is providing separate executables for the x86 32-bit, x86 64-bit, and ARM architectures, as I’ve heard there are differences in the DX12 capabilities supported between them, while some graphics programmers (e.g. on the demoscene) still target 32 bits. Please let me know if this would be useful to you!
Finally, here is my call to action! You can help the project by submitting your GPU to the online database. Every submission counts. Even having a different version of the graphics driver constitutes a separate entry. Please download the latest D3d12infoGUI release, launch it, and when the local web page opens, press that large green button to submit your report.
However, if you are one of those developers working for a GPU vendor and you use prototype future GPU hardware or an internal unreleased build of the graphics driver, please don’t do it. We don’t want to leak any confidential information through this website. If you accidentally submitted such a report, please contact us and we will remove it.
In January 2025, I participated in PolyJam - a Global Game Jam site in Warsaw, Poland. I shared my experiences in a blog post: Global Game Jam 2025 and First Impressions from Godot. This post focuses on a specific issue I encountered during the jam: Godot 4.3 frequently hanging on my ASUS TUF Gaming laptop. If you're in a hurry, you can SCROLL DOWN to skip straight to the solution that worked for me.
The laptop I used was an ASUS TUF Gaming FX505DY. Interestingly, it has two different AMD GPUs onboard - a detail that becomes important later:
The game we developed wasn’t particularly complex or demanding - it was a 2D pixel art project. Yet, the Godot editor kept freezing frequently, even without running the game. The hangs occurred at random moments, often while simply navigating the editor UI. Each time, I had to force-close and restart the process. I was using Godot 4.3 Stable at the time.
I needed a quick solution. My first step was verifying that both Godot 4.3 and my AMD graphics drivers were up to date (they were). Then, I launched Godot via "Godot_v4.3-stable_win64_console.exe", which displays a console window with debug logs alongside the editor. That’s when I noticed an error message appearing every time the hang occurred:
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
This suggested the issue might be GPU-related, specifically involving the Vulkan API. However, I wasn’t entirely sure - the same error message occasionally appeared even when the engine wasn’t hanging, so it wasn’t a definitive indicator.
To investigate further, I decided to enable the Vulkan validation layer, hoping it would reveal more detailed error messages about what the engine was doing wrong. Having the Vulkan SDK installed on my system, I launched the Vulkan Configurator app that comes with it ("Bin\vkconfig.exe"), set Vulkan Layers Management = Layers Controlled by the Vulkan Configurator, and selected Validation.
Unfortunately, when I launched Godot again, no new error messages appeared in the console. (Looking back, I’m not even sure if that console window actually captured the process’s standard output.) For a brief moment, I thought enabling the Vulkan validation layer had fixed the hangs - but they soon returned. Maybe they were less frequent, or perhaps it was just wishful thinking.
Next, I considered forcing Godot to use the integrated GPU (Radeon Vega 8) instead of the more powerful discrete GPU (RX 560X). To test this, I adjusted Windows power settings to prioritize power saving over maximum performance. However, this didn’t work - Godot still reported using the Radeon RX 560X.
THE SOLUTION: What finally worked was forcing Godot to use the integrated GPU by launching it with a specific command-line parameter. Instead of running the editor normally, I used:
Godot_v4.3-stable_win64_console.exe --verbose --gpu-index 1
This made Godot use the second GPU (index 1) - the slower Radeon Vega 8 - instead of the default RX 560X. The result? No more hangs. While the integrated GPU is less powerful, it was more than enough for our 2D pixel art game.
I am not sure why this helped, considering that both GPUs in my laptop are from AMD and are supported by one driver. I also haven't checked whether Godot 4.4, which has been released since then, fixes this bug. I am just leaving this story here in case someone stumbles upon the same problem in the future.
On January 30th, 2025, Microsoft released new versions of the DirectX 12 Agility SDK: 1.615.0 (D3D12SDKVersion = 615) and 1.716.0-preview (D3D12SDKVersion = 716). The main article announcing this release is: AgilitySDK 1.716.0-preview and 1.615-retail. Files are available to download from DirectX 12 Agility SDK Downloads, as always in the form of .nupkg files (which are really ZIP archives).
I can see several interesting additions in the new SDK, so in this article I am going to describe them and delve into details of some of them. This way, I aim to consolidate information that is scattered across multiple Microsoft pages and provide links to all of them. The article is intended for advanced programmers who use DirectX 12 and are interested in the latest developments of the API and its surrounding ecosystem, including features that are currently in preview mode and will be included in future retail versions.
This is the only feature added to both the retail and preview versions of the new SDK. The article announcing it is: Agility SDK 1.716.0-preview & 1.615-retail: Shader hash bypass. A more extensive article explaining this feature is available here: Validator Hashing.
The problem:
If you use DirectX 12, you most likely know that shaders are compiled in two stages. First, the source code in HLSL (High-Level Shading Language) is compiled using the Microsoft DXC compiler into an intermediate binary code. This often happens offline when the application is built. The intermediate form is commonly referred to as DXBC (as the container format and the first 4 bytes of the file) or DXIL (as the intermediate language of the shader code, somewhat similar to SPIR-V or LLVM IR). This intermediate code is then passed to a DirectX 12 function that creates a Pipeline State Object (PSO), such as ID3D12Device::CreateGraphicsPipelineState. During this step, the second stage of compilation occurs within the graphics driver, converting the intermediate code into machine code (ISA) specific to the GPU. I described this process in more detail in my article Shapes and forms of DX12 root signatures, specifically in the "Shader Compilation" section.
What you may not know is that the intermediate compiled shader blob is digitally signed by the DXC compiler using a hash embedded within it. This hash is then validated during PSO creation, and the function fails if the hash doesn’t match. Moreover, despite the DXC compiler being open source and hosted on github.com/microsoft/DirectXShaderCompiler, the signing process is handled by a separate library, "dxil.dll", which is not open source.
If you only use the DXC compiler provided by Microsoft, you may never encounter any issues with this. I first noticed this problem when I accidentally used "dxc.exe" from the Vulkan SDK instead of the Windows SDK to compile my shaders. This happened because the Vulkan SDK appeared first in my "PATH" environment variable. My shaders compiled successfully, but since the closed-source "dxil.dll" library is not distributed with the Vulkan SDK, they were not signed. As a result, I couldn’t create PSO objects from them. As the ecosystem of graphics APIs continues to grow, this could also become a problem for libraries and tools that aim to generate DXIL code directly, bypassing the HLSL source code and DXC compiler. Some developers have even reverse-engineered the signing algorithm to overcome this obstacle, as described by Stephen Gutekanst / Hexops in this article: Building the DirectX shader compiler better than Microsoft?.
The solution:
With this new SDK release, Microsoft has made two significant changes:

- The algorithm used to compute the hash is now publicly documented, so tools that generate DXIL can sign their shaders themselves, without "dxil.dll".
- Alternatively, the hash check can be skipped entirely by filling the hash field with a special "magic" value: 01010101010101010101010101010101 for "BYPASS", or 02020202020202020202020202020202 for "PREVIEW_BYPASS".

Technologies that generate DXIL shader code can now use either of these methods to produce a valid shader.
The capability to check whether this new feature is supported is exposed through D3D12_FEATURE_DATA_BYTECODE_BYPASS_HASH_SUPPORTED::Supported. However, it appears to be implemented entirely at the level of the Microsoft DirectX runtime rather than the graphics driver, as it returns TRUE on every system I tested.
One caveat is that "dxil.dll" not only signs the shader but also performs some form of validation. Microsoft didn’t want to leave developers without the ability to validate their shaders when using the bypass hash. To address this, they have now integrated the validation code into the D3D Debug Layer, allowing shaders to be validated as they are passed to the PSO creation function.
This feature is only available in the preview SDK version. The article announcing it is: Agility SDK 1.716.0-preview: Tight Alignment of Resources. There is also a specification: Direct3D 12 Tight Placed Resource Alignment, but it is very low-level, describing even the interface for the graphics driver.
The problem:
This one is particularly interesting to me, as I develop the D3D12 Memory Allocator and Vulkan Memory Allocator libraries, which focus on GPU memory management. In DirectX 12, buffers require alignment to 64 KB, which can be problematic and lead to significant memory waste when creating a large number of very small buffers. I previously discussed this issue in my older article: Secrets of Direct3D 12: Resource Alignment.
The solution:
This is one of many features that the Vulkan API got right, and Microsoft is now aligning DirectX 12 in the same direction. In Vulkan, developers need to query the required size and alignment of each resource using functions like vkGetBufferMemoryRequirements, and the driver can return a small alignment if supported. For more details, you can refer to my older article: Differences in memory management between Direct3D 12 and Vulkan. Microsoft is now finally allowing buffers in DirectX 12 to support smaller alignments by introducing the following new API elements:
- A new capability to query: D3D12_FEATURE_DATA_TIGHT_ALIGNMENT::SupportTier.
- A new flag, D3D12_RESOURCE_FLAG_USE_TIGHT_ALIGNMENT, to be added to the description of the resource you are about to create.
- When calling ID3D12Device::GetResourceAllocationInfo, the function may now return an alignment smaller than 64 KB. As Microsoft states: "Placed buffers can now be aligned as tightly as 8 B (max of 256 B). Committed buffers have also had alignment restrictions reduced to 4 KiB."

I have already implemented support for this new feature in the D3D12MA library. Since this is a preview feature, I’ve done so on a separate branch for now. You can find it here: D3D12MemoryAllocator branch resource-tight-alignment.
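As a rough sketch of how this is meant to be used, based on the names above (the exact preview API may still differ; the CD3DX12_RESOURCE_DESC helper from d3dx12.h is assumed):

D3D12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Buffer(64); // a tiny 64 B buffer
desc.Flags |= D3D12_RESOURCE_FLAG_USE_TIGHT_ALIGNMENT;        // opt in to tight alignment
const D3D12_RESOURCE_ALLOCATION_INFO info = device->GetResourceAllocationInfo(0, 1, &desc);
// After checking D3D12_FEATURE_DATA_TIGHT_ALIGNMENT::SupportTier and with a supporting
// driver (or WARP), info.Alignment may now be much smaller than the usual 64 KB.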
This feature requires support from the graphics driver, and as of today, no drivers support it yet. The announcement article mentions that AMD plans to release a supporting driver in early February, while other GPU vendors are also interested and will support it in an "upcoming driver" or at some indefinite point in the future - similar to other preview features described below.
However, testing is possible right now using the software (CPU) implementation of DirectX 12 called WARP. Here’s how you can set it up:
Microsoft has also shared a sample application to test this feature: DirectX-Graphics-Samples - HelloTightAlignment.
This feature is only available in the preview SDK version. The article announcing it is: Agility SDK 1.716.0-preview: Application Specific Driver State. It is intended for capture-replay tools rather than general usage in applications.
The problem:
A graphics API like Direct3D or Vulkan serves as a standardized contract between a game, game engine, or other graphics application, and the graphics driver. In an ideal world, every application that correctly uses the API would work seamlessly with any driver that correctly implements the API. However, we know that software is far from perfect and often contains bugs, which can exist on either side of the API: in the application or in the graphics driver.
It’s no secret that graphics drivers often detect specific popular or problematic games and applications to apply tailored settings to them. These settings might include tweaks to the DirectX 12 driver or the shader compiler, for example. Such adjustments can improve performance in cases where default heuristics are not optimal for a particular application or shader, or they can provide workarounds for known bugs.
For the driver to detect a specific application, it would be helpful to pass some form of application identification. Vulkan includes this functionality in its core API through the VkApplicationInfo structure, where developers can provide the application name, engine name, application version, and engine version. DirectX 12, however, lacks this feature. The AMD GPU Services (AGS) library adds this capability with the AGSDX12ExtensionParams structure, but this is specific to AMD and not universally adopted by all applications.
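For reference, a minimal sketch of how an application identifies itself in Vulkan (the application and engine names here are hypothetical):

VkApplicationInfo appInfo = {};
appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
appInfo.pApplicationName = "ThatGreatGame";     // hypothetical
appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
appInfo.pEngineName = "MyEngine";               // hypothetical
appInfo.engineVersion = VK_MAKE_VERSION(2, 3, 0);
appInfo.apiVersion = VK_API_VERSION_1_3;

VkInstanceCreateInfo createInfo = {};
createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
createInfo.pApplicationInfo = &appInfo;
// vkCreateInstance(&createInfo, nullptr, &instance);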
Because of this limitation, DirectX 12 drivers must rely on detecting applications solely by their .exe file name. This can cause issues with capture-replay tools such as PIX on Windows, RenderDoc or GFXReconstruct. These tools attempt to replay the same sequence of DirectX 12 calls but use a different executable name, which means driver workarounds are not applied.
Interestingly, there is a workaround for PIX that you can try if you encounter issues opening or analyzing a capture:
mklink WinPixEngineHost.exe ThatGreatGame.exe
This way, PIX will use "WinPixEngineHost.exe" to launch the DirectX 12 workload, but the driver will see the original executable name. This ensures that the app-specific profile is applied, which may resolve the issue.
The solution:
With this new SDK release, Microsoft introduces an API to retrieve and apply an "application-specific driver state." This state will take the form of an opaque blob of binary data. With this feature and a supporting driver, capture-replay tools will hopefully be able to instruct the driver to apply the same app-specific profile and workarounds when replaying a recorded graphics workload as it would for the original application - even if the executable file name of the replay tool is different. This means that workarounds like the one described above will no longer be necessary.
The support for this feature can be queried using D3D12_FEATURE_DATA_APPLICATION_SPECIFIC_DRIVER_STATE::Supported. Since this feature is intended for tools rather than typical graphics applications, I won’t delve into further details here.
This feature is only available in the preview SDK version. The article announcing it is: Agility SDK 1.716.0-preview: Recreate At GPUVA. It is intended for capture-replay tools rather than general usage in applications.
The problem:
Graphics APIs are gradually moving toward the use of free-form pointers, known as GPU Virtual Addresses (GPUVA). If such pointers are embedded in buffers, capture-replay tools may struggle to replay the workload accurately, as the addresses of the resources may differ in subsequent runs. Microsoft mentions that in PIX, they intercept the indirect argument buffer used for ExecuteIndirect to patch these pointers, but this approach may not always be fully reliable.
The solution:
With this new SDK release, Microsoft introduces an API to retrieve the address of a resource and to request the creation of a new resource at a specific address. To ensure that no other resources are assigned to the intended address beforehand, there will also be an option to reserve a list of GPUVA address ranges before creating a Direct3D 12 device.
The support for this feature can be queried using D3D12_FEATURE_DATA_D3D12_OPTIONS20::RecreateAtTier. Since this feature is intended for tools rather than typical graphics applications, I won’t delve into further details here.
This is yet another feature that Vulkan already provides, while Microsoft is only now adding it. In Vulkan, the ability to recreate resources at a specific address was introduced alongside the VK_KHR_buffer_device_address extension, which introduced free-form pointers. This functionality is provided through "capture replay" features, such as the VkBufferOpaqueCaptureAddressCreateInfo structure.
This feature works automatically and does not introduce any new API. It improves performance by passing some DirectX 12 function calls directly to the graphics driver, bypassing intermediate functions in Microsoft’s DirectX 12 runtime code.
If I understand it correctly, this appears to be yet another feature that Vulkan got right and Microsoft is now catching up on. For more details, see the article Architecture of the Vulkan Loader Interfaces, which describes how dynamically fetching pointers to Vulkan functions using vkGetInstanceProcAddr and vkGetDeviceProcAddr can point directly to the "Installable Client Driver (ICD)", bypassing "trampoline functions".
There are also some additions to D3D12 Video. The article announcing them is: Agility SDK 1.716.0-preview: New D3D12 Video Encode Features. However, since I don’t have much expertise in D3D12 Video, I won’t describe them here.
Microsoft also released new versions of PIX that support all these new features from day 0! See the announcement article for PIX version 2501.30 and 2501.30-preview.
Queries for the new capabilities added in this update to the Agility SDK (both retail and preview versions) have already been integrated into the D3d12info command-line tool, the D3d12infoGUI tool, and the D3d12infoDB online database of DX12 GPU capabilities. You can contribute to this project by running the GUI tool and submitting your GPU’s capabilities to the database!
Last weekend, 24-26 January 2025, I participated in the Global Game Jam - more specifically, PolyJam 2025, a jam site in Warsaw, Poland. In this post I'll share the game we made (including the full source code) and describe my first impressions of the Godot Engine, which we used for development.
We made a simple 2D pixel-art game with mechanics similar to Overcooked. It was designed for 2 to 4 players in co-op mode, using keyboard and gamepads.
Entry of the game at globalgamejam.org
GitHub repository with the source code
A side note: The theme of GGJ 2025 was "Bubble". Many teams created games about bubbles in water, while others interpreted it more creatively. For example, the game Startup Panic: The Grind Never Stops featured minigames like drawing graphs or typing buzzwords such as "Machine Learning" to convince investors to fund your startup – an obvious bubble 🙂 Our game, on the other hand, focused on taking care of babies and fulfilling their needs so they could grow up successfully. In Polish, the word for "bubbles" is "bąbelki", but it’s also informally used to refer to babies. Deliberately misspelled as "bombelki", it is a wordplay that makes sense and fits the theme in Polish.
My previous game jam was exactly two years ago. Before that jam, I had learned a bit of Cocos Creator and used it to develop my game, mainly to try something new. I described my impressions in this post: Impressions After Global Game Jam 2023. This time, I took a similar approach and started learning the Godot engine about three weeks before the jam. Having some experience with Unity and Unreal Engine, my first impressions of Godot have been very positive. Despite being an open-source project, it doesn’t have that typical "open-source feeling" of being buggy, unfinished, or inconvenient to use. Quite the opposite! Here are the things I especially like about the engine:
I like that it’s small, lightweight, and easy to set up. All you need to do is download a 55 MB archive, unpack it, and you’re ready to start developing. This is because it’s a portable executable that doesn’t require any installation. The only time you need to download additional files (over 1 GB) is when you’re preparing to create a build for a specific platform.
I also appreciate how simple the core ideas of the engine are:
I’m not sure if this approach is optimal in terms of performance or whether it’s as well-optimized as the full Entity Component System (ECS) that some other engines use. However, I believe a good engine should be designed like this one – with a simple and intuitive interface, while handling performance optimizations seamlessly under the hood.
I also appreciate the idea that the editor is built using the same GUI controls available for game development. This approach provides access to a wide range of advanced controls: not just buttons and labels, but also movable splitters, multi-line text editors, tree views, and more. They can all be skinned with custom colors and textures.
Similarly, files saved by the engine are text files in an INI-like format, with sections like [SectionName] and key-value pairs like Name = Value. Unlike binary files, XML, or JSON, these files are very convenient to merge when conflicts arise after two developers modify the same file. The same format is also available and recommended for use in games, such as for saving settings.
Then, there is GDScript - a custom scripting language. While Godot also offers a separate version that supports C# and even has a binding to Rust, GDScript is the native way of implementing game logic. I like it a lot. Some people compare it to Python, but it’s not a fork or extension of Python; it’s a completely separate language. The syntax shares similarities with Python, such as using indentation instead of braces {} to define scopes. However, GDScript includes many features that Python lacks, specifically tailored for convenient and efficient game development.
One such feature is an interesting mix of dynamic and static typing. By default, variables can have dynamic types (referred to as "variant"), but there are ways to define a static type for a variable. In such cases, assigning a value of a different type results in an error – a feature that Python lacks.
var a = 0.1
a = "Text" # OK - dynamic type.
var b: float
b = 0.1
b = "Text" # Error! b must be a number.
var c := 0.1
c = "Text" # Error! c must be a number.
Another great feature is the inclusion of vector types for 2D, 3D, or 4D vectors of floats or integers. These types are both convenient and intuitive to use - they are passed by value (creating an implicit copy) and are mutable, meaning you can modify individual xyzw components. This is something that Python cannot easily replicate: in Python, tuples are immutable, while lists and custom classes are passed by reference. As a result, assigning or passing them as function parameters in Python makes the new variable refer to the original object. In GDScript, on the other hand:
var a := Vector2(1.0, 2.0)
var b := a # Made a copy.
b.x = 3.0 # Can modify a single component.
print(a) # Prints (1, 2).
I really appreciate the extra language features that are clearly designed for game development. For example, the @export attribute before a variable exposes it to the Inspector as a property of a specific type, making it available for visual editing. The $NodeName syntax allows you to reference other nodes in the scene, and it supports file-system-like paths, such as using / to navigate down the hierarchy and .. to go up. For instance, you can write something like $../AudioPlayers/HitAudioPlayer.play().
I also like how easy it is to animate any property of any object using paths like the one shown above. This can be done using a dedicated AnimationPlayer node, which provides a full sequencer experience with a timeline. Alternatively, you can dynamically change properties over time using a temporary Tween object. For example, the following code changes the font color of a label to a transparent color over 0.5 seconds, using a specific easing function selected from the many available options (check out the Godot tweening cheat sheet for more details):
create_tween().tween_property(addition_label.label_settings, ^":font_color", transparent_color, 0.5).set_trans(Tween.TRANS_CUBIC).set_ease(Tween.EASE_IN)
I really appreciate the documentation. All core language features, as well as the classes and functions available in the standard library, seem to be well-documented. The documentation is not only available online but also integrated into the editor (just press F1), allowing you to open documentation tabs alongside your script code tabs.
I also like the debugger. Being able to debug the code I write is incredibly important to me, and Godot delivers a full debugging experience. It allows you to pause the game (automatically pausing when an error occurs), inspect the call stack, view variable values, explore the current scene tree, and more.
That said, I’m sure Godot isn’t perfect. For me, it was just a one-month adventure, so I’ve only described my first impressions. There must be reasons why AAA games aren’t commonly made in this engine. It likely has some rough edges and missing features. I only worked with 2D graphics, but I can see it supports 3D graphics with a Forward+ renderer and PBR materials. While it could potentially be used for 3D projects, I’m certain it’s not as powerful as Unreal Engine in that regard. I also encountered some serious technical issues with the engine during the game jam, but I’ll describe those in separate blog posts to make them easier to find for anyone searching the Internet for a solution. (Update: the follow-up article is "Fixing Godot 4.3 Hang on ASUS TUF Gaming Laptop".)
I also don’t know much about Godot’s performance. The game we made was very simple. If we had thousands of objects in the scene to render and complex logic to calculate every frame, performance would become a critical factor. Doing some work in every object every frame using the _process function is surely an anti-pattern, and it runs serially on a single thread. However, I can see that GDScript also supports multithreading - another feature that sets it apart from Python.
To summarize, I now believe that Godot is a great engine at least for game jams and fast prototyping.
Earlier this month, Timothy Lottes published a document on Google Docs called “Fixing the GPU”, where he describes many ideas about how programming compute shaders on the GPU could be improved. It might be an interesting read for those advanced enough to understand it. The document is open for suggestions and comments, and there are already a few comments there.
On a different topic, on 25 November I attended the Code::Dive conference in Wrocław, Poland. It was mostly dedicated to programming in C++. I usually attend conferences about game development, so it was an interesting experience for me. Big thanks to Tomasz Łopuszański from Programista magazine for inviting me there! It was great to see Bjarne Stroustrup and Herb Sutter live, among other good speakers. By the way, recordings of the talks are available on YouTube.
Those two events inspired me to write down my thoughts - my personal “wishlist” for programming languages, from the perspective of someone interested in games and real-time graphics programming. I gathered my opinions about things I like and dislike in C++ and some ideas about what a new, better language could look like. It is less about proposing a specific syntax and more about high-level ideas. You can find it under the following shortened address, but it is really a document at Google Docs. Comments are welcome.
Of course I am aware of Rust, D, Circle, Carbon, and other programming languages that share the same goal of replacing C++. I just wanted to write down my own thoughts about this topic.
Floating-point numbers are a great invention. Thanks to dedicating separate bits to the sign, exponent, and mantissa (also called significand), they can represent a wide range of numbers on a limited number of bits - numbers that are positive or negative, very large or very small (close to zero), integer or fractional.
In programming, we typically use double-precision (64b) or single-precision (32b) numbers. These are the data types available in programming languages (like double and float in C/C++) and supported by processors, which can perform calculations on them efficiently. Those of you who deal with graphics programming using graphics APIs like OpenGL, DirectX, or Vulkan may know that some GPUs also support a 16-bit floating-point type, also known as half-float.
Such a 16-bit "half" type obviously has limited precision and range compared to the "single" or "double" versions. Because of these limitations, I am cautious about recommending its use in graphics. I summarized the capabilities and limits of these 3 types in a table in my old "Floating-Point Formats Cheatsheet".
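To give a quick taste of such limits, here is a tiny C++ sketch (using single and double precision, since standard C++ has no portable half-float type):

#include <cstdio>
int main()
{
    // float has a 24-bit significand, so 16,777,217 is the first integer it cannot represent:
    float f = 16777217.0f;
    printf("%.1f\n", f); // prints 16777216.0
    // The classic pitfall with fractional values (here in double precision):
    printf("%s\n", (0.1 + 0.2 == 0.3) ? "equal" : "not equal"); // prints "not equal"
    return 0;
}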
Now, as artificial intelligence (AI) / machine learning (ML) is a popular topic, programmers use low-precision numbers in this domain. When I learned that floating-point formats based on only 8 bits were proposed, I immediately thought: 256 possible values are few enough that they could all be visualized in a 16x16 table! I developed a script that generates such tables, and so I invite you to take a look at my new article:
"FP8 data type - all values in a table"