2026-02-12

Looking back at 2025, and forward to 2026

In this post I look back at what we achieved in 2025, and where we want to go in 2026.

Overview of 2025

PyGfx

We started the year with a big effort to refactor the rendering pipeline. This touched caching, scene traversal, management of the 'scene environment', and a lot more. The work was spread over multiple pull requests, and was needed to allow more flexibility, such as having different blend modes for different materials.

This laid the foundations for refactoring the blending mechanics: a big PR that took four months to finish, with several follow-up PRs to iron out the details. In short, with these changes we acknowledge that blending is a hard problem for which there is no single best solution. Instead of trying to solve it (badly) for the user, we give the user tools to handle blending in various ways. This includes more control over blending order, transparency, and depth handling, as well as alternative blending options like weighted blending and dithering.
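To illustrate why there is no single best solution, here is a minimal sketch in plain Python (hypothetical colors and weights, not PyGfx code): the classic "over" operator gives different results depending on draw order, while a weighted average, the idea behind weighted blending, is order-independent at the cost of being an approximation.

```python
def over(src_rgb, src_a, dst_rgb):
    # Classic "over" alpha blending: the result depends on draw order
    return tuple(s * src_a + d * (1 - src_a) for s, d in zip(src_rgb, dst_rgb))

background = (0.0, 0.0, 0.0)

# Two semi-transparent layers: 50% red and 50% blue
red = ((1.0, 0.0, 0.0), 0.5)
blue = ((0.0, 0.0, 1.0), 0.5)

red_then_blue = over(*blue, over(*red, background))
blue_then_red = over(*red, over(*blue, background))
print(red_then_blue)  # (0.25, 0.0, 0.5)
print(blue_then_red)  # (0.5, 0.0, 0.25) - same layers, different result!

def weighted(layers):
    # A weighted average over all layers: the sum does not care about order
    total_w = sum(a for _, a in layers)
    return tuple(sum(c[i] * a for c, a in layers) / total_w for i in range(3))

print(weighted([red, blue]) == weighted([blue, red]))  # True
```

With ordered blending the renderer must sort transparent objects (which can fail for intersecting geometry); weighted blending and dithering avoid the sort entirely, each with their own visual trade-offs.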

Further notable improvements:

wgpu-py

For wgpu-py we implemented improved support for type hints, making it easier to write wgpu code in IDEs that use static or dynamic introspection.

And of course we kept up with new versions of the WebGPU spec and wgpu-native.

rendercanvas

The most notable improvements in rendercanvas are:

The road to async

Quite a lot of effort was put into improving the support for async. This work was pretty hard, because it involves the (changing) API of wgpu-native for asynchronous calls, different async frameworks (e.g. asyncio, trio, rendercanvas' async adapter), and threading.

One notable advantage of async that we were looking forward to was improved performance for rendercanvas backends that need the rendered image as a bitmap, which is downloaded via GPUBuffer.map_async().

To kick off this work, the context classes that were first implemented as part of wgpu-py were moved to rendercanvas. This made it possible to implement more advanced contexts in rendercanvas, and let wgpu-py focus on being a GPU API.

Then we applied several changes in rendercanvas and wgpu-py to allow them to interoperate in an async setting.

In rendercanvas:

In wgpu-py:

With these changes, it became possible to implement async bitmap present: the rendered image can be downloaded from the GPU without actually waiting for it, so the CPU (and GPU) can do other things in the meantime. This results in a major increase in framerate.

This directly benefits the Jupyter backend and future remote backends, but also makes it viable to make bitmap-present the default for Qt, which solves many issues our users were facing.
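The idea can be mimicked with plain asyncio (no GPU involved; a sleep stands in for the GPUBuffer.map_async() transfer, and the function names are made up for this sketch): the download is started first, other work runs while it is in flight, and only then do we await the result.

```python
import asyncio

async def download_bitmap():
    # Stand-in for GPUBuffer.map_async(): the transfer proceeds
    # in the background while we await it.
    await asyncio.sleep(0.05)
    return b"\x00" * 16  # pretend this is the rendered image

async def other_work(results):
    # Work that can proceed while the download is pending
    results.append("processed events")

async def frame():
    results = []
    # Kick off the download, but don't block on it yet
    download = asyncio.ensure_future(download_bitmap())
    await other_work(results)  # runs while the "GPU" transfer is pending
    bitmap = await download    # only now do we wait for the image
    results.append(len(bitmap))
    return results

print(asyncio.run(frame()))  # ['processed events', 16]
```

In a synchronous design the whole frame would stall at the download; here the wait overlaps with other work, which is where the framerate win comes from.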

What we did not get to

There were also features that we planned for 2025 but did not manage to get to.

Outlook for 2026

More or less in order of urgency:

Funding

In 2025 we received generous funding from the Flatiron Institute and from Ramona Optics. Together with some built-up runway, this got us through the year. We are very grateful for these funds; without them, PyGfx, wgpu-py and rendercanvas would probably be abandonware.

For 2026, both current sponsors continue to support PyGfx, although with smaller amounts. We also applied for a European grant, for which we passed the first round, and we hope to hear the verdict soon.

Almar, Kushal, and Caitlin are also working on a proprietary project that uses PyGfx. Some of the work mentioned in the outlook will be done as part of that project.

If your company or research group is able to financially support this project, that would be awesome! We need funding to keep going. It looks like 2026 will be fine, but there have been times when I did not know how things would work out. So please reach out if you can!