Leveraging WebAssembly in .NET

Tarun Gudipati
Sep 5, 2023

As full-stack web developers, we have all been in situations where we could offload some computation to the client’s browser. In some cases, however, JavaScript isn’t a great choice for CPU-bound tasks in terms of memory or execution time.

WebAssembly to the rescue

WebAssembly (WASM) is an open web standard that promises efficient code execution: the browser runs a compact binary format, offering performance close to native code execution.

However, the beauty of WASM is that it’s not a language on its own.
You can compile your favourite languages like C, C++, Go and, of course,
C# (.NET) to target WASM.

What this means is that building your code produces a WASM file, which can be downloaded in the browser, giving your JavaScript client a way to run the compiled code there.
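To make that last step concrete, here is a minimal, framework-free sketch of loading and calling a WASM module from JavaScript. The module bytes below are a tiny hand-encoded `add` function rather than real compiler output, so the snippet is self-contained; in practice you would fetch a compiler-produced `.wasm` file (e.g. with `WebAssembly.instantiateStreaming`):

```javascript
// A minimal WASM module, hand-encoded: exports add(a, b) -> a + b.
// Real modules come out of your compiler toolchain as a .wasm file.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b // body: i32.add
]);

async function loadAdd() {
  // instantiate works in both browsers and Node
  const { instance } = await WebAssembly.instantiate(wasmBytes);
  return instance.exports.add;
}
```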

Running .NET on WASM

Most online resources tell you how to run .NET in the browser using Blazor.
Blazor is a fantastic framework capable of rendering a full-fledged UI in the browser with C#.

However, if you don’t want to utilise the full UI capabilities of Blazor, and instead just want to run a compute-intensive workload with pure .NET, then you are in the right place …

A Practical Example

Let’s take a practical example to better understand the use-case of WASM.
You can find the code samples for the example below on my GitHub here:

Let’s say your client-side code needs to upload very large files to the server.
There are many approaches to solving such a problem, like chunking the file and sending it as a multi-part stream, or compressing it.
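The chunking approach, for contrast, is simple enough to sketch in a few lines (the function name and shape here are illustrative, not taken from the sample repo):

```typescript
// Split a file's bytes into fixed-size chunks, ready to be sent as the
// parts of a multi-part upload. The last chunk may be smaller.
function chunkBytes(data: Uint8Array, chunkSize: number): Uint8Array[] {
  if (chunkSize <= 0) throw new Error("chunkSize must be positive");
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    // subarray creates a view, so no bytes are copied here
    chunks.push(data.subarray(offset, offset + chunkSize));
  }
  return chunks;
}
```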
Let’s say we went down the compression route.

This means we will have to run a CPU-intensive task on the client side.

For easier demonstration, I’ve chosen to do this with a React and TypeScript project (let’s be honest, who doesn’t like React or TypeScript 😉); however, you could do this with vanilla JavaScript as well.

Running compute-intensive tasks in the browser traditionally meant taking a hit on UI performance, because we only have one thread available to us, so all of React’s UI update events had to stall until the thread was free again.

This was true until web workers were introduced; the topic is complex enough that I could probably make a story on it of its own.
Without going into too much depth, think of web workers as a simple, event-driven way to run some JavaScript code off the main/UI thread on a secondary thread.

Let’s start by looking at App.tsx, the main component of the client-side code. Admittedly it’s not a pretty UI 😜, but it’s enough to show the potential of WASM.

As we can see, it doesn’t have the fanciest UI; the idea is that we have two input elements, both of file type.

The overall process is broken down into the following steps:

  1. When the App component mounts, a useEffect initialises the web workers and sets their respective message handlers.
  2. Choosing a file fires a file-change event, and we call into the upload handler method (handleNativeUpload/handleNonNativeUpload).
  3. The upload handler method does some basic checks and hands the file handle, or pointer, off to the web worker.
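In essence, the handler’s job is small: validate, then delegate. A framework-free sketch of that hand-off (the message shape, interface, and names here are my own for illustration, not the repo’s):

```typescript
// Minimal stand-in for the browser Worker API, so the flow can run anywhere.
interface CompressionWorker {
  postMessage(msg: { kind: "compress"; bytes: Uint8Array; name: string }): void;
}

// What an upload handler boils down to: basic checks,
// then hand the file's bytes off to the worker.
function handleUpload(
  file: { name: string; bytes: Uint8Array } | null,
  worker: CompressionWorker
): boolean {
  if (!file || file.bytes.length === 0) return false; // nothing to do
  worker.postMessage({ kind: "compress", bytes: file.bytes, name: file.name });
  return true;
}
```

In the real component the `file` would come from the input element’s change event, and the worker from the `useEffect` initialisation.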

The non-native worker looks like the following:

As we can see, it takes the file pointer, forms an ArrayBuffer out of it, and then calls the gzip method from the pako library, which is pretty fast in its own right.

Once it gets the compressed bytes back, we communicate the result to the main thread using postMessage.

Now on to the native worker.

This worker is a bit different: first we dynamically import the main script, vendor/main.js, which is responsible for handling the interop between the JS and WASM code.

This script downloads all the necessary WASM files along with the required JS glue, and makes them available in the worker context.
Apart from that, we also initialise the .NET WASM runtime and the exports made from the .NET layer into JavaScript, so it’s ready for use.

Once the native worker’s onMessage method is called, we follow a similar approach; it’s just that we now call into native WASM code instead of JS code for the gzipping.

The magical part

Now let’s see how we write the actual WASM implementation that makes the magic possible.

It all starts with one simple class: Program.cs.
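I can’t inline the repo’s exact file here, but the essence of Program.cs is a handful of lines like the following (the class name and GzipCompress are from the article; everything else is my best-guess reconstruction, not the repo’s exact code):

```csharp
using System.IO;
using System.IO.Compression;
using System.Runtime.InteropServices.JavaScript;

// Must be partial: the JSExport source generator adds the interop glue
// into the other half of this class at build time.
public partial class Program
{
    public static void Main() { } // entry point required by the wasm app model

    [JSExport]
    internal static byte[] GzipCompress(byte[] input)
    {
        using var output = new MemoryStream();
        // Dispose the GZipStream before reading, so the gzip footer is flushed.
        using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
        {
            gzip.Write(input, 0, input.Length);
        }
        return output.ToArray();
    }
}
```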

As we can see, this is plain and simple C#: GzipCompress creates a gzip stream backed by a memory stream and returns a byte array representing the compressed bytes to the caller.

Is it really that simple? One class to rule them all?
The answer is: it depends (like most other things in software), because the heavy lifting of converting JS objects into C# is done by the JSExport attribute.

What happens behind the scenes is that a source generator kicks in and generates the C# side of the interop code for us. In most cases it just works; for the cases where it doesn’t, you’ll have to do the heavy lifting yourself.

But wait, there is one final piece to this magic: the JS side of things.
The main.js which I mentioned before is needed for the JS interop.
This is how it looks:
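A sketch of its shape, built on the `dotnet.create()` / `getAssemblyExports` API that dotnet.js exposes. In the actual file you would `import { dotnet } from "./_framework/dotnet.js"` directly; here the runtime object is a parameter so the wiring can be shown (and exercised) on its own, and the "WasmGzip" assembly/namespace names are placeholders, not the repo’s:

```javascript
// In the real vendor/main.js:
//   import { dotnet } from "./_framework/dotnet.js";
async function createExports(dotnet) {
  // Boot the .NET WASM runtime; this also fetches the .wasm and assembly files.
  const { getAssemblyExports, getConfig } = await dotnet.create();
  // Look up everything the main assembly marked with [JSExport].
  const exports = await getAssemblyExports(getConfig().mainAssemblyName);
  // Thin wrapper methods delegating to the exported C# methods.
  return {
    gzipCompress: (bytes) => exports.WasmGzip.Program.GzipCompress(bytes),
  };
}
```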

The _framework/dotnet.js is injected at build time by the dotnet wasm-tools workload.

All we do here is two things:

  1. Initialise the dotnet runtime and get the exports for the main assembly or whichever assembly we are interested in.
  2. Write wrapper methods to delegate the calls to the exported native methods.

And that’s it, we’re technically done.

One potential optimization, added as part of the .csproj file, is the ability to compile ahead of time (AOT).
What this means is that, with just a few flags, .NET is compiled down to native machine target code (in this case the target is WebAssembly) instead of IL.
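The flags in question are MSBuild properties. Assuming a standard wasm-tools project, the relevant .csproj fragment looks something like this (a minimal sketch, not the repo’s exact file):

```xml
<PropertyGroup>
  <!-- Compile IL down to native WASM at build time instead of
       interpreting/JIT-ing it at runtime. -->
  <RunAOTCompilation>true</RunAOTCompilation>
</PropertyGroup>
```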


So we basically removed a step from the chain, as shown:

Typescript (compile) -> Javascript -> .NET (runtime) -> IL (runtime) -> WASM
Typescript (compile) -> Javascript -> .NET (runtime) -> WASM

Show me the numbers

Enough of the talking, let’s get to the numbers.

As you can see, the .NET implementation scales pretty well with size, and we ended up with up to ~4x performance benefits.

Of course, this doesn’t mean WASM is the one and only solution to all your scalability problems, but it’s worth considering if you plan to offload some work to the browser.
It has been out since 2017, and the data shows that most major browsers already support it.

And web workers have been around since 2011–2012, with even broader browser support.

Anyway, I thought I’d share something I found very interesting. That’s it for this story; thanks for making it this far 😄

Connect with me on LinkedIn for more such interesting reads.