Rust to .NET compiler - Progress update

The past few months have been quite chaotic, both for the Rust to .NET compiler backend I am working on, and for me personally. As I am writing this article, I am in the metaphorical eye of the hurricane - I have just graduated high school, and I will be writing my national final exams in about a week. Still, since I have a bit of free time, I will try to write about some of the progress I have made!

Tiny recap - what is rustc_codegen_clr?

rustc_codegen_clr is a compiler backend that allows the Rust compiler to turn Rust code into .NET assemblies. You can imagine it as a plugin that replaces the very last stage of Rust compilation.

Big news - the project got accepted into GSoC.

One of the things I have worked on recently was submitting a proposal related to this project to Google Summer of Code. And, today I am proud to announce that it has been accepted!

What does this mean for the project? Well, first of all, I will be able to focus solely on rustc_codegen_clr for the following months. Before that, I had been working on the project in my free time - in the evenings or weekends. So, the speed of development should increase.

My proposal will also be mentored by Jack Huey, a member of the Rust language team. Having an outside perspective, from someone experienced, should allow me to increase the quality of my project.

So, what is the proposal about, specifically?

Trouble with tests

Currently, my project relies on custom-written test executables. If a test crashes, it fails; if it does not, it passes.

use std::hint::black_box;

fn main(){
    let a = b"Hello, Bob!\n\0";
    let b = "Hello, Bob!\n\0";
    if black_box(a) != black_box(b).as_bytes(){
        panic!("Strings not equal!");
    }
    let a: &[u8] = &b"Hello, Bob!\n\0"[..];
    let b: &[u8] = &b"Hello, Bob!\n\0"[..];
    if black_box(a) != black_box(b){
        panic!("Slices not equal!");
    }
}

This works, but it certainly is not the way Rust is usually tested. As many of you probably know, Rust has built-in test support:

#[test]
fn should_pass(){ }
#[test]
#[should_panic]
fn should_fail(){ panic!(); }

You can run those tests using the cargo test command.

Currently, the codegen is not able to compile & run the test shown above. At the moment of writing, the test harness crashes when trying to parse the command line arguments. While this test program looks ridiculously simple, the harness generated by the compiler is not.

My GSoC proposal is simple: I want to get the test presented above to run correctly within the .NET runtime.

Why work on tests?

You may wonder: why focus on adding support for a different way of testing if I already have a way to write tests?

First of all, I would be able to run tests not written with my project in mind. Things like the Rust test suite rely on this feature. While running the whole compiler test suite is a whole other can of worms, this is a good first step.

I could also test individual crates (Rust libraries), or try to use something like crater. crater is a Rust tool that tests every single public Rust library - all 100 000 of them. With this kind of testing, I would be able to find almost any issue with this project. That is still far off in the future, but still - you can see why better tests could be really useful.

The Rust tests would also run faster than my crappy, custom ones. Currently, I build a separate test binary for each test case - and that adds up quickly. At the time of writing, my test suite takes 44.461 seconds to run on my machine. On GitHub Actions, the tests are even slower - taking 4 minutes to run. With cargo tests, I could build one big binary for all the tests - cutting down the build times significantly.

So, you probably can understand why I feel this is a good goal for the near future.

What is needed for running better tests?

This is a small list of the work needed to get tests to work:

  1. Command Line Argument Support - this one is partially done, but it is surprisingly complex.
  2. Support for Atomic Intrinsics - The test harness is multithreaded, so it needs atomics for synchronization.
  3. Support for multiple threads - This is kind of self-explanatory: it includes support for launching and handling multiple threads.
  4. Better interop - necessary to implement multi-threading properly. Safely storing thread handles off-stack requires a lot more work, like adding first-class support for GCHandles.
  5. Support for dyn Trait objects - while I personally almost never use them, and I don't like them, they undeniably have their uses. Since the standard library needs them to properly function, I will need to deal with them.
  6. The ability to catch panics - Rust has a concept of panicking and unwinding. This way of handling errors is remarkably similar to .NET exceptions (I implement unwinds using .NET exceptions). Since the failing test cases will unwind (or throw exceptions), I will need to add support for catching them.

As you can see, the work I will need to put in was needed anyway - and getting better, faster tests is just the cherry on top.

If you have any questions/suggestions/whatever regarding my GSoC work, feel free to ask me on Rust Zulip, or in the Reddit comments. Currently, I am in the "Community bonding" GSoC phase - the point at which any feedback will be appreciated.

Command line arguments

You might have noticed I said that support for command line arguments "is surprisingly complex". Why, though? This seems like something that should be very simple, borderline trivial. Well, it turns out Rust uses something unusual to get the command line arguments: weird linker tricks.

Getting command-line arguments the cool way

In most languages, command line arguments are passed to the main function of your program, like this:

public static void Main(string[] args){
    // handle the args here
}

Or this:

int main(int argc, char** argv){
    // handle the args here
}

This makes a lot of sense: most OSs pass arguments to the program this way. Your programming language only inserts a thin wrapper around the OS entry point and passes the OS-provided arguments to your app. Parsing command line arguments is a bit different in Rust:

fn main(){
    let args = std::env::args();

The arguments are not directly passed to the main function, and you retrieve them by calling std::env::args. You may assume this is implemented roughly like this:

// OS-args are stored here.
static mut ARGS: Option<Vec<String>> = None;
unsafe fn _start(argc: c_int, argv: *const *const c_char){
    // Set the args
    ARGS = Some(convert_args(argc, argv));
    // call main
    main();
    // tell the OS that this program has finished.
    exit(0);
}
fn main(){
    // Just gets the value of `ARGS`.
    let args = std::env::args();
}

And you would be partially correct: the command line arguments are indeed stored in a static variable. But they are retrieved from the OS before your program even fully loads.

Dynamic linkers and init_array

For most OSs, the Rust standard library uses a function named really_init to initialize the static variables containing command-line arguments.

static ARGC: AtomicIsize = AtomicIsize::new(0);
static ARGV: AtomicPtr<*const u8> = AtomicPtr::new(ptr::null_mut());

unsafe fn really_init(argc: isize, argv: *const *const u8) {
    // These don't need to be ordered with each other or other stores,
    // because they only hold the unmodified system-provided argv/argc.
    ARGC.store(argc, Ordering::Relaxed);
    ARGV.store(argv as *mut _, Ordering::Relaxed);
}

Ok, that makes sense. But what calls really_init? really_init is called by a "static initializer" - on GNU Linux, it uses the .init_array section.

#[link_section = ".init_array.00099"]
static ARGV_INIT_ARRAY: extern "C" fn(
    crate::os::raw::c_int,
    *const *const u8,
    *const *const u8,
) = {
    extern "C" fn init_wrapper(
        argc: crate::os::raw::c_int,
        argv: *const *const u8,
        _envp: *const *const u8,
    ) {
        unsafe {
            really_init(argc as isize, argv);
        }
    }
    init_wrapper
};
The purpose of a static initializer is pretty self-explanatory: it is supposed to initialize static variables. The really interesting thing about static initializers is that they run before your program even starts.

A static initializer is called by a "dynamic linker" - the part of your OS responsible for loading executables. This has a really interesting consequence: a static initializer will also run when you load a library. This means that you can write something like this:

#[no_mangle]
extern "C" fn can_be_called_by_other_langs(){
    // Can get OS args, even in a shared library
    let args = std::env::args();
}

in a shared library (a.k.a. a dynamic library), and it will still work just fine.

Ok, all of that is pretty nice, but how can something like this be implemented in .NET?

Implementing GNU-style static initializers in .NET

Implementing static initializers may seem like quite the task at first glance. I would have to handle the linker directives passed to me by the frontend, and then call those static functions with proper arguments, and in the right order.

That would be complicated, but I have been able to take a very big shortcut. You see, at least on GNU Linux, there is only 1 static initializer used in the whole Rust standard library.

This makes my job a whole lot easier: I can "just" call really_init directly, and not worry about anything else.


.NET has a pretty nice feature that allows me to easily emulate the behavior of .init_arrays: static constructors. They are pieces of code that run while a type is being initialized. I store my Rust code in a class called RustModule. It contains all the compiled functions, static allocations, and other stuff like that.

Since a static constructor will run before the RustModule class is used, I can guarantee all static data is properly initialized.

I can also use the static constructor to retrieve the command line arguments and call really_init directly. This is relatively easy, since the static constructor is just an ordinary static method with a funny name (.cctor). So, I don't need all that much code to handle all of that properly.

The exact implementation is not as simple as this, but this should give you a rough idea about what is happening.

100(+1) test cases

Some time ago (4 months, to be exact), I started using a slightly modified version of a tool called rustlantis to fuzz-test my compiler. Rustlantis is a pretty amazing tool: it automatically creates complex, but UB-free, Rust programs, extensively testing the Rust compiler. It is a really marvelous thing, and I can't stress enough how helpful it has been.

With my minor changes, I have been able to hook this amazing tool into my compiler - and generate 101 failing test cases for me to fix.

No, this is not an elaborate reference to One Hundred and One Dalmatians - I am just stupid and made an off-by-one error.

I saved failing test cases from 0 to 100, instead of from 1 to 100, and I just decided to keep all of them, and live with my mistakes.

Anyway, I have made some big progress fixing those issues - currently, only 3 of them remain(cases 16, 47, and 58)!

This means I am 97.0297% done!

Yes, the percentage is all messed up - 101 is prime, so no nice percentages for me :(.

The test cases are compiled with and without optimizations, and the behavior of the resulting program is compared between "standard" Rust and the .NET version.

While fixing those cases, I have discovered some interesting consequences of certain... quirks of .NET.

.NET's selective sign dementia

If I asked you this simple question:

Does the .NET runtime have an unsigned integer type?

Your answer would probably be:

Yes, of course! What a silly question!

My answer would be:

Yes, but no, but also yes.

You see, the .NET runtime obviously has support for unsigned integer types:

uint unsigned = 0;

.NET clearly separates signed and unsigned integers almost everywhere, besides the evaluation stack. As soon as an integer lands on the evaluation stack, the runtime gets a weird case of sign amnesia, and instantly forgets if it is supposed to be signed.

Let me show you an example. Look at this C# function:

public static int Add(int signed, uint unsigned){
    // Requires an explicit sign cast!
    return signed + (int)unsigned;
}

This C# function then gets compiled into CIL - .NET's bytecode, which works by pushing and popping values from the evaluation stack. You would expect the CIL this function compiled into to look like this:

.method static 
        int32 Add(
            int32 'signed',
            uint32 'unsigned'
        ){
    // Load argument 0 (int signed)
    ldarg.0
    // Load argument 1 (uint unsigned)
    ldarg.1
    // Convert uint to int
    conv.i4
    // Add them together
    add
    // Return their sum.
    ret
}

The arguments get pushed onto the evaluation stack, the second one gets converted from unsigned to signed, and then they get added together. Right? Nope! The sign conversion is not there at all:

.method static 
        int32 Add(
            int32 'signed',
            uint32 'unsigned'
        ){
    // Load argument 0 (int signed)
    ldarg.0
    // Load argument 1 (uint unsigned)
    ldarg.1
    // Add them together (even though they have different signs)
    add
    // Return their sum.
    ret
}

Ok, so what is the problem here?

Oh no, I don't have to use one more instruction to convert some integers, what a terrible tragedy!

Well, while it may not seem that bad, this makes some things... less than intuitive.

I will now ask you a seemingly trivial question. If you had to convert a 32-bit unsigned value(uint) to a 64-bit signed type(long), which one of those instructions would you use?

conv.i8 - Convert to int64, pushing int64 on stack.

conv.u8 - Convert to unsigned int64, pushing int64 on stack.

At first, your intuition would suggest using conv.i8 - it is supposed to convert a value to signed int64(long). Your intuition would be, however, wrong.

This C# function:

static long UIntToLong(uint input){
    return (long)input;
}

Compiles into the following CIL:

.method assembly hidebysig static 
    int64 'UIntToLong' (
        uint32 input
    ){
    .maxstack 8

    ldarg.0
    // The *unsigned* conversion instruction - despite the signed result type!
    conv.u8
    ret
}

Let me explain what exactly is happening, and why on earth the instruction for unsigned conversion is used here. The real difference between conv.u8 and conv.i8 is the kind of extension they use.

conv.i8 uses sign extension - meaning it tries to preserve the sign bit of the input value.

So, when its input is an unsigned integer, it treats the most significant bit as a sign bit, even when we don't want it to. conv.u8 uses zero extension - it does not try to preserve the sign, and simply fills all the "new" bits with 0s.

You can imagine this as conv.i8 assuming its input is signed, and conv.u8 assuming its input is unsigned.

They are named in a pretty confusing way, but this is not a big issue - at least the runtime supports unsigned integers, unlike certain languages (I am looking at you, Java!).

Now, this is not a problem in all the languages using .NET. You don't have to think about, or even know this stuff. The smart people developing your compiler have you covered!

(Un)fortunately, I am the (allegedly) smart person writing the Rust to .NET compiler. So this, and other "small" details and edge cases, are sadly my problem.

Seeing as I can't count 100 test cases without messing up, I don't have big hopes ;).

.NET's magic 3rd variable-sized binary floating-point format

Did you know that .NET has a 3rd binary floating-point type (and no, I am not talking about decimal)? In CIL, you have direct equivalents of float and double - float32 and float64 - but there is also a 3rd type.

Well, what is it?

The confusingly named F type is supposed to be an internal implementation detail. The spec says:

The internal representation shall have precision and range greater than or equal to the nominal type. Conversions to and from the internal representation shall preserve value.

So, its size may vary across implementations. Ok, so what? Why does this internal type even matter? Well, it turns out that it (sometimes) matters quite a lot.

First of all, all floating-point instructions operate on this "F" type - when you load a float or a double onto the evaluation stack, it gets converted to the F type.

Still, most of the time, you can just pretend it does not exist - since its size depends on the context. In practice, it is 32-bit when you operate on floats, and 64-bit when you operate on doubles.

You can imagine you are directly operating on float32 and float64, and the "F" type never bothers you.

The "F" type always has just the right size for your operation: never too small, never too big. So, it should not be noticeable.

All right, so it is a weird type whose size depends on the context - but nothing is perfect. It may look odd, but it is just an implementation detail. Surely, it won't suddenly rear its ugly head and cause weird precision issues in one, very specific case?

Meet conv.r.un

There are 3 .NET instructions used for converting values to floating-point types:

conv.r4 - Convert to float32, pushing F on stack.

conv.r8 - Convert to float64, pushing F on stack.

conv.r.un - Convert unsigned integer to floating-point, pushing F on stack.

Can you spot the odd one out?

conv.r.un does not specify the size of its result. So, what is the size of the type it pushes onto the stack? The answer is... it depends. Can you spot the issue with this snippet of code?

.method static float64 ULongToDouble(uint64 'ulong'){
    ldarg.0
    // Converts to the F type - which may be 32-bit here!
    conv.r.un
    ret
}

This code may convert the uint64 to a 32-bit float type, and then convert that float into a double (float64). This results in a loss of precision. What is even worse, this behavior is not very consistent: sometimes I can reproduce it, sometimes I can't.

Well, there is one instruction missing: conv.r.un should be followed by conv.r8. Even though it seems like it is unnecessary, it is actually crucial this instruction is there.

This looks weird, but this is what the C# compiler would do. So, I just have to stick a conv.r8 there, and everything is fine again.

.method static float64 ULongToDouble(uint64 'ulong'){
    ldarg.0
    conv.r.un
    // Force the F type to 64 bits before returning.
    conv.r8
    ret
}

Once again, this is something that you are extremely unlikely to ever encounter. This is just an "invisible" implementation detail - but it is still interesting to see it pop up.

One of my favorite things about this project is just learning about the inner workings of Rust and .NET. As another example: did you know there is a cap on the total length of the strings in a .NET assembly?

Strings, strings, more strings

You can't store more than 4GB worth of strings in a .NET assembly. You may wonder: how on earth would you encounter that?

The answer is quite simple: a dumb approach to debugging.

You see, there are some things that make the runtime just... crash. No exception, no message, nothing. It just kind of... dies.

As an example, calling a null pointer using calli will just crash the runtime.

calli void ()

So, we can't get a backtrace and check what caused the problem.

How do we solve that?

Well, we can "simply" log the name of every single function we call to the console: this way, we at least know where we crashed. We can also log who called each function, and log both function entry and exit.

Originally, I did something dumb: I stored the whole message for each function call. So, the total string length was:

The number of function calls × 2 × 40+ bytes per message.

You can see how quickly things will start to add up.

The solution turned out to be quite simple: split the stored strings, and reassemble them at runtime. We can change this message:

Calling FN1 from FN2

into 4 strings: "Calling ", FN1, " from " and FN2.

So, now our total string length will just be:

The number of unique function names × average name length, plus the two shared fragments: "Calling " and " from ".

Much better.

This incredibly "advanced" debugging solution is, of course, disabled by default and only meant for diagnosing very weird edge cases, such as the runtime crashing without any messages.

However stupid it may be, it works.

Surprise for the end

Oh, did I forget to mention that rustc_codegen_clr can also compile Rust to C?

I don't know how to integrate that nicely into the article, but the project can also serve as a Rust to C compiler when you set the C_MODE config flag.


Well, it all circles back to GSoC. I was not sure if my main proposal would be accepted, so I thought about submitting another, "safer" one.

After a particularly bad migraine, I realized my internal IR (CIL trees) is basically just an AST, and I can convert it to C without major issues.

So, I kind of... did that? I wrote around 1K lines of code, and my Rust to .NET compiler now also outputs C, for some reason. Cool.

A lot of my tests already pass for the "C_MODE", and it can compile & run a very broken version of the Rust standard library.

The whole thing works by just pretending the C language is a very weird .NET runtime. I know this sounds very funny - but hey, this was just a weird side project.

I will probably write a bit more about it in the future, but this article is already a bit too long for my taste.

It was a nice experiment, it is relatively easy to maintain, and it helped a lot with debugging - so it will probably stay for now.

Mixed-mode assemblies, NATIVE_PASSTROUGH, and my linker

Some people have asked me about mixed-mode assemblies: assemblies containing both native code and .NET CIL.

I have made some progress toward that too. First of all, my linker can now link assemblies against shared libraries: it will emit all the info the .NET runtime needs to load functions from such a library. It can also link static native libraries. These come bundled with the final executable, and will also be loaded by the .NET runtime. So, my linker, cleverly named "linker", now enables you to bundle native code with your Rust compiled for .NET.

I call this (experimental) feature NATIVE_PASSTROUGH.

Now, this feature is not perfect by any means (it crashes when you try to use calling conventions other than "C"), but it is a significant step towards future mixed-mode assemblies.

Wrapping stuff up

Originally, I had more stuff to talk about, but I didn't want this article to become an unreadable wall of text.

Once again, if you have any questions, feel free to ask. You can use the Rust Zulip, open a discussion on the project's GitHub, or just ask in the Reddit comments. I will try to respond to all the feedback I receive.

So, goodbye, and see you next time!

You can also sponsor me on GitHub, if you want to.