So You Want to Build a Language VM - Part 17 - Basic Threads
Adds executing a program in separate OS threads
Intro
Hey everyone! In this tutorial, we’re going to start adding multithreading to the Iridium VM. Please make sure you are starting from this point in the code: https://gitlab.com/subnetzero/iridium/tags/0.0.16. Going forward, I’m going to make a tag per tutorial so that everyone starts from a common point.
A Note on Assumed Knowledge
I write these tutorials targeted toward more advanced users. I sometimes skip small steps, such as "add in this line to file X".
Multithreading
The current version of Iridium is single-process, single-threaded. When you execute an application, the VM can’t do anything else until it has finished. This is how the VMs for languages like Python and Ruby work. We want Iridium to work more like the BEAM VM, which provides a sort of shell into the VM: from it, you can see running processes, terminate them, and perform other administrative tasks.
This will take a lot more than one tutorial part, of course. =) But a good first step is being able to use the REPL to run a program in a separate OS thread.
Ready? Here we go!
Threads
We should probably figure out how to create threads in Rust first. The Rust book covers them well, so I’ll steal an example from it:
use std::thread;
use std::time::Duration;

fn main() {
    thread::spawn(|| {
        for i in 1..10 {
            println!("hi number {} from the spawned thread!", i);
            thread::sleep(Duration::from_millis(1));
        }
    });

    for i in 1..5 {
        println!("hi number {} from the main thread!", i);
        thread::sleep(Duration::from_millis(1));
    }
}
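One thing to notice about that example: the main thread does not wait for the spawned thread, so the program can exit before the spawned thread gets through all of its prints. thread::spawn returns a handle we can call join() on if we want to block until the spawned thread finishes. A minimal sketch:

use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        for i in 1..10 {
            println!("hi number {} from the spawned thread!", i);
        }
    });

    // Block the main thread until the spawned thread has finished.
    handle.join().unwrap();
}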
Can we add a REPL command that accepts a path to a file of assembly code, compiles it, and hands it off to a VM in a background thread? Could it be that simple?! Let’s find out!
REPL Flow
We should probably put a bit of thought into the details of this before charging ahead. A slightly more boring path, but our users will thank us.
Maybe.
OK, probably not.
Anyway, here is a possible workflow:
Welcome to Iridium! Let's be productive!
>>> .spawn
Please enter the path to the file you wish to load: test.iasm
Starting program in background thread
>>>
Questions arise when thinking about this:
How do we get the output from the program?
How do we display the output from the program to the user?
Do we need to track all programs run in the background?
How will we allow those programs to get input?
I’m sure we could think of more, but let’s address the issue of tracking programs.
Auditability and PIDs
Whenever you execute a program on a computer running Linux, the operating system gives it something called a PID, or process identifier. Linux guarantees that a PID is unique amongst currently running processes, non-negative, and between 1 and 32,767. After PID assignment reaches that number, it begins to wrap around. If a process starts, gets a PID of 600, runs, then stops, 600 can be re-used.
Note | Why that oddly specific upper bound? Because that’s the default in most Linux kernels. As computing needs and power have grown, the idea of a server running over 32k processes is no longer as absurd as it once was. To accommodate this, you can change the max PID to around four million. |
When we start a REPL session, that is seen as one process. We can start OS threads in the background, but the OS does not know anything specific about what the Iridium VM is doing. If we want to track what code the VM runs and the results, we’ll have to do the same.
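We haven’t built any of that yet, but just to make the idea concrete, here is a rough sketch of the kind of bookkeeping I have in mind. The ProgramTable name and its fields are placeholders for illustration, not necessarily the structure we’ll end up with:

use std::collections::HashMap;
use std::thread::JoinHandle;

// Hypothetical bookkeeping: map each PID we hand out to the JoinHandle of
// the background thread running that program. Names here are placeholders.
struct ProgramTable {
    next_pid: u32,
    running: HashMap<u32, JoinHandle<u32>>,
}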
A Slight Digression
As I was poking around with 0.0.16, this happened:
>>> .load_file
Please enter the path to the file you wish to load:
Attempting to load program from file...
thread 'main' panicked at 'File not found: Os { code: 2, kind: NotFound, message: "No such file or directory" }', libcore/result.rs:945:5
It seems the REPL crashes when it can’t find a file. We’re going to fix that real quick.
Note | Since we are getting into new territory for me, we’ll probably have a lot of these little side quests going forward. =) |
The troublesome line is this one, in src/repl/mod.rs:
let mut f = File::open(Path::new(&filename)).expect("File not found");
Let’s change it to handle the error case without panicking:
let filename = Path::new(&tmp);
let mut f = match File::open(&filename) {
    Ok(f) => { f }
    Err(e) => {
        println!("There was an error opening that file: {:?}", e);
        continue;
    }
};
We can also get rid of that double Path::new() call.
Testing
After making that change, all tests still pass, and if we try to give it a non-existent or bad file name now, we get:
Welcome to Iridium! Let's be productive!
>>> .load_file
Please enter the path to the file you wish to load:
Attempting to load program from file...
There was an error opening that file: Os { code: 2, kind: NotFound, message: "No such file or directory" }
>>> .load_file
Please enter the path to the file you wish to load: doh
Attempting to load program from file...
There was an error opening that file: Os { code: 2, kind: NotFound, message: "No such file or directory" }
>>>
Yay! Committing that, and we can go back to threading.
Back to Threading
Let’s tackle the whole spawning-a-thread thing first. Make a new module, src/scheduler/mod.rs. We’re using a new module because I suspect this will be the beginnings of a more complex scheduler we’ll use later. We can also use a Scheduler struct to track information, such as PIDs.
In the new module, put the following:
use std::thread;

use vm::VM;

#[derive(Default)]
pub struct Scheduler {
}

impl Scheduler {
    pub fn new() -> Scheduler {
        Scheduler{}
    }

    pub fn get_thread(&self, vm: VM) {
    }
}
Function Signature
Let’s look at the signature for the thread::spawn() function:
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
I am going to go over every line; a few new advanced Rust concepts are introduced here.
First off, notice that it is generic over F and T, and returns a JoinHandle<T>. What’s a JoinHandle<T>, you ask? Great question! You can think of it as a handle to the thread that is executing: it is not the thread itself, nor is it a simple pointer to it. Whatever the thread does, it has to return a T.
The type parameter F is a closure. The where constraint here indicates what kinds of closures are allowed:
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
F must be Send + 'static and implement FnOnce() -> T, and T must be Send + 'static as well. This means two things:
The closure must return a T
FnOnce closures can be called at most once (there are other closure traits when needed, such as Fn and FnMut)
Requiring 'static means the closure (and its return value) must be able to live for the life of the program. NOTE: This means the closure itself, not a specific execution of it.
Here’s a simple example:
let join_handle: thread::JoinHandle<u32> = thread::spawn(|| {
    // 10.0 // This would fail, because it is a float, not a u32
    10 // This succeeds, because it is a u32
});
Note | In docs, you may see JoinHandle<T> written as JoinHandle<_>. The _ is a placeholder for the concrete type; in a let binding the compiler will infer it for you, but you can’t use _ in a function signature. |
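To actually get that u32 back out of the JoinHandle, we call join() on it. join() blocks until the thread finishes and hands back a Result wrapping the closure’s return value (it is an Err if the thread panicked). A quick sketch:

let join_handle: thread::JoinHandle<u32> = thread::spawn(|| 10);

// join() blocks until the spawned thread is done. It returns a Result,
// because the thread may have panicked instead of returning a value.
match join_handle.join() {
    Ok(value) => println!("The thread returned: {}", value),
    Err(_) => println!("The thread panicked!"),
}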
Since whatever our closure returns becomes the JoinHandle’s T, we have to change our VM::run() function to return a value. Easy-peasy:
/// Wraps execution in a loop so it will continue to run until done or there is an error
/// executing instructions.
pub fn run(&mut self) -> u32 {
    // TODO: Should setup custom errors here
    if !self.verify_header() {
        println!("Header was incorrect");
        return 1;
    }
    // If the header is valid, we need to change the PC to be at byte 65.
    self.pc = 65;
    let mut is_done = false;
    while !is_done {
        is_done = self.execute_instruction();
    }
    0
}
Threads
Back to the spawning pool, zergling!
The actual thread code itself is simple: vm.run(). get_thread will accept a VM and create a thread that executes vm.run() until it returns a value. This is simple because we move the entire VM into the closure, giving the thread ownership of it, so borrowck remains pleased. As we make the scheduler more advanced, we’ll have to get a bit more creative. For now, this will work:
impl Scheduler {
    pub fn new() -> Scheduler {
        Scheduler{}
    }

    /// Takes a VM and runs it in a background thread
    pub fn get_thread(&self, mut vm: VM) -> thread::JoinHandle<u32> {
        thread::spawn(move || {
            vm.run()
        })
    }
}
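Just to show what a caller gets back, here’s a quick usage sketch (not part of the REPL yet); the unwrap on join() is only for illustration:

let scheduler = Scheduler::new();
let vm = VM::new();

// Hand the VM off to a background thread...
let handle = scheduler.get_thread(vm);

// ...and, if we want to block until it finishes, join on the handle to get
// back the u32 that run() returned.
let exit_code = handle.join().unwrap();
println!("VM exited with code {}", exit_code);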
For now, let’s copy the Linux model and assign each program a unique PID, starting at zero. Let’s change our Scheduler struct like so:
pub struct Scheduler {
    next_pid: u32,
    max_pid: u32,
}

impl Scheduler {
    pub fn new() -> Scheduler {
        Scheduler {
            next_pid: 0,
            max_pid: 50000,
        }
    }
}
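We won’t wire PID assignment into get_thread just yet, but the allocation logic itself is tiny. Here is a rough sketch of one way a helper could hand out PIDs, wrapping around once we pass max_pid; the method name is hypothetical, and checking whether a wrapped PID is still in use is left for later:

impl Scheduler {
    /// Hypothetical helper: hand out the next PID, wrapping back to zero
    /// once we go past max_pid. Checking whether a re-used PID still belongs
    /// to a running program is left for the next part.
    fn assign_pid(&mut self) -> u32 {
        let pid = self.next_pid;
        self.next_pid += 1;
        if self.next_pid > self.max_pid {
            self.next_pid = 0;
        }
        pid
    }
}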
Adding Scheduler to REPL
Much like we gave the REPL shell its own VM, we can give it a scheduler, like so:
use scheduler::Scheduler;

/// Core structure for the REPL for the Assembler
pub struct REPL {
    command_buffer: Vec<String>,
    vm: VM,
    asm: Assembler,
    scheduler: Scheduler
}

impl REPL {
    /// Creates and returns a new assembly REPL
    pub fn new() -> REPL {
        REPL {
            vm: VM::new(),
            command_buffer: vec![],
            asm: Assembler::new(),
            scheduler: Scheduler::new()
        }
    }
I’m not including the rest of the REPL functions again. You can scroll up for them. =)
Assembler
For now, we’ll still have the REPL handle assembling, and will give the thread a VM with bytecode ready to run.
Important | The ".spawn" and ".load_file" commands are nearly identical. Let’s factor those into smaller functions. |
First New Function
fn get_data_from_load(&mut self) -> Option<String> {
    let stdin = io::stdin();
    print!("Please enter the path to the file you wish to load: ");
    io::stdout().flush().expect("Unable to flush stdout");

    let mut tmp = String::new();
    stdin.read_line(&mut tmp).expect("Unable to read line from user");
    println!("Attempting to load program from file...");

    let tmp = tmp.trim();
    let filename = Path::new(&tmp);
    let mut f = match File::open(&filename) {
        Ok(f) => { f }
        Err(e) => {
            println!("There was an error opening that file: {:?}", e);
            return None;
        }
    };
    let mut contents = String::new();
    match f.read_to_string(&mut contents) {
        Ok(program_string) => {
            Some(program_string.to_string())
        },
        Err(e) => {
            println!("there was an error reading that file: {:?}", e);
            None
        }
    }
}
Put it in repl/mod.rs as part of the REPL impl. Now we can make .spawn and .load_file much smaller:
".spawn" => {
let contents = self.get_data_from_load();
if let Some(contents) = contents {
match self.asm.assemble(&contents) {
Ok(mut assembled_program) => {
println!("Sending assembled program to VM");
self.vm.program.append(&mut assembled_program);
println!("{:#?}", self.vm.program);
self.scheduler.get_thread(self.vm.clone());
},
Err(errors) => {
for error in errors {
println!("Unable to parse input: {}", error);
}
continue;
}
}
} else { continue; }
}
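One small step I’m glossing over: self.vm.clone() only compiles if the VM struct implements Clone. If yours doesn’t already, adding Clone to its derive attribute should be enough, assuming all of its fields are themselves Clone:

// Add Clone to whatever derives the VM struct already has.
#[derive(Clone)]
pub struct VM {
    // ...existing fields unchanged...
}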
Testing
We can start with a program with but one instruction, HLT. You can find it under docs/examples/iasm. Let’s see what happens!
>>> .spawn
Please enter the path to the file you wish to load: /Users/fletcher/Projects/iridium-book/docs/examples/iasm/hlt.iasm
Attempting to load program from file...
There was an error parsing the code: Error(Code(CompleteStr("4"), Many1))
Unable to parse input: There was an error parsing the code: Error(Code(CompleteStr("4"), Many1))
Doh. Time to debug.
<time passed>
Aha! You’ll notice in this section:
match f.read_to_string(&mut contents) {
    Ok(program_string) => {
        Some(program_string.to_string())
    },
    Err(e) => {
        println!("there was an error reading that file: {:?}", e);
        None
    }
}
read_to_string reads the file’s contents into the String provided as an argument and returns the number of bytes it read. In our match statement, we converted that number of bytes into a string and returned it. That string of a number is what the parser refused to parse.
The fix is simple:
match f.read_to_string(&mut contents) {
    Ok(_bytes_read) => {
        Some(contents)
    },
    Err(e) => {
        println!("there was an error reading that file: {:?}", e);
        None
    }
}
Let’s try again:
Welcome to Iridium! Let's be productive!
>>> .spawn
Please enter the path to the file you wish to load: docs/examples/iasm/hlt.iasm
Attempting to load program from file...
Loaded conents: Some(
".data\n\n.code\nload $0 #100\nhlt\n"
)
Did not find any errors in the first phase
Sending assembled program to VM
[
45,
50,
49,
45,
0,
0,
<snip a ton of zeros>
0,
0,
100,
5,
0,
0,
0
]
>>> thread '<unnamed>' panicked at 'index out of bounds: the len is 72 but the index is 72', /Users/travis/build/rust-lang/rust/src/libcore/slice/mod.rs:2079:10
note: Run with `RUST_BACKTRACE=1` for a backtrace.
After our assembler adds in the header and such, the final program size is going to be at least 65 bytes: 64 for the header, and at least 1 for the instructions. Here, the two 4-byte instructions bring the total to 72 bytes, so valid indices run from 0 to 71, and reading index 72 panics. This looks like an off-by-one error in our program counter: the program vector’s total length is 72, and the execution loop walked one byte past it. I have a suspicion…
Check out the run function in vm.rs:
/// Wraps execution in a loop so it will continue to run until done or there is an error
/// executing instructions.
pub fn run(&mut self) -> u32 {
    // TODO: Should setup custom errors here
    if !self.verify_header() {
        println!("Header was incorrect");
        return 1;
    }
    // If the header is valid, we need to change the PC to be at byte 65.
    self.pc = 65;
    let mut is_done = false;
    while !is_done {
        is_done = self.execute_instruction();
    }
    0
}
See how it presets the program counter to 65? The header takes up bytes 0 through 63, so the first instruction actually starts at index 64. By starting at 65, the VM skips the first byte of the program and is always ahead by one.
Change that to 64 and let’s try again…
>>> .spawn
Please enter the path to the file you wish to load: docs/examples/iasm/hlt.iasm
Attempting to load program from file...
Loaded conents: Some(
".data\n\n.code\nload $0 #100\nhlt\n"
)
Did not find any errors in the first phase
Sending assembled program to VM
[
45,
50,
49,
45,
0,
0,
<snip many zeros>
0,
0,
100,
5,
0,
0,
0
]
>>> HLT encountered
Hey, it ran and everything! In a background thread! Let’s check our registers to see if we can see the values:
.registers
Listing registers and all contents:
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
End of Register Listing
Whaaa?
Because the VM ran in a background thread, it had its own set of registers and everything else. When we type .registers in the REPL, we’re still looking at the registers for the VM created and used by the REPL in the main thread.
In the background thread, when run terminated, all those data structures were gone…like tears in the rain.
(Sorry not sorry)
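If we wanted the REPL to at least see whether the background run succeeded, one option (hinted at by get_thread’s return type) would be to hold on to the JoinHandle that our .spawn handler currently throws away. A rough sketch of the idea inside the .spawn arm, not necessarily what we’ll do in the next part:

// Hypothetical: keep the handle instead of dropping it...
let handle = self.scheduler.get_thread(self.vm.clone());

// ...and later block on it to recover the u32 exit value from run().
match handle.join() {
    Ok(code) => println!("Background program finished with code {}", code),
    Err(_) => println!("Background program panicked"),
}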
A Surprise Failure!
Imagine my surprise when I ran cargo test at this point and it showed five failed tests: test_sub_opcode, test_mul_opcode, test_div_opcode, test_add_opcode, and test_load_opcode. The errors were:
failures:
---- vm::tests::test_add_opcode stdout ----
thread 'vm::tests::test_add_opcode' panicked at 'index out of bounds: the len is 69 but the index is 69', /Users/travis/build/rust-lang/rust/src/libcore/slice/mod.rs:2079:10
note: Run with `RUST_BACKTRACE=1` for a backtrace.
---- vm::tests::test_div_opcode stdout ----
thread 'vm::tests::test_div_opcode' panicked at 'index out of bounds: the len is 69 but the index is 69', /Users/travis/build/rust-lang/rust/src/libcore/slice/mod.rs:2079:10
---- vm::tests::test_load_opcode stdout ----
Illegal instruction encountered
thread 'vm::tests::test_load_opcode' panicked at 'assertion failed: `(left == right)`
left: `1`,
right: `500`', src/vm.rs:336:9
---- vm::tests::test_mul_opcode stdout ----
thread 'vm::tests::test_mul_opcode' panicked at 'index out of bounds: the len is 69 but the index is 69', /Users/travis/build/rust-lang/rust/src/libcore/slice/mod.rs:2079:10
---- vm::tests::test_sub_opcode stdout ----
thread 'vm::tests::test_sub_opcode' panicked at 'index out of bounds: the len is 69 but the index is 69', /Users/travis/build/rust-lang/rust/src/libcore/slice/mod.rs:2079:10
Hello again, off by one error, my old friend. We meet again!
To fix it, I had to make a slight tweak to prepend_header in the vm.rs test module:
fn prepend_header(mut b: Vec<u8>) -> Vec<u8> {
    let mut prepension = vec![];
    for byte in PIE_HEADER_PREFIX.into_iter() {
        prepension.push(byte.clone());
    }
    // Pad with zeros until the header is PIE_HEADER_LENGTH bytes long, so the
    // program body starts at the offset the VM expects.
    while prepension.len() < PIE_HEADER_LENGTH {
        prepension.push(0);
    }
    prepension.append(&mut b);
    prepension
}
Can you spot the difference? =)
Wrap Up
We have found and fixed a few bugs, and we now have a primitive way of running applications in background threads that we can build on later. In the next section, we’ll finish up PID tracking.
This is only the beginning of the cool features we’ll build into our VM. =) You can find the final form of the code after this tutorial in GitLab under the tag 0.0.17.
If you need some assistance with any of the topics in the tutorials, or just devops and application development in general, we offer consulting services. Check it out over here or click Services along the top.