So You Want to Build a Language VM - Part 24 - SSH Server: Part 2
Finishes adding an SSH server to the Iridium VM
Intro
So, change of plans. I've been fighting with thrussh for hours now trying to get SSH working. The key exchange was failing, and I had no idea why. It turned out that even their example client/server didn't work when I tried it. Despite spending a lot of time going through the source, I couldn't find the cause of the issue. The crate uses futures very heavily, which makes the program flow hard to follow, at least for me. I'm sure that somewhere in the world there is someone who has no problem following futures-based async code, but it isn't me. In light of this, I decided to go old school. I'm leaving the previous tutorial part up; I think it's important to see this aspect of projects as well: having to scrap something that doesn't work out and pivot to something else.
So what will we do? We're going to write a simple TCP server that spawns an OS thread per connection. The remote access functionality of Iridium is not meant to handle hundreds or thousands of users, so this will be fine from a performance perspective. The one sticking point is encryption: SSH would have given us encrypted connections. We'll have to implement that ourselves on top of the socket server, but we'll save that for another tutorial.
Starting Over
We’re going to start from this tag: https://gitlab.com/subnetzero/iridium/tags/0.0.22. We should have nothing to do with the thrussh crate in our code.
Updating CLI Options
First up, let's add some options to our src/bin/cli.yml. The user should be able to enable the remote access feature, specify a host to bind to, and a port to bind to.
Binding, Hosts, and Ports
Feel free to skip this section if you know what TCP ports are, and what binding to an interface and port means. If you don’t, read on! This will give you a quick overview.
TCP and UDP
Most network communication between computers utilizes TCP (Transmission Control Protocol) or UDP (User Datagram Protocol). Higher-level abstractions build on top of these; HTTP, for example, uses TCP underneath. This is a complex topic, so for now, remember these two points:
TCP is a dedicated connection that guarantees in-order delivery of data. Think of it as a phone conversation.
UDP is fire-and-forget. You send a packet to a server and it may or may not arrive; you won’t know if it does or not. Think of it as mailing a letter.
TCP is best for our needs.
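To make the difference concrete, here is a minimal sketch of both protocols using Rust's standard library. The address 127.0.0.1:2244 is just a placeholder; the UDP send succeeds whether or not anything is listening there, while the TCP connect will fail if nothing is:

use std::io::Write;
use std::net::{TcpStream, UdpSocket};

fn main() -> std::io::Result<()> {
    // TCP: establish a dedicated connection first; delivery and ordering
    // are guaranteed once the connection exists
    let mut tcp = TcpStream::connect("127.0.0.1:2244")?;
    tcp.write_all(b"hello over tcp")?;

    // UDP: no connection; the datagram goes out whether or not anyone is
    // listening, and we get no confirmation that it arrived
    let udp = UdpSocket::bind("127.0.0.1:0")?; // port 0 = any free local port
    udp.send_to(b"hello over udp", "127.0.0.1:2244")?;
    Ok(())
}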
Interfaces
If a server is on a network, it is using an interface. This is the piece of hardware that the network cable plugs into, and its abstraction in the operating system. Each interface has an IP address, which uniquely identifies that server on the network. IP addresses look like 192.168.1.10; each of the 4 fields can range from 0 to 255.
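As an aside, Rust's standard library models IPv4 addresses directly, which makes the "4 fields of 0 to 255" concrete. A quick sketch:

use std::net::Ipv4Addr;

fn main() {
    // each of the four fields (octets) is a u8, so 0-255 by construction
    let addr: Ipv4Addr = "192.168.1.10".parse().unwrap();
    assert_eq!(addr.octets(), [192, 168, 1, 10]);
    println!("{}", addr); // prints 192.168.1.10
}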
Note: Yes, I know, there is IPv6 and such. We're going to skip that for now.
If you are on a Mac or Linux computer, you can see your interfaces by typing ifconfig -a. An example is:
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 8c:85:90:7a:53:04
inet6 fe80::462:3eb0:435b:7753%en0 prefixlen 64 secured scopeid 0x8
inet 192.168.1.34 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
The interface name is en0, and its IP address is 192.168.1.34.
Ports
Linux servers have a range of ports that programs can bind to, from 1 to 65535. Think of these as phone extensions. Ports below 1024 are reserved as privileged ports, and only the root user can bind to them. Ports above that can be used by your program.
Binding
A program can bind to a combination of an interface and port. For example, HTTP servers bind to port 80, and HTTPS servers to port 443. Once a program has done that, external programs can reach it across the network. When you send a request to a different machine across a network, the destination IP and port number are included, and the OS on that machine knows how to route the message to the program listening on that interface and port.
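Here is a small sketch of binding from Rust, including the two common failure modes; the ports are arbitrary examples:

use std::net::TcpListener;

fn main() {
    // ports below 1024 are privileged; this fails unless we run as root
    match TcpListener::bind("127.0.0.1:80") {
        Ok(_) => println!("bound port 80 (are we root?)"),
        Err(e) => println!("could not bind port 80: {}", e),
    }

    // an unprivileged port normally works, unless another program holds it
    match TcpListener::bind("127.0.0.1:2244") {
        Ok(_listener) => println!("listening on 127.0.0.1:2244"),
        Err(e) => println!("could not bind port 2244: {}", e),
    }
}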
Back to CLI Options
The user needs to be able to specify the interface and port Iridium should listen on. We add those like so:
- ENABLE_REMOTE_ACCESS:
    help: Enables the remote server component of Iridium VM
    required: false
    takes_value: false
    long: enable-remote-access
    short: r
- LISTEN_PORT:
    help: Which port Iridium should listen for remote connections on. Defaults to 2244.
    required: false
    takes_value: true
    long: bind-port
    short: p
- LISTEN_HOST:
    help: Which address Iridium should listen for remote connections on. Defaults to "127.0.0.1".
    required: false
    takes_value: true
    long: bind-host
    short: h
The first option determines if remote access is enabled at all. By default, it is not. We want to do our best to be secure by default.
Parsing the Options
The next step is to check if remote access is enabled, and if so, what host and port the user wants. Open up src/bin/iridium.rs and put this block in after the call to get_matches():
if matches.is_present("ENABLE_REMOTE_ACCESS") {
    let port = matches.value_of("LISTEN_PORT").unwrap_or("2244");
    let host = matches.value_of("LISTEN_HOST").unwrap_or("127.0.0.1");
    start_remote_server(host.to_string(), port.to_string());
}
What this does is:
Check if the --enable-remote-access flag was passed when starting Iridium
Extract the port if provided, or default to 2244 if one was not
Extract the host if provided, or default to 127.0.0.1 if one was not
Call the function that starts the remote listener in a background thread
Important: 127.0.0.1 is a special IP address. Every computer that implements IPv4 has it. You may hear it referred to as the loopback address, or localhost. In fact, if you type ping localhost, you'll see it resolve to 127.0.0.1. There is another special address, 0.0.0.0, which means "bind to all interfaces on this machine". Binding to 127.0.0.1 by default is another secure-by-default choice.
Starting the Server
Let’s take a look at the function definition for starting a TCP server:
fn start_remote_server(listen_host: String, listen_port: String) {
    let _t = std::thread::spawn(move || {
        let mut sh = iridium::remote::server::Server::new(listen_host, listen_port);
        sh.listen();
    });
}
We start a separate thread to handle connections, so that the user can continue to interact with the terminal/CLI version while other users remote in. We create a new Server (we'll see that next) that is defined in another module, passing it our host and port. Then we call listen on that server.
Remote Module
We’re going to put our logic in a new module, src/remote
. In it, you’ll need 3 files: mod.rs
, client.rs
, and server.rs
. mod.rs
is simple:
pub mod server;
pub mod client;
Let’s take a look at what is in server.rs
:
use std::net::TcpListener;
use std::thread;

use remote::client::Client;

pub struct Server {
    bind_hostname: String,
    bind_port: String,
}
That’s our Server! Not much to it, right? Right now, we store the hostname and port, and that’s it. We’ll add more stuff later. Now the implementation is a bit more complex:
impl Server {
    pub fn new(bind_hostname: String, bind_port: String) -> Server {
        Server {
            bind_hostname,
            bind_port,
        }
    }

    pub fn listen(&mut self) {
        println!("Initializing TCP server...");
        let listener = TcpListener::bind(self.bind_hostname.clone() + ":" + &self.bind_port).unwrap();
        for stream in listener.incoming() {
            let stream = stream.unwrap();
            thread::spawn(|| {
                let mut client = Client::new(stream);
                client.run();
            });
        }
    }
}
The new function is straightforward, so I won't go over it in detail. listen is where the functionality is. Let's go through it line by line:
let listener = TcpListener::bind(self.bind_hostname.clone() + ":" + &self.bind_port).unwrap();
This creates the listening socket. When an external user tries to connect, this is what they talk to. Notice how we create a string of the form "hostname:port". This is the usual way of specifying what host and port you want to listen on. Success is not guaranteed. For example, if you try to bind to a port that some other program is listening on, it will fail. So we will want to remove the unwrap() and handle that more gracefully.
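As a sketch (not yet in the codebase), a more graceful version of that line might look like this:

let addr = format!("{}:{}", self.bind_hostname, self.bind_port);
let listener = match TcpListener::bind(&addr) {
    Ok(listener) => listener,
    Err(e) => {
        // report the failure (port in use, permission denied, ...) and bail
        println!("Unable to bind to {}: {}", addr, e);
        return;
    }
};

With that noted, the next line in listen is: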
for stream in listener.incoming() {
The listening socket has a function, incoming, that blocks until there is a connection attempt; the iterator it returns never ends, so this is an infinite loop. When someone does connect, we get a stream object:
let stream = stream.unwrap();
thread::spawn(|| {
    let mut client = Client::new(stream);
    client.run();
});
Whenever a new client connects, we create a new Client struct and call run(). Our listener socket loop is then complete, and it goes back to waiting for more connections.
Client
Let’s look at what is in client.rs
:
use std::io::{BufRead, Write};
use std::io::{BufReader, BufWriter};
use std::net::TcpStream;
use std::thread;

use repl;

pub struct Client {
    reader: BufReader<TcpStream>,
    writer: BufWriter<TcpStream>,
    raw_stream: TcpStream,
    repl: repl::REPL,
}
Each of our clients has three copies of the same stream we saw earlier in the server, as well as its own REPL. You're probably wondering why we copy the stream. It's so we can wrap them in Rust's BufReader and BufWriter, which let us send and receive data at a higher level of abstraction than raw bytes. The raw_stream is there so that we can create the other two. That will make sense in a minute, I promise. =)
Let’s look at the implementation:
impl Client {
    pub fn new(stream: TcpStream) -> Client {
        // TODO: Handle this better
        let reader = stream.try_clone().unwrap();
        let writer = stream.try_clone().unwrap();
        let repl = repl::REPL::new();
        Client {
            reader: BufReader::new(reader),
            writer: BufWriter::new(writer),
            raw_stream: stream,
            repl: repl,
        }
    }

    // more functions...
}
Because we need a reader and a writer, we use the stream's ability to clone itself to create the additional handles. This would be cleaner if Rust had self-referential structs (https://internals.rust-lang.org/t/improving-self-referential-structs/4808), but it doesn't yet. The raw stream is kept around in case we need more clones for some reason in the future.
Important: These are not really additional network connections. It's more like multiple pointers to the same stream.
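A tiny sketch of that idea, assuming a connection already exists:

use std::io::Write;
use std::net::TcpStream;

fn demo(stream: TcpStream) -> std::io::Result<()> {
    // try_clone duplicates the handle, not the underlying connection
    let mut handle_a = stream.try_clone()?;
    let mut handle_b = stream.try_clone()?;
    // bytes written through either handle travel over the same TCP stream
    handle_a.write_all(b"from handle a\n")?;
    handle_b.write_all(b"from handle b\n")?;
    Ok(())
}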
There’s a few more functions in client.rs
:
fn w(&mut self, msg: &str) -> bool {
    match self.writer.write_all(msg.as_bytes()) {
        Ok(_) => match self.writer.flush() {
            Ok(_) => true,
            Err(e) => {
                println!("Error flushing to client: {}", e);
                false
            }
        },
        Err(e) => {
            println!("Error writing to client: {}", e);
            false
        }
    }
}
This is a general write function. It takes any string, converts it to bytes, and uses our BufWriter<TcpStream> to send it to the client. We call flush to make sure the data is sent immediately, instead of sitting in the buffer.
Next up, we have:
fn write_prompt(&mut self) {
    self.w(repl::PROMPT);
}
REPL
There’s two more functions in the Client struct, but we’re going to hop over to the repl module and look at some changes there first to give context. Open up src/repl/mod.rs
. You’ll note the addition of some static strings:
pub static REMOTE_BANNER: &'static str = "Welcome to Iridium! Let's be productive!";
pub static PROMPT: &'static str = ">>> ";
These are just for convenience, so we can use them in other modules.
In the REPL struct, we have added two things:
pub struct REPL {
    command_buffer: Vec<String>,
    vm: VM,
    asm: Assembler,
    scheduler: Scheduler,
    pub tx_pipe: Option<Box<Sender<String>>>,
    pub rx_pipe: Option<Box<Receiver<String>>>,
}
See tx_pipe and rx_pipe? Those are being created in the constructor now:
pub fn new() -> REPL {
    let (tx, rx): (Sender<String>, Receiver<String>) = mpsc::channel();
    REPL {
        vm: VM::new(),
        command_buffer: vec![],
        asm: Assembler::new(),
        scheduler: Scheduler::new(),
        tx_pipe: Some(Box::new(tx)),
        rx_pipe: Some(Box::new(rx)),
    }
}
Channels
These are just normal Rust mpsc channels. Instead of our REPL writing directly to stdout via println!, it will write to the tx_pipe. Whatever holds the rx_pipe end can listen on it for the REPL's output.
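If you haven't used mpsc channels before, here is a minimal, self-contained example of the pattern the REPL now relies on:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // the "REPL" side owns tx and sends its output through the channel
    let handle = thread::spawn(move || {
        tx.send("Listing registers and all contents:".to_string()).unwrap();
    });

    // whoever holds rx (a stdout printer, a remote client, ...) receives it
    println!("{}", rx.recv().unwrap());
    handle.join().unwrap();
}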
If you look through the various REPL functions, you'll see I've replaced the println! macros with something like this:
self.send_message(format!("Please enter the path to the file you wish to load: "));
There’s two helper functions in the REPL now:
pub fn send_message(&mut self, msg: String) {
    match &self.tx_pipe {
        Some(pipe) => {
            pipe.send(msg + "\n");
        },
        None => {}
    }
}

pub fn send_prompt(&mut self) {
    match &self.tx_pipe {
        Some(pipe) => {
            pipe.send(PROMPT.to_owned());
        },
        None => {}
    }
}
There’s also a new function called run_single
:
pub fn run_single(&mut self, buffer: &str) -> Option<String> {
    if buffer.starts_with(COMMAND_PREFIX) {
        self.execute_command(&buffer);
        return None;
    } else {
        let program = match program(CompleteStr(&buffer)) {
            Ok((_remainder, program)) => Some(program),
            Err(e) => {
                self.send_message(format!("Unable to parse input: {:?}", e));
                self.send_prompt();
                None
            }
        };
        match program {
            Some(p) => {
                let mut bytes = p.to_bytes(&self.asm.symbols);
                self.vm.program.append(&mut bytes);
                self.vm.run_once();
                None
            }
            None => None,
        }
    }
}
This is because the older run function uses an infinite loop. If a remote client called that, they would never get a response.
Summary of the REPL Changes
I know that’s a lot of changes to the REPL to absorb, so here is a neatly itemized list:
REPLs now send output over a Rust mpsc channel
The receiver for that channel can be a remote client
Remote client input is sent to the run_single function
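To tie these together, here is a sketch of how a caller inside the iridium crate might drive the new interface; it mirrors what client.rs does, minus the socket:

// a sketch, assuming we are in a module of the iridium crate
use repl::REPL;

fn demo() {
    let mut repl = REPL::new();
    // take the receiving end, just like recv_loop does in client.rs
    let rx = repl.rx_pipe.take().unwrap();

    // feed one command in, then drain whatever output the REPL produced
    repl.run_single("!registers");
    while let Ok(msg) = rx.try_recv() {
        print!("{}", msg);
    }
}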
Back to the Client
The last two functions we are going to look at are:
fn recv_loop(&mut self) {
    let rx = self.repl.rx_pipe.take();
    // TODO: Make this safer on unwrap
    let mut writer = self.raw_stream.try_clone().unwrap();
    let _t = thread::spawn(move || {
        let chan = rx.unwrap();
        loop {
            match chan.recv() {
                Ok(msg) => {
                    writer.write_all(msg.as_bytes());
                    writer.flush();
                },
                Err(_) => {}
            }
        }
    });
}
pub fn run(&mut self) {
    self.recv_loop();
    let mut buf = String::new();
    let banner = repl::REMOTE_BANNER.to_owned() + "\n" + repl::PROMPT;
    self.w(&banner);
    loop {
        // read_line appends, so clear the buffer each time around
        buf.clear();
        match self.reader.read_line(&mut buf) {
            Ok(0) => {
                // 0 bytes read means the client disconnected
                break;
            }
            Ok(_) => {
                self.repl.run_single(buf.trim_right());
            }
            Err(e) => {
                println!("Error receiving: {:#?}", e);
                break;
            }
        }
    }
}
When run is called, it calls recv_loop. This takes the rx_pipe from the Client's REPL, spawns a thread, and continually listens for output on it. When output arrives (the REPL has sent something), we write it out to the client.
The rest of the run function prints the banner and then enters an infinite loop, listening for commands from the remote client.
Demonstration
Let’s see the remote access in action:
$ iridium --enable-remote-access
Initializing TCP server...
>>> Welcome to Iridium! Let's be productive!
>>>
And in another terminal window:
$ telnet localhost 2244
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Welcome to Iridium! Let's be productive!
>>> !registers
Listing registers and all contents:
[
0,
// snip a lot more zeros
0,
0
]
End of Register Listing
>>>
This design can handle a good number of simultaneous clients, but it is far from efficient due to the one-OS-thread-per-connection model. We'll work on improving that later.
Security
You’ll note that there is no password, no username, nothing. Anything sent over the network to the Iridium VM can be ready by anyone that can see the network traffic. We’ll need to layer encryption, authentication, and authorization on top of this. But this tutorial is already long enough. =)
End
That’s it for this tutorial! You can find the code as it should be at the end of this tutorial here: https://blog.subnetzero.io/project/iridium-vm/building-language-vm-part-23/. I know there were a lot of changes I glossed over, but we’re getting to the point that it isn’t practical to do so. If you have any questions, please post in the comments, chat, e-mail, etc. I’m more than happy to answer them.
See you next time!
If you need some assistance with any of the topics in the tutorials, or just devops and application development in general, we offer consulting services. Check it out over here or click Services along the top.