Rust on Google Cloud Run

I really liked Richard Seroter’s comparison of different programming languages’ size and performance on Google Cloud Run, and decided to run the same test with some more languages. So in this post it’s Rust.
Rust is a relatively new language that has gained enormous popularity in the last few years, not only for low-level driver and system programming, but also for web and microservice development. It has consistently topped the Stack Overflow Developer Survey’s most loved languages section, and has a lot of buzz as the next big thing for both server-side programming (system layer & microservices) and, to some extent, client-side programming (using WASM in the browser).
Demo Rust Service
For this test of microservice deployment size and speed, I created a service nearly identical to Richard’s example (see the source code on GitHub here).
Here’s my employee.rs file with a DTO struct for the employee data:
use serde::{Deserialize, Serialize};

#[derive(Debug, Deserialize, Serialize)]
pub struct Employee {
    pub Id: String,
    pub FullName: String,
    pub Location: String,
    pub JobTitle: String,
}
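As a side note, these PascalCase field names compile but will trigger the compiler’s non_snake_case warnings. If you’d rather keep idiomatic snake_case fields, serde can rename them during (de)serialization; a minimal sketch that produces the same JSON shape:

use serde::{Deserialize, Serialize};

// Same wire format as above, but with conventional Rust field names;
// serde renames id -> "Id", full_name -> "FullName", and so on.
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "PascalCase")]
pub struct Employee {
    pub id: String,
    pub full_name: String,
    pub location: String,
    pub job_title: String,
}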
Next, here are the two functions in my main.rs that serve the employee data and run the web service. First, the handler:
use std::convert::Infallible;

use hyper::{Body, Request, Response};

mod employee;

// Handler that serializes the hardcoded employee list to a JSON response.
async fn employees(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    let v: Vec<employee::Employee> = vec![
        employee::Employee {
            Id: String::from("100"),
            FullName: String::from("Jack Donaghy"),
            JobTitle: String::from("Writer"),
            Location: String::from("NYC"),
        },
        employee::Employee {
            Id: String::from("101"),
            FullName: String::from("Liz Lemon"),
            JobTitle: String::from("Executive"),
            Location: String::from("NYC"),
        },
    ];
    Ok(Response::new(serde_json::to_string(&v).unwrap().into()))
}
Here’s our main function with the listener configuration.
use std::net::SocketAddr;

use hyper::service::{make_service_fn, service_fn};
use hyper::Server;

#[tokio::main]
async fn main() {
    // Cloud Run routes traffic to port 8080 by default, so bind to 0.0.0.0:8080.
    let addr = SocketAddr::from(([0, 0, 0, 0], 8080));
    // Serve every connection with the employees handler above.
    let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(employees)) });
    let server = Server::bind(&addr).serve(make_svc);
    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}
And that’s it. After running cargo run locally, this service spins up and answers requests for employee data on port 8080.
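For completeness, here’s a minimal Cargo.toml dependency sketch that matches the imports above. The Server/make_service_fn API used here is from hyper 0.14; the package name and exact versions are assumptions, so check the repo for the real manifest.

[package]
name = "rustservice"   # assumed name; see the repo for the actual manifest
version = "0.1.0"
edition = "2021"

[dependencies]
hyper = { version = "0.14", features = ["full"] }  # Server and make_service_fn live here
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"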
Build and deploy to Cloud Run
So now let’s build a Docker image and deploy it to Cloud Run; see the file deploy_gcp.sh for details (it takes the GCP project ID as its first argument):
# Build and publish image to our cloud registry
gcloud builds submit --tag gcr.io/$1/rustservice

# Deploy image to Cloud Run and allow unauthenticated traffic to the service
gcloud run deploy rustservice --image gcr.io/$1/rustservice \
  --platform managed --project $1 --region us-central1 \
  --allow-unauthenticated
Now we have our image published to our registry, at a pretty compact 10.9 MB using Google’s Distroless CC base image (a stripped-down image meant only for running applications; the build itself is done using the full 1.6 GB Rust image).
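The Dockerfile itself isn’t shown here, but a two-stage build along these lines produces that kind of image: compile with the full Rust toolchain image, then copy only the release binary into Distroless CC. The binary name and Rust version below are assumptions; see the repo for the actual Dockerfile.

# Stage 1: build a release binary with the full (~1.6 GB) Rust toolchain image
FROM rust:1.70 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: copy only the binary into the minimal Distroless CC runtime image
FROM gcr.io/distroless/cc
COPY --from=builder /app/target/release/rustservice /rustservice
CMD ["/rustservice"]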
The service is also already deployed to Cloud Run, so we can test it with curl.
curl https://rustservice2-qtw3rvj3ya-uc.a.run.app | jq
This returns our test data:
[
  {
    "Id": "100",
    "FullName": "Jack Donaghy",
    "Location": "NYC",
    "JobTitle": "Writer"
  },
  {
    "Id": "101",
    "FullName": "Liz Lemon",
    "Location": "NYC",
    "JobTitle": "Executive"
  }
]
Cold run
Now let’s run the same performance test that was done with the other languages, using hey (200 requests, 10 concurrent):
hey -n 200 -c 10 https://rustservice2-qtw3rvj3ya-uc.a.run.app
Here are the cold start results (calling from a VM running in the same us-central1 (Iowa) region).
Summary:
  Total:        0.8240 secs
  Slowest:      0.6409 secs
  Fastest:      0.0051 secs
  Average:      0.0271 secs
  Requests/sec: 242.7116

  Total data:   30600 bytes
  Size/request: 153 bytes

Response time histogram:
  0.005 [1]   |
  0.069 [189] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.132 [0]   |
  0.196 [0]   |
  0.259 [0]   |
  0.323 [2]   |
  0.387 [6]   |■
  0.450 [0]   |
  0.514 [0]   |
  0.577 [0]   |
  0.641 [2]   |

Latency distribution:
  10% in 0.0060 secs
  25% in 0.0064 secs
  50% in 0.0072 secs
  75% in 0.0087 secs
  90% in 0.0127 secs
  95% in 0.3021 secs
  99% in 0.6339 secs

Details (average, fastest, slowest):
  DNS+dialup: 0.0012 secs, 0.0051 secs, 0.6409 secs
  DNS-lookup: 0.0004 secs, 0.0000 secs, 0.0078 secs
  req write:  0.0000 secs, 0.0000 secs, 0.0012 secs
  resp wait:  0.0258 secs, 0.0050 secs, 0.6152 secs
  resp read:  0.0001 secs, 0.0000 secs, 0.0062 secs

Status code distribution:
  [200] 200 responses
Cloud Run spun up around 11 instances to handle the load.
And the startup-time metric showed around 200 ms for the first instance (scaling from zero).
Hot run
Running the test again against the now-warm instances produced these numbers.
Summary:
  Total:        0.1737 secs
  Slowest:      0.0300 secs
  Fastest:      0.0059 secs
  Average:      0.0084 secs
  Requests/sec: 1151.3049

  Total data:   30600 bytes
  Size/request: 153 bytes

Response time histogram:
  0.006 [1]   |
  0.008 [177] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.011 [11]  |■■
  0.013 [1]   |
  0.016 [0]   |
  0.018 [0]   |
  0.020 [0]   |
  0.023 [0]   |
  0.025 [0]   |
  0.028 [0]   |
  0.030 [10]  |■■

Latency distribution:
  10% in 0.0064 secs
  25% in 0.0068 secs
  50% in 0.0073 secs
  75% in 0.0078 secs
  90% in 0.0083 secs
  95% in 0.0282 secs
  99% in 0.0291 secs

Details (average, fastest, slowest):
  DNS+dialup: 0.0010 secs, 0.0059 secs, 0.0300 secs
  DNS-lookup: 0.0002 secs, 0.0000 secs, 0.0045 secs
  req write:  0.0000 secs, 0.0000 secs, 0.0003 secs
  resp wait:  0.0072 secs, 0.0058 secs, 0.0107 secs
  resp read:  0.0000 secs, 0.0000 secs, 0.0006 secs

Status code distribution:
  [200] 200 responses
Wow, average latencies of 8-9 ms; that’s fast. Rust isn’t without its complexities (and the build time is significantly longer than in some other languages), but as you can see, the raw performance is hard to beat if you still want a somewhat high-level language. Again, these are somewhat ideal numbers, measured from a VM in the same cloud region; real-world latencies will largely be determined by where the client connects from, since the actual runtime numbers are so low.
Results summary
Here’s a table of the 95th-percentile response times by runtime and client location.
Call type | Service running locally | Service running on Cloud Run (us-central1)
---|---|---
Cold start (first hey run) | 0.0009 secs | 0.3021 secs
Hot start (subsequent hey run) | 0.0008 secs | 0.0282 secs
The source code for the Rust service is here.