Blog: Digital Londons: Testing safely with simulation
By John Lusty
How do you put an autonomous vehicle to the test to prove it’s safe? And how do you do that safely? It’s a brand new challenge.
The safest approaches involve considerable use of simulation, enabling you to put a car through its paces virtually, so you can iterate on and deliver a system that’s rigorous and robust.
A smart mix of simulation styles is key. By creating exact digital replicas of the environments you want to drive in, you can ensure the car operates in the closest possible match for the world — or ‘domain’ — in which it’ll be travelling. And by putting your imagination to work and building environments with elements of ‘fiction’, you can equip the car with a robustness of behaviour that accounts for a wide variety of potential scenarios, many of which may not yet have been witnessed in the real world.
Our very real challenge
FiveAI is solving autonomous transport for Europe. That means our cars need to know the forms European city roads take, how others drive, what pedestrians look like, and how they behave.
We collate this knowledge through ‘domain analysis’, spending many, many hours gathering data on real world streets. As our cars drive, their powerful sensors ‘see’ and capture their surroundings.
We’ll launch our service in London, growing it route-by-route and city-by-city. To launch safely, and as soon as possible, we need to test our cars and enable them to learn about the capital. They must cover billions of London street miles fast, without putting people or property at risk.
Simulation, in various forms, is helping us solve this real world puzzle.
Building digital replicas of London
Think of simulation, and the images your mind conjures up will likely have been shaped by simulations in popular culture, from video games to innovative virtual reality experiences. These tend to be high fidelity, realistic worlds that excite the mind precisely because they’re copied from life as we know it. This kind of photoreal simulation has an important application in the development of autonomous vehicles, principally for the development and testing of ‘perception’ — how our car ‘sees’.
But more lo-fi iterations are incredibly useful to us too, as are simulations without any visuals at all. These may appear less elaborate but, in the terms useful to us, they’re just as true to life and just as impactful. They’re principally geared to representing and exploring behaviours, helping our car learn to ‘reason’ about what the other vehicles and pedestrians it sees might do, and take safe action based on this reasoning. We refer to this side of things as ‘prediction and planning’.
At FiveAI, simulation gives us the power to create synthetic versions of the environments uncovered and described through our domain analysis. Think ‘digital twins’ of real world London that are 100% fit for purpose. Whether they’re high or low fidelity, we use all these replica locations to run new test cases.
These ‘digital twins’ can be near exact duplicates to the human eye, capturing a rich array of detail — this is the high-realism video game aesthetic that excites imaginations. This kind of simulation is essential for some of the work we do. Other uses of simulation don’t need to be so visually impressive: we can dial back the graphics from ultra to low and focus purely on capturing the realistic behaviour of road users and pedestrians. Some complex simulations replicate multiple dimensions at once, painting a vivid picture that looks and feels so realistic you could step into it. Others recreate just the surface of a road, just the radar profile of a street environment, or just the paths that other traffic follows. Whatever’s needed…
The aim here isn’t to be 100% accurate. We’re not striving to create perfect copies of the real world. That’s not possible. A simulation could look close enough to the naked eye, but an algorithm could immediately spot its gaps and inaccuracies. So what is the aim? To achieve a ‘good enough’ level of accuracy that allows us to model the remaining inaccuracy, by identifying what the simulation misses and representing it with a statistical distribution of noise. This allows us to focus our resources on bringing fidelity to our simulations where it matters most, i.e. proving out the safety performance of our system.
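To make that concrete, here’s a minimal sketch of the idea of representing the residual sim-to-real gap as a statistical distribution of noise. The function name, noise model, and parameter values are all illustrative assumptions, not FiveAI’s actual calibration:

```python
import numpy as np

# Hypothetical noise model: the parameters below are illustrative
# assumptions, not real calibration values.
rng = np.random.default_rng(seed=42)

def simulated_lidar_range(true_range_m: float) -> float:
    """Return a simulated range reading, with the residual sim-to-real
    gap modelled as additive Gaussian noise that grows with distance."""
    noise_std_m = 0.02 + 0.001 * true_range_m
    return true_range_m + rng.normal(0.0, noise_std_m)

# Many draws cluster tightly around the true value, with realistic scatter.
readings = [simulated_lidar_range(25.0) for _ in range(10_000)]
```

The point isn’t the specific numbers: it’s that once the residual error is captured as a distribution, downstream components can be tested against realistically imperfect inputs rather than implausibly clean ones.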
Faithful simulations, then, vary widely in form and complexity. There are also multiple things going on in any given simulation. We consider the replica location that’s been recreated a ‘fixed environment’. It encompasses roads, buildings, bollards, signs, traffic lights, lane markings, trees, foliage and more. On top of this static layer, all sorts of possibilities come into play thanks to the ‘dynamic environment’ we also create. This contains all the moveable stuff: other vehicles, pedestrians, weather, traffic light behaviour and more. It’s true to life in so far as it’s highly representative of the real world environment but, crucially, we can also choreograph these ‘objects’ and elements exactly as we choose.
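The fixed/dynamic split can be sketched as a simple data structure. Everything here — the class names, fields, and example values — is hypothetical, purely to illustrate the separation described above:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the fixed/dynamic split; names and fields
# are illustrative, not FiveAI's actual scene representation.

@dataclass(frozen=True)
class FixedEnvironment:
    """The static layer: geometry that never changes during a run."""
    roads: list
    buildings: list
    signs: list

@dataclass
class DynamicEnvironment:
    """The choreographed layer: everything that moves or varies."""
    vehicles: list = field(default_factory=list)
    pedestrians: list = field(default_factory=list)
    weather: str = "clear"

@dataclass
class Scenario:
    fixed: FixedEnvironment
    dynamic: DynamicEnvironment

# One static location can host many choreographed dynamic layers.
high_street = FixedEnvironment(
    roads=["high street"], buildings=["terrace"], signs=["give way"]
)
rush_hour = DynamicEnvironment(
    vehicles=["bus", "taxi"], pedestrians=["commuter"] * 30, weather="drizzle"
)
scenario = Scenario(fixed=high_street, dynamic=rush_hour)
```

Making the fixed layer immutable while the dynamic layer varies reflects the workflow in the text: one replica location, many choreographed variations on top of it.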
We model the complete extent of what it’s physically possible for an object to do, and we understand the norms of what it does or is likely to do. This way, we can be ready for the everyday and expected, and also perform in a huge number of unlikely-yet-possible scenarios that have never been seen in the real world. Because that’s what genuine safety demands.
Let’s take a moment to circle back to the ‘why’ that drives all this. Within a simulation, we can run a scenario a billion times, taking all possible movements and interactions into account, so the car’s ability to perform safely is rigorously tested. It would be impossible to do this in the real world — it would take decades, damage the environment, and could put road users at risk. Meanwhile, we can simulate an entire day of driving in just one minute.
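Running one scenario many times in practice means sweeping its dynamic parameters. Here’s a toy sketch of that idea — the parameters, the `run_scenario` stand-in, and its pass/fail rule are all invented for illustration (real runs are full simulations distributed across many machines, not a single loop):

```python
import itertools

# Hypothetical parameter sweep over one scenario's dynamic layer.
pedestrian_speeds = [0.8, 1.4, 2.0]   # m/s: slow walk to jog
crossing_offsets = [-2.0, 0.0, 2.0]   # metres from the marked crossing
weathers = ["clear", "drizzle", "fog"]

def run_scenario(speed, offset, weather):
    """Stand-in for a full simulation run: returns True if the planner
    kept a safe distance in this variant. The rule below is a
    placeholder, purely so the sweep has something to report."""
    return not (weather == "fog" and speed > 1.8)

# Exhaustively test every combination of the varied parameters.
results = {
    params: run_scenario(*params)
    for params in itertools.product(pedestrian_speeds, crossing_offsets, weathers)
}
failures = [params for params, ok in results.items() if not ok]
```

Even this tiny grid yields 27 variants of a single crossing scenario; scale the parameter lists up and the combinatorics quickly reach the volumes the text describes, which is why such sweeps only make sense in simulation.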
Digital twins of all kinds open up new possibilities. We don’t have control of the real world but, within a simulation, we’re in control. To push safety forwards, we can explore scenarios it would be difficult or dangerous to set up in the real world. We can introduce large groups of pedestrians or change the weather at a whim.
Simulation lets us test on a digital version of the city and it allows us to spin off fictional versions of the same city. These are ‘generative environments’. They’re realistic but, unlike digital twins, they aren’t drawn directly from the real world.
As with digital twins, there’s a fixed/dynamic mix at play within generative environments, too. We might build a fictional crescent-shaped road (the fixed environment) and add a range of cyclists, vehicles, pedestrians and drizzle representative of an October rush hour in London (the dynamic environment).
We can push London’s characteristic features to greater extremes. Narrower roads. Denser foliage. Complicated roadworks. More nonuniform buildings to obscure the scene. And we can get up to speed quickly. If new roadworks are coming, we can simulate them first to ensure we’re robust to them. If electric scooters will soon be on our streets, we can learn all about them before they land.
Getting fictional has real world benefits. By testing the vehicle in cases that are beyond what we commonly and currently find on actual roads, we can accelerate and deepen our learning. By purposefully seeking out failures and weaknesses in our platform — including its sensors, hardware, and more — we can iterate on them fast. The result? Our car quickly becomes a perfect match for its environment.
The team that’s making it happen
FiveAI’s simulation is created by an exceptional mix of people. You’ll find VFX artists who’ve worked on some of the world’s biggest games and blockbuster films — from Grand Theft Auto to Star Wars and Game of Thrones — and people who have spent years building crowd simulations, models of traffic flowing around cities, and 3D visualisations of the world’s largest oil fields.
There’s no instructional handbook that explains how to build a simulation such as we need. There are no experts who have solved this problem already. We believe the solution can only come from bringing together the brightest and best from many different backgrounds and fields.
If you were to visit us on any given day, you might find an artist building new pedestrian models and animations, an engineer recreating a range of London-typical rain and frost, and others building specific tests that plug into our continuous integration pipeline. That way, every time someone changes any piece of code that runs on the car, we can drive the car through the simulation and check it doesn’t crash or fail to operate.
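A simulation test wired into CI might look something like the sketch below. The `drive_through` function, the route name, and the metrics it returns are all hypothetical stand-ins for a real simulated drive:

```python
import unittest

def drive_through(route: str) -> dict:
    """Stand-in for launching a simulated drive on the named route and
    collecting safety metrics. A real implementation would run the full
    vehicle stack against the simulator; these values are illustrative."""
    return {"collisions": 0, "completed": True, "min_gap_m": 1.7}

class TestHighStreetRoute(unittest.TestCase):
    """Regression tests a CI pipeline could run on every code change."""

    def test_no_collisions(self):
        metrics = drive_through("high_street_loop")
        self.assertEqual(metrics["collisions"], 0)

    def test_route_completed_with_safe_gap(self):
        metrics = drive_through("high_street_loop")
        self.assertTrue(metrics["completed"])
        self.assertGreater(metrics["min_gap_m"], 1.0)
```

The value of framing simulated drives as ordinary unit-style tests is that a change which makes the car crash in simulation fails the build immediately, before it ever reaches a real road.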
You’d see us in meetings real and virtual, since FiveAI’s team works across six UK locations. You’d see our Product Managers roaming the floor checking all our upcoming work is brilliantly specced out. You’d see simulation team members building out our visualisation, bringing to life test routes we’re driving right now.
And you’d see people working on scaling all this up, because the simulation we’re building is more powerful than anything that could be run on a single computer. No video game comes close in scope and complexity. It needs to be distributed across many machines working together.
Powered by purpose
Our task is a radically new one. As I mentioned, that demands a radically new constellation of skills, ideas, points of view, and ways of working. Wherever our people have come from, they’ve chosen to join FiveAI because of the size and significance of our mission. The self-driving shared transport service we’re helping to make a reality will bring huge benefits to Europe’s cities, and the lives of people who live and work in them.
For movie and game greats, this is a chance to bring sci-fi to the streets of London, reworked as a force for good. For software engineers, this is an opportunity to witness their code on the road. For roboticists, this means they can play in the ultimate sandbox. I moved here from working on cutting-edge VR. Now, together with the team, I’m bringing the virtual to bear on something incredibly real.
As our journey progresses, we’ll be seeking an even more diverse range of talent for our team. And there are countless reasons to join us. I’ve already shared my enthusiasm for our mission and our rich mix of thinkers and doers, all working together.
It’s also worth mentioning that, here, you’re never a small cog in a large machine. Every one of us is doing something that’s critical to our vision.
The snapshots of our simulation you can see right here on this page have all helped our car learn and improve fast, bringing us closer to launching our world-changing service.
John Lusty is VP Simulation at FiveAI and leads the business’s fast-paced London simulation studio. Before joining FiveAI he worked at Facebook where he founded and led Oculus VR London.