The Government's ambitious programme could see vehicles with up to level five (fully autonomous) driving on UK roads by 2025. If this can be achieved, UK drivers will be able to take advantage of the vehicle products currently available in other countries. In theory, it should also bring the proposed benefits of greater road safety and time efficiency for users, who could complete other tasks whilst the vehicle is driving.
It may also help UK-based manufacturers to compete on a level footing with US, Japanese and German manufacturers, who benefit from more enabling regulatory regimes and are at later stages of testing their products on the roads.
However, this is an ambitious target: as of 2022 the UK is generally at level two automated driving, some way behind those countries. If the plans are rolled out too rapidly, they could create road safety issues.
As part of the proposed plans, there will of course need to be a comprehensive road testing programme. It should be recognised that the UK has a complex road network, including curves of varying gradients and roundabouts, which are not present in the simpler US grid system of roads. It is therefore likely to need more advanced testing and technology.
As the UK is behind other countries, this may prove a short timescale in which to build the level of regulation it needs to balance safety on the one hand with enabling manufacturers to develop, test and sell these products on the other. Because this regulatory development will often be a response to the fledgling industry and the issues it presents, the UK may be at a disadvantage compared to countries such as the US, which have a longer track record.
What can the UK learn from other countries?
United States of America
On a federal level, the US is moving towards authorising use of fully autonomous cars, which will not have any manual control such as steering or pedals. This authorisation is coming from the US National Highway Traffic Safety Administration (NHTSA). Self-driving cars have been tested and utilised in a number of US states including Texas, Arizona, Washington, Michigan and California.
The US is currently at a mix of level three and four automation. At present, self-driving cars typically include manual controls for backup safety. Manufacturers must still meet safety standards on a local, state and federal level to launch and operate such vehicles on US roads. General Motors is working to deploy a fully autonomous vehicle called the Cruise Origin, and is seeking permission from the NHTSA to deploy this in early 2023. General Motors/Cruise are among 30 or more companies allowed to test highly automated or self-driving vehicles on US roads. There is high growth potential within this market, which is expected to reach $186.4 billion by 2030.
South Korea
By 2030, more than half of newly launched cars in South Korea are expected to be equipped with level three autonomous driving technology, under which the driver can hand over control to the vehicle but must be ready to take control when prompted, in a limited number of settings such as the freeway. Hyundai Motor has announced that it will equip Genesis G90 models with this level three technology (called 'Highway Driving Pilot'). Currently, cars with this level of technology can only be test-driven, and the regulations do not permit such cars to be driven commercially; use of such cars is on a smaller scale than in the US. Revising the regulations to allow level three driving technology should bring South Korea in line with the US, Germany and Japan, where Honda, Tesla and Mercedes-Benz have developed technology at levels 2.5 or 3 thanks to a favourable legal environment. Within Germany, cars with technology up to level four can be driven.
China
Under current regulation, autonomous cars generally require a safety driver's presence. Within China the position is slightly different, as these vehicles are being used as taxis: in April 2022 Baidu and Pony.ai received the first licences from China to operate vehicles with no active driver, but with a safety supervisor on board.
Baidu, and the most advanced of its rivals, have reached level four autonomous driving, which means the vehicles can operate without a driver but must be pre-loaded with a detailed map, which restricts the travel path.
It appears the licence granted to Baidu is fully driverless with no safety driver present, so any safety supervisor may be a digital program. These licences are granted by the head office of the Beijing High-level Automated Driving Demonstration Area, which means the taxis must operate in a designated area of 60 square kilometres in Beijing.
Once a self-driving taxi is on the road, the operator must gather information about pedestrians and local conditions based on daily driving.
What are the legal implications?
How do you insure such a vehicle, which is not actively driven? It seems liability will attach to the product manufacturer, but how can this work when there is an accident and fault must be established? A passenger on board is unlikely to have the same awareness as an active driver. It is also estimated that it will take up to 20 years to fully replace manually driven cars with automated vehicles. Insurers also need to consider the scenario where some level of control is built in: if a passenger could step in to avert an accident and does not do so, who is liable, the car for not driving correctly or the passenger for omitting to intervene (even though they are not an active driver)? Insurance products will need to adapt to cover the party which could be liable (see below).
At the same time, as with all technological advancements, more information will be digitalised and readily available. Those insurers who can effectively harness and utilise this data will be able to create the most competitive products.
Regulatory catch-up: with fast-moving technological development, law and regulation need to keep pace so that they can enable and authorise new technology rather than lag behind it.
Occasionally, a practical solution to a problem emerges faster than the understanding of the problem itself. This is what seems to be happening with Level 3 autonomous vehicles, particularly in Europe.
A moral dilemma?
There is a concern about the rapid increase in the number of testing miles. The testing of these cars can put other drivers at risk. There is a moral dilemma here: do regulations allowing such technology create safer roads for people or do they slow the adoption of technologies which can reduce traffic accidents?
In the event of a crash, ethical decisions need to be made, which often puts people in a moral quandary. Such technology may reduce accidents but will not rule them out altogether: in testing conducted by Waymo and Cruise in 2021, 53 collisions occurred.
Additionally, the moral principles which guide how someone drives vary between countries and individuals. Without a consensus on a universal moral code, it is impossible to develop a car that satisfies the ethical frameworks adopted by populations around the world.
Liability in the event of a crash
It makes sense to say that when the vehicle is driving itself, the human user is not the 'driver' and therefore cannot be liable whilst the car is in charge. However, what happens when the car commits the offence? The National Transport Commission in Australia has recommended each car should have an automated driving system entity (ADSE) which is a separate legal entity who would be liable in the event of wrongdoing.
A graduated range of penalties could include infringement notices, enforceable undertakings, and suspension or withdrawal of ADS approval.
The UK government is considering a similar system, under which the manufacturer who gains authorisation for the car would have to accept some form of direct or vicarious liability. Each car would therefore be backed by a separate legal entity that could be subject to a range of criminal sanctions.
Driverless vehicles by 2025 – what are the main issues?
The government is seeking a wider rollout of autonomous vehicles in 2025, with some self-driving features rolled out later this year. It is not entirely clear whether level five (full automation) self-driving is even possible. For context, Tesla is currently at level two (partial automation). A timeframe for the introduction of level five self-driving cars in the UK is inevitably ill-defined, as is the full introduction of level three or four self-driving cars outside of test areas.
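The SAE automation levels referred to throughout this article (level two for Tesla, level five for full automation, and so on) form a simple ordered scale. The sketch below is an illustrative summary of that taxonomy, not part of any regulatory text; the class and function names are the author's own labels.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative summary of the SAE J3016 driving-automation levels."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # steering OR speed support (e.g. adaptive cruise)
    PARTIAL_AUTOMATION = 2      # steering AND speed support; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # car drives itself in limited settings; driver takes over on request
    HIGH_AUTOMATION = 4         # no driver needed within a mapped/geofenced area
    FULL_AUTOMATION = 5         # no driver needed anywhere, in any conditions

def driver_required(level: SAELevel) -> bool:
    """At level three and below, a human must remain ready to intervene."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

# Tesla's current systems are level two, so a supervising driver is required
print(driver_required(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_required(SAELevel.FULL_AUTOMATION))     # False
```

This ordering is what makes the insurance question above so awkward: liability arguments change character precisely at the boundary where `driver_required` flips from true to false.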
The main stumbling block is technological. It is extremely difficult to account for the near-infinite number of variables that a driverless car might encounter. Issues such as running red lights and misidentifying tram tracks still occur, compounded by adverse driving conditions such as night and/or rain. Most importantly, 'edge cases' (unlikely but potentially dangerous events) are difficult to prepare for, which goes some way to explaining why the safety of self-driving cars is closer to 99.99% than 99.9999%.
Metaculus, a prediction market, has a median prediction for commercially available level five driverless cars as 2031.
How can safety concerns be addressed?
The government is consulting on a 'safety ambition' for autonomous vehicles to be as safe as a competent and careful human driver. This is connected to government plans for the rollout of self-driving vehicles in 2025. At present, the thinking is that safety concerns can be addressed through common standards for autonomous vehicles, with sanctions for failure to meet those standards. A new safety framework is being developed accordingly, and a consultation is currently open (closing on 14 October 2022).
Written by Jonathan Moss and additional research by Patrick Gordon