Case Study 2018 - Your Autonomous Taxi Awaits You
The IB Computer Science case study 2018 deals with driverless vehicles - specifically, driverless taxis. The case study focuses on Levangerstadt, a fictional town hoping to invest in driverless public transport technologies. The case study booklet divides the problem into several main areas, represented below. These include technical challenges (such as perceiving the car's environment or knowing its exact location) and social and ethical challenges (such as how to behave in the event of a potential crash).
Background research
This section focuses on the driverless technologies currently available. It is intended to help students understand the wide range of automated vehicle technologies that might one day be incorporated into Levangerstadt.

Society of Automotive Engineers
The Society of Automotive Engineers (SAE) is mentioned explicitly in the case study booklet. SAE have a scale describing six levels of vehicle automation, from “No automation”, through “Conditional automation”, to “Full automation”. This scale is a useful discussion point for CS students: where do current vehicles fit in, and what might be the next logical steps for Levangerstadt?

Driverless taxi - Homer
Homer is a good example of a driverless vehicle adapted from a regular road vehicle rather than developed from scratch. In this case the Homer team took a Ford Fusion and equipped it with the sensors and other equipment needed to turn the car into a self-driving vehicle.
Voyage’s website has many detailed articles and explanations covering Homer's core computing systems, LIDAR, and other sensing systems. An excellent read that is very useful for the case study.
Tesla
Tesla are making something of a stir in the driverless vehicle world. They claim that all of their vehicles already have “full self-driving hardware”, but that legal restrictions prevent the technology from being used. This also links to the ethical considerations of the Levangerstadt case study.
An excellent video on Tesla’s website shows camera data from multiple positions overlaid with sensor and other data. The video cannot be embedded so you will need to visit their page to view it. The same page has diagrams and explanations of the various sensors the cars use.

Waymo - Google driverless car
Google's driverless car (now renamed Waymo) is well known, but still an example well worth studying. The project's website doesn't have a huge amount of technical detail, but it is still worthwhile background reading.
The project is making constant progress, driving on Arizona's roads and having already covered over 3 million miles. Waymo even has an upcoming public trial of the vehicle - members of the public can apply through their Early Rider scheme.

RoboRace
FIA Formula E (the electric racing series) have branched out to try driverless racing cars. The project is called RoboRace. Their original plan was to race 20 of these cars in the 2017-2018 season, but the technology has not yet advanced far enough. You can read a little about the hardware, and watch a video of two cars in action (the fun stuff starts at 0:35).

The Great Robot Race
Although over ten years old now, the Great Robot Race is a useful resource because it highlights just how far driverless vehicles have come in a relatively short time. In this Grand Challenge organised by DARPA, vehicles had to drive themselves across the Nevada desert. The website and video offer great insight into the difficulties of achieving this feat. One of the entrant vehicles also formed the initial basis for Google's later driverless car.
What Uber Learned from a Year of Self Driving
A short but surprisingly detailed video that examines the progress Uber made with self-driving vehicles in 2017. Because Uber is a ride-sharing service, its experience is very relevant to the Computer Science case study. The video shares insights into technical issues (such as situations where the machine learning algorithms have insufficient data) and social issues (such as why people are reluctant to use self-driving vehicles). Overall this is a highly recommended video.
DARPA Urban Challenge 2007
The 2007 competition for driverless cars organised by DARPA was the follow-up to the 2005 Grand Challenge in the Nevada desert. Although the video is a bit old now, it is useful for the CS case study because it allows us to see how far AI technology has come. There are also several scenes that serve as a reminder of what can go wrong with driverless vehicles!
Knowing the car's exact location

How GPS works
GPS is an essential location technology, although in the case study it is made clear that GPS alone will not be sufficiently accurate and it may need to be augmented with other technologies. Nevertheless, understanding GPS is important. How Does GPS work? is a clear and no-nonsense article from Physics.org. How GPS Works goes into more detail and also examines issues such as accuracy and the number of satellites.
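To make the principle concrete, below is a minimal 2D sketch of the trilateration idea behind GPS, assuming NumPy is available. A real receiver solves in three dimensions with a clock-bias term and at least four satellites; the satellite positions and measured ranges here are invented for illustration.

```python
import numpy as np

# A minimal 2D sketch of the trilateration idea behind GPS. A real
# receiver solves in 3D with a clock-bias term and at least four
# satellites; these positions and measured ranges are made up.

sats = np.array([[0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])  # known positions
dists = np.array([11.18, 5.0, 11.18])                      # measured ranges

# Subtracting the first circle equation from the others gives a
# linear system A @ [x, y] = b that a least-squares solve handles.
A = 2 * (sats[1:] - sats[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + np.sum(sats[1:] ** 2, axis=1) - np.sum(sats[0] ** 2))

x, y = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"Estimated position: ({x:.2f}, {y:.2f})")  # close to (5.00, 0.00)
```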

What is WAAS?
Wide Area Augmentation System (WAAS) is a system designed to improve the accuracy of satellite location services. Garmin's explanation of WAAS is a good starting point for understanding how the system works (many of their devices support it).
Perceiving the car's environment - data collection
The Perceiving the Car's Environment section has been broken into two. This first section deals with the easier of the two tasks - collecting data about the environment using a variety of sensors.

Sensor data visualization video
The Singapore Autonomous Vehicle Initiative (SAVI) are developing a driverless vehicle which they hope will be used as a taxi. They are working in conjunction with the startup company NuTonomy. This excellent video shows a visualization of data collected from the vehicle’s sensors. The system is able to detect and track objects including other vehicles, pedestrians, and road hazards.
Perceiving the car's environment - understanding
Once data has been collected about the car's environment (see above), a harder task is faced. Driverless vehicles must then attempt to 'understand' that data so they can 'know' the environment surrounding them. This section has a lot of links with the IB TOK course.
What is a Neural Network? - Chapter 1, deep learning
This is one of the best explanations of neural networks around, assuming no previous knowledge or experience on the viewer's part. The video goes through the process of creating an artificial neural network (a Multi Layer Perceptron, to be precise) to recognise handwritten digits. The speed and clarity of the explanations are excellent, and they are accompanied by clear animated diagrams.
One of the most useful aspects of the video is its explanations of how input is broken down into generalities, so (in the case of digits) the system looks for general shapes such as loops and straight lines. It then examines how these individual shapes can be broken down into smaller components (e.g. a loop is composed of four curved lines).
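As a rough illustration of what the video describes, here is a sketch of the forward pass through such a network, assuming NumPy is available. The layer sizes follow the video's 784-16-16-10 example, but the weights below are random rather than trained, so the "prediction" is meaningless.

```python
import numpy as np

# A rough sketch of the forward pass the video describes.
# Layer sizes follow the video's 784-16-16-10 example, but these
# weights are random rather than trained, so the output is meaningless.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
sizes = [784, 16, 16, 10]  # 28x28 pixels in, 10 digit scores out
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def forward(pixels):
    """Propagate a flattened image through every layer in turn."""
    a = pixels
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)  # weighted sum, then activation
    return a                    # one score per digit, 0-9

scores = forward(rng.random(784))  # a fake 'image' of random pixels
print("Most likely digit:", np.argmax(scores))
```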
Chris Urmson: How a driverless car sees the road
In this TED Talk the then-leader of Google's driverless car project uses visualisations of real sensor data to show how the vehicle builds a model of the road and predicts what pedestrians, cyclists, and other drivers will do next.
A Quick Introduction to Neural Networks
This excellent guide to Artificial Neural Networks (ANNs) features clear explanations and plenty of diagrams. A good understanding of these key concepts (single neurons, activation functions, and Multi Layer Perceptron (MLP) networks) is important before moving on to Convolutional Neural Networks.
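For reference, the single neuron the guide begins with can be sketched in a few lines: a weighted sum of inputs plus a bias, passed through an activation function. The inputs and weights below are made up, and NumPy is assumed.

```python
import numpy as np

# A single artificial neuron: a weighted sum of inputs plus a bias,
# passed through an activation function. All values here are made up.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.2

z = weights @ inputs + bias           # weighted sum
print("sigmoid output:", sigmoid(z))  # squashed into the range (0, 1)
print("ReLU output:", relu(z))        # zero for any negative sum
```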
An Intuitive Explanation of Convolutional Neural Networks
The follow-up page to A Quick Introduction to Neural Networks, this guide deals with the extra steps added by Convolutional Neural Networks: filtering, ReLU, and pooling / subsampling. The diagrams and explanations make this a very good start to understanding CNNs.
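Below is a hedged sketch of those three stages, assuming NumPy is available. The "image" is a made-up 6x6 patch containing a single vertical edge, and the filter is a simple hand-written edge detector rather than a learned one.

```python
import numpy as np

# A sketch of the three CNN stages the guide covers: convolution
# (filtering), ReLU, and max pooling. The 'image' is a made-up 6x6
# patch containing a single vertical edge.

image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # bright right half, dark left half

kernel = np.array([[-1.0, 0.0, 1.0],    # responds to dark-to-bright
                   [-1.0, 0.0, 1.0],    # vertical edges
                   [-1.0, 0.0, 1.0]])

def convolve(img, k):
    """Slide the filter over the image (no padding, stride 1)."""
    h = img.shape[0] - k.shape[0] + 1
    w = img.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def max_pool(fm, size=2):
    """Keep only the largest value in each size x size block."""
    h, w = fm.shape[0] // size, fm.shape[1] // size
    return fm[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

feature_map = np.maximum(0.0, convolve(image, kernel))  # filter, then ReLU
print(max_pool(feature_map))  # the pooled (subsampled) feature map
```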
Understanding Convolutional Neural Networks for NLP
Although this page focuses on Convolutional Neural Networks for NLP, parts of it are still very relevant to machine vision for driverless vehicles. The first sections in particular cover filtering and invariance in a way that may be easier to understand than the other examples here.
A Beginner's Guide To Understanding Convolutional Neural Networks
This is another clear and well-written guide to understanding Convolutional Neural Networks. Although it doesn't cover all CNN stages in the same level of detail, it has very good explanations of the convolution / filtering process, and of training a neural network.
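The training idea itself can be sketched very compactly: repeatedly nudge the weights downhill on the error (gradient descent). In the illustrative snippet below a single sigmoid neuron learns the OR function; the learning rate and iteration count are arbitrary choices, and NumPy is assumed.

```python
import numpy as np

# A compact sketch of training by gradient descent: a single sigmoid
# neuron learns the OR function. Learning rate and iteration count
# are arbitrary.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])    # the OR truth table

w, b, lr = np.zeros(2), 0.0, 1.0
for _ in range(2000):
    pred = sigmoid(X @ w + b)
    err = pred - y                     # how wrong each prediction is
    grad_z = err * pred * (1 - pred)   # chain rule through the sigmoid
    w -= lr * X.T @ grad_z             # update the weights...
    b -= lr * grad_z.sum()             # ...and the bias

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 1, 1, 1]
```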
Visualising a Neural Network
This excellent 3D visualisation of a character recognition neural network is great for understanding how each of the layers in an MLP (Multi Layer Perceptron) network link together. Different colours are used to indicate the different weights given to each connection.
Image filter visualisation
This website is an excellent way to help students understand the matrices and convolutions that are used to process images in CNNs. You can easily edit the JavaScript code to change the 'filter' matrix and see the effect of different feature extraction filters. If you're looking for filters to try, the Explanation of Convnets page has some examples.
To use this software, click Load, Browse, and then load one of the default convolutions.
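If you want sensible values to paste into the filter matrix, a few classic 3x3 kernels are shown below, applied with SciPy for comparison; the test image (a bright square on a dark background) is made up, and both NumPy and SciPy are assumed to be available.

```python
import numpy as np
from scipy import ndimage

# Three classic 3x3 filter matrices worth experimenting with,
# applied here with SciPy for comparison. The test image is made up.

blur = np.full((3, 3), 1 / 9)                 # average of the neighbours
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)
edge = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=float)  # Laplacian-style edge detector

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0                         # a bright square on black

for name, kernel in [("blur", blur), ("sharpen", sharpen), ("edge", edge)]:
    print(name)
    print(np.round(ndimage.convolve(image, kernel), 2))
```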
How we teach computers to understand pictures
In this TED Talk Fei-Fei Li describes her work on ImageNet, a huge database of labelled images covering 22,000 categories that is used to train Convolutional Neural Networks (CNNs) for image recognition. The video explains the sheer size of the problem, the complexity of the neural network, and the amount of training data required - it is a real eye opener with significant relevance to the Levangerstadt case study.
The Next Big Step for AI? Understanding Video
Image recognition is still a pretty tough challenge for most computer systems, but driverless vehicles would benefit from going a step beyond and understanding actions and intent. For example, if a car could understand the direction and intention of a pedestrian, it could take appropriate action. This article examines how that might be possible. It also links to Moments in Time, a collection of short videos released by MIT, annotated with descriptions of the events occurring within them. This really helps reinforce to students the fact that neural networks need training data, and that this data - in the form of images or videos - needs to be clearly described by humans first.
Getting from A to B

Greedy Algorithms
This page is packed with animations and interactives explaining Greedy Algorithms and their applications. It also has a particularly good section on the drawbacks of greedy algorithms.
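A minimal sketch of the greedy approach, and of the drawback the page highlights, using the classic coin-change problem:

```python
# A minimal sketch of a greedy algorithm and its main drawback,
# using the classic coin-change problem.

def greedy_change(amount, coins):
    """Repeatedly take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result if amount == 0 else None

# Greedy happens to be optimal for some coin systems...
print(greedy_change(48, [1, 5, 10, 25]))  # [25, 10, 10, 1, 1, 1]

# ...but the locally best choice can be globally poor: for coins
# {1, 3, 4} and an amount of 6, greedy returns [4, 1, 1] even though
# [3, 3] uses fewer coins.
print(greedy_change(6, [1, 3, 4]))        # [4, 1, 1]
```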
Ethical Challenges
The case study booklet raises several potential ethical concerns. These range from common issues, such as the impact on the job market if public transport is automated, to more philosophical discussions about how cars should behave when faced with a potential accident. This section has lots of potential links with the IB TOK course.

The Trolley Problem
Although driverless vehicles are often promoted as being safer than human drivers, there are situations when accidents may be unavoidable. As part of the case study, students must examine the ethical implications of decisions cars may make in these situations. This clearly links to the TOK component of the IBDP.
The Trolley Problem is a well known ethical problem that is explicitly mentioned in the case study booklet. In terms of self-driving vehicles, The ethical dilemma of self-driving cars (video) is a good introduction. Why Self-Driving Cars Must Be Programmed to Kill and Ethics of Self-Driving Cars are great articles that examine the topic in more detail.
Justice: What's The Right Thing To Do?
This series of videos from Harvard University examines ethics. Although in a lecture format, they are very accessible. The first 20 minutes of the first lecture are very useful for the Levangerstadt case study, as they discuss the classic Trolley Problem, which is explicitly mentioned in the case study booklet.