Scalable Design of a Mapping Application

A mapping application stores and manages data on millions of roads to navigate individual users and client applications accurately. For example, Google Maps processes billions of nodes and edges to guide people all over the world. A feature-rich mapping application serves both individuals and companies.

For individuals:

  • Allow searching tracks from source to destination.

  • Allow direction on the track.

  • Show the ETA (estimated time of arrival).

For companies:

  • Navigate self-driving cars (e.g., Waymo).

  • Support ride-sharing services (e.g., Lyft).

  • Support parcel delivery services (e.g., Uber).

Requirements


Functional Requirements

  • Detect the current location: Determine the latitude and longitude of the user's current location. This is the basis for providing all other services.

  • Provide a route: Given a source and destination, the application should provide multiple fast routes, taking the vehicle type into account.

  • Directions: On the way from the source to the destination, the app will guide users in the right direction.

Non-Functional Requirements

  • Availability: The system should be highly available.

  • Scalability: The system should be scalable as both individuals and enterprise applications like Uber, Lyft, Waymo, etc., use the application internally.

  • Performance: This includes the accuracy and latency of finding and providing routes.

Challenges

  • Scalability: The application deals with billions of nodes and edges when finding paths all over the world. A typical pathfinding algorithm like Dijkstra's will not scale to this volume and would deliver a poor experience, so we need a modified and improved version of the algorithm.

  • ETA Computation: The ETA should account for factors like road construction, traffic volume, and incidents on the way to the destination, and update accordingly.

Resource Estimation

Let's consider the map application has a total of 30 million daily users.

Servers:

If a single server can handle 8,000 requests per second, and each user makes 50 requests per day, the application receives a total of 50 × 30 × 10^6 = 1.5 × 10^9 requests per day.

This implies (50 × 30 × 10^6) / (24 × 60 × 60) ≈ 17,400 requests per second.

TOTAL_REQUESTS_PER_SECOND = (50 × 30 × 10^6) / (24 × 60 × 60)

The total number of servers we need is TOTAL_REQUESTS_PER_SECOND / 8,000 ≈ 3.
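
As a minimal sketch using only the figures stated above, the arithmetic works out as follows:

```python
import math

# Assumptions stated above.
DAILY_USERS = 30_000_000           # 30 million daily users
REQUESTS_PER_USER_PER_DAY = 50
SERVER_CAPACITY_RPS = 8_000        # requests/second one server can handle

requests_per_day = DAILY_USERS * REQUESTS_PER_USER_PER_DAY   # 1.5 * 10^9
requests_per_second = requests_per_day / (24 * 60 * 60)      # ~17,361 rps

servers_needed = math.ceil(requests_per_second / SERVER_CAPACITY_RPS)
print(f"{requests_per_second:,.0f} rps -> {servers_needed} servers")
# 17,361 rps -> 3 servers
```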

Storage:

The storage requirement for the map is fairly stable: route data read from various sources is persisted once.

Bandwidth:

Based on the following assumptions:

  • Total active users: 30 million

  • Each user makes 50 requests per day

  • Each request size is 200 bytes

  • Each response size is 2,050 bytes

Incoming Bandwidth:

INCOMING_BANDWIDTH = ((TOTAL_ACTIVE_USERS × REQUEST_PER_DAY_BY_EACH_USER × EACH_REQUEST_SIZE) / 24_HOURS_IN_SECONDS) bytes/s

In numbers: (30 × 10^6 × 50 × 200) / (24 × 60 × 60) bytes/s

Outgoing Bandwidth:

Unlike the incoming request, the response will have both textual and visual content to build up a map.

OUTGOING_BANDWIDTH = ((TOTAL_ACTIVE_USERS × REQUEST_PER_DAY_BY_EACH_USER × EACH_RESPONSE_SIZE) / 24_HOURS_IN_SECONDS) bytes/s
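
Plugging the assumptions into these formulas (a quick sketch, same numbers as above):

```python
TOTAL_ACTIVE_USERS = 30_000_000
REQUEST_PER_DAY_BY_EACH_USER = 50
EACH_REQUEST_SIZE = 200            # bytes
EACH_RESPONSE_SIZE = 2050          # bytes
SECONDS_PER_DAY = 24 * 60 * 60     # 86,400

incoming = TOTAL_ACTIVE_USERS * REQUEST_PER_DAY_BY_EACH_USER * EACH_REQUEST_SIZE / SECONDS_PER_DAY
outgoing = TOTAL_ACTIVE_USERS * REQUEST_PER_DAY_BY_EACH_USER * EACH_RESPONSE_SIZE / SECONDS_PER_DAY

print(f"Incoming: {incoming / 1e6:.2f} MB/s")   # ~3.47 MB/s
print(f"Outgoing: {outgoing / 1e6:.2f} MB/s")   # ~35.59 MB/s
```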

Building Blocks

Load Balancer: To distribute traffic between different services and servers.

Database: To store the location data and metadata in graph format.

Pub-Sub System: To navigate users in the right direction and to invoke related services.

Key-Value Store: To store segment metadata.

High-Level Design


Components

  • Location Finder: Reads the user's current position, using GPS, Wi-Fi, or cellular signals to determine an accurate location.

  • Distributed Search: Determines the latitude and longitude of a place entered by the user; it is a mapping service that translates a place name into coordinates.

  • Route Finder: Given two positions, generates all possible routes.

  • Graph Processing Service: From multiple candidate routes, determines the optimal one.

  • Area Search Service: Orchestrates the distributed search, route finder, and graph processing service. Given two place names, it resolves their positions by invoking the distributed search, finds candidate routes between them using the route finder, and then uses the graph processing service to get the optimal route.

  • Database: Stores the route, node, and edge data, ideally in a graph database such as DataStax Enterprise Graph.

  • Pub-Sub System: While the user moves through the route, to navigate and guide, the pub-sub model will be used.

  • Third-Party Data Receiver: Receives data about cities, roads, and places from various sources, and pre-processes it so it can later be stored in the database in native graph format.

  • Graph Building: Converts the pre-processed route data into the graph database's format so that it is suitable for queries.

  • User: The programs and people who are using the service.

  • Load Balancer: Used to distribute traffic between different servers and services.

Workflow

  • User enters the source (uses location finder for current location) and destination.

  • The area search service will find the optimal route using distributed search, route finder, and graph processing service.

  • The navigator tracks the user along the route. If the user deviates from it, the navigator sends an event to Kafka; upon receiving the event, a consumer recalculates the route using the area search service and sends it back.

API Design

  • Current location: Returns the user's current location.

  • Get route: Based on source, destination, and transport type, return the optimal route.

  • Get direction: At any point in time, based on the current location, give the next steps.
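
A minimal sketch of what these endpoints could look like; the class, method names, and parameters are illustrative assumptions, not part of the original design:

```python
from dataclasses import dataclass

@dataclass
class Location:
    latitude: float
    longitude: float

class MapService:
    """Hypothetical facade over the components described above."""

    def current_location(self, user_id: str) -> Location:
        """Return the user's current location (via GPS/Wi-Fi/cellular)."""
        raise NotImplementedError

    def get_route(self, source: Location, destination: Location,
                  transport_type: str = "car") -> list[Location]:
        """Return the optimal route for the given transport type."""
        raise NotImplementedError

    def get_direction(self, current: Location,
                      route: list[Location]) -> str:
        """Return the next navigation step for the current position."""
        raise NotImplementedError
```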

Challenges of Google Maps Design


Scalability

We have a mesh of billions of nodes and edges. To calculate the shortest path between two nodes, we cannot simply run a shortest-path algorithm on the entire graph: loading all the nodes and edges into memory is impractical, and the computation would be far too slow for users. Instead, we divide the graph into small segments. This gives us:

  • Smaller polygons (subgraphs) over which to calculate the shortest path.

  • Segments that can be queried in parallel, improving performance.

Segment

Calculate the shortest path between two nodes in a single segment:

  • In a single segment, each edge connecting two nodes has a weight.

  • The weight is calculated from traffic density, average vehicle speed, and other criteria.

  • Load the segment into memory and run a standard shortest-path algorithm like Dijkstra's (see the sketch after this list).

  • Cache the results to reuse in the future.
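
A minimal sketch of the per-segment computation, assuming the segment fits in memory as an adjacency list (node id → list of (neighbor, weight) pairs):

```python
import heapq

def dijkstra(segment: dict, source: str, target: str) -> float:
    """Shortest path inside one in-memory segment.

    `segment` maps a node id to its outgoing (neighbor, weight) edges,
    where each weight reflects traffic density, average speed, etc.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, weight in segment.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")  # target unreachable within this segment
```

In practice the result would be cached, keyed by (segment id, source, target), so repeated queries skip the computation entirely.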

Calculate the shortest path between two places that are not nodes:

  • Find the nearest graph nodes to the source and destination points (e.g., by snapping each point to the closest edge).

  • Compute the shortest path between these nodes.

Calculate the shortest path when the source and destination are from different segments:

  • We will use the term 'Exit Point', where an exit point is the connecting point between two segments.

  • Exit points will be shared by two or more segments.

  • Using latitude and longitude, we can find the source and destination segments.

  • Instead of going to all the segments, we will use a radius of approximate distance and discard any segment outside the radius during path computation.

  • While finding the shortest path over segments, we only consider the exit point and do not consider the internals of the segments.

  • After finding the shortest path over the segments, we do the rest of the computation inside the source and destination segments, as sketched below.
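
A sketch of the two-level idea under stated assumptions: `find_segment`, the segment objects, and `exit_graph` (a coarse graph whose nodes are exit points and whose edges carry precomputed inter-segment costs) are all hypothetical names:

```python
def cross_segment_distance(source, destination, find_segment, exit_graph, dijkstra):
    """Two-level shortest path: intra-segment legs plus an exit-point graph."""
    src_seg = find_segment(source)        # locate segments by latitude/longitude
    dst_seg = find_segment(destination)

    best = float("inf")
    # Try each pair of exit points; segments outside the search radius
    # are assumed to be pruned from `exit_graph` already.
    for src_exit in src_seg.exit_points:
        for dst_exit in dst_seg.exit_points:
            cost = (
                dijkstra(src_seg.graph, source, src_exit)          # inside source segment
                + exit_graph.shortest_path(src_exit, dst_exit)     # coarse, exit points only
                + dijkstra(dst_seg.graph, dst_exit, destination)   # inside destination segment
            )
            best = min(best, cost)
    return best
```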

ETA Computation

  • Using the pub-sub model, the navigation service will collect live location data.

  • Considering the current location, traffic density (high, medium, low), and traffic pattern (rush hour or not), it computes the ETA, as illustrated below.
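
One simple way to model this (an illustrative formula, not the production model): sum each edge's free-flow travel time, scaled by a traffic multiplier.

```python
# Illustrative traffic multipliers; real systems learn these from live data.
TRAFFIC_MULTIPLIER = {"low": 1.0, "medium": 1.4, "high": 2.0}

def estimate_eta_minutes(route_edges) -> float:
    """Each edge is (length_km, free_flow_speed_kmh, traffic_level)."""
    hours = sum(
        (length / speed) * TRAFFIC_MULTIPLIER[traffic]
        for length, speed, traffic in route_edges
    )
    return hours * 60

# 5 km at 50 km/h in heavy traffic, then 10 km at 80 km/h in light traffic:
print(estimate_eta_minutes([(5, 50, "high"), (10, 80, "low")]))  # 19.5 minutes
```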

Detailed Design of Google Maps

Segment Setup and Request Handling

Storage schema

We will use 3 types of data storage:

  • Key-value store

  • Graph database

  • Relational database

After generating a segment, we will store the segment id and its nodes (along with the edges) in the graph database.

We will utilize the key-value store to store the segment server id (where the segment is stored), boundaries (or exit points), and neighbor segment ids against the segment id.

In the traditional relational database, we keep information about the edges, such as their weights and traffic rush hours, along with the related segment and node ids.
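
To make the three schemas concrete, here are illustrative record shapes (the ids and field names are assumptions):

```python
# Graph database: the segment's nodes and edges, keyed by segment id.
graph_record = {
    "segment_id": "seg-7f3a",
    "nodes": ["n1", "n2", "n3"],
    "edges": [("n1", "n2"), ("n2", "n3")],
}

# Key-value store: segment metadata, looked up by segment id.
kv_record = {
    "seg-7f3a": {
        "server_id": "srv-12",
        "exit_points": ["n3"],
        "neighbor_segment_ids": ["seg-7f3b", "seg-7f2a"],
    }
}

# Relational database: one row per edge with weight and traffic data.
edge_row = {
    "segment_id": "seg-7f3a",
    "node_id": "n1",
    "weight": 4.2,
    "rush_hours": "07:00-09:00,17:00-19:00",
}
```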

Design/Create Segment

  • First, take the segment's data and generate a segment id with a UUID generator.

  • Store the segment on a server chosen by a 'server allocator service'.

  • Now the segment is saved under its segment id.

  • Create a key-value entry mapping the segment id to its boundaries, neighbors, and server id.

  • Insert the segment's edge information into the relational database (see the sketch after this list).
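
Putting the steps together (a sketch; the `server_allocator`, `graph_db`, `kv_store`, and `rdbms` interfaces are assumed names):

```python
import uuid

def create_segment(segment_data, server_allocator, graph_db, kv_store, rdbms):
    segment_id = str(uuid.uuid4())                     # 1. generate a segment id
    server_id = server_allocator.allocate(segment_id)  # 2. pick a segment server

    # 3. Persist the segment's nodes and edges in the graph database.
    graph_db.save(segment_id, segment_data.nodes, segment_data.edges)

    # 4. Record metadata against the segment id in the key-value store.
    kv_store.put(segment_id, {
        "server_id": server_id,
        "exit_points": segment_data.exit_points,
        "neighbor_segment_ids": segment_data.neighbors,
    })

    # 5. Persist per-edge details (weights, rush hours) relationally.
    rdbms.insert_edges(segment_id, segment_data.edges)
    return segment_id
```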

Handle User Request

  • When the user gives us the source and destination, find the latitude and longitude using the 'distributed search' service.

  • Find out the source and destination segments from the graph database.

  • Find all the relevant neighbors/segments using a radius.

  • The 'route finder' will find all the optimal routes using these segments.

  • The 'graph processing service' will pick the appropriate route from the list of optimal routes (see the sketch after this list).
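
The end-to-end flow, tying these services together (a sketch; the service handles are assumed names):

```python
def handle_route_request(source_name, destination_name, services):
    # 1. Resolve place names to latitude/longitude.
    src = services.distributed_search.geocode(source_name)
    dst = services.distributed_search.geocode(destination_name)

    # 2. Find the source/destination segments plus neighbors within a radius.
    segments = services.graph_db.segments_within_radius(src, dst)

    # 3. Generate candidate routes over those segments only.
    candidates = services.route_finder.find_routes(src, dst, segments)

    # 4. Let the graph processing service pick the optimal route.
    return services.graph_processing.pick_optimal(candidates)
```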

Improve estimation using live data

  • Web-socket & Load-balancer: WebSockets let the server do two-way communication with clients. A load balancer can be used to limit the number of socket connections per server.

  • Pub-Sub: The pub-sub model collects the live data streams and feeds them into a data analytics service like Apache Spark. The 'map update service' listens to the analytics results and updates the data storage (a minimal consumer sketch follows).
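
A minimal consumer sketch, assuming a Kafka topic named `live-locations` and the `kafka-python` client (both are assumptions; any pub-sub client would do):

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python (assumed client)

consumer = KafkaConsumer("live-locations", bootstrap_servers="localhost:9092")
for message in consumer:
    event = json.loads(message.value)
    # In the real pipeline, an analytics job (e.g., Spark) would aggregate
    # these events, and the 'map update service' would write refreshed
    # edge weights back to the data stores.
    print(event["user_id"], event["lat"], event["lng"])
```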

Performance

Our target is:

  • The system can handle millions of user requests.

  • The system can handle a large amount of data.

Availability

  • Instead of storing the graph on a single server, we are utilizing individual servers for each segment.

  • This helps load only the relevant segment into memory for querying; loading the entire graph would lead to out-of-memory errors.

  • We will utilize lazy loading so only relevant segments will load up in memory.

Scalability

  • Since we load segments only on demand, per user request, the system can scale to a degree.

  • As segment data grows, we add more servers, so individual server performance does not degrade.

Reduce Response Time

  • We will query smaller subgraphs (only relevant segments).

  • Cache the results of the query.

Accuracy

  • Use live location from the user to guide accordingly.

  • Update the edge weight based on traffic patterns.