Fast and Compact Planar Embeddings

Leo Ferres, José Fuentes-Sepúlveda, Travis Gagie, Meng He, and Gonzalo Navarro

Introduction

The rate at which we store data is increasing even faster than the speed and capacity of computing hardware. Thus, if we want to use what we store efficiently, we need to represent it in better ways. The surge in the number and complexity of the maps we want to have available on mobile devices is particularly pronounced and has resulted in a bewildering number of ways to store planar graphs. Each of these representations has its disadvantages, however: e.g., some do not support fast navigation, some are large, some cannot represent multi-edges or certain embeddings, and some are complicated to build in practice, especially in parallel, which is a practical concern when dealing with massive datasets.

Turán [1] gave a very simple representation for a specified embedding of a connected planar multi-graph with n vertices and m edges that uses 4m bits, but Jacobson [2] noted that it "does not allow fast searching". In this work we show how to add a sublinear number of bits to Turán's representation so that it supports fast navigation. We also present the first linear-work, practical parallel algorithm for building compact representations of planar graphs, together with an experimental study of that algorithm discussing some practical trade-offs. Our experiments demonstrate the scalability of our implementation and its low space usage, a small fraction of that of the original input. Finally, we provide implementations of useful queries that are also efficient.
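To give some intuition for Turán's encoding, here is a minimal sketch (our own illustration over a made-up toy graph; it is not the paper's implementation and assumes a simple graph without multi-edges). A depth-first traversal follows each vertex's neighbours in the cyclic order of the embedding; a spanning-tree edge emits '(' when first traversed and ')' on the way back, and every other edge emits '[' at its first encounter and ']' at its second. This yields 2m symbols over a 4-letter alphabet, i.e., 4m bits.

#include <stdio.h>
#include <stdbool.h>

#define N 4  /* vertices */
#define M 5  /* edges    */

/* Rotation system of a toy embedding (a square plus one diagonal):
   the neighbours of each vertex in counterclockwise order. */
static const int deg[N]    = {3, 2, 3, 2};
static const int adj[N][3] = {{1, 3, 2}, {0, 2}, {1, 0, 3}, {2, 0}};

static bool seen[N];     /* vertex already reached by the DFS    */
static bool mark[N][N];  /* non-tree edge already emitted as '[' */

static int pos_of(int v, int w) {  /* index of w in v's rotation */
    for (int i = 0; i < deg[v]; i++)
        if (adj[v][i] == w) return i;
    return -1;
}

/* DFS respecting the embedding: scan v's neighbours starting just
   after the edge we arrived by, emitting one symbol per edge visit. */
static void visit(int v, int parent) {
    seen[v] = true;
    int start = (parent < 0) ? 0 : (pos_of(v, parent) + 1) % deg[v];
    int steps = (parent < 0) ? deg[v] : deg[v] - 1;
    for (int k = 0; k < steps; k++) {
        int w = adj[v][(start + k) % deg[v]];
        if (!seen[w]) {            /* spanning-tree edge */
            putchar('(');
            visit(w, v);
            putchar(')');
        } else if (!mark[v][w]) {  /* non-tree edge, first visit */
            mark[v][w] = mark[w][v] = true;
            putchar('[');
        } else {                   /* non-tree edge, second visit */
            putchar(']');
        }
    }
}

int main(void) {
    visit(0, -1);  /* prints 2m symbols, i.e., a 4m-bit encoding */
    printf("  (%d symbols, %d bits)\n", 2 * M, 4 * M);
    return 0;
}

The sublinear extra bits we add on top of this sequence are essentially rank/select indexes over the bitvectors derived from it, which is what turns the static encoding into a navigable one.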


PDF document

An author's version of our work can be downloaded here.


The code

The code for this work is available on GitHub.

A new version of the code is available here; it integrates our structures with the SDSL library.


Experiments

We implemented our construction algorithm in C and compiled it using GCC 5.4 with the Cilk Plus extension.
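For illustration (this snippet is not from our source), the basic Cilk Plus pattern is shown below; it compiles with GCC's -fcilkplus flag, and the number of worker threads can be set through the CILK_NWORKERS environment variable.

#include <cilk/cilk.h>      /* cilk_for keyword         */
#include <cilk/cilk_api.h>  /* __cilkrts_get_nworkers() */
#include <stdio.h>

#define N 1000000

static long a[N];

int main(void) {
    /* The Cilk Plus runtime distributes the iterations over the
       available workers with its work-stealing scheduler. */
    cilk_for (int i = 0; i < N; i++)
        a[i] = (long)i * i;

    printf("workers: %d, a[42] = %ld\n",
           __cilkrts_get_nworkers(), a[42]);
    return 0;
}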

We tested our algorithm on six datasets: wc, pe5M, pe10M, pe15M, pe20M and pe25M (see the table below).

All the datasets are available here.

The experiments were carried out on a NUMA machine with two NUMA nodes. Each node has a 14-core Intel® Xeon® E5-2695 processor clocked at 2.3GHz. The machine runs Linux 2.6.32-642.el6.x86_64 in 64-bit mode. Each core has private L1 and L2 caches of 64KB and 256KB, respectively, and each processor has a shared 35MB L3 cache. The machine has 768GB of DDR3 RAM (384GB per NUMA node) clocked at 1867MHz. Hyperthreading was enabled, giving a total of 28 logical cores per NUMA node. Running times were measured using the high-resolution (nanosecond) C functions in time.h. Memory usage was measured using the tools provided by malloc_count.
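For concreteness, the timing pattern looks like the sketch below, where do_work() is a hypothetical placeholder for the routine being measured:

#include <stdio.h>
#include <time.h>

/* Hypothetical placeholder for the code under measurement. */
static void do_work(void) {
    volatile long s = 0;
    for (long i = 0; i < 100000000L; i++) s += i;
}

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);  /* nanosecond resolution */
    do_work();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %.3f s\n", secs);
    return 0;
}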

Results

In the table, seq corresponds to the serial elision of our parallel algorithm (the sequential code obtained by eliding the Cilk Plus keywords), p is the number of threads, and times are given in seconds.

p     wc     pe5M   pe10M  pe15M  pe20M  pe25M
seq   5.56   13.55  27.86  42.34  57.04  71.76
1     7.08   17.14  35.63  54.21  73.11  92.17
2     4.06    9.71  20.26  30.67  41.37  52.16
4     2.12    4.98  10.31  15.75  21.17  26.70
8     1.13    2.65   5.36   8.25  10.81  13.61
12    0.78    1.84   3.74   5.61   7.50   9.46
16    0.60    1.40   2.84   4.32   5.74   7.28
20    0.49    1.15   2.33   3.57   4.65   6.02
24    0.41    0.97   2.01   3.15   4.11   5.10
28    0.42    1.01   2.05   3.04   3.88   5.36
32    0.38    0.90   1.85   2.82   3.77   4.80
36    0.34    0.83   1.72   2.57   3.48   4.38
40    0.33    0.76   1.58   2.41   3.22   4.04
44    0.31    0.71   1.47   2.23   3.01   3.72
48    0.30    0.68   1.37   2.09   2.76   3.56
52    0.29    0.65   1.33   2.01   2.67   3.36
56    0.31    0.61   1.24   1.90   2.54   3.35
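For example, on the largest dataset (pe25M) the running time drops from 71.76 seconds for seq to 3.35 seconds with p = 56, a speedup of 71.76 / 3.35 ≈ 21.4.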


The second and fifth authors received travel funding from EU grant H2020-MSCA-RISE-2015 BIRDS GA No. 690941. The second author received funding from Conicyt Fondecyt 3170534. The third and fifth authors received funding from Basal Funds FB0001, Conicyt, Chile. The third author received funding from Academy of Finland grant 268324. Early parts of this work were done while the third author was at the University of Helsinki and while the third and fifth authors were visiting the University of A Coruña.


Contact us at: jfuentess@dcc.uchile.cl