Overpass API/Installation

This page tells you how to install the OSM3S server so that you can use it as a local OSM mirror. Additional functionality, such as area management and the line diagram utilities, is not covered in full yet.

Please note: the primary source is https://dev.overpass-api.de/overpass-doc/en/more_info/setup.html. This page serves mainly as a place to collect troubleshooting advice.

System Requirements

Hardware

It is highly recommended that you have at least the following hardware resources available for an OSM planet server:

  • 1 GB of RAM and sufficient swap space for a small extract or a development system. By contrast, overpass-api.de (the main Overpass API instance) has 32 GB main memory. Actual memory requirements also highly depend on the expected maximum number of concurrent users.
  • For a full planet with meta and attic data (i.e. all data since the license change in September 2012), about 200 GB - 300 GB of disk space is required when using a compressed database (since 0.7.54). Without compression, at least double that amount is needed.
  • Use of fast SSDs instead of slower hard disks is highly recommended!

Software

It is required that you have the following resources:

  • Access to Expat and a C++ compiler
  • An OSM file in XML format, compressed with bzip2 (Geofabrik is an excellent source for extracts; another good resource is the Planet.osm page.)
  • Alternatively, you can use the clone mechanism, or an extract or planet file in PBF format together with osmconvert (pass the --out-osm parameter to osmconvert, as Overpass API doesn't support PBF natively; see the sketch after this list)
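
For example, a PBF extract can be converted to a bzip2-compressed OSM XML file like this (a sketch; the file names are placeholders):

osmconvert extract.osm.pbf --out-osm | bzip2 > extract.osm.bz2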

NOTE: You do not need a database engine (e.g. MySQL or PostgreSQL); the database back-end is included in the OSM3S package.

You will need to identify or create the following directories (an example shell setup follows this list):

  • $EXEC_DIR: the root directory in which the executables should be installed, without the /bin/ suffix (about 100 MB). For example, the public server has this at /opt/osm-3s/v0.7.54/
  • $DB_DIR: a directory to store the database
  • $REPLICATE_DIR: a directory to store minutely (or otherwise) diffs (only necessary if you decide to configure minutely updates below)
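
For example, the three variables could be set like this for a single-user installation (a sketch; the paths are only examples, adjust them to your system):

export EXEC_DIR=/opt/osm-3s/v0.7.54
export DB_DIR=/var/lib/overpass/db
export REPLICATE_DIR=/var/lib/overpass/diffs
mkdir -p "$DB_DIR" "$REPLICATE_DIR"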

Setup

Ubuntu or Debian 6.0 (squeeze) or Debian 7.0 (wheezy)

Install the following packages: g++, make, expat, libexpat1-dev and zlib1g-dev.

sudo apt-get update
sudo apt-get install g++ make expat libexpat1-dev zlib1g-dev

Option 1: Installation via tarball

Download the latest tarball, prepared with GNU autoconf. For example:

wget http://dev.overpass-api.de/releases/osm-3s_v[latest_version].tar.gz

Unpack the tarball:

tar -zxvf osm-3s_v*.tar.gz

Compile the OSM3S package:

cd osm-3s_v*
./configure CXXFLAGS="-O2" --prefix=$EXEC_DIR
make install

Option 2: Installation via bleeding edge dev version (expert use)

Alternatively, if you want the bleeding-edge development version, you can get it from GitHub.

sudo apt-get install git libtool autoconf automake
git clone https://github.com/drolbr/Overpass-API.git
cd Overpass-API
git checkout minor_issues

NB: the usual development takes place in feature branches. The branch _master_ is moved only from point version to point version. The branch _minor_issues_ points to the latest patch release. The other active branches are features for future versions which are not yet release-ready.

When using the latest dev version from GitHub, the build system has to be updated first. The following steps were successfully tested on Ubuntu 14.04 and Debian 7.0:

cd ./src/
autoreconf
libtoolize
automake --add-missing
autoreconf

Compile the OSM3S package:

cd ../build/
../src/configure CXXFLAGS="-Wall -O2" --prefix=$EXEC_DIR
make install

NOTE: If you encounter a message like configure: error: cannot find install-sh or install.sh in "../src" "../src/.." "../src/../.." even though automake is installed (sudo apt-get install automake), the symbolic link(s) in the "../src/" directory may be broken. Before you can continue, you will need to delete and recreate the links to your system's proper files, for example:

ln -s /usr/share/automake-1.11/missing ./missing
ln -s /usr/share/automake-1.11/install-sh ./install-sh
ln -s /usr/share/automake-1.11/depcomp ./depcomp

If you receive a 'Link already exists' error, you can try recreating the links using absolute paths:

sudo rm -r /root/osm-3s_v0.7.50/src/missing
sudo rm -r /root/osm-3s_v0.7.50/src/install-sh
sudo rm -r /root/osm-3s_v0.7.50/src/depcomp

sudo ln -s /usr/share/automake-1.11/missing /root/osm-3s_v0.7.50/src/missing
sudo ln -s /usr/share/automake-1.11/install-sh /root/osm-3s_v0.7.50/src/install-sh
sudo ln -s /usr/share/automake-1.11/depcomp /root/osm-3s_v0.7.50/src/depcomp

NOTE: If you encounter an error of the form make: *** [...] Error 1 during compilation, something unexpected occurred, and this is an opportunity to help make the OSM3S package more robust. To help, capture the compile-time output and email it to the package's current maintainer, Roland Olbricht. For example, the following command captures the output and puts it in a file called error.log:

make install >&error.log

Option 3: AWS Marketplace AMI (third-party offering)

This image has been provided by a third party and is not officially endorsed or supported.

A paid, pre-built AMI based on these instructions exists on the AWS Marketplace:

v0.7.x on Ubuntu 22.04 (x86) - https://aws.amazon.com/marketplace/pp/prodview-d6c5n6d6qs2c6

v0.7.x on Ubuntu 22.04 (ARM) - https://aws.amazon.com/marketplace/pp/prodview-di6a6yqzv46z4


For any issues regarding this image contact: https://aws.amazon.com/marketplace/pp/prodview-d6c5n6d6qs2c6#pdp-support

  • The image includes a snapshot of the database from the day the image was built (see the version number), so you may need to apply up to half a year's worth of minute updates. This takes about 15 days of full CPU load for a single core, and it can be slow and expensive.
  • By default it exposes the API over HTTP only.
  • Minutely updates are enabled.
  • NOTE: It does not include areas and is cloned with meta=no (i.e. it includes neither meta=yes nor meta=attic data).
  • Generally speaking, burstable EC2 instances are not suitable for building areas.

Populating the DB

The recommended way to populate the database is via cloning from the dev server:

./download_clone.sh --db-dir=database_dir --source=https://dev.overpass-api.de/api_drolbr/ --meta=no

This is the fastest method and needs the least space. If you need metadata (i.e. object version numbers, editing users and timestamps), use --meta=yes instead of --meta=no. If you also want museum data (all old object versions since the license change in 2012), replace the parameter with --meta=attic.

You could also populate the overpass database from a planet file. For this, you need to download a planet file:

wget https://planet.openstreetmap.org/planet/planet-latest.osm.bz2

Populate the database with:

nohup ../src/bin/init_osm3s.sh planet-latest.osm.bz2 $DB_DIR $EXEC_DIR &
tail -f nohup.out

If you want to query your server with JOSM, you'll need metadata. Add the --meta parameter:

nohup ../src/bin/init_osm3s.sh planet-latest.osm.bz2 $DB_DIR $EXEC_DIR --meta &
tail -f nohup.out

It is not possible to get museum data this way, because the planet file does not contain that data.

The nohup together with & detaches the process from your console, so you can log off without accidentally stopping it. tail -f nohup.out lets you read the output of the process (which is written to nohup.out).

NOTE: This step can take a very long time to complete: less than an hour for a smaller OSM extract, but on the order of 24 hours or more for a full planet file, depending on available memory and processor resources. When the process has finished successfully, nohup.out will end with "Update complete".

(As a side note, this also works for applying OSC files to an existing database. Thus you can do daily updates by applying these diffs with a cron job. This method puts less load on the disks than minutely updates, and the data is still reasonably up to date.)

Populating the DB with attic data

Since Overpass API v0.7.50, it is possible to also retain previous object versions, the so-called attic versions, in the database. Previous object versions are accessible via [date:...], [diff:...] and [adiff:...], as well as some filters like (changed:...).

For the main Overpass API instance, the database was initially built using the first available ODbL compliant planet dump file (September 2012). If you don't require all the history back to 2012, it is also possible to start with any later planet dump and apply any subsequent update via the daily/hourly/minutely update process.

Any subsequent changes can be automatically stored in the database, if the following two prerequisites are met:

  • Dispatcher needs to be run with attic support enabled
  • apply_osc_to_db.sh also needs to be run with attic support enabled

The relevant settings for both the dispatcher and the update script are described further down on this page.

To populate the database with attic data (needed, for example, to use augmented diffs), use --keep-attic instead of --meta.
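
A sketch of the corresponding import command, obtained by replacing --meta with --keep-attic in the planet-file example above:

nohup ../src/bin/init_osm3s.sh planet-latest.osm.bz2 $DB_DIR $EXEC_DIR --keep-attic &
tail -f nohup.out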

Notes:

  • At this time it is not possible to use a full history dump to initialize the database (see User Page).
  • Using extracts instead of a planet file together with attic mode is currently being discussed on the developers' list and is likely also not to work.

Static Usage

OSM3S is now ready to answer queries. To run a query, run

$EXEC_DIR/bin/osm3s_query --db-dir=$DB_DIR

and enter your query on standard input. If you are typing directly into the console, you need to press Ctrl+D at the end to signal the end of input. Answers will appear on standard output.

If you've imported the entire planet, try the example query:

<query type="node"><bbox-query n="51.0" s="50.9" w="6.9" e="7.0"/><has-kv k="amenity" v="pub"/></query><print/>

This one returns all pubs in Cologne (the city with the best beer in Germany :) ).
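
For reference, the same query expressed in Overpass QL, which osm3s_query also accepts (a sketch):

node["amenity"="pub"](50.9,6.9,51.0,7.0);
out;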

Check the full introduction to the OSM3S query language on the web or at $EXEC_DIR/html/index.html (installed as part of OSM3S) for more information.

Lastly, if you're using the dispatcher daemon, osm3s_query can connect to it and find $DB_DIR by itself:

$EXEC_DIR/bin/osm3s_query

If osm3s_query answers queries without you specifying the db dir, then the dispatcher daemon is running correctly.
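
For a quick check you can also pipe a minimal statement in from the shell instead of typing it interactively (a sketch; with an empty result set this should just return the response skeleton, including the osm_base timestamp):

echo '<print mode="body"/>' | $EXEC_DIR/bin/osm3s_query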

Starting the dispatcher daemon

If you wish to automatically apply diff updates or run the Web API, you need to start the dispatcher daemon (otherwise this is optional). Like all other Overpass processes, it should be started by a single, standard user. Do not run anything with root privileges; that would be an unnecessary security risk. The tools set the necessary file permissions to allow writing for the user that created the database and reading for everybody else.

nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR &

For metadata you need to add a parameter:

nohup $EXEC_DIR/bin/dispatcher --osm-base --meta --db-dir=$DB_DIR &

When serving attic data you need to run the dispatcher with the following parameters:

nohup $EXEC_DIR/bin/dispatcher --osm-base --attic --db-dir=$DB_DIR &

Systemd, Upstart

Short answer: systemd is not designed to run a DBMS, and in particular not Overpass API. I explain the details in a blog post.

Some reminders:

  • Never start any component of Overpass API as root. The whole system is designed to work without root, and you can run into really weird bugs if some parts of the system run as root.
  • Do not automatically remove any of the lock files, socket files or shared memory files. They work as canaries, i.e. hitting existing files is almost always an indicator of bigger trouble elsewhere. Please ask for advice in those cases.

crontab

Overpass includes a script, ${EXEC_DIR}/bin/reboot.sh, which may be used with the crontab @reboot option. It requires some editing before deployment, and should be run as the overpass user, not root.
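
A sketch of the corresponding crontab entry for the overpass user (the installation path is the example one from above; edit reboot.sh before relying on it):

# edit with "crontab -e" as the overpass user, not root
@reboot /opt/osm-3s/v0.7.54/bin/reboot.sh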

Applying minutely (or hourly, or daily) diffs

The dispatcher daemon must be running for diff application to work.

First, decide the maximum tolerable lag for your DB and pick the corresponding replication interval (minutely, hourly or daily) under https://planet.openstreetmap.org/replication/.

From this, you need to find the replication sequence number, which will become $FIRST_MINDIFF_ID in the instructions below. To find it:

  1. Browse through the replication directory hierarchy (e.g. https://planet.openstreetmap.org/replication/minute/) and find the diff that has a date just before the starting point of the planet dump. The planet dump starts at 00:00 UTC; because the server shows local time, this corresponds to 01:00 during British Summer Time and 00:00 during winter in the file listing.
  2. Verify you have the right file by checking the respective *.state.txt file. Its timestamp should show a date (here always UTC) slightly before midnight. The sequenceNumber in this file (also present in the filename) is your replication sequence number, i.e. $FIRST_MINDIFF_ID (see the illustrative example below).
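
An illustrative *.state.txt (the values are made up; yours will differ, and the relevant field is sequenceNumber):

#Thu Sep 27 00:00:05 UTC 2012
sequenceNumber=123456
timestamp=2012-09-26T23\:59\:57Z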

From $EXEC_DIR/bin, run:

nohup ./fetch_osc.sh $FIRST_MINDIFF_ID https://planet.openstreetmap.org/replication/minute/ $REPLICATE_DIR/ &

This starts a daemon that will download all diffs from $FIRST_MINDIFF_ID up to the present into your replicate directory. If it is kept running, it will automatically download new diffs as they become available. If you obtain diffs some other way, you can omit this command.

Next, apply changes to your DB:

nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta=no &

This starts the daemon that keeps the database up to date. Recent versions require an additional augmented_diffs parameter:

nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --augmented_diffs=no &

To add metadata, you must add a parameter to the second command. Instead of the above, run:

nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta &

To update your database containing attic data, you need to use the following command:

nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta=attic &

To see what's going on, watch these log files (for example with tail -f, as shown after the list):

$DB_DIR/transactions.log
$DB_DIR/apply_osc_to_db.log
$REPLICATE_DIR/fetch_osc.log
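
For example, to follow all of them live (a sketch):

tail -f $DB_DIR/transactions.log $DB_DIR/apply_osc_to_db.log $REPLICATE_DIR/fetch_osc.log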

Setting up the Web API

The dispatcher daemon must be running for the Web API to work.

This section describes one way to set up a basic read-only HTTP-based API with OSM3S.

1. Install Apache2 and enable CGI

sudo apt-get install apache2
sudo a2enmod cgi ext_filter

2. Configure Apache2

cd /etc/apache2/sites-available
$EDITOR *default*

Note: use the correct name of the default file for your apache installation.

Make your default file look something like this:

<VirtualHost *:80>
	ServerAdmin webmaster@localhost
	ExtFilterDefine gzip mode=output cmd=/bin/gzip
	DocumentRoot [YOUR_HTML_ROOT_DIR]

	# This directive indicates that whenever someone types http://www.mydomain.com/api/ 
	# Apache2 should refer to what is in the local directory [YOUR_EXEC_DIR]/cgi-bin/
	ScriptAlias /api/ [YOUR_EXEC_DIR]/cgi-bin/


	# This specifies some directives specific to the directory: [YOUR_EXEC_DIR]/cgi-bin/
	<Directory "[YOUR_EXEC_DIR]/cgi-bin/">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                # For Apache 2.2:
                #  Order allow,deny
                # For Apache >= 2.4:  
                Require all granted
                #SetOutputFilter gzip
                #Header set Content-Encoding gzip
	</Directory>

	ErrorLog /var/log/apache2/error.log

	# Possible values include: debug, info, notice, warn, error, crit, alert, emerg
	LogLevel warn

	CustomLog /var/log/apache2/access.log combined

</VirtualHost>

3. Reload Apache:

sudo systemctl reload apache2

4. If it is not already running, start the dispatcher process as the overpass user and point it to your database directory:

nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR &

5. Test the Web API with the following command:

curl http://DOMAIN_OR_IP/api/interpreter?data=%3Cprint%20mode=%22body%22/%3E

The result should look similar to this:

<?xml version="1.0" encoding="UTF-8"?>
<osm-derived>
  <note>
    The data included in this document is from www.openstreetmap.org. It has there been collected 
    by a large group of contributors. For individual attribution of each item please refer to 
    https://www.openstreetmap.org/api/0.6/[node|way|relation]/#id/history 
  </note>
  <meta osm_base=""/>

</osm-derived>

Area creation

This section was adapted from http://overpass-api.de/full_installation.html and may need some revision. Please also check the discussion page and add any details worth mentioning here.

To use areas with Overpass API, you essentially need another permanently running process that generates the current areas from the existing data in batch runs.

First, you need to copy the rules directory into a subdirectory of the database directory:

cp -pR "../rules" $DB_DIR

Hint: If you use an early tarball (ca. 2015), the rules subfolder is missing. It can be found in the source repository if you need it: https://github.com/drolbr/Overpass-API

The next step is to start a second dispatcher that coordinates read and write operations for the area-related files in the database:

nohup $EXEC_DIR/bin/dispatcher --areas --db-dir=$DB_DIR &

chmod 666 "../db/osm3s_v0.7.*_areas"

The dispatcher has been successfully started if you find a line "Dispatcher just started." with the correct date (in UTC) in the file transactions.log in the database directory.
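
A quick way to check this (a sketch):

grep "Dispatcher just started" $DB_DIR/transactions.log | tail -n 1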

The third step then is to start the rule batch processor as a daemon:

nohup $EXEC_DIR/bin/rules_loop.sh $DB_DIR &

This process should not impede the real business of the server. Therefore, I strongly suggest lowering its priority. To do this, you need to find with

ps -ef | grep rules

the PIDs of the processes rules_loop.sh and ./osm3s_query --progress --rules. For each of the two PIDs, run the commands:

renice -n 19 -p PID
ionice -c 2 -n 7 -p PID

The second command is not available on FreeBSD. This is not a big problem, because the rescheduling only gives hints to the operating system.

When the batch process has completed its first cycle, all areas become accessible in the database at once. This may take up to 24 hours.

Troubleshooting

runtime error: open64: 2 /osm3s_v0.6.91_osm_base Dispatcher_Client

Note: if you get an output document that looks more like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8" lang="en"/>
  <title>OSM3S Response</title>
</head>
<body>

<p>
   The data included in this document is from www.openstreetmap.org. It has there been collected
   by a large group of contributors. For individual attribution of each item please refer to 
   https://www.openstreetmap.org/api/0.6/[node|way|relation]/#id/history 
</p>

<p><strong style="color:#FF0000">Error</strong>: runtime error: open64: 2 /osm3s_v0.6.91_osm_base Dispatcher_Client::1 </p>

</body>
</html>

Then it may indicate that the dispatcher process is not running or not configured correctly.

runtime error: open64: 2 No such file or directory /osm3s_v0.7.51_osm_base Dispatcher_Client::1

Make sure that the first dispatcher is running.

File_Error Address already in use 98 /srv/osm3s/db_dir//osm3s_v0.7.3_osm_base Dispatcher_Server::4

Check for stale lock files in the following two locations before restarting a crashed/killed dispatcher:

  • /dev/shm
  • your db directory (a file named osm3s_v*_osm_base).

To clean up these lock files automatically you can try running:

$EXEC_DIR/bin/dispatcher --terminate

File_Error 17 /osm3s_v0.6.94_osm_base Dispatcher_Server::1

If you killed (or crashed) the dispatcher daemon and wish to restart it, you might encounter this error (unless you reboot). There is a lock file, /dev/shm/osm3s_v0.6.94_osm_base, that prevents other dispatchers from running while one is already running. Remove that file (after checking that no dispatcher is running) and restart the dispatcher.

To remove this lock file (and others), try running:

$EXEC_DIR/bin/dispatcher --terminate

No such file or directory /srv/osm-3s_v0.7.52/db/areas.bin File_Blocks::File_Blocks::1

This error can happen if the file permissions of the area files are wrong. Try

chown -R www-data:www-data $DB_DIR/area*


This change would imply write access for www-data to the database files, which is ill-advised. Area creation will usually run as a non-www-data user. For handling queries, read-only access to the area files is definitely sufficient. Mmd (talk) 20:49, 12 September 2015 (UTC)
Good point! This was really just a quick trial-and-error solution, which might be wrong. Any detailed step-by-step instruction would be really helpful: I got the described error message while following the official install instructions above, so there's definitely some missing part in the documentation! free_as_a_bird (talk) 23:18, 12 September 2015 (UTC)
Yes, that's not really recommended. Usually you would run both the dispatcher and the rules_loop.sh script as a dedicated (non-www-data) user. On the other hand, www-data still needs read access to the database files, as the 'interpreter' is run as the www-data user via CGI. I think it is best to discuss this on the Overpass developer list (see the info box on this page) and also get some feedback from Roland. Mmd (talk) 11:41, 13 September 2015 (UTC)
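
Based on the discussion above, a safer setup is to run the dispatchers and rules_loop.sh as a dedicated non-root, non-www-data user and give www-data read-only access. A minimal sketch (the user name overpass is an assumption):

# run dispatcher and rules_loop.sh as a dedicated user, e.g. "overpass"
sudo chown -R overpass:overpass $DB_DIR
# make the database readable (and directories traversable) for everyone,
# so the CGI interpreter running as www-data can read it
chmod -R o+r $DB_DIR
find $DB_DIR -type d -exec chmod o+x {} \;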

Database population problem

(Found in version 0.7.5) If you receive an out of memory error while populating the database:

Out of memory in UB 25187: OOM killed process 21091 (update_database)

Try adding --flush-size=1 as a parameter when calling update_database; in most cases, this means adding the parameter to the last line of the init_osm3s.sh script.
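
A sketch of what that modified call could look like (the exact line in your copy of init_osm3s.sh may differ):

bunzip2 < planet-latest.osm.bz2 | $EXEC_DIR/bin/update_database --db-dir=$DB_DIR --flush-size=1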

Area batch run out of memory error

When running the area generation batch, you may receive the following:

Query run out of memory in "recurse" at line 255 using about 1157 MB

Assuming you have enough free physical memory (4 GB worked for me), try removing all the "area" files from your database directory and increasing the element-limit (in your $DB_DIR/rules/rules.osm3s file) to "2073741824".

Apache config fails

If you encounter a message like this one when (re)starting the Apache server:

# apache2ctl graceful 
Syntax error on line 12 of /etc/apache2/httpd.conf:
Invalid command 'Header', perhaps misspelled or defined by a module not included in the server configuration
Action 'graceful' failed.

then Apache does not have mod_headers enabled. You can activate mod_headers by running:

# a2enmod headers
Enabling module headers.
To activate the new configuration, you need to run:
service apache2 restart
# apache2ctl graceful 

After this, apache should start up correctly.

Apache: HTTP 403 Forbidden errors

If you run into 403 Forbidden errors with Apache, double-check in your configuration (/etc/apache2/apache2.conf) that access to your cgi-bin directory is explicitly allowed.
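
On Apache 2.4 this typically means a Directory block like the following (a sketch matching the virtual host example above):

<Directory "[YOUR_EXEC_DIR]/cgi-bin/">
    Options +ExecCGI
    Require all granted
</Directory>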

Contributors corner

WebAPI using NGINX (Ubuntu)

You may want to set up the Web API using NGINX instead of Apache. You can follow the instructions below to make NGINX serve as the gateway to the CGI interpreter.

Installing dependencies

First of all, install and enable NGINX and fcgiwrap:

sudo apt-get install nginx fcgiwrap
sudo systemctl enable --now nginx fcgiwrap

fcgiwrap should create a socket file at /var/run/fcgiwrap.socket; you should be able to find it in that directory:

$ ls -la /var/run/

With a socket in place, we can go further (if the location of the socket differs, adjust the configuration below accordingly).

Configuring NGINX

Create and edit a separate server definition in the nginx sites-available directory:

$ sudoedit /etc/nginx/sites-available/overpass.conf

Paste and adjust the following configuration:

server {
    listen 80;
    location /api/ {
        alias [path-to-exec-dir]/cgi-bin/;
        #gzip on;
        #gzip_types application/json application/osm3s+xml;
        
        # set the minimum length that will be compressed
        #gzip_min_length 1000; # in bytes
        
        # Fastcgi socket
        fastcgi_pass  unix:/var/run/fcgiwrap.socket;
        
        # Fastcgi parameters, include the standard ones
        include /etc/nginx/fastcgi_params;
        
        # Adjust non standard fcgi parameters
        fastcgi_param SCRIPT_FILENAME  $request_filename;
    }
}

To enable gzip compression, uncomment the gzip on, gzip_types and gzip_min_length lines. This configuration assumes HTTP traffic (port 80) and no domain name, so API requests should look like this:

http://[your.static.ip.address]/api/interpreter?data=[OverpassRequest]

Enabling configuration

Lastly, we have to enable the newly created configuration.

Delete the default nginx site:

$ sudo rm /etc/nginx/sites-enabled/default

Then, symlink the newly created configuration into the sites-enabled directory:

$ cd /etc/nginx/sites-enabled/
$ sudo ln -s /etc/nginx/sites-available/overpass.conf

Finally, reload the nginx configuration. First check that the config is syntactically correct:

$ sudo nginx -t

If it reports no errors, reload the nginx service:

$ sudo systemctl reload nginx

After a successful reload, the Overpass API will be available at the following address (assuming you have started the dispatcher and completed all required previous steps):

http://[your.static.ip.address]/api/interpreter

Installation

Via a Docker image

The official Docker repository is this one.

There is also a non-official Docker image available here. This image can be initialized either from a planet data file or by cloning from an existing data repository. Docker must be installed on the computer.

There are also other images here and here that are, however, no longer maintained.

CentOS or RHEL 7

1. Install dependencies

$ sudo yum install tar make gcc-c++ expat expat-devel zlib-devel bzip2 rpmbuild gcc ruby-devel rpm-install
$ sudo gem install fpm

2. Get source

$ wget http://dev.overpass-api.de/releases/osm-3s_v0.7.52.tar.gz
$ tar -xvzf osm-3s_v0.7.52.tar.gz

3. Compile the software

cd osm*
./configure CXXFLAGS="-O3" --prefix=/usr/local/osm3s
make

4. Systemd unit file

$ vim overpass-api.service
[Unit]
Description=Overpass API dispatcher daemon
After=syslog.target
 
[Service]
Type=simple
ExecStart=/usr/local/osm3s/bin/dispatcher --osm-base --db-dir=/var/lib/osm3s/data/db
ExecStop=/usr/local/osm3s/bin/dispatcher --terminate
 
[Install]
WantedBy=multi-user.target

5. Create an rpm package with some post-install/remove scripts

post-install script

$ vim post-install
#!/bin/bash
 
mv /usr/local/osm3s/bin/overpass-api.service /etc/systemd/system
mkdir -p /var/lib/osm3s/data/db

post-remove script

$ vim post-remove
#!/bin/bash
 
rm /etc/systemd/system/overpass-api.service

Create the rpm package using fpm:

/usr/local/bin/fpm -s dir -t rpm -n overpass-api -v 0.7 --iteration 52 --exclude bin/.dirstamp --exclude bin/.libs --after-install post-install --after-remove post-remove --prefix /usr/local/osm3s bin/
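
After installing the generated package, the service could be enabled with something like this (a sketch; note the caveats about systemd in the section above):

sudo systemctl daemon-reload
sudo systemctl enable --now overpass-api.service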

The package is available on https://packagecloud.io/visibilityspots/packages and a proof of concept is available at https://github.com/visibilityspots/vagrant-puppet/tree/overpass-api

This package seems to be incomplete: it doesn't include the cgi-bin directory, nor does it match the overall structure of the official installation. Moved to the experimental section for the time being. Better to put it on your own user page until it is ready. Mmd (talk) 14:24, 30 December 2015 (UTC)

Community Installation Guides

Guides written by community members not affiliated with the developers:

  • ZeLonewolf's guide to installing overpass on Ubuntu 18.04 LTS