Archivo offers two modes: Develop, for working on the server, and Deploy, for deploying the service with Docker + gunicorn.
Archivo uses Poetry for dependency management, so please install it beforehand.
# Clone the repository:
git clone https://github.com/dbpedia/archivo.git
# Go into the repo
cd archivo
# Install the dependencies with poetry
poetry install
# Change directory to the actual source code
cd archivo
# Run the dev server:
# Note: this only starts the web service; cronjobs (update, discovery, etc.) run only
# when Archivo is started in deployment mode with gunicorn.
# To test those services, import the archivo Python module in an interactive shell
# and execute the required functions.
poetry run python archivo.py
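To test the cronjob functionality without gunicorn, you can open an interactive interpreter inside the Poetry environment and call the functions directly. A minimal sketch (the function name below is hypothetical; check the `archivo` module for the actual function names):

```shell
# Open an interactive Python shell inside the Poetry virtualenv
poetry run python

# Then, inside the interpreter:
# >>> import archivo
# >>> archivo.some_update_function()  # hypothetical name; inspect the module for real ones
```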
It is assumed that Docker is correctly installed; if not, follow the official Docker installation instructions.
Steps for running the service with docker:
Clone the repository:
git clone https://github.com/dbpedia/archivo.git
cd archivo
You need to configure multiple points:
- Configure your local nginx (or any other similar software) to make a local directory (and all its possible subdirectories) `LOCAL_DIR` (e.g. `/home/myuser/www/archivo-data`) available to the public under a certain URL `PUBLIC_URL` (e.g. `https://mydomain.org/myuser/archivo-data`)
- Now configure the two necessary files:
  - The archivo config: here you need to set at least the `PUBLIC_URL_BASE` constant to your `PUBLIC_URL`
  - The docker run script: here you need to mount your local directory `LOCAL_DIR` to `/usr/local/archivo-data` (see the example given)
- You can also change the preset configs in `archivo_config.py`, but it is not necessary for the first start
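As a reference, serving `LOCAL_DIR` under `PUBLIC_URL` with nginx might look like the following sketch. The domain, paths, and TLS setup are assumptions; adapt them to your environment:

```nginx
# Minimal nginx sketch (all names are examples):
# LOCAL_DIR  = /home/myuser/www/archivo-data
# PUBLIC_URL = https://mydomain.org/myuser/archivo-data
server {
    listen 443 ssl;
    server_name mydomain.org;

    location /myuser/archivo-data/ {
        alias /home/myuser/www/archivo-data/;
        autoindex on;  # optional: expose subdirectory listings
    }
}
```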
First, you need to build the container with the following command:
docker build -t archivo-build .
Since this downloads and builds the Pellet reasoner, the first execution will take quite a while; subsequent builds on the same machine use the cache and are much faster.
Then just run the configured script:
chmod +x run.sh
./run.sh
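For orientation, a minimal `run.sh` could look like the sketch below. The container name, published port, and `LOCAL_DIR` path are assumptions; use the example script in the repository as the authoritative version:

```shell
#!/bin/sh
# Sketch of a docker run script (names, port, and paths are examples).
# Mounts LOCAL_DIR (here /home/myuser/www/archivo-data) to /usr/local/archivo-data
# inside the container, as described above.
docker run -d \
  --name archivo \
  -p 5000:5000 \
  -v /home/myuser/www/archivo-data:/usr/local/archivo-data \
  archivo-build
```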