Restoring Dictionary Projects

Below are the Dictionary Projects supported on Mainnet. These projects are commonly used and simple to create and index, but they produce large datasets.

To speed up the onboarding of Indexers, we provide database snapshots for most of these dictionaries.

Downloading Database Snapshots

| Network | Deployment ID | Database Size | S3 Bucket URL | SHA256 |
| --- | --- | --- | --- | --- |
| Polkadot | QmZGAZQ7e1oZgfuK4V29Fa5gveYK3G2zEwvUzTZKNvSBsm | ~220 GB | S3 URL | c1a1f67e2a205dc9fdd90f738686f3ee57052fcc7bc383a054574ab81e17584f |
| Kusama | QmXwfCF8858YY924VHgNLsxRQfBLosVbB31mygRLhgJbWn | ~260 GB | S3 URL | 65f6fc3dd0410b296f651369690fd866070dbba8781a61454fc00cc11676452c |
| Nodle | QmQtmsHoJEYUcxKE4tBqr9Z8kudcgkczQPfhkAxVExQX5y | ~15 GB | S3 URL | 71b52ef798f96c86214213e26a1960477d11f0f80916914159fd2feec2ba17fe |
| Moonbeam | QmeeqBHdVu7iYnhVE9ZiYEKTWe4jXVUD5pVoGXT6LbCP2t | ~123 GB | S3 URL | 508a47bf84476e222d7ce72d4ca870a41da46d7c2284abdb3db915964d0a69c6 |
| Moonriver | QmXCr6uZFdY1YcGTa4u6ZieQPXK4VHE1Pjy7CBr7ubFwKR | ~130 GB | S3 URL | db22b7565d8fea385a9e69636eb805079bf6708e898296f27673bc2b4d7a476d |
| Acala | Qmarrhgrpqw5VK71rMtb4GARpPvq8ajMjAqnjnWZFLV61N | ~10 GB | S3 URL | 4f0d8105f45ca856c57fa5f87d102e398e1e99403612c077750cb07bc9839c0d |
| Karura | QmWumrabg4k6t4EUMhQg19xWwcxGq1hWbcmfmRYiy2Bod5 | ~10 GB | S3 URL | 046674efb30cdc7ab61b1e834201ac125548e0fafb5f6b69e321a9ddf7b06ae9 |
| Khala | QmYCAns2cunZKJFU85KNK8CvL2ATAmCFVZRdBf963GqWYs | ~78 GB | S3 URL | 1b18b40345b7473fb4d8219f1da60381126ec8bbbe064158d2ed5b1b3ad532cd |
| Westend | Qma6BeSQGHrhP5aydmkQcJCR25TEwuNMogS5boovBBwoeW | ~35 GB | S3 URL | 72c94be8187a1298a81a7039900566a80447996899b047f6b4fe3f3066a89bef |
| Astar | QmUmnKPhcE6JwGMYvY3Yitb5j8qxbQBMxgpkHpVQuXqxDH | ~65 GB | S3 URL | db2c8be67d18e7401290d67b3d7f457dc1881ef0505eb22807487d03b5031e81 |

You can download each snapshot either from the S3 bucket URL or via the BitTorrent magnet link:

Downloading via BitTorrent is usually faster; you can use your favourite BitTorrent client or install aria2:

sudo apt update
sudo apt install aria2

aria2c <Magnet_Link>

To download from an S3 bucket:

curl -o dictionary.tar <Download_URL>
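
Once the download completes, it is worth verifying the archive against the SHA256 value from the table above before extracting it. A minimal sketch (substitute <Expected_SHA256> with the hash for the network you downloaded):

# Verify the downloaded snapshot against the published checksum
echo "<Expected_SHA256>  dictionary.tar" | sha256sum -c -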

Restoring the Database Snapshot

This assumes that you have an indexer running locally with admin access to a PostgreSQL database (you will be using the pg_restore command).
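
As a quick sanity check, you can confirm the database container is up and that the superuser can connect. This sketch assumes the indexer_db container name and postgres user that appear in the restore command later in this guide:

# Confirm the container is reachable and the postgres superuser can connect
docker exec -it indexer_db psql -U postgres -c "SELECT version();"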

  1. First, extract the downloaded snapshot using the following command:
tar -xvf dictionary.tar

You will now have a pg_dump file named schema_xxxxxxx.dump.

  2. Copy schema_xxxxxxx.dump to .data/postgres/ and then run this command:
docker exec indexer_db pg_restore -v -j 2 -h localhost -p 5432 -U postgres -d postgres /var/lib/postgresql/data/schema_xxxxxxx.dump > restore.log 2>&1 &
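
For completeness, the copy in step 2 is an ordinary host-side file copy; this assumes .data/postgres/ is the host directory mounted at /var/lib/postgresql/data inside the container, as the pg_restore path above implies:

# Copy the dump into the directory mounted inside the indexer_db container
cp schema_xxxxxxx.dump .data/postgres/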

Note

  • Make sure your indexer_db container is running with a healthy status before starting the restore process (a quick check is sketched below these notes).

  • We use the -j parameter to set the number of jobs running concurrently. Depending on your machine size, you may want to increase this number to speed up the restore process. Read more.
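
For the first note, one quick way to check the container status (the STATUS column should read healthy, assuming the container defines a Docker health check):

# Check that indexer_db is running and reports a healthy status
docker ps --filter "name=indexer_db" --format "table {{.Names}}\t{{.Status}}"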

The restore process will start and can take quite a long time (around 2 days), so make sure you run this command in the background (use a tool like tmux, screen, or nohup). Here is an example of the output log.

pg_restore: creating SCHEMA "schema_qmzj9whrhrmvn2h"
pg_restore: creating FUNCTION "schema_qmzj9whrhrmvn2h.schema_notification()"
pg_restore: creating TABLE "schema_qmzj9whrhrmvn2h._metadata"
pg_restore: creating TABLE "schema_qmzj9whrhrmvn2h._poi"
pg_restore: creating TABLE "schema_qmzj9whrhrmvn2h.events"
pg_restore: creating TABLE "schema_qmzj9whrhrmvn2h.extrinsics"
pg_restore: creating TABLE "schema_qmzj9whrhrmvn2h.spec_versions"
pg_restore: processing data for table "schema_qmzj9whrhrmvn2h._metadata"
pg_restore: processing data for table "schema_qmzj9whrhrmvn2h._poi"
pg_restore: processing data for table "schema_qmzj9whrhrmvn2h.events"
pg_restore: processing data for table "schema_qmzj9whrhrmvn2h.extrinsics"
pg_restore: processing data for table "schema_qmzj9whrhrmvn2h.spec_versions"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h._metadata _metadata_pkey"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h._poi _poi_chainBlockHash_key"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h._poi _poi_hash_key"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h._poi _poi_mmrRoot_key"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h._poi _poi_parentHash_key"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h._poi _poi_pkey"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h.events events_pkey"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h.extrinsics extrinsics_pkey"
pg_restore: creating CONSTRAINT "schema_qmzj9whrhrmvn2h.spec_versions spec_versions_pkey"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.events_block_height__block_range"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.events_event__block_range"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.events_module__block_range"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.extrinsics_block_height__block_range"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.extrinsics_call__block_range"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.extrinsics_module__block_range"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.extrinsics_tx_hash__block_range"
pg_restore: creating INDEX "schema_qmzj9whrhrmvn2h.poi_hash"
pg_restore: creating TRIGGER "schema_qmzj9whrhrmvn2h._metadata 0xc1aaf8b4176d0f02"
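
Because the restore runs in the background with its output redirected, you can follow its progress at any time through the log file from the command above:

# Follow the restore log as it is written
tail -f restore.log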

After the data is restored, you can add the specific project to your service inside the admin app and start indexing it; indexing will resume from the restored data and continue from there.
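
Before adding the project, you can optionally confirm that the restored schema is present in the database. A minimal sketch using psql's schema listing (the restored schema_... schema should appear in the output):

# List the schemas in the database to confirm the restore landed
docker exec -it indexer_db psql -U postgres -d postgres -c "\dn"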