== Hosting ==
We are now on a [https://www.scaleway.com/virtual-cloud-servers START1-S instance] virtual cloud server with the following specs:
* 2 x86-64 bit Cores
* 2 GB Memory
* 50 GB SSD Disk
* 200 Mbit/s Bandwidth
* 1 Reserved IP (v4)
* €3.99/Month
== Coordination ==
Maintained in a private git repo at https://git.fosscommunity.in/community/access
== History ==
We were on a [https://www.scaleway.com/virtual-cloud-servers VC1S instance] virtual cloud server with the following specs until 31/01/2019:
* 2 x86-64 bit Cores
* 2 GB Memory
* 50 GB SSD Disk
* 1 Flexible Public IPv4
* 200 Mbit/s Unmetered bandwidth
* €2.99/Month
On 01/02/2019 our server was taken down by [https://scaleway.com Scaleway] citing payment issues. Payments kept failing even after the credit card details were updated, and the server was then deleted without proper notification from Scaleway's side. A screenshot of the email from Scaleway is attached below. Fortunately, we were provided with a snapshot of the server, from which we were able to recover codema to a new server.
[[File:]]
=== Codema recovery process ===
''Here's a brief description of how codema was recovered after the server takedown on 01/02/2019:''
Under the 'Snapshots' tab in the Scaleway dashboard we were provided with the snapshot (backup) of our codema server. A system image was created from this snapshot and used to create a new server with similar specifications. We lost our public IP along with the old server, so a new IP was assigned to the server and the DNS A record of codema.fsci.org.in was updated to point to it (a quick way to verify the record is sketched after the restart commands below). Once the server was up, Loomio was restarted using the following commands from the Loomio installation directory:
 docker-compose down
 docker-compose up -d
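Since the public IP changed with the new server, it's worth confirming that the updated A record has actually propagated before testing the site. A minimal check, assuming the <code>dig</code> utility (from the dnsutils package) is available on the machine you query from:

 # Query the A record; the answer should be the newly assigned IP
 dig +short codema.fsci.org.in A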
The logs were checked for errors using the following command:
 docker-compose logs -f
Loomio failed to start, reporting that port 25 was already in use. The application occupying that port (which was exim4 in this case) was killed and Loomio was restarted again.
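The exact commands used to track down exim4 aren't recorded here, but a port conflict like this one can be diagnosed along these lines (a sketch assuming a systemd-based host; <code>lsof -i :25</code> is an alternative to <code>ss</code>):

 # Show which process is listening on port 25
 ss -tlnp 'sport = :25'
 # Stop the conflicting service (exim4 in this case) and prevent it from starting at boot
 systemctl stop exim4
 systemctl disable exim4
 # Restart Loomio from the installation directory
 docker-compose up -d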
[[Category: Services]]