Poddery - Diaspora, Matrix and XMPP

Note: Currently new registrations are closed. If you want an account, contact us.
We run the decentralized and federated Diaspora social network along with XMPP and Matrix instant messaging services at poddery.com. The Poddery username and password can be used to access the XMPP and Matrix services as well as Diaspora. chat.poddery.com provides the Riot client (accessed via a web browser), which can be used to connect to any Matrix server without installing a Riot app/client.
Environment
Hosting
Poddery is hosted at Hetzner with the following specs:
- Intel Xeon E3-1246V3 Processor - 4 Cores, 3.5GHz
- 4TB HDD
- 32GB DDR3 RAM
Operating System
- Debian Buster
User Visible Services
Diaspora
- Currently installed version is 0.7.6.1 which is available in Debian Buster contrib
- For live statistics see https://poddery.com/statistics
Chat/XMPP
- Prosody is used as the XMPP server which is modern and lightweight.
- Currently installed version is 0.11.2 which is available in Debian Buster.
- All XEPs supported by the Conversations app are enabled.
Chat/Matrix
- Synapse is used as the Matrix server.
- Synapse is currently installed directly from the official GitHub repo.
- Riot-web Matrix client is hosted at https://chat.poddery.com
Homepage
Homepage and other static pages are maintained in FSCI GitLab instance.
- poddery.com -> https://git.fosscommunity.in/community/poddery.com
- save.poddery.com -> https://git.fosscommunity.in/community/save.poddery.com
- fund.poddery.com -> https://git.fosscommunity.in/community/fund-poddery
Backend Services
Web Server / Reverse Proxy
- Nginx web server which also acts as front-end (reverse proxy) for Diaspora and Matrix.
Database
- PostgreSQL for Matrix
- MySQL for Diaspora
TODO: Consider migrating to PostgreSQL to optimize resources (We can reduce one service and RAM usage).
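If that migration is ever attempted, one plausible route is pgloader, which copies schema and data from MySQL to PostgreSQL in one pass. A minimal sketch, assuming pgloader is installed and the diaspora_production database names used elsewhere on this page (credentials are illustrative, not the live ones):

# Stop the Diaspora services first so the data doesn't change mid-copy.
# Hypothetical pgloader invocation: source MySQL URI, target PostgreSQL URI.
pgloader mysql://diaspora:<password>@localhost/diaspora_production \
         pgsql://diaspora:<password>@localhost/diaspora_production
# Afterwards, point Diaspora's database.yml at PostgreSQL and restart.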
Email

- Exim
SSL/TLS certificates
- Let's Encrypt
Firewall
- UFW (Uncomplicated Firewall)
Intrusion Prevention
- Fail2ban
Coordination
- Loomio group (https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group) - Mainly used for decision making
- Matrix room - #poddery:poddery.com (https://matrix.to/#/#poddery:poddery.com), also bridged to the XMPP room poddery.com-support@chat.yax.im
- Issue tracker (https://git.fosscommunity.in/community/poddery.com/issues) - Used for tracking progress of tasks
Contact
- Email: poddery at autistici.org (alias that reaches Akhilan, Abhijith Balan, Fayad, Balasankar, Julius, Praveen, Prasobh, Sruthi, Shirish, Vamsee and Manukrishnan)
- The following people have their GPG keys in the access file:
- ID: 0xCE1F9C674512C22A - Praveen Arimbrathodiyil (piratepin)
- ID: 0xB77D2E2E23735427 - Balasankar C
- ID: 0x5D0064186AF037D9 - Manu Krishnan T V
- ID: 0x51C954405D432381 - Fayad Fami (fayad)
- ID: 0x863D4DF2ED9C28EF - Abhijith PA
- ID: 0x6EF48CCD865A1FFC - Syam G Krishnan (sgk)
- ID: 0xFD49D0BC6FEAECDA - Sagar Ippalpalli
- ID: 0x92FDAB42A95FF20C - Pirate Bady (piratesin)
- ID: 0x0B1955F40C691CCE - Kannan
- ID: 0x32FF6C6F5B7AE248 - Akhil Varkey
- ID: 0xFBB7061C27CB70C1 - Ranjith Siji
- ID: 0xEAAFE4A8F39DE34F - Kiran S Kunjumon (hacksk)
- It's recommended to set up the Vim GnuPG plugin for transparent editing. Those who are new to GPG can follow this guide.
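For those editing the access file without the Vim plugin, a manual round-trip looks roughly like this (a sketch; the file names are hypothetical, and the recipient list must cover every key ID above):

# Decrypt for editing (prompts for your key's passphrase)
gpg --output access --decrypt access.gpg
# ...edit, then re-encrypt, adding one --recipient per maintainer key ID
gpg --output access.gpg --encrypt \
    --recipient 0xCE1F9C674512C22A \
    --recipient 0xB77D2E2E23735427 \
    access
shred -u access    # remove the cleartext copy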
Server Access
Maintained in a private git repo at https://git.fosscommunity.in/community/access
Configuration and Maintenance
Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system
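Once in the rescue system, the arrays and encrypted volumes have to be reassembled by hand before anything can be repaired. A minimal sketch, assuming the RAID/LUKS/LVM layout described under Disk Partitioning below (device names as used there):

# Assemble the RAID arrays and open the encrypted data volume
mdadm --assemble --scan
cryptsetup luksOpen /dev/mdX poddery    # disk encryption password from the access repo
vgchange -ay data                       # activate the LVM volume group

# Mount the root (md2) and boot (md1) partitions and chroot in for repairs
mount /dev/md2 /mnt
mount /dev/md1 /mnt/boot
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt /bin/bash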
Disk Partitioning
- RAID 1 setup on 2x2TB HDDs (sda and sdb).
mdadm --verbose --create /dev/mdX --level=mirror --raid-devices=2 /dev/sdaY /dev/sdbY
- Separate partitions for swap (md0 - 16GB), boot (md1 - 512MB) and root (md2 - 50GB).
- LVM on LUKS for separate encrypted data partitions for database, static files and logs.
# Setup LUKS (make sure lvm2, udev and cryptsetup packages are installed)
cryptsetup luksFormat /dev/mdX    # Give disk encryption password as specified in the access repo
cryptsetup luksOpen /dev/mdX poddery

# LVM setup
# Create physical volume named poddery
pvcreate /dev/mapper/poddery
# Create volume group named data
vgcreate data /dev/mapper/poddery
# Create logical volumes named log, db and static
lvcreate -n log /dev/data -L 50G
lvcreate -n db /dev/data -L 500G
# Assign remaining free space for static files
lvcreate -n static /dev/data -l 100%FREE

# Setup filesystem on the logical volumes
mkfs.ext4 /dev/data/log
mkfs.ext4 /dev/data/db
mkfs.ext4 /dev/data/static

# Create directories for mounting the encrypted partitions
mkdir /var/lib/db /var/lib/static /var/log/poddery

# Manually mount the encrypted partitions. This is needed on each reboot, as Hetzner
# doesn't provide a web console, so we can't decrypt the partitions during booting.
mount /dev/data/db /var/lib/db
mount /dev/data/static /var/lib/static
mount /dev/data/log /var/log/poddery
Hardening checklist
- SSH password based login disabled (allow only key based logins)
- SSH login disabled for root user (use a normal user with sudo)
# Check for the following settings in /etc/ssh/sshd_config:
...
PermitRootLogin no
...
PasswordAuthentication no
...
- ufw firewall enabled with only the ports that need to be opened (ufw tutorial):
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow http/tcp
ufw allow https/tcp
ufw allow Turnserver
ufw allow XMPP
ufw allow 8448
ufw enable

# Verify everything is setup properly
ufw status

# Enable ufw logging with default mode low
ufw logging on
- fail2ban configured against brute force attacks:
# Check for the following line in /etc/ssh/sshd_config
...
LogLevel VERBOSE
...

# Restart SSH and enable fail2ban
systemctl restart ssh
systemctl enable fail2ban
systemctl start fail2ban

# To unban an IP, first check /var/log/fail2ban.log to get the banned IP and
# then run the following. Here sshd is the default jail name; change it if you
# are using a different jail.
fail2ban-client set sshd unbanip <banned_ip>
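fail2ban ships with an sshd jail; a minimal local override, assuming the stock Debian layout (the values here are illustrative, not the live settings):

cat <<'EOF' > /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5      # ban after 5 failed attempts
bantime  = 600    # ban for 10 minutes
EOF
systemctl restart fail2ban
fail2ban-client status sshd    # confirm the jail is active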
Diaspora
- Install diaspora-installer from Debian Buster contrib:

apt install diaspora-installer
- Move MySQL data to encrypted partition:
# Make sure /dev/data/db is mounted to /var/lib/db
systemctl stop mysql
systemctl disable mysql
mv /var/lib/mysql /var/lib/db/
ln -s /var/lib/db/mysql /var/lib/
systemctl start mysql
- Move static files to encrypted partition:
# Make sure /dev/data/static is mounted to /var/lib/static
mkdir /var/lib/static/diaspora
mv /usr/share/diaspora/public/uploads /var/lib/static/diaspora
ln -s /var/lib/static/diaspora/uploads /usr/share/diaspora/public/
chown -R diaspora: /var/lib/static/diaspora
- Modify configuration files at /etc/diaspora and /etc/diaspora.conf as needed (backups of the current configuration files are available in the access repo).
- Homepage configuration:
# Make sure git and acl packages are installed
# Grant rwx permissions for the ssh user to /usr/share/diaspora/public
setfacl -m "u:<ssh_user>:rwx" /usr/share/diaspora/public

# Clone poddery.com repo
cd /usr/share/diaspora/public
git clone https://git.fosscommunity.in/community/poddery.com.git
cd poddery.com && mv * .[^.]* ..    # Give yes for all files when prompted
cd .. && rmdir poddery.com
- The Save Poddery repo is maintained as a submodule in the poddery.com repo. See this tutorial for working with git submodules.
# Clone save.poddery.com repo
cd /usr/share/diaspora/public/save
git submodule init
git submodule update
Matrix
- See the official installation guide of Synapse for installing from source.
- Nginx is used as a reverse proxy to send requests that have /_matrix/* in the URL to Synapse on port 8008. This is configured in /etc/nginx/sites-enabled/diaspora (see the sketch after this list).
- Shamil's Synapse Diaspora Auth script is used to authenticate Synapse against the Diaspora database.
- Move PostgreSQL data to encrypted partition:
# Make sure /dev/data/db is mounted to /var/lib/db
systemctl stop postgresql
systemctl disable postgresql
mv /var/lib/postgresql /var/lib/db/
ln -s /var/lib/db/postgresql /var/lib/
systemctl start postgresql
- Move static files to encrypted partition:
# Make sure /dev/data/static is mounted to /var/lib/static
mkdir /var/lib/static/synapse
mv /var/lib/matrix-synapse/uploads /var/lib/static/synapse/
ln -s /var/lib/static/synapse/uploads /var/lib/matrix-synapse/
mv /var/lib/matrix-synapse/media /var/lib/static/synapse/
ln -s /var/lib/static/synapse/media /var/lib/matrix-synapse/
chown -R matrix-synapse: /var/lib/static/synapse
- Install the identity server mxisd (deb package available here).
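A minimal sketch of what the proxy fragment looks like, assuming a standard Synapse-behind-Nginx setup (the snippet path and values are illustrative; the live rules are in /etc/nginx/sites-enabled/diaspora and /etc/nginx/sites-enabled/matrix):

# Illustrative only; include a fragment like this from the server { } block
cat <<'EOF' > /etc/nginx/snippets/matrix-proxy.conf
location /_matrix {
    proxy_pass http://127.0.0.1:8008;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_read_timeout 600;        # /sync long-polls are slow by design
    client_max_body_size 50M;      # must cover max_upload_size in homeserver.yaml
}
EOF
nginx -t && systemctl reload nginx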
Workers
- For scalability, Poddery is running workers (https://github.com/matrix-org/synapse/blob/master/docs/workers.md). Currently all workers specified on that page, except synapse.app.appservice, are running on poddery.com.
- A new service, matrix-synapse@.service (https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc), is installed for the workers (save the synapse_worker file somewhere like /usr/local/bin/).
- The worker config can be found at /etc/matrix-synapse/workers.
- Synapse needs to be put under a reverse proxy; see /etc/nginx/sites-enabled/matrix. A lot of /_matrix/ URLs need to be overridden too; see /etc/nginx/sites-enabled/diaspora.
- These lines must be added to homeserver.yaml as we are running the media_repository, federation_sender, pusher and user_dir workers respectively:
enable_media_repo: False
send_federation: False
start_pushers: False
update_user_directory: false
- These services must be enabled:
matrix-synapse@synchrotron.service
matrix-synapse@federation_reader.service
matrix-synapse@event_creator.service
matrix-synapse@federation_sender.service
matrix-synapse@pusher.service
matrix-synapse@user_dir.service
matrix-synapse@media_repository.service
matrix-synapse@frontend_proxy.service
matrix-synapse@client_reader.service
matrix-synapse@synchrotron_2.service
To load balance between the two synchrotrons, we are running matrix-synchrotron-balancer. It has a systemd file at /etc/systemd/system/matrix-synchrotron-balancer. The files are in /opt/matrix-synchrotron-balancer.
Synapse Updation
- First check the upgrade notes at https://matrix-org.github.io/synapse/latest/upgrade to see if anything extra needs to be done. Then just run /root/upgrade-synapse.
- The current version of Synapse can be found at https://poddery.com/_matrix/federation/v1/version
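For example, to check the running version from the command line (jq is optional, just for pretty-printing):

curl -s https://poddery.com/_matrix/federation/v1/version | jq .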
Riot-web Updation
- Just run the following (make sure to replace <version> with a proper version number like v1.0.0):

/var/www/get-riot <version>
Chat/XMPP
- Steps for setting up Prosody are given at https://wiki.debian.org/Diaspora/XMPP
# Follow steps 1 to 6 from https://wiki.debian.org/Diaspora/XMPP and then run the following:
mysql -u root -p    # Enter password from the access repo

CREATE USER 'prosody'@'localhost' IDENTIFIED BY '<passwd_in_repo>';
GRANT ALL PRIVILEGES ON diaspora_production.* TO 'prosody'@'localhost';
FLUSH PRIVILEGES;

systemctl restart prosody
- Install plugins:

# Make sure mercurial is installed
cd /etc && hg clone https://hg.prosody.im/prosody-modules/ prosody-modules
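After cloning, Prosody has to be told where the community modules live. A sketch of the usual approach (plugin_paths and modules_enabled are standard Prosody options; the module names are examples, not the live list):

# In /etc/prosody/prosody.cfg.lua:
#   plugin_paths = { "/etc/prosody-modules" }
#   modules_enabled = { ...; "smacks"; "csi_simple"; ... }
# Then validate the config and reload:
prosodyctl check config && systemctl reload prosody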
Set Nginx Conf for BOSH URLs
- Add the following to the nginx configuration file to enable the BOSH URL and make JSXC work:
upstream chat_cluster {
    server localhost:5280;
}

location /http-bind {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
    proxy_connect_timeout 5;
    proxy_buffering off;
    proxy_read_timeout 70;
    keepalive_timeout 70;
    send_timeout 70;
    client_max_body_size 4M;
    client_body_buffer_size 128K;
    proxy_pass http://chat_cluster;
}
TLS
- Install letsencrypt.
- Ensure proper permissions are set for /etc/letsencrypt and its contents:
chown -R root:ssl-cert /etc/letsencrypt
chmod g+r -R /etc/letsencrypt
chmod g+x /etc/letsencrypt/{archive,live}
- Generate certificates. For more details see https://certbot.eff.org.
- Make sure the certificates used by diaspora are symbolic links to the letsencrypt default location:
ls -l /etc/diaspora/ssl
total 0
lrwxrwxrwx 1 root root 47 Apr  2 22:47 poddery.com-bundle.pem -> /etc/letsencrypt/live/poddery.com/fullchain.pem
lrwxrwxrwx 1 root root 45 Apr  2 22:48 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem

# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/diaspora/ssl/poddery.com-bundle.pem
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/diaspora/ssl/poddery.com.key
- Make sure the certificates used by prosody are symbolic links to the letsencrypt default location:
ls -l /etc/prosody/certs/
total 0
lrwxrwxrwx 1 root root 40 Mar 28 01:16 poddery.com.crt -> /etc/letsencrypt/live/poddery.com/fullchain.pem
lrwxrwxrwx 1 root root 33 Mar 28 01:16 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem

# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/prosody/certs/poddery.com.crt
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/prosody/certs/poddery.com.key
- Note: the letsencrypt executable used below is actually a symlink to /usr/bin/certbot.
- Cron jobs:

crontab -e

30 2 * * 1 letsencrypt renew >> /var/log/le-renew.log
32 2 * * 1 /etc/init.d/nginx reload
34 2 * * 1 /etc/init.d/prosody reload
- Manually updating TLS certificate:
letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save -d save.poddery.com -w /var/www/riot -d chat.poddery.com
- To include an additional subdomain such as fund.poddery.com, use the --expand parameter as shown below:
letsencrypt certonly --webroot --agree-tos --expand -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save/ -d save.poddery.com -w /var/www/riot/ -d chat.poddery.com
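A renewal dry run is a cheap way to confirm that all the webroot mappings still work before the cron job fires (--dry-run uses the staging API and changes nothing):

letsencrypt renew --dry-run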
Backup
The backup server is provided by Manu (a KVM virtual machine with 180 GB storage and 1 GB RAM).
Debian Stretch was upgraded to Debian Buster before replication of the Synapse database.
Documentation: https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql/
Currently, the Postgres database for matrix-synapse is backed up.
Before Replication (specific to poddery.com)
Set up tinc VPN on the backup server
# apt install tinc
Configure tinc by creating tinc.conf and the podderybackup host file under the label fsci. Add tinc-up and tinc-down scripts. Copy the poddery host config to the backup server, and the podderybackup host config to the poddery.com server (a sketch of these files follows below).
Reload the tinc VPN service on both the poddery.com and backup servers
# systemctl reload tinc@fsci.service
Enable tinc@fsci systemd service for autostart
# systemctl enable tinc@fsci.service
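A minimal sketch of what those files look like, assuming the node names used above and the VPN addresses from the replication section below (172.16.0.2 for poddery, 172.16.0.3 for the backup); the real files live under /etc/tinc/fsci/:

# /etc/tinc/fsci/tinc.conf on the backup server
cat <<'EOF' > /etc/tinc/fsci/tinc.conf
Name = podderybackup
ConnectTo = poddery
EOF

# tinc-up: bring up the VPN interface with the standby address
cat <<'EOF' > /etc/tinc/fsci/tinc-up
#!/bin/sh
ip link set "$INTERFACE" up
ip addr add 172.16.0.3/24 dev "$INTERFACE"
EOF

# tinc-down: tear it down again
cat <<'EOF' > /etc/tinc/fsci/tinc-down
#!/bin/sh
ip addr del 172.16.0.3/24 dev "$INTERFACE"
ip link set "$INTERFACE" down
EOF

chmod +x /etc/tinc/fsci/tinc-up /etc/tinc/fsci/tinc-down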
The Synapse database was also pruned to reduce its size before replication, following this guide: https://levans.fr/shrink-synapse-database.html. If you want to follow it, make sure the Synapse server is at least version 1.13, since that release introduces the Rooms API mentioned in the guide. Changes made to the steps in the guide:
# jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt
The room list obtained this way can be looped over to pass the room IDs as variables to the purge API.
# set +H    (if you are using bash, to avoid '!' in room names triggering history substitution)
# for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
    -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
    'https://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done
We also did not remove old history of large rooms.
Step 1: Postgresql (for synapse) Primary configuration
Create postgresql user for replication.
$ psql -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'yourpassword';"
The password is in the access repo if you need it later.
Allow standby to connect to primary using the user just created.
$ cd /etc/postgresql/11/main
$ nano pg_hba.conf
Add the line below to allow the replication user to access the server
host replication replication 172.16.0.3/32 md5
Next, open the postgres configuration file
nano postgresql.conf
Set the following configuration options in the postgresql.conf file
listen_addresses = 'localhost,172.16.0.2'
port = 5432
wal_level = replica
max_wal_senders = 1
wal_keep_segments = 64
archive_mode = on
archive_command = 'cd .'
You need to restart since postgresql.conf was edited and parameters changed:
# systemctl restart postgresql
Step 2: Postgresql (for synapse) Standby configuration
Install postgresql
# apt install postgresql
Check that the postgresql server is running
# su postgres -c psql
Make sure en_US.UTF-8 locale is available
# dpkg-reconfigure locales
Stop postgresql before changing any configuration
# systemctl stop postgresql@11-main
Switch to postgres user
# su - postgres
$ cd /etc/postgresql/11/
Copy data from master and create recovery.conf
$ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U replication --wal-method=fetch -R
Open the postgres configuration file
$ nano postgresql.conf
Set the following configuration options in the postgresql.conf file
# This option and the one below must match the values in postgresql.conf on the
# primary, or the service won't start.
max_connections = 500
max_worker_processes = 16

# The above pg_basebackup command should set this; if it's not set, manually turn it on.
hot_standby = on
Start the stopped postgresql service
# systemctl start postgresql@11-main
Postgresql (for synapse) Replication Status
On Primary,
$ ps -ef | grep sender
$ psql -c "select * from pg_stat_activity where usename='replication';"
On Standby,
$ ps -ef | grep receiver
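In addition, PostgreSQL's own views give a quick health check (pg_stat_replication and pg_is_in_recovery() are standard in PostgreSQL 11):

# On the primary: one row per connected standby, with WAL positions
$ psql -c "SELECT client_addr, state, sent_lsn, replay_lsn FROM pg_stat_replication;"
# On the standby: returns 't' while the server is replaying WAL
$ psql -c "SELECT pg_is_in_recovery();"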
Troubleshooting
Allow XMPP login even if diaspora account is closed
Diaspora has a default setting (https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts) to close accounts that have been inactive for 2 years. At the time of writing, there seems to be no way (https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462) to reopen a closed account. This also means that if your account is closed, you will no longer be able to log in to the associated XMPP service. Here we discuss a workaround to get access back to the XMPP account.
The prosody module mod_auth_diaspora (https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua) is used for diaspora-based XMPP auth. It checks (at lines 89 and 98 of the module) whether the locked_at value in the users table of the diaspora db is null. If your account is locked, it will hold the datetime value at which your account was locked; setting it back to null will let you use your XMPP account again.
-- Replace <username> with actual username of the locked account
UPDATE users SET locked_at=NULL WHERE username='<username>';
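To check the lock timestamp before touching anything, the same table can be queried first (a hypothetical one-liner; the Diaspora database is MySQL's diaspora_production, password in the access repo):

mysql -u root -p diaspora_production \
      -e "SELECT username, locked_at FROM users WHERE username='<username>';"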
NOTE: Matrix account won't be affected even if the associated diaspora account is closed because it uses a custom auth module which works differently.
History
- See here for the archive of Poddery wiki page before the migration to Hetzner.