Poddery - Diaspora, Matrix and XMPP
We run the decentralized and federated [https://diasporafoundation.org/ Diaspora] social network, along with [https://xmpp.org/ XMPP] and [https://matrix.org Matrix] instant messaging services, at [https://poddery.com poddery.com]. The same Poddery username and password used for Diaspora also work for the XMPP and Matrix services. [https://chat.poddery.com chat.poddery.com] hosts the Element client (accessed through a web browser), which can be used to connect to any Matrix server without installing the Element app.
= Environment =
=== Chat/XMPP ===
* XMPP has been moved to a virtual host on the Durare.org server. See https://gitlab.com/piratemovin/diasp.in/-/wikis/XMPP-durare.org-setup
=== Chat/Matrix ===
* [https://matrix.org/docs/projects/server/synapse.html Synapse] is used as the Matrix server.
* Synapse is currently installed directly from the [https://github.com/matrix-org/synapse official GitHub repo].
* The Element (formerly Riot-web) Matrix client is hosted at https://chat.poddery.com
=== Homepage ===
== Backend Services ==
=== Web Server / Reverse Proxy ===
* Nginx is the web server and also acts as the front end (reverse proxy) for Diaspora and Matrix. By default all HTTPS requests on port 443 are passed to Diaspora, except:
*# requests starting with <code>/_matrix</code> or <code>/_synapse</code>, which are passed to the main Synapse service, and
*# requests starting with <code>/_matrix/media</code>, which are passed to the Synapse media worker.
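The routing described above can be sketched as an nginx config fragment. This is a sketch only: the worker port (8011) and header details are illustrative assumptions, not copied from the live <code>/etc/nginx/sites-enabled/diaspora</code>; only port 8008 for the main Synapse process is stated elsewhere on this page.

```nginx
# Media requests go to the media_repository worker.
# The ^~ prefix modifier stops nginx from also checking the regex
# location below, so the more specific media route wins.
location ^~ /_matrix/media {
    proxy_pass http://127.0.0.1:8011;   # worker port is a placeholder
    proxy_set_header X-Forwarded-For $remote_addr;
}

# All other Matrix client/federation and Synapse admin requests
# go to the main Synapse process on port 8008.
location ~ ^(/_matrix|/_synapse) {
    proxy_pass http://127.0.0.1:8008;
    proxy_set_header X-Forwarded-For $remote_addr;
}

# Everything else on 443 falls through to the Diaspora upstream
# configured in the rest of the server block.
```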
=== Database ===
=== SSL/TLS certificates ===
* Let's Encrypt
=== Firewall ===
* UFW (Uncomplicated Firewall)
=== Intrusion Prevention ===
* Fail2ban
= Coordination =
* [https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group Loomio group] - Mainly used for decision making
* Matrix room - [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com], also bridged to XMPP at [xmpp:poddery.com-support@chat.yax.im?join poddery.com-support@chat.yax.im]
* [https://git.fosscommunity.in/community/poddery.com/issues Issue tracker] - Used for tracking progress of tasks
= Configuration and Maintenance =
Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system
== Disk Partitioning ==
* RAID 1 setup on 2x2TB HDDs (<code>sda</code> and <code>sdb</code>).
# Assign remaining free space for static files
lvcreate -n static /dev/data -l 100%FREE
# Set up filesystems on the logical volumes
mkfs.ext4 /dev/data/log
mkfs.ext4 /dev/data/db
mkfs.ext4 /dev/data/static
# Create directories for mounting the encrypted partitions
ufw default allow outgoing
ufw allow ssh
ufw allow http/tcp
ufw allow https/tcp
ufw allow Turnserver
ufw allow XMPP
ufw allow 8448
ufw enable
# Verify everything is set up properly
ufw status
# Enable ufw logging with default mode low
ufw logging on
* <code>fail2ban</code> configured against brute force attacks:
# Restart SSH and enable fail2ban
systemctl restart ssh
systemctl enable fail2ban
systemctl start fail2ban
# To unban an IP, first check <code>/var/log/fail2ban.log</code> to get the banned IP and then run the following
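A fail2ban jail for SSH is typically configured through a local override file. This is a minimal sketch of such a jail: the retry count and ban time shown are illustrative stock defaults, not the values actually used on poddery.com.

```ini
# /etc/fail2ban/jail.local -- local overrides, kept separate from
# the packaged jail.conf so upgrades don't clobber them
[sshd]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5        ; failed attempts before a ban (illustrative)
bantime  = 600      ; ban duration in seconds (illustrative)
```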
apt install diaspora-installer
* Move MySQL data to encrypted partition:
# Make sure <code>/dev/data/db</code> is mounted to <code>/var/lib/db</code>
systemctl stop mysql
systemctl disable mysql
mv /var/lib/mysql /var/lib/db/
ln -s /var/lib/db/mysql /var/lib/
systemctl start mysql
* Move static files to encrypted partition:
# Make sure <code>/dev/data/static</code> is mounted to <code>/var/lib/static</code>
mkdir /var/lib/static/diaspora
mv /usr/share/diaspora/public/uploads /var/lib/static/diaspora
ln -s /var/lib/static/diaspora/uploads /usr/share/diaspora/public/
chown -R diaspora: /var/lib/static/diaspora
* Modify configuration files at <code>/etc/diaspora</code> and <code>/etc/diaspora.conf</code> as needed (backups of the current configuration files are available in the [[#Server_Access|access repo]]).
# Make sure <code>git</code> and <code>acl</code> packages are installed
# Grant <code>rwx</code> permissions for the ssh user to <code>/usr/share/diaspora/public</code>
setfacl -m "u:<ssh_user>:rwx" /usr/share/diaspora/public
# Clone poddery.com repo
* Nginx is used as a reverse proxy to send requests that have <code>/_matrix/*</code> in the URL to Synapse on port <code>8008</code>. This is configured in <code>/etc/nginx/sites-enabled/diaspora</code>.
* Shamil's [https://git.fosscommunity.in/necessary129/synapse-diaspora-auth Synapse Diaspora Auth] script is used to authenticate Synapse against the Diaspora database.
* Move PostgreSQL data to encrypted partition:
# Make sure <code>/dev/data/db</code> is mounted to <code>/var/lib/db</code>
systemctl stop postgresql
systemctl disable postgresql
mv /var/lib/postgres /var/lib/db/
ln -s /var/lib/db/postgres /var/lib/
systemctl start postgresql
* Move static files to encrypted partition:
# Make sure <code>/dev/data/static</code> is mounted to <code>/var/lib/static</code>
mkdir /var/lib/static/synapse
mv /var/lib/matrix-synapse/uploads /var/lib/static/synapse/
ln -s /var/lib/static/synapse/uploads /var/lib/matrix-synapse/
mv /var/lib/matrix-synapse/media /var/lib/static/synapse/
ln -s /var/lib/static/synapse/media /var/lib/matrix-synapse/
chown -R matrix-synapse: /var/lib/static/synapse
* Install identity server <code>mxisd</code> (<code>deb</code> package available [https://github.com/kamax-matrix/mxisd/blob/master/docs/install/debian.md here])
=== Workers ===
* For scalability, Poddery runs [https://github.com/matrix-org/synapse/blob/master/docs/workers.md workers]. Currently all workers specified on that page, except <code>synapse.app.appservice</code>, are running on poddery.com.
* A new service [https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc <code>matrix-synapse@.service</code>] is installed for the workers (save the <code>synapse_worker</code> file somewhere like <code>/usr/local/bin/</code>).
* The worker config can be found at <code>/etc/matrix-synapse/workers</code>.
* Synapse needs to be put behind a reverse proxy; see <code>/etc/nginx/sites-enabled/matrix</code>. A lot of <code>/_matrix/</code> URLs need to be overridden too; see <code>/etc/nginx/sites-enabled/diaspora</code>.
* These lines must be added to <code>homeserver.yaml</code> as we are running the <code>media_repository</code>, <code>federation_sender</code>, <code>pusher</code>, and <code>user_dir</code> workers respectively:
enable_media_repo: False
send_federation: False
update_user_directory: false
* These services must be enabled:
matrix-synapse@synchrotron.service
matrix-synapse@federation_reader.service
matrix-synapse@event_creator.service
matrix-synapse@federation_sender.service
matrix-synapse@pusher.service
matrix-synapse@user_dir.service
matrix-synapse@media_repository.service
matrix-synapse@frontend_proxy.service
matrix-synapse@client_reader.service
matrix-synapse@synchrotron_2.service
* To load balance between the 2 synchrotrons, we run [https://github.com/Sorunome/matrix-synchrotron-balancer matrix-synchrotron-balancer]. It has a systemd unit file at <code>/etc/systemd/system/matrix-synchrotron-balancer</code>. The files are in <code>/opt/matrix-synchrotron-balancer</code>.
=== Synapse Updation ===
* First check [https://matrix-org.github.io/synapse/latest/upgrade the Synapse upgrade notes] to see if anything extra needs to be done. Then just run <code>/root/upgrade-synapse</code>.
* The current version of synapse can be found at https://poddery.com/_matrix/federation/v1/version
=== Riot-web Updation ===
* Just run the following (make sure to replace <code><version></code> with a proper version number like <code>v1.0.0</code>):
/var/www/get-riot <version>
== Chat/XMPP ==
* See https://gitlab.com/piratemovin/diasp.in/-/wikis/XMPP-durare.org-setup
== TLS ==
* Install <code>letsencrypt</code>.
* Ensure proper permissions are set for <code>/etc/letsencrypt</code> and its contents.
chown -R root:ssl-cert /etc/letsencrypt
chmod g+r -R /etc/letsencrypt
chmod g+x /etc/letsencrypt/{archive,live}
* Generate certificates. For more details see https://certbot.eff.org.
* Make sure the certificates used by <code>diaspora</code> are symbolic links to the letsencrypt default location:
ls -l /etc/diaspora/ssl
''total 0''
''lrwxrwxrwx 1 root root 47 Apr 2 22:47 poddery.com-bundle.pem -> /etc/letsencrypt/live/poddery.com/fullchain.pem''
''lrwxrwxrwx 1 root root 45 Apr 2 22:48 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem''
# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/diaspora/ssl/poddery.com-bundle.pem
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/diaspora/ssl/poddery.com.key
''lrwxrwxrwx 1 root root 40 Mar 28 01:16 poddery.com.crt -> /etc/letsencrypt/live/poddery.com/fullchain.pem''
''lrwxrwxrwx 1 root root 33 Mar 28 01:16 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem''
# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/prosody/certs/poddery.com.crt
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/prosody/certs/poddery.com.key
* Note: the <code>letsencrypt</code> executable used below is actually a symlink to <code>/usr/bin/certbot</code>.
* Cron jobs:
crontab -e
''34 2 * * 1 /etc/init.d/prosody reload''
===SSL certificate renewal===
On the 12th of October 2025, all the certificates were removed and recreated. [https://codema.in/d/XUfAOrPW/poddery-server-certificates-recreated This thread] documents all those steps.
When renewing certificates on the poddery server, make sure to follow these steps:
# Stop nginx:
sudo systemctl stop nginx
# Renew certificates for all the domains, following the prompts from certbot:
sudo certbot renew
# Start nginx after the renewal is successful:
sudo systemctl start nginx
==Backup==
The backup server is provided by Manu (KVM virtual machine with 180 GB storage and 1 GB RAM).
Debian Stretch was upgraded to Debian Buster before replication of the synapse database.
Documentation: https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql/
Currently the postgres database for matrix-synapse is backed up.
===Before Replication (specific to poddery.com)===
Set up tinc VPN on the backup server:
# apt install tinc
Configure tinc by creating <code>tinc.conf</code> and the host <code>podderybackup</code> under the label <code>fsci</code>.
Add <code>tinc-up</code> and <code>tinc-down</code> scripts.
Copy the <code>poddery</code> host config to the backup server and the <code>podderybackup</code> host config to the poddery.com server.
Reload the tinc VPN service on both the poddery.com and backup servers:
# systemctl reload tinc@fsci.service
Enable the <code>tinc@fsci</code> systemd service for autostart:
# systemctl enable tinc@fsci.service
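For reference, a tinc configuration under the <code>fsci</code> label has this general shape. This is a sketch only: the node names follow the host configs mentioned above, but the <code>Interface</code> value is an assumption and the real settings live in the access repo.

```
# /etc/tinc/fsci/tinc.conf on the backup server (sketch)
Name = podderybackup
ConnectTo = poddery
Interface = tun0

# The tinc-up script assigns the VPN address on this interface;
# the 172.16.0.x addresses used by the replication setup below
# are configured there.
```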
The synapse database was also pruned to reduce its size before replication, following this guide: https://levans.fr/shrink-synapse-database.html
If you want to follow this guide, make sure the matrix-synapse server is updated to at least version 1.13, since that release introduces the Rooms API mentioned in the guide.
Changes made to the steps in the guide:
# jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt
The room list obtained this way can be looped over to pass the room names as variables to the purge API:
# set +H   (if you are using bash, to avoid '!' in the room name triggering history substitution)
# for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
-X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
'https://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done;
We also did not remove old history of large rooms.
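The jq selection above can be double-checked with a short Python equivalent before purging anything. This is a sketch under the assumption that <code>roomlist.json</code> has the shape returned by Synapse's rooms admin API (a top-level <code>rooms</code> list with <code>room_id</code> and <code>joined_local_members</code> fields, as the jq filter implies).

```python
import json

def rooms_to_purge(path="roomlist.json"):
    """Return IDs of rooms with zero local members, mirroring the jq filter."""
    with open(path) as f:
        data = json.load(f)
    return [room["room_id"]
            for room in data["rooms"]
            if room["joined_local_members"] == 0]

# Tiny self-contained example using the assumed response shape:
sample = {"rooms": [
    {"room_id": "!abandoned:example.org", "joined_local_members": 0},
    {"room_id": "!active:example.org", "joined_local_members": 12},
]}
with open("roomlist.json", "w") as f:
    json.dump(sample, f)

print(rooms_to_purge())  # ['!abandoned:example.org']
```

Comparing this list against <code>to_purge.txt</code> is a cheap sanity check before feeding room IDs to the destructive purge API.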
===Step 1: Postgresql (for synapse) Primary configuration===
Create a postgresql user for replication:
$ psql -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'yourpassword';"
The password is in the access repo if you need it later.
Allow the standby to connect to the primary using the user just created:
$ cd /etc/postgresql/11/main
$ nano pg_hba.conf
Add the line below to allow the replication user to get access to the server:
host replication replication 172.16.0.3/32 md5
Next, open the postgres configuration file:
nano postgresql.conf
Set the following configuration options in the postgresql.conf file:
listen_addresses = 'localhost,172.16.0.2'
port = 5432
wal_level = replica
max_wal_senders = 1
wal_keep_segments = 64
archive_mode = on
archive_command = 'cd .'
Restart postgresql, since postgresql.conf was edited and parameters changed:
# systemctl restart postgresql
===Step 2: Postgresql (for synapse) Standby configuration===
Install postgresql:
# apt install postgresql
Check that the postgresql server is running:
# su postgres -c psql
Make sure the en_US.UTF-8 locale is available:
# dpkg-reconfigure locales
Stop postgresql before changing any configuration:
# systemctl stop postgresql@11-main
Switch to the postgres user:
# su - postgres
$ cd /etc/postgresql/11/
Copy data from the primary and create recovery.conf:
$ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U rep --wal-method=fetch -R
Open the postgres configuration file:
$ nano postgresql.conf
Set the following configuration options in the postgresql.conf file:
max_connections = 500       # This option and the one below must match postgresql.conf on the primary, or the service won't start.
max_worker_processes = 16
hot_standby = on            # The pg_basebackup command above should set this; if it hasn't, turn it on manually.
Start the stopped postgresql service:
# systemctl start postgresql@11-main
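For reference, the <code>-R</code> flag passed to pg_basebackup in the step above writes a <code>recovery.conf</code> into the data directory along these lines (PostgreSQL 11; later versions replaced this file with <code>standby.signal</code>). This is a sketch: the password placeholder must not be filled in by hand here, it is written by pg_basebackup from the connection it used.

```
# /var/lib/postgresql/11/main/recovery.conf -- generated by pg_basebackup -R
standby_mode = 'on'
primary_conninfo = 'host=git.fosscommunity.in port=5432 user=rep password=<password>'
```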
===Postgresql (for synapse) Replication Status===
On the primary:
$ ps -ef | grep sender
$ psql -c "select * from pg_stat_activity where usename='rep';"
On the standby:
$ ps -ef | grep receiver
===Backup steps on 7th Jan 2025===
====Matrix-synapse====
For synapse, the following were backed up:
* Dump of the postgresql database using `pg_dump`
* `/etc/matrix-synapse` - contains config files
* `/var/lib/static/synapse/media` - contains uploaded media files
In order to access the poddery server from the backup server (with your public ssh keys added to both servers in `~/.ssh/authorized_keys`), run the following command on your local system:<syntaxhighlight lang="bash">
eval "$(ssh-agent -s)"
</syntaxhighlight>followed by<syntaxhighlight lang="bash">
ssh user@server -o "ForwardAgent yes" -o "AddKeysToAgent yes"
</syntaxhighlight>on the local system.
The dump was taken using the command from the [https://element-hq.github.io/synapse/latest/usage/administration/backups.html#quick-and-easy-database-backup-and-restore official docs]:<syntaxhighlight lang="bash">
ssh user@poddery-server 'sudo -u postgres pg_dump -Fc --exclude-table-data e2e_one_time_keys_json synapse' > synapse-2025-01-07.dump
</syntaxhighlight>
====Prosody====
For backing up prosody, the following were copied:
* Dump of the database using `mysqldump`
* `/var/lib/prosody` for media files
* `/etc/prosody` for config files
For taking the dump, the following was run from the backup server:
<syntaxhighlight lang="bash">
ssh user@poddery-server 'mysqldump -u prosody --password="$(cat <path/to/password-file>)" prosody | gzip' > backups/prosody-backup.sql.gz
</syntaxhighlight>
Backup of `/var/lib/prosody` was taken using the following steps:
* Create a tar file of the prosody directory:
<syntaxhighlight lang="bash">
cd /var/lib && sudo tar -czvf ~user/var.lib.prosody-2025-01-07.tar.gz prosody
</syntaxhighlight>
* Make user the owner of the compressed file:
<syntaxhighlight lang="bash">
cd && chown user: var.lib.prosody-2025-01-07.tar.gz
</syntaxhighlight>
* Use `scp` to transfer the tar file to the backup server:
<syntaxhighlight lang="bash">
scp -P <port-for-ssh-on-backup-server> ./var.lib.prosody-2025-01-07.tar.gz backup-user@backup-server:directory-to-backup
</syntaxhighlight>
= Troubleshooting =
== Allow XMPP login even if diaspora account is closed ==
Diaspora has a [https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts default setting] to close accounts that have been inactive for 2 years. At the time of writing, there seems to be [https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462 no way] to reopen a closed account. This also means that if your account is closed, you will no longer be able to log in to the associated XMPP service. Here we discuss a workaround to get access back to the XMPP account.
The prosody module [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua mod_auth_diaspora] is used for diaspora-based XMPP auth. It checks whether the <code>locked_at</code> value in the <code>users</code> table of the diaspora db is <code>null</code> [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L89 here] and [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L98 here]. If your account is locked, it will hold a <code>datetime</code> value that represents the date and time at which your account was locked. Setting it back to <code>null</code> will let you use your XMPP account again.
-- Replace <username> with actual username of the locked account
UPDATE users SET locked_at=NULL WHERE username='<username>';
NOTE: The Matrix account won't be affected even if the associated diaspora account is closed, because it uses a [https://pypi.org/project/synapse-diaspora-auth/ custom auth module] which works differently.
= History =