Poddery - Diaspora, Matrix and XMPP

We run the decentralized and federated [https://diasporafoundation.org/ Diaspora] social network, [https://xmpp.org/ XMPP] and [https://matrix.org Matrix] instant messaging services at [https://poddery.com poddery.com]. The same Poddery username and password used for Diaspora also works for the XMPP and Matrix services. [https://chat.poddery.com chat.poddery.com] provides the Element client (accessed from a web browser), which can be used to connect to any Matrix server without installing the Element app.


= Environment =


=== Chat/XMPP ===
* [https://prosody.im/ Prosody], a modern and lightweight XMPP server, is used.
* The XMPP service has been moved to a virtual host on the Durare.org server. See https://gitlab.com/piratemovin/diasp.in/-/wikis/XMPP-durare.org-setup
* The currently installed version is 0.11.2, which is available in [https://packages.debian.org/buster/prosody Debian Buster].
* All XEPs supported by the [https://conversations.im/ Conversations app] are enabled.


=== Chat/Matrix ===
=== Chat/Matrix ===
* [https://matrix.org/docs/projects/server/synapse.html Synapse] is used as the Matrix server.
* [https://matrix.org/docs/projects/server/synapse.html Synapse] is used as the Matrix server.
* Synapse is currently installed directly from the [https://github.com/matrix-org/synapse official GitHub repo].
* Synapse is currently installed directly from the [https://github.com/matrix-org/synapse official GitHub repo].
* The Riot-web (now Element) Matrix client is hosted at https://chat.poddery.com
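For reference, installing from the repo typically looks something like the sketch below; the checkout location and the Python virtualenv are illustrative assumptions, not necessarily the live setup on poddery.com.
  # Sketch: install Synapse from source into a virtualenv (paths are illustrative)
  git clone https://github.com/matrix-org/synapse.git
  cd synapse
  python3 -m venv env
  ./env/bin/pip install --upgrade pip setuptools
  ./env/bin/pip install .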


=== Homepage ===
== Backend Services ==
=== Web Server / Reverse Proxy ===
* Nginx web server, which also acts as the front end (reverse proxy) for Diaspora and Matrix. By default, all HTTPS requests to port 443 are passed to Diaspora. Requests whose paths start with:
*# <code>_matrix</code> or <code>_synapse</code> are passed to the main Synapse service, and
*# <code>_matrix/media</code> are passed to the Synapse media worker.
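The snippet below is a minimal sketch of that routing; the upstream names, backend ports and the Diaspora proxy target are illustrative assumptions, not the live configuration.
  # Sketch only: send Matrix media to the media worker, other Matrix/Synapse
  # paths to the main Synapse process, and everything else to Diaspora.
  upstream synapse_main  { server 127.0.0.1:8008; }   # assumed port
  upstream synapse_media { server 127.0.0.1:8090; }   # assumed port
 
  server {
      listen 443 ssl;
      server_name poddery.com;
 
      location ~ ^/_matrix/media/ {
          proxy_pass http://synapse_media;
          proxy_set_header X-Forwarded-For $remote_addr;
          client_max_body_size 50M;
      }
 
      location ~ ^/(_matrix|_synapse)/ {
          proxy_pass http://synapse_main;
          proxy_set_header X-Forwarded-For $remote_addr;
      }
 
      location / {
          proxy_pass http://127.0.0.1:3000;   # assumed Diaspora (unicorn) port
      }
  }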


=== Database ===


= Coordination =
* [https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group Loomio group] - Mainly used for decision making
* Matrix room - [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com], also bridged to the XMPP room [xmpp:poddery.com-support@chat.yax.im?join poddery.com-support@chat.yax.im]
* [https://git.fosscommunity.in/community/poddery.com/issues Issue tracker] - Used for tracking progress of tasks




= Configuration and Maintenance =
Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system
== Disk Partitioning ==
* RAID 1 setup on 2x2TB HDDs (<code>sda</code> and <code>sdb</code>).


=== Workers ===
* For scalability, Poddery is running [https://github.com/matrix-org/synapse/blob/master/docs/workers.md workers]. Currently all workers specified on that page, except <code>synapse.app.appservice</code>, are running on poddery.com.
* A new service [https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc <code>matrix-synapse@.service</code>] is installed for the workers (save the <code>synape_worker</code> file somewhere like <code>/usr/local/bin/</code>).
* The worker config can be found at <code>/etc/matrix-synapse/workers</code>
* Since the <code>user_dir</code> worker handles user directory updates, <code>update_user_directory</code> is disabled in the main homeserver config:
   update_user_directory: false
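For illustration, a single worker's file under <code>/etc/matrix-synapse/workers</code> might look like the sketch below; the listener port, PID file and log config path are assumptions, and the exact options depend on the Synapse version in use.
  # Sketch: /etc/matrix-synapse/workers/synchrotron.yaml (illustrative values)
  worker_app: synapse.app.synchrotron
 
  worker_replication_host: 127.0.0.1
  worker_replication_http_port: 9093
 
  worker_listeners:
    - type: http
      port: 8083
      resources:
        - names: [client]
 
  worker_pid_file: /var/run/matrix-synapse-synchrotron.pid
  worker_log_config: /etc/matrix-synapse/synchrotron_log_config.yaml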


* These services must be enabled:
  matrix-synapse@synchrotron.service
  matrix-synapse@federation_reader.service
  matrix-synapse@event_creator.service
  matrix-synapse@federation_sender.service
  matrix-synapse@pusher.service
  matrix-synapse@user_dir.service
  matrix-synapse@media_repository.service
  matrix-synapse@frontend_proxy.service
  matrix-synapse@client_reader.service
  matrix-synapse@synchrotron_2.service
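For example, they can all be enabled with a single <code>systemctl</code> call (assuming the <code>matrix-synapse@.service</code> template above is installed):
  systemctl enable matrix-synapse@synchrotron.service matrix-synapse@federation_reader.service \
      matrix-synapse@event_creator.service matrix-synapse@federation_sender.service \
      matrix-synapse@pusher.service matrix-synapse@user_dir.service \
      matrix-synapse@media_repository.service matrix-synapse@frontend_proxy.service \
      matrix-synapse@client_reader.service matrix-synapse@synchrotron_2.service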
 
To load balance between the two synchrotrons, we are running [https://github.com/Sorunome/matrix-synchrotron-balancer matrix-synchrotron-balancer]. It has a systemd unit file at <code>/etc/systemd/system/matrix-synchrotron-balancer</code>. The files are in <code>/opt/matrix-synchrotron-balancer</code>.


=== Synapse Updation ===
* First check the [https://matrix-org.github.io/synapse/latest/upgrade Synapse upgrade notes] to see if anything extra needs to be done. Then, just run <code>/root/upgrade-synapse</code>.
* The current version of Synapse can be found at https://poddery.com/_matrix/federation/v1/version
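For example, the reported version can be checked from a shell:
  curl -s https://poddery.com/_matrix/federation/v1/version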


=== Riot-web Updation ===  
* Just run the following (make sure to replace <code><version></code> with a proper version number like <code>v1.0.0</code>):
  /var/www/get-riot <version>


== Chat/XMPP ==
* Steps for setting up Prosody are given at https://wiki.debian.org/Diaspora/XMPP
* The XMPP service has since been moved to the durare.org server; see https://gitlab.com/piratemovin/diasp.in/-/wikis/XMPP-durare.org-setup
# Follow steps 1 to 6 from https://wiki.debian.org/Diaspora/XMPP and then run the following:
  mysql -u root -p   # Enter password from the access repo
  CREATE USER 'prosody'@'localhost' IDENTIFIED BY '<passwd_in_repo>';
  GRANT ALL PRIVILEGES ON diaspora_production.* TO 'prosody'@'localhost';
  FLUSH PRIVILEGES;
  systemctl restart prosody
 
* Install plugins (make sure <code>mercurial</code> is installed):
  cd /etc && hg clone https://hg.prosody.im/prosody-modules/ prosody-modules
 
=== Set Nginx Conf for BOSH URLs ===
* Add the following to the <code>nginx</code> configuration file to enable the BOSH URL needed to make JSXC work:
  upstream chat_cluster {
    server localhost:5280;
  }
 
  location /http-bind {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
    proxy_connect_timeout 5;
    proxy_buffering       off;
    proxy_read_timeout    70;
    keepalive_timeout     70;
    send_timeout          70;
    client_max_body_size 4M;
    client_body_buffer_size 128K;
    proxy_pass http://chat_cluster;
  }
 
* [https://wiki.diasporafoundation.org/Integration/Chat#Nginx See here] for more details on <code>nginx</code> configuration. Alternatively, <code>apache</code> settings can be found [https://github.com/jsxc/jsxc/wiki/Prepare-apache here].


== TLS ==
  ''34 2 * * 1 /etc/init.d/prosody reload''


===SSL certificate renewal===
On 12 October 2025, all the certificates were removed and recreated. [https://codema.in/d/XUfAOrPW/poddery-server-certificates-recreated This thread] documents all those steps.
 
* Manually updating the TLS certificate:
  letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public  -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save -d save.poddery.com -w /var/www/riot -d chat.poddery.com
* To include an additional subdomain such as fund.poddery.com, use the --expand parameter as shown below:
  letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public --expand -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save/ -d save.poddery.com -w /var/www/riot/ -d chat.poddery.com
 
When renewing certificates on the poddery server, follow these steps:
 
# Stop nginx:
  sudo systemctl stop nginx
# Renew certificates for all the domains, following the prompts from certbot:
  sudo certbot renew
# Start nginx after the renewal is successful:
  sudo systemctl start nginx
 
==Backup==
 
Backup server is provided by Manu (a KVM virtual machine with 180 GB storage and 1 GB RAM).
 
Debian Stretch was upgraded to Debian Buster before replication of the Synapse database was set up.
 
Documentation: https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql/
 
Currently the PostgreSQL database for matrix-synapse is backed up.
 
===Before Replication (specific to poddery.com)===
 
Set up tinc VPN on the backup server
 
# apt install tinc
 
Configure tinc by creating <code>tinc.conf</code> and the host file <code>podderybackup</code> under the network label <code>fsci</code>.
Add <code>tinc-up</code> and <code>tinc-down</code> scripts.
Copy the poddery host config to the backup server and the podderybackup host config to the poddery.com server. A sketch of these files is given below.
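For illustration, the backup-server side of that configuration might look like the sketch below; the peer host name <code>poddery</code>, the interface name and the VPN addresses are assumptions (172.16.0.3 matches the standby address used in the replication setup further down).
  # Sketch: /etc/tinc/fsci/tinc.conf on the backup server
  Name = podderybackup
  Interface = tun0
  ConnectTo = poddery
 
  # Sketch: /etc/tinc/fsci/tinc-up (shell script run by tincd when the VPN comes up)
  ip link set "$INTERFACE" up
  ip addr add 172.16.0.3/24 dev "$INTERFACE"
 
  # Sketch: /etc/tinc/fsci/tinc-down (run when the VPN goes down)
  ip addr del 172.16.0.3/24 dev "$INTERFACE"
  ip link set "$INTERFACE" down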
 
Reload the tinc VPN service on both the poddery.com and backup servers
 
# systemctl reload tinc@fsci.service
 
Enable the tinc@fsci systemd service so it starts automatically
 
# systemctl enable tinc@fsci.service
 
The synapse database was also pruned to reduce its size before replication, following this guide: https://levans.fr/shrink-synapse-database.html
If you want to follow this guide, make sure the matrix-synapse server is updated to at least version 1.13, since that release introduces the Rooms API mentioned in the guide.
Changes made to the steps in the guide:
 
  # jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt
 
The room list obtained this way can be looped over to pass the room IDs as variables to the purge API.
 
# set +H    # if you are using bash, this avoids '!' in a room name triggering history substitution
# for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
    -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
    'https://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done;
 
We also did not remove old history of large rooms.
 
===Step 1: Postgresql (for synapse) Primary configuration===
 
Create postgresql user for replication.
 
$ psql -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'yourpassword';"
The password is in the access repo if you need it later.
 
Allow standby to connect to primary using the user just created.
 
$ cd /etc/postgresql/11/main
 
$ nano pg_hba.conf
 
Add the line below to allow the replication user to access the server
 
host    replication    replication    172.16.0.3/32  md5
 
Next, open the postgres configuration file
 
nano postgresql.conf
 
Set the following configuration options in the postgresql.conf file
 
listen_addresses = 'localhost,172.16.0.2'
port=5432
wal_level = replica
max_wal_senders = 1
wal_keep_segments = 64
archive_mode = on
archive_command = 'cd .'
 
You need to restart since postgresql.conf was edited and parameters changed,
 
# systemctl restart postgresql
 
===Step 2: Postgresql (for synapse) Standby configuration ===
 
Install postgresql
 
# apt install postgresql
 
Check postgresql server is running
 
# su postgres -c psql
 
Make sure en_US.UTF-8 locale is available
 
# dpkg-reconfigure locales
 
Stop postgresql before changing any configuration
 
# systemctl stop postgresql@11-main
 
Switch to postgres user
 
# su - postgres
$ cd /etc/postgresql/11/
 
Copy data from master and create recovery.conf
 
$ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U rep --wal-method=fetch  -R
 
Open the postgres configuration file
 
$ nano postgresql.conf
 
Set the following configuration options in the postgresql.conf file
 
max_connections = 500   # This option and the one below must be the same as in postgresql.conf on the primary or the service won't start.
max_worker_processes = 16
hot_standby = on        # The pg_basebackup command above should set this. If it doesn't, manually turn it on.
 
Start the stopped postgresql service
 
# systemctl start postgresql@11-main
 
===Postgresql (for synapse) Replication Status===
 
On Primary,
 
$ ps -ef | grep sender
$ psql -c "select * from pg_stat_activity where usename='rep';"
 
On Standby,
 
  $ ps -ef | grep receiver
 
===Backup steps on 7th Jan 2025===
====Matrix-synapse====
For synapse, the following files were backed up:
 
* Dump of the postgresql database using <code>pg_dump</code>
* <code>/etc/matrix-synapse</code> - contains config files
* <code>/var/lib/static/synapse/media</code> - contains uploaded media files
 
In order to access the poddery server from the backup server (with your public ssh keys added to both servers in <code>~/.ssh/authorized_keys</code>), run the following command on your local system:<syntaxhighlight lang="bash">
eval "$(ssh-agent -s)"
</syntaxhighlight>followed by<syntaxhighlight lang="bash">
ssh user@server -o "ForwardAgent yes" -o "AddKeysToAgent yes"
</syntaxhighlight>on the local system.
 
The dump was taken using the command from the [https://element-hq.github.io/synapse/latest/usage/administration/backups.html#quick-and-easy-database-backup-and-restore official docs]:<syntaxhighlight>
ssh user@poddery-server 'sudo -u postgres pg_dump -Fc --exclude-table-data e2e_one_time_keys_json synapse' > synapse-2025-01-07.dump
</syntaxhighlight>
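To restore such a custom-format dump, something along these lines should work (a sketch, assuming an empty <code>synapse</code> database with the correct owner and locale has already been created on the target):<syntaxhighlight lang="bash">
# Sketch: restore the pg_dump custom-format archive into the synapse database
sudo -u postgres pg_restore -d synapse synapse-2025-01-07.dump
</syntaxhighlight>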
 
====Prosody====
For backing up prosody, the following were copied:
 
* Dump of the database using <code>mysqldump</code>
* <code>/var/lib/prosody</code> for media files
* <code>/etc/prosody</code> for config files
 
For taking the dump, the following was run from the backup-server
<syntaxhighlight lang="bash">
ssh user@poddery-server 'mysqldump -u prosody --password="$(cat <path/to/password-file>)" prosody | gzip' > backups/prosody-backup.sql.gz
</syntaxhighlight>
 
Backup of <code>/var/lib/prosody</code> was taken using the following steps:
 
* Create a tar file of prosody directory
<syntaxhighlight>
cd /var/lib && sudo tar -czvf ~user/var.lib.prosody-2025-01-07.tar.gz prosody
</syntaxhighlight>
 
* Make the user the owner of the compressed file:
 
<syntaxhighlight>
cd && sudo chown user: var.lib.prosody-2025-01-07.tar.gz
</syntaxhighlight>
 
* Use <code>scp</code> to transfer the tar file to the backup server
<syntaxhighlight>
scp -P <port-for-ssh-on-backup-server> ./var.lib.prosody-2025-01-07.tar.gz backup-user@backup-server:directory-to-backup
</syntaxhighlight>
 
= Troubleshooting =
== Allow XMPP login even if diaspora account is closed ==
Diaspora has a [https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts default setting] to close accounts that have been inactive for 2 years. At the time of writing, there seems [https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462 no way] to reopen a closed account. This also means that if your account is closed, you will no longer be able to login to the associated XMPP service as well. Here we discuss a workaround to get access back to the XMPP account.
 
The prosody module [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua mod_auth_diaspora] is used for diaspora-based XMPP auth. It checks if <code>locked_at</code> value in the <code>users</code> table of diaspora db is <code>null</code> [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L89 here] and [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L98 here]. If your account is locked, it will have the <code>datetime</code> value that represents the date and time at which your account is locked. Setting it back to <code>null</code> will let you use your XMPP account again.
 
  -- Replace <username> with actual username of the locked account
  UPDATE users SET locked_at=NULL WHERE username='<username>';
 
NOTE: Matrix account won't be affected even if the associated diaspora account is closed because it uses a [https://pypi.org/project/synapse-diaspora-auth/ custom auth module] which works differently.


= History =