Poddery - Diaspora, Matrix and XMPP

We run the decentralized and federated [https://diasporafoundation.org/ Diaspora] social network along with [https://xmpp.org/ XMPP] and [https://matrix.org Matrix] instant messaging services at [https://poddery.com poddery.com]. The same Poddery username and password can be used to access the XMPP and Matrix services as well as Diaspora. [https://chat.poddery.com chat.poddery.com] provides the Element web client, which can be used to connect to any Matrix server without installing the Element app.


= Environment =


=== Chat/XMPP ===
* [https://prosody.im/ Prosody], which is modern and lightweight, is used as the XMPP server.
* The XMPP service has been moved to a virtual host on the Durare.org server. See https://gitlab.com/piratemovin/diasp.in/-/wikis/XMPP-durare.org-setup
* The currently installed version is 0.11.2, which is available in [https://packages.debian.org/buster/prosody Debian Buster].
* All XEPs that the [https://conversations.im/ Conversations app] supports are enabled.


=== Chat/Matrix ===
== Backend Services ==
=== Web Server / Reverse Proxy ===
* Nginx web server which also acts as the front-end (reverse proxy) for Diaspora and Matrix. By default all HTTPS requests to port 443 are passed to Diaspora. Requests whose paths start with:
*# <code>/_matrix</code> or <code>/_synapse</code> are passed to the Synapse main service, and
*# <code>/_matrix/media</code> are passed to the Synapse media worker (see the sketch below).
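A minimal sketch of the corresponding <code>nginx</code> location rules, assuming a main Synapse listener on 127.0.0.1:8008 (the port used by the admin API examples further down this page); the media worker port, the Diaspora upstream and the TLS details are placeholders and may differ from the real poddery.com config:

 upstream synapse_main {
   server 127.0.0.1:8008;
 }
 upstream synapse_media {
   # Hypothetical port for the media_repository worker.
   server 127.0.0.1:8090;
 }
 upstream diaspora {
   # Hypothetical Diaspora (unicorn) listener.
   server 127.0.0.1:3000;
 }
 
 server {
   listen 443 ssl;
   server_name poddery.com;
 
   # Media requests go to the media_repository worker; ^~ keeps this
   # prefix from being overridden by the regex location below.
   location ^~ /_matrix/media {
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_pass http://synapse_media;
   }
 
   # Other _matrix and _synapse requests go to the main Synapse process.
   location ~ ^/(_matrix|_synapse) {
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_pass http://synapse_main;
   }
 
   # Everything else is served by Diaspora.
   location / {
     proxy_pass http://diaspora;
   }
 }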


=== Database ===


= Coordination =
* [https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group Loomio group] - Mainly used for decision making
* Matrix room - [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com], also bridged to the XMPP room [xmpp:poddery.com-support@chat.yax.im?join poddery.com-support@chat.yax.im]
* [https://git.fosscommunity.in/community/poddery.com/issues Issue tracker] - Used for tracking progress of tasks




= Configuration and Maintenance =
Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system
== Disk Partitioning ==
* RAID 1 setup on 2x2TB HDDs (<code>sda</code> and <code>sdb</code>).


=== Workers ===
* For scalability, Poddery is running [https://github.com/matrix-org/synapse/blob/master/docs/workers.md workers]. Currently all the workers specified on that page, except <code>synapse.app.appservice</code>, are running on poddery.com.
* A new service [https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc <code>matrix-synapse@.service</code>] is installed for the workers (save the <code>synapse_worker</code> file from the gist somewhere like <code>/usr/local/bin/</code>).
* The worker config can be found at <code>/etc/matrix-synapse/workers</code> (see the usage sketch below).
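A brief usage sketch, assuming each instance name of the template unit corresponds to a worker config file under <code>/etc/matrix-synapse/workers</code> (see the gist above for the exact wrapper behaviour):

 # Start one worker and inspect it; federation_sender is one of the
 # workers listed below.
 systemctl start matrix-synapse@federation_sender.service
 systemctl status matrix-synapse@federation_sender.service
 journalctl -u matrix-synapse@federation_sender.service -e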
* These services must be enabled:


  matrix-synapse@synchrotron.service
  matrix-synapse@federation_reader.service
  matrix-synapse@event_creator.service
  matrix-synapse@federation_sender.service
  matrix-synapse@pusher.service
  matrix-synapse@user_dir.service
  matrix-synapse@media_repository.service
  matrix-synapse@frontend_proxy.service
  matrix-synapse@client_reader.service
  matrix-synapse@synchrotron_2.service
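For example, with the template unit described above, they can all be enabled in one go:

 systemctl enable matrix-synapse@synchrotron.service matrix-synapse@federation_reader.service matrix-synapse@event_creator.service matrix-synapse@federation_sender.service matrix-synapse@pusher.service matrix-synapse@user_dir.service matrix-synapse@media_repository.service matrix-synapse@frontend_proxy.service matrix-synapse@client_reader.service matrix-synapse@synchrotron_2.service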


To load balance between the 2 synchrotrons, we are running [https://github.com/Sorunome/matrix-synchrotron-balancer matrix-synchrotron-balancer]. It has a systemd unit file at <code>/etc/systemd/system/matrix-synchrotron-balancer</code>. The files are in <code>/opt/matrix-synchrotron-balancer</code>.


=== Synapse Updation ===
* First check the [https://matrix-org.github.io/synapse/latest/upgrade Synapse upgrade notes] to see if anything extra needs to be done. Then, just run <code>/root/upgrade-synapse</code>.
* The current version of synapse can be found from https://poddery.com/_matrix/federation/v1/version (for example, with <code>curl</code> as shown below).
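A quick way to check it from any machine (the <code>jq</code> pipe is optional and only pretty-prints the JSON):

 curl -s https://poddery.com/_matrix/federation/v1/version | jq .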


=== Riot-web Updation ===  


== Chat/XMPP ==
* Steps for setting up Prosody are given at https://wiki.debian.org/Diaspora/XMPP
* Also see https://gitlab.com/piratemovin/diasp.in/-/wikis/XMPP-durare.org-setup
# Follow steps 1 to 6 from https://wiki.debian.org/Diaspora/XMPP and then run the following:
  mysql -u root -p   # Enter password from the access repo
  CREATE USER 'prosody'@'localhost' IDENTIFIED BY '<passwd_in_repo>';
  GRANT ALL PRIVILEGES ON diaspora_production.* TO 'prosody'@'localhost';
  FLUSH PRIVILEGES;
  systemctl restart prosody
 
* Install plugins (make sure <code>mercurial</code> is installed):
  cd /etc && hg clone https://hg.prosody.im/prosody-modules/ prosody-modules
 
=== Set Nginx Conf for BOSH URLs ===
* Add the following to the <code>nginx</code> configuration file to enable the BOSH URL and make JSXC work (a quick smoke test follows below):
 upstream chat_cluster {
   server localhost:5280;
 }
 
 location /http-bind {
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header Host $http_host;
   proxy_set_header X-Forwarded-Proto https;
   proxy_redirect off;
   proxy_connect_timeout 5;
   proxy_buffering off;
   proxy_read_timeout 70;
   keepalive_timeout 70;
   send_timeout 70;
   client_max_body_size 4M;
   client_body_buffer_size 128K;
   proxy_pass http://chat_cluster;
 }
* [https://wiki.diasporafoundation.org/Integration/Chat#Nginx See here] for more details on <code>nginx</code> configuration. Alternatively, <code>apache</code> settings can be found [https://github.com/jsxc/jsxc/wiki/Prepare-apache here].
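A quick smoke test of the BOSH endpoint after reloading nginx and Prosody; a plain GET should return HTTP 200 with a short informational page (the exact body text depends on the Prosody version):

 curl -i https://poddery.com/http-bind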


== TLS ==
  ''34 2 * * 1 /etc/init.d/prosody reload''


* Manually updating the TLS certificate:
  letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save -d save.poddery.com -w /var/www/riot -d chat.poddery.com
* To include an additional subdomain such as fund.poddery.com, use the <code>--expand</code> parameter as shown below:
  letsencrypt certonly --webroot --agree-tos --expand -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save/ -d save.poddery.com -w /var/www/riot/ -d chat.poddery.com

===SSL certificate renewal===
On the 12th of October 2025, all the certificates were removed and recreated. [https://codema.in/d/XUfAOrPW/poddery-server-certificates-recreated This thread] documents all those steps.

When renewing certificates on the poddery server, make sure to follow these steps:

# Stop nginx:
  sudo systemctl stop nginx
# Renew certificates for all the domains, following the prompts from certbot:
  sudo certbot renew
# Start nginx after the renewal is successful:
  sudo systemctl start nginx
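To check that renewal will succeed without touching the live certificates, certbot's standard dry-run mode can be used first:

  sudo certbot renew --dry-run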


==Backup==


Currently the postgres database for matrix-synapse is backed up.
===Before Replication (specific to poddery.com)===


Set up tinc vpn on the backup server.
Copy poddery host config to backup server and podderybackup host config to poddery.com server.
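A minimal sketch of that copy, assuming the default tinc layout for the <code>fsci</code> network (host config files under <code>/etc/tinc/fsci/hosts/</code>; the host file names and the backup server address are placeholders):

 # On poddery.com: push its host file to the backup server ...
 scp /etc/tinc/fsci/hosts/poddery root@<backup-server>:/etc/tinc/fsci/hosts/
 # ... and pull the backup server's host file back.
 scp root@<backup-server>:/etc/tinc/fsci/hosts/podderybackup /etc/tinc/fsci/hosts/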


Reload the tinc vpn service at both the poddery.com and backup servers:


  # systemctl reload tinc@fsci.service
  # systemctl enable tinc@fsci.service


The synapse database was also pruned to reduce its size before replication, by following this guide - https://levans.fr/shrink-synapse-database.html
If you want to follow this guide, make sure the matrix-synapse server is updated to at least version 1.13, since that release introduces the Rooms API mentioned in the guide.
Changes made to the steps in the guide:

  # jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt

The room list obtained this way can be looped over to pass the room names as variables to the purge API.

  # set +H    # only needed in bash, to stop '!' in room names from triggering history substitution
  # for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
      -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
      'http://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done;

We also did not remove the old history of large rooms.
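To see the effect of the pruning, the database size can be checked before and after (assuming the database is named <code>synapse</code>, as in the backup commands further down this page):

  sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size('synapse'));"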
 
===Step 1: Postgresql (for synapse) Primary configuration===


Create postgresql user for replication.
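A minimal sketch of that step, assuming the replication user is called <code>rep</code> (the name used by the <code>pg_basebackup</code> command in Step 2) and that <code>pg_hba.conf</code> is also updated to let it connect from the standby:

  sudo -u postgres psql -c "CREATE ROLE rep WITH REPLICATION LOGIN PASSWORD '<passwd_in_repo>';"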
  # systemctl restart postgresql


===Step 2: Postgresql (for synapse) Standby configuration===


Install postgresql  
  # su - postgres
  $ cd /etc/postgresql/11/
Copy data from the master and create recovery.conf:

  $ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U rep --wal-method=fetch -R


Open the postgres configuration file
Set the following configuration options in the postgresql.conf file


  max_connections = 500       # this and the next option must match postgresql.conf on the primary, or the service won't start
  max_worker_processes = 16
 
  hot_standby = on            # the pg_basebackup command above should set this; if not, turn it on manually


Start the stopped postgresql service
  # systemctl start postgresql@11-main


===Postgresql (for synapse) Replication Status===


On Primary,


  $ ps -ef | grep receiver
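A sketch of the usual checks in a streaming-replication setup (the walsender process runs on the primary and the walreceiver on the standby; exact output varies):

 # On the primary:
 $ ps -ef | grep sender
 $ sudo -u postgres psql -c "SELECT client_addr, state FROM pg_stat_replication;"
 
 # On the standby:
 $ ps -ef | grep receiver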
===Backup steps on 7th Jan 2025===
====Matrix-synapse====
For synapse, the following files were backed up:
* Dump of the postgresql database using <code>pg_dump</code>
* <code>/etc/matrix-synapse</code> - contains config files
* <code>/var/lib/static/synapse/media</code> - contains uploaded media files
In order to access the poddery server from the backup server (with your public ssh keys added to both servers in <code>~/.ssh/authorized_keys</code>), run the following command on your local system:<syntaxhighlight lang="bash">
eval "$(ssh-agent -s)"
</syntaxhighlight> followed by <syntaxhighlight lang="bash">
ssh user@server -o "ForwardAgent yes" -o "AddKeysToAgent yes"
</syntaxhighlight> on the local system.
The dump was taken using the command from the [https://element-hq.github.io/synapse/latest/usage/administration/backups.html#quick-and-easy-database-backup-and-restore official docs]:<syntaxhighlight lang="bash">
ssh user@poddery-server 'sudo -u postgres pg_dump -Fc --exclude-table-data e2e_one_time_keys_json synapse' > synapse-2025-01-07.dump
</syntaxhighlight>
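A rough sketch of restoring such a dump on a fresh server, assuming an empty <code>synapse</code> database has already been created (see the official backup/restore docs linked above for the full procedure):<syntaxhighlight lang="bash">
# pg_dump -Fc archives are restored with pg_restore; adjust --no-owner and
# the target database to match the user synapse actually connects as.
sudo -u postgres pg_restore --no-owner -d synapse synapse-2025-01-07.dump
</syntaxhighlight>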
====Prosody====
For backing up prosody, the following were copied:
* Dump of the database using <code>mysqldump</code>
* <code>/var/lib/prosody</code> for media files
* <code>/etc/prosody</code> for config files
For taking the dump, the following was run from the backup server:
<syntaxhighlight lang="bash">
ssh user@poddery-server 'mysqldump -u prosody --password="$(cat <path/to/password-file>)" prosody | gzip' > backups/prosody-backup.sql.gz
</syntaxhighlight>
Backup of <code>/var/lib/prosody</code> was taken using the following steps:
* Create a tar file of the prosody directory:
<syntaxhighlight lang="bash">
cd /var/lib && sudo tar -czvf ~user/var.lib.prosody-2025-01-07.tar.gz prosody
</syntaxhighlight>
* Make user the owner of the compressed file (it is created by root, so this needs root):
<syntaxhighlight lang="bash">
cd && sudo chown user: var.lib.prosody-2025-01-07.tar.gz
</syntaxhighlight>
* Use <code>scp</code> to transfer the tar file to the backup server:
<syntaxhighlight lang="bash">
scp -P <port-for-ssh-on-backup-server> ./var.lib.prosody-2025-01-07.tar.gz backup-user@backup-server:directory-to-backup
</syntaxhighlight>
= Troubleshooting =
== Allow XMPP login even if diaspora account is closed ==
Diaspora has a [https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts default setting] to close accounts that have been inactive for 2 years. At the time of writing, there seems to be [https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462 no way] to reopen a closed account. This also means that if your account is closed, you will no longer be able to log in to the associated XMPP service as well. Here we discuss a workaround to get access back to the XMPP account.
The prosody module [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua mod_auth_diaspora] is used for diaspora-based XMPP auth. It checks if <code>locked_at</code> value in the <code>users</code> table of diaspora db is <code>null</code> [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L89 here] and [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L98 here]. If your account is locked, it will have the <code>datetime</code> value that represents the date and time at which your account is locked. Setting it back to <code>null</code> will let you use your XMPP account again.
  -- Replace <username> with actual username of the locked account
  UPDATE users SET locked_at=NULL WHERE username='<username>';
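To confirm that the account is actually locked before changing anything (same table and column as above):

  SELECT username, locked_at FROM users WHERE username='<username>';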
NOTE: Matrix account won't be affected even if the associated diaspora account is closed because it uses a [https://pypi.org/project/synapse-diaspora-auth/ custom auth module] which works differently.


= History =