Poddery - Diaspora, Matrix and XMPP
* [https://matrix.org/docs/projects/server/synapse.html Synapse] is used as the Matrix server.
* Synapse is currently installed directly from the [https://github.com/matrix-org/synapse official GitHub repo].
* Riot-web Matrix client is hosted at https://chat.poddery.com


=== Homepage ===


= Coordination =
* [https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group Loomio group] - Mainly used for decision making
* Matrix room - [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com], also bridged to XMPP at [xmpp:poddery.com-support@chat.yax.im?join poddery.com-support@chat.yax.im]
* [https://git.fosscommunity.in/community/poddery.com/issues Issue tracker] - Used for tracking progress of tasks




= Configuration and Maintenance =
Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system

== Disk Partitioning ==
* RAID 1 setup on 2x2TB HDDs (<code>sda</code> and <code>sdb</code>).
  # Assign remaining free space for static files
  lvcreate -n static /dev/data -l 100%FREE
  
  # Set up filesystems on the logical volumes
  mkfs.ext4 /dev/data/log
  mkfs.ext4 /dev/data/db
  mkfs.ext4 /dev/data/static
  
  # Create directories for mounting the encrypted partitions
  ufw default allow outgoing
  ufw allow ssh
  ufw allow http/tcp
  ufw allow https/tcp
  ufw allow Turnserver
  ufw allow XMPP
  ufw allow 8448
  ufw enable
  
  # Verify everything is set up properly
  ufw status
  
  # Enable ufw logging with default mode low
  ufw logging on
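Whether the firewall actually ended up with the intended rules can be checked non-interactively. A minimal sketch, assuming the usual <code>ufw status</code> output format (the sample output below is hypothetical; adjust the rule list to match the rules above):

```shell
# Hypothetical `ufw status` output; on the server use:  status="$(ufw status)"
status='Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
8448                       ALLOW       Anywhere'

# Report any expected rule that is missing from the output
for rule in '22/tcp' '80/tcp' '443/tcp' '8448'; do
    printf '%s\n' "$status" | grep -q "^$rule " || echo "missing ufw rule: $rule"
done
echo "check done"
```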


* <code>fail2ban</code> configured against brute force attacks:
  # Restart SSH and enable fail2ban
  systemctl restart ssh
  systemctl enable fail2ban
  systemctl start fail2ban
  
  # To unban an IP, first check <code>/var/log/fail2ban.log</code> to get the banned IP and then run the following
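Listing the currently banned IPs can be scripted instead of reading the log by eye. A minimal sketch, run here against a hypothetical log sample (on the server, read <code>/var/log/fail2ban.log</code> instead):

```shell
# Hypothetical fail2ban log excerpt; normally: log="$(cat /var/log/fail2ban.log)"
log='2019-04-02 10:11:12 fail2ban.actions [123]: NOTICE [sshd] Ban 203.0.113.7
2019-04-02 11:12:13 fail2ban.actions [123]: NOTICE [sshd] Ban 198.51.100.9'

# Extract unique banned IPs (last field of each "Ban" line);
# each can then be unbanned with: fail2ban-client set <jail> unbanip <ip>
banned="$(printf '%s\n' "$log" | awk '/ Ban /{print $NF}' | sort -u)"
printf '%s\n' "$banned"
```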
  apt install diaspora-installer


* Move MySQL data to encrypted partition:
  # Make sure <code>/dev/data/db</code> is mounted to <code>/var/lib/db</code>
  systemctl stop mysql
  systemctl disable mysql
  mv /var/lib/mysql /var/lib/db/
  ln -s /var/lib/db/mysql /var/lib/
  systemctl start mysql
* Move static files to encrypted partition:
  # Make sure <code>/dev/data/static</code> is mounted to <code>/var/lib/static</code>
  mkdir /var/lib/static/diaspora
  mv /usr/share/diaspora/public/uploads /var/lib/static/diaspora
  ln -s /var/lib/static/diaspora/uploads /usr/share/diaspora/public/
  chown -R diaspora: /var/lib/static/diaspora
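Each of these "move to the encrypted volume and leave a symlink behind" steps follows the same pattern, so it can be captured in a small helper. A sketch under stated assumptions: the <code>move_to_encrypted</code> name is made up, and the demo below runs on a throwaway temporary directory rather than real service data:

```shell
# Move a directory onto the encrypted volume and symlink it back into place
move_to_encrypted() {
    src="$1"  # e.g. /usr/share/diaspora/public/uploads
    dst="$2"  # e.g. /var/lib/static/diaspora
    mkdir -p "$dst"
    mv "$src" "$dst"/
    ln -s "$dst/$(basename "$src")" "$(dirname "$src")/"
}

# Demo on throwaway paths
tmp="$(mktemp -d)"
mkdir -p "$tmp/app/uploads"
echo hello > "$tmp/app/uploads/file.txt"
move_to_encrypted "$tmp/app/uploads" "$tmp/encrypted/app"
cat "$tmp/app/uploads/file.txt"  # data is still reachable through the old path
```

Remember to <code>chown</code> the destination to the service user afterwards, as in the steps above.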


* Modify configuration files at <code>/etc/diaspora</code> and <code>/etc/diaspora.conf</code> as needed (backup of the current configuration files are available in the [[#Server_Access|access repo]]).
  # Make sure <code>git</code> and <code>acl</code> packages are installed
  # Grant <code>rwx</code> permissions for the ssh user to <code>/usr/share/diaspora/public</code>
  setfacl -m "u:<ssh_user>:rwx" /usr/share/diaspora/public
  
  # Clone poddery.com repo
* Nginx is used as a reverse proxy to send requests with <code>/_matrix/*</code> in the URL to Synapse on port <code>8008</code>. This is configured in <code>/etc/nginx/sites-enabled/diaspora</code>.
* Shamil's [https://git.fosscommunity.in/necessary129/synapse-diaspora-auth Synapse Diaspora Auth] script is used to authenticate Synapse with the Diaspora database.
* Move PostgreSQL data to encrypted partition:
  # Make sure <code>/dev/data/db</code> is mounted to <code>/var/lib/db</code>
  systemctl stop postgresql
  systemctl disable postgresql
  mv /var/lib/postgres /var/lib/db/
  ln -s /var/lib/db/postgres /var/lib/
  systemctl start postgresql


* Move static files to encrypted partition:
  # Make sure <code>/dev/data/static</code> is mounted to <code>/var/lib/static</code>
  mkdir /var/lib/static/synapse
  mv /var/lib/matrix-synapse/uploads /var/lib/static/synapse/
  ln -s /var/lib/static/synapse/uploads /var/lib/matrix-synapse/
  mv /var/lib/matrix-synapse/media /var/lib/static/synapse/
  ln -s /var/lib/static/synapse/media /var/lib/matrix-synapse/
  chown -R matrix-synapse: /var/lib/static/synapse
* Install identity server <code>mxisd</code> (<code>deb</code> package available [https://github.com/kamax-matrix/mxisd/blob/master/docs/install/debian.md here])


=== Workers ===
* For scalability, Poddery is running [https://github.com/matrix-org/synapse/blob/master/docs/workers.md workers]. Currently all workers specified on that page, except <code>synapse.app.appservice</code>, are running on poddery.com.
* A new service [https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc <code>matrix-synapse@.service</code>] is installed for the workers (save the <code>synape_worker</code> file somewhere like <code>/usr/local/bin/</code>).
* The worker config can be found at <code>/etc/matrix-synapse/workers</code>.
* Synapse needs to be put behind a reverse proxy (see <code>/etc/nginx/sites-enabled/matrix</code>). A lot of <code>/_matrix/</code> URLs need to be overridden as well (see <code>/etc/nginx/sites-enabled/diaspora</code>).
* These lines must be added to <code>homeserver.yaml</code> as we are running the <code>media_repository</code>, <code>federation_sender</code>, <code>pusher</code> and <code>user_dir</code> workers respectively:
   enable_media_repo: False
   send_federation: False
   start_pushers: False
   update_user_directory: false


* These services must be enabled:
  matrix-synapse@synchrotron.service matrix-synapse@federation_reader.service matrix-synapse@event_creator.service matrix-synapse@federation_sender.service matrix-synapse@pusher.service matrix-synapse@user_dir.service matrix-synapse@media_repository.service matrix-synapse@frontend_proxy.service matrix-synapse@client_reader.service matrix-synapse@synchrotron_2.service
* To load balance between the two synchrotrons, we are running [https://github.com/Sorunome/matrix-synchrotron-balancer matrix-synchrotron-balancer]. It has a systemd unit file at <code>/etc/systemd/system/matrix-synchrotron-balancer</code>. The files are in <code>/opt/matrix-synchrotron-balancer</code>.


=== Synapse Updates ===
* First check the [https://matrix-org.github.io/synapse/latest/upgrade Synapse upgrade notes] to see if anything extra needs to be done. Then just run <code>/root/upgrade-synapse</code>.
* The current version of Synapse can be found at https://poddery.com/_matrix/federation/v1/version
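The version endpoint returns JSON shaped like <code>{"server":{"name":"Synapse","version":"..."}}</code>; extracting just the version can be scripted. A minimal sketch (the response below is a hypothetical sample, and the <code>sed</code> extraction is crude; <code>jq</code> would be cleaner if installed):

```shell
# Normally: response="$(curl -s https://poddery.com/_matrix/federation/v1/version)"
response='{"server":{"name":"Synapse","version":"1.0.0"}}'

# Pull out the "version" field
version="$(printf '%s' "$response" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')"
echo "$version"
```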


=== Riot-web Updates ===
* Riot-web is hosted at https://chat.poddery.com/#/welcome
* Just run the following (make sure to replace <code><version></code> with a proper version number like <code>v1.0.0</code>):
  /var/www/get-riot <version>
  systemctl reload nginx


== Chat/XMPP ==
  # Follow steps 1 to 6 from https://wiki.debian.org/Diaspora/XMPP and then run the following:
  mysql -u root -p # Enter password from the access repo
  
  CREATE USER 'prosody'@'localhost' IDENTIFIED BY '<passwd_in_repo>';
  GRANT ALL PRIVILEGES ON diaspora_production.* TO 'prosody'@'localhost';
  FLUSH PRIVILEGES;
  
  systemctl restart prosody
* Install plugins:
  # Make sure <code>mercurial</code> is installed
  cd /etc && hg clone https://hg.prosody.im/prosody-modules/ prosody-modules


=== Set Nginx Conf for BOSH URLS ===


== TLS ==
* Install <code>letsencrypt</code>.
* Ensure proper permissions are set for <code>/etc/letsencrypt</code> and its contents:
  chown -R root:ssl-cert /etc/letsencrypt
  chmod -R g+r /etc/letsencrypt
  chmod g+x /etc/letsencrypt/{archive,live}
 
* Generate certificates. For more details see https://certbot.eff.org.
* Make sure the certificates used by <code>diaspora</code> are symbolic links to the letsencrypt default location:
  ls -l /etc/diaspora/ssl
  ''total 0''
  ''lrwxrwxrwx 1 root root 47 Apr  2 22:47 poddery.com-bundle.pem -> /etc/letsencrypt/live/poddery.com/fullchain.pem''
  ''lrwxrwxrwx 1 root root 45 Apr  2 22:48 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem''
  # If you don't get the above output, then run the following:
  cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/diaspora/ssl/poddery.com-bundle.pem
  cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/diaspora/ssl/poddery.com.key
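Instead of eyeballing <code>ls -l</code>, the symlink check can be scripted with <code>readlink</code>. A minimal sketch; the <code>links_to</code> helper name is made up, and the demo uses a throwaway symlink rather than the real certificates:

```shell
# Succeed only if $1 is a symlink whose target is $2
links_to() {
    [ -L "$1" ] && [ "$(readlink "$1")" = "$2" ]
}

# Demo with a temporary link; on the server you would check e.g.
# links_to /etc/diaspora/ssl/poddery.com-bundle.pem /etc/letsencrypt/live/poddery.com/fullchain.pem
tmp="$(mktemp -d)"
ln -s "$tmp/target" "$tmp/link"
links_to "$tmp/link" "$tmp/target" && echo ok
```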
  ''lrwxrwxrwx 1 root root 40 Mar 28 01:16 poddery.com.crt -> /etc/letsencrypt/live/poddery.com/fullchain.pem''
  ''lrwxrwxrwx 1 root root 33 Mar 28 01:16 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem''
  # If you don't get the above output, then run the following:
  cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/prosody/certs/poddery.com.crt
  cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/prosody/certs/poddery.com.key


* Note: the <code>letsencrypt</code> executable used below is actually a symlink to <code>/usr/bin/certbot</code>.
* Cron jobs:
  crontab -e


* Manually updating TLS certificate:
  letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save -d save.poddery.com -w /var/www/riot -d chat.poddery.com
* To include an additional subdomain such as fund.poddery.com, use the <code>--expand</code> parameter as shown below:
  letsencrypt certonly --webroot --agree-tos --expand -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save/ -d save.poddery.com -w /var/www/riot/ -d chat.poddery.com
 
==Backup==

Backup server is provided by Manu (KVM virtual machine with 180 GB storage and 1 GB RAM).

Debian Stretch was upgraded to Debian Buster before setting up replication of the Synapse database.

Documentation: https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql/

Currently the PostgreSQL database for matrix-synapse is backed up.
 
===Before Replication (specific to poddery.com)===

Set up tinc VPN on the backup server:

 # apt install tinc

Configure tinc by creating <code>tinc.conf</code> and the host file <code>podderybackup</code> under the label <code>fsci</code>.
Add <code>tinc-up</code> and <code>tinc-down</code> scripts.
Copy the <code>poddery</code> host config to the backup server and the <code>podderybackup</code> host config to the poddery.com server.

Reload the tinc VPN service on both the poddery.com and backup servers:

 # systemctl reload tinc@fsci.service

Enable the tinc@fsci systemd service for autostart:

 # systemctl enable tinc@fsci.service
 
The Synapse database was also pruned to reduce its size before replication by following this guide: https://levans.fr/shrink-synapse-database.html
If you want to follow this guide, make sure the Matrix Synapse server is updated to at least version 1.13, since it introduces the Rooms API mentioned in the guide.
Changes made to the steps in the guide:
 
  # jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt
 
The room list obtained this way can be looped over to pass the room names as variables to the purge API:
 
 # set +H  # if you are using bash, to avoid '!' in the room name triggering history substitution
 # for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
     -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
     'https://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done;
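Before firing the purge loop against the admin API, it can be dry-run to inspect what would be sent. A sketch that only prints the <code>curl</code> commands; the room IDs below are placeholders and the access token is left as a placeholder too:

```shell
# Placeholder room list; normally this is the to_purge.txt produced above
list="$(mktemp)"
printf '%s\n' '!abc123:poddery.com' '!def456:poddery.com' > "$list"

# Build the commands without executing anything
cmds="$(while IFS= read -r room_id; do
    printf 'curl --header "Authorization: Bearer <your access token>" -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"%s\" }" https://127.0.0.1:8008/_synapse/admin/v1/purge_room\n' "$room_id"
done < "$list")"
printf '%s\n' "$cmds"
```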
 
We also did not remove old history of large rooms.
 
===Step 1: Postgresql (for synapse) Primary configuration===
 
Create a postgresql user for replication:
 
 $ psql -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'yourpassword';"
The password is in the access repo if you need it later.
 
Allow the standby to connect to the primary using the user just created:
 
 $ cd /etc/postgresql/11/main
 
 $ nano pg_hba.conf
 
Add the line below to allow the replication user to access the server:
 
 host    replication    replication    172.16.0.3/32  md5
 
Next, open the postgres configuration file:
 
 $ nano postgresql.conf
 
Set the following configuration options in the postgresql.conf file:
 
 listen_addresses = 'localhost,172.16.0.2'
 port = 5432
 wal_level = replica
 max_wal_senders = 1
 wal_keep_segments = 64
 archive_mode = on
 archive_command = 'cd .'
You need to restart postgresql since postgresql.conf was edited and parameters changed:
 
 # systemctl restart postgresql
 
===Step 2: Postgresql (for synapse) Standby configuration ===
 
Install postgresql:
 
 # apt install postgresql
 
Check that the postgresql server is running:
 
 # su postgres -c psql
 
Make sure the en_US.UTF-8 locale is available:
 
 # dpkg-reconfigure locales
 
Stop postgresql before changing any configuration:
 
 # systemctl stop postgresql@11-main
 
Switch to the postgres user:
 
 # su - postgres
 $ cd /etc/postgresql/11/
 
Copy data from the primary and create <code>recovery.conf</code>:
 
 $ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U rep --wal-method=fetch -R
 
Open the postgres configuration file:
 
 $ nano postgresql.conf
 
Set the following configuration options in the postgresql.conf file:
 
 max_connections = 500       # must match postgresql.conf on the primary or the service won't start
 max_worker_processes = 16   # must match the primary as well
 hot_standby = on            # the pg_basebackup command above should set this; if not, turn it on manually
 
Start the stopped postgresql service
 
# systemctl start postgresql@11-main
 
===Postgresql (for synapse) Replication Status===
 
On the primary:
 
 $ ps -ef | grep sender
 $ psql -c "select * from pg_stat_activity where usename='rep';"
 
On the standby:
 
 $ ps -ef | grep receiver
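The process check can be folded into a small scripted health check. A minimal sketch against a hypothetical <code>ps -ef</code> sample line; on the real primary use <code>ps_out="$(ps -ef)"</code>, and grep for <code>walreceiver</code> on the standby instead:

```shell
# Hypothetical walsender process line as it might appear on the primary
ps_out='postgres  1234  1000  0 10:00 ?  00:00:01 postgres: 11/main: walsender rep 172.16.0.3(51822) streaming 0/5000060'

if printf '%s\n' "$ps_out" | grep -q walsender; then
    echo "replication active"
else
    echo "replication NOT running"
fi
```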
 
= Troubleshooting =
== Allow XMPP login even if diaspora account is closed ==
Diaspora has a [https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts default setting] to close accounts that have been inactive for 2 years. At the time of writing, there seems to be [https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462 no way] to reopen a closed account. This also means that if your account is closed, you will no longer be able to log in to the associated XMPP service either. Here we discuss a workaround to get access back to the XMPP account.
 
The prosody module [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua mod_auth_diaspora] is used for diaspora-based XMPP auth. It checks whether the <code>locked_at</code> value in the <code>users</code> table of the diaspora db is <code>null</code> ([https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L89 here] and [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L98 here]). If your account is locked, it will contain the <code>datetime</code> value at which your account was locked. Setting it back to <code>null</code> will let you use your XMPP account again.
 
-- Replace <username> with actual username of the locked account
UPDATE users SET locked_at=NULL WHERE username='<username>';
 
NOTE: The Matrix account won't be affected even if the associated diaspora account is closed, because it uses a [https://pypi.org/project/synapse-diaspora-auth/ custom auth module] which works differently.


= History =