= Coordination =
* [https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group Loomio group] - Mainly used for decision making
* Matrix room - [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com], also bridged to the XMPP room [xmpp:poddery.com-support@chat.yax.im?join poddery.com-support@chat.yax.im]
* [https://git.fosscommunity.in/community/poddery.com/issues Issue tracker] - Used for tracking progress of tasks


= Configuration and Maintenance =
Boot into the rescue system as described at https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system
== Disk Partitioning ==
* RAID 1 setup on 2x2TB HDDs (<code>sda</code> and <code>sdb</code>).
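The state of the RAID 1 array can be checked from the rescue system or from the running host. A minimal sketch; <code>/dev/md0</code> is an assumption, check <code>/proc/mdstat</code> for the actual md device names:

  # cat /proc/mdstat
  # mdadm --detail /dev/md0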


=== Workers ===
* For scalability, Poddery is running [https://github.com/matrix-org/synapse/blob/master/docs/workers.md workers]. Currently all workers specified on that page, except <code>synapse.app.appservice</code>, are running on poddery.com.
* A new service, [https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc <code>matrix-synapse@.service</code>], is installed for the workers (save the <code>synapse_worker</code> script somewhere like <code>/usr/local/bin/</code>).
* The worker config can be found at <code>/etc/matrix-synapse/workers</code>
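Assuming the template unit resolves its instance name to a worker config under <code>/etc/matrix-synapse/workers</code> (as is typical for such setups), an individual worker can be managed like any other systemd instance. The worker name <code>federation_reader</code> below is only an example and may differ on poddery.com:

  # systemctl enable matrix-synapse@federation_reader
  # systemctl start matrix-synapse@federation_reader
  # journalctl -u matrix-synapse@federation_reader -f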


=== Synapse Upgrade ===
* First check the [https://matrix-org.github.io/synapse/latest/upgrade Synapse upgrade notes] to see if anything extra needs to be done, then run <code>/root/upgrade-synapse</code>.
* The current version of Synapse can be found at https://poddery.com/_matrix/federation/v1/version
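The same endpoint can be queried from a shell. A quick check, assuming <code>curl</code> and <code>jq</code> are available:

  $ curl -s https://poddery.com/_matrix/federation/v1/version | jq .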


=== Riot-web Upgrade ===


Currently the postgres database for matrix-synapse is backed up.
===Before Replication (specific to poddery.com)===


Set up tinc VPN on the backup server.

Copy the poddery host config to the backup server and the podderybackup host config to the poddery.com server.


Reload the tinc VPN service on both the poddery.com and backup servers


  # systemctl reload tinc@fsci.service
  # systemctl enable tinc@fsci.service
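To confirm the tunnel is up after reloading, check the service and try reaching the other end over the VPN. A quick sketch; the interface name <code>fsci</code> and the peer address are assumptions, adjust them to the actual tinc netname and host entries:

  # systemctl status tinc@fsci.service
  # ip addr show dev fsci
  # ping -c 3 <peer VPN IP>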


The synapse database was also pruned to reduce its size before replication, following this guide: https://levans.fr/shrink-synapse-database.html
If you want to follow this guide, make sure the matrix-synapse server is updated to at least version 1.13, since it introduces the Rooms API mentioned in the guide.
Changes made to the steps in the guide:
 
  # jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt
 
The room list obtained this way can be looped over to pass the room IDs as variables to the purge API.
 
  # set +H    # only needed in bash, to keep '!' in room IDs from triggering history expansion
  # for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
      -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
      'https://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done
 
We also did not remove old history of large rooms.
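For reference, the <code>roomlist.json</code> used above can be fetched from the Synapse admin rooms API (available since Synapse 1.13). A sketch, assuming an admin access token; the <code>limit</code> value is arbitrary and large servers may need to page through <code>next_batch</code>:

  # curl --header "Authorization: Bearer <your access token>" \
      'https://127.0.0.1:8008/_synapse/admin/v1/rooms?limit=1000' > roomlist.json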
 
===Step 1: Postgresql (for synapse) Primary configuration===


Create postgresql user for replication.
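A minimal sketch of creating such a user; the role name <code>rep</code> matches the one used by <code>pg_basebackup</code> in the standby steps below, and the exact options used on poddery.com may differ:

  # su - postgres
  $ createuser --replication -P rep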
  # systemctl restart postgresql


===Step 2: Postgresql (for synapse) Standby configuration===


Install postgresql
  # su - postgres
  $ cd /etc/postgresql/11/
Copy data from the primary and create recovery.conf

  $ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U rep --wal-method=fetch -R


Open the postgres configuration file
Set the following configuration options in the postgresql.conf file


  max_connections = 500        # this and max_worker_processes must be the same as in postgresql.conf on the primary, or the service won't start
  max_worker_processes = 16
 
  hot_standby = on             # the pg_basebackup command above should set this; if it did not, turn it on manually


Start the stopped postgresql service
  # systemctl start postgresql@11-main
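Once started, the standby should report that it is in recovery. A quick sanity check, run as the <code>postgres</code> user:

  $ psql -c 'SELECT pg_is_in_recovery();'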


===Postgresql (for synapse) Replication Status===


On Primary,


  $ ps -ef | grep receiver
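Replication can also be verified from SQL on the primary. A minimal check, run as the <code>postgres</code> user:

  $ psql -c 'SELECT client_addr, state, replay_lsn FROM pg_stat_replication;'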
= Troubleshooting =
== Allow XMPP login even if diaspora account is closed ==
Diaspora has a [https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts default setting] to close accounts that have been inactive for 2 years. At the time of writing, there seems to be [https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462 no way] to reopen a closed account. This also means that if your account is closed, you will no longer be able to log in to the associated XMPP service either. Here we discuss a workaround to get access to the XMPP account back.
The prosody module [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua mod_auth_diaspora] is used for diaspora-based XMPP auth. It checks whether the <code>locked_at</code> value in the <code>users</code> table of the diaspora database is <code>null</code> ([https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L89 here] and [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L98 here]). If your account is locked, the column holds a <code>datetime</code> value recording when the account was locked. Setting it back to <code>null</code> will let you use your XMPP account again.
  -- Replace <username> with actual username of the locked account
  UPDATE users SET locked_at=NULL WHERE username='<username>';
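The statement can be run with <code>psql</code> against the diaspora database. A sketch, assuming a PostgreSQL database named <code>diaspora_production</code> (check diaspora's <code>database.yml</code> for the actual name; on MySQL/MariaDB use the <code>mysql</code> client instead):

  $ psql diaspora_production -c "SELECT username, locked_at FROM users WHERE username='<username>';"
  $ psql diaspora_production -c "UPDATE users SET locked_at=NULL WHERE username='<username>';"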
NOTE: The Matrix account won't be affected even if the associated diaspora account is closed, because it uses a [https://pypi.org/project/synapse-diaspora-auth/ custom auth module] which works differently.


= History =