= Coordination =
* [https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group Loomio group] - Mainly used for decision making
* Matrix room - [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com], also bridged to XMPP at [xmpp:poddery.com-support@chat.yax.im?join poddery.com-support@chat.yax.im]
* [https://git.fosscommunity.in/community/poddery.com/issues Issue tracker] - Used for tracking progress of tasks
= Configuration and Maintenance =
Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system
== Disk Partitioning ==
* RAID 1 setup on 2x2TB HDDs (<code>sda</code> and <code>sdb</code>).
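The array state can be verified with standard mdadm tooling; a quick check (assuming the array device is <code>md0</code>, which may differ on this host):
# cat /proc/mdstat
# mdadm --detail /dev/md0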
=== Workers ===
* For scalability, Poddery is running [https://github.com/matrix-org/synapse/blob/master/docs/workers.md workers]. Currently all workers specified on that page, except <code>synapse.app.appservice</code>, are running on poddery.com.
* A new service [https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc <code>matrix-synapse@.service</code>] is installed for the workers (save the <code>synapse_worker</code> file somewhere like <code>/usr/local/bin/</code>).
* The worker config can be found at <code>/etc/matrix-synapse/workers</code> (a sample worker config is sketched at the end of this section).
update_user_directory: false
* These services must be enabled:
matrix-synapse@synchrotron.service matrix-synapse@federation_reader.service matrix-synapse@event_creator.service matrix-synapse@federation_sender.service matrix-synapse@pusher.service matrix-synapse@user_dir.service matrix-synapse@media_repository.service matrix-synapse@frontend_proxy.service matrix-synapse@client_reader.service matrix-synapse@synchrotron_2.service
To load balance between the two synchrotrons, we run [https://github.com/Sorunome/matrix-synchrotron-balancer matrix-synchrotron-balancer]. It has a systemd unit at <code>/etc/systemd/system/matrix-synchrotron-balancer</code>; the files are in <code>/opt/matrix-synchrotron-balancer</code>.
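As a rough reference for what lives under <code>/etc/matrix-synapse/workers</code>, a minimal synchrotron worker config might look like the sketch below. The ports and log path are illustrative assumptions, not values copied from the live setup:
worker_app: synapse.app.synchrotron
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_listeners:
 - type: http
   port: 8083
   resources:
     - names: [client]
worker_log_config: /etc/matrix-synapse/synchrotron-log.yaml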
=== Synapse Update ===
* First check the [https://matrix-org.github.io/synapse/latest/upgrade Synapse upgrade notes] to see if anything extra needs to be done. Then just run <code>/root/upgrade-synapse</code>.
* The current version of Synapse can be found at https://poddery.com/_matrix/federation/v1/version
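For a quick check from a shell (plain curl against the endpoint above):
$ curl -s https://poddery.com/_matrix/federation/v1/version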
=== Riot-web Update ===
letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save -d save.poddery.com -w /var/www/riot -d chat.poddery.com
* To include an additional subdomain such as fund.poddery.com, use the <code>--expand</code> parameter as shown below
letsencrypt certonly --webroot --agree-tos --expand -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save/ -d save.poddery.com -w /var/www/riot/ -d chat.poddery.com
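Certificates issued this way are renewed with the standard renew subcommand; a dry run can be used first to confirm the webroot mappings still work:
# letsencrypt renew --dry-run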
== Backup ==
The backup server is provided by Manu (a KVM virtual machine with 180 GB storage and 1 GB RAM).
Debian Stretch was upgraded to Debian Buster before replication of the Synapse database.
Documentation: https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql/
Currently the PostgreSQL database for matrix-synapse is backed up.
=== Before Replication (specific to poddery.com) ===
Set up tinc VPN on the backup server:
# apt install tinc
Configure tinc by creating <code>tinc.conf</code> and the host file <code>podderybackup</code> under the network label <code>fsci</code>.
Add the tinc-up and tinc-down scripts.
Copy the poddery host config to the backup server and the podderybackup host config to the poddery.com server.
Reload the tinc VPN service on both the poddery.com and backup servers:
# systemctl reload tinc@fsci.service
Enable the <code>tinc@fsci</code> systemd service for autostart:
# systemctl enable tinc@fsci.service
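For orientation, the files created above might look roughly like this sketch on the backup server. The VPN addresses follow the replication steps below (primary 172.16.0.2, standby 172.16.0.3); the interface name and netmask are assumptions:
# /etc/tinc/fsci/tinc.conf
Name = podderybackup
ConnectTo = poddery
Interface = tun0
# /etc/tinc/fsci/tinc-up
ifconfig $INTERFACE 172.16.0.3 netmask 255.255.255.0
# /etc/tinc/fsci/tinc-down
ifconfig $INTERFACE down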
The synapse database was also pruned to reduce its size before replication, following this guide: https://levans.fr/shrink-synapse-database.html
If you want to follow this guide, make sure the Matrix Synapse server is updated to at least version 1.13, since that release introduces the Rooms API mentioned in the guide.
Changes made to the steps in the guide:
# jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt
The room list obtained this way can be looped over to pass the room IDs to the purge API.
# set +H   # only needed in bash: avoids '!' in room names triggering history substitution
# for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
-X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
'http://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done;
We also did not remove old history of large rooms.
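For context, the <code>roomlist.json</code> used above comes from Synapse's List Rooms admin API; a hedged example of fetching it (the access token is a placeholder, and the limit value may need paging on a large server):
# curl --header "Authorization: Bearer <your access token>" 'http://127.0.0.1:8008/_synapse/admin/v1/rooms?limit=500' > roomlist.json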
=== Step 1: PostgreSQL (for Synapse) Primary Configuration ===
Create a PostgreSQL user for replication:
$ psql -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'yourpassword';"
The password is in the access repo if you need it later.
Allow the standby to connect to the primary using the user just created:
$ cd /etc/postgresql/11/main
$ nano pg_hba.conf
Add the line below to allow the replication user to access the server:
host replication replication 172.16.0.3/32 md5
Next, open the postgres configuration file:
$ nano postgresql.conf
Set the following configuration options in the postgresql.conf file:
listen_addresses = 'localhost,172.16.0.2'
port = 5432
wal_level = replica
max_wal_senders = 1
wal_keep_segments = 64
archive_mode = on
archive_command = 'cd .'
You need to restart PostgreSQL since postgresql.conf was edited and parameters changed:
# systemctl restart postgresql
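To confirm the primary is actually listening on the VPN address after the restart (standard ss tooling):
# ss -tlnp | grep 5432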
=== Step 2: PostgreSQL (for Synapse) Standby Configuration ===
Install PostgreSQL:
# apt install postgresql
Check that the PostgreSQL server is running:
# su postgres -c psql
Make sure the en_US.UTF-8 locale is available:
# dpkg-reconfigure locales
Stop PostgreSQL before changing any configuration:
# systemctl stop postgresql@11-main
Switch to the postgres user:
# su - postgres
$ cd /etc/postgresql/11/
Copy data from the primary and create recovery.conf:
$ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U replication --wal-method=fetch -R
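The <code>-R</code> flag writes a <code>recovery.conf</code> into the data directory; on PostgreSQL 11 it should look roughly like this, with the connection details matching whatever was passed to pg_basebackup (password elided here):
standby_mode = 'on'
primary_conninfo = 'host=git.fosscommunity.in port=5432 user=replication password=...'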
Open the postgres configuration file:
$ nano postgresql.conf
Set the following configuration options in the postgresql.conf file:
max_connections = 500      # This and the setting below must match the values in postgresql.conf on the primary or the service won't start.
max_worker_processes = 16
hot_standby = on           # The pg_basebackup command above should set this; if not, turn it on manually.
Start the stopped PostgreSQL service:
# systemctl start postgresql@11-main
=== PostgreSQL (for Synapse) Replication Status ===
On the primary:
$ ps -ef | grep sender
$ psql -c "select * from pg_stat_activity where usename='rep';" | |||
On the standby:
$ ps -ef | grep receiver
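Streaming status can also be checked on the primary with the built-in view (available since PostgreSQL 10):
$ psql -c "select client_addr, state, sent_lsn, replay_lsn from pg_stat_replication;"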
= Troubleshooting =
== Allow XMPP login even if diaspora account is closed ==
Diaspora has a [https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts default setting] to close accounts that have been inactive for 2 years. At the time of writing, there seems to be [https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462 no way] to reopen a closed account. This also means that if your account is closed, you will no longer be able to log in to the associated XMPP service. Here we discuss a workaround to get access back to the XMPP account.
The prosody module [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua mod_auth_diaspora] is used for diaspora-based XMPP auth. It checks whether the <code>locked_at</code> value in the <code>users</code> table of the diaspora db is <code>null</code> ([https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L89 here] and [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L98 here]). If your account is locked, it will hold the <code>datetime</code> at which the account was locked. Setting it back to <code>null</code> will let you use your XMPP account again.
-- Replace <username> with actual username of the locked account
UPDATE users SET locked_at=NULL WHERE username='<username>';
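To check whether an account is actually locked before updating it, the same table can be queried first (run against the diaspora database):
-- Replace <username> as above
SELECT username, locked_at FROM users WHERE username='<username>';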
NOTE: The Matrix account won't be affected even if the associated diaspora account is closed, because it uses a [https://pypi.org/project/synapse-diaspora-auth/ custom auth module] which works differently.
= History =