Do this AFTER THE EVENT IS OVER
See "Migrating Ubersystem onsite from offsite" for the reverse procedure, i.e. moving it back on-site
TODO: the DB restore procedure here is pretty massive and maybe overkill; either I need more coffee, or there genuinely was a problem with the normal, simpler restore procedure once DB replication was in the mix.
TODO: probably automate away a lot of this through puppet/magbot/etc
PREPARING
- Create and merge a pull request:
- In both rams1.uber.magfest.org.yaml (onsite server) and super2018.uber.magfest.org.yaml (cloud server):
- post_con = True # for the love of all that is holy in this world, do not forget to set this.
- at_the_con = False
- send_sms = False
- redirect_all_traffic_onsite = False
- comment out all the database replication settings for cloud and rams1:
- uber::db_replication_slave
- replication_mode
- replicate_from
- allow_to_hosts
- In rams1.uber.magfest.org (onsite server):
- send_emails = False
- In super2018.uber.magfest.org.yaml (cloud server):
- send_emails = True # There are a few emails that send POST_CON
- Test config on staging (note: you may need to check these settings aren't overridden under external/stagingX.uber.magfest.org.yaml)
- ensure onsite.uber.magfest.org is covered by the backup script on frontend.uber.magfest.org (backup-all-production-dbs.sh)
- re-enable cloud server in magbot
Did you remember to set post_con = True? Did you? Double-check that.
OK good.
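The post_con check is important enough to script. Below is a minimal pre-flight sketch, assuming the rendered config is a flat key = value file; the path and exact format are assumptions, so the demo runs against a scratch file rather than the live config:

```shell
# Pre-flight sanity-check sketch. The key = value lines mirror the settings
# above; a mktemp scratch file stands in for the real rendered config.
CONF=$(mktemp)
printf 'post_con = True\nat_the_con = False\nsend_sms = False\n' > "$CONF"

if grep -Eq '^post_con *= *True' "$CONF"; then
    RESULT="ok"
    echo "post_con is True, safe to proceed"
else
    RESULT="missing"
    echo "STOP: post_con is not True" >&2
fi
rm -f "$CONF"
```

Point the grep at the real rendered config on each server before continuing.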
DO THE MIGRATION
- do a final deploy to rams1 (the onsite server); this will be the last time you magbot deploy to prod_super_onsite
- for super-safety's sake, go into rams1 and manually change the following:
- in the INI file (this server is about to be retired and shouldn't do these things again, nor ever start again, e.g. next year when we bring this VM up by mistake):
- remove AWS keys
- remove Stripe keys
- remove Twilio keys
- post_con set to True
- at_the_con set to False
- send_emails set to False
- comment out the postgres connection string, so uber can't accidentally be started
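These INI edits can be done with sed instead of by hand. The key names below (aws_access_key, sqlalchemy_url) are hypothetical stand-ins, not the real uber.ini keys, and the demo works on a scratch copy rather than the live file:

```shell
# Sketch: blank a secret and comment out the DB connection string.
# Key names are assumptions; the scratch file stands in for the live INI.
INI=$(mktemp)
printf 'aws_access_key = AKIAEXAMPLE\nsqlalchemy_url = postgresql://uber@localhost/uber\n' > "$INI"

sed -i -e 's/^aws_access_key *=.*/aws_access_key =/' \
       -e 's/^sqlalchemy_url/; sqlalchemy_url/' "$INI"

RESULT=$(cat "$INI")
echo "$RESULT"
rm -f "$INI"
```

Swap in the real key names and file path, and keep a backup copy of the INI before editing.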
- set nginx maintenance mode on rams1 and cloud server
- cloud: echo "ubersystem is down for quick maintenance and will return shortly" > /var/www/maintenance.html
- rams1: echo 'click <a href="https://super2018.uber.magfest.org/">here</a> to go to live ubersystem' > /var/www/maintenance.html
- run supervisorctl stop all on both rams1 and cloud to stop app server
- on rams1 prevent ubersystem from starting on startup
- in /etc/supervisord.d/uber_daemon.conf set autostart=false
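That edit can also be scripted; the program section name below is inferred from the conf file name and is an assumption. The demo flips autostart on a scratch copy:

```shell
# Sketch: set autostart=false in a supervisord program section.
# Scratch file stands in for /etc/supervisord.d/uber_daemon.conf.
SUP=$(mktemp)
printf '[program:uber_daemon]\ncommand=/usr/local/bin/run_uber\nautostart=true\nautorestart=true\n' > "$SUP"

sed -i 's/^autostart=.*/autostart=false/' "$SUP"

RESULT=$(grep '^autostart=' "$SUP")
echo "$RESULT"
rm -f "$SUP"
```

After editing the real file, supervisorctl reread / supervisorctl update will pick up the change; since the app was already stopped with supervisorctl stop all, it stays down.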
- back up the database (from mcp); this will make a copy in ~/backup
- ~/sysadmin/backup-all-production-dbs.sh
- find the backup file: ls -altr ~/backup/
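Instead of eyeballing the ls output, the newest backup can be picked out programmatically. A sketch, with made-up directory and file names for the demo:

```shell
# Demo: grab the most recently modified file in a backup directory.
DIR=$(mktemp -d)
touch "$DIR/uber-old.sql.gz"
sleep 1                      # ensure distinct mtimes for the demo
touch "$DIR/uber-new.sql.gz"

NEWEST=$(ls -t "$DIR" | head -n 1)   # ls -t sorts newest first
echo "newest backup: $NEWEST"
rm -rf "$DIR"
```

Against the real server this would be ls -t ~/backup/ | head -n 1; sanity-check the file size and timestamp before proceeding with the restore.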