Do this AFTER THE EVENT IS OVER

See Migrating Ubersystem onsite from offsite for the reverse instructions, i.e. how to migrate it on-site

TODO: the DB restore procedure here is pretty massive and maybe overkill; either I need more coffee or there really was a problem with the normal, simpler restore procedure when DB replication was in the mix.

TODO: probably automate away a lot of this through puppet/magbot/etc

PREPARING

  1. Create and merge a pull request:
    1. In both rams1.uber.magfest.org.yaml (onsite server) and super2018.uber.magfest.org.yaml (cloud server):
      1. post_con = True # for the love of all that is holy in this world, do not forget to set this.
      2. at_the_con = False
      3. send_sms = False
      4. redirect_all_traffic_onsite = False
      5. remove (comment out) all the database replication settings for cloud and rams1:
        1. uber::db_replication_slave
        2. replication_mode
        3. replicate_from
        4. allow_to_hosts
    2. In rams1.uber.magfest.org.yaml (onsite server):
      1. send_emails = False
    3. In super2018.uber.magfest.org.yaml (cloud server):
      1. send_emails = True # There are a few emails that send POST_CON
  2. Test config on staging (note: you may need to check that these settings aren't overridden under external/stagingX.uber.magfest.org.yaml); see the spot-check sketch after this list
  3. ensure onsite.uber.magfest.org is in the backup script on frontend.uber.magfest.org (backup-all-production-dbs.sh)
  4. re-enable the cloud server in magbot
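
Once the PR is merged and puppet has run, here is a quick spot-check that the flags above actually landed. This is only a hedged sketch: the config path (/etc/uber/uber.ini) is an assumption, so point the grep at wherever the rendered uber config really lives on these hosts.

  # assumed config path; adjust to the real rendered location on each host
  for host in rams1.uber.magfest.org super2018.uber.magfest.org; do
    echo "== $host =="
    ssh "$host" "grep -E '^(post_con|at_the_con|send_sms|send_emails|redirect_all_traffic_onsite)' /etc/uber/uber.ini"
  done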

Did you remember to set post_con = True? Did you? Double-check that.

OK good.

DO THE MIGRATION

  1. do a final deploy to rams1 (the onsite server); this will be the last time you magbot deploy to prod_super_onsite
  2. for super-safety's sake, go into rams1 and manually change the following (see the hedged INI sketch after this list):
    1. in the INI file (this server is about to be dead and shouldn't do these things again, or start up at all, e.g. next year when someone brings this VM up by mistake):
      1. remove AWS keys
      2. remove Stripe keys
      3. remove Twilio keys
      4. post_con set to True
      5. at_the_con set to False
      6. send_emails set to False
      7. comment out the postgres connection string, so uber can't accidentally be started.
  3. set nginx maintenance mode on rams1 and cloud server
    1. cloud: echo "ubersystem is down for quick maintenance and will return shortly" > /var/www/maintenance.html
    2. rams1: echo 'click <a href="https://super2018.uber.magfest.org/">here</a> to go to live ubersystem' > /var/www/maintenance.html
  4. run supervisorctl stop all on both rams1 and cloud to stop the app server (see the supervisor sketch after this list)
  5. on rams1, prevent ubersystem from starting at boot:
    1. in /etc/supervisord.d/uber_daemon.conf set autostart=false
  6. back up the database (from mcp); this will make a copy in ~/backup (see the backup sketch after this list)
    1. ~/sysadmin/backup-all-production-dbs.sh
  7. find the backup file: ls -altr ~/backup/ (the newest file sorts last)
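
Step 2, sketched as commands on rams1. This is a hedged sketch only: the INI path and the exact key names (aws_*, stripe_*, twilio_*, sqlalchemy_url) are assumptions, so check the real file first; making the edits by hand in an editor, as the step says, is just as good.

  INI=/etc/uber/uber.ini                   # assumed path to uber's rendered INI
  sudo cp "$INI" "$INI.pre-postcon.bak"    # keep a copy of what we're about to gut
  # blank out the secret values, flip the post-con flags, comment out the DB string
  sudo sed -i \
    -e 's/^\(aws_[a-z_]*\s*=\).*/\1/' \
    -e 's/^\(stripe_[a-z_]*\s*=\).*/\1/' \
    -e 's/^\(twilio_[a-z_]*\s*=\).*/\1/' \
    -e 's/^post_con\s*=.*/post_con = True/' \
    -e 's/^at_the_con\s*=.*/at_the_con = False/' \
    -e 's/^send_emails\s*=.*/send_emails = False/' \
    -e 's/^\(sqlalchemy_url\s*=\)/# \1/' \
    "$INI"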
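Steps 4-5, sketched as commands (run on both hosts; the autostart edit is rams1 only). Nothing here goes beyond what the steps already say; it's the same thing in runnable form.

  supervisorctl stop all
  supervisorctl status       # everything should read STOPPED
  # rams1 only: edit /etc/supervisord.d/uber_daemon.conf and set autostart=false,
  # so ubersystem stays down across reboots and supervisord restarts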
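Steps 6-7, run from mcp. The LATEST variable is just an illustrative helper for grabbing the newest dump's filename.

  ~/sysadmin/backup-all-production-dbs.sh
  ls -altr ~/backup/                        # newest dump sorts last
  LATEST="$(ls -t ~/backup/ | head -n 1)"   # illustrative: name of the newest backup
  echo "restore candidate: ~/backup/${LATEST}"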