- Private work.
- Not sure why 8sleep was flashing. Water is half full – will fill next aquarium RODI cycle.
- AWS Secrets Manager. Will move privates there once I lift EC2.
- AWS KMS = Key Management Service.
- For some reason, docker was wiped on my ubuntu wsl (maybe from docker desktop update on the host?). Reinstalled.
- That was easy. For some reason, my gitlab runners died with this. Ran a new one in a docker container, registered it (have to create a new token in gitlab). Default image `python:3.12`. Remember it stores config locally in `/etc/gitlab-runner/config.toml`. You might have been able to restore it from there.
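A rough sketch of the re-registration flow, in case it dies again (URL, token, and mount paths are placeholders, not the real values):

```bash
# Run the runner itself in a container, persisting /etc/gitlab-runner so
# config.toml survives container restarts.
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

# Register it with the new token from the project's CI/CD settings.
docker exec -it gitlab-runner gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token <NEW_TOKEN> \
  --executor docker \
  --docker-image python:3.12
```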
- Disabled resource saver on docker desktop. You’ll get errors like this when it’s in that mode: `error getting credentials - err: exec: "docker-credential-desktop.exe": executable file not found in $PATH, out:`
- Segmented, tenderized, injected, and started the cure for both friendsgiving turkeys.
- Supercontest.
- All RDS today.
- Finished the cloud DB deployment.
- Yep, left docker-compose with an app and a db container for local dev. Prod is EC2+RDS (soon to be serverless for both).
- Remember you have the useful `migrations/data_changes` folder. Cleaned it up a bit today.
- Because migrations are with `flask-migrate` (via `alembic`), and `flask-migrate` goes through the flask CLI (`flask db <>`), it piggybacks on whatever DB connection your app has. I have to update that anyway for the prod config, so migrations should work with no code changes beyond that.
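In other words, pointing the app at the prod DB should be all that's needed before migrating there. A minimal sketch, assuming the app reads its connection string from an env var like `SQLALCHEMY_DATABASE_URI` (the var name and URL values are assumptions about this app's config):

```bash
# Point the app (and therefore flask-migrate/alembic) at the prod DB, then migrate.
export SQLALCHEMY_DATABASE_URI="postgresql+psycopg2://myuser:mypassword@my-rds-host:5432/supercontest"
flask db upgrade
```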
- Removed the flask-monitoring-db volume and other minor infra; the monitoring itself was deleted a while ago.
- Database URLs (for sqla): `dialect+driver://username:password@host:port/database`
- Removed flask-debugtoolbar.
- Added `.pgpass` for abstraction in the makefile (to connect to prod).
  - You still have to pass host/user to the psql/pg_dump/pg_restore commands (just not the password).
  - And in the `PGPASSFILE`, specify the db as `*` so that it works with the `supercontest` (main) db as well as the `postgres` db (target of pg_restore).
- Created a KMS key to encrypt RDS exports to S3. Also a new IAM role with S3 write perms for this.
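Roughly, via the CLI (role name and policy documents are placeholders; the trust policy needs to let export.rds.amazonaws.com assume the role):

```bash
# Key used to encrypt the export objects in S3.
aws kms create-key --description "RDS snapshot exports to S3"

# Role that the export task assumes; attach an inline policy granting write
# access to the export bucket.
aws iam create-role \
  --role-name rds-s3-export \
  --assume-role-policy-document file://trust-rds-export.json
aws iam put-role-policy \
  --role-name rds-s3-export \
  --policy-name s3-export-write \
  --policy-document file://s3-export-write.json
```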
- Took a manual snapshot and exported to S3. Snapshot takes a minute or two, export takes longer (it sat in “starting” for 27min and then the actual export took ). Restored it in my local db to test.
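The snapshot/export calls, roughly (identifiers, bucket, and ARNs are placeholders):

```bash
# Manual snapshot of the RDS instance.
aws rds create-db-snapshot \
  --db-instance-identifier supercontest-db \
  --db-snapshot-identifier supercontest-manual-snapshot

# Export the snapshot to S3, encrypted with the KMS key, using the export role.
aws rds start-export-task \
  --export-task-identifier supercontest-export \
  --source-arn arn:aws:rds:us-west-2:123456789012:snapshot:supercontest-manual-snapshot \
  --s3-bucket-name my-rds-export-bucket \
  --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export \
  --kms-key-id arn:aws:kms:us-west-2:123456789012:key/<key-id>
```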
- To update the db in your local dev env, you have two options:
  - SSH into the EC2 instance, run a pg_dump of the RDS instance, then copy it to your local box and run pg_restore.
  - Take a snapshot (manual or automatic) in RDS, export it to S3, download, and restore from it.
  - RDS snapshot exports to S3 are Parquet files, which can’t be imported with pg_restore. So the former is almost always better.
- Less load on the EC2 now (small benefit).
- Added a convenient gnumake target for the above dev-syncing method. SSHs into EC2 (using the existing SSH profile), runs a backup against the cloud db, SCPs it to the local machine, restores the local db from it, and restarts everything.
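A sketch of the commands that target wraps (the host alias, users, paths, and the restart step are assumptions about this setup):

```bash
# Dump the cloud (RDS) db from the EC2 box, which already has network access and creds.
ssh supercontest-ec2 "pg_dump -Fc -h <rds-host> -U <user> supercontest > /tmp/supercontest.dump"

# Pull the dump down and restore it into the local dev db.
scp supercontest-ec2:/tmp/supercontest.dump /tmp/supercontest.dump
pg_restore --clean --if-exists -h localhost -U <user> -d supercontest /tmp/supercontest.dump

# Bounce the local stack so the app picks up the refreshed db.
docker compose restart
```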