- Private work.
- Browsers don’t count toward Netflix’s device limit.
- Finished The Fall of the House of Usher. Flanagan is so good.
- Garden maintenance.
- AWS product refresher.
- App Runner is kinda like Elastic Beanstalk. It’s an all-in-one service for “I have an app and don’t care about the infra, just deploy/manage it for me.” EB is comprehensive; App Runner is containers-only.
- SAM = Serverless Application Model. It’s a template format (an extension of CloudFormation) for defining a serverless stack on AWS.
- There’s also SAR = Serverless Application Repository. Integrates with Lambda.
- Remember storage options: Glacier, S3, EFS, EBS.
- Boto (boto3) is the Python SDK.
- Chalice is AWS’ framework for Python serverless applications. It’s an aggregator, like the others, with SDK/CDK integration. Define your web routes, your schedules, your S3 triggers, your Lambda functions, whatever (minimal sketch below). https://aws.github.io/chalice. Pretty cool tool, although I don’t think I’ll keep the API backend in Python for much longer, so I don’t want to marry myself to a Python platform.
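A minimal Chalice sketch of what that aggregation looks like (the app and bucket names here are made up):

```python
from chalice import Chalice, Rate

app = Chalice(app_name="supercontest-api")  # hypothetical app name

@app.route("/scores")
def scores():
    # Becomes an API Gateway route backed by a Lambda.
    return {"week": 1}

@app.schedule(Rate(1, unit=Rate.HOURS))
def hourly_sync(event):
    # Becomes a scheduled rule that invokes a Lambda.
    pass

@app.on_s3_event(bucket="sbsc-uploads", events=["s3:ObjectCreated:*"])  # hypothetical bucket
def on_upload(event):
    # Becomes an S3 notification -> Lambda; event carries .bucket and .key.
    pass
```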
- DevOps Guru does some pretty cool ML-based anomaly detection for services like RDS. Costs a few dollars a month.
- Lambda dive.
- Most stemming from the large tree of https://docs.aws.amazon.com/lambda/latest/dg/welcome.html.
- Entry point is always `lambda_handler(event, context)`. Event is the (JSON) data passed by the caller. Context is env, arch, logs, etc. Minimal sketch below.
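A minimal sketch of the handler shape (the return value assumes an API-style caller):

```python
import json

def lambda_handler(event, context):
    # event: whatever JSON the trigger/caller passed in.
    # context: runtime metadata (function name, request id, remaining time, etc).
    print(f"invoked {context.function_name}, request {context.aws_request_id}")
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```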
- You can do cool stuff like trigger from S3: every time an image is uploaded, a function creates a thumbnail and adds that to S3 as well (sketch below). I could use this for the sbsc banner.
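A hedged sketch of that thumbnail flow (bucket names are hypothetical, and Pillow would have to be shipped in the zip or a layer):

```python
import io
from urllib.parse import unquote_plus

import boto3
from PIL import Image  # not in the base runtime; package it or use a layer

s3 = boto3.client("s3")
THUMB_BUCKET = "sbsc-thumbnails"  # hypothetical output bucket

def lambda_handler(event, context):
    # Fired by s3:ObjectCreated:* on the source bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        img = Image.open(io.BytesIO(body))
        img.thumbnail((256, 256))
        out = io.BytesIO()
        img.save(out, format="PNG")
        s3.put_object(Bucket=THUMB_BUCKET, Key=f"thumbs/{key}", Body=out.getvalue())
```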
- There’s more advanced stuff for later: testing functions directly in the cloud, adding traces, etc.
- The default artifact of a lambda created from the UI is a zip archive. You can include OTHER libraries, dependencies. And you can organize them in layers for efficiency. Example on https://docs.aws.amazon.com/lambda/latest/dg/python-package.html.
- If you want to bypass all that, you can just deploy one of your containers as a lambda. Then the entry point isn’t `lambda_handler`, it’s the Docker ENTRYPOINT.
- If you’re not plugging a lambda into another service and triggering on some event-driven flow, you can invoke the lambda directly via the UI or CLI. You can ALSO assign a function URL and call it to invoke a lambda.
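A direct-invoke sketch with boto3 (the function name is hypothetical); the CLI equivalent is `aws lambda invoke`:

```python
import json
import boto3

client = boto3.client("lambda")

response = client.invoke(
    FunctionName="sbsc-scores",        # hypothetical function name
    InvocationType="RequestResponse",  # synchronous; use "Event" for fire-and-forget
    Payload=json.dumps({"week": 1}),
)
print(json.load(response["Payload"]))
```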
- Use CDK + CloudFormation to define lambdas from source and autodeploy them. Fun fact: CloudFormation is declarative (as you’d expect) but CDK is imperative! It converts your imperative instructions into the declarative finish line (a synthesized CloudFormation template). Sketch below.
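A minimal CDK v2 sketch in Python (names and paths are made up): the imperative constructs below synthesize into a declarative CloudFormation template.

```python
from aws_cdk import App, Stack, aws_lambda as _lambda
from constructs import Construct

class ApiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        _lambda.Function(
            self, "ScoresFn",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="handler.lambda_handler",   # handler.py in ./src
            code=_lambda.Code.from_asset("src"),
        )

app = App()
ApiStack(app, "SupercontestApi")  # hypothetical stack name
app.synth()  # `cdk deploy` synthesizes and pushes the generated template
```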
- There are tons of third-party extensions for lambda: datadog, splunk, sentry, etc (if you don’t want to use internal services like cloudwatch for these)
- Lambdas don’t have a connection pooling mechanism and make lots of short connections. This is exactly what RDS proxy was built for.
- Your lambda functions can use IAM auth to connect to the DB. Much better than a static un/pw. See the sketch after the example links below.
- Full RDS example: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-lambda-tutorial.html.
- Another example with Aurora: https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/python/cross_service/aurora_rest_lending_library
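A hedged sketch of that IAM-auth connection (endpoint, user, and table are hypothetical; psycopg2 has to be packaged with the function; the same pattern works whether the host is the instance endpoint or an RDS Proxy endpoint):

```python
import boto3
import psycopg2  # ship it in the zip/layer (e.g. a prebuilt Lambda-compatible wheel)

# Hypothetical values; the real ones come from the RDS console.
DB_HOST = "supercontest.xxxxxxxx.us-west-1.rds.amazonaws.com"
DB_USER = "lambda_user"
DB_NAME = "supercontest"

def lambda_handler(event, context):
    # Short-lived IAM auth token instead of a static password. The function's
    # execution role needs rds-db:connect for this user, and the Postgres user
    # must be granted the rds_iam role.
    token = boto3.client("rds").generate_db_auth_token(
        DBHostname=DB_HOST, Port=5432, DBUsername=DB_USER
    )
    conn = psycopg2.connect(
        host=DB_HOST, port=5432, dbname=DB_NAME,
        user=DB_USER, password=token, sslmode="require",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM picks")  # hypothetical table
        (count,) = cur.fetchone()
    conn.close()
    return {"picks": count}
```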
- Supercontest.
- Probably the best E2E tutorial for a webapp: https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/. Amplify, API Gateway, Lambda, Cognito, Dynamo. Pretty close to my supercontest stack.
- Closed docker image build ticket (https://gitlab.com/bmahlstedt/supercontest/-/issues/165) – I’m gonna do serverless. Frontend = amplify hosted. Backend = lambdas. Will handle the remainder on the respective tickets for react and compute.
- Cognito and SES (and most of the rest of the infra) is in us-west-1. Pinpoint and SNS are in us-east-1. Hopefully this doesn’t cause any issues.
- When I sell the EC2 RI later (once fully serverless), aws marketplace takes 12%.
- Since the lambdas hit the db through rds proxy (not connection-pooling through flask anymore), I have to finish #188 first.
- RDS.
- Current EC2 node is t2.micro. And it runs 3 containers, only 1 being postgres. So t3.micro for the db is plenty.
- RDS Postgres calculator: https://calculator.aws/#/addService/RDSPostgreSQL
- Aurora Postgres calculator: https://calculator.aws/#/addService/AuroraPostgreSQL
- Scenarios (most modern first):
- Aurora serverless v2, 0.5ACU/hr (min), rds proxy, 10GB storage, 10GB backups = $175/mo
- Aurora serverless v1, 2ACU/hr (min), 10GB storage, 10GB backups = $115/mo
- Aurora non-serverless, 1 instance, t3.medium, RI 3yr upfront, rds proxy, single az, 10GB storage, 10GB backups = $60/mo (35 from the upfront, only 25/mo recurring)
- RDS, 1 instance, t3.micro, RI 3yr upfront, rds proxy, single az, 10GB storage, 10GB backups = $35/mo (10 from the upfront, only 25/mo recurring)
- RDS proxy is more than HALF the cost for the prices above. For aurora serverless v2, RDS proxy is ~$115/mo (at min of 0.5 ACU/hr, min of 8 ACU charge). For the non-serverless solutions, it’s ~$25/mo (at min of 0.5 vCPU/hr, min of 2 vCPU charge).
- Serverless WOULD be cheaper than my RDS if the workload were at least ~0.5 ACU/hr (already a pretty small workload). BUT supercontest is even smaller than that, so the serverless billing floor costs more than an always-on RDS instance. This is the only piece of the app that isn’t scalable at the end of the cloud migration. As traffic increases, the db should be converted to aurora serverless v2. https://gitlab.com/bmahlstedt/supercontest/-/issues/226
- Bought the RI, 3yr, upfront, single az, $294. Created the DB as above. Non-aurora, non-serverless, postgres15, db.t3.micro, existing master un/pw, 20GiB gp3 SSD, auth via pw and IAM, rds proxy.
- Will add elasticache later: https://gitlab.com/bmahlstedt/supercontest/-/issues/225
- Backups daily (6am ET). Maintenance weekly (tuesdays 7am ET).
- There’s DMS = Database Migration Service. Designed for tasks like this, getting another DB into RDS. I wanna see the low-level cloud db details, so I’ll do it all manually.
- Actually no – deleted the RDS proxy. I get low traffic for this site, not worried. Postgres default connection limit is 100. I’m not gonna have 100 concurrent lambdas until the userbase substantially increases. Added to #226 for future.
- The version of psql/pg_restore/pg_dump must match between the server (RDS) and the client (wherever you run the dump/restore, either localhost or an EC2 instance). Therefore, you’ll need pg15 installed on the client.
- You can attach a security group that allows Postgres traffic inbound (TCP, port 5432) from anywhere, not just from an EC2 or Lambda connection, but that alone still won’t work. You also have to change the db config in RDS to be publicly accessible.
- All explicit app targets should go through lambdas, with explicit lambda connections in RDS. Open-ended psql access stays on EC2 for now – when I’m fully serverless later, I can open up public access to the DB (at least for dev).
- Got `psql` / `pg_restore` / `pg_dump` working with the cloud db.
- Updated banner.