-
- Private work.
- YT went from a warning with an X, to a timer, to a grayed-out screen. They keep tightening. uBlock Origin’s patches work temporarily, then YT changes again.
- City of Gods.
- Bottled the fire cider.
- Supercontest hit an error in htmlmin: https://github.com/hamidfzm/Flask-HTMLmin/issues/18 (https://bmahlstedt-org.sentry.io/issues/4585353345). No action was taken on that ticket. Appears transient. Will investigate if it happens again.
-
- Private work.
- Startup CTO handbook: https://github.com/ZachGoldberg/Startup-CTO-Handbook/blob/main/StartupCTOHandbook.md
- // and % as two separate operations are a faster pair than a single invocation of divmod (timing sketch at the end of this section).
- Longest palindromic substring: https://en.wikipedia.org/wiki/Longest_palindromic_substring#Manacher’s_algorithm (Python sketch at the end of this section).
- Next.js 13 getting a lot of hate.
- Replaced full disposal bag in irobot base.
- Fire cider started fermentation on sept 24, so it’s ready to bottle now.
- Meta’s data warehouse is millions of Apache Hive tables. Most data engineers use spark to query/transform/analyze.
- Billions series finale.
- NBA revenue is only 28% tix. 51% from media. Warriors have the highest valuation rn (7.7B). Knicks are #2.
- Updated docker desktop (4.24.2).
- Leetcode’s daily puzzle rolls over at midnight UTC (nice). So 8pm eastern.
- Supercontest.
- Serving static media from s3 (via route 53 -> cloudfront behavior for /route match) rather than from the app (via route 53 -> cloudfront default -> elb -> ec2 -> nginx -> gunicorn -> flask) is so much better. Regardless of geodistribution/WAF/other benefits, it’s just fewer requests to the load balancer. The app handles only app logic, nothing errant. In the server logs, a request is now just a request (not a bunch of fluff from favicon, banner, etc). Also cheaper to serve from s3 than from the elb. Still need to move css/js static out of the app.
- Lots of work on the cognito side.
- Created the user pool. You can control sign-in behavior (username/email/phone), even federated identities (considered but didn’t allow google/apple/amazon; did not enable facebook/saml/okta/AD), MFA, password requirements, account recovery, and comms prefs (email, sms, etc). Chose to use their hosted UIs (not just the API), served as endpoints on my custom domain. (A boto3 sketch of the pool settings is at the end of this section.)
- Created email (sbsc@) and associated domain (sbsc.com) in SES, to attach to cognito for comms. Verified both. Didn’t add observability yet (can publish to cloudwatch). Production access isn’t approved yet; waited about 2hrs.
- Could move sbsc@gmail to a proper google workspace, under my domain.
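- Sketch for the // and % vs divmod note above. Quick timeit comparison; exact numbers vary by machine and Python version, the point is just the relative gap from divmod’s name lookup and call overhead.

```python
import timeit

# Separate floor-division and modulo vs a single divmod call.
separate = timeit.timeit("q = n // 7; r = n % 7", setup="n = 123456789", number=1_000_000)
combined = timeit.timeit("q, r = divmod(n, 7)", setup="n = 123456789", number=1_000_000)
print(f"// and %: {separate:.3f}s   divmod: {combined:.3f}s")
```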
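- Sketch for the Manacher’s algorithm link above: a compact Python version of the linked approach (variable names are mine).

```python
def longest_palindromic_substring(s: str) -> str:
    """Manacher's algorithm: longest palindromic substring in O(n)."""
    if not s:
        return ""
    # Interleave sentinels so even- and odd-length palindromes are handled uniformly.
    t = "|" + "|".join(s) + "|"
    n = len(t)
    radius = [0] * n      # radius[i] = palindrome radius in t centered at i
    center = right = 0    # center and right edge of the rightmost known palindrome
    for i in range(n):
        if i < right:
            radius[i] = min(right - i, radius[2 * center - i])  # mirror of i across center
        while (i - radius[i] - 1 >= 0 and i + radius[i] + 1 < n
               and t[i - radius[i] - 1] == t[i + radius[i] + 1]):
            radius[i] += 1
        if i + radius[i] > right:
            center, right = i, i + radius[i]
    best = max(range(n), key=lambda i: radius[i])
    start = (best - radius[best]) // 2  # map back to the original string
    return s[start:start + radius[best]]

print(longest_palindromic_substring("babad"))  # "bab" (or "aba")
```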
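- Sketch for the cognito user pool above: roughly what those settings map to in boto3 if scripted instead of clicked through the console. The pool name and exact values here are placeholders, not the real config.

```python
import boto3

cognito = boto3.client("cognito-idp")

# Placeholder name/values; the real pool was configured in the console.
resp = cognito.create_user_pool(
    PoolName="sbsc-users",
    UsernameAttributes=["email"],        # sign in with email
    AutoVerifiedAttributes=["email"],
    MfaConfiguration="OPTIONAL",
    Policies={
        "PasswordPolicy": {
            "MinimumLength": 12,
            "RequireUppercase": True,
            "RequireLowercase": True,
            "RequireNumbers": True,
            "RequireSymbols": False,
        }
    },
    AccountRecoverySetting={
        "RecoveryMechanisms": [{"Priority": 1, "Name": "verified_email"}]
    },
)
print(resp["UserPool"]["Id"])
```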
-
- Private work.
- Read MS IRA quarterly report.
- Flask does have backward-compatibility issues (werkzeug, flask, extensions, more). Most of these have been in the world of extensions, not web-framework primitives, so I’ve lifted those deps. E.g. flask-user to cognito, for this guy’s problem: https://blog.miguelgrinberg.com/post/we-have-to-talk-about-flask.
- Disabled adblockplus on yt. Just ublockorigin (and privacybadger, but trackers diff than ads).
- Matt Levine, on PE funds: “Really though the ideal would be to raise money from public investors without running a public fund. One classic way to do this is with insurance: You buy or start an insurance company, the insurance company sells life insurance and annuities to public investors, and it invests the float in your private equity business. The public investors aren’t investors in a fund; they are customers of an insurance company. (The insurance company, meanwhile, is a large institutional investor, so its investments are private.) This is a very popular approach and many of the big private equity firms have insurance businesses.” You could also become a holding company / conglomerate. Your portfolio doesn’t contain shares in a bunch of companies, it owns a bunch of companies.
- Updated geforce drivers.
- Supercontest.
- Moved schedules from apscheduler to eventbridge.
- https://gitlab.com/bmahlstedt/supercontest/-/issues/190
- They call lambdas. The functions use urllib3 (rather than requests) so no bundled deps – urllib3 isn’t stdlib, but it ships with the Lambda Python runtime. A minimal handler sketch is at the end of this section.
- The app logic for the schedules stays in the app, but is now triggered from a rest call.
- Set the schedules in eventbridge with no retry policy, to start.
- Schedules don’t run in dev (so don’t need email conditional logic, etc). They just run from eventbridge -> lambda and hit the endpoints in the prod app.
- Now that jobs run within a request context, I can use url_for and url_root to dynamically infer addresses/links (rather than hardcoding).
- If an exception is uncaught within a route, flask (obviously) returns 500. Rewrote some of the schedule targets to return clear bools of whether or not the action was done (eg “don’t need to commit scores rn”), so that the routes can forward the appropriate status code (sketch at the end of this section).
- Overall change was -150 lines.
- Did some manual auth management with a custom route decorator (sketch at the end of this section). Will switch to better auth during the cognito migration (and api gateway).
- Cleaned up some of the “I actually did this action” bool passarounds between functions. Plugged them into the http status code responses for clarity.
- The metrics for these schedules are not on the eventbridge side, they’re on the target side (lambda). Cloudwatch handles all this, obviously, but you don’t have to create a custom dashboard (unless you want) – just go to the lambda function and check the monitor tab.
- Will add alarms for failure later. Easy on “failure” clarifications. Would love to plot by status code response. I didn’t have this for apscheduler jobs before either. Now it’s easier to plug in. A good benefit of moving everything atomic to the cloud.
- Confirmed score commits worked, for both this change and yesterday’s (all the cloudfront changes).
- And the url_for changes for dynamic linking.
- Both eventbridge and lambda have retry policies. I had disabled the eventbridge one, and still noticed a larger-than-expected number of lambda invocations. It was because of the retry setting on the faas side, retrying twice in the event of failure.
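- Sketch of the lambda handlers above: urllib3 ships with the Lambda Python runtime, so nothing needs to be packaged. The endpoint path and header/env var names are made up, not the real ones.

```python
import os
import urllib3

http = urllib3.PoolManager()

def handler(event, context):
    # Hypothetical route and shared-secret header; the real logic lives in the prod app.
    url = "https://southbaysupercontest.com/api/commit-scores"
    resp = http.request(
        "POST",
        url,
        headers={"X-Schedule-Token": os.environ["SCHEDULE_TOKEN"]},
    )
    # Surface app failures as lambda failures so cloudwatch metrics reflect them.
    if resp.status >= 400:
        raise RuntimeError(f"{url} returned {resp.status}")
    return {"status": resp.status}
```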
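- Sketch of the route shape described above: the schedule target returns a bool for whether the action actually ran, the route maps that to a status code, and url_for resolves links now that there’s a request context. Route and function names are illustrative.

```python
from flask import Flask, url_for

app = Flask(__name__)

@app.route("/leaderboard")
def leaderboard():
    return "leaderboard"

def commit_scores_if_needed() -> bool:
    """Stand-in for the real job; True means scores were actually committed."""
    return True

@app.route("/api/commit-scores", methods=["POST"])
def commit_scores():
    did_commit = commit_scores_if_needed()
    if did_commit:
        # url_for works here because the job now runs inside a request context.
        return {"committed": True, "leaderboard": url_for("leaderboard")}, 200
    # Nothing to do right now; say so explicitly instead of a blanket 200 or a 500.
    return "", 204
```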
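- Sketch of the manual auth decorator mentioned above: a shared-secret header checked before the view runs. Header and env var names are hypothetical; this goes away with the cognito/api gateway migration.

```python
import os
from functools import wraps
from flask import abort, request

def require_schedule_token(view):
    """Reject requests that don't carry the shared schedule secret."""
    @wraps(view)
    def wrapped(*args, **kwargs):
        if request.headers.get("X-Schedule-Token") != os.environ.get("SCHEDULE_TOKEN"):
            abort(403)
        return view(*args, **kwargs)
    return wrapped

# Usage: stack it under the route decorator, e.g.
# @app.route("/api/commit-scores", methods=["POST"])
# @require_schedule_token
# def commit_scores(): ...
```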
-
- Private.
- Had this wp blog go down for the first time. Error Establishing Database Connection. No prior changes, no mods to creds, etc. Just sudo reboot from the DO console, fixed.
- Remember route53 doesn’t charge for alias records to other resources (like elb or cf).
- Added backyard ring cam.
- Updated postman version.
- Supercontest.
- Finished https://gitlab.com/bmahlstedt/supercontest/-/issues/193.
- Figured out what was broken in the network layer last night. It was WAF. Was checking referer=southbaysupercontest.com and blocking all requests without that header. This was a remnant from when the cf dist only had one origin, s3. Now the toplevel dist is gonna get requests from all sorts of client referers. Made the WAF ACL conditional first. Then just deleted it.
- CF does the http -> https redirect, cleaner – don’t need 2 ELB listeners now. SSL termination still happens there, of course.
- You can spoof referer in a request, of course. It’s just a header (quick example at the end of this section).
- Working with cloudfront is a little annoying; the feedback cycle time is ~15min.
- Nope, the WAF issue wasn’t it. Here’s a good summary: https://stackoverflow.com/a/75672806. Basically, 502 was because cloudfront couldn’t connect to the origin, because TLS was failing. The cert must match the host header, and cloudfront wasn’t forwarding the host header, so elb returned bad gateway. You can configure cf to forward all headers in the viewer policy.
- Added WAF back.
- Banners, team logos, brand content. This is routed at /assets, served by S3 (via CF). That’s not ALL the static content though; CSS/JS are still served by the app (will change later). The app’s /static path was conflicting with the S3 one, so I moved the cloud path to /assets.
- Overall, got everything working. Both the app and the static content are edged by cloudfront (and static cached). And most static assets are on s3. Access-controlled by WAF.
- Also added lines+scores to allpicks view, for convenience: https://gitlab.com/bmahlstedt/supercontest/-/issues/220.
- Also note: You could cache the app (keyed by url params, any custom headers you want, etc). But do this once the dynamic app behavior is resolved. Right now, so much is fetched/queried/calculated in realtime, you’ll get stale data (saw this with old templates after the oldpicks change, and requesting a js bundle hash that didn’t exist bc it was stale, etc).
- Deployed the banner yesterday. Line autocommit worked well today. Tried to submit picks and there was a bug. The CF distribution was configured to only forward ELB request methods GET/HEAD (default). The picking interface does some POST calls.
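- Quick demo of the referer point above: it’s client-supplied, so a referer rule is a convenience filter, not authentication. The URL/path here are placeholders.

```python
import urllib3

http = urllib3.PoolManager()
# Any client can claim any Referer value.
resp = http.request(
    "GET",
    "https://southbaysupercontest.com/assets/banner.jpg",  # placeholder path
    headers={"Referer": "https://southbaysupercontest.com"},
)
print(resp.status)
```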
-
- Private work.
- Finished a rewatch of Midnight Mass to prep for usher.
- Warriorsssss (suns) nba season opener.
- The jaw gum is $1/piece.
- Ennui = tedium = boredom.
- Hydroponic maintenance. Post-deepclean, so put a dent in the remaining liquid nutrients. Consolidating to dry. Remember pH down: 30 for big, 20 for small.
- Privateer email is getting a ton of trash marketing email lately.
- Blockfi emerges from bankruptcy, nearly a year later. Now for gemini/genesis to follow.
- SNI = Server Name Indication.
- Cloudfront can take up to 25 minutes to fully deploy changes to a distribution.
- Like WAF, I could use Origin Shield (better caching), with Cloudfront. Just the click of a button. Both incur extra charges.
- Supercontest.
- Deleted the www A record and surrounding network infra. I don’t want to maintain anything supporting this old format.
- Got a new cert for *.southbaysupercontest subdomains. You can’t edit the alternate domains after a cert is created (and linked with a load balancer + other resources), so the wildcard makes this easier to manage. This cert can be different (in a different region, same domain names) from the cert for the elb.
- Also duplicated the cert in us-east-1 (the original was in us-west-1). Cloudfront only allows alternate domains with certs in us-east-1.
- Then add route 53 records to forward from your subdomain to the cloudfront dist (I added an A for ipv4 as well as an AAAA for ipv6; boto3 sketch at the end of this section).
- Blocked all public access for the s3 bucket. Then added a policy rule to only allow traffic from cloudfront (sketch at the end of this section). In order to ONLY allow this traffic from MY site, you need to use WAF. S3 can have domain-specific access policies, but cloudfront cannot. Use WAF to restrict requests based on referrer.
- Enabled WAF. It’s pretty cool. Can add rules based on headers, rates, everything. Comes out of the box with some basic protection. Can block, captcha, much more.
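- Sketch of the route 53 alias records above, via boto3. The hosted zone ID and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID route 53 uses for cloudfront alias targets.

```python
import boto3

route53 = boto3.client("route53")

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "southbaysupercontest.com.",
            "Type": record_type,                 # A for ipv4, AAAA for ipv6
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",               # CloudFront alias target zone
                "DNSName": "d1234abcdexample.cloudfront.net.",  # placeholder dist domain
                "EvaluateTargetHealth": False,
            },
        },
    }
    for record_type in ("A", "AAAA")
]
route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # placeholder hosted zone
    ChangeBatch={"Changes": changes},
)
```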
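- Sketch of the “only allow cloudfront” bucket policy above, in the origin-access-control style (allow the cloudfront service principal, scoped to one distribution). This is one way to express it; account/distribution/bucket IDs are placeholders. Note the /* suffix on the resource for object-level permissions.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "sbsc-assets"  # placeholder bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOnly",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",  # objects need the /* suffix
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EXAMPLEID"
                }
            },
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```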
-
- Private work.
- Some logarithms.
- Charged the remote control blinds. Just take microusb from desktop, extension cord from drawer, and plug in (red -> blinking green -> green).
- Remember {:,} for comma separators at the thousands places in numbers (example at the end of this section).
- The equinox increase of 7/mo is in relation to ~260 or whatever I pay. A little under 3% increase.
- S3.
- Charges for versioning (if you have it enabled for a bucket) at the same rate as regular storage. If you have 6 versions of the file, it’s 6x the storage (not diffs).
- Lambdas for pre/post data fetching. Many capabilities.
- Remember permissions on the bucket are given by the ARN, but permissions on objects within the bucket require the /* suffix.
- Looked at some policy examples for fine-tuned access control.
- You can add localhost to Referer in a bucket policy to allow access in dev quickly/temp. This effectively exposes to the public though – any service can refer from localhost.
- Lol equinox is increasing their rates by $7/mo.
- Set up the ring cam. Remote control, voice, everything.
- Supercontest.
- Finished the larger change to rewrite all the routing.
- https://gitlab.com/bmahlstedt/supercontest/-/issues/218
- Includes the shifting of the statistics views, the cleanup of the template logic (as far as URL params), and the filtering of stats by season.
- Changed the logic so season=0 means ALL (just like league=0 means all). This required some special handling of the season param – ONLY the stats views allow 0 (others just require valid seasons). Sketch at the end of this section.
- Moved all static assets to s3. Team logos, branding icons, banners, etc.
- https://gitlab.com/bmahlstedt/supercontest/-/issues/193
- Couldn’t find an easy solution for a “pointer” – a generic s3 alias (just a key-value pair) which I can point at another object path in the bucket. I would update the banner pointer to the specific img in the archive. Oh well, just copying each week’s head as banner.jpg, like I used to do locally (sketch at the end of this section).
- Much easier to manage these assets now. I can change the banner and the docs and everything else without rebuild/redeploy. They’re just s3 refs.
- There is a flask-s3 python package: https://flask-s3.readthedocs.io/en/latest/. I’ve decided not to use it. The interface of pure URLs is simple/proper/stable enough for me. I don’t need extra functionality.
- Could add WAF to cloudfront. Slows it down a TINY bit but adds some basic protection. Don’t need for now.
- The docs were broken. Local and remote builds. Needed to add setuptools back to pyproject.toml – it’s necessary for sphinx.
- Remember to delete docs/modules; these are autogenerated, and if you’ve moved modules around, you’ll have some stale imports.
- Also remember “Open with Live Server” for index.html files from vscode (useful for autodocs).
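- The {:,} reminder above, as a one-liner:

```python
print("{:,}".format(1234567))  # 1,234,567
print(f"{1234567:,}")          # same thing with an f-string
print(f"{1234567.5:,.2f}")     # 1,234,567.50
```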
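- Sketch of the season=0 handling above. Names and data are illustrative stand-ins for the real query layer, just to show 0 meaning “all” for the stats filters.

```python
def filter_stats(rows, season: int, league: int):
    # 0 means "all", mirroring how league=0 already behaved; only the
    # stats views accept 0, other views require a valid season.
    if season != 0:
        rows = [r for r in rows if r["season"] == season]
    if league != 0:
        rows = [r for r in rows if r["league"] == league]
    return rows

rows = [{"season": 2023, "league": 1}, {"season": 2022, "league": 1}]
print(len(filter_stats(rows, season=0, league=1)))  # 2 -> season=0 keeps all seasons
```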
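- Sketch of the banner.jpg workaround above (no s3 “pointer” object, so just copy the current week’s image over the fixed key the app references). Bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "sbsc-assets"  # placeholder
s3.copy_object(
    Bucket=bucket,
    CopySource={"Bucket": bucket, "Key": "banners/2023-week08.jpg"},  # placeholder key
    Key="banner.jpg",
)
```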
-
- Private work.
- Danse Macabre, Greenwood Cemetery.
- Mealprep: smoothies, oatmilk, hibiscus.
- Finished Homeland. So good.
- AWS is starting to charge for IPs. A single static ipv4 is ~$4/mo I think?
- Supercontest.
- Changed the View navbar dropdown to be btn-secondary, standing out from the other season/week/league filters.
- Moved the alltimelb view to the standard Leaderboard dropdown option for season. Didn’t add the param season=All, since they’re so diff. Cols by week vs cols by season, different values shown in each cell, etc. Best to keep separate. It’s not like stats where it just affects a filter in the query.
- Still some (now) invalid email addresses of players. Not gonna do anything. That’s their account profile, I don’t want to modify it. Just have the mailer fail as necessary.
- Sometimes espn returns unknown statuses: https://bmahlstedt-org.sentry.io/issues/4550103735/?notification_uuid=c2a7fec2-1faf-4ddf-bc71-5576421eb16e&project=1773879&referrer=regression_activity-email