-
- Abdulrashid Sadulaev is a 23-year-old Russian wrestler and the current world champ. He’s from Dagestan, just like Khabib.
- Awesome pin of the defending champ Snyder with some offshoot Peterson roll all the way to the fall: https://www.youtube.com/watch?v=Ryy19IY9fQo.
- Supercontest. Worked on async score fetching and result calculation today.
- Design decision: Score.coverer and Pick.points will update in realtime, as the scores change, rather than waiting until the game completes. This allows you to present mid-game data as “what if the games ended right now?” For views that don’t want to bank games until they’re done, like the leaderboard, you can filter with an additional check for FINISHED_STATUSES.
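- That filter is a one-liner, e.g. (a sketch assuming the Score model carries a status column; names here are illustrative, not necessarily the app’s exact ones):
- finished = db.session.query(Score).filter(Score.status.in_(FINISHED_STATUSES)).all()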
- If the app is down for a game, you have to manually enter the information into the db. This is fine. Based on the xml scoresheet that I pull nfl scores from, the only option is live. In order to get past data, I’d have to hit another API.
- sqlalchemy difference between default and server_default: default is just what python will insert for the value if you don’t provide one, server_default is a statement that you can use to generate the value on the db side.
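- A minimal example of the difference, on a vanilla declarative model:
- import datetime
- from sqlalchemy import Column, DateTime, Integer, func
- from sqlalchemy.ext.declarative import declarative_base
- Base = declarative_base()
- class Example(Base):
-     __tablename__ = 'example'
-     id = Column(Integer, primary_key=True)
-     created_py = Column(DateTime, default=datetime.datetime.utcnow)  # python computes this at flush time
-     created_db = Column(DateTime, server_default=func.now())  # DDL gets DEFAULT now(); the db generates it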
- Chubbyemu videos are so interesting and informative.
- The Patriots, the Chiefs, and the NINERS are the only teams still undefeated.
- Ordered pink curing salt to make pastrami beef ribs with the next 2 racks. Curing salt is basically just table salt (sodium chloride) mixed with sodium nitrite. The pink doesn’t matter, it’s just dyed to make sure people don’t use it as table salt (although pink Himalayan salt can be used as table salt, and it’s obviously pink too). Regular sea salt can be used to cure/brine meat as well, obviously, it just can’t go as long because the nitrites are what protect the meat from bacteria growth during the curing process.
-
- Supercontest.
- Added tons of stats for the history of the league. https://github.com/brianmahlstedt/supercontest/issues/23. Super interesting.
- The submit button stayed active after someone clicked it – that was the problem. Fixed it and removed a few more duplicates.
- cache.memoize on get_picks was causing problems if you made multiple pick submissions within the span of 10 seconds, because the client could be told about stale picks, desyncing from the actual picks in the backend db. Fixed by removing the cache for that query. Pick information should be live on request. Scores are different, because the user doesn’t write to the scores table from the frontend.
- Completely cleaned and reorganized my room. Cleared a bunch of space. Including closet, lights, everything.
- DigitalOcean.
- Did some research, you can upgrade any of disk/cpu/ram. It adds a decent amount to the monthly cost. Just going from 1GB memory to 2 makes it $10/mo instead of $5.
- Allowed password login on the droplet for my user, so that if anything goes wrong remotely, I don’t need my laptop and ssh key. I can just ssh from any machine.
- sudo vim /etc/ssh/sshd_config
- PasswordAuthentication yes
- sudo service ssh reload
- sudo passwd bmahlstedt
- Confirmed that I have access. You can ssh in like usual or you can go to the DO website in the browser and access a console from there. Password is one of my old SpaceX ones.
- Upgraded from the legacy metrics agent on my droplet to the current one.
- tmux shortcut reminders.
- To detach from your current session, just type your hotkey then d
- To kill your current session, just type your hotkey then : then kill-session
- If another client has another session in a different window size, just kick them off when you attach with tmux attach -d
-
- I care much less about the API utilization metrics that FMD provides. The most interesting is the API performance. Graph below. I wish they had multi-version api perf like they do for utilization. The results are mostly expected, but still an awesome feature. https://southbaysupercontest.com/dashboard/api_performance.

- In checking back, you can see multi-version api perf for specific endpoints! Just not an overview of all routes for the application.
- South Park is back!
- FMD
- Verified that it CANNOT do patch-level version resolution. Only major and minor.
- A major version (3) was released yesterday, which appears to have broken the overview page. 3.0.3 was released a couple hours ago, looks like they’ve had a few bugfixes. Upgraded, and confirmed the overview is still broken.
- Maybe this line of questioning will help the lesser perspectives understand the ignorant racism in the NBA owner concession:
- Is it ok to compare someone to Saddam Hussein, when they have nothing in common with him except that they’re Arab?
- Is it ok to compare someone to Hitler, when they have nothing in common with him except that they’re German?
- Is it ok to compare someone to slaveowners, when they have nothing in common with them except that they’re white?
- Is it ok to compare someone to <anyone bad or good>, when they have nothing in common with them except that they share the same race?
- No. None of these comparisons are ok. They’re offensive. Doesn’t matter the historical subject/object, victim/aggressor, small/large, green/orange, old/young.
- All daemon threads are killed when the last non-daemon thread dies. The non-daemon threads are the main ones; the daemon threads are subservient – they will not hold the main app open.
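- Quick runnable demo of that behavior:
- import threading, time
- def background():
-     while True:
-         time.sleep(1)  # loops forever, but won't hold the process open
- threading.Thread(target=background, daemon=True).start()
- print('main (non-daemon) thread exiting; the daemon dies with it')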
- Supercontest.
- Got a few “AttributeError: ‘AnonymousUserMixin’ object has no attribute ‘id’” errors in the production app, with an IP from singapore. This just means they tried to access a route and weren’t logged in. The flask_user.current_user object is AnonymousUserMixin when the user is not logged in, so they can’t do anything. The error was caught pre-route in the url_value_preprocessor.
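- The guard looks roughly like this (a sketch; blueprint and handler names are illustrative, and flask_user re-exports flask_login’s current_user):
- from flask import abort
- from flask_user import current_user
- @blueprint.url_value_preprocessor
- def reject_anonymous(endpoint, values):
-     if not current_user.is_authenticated:  # AnonymousUserMixin has no .id
-         abort(401)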
- Right now, I have 94 compiled requirements (including testreqs) and 26 input requirements (not including testreqs).
- Remember that while 99% of requests make it to the app and you can check the standard uwsgi/flask logs (make logs-prod), there’s a small chance that nginx catches something. just run `docker logs -f nginx-proxy` to check.
- Added text for harner’s banner.
- Tons more work on the redesign of blueprints, routes, url_prefix, url_defaults, url_value_preprocessor.
- Used request.url_rule.endpoint and request.url_rule.arguments to infer all of the values (in g) that the current view would have access to. Then I conditioned the url_for() navlink builds in the templates based on those, roughly as sketched below.
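- Sketch of the pattern (the g attribute names are illustrative):
- from flask import g, request
- @app.url_value_preprocessor
- def stash_route_info(endpoint, values):
-     g.endpoint = request.url_rule.endpoint
-     g.has_week = 'week' in request.url_rule.arguments  # the args the matched rule accepts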
- It’s all in a great place now. This was frustrating for a few days, but will be much easier to manage/change in the future without breaking things.
- Closed https://github.com/brianmahlstedt/supercontest/issues/102.
- Added apscheduler (advanced python scheduler) for the weekly email pick reminder.
- Not sure how, but two people submitted 10 picks into the db. Both were pairs of the same 5. I deleted them manually, but I’d like to know how this happened.
- Split up the utilities module into a separate util package, containing distinct modules for the specific type of functionality they provided. It was getting too big.
- Originally wrote the email reminder with apscheduler, but there’s a specific flask-apscheduler that made it a little easier. Don’t need to manage the debug app duplication, atexit, or any of that peripheral wrapping.
- Tested with my email only and a modified cron. Worked! The apscheduled job shows up in the regular flask logs too.
- Closed the weekly reminder ticket: https://github.com/brianmahlstedt/supercontest/issues/97.
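- The wiring is roughly this (a sketch; job id, schedule, and function body are illustrative, not the production values):
- from flask_apscheduler import APScheduler
- scheduler = APScheduler()
- scheduler.init_app(app)
- def send_pick_reminder():
-     ...  # build and send the reminder email
- scheduler.add_job(id='weekly_pick_reminder', func=send_pick_reminder, trigger='cron', day_of_week='sat', hour=10)
- scheduler.start()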
- In debug mode, flask loads two instances of the application. If your app does something like schedule an external cron job, it will happen twice.
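- The standard guard for that, if you need the side effect to happen exactly once in debug mode (app/scheduler as in the sketch above):
- import os
- if not app.debug or os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
-     scheduler.start()  # only the reloader's child (the real worker) schedules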
- Added Google Keep back to phone and to browser bookmark. I was missing the ability to immediately open something and jot a note down, especially while I was out and about, away from my laptop/blog.
- The `date` command in bash gives the local datetime with tzinfo.
-
- The amazon rewards card gives you 3% back at amazon. There’s also a “reload” feature on amazon, where you put money in your account (for use at amazon) and it gives you 2%, plus $10 bonus sometimes. I tried reloading WITH my rewards card today. It won’t let you.
- Electric toothbrush finally died, bought a new one. $50. Arrives tomorrow.
- Went to ortho in Van Nuys, got an MRI referral. Total round trip: 2.5 hours. 1.5hr driving, 1hr in the waiting room, 5 min visit with the doctor.
- So far, the entire chain has been:
- spacex doctor -> spacex pt -> general doctor -> bloodwork -> general doctor -> xray -> general doctor -> ortho doctor
- 8 appointments, 8 visits, 0 conclusions. The referral system in medicine is terrible.
- There will be more. MRI -> ortho doctor -> whatever is next (hopefully not just PT).
- Idea: there should be a global medical database. You have to fill out the same new patient forms at every clinic you go to, and 99% of it is the same.
- Every office would have auth keys to a specific subset of the db: general information, surgical history, family history, ortho, respiratory, psychological, whatever. There would be a ton of different tables and only your office’s specialty would be accessible by your office.
- This is just chrome holding all your passwords. They’re collocated in a higher layer, the browser, and then every site uses only the ones it needs (and can only access the ones it needs).
- This would make the patient experience easier, would make referrals easier, would make everything easier.
- You could also do some machine learning once all the data was collocated. I guarantee there are correlations we don’t see yet because all the variables are never observed together.
- Jinja can understand **kwargs, but the syntax is a little different from python: https://medium.com/python-pandemonium/using-kwargs-in-flask-jinja-465692b37a99.
- Supercontest.
- Week 4 did not become available at 10am PT, which would have been the case if I had messed up the timing (because that’s 5pm UTC).
- Restructured almost all of core/results, and its corresponding usage in views, to be sequential. Query for something, pass it to something else, sort it, calculate from it, etc. It’s a much clearer, layered approach now. The analytic functions are modular, and no queries are repeated. Establishment of the necessary information happens FIRST, in the route, at the highest level. It’s much easier to see duplicates now.
- This modularity made my query footprint a lot smaller. The all-picks view would query for all users but only one week, whereas the lb/graph would query for all users for all weeks. They used the same machinery underneath, but the latter called it in a loop. Now it queries all at once and reformats the data as desired after. I remember thinking about this inefficiency when I first wrote it, but didn’t have time to optimize. I do now.
- Commented out all the tests so that test-python runs to completion as it should. Even with no tests running, it’s still worth adding properly to my dev workflow (it compiles requirements and such).
- Tested and it works. App is a lot snappier, and FDT shows a lot fewer queries (like 10 max, for the worst route that has to query everything).
- Updated to harner’s gif. Played around with the background css styling. There are two important ones: background-repeat and background-size. “space” will keep the original size but fill in gaps with whitespace, “round” will distort the image within a slight margin to fill the gaps (both for repeat). For size, you can basically stretch it or keep it the same, fit in parent, etc. Looks good now. Fits one across on mobile, and repeats on larger viewports.
- Made the app gracefully handle the time period between 5pm PT (week_start) and when westgate posts the lines, if there’s a delay. It conditionally shows messages, and even omits the table.
- The heaviest (time-wise) route in the main navs is /graph. The heaviest league is the free one, because it’s the least filtered. And the heaviest season is any full one. Therefore, the current view that takes the longest to load is https://southbaysupercontest.com/season2018/league0/graph.
- Removed wednesday from RESULTS_DAYS and just added a single commit_scores call within commit_lines, so it initializes properly.
- Added queries.is_user_in_league and g.user_in_league.
- Fixed a few things to make the new “return empty but valid results for new lines and new picks” work.
- Confirmed that newly committed lines from westgate enter the db with timezone info, and timezone = utc (the rest were manually added to the migration – I wanted to make sure it was consistent).
- Did the usual wed deployment.
- If you follow a ctag in vim, don’t edit it there. It won’t save by default. This is just for looking.
-
- To remove from a collection that represents a many-many relationship, just delete that list element and commit. You could also wipe it all by doing something like
- user = db.session.query(User).filter(x).one()
- user.leagues = []
- db.session.commit()
- I docker-compose downed the 3 bmahlstedt.com containers on the droplet so the supercontest can have the full resources of that tiny machine. I’ll move bmahlstedt.com elsewhere. I’ll have to remember to bring it back up for interviews, etc.
- Cooked the soaked garbanzo beans. Scalded my hip by splashing boiling water directly on it. Was bad. Blistered up and got all oozy.
- Bought bulk charcoal on amazon. Wasn’t prime shipping, but it was $45 for 4x18lbs. Briquettes for $1/lb is usually a good deal – this is half that.
- Big fresh order. $75. Should be good until tough mudder tahoe trip.
- I should tweet this:
- “[B*tches] aint leavin til 6 in the morning” – Snoop Dogg, Gin and Juice
- “Parties dont stop til 8 in the morning” – Jermaine Dupri, Welcome to Atlanta
- what kind of latenights are these rappers getting into
- SQLAlchemy obviously does a join for all relationships. For many-one, it creates a member and for many-many it creates a collection. You can filter against these directly through the ORM, which is nice.
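- E.g. (models/columns illustrative): .has() filters through a many-one member, .any() through a many-many collection:
- picks = db.session.query(Pick).filter(Pick.user.has(User.email == 'a@b.com')).all()
- users = db.session.query(User).filter(User.leagues.any(League.name == 'paid2019')).all()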
- Went to costco to buy beef ribs.
- Interesting, but not surprising: finishes get more likely as you go up in weight class.
- Supercontest.
- The league change alone was very simple. I only had to pass the league ID to like 5 places, and then add it as a filter to like 3 queries. This is due to the efficient layering of the query structure.
- Confirmed in the league deploy that the flask monitoring dashboard data persists across versions now.
- I could theoretically remove the entire dbsession.joins module now and just query on the backref relationships, but it’s slower. The eager joins are much faster than going through the orm for everything.
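- The eager version is a one-liner, e.g. (relationship name illustrative):
- from sqlalchemy.orm import joinedload
- users = db.session.query(User).options(joinedload(User.picks)).all()  # one joined query instead of N lazy loads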
- Made it so that if a user is in a non-free league for the requested season, the default is to display that. If you’re only in free, it shows free.
- Adding coverer to the Score table, collocated as closely as possible to the information that it depends upon – this makes more sense than putting it in the Line table. There’s also no real reason that it needs its own table, it’s not distinct enough. It will update in realtime, not waiting until the game is finished. This is because some views want to update colors before the games are finished, but other views want to hold info until status=F/FO (like the lb). The information is there, you can do either.
- All of the same logic above for coverer in the score table is repeated for points in the pick table.
- is_league_in_season should always return True when checked for the free league. Fixed.
- Did the usual efficiency optimization. Looked at FDT, removed unnecessary queries, consolidated others. Saved a ton of time, benchmarks are back down.
- Renamed season_week_blueprint to data_blueprint.
- Today was pretty dumb. Reverted a lot of the changes. Overall, pretty underwhelmed by the inability to nest blueprints, conditionally handle values, set up url defaults, preprocess, etc. Sharing app state to build urls for navlinks should not be this hard.
- Ended up using flask-caching to just memoize the calls that were repeated, rather than solving the problem directly in application logic.
- I have it configured to SimpleCache, which is just a python dict with key/values. Not the most thread safe, but should be fine.
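- The setup is tiny (timeout and function names illustrative):
- from flask_caching import Cache
- cache = Cache(config={'CACHE_TYPE': 'simple'})  # in-process dict, one per worker
- cache.init_app(app)
- @cache.memoize(timeout=60)
- def expensive_lookup(season, week):
-     ...  # repeat calls with the same args hit the cache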
- Checked the FDT profiler to see whether python functions, not just sqlalchemy queries, were being repeated. Sorted by cumulative time, then calls, to make sure that nothing from my dist (supercontest.*) was being called a lot and taking a lot of time.
-
- The droplet was slogging along.
- Ran docker system prune --all, reclaimed 7.5G.
- There is only one core, it’s usually railed at almost 100% when deploying, handling a request, etc. It idles at 1-2% otherwise.
- There’s only 1G RAM. There’s about 24G disk space. 0 swap.
- Tried moving the wheelbuild from the intermediate container to a simple install in the final container. Still would hang.
- sudo reboot. Wanted to restart the docker service but just did a whole system update anyway.
- (unrelated) cleared the release updater with:
- sudo truncate -s 0 /var/lib/ubuntu-release-upgrader/release-upgrade-available
- sudo apt update && sudo apt upgrade
- sudo apt full-upgrade
- sudo apt autoremove
- `sudo update-rc.d docker defaults` to make docker start on boot.
- After system restart, mem went down to about 200M, 20%. It was at about 80% before, even before building the image.
- Tried rebuilding the image for supercontest, worked this time. Looks like it was just system overload (I’m straining this tiny machine). A restart/upgrade every now and then won’t kill us.
- docker-compose restart supercontest-app-prod clears a decent amount of mem.
- docker-compose down before running the new build also clears a lot of space.
- The flask monitoring dashboard has a db backend, obviously. It’s only 4M right now.
- /proc/sys/vm/swappiness is how aggressively your system will use swap. It’s configurable. 60 is the usual.
- Pruned my laptop’s docker system as well. Reclaimed 21GB.
- Lost both fantasy games, both pretty badly.
- Supercontest.
- Added a persistent (named) docker volume for the flask monitoring dashboard. It was getting wiped new on every deploy. Confirmed it stayed.
- Added ipython to the dev image so that the manage.py shell opens it by default.
- Did some backlog cleaning and milestone organization. Issue count is in a good place.
- Centered the headers on the rules markdown. Swapped the picks/points column on the all-picks view so the colored points cells would stand out, not adjacent to the full main table.
- With Monday, Sunday, and Thursday games combined, there are ~18 hours of active scoring during the week. This is a little over 1000 minutes. If I fetch the nfl scoresheet xml once a minute, that’s 1000 requests a week. Totally handle-able. With our current user base of ~50, and each user opening the app (or different views within it) about 10 times a week, we’re already at half that traffic. Much better to control this and keep it static around 1000, rather than proportional to our user base as we scale.
- With a backref or backpopulating relationship in sqlalchemy, graphene will break unless you name your SQLAlchemyObjectType classes distinctly from the model name. E.g., Week becomes WeekNode.
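- I.e. something like:
- from graphene import relay
- from graphene_sqlalchemy import SQLAlchemyObjectType
- class WeekNode(SQLAlchemyObjectType):  # not "Week", which would clash with the model
-     class Meta:
-         model = Week
-         interfaces = (relay.Node,)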
- Relationship changes are just in the ORM. They don’t change any of the raw db, no sql, no migration/upgrade required.
- Substantially cleaned the makefile and admin.md.
- Added backrefs for all existing FKs in the models. There’s a lot more information now!
- I don’t think you can filter on a backref with graphene. Still, it’s useful to be able to see all the info (basically a join) in graphiql.
- The automigration for the table creation of league and league_user_association was easy. The only manual changes to the migration file were the creation of the 2018 and 2019 paid leagues, then adding users to them.
- To find all the users from 2018:
- from supercontest.dbsession.joins import join_to_picks
- all_picks = join_to_picks()
- results = all_picks.filter(Season.season == 2018).distinct(User.email).all()
- emails = [user.email for _, user, _, _, _, _ in results]
- In the migration file, I create the league table and the league_user association, then I find all the users and add them appropriately. This migration was pretty cool.
- Added full league functionality.
- league0 is the free league. It’s not an actual league row, it’s just “don’t do any filtering for league”. It’s the same as the app is today, before the league change. This allows users to play for free, any year they want.
- The url_for(request.url_rule.endpoint, *args) calls are layered. The season navs and league navs are lumped. They pass season and league, because they’re the top level. The week navs pass season, league, and week, because they’re the lowest. The information beneath (like what tab you’re on: matchups or lb) is passed through request.url_rule.endpoint. You do not need to provide any lower information; i.e. league navs don’t have to pass week or main tab, and main navs don’t have to pass week.
- The nav_active js obviously matches the element’s id to the route, not the displayed text. This allows you to show whatever you want while doing unique endpoint-nav matching.
- Use g.<attr> in the templates, that’s what it’s for. The redefinition to variables in layout.html is for javascript only, not use in html.
- Rewrote the url_defaults and value_preprocessor to be MUCH clearer about the actions each is taking, why, and when.
- The hardest part of the league change was handling all the navs and links moving everywhere properly. This is a little easier in a node app with something like react/redux, where you have a full state container.
- Did some cool conditional behavior on the matchups view. If a league was passed (via route explicitly, which flask must handle), it strips it and redirects to the leagueless matchups endpoint. The python side can parse and handle the values as args/kwargs at will, but if a url is fetched from a server, there MUST be a @route to handle it. You can’t just url_defaults or url_value_preprocessor it away; you must add a @route to handle it first. Then you can redirect / add data / remove data as desired.
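- Roughly (a sketch; route and endpoint names illustrative):
- from flask import redirect, url_for
- @blueprint.route('/league<int:league>/matchups')
- def matchups_with_league(league):
-     return redirect(url_for('.matchups'))  # strip the league, send to the canonical endpoint
- @blueprint.route('/matchups')
- def matchups():
-     ...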
- Turned on the activity overview (commits vs code reviews etc) for my github profile.
- Ansible (even with -vvv and stdout_lines assigned to debug vars and such) does not print the stdout and stderr of the remote commands back on the host machine. If you wanna watch, ssh in and do it.
- Reorganized and cleaned the kitchen, moving the machines under the shelves and removing the mat. Looks much better now. Cleaned the fridge/freezer too. Made a new batch of oat milk. Started soaking a large batch of garbanzo beans. Bottled the green tea hibiscus kombucha without a second fermentation. Threw away the scoby – I don’t plan on making another batch soon.
- Made another batch of protein bars. Just pecans, oats, and protein powder in this one. Delicious. Better than the tahini base.
- I didn’t think about it until after, but add some oat milk to the next batch. It will make them stick together better, allow you to add more protein powder, and impart a little taste.
- Gbro didn’t finish the 1/2 cup over the course of the day. I’ll leave it at the same serving for now (1/2c once a day instead of twice). Online resources say that appetites vary, but a healthy cat weight is one where you can “easily feel the ribs”. We’re not close.
- GitLab.
- https://www.youtube.com/watch?v=nMAgP4WIcno.
- Created an account, started to look through the full capabilities.
- Combines Stash (SCM), JIRA (tickets), and Bamboo (CI/CD).
- It’s not an extension to github, it’s a full alternative. It’s a competitor.
- You can edit in browser, merge requests (they call them the correct thing, not pull requests!), create boards, sprint plan, etc.
- You get a lot of CI configuration right out of the box. It infers your language, creates a container, runs static analysis, checks licenses, runs security tests – all without you doing anything. You can, of course, customize all of this. They have advanced stages for dynamic testing, CSRF, and more.
- It autocomments a lot of helpful information on the MR. Performance change, code quality deltas, etc.
- CD has different environments, you can do partial deployments, more.
- Microsoft bought GitHub last year for $7.5B.
- I’m going to finish these few big tickets and then I’ll migrate all my projects/repos over from github to gitlab.
- https://www.programcreek.com/python/ is an awesome website. It basically just scrapes open source python projects for whatever you’re looking for. In most cases, you’ll get a working implementation of the string you’re curious about.
-
- Remember to pass python lists through jinja with the |tojson filter for use in javascript. Same with bools. For dicts, dump them python side then pass with |safe.
- Supercontest.
- Added row count (rank, I guess, since it’s sorted?) to the all-picks view. Makes it easy to compare to 45 (of the lb, which shows all users with season picks instead of week picks).
- Added a header with total pick counts for each team, and the colored status of every game. This is the only place that has this info now; previously it would just highlight your picks on the matchups tabs, and then everyone’s picks on this tab (which might have excluded teams; a frequent occurrence).
- App behaved well all day with the sunday rush, even with all the new changes (which were substantial). The live updating of points/possible points for completed games was awesome. It ranked everybody dynamically, and was interesting to watch throughout the day.
- Rewrote the `joins` module to properly use joins instead of subqueries. This is a lot cleaner. It removes all the `.c.` nonsense, and I don’t have to be explicit (manually) about all the columns to forward through the joins. The only slightly annoying thing is that it returns the query results as a tuple divided by source table, so you have to unpack it to get the data you want. With the subquery structure, it was all flat so I could access the data directly with the column name.
- Made the leaderboard sort with the same 3 keys as the all-picks view: total points, percentage, user id.
- Added percentage at the top of the lb for each week, showing the performance of the entire league (theoretically should be close to 50).
- Finished the graphql update. Fully got filters working. Didn’t do joins. Uploaded a new client pkg to pypi with examples. Added twine instructions to ADMIN.md.
- augmented-ui is a css kit you can include via cdn or npm pkg that transforms your site into cyberpunk: https://medium.com/better-programming/augmented-ui-cyberpunk-inspired-web-ui-made-easy-d463c0371144.
- Foam roll on the legs = crucial for training.
- Awesome website that does the same thing Ken’s script used to do: https://www.fantasy-football-power-rankings.com/espn/1187979/2019. Power rankings, where it lists your record if you were to play every other team every week. Just like total points for, it is a more absolute metric of success without the random luck of head-head matchup order. Yahoo already reports this in your weekly recap, but ESPN doesn’t, so this site provides it. https://github.com/JakePartusch/fantasy-football-power-rankings.
- If joining the same table twice for multiple conditions, use sqlalchemy.orm.aliased. Same as a simple `table AS table2` rename in sql.
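- E.g. (models/columns illustrative):
- from sqlalchemy.orm import aliased
- favorite, underdog = aliased(Team), aliased(Team)
- lines = db.session.query(Line).join(favorite, Line.favorite_id == favorite.id).join(underdog, Line.underdog_id == underdog.id).all()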
- Gonna feed gbro 1/2c twice a day instead of leaving big bowls out to self feed. This is slightly more than the recommended amount, but he’s definitely overweight. I want to see how much he’s consuming, then briley can adjust as he wants.
- If you create a list in python with [x] * n, all n elements point to the same object. If x is mutable and you update it through one element, every element will mirror the same update.
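- Demo:
- row = [0] * 3
- grid = [row] * 2  # two references to the SAME inner list
- grid[0][0] = 99
- print(grid)  # [[99, 0, 0], [99, 0, 0]]
- safe = [[0] * 3 for _ in range(2)]  # independent rows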
- Made another batch of tahini, but instead of using 100% sesame seeds, I used a ratio of about 80% sesame to 20% pecans, yielding pecan tahini. Incredible.
- Made another batch of protein bars: oats, dates, protein powder, pecan tahini, banana, cinnamon, cacao powder. Incredible.
- Read through most of the graphql documentation again.
- Remember, it converts resolver names and such from snake_case to camelCase. I disable this during schema instantiation with auto_camelcase=False.
- Within an ObjectType class, each resolve_x method corresponds to the field x that you define in the constructor.
- They intentionally do not allow select *. Graphql is meant to be deliberate: fetch only what you need, not all cols. https://github.com/graphql/graphql-spec/issues/127.
- SQLAlchemyConnectionField is the magic liaison. It should map most of your entire model: all cols, field types, sortability, etc.
- There is an extension to this that allows filtering: https://pypi.org/project/graphene-sqlalchemy-filter/. graphene-sqlalchemy can’t do it right out of the box (simply). You’d have to add custom resolvers that pass args through query.filter() etc.
- It’s worth mentioning: the documentation surrounding this effort is horrendous. I have a very simple use case: take a few sqlalchemy models with foreign keys, allow flask graphql queries with filters on cols, for both direct queries as well as queries on joined tables. This common implementation should be trivial to implement, and it’s not even close to that.
- Filters: eq, ne, like, ilike, regexp, is_null, in, not_in, lt, lte, gt, gte, range.
- Example: {seasons(filters:{season_start_lt:"2019-06-01"}){edges{node{id season}}}}. This would return just the rows from 2018 (lt = less than), and only the id and year number.
- Datetimes are of the form "2019-11-01T00:00:00".
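- The server-side setup looks roughly like this, per the package README (model, node, and field choices are assumptions, not my actual schema):
- import graphene
- from graphene_sqlalchemy_filter import FilterableConnectionField, FilterSet
- class SeasonFilter(FilterSet):
-     class Meta:
-         model = Season
-         fields = {'season': ['eq', 'lt', 'gt'], 'season_start': ['lt', 'gte', 'range']}
- class SeasonConnection(graphene.relay.Connection):
-     class Meta:
-         node = SeasonNode
- class Query(graphene.ObjectType):
-     seasons = FilterableConnectionField(SeasonConnection, filters=SeasonFilter())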
- sqlalchemy has an `add_columns` which you can call after a join to create a more sql-standard select return table. https://docs.sqlalchemy.org/en/13/orm/query.html#sqlalchemy.orm.query.Query.add_columns. This can be used to get around the necessity of unpacking the multiple table results if you query a join. You can even add labels.
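- E.g. (models illustrative):
- rows = db.session.query(Pick).join(User).add_columns(User.email.label('picker')).all()
- for pick, picker in rows:  # still a tuple, but flat and labeled
-     ...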