-
- SBSC. Continued on the model/schema redesigns.
- `session.refresh(var)` if you’ve committed changes that affect it (like for aggregators/observers).
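- A minimal sketch of that pattern (the `Game`/`opponents` names here are just stand-ins, not the exact SBSC models):

```python
from sqlalchemy.orm import Session

def set_line(session: Session, game, new_line: float) -> None:
    game.line = new_line
    session.commit()        # observers/aggregators may rewrite related rows during the commit
    session.refresh(game)   # re-SELECT so the in-memory object matches what the DB now holds
    print(game.opponents[0].coverage)  # derived/observer-written values are now safe to read
```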
- `sqlalchemy_utils.observes` appears to be smart enough to NOT trigger directly on the objects/vars/paths you pass to the decorator, but rather on the nested cols you actually use within the function. This is good in one way: you can observe the high level, and it will only trigger on changes to the lowest dep (not ANY change to the highest obj tree). This is bad in one way: it’s a bit misleading, when you think that the value you pass to the observer explicitly defines the trigger scope.
- Actually I’m not sure about the above^. When I had `observes(game)`, changing `game.line` triggered both opponents to update, but changing `game.opponents[0].score` only triggered `opponents[0]` to update. Overall, I just want to be explicit about what triggers the observer calc.
- You CAN have tables FK to themselves (and be self-referential in sqla): https://docs.sqlalchemy.org/en/20/orm/self_referential.html. This is common for adjacency lists, where a Node table might have a parent/child Node.
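- A quick sketch of what the linked docs show, in 2.0-style declarative (illustrative `Node` model, not SBSC):

```python
from typing import List, Optional

from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class Node(Base):
    __tablename__ = "node"

    id: Mapped[int] = mapped_column(primary_key=True)
    parent_id: Mapped[Optional[int]] = mapped_column(ForeignKey("node.id"))  # FK back to the same table
    data: Mapped[str]

    children: Mapped[List["Node"]] = relationship(back_populates="parent")
    parent: Mapped[Optional["Node"]] = relationship(back_populates="children", remote_side=[id])
```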
- Renamed `Opponent` table to `Contestant` to avoid confusion.
- Important: Looks like `sqlalchemy_utils.observes` can observe cols of the local table, as well as relationships. But for relationships, it seems to ONLY support aggregation functions. Example: you have a one-to-many relationship. The observer can do a nested calc like `child_count = len(parent.children)`. But it CANNOT do something like `first_child_plus_decade = parent.children[0].age + 10` and trigger on the `children[0].age` change. Unfortunately, that’s exactly what I need for `Contestant.score` (for the opponent).
- To fix this, I just denormalized `opponent_score` into every row of the `Contestant` table.
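- For reference, the aggregation case that does work looks roughly like this (parent/child names are illustrative, following the `sqlalchemy_utils` docs pattern, not the SBSC schema):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship
from sqlalchemy_utils import observes

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    child_count = sa.Column(sa.Integer, default=0)

    @observes("children")
    def child_observer(self, children):
        # Aggregating over the relationship works fine...
        self.child_count = len(children)
        # ...but per the note above, something like children[0].age + 10 would
        # not re-trigger when a single child's age changes.

class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    age = sa.Column(sa.Integer)
    parent_id = sa.Column(sa.Integer, sa.ForeignKey("parent.id"))
    parent = relationship(Parent, backref="children")
```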
- Ended up keeping the change to switch the order of `Game` and `Contestant` in the hierarchy. Keeps the purity (standardization) of the many-to-one relationship and removes the hardcoded 1s and 2s. Gonna have some circular referencing in the bidirectional relationship between them, no matter which wraps which. And could have made `Pick` only FK to `Contestant` no matter what, since they’re bidirectionally related, but kept that change as well. `Pick` -> `Contestant` -> `Game`. No need for `Pick` FKs to both.
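- Rough sketch of that final hierarchy (column lists trimmed and partly assumed, so not the exact models):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Game(Base):
    __tablename__ = "game"
    id = sa.Column(sa.Integer, primary_key=True)
    line = sa.Column(sa.Float)
    week = sa.Column(sa.Integer)

class Contestant(Base):
    __tablename__ = "contestant"
    id = sa.Column(sa.Integer, primary_key=True)
    game_id = sa.Column(sa.Integer, sa.ForeignKey("game.id"))  # standard many-to-one up to Game
    score = sa.Column(sa.Integer)
    opponent_score = sa.Column(sa.Integer)  # denormalized, per the observer workaround above
    game = relationship(Game, backref="contestants")

class Pick(Base):
    __tablename__ = "pick"
    id = sa.Column(sa.Integer, primary_key=True)
    contestant_id = sa.Column(sa.Integer, sa.ForeignKey("contestant.id"))  # no separate Game FK needed
    contestant = relationship(Contestant, backref="picks")
```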
-
- Steph on pace for his 2nd 50/40/90 season. Only happened 13 times. https://en.wikipedia.org/wiki/50%E2%80%9340%E2%80%9390_club
- NY is empire state. CA is golden state. NJ is garden state. All: https://en.wikipedia.org/wiki/List_of_U.S._state_and_territory_nicknames
- The mesh screen in front of mics is called a pop filter. Plosives can cause popping sounds on the recording; the filter slows the fast-moving air from speech.
- Any key that can guarantee uniqueness in a db is called a candidate key. In simple tables, this can be the primary key, an integer that autoincrements. But sometimes it’s useful to use other columns. Say, email in a user table. Now imagine you have a locations table. Address alone might not be unique – there could be multiple units. You can use a composite key like address + apartment_number to form a unique/candidate key.
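- For example (table/col names made up for illustration), in sqla a composite candidate key can be enforced with a unique constraint alongside a surrogate PK:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Location(Base):
    __tablename__ = "location"
    __table_args__ = (
        sa.UniqueConstraint("address", "apartment_number", name="uq_location_address_unit"),
    )

    id = sa.Column(sa.Integer, primary_key=True)      # surrogate/autoincrement primary key
    address = sa.Column(sa.String, nullable=False)    # not unique on its own (multiple units)
    apartment_number = sa.Column(sa.String, nullable=False)
```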
- SBSC. Wrote the migration for lines->games and teams->opponents. Was one of the more complicated migrations I’ve written for sbsc.
- SBSC. Did some more model/schema redesign. Particularly the hierarchical relationship between `Game` and `Opponent`.
-
- Went to Kings@Nets.
- Private work.
- The multigame view for march madness is very clean: https://www.ncaa.com/march-madness-live/multigame
- SBSC.
- Observers can observe columns or entire objects. Any dot-notated path, eg an observer on `game` within `Score` would work (straight object), and an observer on `game.opponent1.prediction.name` within `Score` would also work (nested col through many other objects).
- Merged the old `lines` and `scores` tables. It’s now a single table (`games`) with datetime/line/week/status and opponent1/2, then the opponent objects each have score/margin/team/prediction/location/coverage.
- Made the `picks` objs FK to `opponents`, not `teams`.
- Moved `coverage` and `score` into `Opponent`, rather than `Score`. Also added `margin` there.
-
- Private work.
- BoA got more than 15B in deposits over the last couple days as people move to bigger (safer?) banks.
- NSU advanced to the DII elite 8 last night and got reseeded 1st: https://www.ncaa.com/brackets/basketball-men/d2/2023
- FB cutting another 13%. Anchorage 20% (they’re only a few hundred strong).
- Finished Community.
- SBSC.
- Be careful with the team/prediction/location schema change; although historically team1=favorite and team2=underdog in the database, that’s not guaranteed. Either team can have any of the 3 predictions (favorite/pickem/underdog). I’ve written all logic in the app to respect that (look it up where necessary).
- Added observers for basically every stat calculation (eg overperformance margin). The only exception is `pick%`. It’s an SBSC stat. I intentionally just calculate it in the `stats` module – I don’t want this being a `Team.pick_counts` col observing the `Pick` table (for example), because then it would update on every pick (frequent). Just calc when the stats view is requested/read by a user (infrequent).
- Basically rewrote the whole app on this ticket. Data, backend, and frontend.
- Renamed line/lines to game/games to be more clear about what the table is.
- Instead of associating team/prediction/location on each game row (duplicated per side), I split it out so each game just has two objects, opponent1 and opponent2. They FK to the Opponents table, which has cols for team/prediction/location (nested FKs to their respective tables).
-
- Signed into desktop app with nvidia, updated graphics drivers for the first time in a LONG time. Hopefully this fixes the issue of the right monitor lagging.
- Private work.
- Roasted the 7lb pork butt.
- Cumin is the dried/ground seeds of a plant in the parsley family.
- SBSC. Made a pretty drastic change to split the `favored_team*` and `underdog_team*` cols in the `lines` and `scores` tables into just `team1*` and `team2*`, then separate cols for `*_prediction`, `*_location`, and `*_coverage`.
- This cleans a lot of the logic, and normalizes a bit further.
- It also spills into the `home_team` and `coverer` logic, cleaning it up there as well. It also makes them non-nullable, so much easier to manage.
- If something is able to be looked up via table relationships, leave it that way. Example: A pick obj has a team obj and a line obj, the line obj has a score obj, the score obj has a coverage obj (for that picked team obj), the coverage obj has a points col.
- If something requires a calculation, add an observer. Example: A score obj has to compare the score cols of team1/2 against its line obj and set each team’s coverage if cover/push/noncover.
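- The coverage calc itself is roughly this (names and line-sign convention assumed here; in the app it runs inside an observer on the score cols, per the bullet above):

```python
def coverage(team1_score: int, team2_score: int, line: float) -> tuple[str, str]:
    """Return (team1_coverage, team2_coverage) as cover/push/noncover.

    Assumed convention: a negative line means team1 is favored, so team1 covers
    when its margin of victory exceeds the line's magnitude.
    """
    adjusted_margin = (team1_score + line) - team2_score
    if adjusted_margin > 0:
        return "cover", "noncover"
    if adjusted_margin < 0:
        return "noncover", "cover"
    return "push", "push"
```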
-
- Private work.
- Remember USDC depegged to $0.87 briefly in the wake of SVB.
- Continued melodysheep (after end of time) with their 3-part series Life Beyond.
- Inevitable that aliens exist. There are more habitable (distance to star, elements, liquid) planets/moons in our universe than grains of sand on earth.
- Big bang ~14B years ago. Life on earth started developing ~4B years ago. We have ~100T years until the last star dies and life is extinguished. We’re only at the very beginning. Much more time for primal life to develop, intelligent life to get better at travel/communication, etc.
- Also gets into the museum of alien life, what they could be like. Our planet (earth) has ~10B people.
- Our solar system has 8 planets. Our galaxy (milky way) has ~100B stars. Our universe has ~1T galaxies.
- Kardashev scale, type 1/2/3 civilization. 1 = you can capture all the energy available to your planet. 2 = you can harvest all the energy from your star. 3 = your entire galaxy.
- If the whole universe is an ocean, we’ve searched about a cup of it. There are some corners we can never reach because of the expanding universe.
- Pandas refresher (quick sketch after this list).
- Tabular data. Rows and cols.
- Read and write. Import and export.
- Select and query. Some for iterating/dev. Faster alts for prod.
- Join.
- Plot.
- Computed columns.
- Aggregations. Summaries. Statistics.
- Reshaping. Lots of functions to move data and relations around.
- Numpy is a bit faster and more memory efficient. Pandas has more high-level functionality. Numpy does support named axes (structured arrays), but in general pandas is better at multidimensional.
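- Tiny illustrative pass over most of the items above (toy data, not SBSC’s):

```python
import pandas as pd

df = pd.DataFrame({
    "team": ["A", "A", "B", "B"],
    "week": [1, 2, 1, 2],
    "score": [24, 17, 30, 21],
    "line": [-3.5, 2.0, -7.0, -1.5],
})

df["score_plus_line"] = df["score"] + df["line"]               # computed column
week2 = df.query("week == 2")                                  # select/query
summary = df.groupby("team")["score"].agg(["mean", "sum"])     # aggregation/summary
wide = df.pivot(index="team", columns="week", values="score")  # reshaping
df.to_csv("scores.csv", index=False)                           # write/export
# summary.plot.bar()                                           # plot (needs matplotlib)
```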
- SBSC.
- Little annoying that you can’t just set the width of the bars in plotly for horizontal bar charts, and then let height automatically adjust: https://community.plotly.com/t/can-you-scale-the-dcc-graph-height-automatically-to-content/45471/3. Computing height is easy, but you have to include the top section for title and such.
- Added vlines for the average of ALL traces, not just the first.
- `categoryorder` is pretty cool (https://plotly.com/python/reference/layout/xaxis/#layout-xaxis-categoryorder), but I’m doing a sort across all traces so I can’t just do a single sort with knowledge of only the keys.
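- Roughly what those two things look like together (toy data; compute the category order across ALL traces myself, then one vline at the overall average):

```python
import plotly.graph_objects as go

trace_a = {"Team 1": 3, "Team 2": 7, "Team 3": 5}
trace_b = {"Team 1": 6, "Team 2": 2, "Team 3": 9}

# Sort categories by their total across every trace, not just the first one.
totals = {k: trace_a[k] + trace_b[k] for k in trace_a}
order = sorted(totals, key=totals.get)

fig = go.Figure([
    go.Bar(y=list(trace_a), x=list(trace_a.values()), orientation="h", name="A"),
    go.Bar(y=list(trace_b), x=list(trace_b.values()), orientation="h", name="B"),
])
fig.update_layout(yaxis=dict(categoryorder="array", categoryarray=order))

# One vline at the average of every value across all traces.
all_values = list(trace_a.values()) + list(trace_b.values())
fig.add_vline(x=sum(all_values) / len(all_values), line_dash="dash")
```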
- As much as possible, frontend views try to match database views. Little to zero modification. So the `results` module is mostly just querying the sqla objects, very slight calc wrappers on the data (rollups, payouts, etc), then serializing to the app. The `stats` module, on the other hand, is purely meant for statistical analysis. So it uses `numpy` and `pandas`, then serializes to json (with layout/config) and passes to the frontend for plotly to render.
-
- Just 3 devices in my home aren’t smart and need updating for DST: oven, coffee, aquarium autofeeder.
- There isn’t a leak in my coffee maker. It has an overflow hole on the back.
- All homemade mealprep: smoothies, powders, peanut butter, oat milk, liver, protein bars.
- Cool video: https://www.youtube.com/watch?v=uD4izuDMUQA. Stars extinguishing into white dwarfs, then black dwarfs. Life becomes extinct very quickly. Then the cold dark universe exists a long long long time after. Black holes absorb a lot of stuff. But the (accelerating) expansion of the universe is impossible to catch up with. Potential for multiverse.
- Little bit of SBSC work, little bit of private work.