-
- Played around with maintenance banners for a downed website. The key is to return 503 so search engines know the outage is temporary. Then, for users, you can serve a custom html template with a friendly, descriptive message.
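- A minimal sketch of the pattern in flask (the `MAINTENANCE_MODE` flag and `maintenance.html` template are made-up names; in practice you'd often do this at the webserver layer instead):

```python
# Hypothetical maintenance toggle. A non-None return from before_request
# short-circuits every route, and 503 + Retry-After tells crawlers the
# outage is temporary.
from flask import Flask, render_template

app = Flask(__name__)
app.config['MAINTENANCE_MODE'] = True  # flip during the outage

@app.before_request
def maintenance_banner():
    if app.config['MAINTENANCE_MODE']:
        return render_template('maintenance.html'), 503, {'Retry-After': '3600'}
```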
- Helped jcriss with the Grom mods: bar-end mirrors, bunny-ear blockoffs with integrated turn signals, fender eliminator with brake and turn signals, sprocket, and shorty brake/clutch levers.
- Sliced my finger open pretty badly with an X-Acto knife while slicing a bundle to rewire.
- Smoked ribs again for Game of Thrones. This time did NOT use hot sauce as the only marinade! Added a little curry to the rub.
- Got the replacement Amex and activated it. Got new insurance cards and put them on all the bikes.
- Received my medical identification card and the choice form. You must elect one of two plans, LA Care or Health Net; both are HMOs. I chose Health Net because Torrance Memorial Medical Center, my closest hospital, is in their network (although Health Net's Medi-Cal plan covers emergency services anywhere in the United States!). I have no intended doctor visits or anything else to sway the decision. Chose Salahuddin Aschrafnia as primary care simply because it's close (Redondo) and decently rated.
- Unboxed the new tent and sleeping pad for LiB, practiced assembling everything (even hitching the guy lines).
-
- Puppeteer is a popular headless-browser automation and scraping library, like Selenium.
- HN/MD.
- Completely reorganized all of my files in Google drive. This took a couple hours.
- Watched Deliverance. It was hyped as one of the most disturbing movies of all time. It was average. The movie was 2 hours long and crawled. There were essentially 3 scenes: rape, river peril, and cliff climbing. It could have been 60 minutes shorter. It was neither shocking nor provocative. The banjo duel was the best part.
- Ordered a single-person tent and sleeping pad (totaling $125, but they're good quality and reusable) for Lightning in a Bottle.
- I get a couple recruitment calls a day. Considering taking my number off LinkedIn.
- They're starting this new tactic of "we're looking for the best of the best for <X> role, can you recommend one of your colleagues who is looking for an amazing new opportunity?" It manipulates the tone by flattering your altruistic side, seeming informal, and taking the pressure off the real focus before circling back to pursuing you.
- Supercontest.
- Played around with multiple docker-compose files, splitting the dev/prod envs into separate files rather than separate services within the same file. I prefer it the way I had it.
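- For reference, the split-file approach relies on the override pattern (a sketch; only the SC_DEV var is from my real setup):

```yaml
# docker-compose.yml -- base definition shared by both envs
services:
  app:
    build: .
    environment:
      - SC_DEV=1   # dev default

# docker-compose.prod.yml would then override just the differing keys
# (e.g. SC_DEV=0), and you'd launch with:
#   docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
```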
- Had to recalibrate my thinking on the reverse proxy. I'm not adding an nginx container to forward traffic to multiple other containers running nginx as the server frontend for other app containers. I'm simply taking the existing nginx and certbot containers that serve supercontest, and swapping them for nginx-proxy and letsencrypt-nginx-proxy-companion so that they can be the webserver for multiple service domains (running in separate other containers).
- Those two containers are therefore abstracted outside of the supercontest project. They probably won't be in a compose file; they'll just be a command run prior to starting the supercontest containers, with the sc compose file having just the app and the db.
- This is a wonderful compartmentalization, in both technical and general terms. Service administration has advanced wondrously in the past decade.
- Created a repo for my rc files and uninstalled dejadup: https://github.com/brianmahlstedt/config.
-
- Supercontest
- Remember, you have to docker-compose down before running the tests on the host. Otherwise, the app will not be able to start (socket already in use).
- Added tests for http auth and the graphql endpoint. CSRF is disabled in the python tests, so I gave a full client example in the readme.
- Stopped using relays and ConnectionFields, opting instead for graphene's plain List type. You lose some automatic pagination and other cursor capabilities, but querying becomes a lot simpler: you don't have to specify edges and nodes every time. As the app grows, switching back would be very easy.
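- Roughly what the List-based schema looks like (a sketch; the Pick model here is a minimal stand-in for my real models):

```python
import graphene
from graphene_sqlalchemy import SQLAlchemyObjectType
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Pick(Base):  # stand-in for the real sqlalchemy model
    __tablename__ = 'picks'
    id = Column(Integer, primary_key=True)
    team = Column(String)

class PickType(SQLAlchemyObjectType):
    class Meta:
        model = Pick
        # the relay version would add: interfaces = (graphene.relay.Node,)

class Query(graphene.ObjectType):
    # Plain List: clients just query { picks { team } }. The relay
    # alternative, SQLAlchemyConnectionField, adds cursors/pagination but
    # forces { picks { edges { node { team } } } } on every query.
    picks = graphene.List(PickType)

    def resolve_picks(self, info):
        return PickType.get_query(info).all()

schema = graphene.Schema(query=Query)
```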
- Deployed to prod and closed #41 after merge. The `make deploy` convenience target works.
- Graphiql in prod was missing the csrf token. I ensured SC_DEV worked and the csrf protection was initialized.
- Changed the makefile recipes for ansible to pass --key-file at the command line instead of requiring that ssh-agent and ssh-add be run beforehand.
- Added csrf_protect back to the app in dev mode, then exempted the graphql view. This is just wrapping it with the func (instead of the decorator), like `login_required(csrf_protect.exempt(GraphQLView.as_view()))`.
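- In context, I believe the registration looks like this (a sketch; `app`, `schema`, `login_required`, and `csrf_protect` are the objects already discussed in these notes):

```python
from flask_graphql import GraphQLView

# Wrap with the functions rather than decorators, since the view is
# registered via add_url_rule instead of @app.route.
app.add_url_rule(
    '/graphql',
    view_func=login_required(
        csrf_protect.exempt(
            GraphQLView.as_view('graphql', schema=schema, graphiql=True)
        )
    ),
)
```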
- Right now, my graphql resolvers just return everything. I don’t add nodes and fields for people to filter on certain values. I can later, but right now I just expect clients to handle it all.
- Emailed the group with the graphiql link and the python query example.
- `docker top <container>` can be used to check processes in the container.
- You can check the environment of an already-running process with `cat /proc/<pid>/environ` (the entries are NUL-separated, so pipe through `tr '\0' '\n'` to make them readable). Super useful.
- Removed the ps1 stuff from tmux (it was exiting immediately on startup) and removed the ssh-agent stuff from .bash_profile (I explicitly call out the ssh key in my makefile calls to ansible).
- Nginx reverse proxy.
- Multiple websites in different containers on the same machine.
- First create the umbrella network on the host: docker network create nginx-proxy
- Then start the reverse proxy container which uses this network.
- Then start the actual service containers with this network as well. These must have 3 things in their docker-compose yamls (sketched after this list):
- Expose port 80 on the service
- Add the nginx-proxy network
- Add the VIRTUAL_HOST env var for the domain.
- There’s a companion container to handle letsencrypt as well.
- Add a few more volumes to share certs between the containers.
- Add the LETSENCRYPT_HOST (and _EMAIL) env vars for the domain.
- Didn’t finish this, but will resume tomorrow.
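- What I expect each service's compose file to need, per the 3 items above (a sketch; the image name and email are placeholders):

```yaml
services:
  app:
    image: myservice:latest
    expose:
      - "80"                              # 1. expose the service port
    environment:
      - VIRTUAL_HOST=bmahlstedt.com       # 2. nginx-proxy routes on this
      - LETSENCRYPT_HOST=bmahlstedt.com   # for the letsencrypt companion
      - LETSENCRYPT_EMAIL=me@example.com

networks:
  default:
    external:
      name: nginx-proxy                   # 3. join the umbrella network
```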
- Generalized my digitalocean project name and droplet name to MyProject and MyDroplet, since they host multiple services. Added the A and NS records for my second domain (bmahlstedt.com), so digitalocean's nameservers direct to both my domains instead of godaddy. The registrar updated it within like 60 seconds, much faster than last time.
- Since I haven’t finished the reverse proxy yet, https://bmahlstedt.com points to the supercontest application lol. This makes sense, as I’m not VIRTUAL_HOST routing the traffic by domain yet.
- This could be used easily to point multiple domains at the exact same service/site.
- It's also not SSL-trusted (shows red), which makes sense since I haven't certified this domain yet.
-
- Bazel.
- The WORKSPACE file defines the root. Files named BUILD within that root define the rules, pointing at the input sources and defining the outputs. You can have multiple BUILD files; each defines a "package" for bazel. Packages can depend on each other (you need to add "visibility" in the BUILD file), and each can have multiple targets.
- bazel build //path-to-package:target-name
- Say you have a .cc file that prints hello world. Building that target with cc_binary would put it at <workspace_root>/bazel-bin/main/hello-world, which you can then run whenever you want.
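- The BUILD file for that example would be roughly this (a sketch, assuming the tutorial's main/ layout):

```python
# main/BUILD -- Starlark (python-like syntax)
cc_binary(
    name = "hello-world",      # addressed as //main:hello-world
    srcs = ["hello-world.cc"], # input sources for this target
    visibility = ["//visibility:public"],  # let other packages depend on it
)
```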
- bazel-bin, bazel-genfiles, bazel-out, bazel-* are all just symlinks (in your workspace root) to ~/.cache/bazel.
- You can query dependencies of your targets: bazel query --output graph --nohost_deps --noimplicit_deps 'deps(//main:hello-world)'
- Installed graphviz and xdot, common viewers for many things (including bazel dependency graphs).
- http://www.webgraphviz.com/ is an awesome browser viewer, just copy the text output from the command line. Or, pipe it to xdot at the command line.
- The value here is the entire tree. Everything is a file, and the entire dependency graph is known. Therefore, building outputs (binaries, whatever) can be optimized: when outputs need to be rebuilt, only the pieces whose inputs have changed are rebuilt.
- For a language like python that isn't built (compiled) manually, but rather interpreted, this has a lot less value. There are four standard python rules: py_binary, py_library, py_test, py_runtime.
- Looked up some more python/bazel suggestions, watched https://www.youtube.com/watch?v=9mhmGcR6CPo.
- Ultimately, not using this for supercontest or any of my other projects. Simple GNUmake and sx-setuptools are wonderful.
- There is value in a monorepo setting, but the hardest part is getting the dependency resolution down to the file level instead of the python package level.
- Fully achieving this is impossible, because third-party packages are vendored and you can't specify all of those down to the file level.
- If third-party packages started defining as bazel packages instead of python packages, we could get somewhere.
- This is all an attempt to define a language-agnostic packaging standard that ultimately just defines file inputs and file outputs.
- Bazel users absolutely love the word hermetic. It means airtight, people.
- Remember, compiling is just translating to a lower-level language (like assembly, bytecode, machine code…).
- Some *nix reminders.
- An inode is a data structure. It stores metadata like owner, perms, timestamps (last modified, etc.). It does not store the filename or the actual data in the file.
- Hard links are additional names for the same file: same inode, so every hard link "contains" the data equally. Can only hard link files, not dirs. Must be on the same filesystem.
- Soft links (symlinks) are basically shortcuts: they don't contain the data, just a path to it. Can soft link dirs or files. Different inodes. Can cross filesystems.
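- A quick python demonstration of the difference (run it in a scratch dir):

```python
import os

with open('original.txt', 'w') as f:
    f.write('data')

os.link('original.txt', 'hard.txt')      # hard link: new name, same inode
os.symlink('original.txt', 'soft.txt')   # symlink: new inode, stores the path

print(os.stat('original.txt').st_ino == os.stat('hard.txt').st_ino)   # True
# lstat inspects the link itself rather than following it:
print(os.lstat('soft.txt').st_ino == os.stat('original.txt').st_ino)  # False

os.remove('original.txt')
print(open('hard.txt').read())  # still 'data' -- the inode lives on
# open('soft.txt') would now raise FileNotFoundError (dangling symlink)
```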
- To nest bullets in github markdown, leave the hyphen and just put 4 spaces in front of it.
- $PS1 is the shell variable that defines the custom prompt. It's different within tmux vs outside, hence the lack of color. Tried the top 5 solutions to fix this; none worked. Messed with a ton of bashrc and tmux.conf.
- Nginx can directly serve multiple websites (domains) from the same machine. If you are running your services in a container, then you can also use nginx on the host as a reverse proxy to forward traffic to the appropriate containers (where nginx again can be the server for the app-specific request).
- Bought bmahlstedt.com for $21 (2yr contract) through GoDaddy, same as southbaysupercontest.
- If a website tells you to disable your adblocker, you can often just set style="display:none;" on the banner element and then change the background color back to white or increase the brightness.
- GraphQL.
- There are a few places in my application where I translate an email to an ID, an ID to picks, picks to scores, etc. GraphQL should help quite well with the over-fetching and request-chaining that REST is vulnerable to.
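- E.g. one nested query replaces that whole chain of lookups (field names here are illustrative, not my real schema):

```graphql
{
  users {
    email
    picks {
      team
      points
    }
  }
}
```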
- Was created at FB in 2012, earlier than I thought.
- graphene and graphene-sqlalchemy are two python packages that help define graphql schemas around your models. flask-graphql is the extension that adds the /graphql view. gql is the client.
- Added the graphql view, with the query schema wrapped around my existing user/pick/matchup models.
- Created the environment variable SC_DEV and set it to 1 in docker-compose for app_dev. This skips csrf protection and enables graphiql in the browser.
- Wrapped the view_func with login_required() for add_url_rule, rather than decorating it like a normal route. You now need to log in to hit the graphql endpoint, even programmatically.
- In graphiql, ctrl-space will autocomplete with an option dropdown. ctrl-enter will execute the query.
- You can then query from the command line with curl at /graphql?query=<>
- You can then query from python with gql.
- Since the app has direct access to the database, sqlalchemy is fine to perform internal app queries. To go through graphql for the app itself would be weird and inefficient: python -> http through view -> python.
- I am intentionally not adding mutations. This is a read-only interface for users to explore the db.
- Graphiql is an extremely useful interface for users to query the db. I had to do some fancy stuff to extend csrf/auth to the graphql endpoint, but I was successful.
- Added two tests. One verifies that you can auth with the app via basic requests + csrf token (rather than with selenium). The second verifies that the graphql endpoint can return data programmatically. This was simply achieved with json={'query': query}, where query is a docstring with the same content you'd enter into graphiql. Didn't end up needing gql (because I couldn't really use it without hacking my csrf auth mechanism in).
- Ended up enabling graphiql for production, since it’s protected by auth anyway.
- Github offers an API to query their data with graphql: https://developer.github.com/v4/.
- Medium obviously collaborates with freecodecamp.org and codeburst.io.
- Alexa (the traffic-ranking site, not the voice assistant) is another company that monitors internet traffic. They rank the most popular sites: https://www.alexa.com/topsites. In the US the top 24 are: google, youtube, facebook, amazon, wikipedia, reddit, yahoo, twitter, linkedin, instagram, ebay, microsoftonline, netflix, twitch, instructure, pornhub, imgur, live, craigslist, espn, chase, paypal, bing, cnn.
- JWT = JSON web token.
- Extremely useful for programmatically repeating a manual browser request (like a login): open chrome devtools, perform the action, go to the network tab, right click the request, copy as curl, then convert to python requests with https://curl.trillworks.com/.
- It totally depends on the service, but selenium should be able to log in anywhere because it's closest to a real user. For direct auth with requests, the server can expect whatever it wants. Some require certain cookies (which you can get with a naked request and then session.cookies.get_dict()). Supercontest just requires a csrf_token to be passed with your credentials: make a request, save the csrf token from the response, then hit /user/sign-in with your creds and the csrf token.
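- A sketch of that flow with requests (the domain's TLD, form field names, and the csrf extraction regex are assumptions about the sign-in page's html):

```python
import re
import requests

BASE = 'https://southbaysupercontest.com'  # assumed domain
session = requests.Session()

# 1. Naked request; scrape the csrf token out of the sign-in form.
resp = session.get(BASE + '/user/sign-in')
csrf_token = re.search(r'name="csrf_token".*?value="([^"]+)"', resp.text).group(1)

# 2. Auth with creds + token.
session.post(BASE + '/user/sign-in', data={
    'email': 'me@example.com',
    'password': 'correct horse battery staple',
    'csrf_token': csrf_token,
})

# 3. Hit graphql exactly like the test does: json={'query': query}.
query = """
{
  users {
    email
  }
}
"""
print(session.post(BASE + '/graphql', json={'query': query}).json())
```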