- Bazel.
- The WORKSPACE file defines the root. Files named BUILD within that root define the rules, pointing at the input sources and defining the outputs. You can have multiple BUILD files; each defines a "package" for bazel. Packages can depend on each other (you need to grant "visibility" in the BUILD file), and each can have multiple targets.
- bazel build //path-to-package:target-name
- Say you have a .cc file that prints hello world. Building that target with cc_binary would add it to <workspace_root>/bazel-bin/main/hello-world, which you can then call whenever you want.
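- A minimal sketch of that layout (the package name main and file names are illustrative). BUILD files are written in Starlark, Bazel's Python-like configuration language:

    ```python
    # main/BUILD -- defines one target in the "main" package.
    cc_binary(
        name = "hello-world",
        srcs = ["hello-world.cc"],
    )
    ```

    Then bazel build //main:hello-world compiles it and drops the binary at bazel-bin/main/hello-world.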
- bazel-bin, bazel-genfiles, bazel-out, bazel-* are all just symlinks (in your workspace root) to ~/.cache/bazel.
- You can query dependencies of your targets: bazel query --output graph --nohost_deps --noimplicit_deps 'deps(//main:hello-world)'
- Installed graphviz and xdot, common viewers for many things (including bazel dependency graphs).
- http://www.webgraphviz.com/ is an awesome browser viewer, just copy the text output from the command line. Or, pipe it to xdot at the command line.
- The value here is the entire tree. Everything is a file, and the entire dependency graph is known. Therefore, building outputs (binaries, whatever) can be optimized: when something needs to be rebuilt, only the targets whose inputs have changed are rebuilt.
- For a language like python that isn't compiled ahead of time, but rather interpreted, this has a lot less value. There are four standard python rules: py_binary, py_library, py_test, py_runtime.
- Looked up some more python/bazel suggestions, watched https://www.youtube.com/watch?v=9mhmGcR6CPo.
- Ultimately, not using this for supercontest or any of my other projects. Simple GNUmake and sx-setuptools are wonderful.
- There is value in a monorepo setting, but the hardest part is getting the dependency resolution down to the file level instead of the python package level.
- Doing this fully becomes impossible, because third-party packages will be vendored and you can't specify all of those down to the file level.
- If third-party packages started defining as bazel packages instead of python packages, we could get somewhere.
- This is all an attempt to define a language-agnostic packaging standard that ultimately just defines file inputs and file outputs.
- Bazel users absolutely love the word hermetic. It means airtight, people.
- Remember, compiling is just translating to a lower-level language (like assembly, bytecode, machine code…).
- Some *nix reminders.
- An inode is a data structure. It stores metadata like owner, perms, timestamps, and pointers to the data blocks. It does not store the filename (that lives in the directory entry) or the actual data in the file.
- Hard links are additional names for the same file; the data is not copied. The link and the original share the same inode. Can only hard link files, not dirs. Must be on the same filesystem.
- Soft links (symlinks) are basically shortcuts. They do not contain the data. Can soft link dirs or files. Different inodes. Can cross filesystems.
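- The inode behavior above is easy to verify from python (a throwaway sketch using a temp dir):

    ```python
    import os
    import tempfile

    d = tempfile.mkdtemp()
    orig = os.path.join(d, "orig.txt")
    with open(orig, "w") as f:
        f.write("hello")

    hard = os.path.join(d, "hard.txt")
    soft = os.path.join(d, "soft.txt")
    os.link(orig, hard)     # hard link: another name for the same inode
    os.symlink(orig, soft)  # soft link: a new inode whose data is just the target path

    # The hard link shares the inode; the symlink gets its own.
    assert os.stat(orig).st_ino == os.stat(hard).st_ino
    assert os.lstat(soft).st_ino != os.stat(orig).st_ino  # lstat doesn't follow links
    assert os.stat(orig).st_nlink == 2  # two directory entries point at this inode
    ```

    Same thing ls -i shows at the command line.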
- To nest bullets in github markdown, leave the hyphen and just put 4 spaces in front of it.
- $PS1 is the shell variable that defines the custom prompt. It's different within tmux vs outside, hence the lack of color. Tried the top 5 solutions to fix this; none worked. Messed with a ton of bashrc and tmux.conf.
- Nginx can directly serve multiple websites (domains) from the same machine. If you are running your services in a container, then you can also use nginx on the host as a reverse proxy to forward traffic to the appropriate containers (where nginx again can be the server for the app-specific request).
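- A rough sketch of that reverse-proxy setup on the host (the ports and file path are made up; the domains are the two from these notes):

    ```nginx
    # /etc/nginx/conf.d/sites.conf -- two domains served from one machine.
    server {
        listen 80;
        server_name southbaysupercontest.com;
        location / {
            proxy_pass http://127.0.0.1:8000;  # container publishing port 8000
            proxy_set_header Host $host;
        }
    }
    server {
        listen 80;
        server_name bmahlstedt.com;
        location / {
            proxy_pass http://127.0.0.1:8001;  # a second container
            proxy_set_header Host $host;
        }
    }
    ```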
- Bought bmahlstedt.com for $21 (2yr contract) through GoDaddy, same as southbaysupercontest.
- If a website tells you to disable your adblocker, you can often just set style="display:none;" on the banner and then change the background color back to white or increase brightness.
- GraphQL.
- There are a few places in my application where I translate an email to an ID, an ID to picks, picks to scores, etc. GraphQL should be able to help quite well with this over-fetching that REST is vulnerable to.
- Was created at FB in 2012, earlier than I thought.
- graphene and graphene-sqlalchemy are two python packages to aid in defining graphql models. Flask-GraphQL is the extension that adds the /graphql view. gql is the client.
- Added the graphql view, with the query schema wrapped around my existing user/pick/matchup models.
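- A minimal sketch of what that wrapping looks like with plain graphene (the field names and stub resolver here are hypothetical stand-ins for the real user/pick/matchup models, which would use graphene-sqlalchemy's SQLAlchemyObjectType):

    ```python
    import graphene

    class UserType(graphene.ObjectType):
        id = graphene.Int()
        email = graphene.String()

    class Query(graphene.ObjectType):
        user = graphene.Field(UserType, email=graphene.String(required=True))

        def resolve_user(root, info, email):
            # Real code would look the user up via sqlalchemy; stubbed here.
            return UserType(id=1, email=email)

    schema = graphene.Schema(query=Query)
    result = schema.execute('{ user(email: "a@b.com") { id email } }')
    assert result.data == {"user": {"id": 1, "email": "a@b.com"}}
    ```

    The client asks for exactly the fields it wants, which is the over-fetching fix mentioned above.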
- Created the environment variable SC_DEV and set it to 1 in docker-compose for app_dev. This skips csrf protection and enables graphiql in the browser.
- Wrapped the view_func with login_required() for add_user_rule, rather than decorating it like a normal route. You now need to login to hit the graphql endpoint, even programmatically.
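- The wrapping pattern is roughly this (using a stand-in login_required and a stub view, since the real ones come from the auth extension and Flask-GraphQL):

    ```python
    from functools import wraps
    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "dev"  # any value; just enables sessions for the demo

    def login_required(view):
        # Stand-in for the auth extension's decorator.
        @wraps(view)
        def wrapped(*args, **kwargs):
            if not session.get("user_id"):
                abort(401)
            return view(*args, **kwargs)
        return wrapped

    def graphql_view():
        return "data"

    # Wrap the view_func directly instead of decorating a route:
    app.add_url_rule("/graphql", view_func=login_required(graphql_view))

    client = app.test_client()
    assert client.get("/graphql").status_code == 401  # anonymous: rejected
    with client.session_transaction() as s:
        s["user_id"] = 1
    assert client.get("/graphql").status_code == 200  # logged in: allowed
    ```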
- In graphiql, ctrl-space will autocomplete with an option dropdown. ctrl-enter will execute the query.
- You can then query from the command line with curl at /graphql?query=<>
- You can then query from python with gql.
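- Either way the wire format is just an HTTP request carrying the query. A sketch with requests that builds (but doesn't send) both forms; the endpoint URL and field names are hypothetical:

    ```python
    import json
    import requests

    query = '{ user(email: "a@b.com") { id email } }'

    # GET form, like the curl example: /graphql?query=<urlencoded query>
    get_req = requests.Request("GET", "https://example.com/graphql",
                               params={"query": query}).prepare()
    assert "query=" in get_req.url

    # POST form: a json body of {'query': query}
    post_req = requests.Request("POST", "https://example.com/graphql",
                                json={"query": query}).prepare()
    assert post_req.headers["Content-Type"] == "application/json"
    assert json.loads(post_req.body)["query"] == query
    ```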
- Since the app has direct access to the database, sqlalchemy is fine to perform internal app queries. To go through graphql for the app itself would be weird and inefficient: python -> http through view -> python.
- I am intentionally not adding mutations. This is a read-only interface for users to mess with the db.
- Graphiql is an extremely useful interface for users to query the db. I had to do some fancy stuff to extend csrf/auth to the graphql endpoint, but I was successful.
- Added two tests. One verifies that you can auth with the app via basic requests + csrf token (rather than with selenium). The second verifies that the graphql endpoint can return data programmatically. This was simply achieved with json={'query': query}, where query is a triple-quoted string with the same content you'd enter into graphiql. Didn't end up needing gql (because I couldn't really use it without hacking my csrf auth mechanism in).
- Ended up enabling graphiql for production, since it’s protected by auth anyway.
- Github offers an API to query their data with graphql: https://developer.github.com/v4/.
- Medium obviously collaborates with freecodecamp.org and codeburst.io.
- Alexa (the Amazon subsidiary, not the voice assistant) is another company that monitors internet traffic. They rank the most popular sites: https://www.alexa.com/topsites. In the US the top 24 are: google youtube facebook amazon wikipedia reddit yahoo twitter linkedin instagram ebay microsoftonline netflix twitch instructure pornhub imgur live craigslist espn chase paypal bing cnn
- JWT = JSON Web Tokens.
- Extremely useful for programmatically repeating a manual browser request (like a login): open chrome devtools, perform an action, then go to the network tab, right click the request, copy as curl, then convert to python requests with https://curl.trillworks.com/.
- It totally depends on the service, but selenium should be able to login for all because it's closest to a real user. For direct auth with requests, the server can expect whatever it wants. Some require certain cookies (which you can get with a naked request, then session.cookies.get_dict()). Supercontest requires a csrf_token to be passed with your credentials, that's it. Make a request, save the csrf token from the response, then hit /user/sign-in with your creds and the csrf token.
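- That flow, sketched with requests (the endpoint path is from the note above; the token-extraction pattern and form field names are assumptions about the login form):

    ```python
    import re
    import requests

    def get_csrf_token(html):
        # Assumes the token sits in a hidden input named csrf_token.
        m = re.search(r'name="csrf_token"[^>]*value="([^"]+)"', html)
        return m.group(1) if m else None

    def sign_in(base_url, email, password):
        s = requests.Session()
        resp = s.get(base_url + "/user/sign-in")  # grab the form + csrf token
        token = get_csrf_token(resp.text)
        s.post(base_url + "/user/sign-in", data={
            "email": email,
            "password": password,
            "csrf_token": token,
        })
        return s  # session now carries the auth cookie

    sample = '<input name="csrf_token" type="hidden" value="abc123">'
    assert get_csrf_token(sample) == "abc123"
    ```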