• Great summary of machine learning: https://vas3k.com/blog/machine_learning/.
    • Started tracking the foods I eat with MyFitnessPal. I’ve got a solid diet, but I’m curious what my macro counts are, so I want to get more analytical about it.
    • PSD2 = Payment Services Directive 2, the revised EU Payment Services Directive. SCA = Strong Customer Authentication, the authentication requirement PSD2 imposes on most electronic payments. They’re EU rules governing how apps and services move money.
    • Twilio allows you to make calls, send texts, and more…programmatically! Basically communication APIs.
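      For reference, roughly what sending a text looks like with their Node helper library (from memory, so treat it as a sketch; the credentials and phone numbers are placeholders):

      ```javascript
      // Requires: npm install twilio
      const twilio = require("twilio");

      // Placeholder credentials; real ones come from the Twilio console.
      const client = twilio("ACCOUNT_SID", "AUTH_TOKEN");

      client.messages
        .create({
          body: "Hello from code!",
          from: "+15005550006", // a Twilio-provided number
          to: "+15551234567",
        })
        .then((message) => console.log(message.sid))
        .catch((err) => console.error(err));
      ```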
    • Authy is an alternative to Google Authenticator. It’s a 2FA app.
    • Good summary of Google’s JS style preferences: https://medium.freecodecamp.org/google-publishes-a-javascript-style-guide-here-are-some-key-lessons-1810b8ad050b. I’m in agreement with basically all of it. The semicolon rule still kinda annoys me.
    • Remember, var is function scoped and let is block scoped (smaller). So if you declare a variable with let inside a for loop in a function, it’s only available inside that loop; declare the same variable with var in the loop and the whole function can see it. At the top level of a script, outside any function, both end up globally scoped (though var also attaches itself to the global object and let doesn’t). You should never use var; only use let or const.
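      A quick sketch of the difference (plain JS, runnable in Node or a browser console):

      ```javascript
      function scopes() {
        for (let i = 0; i < 3; i++) {
          var v = i; // var: hoisted to the top of scopes(), visible anywhere in the function
          let b = i; // let: only exists inside this loop body
        }
        console.log(v); // 2: the var leaked out of the loop
        console.log(b); // ReferenceError: b is not defined
      }
      scopes();
      ```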
    • Medium article on GraphQL at Netflix. They’re very happy with it. What would be 8 direct REST API calls between client and server is now one client call plus 7 server-to-server interactions, which is much faster because latency between servers in the same data center is far lower than latency between the client and the server. The abstraction you get in queries is great too!
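      Roughly what that single client call looks like: one GraphQL query, and the server fans out to its internal services to resolve it. The endpoint and fields below are made up for illustration:

      ```javascript
      // One round trip from the client; the GraphQL server resolves the nested
      // fields by calling whatever internal services back them.
      const query = `
        query {
          viewer {
            name
            recommendations {
              title
              artwork { url }
            }
          }
        }
      `;

      fetch("https://api.example.com/graphql", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query }),
      })
        .then((res) => res.json())
        .then(({ data }) => console.log(data.viewer.recommendations));
      ```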
    • Lucene is a search engine library from Apache. It’s what the Atlassian products used (Lucene health check failed!). Elasticsearch is built on top of it and is a very common tool for searching and analyzing data (visualization usually comes from Kibana sitting on top of it).
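      A minimal sketch of hitting Elasticsearch’s search REST API, assuming a local node on the default port 9200 (the index name and field are made up):

      ```javascript
      // Full-text search against a hypothetical "articles" index.
      fetch("http://localhost:9200/articles/_search", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          query: { match: { body: "machine learning" } },
        }),
      })
        .then((res) => res.json())
        .then((result) => console.log(result.hits.hits));
      ```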
    • Schema-on-write is when data is validated before it is written, making sure it adheres to the schema. Schema-on-read, on the other hand, is when any data can be submitted and the user applies their own lens (schema) to it (or to part of it) when they fetch it. Write is better for conformity and makes queries more efficient, since the structure is known up front; read is better for flexibility.
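      A toy sketch of the two approaches; the record shape and the validation rule are made up for illustration:

      ```javascript
      // Schema-on-write: reject anything that doesn't match the schema before storing it.
      const users = [];
      function insertUser(record) {
        if (typeof record.name !== "string" || typeof record.age !== "number") {
          throw new Error("record does not match the user schema");
        }
        users.push(record);
      }

      // Schema-on-read: store whatever arrives, apply a lens only when reading.
      const rawRecords = [];
      function insertRaw(record) {
        rawRecords.push(record); // no validation at write time
      }
      function readAsUser(record) {
        return { name: String(record.name ?? "unknown"), age: Number(record.age ?? NaN) };
      }

      insertUser({ name: "Ada", age: 36 });
      insertRaw({ name: "Grace", favoriteEditor: "vim" }); // would fail schema-on-write
      console.log(rawRecords.map(readAsUser));
      ```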
    • Normalization is when a database is split into many tables, each holding a small, atomic piece of information, with relations stitching the tables together. It avoids redundancy, but a query then requires many joins, which can be inefficient. Denormalization, then, is when you take a normalized database (NOT one that has never been normalized before, like a huge table with everything in it from the start) and intentionally add redundant information back in to make common queries more efficient. This makes reads faster while making writes slightly slower (because they now have to update the same data in two redundant locations).
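      A toy sketch of the trade-off using plain JS objects as stand-ins for tables (names and fields made up):

      ```javascript
      // Normalized: authors and posts live in separate "tables";
      // reading a post's author name requires a join.
      const authors = [{ id: 1, name: "Ada" }];
      const posts = [{ id: 10, authorId: 1, title: "Hello" }];

      function postWithAuthor(postId) {
        const post = posts.find((p) => p.id === postId);
        const author = authors.find((a) => a.id === post.authorId); // the "join"
        return { ...post, authorName: author.name };
      }

      // Denormalized: copy the author name onto each post so the common read
      // needs no join, at the cost of updating every post if an author is renamed.
      const postsDenormalized = [
        { id: 10, authorId: 1, authorName: "Ada", title: "Hello" },
      ];

      console.log(postWithAuthor(10));
      console.log(postsDenormalized[0].authorName);
      ```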