• Tuesday

    • Private work.
    • SES Trust/Safety team approved the production request. Likely due to the scathe yesterday. Still waiting on the toll-free registration of the Pinpoint origination number; then will submit to take it out of the sandbox; then all clear on the Cognito changes.
    • Put the Ducati on the tender.
    • OpenAI DevDay keynote: https://www.youtube.com/watch?v=U9mJuUkhUzk. Release of GPT-4 Turbo. 128k context (~300 pages). JSON mode. Trained on data up to Apr '23. Custom GPTs = biggest change. You can modify/focus/customize a GPT for a specific purpose, then release it to the GPT Store for others to use.
    • Roasted a 6lb pork butt to make carnitas burritos.
    • Google invested another $2B in Anthropic.
    • Sandhill has some SpaceX. Selling at $86/share, $50k minimum check. Curious how Bret is handling/approving this (if at all?).
    • Supercontest.
      • CloudWatch.
        • Finished the logging change: piped from the app container on EC2 to CloudWatch. Added alarms. Removed Sentry.
        • First attempted a CloudWatch agent config before finding the awslogs Docker driver. Easy.
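        • A minimal sketch of that wiring with the Docker SDK for Python (equivalent to the logging driver options in compose or the daemon config; image/group/stream names and region here are made up):

            import docker
            from docker.types import LogConfig

            client = docker.from_env()

            # Route the container's stdout/stderr straight to a CW log group.
            log_config = LogConfig(
                type="awslogs",
                config={
                    "awslogs-region": "us-west-2",        # hypothetical
                    "awslogs-group": "supercontest-app",  # hypothetical
                    "awslogs-stream": "flask",            # hypothetical
                    "awslogs-create-group": "true",
                },
            )

            client.containers.run("supercontest:latest", detach=True, log_config=log_config)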
        • CW can do a lot. I use it for metrics/logs/alarms/dashboards, but you can also do RUM, traces, canaries, A/B testing, and more.
        • Mostly seamless handling between EC2 and CW, from both a credential and a config perspective. Also had to sort out datetime formatting (from the custom logs), managing the timezone when tailing in the CW UI, etc.
        • Retention policies. Starting at 30d for now.
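        • For reference, the retention setting is a single boto3 call (group name hypothetical):

            import boto3

            logs = boto3.client("logs")
            # Expire log events after 30 days.
            logs.put_retention_policy(logGroupName="supercontest-app", retentionInDays=30)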
        • You can query the logs with Logs Insights, e.g.:
            filter @message like /commit-scores/ | sort @timestamp desc | limit 20
        • The alarms work as follows: a custom metric filter scrapes the logs and counts errors, an alarm is created from that metric, the alarm sends violations to the SNS topic, and the topic emails me.
        • The log filter checks for ERROR, CRITICAL, and EXCEPTION (case-insensitive).
        • Also added an alarm for any Lambda errors (emails as well).
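        • A boto3 sketch of that chain (names, region, and account are made up; the '?' term syntax shown here ORs the words but is case-sensitive, unlike the actual filter):

            import boto3

            logs = boto3.client("logs")
            cw = boto3.client("cloudwatch")

            # 1. Metric filter: count log lines matching the error pattern.
            #    defaultValue=0 makes quiet periods report 0 instead of no data.
            logs.put_metric_filter(
                logGroupName="supercontest-app",
                filterName="error-count",
                filterPattern="?ERROR ?CRITICAL ?EXCEPTION",
                metricTransformations=[{
                    "metricName": "AppErrors",
                    "metricNamespace": "Supercontest",
                    "metricValue": "1",
                    "defaultValue": 0.0,
                }],
            )

            # 2. Alarm on that metric; violations publish to the SNS topic
            #    (which has an email subscription).
            cw.put_metric_alarm(
                AlarmName="supercontest-app-errors",
                Namespace="Supercontest",
                MetricName="AppErrors",
                Statistic="Sum",
                Period=300,
                EvaluationPeriods=1,
                Threshold=0,
                ComparisonOperator="GreaterThanThreshold",
                AlarmActions=["arn:aws:sns:us-west-2:123456789012:app-alarms"],
            )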
        • Saw some weird behavior: "insufficient data" for the metric filter on the logs. It counts regex matches, and the default value is set (0), but the metric was not reporting the 0 datapoints, so the alarm said insufficient data.
          • Ended up finding the bug: all log events were being pushed with the same timestamp (i.e. not updating with each log message, just repeating the same startup time).
          • So the metric filter (and corresponding alarm) didn't have any data; it thought everything was flattened into one datapoint.
          • Not sure why it was doing this. Because it was freezing at the startup timestamp, I looked at the beginning of the logs: Gunicorn was printing a slightly different datetime format.
          • So I standardized everything (Gunicorn, Flask, my loggers, etc.) to ignore milliseconds and include the timezone, roughly as sketched below.
          • This fixed it; AWS then showed properly parsed datetimes.
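          • Roughly what the standardization looked like (logger names and exact format are assumptions, and the Gunicorn loggers would be touched from a Gunicorn config hook, after its logging is set up):

              import logging

              # One datetime format everywhere: no milliseconds, explicit UTC offset,
              # so CW parses each line's timestamp instead of repeating the first one.
              DATEFMT = "%Y-%m-%d %H:%M:%S %z"
              FORMAT = "[%(asctime)s] %(levelname)s in %(name)s: %(message)s"
              formatter = logging.Formatter(FORMAT, datefmt=DATEFMT)

              # Apply to Gunicorn's loggers and the app's own so every line matches.
              for name in ("gunicorn.error", "gunicorn.access", "supercontest"):
                  for handler in logging.getLogger(name).handlers:
                      handler.setFormatter(formatter)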
      • There are many trash requests from scrapers probing WordPress paths and assets to steal info (/wp-includes/*, /uploads/, /admin/, etc).
      • LB links.
        • https://gitlab.com/bmahlstedt/supercontest/-/issues/222
        • Looks clean.