A hundred years ago, humanity answered that very question, twice. In 1936, Alan Turing invented the Turing Machine, which, heavily inspired by the mechanistic trend of the 20th century, distilled the common components of early computers into a single universal machine that, despite its simplicity, was capable of performing every conceivable computation. From simple numerical calculations to entire
- Google's Manifest V3 has no analogue to the `webRequestBlocking` API, which is necessary for (effective) ad blockers to work
- starting in Chrome version 127, the transition to MV3 will start cutting off the use of MV2 extensions altogether
- this will inevitably piss off enterprises when their extensions stop working, so the `ExtensionManifestV2Availability` policy key was added and will presumably stay forever after enterprises complain enough

You can use this as a regular user, which will let you keep your MV2 extensions even after they're supposed to stop working.
In a terminal, run:
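A minimal sketch for Linux, assuming Chrome reads managed policies from `/etc/opt/chrome/policies/managed` (on Windows and macOS the same policy is set via the registry or `defaults` instead); per Chrome's enterprise policy docs, the value `2` enables MV2 extensions:

```bash
# Create Chrome's managed policy directory and enable MV2 extensions.
# Value meanings: 0 = default behavior, 1 = disabled, 2 = enabled.
sudo mkdir -p /etc/opt/chrome/policies/managed
echo '{ "ExtensionManifestV2Availability": 2 }' | \
  sudo tee /etc/opt/chrome/policies/managed/mv2.json
```

You can check that the policy took effect by visiting chrome://policy after restarting the browser.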
#!/bin/bash
# Video Quality
# The range of the CRF scale is 0–51, where 0 is lossless, 23 is the default,
# and 51 is the worst quality possible. A lower value generally leads to higher
# quality, and a subjectively sane range is 17–28.
QUALITY=28

# Check that the slop command exists before trying to use it.
if ! command -v slop &> /dev/null; then
    echo "error: slop is not installed" >&2
    exit 1
fi
Not only is Mojo great for writing high-performance code, but it also allows us to leverage the huge Python ecosystem of libraries and tools. With seamless Python interoperability, Mojo can use Python for what it's good at, especially GUIs, without sacrificing performance in critical code. Let's take the classic Mandelbrot set algorithm and implement it in Mojo. We'll introduce a `Complex` type and use it in our implementation.
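A minimal sketch of what such a type could look like; Mojo's syntax is still evolving, so the `@value` decorator, field types, and method names here are assumptions rather than the author's exact code:

```mojo
@value
struct Complex:
    var re: Float64
    var im: Float64

    # z^2 + c, the core step of the Mandelbrot iteration
    fn squared_add(self, c: Complex) -> Complex:
        return Complex(
            self.re * self.re - self.im * self.im + c.re,
            2 * self.re * self.im + c.im,
        )

    # |z|^2, used for the escape test (avoids a square root)
    fn squared_norm(self) -> Float64:
        return self.re * self.re + self.im * self.im
```

The Mandelbrot kernel then just iterates `z = z.squared_add(c)` until `z.squared_norm()` exceeds 4 or an iteration cap is reached.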
For a long time I've been really impressed by the ease of use Cassandra and CockroachDB bring to operating a data store at scale. While these systems have very different tradeoffs, what they have in common is how easy it is to deploy and operate a cluster. I have experience with both at cluster sizes in the dozens, hundreds, and even thousands of nodes, and in comparison to some other clustered technologies they get you far pretty fast. They have sane defaults that provide scale and high availability to people who wouldn't always know how to achieve those properties with more complex systems. People can get pretty far before they have to become experts. When you start needing more extreme usage you will need to become an expert in the system, just like with any other piece of infrastructure. But what I really love about these systems is that they make geo-aware data placement and data replication and movement a breeze most of the time, which can also simplify GDPR concerns.
Several years ago the great [Andy Gross](ht
A list of requirements (see the pipeline sketch after the list):

- stakeholders expect a list of delivered features, every few days, in a human-friendly report
- every change must have been reviewed before being deployed
- every change must have passed our automated checks before being deployed
- every change must have been verified by QA staff before being deployed
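As a hypothetical illustration (not the author's actual setup), the last three gates map naturally onto a CI/CD pipeline: review enforced by branch protection before merge, automated checks as a required job, and QA sign-off as a protected-environment approval. A GitHub Actions-style sketch:

```yaml
# Hypothetical workflow: job names, make targets, and the "qa"
# environment are illustrative assumptions.
name: deploy
on:
  push:
    branches: [main]  # review happens before merge, via branch protection

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test  # the "automated checks" gate

  deploy:
    needs: checks       # cannot deploy unless checks passed
    environment: qa     # requires manual approval, i.e. QA verification
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make deploy
```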
"""A quick benchmark comparing the performance of: | |
- msgspec: https://github.com/jcrist/msgspec | |
- pydantic V1: https://docs.pydantic.dev/1.10/ | |
- pydantic V2: https://docs.pydantic.dev/dev-v2/ | |
The benchmark is modified from the one in the msgspec repo here: | |
https://github.com/jcrist/msgspec/blob/main/benchmarks/bench_validation.py | |
I make no claims that it's illustrative of all use cases. I wrote this up |
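This is not the author's script, but a minimal sketch of the kind of comparison it describes, covering only msgspec and pydantic V2; the model names and payload are made up for illustration:

```python
import timeit

import msgspec
import pydantic  # v2 API


class UserMsgspec(msgspec.Struct):
    name: str
    age: int


class UserPydantic(pydantic.BaseModel):
    name: str
    age: int


data = b'{"name": "alice", "age": 30}'
decoder = msgspec.json.Decoder(UserMsgspec)


def bench(label: str, fn, n: int = 100_000) -> None:
    # Report average time per decode+validate operation.
    total = timeit.timeit(fn, number=n)
    print(f"{label}: {total / n * 1e6:.2f} us/op")


bench("msgspec", lambda: decoder.decode(data))
bench("pydantic v2", lambda: UserPydantic.model_validate_json(data))
```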
This logging setup configures Structlog to output pretty logs in development, and JSON log lines in production.

Then, you can use Structlog loggers or standard `logging` loggers; both will be processed by the Structlog pipeline (see the `hello()` endpoint for reference). That way, any log generated by your dependencies will also be processed and enriched, even if they know nothing about Structlog!

Requests are assigned a correlation ID with the `asgi-correlation-id` middleware (either captured from the incoming request or generated on the fly).

All logs are linked to the correlation ID, and to the Datadog trace/span if instrumented. This data, "global to the request", is stored in context vars and automatically added to all logs produced during the request thanks to Structlog.

You can add to these "global local variables" at any point in an endpoint with `structlog.contextvars.bind_contextvars(...)`.
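A minimal sketch of the pieces described above, assuming FastAPI; the processor list, endpoint, and bound keys are illustrative, not the exact configuration:

```python
import structlog
from asgi_correlation_id import CorrelationIdMiddleware
from fastapi import FastAPI

structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,  # pulls request-scoped context vars into each event
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        # Assumption: swap JSONRenderer for ConsoleRenderer() in development.
        structlog.processors.JSONRenderer(),
    ]
)

app = FastAPI()
app.add_middleware(CorrelationIdMiddleware)  # captures or generates a correlation ID

log = structlog.get_logger()


@app.get("/hello")
async def hello():
    # Anything bound here becomes "global to the request": every log line
    # emitted until the context is cleared will carry these keys.
    structlog.contextvars.bind_contextvars(endpoint="hello")  # hypothetical key
    log.info("handling_request")
    return {"message": "hello"}
```

Routing standard-library `logging` records through the same pipeline takes additional stdlib integration (e.g. `structlog.stdlib.ProcessorFormatter`), which is omitted from this sketch.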
# Written by The Suhu (2021).
# Tested on Ubuntu 20.04 LTS
The default cURL installed on the operating system may not be the latest version. If you want the latest version, you need to build it from source. Let's check the installed cURL version with the following command.
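```bash
# Print the installed curl version and the protocols/features it supports.
curl --version
```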