diff --git a/content/blog/a-brief-history-of-configuration-defined-image-builders.md b/content/blog/a-brief-history-of-configuration-defined-image-builders.md
new file mode 100644
index 0000000..cbf31e6
--- /dev/null
+++ b/content/blog/a-brief-history-of-configuration-defined-image-builders.md
@@ -0,0 +1,46 @@
+---
+title: "A Brief History of Configuration-Defined Image Builders"
+date: "2021-04-06"
+---
+
+When you think of a configuration-defined image builder, you most likely think of Docker (which builds images for containers). But before Docker, there were several other projects, all of which came out of a vibrant community of Debian-using sysadmins looking for better ways to build VM and container images. Their work led to a series of projects that built on each other, each improving on the last.
+
+## Before KVM, there was Xen
+
+The [Xen hypervisor](https://xenproject.org/developers/teams/xen-hypervisor/) is likely something you've heard of, and that's where this story begins. The mainstream desire to programmatically create OS images came about as Xen became a popular hypervisor in the mid-2000s. The first development in that regard was [xen-tools](https://www.xen-tools.org/software/xen-tools/), which automated installation of Debian, Ubuntu and CentOS guests by generating images for them with custom Perl scripts. The world has largely moved on from Xen, but it still sees wide use.
+
+## ApplianceKit and ApplianceKit-NG
+
+The methods used in xen-tools, while generally effective, lacked flexibility. Hosting providers needed a way to allow end-users to customize the images they deployed. In my case, we solved this by creating ApplianceKit. That particular venture was sold to another hosting company, and for whatever reason, I started another one. In that venture, we created ApplianceKit-NG.
+
+ApplianceKit and ApplianceKit-NG took different approaches internally to solve the same basic problem: taking an XML description of a software image and reproducing it. For example:
+
+```xml
+<?xml version="1.0" standalone="yes"?>
+<appliance>
+  <description>LAMP appliance based on Debian squeeze</description>
+  <author>
+    <name>Ariadne Conill</name>
+    <email>ariadne@dereferenced.org</email>
+  </author>
+  <distribution>squeeze</distribution>
+  <packages>
+    <package>apache2</package>
+    <package>libapache2-mod-php5</package>
+    <package>mysql-server</package>
+    <package>mysql-client</package>
+  </packages>
+</appliance>
+```
+
+As you can see here, the XML description describes a _desired state_ for the image to be in at deployment time. ApplianceKit did this through an actor model: different modules would act on elements in the configuration description. [ApplianceKit-NG](https://bitbucket.org/tortoiselabs/appliancekit-ng/src/master/) instead treated this as a matter of compilation: first, a high-level pass converted the XML into a [mid-level IR](https://bitbucket.org/tortoiselabs/appliancekit-ng/src/master/ADL.md), then the mid-level IR was lowered into a low-level IR, and finally the low-level IR was converted into a series of commands that were evaluated like a shell script. (Had I known about skarnet's execline at that time, I would have used it.)
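+
+The multi-pass lowering described above can be sketched roughly as follows. This is a loose illustration with invented names, not ApplianceKit-NG's actual code (the real ADL IR is considerably richer):
+
```python
# Toy sketch of a "compile the desired state" pipeline: a high-level
# appliance description is lowered to a mid-level IR of operations,
# which is then lowered to a flat list of shell commands.
# All names and command strings here are invented for illustration.

def lower_to_ir(appliance):
    """High-level pass: desired state -> mid-level operations."""
    ir = [("bootstrap", appliance["distribution"])]
    ir.extend(("install-package", pkg) for pkg in appliance["packages"])
    return ir

def lower_to_commands(ir):
    """Low-level pass: mid-level operations -> shell commands."""
    commands = []
    for op, arg in ir:
        if op == "bootstrap":
            commands.append(f"debootstrap {arg} /target")
        elif op == "install-package":
            commands.append(f"chroot /target apt-get install -y {arg}")
    return commands

appliance = {"distribution": "squeeze", "packages": ["apache2", "mysql-server"]}
for command in lower_to_commands(lower_to_ir(appliance)):
    print(command)
```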
+
+## Docker
+
+Another company that was active in the Debian community and experimenting with configuration-defined image building was dotCloud. dotCloud took a similar evolutionary path, and the final image-building system they made was Docker. Docker evolved the concept outlined in ApplianceKit-NG further by simplifying everything: instead of explicitly configuring a desired state, you simply use image layering:
+
+```dockerfile
+FROM debian:squeeze
+MAINTAINER ariadne@dereferenced.org
+RUN apt-get update && apt-get install apache2 libapache2-mod-php5 mysql-server mysql-client
+```
+
+By taking a simpler approach, Docker won out. Everything is built on top of Docker these days, such as Kubernetes, and this is a good thing. Even though some projects like Packer have further advanced the state of the art, Docker remains the go-to for this task, simply because it's simple enough for people to mostly understand.
+
+The main takeaway is that simply advancing the state of the art is not good enough to make a project compelling. It must advance the state of simplicity too.
diff --git a/content/blog/a-silo-can-never-provide-digital-autonomy-to-its-users.md b/content/blog/a-silo-can-never-provide-digital-autonomy-to-its-users.md
new file mode 100644
index 0000000..2c7d68b
--- /dev/null
+++ b/content/blog/a-silo-can-never-provide-digital-autonomy-to-its-users.md
@@ -0,0 +1,18 @@
+---
+title: "a silo can never provide digital autonomy to its users"
+date: "2022-07-01"
+---
+
+Lately there has been a lot of discussion about various silos and their activities, notably GitHub and an up-and-coming alternative to Tumblr called Cohost. By analyzing the behavior of both of these silos, I'd like to make the point that silos, by design, do not and cannot elevate user freedoms, even if they are run with the best of intentions.
+
+It is said that if you are not paying for a service, you are the product. To examine this, we will start with GitHub, which has had a significant controversy over the past year with its now-commercial Copilot service. Copilot is a paid service which provides code suggestions using a neural network model that was trained using the entirety of publicly posted source code on GitHub as its corpus. As many have noted, this is likely a problem from a copyright point of view.
+
+Microsoft claims that this use of the GitHub public source code is ethically correct and legal, citing fair use as their justification for data mining the entire GitHub public source corpus. Interestingly, in the EU, there is a "text and data mining" exception to the copyright directive, [which may provide for some precedent for this thinking](https://deliverypdf.ssrn.com/delivery.php?ID=380124069122109084081011069119068081059089022064027023064104069125083028119005007123033062000029047123108125065064093118008030058071007053078069071085069007101073030038014010096097074114126065017112027071084124110068123116074098119115105064007068091122&EXT=pdf&INDEX=TRUE). While the legal construction they use to justify the way they trained the Copilot model is interesting, it is important to note that we, as consumers of the GitHub service, enabled Microsoft to do this by uploading source code to their service.
+
+Now let's talk about [Cohost](https://cohost.org), a recently launched alternative to Tumblr which is paid for by its subscribers, and promises that it will never sell out to a third party. While I think that Cohost will likely be one of the more ethically-run silos out there, it is still a silo, and like Microsoft's GitHub, it has business interests (subscriber retention) which [place it in conflict with the goals of digital autonomy](https://techautonomy.org/). Specifically, like all silos, Cohost's platform is designed to keep users inside the Cohost platform, just as GitHub uses the network effect of its own silo to make it difficult to use anything other than GitHub for collaboration on software.
+
+Some have argued that, due to the network effects of silos, the only thing which can defeat a bad silo is a good silo. The problem with this argument is that it requires one to accept the supposition that there can be a good silo. Silos, by their very nature of being centralized services under the control of the privileged, cannot be good if you look at the power structures imposed by them. Instead, we should use our privilege to lift others up, something that commercial silos, by design, are incapable of doing.
+
+How do we do this though? One way is to embrace networks of consent. From a technical point of view, the IndieWeb people have worked on a number of simple, easy-to-implement protocols, which provide the ability for web services to interact openly with each other, but in a way that allows a website owner to define policy over what content they will accept. From a social point of view, we should avoid commercial silos, such as GitHub, and use our own infrastructure, either through self-hosting or through membership in a cooperative or public society.
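+
+As one concrete example of how simple these protocols are: Webmention, one of the IndieWeb building blocks, notifies a site that another page has linked to it using nothing more than a form-encoded POST with two parameters. A minimal sketch in Python follows; the URLs are placeholders, and a real sender would first discover the endpoint from the target page's `rel="webmention"` link:
+
```python
from urllib.parse import urlencode
from urllib.request import Request

def build_webmention(endpoint, source, target):
    """Build the Webmention notification: a form-encoded POST
    carrying exactly two parameters, source and target."""
    body = urlencode({"source": source, "target": target}).encode()
    return Request(endpoint, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

# Hypothetical URLs for illustration; sending the request is then just
# urllib.request.urlopen(req), and a 2xx response means the target
# accepted the mention for processing.
req = build_webmention("https://example.org/webmention",
                       source="https://my.site/reply-to-foo",
                       target="https://example.org/posts/foo")
print(req.get_method(), req.full_url)
```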
+
+Although I understand that both of these goals can be difficult to achieve, they make more sense than jumping from one silo to the next after they cross the line. You control where you choose to participate -- for me, that means I am shifting my participation so that I only participate in commercial silos when absolutely necessary. We should choose to participate in power structures which value our communal membership, rather than value our ability to generate or pay revenue.
diff --git a/content/blog/a-slightly-delayed-monthly-status-update.md b/content/blog/a-slightly-delayed-monthly-status-update.md
new file mode 100644
index 0000000..43a80b5
--- /dev/null
+++ b/content/blog/a-slightly-delayed-monthly-status-update.md
@@ -0,0 +1,52 @@
+---
+title: "A slightly-delayed monthly status update"
+date: "2021-06-04"
+---
+
+A few weeks ago, I announced the [creation of a security response team for Alpine](https://ariadne.space/2021/04/20/building-a-security-response-team-in-alpine/), of which I am presently the chair.
+
+Since then, the team has been fully chartered by both the previous Alpine core team and the new Alpine council, and we have gotten a few members on board working on security issues in Alpine. Once the Technical Steering Committee is fully formed, the security team will report to the TSC and fall under its purview.
+
+Accordingly, I thought it would be prudent to start writing monthly updates summarizing what I've been up to. This one is a little delayed because we've been focused on getting Alpine 3.14 out the door (the first RC should come out on Monday)!
+
+## secfixes-tracker
+
+One of the primary activities of the security team is to manage the [security database](https://secdb.alpinelinux.org). This is largely done using the secfixes-tracker application I wrote in April. At AlpineConf, I gave a bubble talk about the new security team, including a demonstration of how we use the secfixes-tracker application to research and mitigate security vulnerabilities.
+
+Since the creation of the security team through the Alpine 3.14 release cycle, other security team volunteers and I have mitigated over 100 vulnerabilities through patching or non-maintainer security upgrades in the pending 3.14 release alone, and many more in past releases which are still supported.
+
+All of this work in finding unpatched vulnerabilities is done using secfixes-tracker. However, while it finds many vulnerabilities, it is not perfect. There are both false positives and false negatives, which we are working on improving.
+
+The next step for secfixes-tracker is to integrate it into GitLab, so that maintainers can log in and reject CVEs they deem irrelevant in their packages instead of having to attribute a security fix to version `0`. I am also [working on a protocol to allow security trackers to share data](https://docs.google.com/document/d/11-m_aXnrySM6KeA5I6BjdeGeSIxymfip4hseg2Y0UKw/edit#heading=h.bz0hbmpvjhfb) with each other in an automated way.
+
+## Infrastructure
+
+Another role of the security team is to advise the infrastructure team on security-related matters. In the past few weeks, this primarily focused around two issues: how to [securely relay patches from the alpine-aports mailing list into GitLab without compromising the security of `aports.git`](https://gitlab.alpinelinux.org/mailinglist-bot) and [our response to recent changes in freenode](https://freenode.net/news/freenode-is-foss), where it was the recommendation of the security team to [leave freenode in favor of OFTC](https://alpinelinux.org/posts/Switching-to-OFTC.html).
+
+## Reproducible Builds
+
+Another project of mine personally is working to prove the reproducibility of Alpine package builds, as part of the [Reproducible Builds project](https://reproducible-builds.org/). To this end, I hope to have the Alpine 3.15 build fully reproducible. This will require some changes to `abuild` so that it produces buildinfo files, as well as a rebuilder backend. We plan to use the same buildinfo format as Arch, and will likely adapt some of the other reproducible builds work Arch has done to Alpine.
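+
+For a sense of what this involves: Arch's buildinfo format is a flat list of key-value pairs recording the inputs to a build, so that a rebuilder can recreate the same environment. A trimmed, hypothetical sketch of what an Alpine equivalent might record (the field names follow Arch's `.BUILDINFO`; all values are invented for illustration):
+
```
format = 2
pkgname = mypackage
pkgver = 1.0-r0
builddate = 1622764800
packager = Ariadne Conill <ariadne@dereferenced.org>
installed = musl-1.2.2-r1
installed = busybox-1.33.1-r2
```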
+
+I plan to have a meeting within the next week or two to formulate an official reproducible builds team inside Alpine and lay out the next steps for what we need to do in order to get things going. In the meantime, join `#alpine-reproducible` on `irc.oftc.net` if you wish to follow along.
+
+I plan for reproducible builds (perhaps getting all of main reproducible) to be a sprint in July, once the prerequisite infrastructure is in place to support it, so stay tuned on that.
+
+## apk-tools 3
+
+On this front, there's not much to report yet. My goal is to integrate the security database into our APKINDEX, so that we can have `apk list --upgradable --security`, which lists all of the security fixes you need to apply. Unfortunately, we are still working to finalize the ADB format which is a prerequisite for providing the security database in ADB format. It does look like Timo is almost done with this, so once he is done, I will be able to start working on a way to reflect the security database into our APKINDEX files.
+
+## The `linux-distros` list
+
+There is a mailing list which is intended to allow linux distribution security personnel to discuss security issues in private. As Alpine now has a security team, it is possible for Alpine to take steps to participate on this list.
+
+However... participation on this list comes with a few restrictions: you have to agree to follow all embargo terms in a precise way. For example, if an embargoed security vulnerability is announced there and the embargo specifies you may not patch your packages until XYZ date, then you must follow that or you will be kicked off the list.
+
+I am not sure it is necessarily appropriate or even valuable for Alpine to participate on the list. At present, if an embargoed vulnerability falls off a truck and Alpine notices it, we can fix it immediately. If we join the `linux-distros` list, then we may be put in a position where we have to hide problems, which I didn't sign up for. I consider it a feature that the Alpine security team is operating fully in the open for everyone to see, and want to preserve that as much as possible.
+
+The other problem is that distributions which participate [bind their package maintainers to an NDA](https://wiki.gentoo.org/wiki/Project:Security/Pre-Release-Disclosure) in order to look at data relevant to their packages. I don't like this at all and feel that it is not in the spirit of free software to make contributors acknowledge an NDA.
+
+We plan to discuss this over the next week and see if we can reach consensus as a team on what to do. I prefer to fix vulnerabilities, not wait to fix vulnerabilities, but obviously I am open to being convinced that there is value to Alpine's participation on that list.
+
+## Acknowledgement
+
+My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation. Without their support, I would not be able to work on security full time in Alpine, so thanks!
diff --git a/content/blog/a-tail-of-two-bunnies.md b/content/blog/a-tail-of-two-bunnies.md
new file mode 100644
index 0000000..075cbcd
--- /dev/null
+++ b/content/blog/a-tail-of-two-bunnies.md
@@ -0,0 +1,36 @@
+---
+title: "a tail of two bunnies"
+date: "2021-08-21"
+---
+
+As many people know, I collect stuffed animals. Accordingly, I get a lot of questions about what to look for in a quality stuffed animal which will last a long time. While there are a lot of factors to consider when evaluating a design, I hope the two examples I present here in contrast to each other will help most people get the basic idea.
+
+## the basic things to look for
+
+A stuffed animal is basically a set of fabric patches sewn together around some stuffing material. Therefore, the primary mode of failure for a stuffed animal is when one or more seams suffers a tear or rip in its stitching. A trained eye can look at a design and determine both the likelihood of failure and the most vulnerable seams, even in a high quality stuffed animal.
+
+There are two basic ways to sew together a stuffed animal: the fabric patches can be sewn together to form inward-facing seams, or they can be sewn together to form outward-facing seams. Generally, stuffed animals with inward-facing seams have more robust construction, which means that if you can easily see the seam lines, the quality is likely to be low. Similarly, if eyes and other accessories are sewn in along a main seam line, they become points of vulnerability in the design.
+
+Materials also matter: if the purpose of the stuffed animal is to be placed on a bed, or in a crib, it should be made out of fire-retardant materials. Higher quality stuffed animals will use polyester fill with a wool-polyester blend for the outside, while lower quality stuffed animals may use materials like cotton. In the [event of a fire](https://www.sikkerhverdag.no/en/safe-products/clothes-and-equipment/these-clothes-are-the-most-flammable/), polyester can potentially melt onto skin, but materials like cotton will burn much more vigorously.
+
+Finally, it is important to verify that the stuffed animal has been certified to a well-known safety standard. Look for compliance with the European Union's EN71 safety standard or the ASTM F963 standard. Do not buy any stuffed animal made by a company which is not compliant with these standards. Stuffed animals bought off maker-oriented websites like Etsy will most likely not be certified; in these cases, you may wish to verify with the maker that they are familiar with the EN71 and ASTM F963 standards and have designed around them.
+
+## a good example: the jellycat bashful bunny
+
+![A jellycat bashful bunny, cream colored, size: really big. it is approximately 4 feet tall.](images/BARB1BC-300x300.jpg)
+
+One of my favorite bunny designs is the [Jellycat Bashful Bunny](https://www.jellycat.com/us/bashful-cream-bunny-bas3bc/). I have several of them, ranging from small to the largest size available.
+
+This is what I would consider to be a high quality design. While the seam line along his tummy is visible, it is a very small seam line, which is indicative that the stitching is inward-facing. There are no other visible seam lines. Cared for properly, this stuffed animal will last a very long time.
+
+## a bad example: build a bear's pawlette
+
+![Jumbo Pawlette, from build a bear. This variant is 3 feet tall.](images/25756Alt1x-300x300.jpg)
+
+A few people have asked me about [Build a Bear's Pawlette design](https://www.buildabear.com/online-exclusive-jumbo-pawlette/025756.html) recently, as it looks very similar to the Jellycat Bashful Bunny. I don't think it is a very good design.
+
+To start with, you can see that there are 21 separate panels stitched together: 4 for the ears, 3 for the head, 4 for the arms, 2 for the tummy, 2 for the back, 4 for the legs, and 2 for the feet. The seam lines are very visible, which indicates that there is a high likelihood that the stitching is outward rather than inward. That makes sense, because it's a lot easier to stitch up a stuffed animal in store that way. Additionally, you can see that the eyes are anchored to the seam lines that make up the face, which means detachment of the eyes is a likely possibility as a failure mode.
+
+Build a Bear has some good designs that are robustly constructed, but Pawlette is not one of them. I would avoid that one.
+
+Hopefully this is helpful to somebody; at the very least, I can link people to this post now when they ask about this stuff.
diff --git a/content/blog/a-tale-of-two-envsubst-implementations.md b/content/blog/a-tale-of-two-envsubst-implementations.md
new file mode 100644
index 0000000..7110781
--- /dev/null
+++ b/content/blog/a-tale-of-two-envsubst-implementations.md
@@ -0,0 +1,107 @@
+---
+title: "A tale of two envsubst implementations"
+date: "2021-04-15"
+---
+
+Yesterday, Dermot Bradley brought up in IRC that gettext-tiny's lack of an `envsubst` utility could be a potential problem, as many Alpine users [use it to generate configuration from templates](https://www.robustperception.io/environment-substitution-with-docker). So I decided to look into writing a replacement, as the tool did not seem that complex. That rewrite is [now available on GitHub](https://github.com/kaniini/envsubst), and is already in Alpine testing for experimental use.
+
+## What `envsubst` does
+
+The `envsubst` utility is designed to take a set of strings as input and replace variables in them, in the same way that shells do variable substitution. Additionally, the variables that will be substituted can be restricted to a defined set, which is nice for reliability purposes.
+
+Because it provides a simple way to perform substitutions in a file without having to mess with `sed` and other similar utilities, it is seen as a helpful tool for building configuration files from templates: you just install the `cmd:envsubst` provider with apk and perform the substitutions.
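+
+To make the intended behavior concrete, here is a rough sketch of the core substitution logic in Python. This is a toy illustration, not the code of either implementation; the function name and API are invented:
+
```python
import os
import re

def envsubst(text, variables=None, env=None):
    """Replace $FOO and ${FOO} references with environment values.

    If `variables` is given, only those names are substituted and
    all other references are left untouched, mirroring envsubst's
    restricted-substitution mode."""
    env = os.environ if env is None else env
    pattern = re.compile(r"\$(\w+)|\$\{(\w+)\}")

    def replace(match):
        name = match.group(1) or match.group(2)
        if variables is not None and name not in variables:
            return match.group(0)  # not in the allowed set: leave as-is
        return env.get(name, "")

    return pattern.sub(replace, text)

print(envsubst("User $USER in ${HOME}",
               env={"USER": "kaniini", "HOME": "/home/kaniini"}))
# → User kaniini in /home/kaniini
```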
+
+Unfortunately though, GNU `envsubst` is quite deficient in terms of functionality and interface.
+
+## Good tool design is important
+
+When building a tool like `envsubst`, it is important to think about how it will be used. One of the things that is really important is making sure a tool is satisfying to use: a tool which has non-obvious behavior or implies functionality that is not actually there is a badly designed tool. Sadly, while sussing out a list of requirements for my replacement `envsubst` tool, I found that GNU `envsubst` has several deficiencies that are quite disappointing.
+
+### GNU `envsubst` does not actually implement POSIX variable substitution like a shell would
+
+In POSIX, variable substitution is more than simply replacing a variable with the value it is defined to. In GNU `envsubst`, the documentation speaks of _shell variables_, and then outlines the `$FOO` and `${FOO}` formats for representing those variables. The latter format implies that POSIX variable substitution is supported, but it's not.
+
+In a POSIX-conformant shell, you can do:
+
+```shell
+% FOO="abc_123"
+% echo ${FOO%_*}
+abc
+```
+
+Unfortunately, this isn't supported by GNU `envsubst`:
+
+```shell
+% FOO="abc_123" envsubst
+$FOO
+abc_123
+${FOO}
+abc_123
+${FOO%_*}
+${FOO%_*}
+```
+
+It's not yet supported by my implementation either, [but it's on the list of things to do](https://github.com/kaniini/envsubst/issues/1).
+
+### Defining a restricted set of environment variables is bizarre
+
+GNU `envsubst` describes taking an optional `[SHELL-FORMAT]` parameter. The way this feature is implemented is truly bizarre, as seen below:
+
+```shell
+% envsubst -h
+Usage: envsubst [OPTION] [SHELL-FORMAT]
+...
+Operation mode:
+  -v, --variables             output the variables occurring in SHELL-FORMAT
+...
+% FOO="abc123" BAR="xyz456" envsubst FOO
+$FOO
+$FOO
+% FOO="abc123" envsubst -v FOO
+% FOO="abc123" envsubst -v \$FOO
+FOO
+% FOO="abc123" BAR="xyz456" envsubst \$FOO
+$FOO
+abc123
+$BAR
+$BAR
+% FOO="abc123" BAR="xyz456" envsubst \$FOO \$BAR
+envsubst: too many arguments
+% FOO="abc123" BAR="xyz456" envsubst \$FOO,\$BAR
+$FOO
+abc123
+$BAR
+xyz456
+$BAZ
+$BAZ
+% envsubst -v
+envsubst: missing arguments
+%
+```
+
+As discussed above, `[SHELL-FORMAT]` is a very strange thing to call this, because it is not really a shell variable substitution format at all.
+
+Then there's the matter of requiring variable names to be provided in this shell-like variable format. That requirement gives a shell script author the ability to easily break their script by accident, for example:
+
+```shell
+% echo 'Your home directory is $HOME' | envsubst $HOME
+Your home directory is $HOME
+```
+
+Because you forgot to escape `$HOME` as `\$HOME`, the substitution list was empty:
+
+```shell
+% echo 'Your home directory is $HOME' | envsubst \$HOME
+Your home directory is /home/kaniini
+```
+
+The correct way to handle this would be to accept `HOME` without having to describe it as a variable. That approach is supported by my implementation:
+
+```shell
+% echo 'Your home directory is $HOME' | ~/.local/bin/envsubst HOME
+Your home directory is /home/kaniini
+```
+
+Then there's the matter of not supporting multiple variables in the traditional UNIX style (as separate tokens). Being forced to use a comma on top of a variable sigil is just bizarre and makes the tool absolutely unpleasant to use with this feature. For example, this is how you're supposed to add two variables to the substitution list in GNU `envsubst`:
+
+```shell
+% echo 'User $USER with home directory $HOME' | envsubst \$USER,\$HOME
+User kaniini with home directory /home/kaniini
+```
+
+While my implementation supports doing it that way, it also supports the more natural UNIX way:
+
+```shell
+% echo 'User $USER with home directory $HOME' | ~/.local/bin/envsubst USER HOME
+User kaniini with home directory /home/kaniini
+```
+
+## This is common with GNU software
+
+This isn't just about GNU `envsubst`. A lot of other GNU software is equally broken. Even the GNU C library [has design deficiencies which are similarly frustrating](https://drewdevault.com/2020/09/25/A-story-of-two-libcs.html). The reason why I wish to replace GNU software in Alpine is that in many cases, it is _defective by design_. Whether the design defects are caused by apathy or by politics, it doesn't matter: the end result is the same, we get defective software. I want better security and better reliability, which means we need better tools.
+
+We can talk about the FSF political issue, and many are debating that at length. But the larger picture is that the tools made by the GNU project are, for the most part, clunky and unpleasant to use. That's the real issue that needs solving.
diff --git a/content/blog/activitypub-the-present-state-or-why-saving-the-worse-is-better-virus-is-both-possible-and-important.md b/content/blog/activitypub-the-present-state-or-why-saving-the-worse-is-better-virus-is-both-possible-and-important.md
new file mode 100644
index 0000000..43f4f33
--- /dev/null
+++ b/content/blog/activitypub-the-present-state-or-why-saving-the-worse-is-better-virus-is-both-possible-and-important.md
@@ -0,0 +1,110 @@
+---
+title: "ActivityPub: the present state, or why saving the 'worse is better' virus is both possible and important"
+date: "2019-01-10"
+---
+
+> This is the second article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.
+
+In [our previous episode](https://blog.dereferenced.org/activitypub-the-worse-is-better-approach-to-federated-social-networking), I laid out some personal observations about implementing an AP stack from scratch over the past year. When we started this arduous task, there were only three other AP implementations in progress: Mastodon, Kroeg and PubCrawl (the AP transport for Hubzilla), so it has been a pretty significant journey.
+
+I also described how ActivityPub was a student of the 'worse is better' design philosophy. Some people felt a little hurt by this, but they shouldn't have: after all, UNIX (of which modern Linux and BSD systems are a derivative) is also a student of the 'worse is better' philosophy. And much like the unices of yesteryear, ActivityPub right now has a lot of missing pieces. But that's alright, as long as the participants in this experiment understand the limitations.
+
+For the first time in decades, the success of ActivityPub, in part by way of its aggressive adoption of the 'worse is better' philosophy (which enabled them to ship _something_), has gained traction that has inspired people to believe that perhaps we can take back the Web and make it open again. This in itself is a wonderful thing, and we must do our best to seize this opportunity and run with it.
+
+As I mentioned, there have been a huge number of projects looking to implement AP in some way or another, many not yet in a public stage but seeking guidance on how to write an AP stack. My DMs have been quite busy with questions about ActivityPub over the past couple of months.
+
+## Let's talk about the elephant in the room, actually no not that one.
+
+ActivityPub has been brought this far by the [W3C Social CG](https://www.w3.org/community/socialcg/). This is a Community Group that was chartered by the W3C to advance the Social Web.
+
+While they did a good job at getting some of the best minds into the same room and talking about building a federated social web, a lot of decisions were already predetermined (using pump.io as a basis) or left underspecified to satisfy other groups inside W3C. Finally, the ActivityPub specification itself claimed that pure JSON could be used to implement ActivityPub, but the W3C kept pushing for layered specs on top like [JSON-LD Linked Data Signatures](https://w3c-dvcg.github.io/ld-signatures/), a spec that is not yet finalized but depends on JSON-LD.
+
+[LDS has a lot of problems](https://blog.dereferenced.org/the-case-for-blind-key-rotation), but I have covered them already. You can read about some of those problems by reading up on a mitigation known as [Blind Key Rotation](https://blog.dereferenced.org/the-case-for-blind-key-rotation). Anyway, this isn't _really_ about W3C pushing for use of LDS in AP; that is just one illustrative example of trying to bundle JSON-LD and its dependencies into ActivityPub to make JSON-LD a de facto requirement.
+
+Because of this bundling issue, we established a new community group called [LitePub](https://litepub.social/litepub). It was meant to be a workspace for people actually implementing ActivityPub stacks, so that they could get documentation and support for using ActivityPub without JSON-LD, or using JSON-LD in a safe way. To date, the LitePub community is one of the best resources for asking questions about ActivityPub and getting real answers that can be used in production today.
+
+But to build the next generation of ActivityPub, the LitePub group isn't enough. Is W3C still interested? Unfortunately, from what I can tell, not really: [they are pursuing another system that was developed in house called SOLID](https://www.w3.org/community/solid/), which is built on the [Linked Data Platform](https://www.w3.org/TR/ldp/). Since SOLID is being developed by W3C top brass, I would assume that they aren't interested in stewarding a new revision of ActivityPub. And why would they be? SOLID is essentially a semantic web retread of ActivityPub, which gives the W3C top brass exactly what they wanted in the first place.
+
+In some ways, I argue that W3C's perceived disinterest in Social Web technologies other than SOLID largely has to do with fediverse projects having a very lukewarm response to JSON-LD and LDS.
+
+The good news is that there have been some initial conversations between a few projects on what a working group to build the next generation of ActivityPub would look like, how it would be managed, and how it would be funded. We will be having more of these conversations over the next few months.
+
+## ActivityPub: the present state
+
+In the first blog post, I went into [a little detail about the present state of ActivityPub](https://blog.dereferenced.org/activitypub-the-worse-is-better-approach-to-federated-social-networking). But is it really as bad as I said?
+
+I am going to break down a few examples of faults in the protocol and talk about their current state as well as what we are doing for short-term mitigations and where we are doing them.
+
+### Ambiguous addressing: is it a DM or just a post directly addressed to a circle of friends?
+
+As Osada and Hubzilla started to get attention, Mastodon and Pleroma users started to see weird behavior in their notifications and timelines: messages directly addressed to them from people they didn't necessarily follow. These are messages sent to a group of selected friends, but they can otherwise be forwarded (boosted/repeated/announced) to other audiences.
+
+In other words, they do not have the same _semantic_ meaning as a DM. But due to the way they were addressed, Mastodon and Pleroma saw them as a DM.
+
+Mastodon fixed this issue in 2.6 by adding heuristics: if a message has recipients in both the `to` and `cc` fields, then it's a public message that is addressed to a group of recipients, and not a DM. Unfortunately, Mastodon treats it similarly to a followers-only post and does not infer the correct rights.
+
+Meanwhile, Pleroma and Friendica came up with the idea to add a semantic hint to the message with the `litepub:directMessage` field. If this is set to true, it should be considered as a direct message. If the field is set to false, then it should be considered a group message. If the field is unset, then heuristics are used to determine the message type.
+
+Pleroma has a branch in progress which adds both support for the `litepub:directMessage` field as well as the heuristics. It should be landing shortly (it needs a rebase and I need to fix up some of the heuristics).
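A sketch of that combined logic may make it clearer. The `litepub:directMessage` field and the to/cc heuristic come from the behavior described above, but the exact ordering and helper shape here are my own simplification, not Pleroma's actual implementation:

```python
AS_PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def classify_message(activity: dict) -> str:
    """Classify an incoming activity as 'direct', 'group' or 'public'."""
    # The explicit semantic hint wins when present.
    hint = activity.get("litepub:directMessage")
    if hint is True:
        return "direct"
    if hint is False:
        return "group"

    to = activity.get("to", [])
    cc = activity.get("cc", [])
    if AS_PUBLIC in to or AS_PUBLIC in cc:
        return "public"
    # Mastodon 2.6-style heuristic: recipients in both fields means an
    # addressed group post, not a DM.
    if to and cc:
        return "group"
    return "direct"
```

With no hint set, a message addressed only via `to` falls through to the DM case, which matches the pre-2.6 behavior that caused the confusion in the first place.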
+
+So overall, the issue is reasonably mitigated at this point.
+
+### Fake direction attacks
+
+Several months ago, [Puckipedia](https://puckipedia.com/) did some fake direction testing against mainstream ActivityPub implementations. Fake direction attacks are especially problematic because they allow messages to be spoofed.
+
+She found vulnerabilities in Mastodon, Pleroma and PixelFed, as well as, [more recently, in a couple of other fediverse applications](https://puckipedia.com/mn1n-7nny).
+
+The vulnerabilities she reported in Mastodon, Pleroma and PixelFed have been fixed, but, as she observes, the class of vulnerability keeps reappearing.
+
+In part, we can mitigate this by writing excellent security documentation and referring people to read it. This is something that I hope the LitePub group can do in the future.
+
+But for now, I would say this issue is not fully mitigated.
+
+### Leakage caused by Mastodon's followers-only scope
+
+Software which is directly compatible with the Mastodon followers-only scope has a few problems, which I am grouping together here:
+
+- New followers can see content that was posted before they were authorized to view any followers-only content
+- Replies to followers-only posts are addressed to their _own_ followers instead of the followers collection of the OP at the time the post was created (which creates metadata leaks about the OP)
+- Software which does not support the followers-only scope can dereference the OP's followers collection in any way they wish, including interpreting it as `as:Public` (this is explicitly allowed by the ActivityStreams 2.0 specification, you can't even make this up)
+
+Mitigation of this is actually incredibly easy, which makes me question why Mastodon didn't do it to begin with: simply expand the followers collection when preparing to send the message outbound.
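A minimal sketch of that mitigation, assuming a `resolve_collection` helper that looks up the current members of a followers collection (the helper and its shape are my own illustration, not Pleroma's actual code):

```python
def expand_followers(activity: dict, resolve_collection) -> dict:
    """Replace followers-collection references with concrete actor IDs
    at send time, so the recipient list is a snapshot of who was
    authorized when the post was created."""
    out = dict(activity)
    expanded = []
    for recipient in activity.get("to", []):
        # Hypothetical convention: collection URIs end in /followers.
        if recipient.endswith("/followers"):
            expanded.extend(resolve_collection(recipient))
        else:
            expanded.append(recipient)
    out["to"] = expanded
    return out
```

Because the collection is expanded before delivery, new followers never retroactively gain access to older posts, and receiving software that doesn't understand the followers-only scope has no collection URI left to misinterpret as `as:Public`.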
+
+An implementation of this will be landing in Pleroma soon to harden the followers-only scope as well as fix followers-only threads to be more usable.
+
+Implementation of this mitigation also brings the followers-only threads to Friendica and Hubzilla in a safe and compatible way: all fediverse software will be able to properly interact with the threads.
+
+### The “don't @ me” problem
+
+> Some of this interpretation about Zot may be slightly wrong, it is based on reading the specification for Zot and Zot 6.
+
+Other federated protocols such as DFRN, Zot and Zot 6 provide a rich framework for defining what interactions are allowed with a given message. ActivityPub doesn't.
+
+DFRN provides UI hints on each object that hint at what may be done with the object, but uses a capabilities system under the hood. Capability enforcement is done by the “feed producer,” which either accepts your request or denies it. If you comment on a post in DFRN, it is the responsibility of the parent “feed producer” to forward your post onward through the network.
+
+Zot uses a similar capabilities system but provides a magic signature in response to consuming the capability, which you then forward as proof of acceptance. Zot 6 uses a similar authentication scheme, except using OpenWebAuth instead of the original Zot authentication scheme.
+
+For ActivityPub, my proposal is to use a system of capability URIs and proof objects that are cross-checked by the receiving server. Cryptographic signatures are not a component of the proof objects themselves; the system is strictly capability-based. Cryptographic verification could be provided by leveraging HTTP Signatures to sign the response, if desired. I am still working out the details of how precisely this will work, and that will probably be what the next blog post is about.
+
+As a datapoint: in Pleroma, we already use this cross-checking technique to verify objects which have been forwarded to us due to ActivityPub §7.1.2. This allows us to avoid JSON-LD and LDS signatures and is the recommended way to verify forwarded objects in LitePub implementations.
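As a sketch of that cross-checking idea, with `fetch` standing in for an HTTP GET of the object's `id` from its origin server (the helper and its shape are my own illustration):

```python
def verify_forwarded_object(obj: dict, fetch) -> bool:
    """Verify a forwarded object by re-fetching it from its origin.

    A real implementation would also check that the id's host matches
    the attributedTo actor's host before trusting the result.
    """
    authoritative = fetch(obj["id"])
    if authoritative is None:
        return False
    # Only trust the forwarded copy if the origin serves the same content.
    return authoritative == obj
```

The point of the technique is that the origin server is the source of truth, so no signature on the forwarded copy is needed at all.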
+
+### Unauthenticated object fetching
+
+Right now, due to the nature of ActivityPub and the design motivations behind it, fetching public objects is entirely unauthenticated.
+
+This has led to a few incidents where fediverse users have gotten upset that their posts still arrived at servers they had blocked, which is exactly what they expected blocking to prevent.
+
+Mastodon has implemented an extension for post fetching where fetching private posts is authenticated using the HTTP Signature of the user who is fetching the post. This is a possible way of solving the authentication problem: instances can be identified based on which actor signed the request.
+
+However, I don't think that fetching private posts in this way is a good idea (those fetches should always fail), and I wouldn't recommend it. With that said, a more generalized approach based on using HTTP Signatures to fetch public posts could be workable.
+
+But I do not think the AP server should use a random user's key to sign the requests: instead there should be an AP actor which explicitly represents the whole instance, and the instance actor's key should be used to sign the fetch requests instead. That way information about individual users isn't leaked, and signatures aren't created without the express consent of a random instance user.
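To illustrate, here is how the signing string from the HTTP Signatures draft is typically constructed for such a fetch; the header selection of `(request-target)`, `host` and `date` is a common minimal choice, not something mandated by the post. The instance actor's key would then sign this string to produce the `Signature` header:

```python
def build_signing_string(method: str, path: str, host: str, date: str) -> str:
    """Build the draft-cavage HTTP Signatures signing string for a
    GET of a remote object, to be signed with the instance actor's key."""
    return "\n".join([
        f"(request-target): {method.lower()} {path}",
        f"host: {host}",
        f"date: {date}",
    ])
```

Because the signature covers the target and the date, a signed fetch identifies the requesting instance without leaking anything about its individual users.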
+
+Once object fetches are properly authenticated in a way that instances are identifiable, then objects can be selectively disclosed. This also hardens object fetching via third parties such as crawlers.
+
+## Conclusion
+
+In this particular blog entry, I discussed why ActivityPub is still the hero we need despite being designed with the 'worse is better' philosophy, outlined some early plans for cross-project collaboration on a next-generation ActivityPub-based protocol, and examined a few of the common problem areas with ActivityPub and how we can mitigate them in the future.
+
+And with that, despite the present issues we face with ActivityPub, I will end this by borrowing a common saying from the cryptocurrency community: the future is bright, the future is decentralized.
diff --git a/content/blog/activitypub-the-worse-is-better-approach-to-federated-social-networking.md b/content/blog/activitypub-the-worse-is-better-approach-to-federated-social-networking.md
new file mode 100644
index 0000000..2b92c17
--- /dev/null
+++ b/content/blog/activitypub-the-worse-is-better-approach-to-federated-social-networking.md
@@ -0,0 +1,54 @@
+---
+title: "ActivityPub: The “Worse Is Better” Approach to Federated Social Networking"
+date: "2019-01-07"
+---
+
+> This is the first article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.
+
+In the modern day, many other developers working on libre software and I have been exposed to a protocol design philosophy that emphasizes safety and correctness. That philosophy can be summarized with these goals:
+
+- Simplicity: the protocol must be simple to implement. It is more important for the protocol to be simple than the backend implementation.
+- Correctness: the protocol must be verifiably correct. Incorrect behavior is simply not allowed.
+- Safety: the protocol must be designed in a way that is safe. Behavior and functionality which risks safety is considered incorrect.
+- Completeness: the protocol must cover as many situations as is practical. All reasonably expected cases must be covered. Simplicity is not a valid excuse to reduce completeness.
+
+Most people would correctly refer to these as good characteristics and overall the right way to approach designing protocols, especially in a federated and social setting. In many ways, the [Diaspora protocol](https://diaspora.github.io/diaspora_federation/) could be considered as an example of this philosophy of design.
+
+The “worse is better” approach to protocol design is only slightly different:
+
+- Simplicity: the protocol must be simple to implement. It is important for the backend implementation to be equally simple as the protocol itself. Simplicity of both implementation and protocol are the most important considerations in the design.
+- Correctness: the protocol must be correct when tested against reasonably expected cases. It is more important to be simple than correct. Inconsistencies between real implementations and theoretical implementations are acceptable.
+- Safety: the protocol must be safe when tested against basic use cases. It is more important to be simple than safe.
+- Completeness: the protocol must cover reasonably expected cases. It is more important for the protocol to be simple than complete. Under-specification is acceptable when it improves the simplicity of the protocol.
+
+[OStatus](https://indieweb.org/OStatus) and [ActivityPub](https://www.w3.org/tr/activitypub) are examples of the “worse is better” approach to protocol design. I have intentionally portrayed this design approach in a way that attempts to convince you that it is a really bad approach.
+
+However, I do believe that this approach, even though it is a considerably worse way to design protocols and creates technologies that people cannot fully trust or feel safe using, has better survival characteristics.
+
+To understand why, we have to look at both what expected security features of federated social networks are, and what people mostly use social networks for.
+
+When you ask people what security features they expect of a federated social networking service such as Mastodon or Pleroma, they usually reply with a list like this:
+
+- I should be able to interact with my friends.
+- The messages I share only with my friends should be handled in a secure manner. I should be able to depend on the software to not compromise my private posts.
+- Blocking should work reasonably well: if I block someone, they should disappear from my experience.
+
+These requirements sound reasonable, right? And of course, ActivityPub mostly gets the job done. After all, the main uses of social media are shitposting, posting selfies and sharing pictures of your dog. But would users be better served by a different protocol? Absolutely.
+
+See, the thing is, ActivityPub is like a virus. The protocol is simple enough to implement that people can actually do it. And they are, aren't they? There are over 40 applications presently in development that use ActivityPub as the basis of their networking stack.
+
+Why is this? Because, _despite_ the design flaws in ActivityPub, it is generally _good enough_: you can interact with your friends, and in compliant implementations, addressing ensures that nobody else except for those you explicitly authorize will read your messages.
+
+But it's not good enough: [for example, people have expressed that they want others to be able to read messages, but not reply to them](https://github.com/tootsuite/mastodon/issues/8565).
+
+Had ActivityPub been a capability-based system instead of a signature-based system, this would never have been a concern to begin with: replies to the message would have gone to a special capability URI and then accepted or rejected.
+
+There are similar problems with things like the Mastodon “followers-only” posts and general concerns like direct messaging: these types of messages imply specific policy, but there is no mechanism in ActivityPub to convey these semantics. (This is in part solved by the LitePub `litepub:directMessage` flag, but that's a kludge to be honest.)
+
+I've also mentioned before that a large number of the instances of discourse about Mastodon versus Pleroma have actually been caused by outright design failures in ActivityPub.
+
+An example of this is that instances you've banned can still see threads from your instance: somebody from a third instance interacts with the thread, and then the software (either Mastodon or Pleroma) reconstructs the entire thread. Since there is no authentication requirement for retrieving a thread, blocked instances can successfully reconstruct the threads they weren't allowed to receive in the first place. The only difference between Mastodon and Pleroma here is that Pleroma allows the general public to view the shared timelines without using a third-party tool, which exposes the leaks caused by ActivityPub's bad design.
+
+In an ideal world, the number of ActivityPub implementations would be zero. But of course this is not an ideal world, so that leaves us with the question: “where do we go from _here_?”
+
+And honestly, I don't know how to answer that yet. Maybe we can save ActivityPub by extending it to be properly capability-based and eventually dropping support for the ActivityPub of today. But this will require coordination between all the vendors. And with 40+ projects out there, it's not going to be easy. And do we even care about those 40+ projects anyway?
diff --git a/content/blog/actually-bsd-kqueue-is-a-mountain-of-technical-debt.md b/content/blog/actually-bsd-kqueue-is-a-mountain-of-technical-debt.md
new file mode 100644
index 0000000..c10faf3
--- /dev/null
+++ b/content/blog/actually-bsd-kqueue-is-a-mountain-of-technical-debt.md
@@ -0,0 +1,72 @@
+---
+title: "actually, BSD kqueue is a mountain of technical debt"
+date: "2021-06-06"
+---
+
+A side effect of [the whole freenode kerfluffle](https://ariadne.space/2021/05/20/the-whole-freenode-kerfluffle/) is that I've been looking at IRCD again. IRC is, of course, a very weird and interesting place, and the smaller community of people who run IRCDs is largely weirder and even more interesting.
+
+However, in that community of IRCD administrators there happens to be a few incorrect systems programming opinions that have been cargo culted around for years. This particular blog is about one of these bikesheds, namely the _kqueue vs epoll debate_.
+
+You've probably heard it before. It goes something like this: _"BSD is better for networking, because it has kqueue. Linux has nothing like kqueue, epoll doesn't come close."_ While I agree that epoll doesn't come close, I think that's actually a feature that has led to a much more flexible and composable design.
+
+## In the beginning...
+
+Originally, IRCD, like most daemons, used `select` to poll sockets for readiness, as this was the first polling API available on systems with BSD sockets. The `select` syscall works by taking a set of three bitmaps, with each bit position corresponding to a file descriptor number. The bitmaps are the `read_set`, `write_set` and `err_set`, which map to sockets that can be read, written to, or have errors, respectively. Due to design defects in the `select` syscall, it can only support up to `FD_SETSIZE` file descriptors on most systems. This can be mitigated by making `fd_set` an arbitrarily large bitmap and depending on `fdmax` to be the upper bound, which is what WinSock has traditionally done on Windows.
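Python wraps the same syscall directly, which makes the set-based interface easy to see. A minimal sketch using a local socket pair:

```python
import select
import socket

# A connected socket pair: writing to one end makes the other readable.
a, b = socket.socketpair()
b.send(b"hello")

# select() takes three descriptor sets (read, write, error) and returns
# the subsets that are ready; under the hood these are the bitmaps
# described above.
readable, writable, errored = select.select([a], [a], [a], 1.0)
```

Note that the caller rebuilds and rescans these sets on every call, which is exactly the scalability problem described next.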
+
+The `select` syscall clearly had some design deficits that negatively affected scalability, so AT&T introduced the `poll` syscall in System V UNIX. The `poll` syscall takes an array of `struct pollfd` of user-specified length, and updates a bitmap of flags in each `struct pollfd` entry with the current status of each socket. Then you iterate over the `struct pollfd` list. This is naturally a lot more efficient than `select`, where you have to iterate over all file descriptors up to `fdmax` and test for membership in each of the three bitmaps to ascertain each socket's status.
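The same scenario through the `poll`-style interface, again via Python's wrapper around the syscall:

```python
import select
import socket

a, b = socket.socketpair()
b.send(b"ping")

# poll() keeps one struct pollfd per registered descriptor instead of
# scanning bitmaps up to fdmax on every call.
p = select.poll()
p.register(a.fileno(), select.POLLIN)
events = p.poll(1000)  # returns a list of (fd, eventmask) pairs
```

Only the descriptors with pending events come back, so the caller iterates over actual work rather than the whole descriptor space.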
+
+It can be argued that `select` was bounded by `FD_SETSIZE` (which is usually 1024 sockets), while `poll` begins to have serious scalability issues at around `10240` sockets. These arbitrary benchmarks have been referred to as the C1K and C10K problems respectively. Dan Kegel has a [very lengthy post on his website](http://www.kegel.com/c10k.html) about his experiences mitigating the C10K problem in the context of running an FTP site.
+
+## Then there was kqueue...
+
+In July 2000, Jonathan Lemon introduced kqueue into FreeBSD, which quickly propagated into the other BSD forks as well. kqueue is a kernel-assisted event notification system using two syscalls: `kqueue` and `kevent`. The `kqueue` syscall creates a handle in the kernel represented as a file descriptor, which a developer uses with `kevent` to add and remove _event filters_. Event filters can match against file descriptors, processes, filesystem paths, timers, and so on.
+
+This design allows for a single-threaded server to process hundreds of thousands of connections at once, because it can register all of the sockets it wishes to monitor with the kernel and then lazily iterate over the sockets as they have events.
+
+Most IRCDs have supported `kqueue` for the past 15 to 20 years.
+
+## And then epoll...
+
+In October 2002, Davide Libenzi got [his `epoll` patch](http://www.xmailserver.org/linux-patches/nio-improve.html) merged into Linux 2.5.44. Like with kqueue, you use the `epoll_create` syscall to create a kernel handle which represents the set of descriptors to monitor. You use the `epoll_ctl` syscall to add or remove descriptors from that set. And finally, you use `epoll_wait` to wait for kernel events.
+
+In general, the scalability aspects are the same to the application programmer: you have your sockets, you use `epoll_ctl` to add them to the kernel's `epoll` handle, and then you wait for events, just like you would with `kevent`.
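A minimal sketch of that flow, using Python's `select.epoll` wrapper (Linux only):

```python
import select
import socket

a, b = socket.socketpair()
b.send(b"ping")

ep = select.epoll()                      # epoll_create
ep.register(a.fileno(), select.EPOLLIN)  # epoll_ctl(EPOLL_CTL_ADD, ...)
events = ep.poll(1.0)                    # epoll_wait -> [(fd, eventmask)]
```

The registration is persistent in the kernel, so the per-wait cost depends on the number of ready descriptors, not the number of monitored ones.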
+
+Like `kqueue`, most IRCDs have supported `epoll` for the past 15 years.
+
+## What is a file descriptor, anyway?
+
+To understand the argument I am about to make, we need to talk about _file descriptors_. UNIX uses the term _file descriptor_ a lot, even when referring to things which are clearly _not_ files, like network sockets. Outside the UNIX world, a file descriptor is usually referred to as a _kernel handle_. Indeed, in Windows, kernel-managed resources are given the `HANDLE` type, which makes this relationship more clear. A kernel handle is essentially an opaque reference to an object in kernel space, and the astute reader may notice some similarities to the [object-capability model](https://en.wikipedia.org/wiki/Object-capability_model) as a result.
+
+Now that we understand that file descriptors are actually just kernel handles, we can talk about `kqueue` and `epoll`, and why `epoll` is actually the correct design.
+
+## The problem with event filters
+
+The key difference between `epoll` and `kqueue` is that `kqueue` operates on the notion of _event filters_ instead of _kernel handles_. This means that any time you want `kqueue` to do something new, you have to add a new type of _event filter_.
+
+[FreeBSD presently has 10 different event filter types](https://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2): `EVFILT_READ`, `EVFILT_WRITE`, `EVFILT_EMPTY`, `EVFILT_AIO`, `EVFILT_VNODE`, `EVFILT_PROC`, `EVFILT_PROCDESC`, `EVFILT_SIGNAL`, `EVFILT_TIMER` and `EVFILT_USER`. Darwin has additional event filters concerning monitoring Mach ports.
+
+Other than `EVFILT_READ`, `EVFILT_WRITE` and `EVFILT_EMPTY`, all of these event filter types relate to entirely different concerns in the kernel: they don't monitor kernel handles, but instead hook into specific kernel subsystems other than sockets.
+
+This makes for a powerful API, but one which lacks [composability](https://en.wikipedia.org/wiki/Composability).
+
+## `epoll` is better because it is composable
+
+It is possible to do almost everything that `kqueue` can do on FreeBSD in Linux, but instead of having a single monolithic syscall to handle _everything_, Linux takes the approach of providing syscalls which allow almost anything to be represented as a _kernel handle_.
+
+Since `epoll` strictly monitors _kernel handles_, you can register _any_ kernel handle you have with it and get events back when its state changes. As a comparison to Windows, this basically means that `epoll` is a kernel-accelerated form of `WaitForMultipleObjects` in the Win32 API.
+
+You are probably wondering how this works, so here's a table of commonly used `kqueue` event filters and the Linux syscall used to get a kernel handle for use with `epoll`.
+
+| BSD event filter | Linux equivalent |
+| --- | --- |
+| `EVFILT_READ`, `EVFILT_WRITE`, `EVFILT_EMPTY` | Pass the socket with `EPOLLIN` etc. |
+| `EVFILT_VNODE` | `inotify` |
+| `EVFILT_SIGNAL` | `signalfd` |
+| `EVFILT_TIMER` | `timerfd` |
+| `EVFILT_USER` | `eventfd` |
+| `EVFILT_PROC`, `EVFILT_PROCDESC` | `pidfd`, alternatively bind processes to a `cgroup` and monitor `cgroup.events` |
+| `EVFILT_AIO` | `aiocb.aio_fildes` (treat as socket) |
+
+As you can hopefully see, `epoll` can monitor _any_ kind of kernel resource without itself having to be modified, thanks to its composable design, which makes it superior to `kqueue` from the perspective of accumulating less technical debt.
+
+Interestingly, [FreeBSD has added support for Linux's `eventfd` recently](https://www.freebsd.org/cgi/man.cgi?query=eventfd&apropos=0&sektion=2&manpath=FreeBSD+13.0-RELEASE+and+Ports&arch=default&format=html), so it appears that they may take `kqueue` in this direction as well. Between that and FreeBSD's [process descriptors](https://www.freebsd.org/cgi/man.cgi?query=procdesc&sektion=4&apropos=0&manpath=FreeBSD+13.0-RELEASE+and+Ports), it seems likely.
diff --git a/content/blog/alpineconf-2021-recap.md b/content/blog/alpineconf-2021-recap.md
new file mode 100644
index 0000000..51c645d
--- /dev/null
+++ b/content/blog/alpineconf-2021-recap.md
@@ -0,0 +1,144 @@
+---
+title: "AlpineConf 2021 recap"
+date: "2021-05-18"
+---
+
+Last weekend was AlpineConf, the first one ever. We held it as a virtual event, and over 700 participants came and went during the weekend. Although there were many things we learned up to and during the conference that could be improved, I think that the first AlpineConf was a great success! If you're interested in rewatching the event, both days have mostly full recordings on the [Alpine website](https://alpinelinux.org/conf).
+
+## What worked
+
+We held the conference on a [BigBlueButton](https://bigbluebutton.org) instance I set up and used the [Alpine Gitlab for organizing](https://gitlab.alpinelinux.org/alpine/alpineconf-cfp). BigBlueButton scaled well: the server performed smoothly even when we had nearly 100 active participants. Similarly, using issue tracking in Gitlab helped us keep the CFP process simple. I think in general, we will keep this setup for future events, as it worked quite well.
+
+## What didn't work so well
+
+A major problem with BigBlueButton was attaching conference talks from YouTube. This caused problems with several privacy extensions which blocked the YouTube player from running. Also, the YouTube video playback segments are missing from the recordings. I'm going to investigate alternative options for this which should hopefully help with making the recorded talks play back correctly next time.
+
+Maybe if a BigBlueButton developer sees this, they can work to improve the YouTube viewing feature as well so that it works on the recording playback. That would be a really nice feature to have.
+
+Other than that, we only had one scheduling SNAFU, and that was basically my fault -- I didn't confirm the timeslot I scheduled the cloud team talk in, and so naturally, the cloud team was largely asleep because they were in US/Pacific time.
+
+Overall though, I think things went well and many people said they enjoyed the conference. Next year, as we will have some experience to draw from, things will be even better, hopefully.
+
+## The talks on day 1...
+
+The first day was very exciting with a lot of talks and [blahaj representation](https://www.ikea.com/us/en/p/blahaj-soft-toy-shark-90373590/). The talks mostly focused around user stories about Alpine. We learned about where and how Alpine was being used... from phones, to data centers, to windmills, to the science community. Here is the list of talks on the first day and my thoughts!
+
+#### The Beauty of Simplicity, by Cameron Seid (@deltaryz)
+
+This was the first talk of the conference and largely focused on how Cameron managed his Alpine server. It was a good starting talk for the conference, I think, because it showed how people use Alpine at home in their personal infrastructure. The talk was prerecorded and Cameron spent a lot of time on editing to make it look flashy.
+
+#### pmbootstrap: The Swiss Army Knife of postmarketOS development, by Oliver Smith (@ollieparanoid)
+
+postmarketOS is a distribution of Alpine for phones and other embedded devices.
+
+In this talk, Oliver went into `pmbootstrap`, a tool which helps to automate many of the tasks of building postmarketOS images and packages. About halfway through the talk, a user joined who I needed to make moderator, but I clicked the wrong button and made them presenter instead. Thankfully, Oliver was a good sport about it and we were able to fix the video playback quickly. I learned a lot about how `pmbootstrap` can be used for any sort of embedded project, and that opens up a lot of possibilities for collaborating with the pmOS team in other embedded applications involving Alpine.
+
+#### Using Alpine Linux in DataCenterLight, by Nico Schottelius (@telmich)
+
+In this talk, Nico walks us through how Alpine powers many devices in his data center project called DataCenterLight. He is using Alpine in his routing infrastructure with 10 gigabit links! The talk went over everything from routing all the way down to individual customer services, and briefly compared Alpine to Debian and Devuan from both a user and development point of view.
+
+#### aports-qa-bot: automating aports, by Rasmus Thomsen (@Cogitri)
+
+Rasmus talked about the `aports-qa-bot` he wrote which helps maintainers and the mentoring team review merge requests from contributors. He went into some detail about the modular design of the bot and how it can be easily extended for other teams and also Alpine derivatives. The postmarketOS team asked about deploying it for their downstream `pmaports` repo, so you'll probably be seeing the bot there soon.
+
+#### apk-polkit-rs: Using APK without the CLI, by Rasmus Thomsen (@Cogitri)
+
+Rasmus had the next slot as well, where he talked about his `apk-polkit-rs` project which provides a DBus service that can be called for installing and upgrading packages using apk. He also talked about the rust crate he is working on to wrap the apk-tools 3 API. Overall, the future looks very interesting for working with apk-tools from rust!
+
+#### Alpine building infrastructure update, by Natanael Copa (@ncopa)
+
+Next, Natanael gave a bubble talk about the Alpine building infrastructure. For me this was largely a trip down memory lane, as I witnessed the build infrastructure evolve first hand. He talked about how the first generation build infrastructure was a series of IRC bots which reacted to IRC messages in order to trigger a new build, and how the IRC infrastructure evolved from IRC to ZeroMQ to MQTT.
+
+He then showed how the builders work, using a live builder as an example, walking through the design and implementation of the build scripts. Finally, he proposed some ideas for building a more robust system that allowed for parallelizing the build process where possible.
+
+#### postmarketOS demo, by Martijn Braam (@MartijnBraam)
+
+Martijn showed us postmarketOS in action on several different phones. Did I mention he has a lot of phones? I asked in the Q&A afterwards and he said he had like 6 pinephones and somewhere around 60 other phones.
+
+I have to admire the dedication to reverse engineering phones that would lead to somebody acquiring 60+ phones to tinker with.
+
+#### Sxmo: Simple X Mobile - A minimalist environment for Linux smartphones, by Maarten van Gompel (@proycon)
+
+Maarten van Gompel, Anjandev Momi and Miles Alan gave a talk about and demonstration of Sxmo, their lightweight phone environment based on dwm, dmenu and a bunch of other tools as plumbing.
+
+The UI reminds me a lot of palmOS. I suspect if palmOS were still alive and kicking today, it would look like Sxmo. **Phone calls and text messages are routed through shell scripts**, a feature I didn't know I needed until I saw it in action. Sxmo probably is _the_ killer app for running an actual Linux distribution on your phone.
+
+This UI is absolutely _begging_ for jog-wheels to come back, and I for one hope they do.
+
+#### Alpine and the larger musl ecosystem (a roundtable discussion)
+
+This got off to a rocky start because I don't know how to organize stuff like this. I should have found somebody else to run the discussion, but it was really fruitful anyway. We came to the conclusion that we needed to work more closely together within the musl distribution ecosystem to proactively deal with issues like misinformed upstreams and so on, so that we do not have another Rust-like situation again. That led to the formation of `#musl-distros` on freenode to coordinate on these issues.
+
+#### Taking Alpine to the Edge and Beyond With Linux Foundation's Project EVE, by Roman Shaposhnik (@rvs)
+
+Roman talked about Project EVE, an edge computing solution being developed under the auspices of the LF Edge working group at Linux Foundation. EVE (Edge Virtualization Engine) is a distribution of Alpine built with Docker's LinuxKit, which has multiple Alpine-based containers working together in order to provide an edge computing solution.
+
+He talked about how the cloud has eroded software freedom (after all, you can't depend on free-as-in-freedom computing when it's on hardware you don't own) by encouraging users to trade it for convenience, and how edge computing brings that same convenience in-house, thus solving the software freedom issue.
+
+Afterward, he demonstrated how EVE is deployed on windmills to analyze audio recordings from the windmill to determine their health. All of that, including the customer application, is running on Alpine.
+
+He concluded the talk with a brief update on the `riscv64` port. It looks like we are well on the way to having the port in Alpine 3.15.
+
+#### BinaryBuilder.jl: The Subtle Art of Binaries that "Just Work", by Elliot Saba and Mosè Giordano
+
+Elliot and Mosè talked about BinaryBuilder, which they use to cross-compile software for all platforms supported by the Julia programming language. They do this by building the software in an Alpine-based environment under Linux namespaces or Docker (on mac).
+
+Amongst other things, they have a series of wrapper scripts around programs like `uname` which allow them to emulate the userspace commands of the target operating system, which helps convince badly written autoconf scripts to cooperate.
+
+All in all, it was a fascinating talk!
+
+## The talks on day 2...
+
+The talks on day 2 were primarily about the technical plumbing of Alpine.
+
+#### Future of Alpine Linux community chats (a roundtable discussion)
+
+We talked about the [current situation on freenode](https://news.ycombinator.com/item?id=27153338). We concluded that we would support the freenode staff in their efforts to find a solution through the end of the month, at which point we would reevaluate the situation.
+
+This led to a discussion about enhancing the IRC experience for new contributors, the possibility of setting up an internal IRC server for the project to use, and working with Element to set up a hosted Matrix server as an alternative.
+
+We also talked for the first time about the Alpine communities which are growing on non-free services such as Discord. Laurent observed that there is value in meeting users where they already are for outreach purposes, but also pointed out that the nature of proprietary chat networks imposes a software freedom issue that doesn't exist when we self-host our own. Most people agreed with these points, so we concluded that we would figure out plans to properly integrate these unofficial communities into Alpine.
+
+#### Security tracker demo and security team Q&A
+
+This was kind of a bubble talk. I gave a demo of the new security.alpinelinux.org tracker, as well as an overview of how the current CVE system works with the NVD and CIRCL feeds and so on. We then talked a bit about how the CVE system could be improved by the Linked Data proposal I am working on, which will be published shortly.
+
+Afterwards, we talked about initiatives like bringing `clang`'s Control Flow Integrity into Alpine, along with a number of other security topics. It was a fun session that ran for an hour and a half, as the talk in the 15:00 slot was cancelled.
+
+#### Alpine s390x port discussion, by me
+
+After the security talk, I talked a bit about running Alpine on mainframes, how they work, and why people still want to use them in 2021. In the Q&A we talked about big vs little endian and why people aren't mining Monero on mainframes.
+
+#### Simplified networking configuration with ifupdown-ng, by me
+
+This was an expanded talk about ifupdown-ng, loosely based on the one Max gave at VirtualNOG last year. I adapted his talk, replacing Debian-specific content with Alpine content, and talked a bit about NSL (RIP). The talk seemed to go well; in the Q&A we talked primarily about SR-IOV, which ifupdown-ng does not yet support.
+
+#### Declarative networking configuration with ifstate, by Thomas Liske (@liske)
+
+After the ifupdown-ng talk, Thomas talked about and demonstrated his `ifstate` project, which is available as an alternative to `ifupdown` in Alpine. Unlike ifupdown-ng, which takes a hybrid approach, and ifupdown, which takes an imperative approach, ifstate is a fully declarative implementation. The YAML syntax is quite interesting. I think ifstate will be quite popular with Alpine users requiring fully declarative configuration.
+
+#### AlpineConf 2.0 planning discussion
+
+After the networking track, we talked about AlpineConf next year. The conclusion was that AlpineConf is most valuable as a virtual event, and that if we want a physical presence, there are existing events like FOSDEM which we can use for that.
+
+#### Alpine cloud team talk and Q&A
+
+This wound up being a bit of a bubble talk because I failed to confirm in advance whether anyone from the cloud team could give a talk. Nonetheless, the talk was a huge success. We talked about Alpine in the cloud and how to build on it.
+
+#### systemd: the good parts, by Christine Dodrill (@Xe)
+
+Christine gave a talk about systemd's feature set that she would like to see implemented in Alpine somehow. In the chat, Laurent provided some commentary...
+
+It was a fun talk that was at least somewhat amusing.
+
+#### Governance event
+
+Finally, to close out the conference, Natanael talked about Alpine governance. In this event, he announced the dissolution of the Alpine Core Team and its replacement with the Alpine Council. The Alpine Council will initially be managed in the interim by Natanael Copa, Carlo Landmeter and Kevin Daudt. This group will handle the administrative responsibilities of the project, while a technical steering committee will handle the project's technical planning. This arrangement will likely be familiar to anyone who has used Fedora; I think it makes sense to copy what works!
+
+Afterwards, we talked a little bit informally about everyone's thoughts on the conference.
+
+## In closing...
+
+Thanks to [Natanael Copa for proposing the idea of AlpineConf last year](https://lists.alpinelinux.org/~alpine/devel/%3C20200521160527.718c2d2c%40ncopa-desktop.copa.dup.pw%3E), to Kevin Daudt for helping push the buttons and keep things going (especially when my internet connection failed due to bad weather), to all of the wonderful presenters (many of whom were giving a talk for the first time ever!), and to everyone who dropped in to participate in the conference!
+
+We will be having a technically-oriented Alpine miniconf in November, and then AlpineConf 2022 next May! Hopefully you will be at both; announcements will be forthcoming soon.
diff --git a/content/blog/an-inside-look-into-the-illicit-ad-industry.md b/content/blog/an-inside-look-into-the-illicit-ad-industry.md
new file mode 100644
index 0000000..f517cbe
--- /dev/null
+++ b/content/blog/an-inside-look-into-the-illicit-ad-industry.md
@@ -0,0 +1,89 @@
+---
+title: "an inside look into the illicit ad industry"
+date: "2021-11-04"
+---
+
+So, you want to work in ad tech, do you? Perhaps this will be a cautionary tale...
+
+I have worked my entire life as a contractor. This has had advantages and disadvantages. For example, I am free to set my own schedule, and undertake engagements at my own leisure, but as a result my tax situation is more complicated. Another advantage is that sometimes, you get involved in an engagement that is truly fascinating. This is the story of such an engagement. Some details have been slightly changed, and specific names are elided.
+
+A common theme amongst contractors in the technology industry is to band together to take on engagements which cannot be reasonably handled by a single contractor. Our story begins with such an engagement: a friend of mine ran a bespoke IT services company, which provided system administration, free software consulting and development. His company also handled the infrastructure deployment needs of customers who did not want to build their own infrastructure. I frequently worked with my friend on various consulting engagements over the years, including this one.
+
+One day, I was chilling in IRC when I got a PM from my friend: he had gotten an inquiry from a possible client that needed help reverse engineering a piece of obfuscated JavaScript. I said something like "sounds like fun, send it over, and I'll see what I come up with." The script in question was called `popunder.js` and did exactly what you think it does. The customer in question had started a popunder ad network, and needed help adapting this obfuscated popunder script to work with his system, which he had built using [software called Revive Adserver](https://en.wikipedia.org/wiki/Revive_Adserver), a fork of the last GPL version of OpenX.
+
+I rolled my eyes and reverse engineered the script for him, allowing him to adapt it for his ad network. The adaptation was a success, and he wired me a sum that was triple my quoted hourly rate. This, admittedly, resulted in me being very curious about his business, as at the time, I was not used to making that kind of money. Actually, I'm still not.
+
+A few weeks passed, and he approached me with a proposition: he needed somebody who could reverse engineer the JavaScript programs delivered by ad networks and figure out how the scripts worked. As he was paying considerably more than my advertised hourly rate, I agreed, and got to work reverse engineering the JavaScript programs he required. It was nearly a full-time job, as these programs kept evolving.
+
+In retrospect, he probably wasn't doing anything with the reports I wrote on each piece of JavaScript I reverse engineered, as that wasn't the actual point of the exercise: in reality, he wanted me to become familiar with the techniques ad networks used to detect fraud, so that we could develop countermeasures. In other words, the engagement evolved into a red-team type engagement, except that we weren't testing the ad networks for their sake, but for our own.
+
+## so-called "domain masking": an explanation
+
+Years ago, you might have browsed websites like The Pirate Bay and seen advertising for a popular game, or some other advertisement you wouldn't have expected to see on The Pirate Bay. I assure you, brands were not knowingly targeting users on TPB: they were being duped via a category of techniques called _domain masking_.
+
+This is a type of scam that black-hat ad networks run in order to launder illicit traffic into clean traffic: they will set up fake websites and apply for advertisements on those websites through a shell company. This gives them a clean advertising feed to serve ads from. The next step is to launder the traffic by serving those tags on empty pages on the website, so that you can use them with an `