formatting fixes

main
Ariadne Conill 2022-08-02 17:43:04 -05:00
parent 2cf7625acc
commit 4db7bdc0da
2 changed files with 14 additions and 8 deletions

@@ -19,8 +19,10 @@ Recently I had a stick of RAM fail on treefort. I ordered a replacement stick an
I thought it was a little weird that one drive out of the three had failed, so I assumed it was just due to maintenance; perhaps the drive had been reseated after the RAM stick was replaced, after all. As the price of a replacement 4TB Samsung SSD is presently around $700 retail, I thought I would re-add the drive to the array, assuming it would fail out of the array again during rebuild if it had actually failed.
```
# mdadm --manage /dev/md2 --add /dev/sdb3
mdadm: added /dev/sdb3
```
I then checked /proc/mdstat and it reported the array as healthy. I thought nothing of it, though in retrospect maybe I should have found this suspicious: there was no mention of the array being in a recovery state; instead it was reported healthy, with three drives present. Unfortunately, I figured “ok, I guess it's fine” and left it at that.
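Had a rebuild actually been in progress, /proc/mdstat would normally have shown a recovery progress bar for the array. A rough sketch of what that looks like, with illustrative device names and figures rather than the actual output from treefort:

```
md2 : active raid1 sdb3[3] sda3[0] sdc3[2]
      3906886464 blocks super 1.2 [3/2] [U_U]
      [>....................]  recovery =  1.4% (55001216/3906886464) finish=412.5min speed=155629K/sec
```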
@@ -30,8 +32,10 @@ Meanwhile, the filesystem in the treefort environment being backed by the local
I was not aware of the data corruption issue until today, anyway, when I logged into the treefort environment and decided to fire up nano to finish up some work I had been doing that needed to be resolved this week. That led to a rude surprise:
```
treefort:~$ nano
Segmentation fault
```
This worried me: after all, why would nano crash if it had been working yesterday and nothing had changed? So, I used `apk fix` to reinstall nano, making it work again. At this point, I was quite suspicious that something was up with the server, so I immediately killed all the guests running on it and focused on the bare metal host environment (what we would call the dom0 if we were still using Xen).
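The repair itself was just a matter of asking apk to reinstall the damaged package; a minimal sketch, assuming nano was the only package that needed fixing:

```
# apk fix nano
```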

@@ -13,25 +13,27 @@ Before we get into this, it is important to again restate that if you are an app
Where did `gethostbyname` come from, anyway? Most people believe this function came from BIND, the reference DNS implementation developed by the Berkeley CSRG. In reality, it was introduced to BSD in 1982, alongside the `sethostent` and `gethostent` APIs. I happen to have a copy of the 4.2BSD source code, so here is the implementation from 4.2BSD, which was released in early 1983:
```c
struct hostent *
gethostbyname(name)
	register char *name;
{
	register struct hostent *p;
	register char **cp;

	sethostent(0);
	while (p = gethostent()) {
		if (strcmp(p->h_name, name) == 0)
			break;
		for (cp = p->h_aliases; *cp != 0; cp++)
			if (strcmp(*cp, name) == 0)
				goto found;
	}
found:
	endhostent();
	return (p);
}
```
As you can see, the 4.2BSD implementation only checks the `/etc/hosts` file and nothing else. This answers the question of why `gethostbyname`, and its successor `getaddrinfo`, do DNS queries in a blocking way: the original API simply read a local file synchronously, and when DNS lookups were bolted on later, nobody wanted to introduce a replacement API for `gethostbyname` that was asynchronous.
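To make the blocking behaviour concrete, here is a minimal sketch of a lookup through the modern API; the hostname and port are placeholders, and the point is simply that `getaddrinfo` does not return until resolution has completed or failed:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct addrinfo hints, *res;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;	/* either IPv4 or IPv6 */
	hints.ai_socktype = SOCK_STREAM;

	/* This call blocks the calling thread until resolution finishes or fails. */
	int rc = getaddrinfo("example.com", "80", &hints, &res);
	if (rc != 0) {
		fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
		return 1;
	}

	/* ... connect() using the returned address list ... */
	freeaddrinfo(res);
	return 0;
}
```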