diff --git a/.gitignore b/.gitignore index efb41a7..37407a1 100644 --- a/.gitignore +++ b/.gitignore @@ -9,6 +9,3 @@ # VSCode .vscode - -# HTML -generated diff --git a/docs/2022-06-08-backup.html b/docs/2022-06-08-backup.html new file mode 100644 index 0000000..bc18920 --- /dev/null +++ b/docs/2022-06-08-backup.html @@ -0,0 +1,136 @@ + + +
+First things first, I want to list the devices I have owned through the years:
+Apps/services I use daily:
+My goal is for my data to be safe, secure, and easily recoverable if I lose some devices, +or, in the worst case, all of them. +Because you know, it is hard to guess what the future holds.
+There are 2 sections I want to share: the first is How to backup, the second is Recover strategy.
+Before I talk about backup, I want to talk about data. +Specifically, which data should I back up?
+I use Arch Linux and macOS and work primarily in the terminal, so I have many dotfiles, for example, ~/.config/nvim/init.lua
.
+Each time I reinstall Arch Linux (I like it a lot), I need to reconfigure all the settings, and it is time-consuming.
So for the DE and UI settings, I keep things as default as possible; unless something gets in my way, I leave the default setting there and forget about it. +The rest are dotfiles, for which I wrote my own dotfiles tool to back up and reconfigure easily and quickly. +Also, I know that installing Arch Linux is not easy, even though I have installed it many times (like a thousand times since high school). +Not because it is hard, but as life goes on, the official install guide keeps getting updates and covering too many cases beyond my personal use, so I wrote my own guide to quickly capture what I need to do. +I back up all my dotfiles with my dotfiles tool on GitHub and GitLab, as I trust them both. +Also, as I travel the Internet, I discovered Codeberg and Treehouse and use them as additional backups for my git repos.
+So much for my dotfiles. For my regular data, like wallpapers, books, and images, I use Google Drive (I actually pay for it). +But the manual steps of opening the webpage, clicking the upload button, and choosing files are boring and time-consuming. +So I use Rclone; it supports Google Drive, OneDrive, and many other providers, but I only use Google Drive for now. +The commands are simple:
+ +# Sync from local to remote
+rclone sync MyBooks remote:MyBooks -P --exclude .DS_Store
+
+# Sync from remote to local
+rclone sync remote:MyBooks MyBooks -P --exclude .DS_Store
+
+
+Before you use Rclone to sync to Google Drive, you should read Google Drive rclone configuration first.
+The next data is my passwords and my OTPs. +These are the things I am most scared to lose. +First things first, I enable 2-Step Verification for all of my important accounts, using both the OTP and phone methods.
+I use Bitwarden for passwords (that is a long story, going from Google Password Manager to Firefox Lockwise and then settling down with Bitwarden) and Aegis for OTPs. +The reason I chose Aegis over Authy (I used Authy for a long time, but Aegis is definitely better) is that Aegis allows me to export all the OTPs to a single file (which can be encrypted), so I can transfer or back them up easily.
+As long as Bitwarden provides free password storage, I use all of its apps and extensions so that I can easily sync passwords between laptops and phones. +The one thing I need to keep in my head is the Bitwarden master password.
+With Aegis, I export the data, sync it to Google Drive, and also store it locally on my phone. +For safety, I also store the Aegis data locally on all of my laptops (encrypted, of course).
+The main problem here is the OTPs: I cannot store all of my OTPs only in the cloud. +Because to access my OTPs in the cloud, I have to log in, which requires inputting an OTP; it is a circle, my friends.
+There are several strategies I follow to react when something strange happens to my devices.
+If I lose my laptops, whether a single laptop or all of them, I do not panic as long as I have my phone. +The OTPs are on there, the passwords are in the Bitwarden cloud, and the other data is in Google Drive, so nothing is lost.
+If I lose my phone, but not my laptops, I use the OTPs stored locally on my laptops.
+In the worst situation, I lose everything: my laptops and my phone. +The first step is to recover my SIM, then log in to my Google account using the password and an SMS OTP. +After that, I log in to my Bitwarden account using the master password and the OTP from Gmail, which I opened in the previous step.
+This guide will be updated regularly, I promise.
+ + + diff --git a/docs/2022-06-08-dockerfile-go.html b/docs/2022-06-08-dockerfile-go.html new file mode 100644 index 0000000..c7bbdba --- /dev/null +++ b/docs/2022-06-08-dockerfile-go.html @@ -0,0 +1,125 @@ + + + +Each time I start a new Go project, I repeat many steps.
+Like setting up .gitignore
, CI configs, Dockerfile, …
So I decided to have a baseline Dockerfile like this:
+ +FROM golang:1.18-bullseye as builder
+
+RUN go install golang.org/dl/go1.18@latest \
+ && go1.18 download
+
+WORKDIR /build
+
+COPY go.mod .
+COPY go.sum .
+COPY vendor vendor
+COPY . .
+
+RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app -tags timetzdata -trimpath .
+
+FROM gcr.io/distroless/base-debian11
+
+COPY --from=builder /build/app /app
+
+ENTRYPOINT ["/app"]
+
+
+I use a multi-stage build to keep my image size small. +The first stage is the official Go image, +the second stage is Distroless.
+Before Distroless, I used the official Alpine image. +There is a whole discussion on the Internet about which base image is best for Go. +After reading some blogs, I discovered Distroless as a small and secure base image. +So I have stuck with it for a while.
+Also, remember to match the Distroless Debian version with the Go official image's Debian version.
+ +FROM golang:1.18-bullseye as builder
+
+
+This is Go image I use as a build stage. +This can be official Go image or custom image is required in some companies.
+ +RUN go install golang.org/dl/go1.18@latest \
+ && go1.18 download
+
+
+This is optional. +In my case, my company is slow to update Go image so I use this trick to install latest Go version.
+ +WORKDIR /build
+
+COPY go.mod .
+COPY go.sum .
+COPY vendor vendor
+COPY . .
+
+
+I use /build
to emphasize that I am building something in that directory.
The 4 COPY
lines are familiar if you use Go enough.
+First is go.mod
and go.sum
because it defines Go modules.
+The second is vendor
, this is optional but I use it because I don’t want each time I build Dockerfile, I need to redownload Go modules.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app -tags timetzdata -trimpath .
+
+
+This is where I build Go program.
+ +CGO_ENABLED=0
because I don’t want to mess with C libraries.
+GOOS=linux GOARCH=amd64
is easy to explain, Linux with x86-64.
+GOAMD64=v3
is new since Go 1.18,
+I use v3 because I read about AMD64 version in Arch Linux rfcs. TLDR’s newer computers are already x86-64-v3.
-tags timetzdata
to embed timezone database incase base image does not have.
+-trimpath
to support reproduce build.
FROM gcr.io/distroless/base-debian11
+
+COPY --from=builder /build/app /app
+
+ENTRYPOINT ["/app"]
+
+
+Finally, I copy app
to Distroless base image.
It is hard to write bootstrap tool to quickly create Go service. +So I write this guide instead. +This is a quick checklist for me every damn time I need to write a Go service from scratch. +Also, this is my personal opinion, so feel free to comment.
+ +main.go
+internal
+| business_1
+| | http
+| | | handler.go
+| | | service.go
+| | | repository.go
+| | | models.go
+| | grpc
+| | | handler.go
+| | | service.go
+| | | repository.go
+| | | models.go
+| | service.go
+| | repository.go
+| | models.go
+| business_2
+| | grpc
+| | | handler.go
+| | | service.go
+| | | repository.go
+| | | models.go
+
+
+All business codes are inside internal
.
+Each business has a different directory (business_1
, business_2
).
Inside each business, there are 2 handlers: http
, grpc
:
http
is for public APIs (Android, iOS,… are clients).grpc
is for internal APIs (other services are clients).Inside each handler, there are usually 3 layers: handler
, service
, repository
:
handler
interacts directly with gRPC or REST using specific codes (cookies,…)service
is where we write business/logic codes, and only business/logic codes is written here.repository
is where we write codes which interacts with database/cache like MySQL, Redis, …handler
must exist inside grpc
, http
.
+But service
, repository
, models
can exist directly inside business
if both grpc
, http
has same business/logic.
If we have too many services, some of the logic will be overlapped.
+ +For example, service A and service B both need to make POST call API to service C. +If service A and service B both have libs to call service C to do that API, we need to move the libs to some common pkg libs. +So in the future, service D which needs to call C will not need to copy libs to handle service C api but only need to import from common pkg libs.
+ +Another bad practice is adapter service. +No need to write a new service if what we need is just common pkg libs.
+ +What is the point to pass many params (--abc
, --xyz
) when what we only need is start service?
In my case, service starts with only config, and config should be read from file or environment like The Twelve Factors guide.
+ +Just don’t.
+ +Use protocolbuffers/protobuf-go, grpc/grpc-go for gRPC.
+ +Write 1 for both gRPC, REST sounds good, but in the end, it is not worth it.
+ +prototool is deprecated, and buf can generate, lint, format as good as prototool.
+ +Don’t use gin.Context
when pass context from handler layer to service layer, use gin.Context.Request.Context()
instead.
It is fast!
+ +Don’t overuse func (*Logger) With
. Because if log line is too long, there is a possibility that we can lost it.
Use MarshalLogObject
when we need to hide some field of object when log (field has long or sensitive value)
Don’t use Panic
. Use Fatal
for errors when start service to check dependencies. If you really need panic level, use DPanic
.
Use contextID
or traceID
in every log lines for easily debug.
Each ORM libs has each different syntax. +To learn and use those libs correctly is time consuming. +So just stick to plain SQL. +It is easier to debug when something is wrong.
+ +But database/sql
has its own limit.
+For example, it is hard to get primary key after insert/update.
+So may be you want to use ORM for those cases.
It is easy to write a suite test, thanks to testify. +Also, for mocking, there are many options out there. +Pick 1 then sleep peacefully.
+ +go fmt
, goimports
with mvdan/gofumpt.gofumpt
provides more rules when format Go codes.
No need to say more. +Lint or get the f out!
+ + + diff --git a/main.go b/main.go index 58ae3be..5e025ca 100644 --- a/main.go +++ b/main.go @@ -13,7 +13,7 @@ import ( const ( postsPath = "posts" headHTMLPath = "custom/head.html" - generatedPath = "generated" + generatedPath = "docs" htmlExt = ".html" ) @@ -32,7 +32,7 @@ func main() { log.Fatalln("Failed to remove all", generatedPath, err) } - if err := os.MkdirAll(generatedPath, 0777); err != nil { + if err := os.MkdirAll(generatedPath, 0o777); err != nil { log.Fatalln("Failed to mkdir all", generatedPath) } @@ -60,7 +60,7 @@ func main() { generatedFileName := strings.TrimSuffix(file.Name(), filepath.Ext(file.Name())) + htmlExt generatedFilePath := filepath.Join(generatedPath, generatedFileName) - if err := os.WriteFile(generatedFilePath, generatedHTML, 0666); err != nil { + if err := os.WriteFile(generatedFilePath, generatedHTML, 0o666); err != nil { log.Fatalln("Failed to write file", generatedFilePath, err) } }