diff --git a/.gitignore b/.gitignore index 5d48d87..7b84340 100644 --- a/.gitignore +++ b/.gitignore @@ -12,3 +12,6 @@ # GitHub .github_access_token + +# Node +node_modules diff --git a/.prettierignore b/.prettierignore new file mode 100644 index 0000000..a02ab47 --- /dev/null +++ b/.prettierignore @@ -0,0 +1,21 @@ +# gitignore +# macOS +.DS_Store + +# Window +*.exe + +# IntelliJ +.idea + +# VSCode +.vscode + +# GitHub +.github_access_token + +# Node +node_modules + +# Ignore prettier +.github diff --git a/.prettierrc.json b/.prettierrc.json new file mode 100644 index 0000000..0967ef4 --- /dev/null +++ b/.prettierrc.json @@ -0,0 +1 @@ +{} diff --git a/Makefile b/Makefile index 2c59ca5..196a9f1 100644 --- a/Makefile +++ b/Makefile @@ -27,6 +27,7 @@ format: go install mvdan.cc/gofumpt@latest gofimports -w -company github.com/make-go-great . gofumpt -w -extra . + yarn prettier --write . gen: go run . diff --git a/docs/2022-06-08-backup.html b/docs/2022-06-08-backup.html index c7906b1..06db6f9 100644 --- a/docs/2022-06-08-backup.html +++ b/docs/2022-06-08-backup.html @@ -43,55 +43,173 @@
Index
-

Backup my way

-

First thing first, I want to list my own devices, which I have through the years:

- -

App/Service I use daily:

- -

The purpose is that I want my data to be safe, secure, and can be easily recovered if I lost some devices; -or in the worst situation, I lost all. -Because you know, it is hard to guess what is waiting for us in the future.

-

There are 2 sections which I want to share, the first is How to backup, the second is Recover strategy.

-

How to backup

-

Before I talk about backup, I want to talk about data. -In specifically, which data should I backup?

-

I use Arch Linux and macOS, primarily work in the terminal so I have too many dotfiles, for example, ~/.config/nvim/init.lua. -Each time I reinstall Arch Linux (I like it a lot), I need to reconfigure all the settings, and it is time-consuming.

-

So for the DE and UI settings, I keep it as default as possible, unless it's getting in my way, I leave the default setting there and forget about it. -The others are dotfiles, which I write my own dotfiles tool to backup and reconfigure easily and quickly. -Also, I know that installing Arch Linux is not easy, despite I install it too many times (Like thousand times since I was in high school). -Not because it is hard, but as life goes on, the official install guide keeps getting new update and covering too many cases for my own personal use, so I write my own guide to quickly capture what I need to do. -I back up all my dotfiles in GitHub and GitLab as I trust them both. -Also as I travel the Internet, I discover Codeberg and Treehouse and use them as another backup for git repo.

-

So that is my dotfiles, for my regular data, like Wallpaper or Books, Images, I use Google Drive (Actually I pay for it). -But the step: open the webpage, click the upload button and choose files seems boring and time-consuming. -So I use Rclone, it supports Google Drive, One Drive and many providers but I only use Google Drive for now. -The commands are simple:

-
# Sync from local to remote
+    

+ Backup my way +

+

    + First things first, I want to list the devices I have owned through the years: +
    

+ +

App/Service I use daily:

+ +

    + The purpose is that I want my data to be safe, secure, and easily recoverable if I lose some devices, or in the worst situation, all of them. Because you know, it is hard to guess what is waiting for us in the future. +
    

+

    + There are 2 sections I want to share: the first is How to backup, the second is Recovery strategy. +
    

+

+ How to backup +

+

    + Before I talk about backup, I want to talk about data. Specifically, which data should I back up? +
    

+

    + I use Arch Linux and macOS and primarily work in the terminal, so I have many dotfiles, for example, ~/.config/nvim/init.lua. Each time I reinstall Arch Linux (I like it a lot), I need to reconfigure all the settings, and it is time-consuming. +
    

+

    + So for the DE and UI settings, I keep them as default as possible; unless something gets in my way, I leave the default setting there and forget about it. The rest are dotfiles, for which I wrote my own dotfiles tool to back up and reconfigure easily and quickly. Also, I know that installing Arch Linux is not easy, despite having installed it many times (like a thousand times since I was in high school). Not because it is hard, but as life goes on, the official install guide keeps getting new updates and covering too many cases for my own personal use, so I wrote my own guide to quickly capture what I need to do. I back up all my dotfiles on GitHub and GitLab, as I trust them both. Also, as I travel the Internet, I discovered Codeberg and Treehouse and use them as another backup for git repos. +
    

+

    + So that is my dotfiles. For my regular data, like wallpapers, books, or images, I use Google Drive (I actually pay for it). But the steps of opening the webpage, clicking the upload button, and choosing files seem boring and time-consuming. So I use Rclone; it supports Google Drive, OneDrive, and many other providers, but I only use Google Drive for now. The commands are simple: +
    

+
+
# Sync from local to remote
 rclone sync MyBooks remote:MyBooks -P --exclude .DS_Store
 
 # Sync from remote to local
-rclone sync remote:MyBooks MyBooks -P --exclude .DS_Store
-

Before you use Rclone to sync to Google Drive, you should read Google Drive rclone configuration first.

-

For private data, I use restic which can be used with Rclone:

-
# Init
+rclone sync remote:MyBooks MyBooks -P --exclude .DS_Store
+
+

+ Before you use Rclone to sync to Google Drive, you should read + Google Drive rclone configuration + first. +

+

For private data, I use restic which can be used with Rclone:

+
+
# Init
 restic -r rclone:remote:PrivateData init
 
 # Backup
@@ -101,27 +219,74 @@ restic -r rclone:remote:PrivateData backup PrivateData
 restic -r rclone:remote:PrivateData forget --keep-last 1 --prune
 
 # Restore
-restic -r rclone:remote:PrivateData restore latest --target ~
-

The next data is my passwords and my OTPs. -These are the things which I'm scare to lose the most. -First thing first, I enable 2-Step Verification for all of my important accounts, should use both OTP and phone method.

-

I use Bitwarden for passwords (That is a long story, coming from Google Password manager to Firefox Lockwise and then settle down with Bitwarden) and Aegis for OTPs. -The reason I choose Aegis, not Authy (I use Authy for so long but Aegis is definitely better) is because Aegis allows me to extract all the OTPs to a single file (Can be encrypted), which I use to transfer or backup easily.

-

As long as Bitwarden provides free passwords stored, I use all of its apps, extensions so that I can easily sync passwords between laptops and phones. -The thing I need to remember is the master password of Bitwarden in my head.

-

With Aegis, I export the data, then sync it to Google Drive, also store it locally in my phone.

-

The main problem here is the OTP, I can not store all of my OTPs in the cloud completely. -Because if I want to access my OTPs in the cloud, I should log in, and then input my OTP, this is a circle, my friends.

-

Recovery strategy

-

There are many strategies that I process to react as if something strange is happening to my devices.

-

If I lost my laptops, single laptop or all, do not panic as long as I have my phones. -The OTPs are in there, the passwords are in Bitwarden cloud, other data is in Google Drive so nothing is lost here.

-

If I lost my phone, but not my laptops, I use the OTPs which are stored locally in my laptops.

-

In the worst situation, I lost everything, my laptops, my phone. -The first step is to recover my SIM, then log in to Google account using the password and SMS OTP. -After that, log in to Bitwarden account using the master password and OTP from Gmail, which I open previously.

-

The end

-

This guide will be updated regularly I promise.

+restic -r rclone:remote:PrivateData restore latest --target ~
+
+

    + The next data is my passwords and my OTPs. These are the things I'm most scared to lose. First things first, I enable 2-Step Verification for all of my important accounts, using both the OTP and phone methods. +
    

+

    + I use Bitwarden for passwords (that is a long story, going from Google Password Manager to Firefox Lockwise and then settling down with Bitwarden) and Aegis for OTPs. The reason I chose Aegis, not Authy (I used Authy for a long time, but Aegis is definitely better), is that Aegis allows me to export all the OTPs to a single file (which can be encrypted), which I use to transfer or back up easily. +
    

+

    + As long as Bitwarden provides free password storage, I use all of its apps and extensions so that I can easily sync passwords between laptops and phones. The only thing I need to keep in my head is the Bitwarden master password. +
    

+

    + With Aegis, I export the data, then sync it to Google Drive, and also store it locally on my phone. +
    

+

    + The main problem here is the OTPs: I cannot store all of my OTPs in the cloud completely. Because if I want to access my OTPs in the cloud, I have to log in and then input my OTP; this is a circle, my friends. +
    

+

+ Recovery strategy +

+

    + There are several strategies I follow to react when something strange happens to my devices. +
    

+

    + If I lose my laptops, a single one or all of them, I do not panic as long as I have my phone. The OTPs are in there, the passwords are in the Bitwarden cloud, and other data is in Google Drive, so nothing is lost here. +
    

+

    + If I lose my phone, but not my laptops, I use the OTPs which are stored locally on my laptops. +
    

+

    + In the worst situation, I lose everything, my laptops and my phone. The first step is to recover my SIM, then log in to my Google account using the password and an SMS OTP. After that, I log in to my Bitwarden account using the master password and the OTP from Gmail, which I opened previously. +
    

+

+ The end +

+

This guide will be updated regularly I promise.

Feel free to ask me via diff --git a/docs/2022-06-08-dockerfile-go.html b/docs/2022-06-08-dockerfile-go.html index f9032ce..4eddf2c 100644 --- a/docs/2022-06-08-dockerfile-go.html +++ b/docs/2022-06-08-dockerfile-go.html @@ -43,11 +43,22 @@
Index
-

Dockerfile for Go

-

Each time I start a new Go project, I repeat many steps. -Like set up .gitignore, CI configs, Dockerfile, ...

-

So I decide to have a baseline Dockerfile like this:

-
FROM golang:1.19-bullseye as builder
+    

+ Dockerfile for Go +

+

    + Each time I start a new Go project, I repeat many steps, like setting up .gitignore, CI configs, a Dockerfile, ... +
    

+

So I decide to have a baseline Dockerfile like this:

+
+
FROM golang:1.19-bullseye as builder
 
 RUN go install golang.org/dl/go1.19@latest \
     && go1.19 download
@@ -65,46 +76,95 @@ Like set up .gitignore, CI configs, Dockerfile, ...

COPY --from=builder /build/app /app -ENTRYPOINT ["/app"]
-

I use multi-stage build to keep my image size small. -First stage is Go official image, -second stage is Distroless.

-

Before Distroless, I use Alpine official image, -There is a whole discussion on the Internet to choose which is the best base image for Go. -After reading some blogs, I discover Distroless as a small and secure base image. -So I stick with it for a while.

-

Also, remember to match Distroless Debian version with Go official image Debian version.

-
FROM golang:1.19-bullseye as builder
-

This is Go image I use as a build stage. -This can be official Go image or custom image is required in some companies.

-
RUN go install golang.org/dl/go1.19@latest \
-    && go1.19 download
-

This is optional. -In my case, my company is slow to update Go image so I use this trick to install latest Go version.

-
WORKDIR /build
+ENTRYPOINT ["/app"]
+
+

    + I use a multi-stage build to keep my image size small. The first stage is the Go official image, the second stage is Distroless. +
    

+

    + Before Distroless, I used the Alpine official image. There is a whole discussion on the Internet about which is the best base image for Go. After reading some blogs, I discovered Distroless as a small and secure base image. So I have stuck with it for a while. +
    

+

+ Also, remember to match Distroless Debian version with Go official image + Debian version. +

+
+
FROM golang:1.19-bullseye as builder
+
+

    + This is the Go image I use as the build stage. This can be the official Go image, or a custom image if one is required in some companies. +
    

+
+
RUN go install golang.org/dl/go1.19@latest \
+    && go1.19 download
+
+

    + This is optional. In my case, my company is slow to update the Go image, so I use this trick to install the latest Go version. +
    

+
+
WORKDIR /build
 
 COPY go.mod .
 COPY go.sum .
 COPY vendor .
-COPY . .
-

I use /build to emphasize that I am building something in that directory.

-

The 4 COPY lines are familiar if you use Go enough. -First is go.mod and go.sum because it defines Go modules. -The second is vendor, this is optional but I use it because I don't want each time I build Dockerfile, I need to redownload Go modules.

-
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app -tags timetzdata -trimpath .
-

This is where I build Go program.

-

CGO_ENABLED=0 because I don't want to mess with C libraries. -GOOS=linux GOARCH=amd64 is easy to explain, Linux with x86-64. -GOAMD64=v3 is new since Go 1.18, -I use v3 because I read about AMD64 version in Arch Linux rfcs. TLDR's newer computers are already x86-64-v3.

-

-tags timetzdata to embed timezone database incase base image does not have. --trimpath to support reproduce build.

-
FROM gcr.io/distroless/base-debian11
+COPY . .
+
+

+ I use /build to emphasize that I am building something in + that directory. +

+

    + The 4 COPY lines are familiar if you have used Go enough. First are go.mod and go.sum, because they define the Go modules. The second is vendor; this is optional, but I use it because I don't want to re-download Go modules each time I build the Dockerfile. +
    

+
+
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app -tags timetzdata -trimpath .
+
+

    This is where I build the Go program.
    

+

    + CGO_ENABLED=0 because I don't want to mess with C libraries. GOOS=linux GOARCH=amd64 is easy to explain: Linux with x86-64. GOAMD64=v3 is new since Go 1.18; I use v3 because I read about the AMD64 versions in the Arch Linux rfcs. TL;DR: newer computers are already x86-64-v3. +
    

+

    + -tags timetzdata embeds the timezone database in case the base image does not have it. -trimpath supports reproducible builds. +
    

+
+
FROM gcr.io/distroless/base-debian11
 
 COPY --from=builder /build/app /app
 
-ENTRYPOINT ["/app"]
-

Finally, I copy app to Distroless base image.

+ENTRYPOINT ["/app"]
+
+

    Finally, I copy the app to the Distroless base image.
    

Feel free to ask me via diff --git a/docs/2022-07-10-bootstrap-go.html b/docs/2022-07-10-bootstrap-go.html index d8bfe83..e42df30 100644 --- a/docs/2022-07-10-bootstrap-go.html +++ b/docs/2022-07-10-bootstrap-go.html @@ -43,13 +43,32 @@
Index
-

Bootstrap Go

-

It is hard to write bootstrap tool to quickly create Go service. -So I write this guide instead. -This is a quick checklist for me every damn time I need to write a Go service from scratch. -Also, this is my personal opinion, so feel free to comment.

-

Structure

-
main.go
+    

+ Bootstrap Go +

+

    + It is hard to write a bootstrap tool to quickly create a Go service. So I write this guide instead. This is a quick checklist for me every damn time I need to write a Go service from scratch. Also, this is my personal opinion, so feel free to comment. +
    

+

+ Structure +

+
+
main.go
 internal
 | business
 | | http
@@ -65,57 +84,132 @@ internal
 | | | models.go
 | | service.go
 | | repository.go
-| | models.go
-

All business codes are inside internal. -Each business has a different directory business.

-

Inside each business, there are 2 handlers: http, grpc:

-
    -
  • -http is for public APIs (Android, iOS, ... are clients).
  • -
  • -grpc is for internal APIs (other services are clients).
  • -
  • -consumer is for consuming messages from queue (Kafka, RabbitMQ, ...).
  • -
-

For each handler, there are usually 3 layers: handler, service, repository:

-
    -
  • -handler interacts directly with gRPC, REST or consumer using specific codes (cookies, ...) In case gRPC, there are frameworks outside handle for us so we can write business/logic codes here too. But remember, gRPC only.
  • -
  • -service is where we write business/logic codes, and only business/logic codes is written here.
  • -
  • -repository is where we write codes which interacts with database/cache like MySQL, Redis, ...
  • -
  • -models is where we put all request, response, data models.
  • -
-

Location:

-
    -
  • -handler must exist inside grpc, http, consumer.
  • -
  • -service, models can exist directly inside of business if both grpc, http, consumer has same business/logic.
  • -
  • -repository should be placed directly inside of business.
  • -
-

Do not repeat!

-

If we have too many services, some of the logic will be overlapped.

-

For example, service A and service B both need to make POST call API to service C. -If service A and service B both have libs to call service C to do that API, we need to move the libs to some common pkg libs. -So in the future, service D which needs to call C will not need to copy libs to handle service C api but only need to import from common pkg libs.

-

Another bad practice is adapter service. -No need to write a new service if what we need is just common pkg libs.

-

Taste on style guide

-

Stop using global var

-

If I see someone using global var, I swear I will shoot them twice in the face.

-

Why?

-
    -
  • Can not write unit test.
  • -
  • Is not thread safe.
  • -
-

Use functional options, but don't overuse it!

-

For simple struct with 1 or 2 fields, no need to use functional options.

-

Example:

-
func main() {
+| | models.go
+
+

    + All business code lives inside internal. Each business has its own directory, business. +
    

+

    + Inside each business, there are several handlers: http, grpc, consumer: +
    

+
    +
  • + http is for public APIs (Android, iOS, ... are clients). +
  • +
  • + grpc is for internal APIs (other services are clients). +
  • +
  • + consumer is for consuming messages from queue (Kafka, + RabbitMQ, ...). +
  • +
+

+ For each handler, there are usually 3 layers: handler, + service, repository: +

+
    +
      • + handler interacts directly with gRPC, REST, or the consumer using transport-specific code (cookies, ...). In the gRPC case, there are frameworks outside that handle things for us, so we can write business/logic code here too. But remember, gRPC only. +
    
  • +
      • + service is where we write business/logic code, and only business/logic code is written here. +
    
  • +
      • + repository is where we write code which interacts with a database/cache like MySQL, Redis, ... +
    
  • +
  • + models is where we put all request, response, data models. +
  • +
+

Location:

+
    +
  • + handler must exist inside grpc, + http, consumer. +
  • +
  • + service, models can exist directly inside of + business if both grpc, http, + consumer has same business/logic. +
  • +
  • + repository should be placed directly inside of + business. +
  • +
+

+ Do not repeat! +

+

    If we have too many services, some of the logic will overlap.
    

+

    + For example, service A and service B both need to make a POST API call to service C. If service A and service B both have their own libs to call service C for that API, we need to move those libs to some common pkg libs. So in the future, service D, which needs to call C, will not need to copy libs to handle the service C API but only needs to import from the common pkg libs. +
    

+

    + Another bad practice is the adapter service. There is no need to write a new service if what we need is just common pkg libs. +
    

+

+ Taste on style guide +

+

+ Stop using global var +

+

+ If I see someone using global var, I swear I will shoot them twice in the + face. +

+

Why?

+
    +
  • Can not write unit test.
  • +
  • Is not thread safe.
  • +
+
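    A minimal before/after sketch of the idea, using database/sql purely for illustration; the table and queries are made up:

    package user

    import "database/sql"

    // Bad: global var, hidden dependency, hard to mock in tests, not thread safe to swap.
    var db *sql.DB

    func GetNameBad(id string) (string, error) {
    	var name string
    	err := db.QueryRow("SELECT name FROM users WHERE id = ?", id).Scan(&name)
    	return name, err
    }

    // Good: the dependency is a field, tests can inject their own *sql.DB (or an interface).
    type Repository struct {
    	db *sql.DB
    }

    func NewRepository(db *sql.DB) *Repository {
    	return &Repository{db: db}
    }

    func (r *Repository) GetName(id string) (string, error) {
    	var name string
    	err := r.db.QueryRow("SELECT name FROM users WHERE id = ?", id).Scan(&name)
    	return name, err
    }
    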

+ Use functional options, but don't overuse it! +

+

+ For simple struct with 1 or 2 fields, no need to use functional options. +

+

+ Example: +

+
+
func main() {
 	s := NewS(WithA(1), WithB("b"))
 	fmt.Printf("%+v\n", s)
 }
@@ -145,16 +239,40 @@ No need to write a new service if what we need is just common pkg libs.

opt(s) } return s -}
-

In above example, I construct s with WithA and WithB option. -No need to pass direct field inside s.

-

Use errgroup as much as possible

-

If business logic involves calling too many APIs, but they are not depend on each other. -We can fire them parallel :)

-

Personally, I prefer errgroup to WaitGroup (https://pkg.go.dev/sync#WaitGroup). -Because I always need deal with error.

-

Example:

-
eg, egCtx := errgroup.WithContext(ctx)
+}
+
+

    + In the above example, I construct s with the WithA and WithB options. There is no need to set the fields of s directly. +
    

+

+ Use + errgroup + as much as possible +

+

    + If the business logic involves calling many APIs which do not depend on each other, we can fire them in parallel :) +
    

+

    + Personally, I prefer errgroup to WaitGroup (https://pkg.go.dev/sync#WaitGroup), because I always need to deal with errors. +
    

+

Example:

+
+
eg, egCtx := errgroup.WithContext(ctx)
 
 eg.Go(func() error {
 	// Do some thing
@@ -168,125 +286,396 @@ Because I always need deal with error.

if err := eg.Wait(); err != nil { // Handle error -}
-

Use semaphore when need to implement WorkerPool

-

Please don't use external libs for WorkerPool, I don't want to deal with dependency hell.

-

External libs

-

No need vendor -

-

Only need if you need something from vendor, to generate mock or something else.

-

Use build.go to include build tools in go.mod

-

To easily control version of build tools.

-

For example build.go:

-
//go:build tools
+}
+
+

    + Use semaphore when you need to implement a worker pool +
    

+

    + Please don't use external libs for a worker pool; I don't want to deal with dependency hell. +
    

+
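    A minimal sketch of a bounded worker pool using golang.org/x/sync/semaphore; the worker count and tasks are made up for illustration:

    package main

    import (
    	"context"
    	"fmt"

    	"golang.org/x/sync/semaphore"
    )

    func main() {
    	ctx := context.Background()

    	// Allow at most 4 tasks to run at the same time.
    	sem := semaphore.NewWeighted(4)

    	for i := 0; i < 10; i++ {
    		i := i

    		// Block until a slot is free.
    		if err := sem.Acquire(ctx, 1); err != nil {
    			break
    		}

    		go func() {
    			defer sem.Release(1)
    			fmt.Println("processing task", i)
    		}()
    	}

    	// Wait for all workers to finish by taking every slot.
    	if err := sem.Acquire(ctx, 4); err != nil {
    		fmt.Println("wait for workers:", err)
    	}
    }
    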

+ External libs +

+

+ No need vendor +

+

    + You only need it if you need something from vendor, for example to generate mocks or something else. +
    

+

+ Use build.go to include build tools in go.mod +

+

To easily control version of build tools.

+

For example build.go:

+
+
//go:build tools
 // +build tools
 
 package main
 
 import (
 	_ "github.com/golang/protobuf/protoc-gen-go"
-)
-

And then in Makefile:

-
build:
-    go install github.com/golang/protobuf/protoc-gen-go
-

We always get the version of build tools in go.mod each time we install it. -Future contributors will not cry anymore.

-

Don't use cli libs (spf13/cobra, urfave/cli) just for Go service

-

What is the point to pass many params (do-it, --abc, --xyz) when what we only need is start service?

-

In my case, service starts with only config, and config should be read from file or environment like The Twelve Factors guide.

-

Don't use grpc-ecosystem/grpc-gateway -

-

Just don't.

-

Use protocolbuffers/protobuf-go, grpc/grpc-go for gRPC.

-

Write 1 for both gRPC, REST sounds good, but in the end, it is not worth it.

-

Don't use uber/prototool, use bufbuild/buf -

-

prototool is deprecated, and buf can generate, lint, format as good as prototool.

-

Use gin-gonic/gin for REST.

-

Don't use gin.Context when pass context from handler layer to service layer, use gin.Context.Request.Context() instead.

-

If you want log, just use uber-go/zap -

-

It is fast!

-
    -
  • Don't overuse func (*Logger) With. Because if log line is too long, there is a possibility that we can lost it.
  • -
  • Use MarshalLogObject when we need to hide some field of object when log (field is long or has sensitive value)
  • -
  • Don't use Panic. Use Fatal for errors when start service to check dependencies. If you really need panic level, use DPanic.
  • -
  • If doubt, use zap.Any.
  • -
  • Use contextID or traceID in every log lines for easily debug.
  • -
-

To read config, use spf13/viper -

-

Only init config in main or cmd layer. -Do not use viper.Get... in business layer or inside business layer.

-

Why?

-
    -
  • Hard to mock and test
  • -
  • Put all config in single place for easily tracking
  • -
-

Also, be careful if config value is empty. -You should decide to continue or stop the service if there is no config.

-

Don't overuse ORM libs, no need to handle another layer above SQL.

-

Each ORM libs has each different syntax. -To learn and use those libs correctly is time consuming. -So just stick to plain SQL. -It is easier to debug when something is wrong.

-

But database/sql has its own limit. -For example, it is hard to get primary key after insert/update. -So may be you want to use ORM for those cases. -I hear that go-gorm/gorm, ent/ent is good.

-

If you want test, just use stretchr/testify.

-

It is easy to write a suite test, thanks to testify. -Also, for mocking, there are many options out there. -Pick 1 then sleep peacefully.

-

If need to mock, choose matryer/moq or golang/mock -

-

The first is easy to use but not powerful as the later. -If you want to make sure mock func is called with correct times, use the later.

-

Example with matryer/moq:

-
// Only gen mock if source code file is newer than mock file
+)
+
+

And then in Makefile:

+
+
build:
+    go install github.com/golang/protobuf/protoc-gen-go
+
+

    + We always get the versions of the build tools pinned in go.mod each time we install them. Future contributors will not cry anymore. +
    

+

+ Don't use cli libs (spf13/cobra, urfave/cli) just for Go + service +

+

    + What is the point of passing many params (do-it, --abc, --xyz) when all we need is to start the service? +
    

+

    + In my case, the service starts with only its config, and the config should be read from a file or the environment, like in The Twelve Factors guide. +
    

+

+ Don't use + grpc-ecosystem/grpc-gateway +

+

Just don't.

+

+ Use + protocolbuffers/protobuf-go, grpc/grpc-go for gRPC. +

+

    + Writing 1 definition for both gRPC and REST sounds good, but in the end, it is not worth it. +
    

+

+ Don't use uber/prototool, + use bufbuild/buf +

+

    + prototool is deprecated, and buf can generate, lint, and format as well as prototool. +
    

+

+ Use gin-gonic/gin for + REST. +

+

    + Don't use gin.Context when passing context from the handler layer to the service layer; use gin.Context.Request.Context() instead. +
    

+
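    A minimal sketch of what that looks like; the Service interface and User model here are hypothetical:

    package user

    import (
    	"context"
    	"net/http"

    	"github.com/gin-gonic/gin"
    )

    // User is a made-up model for illustration.
    type User struct {
    	ID   string `json:"id"`
    	Name string `json:"name"`
    }

    // Service is a hypothetical business layer; it only sees context.Context.
    type Service interface {
    	GetUser(ctx context.Context, id string) (User, error)
    }

    type Handler struct {
    	service Service
    }

    func (h *Handler) GetUser(c *gin.Context) {
    	// Pass the request context down, not the gin.Context itself.
    	ctx := c.Request.Context()

    	u, err := h.service.GetUser(ctx, c.Param("id"))
    	if err != nil {
    		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
    		return
    	}

    	c.JSON(http.StatusOK, u)
    }
    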

+ If you want log, just use + uber-go/zap +

+

It is fast!

+
    +
      • + Don't overuse func (*Logger) With, because if the log line is too long, there is a possibility that we lose it. +
    
  • +
      • + Use MarshalLogObject when we need to hide some fields of an object when logging (a field that is long or has a sensitive value), see the sketch after this list. +
    
  • +
      • + Don't use Panic. Use Fatal for errors when starting the service to check dependencies. If you really need the panic level, use DPanic. +
    
  • +
  • If doubt, use zap.Any.
  • +
      • + Use contextID or traceID in every log line for easy debugging. +
    
  • +
+
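    Below is a minimal sketch of the MarshalLogObject idea from the list above; the User type and its fields are made up:

    package logging

    import (
    	"go.uber.org/zap"
    	"go.uber.org/zap/zapcore"
    )

    // User is a hypothetical model with a sensitive field.
    type User struct {
    	ID       string
    	Password string
    }

    // MarshalLogObject controls exactly which fields end up in the log.
    func (u User) MarshalLogObject(enc zapcore.ObjectEncoder) error {
    	enc.AddString("id", u.ID)
    	// Password is intentionally not logged.
    	return nil
    }

    func LogLogin(logger *zap.Logger, u User) {
    	logger.Info("user logged in", zap.Object("user", u))
    }
    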

+ To read config, use + spf13/viper +

+

    + Only init config in the main or cmd layer. Do not use viper.Get... inside the business layer. +
    

+

Why?

+
    +
  • Hard to mock and test
  • +
  • Put all config in single place for easily tracking
  • +
+

    + Also, be careful if a config value is empty. You should decide whether to continue or stop the service if there is no config. +
    

+
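    A minimal sketch of reading config once in main with viper and failing fast on missing values; the config keys and file name are made up:

    package main

    import (
    	"log"

    	"github.com/spf13/viper"
    )

    // Config is the single place holding all config values.
    type Config struct {
    	HTTPAddr string
    	DBDSN    string
    }

    func loadConfig() (Config, error) {
    	viper.SetConfigFile("config.yaml")
    	viper.AutomaticEnv()

    	if err := viper.ReadInConfig(); err != nil {
    		return Config{}, err
    	}

    	return Config{
    		HTTPAddr: viper.GetString("http.addr"),
    		DBDSN:    viper.GetString("db.dsn"),
    	}, nil
    }

    func main() {
    	cfg, err := loadConfig()
    	if err != nil {
    		log.Fatalf("read config: %v", err)
    	}

    	// Decide to stop the service if required config is empty.
    	if cfg.DBDSN == "" {
    		log.Fatal("db.dsn is required")
    	}

    	// Pass cfg down explicitly; no viper.Get... below this point.
    	_ = cfg
    }
    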

+ Don't overuse ORM libs, no need to handle another layer above SQL. +

+

    + Each ORM lib has its own syntax. Learning and using those libs correctly is time-consuming. So just stick to plain SQL. It is easier to debug when something is wrong. +
    

+

    + But database/sql has its own limits. For example, it is hard to get the primary key after an insert/update. So maybe you want to use an ORM for those cases. I hear that go-gorm/gorm and ent/ent are good. +
    

+

+ If you want test, just use + stretchr/testify. +

+

    + It is easy to write a test suite, thanks to testify. Also, for mocking, there are many options out there. Pick 1, then sleep peacefully. +
    

+

+ If need to mock, choose + matryer/moq or + golang/mock +

+

    + The first is easy to use but not as powerful as the latter. If you want to make sure a mocked func is called the correct number of times, use the latter. +
    

+

Example with matryer/moq:

+
+
// Only gen mock if source code file is newer than mock file
 // https://jonwillia.ms/2019/12/22/conditional-gomock-mockgen
-//go:generate sh -c "test service_mock_generated.go -nt $GOFILE && exit 0; moq -rm -out service_mock_generated.go . Service"
-

Be careful with spf13/cast -

-

Don't cast proto enum:

-
// Bad
+//go:generate sh -c "test service_mock_generated.go -nt $GOFILE && exit 0; moq -rm -out service_mock_generated.go . Service"
+
+

+ Be careful with spf13/cast +

+

Don't cast proto enum:

+
+
// Bad
 a := cast.ToInt32(servicev1.ReasonCode_ABC)
 
 // Good
-a := int32(servicev1.ReasonCode_ABC)
-

Use stringer if you want your type enum can be print as string

-
type Drink int
+a := int32(servicev1.ReasonCode_ABC)
+
+

    + Use stringer if you want your enum type to be printable as a string +
    

+
+
type Drink int
 
 const (
 	Beer Drink = iota
 	Water
 	OrangeJuice
-)
-
go install golang.org/x/tools/cmd/stringer@latest
+)
+
+
+
go install golang.org/x/tools/cmd/stringer@latest
 
 # Run inside directory which contains Drink
-stringer -type=Drink
-

Don't waste your time rewrite rate limiter if your use case is simple, use rate or go-redis/redis_rate -

-

rate if you want rate limiter locally in your single instance of service. -redis_rate if you want rate limiter distributed across all your instances of service.

-

Replace go fmt, goimports with mvdan/gofumpt.

-

gofumpt provides more rules when format Go codes.

-

Use golangci/golangci-lint.

-

No need to say more. -Lint or get the f out!

-

If you get fieldalignment error, use fieldalignment to fix them.

-
# Install
+stringer -type=Drink
+
+

    + Don't waste your time rewriting a rate limiter if your use case is simple, use rate or go-redis/redis_rate +
    

+

    + rate if you want a rate limiter locally in your single instance of the service. redis_rate if you want a rate limiter distributed across all your instances of the service. +
    

+
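    A minimal sketch of the local case with golang.org/x/time/rate; the limits are arbitrary:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Allow 10 events per second with a burst of 20.
    	limiter := rate.NewLimiter(rate.Limit(10), 20)

    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()

    	for i := 0; i < 5; i++ {
    		// Wait blocks until the limiter allows one more event.
    		if err := limiter.Wait(ctx); err != nil {
    			fmt.Println("rate limit wait:", err)
    			return
    		}
    		fmt.Println("request", i)
    	}
    }
    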

+ Replace go fmt, goimports with + mvdan/gofumpt. +

+

    gofumpt provides more rules when formatting Go code.
    

+

+ Use + golangci/golangci-lint. +

+

No need to say more. Lint or get the f out!

+

    + If you get fieldalignment errors, use fieldalignment to fix them. +
    

+
+
# Install
 go install golang.org/x/tools/go/analysis/passes/fieldalignment/cmd/fieldalignment@latest
 
 # Fix
-fieldalignment -fix ./internal/business/*.go
-

Thanks

- +fieldalignment -fix ./internal/business/*.go
+
+

+ Thanks +

+
Feel free to ask me via diff --git a/docs/2022-07-12-uuid-or-else.html b/docs/2022-07-12-uuid-or-else.html index 73a6da6..5924222 100644 --- a/docs/2022-07-12-uuid-or-else.html +++ b/docs/2022-07-12-uuid-or-else.html @@ -43,46 +43,137 @@
Index
-

UUID or else

-

There are many use cases where we need to use a unique ID. -In my experience, I only encouter 2 cases:

-
    -
  • ID to trace request from client to server, from service to service (microservice architecture or nanoservice I don't know).
  • -
  • Primary key for database.
  • -
-

In my Go universe, there are some libs to help us with this:

- -

First use case is trace ID, or context aware ID

-

The ID is used only for trace and log. -If same ID is generated twice (because maybe the possibilty is too small but not 0), honestly I don't care. -When I use that ID to search log , if it pops more than things I care for, it is still no harm to me.

-

My choice for this use case is rs/xid. -Because it is small (not span too much on log line) and copy friendly.

-

Second use case is primary key, also hard choice

-

Why I don't use auto increment key for primary key? -The answer is simple, I don't want to write database specific SQL. -SQLite has some different syntax from MySQL, and PostgreSQL and so on. -Every logic I can move to application layer from database layer, I will.

-

In the past and present, I use google/uuid, specificially I use UUID v4. -In the future I will look to use segmentio/ksuid and oklog/ulid (trial and error of course). -Both are sortable, but google/uuid is not. -The reason I'm afraid because the database is sensitive subject, and I need more testing and battle test proof to trust those libs.

-

What else?

-

I think about adding prefix to ID to identify which resource that ID represents.

-

Thanks

- +

+ UUID or else +

+

    + There are many use cases where we need a unique ID. In my experience, I have only encountered 2 cases: +
    

+
    +
  • + ID to trace request from client to server, from service to service + (microservice architecture or nanoservice I don't know). +
  • +
  • Primary key for database.
  • +
+

In my Go universe, there are some libs to help us with this:

+ +

+ First use case is trace ID, or context aware ID +

+

    + The ID is used only for tracing and logging. If the same ID is generated twice (the possibility is small but not 0), honestly I don't care. When I use that ID to search logs, if it pops up more things than I care about, it still does no harm to me. +
    

+

    + My choice for this use case is rs/xid, because it is small (it does not span too much of the log line) and copy friendly. +
    

+
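    A minimal sketch of generating such an ID with rs/xid:

    package main

    import (
    	"fmt"

    	"github.com/rs/xid"
    )

    func main() {
    	// A short, copy-friendly ID, something like "9m4e2mr0ui3e8a215n4g".
    	traceID := xid.New().String()

    	fmt.Printf("traceID=%s msg=%s\n", traceID, "handling request")
    }
    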

+ Second use case is primary key, also hard choice +

+

    + Why don't I use an auto-increment key as the primary key? The answer is simple: I don't want to write database-specific SQL. SQLite has somewhat different syntax from MySQL, PostgreSQL, and so on. Every piece of logic I can move from the database layer to the application layer, I will. +
    

+

    + In the past and present, I use google/uuid, specifically UUID v4. In the future I will look into using segmentio/ksuid and oklog/ulid (with trial and error, of course). Both are sortable, but google/uuid is not. The reason I'm hesitant is that the database is a sensitive subject, and I need more testing and battle-tested proof to trust those libs. +
    

+

+ What else? +

+

    + I am thinking about adding a prefix to the ID to identify which resource that ID represents. +
    

+
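    A minimal sketch of that idea, with made-up prefixes, reusing rs/xid:

    package main

    import (
    	"fmt"

    	"github.com/rs/xid"
    )

    // newID prepends a resource prefix so the ID tells you what it points at.
    func newID(prefix string) string {
    	return prefix + "_" + xid.New().String()
    }

    func main() {
    	fmt.Println(newID("user"))    // e.g. user_9m4e2mr0ui3e8a215n4g
    	fmt.Println(newID("invoice")) // e.g. invoice_9m4e2mr0ui3e8a215n4g
    }
    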

+ Thanks +

+
Feel free to ask me via diff --git a/docs/2022-07-19-migrate-to-buf.html b/docs/2022-07-19-migrate-to-buf.html index 196bfcf..3b446c3 100644 --- a/docs/2022-07-19-migrate-to-buf.html +++ b/docs/2022-07-19-migrate-to-buf.html @@ -43,26 +43,55 @@
Index
-

Migrate to buf from prototool -

-

Why? Because prototool is outdated, and can not run on M1 mac.

-

We need 3 files:

-
    -
  • -build.go: need to install protoc-gen-* binaries with pin version in go.mod -
  • -
  • buf.yaml
  • -
  • buf.gen.yaml
  • -
-

FYI, the libs version I use:

- -

build.go:

-
//go:build tools
+    

+ Migrate to buf from prototool +

+

    + Why? Because prototool is outdated and cannot run on M1 Macs. +
    

+

We need 3 files:

+
    +
  • + build.go: need to install protoc-gen-* binaries with pin + version in go.mod +
  • +
  • buf.yaml
  • +
  • buf.gen.yaml
  • +
+

FYI, the libs version I use:

+ +

build.go:

+
+
//go:build tools
 // +build tools
 
 import (
@@ -71,9 +100,11 @@
   _ "github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway"
   _ "github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger"
   _ "github.com/kei2100/protoc-gen-marshal-zap/plugin/protoc-gen-marshal-zap"
-)
-

buf.yaml

-
version: v1
+)
+
+

buf.yaml

+
+
version: v1
 deps:
   - buf.build/haunt98/googleapis:b38d93f7ade94a698adff9576474ae7c
   - buf.build/haunt98/grpc-gateway:ecf4f0f58aa8496f8a76ed303c6e06c7
@@ -84,9 +115,11 @@
     - FILE
 lint:
   use:
-    - DEFAULT
-

buf.gen.yaml:

-
version: v1
+    - DEFAULT
+
+

buf.gen.yaml:

+
+
version: v1
 plugins:
   - name: go
     out: pkg
@@ -105,9 +138,11 @@
     opt:
       - lang=go
   - name: marshal-zap
-    out: pkg
-

Update Makefile:

-
gen:
+    out: pkg
+
+

Update Makefile:

+
+
gen:
   go install github.com/golang/protobuf/protoc-gen-go
   go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
   go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
@@ -116,32 +151,74 @@
   go install github.com/bufbuild/buf/cmd/buf@latest
   buf mod update
   buf format -w
-  buf generate
-

Run make gen to have fun of course.

-

FAQ

-

Remember grpc-ecosystem/grpc-gateway, envoyproxy/protoc-gen-validate, kei2100/protoc-gen-marshal-zap is optional, so feel free to delete if you don't use theme.

-

If use vendor:

-
    -
  • Replace buf generate with buf generate --exclude-path vendor.
  • -
  • Replace buf format -w with buf format -w --exclude-path vendor.
  • -
-

If you use grpc-gateway:

-
    -
  • Replace import "third_party/googleapis/google/api/annotations.proto"; with import "google/api/annotations.proto"; -
  • -
  • Delete security_definitions, security, in option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger).
  • -
-

The last step is delete prototool.yaml.

-

If you are not migrate but start from scratch:

-
    -
  • Add buf lint to make sure your proto is good.
  • -
  • Add buf breaking --against "https://your-grpc-repo-goes-here.git" to make sure each time you update proto, you don't break backward compatibility.
  • -
-

Thanks

- + buf generate
+
+

Run make gen to have fun of course.

+

+ FAQ +

+

    + Remember that grpc-ecosystem/grpc-gateway, envoyproxy/protoc-gen-validate, and kei2100/protoc-gen-marshal-zap are optional, so feel free to delete them if you don't use them. +
    

+

If use vendor:

+
    +
  • + Replace buf generate with + buf generate --exclude-path vendor. +
  • +
  • + Replace buf format -w with + buf format -w --exclude-path vendor. +
  • +
+

If you use grpc-gateway:

+
    +
  • + Replace + import "third_party/googleapis/google/api/annotations.proto"; + with import "google/api/annotations.proto"; +
  • +
  • + Delete security_definitions, security, in + option + (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger). +
  • +
+

    The last step is to delete prototool.yaml.
    

+

    If you are not migrating but starting from scratch:
    

+
    +
  • Add buf lint to make sure your proto is good.
  • +
  • + Add + buf breaking --against "https://your-grpc-repo-goes-here.git" + to make sure each time you update proto, you don't break backward + compatibility. +
  • +
+

+ Thanks +

+
Feel free to ask me via diff --git a/docs/2022-07-31-experiment-go.html b/docs/2022-07-31-experiment-go.html index c887304..cd3bcb9 100644 --- a/docs/2022-07-31-experiment-go.html +++ b/docs/2022-07-31-experiment-go.html @@ -43,12 +43,31 @@ -

Experiment Go

-

There come a time when you need to experiment new things, new style, new approach. -So this post serves as it is named.

-

Design API by trimming down the interface/struct or whatever

-

Instead of:

-
type Client interface {
+    

+ Experiment Go +

+

    + There comes a time when you need to experiment with new things, a new style, a new approach. So this post serves as it is named. +
    

+

+ Design API by trimming down the interface/struct or whatever +

+

Instead of:

+
+
type Client interface {
     GetUser()
     AddUser()
     GetAccount()
@@ -57,9 +76,11 @@ So this post serves as it is named.

// c is Client c.GetUser() -c.RemoveAccount()
-

Try:

-
type Client struct {
+c.RemoveAccount()
+
+

Try:

+
+
type Client struct {
     User ClientUser
     Account ClientAccount
 }
@@ -76,30 +97,75 @@ So this post serves as it is named.

// c is Client c.User.Get() -c.Account.Remove()
-

The difference is c.GetUser() -> c.User.Get().

-

For example we have client which connect to bank. -There are many functions like GetUser, GetTransaction, VerifyAccount, ... -So split big client to many children, each child handle single aspect, like user or transaction.

-

My concert is we replace an interface with a struct which contains multiple interfaces aka children. -I don't know if this is the right call.

-

This pattern is used by google/go-github.

-

Find alternative to grpc/grpc-go -

-

Why? -See for yourself. -Also read A new Go API for Protocol Buffers to know why v1.20.0 is v2.

-

Currently there are some:

- -

Thanks

- +c.Account.Remove()
+
+

+ The difference is c.GetUser() -> + c.User.Get(). +

+

    + For example, we have a client which connects to a bank. There are many functions like GetUser, GetTransaction, VerifyAccount, ... So we split the big client into many children, each child handling a single aspect, like user or transaction. +
    

+

    + My concern is that we replace an interface with a struct which contains multiple interfaces, aka children. I don't know if this is the right call. +
    

+

+ This pattern is used by + google/go-github. +

+

+ Find alternative to + grpc/grpc-go +

+

+ Why? + See for yourself. Also read + A new Go API for Protocol Buffers + to know why v1.20.0 is v2. +

+

Currently there are some:

+ +

+ Thanks +

+
Feel free to ask me via diff --git a/docs/2022-07-31-sql.html b/docs/2022-07-31-sql.html index 8f864cc..8f9332e 100644 --- a/docs/2022-07-31-sql.html +++ b/docs/2022-07-31-sql.html @@ -43,36 +43,93 @@ -

SQL

-

Previously in my fresher software developer time, I rarely write SQL, I always use ORM to wrap SQL. -But time past and too much abstraction bites me. -So I decide to only write SQL from now as much as possible, no more ORM for me. -But if there is any cool ORM for Go, I guess I try.

-

This guide is not kind of guide which cover all cases. -Just my little tricks when I work with SQL.

-

Stay away from database unique id

-

Use UUID instead. -If you can, and you should, choose UUID type which can be sortable.

-

Stay away from database timestamp

-

Stay away from all kind of database timestamp (MySQL timestmap, SQLite timestamp, ...) -Just use int64 then pass the timestamp in service layer not database layer.

-

Why? Because time and date and location are too much complex to handle. -In my business, I use timestamp in milliseconds. -Then I save timestamp as int64 value to database. -Each time I get timestamp from database, I parse to time struct in Go with location or format I want. -No more hassle!

-

It looks like this:

-
[Business] time, data -> convert to unix timestamp milliseconds -> [Database] int64
-

Use index!!!

-

You should use index for faster query, but not too much. -Don't create index for every fields in table. -Choose wisely!

-

For example, create index in MySQL:

-
CREATE INDEX `idx_timestamp`
-    ON `user_upload` (`timestamp`);
-

Be careful with NULL

-

If compare with field which can be NULL, remember to check NULL for safety.

-
-- field_something can be NULL
+    

+ SQL +

+

    + Previously, in my time as a fresher software developer, I rarely wrote SQL; I always used an ORM to wrap SQL. But time passed and too much abstraction bit me. So I have decided to write plain SQL as much as possible from now on, no more ORM for me. But if there is any cool ORM for Go, I guess I will give it a try. +
    

+

    + This guide is not the kind of guide which covers all cases. It is just my little tricks for when I work with SQL. +
    

+

+ Stay away from database unique id +

+

    + Use UUID instead. If you can, and you should, choose a UUID type which is sortable. +
    

+

+ Stay away from database timestamp +

+

    + Stay away from all kinds of database timestamps (MySQL timestamp, SQLite timestamp, ...). Just use int64 and handle the timestamp in the service layer, not the database layer. +
    

+

    + Why? Because time, date, and location are too complex to handle. In my business, I use timestamps in milliseconds. Then I save the timestamp as an int64 value to the database. Each time I get a timestamp from the database, I parse it to a time struct in Go with the location or format I want. No more hassle! +
    

+

It looks like this:

+
+
+[Business] time, data -> convert to unix timestamp milliseconds -> [Database] int64
+
+
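    A minimal sketch of that flow in Go; the location is arbitrary:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Business -> database: store int64 milliseconds.
    	createdAt := time.Now()
    	ms := createdAt.UnixMilli()
    	fmt.Println("store in DB:", ms)

    	// Database -> business: parse back with the location/format I want.
    	loc, _ := time.LoadLocation("Asia/Ho_Chi_Minh")
    	parsed := time.UnixMilli(ms).In(loc)
    	fmt.Println("read from DB:", parsed.Format(time.RFC3339))
    }
    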

+ Use index!!! +

+

    + You should use indexes for faster queries, but not too many. Don't create an index for every field in the table. Choose wisely! +
    

+

For example, create index in MySQL:

+
+
CREATE INDEX `idx_timestamp`
+    ON `user_upload` (`timestamp`);
+
+

+ Be careful with NULL +

+

    + If you compare with a field which can be NULL, remember to check NULL for safety. +
    

+
+
-- field_something can be NULL
 
 -- Bad
 SELECT *
@@ -82,34 +139,111 @@ Choose wisely!

-- Good SELECT * FROM table -WHERE (field_something IS NULL OR field_something != 1)
-

Need clarify why this happpen? Idk :(

-

-VARCHAR or TEXT -

-

Prefer VARCHAR if you need to query and of course use index, and make sure size of value will never hit the limit. -Prefer TEXT if you don't care, just want to store something.

-

Be super careful when migrate, update database on production and online!!!

-

Plase read docs about online ddl operations before do anything online (keep database running the same time update it, for example create index, ...)

- -

Tools

- -

Thanks

- +WHERE (field_something IS NULL OR field_something != 1)
+
+

    Need to clarify why this happens? Idk :(
    

+

+ + VARCHAR or TEXT +

+

    + Prefer VARCHAR if you need to query it (and of course use an index), and make sure the size of the value will never hit the limit. Prefer TEXT if you don't care and just want to store something. +
    

+

    + Be super careful when you migrate or update a database on production and online!!! +
    

+

    + Please read the docs about online DDL operations before doing anything online (keeping the database running while updating it, for example creating an index, ...) +
    

+ +

+ Tools +

+ +

+ Thanks +

+
Feel free to ask me via diff --git a/docs/2022-08-10-gitignore.html b/docs/2022-08-10-gitignore.html index b5c880c..1a3f256 100644 --- a/docs/2022-08-10-gitignore.html +++ b/docs/2022-08-10-gitignore.html @@ -43,10 +43,23 @@ -

gitignore

-

My quick check for .gitignore.

-

Base

-
# macOS
+    

+ gitignore +

+

My quick check for .gitignore.

+

+ Base +

+
+
# macOS
 .DS_Store
 
 # Windows
@@ -56,16 +69,31 @@
 .idea/
 
 # VSCode
-.vscode/
-

Go

-
# Go
+.vscode/
+
+

+ Go +

+
+
# Go
 # Test coverage
 coverage.out
 
 # Should ignore vendor
-vendor
-

Python

-
venv
+vendor
+
+

+ Python +

+
venv
Feel free to ask me via diff --git a/docs/2022-10-26-reload-config.html b/docs/2022-10-26-reload-config.html index c0eaf34..5aee9cf 100644 --- a/docs/2022-10-26-reload-config.html +++ b/docs/2022-10-26-reload-config.html @@ -43,9 +43,18 @@ -

Reload config

-

This serves as design draft of reload config system

-
@startuml Reload config
+    

+ Reload config +

+

    This serves as a design draft of a reload config system
    

+
+
@startuml Reload config
 
 skinparam defaultFontName Iosevka Term SS08
 
@@ -88,27 +97,46 @@
 
 deactivate other_service
 
-@enduml
-

Config storage can be any key value storage or database like etcd, Consul, mySQL, ...

-

If storage is key value storage, maybe there is API to listen on config change. -Otherwise we should create a loop to get all config from storage for some interval, for example each 5 minute.

-

Each other_service need to get config from its memory, not hit storage. -So there is some delay between upstream config (config in storage) and downstream config (config in other_service), but maybe we can forgive that delay (???).

-

Pros:

-
    -
  • -

    Config can be dynamic, service does not need to restart to apply new config.

    -
  • -
  • -

    Each service only keep 1 connection to storage to listen to config change, not hit storage for each request.

    -
  • -
-

Cons:

-
    -
  • Each service has 1 more dependency, aka storage.
  • -
  • Need to handle fallback config, incase storage failure.
  • -
  • Delay between upstream/downstream config
  • -
+@enduml
+
+

+ Config storage can be any key value storage or database like etcd, Consul, + mySQL, ... +

+

    + If the storage is a key-value store, maybe there is an API to listen for config changes. Otherwise we should create a loop to get all config from storage at some interval, for example every 5 minutes. +
    

+

    + Each other_service needs to get config from its own memory, not hit the storage. So there is some delay between the upstream config (config in storage) and the downstream config (config in other_service), but maybe we can forgive that delay (???). +
    

+
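    A minimal sketch of the polling approach, assuming a hypothetical Storage client; config is kept in memory behind an atomic.Value so reads never hit the storage:

    package config

    import (
    	"context"
    	"sync/atomic"
    	"time"
    )

    // Storage is a hypothetical client for etcd/Consul/MySQL/...
    type Storage interface {
    	GetAll(ctx context.Context) (map[string]string, error)
    }

    type Loader struct {
    	storage Storage
    	current atomic.Value // holds map[string]string
    }

    // Get reads from memory only, never from storage.
    func (l *Loader) Get(key string) string {
    	cfg, _ := l.current.Load().(map[string]string)
    	return cfg[key]
    }

    // Run polls storage at an interval, for example every 5 minutes.
    func (l *Loader) Run(ctx context.Context, interval time.Duration) {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()

    	for {
    		select {
    		case <-ctx.Done():
    			return
    		case <-ticker.C:
    			cfg, err := l.storage.GetAll(ctx)
    			if err != nil {
    				// Keep the previous (fallback) config on failure.
    				continue
    			}
    			l.current.Store(cfg)
    		}
    	}
    }
    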

Pros:

+
    +
  • +

    + Config can be dynamic, service does not need to restart to apply new + config. +

    +
  • +
  • +

    + Each service only keep 1 connection to storage to listen + to config change, not hit storage for each request. +

    +
  • +
+

Cons:

+
    +
  • Each service has 1 more dependency, aka storage.
  • +
      • + Need to handle fallback config, in case of storage failure. +
    
  • +
  • Delay between upstream/downstream config
  • +
Feel free to ask me via diff --git a/docs/2022-12-25-archlinux.html b/docs/2022-12-25-archlinux.html index 509a0f8..a32709e 100644 --- a/docs/2022-12-25-archlinux.html +++ b/docs/2022-12-25-archlinux.html @@ -43,90 +43,174 @@ -

Install Arch Linux

-

Install Arch Linux is thing I always want to do for my laptop/PC since I had my laptop in ninth grade.

-

This is not a guide for everyone, this is just save for myself in a future and for anyone who want to walk in my shoes.

-

Installation guide

-

Pre-installation

-

Check disks carefully:

-
lsblk
-

USB flash installation medium

-

Verify the boot mode

-

Check UEFI mode:

-
ls /sys/firmware/efi/efivars
-

Connect to the internet

-

For wifi, use iwd.

-

Partition the disks

-

GPT fdisk:

-
cgdisk /dev/sdx
-

Partition scheme

-

UEFI/GPT layout:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Mount pointPartitionPartition typeSuggested size
/mnt/efi/dev/efi_system_partitionEFI System Partition512 MiB
/mnt/boot/dev/extended_boot_loader_partitionExtended Boot Loader Partition1 GiB
/mnt/dev/root_partitionRoot Partition
-

BIOS/GPT layout:

- - - - - - - - - - - - - - - - - - - - - - - -
Mount pointPartitionPartition typeSuggested size
BIOS boot partition1 MiB
/mnt/dev/root_partitionRoot Partition
-

LVM:

-
# Create physical volumes
+    

+ Install Arch Linux +

+

    + Installing Arch Linux is something I have always wanted to do for my laptop/PC, ever since I got my laptop in ninth grade. +
    

+

    + This is not a guide for everyone; this is just saved for my future self and for anyone who wants to walk in my shoes. +
    

+

+ Installation guide +

+

+ Pre-installation +

+

Check disks carefully:

+
lsblk
+

+ USB flash installation medium +

+

+ Verify the boot mode +

+

Check UEFI mode:

+
+
ls /sys/firmware/efi/efivars
+
+

+ Connect to the internet +

+

+ For wifi, use + iwd. +

+

+ Partition the disks +

+

+ GPT fdisk: +

+
+
cgdisk /dev/sdx
+
+

+ Partition scheme +

+

UEFI/GPT layout:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Mount pointPartitionPartition typeSuggested size
/mnt/efi/dev/efi_system_partitionEFI System Partition512 MiB
/mnt/boot/dev/extended_boot_loader_partitionExtended Boot Loader Partition1 GiB
/mnt/dev/root_partitionRoot Partition
+

BIOS/GPT layout:

+ + + + + + + + + + + + + + + + + + + + + + + +
Mount pointPartitionPartition typeSuggested size
BIOS boot partition1 MiB
/mnt/dev/root_partitionRoot Partition
+

LVM:

+
+
# Create physical volumes
 pvcreate /dev/sdaX
 
 # Create volume groups
 vgcreate RootGroup /dev/sdaX /dev/sdaY
 
 # Create logical volumes
-lvcreate -l +100%FREE RootGroup -n rootvol
-

Format:

-
# efi
+lvcreate -l +100%FREE RootGroup -n rootvol
+
+

Format:

+
+
# efi
 mkfs.fat -F32 /dev/efi_system_partition
 
 # boot
@@ -139,9 +223,11 @@ mkfs.ext4 -L ROOT /dev/root_partition
 mkfs.btrfs -L ROOT /dev/root_partition
 
 # root on lvm
-mkfs.ext4 /dev/RootGroup/rootvol
-

Mount:

-
# root
+mkfs.ext4 /dev/RootGroup/rootvol
+
+

Mount:

+
+
# root
 mount /dev/root_partition /mnt
 
 # root with btrfs
@@ -154,9 +240,19 @@ mount /dev/RootGroup/rootvol /mnt
 mount --mkdir /dev/efi_system_partition /mnt/efi
 
 # boot
-mount --mkdir /dev/extended_boot_loader_partition /mnt/boot
-

Installation

-
pacstrap -K /mnt base linux linux-firmware
+mount --mkdir /dev/extended_boot_loader_partition /mnt/boot
+
+

+ Installation +

+
+
pacstrap -K /mnt base linux linux-firmware
 
 # AMD
 pacstrap -K /mnt amd-ucode
@@ -171,39 +267,125 @@ pacstrap -K /mnt btrfs-progs
 pacstrap -K /mnt lvm2
 
 # Text editor
-pacstrap -K /mnt neovim
-

Configure

-

fstab

-
genfstab -U /mnt >> /mnt/etc/fstab
-

Chroot

-
arch-chroot /mnt
-

Time zone

-
ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
+pacstrap -K /mnt neovim
+
+

+ Configure +

+

+ fstab +

+
+
genfstab -U /mnt >> /mnt/etc/fstab
+
+

+ Chroot +

+
+
arch-chroot /mnt
+
+

+ Time zone +

+
+
+ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
 
-hwclock --systohc
-

Localization:

-

Edit /etc/locale.gen:

-
# Uncomment en_US.UTF-8 UTF-8
-

Generate locales:

-
locale-gen
-

Edit /etc/locale.conf:

-
LANG=en_US.UTF-8
-

Network configuration

-

Edit /etc/hostname:

-
myhostname
-

Initramfs

-

Edit /etc/mkinitcpio.conf:

-
# LVM
+hwclock --systohc
+
+

+ Localization: +

+

Edit /etc/locale.gen:

+
+
# Uncomment en_US.UTF-8 UTF-8
+
+

Generate locales:

+
locale-gen
+

Edit /etc/locale.conf:

+
+
LANG=en_US.UTF-8
+
+

+ Network configuration +

+

Edit /etc/hostname:

+
myhostname
+

+ Initramfs +

+

Edit /etc/mkinitcpio.conf:

+
+
# LVM
 # https://wiki.archlinux.org/title/Install_Arch_Linux_on_LVM#Adding_mkinitcpio_hooks
 HOOKS=(base udev ... block lvm2 filesystems)
 
 # https://wiki.archlinux.org/title/mkinitcpio#Common_hooks
-# Replace udev with systemd
-
mkinitcpio -P
-

Root password

-
passwd
-

Addition

-
# NetworkManager
+# Replace udev with systemd
+
+
mkinitcpio -P
+

+ Root password +

+
passwd
+

+ Addition +

+
+
# NetworkManager
 pacman -Syu networkmanager
 systemctl enable NetworkManager.service
 
@@ -212,15 +394,55 @@ pacman -Syu bluez
 systemctl enable bluetooth.service
 
 # Clock
-timedatectl set-ntp true
-

Boot loader

-

systemd-boot

-

GRUB

-

General recommendations

-

Always remember to check dependencies when install packages.

-

System administration

-

Sudo:

-
pacman -Syu sudo
+timedatectl set-ntp true
+
+

+ Boot loader +

+

systemd-boot

+

+ GRUB +

+

+ General recommendations +

+

    + Always remember to check dependencies when installing packages. +
    

+

+ System administration +

+

+ Sudo: +

+
+
pacman -Syu sudo
 
 EDITOR=nvim visudo
 # Uncomment group wheel
@@ -232,22 +454,53 @@ useradd -m -G wheel -c "The Joker
 useradd -m -G wheel -s /usr/bin/zsh -c "The Joker" joker
 
 # Set password
-passwd joker
-

systemd-homed (WIP):

-
systemctl enable systemd-homed.service
+passwd joker
+
+

+ systemd-homed (WIP): +

+
+
systemctl enable systemd-homed.service
 
 homectl create joker --real-name="The Joker" --member-of=wheel
 
 # Using zsh
-homectl update joker --shell=/usr/bin/zsh
-

Note: -Can not run homectl when install Arch Linux. -Should run on the first boot.

-

Desktop Environment

-

Install Xorg:

-
pacman -Syu xorg-server
-

GNOME

-
pacman -Syu gnome-shell \
+homectl update joker --shell=/usr/bin/zsh
+
+

    + Note: homectl cannot be run while installing Arch Linux. It should be run on the first boot. +
    

+

+ Desktop Environment +

+

+ Install + Xorg: +

+
+
pacman -Syu xorg-server
+
+

+ GNOME +

+
+
pacman -Syu gnome-shell \
 	gnome-control-center gnome-system-monitor \
 	gnome-tweaks gnome-backgrounds gnome-screenshot gnome-keyring gnome-logs \
 	gnome-console gnome-text-editor \
@@ -255,37 +508,164 @@ Should run on the first boot.

# Login manager pacman -Syu gdm -systemctl enable gdm.service
-

KDE (WIP)

-
pacman -Syu plasma-meta \
+systemctl enable gdm.service
+
+

+ KDE (WIP) +

+
+
pacman -Syu plasma-meta \
 	kde-system-meta
 
 # Login manager
 pacman -Syu sddm
-systemctl enable sddm.service
-

List of applications

-

pacman

-

Uncomment in /etc/pacman.conf:

-
# Misc options
+systemctl enable sddm.service
+
+

+ List of applications +

+

+ pacman +

+

Uncomment in /etc/pacman.conf:

+
+
# Misc options
 Color
-ParallelDownloads
-

Pipewire (WIP)

-
pacman -Syu pipewire wireplumber \
+ParallelDownloads
+
+

+ Pipewire (WIP) +

+
+
+pacman -Syu pipewire wireplumber \
 	pipewire-alsa pipewire-pulse \
-	gst-plugin-pipewire pipewire-v4l2
-

Flatpak (WIP)

-
pacman -Syu flatpak
-

Improving performance

-

https://wiki.archlinux.org/index.php/swap#Swap_file

-

https://wiki.archlinux.org/index.php/swap#Swappiness

-

https://wiki.archlinux.org/index.php/Systemd/Journal#Journal_size_limit

-

https://wiki.archlinux.org/index.php/Core_dump#Disabling_automatic_core_dumps

-

https://wiki.archlinux.org/index.php/Solid_state_drive#Periodic_TRIM

-

https://wiki.archlinux.org/index.php/Silent_boot

-

https://wiki.archlinux.org/title/Improving_performance#Watchdogs

-

https://wiki.archlinux.org/title/PRIME

-

In the end

-

This guide is updated regularly I promise.

+ gst-plugin-pipewire pipewire-v4l2
+
+

+ Flatpak (WIP) +

+
+
pacman -Syu flatpak
+
+

+ Improving performance +

+

+ https://wiki.archlinux.org/index.php/swap#Swap_file +

+

+ https://wiki.archlinux.org/index.php/swap#Swappiness +

+

+ https://wiki.archlinux.org/index.php/Systemd/Journal#Journal_size_limit +

+

+ https://wiki.archlinux.org/index.php/Core_dump#Disabling_automatic_core_dumps +

+

+ https://wiki.archlinux.org/index.php/Solid_state_drive#Periodic_TRIM +

+

+ https://wiki.archlinux.org/index.php/Silent_boot +

+

+ https://wiki.archlinux.org/title/Improving_performance#Watchdogs +

+

+ https://wiki.archlinux.org/title/PRIME +

+

+ In the end +

+

This guide is updated regularly, I promise.

Feel free to ask me via diff --git a/docs/2022-12-25-go-buf.html b/docs/2022-12-25-go-buf.html index 07e787e..342af4e 100644 --- a/docs/2022-12-25-go-buf.html +++ b/docs/2022-12-25-go-buf.html @@ -43,10 +43,16 @@ -

Integration Go gRPC with Buf

-

There are 2 questions here. -What is Buf? -And why Buf?

+

+ Integration Go gRPC with Buf +

+

There are 2 questions here. What is Buf? And why Buf?

Feel free to ask me via diff --git a/docs/2022-12-25-go-test-asap.html b/docs/2022-12-25-go-test-asap.html index 6921ac9..e55d563 100644 --- a/docs/2022-12-25-go-test-asap.html +++ b/docs/2022-12-25-go-test-asap.html @@ -43,38 +43,88 @@ -

Speed up writing Go test ASAP

-

Imagine your project currently has 0% unit test code coverage. -And your boss keeps pushing it to 80% or even 90%? -What do you do? -Give up?

-

What if I tell you there is a way? -Not entirely cheating but ... you know, there is always a trade-off.

-

If your purpose is to carefully test every path, checking that every return value is correct, -sadly this post is not for you, I guess. -If you only want a good test coverage number with as little effort as possible, I hope this will show you some ideas you can use :)

-

In my opinion, unit tests are not that important (like a must-have). -They just make sure your code runs exactly as you intend it to. -If you don't think about edge cases beforehand, unit tests won't help you.

-

First, rewrite the impossible (to test) out

-

When I was learning programming, I came across a very interesting idea, which later became a big part of my mindset as a dev. -I don't recall it exactly, kinda like: "Don't just fix bugs, rewrite it so that kind of bug will not appear again". -So in our context, there are some things that are hard or impossible to write tests for in Go. -My suggestion is: don't use those things.

-

In my experience, I can list a few here:

-
    -
  • Reading config on every func call (viper.Get...). You can and you should init all config when the project starts.
  • -
  • Not using Dependency Injection (DI). There are plenty of posts on the Internet telling you how to do DI properly.
  • -
  • Using global vars (except global Err... vars). You should move all global vars to fields inside some struct.
  • -
-

Let the fun (writing test) begin

-

If you have coded Go long enough, you know table-driven tests and how useful they are. -You set up test data, then you test. -Somewhere in the future, you change the func, you update the test data, and you're good!

-

In simple cases, your func only has 2 or 3 inputs, so table-driven tests still look good. -But the real world is ugly (maybe not, idk, I'm just too young in this industry). Your func can have 5 or 10 inputs, and it can also call many third-party services.

-

Imagine having the below func to upload an image:

-
type service struct {
+    

+ Speed up writing Go test ASAP +

+

+ Imagine your project currently has 0% unit test code coverage. And your + boss keeps pushing it to 80% or even 90%? What do you do? Give up? +

+

+ What if I tell you there is a way? Not entirely cheating but ... you know, + there is always a trade-off. +

+

+ If your purpose is to carefully test every path, checking that every return + value is correct, sadly this post is not for you, I guess. If you only want a good + test coverage number with as little effort as possible, I hope this will + show you some ideas you can use :) +

+

+ In my opinion, unit tests are not that important (like a must-have). They + just make sure your code runs exactly as you intend it to. If you + don't think about edge cases beforehand, unit tests won't help you. +

+

+ First, rewrite the impossible (to test) out +

+

+ When I was learning programming, I came across a very interesting idea, which later + became a big part of my mindset as a dev. I don't recall it exactly, kinda like: + "Don't just fix bugs, rewrite it so that kind of bug will not appear + again". So in our context, there are some things that are hard or impossible to write + tests for in Go. My suggestion is: don't use those things. +

+

In my experience, I can list a few here:

+
    +
  • + Reading config on every func call (viper.Get...). You can and + you should init all config when the project starts. +
  • +
  • + Not using Dependency Injection (DI). There are plenty of posts on the Internet + telling you how to do DI properly. +
  • +
  • + Using global vars (except global Err... vars). You should move + all global vars to fields inside some struct (see the sketch after this list). +
  • +
+
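To make that list concrete, here is a minimal sketch of what the refactor can look like. It is only an illustration: the Config fields, LoadConfig, NewService and the placeholder DB/Redis/MinIO interfaces are invented here, not taken from a real project. The point is that viper is read once at startup and everything else arrives through struct fields, so tests can inject mocks and there is no global state.

package imagesvc

import "github.com/spf13/viper"

// Placeholder interfaces standing in for the real dependencies.
type DB interface{}
type Redis interface{}
type MinIO interface{}

// Config is loaded once when the project starts,
// instead of calling viper.Get... inside every func.
type Config struct {
	BucketName string
	MaxSizeMB  int
}

func LoadConfig() Config {
	return Config{
		BucketName: viper.GetString("minio.bucket"),
		MaxSizeMB:  viper.GetInt("upload.max_size_mb"),
	}
}

// All dependencies and config come in via fields (DI), no global vars,
// so tests can swap in mocks.
type service struct {
	cfg   Config
	db    DB
	redis Redis
	minio MinIO
}

func NewService(cfg Config, db DB, redis Redis, minio MinIO) *service {
	return &service{cfg: cfg, db: db, redis: redis, minio: minio}
}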

+ Let the fun (writing test) begin +

+

+ If you have coded Go long enough, you know table-driven tests and how useful + they are. You set up test data, then you test. Somewhere in the future, you + change the func, you update the test data, and you're good! +

+

+ In simple cases, your func only has 2 or 3 inputs, so table-driven tests + still look good. But the real world is ugly (maybe not, idk, I'm just too + young in this industry). Your func can have 5 or 10 inputs, and it can also + call many third-party services. +

+

Imagine having the below func to upload an image:

+
+
type service struct {
     db DB
     redis Redis
     minio MinIO
@@ -105,9 +155,15 @@ But real world is ugly (maybe not, idk I'm just too young in this industry). You
     }
 
     return nil
-}
-

With table-driven tests, and thanks to stretchr/testify, I usually write like this:

-
type ServiceSuite struct {
+}
+
+

+ With table-driven tests, and thanks to + stretchr/testify, I + usually write like this: +

+
+
type ServiceSuite struct {
     suite.Suite
 
     db DBMock
@@ -152,16 +208,20 @@ But real world is ugly (maybe not, idk I'm just too young in this industry). You
             s.Equal(wantErr, gotErr)
         })
     }
-}
-

Looks good, right? -Be careful with this. -It can go from 0 to 100 ugly real quick.

-

What if req is a struct with many fields? -Then in each test case you need to set up req. -They are almost the same, but for some error cases you must alter req. -It's easy to init it with a wrong value here (a typo maybe?). -Also, all the reqs look similar, kinda duplicated.

-
tests := []struct{
+}
+
+

+ Looks good, right? Be careful with this. It can go from 0 to 100 ugly real + quick. +

+

+ What if req is a struct with many fields? Then in each test case you need to + set up req. They are almost the same, but for some error cases you must + alter req. It's easy to init it with a wrong value here (a typo maybe?). + Also, all the reqs look similar, kinda duplicated. +

+
+
tests := []struct{
         name string
         req Request
         verifyErr error
@@ -207,10 +267,14 @@ Also all req looks similiar, kinda duplicated.

} // Other fields } - }
-

What if the dependencies of service keep growing? -More mock errors in the test data, of course.

-
    tests := []struct{
+    }
+
+

+ What if the dependencies of service keep growing? More mock errors in the test data, + of course. +

+
+
    tests := []struct{
         name string
         req Request
         verifyErr error
@@ -228,15 +292,30 @@ More mock error to test data of course.

{ // Init test case } - }
-

The test file keeps growing longer and longer until I feel sick about it.

-

See the tektoncd/pipeline unit tests to get a feeling for this. -Last time I looked, TestPodBuild had almost 2000 lines.

-

The solution I propose here is simple (absolutely not perfect, but good for my use case), thanks to stretchr/testify. -I init all default actions for the success case. -Then I alter the request or a mock error so the unit test hits the other cases. -Remember, if a unit test path is hit, code coverage surely increases, and that is my goal.

-
// Init ServiceSuite as above
+    }
+
+

+ The test file keeps growing longer and longer until I feel sick about it. +

+

+ See the + tektoncd/pipeline unit tests + to get a feeling for this. Last time I looked, TestPodBuild had + almost 2000 lines. +

+

+ The solution I propose here is simple (absolutely not perfect, but good + for my use case), thanks to stretchr/testify. I init all + default actions for the success case. Then I + alter the request or a mock error so the unit test hits the other + cases. Remember, if a unit test path is hit, code coverage surely increases, + and that is my goal. +

+
+
// Init ServiceSuite as above
 
 func (s *ServiceSuite) TestUpload() {
     // Init success request
@@ -271,11 +350,25 @@ Remember if unit test is hit, code coverate is surely increaesed, and that my // ...
-}
-

If you think this is not quick enough, just ignore the response. -You only need to check whether there is an error or not if code coverage is all you want.

-

So if the request changes fields or there are more dependencies, I need to update the success case, and maybe add a corresponding error case if needed.

-

Same idea but still with a table: you can find it here, Functional table-driven tests in Go - Fatih Arslan.

+}
+
+

+ If you think this is not quick enough, just ignore the + response. You only need to check whether there is an error or not if code coverage + is all you want. +
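Putting the last two paragraphs together, here is a rough, self-contained sketch of the pattern, assuming stretchr/testify's suite and mock packages. The names in it (Request and its fields, the MinIO interface, the Put method, the upload func) are made up for illustration and are not the post's real code. The idea is just: one success request, success mocks by default, each error case overrides exactly one thing, and only the error is asserted.

package imagesvc_test

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/suite"
)

// Hypothetical request and dependency, only to show the pattern.
type Request struct {
	FileName string
	Data     []byte
}

type MinIO interface {
	Put(name string, data []byte) error
}

// upload stands in for the real Upload method under test.
func upload(m MinIO, req Request) error {
	if req.FileName == "" {
		return errors.New("empty file name")
	}
	return m.Put(req.FileName, req.Data)
}

type MinIOMock struct {
	mock.Mock
}

func (m *MinIOMock) Put(name string, data []byte) error {
	args := m.Called(name, data)
	return args.Error(0)
}

type UploadSuite struct {
	suite.Suite

	minio *MinIOMock
}

func (s *UploadSuite) SetupTest() {
	s.minio = new(MinIOMock)
}

// successRequest is the single source of truth for a request that passes.
func (s *UploadSuite) successRequest() Request {
	return Request{FileName: "cat.png", Data: []byte("meow")}
}

func (s *UploadSuite) TestUploadSuccess() {
	req := s.successRequest()
	s.minio.On("Put", req.FileName, req.Data).Return(nil)

	s.NoError(upload(s.minio, req))
}

func (s *UploadSuite) TestUploadEmptyFileName() {
	// Same success request, break exactly one field for this case.
	req := s.successRequest()
	req.FileName = ""

	// Only the error is checked, the response would be ignored anyway.
	s.Error(upload(s.minio, req))
}

func (s *UploadSuite) TestUploadMinIOError() {
	// Same success request, override only the mock that should fail.
	req := s.successRequest()
	s.minio.On("Put", req.FileName, req.Data).Return(errors.New("minio down"))

	s.Error(upload(s.minio, req))
}

func TestUploadSuite(t *testing.T) {
	suite.Run(t, new(UploadSuite))
}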

+

+ So if the request changes fields or there are more dependencies, I need to update the success + case, and maybe add a corresponding error case if needed. +

+

+ Same idea but still with a table: you can find it here, + Functional table-driven tests in Go - Fatih Arslan. +

Feel free to ask me via diff --git a/docs/index.html b/docs/index.html index 07bd611..72e1f71 100644 --- a/docs/index.html +++ b/docs/index.html @@ -43,21 +43,27 @@ -

Index

-

This is where I dump my thoughts.

- +

+ Index +

+

This is where I dump my thoughts.

+
Feel free to ask me via diff --git a/package.json b/package.json new file mode 100644 index 0000000..af42f02 --- /dev/null +++ b/package.json @@ -0,0 +1,5 @@ +{ + "devDependencies": { + "prettier": "2.8.1" + } +} diff --git a/yarn.lock b/yarn.lock new file mode 100644 index 0000000..d5cfe2c --- /dev/null +++ b/yarn.lock @@ -0,0 +1,8 @@ +# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY. +# yarn lockfile v1 + + +prettier@2.8.1: + version "2.8.1" + resolved "https://registry.yarnpkg.com/prettier/-/prettier-2.8.1.tgz#4e1fd11c34e2421bc1da9aea9bd8127cd0a35efc" + integrity sha512-lqGoSJBQNJidqCHE80vqZJHWHRFoNYsSpP9AjFhlhi9ODCJA541svILes/+/1GM3VaL/abZi7cpFzOpdR9UPKg==