feat: use deno fmt instead of prettier

main
sudo pacman -Syu 2023-08-06 01:56:25 +07:00
parent 5bdf10ff38
commit d82cc62524
26 changed files with 498 additions and 365 deletions


```diff
@@ -40,9 +40,7 @@ gen:
 	go run .
 format-html:
-	bun upgrade
-	bun install --global prettier
-	prettier --write ./posts ./templates ./docs
+	deno fmt ./posts ./templates ./docs
 srht:
 	# https://srht.site/quickstart
```


# Backup my way
First thing first, I want to list my own devices, which I have through the years:
- ~~Laptop Samsung NP300E4Z-S06VN (Old laptop which I give to my mom)~~
- ~~[Laptop Dell Inspiron 15 3567](https://www.dell.com/support/home/en-vn/product-support/product/inspiron-15-3567-laptop/drivers) [LVFS](https://fwupd.org/lvfs/devices/com.dell.uefi1d4362ca.firmware) (My mom bought it for me when I go to college, I give it to my mom afterward)~~
- ~~[Laptop Acer Nitro AN515-45](https://www.acer.com/ac/en/US/content/support-product/8841) (Gaming laptop which I buy for gaming, I give it to my sister)~~
- MacBook Pro M1 2020 (My company laptop)
- ~~Phone [LG G3 D851 T-Mobile](https://forum.xda-developers.com/c/lg-g3.3147/) (Bought long time ago, now is a brick)~~
- ~~Phone [Xiaomi Redmi 6A](https://forum.xda-developers.com/c/xiaomi-redmi-6a.7881/) (I give it to my sister too)~~
- Phone [Xiaomi Poco X3 NFC](https://forum.xda-developers.com/c/xiaomi-poco-x3-nfc.11523/) (Primary phone which I use daily)
App/Service I use daily:
- Google Drive (currently use 200GB plan)
- GMail < [SimpleLogin](https://simplelogin.io/) < Proton Mail
The purpose is that I want my data to be safe, secure, and can be easily recovered if I lost some devices;
or in the worst situation, I lost all.
Because you know, it is hard to guess what is waiting for us in the future.
There are 2 sections which I want to share, the first is **How to backup**, the second is **Recover strategy**.
## How to backup
Before I talk about backup, I want to talk about data.
Specifically, which data should I back up?
I use Arch Linux and macOS, primarily work in the terminal so I have too many dotfiles, for example, `~/.config/nvim/init.lua`.
Each time I reinstall Arch Linux (I like it a lot), I need to reconfigure all the settings, and it is time-consuming.
So for the DE and UI settings, I keep them as default as possible; unless something gets in my way, I leave the default setting there and forget about it.
The others are dotfiles, for which I write my own [dotfiles tool](https://github.com/haunt98/dotfiles) to backup and reconfigure easily and quickly.
Also, I know that installing Arch Linux is not easy, even though I have installed it many times (like a thousand times since I was in high school).
Not because it is hard, but as life goes on, the [official install guide](https://wiki.archlinux.org/title/installation_guide) keeps getting new updates and covering too many cases beyond my personal use, so I write my own [guide](https://github.com/haunt98/til/blob/main/install-archlinux.md) to quickly capture what I need to do.
I back up all my dotfiles on GitHub and GitLab as I trust them both.
Also, as I travel the Internet, I discovered [Codeberg](https://codeberg.org/) and [Treehouse](https://gitea.treehouse.systems/) and use them as another backup for git repos.
So that is my dotfiles; for my regular data, like wallpapers, books, images, I use Google Drive (actually I pay for it).
But the steps: open the webpage, click the upload button, choose files, seem boring and time-consuming.
So I use Rclone; it supports Google Drive, OneDrive and many other providers but I only use Google Drive for now.
The commands are simple:
```sh
# Sync from local to remote
rclone sync MyBooks remote:MyBooks -P --exclude .DS_Store

# Sync from remote to local
rclone sync remote:MyBooks MyBooks -P --exclude .DS_Store
```
Before you use Rclone to sync to Google Drive, you should read [Google Drive rclone configuration](https://rclone.org/drive/) first.
For private data, I use restic, which uses Rclone as a backend:
```sh
# Keep only the latest snapshot, prune the rest
restic -r rclone:remote:PrivateData forget --keep-last 1 --prune

# Restore the latest snapshot
restic -r rclone:remote:PrivateData restore latest --target ~
```
The next data is my passwords and my OTPs.
These are the things which I'm scared to lose the most.
First thing first, I enable 2-Step Verification for all of my important accounts, using both the OTP and phone methods.
I use Bitwarden for passwords (that is a long story, coming from Google Password Manager to Firefox Lockwise and then settling down with Bitwarden) and Aegis for OTPs.
The reason I choose Aegis, not Authy (I used Authy for so long but Aegis is definitely better), is because Aegis allows me to export all the OTPs to a single file (which can be encrypted), which I use to transfer or backup easily.
As long as Bitwarden provides free password storage, I use all of its apps and extensions so that I can easily sync passwords between laptops and phones.
The one thing I need to remember is the master password of Bitwarden, in my head.
With Aegis, I export the data, then:
- Sync it to Google Drive
- Store it locally in my phone.
The main problem here is the OTPs: I can not store all of my OTPs in the cloud completely.
Because if I want to access my OTPs in the cloud, I have to log in, and then input my OTP; this is a loop, my friends.
### Backup work related data
APIs tools:
- [HTTPie](https://httpie.io/app)
  - Already synced online (for now).
Stay away from Postman, it's laggy and you can accidentally upload private data publicly.
## Recovery strategy
There are many strategies I have prepared in case something strange happens to my devices.
- If I lose my laptops, a single laptop or all of them, do not panic as long as I have my phone.
  The OTPs are in there, the passwords are in the Bitwarden cloud, other data is in Google Drive, so nothing is lost here.
- If I lose my phone, but not my laptops, I use the OTPs which are stored locally in my laptops.
- In the worst situation, I lose everything: my laptops, my phone.
  The first step is to recover my SIM, then log in to my Google account using the password and SMS OTP.
  After that, log in to my Bitwarden account using the master password and the OTP from Gmail, which I opened previously.
## Misc
Backing up everything is hard, so keep it simple and only back up important things.
Pick one, then stay away from other cloud services:
- Todoist, Evernote, ... -> Google Keep/Notion
- Dropbox, OneDrive, ... -> Google Drive


# Dockerfile for Go
Each time I start a new Go project, I repeat many steps.
Like setting up `.gitignore`, CI configs, Dockerfile, ...
So I decide to have a baseline Dockerfile like this:
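For context, the full baseline can be stitched together from the snippets explained in the rest of this post. This is only a sketch: the Distroless tag and the exact `COPY` lines are assumptions on my part, not quoted from the original file.

```Dockerfile
FROM golang:1.20-bullseye as builder

RUN go install golang.org/dl/go1.20@latest \
    && go1.20 download

WORKDIR /build

COPY go.mod .
COPY go.sum .
COPY vendor vendor
COPY . .

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app -tags timetzdata -trimpath -ldflags="-s -w" .

# Static binary (CGO_ENABLED=0), so the static Debian 11 variant matches bullseye.
FROM gcr.io/distroless/static-debian11

COPY --from=builder /build/app /app

ENTRYPOINT ["/app"]
```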
```Dockerfile
COPY --from=builder /build/app /app

ENTRYPOINT ["/app"]
```
I use [multi-stage build](https://docs.docker.com/develop/develop-images/multistage-build/) to keep my image size small.
First stage is [Go official image](https://hub.docker.com/_/golang),
second stage is [Distroless](https://github.com/GoogleContainerTools/distroless).
Before Distroless, I used the [Alpine official image](https://hub.docker.com/_/alpine).
There is a whole discussion on the Internet about choosing the best base image for Go.
After reading some blogs, I discovered Distroless as a small and secure base image.
So I stick with it for a while.
Also, remember to match Distroless Debian version with Go official image Debian version.
```Dockerfile
FROM golang:1.20-bullseye as builder
```
This is the Go image I use as the build stage.
This can be the official Go image, or a custom image if your company requires one.
```Dockerfile
RUN go install golang.org/dl/go1.20@latest \
&& go1.20 download
```
This is optional.
In my case, my company is slow to update the Go image, so I use this trick to install the latest Go version.
```Dockerfile
WORKDIR /build
...
COPY . .
```
I use `/build` to emphasize that I am building something in that directory.
The 4 `COPY` lines are familiar if you use Go enough.
First are `go.mod` and `go.sum` because they define Go modules.
The second is `vendor`; this is optional but I use it because I don't want to redownload Go modules each time I build the Dockerfile.
```Dockerfile
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app -tags timetzdata -trimpath -ldflags="-s -w" .
```

This is where I build the Go program.
- `CGO_ENABLED=0` because I don't want to mess with C libraries.
- `GOOS=linux GOARCH=amd64` is easy to explain, Linux with x86-64.
- `GOAMD64=v3` is new since [Go 1.18](https://go.dev/doc/go1.18#amd64),
I use v3 because I read about AMD64 version in [Arch Linux rfcs](https://gitlab.archlinux.org/archlinux/rfcs/-/blob/master/rfcs/0002-march.rst).
  TL;DR: newer computers are already x86-64-v3.
- `-tags timetzdata` to embed the timezone database in case the base image does not have one.
- `-trimpath` to support reproducible builds.
- `-ldflags="-s -w"` to strip debugging information.


# Bootstrap Go
It is hard to write a bootstrap tool to quickly create a Go service.
So I write this guide instead.
This is a quick checklist for me every damn time I need to write a Go service from scratch.
Also, this is my personal opinion, so feel free to comment.
## Structure
```
internal
...
	models.go
```
All business code is inside `internal`.
Each business has a different directory `business`.
Inside each business, there are the handlers `http`, `grpc`, `consumer`:
- `grpc` is for internal APIs (other services are clients).
- `consumer` is for consuming messages from queue (Kafka, RabbitMQ, ...).
For each handler, there are usually 3 layers: `handler`, `service`, `repository`:
- `handler` interacts directly with gRPC, REST or consumer using transport-specific code (cookies, ...). In the gRPC case, the framework handles that for us, so we can write business/logic code here too. But remember, gRPC only.
- `service` is where we write business/logic code, and only business/logic code is written here.
- `repository` is where we write code which interacts with database/cache like MySQL, Redis, ...
- `models` is where we put all request, response, data models.
Location:
- `handler` must exist inside `grpc`, `http`, `consumer`.
- `service`, `models` can exist directly inside of `business` if `grpc`, `http`, `consumer` all share the same business/logic.
- `repository` should be placed directly inside of `business`.
## Do not repeat!
If we have too many services, some of the logic will overlap.
For example, service A and service B both need to make a POST API call to service C.
If service A and service B both have libs to call service C for that API, we need to move the libs to some common pkg libs.
So in the future, service D which needs to call C will not copy libs to handle the service C API, but only import from the common pkg libs.
Another bad practice is the adapter service.
No need to write a new service if what we need is just common pkg libs.
## Taste on style guide
```go
func NewS(opts ...OptionS) *S {
	...
}
```
In the above example, I construct `s` with the `WithA` and `WithB` options.
No need to pass fields directly inside `s`.
### Use [errgroup](https://pkg.go.dev/golang.org/x/sync/errgroup) as much as possible
If business logic involves calling too many APIs, but they do not depend on each other, we can fire them in parallel :)
Personally, I prefer `errgroup` to [`WaitGroup`](https://pkg.go.dev/sync#WaitGroup), because I always need to deal with errors.
Be super careful with `egCtx`: use it instead of the parent `ctx` inside `eg.Go`.
Example:
### Use [semaphore](https://pkg.go.dev/golang.org/x/sync/semaphore) when need to implement WorkerPool
Please don't use external libs for WorkerPool, I don't want to deal with dependency hell.
### Use [sync.Pool](https://pkg.go.dev/sync#Pool) when need to reuse object, mainly for `bytes.Buffer`
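A `MarshalWithoutEscapeHTML` in this spirit presumably follows the well-known shape below: a pooled `bytes.Buffer` feeding a `json.Encoder` with HTML escaping turned off. A sketch, not the original code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"sync"
)

// bufPool reuses buffers across calls instead of allocating each time.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func MarshalWithoutEscapeHTML(v any) ([]byte, error) {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()

	enc := json.NewEncoder(buf)
	enc.SetEscapeHTML(false) // keep <, >, & as-is
	if err := enc.Encode(v); err != nil {
		return nil, err
	}

	// Encode appends a newline; copy the bytes (minus it) before the
	// buffer goes back to the pool.
	b := bytes.TrimSuffix(buf.Bytes(), []byte("\n"))
	out := make([]byte, len(b))
	copy(out, b)
	return out, nil
}

func main() {
	out, err := MarshalWithoutEscapeHTML(map[string]string{"url": "https://example.com?a=1&b=2"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```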
### No need `vendor`
Only need if you need something from `vendor`, to generate mock or something else.
### Use `build.go` to include build tools in go.mod
Future contributors will not cry anymore.
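The trick is a file guarded by a build tag that is never compiled, whose blank imports force `go.mod` to pin the tool versions; the specific tools imported here are just examples, not from the original:

```go
//go:build tools

package main

import (
	// Blank imports pin build tools in go.mod so `go install` gets
	// the same version for every contributor.
	_ "github.com/bufbuild/buf/cmd/buf"
	_ "golang.org/x/tools/cmd/stringer"
)
```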
### Don't use cli libs ([spf13/cobra](https://github.com/spf13/cobra), [urfave/cli](https://github.com/urfave/cli)) just for Go service
What is the point to pass many params (`do-it`, `--abc`, `--xyz`) when what we only need is start service?
In my case, service starts with only config, and config should be read from file or environment like [The Twelve Factors](https://12factor.net/) guide.
### Don't use [grpc-ecosystem/grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway)
Just don't.
Use [protocolbuffers/protobuf-go](https://github.com/protocolbuffers/protobuf-go), [grpc/grpc-go](https://github.com/grpc/grpc-go) for gRPC.
Writing 1 service for both gRPC and REST sounds good, but in the end, it is not worth it.
### Don't use [uber/prototool](https://github.com/uber/prototool), use [bufbuild/buf](https://github.com/bufbuild/buf)
prototool is deprecated, and buf can generate, lint, and format as well as prototool.
### Use [gin-gonic/gin](https://github.com/gin-gonic/gin) for REST.
It is fast!
- Don't overuse `func (*Logger) With`. Because if the log line is too long, there is a possibility that we can lose it.
- Use `MarshalLogObject` when we need to hide some fields of an object when logging (field is too long or has a sensitive value).
- Don't use `Panic`. Use `Fatal` for errors when starting the service to check dependencies. If you really need panic level, use `DPanic`.
- If doubt, use `zap.Any`.
- Use `contextID` or `traceID` in every log line for easy debugging.
### To read config, use [spf13/viper](https://github.com/spf13/viper)
Only init config in the main or cmd layer.
Do not use `viper.Get...` inside the business layer.
Why?
- Hard to mock and test
- Put all config in a single place for easy tracking
Also, be careful if a config value is empty.
You should decide whether to continue or stop the service when a config value is empty.
### Don't overuse ORM libs, no need to handle another layer above SQL.
Each ORM lib has its own syntax.
Learning and using those libs correctly is time-consuming.
So just stick to plain SQL.
It is easier to debug when something is wrong.
Also, please use [prepared statements](https://go.dev/doc/database/prepared-statements) as much as possible.
Ideally, we should init all prepared statements when we init the database connection, to cache them, not create them every time we need them.
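A sketch of that idea with `database/sql`; the repository name, query, and placeholder style are assumptions of mine, and the driver wiring is omitted:

```go
package repo

import (
	"context"
	"database/sql"
)

type UserRepo struct {
	getName *sql.Stmt
}

// NewUserRepo prepares every statement once, when the database
// connection is initialized, so later queries reuse the cached statements.
func NewUserRepo(ctx context.Context, db *sql.DB) (*UserRepo, error) {
	getName, err := db.PrepareContext(ctx, "SELECT name FROM users WHERE id = ?")
	if err != nil {
		return nil, err
	}
	return &UserRepo{getName: getName}, nil
}

func (r *UserRepo) GetName(ctx context.Context, id int64) (string, error) {
	var name string
	// No re-prepare on each call; the statement is already cached.
	err := r.getName.QueryRowContext(ctx, id).Scan(&name)
	return name, err
}
```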
But `database/sql` has its own limits.
For example, it is hard to get the primary key after insert/update.
So maybe you want to use an ORM for those cases.
I hear that [go-gorm/gorm](https://github.com/go-gorm/gorm), [ent/ent](https://github.com/ent/ent) are good.
### Connect Redis with [redis/go-redis](https://github.com/redis/go-redis)
Be careful when using [HGETALL](https://redis.io/commands/hgetall/).
If the key is not found, empty data is returned, not a nil error.
See [redis/go-redis/issues/1668](https://github.com/redis/go-redis/issues/1668)
Use [Pipelines](https://redis.uptrace.dev/guide/go-redis-pipelines.html) for:
- HSET and EXPIRE in 1 command.
- Multiple GET in 1 command.
Prefer to use `Pipelined` instead of `Pipeline`.
Inside `Pipelined`, please return `redis.Cmder` for each command.
Example:
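A sketch with go-redis v9, where the key and field names are placeholders; `HSET` and `EXPIRE` go out in one round trip, and each queued command comes back as a `redis.Cmder`:

```go
package cache

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

func SetUser(ctx context.Context, rdb *redis.Client) error {
	// Pipelined queues the commands and sends them in one round trip.
	cmds, err := rdb.Pipelined(ctx, func(pipe redis.Pipeliner) error {
		pipe.HSet(ctx, "user:1", "name", "gopher")
		pipe.Expire(ctx, "user:1", time.Hour)
		return nil
	})
	if err != nil {
		return err
	}
	// Each redis.Cmder carries the result of one queued command.
	for _, cmd := range cmds {
		if cmd.Err() != nil {
			return cmd.Err()
		}
	}
	return nil
}
```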
For SQLite, remember to config:
- Write-Ahead Logging: `PRAGMA journal_mode=WAL`
- Disable the connections pool with `SetMaxOpenConns` set to 1
Don't use [mattn/go-sqlite3](https://github.com/mattn/go-sqlite3), it requires `CGO_ENABLED`.
### Connect Kafka with [Shopify/sarama](https://github.com/Shopify/sarama)
Don't use [confluentinc/confluent-kafka-go](https://github.com/confluentinc/confluent-kafka-go), it requires `CGO_ENABLED`.
### If you want test, just use [stretchr/testify](https://github.com/stretchr/testify).
It is easy to write a suite test, thanks to testify.
Also, for mocking, there are many options out there.
Pick 1 then sleep peacefully.
### If need to mock, choose [matryer/moq](https://github.com/matryer/moq) or [uber/mock](https://github.com/uber/mock)
The first is easy to use but not as powerful as the latter.
If you want to make sure a mock func is called the correct number of times, use the latter.
Example with `matryer/moq`:
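A sketch of how `matryer/moq` is typically used; the `Client` interface and all names are assumptions of mine. moq generates a `ClientMock` whose `...Func` fields you fill per test:

```go
package user

import "context"

// Client is the dependency we want to mock in tests.
//
//go:generate moq -out client_mock.go . Client
type Client interface {
	GetName(ctx context.Context, id int64) (string, error)
}

// In a test, the generated mock is used roughly like:
//
//	mock := &ClientMock{
//		GetNameFunc: func(ctx context.Context, id int64) (string, error) {
//			return "gopher", nil
//		},
//	}
//	// pass mock anywhere a Client is expected, then assert
//	// len(mock.GetNameCalls()) == 1
```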
### Don't waste your time rewrite rate limiter if your use case is simple, use [rate](https://pkg.go.dev/golang.org/x/time/rate) or [go-redis/redis_rate](https://github.com/go-redis/redis_rate)
**rate** if you want a rate limiter local to your single instance of the service.
**redis_rate** if you want a rate limiter distributed across all your instances of the service.
### Replace `go fmt`, `goimports` with [mvdan/gofumpt](https://github.com/mvdan/gofumpt).
### Use [golangci/golangci-lint](https://github.com/golangci/golangci-lint).
No need to say more.
Lint or get the f out!
If you get `fieldalignment` error, use [fieldalignment](https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/fieldalignment) to fix them.
```sh
# Install
...
```


# UUID or else
There are many use cases where we need a unique ID.
In my experience, I only encounter 2 cases:
- ID to trace request from client to server, from service to service (microservice architecture or nanoservice I don't know).
- Primary key for database.
In my Go universe, there are some libs to help us with this:
## First use case is trace ID, or context aware ID
The ID is used only for trace and log.
If the same ID is generated twice (the possibility is too small but not 0), honestly I don't care.
When I use that ID to search logs, if it pops up more than the things I care for, it is still no harm to me.
My choice for this use case is **rs/xid**.
Because it is small (does not span too much of the log line) and copy friendly.
## Second use case is primary key, also hard choice
Why don't I use an auto increment key for the primary key?
The answer is simple, I don't want to write database-specific SQL.
SQLite has different syntax from MySQL, PostgreSQL and so on.
Any logic I can move from the database layer to the application layer, I will.
In the past and present, I use **google/uuid**, specifically UUID v4.
In the future I will look at **segmentio/ksuid** and **oklog/ulid** (trial and error of course).
Both are sortable, but **google/uuid** is not.
The reason I'm afraid is that the database is a sensitive subject, and I need more testing and battle-tested proof to trust those libs.
## What else?


Why? Because `prototool` is outdated, and cannot run on M1 Mac.
We need 3 files:
- `build.go`: need to install protoc-gen-\* binaries with pin version in `go.mod`
- `buf.yaml`
- `buf.gen.yaml`
Run `make gen` to have fun of course.
If using `bufbuild/protoc-gen-validate`, `kei2100/protoc-gen-marshal-zap`, better make a raw copy of proto file for other services to integrate:
```Makefile
raw:
@ -98,7 +100,9 @@ raw:
## FAQ
Remember `bufbuild/protoc-gen-validate`, `kei2100/protoc-gen-marshal-zap`,
`grpc-ecosystem/grpc-gateway` are optional, so feel free to delete them if you
don't use them.
If use `vendor`:
@ -107,24 +111,29 @@ If use `vendor`:
If you use grpc-gateway:
- Replace `import "third_party/googleapis/google/api/annotations.proto";` with `import "google/api/annotations.proto";`
- Delete `security_definitions`, `security`, in `option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger)`.
The last step is to delete `prototool.yaml`.
If you are not migrating but starting from scratch:
- Add `buf lint` to make sure your proto is good.
- Add `buf breaking --against "https://your-grpc-repo-goes-here.git"` to make sure each time you update proto, you don't break backward compatibility.
# Tips
Some experience I got after writing proto files for a living:
- Ignore DRY (Do not Repeat Yourself) when handling proto, don't split proto
  into many files. Trust me, it saves you from wasting time debugging how to
  import Go after generation. Because proto import and Go import are
  [2](https://github.com/golang/protobuf/issues/895) different things. If
  someone already has split proto files, you should use `sed` to fix the damn
  things.
## Thanks
View File
@ -1,7 +1,7 @@
# Experiment Go
There comes a time when you need to experiment with new things, new style, new
approach. So this post serves as it is named.
# Design API by trimming down the interface/struct or whatever
@ -45,12 +45,12 @@ c.Account.Remove()
The difference is `c.GetUser()` -> `c.User.Get()`.
For example we have a client which connects to a bank. There are many functions
like `GetUser`, `GetTransaction`, `VerifyAccount`, ... So split the big client
into many children, each child handling a single aspect, like user or
transaction.
My concern is we replace an interface with a struct which contains multiple
interfaces aka children. I don't know if this is the right call.
This pattern is used by [google/go-github](https://github.com/google/go-github).
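A minimal sketch of that split (the `conn`, `UserService`, `AccountService`
names are made up for illustration), where children share one underlying
connection:

```go
package main

import "fmt"

// conn stands in for the real bank connection (hypothetical).
type conn struct{ addr string }

// UserService handles only user-related calls.
type UserService struct{ c *conn }

func (s *UserService) Get(id string) string {
	return fmt.Sprintf("user %s via %s", id, s.c.addr)
}

// AccountService handles only account-related calls.
type AccountService struct{ c *conn }

func (s *AccountService) Verify(id string) bool { return s.c != nil }

// Client groups children instead of exposing one giant interface.
type Client struct {
	User    *UserService
	Account *AccountService
}

func NewClient(addr string) *Client {
	c := &conn{addr: addr}
	return &Client{
		User:    &UserService{c: c},
		Account: &AccountService{c: c},
	}
}

func main() {
	c := NewClient("bank.example.com")
	fmt.Println(c.User.Get("42"))
	fmt.Println(c.Account.Verify("42"))
}
```

Callers write `c.User.Get("42")` instead of `c.GetUser("42")`, and each child
can be mocked on its own.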
@ -61,12 +61,14 @@ Why?
Also read:
- [A new Go API for Protocol Buffers](https://go.dev/blog/protobuf-apiv2) to know why `v1.20.0` is `v2`.
- [Go Protobuf Plugin Versioning](https://jbrandhorst.com/post/plugin-versioning/).
Currently there are some:
- [bufbuild/connect-go](https://github.com/bufbuild/connect-go). Coming from
  buf, trustworthy but needs time to reach feature parity with grpc-go.
- [twitchtv/twirp](https://github.com/twitchtv/twirp)
- [storj/drpc](https://github.com/storj/drpc)
View File
@ -1,28 +1,28 @@
# SQL
Previously, in my time as a fresher software developer, I rarely wrote SQL, I
always used an ORM to wrap SQL. But time passed and too much abstraction bit
me. So I decided to only write SQL from now on as much as possible, no more ORM
for me. But if there is any cool ORM for Go, I guess I'll try.
This guide is not the kind of guide which covers all cases. Just my little
tricks when I work with SQL.
## Stay away from database unique id
Use UUID instead. If you can, and you should, choose a UUID type which is
sortable.
## Stay away from database timestamp
Stay away from all kinds of database timestamps (MySQL timestamp, SQLite
timestamp, ...). Just use int64, then handle the timestamp in the service
layer, not the database layer.
Why? Because time, date, and location are too complex to handle. In my
business, I use timestamp in milliseconds. Then I save the timestamp as an
int64 value to the database. Each time I get the timestamp from the database, I
parse it to a time struct in Go with the location or format I want. No more
hassle!
It looks like this:
@ -32,9 +32,9 @@ It looks like this:
## Extra field for extra things
Creating a new column in a database is scary, so I suggest avoiding it if you
can. How to avoid it: first design the table with an extra field. It is a black
hole, put everything in there if you want.
I always use MySQL json data type for extra field.
@ -42,9 +42,8 @@ JSON data type also used for dumping request, response data.
## Use index!!!
You should use an index for faster queries, but not too much. Don't create an
index for every field in the table. Choose wisely!
For example, create index in MySQL:
@ -53,7 +52,8 @@ CREATE INDEX idx_user_id
ON user_upload (user_id);
```
If you create an index inside `CREATE TABLE`,
[prefer `INDEX` to `KEY`](https://stackoverflow.com/a/1401615):
```sql
CREATE TABLE user_upload
@ -92,8 +92,9 @@ Need to clarify why this happens? Idk :(
## `VARCHAR` or `TEXT`
Prefer `VARCHAR` if you need to query and of course use an index, and make sure
the size of the value will never hit the limit. Prefer `TEXT` if you don't
care, just want to store something.
## `LIMIT`
@ -101,15 +102,20 @@ Prefer `LIMIT 10 OFFSET 5` to `LIMIT 5, 10` to avoid misunderstanding.
## Be super careful when migrating or updating the database on production and online!!!
Please read docs about online DDL operations before doing anything online (keep
the database running while updating it, for example creating an index, ...):
- [For MySQL 5.7](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-operations.html), [Limitations](https://dev.mysql.com/doc/refman/5.7/en/innodb-online-ddl-limitations.html)
- [For MySQL 8.0](https://dev.mysql.com/doc/refman/8.0/en/innodb-online-ddl-operations.html), [Limitations](https://dev.mysql.com/doc/refman/8.0/en/innodb-online-ddl-limitations.html)
## Tools
- Use [sqlfluff/sqlfluff](https://github.com/sqlfluff/sqlfluff) to check your SQL.
- Use [k1LoW/tbls](https://github.com/k1LoW/tbls) to grasp your database reality :)
## Thanks
View File
@ -49,19 +49,23 @@ deactivate other_service
@enduml
```
Config storage can be any key value storage or database like etcd, Consul,
MySQL, ...
If storage is key value storage, maybe there is API to listen on config change.
Otherwise we should create a loop to get all config from storage at some
interval, for example every 5 minutes.
Each `other_service` needs to get config from its memory, not hit `storage`. So
there is some delay between upstream config (config in `storage`) and
downstream config (config in `other_service`), but maybe we can forgive that
delay (???).
Pros:
- Config can be dynamic, service does not need to restart to apply new config.
- Each service only keeps 1 connection to `storage` to listen to config
  changes, not hitting `storage` for each request.
Cons:
View File
@ -1,8 +1,10 @@
# Install Arch Linux
Installing Arch Linux is a thing I always wanted to do for my laptop/PC since I
had my laptop in ninth grade.
This is not a guide for everyone, this is just saved for myself in the future
and for anyone who wants to walk in my shoes.
## [Installation guide](https://wiki.archlinux.org/index.php/Installation_guide)
@ -46,8 +48,8 @@ UEFI/GPT layout:
| `/mnt/boot` | `/dev/extended_boot_loader_partition` | Extended Boot Loader Partition | 1 GiB |
| `/mnt` | `/dev/root_partition` | Root Partition | |
Why not `/boot/efi`? See
[Lennart Poettering comment](https://github.com/systemd/systemd/pull/3757#issuecomment-234290236).
BIOS/GPT layout:
@ -338,9 +340,8 @@ homectl create joker --real-name="The Joker" --member-of=wheel
homectl update joker --shell=/usr/bin/zsh
```
**Note**: Cannot run `homectl` when installing Arch Linux. Should run on the
first boot.
### Desktop Environment
@ -384,7 +385,8 @@ pacman -Syu pipewire wireplumber \
gst-plugin-pipewire pipewire-v4l2
```
See
[Advanced Linux Sound Architecture](https://wiki.archlinux.org/title/Advanced_Linux_Sound_Architecture)
```sh
pacman -Syu sof-firmware
View File
@ -1,42 +1,47 @@
# Speed up writing Go test ASAP
Imagine your project currently has 0% unit test code coverage. And your boss
keeps pushing it to 80% or even 90%? What do you do? Give up?
What if I tell you there is a way? Not entirely cheating but ... you know,
there is always a trade off.
If your purpose is to test all paths carefully, checking if every return is
correct, sadly this post is not for you, I guess. If you only want a good
number on test coverage, with minimum effort possible, I hope this will show
you some ideas you can use :)
In my opinion, unit tests are not that important (like a must must have). They
just make sure your code is running exactly as you intend it to. If you don't
think about edge cases beforehand, unit tests won't help you.
## First, rewrite the impossible (to test) out
When I learned programming, I encountered a very interesting idea, which later
became my main mindset when I dev. I don't recall it clearly, kinda like:
"Don't just fix bugs, rewrite it so that kind of bug will not appear again". So
in our context, there are some things we can hardly write tests for in Go. My
suggestion is don't use those things.
In my experience, I can list a few here:
- Reading config each time a func is called (`viper.Get...`). You can and you
  should init all config when the project starts.
- Not using Dependency Injection (DI). There are too many posts on the Internet
  telling you how to do DI properly.
- Using global vars (except global `Err...` vars). You should move all global
  vars to fields inside some struct.
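To make the DI point concrete, here is a tiny hypothetical sketch: the
dependency (a clock) is injected as a struct field, so the test can swap in a
fake instead of calling the real thing:

```go
package main

import (
	"fmt"
	"time"
)

// Clock is the dependency we want to swap in tests.
type Clock interface {
	NowMilli() int64
}

type realClock struct{}

func (realClock) NowMilli() int64 { return time.Now().UnixMilli() }

// Service takes its dependencies as fields: no globals, no viper.Get inside.
type Service struct {
	clock Clock
}

func NewService(c Clock) *Service { return &Service{clock: c} }

func (s *Service) Stamp(msg string) string {
	return fmt.Sprintf("%d %s", s.clock.NowMilli(), msg)
}

// In tests, inject a fake clock for deterministic output.
type fakeClock struct{ ms int64 }

func (f fakeClock) NowMilli() int64 { return f.ms }

func main() {
	s := NewService(fakeClock{ms: 42})
	fmt.Println(s.Stamp("hello")) // deterministic: "42 hello"
}
```

Production code passes `realClock{}` to `NewService`; tests pass `fakeClock`.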
## Let the fun (writing test) begin
If you have coded Go long enough, you know table driven tests and how useful
they are. You set up test data, then you test. Somewhere in the future, you
change the func, then you need to update the test data, then you're good!
In a simple case, your func only has 2 or 3 inputs so table driven tests still
look good. But the real world is ugly (maybe not, idk I'm just too young in
this industry). Your func can have 5 or 10 inputs, and it can call many third
party services.
Imagine having the below func to upload an image:
@ -75,7 +80,9 @@ func (s *service) Upload(ctx context.Context, req Request) error {
}
```
With table driven test and thanks to
[stretchr/testify](https://github.com/stretchr/testify), I usually write like
this:
```go
type ServiceSuite struct {
@ -126,15 +133,12 @@ func (s *ServiceSuite) TestUpload() {
}
```
Looks good right? Be careful with this. It can go from 0 to 100 ugly real quick.
What if req is a struct with many fields? Then in each test case you need to
set up req. They are almost the same, but for some error cases you must alter
req. It's easy to init with a wrong value here (a typo maybe?). Also all reqs
look similar, kinda duplicated.
```go
tests := []struct{
@ -186,40 +190,43 @@ tests := []struct{
}
```
What if the dependencies of the service keep growing? More mock errors to test
data of course.
```go
tests := []struct{
	name string
	req Request
	verifyErr error
	minioErr error
	redisErr error
	dbErr error
	logErr error
	wantErr error
	// Murr error
	aErr error
	bErr error
	cErr error
	// ...
}{
	{
		// Init test case
	},
}
```
The test file keeps growing longer and longer until I feel sick about it. See
[tektoncd/pipeline unit test](https://github.com/tektoncd/pipeline/blob/main/pkg/pod/pod_test.go)
to get a feeling about this. When I saw it, `TestPodBuild` had almost 2000
lines.
The solution I propose here is simple (absolutely not perfect, but good for my
use case) thanks to **stretchr/testify**. I init all **default** actions for
the **success** case. Then I **alter** the request or mock error for the unit
test to hit other cases. Remember, if a unit test path is hit, code coverage
surely increases, and that's my **goal**.
```go
// Init ServiceSuite as above
@ -260,9 +267,11 @@ func (s *ServiceSuite) TestUpload() {
}
```
If you think this is not quick enough, just **ignore** the response. You only
need to check whether there is an error if you want code coverage only.
So if the request changes fields or there are more dependencies, I need to
update the success case, and maybe add a corresponding error case if needed.
The same idea but still with a table, you can find here:
[Functional table-driven tests in Go - Fatih Arslan](https://arslan.io/2022/12/04/functional-table-driven-tests-in-go/).
View File
@ -6,7 +6,8 @@ This is collect of all incidents I created in the past :(
Because all configs are read from file.
But the port config is empty -> So when the service inits, it uses that empty
port somehow.
**Solution**: For some configs, make sure to fail first if it's empty.
@ -17,24 +18,24 @@ For example I have 2 APIs:
- API upload: allow user to upload image
- API submit: submit data to server
API upload is slow, it takes 10s to finish. API submit is fast, only takes 2s.
The problem is submit uses data from upload too. When the user calls API
upload, the image is stored in cache. When the user calls API submit, it uses
whatever image is stored in cache.
It's when the fun begins.
Imagine user Trong has already uploaded an image. So he is ready to submit. But
at the same time, he re-calls API upload to upload another image too.
So if API upload finishes first, which is kinda impossible (you know uploading
a file is not fast, right?), everything is right. But in most cases, API submit
finishes first. It means Trong's data is submitted with the old image. Then API
upload finishes, and it replaces the old image with the new one. So the old
one, aka the image in the submitted data, is gone.
Chaos right there!
**Solution**: Use a lock, if the user enters API upload, lock it to prevent the
user calling other APIs. Remember to unlock after finishing.
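A minimal in-memory sketch of that per-user lock (hypothetical names; a real
multi-instance deployment would need a distributed lock, e.g. in Redis):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errBusy = errors.New("another API is in progress for this user")

// userLocks guards against the same user running upload and submit at once.
type userLocks struct {
	mu     sync.Mutex
	locked map[string]bool
}

func newUserLocks() *userLocks {
	return &userLocks{locked: make(map[string]bool)}
}

// Acquire fails fast if the user already holds the lock.
func (u *userLocks) Acquire(userID string) error {
	u.mu.Lock()
	defer u.mu.Unlock()
	if u.locked[userID] {
		return errBusy
	}
	u.locked[userID] = true
	return nil
}

// Release must be called after the API finishes (defer it!).
func (u *userLocks) Release(userID string) {
	u.mu.Lock()
	defer u.mu.Unlock()
	delete(u.locked, userID)
}

func main() {
	locks := newUserLocks()
	fmt.Println(locks.Acquire("trong")) // <nil>
	fmt.Println(locks.Acquire("trong")) // busy error
	locks.Release("trong")
	fmt.Println(locks.Acquire("trong")) // <nil>
}
```

API upload would `Acquire` on entry and `defer Release`, so a concurrent
submit for the same user is rejected instead of racing.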
View File
@ -1,22 +1,21 @@
# Fonts
I always want the best fonts for my terminal, my text editor, my ... But I'm
not satisfied easily, so I keep trying new fonts. Prefer free fonts of course
:D
TLDR:
If you use macOS, use [San Francisco](https://developer.apple.com/fonts/) for
everything. Remember each time macOS releases a new version/new update, you
should download again to get (maybe) the latest version.
Otherwise:
- Use [JetBrains Mono](https://github.com/JetBrains/JetBrainsMono) for code.
- Use [Inter](https://github.com/rsms/inter) for everything else.
All images below are either official images I got from the fonts' websites or
my own screenshots. I'm too lazy to screenshot anw :D
## [JetBrains Mono](https://github.com/JetBrains/JetBrainsMono)
@ -28,15 +27,16 @@ I'm too lazy to screenshot anw :D
## [Iosevka](https://github.com/be5invis/Iosevka)
I often choose SS08 variant because I also love
[PragmataPro](https://fsd.it/shop/fonts/pragmatapro/) too.
- Support Vietnamese
- Support bold, italic
- Support ligatures
- Support display font (non mono)
Font is narrow, can display much more on a small screen. But too much
config/variant for ligatures scares me.
![img01](https://raw.githubusercontent.com/be5invis/Iosevka/v21.1.1/images/iosevka-ss08.dark.svg#gh-dark-mode-only)
@ -60,8 +60,8 @@ Looks good on my phone.
- No italic
- No ligatures
Font is wide, remember to edit line height to make it look good. I like its
bold, strong look.
![img02](https://github.com/evilmartians/mono/raw/main/documentation/martian-mono-character-set.png)
@ -72,8 +72,7 @@ I like its bold, strong look.
- No italic
- No ligatures
Font is small, can display much more on a small screen. I like its curved look.
## [Hermit](https://github.com/pcaro90/hermit)
@ -97,18 +96,19 @@ I love its wide look, also it's feel nostalgic.
- No bold, italic
- No ligatures
Feels like an 8-bit vibe, mono to death. But sometimes it's hard to read.
![img03](https://github.com/slavfox/Cozette/raw/master/img/sample.png)
## Murr fonts, but I don't use much
- [Fira Code](https://github.com/tonsky/FiraCode): best ligatures.
- [Cascadia Code](https://github.com/microsoft/cascadia-code): seems discontinued :(.
- [Input](https://input.djr.com/): seems discontinued :(.
- [Monoid](https://github.com/larsenwork/monoid): seems discontinued :(.
- [Fantasque Sans Mono](https://github.com/belluzj/fantasque-sans): Comic font
  vibe, seems discontinued :(.
- [mononoki](https://github.com/madmalik/mononoki): share same vibe with agave.
"Costing money" fonts, but I like it, will buy it if I have money:
View File
@ -1,7 +1,7 @@
# Games 4 fun
Just a little note about apps, games, settings for next time playing :D Please
have fun, of course :D
I have tested all software below on:
@ -11,8 +11,8 @@ If below links die, I will try to scrape Internet to get a new link.
## PS2 emulator
I use [PCSX2](https://github.com/PCSX2/pcsx2). Currently it supports macOS on
nightly builds, but it's good enough.
![pcsx2-000](https://raw.githubusercontent.com/haunt98/posts-images/main/pcsx2-000.png)
@ -25,23 +25,28 @@ Should enable cheats:
![pcsx2-001](https://raw.githubusercontent.com/haunt98/posts-images/main/pcsx2-001.png)
For cover art of games, please use
[xlenore/ps2-covers](https://github.com/xlenore/ps2-covers).
### [Resident Evil 4](https://wiki.pcsx2.net/Resident_Evil_4)
You can download it [here](https://cdromance.com/ps2-iso/resident-evil-4-usa/).
I recommend using
[HD textures](https://gbatemp.net/threads/resident-evil-4-hd-textures-update-2.615869/),
it's better for your eyes.
[Direct download link](https://www.mediafire.com/file/eyspelayfqtfz7a/R.4.hd.textures.xXthe.RockoXx.rar/file)
if the forum dies. Please give thanks to
[xXtheRockoXx](https://ko-fi.com/xxtherockoxx) for his work.
After downloading the HD textures, please extract then copy to the PCSX2 texture folder.
Remember to rename it to the serial name (SLUS-21134, ...), because different
regions have different serial names.
Settings below are for Resident Evil 4 only.
If using macOS, please switch Graphics/Renderer to Vulkan. For other OSes, I
haven't tested yet.
In Graphics/Rendering:
@ -65,8 +70,8 @@ In Graphics/Post-Processing:
![pcsx2-004](https://raw.githubusercontent.com/haunt98/posts-images/main/pcsx2-004.png)
For hacking, create a file with the content below in the PCSX2 cheat folder.
Remember to rename it to crc.pnach (013E349D.pnach, ...).
```txt
// Money
@ -88,10 +93,11 @@ Beautiful result!
## PS3 emulator
I use [RPCS3](https://github.com/RPCS3/rpcs3). Currently it supports macOS on
nightly builds.
Download
[PS3 Firmwares](https://www.playstation.com/en-us/support/hardware/ps3/system-software/).
Links to download games, ... for PS3:
View File
@ -35,7 +35,8 @@ I bought it from my friend.
- Plate: PC
- **Gasket mount**
- PCB: DZ60 RGB-WKL Hot-Swap
- **South facing** (mạch xuôi), but 2 switches in the top left, near USB-C port, are **North facing** (mạch ngược).
#### Layout
@ -57,14 +58,15 @@ My layout's **quirk/gotcha**:
#### Review
Things I don't like, also
[honest review from Reddit](https://www.reddit.com/r/HHKB/comments/xmcbkq/comment/j1625fy):
- The sides don't have any gaskets, so the keys on the far left and right bend
  down more. They will pop out of the hotswap PCB or the plate if pressed too
  hard.
- PCB:
- Not all keys are **South facing**.
- Can not config RGB per key for real. Only support RGB mode switching.
#### Support links
@ -105,9 +107,8 @@ I choose this switch because I prefer linear (please be silent).
### SKYLOONG Glacier Silent Red Switch
I was given this switch by my friend. Currently using it for alpha keys. Love
the silence.
![keeb-008](https://raw.githubusercontent.com/haunt98/posts-images/main/keeb-008.webp)
@ -146,8 +147,8 @@ I was given this switch by my friend, full mod (lube + film).
## Keycap
Currently, I use Akko 9009 Cherry Profile and EnjoyPBT 9009 Cherry Profile. The
space of EnjoyPBT 9009 is not straight so I use the Akko 9009 space.
I know I know, I love 9009 color too much.
View File
@ -4,13 +4,16 @@
Always have year, month, day in filename to easily sort it out.
If the file is uploaded by a user, add `user_id` to the filename, or some other
unique identifier depending on your business in which you require the upload.
Personally, I always add a timestamp and extra data to the filename to avoid
duplicates.
Example filename: `yyyy/mm/dd/{user_id}-{timestamp}-{extra}.ext`
Be careful with `/`, too much nested folder is no good for backup (as they say, idk if true or not, but less folder mean less complicated to me).
Be careful with `/`: too many nested folders are no good for backup (as they
say; idk if true or not, but fewer folders mean less complexity to me).
## Time variable

View File

@ -6,6 +6,8 @@ All configs are in [my dotfiles](https://github.com/haunt98/dotfiles).
## Trick or treat
Search current word: `*`
Search multiple words:
```vim
@ -59,7 +61,8 @@ Advance:
- `M`: middle of screen
- `L`: bottom of screen
- `CTRL-]`, `CTRL-T`: jump to tag/jump back from tag
- Support jump to Go definition with [fatih/vim-go](https://github.com/fatih/vim-go).
- Support jump to Go definition with
[fatih/vim-go](https://github.com/fatih/vim-go).
## Keymap
@ -85,7 +88,8 @@ vim.keymap.set("n", "q", ":q<CR>")
- `<Leader>f`: find files
- `<Leader>rg`: grep files
- `<Space>s`: find lsp symbols
- With [nvim-tree/nvim-tree.lua](https://github.com/nvim-tree/nvim-tree.lua), inside nvim-tree:
- With [nvim-tree/nvim-tree.lua](https://github.com/nvim-tree/nvim-tree.lua),
inside nvim-tree:
- `<C-n>`: toggle
- `<Leader>n`: locate file
- `a`: create
@ -106,7 +110,8 @@ vim.keymap.set("n", "q", ":q<CR>")
- `[D`, `]D`, `[d`, `]d`: diagnostic backward/forward
- `[Q`, `]Q`, `[q`, `]q`: quickfix backward/forward
- `[T`, `]T`, `[t`, `]t`: tree-sitter backward/forward
- Support more languages with [nvim-treesitter/nvim-treesitter](https://github.com/nvim-treesitter/nvim-treesitter)
- Support more languages with
[nvim-treesitter/nvim-treesitter](https://github.com/nvim-treesitter/nvim-treesitter)
- With mini-comment
- `gcc`: comment/uncomment current line
- `gc`: comment/uncomment selected lines
@ -116,7 +121,8 @@ vim.keymap.set("n", "q", ":q<CR>")
- `sr`: replace surround
- With mini-trailspace
- `<Leader>tr`: trim trailing whitespace
- With [nvim-treesitter/nvim-treesitter-textobjects](https://github.com/nvim-treesitter/nvim-treesitter-textobjects)
- With
[nvim-treesitter/nvim-treesitter-textobjects](https://github.com/nvim-treesitter/nvim-treesitter-textobjects)
- `vif`, `vaf`: select inner/outer function
- `vic`, `vac`: select inner/outer class
- With [neovim/nvim-lspconfig](https://github.com/neovim/nvim-lspconfig)
@ -130,6 +136,8 @@ vim.keymap.set("n", "q", ":q<CR>")
## References / Thanks
- vim docs:
- [Seven habits of effective text editing 2.0](https://moolenaar.net/habits_2007.pdf)
- neovim official docs:
- [neovim Motion](https://neovim.io/doc/user/motion.html)
- [neovim Tagsrch](http://neovim.io/doc/user/tagsrch.html)

View File

@ -2,11 +2,12 @@
## Discord (old) naming
The way Discord naming user like`Joker#1234` is so interesting.
If user A register first with name `ABCXYZ`, and later user B register with name `ABCXYZ` too, user A, user B will have name `ABCXYZ#1`, `ABCXYZ#2` respectively.
The way Discord names users, like `Joker#1234`, is so interesting. If user A
registers first with the name `ABCXYZ`, and user B later registers with
`ABCXYZ` too, user A and user B will get `ABCXYZ#1` and `ABCXYZ#2`
respectively.
Why it's interesting?
Each time I join a new platform, all names I want are taken :D
Why is it interesting? Each time I join a new platform, all the names I want
are taken :D
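That scheme fits in a few lines of Go (a toy model of the idea, not how Discord
actually implements it):

```go
package main

import "fmt"

// register hands out sequential discriminators per base name:
// the first ABCXYZ gets ABCXYZ#1, the next ABCXYZ#2, and so on.
func register(taken map[string]int, name string) string {
	taken[name]++
	return fmt.Sprintf("%s#%d", name, taken[name])
}

func main() {
	taken := map[string]int{}
	fmt.Println(register(taken, "ABCXYZ")) // ABCXYZ#1
	fmt.Println(register(taken, "ABCXYZ")) // ABCXYZ#2
}
```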
## Interesting website

View File

@ -12,8 +12,8 @@ So step by step:
- Render spec in local.
- Push to company host for other teams to see.
Only step 1 is manual, aka I write my API spec completely with my hand (no auto gen from code whatever).
The others can be done with tools:
Only step 1 is manual, aka I write my API spec completely by hand (no auto gen
from code whatsoever). The others can be done with tools:
```sh
# Convert

View File

@ -11,8 +11,8 @@ Imagine a chain of APIs:
- Calling API A
- Calling API B
Normally, if API A fails, API B should not be called.
But what if API A is **optional**, whether it successes or fails, API B should be called anyway.
Normally, if API A fails, API B should not be called. But what if API A is
**optional**: whether it succeeds or fails, API B should be called anyway.
My buggy code is like this:
@ -25,8 +25,9 @@ if err := doA(ctx); err != nil {
doB(ctx)
```
The problem is `doA` taking too long, so `ctx` is canceled, and the parent of `ctx` is canceled too.
So when `doB` is called with `ctx`, it will be canceled too (not what we want but sadly that what we got).
The problem is that `doA` takes too long, so `ctx` is canceled, and the parent
of `ctx` is canceled too. So when `doB` is called with `ctx`, it will be
canceled too (not what we want, but sadly what we got).
Example buggy code ([The Go Playground](https://go.dev/play/p/p4S27Su16VH)):
@ -75,14 +76,21 @@ As you see both `doA` and `doB` are canceled.
## The (temporary) solution
Quick Google search leads me to [context: add WithoutCancel #40221](https://github.com/golang/go/issues/40221) and I quote:
A quick Google search leads me to
[context: add WithoutCancel #40221](https://github.com/golang/go/issues/40221)
and I quote:
> This is useful in multiple frequently recurring and important scenarios:
>
> - Handling of rollback/cleanup operations in the context of an event (e.g., HTTP request) that has to continue regardless of whether the triggering event is canceled (e.g., due to timeout or the client going away)
> - Handling of long-running operations triggered by an event (e.g., HTTP request) that terminates before the termination of the long-running operation
> - Handling of rollback/cleanup operations in the context of an event (e.g.,
> HTTP request) that has to continue regardless of whether the triggering
> event is canceled (e.g., due to timeout or the client going away)
> - Handling of long-running operations triggered by an event (e.g., HTTP
> request) that terminates before the termination of the long-running
> operation
So beside waiting to upgrade to Go `1.21` to use `context.WithoutCancel`, you can use this [workaround code](https://pkg.go.dev/context@master#WithoutCancel):
So besides waiting to upgrade to Go `1.21` to use `context.WithoutCancel`, you
can use this [workaround code](https://pkg.go.dev/context@master#WithoutCancel):
```go
func DisconnectContext(parent context.Context) context.Context {
@ -116,7 +124,8 @@ func (ctx disconnectedContext) Value(key any) any {
}
```
So the buggy code becomes ([The Go Playground](https://go.dev/play/p/oIU-WxEJ_F3)):
So the buggy code becomes
([The Go Playground](https://go.dev/play/p/oIU-WxEJ_F3)):
```go
func main() {
@ -158,8 +167,8 @@ doA context deadline exceeded
doB
```
As you see only `doA` is canceled, `doB` is done perfectly.
And that what we want in this case.
As you see, only `doA` is canceled; `doB` finishes perfectly. And that's what
we want in this case.
## Thanks

View File

@ -2,7 +2,8 @@
The title is a joke.
But after digging a few holes on the wall, I think I should leave a few notes for the future me.
But after digging a few holes in the wall, I think I should leave a few notes
for the future me.
- Be careful when choosing a countersink drill bit (aka mũi khoan).
- I picked the **wood** type to drill into the wall, and ... it broke :(

View File

@ -212,7 +212,8 @@ xcode-select --install
### Linux
Fix black screen when open game in fullscreen in external monitor with [kazysmaster/gnome-shell-extension-disable-unredirect](https://github.com/kazysmaster/gnome-shell-extension-disable-unredirect)
Fix black screen when opening a game in fullscreen on an external monitor with
[kazysmaster/gnome-shell-extension-disable-unredirect](https://github.com/kazysmaster/gnome-shell-extension-disable-unredirect)
### Firefox

View File

@ -280,8 +280,10 @@ Commonly flags:
Be careful flags (need dry run if not sure):
- `-u`: skip if files in **dst** is already newer than in **src**, if you want to sync both ways
- `--delete`: delete files in **dst** if not exist in **src**, useful to sync dst with src
- `-u`: skip if files in **dst** are already newer than in **src**; useful if
you want to sync both ways
- `--delete`: delete files in **dst** that don't exist in **src**; useful to
make **dst** mirror **src**
## [F2](https://github.com/ayoisaiah/f2)

View File

@ -6,11 +6,14 @@ Just a collection/checklist while using Android phone.
All Android phones are bloated.
So first thing first, use [Universal Android Debloater GUI](https://github.com/0x192/universal-android-debloater).
So first thing first, use
[Universal Android Debloater GUI](https://github.com/0x192/universal-android-debloater).
## Apps
Use [F-Droid](https://f-droid.org/en/) with [Droid-ify](https://github.com/Droid-ify/client) to replace Google Play as much as you can.
Use [F-Droid](https://f-droid.org/en/) with
[Droid-ify](https://github.com/Droid-ify/client) to replace Google Play as much
as you can.
Daily:

View File

@ -4,7 +4,6 @@ Just to save my noted for future me using Redis again.
## Redis does not store creation time of keys
Why?
Because TODO
Why? Because TODO
https://stackoverflow.com/questions/9917331/time-of-creation-of-key-in-redis

View File

@ -5,7 +5,8 @@ My notes/mistakes/... when using cache (mainly Redis) from time to time
My default strategy is:
- Write to the database first, then to the cache second
- Read from cache first, if not found then read from database second, then re-write to cache
- Read from the cache first; if not found, read from the database, then
re-write the cache
```mermaid
sequenceDiagram
@ -38,10 +39,10 @@ sequenceDiagram
It's good for general cases, for example with CRUD action.
The bad things happen when cache and database are not consistent.
For example what happen if writing database OK then writing cache failed?
Now database has new value, but cache has old value
Then when we read again, we read cache first with old value, and that is disaster.
The bad things happen when the cache and the database are not consistent. For
example, what happens if writing to the database succeeds but writing to the
cache fails? Now the database has the new value, but the cache has the old
value. Then when we read again, we read the cache first and get the old value,
and that is a disaster.
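That read path (cache first, database fallback, re-write the cache) is the
classic cache-aside pattern; a minimal Go sketch (the `Store` interface and all
names here are mine, not from any real Redis client):

```go
package main

import "fmt"

// Store is the minimal interface both the cache and the database satisfy.
type Store interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// mapStore is an in-memory stand-in for either backend.
type mapStore map[string]string

func (m mapStore) Get(key string) (string, bool) { v, ok := m[key]; return v, ok }
func (m mapStore) Set(key, value string)         { m[key] = value }

// read goes cache first, falls back to the database on a miss,
// then re-writes the cache so the next read is a hit.
func read(cache, db Store, key string) (string, bool) {
	if v, ok := cache.Get(key); ok {
		return v, true
	}
	v, ok := db.Get(key)
	if !ok {
		return "", false
	}
	cache.Set(key, v)
	return v, true
}

func main() {
	cache, db := mapStore{}, mapStore{"user:1": "joker"}
	v, _ := read(cache, db, "user:1") // miss: falls back to db, fills cache
	fmt.Println(v)
	v, _ = read(cache, db, "user:1") // hit: served from cache
	fmt.Println(v)
}
```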
## Thanks