diff --git a/docs/2022-06-08-backup.html b/docs/2022-06-08-backup.html
index df3ca3c..e83dc27 100644
--- a/docs/2022-06-08-backup.html
+++ b/docs/2022-06-08-backup.html
@@ -11,4 +11,5 @@
restic -r rclone:remote:PrivateData backup PrivateData
# Cleanup old backups
restic -r rclone:remote:PrivateData forget --keep-last 1 --prune
The next pieces of data are my passwords and my OTPs.
These are the things I'm most scared to lose.
First things first: I enable 2-Step Verification for all of my important accounts, using both the OTP and phone methods.
I use Bitwarden for passwords (that is a long story: I went from Google Password Manager to Firefox Lockwise and then settled down with Bitwarden) and Aegis for OTPs.
The reason I chose Aegis over Authy (I used Authy for a long time, but Aegis is definitely better) is that Aegis lets me export all my OTPs to a single file (which can be encrypted), so I can transfer or back them up easily.
As long as Bitwarden provides free password storage, I use all of its apps and extensions so that I can easily sync passwords between laptops and phones.
The one thing I need to keep in my head is the Bitwarden master password.
With Aegis, I export the data, sync it to Google Drive, and also store it locally on my phone.
The main problem here is the OTPs: I cannot store all of my OTPs in the cloud alone.
If I want to access my OTPs in the cloud, I have to log in, and logging in requires an OTP; this is a circle, my friends.
I have several strategies prepared in case something strange happens to my devices.
If I lose my laptops, one or all of them, I do not panic as long as I have my phone.
The OTPs are on it, the passwords are in the Bitwarden cloud, and the other data is in Google Drive, so nothing is lost.
If I lose my phone but not my laptops, I use the OTPs stored locally on my laptops.
In the worst case, I lose everything: my laptops and my phone.
The first step is to recover my SIM, then log in to my Google account using the password and an SMS OTP.
After that, I log in to my Bitwarden account using the master password and the OTP from Gmail, which I opened in the previous step.
This guide will be updated regularly, I promise.
Feel free to ask me via email or Mastodon.
diff --git a/docs/2022-06-08-dockerfile-go.html b/docs/2022-06-08-dockerfile-go.html
index a17c58f..7a1486a 100644
--- a/docs/2022-06-08-dockerfile-go.html
+++ b/docs/2022-06-08-dockerfile-go.html
@@ -32,4 +32,5 @@
COPY . .
COPY --from=builder /build/app /app
ENTRYPOINT ["/app"]
Finally, I copy the app binary into the Distroless base image.
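Putting the pieces together, the whole multi-stage Dockerfile might look like this (a sketch: the Go version, module layout, and build flags are my assumptions, not taken from the post):

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.18 AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o app .

# Final stage: only the binary goes into the Distroless image.
FROM gcr.io/distroless/static
COPY --from=builder /build/app /app
ENTRYPOINT ["/app"]
```

Because the final image contains only the binary, there is no shell or package manager to attack, and the image stays tiny.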
There are many use cases where we need a unique ID.
In my experience, I have only encountered 2 cases:
In my Go universe, there are some libs to help us with this:
The ID is used only for tracing and logging.
If the same ID is generated twice (the possibility is very small, but not 0), honestly I don't care.
When I use that ID to search the logs, if it turns up more than the things I care about, it still does no harm to me.
My choice for this use case is rs/xid, because it is small (it does not take up too much of the log line) and copy-friendly.
Why don't I use an auto-increment key as the primary key?
The answer is simple: I don't want to write database-specific SQL.
SQLite has syntax that differs from MySQL, PostgreSQL, and so on.
Any logic I can move from the database layer to the application layer, I will.
In the past and present, I have used google/uuid, specifically UUID v4.
In the future I will look at segmentio/ksuid and oklog/ulid (with trial and error, of course).
Both of those are sortable, but google/uuid is not.
The reason I'm hesitant is that the database is a sensitive subject, and I need more testing and battle-tested proof before trusting those libs.
I am thinking about adding a prefix to the ID to identify which resource the ID represents.
Run make gen to have fun, of course.
Remember that grpc-ecosystem/grpc-gateway, envoyproxy/protoc-gen-validate, and kei2100/protoc-gen-marshal-zap are optional, so feel free to delete them if you don't use them.
If you use vendor:
Replace buf generate with buf generate --exclude-path vendor.
Replace buf format -w with buf format -w --exclude-path vendor.
If you use grpc-gateway:
Replace import "third_party/googleapis/google/api/annotations.proto"; with import "google/api/annotations.proto";.
Remove security_definitions and security in option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger).
The last step is to delete prototool.yaml.
If you are not migrating but starting from scratch:
Run buf lint to make sure your proto is good.
Run buf breaking --against "https://your-grpc-repo-goes-here.git" to make sure that each time you update the proto, you don't break backward compatibility.
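For reference, a minimal buf.gen.yaml for this setup could look like the following (the plugin list and output paths are my assumptions; keep only the plugins you actually use):

```yaml
version: v1
plugins:
  - name: go
    out: .
    opt: paths=source_relative
  - name: go-grpc
    out: .
    opt: paths=source_relative
  # Optional; delete if you don't use grpc-gateway.
  - name: grpc-gateway
    out: .
    opt: paths=source_relative
```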
The difference is c.GetUser() -> c.User.Get().
For example, we have a client which connects to a bank.
There are many functions, like GetUser, GetTransaction, VerifyAccount, ...
So we split the big client into many children, where each child handles a single aspect, like user or transaction.
My concern is that we replace an interface with a struct which contains multiple interfaces, aka the children.
I don't know if this is the right call.
This pattern is used by google/go-github.
Why?
See for yourself.
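A minimal sketch of the parent/children layout (the bank client and its methods are hypothetical, modeled loosely on the google/go-github style):

```go
package main

import "fmt"

// common carries shared state (base URL, HTTP client, auth, ...).
type common struct {
	baseURL string
}

// UserService and TransactionService each handle a single aspect.
type UserService struct{ common }
type TransactionService struct{ common }

// BankClient groups per-resource child services instead of one big interface.
type BankClient struct {
	User        *UserService
	Transaction *TransactionService
}

func NewBankClient(baseURL string) *BankClient {
	c := common{baseURL: baseURL}
	return &BankClient{
		User:        &UserService{c},
		Transaction: &TransactionService{c},
	}
}

func (s *UserService) Get(id string) string {
	return fmt.Sprintf("GET %s/users/%s", s.baseURL, id)
}

func (s *TransactionService) Get(id string) string {
	return fmt.Sprintf("GET %s/transactions/%s", s.baseURL, id)
}

func main() {
	c := NewBankClient("https://bank.example.com")
	// c.GetUser("42") becomes c.User.Get("42").
	fmt.Println(c.User.Get("42"))
}
```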
Currently there are 2:
I still need to clarify why this happens; idk :(
VARCHAR or TEXT?
Prefer VARCHAR if you need to query the column (and, of course, use an index), and make sure the size of the value will never hit the limit.
Prefer TEXT if you don't care and just want to store something.
Please read the docs about online DDL operations before doing anything online (keeping the database running while you update it, for example creating an index, ...).
venv
Feel free to ask me via email or Mastodon.
diff --git a/docs/2022-10-26-reload-config.html b/docs/2022-10-26-reload-config.html
index b90d6fa..eaa79f1 100644
--- a/docs/2022-10-26-reload-config.html
+++ b/docs/2022-10-26-reload-config.html
@@ -42,4 +42,5 @@ other_service -> other_service: do other business
deactivate other_service
@enduml
Config storage can be any key-value storage or database, like etcd, Consul, MySQL, ...
If the storage is a key-value storage, maybe there is an API to listen for config changes.
Otherwise we should create a loop that fetches all config from storage at some interval, for example every 5 minutes.
Each other_service needs to read config from its own memory, not hit storage.
So there is some delay between the upstream config (config in storage) and the downstream config (config in other_service), but maybe we can forgive that delay (???).
Pros:
Config can be dynamic; a service does not need to restart to apply new config.
Each service only keeps 1 connection to storage to listen for config changes, instead of hitting storage on each request.
Cons:
Each service depends on storage.
We must handle storage failure.
This is where I dump my thoughts.
Feel free to ask me via email or Mastodon.