Transforming business requirements into action usually involves:
After successfully doing all of that, the next step is:
In ZaloPay, each team has its own responsibilities/domains, i.e. many different services.
Ideally, each team can choose its own backend tech stack, but in practice it mostly boils down to Java or Go. Some teams use Python for scripting, data processing, ...
Example: UM (the User Management Core team) has 10+ Java services and 30+ Go services.
The question is: for each new business requirement, what should we do?
Example: the business requirement says: we must match/compare user eKYC data with bank data (name, date of birth, ID, ...).
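To make the requirement concrete, here is a minimal sketch of field-by-field matching. The `Record` struct, its field names, and the normalization rule are all assumptions for illustration; real matching rules (fuzzy name matching, diacritics, etc.) come from the business requirement.

```go
package main

import (
	"fmt"
	"strings"
)

// Record is a hypothetical shape shared by the eKYC side and the bank side.
type Record struct {
	FullName string
	DOB      string // e.g. "2000-01-31"
	IDNumber string
}

// normalize lowercases and collapses whitespace so that
// "NGUYEN  VAN A" and "nguyen van a" compare equal.
func normalize(s string) string {
	return strings.Join(strings.Fields(strings.ToLower(s)), " ")
}

// Match compares the eKYC record against the bank record field by field.
func Match(ekyc, bank Record) bool {
	return normalize(ekyc.FullName) == normalize(bank.FullName) &&
		ekyc.DOB == bank.DOB &&
		ekyc.IDNumber == bank.IDNumber
}

func main() {
	ekyc := Record{FullName: "NGUYEN  VAN A", DOB: "2000-01-31", IDNumber: "0123456789"}
	bank := Record{FullName: "nguyen van a", DOB: "2000-01-31", IDNumber: "0123456789"}
	fmt.Println(Match(ekyc, bank)) // prints: true
}
```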
How do services communicate with each other?
The first is through an API. This is the direct way: you send a request, then you wait for the response.
HTTP: GET/POST/...
Example: TODO: show API image
gRPC: uses a proto file as the contract.
Example: TODO: show proto file image
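As a stand-in for the image, here is what such a contract can look like. The service name, package, and fields are hypothetical; the real contract is owned by the team that exposes the API.

```proto
syntax = "proto3";

package user.v1;

// Hypothetical contract for illustration.
service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}

message GetUserRequest {
  int64 user_id = 1;
}

message GetUserResponse {
  int64 user_id = 1;
  string full_name = 2;
  string dob = 3; // e.g. "2000-01-31"
}
```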
There are no hard rules on how to design APIs, only some best practices, like REST, ...
The correct answer will always be: "It depends." It depends on:
Why do we use HTTP for client-server and gRPC for server-server communication?
The second way is through a message broker; the most well-known is Kafka.
The main idea is decoupling.
Imagine service A needs to call services B, C, D, E after doing some action, but B just died. We must handle B's errors gracefully if B is not that important (i.e. it does not affect A's main flow). Now imagine not just one B, but many: B1, B2, B3, ..., Bn. This gets depressing fast.
A message broker is a way to detach B from A.
A dumbed-down explanation: each time A does something, A produces a message to the message broker, then A forgets about it. B1, B2, ... can then consume A's messages if they want and do something with them; A does not know, and does not need to know, about any of it.
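The publish-and-forget idea can be sketched with a tiny in-process broker. This is only a stand-in for Kafka (a real broker persists messages and delivers them asynchronously); the point is that A publishes without knowing who, if anyone, is listening.

```go
package main

import "fmt"

// Broker is a toy in-process stand-in for Kafka, just to show decoupling.
type Broker struct {
	subscribers []func(msg string)
}

// Subscribe registers a consumer; the producer never sees this list.
func (b *Broker) Subscribe(handler func(msg string)) {
	b.subscribers = append(b.subscribers, handler)
}

// Publish fans the message out to whoever subscribed.
// A real broker would do this asynchronously and durably.
func (b *Broker) Publish(msg string) {
	for _, h := range b.subscribers {
		h(msg)
	}
}

func main() {
	broker := &Broker{}

	// B1 and B2 consume independently; deleting one does not touch A.
	broker.Subscribe(func(msg string) { fmt.Println("B1 got:", msg) })
	broker.Subscribe(func(msg string) { fmt.Println("B2 got:", msg) })

	// A does its work, publishes, and forgets.
	broker.Publish("user_created:42")
}
```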
Pro tip: use proto files to define your models (if you can) so you can take advantage of tooling that detects breaking changes.
You should know about DRY, SOLID, KISS, and design patterns. The basics are being able to tell which is which when you read code. Truly understanding them means knowing when to use them and when not to.
All of the above are industry standard.
Business moves fast, so a feature may be implemented today and thrown out the window tomorrow (like A/B testing: one variant is chosen, the other says goodbye). So how do we adapt? The problem is to detect which code/functions are likely stable and resistant to change, and which are likely to change.
For each service, I often split the code into 3 layers: handler, service, repository.
The handler layer almost never changes. The repository layer rarely changes. The service layer changes daily; this is where I spend most of my time.
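A minimal sketch of the three layers, assuming a made-up user-greeting feature. The handler only translates between transport and the service; the service holds the business logic; the repository only touches data (here an in-memory map stands in for MySQL).

```go
package main

import "fmt"

// Repository layer: data access only. Rarely changes.
type UserRepository interface {
	GetName(id int64) (string, error)
}

type inMemoryRepo struct{ names map[int64]string }

func (r inMemoryRepo) GetName(id int64) (string, error) {
	name, ok := r.names[id]
	if !ok {
		return "", fmt.Errorf("user %d not found", id)
	}
	return name, nil
}

// Service layer: business logic. Changes daily.
type UserService struct{ repo UserRepository }

func (s UserService) Greeting(id int64) (string, error) {
	name, err := s.repo.GetName(id)
	if err != nil {
		return "", err
	}
	return "Hello, " + name, nil
}

// Handler layer: transport glue (decode request, encode response).
// Almost never changes.
func handleGreeting(svc UserService, id int64) string {
	out, err := svc.Greeting(id)
	if err != nil {
		return "error: " + err.Error()
	}
	return out
}

func main() {
	svc := UserService{repo: inMemoryRepo{names: map[int64]string{42: "alice"}}}
	fmt.Println(handleGreeting(svc, 42)) // prints: Hello, alice
}
```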
The previous question can be asked in many ways:
My answer: just as the message broker introduces decoupling between services, we want loosely coupled code. That means two functions that do not share the same business logic can each be deleted without breaking the other.
For example, we can send notifications to users via SMS, Zalo, or in-app notification (3 providers). They are all independent features that serve the same purpose: alerting the user about something. What happens if we add providers or remove some? Existing providers keep working as usual, and new providers should behave properly too.
So we have a send-notification abstraction, which each provider implements; treat each provider like a module (think Lego) that can be plugged in and used right away.
And when we no longer need to send notifications at all, we can delete the whole thing, all providers included, without affecting the main flow.
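A sketch of that plug-and-play abstraction in Go. The `Notifier` interface and provider names mirror the example above but are otherwise made up; the point is that adding or removing a provider touches neither the other providers nor the caller.

```go
package main

import "fmt"

// Notifier is the send-notification abstraction; each provider is a
// plug-and-play module behind it.
type Notifier interface {
	Send(userID int64, msg string) error
}

type SMSNotifier struct{}

func (SMSNotifier) Send(userID int64, msg string) error {
	fmt.Printf("SMS to %d: %s\n", userID, msg)
	return nil
}

type ZaloNotifier struct{}

func (ZaloNotifier) Send(userID int64, msg string) error {
	fmt.Printf("Zalo to %d: %s\n", userID, msg)
	return nil
}

// NotifierFunc adapts a plain function into a Notifier (handy for tests).
type NotifierFunc func(userID int64, msg string) error

func (f NotifierFunc) Send(userID int64, msg string) error { return f(userID, msg) }

// NotifyAll fans out to whatever providers are plugged in; one broken
// provider must not break the rest, and the caller never changes.
func NotifyAll(providers []Notifier, userID int64, msg string) {
	for _, p := range providers {
		if err := p.Send(userID, msg); err != nil {
			fmt.Println("provider failed:", err)
		}
	}
}

func main() {
	NotifyAll([]Notifier{SMSNotifier{}, ZaloNotifier{}}, 42, "Your payment succeeded")
}
```

Deleting the whole feature means deleting the interface and its providers together; nothing in the main flow depends on their internals.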
Testing is not a way to find bugs; testing is a way to make sure that what we coded is actually what we think/expect.
The best-case scenario is testing against real dependencies (real services, real Redis, real MySQL, real Kafka, ...), but that is not easy to set up yourself.
The easier way is to use mocks. Mock all dependencies and test every edge case you can think of.
TODO: Show example
TODO: Cache strategy, async operation
TODO: Scale problem
TODO: Take care incident