
I've worked at a place that architected hundreds of microservices pretty well, in a similar way to how Uber apparently does.

Q1 (perf): these tools exist; the buzzword phrase is "distributed tracing". The relationships between services aren't explicitly defined for the tooling; they're inferred from the calls themselves. Visualize a network call as a call stack, where each service is a level in the stack. Jaeger (a CNCF project addressing distributed tracing) was coincidentally started by Uber.
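A minimal sketch of the "network call as a call stack" idea: each unit of work is a span, child spans share the parent's trace ID, and tooling reassembles the tree afterwards. The service names and field layout here are hypothetical; real tracers like Jaeger propagate these IDs in RPC headers rather than via in-process parent pointers.

```python
import time
import uuid

class Span:
    """One level of the distributed 'call stack': a timed unit of work.

    Toy model of what a tracer records. A real tracer would also record
    an end time, tags, and ship spans to a collector asynchronously.
    """
    def __init__(self, name, parent=None):
        self.name = name
        # Every span in one request shares the root's trace_id.
        self.trace_id = parent.trace_id if parent else uuid.uuid4().hex
        self.span_id = uuid.uuid4().hex[:16]
        self.parent_id = parent.span_id if parent else None
        self.start = time.time()

# A request to `checkout` that calls `payments`, which calls `fraud-check`:
root = Span("checkout")
child = Span("payments", parent=root)
grandchild = Span("fraud-check", parent=child)

# All three spans share one trace_id, so the tree can be inferred
# later without any explicitly declared service relationships.
assert root.trace_id == child.trace_id == grandchild.trace_id
assert grandchild.parent_id == child.span_id
```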

Q2 (stubs): In my experience, mocked responses get you a long, long way. Typically the API response type that you're mocking is generated from a protobuf (or Thrift, OpenAPI, etc.) file. If your dependency changes that type in a way that breaks your test, the CI platform will let them know before the change lands.

If it's a more subtle change (like, it used to deterministically return 18 and now it deterministically returns 20), it's really on the service owners to communicate changes and grep the code base before making the change.

Q3 (logging/metrics): Typically by using a shared "logging" lib and "metrics" lib for each language. Every service will typically be a gRPC service, and will accordingly export a standardized, generated-from-protobufs set of metrics to Prometheus by default.
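To make "standardized metrics by default" concrete, here's a toy shared-lib counter that emits Prometheus text exposition format. This is a sketch only: a real shared lib would wrap an official Prometheus client and register the standard RPC metrics (request counts, latency histograms) automatically in its gRPC interceptors, and the metric name below just mimics the common convention.

```python
from collections import defaultdict

class Metrics:
    """Toy shared metrics lib emitting Prometheus text format."""
    def __init__(self):
        self.counters = defaultdict(float)

    def inc(self, name, labels, value=1):
        # Sort labels so the same label set always maps to one series.
        key = (name, tuple(sorted(labels.items())))
        self.counters[key] += value

    def expose(self):
        # Render every series as `name{labels} value`, one per line.
        lines = []
        for (name, labels), value in sorted(self.counters.items()):
            label_str = ",".join(f'{k}="{v}"' for k, v in labels)
            lines.append(f"{name}{{{label_str}}} {value}")
        return "\n".join(lines)

m = Metrics()
m.inc("grpc_server_handled_total", {"method": "GetPrice", "code": "OK"})
m.inc("grpc_server_handled_total", {"method": "GetPrice", "code": "OK"})
print(m.expose())
# grpc_server_handled_total{code="OK",method="GetPrice"} 2.0
```

Because every service uses the same lib, dashboards and alerts can be templated once per company rather than once per service.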

Q4 (how to upgrade common libraries): this is definitely a tricky one. The answer is, basically, really carefully. Typically, you'll want your infrastructure to be compatible with both vX and vX+1, and give teams a deadline to cut over from vX to vX+1. The couple of weeks before that deadline usually involve a lot of cat-herding and handwringing.
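A sketch of the dual-compatibility window, using a log format migration as the example. `emit_log` and the version numbers are hypothetical; the point is that infrastructure accepts both formats at once, so teams can cut over independently before the deadline.

```python
import json

def emit_log(record, lib_version):
    """Render a log record in whichever format the caller's lib version uses.

    During the migration window the ingest side accepts both, so a team
    still on vX keeps working while early adopters move to vX+1.
    """
    if lib_version == 1:
        # vX: flat "key=value" text lines
        return " ".join(f"{k}={v}" for k, v in record.items())
    elif lib_version == 2:
        # vX+1: structured JSON
        return json.dumps(record, sort_keys=True)
    raise ValueError(f"unsupported log format version: {lib_version}")

print(emit_log({"msg": "hi"}, 1))  # msg=hi
print(emit_log({"msg": "hi"}, 2))  # {"msg": "hi"}
```

Once the deadline passes and the last team has cut over, support for version 1 gets deleted, which is the payoff for all the cat-herding.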


