
This is where go’s insistence on reinventing the wheel feels terribly misplaced. Every major debug format has a way to associate code locations with line numbers. Every major debug format also has a way to separate the debug data from the main executable (.dSYM, .dbg, .pdb). In other words, the problem that the massive pclntab table (over 25% of a stripped binary!) is trying to solve is already a well-trodden and solved problem. But go, being go, insists on doing things their own way. The same holds for their wacky calling convention (everything on the stack even when register calling convention is the platform default) and their zero-reliance on libc (to the point of rolling their own syscall code and inducing weird breakage).
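(For context: pclntab is the table the Go runtime consults to map program counters back to function/file/line, which is what powers panic traces and runtime.Caller even in stripped binaries. A minimal sketch of what it enables:)

  package main

  import (
      "fmt"
      "runtime"
  )

  func main() {
      // Resolved through the in-binary pclntab, with no external
      // debug files; panic stack traces are built the same way.
      pc, file, line, _ := runtime.Caller(0)
      fmt.Println(runtime.FuncForPC(pc).Name(), file, line)
  }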

Sure, the existing solutions might not be perfect, but reinventing the wheel gets tiresome after a while. Contrast this with Rust, which has made an overt effort to fit into existing tooling: symbols are mangled using the C++ mangler so that gdb and friends understand them, rust outputs nice normal DWARF stuff on Linux so gdb debugging just works, Rust uses platform calling convention as much as possible, etc. It means that a wealth of existing tooling just works.



I am not a fan of Go, and I also wish these things were true (and more[1], actually), but I find it hard to agree that its priorities are "terribly misplaced." Inside the context of Go's goals (e.g., "compile fast") and non-goals (e.g., "make it easy to attach debuggers to apps replicated a zillion times in Borg") these trade-offs make a lot of sense to me. Like: Go rewrote their linker, I think, 3 times, to increase the speed. If step 1 was to wade through the LLVM backend, I am not sure this would have happened. Am I missing something?

I love Rust, but Go is focused on a handful of very specific use cases. Rust is not. I don't know that I can fault Go for choosing implementation details that directly enable those use cases.

[1]: http://dtrace.org/blogs/wesolows/2014/12/29/golang-is-trash/


I'd check out the HN comments in response to the parent's [1]: https://news.ycombinator.com/item?id=8815778

Specifically, the top reply there is by rsc (tech lead for Go).


> non-goals (e.g., "make it easy to attach debuggers to apps replicated a zillion times in Borg")

But wouldn't it still be nice to have a standardized way to analyze post-mortem dumps across languages?


Google's anointed production languages used to be five: C++, Java, JavaScript, Python, and Go. Not much to reasonably standardize across, especially if a standardized solution ends up with more compromises than a custom one.


But DWARF uses less space than Go's native format. So inventing a custom "linetab" format seems like the compromise, not using DWARF.
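One way to eyeball the two side by side on Linux, if you want to check (assuming binutils objdump; actual sizes vary by program and Go version):

  $ go build -o app .
  $ objdump -h app | grep -iE 'gopclntab|debug_line'
    .gopclntab   ...   # Go's own table; kept even in stripped builds
    .debug_line  ...   # DWARF's line table; removable with strip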


I suspect that the format is again copy-pasted from somewhere in Plan9, and existing Plan9 tools for it are ported, too.


Insert standardization XKCD. It's been tried. And even so, you can still use the "standard" coredump tool to analyze a Go program's coredump with decent success.


It’s not fun, usually.


I'm usually really willing to forgive a lot of stuff when justified by genuinely different design goals or priorities.

Unfortunately, with Go I become less convinced with every passing year that they can keep getting away with this. They keep spinning obvious weaknesses as philosophical strengths, rather than admitting they're due to limited resources and backwards-compatibility constraints. Their use cases (servers, UNIX tools) aren't actually unusual or different from other teams'. It seems like every time I read about Go they've made what is simply a bad design decision that they later regret and explore fixing, but their rules cause them to keep compounding self-inflicted wounds. Compared to other language and runtime teams they just don't seem to know what they're doing.

Here are just some of the examples we've learned about so far.

Stack unwinding and hyper-inefficient calling conventions. Despite being designed for servers at Google, where throughput really matters, Go generates extremely bloated code that hardly uses registers, rather than emitting unwind metadata that's consulted only when needed and using a tuned calling convention. Tuned calling conventions are optimisations that date back literally decades in the C world, and yet Go doesn't have them!

This significantly reduces icache utilisation (hurting throughput) and means they can't use any existing tools, and yet the only benefit is that it made their compiler easier to write initially. They permanently increased the server costs of Go shops by taking a shortcut which benefited only the compiler authors. Now they struggle to fix it because they don't have a de-optimisation engine either, so changing calling conventions makes it harder to get useful stack traces and would break user-authored inline assembly (which is rare for Go's use cases).
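To make that concrete, here's roughly what a trivial function looks like under Go's old stack-only ABI (a sketch of `go tool compile -S` output on amd64; exact register names and offsets vary by version):

  // add.go
  package p

  func add(a, b int) int { return a + b }

  // Approximate ABI0 output: arguments and result all live on
  // the stack; no registers are used across the call boundary.
  //
  //   MOVQ  a+0(FP), AX       // load first argument from stack
  //   ADDQ  b+8(FP), AX       // add second argument from stack
  //   MOVQ  AX, ret+16(FP)    // write result back to the stack
  //   RET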

Compare to how the Java guys did it: the compiler generates highly optimised code and tables of metadata that let the runtime map register/stack state back to the programmer's de-optimised view of the program. Methods can be inlined aggressively because the VM can always undo it, so it doesn't get in the developer's way. That metadata is only consulted when a stack unwind is actually needed, which is rare. The rest of the time it sits cold in far-away RAM or swapped to disk. Calling conventions aren't exposed to the user and can be changed as needed, but if you need custom assembly you go via JNI, which uses the platform calling convention, and accept a slower function call.

Go's approach isn't some principled matter of design, as evidenced by their explorations of fixing it. They just didn't plan ahead.

Garbage collectors. The Go team originally tried to claim their GC was some sort of massive advance, a GC for the ages. A few years later they gave a presentation where they admitted they had explored replacing it several times because it's extremely inefficient, but are hamstrung by a (self-imposed) rule that they're only allowed one knob, and they don't want to make their compiler slower. Once again, the constraints of their compiler cause massive cost bloat for projects in production (where cost really matters).

Compare to Java: the default GC tries to strike a balance between throughput and latency, but if you need super low latency or super high throughput you can flip a switch to get that. The runtime can't know whether your task is a batch job like a compiler or a latency-sensitive HTTP server, so you can tell it; if you don't, it'll take a middle path. Given the huge costs of large server farms, this is sensible!
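For concreteness, the knobs in question (a sketch; the Java collector flags depend on JDK version, e.g. ZGC shipped with JDK 11):

  # Go: essentially one knob, a space/time trade-off
  GOGC=200 ./server      # collect less often, use more memory

  # Java: pick a collector to match the workload
  java -XX:+UseParallelGC -jar batch.jar    # throughput
  java -XX:+UseZGC -jar server.jar          # low pause times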

Compile time. Whenever you read about the Go team's choices it's apparent they are willing to mangle basically anything to get themselves an easier-to-write or faster compiler, hence the fact that it hardly optimises and generates massive binaries. But this isn't the only way to get fast compile times.

Compare to Java: compilation is done in parallel with program execution and only where it matters. During development, where you frequently start up and shut down programs, you're only waiting for the compiler frontend (javac), which is very simple and doesn't optimise at all, so it's fast like Go's is. When deployed to production the program automatically ends up optimised and running at peak performance, and you don't even need to flip an "in prod" switch like with a C compiler: the fact that the program is long-running is itself evidence that you're in prod and worth optimising.

This heuristic used to hurt a lot for small command-line tools, which usually don't need to be very fast. But you can now produce binaries with the GraalVM native-image tool that start as fast as (or even faster than) C programs do, so that's not a big deal any longer.
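For example, with GraalVM installed:

  javac Hello.java
  native-image Hello    # ahead-of-time compile to a standalone binary
  ./hello               # starts in milliseconds, no JVM warm-up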

Generics. Well, this one has been thrashed out so much I won't cover it again here. Suffice it to say that other languages have all concluded this is worth having and managed to introduce it in either their first versions, or in a backwards compatible way later.

Debugging. The ability to easily debug binaries and get reasonable stack traces is known to make program optimisation hard, because optimisation means re-arranging the program behind the developer's back. It's hard to put a breakpoint on a function that was deleted by the optimiser, or inlined. That's why C compilers have debug vs non-debug modes. Debug binaries can be significantly slower than release binaries, hence the difference. In fact in the past I've seen cases where debug-mode C binaries were so slow you couldn't use them because getting the program to the point where it'd experience issues took so long. And of course forget about debugging production binaries.

Golang faces the same problem but in effect just always runs every program in debug mode.

Compare to Java: see the above description of the de-optimisation engine and tables. If you request a stack trace or probe a method with a debugger, the program is selectively de-optimised so the part being inspected by the developer looks normal whilst the rest of the program continues running at full speed. This means you can attach debuggers to any program at any time, without flags, and you can even attach debuggers to production JVMs at any time. This feature doesn't impose any throughput hit (it does consume memory, but it's cold memory).

So we can see that repeatedly the Go guys have made choices that seem to have wildly wrong cost/benefit tradeoffs, tradeoffs that literally nobody else made, and almost always the root cause is their duplication of effort vs other open source runtimes. They use a variety of fairly condescending justifications for this like "our employees are too young to handle more than one flag", but when you dig in you find they've usually explored changing things anyway. They just didn't succeed.


I'm the author of the article linked in this entry and https://science.raphael.poss.name/go-calling-convention-x86-..., and this is IMHO the best comment in this thread.


Thanks!


This sentence got me:

> Instead of creating (or borrowing from Plan9) an “assembly language” with its own assembler, “C” compiler (but it’s not really C), and an entire “linker” (that’s not really a linker nor a link-editor but does a bunch of other stuff), it would have been much better to simply reuse what already exists.

"Simply reuse what already exists"...like the things that they reused, for example?


Well, go also uses its own assembler, and on top of that it's a kind of modified, garbage version of real ones. You can only justify so many reinventions of the wheel, yet they redid everything.


Did they actually redo everything, or does it just look that way from starting from the Plan9 toolchain? Which could also be said to be re-doing everything, but from a much earlier starting point.

IIRC Go started out shipping with a port of the Plan 9 C compiler and toolchain - it was bootstrapped by building the C compiler with your system C compiler, then building the Go compiler. Which, until re-written in Go circa 2015 (Go 1.5), was in Plan 9-style C. It all looks deeply idiosyncratic, but it was a toolchain the initial implementors were highly familiar with.


Perhaps the other assemblers would not provide the desired compilation speed?

Perhaps their IP requirements would not satisfy Google's lawyers?

Perhaps the Go devs would rather have more control over the development of the assembler, writing it from scratch to understand every design decision instead of inheriting thousands of unknown design decisions?

I don't know. Neither do others outside of the project.

I find these baseless micro-aggressions against Go misplaced and unfruitful.


> I don't know. Neither do others outside of the project.

> I find these baseless micro-aggressions against Go misplaced and unfruitful.

Huh? OK, then Go is perfect because it is developed in secret.

We are discussing here; I'm not "micro-aggressing" anyone. If I don't like a design / re-implementation decision, and I'm in the mood to share that opinion with this cyber-assembly, I do it. And I expect developers not to be offended by me having a technical opinion; and I expect third parties to be even less offended. And yes, it might be a bad opinion in some cases. I'm not even 100% sure it is not the case here, because like you said they could have had some kind of justification to do that. But I suspect it is extremely rare to have a good justification to rewrite an assembler, with really big quirks on top of that, at the time they did it.


> OK, then Go is perfect because it is developed in secret.

I didn't imply that at all. No need for a straw man.


Yes, you absolutely did, by stating (not just implying) that any criticism of the project that does not take its internal decision-making into account is "baseless micro-aggression".


I was referring specifically to this:

> You can only justify so many reinventions of the wheel, yet they redid everything.

Don't extrapolate what I write.


> micro-aggressions

We're adults here, can we please not talk like tumblr blog posters?


I totally agree with the above - was never able to click with Go _but_ I totally understand how reinventing the wheel has worked well for them.

The days when the Go project fired up were different from the days when Rust started. Rust made different tradeoffs by relying on LLVM, and those have advantages (free optimizations!) and disadvantages of their own.


The first releases of each were only 8 months apart, far from “different days”. The projects simply have different goals.


A lot of the insularity and weirdness comes from the Plan 9 heritage. Go's authors (Rob Pike, Ken Thompson, and Russ Cox) cannibalized/ported a bunch of their own Plan 9 stuff during initial development. For example, I believe the original compiler was basically a rewrite of the Inferno C compiler.

This is a large part of why Go is not based on GCC or LLVM, why it has its own linker, its own assembly language, its own syscall interface, its own debug format, its own runtime (forgoing libc), and so on. Clearly Go's designers were more than a little contrarian in their way of doing things, but that's not the whole answer.

Being able to repurpose existing code is an efficiency multiplier during the bootstrapping phase. But when bootstrapping is done, you have to consider the ROI of going back and redoing some things versus keeping a design that works pretty well. The Go team is undoubtedly aware of some of these issues, but probably doesn't consider them a priority.

In some cases the tools are a benefit. Go's compiler and linker are extremely fast, which I appreciate as a developer. A possible compromise would be to offer a slower build pipeline for production builds, which made use of LLVM and its many man-years of code optimizations.


Personally, I rather wish Rust would take this approach. Rust desperately needs a fast, developer-oriented compiler. Slow compile times are potentially Rust's biggest flaw, to the point where I find it keeps me off the language for anything non-trivial. Even better might be a Rust interpreter, so you'd get a REPL and fast development cycles.


This is why Cranelift is being worked on. There is also a Rust interpreter, miri.

I think starting with LLVM was the right decision (and one that I was primarily responsible for). Rust would lose most of its benefits if it didn't produce code with performance on par with C++. LLVM is not the fastest compiler in the world (though it's not like it's horribly slow either), but its optimization pipeline is unmatched. I don't see replicating LLVM's code quality as feasible without a large team and a decade of work. Middling code gen performance is an acceptable price to pay until we get Cranelift; the alternative, developing our own backend, would mean not being able to deploy Rust code at all in many scenarios.


Forgive me if this is ignorant, since I haven't done any benchmarks on this in a while, but doesn't GCC produce slightly faster code on average across a wide set of benchmarks compared to clang/LLVM?


Perhaps, but the advantages of a large third-party ecosystem around LLVM outweighed any performance differences between GCC and LLVM.


At least in these benchmarks that Phoronix runs from time to time (so they can at least be compared to their older selves), LLVM, in its Clang incarnation, is finally getting some parity in execution times with GCC

https://www.phoronix.com/scan.php?page=article&item=gcc-clan...

Of course, benchmarks, yada yada, but at least it's some sort of comparison axis where the improvement over the years is clear.


Thanks for the link. I was probably thinking about some older Phoronix benchmarks when I made my post.


Thanks for the pointer! I was unfamiliar with Cranelift and it seems like a promising tech. I'll keep an eye on it in hopes that once it is stable I'll be able to put together a development environment that allows for the fast turnaround I prefer.


I have not used Rust for anything very large, but using an editor that supports the Rust language server mitigates the compile-time problem. In VSCode it shows you the compiler warnings and errors as you are editing a file. There is a little lag in updating, but the workflow is faster than switching to a terminal to do a full compile.


Slow compilation is a developer problem. Big binaries are a problem for both developers and users.

Typically, developers can afford to throw more cores and more ram at their build machines.


Isn't Cranelift going to be usable for that?


If you need any other evidence for this, just look at GOPATH and similar. That was plan9 through and through; they wanted to delegate work to the filesystem. No need for a package manager or anything, just pull down URIs and they'll be where Go wants them to be.
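For reference, the whole scheme was a fixed workspace layout keyed by import path:

  $GOPATH/
    src/github.com/user/repo/   # `go get` checks code out here, by URL
    pkg/                        # compiled package archives
    bin/                        # installed binaries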


What are you talking about? Plan 9 doesn't even use $path. At least not consistently -- binaries live in /bin.


It's derived from the convention of /n/sources and /n/contrib and the like: sources mounted from network fileservers in various places, etc.

The git support was added to make it a bit easier outside plan9.


Go has had to walk back on some of its choices recently; most notably on platforms without a stable syscall ABI and a very strong push for dynamic linking (…so macOS) they link against the system libraries.


The only popular platform with a stable syscall ABI is Linux. This is a product of the historical accident that Linux doesn't control a libc, and the ensuing drama.

Almost everyone else doesn't have a stable ABI below the (C) linker level.


I don't think Linux actually guarantees syscall-level compatibility, so no need to single it out, it's just like everyone else.


It does - the syscalls are part of the official userspace interface which the Linux kernel promises not to break. They can add new syscalls, options or flags, but can’t break existing ones.


It very much does, explicitly, in a way that every other operating system does not.


It's still not an explicit guarantee. Actually, Linux the kernel doesn't guarantee or promise anything; it's only distros that try, and those that do promise some compatibility don't promise all that much. The best promise you can find is something like a promise of ABI compatibility within a couple of future releases.


You are extremely wrong, so it's probably worth thinking for a moment about how you became so misinformed and why you feel so strongly that you're not misinformed.

https://www.kernel.org/doc/Documentation/ABI/README

  Most interfaces (like syscalls) are expected to never change and always be available.

https://www.kernel.org/doc/Documentation/ABI/stable/syscalls

  This interface matches much of the POSIX interface and is based
  on it and other Unix based interfaces.  It will only be added to
  over time, and not have things removed from it.

This is literally an explicit promise of the Linux kernel; distros have no influence over the Linux syscall ABI whatsoever.

I think you're perhaps extremely confused about the difference between the userspace syscall -> kernel interface, and kernelspace API/ABI such as out-of-tree kernel modules might use. About the latter, yes, there are no API/ABI guarantees in vanilla Linux.


Expecting something is not a promise, just an attitude they want to have towards it at the moment. I think you are also confusing who is even in a position to promise what. The kernel is not an OS people can use, but only something an OS (distro) itself can use and, given the license, in any way it wants. And so the kernel cannot promise or force ABI compatibility or anything, really, on behalf of any OS that uses it. It's up to the OS, but OSes modify kernels, backport things, build with various ABIs, and so on. Look for example at the mess around the x32 ABI: some distros had it, some didn't, some had it and dropped it, some had it and promised ABI compatibility for some time, but Linus wants to drop it from the kernel (I don't know if he actually did), so they are in a pickle. Read Red Hat's application compatibility guide if you want an example of the best a Linux distro can promise wrt ABI compatibility.


> platforms without a stable syscall ABI and a very strong push for dynamic linking (…so macOS)

That's an even better description of Windows. The macOS system call table isn't officially stable, but it's at least slow to change. The Windows equivalent has been known to change from service pack to service pack.


Plan9/9front uses u.h and libc.h, along with nc and nl as the compiler and linker, where n is the architecture. That allows free cross-compiling right from the base install, as Plan9/9front has always done and as, of course, Go now does.


Small note, we don't use the C++ mangler (https://github.com/rust-lang/rfcs/pull/2603), and did the upstream work in GDB to get it to understand things. (There's also more work to do: https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Ais... )

That being said, yes, we see integration into the parent platform as being an important design constraint for Rust. I think Go made reasonable choices for what they're trying to do, though. It's all tradeoffs.


Indeed, though it's worth mentioning that the Rust mangling scheme is based on that of the Itanium C++ ABI.


For those who don't know, it's also worth mentioning that while it's called the "Itanium" C++ ABI (because it was designed originally for the Itanium), it's nowadays used for every architecture on Linux.


Not just Linux, either :-).


> to the point of rolling their own syscall code

It makes the Go concurrency mechanism possible; it's not just a whim.

Most importantly, this allows the scheduler to hook syscalls in order to schedule another goroutine. But it also makes it possible to control what happens during a syscall, since libc tends to do more than just calling the kernel in its syscall wrappers, and that extra work might not be thread-safe or might not play well with stack manipulations.
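A sketch of what that looks like from user code; the runtime does the same internally, bracketing each call with its entersyscall/exitsyscall scheduler hooks:

  package main

  import (
      "fmt"
      "syscall"
  )

  func main() {
      // On Linux this traps straight into the kernel, no libc in
      // between; while the call blocks, the scheduler can hand
      // this thread's work to other goroutines.
      var buf [64]byte
      n, err := syscall.Read(0, buf[:])
      fmt.Println(n, err)
  }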

This has never been a problem in my experience.


The problem is that there is exactly one OS that maintains the system call ABI as a stable API: Linux. On other systems, trying to invoke the system calls manually and bypassing the C wrapper opens you up to undefined behavior, and this was particularly problematic on OS X, which occasionally made assumptions about the userspace calling code that weren't true for the Go wrapper shell, since it wasn't the expected system wrapper library.


> On other systems, trying to invoke the system calls manually and bypassing the C wrapper opens you up into undefined behavior

It opens you up to a bit more behavior change in the future, but just a tiny bit more. No need to make a big deal out of it. It's a very normal thing in software. Nobody is going to promise you a perfectly stable interface to rely on forever, not even Linux. But syscalls are actually pretty easy to keep up with: they change slowly, and it's easy to detect the kernel version and choose appropriate wrappers with very little extra code.

The OS X problem is its own thing. Apple making breaking changes is nothing new. I use an Apple laptop super rarely and still got fed up with breaking changes, even while not upgrading past 13.6 at the moment.


Can any of the downvoters explain the disagreement? I actually maintain a small Go library of syscall wrappers for Linux and the BSDs and don't get why people are spreading FUD about it, as if it's a minefield. It's not; it's a non-issue even for someone working on it independently. I find it even easier than dealing with all the libc crap on those systems.


Windows syscall numbers change with every service pack release (see https://j00ru.vexillium.org/syscalls/nt/64/ for an incomplete table), and the interfaces themselves are not guaranteed to be stable in any way. ntdll is the only reasonable way to make syscalls on Windows. Trying to build your own syscall code on Windows is fragile and unmaintainable.

Linux maintains stable syscall interfaces. BSDs don't guarantee it, but generally don't change much. But macOS and Windows can and will change their interfaces.


The problem is that that one OS is right. System calls form an API, and it needs to be stable and managed. We (developers) have been working on this issue for years and have at least attempted solutions (e.g. semantic versioning), whereas most OS developers feel free to break them on a whim. It is a terrible practice that forces others to spend their time working around it.


I would note that system calls only form an external API if the developer says they are an external API. Which they are for Linux, but for other OSes the external API is a C library, kernel32, etc.

But right and wrong aside, there's the practical matter of reality. You can't simply pretend everything works how you want them to. At the end of the day you have to deal with how they actually work.


The Linux model is not the "right" one, it's a choice that they've made. Just like static linking isn't the "right" choice either, it's an option with its own drawbacks. Other OSes provide an approved, API stable layer to access the OS; it's just not the syscall layer.


The Linux approach reflects the social structure this project has been developed in.

The Linux project needs to be able to evolve independently of other projects, so they do just that.

This is a classic case of technical architecture following social structure, as it does most of the time.

> Systems calls form an API and it needs to be stable and managed.

In some cases (e.g. nearly all the other OSes...) system calls form an internal API, and they don't need to be stable; they actually don't even need to be accessible except to intermediate layers provided in a coordinated way.


No one here is disagreeing on the need for the operating system to provide a stable interface to applications: the question is where that stable interface should lie. Linux takes the most restrictive approach, asserting that the actual hardware instruction effecting the user/kernel switch is the appropriate boundary. OS X and Windows instead take the approach that there are C functions you call that provide that system call layer (these are not necessarily the POSIX API). OpenBSD and FreeBSD have the most permissive approach, placing it at an API, not ABI level (so the function calls may become macros to allow extra arguments to be added).

My preference is that the Windows/OS X model is where the boundary should belong.


There's no "right" about it. You're arguing that _having_ a stable ABI is important, and nobody is denying that. There are other ways to get a stable ABI. All of the other non-Linux OSes have one; they just guarantee it in a different place (generally in a userspace library that manages the syscall interface).


Linux needs stable syscalls because it basically offers no other interface apart from extensions to posix abstractions for calling into the kernel. Windows has a much larger, stable external C API that provides the same functionality Linux syscalls do such that no one uses anything else.


I agree with some of your points; but zero-reliance on libc is the reason why it's so easy to use Go in containers; and Docker is one of the primary reasons why Go is popular. It's what they have got right.


You could statically link in libc and get the same effect.


Statically linking libc is it’s own minefield. It can and is done but even if you statically link everything else you should almost always dynamically link against your platform’s libc.


Statically linking libc is harder than dynamically linking it, but certainly easier than rewriting it.


Good thing we don't want to use much of libc functionality from Go, so nobody needs to reimplement all of it. It's not like people would be begging to call strcpy. All that's needed is the syscalls.


Statically linking a libc seems equivalent to statically linking Go's standard library, but with a whole lot less effort (on the part of the Go developers, that is).


Why? I see no reason you couldn’t on platforms with a stable syscall ABI, other than the standard reasons against static linking.


It depends on the libc.

glibc doesn't really "want" to be statically linked. It can be done sometimes, depending on how it's used, the phase of the moon and so on, but breaks from time to time until it's repaired.

Some of the issues are fundamental. For instance the dynamic linker is a part of the C libraries. If you statically link libc and then dynamically load another shared library, you can end up with two copies of the C libraries loaded at once.

The C library expects various external files to be found on disk, in a format that isn't totally forwards/backwards compatible. That's reasonable when things are dynamically linked because the linker and C library abstract the developer from format changes. If you statically link you should really statically link the data files too, but the C toolchain has no provision for that.
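The name-service machinery is the classic example; a static link against glibc that touches the resolver produces a warning along these lines (wording varies by glibc version):

  $ gcc -static lookup.c -o lookup
  warning: Using 'getaddrinfo' in statically linked applications
  requires at runtime the shared libraries from the glibc version
  used for linking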


The ISO C standard library has no such expectations regarding external files.


Except glibc does not really support static linking if you want network support.


glibc is just one possible ISO C implementation with lots of extra stuff on top.


Says someone who has never actually tried to do that. You can statically link with Musl. But you can't really statically link with Glibc.


Technically speaking OP only suggested statically linking (a) libc, not glibc specifically, and musl is a libc.


Not sure what’s up with the combative tone? FWIW, I work as an operating systems developer. We sell several products incorporating a few in house libc variants. They are most certainly statically linked.


You can, it just might do some strange things when dealing with iconv.


You'd still have to get a static libc (not usually preinstalled), and possibly compile it for a different OS/architecture...


An issue here is that statically linking against glibc is generally regarded as a poor idea, and I've seen a couple of non-trivial programs that refuse to run with other libcs.


> Docker is one of the primary reasons why Go is popular.

Based on what data?


I've deployed all sorts of things which dynamically link libc in containers. This just isn't an issue in practice.


If you want static linking, use musl.


You'd supposedly sacrifice performance though.

At least that's the common complaint about Alpine Docker images... They're based on musl, and half of the community always complains about serious performance degradation.


Well if the performance difference matters, going to a GC language makes no sense.


Tell that to the Go enthusiasts. They're all claiming it to be peak performance, surpassing everything else.

Though even Java is faster in most benchmarks.


Except that Docker was originally written in Java by the former team that actually started the project, and nowadays contains modules written in OCaml taken from the MirageOS project, for the macOS and Windows variants of Docker.

So how much they got right regarding Docker's success and Go is a bit debatable.


Docker was not written in Java. It was shell scripts, python, then go. Dotcloud was primarily a python shop.

Perhaps you are thinking of a different project?


Probably Kubernetes, which was indeed Java in its formative years.


Kubernetes was never publicly available (open source) in any other language than Go. Early internal prototypes may have been in Java, but those bear no more resemblance to current Kubernetes than Borg does.


You are right, I got that wrong, too late to edit now.


Kubernetes was never Java.



Unless she's talking about an internal prototype that never saw a public repository, Kris is wrong.

The original prototype that saw a public repo was written by Joe and Craig in Python (mostly Craig IIRC) and lasted for all of about a week before they switched to Go.

The original crew of contributors, all from Google, came from a Java background and definitely wrote Java-flavored Go, but no version of Kubernetes was ever written in Java.

Source: I know all the principals and was a contributing member when Kubernetes was initially released.


Yup, watching that talk at last year's FOSDEM is what informed me of that part of the history also. Great talk, too!


Indeed, I got that wrong.


I think the parent was referring to using Go in Docker containers, not using Go for implementing Docker itself.

That said, I agree that Docker was the first major project written in Go many people were exposed to and probably had some influence.


I'm just wondering, how is your line about the code taken from the MirageOS project relevant? Nobody uses the Windows and macOS variants of Docker in production.


Windows shops do use Docker in production, there are plenty of them.

It is relevant in the sense that Docker isn't 100% Go nowadays.


As an aside, do they use Windows containers in that context? Otherwise why?


Yes, for example in Azure deployments.


The calling convention is a serious wtf. They're relying on store-load forwarding to make the stack as free as a register, but that's iffy at best and changes heavily between microarchitectures.


I'd assert the calling convention is strange by design: there is the underlying reality that, to support actual closures and lambdas in the Lisp sense (not the fake Java sense), as Go does, one can't use the C calling conventions. In particular, it's not true that a called function can expect to find bindings for its variables on a call stack, because of the upward-funarg issue: in the presence of true lambdas, and thus closures, some bound variables of a called function will necessarily NOT be found on the C call stack, because lambda (anonymous functions) dissociates scope from liveness.
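A concrete Go example of the upward-funarg case: n has to outlive counter's activation, so it cannot live in a C-style stack frame (the compiler's escape analysis moves it to the heap):

  package main

  import "fmt"

  func counter() func() int {
      n := 0 // escapes: survives after counter returns
      return func() int {
          n++
          return n
      }
  }

  func main() {
      c := counter()
      fmt.Println(c(), c()) // 1 2
  }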


What you describe is a non-problem: you can trivially spill upvars to the stack on-demand, as most compilers do, while keeping formal parameters in registers. Java needs upvars to be final because it doesn't have the concept of "reference to local variable", but that's just a limitation of the JVM, and one easily solved in other runtimes that very much can pass arguments in registers (e.g. .NET).


The Go developers have considered changing to a register-based calling convention[0][1].

I found these tickets a few weeks ago and they explained why the Go developers haven't yet made this change.

[0] https://github.com/golang/go/issues/18597

[1] https://github.com/golang/go/issues/27539


Interestingly, one of the suggestions to deal with issues in panic backtraces due to this change is to use DWARF.


I'm not familiar with the issue: what makes Java's lambdas/closures fake? Is it that bound variables need to be effectively final?


I don’t know if they’ve done anything new, but as originally implemented, they were inner classes.


The inner class gets copies of the variables, so imperative code that wants to reassign them isn't allowed because it probably won't do what you expected.

The goal is to avoid having to GC stack frames. But I'm not sure why they didn't create an inner class to hold the closed-over variables in non-final fields (moving them from the stack to the heap) for both the function and all closures it creates.

(Obligatory "doctor, it hurts when I use mutable state!")


Ah, gotcha. Honestly, I always use this as an example of one of the subtle design points that I really appreciate Java for.

Nitpick, but saying copies in Java can get confusing. Both primitives and references are bound by value. I'm sure you know, but for others: no objects are copied.

I always found this limitation had a reassuring regularity; it's the same way arguments are bound to function parameters (minus being final). Local variables being isolated from "other scopes" means that any inter-thread communication must be mediated through objects.


They were never implemented that way; rather, they make use of the invokedynamic bytecode.

https://youtu.be/Uns1dm3Laq4

Android Java is the one making use of anonymous inner classes instead.


I believe they still are, with the caveat that the bytecode is built at runtime for lambdas not compile time like regular inner classes.


Invokedynamic is not related at all to inner classes.


Maybe my memory is a little rusty or I glossed over a bit too much, but I was thinking of how HotSpot does lambdas, from here[0]. It seems to use the invokedynamic bootstrap method to spin an inner class at runtime. To be fair, it's a HotSpot thing and not in the JVM spec.

[0]: https://github.com/frohoff/jdk8u-jdk/blob/master/src/share/c...


Better to check out Brian's talk.

Not really, because the class file with invokedynamic bytecodes is supposed to work across all JVM implementations.


I think we agree? The bytecode is transferable because the classfile only contains an invokedynamic that calls the LambdaMetafactory for bootstrapping. The LambdaMetafactory is provided by the runtime JVM itself, so that linkage doesn't introduce an implementation dependence.

HotSpot just happens to spin an inner class at runtime.


Yes, we agree; I concede that I wasn't fully correct.


> Is it that bound variables need to be effectively final?

I believe this is it.


Even with store-load fw, you get a penalty (~3 cycle latency) over register accesses, no?


Yeah, but it's cheaper than a full L1 hit, which is where it would go if not for that.


I was trying to cite a typical full L1 hit latency... I thought store-load forwarding simply avoids having to flush the complete write buffer before the access is even possible, which risks taking far more than ~3 cycles. Now maybe it can be faster in some cases than an L1 hit, I don't know.

Edit: it seems that store-load forwarding is actually slightly slower than L1: https://www.agner.org/optimize/blog/read.php?i=854#854


I'm guessing that the reason was simply ease of porting 32-bit x86 assembly code to 64-bit.


Let's not forget their attempt at inventing yet another Asm syntax for x86, when there is already the horrible GNU/AT&T as well as the official syntax of the CPU documentation.
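For flavour, the same instruction in the three syntaxes:

  mov rax, 42       ; Intel syntax (CPU manuals, MASM, NASM)
  movq $42, %rax    # AT&T syntax (GNU as)
  MOVQ $42, AX      // Go/Plan 9 syntax: no % sigils, plus
                    // pseudo-registers like FP, SP, SB, PC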


Go's assembler syntax is inherited from the Plan 9 project, which started in the late 1980s and was first released in 1992.

For context, gcc was first released in 1987, i.e. about the same time that Plan 9 started.

Go authors didn't attempt to re-invent asm syntax. They re-used the work they did over 30 years ago.

And at the time Plan 9 happened it was hardly re-inventing anything either. It was still the time of invention.

References:

* https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

* https://en.wikipedia.org/wiki/GNU_Compiler_Collection


> And at the time Plan 9 happened it was hardly re-inventing anything either.

Intel's Asm syntax was defined in 1978 with the release of the 8086, and the 32-bit superset in 1985 with the 386. CP/M, DOS, and later Windows assemblers all used the official syntax.


Plan9 assembler syntax didn't start out on x86, and is kept the same across all platforms as much as possible.


The question remains, why not reuse the work that somebody else did even earlier, and that has a lot more adoption already?


Reinventing the wheel is sometimes a feature - using other people's stuff, you gain their features, but you inherit their bugs, their release timelines, whatever overhead they baked in which they thought was okay, etc. You lose the ability to customize and optimize because it's no longer your code...

It's all just tradeoffs in the end - I think golang is finding some success because they didn't make the same tradeoffs everyone else did.


I think that by doing everything their own way, they are not shackled to all of these dependencies - especially to some rusty old C++ compiler. That way, among other benefits, they get some very nice compiler speeds.


I installed golang the other day to check it out for the first time. For whatever reason, I chose to input the 'Hello world' program from golang.org by typing it in manually. As with most C/C++ code I would typically write, I put the brackets on their own lines.

Welp, so much for Go.


Like all opinionated formatters, you adjust to it, or you don't. I don't hate gofmt, other than tabs. Sweet jesus.


It's not about the brace format. It's about the mentality that went into the underlying design decision. The more you look into why they did it that way [1], the more dysfunctional their decisionmaking process sounds.

Basically, while (e.g.) Python's mandated formatting style arose from Guido van Rossum's philosophy of best programming practice, Go's mandated style arose from the fact that it was easier to implement from the point of view of compiler authors who had evidently never used a lexer before, much less written one.

[1] https://stackoverflow.com/questions/17153838/why-does-golang...


> the more dysfunctional their decisionmaking process sounds.

Can you expand? I can't see their dysfunctional decision-making from the link you provided.


The second answer?

"Go uses brace brackets for statement grouping, a syntax familiar to programmers who have worked with any language in the C family. Semicolons, however, are for parsers, not for people, and we wanted to eliminate them as much as possible. To achieve this goal, Go borrows a trick from BCPL: the semicolons that separate statements are in the formal grammar but are injected automatically, without lookahead, by the lexer at the end of any line that could be the end of a statement. This works very well in practice but has the effect that it forces a brace style."

Nothing about that is OK. Nothing about it makes sense.
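The effect is easy to reproduce (assuming a current Go toolchain; exact error wording varies by version):

  package main

  import "fmt"

  // The lexer inserts a semicolon after `main()` because `)` can
  // end a statement, so this parses as `func main();` followed by
  // a stray block.
  func main()
  {
      fmt.Println("Hello, world")
  }

  // $ go build hello.go
  // ./hello.go: missing function body
  // (plus a syntax error reported at the stray brace)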



