
The reasons given are reasons to stick with C89 forever. Change has risk, and there are advantages and disadvantages. If there were no advantages gained from C99, it wouldn't exist (people don't release language updates for no reason).

A much more interesting question is: if you were writing curl today, what would you do? If the answer is still 'C89', then we as a profession have to wonder why. Did we get it exactly right, with no lessons learned in the last 30 years, or is the fact that there are no better alternatives deeply depressing?



For systems programming, C89 is definitely a "sweet spot". It's relatively easy to write a compiler for and was dominant when system variety was at its highest, so it is best supported across the widest variety of platforms. Later C standards are harder to write compilers for and have questionable features like VLAs and more complicated macro syntax. C++ is a hideous mess. Rust is promising, and would probably be my choice personally, but it's also still fairly new and will limit your deployment platform options.

C89 is still a reasonable choice today. I don't think it's depressing. It's a good language, and hitting the sweet spot of both language design and implementation is really hard, so you'd expect to have very few options.


Re: Rust, Curl is modular enough that you don't need to rewrite it in Rust in order to enjoy some of its benefits, you can just tell Curl to use Hyper (Rust's de-facto HTTP lib) as a backend. For the past few years they've been working on getting the Hyper backend to pass the Curl test suite and they're down to five remaining tests, so perfect support looks to be imminent: https://github.com/orgs/hyperium/projects/2/views/1 (seanmonstar occasionally streams on Twitch if you'd like to watch him work on these).


Also, and this sort of blows my mind, but Rust is almost 10 years old. It is a pretty darn stable language, especially for greenfield projects like a new HTTP library. The better Rust gets at interop the more it will begin to eat the systems programming world IMO, and we all benefit, even if it is quietly doing it without much fanfare.


I was with you right up until “quietly doing it without much fanfare”.

Personal opinions of Rust aside, it has gained so much fanfare that “rewrite it in Rust” is now practically a meme.


The other side of that is the back end of what Rust folks are doing. There is a vocal segment that is doing a lot of surface-level things. But there are also people quietly building the language and toolchain up into something you can do true low-level embedded work with, while maintaining (most of) the guarantees of Rust. That is what I was alluding to. I don't think "rewrite it in Rust" is always smart or even productive.

edit: It is also worth exploring why and how a systems programming language has generated this much excitement in folks that are "rewriting it in Rust". These people are also in their way making it that much easier for everyone else to transition to Rust, proving these projects work just fine in Rust. I do agree it is a meme, but it is a good one for us all. As an infosec practitioner nothing could make me happier than seeing people excited about a language that eradicates one of the worst and most pernicious classes of C/C++ bugs.


The sad part is that this was inflicted by the industry on itself:

https://www.schneier.com/blog/archives/2007/09/the_multics_o...

> The combination of BASED and REFER leaves the compiler to do the error prone pointer arithmetic while having the same innate efficiency as the clumsy equivalent in C. Add to this that PL/1 (like most contemporary languages) included bounds checking and the result is significantly superior to C.


Thanks for the link!

One more data point backing my theory that we're in the middle(?) of a "computing dark age", where the biggest crap and nonsense dominates everything (and people don't even know how crappy everything is).

Do you think we will ever leave the dark age?

I mean before our AI overlords get in charge and kill and replace all the nonsense we've built, of course.


Not in our lifetimes.

This is a matter of quality, and like everything in computing, quality only matters when money or law is involved. So returning digital goods for a refund is one way to make companies take quality more seriously; another is stricter liability laws for when exploits occur in the wild.


It's not fair to judge an entire ecosystem full of extremely talented people by the vocal (and insufferable) 1%. Every group has them. What has become a meme has zero relationship to the quality of the thing.


I wasn’t judging anyone. I was just saying Rust isn’t exactly flying under the radar.


Rust is over 12 years old at this point as a publicly-available project (its development started internally sometime in the late 00s; it was publicized in summer of 2010).


Periodization is hard, especially with Rust, but if we're talking about reliability, counting time before 1.0 in 2015 doesn't feel right to me. Seven and a half years is still a long time :)


Agree, time since v1.0 is a very reasonable measure.


10 years and still an okish IDE experience… This is going to stay with Rust. The problem is not the tooling, but the language.


I don't get what you mean.

There are languages out there with much more advanced features than Rust, like e.g. Scala. But the IDE experience in Scala is not worse than with Java.

There is no fundamental problem with IDE support. It just takes some work with an advanced language.

(The only issue is languages which you need to write "backwards", like Haskell. But that's another story.)


I am talking about compilation speed.

I recently started working with Rust by contributing to projects like Rome/tools [1] and deno_lint [2]. My first impression of Rust is a bit frustrating: compilation takes tens of seconds or minutes. I find myself waiting in front of my IDE for type hints / go-to-def (often falling back to a text search). When I launch unit tests, I wait for rust-analyzer to finish its indexing, and then I wait again for the tests to compile…

The tools are now mature, and a lot of engineering work has gone into both the Rust compiler and rust-analyzer. I am afraid that the slow compilation of Rust is rooted in its inherent complexity.

[1] https://github.com/rome/tools

[2] https://github.com/denoland/deno_lint


> I am afraid that the slow compilation of Rust is rooted in its inherent complexity.

AFAIK that's not the case.

The problem here is that Rust has "issues" with separate compilation due to some language design decisions (which then obviously affect incremental compilation). The language wasn't built for it, and it is, and will continue to be, quite difficult to make this work somehow.

But that's less a problem with the complexity of the language as such.

My Scala example stands: Scala is also quite complex and not the fastest to compile. But after the build system and the compiler crunched the sources once (which may take many minutes on a larger code base) the IDE is very responsive. Things like type hints or go-to def are more or less instant. Code completion is fast enough to be used fluently. Edit-compile-test cycles are fast thanks to the fact that separate compilation considerations were part of the language design decisions. (That's for example why Scala has orphan type-class instances; which are a feature and a wart at the same time).

As I understand it, Rust's "compilation units" are actually crates. This is not very fine-grained, and I'd guess it's the source of the issues.

I would guess splitting code into a few (more) crates (which then need to depend on each other) may improve incremental build times. Things like not building optimized code during development also apply, of course, but I think cargo does this automatically anyway.

But I'm not an expert on this. Would need to look things up myself.

Maybe someone else has some proven tricks to share?

OK, a quick search yielded some useful results, so I'll share:

https://www.pingcap.com/blog/rust-huge-compilation-units/

https://fasterthanli.me/articles/why-is-my-rust-build-so-slo...

https://news.ycombinator.com/item?id=29742694


Thanks for the detailed answer and the pointers to resources!

In fact, I included the lack of compilation locality in the "inherent complexity of Rust". However, I agree that it could be considered separately.

In my experience with TypeScript (quite different, I admit), splitting into distinct compilation units may help. However, it does not solve the issue.

It would be great if Rust could deprecate some features in order to improve its compilation speed. I am not sure whether that is feasible…


I am unsure what you mean. Care to elaborate?


I think I followed. One aspect of a programming language is how easy it is to build a useful IDE for it, with code prediction, navigation, refactoring, etc. Java is relatively easy, Lua is very hard. Rust is somewhere in the middle, with macros being a complicating factor. https://rust-analyzer.github.io/blog/2021/11/21/ides-and-mac... discusses the problems specific to Rust much better than I could.


Go has one of the best compiled language IDE experiences out there. GoLand makes it feel easier than Python code :)


Can it find all implementations of an interface, or which interfaces a given type implements?


Yes to both. It is great :)


I'm unsure exactly what the above poster is trying to say; I generally find Rust development very pleasant with nothing but VS Code and rust-analyzer.

But... I'll admit there is one major stumbling block so far. Debugging iterator chains can be cumbersome because of the disconnect between the language and the compiled code. I've found myself stepping in and out of assembly more than I'd like. I assume this is the kind of problem that can be overcome with a nicer debugger though.


> I assume this is the kind of problem that can be overcome with a nicer debugger though.

I think this would be something that modern debuggers need to solve somehow in general.

There are more and more languages with a high amount of syntactic sugar, where the output being debugged doesn't have much in common anymore with the code as written.

Debuggers need to be aware of desugarings somehow.

But it makes no sense to implement this on a case-by-case basis for every language. We need next-generation debuggers! (But I have no clue how a "sugar-aware debugger" could be implemented; something in the direction of "source maps", maybe?)


See my other answer [1] to get more context :)

[1] https://news.ycombinator.com/item?id=33729406


So you're arguing C89 is better because it's easier to write compilers for? How is that relevant in this context? We're talking about whether migrating to C99 is better for the end user, not for the compiler writer.


They are arguing that C89 is better because it's supported on more platforms (curl is used on all sorts of oddball embedded systems), and that it's easier to get a new platform going with C89.


The requirements for writing curl are different from the requirements for writing other software. Just because C89 is a good choice for curl doesn't mean that C99 isn't a better choice for other things. The failure isn't in having revised a language, it's in thinking that all projects using the older version must upgrade. The idea that progress is linear is an illusion.


The question is more complicated than that: curl is so popular it is used on systems where c99 is not available. The question is how many of those exist, and at which point it's no longer worth supporting them.


Just a terminology rant: it is used in -builds- where c99 is not available.

Particularly the MSVC ecosystem is identified in TFA as being a late adopter.

Once built, C99 code can run wherever it wants.


> Once built, C99 code can run wherever it wants.

Is that true on the MSVC ecosystem?

Don't you have to compile separately for each `msvcrt` environment, since I thought they aren't binary compatible? And would a non-C99 msvcrt necessarily have an `snprintf()` implementation in its libc-equivalent DLL?


You can call code compiled with one msvcrXX from code compiled with a different msvcrXX, provided that you don't try passing things from one to the other (that is, no passing a pointer to a FILE structure, or even a file descriptor, since the file descriptor table lives in the msvcrXX rather than the kernel), and that you always free or reallocate memory using the same msvcrXX (that is, don't allocate memory and expect your caller to call free() on it; always provide a custom deallocation function for your objects).

This is possible because, unlike on Linux where function names are global, on Windows function names are scoped to the DLL, so you can have MSVCR71.DLL and MSVCR81.DLL loaded at the same time in the same process and they won't interfere with each other.


OK, but isn't the point of building a program that links to (say) MSVCR71.DLL that you're expecting to run it in an environment where (say) MSVCR81.DLL isn't available?

I don't see how that fixes the problem of possibly not having an snprintf() implementation on a system that doesn't have a C99-compatible MSVC runtime environment.

Did I miss an implication of your comment somehow?


If you compile your program against some MSVCRT then it's your job to make sure that MSVCRT is available on the machine where your program is installed, by delegating to its installer.

All supported MSVCRTs are installable on all supported Windows SKUs.


Yes, that's the thing I always forget: the way Windows deals with multiple incompatible versions of msvcrt is that every application ships its own copy of libc. And hopefully the installer is well-written enough to only copy it into place if it's newer than the newest release of the same major version already there, lest a random app re-introduce a bunch of security issues that should have been closed by the last security update, for every other application using the same msvcrt.

...and by "forget", I mean "block out due to trauma, because surely it can't be that stupid".


Supported is really doing a lot of heavy lifting there, isn't it?

Windows 7 and 8 haven't been supported in years, but are still pretty common in the wild.


> Windows 7 and 8 haven't been supported in years

Windows 7 hasn’t been supported for two years now, but Windows 8 EOS isn’t until January 2023.


> This is possible because, unlike on Linux where function names are global, on Windows function names are scoped to the DLL

This is also possible on Linux with linker scripts AFAIK.


Hmm, not really in the same way -- to do this with linker scripts you would need to rename some of the symbols in the library being consumed.

What you can use, to a limited extent, is dlmopen().

Shared objects on Linux are just really late-linked static objects, with fix-ups (hence the PIC requirements).

In macOS and Windows, there are hierarchies.


Linker scripts change whether or not symbols are added to a global symbol table for subsequent requests (i.e. "exported"). Though you don't even need a linker script to affect visibility, as both GCC and Clang provide a visibility function attribute, and you can change the default visibility with a simple compiler command-line switch.

dlopen permits you to control whether exported (externally visible) functions in a module become available to satisfy link dependencies in the application, such as subsequent module loads. See the dlopen flags RTLD_GLOBAL and RTLD_LOCAL.

dlmopen is for controlling the visibility of shared library dependencies pulled in by dlopen'd modules; RTLD_GLOBAL and RTLD_LOCAL only affect the immediate symbols in the module, not symbols from automatically loaded shared library dependencies. If you link the main application with OpenSSL (-lssl -lcrypto), or a prior module you dlopen'd pulled in OpenSSL as a dependency, then those OpenSSL symbols become available to satisfy requirements for subsequent dlopen'd modules. dlmopen allows you to create an entirely different symbol namespace for a module or modules, where symbol dependencies are only ever satisfied from that namespace, and exported (global) symbols, whether pulled in by dlopen or transitively via a shared library, are never visible outside that namespace.

None of these options directly map to the behavior of DLLs. DLLs fundamentally use different semantics, AFAIU. The closest behavior to DLLs might be DT_RUNPATH + dlmopen, but dlmopen use is explicit so not really the same thing. You could use ELF symbol versioning (maybe in combination with DT_SONAME and DT_RUNPATH) to accomplish the same thing as DLLs by effectively renaming all the symbols in a library (e.g. attaching a version component), but there aren't any tools around to help automate that, AFAIK; you'd have to generate linker scripts and it'd be a complex build. Much easier to just static link at that point.


For C, Windows has had a stable CRT (libc in Unix speak) for several years now, since Win10. There are still cross-runtime compatibility concerns with C++, but those shouldn't apply here.


> curl is so popular it is used on systems where c99 is not available.

What systems don't support a version of gcc that compiles c99 at this point?


Old systems in ports and airports, military systems, anything that has a 50-year shelf life...


How many old systems like that are connected to the internet and could actually use curl?


More than you would think, but curl is used to talk to the local network too, or inside a VPN as well.

Some hacks are quite crazy.

E.g., there is this very old marine broadcasting protocol, NMEA (https://en.wikipedia.org/wiki/NMEA_0183), which was designed so it could be transmitted over old-fashioned radio waves. You'll find it in some sonars, water sensors or AIS beacons. For this reason, despite looking more like a layer-4 protocol, it embeds its own packet format and checksum, all in ASCII, which clients are expected to parse.

Now of course, a lot of devices are still emitting their data in NMEA, and it's not uncommon to be able to just telnet or netcat (if UDP) into one to see the data flowing.

But after a while, people started to aggregate that data from its numerous sources into one single router, and expose this router for convenience through... HTTP over TCP/IP.

And now you have all those old computer towers (some still rocking CRT screens or Windows XP) doing long polling to get broadcast data over a protocol that was made for request/response, to read a payload in another protocol that was meant for radio equipment and hence requires manual consistency checks, all transported by yet another protocol doing its best to preserve packets.

And they say the spirit of hacking is dead :)

(sometimes I feel IoT or home-automation stacks look the same, honestly)

Of course, somewhere in there, there is a curl call. The question is therefore whether curl's author wants to support an upgrade path for such twisted use cases or not. I would say "nahhhh", but maybe curl had precisely the success it did because the author was ready to support it in crazy settings.


But couldn't you just cross-compile for such targets?

I think nobody really does any serious development on such old machines.


Maybe. Or maybe they have the only toolchain known to work installed on this single machine somewhere in the basement (seen in a healthcare corp), or they have a chain of trust that needs way too much effort to verify again (seen in the army), or they are not competent enough to do such a thing (seen in airports), or their whole stack is so old it can only run on this stuff (seen in ports), or their target is so exotic you can't cross-compile to it easily (seen in aerospace).

Again, I'm not sure that means curl should endorse such niche situations, but the modes of failure are numerous.

TL;DR: the world is complicated


I read this as: if there were a real need, in almost all cases you could cross-compile (the exotic target being the exception).

Whether this is feasible in an economic sense is another question. But technically it should be possible.


Quite many!


> If there were no advantages gained from C99, it wouldn't exist (people don't release language updates for no reason).

Why isn't it possible that the updates aren't as good as the people who wrote them thought they were?


VLAs certainly were proven to be a very bad idea, to the extent that Google paid the effort to remove all of them from the Linux kernel as a security measure.


> VLAs certainly were proven to be a very bad idea

VLAs aren't a problem. VLAs in C are.

But that's not because there is any issue with VLAs. The issue is that C concepts are stupid, but nobody fixes the roots of the issues.

If you put something of variable length (a VLA) into something of limited static length (the "stack"), it will explode. That's nothing new, and nothing special or exclusive to VLAs. Actually, exactly this is one of the main issues with C's bad design since inception: it does not do bounds checks (especially no static ones, as that would require proper dependent typing for safe VLAs). Out-of-bounds access will just "explode" as always in C (likely leaving a nice security crater).

To be honest, I don't get why we're still stuck with the stack/heap nonsense. There is no stack (or heap). There is only memory.

What would be much more interesting is direct control over the caches… Instead we still use the pure fantasy products "stack" and "heap", which are actually irrelevant (as they don't exist in the end).

There is nothing "special" about "stack" memory. That's just a very primitive region-based automatic memory allocator baked into the C runtime!

The whole "using registers" thingy in the context of the "stack" is also just fake by now. You don't use HW registers, but a virtualization of them presented to you by the VM that runs inside the CPU. So you don't control register allocation anyway! This could therefore be made completely transparent without any impact. (The VM inside the CPU does the actual register allocation fully automatically, presenting the "fake virtual ISA registers" to the outside world just to keep "legacy" code happy.)


As someone tangentially involved in this, I think this was misguided. But Linus also was not happy about code generation with VLAs.


> As someone tangentially involved in this, I think this was misguided.

Do you mean the removal of VLAs from Linux? Why do you think that was misguided?


> If there were no advantages gained from C99, it wouldn't exist (people don't release language updates for no reason).

That's not what the post said. He didn't say that C99 offered no advantages to anyone. He said no one could come up with benefits to the curl project that would be gained by moving to C99, and therefore the risk introduced by doing so was not worth it for now (my paraphrase, obviously).

It sounds to me like a perfectly good reason to stay with the current standard for that project.

[edited to fix a typo]


Writing curl today for your own use, on a platform/OS/tech stack you control, or to target all the places where curl runs right now?

It's still deeply depressing how costly/impractical it is to apply improved technologies in the long tail of environments that aren't "linux on amd64" and similar, but it's not really a language design question in my opinion. We didn't get it "exactly right"; we got it "good enough", and upgrade costs are prohibitively high in the general case.


"people don't release language updates for no reason": indeed, but many of those reasons are in the end "planned obsolescence", or ways to make even a naive compiler so much more complex that only a few remain, controlled of course by very few groups of people, and it becomes near impossible to implement a real-life alternative reasonably.

My opinion is that C is already way too rich and complex. I would stick to C89 with benign bits of C99 and C11. The benchmark: one average systems developer should be able to code a naive, real-life C compiler in a reasonable amount of time and effort.

That said, I suspect my "next" C compiler will actually be a RISC-V assembler with very conservative use of a macro preprocessor.


some things actually just work



