Rust does not guarantee the absence of memory leaks: reference-counted objects can reference each other, so their counts never drop to zero. This is why weak pointers exist in Rust.
Rust doesn’t though. But it does make it much harder to leak memory than C: you need to create a cycle between reference-counted objects, rather than just forgetting to free something.
Not that it matters as much as people think. Memory leaks aren’t a correctness problem. You typically want to restart a program or service regularly anyway to deal with memory fragmentation. You also frequently have a watchdog like systemd to restart your program if it crashes. Heroku restarts your whole dyno once a day. This means in practice that programs with slow memory leaks tend to work just fine (there are plenty of them in the wild.)
Interesting view that memory leaks aren't correctness problems. But I would require more convincing to agree.
It's true that memory leaks can be small enough that they don't become problems in the end-to-end behaviour of the system in regular use. But a lot of bugs are like that. For example many memory safety bugs.
> Interesting view that memory leaks aren't correctness problems.
The view of rust is rather that they’re not safety problems.
Whether they’re correctness problems is more complicated: in general they are, but there are lots of cases where they’re not, like short-running processes (once the process terminates, its memory is reclaimed, so freeing it is unnecessary overhead), or FFI (you’re moving memory out of your purview, so you can’t know whether it’ll ever be disposed of).
Safety and security as terms have an interesting relationship especially in Rust context - traditionally in engineering safety means defending against accidents and security means defending against malicious attackers, but "safety" in the term "memory safety" commonly implies security as well. Language is hard...
Considering whether memory leaks are a security problem brings us to the traditional CIA definition of security - the A (availability) is at risk from memory leaks.
Sorry, I should have said they’re not safety problems. The commenters below got it right.
Memory safety bugs are very different, because whether or not they affect the functioning of your software, they’re ticking time bombs that could compromise your system. A memory leak will at worst crash your software.
> Is this “just” a 20% project or is this more significant (like Android one day supporting Rust to develop apps)?
Android already uses Rust internally, in fact for new code it's a major language. They have made this guide to teach Rust to their engineers working on Android, according to this post on reddit: https://old.reddit.com/r/rust/comments/zrs1of/new_rust_cours...
I'm tired of installation instructions that consist of "curl | sh". Especially since this one just tries to detect the platform and downloads the correct installer for me. I know what platform I'm running, and I don't want to pipe the internet to my shell. In the end this is unpacking a tarball in ~/.rustup, I don't need the risk of running some bespoke script for that.
To be honest, the course suggests using apt since that's an easy security-approved way to install things on our computers (see https://en.wikipedia.org/wiki/GLinux).
I'll be happy to update it to suggest using rustup like "normal". Could you make a PR for that?
Since the ultimate objective is to run a binary blob that you just downloaded off of the internet, piping a script to your shell over HTTPS adds no additional attack surface.
And kibwen's point is that if you're in a situation where you're running potentially hostile binaries from a remote host, the fact that you used `curl | sh` to obtain said binary is not the pressing problem.
And where is this trust supposed to come from? I downloaded the thing manually, looked at the scripts, ran the binary in a sandbox, it seemed to be OK. Right, I'll recommend that everyone just curl | bash's it ...
I think the worst thing about this is that Rust is fashionable, so it encourages inexperienced devs to think that these dangerous practices are just fine. Look around at how many n00b projects now suggest doing exactly the same thing. It's simply irresponsible of the Rust crowd to keep promoting it.
The bash timing exploit makes everyone focus just on how cleverly evil it can be, and forget the big picture that it's about trusting the Rust org not to screw you.
(BTW, you can run `curl | sh` in a VM or with a modified bash to intercept the code and catch the bash script in the act, so it's not actually as sneaky as people believe).
If you think the Rust org is going to pwn you in a clever sneaky way, then you can't use Rust or any Rust-containing products.
In the end, you're pulling hundreds of MBs of binaries that you won't review, they're compiled from over 15 million lines of code that I don't believe you'd ever review either. Reviewing just the first 10 lines of code gives you nothing. A smoke test in a sandbox is also worthless, since a binary could detect being run that way, or delay the attack, or attack by specifically miscompiling your code (see Reflections on Trusting Trust).
In the end, you have to trust the Rust org, all of it.
"Trusting the Rust org not to screw you" is one part, another part is trusting the Rust server operators to defend against server compromise by any third party. So trusting the intentions is not sufficient.
The same thing applies to any binaries downloaded from their site, so unless you've got signed binaries (with an independently obtained and verified chain of trust), trusting the server is your only option. Even with signed binaries, you're still trusting the entity that holds the signing key.
In the real world, trust is not so binary. In a risk assessment I'd be interested in evaluating the level of assurance in the supply chain that delivers your binaries and artifacts. Some of it can be done using crypto like you say, and some of it could be, e.g., published audit reports from a reputable evaluator or other credible information about the processes.
Piping to the shell might be somewhere in the middle, but node's npx is just asking for trouble. You type `npx cmd` and it downloads and runs cmd, which means any typo could be fatal.
You meant a binary blob in your distro's repository, so one that was checked, tested, approved and verified with a hash. Which is wildly different than downloading and running random binaries or scripts for that matter off the internet.
No, it's hardly random: it's an official binary provided by the Rust project from an official domain managed by the Rust project. If you don't trust it, then you shouldn't trust the Rust source code either.
In this case, the Google internal course explicitly does not trust that random shell script, even if it came from rust-lang.org, while their internal apt repository is trusted. See https://news.ycombinator.com/item?id=34092187
When I'm installing a package from the repository, it's signed with GPG, hopefully in a more secure place than the web server - maybe even on an offline server with an HSM (one can hope!). When I'm running code downloaded over HTTPS, all it takes is compromising that web server (or AWS CloudFront, for this particular sh.rustup.rs example). HTTPS adds additional attack surface.
Checking the hash isn't relevant here. The content is served via HTTPS, you either trust the host or you don't. A host can easily serve you a malicious binary, as well as the valid hash of that malicious binary.
Which is why many people choose to only install software from their trusted distro maintainers who add a layer of vetting for random software packages, often built from source so messing with the package isn't possible without leaving some kind of trace that can be detected later.
Indeed, by all means, prefer to trust your distro if they package a version that's new enough. Alternatively, prefer to build from source if you like. But if you trust the Rust project to be competent enough and benign enough not to include malware in the compiler itself, then it's not a stretch to trust their official toolchain juggling tool downloaded from their official website. Focusing on the curl | bash aspect is a tired meme at this point.
Your distro's overworked maintainer isn't reviewing 15+ million lines of code included in Rust.
Most likely they get the precompiled rustc binary just like rustup does, and LGTM-YOLO the package. If they try to be diligent, they maybe take the extra 150K lines of mrustc code, which they can't reasonably review for backdoors either, and then use it to bootstrap the several sets of 15M lines of code they won't look at.
The one thing you may get in using your distribution is protection for the case that the rustup.sh website has been temporarily pwned. But I agree that focusing on curl | sh is nonsense.
I get what you are trying to say here but I could also make the argument that you actually doubled it because now you have to trust two things rather than one.
Depending on how you consider trust in a wider sense, it may even be worse than “double”, because I do not have the same amount of trust in the package I am ultimately installing and in the script I am using to install it.
> now you have to trust two things rather than one
No, you're still trusting one thing: the host itself. You're downloading both the script and the binary from the host. Both could be backdoored, and of the two, the binary is far easier to hide a backdoor in.
As for not trusting curl, you still need to fetch the resource somehow, so you're going to be trusting some tool to do it for you. That's not relevant to increasing the attack surface.
I’m not actually in the Rust ecosystem at all and only just discovered the domain belongs to the official Rust project.
That clearly changes the trust calculation in this scenario.
I had assumed it was some 3rd party project which would have put it in a different category of problems entirely.
But the entire conversation is kind of pointless then. “There is a secret backdoor in the official Rust binary” is not a useful part of any reasonable threat model.
> You're downloading both the script and the binary from the host.
Technically, if you don’t read the script, you don’t know the binary is from the same host.
That doesn’t matter, though. The chain of trust is deep, including the tooling that produced the binary, your CPU, the internet, etc.
Downloading the first file basically says “I trust this site to give me this tool and nothing else”. Where it then gets that stuff from shouldn’t matter, even if it is a shady site. You trusted them not to do that, just as you trusted them not to leave their own site open so that hackers can replace files on it.
On some distros you can install rustup directly, e.g. NixOS or Arch. For Ubuntu, it's sadly not in the default repositories, but there is a snap for rustup uploaded by the rustup maintainer.
This is because people are tired of figuring out seventeen (hundred) different package managers that may or may not work and that may or may not accept your package into their repo?
Apt gives you the version of Rust that was packaged for your distro, so you're not keeping up with the latest releases. It's typically configured to reuse the system install of LLVM, missing some Rust-specific patches that are in the process of flowing upstream. And it references distro-packaged crates by default, meaning that again you're not getting the latest published versions.
I sit next to a guy in Mountain View that works full time just on teaching people Rust and integrating Rust into Android. There’s a whole team dedicated to getting Android developers writing Rust.
The difference in installation procedure is probably a security measure. Piping curl to bash is a bad move and it’s the Android security department that’s pushing Rust in Android. Plus, internally, Google has well maintained aptitude repositories with projects built from HEAD. It’s nice installing something like lldb from the command line and the internal Python interpreter is the latest version. And you know all of your coworkers are running that version, too.
> Is this “just” a 20% project or is this more significant (like Android one day supporting Rust to develop apps)?
It's very unlikely you'll ever see Android supporting apps written in 100% Rust (just like it doesn't support apps written in 100% C++/C) - the APIs and UI toolkit are exposed and written in Java and you'll need to bridge either way.
Supporting Rust as a native code language to augment existing support for C/C++ via NDK is much more likely.
And I haven’t looked at everything yet, but it suggests installing Rust like this:
While everyone I know uses rustup: https://rustup.rs/