I don't think Swift really competes with Rust or Go at all.
- Go aims to be simple enough for many engineers to be productive with little investment.
- Rust aims to be extremely reliable and efficient.
Swift is very complex, and makes many compromises away from reliability or efficiency in favor of application use cases.
Swift may fill an interesting role for a "native C#" that is mostly reliable, mostly efficient, and somewhat productive, but on the other hand, C#, OCaml, Java, Kotlin, and so on already have various answers to that, and they're only becoming better at it.
The only real advantage it has (as far as I can tell) is iOS (and TensorFlow almost as a knock-on of being the only serious language for iOS).
> - Go aims to be simple enough for many engineers to be productive with little investment.
Go isn't simpler; there is just a deferred cost that engineers pay later on, when it turns out one cannot do away with complexity simply by deeming it irrelevant.
But Swift isn't as easy as Python or JavaScript either. It is arguably more complex than Java or C#, and probably more complex than Kotlin. It is surely less complex than Scala.
IMO Swift is easier than JS or Python for medium to large projects. Once you reach a scale of codebase where you can't keep all the systems in your mind at one time, dynamically typed, interpreted languages hit a point of diminishing returns, and you start to miss the ability to catch issues at compile time. You can mitigate this by writing a lot of tests, but that creates a large additional workload which more than outweighs the additional complexity you would have encountered working with a language like Swift in the first place.
I think one advantage of Swift, as compared to say Rust, on this topic is that one of its core values is the concept of "progressive disclosure". In other words, Swift is designed in such a way that you can start programming at a low level of complexity and produce functional programs, and over time you can introduce more complex and esoteric concepts into your code as you become more familiar with the language.
If you're jumping straight into a large, mature code-base I can understand how it might seem complex, but it's very possible to write Swift code which is as simple and readable as Go code. The same can probably not be said for C++ or Rust, where there's a certain baseline complexity which cannot be escaped.
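As a rough illustration (my own sketch, with a made-up `User` type, not code from the article): a beginner can get real work done with just structs, arrays, and a for loop, and nothing here requires generics, protocols, or memory-management annotations.

    // Plain structs and a for loop - no advanced features needed yet.
    struct User {
        let name: String
        let age: Int
    }

    let users = [User(name: "Ada", age: 36), User(name: "Linus", age: 50)]
    for user in users where user.age > 40 {
        print("\(user.name) is over 40")
    }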
I love Go, but there is a strong case that Kubernetes has a messy codebase because Go didn’t have good dependency management or generics when they started it. The strength of the ecosystem in other areas let them get started and now they have the tragedy of success, a la Wordpress.
Part of the issue with Kubernetes is that it started as a Java project and then became a Go project. If anyone started a large Go project with that mindset, it would look messy no matter what you did.
Many large projects may look like a mess but their success is because they got the high level architecture right and perhaps had the moat of a large tech behemoth behind them.
VSCode, Tensorflow, Kubernetes, Go, Typescript, React. These are a few that come to mind.
At a high level, they are all perhaps messy in their own way, but they are absolutely fantastic, hardened, proven codebases with millions of users.
There is one video about the Kubernetes code base: https://www.youtube.com/watch?v=4VNDjwzzKPo and it has nothing to do with the Go language itself; it's about organizing the repo and code structure.
Kubernetes is a complex distributed system with over 2M LOC; no matter what language you use, there are going to be some issues.
It's probably one of the largest open source projects out there.
2MLOC is too big for what it’s supposed to be doing. That’s a sign that something went wrong. I wouldn’t blame Go qua language per se, but the lack of generics and lack of dependency management (when they started) couldn’t have helped.
The argument - as I understood it, knowing only a little about Go and Kubernetes - wasn't about bloat in application features, but bloat caused by the language. The stated argument was "missing generics" - missing generics means that similar algorithms have to be re-created multiple times instead of being reused. Thus comparing this to another language might be interesting.
For example, C++ is extremely powerful for writing generic code: make it a template and provide a set of traits to modify the behavior, and you get a set of algorithms you can re-use for anything - at the cost of unreadable code in the implementation (if you aren't careful; with a chance of quite precise code at the call site), and of course a bloated version in the resulting binary (after the compiler has instantiated and inlined, thus copied, everything; the programmer typically doesn't have to care until the binary is too large to handle).
So the better question is: what code got duplicated exactly? If the one claiming this can't give examples which make a dent in 2 MLOC, then it is not due to the language in the first place.
I assumed it is about features because there are really a lot of features inside Kubernetes which are not available in Swarm or Nomad by default.
2 MLOC is insane. I assume some of that might be test code and other infrastructure, and it's basically impossible to compare apples to apples in terms of code size. But for reference, 2 MLOC is around the same code size as PostgreSQL, which is also a complex distributed system, has arguably a lot more user-exposed features and a long history, and is written in C of all things.
I think one of Swift's advantages is its progressive disclosure. It can be written without using any of its more advanced features, making it approachable without limiting possibilities for more advanced users.
One of Swift's strengths is that most of its features really make sense if you read them, even if you aren't aware they existed. The entire language is optimized around readability. (No, I'm not counting the function builder fiasco.) For example, you may not know that Swift's loops support where clauses:
for i in 1..<10 where i % 2 == 0 {
    print(i)
}
But you can immediately understand how it works, because it's consistent with the rest of the language and reads well.
Any claim of the form "this language is magical! every statement is self-evident" has been bunk since the dawn of programming. The people making such claims are always already too close to the language to realise their own confirmation bias.
So, feedback from someone whose exposure to Swift consists basically of this article, but who has eaten a metric crapload of other braced languages over the decades.
What I can tell is that the programmer wanted to print 2,4,6,8. So the intentional readability is there. However, when it comes to understanding how it works (i.e. the language mechanics delivering the intuitive reading), I had lexical concerns. I couldn't immediately tell whether the where-clause acts like a guard predicate in the for-statement or like an unbraced if-statement, i.e. equivalent to
for i in 1..<10 {
    guard i % 2 == 0 else { continue }
    print(i)
}
or
for i in 1..<10 {
    if (i % 2 == 0) {
        print(i)
    }
}
or even (let's hope not)
if (i % 2 == 0) {
    for i in 1..<10 {
        print(i)
    }
}
and in a similar vein, the lexical scope and binding of i was not clear to me. Some languages bind the iterated variable at the level of the statement, others inside the iterated body (which may even be a closure for all I know). This has implications particularly for the value of i after the loop, for the binding of i in any function generated inside the loop (example: print could actually be talking to an IO object that defers output by prepending lambdas to a chain - I've seen web containers deferring view rendering this way), for name masking, and for the consequences of any flow-control/continuation/exception mechanism that may cause an early loop exit.
I'm unsure what the difference is between the first two…are they not equivalent? The lexical scoping could be interpreted differently, sure (allowing your third example the benefit of the doubt) but this is where the "Swift is consistent" comes into play. You already know Swift's scoping rules, it's one of the first things you learn, so using that knowledge you can figure out this code as well.
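For what it's worth, and as far as I can tell, the loop variable is scoped to the loop body in Swift, so a sketch like this should fail to compile on the last line:

    for i in 1..<10 where i % 2 == 0 {
        print(i)
    }
    // print(i)  // error: cannot find 'i' in scope - 'i' ends with the loop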
No, they're not equivalent. See, that requires making an assumption that I didn't about lexical nesting. They merely do the same thing in this example.
I do not know Swift's scoping rules and having read through the introductory guide and the flow control chapter I still can't tell you if the loop variable remains visible after the loop, and what it is bound to if so.
No doubt that if I sat down with a REPL or an IDE it'd hopefully be evident by example within a few seconds, but the claim was about dry-reading, not interactive experience.
I'll leave it there with an admonishment against making assumptions/declarations about the obvious-ness of something you're already familiar with.
In every language I’ve used with a for-in construct (or equivalent), the iterated variable is scoped to the loop. That seems to be implied by the syntax, too; it’s equivalent to “for all i in [1,9] where i mod 2 = 0”. If you saw that in a math context, you’d expect i to have no meaning beyond the iteration.
Is there a case to be made for other behavior? Or an example of a language that intentionally treats it any differently? (Unless you go out of your way to iterate over a variable you’ve declared in an outer scope in C, but then that’s not really a for-in construct.)
JS works that way with plain for loops, but not for-in or for-of. The following fails on the last line, for example:
for (const i in [1,2,3,4]) {
    alert(i);
}
alert("i is now " + i); // ReferenceError
(Not that JS is exactly known for consistent and predictable behavior on edge cases, of course...)
Python I don’t have experience with, and it appears I stand corrected on that front! I wonder how intentional it is, and what sort of use case it enables (and if it’s considered good practice to use).
Python does this mostly because it doesn't have block scope, only function scope like JS's `var`. After using Python for a decade, I think this is mostly a mistake. I would configure (or edit) my linter to prevent me from using it if I needed to write a lot more Python.
Lexical nesting? While I agree with the statement you're making - that reading code requires you to understand more about what the code means in that context/runtime - the example you're picking apart reminds me of the dangers of undefined behaviour in C++. If the language specification plus its test cases still leaves implementation-specific details, then by all means measure or discover them, but I would say this is not unique to Swift and is just part of the complexity of not writing CPU instructions directly that target only one type of CPU design. Maybe I'm taking this too far, but my point, I suppose, is that I know what was intended by the Swift code even if the exact details are optimized out from underneath me by a compiler or language library. One might as well argue that you should have full unit and integration tests for every target platform to ensure the code behaves the way you expect, and even then you'll still hit platform-specific edge cases, particularly on the wide variety of platforms Linux supports... I do tend to trust Clang, but even it can vary greatly from (hopefully major) version to version.
Yes, I'm definitely trying to point out the distinction between reading code and understanding its intention vs understanding implementation mechanics and awareness of the possible alternative behaviours. It's easy to point out that C++ is amongst the worst offenders in that regard, but even a language with a reputation for "programmer happiness" like Ruby (which I absolutely adore) is full of magic, almost every nontrivial expression glossing over a huge pile of conceptual and mechanical object-functional devices that beginners trip over quite routinely (don't even get me started on the thick layer of sorcery that Rails slathers over the top of that).
So I'm not actually calling Swift a major offender, but noting generally that I don't think there's ever been a language in which the intuition gap between intention and mechanics was small enough to be irrelevant.
(something at the back of my mind is now whispering "Scheme", but too quietly to be taken seriously)
A where clause on a loop is nice, but I think it's likely it will either be abused or be limiting. I've always favored systems that work well together to build something more than the sum of the parts. Perl is an example of that in many cases.
for my $i ( 1..(10-1) ) {
    next if $i % 2;    # skip odd numbers
    print $i;
}
Here Perl's post-conditional syntax (which works only on a statement and not a block so it's more manageable) combines well with "next" (Perl's version of "continue") to very clearly convey intent without a lot of extra boilerplate. An additional clause is often just an additional line. Comments after each can provide additional context as needed. I think this is a more flexible way to accomplish the same thing, and more readable in all but the simplest of cases. And for those simplest of cases, I would use a grep, which is also useful in other parts of the language:
for my $i ( grep { not $_ % 2 } 1..(10-1) ) {
    print $i;
}
Any syntax learned that works as a special case to only a single structure is either a case of missed opportunity or a case of extra syntax that needs to be learned which shouldn't be, IMO.
That where is not a one-off construct, it's used with switch statements:
switch i {
case 1...100 where i % 2 == 0:
    print("\(i) is a small even number")
case ..<0:
    print("\(i) is negative")
default:
    print("I couldn't care less")
}
Or generic constraints:
struct Vector<T> where T: Numeric {
    // ...
}
If you so wish, you can always use any of the functional constructs as well, as in
for i in (0..<10).filter({ $0 % 2 == 0 }) {
    print(i)
}
So it looks like it's a general modifier to a range type, so can be used where those work. That's slightly better, but if filter exists, why not just use that?
One of the biggest criticisms that Perl gets is that there's so many different ways to do things, and that's somewhat deserved, so the question is what does "where" offer that "filter" doesn't other than another keyword and concept you have to learn to understand the language? What's the point if it's not really that much more expressive or clear?
> struct Vector<T> where T: Numeric
Is this really the same thing? This seems like a case where the same word is used to mean something else, even if they are loosely conceptually the same.
It isn't actually a modifier to a range type -- it modifies other constructs, like looping or conditionals. The examples don't show it but literally any condition can go in the where clause, even something like `where random() % 7 == 2`.
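To add to that (my own sketch, not from the parent - the `FetchError` type is made up): `where` also appears on catch clauses, attaching an arbitrary condition to the pattern:

    enum FetchError: Error {
        case http(code: Int)
    }

    do {
        throw FetchError.http(code: 503)
    } catch FetchError.http(let code) where code >= 500 {
        print("server error \(code), worth retrying")
    } catch {
        print("some other error: \(error)")
    }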
I've been able to dip into Swift codebases with zero previous experience without too much trouble. I do have Rust experience, which helps. But nevertheless, everyday Swift seemed pretty readable to me.
Swift has some really clunky "features" such as dual-identifier parameters and escaped opening parens as identifiers for string templates. WTF! As if function/method declarations weren't complex enough in a statically-typed language which supports generics.
They’re not dual identifiers; one is an argument label and the other is the parameter name. Maybe it’s because I come from an ObjC background, but this is one of my favorite aspects of the syntax; IMO it makes code way easier to read without having to look up method definitions to figure out what each argument does.
As a disclaimer, I’ve written Swift, but this seems to be a simple concept. The first name is the interface and is used at the call site, and the second (optional) name is used within the scope of the function, should the developer want a more descriptive name.
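A quick sketch of the distinction (the function itself is made up): the first name is what callers write, the second is what the body uses.

    // "at" is the argument label (used at the call site);
    // "index" is the parameter name (used inside the function body).
    func remove(at index: Int) {
        print("removing element \(index)")
    }

    remove(at: 3)   // reads like a sentence at the call site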
I think that’s more a question of taste/what one is used to. I find that dollar sign a bit heavy, visually, but one could also argue that’s a plus.
Using backslash has the advantage that there's only one 'special' character in strings, but of course it also has the problem that you can escape any character with a backslash, except for opening parentheses.
They had good reasons: it means there are only two substrings that need escaping, the backslash and the end-string sequence. In hindsight, it's ugly and hard to read. A case of optimising for the wrong thing.
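For reference, the syntax in question (my own minimal example):

    let score = 42
    print("score: \(score)")   // backslash + parenthesis starts an interpolation
    print("C:\\temp")          // a literal backslash is escaped as usual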
Isn't that true of almost all programming languages? Or do you mean cleanly?
I think D is this idea (or disease, depending on who you ask - i.e. Go is based on the idea of only keeping the simple bits) taken to the extreme: you can quite happily write Java-style code in D, but you can also implement a Java in D all the way down to the metal (merrily metaprogramming all the way down).
All of this sounds great but it doesn’t mean that the code written in it will maintain the same level of simplicity/approachability.
Go has the advantage of enforcing it. Writing code is easier than reading it.
Don’t get me wrong, I love swift but I think that for what you’re talking about constraints are necessary because we naturally shift towards more complex ways of expression.
Exactly. I was in love with Swift around version 2, but as the language evolved to cover corner cases and enable things like SwiftUI, it became nigh-unreadable; there are simply too many 'dialects'.
Dart is the closest I've found to optimizing the surface area of a language while maintaining flexibility like ObjC.
This describes Perl - one could get very clever with it. As a recovering Perl programmer - that is a scary thing for a language to be, especially for large code-bases worked on by people with different skill levels as the code quickly becomes inconsistent, with mixed paradigms.
I once came across a production bash script that contained sections in Korn shell syntax, others in C shell syntax, and yet others in Bourne shell syntax. Lovely!
> It can be written without using any of its more advanced features
If anything, that's bad, because it invites projects to adopt an impoverished subset of the language that avoids those features, in which case they might as well not exist because you aren't allowed to use them.
It can, sure, but I remember having to spend 2 hours (with a reference manual open) rewriting two lines of Perl script that a sub wrote (after a colleague told me that he couldn't understand the script) into two lines that a Perl beginner could understand.
I've never understood this notion that a professional codebase should be dumbed down to the level of the beginner. In most professions the neophyte must jump through hoops to reach mastery, yet in programming we seem to have turned this idea on its head. Whatever happened to mastering your tools? A seasoned Perl veteran should be able to play a damned good round of golf.
We have too many different, interacting parts in most software systems these days to demand devs have pro-level expertise in every part they might have to deal with. If you can simplify the code (without significant harm to functionality) so that only 10% of your potential devs wouldn't understand it very well instead of only 10% would understand it well, you have just increased the value of the code.
If the usage of a more advanced concept helped reduce the number of lines, improve maintainability, or increase performance, you might have a point - but notice that it was two lines replaced by two lines.
I've always suspected the sub of artificially increasing the difficulty of the code to ensure he stayed employed; this didn't work.
Swift currently has no language-level concurrency model at all (the only concurrency support available is through libraries such as pthread and libdispatch). However, real concurrency support is on the roadmap for Swift 6 [0] (probably based on some combination of async/await and an actor model), and the core team is expected to release an initial design document within the next few weeks.
It might not offer first-class support for green threads like Go, but GCD is, imho, an even better model for 90% of the concurrency needs of the average app.
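A minimal sketch of the usual GCD pattern (my own example; the workload is made up, and in a real app the main-queue hop is where UI updates would go):

    import Dispatch

    // Run expensive work on a background queue without blocking the caller...
    DispatchQueue.global(qos: .userInitiated).async {
        let sum = (1...1_000_000).reduce(0, +)
        // ...then hop back to the main queue for UI/state updates.
        DispatchQueue.main.async {
            print("sum is \(sum)")
        }
    }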
"concurrency needs of the average app" is different from what Go does well with green threads; this seems like something Swift does notably worse, but well enough that it doesn't annoy you in your use case. I feel like this is a different standard from "if Go can do it, Swift can do it better".
>Isn’t that just a bridge to an existing OS level feature?
Yes. A feature well designed for this very purpose.
>Also is GCD green?
GCD tasks are not green threads, but they're not direct threads either (even though real OS threads are used under the hood). They're more lightweight and much faster to create than direct OS threads.
> Rust aims to be extremely reliable and efficient.
Fair point; Swift will never be as predictable or efficient as Rust (not a negative per se, just different goals). But I disagree on the complexity re: Go. What about Swift gives you the impression of extreme complexity? Go is definitely comparatively simpler, but I do not think it is so much so as to put it in a different league.
The type errors you start running into with generics, with all of their weird limitations in Swift, can make things complicated fast. That and RxSwift.
Also, Swift as a language doesn't scale well. There are a lot of bottlenecks in the build process that don't let you scale simply across many cores like you can with C++, Obj-C, and C, and probably many other languages too. You're also effectively limited to Xcode and Apple desktops, so you can't go rent a 100-core build server on AWS for builds like you can for every other platform out there - not that it matters much yet, given Swift's build scaling issues.
Also stuff like basic debugging often just... dies.
The more I work with a badly scaling language, the more I appreciate a design decision like the one Go made with building fast. I hope that with generics in Go 2 the boilerplate will be reduced a lot.
In my opinion it's just easy to do wrong. Most uses I've seen (at least in Java land) are attempts at non-blocking IO which end up turning the whole application into observables from the bottom up. Which in turn makes your app hard to debug and reason about.
When done right, when you actually need the observer pattern, and use it as an event queue I'm sure it's probably amazing though.
Those are good points, but I agree with the other commenter that these are just difficulties with asynchronous programming generally. I like that Rx makes reasoning about those issues more straightforward and something you must handle, instead of something that will bite you later if you didn't think it through.
When was the last time you compiled Swift code? Xcode's new(er) build system has solved many of the scaling issues that affected the previous system, to the point where it scales linearly (on large enough projects) even on a 28/56 Mac Pro, and my 10/20 iMac. Additionally, with CMake's native Swift support in 3.18(?), the scaling should be more solved across platforms as well. There's certainly nothing in the language itself that prevents scaling builds, only the compiler and build systems.
The debugger is still rough, true, but improving. Apple just isn't spending the resources needed to bring lldb up to a great experience in a timely fashion.
Swift's build system can spawn enough threads to consume all of your CPU cores, and batch mode did improve things, but it's not actually doing so efficiently. There is a trade-off between total compute time used and number of threads used. In compute time consumed, Swift is most efficient when it's single-threaded in WMO mode, but this means you can only compile in parallel with separate modules that don't depend on each other. And even then it's not a very fast-compiling language itself, multithreading issues notwithstanding.
Maybe something has changed recently, but as far as Xcode 12 goes, I haven't noticed much of a difference. I last checked deeply with Swift 4.
But what is going to build your iOS/macOS application, the target of 99.9% of Swift code? It's only going to build on macOS, unless you are willing to illegally virtualize macOS on non-Apple hardware.
Swift has almost seamless interoperability with existing C and C++ libraries and has much stronger safety guarantees than either of those languages. It's not on the same level as Rust, but ARC eliminates entire classes of memory safety issues.
The Swift runtime is large-ish, but not outrageously so, and is generally statically compiled-in to the binaries, so there's no dependencies on dynamic libraries.
It is, as you say, quite complex, but still an interesting choice IMO.
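As an illustration of the ARC point (my own sketch): reference cycles are the main thing ARC does not handle automatically, and the language makes the fix explicit rather than leaving a silent leak:

    final class Node {
        var next: Node?
        // 'weak' breaks the two-way retain cycle; without it, a linked
        // pair of nodes would keep each other alive forever under ARC.
        weak var prev: Node?
    }

    let a = Node()
    let b = Node()
    a.next = b
    b.prev = a   // safe: prev doesn't keep 'a' alive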
My understanding was that there is no need to involve Objective-C if you’re directly interfacing with C. That interop is direct/seamless, and works on Linux etc. even without Objective-C or Darwin [0].
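For example (a minimal sketch of my own), plain C functions from libc import directly as ordinary Swift functions, on both Linux and macOS:

    #if canImport(Glibc)
    import Glibc        // Linux
    #else
    import Darwin       // macOS
    #endif

    // getpid() is a plain C function, called with no wrapper at all.
    let pid = getpid()
    print("running as pid \(pid)")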
I wouldn't say "seamless". Interacting with C directly from Swift is possible, and fine from a binary perspective, but quite clunky at the source level. (Which obviously is a motivation for the library that is being announced here.)
Swift can interface with C++? I've just done some googling but I cannot find any documentation at all relating to C++ interop. I make no assumption that I'm right but all I can see is interop via a C API
Seamless in my mind is what D has, where name mangling, templates, and classes all work on Windows and Linux (possibly macOS - I don't have one). Does Swift have any of that?
> Swift can interface with C++? I've just done some googling but I cannot find any documentation at all relating to C++ interop.
It is not there yet but it is actually being implemented on a native level. A quick Google gave me this. [0] [1] [2]
The approach is more first-party than what you get with Rust's third-party crates and tools like bindgen, cxx, etc. Swift's C-interop approach is built-in, and much more seamless and automated than bindgen's toggles and switches, or than creating bindings by hand with cgo in Go.
It's being worked on, but it's nowhere near being usable in any sense. Very fundamental things still need to be tied down and decided, much less implemented.
Well, I did say it's 'not there yet' and it is 'being implemented', so of course it is not yet usable as of now. The same point could be made about Dart C/C++ FFI, Rust async/await, etc.
The parent comment I replied to said: "...I cannot find any documentation at all relating to C++ interop", and this is some kind of "documentation" that is related to C++ interop.
On most platforms, if you can interface with C, you can also interface with C++. Usually you have to find the mangled symbol in the C++ binary using a tool like nm, and treat it like a C function with whatever ffi you're using.
If you are targeting multiple platforms this can get tricky because the mangled C++ symbol will (probably) be different for each platform, which can be solved with a good build system
C++ doesn't have a stable ABI, so if you target a C++ symbol manually like this, as soon as the compiler revs to, say, C++20, the ABI may change and you'll get a linker error.
Also, this may work for simple C++ methods, but once you cross over into vtable territory, you're going to be in a world of hurt.
Also as far as interoperability, I just don't see a smooth path forward between Swift and C++. Philosophically, much of the C++ code you write ends up compiling away or building specialized generated code at compile time. Anything written in C++17 and newer will also have a lot of constexpr code that generates immutable results baked into the final output.
I'd say, wrap your C++ in a simple C API and expose that to Swift (same would apply to Go/Rust or any other language honestly).
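i.e. something along these lines on the Swift side (the `widget_*` functions are a hypothetical C shim over a C++ class, exposed to Swift via a module map):

    // Assumed C shim header, bridged through a module map:
    //   typedef struct Widget Widget;
    //   Widget *widget_create(void);
    //   void    widget_destroy(Widget *w);
    //   int     widget_count(const Widget *w);

    let w = widget_create()          // opaque pointer to the C++ object
    let n = widget_count(w)
    widget_destroy(w)                // manual lifetime, since ARC can't see inside
    print("count: \(n)")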
Hot take: ABI stability is overrated. How often do you need to relink specifically without recompiling everything? As an OS packager, it's no big deal to rebuild everything depending on a C++ shared library when that library is updated. For self-contained binary distributions, statically linking the C++ library or putting the shared objects together works fine still.
vtables are always a pain, but I think the Swift team had some cool ideas about vtables and other dynamic stuff across dynamically linked shared objects.
I've only ever done it out of curious laziness, but it's one of those things that is theoretically clean enough to do right - yet if you do it wrong, you might be fixing it all day. If something changes downstream, or you make a mistake at 9AM, you can end up with subtle, non-failing garbage everywhere.
The only way I would do it and trust it, would be if the binding is generated and tested (separately) automatically, but at that stage why not just write one with a proper ABI.
Good luck instantiating and using classes if you do it that way. You pretty much have to reimplement the Itanium ABI in whatever language you're using (hint: if your language doesn't already support it, then even implementing it in assembly would be easier).
Swift has great C interop for a language that is not a C superset, but C++ interop is at best a thing you can theoretically do. In practice, C++ interop is usually done via a wrapper.
I think you are focusing on somewhat artificial things here, like intentions and other biases and notions.
Swift is a statically compiled, C-like language that can be used for systems programming, just like C, C++, Rust, Go, etc. This is not an accident; it was built from the ground up to do that.
The goal with this OSS library is literally making Swift easier to use in projects where you'd otherwise reach for exactly those languages. So, whether you like it or not, it's competing (or at least trying to) in that space (i.e. systems programming). Actually, Swift was designed to replace Objective-C, which of course was the systems programming language that Apple standardized on as an alternative to C decades ago. It's designed from the ground up to be a drop-in replacement in any project where you'd previously be using that. So, it's a natural fit for any kind of project where you'd otherwise be considering things like Rust, C++, Go, C, etc.
Whether that makes sense or not in your context is of course up for debate and highly subjective.
Ada is a superior language if you are trying to write safe code, but given that the tools (compilers and such) cost serious money, it will never get any adoption apart from avionics, defense contractors, and mission critical/safety type of software.
I don't know Ada well personally, but from what I've read, it's a bit less advanced than Rust at static checks, a good bit simpler than Rust and Swift but more complex than Go, and a little bit old-fashioned in approach (verbose syntax and all that). But I'm not very educated.
I would say that Ada and Rust probably compete on some things, but given the history of Ada in industry, it's probably only in fields that already use Ada (aerospace and what-not).
>> I don't know Ada well personally
>> But I'm not very educated.
If you don't know, please don't speculate.
I use Ada professionally and I am experimenting with Rust. Rust has great promise and I am interested to see where it will go, especially as an alternative to C++.
>> it's a bit less advanced than Rust at static checks
>> a good bit simpler than Rust and Swift but more complex than Go
Ada is a large and fairly complex language because it was designed for hard real-time, safety-critical, embedded systems and it has been in real-world use for 40+ years. I don't think it is that simple, but judging simplicity is subjective: what one person sees as complex, another person might see as simple. You can browse the Ada 2012 Language Reference Manual and see what you think: http://ada-auth.org/standards/12rm/html/RM-TOC.html
>> a little bit old-fashioned in approach (verbose syntax and all that).
This comes from the Ada design philosophy to be explicit in everything and to prefer the use of keywords over symbols.
Long-life programs tend to be read more than they are written. The Ada way is to make programs easier to read rather than faster to write.
>> I would say that Ada and Rust probably compete on some things, but given the history of Ada in industry, it's probably only in fields that already use Ada (aerospace and what-not).
This is largely true. Ada occupies a niche for aerospace and other safety-critical areas, but has not been widely adopted due to the "uncoolness" factor and the cost of most of the available Ada compilers and toolchains.
I think the popularity of Rust has piqued some interest in Ada as well, but I am not sure if it will cause any change in where either is used.
I would like to see Rust continue to mature and get adopted for widespread use, with multiple implementations and a language standard. As it currently stands, many aerospace and safety-critical spaces would not be willing / able to adopt Rust without a language standard and certifications. Here's hoping . . .
>> This comes from the Ada design philosophy to be explicit in everything and to prefer the use of keywords over symbols.
This may be true, but it doesn't contradict the claim that Ada is old-fashioned in this regard. It was an old fashion in programming languages to prefer words over symbols, and to try to make programming languages look more natural language-like to make them more readable. See Ada, Cobol, Pascal or AppleScript for some examples. This is much less common with newer programming languages, where it's much more common to favor terseness and shortcut syntax. It's debatable whether this is better, but it seems undeniable that Ada-style verbosity is no longer in fashion.
The claim that more verbose syntax, with keywords instead of braces/symbols, is more readable is, imho, very subjective. It's easy to state "code is read more often than it is written" and jump to conclusions from there with no evidence that Ada is actually more readable.
An alternative syntax could be helpful for Ada to gain more traction.
The “verbose syntax is good” stance seems to have been a lot more common back in 70s–80s language design, hence why I call it “old-fashioned” :-) not trying to make a judgement against the language (I personally love Cocoa-style method names, for example).
Benchmarking languages against each other is like pitting birds of prey against each other: it's pointless, because they will hunt what they hunt.
I thought this language/framework dispute was a fever back in the bad PHP days, but it has not changed. I'm not saying 'oh, you kids don't know shit'; I'm saying the language is not the problem. The problem is the problem, and the right tool for said problem is the answer.
You find Swift complex and explain why; you're not wrong about its application. iOS it does very well, therefore it's a tool for an iOS job.
If I grew up just learning Swift and nothing else, I would say Swift is the best. Plato's cave springs to mind.
We are all engineers, developers, hackers, designers, and/or code monkeys. Don't try to pit them against each other; know the right tool for the job, and if you can't find it, make it - with people who can.
To be clear, that’s kind of what I was trying to say :) maybe the “only advantage” part at the end made that unclear.
I think the problems you’d want to use Go for, the problems you’d want to use Rust for, and the problems you’d want to use Swift for are largely non-overlapping (in spite of whatever similarities they do have).