I've always thought that killing off Alpha in favour of pushing Itanium was one of the worst things Intel/HP could have done. Not only was Alpha more advanced architecturally, it was actively implemented and mature. With active development by HP, it could have easily snowballed into the standard cloud hardware platform.
Alpha's fate, like that of the other proprietary RISC architectures that focused on the lucrative but ultimately small workstation market, was sealed. With exponentially increasing R&D and manufacturing costs, massive industry consolidation was inevitable. That it was Itanium that delivered the coup de grâce to Alpha was the final insult, but the end would have come even without Itanium.
And it wasn't like the Alpha was some embodiment of perfection either. E.g. that mindbogglingly crazy memory consistency model.
The biggest reason Alpha was a "workstation" chip was margin and R&D issues. It was fast, but DEC couldn't manufacture it in high volume, which drove per-chip costs much higher than they could have been in partnership with a company like Intel. Meanwhile, complete dependence on manual layout for everything pushed development cost and time to market too far out. Once again, Intel's design tools could have helped reduce this overhead.
I don't disagree with the main points, but Alpha wasn't just focused on the small workstation market. For lots of us in the IT departments of SMBs, Alpha was the go-to when your Exchange server couldn't handle the load anymore. DEC, and by extension Alpha, died as soon as Ken Olsen was pushed out.
> that focused on the lucrative but in the end small workstation market
I don't at all believe this was true.
> that mindbogglingly crazy memory consistency model
I guess the utterly competent designers were actually stupid eejits then? As far as I could determine, the memory consistency model was designed to pare hardware guarantees, and therefore hardware complexity, down to the bare minimum. It was done for speed.
I have the greatest respect for the Alpha design team, they designed a thing of elegance and even beauty. You could learn a lot from it - I did.
> I guess the utterly competent designers were actually stupid eejits then?
No, I don't think that. But I don't think they had some superhuman foresight either, and they made some decisions that in retrospect were not correct. With the memory consistency model, they made the classic RISC mistake of encoding an idiosyncrasy of their early implementation into the ISA (similar to delay slots on many early RISCs).
> You could learn a lot from it - I did.
I used Alpha workstations, servers and supercomputers for my work for several years back in the day. They were good, but not magical, and even back then it was quite clear there was no long term future for Alpha.
I'm not a chip designer, far from it, but my understanding is that they really knew what they were doing and this was quite deliberate. See this comment https://news.ycombinator.com/item?id=17672467
I never suggested Alphas were magical, but they did seem extremely good, and they were designed for future expansion; it seemed to me that what killed them off was very much not competitor supremacy.
It was a deliberate choice, but it was a poor one. It made implementing a fast CPU easier, but it also made the consistency model very hard for programmers to reason about, and it required many more explicit barriers than any other consistency model.
These barriers also meant that correct multi-threaded Alpha code was no longer particularly fast, because you had to insert expensive memory barriers basically everywhere.
Had Alpha not died early, they would absolutely have eventually moved towards a more strict memory model. As it was, it was essentially an irrelevant architecture by the time people started really hitting all the pitfalls.
IIRC Alpha-model memory barriers are still used in the Linux kernel. That said, I can't find a clear statement of that, so I don't know whether it's still true, was once true, or is just my own faulty memory.
> These barriers also meant that correct multi-threaded alpha code was no longer particularly fast, because you have to insert expensive memory barriers basically everywhere.
I don't buy it. MBs are for multi-core code, and in such code you typically do a lot of work on a single core and then have a quick chat with another core. So the MBs are there for the inter-core chatter only. In that case, having fast single-core code is a big win.
> IIRC alpha-model memory barriers are still used in the linux kernel. That said, I can't find a clear statement of that so I don't know if it is true or was, or just my own memory.
The various memory barriers and locking primitives are arch-specific code, and at least smp_read_barrier_depends() is a no-op on all architectures except Alpha. Apparently around the 4.15-4.16 kernels there was a bit of de-Alphafication going on, which entailed removing much Alpha-specific code from core kernel code. Further, in 5.9, {smp_,}read_barrier_depends() were removed from the core barriers, at the cost of making some of the remaining memory barriers on Alpha needlessly strong.