Certainly one of the issues with Itanium was that it was fighting the last war.
It was trying to optimize for instruction-level parallelism when power efficiency and thread-level parallelism were coming into vogue. Arguably, companies like Sun overoptimized for the latter too soon, but it was the direction things were going.
A senior exec at Intel told me at the time that the focus on frequency in the case of NetBurst was driven by Microsoft being uncomfortable with highly multi-core designs--and I have no reason to doubt that was one of the drivers. There was a lot of discussion at the time around the challenges of parallelism, especially on the desktop. It generally wasn't the problem the hand-wringing suggested it would be.
Parallelism is still very much underused on the desktop. Most desktop CPUs spend much of their time running single-threaded JavaScript from some clunky website - no parallelism whatsoever. It's only with the latest-gen CPUs that have things like big.LITTLE that it's becoming a real game changer.
How does big.LITTLE change anything related to your scenario? From my understanding, they’re just high efficiency cores that influence parallelism no differently than any other multi core CPU.
I think if you say to OS and software vendors "here are a bunch of toy cores, do something useful with them", it finally justifies a nudge towards smarter scheduling. With ordinary SMP, there's little reason not to just assign "first free core".
I always envisioned a day where we'd devote the small cores to "parasitic load" tasks-- your media player, Slack/Discord/etc., and a thousand OS maintenance threads. They might run at 95% load to do not very much, but it's no big deal-- the actual software you care about now has the big cores to itself, and context switching (along with cache and branch-prediction losses) is reduced. I could even imagine getting to the point where tasks could requisition cores on a "no disruptions until actually yielded by the main process" basis for real-time or maximum-performance tasks.
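A minimal sketch of the "devote the small cores to parasitic load" idea, assuming Linux (where `os.sched_setaffinity` exists) and a hypothetical layout in which CPUs 4-7 are the efficiency cores -- the core numbering is made up for illustration:

```python
import os

# Hypothetical big.LITTLE layout: assume CPUs 4-7 are the little cores.
LITTLE_CORES = {4, 5, 6, 7}

def pin_to_cores(cores):
    """Restrict the current process to the given CPU set (Linux only).

    Intersect with the CPUs we're actually allowed to use, so this is
    a no-op rather than an error on machines with fewer cores.
    """
    available = os.sched_getaffinity(0)
    target = set(cores) & available
    if target:
        os.sched_setaffinity(0, target)
    return os.sched_getaffinity(0)

# A background daemon might call this at startup so the big cores
# stay free for the foreground application:
pin_to_cores(LITTLE_CORES)
```

In practice schedulers like Linux's EAS (or macOS's QoS classes) do this placement automatically, so explicit pinning like this is more of a hint for special cases than something every background task would do.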
In the last few years this is pretty much where we've gotten to. Sometimes I leave Activity Monitor open on my M1 Mac and the four little cores keep the big cores mostly idle until there's a lot of work to do (a build or, sigh, scrolling a web page).
I suspect that, long term, big.LITTLE is not going to be all that important. It is more of a stepping stone along the way.
My logic goes something like "why have 4 small cores and 4 big cores when you could have 8 big cores". Then the argument goes "yes, but the small cores use less power", to which my reply is "true, but the big cores finish faster and can spend more time sleeping; I think the power argument evens out either way".
The real gain is to have better power management for the cores, not by having weird small cores.
There are some wrinkles to this: First, big cores are less efficient per unit work because there's a lot of complexity in things like reordering, speculation, and pipelining and that complexity costs energy. A design optimized for peak throughput per time looks different than a design optimized for peak throughput per energy. Second, reducing area and energy for the cores allows spending those resources somewhere else like larger caches which, to a point, can reduce energy further. Third, realtime workloads can't really be delayed without compromising user experience so you're going to be waking some cores up constantly anyway. Might as well have a few efficiency-optimized cores and pile as much on them as possible to keep latency and energy consumption low.
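The race-to-sleep question above can be made concrete with a back-of-the-envelope calculation. The numbers here are entirely made up for illustration (a big core that is 2x faster but draws 5x the power of a little core), but they show why "finish faster and sleep" doesn't automatically win when the big core's power cost grows faster than its speed:

```python
# Illustrative, made-up figures -- not measurements of any real chip.
BIG_POWER, BIG_SPEED = 5.0, 2.0        # watts, work-units per second
LITTLE_POWER, LITTLE_SPEED = 1.0, 1.0
IDLE_POWER = 0.1                        # watts while the core sleeps

def energy(work, power, speed, window):
    """Joules to finish `work`, then idle for the rest of `window` seconds."""
    active_time = work / speed
    assert active_time <= window, "task must fit in the window"
    return power * active_time + IDLE_POWER * (window - active_time)

work, window = 1.0, 2.0
big = energy(work, BIG_POWER, BIG_SPEED, window)          # 5*0.5 + 0.1*1.5 = 2.65 J
little = energy(work, LITTLE_POWER, LITTLE_SPEED, window)  # 1*1.0 + 0.1*1.0 = 1.10 J
```

Under these assumed numbers the little core spends less total energy even though the big core races to sleep; flip the ratios (say, 2x power for 2x speed) and race-to-sleep wins. The real answer depends on where each core sits on its voltage/frequency curve.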
Not all tasks can finish fast. Some are just constant low-level jobs waiting on IO most of the time, and operating systems are full of such tasks. It's better to let them run on the LITTLE cores since, integrated over time, they'll use less power than the big cores.
I think it's generally fair to say though that the applications that really need a lot of performance (e.g. multimedia) multi-thread pretty well and there are typically a lot of background tasks running that consume core cycles as well. What is probably more generally true is that a modern laptop or desktop is ridiculously overpowered for most of what we throw at it. I'm typing this on my downstairs 2015 MacBook and it's perfectly fine for the almost entirely browser-based tasks I throw at it.
My wife has only ever owned cheap Chromebooks, and has never complained that they were slow. I've used them with her streaming videos and such, and I agree- simple web browsing isn't slowing anything down on modern hardware.
Even on a modern ultralight laptop, I can run two chrome profiles, three instances of vscode running different projects, docker and a few other things and the CPU never gets pegged. There's a ton of memory pressure from a memory leak somewhere that I haven't bothered tracking down yet- I suspect the SWC compiler (thanks, rust) but haven't proven it yet. All that and I'm still getting 8+ hours of battery life.
Which notebook model is this? Which OS are you running on it? Would you buy it again given the same budget today? (Or said in another way, is there something better now?)
I'm running an LG gram with the swaywm flavor of Manjaro. So far, everything hardware-wise has worked pretty flawlessly, though given the option, I'd rather have 32 gigs of RAM than 16.
I'm really tempted by the idea of a Framework style laptop with user serviceable parts, but my work style has me moving a fair amount, so not having battery or thermal issues is such a boon I don't know that I could make the switch. In the last 6 hours I've written and compiled code, run tests, attended video calls, streamed video from websites, browsed the internet for recipes for dinner, chatted on slack and am still on 46% battery remaining. I have yet to hear the fans turn on.
The two areas this thing will fall down on is music and gaming. The speakers are pretty bad, and though it can run light games off of steam, I doubt it would do well with anything super graphically intense (though I haven't actually tried much, to be honest). Also, the built-in webcam sucks, but decent webcams and headphones are cheap, so it's really only games that you'd want something else for.
I've looked at the LG gram before (the 2021 model [0]), but wasn't convinced. And now, after seeing a friend's new Lenovo Legion 5, I'm even more uncertain of which one I should pick. (That Lenovo has a handy button to set the power envelope, which seems to actually work.)
I also usually move a lot, but I don't want to optimize for that. It's easier to find a power outlet than to cool a throttled laptop.
I can't speak for how Windows behaves on it, but I've never noticed any throttling. The only time the fans have kicked on is when the battery is plugged in and charging. As I said before, though, I also don't really game on it.
I don't know if / how it runs linux, but I've got a friend with a legion, and was also happy with it.
If you'd rather have better graphical performance than battery life, definitely go with the legion. If you can't stand the thought of being tethered to a power cord every time you have to do something serious, then you might want to consider the gram.
I've had gaming laptops before, and after putting the battery through the wringer for a while, it was a struggle to get 4 hours unplugged, which I simply didn't want to deal with again.
Fun fact: Intel had a couple of CPUs, codenamed Tejas and Jayhawk, that were aiming for 50-ish pipeline stages and 7-10 GHz before being abandoned as basically impossible.