A great, fun lady, and what an achievement. Sophie Wilson.
I still have the first widely available commercial computer she invented, the Acorn System 1. In the 40-plus years since then I have given away all my other computers, but kept the Acorn System 1. Thank you, Sophie!
I think this is actually good news :) For me as a consumer it means my processor will be viable for much longer! I remember having a Pentium 75 MHz which was completely and utterly useless once the 1 GHz+ models came out. Then came the multicore era, which forced everyone to switch again. But now it seems things have settled down after all. As I see it, desktop CPUs have a sweet spot at around 4-8 cores at ~3.6-4 GHz (without boost clocks!). If you have something like that, you're good.
I have an Ivy Bridge quad-core i5-3570K, a six-year-old (2012) 22 nm chip which is doing just fine at around 3.8 GHz. In most programs and games a more recent i5 from 2017-2018 is like... 20-30% faster? Yeah, no reason to upgrade at all :)
Again, I love that. It means those expensive chips stay viable for longer, and you might even be able to sell them after using them for a few years; or you can buy a used CPU, save a lot of money and still get good performance.
@ktcool100 Yeah, I understand it's bad for the lady, because she's interested in selling newer, faster chips for a lot of money. I, on the other hand, am interested in buying fast chips as cheaply as possible.
Excellent, gives a lot of insight. More people should watch this video ;)
A great lecture by a great lady. Moore's law broke about 10 years ago IMO, certainly in terms of consumer CPUs. Sophie reaffirms the issues that even I as a consumer have noticed, which CPU manufacturers aren't keen on advertising: fundamental core performance, clock for clock, has not massively improved since the Intel Core 2 Duo design, and due to heat density, 28 nm and smaller designs suffer from heat dissipation issues and hence frequency limits. The 2500K was more overclockable than current designs (4.8-5 GHz on a well-binned chip), with far better overclocking results than Ivy Bridge and newer, smaller-process designs. The future, as Sophie shows, needs a revolution to address the fundamental issues thrown up by further process shrinks, as the traditional ways of gaining performance have reached their limits. This is also why I have only just now upgraded from an i7 920 to a similar-process LGA 1366 six-core Xeon. Most advances now may come from software rather than hardware, until a big sea change in CPU materials and design takes place.
That's why the CISC (x86) design is becoming obsolete and RISC (ARM) will become much more relevant.
32:30 She pointed to the wrong processor. The C2D etc. are out-of-order superscalar with _enormous_ in-flight queues, and their proper ancestor is the Pentium Pro, not the Pentium.
It's so ironic Sophie Wilson invented the technology that prevented her pessimistic predictions from happening :)
It's good to listen to someone who really knows her stuff tell us what we already suspected.
So the best thing to do is to not worry about the CPU speed in your PC and to:
1, get an SSD to increase data storage and retrieval speed, and
2, offload as much as possible to services on the internet.
Wow! A nuclear reactor fuel rod at just above 100 W/cm² makes you think!!!
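A back-of-envelope sketch of where that comparison comes from. The figures below are assumed round numbers for illustration (a roughly 100 W desktop CPU dissipating through a roughly 1 cm² die); real dies and TDPs vary by chip.

```python
# Power density in W/cm^2, with assumed round numbers: a ~100 W desktop
# CPU over a ~1 cm^2 die lands right at the ~100 W/cm^2 of a reactor
# fuel rod that the talk mentions.
def power_density_w_per_cm2(watts, die_area_cm2):
    """Return dissipated power per square centimetre of die area."""
    return watts / die_area_cm2

print(power_density_w_per_cm2(100.0, 1.0))  # 100.0
```

The point is only that dividing a typical TDP by a typical die area lands in fuel-rod territory, which is why heat, not transistor count, is the binding constraint.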
It would have been polite to give her a good round of applause at the end of her talk.
You mean like the one at ~43:40?
"The revolution" is going to be cooling inside the processor die, in the tight space of millions of hot transistors. In a newer laptop of mine the hot air can't even get out, which suggests no real care was taken over cooling. That it slows itself down to cut power is another matter; it plays a role, but mostly for battery life.
38:20 This part is hilarious, because now we're stuck with Intel's 14+++ process for another two years, probably through to 2022.
32:16 She's pointing at the wrong processor. It's the P6/Pentium Pro that became the basis of all of Intel's future mainstream processors. The Pentium was an in-order, 32-bit core without register renaming, with its L2 cache running at the system FSB clock.
Yes, the Pentium Pro and the Pentium II and the Pentium III (all based on the P6 microarchitecture) have been the basis of the Pentium M and the Core chips.
The first, in-order-execution Atom chips really were closer to the Pentium than to the Pentium Pro/II/III.
She is right: on AMD's 7 nm chips, half the transistors are switched off due to power consumption. They are not good for 125 W, or the Zen cores would easily melt, which the 14 nm parts do not.
Since Moore's law is running out, more lanes are next: we went from 8-bit to 16-bit to 32-bit, and 64-bit wide, or maybe 128-bit wide, is next, in parallel. Or ARM RISC at 1-2 instructions per cycle for half the heat. What's next? RISC-V, hopefully.
We have already had 64-bit processors for the last 10+ years.
This is a nice talk, but it really, _really_ irritates me to say we need a revolution in software to address this. This is the way most hardware engineers think, unfortunately. Better to think of this like abstract algebra. You start with your nice closed field where everything associates and commutes, and slowly you take things away, but you keep on trucking. In databases, we went from consistency to eventual consistency. In machine learning we went from interpretable models to black-box models. How far you get depends on what you're willing to lose. This is primarily based on having a more clear-eyed view of the problem domain, rather than a software revolution. Do you really need absolute, immediate, perfect consistency and interpretability at all times? You _want_ that, but you're a glutton for simple things. Eventually you realize that your gluttony is the major obstacle of progress, not bad software paradigms, and the wheel continues to turn.
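The consistency-to-eventual-consistency trade the comment describes can be sketched with a grow-only counter, one of the simplest conflict-free replicated data types. All names here are illustrative, not from any particular library; the point is that increments need no coordination, and replicas converge by merging.

```python
# A minimal grow-only counter (G-counter) sketch. Each replica only
# increments its own slot; replicas converge by taking element-wise
# maxima, a merge that is commutative, associative, and idempotent.
# You give up reading an immediately-consistent total in exchange for
# coordination-free writes.
class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.id] += 1  # purely local, no coordination

    def merge(self, other):
        # Element-wise max never loses an increment, in any merge order.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

a = GCounter(0, 2)
b = GCounter(1, 2)
a.increment(); a.increment()   # two increments on replica 0
b.increment()                  # one increment on replica 1
a.merge(b)                     # replicas exchange state whenever convenient
print(a.value())               # 3
```

Between merges, `a` and `b` disagree on the total; after merging they agree, which is exactly the weaker guarantee being traded for scale.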
34:12 in the year 2014 we will have 8nm? looks like TSMC were not at the presentation ;)
The slide shows 2024. TSMC has had 7 nm for a while and is at 5 nm in 2020, but as she explained, these are more marketing numbers than practical engineering ones. I would say her prediction is still reasonable.
Wow, some of this hasn't aged well. Integration of more special-purpose units: I guess NPUs and more powerful integrated GPUs count, and we are still expecting more heterogeneous designs. But 8 nm in 2024? At Intel, perhaps, but TSMC was on 5 nm in 2020 and is actively working on 3 nm. And Intel struggling with 14 nm was less an indication that things weren't getting better as a whole than a sign that Intel had lost the ball. If you think 14 nm taking a year was bad, Intel's 10 nm process has something to say to you.
OpenCL? Yeah, still not really happening. And the 4-20 cores maximum is not really holding either, with server chips getting closer to 100 cores than to that.
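Why more cores keep getting added yet disappoint is captured by Amdahl's law: the serial fraction of a program caps the speedup no matter how many cores you throw at it. A minimal sketch, with an assumed 95% parallel fraction chosen only for illustration:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# parallelises perfectly and (1 - p) stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, 100 cores fall far short of 100x.
for n in (4, 20, 100):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 4 -> 3.5x, 20 -> 10.3x, 100 -> 16.8x
```

This is why the talk's pitch is heterogeneous designs and special-purpose units rather than simply more identical cores: the serial remainder, not the core count, becomes the limit.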
Well, that explains the $999.99 iPhone. $999.99 for a half dark chip, wow.
29:00 40:40 Wow, this talk is from 2014, yikes. That is exactly what Apple has been doing for the past few years: they design their own hardware, and they have the right people writing the needed custom software for their chips. This will get really interesting with Apple silicon on the desktop and in laptops.
"6502 - as poor as this." (!) TEN years earlier than yours, Sophie. C'mon.
She sounds just like Roger.
Oh, shots fired, methinks.