Google’s Chip Designing AI

  • Published Jun 15, 2024
  • Machine learning has been in the news a lot lately. Some of the early hype has died down, but the trend still lives on. And now it has really started to make waves in the chip design world.
    Machine learning and AI in chip design is such a sprawling field that I started to lose myself in all the research. So I figured I'd just go into a recent breakthrough in the chip design field: floorplanning.
    Google has been applying the same AI prowess that allowed them to soundly beat the best Go masters to this obscure but important sub-category of the field.
    Errata:
    4:00 - I misspoke. The graphic is correct. 10 to the 123rd power, NOT 23rd power.
    Links:
    - The Asianometry Newsletter: asianometry.com
    - Patreon: / asianometry
    - The Podcast: anchor.fm/asianometry

Comments • 313

  • @Asianometry
    @Asianometry  2 years ago +24

    Like and subscribe! And if you're interested in other tech deep dives, check out this playlist: th-cam.com/play/PLKtxx9TnH76RiptUQ22iDGxNewdxjI6Xh.html

    • @raylopez99
      @raylopez99 2 years ago +1

      This AI will be the death of EDA vendors like Cadence?

    • @Asianometry
      @Asianometry  2 years ago +3

      No it won’t be. They’ll probably make their own

    • @masternobody1896
      @masternobody1896 2 years ago +1

      @@Asianometry can't wait for AI to get smarter so it can make fast CPUs, so I can get more fps in games

    • @StefanWelker
      @StefanWelker 2 years ago

      I think you should not announce that "you are butchering a name"; either research how it's pronounced or say it however you want. Announcing it is pretty offensive. Those names were pretty easy to pronounce just by reading the letters.

    • @raylopez99
      @raylopez99 2 years ago +1

      @@StefanWelker LOL nice troll.

  • @TechTechPotato
    @TechTechPotato 2 years ago +11

    Thanks for referencing my video!

    • @TechTechPotato
      @TechTechPotato 2 years ago +3

      Synopsys and Cadence both have their own respective data as well

  • @bradsalz4084
    @bradsalz4084 2 years ago +323

    As an integrated IC designer I suppose I should feel a little threatened by such AI technology taking my job. But like all other design tools this will just make the remaining human designers more efficient and accurate. Early in my career I experimented with the then-available "optimization" engines built into Cadence and ADS design tools and ran into the same problem you describe here: the local minimum of the error function is often not the same as the global minimum. So humans still have to figure it out. You can't just run off for coffee and wait for the solution to pop out. You address the floorplanning (layout) problem here. But as a schematic designer it is an exponentially harder task. If you already have an architecture and process node selected, I do concede that a machine will be able to size and place devices faster and more efficiently than any human can. The problem is you have a creative step in front of it that is still in the land of human invention, intuition, and judgement. For now, anyway. I'm sure even that will be better done by machines one day. But for now I remain gainfully employed.
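The local-versus-global-minimum trap described above can be demonstrated with a toy objective (the function is invented for illustration; real EDA optimizers search vastly larger spaces):

```python
import math

def objective(x):
    # Toy 1-D "error function": local minimum near x ≈ 1.5,
    # deeper global minimum near x ≈ -0.5.
    return 0.5 * x**2 + 2.0 * math.sin(3.0 * x)

def greedy_descent(x, step=0.01, iters=10000):
    """Naive local search: move to a neighbor only if it improves."""
    for _ in range(iters):
        here = objective(x)
        left, right = objective(x - step), objective(x + step)
        if here <= left and here <= right:
            return x  # stuck: neither neighbor improves
        x = x - step if left < right else x + step
    return x

# Where you start decides which minimum you reach.
x_a = greedy_descent(2.0)    # trapped near x ≈ 1.5 (local minimum)
x_b = greedy_descent(-1.0)   # reaches x ≈ -0.5 (global minimum)
print(objective(x_a) > objective(x_b))  # True: start A got a worse result
```

This is the "run off for coffee" problem: the answer that pops out depends entirely on where the search was pointed.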

    • @kuantumdot
      @kuantumdot 2 years ago +4

      Very spot on!

    • @GoogleUser-ee8ro
      @GoogleUser-ee8ro 2 years ago +9

      The video said that industry had been using traditional optimization methods such as annealing for a long time, but the TPU's DL+RL approach tackles the problem faster yet with accuracy similar to humans. It makes me wonder how much of the speed gain came from Google's gigantic GPU clusters versus traditional EDA computation power, and how much is truly attributable to algorithmic superiority. DL+RL is supposed to be able to "discover" IC design parameters/traits which are missed by human engineers, but Google's paper draws no such conclusion. Your job is still very safe; human engineers just need more powerful computers to do their job. Another place where I see Google's method being more useful is chip verification. As explained in a previous video, we are running into a crisis of human engineer shortage for verification work. If DL+RL can help, the productivity gain will be enormous.

    • @fukushimaisrevelation2817
      @fukushimaisrevelation2817 2 years ago +4

      yep sorry your IC designer profession is about as obsolete as the horse and buggy. However, there will be a new chip-auditing profession to review AI chip designs to make sure the AI skynet doesn't try to take over and/or destroy the world. Good luck, the rest of mankind is depending on you, no pressure. Aw, who am I kidding, mankind is too reckless to have a human review AI chip designs. Most likely mankind will have a different AI perform a study on the AI chip designs to audit them, and then the government will rubber-stamp the self-regulated industry's AI studies in typical government CYA fashion.

    • @gazz01
      @gazz01 2 years ago +1

      Do I also have to feel threatened as a software engineer?
      I think in the CS field, AI has gained far more superiority than in chip design.

    • @joemerino3243
      @joemerino3243 2 years ago +19

      @@fukushimaisrevelation2817 Imagine getting your entire idea about how AI is going to work out from science fiction movies written by arts majors...

  • @deletechannel3776
    @deletechannel3776 2 years ago +34

    Ah yes, a neural network trained to design chips designed a chip to train neural networks

  • @xelaxander
    @xelaxander 2 years ago +266

    Thanks for being precise about machine learning. There's way too much BS floating around in that field. The reinforcement learning approach seems like another decent tool in the box to tackle a very difficult problem. Honestly that's more than anyone can ask for, imho.

    • @kuantumdot
      @kuantumdot 2 years ago +3

      Spot on

    • @LiveType
      @LiveType 2 years ago +4

      I always laugh when people say AI overlords are closing in on being a reality. GPT and its supermassive dataset is definitely getting closer, but it's not there yet. It did set new records though. I feel like there is still a step missing somewhere, as the processing power is more than sufficient. Maybe by the end of the decade there will be a new paradigm that enables it. Transformers are the new hotness right now, which is what GPT is based on. Very impressive and rather difficult to implement, from my experience.
      Machine learning, I find, is not always the best tool for the job, but it is amazingly versatile and adaptable, and more often than not yields shockingly good results for not much effort. Assuming you know what you are doing.

    • @andreicozma6026
      @andreicozma6026 2 years ago +3

      @@LiveType machine learning is really lower level than AI. AI encompasses ML but not the other way around. AI tools and models are based on ML concepts and approaches at their core. It's a rather fuzzy line. One way to think about it is AI being the high-level systems, while ML is the lower-level concepts that make up those systems.

    • @circuitgamer7759
      @circuitgamer7759 2 years ago +1

      @@andreicozma6026 I don't think AI has to contain ML in every case - preprogrammed rules can be used in an AI, for example. Unless I'm wrong there, but I think I'm right. If I'm wrong let me know...

    • @andreicozma6026
      @andreicozma6026 2 years ago +2

      @@circuitgamer7759 you're actually correct. I guess the more correct way to re-phrase what I said would be that ML is a subset of AI. So all of ML technically counts as "AI", but like you said, not all of AI necessarily has to be part of ML.

  • @windmill1965
    @windmill1965 2 years ago +40

    Although it was quite a number of years ago, I did the floorplanning and physical layout of an analogue power chip. That is a completely different world from the digital circuits presented in this video. I don't know how much has been automated these days, but we had to place individual transistors in the correct orientation relative to the temperature gradient on the chip. Individual interconnects had to be adjusted to the maximum amount of current which could flow in them, symmetry between two transistors or blocks was in some cases paramount, voltage drop on the supply or ground wire could destroy the accuracy of a block, and so on. There were so many constraints that it was difficult to convey them from the electronics designer to the physical designer. The electronics designer would often do the most crucial portions or blocks of the physical design himself.

  • @nisbahmumtaz909
    @nisbahmumtaz909 2 years ago +83

    9:26 "I can't find an explanation for how [insert ML tool] works, but I CAN find how they train it"
    As an ML scientist, this is close to 90% of how it is. The area where it gets a lot more analytical is transfer learning, and a huge chunk of reinforcement learning. While we know what goes into training the neural nets, the developed black-box intuition is as close as it can get to modern magicry.

    • @PS-re4tr
      @PS-re4tr 2 years ago +3

      Is there any hope of figuring out how the ML tools work or do we have to accept that they will remain black boxes?

    • @nisbahmumtaz909
      @nisbahmumtaz909 2 years ago +6

      @@PS-re4tr Oh, it's absolutely not impossible at all. That's why I say that in fields where finding out the nodal weights is important (transfer learning, reinforcement learning), people pay extra close attention to them and how they develop with each iteration. They can become more and more grey-box, based only on how many resources you're willing to devote to researching them.

    • @lolgamez9171
      @lolgamez9171 2 years ago

      Look up kernel machines and machine learning. We've cracked this black box

    • @J3R3MI6
      @J3R3MI6 2 years ago

      Magicry is my new favorite word.

  • @al8-.W
    @al8-.W 2 years ago +10

    I am a junior machine learning engineer in a startup company. Having a tough time getting good with limited support. Still loving it. I love this field for many reasons. My very broad technical interests led me here. I could never choose whether I wanted to study fundamental physics or maths. I also discovered after graduating that I was very interested in hardware, despite hating electronics practicals. Now I'm happy to sit on this gold mine of opportunities. Discovering the very tools for finding out the best way to do basically anything is exciting. The field is very competitive but I'm sure we could use a lot more people. The methodological foundations of machine learning are so entangled with critical thinking and quality scientific reasoning that I think societies will greatly benefit from people getting interested. I hope we get there eventually.

  • @VioletPrism
    @VioletPrism 2 years ago +84

    I feel like this will eventually be the only way forward, with how complicated CPUs have become

    • @platin2148
      @platin2148 2 years ago +4

      Which eventually will make it useless for us, because we can't write software for it if it ignores constraints. And we might have even more hardware vulnerabilities. It's definitely a help though.

    • @kobilica999
      @kobilica999 2 years ago +13

      @@platin2148 It's an optimization problem, so why can't it be constrained?

    • @platin2148
      @platin2148 2 years ago +1

      @@kobilica999 I dunno if you ever looked at a heat map of any of the more complex AIs, but even making that map is incredibly difficult.

    • @trapfethen
      @trapfethen 2 years ago +8

      @@kobilica999 Because constrained optimization is literally one of the hardest problems to solve, specifically because many constraints affect one another. Tweak this variable over here and 3 others change. Unconstrained optimization is much easier in comparison, which is why AI has been deployed much more readily in areas where its function fell squarely in unconstrained-optimization territory.
      Obviously, that isn't to say that AI CAN'T be applied to constrained optimization problems; they can, have, and will be in the future. You have to find a means of modelling the constraints in the reward function of the AI. This will lead the AI, over time, to begin internally building a world model that satisfies the constraints. I make it sound simple here, but there are many gotchas: situations that you didn't think to constrain against because it never occurred to you that they would come about (common sense stuff again). A slight misalignment between the AI's internal world model and the constraints can lead it to erroneous results outside of the test data, etc.
      This is one of the reasons that companies like Tesla put so much effort into collecting shiploads of real-world data, because it is much easier to verify AI efficacy if you cover more use cases within the field.
      Just some thoughts and rantings by a developer. Hope this helped.
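"Modelling the constraints in the reward function," as described above, is often done with a penalty term. A minimal sketch; the density limit, weights, and numbers are all invented for illustration and are not the reward from Google's actual paper:

```python
def reward(wirelength, density, max_density=0.6, penalty_weight=100.0):
    """Toy RL-style reward: minimize wirelength, but express the
    density limit as a soft penalty instead of a hard rule.
    (All numbers here are invented for illustration.)"""
    r = -wirelength                    # shorter wiring -> higher reward
    if density > max_density:          # constraint folded into the reward
        r -= penalty_weight * (density - max_density)
    return r

# A slightly longer but legal placement now outscores a shorter
# placement that violates the density constraint.
print(reward(wirelength=105.0, density=0.55) >
      reward(wirelength=100.0, density=0.70))  # True
```

The agent never sees a rulebook; it just learns that constraint-violating placements score badly, which is exactly where the misalignment gotchas mentioned above can creep in.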

    • @thelelanatorlol3978
      @thelelanatorlol3978 2 years ago

      @@platin2148 It will ignore constraints as much as the human telling it what constraints it has to follow ignores those constraints.

  • @conradwiebe7919
    @conradwiebe7919 2 years ago +9

    Big respect for shouting out TechTechPotato

  • @lekhakaananta5864
    @lekhakaananta5864 2 years ago +57

    ML isn't magic, but it does fit into what you'd expect at the start of the singularity. Now Google can design in days chips that used to take weeks. And what kind of chips did Google use ML to design? Tensor Processing Units, i.e. chips optimized for more ML. So we should expect exponential increases in the hardware-level efficiency of ML techniques, until we run into some limit to the scaling.

    • @MrFaaaaaaaaaaaaaaaaa
      @MrFaaaaaaaaaaaaaaaaa 2 years ago +17

      There are hard limits to what ML can do in this field -- i.e., a perfectly organized chip will not have infinite performance.
      So there are only margins to be gained from ML here. I don't think this is significant in terms of approaching the cyber-singularity.

    • @lekhakaananta5864
      @lekhakaananta5864 2 years ago +14

      @@MrFaaaaaaaaaaaaaaaaa I also don't think you can get the singularity by chip optimization alone, but that's not the important part. The increased ML capability can be generally applied to other fields. Just off the top of my head, if you apply them to molecular simulations and materials science, you might get a better chip production process and thus open up new spaces in chip-design.

    • @andrewferguson6901
      @andrewferguson6901 2 years ago +8

      It's always been like this. We've been using computers to aid in chip design since it was possible. We used good steel tools to make better steel anvils etc.
      All of technology is used to accelerate development of more technology

    • @lekhakaananta5864
      @lekhakaananta5864 2 years ago +4

      @@andrewferguson6901 Well yeah, that's the definition of technology. Increase in capability results in some of that capability being used to further increase capability in ways not possible before.
      The non-trivial thing about singularity arguments is that we're approaching some new speed of this. Which judging by exponential curves of things like GDP, is a reasonable extrapolation. It used to be that metal working took many generations of human experience to self-improve. Now chip design AI can self-improve in an iteration time of weeks.

    • @msclrhd
      @msclrhd 2 years ago +2

      Note that this is using ML to lay out the parts on the chip. For example, the component that handles matrix or tensor multiplication. The ML engine hasn't designed the circuits of those components.

  • @user34274
    @user34274 2 years ago +5

    Your channel- the content, subject matter, brevity of delivery, lack of distracting snazzy video editing, and the minimal, soothing mode of delivery is just brilliant. Love from Australia.

  • @seth_deegan
    @seth_deegan 2 years ago +31

    Can't wait for machine-learning-based city planning!

    • @kristopherleslie8343
      @kristopherleslie8343 2 years ago +2

      Bad idea

    • @seth_deegan
      @seth_deegan 2 years ago +5

      @@kristopherleslie8343 Probably true.

    • @cocidy
      @cocidy 2 years ago

      @OneFortyFour exactly haha

    • @kristopherleslie8343
      @kristopherleslie8343 2 years ago

      @@kezif refer to Elon Musk view info you aren’t up to speed

  • @stevenfranks3131
    @stevenfranks3131 2 years ago +14

    Really enjoy following along as you explore different topics going on in the tech world and beyond. Thanks!

  • @evil0sheep
    @evil0sheep 2 years ago +5

    Great video! One nit: simulated annealing is far less prone to getting stuck in local minima than gradient descent/hill climbing algorithms, at the expense of efficiency and accuracy in finding the minimum. Because of this, a common iterative optimization strategy is to use simulated annealing to get close to the global minimum, then use that as a starting point for a gradient descent algorithm that finds the true global minimum.
    Also, I don't think simulated annealing is a greedy algorithm. Gradient descent algorithms may qualify as greedy algorithms, but it seems really weird to me to call annealing 'greedy'.
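The two-stage strategy this commenter describes (anneal to the right basin, then descend to its exact bottom) can be sketched on a toy double-well function; the objective and the cooling schedule are invented for illustration:

```python
import math, random

def f(x):
    # Toy double-well objective: local minimum near x ≈ 0.96,
    # deeper global minimum near x ≈ -1.04.
    return (x**2 - 1.0)**2 + 0.3 * x

def anneal(x, temp=5.0, cooling=0.995, steps=4000):
    """Stage 1: simulated annealing. Uphill moves are sometimes
    accepted, so the search can hop between basins while hot."""
    random.seed(0)  # deterministic for the demo
    best = x
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

def descend(x, step=1e-3, iters=20000):
    """Stage 2: plain gradient descent to polish the annealed point
    down to the exact bottom of its basin."""
    for _ in range(iters):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

x = descend(anneal(x=3.0))
print(round(f(x), 3))  # objective value at the polished optimum
```

Note the acceptance rule: downhill moves are always taken, uphill moves with probability exp(-Δ/T), which is exactly why annealing is not greedy.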

  • @jerrywatson1958
    @jerrywatson1958 2 years ago +2

    Another great topic and video! John you are on fire! Thanks for all your hard work.

  • @joe7272
    @joe7272 2 years ago +3

    An architectural difference: the professional laid out the macroblocks in a grid-like, organized fashion. The AI did it in a rounder, more organic-looking pattern.

  • @raylopez99
    @raylopez99 2 years ago +21

    An even bigger bottleneck than floorplanning is testing, which has even more (I guess) than 10^9000 possibilities. You should do a video on this, it's been in the news.

    • @Asianometry
      @Asianometry  2 years ago +8

      Oh like verification?

    • @raylopez99
      @raylopez99 2 years ago +8

      @@Asianometry Yes. Test vectors to test every conceivable combination on a combinational and logic circuit is prohibitively large. There's a TH-cam video on this...that you can perhaps elaborate on...let me see if I can find it...ah, here it is, you did it! :) "The Growing Semiconductor Design Problem" Dec 5, 2021, maybe link to it.

    • @vatsan2483
      @vatsan2483 2 years ago +2

      @@Asianometry Yes, this is a big fish, because ML arriving at the best test cases and boundary conditions would be a great tool

    • @williambrasky3891
      @williambrasky3891 2 years ago +2

      @@Asianometry Recently I came across a video by one of the silicon-focused creators.
      I'm paraphrasing (so the exact ratio is likely different than what I state here), but the gist was that over the last decade or so, especially, verification has become a greater and greater resource hog. Most firms have something like 2-4 times the people working on verification vs design. It'll soon grow into such a colossal undertaking as to make current methods infeasible. Apparently, that's where they are especially concentrated on leveraging AI techniques. Makes sense. It's the sort of problem for which NNs are well suited.

    • @waldemaro12345
      @waldemaro12345 2 years ago +2

      @@raylopez99 I think John's recent video was touching on this subject th-cam.com/video/rtaaOdGuMCc/w-d-xo.html

  • @alexscarbro796
    @alexscarbro796 2 years ago +7

    An excellent video.
    Differential Evolution is another good (global) optimiser that is pretty good at not getting stuck in local minima.
    That rotating wafer was beautiful BTW!

  • @benjones1717
    @benjones1717 2 years ago +3

    8:21 I love that baseball pitch flying punch, we need more special moves in baseball.

  • @chrisfisher6700
    @chrisfisher6700 2 years ago +2

    Another brilliant video. Much appreciate your excellent work. Quite curious your thoughts about how long it will take for quantum computing to make an impact on floor planning? How far do simulated annealing solutions such as DWave need to improve before they can be used more efficiently than ML?

  • @lesptitsoiseaux
    @lesptitsoiseaux 2 years ago

    Best new channel I found in 2020. Great job!

  • @tonysu8860
    @tonysu8860 2 years ago +10

    Your attempt at explaining something you don't understand is commendable.
    Let me have a try based on what I know about how Google's AlphaZero machine learning works from a 30,000 foot level and then guess how it's applied to chip design.
    AlphaZero is nearly unique among AIs in that the algorithm teaches itself entirely from the beginning without any human guidance, instruction or intervention. The only things the algorithm is given are the basic parameters of the game/problem, and the algorithm starts with trial and error to discover basic moves/relationships, building its skill from scratch. Essential to the process, and different from much other machine learning, is its use of the Monte Carlo approach, which is to create long and often very complex solutions but not file a final score for that procedure until the very end... This is computationally heavy, but it avoids solutions which might look attractive at first but lead to a less optimal result, while making it possible to consider less optimal next steps but eventually arrive at a better result.
    Another aspect of neural networks your video didn't seem to clearly describe is that there is a big difference between training the algorithm and solving the actual problem.
    Training is performed by running the algorithm constantly, 24/7/365 and may require well over a year to achieve world class capability with over 93% accuracy (comparable to the best humans in the world, fully trained, experienced, and typically the best education available). It's slow and tedious, and typically involves crunching terabytes of data of known solutions (Yes, already solved).
    The algorithm can be used at any time, but the more time spent training the algorithm, the better is the algorithm's capability.
    Then, when you have a new solution, you can run that solution through the algorithm and get a result.
    In your video, you said that the AlphaZero solution was only approximately the same quality as 3 other known ways of creating the solution (the chip floorplan). That suggests to me that the AlphaZero algorithm is probably immature. It might be only equal to one or at most two other methods, but it's my feeling that if matched against 3 other methods... AlphaZero should be able to clearly beat at least one of them, if not all.
    I would guess that within another year, the algorithm should be able to beat every other approach to creating the best floorplan, and that's even with the possibility that chip floorplans will be vastly more complex with such things as stacked 3D layering.
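The Monte Carlo idea this commenter describes (play a whole line out and only score the end, rather than judging each step) can be sketched on an invented toy game; nothing below reflects AlphaZero's actual implementation:

```python
import random

# Invented toy "game": start at 0, each move adds 1, 2, or 3;
# the goal is to land exactly on 10 after six moves.
def rollout(state=0, moves=6):
    """Play one full line of random moves and score only the final
    position -- no intermediate judgments along the way."""
    for _ in range(moves):
        state += random.choice((1, 2, 3))
    return -abs(state - 10)  # the score is filed only at the very end

def evaluate(n_rollouts=2000):
    """Estimate how good the starting position is by averaging many
    complete random playouts from it."""
    random.seed(0)  # deterministic for the demo
    return sum(rollout() for _ in range(n_rollouts)) / n_rollouts

print(evaluate())  # average end-of-game score of the start position
```

Averaging many such full playouts is what makes the approach computationally heavy, but it never rewards a move for merely looking attractive mid-game.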

  • @vinicentus
    @vinicentus 2 years ago +1

    Do you post your sources for the information in your videos anywhere? I would definitely be interested in digging even deeper into many of the topics you present.
    Great video btw👍

  • @negativegamma4453
    @negativegamma4453 2 years ago +3

    this is great. Thanks. I am an ML engineer of sorts. There is a line of thinking that there is a lot of value in a model that is merely equal to a human: you can spin up 1000x instances, whereas you can't really hire 1000x employees. By getting to something like 70% of human performance you can already see the time savings vs having things routed through a human. Also, there is "natural" performance inflation due to better hardware over time: that 70%-of-human-performance model should get something like 20% faster each year, putting it at 84% of human in year 2, then 100.8% in year 3, and so on.
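The compounding in that last sentence checks out:

```python
# Compounding the commenter's numbers: a model at 70% of human
# throughput, sped up ~20% per year by better hardware alone.
relative = 0.70
trajectory = {}
for year in (2, 3):
    relative *= 1.20
    trajectory[year] = round(relative, 3)
print(trajectory)  # {2: 0.84, 3: 1.008}
```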

  • @TomAtkinson
    @TomAtkinson 1 year ago

    I really like this Amidala meme. Luke's gaze that kills suddenly cutting through her playful banter. Oh and great video too bruv! ;)

  • @splintmeow4723
    @splintmeow4723 2 years ago

    The majority of this went over my head. Fantastic work. Got the idea. ❤️

  • @tykjpelk
    @tykjpelk 1 year ago +3

    Inverse design is slowly becoming a powerful technique in my own field of integrated photonics. The idea is that by telling algorithms what we're looking for, they can design extremely efficient devices. This is on the single-device level, not layout. A simple example is a splitter that sends two different wavelengths down different paths or that combines them. A human would typically make a device where small differences add up over a long distance, easily 100s of microns. Not so for an inverse design algorithm. They typically produce QR-code-like patterns a few microns in size that make no sense to a human, but kind of work. What lets us designers sleep at night is that a) the designs are usually impossible to fabricate reliably because they use tiny features and corners, and b) the really impressive ones perform relatively coarse tasks (TE/TM splitting, separating whole frequency bands) with lower efficiency than a human-optimized, physics-based design.

  • @martinsimlastik5457
    @martinsimlastik5457 2 years ago

    Great summary video!!!

  • @leonjones7120
    @leonjones7120 1 year ago

    Great explanation of this technology! Great stuff!

  • @dekev7503
    @dekev7503 2 years ago +7

    Floorplanning is just a small step in chip design. I know this because I'm a master's student in microelectronics engineering and I'm literally taking a course in physical design this semester. There are more complex steps, and floorplanning is just 5% of the topics covered in the course, alongside design-for-test, ATPG, static timing analysis, DRC, etc. The way journalists describe this topic makes it seem like the AI designs the chip from scratch.

  • @AtriumComplex
    @AtriumComplex 2 years ago +4

    Hi there, good video. Just two points of clarification. You said simulated annealing uses an objective equation based on objective factors. This suggests you are reading "objective" as "neutral"; simulated annealing is actually an attempt to minimize an objective (as in goal) function.
    Additionally, the weakness of simulated annealing is not that it gets stuck in a local minimum. Instead, its weakness is that it can only find the approximate global minimum. Simulated annealing is actually a strategy to escape local minima. I like to think of simulated annealing as "smoothing out" the loss landscape, so that peaks aren't so high (which would trap the optimizer) but also valleys aren't as low (which makes the solution approximate).
    I think you did a really good job summarizing, especially since this isn't necessarily your field! :)

    • @reinerfranke5436
      @reinerfranke5436 2 years ago

      The practical problem is more difficult than this. To get a minimum speed, the max net length is one constraint, but the average net length matters for minimum power. So there is one objective but with an additional constraint. In practice, many.

  • @leyasep5919
    @leyasep5919 2 years ago

    Please ! MORE videos on this subject !
    Thanks :-)

  • @helmutzollner5496
    @helmutzollner5496 2 years ago +1

    ... and I love listening to your content! Thank you John!

  • @khatharrmalkavian3306
    @khatharrmalkavian3306 2 years ago +2

    4:45 - You have the terms reversed here. Hill climbing is the naïve algorithm. Annealing is a modification designed to escape local optima. Annealing is a modification to the hill climbing algorithm where you sample the function with large steps, then on each iteration the steps get slightly smaller until you find a stable optimum.
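The shrinking-step scheme this commenter describes can be sketched as follows (toy objective and schedule invented for illustration; classic Metropolis-style annealing shrinks a temperature rather than a step size, but the step-decay variant shown here is what the comment outlines):

```python
import random

def f(x):
    return (x - 1.0) ** 2  # toy objective with its optimum at x = 1

def shrinking_step_search(x, step=4.0, shrink=0.95, iters=500):
    """Sample with large steps first, then shrink the step each
    iteration until the search settles on a stable optimum."""
    random.seed(1)  # deterministic for the demo
    for _ in range(iters):
        # Try several candidates at the current step size; keep the best
        # (the current point is included, so we never move backwards).
        cands = [x] + [x + random.uniform(-step, step) for _ in range(8)]
        x = min(cands, key=f)
        step *= shrink
    return x

print(round(shrinking_step_search(10.0), 2))  # settles near 1.0
```

Early large steps let the search jump across basins; late tiny steps pin down the optimum it has committed to.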

  • @beyondsingularity628
    @beyondsingularity628 2 years ago +2

    The predictability/regularity and high-quality feedback (wirelength and other measures) of the chip design field make it ideal for machine learning! Very optimistic about the trend ❤️

  • @8bitorgy
    @8bitorgy 2 years ago +2

    Saying go is more complex than chess is like saying Cyrillic is more complex than cuneiform because it has more letters.

    • @Asianometry
      @Asianometry  2 years ago

      Deep

    • @gyroninjamodder
      @gyroninjamodder 2 years ago

      Similarly, a best-of-REALLY_LARGE_NUMBER tic-tac-toe match would have a large state space, but it is possible to play optimally.

  • @aniksamiurrahman6365
    @aniksamiurrahman6365 2 years ago

    Hello Mr. John. In a previous video, you talked about the validation problem. Do you think this technology is going to help with that?

  • @raphaelcardoso7927
    @raphaelcardoso7927 2 years ago +2

    You said that according to an Intel study, 50% of the power is spent on interconnect. Do you have a reference for that? I'm doing a study in interconnects and I'm finding it hard to get my hands on those data. Thanks!

    • @reinerfranke5436
      @reinerfranke5436 2 years ago

      In 5nm and thereabouts, interconnect is more than 90%. In the 90s, or in discrete board design, it was the other way around.
      Chip logic is defined by textually written expressions and synthesized to gates. Neither carries information about place and distance, but both define the performance. To me it seems simpler if logic synthesis guided the logic definition, calculating directly from the logic expression using the performance metric. AI could then possibly make expression transformations for a better metric. This process is done manually by chip architects, guided by logic equivalence checkers.

  • @bassmechanic237
    @bassmechanic237 2 years ago

    Awesome video subjects and content

  • @SkillsToLearn
    @SkillsToLearn 1 year ago

    Thank you for the great video!

  • @blengi
    @blengi 2 years ago

    Can the AI determine patterns in local minima/non-local minima to the point where it can, say, generalize and efficiently encapsulate these into some "simpler" higher-level design principles/methodologies, such that one doesn't need the AI tool post-discovery, to perhaps aid the evolution of designs from different perspectives? Or do engineers just interpret the results from the various simulated metrics and thus only optimize over the numbers?

  • @avinashdas1013
    @avinashdas1013 2 years ago

    Lovely documentary on trending topics in chip industry.

  • @AdityaChaudhary-oo7pr
    @AdityaChaudhary-oo7pr 2 years ago

    that was amazing information !!!

  • @mhassaankhalid1369
    @mhassaankhalid1369 7 months ago

    great video Jon

  • @cuanclifford5922
    @cuanclifford5922 2 years ago +2

    Google's Chip-Designing AI*
    It's the difference between an AI that designs chips and a chip that is designing AI.

  • @snawsomes
    @snawsomes 2 years ago

    Would be interested to see how this could be used for indoor aeroponic farms.

  • @depth386
    @depth386 2 years ago

    After learning about boolean gates I started working on 4 bit CPU sub-components like Adder, Comparator, a memory piece, etc. Didn’t get far but it was a good intellectual activity

  • @dmitriikruglov320
    @dmitriikruglov320 2 years ago +1

    I guess in analog IC design where you start with the transistor model rather than a logic block these ML/AI tools will come much later. A different frequency, a different spec, a different application - for each of those you’ll often have to change the whole circuit in a non-trivial way to accommodate for it.

    • @reinerfranke5436
      @reinerfranke5436 2 years ago

      Ok, to pour a little water on this: look at an opamp. By specifying the databook specs you can select a minimum topology and numerically size the devices. No need for AI. It would be far easier to have a "topology Google search" over all past built circuits and apply them to your problem. It's simply a secret curtain which leads most analog IC designers to reinvent a solution.

  • @tonyduncan9852
    @tonyduncan9852 2 years ago +1

    The future is unimaginable. Nearly. Cheers.

  • @fischX
    @fischX 2 years ago +1

    The shocking thing is not that it is good, but that it is good at basically the first shot. Compared to chess, it's the "look, it beats *a* human" moment; there is probably plenty of room for improvement in speed and quality.

  • @EyesOfByes
    @EyesOfByes 2 years ago +3

    4:02 *NICE.*

    • @Gameboygenius
      @Gameboygenius 2 years ago

      Ikr? I thought it was a missed meme opportunity, but Jon had us covered.

  • @TheEVEInspiration
    @TheEVEInspiration 2 years ago

    About floorplanning: looking at the problem, I instantly see another solution.
    1. Start by generating, for each block, N solutions with different edge-interface layouts (they do not have to be perfect at this stage).
    2. Do the usual optimization, but with the freedom to select the best-fitting prepared versions.
    3. Once a good overall layout is found, optimize the interfaces between the blocks, and then the block internals to fit those interfaces.
    Overall, it's an outside-in approach, but with a pre-processing step that optimizes the overall layout first.

    • @slicer95
      @slicer95 2 years ago

      A floorplanning algorithm is not supposed to touch the blocks; the granularity should not go below the block level. Otherwise it becomes a much harder problem.

  • @jack504
    @jack504 2 years ago

    Could you do a video about the Tesla Dojo? It would be great to know more about it, e.g. efficiency for machine learning Vs other commercially available products, whether Tesla poached expertise from elsewhere or outsourced some of the design?

  • @Bob-em6kn
    @Bob-em6kn 2 years ago +1

    These are only early studies. If this takes off, it will be revolutionary.

  • @odaialzrigat
    @odaialzrigat 2 years ago +1

    Wonderful content

  • @jessstuart7495
    @jessstuart7495 2 years ago +1

    2% to 5% chip performance increase (power, or speedup) is well within the region of diminishing returns. The real advantage is the reduction in time-to-market.

  • @FuzzyDunlots
    @FuzzyDunlots 2 years ago +1

    This could design rolling papers better. A needed upgrade we all crave to be sure.

  • @KoviPlaysPC
    @KoviPlaysPC 2 years ago

    love the video!

  • @chaitanya.pinnali
    @chaitanya.pinnali 2 years ago

    Can you please make a video about Lam Research as well?

  • @nonetrix3066
    @nonetrix3066 2 years ago

    Ah, I had this idea; seems it's already being done :P

  • @rem9882
    @rem9882 2 years ago

    Have you made a video on the European RISC-V chip by SiPearl?

  • @stevegunderson2392
    @stevegunderson2392 2 years ago +1

    Think how much coffee will be saved by floorplanning with machine learning! I have been doing floorplanning for over 40 years, and I really like coffee!

  • @anterprites
    @anterprites 2 years ago +1

    6:51 But designs exist! Yes, they do :D

  • @StanUlch
    @StanUlch 1 year ago

    Logistical algorithms could play a useful role in determining parameters of significance between nodes. Just an observation.

  • @umountable
    @umountable 2 years ago

    4:42 Simulated annealing is not greedy. In the context of computer-science algorithms, "greedy" means that the algorithm does not plan into the future when making decisions, but selects whatever looks best right now. That will often not get you to the globally optimal solution.
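The distinction is the acceptance rule: simulated annealing sometimes accepts worse moves, which a greedy search never does. A minimal, hypothetical Python sketch (the toy cost function and all parameters are made up for illustration; this is not the video's algorithm):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000):
    """Unlike a greedy search, accept WORSE moves with probability
    exp(-delta/T); this lets the search escape local minima."""
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            c = cost(x)
            if c < best_cost:
                best, best_cost = x, c
        t *= cooling  # cool down: late in the run it behaves almost greedily
    return best, best_cost

# Toy 1-D landscape with a local minimum near x=2 and a deeper one near x=-3
f = lambda x: (x - 2) ** 2 * (x + 3) ** 2 + x
random.seed(0)
best, val = simulated_annealing(f, lambda x: x + random.uniform(-1, 1), x0=4.0)
print(round(best, 2), round(val, 2))
```

A purely greedy version would drop the `random.random() < math.exp(...)` branch and get stuck in the first valley it enters.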

  • @tobiasmmueller
    @tobiasmmueller 1 year ago

    4:05 OMG, it’s over 9000!!1!

  • @nicholas6186
    @nicholas6186 2 years ago

    It looks like the ball throws the character instead of the other way around. 8:25

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 2 years ago

    thank you and posted to reddit

  • @deliciouspops
    @deliciouspops 2 years ago +1

    It would be pretty accurate to compare machines to humans on a performance-per-watt ratio :D There is a reason we do not use calculators.

  • @proxy1035
    @proxy1035 2 years ago

    Chip design and custom ASICs will always be one of those far-off dreams of mine that I will never fulfill, because it just looks really, really complex and is pretty expensive.

  • @davecool42
    @davecool42 2 years ago +1

    FYI your microphone is still buzzing. Great video nonetheless!

  • @ktofa3822
    @ktofa3822 2 years ago

    To improve my own skills, I'm looking for analog IC design courses. Any recommendations? Thx

  • @video_explorer
    @video_explorer 2 years ago

    Over 9000!!!! Liked !!

  • @scottspitlerII
    @scottspitlerII 2 years ago

    5:00 You are literally talking about the halting problem in computing; annealing is probably an NP or NP-hard problem.

  • @rikvermeer1325
    @rikvermeer1325 2 years ago

    thankyouuuuu!!!!!!

  • @lidarman2
    @lidarman2 2 years ago +3

    @12:35. Having a maid is weird enough but a maid quarters without a shower? :P

    • @alexmartian3972
      @alexmartian3972 2 years ago

      3 showers and not a single bathtub on the plan.

  • @vishnusureshperumbavoor
    @vishnusureshperumbavoor 9 months ago

    Now the trend is back

  • @cougarten
    @cougarten 2 years ago +6

    just fyi: your volume is a bit low.

    • @anonimuse6553
      @anonimuse6553 2 years ago

      I found the volume much better this time.
      Curious.

  • @quantum7401
    @quantum7401 2 years ago

    8:27 YES!

  • @vermilli5170
    @vermilli5170 2 years ago

    With big tech such as Apple, Google, and Amazon starting to design their own chips, do you see them taking market share from companies such as AMD and Intel?

    • @SebastianRosca
      @SebastianRosca 2 years ago

      Considering that Apple made millions of Macs with Intel silicon and now ships its own chips, in a way you can say the market share has already changed. The same is valid for Google's and Amazon's data centers, which used to rely on Xeon processors. From a direct consumer perspective, we won't be seeing the equivalent of a new Ryzen or Core i7 from Google or Apple, but when you consider that a datacenter has thousands or tens of thousands of CPUs and GPUs, and that there are hundreds of such datacenters scattered around the globe, it's easy to see that Intel especially has lost quite a bit of ground.

    • @Nadox15
      @Nadox15 1 year ago

      @@SebastianRosca It will be interesting to see whether the x86 architecture is even relevant in the coming decades. Intel is lucky that so much software is based on its architecture. If more and more software gets ported to ARM or even RISC-V, Intel will lose more and more market share (at least with its own architecture).

  • @atsirkkennycom1628
    @atsirkkennycom1628 2 years ago

    Very refreshing!! I wonder if there is an AI/ML package that we can "pick up" and apply to our "daily" plan... that would help our productivity... haa haa haa!! :)

  • @johnfilmore7638
    @johnfilmore7638 1 year ago

    Using the last example, AI design of a home floorplan: it's hard to see this being more efficient without spending inordinate amounts of time defining the constraints of each element in relation to the others before running the AI.
    Human intuition for good ergonomics (receptacle and door locations, counter heights, beds relative to night-tables, an outdoor walkway down a grade, for example) is mind-bogglingly challenging and time-consuming to determine and quantify as constraint rules for an AI engine.
    Humans are creatures of habit and have intuitive ways of navigating. If the AI determines that a walkway's grade and width are most efficient for human anatomy to traverse, but the walkway is perceived to be too narrow or to have an unprotected drop-off, then you will not have a happy homebuyer, even if they could learn to feel safe walking it at night.
    I believe there will always need to be a hybrid of human design and machine learning. Using home floorplan design as an example, AI would be great at taking a human-designed floorplan, built from industry-standard blocks and assemblies (standard-size trusses, 2x4s, drywall, etc.), and, with construction code written as constraints, generating an optimal routing of electrical, gas, plumbing, and so on.
    An obvious constraint is designing around industry-standard material sizes to reduce the amount of custom cutting needed. A buyer wanting a "custom home" probably doesn't mean a home that can't fit an industry-standard fridge, freezer, ducts, or sub-flooring. Specific features are perceived as custom, and some may actually need to be nonstandard; those elements would need to be human-designed, unless this was simply an exercise in "seeing what the AI is gonna make".

  • @dongshengdi773
    @dongshengdi773 2 years ago +2

    i have an AI friend …
    the perfect partner .
    She can do anything for me;
    Cook, wash the dishes ,
    mop the Floor , do gardening,
    even gives me a massage.

  • @ddoice
    @ddoice 2 years ago

    At 4:05, just to give some context: the estimated number of atoms in the entire observable universe is about 10^80.

  • @onetruekeeper
    @onetruekeeper 1 year ago

    The A.I. designs chips within the set of rules programmed into it. It cannot design outside those rules, since machines cannot consciously decide or create.

  • @augustday9483
    @augustday9483 2 years ago

    A lot of this feels like the Traveling Salesman Problem. There are NP-hard math problems here that humans have simply not been able to solve efficiently (and that may not be solvable in polynomial time).
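The combinatorial blowup is easy to see in a brute-force sketch. A minimal, hypothetical Python example (the distance matrix is made up for illustration): checking all (n-1)! tours is fine for 4 cities but hopeless at the scale of a real floorplan.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustively check every tour: O((n-1)!) permutations."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    # Fix city 0 as the start so rotations of the same tour are not recounted
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# 4 cities, symmetric distances (made-up numbers for the example)
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
cost, tour = tsp_brute_force(dist)
print(cost, tour)  # 23 (0, 1, 3, 2, 0)
```

At n=20 this loop would already need roughly 1.2 * 10^17 iterations, which is why heuristics like annealing (or learned policies) are used instead.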

  • @ithaca2076
    @ithaca2076 2 years ago

    So this will be my job then... 8 years from now, once I get a master's.

  • @gregdee9085
    @gregdee9085 8 months ago

    It has always been the case, for decades: the same tech that auto-routers have been using for PCB or FPGA routing, etc.

  • @potatofuryy
    @potatofuryy 1 year ago

    Well I guess I have an easier time deciding my career path now.

  • @mostlymessingabout
    @mostlymessingabout 2 years ago

    Going deeper... 😎

  • @fg786
    @fg786 9 months ago

    We should not forget that the neural networks solving difficult problems run on machines that are vastly more power-hungry than the human brain. AlphaGo ran on 2,000 CPUs and around 400 GPUs, each probably drawing at least 200 W. A typical human runs on about 100 W, and the brain uses less than a third of that.
    There is a long way to go in this regard.

    • @JameBlack
      @JameBlack 6 months ago

      A typical human cannot multiply two 3-digit numbers.
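The back-of-envelope power comparison a few comments up can be made concrete. A small sketch using only that comment's own rough figures (all numbers are the commenter's estimates, not measurements):

```python
# Rough figures from the comment above: estimates, not measured values
CPUS, GPUS = 2000, 400
WATTS_PER_DEVICE = 200   # assumed average draw per CPU or GPU
BRAIN_WATTS = 20         # human brain, roughly 20 W

alphago_watts = (CPUS + GPUS) * WATTS_PER_DEVICE
print(alphago_watts)                # 480000 W, roughly half a megawatt
print(alphago_watts / BRAIN_WATTS)  # 24000.0, about 24,000x the brain's budget
```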

  • @JinKee
    @JinKee 2 years ago

    “droids building droids? how perverse!” - C-3PO, The Revenge of the Sith

  • @rossadew4033
    @rossadew4033 2 years ago

    Off to TechTechPotato's channel.

  • @dec13666
    @dec13666 2 years ago

    Me, an Electronics Engineer turned into AI Researcher: _Beautiful 🥺👍_

  • @AndrewSmith-cd5zf
    @AndrewSmith-cd5zf 2 years ago

    And that’s exactly how skynet became active…

  • @stimpyfeelinit
    @stimpyfeelinit 2 years ago

    nice pic at 7:07

  • @user-xs3rz1jj4i
    @user-xs3rz1jj4i 2 years ago

    Does the Apple M1 use this?

  • @SLPCaires
    @SLPCaires 2 years ago +4

    I wonder what the chips would look like if they were AI designed down to the smallest details and optimized for the use case. Would it resemble something even more organic?

  • @herp_derpingson
    @herp_derpingson 7 months ago

    12:45 The maid's quarters look like a jail cell.