The A.I. is designing the chips within the set of rules programmed into it. It cannot design outside those rules since machines cannot consciously decide or create.
Chip design and custom ASICs will always be one of those distant dreams of mine that I'll never fulfill, because it just looks really complex and is pretty expensive.
"Machine Learning is not superhuman magic. It is based on data created or curated by humans.." That would be the case in supervised learning but this looks like reinforcement learning, so it's not bounded by the quality/size of a training dataset, only by the number of learning iterations. Thank you for another great video and topic!
This is a good example of why AI can surpass humans in expert system domains. You pointed out that the AI found solutions with performance comparable to those generated by humans, but you also pointed out that it generated a solution in 24 hours whereas the humans took 6 months. The thing is that the cost of the AI is only electricity, and it can generate 180 competitive solutions in the time it takes a team of salaried humans to make one solution. In that case, the odds that one of the AI solutions is the best one are 180/181 (99.45%), and the AI does this at a fraction of the cost. Additionally, in the rare cases where the humans win the race with their single entry, that entry can be added to the AI's training set for an immediate improvement. Moreover, the efficiency of the AI can improve substantially with application-specific hardware, and meta-analysis of its work can potentially be used to generate "unnamed" rules that radically improve its efficiency and/or quality. Humans can also improve, of course, but the way our minds work is largely incompatible with unnamed rules (allowing for zen masters here), so there's a lot of extra overhead involved.
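A toy version of that probability argument, under the (strong) assumption that each of the 181 candidate floorplans is equally likely to turn out best; the counts are just the ones quoted above:

```python
# If the AI produces 180 candidate floorplans in the time a human team
# produces 1, and every one of the 181 candidates is assumed equally
# likely to be the single best design, the best one comes from the AI
# with probability 180/181.
ai_candidates = 180       # count assumed in the comment above
human_candidates = 1
total = ai_candidates + human_candidates

p_ai_wins = ai_candidates / total
print(f"P(best design came from the AI) = {p_ai_wins:.4f}")  # ~0.9945
```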
Using the last example of AI design of a home floorplan, it's hard to see this being more efficient without spending inordinate amounts of time defining the constraints of each element in relation to the others before running the AI. Human intuition for good ergonomics is mind-bogglingly challenging and time-consuming to quantify into constraint rules for an AI engine: receptacle and door locations, counter heights, the distance from bed to night table, the grade of an outdoor walkway, and so on. Humans are creatures of habit and have intuitive ways of navigating a space. If the AI decides a walkway grade and width is most efficient for human anatomy to traverse, but it is perceived as too narrow or as having an unprotected drop-off, you will not have a happy homebuyer, even if they could learn to feel safe walking it at night. I believe there will always need to be a hybrid of human design and machine learning. Sticking with the home floorplan example, AI would be great at taking a human-designed floorplan built from standard blocks and assemblies (industry-standard trusses, 2x4s, drywall, etc.) and, with building codes written as constraints, generating an optimal routing of electrical, gas, plumbing, and so on. An obvious constraint is designing around industry-standard material sizes to reduce custom cutting. A buyer wanting a "custom home" probably doesn't mean a home that can't fit an industry-standard fridge, freezer, ducts, or sub-flooring. Some features really do need to be custom and non-standard, and those elements would need to be human-designed, unless this is simply an exercise in "seeing what the AI is gonna make."
A lot of this feels like the Traveling Salesman problem. There are NP-hard math problems here that humans have simply not been able to solve efficiently (and that may be unsolvable in anything better than factorial time).
Very refreshing!! I wonder if there is an AI/ML package we can just "pick up" and apply to our "daily" plan .. that would help our productivity .. haa haa haa!! :)
How small is a silicon circuit, you might ask? 🙃 Try this idea for size: focus a laser beam from Earth onto a fingernail on the Moon and lithograph a circuit line on it. That small. So have they reached the atomic limit on size? Never say done; there's always some guy with a new idea. 2035: some tech scientist runs into work at some lab yelling "I got it... I got it!" like Taggart in Blazing Saddles :)
I think the term "AI" is eventually simply going to mean ML. It's joked that an approach is only AI while it hasn't been adopted. Most people wouldn't consider the A* algorithm, genetic algorithms, Bayesian networks, or B-tree searches to be AI at this point. However, neural-network-based ML is really, truly AI, as it replicates the processes nature uses to generate intelligence in the first place. Everything else is simply search algorithms with some innovative flourishes.
The conclusion is no longer correct, since we no longer only have systems trained with human data. Indeed, the second, stronger version for Go did not start with human games (as the version that beat Lee Sedol did); it started from the get-go playing only against itself, given nothing but the rules of the game. It might be that the combination of the problem's complexity and the quality of human expert intuition places this problem out of reach of this second-generation ML approach for now. But only might, and even if that is the case at this moment in time, it would be hard to imagine it being a hard limit unless one introduces unscientific assumptions about human intelligence.
It's 2024. Google still hasn't made any profit from AI-aided chip design, while Synopsys' stock price has doubled by providing AI assistance in the chip design process. What is wrong with you, Google?
Like and subscribe! And if you're interested in other tech deep dives, check out this playlist: th-cam.com/play/PLKtxx9TnH76RiptUQ22iDGxNewdxjI6Xh.html
Will this AI be the death of EDA vendors like Cadence?
No it won’t be. They’ll probably make their own
@@Asianometry Can't wait for AI to get smarter so it can make faster CPUs, so I can get more FPS in games.
I think you should not announce that "you are butchering a name". Either research how it's pronounced or say it however you want; announcing it is pretty offensive. Those names were pretty easy to pronounce just by reading the letters.
@@StefanWelker LOL nice troll.
Thanks for referencing my video!
Synopsys and Cadence both have their own respective data as well.
As an IC designer I suppose I should feel a little threatened by such AI technology taking my job. But like all other design tools, this will just make the remaining human designers more efficient and accurate. Early in my career I experimented with the then-available "optimization" engines built into the Cadence and ADS design tools and ran into the same problem you describe here: a local minimum of the error function is often not the global minimum. So humans still have to figure it out. You can't just run off for coffee and wait for the solution to pop out. You address the floorplanning (layout) problem here, but for a schematic designer it is an exponentially harder task. If you already have an architecture and process node selected, I do concede that a machine will be able to size and place devices faster and more efficiently than any human can. The problem is that there is a creative step in front of it that is still in the land of human invention, intuition, and judgement. For now, anyway. I'm sure even that will be done better by machines one day. But for now I remain gainfully employed.
Very spot on!
The video said that industry has been using traditional optimization methods such as annealing for a long time, but that the DL+RL approach used for the TPU tackles the problem much faster with accuracy similar to humans. It makes me wonder how much of the speed gain comes from Google's gigantic GPU clusters versus the computing power behind traditional EDA tools, and how much is truly attributable to algorithmic superiority. DL+RL is supposed to be able to "discover" IC design parameters/traits that human engineers miss, but Google's paper draws no such conclusion. Your job is still very safe; human engineers just need more powerful computers to do their work. Another place where I see Google's method being more useful is chip verification. As explained in a previous video, we are running into a crisis-level shortage of human engineers to do verification work. If DL+RL can help there, the productivity gain will be enormous.
Yep, sorry, your IC designer profession is about as obsolete as the horse and buggy. However, there will be a new chip-auditing profession to review AI chip designs and make sure the AI Skynet doesn't try to take over and/or destroy the world. Good luck; the rest of mankind is depending on you, no pressure. Ah, who am I kidding, mankind is too reckless to have a human review AI chip designs to make sure the AI Skynet doesn't try to take over. Most likely mankind will have a different AI perform a study on the AI chip designs to audit them, and then the government will rubber-stamp the self-regulated industry's AI studies of the AI chip designs, in typical government CYA fashion.
Do I also have to feel threatened as a software engineer?
I think AI has made far more headway in the CS field than in chip design.
@@fukushimaisrevelation2817 Imagine getting your entire idea about how AI is going to work out from science fiction movies written by arts majors...
9:26 "I can't find an explanation for how [insert ML tool works], but I CAN find how they train it"
As an ML scientist, this is close to 90% of how it is. The areas where it gets a lot more analytical are transfer learning and a huge chunk of reinforcement learning. While we know what goes into training the neural nets, the black-box intuition they develop is as close as it gets to modern magicry.
Is there any hope of figuring out how the ML tools work or do we have to accept that they will remain black boxes?
@@PS-re4tr Oh, it's absolutely not impossible. That's why I say that in fields where understanding the node weights is important (transfer learning, reinforcement learning), people pay extra close attention to them and how they develop with each iteration. They can become more and more grey-box, depending on how many resources you're willing to devote to researching them.
Look up kernel machines and machine learning. We've cracked this black box
Magicry is my new favorite word.
Although it was quite a number of years ago, I did the floorplanning and physical layout of an analogue power chip. That is a completely different world from the digital circuits presented in this video. I don't know how much has been automated these days, but we had to place individual transistors in the correct orientation relative to the temperature gradient on the chip. Individual interconnects had to be adjusted to the maximum current that could flow in them, symmetry between two transistors or blocks was in some cases paramount, a voltage drop on the supply or ground wire could destroy the accuracy of a block, and so on. There were so many constraints that it was difficult to convey them from the electronics designer to the physical designer. The electronics designer would often do the most crucial portions or blocks of the physical design himself.
Thanks for being precise about machine learning. There's way too much BS floating around in that field. The reinforcement learning approach seems like another decent tool in the box to tackle a very difficult problem. Honestly, that's more than anyone can ask for, imho.
Spot on
I always laugh when people say AI overlords are close to becoming a reality. GPT and its supermassive dataset is definitely getting closer, but it's not there yet. It did set new records, though. I feel like there is still a step missing somewhere, as the processing power is more than sufficient. Maybe by the end of the decade there will be a new paradigm that enables it. Transformers, which GPT is based on, are the new hotness right now. Very impressive and rather difficult to implement, in my experience.
Machine learning, I find, is not always the best tool for the job, but it is amazingly versatile and adaptable, and more often than not it yields shockingly good results for not much effort. Assuming you know what you are doing.
@@LiveType Machine learning is really lower-level than AI. AI encompasses ML, but not the other way around. AI tools and models are based on ML concepts and approaches at their core. It's a rather fuzzy line. One way to think about it is AI being the high-level systems, while ML is the lower-level concepts that make up those systems.
@@andreicozma6026 I don't think AI has to contain ML in every case - preprogrammed rules can be used in an AI, for example. Unless I'm wrong there, but I think I'm right. If I'm wrong let me know...
@@circuitgamer7759 You're actually correct. I guess the more correct way to re-phrase what I said is that ML is a subset of AI. So all of ML technically counts as "AI", but like you said, not all of AI necessarily has to be part of ML.
I am a junior machine learning engineer at a startup. Having a tough time getting good with limited support, but still loving it. I love this field for many reasons. My very broad technical interests led me here; I could never choose whether I wanted to study fundamental physics or maths. I also discovered after graduating that I was very interested in hardware, despite hating electronics practicals. Now I'm happy to sit on this gold mine of opportunities. Discovering the very tools for finding the best way to do basically anything is exciting. The field is very competitive, but I'm sure we could use a lot more people. The methodological foundations of machine learning are so entangled with critical thinking and quality scientific reasoning that I think societies will greatly benefit from people getting interested. I hope we get there eventually.
Ah yes, a neural network trained to design chips designed a chip to train neural networks
Your channel (the content, subject matter, brevity of delivery, lack of distracting snazzy video editing, and the minimal, soothing mode of delivery) is just brilliant. Love from Australia.
I feel like this will eventually be the only way forward, given how complicated CPUs have become.
Which will eventually make it useless for us, because we can't write software for it if it ignores constraints. And we might get even more hardware vulnerabilities. Still, it's definitely a help.
@@platin2148 It's an optimization problem, so why can't it be constrained?
@@kobilica999 I dunno if you've ever looked at a heat map of any of the more complex AIs, but even making that map is incredibly difficult.
@@kobilica999 Because constrained optimization is literally one of the hardest problems to solve, specifically because many constraints affect one another. Tweak this variable over here and three others change. Unconstrained optimization is much easier by comparison, which is why AI has been deployed much more readily in areas where its function fell squarely in unconstrained-optimization territory.
Obviously, that isn't to say that AI CAN'T be applied to constrained optimization problems; it can, has been, and will be in the future. You have to find a way of modelling the constraints in the reward function of the AI. Over time this leads the AI to build an internal world model that satisfies the constraints. I make it sound simple here, but there are many gotchas: situations you didn't think to constrain against because it never occurred to you that they would come about (common-sense stuff again), slight misalignments between the AI's internal world model and the constraints that lead to erroneous results outside the test data, and so on.
This is one of the reasons companies like Tesla put so much effort into collecting shiploads of real-world data: it is much easier to verify AI efficacy if you cover more use cases within the field.
Just some thoughts and rantings from a developer. Hope this helped.
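A minimal sketch of that "model the constraints in the reward function" idea. Everything here (the layout representation, the penalty weights, the congestion limit) is invented for illustration; it is not how Google's tool actually encodes its constraints:

```python
# Folding constraints into an RL-style reward as soft penalties. The agent
# learns to avoid violating them because violations drag the reward down.
from dataclasses import dataclass

@dataclass
class Layout:
    wirelength: float   # estimated total wirelength (objective to minimize)
    congestion: float   # estimated routing congestion, 0..1
    overlap: float      # total area of overlapping blocks (should be zero)

CONGESTION_LIMIT = 0.8  # made-up threshold

def reward(layout: Layout) -> float:
    # Penalize constraint violations on top of the main objective.
    congestion_penalty = max(0.0, layout.congestion - CONGESTION_LIMIT)
    return -(layout.wirelength
             + 10.0 * congestion_penalty
             + 100.0 * layout.overlap)

print(reward(Layout(wirelength=120.0, congestion=0.85, overlap=0.0)))
```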
@@platin2148 It will ignore constraints as much as the human telling it what constraints it has to follow ignores those constraints.
Really enjoy following along as you explore different topics going on in the tech world and beyond. Thanks!
Big respect for shouting out TechTechPotato
ML isn't magic, but it does fit into what you'd expect at the start of the singularity. Now Google can design chips that used to take weeks in days. And what kind of chips did Google use ML to design? Tensor Processing Units, i.e. chips optimized for more ML. So we should expect exponential increase in hardware-level efficiency of ML techniques, until we run into some limit to the scaling.
There are hard limits to what ML can do in this field; i.e., a perfectly organized chip will not have infinite performance.
So there are only margins to be gained from ML here. I don't think this is significant in terms of approaching the cyber-singularity.
@@MrFaaaaaaaaaaaaaaaaa I also don't think you can get the singularity by chip optimization alone, but that's not the important part. The increased ML capability can be generally applied to other fields. Just off the top of my head, if you apply them to molecular simulations and materials science, you might get a better chip production process and thus open up new spaces in chip-design.
It's always been like this. We've been using computers to aid in chip design since it was possible. We used good steel tools to make better steel anvils etc.
All of technology is used to accelerate development of more technology
@@andrewferguson6901 Well yeah, that's the definition of technology. Increase in capability results in some of that capability being used to further increase capability in ways not possible before.
The non-trivial thing about singularity arguments is that we're approaching some new speed of this. Which judging by exponential curves of things like GDP, is a reasonable extrapolation. It used to be that metal working took many generations of human experience to self-improve. Now chip design AI can self-improve in an iteration time of weeks.
Note that this is using ML to lay out the parts on the chip. For example, the component that handles matrix or tensor multiplication. The ML engine hasn't designed the circuits of those components.
Can't wait for machine-learning-based city planning!
Bad idea
@@kristopherleslie8343 Probably true.
@OneFortyFour exactly haha
@@kezif Refer to Elon Musk's views on this; you aren't up to speed.
I really like this Amidala meme. Luke's gaze that kills, suddenly cutting through her playful banter. Oh, and great video too, bruv! ;)
An architectural difference: the professional laid out the macro blocks in a grid-like, organized fashion, while the AI did it in a rounder, more organic-looking pattern.
Great video! One nit: simulated annealing is far less prone to getting stuck in local minima than gradient descent/hill climbing algorithms, at the expense of efficiency and accuracy in finding the minimum. Because of this, a common iterative optimization strategy is to use simulated annealing to get close to the global minimum, then use that as the starting point for a gradient descent algorithm that finds the true global minimum.
Also, I don't think simulated annealing is a greedy algorithm. Gradient descent algorithms may qualify as greedy, but it seems really weird to me to call annealing 'greedy'.
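A small sketch of that two-stage strategy using SciPy, with a bumpy toy function standing in for a real placement cost (the function and bounds are made up for illustration):

```python
# Stage 1: simulated annealing gets near the global minimum.
# Stage 2: a local gradient-based optimizer refines from that point.
import numpy as np
from scipy.optimize import dual_annealing, minimize

def cost(x):
    # Rastrigin function: many local minima, global minimum at x = 0.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2
coarse = dual_annealing(cost, bounds, seed=0)        # global, approximate
fine = minimize(cost, coarse.x, method="L-BFGS-B")   # local, precise
print(coarse.x, fine.x, fine.fun)
```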
This is great, thanks. I am an ML engineer of sorts. There is a line of thinking that there is a lot of value in a model that is merely equal to a human: you can spin up 1000 instances, whereas you can't really hire 1000 employees. Even at something like 70% of human performance you already see time savings versus routing things through a human. Also, there is "natural" performance inflation from better hardware over time: if the 70%-of-human model runs something like 20% faster each year, it's effectively 84% of human in year 2, then 100.8% in year 3, and so on.
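A back-of-the-envelope version of that compounding argument, taking the 70% starting point and the roughly 20% yearly hardware gain straight from the comment (both are assumptions, not measurements):

```python
# Compounding a flat ~20% hardware speed gain per year on a model that
# starts at 70% of human throughput.
relative_performance = 0.70
for year in range(1, 4):
    print(f"year {year}: {relative_performance:.1%} of human throughput")
    relative_performance *= 1.20   # hardware gets ~20% faster each year
# prints 70.0%, 84.0%, 100.8%
```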
Floorplanning is just a small step in chip design. I know this because I'm a master's student in microelectronics engineering and I'm literally taking a physical design course this semester. There are more complex steps, and floorplanning is maybe 5% of the topics covered in the course, alongside design for test, ATPG, static timing analysis, DRC, etc. The way journalists describe this topic makes it seem like the AI designs the chip from scratch.
Yes, you get it. Much more ahead.
Inverse design is slowly becoming a powerful technique in my own field of integrated photonics. The idea is that by telling algorithms what we're looking for they can design extremely efficient devices. This is on the single device level, not layout. A simple example is a splitter that sends two different wavelengths down different paths or that combines them. A human would typically make a device where small differences add up over a long distance, easily 100s of microns. Not so for an inverse design algorithm. They typically produce QR codes a few microns in size that make no sense to a human, but kind of work. What lets us designers sleep at night is that a: the designs are usually impossible to fabricate reliably because they use tiny features and corners, and b: the really impressive ones perform relatively coarse tasks (TE/TM splitting, separating whole frequency bands) with lower efficiency than a human optimized, physics based design.
Your attempt at explaining something you don't understand is commendable.
Let me have a try based on what I know about how Google's AlphaZero machine learning works from a 30,000 foot level and then guess how it's applied to chip design.
AlphaZero is nearly unique among AIs in that the algorithm teaches itself entirely from the beginning, without any human guidance, instruction, or intervention. The only things the algorithm is given are the basic parameters of the game/problem, and it starts with trial and error to discover basic moves/relationships, building its skill from scratch. Essential to the process, and different from much other machine learning, is its use of the Monte Carlo approach: it plays out long and often very complex candidate solutions but does not file a final score for a given playout until the very end. This is computationally heavy, but it avoids solutions that look attractive at first yet lead to a less optimal result, while making it possible to consider less attractive next steps that eventually arrive at a better result.
Another aspect of neural networks your video didn't seem to clearly describe is that there is a big difference between training the algorithm and solving the actual problem.
Training is performed by running the algorithm constantly, 24/7/365 and may require well over a year to achieve world class capability with over 93% accuracy (comparable to the best humans in the world, fully trained, experienced, and typically the best education available). It's slow and tedious, and typically involves crunching terabytes of data of known solutions (Yes, already solved).
The algorithm can be used at any time, but the more time spent training the algorithm, the better is the algorithm's capability.
Then, when you have a new problem, you can run it through the algorithm and get a solution.
In your video, you said that the AlphaZero-style solution was only approximately the same quality as three other known ways of creating the chip floorplan. That suggests to me that the algorithm is probably immature. It might be only equal to one or at most two other methods, but my feeling is that if matched against three other methods, it should be able to clearly beat at least one of them, if not all.
I would guess that within another year, the algorithm should be able to beat every other approach to creating the best floorplan, and that's even with the possibility that chip floorplans will be vastly more complex with such things as stacked 3D layering.
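A toy illustration of the Monte Carlo rollout idea described above: judge a candidate first move not by an immediate heuristic, but by playing many random completions to the end and averaging the final scores. The "game" here is a made-up number-picking toy, purely for illustration:

```python
# Rollout evaluation: the score is only filed once a sequence is complete.
import random

def final_score(moves):
    # Only defined for a *complete* sequence of moves (toy scoring rule).
    return sum(moves) - max(moves)

def rollout_value(first_move, choices, n_rollouts=1000, depth=5):
    total = 0.0
    for _ in range(n_rollouts):
        completion = [first_move] + random.choices(choices, k=depth)
        total += final_score(completion)   # scored only at the very end
    return total / n_rollouts

choices = [1, 2, 3, 4, 5]
best = max(choices, key=lambda m: rollout_value(m, choices))
print("move with best average rollout outcome:", best)
```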
An even bigger bottleneck than floorplanning is testing, which has even more possibilities (I'd guess) than floorplanning's 10^9000. You should do a video on this; it's been in the news.
Oh like verification?
@@Asianometry Yes. The set of test vectors needed to exercise every conceivable combination of a combinational logic circuit is prohibitively large. There's a TH-cam video on this... that you can perhaps elaborate on... let me see if I can find it... ah, here it is, you did it! :) "The Growing Semiconductor Design Problem", Dec 5, 2021. Maybe link to it.
@@Asianometry Yes, this is a big fish, because ML arriving at the best test cases and boundary conditions would be a great tool.
@@Asianometry Recently I came across a video by one of the silicon-focused creators.
I'm paraphrasing (so the exact ratio is likely different from what I state here), but the gist was that over the last decade or so especially, verification has become a greater and greater resource hog. Most firms have something like 2-4 times as many people working on verification as on design. It will soon grow into such a colossal undertaking that current methods become infeasible. Apparently, that's where they are especially concentrating on leveraging AI techniques. Makes sense; it's the sort of problem for which neural networks are well suited.
@@raylopez99 I think John's recent video touched on this subject: th-cam.com/video/rtaaOdGuMCc/w-d-xo.html
An excellent video.
Differential Evolution is another good (global) optimiser that is pretty good at not getting stuck in local minima.
That rotating wafer was beautiful BTW!
Hi there, good video. Just two points of clarification. You said simulated annealing uses an objective equation based on objective factors. This suggests you are reading "objective" as "neutral"; simulated annealing is actually an attempt to minimize an objective (as in goal) function.
Additionally, the weakness of simulated annealing is not that it gets stuck in a local minimum. Instead, its weakness is that it can only find the approximate global minimum. Simulated annealing is actually a strategy for escaping local minima. I like to think of simulated annealing as "smoothing out" the loss landscape, so that peaks aren't so high (which would trap the optimizer) but valleys also aren't as low (which makes the solution approximate).
I think you did a really good job summarizing, especially since this isn't necessarily your field! :)
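A minimal hand-rolled sketch of the escape mechanism being described: worse moves are sometimes accepted with probability exp(-delta/T), and the temperature T cools over time. The 1-D bumpy cost function is just a toy stand-in for a placement cost:

```python
import math, random

def cost(x):
    return x**2 + 10 * math.sin(3 * x)   # bumpy: several local minima

x, temperature = 4.0, 5.0
for step in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = cost(candidate) - cost(x)
    # Always accept improvements; accept worse moves with prob exp(-delta/T).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999                  # cool down: fewer bad moves later
print(round(x, 3), round(cost(x), 3))
```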
The practical problem is more difficult than this. To hit a minimum speed, the maximum net length is one constraint, while the average net length matters for minimum power. So there is one objective, but with an additional constraint; in practice, many.
Best new channel I found in 2020. Great job!
8:21 I love that baseball pitch flying punch, we need more special moves in baseball.
Another great topic and video! John you are on fire! Thanks for all your hard work.
Saying Go is more complex than chess is like saying Cyrillic is more complex than cuneiform because it has more letters.
Deep
Similarly, a best-of-REALLY_LARGE_NUMBER match of tic-tac-toe would have a huge state space, but it is still possible to play optimally.
The predictability/regularity and high-quality feedback (wirelength and other measures) of the chip design field make it ideal for machine learning! Very optimistic about this trend ❤️
Do you post your sources for the information in your videos anywhere? I would definitely be interested in digging even deeper into many of the topics you present.
Great video btw👍
About floorplanning: seeing the problem, I instantly see another solution.
1. Start by generating N solutions for each block with different edge interface layouts (they do not have to be perfect at this stage).
2. Do the usual optimization, but with the freedom to select the best-fitting prepared version of each block.
3. Once a good overall layout is found, optimize the interfaces between the blocks, and then each block's internals to fit that interface.
Overall, it's an outside-in approach, but with a pre-processing step that optimizes the overall layout first.
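A toy sketch of step 2 of this proposal as I read it: each block comes with a few pre-generated variants, and the placer simply picks whichever combination scores best. The block names, sizes, and cost function are all invented for illustration:

```python
# Exhaustive selection over pre-generated block variants (toy scale only).
import itertools

variants = {                      # (width, height) per variant, two per block
    "cpu":  [(4, 3), (3, 4)],
    "sram": [(2, 5), (5, 2)],
    "io":   [(1, 6), (6, 1)],
}

def packing_cost(chosen):
    # Crude stand-in cost: bounding-box area if blocks are stacked vertically.
    width = max(w for w, h in chosen)
    height = sum(h for w, h in chosen)
    return width * height

best = min(itertools.product(*variants.values()), key=packing_cost)
print("chosen variants:", dict(zip(variants, best)), "cost:", packing_cost(best))
```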
A floorplanning algorithm is not supposed to touch the blocks themselves; the granularity should not go below the block level. Otherwise it becomes a much harder problem.
4:45 - You have the terms reversed here. Hill climbing is the naïve algorithm; annealing is the modification designed to escape local optima. Annealing modifies hill climbing by sometimes accepting moves that look worse, with the probability of doing so (the "temperature") starting high and shrinking on each iteration, until the search settles into a stable optimum.
The shocking thing is not that it is good, but that it is this good at basically the first shot. Compared to chess, it's the "look, it beats *a* human" moment; there is probably plenty of room for improvement in speed and quality.
Google's Chip-Designing AI*
It's the difference between an AI that designs chips and a chip that is designing AI.
Another brilliant video. Much appreciate your excellent work. Quite curious your thoughts about how long it will take for quantum computing to make an impact on floor planning? How far do simulated annealing solutions such as DWave need to improve before they can be used more efficiently than ML?
These are only early studies. If this takes off, it will be revolutionary.
You said that according to an Intel study, 50% of the power is spent on interconnect. Do you have a reference for that? I'm doing a study in interconnects and I'm finding it hard to get my hands on those data. Thanks!
At 5 nm and nearby nodes, interconnect is more than 90%. In the 90s, or in discrete board design, it was the other way around.
Chip logic is defined as written text expressions and synthesized to gates. Neither the expressions nor the gates carry information about placement and distance, yet both define the performance. To me it seems simpler if logic synthesis guided the logic definition, calculating directly from the logic expression using the performance metric. AI could then apply expression transformations to improve that metric. Today this process is done manually by chip architects, guided by logic equivalence checkers.
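A rough illustration of "expression transformations guided by a metric", using SymPy's logic simplifier as a stand-in for a real synthesis transformation and a crude literal count as a stand-in for the performance metric:

```python
from sympy import symbols, preorder_traversal
from sympy.logic.boolalg import And, Or, simplify_logic

a, b, c = symbols("a b c")
expr = Or(And(a, b), And(a, c))        # (a & b) | (a & c)

def literal_count(e):
    # Crude proxy metric: count every variable occurrence in the expression.
    return sum(1 for node in preorder_traversal(e) if node.is_Symbol)

better = simplify_logic(expr)           # expected: a & (b | c)
print(expr, "->", better)
print("literals:", literal_count(expr), "->", literal_count(better))
```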
@12:35 Having a maid is weird enough, but maid's quarters without a shower? :P
3 showers and not a single bathtub on the plan.
2% to 5% chip performance increase (power, or speedup) is well within the region of diminishing returns. The real advantage is the reduction in time-to-market.
This could design rolling papers better. A needed upgrade we all crave to be sure.
The future is unimaginable. Nearly. Cheers.
... and I love listening to your content! Thank you John!
Thanks for watching
Great explanation of this technology! Great stuff!
Great summary video!!!
Think how much coffee will be saved by floorplanning with machine learning! I have been doing floorplanning for over 40 years, and I really like coffee!
6:51 But designs exist! Yes, they do :D
great video Jon
Lovely documentary on trending topics in chip industry.
I guess that in analog IC design, where you start with the transistor model rather than a logic block, these ML/AI tools will come much later. A different frequency, a different spec, a different application: for each of those you'll often have to change the whole circuit in a non-trivial way to accommodate it.
OK, to pour a little cold water here: look at an op-amp. From the datasheet specs you can select a minimal topology and numerically size the devices. No need for AI. It would be far easier to have a "topology Google search" over all past built circuits and apply them to your problem. It's simply the curtain of secrecy that leads most analog IC designers to reinvent a solution.
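A toy version of that "topology search over past circuits" idea: filter a tiny, made-up library of op-amp topologies by datasheet-style specs. Real spec sheets and topology libraries would obviously be far richer:

```python
# Spec-driven lookup over a (fictional) library of previously built circuits.
opamp_library = [
    {"topology": "two-stage Miller", "gain_db": 80, "gbw_mhz": 10, "load": "capacitive"},
    {"topology": "folded cascode",   "gain_db": 70, "gbw_mhz": 50, "load": "capacitive"},
    {"topology": "class-AB output",  "gain_db": 65, "gbw_mhz": 20, "load": "resistive"},
]

def find_topologies(min_gain_db, min_gbw_mhz, load):
    return [c["topology"] for c in opamp_library
            if c["gain_db"] >= min_gain_db
            and c["gbw_mhz"] >= min_gbw_mhz
            and c["load"] == load]

print(find_topologies(min_gain_db=68, min_gbw_mhz=20, load="capacitive"))
# -> ['folded cascode']
```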
After learning about boolean gates I started working on 4 bit CPU sub-components like Adder, Comparator, a memory piece, etc. Didn’t get far but it was a good intellectual activity
At 4:05, just to give some context: the estimated number of atoms in the observable universe is about 10^80.
Hello Mr. John. In a previous video you talked about the validation problem. Do you think this technology is going to help with that?
It would be pretty accurate to compare machines to humans on a performance-per-watt basis :D There is a reason we do not use calculators.
4:02 *NICE.*
Ikr? I thought it was a missed meme opportunity, but Jon had us covered.
Does that mean Google is the company of the future that we should invest in? I feel the growth of this company is unlimited.
It's the best pure monopoly today, IMO. ASML too.
Nvidia and others have the same techniques; the future success of each company depends not just on the fundamental technology but on business variables and the decisions of individuals, which are chaotic. It would take far more in-depth research to get a feel for the winner in AI over the next 10-20 years, and even then there is no guarantee that they can profit from it long-term, or that it won't turn into a generic commodity technology that simply benefits everyone.
Looking at their stock price, they sure know how to keep growing and growing.
i have an AI friend …
the perfect partner.
She can do anything for me;
cook, wash the dishes,
mop the floor, do gardening,
even give me a massage.
Thank you for the great video!
Wonderful content
Please ! MORE videos on this subject !
Thanks :-)
4:42 Simulated annealing is not greedy. In the context of computer science algorithms, "greedy" means that the algorithm does not plan into the future when making decisions, but selects whatever looks best right now. That will often not get you to the globally optimal solution.
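To make "not greedy" concrete, the standard Metropolis acceptance rule used in simulated annealing is shown below, where ΔE is the cost increase of a candidate move and T is the current temperature:

```latex
P(\text{accept a worsening move}) = \exp\!\left(-\frac{\Delta E}{T}\right),
\qquad \Delta E = E_{\text{candidate}} - E_{\text{current}} > 0
```

At high T almost any move is accepted; as T cools, the rule becomes effectively greedy. That willingness to take temporarily worse moves is exactly what distinguishes it from plain hill climbing.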
Awesome video subjects and content
Floorplanning, like predicting the weather, is a job for quantum computers. Probably in 5 to 10 years' time it will be resolved in minutes, to everyone's satisfaction.
Quantum annealers exist now and are growing in qubit numbers. We are developing new quantum process chips based on a hybrid technology.
Only if you want or need to laboriously calculate every possibility to arrive at the one true solution.
Until then, the AlphaZero AI described in this video can or should do a very credible job by a series of expert guesses, pruning the decision tree of bad lines without having to calculate them fully to establish how bad they are.
Can the AI detect patterns in local versus non-local minima to the point where it can, say, generalize and efficiently encapsulate them into "simpler" higher-level design principles or methodologies, such that one doesn't need the AI tool after the discovery, and can use those principles to evolve designs from different perspectives? Or do engineers just interpret the results from the various simulated metrics and only optimize over the numbers?
Could you do a video about the Tesla Dojo? It would be great to know more about it, e.g. efficiency for machine learning Vs other commercially available products, whether Tesla poached expertise from elsewhere or outsourced some of the design?
An important thing to note: it doesn't matter if the AI version gives a result that's 5% worse than a human or existing tools if it can get there faster than the human or existing tools. Assuming for a moment that existing tools are about as fast as the AI method, use both in parallel and take the best, and you don't have to care about spending the time a human would.
Wait till a post-silicon bug is discovered and you have to go back to PD to isolate the bug.
Would be interested to see how this could be used for indoor aeroponic farms.
Logistical algorithms could play a useful role in determining parameters of significance between nodes. Just an observation.
That was amazing information!!!
FYI your microphone is still buzzing. Great video nonetheless!
The A.I. is designing the chips within the set of rules programmed into it. It cannot design outside those rules since machines cannot consciously decide or create.
Chip design and custom ASICs will always be one of those far-off dreams of mine that I will never fulfill, because it just looks really, really complex and is pretty expensive.
"Machine Learning is not superhuman magic. It is based on data created or curated by humans.."
That would be the case in supervised learning but this looks like reinforcement learning, so it's not bounded by the quality/size of a training dataset, only by the number of learning iterations. Thank you for another great video and topic!
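As a rough sketch of that distinction (a toy example, not Google's actual method): in reinforcement learning the agent generates its own training data by acting and observing a reward, rather than fitting a fixed, human-labelled dataset. The two-armed bandit, REINFORCE-style snippet below (all names and payoff numbers are made up) learns a decent policy purely from its own interactions:

```python
import math
import random

# Toy two-armed bandit: arm 1 pays off more often. The "environment"
# here stands in for whatever simulator scores a candidate layout.
TRUE_PAYOFF = [0.3, 0.7]

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def pull(arm):
    # Reward comes from interacting with the environment,
    # not from a human-curated dataset.
    return 1.0 if random.random() < TRUE_PAYOFF[arm] else 0.0

prefs = [0.0, 0.0]        # policy parameters (action preferences)
baseline, lr = 0.0, 0.1

for _ in range(5000):
    probs = softmax(prefs)
    arm = 0 if random.random() < probs[0] else 1
    reward = pull(arm)
    baseline += 0.01 * (reward - baseline)   # running-average baseline
    # REINFORCE-style update: raise the preference for the chosen arm
    # in proportion to how much better than baseline the reward was.
    for a in range(2):
        grad = (1.0 if a == arm else 0.0) - probs[a]
        prefs[a] += lr * (reward - baseline) * grad

print("learned policy:", [round(p, 2) for p in softmax(prefs)])
```

The only limits here are the number of interactions and the quality of the reward signal, which is the point the comment above is making about RL not being capped by a human dataset.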
Me, an Electronics Engineer turned into AI Researcher: _Beautiful 🥺👍_
This is a good example of why AI can surpass humans in expert-system domains. You pointed out that the AI found solutions with performance comparable to those generated by humans, but you also pointed out that it generated a solution in 24 hours whereas the humans took 6 months. The thing is that the cost of the AI is only electricity, and it can generate 180 competitive solutions in the time it takes a team of salaried humans to make one. In that case, the odds that one of the AI solutions is going to be the best one are 180/181 (99.45%), and the AI will do this at a fraction of the cost. Additionally, in the rare cases where the humans win the race with their single entry, that entry can be added to the AI's training set for an immediate improvement.
Moreover, the efficiency of the AI can improve substantially with application specific hardware, and meta-analysis of its work can potentially be used to generate "unnamed" rules that can radically improve its efficiency and/or quality. Humans can also improve, of course, but the way our minds work is largely incompatible with unnamed rules (allowing for zen masters here), so there's a lot of extra overhead involved.
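A quick check of the 180/181 figure above, under the commenter's simplifying assumption that each candidate layout (180 one-day AI runs plus one six-month human design) is equally likely to turn out best, which is a big assumption:

```python
# 6 months of 24-hour AI runs vs. one 6-month human design.
ai_runs = 6 * 30          # roughly 180 one-day runs in six months
candidates = ai_runs + 1  # plus the single human entry

p_ai_best = ai_runs / candidates
print(f"{ai_runs} AI runs -> P(best layout is an AI run) = {p_ai_best:.4f}")
# -> 0.9945, i.e. the 180/181 (~99.45%) figure quoted above
```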
And once AI evolves a few generations ahead of humans, the next-generation concepts will be hard for humans to comprehend.
Now the trend is back
Using the last example of AI design of a home floorplan, it's hard to see this being more efficient without spending inordinate amounts of time defining the constraints of each element in relation to the others before running the AI calculations.
Human intuition for good ergonomics, covering things like receptacle and door locations, counter heights, the spacing of beds and night tables, and outdoor walkways down a grade, is mind-bogglingly challenging and time-consuming to determine and quantify as constraint rules for the AI engine.
Humans are creatures of habit, and we have intuitive ways of navigating. If the AI determines that a certain walkway grade and width is most efficient for human anatomy to traverse, but it is perceived as too narrow or as having an unprotected drop-off, then you will not have a happy homebuyer, even if they could learn to feel safe walking it at night.
I believe there will always need to be a hybrid of human design and machine learning.
Using home floorplan design as an example, AI would be great at taking a human-designed floorplan built from standard building blocks and assemblies: industry-standard trusses, 2x4s, drywall, etc.
An AI engine could then take similar construction code written as constraints and generate an optimum routing of electrical, gas, plumbing, etc.
An obvious constraint is designing around industry-standard material sizes to reduce the amount of custom cutting needed. A buyer wanting a "custom home" probably doesn't mean a home that can't fit an industry-standard fridge, freezer, ducts, sub-flooring, etc. There are specific features that are perceived as custom, and some may genuinely need nonstandard parts; those elements would need to be human-designed, unless this was simply an exercise in seeing what the AI would make.
This is one of those areas where a superhuman AI has great potential for iterative self-improvement on the path to the singularity.
Off to TechTechPotato's channel.
A lot of this feels like the Traveling Salesman problem. There are NP-hard math problems here that humans have simply not been able to solve (and that may not be solvable in anything better than factorial time).
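For a rough sense of the scale (a toy Python illustration, not the actual chip numbers): exhaustively enumerating orderings grows factorially, which is why brute force is hopeless even at modest sizes.

```python
import math

# Number of distinct tours in a symmetric TSP with n cities: (n - 1)! / 2.
for n in (5, 10, 20, 50):
    tours = math.factorial(n - 1) // 2
    print(f"n = {n:2d} cities -> {tours:.3e} possible tours")
```

At 20 cities you are already past 10^16 tours, and at 50 past 10^62, which is the same kind of combinatorial explosion that makes exhaustive floorplanning infeasible.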
Very refreshing!! I wonder if there is an AI/ML package that we can "pick up" and apply to our "daily" plan .. that would help our productivity .. haa haa haa!! :)
Can you please make a video about Lam Research as well?
"Droids building droids? How perverse!" - C-3PO, Attack of the Clones
Machines are starting to have common sense while humans are losing it.
Rooting for the AI at this point mate. Maybe it can spare some and put us out to pasture in a nice Swiss mountain town.
Ah, I had this idea; seems it's already being done :P
thank you and posted to reddit
To improve my own skills, I'm looking for analog IC design courses. Any recommendations? Thx
It looks like the ball throws the character instead of the other way around. 8:25
5:00 You are literally talking about the halting problem in computing; annealing is probably tackling an NP or NP-hard problem.
How small is a silicon circuit, you might ask? 🙃 Try this idea for size.
Focus a laser beam from Earth onto a fingernail on the Moon and lithograph a circuit line on it. That small.
So have they reached the atomic limit on size? Never say done; there's always some guy with a new idea.
2035: some tech scientist runs into work at some lab yelling "I got it... I got it!"
Like Taggart in Blazing Saddles :)
Have you made a video on the European RISC-V chip by SiPearl?
love the video!
4:05 OMG, it’s over 9000!!1!
Well I guess I have an easier time deciding my career path now.
Make a video about Quantum processors, please 🙏
just fyi: your volume is a bit low.
I found the volume much better this time.
Curious.
And that’s exactly how skynet became active…
I think the term "AI" is eventually simply going to mean ML. It's joked that an approach is only AI while it hasn't been adopted. Most people wouldn't consider the A* algorithm, genetic algorithms, Bayesian methods, or B-tree searches to be AI at this point. However, neural-network-based ML really is AI, as it replicates the processes nature uses to generate intelligence in the first place. Everything else is simply search algorithms with some innovative flourishes.
having second thoughts on studying computer science with this one
why
CS is needed to branch into AI. ML came from CS.
The conclusion is no longer correct, since we no longer only have systems trained with human data. Indeed, the second, stronger version for Go did not start from human players' games (as the version that beat Sedol did), but from the get-go played only against itself, being provided only the rules of the game. It might be that the combination of the complexity of the problem and the quality of human expert intuition places this problem out of reach of this second-generation ML approach for now -- but only might (and even if that were the case at this moment in time, it would be hard to imagine it being a hard limit, unless one introduces unscientific assumptions about human intelligence).
It's 2024. Google still hasn't made any profit from AI-aided chip design, while Synopsys' stock price has doubled by providing AI assistance in the chip design process. What is wrong with you, Google?