Woooow! The legend himself!
Shout out for key observation that the AI helps by eliminating distractions from trivia when coding and learning.
Yeah, I was nodding along with this part. I recently had to do some complex searching in Splunk, which I am not an expert in, but with an LLM I was able to get much further, faster, than I was able to without it, and achieved my objective. It wasn't perfect, but it helped me to evaluate my ideas faster, and then I fell back to an alternative idea when I was convinced the LLM wasn't going to help me any further.
The other thing I wonder is: are those who are good communicators themselves more likely to get good results with LLMs? If you can ask the right question, a better answer is sometimes forthcoming.
@@DavidAtTokyo I think it helps to have some traditional AI, ontology, semantics experience and description logic skill so as to know how best to structure a prompt in line with fundamentals of AI reasoning and inferencing. In other words to ‘think’ like the LLM.
Fantastic talk - lots of things resonated with my experience. I think it's great that Eric is (and always has been) open to showing that his thinking has evolved over the years and that sometimes there are difficult choices where there's no "perfect" solution - rarer than it should be amongst technology thought leaders!
What a great talk, well done Dave for getting Eric Evans on.
This is an excellent talk! 👏👏👏 Thank you Gentlemen!
Great discuss, thanks! I recommend also talking to Uwe Friedrichsen, author of the 'ChatGPT already knows' blog series.
I was focused on and hypnotized by Eric Evans' wisdom until 28:20 😀
cough
Good point - can we learn to content ourselves and moderate our expectations to keep to the areas the LLM has been trained in? If we can work in that area ten times more effectively than areas outside the training, what does that do to our work strategy?
What is it that Eric Evans wouldn't say in 2003 but does now? Not to have a mutable object as an entity? Around 1:26:00 in the video. Great talk and great people 🙏
The field of "AI" is broad. You've got logic programming (symbolic AI), genetic algorithms, artificial neural networks, LLMs ("generative AI"), knowledge-based systems, etc. Linguistics also added useful tools for parsing languages. I just don't understand the fixation on LLMs. We've got a broad selection of various tools, each with its own benefits and pitfalls. The LLM may help with the tutoring gap (see Benjamin Bloom). Where you don't need the fuzziness (i.e. the tendency to fantasize), there might be other tools, or combinations of other tools, that are better. No matter which way you go, you have the effort to build, train and maintain the tools.
Still got a bunch of books to read. I don't do the same things in my job every day, so for me good technical documentation, understanding code/systems written by others, and sometimes even decompilation (because of a lack of good documentation) are most useful.
LLMs are getting a lot of attention as they're one of the newer technologies that get labelled as "AI", and it's a rapidly evolving space where things that were science fiction until a few years ago are now within the reach of a lot of people who might otherwise be intimidated by the mathematical nature of some of the other tools.
I was looking at an overview of techniques to address problems/threats recently: of the 33 listed, 29 were referenced from research papers published in 2023, and all were available via common frameworks like LangChain or LlamaIndex. I think that rate of movement from research to practice is pretty much unprecedented. It's also interesting that some of these techniques you could imagine being found by chance rather than just by researchers.
I saw my first webpage in 1990/1991, within about a year of the technology being invented, and LLMs are almost as big a leap forward - the difference is that this time we have the internet. The combination of the big leap forward, the accessibility of the engineering around the LLMs (as opposed to the base models), and modern communication is dynamite IMHO.
Is this available in podcast form so I can listen to it while driving or doing yard work?
We are in the process of releasing our Engineering Room series in podcast form, so keep a lookout.
LLMs are great at DDD. Serendipity?
Just like watching Richard Dawkins talking to a Christopher Hitchens. Epic people, epic talk.
Thanks a lot!
Wow, I count that as high praise, thank you.
"The people who lose the old millions of jobs won't be able to take the new millions of jobs" (paraphrasing)
That's where retraining or re-education comes in.
1:22:50 we're all human
🤩
When we get symbolic models to do the math too - then it takes off
This episode is sponsored by the Weyland-Yutani Corporation.
I used to dream in SQL, now it's all GPT prompts...
Change for the worse!
There's a lot of coughing and sneezing going on around the place