If Claude is blocked in your country, try using Google's Gemini. The paid version of Gemini has an even larger context window than Claude. However, you can't upload docs and so instead you'll need to copy and paste each of the files, which is a bit awful, but will work (sorry).
AlphaFold 3 came out while I was editing this video, and I'm so excited! I will definitely make a video about it soon!
You need a VPN and ideally a friend in country X who makes an account for you. Gemini isn't as smart.
Noooooo make a video about OpenFold instead!!!! c:
Learning a codebase is like that. It's not something a person can realistically do for 8 hours in a day. Just spend a couple hours a day max going through randomly like you were doing. Every codebase is like its own mini-language. You have to just expose yourself, spend time, and become familiar. Takes time. Important to be patient.
Love your videos Mithuna! As another physicist at an Australian university wanting to study a PhD overseas, you're a huge inspiration to me.
I'd strongly suggest drawing a mindmap as you ask AI about the different files and imports. I've done that a lot in the past and it helps me see the big picture as well as the links :)
I'm a measly math undergrad, but Google Gemini has helped me conceptually get my head around the occasional abstract concept. I have had problems getting it to help me with code, so hearing that Claude was better at code comprehension is a good recommendation.
Love your work Mithuna! I love physics and you are simply brilliant at explaining so many difficult concepts and I am very impressed at your videos! Keep up the joy of science, discovery and living in the Wonderland with you!
I tried using Gemini 1.5 Pro on my hobby project, which is around 500k tokens. It seemed alright at understanding the codebase, but it made a few mistakes every now and then. (Well, I do know the codebase very well.)
The most fun and useful thing was asking it to find places to refactor. I've been working on this project for many years and it's really amazing to have "someone" look at your code, critique it and just overall chat about your code.
What is refactoring?
@Miguel_Noether like redesigning the code to make it more readable and easier to maintain
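For example, a tiny hypothetical before/after in Python (both functions are made up for illustration, but the behaviour is identical, which is the defining property of a refactor):

```python
# Before: repetitive branching; adding a country means copying a branch
def shipping_cost_before(weight_kg, country):
    if country == "AU":
        return 5.0 + 1.2 * weight_kg
    elif country == "US":
        return 7.0 + 1.5 * weight_kg
    elif country == "UK":
        return 6.0 + 1.4 * weight_kg
    else:
        raise ValueError(f"unsupported country: {country}")

# After: same behaviour, but the rates live in one table,
# so adding a country is a one-line change
RATES = {"AU": (5.0, 1.2), "US": (7.0, 1.5), "UK": (6.0, 1.4)}

def shipping_cost_after(weight_kg, country):
    try:
        base, per_kg = RATES[country]
    except KeyError:
        raise ValueError(f"unsupported country: {country}")
    return base + per_kg * weight_kg
```

Both versions return the same numbers for every input; only the readability and maintainability change.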
I just use Codium plugin with vscode and it seems like it creates a vector database locally to decide which code files reference your question and then send that forward to gpt to explain. That way you don’t have to upload all the code files online
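A rough sketch of that idea (this is my guess at the mechanism, not Codium's actual implementation; real tools use learned embeddings rather than the word-overlap score below): score each file against the question locally, and only send the top matches to the model.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Lowercased word counts; a crude stand-in for a real embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_files(question, files, k=2):
    """Rank files by similarity to the question; only these go to the LLM."""
    q = bag_of_words(question)
    scored = [(cosine(q, bag_of_words(src)), path) for path, src in files.items()]
    return [path for score, path in sorted(scored, reverse=True)[:k] if score > 0]

# Hypothetical file summaries standing in for real source files
files = {
    "attention.py": "compute attention scores and softmax over query key value",
    "data_loader.py": "load fasta files and parse protein sequences",
    "train.py": "training loop optimizer loss checkpoints",
}
print(top_files("where is the attention softmax computed", files, k=1))
# → ['attention.py']
```

The point is that only the shortlisted files ever leave your machine, which is why this approach is friendlier to large codebases (and to privacy) than pasting everything into a chat window.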
One approach I have used is an IDE called Cursor. It looks exactly like Visual Studio Code but has an AI chat box built into the interface that can connect to OpenAI or Anthropic. The best part is that you can pass your entire codebase into the chat box with one simple command. I don't know if it could handle such a large context at the moment, but you can always just pass in the specific files that are relevant to your question.
That's a really good approach, but my employer would not allow anyone to upload our codebase to outside services. I'd be interested in what local setups could explain code with similar quality.
Well, in ChatGPT for example, there's an option in the Teams subscription plan, I think, to opt out of letting OpenAI use your prompts and data for training, and they SAY they won't save your data. I guess Anthropic might have something like that too. Your employer might stick to their policy anyway, but tell them about this if they don't already know about it.
You can use local models, or trust that the companies who say they won't use your data in training their models will actually stay true to their word, but that's obviously placing your faith into a 3rd party. TabNine prides itself on privacy and protecting IP so you might want to look into them, but that's a coding assistant so I'm not sure how much it'd help you understand the codebase.
That is the first five minutes' work of a software developer on a new codebase.
AIs don't have understanding, they just predict what word / token is likely to come next. Confronted with a function it hasn't seen before, it either extrapolates correctly from other patterns that it has seen, or it hallucinates some BS. There's no way to tell, unless you check its "reasoning" by actually understanding the code yourself.
I'm a SW developer (junior, but still), and upon closer inspection I usually find that it makes up a load of BS.
Maybe there are ways to improve that with better prompts, but at that point it's usually just easier to step through the code myself.
Are you maybe using GPT-3.5 or another outdated AI? Opus definitely understands very well.
You would be surprised to see how good the better ones are.
If I only used Llama 3 8B, that's what I would say too, but GPT-4 Turbo understands surprisingly well most of the time.
@HoD999x depends what you call "understanding"
LLMs are very useful as glorified manuals, I use them a bunch at work. Though you have to be able to confirm the information it feeds back to you, they absolutely will lie and do so with as much confidence as when they give you accurate information. For obvious reasons this gets much worse with anything even a little obscure.
(In fact, Computerphile recently did a video on a paper arguing it could be intractably difficult to improve these models much at all. This wouldn't be the first time AI has plateaued; iterative improvements won't be enough, new fundamental changes will be required.)
How did you generate that picture of the architecture, actually? Did you mention "DeepMind" for that step? Is that some AI tool?
Really cool! I'll definitely consider trying this out when I have to understand a new codebase.
Could you share how much you spent throughout the process? Claude 3 Opus costs $15 per million input tokens according to the pricing page, and I'd imagine you'd need to make multiple queries, maybe changing your prompts along the way.
That's the API pricing; the chat version is $20/month.
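A rough back-of-envelope check (the $15 per million input tokens figure is quoted above from the pricing page; the $75 per million output tokens figure, the query count, and the answer length are my assumptions):

```python
# Hedged cost estimate for querying a ~500k-token codebase via the API.
INPUT_PRICE = 15.0 / 1_000_000   # $ per input token (Claude 3 Opus, as quoted)
OUTPUT_PRICE = 75.0 / 1_000_000  # $ per output token (assumed)

context_tokens = 500_000   # the whole codebase resent with each query
output_tokens = 1_000      # a typical answer length (assumed)
queries = 10

cost = queries * (context_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE)
print(f"${cost:.2f}")  # → $75.75
```

Under those assumptions, even ten questions over the full codebase would cost far more than a month of the flat-rate chat subscription, which is one reason the chat interface can be the cheaper option for this kind of exploration.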
Have you tried the open interpreter project? You can put gpt4 or opus into your command prompt and actually get it OUT of a chat box. I can't code so I asked it to code me a machine learning program that can predict the stock prices after exporting tradingview data.
oh I thought you were one of the random ai nonsense channels for a sec, then I clicked through and realized why I'd originally subscribed. FWIW I'd suggest being verrry careful with maintaining your presentation as focusing on your field, so that you don't end up lumped in with the ai hype burnout channels. That said, I do find claude is far less frustrating than gpt4, yes.
I've done very little programming, but I imagine the best use would be to see if it spots a bug you haven't noticed that is rendering your code partially or totally inoperative, or if it catches something in code you think is 100% fine that just hasn't hiccuped on you yet. To believe it in those cases, you'd likely need more experience and ability than I currently feel I have. Striking a balance between doing things on your own and relying on assistance of often high but uneven quality from these AI offerings is wild indeed. It does seem programmers in various settings should become so much more effective that the impact on the job market could exceed even offshoring!? A good reason to specialize, obviously, and good to have you as a PhD investigating specific topics, inspiring us to remain useful as humans in specialties within this general coding arena, maybe hahaha 🤖🤪
Do you think this approach would work for analysing journal papers?
Meta created Galactica to do exactly that. However, at that point in time, people didn't know how to handle hallucinations. And Meta said the AI was going to be a perfect scientific tool. You can guess what happened when journalists got hold of it...
You could do really nice things. E.g. producing slides to present a research report. Or go "backwards", and ask what biological processes are based on some specific mathematical relationship.
whats the future for quantum computing in a world of AI? would one enhance the other?
Hi Mithuna, I really enjoy your videos. But could you take some time out to complete the linear algebra series? They don't have to be perfect!
Which videos would you like?
It helps, but I don't really like "giving it all of it" 🤷‍♂️
I’m building a tool for doing exactly this.
Understanding large or complicated codebases.
I haven’t got anything meaningful yet unfortunately I’m still working on it.
Good luck!
Thank you so much. This is so helpful for me. Also, does anyone have suggestions on software that helps to formulate drugs based on knowing a mutation? For example, if I know that a mutation in FOXC2 causes vein disease is there software that can help me formulate a drug or "something" to fix the mutation? Thank you.
I've started using Claude Pro for dissecting code too, though I use it more for originating and running test concepts. I've wondered about using it to dissect books; I'll have to try that. Does anyone else regularly run into message limits?
Yes, I found that to be the only really frustrating thing about using claude! But it’s a cool idea to use it for dissecting books. I’d love to hear about that!
@LookingGlassUniverse I'm intrigued that you're getting into protein folding. Your quantum mechanics videos were very helpful when I went fishing around for better breakdowns and ACTUAL EXPERIMENTS. I appreciate your approach to these topics, and proteins are another growing area of interest of mine, so I'm glad you're getting into it.
Can you do it with your own PhD thesis, the process to upload it all and understand all of its parts?
This is a great idea for a video.
Please more AI content about how you use AI to solve challenges and problems! I love it! 🙏
Super-useful little aside.
Be careful: be sure to get permission from your project manager and/or the authors of the code before you upload it into an online GPT. It could be dubious to share that information, especially if your project's repository is meant to be private, and especially since ChatGPT and the like use your conversations to further improve and train their models. For better data security, you might be better off with an offline model installed on a secure device.
The link to the repo is in the video description (OpenFold). It is open source under the Apache 2.0 license.
Good idea, but I would recommend getting a good IDE; it will take you far.
Very interesting, I'll have to try this out!
Does it mean I can use Chat GPT to finally understand Italian bureaucracy?
Hey, so I'm facing a similar issue, but I'm kind of reluctant because the code I'd be posting to Gemini is quite private and leaks could prove disastrous. Should I be wary of this, or are my worries unfounded?
If you have the business version of ChatGPT then they don’t use your data for training. You can check, but Claude and Gemini might have a similar policy!
@@LookingGlassUniverse Okay I'll take a look then, thanks for replying!
OK, copying and pasting whole codebases into an AI chatbot window is not advisable, or even really practical in many cases. Generally speaking, it's something you shouldn't do, especially when it isn't your code to begin with. Also, if you put the AI through its paces, you'll find they're all not great at calculus, or really math in general. Until an AI can consistently solve integrals, you should be consistently skeptical of its code.
After many years of professional coding, I've found the best way to understand a large code base is to be familiar with the languages beforehand and to not be afraid to pull that code up in a terminal so you can pick it apart algebraically with CLI tools. It takes time and patience to build these types of skills but this is what will ultimately allow you to manage and parse through enormous amounts of code very very quickly.
cool application, thx for sharing
How did you connect the repo? It sounds like a copy-paste. Please explain in detail; I don't want to spend money on it only to be disappointed.
this is such a good idea!
Could you train something like Llama on the codebase itself and produce similar results?
Oh, that’s a cool idea! I don’t know anything about training llama this way. How does it work?
Maybe not training but using Retrieval Augmented Generation on it?
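A minimal sketch of what that RAG pipeline might look like (everything here, including the chunk size and prompt shape, is illustrative; a real setup would rank chunks with an embedding index and send the prompt to an actual LLM where the placeholder comment is):

```python
def chunk(text, size=200):
    """Split source text into fixed-size character chunks (no overlap)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_prompt(question, retrieved_chunks):
    """Assemble retrieved code snippets plus the question into one prompt."""
    context = "\n---\n".join(retrieved_chunks)
    return (
        "Answer using only the code snippets below.\n"
        f"Snippets:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical source file standing in for a real codebase
source = "def fold(seq): ..." * 50
chunks = chunk(source)
# A real pipeline would rank chunks by embedding similarity to the question;
# here we just take the first two as a placeholder for retrieval.
prompt = build_prompt("what does fold() do?", chunks[:2])
print(prompt[:40])
```

Compared with fine-tuning, this keeps the model frozen and only changes what it sees at query time, which is usually far cheaper for a single codebase.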
Nice. In my experience, ChatGPT is most of the time really useful when a piece of software's documentation is poorly maintained or explained.
nice , looking forward to what you find. I need to be able to just drop a zip file tho
You're so lucky you get access to Claude. Here in Europe we're not allowed. 😢
That is what proxies are for.
I heard about this from my friend yesterday, crazy! I really hope it changes.
Gemini has an even bigger context window though. The only issue is that they don’t let you upload files for some reason. This means you have to copy and paste it all in yourself… if you’re ok with that, it will work just as well
anthropic is superior, no doubt
Great, let's start writing even more horrible code; in the end, the AI will read it.
MORE AI YAAAYYY!
Interesting
ask the ai when skynet will be self aware? lol
A backslash
Yes, Claude is better than ChatGPT
You guys/girls use AIs assuming they work well. Now I'll tell you a secret we SW engineers know: they are always wrong to some extent, and if you probe their understanding of something more deeply, their failure rate increases. A sort of Heisenberg principle. You should be worried if you find that a non-specialized AI works well for your very technical, specialized request. If you cannot understand a codebase yourself, you will likely not be able to tell when the AI is wrong about that codebase and is just stringing you along with a pretend understanding. You'll add mistakes on top of mistakes due to these fake understandings. The perfect recipe for failure. If this is too difficult to accept, consider that your ability to understand a codebase is below par compared to a non-specialized AI's. That's a skill gap you should close before putting your hands on the codebase. Otherwise the code will start falling apart, especially if there are no mandatory, well-established code review and automated regression testing processes.
I do agree with you but I still think AI can be used if you ask small questions and verify that they're right. Like, it can give you a possibly wrong path but you can always check if it is right in software.
For other subjects I'd stay away from AIs for now because they WILL make you learn wrong things.
How do you recommend we read large codebases?
@PlayerMathinson
Option a) pray there is documentation
Option b) learn coding rigorously
@Zartymila agreed!
To add to that, it is really helpful to ask where something is done. If the AI hallucinates, you will immediately see that when you look at the code.
I use AI all day when coding, and hallucinations aren't any problem. I use it for the main boring work, and then I fix the details. I'd rather spend my time doing unit tests.
There is simply not enough data to fine-tune an existing LLM into explaining a large codebase. Are there enough large public codebases? There are. Are there explanations of all of them? No. It would take clever engineering to generate that data, and that's not what current AI models do.
👍
:)
I am seriously baffled by how this girl transitioned from quantum mechanics to protein structures 😅
Really happy to see this transition towards AI-focused content. AI is not just the future; it's relevant right here and right now -- just like cryptocurrencies. In fact, I'd love to see you pairing your new AI focus with crypto talk. Now that would be topical!
cryptocurrencies are 99% scams