Scientific Coding
United States
Joined 2 Sep 2022
Videos on how to work on scientific code projects.
Explore best practices in software development and how to apply them in the field of science.
Learn languages and frameworks relevant to scientific software development.
Modern Machine Learning Fundamentals: Cross-attention
An overview of how cross-attention works and a code example of an application of cross-attention.
View the previous video for a dive into transformer architecture and the implementation of self-attention: th-cam.com/video/6hHNdBrH9yY/w-d-xo.htmlsi=jzw-ajM-r-lJkiq1
Reference:
"Attention is all you need" arxiv.org/abs/1706.03762
Views: 702
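As a rough companion to the description above, here is a minimal cross-attention sketch in PyTorch. It is not the notebook's code: the learned query/key/value projections are omitted, and the tensor shapes are illustrative.

import torch
import torch.nn.functional as F

def cross_attention(queries, context, d_k=64):
    # queries: (batch, q_len, d_k), e.g. decoder states
    # context: (batch, kv_len, d_k), e.g. encoder outputs used as keys and values
    scores = queries @ context.transpose(-2, -1) / d_k ** 0.5
    weights = F.softmax(scores, dim=-1)   # each query attends over the context
    return weights @ context              # weighted sum of context vectors

q = torch.randn(2, 5, 64)
ctx = torch.randn(2, 7, 64)
print(cross_attention(q, ctx).shape)      # torch.Size([2, 5, 64])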
Videos
Modern Machine Learning Fundamentals: Transformers
Views: 834 · 1 month ago
A helpful walkthrough to understand transformer and attention concepts with example code implementations. Jupyter notebook with example code: github.com/ScientificCoding/scientific-coding/blob/main/deep-learning/2024-11-04 transformer encoder/2024-11-04 transformer encoder.ipynb
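For orientation (not the notebook's exact code), a tiny transformer encoder can be assembled from PyTorch's built-in layers; d_model, nhead and the tensor sizes below are illustrative.

import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)   # stack of self-attention blocks

tokens = torch.randn(8, 10, 64)   # (batch, sequence length, embedding dim)
print(encoder(tokens).shape)      # torch.Size([8, 10, 64])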
pytorch-lightning training flow
Views: 70 · 3 months ago
A compact tutorial on how pytorch-lightning can streamline your machine learning training flow. Included is a brief demo of the tool TensorBoard. Github link to jupyter notebook: github.com/ScientificCoding/scientific-coding/blob/main/deep-learning/2024-08-23 pytorch lightning/pytorch lightning training flow.ipynb Link to previous video on pytorch training flow: th-cam.com/video/haY_h1HWjpg/w-d...
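A minimal sketch of the idea, assuming pytorch-lightning is installed; the module, data and hyperparameters below are placeholders rather than the video's code.

import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)   # picked up by loggers such as TensorBoard
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=16)
pl.Trainer(max_epochs=2, logger=False, enable_checkpointing=False).fit(LitRegressor(), data)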
Train Diffusion Models - Line by line code example
Views: 497 · 7 months ago
A line by line code implementation of a diffusion model to train an image generator in PyTorch. Jupyter notebook link: github.com/ScientificCoding/scientific-coding/blob/main/deep-learning/2024-05-07 diffusion model/2024-05-07 image generation with diffusion models.ipynb Some sources used and for further reading: th-cam.com/video/a4Yfz2FxXiY/w-d-xo.html github.com/lucidrains/denoising-diffusion...
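A hedged sketch of the DDPM-style training objective behind image-generation diffusion models; here `model` stands for any network (typically a U-Net) that predicts noise from a noisy image and a timestep, and the schedule constants are illustrative, not the notebook's values.

import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # cumulative signal retention

def diffusion_training_loss(model, x0):
    # Pick a random timestep per image, noise the image to that level,
    # and train the model to predict the added noise.
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)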
Deep Learning Training Flow -- PyTorch
Views: 356 · 8 months ago
An overview of a basic deep learning training flow in PyTorch. Line by line explanation of deep learning basic concepts. Jupyter notebook: github.com/ScientificCoding/scientific-coding/blob/main/deep-learning/2024-04-12 deep learning training flow/deep learning training flow.ipynb
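The overall loop usually has the shape sketched below; the toy data and model are placeholders, not the notebook's.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

loader = DataLoader(TensorDataset(torch.randn(256, 4), torch.randint(0, 2, (256,))), batch_size=32)
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()            # reset gradients from the last step
        loss = criterion(model(x), y)    # forward pass and loss
        loss.backward()                  # backpropagation
        optimizer.step()                 # weight update
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")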
5 Basic Pytorch Operations For Effective Deep Learning Flows
Views: 106 · 9 months ago
Basic and essential PyTorch operations that are used in most deep learning flows. Example jupyter notebook: github.com/ScientificCoding/scientific-coding/blob/main/deep-learning/2024-03-15_5_basic_pytorch_operations/basic_pytorch_operations.ipynb
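The description does not list the five operations, so the ones below are only a guess at commonly used basics, not necessarily the video's selection.

import torch

x = torch.arange(12.0)
a = x.reshape(3, 4)              # reshape a flat tensor
b = a.unsqueeze(0)               # add a batch dimension -> (1, 3, 4)
c = a @ a.T                      # matrix multiplication -> (3, 3)
d = torch.cat([a, a], dim=0)     # concatenate along rows -> (6, 4)
e = a.mean(dim=1)                # reduce over a dimension -> (3,)
print(b.shape, c.shape, d.shape, e.shape)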
Positional encodings in PyTorch
Views: 193 · 9 months ago
Code and understand positional encodings using PyTorch. Example jupyter notebook: github.com/ScientificCoding/scientific-coding/blob/main/deep-learning/2024-03-15_positional_encoding/positional encoding.ipynb
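A small sketch of the sinusoidal encoding from "Attention is all you need" (illustrative; the notebook's implementation may differ in detail).

import torch

def sinusoidal_positional_encoding(seq_len, d_model):
    # Even columns use sine, odd columns cosine, at geometrically spaced frequencies.
    pos = torch.arange(seq_len).unsqueeze(1)
    i = torch.arange(0, d_model, 2)
    angles = pos / (10000 ** (i / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

print(sinusoidal_positional_encoding(50, 64).shape)   # torch.Size([50, 64])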
LangChain - prompt a local model
Views: 146 · 9 months ago
Line by line instructions on how to prompt a model imported from HuggingFace. This video follows documentation provided by LangChain here: python.langchain.com/docs/integrations/llms/huggingface_pipelines The notebook code for this demo can be found here: github.com/ScientificCoding/scientific-coding/blob/main/lectures/2024-03-10/langchain - prompt local model.ipynb
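Roughly, the flow looks like the sketch below; the model id is a placeholder, and the exact import path (langchain vs langchain_community) depends on the installed LangChain version.

from langchain_community.llms import HuggingFacePipeline

# Download a model from the HuggingFace hub and wrap it as a LangChain LLM.
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",                        # placeholder model id
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 50},
)
print(llm.invoke("Scientific computing is"))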
Fine-tune LLMs - Line by line code example
Views: 4.6K · 9 months ago
This is a line by line code example of how to perform parameter-efficient LoRA fine-tuning on an LLM. Github link to code notebook: github.com/ScientificCoding/scientific-coding/blob/main/lectures/2024-03-01/lora_peft_example.ipynb
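In outline, the PEFT setup looks like this sketch; the base model name is a placeholder and the hyperparameters are illustrative, not the notebook's.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"                                   # placeholder base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Freeze the base weights and inject small low-rank adapter matrices.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()              # only a small fraction is trainable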
LangChain Advanced - Ask questions to local LLM
Views: 607 · 1 year ago
How to download a model from Hugging Face, create an index based on your own documents and ask questions about the content. Github Link to Jupyter Notebook: github.com/ScientificCoding/scientific-coding/tree/main/lectures/2023-05-27
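In outline the indexing step can look like the sketch below, following the older 2023-era LangChain API used around the time of the video; newer releases require passing an embedding model explicitly, and the file name and question are hypothetical.

from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("my_notes.txt")                      # hypothetical local document
index = VectorstoreIndexCreator().from_loaders([loader])
print(index.query("What does the document say about calibration?"))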
LangChain - Advanced SQL in Natural Language and SQL Prompt Engineering
Views: 2.1K · 1 year ago
Link to github with notebook code used in this video: github.com/ScientificCoding/scientific-coding/tree/main/lectures/2023-05-19 Examples of advanced SQL natural language prompt engineering and how to provide the context needed by the LLM inside the prompt. Disclaimer: OpenAI provides a free API key for initial testing. Once you move to a paid subscription, calling the API in the way demon...
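As a rough sketch of the idea (import locations have moved between LangChain releases, and the database URI and question are placeholders):

from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")        # placeholder SQLite file
chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
print(chain.run("How many customers placed an order last month?"))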
run AI image generation on your computer
Views: 113 · 1 year ago
Link to step-by-step instructions: github.com/ScientificCoding/scientific-coding/blob/main/lectures/2023-05-05/instructions.md Link to the github repo for the UI: github.com/AUTOMATIC1111/stable-diffusion-webui Link to model on huggingface: huggingface.co/runwayml/stable-diffusion-v1-5
Run a LLM - for example ALPACA - locally in a UI
Views: 1K · 1 year ago
Run an LLM on your machine and access it through a UI. This demo shows how to set up Alpaca to run in a UI for responding to prompts. Link to the instructions shown in this video: github.com/ScientificCoding/scientific-coding/blob/main/lectures/2023-04-16/instructions.md The github repo of the UI used in this demo: github.com/oobabooga/text-generation-webui The pre-trained LLM model used in this e...
Inflation analysis in python
Views: 413 · 1 year ago
Use Python to find out how prices have evolved over time. github link to jupyter notebook: github.com/ScientificCoding/scientific-coding/blob/8578e7d88ba9e66eba5d7c90d1fb4dab7ce93927/lectures/2023-04-14/2023-04 inflation price analysis.ipynb
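The core of such an analysis can be as small as a pandas year-over-year percentage change; the prices below are made up for illustration.

import pandas as pd

prices = pd.DataFrame({
    "year": [2019, 2020, 2021, 2022, 2023],
    "price": [2.50, 2.55, 2.70, 3.10, 3.25],   # hypothetical average item price
})
prices["yoy_inflation_pct"] = prices["price"].pct_change() * 100
print(prices)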
Use ChatGPT to write code for you
Views: 1.6K · 1 year ago
Use GPT to accelerate code development. This code example demonstrates how you can generate code with OpenAI's GPT service and write it directly into a Python code module. github link to code notebook: github.com/ScientificCoding/scientific-coding/blob/2de72598e63d31e4f4488dae935fdc0d51aeafd5/lectures/2023-03-31/use GPT to write code for you.ipynb Disclaimer: OpenAI provides a free API key ...
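A hedged sketch of the same idea with the current OpenAI Python client (the video predates this client; the model name and output file are placeholders, and OPENAI_API_KEY must be set in the environment):

from openai import OpenAI

client = OpenAI()                              # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",                       # placeholder model name
    messages=[{"role": "user",
               "content": "Write a Python function that returns the n-th Fibonacci number."}],
)
generated = response.choices[0].message.content
with open("generated_module.py", "w") as f:    # hypothetical target module
    f.write(generated)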
LangChain intro - make natural language SQL database queries
Views: 5K · 1 year ago
LangChain intro - train LLM on set of pdf files
Views: 9K · 1 year ago
Machine Learning in Julia - Random Forest Classification
Views: 487 · 1 year ago
Simple way to manage different Python versions
Views: 56 · 1 year ago
Analyzing my electricity usage with julialang DataFrames
Views: 111 · 2 years ago
Simple and powerful plots in the Julia programming language
Views: 7K · 2 years ago
Functions in the Julia programming language
Views: 56 · 2 years ago
That's great! Thanks for your video
Coming from reading the paper and watching a couple of explanations and found the code really clear and much simpler than I thought it could be! Thanks for the video man!
I am glad to hear you are finding it helpful!
What nice content!
Thank you for the informative video on implementing diffusion models. I have been following along and made some modifications to improve the clarity of the generated images, such as increasing the number of steps and epochs and adjusting the learning rate. However, the final images are still not as clear as I would like them to be. Could you please provide some guidance on how to further improve the quality of the output images?
nice
Y=32!×43×11
Wow! Never seen such good content on here
Great
Great
Thanks for the video, it is very informative. It's amazing that all you had to do was give the SQLDatabaseChain() the reference to the SQLite Db and the LLM and it was able to figure out the correct joins, but what if I had an existing schema that had hundreds of tables and I only wanted to expose a subset of those tables to the LLM for the creation of a SQL statement to satisfy a user's inquiry, how would I restrict the tables to be used? Also, you bring up a good point about using OpenAI's LLM and table names becoming public. Have you tried and can recommend other LLMs? Thanks for sharing.
Great work, thx
Can one do zoom and pan in IJulia ?
To the point! Loved it
Glad to hear!
Nice and clean. This is such a generous gift. I am new at this. I will use your code as a template for future training flows. Thank you for sharing your knowledge.
I am so glad to hear that you are finding it helpful!
Thanks for the video and the work you put in sharing this. Very helpful so far! I followed the instructions and it executed just as shown in the video. But where is the fine-tuned model saved to? After execution, I only find an empty folder, named with the current time in the root of the python penv.
Thanks for sharing 💕💕
Fantastic work; for someone who isn't a coder, it's a gold mine. I'm new to the LLM space and have an inquiry: since the PDF I extracted isn't formatted, how can I create a dataset with embeddings to fine-tune the LLM model? Additionally, I'd like your advice on structural modelling for large text databases: should I use an LLM or another method? (I want to combine hundreds of PDFs into a single structural hierarchy document that summarises information without losing context. Is this possible locally?)
Thanks and glad to hear! I have a video on how to work with PDFs: th-cam.com/video/vXmpThOZiIM/w-d-xo.htmlsi=X5AUV0x8g6ZZEhS5 However, it has been a year since I recorded it and there are better ways now to solve this. I plan to make a new video on that in the future.
This was really useful. I'm trying to build (improve) an LLM that could convert Redshift SQL to Athena SQL or Spark SQL. I'm thinking of using this to train over something like Mistral 7B.
It would be better if you pick some model that is closer to your use case. I believe there is one by defog ai that is already trained for sql. This can help you extract extra performance.
@@skad22 Thanks man. I'll try to use sqlcoder-70b
Man I’m glad I subbed to your channel. What a great walkthrough
Glad to hear that!
awesome tutorial, thanks!
Nice video. I am a junior data analyst and was wondering if it is possible to use the same concept on an existing db with hundreds of tables. I got some (hard) business inquiries but little explanation about the db itself, and was thinking about a way of having a conversation with your db. Let me know your thoughts
Maybe index the columns and rows to vector db
Somehow the 3D plot isn't working. What am I doing wrong? (I am using VS Code)
I would say the headline is misleading, as this tutorial is not really training the model, but rather indexing PDFs and exposing the most relevant parts to the OpenAI prompt with a question.
Perfect!
Thanks for this video, it's super helpful :) I'm newer to using langchain and Chroma - how would I get the indexes to persist using Chroma? In another project I used "vectordb = Chroma.from_documents(documents=texts, embedding=embedding,persist_directory=persist_directory)" with a text splitter, but am unsure on how to persist the index created by VectorIndexCreator.
Hi, I am trying to reproduce the project shown in the video, but I am unsure how to write a general prompt and then generate SQL by passing table and column information.
Can you elaborate?
@@scientificcoding3153 I am trying to implement this logic on a database that has approx. 100+ tables, so how can I make sure the correct SQL query is generated? We cannot pass all 100+ tables to the LLM because of the context length.
You will need to use an LLM which allows for longer prompts. With OpenAI's API this is currently not possible, as far as I know. Alternatively, you can also host your own LLM and fine-tune it with information about your schema. That way you can use the prompt space exclusively for the query you are trying to build.
@@scientificcoding3153 Are you saying OpenAI can't read a schema with 100 tables? Or just a large join which might return more than 100 columns?
how to ask questions to github repositories so the model understands programming languages?
Great and helpful vid :)! Can this setup be used with a non-OpenAI platform?
I understand that we are setting the OpenAI API key in this code, but where are we using it? Where have we mentioned that we want to use some LLM model? Is this code even using some LLM model?
The API key is required to use the API provided by the openai package. Without setting it, the calls in this example would error out. The LLM resides server side with OpenAI. We are using it through the API.
Can we plug in LLaMA models by Facebook?
It may require a different wrapper, but it can be done
After getting all dependencies, I am not able to get langchain to work properly after import os. Do you have any suggestions?
What is the runtime error message during import?
Thank you for putting together this demo. I am going to have to try this out. I worry, however, that it will not do so well with a real-world database, one with various foreign keys, one-to-many relationships, abbreviated table and column names. I have no idea how it would be able to make sense of such a schema seeing as how I barely can.
What I have noticed is that there is room for improving the results of prompts by adding more information into them. For instance, if you add written information about the relationships as well as column name abbreviation mappings into the prompt ahead of the question, the model will work with that information and possibly improve the accuracy of the returned result.
@@scientificcoding3153 That's interesting. I hadn't considered that possibility. I was tossing around the idea of annotating the DAOs, which is where the SQL queries are, storing those annotations in a vector database and then pulling out the ones most similar to what the user typed in. But even if that worked, it would limit queries to ones I had already written, so not great. Your approach sounds more promising. Thank you.
I created a new video on SQL and langchain which discusses joins and prompt engineering. Hope you will find it helpful: th-cam.com/video/IsxeqHVHaPM/w-d-xo.html
@@scientificcoding3153 Thank you very much for putting this together! It remains to be seen how accurate the queries are with a production database where the tables and column names are abbreviated. But you have given me something to try out. Ideally, I would like queries that can be answered by the database to be routed to it (e.g., "What is the maximum mpg by number of cyl?") and the rest to be routed to the LLM (e.g., "Why is the sky blue?"). This is the sort of decision-making that ChatGPT's plugins exhibit, and I assume LangChain's as well.
What website or program are you using to run this? I was trying to run it in VS Code and got an error after index = VectorstoreIndexCreator().from_loaders(loaders) - "Output exceeds the size limit. Open the full output data in a text editor"
I use jupyter notebook. Is what you are seeing an actual execution error or rather a warning by your editor?
Can you use Alpaca with LangChain?
It should be possible. I am working on finding out how to
Is it possible to use any other LLM instead of the OpenAI model, without using an OpenAI API key? Is there any way? Could you guide?
Thanks for your interest and yes, it can be done. I am diving deeper into how to run LLMs locally without the need for OpenAI. Please also see my latest video for an example.
I'm not clear why Detectron2 is needed, given that we are not doing object detection here?
The pdf loader class requires detectron2. Without it, it will report that it is missing as a dependency. I am not sure either why it needs it for text loading.
I stopped using Julia because the plots are really slow 🙁
What do you use instead?
@@scientificcoding3153 Python and Matplotlib. Generally we plot data once (Julia can do it faster on a second attempt).
Which package did you use and what did you find to be slow? It should be extremely fast using Plots.jl, so I would suggest there must be something off with your setup.
You can't read anything! The quality of the video is very bad...a pity
Watch it on a mobile. You can zoom in.
It is fine for me, in 1080p60 resolution, on a full HD laptop screen - especially if I run the video in full screen
Sorry to hear. I double checked using various browsers. It seems fine to me.
@@paperclips1306 Sounds like you had a bad internet connection on the day you watched. TH-cam will send a low-res version if lots of packets are being dropped.
I found this useful
Glad to hear!