Granite LLM: Code Smarter, Not Harder
- Published Nov 28, 2024
- Want to try for yourself? Find the code on Github → ibm.biz/BdaT2a
Learn more about the technology → ibm.biz/BdaT2G
Need help making sense of complex code? In this video, PJ Hagerty demonstrates how to use Granite LLM for code summarization, generation, and completion, all at the code level. From generating missing code to completing partially functional code, learn how to use these features effectively to improve your coding workflow, enhance code quality, and increase productivity.
AI news moves fast. Sign up for a monthly newsletter for AI updates from IBM → ibm.biz/BdaT2n
#ai #largelanguagemodels #granite
Couldn't you have a small language model, during chain of thought, draw from a dataset of 28,000 or so colours, then use a lower-dimensional layer to do translation and context layout?
The dataset, the SLM, and the lower layer would have to be good enough at first; then you could get to technical tweaking. Imagine a high-TOPS processor running a small language model with a bigger dataset in system memory (not VRAM), using the quickest circuits known in 2027, producing a 60-second-long response.
In all LLMs used for coding, multiline code snippets should be enclosed in ``` (triple backticks).
Can I use this to work on my Metaculus prediction bot?
Does it produce the same snippets each time? One issue with LLMs at present is the variability of their output, which diminishes trust.
I think that's just a matter of the temperature parameter; as far as I know, at temperature 0 the LLM should produce the same output each time.
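To make the temperature point concrete, here is a minimal NumPy sketch of how a sampler typically uses the parameter. This is an illustrative toy, not Granite's actual decoding code: at temperature 0 the sampler falls back to greedy argmax, so the output is the same on every run; above 0 it samples from a softmax over temperature-scaled logits, which introduces the variability mentioned above. (Real serving stacks can still vary at temperature 0 due to batching and floating-point nondeterminism, but the sampling step itself is deterministic.)

```python
import numpy as np

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy argmax (deterministic).
    temperature > 0  -> sample from softmax(logits / temperature).
    """
    if temperature == 0:
        return int(np.argmax(logits))  # always the highest-scoring token
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.0, 0.5, 0.1])

# At temperature 0, every seed yields the same token (index 0 here).
greedy = {sample_token(logits, 0, np.random.default_rng(s)) for s in range(10)}
print(greedy)  # {0}

# At temperature 1, different seeds can yield different tokens.
sampled = {sample_token(logits, 1.0, np.random.default_rng(s)) for s in range(10)}
print(sampled)
```

So "same snippet each time" is achievable in principle by requesting greedy decoding, at the cost of losing the diversity that higher temperatures provide.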
Nice
Beautiful 😊🥰🤣