That thing about having a function in Emacs to yank all the text content from all visible frames and windows sounded like a good idea for interacting with ChatGPT, so I asked ChatGPT to write that Emacs Lisp function for me.
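A minimal sketch of what such a function could look like, assuming the goal is just the text currently visible in each window (not the full buffers); the name my/yank-all-visible-text and the exact behavior are my own guesses, not the ChatGPT-generated code the commenter got:

```elisp
;; Sketch: copy the visible text of every window on every visible frame
;; onto the kill ring, so it can be pasted into a chat.
(defun my/yank-all-visible-text ()
  "Copy the visible contents of all windows in all visible frames to the kill ring."
  (interactive)
  (let (chunks)
    (dolist (frame (visible-frame-list))
      ;; Non-nil, non-t second argument excludes the minibuffer window.
      (dolist (win (window-list frame 'no-minibuffer))
        (with-current-buffer (window-buffer win)
          ;; Grab only the portion of the buffer currently shown in the window.
          (push (buffer-substring-no-properties
                 (window-start win) (window-end win t))
                chunks))))
    (kill-new (mapconcat #'identity (nreverse chunks) "\n\n"))
    (message "Copied visible text from %d windows" (length chunks))))
```

Bound to a key (or called via M-x), this puts everything on screen into one kill-ring entry; swapping window-start/window-end for point-min/point-max would copy whole buffers instead.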
I have an engineering hack that has served me well these last 13 years: when you google the error, skip over any Stack Overflow links. They are occasionally comedic, and they might "work" for a while and then explode when you turn your back.
It's probably good to look not only at what an LLM can do but also at what is financially viable long term. So far, training costs are AFAIK still increasing non-linearly with each generation, and OpenAI is bleeding money. So a valid question for the sustainability of this technology is: what can an LLM do at a reasonable price? I'm only at the beginning of the episode, so maybe this aspect will also be brought up...