Damn…your intros are consistently excellent. I have recently discovered this channel and have watched maybe 10 interviews so far, and every one starts off with a fantastic introduction that really sets the tone for the listener. 👍🏻
It's very refreshing to see people taking this issue seriously and coming up with innovative solutions, rather than simply writing it off as a "skill issue". Now, off to check out Vale.
Great that you gave some exposure to the Vale language.
Best intros ever, as always! Good job, love these interviews!
Fascinating stuff! Thanks to Evan for exploring these ideas and to both of you for sharing them; it's nice to have the lay of the land for memory management: past, present, and possible futures. In the area of correctness, I like the idea of being able to annotate which functions will/may/won't affect state, will/may/won't handle IO, etc., so the system can be more predictable. This "new generation" of programming languages and their ideas might be able to bring this about; many thanks!
Also nice haircut 👍.
Another excellent conversation. I've been introduced to so many inspirational people and new languages through this podcast. Heisenbugs remind me of the olden days when I used to write C in the 80s/early 90s; compiling in debug mode often meant a crash would no longer manifest itself, because whatever buffer was being overrun no longer trashed anything quite so important. I was a novice programmer too, so I had little idea about any other debugging techniques. Irrelevant to this podcast, but just a fun bit of reminiscing.
In the 90s I always did "transactional" programming, using C++ with very strict lifetimes for objects on the stack.
Things go out of scope?
Fine, then deallocate automatically, no coding required.
And it does not matter how deep or complex the execution pathways are.
It does not have to be stack-memory based, but the flow of control does govern the lifetime of the memory claim.
This simplified the code a lot, and memory management too, since most memory use is very temporary.
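Something like this, in C++ terms (a toy sketch; the names are just for illustration):

```cpp
#include <string>
#include <vector>

// Scope as the "transaction": everything allocated inside is released
// automatically when control leaves the block, however it leaves.
void process() {
    std::vector<std::string> batch;        // owns heap memory
    batch.push_back("temporary work");
    {
        std::string scratch(1024, ' ');    // inner, even shorter-lived claim
        // ... use scratch ...
    }                                      // scratch freed here
}                                          // batch freed here, no coding required

int main() { process(); }
```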
This show is rapidly becoming /lang/dev/voices and I'm all for it
Evan's work is good; I was following Vale a few years back, and he's got a bunch of ideas for these fine-grained classes of problems.
I've been moving in the direction of "what if I just work with extension instead of abstraction" for personal coding, which works pretty well if you're in concatenative or sexp syntax (or both, as I am doing), since then the act of code generation is trivial: extension reframes a lot of typing, scoping, and synchronization problems as "I can add a state machine to check it at compile time". Given a sufficiently powerful algorithm for that check, you end up arriving at the abstraction anyway - not the generalized one, maybe, but one that resembles the academic concept. It's perhaps the most approachable way to explore safety and liveness in a freeform fashion, except that you will spend the first month building up the code generator from a dumb string concatenator into something interesting, and the result will be undocumented and opaque to outsiders.
Resource acquisition is initialization (or, yes, initialization is resource acquisition): the object acquires the resource (e.g. calls the system to open a file handle) at the moment the object is initialized (so in the constructor), as opposed to having to open the file first, construct the object with the handle passed in, do something via the object, destroy the object, and then close the file. The point of RAII is that when the object's life ends, the resource should end as well: you make the object and the file is opened; you do something via the object; you destroy the object, and that closes the file.
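A minimal sketch of the pattern, assuming a plain C stdio file handle underneath:

```cpp
#include <cstdio>
#include <stdexcept>

// Minimal RAII wrapper: the constructor acquires the resource,
// the destructor releases it, so the file's lifetime is the object's lifetime.
class File {
public:
    explicit File(const char* path) : handle_(std::fopen(path, "r")) {
        if (!handle_) throw std::runtime_error("could not open file");
    }
    ~File() { std::fclose(handle_); }  // resource ends when the object ends

    // Non-copyable, so two objects never own the same handle.
    File(const File&) = delete;
    File& operator=(const File&) = delete;

    std::FILE* get() const { return handle_; }

private:
    std::FILE* handle_;
};

int main() {
    File f("example.txt");   // acquisition is initialization (throws if absent)
    // ... use f.get() ...
}   // f goes out of scope here and the file is closed automatically
```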
Having the hash function differ from process to process in C# and Java is also solvable: just record the seed for the process run when in recording mode, and you get both security and replayability.
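A rough sketch of the mechanism in C++ (the comment is about C#/Java, but the idea is the same; the record/replay plumbing is left as a stub):

```cpp
#include <cstdint>
#include <random>
#include <string>

// Sketch: a seeded string hash (64-bit FNV-1a mixed with a per-run seed).
struct SeededHash {
    uint64_t seed;
    uint64_t operator()(const std::string& s) const {
        uint64_t h = 0xcbf29ce484222325ull ^ seed;  // FNV offset basis xor seed
        for (unsigned char c : s) {
            h ^= c;
            h *= 0x100000001b3ull;                  // FNV prime
        }
        return h;
    }
};

uint64_t choose_seed(bool record_mode, uint64_t recorded_seed) {
    if (record_mode) {
        std::random_device rd;                      // fresh random seed each run:
        uint64_t seed = (uint64_t(rd()) << 32) | rd();  // keeps hash-flooding defense
        // ... write `seed` into the recording, e.g. a trace file ...
        return seed;
    }
    return recorded_seed;   // replay mode: reuse the recorded seed, so hash
}                           // iteration order is exactly reproducible
```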
The region feature is very interesting, but I have a question: if two pure functions are in different regions, can they still be considered pure? Because regions are data at runtime, and the lifecycle of the function dictates the state of the region, that means the function has a side effect.
To put it more concretely, a pure region-annotated function cannot call another pure region-annotated function, because suddenly we have to implement a lifetime check of some sort, "coloring" the two functions together: the called function cannot be destroyed while the caller function is not destroyed.
Kris, if you did a demo video of Vale I would come watch it :)
Edit: as always, loved the video!
The two hardest problems in computer science are (1) naming, (2) caching, and (3) off-by-one errors.
null deref?
@@Eugensson some languages do not have NULL as a concept, while naming, caching and off-by-one errors are universal.
@@delian66 that is true. We can consider this issue almost solved, since most modern languages prefer option types. Yet C and C++ are still there.
@@Eugensson C++ has option types (std::optional). You can regard it as a hack, or not, but what C++ doesn't have yet, which is a problem, is standardised pattern matching to deal with sum types. Let's see if it can make it into C++26.
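To make that concrete: std::optional covers the "maybe absent" case today, and std::visit over a std::variant is the closest current stand-in for pattern matching (a small illustrative snippet, not any proposed C++26 syntax):

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <variant>

// Option type: forces the caller to handle the "absent" case explicitly.
std::optional<int> parse_port(const std::string& s) {
    try { return std::stoi(s); }
    catch (...) { return std::nullopt; }   // no null pointer in sight
}

int main() {
    if (auto port = parse_port("8080")) {
        std::cout << "port " << *port << '\n';
    }

    // Sum type today: std::variant + std::visit stands in for pattern matching.
    std::variant<int, std::string> v = std::string("hello");
    std::visit([](const auto& x) { std::cout << x << '\n'; }, v);
}
```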
Those "regions" sounds a lot like arena allocators. E.g. talloc in Samba.
Is a region like an NSZone?
So linear types == defer + finalizers?
It's like a bad joke.
"Finalizers taking arguments" seems backwards. Just do it in the object when that is required; then it will be scheduled in during the cleanup.
The argument is backwards.
Linear types are useful, but his arguments are not convincing.
The whole talk about spaceships doesn't make sense.
"Linear types: a hook into the desctructor call that needs to be provided upon deconstruction of a value (in the RAII sense)"
Kind of an inverse of RAII - Resource Deallociation can be parameterized with functionality in a different scope than the allocation.
Would that describe it well? I totally see the benefits :)
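C++ can't enforce linearity at compile time, but you can approximate the idea at runtime. A hypothetical sketch of a value that must be consumed exactly once, where consumption takes arguments a destructor never could:

```cpp
#include <cassert>
#include <iostream>
#include <utility>

// Approximation of a linear value in C++: it must be consumed exactly once,
// and consumption takes arguments - unlike a destructor, which takes none.
// (Real linear types make forgetting to consume a compile error; here it is
// only a runtime assert, and only in debug builds.)
class Missile {
public:
    Missile() = default;
    Missile(Missile&& other) noexcept : live_(std::exchange(other.live_, false)) {}
    ~Missile() { assert(!live_ && "linear value dropped without being consumed"); }

    // The "finalizer taking arguments": cleanup is parameterized with data
    // only available at the call site, not at construction time.
    void launch(double x, double y) && {
        assert(live_);
        live_ = false;
        std::cout << "launch at (" << x << ", " << y << ")\n";
    }

private:
    bool live_ = true;
};

int main() {
    Missile m;
    std::move(m).launch(3.0, 4.0);  // consume exactly once, with arguments
}   // if launch were skipped, the destructor's assert would fire
```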
I am sold on these linear types.
Ok, maybe not.
Why not just use a "pure" keyword on variables that implies some restrictions on what you can do with that reference?
No need to "declare" regions then.
Have you looked at Pony which uses reference capabilities? These are “keywords on variables that restrict what you can do”.
@@AdrianBoyko These days I stick to what I need to work with, so I miss out on some neat ideas no doubt. But even if I see them, they would not be usable for me in my job.
Great that people are trying to make things better however.
I might be overreaching here, but AFAIK opt-in restrictions (e.g. const in C/C++) tend to be less used than opt-in permissions (e.g. mut in Rust).
It's definitely a balancing act: you don't want to have to write 50 keywords to have a usable variable, but if you have to explicitly restrict a variable in some cases, some codebases will just ignore the feature altogether.
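A trivial sketch of the asymmetry (the Rust side is described in comments, since this snippet is C++):

```cpp
int main() {
    int counter = 0;        // C++: mutable by default; immutability is the
    const int limit = 10;   // opt-in restriction you must remember to write.
    counter = limit;        // nothing stops mutation unless you asked for const

    // Rust flips the default: `let x = 0;` is immutable, and mutation is the
    // opt-in permission (`let mut x = 0;`), so forgetting the keyword leaves
    // you in the safer, more restricted state.
    return counter;
}
```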
Exactly.