I've written a lot of code in both. I like the simplicity of Go, for sure. Fewer ways to shoot yourself in the foot... kind of. There are just fewer choices to make in Go, which can be a good thing.

TypeScript is more like Scala: it can support just about any paradigm you'd like to use. There are ten ways of doing anything, and you can easily over-engineer a problem by adding layers of generic inheritance or something like that. Then there are the layers of configuration. Not all TypeScript is the same; max-strict TypeScript, in practice, ends up looking much different from minimally strict TypeScript.

The point about no union/sum types is really important. Not everyone uses these features, but once you start using patterns like tagged unions, everything starts to look like a tagged union problem. The simplest example is an Option type, which you can't make yourself in Go today.
Option types are possible in Go, using Generics. They're just not as nice to work with, compared to a language like Rust which has very powerful pattern matching. Go Generics, at least in their current state, are very basic/underpowered. You could argue that that's perfect for Go, but I'd have to disagree. If you're going to add a feature like Generics, you can't half-ass it.

type Option[T any] struct {
    Some  T
    Valid bool
}
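A minimal sketch of what such a generic Option could look like in practice; the Some/None/Get names here are illustrative, not from any standard library:

```go
package main

import "fmt"

// Option is a hypothetical generic optional value.
type Option[T any] struct {
	value T
	valid bool
}

func Some[T any](v T) Option[T] { return Option[T]{value: v, valid: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// Get returns the value and whether it is present, mirroring Go's
// "comma ok" idiom rather than pattern matching.
func (o Option[T]) Get() (T, bool) { return o.value, o.valid }

func main() {
	o := Some(42)
	if v, ok := o.Get(); ok {
		fmt.Println("got", v)
	}

	empty := None[string]()
	if _, ok := empty.Get(); !ok {
		fmt.Println("nothing there")
	}
}
```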
@@Treslahey that's a creative way to implement an optional type, but having it as a struct vs. an enum just feels wrong, like you mentioned. Since an optional can only have 2 states, it makes more sense to model it as an enum, which you can't do in Go unfortunately, since enums in Go are just named constants (usually built with iota), not real sum types.
The whole thing about optional types is that you stay inside the Maybe/Option monad and delegate the error handling to the monadic structure. No need for an enum or some shit like that; that's just a neat sprinkle on top of your chocolate milk, but the chocolate milk itself doesn't need it to be delicious.
As so often is the case... The guy in the article tries to write TS using Go. If you want to write TS then use TS. Go is great but not if you treat it like TS.
@@ryangamv8 yeah, sometimes it would be nice. Constants can cover some of those cases, but since they're compile-time constants they're not the same, unfortunately. It would be nice if they introduced that eventually.
Or any other language. I'm tired of people trying to write Java, C++, JS and other stuff in Go. You look at the code, you see the mess, you think "this looks like this person is trying to write in X language", and then you go check their background and, as it turns out, at their previous job they were writing in X language. You can see right through it.
@@cranberry888 Like, when I see "IRepository", "IClient", etc., or getters and setters for everything and everywhere, named "GetThis", "GetThat", they came from some OOP language like C# or Java. When I see too much reflection or generics, they probably came from JS, and so on. I might be wrong, I'm just speaking from my own experience, and to this day it has never been wrong. It's the minor things that give away their background.
As a Rust programmer, the one thing I don't like in Go's error handling is that you can still technically access the value even if there's an error. You get both the error and the value returned at the same time. In Rust you're guaranteed to only get one or the other, thanks to ADTs.
It has been repeated 1000 times. Yet as a Go programmer, it doesn't bother me or cause any bugs. I had to get used to it, sure. But after that it's just smooth sailing.
@@krux02 The issue is that it CAN cause bugs. Not to mention, not every function follows the rule of "if it returns an error, the value should be regarded as invalid". And it's not always clear without reading through all of a function's documentation (io.Reader is a perfect example of this). I've been programming primarily in Go for the past 7 years, and can say with confidence that Rust's way is just plain better.
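io.Reader is a good concrete case of what the comment above describes: its documented contract allows an implementation to return both a byte count and a non-nil error from the same call, so the value can still matter even when err != nil. A small sketch using only the standard library:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	r := strings.NewReader("hi")
	buf := make([]byte, 8)

	// io.Reader allows an implementation to return both n > 0 and a
	// non-nil error in the same call; callers must use the n bytes
	// before looking at err. strings.Reader happens to report io.EOF
	// only on the following call, but callers can't rely on that for
	// readers in general.
	n, err := r.Read(buf)
	fmt.Printf("read %q, err = %v\n", buf[:n], err)

	// The second read returns n == 0 and io.EOF.
	n, err = r.Read(buf)
	if err == io.EOF {
		fmt.Println("done, n =", n)
	}
}
```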
@@johnyewtube2286 because it informs you of why they're saying the thing they're saying? You might have had a point if you saw "as a hairdresser, I think roundabouts are better than traffic lights".
Go's error handling is amazing because it keeps it as part of the control flow. Exceptions are the bane of new and "mature" codebases alike, especially within a 20 mile radius of any junior engineer.
Go error handling is better than TS for sure, but most of the time I just wish it did something like Rust, wrapping the error in a Result enum, or at least had some syntactic sugar like the "?" operator in, again, Rust.
The biggest gap in Go error handling, imo, is the lack of a "must use" annotation like Rust has. If a function returns only an error, it can be easy to accidentally forget to check the error.
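To illustrate the gap: a call whose only return value is an error compiles fine as a bare statement, so nothing in the language forces you to look at it. A minimal sketch (the save helper and paths are made up); linters such as errcheck, which golangci-lint bundles, exist to catch exactly this:

```go
package main

import (
	"fmt"
	"os"
)

// save returns only an error; there is no other value forcing the
// caller to do anything with the result.
func save(path string, data []byte) error {
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// This compiles and runs even though the error is silently dropped.
	save("/definitely/not/writable/out.txt", []byte("hi"))

	// The check only happens if you remember to write it.
	if err := save("/definitely/not/writable/out.txt", []byte("hi")); err != nil {
		fmt.Println("save failed:", err)
	}
}
```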
Arrays are values; slices are pointers to an array. append returns a copy of the slice header pointing to the same array, BUT if the underlying array has to grow because of the append, it will allocate a new underlying array and return a header pointing to that new array.
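A small demo of that behavior, assuming nothing beyond the standard library: the first append fits within capacity and writes into the original array, the second one exceeds capacity and silently switches to a new backing array:

```go
package main

import "fmt"

func main() {
	backing := [3]int{1, 2, 3}
	s := backing[:2] // len 2, cap 3, shares backing's memory

	// Within capacity: append writes into the existing array.
	s = append(s, 99)
	fmt.Println(backing, s, cap(s)) // [1 2 99] [1 2 99] 3

	// Beyond capacity: append allocates a new, larger array and copies.
	s = append(s, 100)
	s[0] = -1
	fmt.Println(backing, s) // [1 2 99] [-1 2 99 100]: backing is no longer affected
}
```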
I think I just like Go now. The whole try-catch dance in TS is a nightmare, and I'm fairly sure it nudges you towards just "YOLO: hope it doesn't fail".
25. A slice is a portion of an array (which may be the whole array if you do x := [5]int{0, 1, 2, 3, 4}; s := x[:]). Slices work on top of the array you slice. Mutating a slice element (s[0] = 5) will mutate its backing array (x[0] is also 5 now), and those changes will be reflected in any slice over that same array. If you need a copy, you have to use the builtin copy function or make a copy manually (for example, a for loop copying into a new slice with a different backing array).

Slices are passed as a copy, but that doesn't mean you get a full copy of the elements; you just get a copy of the slice data structure, which is roughly a struct { length, capacity int; data uintptr }, where data is still a pointer. So if you do s[0] = 0 inside a function, it will mutate the backing array. If you pass s to a function and do s = s[1:], s is now [1 2 3 4] in the function scope, but it will still be [0 1 2 3 4] outside the function.

I actually find it quite easy to understand, and I am pretty stupid; it's how any user-defined data structure would behave when used as an argument in Go. There are some other rules, like appending to a slice with not enough capacity, but once you know the rules about slices, it is easy and makes sense.

P.S. Having immutable data structures in Go is not possible; you can make copies, but mutating one is not a compile error.
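A minimal sketch of the aliasing and the explicit-copy escape hatch described above:

```go
package main

import "fmt"

func main() {
	x := [5]int{0, 1, 2, 3, 4}
	s := x[:] // slice over the whole array

	s[0] = 5
	fmt.Println(x[0]) // 5: mutating the slice mutates the backing array

	// For an independent copy, allocate a new backing array and copy into it.
	c := make([]int, len(s))
	copy(c, s)
	c[1] = 99
	fmt.Println(x[1], c[1]) // 1 99: the original is untouched
}
```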
It took me a very long time to understand slices, and I still don't think I've fully captured it. Currently I see a slice as a view of the underlying array: when we cut a slice, we just create a new view of the same array, so manipulating one of the views will affect all the other views. It's so silly that Go designed it this way, making something very simple so hard to understand.
You have to be careful when appending to a slice within a function if you want to observe that change in the caller. If the capacity is exceeded, the slice inside the function will start pointing to a new array, which is a copy of the old array but with double the capacity. The original array remains unchanged at the same old address, which is what your outside slice still points to, so you won't see the change.

In that case, pass a pointer-to-slice instead of the slice. It points to the address of the slice header, not of the array, so any change within the function works on the slice residing in the caller.

Also consider this: you pass an instantiated but empty slice 'a' to a function that should populate it, and suppose your function makes a slice 'b' with some elements, or calls another function whose return value is equivalent to slice b, and you just assign a = b or a = g(...) (function g returns a slice). Now slice 'a' inside the function (which of course can have a different name there, i.e. the name of the parameter, but we tend to use the same names for arguments and parameters) points to the underlying array of b, but the slice a in the caller still points to the same empty array as before.

In that case, also pass a pointer-to-slice as the function parameter and assign like this: *a = b. Since 'a' now names a pointer variable, you dereference it to get to the slice header in the caller, and the slice in the caller then points to the underlying array of b. You didn't have to copy each element in a loop. Also use *a = append(*a, c...) when appending slice 'c' to the caller's slice inside a function and you are not absolutely sure that you won't exceed the capacity at runtime.
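A small sketch of the two calling conventions described above, using a slice literal whose capacity is already full so any append reallocates:

```go
package main

import "fmt"

// appendByValue gets a copy of the slice header; the caller's header
// (and its length) is never updated, so the new element is not visible.
func appendByValue(s []int) {
	s = append(s, 4)
}

// appendByPointer mutates the caller's slice header, so growth is visible.
func appendByPointer(s *[]int) {
	*s = append(*s, 4)
}

func main() {
	a := []int{1, 2, 3} // len == cap == 3, so any append reallocates
	appendByValue(a)
	fmt.Println(a) // [1 2 3]

	appendByPointer(&a)
	fmt.Println(a) // [1 2 3 4]
}
```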
Function overloading is a good alternative to OOP and object.verb(subject) notation. An overloaded function call is essentially pattern matching on a tuple of types, so it's a good option to have.
@@aimanbasem Only if there is no bounds checking, which I believe there is in JS. But in C there is no bounds checking, so if an array can be indexed arbitrarily (or even just accessed out of bounds), there are ways for a user to take control of the program and start trying to execute OS commands with whatever access the program was given. This is also why you should assign specific users/roles to processes and never run them as root unless absolutely needed.
26: A slice is a reference to some array. The length of the slice is how many elements are in the slice, and the capacity is how much room the backing array has from the slice's start to the array's end. If somebody uses the antipattern sliceb = append(slicea, something), then sliceb can be either a view into slicea's array or a copy of slicea plus the new element. If slicea is at full capacity, Go will create a new array with larger capacity and put sliceb there, but it won't move slicea to the new array.
Function overloading exists in a lot of langs, and it's quite useful. I'm pretty sure C++, Java, a good handful of functional langs, etc. have it. It's good if you have some optimisation that uses a lookup table or premade collection that needs to be searched: if it was created beforehand, it can be passed in, or the function builds it itself.
@@tokiomutex4148 With proper naming, I'd disagree. ToString() in C# has many overloads that still do exactly what the name means; the algorithm just differs depending on what data type is converted into a string. Having multiple constructors is also great, though you could argue we could simply create an empty object first and then chain property assignments to accomplish the same goal (such as new Object { property1 = value, property2 = otherValue, property3 = value3 }). I find that function overloading only makes sense in OOP, though. In Go it would feel weird, kind of out of place.
The main problem with this article is that the author just wants to write TypeScript code in Go. That's a stupid idea that doesn't work with any programming language. You just need to learn how to write idiomatic Go and most of those "problems" won't even appear in your project.
For real, the nil pointer argument is dumb; you just have to remember to check if it's nil, which, if you write C/C++, you should already be comfortable with.
@@anon-fz2bo I think he meant that in Go an interface that contains a nil pointer is not itself nil. For example:

var i *int // initialized to nil
var something interface{} = i
fmt.Println(something == nil) // prints false

That's why it is very risky to write your own pointer type that implements the error interface: the returned interface won't be nil when you return it.
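A slightly fuller sketch of the same gotcha with a custom error type (MyErr and do are made-up names for illustration):

```go
package main

import "fmt"

// MyErr is a custom error type used to show the typed-nil gotcha.
type MyErr struct{ msg string }

func (e *MyErr) Error() string { return e.msg }

// do returns a nil *MyErr, but the caller receives it as the error interface.
func do() error {
	var e *MyErr // nil pointer
	return e     // interface now holds (type *MyErr, value nil), which is not nil
}

func main() {
	if err := do(); err != nil {
		fmt.Printf("err != nil even though the pointer inside is nil (%T)\n", err)
	}
	// The fix: return a literal nil on the success path, not a typed nil pointer.
}
```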
I've worked with so many of these people when a company I used to work at adopted Go. Not only were people not writing idiomatic Go, they couldn't even be consistent about what language they _were_ writing via Go: Java, C++, Python, and others. It was a nightmare for the few of us who had actually learned Go properly and weren't just angry at the language over it not being a different language.
58:40 - Another complaint about Go: "Slice/capacity gotcha with pass by value/reference"

What are they talking about here?

ChatGPT: Ah yes, the good ol' Go slice behavior. What's funny is that people get caught in a kind of cognitive dissonance with Go slices. They look and act like dynamic arrays from other languages but have some unique behavior that can make you pull your hair out if you're not careful.

A slice in Go is actually a lightweight data structure that wraps around an underlying array. It contains a pointer to the array, the length of the slice, and its capacity. While arrays are passed by value (meaning a full copy is made), slices are header values that get copied, but they still point to the same underlying array.

Here's a gotcha:

```go
func main() {
	original := []int{1, 2, 3}
	modified := original
	modified[0] = 9
	fmt.Println(original) // prints [9 2 3], not [1 2 3]!
	fmt.Println(modified) // prints [9 2 3]
}
```

You modified modified, but original also changed. Why? Because both slices share the same underlying array.

Here's another one related to capacity:

```go
original := []int{1, 2, 3}
newSlice := original[:1] // newSlice is now [1], right?
newSlice = append(newSlice, 4)
fmt.Println(original) // prints [1 4 3]. Surprise!
```

Wait, what? You only modified newSlice, why did original change? This is because the underlying array had enough capacity to accommodate the new value. append doesn't always create a new array; it reuses the existing one if it can. It's efficient but can lead to confusing behavior if you're not careful.

This can really trip you up if you're not aware of it, especially if you come from languages like Python or JavaScript where you don't usually have to think about these things. But once you understand it, it's incredibly powerful because you can do very efficient array manipulations without a lot of extra memory allocations.

So, the advice here is: know what you're dealing with. Understand the slice header and its relationship with the underlying array. If you need a true copy, make one explicitly.
For point 7, I think it's better in Go, since otherwise callers can pass properties that don't exist in your struct. What I do in that scenario is create a function, loop through the filters, and have a switch-case statement that handles the supported properties, plus a default. It's a bit more work, but it ensures the integrity of the system is not compromised.
I love getting a new perspective on things, although I don't think you are always right about everything. I feel like it takes great courage to stand in front of this many people and state your opinion, and I admire you for it, no matter whether it is factual or aligns with my views. Thank you. Keep it up :)
@ThePrimeTimeagen, your point is very valid! If a function signature in TS accepts a union type of arrays, it can become a runtime error in plain JS! So, before pushing something into a string[] array, you should check that the argument is a string with a type guard:
```ts
if (typeof arg !== 'string') return;
```
To me, a telling sign that someone is inexperienced in programming (or that they're using the wrong tool for the job) is that they complain about error handling being cumbersome. It tells me that they haven't been bitten in the ass enough times to realize that handling errors is orders of magnitude less painful than dealing with runtime bugs and crashes. Go's approach to errors might not be elegant, but try/catch is 100 times worse, because it gives programmers a responsibility they're usually too lazy to handle.
It turns out that it's easy to mistakenly ignore or overwrite errors in Go. I've worked on large Go code bases, and I've seen this issue many times. It's a mediocre and badly designed language at the end of the day.
I still feel like errors should be sum types, like Rust's Result or Haskell's Either. That being said, I'm 100% behind having errors as values and the compiler forcing you to acknowledge that something can error.
Why not use checked exceptions like in Java? If they made it so that you had to either eat the exception or declare that your function also throws it, then the compiler could always inform you if you missed handling an error.
About your "zod competitor" idea. Instead of doing a build step which will codegen validators, you can use the "new Function()" pattern to generate code in runtime. It’s the same approach that fastify uses to generate fast json serializers / deserializers given json schema definitions.
For number 7 and your example, I would use JSON instead of a map. For number 14, we could use a struct with pointer fields to construct optional value parameters, or you could call it a DTO.
A slice is a descriptor of an array or part of it. It holds the length, the capacity and a pointer to the starting item in the array. The descriptor is passed by value: if you pass a slice and change its length or capacity, nothing will happen outside the current scope. There is an article about this on the Go dev blog.
IME people use function overloading to 1. create a function that works on a couple different types instead of using full blown generics and 2. mimic default arguments (shorter signatures pass nil as arguments to longer signatures and the longest signature actually does the thing). At least that's what I use it for when I'm writing Java. It's fine but certainly not my favorite. Elixir's pattern matching based overloading is fabulous though, would be interested to see what it might look like in a strongly typed language.
35:42: A slice's capacity describes how many elements it can potentially hold before the memory manager needs to reallocate and copy it somewhere else. Until you exceed the capacity of the slice (with something like append()), the slice stays bound to the initial array and does not use memory of its own (it uses the array instead). As soon as you exceed the capacity, the slice and its contents are copied to a new memory location (thus becoming independent and creating memory that needs to be handled by the GC).

When creating a slice from an existing array, the slice's capacity is the array's length minus the slice's start index. If you create two slices from the same array and they overlap, it is possible to change one slice's values by manipulating the other slice's data (since both slices share the same memory locations).

56:42: Dang it, I should probably continue watching before writing comments. Slices are not just sugar, though. Arrays are always static: an array's capacity is always its length, and that length can't change. Slices are more what you would expect from a PHP or JavaScript array, meaning a dynamic length and no control over where in memory it lives. A slice can be both a ref or a copy (depends on context). Initially it is a ref; if you exceed the capacity, it becomes a copy.
We make use of point 8 a lot, actually. Some of our endpoints can have both "use and toss" data and something we want to keep in a state container, and Omit comes in handy. Sure, the data structure is the real culprit, but old and reliable monoliths and all. Point 11 is also fair enough. Function overloading is very useful: it allows you to break up huge conditionals into smaller segments, so you can jump right in at step Z instead of step A if you for some reason already know all the conditions in a given context, without having to extract some of the logic from already-written code. But TypeScript doesn't really have this either; last I checked, TypeScript demands the same number of parameters, which sort of defeats the entire purpose altogether.
#26 - Slices are NOT messy if you first read the language specification. A slice holds three words: a memory address, len and cap. The address is a pointer to an allocated array, and that address is the VALUE of the slice. In assignments to other/new slices, as with any other value, it is copied, thus the memory address of the underlying array is copied. Therefore, all slices derived from one slice point to the same array UNTIL YOU CHANGE their value.

Example: a := []int{1,2,3}. There is an array holding 1, 2, 3 and possibly some more space where you can append new elements. Now b := a; b points to the same array. c := b[1:]; c points to the same array but starts at index 1 of b, so it represents the elements 2, 3. e := c[1:]; e represents element 3, because it starts from index 1 of c. You can't retrieve the leading elements, i.e. 1, 2 (as you can with trailing elements), but you still have slice a if you need them. But let's have d := []int{1,3,5,7}. This points to another array. Now let's assign c = d[:len(d)-1]. Since the value of d (the memory address of the second array) is assigned to c, c NOW DOESN'T POINT TO THE FIRST ARRAY but points to the second array and represents the numbers 1, 3, 5; the last element is sliced away by the slice operation. You can retrieve it: c = c[:len(d)]. Just as x := 3 changes its value when you assign x = 7, a slice behaves the same way, only its value is some memory address.

Now, his objection was (probably) this: when you pass a slice to a function as a value (it is still a reference to an array, and thus lightweight) and use the builtin append function on the receiving variable inside the function, if the slice capacity is exceeded, the Go runtime will copy the ARRAY to a new location with a larger size (roughly new cap = 2 * (old cap + number of appended elements)). Of course, the value of the slice inside the function changes: it now points to this new array. The slice in the caller still points to the old array, and thus that array is not appended beyond its capacity. To obtain the wanted result in this case, don't pass a slice to the function, pass a pointer to the slice. The receiving parameter points to the slice header on the stack of the caller, and whatever you do with it affects that slice header, which, like any slice, always points to its underlying array on the heap, no matter if that one was copied to another place in memory (at least until you change the value of the slice). OR return the slice from the function, and you don't have to use a pointer to it.

But if your function parameter ap received &a (a being some slice from the caller), and the function then did *ap = c (c being some other slice pointing to a different array), then you changed the value of a from within the function, and now a points to that different array. This is perfect for populating a slice from some function that produces the whole slice at once, and you don't have to copy element by element, UNLESS you want, for some reason, to have exactly the old array repopulated and you care about its address. In that case you wouldn't use *ap = c, but the builtin copy function: copy(*ap, c). The copy function copies the elements into the underlying array of a, and the value of a (the address of that array) is not changed. In fact, in this case, you can pass a (not a pointer to it) and do copy(a, c) in the function, and you don't have to return a.
As someone who actually has a couple of years experience with Go, there is NO WAY that this blogger does. They're not even aware of a lot of the basic functionality and idioms. Yikes that they felt confident enough to write an article and share it online 😂
Overloading isn't just "generics", but that's the main usage pattern. You can use overloading for optional parameters, default values, and function wrapping (did I miss a usage?). The Quicksort example of "sort" in this video is a good example of function wrapping. Another good example is when you have the data of an item in a collection, but your API wants the index of that data: just overload the function name with a version that does the index lookup and calls the "wants an index" version. I'd rather the LSP/IDE do the work than have a bunch of similarly named functions with suffixes like ByValue/ByIndex/ByNameAndIndex/etc. in the name; it's just a poor duplication of the parameter signature information. This is assuming a typed language, of course. I don't have to write a bunch of logic to test whether the signature is valid, don't have a monolithic parameter list, don't create a wrapper struct/object, and don't use a vague undefined parameter array.
@@rosehogenson1398 The most helpful part is that you don't have to pick from a list of function names and don't have to mentally translate a variable to a type. These actions clear your short-term memory like a GOTO statement. You have the tools auto-complete the function name and you plug in the variables you already had in mind. Breaking mental flow is in the top 5 reasons why users hate a feature, especially in UI design.
I think you have to differentiate errors in your code from user errors, like a fetch failing. If my code doesn't work, I want it to throw. If a fetch returns a 400 because the user forgot to input their age, I want Go-style errors.
I will say the Omit type is nice for DTOs. Like, I can say this DTO is the whole object except the DB-generated ID, and now any time I modify the object my DTO mirrors it.
@@mage3690 The only thing that is a little frustrating is that the type hint will always just show Omit instead of the whole object, but it really is a convenient feature otherwise. And it makes refactoring easier: when you update a class, you don't have to update all the partial DTOs, and you don't even have to modify much code unless something specifically accesses a field that was changed.
I've never used Go, but from the looks of it the error handling reminds me somewhat of a common Lua convention. The standard library is like this: opening a file is local file, err = io.open(...), and you can do local file = assert(io.open(...)) if you want to "panic" when opening the file. It also has something a bit like throw/catch, but it's handled by doing local ok, f_or_err = pcall(function() local f, e = io.open(...) if e then error("LOL") end return f end). I feel like its downside is that you have to drill res, err everywhere and it can get a bit cumbersome sometimes. Sometimes it's nicer to just throw an error and catch it at the top, at least if it's all your own code. If you removed try/catch/throw in JavaScript and just returned [result, new Error("uh oh")] in a "tuple", would that be the same thing as Go? Or am I missing something?
A slice is a reference to the underlying array. Many slices can be constructed from one array. When you update a slice, the array gets updated. Similarly, when you update the array, your slice may get updated if its range covers the updated area of the array. It is simple.
17:50 - I don't think that's really TS unions' fault but more the fault of how TS treats mutations. I think the TS type system is very "math-y" with how sets relate to each other, and that's an artifact of that: Array<string> is indeed a subset of Array<string | number>, and there's nothing inherently wrong with TS "thinking" that.
@llIlllIIlIIlllIlllIl fair point, yeah. Because then the operations available on set members are also part of the set. So a mutable Array<string> set also includes an operation .push(string). And then .push(string) is not a subtype of .push(string | number) but vice versa, so neither Array set is a subset of the other.
Go doesn't have a "dynamic array type"; it has the append() function. append() writes the new value in place if the capacity is large enough, otherwise it makes a copy.
The first thing I did when I was writing Go was reinvent Rust's Result in it. I see it more as "do it the way you like" rather than something premade for you: they would have to implement 100 versions to satisfy everyone, instead of just letting you pick and implement a specific one to your liking. I wrap everything in my Result, it takes 10 more characters and I've never crashed. That's the great thing about errors as values: they can easily morph into anything else without much overhead.
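A minimal sketch of what such a hand-rolled Result wrapper might look like with Go generics; the Ok/Err/Unwrap names are illustrative, not from any library:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// Result is a hypothetical wrapper in the spirit of Rust's Result.
type Result[T any] struct {
	val T
	err error
}

func Ok[T any](v T) Result[T]          { return Result[T]{val: v} }
func Err[T any](err error) Result[T]   { return Result[T]{err: err} }
func (r Result[T]) Unwrap() (T, error) { return r.val, r.err }

// parsePort wraps a standard-library call into the Result type.
func parsePort(s string) Result[int] {
	n, err := strconv.Atoi(s)
	if err != nil {
		return Err[int](fmt.Errorf("bad port %q: %w", s, err))
	}
	return Ok(n)
}

func main() {
	if port, err := parsePort("8080").Unwrap(); err == nil {
		fmt.Println("listening on", port)
	}

	_, err := parsePort("nope").Unwrap()
	fmt.Println(errors.Is(err, strconv.ErrSyntax), err) // wrapping with %w keeps the cause inspectable
}
```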
Try catch is desired because solving all your problems at one place is a ludicrously attractive proposition. The bigger the problem space, the more attractive it becomes.
Errors as values is definitely the best. Rust does it better, leveraging sum types to force engineers to handle the error in order to unwrap the value. I think what is also missing in Go is a way of propagating the error up the call stack, like Rust's question mark operator, which allows one mechanism to catch everything and handle it the same way. Also, yeah, deadlocking yourself could be argued to be a skill issue, but so could every C++ footgun, and we still dunk on C++ for it... as an industry we need to push for languages with fewer footguns.
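Go's closest stand-in for `?` today is the explicit early return, usually wrapping with %w so context accumulates as the error bubbles up. A small sketch (loadConfig and the file name are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type Config struct {
	Port int `json:"port"`
}

// loadConfig shows the manual equivalent of Rust's `?`: each step checks
// err and returns early, adding context with %w so callers can still
// inspect the original error with errors.Is / errors.As.
func loadConfig(path string) (Config, error) {
	var cfg Config
	data, err := os.ReadFile(path)
	if err != nil {
		return cfg, fmt.Errorf("read config: %w", err)
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		return cfg, fmt.Errorf("parse config: %w", err)
	}
	return cfg, nil
}

func main() {
	if _, err := loadConfig("missing.json"); err != nil {
		fmt.Println(err)
	}
}
```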
If not every, then an overwhelming majority (like 90% or more) of language disadvantages could be compensated for with skill... so technically they are all skill issues.
You need function overloading especially when working with generics, e.g. the recursive definition of variadic template functions in C++. Example: void write() {} template<typename T, typename... Args> void write(T const &val, Args &&...args) { std::cout << val; write(std::forward<Args>(args)...); }
The Result | undefined return type is actually pretty nice: you must check for undefined before accessing the Result. But try-catch feels very frustrating, just like old-day Java, and even worse, there are no checked exceptions, only runtime exceptions. Maybe TypeScript can improve on this in the future.
Typia allows for runtime validation of TS objects using TS types/interfaces. Write an interface, validate some incoming JSON against it. Beautiful. All the other runtime validation libraries have you work kind of backwards from that.
I'm just saying this to be contrarian, low key, but if having the *option* for granular handling is what makes Go's approach so awesome:

function jsonParse(input) {
  try {
    return [JSON.parse(input), null]
  } catch (e) {
    return [null, e]
  }
}

const [output, err] = jsonParse('{"abc": 4')
console.log(output, err)
What you really want is to explicitly handle all errors that are the result of bad input, and return a 4xx in all such cases. A try/catch or a panic/recover is only to handle something that got missed, which is presumed to be a coding bug, as a 5xx. The try/catch or panic/recover is a fault barrier. Each thread should have a fault barrier at the top if it doesn't continue with a new session, or in the top-most loop. Fault barriers log things that got missed; reaching the fault barrier IS a bug. You can't always recover for real, though: "out of file handles", and your process can't safely proceed with anything.
The main thing that IS wrong with Go error handling: returning plain 'error' isn't specific enough. You should return the most specific type you can that satisfies the error interface, e.g. a FileNotFound with an ex.FileName field. If you are in the middle of doing a task, then that task's error should get returned; if it was caused by an error inside that task, then that inner error should be used to construct the task's error. That way, you get context about what you were DOING when it errored out. It is pretty important that the task knows what HTTP error code to suggest: you should return 4XX errors on bad input, and 5XX on errors deemed to be bugs to fix in your own code. This is why it's a bug to hit the top-most fault barriers: it means you missed something that was bad input, or something that could have gone wrong internally. You need to update your code so that all errors can tell you definitively whether the fault lies outside the process or inside it.
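A sketch of that idea using a task-specific error type plus errors.As at the boundary to pick the HTTP status; NotFoundError, loadUser and statusFor are made-up names for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// NotFoundError is a task-specific error: it says what was being looked
// up and lets the layer above choose an HTTP status.
type NotFoundError struct {
	Resource string
	ID       string
}

func (e *NotFoundError) Error() string {
	return fmt.Sprintf("%s %q not found", e.Resource, e.ID)
}

func loadUser(id string) error {
	// Pretend the lookup failed; wrap so the outer task adds context.
	return fmt.Errorf("loading profile page: %w", &NotFoundError{Resource: "user", ID: id})
}

// statusFor maps error types to HTTP status codes at the boundary.
func statusFor(err error) int {
	var nf *NotFoundError
	if errors.As(err, &nf) {
		return http.StatusNotFound // 404: the caller's fault
	}
	return http.StatusInternalServerError // 500: probably our bug
}

func main() {
	err := loadUser("42")
	fmt.Println(err, "->", statusFor(err))
}
```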
EDIT: I wrote this comment before watching the vid completely. Prime does address sum types in TS, rendering half of this comment honestly stupid. I see the point, but in the end I do not agree with Prime's point about the unsoundness of TS unions. There are A LOT of ways to break the soundness of the TS type system, and the presented one is not the worst contender. I never ran into such a case, probably because I avoid mutating stuff willy-nilly, tho I'm not religious about it like at the church of holy Haskell.

I'm so tired of Prime's dogs**t take on TypeScript enums. I'm sure he gets this point a lot, but I'll still reiterate. First of all, the "enum"-keyword enums are BS. We can all agree those aren't the best. They do have one thing going for them, if you want to name your ints (which may aid perf, but more on that later).

Secondly, the way to do sum types (Rust enums, OCaml/Haskell variants, discriminated unions, etc.) is to declare a union whose cases you can discriminate in some way. The go-to way is to have objects with a "type" property of different literal values, aka: type A = { type: "foo", value: number } | { type: "bar", bazinga: string }. Unions are a WAY more powerful feature than plain enums with associated values, and the power is in the ability of tsc to see through the discriminations. Aka, with the previous type, if you write if (a.type === "foo"), TypeScript will deduce the correct variant in the body. I'm sure we all know this trick. I'm sure Prime knows about it as well. I'm sure Prime has used it as well. Then WHY are we still doggin' on the TS type system? No one cares about "enum"-keyworded enums. It has sum types, and that is not just a gag. I see time and time again Prime presenting "lack of proper enums" (and by proper he means Rust) as a serious downside of TS. TS has problems. JS has a lot. "Lack of Rust enums" is not one of them.

And about those "enum" perf improvements. Keyworded "enums" do have one thing going for them: they allow you to name integers in the type system, which provides a form of documentation. If we use string literals (as everybody does nowadays) as a union tag, they are self-describing (type Tag = "case-foo" | "case-bar"); ints unfortunately are not (type Tag = 1 | 2). SMIs can give you a bit of that sweet sweet performance oomph if you use them instead of literal strings: SMIs pack better in arrays, you can do bit twiddling to pack multiple variants into a single SMI, and equality-checking ints is a lot faster than strcmp-ing strings. Although I speculate that using ints vs strings as a union tag isn't that impactful a perf jump. Since all of the tag values are likely to be specified in code (not generated dynamically, received from the user, fetched from the network, etc.), they are interned strings (judging by V8's string.h). If the JIT learns the shape of those tag strings (it should be able to learn that strings in tag properties are always interned), doing the comparison then equates to a single 64-bit word equality check (comparing whether the pointers are the same), just like equality-checking SMIs. And since these are const strings, there shouldn't be any additional GC pressure.

I might have missed the joke here, but I don't think it is one. I think that screaming "TypeScript enums are bad, just look how much better they are in Rust" is misinformed and/or just ignorant. Yes, the "enum" keyword kinda sucks. Yes, Rust enums are a good way to model sum types. No, TypeScript HAS equivalent capabilities.
TypeScript allows for far better flexibility when it comes to declaring and using sum types. What is genuinely missing compared to Rust enums is pattern matching, plus the presence of a few exhaustiveness-checking bugs in tsc.
He doesn't really understand the differences between enums, unions, and sum types. It's just a fact. I'm guessing he doesn't have any experience with functional programming, and that's probably why he can only relate to Rust's enums and doesn't really see the bigger picture.
There is function overloading in TS, where you can define a function that has different signatures for the types of its params. In regular JS you don't need it, since there are no types.
Yes but TS is more like Option|Result since the returned value can be the thing, or an error/undefined etc. And TS will error if you try to operate on that returned value without first narrowing the type down to the thing (i.e. check if it's an error or undefined etc). So it's actually safer and forces proper error checking.
TS has error throwing, which just sucks. There's also no `Option | Result` built into TS, and if you pick any one of the "Rust-like" error handling libraries on npm, you're locked into that one library and you have to write wrappers for every other library you use, because they're all incompatible. And every single one of those libraries introduces runtime overhead for every Result/Option it returns, since you have to create a new object for each return. Returning `Error` is about the closest thing you can get that's natively supported, but again there's no standard error handling pattern in TS, so different dependencies will use different patterns. Go is infinitely better in that regard, and improvements can be made to the language server to check mutually exclusive access to either `res` or `err`.
22:30 Correction: in Rust, if you have a struct that implements two traits, both traits are in scope, and you call a method with a name that both traits have, you'll need to fully qualify the method call or you get a compile error.
I use try/catch a lot in C# and it has helped me capture specific errors as well as present the error to the user. I used it most when handling data calls and bindings. In JS I just use if statements and types to create a more concrete sense of error handling. Before, not knowing this, development was very complicated. I agree every engineer should be keen on error handling.
On point number 5, the notion of explicitly saying struct A implements interface B seems good, but I think the Go way is better, because then the compiler has to resolve who implements whom at compile time, and especially when it goes like 5, 6, 8 levels deep that can make compile times a lot slower.
In Rust it is not feasible to have a `struct Hello implements Greeter {}` kind of deal, because at any moment foreign code can implement its own traits on your types, breaking this neat model. The best solution is rust-analyzer's hints, which write `2 implementations` above the definition.
The TS guy has not been through hell; he thought Satan was throwing him a party there. The only inheritance you may desire is the one from your great-grandpa: the clock. But if you love your great-grandma, you would rather borrow his books in your composition :D
Very much on the Go team. Just wanted to point out that function overloading is absolutely a thing in TypeScript, and I'm fairly certain it actually came from JavaScript. I've been writing Odin recently, and their approach to overloading has been super convenient; I kind of wish Go had it.
For API design, inheritance is a great tool. You don't want more than like 2 layers deep. The nice part with inheritance is that it lets you define a function for the API. In a way, it's an API for the API.
We lazy-error in Azure Functions a lot: try/catch generic exceptions in the main function, log the exception, clean up and exit. This makes sense, though, because C#, like JS, can throw almost anywhere.
Unchecked exceptions only need to be handled where you want, while errors have to be handled even if you just want to keep passing them back up the stack. You could take the worst from both patterns and do checked exceptions.
Used both Go and TS. To me, Go wins because of its simplicity, yet it has a lot of power. Sure, I miss the occasional ternary, but when writing code, "it just works" more than TS.
7:23 I'm not an experienced Go dev so I don't really know what the standard practice is, but golangci-lint includes a linter that catches errors that are assigned to underscores.
11:34, I think the 2nd point was trying to argue that if you don't have the if statement to check err != nil, then you have no idea what went wrong? I am not familiar with Go, but that's what I got from the article. On the other hand, in JS/TS, if you don't have try/catch, your code will stop there and show you the error. You don't need try/catch to know where it went wrong while you're developing things.

And also with try/catch, you can put all the logic related to tmpls inside the try block, so you don't need to use let to define it outside the scope, right? The error in catch is somewhat dynamic, depending on where things went wrong, so err is not a static general object but will tell you where and what went wrong.

And if the function that returns the value is your own JS/TS function, you can make it return an object like {value: "some value", err: ""}, so when anything goes wrong in the process you don't even need try/catch to log the error message: you can just do const tmpls = func(), check if tmpls.value is undefined, and log the error (tmpls.err), pretty much similar to the way you like in Go. So you could actually do it differently using JS/TS.

I know you don't like JS and TS, but sometimes you are a little bit too biased against them. I do enjoy it when you argue facts, but not when it's bias.
Point number 7 was recently a pain for me. I was writing a web scraper that needs to take info on a page and turn it into structured data. The order of the columns in the HTML was random, and I needed a way to determine what column I was looking at and add it to potentially deeply nested fields of a struct. I had to write a ton of if/else for virtually every case because the reflection was problematic.
A good use case for [8] that was explained to me was, say, creating a new entry in something. For example, a user obj/struct that has fields for name, email address, phone number and a unique ID. When adding a new user, you might not have an ID yet; that could be automatically assigned by the DB, so your create-user type gets filled out as a (not the right syntax, but whatever). This way, when the User type gets more fields added to it in the future, your adder still needs to be modified to support that, because it's linked through. Whereas having two versions, effectively UserWithID and UserWithoutID, means there's no linking and you have to remember to add/remove fields in both whenever it changes. I guess the alternative is having inheritance maybe? But fuck that.
Yes, this is what I'm missing as a Rust developer. I really enjoyed composing types in TypeScript, even though it was quite hard at times. But in Rust I need a variation of similar types (think of the builder pattern, for example) and it's a hassle!
ITT: Redditor works backwards from their opinion of enjoying writing TS more than Go. Bro, just say you like TS. It's okay to be a bad developer. I am, too; I write everything in TS. Just embrace it. If it means that you get the job done and push out that application, more power to you.
I think my main problem with the error handling that Go uses, as opposed to try/catch statements, is that (as someone who hasn't written code in a language that uses Go's style) try/catch fails closed. For example, you have this code (written in JS/Express because that's what I know 🤷):

router.post("/user/:id/update", (req, res) => {
  let validated = validateUserUpdate(req.body) // returns the new data if correct, throws if not
  db.updateUser(userId, validated)
})

If you forget to catch that error in an environment like TS, C#, Python, etc., the DB update will never happen, and the error will be caught by, say, the HTTP framework, or in the worst case crash the app, but the integrity of the data is always maintained. In an environment like Go, where you're free to just run past the error, there's a chance that you're going to end up wiping out that user's data.
I guess you just have to make it a habit to always call db.updateUser (or the equivalent in Go) within an if err == nil block, or only after the if err != nil block returns.
In Go, you can recover from panics and return a 500, treating it basically like try/catch. It's kinda wild, but it's useful at the root of your request handler in an HTTP server, for instance.
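A minimal sketch of that pattern: a recover-based fault barrier wrapped around an HTTP handler, so a panic becomes a logged 500 instead of a crashed process (withRecover and the routes are made up for illustration):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// withRecover is a fault barrier at the root of each request: a panic in
// a handler is logged and turned into a 500 instead of killing the process.
func withRecover(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if v := recover(); v != nil {
				log.Printf("panic handling %s: %v", r.URL.Path, v)
				http.Error(w, "internal server error", http.StatusInternalServerError)
			}
		}()
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/boom", func(w http.ResponseWriter, r *http.Request) {
		panic("something went very wrong")
	})
	mux.HandleFunc("/ok", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "fine")
	})
	log.Fatal(http.ListenAndServe(":8080", withRecover(mux)))
}
```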
You're seeing the chat displayed twice. If you'd like to monitor what the user is saying, consider using a separate screen, such as your phone, to keep track of the chat. The current setup may appear a bit chaotic, like a mixed fruit salad.
On point 7: although I understand that in TS it is "native" and easy to iterate over object attributes, I think the complaint that in Go you need to use reflect is invalid. That is what reflect is for: runtime introspection. Using reflect is the correct answer, though it would be nicer if this were a first-class citizen. Although, when you think about it, after over 10 years of working with Go, I've had this problem maybe once. But it depends what software you work on.
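For comparison, this is roughly what that reflect-based introspection looks like; printFields and the User struct are made up for illustration:

```go
package main

import (
	"fmt"
	"reflect"
)

type User struct {
	Name  string
	Email string
	Age   int
}

// printFields walks the exported fields of a struct value at runtime,
// the kind of introspection TS gets "for free" with Object.keys.
func printFields(v any) {
	rv := reflect.ValueOf(v)
	rt := rv.Type()
	if rt.Kind() != reflect.Struct {
		fmt.Println("not a struct")
		return
	}
	for i := 0; i < rt.NumField(); i++ {
		// Interface() is fine here because all fields are exported.
		fmt.Printf("%s = %v\n", rt.Field(i).Name, rv.Field(i).Interface())
	}
}

func main() {
	printFields(User{Name: "Ada", Email: "ada@example.com", Age: 36})
}
```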
In practice, with golangci-lint (a huge collection of linters), I feel like Go's 'safety' story is pretty good, the weakness people point to when comparing it to Rust. I have a setup using Task (a Makefile equivalent written in Go) that runs the whole suite of checks and outputs good code before commit. I feel people often compare languages based on the 'out of the factory, default' DX they provide, which I think is valid but not practical, since serious projects are works of engineering with carefully set up pipelines; no one just uses the default language tooling by itself in a real project.
I was waiting on 5 (interfaces) for the good part about Go interfaces, something I hope Zig adds. Their implicit nature means developers can implement an interface without even knowing the interface's name. That's a very powerful capability. The flip side is that you could "accidentally" implement an interface without knowing you are. But the flip side of the flip side is that you could, without even knowing or trying, provide an implementation that can be used in place of another interface implementation. That is one thing I really love: I simply define a type (usually a struct), add method(s) that implement some interface, and now I can pass that instance and I am good, without even importing or depending on that interface. If I am writing a library without knowing about said interface from another library (lib), and a person is using library lib and also using my library, they can pass my implementation in to satisfy lib's need for that interface, even though I never intended or knew about lib's interface. That's pretty damn powerful.
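A tiny sketch of that implicit satisfaction: Robot never mentions Greeter, yet it can be passed wherever a Greeter is wanted (all the names here are made up):

```go
package main

import "fmt"

// Greeter could live in some other library ("lib" in the comment above).
type Greeter interface {
	Greet() string
}

func Announce(g Greeter) { fmt.Println(g.Greet()) }

// Robot could come from a completely separate package that has never
// heard of Greeter; it just happens to have the right method set.
type Robot struct{ ID int }

func (r Robot) Greet() string { return fmt.Sprintf("beep boop, I am robot %d", r.ID) }

func main() {
	// No "implements" declaration anywhere, yet this compiles and works.
	Announce(Robot{ID: 7})
}
```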
Off topic: something I don't understand about C++: why not just have the compiler manage inserting delete(var) in the code at the relevant locations? Like, if a pointer is declared but not returned in a function, then insert a delete(pointer), and if it is returned, then the delete could be inserted before the pointer's reassignment or at the end of the scope containing the pointer. It would literally be one compiler pass and it should be fairly easy to program. Just follow the pointer to where its scope ends, including function returns.
The difference is that Scala is half decent.
@@anon-fz2bo I wholeheartedly agree. Go's lack of a truly generic enum type is one of its sore spots.
No immutable variables tho. Tf is up with that
@@gilbertovampre9494 how is Go style different from X language?
"as a ___" why do people feel the need to state they are writing as something or other? Is it just readit brain?
@@johnyewtube2286 it helps the reader gain perspective about where someone might be coming from.
i hate exception handling with try catch
@@ThePrimeTimeagen thoughts on elixir/erlang error handling?
@@ThePrimeTimeagen except in zig
if try-catch is a nightmare, 51:54 is a death sentence !
Thing is it pushes you to learn how not to swallow😤 see what I did there?
@@a-yon_n I think sometimes we overcomplicate some concepts, in simple words, arrays are static arrays and slices are dynamic arrays.
Just a quick PSA, accessing JS object fields by variable is a security vulnerability if that field can be defined by user input.
by the same token, indexing a list with a variable is also a vulnerability, isn't it?
Watched a few of Melkey’s videos recently, seemed to have a fairly reasonable opinion on most things… where is he on the Theo-Prime scale?
An absolute beauty. Best of both worlds
He seems like a junior in comparison. I don't know about his qualification but that's the impression I get.
Can you elaborate on the Theo-Prime scale?
Theo gg is a startup hacker developer productivity focused get stuff out the door
Prime is a high scale optimal code, ergonomics freak, elite coder
Actually a good comparison
26: A slice is a reference to some array. The length of a slice is how many elements are in the slice, and the capacity is the length of the array that it points to.
If somebody is using the antipattern sliceb = append(slicea, something), then sliceb can be either a reference to slicea's array or a copy of slicea plus the new element.
If slicea is at full capacity, then Go will create a new array with larger capacity and put sliceb there, but slicea will keep pointing to the old array (see the sketch below).
It helps to remember that append is “value semantics mutation API”.
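A small, hypothetical demonstration of the append behavior described above, showing both the aliasing case (spare capacity) and the reallocation case (full capacity):

```go
package main

import "fmt"

func main() {
	// Spare capacity: append reuses slicea's backing array, so sliceb aliases it.
	slicea := make([]int, 2, 4)
	sliceb := append(slicea, 7)
	sliceb[0] = 99
	fmt.Println(slicea[0]) // 99 - same backing array

	// Full capacity: append allocates a new, larger array for sliced;
	// slicec keeps pointing at the old one.
	slicec := make([]int, 2, 2)
	sliced := append(slicec, 7)
	sliced[0] = 99
	fmt.Println(slicec[0]) // 0 - untouched, different arrays
}
```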
Function overloading exists in a lot of languages, and it's quite useful. I'm pretty sure C++, Java, a good handful of functional languages, etc. have it.
It's good if you have some optimisation that uses a lookup table or a premade collection that needs to be searched: if it was created beforehand it can be passed in, otherwise the function builds it itself.
The only thing function overloading is good at is making the code more confusing, change my mind!
@@tokiomutex4148 With proper naming, I'd disagree. The "ToString()" method in C# has many overloads that still do exactly what the name says; the algorithm just differs depending on which data type is converted into a string.
Building multiple constructors is also great, but you could argue we could simply create an empty constructor first and then chain property setters to accomplish the same goal (pseudo-code: new Object().property1 = value, .property2 = otherValue, .property3 = value3).
I find that function overloading only makes sense in OOP, though. In Go it would feel weird, kind of out of place.
@@tokiomutex4148 And they cause name mangling...
Function overloading can be good for optional arguments, or for handling the same functionality for different types (see the sketch below).
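For the "same functionality for different types" case, Go covers some of that ground with generics instead of overloading. A minimal sketch, assuming a made-up `Number` constraint and `Sum` function:

```go
package main

import "fmt"

// Number is a hypothetical constraint covering the numeric types we care about.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum works for any slice of Number: one definition instead of an
// overload per numeric type.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6
	fmt.Println(Sum([]float64{1.5, 2.5})) // 4
}
```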
@@edoga-hf1dp Until someone modifies the code and a different overload of your function ends up being called; good luck debugging it!
The main problem with this article is that the author just wants to write TypeScript code in Go. That's a stupid idea that doesn't work with any programming language.
You just need to learn how to write idiomatic Go and most of those "problems" won't even appear in your project.
Doesn't it give you inspiration? Anyone (with the right connections) can become a journalist these days.
makes sense
For real, the nil pointer argument is dumb; you just have to remember to check if it's nil, which, if you write C/C++, you should already be comfortable with.
@@anon-fz2bo I think he meant that in Go an interface that contains a nil pointer is not itself nil. For example:
var i *int // initializes to nil
var something interface{} = i
fmt.Println(something == nil) // prints false
That's why it is very risky to write your own error type that implements the error interface: a nil pointer of that type won't compare equal to nil when you return it as an error (see the sketch below).
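A small sketch of that gotcha, using a made-up `MyErr` type: the function returns a typed nil, and the resulting interface value is not nil:

```go
package main

import "fmt"

// MyErr is a custom error type with a pointer receiver.
type MyErr struct{ msg string }

func (e *MyErr) Error() string { return e.msg }

// do returns a typed nil: the interface value holds (*MyErr)(nil),
// which is NOT equal to a plain nil interface.
func do() error {
	var e *MyErr // nil pointer
	return e
}

func main() {
	err := do()
	fmt.Println(err == nil) // false - the gotcha described above
}
```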
I've worked with so many of these people when a company I used to work at adopted go. Not only were people not writing idiomatic go, they couldn't even be consistent about what language they _were_ writing via go - java, C++, python, and others. It was a nightmare for the few of us who actually had learned go properly and weren't just angry at the language over it not being a different language.
58:40 - Another complaint about Go: "Slice/capacity gotcha with pass by value/reference"
What are they talking about here?
ChatGPT:
Ah yes, the good ol' Go slice behavior. What's funny is that people get caught in a kind of cognitive dissonance with Go slices. They look and act like dynamic arrays from other languages but have some unique behavior that can make you pull your hair out if you're not careful.
A slice in Go is actually a lightweight data structure that wraps around an underlying array. It contains a pointer to the array, the length of the slice, and its capacity. While arrays are passed by value (meaning a full copy is made), slices are header values that get copied, but they still point to the same underlying array.
Here's a gotcha:
```go
package main

import "fmt"

func main() {
	original := []int{1, 2, 3}
	modified := original
	modified[0] = 9
	fmt.Println(original) // prints [9 2 3], not [1 2 3]!
	fmt.Println(modified) // prints [9 2 3]
}
```
You modified modified, but original also changed. Why? Because both slices share the same underlying array.
Here's another one related to capacity:
```go
package main

import "fmt"

func main() {
	original := []int{1, 2, 3}
	newSlice := original[:1] // newSlice is now [1], right?
	newSlice = append(newSlice, 4)
	fmt.Println(original) // prints [1 4 3]. Surprise!
}
```
Wait, what? You only modified newSlice, why did original change? This is because the underlying array had enough capacity to accommodate the new value. append doesn't always create a new array; it reuses the existing one if it can. It's efficient but can lead to confusing behavior if you're not careful.
This can really trip you up if you're not aware of it, especially if you come from languages like Python or JavaScript where you don't usually have to think about these things. But once you understand it, it's incredibly powerful because you can do very efficient array manipulations without a lot of extra memory allocations.
So, the advice here is: Know what you're dealing with. Understand the slice header and its relationship with the underlying array. If you need a true copy, make one explicitly.
I'm wondering how Melkey could have a Go course on Frontend Masters when he doesn't know how slices work, it's basic stuff
For point 7, I think it's better in Go, as otherwise callers can pass fields that don't exist in your struct.
What I do in that scenario is create a function, loop through the filters, and have a switch-case statement that handles the supported properties, with a default case. It's a bit more work, but it ensures the integrity of the system is not compromised (see the sketch below).
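A rough sketch of that switch-based approach; `applyFilters` and the allowed keys are hypothetical names for illustration:

```go
package main

import "fmt"

// applyFilters keeps only whitelisted keys; anything else is rejected,
// so unknown fields can never leak into a query.
func applyFilters(filters map[string]string) (map[string]string, error) {
	allowed := map[string]string{}
	for key, value := range filters {
		switch key {
		case "name", "email", "status":
			allowed[key] = value
		default:
			return nil, fmt.Errorf("unsupported filter %q", key)
		}
	}
	return allowed, nil
}

func main() {
	ok, err := applyFilters(map[string]string{"name": "ada"})
	fmt.Println(ok, err) // map[name:ada] <nil>

	_, err = applyFilters(map[string]string{"isAdmin": "true"})
	fmt.Println(err) // unsupported filter "isAdmin"
}
```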
I love getting a new perspective on things, although I don't think you are always right about everything. I feel like it takes great courage to stand in front of this many people and state your opinion, and I admire you for it, no matter whether it is factual or aligns with my views. Thank you. Keep it up :)
@ThePrimeTimeagen, your point is very valid! If a function signature in TS accepts a union type of arrays, it can become a runtime error in JS!
So, before pushing something into a string[] array, you should check the argument type with a type guard: ```ts if (typeof arg !== 'string') return; ```
To me, a telling sign that someone is inexperienced in programming (or that they're using the wrong tool for the job) is that they complain about error handling being cumbersome. It tells me that they haven't been bit in the ass enough times to realize that handling errors is orders of magnitude less painful than dealing with runtime bugs and crashes. Go's approach to errors might not be elegant, but try/catch is 100 times worse, because it gives programmers a responsibility they're usually too lazy to handle.
I'd rather the program fail than clobber itself or anything else
It turns out that it's easy to mistakenly ignore or overwrite errors in Go. I've worked on large Go code bases, and I've seen this issue many times. It's a mediocre and badly designed language at the end of the day.
I still feel like errors should be sum types, like in Rust with Result or Either with Haskell.
That being said, I'm 100% on having error handling as values and that the compiler forces you to know that something can error.
Why not use checked exceptions like in Java? If they made it so that you had to either eat the exception or declare that your function also throws it, then the compiler could always inform you if you missed handling an error.
@@refusalspam Yep, many languages have this. Rust, for example.
About your "zod competitor" idea. Instead of doing a build step which will codegen validators, you can use the "new Function()" pattern to generate code in runtime.
It’s the same approach that fastify uses to generate fast json serializers / deserializers given json schema definitions.
This idea was already implemented. Look up "Typia"
For number 7 and your example, I would use JSON instead of a map. For number 14, we could use a struct with pointer fields to construct an optional-value parameter, or you could call it a DTO (see the sketch below).
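A minimal sketch of the pointer-field approach to optional values; `UpdateUserDTO` and its fields are made-up names:

```go
package main

import "fmt"

// UpdateUserDTO is a hypothetical parameter struct: a nil field means
// "not provided", a non-nil pointer means "set this value".
type UpdateUserDTO struct {
	Name  *string
	Email *string
}

func updateUser(dto UpdateUserDTO) {
	if dto.Name != nil {
		fmt.Println("updating name to", *dto.Name)
	}
	if dto.Email != nil {
		fmt.Println("updating email to", *dto.Email)
	}
}

func main() {
	name := "Ada"
	updateUser(UpdateUserDTO{Name: &name}) // only Name is provided
}
```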
A slice is a descriptor of an array or part of it. It holds the length, the capacity and a pointer to the starting item in the array. The descriptor is passed by value. If you pass a slice and change its length or capacity, nothing will happen outside the current scope. There is an article about this on the Go dev blog.
IME people use function overloading to 1. create a function that works on a couple different types instead of using full blown generics and 2. mimic default arguments (shorter signatures pass nil as arguments to longer signatures and the longest signature actually does the thing).
At least that's what I use it for when I'm writing Java. It's fine but certainly not my favorite.
Elixir's pattern matching based overloading is fabulous though, would be interested to see what it might look like in a strongly typed language.
Golang's value errors would be nicer if they instead were some Result return type with monadic operations on it.
Agree but you get some interesting freedom with what it is right now 🤔
@@Entropy67 you don't have to use the monadic things everywhere
I don't want to resort to whataboutism but C has the same thing, if I recall correctly, and yet no one bats an eye
35:42: A slice's capacity describes how many elements it can potentially hold before the memory manager needs to reallocate and copy it somewhere else.
Until you exceed the capacity of the slice (with something like append()), the slice stays bound to the initial array and does not use memory of its own (it uses the array instead).
As soon as you exceed the capacity, the slice and its contents are copied to a new memory location
(thus becoming independent and allocating memory that needs to be handled by the GC).
When creating a slice from an existing array, the slice's capacity is the array's length minus the slice's start index.
If you create two slices from the same array and they overlap, it is possible to change one slice's values by manipulating the other slice's data, since both slices share the same memory locations (see the sketch below).
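A small demonstration of the overlapping-slices point; the array and indices are arbitrary:

```go
package main

import "fmt"

func main() {
	arr := [5]int{1, 2, 3, 4, 5}

	a := arr[0:3] // elements 1 2 3, cap = 5 - 0 = 5
	b := arr[2:5] // elements 3 4 5, cap = 5 - 2 = 3
	fmt.Println(cap(a), cap(b)) // 5 3

	// a and b overlap at arr[2], so writing through one is visible
	// through the other.
	a[2] = 42
	fmt.Println(b[0]) // 42
}
```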
56:42 : Dang it, I should probably continue watching before writing comments.
Slices are not just sugar, though. Arrays are always static: an array's capacity is always its length and that length can't change. Slices are more like what you would expect from a PHP or JavaScript array, meaning a dynamic length and no control over where in memory it lives.
A slice can be both a ref or a copy (depends on context). Initially it is a ref; if you exceed the capacity, it becomes a copy.
We make use of point 8 a lot, actually. Some of our endpoints can have both "use and toss" data and something we want to keep in a state container. Omit comes in handy. Sure, the data structure is the real culprit, but old and reliable monoliths and all.
Point 11 is also fair enough. Function overloading is very useful: it allows you to break up huge conditionals into smaller segments, so you can jump right in at step Z instead of step A if you for some reason already know all the conditions in a given context, without having to extract some of the logic from already-written code. But TypeScript doesn't really have this either; last I checked, TypeScript demands the same number of parameters, which sort of defeats the entire purpose altogether.
#26 - slices are NOT messy if you first read the language specification. A slice holds three values: a memory address, len and cap. The first one is a pointer to an allocated array; that address is the VALUE of the slice. In assignments to other/new slices, as with any other value, it is copied, and thus the memory address of the underlying array is copied. Therefore, all slices derived from one slice point to the same array UNTIL YOU CHANGE their value.
Example: a := []int{1,2,3}. There is an array holding 1, 2, 3 (and possibly some more space where you can append new elements). Now, b := a. b points to the same array. c := b[1:]. c points to the same array but starts at index 1 of b, so it represents the elements 2, 3. e := c[1:]. e represents element 3, because it starts from index 1 of c. You can't retrieve the leading elements, i.e. 1, 2 (as you can with trailing elements), but you still have slice a if you need them.
But let's have d := []int{1,3,5,7}. This points to another array. Now let's assign c = d[:len(d)-1]. Since the value of d (the memory address of the second array) is assigned to the value of c, c NOW DOESN'T POINT TO THE FIRST ARRAY but points to the second array and represents the numbers 1, 3, 5; the last element is sliced away by the slice operation. You can retrieve it: c = c[:len(d)].
Just as with x := 3, where x changes its value when you assign x = 7, a slice behaves the same way, only its value is a memory address.
Now, his objection was (probably) this: when you pass a slice to a function as a value (it is still a reference to an array, and thus lightweight) and use the builtin append on the receiving variable inside the function, then if the slice capacity is exceeded, the Go runtime will copy the ARRAY to a new location with roughly double the size. Of course, the value of the slice inside the function changes: it now points to this new array. The slice in the caller still points to the old array, and thus that array is not appended beyond its capacity.
In order to obtain the wanted result in this case, don't pass a slice to the function; pass a pointer to the slice. The receiving parameter then points to the slice header in the caller, and whatever you do with it affects that slice header, which, like any slice, always points to its underlying array on the heap, no matter if the array was copied to another place in memory (at least until you change the value of the slice).
OR, return the slice from the function, and then you don't have to use a pointer to it.
But if your function parameter ap received &a (a being some slice in the caller), and the function did *ap = c (c being some other slice pointing to a different array), then you changed the value of a from within the function, and now a points to that different array. This is perfect for populating a slice from some function that spits out a whole slice at once, and you don't have to copy element by element, UNLESS you want, for some reason, to have exactly the old array repopulated and you care about its address. In that case you wouldn't use *ap = c but the builtin copy function: copy(*ap, c). copy copies the elements into the underlying array of a, and the value of a (the address of that array) is not changed. In fact, in this case you can pass a itself (not a pointer to it) and do copy(a, c) in the function, and you don't have to return a. A sketch of the difference follows below.
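A minimal sketch contrasting the two options described above: reassigning through a pointer-to-slice versus copying into the existing array. `replace` and `refill` are made-up names:

```go
package main

import "fmt"

// replace swaps the caller's slice header to point at c's array.
func replace(ap *[]int, c []int) { *ap = c }

// refill keeps the caller's backing array and only copies elements into it.
func refill(a, c []int) { copy(a, c) }

func main() {
	a := []int{0, 0, 0}
	c := []int{7, 8, 9}

	refill(a, c)
	fmt.Println(a) // [7 8 9] - same backing array, repopulated

	replace(&a, []int{1, 2, 3})
	fmt.Println(a) // [1 2 3] - a now points at a different array
}
```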
As someone who actually has a couple of years experience with Go, there is NO WAY that this blogger does. They're not even aware of a lot of the basic functionality and idioms. Yikes that they felt confident enough to write an article and share it online 😂
Overload isn't just "generics", but it's the main usage pattern. You can use overload for optional parameters, default values, and for function wrapping (did I miss a usage?). The Quicksort example of "sort" in this video is a good example of function wrapping. Another good example is when you have the data of an item in a collection, but your API wants the index of that data. Just overload the function name with a version that does the index lookup and calls "wants and index" version. I'd rather the LSP/IDE do the work than having a bunch of similarly named functions with suffixes like ByValue/ByIndex/ByNameAndIndex/Etc in the name. It's just a poor duplication of the parameter signature information. This is assuming a typed language, of course. I don't have to write a bunch of logic code to test if the signature is valid, don't have a monolithic parameter list, create a wrapper struct/object, or use a vague undefined parameter array.
😊
Does an LSP actually help you with function overloading? rust-analyzer never gives me a helpful result on a .into() call
@@rosehogenson1398 The most helpful part is that you don't have to pick from a list of function names and don't have to mentally translate a variable to a type. These actions clear your short-term memory like a GOTO statement. You have the tools auto-complete the function name and you plug in the variables you already had in mind. Breaking mental flow is in the top 5 reasons why users hate a feature, especially in UI design.
I think you have to differentiate between errors in your code and errors from user input, like a fetch failing. If my code doesn't work, I want it to throw. If a fetch returns a 400 because the user forgot to input their age, I want Go-style errors.
Exactly that; that's why I love Rust's error handling.
Holy shit, 1 hour. Lets go
yeah let's go. what your name?
I will say Omit type is nice for DTOs. Like I can say this DTO is the whole object except the DB generated ID, and now any time I modify the object my DTO mirrors it.
Oh, _that's_ what that is? Just a slice of a struct? Why isn't that a thing?
@@mage3690 The only thing that is a little frustrating is that the type hint will always just show the Omit type instead of the whole object, but it really is a convenient feature otherwise, and it makes refactoring easier. When you update a class you don't have to update all the partial DTOs, and you don't even have to modify much code unless the code specifically accesses a field that was changed.
Why not use struct embedding?
I've never used go but from the looks of it the error handling reminds me somewhat of a common Lua convention. The standard library is like this. Opening a file is like local file, err = io.open(...), then you can do local file = assert(io.open(....)) if you want to "panic" when opening a file. It also has something a bit like throw catch but it's handled by doing local ok, f_or_err = pcall(function() local f, e = io.open(...) if e then error("LOL") end return f end)
I feel like its downside is that you have to drill res,err everywhere and it can get a bit cumbersome sometimes. Sometimes it's nicer to just throw an error and catch it at the top, at least if it's all your own code.
If you remove try catch and throw in javascript and just did return [result, new Error("uh oh")] around in a "tuple", is that the same thing as golang? Or am I missing something?
Yeah, but you would be creating an array every time (not that creating arrays/objects everywhere is uncommon in JS [see React]).
As a C++ / Assembler / HDL Engineer: This is literally the programmer equivalent of First World Problems :D
A slice is a reference to the underlying array. Many slices can be constructed from one array.
When you update a slice, your array gets updated. Similarly, when you update the array, your slice may get updated if its range covers the updated area of the array.
It is simple.
17:50 - I don't think that's really the fault of TS unions, but more a fault of how TS treats mutations. I think the TS type system is very "math-y" in how sets relate to each other, and that's an artifact of that: Array<string> is indeed a subset of Array<string | number>, so there's nothing inherently wrong with TS "thinking" that.
@llIlllIIlIIlllIlllIl Fair point, yeah. Because then the operations available on set members are also part of the set. So a mutable Array<string> set also includes an operation .push(string). And then .push(string) is not a subtype of .push(string | number) but vice versa, so neither Array set is a subset of the other.
Actually in go this is how you say you implement an interface:
var _ Interface = (*MyStruct)(nil)
and then it's checked at compile time
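Fleshed out into a runnable sketch (the `Greeter`/`MyStruct` names are just for illustration), the compile-time assertion looks like this:

```go
package main

import "fmt"

type Greeter interface {
	Greet() string
}

type MyStruct struct{}

func (m *MyStruct) Greet() string { return "hello" }

// Compile-time assertion: this line fails to build if *MyStruct ever
// stops satisfying Greeter. The value itself is discarded.
var _ Greeter = (*MyStruct)(nil)

func main() {
	fmt.Println((&MyStruct{}).Greet())
}
```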
go doesn't have a "dynamic array type," it has the append() function
append() writes the new value in place if the capacity is large enough, otherwise it makes a copy
The first thing I did when I was writing Go was reinvent Rust's Result in it. I see it as "do it the way you like" rather than something premade for you; they would have to implement 100 versions to satisfy everyone instead of just letting you pick and implement a specific one to your liking. I wrap everything in my Result; it takes 10 more characters and I've never crashed.
That's the great thing about Errors as values, they can easily morph into anything else without much overhead.
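A rough sketch of the kind of hand-rolled generic Result the comment above describes; none of this is standard library, and real implementations vary:

```go
package main

import (
	"errors"
	"fmt"
)

// Result is a hypothetical Go take on Rust's Result.
type Result[T any] struct {
	val T
	err error
}

func Ok[T any](v T) Result[T]          { return Result[T]{val: v} }
func Err[T any](err error) Result[T]   { return Result[T]{err: err} }
func (r Result[T]) Unwrap() (T, error) { return r.val, r.err }

// parsePort wraps its outcome in a Result instead of returning (int, error).
func parsePort(s string) Result[int] {
	var p int
	if _, err := fmt.Sscanf(s, "%d", &p); err != nil {
		return Err[int](errors.New("not a number: " + s))
	}
	return Ok(p)
}

func main() {
	v, err := parsePort("8080").Unwrap()
	fmt.Println(v, err) // 8080 <nil>
}
```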
Try catch is desired because solving all your problems at one place is a ludicrously attractive proposition. The bigger the problem space, the more attractive it becomes.
Errors as values is definitely the best. Rust does it better by leveraging sum types to force engineers to handle the error to unwrap the value. I think what is also missing in Go is a way of propagating the error up the call stack, like Rust's question-mark operator. That allows the "one mechanism to catch everything and handle it the same way" case (see the sketch below).
Also, yeah, deadlocking yourself could be argued to be a skill issue, but so could every C++ footgun, and we still dunk on C++ for it... as an industry we need to push for languages with fewer footguns.
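Go's manual equivalent of Rust's `?` is the check-wrap-return idiom with `%w`; a minimal sketch with made-up function names:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

func loadConfig() error {
	return errNotFound
}

// startServer shows the manual propagation: check, wrap with %w for
// context, and return the error up the call stack.
func startServer() error {
	if err := loadConfig(); err != nil {
		return fmt.Errorf("starting server: %w", err)
	}
	return nil
}

func main() {
	err := startServer()
	fmt.Println(err)                         // starting server: not found
	fmt.Println(errors.Is(err, errNotFound)) // true - the cause survives wrapping
}
```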
Valid points!
If not every, then an overwhelming majority (like 90% or more) of language disadvantages could be compensated for with skill... so technically they are all skill issues.
You need function overloading especially when working with generics, e.g. in C++ the recursive definition of variadic template functions. Example:
void write() {}
template <typename T, typename... Args>
void write(T const &val, Args &&...args) {
    std::cout << val;
    write(std::forward<Args>(args)...);
}
Your Zod competitor already exists; it's called Typia. It does types => validation, types => JSON Schema, types => protobuf schemas.
io-ts existed before both and did the same.
The Result | undefined return type is actually pretty nice: you must check for undefined before accessing the Result. But try/catch feels very frustrating, just like old-day Java, and even worse, there are no checked exceptions, only runtime exceptions. Maybe TypeScript can improve on this in the future.
OP: "Why can't you be like typescript?"
Go: "I'm sorry, dad"
Go is bad *BUT* receiver methods are great because they (1) avoid the need for `this` and (2) don't need to be bound to the receiver; they are pre-bound.
yes
Typia allows for runtime validation of TS objects using TS types/interfaces. Write an interface, validate some incoming JSON against it. Beautiful.
All the other runtime validation libraries have you operate kind of backwards to that.
I'm just saying this to be contrarian low key but if having the *option* for granular handling is what makes Go's approach so awesome:
function jsonParse (input) { try { return [JSON.parse(input), null] } catch (e) { return [null, e] } }
const [output, err] = jsonParse('{"abc": 4')
console.log(output, err)
What you really want is to explicitly handle all errors that are the result of bad input, and return a 4xx in all such cases. A try/catch or a panic/recover is only there to handle something that got missed, which is presumed to be a coding bug, as a 5xx. The try/catch or panic/recover is a fault barrier. Each thread should have a fault barrier at the top if it doesn't continue with a new session, or in the top-most loop. Fault barriers log things that got missed; reaching the fault barrier IS a bug. You can't always recover for real, though: "out of file handles", and your process can't safely proceed with anything.
The main thing that IS wrong with Go error handling: returning a bare 'error' isn't specific enough. You should return the most specific type you can that satisfies the error interface, e.g. a FileNotFound carrying a FileName. If you are in the middle of doing a task, then that task's error should get returned; if it was caused by an error inside that task, then that inner error should be used to construct the task's error. That way you get context about what you were DOING when it errored out. It is pretty important that the task knows what HTTP status code to suggest: return 4xx errors on bad input, and 5xx on errors deemed to be bugs in your own code. This is why it's a bug to hit the top-most fault barriers: it means you missed something that was bad input, or something that could have gone wrong internally. You need to update your code so that all errors can tell you definitively whether the fault lies outside the process or inside it (see the sketch below).
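A small sketch of returning a specific error type and recovering it with errors.As; `FileNotFound` and `loadTemplate` are hypothetical names:

```go
package main

import (
	"errors"
	"fmt"
)

// FileNotFound is a task-specific error type in the spirit of the
// comment above: it carries context and can suggest an HTTP status.
type FileNotFound struct {
	FileName string
}

func (e *FileNotFound) Error() string { return "file not found: " + e.FileName }

func loadTemplate(name string) error {
	// Wrap the specific error with the task's context.
	return fmt.Errorf("loading template: %w", &FileNotFound{FileName: name})
}

func main() {
	err := loadTemplate("index.html")

	var nf *FileNotFound
	if errors.As(err, &nf) {
		fmt.Println("404 candidate, missing file:", nf.FileName)
	} else {
		fmt.Println("500 candidate:", err)
	}
}
```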
we have to write 2x more code to make typescript type safe
EDIT: I wrote this comment before watching the vid completely. Prime does address sum types in TS, rendering half of this comment honestly stupid. I see the point, but in the end I do not agree with Prime's point about the unsoundness of TS unions. There are A LOT of ways to break the soundness of the TS type system, and the presented one is not the worst contender. I never ran into such a case, probably because I avoid mutating stuff willy-nilly, though I'm not religious about it like the church of holy Haskell.
I'm so tired of Prime's dogs**t take on TypeScript enums. I'm sure he gets this point a lot, but I'll still reiterate.
First of all, the "enum"-keyword enums are BS; we can all agree those aren't the best. They do have one thing going for them, if you want to name your ints (which may aid perf, but more on that later).
Secondly, the way to do sum types (Rust enums, OCaml/Haskell variants, discriminated unions, etc.) is to declare a union which you can discriminate in some way. The go-to way is to have objects with a "type" property of different literal values, e.g.: type A = { type: "foo", value: number } | { type: "bar", bazinga: string }.
Unions are a WAY more powerful feature than plain enums with associated values. But the power is in the ability of tsc to see through the discriminant: with the previous type, if you write if (a.type === "foo"), TypeScript will deduce the correct variant in the body.
I'm sure we all know this trick. I'm sure Prime knows about it as well. I'm sure Prime used this as well.
Then WHY are we still doggin' on the TS type system? No one cares about "enum"-keyworded enums. It has sum types. And it is not just a gag; I see Prime, time and time again, presenting a "lack of proper enums" (and by proper he means Rust's) as a serious downside of TS.
TS has problems. JS has a lot. "Lack of Rust enums" is not one of them.
And about those "enum" perf improvements: keyworded "enums" do have one thing going for them, they allow you to name integers in the type system. This provides a form of documentation.
If we are to use string literals (as everybody does nowadays) as a union tag, they are self-describing (type Tag = "case-foo" | "case-bar"); ints, unfortunately, are not (type Tag = 1 | 2). SMIs can give you a bit of that sweet, sweet performance oomph if you use them instead of literal strings: SMIs pack better in arrays, you can do bit twiddling to pack multiple variants into a single SMI, and equality-checking ints is a lot faster than strcmp-ing strings.
Although I speculate that using ints vs strings as a union tag isn't that impactful of a perf jump. Since all of the tag values are likely to be specified in code (not generated dynamically, received from the user, fetched from the network, etc.), they are interned strings (judging by V8's string.h). If the JIT learns the shape of those tag strings (it should be able to learn that strings in tag properties are always interned), doing the comparison then equates to a single 64-bit word equality check (comparing whether the pointers are the same), just like equality-checking SMIs. And since these are const strings, there shouldn't be any additional GC pressure.
I might have missed the joke here, but I don't think it is one. I think that screaming "TypeScript enums are bad, just look how much better they are in Rust" is misinformed and/or just ignorant. Yes, the "enum" keyword kinda sucks. Yes, Rust enums are a good way to model sum types. No, TypeScript HAS equivalent capabilities, and it allows for far better flexibility when it comes to declaring and using sum types. What is genuinely missing compared to Rust enums is pattern matching, plus a few exhaustiveness-checking bugs in tsc.
He doesn't really understand the differences between enums, unions, and sum types. It's just a fact. I'm guessing he doesn't have any experience with functional programming, and that's probably why he can only relate to Rust's enums and doesn't really see the bigger picture.
Help me Prime. I started learning Go, went into Rob Pike's history and then ended up in the Plan 9 docs and writing drivers. Halp!
Prime, I think your idea for a Zod competitor already exists. I recall 2 libraries that do what you want, one of which is "Typia", IIRC
Exactly! I wanted to try out Typia myself. It's supposedly very fast. Though, I am not sure it's mature enough to use in a serious project.
This was basically programming slander in all directions for like an hour and I love it
Bro, the CaPiTaLIZAtion thing is great. I hate having to write public on every single var to export it.
People talking about Elixir are actually dead on; it's kinda insane how much it taught me about a lot of these areas.
There is function overloading in TS, where you can define a function with different signatures for the types of its params. In regular JS you don't need it, since there are no types.
Yes, but TS is more like Option|Result since the returned value can be the thing, or an error/undefined, etc. And TS will error if you try to operate on that returned value without first narrowing the type down to the thing (i.e. checking if it's an error or undefined, etc.). So it's actually safer and forces proper error checking.
TS has error throwing, which just sucks.
There's also no `Option | Result` built into TS, and if you pick any one of the "Rust-like" error handling libraries on npm, you're locked into that one library and you have to write wrappers for every other library you use because they're all incompatible. And every single one of those libraries introduces a runtime overhead for every Result / Option they return, since you have to create a new object for each return.
Returning `Error` is about the closest thing you can get that's natively supported, but again there's no standard error handling pattern in TS, so different dependencies will use different patterns.
Go is infinitely better on that regard, and improvements can be made to the language server to check mutually exclusive access to either `res` or `err`.
22:30 Correction: in Rust, if you have a struct that implements two traits, both traits are in scope, and you're calling a method with a name that both traits have, you'll need to fully qualify the method call or get a compile error.
I use try/catch a lot in C# and it has helped me in capturing specific errors as well as presenting the error to the user. I used it most, though, when handling data calls and bindings. In JS I just use if statements and types to create a more concrete sense of error handling. Before knowing this, development was very complicated; I agree every engineer should be keen on error handling.
I really like when you take a really small snippet to explain your point. Would love to see more.
TypeScript needs lots of boilerplate code without getting any runtime safety, which is more crucial.
On point number 5, the notion of explicitly saying struct A implements interface B seems good, but I think the Go way is better, because otherwise at compile time you have to know who implements whom, and especially when it goes like 5, 6, 8 levels deep it can make compile times a lot slower.
In Rust it is not feasible to have a `struct Hello implements Greeter {}` kind of deal, because at any moment foreign code can implement their traits on your types, breaking this neat model. The best solution is rust-analyzer's hints, which write `2 implementations` above the definition.
The benefit of try catch is that you don't have to repeat the catch block after every statement.
Guy must've gotten oat milk instead of soy.
Composition: just create a new struct with those 3 fields and then embed that in the structs which need the common fields.
But programmers are lazy, we don't want to use two dots to access a nested object property :)
Skill issue. You can embed a struct and it will pick up all its fields, and you won't have to use the extra dots (see the sketch below).
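A minimal sketch of embedding with promoted fields; the `Audit`/`User` names are made up:

```go
package main

import "fmt"

// Audit holds the fields shared by several structs.
type Audit struct {
	CreatedBy string
	UpdatedBy string
}

// User embeds Audit; its fields are promoted, so no extra dot is needed.
type User struct {
	Audit
	Name string
}

func main() {
	u := User{Name: "Ada"}
	u.CreatedBy = "admin" // promoted: no u.Audit.CreatedBy required
	fmt.Println(u.Name, u.CreatedBy)
}
```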
Didn't know that is a specific GO thing. I thought you were referring to TS Omit.
The TS guy has not been through hell; he thought Satan was throwing him a party there.
The only inheritance you may desire is that which comes from your great-grandpa: the clock. But if you love your great-grandma, you would rather borrow his books for your composition :D
Yeah, that's good practice for handling each line, but sometimes people just try/catch the entire block once and log the error and line later.
The author needs to be sent to a corner with nothing but a chunky 2000's thinkpad running a fresh build of Gentoo.
32 best reasons to develop depression
Very much on the Go team. Just wanted to point out that function overloading is absolutely a thing in TypeScript, and I'm fairly certain it actually came from JavaScript. I've been writing Odin recently, and its approach to overloading has been super convenient; I kind of wish Go had it.
Overloading is completely unnecessary. Just create two different functions.
#define true false 😂
Nil pointers are only an issue in untested Go; if you're encountering nil maps or nil pointers after deploy, you didn't test the code properly.
For API design, inheritance is a great tool.
You don't want more than like 2 layers deep.
The nice part with inheritance is that it lets you define a function for the API; in a way, it's an API for the API.
We lazy-error in Azure Functions a lot: try/catch generic exceptions in the main function, log the exception, clean up and exit. This makes sense, though, because C#, like JS, can throw almost anywhere.
Unchecked exceptions only need to be handled where you want, while errors have to be handled even if you just want to keep passing them back up the stack. You could take the worst from both patterns and do checked exceptions.
Node.js has atomic operations, but you must implement mutexes and atomic types yourself. This is definitely a skill issue.
@14:00 An uncaught exception is an abort()/assert(). It's pretty trivial to do that around the code if that is in fact the behavior you want.
Is this a moustache competition? 😂
Used both Go and TS. To me, Go wins because of its simplicity, yet it has a lot of power. Sure, I miss the occasional ternary, but when writing code, "it just works" more than TS.
Speaking as an infra engineer, the go packaging system is so much easier to deal with. Deploying node apps at scale is much worse.
About point 7 dynamically looking at struct field names, isn't that what the reflect package is for?
7:23 I'm not an experienced Go dev so I don't really know what the standard practice is, but golangci-lint includes a linter that catches errors that are assigned to underscores.
11:34, I think the 2nd point was trying to argue that if you don't have the if statement to check "err != nil", then you have no idea what went wrong? I am not familiar with Go, but that's what I get from the article.
On the other hand, in JS/TS, if you don't have try/catch, your code will stop there and show you the error. You don't need try/catch to know where things went wrong while you develop.
And also, with try/catch you can put all the logic related to tmpls inside the try block, so you don't need to use let to define it outside the scope, right? The error in catch is somewhat dynamic, depending on where things went wrong, so err is not a static, general object; it will tell you where and what went wrong.
And if the function that returns the value is your own JS/TS function, then you can make sure it returns an object like {value: "some value", err: ""}, so when anything goes wrong in the process you don't even need try/catch to log the error message: you can just do const tmpls = func(), then check if tmpls.value is undefined and log the error (tmpls.err), pretty much similar to the way you like in Go.
So you could actually do it differently using JS/TS.
I know you don't like JS and TS, but sometimes you are a little bit too biased against them.
I do enjoy it when you argue facts, but not when it's bias.
Point number 7 was recently a pain for me: I was writing a web scraper that needs to take info on a page and turn it into structured data. The order of the columns in the HTML was random, and I needed a way to determine what column I was looking at and add it to potentially deeply nested fields of a struct. I had to write a ton of if/else for virtually every case because the reflect-based approach was problematic (see the sketch below).
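For reference, a rough sketch of the reflect-based approach such a scraper might use; `Listing` and `setField` are hypothetical names:

```go
package main

import (
	"fmt"
	"reflect"
)

type Listing struct {
	Price string
	Area  string
}

// setField picks a struct field by its name at runtime and sets it,
// the kind of thing a scraper with shuffled columns ends up doing.
func setField(target any, name, value string) error {
	f := reflect.ValueOf(target).Elem().FieldByName(name)
	if !f.IsValid() || !f.CanSet() || f.Kind() != reflect.String {
		return fmt.Errorf("cannot set field %q", name)
	}
	f.SetString(value)
	return nil
}

func main() {
	var l Listing
	_ = setField(&l, "Price", "100000")
	_ = setField(&l, "Area", "120m2")
	fmt.Println(l) // {100000 120m2}
}
```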
Constructor overloading != function overloading.
And function overloading can be generics via syntax sugar.
A good use case for [8] that was explained to me is, say, creating a new entry in something.
For example a user obj/struct that has fields for name, email address, phone number and a unique ID. When adding a new user, you might not have an ID yet, that could be automatically assigned from the DB, so your create user gets filled out as a (not the right syntax, but whatever).
So this way when the User type gets more fields added to it in the future, your adder still needs to be modified to support that because it's linked through.
Whereas having two versions, effectively UserWithID and UserWithoutID means there's no linking and you have to remember to add/remove fields in both whenever it changes.
I guess the alternative is having inheritance maybe? But fuck that.
Yes, this is what I'm missing as a Rust developer. I really enjoyed composing the types in Typescript even though it was quite hard at times. But in Rust I need a variation of similar types (think of builder pattern for example) and it's a hassle!
In Go, you can embed a struct as a struct field:
type (
	UserData struct {
		ID            string
		somethingElse int
	}

	User struct {
		name      string
		email     string
		phone     string
		otherData UserData
	}
)

// In a func:
var (
	usrDatas []UserData
	users    []User
)

// For each new entry:
user, usrData := new(User), new(UserData)
user.name = "..."  // placeholder, fill in the actual name
usrData.ID = "..." // placeholder, fill in the generated ID
user.otherData = *usrData // dereference: otherData is a value field
users = append(users, *user)
usrDatas = append(usrDatas, *usrData)

// Updating, for index i:
users[i].otherData = usrDatas[i]
You access users[i]'s ID by: users[i].otherData.ID
25:30 * is a dangerously leaky abstraction. Option and ? are less leaky.
ITT: Redditor works backwards from their opinion of enjoying writing TS more than Go.
Bro, just say you like TS. It's okay to be a bad developer. I am, too; I write everything in TS. Just embrace it. If it means that you get the job done and push out that application, more power to you.
I think my main problem with the error handling that Go uses, as opposed to try/catch statements, is (as someone who hasn't written code in a language that uses it) that try/catch fails closed. For example, you have this code (written in JS/Express because that's what I know 🤷):
router.post("/user/:id/update", (req, res) => {
  let validated = validateUserUpdate(req.body) // returns the new data if valid, throws if not
  db.updateUser(req.params.id, validated)
})
If you forget to catch that error in an environment like TS, C#, Python, etc., that DB update will never happen; the error will be caught by, say, the HTTP framework, or in the worst case crash the app, but the integrity of the data is always maintained.
In an environment like Go, where you're free to just run past the error, there's a chance that you're going to end up wiping out that user's data.
I guess you just have to make a habit of always calling db.updateUser (or the equivalent in Go) within an if err == nil block, or always after the if err != nil block.
In Go, you can recover from panics and return a 500, treating it basically like try/catch. It's kinda wild, but it's useful at the root of your request handler in an HTTP server, for instance.
After watching this video I'm convinced that Prime is the reincarnation of the original Menace.
You're seeing the chat displayed twice. If you'd like to monitor what the user is saying, consider using a separate screen, such as your phone, to keep track of the chat. The current setup may appear a bit chaotic, like a mixed fruit salad.
I like the simplicity of Go. You don't need to compile a bazillion types in your head like when you do TS.
On point 7: although I understand that in TS it is "native" and easy to iterate over object attributes, I think the complaint that in Go you need to use reflect is invalid. That is what reflect is for: runtime introspection. Using reflect is the correct answer, though it would be nicer if this were a first-class citizen. Although, when you think about it, after over 10 years of working with Go I've had this problem maybe once. But it depends what software you work on.
0:01 @ThePrimeTimeagen, About that stupid rock, there's no "under" or "over" in space.
In practice, with golangci-lint (a huge collection of validations), I feel like Go's 'safety' is pretty good, a weakness people point at when comparing it to Rust. I have a setup using Task (a Makefile alternative written in Go) that runs the whole suite of checks and outputs good code before commit. I feel people often compare languages based on the 'out of the factory, default' DX they provide, which I think is valid but also not practical, since serious projects are works of engineering with carefully set up pipelines; no one just uses the default language tooling itself in a real project.
I was waiting on 5 (interfaces) for the good part about Go interfaces, something I hope Zig adds. The implicit nature of them means developers can implement an interface without knowing the interface's name. That's a very powerful capability. The flip side is that you could "accidentally" implement an interface without knowing you are. But the flip side of the flip side is that you could now, without even knowing or trying, provide an implementation of an interface that could arguably be used in place of another implementation. That is one thing I really love: I simply define a type (usually a struct), add method(s) that implement some interface, and now I can pass that instance and I am good, without even importing or depending on that interface. If I am writing a library without knowing about said interface from another library (lib), and a person is using lib and also using my library, they can pass my implementation in to lib's need for that interface, all the while I never intended or knew about lib's interface. That's pretty damn powerful (see the sketch below).
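A minimal sketch of that implicit satisfaction; `Describer` and `Temperature` are made-up names standing in for types that could live in two unrelated packages:

```go
package main

import "fmt"

// Imagine a library defines this interface; our type never imports or
// mentions it.
type Describer interface {
	Describe() string
}

// Temperature was written without any knowledge of Describer...
type Temperature float64

func (t Temperature) Describe() string { return fmt.Sprintf("%.1f degrees", float64(t)) }

// ...yet it can be passed wherever a Describer is wanted, because Go
// interfaces are satisfied implicitly.
func printIt(d Describer) { fmt.Println(d.Describe()) }

func main() {
	printIt(Temperature(21.5))
}
```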
Off topic: something I don't understand about C++: why not just have the compiler insert delete(var) in the code at the relevant locations? Like, if a pointer is declared but not returned in a function, insert a delete(pointer), and if it is returned, the delete could be inserted before the pointer's reassignment or at the end of the scope containing the pointer.
It would literally be one compiler pass and it should be fairly easy to program: just follow the pointer to where its scope ends, including function returns.
TS is just JS, just fancier.
The real question, what is the best type system? TS, Go, Rust, Haskell, Kotlin, ...?