20:42 I'm not with you about that. Sometimes there can be a lot of exceptions, and the generic class can be better to use since we don't have to handle each one of them.
I normally try to avoid writing except Exception, because it catches every type of exception. Normally what I do is run the program first; if it works correctly, I deliberately provide some cases that I know will throw an error and write the except block accordingly. But for some packages that use custom error types, like TensorFlow, I have to write the except Exception block. Is there any other way to handle it? I don't know 😢. If you have better suggestions, please do let me and others know 21:31
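One option, as a minimal sketch using only the standard library: many exception families share a common base class (OSError for the I/O ones, and many third-party packages expose a base class for their own errors too), so you can catch that base instead of Exception and let everything else propagate. The read_config function here is made up for illustration.
import logging

def read_config(path: str) -> str:
    try:
        with open(path) as file:
            return file.read()
    except OSError:
        # OSError is the base class for FileNotFoundError, PermissionError,
        # IsADirectoryError, ... so this covers the whole I/O family without
        # swallowing unrelated exceptions like TypeError or KeyboardInterrupt.
        logging.exception("Could not read %s", path)
        return ""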
Not sure if it's relevant here, but slicing can be used to copy a list, e.g. list_b = list_a[:]. Not sure if it is a truly deep copy, but it would save importing a module if that were the case.
That was very dismissive of exception wildcards. Whoever claims to know every possible error their code can raise either has too much time on their hands or is very arrogant and will likely be wrong. Just saying.
If you design a function, in general you should know what it does, otherwise you're creating/using functionality rather blindly. In the example of using "open()", there are several cases that every developer should know (e.g. file not found, file not readable because of wrong type/format, etc.). If you don't know about the basic errors you can encounter during programming, then of course the safest option you can opt for is a bare except. And as I mentioned in the video, using a bare except can also be fine for throwaway scripts. Knowing everything that can go wrong is part of being a professional dev; that's what the senior devs get paid for at the end of the day: solving issues that beginners/junior devs would never consider.
Exceptions are for handling known and unavoidable errors (e.g. from user input). If you don't know about an error, you WANT the program to throw an exception and halt the execution so you can debug what's wrong. This is why non-specific exception catching is a bad practice: you're hurting your ability to debug errors, which will inevitably have consequences that you didn't foresee and handle in the catch, because you didn't know about them!
I'll gladly explicitly catch all possible exceptions the day the standard library documents every exception each method/function can raise… Until then my code will have catch-alls where I do not want exceptions to continue propagating.
Errors by value >>> exceptions. Bonus points if it's also a monadic error-by-value. With exceptions you have to read a lot of docs to check, for every function, whether it can throw something and what it can throw.
For the 3:04 problem, instead of creating a new list with a new name, you can just use "for item in items[:]", which clones the list for the loop only, and you can continue to modify the original list in the loop.
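A minimal sketch of that pattern, borrowing the example list from the video:
items = ["A", "B", "C", "D", "E"]

# items[:] is a shallow copy, so removing from the original is safe here.
for item in items[:]:
    if item == "B":
        items.remove(item)

print(items)  # ['A', 'C', 'D', 'E']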
12:00 how is that not readable? In fact, you can read it from left to right very well "type of name equals type of x". In contrast, isinstance(name, str) is more unreadable in my opinion. You also didn't refactor the original example, where you're checking that both variables have the same type, in which case, it would now look like isinstance(name, type(x)) and now we get REALLY unreadable PS: I still think isinstance is preferable, just pointing out that readability is not its strong suit
They are functionally different, though. isinstance checks inheritance, e.g. isinstance(True, int) is True because issubclass(bool, int) is True. Moreover, the isinstance behaviour can be controlled with magic methods (__subclasshook__).
@@leo9463065 Yes, but when using "if cond is True:" you perform a hidden check that it is a boolean at the same time. "if 'a':" would execute the if block, because a non-empty string is considered truthy, but with "if 'a' is True:" the if block would not be executed, because 'a' is not the same as True, just a truthy value.
I hate poor exception handling with a passion. It is not only a beginner mistake: there is poor exception handling in gigantic scientific programming projects backed by the best universities. I let the software run for an entire night only to discover that it was literally hiding an error and discarding all of the generated data because of poor exception handling.
I know there is a lot of negativity around type(a) == type(b), but most of the time I do want comparisons to be that strict. I want to know whether they are both cats or both animals, so that I can treat them the same way.
@@Indently If I want to know that I can treat two objects exactly the same way. If one object has some differences from the other, which mean I can't do everything to it that I can do to the other, I want to know that. E.g. I want to know that I have either two generic Animals, two dogs, or two cats. Let's say I am loading Noah's ark: I want to make sure when I am loading the animals that I have a pair of exactly the same type of animal, but when picking I don't care what kind of animal it is. So I can't use isinstance(animal, Animal), because then I will always get True, assuming I have filtered out non-animals. But I can't do isinstance(animal, X) either, because when I have picked up a pair I don't know or care at this point what kind of animals they are.
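A tiny sketch of that Noah's-ark check (the class names are made up for the example); the strict type(a) is type(b) comparison does exactly this:
class Animal: ...
class Dog(Animal): ...
class Cat(Animal): ...

def same_species(a: Animal, b: Animal) -> bool:
    # Strict check: a Dog and a Cat both pass isinstance(x, Animal),
    # but only an exact type match counts as a valid pair.
    return type(a) is type(b)

print(same_species(Dog(), Dog()))  # True
print(same_species(Dog(), Cat()))  # False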
10:45 You can ignore this, but I'm interested what other Pythonistas think (esp. those unsullied by other languages): type hinting an "open" call and even a "read" seems extraneous... more so for strongly typed magic methods; I mean, if you don't know dunder init must return None, then don't be writing classes. Anyway, I find that whole look at the timestamp cluttered and difficult to parse. I mean, try to read this sentence: [Subject] I [pronoun][first person] [verb] like [verb][transitive] [object] Indently [noun][proper]. import this: Readability counts.
3:55
>>> items = [i for i in range(1, 20)]
>>> for item in items:
...     if item == 13:
...         items.remove(item)
...     else:
...         continue
...
>>> print(items)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19]
Why does the example I've provided work as intended, but yours doesn't? This also works if I type "items.remove(13)" instead of "items.remove(item)".
Skipping an item doesn't matter if you only print the list when you finish the loop, only if you do something with the item in the else block. Like the previous person said, you never did items.remove(14), so it won't be removed.
If you do not close your file, memory is still allocated to the object. Over time, the application will lag, then crash. Additionally, there are some applications that are meant to be kept active, for example a scheduler that performs an action every interval. You can try writing a loop that opens a file 10,000 times without closing it to test it out.
@@birdie123 But if it is a script that terminates immediately anyway, what's the problem? Is it only about building the good habit of always closing the file so that you do not forget when it is necessary?
@@oro5421 Python supposedly uses reference counting and deterministic destruction in addition to garbage collection to handle resources. From the Python 3.6 documentation => "If you don't explicitly close a file, Python's garbage collector will eventually destroy the object and close the open file for you, but the file may stay open for a while." (Note: this sentence was changed from Python 3.8+ 🤔) Yes, it is a good habit to close the file when not in use, to prevent memory leaks or causing unexpected changes unknowingly. It's akin to cleaning the utensils/cutlery after a meal instead of leaving them lying on the table. If your file size is small and you have a lot of computing resources, you might argue that it is insignificant. However, some real-life applications might be accessing files that are several GBs/TBs in size (e.g. machine learning, graphics, etc.) => i.e. managing computing resources becomes a big deal.
It's in order to provide strong typing to the LSP, so it will enable your IDE to give you hints about the methods associated with your variable (like .lower() for a 'str') and prevent basic typing errors, like multiplying a string by a string, without having to run your code.
@@oOMikyStarOo is correct. For someone new to all of this, the critical factor is that the annotation is providing tips to your IDE, not enforcing correct usage in your program-- Python will happily allow you to run a program where you multiply a string by a string, whereas a typed language will not only warn you in your IDE, but also fail to compile and give you an error message telling you why. Which is one reason why I wish Python was used for its original intended purpose-- relatively short scripts, one-off tools, rapid prototyping, etc. The fact that people use it to build large production systems (including my soon-to-be employer) makes me sad.
00:00 If I'm not wrong, Python has a predefined list of objects, including numbers from 0 to 255, that's why when you create variables with these values, they'll have the same id
Yeah you are correct
"integer literals" is their name.
I think CPython instantiates -5 to 256, not just 0 to 255
absolutely, that is correct
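A quick way to see that cache in action (constructing the ints at runtime so the interpreter can't fold them into one constant); this is CPython behaviour, so other interpreters may differ:
a = int("100")
b = int("100")
print(a is b)   # True: 100 falls inside the cached -5..256 range

x = int("1000")
y = int("1000")
print(x is y)   # False: equal values, but two separate objects
print(x == y)   # True: which is why == is the right check for values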
that’s not why the ids are equal (you can’t create all 3-character strings at initialization time, that’d be a waste of resources); at a low level strings are arrays of characters (therefore immediately decay into pointers), and since the video uses literal strings (known at compile time) they are written in the data section (and since it’s the same string it’s only one entry). not sure what the behaviour would be if you tried to mutate the string though. .lower() simply allocates a new (runtime) string which is why the address is different
Dude forgot to mention the most important difference between "is" and "==", so I will.
`is` is built into the language. It does what the interpreter says (similar to "for", "if", "while" and "in") and cannot be changed normally. `==` is an OPERATOR, and each class has its own definition of what "equals" even means, by defining it with the `__eq__` method. That's why objects from different classes can equal each other.
The most important difference between `==` and `is` is that the first can use different logic for each class, which means someone can implement a class that always equals everything or never equals anything (or uses really any logic), while `is` will always check the object's id: not to check whether both objects are equal, but to check whether they are the same object (usually by checking the memory address that holds the values).
Now, for a beginner this usually doesn't make much difference, and in most cases `==` is encouraged, but you should ALWAYS use `is` when checking for None values, since `is` can only match None if it really is None and will never use different logic that can falsely match None.
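To make that concrete, here is a small sketch of a class whose `__eq__` matches everything, which is exactly the case where `== None` lies to you and `is None` doesn't:
class MatchesEverything:
    def __eq__(self, other):
        return True

obj = MatchesEverything()
print(obj == None)   # True  -- __eq__ says so, even though obj is not None
print(obj is None)   # False -- the identity check cannot be overridden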
I think it's not much related, but now I understand why people simply write:
if variable:
    print('yes')
else:
    print('no')
without "is" or "==".
It will enter the block if "variable" is anything except None/False/0 or empty arrays, tuples, dicts etc. If the variable "exists" or "has elements" then it will enter 'yes', although I don't know if this is a good practice.
@@thelearningmachine_ An object evaluates to a bool through the `__bool__` method, similar to `__eq__`. Any class can implement its own logic of what it means to be "true".
Now, if it's bad or good practice, it depends on what your team (or yourself) decides is bad practice. For me, I always use an explicit condition if the value is not a boolean, because anyone can easily understand the condition and will not need to search (or memorize) when that object evaluates to True, so it's easier for them to read an explicit condition.
The use of `==` vs `is` depends on what the condition is.
...been using python for years, never knew about the enumerate() start argument. Thanks!
7:48 Optimization trick: if you already know the final size of your list, you can use `[None] * length` to have the space reserved in memory up front, but then you can't use methods like `append` (because your list is already filled with None values); you have to index into it manually.
Or with the same string n times you can use multiplication directly: my_string="hello"*50.
I don't really recommend doing this, as it can lead to weird errors if you accidentally jump out of the loop while filling in the list. It is safer and faster to use generators with list functions.
IIRC Python reallocates twice the space when growing lists. So it's not as expensive as you'd think to not pre-allocate: only O(log(n)) reallocations rather than O(n).
Honestly, if your performance needs that level of optimization, Python is the wrong tool for the job.
use numpy 👍
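Going back to the pre-allocation trick a couple of comments up, a minimal sketch of what it looks like in practice (the size here is arbitrary):
n = 1_000
values = [None] * n          # space for n elements reserved up front

for i in range(n):
    values[i] = i * i        # assign by index instead of append()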
The second example can be a bit misleading for the viewers. Granted, one should avoid modifying a list in place. But the resulting list after deleting "B" does contain "C". One can verify that, when the for loop is done, items = ["A", "C", "D", "E"]. But indeed, it's still recommended to create a copy or declare an empty list to populate.
Thanks!! I didn't get it at first, and then I found your comment that helped me understand. You deserve more than one like here for spending your time to explain it.
Yes, but if you also try to remove "C", how would that work?
"a lot of beginners" oh... come on ... There are enough experienced programmers who make these mistakes
i think someone just has a skill issue
edit: i'm joking, this comment isn't wrong
Yup. I've been using Python for, like, 6 years or something. I still make some of these.
Most of those I seem to have known, but the start in enumerate() 😬
A shallow copy doesn't copy every item except reference types, it just creates a new object with the same values.
3:04 You can also loop through the list backwards to avoid issues when changing the list as you loop through it. It doesn't sound like it should work, but if you play out the logic, it will.
What? How do you iterate backwards?
@@speedtime4228 You can simply wrap the iterable in "reversed()", e.g. for item in reversed(items):
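A small sketch of the backwards-by-index version, which is the safest way to delete while looping (the values are just an example):
items = ["A", "B", "C", "B", "D"]

# Walk the indices from the end to the start; deleting at i never shifts
# the elements we haven't visited yet.
for i in range(len(items) - 1, -1, -1):
    if items[i] == "B":
        del items[i]

print(items)  # ['A', 'C', 'D']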
10:11 In my opinion this can be simplified even more using `Path("notes.txt").read_text()` or `Path("notes.txt").write_text()`.
If you are doing some fancy file handling things where you explicitly need to open the file (or append to it or stream parts & not read it all at once), then sure, use `with open("notes.txt") as file:`.
However all too often I see code that opens a file just to read/write it all and then close it again. Why not remove all the hassle of worrying about opening and make use of the stdlib provided to you. There's even a bytes read/write function as well if you need that.
`pathlib.Path` FTW!
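For anyone who hasn't used it, a quick sketch of the pathlib version (the file name is borrowed from the video):
from pathlib import Path

notes = Path("notes.txt")
notes.write_text("hello\n", encoding="utf-8")   # opens, writes and closes
content = notes.read_text(encoding="utf-8")     # opens, reads and closes
print(content)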
I actually ran into an issue yesterday where I accidentally forgot to copy a dictionary before looping through and modifying a dictionary, and I got an error saying that the length was changed during the loop. I was actually a bit surprised when I didn't see that same error here.
When I write code that is a tool, let's say something about LEO assets... when Python goes wrong, I throw a builtin exception (and if it's I/O, I import errors and figure out which I/O thing went wrong), but if it's a domain-specific user mistake, I raise a custom exception. For example, they set orbit altitude to 2000 km, I'll throw a:
>>>class LEOAltitude(ValueError):
telling them LEO is 200 - 1600 km.
If it's more general, I may go nuts and have an MRO that looks like:
LEOAltitudeError [or MEO, GEO]
AltitudeError [or Inclination, RightAscension, Eccentricity, MeanAnomaly, ...]
OrbitParameterError [or Epoch or Classification... anything in a Two-Line Element]
ValueError [now it's Python's MRO from here on out]
Exception
BaseException
object
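A minimal sketch of what that hierarchy could look like in code (the names and the 200-1600 km rule are taken from the comment above, everything else is made up):
class OrbitParameterError(ValueError):
    """Base class for invalid orbital elements."""

class AltitudeError(OrbitParameterError):
    """Altitude is outside the supported range."""

class LEOAltitudeError(AltitudeError):
    """Altitude is outside the LEO band."""

def set_leo_altitude(km: float) -> float:
    # Domain rule: LEO is roughly 200 - 1600 km.
    if not 200 <= km <= 1600:
        raise LEOAltitudeError(f"LEO altitude must be 200-1600 km, got {km}")
    return km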
Building immutable strings that way is actually faster than you would expect on CPython. There's an optimization that allocates extra storage to a string object when it's built by repeated concatenation so copies can be avoided in most cases, but only if there is at most one reference to it, like here where it's a local variable. But even having that string part of a list already breaks this optimization, and it's not available e.g. on PyPy. Otherwise, if you have a small and fixed number of strings to join, format strings are the fastest since Python 3.6 because you skip the construction of a temporary list or tuple.
Regarding exception handling:
I understand why you'd use specific exception types if you handle them all in different ways. You should probably specify KeyboardInterrupts for example.
But if all you are doing is logging the error, maybe displaying an error message, and closing the program, what is the benefit of writing code for every exception individually?
That just seems like a lot of wasted time and extra lines for no reason. What does it matter what kind of exception it is if I am just gonna handle it the same regardless?
This sort of goes into the 'are you catching exceptions for known issues you want to fix during runtime, or are you catching exceptions for unexpected problems just before the system crashes?'
Some 'expected' errors, like a user supplying an invalid file name shouldn't cause the program to crash, just advise the user to try again. But others that are not 'expected', like unable to find an object you believe should always be in a list, should perhaps be handled differently. There's a whole philosophy about these two ways of dealing with problems.
Note it is OK to change the elements of a list while iterating over it; something like this:
for i, n in enumerate(nums):
    nums[i] = 10 * n
is fine, even though in this case (overwriting) a list comprehension is better:
nums = [10 * n for n in nums]
if you want to use nums[i] after you change it tho, you need to change n to 10*n
@@jayman1462 True, I would recommend updating nums[i] at the end of the loop, just don't forget it.
list comp creates a new list. Better just to do it in place.
1:50 For integers, Python creates objects for numbers -5 to 256 when it starts, so references to these will use the same object. I believe these values are chosen as they are commonly used. For strings, when creating a string literal of length
4:55 instead of this, you can do that:
items: list[str] = ['A', 'B', 'C', 'D', 'E']
if 'B' in items:
    items.remove('B')
for item in items:
    print(item)
8:03 maybe you can try this in one of your shorts videos:
def append_text():
    text: str = "" + "text" * 50
[item for item in items if item != "B"]
List comprehension one liners, the most python way to code
Not readable at all
@@ErenMC_ skill issue
I wonder what you want to say with this code.
There needs to be a book of just list comprehensions like this and the logic behind them. Someone get writing!
I admittedly have used the base exception handler for a very long time; however, I have now been trying to be more explicit (due to another one of your videos). However, there are still many cases where I just don't want my program to crash at all, so I just wrap it in a base exception handler and then log the exception (with the traceback), just so I can see that something went wrong, but I don't have to worry about it crashing.
Number 11: not using the programming thigh high socks
TRUE
"Hello, based department?"
If you use the `except Exception as e` catch-all, you can actually use "raise" to make sure the exception you got but don't/can't handle gets re-raised to the code that called your function. This can be handy sometimes.
Edit: A somewhat contrived use case: you catch FileNotFoundError, but you don't explicitly catch IO errors. Maybe your code doesn't need to know that IO errors are a thing.
In both cases, you want to act on the information that the file was not processed correctly. Maybe this is not a hard error in your library. You pass the catch-all exception to the calling function using "raise" after logging or something similar. Perhaps you want to revert your DB update that was probably corrupted, but you don't really care about the specifics of the error raised? Re-raising lets you do just that. I use this to great effect when using complicated libraries like BeautifulSoup.
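A short sketch of that log-then-re-raise pattern (the file-reading function is just for illustration):
import logging

def load_report(path: str) -> str:
    try:
        with open(path) as file:
            return file.read()
    except FileNotFoundError:
        # Expected: handle it here and carry on.
        logging.warning("No report at %s, using empty input", path)
        return ""
    except Exception:
        # Unexpected: record it, then hand it back to the caller unchanged.
        logging.exception("Unexpected error while reading %s", path)
        raise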
Ah yes, I remember doing the thing with the variables wrong. My variables were always something like s1, s2, s3, s4 and so on for every string variable. And what I also did was recycle variables: in one part I'd have a variable i as the result of an equation, and after putting the result into the GUI I'd use the same variable in a later part of the script for something completely different.
19:35 I feel like the poor exception handling isn't as much a "nooby" mistake as it is a flaw in Python's design philosophy. Well, OK, it is a nooby mistake, but that's because Python makes it tedious to find all the throwable exceptions an operation might raise, either in the docs or in LSPs. Why you can't just get static analysis of "what are all the possible exceptions this function can throw?" I do not know. I have to imagine it's possible and I've just never seen it.
What are LSPs?
@@thefanboy3285 A VS Code Language Server Protocol plugin, i.e. a linter like Pylance
@@thefanboy3285 Language Server Protocol. You have em for different programming languages that you want your editor to detect in your code and lets you do things like intellisense and jump to definitions and such.
Because the possible list of exceptions that a function like 'open' can throw depends on the underlying operating system.
How would an LSP/editor possibly know what exceptions Data General AOS/VS II from 1988 or Unisys Exec 8 from 1967 can generate?
@@JanBruunAndersen but wouldn't that still be just OSError, it's not making up error classes up on the fly
Regarding enumerations, one has to remember there's a lot of 'black magic' that happens when the 'for item in enumerate(items)' happens. It's creating/using the 'iterable' part of the object and determining the starting and ending points before running the loop. So some things like changing a value are fine, but changing the start or end points by appending or removing is problematic. Using a filter clause is great, because then each item is 'tested' against the filter before each new pass through the loop and skipped if it fails.
I am a well-experienced Python developer and I really enjoy your videos, but man, what I really admire here is your patience with the comments. Hope you keep up the good work despite that, and I hope people reading my comment see that the only real problem here is how annoying the developer community may get ~often~ sometimes
Both the positive and negative comments helped me grow. I understand that we all come from different backgrounds, and that we don't all share the same views, but that's just how things are :)
It is very common to remove elements without having to recreate the whole list (for performance reasons)! The common practice in all languages is always to iterate from END to start, by index (only). That's all.
For a new list, as you already know, a list comprehension with an if clause would be more Pythonic (instead of a loop).
Other than that, great video!
One of the things that drives me crazy with Python is how it handles scoping. For example with the `with ... as file:` block, you create a variable within the block, but later, outside the block, you can still reference that variable (print(content)). If you did something similar in a language like C, C++ or Java you would get an error, as it would not be a known variable.
It may be a little unusual, but it's nothing to be afraid of; at least it has uses. There's a fair number of situations where it's convenient to use the last iterated value of a loop from outside (after) the loop. I don't think this usage makes for especially hard-to-read code either, as long as you're aware of it.
@kaelanm-s3919 No, it is still harder to read. In other languages, to return data from an inner scope to an outer scope, you have to declare the variable in the outer scope first. But in Python you have no reason to do something like this most of the time, so when others read code like this, they may be confused a bit until they read the whole inner scope.
Thank you for the video! Regarding mistake #6, using isinstance can be useful to check the type of a variable, but after your explanation it is still not clear to me how to check if two variables have the same type. If I know in advance what type to check, I could do:
if isinstance(name, str) and isinstance(number, str):
    ...
But if I do not know what I need to compare against, then something like this would be required?:
if isinstance(name, type(number)):
    ...
In that case, if I'm going to use the "type" function anyway, wouldn't it be better to just compare the types?:
if type(name) == type(number):
    ...
🤔
This depends on the application. There are three cases.
- Exact type match is required. Use the "bad" code.
- One directional subclass relationship allowed. Use isinstance(a, type(b)).
- Bidirectional subclass relationship allowed. Use (isinstance(a, type(b)) or isinstance(b, type(a))).
If anyone knows a better way to handle the third case, let me know.
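A tiny sketch of the three checks with a concrete pair of values (bool is a subclass of int, which makes the differences visible):
a, b = True, 1   # type(a) is bool, type(b) is int, and bool subclasses int

exact = type(a) is type(b)                                   # False
one_way = isinstance(a, type(b))                             # True (bool -> int)
other_way = isinstance(b, type(a))                           # False (int is not a bool)
two_way = isinstance(a, type(b)) or isinstance(b, type(a))   # True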
In a different YouTube video I saw another method to create a new copy of a string (in addition to using deepcopy). The video said using the slice operator would create a new copy. For instance, if you had a string called str1 and wanted a distinct copy of it in str2, you could use the statement str2 = str1[:] as opposed to str2 = str1, which would make str2 point to str1 so they would both be the same string.
#10 I still do this to some degree. Part of this is that I'm lazy and don't want to keep typing long variable names over and over again, and part of it is a throwback to starting on BASIC 30+ years ago as a little kid. Back in the BASIC days, single letter variable names were the norm and anything more was almost a sin, so that's what I've carried over for so long. As for the longer names, these days there's plenty of auto-complete helpers in dev environments (including Vim) that mean that longer names are no longer the repetitive hassle they once were, but I still struggle with the balance of too concise vs too verbose.
My suggestion for removing items from a list: use either a list comprehension or the filter function, since both are easily optimized by your Python runtime/compiler.
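Both variants side by side, as a quick sketch (the data is arbitrary):
items = ["A", "B", "C", "B", "D"]

kept = [item for item in items if item != "B"]             # list comprehension
kept_too = list(filter(lambda item: item != "B", items))   # filter() is lazy, so wrap it in list()

print(kept)      # ['A', 'C', 'D']
print(kept_too)  # ['A', 'C', 'D']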
I really enjoy your videos and find them very informative. I was wondering if you could create a more advanced video on common mistakes in Python, particularly related to code design. I've noticed that many people tend to duplicate code excessively.
Specifically, I'd love to learn more about why inheritance is often not recommended and why composition is considered a better structure in many cases.
Thanks in advance if you decide to make such a video! 😄
The biggest mistake even experienced coders make: hard-coded resource names (URLs, file paths, etc.)
I've never investigated a codeset at any company that didn't have them all over the place. If you wanna look like a pro, all you have to do is look for hard-coded file paths embedded in Python code.
Resource paths should be in a single location in a config file (or database) and accessed by a config class that all client code uses to get those resource names. The biggest source of brittle code I've seen is hard-coded resource names.
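A bare-bones sketch of that idea, assuming a JSON file called config.json with an entry like {"notes_path": "data/notes.txt"} (all of the names here are made up):
import json
from functools import lru_cache

class Config:
    def __init__(self, path: str = "config.json") -> None:
        with open(path) as file:
            self._values = json.load(file)

    def resource(self, name: str) -> str:
        return self._values[name]

@lru_cache(maxsize=1)
def get_config() -> Config:
    return Config()

# Client code asks the config for the path instead of hard-coding it.
notes_path = get_config().resource("notes_path")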
3:04 it does work if you modify the current element, you just have to reassign it. For example:
for index, item in enumerate(items):
    if item == "B":
        items[index] = "new value"
        item = "new value"
    else:
        print(item)
It's a very niche use case and definitely a janky approach, but I've used it before and it doesn't cause any bugs, since it doesn't change where the iteration is occurring.
You could also just use an index and a while loop; then, when you remove the item, just continue the loop and don't increment the index. Arguably less Pythonic though.
Generally I see very few needs for removing elements from a list outside of when it's being processed; you can usually just use filtering and process it on the spot, especially since removing from the middle of a list is a slow operation: it has to shift each element.
If you do need to do it often, you may be using the wrong data structure.
Some cool stuff to think about. Generally, good. I'm making notes for my improvement.
I have some quibbles with the text concatenation versus list append/join contrast - but I don't disagree with your efficiency test, the numbers don't lie. It's just that most large-format text will be written with the JSON, XML, CSV modules - most human-readable text would be output from a triple-quoted template with variable substitutions in practice. I wouldn't sacrifice readability for that mild resource gain - and it's unlikely I would need to.
Also regarding exception blocks using print(): standard out may be a team's preference if the code is deployed to a container cluster and logging aggregation is handled via a third-party tool like Datadog or Loggly. I wouldn't be quite so hard on using print() - even though I agree that a logger with the standard-out flag set would be better and easier to maintain. The important bit was catch and report the exception - which is key!
Keep up the good work. I really enjoy the attention to nuances that you bring us. Thank you.
That's the second time I've heard a YouTuber say that just today. Though, you're the only one of the two that I believe.
This was really helpful. Hope you do another one in the future.
Is it possible to append during iterating, to get extra iterations?
Tip #2 should specify don't alter the structure of an iterable whilst iterating over it. You can modify the elements to your heart's content
I have never seen the id function before...
Underrated channel fr
num: list[int] = [1, 2, 3]
c_num: list[int] = num.copy()
c_num.append(4)
print(f'{id(num)=}')
print(f'{id(c_num)=}')
print(num)
print(c_num)
5:07 Better solution for the example problem: items = [item for item in items if item != "B"]
References are a big trap for beginners, as they're not really intuitive. As a new programmer, you don't tend to think about what is happening under the hood. For example, you just think that after a = b, "a" now has the same information as "b", without really thinking about copy vs reference.
I was actually told that fast string concatenation should be done with a StringIO() object. Jesus, I've now compared the three methods and it's the worst: twice as slow as the list method. I was lied to. So thanks for that input!
I heard recently that StringIO was in fact the fastest, I didn't test it myself so I wouldn't be able to comment, but I'll do a bit more research regarding that and possibly post a video soon enough :)
#2 is a no-no in almost every language I know of. Most languages have no tolerance for modifying a container during iteration.
Please make a video on Python Selenium, I'm getting confused with type hinting while declaring the necessary variables
How does 'text'*50 perform?
Much faster since only one new string object is made, instead of 50 new ones. The overhead really comes from creating new string objects.
@@adiaphoros6842 cool
#11 - Putting a type on each variable when it's obvious (and probably useless)
For number 5, you start off with a (supposedly bad) example of checking if two variables are of the same type, and then say to replace it by asking if EACH ONE is of a specific (or in a list of) type(s). This is NOT fulfilling the same purpose as what you were replacing. This is checking each type, and not comparing the types at all.
The only way to use isinstance to replace the function of the code you started with that I immediately see (as an ABSOLUTELY novice python programmer), would be to write "isinstance(name, type(number))" at which point you have reverted half of what you wanted to change.
Thank you very much, I love your videos!
You can actually iterate through a list while modifying the original and without making a copy, you just have to iterate through the indices in reverse.
Yes, deepcopy() is slow. That's why we shouldn't use it. When we copy things, we should already know the structure of what it is we are copying, and we can use that knowledge to maintain any level of shallow copying we want.
matrix = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12],
          [13, 14, 15, 16]]
new_matrix = matrix.copy()            # slicing, or [:], can be
new_matrix[0] = matrix[0].copy()      # used in place of .copy() as well.
new_matrix[0].append(17)
print(matrix)
print(new_matrix)
Noticed a strange thing in Python:
a < b is the same as not (a >= b)? Not!!!
When you define the __gt__ and __eq__ dunder methods in your class, __lt__ is still unknown. If you check whether a is less than b, it will not fall back to not (a >= b); instead it will raise an AttributeError or whatever it raises.
Anyone know why the hell it's done like this?
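Not an answer to the "why", but the usual workaround is functools.total_ordering, which fills in the missing comparison methods from __eq__ plus one ordering method (the Version class here is just an example):
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, number: int) -> None:
        self.number = number

    def __eq__(self, other) -> bool:
        return self.number == other.number

    def __gt__(self, other) -> bool:
        return self.number > other.number

print(Version(1) < Version(2))   # True, derived from __gt__ and __eq__
print(Version(1) <= Version(1))  # True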
Thank you for the enumerate() index. I was doing the math like a fool lmaoo
Old people tend to use shorter variable names because, if I am not mistaken, long names used to be a problem.
#4 I tend to use Path('notes.txt').read_text() now if I just have a quick read/write to do. Maybe I'm just enamoured with Path.
I knew not to do all these things, but it's great to hear why they're bad practises
Like I never considered a crash before .close()
How do we then name GUI variables. I encountered this mental block a lot.
Combine them with the parent name, for example: dialog_path, form_name, txt_username, chk_remember.
var_00001, var_00002, var_00003, and so on. Even gives you consistent indentation :)
@@vaolin1703 🦕
@@vaolin1703 The indentation is actually a good argument. However, I tend to tell the beautify procedure to align variable assignments for me.
The `except: do something` is still not the worst thing. The worst is `except: pass` (unless you really know what you're doing).
The maybe good way to handle a generic exception is:
except Exception as e:
    print(e)  # or traceback.print_exc() after "import traceback" to get the full traceback
The second one is interesting. I am using a lot of pandas, so I am usually working on and modifying some DataFrame. How does the looping behave with df.iterrows? Should I still create a new list/DataFrame? And how should I save the modified data?
I suppose pandas supports masks, which is usually better than looping, but it depends on your use case
Yeah, it's usually best to avoid loops and iterrows like the plague. If you can, it's almost always better to make new columns, create copies, use native broadcasting, and in more complicated cases to make masks for where you want to change your data and then use .loc to change it.
valid_rows = df.notnull().all(axis=1)  # True for rows with no missing values
df2 = df[valid_rows]
df = df2  # shadow the original dataframe, which can be garbage collected behind the scenes, so you can keep overwriting this df as many times as you want
angle_correction_mask = df['angle'].between(360, 720)
df.loc[angle_correction_mask, 'angle'] -= 360
With example 3, doesn't appending a list with a string also create a new string at the end of that list? If so, why is it faster since in both functions you are creating a new string?
If I had to guess, maybe it's because the memory has already been allocated at the end of the list so it's faster to access, whereas the first example, new memory has to be allocated. Is that right?
You are creating twice as many strings when using +=.
In "text += 'test'", you first have the string 'test' and then create a brand-new string text + 'test', copying all the existing characters each time.
With the .append & .join method, you only put a reference to the piece at the end of the list, and one final string is built by ''.join().
In my opinion, bad explanation at 17:10 (shallow copy). A simpler explanation: b = a.copy() will create a new list, but the items inside b are still the same objects. This only matters for mutable items (lists, dicts), since immutable items cannot be modified anyway. b = deepcopy(a) will create a new list and new copies of all the items inside it.
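A quick illustration of that difference, just as a sketch:
from copy import deepcopy

a = [[1, 2], [3, 4]]

shallow = a.copy()   # new outer list, same inner lists
deep = deepcopy(a)   # new outer list AND new inner lists

shallow[0].append(99)
deep[0].append(-1)

print(a)        # [[1, 2, 99], [3, 4]] - mutated through the shallow copy
print(shallow)  # [[1, 2, 99], [3, 4]]
print(deep)     # [[1, 2, -1], [3, 4]] - independent of a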
Finally. 0/10 mistakes made by me. All those years were worth it.
5:12 Actually, why not just do 'for item in list(items)' to make a copy, so the list we are iterating through doesn't get elements removed from it?
if you're copying the list you may as well make a new list and append to it; copying the list just creates a new one and loops through the original anyway, except it also keeps the unnecessary items (in this case "B")
At least for the given example, doing so would turn an O(n) process into O(n^2). Every time remove is called, python needs to find where the object is in the list, remove it, and then change the index of every element after it. It's much better to just make a new list that has only the things you want
in example number 3: there is something much faster than join, what we call StringIO or BytesIO
I couldn’t create a reasonable use case to prove that in Python 3.12, do you mind sharing an example?
Not a fan of typed lists--since list doesn't respect type. If you need an ordered container of characters, python has a standard library thing:
array.array('c', ....)
The point is the latter tells devs: this is an ordered container of characters by definition, and that respects POLA. (principle of least astonishment).
Is it ok to use "var is not None"?
Yes it is okay (actually recommended). This is because None is a singleton, so every instance of None is actually the same instance.
Yes. Same applies for True, False, and the ellipsis constant.
yes, definitely, i use it all the time
Not just ok. It's best practice to do so
Yes, but “Var is !nil” exist
Nope, I'll stay with except BaseException in case something goes very wrong when using modules written by god knows who
100% agree.
Unless you want to perform a specific task when a specific exception is raised (which almost never happens)
20:42 I'm not with you on that. Sometimes there can be a lot of exceptions, and the generic class can be better to use since we don't have to handle each one of them.
"A cat is not _exactly_ an animal." - Indently 2024 at 13:57
I normally try to avoid writing except Exception, because it catches all types of exceptions. Normally, what I do is: I first run the program; if it works correctly, I deliberately provide some cases that I know will throw an error, and write the except block accordingly. But for some packages that use custom error types, like TensorFlow, I have to write the except Exception block. Is there any other way to handle it? I don't know 😢. If you have better suggestions, please do let me and others know. 21:31
Not sure if relevant here, but list_b = list_a[:] can be used as a list copy. Not sure if it is a truly deep copy (I believe it's only a shallow copy), but it would save importing a module if it were.
That was very dismissive of exception wildcards. Anyone who claims to know every possible error their code can raise either has too much time on their hands or is very arrogant and likely wrong. Just saying.
If you design a function, in general you should know what it does, otherwise you're creating/using functionality rather blindly. In the example of using "open()", there are several cases that every developer should know (eg. file not found, file not readable because of wrong type/format, etc).
If you don't know about the basic errors you can encounter during programming, then of course the safest option you can opt in for is using a bare except. And as I mentioned in the video, using a bare except can be fine also for throw away scripts.
Knowing everything that can go wrong is part of being a professional dev, that's what the senior devs get paid for at the end of the day. Solving issues that beginners/junior devs would never consider.
Exceptions are for handling known and unavoidable errors (e.g. from user input). If you don't know about an error, you WANT the program to throw an exception and halt execution so you can debug what's wrong. This is why non-specific exception catching is a bad practice: you're hurting your ability to debug errors, which will inevitably have consequences that you didn't foresee and handle in the catch, because you didn't know about them!
I’ll gladly explicitly catch all possible exceptions the day the standard library documents all exceptions each method/function can raise…. Until then my code will have catch alls where I do not want exceptions to continue propagating.
errors by value >>> exceptions
bonus points if it's also a monadic error by value
you have to read a lot of docs to check, for every function, whether it can throw something and what it can throw
For a general exception, I can either use except Exception as e: ....... or except: traceback.format_exc()
What is a reference type?
for the 3:04 problem, instead of creating a new list with a new name, you can just use "for item in items[:]" which clones the list for the loop only, and you can continue to modify the original list in the loop
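A short sketch of that slice-copy trick (assuming a list like the video's example):
items = ["A", "B", "C", "D", "E"]

# items[:] makes a throwaway copy to iterate over,
# so removing from the original list doesn't skip elements
for item in items[:]:
    if item == "B":
        items.remove(item)

print(items)  # ['A', 'C', 'D', 'E']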
12:00 how is that not readable? In fact, you can read it from left to right very well "type of name equals type of x". In contrast, isinstance(name, str) is more unreadable in my opinion. You also didn't refactor the original example, where you're checking that both variables have the same type, in which case, it would now look like isinstance(name, type(x)) and now we get REALLY unreadable
PS: I still think isinstance is preferable, just pointing out that readability is not its strong suit
they are functionally different, though. isinstance checks inheritance, e.g.:
>>> isinstance(True, int)
True
because so is:
>>> issubclass(bool, int)
True
Moreover, the isinstance/issubclass behavior can be controlled with magic methods (__subclasshook__).
Maybe it's dumb to ask, but what is the editor in this video?
For the second one, adding [:] will solve the issue... for item in items[:]: ...
20:54 is there any simple way to check for None without using try/except? I was a C# developer; there it was the '?' operator to check if something is null...
You can check if the file exists using the `Path` class from the stdlib module `pathlib`. E.g. `if not Path(path).exists(): return`
@@carrotmanmatt ohh thanks a lot
for list iteration, why don't you just create a copy of the initial list:
for item in items[:]:
...
Isn't the best way to iterate over a list that you want to modify, this:
for item in list(mylist): # do stuff to mylist
Pycharm always highlights not using is with booleans so I suppose it's ok for that?
But if you write something like
`if condition is True:`
and condition is always a boolean, you can just write
`if condition:`
@@leo9463065 Yes, but when using "if cond is True:" you perform a hidden check if it is a boolean at the same time. "if 'a':" would execute the if block, because a non-empty string is considered truthy, but with "if 'a' is True:", the if block would not be executed because 'a' is not the same as True, but just a truthy value.
to the second point: you can just iterate though a copy of the original list
I hate poor exception handling with a passion. It is not only a beginner mistake. There is poor exception handling in gigantic scientific-programming projects backed by the best universities. I let the software run for an entire night only to discover that it was literally hiding an error and discarding every piece of generated data because of poor exception handling.
I know there is a lot of negativity around type(a) == type(b), but most of the time I do want comparisons to be that strict: I want to know whether they are both cats or both animals, so that I can treat them the same way.
I don't understand how isinstance() prevents you from doing that though. Do you mind sharing an example where isinstance() doesn't work for you?
@@Indently if I want to know that I can treat two objects exactly the same way. If one object has some differences from the other which mean I can't do everything to the one that I can do to the other I want to know that.
e.g. I want to know that I either have two generic Animals, or two dogs, two cats.
Let's say I am loading Noah's ark: I want to make sure that I'm loading a pair of exactly the same type of animal, but when picking I don't care what kind of animal it is. So I can't use isinstance(animal, Animal), because that will always be True (assuming I have filtered out non-animals). And I can't use isinstance with a specific class, because when I have picked up a pair I don't know or care at that point what kind of animals they are.
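For that kind of strict pairing check, comparing the concrete types directly seems reasonable; a tiny sketch (class names made up for the example):
class Animal: ...
class Cat(Animal): ...
class Dog(Animal): ...

def same_species(a: Animal, b: Animal) -> bool:
    # strict check: both objects must be instances of the exact same class
    return type(a) is type(b)

print(same_species(Cat(), Cat()))  # True
print(same_species(Cat(), Dog()))  # False - both Animals, but not a valid pair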
in 2024 you should use content: str = Path('notes.txt').read_text() ...after from pathlib import Path
print("text" * 50)
print("Hello!" * 50)
excellent
What about 'text' * 50?
I see that's the reason for deepcopy
ye i always use with
10:39 no it's NOT "Pythonic" - one of Python's tenets states that "explicit is better than implicit", hence the "self"-ishness of class methods 😂
id = memory address
not always - id() returning the memory address is just a CPython implementation detail; other interpreters can use different values, and a class cannot override what id() returns.
6:34 'text' * 50 🗿
catching a true bare exception looks like:
try:
    [statement]
except:
    print('something went wrong, but we have no idea')
this will take down your project.
Magic numbers 😅
10:45 you can ignore this, but I'm interested what other pythonistas think (esp. those unsullied by other languages):
type hinting an "open" call and even a "read" seems extraneous...more so for strongly typed magic methods, I mean if you don't know dunder init must return None, then don't be writing classes. Any,way I find that whole look at the time stamp cluttered and difficult to parse. I mean try to read this sentence:
[Subject] I [pronoun][first person] [verb] like [verb][transitive] [object] Indently [noun][proper]
import this: Readability counts.
3:55
>>> items = [i for i in range(1, 20)]
>>> for item in items:
...     if item == 13:
...         items.remove(item)
...     else:
...         continue
...
>>> print(items)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19]
Why does the example I've provided work as intended, but yours doesn't?
This also works if I type "items.remove(13)" instead of "items.remove(item)".
because you don't do anything with 14; try printing the item in the else block and you should see that 13 and 14 are not there
so for an example use case, if you wanted to remove both 13 and 14, you would only be able to remove 13
skipping an item doesn't matter if you only print the list when you finish the loop; it only matters if you do something with the item in the else block, like the previous person said
you never did items.remove(14), so it won't be removed
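A small demo of that skip (just the same loop with a print added), showing that 14 is never visited even though it stays in the list:
items = list(range(1, 20))

for item in items:
    if item == 13:
        items.remove(item)
    else:
        print(item, end=' ')  # 14 never gets printed: removing 13 shifts
                              # the list, and the iterator jumps over 14

print()
print(items)  # 14 is still in the list, it was only skipped by the loop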
I don't really understand the closing-files thing. I mean, the file still closes when the program terminates for any reason. Like, what is going to happen?
If you do not close your file, memory is still allocated to the object. Over time, the application will lag, then crash.
Additionally, there are some applications that are meant to be kept active, for example a scheduler that performs an action every interval.
You can try writing a loop to open a file 10,000 times without closing to test it out.
@@birdie123 but if it is a script that still terminates immediately, what’s the problem? Is it only for making a good habit of closing the file always so that you do not forget when necessary?
@@oro5421 Python supposedly uses reference counting and deterministic destruction in addition to garbage collection to handle resources.
From Python3.6 documentation => "If you don’t explicitly close a file, Python’s garbage collector will eventually destroy the object and close the open file for you, but the file may stay open for a while. " (Note: This sentence was changed from Python 3.8+ 🤔)
Yes. It is a good habit to close the file when not in use to prevent memory leak, or causing unexpected changes unknowingly.
It's akin to one cleaning the utensils/cutlery after a meal, instead of leaving it lying on the table
If your file size is small, and that you have a lot of computing resources, you might argue that it is insignificant.
However, some real-life applications might be accessing file/s that is/are several GBs/TBs in size (e.g. machine learning, graphics etc.). => i.e. Managing computing resources will become a big deal.
@@oro5421 most Python programs go on to do something after they've read the content of the file
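For completeness, the with-statement pattern others mention closes the file for you even if an exception is raised, so none of this becomes your problem; a minimal sketch (assuming a notes.txt exists, as in the video's example):
# the context manager closes the file automatically,
# whether the block finishes normally or raises
with open('notes.txt') as file:
    content = file.read()

print(file.closed)  # True - already closed once the with block ends
print(content)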
what does the colon : mean? I thought you write a = "bob" in python
type annotation
It's there to provide type information to the LSP, so your IDE can give you hints about the methods associated with your variable (like .lower() for a 'str') and catch basic typing errors, like multiplying a string by a string, without having to run your code
@@oOMikyStarOo is correct. For someone new to all of this, the critical factor is that the annotation is providing tips to your IDE, not enforcing correct usage in your program-- Python will happily allow you to run a program where you multiply a string by a string, whereas a typed language will not only warn you in your IDE, but also fail to compile and give you an error message telling you why. Which is one reason why I wish Python was used for its original intended purpose-- relatively short scripts, one-off tools, rapid prototyping, etc. The fact that people use it to build large production systems (including my soon-to-be employer) makes me sad.
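To make that point concrete, a tiny sketch showing that annotations are hints for tools, not runtime checks (Python stores them but does not enforce them):
name: str = "bob"        # the ': str' part is only an annotation
age: int = "not an int"  # a type checker / IDE flags this, but Python runs it fine

print(__annotations__)   # {'name': <class 'str'>, 'age': <class 'int'>}
print(age)               # 'not an int' - no runtime error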