The way you explain it is so nice and easy to understand!
Thanks a lot. Happy to hear!
Hi Rainer, long-time lurker here. Great videos you've got; you are a natural-born teacher and we are very lucky to have you. One small tip regarding your videos' audio. I am not entirely sure how it is done, whether automatically or applied manually by you or your audio/video editor, but it seems that some form of audio compression is applied to your voice to bring your speech to a roughly uniform volume level. However, it also kicks in when you are quiet and then raises the noise floor, which amplifies the surroundings when you are not saying anything, and it also amplifies your breathing :) Like you are suddenly Kirby :) It's not a big deal at all, but I just felt I should mention it to increase the quality of your already high-quality videos :) Thanks a lot for doing this work and God bless you :)
Hi, thanks for the info. Can you maybe point me to a quiet section? It would help me if I could hear it as well. I don't do anything manually in terms of audio quality, but maybe the microphone or Premiere does it.
Thanks!
@@RainerHahnekamp It happens basically every time you breathe in or out. It shouldn't be that loud. It's definitely an automated thing; maybe check if your microphone does auto compression, or perhaps it's some setting in Adobe Premiere.
I just did some research, and it turns out YouTube also compresses the audio quite a bit; apparently it's a bit less when your output volume is lower.
Honestly, your videos sound good enough, so I wouldn't spend too much time on it. I just mentioned it in case it was something you or your editor did, and perhaps it could be dialed back a bit.
Keep up the good work :)
@@UfuUfu-sj3bv Thanks. Unfortunately, I do everything on my own and I don't have the necessary education when it comes to audio/video recording. In my next video I can already hear the loud breathing in the original video file. So it looks like it is coming from the microphone itself.
@@RainerHahnekamp Well, compression is basically the process of evening out the volume of all (or most) parts so that the sound source becomes more predictable; with vocals, that makes the result easier to hear. Both soft and loud sounds get pressed down, and because there are no more unpredictable spikes in loudness, the whole thing usually gets turned up a lot afterwards. So what's probably happening in your case is that the audio is being compressed down to a nearly uniform volume and then raised back up, making your breathing sound as loud as your normal speaking.

Audio sources have a very big dynamic range, meaning they usually go from very soft to very loud. You must have noticed it at some point in a movie: the music is very loud, but the voices are too soft, so you turn it up and get your eardrums blasted. Cheaper streamer gear probably does the compression for you, but that leads to uncontrollable results, as you've noticed. So investing at some point in a better microphone and audio interface could be a good move to increase production quality. Then you can play around with compression yourself on an unmodified audio source. But sound editing isn't easy, since there are a lot of dimensions to deal with. So perhaps it's better to wait until these videos have made you a millionaire and you can get an audio editor to do it for you :)
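To make that mechanism concrete, here is a toy numeric sketch of downward compression plus makeup gain. This is not real DSP code, and the threshold, ratio, and sample values are made up purely for illustration; it just shows why the quiet "breathing" sample ends up louder after processing.

```typescript
// Toy illustration: a compressor reduces loud peaks above a threshold,
// then "makeup gain" raises everything back up -- including quiet sounds.
function compress(sample: number, threshold: number, ratio: number): number {
  const level = Math.abs(sample);
  if (level <= threshold) return sample; // quiet parts pass through untouched
  const compressed = threshold + (level - threshold) / ratio;
  return Math.sign(sample) * compressed;
}

function withMakeupGain(samples: number[], threshold: number, ratio: number): number[] {
  const compressed = samples.map(s => compress(s, threshold, ratio));
  const peakBefore = Math.max(...samples.map(Math.abs));
  const peakAfter = Math.max(...compressed.map(Math.abs));
  const makeup = peakBefore / peakAfter; // restore the original peak level
  return compressed.map(s => s * makeup);
}

const speech = 0.9;    // loud speaking
const breathing = 0.1; // quiet breathing
const [loudOut, quietOut] = withMakeupGain([speech, breathing], 0.5, 4);
// The loud speech keeps roughly its original level, but the quiet
// breathing is now noticeably louder than it was before.
console.log(loudOut.toFixed(2), quietOut.toFixed(2)); // → 0.90 0.15
```

The breathing sample went from 0.10 to 0.15: the compressor squashed the loud peak, and the makeup gain then lifted everything, quiet parts included.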
I hear the same, but it's better to have compression and hear everything than to be super quiet :) So just leave it as is, I guess.
Very good video! Does the same method apply to my SignalStore, or do I have to do something different?
Yes, the SignalStore is an Angular service which exposes native Signals. Everything applies there as well.
Hi Rainer. Brilliant video, thanks for taking the time! We usually cover DOM testing using Cypress E2Es. Do you think there is any advantage to using DOM-based tests in our unit tests too? We had been simply "newing" components before and supplying mocks in the constructor (without TestBed). Looks like we'll have to move to TestBed in any case to use effects in the constructor.
Yeah, so apart from the fact that you don't have any other option to test effects, you need to ask yourself what benefits you get from non-DOM tests. I guess you are directly calling methods of your TypeScript component then?
If yes, here is an official statement from the Angular team on that topic: github.com/angular/angular/issues/54438#issuecomment-1971813177
Tests without DOM access are usually done against services or logic-heavy functions.
@@RainerHahnekamp Thanks Rainer! Yeah, we normally call the functions directly and test that the logic is correct. We extract a lot of the logic into standalone (pure) util functions and test them separately. My thought was that DOM tests would test the Angular wiring, which is already tested by Cypress, but from your video I can see the benefit that you can refactor and the tests will still run (and still be meaningful), as they are abstracted a little from the nitty-gritty of the implementation. They are also a lot faster than e2e tests.
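The trade-off discussed above can be sketched in a framework-agnostic way. The `CounterComponent` below is a made-up example, not Angular code: the point is only that asserting on rendered output, rather than calling internals, keeps the test valid across refactorings.

```typescript
// Framework-agnostic sketch: assert against the rendered output
// instead of the component's internal state.
class CounterComponent {
  private count = 0;                      // internal detail
  increment(): void { this.count++; }
  render(): string { return `<p>Count: ${this.count}</p>`; }
}

const counter = new CounterComponent();
counter.increment();

// A DOM-style assertion checks what the user would actually see...
console.log(counter.render()); // → "<p>Count: 1</p>"
// ...so renaming `count` or restructuring the internals doesn't break
// the test as long as the rendered output stays the same.
```

A test that instead asserted `counter["count"] === 1` would break on any internal rename, even though the user-visible behavior never changed.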
Very nice Video Rainer
Thank you DJ
It seems to me that flushEffects won't catch bugs when the effect is called more often than expected.
My theory is that if you call flushEffects two times, it will execute the effect at most two times. So if the effect would have run twice between the flushEffects calls, it will instead execute only once, resulting in invalid test results.
To work around this you can call flushEffects after every interaction with the service, but that's horrible DX in my opinion.
What would you suggest in this case @RainerHahnekamp ?
Hi Norbert, yes, you are right. When Angular's change detection would run an effect two times and you forget to call flushEffects at the right time in the test, the effect runs only once. So you end up with a test which gives you wrong results.
That's why the recommendation is to avoid flushEffects altogether and test as much as you can via the DOM. You can still forget to run detectChanges there, but the chances aren't that high, because you need detectChanges for your other assertions against the DOM as well.
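The coalescing behavior discussed in this thread can be sketched with a minimal, made-up scheduler. This is an illustration of the scheduling idea only, not Angular's actual implementation: writes merely mark the effect dirty, and the effect body runs once per flush, no matter how many writes happened in between.

```typescript
// Minimal sketch of dirty-flagging: an effect runs at most once per
// flush, so multiple "signal writes" between flushes coalesce.
type Effect = { fn: () => void; dirty: boolean };

class TinyScheduler {
  private effects: Effect[] = [];

  register(fn: () => void): Effect {
    const e: Effect = { fn, dirty: true }; // dirty on creation: runs on first flush
    this.effects.push(e);
    return e;
  }

  markDirty(e: Effect): void { e.dirty = true; } // stands in for a signal write

  flush(): void {
    for (const e of this.effects) {
      if (e.dirty) { e.dirty = false; e.fn(); }
    }
  }
}

const scheduler = new TinyScheduler();
let runs = 0;
const logEffect = scheduler.register(() => runs++);

scheduler.flush();              // first flush: the effect runs "initially"
scheduler.markDirty(logEffect); // first write
scheduler.markDirty(logEffect); // second write before any flush
scheduler.flush();              // both writes collapse into ONE run
console.log(runs); // → 2, not 3
```

This is exactly why a test built on manual flushing can report a passing result while hiding an extra (or missing) effect execution: the flush sees only "dirty or not", not how many times the state changed.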
Awesome video that really clarifies the basics!
Quick question:
When starting out with the expectations for the effect, you said that the effect will not run until after change detection.
I am a bit confused here, since the docs say that effects run at least once initially. Why was the effect not run initially in this case?
Yes, because the docs assume that change detection is always part of the game. "Run initially" means that the effect starts out dirty. So you always have at least one execution during change detection.
Does this answer your question?
The effect WAS indeed running initially, but not before the first change detection. So the first change detection always triggers the effects, but after that only those that have changed.
Thanks Rainer for the videos. A bit off topic, but I wanted to ask about the new esbuild-based builders in Angular. It seems that esbuild does not support dynamic imports; it gave me an error when I changed a route to a lazy-loaded module. So we can't use esbuild with lazy-loaded routes?
Hi Giorgi, I am not aware of any issues like that. I've done all my latest videos with Angular 17, esbuild and lazy loaded modules. It works. Please compare your angular.json config with mine. Maybe there is something you've missed.
@@RainerHahnekamp OK, will check, thanks. I updated recently with Nx. The Angular documentation says that you just need to replace browser with browser-esbuild and nothing else. Seems something more is needed. At least now I know. Thanks.
@@giorgipaikidze85 I see. You might want to check whether it is something Nx-related. I don't know, maybe you use publishable libraries which still run on webpack... I'm not sure what Nx would do in that case.
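For reference, the builder swap the Angular documentation describes is a one-line change in angular.json. The fragment below is a minimal sketch: "my-app" is a placeholder project name, and all other build options (which stay as they were) are omitted.

```json
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "builder": "@angular-devkit/build-angular:browser-esbuild"
        }
      }
    }
  }
}
```

With this builder in place, standard lazy-loaded routes using dynamic `import()` are expected to work, as Rainer describes above; errors beyond that point toward tooling on top of Angular (such as Nx executors) rather than esbuild itself.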
Great tutorial, thanks!
You are welcome!
Excellent video
Thanks a lot John.
Another great video!!
Thanks
What is the color scheme/font family?
Hi, I am always using the font ASAP.
Great explanations
Thank you Paul. Signals look easy at first sight; it is a little bit more complicated with glitch-free execution, but once you understand that, it is straightforward.