Alexandre Sajus
Joined Sep 2, 2013
Creating a Web App using only Python with Taipy
In this tutorial, we create a Sales Dashboard web application using only Python. We use Taipy, an open-source Python library, to create interactive visual elements, charts, and multiple pages. A minimal example is sketched after the chapter list below.
GitHub link: github.com/AlexandreSajus/taipy-course
Chapters:
0:00:00 Intro
0:05:03 Getting Started
0:19:42 Visual Elements
0:33:45 Styling
1:06:45 Charts
1:13:26 Multiple Pages
1:27:55 Authentication
1:29:49 Deployment
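To give a taste of what the course covers, here is a minimal sketch of a Taipy page (a hedged illustration: the variable name sales_target and the slider range are made up, not taken from the course repo):

```python
from taipy.gui import Gui

# State variable bound to the page below
sales_target = 100

# Taipy pages are written in augmented Markdown:
# <|...|> blocks declare visual elements bound to Python variables
page = """
# Sales Dashboard

Target: <|{sales_target}|slider|min=0|max=500|>

Current target: <|{sales_target}|text|>
"""

Gui(page).run()
```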
Views: 2,163
Videos
DCS Player goes Dogfighting in Real Life
27K views · 8 months ago
Massive thanks to @skyCombatAce for this insane experience! Check out the BFM Guides from @AIRWARFAREGROUP here: th-cam.com/play/PLroS5xjXW90smqJPDcIlPvXYUKxLM94bm.html&si=S4A9SO4Ro8Lnw7E5
Chapters:
0:00 Intro
1:17 Disclaimer
1:47 Briefing
2:58 Fight 1
4:29 Debrief 1
5:30 Fight 2
7:12 Debrief 2
8:07 Fight 3
9:06 Debrief 3
9:44 Fight 4
10:49 Debrief
11:28 Key Takeaways
14:14 Thanks
Creating JARVIS - Python Voice Virtual Assistant (ChatGPT, ElevenLabs, Deepgram, Taipy)
24K views · 1 year ago
Check out the GitHub repository here: github.com/AlexandreSajus/JARVIS
Chapters:
0:00 Talking to JARVIS
0:58 Intro
1:52 How JARVIS works
3:12 How to setup JARVIS
4:05 Getting API keys
5:05 Installing JARVIS
6:49 Running JARVIS
7:44 Talking to JARVIS
9:18 How to mod JARVIS for your use case
10:45 Recording audio using Pyaudio
12:25 Transcribing to text using Deepgram
12:45 Sending prompts to OpenAI GPT
13...
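The chapter list mentions recording audio with PyAudio before transcribing it with Deepgram. A minimal sketch of that recording step, assuming a 16 kHz mono microphone (RATE, CHUNK, and the output filename are illustrative, not the repo's actual values):

```python
import wave
import pyaudio

RATE, CHUNK, SECONDS = 16000, 1024, 5  # illustrative settings

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1,
                rate=RATE, input=True, frames_per_buffer=CHUNK)

# Capture a few seconds of microphone audio
frames = [stream.read(CHUNK) for _ in range(RATE // CHUNK * SECONDS)]
stream.stop_stream()
stream.close()
p.terminate()

# Save as WAV, ready to send to a transcription API such as Deepgram
with wave.open("prompt.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(pyaudio.get_sample_size(pyaudio.paInt16))
    f.setframerate(RATE)
    f.writeframes(b"".join(frames))
```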
Training an AI for WIPEOUT (MLAgents Unity Reinforcement Learning)
2.4K views · 1 year ago
Thanks a lot to @TwoMinutePapers for giving me this idea three years ago and inspiring me to join, study, and work in the AI field today!
Chapters:
0:00 Intro
0:30 Reinforcement Learning
1:43 Training
3:34 Results
Detecting Military Vehicles using AI (ARMA 3 YOLOv5 Image Segmentation)
11K views · 1 year ago
Fine-tuning YOLOv5 to detect military vehicles in aerial imagery from ARMA 3.
GitHub Repo: github.com/AlexandreSajus/Military-Vehicles-Image-Recognition
Cool Annotation Tool: www.makesense.ai/
Chapters:
0:00 Intro
0:29 Computer Vision
1:59 Labelling Strategy
3:15 Results
3:37 Conclusion
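For reference, this is roughly what running a fine-tuned YOLOv5 model looks like via torch.hub (the weights file best.pt and the image name are placeholders, not files from the repo):

```python
import torch

# Load custom fine-tuned YOLOv5 weights (path is a placeholder)
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Run detection on an aerial screenshot
results = model("arma3_screenshot.jpg")

# Inspect bounding boxes, confidences, and class labels
print(results.pandas().xyxy[0])
results.save()  # writes annotated images to runs/detect/
```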
Creating an Ecosystem in Unity (Unity 3D Prey Predator System)
3.9K views · 1 year ago
A simple ecosystem with lions, chickens, and grass in Unity.
GitHub Repo: github.com/AlexandreSajus/Unity-Ecosystem
Chapters:
0:00 Intro
0:22 The Lion
1:06 The Chicken
1:23 Natural Selection
1:50 Results
2:33 Conclusion
Controlling Drones with AI (Python Reinforcement Learning Quadcopter)
34K views · 1 year ago
Teaching a Reinforcement Learning agent to pilot a quadcopter and navigate waypoints using careful environment shaping.
GitHub Repo: github.com/AlexandreSajus/Quadcopter-AI
Chapters:
0:00 Intro
0:22 Physics
1:08 Control Theory
2:04 Reinforcement Learning
3:45 Training
4:13 Results
4:46 Conclusion
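To illustrate the environment-shaping idea, here is a sketch of a Gym-style environment with a distance-based shaped reward, trained with Stable-Baselines3 PPO (the repo may use a different setup; QuadcopterEnv and its placeholder dynamics are hypothetical, and the classic Gym API is assumed):

```python
import gym
import numpy as np
from stable_baselines3 import PPO

class QuadcopterEnv(gym.Env):
    """Hypothetical waypoint-navigation environment with placeholder dynamics."""
    def __init__(self):
        # Thrust commands for two motors
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        # e.g. position, velocity, angle, distance to waypoint
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)

    def reset(self):
        # Start at a random offset from the waypoint at the origin
        self.state = np.zeros(6, dtype=np.float32)
        self.state[:2] = np.random.uniform(-1, 1, size=2)
        return self.state

    def step(self, action):
        # Placeholder dynamics: actions nudge the position directly
        self.state[:2] += 0.05 * action
        dist = float(np.linalg.norm(self.state[:2]))
        reward = -dist        # shaped reward: closer to the waypoint is better
        done = dist < 0.05    # waypoint reached
        return self.state, reward, done, {}

model = PPO("MlpPolicy", QuadcopterEnv(), verbose=1)
model.learn(total_timesteps=100_000)
```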
Making Water in Unity (Unity 2D SPH Fluid Simulation)
20K views · 1 year ago
Making a 2D fluid simulation in Unity with barely any knowledge of C# or fluid physics.
GitHub Repo: github.com/AlexandreSajus/Unity-Fluid-Simulation
Python Version: github.com/AlexandreSajus/Python-Fluid-Simulation
Brandon Pelfrey's Blog: web.archive.org/web/20090722233436/blog.brandonpelfrey.com/?p=303
Code Monkey's Liquid Shader Tutorial: th-cam.com/video/_8v4DRhHu2g/w-d-xo.html
Chapters:
0:00 Intro
0:3...
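The SPH method in Brandon Pelfrey's post boils down to summing a smoothing kernel over nearby particles to estimate density. A tiny Python sketch of that core step (the poly6-style kernel and the constants are illustrative, not the repo's exact code):

```python
import numpy as np

H = 16.0    # smoothing radius (illustrative)
MASS = 1.0  # particle mass

def densities(positions):
    """Estimate SPH density at each particle with a poly6-style kernel."""
    rho = np.zeros(len(positions))
    for i, pi in enumerate(positions):
        for pj in positions:
            r2 = np.sum((pi - pj) ** 2)
            if r2 < H * H:
                # poly6 kernel, up to a normalization constant
                rho[i] += MASS * (H * H - r2) ** 3
    return rho

# Example: a small grid of particles
pts = np.array([[x, y] for x in range(0, 40, 8) for y in range(0, 40, 8)], dtype=float)
print(densities(pts))
```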
0:30 - 2:20 uhhhhh....
Thank you for the video!
@alexandresajus Thanks for the video, I've created my first web app. Now I'm trying to create an animated chart for the web app. I managed to create one using a Python package called bar_chart_race. Is it possible to use Taipy to render animated charts in a web browser, or is there a workaround? bar_chart_race can return a Matplotlib animation as an HTML5 string (IPython.core.display.HTML in Jupyter Notebook) or save it as .mp4, .gif, .html, etc. A Plotly animation can also be returned as a plotly.graph_objs._figure.Figure or saved as .html
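For anyone with the same question: one possible workaround is to build the animation as a Plotly figure and hand it to Taipy's chart control through its figure property (a sketch assuming your Taipy version supports figure; the data here is made up):

```python
import plotly.express as px
from taipy.gui import Gui

# Made-up data for an animated bar chart over three years
data = {
    "year":  [2021, 2021, 2022, 2022, 2023, 2023],
    "item":  ["A", "B", "A", "B", "A", "B"],
    "sales": [10, 20, 15, 25, 30, 22],
}
fig = px.bar(data, x="item", y="sales", animation_frame="year")

# Pass the Plotly figure directly to Taipy's chart control
page = "<|chart|figure={fig}|>"
Gui(page).run()
```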
So awesome! Please make more tutorials. You explain reinforcement learning the best.
@@yuvrajkukreja9727 Thank you very much!
Great idea! Could you please provide the image dataset and the original video? I am working on a project titled *'Detection of Enemy Military Objects in Images and Videos Using YOLO'* and would greatly appreciate your help. 🙏
Thanks! I don't have the video recordings, but I saved the image dataset I created here: www.kaggle.com/datasets/alexandresajus/arma3cvdataset Good luck on your project! I would love to see it once you finish it!
@@alexandresajus Thank you, brother, for your help. I'll make sure to share the latest updates with you 🫡
I have been trying to build a dependent slider for a couple of days but haven't been successful. I intend to make an adjusted-price card visual that updates dynamically through a percent slider. I would be grateful if you could make a video on this, or maybe direct me to good documentation.
What exactly is the issue when you try to do this? Could you share the code you are working with? Feel free to share your issue on GitHub or Discord and we will get back to you: github.com/Avaiga/taipy/issues/new/choose discord.com/invite/SJyz2VJGxV
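In the meantime, a minimal sketch of one way to wire a percent slider to an adjusted price in Taipy (base_price, percent, and adjusted are illustrative names, not from an official example):

```python
from taipy.gui import Gui

base_price = 100.0  # hypothetical base price
percent = 0         # slider value in percent
adjusted = base_price

# Taipy calls this whenever a bound variable changes
def on_change(state, var_name, var_value):
    if var_name == "percent":
        state.adjusted = state.base_price * (1 + var_value / 100)

page = """
# Adjusted Price
<|{percent}|slider|min=-50|max=50|>
Adjusted price: <|{adjusted}|text|format=%.2f|>
"""

Gui(page).run()
```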
Thank you for this video; I will practice this on my own to fully grasp my way around Taipy. I also hope you will make more videos like this, especially with Taipy, because I want to explore how I can leverage it for other business aspects as well.
Sir, Streamlit vs. Taipy: which is more detailed? By the way, the explanation is good.
Hey! Thanks! I'm biased since I work at Taipy. I like Streamlit; it is suitable for prototyping, has a big ecosystem, so you'll find many more user-created resources (apps, documentation, widgets), and is easy to approach. The main issue with Streamlit is performance: large datasets and heavy computations freeze Streamlit apps. This issue prevents Streamlit from scaling beyond a simple single-page chart dashboard. Taipy will always be better at managing performance: we only run the necessary computations through our callback system, we run the front end and back end on separate threads, and we can offload heavy computations to other threads. It might be easier to start with Streamlit, but Taipy will take you further.
I really love your work, brother. Keep up the good work!!
Thank you! That means a lot to me
It takes too much time to answer… it doesn't work. Also, it uses ElevenLabs, which is very expensive.
I used NEAT; it's only half reinforcement learning, or even none, but it just outsmarted me: it realised that staying near the balloon gives it more reward than taking it 😅
You can also simulate fake controls and it will work, because the AI doesn't really know what physics is.
When running my main.py file, it shows 'insufficient quota'. Please help.
What's the song name in the intro?
6LACK - Prblms (it's different X Kivnon Remix)
@@alexandresajus love u man ♥, 🔥 video ;)
@Growling Sidewinder We need to see you doing this man ....... Like baddddd
Is it possible to use Ollama as opposed to OpenAI?
Ok Bye!
12PM: Jarvis, what time is it? Tomorrow: It is 7AM. jk jk, this is a fantastic tool!
It's all about energy conservation.
Wow, nice!! I noticed the walk is much better than almost any other I could find. Are you using imitation learning for that, or pure reward and punishment? If it's pure reinforcement learning, I wonder how you achieved such a great walking motion. Great vid!!!
Thanks! It's just reward and punishment here. The walk comes as default when you use the Walker agent from MLAgents in Unity, so it is a good starting point.
@@alexandresajus I see.. Thanks for the info :). Love seeing things like this :D
Great work :D
Whenever I see videos like these, I clone the repos and I am never, ever able to successfully install all the dependencies from requirements.txt. It makes me want to give up writing code altogether.
I am stuck at git clone
I have always had a question about MLAgents: agents randomly select actions at the beginning of training. Can we incorporate human intervention into the training process to make them train faster? Is there a corresponding method in MLAgents? Looking forward to your answer.
Excellent question. What is commonly done to choose human actions instead of random ones at the beginning of training is called "Imitation learning." MLAgents does provide documentation on imitation learning, but I have never explored it, and it is probably complex to implement: github.com/gzrjzcx/ML-agents/blob/master/docs/Training-Imitation-Learning.md
@@alexandresajus Thank you very much for your answer. I have looked at the link you sent and found that it is for an old version of MLAgents, which differs in multiple settings. For example, the new version does not have the Brain or the Academy's Broadcast Hub. So, what should we do in the new version? Thank you for your answer!
@@keyhaven8151 To be honest, I don’t know since I never tried imitation learning. Try to look up « imitation learning mlagents » online, I’m sure there are tutorials. Or use the older version of MLAgents
@@alexandresajus Thank you very much for your answer. I will try to find a solution! thank you!
Good idea, but ElevenLabs is too expensive; the price is more than horrible for live TTS… better to use the built-in OpenAI TTS. You can also use the OpenAI API for Whisper, Assistant GPT, and TTS… all easily. Quick, cheap, and easy.
Will that generate costs through the API, or is it free?
Hey, absolutely great project! Can you share the dataset too?
Sure! It's available here: www.kaggle.com/datasets/alexandresajus/arma3cvdataset
Does no one else get the same code on Deepgram? You and I don't have the same code.
Great video, really glad to see you getting up there, and the debrief! I've thought about this for years; it's still on my bucket list.
Hello, I have a problem: when I try to run main.py, it shows me 'No module named deepgram'.
How'd those G's feel? I'd love to experience G-LOC, but I have bad vision and don't make enough.
Unfortunately (or fortunately 😅) no G-LOC; we pulled a max of 5 to 8 Gs, but I think we did not sustain them long enough to experience G-LOC or even tunnel vision. The main effect was task saturation: I just could not think about anything apart from looking at the bandit and the flight controls.
Now I'm wondering how WW1 and WW2 pilots pulled it off, despite being in planes that relied only on guns and prayers.
That's a valuable insight, thank you! I think if you were flying more PvP in DCS, you would have done slightly better, but still, both the physical and the mental aspects of the real fight make a difference.
Yes, I think so as well. I have started doing some 1v1 PvP dogfights with more experienced people in DCS. In hindsight, my understanding of one-circle, two-circle, turn radius, and turn performance was lacking. I wish I had gone into this experience with a better understanding of fight flows.
In your defense, you're probably used to flying the DCS F/A-18C with the JHMCS on, enabling you to always keep your eyes trained on the bandit while also providing your speed, altitude, and other instrument data. Your muscle memory took over and you did what you know. It's important to check your upfront instrument panels when you don't have a JHMCS 😊 Would love to try this myself!! Great job!
Yes, that was definitely part of the issue. Being able to stay tally while having both speed and alt in the corner of your eye dramatically helps with energy management. I noticed the difference when going back into DCS
I wanna see Operator Drewski do this.
no attacks from above?! wth?
So no attacks from above the 3/9 line, which means no face-to-face attacks. That's understandable, because there is a risk of collision if we do that.
Hi, you have AirCombatExperience not far from Bordeaux; just as cool and way cheaper.
Not bad, I didn't know about it, thanks for sharing! Plus, I was in Bordeaux last week.
looks really fun
"I have no idea how this is legal" god bless America, i am so glad it is
GOD BLESS OUR TROOPS, GOD BLESS AMERICA 🦅🦅🦅🦅🦅🦅
When I run python main.py, I get this error:
Traceback (most recent call last):
  File "E:\JARVIS_TEST\JARVIS\main.py", line 15, in <module>
    from record import speech_to_text
  File "E:\JARVIS_TEST\JARVIS\record.py", line 8, in <module>
    from rhasspysilence import WebRtcVadRecorder, VoiceCommand, VoiceCommandResult
ModuleNotFoundError: No module named 'rhasspysilence'
Check this issue: github.com/AlexandreSajus/JARVIS/issues/4. Also try creating a new, clean virtual env before installing the requirements. Check that there are no errors during installation, that you are running main.py from that env, and that rhasspysilence shows up in pip list.
Hope you guys won't collide with each other, because that tends to happen sometimes in DCS.
What exactly did you purchase on the OpenAI API for it not to return "exceeded current quota"? I paid for the ChatGPT "hobbyist" plan and thought that would help, but no, I wasted $20. And you should definitely start a Discord. Good stuff.
Ah, I see. You're not supposed to pay for a ChatGPT subscription. OpenAI has a website for their API where you just have to enter billing details and maybe add a dollar of credit to use. They charge per request, not on a subscription basis. It should be on the same site where you got your API key.
@@alexandresajus AH, MY HERO, SO FAST. So I just add some money to my account and boom, it works?
Aerobatic planes are incredible machines, possibly the most maneuverable machines on earth; using them for IRL dogfights is a neat idea.
Mr moneybags
DAMNNNNNN ALEX! I literally had no idea this type of thing was legal. I definitely want to give this a try man
It's incredible, bro, I absolutely have to try it one of these days!
How do you know if shots hit or not?
Here, the instructors validated shots visually. If we were in the opponent’s control zone with the opponent on our nose, it was considered a kill
SBMM is crazy bruh
Sweet! I did PPL training after 20 years of sims, and yeah... I thought I knew "task saturation", but when your whole primate body is spamming events it's another thing.