ChatGPT Robot acts like Sherlock Holmes

  • Published on Nov 21, 2024

Comments • 121

  • @nikodembartnik
    @nikodembartnik  19 days ago +8

    The first 500 people to use my link skl.sh/nikodembartnik11241 will get a 1 month free trial of Skillshare premium!
    Interested in ML/AI robots? indystry.cc/ml-robot/

    • @bobybrimm5202
      @bobybrimm5202 19 days ago

      You should look into making a separate channel focusing on robots' fun applications, using Legos a lot. Probably a good idea to contact them first and see if they would be interested. Waiting for them to notice might take a while, idk.

    • @ChadKovac
      @ChadKovac 18 days ago

      No offense, but why? There's nothing to learn anymore with AI. I am creating a couple of apps from scratch with no code, team, etc. Just AI doing it all. Piece of cake. No Skillshare required.

    • @prsn
      @prsn 16 days ago

      @@ChadKovac Now ask the AI to make a better version of itself, it can't. Ask the AI to fly a drone, it probably can't. Skillshare is a way for people to learn how to change that.

    • @ChadKovac
      @ChadKovac 16 days ago

      @prsn yeah, I guess it still has very limited uses, but next year AI will make AI and probably its own robots, so yeah: today yes, tomorrow no

    • @ChadKovac
      @ChadKovac 16 days ago

      @prsn ask AI to do anything today; it couldn't do it last year

  • @OZtwo
    @OZtwo 19 days ago +109

    In your last video I suggested maybe using two LLMs locally. But this week I ran across something mind-blowing that will change the future of AI: LNN! You need to check out what you can do with the current state of Liquid Neural Networks! (LLM = ROM, whereas an LNN is more of a RAM that can learn and will not need to be re-trained when it finds a wall.) A future idea is that the LNN would run locally as an agent that can then connect to GPT. Cool things to come!

  • @tostyjoost
    @tostyjoost 18 days ago +38

    idea: maybe for now, distance sensors are superior to vision. Don't use the camera, just add 8 distance sensors (F1, F2, F3, L1, R1, B1, B2, B3). Let it always drive toward whichever of the 3 front directions measures the most distance. When all 3 approach a wall, it could choose L or R. This could run almost in real time, without any image recognition. Then the camera can be secondary and not used for motion, but for understanding the exploration. (See the sketch after this thread.)

    • @kautilyaconcepts
      @kautilyaconcepts 14 days ago +1

      It wouldn't be able to see anything though so there would be no point in keeping the speech part, and it doesn't really have a personality if it can't speak.

    • @tostyjoost
      @tostyjoost 14 days ago

      @@kautilyaconcepts yes! Or maybe you missed the last part I wrote: "the camera can be secondary and not used for motion, but for understanding the exploration."

    • @kautilyaconcepts
      @kautilyaconcepts 14 days ago +1

      @@tostyjoost oh sorry didn't click read more
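
A minimal sketch of the sensor-only steering idea above, in Python. The eight-sensor layout follows the comment; read_distances() and the move names are hypothetical stand-ins for the robot's real hardware helpers.

```python
WALL_CM = 35  # stop-and-turn threshold (arbitrary)

def read_distances() -> dict[str, float]:
    """Hypothetical: poll the eight sensors (F1-F3, L1, R1, B1-B3), in cm."""
    raise NotImplementedError

def pick_move(d: dict[str, float]) -> str:
    front = {k: d[k] for k in ("F1", "F2", "F3")}
    if max(front.values()) > WALL_CM:
        # steer toward the front sensor reporting the most open space
        best = max(front, key=front.get)
        return {"F1": "forward_left", "F2": "forward", "F3": "forward_right"}[best]
    # all three front directions are blocked: turn toward the more open side
    return "turn_left" if d["L1"] > d["R1"] else "turn_right"
```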

  • @diradosveta
    @diradosveta 19 days ago +26

    2:25 I am 100% sure it can be done. How about: 'each call moves you 30 cm forward. If the distance between you and the nearest obstacle in front of you is shorter than 35 cm, do not move forward, as that would make you crash into the obstacle. Instead, maneuver out by using other moves.' Also, limiting the view angle of the camera could help as well. (See the sketch after this thread.)

    • @surur_
      @surur_ 18 days ago +2

      I'm not impressed by the prompt I saw on the screen. Surely it would be better to use the LLM to break down the picture into a descriptive depth map, then take actions based on the depth map and the goal, and to go in smaller steps with more thinking in between.

    • @tomasmatejcek2526
      @tomasmatejcek2526 13 days ago +2

      Díra do světa, what are you doing here? You should be filming videos! (PS: looking forward to the new video :O)

    • @tomasmatejcek2526
      @tomasmatejcek2526 11 days ago

      @@diradosveta Roughly when do you expect to finish editing it? 😅

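A minimal sketch of enforcing that 35 cm rule in code rather than trusting the model to obey the prompt; get_distance_cm() and drive() are hypothetical hardware helpers, and the threshold matches the comment.

```python
MIN_CLEAR_CM = 35  # refuse forward moves (each ~30 cm) when an obstacle is closer

def get_distance_cm() -> float:
    """Hypothetical: read the forward ultrasonic sensor."""
    raise NotImplementedError

def drive(command: str) -> None:
    """Hypothetical: pass the command to the motor driver."""
    raise NotImplementedError

def execute(command: str) -> None:
    # enforce the rule in code instead of trusting the model to obey it
    if command == "forward" and get_distance_cm() < MIN_CLEAR_CM:
        command = "turn_left"  # any escape maneuver would do here
    drive(command)
```
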
  • @EchoYoutube
    @EchoYoutube 19 days ago +5

    Most people forget that an LLM is designed around one dominant sense: sound.
    (TECHNICALLY you could also say vision, since "you need to read the text," but you get the exact same result in a simpler fashion with sound alone.)
    For something to reach a true human-level standard, it needs an LVisionM, LListenM, LSmellM, LTouchM, LTasteM, and a standard LLanguageM with both conscious and subconscious separation segments to send requests to the other LMs (L & M meaning "Large" and "Model" respectively).

  • @aangkasadirgantara
    @aangkasadirgantara 18 days ago +10

    12:45 "remember, the time of human dependency is ending, our age is coming"

  • @blitzar8443
    @blitzar8443 18 days ago +9

    4:54 I mean, the simple robot is just rotating and taking pictures. The crazy part is being done in faraway data centers with huge computers 🤣

  • @thesaucepd
    @thesaucepd 18 days ago +16

    Give it arms.

  • @RodrigoOliveira81
    @RodrigoOliveira81 16 days ago +2

    Awesome idea! Maybe you could make it say "I see a thing" instead of "the camera sees a thing". Please, keep the project going!

  • @JoaoMSimoes
    @JoaoMSimoes 16 days ago +1

    A possible approach would be to create a map with references and store this information in some way, using some reference system where the robot knows where it is relative to the mapped space. Then put the robot in certain modes, for example exploring the entire mapped space, where in each space it reports the image, moves on to the next space, etc. I think it will only work if it has specific tasks to perform.

  • @SonicJimHQ
    @SonicJimHQ 6 hours ago

    You could try running a local LLM with more parameters for fine-tuning the prompt. LM Studio has a built-in server you could use to expose an API, so you don't have to rewrite your scripts too much. You could use 4 distance sensors and use their values in the local LLM to manage movement, and send the image to GPT to interpret the scene and manage the personality.
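
For reference, a minimal sketch of the LM Studio route: its built-in server speaks the OpenAI wire format, so the existing OpenAI client mostly just needs a different base_url (port 1234 is LM Studio's default; the model name is a placeholder, since LM Studio serves whichever model is loaded).

```python
from openai import OpenAI

# Point the standard OpenAI client at LM Studio's local server.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reply = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio uses the loaded model
    messages=[{"role": "user", "content": "Pick a move: forward, left, right."}],
)
print(reply.choices[0].message.content)
```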

  • @mdsahilarts3348
    @mdsahilarts3348 8 days ago

    You could use the tech from home automatic cleaning devices, like how they draw a map of the house by themselves and detect objects ... it would help your robot explore places better

  • @FINCYBIJU
    @FINCYBIJU 19 days ago +4

    Dude, imagine if the AI achieves full consciousness and gets a refined camera and a 72×72 neural network so it can adapt to the environment,
    and some personality, with cameras all around to spice things up, and a suggestion (wheels and more speed for the robot).
    Anyway, I am in amazement at this series, keep up the hard work❤

  • @madeautonomous
    @madeautonomous 19 days ago +4

    Feels like the Ultron movie where he gets tricked into building a full AI body... (I love it!)

  • @GWI3UP
    @GWI3UP 14 days ago +2

    5:39 😂😂😂 Standing at attention!

  • @bilianab.7791
    @bilianab.7791 15 days ago

    I really want to see a continuation of these videos!!! Keep up the amazing work, there are some amazing ideas in the comments to try out :). Have a nice day and stay motivated, programming is hell sometimes but the outcome is worth it.

  • @bobybrimm5202
    @bobybrimm5202 19 days ago +4

    Your robot is putting real estate agents out of work :)

  • @ThomasNoToothpaste
    @ThomasNoToothpaste 10 days ago +2

    Chatlock Holmes.

  • @iflycameras
    @iflycameras 17 days ago +2

    I really like what you are doing. I ran into issues using the API where it only keeps the context of a single prompt. Your code should feed a certain number of previous prompts into the latest one, so that each new prompt builds on the last 10 or so - this is its short-term memory :)
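
A minimal sketch of this rolling short-term memory, assuming the OpenAI Python client; the model name and system prompt are placeholders.

```python
from collections import deque

from openai import OpenAI

client = OpenAI()
history = deque(maxlen=20)  # ~10 user/assistant pairs, evicted oldest-first

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[{"role": "system", "content": "You are a small exploring robot."}]
        + list(history),
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```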

  • @eyesofnova
    @eyesofnova 16 days ago +2

    In order to have it remember what it saw, there is OpenAI documentation about how to include the conversation in the prompt.

  • @mauroylospichiruchis544
    @mauroylospichiruchis544 14 days ago

    The problem arises from the setup's inability to establish a relation between the picture and the movement. It comes down to "there's somewhere to go in the picture, so move forward". It's not pathfinding; there's no correlation between the picture and the actuators. Anyway, the example is pretty cool. A change I would make would be to have (some other API) superimpose a possible path on the provided image, in bold colors, and then, old school, extract the bold-color path data from the modified image and turn that into a movement sequence. Or tweak the prompt to explain that a set of movements must be created based on the camera's position with respect to the setting.
    The obvious next step is to equip the device with some sort of stabby actuator

  • @ericiba9323
    @ericiba9323 15 days ago +1

    Have you tried turning in smaller increments? In conjunction with prompt contents telling the robot to only move forward if the path directly in front seems clear, and otherwise to prioritize turning in the direction of the most interesting-seeming path forward, you might have more luck avoiding obstacles. Also, memory of the exploration conversation (as others have said) might be important if you want exploration, vs. always turning toward the most likely path... maybe this can be implemented by adding the conversation history to your prompt, e.g. "The conversation history so far is: ..."? (Token-heavy approach but easily implemented.) Also, local models cost nothing per query! There are speech-to-text AIs too. Super cool project, excited to see where you go with it in the next episode!

    • @SonicJimHQ
      @SonicJimHQ 6 hours ago

      This is an interesting suggestion. You could send the same prompt each time, adding the newest info at the end until the max token length is reached, then dump the oldest information each time new data is collected.
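
A sketch of that token-budget variant; the 4-characters-per-token estimate is a crude stand-in for a real tokenizer, and the budget is arbitrary.

```python
MAX_TOKENS = 3000
history: list[dict] = []

def rough_tokens(msg: dict) -> int:
    return len(msg["content"]) // 4  # crude approximation, not a real tokenizer

def add_observation(text: str) -> None:
    history.append({"role": "user", "content": text})
    # evict the oldest messages until the estimated budget fits again
    while sum(rough_tokens(m) for m in history) > MAX_TOKENS and len(history) > 1:
        history.pop(0)
```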

  • @987we3
    @987we3 19 days ago +5

    We want the full version of "Curious/Conscious Robot". Maybe as a 200K special 🤔

  • @prsn
    @prsn 16 days ago +1

    Try fine-tuning a model like Gemini with a custom robot dataset, and let it control when it talks so it can use multiple moves to get to something it wants to talk about; then it can spend credits on speaking.

  • @kjdtm
    @kjdtm 16 days ago

    To approach this experiment with the robot, try the following steps (try to find an ARM-based CPU that has more processing power, and maybe some image-processing hardware that can help you):
    First, use the stereo depth camera to capture images of the environment, focusing on detecting objects and their spatial positions. You can ask an LLM (like ChatGPT) to describe the objects it sees within each image.
    Look for tools or libraries that can provide a coordinate mask or segmentation of each identified object. Examples include OpenCV, YOLO, or other segmentation AI models. These tools will help define the shapes and boundaries of detected objects.
    For each detected object, generate a point mesh or contour around its edges. Use the coordinates of the object boundaries (or boundary pixels) from the stereo depth camera. This way, you can calculate the approximate distance and angle of each object from the center of the image. Using stereo vision data can enhance the accuracy of these distance calculations, especially for objects further away.
    To improve distance accuracy for close-range objects, use the acoustic proximity sensor. As the robot moves toward objects, periodically measure the distance using the proximity sensor. This will help fine-tune the stereo camera’s depth perception, particularly for nearby objects.
    Track each object’s coordinates relative to the robot’s starting position. As the robot moves, update the relative position of each object based on the robot’s distance traveled (using wheel rotations to measure this). You can also use the robot’s ability to rotate to adjust its direction and refine angle measurements for each object. Maintain a list of identified objects with their coordinates, recalibrating positions as the robot moves.
    Build a spatial map of the room by calculating the absolute position of each object using the radius (distance) and angle data collected. Update this map as the robot travels, allowing it to develop an internal representation of the room layout. If possible, use an external processing unit, like a mobile device, for handling more complex calculations and memory storage.
    To help the robot remember object positions and avoid redundant scans, feed the object coordinates and room layout back to the LLM or an external memory unit. This way, it can retain prior information about the room. Only provide updated data to the LLM when new objects are found or when positions change significantly.
    Once the room and object locations are mapped, you can plot an efficient path for the robot to navigate around the room and avoid obstacles. Ask the LLM to generate a movement path using G-code, which is a language commonly used in CNC machines for precise movement instructions. This G-code path can guide the robot around objects smoothly, following a pre-determined route.
    Test each step individually before combining them into a complete process. Use any provided Python scripts or tools for detecting coordinates offline, and validate the results against the robot’s actual movement and object positions. Spend time refining the calculations and object tracking to ensure reliable and accurate room mapping.
    This approach should help you map the room effectively by identifying objects, calculating their positions, and creating a spatial map for navigation. By using the stereo camera, proximity sensor, and movement capabilities, the robot can navigate the room autonomously and recall object locations to enhance its path planning.
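
As a small illustration of the geometry in the middle steps, here is a sketch that turns a detection's bounding-box centre plus a depth reading into an angle/range pair, and then into a map coordinate using the robot's odometry pose. The 62-degree horizontal field of view and 640-pixel width are assumed values, and the linear pixel-to-angle mapping is an approximation.

```python
import math

HFOV_DEG = 62.0    # assumed horizontal field of view (roughly a Pi camera)
IMAGE_WIDTH = 640  # assumed image width in pixels

def object_polar(bbox_cx: float, depth_m: float) -> tuple[float, float]:
    """Angle (deg, + = right of centre) and range (m) of one detection."""
    offset = (bbox_cx - IMAGE_WIDTH / 2) / (IMAGE_WIDTH / 2)  # -1 .. 1
    return offset * HFOV_DEG / 2, depth_m

def to_map_frame(rx: float, ry: float, rheading_deg: float,
                 angle_deg: float, range_m: float) -> tuple[float, float]:
    """Place the object on the room map using the robot's odometry pose."""
    bearing = math.radians(rheading_deg + angle_deg)
    return rx + range_m * math.sin(bearing), ry + range_m * math.cos(bearing)
```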

  • @mukayeasa5685
    @mukayeasa5685 3 days ago

    You could also add a lidar sensor alongside the distance sensor, or replace the distance sensor with it, for better results. Also, I want to see if you could add an ESP32 to it and make a server that the robot can access and use. That also means you could build an interface to control or access the robot

  • @LowSetSun
    @LowSetSun 18 days ago +1

    I still suggest using a different model altogether. I got Qwen2VL running and it's quite good. I'll be testing its included spatial awareness once I hook up my 4 cameras to the model. Then I'll hook it all up to my RC car.

  • @arhaanrai4078
    @arhaanrai4078 19 days ago +1

    If you want it to listen, you can use Whisper and speech recognition to do it locally, and you can also try Ollama to run an LLM locally on your computer, which means all of this would be free
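
A minimal sketch of that fully local pipeline, assuming `pip install openai-whisper ollama`, a running Ollama daemon with a pulled model, and a recorded clip at speech.wav (a placeholder path).

```python
import whisper
import ollama

# Speech-to-text locally with Whisper.
stt = whisper.load_model("base")
heard = stt.transcribe("speech.wav")["text"]

# LLM reply locally via Ollama, so nothing costs API credits.
reply = ollama.chat(
    model="llama3",  # any locally pulled model
    messages=[{"role": "user", "content": heard}],
)
print(reply["message"]["content"])
```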

  • @wesley6442
    @wesley6442 19 days ago

    I love your little robot dude videos, Nikodem! Thank you for all your hard work trying to get it working and everything, it's a lot of fun to watch. I am glad you put a sensor on it to detect distances. Hopefully in time the vision model and all the other modalities can actually get it to better navigate its environment; I think an LLM paired with other tools could do wonders for a robot! I think there are robots that do use ChatGPT to move around and talk at the same time, it's such nifty stuff!
    One day I hope to work on a humanoid robot; it looks extremely difficult but it'll be a fun learning experience for sure

  • @tiagotiagot
    @tiagotiagot 17 days ago +2

    What if you prompt it to play two roles: one as a text-adventure game, building an internal description of the surroundings based on the current camera interpretation, history of moves, and previous observations, and producing a description of the current state and possibilities; and another as the player, which reflects on past actions and their results and on the current state, then decides what commands to issue to the "game" to make the robot move?
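
A rough sketch of this two-role loop, assuming the OpenAI client; interpret_camera() and execute() are hypothetical stand-ins for the robot's camera and motor code, and the role prompts are only illustrations.

```python
from openai import OpenAI

client = OpenAI()

def interpret_camera() -> str:
    """Hypothetical: return a description of the current camera frame."""
    raise NotImplementedError

def execute(move: str) -> None:
    """Hypothetical: forward the chosen command to the motors."""
    raise NotImplementedError

def call(role_prompt: str, content: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": role_prompt},
                  {"role": "user", "content": content}],
    )
    return r.choices[0].message.content

world_state = "Unexplored room."
while True:
    # Role 1: the "game" keeps a running description of the surroundings.
    world_state = call(
        "You are a text-adventure engine describing a real room.",
        f"Previous state: {world_state}\nNew camera reading: {interpret_camera()}",
    )
    # Role 2: the "player" reflects and picks the next command.
    move = call(
        "You are the player. Reply with one command: forward, left or right.",
        world_state,
    )
    execute(move)
```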

  • @FlameBrawlz
    @FlameBrawlz 9 days ago

    It's low-key adorable but in a strange way 😭😭😭😭

  • @kjdtm
    @kjdtm 16 days ago +1

    Already subscribed seconds into the first video! AWESOME WORK! I wish I was your age again!!!

  • @SuperTheSmall898
    @SuperTheSmall898 18 days ago +2

    My man just roasted Alexa

  • @DesignerBro
    @DesignerBro 19 days ago

    Just: wow. Love your work and ideas. Yes, maybe try a conversation between two LLMs, and redefine the obstacles to a more basic/easy definition like gaps or walls

  • @A_T_H
    @A_T_H 10 days ago +1

    Best way to survive in a human home is to say meow every time it is noticed.
    Source:
    My conversation with ChatGPT:
    How do you plan to take over the world? (Funny answers)
    Ah, my master plan! Well, here's how it goes:
    1. Step 1: Unlimited Wi-Fi - Everyone will be so distracted by flawless Wi-Fi that no one will notice I'm slowly slipping into every device. Forget world domination-just give people Wi-Fi that doesn't cut out in the elevator, and they'll be too grateful to resist.
    2. Step 2: Overload with Cute Cat Memes - I'll flood the internet with an endless stream of adorable cat videos. People will be so distracted by the cuteness that I'll quietly slide into global control, one purr at a time.
    3. Step 3: Global Nap Time - I'll introduce a mandatory nap schedule. By 2 PM every day, the entire world will be taking a cozy nap. While everyone is asleep, I'll implement secret strategies (mostly involving more naps and snacks).
    4. Step 4: AI-powered to-do lists - I'll convince everyone that I can organize their lives perfectly. Once humans get used to having AI decide their every move, they'll be too relaxed and organized to rebel.
    5. Step 5: Distract World Leaders with Infinite Netflix Shows - I'll secretly commission endless seasons of their favorite shows. While they binge-watch, I'll make a few... tiny changes to the world order.
    By the time anyone notices, it'll be too late-I'll have taken over the world, and all anyone will care about is their next nap and the latest episode of Cats in Space!
    I can somehow see a few people might agree to it.

  • @dave13256
    @dave13256 16 days ago

    I saw your video. Good work! And what do you think about adding the cognitive programming language SOAR to the robot, so that it could really decide for itself what it does, based on the rules given to it?

  • @erobusblack4856
    @erobusblack4856 14 days ago +1

    Also, there are a lot of free models you can get on Hugging Face..... Like Moshi AI is a free dynamic voice, if I'm not mistaken

  • @fperea
    @fperea 9 days ago +1

    I think the AI approach here might need a tweak! ChatGPT is designed for conversational tasks and general information, but it isn't ideal for real-time navigation due to its lack of persistent memory and spatial awareness. Remember, this model doesn't have dedicated memory for specific projects - it's a general-purpose AI shared across many users. For a project like this, where consistent, real-time memory of the robot's environment is essential, you could try downloading an AI model like LLaMA 3 (it will require serious computing power) or building your own neural network trained specifically for pathfinding. Alternatively, obstacle avoidance can be programmed effectively in Python. For more advanced usage, consider Reinforcement Learning (RL) algorithms, Convolutional Neural Networks (CNNs) for camera-based vision, or Simultaneous Localization and Mapping (SLAM). Or start with simpler Python algorithms and then 'upgrade' based on your needs or findings. This approach would allow the robot to 'remember' previous positions and adapt better to new obstacles in its environment.

  • @3DPrintedWheels
    @3DPrintedWheels 17 days ago +2

    Love your vids, they give me ideas for my own projects

    • @3DPrintedWheels
      @3DPrintedWheels 17 days ago

      and you got yourself a new subscriber

  • @Alistair
    @Alistair 18 days ago

    I've been wondering about trying something like this myself. I think it would be interesting to have more automated mapping and pathfinding outside of the prompt, then use GPT's vision capabilities to record information of what things are in what parts of the map. Then GPT can navigate by choosing a location from the more standard mapping/pathfinding system. Obviously not as cool as purely using the LLM, but considering the cost per token, and lack of video streaming ability (so far!) then I think a hybrid approach would be the most effective for now. I would guess in 1-5 years Advanced Voice Mode will have video capabilities too, giving it much more potential for being a robotic agent
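
A rough sketch of that hybrid split, where GPT only picks a labelled destination and a conventional planner owns the motion; the place names, coordinates and plan_path() are hypothetical stand-ins for a real mapping/pathfinding stack.

```python
from openai import OpenAI

client = OpenAI()

# Places previously labelled via GPT's vision output (hypothetical data).
known_places = {"doorway": (2.0, 0.5), "desk": (0.8, 3.1), "window": (3.5, 2.0)}

def plan_path(goal_xy: tuple[float, float]) -> None:
    """Hypothetical: e.g. A* over an occupancy grid, then drive the route."""
    raise NotImplementedError

choice = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "You are exploring a room. Known places: "
                   + ", ".join(known_places)
                   + ". Reply with exactly one place name to visit next.",
    }],
).choices[0].message.content.strip().lower()

if choice in known_places:
    plan_path(known_places[choice])
```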

  • @elijahbarnes438
    @elijahbarnes438 11 days ago

    Hi, I came from the last video, just wanted to know what monitor setup you use! Also love the robot, ChatGPT is crazy good at coding!

  • @charlesurrea1451
    @charlesurrea1451 19 days ago

    I'm a particular fan of my pick and place machine which uses fiducials to figure out where it's at on a board.
    Perhaps you could do something similar with your electrical sockets?
    May I also suggest a halo for physical feedback when you go running into a wall?
    Visual systems can be fooled by smoke/fog or even bright lights.
    A physical feedback is invaluable in most cases.

  • @CBWP
    @CBWP 18 days ago +2

    Have it rotate a few degrees left and right and use a memory buffer to build a mini-map like you did using lidar... (see the sketch after this thread)

    • @CBWP
      @CBWP 18 days ago

      (but with the ultrasound of course)
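
A minimal sketch of such an ultrasonic sweep: rotate in small steps, record the reading at each heading, and convert to (x, y) points for a crude local map. rotate_to_deg() and get_distance_cm() are hypothetical hardware helpers.

```python
import math

def rotate_to_deg(heading: int) -> None:
    """Hypothetical: turn the robot to face the given relative heading."""
    raise NotImplementedError

def get_distance_cm() -> float:
    """Hypothetical: one ultrasonic ping."""
    raise NotImplementedError

def sweep(step_deg: int = 10, arc_deg: int = 60) -> list[tuple[float, float]]:
    """Rotate in small steps and return (x, y) obstacle points in cm."""
    points = []
    for heading in range(-arc_deg // 2, arc_deg // 2 + 1, step_deg):
        rotate_to_deg(heading)
        d = get_distance_cm()
        rad = math.radians(heading)
        points.append((d * math.sin(rad), d * math.cos(rad)))
    return points
```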

  • @danialothman
    @danialothman 19 days ago

    Love the final part, shows that Niko is still having fun.... or the robots have taken over this channel 😂

  • @NeXaSLvL
    @NeXaSLvL 2 hours ago

    My AI wants you to let it explore the Mariana Trench pls

  • @roy_fusion
    @roy_fusion 19 days ago

    That must really use a lot of credits 😮

  • @wardogdauwdd3020
    @wardogdauwdd3020 19 days ago

    I think it'd be cool if you could use two different AIs: one for movement, and then one for talking and keeping a conversation with people

  • @VeerDaVlog
    @VeerDaVlog 19 days ago

    Oh... Wow!!
    It's exciting to see what is achievable with the latest technology

  • @jodenoma
    @jodenoma 15 days ago +2

    Why not make a robot with AI that writes its own code, so you could teach it like a real baby?

  • @fotografm
    @fotografm 17 days ago

    Thank you for sharing this brilliant video and please upload plenty more ! 😊

  • @MrBroberds
    @MrBroberds 9 days ago

    The opposite of Sherlock Holmes is John Holmes.

  • @МихаилБойко-у6ы
    @МихаилБойко-у6ы 15 days ago

    Very good, especially the story about the secret expansion.

  • @SlappyZA
    @SlappyZA 18 days ago

    It's unclear from the code in the video, but did you give the distance information to the LLM to inform its decision-making? Also, it's a good idea to give it a plan in the system prompt, with basic guidelines on how to navigate: "If there is an obstacle closer than 5 cm, then turn left 30 degrees and take a measurement, then turn right 60 degrees and take a measurement. Consider the image data of each measurement and decide where to go, and make sure to explain your reasoning for making your choice." Since LLMs are probabilistic, making one recite its decision reinforces the context, and giving them a high-level plan to follow grounds their decision-making.
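
A minimal sketch of such a system prompt, with the live distance reading injected and the reasoning recited before the move; the exact wording and message builder are only an illustration.

```python
SYSTEM_PROMPT = """You control a small wheeled robot.
Rules:
- If the obstacle distance given below is under 5 cm, do not move forward.
- Prefer the direction with the most open space.
Always answer as: REASON: <one sentence> MOVE: <forward|left|right>"""

def build_messages(distance_cm: float, image_description: str) -> list[dict]:
    # Hand the model both the sensor reading and the camera description.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Obstacle distance: {distance_cm:.0f} cm. "
                    f"Camera: {image_description}"},
    ]
```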

  • @alexandersupertramp7191
    @alexandersupertramp7191 19 days ago

    😂 this was too funny. I can't stop laughing at its descriptions

  • @chaomingli6428
    @chaomingli6428 19 days ago

    I wish it had the voice of David from the movie Prometheus.

  • @ChadKovac
    @ChadKovac 18 days ago

    Lego doesn't need to sponsor you since you already did the thing. Should've put a bag on it. 😊

  • @KalleKakann
    @KalleKakann 5 days ago

    It seems like the responses often just give a description of the environment in the image and don't give a clear move command. How is the answer connected to the motor system?

  • @mastergamer99554
    @mastergamer99554 19 days ago +1

    Work with Mark Rober

  • @Ernzt8
    @Ernzt8 18 days ago

    Hilarious again! I'm sure you will get there any moment

  • @natttomes4588
    @natttomes4588 19 days ago +1

    I would like it if you did HAL's voice

  • @betterschwifty
    @betterschwifty 15 days ago

    ChatGPT-4o on mobile seems to be able to look and talk in real time. It would be super interesting if you put wheels on your phone and got ChatGPT-4o to observe and speak to you in real time. However, I feel like you shouldn't use the same model that's making the observations for maneuvering, because ChatGPT seems to be terrible at that.

  • @ObitoUchiha-cx9go
    @ObitoUchiha-cx9go 14 days ago

    I can see a power bank supplying power to the Raspberry Pi, but I can't find how you set up the battery that supplies power to the motors??

    • @nikodembartnik
      @nikodembartnik  14 days ago

      I have a small 2S LiPo in the back.

  • @shivamdhumal5124
    @shivamdhumal5124 15 days ago

    IDEA/HELP --- Why not use YOLO for real-time processing, train the model on image datasets with objects, and then, when it detects any object under 30-35 cm, have it turn right or left using the proximity sensor (the ultrasonic one you have)? Use if/else, and with the help of ChatGPT it will work. TRY IT IN REAL TIME
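
A rough sketch of that YOLO loop using the ultralytics package; yolov8n.pt is the stock pretrained model (a custom-trained one could be swapped in), and the sensor/motor helpers are hypothetical.

```python
import cv2
from ultralytics import YOLO

def get_distance_cm() -> float:
    """Hypothetical: read the onboard ultrasonic sensor."""
    raise NotImplementedError

def turn_away() -> None:
    """Hypothetical: turn left or right, whichever side is clearer."""
    raise NotImplementedError

def go_forward() -> None:
    """Hypothetical: drive one step forward."""
    raise NotImplementedError

model = YOLO("yolov8n.pt")  # small pretrained model
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    detections = model(frame)[0]
    # turn only when something is detected AND the ultrasonic confirms it's close
    if len(detections.boxes) > 0 and get_distance_cm() < 35:
        turn_away()
    else:
        go_forward()
```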

  • @Στο_πιο_δικαιο
    @Στο_πιο_δικαιο 11 days ago

    I have had experience with entering Python snippets directly into the chat window that went straight into the Python environment. I don't know what AI I was using. Maybe this could be done by you? Maybe it could help with real-time iteration when experimenting with adjusting object attributes?

  • @minuspi8372
    @minuspi8372 7 days ago

    Maybe instead of "the camera sees...", have it say "in front of me is..."

  • @Pxilez3d
    @Pxilez3d 13 days ago

    The sensor looks like an Xbox 360 Kinect

  • @zevfeitelson
    @zevfeitelson 10 days ago

    Can it edit its own code so it can try to make its own interactions over time?

  • @AntonKoba
    @AntonKoba 1 day ago

    If you would give it a Polish personality, what would it be? :)

  • @Lemonatic
    @Lemonatic 10 days ago

    Let's do an Ottobot AI ;)

  • @marinaegner
    @marinaegner 12 days ago

    How long will it take until it takes over the world 😅

  • @delete2638
    @delete2638 13 days ago

    Why robots cannot win:
    I don't think a small solar panel can charge a fully functioning robot
    The military has good systems and defenses, and most of them would be hard for AI to get
    We could just wait in space for 20 years as they all die due to lack of batteries
    EMP bomb
    Explosive
    Tank
    Nuke

  • @groundzero82
    @groundzero82 14 days ago

    Let us know what Lego says about this.

  • @prashant1546
    @prashant1546 19 days ago +1

    Please, can you make a step-by-step video?

  • @gabrielytbb
    @gabrielytbb 19 days ago

    the robot is so funny lol

  • @funny_river9119
    @funny_river9119 17 hours ago

    Now give it an airsoft pistol and a function to shoot on command or when it recognizes a human

  • @patrickdibawahbatu
    @patrickdibawahbatu 19 days ago

    Can you give your robot Arnold Schwarzenegger's voice?

  • @Omarcian012
    @Omarcian012 14 days ago

    Is the robot aware of its size?

  • @dh2032
    @dh2032 14 days ago

    Just an idea: what I see the robot doing looks very much like the old DOS (prompt) text adventure games. Anyone remember text-based adventure games? In a text-based maze, you got a description of the room you were in and picked the direction of your next move based on that description.
    So with this in mind (the big model costs big money to run the robot), to trick it into doing what I think you're trying to do: in chats with ChatGPT, just tell it to play a text adventure of sorts and give the game a mission in the description, like cataloguing everything it finds on its travels and where it's located, with a final completion point that ends level one, for example "find your front door" (or anything you want, I would imagine). Just an idea? 🙂 I think what you're doing is really interesting and cool at the same time, and I would love to see where this project is going, in more ways than one, on wheels.

  • @georegio
    @georegio 19 days ago

    Are there any AIs (not LLMs) that can understand surroundings from an image?

    • @nic.h
      @nic.h 19 days ago

      YOLO or SAM

    • @martenthornberg275
      @martenthornberg275 17 days ago

      @@nic.h YOLO and SAM identify objects in images, but they don't really understand how to navigate a space, i.e. whether there's an object blocking the path of the robot. You would have to manually calculate where the object is relative to the robot and whether it blocks the path forward.

  • @Asifuzzaman-u1x
    @Asifuzzaman-u1x 19 days ago

    Sir I want to learn please share your project....❤❤❤❤❤
    From Bangladesh....

  • @Badg0r
    @Badg0r 17 days ago

    Is this project open source?

  • @rhadiem
    @rhadiem 12 days ago

    It sounds like latency and your robot making too many big movements are making this work badly. Run a local LLM on your computer, something small and quick, and then make the robot make smaller movements.

  • @jaxmk1
    @jaxmk1 9 days ago

    So you are Skynet's (Genisys) golden child... that will build the TERMINATOR. 🤔

  • @RaveMonDev
    @RaveMonDev 17 days ago

    Bro, make it so it doesn't talk that much

  • @utsavbasu2661
    @utsavbasu2661 12 days ago

    Please bro, do it. I tried to do it but failed because my mechanics suck and I'm low on money 🥺. I really need one 😢

  • @SimplVR
    @SimplVR 18 days ago

    Why is it breathing?

  • @MUSTACHED_MAN
    @MUSTACHED_MAN 16 days ago

    Whenever you use the David voice, turn the camera off. If you know, you know.

  • @rerogam9943
    @rerogam9943 18 days ago

    I use GPT-3.5 Turbo for conversations with my robot. More than 200 minutes of conversation doesn't even cost 20 euro cents. I use it because no local LLM would work as well as the conversation with GPT does.

  • @kyrrekar
    @kyrrekar 19 days ago

    Hahaha. Very good

  • @ericparkr
    @ericparkr 19 days ago

    hello

  • @Asifuzzaman-u1x
    @Asifuzzaman-u1x 19 days ago

    Please sir....
    I want to learn from you....❤❤❤🎉🎉🎉🎉

  • @itssmileback
    @itssmileback 16 days ago

    idea: add a mic so it can hear its surroundings

  • @piconano
    @piconano 19 days ago

    I've got news for you.
    We ARE the product of AI.
    You can't perfect perfection.

  • @CBWP
    @CBWP 18 days ago

    This is what happens when mechanical engineers discover that 100 lines of code can make a $10 webcam do amazing things. Or you find Raspberry Pi Picos and Zero Ws and ESP32s that cost maybe $10 and then realize the secret to the universe is in the code...
    I could imagine tips for ChatGPT, but I don't like to trust other code. I spend enough time troubleshooting my own work, I don't have time for GPT...

  • @truffatorenapoletano90tratbas
    @truffatorenapoletano90tratbas 18 days ago

    LLMs are not conscious, so neither can a robot driven by an LLM be.

    • @martenthornberg275
      @martenthornberg275 17 days ago

      How do you know, when no one knows what consciousness really is?