Devashish Priyadarshi
Joined Aug 8, 2013
A world without syntax errors | An imaginary programming language running against an AI model
22 views
Videos
Using Red Hat Dev Spaces to open the IntelliJ IDE on a phone | except the Android keyboard doesn't work
10 views · 21 days ago
AI powered java editor fixes error in the java code
17 views · 28 days ago
Repo link: github.com/devashish234073/java-editor-version-predict-using-ai Branch containing exact same code as the video: github.com/devashish234073/java-editor-version-predict-using-ai/tree/fix-error-using-ai-blocking-way
AI powered java editor to predict the version needed to run a particular java code
26 views · 28 days ago
Repo link: github.com/devashish234073/java-editor-version-predict-using-ai
Created UI for calling ollama api to predict the minimum java version needed to run a java code
14 views · 28 days ago
Github link: github.com/devashish234073/java-editor-version-predict-using-ai
Using AI to tell which java version needed to run a particular code | prompt makes AI return json
22 views · 28 days ago
Using AI to tell which minimum Java version is needed to run a particular piece of Java code. The prompt specifically asks the model to return the response as JSON with only two keys: one giving the version, and a description giving the reason. The prompt runs against a locally running llama3.2 model. This can be used to build a Java editor that automatically chooses the Java version to run the code against. P...
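The flow described above can be sketched roughly like this — a minimal sketch assuming a local ollama server on its default port 11434; the prompt wording and the helper names (`extractVersion`, `askOllama`) are illustrative, not the repo's actual code:

```javascript
// Build a prompt that asks for a JSON answer with exactly two keys.
const PROMPT_TEMPLATE = (code) =>
  `Return ONLY a JSON object with two keys, "version" and "description", ` +
  `giving the minimum Java version needed to run this code and why:\n${code}`;

// Parse the model's text answer into a {version, description} object.
// Returns null if the model did not follow the JSON instruction.
function extractVersion(answerText) {
  const match = answerText.match(/\{[\s\S]*\}/); // tolerate text around the JSON
  if (!match) return null;
  try {
    const obj = JSON.parse(match[0]);
    return { version: obj.version, description: obj.description };
  } catch (e) {
    return null; // model returned malformed JSON
  }
}

// Network call (needs a running ollama server, e.g. after `ollama run llama3.2`):
async function askOllama(code) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({
      model: "llama3.2",
      prompt: PROMPT_TEMPLATE(code),
      stream: false, // one JSON response instead of a token stream
    }),
  });
  const body = await res.json();
  return extractVersion(body.response);
}
```

The regex fallback matters in practice: small local models often wrap the JSON in extra prose even when told not to.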
Published the Chrome extension that visualizes ChatGPT HTML to Firefox, to install on a phone
34 views · a month ago
Extension link: addons.mozilla.org/en-US/firefox/addon/html-code-visualizer/ However as shown in the video you can directly search it also
Added a terraform file for automating the adb setup on windows for the wifi devices application
21 views · a month ago
Repo link: github.com/devashish234073/wifi-devices-in-local-network
Added controls under the wifi explorer app to launch any apps from android tv and also add exclusion
25 views · a month ago
Added more controls to the wifi explorer app, under adb, to display all the apps installed on the Android TV, plus a launch button to launch any of the apps from a dropdown. The dropdown shows app package names, which come from the output of the below command: "adb shell pm list packages" To launch an app from the package name selected from the list, this command is run: "adb shell monkey -p ${package...
Connecting to adb shell of android tv and taking a look at youtube logs
160 views · a month ago
Python code runner (chrome|edge) extension for chatGPT window
65 views · 2 months ago
Source code: github.com/devashish234073/chrome_extensions/tree/main/python_code_runner_package Run the backend/server.js as a node application then use the extension
Created a chrome extension to inject the html code generated by chatGPT into dom of the extension
41 views · 2 months ago
Repo link: github.com/devashish234073/chrome_extensions Code link : github.com/devashish234073/chrome_extensions/tree/main/html_code_visualizer Extension link: chromewebstore.google.com/detail/html-code-visualizer/pobjhecnhagnmdpblefbomhpnjcmmddc?authuser=0&hl=en
Accessing the video cast app from phone to cast local video from phone | vid 8 in series
27 views · 2 months ago
Cast local videos directly from your laptop to android tv using node js app | vid7 in series
56 views · 2 months ago
Casting media to android tv using nodejs code | vid 6 in exploring wifi devices series
68 views · 2 months ago
UI for Wifi devices update | page reload refreshed with api call to update data | vid 5
22 views · 2 months ago
Wifi devices discovery continued | vid 3
26 views · 2 months ago
Refactored the code to explore wifi devices | To be continued
66 views · 2 months ago
Exploring the wifi devices (MI-TV) connected to my wifi network using mdns-js library
19 views · 2 months ago
Added a password for exchange of the public key which is not exchanged between server and client
11 views · 2 months ago
Created a decentralized secure chat app that involves 6 pairs of pub-priv keys for encryption-decryp
12 views · 2 months ago
Deploying the UI for an AI application with the qwen2 model to a Google Cloud VM
179 views · 3 months ago
Was trying with a g4dn.xlarge instance type but hit a roadblock, so had to go with t2.xlarge
16 views · 3 months ago
Update to the ui-for-ai application | CSS changes | and showing available models in the UI to run against any
27 views · 3 months ago
Meta AI's Imagine feature started rolling out in WhatsApp
94 views · 3 months ago
Setting up ollama server with AI model in EC2 using cloudformation template | see desc for repo
33 views · 3 months ago
Running llama 3.1 locally using ollama server's local api
61 views · 3 months ago
Showing RAM and CPU usage of the ollama server while running llama3.1 locally, and running code generated by it
95 views · 3 months ago
I am connected locally and it works, but it is only using my CPU and not my AMD GPU. Is it because I have an AMD GPU, or am I doing something wrong?
@@ulfark7934 in Colab, under the Runtime menu you can find a "Change runtime type" option; the default "CPU" is selected there. I think you need to change that to GPU and try.
See this video's comment for the UI application built around this: www.linkedin.com/posts/devashish-priyadarshi-96554112b_creating-an-imaginary-programming-language-activity-7253835180962979843-4ELQ?
First bro 🎉
The update discussed in this video is done. The changes can be seen here www.linkedin.com/posts/devashish-priyadarshi-96554112b_update-the-ai-powered-java-editor-to-have-activity-7251273408959635456-LOkS?
The prompt: Analyze the image and return a detailed list of all the identifiable products with their brand names and counts in JSON format.
Tried this in two images from the shop. The first image returned this:

{
  "products": [
    { "brand": "Lay's", "type": "Snack packets", "count": 7 },
    { "brand": "Kurkure", "type": "Snack packets", "count": 6 },
    { "brand": "Pepsi", "type": "Soda bottles", "details": { "large_bottles": 5, "small_bottles": 5 }, "count": 10 },
    { "brand": "Centerfruit", "type": "Chewing gum jar", "count": 1 },
    { "brand": "Shots", "type": "Chocolate packet", "count": 1 },
    { "brand": "Unbranded", "type": "Loose items (grains, pulses, etc.)", "count": 10 },
    { "brand": "Unbranded", "type": "Egg trays", "count": 4 },
    { "brand": "Unbranded", "type": "Various non-branded items (inside shelves)", "count": 15 }
  ]
}
Notice that for cold drinks it even sub-categorised large and small bottles.
This is the response from the second image:

{
  "products": [
    { "brand": "Lay's", "type": "Snack packets", "count": 4 },
    { "brand": "Kurkure", "type": "Snack packets", "count": 5 },
    { "brand": "Gopal", "type": "Papdi Gathiya snack packets", "count": 3 },
    { "brand": "Teddy Money", "type": "Snack packets", "count": 3 },
    { "brand": "Funtastik", "type": "Moon Chips snack packets", "count": 2 },
    { "brand": "Gillette Vector", "type": "Razors", "count": 1 pack (multiple individual units visible) },
    { "brand": "Unbranded", "type": "Plastic disposable cups", "count": 1 pack }
  ]
}
Added more controls to the wifi explorer app, under adb, to display all the apps installed on the Android TV, plus a launch button to launch any of the apps from a dropdown. The dropdown shows app package names, which come from the output of the below command: "adb shell pm list packages" To launch an app from the package name selected from the list, this command is run: "adb shell monkey -p ${package} -c android.intent.category.LAUNCHER 1" There is also an exclusion text box with which rules can be created against the app logs, to close the app when a rule matches. For example, in this video I set the first rule "NowPlaying,Kabootar" for the youtube app, so whenever youtube logs the text NowPlaying and Kabootar together, i.e. when the song "Kabootar Ja Ja" is being played on the TV, the app triggers a back command and closes it. The video plays for a few seconds because the "logcat" command that checks the app logs runs every 2 seconds. Repo link: github.com/devashish234073/wifi-devices-in-local-network
hi. When I try to connect via ADB, the "Allow USB debugging?" dialog shows and disappears immediately.
@@CenkKose-ks9us not sure why that behaviour is happening; maybe try rebooting the TV and see if that fixes it
If you are able to get the USB debugging toggle ON, then you can skip the prompt and check from your terminal whether you are able to do an adb connect with the IP or not; in the UI, if you are not seeing adb, just use the IP of the TV to connect.
When doing adb connect IP, you will see the prompt again
@@devashishpriyadarshi6366 My TV box doesn't have a network debugging option. Could this cause all of these problems?
@@CenkKose-ks9us are you able to see Developer options? If Developer options is there, USB debugging should also be there. On smart TVs based on an OS other than Android the option might not be present.
Published Extension link: chromewebstore.google.com/detail/python-runner/gajnahelmoanddnbpjecaejkhokokgmi
Thank you sir for the video, you helped me a lot
Use this branch for exact code from above video: github.com/devashish234073/wifi-devices-in-local-network/tree/initial-implementation-of-cast-any-local-video
Repo link : github.com/devashish234073/wifi-devices-in-local-network Use this branch for exact same code as the above video: github.com/devashish234073/wifi-devices-in-local-network/tree/initial-cast-to-android-tv-code
In this video I am starting and stopping an FTP server from a device connected to the same wifi, and you can see its details getting updated
The main branch might get updated; refer to this branch for the exact code from the video: github.com/devashish234073/wifi-devices-in-local-network/tree/making-api-call-to-update-instead-of-reloading
The main branch might get updated; refer to this branch for the exact same code as the above video: github.com/devashish234073/wifi-devices-in-local-network/tree/initial-page-reload-to-refresh-data
Final code from the video:

var mdns = require('mdns-js');

function scan() {
    var browser = mdns.createBrowser(mdns.tcp('googlecast'));
    browser.on('ready', function () {
        console.log("READY");
        browser.discover();
    });
    browser.on('update', function (data) {
        console.log(`${data.addresses} ${data.fullname} ${data.type[0]["name"]}`);
    });
}

scan();
Library link: www.npmjs.com/package/mdns-js

Code:

var mdns = require('mdns-js');
//if you have another mdns daemon running, like avahi or bonjour, uncomment following line
//mdns.excludeInterface('0.0.0.0');

var browser = mdns.createBrowser(mdns.tcp('googlecast'));

browser.on('ready', function () {
    browser.discover();
});

browser.on('update', function (data) {
    console.log('data:', data);
});
How to solve this issue, other than getting a better computer?
@@ggg9gg other than a better computer, you can use cloud VMs with better specs — for example t2.xlarge instances in AWS. You can use one and terminate it after you are done. Since t2.xlarge does not come under the free tier you will be charged for it even during the free-tier period; however, if you use it for only a few hours the charge will be very low.
Bro, will I be able to run this on 16 GB RAM with a Ryzen 7 octa-core processor and a 6 GB RTX 3050 GPU?
@@mohammad-xy9ow yes, it will work with better response time than shown in this video
Will it work offline now?
@@Harini-f7c yes, if you switch to your local runtime it will run the code against your local Python installation
Repo link: github.com/devashish234073/decentralized-secure-chat-app
Repo Link: github.com/devashish234073/decentralized-secure-chat-app

This application is used for communication between two peers without the need for a central server hosting the application or storing the messages. The two people communicating each need to launch their own EC2 instance, each with its own node backend serving the chat UI. The application has RSA key-pair encryption/decryption logic implemented to encrypt messages at all stages of communication, using 6 pairs of public-private keys.

Initially I was using "http" instead of "https", since encryption/decryption is already taken care of in the application logic, but the "window.crypto" API used for generating the keys is only allowed on localhost or a secure (https) domain, so I had to add a self-signed certificate generation step to the cloudformation template. To launch the application, two cloudformation stacks need to be created from the same template; each generates one EC2 instance for each person who wants to communicate.

The flow begins when person 1 opens his UI, which makes an API call to his own backend. This first API call is for the key exchange: both the server and the UI generate their own public/private key pairs. The UI sends its public key through the first API call and receives the server's public key in the response. It is important to make this first call as soon as the server is ready: after this first call the server deletes its public key permanently, so anyone else who knows the public IP of your server will not be able to acquire it, and only you, the legitimate person 1, will be able to communicate using your server. The same steps need to be done by the person you are messaging — the other person also needs to open his app for the first time as soon as possible to acquire his server's public key. After this, for person 1 to send a message to person 2, person 1 puts the IP of person 2's server in the destination field, then types and sends the message.

The message is first encrypted using the public key of the sender's server, then sent to that server, which decrypts it using its private key and then calls the destination's communicationKey API; this API returns another public key that each server keeps for communication with other hosts. After getting hold of this key, person 1's server encrypts the message using the "communication" public key of the destination server and sends it there. The destination server then decrypts it using its "communication" private key, re-encrypts it using person 2's UI's public key, and sends it to person 2's UI. Person 2's UI then decrypts the message using its own private key.
I tried the same with llama 3.1 on an n1-type instance with an "nvidia-tesla-t4" GPU, but instance creation failed with the error "A n1-standard-2 VM instance with 1 nvidia-tesla-t4 is currently unavailable in the us-central1-f zone" and I was asked to try again later or in another region. Looks like there's still a shortage of NVIDIA GPUs.
For screenshot please refer comment from this video: www.linkedin.com/posts/devashish-priyadarshi-96554112b_improved-the-script-injection-logic-for-the-activity-7226790394280173568-Qsla?
The demo of the fix to the script injection can be seen in this video: www.linkedin.com/posts/devashish-priyadarshi-96554112b_improved-the-script-injection-logic-for-the-activity-7226790394280173568-Qsla?
hi, I am new to learning AI. Can you tell me how to use it in my Discord bot for free — like which AI API is the best and free, with real-time data?
APIs for AI models are mostly paid, with the free versions having limits. Maybe you can create an EC2 instance using this: github.com/devashish234073/ui-for-ai/blob/main/cloudformation-no-gpu-qwen2.json and then directly use the ollama APIs, which listen on port 11434. As you can see in this template, I have already kept port 11434 open, which is used to interact with the ollama APIs; you can find the open ports in the "SecurityGroupIngress" part of the template. To access it you need to run curl or write equivalent client code to do this: curl localhost:11434/api/generate -d '{"model": "qwen2:0.5b","prompt": "Write a poem on goat in 10 words"}' Here you will replace localhost with the public IP of your EC2 instance. Even the "<PUBLICIP>:9999/generate" endpoint that the UI calls is a wrapper over the ollama API, so you can use that too; you can get its details from the network tab of the browser's dev tools.
Note: you will still pay for the EC2 or GCP compute instance, but that will be comparatively less if you are running it against a lightweight model like qwen2 and terminating the instance when you are not using it. Even the qwen2 one requires at least a t2.small instance type, which is not in the free tier.
The multiple script injection we saw could be solved by hashing any script tag encountered and only appending a new script tag if its hash does not already exist. I will implement that and post an update; I will also implement logic to inject script tags with a src value, apart from the inline ones.
What is your age and what is your experience?
(32,9.5)
The issue shown at the end of the video in opening the UI is because the "npm install" step was missing in the EC2 user data for the node application. This is the error I got from the /var/log/cloud-init-output.log file:

node:internal/modules/cjs/loader:1148
  throw err;
  ^
Error: Cannot find module 'express'
Require stack:
- /ui-for-ai/server.js
    at Module._resolveFilename (node:internal/modules/cjs/loader:1145:15)
    at Module._load (node:internal/modules/cjs/loader:986:27)
    at Module.require (node:internal/modules/cjs/loader:1233:19)
    at require (node:internal/modules/helpers:179:18)
    at Object.<anonymous> (/ui-for-ai/server.js:1:17)
    at Module._compile (node:internal/modules/cjs/loader:1358:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1416:10)
    at Module.load (node:internal/modules/cjs/loader:1208:32)
    at Module._load (node:internal/modules/cjs/loader:1024:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:174:12) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [ '/ui-for-ai/server.js' ]
}
This video is after implementing the fix : www.linkedin.com/posts/devashish-priyadarshi-96554112b_created-a-cloudformation-template-that-setup-activity-7225027301741146112-E8SW?
Tried a t2.large instance with the llama3.1 model, but the startup script failed with:

pulling 87048bcd5521... 81% 3.8 GB/4.7 GB 69 MB/s
Error: write /home/ec2-user/.ollama/models/blobs

I tried to ssh to the instance and pull manually, then got this:

Error: write /home/ec2-user/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87-partial: no space left on device

ubuntu@ip-10-0-1-9:/var/log$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.6G  7.6G     0 100% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           1.6G  876K  1.6G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/xvda15     105M  6.1M   99M   6% /boot/efi
tmpfs           794M  4.0K  794M   1% /run/user/1000
ubuntu@ip-10-0-1-9:/var/log$ vi cloud-init-output.log
I might have to attach an additional volume
This is the video after attaching a 15GB volume: www.linkedin.com/posts/devashish-priyadarshi-96554112b_created-the-other-template-to-install-llama31-activity-7225252941673156608-S8g3?
Ollama also provides an API to do this. It can be tested with: curl localhost:11434/api/generate -d '{"model": "llama3.1","prompt": "Why is the sky blue?"}' On a Windows terminal single quotes will cause issues, so use: curl localhost:11434/api/generate -d "{\"model\": \"llama3.1\",\"prompt\": \"Why is the sky blue?\"}" For details refer to the screenshots in this post: www.linkedin.com/posts/devashish-priyadarshi-96554112b_showing-ram-and-cpu-usage-of-ollama-server-activity-7223295076829999104-GWtt?
Here's the response I got:

curl localhost:11434/api/generate -d "{\"model\": \"llama3.1\",\"prompt\": \"Why is the sky blue?\"}"

{"model":"llama3.1","created_at":"2024-07-30T04:15:08.0926584Z","response":"The","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:08.3555334Z","response":" sky","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:08.6283079Z","response":" appears","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:08.8860595Z","response":" blue","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:09.1865226Z","response":" to","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:09.4849016Z","response":" us","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:09.7824486Z","response":" because","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:10.0341468Z","response":" of","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:10.3007863Z","response":" a","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:10.5504437Z","response":" phenomenon","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:10.8221666Z","response":" called","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:11.2341383Z","response":" scattering","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:11.5670524Z","response":",","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:11.8733136Z","response":" which","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:12.1689381Z","response":" occurs","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:12.4342679Z","response":" when","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:12.6927458Z","response":" sunlight","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:12.9862022Z","response":" interacts","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:13.3482728Z","response":" with","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:13.6833401Z","response":" the","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:13.9920272Z","response":" tiny","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:14.3078334Z","response":" molecules","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:14.6498076Z","response":" of","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:14.9607043Z","response":" gases","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:15.2677077Z","response":" in","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:15.601236Z","response":" the","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:15.9127104Z","response":" atmosphere","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:16.2127908Z","response":".","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:16.5424011Z","response":" Here","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:16.8758314Z","response":"'s","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:17.1994567Z","response":" why","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:17.5266942Z","response":": ","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:17.8386617Z","response":"1","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:18.1556406Z","response":".","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:18.4738938Z","response":" **","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:18.8085976Z","response":"Sun","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:19.1559211Z","response":"light","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:19.5318295Z","response":"**:","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:19.8837797Z","response":" The","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:20.2609939Z","response":" sun","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:20.59175Z","response":" emits","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:20.9072251Z","response":" white","done":false}
{"model":"llama3.1","created_at":"2024-07-30T04:15:21.2382555Z","response":" light","done":false}
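Each line of the stream above is a standalone JSON object carrying a `response` fragment; to get the full answer a client simply concatenates the fragments until it sees `"done": true`. A small sketch of that joining logic (illustrative, not from the repo):

```javascript
// Join ollama's streaming newline-delimited JSON chunks into one answer string.
function joinStream(ndjsonText) {
  let answer = "";
  for (const line of ndjsonText.split(/\r?\n/)) {
    if (!line.trim()) continue; // skip blank lines
    const chunk = JSON.parse(line);
    answer += chunk.response;   // accumulate the token fragment
    if (chunk.done) break;      // final chunk of the stream
  }
  return answer;
}
```

Passing `"stream": false` in the request body avoids this entirely by returning one final JSON object instead.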
Steps: Download ollama from ollama.com, then run 'ollama run llama3.1' from your terminal. The size of this model is 4.7 GB. If you have a better computer you can try the 70b or 405b models, which are 40 GB and 231 GB in size. Read more at: ollama.com/library/llama3.1
Repo link: github.com/devashish234073/data-transfer-using-qr-code Part1 of this video: th-cam.com/video/zkUZM0Xxv54/w-d-xo.html
The description is missing a few steps: after step 7 there should be another "npm install" in the backend directory, and then another "npm install" in the AI directory.
Can you create a video for 2 microservices?
Not immediately, but in my future projects with microservices I will integrate X-Ray and then reply to this with the video. For now, can you take a look at how the AWS_XRAY_DAEMON_ADDRESS environment variable is used to reach a remote X-Ray daemon? You will have one EC2 instance in which the X-Ray daemon is installed, and the others will talk to that same host using the environment variable. Also make sure the security group is properly configured so that communication between instances is allowed.
There is also an "AWSXRay.setDaemonAddress" method you can explore; as a starting point you can read stackoverflow.com/questions/61728733/aws-xray-daemon-locally-unable-to-connect-xray-using-xray-daemon-from-my-app
In the video, when I switched to the user's friend's account, the AI-generated image was not showing because of the below code, in which, to prevent excess posts from one friend, I kept a limit of 3:

if (p > 3) { //at most 3 posts from a friend in timeline
    break;
}

I will modify this in the future to retrieve the last three instead of the first three.
Currently I am working on reimplementing an Angular version of the application, so the change will be delayed. The Angular version's progress can be seen at github.com/devashish234073/honest-social-net/tree/angular-frontend-node-express-backend
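Switching from the first three posts to the last three is essentially the difference between `slice(0, 3)` and `slice(-3)`; a tiny sketch of the intended change (illustrative, not the repo's actual timeline code):

```javascript
// Current behaviour: keep the FIRST three posts from a friend.
function firstThree(posts) {
  return posts.slice(0, 3);
}

// Intended behaviour: keep the LAST (most recent) three instead.
function lastThree(posts) {
  return posts.slice(-3);
}

console.log(firstThree([1, 2, 3, 4, 5])); // [1, 2, 3]
console.log(lastThree([1, 2, 3, 4, 5]));  // [3, 4, 5]
```

This assumes the friend's posts are stored oldest-first, so the tail of the array holds the most recent ones.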
Wow sir, understood everything completely ... you are such a genius
😀
What is the need for it?
Excuse me sir, I used camera capture on my MacBook Air M1 and it said: No AVVideoCaptureSource device. Could you give me some suggestions please?
I haven't tested it on Apple devices, but check whether the tab has access to the camera or not. Also try once in Chrome if you are using some other device
Was searching for this for the last 5 days
Forgot to mention that in this setup there is one manual step needed, i.e. renaming the lambda's source file from index.js to index.mjs
Also, S3 does not allow all the files to be downloaded from the console at once. However, it can be done using the AWS CLI by running the below for your bucket: aws s3 sync s3://tempu-folder .
Hi bro! I have a question: if I have a system of 5 services calling each other on 5 different EC2 instances (they could be called microservices, but they are not managed by ECS or EKS), how do I trace them? Following your way in this video, I think I need to configure all 5 apps in my code and the 5 EC2 instances. After that, will I see 5 service maps in the console, like: client -> app in EC2? I don't know if I'm wrong — is there any way to trace my system?
You can have your services running on the same EC2 instance if they run on different ports; they will still show up. But if you need tracing across multiple EC2 instances, you will need the X-Ray daemon service running on only one instance, with the other instances sending their traces to that instance remotely.
Sir, I am a Java developer but I know AWS as well. Query: how to integrate all those services with Spring Boot, can you please tell?
Even for a Spring Boot application you will need to add the same X-Ray SDK dependencies in your pom file and install the X-Ray daemon on the EC2 instance the same way as shown in the video.
Also make sure you test the application by running it on the EC2 instance only; locally the X-Ray tracing parts will not work.
Will you upload the developer services integration with Spring Boot?
A question: could you mention, in general, everything that I need to install on my PC beforehand to be able to do the same thing that you do in the video? I have checked that I need to install Python and Jupyter; for Jupyter I don't know if it is the Notebook or the Lab, and which other extensions are necessary. I am new to this topic; I want to use Google Colab locally on my Windows PC, since online Google Colab has many limitations. Thank you.
Does it have a free GPU too?
You can try the change runtime option to use a GPU. I read somewhere that in the free version Colab notebooks can run for at most 12 hours continuously. Also, when changing the runtime, the GPU option was selectable while the others were not; probably those are for the pro version.
How do you plot using Jupyter notebooks on an Android?
Pydroid lets you install Jupyter Notebook; it runs locally within your phone and you can use the localhost URL to connect from your phone's browser. It's also possible to have Jupyter Notebook running on your PC and connect from your phone over the same network, e.g. using the same wifi; in this case you need to replace the localhost part of the URL with the 192.-series IP of your laptop.
Hi, how to integrate X-Ray with an AWS EKS cluster? I have 25 microservices.
Not sure if there is a direct integration like Lambda has, but I think having the X-Ray daemon installed in the images used in the cluster would be the first step; then, just like with an EC2 instance, you will have to assume the X-Ray role.
Hi!! I integrated X-Ray in my code the same way as you have mentioned. My application also runs on an EC2 instance with a private IP address. I am not getting any traces. How can I know that X-Ray is working correctly in my Java code?
Sometimes it takes time for first-time traces to appear. If they're still not coming after a long wait, check whether any error is in the X-Ray daemon log file; it will be some file in the /var/log directory, so look for something like /var/log/xray-daemon.log or any file with xray in the name under /var/log.
fake, it gives an error
Make sure your Jupyter Notebook is installed properly. If you paste the localhost URL printed while launching it into your browser and that works, then it will work in Google Colab too without any error.