The iT Tech Solutions
India
Joined 9 Sep 2023
Welcome to "The iT Tech Solution," your go-to destination for insightful solutions in the world of IT development! 🚀
🔍 Dive deep into the intricate realm of coding, where we unravel the complexities and offer practical solutions to the challenges faced by IT professionals like you. From mastering Python's intricacies to harnessing the power of AWS, navigating Git, and optimizing your workflow with VSCode - we've got you covered.
🛠️ Explore step-by-step tutorials, troubleshooting guides, and best practices carefully crafted to elevate your development skills.
🌐 Stay updated on the latest trends, tools, and techniques shaping the IT landscape. Our content is designed to empower you with the knowledge and skills needed to excel in the dynamic world of software development.
🚀 Ready to level up your IT game? Subscribe to "The iT Tech Solution" and embark on a journey of continuous learning and innovation. Your coding conundrums meet their match here!
PySpark Learning Series | 21 Sorting Data in a Dataframe
In this video we learn how to sort data in a DataFrame, using both the sort() function and the orderBy() function. A short sketch follows the keyword list below.
Your keywords:
How to sort data in pyspark
How to get sorted in dataframe using pyspark
How to sort data in pyspark using sort()
How to sort data in pyspark using orderBy()
Data sorting in pyspark
Sorting data in pyspark
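A minimal sketch of the two calls (the DataFrame and its name/age columns here are illustrative, not taken from the video):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("SortDemo").getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 45), ("Cara", 29)], ["name", "age"])

df.sort("age").show()                  # sort(): ascending by age
df.orderBy(col("age").desc()).show()   # orderBy(): descending by age

Both functions accept column names or Column expressions; orderBy() is effectively an alias of sort().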
Views: 6
Videos
PySpark Learning Series | 20 Filtering Data from Dataframe
10 views · 19 hours ago
PySpark Learning Series | 20 Filtering Data from Dataframe. In this video we learn how to filter data from a DataFrame using the filter() function and the where() function; data can be filtered with either one.
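As a rough sketch of the two equivalent calls (reusing the illustrative spark session and df with name/age columns from the sorting sketch above):

df.filter(df.age > 30).show()         # filter() with a column condition
df.where(df.name == "Alice").show()   # where() is an alias for filter()
df.filter("age > 30").show()          # a SQL-style string condition also works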
PySpark Learning Series | 19 Dropping a Column and Dropping Duplicates from Dataframe
13 views · 14 days ago
PySpark Learning Series | 19 Dropping a Column and Dropping Duplicates from Dataframe. In this video we learn how to drop a complete column from a DataFrame and how to remove duplicate rows, i.e. how to deduplicate data in a DataFrame. We will see how to use drop() and dropDuplicates() in PySpark.
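A minimal sketch (again reusing the illustrative df from the sorting sketch above):

df.drop("age").show()                 # drop an entire column
df.dropDuplicates().show()            # drop fully duplicate rows
df.dropDuplicates(["name"]).show()    # deduplicate on selected columns only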
PySpark Learning Series | 18 How to Rename Column Names
16 views · 21 days ago
PySpark Learning Series | 18 How to Rename Column Names. In this video we will see how to rename a column, and how to rename multiple columns, using the withColumnRenamed() function. Your queries: how to rename column names in pyspark how to rename multiple columns in pyspark withColumnRenamed function in pyspark pyspark tutorials for beginners learn pyspark withColumnRenamed demo
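A minimal sketch (the new column names are made up; reusing the illustrative df from above):

df2 = df.withColumnRenamed("name", "full_name")           # rename a single column
df3 = (df.withColumnRenamed("name", "full_name")           # rename multiple columns by chaining
         .withColumnRenamed("age", "age_in_years"))
df3.printSchema()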
AWS Tutorial - Setting up IAM User and Budget
86 views · 21 days ago
AWS Tutorial - Setting up IAM User and Budget. In this video we learn how to set up (create) an IAM user, and how to create a budget for a free tier account, so that we can practice hands-on without incurring any cost in AWS and learn AWS for free. Your keywords: How to setup AWS account How to use AWS for free How to setup aws free tier account What is IAM in AWS How to create use...
AWS Tutorial - Create Free Tier Account
88 views · a month ago
AWS Tutorial - Create Free Tier Account. In this video we'll learn how to set up an AWS Free Tier account so that we can do hands-on practice. Your Keywords: How to setup AWS account How to use AWS for free How to setup aws free tier account Practice aws for free AWS account setup Create free account in aws
AWS Tutorial - Series Introduction
66 views · a month ago
Welcome to the AWS Tutorial. In this series, we'll explore the world of Amazon Web Services, one of the most popular cloud platforms in the world today. We'll start from the basics, walking through key concepts like cloud computing, networking, and security. Then we'll dive deeper into core AWS services: how they work, when to use them, and how to implement them in real-world projects.
Snowflake Copy Command and Internal Stage | Bulk Data Load Using Internal Stage
126 views · a month ago
Snowflake Copy Command and Internal Stage | Bulk Data Load Using Internal Stage. In this video we learn about the Snowflake COPY command and how to create and use an internal stage to do a bulk data load from the local file system. We will also see what a file format is and what the different types of internal stage are. #snowflakecomputing #dataengineeringessentials #snowflake #dataengineering Your Queries H...
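A rough sketch of the flow using the Snowflake Python connector (the file format, stage, table, and file names are hypothetical, not taken from the video):

import snowflake.connector

conn = snowflake.connector.connect(user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
                                   warehouse="COMPUTE_WH", database="DEMO_DB", schema="PUBLIC")
cur = conn.cursor()
cur.execute("CREATE OR REPLACE FILE FORMAT my_csv_format TYPE = 'CSV' SKIP_HEADER = 1")
cur.execute("CREATE OR REPLACE STAGE my_internal_stage FILE_FORMAT = (FORMAT_NAME = 'my_csv_format')")
# PUT uploads a local file into the internal stage (works from a connector or SnowSQL session, not the web UI)
cur.execute("PUT file:///tmp/sales.csv @my_internal_stage AUTO_COMPRESS=TRUE")
# COPY INTO bulk-loads the staged file into an existing table named sales
cur.execute("COPY INTO sales FROM @my_internal_stage FILE_FORMAT = (FORMAT_NAME = 'my_csv_format')")
cur.close()
conn.close()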
Snowflake Stream and Task
95 views · a month ago
Snowflake Stream and Task. In this video we see what a stream is and how we can use it to track changes in a source table and automate the processing with a task. Your queries: snowflake tasks and streams snowflake streams and tasks snowflake task scheduling snowflake tasks example snowflake streams tutorials
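A rough sketch of the pattern (the table, warehouse, and object names are hypothetical; cur is a cursor opened with the Snowflake Python connector as in the copy-command sketch above):

# a stream records inserts/updates/deletes on the source table
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE orders")
# a task runs on a schedule, only when the stream has data, and consumes the stream
cur.execute("""
    CREATE OR REPLACE TASK orders_task
      WAREHOUSE = compute_wh
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO orders_history SELECT * FROM orders_stream
""")
cur.execute("ALTER TASK orders_task RESUME")   # tasks are created in a suspended state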
PySpark Learning Series | 17 How to add a new column
47 views · a month ago
PySpark Learning Series | 17 How to add a new column. In PySpark, the withColumn() function is used to add or modify a column in a DataFrame. You can use it to create new columns, replace values in existing columns, or apply transformations to columns. It is especially useful when you need to perform operations on the data in individual columns. Your queries: pyspark tutorial pyspark pyspark tutori...
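A minimal sketch (reusing the illustrative df from the sorting sketch; the new column names are made up):

from pyspark.sql.functions import col, lit

df.withColumn("age_plus_one", col("age") + 1).show()   # derive a new column from an existing one
df.withColumn("country", lit("IN")).show()             # add a constant column
df.withColumn("age", col("age") * 2).show()            # replace/modify an existing column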
Connect to Snowflake from Python
99 views · a month ago
Connect to Snowflake from Python. In this video we see how to connect to Snowflake from Python and how to query Snowflake tables using Python. We will see how to install the Snowflake Python connector and use it to connect to Snowflake. We will execute a sample query to select a few records from a Snowflake table, and create a table using a simple CREATE TABLE command in Snowflake. More s...
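A rough sketch of what that looks like (account, credentials, and table name are placeholders, not from the video); the connector is installed with pip install snowflake-connector-python:

import snowflake.connector

conn = snowflake.connector.connect(
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    account="YOUR_ACCOUNT_IDENTIFIER",
    warehouse="COMPUTE_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS demo_tbl (id INT, name STRING)")   # simple create table
cur.execute("SELECT * FROM demo_tbl LIMIT 5")                              # sample select
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()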
What is Snowflake Storage Integration?
205 views · 2 months ago
What is Snowflake Storage Integration?
PySpark Learning Series | 16 How to Use Joins in Pyspark
64 views · 2 months ago
PySpark Learning Series | 16 How to Use Joins in Pyspark
PySpark Learning Series | 15 SPARK-MySQL connection using JDBC driver
724 views · 5 months ago
PySpark Learning Series | 15 SPARK-MySQL connection using JDBC driver
Learn SQL | Install MySQL Workbench on Windows
105 views · 5 months ago
Learn SQL | Install MySQL Workbench on Windows
PySpark Learning Series | 14- groupby(), avg(), min(), max(), count()
31 views · 5 months ago
PySpark Learning Series | 14- groupby(), avg(), min(), max(), count()
PySpark Learning Series | 13- Viewing Data | Some important options to remember
32 views · 5 months ago
PySpark Learning Series | 13- Viewing Data | Some important options to remember
Load data into Snowflake from AWS s3 Bucket | Using IAM credentials
840 views · 6 months ago
Load data into Snowflake from AWS s3 Bucket | Using IAM credentials
PySpark Learning Series | 12- Some useful FILE SOURCE OPTIONS to remember
33 views · 6 months ago
PySpark Learning Series | 12- Some useful FILE SOURCE OPTIONS to remember
PySpark Learning Series | 11-Save Mode (Error, Ignore, Append, Overwrite)
92 views · 7 months ago
PySpark Learning Series | 11-Save Mode (Error, Ignore, Append, Overwrite)
PySpark Learning Series | 10-Run Queries on Files without Reading it
73 views · 7 months ago
PySpark Learning Series | 10-Run Queries on Files without Reading it
PySpark Learning Series | 09-Write Dataframe into Table (Persistent and Temporary)
134 views · 7 months ago
PySpark Learning Series | 09-Write Dataframe into Table (Persistent and Temporary)
PySpark Learning Series | 08-Write Dataframes to Files
107 views · 7 months ago
PySpark Learning Series | 08-Write Dataframes to Files
PySpark Learning Series | 07-Create DataFrames from Parquet, JSON and CSV
163 views · 8 months ago
PySpark Learning Series | 07-Create DataFrames from Parquet, JSON and CSV
ChatGPT Prompting Principles | Hands-on | Write Effective Prompts
62 views · 8 months ago
ChatGPT Prompting Principles | Hands-on | Write Effective Prompts
ChatGPT - What is Prompt Engineering | Benefits and Types of Prompt
109 views · 8 months ago
ChatGPT - What is Prompt Engineering | Benefits and Types of Prompt
PySpark Learning Series | 06-Hive Integration & Spark SQL | Read/Write data from/to Hive
314 views · 8 months ago
PySpark Learning Series | 06-Hive Integration & Spark SQL | Read/Write data from/to Hive
PySpark Learning Series | 05-Create DataFrame from RDD | toDF() | createDataFrame()
110 views · 9 months ago
PySpark Learning Series | 05-Create DataFrame from RDD | toDF() | createDataFrame()
PySpark Learning Series | 04- RDD Hands-On
103 views · 9 months ago
PySpark Learning Series | 04- RDD Hands-On
After doing all this I am still getting an error raise Py4JJavaError( py4j.protocol.Py4JJavaError: An error occurred while calling o111.jdbc. : java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
AMAZING VIDEO (THEORY AND HANDS-ON combination). Appreciate the effort
How to establish a connection to Snowflake from PySpark?
In the same way, I'm trying to connect with a local SQL Server but getting a TCP/IP error. Can you please help?
You may have to check whether TCP/IP is enabled in SQL Server Configuration Manager. A screenshot of the error would help in understanding the issue better
To know how to create a Free Tier Account please watch this video below: th-cam.com/video/tV-tXpnLfMM/w-d-xo.htmlsi=djx1ZM0ga7HQ2aDl
IMPORTANT NOTE: Also watch the video below to complete two very important setup steps before starting the hands-on. This will protect you from unwanted costs by setting up budget alerts, and also create an IAM user to protect your root account th-cam.com/video/SSfW23C1nJY/w-d-xo.htmlsi=nxRF15A8BDwC17fX
Let me know in the comment box if there is any specific AWS service you would like to learn. I'll be happy to present it
Here is the first tutorial of this series th-cam.com/video/tV-tXpnLfMM/w-d-xo.htmlsi=pQFPaylXD8YGW740
Watch this to learn about real-time data processing using Snowflake Stream and Task th-cam.com/video/BhioEIOqe3c/w-d-xo.htmlsi=bCoJ8qlNjyvMOTEL
Also watch this to know about Storage Integration, a better and recommended way to connect to AWS S3 as an external stage th-cam.com/video/3gHiSZ5vyN8/w-d-xo.html
Nicely presented and easy to absorb. Simple and useful video. Thanks a lot
You are most welcome
But if you do this, all your code will be in one container. Now whenever any lambda function is triggered, the image containing all the functionality will be loaded into memory unnecessarily, because we need the functionality of only one lambda. Imagine what the impact would be if I have more than 100 lambdas
Yes I agree, this model should be applied only if we have simple lambdas and a small number of them; for a large set of complex lambdas we should always consider creating a hierarchy of folders with a separate Docker image for each lambda. Thanks for bringing it up, really appreciate 👍
Can you add the state name (Lambda Invoke) with the existing resultPath to the next state, i.e. Lambda Invoke (1)?
Hey there, sorry for a little late response. Yes, you can do that by capturing the result from $ using ResultPath in the first state and InputPath in the second state, something like below:
{
  "StartAt": "LambdaInvoke",
  "States": {
    "LambdaInvoke": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:region:account-id:function:LambdaInvoke",
      "ResultPath": "$.lambdaResult",
      "Next": "LambdaInvoke1"
    },
    "LambdaInvoke1": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:region:account-id:function:LambdaInvoke1",
      "InputPath": "$.lambdaResult",
      "End": true
    }
  }
}
Try this way, it should work
Detailed and to the point.
Glad that it helped
Thanks for the video, helpful. But in my case I put Retry [ErrorEquals] MaxAttempts = 3, and the issue is that if some error occurs then the step function keeps executing in a loop, never ending even after 3 attempts. Please let me know how I should fix that; my whole step function goes into an infinite loop if some error comes.
I hope you are adding the Catch block with the correct error to catch and using a Pass state as the fallback. I will be able to check if I can see your step function code
Thank you so much sir.
Nice demo. Thank you! <3
Glad you liked it!
Can you do a similar video from SageMaker Code Editor? I'm trying from there but Docker is not installed
Sure, I will give it a try, but is there any specific issue that you are facing?
great video, thanks
Thank you
Hello, in my case I need to write the Spark df into Postgres. I'm currently using Kubernetes, so what would you suggest on where to add the JDBC jar file? Confused with that
What I can suggest is to create a Dockerfile that includes the PostgreSQL JDBC driver JAR in the Spark image. Something like this:
FROM bitnami/spark:latest   # Or any other Spark base image you are using
ADD postgresql-<version>.jar /opt/spark/jars/
Then build the Docker image, push it to your Docker registry, and use the custom image in your Kubernetes job.
Another option would be to include the JDBC driver JAR within your Spark code:
spark = SparkSession.builder \
    .appName("YourAppName") \
    .config("spark.jars", "path/to/postgresql-<version>.jar") \
    .getOrCreate()
df = (spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://your-db-url")
      .option("dbtable", "your-table")
      .option("user", "your-username")
      .option("password", "your-password")
      .load())
@@TheiTTechSolutions I’ve used the first method and it worked. Thanks for your guidance 😊
I followed all the steps but I am getting the error ...Failure using stage area. Cause: [Access Denied (Status Code: 403; Error Code: AccessDenied)] like this ...
Apologies for responding late. A 403 means your Snowflake is not able to communicate with AWS; there is some issue with the key. Try re-creating the stage and check the keys of the IAM user that you created for Snowflake.
Thanks that was useful ❤
Hi, this setup spiked my billing very high. The setup was to build a Lambda function to read the latest file from the S3 dir, make a transformation, and finally write to the S3 target dir. So this whole setup with the Python script has to run once S3 notifies the Lambda function that a file has just arrived in S3. But it went into a loop and made the S3 and Lambda billing spike. Let me know what the issue is in my setup that I didn't notice at first while running this Python script in Lambda
Were you writing the files back to the same directory? You can have the event notification on a specific key prefix and write the processed file to a different prefix that has no event notification, otherwise it will trigger an infinite loop. Be careful!
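A rough sketch of that pattern (the incoming/ and processed/ prefixes and the transformation are made-up examples; the S3 event notification is assumed to be configured only on the incoming/ prefix):

import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])   # e.g. incoming/file.csv

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    transformed = body.upper()   # placeholder transformation

    # write to a prefix with no event notification, so the function never re-triggers itself
    out_key = key.replace("incoming/", "processed/", 1)
    s3.put_object(Bucket=bucket, Key=out_key, Body=transformed)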
Getting error in last step
Can you share the error please?
EventBridge task creation is missing. It wasn't shown from the start, meaning from the console. Anyhow, good video.
Nice explanation. Thank you very much. Keep up the good work.
Thanks. If you want to execute Lambda 3 with the values from Lambda 1 or Lambda 2 (based on a Choice state), how can you read the values in Lambda 3?
Hello Pradeep, that can be achieved using the ResultPath and InputPath step function fields. Check if this helps: th-cam.com/video/9rzIhRb2qKk/w-d-xo.html
You explained it well! But I have a question: how can I provide the path to the API key if it is stored in a .txt file, not .env?
Well, in that case you will have to read the file. Something as below:
# Open and read the .txt file
with open('env_variables.txt', 'r') as file:
    lines = file.readlines()

# Parse each line and store the variables in a dictionary
env_vars = {}
for line in lines:
    key, value = line.strip().split('=')
    env_vars[key] = value

# Now you can use the environment variables in your code
api_key = env_vars.get('API_KEY')
print("API Key:", api_key)
But keep in mind that storing sensitive information like passwords or API keys in plaintext files is not recommended for production use
Is this free or paid? I'm asked to get a subscription now. Can somebody help me?
Hi, no it's not free. To activate or create an account you don't need to pay anything as such, but you will be charged for each API call that your program makes; you pay for what you use. As per their website, new users get $5 worth of free tokens; after that it is charged at $0.0010 / 1K tokens for input and $0.0020 / 1K tokens for output. Tokens can be thought of as pieces of words; before the API processes the prompts, the input is broken down into tokens. 1 token ~= 4 chars in English, 1 token ~= ¾ words, 100 tokens ~= 75 words. More about these can be found at: openai.com/pricing and help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
Sir, High clarity. Professional teaching. Useful videos.
Glad that it is helpful
One free comment 😊
Thanks! for this informative guide!🙂
Glad it was helpful!
Thanks for this tutorial!
Do we have to purchase or pay anything to activate the API of GPT-3.5 Turbo?
Hi, to activate or create an account you don't need to pay anything as such, but you will be charged for each API call that your program makes; you pay for what you use. As per their website, new users get $5 worth of free tokens; after that it is charged at $0.0010 / 1K tokens for input and $0.0020 / 1K tokens for output. Tokens can be thought of as pieces of words; before the API processes the prompts, the input is broken down into tokens. 1 token ~= 4 chars in English, 1 token ~= ¾ words, 100 tokens ~= 75 words. More about these can be found at: openai.com/pricing and help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
@@TheiTTechSolutions Is there any free API of a ChatGPT-type AI? I am a student, and I only need it to build a small personal project.
No, calling the API is not free; you'll get $5 credits as a new user.
@@TheiTTechSolutions I am facing some problem with this, can you send your Insta ID where I can send you a photo of my problem?
Still facing the issue? You can message me on Insta @the_it_tech_sol, I will try to help
Thanks for the updated guide! I've been looking for something to showcase the new method using OpenAI() function
I am glad that it helped!
Your screen is so small, can't watch it properly.
My apologies, try watching it on a laptop or PC, I hope that will be helpful. I will try to zoom in a little next time so that it's clearly visible.
very clear instructions and to the point! Thank you for creating this tutorial!
Thanks for watching!
good keep it up!
Thanks!
Thanks :)
Welcome!