SQL4ALL
Joined Sep 15, 2017
PySpark Memory Distribution | Study Along
The reference video can be found here
th-cam.com/video/7ooZ4S7Ay6Y/w-d-xo.html
My LinkedIn profile
www.linkedin.com/in/dataaiarchitect/
Views: 10
Videos
Azure AI Foundry | Intro and Demo (Building RAG system)
399 views · 14 days ago
AI Foundry makes it super easy to create Gen AI applications. Explore Foundry with me. Find me on LinkedIn www.linkedin.com/in/dataaiarchitect/
Azure Analysis Services | Architecture | Part 2
33 views · 1 month ago
In this video we explore the AAS architecture. Find me on LinkedIn www.linkedin.com/in/dataaiarchitect
Azure Analysis Services | Evolution | Part 1
61 views · 2 months ago
In this video series, we talk about Azure Analysis Services. I have been working with a team to optimize their AAS expenses and am sharing my learnings through these videos. Say hi to me on LinkedIn www.linkedin.com/in/dataaiarchitect/
Unity Catalog | Demo | Part 2
51 views · 6 months ago
A reference video from a Databricks expert can be found here: th-cam.com/video/itGKRVHdNPo/w-d-xo.htmlsi=lQUMshsJP5zVAmu0 You can find me on LinkedIn www.linkedin.com/in/dataaiarchitect/
How to Resolve Azure Quota Exceeded Exception on Databricks Cluster
1.6K views · 7 months ago
Azure quota errors can occur when deploying resources that exceed the predefined limits set for various resources such as vCPUs, cores, or specific resource types within Azure. These errors are triggered when the deployment attempts to exceed the allocated quota for a particular region, subscription, or resource group. The error messages typically indicate the specific resource type, the curren...
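As a side note (not from the video itself), one way to see how close a region is to its compute quota is the azure-mgmt-compute SDK. A minimal sketch, assuming the azure-identity and azure-mgmt-compute packages; the subscription ID and region are placeholders:

```python
# Hedged sketch: list regional compute usage against quota limits.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for usage in client.usage.list(location="eastus"):
    # Flag any quota (total vCPUs, cores per VM family, etc.) that is
    # nearly exhausted, since that is what triggers the deployment error.
    if usage.limit and usage.current_value >= 0.8 * usage.limit:
        print(f"{usage.name.localized_value}: {usage.current_value}/{usage.limit}")
```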
Unity Catalog | Demo | Part 1
109 views · 7 months ago
The reference video can be found here: th-cam.com/video/itGKRVHdNPo/w-d-xo.htmlsi=lQUMshsJP5zVAmu0 You can find me on LinkedIn www.linkedin.com/in/dataaiarchitect/
Cosmos DB Online Migration Using Spark Connector | Cosmos DB Intro
375 views · 1 year ago
References:
learn.microsoft.com/en-us/azure/cosmos-db/migration-choices
github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/docs/scenarios/Ingestion.md
Cosmos DB data migration sample: github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration
Maven: com.azure.cosmos.spark:azure-cosmos-spark_3-2_2-12:4.1...
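To give a feel for the connector referenced above, here is a minimal PySpark write sketch using the Cosmos DB Spark 3 connector's documented configuration keys; the endpoint, key, database, and container values are placeholders, and the toy DataFrame stands in for the data being migrated:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
source_df = spark.createDataFrame([("1", "Alice")], ["id", "name"])  # toy data

cosmos_config = {
    "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<account-key>",  # placeholder secret
    "spark.cosmos.database": "<database>",
    "spark.cosmos.container": "<container>",
}

(source_df.write
    .format("cosmos.oltp")        # the connector's OLTP data source name
    .options(**cosmos_config)
    .mode("append")
    .save())
```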
Databricks Spark UI Tab Tour | Demo Session
1.1K views · 1 year ago
Here I try to give a demo of the Spark UI tab based on what I understand! Connect with me on LinkedIn - www.linkedin.com/in/azuredataengineer/
Analyze Complex Data Types | Explode multiple array columns | Databricks | Complex Data Types
246 views · 1 year ago
The notebook is available at github.com/SqlRescuedLokesh/Databricks
Reference articles:
learn.microsoft.com/en-us/azure/synapse-analytics/how-to-analyze-complex-schema
stackoverflow.com/questions/35409539/corrupt-record-error-when-reading-a-json-file-into-spark
How to read files from ADLS Gen 2 in Databricks: th-cam.com/video/UrU1OskFw_I/w-d-xo.html
Connect with me on LinkedIn www.linkedin.com/in...
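As a taster for the notebook above, one common way to explode two array columns while keeping them aligned is posexplode plus a join on position; the data and column names here are made up for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import posexplode

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, ["a", "b"], [10, 20])],
    ["id", "letters", "numbers"],
)

# posexplode emits (position, value); joining on position keeps the two
# arrays zipped together instead of producing a cross product.
letters = df.select("id", posexplode("letters").alias("pos", "letter"))
numbers = df.select("id", posexplode("numbers").alias("pos", "number"))

letters.join(numbers, ["id", "pos"]).drop("pos").show()
# yields rows (1, a, 10) and (1, b, 20)
```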
Azure Data Lake | Throttling Error Part 2
89 views · 2 years ago
I was able to find the article where the SLAs are mentioned: learn.microsoft.com/en-us/azure/storage/common/scalability-targets-standard-account Follow me on LinkedIn www.linkedin.com/in/azuredataengineer/
Incremental Load | What | How | For Beginners
133 views · 2 years ago
We discuss what an incremental load is and how to implement one using Azure Data Factory. This video is for beginners. Find me on LinkedIn www.linkedin.com/in/azuredataengineer/
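The core idea is the high-watermark pattern: remember the newest modified timestamp you've loaded, and only pull rows beyond it on the next run. A minimal PySpark sketch of that logic (server, table, and column names are hypothetical; in ADF you would express the same thing with a Lookup activity and a parameterized source query):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import max as spark_max

spark = SparkSession.builder.getOrCreate()

last_watermark = "2024-01-01 00:00:00"  # normally read from a control table

incremental_df = (spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<server>;databaseName=<db>")
    .option("user", "<user>")
    .option("password", "<password>")
    .option("query",
            f"SELECT * FROM dbo.Orders WHERE ModifiedDate > '{last_watermark}'")
    .load())

# After a successful load, persist the new high watermark for the next run.
new_watermark = incremental_df.agg(spark_max("ModifiedDate")).first()[0]
```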
Azure Data Lake | Throttling Error Part 1
123 views · 2 years ago
Starts at 5:00. Find me on LinkedIn www.linkedin.com/in/azuredataengineer/
Power BI | DAX Studio | Quick Tour
84 views · 2 years ago
Last week we had to do some performance tuning on a Power BI dashboard and used DAX Studio for the first time. Sharing a quick tour with you. Shoutout to my friend Vasu (Power BI expert) for helping us explain some of the features available in DAX Studio. Connect with me on LinkedIn www.linkedin.com/in/azuredataengineer/
Azure Data Factory | How to Add Missing Parameters in Arm Template Parameter File
3K views · 2 years ago
Have you ever faced this issue? Find me on LinkedIn www.linkedin.com/in/dataengg/
Dynamic Column mapping in Copy Activity in Azure Data Factory
5K views · 2 years ago
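For context, the Copy activity accepts its column mapping as a TabularTranslator JSON object, which you can build dynamically (for example from a metadata table) and pass in through an @json(...) expression. A small Python sketch of constructing that payload; the column names are hypothetical:

```python
import json

# Source-to-sink column pairs, e.g. read from a metadata/control table.
column_pairs = [("src_id", "CustomerId"), ("src_name", "CustomerName")]

translator = {
    "type": "TabularTranslator",
    "mappings": [
        {"source": {"name": src}, "sink": {"name": dst}}
        for src, dst in column_pairs
    ],
}

# This JSON string is what the Copy activity's translator property expects.
print(json.dumps(translator, indent=2))
```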
SQL Complex Queries | Interview Questions Part 1
81 views · 2 years ago
Clustered vs Nonclustered Index Structures in SQL Server | BEST EXPLANATION
1.7K views · 2 years ago
Databricks | Access ADLS Gen 2 Using OAuth
213 views · 3 years ago
Getting Started With Repos for Databricks
2.5K views · 3 years ago
How to Deploy Azure SQL database using dacpac and Azure DevOps?
14K views · 3 years ago
iLearn - How to use Copy Into in Azure SQL DWH
180 views · 3 years ago
ADF issue - Disable Event trigger on locked ADLS v2 storage account
498 views · 4 years ago
Introduction to Azure Cognitive Search | Part 1
53 views · 4 years ago
How to Integrate ADF with Azure DevOps Git
388 views · 4 years ago
Demystifying ColumnStore Indexes With Real-World Examples
169 views · 4 years ago
How to use azure key vault in ADF Linked Service
1.8K views · 4 years ago
How to Handle Sequential Execution of Data Flows in ADF | TTL
154 views · 4 years ago
How To Create Your First ADF Pipeline | WinWire Brown Bag Session
120 views · 4 years ago
Thanks for sharing this walkthrough. I recently went through the Microsoft Learn challenge content on this topic and now I'm exploring other related content. This was helpful.
Using Azure AI Search cost me 5k Indian rupees 😮. Please be careful while provisioning the services.
The culprit was Azure Search; I ended up selecting the Standard pricing tier. You could go with the free one.
I chose Same as Worker to avoid this. 😇
You are using 20 cores with a minimum of 2 workers, so the minimum required is 40 cores while only 30 are available. Try using fewer cores.
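To make the arithmetic in that comment explicit, a trivial check (numbers taken from the comment itself):

```python
cores_per_worker = 20   # cores per worker node
min_workers = 2         # cluster's minimum worker count
available_cores = 30    # remaining regional vCPU quota

required = cores_per_worker * min_workers  # 40 cores just to start
print(required <= available_cores)         # False -> quota exceeded error
```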
Very nice, bro, very useful information in this tutorial. Concise and crisp coverage of the basics of Azure Analysis Services for anyone starting with the service.
Hey, how do I do this exact thing but with Azure Repos instead of Git? I am getting errors.
It should work. What error do you get?
@@SQL4ALL I resolved it; instead of the Git UI I used Visual Studio directly. Thanks for the reply though.
Excellent work 💌
Can you send me the code?
Please say hi at er.lokeshsharma08@gmail.com
Very helpful; some of the adjustment ideas worked.
This was helpful and resolved an issue in my project. Thanks for the video.
Hello, great session, thank you for sharing. Just one question: can we exclude objects from the deployment? For example, out of 5 tables present in the database, I only want to deploy 2.
Yes, we can do that.
In Azure Synapse, when running "Overwrite live mode" I get the error below. Can you tell me how to resolve it? I don't get why this is happening. Error code: OK. Inner error code: BadRequest. Message: Missing parameter definition for FullyQualifiedDomainName.
Super :)
Exactly what I needed. Thanks!
😂for the interruptions. Thanks for sharing.
Great help - thank you. Solved my issue.
Thank you very much❤
Do I have to create 50 pipelines for 50 DBs?
No, it's 1 pipeline per environment per deployment type; typically you want a Prod pipeline exclusively for the data related to the solution. In essence you can add as many databases as you need as part of the solution and the dacpac files. The trick is that when you configure the pipeline, you need to configure its YAML file to add the respective steps for the job to handle the different DBs. Take a look at this video: th-cam.com/video/-rZxLCRrgmI/w-d-xo.html
Hi, here you are using credentials directly; is it possible to use Key Vault?
Yes, in the DevOps pipeline specify the Key Vault option; make sure the access policy is granted and you enable the "use key vault secrets" option.
Amazing!
I am adding one parameter in the ARM template edit as shown. However, I do not see that parameter and its value when I export. Could you please suggest a fix?
Let's connect on Tuesday. Drop me a message at er.lokeshsharma08@gmail.com
Awesome work, thanks a lot.
Superb bro
Hi SQL4ALL, firstly thank you for such an informative session; I was looking for exactly this. In my use case I want to deploy the dacpac to a new SQL server. Is this possible using the same approach, just by entering the target server in the Azure DevOps pipeline?
Yes, create a new server and use parameters to supply the names.
@@SQL4ALL Thank You so much
One more question: can this be implemented using Azure runbooks? Because I have imported bacpac files using a PowerShell runbook.
What if I don't know what the tables look like, meaning I don't know the column names or how many columns there are in every table that I have? I have 23 tables in total.
Why wouldn't you know? You could query the sys.columns table to construct the metadata for the mapping.
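A hedged sketch of that suggestion: pulling every table's column names from sys.columns/sys.tables with pyodbc so the mapping metadata can be generated for all 23 tables; the connection string is a placeholder:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>;DATABASE=<db>;UID=<user>;PWD=<password>"
)

# sys.tables + sys.columns hold every table's column names and order.
rows = conn.execute("""
    SELECT t.name AS table_name, c.name AS column_name
    FROM sys.columns AS c
    JOIN sys.tables AS t ON t.object_id = c.object_id
    ORDER BY t.name, c.column_id
""").fetchall()

for table_name, column_name in rows:
    print(table_name, column_name)
```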
Great content
I was facing the same issue for a Databricks activity in an ADF pipeline. After watching this video I was able to fix it. Thanks a lot.
There are resources like a storage account, SQL server, etc. I wanted to make an ARM template for all of them in a single place, for deploying everything at once to another environment. How do I do that? Please help.
Thanks a lot my friend, you saved the day. Thanks again!
Any time brother (one data engineer to another)
17:38 that's why I hate to watch any video from India... bugs are always there...
Hey there, God bless your efforts. I am still new to SQL and have a general enquiry: how does a nonclustered index differ from a clustered index? Does it have anything to do with grouping of data while indexing? Thanks a lot.
My friend, watch the video again and again.
Great video
Hi, how can I make it an incremental deployment?
Hi Srikanth, this will be an incremental deployment.
@@SQL4ALL So this dacpac deployment will update the current DB, not overwrite it?
@@skillquench yeah
Really helpful. Can you please advise: does a deadlock cause Synapse pipeline deployment to fail?
Yes, a deadlock could cause deployment failures; ideally you should not have any active transactions while performing a deployment.
@@SQL4ALL thank you. It helped
@@SQL4ALL May I know your LinkedIn profile?
@@vijaylakshmi4573 www.linkedin.com/in/azuredataengineer
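A small sketch of the pre-deployment check implied by the advice above in this thread: look for open transactions via the sys.dm_tran_active_transactions DMV before kicking off the deployment (connection details are placeholders):

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>;DATABASE=<db>;UID=<user>;PWD=<password>"
)

# Any rows here mean transactions are still open; deploying now risks
# blocking or deadlocking against them.
active = conn.execute(
    "SELECT transaction_id, name, transaction_begin_time "
    "FROM sys.dm_tran_active_transactions"
).fetchall()

if active:
    raise SystemExit(f"{len(active)} active transaction(s); postpone deployment")
```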
Thanks for the clear explanation. Can you also explain how to solve merge conflicts in DevOps when integrated with Databricks?
Thanks, Nikhil. Yeah, merge conflicts suck. Ideally we should not have this conflict, as it suggests that someone made changes to the main branch before the commit in the dev branch. We had merge conflicts in ADF and had to decide which branch to use for the changes; it was a manual, painstaking process.
It helped me. Liked it!
I was wondering if I can merge 2 nvarchar fields together in a SQL Server table?
Could you please elaborate? Are you asking about a calculated column constructed from 2 columns?
This video could have been so much better without the people there.
Will the process be the same for Excel-to-SQL dynamic column mapping, or will it be different?
It should be the same. Test it out.
Excellent demo, Lokesh!!
Thanks brother
Suppose we have 3 columns in the source file and 4 columns in the sink table, and we want a one-to-one mapping with the additional sink column set to NULL during mapping. Is that possible? If yes, can you describe the steps?
Well, the steps would be the same as described in the video. For columns not present in the source, the source metadata values will be null and the is-active flag will be set to 0.
What should we do if we want the data types to be matched in the Copy activity?
Match the data types? Could you please elaborate on your question ❓
Great demo!
How do we set the order in which SQL scripts get deployed? What if I want TableA.sql deployed first and then TableB.sql, and then stored procedure A followed by stored procedure B? Where do we configure the script dependencies for both kinds of deployment, dacpac deployment and SQL file(s) deployment?
If you are using a dacpac, I believe it takes care of the dependencies between pre-defined objects (like stored procedures, tables, etc.). For running ad hoc scripts, use PowerShell in your CI/CD pipelines.
So no solution to the issue?
We need to remove the lock from the ADLS v2 storage account.
Good introduction to the Databricks Repos concept. What is the best practice in real-world projects that have DEV, QA & PROD kinds of environments? 1. Should we follow the Databricks Repos approach or link individual ADB notebooks with a DevOps repo? 2. I am confused by notebooks available in an ADB repo vs notebooks available in the ADB workspace. I understand the repo & workspace are totally different.
Thank you, Balan, for your comment. I believe the check-in-individual-notebook feature will soon be deprecated (I have noticed an MS warning message while using it), so in terms of best practice, using Repos is the best approach. I also haven't explored how notebooks can be seamlessly moved from Repos to the workspace. However, I think this is how it works: 1. you develop and save your notebooks in the repo; 2. then, using CI/CD, you deploy them into the DEV, QA, or PROD workspace. Hope this helps.
@@SQL4ALL Thanks for your response. In our project environment, we already have an Azure DevOps repo where we keep all the Azure artifacts (ADF pipelines, Azure SQL DB DDLs & DMLs, Databricks notebooks for data engineering & data science). We are looking for the best approach for seamless CI/CD to deploy all the Azure artifacts (ADF, ADB, Azure SQL DB) to the QA and PROD environments.
@@balanmurugan6270 Sure: for ADF use ARM template deployment, for SQL DB use dacpac deployment, and for ADB use what we discussed.
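To illustrate the CI/CD step described in this thread, here is a hedged sketch of pushing a repo notebook into a target workspace with the Databricks Workspace API (/api/2.0/workspace/import); the host, token, and paths are placeholders:

```python
import base64
import requests

host = "https://<workspace>.azuredatabricks.net"   # target workspace URL
token = "<personal-access-token>"                  # from a secret store

# Read the notebook source from the repo checkout and base64-encode it,
# as the import endpoint requires.
with open("notebooks/etl.py", "rb") as f:
    content = base64.b64encode(f.read()).decode()

resp = requests.post(
    f"{host}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "path": "/Shared/etl",      # destination in the target workspace
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,
    },
)
resp.raise_for_status()
```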
Hi, nice video 😊👌 Do you offer training sessions?
Thanks! Yes, we could discuss that. Please drop me an email at er.lokeshsharma08@gmail.com
What kind of permission is required to add the repo in Databricks (6:56)? When I click OK nothing happens (it says permission is required to add repos in Azure Databricks).
Hi Anuj, I believe you would need admin access to create a repo. Check this out: docs.microsoft.com/en-us/azure/databricks/repos
@@SQL4ALL I read it, but it is not clearly mentioned what kind of access is required. Do you have other articles where it is mentioned properly?
@@anujgupta8686 Sorry, mate. I believe you should be an Owner. Are you currently a Contributor?
@@SQL4ALL Yes, I am a Contributor.
Hello Sir, I have 3.2 years of experience in the IT industry as a manual testing engineer, and I feel like I'm lagging behind because automation seems like a black box to me. I found SQL interesting and learned it on my own. Now, in order to start a career in SQL, what should I do next? What kind of job should I apply for when I have no real-time project experience? Please share your opinions. Is there any other job a manual tester can jump to which doesn't require coding?
Hi, thanks for reaching out. You can start exploring some of the roles in your current company; have a word with HR or your manager and express your interest. Otherwise, I would suggest learning MSBI tools like SSIS, Power BI, and ADF (Azure cloud). These are also interesting and easy to learn. Then just search for BI jobs and try your luck as a fresher. Please reach out to me at er.lokeshsharma08@gmail.com
Thank you for the information, Sir 👍😀 I'll try for sure. One last thing: is it a good decision to leave an MNC job and join a startup?
@@nayanshrijain2040 That's a good question. I would say yes, but be prepared in case the startup fires you. Increase your skillset and get more exposure.