JBSWiki
India
Joined Jan 8, 2022
👌Azure Data Factory Series: Remove & Add Self-Hosted Integration Runtime Node for Resource Scaling👌
Hello, Data Engineers and Cloud Enthusiasts! 👋
Welcome to another exciting episode of the Azure Data Factory Series! In this video, we’ll be walking you through how to remove a server from a Self-Hosted Integration Runtime (SHIR) that has more than one node. Whether you’re looking to replace an existing server due to resource constraints or just scaling up for performance, this tutorial will guide you step-by-step. Let's dive into the world of SHIR scaling! 🎯
What is Self-Hosted Integration Runtime (SHIR)? 🤔
Before jumping into the main topic, let's briefly recap what SHIR is. The Self-Hosted Integration Runtime (SHIR) is a key component in Azure Data Factory that enables you to connect to on-premises data sources or run complex transformations in your own infrastructure. 🌍🔗
When dealing with large amounts of data or complex workflows, you may need more than one SHIR node to share the load and ensure high availability. Multiple nodes make the process efficient, reliable, and scalable. 🚀
In this video, we focus on removing one of these nodes from SHIR without compromising the integrity of your workflows or infrastructure. Whether you're upgrading hardware or replacing outdated systems, this process ensures a smooth transition without downtime. 👌
Why Would You Want to Remove a SHIR Node? 🛠️
There are several real-world scenarios where you might need to remove a node from your SHIR setup. Here are a few:
Scaling Up Resources 📈: As your business grows, so does the load on your servers. You might need to replace an existing server with one that has more CPU power, memory, or network capabilities. In such cases, you'll need to remove the older, weaker node from your SHIR cluster.
Hardware Upgrades 🖥️: Technology evolves quickly, and hardware that was cutting-edge yesterday can become outdated today. By removing an older node, you can replace it with newer hardware that better suits your current needs.
Node Decommissioning 🛑: Sometimes, a node may have reached the end of its life cycle, and it's time to retire it. By decommissioning the node, you can ensure the rest of your SHIR setup continues to run smoothly.
Cost Optimization 💰: Running more nodes than necessary can lead to additional costs. If you no longer need as many nodes in your SHIR, removing one can help optimize costs while maintaining performance.
By removing a node safely and efficiently, you maintain operational efficiency while preparing for growth or upgrading to new systems.
Step-by-Step Guide to Removing a SHIR Node 📝
Let’s break down the exact steps you need to follow to remove a node from your SHIR setup without disrupting your workflows. Follow along to ensure you're doing it right! 🚶‍♂️🚶‍♀️
Step 1: Access Your Self-Hosted Integration Runtime Environment 🔑
First, log in to your Azure portal and navigate to the Azure Data Factory instance where your SHIR is configured.
On the left-hand side, select Manage.
From there, navigate to the Integration Runtimes section.
Select your Self-Hosted Integration Runtime from the list of available integration runtimes.
This will take you to the main dashboard of your SHIR, where you can see all the nodes currently connected to it. You’re now ready to start the removal process! 👍
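If you prefer to script this lookup rather than click through the portal, here is a minimal sketch using the azure-mgmt-datafactory Python SDK; the subscription ID, resource group, factory, and runtime names below are placeholder values, not from the video:

```python
# Minimal sketch: look up a Self-Hosted Integration Runtime with the
# azure-mgmt-datafactory SDK. All names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUB_ID = "<subscription-id>"       # placeholder
RG = "my-rg"                       # placeholder resource group
FACTORY = "my-data-factory"        # placeholder factory name
IR_NAME = "my-shir"                # placeholder runtime name

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), SUB_ID)

ir = adf_client.integration_runtimes.get(RG, FACTORY, IR_NAME)
print(ir.name, ir.properties.type)  # expect "SelfHosted"
```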
Step 2: Identify the Node You Want to Remove 🔍
Next, you’ll need to identify the specific node you want to remove from your SHIR cluster.
On the SHIR dashboard, click on Nodes to see a list of all available nodes that are part of the SHIR setup.
Look for the server name that corresponds to the node you want to remove.
Make sure you double-check that you’re selecting the correct node for removal to avoid any disruption to ongoing processes. 🛡️
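The same node list can also be pulled programmatically, which is handy for double-checking node names before you remove anything. A minimal sketch with the same placeholder names; for self-hosted runtimes, the status response carries the registered nodes:

```python
# Minimal sketch: list the nodes currently registered with the SHIR so you
# can confirm the exact node name before removing it. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

status = adf_client.integration_runtimes.get_status("my-rg", "my-data-factory", "my-shir")
for node in status.properties.nodes:  # populated for self-hosted runtimes
    print(node.node_name, node.machine_name, node.status)
```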
Step 3: Remove the Node from SHIR ❌
Now that you’ve identified the node to remove, let’s go ahead and remove it from the cluster.
Click on the Remove Node option next to the specific node.
You’ll be prompted with a confirmation window; select Yes to proceed.
🎯 Pro Tip: Removing a node doesn’t delete the SHIR entirely. It simply removes the specific node from the cluster. The other nodes will continue to operate as normal.
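For automation scenarios, the removal itself is a single SDK call. A minimal sketch with the same placeholder names; note that it deletes only the named node, mirroring the Pro Tip above:

```python
# Minimal sketch: remove a single node from the SHIR. The runtime itself
# and any other registered nodes are untouched. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

adf_client.integration_runtime_nodes.delete(
    resource_group_name="my-rg",
    factory_name="my-data-factory",
    integration_runtime_name="my-shir",
    node_name="Node_2",  # placeholder: the node identified in Step 2
)
```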
Step 4: Monitoring the Removal Process 🖥️
After initiating the removal, it’s crucial to monitor the process to ensure there are no unexpected errors or disruptions.
In the SHIR dashboard, you’ll see a progress bar indicating the status of the node removal.
Once the process is complete, you’ll receive a notification indicating that the node has been successfully removed.
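If you are scripting the removal, you can monitor it the same way, by polling the runtime status until the node disappears. A minimal sketch, assuming the placeholder names used in the earlier sketches:

```python
# Minimal sketch: poll the runtime status until the removed node is gone.
# Names and the polling interval are placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

while True:
    status = adf_client.integration_runtimes.get_status("my-rg", "my-data-factory", "my-shir")
    remaining = [n.node_name for n in status.properties.nodes]
    if "Node_2" not in remaining:
        print("Node removed; remaining nodes:", remaining)
        break
    time.sleep(15)
```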
Step 5: Test Your Integration Runtime ✅
Once the node has been removed, it’s essential to test the integration runtime to ensure everything is still running smoothly. You don’t want any workflows to break because of a missing node!
Trigger a test run of your pipelines to ensure that the remaining nodes can handle the workload without any issues.
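If you want to automate this check too, here is a minimal sketch that triggers a run of a hypothetical pipeline named CopyPipeline and waits for a terminal status:

```python
# Minimal sketch: trigger a pipeline run and wait for a terminal status to
# confirm the remaining nodes handle the workload. Names are placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = adf_client.pipelines.create_run("my-rg", "my-data-factory", "CopyPipeline")
while True:
    result = adf_client.pipeline_runs.get("my-rg", "my-data-factory", run.run_id)
    if result.status in ("Succeeded", "Failed", "Cancelled"):
        print("Pipeline run finished:", result.status)
        break
    time.sleep(10)
```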
If everything looks good, congratulations! 🎉 You’ve successfully removed a node from your SHIR cluster.
Views: 7
Videos
🛡️Azure Databricks Series: Mount Azure Blob Securely Using Secrets API🛡️
15 views · 12 hours ago
SQL Server Query Tuning Series -Table-Valued Functions: The Good, The Bad, and The Powerful @jbswiki
63 views · 14 hours ago
SQL Server Query Tuning Series - Table-Valued Functions: The Good, The Bad, and The Powerful Video Description: 🚀 Welcome to our SQL Server Query Tuning Series, where we embark on an exhilarating journey through the world of database optimization! In this episode, we shine a spotlight on the incredible Table-Valued Functions (TVFs). 📊 Discover their benefits, understand their limitations, and w...
🔧Azure Databricks Series: Mounting Azure Data Lake Storage Gen 2 using Service Principal🔧
27 views · 16 hours ago
🔧Azure Databricks Series: Mounting Azure Data Lake Storage Gen 2 using App Registration and Service Principal🔧 An error occurred while calling o412.ls. : Operation failed: "This request is not authorized to perform this operation using this permission.", 403, GET, jbadbadls2.dfs.core.windows.net/demo02?upn=false&resource=filesystem&maxResults=5000&timeout=90&recursive=false, Authori 📈 Key Benef...
🚀Azure Databricks Series: Mastering DBFS with dbutils - Step-by-Step Guide🚀
3 views · 19 hours ago
Welcome to another exciting episode of the Azure Databricks Series! 🌟 In this video, we’ll dive deep into the Databricks File System (DBFS) and explore how to use dbutils commands for efficient data handling and file management within Databricks. 🚀 🌟 What You’ll Learn in This Video 1️⃣ Introduction to DBFS and its purpose within Azure Databricks. 2️⃣ Overview of dbutils functions and their cate...
🏢Azure Data Factory Series: Boosting SHIR Reliability with Multi-Node Setup🏢
8 views · 21 hours ago
Welcome back to our Azure Data Factory Series! Today, we’re diving deep into how to add a second node to the Self-Hosted Integration Runtime (SHIR) for workload sharing and high availability 🖥️💪. Whether you're working with massive datasets or mission-critical processes, ensuring continuous uptime is a must. With this setup, even if one node goes down, your data pipeline will keep running smoot...
🛠️Azure Databricks Series: Step-by-Step Guide to Installing and Configuring Libraries🛠️
23 views · days ago
repo1.maven.org/maven2/org/apache/commons/commons-math3/3.6.1/commons-math3-3.6.1.jar Library installation attempted on the driver node of cluster 1114-121326-1z5rgkjw and failed. The library installation is unsupported on this compute. Please check the supported libraries for the compute type. Error code: FEATURE_UNSUPPORTED_ON_COMPUTE_ERROR. Error message: com.databricks.api.base.DatabricksSe...
💡Azure Databricks Series: Step-by-Step Guide to Configuring and Using the Databricks CLI💡
39 views · days ago
Download latest version of Python - www.python.org/downloads/ The term 'pip' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 'pip' is not recognized as an internal or external command, operable program or batch file. 👋 Welcome to another video in our A...
⏲️Azure Databricks Series: Step-by-Step Guide to Scheduling Jobs for Notebooks⏲️
24 views · days ago
Azure Databricks is a powerful platform that allows you to run Apache Spark clusters for big data processing, machine learning, and analytics. One of the most useful features of Databricks is the ability to automate your workflows by scheduling jobs 💻⏰. In this tutorial, we’ll go over how to: Create jobs to run your notebooks automatically ✅. Set up triggers and define the frequency of executio...
SQL Server Query Tuning Series- The Hidden Cause of Query Performance Nightmares @jbswiki #sqlserver
42 views · days ago
SQL Server Query Tuning Series - SQL Server Cardinality Estimation: The Hidden Cause of Query Performance Nightmares @jbswiki #sqlserverquerytuning 📈 The Power of Cardinality Estimation: Our quest begins with an exploration of cardinality estimation, the behind-the-scenes maestro of SQL Server's query optimization process. Picture this: you have a query that needs to fetch data from your databa...
🖥️Azure Databricks Series: Step-by-Step Guide to Creating and Using Notebooks🖥️
26 views · days ago
In Azure Databricks, notebooks are where all the magic happens ✨. A notebook is essentially an interactive environment where you can write, run, and visualize your code directly. It supports multiple languages like Python, Scala, SQL, and R. This means you can collaborate on data projects, execute data transformations, run machine learning models, and visualize your results, all in one place 🌍. ...
📂SQL Server Always On Series: Step-by-Step Guide to Syncing SQL Agent Jobs Across Replicas📂
115 views · 14 days ago
Get the script from: jbswiki.com/2024/11/12/alwayson-script-to-sync-sql-server-agent-jobs-from-primary-replica-to-secondary-replica-in-an-always-on-availability-group/ Why Automating SQL Agent Job Synchronization is Important 🎯 Understanding the risks of not synchronizing jobs between replicas. Setting Up Your SQL Server Environment 🛠️ Ensuring that your Primary and Secondary Replicas are conf...
📊🖼️Azure Databricks Series: Creating Real-Time Dashboards for Data Insights🖼️📊
Key Benefits of Using Databricks Dashboards: Real-Time Data: Dashboards are live and can be refreshed frequently, so you always have the most up-to-date information available. ⏱️ Interactive Visuals: You can include various interactive visualizations like bar charts, pie charts, line graphs, and heatmaps to analyze your data dynamically. 📊 Collaboration: Easily share your dashboards with collea...
🎯Azure Data Factory Series: Optimizing Storage Costs with ADF Data Copy🎯
30 views · 14 days ago
📢 Introduction Welcome to another episode of our Azure Data Factory Series! 🚀 In this video, we're going to tackle a real-world scenario where you need to move large volumes of data from premium storage to standard storage. This is a common challenge, especially when looking to optimize costs while maintaining data integrity. 💼 The Azure Data Factory (ADF) is a powerful tool that can help you a...
SQL Server Query Tuning Series -Boost SQL Performance:The Impact of Filter Predicates and ROW_NUMBER
32 views · 14 days ago
SQL Server Query Tuning Series -Boost SQL Performance: The Impact of Filter Predicates and ROW_NUMBER @jbswiki #querytuning 👋 Greetings, SQL adventurers! In this extensive voyage into the heart of SQL Server query optimization, we're embarking on a thrilling journey to unravel the mysteries of filter predicates and the mesmerizing powers of the ROW_NUMBER window function. Prepare to become a SQ...
📘Azure Data Factory Series: Integrating Key Vault with ADF for Enhanced Security📘
27 views · 21 days ago
SQL Server Query Tuning Series: Boost Performance by Avoiding DISTINCT @TuningSQL @jbswiki
65 views · 21 days ago
✅Azure Data Factory Series: Mastering Service Endpoints for Enhanced Security✅
26 views · 28 days ago
SQL Server Query Tuning Series -Unraveling the Impact of Functions on SQL Server Optimizer Estimates
27 views · several months ago
💼Azure Databricks Series: Creating Storage Credentials, Catalogs, and Volumes with Unity Catalog💼
🎯Azure Databricks Series: Step-by-Step Guide to Managing Catalog Access for Multiple Workspaces🎯
🛣️Azure Data Factory Series: Private Endpoints with Self-Hosted Integration Runtime🛣️
59 views · several months ago
🌍Azure Databricks Series: Unity Catalog Setup in Azure Databricks - The Next Steps🌍
🌟Azure Databricks Series: Mastering Unity Catalog - Starting with Metastore Creation🌟
🎯SQL Server Query Tuning Series: Leveraging Query Hints to Boost Execution Efficiency🎯
🚀SQL Server Query Tuning Series: Enhancing Performance with Memory Grant Improvements🚀
🚦SQL Server Query Tuning Series: How Table-Valued Functions Impact Query Parallelism🚦
🏗️SQL Server Query Tuning Series: Unlocking High-Performance Queries with Columnstore Indexes🏗️
⏰SQL Server Query Tuning Series: Troubleshooting and Fixing Query Timeouts⏰
🚀SQL Server Query Tuning Series: Enhancing Performance with Prefetch in Nested Loop Joins🚀
Thanks a lot!
Thank you for the guide!
Hi, in my case I installed .NET Framework 3.5 (includes 2.0) on Windows Server 2022, but the program does not work. When I launch a debug session with Visual Studio, it fails on the marked line: { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); if (Environment.GetCommandLineArgs().Length == 1) Application.Run(new frmSPA()); // <-- fails here else Application.Run(new frmSPA(Environment.GetCommandLineArgs())); } with System.BadImageFormatException: 'Could not load file or assembly 'AxInterop.SystemMonitor, Version=3.7.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.'
thanks brooo
You're welcome!
Any plan to start training?
Each series consists of a set of videos designed to help the community. They begin with basic concepts and gradually progress to more advanced topics. I don't have plans to start training sessions at the moment, Rahul. Thank you for your interest, and I hope you’re finding the content useful!
Awesome explanation
Glad you found it helpful!
it was very helpful - thank you
Glad it was helpful!
Awesome, it worked. Thank you.
Happy to hear that it worked!
Do you take online classes for DBA?
I have existing databases on 2 replicas; how do I add TDE? Also, the database already exists on the secondary replica; does that database need to be deleted before enabling TDE?
Hi dude, I discovered this tool today. I am mainly using it to just analyse perfmon files. Is this possible?
I'm afraid that's not possible. The input for this report is a SQLNexus database, so you won't be able to directly analyze PerfMon files.
How do we monitor the pending log block size from the primary to the secondary replicas in this situation?
Hi thank you for the great explanation. I have a question regarding point in time recovery, if we take the copy only backup on the secondary replica, could you please explain how the point in time restoration can be done on all the nodes. Thank you!
Glad you liked it. When you take a copy-only backup on the Always On secondary replica, the point-in-time recovery process involves several steps to ensure that the restoration is consistent across all nodes.
1) The copy-only backup is taken on the secondary replica. This type of backup does not affect the sequence of regular log backups, making it ideal for scenarios where you need an ad-hoc backup without disrupting the backup chain.
2) To perform a point-in-time restoration, you need to restore the backup on all nodes. This involves restoring the full backup followed by any subsequent log backups up to the desired point in time. The steps are as follows:
-> Restore the full copy-only backup on the primary replica first. Ensure that the database is in a restoring state.
-> Apply the log backups sequentially up to the point in time you want to recover. This includes all log backups taken after the full backup.
-> Once the primary replica is restored, restore the same backups on the secondary replicas. This ensures that all replicas are synchronized to the same point in time.
-> After restoring the backups, the secondary replicas will synchronize with the primary replica. This process involves applying any remaining log records to bring the secondary replicas up to date with the primary.
I hope this helps! If you have any further questions, feel free to ask.
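For anyone scripting this, here is a minimal sketch of the restore sequence above, using Python with pyodbc. The server, database name, backup paths, and STOPAT timestamp are all hypothetical, and the database must be removed from the availability group on a replica before it can be restored there:

```python
# Minimal sketch of the point-in-time restore sequence described above.
# Hypothetical values: server, database name, backup paths, STOPAT time.
# The database must be removed from the availability group before restoring.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=PrimaryReplica;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # RESTORE cannot run inside a user transaction
)
cur = conn.cursor()

# 1) Restore the copy-only full backup, leaving the database in RESTORING.
cur.execute(r"""
RESTORE DATABASE SalesDB
FROM DISK = N'\\backupshare\SalesDB_copyonly.bak'
WITH NORECOVERY, REPLACE;
""")

# 2) Apply log backups in sequence up to the desired point in time.
cur.execute(r"""
RESTORE LOG SalesDB
FROM DISK = N'\\backupshare\SalesDB_log_1.trn'
WITH NORECOVERY, STOPAT = '2024-11-12T10:30:00';
""")

# 3) Bring the database online once the target point has been reached.
cur.execute("RESTORE DATABASE SalesDB WITH RECOVERY;")
# Repeat the same sequence on the secondary replicas before rejoining the AG.
```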
SUPER USEFUL
Glad it was helpful!
Have you configured this setup on Azure cloud?
Why have you not configured quorum?
Is your DR in another network? And why is your secondary not showing in the dashboard as synchronized with the primary with automatic failover?
Helpful ji 🎉
Thanks for your interest!
Thank you so much! This helped
Happy to hear that it solved your problem. Good luck.
Thank you, do you also take Dba classes.
Each series consists of a set of videos designed to help the community. They begin with basic concepts and gradually progress to more advanced topics. I don't have plans to start training sessions at the moment. Thank you for your interest, and I hope you’re finding the content useful!
Hi, do you take classes for DBA?
Thanks for your interest. I don't take any classes.
Very useful, thanks a lot!
Thanks for your interest. Glad you liked it.
Helpful 🎉
Glad you liked it. Thanks bro!!!
Nice work! Thank you. Two questions: 1. Do I need to install the Failover Cluster Command Interface before using Get-ClusterLog, or does this work out of the box? 2. Will setting VerboseLogging higher fill up the C: drive with automatically generated log files that grow in size, or does this setting apply ONLY to the Get-ClusterLog command?
Glad you liked it. Below are my responses:
1) Yes, you need to install the Failover Clustering feature on your server before you can use the Get-ClusterLog command. The Get-ClusterLog cmdlet is part of the Failover Clustering PowerShell module, which is only available after the Failover Clustering feature has been installed. Once the feature is installed, the command works out of the box without any additional configuration.
2) Setting VerboseLogging to a higher level will indeed increase the amount of logging generated by the Failover Cluster service. This setting applies to the overall cluster logging, not just the Get-ClusterLog command. As a result, more detailed logs will be recorded, which could potentially lead to larger log files that might fill up the C: drive if not managed properly. However, the Get-ClusterLog cmdlet itself does not directly cause logs to be written to disk continuously; it merely collects existing logs into a single file for easier analysis. The VerboseLogging setting influences the amount of detail in the logs that are already being generated by the cluster service, which Get-ClusterLog then aggregates. To avoid filling up your C: drive, monitor the available disk space and manage the log files periodically, especially when VerboseLogging is enabled at a higher level.
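If it helps, here is a minimal sketch for collecting the cluster log from a script rather than an interactive session; it simply shells out to the Get-ClusterLog cmdlet, and the destination folder and 60-minute time span are placeholder values:

```python
# Minimal sketch: generate the cluster log by shelling out to the
# Get-ClusterLog cmdlet (requires the Failover Clustering feature).
# The destination folder and 60-minute time span are placeholders.
import subprocess

subprocess.run(
    [
        "powershell.exe",
        "-NoProfile",
        "-Command",
        "Get-ClusterLog -Destination C:\\Temp -TimeSpan 60",
    ],
    check=True,  # raise if the cmdlet reports a failure
)
```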
Need help with an 'Always On Availability Group' manual failover. Using the manual failover wizard, the secondary became primary; after this, using the wizard again, the previous primary became primary and the previous secondary became secondary. Then it shows the error: "Login failed for user 'domain\SUBSERVER$'. Reason: Could not find a login matching the name provided. [CLIENT: <local machine>]" and data synchronization stops.
I look forward to trying this in my environment. Thanks for the detailed explanation.
Very well described
Can we do this through the GUI in Always On?
Thank you for the question. Yes, you can create read-only routing from the GUI.
Could you please send me your email ID?
I faced and resolved a similar scenario related to the cluster; I observed the same pending state and fluctuation.
Thanks for sharing your experience. It’s interesting to hear that you encountered a similar issue with the cluster being in a pending state and fluctuating. Could you share more details on how you resolved it? It might help others facing the same problem.
Awesome, nice and simple!
Hi bro, sorry, but when I started using DTBRS I cannot find an admin account. Please let me know how I can fix this issue.
Excellent explanation 😊
Thanks!!! Glad you liked it.
It didn’t work for me; I'm getting the error: "Database cannot be opened due to inaccessible files or insufficient memory or disk space."
Well said
🎉❤ Very good. Good luck.
Nicely explained things in detail related to Azure SQL Database auditing... very informative, and thanks once again for sharing everything for free. Good work 👍👍👍
Thanks. Glad you find it useful.
So, can we say that the one with the highest selectivity should be written first in the where condition?
Yes, it’s generally a good practice to place the condition with the highest selectivity first in the WHERE clause. This helps the database engine filter out rows more efficiently, potentially improving query performance. Selectivity refers to the ability of a condition to narrow down the number of rows returned by a query. The more selective a condition, the fewer rows it returns.
@@jbswiki Thanks for this. Could you please make a video on inequality predicate estimation?
@@abhinavtiwariDBA Thanks for your interest. Will do a video soon.
Please advise on a scenario where the communication link between HQ and DR is broken and you do a forced failover. If the HQ servers come back up and form a quorum, and the DR site doesn't know, won't there be a split-brain scenario, since there are now two writable primaries, one at HQ and one at DR?
Good video
Glad you enjoyed
Under an availability group there is always 1 primary server and 1 or more secondaries. If I have a heavy script to run and want to switch the mode to async, what is the best approach? Should I change it on the primary and all secondary servers, or only the secondary? I mean inside the availability group, as shown in your video at 05:30.
Hope you are well. Only the secondary should be fine for the given scenario.
Thank you so much. Keep posting these gems. 🙏
Wow, thanks for your interest.
Please add a conclusion at the end of the video. I understand what you explained here, but I didn't understand why we are looking into those details and what we derive from them. Basically, what are we trying to achieve?
Sure will do. Thanks for your interest.
❤
Great explanation. Hope you will upload more videos.
More videos every Tuesday. Thanks for your support.
Thank you. I am watching all your videos
Thank you for your support.
Please make more such videos on internals. Thank you.
Sure, Thanks for your support!!!
Amazing video
Thank you for your support.
Amazing video. Do you take online classes?
Glad you liked it. I am doing TH-cam videos only.
@@jbswiki Any plan to start? You could think about it over the weekend.
Amazing video. Do you take classes for HA and performance tuning?
Thank you for your support. Glad you liked it. I am doing TH-cam videos only.