Tech Tutorials
India
Joined 14 Feb 2014
Hi there! If you are looking to learn niche software skills, you are in the right place. Please like and subscribe to the channel.
Informatica Advanced Concept - Incremental load
Incremental load is the activity of loading only new or updated records from the source into an established data mart. Incremental loads are useful because they run far more efficiently than full loads, particularly for large data sets. A minimal sketch of the idea appears below the tags.
#informatica #incrementalload
#advancedinformatica
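A minimal Python sketch of the idea, assuming a SQLite source/target and hypothetical table names (employees, employees_mart, etl_watermark); a real Informatica mapping would typically hold the watermark in a mapping variable or a control table:

```python
# Minimal sketch of an incremental (delta) load. The "watermark" is the
# timestamp of the last successful run; only rows changed since then
# are pulled from the source.
import sqlite3
from datetime import datetime, timezone

def incremental_load(src: sqlite3.Connection, tgt: sqlite3.Connection) -> None:
    row = tgt.execute(
        "SELECT last_loaded_at FROM etl_watermark WHERE id = 1").fetchone()
    watermark = row[0] if row else "1970-01-01 00:00:00"

    # Pull only new or updated records from the source.
    rows = src.execute(
        "SELECT id, name, updated_at FROM employees WHERE updated_at > ?",
        (watermark,)).fetchall()

    # Upsert the delta into the data mart.
    tgt.executemany(
        "INSERT INTO employees_mart (id, name, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, "
        "updated_at = excluded.updated_at", rows)

    # Advance the watermark for the next run.
    tgt.execute(
        "INSERT OR REPLACE INTO etl_watermark (id, last_loaded_at) VALUES (1, ?)",
        (datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S"),))
    tgt.commit()
```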
Views: 5,205
Videos
How to print patterns in Informatica?
5K views · 4 years ago
Source-
10,A
20,P
20,Q
10,B
10,C
20,R
20,S
Target-
10,A
10,A,B
10,A,B,C
20,P
20,P,Q
20,P,Q,R
20,P,Q,R,S
#informatica #informaticascenarios #informaticainterviews
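A minimal Python sketch of the pattern logic, assuming the same approach as the mapping (sort by id, then accumulate values the way a variable port in an Expression transformation would):

```python
# Sketch: cumulative concatenation per id after sorting.
# Data is taken from the scenario above.
rows = [(10, "A"), (20, "P"), (20, "Q"), (10, "B"), (10, "C"),
        (20, "R"), (20, "S")]

acc = {}                              # id -> accumulated string so far
for key, value in sorted(rows):       # sort by id, like a Sorter transformation
    acc[key] = f"{acc[key]},{value}" if key in acc else value
    print(f"{key},{acc[key]}")        # 10,A / 10,A,B / ... / 20,P,Q,R,S
```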
How to dynamically create target files based on input conditions using Transaction Control?
15K views · 4 years ago
You need to generate multiple flat files from a single data source using an Informatica mapping, based on the source data content or some other business rule. For example: create separate target files and load employees into them department-wise, without knowing the total number of departments in advance. #informaticinterview #transactioncontrol #informaticascenarios
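A hedged Python sketch of the effect: one output file per department from a single source. The actual mapping uses a Transaction Control transformation that commits on each change of the sorted department key; the file names and sample data here are hypothetical:

```python
# Sketch: write one flat file per department, not knowing the number
# of departments in advance.
import csv
from itertools import groupby
from operator import itemgetter

employees = [("HR", "Ann"), ("IT", "Bob"), ("HR", "Cara"), ("IT", "Dan")]

# Sort by department (like a Sorter), then cut a new file per group
# (the effect of TC_COMMIT_BEFORE on a key change).
for dept, group in groupby(sorted(employees, key=itemgetter(0)),
                           key=itemgetter(0)):
    with open(f"emp_{dept}.csv", "w", newline="") as f:
        csv.writer(f).writerows(group)
```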
How to load header, footer and detail records into 3 different targets in Informatica?
12K views · 4 years ago
The scenario is described below.
Input file:
a
b
c
d
e
f
Target1: a (header)
Target2: b, c, d, e (detail)
Target3: f (footer)
#informatica #informaticascenarios #informaticainterview
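A minimal Python sketch of the split, assuming a hypothetical input.txt; the video builds the same routing with transformations:

```python
# Sketch: first record -> Target1 (header), last record -> Target3
# (footer), everything in between -> Target2 (detail). Assumes the
# file has at least two records.
with open("input.txt") as f:
    lines = f.read().splitlines()

with open("target1.txt", "w") as t1, \
     open("target2.txt", "w") as t2, \
     open("target3.txt", "w") as t3:
    t1.write(lines[0] + "\n")                     # header: a
    t2.writelines(l + "\n" for l in lines[1:-1])  # detail: b c d e
    t3.write(lines[-1] + "\n")                    # footer: f
```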
How to convert row data into column data in Informatica?
13K views · 4 years ago
The scenario is given below.
The source data looks like:
id, value
10, a
10, b
10, c
20, d
20, e
20, f
The target table data should look like:
id, col1, col2, col3
10, a, b, c
Note: check the Aggregator properties carefully, or you will drop records.
#informatica #informaticascenarios #informaticainterviews
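A hedged Python sketch of the row-to-column pivot: collect the values per id (what the Aggregator with variable ports effectively does), then emit one row per id:

```python
# Sketch: pivot (id, value) rows into id, col1, col2, col3.
from collections import defaultdict

rows = [(10, "a"), (10, "b"), (10, "c"), (20, "d"), (20, "e"), (20, "f")]

pivot = defaultdict(list)             # id -> values in input order
for key, value in rows:
    pivot[key].append(value)

for key, values in sorted(pivot.items()):
    print(key, *values[:3])           # 10 a b c / 20 d e f
```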
How to convert column data into row data in Informatica without using the normalizer transformation?
11K views · 4 years ago
The scenario is described below.
The source data looks like:
col1, col2, col3
a, b, c
d, e, f
The target table data should look like:
Col
a
b
c
d
e
f
#informatica #informaticascenarios #informaticainterviews
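A minimal Python sketch of the unpivot; in the mapping this is done without a Normalizer, for example by routing each column port through its own pipeline and Union-ing the results:

```python
# Sketch: turn each column value of every row into its own output record.
rows = [("a", "b", "c"), ("d", "e", "f")]

for row in rows:
    for value in row:     # one output record per column value
        print(value)      # a b c d e f, one per line
```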
How to load previous salary for current row in Informatica?
10K views · 4 years ago
Design a mapping to get the previous row's salary for the current row. If no previous row exists for the current row, the previous row salary should be displayed as null.
Input:
employee_id, salary
10, 1000
20, 2000
30, 3000
40, 5000
The output should look like:
employee_id, salary, pre_row_salary
10, 1000, NULL
20, 2000, 1000
30, 3000, 2000
40, 5000, 3000
#informaticascenarios...
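A minimal Python sketch of the LAG-style logic: hold the previous row's salary in a variable, the way the Expression transformation does with a variable port (where port evaluation order matters):

```python
# Sketch: emit each row together with the previous row's salary;
# the first row gets NULL (None).
rows = [(10, 1000), (20, 2000), (30, 3000), (40, 5000)]

prev_salary = None
for emp_id, salary in rows:
    print(emp_id, salary, prev_salary)   # 10 1000 None / 20 2000 1000 / ...
    prev_salary = salary
```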
How to load unique records in one file and duplicate records in another with minimum one occurrence?
14K views · 4 years ago
Design a mapping to load each product once into one table and the remaining duplicated products into another table.
The first table should contain:
A
B
C
D
The second table should contain:
B
B
C
#informatica #informaticascenarios #informaticainterview
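A hedged Python sketch of the split, assuming a sample input of A, B, B, B, C, C, D: each product goes to the first target once, and every extra occurrence goes to the second target:

```python
# Sketch: first occurrence -> target 1, every repeat -> target 2.
products = ["A", "B", "B", "B", "C", "C", "D"]

seen = set()
uniques, duplicates = [], []
for p in products:
    (duplicates if p in seen else uniques).append(p)
    seen.add(p)

print(uniques)      # ['A', 'B', 'C', 'D']
print(duplicates)   # ['B', 'B', 'C']
```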
How to load unique records in one file and duplicate records in a second file without losing records?
17K views · 4 years ago
Here you will learn a scenario where we separate unique and duplicate data from a source file without losing any data. This is a complex mapping and a must for all experienced Informatica developers. The solution may look like an easy approach, but it's not; as you go through the lecture, you will learn it. Design a mapping to load all unique products in one table and the duplicate rows in another ...
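A hedged Python sketch of the no-loss variant, assuming the same sample input: values occurring exactly once go to one target, while values occurring more than once go to the other target with all of their occurrences (in the mapping, the Aggregator count is joined back to the detail rows):

```python
# Sketch: route by total occurrence count so no record is dropped.
from collections import Counter

products = ["A", "B", "B", "B", "C", "C", "D"]
counts = Counter(products)            # total count per value

uniques = [p for p in products if counts[p] == 1]    # ['A', 'D']
duplicates = [p for p in products if counts[p] > 1]  # ['B','B','B','C','C']
```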
How to extract first and last record from flat file?
15K views · 4 years ago
In this lecture, we learn how to extract the header and trailer, i.e. the first and last record, from a file. We will understand the usage of the Rank transformation and the Union transformation. This question is frequently asked in interviews. #informatica #informaticascenarios #informaticainterview
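A minimal Python sketch of the idea, assuming a hypothetical input.txt; the mapping uses two Rank transformations (top 1 and bottom 1) merged with a Union:

```python
# Sketch: extract the first (header) and last (trailer) records.
with open("input.txt") as f:
    lines = f.read().splitlines()

if lines:
    print(lines[0])    # first record
    print(lines[-1])   # last record
```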
How to send alternate records to two different targets - two approaches
26K views · 4 years ago
This is a scenario-based question often asked in Informatica interviews. It can also be phrased as: send odd and even records to separate tables. It's good to understand the logic rather than just memorizing it from websites. #informatica #scenariobased #informaticaInterview
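A minimal Python sketch of one of the approaches: number the records with a running sequence (a Sequence Generator in the mapping) and route odd and even numbers to different targets:

```python
# Sketch: alternate records to two targets via a row number.
records = ["r1", "r2", "r3", "r4", "r5"]

odd  = [r for i, r in enumerate(records, start=1) if i % 2 == 1]
even = [r for i, r in enumerate(records, start=1) if i % 2 == 0]
print(odd)    # ['r1', 'r3', 'r5']
print(even)   # ['r2', 'r4']
```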
AWS Lambda - Part 2 Lab: S3 triggers an AWS Lambda function
292 views · 4 years ago
In this lecture, we will see how an operation on an S3 bucket invokes a Lambda function. Be careful when granting permissions to S3 and CloudWatch. #AWS #AWSLambda #Lambdademo
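A minimal sketch of an S3-triggered Lambda handler in Python; the event shape follows the standard S3 notification format:

```python
# Sketch: log which object changed in which bucket when S3 invokes us.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"Object {key} changed in bucket {bucket}")
    return {"statusCode": 200}
```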
AWS Lambda function for beginners - Part 1
94 views · 4 years ago
This lecture is about the importance of the Lambda function. How is it better than EC2 and ECS? What is serverless computing? Most important are the real-time scenarios and Lambda's limitations. Very important for AWS certifications. #AWS #AWSLambda #AWScertification
S3 Lifecycle Rules - Important for AWS Certified Solutions Architect and Developer
101 views · 4 years ago
An S3 Lifecycle configuration is a set of rules that define actions Amazon S3 applies to a group of objects. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after creating them, or archive objects to the S3 Glacier storage class one year after creating them. #S3lifecycle #AWS #AWScertified
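A hedged boto3 sketch of the example in the description (the bucket name is hypothetical):

```python
# Sketch: transition objects to Standard-IA after 30 days and archive
# to Glacier after one year.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-then-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},            # apply to all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```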
AWS S3 Storage Classes- AWS Certified Exam Essential Topic
48 views · 4 years ago
In this lecture, we will discuss the different storage classes available in S3: how a user chooses one, and what costs and features each offers. Important points for the exam are also covered. For more info, please visit aws.amazon.com/s3/storage-classes/ #S3storageclasses #AWSExam #AWS
S3 Encryption: AWS Associate-level exam: theory + lab
165 views · 4 years ago
S3 Encryption: AWS Associate-level exam: theory + lab
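A minimal boto3 sketch of server-side encryption on upload (bucket, key and body are hypothetical):

```python
# Sketch: upload an object with SSE-S3 encryption.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-example-bucket",
    Key="report.csv",
    Body=b"id,value\n1,a\n",
    ServerSideEncryption="AES256",   # SSE-S3; "aws:kms" would use SSE-KMS
)
```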
AWS S3 versioning explained - Demo included
984 views · 4 years ago
AWS S3 versioning explained, demo included.
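A minimal boto3 sketch of enabling versioning (bucket name is hypothetical):

```python
# Sketch: turn on versioning for a bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```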
I tried to create files in a Unix shared path and it worked, but the files don't have write access (chmod 777). How do we apply chmod 777 to files that are created dynamically? I know we can pass the command in the post-session success command, but these dynamically created files take their names from the mapping, so how should we reference them?
Hi, one doubt! Why did you map the output that came from the source? We calculated the new port in the expression, right? Shouldn't we map the new output port to the target? If not, then how is it working?
Thanks
Could it be that the sequence transformation goes from 1 to 2, and then sends records with 1 to the odd table and those with 2 to the even table? Could this approach work for the next case, where you have 3, 4 or X number of target tables?
Can't we use a Sorter after the Union to sort the data alphabetically?
Thanks
May I know what type of questions, and at what depth, will be asked for 12+ years of experience?
My doubt is: I can get the same result if I put count=1 in the Router; then we don't need the Expression transformation, and we can join the Sorter and the Aggregator. I am getting the same result. Can you please help me understand if there is any problem if we remove the Expression?
Hi, I have tried this scenario but I can only see one output file in the target file folder. The other 3 CSV files are not getting generated dynamically. Can you please help me understand what the reason could be?
Can you let me know if this works the same when there are a few more columns in the source?
3 & 4
Thank you sir, it helped in my implementation.
I think we don't need the Joiner in this scenario; we can just pass the count from the Aggregator to route the data.
I think in that way we will lose records in the duplicate file; it will give each duplicate only one time.
It doesn't work when we go to add rows like id 10, val d and id 20, val g.
All my data is loading into the middle target. I'm not getting any data in the header and footer. Can someone please help me?
You might be facing that issue due to the Sequence Generator's stored current value. Enable Reset and set the current value to 1.
For the join condition, can we use NEXTVAL instead of a dummy column? Is that possible?
Can you make a video on cracking the Informatica interview?
Hi, without selecting the Cycle checkbox and just giving the end value, does NEXTVAL still give alternating 1 and 2?
You can use a Sequence Generator and an Expression for A-->a, B-->bb, C-->ccc, D-->dddd, E-->eeeee. Let's assume the input string is named VALUE and the total length is named TOTAL_LENGTH. Then you will need the following additional ports in the EXP:
- One variable port v_length, type Integer, with this expression term: LENGTH(VALUE)
- One variable port v_pad_val, type String(250), with this expression term: LPAD(SUBSTR(VALUE, 1, 1), 250, SUBSTR(VALUE, 1, 1))
- One output port REPEATED_VALUE with this expression: SUBSTR(v_pad_val, 1, TOTAL_LENGTH)
If a source has 100 rows and the last 3 rows need to be loaded into a separate target, how can that be done?
Use a Sequence Generator first and then Rank (top 3).
Can you please include a scenario for pyramid/triangle data like A-->a, B-->bb, C-->ccc, D-->dddd, E-->eeeee? Please explain the logic to execute this one, not using the Normalizer.
**Copied
You can use a Sequence Generator and an Expression for A-->a, B-->bb, C-->ccc, D-->dddd, E-->eeeee. Let's assume the input string is named VALUE and the total length is named TOTAL_LENGTH. Then you will need the following additional ports in the EXP:
- One variable port v_length, type Integer, with this expression term: LENGTH(VALUE)
- One variable port v_pad_val, type String(250), with this expression term: LPAD(SUBSTR(VALUE, 1, 1), 250, SUBSTR(VALUE, 1, 1))
- One output port REPEATED_VALUE with this expression: SUBSTR(v_pad_val, 1, TOTAL_LENGTH)
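A minimal Python sketch of the expression logic quoted above; mapping each value's position to TOTAL_LENGTH, and the lowercasing, are assumptions made here to match the sample output:

```python
# Sketch of the LPAD + SUBSTR pyramid logic: repeat the first character
# of VALUE, TOTAL_LENGTH times.
values = ["A", "B", "C", "D", "E"]

for total_length, value in enumerate(values, start=1):
    padded = value[0] * 250                    # LPAD(SUBSTR(VALUE,1,1), 250, ...)
    repeated = padded[:total_length].lower()   # SUBSTR(v_pad_val, 1, TOTAL_LENGTH)
    print(f"{value}-->{repeated}")             # A-->a, B-->bb, C-->ccc, ...
```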
Thanks!!
Thank you, sir.
Hi, I have created the mapping as you explained and I got the output as expected. I have a doubt: we got the output in CSV format. Is there any possibility to create the output in tables?
No
These scenarios are the best.
Your videos are great, bro 👍👍👍
Very good
I don't know why people don't subscribe to your channel... Your content is really great, bro 👍👍
We set the sequence as sequence+1 in the Expression transformation, but we didn't define any initial value like sequence=0 to add 1 to. Can you explain that?
As this is an implementation of SQL's LAG(), how can we implement SQL's LEAD() functionality in Informatica in the same way?
Yes, a similar implementation.
It helped me!! Thanks
In case you need to reload from the beginning, how do you reset the mapping variable using a script?
Thanks for the clear explanations! I have a question: can we achieve this scenario by flagging duplicate records after sorting?
IIF(current_input_data = prev_input_data, 'N', 'Y')
A Y
B Y
B N
B N
C Y
C N
D Y
And then use a Router to direct records with flags Y and N separately:
A B C D
B B C
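A minimal Python sketch of the commenter's flag idea: after sorting, the first occurrence gets 'Y' and repeats get 'N', then a Router-style split sends each flag to its own target:

```python
# Sketch: flag-and-route duplicates after sorting.
data = sorted(["A", "B", "B", "B", "C", "C", "D"])

prev = None
firsts, repeats = [], []
for value in data:
    flag = "N" if value == prev else "Y"       # IIF(curr = prev, 'N', 'Y')
    (firsts if flag == "Y" else repeats).append(value)
    prev = value

print(firsts)    # ['A', 'B', 'C', 'D']
print(repeats)   # ['B', 'B', 'C']
```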
I have a doubt about what the value of pattern will be at the beginning; I'm unable to decode that IIF condition. Could you explain with one sample, working through rough records on paper?
At the beginning, current_id != prev_id, so pattern takes the department value. In this scenario, the first value of pattern will be 'A'.
@@jayashree.j1777 Thank you
@@jayashree.j1777 what's the value of prev_id in the first iteration?
@@whytushar it would be NULL
Thank you so much !! It was really helpful.
Hi, in the same way, can we create tables in the DB dynamically, on the fly?
no
Sir, what if there are multiple footers? Can we use the same logic?
Nice tutorial. I have one question:
Input: A, B, C, D
Output: A, NULL, B, NULL, C, NULL, D, NULL
How can we achieve this?
Hi, I have a scenario where there is one source flat file and the target is 3 tables/flat files. The first time, all the data should go to table 1; in the 2nd run, all the data should go to table 2; in the 3rd run, all the data should go to table 3; and the same process has to repeat. Could you please help me with the solution?
No need for the Aggregator and Joiner; use a Router and give the conditions final_count = 1 and final_count > 1.
If you observe the scenario, it keeps at least one occurrence of the duplicate data; every subsequent entry needs to be moved to the other file. That's the catch in this tutorial. Look at the video description for more details and you will get it.
@@TechTutorETLandAWS Can't we do this just by using the "final_occurences" output port in the Expression transformation, and then using a Router with the condition FINAL_OCCURENCE = 1 to load the first occurrence of each input value to Target 1, and a second condition FINAL_OCCURENCE > 1 to load the subsequent entries to the second target?
@@veenarajan5491 FINAL_OCCURENCE > 1 is valid only if there are two occurrences of a value. If there are more than two, it will load consecutive occurrences as well, which isn't correct.
@@vineethprabhu8560 Even this will load consecutive occurrences into the duplicate file. I still feel this can be done with just a Router transformation. @tectutorETLandAWS Please explain this.
Why can't we just use a cyclic Sequence Generator and route the records? Is my approach correct? For alternate records we would use a cyclic Sequence Generator with values 1 and 2, then route the records with 1 into one target and those with 2 into another target.
Yes, you can use that; it's also one of the ways.
Excellent sir. Very good explanation
How do we proceed if we have a situation where the target files should be created according to the number of input files?
Why don't you post more videos?
Sir, if we use only an Expression, then how can we generate it? Please advise. Second, if we consider only this mapping, I have a doubt about why we have used a Joiner here.
We can use LAG function instead.
If we use a group-by condition on the newvalue column, doesn't it create a single index for each newvalue? At the end, shouldn't the Aggregator output be a1, b1, c3, and not a1, b1, c1, c2, c3?
That's why he used the Joiner, to solve the issue you describe.
Thank you!!
Hi, I have created the mapping as you explained in your videos, but in my scenario only one CSV file is generated, with the admin department and all the records in it. Could you please let me know what the reason could be?
Are you mapping the columns as mentioned in the video? In the Expression box, the FLAG field must lie between the CURR_VALUE and PREV_VALUE fields.
Sir, can we use a mapping variable to store the count of records? Then we would not need a Joiner to compare each record with the total count.