Thanks @itsranjan2003, but I want to check one thing.
Scenario:
I want to load 200 source tables from a MySQL DB into 200 target tables in a Snowflake DB using just one single mapping, with no transformations, and run that mapping every day.
Just run that mapping every morning and do CDC from the source data to the target data.
How is that possible, any idea?
Please try a replication task; multiple source and target tables can be used in this task (my video, section 4.3).
@itsranjan2003 Thank you.
@itsranjan2003 Is it also possible to have the source tables come from different databases and different schemas within those databases, and achieve that with a single replication task? Or should we create separate replication tasks per database and schema?
Note:
The target database and schema are constant for all the different source databases.
@kieees_7601 I think one replication task will allow tables from only one database. For multiple databases and schemas, you may need to use multiple replication tasks.
Hi Ranjan, I am trying to create a similar process, but my requirement is to pass the parameter values while executing a taskflow. I am seeing that the job always takes the values assigned in the task (I have checked the runtime override option).
Hi, have you provided the param file path and param file name in Taskflow --> Data Task --> Input Fields? Please check; it should work.
Thanks for the quick response. Yes, like PowerCenter, in Cloud the taskflow can also override the mapping task values, or the mapping task can override the mapping values. Can't we pass/set, for example, param_ora_src=$$par_src_conn and param_source_table=$$par_src_tbl_name at the task level, and define these values ($$par_src_conn, $$par_src_tbl_name) in the param file? I could see that we are passing a hardcoded object name (i.e. 'EMP') and connection name at the task level itself and then setting the same values again in the param file.
In IICS, the param value needs to be provided. You may try and see whether any other options are possible or not.
Hello Ranjan,
Thanks for the explanation. Since the demo is about parameterizing the connections and source/target objects, why are the values for the source/target objects being passed/hardcoded during task configuration? And the same parameter value was set again in the parameter file. It should ultimately be passed only from the parameter file, if I'm not wrong. Which value does it take: the one set in the mapping task or the one from the param file?
Those values are hardcoded at the MCT (mapping task) level because they are mandatory and must be given. You need to select the "Allow to be overwritten" checkbox while defining the parameter at the mapping level, so that the param values will be taken from the parameter file when the job runs.
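As a rough illustration (the connection name below is just a placeholder): with that checkbox selected, the values hardcoded in the MCT act only as defaults, and it is the parameter file entries that the job picks up at run time, e.g.
$$param_ora_src=Oracle_Dev_Conn
$$param_source_table=EMP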
Ranjan, can I dynamically create a source for every table name entry made in a metadata table?
Hi, I'm trying to run a mapping that updates three different target tables. The issue is that the three have different columns; I just need to update one column on all three, so it seems I can't use dynamic mapping tasks. What do you suggest?
Hi @itsranjan2003, while parameterizing the source and target table names, can we keep multiple source and target table names in the parameter file?
@kieees_7601 You can keep the first parameter with the first parameter value, the 2nd parameter with the 2nd parameter value, etc.
@itsranjan2003 For example, if I am running two mappings, each with different source tables and different target tables, can I use one single parameter file?
That is, can I mention two different source tables and two different target tables from different mappings in the same single parameter file?
Like first_src_table=name
first_tgt_table=name
(the above for the first mapping), and
second_src_table=name
second_tgt_table=name
(the above for the second mapping)?
@kieees_7601 Yes, you can use this.
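For example, since the parameter names differ, both sets of entries can simply sit in the same file; if you want to scope them per mapping task, the file can also be split into sections (the project, folder, and task names below are placeholders, and the exact section-header syntax is worth confirming against the IICS parameter file documentation):
#USE_SECTIONS
[MyProject].[MyFolder].[mct_first_mapping]
$$first_src_table=SRC_TABLE_1
$$first_tgt_table=TGT_TABLE_1
[MyProject].[MyFolder].[mct_second_mapping]
$$second_src_table=SRC_TABLE_2
$$second_tgt_table=TGT_TABLE_2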
Sir, how can I compare the columns and data types of the source before executing the mapping, and act based on the comparison results?
To verify the columns and data types, you need to use a hardcoded DB/file connection and tables; only then will you be able to see the table columns and data types. Please refer to my video 6.7.
I saw that; it is related to parameterization. All I want is to extract all the columns and their data types from the source and compare them with the previous day's structure of the same source.
Do you know of any function which can extract the column names and data types as a comma-separated list?
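Something along these lines is what I have in mind, if it can be done with a query (just a sketch: Snowflake-style LISTAGG shown, MySQL would use GROUP_CONCAT, and the schema/table names are placeholders):
SELECT LISTAGG(column_name || ' ' || data_type, ', ')
       WITHIN GROUP (ORDER BY ordinal_position) AS column_list
FROM information_schema.columns
WHERE table_schema = 'MY_SCHEMA'
  AND table_name = 'MY_TABLE';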
@ranjan I tried this approach, but when I have field mappings defined, they get wiped out after parameterizing the object names.
Yes, the field mapping won't be there if you parameterize the objects. So the approach is to select the source and target objects, map the fields, and then parameterize the source and target.
Thanks. Can you also do a video on parameter sets in taskflows?
@dreamraja477 For param sets, some param set CLI tools and utilities are required for uploading and downloading. Though it's a one-time activity, the process is lengthy.
Hello sir, I would like to migrate multiple tables from Teradata into S3, but while selecting multiple sources I am not able to select all the tables; only one table gets selected.
I have a requirement to migrate multiple tables through only one mapping task.
Can you please help me, sir?
@mdsaad9190-l2e Hi, please try a replication task.
What if I have a source query of the type "select * from schema.table" and I want the schema to be parameterized? I have schema A for the dev environment and schema B for QAT. I am trying to define an input param with the type as string, but it's not working.
For the schema, define the parameter type as "connection", and for tables define the parameter type as "object" while creating the input parameters.
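One common way to handle the dev/QAT split (the connection and parameter names below are placeholders, not from this thread) is to keep one parameter file per environment, each assigning the same connection parameter to an environment-specific connection that points at the right schema:
$$param_src_conn=DEV_CONN_SCHEMA_A (in the dev parameter file)
$$param_src_conn=QAT_CONN_SCHEMA_B (in the QAT parameter file)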
Hi,
if I want to add transformation logic in a parameterized mapping, how can we add it?
Use a hardcoded connection/object, then update the fields, then parameterize the connection/object.
Hi sir,
if we parameterize the Source transformation with a connection parameter and an object parameter, the incoming fields won't be there, so how can we transform the data in this case if we want to change a date format?
Is there any way we can both parameterize and transform the data?
Use a hardcoded connection/object, then update the fields, then parameterize the connection/object.
@itsranjan2003 I still don't understand how this will work. When I hardcode and update the fields, everything seems fine in the target field mapping, but when I parameterize the source, the target shows some fields as unavailable for field mapping.
@ShaileshNavghare-n6x Please watch my video in section 6.7.
Can I use the source object parameter in the query? If possible, how do I do that?
Select Query as the source type and, in place of the table name, put the parameter. Before that, the parameter should be defined as a mapping input parameter and be present in the parameter file.
Thank you for your quick response. In the case of a query, should I use $$Param_Table_Name or only Param_Table_Name? For example, (Select col1, col2 from Param_Table_Name) or (Select col1, col2 from $$Param_Table_Name)?
@seeanj1 Use $$Param_Table_Name.
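To illustrate (the column and parameter names are the ones from the example above, and EMP is a placeholder table): the query references the parameter with the $$ prefix, and the parameter file supplies the actual table name:
Query in the source: Select col1, col2 from $$Param_Table_Name
Parameter file entry: $$Param_Table_Name=EMP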