This certification demonstrates that the successful candidate has the knowledge and skills essential to professionally design and develop an efficient and scalable DataStage solution to a complex enterprise-level business problem; configure a scalable parallel environment, including clustered and distributed configurations; collect, report on, and resolve problems identified through key application performance indicators; and extend the capabilities of the parallel framework using the provided APIs (build-ops and wrappers).
Able to design and develop a scalable, complex solution using an optimal number of stages.
Able to identify the optimal data partitioning methodology.
Able to configure a distributed or non-symmetric environment.
Should be proficient with BuildOps and wrappers.
Should know how to tune a parallel application to determine where bottlenecks exist and how to eliminate them.
Should understand basic configuration issues for all relational databases and be highly proficient in at least one.
Able to improve job design by implementing new product features in DataStage v11.3.
Able to monitor DataStage jobs through the Job Log and the Operations Console.
Exam C2090-424: InfoSphere DataStage v11.3
Number of questions: 64
Number of questions to pass: 41
Time allowed: 90 minutes
Section 1: Configuration (6%)
Describe how to properly configure DataStage.
Identify tasks required to create and configure a project to be used for jobs.
Given a configuration file, identify its components and its overall intended purpose.
Demonstrate proper use of node pools.
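As background for the configuration-file objectives, a minimal parallel configuration file (the file pointed to by APT_CONFIG_FILE) might look like the sketch below. Hostnames, paths, and the pool name are placeholders; the shape is what matters: each node declares a fastname, pool memberships, and disk/scratchdisk resources, and a named pool (here "dbpool") lets specific stages be constrained to a subset of nodes.

```
{
  node "node1"
  {
    fastname "etlhost"
    pools "" "dbpool"
    resource disk "/ibm/ds/data1" {pools ""}
    resource scratchdisk "/ibm/ds/scratch1" {pools ""}
  }
  node "node2"
  {
    fastname "etlhost"
    pools ""
    resource disk "/ibm/ds/data2" {pools ""}
    resource scratchdisk "/ibm/ds/scratch2" {pools ""}
  }
}
```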
Section 2: Metadata (5%)
Demonstrate knowledge of the framework schema.
Identify the process of importing, sharing, and managing metadata.
Demonstrate knowledge of runtime column propagation (RCP).
Section 3: Persistent Storage (15%)
Explain the process of importing/exporting data to/from the framework.
Demonstrate proper use of a Sequential File stage.
Demonstrate proper usage of the Complex Flat File stage.
Demonstrate proper usage of FileSets and DataSets.
Demonstrate use of the FTP stage for remote data.
Demonstrate use of restructure stages.
Identify importing/exporting of XML data.
Knowledge of balanced optimization for Hadoop and integration of Oozie workflows.
Demonstrate proper use of the File Connector stage.
Demonstrate use of DataStage to handle various types of data, including unstructured, hierarchical, Cloud, and Hadoop.
Section 4: Parallel Architecture (9%)
Demonstrate proper use of data partitioning and collecting.
Demonstrate knowledge of parallel execution.
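The core idea behind key-based partitioning, which the partitioning objectives above cover, is that every row with the same key value lands on the same partition, so per-key work (joins, aggregations) can run independently on each node. A minimal Python sketch of that idea (not a DataStage API; the row data and partition count are illustrative):

```python
# Sketch of key-based (hash) partitioning: hash the key column and
# take it modulo the number of partitions, so rows sharing a key
# always land in the same partition.
from collections import defaultdict
from zlib import crc32

def hash_partition(rows, key, num_partitions):
    """Assign each row (a dict) to a partition by hashing its key column."""
    partitions = defaultdict(list)
    for row in rows:
        p = crc32(str(row[key]).encode()) % num_partitions
        partitions[p].append(row)
    return partitions

rows = [
    {"cust_id": 101, "amount": 20},
    {"cust_id": 202, "amount": 35},
    {"cust_id": 101, "amount": 15},
]
parts = hash_partition(rows, "cust_id", num_partitions=4)
# Both cust_id 101 rows share one partition, so a per-customer
# aggregation on each node sees the complete group.
```

Round-robin or random partitioning balances load better but gives no such per-key guarantee, which is why stage choice and partitioning method have to be decided together.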
Section 5: Databases (6%)
Demonstrate proper selection of database stages and database-specific stage properties.
Identify source database options.
Demonstrate knowledge of target database options.
Demonstrate knowledge of the different SQL input/creation options and when to use each.
Section 6: Data Transformation (12%)
Demonstrate knowledge of default type conversions, output mappings, and associated warnings.
Demonstrate proper choices of the Transformer stage vs. other stages.
Describe Transformer stage capabilities.
Demonstrate the use of Transformer stage variables.
Identify the process to add functionality not provided by existing DataStage stages.
Demonstrate proper use of the SCD stage.
Demonstrate job design knowledge of using runtime column propagation (RCP).
Demonstrate knowledge of Transformer stage input and output loop processing.
Section 7: Job Components (8%)
Demonstrate knowledge of the Join, Lookup, and Merge stages.
Demonstrate knowledge of the Sort stage.
Demonstrate knowledge of the Aggregator stage.
Describe proper usage of change capture/change apply.
Demonstrate knowledge of real-time components.
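A key point behind the Join stage objectives is that it expects both inputs sorted (and, in parallel, partitioned) on the join key, so matches can be paired in a single forward pass without holding either input in memory. A hedged Python sketch of that sort-merge idea (illustrative data and names, not DataStage code):

```python
# Sketch of the sort-merge join the parallel Join stage relies on:
# both inputs must already be sorted on the join key, so one
# forward pass over each input pairs all matching rows.
def merge_join(left, right, key):
    """Inner-join two lists of dicts, both pre-sorted on `key`."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # Pair this left row with every right row sharing the key.
            j2 = j
            while j2 < len(right) and right[j2][key] == lk:
                out.append({**left[i], **right[j2]})
                j2 += 1
            i += 1
    return out

customers = sorted([{"id": 2, "name": "B"}, {"id": 1, "name": "A"}],
                   key=lambda r: r["id"])
orders = sorted([{"id": 1, "total": 10}, {"id": 2, "total": 30}],
                key=lambda r: r["id"])
joined = merge_join(customers, orders, "id")
# → two joined rows, one per matching id
```

Lookup, by contrast, holds the reference data in memory and needs no sort, which is why input sizes drive the Join-vs-Lookup-vs-Merge choice.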
Section 8: Job Design (14%)
Demonstrate knowledge of shared containers.
Describe how to minimize sorts and repartitions.
Demonstrate knowledge of creating restart points and methodologies.
Explain the process necessary to run multiple copies of the source.
Knowledge of creating DataStage jobs that can be used as a service.
Knowledge of balanced optimization.
Describe the purpose and uses of parameter sets and how they compare with other approaches for parameterizing jobs.
Demonstrate the ability to create and use data rules using the Data Rules stage to measure the quality of data.
Demonstrate various techniques of using DataStage to handle encrypted data.
Section 9: Monitor and Troubleshoot (9%)
Demonstrate knowledge of the parallel job score.
Identify and define environment variables that control DataStage with regard to added functionality and reporting.
Given a process list, identify the conductor, section leader, and player processes.
Identify areas that can improve performance.
Demonstrate knowledge of runtime metadata analysis and performance monitoring.
Ability to monitor DataStage jobs using the Job Log and Operations Console.
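For the environment-variable objective, a few commonly cited parallel-engine reporting variables are sketched below (a non-exhaustive list; the installation path is a placeholder):

```
export APT_CONFIG_FILE=/opt/IBM/InformationServer/Server/Configurations/default.apt
export APT_DUMP_SCORE=1      # write the parallel job score to the job log
export APT_PM_SHOW_PIDS=1    # log player process IDs
export APT_RECORD_COUNTS=1   # log per-operator record counts
```

The job score produced by APT_DUMP_SCORE shows the operators, partitions, and inserted sorts/repartitions the engine actually ran, which is the starting point for most bottleneck analysis.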
Section 10: Job Management and Deployment (8%)
Demonstrate knowledge of DataStage Designer repository utilities, including advanced find, impact analysis, and job compare.
Articulate the change control process.
Knowledge of source code control integration.
Demonstrate the ability to define packages, import, and export using the ISTool utility.
Demonstrate the ability to perform admin tasks with tools such as Directory Admin.
Section 11: Job Control and Runtime Management (8%)
Demonstrate knowledge of message handlers.
Demonstrate the ability to use the dsjob command line utility.
Demonstrate the ability to use job sequencers.
Create and manage encrypted passwords and credentials files.
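For the dsjob objective, typical invocations look like the sketch below (project and job names are placeholders; available options vary by release, so treat this as illustrative):

```
# Run a job and wait for its completion status
dsjob -run -jobstatus DSProject MyLoadJob

# List jobs in a project
dsjob -ljobs DSProject

# Summarize the job log (informational, warning, fatal entries)
dsjob -logsum DSProject MyLoadJob

# Report the last run's status and timings
dsjob -jobinfo DSProject MyLoadJob
```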