Learn how to use DataStage from beginner level to advanced techniques, taught by experienced working professionals. With our DataStage Training in Chennai you’ll learn expert-level concepts in a practical, hands-on manner.
DataStage is an ETL tool in the IBM InfoSphere suite that helps you design, develop, and run jobs to build data warehouses, data marts, and data mining solutions for analysis.
As a decision-support tool, DataStage integrates data across multiple systems.
What you can do with DataStage:
- Powerful, scalable ETL platform
- DataStage builds, manages, and expands data.
- DataStage builds solutions faster and gives users access to the data and reports they need.
- Design the jobs that Extract, Integrate, Aggregate, Load, and Transform the data for your Data Warehouse or Data Marts.
- Create and reuse metadata and job components.
- Run, Monitor, and Schedule these jobs.
- Administer your development and execution environments.
- Supports Big Data and Hadoop
- Supports real-time data integration (SAP, ERP)
- Supports workload distribution
- Interfaces with multiple databases
- Metadata-driven productivity, enabling collaboration.
DataStage jobs are highly scalable thanks to parallel processing, implemented through partitioning and pipelining mechanisms. DataStage supports both SMP and MPP architectures.
DataStage’s popular stages include Lookup, Join, Merge, Change Apply, Change Capture, Link Collector, Link Partitioner, and Slowly Changing Dimension. It handles sequential files, XML files, database files, and complex flat files, and with its NLS (National Language Support) capability DataStage can process Unicode data.
A key advantage of DataStage is its user-friendly GUI, which is easy to learn and makes jobs quick to design, develop, and deploy. Gartner’s benchmark report on DataStage places it among the leading ETL tools in the market.
|  |  |
| --- | --- |
| Category | Data Warehousing (ETL Tool) |
| Official URL | DataStage Training |
| Demo Classes | At Your Convenience |
| Training Methodology | 10% Theory & 90% Practical |
| Course Duration | 30-40 Hours |
| Class Availability | Weekdays & Weekends |
| For Demo Class | Email ID - firstname.lastname@example.org |
DataStage Training Syllabus
- DataStage Architecture
- DataStage Clients
- DataStage Workflow
Types of DataStage Jobs
- Parallel Jobs
- Server Jobs
- Job Sequences
Setting up DataStage Environment
- DataStage Administrator Properties
- Defining Environment Variables
- Importing Table Definitions
Creating Parallel Jobs
- Design a simple Parallel job in Designer
- Compile your job
- Run your job in Director
- View the job log
- Command Line Interface (dsjob)
Accessing Sequential Data
- Sequential File stage
- Data Set stage
- Complex Flat File stage
- Create jobs that read from and write to sequential files
- Read from multiple files using file patterns
- Use multiple readers
- Null handling in Sequential File Stage
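Null handling in the Sequential File stage works by designating a "null field value": any field in the flat file that matches that marker is read back as NULL. A minimal Python sketch of the idea (the empty-string marker and the column layout here are assumptions for illustration, not DataStage defaults):

```python
import csv
import io

# Conceptual sketch (plain Python, not DataStage): map a chosen marker
# in a delimited file to NULL on read, as the Sequential File stage's
# "Null field value" property does. We assume the marker is "".
NULL_MARKER = ""

def read_sequential(text, null_marker=NULL_MARKER):
    """Read comma-delimited rows, converting the null marker to None."""
    reader = csv.reader(io.StringIO(text))
    return [[None if field == null_marker else field for field in row]
            for row in reader]

rows = read_sequential("1001,Smith,\n1002,,London\n")
# rows[0][2] is None (missing city); rows[1][1] is None (missing name)
```

On write, the same mapping runs in reverse: NULLs in nullable columns are emitted as the configured marker value.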
Parallel Processing Architecture
- Describe the parallel processing architecture
- Describe pipeline and partition parallelism
- List and describe partitioning and collecting algorithms
- Describe configuration files
- Explain OSH & Score
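Partition parallelism can be illustrated outside DataStage. A hedged Python sketch of key-based (hash) partitioning, the algorithm that routes every row with the same key to the same partition so that downstream joins and aggregations work correctly in parallel (row and key names are invented):

```python
import zlib

# Conceptual sketch of hash partitioning: hash the key column and take
# it modulo the partition count, so equal keys always land together.
def hash_partition(rows, key, num_partitions):
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        p = zlib.crc32(str(row[key]).encode()) % num_partitions
        partitions[p].append(row)
    return partitions

rows = [{"cust": "A", "amt": 10}, {"cust": "B", "amt": 5},
        {"cust": "A", "amt": 7}]
parts = hash_partition(rows, "cust", 4)
# Both "A" rows are guaranteed to sit in the same partition.
```

DataStage offers other methods too (round robin, range, entire, same); hash is the one most often paired with key-based stages.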
Combining Data
- Combine data using the Lookup stage
- Combine data using the Merge stage
- Combine data using the Join stage
- Combine data using the Funnel stage
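The Lookup stage holds its reference input in memory and probes it as the primary input streams through; a "lookup failure" rule decides whether unmatched rows continue (with NULLs), are dropped, or fail the job. A minimal Python sketch of a "Continue"-style lookup (field names are invented for illustration):

```python
# Conceptual sketch of the Lookup stage: index the reference input on
# the lookup key, stream the primary input past it, and let unmatched
# rows continue with a NULL in the looked-up column.
def lookup(primary, reference, key, ref_field):
    ref_index = {r[key]: r for r in reference}   # reference held in memory
    out = []
    for row in primary:
        match = ref_index.get(row[key])
        enriched = dict(row)
        enriched[ref_field] = match[ref_field] if match else None
        out.append(enriched)
    return out

orders = [{"cust": 1, "amt": 50}, {"cust": 2, "amt": 20}]
customers = [{"cust": 1, "name": "Smith"}]
result = lookup(orders, customers, "cust", "name")
# cust 2 has no match, so it continues with name=None.
```

Join and Merge achieve similar results but expect both inputs sorted and partitioned on the key, which scales to reference data too large for memory.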
Sorting and Aggregating Data
- Sort data using in-stage sorts and Sort stage
- Combine data using Aggregator stage
- Remove Duplicates stage
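Both the Aggregator (in sort mode) and the Remove Duplicates stage require key-sorted input, which is why a Sort stage or an in-stage sort usually precedes them. A hedged Python sketch of the same pipeline (column names are invented):

```python
from itertools import groupby
from operator import itemgetter

# Conceptual sketch: sort first, then group adjacent equal keys, as the
# Sort stage -> Aggregator / Remove Duplicates pattern does.
def aggregate_sum(rows, key, value):
    rows = sorted(rows, key=itemgetter(key))          # the Sort step
    return [{key: k, "total": sum(r[value] for r in grp)}
            for k, grp in groupby(rows, key=itemgetter(key))]

def remove_duplicates(rows, key):
    rows = sorted(rows, key=itemgetter(key))          # sorted input required
    return [next(grp) for _, grp in groupby(rows, key=itemgetter(key))]

sales = [{"region": "N", "amt": 5}, {"region": "S", "amt": 3},
         {"region": "N", "amt": 2}]
totals = aggregate_sum(sales, "region", "amt")
# totals == [{"region": "N", "total": 7}, {"region": "S", "total": 3}]
```

Keeping the sort explicit also mirrors how DataStage inserts tsort operators into the score when sorted input is required but not declared.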
Transforming Data
- Understand the ways DataStage allows you to transform data
- Create column derivations using user-defined code and system functions
- Filter records based on business criteria
- Control data flow based on data conditions
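The Transformer-stage concepts above (a constraint that filters records on a business rule, plus per-column derivations) can be sketched in plain Python. Everything here is illustrative, not the Transformer's actual expression language:

```python
# Conceptual sketch of a Transformer stage: each output link has a
# boolean constraint, and each output column a derivation expression
# computed from the input columns.
def transformer(rows, constraint, derivations):
    out = []
    for row in rows:
        if not constraint(row):
            continue                       # link constraint drops the row
        out.append({col: fn(row) for col, fn in derivations.items()})
    return out

employees = [{"name": "ana", "salary": 40000},
             {"name": "bob", "salary": 90000}]
high_paid = transformer(
    employees,
    constraint=lambda r: r["salary"] > 50000,          # business criterion
    derivations={"NAME": lambda r: r["name"].upper(),  # user-defined code
                 "MONTHLY": lambda r: r["salary"] // 12},
)
# high_paid == [{"NAME": "BOB", "MONTHLY": 7500}]
```

In a real job the same pattern drives multiple output links, each with its own constraint, to split one data flow by condition.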
Repository Functions
- Perform a simple Find
- Perform an Advanced Find
- Perform an impact analysis
- Compare two Table Definitions or two Jobs
Working with Relational Data
- Import Table Definitions for relational tables.
- Create Data Connections.
- Use Connector stages in a job.
- Use SQL Builder to define SQL Select statements.
- Use SQL Builder to define SQL Insert and Update statements.
- Use the DB2 Enterprise stage.
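SQL Builder generates ordinary SELECT, INSERT, and UPDATE statements that a Connector stage then executes against the target database. A small sqlite3 sketch of the kind of SQL involved (table and column names are invented; DataStage itself would run these against DB2, Oracle, and so on):

```python
import sqlite3

# Illustrative stand-ins for SQL Builder output, run against an
# in-memory SQLite database instead of an enterprise RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Insert statement, parameterised per row as a target Connector does:
conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                 [(1, "Smith"), (2, "Jones")])

# Update statement keyed on the primary key:
conn.execute("UPDATE customers SET name = ? WHERE id = ?", ("Smith-Lee", 1))

# Select statement as used by a source Connector:
rows = conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
# rows == [(1, "Smith-Lee"), (2, "Jones")]
```

The value of SQL Builder is that it derives these statements from the imported Table Definition, so column names and types stay consistent with the repository metadata.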
Metadata in the Parallel Framework
- Explain schemas.
- Create schemas.
- Explain Runtime Column Propagation (RCP).
- Build a job that reads data from a sequential file using a schema.
- Build a shared container.
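Runtime Column Propagation (RCP) means a job only has to define the columns it actually transforms; any additional input columns flow through to the output untouched. A hedged Python sketch of the behaviour (column names and the derivation are invented):

```python
# Conceptual sketch of RCP: explicitly defined columns are computed by
# the stage; undefined columns are propagated through unchanged.
def run_stage(rows, defined_columns, derive):
    out = []
    for row in rows:
        result = {c: v for c, v in row.items() if c not in defined_columns}
        result.update(derive(row))         # explicit, defined columns
        out.append(result)
    return out

rows = [{"id": 1, "name": "smith", "region": "N"}]  # "region" not defined
propagated = run_stage(rows, {"id", "name"},
                       lambda r: {"id": r["id"], "name": r["name"].title()})
# "region" survives even though the stage never declared it.
```

This is what makes schema files useful: a generic job can process files whose full column list is only known at run time.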
Job Control
- Use the DataStage Job Sequencer to build a job that controls a sequence of jobs.
- Use Sequencer links and stages to control the order in which a set of jobs run.
- Use Sequencer triggers and stages to control the conditions under which jobs run.
- Pass information in job parameters from the master controlling job to the controlled jobs.
- Define user variables.
- Enable restart.
- Handle errors and exceptions.
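The sequencing ideas above (ordered execution, exception handling, checkpointed restart) can be sketched in plain Python. This is only an illustration of the control flow, not the Job Sequencer's actual implementation; all names are invented:

```python
# Conceptual sketch of a job sequence: run jobs in order, catch failures,
# and on restart skip jobs that completed before the checkpoint.
def run_sequence(jobs, checkpoint=None):
    started = checkpoint is None
    for name, job in jobs:
        if not started:
            if name == checkpoint:
                started = True            # restart from the failed job
            else:
                continue                  # skip already-successful jobs
        try:
            job()
        except Exception:
            return ("failed", name)       # exception handler fires here
    return ("ok", None)

ran = []
jobs = [("extract", lambda: ran.append("extract")),
        ("load", lambda: (_ for _ in ()).throw(RuntimeError("db down")))]
status, failed_at = run_sequence(jobs)
# status == "failed", failed_at == "load"; after fixing the problem,
# rerun with checkpoint="load" and "extract" is not executed again.
```

In the real Sequencer the same roles are played by trigger expressions on links, the Exception Handler stage, and the "restartable" checkpoint option on the sequence job.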
For DataStage Materials - Download Now