Thursday, June 17, 2010

Autosys cheat sheet

AutoSys Cheatsheet
AutoSys: UNIX
cd to the "autouser" ($AUTOUSER) directory and "." (source) the ksh environment file. Ex: ". ./autosys.ksh.machine". After installing AutoSys, first make sure that the DB is up and running, then check the installation by running chk_auto_up to verify the connection to the DB and the event processor.
Enter the license keys through "gatekeeper" (add keys).
Run the "autosys_secure" command to set the AutoSys Edit and Exec superusers (and also to enter NT users/passwords).
Start the event processor by running the command "eventor".
Shutdown AutoSys: "sendevent -E STOP_DEMON"
To start the AutoSys GUI, set your DISPLAY and run the command "autosc &".
NT: Start AutoSys from Start -> Programs -> AutoSys -> Administrator -> Graphical User Interface -> Command Prompt.
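Putting the UNIX startup steps together, a minimal sequence might look like this (the instance file name is an example):
cd $AUTOUSER
. ./autosys.ksh.myhost      # source the environment for this instance
chk_auto_up                 # verify DB and event processor connectivity
eventor                     # start the event processor
autosc &                    # launch the GUI (requires DISPLAY to be set)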
Command Line Commands:
1. gatekeeper: Allows you to enter the License Keys which allow you to run AutoSys.
2. eventor [-M machine_name] : Starts the event processor.
3. autorep -J [ALL | job_name] [-q] [> file_name]; options: -d (detail), -r (run number), -o (override), -G (global variable report), -M machine_name -q (machine definitions).
Ex: autorep -J job_name -d
autorep -J job_name -q > file_name queries the DB and saves the job definition into a file:
vi file_name
When you want a report of a box only, use the -L0 option.
autorep -J job_name -r -1 reports on the previous run of the job (-1 = one run back, e.g. the previous day for a daily job).
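Taken together, these options support a simple export/edit/reload cycle (job and file names here are placeholders):
autorep -J my_job -q > my_job.jil    # export the stored definition
vi my_job.jil                        # edit (e.g. change insert_job to update_job)
jil < my_job.jil                     # load the change back into the DB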
4. sendevent -E STARTJOB -J job_name; sendevent -E FORCE_STARTJOB -J job_name; [JOB_ON_ICE, JOB_OFF_ICE, JOB_ON_HOLD, JOB_OFF_HOLD, SET_GLOBAL, STOP_DEMON, ...]
sendevent -E STOP_DEMON - to stop AutoSys
(ex: sendevent -E SET_GLOBAL -G "var_name=/home/mydir" to set a variable)
(ex: sendevent -E SET_GLOBAL -G "var_name=DELETE" to delete a variable)
5. chk_auto_up: checks to see if event processor and the DB are both up.
6. autoping -m machine: verify that both client & server are correctly configured.
7. cron2jil -f cronfile [-d outdir] [-I incl_file] [-m machine] [-p prefix]
8. jil
To insert a job directly into the DB
insert_job: job.id job_type: c
machine: machine_name
command: echo testing jil
[go | ;] (depending on the DB you are using)
Template example:
/* ----------------- template ----------------- */
insert_job: template job_type: c
box_name: box1
command: ls -l
machine: localhost
owner: lyota01@TANT-A01
permission: gx,ge,wx,we,mx,me
date_conditions: 1
days_of_week: all
start_times: "15:00, 14:00"
run_window: "14:00 - 6:00"
condition: s (job1)
description: "description field"
n_retrys: 12
term_run_time: 60
box_terminator: 1
job_terminator: 1
std_out_file: /tmp/std_out
std_err_file: /tmp/std_err
min_run_alarm: 5
max_run_alarm: 10
alarm_if_fail: 1
max_exit_success: 2
chk_files: /tmp 2000
profile: /tmp/.profile
job_load: 25
priority: 1
auto_delete: 12
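To load the template above and confirm it, something like the following would work (the file name is a placeholder):
jil < template.jil           # feed the definition into the DB
autorep -J template -q       # read the stored definition back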
9. autosyslog -e: same as tail -f on the AutoSys log file. This command must be run from the machine where the server resides if used with the -e option. Otherwise it can be used with the -J option to see that job's run log.
10. job_depends: -[c|d|t] -J jobname [-F "mm/dd/yy time"] [-T "mm/dd/yy time"] (Note: It will only print out the first occurrence found)
11. monbro -n monitor_name: Allows you to run, from the command line, monitor/browser programs previously created using the monitor/browser GUI.
12. autocal_asc full_cal_name: prints, adds & deletes custom calendar definitions.
13. autostatus: Reports the current status of a specific job, or the value of an AutoSys global variable. Ex: autostatus -J job_name, -S instance
14. autotimezone -l : Allows additions, deletions, and queries to the timezones table (-l provides list).
15. autotrack: Tracks and reports changes to the AutoSys DB. By default the tracking level is 0; to start using autotrack, set the level to 1 or 2 with autotrack -u (ex: autotrack -u 2 sets level 2). autotrack -l lists the current tracking level. The options -J, -U, -m, -F/-T, and -t request reporting on a specific job, user, machine, time window, and event type, respectively; the event type is used in conjunction with the other parameters (ex: autotrack -U sys -v, for user sys, verbose). autotrack with no arguments retrieves information on all events, omitting detail; the -v option is for verbose output.
16. autosys_secure: to change edit, exec superusers, change DB passwd, change remote authentication method.
17. chase [-A|-E]: Makes sure that jobs claiming to be running on a client machine really are running. The "-A" option raises an alarm; the "-E" option restarts the job.
18. archive_events: Archives events in the DB which are older than x days, to prevent the DB from becoming full.
19. clean_files: Deletes old remote agent log files. It does this by searching the DB for all machines on which jobs have been started.
20. autostatad: Gets the status of a PeopleSoft job.
User-definable buttons: There are user-definable buttons on the operator's console; for example, you can define one to view a PeopleSoft job's status.
How to configure:
Autocons*userButton1Label: Adapter Status
Autocons*userButton1Command: /autosys/bin/autostatad -J $JOB -g & (this gives you a command button on the operator's console)
Dependencies (a combined example follows this list):
success(job_a) and s(job_b)
failure(job_a) or f(job_b)
notrunning(job_a)
terminated(job_a)
exitcode(job_a) > 5 and exitcode(job_b) != 10
value(global_name) = 100
done(job_a)
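A hedged sketch of several of these operators combined in one condition (the job names, script path, and READY_FLAG variable are invented for illustration):
insert_job: downstream_job job_type: c
machine: localhost
command: /tmp/run_downstream.sh
condition: s(job_a) & d(job_b) & exitcode(job_c) < 5 & v(READY_FLAG) = "YES"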
Hostscape: Schedule a job to run every x minutes & then go into forecasting. Make that job fail.
• Solid black line: Hostscape can communicate with the remote agent in the client machine.
• Solid red line: Hostscape can't communicate with the remote agent, but it can communicate with the internet daemon (inetd) running on that machine.
• Dashed red line: Hostscape can't communicate with the client machine at all. Client is probably down.
• Accessing a global variable: in a dependency condition within a job definition, reference it without the $$; if used in the "command" field, you must use the $$ prefix ($$GLOBAL_VAR_NAME).
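A hedged sketch showing both usages (the variable names and paths are placeholders):
insert_job: copy_feed job_type: c
machine: localhost
command: cp $$SRC_DIR/feed.dat /tmp/feed.dat
condition: v(SRC_READY) = "YES"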
Tunable Parameters:
• $AUTOUSER/config.ACE
• $AUTOUSER/autosys.ksh.xxx
• /etc/auto.profile
• /etc/inetd.conf
• /etc/services
Notify.Ace: The alarms to notify on are:
(There is an example in $AUTOSYS/install/data/Notify.Ace).
• DB_ROLLOVER
• DB_PROBLEM
• EP_HIGH_AVAILABILITY
• EP_ROLLOVER
• EP_SHUTDOWN
Where to go to find the Errors:
• $AUTOUSER/out/event_demon.$AUTOSERV
($AUTOUSER/out/event_demon.ACE)
• Output from the job definition output & error files
• /tmp files created for job_run at client machine
• $AUTOSYS/out/DBMaint.out for DB problems
• $SYBASE/install/errorlog_$DSQUERY when event server will not start.
• NT: AutoNuTc\lib\X11\app-defaults\xpert
AutoSys Maintenance: DBMaint (in $AUTOSYS/bin)
Once a day the database goes into a maintenance cycle: every day at 3:00 a.m. it runs a program called DBMaint. This is user configurable. DBMaint in turn runs DBstatistics, which is found in $AUTOSYS/bin.
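The maintenance command is set in the instance configuration file; a minimal sketch (DBMaintCmd is the parameter name as commonly seen in AutoSys config files, but treat it as an assumption here):
# in $AUTOUSER/config.ACE
DBMaintCmd=$AUTOSYS/bin/DBMaint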
app-defaults file: /usr/openwin/lib/app-defaults directory. Autocons, Xpert, etc.. ( or: /usr/lib/X11/app-defaults, /autosys/bin/X11/app-defaults)
Environment file: /etc/auto.profile
C programs: $AUTOSYS/code
Where to change AutoSys screen fonts: /usr/openwin/lib/app-defaults
Where to look for troubleshooting: Chapter 15
Summary of commands: Appendix C
$AUTO_JOB_NAME: useful when naming a file dynamically, using the AutoSys job name as a prefix.
$AUTORUN: unique identifier for the run of that job
$AUTOPID: unique identifier for that job's run number (PID)
$JOID: DB identifier for a job. To extract from the DB: select joid from job where job_name=" "
Creating a Virtual Machine:
insert_machine: virtual
type: v /* default, not required */
machine: real_1
machine: real_2
max_load: 100
factor: 0.5 /* used to describe the relative processing power of a machine. Usually between 0.0-1.0*/
machine: real_2
max_load: 60 /* this is designed to limit the loading of a machine */
Load Balancing, Queuing, priorities:
insert_job: test_load
machine: localhost
command: echo "Test load balancing"
job_load: 50
priority: 1 /* this only affects queues */
Note: For 5.0, the load balancer will use information from ServerVision, which is composed of 26 categories such as I/O usage, disk usage, CPU usage, etc.
Testing:
zql
zql -U autosys -P autosys
NOTES:
When a job is stuck in the STARTING state, it means the event processor communicated with the remote agent and passed it all the job information; the remote agent ran the job but was not able to report its status back to the DB.
Once testing is done with AutoSys, raise the default refresh interval so there is less querying of the DB.
When AutoSys goes from dual mode to single mode, always run the autobcp command before bringing AutoSys back to dual mode/High Availability.
The default behavior for stdout is to always append. If you want to overwrite the file, enter the following, with no spaces: ">file.out"
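In a job definition that overwrite syntax would look like this (the path is a placeholder):
std_out_file: ">/tmp/job.out"   /* overwrite on each run instead of appending */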
Box Logic
Use boxes to group jobs with like scheduling parameters, not as a means of grouping jobs organizationally. For example, if you have a number of jobs that run daily at 1:00 a.m., you could put all these jobs in a box and assign a daily start condition to the box. However, a variety of account-processing jobs with diverse starting conditions should not be grouped in the same box.
Default Box Job Behavior
Some important rules to remember about boxes are:
• Jobs run only once per box execution.
• Jobs in a box will start only if the box itself is running.
• As long as any job in a box is running, the box remains in RUNNING state; the box cannot complete until all jobs have run.
• By default, a box will return a status of SUCCESS only when all the jobs in the box have run and the status of all the jobs is "success." Default SUCCESS is described in Default Box Success and Box Failure on page 5-13.
• By default, a box will return a status of FAILURE only when all jobs in the box have run and the status of one or more of the jobs is "failure." Default FAILURE is described in Default Box Success and Box Failure on page 5-13.
• Unless otherwise specified, a box will run indefinitely until it reaches a status of SUCCESS or FAILURE. For a description of how to override this behavior, see Box Job Attributes and Terminators on page 5-6.
• Changing the state of a box to INACTIVE (via the sendevent command) changes the state of all the jobs in the box to INACTIVE.
When you Should Not Use a Box
The fact that all jobs in a box change status when a box starts running has led some to use boxes to implement "job cycle" behavior. Be aware that placing jobs in a box to achieve this end may bring with it undesired behavior due to the nature of boxes.
Avoid the temptation to put jobs in a box as a short cut for performing events (such as ON_ICE or ON_HOLD) on a large number of jobs at once. You will most likely find that the default behavior of boxes inhibits the expected execution of the jobs you placed in the box.
Likewise, you should not place jobs in a box solely because you want to run reports on all of them. When you run autorep on a box, you will get a report on the box and all the jobs in the box (unless you use the -L0 option). In addition, if you use wildcarding when specifying a job name, you could get duplicate entries in your report. For example, suppose you have a box named "acnt_box" containing three jobs named "acnt_job1", "acnt_job2", and "daily_rep". If you specify acnt% as the job name for the autorep report, the report will have an entry for the box "acnt_box" and an entry for each job in the box. Then autorep will continue searching for all job names matching the wildcard characters and, thus, will list "acnt_job1" and "acnt_job2" a second time.
What Happens when a Box Runs
As soon as a box starts running, all the jobs in the box (including sub-boxes) change to status ACTIVATED, meaning they are eligible to run. (Because of this, jobs in boxes do not retain their statuses from previous box cycles.) Then each job is analyzed for additional starting conditions. All jobs with no additional starting conditions are started, without any implied ordering or prioritizing. Jobs with additional starting conditions remain in the ACTIVATED state until those additional dependencies have been met. The box remains in the RUNNING state as long as there are activated or running jobs in the box.
If a box is terminated before a job in it was able to start, the status of that job will change directly from ACTIVATED to INACTIVE.
Note: Jobs in a box cannot start unless the box is running. However, once the job starts running, it will continue to run even if the box is later stopped for some reason.
Time Conditions in a Box
Each job in a box will run only once per box execution. Therefore, you should not define more than one time attribute for any job in a box, because the job will only run the first time. If you want to put a job in a box but also want it to run more than once, you must assign multiple start time conditions to the box itself and define no time conditions for the job (see the sketch below). Remember also that the box must be running before the job can start. Do not assign a start time for a job in a box if the box will not be running at that time; if you do, the job will start immediately the next time the box starts.
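A hedged sketch of that recommended pattern, with the start times on the box and none on the job (all names and the script path are placeholders):
insert_job: nightly_box job_type: b
start_times: "01:00, 13:00"

insert_job: nightly_job job_type: c
box_name: nightly_box
machine: localhost
command: /tmp/extract.sh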
The following example illustrates a scenario that would not work properly if placed in a box.
"job_a" is defined to run repeatedly until it succeeds. "job_report" has one starting condition-the success of "job_a".
How Job Status Changes Affect Box Status
If a box that is not running contains a job that changes status, as a result of a FORCE_STARTJOB or CHANGE_STATUS event, the new job status could change the status of its container box. A change of status of the box could trigger the start of downstream jobs that are dependent on the box.
If a box contained only one job, and the job changed status, the box status would change.

Differences in Informatica versions

Here are the new enhancements in 8.6 (copied from the help manual):
Administration Console
Generate and upload node diagnostics. You can use the Administration Console to generate node diagnostics for a node and upload them to the Configuration Support Manager on http://my.informatica.com. In the Configuration Support Manager, you can diagnose issues in your Informatica environment and maintain details of your configuration. For more information, see the PowerCenter Administrator Guide.
Command Line Programs
New pmrep command. The pmrep command line program includes a new command, MassUpdate. Use MassUpdate to update a set of sessions based on specified conditions. For more information, see the PowerCenter Command Reference.
Deployment Groups
Non-versioned repository. You can create and edit deployment groups and copy deployment groups to non-versioned repositories. You can copy deployment groups between versioned and non-versioned repositories. When you copy a dynamic deployment group, the Repository Service converts it to a static deployment group. For a non-versioned repository, if the objects in the deployment group exist in the target repository, the deployment operation deletes the existing objects and creates new objects.
Execute Deployment Groups privilege. The Execute Deployment Groups privilege enables you to copy a deployment group without the write permission on target folders. You still need read permission on the source folders and execute permission on the deployment group to copy the deployment group.
Post-deployment validation. Validate the target repository after you copy a deployment group to verify that the objects and dependent objects are valid.
For more information, see the PowerCenter Repository Guide.
Domains
Mixed-version domain. You can run multiple application service versions in a mixed-version domain. Run a mixed-version domain to manage services of multiple versions or to upgrade services in a phased manner. In a mixed-version domain, you can create and run 8.6 and 8.6.1 versions of the Repository Service, Integration Service, and the Web Services Hub.
The application service version appears on the Properties tab of application services in the domain. If you run a mixed-version domain, you can filter the versions of services that you want to view in the Navigator.
License Management Report. Monitors the number of software options purchased for a license and the number of times a license exceeds usage limits. The License Management Report displays the license usage information such as CPU and repository usage and the node configuration details. For more information, see the PowerCenter Administrator Guide.
Integration Service
External Loading
IBM DB2 EE external loader. The IBM DB2 EE external loader can invoke the db2load95 executable located in the Integration Service installation directory. If the IBM DB2 client version 9.5 is installed on the machine where the Integration Service process runs, specify the db2load95 executable when you create the external loader connection.
Partitioning
Oracle subpartition support. PowerCenter supports Oracle sources that use composite partitioning. If you use dynamic partitioning based on the number of source partitions in a session with a subpartitioned Oracle source, the Integration Service sets the number of partitions to the total number of subpartitions at the source.
Pushdown Optimization
Pushdown compatible connections. When a transformation processes data from multiple connections, the Integration Service can push the transformation logic to the source or target database if the connections are pushdown compatible. Two connections are pushdown compatible if they connect to databases on the same database management system and the Integration Service can unambiguously identify the database tables that the connections access.
Connection objects are always pushdown compatible with themselves. Additionally, some relational connections are pushdown compatible if they are of the same database type and have certain identical connection properties.
Subquery support. The Integration Service can produce SQL statements that contain subqueries. This allows it to push transformations in certain sequences in a mapping to the source or target when the session is configured for source-side or full pushdown optimization.
Netezza support. You can push transformation logic to Netezza sources and targets that use ODBC connections.
Real-time
JMS resiliency. Sessions that fail with JMS connection errors are now resilient if recovery is enabled. You can update the pmjmsconnerr.properties file which contains a list of messages that define JMS connection errors.
Streaming XML. You can configure the Integration Service to stream XML data between a JMS or WebSphere MQ source and the XML Parser transformation. When the Integration Service streams XML data, it splits the data into multiple segments. You can configure a smaller input port in the XML Parser transformation and reduce the amount of memory that the XML Parser transformation requires to process large XML files.
Synchronized dynamic cache. You can configure multiple real-time sessions to access the same lookup source table. Each session has a Lookup transformation that shares the lookup source with the other sessions. When one session generates data and updates the lookup source, then each session can update its own cache with the same information.
Metadata Manager
Business glossary. You can create business terms organized by category in a business glossary and add the business terms to metadata objects in the Metadata Manager metadata catalog. Use a business glossary to add business terms to the technical metadata in the metadata catalog. You can import and export business glossaries using Microsoft Excel. For more information, see the Metadata Manager Business Glossary Guide.
User interface. The Browse page in Metadata Manager contains the Catalog, Business Glossary, and Shortcuts views. Use the views to browse and view details about metadata objects, categories and business terms, and shortcuts in Metadata Manager. In addition, the Details section contains a single view of all properties for metadata objects and business terms. For more information, see the Metadata Manager User Guide.
Resources. Metadata Manager added the following resources:
- ERwin. Extract logical models from ERwin.
- Oracle Business Intelligence Enterprise Edition. Extract metadata from OBIEE.
For more information, see the Metadata Manager Administrator Guide.
Metadata Manager Service. You can configure an associated Reference Table Manager Service to view reference tables from the Metadata Manager business glossary. For more information, see the PowerCenter Administrator Guide.
Load Monitor. The Load Monitor updates the current status and load events and contains summary information for objects, links, and errors as you load a resource. It also displays session statistics for PowerCenter workflows.
Error messages. Metadata Manager includes improved error messages to display the cause of errors when you create and configure resources.
Data lineage. You can publish a complete data lineage diagram to a Microsoft Excel file. For more information, see the Metadata Manager User Guide.
Migrate custom resource templates. You can migrate custom resource templates between Metadata Manager instances. For more information, see the Metadata Manager Custom Metadata Integration Guide.
Object Relationships Wizard. Use the Object Relationships Wizard to create relationships by matching metadata objects by any property. You can also use the Object Relationships Wizard to create relationships between categories and business terms to technical metadata in the metadata catalog. For more information, see the Metadata Manager User Guide.
PowerExchange (PowerCenter Connect)
PowerExchange for Netezza
Parameters and variables. You can use parameters and variables for the Null Value, Escape Character, and Delimiter session properties.
PowerExchange for SAP NetWeaver
Row-level processing. The Integration Service can process each row of an outbound IDoc according to the IDoc metadata and pass it to a downstream transformation. Row-level processing can increase session performance.
Viewing ABAP program information. You can view ABAP program information for all mappings in the repository folder.
PowerExchange for Web Services
Importing WSDLs with orchestration. You can import a WSDL file generated by orchestration as a web service source, target, or transformation.
Reference Table Manager
Support for default schema for Microsoft SQL Server 2005. When you create a connection to a Microsoft SQL Server 2005 database, the Reference Table Manager connects to the default schema of the user account provided for the connection.
Code page for import and export file. When you import data from external reference files into Reference Table Manager, you can set the code page for the external file. When you export data from Reference Table Manager, you can set the code page for the export file.
Repository and Repository Services
Upgrade Services

Upgrade services to the latest version. You can use the Upgrade Services tab to upgrade the Repository Service and its dependent services to the latest version. For more information, see the PowerCenter Administrator Guide.
Transformations
Data Masking
Expression masking. You can create an expression in the Data Masking transformation to mask a column.
Key masking. You can configure key masking for datetime values. When you configure key masking for a datetime column, the Data Masking transformation returns deterministic data.
Parameterizing seed values. You can use a mapping parameter to define a seed value when you configure key masking. If the parameter is not valid, the Integration Service uses a default seed value from the defaultValue.xml file.
Masking Social Security numbers. You can mask a Social Security number in any format that includes nine digits. You can configure repeatable masking for Social Security numbers.
Company Names Sample Data. The Data Masking installation includes a flat file that contains company names. You can configure a Lookup transformation to retrieve random company names from the file.
Web Services Hub
Web service security. The Web Services Hub security for protected web services is based on the OASIS web service security standards. By default, WSDLs generated by the Web Services Hub for protected web services contain a security header with the UsernameToken element. The web service client request can use plain text, hashed password, or digested password security. For more information, see the PowerCenter Web Services Provider Guide.

Friday, February 26, 2010

Performance by lookup

1) We tune the Lookup transformations that query the largest amounts of data first, since they have the biggest effect on overall performance. Doing so also lets us reduce the number of lookups against the same table.

2) If a mapping contains Lookup transformations, we enable lookup caching if this option is not already enabled.
We use a persistent cache to improve lookup performance whenever possible.
We explore the possibility of using concurrent caches to improve session performance.
We use the Lookup SQL Override option to add a WHERE clause to the default SQL statement if one is not defined.
We add an ORDER BY clause to the lookup SQL statement if no ORDER BY is defined.
We use the SQL override to suppress the default ORDER BY statement and enter an override ORDER BY with fewer columns (a sketch follows this list).
Indexing the Lookup Table
We can improve performance for the following types of lookups:
For cached lookups, we index the lookup table using the columns in the lookup ORDER BY statement.
For uncached lookups, we index the lookup table using the columns in the lookup WHERE condition.
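A hedged sketch of such an override (the table and columns are invented; the trailing "--" is the common trick for commenting out the ORDER BY that the Integration Service appends):
SELECT CUST_ID, CUST_NAME
FROM CUSTOMER
WHERE ACTIVE_FLAG = 'Y'
ORDER BY CUST_ID --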

3) In some cases we use a Lookup instead of a Joiner, since a lookup can be faster when it contains only the master data.

4) Used this way, lookups also help with the overall performance tuning of the mappings.

session performance

Session performance can be improved by allocating cache memory so that all cache-using transformations can execute within that cache size.

Mainly, it is the transformations that use a CACHE which become performance bottlenecks.

Say for example:

We are using three transformations in our mapping:
1) Aggregator - using 1 MB for indexing and 2 MB for data. Let's assume that after using the Aggregator cache calculator, the derived value is 6 MB.

Likewise, use the cache calculator to work out the cache size of every transformation that uses a cache:
2) Joiner - 5MB
3) Look-up - 7MB

So, the combined cache size will be 6MB + 5MB + 7MB = 18MB.

So, a minimum of 18MB must be allocated to the session cache. If we allocate less memory to the session, then the Integration Service might fail the session.

So, for optimizing the session performance CACHE plays an important role.

Saturday, January 16, 2010

Unix commands for Informatica

First, we need to understand why we should learn Unix commands for Informatica.


step1: run pmcmd
You will get pmcmd prompt
pmcmd>

step2: run the connect command to connect to the repository / cd to go to the Informatica server directory
pmcmd>connect
it will ask repository name/password

step3: run whatever command you need (startworkflow, stopworkflow, abortworkflow, starttask, ...)
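A hedged interactive session might look like this (the service, domain, folder, and workflow names are placeholders):
pmcmd
pmcmd> connect -sv Int_Service -d Domain_Dev -u admin -p password
pmcmd> startworkflow -f MyFolder wf_daily_load
pmcmd> getworkflowdetails -f MyFolder wf_daily_load
pmcmd> disconnect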

Some more useful commands would be
PMREP
The pmrep is a command line program that you use to update repository information and perform repository functions. It is installed in the Repository Server installation directory, usually ($PM_HOME/repserver). You can execute this command from the command line or from a UNIX script.
This command has a lot of options; for example, you can connect to a repository by executing the following command:

pmrep connect
-r <repository_name>
-n <user_name>
-x <password>
-h <host_name>
-o <port_number>
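A concrete invocation might look like this (all values are placeholders):
pmrep connect -r DEV_REPO -n admin -x password -h infa_host -o 5001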

INFASETUP

INFAREP
INFACMD

infasetup is under .../server
infacmd is under .../server/bin

If a file is generated in one path during one process, and during the run of the workflow you want to place it in another path, use the Unix command "cp source_path target_path".

Generally, all Informatica jobs are automated, and for this automation we need a scheduling tool; Unix-based tools such as Control-M or AutoSys are preferred. For a job to run through these tools it must be in the form of a Unix script, and that is where pmcmd with the workflow name, etc., is used.
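A minimal wrapper script of the kind a Control-M or AutoSys job would call might look like this (the service, domain, folder, and workflow names are placeholders):
#!/bin/ksh
# run_wf_daily_load.ksh - start an Informatica workflow and wait for it to finish
pmcmd startworkflow -sv Int_Service -d Domain_Dev \
      -u admin -p password \
      -f MyFolder -wait wf_daily_load
exit $?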

Monday, January 11, 2010

Interview Myths That Keep You From Landing the Job by Karen Noonan, TradePub.com

With so few jobs currently available and so many people currently hoping to fill those jobs, standing out in an interview is of utmost importance. While jobs themselves are scarce, job advice is overly abundant. And with an influx of information comes an influx of confusion. What career counsel do you take, and what do you ignore?
There are a number of common misconceptions related to interview best practices, experts say. Kera Greene of the Career Counselors Consortium and executive coach Barbara Frankel offer tips below that can help you stand out from other interview subjects, avoid frequent pitfalls, and secure the job.

Myth #1: Be prepared with a list of questions to ask at the close of the interview.

There is some truth in this common piece of advice: You should always be prepared, and that usually includes developing questions related to the job. The myth here is that you must wait until it is "your turn" to speak.

By waiting until the interviewer asks you if you have any questions, "it becomes an interrogation instead of a conversation," says Greene.

Greene recommends that you think of an interview as a sales call. You are the product and you are selling yourself to the employer. "You can't be passive in a sales call or you aren't going to sell your product."

Frankel echoes Greene's comments. "It's a two-way street," she says. "I recommend asking a follow-up question at the tail end of your responses."

For example, Frankel says, if the interviewer says, "Tell me about yourself," you first respond to that question and complete your response with a question like, "Can you tell me more about the position?" The interview should be a dialogue.

Myth #2: Do not show weakness in an interview.

The reality is that it is OK to have flaws. In fact, almost every interviewer will ask you to name one. Typically, job seekers are told to dodge this question by providing a "good flaw." One such "good flaw" that is often recommended is: "I am too committed to my work." But these kinds of responses will only hurt you.

"Every recruiter can see through that," Greene says of faux flaws.

Recruiters conduct interviews all day, every day. They've seen it all and can see through candidates who dodge questions. "They prefer to hire someone who is honest than someone who is obviously lying," Greene says.

And for those of you who claim to be flaw-free, think again. "Everybody has weaknesses," Frankel states. But one is enough. According to Frankel, supply your interviewer with one genuine flaw, explain how you are working to correct it, and then move on to a new question.

Myth #3: Be sure to point out all of your strengths and skills to the employer.

Of course, you want the interviewer to know why you are a valuable candidate, but a laundry list of your skills isn't going to win you any points. Inevitably, in an interview, you will be asked about your skills. What can go wrong in this scenario?

"You don't want to list a litany of strengths," Frankel says.

"What is typical is that they will say: 'I'm a good communicator,' 'I have excellent interpersonal skills,' 'I am responsible,'" Greene explains. "You have to give accomplishments. I need to know what did you accomplish when using these skills."

Frankel recommends doing a little groundwork before your interview so that you are best equipped to answer this question. She tells her clients to find out what the prospective job role consists of. "What makes an interview powerful is to give an example related to their particular needs or challenges that you have demonstrated in the past."

Provide three strengths, with examples. You will get much further with a handful of real strengths than with an unconvincing list of traits.

Myth #4: Let the employer know your salary expectations.

One of the trickiest questions to answer in an interview relates to salary. Money talk can be uncomfortable, but it doesn't have to be. The fact is you don't even have to answer when asked about desired salary.

According to the book "Acing the Interview: How to Ask and Answer the Questions That Will Get You The Job!" a perfect response would be: "I want to earn a salary that is commensurate with the contributions I can make. I am confident I can make a substantial contribution at your firm. What does your firm plan to pay for this position?"

Greene suggests a similar response: "I prefer to discuss the compensation package after you've decided that I'm the best candidate and we can sit down and negotiate the package."

Myth #5: The employer determines whether or not you get the job.

While yes, the employer must be the one to offer you the position, interviewees have more control than they often realize. According to both Greene and Frankel, candidates have a larger say in the final hiring decision than they think.

"They should call the interviewer or hiring manager and say: 'I'd really like to be part of the company,'" says Greene. "It can't hurt you. It can only help."

"Acing the Interview" encourages all candidates to conclude their interviews with one question: "'Based on our interview, do you have any concerns about my ability to do the job?' -- If the answer is yes, ask the interviewer to be explicit. Deal forthrightly with each concern."

For more interview tips and myths, download a free book summary of "Acing the Interview: How to Ask and Answer the Questions That Will Get You The Job!" here.