Duplicate records:
If there are duplicate records, we can find them in the error message along with the InfoProvider name. Before restarting the job after deleting the bad DTP request, we have to handle the duplicate records: go to the InfoProvider -> DTP step -> Update tab -> check Handle Duplicate Records -> activate -> execute the DTP. After successful completion of the job, uncheck the Handle Duplicate Records option and activate again.
DTP Long Run:
If a DTP is taking longer than its regular run time and there is no background job for it, turn the status of the DTP to red, delete the bad DTP request (if any), and repeat the step or restart the job.
Before restarting the job / repeating the DTP step, make sure of the reason for the failure.
If the failure is due to a space issue in the F fact table, engage the DBA team and the BASIS team and explain the issue to them.
tRFC/IDOC failure:
Communication Issues:
Check the source system connection with the help of SAP BASIS; if it is not fine, ask them to rebuild the connection. After that, restart the job (InfoPackage).
Go to RSA1 -> select source system -> System -> Connection check.
If the data is loading from the source system to the DSO directly, delete the bad request in the PSA table, then restart the job.
InfoPackage Long Run:
If an InfoPackage is running long, check whether the job has finished in the source system. If it has finished, check with the help of SAP BASIS whether any tRFC/IDOC is stuck or failed. If the job is still in yellow status even after reprocessing the tRFC, turn the status to red, then restart/repeat the step. After completion of the job, force-complete it.
Before turning the status to red/green, make sure whether the load is Full or Delta, and verify the timestamp properly.
Select InfoPackage -> Process Monitor -> Header -> Select Request -> go to the source system (Header -> Source System) -> SM37 -> enter the request and check its status in the source system. If it is active, check whether there are any stuck/failed tRFCs/IDOCs.
If the request is in Cancelled status in the source system, check the InfoPackage status in the BW system. If the IP status is also in a failed/cancelled state, check the data load type (Full or Delta).
If the load is Full, turn the InfoPackage status to Red and then repeat/restart the InfoPackage/job.
If the load is Delta, go to RSA7 in the source system and check the timestamp (compare it with the last updated time of the SM37 background job in the source system). If the timestamp in RSA7 matches, turn the InfoPackage status to Red and restart the job; it'll fetch the data in the next iteration.
If the timestamp is not updated in RSA7, turn the status to Green and restart the job; it'll fetch the data in the next iteration.
Summary (RSA7 timestamp in the source system compared with the last updated time of the SM37 background job):

I/P status in BW     RSA7 timestamp vs SM37 last updated time   Action
Red (Cancelled)      Matching                                   Turn the I/P status to Red and restart the job
Active               Matching                                   Turn the I/P status to Red and restart the job
Red (Cancelled)      Not matching                               Turn the I/P status to Green and restart the job
Active               Not matching                               Turn the I/P status to Green and restart the job
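The rule described above can be sketched as a small function (a hypothetical illustration in Python, not SAP code; the status values and the matching rule are taken from the prose above):

```python
def delta_recovery_action(rsa7_matches_sm37_time: bool) -> tuple[str, str]:
    """Pick the recovery action for a failed delta InfoPackage.

    rsa7_matches_sm37_time: whether the RSA7 delta-queue timestamp in the
    source system matches the last updated time of the SM37 background job.
    """
    if rsa7_matches_sm37_time:
        # Timestamp matches: turn the I/P status to Red and restart;
        # the data is fetched again in the next iteration.
        return ("Red", "Restart the job")
    # Timestamp not updated in RSA7: turn the status to Green and restart.
    return ("Green", "Restart the job")
```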
4. Failure in DSO Activation step:
When there is a failure in the DSO activation step, check whether the data is loading to the DSO from the PSA or directly from the source system. If the data is loading to the DSO from the PSA, activate the DSO manually as follows:
Right-click the DSO Activation step -> Target Administration -> select the latest request in the DSO -> select Activate -> after the request has turned to green status, restart the job.
If the data is loading directly from the source system to the DSO, delete the bad request in the PSA table, then restart the job.
Failure in Drop Index/Compression step:
When there is a failure in the Drop Index/Compression step, check the error message. If it failed due to a lock issue, the job failed because of a parallel process or action performed on that particular cube or object. Before restarting the job, make sure the object is unlocked.
There is a chance of failure in the Index step in case of TREX server issues. In such cases, engage the BASIS team, get information regarding the TREX server, and repeat/restart the job once the server is fixed.
A Compression job may fail when another job is trying to load data into, or is accessing, the cube. In such cases the job fails with an error message such as "Locked by ......". Before restarting the job, make sure the object is unlocked.
5. Roll Up failure:
Roll-up can fail due to a contention issue. When a master data load is in progress, there is a chance of roll-up failure due to resource contention. In such cases, before restarting the job/step, make sure the master data load has completed. Once the master data load finishes, restart the job.
6. Change Run Job finishes with error RSM 756
When there is a failure in the attribute change run due to contention, we have to wait for the other attribute change run (ACR) job to complete; only one ACR can run in BW at a time. Once the other ACR job is completed, we can restart/repeat the job.
We can also run the ACR manually in case of any failures.
Go to RSA1 -> Tools -> Apply Hierarchy/Attribute Change Run -> select the appropriate request in the list for which the ACR must run -> Execute.
7. Transformation In-active:
If changes are moved to production without being saved/activated properly, or a transformation is modified without being reactivated, there is a possibility of a load failure with the error message "Transformation inactive".
In such cases, we have to activate the transformation that is inactive.
Go to RSA1 -> select the transformation -> Activate
If there is no authorization to activate the transformation in the production system, we can do it using the program RSDG_TRFN_ACTIVATE.
Run the program RSDG_TRFN_ACTIVATE; here you will need to enter certain details:
Transformation ID: the transformation's technical name (ID)
Object Status: ACT
Type of Source: the source object type
Source Name: the source's technical name
Type of Target: the target object type
Target Name: the target's technical name
Execute. The transformation status will be turned to Active.
8. Framework Error upon Completion (e.g. follow-on job missing)
9. Hierarchy Save failure:
When there is a failure in the Hierarchy Save step, we have to follow the process below.
If there are issues with the hierarchy save, we have to schedule the InfoPackages associated with the hierarchies manually, and then run an Attribute Change Run to update the changes to the associated targets.
The step-by-step process is as follows:
ST13 -> select the failed process chain -> select the Hierarchy Save step -> right-click -> Display Variant -> select the InfoPackage in the hierarchy -> go to RSA1 -> run the InfoPackage manually -> Tools -> Apply Hierarchy/Attribute Change Run -> select the hierarchy list (here you can find the list of hierarchies) -> Execute.
SAP BI/BW Data Load Errors and Solutions for a Support Project:
1) BW Error: Failure occurred while a delta update was running from one data target to another data target.
Possible Causes: tRFC error.
Solution: In the Monitor, check the technical status of the request for red status, then delete the request from the data target.
If it is a delta update, reset the delta in the source data target and retrigger the InfoPackage to load the data again.
If it is a full update, restart the job after deleting the error request from the data target.
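The recovery sequence above can be summarized as a checklist builder (an illustrative sketch; the step names are paraphrased from the solution text, not an SAP API):

```python
def recovery_steps(update_mode: str) -> list[str]:
    """Return the manual recovery steps for a failed target-to-target load."""
    steps = [
        "Check the technical status of the request in the Monitor (red)",
        "Delete the request from the data target",
    ]
    if update_mode.lower() == "delta":
        # Delta loads need the delta pointer reset before re-extraction.
        steps.append("Reset the delta in the source data target")
        steps.append("Retrigger the InfoPackage to load data again")
    else:
        # Full loads can simply be rerun after the bad request is gone.
        steps.append("Restart the job")
    return steps
```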
2) BW Error: Master job abended with an error code, or the PR1 batch job did not run or was delayed.
Possible Causes: This can be because of changes to Maestro or changes to the job.
Solution: Maestro jobs are handled by Production Services. If a job abends with an error code, the BASIS team looks at Maestro problems; if the job itself is the issue, the SAP support team investigates it.
3) BW Error: Database errors: unable to extend table, unable to extend the index.
For example:
Database error text: "ORA-01653: unable to extend table
SAPR3./BIC/AZUSPYDO100 by 8192 in table space PSAPODS2D"
Possible Causes: This is due to a lack of space available for further data.
Solution: This is a database error; a short dump indicates this error message. A ticket is raised for the DBA (Elizabeth Mayfield), who provides the required space. If the update mode is delta, the technical status of the job is changed to red and the request is deleted from the data target; the InfoPackage for the delta update is triggered again to get the delta back from R/3. If it is a full update, the request is deleted from the data target and the InfoPackage is triggered again to get a full update.
Once the DBA confirms that the space issue is corrected, the job is rerun to get the data from the source again.
Possible Causes: This can happen when changes are made to the DataSource and the DataSource is not replicated.
Solution: Execute transaction SE38 in BW, enter the program name RS_TRANSTRU_ACTIVATE_ALL and execute it. Enter the InfoSource and source system and activate. This replicates the DataSource and changes its status to active. Once this is done, delete the request by changing its technical status to red, and trigger the InfoPackage to get the delta back from the source system.
Delete the request from the data target and trigger the InfoPackage again to get either delta or full data.
Step 1: If the request loads directly into the data target, load the data into the PSA as well to correct this issue. Delete the error request from the data target and go to the PSA to investigate the issue.
Look for messages in the Monitor, which may give the name of the InfoObject causing the error. Normally a # character at the end of a value is not permitted in BW.
Filter the records based on status and start correcting them. Once they are complete, upload the data from the PSA back into the data target.
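Outside the SAP GUI, the trailing-`#` check on PSA values can be sketched like this (the field values below are invented for illustration):

```python
def has_trailing_hash(value: str) -> bool:
    """Flag values ending in '#', which BW does not permit."""
    return value.rstrip().endswith("#")

# Hypothetical PSA field values; only the second one is invalid.
psa_values = ["PLANT001", "MATERIAL42#", "VENDOR009"]
bad_values = [v for v in psa_values if has_trailing_hash(v)]
```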
Short dump: "SAPSQL_ARRAY_INSERT_DUPREC" CX_SY_OPEN_SQL_DBC
in "SAPLRSDRO" or "LRSDROF07", "START_RSODSACTREQ_ACTIVATE"
Possible Causes: This can happen if there are duplicate records from the source system. BW does not allow duplicate data records.
Solution: To check for duplicate records in a master data characteristic, go to transaction RSRV.
Select the name of the characteristic and execute; this will give the first 50 records for the problem. Remove the duplicates from the master data and then upload the data back by triggering the InfoPackage again.
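The uniqueness check that RSRV performs can be illustrated, outside SAP, with a simple key count (the record keys below are made up for the example):

```python
from collections import Counter

def duplicate_keys(keys):
    """Return the keys occurring more than once, i.e. the ones that
    would violate master data uniqueness in BW."""
    counts = Counter(keys)
    return sorted(k for k, n in counts.items() if n > 1)

# Hypothetical characteristic key values with one duplicate.
sample = ["CUST001", "CUST002", "CUST001", "CUST003"]
```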
Possible Causes: This can occur due to either a bug in the InfoPackage or an incorrect data selection in the InfoPackage.
Solution: The data selection is checked in the InfoPackage, and the job is started again after changing the technical status to red and deleting the error request from the data target.
The Value Type field was getting changed after each run of the job; this had to be corrected every time.
Possible Causes: This can be because of an issue with the source system or the DataSource; the delta update program can be one of the issues.
Solution: Go to the R/3 source system and see how many records are in the delta in transaction RSA7. If there are zero records and you are sure the number of records cannot be zero, check whether the update program is not running or is stuck. Check BD87, SMQ1 and SM58 for the solution to the error; the error logs can suggest the solution to the problem.
Possible Causes: An error occurred in one of the jobs in the process chain.
Solution: Attend to the failure manually, go to the process chain log for today, right-click the next job and select the Repeat option. This will execute all remaining jobs in the process chain.
Possible Causes: This can happen when the job is terminated in the source system, or the source system or BW is not available for the whole period of the data upload. It can also happen if resources are insufficient in the source system.
Possible Causes: This can be because the data is not acceptable to the data target, although the data reached the PSA.
Solution: Check the data in the PSA for correctness and, after correcting the bad data, upload it back into the data target from the PSA.
Change the QM status of the request to red in the data target, and delete the request from the data target so the PSA data can be edited. Go to the PSA associated with this request and edit the records to fix the error reported in the Monitor's Details tabstrip.
Possible Causes: This occurs when transaction data is loaded before master data.
Solution: Ensure that master data is loaded before transaction data. Reload the data depending on the update mode (Delta/Full).
Possible Causes: These errors happen when the transfer rules are not active or the mapping of the data fields is not correct.
Solution: Check the transfer rules, make the relevant changes, and load the data again.
Possible Causes: This can be because of incorrect PSA data, transfer structure, transfer rules, update rules, or the ODS.
Solution: Check the PSA data, transfer structure, transfer rules, update rules, or the data target definition.
Possible Causes: The 'valid from' date is smaller than the minimum date; error in node time interval with the ID 00000011 (details in the next message).
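The 'valid from' check amounts to a simple date comparison, sketched below (the minimum date used here is an assumption for illustration, not an SAP constant; use your system's actual lower bound):

```python
from datetime import date

# Assumed lower bound for hierarchy validity dates (illustrative only).
MIN_VALID_FROM = date(1000, 1, 1)

def valid_from_ok(valid_from: date) -> bool:
    """A node's 'valid from' date must not precede the minimum date."""
    return valid_from >= MIN_VALID_FROM
```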
Possible Causes: This happens when the data is not acceptable to the ODS definition; the data needs to be corrected in the PSA.
Solution: Check, in the Monitor's Details tabstrip, which InfoObject caused the problem. Delete the request from the data target after changing the QM status to red. Correct the data in the PSA and update it back to the data target from the PSA.
Possible Causes: This can happen when the request IDoc is sent to the source system but the source system is for some reason not available.
Solution: Ensure that the source system is available. Change the technical status of the request to red and delete the request from the data target. Trigger the InfoPackage again to get the data from the source system.