Hello Experts,
Is there any documentation for SLT and SOLMAN monitoring?
Regards
Amar Ghuge
Hi,
With HANA SPS9 we now have another option for replicating tables into HANA systems: SDI (Smart Data Integration), which resides and runs on the HANA system itself.
I have already heard people asking whether SLT replication will no longer be required, whether we can move all the tables we currently replicate with SLT and BODS over to SDI, and so on.
I have been working with SLT for real-time replication since its beginning, and only now do I feel that SLT is reaching maturity as a replication tool: it is more solid, offers more options and functionality for handling real-time table replication, and has fewer and fewer bugs.
I have been reading about the SDI option for replication, which looks very promising. I guess its strongest selling point is that it avoids having a separate BODS or SLT system, meaning fewer systems to build in my infrastructure and lower cost. Besides, SDI runs on SAP HANA, which always has plenty of CPU and RAM.
Now, what is not clear to me is whether SDI can really handle very large, constantly changing tables in real-time replication. In the documents I have read, I don't see where to set table partitioning for the replication, or the performance and initial-load options that we have in SLT's LTRS rules. Does anybody know where such options can be set in SDI when replicating new big tables? The examples in the documentation and videos always use tiny tables, which does not reflect the majority of SAP HANA customers, where the main replicated tables are usually massive.
SLT replication vs. SDI (Smart Data Integration): for real-time replication of many very large tables, with rules and extra fields, what is the best option?
Can SDI, at this point, take over all the replication currently running on SLT and BODS systems?
Thanks
Rod Cardenas
I am working with a non-SAP Oracle system, and there are instances where columns are deleted from one of the tables. Some of these tables are very large, so if we can avoid the reload step in SLT, that would be ideal. I understand that the structures between SLT and Oracle will change, and that the table in the target SAP HANA system will also need to change. Are there manual steps that can avoid a large reload of this data?
Hello Experts,
We are trying to edit a table structure in the SAP SLT advanced replication settings (transaction LTRS).
We successfully created, replicated, and loaded the data from our Oracle table, but when we try to add the table in LTRS, I get the error "no table columns were found". I tried resetting and deleting the table (RS_status, removing it from LTRC, etc.) and creating it again using the SAP SLT expert functions, but to no avail.
Could anyone please suggest a solution for this?
Thanks & Regards,
Kishore
Hi All,
We are planning to upgrade the database of our SLT system (DMIS 2011_1_731, SP 0008) from Oracle 11.2.0.4.0 to Oracle 12c (12.1.0.2.5). I do not see any information about database upgrades for SLT systems. Please let me know if anyone has done this so far.
Are there any known issues in SLT after an Oracle 12c upgrade that have to be considered while planning it?
Thanks
Dass
Hi
We are using SLT to migrate CO data from ECC to an S/4 Central Finance system.
We have used RFC connections both from the source to SLT and from SLT to the CFIN system, as per the admin guide.
According to the other available documents, it should be a DB connection from SLT to the CFIN system.
We are getting a lot of error messages, but they are not well defined enough to understand. Some of them are given below. Please let me know whether this is an RFC connection or an authorization problem.
Errors -
The configuration in LTR is active and running but shows the error below -
Hi,
We are trying to extract data (near real time) from SAP ECC to SAP BW on HANA via SLT (ODP).
We are testing a couple of scenarios: one for the transparent table T001 and one for the cluster table BSEG.
The T001 table is working fine: the initial load completed successfully, and the delta comes through within a few seconds every time we create new data.
The BSEG table is having problems during extraction: the initial load went fine, but the table is stuck in Replication status in LTRC, and the delta records do not come through SLT/ODQ whenever we create new ones.
Another issue we suspect is that the BSEG table has multiple entries in LTRC with different substitution names.
See the attached screenshot for reference.
Any advice from the gurus on how to fix the issue?
Thanks in advance!
Hi,
We have configured SLT and it looks OK, but when I check, the /1LT/IUC_REP_MSTR job is running while there are no load jobs running.
I don't see the DD02T, DD02L, and DD08L tables loaded in HANA.
The source is Oracle; the target is HANA.
We recently did a system refresh on this box from production. We deleted all the old SLT configuration from HANA and SAP, but after doing the new setup again, it is not loading the data into HANA.
Can you suggest what the issue could be?
Best Regards,
Avinash
Hi
I am facing the error "Z_ZTEST1_004: Run 0000000001 aborted due to DUPLICATE KEY on 20160715 at 145833".
I am using a new Z table for the test, and the table does not exist in the target system.
I am not able to figure out where it is getting the duplicate key from.
Some posts on SCN say:
Fix the issue in the target system, or change the transfer behaviour to 3.
But how do I do that? Please let me know the process.
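In case it helps to narrow things down, here is a minimal sketch for listing the key combinations that occur more than once in the source table. KEY1 and KEY2 are placeholders; replace them with the actual key fields of your Z table (and adjust the table name if it is not ZTEST1):

REPORT z_find_duplicate_keys.

* Placeholder key fields KEY1/KEY2 - substitute the real key
* fields of your Z table before activating this report.
TYPES: BEGIN OF ty_dup,
         key1 TYPE ztest1-key1,
         key2 TYPE ztest1-key2,
         cnt  TYPE i,
       END OF ty_dup.

DATA: lt_dup TYPE STANDARD TABLE OF ty_dup,
      ls_dup TYPE ty_dup.

* List every key combination that occurs more than once.
SELECT key1 key2 COUNT( * )
  FROM ztest1
  INTO TABLE lt_dup
  GROUP BY key1 key2
  HAVING COUNT( * ) > 1.

LOOP AT lt_dup INTO ls_dup.
  WRITE: / ls_dup-key1, ls_dup-key2, ls_dup-cnt.
ENDLOOP.

If this returns no rows in the source, the duplicate is more likely being produced on the SLT side (for example by a transformation rule) than by the source data itself.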
Thanks
Sudip.
Hello Experts,
I was loading (not replicating) BSEG data into HANA, and it seems that after loading around 100 million records, it ran into an error with the message below in SLG1:
I don't think we can ever have a duplicate record in the BSEG table itself for a given combination of Company Code, Accounting Document, Fiscal Year, and Item. So how can we be having this issue during replication?
We have a piece of SLT code in which we do two things:
a) Load only 2013 data
b) Default values for the invalid/blank date issue, as shown below
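For illustration, such a defaulting rule is typically written as an event-based rule in LTRS. As a rough sketch only (AUGDT stands in for the actual date field, and <WA_S_BSEG> is the standard sender work area in SLT event rules):

* Sketch of a date-defaulting rule (event: Begin of Record).
* AUGDT is only an example field - apply the same pattern to
* whichever date field carries the invalid/blank values.
IF <wa_s_bseg>-augdt IS INITIAL.        " blank/zero date
  <wa_s_bseg>-augdt = '19000101'.       " hypothetical default value
ENDIF.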
It would be great if somebody could throw some light on this.
Thanks & regards,
Jomy
Hi,
Is it possible to change the transfer behaviour from Array Insert to Single Insert when starting replication for tables?
The default method is Array Insert, but the problem is that table replication fails due to duplicate keys.
What I would like to achieve, instead of manually changing the transfer behaviour to Single Insert for each table (almost 500 of them), is that SLT uses Single Insert automatically when starting replication.
Regards,
Hi,
I am working on SLT (DMIS 2011_1_731), replicating data in an ECC-to-ECC scenario, but I am stuck with an error after loading the data.
Error when reading the DDIC-data for table CFIN_COPA (RFC destination EHECLNT100)
Error when reading the DDIC-data for table CFIN_CO_ADD (RFC destination EHECLNT100)
Error while generating runtime objects for Z_COBK_041
Syntax error in function module /1CADMC/OLC_100000000015956
Error message: Type "/1CADMC/100000004275216CFIN_CO" is unknown
Please help with the steps to be executed to resolve the issue.
Regards,
Rajiv
Hi Team,
I have added two fields to the AUSP table using transaction LTRS, but the SAP HANA DB does not show these two columns being replicated. I want to investigate possible issues on the SAP side. How can I test this at the SAP level?
Hi Experts,
I am encountering the error message "Error in generic write: data structure mismatch" when replicating the BSEG table from ERP via SLT.
The BSEG table has multiple substitution names and no logging table, which is unexpected. We are on DMIS 2011_1_731 SP8.
Does anyone know how to resolve the error? Thanks!
The error encountered in the Data Transfer Monitor tab is "Execute of Program /1CADMC/OLC_03100000000000058 failed, return code 3".
Cheers,
Hi Experts,
We are using SAP LT to replicate data at table level from one ABAP system to another.
I have two scenarios where I am stuck on how to handle the data transfer:
1. While transferring data, I want to filter on specific criteria. For example: if PLANT = 'ABCD', transfer only those records from the MARC table that satisfy this condition to the target ABAP system.
2. While transferring data from the source to the target system, we do not want to overwrite a few fields in the target system, based on some conditions. For example, for table KNA1: if the field value of BRSCH (Industry) = 'ABC', then do not overwrite the value of field BRAN1 (Industry Code 1) in KNA1 in the target system; all other field values in KNA1 should be updated from the source system, except BRAN1.
If we can achieve this via transformation rules, please let me know a few examples for these scenarios (see the sketch below).
Please let me know whether there is any way to achieve the above two scenarios in SLT ABAP-to-ABAP. Thanks.
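For the first scenario, an event-based rule (event "Begin of Record") assigned to the MARC table in LTRS can skip unwanted records. A minimal sketch, assuming the standard <WA_S_MARC> sender work area and the SKIP_RECORD macro that SLT provides during rule processing:

* Event-based rule on MARC (event: Begin of Record).
* Drop every record whose plant is not ABCD.
IF <wa_s_marc>-werks <> 'ABCD'.
  skip_record.
ENDIF.

The second scenario is harder, because the rule would also need to read the current value of BRAN1 from the target system before deciding whether to overwrite it; how feasible that is depends on the connection type, so take the sketch above as illustrating the rule mechanism only.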
Regards,
Sanjana
Hi All,
I have set up the replication configuration for the STXL table as per the link.
The replication has been running for more than 8 days. There are 400 million records in the STXL table in ECC that need to be converted and transferred to HANA, and the initial load currently in process has transferred 1,097,394,440 records to the ZSTXL_TX table in HANA. The status of the STXL table in the Data Transfer tab of the SLT system is shown as "Replication (Initial Load)", and the Data Provisioning tool in HANA still shows the status of the STXL table as "Load" in process. The processing mode for this table in SLT is "Parallel Processing" and the reading type is 5. Can you please help me complete the initial load faster? I don't see the record count increasing in HANA; only the logging table /1CADMC/00001023 in ECC is increasing in count. Is there a setting maintained anywhere in SLT to trigger the initial load in batches? The logging table count in ECC is currently 4454, but it never gets replicated to HANA.
Please advise.
Thanks,
Gokul
Hi All,
I am trying to replicate the STXL table from ECC to HANA.
The initial load has been running for more than 7 days.
I have followed the process specified in the link.
Is there a way to make the transfer complete faster? It has replicated 1.3 billion records so far, but the status still shows as initial load.
The logging table in ECC is growing by thousands each day, but it is no longer replicating to HANA; I am not sure if it is waiting for the initial load to complete.
Please let me know if anyone has replicated cluster tables like STXL.
Thanks,
Gokul