
FAQ: OS/DB Migration to Microsoft SQL Server v6.2

April 2017

Summary
You are currently running an SAP system on a Unix, Windows or Linux operating system and Oracle, Informix, DB2,
Sybase, HANA or MaxDB database and wish to migrate your SAP system to Microsoft SQL Server.

You may also wish to convert your SAP system to Unicode during the migration to SQL Server.

Background Information
SAP & Microsoft have extended the capabilities of the SAP OS/DB migration tools and procedures to simplify the
process of migrating SAP systems to SQL Server. This note contains the latest information regarding the technical
capabilities and features for OS/DB Migrations where the target database is SQL Server.
Please review the latest blogs at: http://blogs.msdn.com/b/saponsqlserver/

Solution
There are several new enhancements that significantly speed up and simplify the process of migrating an SAP
system to SQL Server. In general, all of these features are available for all systems based on WAS 6.20 and above
(such as SAP R/3 4.7 Extension Set 110 and higher).

The link http://scn.sap.com/docs/DOC-8324 contains more information on the OS/DB Migration process. Also review
note 82478.

Customers should target conversion throughput of around 1-2TB per hour using all the enhancements contained in
this document.

RECOMMENDATIONS

1. Required patch levels for Migration Tools, Windows & SQL Server
You must use these patch levels or higher for the following components. It is generally recommended to use the
most recent version of these components.

SWPM, SAPInst & R3SETUP
7.1x and higher: latest SL Toolset https://service.sap.com/sltoolset (use SWPM)
7.0x: latest SL Toolset https://service.sap.com/sltoolset (use 70SWPM)
6.40: NetWeaver 04 Master SR1 (compatible with 6.20/R/3 4.7 systems)
4.6D: R3SETUP 46D SR1 (for use on 4.6C - available on request from SAP)

R3LOAD
7.50: 749 latest release
7.4x: 749 latest release
7.3x: please use 722 EXT latest release
7.1x: please use 722 EXT latest release
7.0x: please use 722 EXT latest release

DBSL
7.50 749 latest release
7.4x 749 latest release
7.3x Please use 722 EXT latest release
7.1x Please use 722 EXT latest release
7.0x Please use 722 EXT latest release

MIGMON
The Java-based Migration Monitor is downward compatible with 7.4x, 7.3x, 7.1x, 7.0x, 6.40, 4.6C and lower. Use
the most recent version. To download Migmon check OSS Note 784118.

R3TA
R3TA Table Splitter is only available for kernels 6.40 and higher. Use the most recent version. Review Note
1650246 - R3ta: new split method for MSSQL and Note 1784491 - R3ta: Split of physical cluster tables.

R3LDCTL, loadercli & R3SZCHK
Use the most recent version.

System Copy OSS Notes

7.50 - 7.0x: Note 888210 - NW 7.**: System copy (supplementary note)
             Note 1738258 - System Copy of Systems Based on SAP NetWeaver 7.1 and Higher
6.40:        Notes 784931 and 771209
4.6D:        Note 316353

Required minimum SAP NetWeaver Support Package Stacks (SPSs) for SQL Server 2014 (SAP ABAP or SAP ABAP+JAVA stacks):

SAP SOFTWARE                      SPS REQUIREMENTS           SPS REQUIREMENTS FOR SAP BW
SAP NETWEAVER 7.0                 SPS 29 (SAP_BASIS SP 29)   SPS 30 (SAP_BASIS SP 30, SAP BW SP 32) + SAP Note 2010451
SAP EHP1 FOR SAP NETWEAVER 7.0    SPS 15                     SPS 15 + SAP Note 2010451
SAP EHP2 FOR SAP NETWEAVER 7.0    SPS 14                     SPS 15 + SAP Note 2010451
SAP EHP3 FOR SAP NETWEAVER 7.0    SPS 09                     -
SAP NETWEAVER 7.1                 SPS 17                     -
SAP EHP1 FOR SAP NETWEAVER 7.1    SPS 12                     SPS 13 + SAP Note 2010451
SAP NETWEAVER 7.3                 SPS 10                     SPS 11 + SAP Note 2010451
SAP EHP1 FOR SAP NETWEAVER 7.3    SPS 09                     SPS 11 + SAP Note 2010451
SAP NETWEAVER 7.4                 SPS 04                     SPS 06 + SAP Note 2010451
If your system is running on an SPS lower than the one required above, you have to apply the minimum required SPS
before upgrading/migrating to SQL Server 2014.

Required minimum SAP NetWeaver Support Package Stacks (SPSs) for SQL Server 2016 (SAP ABAP or SAP ABAP+JAVA stacks):

SAP SOFTWARE                      SPS REQUIREMENTS
SAP NETWEAVER 7.0                 SPS 33 (SAP_BASIS SP 33)
SAP EHP1 FOR SAP NETWEAVER 7.0    SPS 18
SAP EHP2 FOR SAP NETWEAVER 7.0    SPS 18
SAP EHP3 FOR SAP NETWEAVER 7.0    SPS 17
SAP NETWEAVER 7.1                 SPS 20
SAP EHP1 FOR SAP NETWEAVER 7.1    SPS 15
SAP NETWEAVER 7.3                 SPS 14
SAP EHP1 FOR SAP NETWEAVER 7.3    SPS 17
SAP NETWEAVER 7.4                 SPS 12
SAP NETWEAVER 7.5                 SPS 01
If your system is running on an SPS lower than the one required above, you have to apply the minimum required SPS
before upgrading/migrating to SQL Server 2016.

See Note 799058 for SQL Server 2005, Note 1152240 for SQL Server 2008/R2, Note 1651862 for SQL Server 2012,
Note 1966681 for SQL Server 2014 and Note 2201059 for SQL Server 2016.

Windows & SQL Server


As of April 2017, Windows Server 2012 R2 and SQL Server 2016 SP1 CU2 or more recent are recommended.
Windows Server 2016 is recommended for all new projects and is now generally available for SAP.

SQL Server Enterprise Edition x64 - download and install the latest service pack and CU. Refer to Note 62988
Service packs for Microsoft SQL Server. This link is useful to find the latest SP or CU for SQL Server
http://blogs.msdn.com/b/sqlreleaseservices/

Do not use 32-bit versions of Windows or SQL Server. If your system is 4.6C based, run 4.6C on 64-bit Windows
2003 and 64-bit SQL Server 2005.

2. Hardware Configurations

Review SAP Note 1612283 - Hardware Configuration Standards and Guidance, and follow the guidance in this note.
Do not under-specify memory: 384GB is the minimum for new SAP server deployments, and customers with 1-3TB
of RAM are now mainstream.
It is strongly recommended to utilize FusionIO cards (or similar) for larger OS/DB Migrations.

Recommended Hardware Configurations:

SAP Application or DB Server:
2-processor Intel Xeon E5 v4, 8-22 cores per processor, 384-1,500GB RAM, 10Gb network card.
768GB configurations are very common as of April 2017.

DB Server:
Use a 2-socket server as above, or a 4-processor Intel Xeon E7 v4 with 1-4TB RAM and a 10Gb network card.
Cost = $33,000-56,000 list price*; SAPS = ~220,000

*Source: www.dell.com

3. Unsorted Export
An unsorted export is supported and may be imported into a SQL Server database. A sorted export takes much
longer to export and is only marginally faster to import into SQL Server. Unicode conversion customers must
export certain cluster tables in sorted mode; this allows R3LOAD to read an entire logical cluster record,
decompress the entire record (which may be spread over multiple database records) and convert it to Unicode.
See Notes 954268, 1040674 and 1066404. The content of OSS Note 1054852 has been updated.

Our default recommendation is to export unsorted as in most cases the UNIX/Oracle or DB2 server has only a
fraction of the CPU, IO and RAM capacity of a modern Intel commodity server. Even though there is an overhead
involved in inserting rows into the clustered index on SQL Server, this overhead is relatively small.

4. Table Splitting
A table split export is fully supported and may be imported into a SQL Server database. Table split packages for
the same table may be imported concurrently.
Table splitting is only supported for R3LOAD 6.40 and higher (R3LOAD 6.40 is backwards compatible with Basis
6.20 releases such as R/3 4.7). Review Note 952514
The limitations on SQL Server table splitting listed in some SAP documentation are out of date and should be
ignored.
Customers have successfully split large tables into 20-80 splits and achieved satisfactory results on
tables that have poor import or export throughput. It is recommended to use the minimum number of splits
possible, especially if deadlocks are observed during import.
There are some tables that we always recommend splitting due to slow export or import performance:
CDCLS, S033, TST03, GLPCA, STXL, CKIT, REPOSRC, APQD, REPOTEXT, INDTEXT

To run R3TA manually use this command line:

r3ta -f c:\export\abap\data\<TABLE NAME>.str -l <TABLE NAME>whr.log -o c:\export\abap\data\<TABLE NAME>.WHR -table <TABLE NAME>%<NUMBER OF SPLITS>

A command line can also be built in Excel with a formula such as:

=CONCATENATE("R3TA -f d:\export\abap\data\",A9,".str ","-l ",A9,"_WHR.log"," -o d:\export\abap\data\",A9,".WHR"," -table ",A9,"%",B9)
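As an alternative to the Excel formula, the same command lines can be generated with a small shell sketch. The table names, split counts and export path below are examples only, not values from this note:

```shell
#!/bin/sh
# Emit one R3TA command line per table/split-count pair.
# Paths and the example tables below are placeholders - adjust to your export directory.
build_r3ta_cmd() {
  table=$1
  splits=$2
  printf 'r3ta -f d:\\export\\abap\\data\\%s.str -l %s_WHR.log -o d:\\export\\abap\\data\\%s.WHR -table %s%%%s\n' \
    "$table" "$table" "$table" "$table" "$splits"
}

build_r3ta_cmd CDCLS 40
build_r3ta_cmd GLPCA 20
```

Redirect the output to a batch file and run it on the export server.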

After generating WHR files with R3TA the WHR splitter must be run to create split packages. Always set the
whereLimit parameter to 1, meaning 1 package for each where clause.

where_splitter.bat -whereDir d:\export\abap\data -strDir d:\export\abap\data -outputDir d:\export\abap\data -whereLimit 1

5. Package Splitting
The Java based Package Splitting tool is fully supported in all cases. It is recommended not to use the Perl based
splitter.

This command will generate the TPL files and the default STR files (without the EXT files)
r3ldctl -l logfilename -p D:\exportdirectory

Note: Exports to SQL Server do not need Extent files and the whole Extent file (*.EXT) file generation process can
be skipped to save time. Instead it is recommended to use the following script to determine the largest tables in
the Oracle database:
spool tablefile.txt
set lines 100 pages 200
col Table format a40
col Owner format a10
col MB format 999,999,999
select owner "Owner", segment_name "Table", bytes/1024/1024 "MB"
  from dba_segments
 where bytes > 100*1024*1024 and segment_type like 'TAB%'
 order by owner asc, bytes asc;
spool off

Then it is recommended to extract the largest tables (possibly anything more than ~2GB) into their own packages
(and also table split if required). The following command can be used. Please note that when using SWPM, EXT
files are required; EXT files can be bypassed only when doing a manual Migmon based migration.

str_splitter.bat -strDirs d:\export\abap\data -outputDir d:\export\abap\data -tableFile tablefile.txt
(Note: there is no space between the "-" and "tableFile")

6. FASTLOAD
All SAP data types can now be loaded in Bulk Copy mode. It is recommended to set the -loadprocedure fast
option for all imports to SQL Server. These are the default settings for SAPInst. If Migration Monitor is used this
parameter must be specified. 4.6C/D migrations should use the parameter -fast (without the "loadprocedure").
Please also note that to support FastLoad on LOB columns, set the environment variable BCP_LOB=1 and review
Note 1156361.
The parameters we recommend for Migmon or SAPInst are: loadArgs=-stop_on_error -merge_bck -loadprocedure fast
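For Migration Monitor, the recommended loadArgs can be set in the import properties file. A minimal sketch; the paths, job count and exact property names are examples and should be checked against your MigMon version:

```
# import_monitor_cmd.properties - minimal sketch (all values are examples)
importDirs=d:\import\abap\data
installDir=d:\import\install
loadArgs=-stop_on_error -merge_bck -loadprocedure fast
jobNum=16
```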

7. Migration Time Analyzer


It is recommended to use MIGTIME with the -html option to graphically display the export and/or import time of
packages. It is generally recommended to ensure the longest running packages are started at the beginning of
the export or import. MIGTIME is available for 4.6C and higher.
Import_time.bat -installDirs d:\import -html

The script below shows the current status of the SAP export using SAP Migration Monitor log files.
The script reloads every 20 seconds and displays:
- current CPU load
- currently running packages
- currently waiting packages

MigMonStatus.zip

Before first use:
- Unzip the MigMonStatus archive into the Migration Monitor directory
- Rename status.txt to status.cmd
- Rename queryCPU.txt to queryCPU.vbs
- Start status.cmd
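A similar status check can be done directly on the Migration Monitor log with a one-line filter. This sketch assumes the log contains periodic status lines of the form "Import Monitor jobs: running N, waiting N, ..."; the exact wording may differ between MigMon versions:

```shell
#!/bin/sh
# Print the most recent Migration Monitor job-status line from a log file.
latest_status() {
  grep 'Monitor jobs:' "$1" | tail -n 1
}

# Example usage (file name is a placeholder):
# latest_status import_monitor.log
```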

8. Package Order by recommendations


It is recommended to use an OrderBy.txt file to optimize the export of an Oracle system and the import to
SQL Server. By default a system will export packages in alphabetical order and import packages in size order.

The OrderBy.txt file can be used to instruct Migration Monitor to start packages in a specific order. Normally the best
order is to start the longest running packages first. It is recommended to perform an export on a test system to
determine which tables are likely to run longest.
Note: It is normal for the export and import runtimes of a package to be very different. Some packages may be
very slow to export yet very fast to import, and vice versa.
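A minimal OrderBy.txt sketch, listing the longest-running packages first, one per line (the package names below are examples only, not measurements from this note):

```
CDCLS
GLPCA
STXL
TST03
SAPAPPL1
SAPAPPL2
```

Packages not listed are typically started after those in the file; verify the exact behavior with your Migration Monitor version.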

9. Oracle Source System Settings


Please review note 936441 - Oracle settings for R3load based system copy

SAP has released SAP Note 1043380, which contains a script that converts the WHERE clause in a WHR file to
a ROW ID value. Alternatively, the latest versions of SAPInst will automatically generate ROW ID split WHR files if
SWPM is configured for an Oracle to Oracle R3LOAD migration. The STR and WHR files generated by SWPM are
independent of OS/DB (as are all aspects of the OS/DB migration process).

The OSS note contains the statement "ROWID table splitting CANNOT be used if the target database is a non-Oracle
database". Customers wishing to speed up an export from Oracle may send an OSS message to BC-DB-ORA
and request clarification of this restriction. Technically the R3LOAD dump files are completely independent
of database and operating system. There is one restriction, however: restart of a package during import is not
possible on SQL Server. In this scenario the entire table must be dropped and all packages for the table
restarted. ROW ID also has the disadvantage that calculation of the splits must be done during downtime - see
Note 1043380.
OS/DB migrations larger than 1-2TB will benefit from separating the R3LOAD export processes from the Oracle
database server.
Note: Windows application servers can be used as R3LOAD export servers even for Unix or mainframe based
database servers. Intel-based servers have far superior performance in SAPS/core than most Unix servers,
therefore R3LOAD will run much faster on Intel servers with a high clock speed.
The simplest way to allow a Windows R3LOAD server to log on to a Unix Oracle server is to change the password of
the SAP<SID> user (schema systems) or the sapr3 user (non-schema systems) to "sapr3" without quotes. This
password is hardcoded into R3LOAD. If the password cannot be changed, the user account on the Windows
R3LOAD server (normally DOMAIN\<sid>adm) will need to be added to the SAPUSER table as OPS$<DOMAIN>\<SAPSID>ADM.

10. SQL Server Target System Settings

It is recommended to use Windows Server 2012 R2. Only 64-bit platforms are supported; 32-bit platforms are now
deprecated and customers are instructed not to use 32-bit versions of Windows or SQL Server. SAP R/3 4.6C
offers no native x64 kernel, however the 4.6C 32-bit kernel can run on Windows 2003 x64 and is fully supported by
Microsoft & SAP.

The SQL Server database should be manually extended so that the SQL Server automatic file growth mechanism
is not used, as it will slow the import. The transaction log file should be increased to ~100+GB for larger systems.
Migrations of 10TB+ systems need around 1-3TB of transaction log.
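The manual pre-extension can be sketched with ALTER DATABASE; the database name, logical file names and sizes below are placeholders, not recommendations:

```sql
-- Pre-grow data and log files before the import so autogrow is never triggered.
-- Database name, logical file names and sizes are examples only.
ALTER DATABASE [PRD] MODIFY FILE (NAME = 'PRDDATA1', SIZE = 500GB);
ALTER DATABASE [PRD] MODIFY FILE (NAME = 'PRDLOG1',  SIZE = 100GB);
```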

Max degree of parallelism (MAXDOP) should usually be set to 1. Due to the logic for parallelizing index REBUILD or
CREATE statements, it is highly likely that most index creation on SAP systems will be single threaded irrespective
of the MAXDOP specified. Some indexes may benefit from a MAXDOP of 4. Do not set MAXDOP to 0.
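MAXDOP is an instance-wide setting; a minimal sketch of setting it to 1 with sp_configure:

```sql
-- Set max degree of parallelism to 1 for the import phase.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```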

To activate minimally logged operations, start SQL Server with trace flag 610. See SAP Note 1482275.
If R3LOAD or SQL Server aborts during the import, you need to drop all the tables that were in process at that
time. The reason is that there is a small time window where data should be written to disk in a synchronous
manner, but the writes are asynchronous. Therefore the consistency of the table cannot be guaranteed; the
table should be dropped and the import restarted.

In general we recommend trace flags 610, 1118 and 1117. To display active trace flags run DBCC TRACESTATUS.
Remove trace flag 610 after the migration.
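A sketch of checking and enabling the flags at runtime (-T610 can also be added as a SQL Server startup parameter; remember to remove 610 again after the migration):

```sql
-- Show all globally active trace flags, then enable 610, 1117 and 1118 globally.
DBCC TRACESTATUS(-1);
DBCC TRACEON(610, 1117, 1118, -1);
```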

11. Setting up a standalone R3LOAD server – SQL and Oracle


OS/DB migrations larger than 0.5-1TB will benefit from separating the R3LOAD import processes from the
database server:
a. Install the SQL Server 2012, 2014 or 2016 ODBC client libraries only
b. Apply the latest Service Pack for the client libraries
c. Install the SAP Java SDK on the server
d. Copy the latest versions of R3LOAD.EXE, DBMSSLIB.DLL and MIGMON.SAR (MIGMON.SAR can be
found on the SAP installation master DVD)
e. Set the system environment variables MSSQL_DBNAME=<SID>, MSSQL_SCHEMA=<sid>,
MSSQL_SERVER=<hostname> (or MSSQL_SERVER=<hostname>\<inst> for a named instance) and
dbms_type=mss
f. If database logins are required, manually create the users Domain\<sid>adm and
Domain\SAPService<SID> and then use the script attached to Note 1294762 - SCHEMA4SAP.VBS
g. Log on as Domain\<sid>adm and run R3LOAD -testconnect
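The variables from step (e) can be collected into a small batch file on the R3LOAD server. A sketch; the SID "PRD" and the host name are placeholders:

```
REM Example environment for a standalone R3LOAD import server (values are placeholders)
SET MSSQL_DBNAME=PRD
SET MSSQL_SCHEMA=prd
SET MSSQL_SERVER=dbhost
SET dbms_type=mss
```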

For creating a R3LOAD server for exporting an Oracle system

a. Install the full 10g/11g/12c x64 client for Windows – not just the SAP client. It is easiest to work with the
full client.
b. Download the Oracle R3LOAD and DBSL – unzip and place in a directory such as
C:\Export\Oracle\Kernel
c. Set the follow Environment variables (it might be useful to make a small batch file for this):
SET DBMS_TYPE=ora
SET dbs_ora_schema=SAPR3 or <SID>SAP for schema systems
SET dbs_ora_tnsname=<SID>
SET NLS_LANG=AMERICAN_AMERICA.WE8DEC (or UTF8 if Unicode)
SET ORACLE_HOME=D:\oracle
SET ORACLE_SID=<SID>
SET SAPDATA_HOME= D:\Export\Oracle\Kernel
SET SAPEXE=D:\Export\Oracle\Kernel
SET SAPLOCALHOST=<set to local hostname>
SET SAPSYSTEMNAME=<SID>
SET TNS_ADMIN= D:\oracle\....ora home..\network\admin
d. Edit the SQLNET.ORA and TNSNAMES.ORA to resemble the below
################
# Filename......: sqlnet.ora
# Created.......: created by SAP AG, R/3 Rel. >= 6.10
# Name..........:
# Date..........:
# @(#) $Id: //bc/700-
1_REL/src/ins/SAPINST/impl/tpls/ora/ind/SQLNET.ORA#4 $
################
AUTOMATIC_IPC = ON
TRACE_LEVEL_CLIENT = OFF
NAMES.DEFAULT_DOMAIN = WORLD
SQLNET.EXPIRE_TIME = 10
SQLNET.AUTHENTICATION_SERVICES = (NTS)
DEFAULT_SDU_SIZE=32768
################
# Filename......: tnsnames.ora
# Created.......: created by SAP AG, R/3 Rel. >= 6.10
# @(#) $Id: //bc/700-
1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
################
<SID>.WORLD=
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(COMMUNITY = SAP.WORLD)
(PROTOCOL = TCP)
(HOST = <hostname goes here>)
(PORT = 1527)   <- can also be 1521; check each system
)
)
(CONNECT_DATA =
(SID = <SID>)
(GLOBAL_NAME = <SID>.WORLD)
)
)
e. Edit the hosts file on the UNIX server and enter the Windows R3LOAD server ip address and hostname.
On the Windows server edit the hosts file and enter the UNIX server ip address and hostname. Test with
PING
f. Test the Oracle connectivity with TNSPING <SID>.WORLD.
g. Run the script attached to SAP Notes 50088 and 361641 (userdomain will usually be the local hostname
of the R3LOAD server if the server is not a domain member). This script will create the OPS$ users that
are needed for SAP to log in to Oracle: sqlplus /NOLOG @oradbusr.sql SCHEMAOWNER UNIX
SAP_SID x (The reason for using the UNIX script is that Oracle on UNIX cannot "see" the hostname of
the Windows server)
h. Try logging into the Oracle database from the Windows server with the following syntax (for schema
systems replace SAPR3 with <SID>SAP): sqlplus SAPR3/sap@<SID>.WORLD
i. To ensure correct authorizations try running SELECT * FROM T000;
j. Try running R3LOAD -testconnect (remember to set the environment variables first)

For DB2 databases it is recommended to set these environment variables and then run the DB2 client installer
DB2CLIINIPATH=C:\export\client
DB2DBDFT=<SID>
DB2INSTANCE=db2<sid>
DBMS_TYPE=db6
DBS_DB6_SCHEMA=sap<sid>
DBS_DB6_USER=sap<sid>
DSCDB6HOME=<db server name>
EXPORT_DIR=C:\export
JAVA_HOME=C:\export\sapjvm_8
rsdb_ssfs_connect=0
SAPSYSTEMNAME=<SID>

The directory specified in DB2CLIINIPATH must contain the DB2 configuration file.

2457164 - dscdb6.conf supported password length
582875 - DB6: SAP cannot log onto the database

The above procedure works for DB6. For DB2 on z/OS and DB4 a different procedure is required; DB4 needs the
NTAUTH file.

12. Network Settings


Due to the very high volume of traffic it is recommended to configure 10Gb Ethernet links between a server
running R3LOAD and the SQL Server.

It is further recommended to configure Jumbo Frames on both the R3LOAD server and the database server, during
both the export and the import. Note that the Jumbo Frame size must be configured identically on the database
server, the switch ports used by both the DB and R3LOAD servers, and the NIC on the R3LOAD server. The
normal value for Jumbo Frames is 9000 or 9014, though some network devices may only allow 9004. It is
essential that this value is the same (or higher) on all devices or network errors will occur.
If high kernel times are seen on specific logical processors in Task Manager, check the RSS options on the NIC
cards. Windows 2008 and higher allows RSS ring configuration, usually up to 8 CPUs on 1Gbps NICs and up
to 16 on 10Gbps cards. Perfmon can be used to monitor "Queued DPC" per CPU; this indicates how many
CPUs are being used for network DPC traffic and how many RSS rings are configurable. RSS ring
configuration can be changed under the Advanced Network Properties for most NIC drivers. RSS does not function
well in combination with 3rd-party network teaming software. It is recommended to use Windows Server 2016,
which has built-in network teaming.

On Azure it is not possible to configure Jumbo Frames; instead, the Azure Accelerated Networking feature should be
used if possible. See the blog post:
Network Settings, Network Teaming, Receive Side Scaling (RSS ...

In some cases the network traffic generated from an import will be so great network errors may cause R3LOAD to
fail. If this occurs please review Microsoft KB899599.

It is also recommended to review Note 392892 and implement http://support.microsoft.com/kb/948496; this is
required on Win2003. In all cases use Windows Server 2016 if possible. Windows Server 2016 includes
integrated teaming that has proven to be a vast improvement over previous teaming solutions in Windows 2008.

Note1: Network settings are critical for TCPIP based export/imports.


Note2: Most software based Network Teaming utilities offer only Transmit (Tx) aggregation. SLB or LACP Switch
Based Teaming (requiring trunking on the switch) is required to get Receive (Rx) aggregation.
Note3: Advanced consultants may wish to set up Soft-NUMA on large NUMA-based systems. Testing has
shown a 20-30% performance boost.
http://msdn.microsoft.com/en-us/library/ms345346.aspx
http://blogs.msdn.com/ddperf/archive/2008/09/09/mainstream-numa-and-the-tcp-ip-stack-part-iv-paralleling-tcp-
ip.aspx

13. Disabling or Deleting Secondary Indexes


Secondary indexes can be disabled, and certain long-running indexes can then be built online after the system is
restarted and validated. To do this remove the index definition from the STR structure file. After the system is
restarted, 10-20 indexes can be built online simultaneously. It is recommended to start the ONLINE index build
phase prior to users logging onto the system. If using SQL Server 2016, start the index build with a low priority lock.
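A low-priority online rebuild can be sketched as follows; the schema, table and index names are placeholders, and WAIT_AT_LOW_PRIORITY requires SQL Server 2014 or later:

```sql
-- Rebuild one index online, waiting at low priority so the running workload is not blocked.
-- Index, table and schema names are examples only.
ALTER INDEX [ZTABLE~001] ON [prd].[ZTABLE]
REBUILD WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
        (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)),
        MAXDOP = 4);
```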

14. Hyperthreading
It is recommended to enable Hyperthreading on all Intel processors. In very rare cases Hyperthreading may need to
be disabled in the server BIOS. Review Note 1612283 - Hardware Configuration Standards and Guidance.
15. Purge non-critical tables
Most SAP systems have tables that contain unnecessary data. In many cases these tables can be purged:
Purge SW* tables: Notes 738148 and 702356
Purge Basis tables: Note 2388483 - How-To: Data Management for Technical Tables

Note: The SAP Note 706478 contains references to many other OSS notes that contain procedures for purging or
archiving many “system” type tables. These tables do not contain business transaction data. In all cases please
use the SAP documented procedures for purging or archiving these tables.

16. TCPIP Port Export/Import Procedure


TCPIP Port based export to a SQL Server system is fully supported. In general we recommend this method for
advanced migration consultants only.

In such an export procedure R3LOAD will communicate directly with the R3LOAD process on the target server.
No dump files will be created as all data is passed via TCPIP. A socket export/import reduces the R3LOAD CPU
consumption and may allow slow legacy servers to run a larger total number of R3LOAD processes.

It is not possible to use TCPIP Port based migration procedure when converting from non-Unicode to Unicode.

It is possible to migrate a Unicode SAP system running on an Oracle database to a Unicode SAP system running
on SQL Server (even if the source system is running on a big-endian (4102) platform and SQL Server is on a
little-endian (4103) platform).

Note: during a socket export the OrderBy parameter on the import server must not be set or the import will crash
with a Java error (the import order is set by the export server).

17. BW Specific Recommendations


SAP BW has been integrated with SQL Server Column Store and other SQL Server features such as partitioning.
The reports SMIGR_CREATE_DDL and RS_BW_POSTMIGRATION have been redeveloped to convert BW
tables to column store during a migration.
As of April 2017, on all versions of SAP BW from BW 7.00 to BW 7.50, the default process should be:

a. Ensure the full SAP support stack is reasonably up to date (capable of supporting SQL Server 2016)
b. Apply any OSS Notes for SMIGR_CREATE_DDL listed in Note 888210
c. Run SMIGR_CREATE_DDL with the "SQL Server 2016 (all column-store)" option selected
d. Export the database
e. Import the database
f. Run RS_BW_POSTMIGRATION with the default selection for a heterogeneous migration

The default outcome is to automatically convert all F fact and E fact cubes to column store. If any cubes are not
converted to column store, open a support message in queue BW-SYS-DB-MSS.
On other SAP components it may be possible to update only the SAP_BASIS support package to allow the use of
the most recent SQL Server version. On BW systems this is not possible: the entire Support Package Stack must
be upgraded to support a specific version of SQL Server.

It is recommended to review:
Recent SAP BW improvements for SQL Server
Improved SAP compression tool MSSCOMPRESS
Improvements of SAP (BW) System Copy

Modern versions of SQL Server support up to 15,000 table partitions. It is still recommended to check for objects
with many partitions on the source and target systems. Tables migrated to SQL Server will be re-partitioned even
if the source system is not partitioned; see Note 1471910 - SQL Server Partitioning in System Copies and DB Migrations.
The number of partitions on SAP BW systems might differ between the source and target systems depending on
several factors. More information on partitioning on BW systems can be found here:
https://blogs.msdn.microsoft.com/saponsqlserver/2013/03/19/optimizing-bw-query-performance/
In general it is recommended to keep the number of partitions below around 500. A typical approach is to do "BW
Compression" on F fact tables after the data has been validated for 2-6 weeks.

To check the partition count before and after migrating a SAP BW system there are several options:

1. Use report MSSCOMPRESS on the target system and copy the results into Excel and sort
2. Run the statement below
select COUNT(partition_id),object_name(object_id),index_id
from sys.partitions
where OBJECTPROPERTY(object_id,'IsUserTable')=1
group by object_id, index_id
order by 2,3 asc

To check on an Oracle source system:


You can use the following query on your Oracle database in sqlplus to check whether tables with more than 999
partitions exist:
select table_name from user_part_tables where partition_count >= 999 and
table_name like '/%';

The following two notes are needed when importing onto SQL Server 2008:
SAP Note 1157904 and Note 1364683

To repartition systems follow note 1471910

18. Unicode Conversion Specific Recommendation


Please see the notes on Unicode conversion and the restrictions on unsorted and socket exports. New versions of
R3LOAD will always export cluster tables sorted.
OSS Note 1139642 has been corrected to accurately state Unicode storage requirements on SQL Server. Since
SQL Server 2008 R2, the storage efficiency of SQL Server is probably at least as good as, or better than, other DBMS.

19. SQL Server PAGE Compression


Full PAGE compression of all tables and indexes is the default setting for all SAP ABAP applications. Do not
change this unless SAP Development Support suggests doing so. Please see the blogs at
http://blogs.msdn.com/b/saponsqlserver/ for further information.

To check the compression properties of a particular table run the following in SQL Management Studio

select OBJECT_NAME(object_id), index_id, data_compression, data_compression_desc
from sys.partitions where object_id = OBJECT_ID('<TABLENAME>');
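A quick summary across all user tables can be obtained with a variation on the same view; on a default SAP installation almost all rows should report PAGE:

```sql
-- Count partitions per compression type across all user tables.
select data_compression_desc, count(*) as partition_count
from sys.partitions
where OBJECTPROPERTY(object_id, 'IsUserTable') = 1
group by data_compression_desc;
```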

20. Overview of SAP Tools & Releases

Basis Release            3.1I  4.0B  4.5B  4.6x  6.20  6.40  7.00 and higher
R3SETUP                  4.6D  4.6D  4.6D  4.6D  -     -     -
SAPINST                  -     -     -     -     6.40  6.40  7.00
R3LDCTL                  3.1I  4.0B  4.5B  4.6D  6.40  6.40  7.00
R3SZCHK                  -     -     4.5B  4.6D  6.40  6.40  7.00
R3LOAD                   3.1I  4.0B  4.5B  4.6D  6.40  6.40  7.00
MIGMON                   yes   yes   yes   yes   yes   yes   yes
R3TA                     no    no    no    no    yes   yes   yes
DISTMON                  no    no    no    no    yes   yes   yes
Package Splitter (Java)  yes   yes   yes   yes   yes   yes   yes
SQL 2000 SP4a            no    no    yes   yes   yes   yes   yes (out of support)
SQL 2005 SP4             no    no    no    yes   yes   yes   yes (out of support)
SQL 2008 SP4             no    no    no    no    no    no    yes (not recommended)
SQL 2008 R2 SP3          no    no    no    no    no    no    yes (not recommended)
SQL 2012 SP3             no    no    no    no    no    no    yes (Intel/AMD x64 only)
SQL 2014 SP2 CU4         no    no    no    no    no    no    yes (Intel/AMD x64 only)
SQL 2016 SP1 CU2         no    no    no    no    no    no    yes (Intel/AMD x64 only)
SQL vNext*               no    no    no    no    no    no    yes (Intel/AMD x64 only)
Win 2003 SP2 x64         no    no    yes   yes   yes   yes   yes (out of support)
Win 2008 SP2             no    no    no    no    no    no    yes (not recommended)
Win 2008 R2 SP1          no    no    no    no    no    no    yes (not recommended)
Win 2012                 no    no    no    no    no    no    yes (not recommended)
Win 2012 R2              no    no    no    no    no    no    yes (Intel/AMD x64 only)
Win 2016                 no    no    no    no    no    no    yes (Intel/AMD x64 only)

*In CTP release

21. Oracle or DB2 ABAP Hints or EXEC SQL – How to handle these
In general we have found that the SQL Server Optimizer does not require as many hints as Oracle. Therefore it is
our standard recommendation to ignore Oracle or DB2 hints on SQL Server. Only if a specific performance
problem is identified should a SQL Server ABAP hint be added. This applies to both SAP standard and custom Z
ABAP. We strongly recommend against manually converting all Oracle ABAP hints into their SQL Server form.
This is time-consuming and unnecessary. SAP provides a small report to scan ABAP for hints and EXEC
SQL: report RS_ABAP_SOURCE_SCAN.
Review http://blogs.msdn.com/b/saponsqlserver/archive/2011/08/31/how-to-integrate-sql-server-specific-hints-in-abap.aspx

22. Run sp_updatestats after an Import


After importing a database with R3load it is essential to run sp_updatestats, because table statistics are not
automatically updated during an import. Run sp_updatestats as part of the post-processing steps. Typically
sp_updatestats will run for 30-60 minutes on a 1-2TB database.
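The post-processing step itself is a single call; the database name below is a placeholder:

```sql
-- Update statistics on all tables in the SAP database after the R3load import.
USE [PRD];
EXEC sp_updatestats;
```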

23. Exporting from UNIX Servers


In some situations it may be required to run SAPInst and R3load on legacy UNIX servers. If possible it is
recommended to use Intel servers to run R3load, as they have proven to be vastly faster than UNIX servers.

One simple way to do this is to run all the preparation steps, such as table splitting, on the UNIX server, then
copy the export directory with the STR, WHR and other required files to a Windows Intel server and manually
run Migmon. SWPM/SAPInst offers an option during the system copy to "Manually start Migmon".

However, if there is no choice other than to run R3load on the UNIX server, follow the procedure below:
1. Download the latest SL Toolset https://service.sap.com/sltoolset (SWPM)
2. Log on to the database server (not supported on application servers) and run ./sapinst -nogui as root
3. On a Windows server run sapinstgui.exe and connect to the UNIX server on port 21212
4. Export the system using the SAPinst GUI
5. FTP the dump files to the Windows server and import

Review Note 1680045 - some old operating systems are no longer supported.
A reference for vi and for setting UNIX environment variables such as JAVA_HOME may be useful here.

24. SAP 4.7, ECC 5.0 on Windows 2008 R2 or Windows Server 2012 (R2)
SAP only supports Basis 7.0 or higher components on Windows 2008 R2; however, it is possible to migrate from
UNIX/Oracle to Windows 2008 R2 and SQL Server on older releases provided an upgrade is performed
immediately afterwards.

This is documented explicitly in:


Note 1443424 - Migration path to Win2008/MSSQL2008 for 4.6C and 6.20/6.40
Note 1476928 - System copy of SAP systems on Windows 2008 (R2): SQL Server
Note 1783528 - Migration path to Win2012/MSSQL2012 for 4.6C and 6.20/6.40

III. System Copy of a 6.20/6.40 SAP System


You must perform the system copy as described in the system copy guide.
You can either migrate your system by performing a homogeneous system copy with
the database-specific detach/attach method, or a heterogeneous system copy with
the database-independent R3load method. Use the heterogeneous system copy
procedure to migrate systems from other database platforms to SQL Server.

25. SQL Server “slipstream” installations


Download the latest SQL 2012 service pack and CU from http://blogs.msdn.com/b/sqlreleaseservices/ and place
in a central source along with SQL 2012. Run the following commands to automatically patch SQL 2012 during
install:
C:\SAPCD\SQL2012\SQLFULL_x64_ENU>setup /Action=Install /UpdateEnabled=TRUE
/UpdateSource="C:\SAPCD\SQL2012SP1"

Also review SQL4SAP_docu.pdf as detailed in:


1684545 - SAP Installation Media and SQL4SAP for SQL Server 2012
1970448 - SAP Installation Media and SQL4SAP for SQL Server 2014
2313067 SAP Installation Media and SQL4SAP for SQL Server 2016

26. Common Problems & Errors


The system copy procedure must be followed exactly or some of the errors below may occur.

a. ERROR: ExeFastLoad: rc = 2
Please review SAP Note 942540. It is probable that the DFACT.SQL file has not been generated by the
SMIGR_CREATE_DDL report, or that the file is not in the <export dir>\DB\MSS directory. If the problem continues,
try setting NO_BCP=1 to disable FASTLOAD. This allows R3LOAD to output a more specific error
message. Also check the SQL Server Error Log.
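The SQL Server Error Log can be checked directly from a query window with the long-standing (though undocumented) sp_readerrorlog procedure; the parameters shown are an assumption of typical usage:

```sql
-- Read the current SQL Server error log (log number 0) of the database
-- engine (log type 1) and filter for lines containing 'Error'.
EXEC sp_readerrorlog 0, 1, N'Error';
```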

b. SQL Stack Dump LATCH TIMEOUT


It is likely that the SAPDATAx files or the SAPLOG1 file was not created large enough and SQL Server has tried
to extend the file under extremely heavy load. Expand the database to the expected final size prior to beginning
the import, and ensure the log file is at least 100GB for larger systems.
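Pre-sizing can be scripted; a hedged sketch, where the logical file names and sizes are placeholders that should be taken from sys.database_files of the actual system:

```sql
-- Grow the data and log files to the expected final size before the import,
-- so SQL Server never has to autogrow them under heavy R3load activity.
-- Logical file names and sizes below are examples only.
ALTER DATABASE [SID] MODIFY FILE (NAME = N'SIDDATA1', SIZE = 500GB);
ALTER DATABASE [SID] MODIFY FILE (NAME = N'SIDLOG1',  SIZE = 100GB);
```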

c. Dump on Logon Screen makes it impossible to logon: DYNPRO_ITAB_ERROR See Note 1287210

d. Deadlock error in package log file


If the message "Transaction was deadlocked on lock resources with another process and has been chosen as
the deadlock victim" appears, it can occur on tables with a large number of splits. In the majority of cases
the fastest resolution is to drop the table, reset the status of the TSK files, and import all packages of
the split table again.
(IMP) INFO: EndFastLoad failed with <2: Bulk-copy commit unsuccessful:[208]
Invalid object name '<sid>.MSSDEADLCK'.
[1205] Transaction (Process ID xxx) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
[208] Invalid object nam>
(IMP) ERROR: EndFastload: rc = 2
Reduce BCP_BATCH_SIZE
Review this blog https://blogs.msdn.microsoft.com/saponsqlserver/2016/01/27/improvements-of-sap-bw-
system-copy/

e. Various Error Messages Installing 4.6x/4.7 on x64 Servers


All SAP systems should run on 64bit versions of Windows and SQL Server. The installation routines for
4.6C/D were created before x64 versions of Windows were available. Because of this there may be errors.
Please review Notes 862789, 960769, 899111 & 814834.

f. 4.6C Error (BEK) ERROR: SlicGetInstallationNo() failed


The system environment variable SAPSYSTEMNAME = <SID> is not set. Set this variable for the user
<sid>adm

g. 4.6C error in dev_w* - Long Datatype Conversion not performed” please see Note 126973 - SICK messages
with MS SQL Server

h. R3SETUP and possibly very old SAPInst may attempt to create a SAP database with code page 850BIN prior
to the import of the dump files. Note 799058 and 600027 strictly forbid the use of code page 850BIN and
require conversion to 850BIN2.
Also note that the utility for converting codepage 850BIN to 850BIN2 does not work on SQL 2005 or higher
(the fast conversion feature was dropped from SQL 2005). Therefore care should be taken to avoid the case
where R3SETUP creates a 850BIN database on SQL 2005 and then MIGMON is used to import the system
into this database. Clearly this will result in an unsupported system running code page 850BIN on SQL 2005.
Conversion will be impossible and the import will need to be repeated after dropping and then manually
creating the database.
The following commands display the server (default) and database collations:

SELECT SERVERPROPERTY('Collation')
SQL_Latin1_General_CP850_BIN2
SELECT DATABASEPROPERTYEX('<SID>', 'Collation')
SQL_Latin1_General_CP850_BIN2

An incorrect code page will sometimes produce import errors with "ERROR: DbSlEndModify failed rc = 26"

i. ABAP Shortdump & SM21 error max. marker count = 2090
>B *** ERROR => dbtran ERROR (set_input_da_spec): statement too big
> marker count = 2576 > max. marker count = 2090

This is because the limit on the number of parameters of a stored procedure is 2100 on SQL Server; it is
higher on other databases.
http://technet.microsoft.com/en-us/library/ms191132.aspx

It is possible to change queries with > 2090 parameters to “literal” queries. Review SAP Note 1552952

j. In very rare cases a JOIN on Oracle may not work on SQL Server. This can happen on systems such as
CRM where GUIDs are stored in RAW datatypes and a JOIN is attempted on a CHAR datatype. Please
review Note 1294101

k. A simple and easy way to suspend and release all batch jobs on a system is to run these reports in SE38
Suspend: BTCTRNS1
Release: BTCTRNS2

SQL statement that includes the Jobs for EarlyWatch-Alert (Standard):

update sapr3.tbtco set status = 'P' where jobname not like 'EU%' and jobname not
like 'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%'
and status = 'S'

delete from sapr3.tbtcs where jobname not like 'EU%' and jobname not like
'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%'

SQL statement that includes the Jobs for EarlyWatch-Alert (if system is just being moved):

update sapr3.tbtco set status = 'P' where jobname not like 'EU%' and jobname not
like 'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%'
and jobname not like 'SCUI%' and jobname not like 'AUTO_SESSION_MANAGER' and
status = 'S'

delete from sapr3.tbtcs where jobname not like 'EU%' and jobname not like
'RDDIMP%' and jobname not like 'SAP%' and jobname not like 'COLLECTOR_FOR%' and
jobname not like 'SCUI%' and jobname not like 'AUTO_SESSION_MANAGER'

l. These commands will purge old UNIX host profile parameters. Import new profiles with RZ10.
Do not migrate UNIX-style profile parameters to Windows/SQL Server. Use zero administration memory
management and keep the default parameters in general.

truncate table prd.TPFET


truncate table prd.TPFHT

27. Troubleshooting Tips


a. R3LOAD Connection Problems
Review SAP Note 98678. The system environment variable MSSQL_DBSLPROFILE=1 will write a trace file
dbsl_<pid> to the current directory. This file will become very large and seriously reduce the performance of a
system. In some cases it may be necessary to set the SAPSYSTEMNAME=<SID> system environment
variable.
Additional logging can be switched on with environment variable R3LOAD_TL = 1, 2 or 3

b. R3LOAD Cannot Find DFACT.SQL, STR or Dumpfiles


The system environment variable R3LOAD_WL=1 will output extra information in the <package>.LOG file

c. Scan log files with Windows FINDSTR (Windows version of grep)


The command line below will output all the error lines from the export or import directory
Findstr /C:ERROR: <path to log files>\*.log

d. ABAP Dump DATA_OFFSET_TOO_LARGE -> CX_SY_RANGE_OUT_OF_BOUNDS


This problem is usually caused by overly long hostnames in combination with local extended buffering of some
number ranges. Hostname requirements are documented in SAP Note 611361. Review
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/25163 and consider replacing extended local buffering

with parallel buffering as per Note 599157. It is also possible to use virtual hostnames to work around this
issue.

e. UNIX and Windows carriage return (CR, 0x0D) formatting is different. SAP Note 27 (not a mistake, Note 27)
contains the profile parameter abap/NTfmode. Also see Note 788907.

f. Copying an open file is possible on UNIX but the file is locked on Windows. If the ABAP command OPEN
DATASET is used to open a file on a UNIX OS, it is still possible to copy the file; on Windows a lock on the
file is held. It is required (and best practice) to ensure a CLOSE DATASET ABAP command is issued before
manipulating a file externally to the ABAP server.

g. A large number of R3LOAD processes are configured and Oracle issues this error:

error message returned by DbSl:
ORA-00018: maximum number of sessions exceeded
(DB) INFO: disconnected from DB

Solution:
Increase the parameters in
unix: $ORACLE_HOME/dbs/init<DBSID>.ora
windows: %ORACLE_HOME%\database\init<DBSID>.ora
PROCESSES=1000
SESSIONS=1105
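Before raising the limits it can be useful to compare current usage against the configured values; a small sketch using standard Oracle dynamic views:

```sql
-- Compare the current session count against the configured limits
-- (run in sqlplus as sysdba).
SELECT COUNT(*) AS current_sessions FROM v$session;
SELECT name, value FROM v$parameter
 WHERE name IN ('processes', 'sessions');
```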

h. Sorting some BW or other large tables can consume massive amounts of PSAPTEMP. If this occurs there
are two options: (1) switch to an unsorted export (see the earlier section in this document) or (2) run the
commands below to increase PSAPTEMP

(EXP) ERROR: DbSlExeRead failed


rc = 99, table "/BIC/B0000585000"
(SQL error 1652)
error message returned by DbSl:
ORA-01652: unable to extend temp segment by 128 in tablespace PSAPTEMP
(DB) INFO: disconnected from DB

sqlplus /nolog
SQL> connect / as sysdba
SQL> ALTER TABLESPACE PSAPTEMP ADD TEMPFILE
'E:\oracle\BWP\sapdata1\temp_1\TEMP.DATA2' SIZE 20000M;
SQL> SELECT * FROM V$TEMP_SPACE_HEADER;

i. FASTLOAD Errors
The system environment variable NO_BCP=1 will override the –loadprocedure –fast option and force
R3LOAD to use the normal DBSL interface for import

j. Special characters are corrupted


Please review SAP Note 1279882

k. To Enable fastload on LOB columns in 6.40 & 7.00 set BCP_LOB=1 and review note 1156361

l. If this error occurs during a MDMP Unicode conversion review 992956


(DB) INFO: UMGPMDII~WRD created
(DB) INFO: UMGPMDIT created
(DB) INFO: UMGPMDIT~0 created
(IMP) INFO: ExeFastLoad failed with <2: BCP Commit failed:[2627] Violation of
PRIMARY KEY constraint 'UMGPMDIT~0'. Cannot insert duplicate key in object
'dbo.UMGPMDIT'.
[3621] The statement has been terminated.>
(IMP) ERROR: ExeFastload: rc = 2
(DB) INFO: disconnected from DB
m. ASSERTION_FAILED during generation of DFACT.SQL. Please cross reference 984396 first. If this is
unsuccessful please run RSDDS_CHANGERUN_TMPTABLS_DEL

n. If the following error is seen read OSS Note 1721059. Atomic Bind on SQL 2012
(DB) ERROR: DDL statement failed

(INSERT INTO @XSQL VALUES (' sap_atomic_defaultbind 0,
'/BI0/E0BWTC_C02', 'KEY_0BWTC_C02P' ') )
DbSlExecute: rc = 103
(SQL error 2812)

o. For logon or other license profile errors, check the entries in transaction SECSTORE and implement Note 1532825

p. MaxDB migrations using a Windows R3LOAD server require that the appropriate security is in place to allow
connection to MaxDB. See SAP Note 39439 - XUSER entries for SAP DB and MaxDB. The syntax should look
similar to this:
xuser -U w -u <SID>ADM,<password> -d <SID> -n <maxdbhost> -S SAPR3 set
q. Below is a useful script to run if an Import fails and the entire SAP database needs to be purged of all tables.
Thanks to Amit for providing this. WARNING: Running this script will drop all tables in the current database

Use <SID>;
EXEC sp_MSforeachtable 'drop table ?';

r. Towards the end of an import there may be many "suspended" SQL processes. These can be viewed with
SQL Management Studio Activity Monitor. Clicking on a suspended process may show that it is performing a
CREATE INDEX. Towards the end of an import most of the table data import is complete and SQL Server is
building secondary indexes (the primary clustered index is built while the table data is loaded). Often these
secondary indexes are non-standard Z indexes or sometimes unused SAP standard indexes. These indexes
may be deleted in the source system before export, or created after the system has been restarted and the
downtime period is over. SQL 2005 and higher supports online index creation.
The memory consumption during index creation can be substantial, especially if many indexes are being built
simultaneously. This script is useful to detect situations when SQL is suspending index creation due to
insufficient memory

-- current memory grants per query/session

select
session_id, request_time, grant_time ,
requested_memory_kb / ( 1024.0 * 1024 ) as requested_memory_gb ,
granted_memory_kb / ( 1024.0 * 1024 ) as granted_memory_gb ,
used_memory_kb / ( 1024.0 * 1024 ) as used_memory_gb ,
st.text
from
sys.dm_exec_query_memory_grants g cross apply
sys.dm_exec_sql_text(sql_handle) as st
-- uncomment the where conditions as needed
-- where grant_time is not null -- these sessions are using memory allocations
-- where grant_time is null -- these sessions are waiting for memory allocations

-- overall server status

select * from sys.dm_exec_query_resource_semaphores

If many R3LOAD BCP or CREATE INDEX Processes are in status SUSPENDED with
RESOURCE_SEMAPHORE wait type in the DMV below:

select session_id, request_id, start_time, status,
command, wait_type, wait_resource, wait_time, last_wait_type,
blocking_session_id
from sys.dm_exec_requests where session_id > 49 order by wait_time desc;

If this is the case, it may be useful to cap the amount of memory that a particular secondary index build task
can consume; this forces the secondary index build to use TEMPDB. The way to cap memory is to activate
Resource Governor (by right-clicking on it in SSMS) and adjust the memory percentage value as needed.
By default SQL Server can easily consume 10-40GB of RAM per index build if no limit is set (the actual value
depends on the amount of RAM in the server). A large memory grant substantially improves index build speed;
however, if too many secondary indexes are built at one time this will consume all available memory, thereby
blocking other resources. It is recommended to monitor TempDB utilization when setting this option.
USE master;
BEGIN TRAN;
-- Create 1 workload group for SAP R3Load
-- Workload group is getting assigned to default pool automatically
CREATE WORKLOAD GROUP R3load;
GO
COMMIT TRAN;
go
-- Create a classification function.
CREATE FUNCTION dbo.classify_r3load() RETURNS sysname
WITH SCHEMABINDING AS
BEGIN
DECLARE @grp_name sysname
IF (APP_NAME() LIKE 'R3 00%')
SET @grp_name = 'R3load'
RETURN @grp_name
END;
GO
-- Register the classifier function with Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.classify_r3load);
GO
--change maximum memory grant a query can get. Default = 25%
ALTER WORKLOAD GROUP R3load with (REQUEST_MAX_MEMORY_GRANT_PERCENT=5);
go
-- Start Resource Governor
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
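Because restricting SAP memory with Resource Governor is dangerous once the application is running, the configuration above should be removed again before SAP is started. A sketch of the cleanup (names match the script above):

```sql
-- Unregister the classifier first, then drop the group and function.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
DROP WORKLOAD GROUP R3load;
DROP FUNCTION dbo.classify_r3load;
GO
-- Optionally disable Resource Governor entirely.
ALTER RESOURCE GOVERNOR DISABLE;
```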

s. ONLINE Rebuild of Large Secondary Indexes


Tables such as BSIS may have huge secondary indexes. These can be deleted from the STR files and
created ONLINE after the import. This allows post-processing, and even users, to access the system while
indexes are still building.
It is recommended to make scripts and execute them via SQLCMD –S hostname –E –i <script>
**Warning: it is very dangerous to restrict SAP memory with Resource Governor. This can lead to terminations
and unexpected behavior. Remove the R3load Resource Governor configuration prior to starting the SAP application.

USE master;
BEGIN TRAN;
-- Create 1 workload group for SQLCMD
-- Workload group is getting assigned to default pool automatically
CREATE WORKLOAD GROUP SQLCMD;
GO
COMMIT TRAN;
go
-- Create a classification function.
CREATE FUNCTION dbo.classify_sqlcmd() RETURNS sysname
WITH SCHEMABINDING AS
BEGIN
DECLARE @grp_name sysname
IF (APP_NAME() LIKE 'SQLCMD%')
SET @grp_name = 'SQLCMD'
RETURN @grp_name
END;
GO
-- Register the classifier function with Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.classify_sqlcmd);
GO
--change maximum memory grant a query can get. Default = 25%
ALTER WORKLOAD GROUP SQLCMD with (REQUEST_MAX_MEMORY_GRANT_PERCENT=5);
go
-- Start Resource Governor
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

t. To transfer all objects from the dbo schema (or any other schema) into the <sid> schema, run the script
attached to OSS Note 1294762 – usr_change.sql, or copy the output of this script into a new query window
and execute it.
Also review Note 683447 - SAP Tools for MS SQL Server
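The usr_change.sql script attached to the note should be preferred; for illustration only, the general shape of such a generator is a query that emits one ALTER SCHEMA statement per table (the target schema name 'sid' is a placeholder):

```sql
-- Generate ALTER SCHEMA statements moving all dbo tables into the sid
-- schema; copy the result set into a new query window and execute it.
SELECT 'ALTER SCHEMA sid TRANSFER dbo.' + QUOTENAME(name) + ';'
FROM sys.tables
WHERE schema_id = SCHEMA_ID('dbo');
```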

u. If high WRITELOG and/or LOGBUFFER times are seen review the blog on FusionIO & other SSD devices on
http://blogs.msdn.com/b/saponsqlserver/archive/2011/06/10/accelerating-oracle-gt-sql-server-migrations-with-
fusionio-ssd-disks.aspx . FusionIO devices are highly recommended for large migrations to speed up writes
to the Transaction Log and/or tempdb. FusionIO and SSD devices are fully supported for use with SQL
Server. Always run Windows Server 2012 or higher as only this version of Windows supports the TRIM
command

v. Moving from a UNIX clustered/HA CI to an ASCS.


SAP does not support clustering a SAP central instance on modern releases. Windows MSCS only
supports clustering an ASCS or SCS (Enqueue & Message Server). None of the other components of an SAP
system are single points of failure, therefore it is not permitted to cluster them (Dialog, Batch etc.).
In all cases customers must use logon load balancing. This can be set up in transaction SMLG
There appears to be a deficit in SAP documentation about RFCs from .NET
http://help.sap.com/saphelp_nw04/helpdata/en/22/042a31488911d189490000e829fbbd/frameset.htm
A file called saprfc.ini must be created and the system or user environment variable set to the following or
similar
RFC_INI = c:\windows\saprfc.ini

Type B
Connects to an SAP system using load balancing.
The application server will be determined at runtime.
The following parameters can be used:
 DEST = <destination in RfcOpen>
 TYPE = <B: use Load Balancing feature>
 R3NAME = <name of SAP system, optional; default: destination>
 MSHOST = <host name of the message server>
 GROUP = <group name of the application servers, optional; default: PUBLIC>
 RFC_TRACE = <0/1: OFF/ON, optional; default:0(OFF)>
 ABAP_DEBUG = <0/1: OFF/ON, optional; default:0(OFF)>
 USE_SAPGUI = <0/1: OFF/ON, optional; default:0(OFF)>
In addition to the documentation provided by SAP the following may also have to be set:
 dest.SAPSystemName = "<SID>";
The service name of the message server must be defined in the ‘service’ file (<service name> = sapms<SAP
system name>).
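A minimal saprfc.ini entry of type B might therefore look as follows (SID, message server host and group are placeholders):

```
DEST=PRD_LB
TYPE=B
R3NAME=PRD
MSHOST=sapmsprd01
GROUP=PUBLIC
RFC_TRACE=0
```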

Please also review:


Note 1447900 - LIBRFC32.dll unable to get some environment variables
Note 21151 - Multiple Network adapters in SAP Servers (download the attachments and read them)
Note 129997 - Hostname and IP address lookup (from this note)
It is crucial for the operation of the R/3 system that the following
requirement is fulfilled for all hosts running R/3 instances:

a) The hostname of the computer (or the name that is configured with the
profile parameter SAPLOCALHOST) must be resolvable into an IP address.
b) This IP address must resolve back into the same hostname. If the IP
address resolves into more than one address, the hostname must be first
in the list.
c) This resolution must be identical on all R/3 server machines that
belong to the same R/3 system.
Note 364552 - Loadbalancing does not find application server
Note 1011190 - MSCS:Splitting the Central Instance After Upgrade to 7.0/7.1

28. Migration for 4.6C or lower based systems: high-level process

a. Raise an OSS message requesting a copy of the 4.6D SAP R3SETUP. (R3SETUP is no longer
available for download)
b. Prepare system according to 4.6D system copy guide
c. Install R3SETUP on the source system and update the DBMSSLIB.DLL, R3LOAD.EXE &
R3SZCHK.EXE
d. Modify R3SETUP DBEXPORT.R3S to force R3SETUP to exit just before starting the export
<xx>=R3SZCHK_IND_IND
<xx>=DBEXPCOPYEXTFILES_NT_IND
<xx>=DBR3LOADEXECDUMMY_IND_IND ***delete***
<xx>=CUSTOMER_EXIT_FOR_EXPORT ***add***
<xx>=DBEXPR3LOADEXEC_NT_IND ***delete***
<xx>=DBGETDATABASESIZE_IND_IND

[CUSTOMER_EXIT_FOR_EXPORT] ***add***
CLASS=CExitStep ***add***
EXIT=YES ***add***

e. Run R3SETUP and open DBEXPORT.R3S. Do not select the Perl based package splitter. Exit
at the customer stop point
f. Copy the Java based splitter to the R3SETUP install directory. Copy *.EXT and *.STR files from
<export dir>\DATA to the installation directory. Configure and run the Java based package
splitter tool. The package splitter will process the EXT and STR files and rename them to *.OLD
and create new EXT and STR files.
g. Copy Migration Monitor to the installation directory and run Migration Monitor to export the
system
h. Run R3SETUP and open DBEXPORT.R3S to continue the export steps. These steps will generate
the DBSIZE.TPL
i. Run Migration Time Analyzer to check which packages run the longest. Try to optimize the
export by starting these packages first using the OrderBy.txt file
j. Start a CMD.EXE session from the \Windows\syswow64 directory and run SETUP.BAT to install
R3SETUP on target server. Immediately update the DBMSSLIB.DLL and R3LOAD.EXE
k. Modify DBMIG.R3S with exit point
190=DBDBSLTESTCONNECT_NT_IND
200=MIGRATIONKEY_IND_IND
<xx>=CUSTOMER_EXIT_FOR_IMPORT ***add***
210=DBR3LOADEXECDUMMY_IND_IND ***delete***
220=DBR3LOADEXEC_NT_MSS ***delete***
230=DBR3LOADVIEWDUMMY_IND_IND ***delete***
240=DBR3LOADVIEW_NT_IND ***delete***
250=DBPOSTLOAD_NT_MSS
260=DBCONFIGURATION_NT_MSS

[CUSTOMER_EXIT_FOR_IMPORT] ***add***
CLASS=CExitStep ***add***
EXIT=YES ***add***
l. Run R3SETUP and open DBMIG.R3S. Exit at the customer stop point
m. Copy the <export dir> to the target system and run Migration Monitor to import the system
n. Run R3SETUP to continue the installation. If R3SETUP fails review note 965145
o. Run Migration Time Analyzer and review OrderBy.txt
p. Perform the post system copy steps as per the 4.6D system copy guide

29. Useful Oracle Commands


During migrations it may be useful to check how the export is running with some of the following
commands:

select sesion.sid, sesion.username, optimizer_mode, hash_value, address,
cpu_time, elapsed_time, sql_text
from v$sqlarea sqlarea, v$session sesion
where sesion.sql_hash_value = sqlarea.hash_value
and sesion.sql_address = sqlarea.address
and sesion.username is not null;

The following Oracle command can detect if an individual table is corrupt.
ANALYZE TABLE SAPSR3."/1BA/HM_WRC6_320" VALIDATE STRUCTURE;
30. SQL Server 2014 is not available as a DB version in SMIGR_CREATE_DDL
SQL Server 2014 is not listed in SMIGR_CREATE_DDL.
Implement the support packages in the note below. If it is not possible to upgrade the SAP Support Package
Stack or just the SAP_BASIS support package, then export the system and select SQL 2012.
1984903 - SMIGR_CREATE_DDL: SQL Server 2014 support
31. SAP and Microsoft will stop supporting Windows 2003
Windows 2003 server is now over 13 years old and has significant performance and reliability deficits
compared to modern Windows releases. Customers are advised to stop running SAP systems on
Windows 2003 as soon as possible. The following SAP note strongly advises customers to retire
this obsolete operating system:
2135423 - Support of SAP Products on Windows Server 2003 after 14-Jul-2015
32. Required OSS Note 1593998 - SMIGR_CREATE_DDL for MSSQL
On BW 7.00 to 7.31 ensure this OSS Note is implemented before running SMIGR_CREATE_DDL
33. R3load Import into SQL Server TDE database
SQL Server supports Transparent Data Encryption (TDE) and this feature is frequently used by cloud
customers. SQL Server TDE integrates with the Azure Key Vault service via a free utility on
SQL Server 2016 and earlier.
Review this blog: More Questions From Customers About SQL Server Transparent Data Encryption –
TDE + Azure Key Vault
TDE guarantees that database backups are secured in addition to protecting the "at rest" data.
SQL Server TDE supports common encryption algorithms; AES-256 is generally recommended.
Testing on customer systems has shown that it is faster to import directly into an empty, already
encrypted database than to apply TDE after the database import.
The overhead of importing into a TDE database is approximately 5% CPU.
Therefore it is recommended to follow this sequence:
1. Ensure the Perform Volume Maintenance Tasks privilege is assigned to the SQL Server service
account to allow Instant File Initialization (data files can then be created quickly, but log files still need
to be written and zeroed out)
2. Create a database of the desired size (for example a 7.2TB database a database of approximately
8TB would be created)
3. Ensure to create a very large transaction log as during the import a lot of log space will be
consumed
4. Configure Azure Key Vault, TDE and monitor the database encryption status and percent complete.
Status can be found in sys.dm_database_encryption_keys
5. When the Encryption Status = 3, the R3load import can start
6. When the import and post-processing are finished, create a backup
7. Restore the backup on the replica node(s) and configure AlwaysOn
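The encryption status referred to in steps 4 and 5 can be monitored with the standard DMV:

```sql
-- encryption_state = 2 means encryption in progress, 3 means encrypted;
-- start the R3load import only once state 3 is reached.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;
```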

The Azure platform also supports Disk Encryption. This technology is similar to Windows BitLocker and
can be used to encrypt the VHDs that are used by a VM.
Note: it is not necessary or beneficial to use Azure Disk Encryption (ADE) and SQL Server TDE at the same
time. We recommend against storing SQL Server data and log files that have been encrypted with TDE
on disks that have been encrypted with ADE; using both SQL Server TDE and ADE can cause
performance problems.

34. Removing SAP Business Warehouse Accelerator and Replacing with SQL Server Column Store
SQL Server Column Store, Flat cube and new technologies in SAP BW 7.50 SPS 04 greatly improve
performance and have already allowed many customers to terminate the use of SAP BWA.
Review these SAP Notes and check the SAP on SQL Server blog site for recent announcements about
SQL Server Column Store

Review SAP Note 2258401 - How to uninstall or disconnect BWA to BW

BIA index deletion task details (using report RSDDTREX_ALL_INDEX_REBUILD):

Steps to be executed in sequence


1. Transaction code RSA1 - Delete all "BWA-only" provider and objects if any.
2. Transaction code SE16- Check if there is any entry in table RSDDBOBJDIR with selection
"IDXTP=ICH"
3. Transaction SE38- Execute program RSDDTREX_ALL_INDEX_REBUILD with the following options:
"Edit All Indexes" = X
"only Delete No rebuild" = X
4. Transaction RSDDB- Check if any indexes remain
5. Transaction RSDDV- Confirm if all indexes have been deleted
6. Transaction RSCUSTA- Clear the entry in field "HPA BW Accelerator"
7. Transaction SM59- Delete RFC destination to BWA under TCP/IP connections

35. Migration to Microsoft Azure Public Cloud


The Azure public cloud platform is now a fully supported and popular platform for many customers, especially
customers with large non-production landscapes.
For customers moving from different on-premises OS/DB combinations to Windows & SQL Server
there is special guidance:

Review this blog: Top 14 Updates and New Technologies for Deploying SAP on Azure

R3load has been proven to run very well on Azure provided sufficient P30 premium storage disks are
used for the database layer. The network between the SAP R3load server and the database server
becomes critical in most migrations; it is therefore recommended to use Azure Accelerated Networking.

ExpressRoute connections between on-premises systems and Azure have a theoretical maximum of
10Gbit/sec. However, it is not possible for any one copy operation to a single target in Azure to achieve a
throughput of 10Gbit/sec, so it would not be possible to upload a SQL backup file or R3load dump files at
10Gbit/sec. Upload rates of ~200Mbit/sec are achievable; at that rate a terabyte of dump files takes roughly
11 hours to transfer.

To upload huge amounts of data to Azure it is recommended to use the Azure Import/Export Service

Before deploying SAP on Azure it is essential to completely review and understand the documentation:
http://msdn.microsoft.com/library/dn745892.aspx
Additional SAP Notes that should be reviewed include:
1928533 - SAP Applications on Azure: Supported Products and Azure VM types
2015553 - SAP on Microsoft Azure: Support prerequisites
1999351 - Troubleshooting Enhanced Azure Monitoring for SAP
1409604 - Virtualization on Windows: Enhanced monitoring
1380654 - SAP support in public cloud environments
2145537 - Support of SAP BusinessObjects BI platform on Microsoft Azure

As of April 2017 the following VM sizes are supported in production. It is not recommended to attempt
to run or install SAP on sizes smaller than those listed below, as there will likely be errors during
installation or memory exceptions.
VM Type  | VM Size        | 2-Tier SAPS | 3-Tier SAPS | DB Server for 3-Tier Supported | Required Azure Storage for Database Files
A5       | 2 CPU, 14 GB   | 1,500       | 12,000      | Yes | Standard
A6       | 4 CPU, 28 GB   | 3,000       | 25,000      | Yes | Standard
A7       | 8 CPU, 56 GB   | 6,000       | 50,000      | Yes | Standard
A8 / A10 | 8 CPU, 56 GB   | 11,000      | -           | No  | Standard
A9 / A11 | 16 CPU, 112 GB | 22,000      | -           | No  | Standard
D11      | 2 CPU, 14 GB   | 2,325       | -           | Yes | Standard
D12      | 4 CPU, 28 GB   | 4,650       | -           | Yes | Standard
D13      | 8 CPU, 56 GB   | 9,300       | -           | No  | Standard
D14      | 16 CPU, 112 GB | 18,600      | -           | No  | Standard
DS11*    | 2 CPU, 14 GB   | 2,325       | -           | Yes | Premium
DS12*    | 4 CPU, 28 GB   | 4,650       | 48,750      | Yes | Premium
DS13*    | 8 CPU, 56 GB   | 9,300       | 91,050      | Yes | Premium
DS14*    | 16 CPU, 112 GB | 18,600      | -           | Yes | Premium
DS11v2*  | 2 CPU, 14 GB   | 3,530       | -           | Yes | Premium
DS12v2*  | 4 CPU, 28 GB   | 6,680       | -           | Yes | Premium
DS13v2*  | 8 CPU, 56 GB   | 12,300      | -           | Yes | Premium
DS14v2*  | 16 CPU, 112 GB | 24,180      | -           | Yes | Premium
DS15v2*  | 20 CPU, 140 GB | 30,430      | -           | Yes | Premium
GS1**    | 2 CPU, 28 GB   | 3,580       | 34,415      | Yes | Premium
GS2**    | 4 CPU, 56 GB   | 6,900       | 78,620      | Yes | Premium
GS3**    | 8 CPU, 112 GB  | 11,870      | 137,520     | Yes | Premium
GS4**    | 16 CPU, 224 GB | 22,680      | 247,880     | Yes | Premium
GS5**    | 32 CPU, 448 GB | 41,670      | -           | Yes | Premium

VM types E and Dv3 will be certified at a later time.


Customers running SAP Business One (SAP B1) should contact the author of this blog if they wish to
run SAP B1 on Azure.

For current information about High Availability and DR solutions for Azure deployments please
check http://blogs.msdn.com/saponsqlserver/

