
What are HANA Log Segments?

When changes are made in SAP HANA, they are first written to log segments. Once a segment has been backed up (state BackedUp) and the corresponding data has been persisted to the data volume by a savepoint, the segment is released and its state becomes Free. A log segment can therefore be in one of the following states.

States of HANA Log Segments

State      Description
Writing    The writer is currently writing to this segment.
Closed     The segment has been closed by the writer.
Truncated  The segment has been truncated but not yet backed up; the next log backup will remove it.
BackedUp   The segment has been backed up, but no savepoint has been written since, so it must be kept for instance recovery.
Free       The segment is free for reuse.

When the parameters log_mode = normal and enable_auto_log_backup = yes are set, log backups are created automatically.

If we set log_mode = overwrite, no log backups are created; log segments are simply overwritten once a savepoint has been written.

Note: Log Segments and Log Backups are different.
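The lifecycle described above can be sketched as a simple state machine. The following Python snippet is only a toy illustration of the documented state order (HANA does not expose such an interface, and the linear path shown is a simplification; a segment that is truncated before its backup appears as Truncated instead):

```python
# Simplified lifecycle of a HANA log segment, following the states above.
SEGMENT_LIFECYCLE = {
    "Writing": "Closed",    # the writer finishes the segment
    "Closed": "BackedUp",   # a log backup completes
    "BackedUp": "Free",     # a savepoint is written, segment reusable
}

def next_state(state):
    """Return the next lifecycle state, or None once the segment is Free."""
    return SEGMENT_LIFECYCLE.get(state)

def lifecycle(start="Writing"):
    """Yield the states a segment passes through, starting from `start`."""
    state = start
    while state is not None:
        yield state
        state = next_state(state)
```

Calling list(lifecycle()) returns the full path from Writing to Free.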

Lecture on Basics of SAP and ERP

GRP - Basic SAP Training

To join, please provide the following details:
– Full name: so we can address you properly and maintain a professional environment within the group.
– Email address: for communication purposes and to receive group-related updates or announcements.
– WhatsApp number: so we can add you to the group and enable direct communication within the community.
– Current role or job title: to help us understand your professional background and expertise.
– Company or organization: to know where you are currently employed or associated.
– SAP experience: your level of experience and familiarity with SAP.
– Date: in DD/MM/YYYY format.
– Rating: from 1 to 10 (10 being the highest).
– Motivation: your reasons for joining the group and your expectations from the community.
Terms of Service Agreement

Welcome to "SAP Solution Manager 7.X" WhatsApp Group! We are delighted to have you as a member of our community. By joining and participating in this group, you agree to comply with the following Terms of Service:

Purpose and Guidelines:
"SAP Solution Manager 7.X" is dedicated to discussions related to SAP Solution Manager and relevant topics.
Please maintain a respectful and courteous tone in all interactions with fellow members.
Avoid spamming, self-promotion, or sharing unrelated content.

Confidentiality:
Any information shared in the group should be treated as confidential and not shared outside the group without permission.

Respect for Intellectual Property:
Do not post copyrighted content without proper authorization from the owner.
Give appropriate credit when sharing content from external sources.

Prohibited Content:
Do not post offensive, abusive, or inappropriate content that may harm or offend others.
Avoid discussions related to politics, religion, or any sensitive topics not relevant to the group's purpose.

Moderation:
The group administrators reserve the right to moderate and remove any content or members violating the guidelines.
Decisions made by the administrators are final.

Liability:
"SAP Solution Manager 7.X" and its administrators are not liable for any content shared by members.
Members are responsible for their own actions and contributions.

Participation and Activity:
Regular participation is encouraged, but inactivity for an extended period may lead to removal from the group.

Data Privacy:
By joining the group, you consent to the collection and processing of your data as per WhatsApp's Privacy Policy.

Amendments:
The Terms of Service may be updated from time to time. Members will be notified of any changes.
By continuing to be a member of "SAP Solution Manager 7.X", you acknowledge that you have read and understood these Terms of Service and agree to abide by them.

For any questions or concerns regarding the group or these terms, please reach out to the administrators.

SAP Solution Manager 7.X
28-July-2023

How to Confirm Which Tables SLT Objects Belong To

When working with SAP Landscape Transformation (SLT) Replication Server, you may encounter technical objects with cryptic names such as /1LT/SAPL<SID><15 digit number>, /1CADMC/<8 digit number>, or /1LT/<11 digit number>. Knowing which underlying table these objects are associated with is crucial for troubleshooting, development, or auditing replication setups. In this article, I’ll walk you through the steps to confirm the related tables of these SLT-generated objects using transaction SE16 and key system tables.


1. Identifying Tables for Objects Like /1LT/SAPL<SID><15 digit number>

This object type is commonly seen in SLT systems. Here’s how you can trace the related table:

Steps:

  1. Log into the SLT system (where <SID> in the object name refers to the SLT system’s SID).
  2. Use transaction SE16 to open the table DMC_FMID.
  3. Set the field IDENT with the <15 digit number> from your object’s name. Execute the query and note the value of the COBJ_GUID field.
  4. Again, in SE16, open the table DMC_COBJ. Set the GUID field to the COBJ_GUID value you just recorded. Execute the query and check the value in the IDENT field.
  5. The IDENT value will be in the format Z_<Table Name>_<MTID>, where <Table Name> is your target table and <MTID> is the mapping ID for the object.
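The two-step lookup above is effectively a join between DMC_FMID and DMC_COBJ. A minimal Python sketch of the logic, with in-memory dictionaries standing in for the two tables (the sample IDENT, GUID, and table values are invented):

```python
# Toy stand-ins for the SLT tables; in a real system you would read
# DMC_FMID and DMC_COBJ via SE16 as described in the steps above.
DMC_FMID = {"123456789012345": "GUID-AB01"}  # IDENT -> COBJ_GUID
DMC_COBJ = {"GUID-AB01": "Z_MARA_001"}       # GUID  -> IDENT (Z_<Table>_<MTID>)

def resolve_table(ident_15_digits):
    """Follow IDENT -> COBJ_GUID -> IDENT and split out table name and MTID."""
    cobj_guid = DMC_FMID[ident_15_digits]
    cobj_ident = DMC_COBJ[cobj_guid]
    # Drop the leading "Z", then split the trailing mapping ID off the end,
    # so table names that themselves contain underscores are kept intact.
    rest = cobj_ident.split("_", 1)[1]
    table_name, mtid = rest.rsplit("_", 1)
    return table_name, mtid
```

With the sample data above, resolve_table("123456789012345") yields ("MARA", "001").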

2. Identifying Tables for Objects Like /1CADMC/<8 digit number>

There are two ways to confirm the table name for these objects:

Method 1: From the SLT System

  1. Log into the SLT system.
  2. In SE16, open table IUUC_LOGTAB_ID.
  3. Set the IDENT field with the <8 digit number> from your object’s name and execute.
  4. The result will display both the MTID and the associated table name.

Method 2: From the Source System

  1. Log into the source system (the system being replicated).
  2. In SE16, open table IUUC_LOG_APPLTAB.
  3. Set the LOGTAB_NAME field to your object name /1CADMC/<8 digit number> and execute.
  4. The result will return the table name used for that object.

3. Identifying Tables for Objects Like /1LT/<11 digit number>

For this type of object, follow these steps:

  1. Log into the SLT system.
  2. In SE16, open table IUUC_TAB_ID.
  3. Set the IDENT field with the <11 digit number> from your object name and execute.
  4. The output will display the MTID and the corresponding table name.

Summary Table

Object Format                     | Key Table(s)       | Input Field      | Output
/1LT/SAPL<SID><15 digit number>   | DMC_FMID, DMC_COBJ | IDENT, then GUID | IDENT (Z_<Table Name>_<MTID>)
/1CADMC/<8 digit number>          | IUUC_LOGTAB_ID     | IDENT            | MTID, table name
/1CADMC/<8 digit number> (source) | IUUC_LOG_APPLTAB   | LOGTAB_NAME      | Table name
/1LT/<11 digit number>            | IUUC_TAB_ID        | IDENT            | MTID, table name
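The name patterns in the summary table can also be recognized programmatically. Here is a small Python sketch; the regular expressions are assumptions derived from the naming formats described above:

```python
import re

# Map each SLT object name format to the lookup table discussed above.
SLT_OBJECT_PATTERNS = [
    (re.compile(r"^/1LT/SAPL[A-Z0-9]{3}(\d{15})$"), "DMC_FMID / DMC_COBJ"),
    (re.compile(r"^/1CADMC/(\d{8})$"), "IUUC_LOGTAB_ID"),
    (re.compile(r"^/1LT/(\d{11})$"), "IUUC_TAB_ID"),
]

def classify(object_name):
    """Return (lookup_table, numeric_ident) for a known SLT object name."""
    for pattern, lookup in SLT_OBJECT_PATTERNS:
        match = pattern.match(object_name)
        if match:
            return lookup, match.group(1)
    return None, None
```

For example, classify("/1CADMC/00012345") returns ("IUUC_LOGTAB_ID", "00012345"), telling you to look the number up in IUUC_LOGTAB_ID.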

Conclusion

Understanding the mapping between SLT technical objects and their underlying tables can save time and provide valuable insights when managing SAP replication scenarios. By following the steps above, you can quickly trace any SLT object to its original table, helping you maintain, troubleshoot, and optimize your SAP landscape more effectively.

Python Code to get EBS stats for SAP Systems

In this article, we present a Python script that simplifies retrieving AWS CloudWatch metrics for Elastic Block Store (EBS) volumes. The script takes its input from a CSV file containing the columns metric-name, VolumeId, START_TIME, and END_TIME, and uses the boto3 library to interact with AWS services.

By using this script, users can avoid manually executing individual AWS CLI commands for each metric and volume, making the process more efficient and less error-prone. The script iterates through the CSV file, calls AWS CloudWatch using boto3, and collects the required metric statistics, such as the Average value, for each metric and volume within the specified time range.

The output is then written back to a CSV file with the columns metric-name, VolumeId, Timestamp, and Average. This organized output allows users to easily analyze and further process the data for their specific use cases.

Users can customize the input CSV file with desired metrics and volumes, making it adaptable to various AWS environments and monitoring requirements.

SAMPLE - input.csv
metric-name,VolumeId,START_TIME,END_TIME
VolumeReadOps,vol-12345,2023-07-01T00:00:00,2023-07-02T00:00:00
VolumeWriteOps,vol-67890,2023-07-01T00:00:00,2023-07-02T00:00:00
BurstBalance,vol-54321,2023-07-01T00:00:00,2023-07-02T00:00:00
VolumeBytesRead,vol-98765,2023-07-01T00:00:00,2023-07-02T00:00:00
VolumeBytesWrite,vol-24680,2023-07-01T00:00:00,2023-07-02T00:00:00
CODE - sap_get_metric_statistics.py
import csv
import boto3

# Fetch CloudWatch statistics for one EBS volume and metric
def get_metric_statistics(metric_name, volume_id, start_time, end_time):
    cloudwatch = boto3.client('cloudwatch')
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/EBS',
        MetricName=metric_name,
        Dimensions=[
            {
                'Name': 'VolumeId',
                'Value': volume_id
            },
        ],
        StartTime=start_time,
        EndTime=end_time,
        Period=300,
        Statistics=['Average']
    )
    return response['Datapoints']

# Main function
def main():
    input_file = 'input.csv'
    output_file = 'output.csv'

    # csv.DictReader consumes the header row itself, so no extra skip is needed
    with open(input_file, 'r', newline='') as csvfile:
        data = list(csv.DictReader(csvfile))

    with open(output_file, 'w', newline='') as file:
        csvwriter = csv.writer(file)
        csvwriter.writerow(['metric-name', 'VolumeId', 'Timestamp', 'Average'])

        for entry in data:
            metric_name = entry['metric-name']
            volume_id = entry['VolumeId']

            datapoints = get_metric_statistics(metric_name, volume_id,
                                               entry['START_TIME'], entry['END_TIME'])
            # CloudWatch returns datapoints unordered; sort them by timestamp
            for datapoint in sorted(datapoints, key=lambda d: d['Timestamp']):
                csvwriter.writerow([metric_name, volume_id,
                                    datapoint['Timestamp'], datapoint['Average']])

if __name__ == "__main__":
    main()

SAMPLE - output.csv

metric-name,VolumeId,Timestamp,Average
VolumeReadOps,vol-12345,2023-07-01 10:00:00,120.0
VolumeReadOps,vol-12345,2023-07-01 10:05:00,130.0
VolumeReadOps,vol-12345,2023-07-01 10:10:00,115.0
VolumeWriteOps,vol-67890,2023-07-01 10:00:00,50.0
VolumeWriteOps,vol-67890,2023-07-01 10:05:00,60.0
VolumeWriteOps,vol-67890,2023-07-01 10:10:00,55.0
BurstBalance,vol-54321,2023-07-01 10:00:00,75.0
BurstBalance,vol-54321,2023-07-01 10:05:00,80.0
BurstBalance,vol-54321,2023-07-01 10:10:00,70.0
VolumeBytesRead,vol-98765,2023-07-01 10:00:00,2000.0
VolumeBytesRead,vol-98765,2023-07-01 10:05:00,2200.0
VolumeBytesRead,vol-98765,2023-07-01 10:10:00,1900.0
VolumeBytesWrite,vol-24680,2023-07-01 10:00:00,1500.0
VolumeBytesWrite,vol-24680,2023-07-01 10:05:00,1700.0
VolumeBytesWrite,vol-24680,2023-07-01 10:10:00,1400.0


SAP HANA

SAP HANA (High-Performance Analytic Appliance) is an in-memory, column-oriented database management system developed by SAP SE. It is designed to handle large amounts of data in real time while providing fast analytics and data processing capabilities. Here’s an in-depth look at SAP HANA, complete with examples:

  1. In-Memory Computing: SAP HANA stores and processes data in the server’s main memory (RAM) rather than on traditional disk storage. This allows for faster data access and processing, leading to significant performance gains. Complex analytical queries, for example, that used to take hours can now be completed in seconds with SAP HANA.
  2. Columnar Data Storage: SAP HANA employs a columnar data storage format, in which data is stored column by column rather than row by row. This method improves data compression, speeds up data retrieval, and allows for more efficient data analysis. For example, if you need to calculate total sales across multiple products, SAP HANA can access and aggregate only the relevant columns, resulting in faster results.
  3. Real-Time Analytics: SAP HANA supports real-time analytics by processing and analyzing data as it enters the system. Traditional databases frequently require separate extraction, transformation, and loading (ETL) processes before data can be analyzed. SAP HANA allows you to perform complex analytical operations on real-time data streams. A retail company, for example, can track sales in real time, allowing for immediate decision-making based on up-to-date information.
  4. Advanced Analytics: SAP HANA offers advanced analytical capabilities such as predictive analytics, text analytics, and geospatial analysis. It supports machine learning and statistical analysis through built-in algorithms and libraries. For example, a telecommunications company can use SAP HANA to analyze customer call records and predict customer churn based on variables such as call duration, network quality, and customer demographics.
  5. Data Integration and Virtualization: SAP HANA enables seamless integration with a wide range of structured and unstructured data sources. It can replicate, extract, and transform data from a variety of systems, including SAP applications, external databases, and big data platforms. SAP HANA can also create virtual data models, which provide a unified view of data from multiple sources. For example, to gain comprehensive insights into customer satisfaction, you can combine sales data from an SAP ERP system with customer feedback from social media.
  6. Use Cases: SAP HANA is used in a variety of industries for a wide range of applications. It is the engine that drives SAP’s business suite, including SAP S/4HANA, which offers integrated enterprise resource planning (ERP) functionality. SAP HANA is also used for real-time analytics, supply chain optimization, fraud detection, customer experience management, Internet of Things data processing, and other applications. A logistics company, for example, can use SAP HANA to optimize delivery routes based on real-time traffic data, resulting in increased efficiency.
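The columnar-storage advantage described in point 2 can be illustrated with a didactic Python example (a toy model, not how HANA is actually implemented): when data is laid out column-wise, an aggregation only has to scan the single column it needs.

```python
# Row store: each record is one object; summing sales touches every record.
row_store = [
    {"product": "A", "region": "EMEA", "sales": 100},
    {"product": "B", "region": "APJ",  "sales": 250},
    {"product": "A", "region": "AMER", "sales": 175},
]

# Column store: each column is one contiguous list; summing sales reads
# only the "sales" list and never looks at products or regions.
column_store = {
    "product": ["A", "B", "A"],
    "region":  ["EMEA", "APJ", "AMER"],
    "sales":   [100, 250, 175],
}

total_from_rows = sum(record["sales"] for record in row_store)
total_from_columns = sum(column_store["sales"])
assert total_from_rows == total_from_columns == 525
```

Both layouts hold the same data, but the column layout lets the aggregation skip the columns it does not need, which is the basis of the performance gain described above.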

New Directory Structure in SAP NetWeaver 7.5 ABAP Installation

In the latest SAP NetWeaver 7.5 release, there have been significant changes to the directory structure for ABAP instances. This article discusses the modifications and how they impact the Primary Application Server (PAS) and Additional Application Server (AAS) instances.

Users noticed that the ABAP Primary Application Server (PAS) instance directory, previously named DVEBMGS<Instance_Number>, is no longer present in the directory /usr/sap/<SID>. Instead, a directory named D<Instance_Number> (e.g., D00), like that of an Additional Application Server (AAS), is now found. With the introduction of the ABAP SAP Central Services (ASCS) instance, the distinction between the ABAP PAS and AAS instance directories has been removed. As a result, the D<Instance_Number> format is used for all application server instances, regardless of whether they are PAS or AAS.

Example of a new ABAP SAP System based on NW 7.5:

– PAS: D10
– AAS: D13
– AAS: D15

Important Points to Note:
1. The instance directory name for both PAS and AAS is now D<Instance_Number>.
2. Reverting to the old directory naming structure is not feasible.
3. This new directory structure applies only to fresh installations of SAP NetWeaver 7.5 and not to upgraded systems.
4. The changes do not apply to Java or Dual Stack Systems.
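The naming change can be checked with a short Python helper that distinguishes the old and new instance directory formats (the regular expressions are assumptions based on the conventions described above):

```python
import re

# Pre-7.5 PAS directories were named DVEBMGS<NN>; with NW 7.5, both PAS
# and AAS use the D<NN> format under /usr/sap/<SID>.
OLD_PAS = re.compile(r"^DVEBMGS(\d{2})$")
NEW_APP = re.compile(r"^D(\d{2})$")

def instance_style(dir_name):
    """Classify an instance directory name found under /usr/sap/<SID>."""
    if OLD_PAS.match(dir_name):
        return "pre-7.5 PAS"
    if NEW_APP.match(dir_name):
        return "7.5 application server (PAS or AAS)"
    return "other"
```

For example, instance_style("D10") identifies the PAS from the example above as a 7.5-style application server, while instance_style("DVEBMGS00") flags an old-style PAS directory.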

Overview of Configuring SAP HANA System Replication

Configuring SAP HANA System Replication between a primary and secondary site involves several steps. Here is an overview of the process:

  1. Prerequisites:
    • Ensure that you have a fully installed and configured SAP HANA system on both the primary and secondary sites.
    • Make sure the network connectivity is established between the primary and secondary sites, including the necessary ports for HANA communication.
  2. Enable System Replication:
    • On the primary site, open the SAP HANA Cockpit or SAP HANA Studio.
    • Connect to the primary HANA instance as a user with administrative privileges.
    • Navigate to the “System Replication” section and enable the system replication feature.
  3. Configure the Primary Site:
    • Set the replication mode to “sync”, “syncmem”, or “async” based on your requirements.
    • Define the secondary site and specify the connection details (IP address, port, etc.) of the secondary HANA instance.
    • Configure the replication parameters like the replication mode, log retention, etc.
    • Save the configuration and start the replication process on the primary site.
  4. Prepare the Secondary Site:
    • Install and configure a new SAP HANA system on the secondary site if it’s not already done.
    • Ensure that the secondary site has the same hardware resources and HANA version as the primary site.
    • Configure the network settings and ensure that the secondary site can communicate with the primary site.
  5. Establish the Initial Data Copy:
    • Initiate the initial data replication from the primary site to the secondary site.
    • This process involves copying the data from the primary database to the secondary database to synchronize them.
    • Monitor the data copy process and ensure it completes successfully.
  6. Test the Replication:
    • Once the initial data copy is complete, verify that the data is consistent between the primary and secondary sites.
    • Perform tests and checks to ensure that the replication is working as expected.
    • Validate that the secondary site is in a synchronized state with the primary site.
  7. Monitor and Maintain:
    • Set up monitoring tools to track the replication status and performance.
    • Regularly monitor the replication processes, log files, and system alerts.
    • Perform periodic checks to ensure the replication is functioning correctly.

Commands to configure SAP HANA HSR

Configuring HANA System Replication (HSR) between a primary and a secondary site is performed at the operating-system level with the hdbnsutil tool, executed as the <sid>adm user on each site. Here is an overview of the commands involved:

  1. On the primary site, check the current replication state:
hdbnsutil -sr_state

This command shows whether system replication is already enabled and, if so, the current topology.
  2. Enable system replication on the primary site and assign it a site name:
hdbnsutil -sr_enable --name=<primary_site_name>

Replace <primary_site_name> with a logical name for the primary site.
  3. Copy the system PKI SSFS key and data files from the primary to the secondary site, as the secondary must share the primary’s keys before it can be registered.
  4. On the secondary site, stop the database (HDB stop) and register it against the primary:
hdbnsutil -sr_register --remoteHost=<primary_host> --remoteInstance=<instance_number> --replicationMode=<mode> --operationMode=logreplay --name=<secondary_site_name>

Replace <mode> with the desired synchronization mode (sync, syncmem, or async), and fill in the remaining placeholders with the values for your landscape.
  5. Start the secondary site (HDB start). The initial data shipping from the primary to the secondary begins automatically.
  6. Validate the replication setup. Run hdbnsutil -sr_state on either site, or query the monitoring view on the primary:
SELECT * FROM M_SERVICE_REPLICATION

This view shows the replication status of each service.