Channel: SAP Business Process Management

Conditional Start: Some Restrictions (7/7)


This blog post, as part of this blog series, covers the recently introduced Conditional Start feature in SAP NetWeaver BPM, available with SAP NetWeaver 7.3 EHP 1 SP 06 and higher.

 

The Conditional Start feature can be complex to administer. SAP NetWeaver BPM has some particularities that ease the administration and usability of conditional start scenarios.

 

Prevention of Manual Process Start

In conditional start scenarios, a web service message sent to the conditional start endpoint should only be consumed by exactly one process instance. This means that a message either starts a new process instance or triggers an intermediate message event of a running process instance.

To ensure that at most one process instance consumes a message of a kind at any point in time, the manual start of conditional start process instances through the process repository (http://<host>:<port>/nwa/bpm-repository) and through the public API is disabled.

Instead, a process can be started by sending a web service message to the conditional start endpoint. The process repository provides a parameterized link to the Web Services Navigator to allow a web service message to be sent to the conditional start endpoint. The BPM system then decides whether the web service message starts a new process instance or triggers an intermediate message event.

 

Prevention of Multiple Active Conditional Start Process Definitions Sharing the Same Message Trigger

In SAP NetWeaver BPM, a web service endpoint is represented by a message trigger. A message trigger is reusable and can be used in multiple process definitions. This allows messages to be broadcast to different process instances. This is not wanted in conditional start scenarios because they implement a collect pattern: a message should always be consumed by exactly one process instance.

To ensure that only one process instance consumes a message at any point in time, at most one active process definition may use a given message trigger as its conditional start trigger.

Thus, the activation of a development component fails when a conditional start message trigger is already used by another active conditional start process definition. The activation of a development component also fails when it contains more than one conditional start process definition using the same message trigger.

Such a situation can happen when deploying a new development component to the process server, when activating another version in the process repository (http://<host>:<port>/nwa/bpm-repository), or when updating a system that contains multiple active process definitions sharing the same message trigger for conditional start.

At design time, the process developer is supported by an automatic check in the process composer that reports an error when more than one conditional start process model shares the same message trigger.

 

 

 


Real-time reporting on multiple BPM datasources in Visual Composer


Recently, close to a release, we came across a requirement where the end user wanted to know the processes with a particular task subject. Though we have all the reporting on standard and custom data sources in BW, accommodating it there via a custom datasource at the last moment would have created a newer version of the reporting datasource, thereby triggering changes on the BW side as well (uncertain if this limitation - of getting a new version created - is fixed in 7.31; we are working on 7.2 SP 5 at the moment). Also, we were not able to find the "Subject" field in "BPM_TASKS_DS", which can be pulled into BW (it is available in "BPM_MY_TASKS_DS", which cannot be pulled into BW), leaving us with Visual Composer as the only option.

However, showing the processes with a particular task name/subject requires a JOIN operation on the two datasources (BPM_MY_TASKS_DS and BPM_MY_PROCESSES_DS), and whatever searches we did said that JOIN is not supported in Visual Composer (it would be great to know if there is an alternative way). Nonetheless, with the mandate of fulfilling the requirement, we managed it using the concept of an "Entry List". Though this does not let us go deep into the process attributes, we can identify the process names against particular tasks. Thankfully, that was sufficient for the requirement.

Security role required

To have a reporting view on these datasources, your user must be assigned to the SAP_BPM_SuperDisplay or SAP_BPM_SuperAdmin role.

(Because, for BPM_MY_PROCESSES_DS, your user needs to be assigned to a role which gives you access to the particular process instance, e.g. the Business Process Administrator role, and for BPM_MY_TASKS_DS, your user needs to be assigned to a role which gives you access to the particular task instance, for example Potential Owner, Actual Owner, Business Administrator, and so on.)

 

Scenario: Taking a simple process example - a Scheduler Process with 3 tasks, viz. Sales Approval, Finance Approval and HR Approval - we will try to fetch the list of processes having a "Finance Approval" task in them.

 


1.     Create a new Service Component in VC using the datasource "BPM_MY_PROCESSES_DS" with a simple "In" and "Out" port. Choose appropriate fields; here we selected ID, Parent_ID, Status, Subject and Description. Save.

2.5 Service Component.JPG

2.     Create New Model -> Composite View with the datasource "BPM_MY_TASKS_DS" and "In" and "Out" ports. Select appropriate fields; here we selected ID, Parent_ID, Status and Subject.

     4 Drag and Drop Task DS.JPG

3.     Create just a "Start" event to start the model and select a "Filter" from the Out port.

     5 Start In and filter Out.JPG

4.     Define the filter operator to filter tasks whose "Subject" contains "Finance" (right-click on Filter -> Define Operator).

     6 Define Filter.JPG

5.     Create a “Table View” from a filter.

     7 Create Table from Filter.JPG

6.     Go to Layout -> change the control type of the "Parent Id" field to "DropDown List" (Parent Id specifies the ID of the process the task is part of).

     8 Change Control type.JPG

7.     Create a dynamic "Entry List" on the "Parent ID" field (right-click -> Entry List).

     9 dynamic Entry List.JPG

8.     On the next screen, choose "Visual Composer components" as the provider and search for the service component created in Step 1.

     10 Search Service Component.JPG

9.     Skip "Configure Input" and proceed to "Configure Output". Select "Id" as the Value, which specifies the process ID in the service component (represented here as Parent_Id). Choose "Subject" as the Display text, which will show the process subject of the particular tasks.

     11 Configure Output.JPG

10.    Make the "Parent_Id" field, or perhaps the whole table, read-only and adjust the table's column properties as required. Save, deploy and run the model.

     12 Run the model.JPG

11.    "Export" option can be selected in VC for giving way to export the list to Excel file.

     13 Export option.JPG

 

I am not an expert in Visual Composer, hence any views from experts are most welcome on doing this in a simpler way, or on a better way to have reporting on multiple datasources.

Building Eclipse-based JUnit tests to automate process model testing


A process model contains many artifacts which require human/system interaction to complete the end-to-end process execution. This blog explains how to write Eclipse-based JUnit tests to automate BPM process model testing.

Restfuse (http://developer.eclipsesource.com/restfuse/) and Selenium (http://docs.seleniumhq.org/) are the two open source frameworks used to automate the process model testing. Together with these frameworks, the RESTful services of the BPM public APIs and the Apache CXF framework are used.

 

Restfuse is a JUnit extension to test RESTful services. A Restfuse-based test uses annotations to define the RESTful service URL, request type, authentication, and content and its type. It is not required to deploy the tests to the BPM server; the tests run like any other JUnit test in Eclipse.

 

Restfuse supports callbacks and polling in a declarative manner. It also offers dynamic request manipulation at the request header level.

This blog explains how BPM process model testing can be automated by creating a new process instance, processing all the human tasks of the process model, and checking the completion of the process. The basic requirements to build such tests are the following.

 

 

1. SAP NetWeaver BPM 7.31

2. RESTful services of BPM (http://scn.sap.com/community/bpm/blog/2011/10/27/restful-service-for-netweaver-bpm)

3. Apache CXF framework (http://cxf.apache.org/)

4. Restfuse (1.2) open source library from Eclipse: http://developer.eclipsesource.com/restfuse/
   https://github.com/eclipsesource/restfuse/blob/master/build/com.eclipsesource.restfuse.target/restfuse.1.2.0.target

5. Selenium open source library (http://docs.seleniumhq.org/docs/03_webdriver.jsp#introducing-webdriver)

6. Hamcrest open source library (http://hamcrest.org/)
   http://download.eclipse.org/tools/orbit/downloads/drops/R20110523182458/repository

7. JUnit 4.10 from Eclipse

 

How to set up the test landscape

 

1. Deploy the BPM RESTful services in the JEE system. Download the binaries from the SCN code exchange. The RESTful services can be enhanced to meet the various requirements of the test.
   http://scn.sap.com/community/bpm/blog/2011/10/27/restful-service-for-netweaver-bpm

2. Download the Apache CXF framework as explained in this thread:
   https://cw.sdn.sap.com/cw/message/86851#86851

3. Download the Eclipse Restfuse 1.2 from
   https://github.com/eclipsesource/restfuse/blob/master/build/com.eclipsesource.restfuse.target/restfuse.1.2.0.target
   Build the sources in the Developer Studio and package the class files in a jar file.

4. Download the Restfuse dependent libraries from
   http://download.eclipsesource.com/technology/restfuse/restfuse-1.1.1.zip
   (You need to use the Restfuse 1.2 version in order to use the dynamic path segment feature to pass request parameters to the REST service.)

 

How to write a JUnit test to automate a process model

 

The following annotations are required at the JUnit class level:

@RunWith(HttpJUnitRunner.class)
public class BpmProcessModelTest {

    @Context
    private PollState pollState;

    @Rule
    public Destination destination = new Destination(this, "http://hostname:port/");

    @Context
    private Response response;

 

The Response object will be injected after every request sent to the BPM system.
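The placeholders such as {pdefid}, {eventid} and {taskinstance} used in the test paths below are resolved through Restfuse's dynamic path segments. A minimal sketch of how a value extracted in one test (for example pDefId from Test 1) can be bound to such a placeholder, assuming the RequestContext API of Restfuse 1.2 (com.eclipsesource.restfuse.RequestContext):

// inside a test method, after extracting the value from a previous response
RequestContext context = destination.getRequestContext();
// binds the {pdefid} placeholder used in subsequent @HttpTest paths
context.addPathSegment("pdefid", pDefId);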

 

Test 1: Get the Process Definition of a Process Model

 

The path parameter provides the DC name, vendor name and process model name to retrieve the process definition ID of the process model. The header annotation defines the response media type as JSON.

 

@HttpTest(method = Method.GET,
    headers = { @Header(name = "Accept", value = "Application/Json") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/processdefinitions?vendor=demo.bpm.test&dcName=customuiprocess&processName=simpleTestProcess")
public void getProcessDefinition() {
    String s = response.getBody();
    Object obj = JSON.parse(s);
    String pDefId = null;
    if (obj instanceof Map) {
        Map m = (Map) obj;
        Object ob = m.get("ProcessDefinitions");
        if (ob instanceof Map) {
            Map m1 = (Map) ob;
            if (m1.get("ProcessDefinition") instanceof Map) {
                Map m2 = (Map) m1.get("ProcessDefinition");
                pDefId = (String) m2.get("id");
            }
        }
    }
    com.eclipsesource.restfuse.Assert.assertOk(response);
}

 

Test 2: Get Process Start event of a process definition

 

Get the start event ID of a process definition by providing the process definition ID in the request URL as a parameter.

@HttpTest(method = Method.GET, headers = { @Header(name = "Accept", value = "Application/Json") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@@"),
    path = "/bpm/bpemservices/processstartevents?processDefinitionId={pdefid}")
public void getProcessStartEvent() {
    String eventId = bpmjsonhelper.getProcessStartEvent(response);
    com.eclipsesource.restfuse.Assert.assertOk(response);
}
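The helper class bpmjsonhelper used above and in the later tests is not part of Restfuse. A minimal sketch of what it could look like, assuming the same JSON utility as in Test 1 (e.g. Jetty's org.eclipse.jetty.util.ajax.JSON) and response key names mirroring those used in the tests (the key names may differ in your service version):

import java.util.Map;

import org.eclipse.jetty.util.ajax.JSON;

import com.eclipsesource.restfuse.Response;

public class bpmjsonhelper {

    // extracts the start event id from a /processstartevents response
    public static String getProcessStartEvent(Response response) {
        Object obj = JSON.parse(response.getBody());
        if (obj instanceof Map) {
            Object events = ((Map) obj).get("ProcessStartEvents");      // assumed key name
            if (events instanceof Map) {
                Object event = ((Map) events).get("ProcessStartEvent"); // assumed key name
                if (event instanceof Map) {
                    return (String) ((Map) event).get("id");
                }
            }
        }
        return null;
    }

    // extracts the task abstract list from a /taskinstances response
    public static Object[] getTasks(Response response) {
        Object obj = JSON.parse(response.getBody());
        if (obj instanceof Map) {
            Object tasks = ((Map) obj).get("TaskAbstracts");            // assumed key name
            if (tasks instanceof Map) {
                Object list = ((Map) tasks).get("TaskAbstract");        // assumed key name
                if (list instanceof Object[]) {
                    return (Object[]) list;
                }
            }
        }
        return null;
    }
}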

 

 

Test 3: Get Process Start Event Schema

 

Get the schema of the process start event. The process start event schema is required to define the input data to create an instance of the process.

@HttpTest(method = Method.GET, headers = { @Header(name = "Accept", value = "Application/xml") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/processstartevents/{eventid}")
public void getProcessStartRequestSchema() {
    String s = response.getBody();
    com.eclipsesource.restfuse.Assert.assertOk(response);
}

 

 

 

Test 4: Create a process Instance

 

 

The HTTP POST method will send the content together with the request and get the process instance as response. The file attribute determines the input data, which is stored in the class path of the test.

 

 

@HttpTest(method = Method.POST, file = "data.xml",
    headers = { @Header(name = "Accept", value = "Application/json"), @Header(name = "Content-type", value = "Application/xml") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/processstartevents/{eventid}")
public void createProcessInstance() {
    String s = response.getBody();
    HashMap obj = (HashMap) JSON.parse(s);
    HashMap processInstance = (HashMap) obj.get("ProcessInstance");
    com.eclipsesource.restfuse.Assert.assertOk(response);
}

 

Test 5: Get all tasks of a process instance

 

The @Poll annotation will retry the method twice with a 5-second interval in order to ensure that the tasks are created.

 

@HttpTest(method = Method.GET, headers = { @Header(name = "Accept", value = "Application/Json") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/taskinstances/ProcessInstance/{processinstance}?status=READY&status=RESERVED")
@Poll(times = 2, interval = 5000)
public void getTaskInstanceOfProcess() {
    Object[] taskAbstractList = bpmjsonhelper.getTasks(response);
    if (null != taskAbstractList) {
        int size = taskAbstractList.length;
        for (int i = 0; i < size; i++) {
            HashMap taskAbstract = (HashMap) taskAbstractList[i];
            String taskid = (String) taskAbstract.get("id");
            String taskName = (String) taskAbstract.get("name");
        }
    }
    // assert only on the last polling interval defined at the @Poll annotation
    if (pollState.getTimes() == 2) {
        com.eclipsesource.restfuse.Assert.assertOk(response);
    }
}

 

 

 

 

 

 

 

Test 6: Claim & Completion of a Task

 

@HttpTest(method = Method.PUT, content = "{x=1}",
    headers = { @Header(name = "Accept", value = "Application/json"), @Header(name = "Content-type", value = "Application/json") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/taskinstances/{taskinstance}?action=CLAIM")
public void claimTestTask() {
    String s = response.getBody();
    com.eclipsesource.restfuse.Assert.assertNoContent(response);
}

     

The file attribute determines the task input data, and the file should be in the class path.

    

@HttpTest(method = Method.PUT, file = "mytaskData.xml",
    headers = { @Header(name = "Content-type", value = "Application/xml") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/taskinstances/{taskinstance}?action=COMPLETE")
public void completeMyTask() {
    String s = response.getBody();
    com.eclipsesource.restfuse.Assert.assertNoContent(response);
}

 

Test 7: Completion of a Task using the Task Execution UI

The REST call provides the task execution URL, and the Selenium WebDriver is used to open the task execution UI and complete the task.

 

@HttpTest(method = Method.GET, headers = { @Header(name = "Accept", value = "Application/Json") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/taskinstances/taskinstance/{taskinstance}")
public void completeMyTaskViaUI() {
    com.eclipsesource.restfuse.Assert.assertOk(response);
    // the response body contains the task execution URL
    String s = response.getBody();
    WebDriver webdriver = new FirefoxDriver();
    webdriver.get(s);
    // log on to the task execution UI
    WebElement element = webdriver.findElement(By.name("j_username"));
    element.sendKeys("userid"); // alternatively: element.sendKeys(System.getProperty("username"));
    WebElement element2 = webdriver.findElement(By.name("j_password"));
    element2.sendKeys("@@@@");
    WebElement elementForm = webdriver.findElement(By.name("logonForm"));
    elementForm.submit();
    // open the task execution UI again after logon
    webdriver.get(s);
    // click the complete button of the task UI
    WebElement taskCompletion = webdriver.findElement(By.id("FDOEEFPDBOMAACHH.TestTaskComponentView.TestTaskCompleteEvent"));
    taskCompletion.click();
    webdriver.close();
}

 

Test 8: Process Completion

 

 

@HttpTest(method = Method.GET, headers = { @Header(name = "Accept", value = "Application/Json") },
    authentications = @Authentication(type = AuthenticationType.BASIC, user = "userid", password = "@@@@"),
    path = "/bpm/bpemservices/processinstances/{processinstance}?status=COMPLETED")
@Poll(times = 3, interval = 5000)
public void checkProcessStatus() {
    // assert only on the last polling interval
    if (pollState.getTimes() == 3) {
        String s = response.getBody();
        com.eclipsesource.restfuse.Assert.assertOk(response);
    }
}

Make your CAF service web-ready by giving it a RESTful façade


Modern client-side web frameworks and libraries like SAPUI5 use REST services to load data from a server. If you want to consume existing business logic from these new web applications, you first need to make it REST-enabled.

 

For standard SAP applications, you can use SAP NetWeaver Gateway. But what about your own composite services built e.g. using the Composite Application Framework (CAF)?

 

In this blog, we will use the Apache Jersey library to expose a CAF Business Object as a RESTful web service.

Deploying Jersey

 

For a description on how to package and deploy the Jersey libraries to the SAP NetWeaver Application Server, please refer to this excellent blog by Werner Steyn.

 

It is recommended to wrap the External Library into its own Enterprise Application, and have it packaged in a separate Software Component so that it can be reused from multiple applications.

 

Don't forget to remove the access restrictions on the DCs and their public parts, in order to make them available to other applications.

 

Creating a web module DC

 

The Jersey servlet needs a web.xml to be configured and deployed. Create a new web module DC in the software component which holds your CAF project.

Add a DC dependency from the CAF EAR DC to the new web DC so that it is included with the CAF application.

 

Enabling access to the CAF project

 

Since you want to reuse the existing CAF service, you need to be able to reference it from your new web module DC.

In the component properties of the CAF EAR DC, switch to the “Public Parts” tab, select the “client” public part and add permission for your web DC. Repeat this for all other public parts.

 

PP.png

 

Next, switch to the “Permissions” tab and again add permission for the web DC.

 

permissions.png

 

Setting DC dependencies

 

Now, switch to the component properties of your web DC and add the following DC dependencies:

  • caf/core/ear
  • caf/runtime/ear
  • jersey/ear (from the Software Component which contains the Jersey libraries)
  • tc/bl/exception/lib

You can test if everything works by building the CAF application. If the build fails, check the build log for error messages.

 

Implementing the resource class

 

Now it’s time to actually implement the REST service. All you need is a plain Java class with some annotations. The JNDI name to look up the CAF EJB can be retrieved via the JNDI Browser in NWA.

The GET method simply returns all employees stored in the respective CAF table. Since Jersey supports POJOs, we can return the Employee object as-is and don't need to create any wrapper objects.

 

 

package com.sap.demo.cafrest.employee;

import java.util.List;

import javax.naming.InitialContext;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import com.sap.demo.employee.modeled.Employee;
import com.sap.demo.employee.modeled.bonode.employee.employee.EmployeeServiceLocal;

@Path("/employees")
public class EmployeeResource {

      EmployeeServiceLocal employeeService;

      final String JNDI_NAME = "demo.sap.com/employee~ear/LOCAL/com.sap.demo.employee.modeled.bonode.employee.employee.Employee/com.sap.demo.employee.modeled.bonode.employee.employee.EmployeeServiceLocal";

      @GET
      @Produces(MediaType.APPLICATION_JSON)
      public List<Employee> getAllEmployees() {
            try {
                  InitialContext jndiContext = new javax.naming.InitialContext();
                  employeeService = (EmployeeServiceLocal) jndiContext.lookup(JNDI_NAME);
                  List<Employee> result = employeeService.findAll();
                  return result;
            } catch (Exception e) {
                  throw new RuntimeException(e);
            }
      }

}

In the code sample above, we only implement the method to get all objects. You can easily implement all CRUD and finder methods in a similar way using the respective annotations for GET, PUT, POST and DELETE. Check the Jersey documentation for more details.

This example uses a CAF business object service, but the same approach also works for application services.
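For illustration, a POST method for creating an employee could look like the sketch below. The create method on EmployeeServiceLocal is an assumption based on typical CAF-generated service interfaces; check the generated local interface of your business object for the exact method names and signatures (additional imports needed: javax.ws.rs.POST and javax.ws.rs.Consumes).

      @POST
      @Consumes(MediaType.APPLICATION_JSON)
      @Produces(MediaType.APPLICATION_JSON)
      public Employee createEmployee(Employee employee) {
            try {
                  InitialContext jndiContext = new javax.naming.InitialContext();
                  employeeService = (EmployeeServiceLocal) jndiContext.lookup(JNDI_NAME);
                  // 'create' is assumed here; CAF typically generates CRUD methods on the service interface
                  return employeeService.create(employee);
            } catch (Exception e) {
                  throw new RuntimeException(e);
            }
      }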

 

Configuring the Jersey servlet

 

Finally, we need to configure the servlet container for Jersey. Open the web.xml of your web DC and add the following lines:

 

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
      xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
      id="WebApp_ID" version="2.5">
      <display-name>CAF REST demo</display-name>
      <servlet>
            <servlet-name>Jersey REST Service</servlet-name>
            <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
            <init-param>
                  <param-name>com.sun.jersey.config.property.packages</param-name>
                  <param-value>org.codehaus.jackson.jaxrs;com.sap.demo.employee.modeled;com.sap.demo.cafrest.employee</param-value>
            </init-param>
            <init-param>
                  <param-name>com.sun.jersey.api.json.POJOMappingFeature</param-name>
                  <param-value>true</param-value>
            </init-param>
            <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet-mapping>
            <servlet-name>Jersey REST Service</servlet-name>
            <url-pattern>/rest/*</url-pattern>
      </servlet-mapping>
</web-app>

 

Via the parameter com.sun.jersey.api.json.POJOMappingFeature, we tell Jersey to use POJO mapping instead of JAXB annotations.

In the value for com.sun.jersey.config.property.packages, you need to add all Java packages containing the POJOs used in your CAF services. Otherwise the POJO to JSON mapping will not work.

 

Testing the service

 

Deploy the whole CAF application to the server. After deployment, you should be able to call the service from your browser:

 

result.png

 

If you get an error message, check the dev traces in the NWA Log Viewer.

 

Summary

 

Exposing a CAF service via REST requires only a few lines of code. The trickiest part is to make sure you have set all the right permissions and dependencies between the various DCs.

Developing BPM Custom Inbox with Task Custom Attributes and Actions


Introduction

If you have had the chance to touch SAP NetWeaver BPM 7.31 SP6 or a later SP, you may have noticed the newly introduced features to define custom attributes and actions for tasks.

The custom attributes and actions help task owners make appropriate decisions for a task by presenting essential information directly in the task list. With the important task data presented, the task owner can take a direct action on a task instance without opening it, in case a custom action for this task has been specified. In SAP NetWeaver BPM 7.31 SP7, the task editor has been extended with a new tab called "Attributes & Actions". Custom attributes and actions are exposed through SAP NetWeaver BPM's public API and can be consumed in a custom inbox. The public API allows obtaining the custom attributes and actions of the task and their values and completing the task directly.

With these articles I would like to describe how to benefit from custom attributes and actions for tasks in your custom inbox using the public API. I will also provide information about the necessary steps to enable custom attributes and actions for a task definition.

 

Sample BPM process

To demonstrate the custom attributes and actions, I will use a commonly used process for master data quality.

 

pic01.jpg

A company recently acquired new companies and needs to align master data process with all new branches. Customer master data creation process is one of them. The process has the following routine:

  1. Customer requester is a person who is responsible for acquiring retailers for IDES Company. Once the requester finds a new retailer, he/she enters information about it. The data that the requester enters is the customer's ID, first name, last name, country, city, street, zip code, and credit limit.
  2. The data quality manager who is working in the same country location as the new customer reviews the data, and if needed, returns it for rework by customer requester.
  3. Data quality manager can also directly approve or reject customer’s request.

pic02.jpg

 

Custom attributes for tasks

Custom attributes are defined at design time as part of a task definition when modeling in Process Composer. The input data context of the task definition is available for selection as custom attributes. The attributes defined in Process Composer are visible as columns in the task list in BPM Inbox. Custom attributes of a task become visible in BPM Inbox when the user filters by task type. The expression is calculated using the real process data and is visible as a value. To define your custom attributes for a task, follow the steps below.

 

Modeling Custom Attributes in NWDS

Open the "Attributes & Actions" tab to define custom attributes for a task definition. For each custom attribute, the user has to specify its name, label, type, and expression (value). The label of the attribute is a translatable text, and the name of the attribute serves as a key. The expression is the actual value of the attribute, populated in BPM Inbox. The order in which the custom attributes are listed in the table is the same order in which they are retrieved from BPM's public API.

 

pic03.jpg

Optional: Translation of task custom attributes

Copy and rename the task's '.xlf' file. Then open it with the S2X document editor. Choose the 'Source language' in the Header tab, open the 'Resource Text' tab, and edit the translated text.

 

 

pic05.jpg

For more information, see Internationalization of Java Projects.

 

Consumption of Custom Attributes through Public API

To fetch the custom attributes, use TaskDefinitionManager, TaskInstanceManager, and TaskAbstractCustomAttributesCriteria. The task definition contains information about the name, label and type of all custom attributes defined for this task. The task instance contains the actual values for these custom attributes. To get a TaskAbstract with custom attribute values, use an instance of TaskAbstractCustomAttributesCriteria as shown below.

 

 

IAuthentication auth = UMFactory.getAuthenticator();
IUser user = auth.forceLoggedInUser(request, response);
TaskInstanceManager taskInstanceManager = null;
TaskDefinitionManager taskDefinitionManager = null;
try {
    taskInstanceManager = BPMFactory.getTaskInstanceManager();
    taskDefinitionManager = BPMFactory.getTaskDefinitionManager();

    // task statuses we are interested in
    Set<Status> statuses = new HashSet<Status>();
    statuses.add(Status.READY);
    statuses.add(Status.CREATED);
    statuses.add(Status.IN_PROGRESS);
    statuses.add(Status.RESERVED);

    // The TaskAbstractCustomAttributesCriteria class is a marker telling the
    // getMyTaskAbstracts() method whether to fetch the custom attributes for the
    // tasks or not.
    List<TaskAbstract> taskAbstracts =
        taskInstanceManager.getMyTaskAbstracts(statuses, null, new TaskAbstractCustomAttributesCriteria());
    for (TaskAbstract ta : taskAbstracts) {
        URI taskModelId = ta.getModelId();
        URI taskDefinitionId = ta.getDefinitionId();

 

If you are interested in a particular task definition and want to get its custom attributes, use the following approach:

 

{
    // get custom attribute definitions for the current task definition
    TaskDefinition taskDefinition = taskDefinitionManager.getTaskDefinition(taskDefinitionId);
    List<CustomAttributeDefinition> customAttributeDefinitions =
        taskDefinition.getCustomAttributeDefinitions();
    for (CustomAttributeDefinition cad : customAttributeDefinitions) {
        // retrieves the label of the custom attribute, translated
        // based on the logged-in user's Locale
        String caLabel = cad.getLabel();
        String caName = cad.getName();
        Class<?> caType = cad.getType();
    }
    // get the custom attribute values
    Map<String, Object> caValues = ta.getCustomAttributeValues();
}

 

                 

If you have more than one version of the process and you are interested in the active version of the task definition and its custom attributes, use the following approach:

 

{
    // get custom attribute definitions for the active task definition
    TaskDefinition taskDefinition = taskDefinitionManager.getActiveTaskDefinition(taskModelId);
    List<CustomAttributeDefinition> customAttributeDefinitions =
        taskDefinition.getCustomAttributeDefinitions();
    for (CustomAttributeDefinition cad : customAttributeDefinitions) {
        // retrieves the label of the custom attribute, translated
        // based on the logged-in user's Locale
        String caLabel = cad.getLabel();
        String caName = cad.getName();
        Class<?> caType = cad.getType();
    }
}

 

 

If you have more than one version of the process and you are interested in all versions of the task definitions and their custom attributes, use the following approach:

 

{
    // get custom attribute definitions for any version of the task definition
    Set<TaskDefinition> taskDefinitions = taskDefinitionManager.getTaskDefinitions(taskModelId);
    for (TaskDefinition td : taskDefinitions) {
        List<CustomAttributeDefinition> customAttributeDefinitions =
            td.getCustomAttributeDefinitions();
        for (CustomAttributeDefinition cad : customAttributeDefinitions) {
            // retrieves the label of the custom attribute, translated
            // based on the logged-in user's Locale
            String caLabel = cad.getLabel();
            String caName = cad.getName();
            Class<?> caType = cad.getType();
        }
    }
}
} catch (BPMException e) {
    // exception handling goes here
}

 

pic05.jpg

 

Custom actions for tasks

Custom actions are defined at design time as part of a task definition. At runtime, they can be fetched via the public API or accessed through the BPM Inbox. The public API is enhanced with a complete method taking the custom action's technical name as a parameter. The name of the action can be consumed in the next step of the process by mapping the new task attribute "customAction" to the process context. To define your task custom actions, follow the steps below.

 

Modeling custom actions in NWDS

To define custom actions for a task definition, open the "Attributes & Actions" tab. This tab is used to define actions in a table. For each custom action you have to specify the name, label, and description. The label and description are translatable texts, and the name of the action serves as a key. The actions will be presented in BPM Inbox as buttons. The order in which the custom actions are listed in the table is the same order in which they are retrieved from BPM's public API.

 

pic06.jpg

 

Optional: Translation of task custom actions

You can follow the same approach described for custom attributes.

 

Modeling process workflow using action value

You have to map the 'customAction' value to some context element so that the 'customAction' attribute can be used for decision-making. In our case this value is used to decide how the process ends.

 

pic07.jpg

 

Consumption of custom actions through public API

This code snippet shows how to get the custom action definitions for all tasks assigned to the current user.

IAuthentication auth = UMFactory.getAuthenticator();
IUser user = auth.forceLoggedInUser(request, response);
TaskInstanceManager taskInstanceManager = null;
TaskDefinitionManager taskDefinitionManager = null;
try {
    taskInstanceManager = BPMFactory.getTaskInstanceManager();
    taskDefinitionManager = BPMFactory.getTaskDefinitionManager();
    // task statuses we are interested in
    Set<Status> statuses = new HashSet<Status>();
    statuses.add(Status.READY);
    statuses.add(Status.CREATED);
    statuses.add(Status.IN_PROGRESS);
    statuses.add(Status.RESERVED);
    // The TaskAbstractCustomAttributesCriteria class is a marker telling the
    // getMyTaskAbstracts() method whether to fetch the custom actions for the tasks or not.
    List<TaskAbstract> taskAbstracts =
        taskInstanceManager.getMyTaskAbstracts(statuses, null, new TaskAbstractCustomAttributesCriteria());
    PrintWriter pw = response.getWriter();
    for (TaskAbstract ta : taskAbstracts) {
        URI taskDefinitionId = ta.getDefinitionId();
        TaskDefinition taskDefinition = taskDefinitionManager.getTaskDefinition(taskDefinitionId);
        List<CustomActionDefinition> customActions = taskDefinition.getCustomActionDefinitions();
    }
} catch (BPMException e) {
    // TODO: handle exception
}

 

pic08.jpg

 

Complete a task instance with a custom action via the public API

The public API of SAP NetWeaver BPM now has a complete method accepting a custom action as a parameter:

 

    public void complete(URI taskInstanceId, DataObject taskOutputData, String customAction) throws BPMException;

 

Using this method you can complete a task with one of the custom actions already defined for this task definition. When the complete method call with a custom action value is processed, a task completion message is created with the custom action value, if one was chosen. During the processing of the same complete call, a notification of the completion state is delivered to the BPEMTaskParent, where the task attribute's 'customAction' element is updated with the custom action value that was chosen to complete the task.
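A minimal usage sketch, reusing the manager from the snippets above (the action name "Approve", the task instance ID and the output data object are illustrative):

// taskInstanceId taken from a TaskAbstract as shown in the snippets above;
// taskOutputData built according to the task's output schema (illustrative)
taskInstanceManager.complete(taskInstanceId, taskOutputData, "Approve"); // "Approve" = technical name of a custom action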

 

For more information about custom attributes, see Defining Custom Attributes for Tasks.

Process Observer (POB) Direct Event API for Logging Processes from SAP and Non-SAP Systems


Hi Process Observer community,

 

 

In this blog, I want to introduce the direct event API, which can be used as an alternative to BOR events, for the instrumentation of Business Suite applications to work with Process Observer.

 

An overview and a description of Process Observer's architecture were given in previous posts of this series. As described in these posts, Process Observer can process BOR and workflow events, which are very common in the SAP Business Suite area when interacting with Business Workflow. More than 7000 BOR events are predefined in the SAP Business Suite. Up to now, when new BOR events were required for Process Observer observation, you needed to create them in the Business Object Repository (see the Complete Guide for Events in Workflows in SAP ECC 6.0) and raise them in appropriate application exits.

 

As an alternative to using BOR events, Process Observer additionally offers the direct event API (see note 1689819). You can use it to raise application events directly in the application exits to Process Observer (without the detour via BOR events). In the event API's interface, you can directly use the "tasks" as used in the process definition (see Create Process Definition for Business Process Monitoring & Analytics for Business Suite Processes (POB)). No further mapping of the tasks to BOR events is required.

  

Technically, the direct event API is a function module that can be called directly in an application exit. The function module is mass-enabled - events are passed as tables - and it supports asynchronous processing in the same way as BOR events: the events received as input by the function module are first stored in a buffer table; then, with the POB Event Scheduler (transaction POC_JOB_SCHEDULER; see also Setup of Component Process Observer for Built-In Processes (POB)), the events are processed asynchronously. The event scheduler works as follows: first the buffered BOR events are processed by Process Observer, then non-BOR events are processed, and finally the threshold is checked.

 

Note: Due to this processing order, some unwanted effects may occur during logging (events processed in the wrong order, which results in lost events) if you mix BOR and non-BOR events in one process and they occur together within a very short timeframe.



The normal use case for the direct event API is to simply call it from a SAVE BAdI of the application that you want to track. Process Observer is normally configured to run in each local system in which processes are to be tracked.

 

But the direct event API is also remote-enabled (available as RFC), so it can also be used to log the events and processes of a remote SAP or non-SAP system. To use it from another SAP system (such as an SAP system < ERP 6.00 EhP 4), it is called like a normal RFC; for use in non-SAP systems, the RFC can be wrapped and made available as a web service using NetWeaver standard tools.
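Instead of wrapping the RFC as a web service, a Java system can also call the function module directly via SAP JCo. The following is a minimal sketch under stated assumptions: the destination name "POB_SYSTEM" is hypothetical, the field values are illustrative, and whether IT_EVENT is exposed in the import or table parameter list depends on the release (the field names correspond to those used in the ABAP example later in this post):

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoTable;

public class PobEventSender {

    public static void main(String[] args) throws JCoException {
        // "POB_SYSTEM" is a hypothetical JCo destination pointing to the Process Observer system
        JCoDestination destination = JCoDestinationManager.getDestination("POB_SYSTEM");
        JCoFunction function = destination.getRepository().getFunction("POC_RAISE_EVENT");

        // IT_EVENT is assumed to be in the import parameter list here;
        // depending on the release it may be part of the table parameter list instead
        JCoTable events = function.getImportParameterList().getTable("IT_EVENT");
        events.appendRow();
        events.setValue("BO_ID", "4500000123");   // business object ID, e.g. purchase order number (illustrative)
        events.setValue("ITEM_ID", "00010");      // item ID (illustrative)
        events.setValue("BO_TYPE", "001");        // BO type (001 = Purchase Order)
        events.setValue("EVENT_TYPE", "901");     // task type (901 = Create item)
        events.setValue("EXECUTED_BY", "JDOE");   // user information (illustrative)

        function.getImportParameterList().setValue("IV_COMMIT", "X"); // trigger commit in the remote call (assumption)

        function.execute(destination);
    }
}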

sap_nonsap.png

You create the process definition in Process Observer just as if the process ran in the local system; then the direct event API is called, passing on a list of events. The interface has a parameter for the ID of the remote system. In the local Process Monitor, the system information of the remote process is available.

log_system.png

 

Even though the interface is mass-enabled, you must consider the performance impact of the communication load (the number of events that you create in the remote system), and the fact that the direct event API is currently only available as a synchronous interface (it requires system availability). To optimize communication performance, you may, for example, first store the events locally in the remote system and later pass them to the Process Observer system in a single call.

You can also think of scenarios where only one step of a process modelled in the local system is executed in an external system, and you use the direct event API to inform about the execution of that single step.

 

One example, in which the direct event API has been used cross-system, is the monitoring of master data distribution in SAP MDG. (See also Monitoring of cross system workflows with SAP Process Observer). In this scenario, the MDG Hub (>= ERP 6.00 Ehp 6) runs Process Observer. Local events in an MDG client, such as receiving and manipulating of local master data, are transferred remotely as an RFC via the direct event interface.

mdg_process.png

The following example is essential for understanding the direct event API. With a sample process definition, we want to track a procurement process on item level. It looks like this:

  p2p_process definition.png

We create the corresponding event information 'Purchase Order item is created' in an implementation of the POSTED method of BAdI ME_PURCHDOC_POSTED (which is called when purchase orders have been created or changed) using the direct event API. In the code given below, we check for the creation of an item by evaluating the change type (field KZ). We create the event corresponding to the 'Purchase Order Item Created' task by adding the purchase order ID and item ID, as well as the business object type for purchase order and the task type for creation of an item. We additionally store the ABAP kernel transaction ID (which may be relevant in cross-system federation scenarios), an event execution time stamp (e.g. current time), user information (e.g. SY-UNAME) and transaction information (e.g. SY-TCODE). Information about predecessor objects (document flow, here: purchase requisitions) can be added in a table. Predecessor information is required to connect the activities within the logged process instance. Finally, the event is added to an event list and the event list is raised using the direct event API.

 

METHOD if_ex_me_purchdoc_posted~posted.

* data definitions
  FIELD-SYMBOLS: <fs_ekpo> TYPE uekpo.

  DATA ls_event          TYPE poc_s_event.
  DATA lt_event          TYPE poc_t_event.
  DATA ls_pre_bo         TYPE poc_s_pre_bo_event.
  DATA ls_pre_bo_pr      TYPE ueket.    " work area for schedule lines (type assumed from IM_EKET)
  DATA lv_time           TYPE poc_execution_time.
  DATA lv_transaction_id TYPE poc_transaction_id.

* loop at purchase order items
  LOOP AT im_ekpo ASSIGNING <fs_ekpo> WHERE bstyp = 'F'. " Purchase Orders

*   check the field MEMORY to see if the PO document is complete
    IF im_ekko-memory = 'X'. "ABAP_TRUE
*     incomplete document, do not proceed
      CONTINUE.
    ENDIF.

*   check the change type of the purchase order item
    CASE <fs_ekpo>-kz.
      WHEN 'I'.                                  " purchase order item was created
*       compose event
        CLEAR ls_event.
        ls_event-bo_id      = im_ekko-ebeln.     " Purchase Order ID
        ls_event-item_id    = <fs_ekpo>-ebelp.   " item ID
        ls_event-bo_type    = '001'.             " BO type ID (001 = Purchase Order)
        ls_event-event_type = '901'.             " task type ID (901 = Create item)

        CALL FUNCTION 'TH_GET_TRANSACTION_ID'    " kernel transaction ID
          IMPORTING
            transaction_id = lv_transaction_id.
        ls_event-transaction_id = lv_transaction_id.

        GET TIME STAMP FIELD lv_time.            " execution date/time
        ls_event-executed_at = lv_time.
        ls_event-executed_by = sy-uname.         " user

        IF sy-tcode IS NOT INITIAL.
          ls_event-cbe_category = '01'.          " Callable Business Entity: cat '01' = transaction
          ls_event-cbe_type     = sy-tcode.      " tcode
        ENDIF.

*       predecessor business objects (document flow) - loop over schedule line items
        LOOP AT im_eket INTO ls_pre_bo_pr WHERE ebeln = <fs_ekpo>-ebeln
                                            AND ebelp = <fs_ekpo>-ebelp.
          CLEAR ls_pre_bo.
          ls_pre_bo-pre_bo_type = '108'.              " predecessor BO type = Purchase Requisition
          ls_pre_bo-pre_bo_id   = ls_pre_bo_pr-banfn. " predecessor BO ID
          ls_pre_bo-pre_item_id = ls_pre_bo_pr-bnfpo. " predecessor item ID
          APPEND ls_pre_bo TO ls_event-previous_bo.
        ENDLOOP.

        APPEND ls_event TO lt_event.             " event table

      WHEN OTHERS.
*       …
    ENDCASE.

  ENDLOOP.

* raise list of events to Process Observer
  IF lt_event[] IS NOT INITIAL.
    CALL FUNCTION 'POC_RAISE_EVENT'
      EXPORTING
        it_event  = lt_event
        iv_commit = abap_false.
  ENDIF.

ENDMETHOD.

 

 

We hope this new interface makes the instrumentation of the application much easier, especially if you do not have any experience with using BOR events. This blog post briefly illustrates how to use it. We will soon publish information about an instrumentation for tracking the procure-to-pay process on item level that was implemented completely using the direct event API.

 

Watch out for our news!

What you should know about cancelling a process instance ....


Introduction

 

In the recent past, I was asked several times how to cancel a process instance via the public API of SAP NetWeaver BPM. My answer was very short and simple: you can't. Fortunately, I asked for the use case because I wanted to understand why people are asking for support via the public API though it can be done via the SAP NetWeaver Administrator (NWA). The answer I got was always very similar: it should be possible for process participants or the process initiator to reject their request even if it has not yet been completely processed from a business perspective. And for this, access to the NWA naturally should not be opened to end users.

 

As this is a common pattern, which is interesting for many scenarios, I want to share some ideas about it with the whole community. The following list summarizes a couple of example scenarios:

 

  • Employee wants to cancel one of his leave requests, which has not yet been approved or declined by his manager
  • In a Master Data Management scenario, an employee wants to reject a request for article creation, which he created accidentally

 

 

Background – What exactly is the challenge?

 

The requirement is actually fairly simple and straightforward. However, it is not only about the question whether NWA should be accessible for end users or not. There is a bit more behind this to completely understand the requirement. It is important to consistently distinguish between two types of users: the business user and the technical administrator.


SAP NetWeaver BPM offers the possibility to cancel a process instance via the NWA. This is an administrative action that can be used to move a process instance into a final state. Typically, it will be performed if the process instance encountered some technical issue which cannot be resolved (e.g. a context mapping issue). From this you can see that it is really meant for exceptional cases. The state of the process instance will be 'Canceled', not 'Completed'. Though 'Canceled' is a valid final state from a technical perspective (resources occupied by the process instance in the process engine will be cleaned up), it should not be treated as a valid final state from a business point of view. Canceling a process instance will simply interrupt the process flow and terminate it. There is no guarantee that the data managed by the process flow is still in a consistent state.

 

So, for a proper handling of the above scenarios, it is not good practice to use the cancel action. All these examples have in common that the operation is performed by a business user. So, as a general rule, you can check whether your scenario requires offering such a rejection or cancelation possibility to technical or business users. For business users, you should consider implementing the pattern offered in this blog.

 

Note: The scenario I'm using throughout the description assumes that there is a business key that uniquely identifies the process instance. This uniqueness is required at multiple locations, as you will see later on. It doesn't matter if the business key is a simple string identifier or a more complex one consisting of multiple attributes, as long as you can express it as an XML schema definition and map it into the BPM process model.

The technical process instance ID cannot be used for this purpose, as it is local to the process, randomly created on process start, and only available after the process has been started. A business key, on the other hand, is typically already available in other contexts outside of BPM.

 

Solution

As described above, a better design of the process model is needed to reach the goal. The basic idea of the proposed pattern is to support termination of the process flow using an Intermediate Message Event (IME). The following process model visualizes this idea.

 

 

cancellation pattern - processflow2.PNG

 

 

At process start, the business key is passed as parameter so that it can be stored within the process context. As first step in the process flow, there is a split, which separates the actual business flow from the option to reject the request again. The latter is realized using the mentioned IME. Having this in a parallel flow of the process, it is possible to send messages to the IME at any point in time the process instance is running. Once a message has been consumed by the IME, the flow runs into a termination event which not only stops this branch, but every other branch in the flow as well. In this example, the branch I referred to as actual business flow only contains a simple Human Activity to keep things simple. In a real life scenario this would be the branch you add your process logic to. This might include other activity types like Automated Activities, but also more complex things like Referenced or Embedded Sub-Processes. The termination event will be propagated down to such Sub-Processes so that they also get stopped.

 

There is one important thing missing to make the pattern work. In order to reject only a single process instance when sending a message to the IME, the IME needs to be configured with a proper correlation condition. Here the business key comes into play again. The message interface of the IME needs to include the business key of the instance to reject. We simply correlate messages to the process instance where the business key in the message matches the one stored in the process context when the process instance was triggered.

 

cancellation pattern - mapping.PNG

 

Having this in place, it is possible to send a web service message to the IME with the valid business key (for details, see section Consumption).


Variants

 

The described pattern might not be completely sufficient for all use cases. Let's assume the request for which the process instance has been created includes some persisted data or any other kind of state. This data could be stored outside the process context so that it will not be cleaned up when terminating the process instance. There needs to be a way to synchronize these two systems. In the example of a leave request, this could be the leave request itself, which includes among others the start date, end date and a type of the request (determining if it is about a new request, a change request, etc.). If we allow the requestor to reject his request, this data will need to be cleaned up or at least marked as rejected.

 

Therefore, the flow is extended by adding an additional (automated) activity once the intermediate message event has been consumed by the process instance. With this, an application can execute any kind of activity to react to the rejection. Of course, such compensation operations could be more complex than just executing a single activity. Having this possibility is a big advantage over the original proposal, but of course not always required.

 

cancellation pattern - cleanup branch2.PNG

 

Consumption

 

As described before, using this pattern a process instance can be completed by sending a web service message to the IME “Trigger Completion”. This web service message has to be structured according to the interface of the IME. In this simple example, it mainly consists of the business key referencing the process instance that should be completed. The following XML snippet outlines the message structure for a business key with value 11:


<?xml version="1.0" encoding="utf-8"?>

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <SOAP-ENV:Body>

    <yq1:SimpleBusinessKey

          xmlns:yq1="http://www.example.org/InterfaceWithSimpleBusinessKey/">

      <key>11</key>

    </yq1:SimpleBusinessKey>

  </SOAP-ENV:Body>

</SOAP-ENV:Envelope>

 

The sending party can be easily incorporated into a business user's UI next to the option to trigger new process instances. The concrete steps to do this depend on the UI technology you are using. In case of Web Dynpro, you can easily import the endpoint configured for the IME as a web service model. Using standard Web Dynpro capabilities, the web service operation can be invoked passing the business key. If you are using SAPUI5, you can leverage standard JavaScript Ajax functionality to trigger a web service invocation on the endpoint. It should also work in a very similar manner for other UI technologies, as it is about standard web service consumption.
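For a Java-based sending party, a minimal sketch using the standard SAAJ API could look like this. The endpoint URL is a placeholder for the URL configured for the IME, and authentication is omitted for brevity (the sending user needs the UME action mentioned below):

import java.net.URL;

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPMessage;

public class RejectRequestClient {

    public static void main(String[] args) throws Exception {
        // build the SOAP message shown above for business key 11
        MessageFactory messageFactory = MessageFactory.newInstance();
        SOAPMessage message = messageFactory.createMessage();
        QName bodyName = new QName("http://www.example.org/InterfaceWithSimpleBusinessKey/",
                "SimpleBusinessKey", "yq1");
        SOAPBodyElement businessKey = message.getSOAPBody().addBodyElement(bodyName);
        businessKey.addChildElement("key").addTextNode("11");
        message.saveChanges();

        // send it to the endpoint configured for the IME (placeholder URL)
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response = connection.call(message,
                new URL("http://host:port/path-to-ime-endpoint"));
        connection.close();
    }
}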

 

By default, message reception is rejected for users. In order to be able to send a message to an IME, the sending user has to have the UME action “SAP_BPM_TRIGGER_EVENT” assigned.

 

Timer-Based Cancellation

 

There are other use cases which follow a very similar pattern. However, the step to cancel the process instance is not done by a user. Instead, the process instance should be terminated if it has not been completed from a business point of view within a certain period of time. This can be done by replacing the IME with an Intermediate Timer Event in the described process model. With this type of event, you can configure an amount of time. This can be either a static value or again mapped from outside into the process context.

 

cancellation pattern - timer based event2.PNG

 

Conclusion

 

In this blog, it has been discussed why the "cancel process" operation in SAP NetWeaver BPM is not suited to cancel a process from a business perspective. Instead, a new process pattern has been described that allows business users to complete a process instance outside the regular process flow.

 

Further Reading

 

Intermediate Message Events: https://help.sap.com/saphelp_nw73ehp1/helpdata/de/a0/5aebbe5fd8444ab68fe4bfd9f9ad8b/content.htm

 

Creating Service Interface Definitions: https://help.sap.com/saphelp_nw73ehp1/helpdata/de/47/62632a3c304359e10000000a42189c/content.htm

 

Reusable Event Triggers: https://help.sap.com/saphelp_nw73ehp1/helpdata/de/6f/4cbdd279504c4895267792e5199b71/content.htm

The Process Black Box – OpInt Shines a light inside


Much talk has been made of what breakthroughs SAP HANA can provide to the world of business in terms of doing things that could not be done before. I believe that SAP Operational Process Intelligence powered by SAP HANA (OpInt for short) is just such a product. In this blog I will explain why I think OpInt is such a product and how it can complement SAP Business Process Management (BPM) and SAP Business Workflow in moving towards being a process-centric organisation. I will also be taking part in a SAP Mentor Monday webinar on the 22nd July which will discuss this topic. The details are coming soon.

 

Firstly I will cover the “what is it” part, and then I will talk about why SAP HANA is key to making it a reality.

 

What is OpInt and how is it different from SAP BPM

 

Where BPM (and workflow) create a process on top of and between existing systems, OpInt takes the approach of listening to business events and reporting upon what is happening, comparing these events to a model of how the events should behave and reporting back when the events do not follow the expected pattern. So, for example, in a wholesale business you might expect less than 5 minutes between a sales order being created and the delivery being created. In OpInt you would create a model that listens to these two business events and calculates whether the model is being followed. OpInt can also predict when things will happen based on history.

 

Now take this to a larger scale and imagine the entire end-to-end Order to Cash process for a global organisation: not all of the process will be running in one system, and not all the regions in the organisation will have the same Service Level Agreement.

 

Now, this is obviously going to include thousands and thousands of events that need to be stored, matched, predicted and reported on. If this can't be done in real time then we can't intervene to fix things when they go off track - we need to close the door BEFORE the horse bolts. This is why HANA is critical to the success of OpInt and why SAP has written OpInt to run directly in the HANA XS engine.

 

See the wood for the trees! - How does it work?

 

The first thing we need is something that is capable of turning technical events into business events. For SAP BPM and SAP Business Workflow this is a relatively simple task, as events are at the heart of these systems, and OpInt comes with built-in parsers to import BPMN and Business Workflow definitions, after which you can pick the steps that you are interested in together with data from the process context. For Business Suite systems we use Process Observer (on the Business Suite ABAP stack) to create a “process facade” on top of the suite's technical events (or events we have configured). These process facades can also be imported into OpInt. The final part of the puzzle is how to get events from non-SAP systems (yes, they do exist). This is where the Sybase Event Streaming Platform (aka Event Insight) comes into the picture. This technology allows “listening agents” to be deployed next to the 3rd-party systems to pick up interesting technical events and correlate them into business events that can then be fed to OpInt. In this way OpInt can get an end-to-end view of the big process we are interested in.

 

OpIntSmall.png

 

Missing Link - Order from Chaos

 

This means that we now have a tool that can provide the best of both worlds for process centric organisations.

 

We can either use BPM where we want the system to control how the process works.

 

Or

 

We can use OpInt where no formal process exists, but we can observe events and figure out if things are working according to plan.

 

For those areas that constantly fail to follow the model, we can use BPM to bring order to this chaos.

 

For more info, see this YouTube video or read this series of blogs from the Product Owner for OpInt.


Using BAdIs to Anonymize User Information in Process Observer (POB)

$
0
0

Dear community,

 

Data privacy may be a requirement when creating logs of your business processes. In this blog post I want to give some samples of how you can anonymize user information using the available BAdIs of Process Observer for Built-in Processes. In previous posts of the series, an overview and a description of the architecture of Process Observer were given.

 

First, let me introduce the BAdIs for manipulating logging data that are available for Process Observer:

data_minip_badis.png

  BAdIs for BOR Event Processing only (green):

 

  • Mapping of Business Object Repository Events to Tasks (POC_INSTR_MAP_EVT_TASK): You can use this BAdI to extend the mapping of Business Object Repository (BOR) events to tasks. The default mapping uses the information defined in Customizing for ‘Maintain Business Object Repository Instrumentation’.

 

  • Enrichment of Task Event Data (for BOR Events) (POC_MAIN_BA_EVENT): You can use this BAdI to enrich task event data for BOR events by using the customer includes provided in the interface structures.

 

BAdIs for Processing of all Events [including events via direct event API] (red):

 

  • Enhance/Split Tasks (POC_MAIN_TASK): You can use this BAdI to enhance or split tasks. The BAdI is executed before the determination of the process definition or instance takes place. [Not available in all SPs.]

 

  • Enrichment of Task Log Data (POC_MAIN_BA_LOG): You can use this BAdI to enrich process log data before it is written to the process log. You can either write additional data to fields you add to the customer includes provided in the interface structures for the process log, or you can trigger the update of your own tables from this BAdI.

 

Practically all of these BAdIs can be used to anonymize (or manipulate) the Process Observer data that gets logged in the process log. In our example, we are using the 'Enrichment of Task Log Data' BAdI (POC_MAIN_BA_LOG) because it works for all events (BOR and direct), and because process definition mapping has already taken place. So we can implement different anonymization strategies for different process definitions (or business areas).

 

The business area concept was introduced to allow the grouping of process definitions, for example, to handle process definitions similarly in BAdIs. Business area information is available in the BAdI interfaces. Business areas can be defined in Customizing for Process Observer (transaction POC_CUSTOMIZING):

bus_area_cust.png

A business area is then assigned to a Process Definition (header):

business_area_in_modeler.png

Example 1:

 

In the first example, you may simply want to replace all SAP dialog users with the string ‘SAP dialog’ instead of logging the actual user ID. To do so, the method IF_POC_BA_LOG_ENH~ENRICH of BAdI POC_MAIN_BA_LOG is implemented as follows:

 

METHOD if_poc_ba_log_enh~enrich.

* data definitions
  FIELD-SYMBOLS: <fs_ba_log> LIKE LINE OF ct_ba_log.
  DATA ls_logondata TYPE bapilogond.
  DATA lt_return    TYPE STANDARD TABLE OF bapiret2.

* anonymize user data
  LOOP AT ct_ba_log ASSIGNING <fs_ba_log>.

    CALL FUNCTION 'BAPI_USER_GET_DETAIL'
      EXPORTING
        username      = <fs_ba_log>-uname
        cache_results = 'X'
      IMPORTING
        logondata     = ls_logondata
      TABLES
        return        = lt_return.

    IF ls_logondata-ustyp = 'A'.            " dialog user
      <fs_ba_log>-uname = 'SAP dialog'.
    ENDIF.

  ENDLOOP.

ENDMETHOD.

 

The result in the process monitor looks like this:

dialog_user_log.png

Example 2:

 

In the second example, you may want to replace the user ID with the user’s org unit (ORGEH). A basic understanding of organizational management is very helpful here. If you want to implement this, it helps even more to understand how organizational management, which is a very flexible tool, is implemented in your organization.

 

The code for this can be found in the sample implementation IM_POC_MAIN_BA_LOG_SAMPLE of the BAdI definition POC_MAIN_BA_LOG, in class CL_POC_BA_LOG_SAMPLE2. This implementation is based on the assumption that the current user is assigned to a position (object type S), which is assigned to an organizational unit (object type O), or that the user is directly assigned to an organizational unit. This may or may not be true for your configuration, and the first or “lowest” organizational unit to which the user is assigned may not be the organizational unit that you want in the log. The sample implementation should therefore only serve as a starting point; you need to adjust it carefully to your requirements. Also, consider that these operations can be quite time-consuming; even though a lot of buffering is implemented, you need to keep an eye on the runtime.

 

The result in the process monitor looks like this:

org_logging.png

You can see that the user name is replaced with the name of the organizational unit - ‘AK Company’.

 

We hope that this little introduction has given you some more ideas about the manipulations you can apply to the data logged by Process Observer. It is also possible to extend the log tables with customer fields and fill them in the Enrichment of Task Log Data BAdI described above, or even to write data into your own custom tables in the execution of the BAdI; a small sketch follows below. This may be the subject of a new blog posting. However, a warning: whenever you create a BAdI implementation, be aware that your code can seriously influence the logging (and therefore the system) performance. So always be careful about what you are doing!
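As a tiny illustration of the first option: assuming you have added a field ZZ_REGION via the customer include of the log structure (a hypothetical name), the same ENRICH method shown in Example 1 could fill it.

* inside IF_POC_BA_LOG_ENH~ENRICH
  LOOP AT ct_ba_log ASSIGNING <fs_ba_log>.
    " ZZ_REGION is a hypothetical customer-include field; how you
    " determine the value is up to your implementation
    <fs_ba_log>-zz_region = 'EMEA'.
  ENDLOOP.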

 

Stay tuned for the next episode!

SAP NetWeaver BPM Roadmap Webinar - 18 July 2013

$
0
0

Join us on July 18th for the SAP NetWeaver Business Process Management roadmap webinar as part of the SAP ASUG webinar series.

 

 

SAP NetWeaver Business Process Management enables you to quickly automate and flexibly optimize your business processes – from simple workflows to integrated processes that span applications and organizational boundaries. Read more and follow the SAP BPM community here.

 

In this webinar you will learn about current and future capabilities of SAP NetWeaver Business Process Management. We will also walk you through SAP NetWeaver Business Rules Management (read more here), as well as SAP Operational Process Intelligence powered by SAP HANA (read more here).

 

Presenters:

Thomas Volmering and Christian Loos, Product Management for SAP NetWeaver BPM

 

Time and date:

18 July 2013; 8:00 – 9:00 a.m. PST

 

Registration:

Register here>

 

Note: an S-user is required to register for this webinar. If you do not have one, you can obtain it from here. If you have forgotten your S-user ID, reset your password from here.

 

We look forward to meeting you there!

 

* SAP NetWeaver BPM as well as SAP NetWeaver Process Orchestration roadmaps can be accessed on SAP Service Marketplace. For more details read this blog.

 

** Interested in other SAP roadmap webinars? Review the official ASUG SAP schedule to find recordings of previous SAP roadmap sessions and follow it (click the receive email notifications on the right hand side) to get notified of upcoming webinars.

Instrumentation for Procure-to-pay process on item level in Process Observer

$
0
0


Hello,

This blog post is part of a series of blog postings about “Process Observer for Built-in Processes (POB)”.

This post explains the instrumentation of the  Procurement process (Procurement to Payment) on item level.

The previous posts of the series have given an overview, described the architecture, and explained the setup of the component.

 

In this blog, we will look at the sample content delivered for the instrumentation of the procurement process on item level. We assume that Process Observer is already set up in your system. The sample process definition PROCURE_TO_PAY (Procurement to Payment) and the sample instrumentation needed for it are delivered in the following releases (refer to SAP Note 1813249).

  • ERP 6.0 EhP4 (SAP_BS_FND  701 SP14, EA-APPL 604 SP14)
  • ERP 6.0 EhP5 (SAP_BS_FND  702 SP12, EA-APPL 605 SP11)
  • ERP 6.0 EhP6 (SAP_BS_FND  731 SP08, EA-APPL 606 SP08)
  • ERP 6.0 EhP6 (SAP_BS_FND  747 SP02, EA-APPL 617 SP02)

As the process definitions are defined as normal customizing, you may need to check that the POB customizing is copied from client 000 to your current client.

 

To follow the exercises in this blog, it may be useful to have a copy of composite role SAP_POC_BPX assigned to the user, with appropriate authorization settings in the profile.

 

In the following, we are only going to look at the example for procurement processing. A typical procurement process in the SAP system is shown below. We will concentrate on the ERP side of this process in this blog.

1.ProcurementProcess.PNG

 

In order to monitor the process, we need to define the process in POB. Each box in the above diagram represents an activity, often referred to as a “process step”, in the procurement process definition (PROCURE_TO_PAY). At this point, the definition is not linked to any runtime object. So the next step is to link the definition, or “model”, to the runtime. To do that, you have to assign (at least) one task (event) to each activity. At runtime, the events are mapped to the activities. You can create new tasks in the POB façade layer. In this blog we’ll describe how to do this.

The sample Process Definition PROCURE_TO_PAY helps to monitor the above procurement scenario. This process definition is pre-configured with the following KPIs.  

Count KPIs

  • Number of Purchase Order (line item) changes
  • Number of PO(line item) changes before PO Send
  • Number of Purchase Order Releases
  • Number of Changes in the Purchase Requisition Item
  • Number of Quantity Changes in Purchase Requisition item
  • Number of Purchasing Group changes in Purchase Requisition
  • Number of Purchase Requisition Delivery Date Changes
  • Number of Purchase Order Price Changes
  • Number of Schedule Line Quantity Changes in a Pur.Order Item
  • Number of Schedule Line Delivery Date Changes in a PO item

Duration KPIs

  • Cycle Time for Purchase Order approval
  • Cycle Time for Purchase Order line item creation to invoice date
  • Cycle Time for Purchase Requisition item creation to Purchase Order Creation
  • Cycle Time for Purchase Requisition item creation  to Purchase Order Sending
  • Purchase Order sending to Goods Receipt
  • Cycle Time for Purchase Requisition item creation to Goods Receipt
  • Process Cycle Time (The time taken for the completion of the process)
  • Cycle Time for Purchase Requisition release to Purchase Order approval

Application Façade

The application façade encapsulates the runtime entities to put them at the disposal of the process definition. The entities in this layer are created to support the requirements of the process definition; only the runtime entities that are relevant for the processes are created in the façade.

Tasks represent the BO activity of the runtime in the façade. The tasks are maintained in the façade layer using the customizing node Maintain Objects in Façade Layer.

2.IMGNode_FacadeLayer.PNG

Note: You can also use transaction POC_FACADE.

 

SOA BO Types

The node SOA BO Types contains the whole list of business object types (BO types) delivered by SAP. Check whether the BO type already exists; if not, create a new one by clicking the New Entries button.

In the example we have taken, there are four BO types involved:

  1. Purchase Request
  2. Purchase Order
  3. Goods Receipt
  4. Supplier Invoice

SOA_BO_Types.PNG

Business Object Type

From the SOA BO types, we select the BO types that we need for the chosen scenario and maintain them in the Business Object Type node. Click the New Entries button and create the BO types. The result looks like this:

BO_Types.PNG

Task Type

A task type represents the action. For the procurement process definition, the appropriate task types are provided; for example, task types are also defined to represent field-level changes.

TaskTypes.PNG

 

Task

A task represents the event of a BO type. For example, the ‘Creation of a Purchase Order item’ is a task. It is essentially the BO Type + Task type.

The Item Level Task checkbox is very important. It enables us to identify the item-level information of the business object and log this information in Process Observer.

You can create new tasks by clicking the New Entries button.

The different tasks needed for the example definition PROCURE_TO_PAY are shown below:

Tasks.PNG

Callable Business Entity Type

One of the easiest ways to describe a business process is by the screens or, in SAP terms, the transactions you use for the different steps of the process. The callable business entity is “the transaction”, but it can also be a web service call or a workflow step. In the node Callable Business Entity Type, you define the transactions you use for your process steps. Click the New Entries button to create new entries.

The result would be as given below:

CBEType.PNG

 

Process Definition Viewer

Using transaction POC_VIEWER from the SAP Easy Access menu, or by entering the transaction code directly, you can open the Process Definition Viewer.

The complete list of activities of the process definition PROCURE_TO_PAY is shown in the diagram below.

POC_Viewer_activities.PNG

‘Create Purchase Requisition (Item)’ and ‘Create Purchase Order (Item)’ are both marked as start activities. This means that either of them can create a new process instance in the process log. However, a new process instance is created for ‘Create Purchase Order (Item)’ only if there is no preceding purchase requisition.

You can also find count and duration KPIs that will then be determined at process instance level, and will be available for process monitoring and for process analytics:

POC_Viewer_CountKPIs.PNG

POC_Viewer_DurationKPIs.PNG

To start logging, you have to set the log level for ‘Procurement to Payment’ (PROCURE_TO_PAY) to ‘Standard Logging’, using transaction POC_MODEL (Create/Edit Process Definition).

Process_Defn_Header.PNG

Now run your process, starting from purchase requisition. You can use the following transactions in the system:

  • Create / Change Purchase Requisition: ME51N / ME52N
  • Create / Change Purchase Order: ME21N / ME22N
  • Goods Receipt: MIGO
  • Create Supplier Invoice: MIRO

Make sure to create all objects with reference to their predecessor objects. Without those references, the process would not work from a business perspective! The references make it possible for POB to determine the process chain. Note that adding references to predecessor objects only after the object itself has been created means that the creation is not part of this process instance.

Logging the events for Procurement Scenario

The following sample BAdI implementations and reports are provided to help with the logging of the different events of the procurement scenario.

Purchase Requisition (Item):

BAdI: ME_REQ_POSTED

Sample Implementation Name: IMP_POC_A_REQ_POSTED

Class: CL_POC_A_REQ_POSTED_SAMPLE

In the BAdI implementation, the direct event API (function module POC_RAISE_EVENT) is used to raise the following events (a minimal call sketch follows the list):

  • 108 / 901 Create PR line item
  • 108 / 903 Change PR line item
  • 108 / 911 Change PR purchasing group
  • 108 / 912 Change Purchase Request item 'Quantity'
  • 108 / 914 Change Purchase Request item 'Delivery Date'
  • 108 / 904 Delete PR line item
  • 108 / 905 Release PR line item
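To illustrate the API itself, here is a minimal sketch of raising the first of these events (108 / 901, Create PR line item). It uses the same structures (POC_S_EVENT / POC_T_EVENT) and function module that appear in the related Process Observer posts; lv_banfn and lv_bnfpo are hypothetical variables holding the purchase requisition number and item number.

DATA ls_event TYPE poc_s_event.
DATA lt_event TYPE poc_t_event.

* identify the business object instance (purchase requisition item)
ls_event-bo_type     = '108'.        " BO type ID (108 = Purchase Requisition)
ls_event-bo_id       = lv_banfn.     " PR number (hypothetical variable)
ls_event-item_id     = lv_bnfpo.     " PR item number (hypothetical variable)
ls_event-event_type  = '901'.        " task type ID (901 = Create PR line item)
ls_event-executed_by = sy-uname.     " user
GET TIME STAMP FIELD ls_event-executed_at.
APPEND ls_event TO lt_event.

* hand the event list over to Process Observer
CALL FUNCTION 'POC_RAISE_EVENT'
  EXPORTING
    it_event  = lt_event
    iv_commit = abap_false.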

Purchase Order (item)

BAdI: ME_PURCHDOC_POSTED

Sample Implementation Name: IMP_POC_A_PO_POSTED

Class: CL_POC_A_PO_POSTED_SAMPLE

In the BAdI implementation, the direct event API (function module POC_RAISE_EVENT) is used to raise the following events:

  • 001 / 21 Create PO
  • 001 / 901 Create PO line item
  • 001 / 903 Change PO line item
  • 001 / 913 Change Purchase Order item 'Price'
  • 001 / 915 Change Purchase Order item 'Indicator for final delivery'
  • 001 / 916 Change Purchase Order item 'Indicator for final billing'
  • 001 / 917 Change Purchase Order item 'Schedule quantity'
  • 001 / 918 Change Purchase Order item 'Schedule delivery date'
  • 001 / 904 Cancel PO line item
  • 001 / 4     Approve PO

Send PO

Report Name: POCAR_RAISE_EVENT_SEND_PO raises the event 001 / 921 (Send PO).

The report takes the messages from DB table NAST (message framework table) for all purchase orders (KAPPL = ‘EF’) that were “Successfully Processed” (VSTAT = ‘1’). Only entries that have not yet been processed are taken into account, by comparing the dates/times using the DATVR and UHRVR fields (see the sketch below).
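A minimal sketch of that selection logic, under the stated assumptions; lv_last_run_date and lv_last_run_time are hypothetical variables remembering when the report last ran.

DATA lt_nast          TYPE STANDARD TABLE OF nast.
DATA lv_last_run_date TYPE nast-datvr.   " hypothetical: date of the previous run
DATA lv_last_run_time TYPE nast-uhrvr.   " hypothetical: time of the previous run

* PO output messages (KAPPL = 'EF') that were successfully processed
* (VSTAT = '1') and have not been picked up by a previous run yet
SELECT * FROM nast INTO TABLE lt_nast
  WHERE kappl = 'EF'
    AND vstat = '1'
    AND ( datvr > lv_last_run_date
          OR ( datvr = lv_last_run_date AND uhrvr > lv_last_run_time ) ).

* for each entry found, event 001 / 921 (Send PO) is then raised via POC_RAISE_EVENT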

Goods Receipt

BAdI: MB_DOCUMENT_BADI

Sample Implementation Name: IMP_POC_A_DOCUMENT

Class: CL_POC_A_DOCUMENT_SAMPLE

Method: MB_DOCUMENT_UPDATE

In the BAdI implementation, the direct event API (function module POC_RAISE_EVENT) is used to raise the following events:

  • 467A / 901 Create Goods Receipt
  • 467A / 10 Cancel Goods Receipt

Supplier Invoice

BAdI: INVOICE_UPDATE

Sample Implementation Name: IMP_POC_A_INV_UPDATE

Class: CL_POC_A_INV_UPDATE_SAMPLE

Method: CHANGE_IN_UPDATE

In the BAdI implementation, the direct event API (function module POC_RAISE_EVENT) is used to raise the following events:

  • 127 / 901 Create Supplier Invoice item
  • 127 / 904 Delete Supplier Invoice item

Process Monitor

Now we will have a look at the process monitor to see the process. There are two ways to open it: choose ‘Process Monitor’ from the SAP Easy Access menu (see above), or enter transaction code POC_MONITOR.

Process_Monitor.PNG

To see the activities (steps) of a process instance, either double-click the instance or select the process instance row and click the Process Details button. In the process details screen you will find all the activities, with the last activity at the top. The related activities are displayed in the lower part of the screen for each activity.

Process_Details_1.PNG

You can also find the different KPI values in the monitor.

Process_Details_2.PNG

We hope that this blog about the instrumentation of the procure-to-pay process on item level helps you to track the process more easily.

Stay tuned for more news on Process Observer!

Tracking Field Changes Using the Process Observer (POB) Direct Event API

$
0
0

Hi again,

 

In one of my last blog postings about Process Observer, I introduced the direct event API as an alternative to using BOR events for logging. The implementation sample given at the end of that posting can easily be extended to track changes made to orders at field level. This information is then available for process monitoring and analysis.

 

Tracking of Field Changes

 

First we need to add the changes we want to track to the process definition. The best practice is to create your own task type for each event to be tracked. To do so, you create corresponding entries in the task type table of the process façade (transaction POC_FACADE).

task_type_fields.png

[To work in the customer namespace, you enter task type IDs starting with Z here.]

 

You combine the task type with the corresponding task, thereby defining a new task:

task_field.png

Then you can create one or more new activities in the process definition and assign the new tasks for monitoring (transaction POC_MODEL). In our example, we create just one new activity for ‘Change Purchase Order’ and add the different change tasks. As a result, the activity is logged when any of the tasks is observed. Item-level tasks are flagged.

process_def_fields.png

In the process monitor, you can see which tasks have actually been executed (see below).

 

In the application instrumentation, you have to add the creation of the field change events. To identify a change, you compare the new value with the old value as provided by the interface. The new coding is included in the listing below:

 

METHOD if_ex_me_purchdoc_posted~posted.

* data definitions
  FIELD-SYMBOLS: <fs_ekpo> TYPE uekpo.
  DATA ls_event          TYPE poc_s_event.
  DATA lt_event          TYPE poc_t_event.
  DATA ls_pre_bo         TYPE poc_s_pre_bo_event.
  DATA ls_pre_bo_pr      LIKE LINE OF im_eket.
  DATA ls_ekpo_old       LIKE LINE OF im_ekpo_old.
  DATA lv_time           TYPE poc_execution_time.
  DATA lv_transaction_id TYPE poc_transaction_id.

* loop at purchase order items
  LOOP AT im_ekpo ASSIGNING <fs_ekpo> WHERE bstyp = 'F'. " purchase orders

    " check the field MEMORY to see if the PO document is complete
    IF im_ekko-memory = 'X'. "ABAP_TRUE
      " incomplete document - do not proceed
      CONTINUE.
    ENDIF.

* compose event
    CLEAR ls_event.
    ls_event-bo_id   = im_ekko-ebeln.    " purchase order ID
    ls_event-item_id = <fs_ekpo>-ebelp.  " item ID
    ls_event-bo_type = '001'.            " BO type ID (001 = Purchase Order)

    CALL FUNCTION 'TH_GET_TRANSACTION_ID' " kernel transaction ID
      IMPORTING
        transaction_id = lv_transaction_id.
    ls_event-transaction_id = lv_transaction_id.

    GET TIME STAMP FIELD lv_time.        " execution date/time
    ls_event-executed_at = lv_time.
    ls_event-executed_by = sy-uname.     " user

    IF sy-tcode IS NOT INITIAL.
      ls_event-cbe_category = '01'.      " callable business entity: cat '01' = transaction
      ls_event-cbe_type     = sy-tcode.  " tcode
    ENDIF.

    " predecessor business objects (document flow) - loop over schedule lines
    ls_pre_bo-pre_bo_type = '108'.       " BO type ID (108 = Purchase Requisition)
    LOOP AT im_eket INTO ls_pre_bo_pr WHERE ebeln = <fs_ekpo>-ebeln
                                        AND ebelp = <fs_ekpo>-ebelp.
      ls_pre_bo-pre_bo_id   = ls_pre_bo_pr-banfn.
      ls_pre_bo-pre_item_id = ls_pre_bo_pr-bnfpo.
      APPEND ls_pre_bo TO ls_event-previous_bo.
      CLEAR ls_pre_bo.
    ENDLOOP.

* check the change type of the purchase order item
    CASE <fs_ekpo>-kz.

      WHEN 'I'.                          " purchase order item was created
        ls_event-event_type = '901'.     " task type ID (901 = Create item)
        APPEND ls_event TO lt_event.

      WHEN 'U'.                          " purchase order item was updated
        IF im_ekko_old-memory = 'X'.
          " the document was incomplete before - consider this a new create
          ls_event-event_type = '901'.   " Create item
          APPEND ls_event TO lt_event.
        ELSE.
          " read the old state of this item to detect field changes
          READ TABLE im_ekpo_old INTO ls_ekpo_old
               WITH KEY ebeln = <fs_ekpo>-ebeln
                        ebelp = <fs_ekpo>-ebelp.

          IF <fs_ekpo>-menge <> ls_ekpo_old-menge OR
             <fs_ekpo>-meins <> ls_ekpo_old-meins.
            ls_event-event_type = '912'. " line item quantity changed
            APPEND ls_event TO lt_event.
          ENDIF.

          IF <fs_ekpo>-netpr <> ls_ekpo_old-netpr OR
             <fs_ekpo>-peinh <> ls_ekpo_old-peinh.
            ls_event-event_type = '913'. " change of net amount
            APPEND ls_event TO lt_event.
          ENDIF.
        ENDIF.

      WHEN OTHERS.
        " ... (further change types, elided in the original)
        APPEND ls_event TO lt_event.

    ENDCASE.
  ENDLOOP.

* finally raise the list of events to Process Observer
  IF lt_event[] IS NOT INITIAL.
    CALL FUNCTION 'POC_RAISE_EVENT'
      EXPORTING
        it_event  = lt_event
        iv_commit = abap_false.
  ENDIF.

ENDMETHOD.

 

Note: Predecessor information has to be filled only if the predecessor is a different business object (item).

  

After you have created a purchase requisition (ME51N) and a corresponding purchase order item (ME21N), you change the observed ‘Purchasing Group’ field. When you have saved the change, you find the following entries in the process monitor (transaction POC_MONITOR). The information about the actually observed task - which field was changed - can be found in the ‘Related Activities’ section.

  process_monitor_fields.png

To further operationalize this, that is, to be informed of or to automatically react to such field changes, you create a counter KPI which counts the corresponding field changes (transaction POC_MODEL). In addition to what is described in the article ‘Create KPI definitions for Process Observer’, it is important to know that KPIs can also be defined on task level. This means that instead of specifying an activity for counting, you specify the BO type and the task type.
KPI_def_field.png

Then you set a threshold value (same transaction). You set the threshold to the value at which you want to be informed. If you want to be informed about any change to a field, just set the threshold value to 1. You will then be informed when the first change to the field is made.

threshold_field.png

Note: A BAdI and a BRF+ function that allow you to set thresholds specific to a process instance during process runtime are also available.

 

You can then use the threshold event created by Process Observer to create notifications and actions, as described in the blog Using Thresholds and Alerting in Process Observer.

To analyze field changes, you perform a BI analysis using the data source ‘Process KPIs’ (see also BI Content for Process Observer), or you aggregate the data contained in table POC_D_KPI in an SAP HANA system, for example by using SAP Operational Process Intelligence.

report_fields.png

[Screenshot taken from a BEx report on changes in sales orders]

 

Related Use Cases (Exception Tracking)

 

Instead of just tracking simple field changes, you can combine the tracking with more complex calculations. You may, for example, want to evaluate whether a quantity within an order is within a predefined limit. If it is not, you want to be informed of this.

  

If this is the case, you define a new task type and the ‘Quantity Exceeds Limits’ task as described above and add it to the process definition. Here it may make sense to have a separate activity for the event that is considered the exception:

task_exception.png

In the application BAdI, you first check whether the quantity was changed. If so, you calculate whether the quantity is still within the limits; if it is not, you add the ‘Quantity Exceeds Limits’ event.

    

IF <fs_ekpo>-menge > lv_qty_upper_limit
   OR <fs_ekpo>-menge < lv_qty_lower_limit.
  " ...
  " Schedule Line Quantity Exceeds Limits
  ls_event-event_type = 'Z001'.
  " ...
  APPEND ls_event TO lt_event.
ENDIF.
" ...

  

You have seen how easy it is to leverage Process Observer for tracking field changes in your documents or for monitoring related issues such as exceptions.

 

Stay tuned for more tips and tricks about Process Observer!

Process Observer - Old dogs and new tricks?

$
0
0

(This will be a series of blogs on my experience turning on Process Observer)


The Mission.


A few weeks ago, I was given the task to prototype the usage of Process Observer in our SRM 7 sandbox.  This is basically my own fault… over the past few years, I’ve gotten really annoyed at the fact that our current SRM 5 workflows have to write to custom tables every step along the way so that this information can be pushed up to our BW system. So I say it’s my fault because, after a few frustrating days debugging a timing problem between SRM5/WF/BW, when one of my teammates came to talk about what we should do in SRM 7, I said ‘Implement Process Observer and get rid of all this <junk>’.


I didn’t make this recommendation lightly.  I had (I thought) learned a bit about Process Observer and its mighty big brother SAP Operational Process Intelligence (powered by HANA) thru various webcasts and TechEd sessions, blogs on SCN, and generally drinking Peter McNulty’s Kool-Aid.


Step 1 – Gather the information!

So I gathered my learning materials, which I will share here.

 

First off, I had a copy of a TechEd workshop that I had attended in 2011.  I had also browsed around on SCN and found an extremely well organized series of blogs by the Process Observer team.  Basically, just type ‘Process Observer’ into the search bar, and the first result will be an overview, which is also updated to reflect subsequent blogs and documents.  So for this well organized set of information, I would like to call out my thanks to:


Jens-Christoph Nolte

Bernd Schmitt

Matthias Saettele

 

 

I also did due diligence with the SAP Help Portal, but I have to admit, I was discouraged when I saw the number of hits.   Fortunately, the workshop documentation included the link to the right area (at the time), so you don’t need to search through 11796 hits.

 

 

POB - help.png

 

(NB: Did I really think I would only get 5 – 10 hits on the phrase ‘process observer’?  No, of course not.  That would be like Googling ‘cat videos’ and being surprised not to find your own cat immediately.)

 

At any rate, the help is there if you need it – and it is actually contained on the main Process Observer discussion page – so you don’t need to search the Help Portal at all.

 

 

 

Armed as I was, and having a pretty good idea of the value proposition of Process Observer (it’s using GPS while you drive, rather than a map), I was eager to get at it.  And I was probably even more eager because I feel that a person with SAP Workflow skills and an understanding of Business Object Repository objects and BOR events will have a sound footing with Process Observer, although I also don’t think that is a prerequisite.


I scanned the first 25 pages of the TechEd Workshop presentation in order to get to ‘Activate Process Observer’. I always feel that when I begin working in a new area, I need to click something, show something, activate something – in order to get a quick win and boost confidence. 


Step 2 – Turn something (anything?) on


In the case of Process Observer, it’s pretty obvious what your first step needs to be, because it is delivered as a component of SAP Business Suite – which means that you need to use transaction SFW5 (Switch Framework Customizing) to turn it on.  Once you do, then an additional source of information is available via the IMG. But there’s no IMG on Process Observer until you switch it on.


The first thing you get is this pop-up:


 

Go ahead and continue.  On the next screen, if you hover over ‘FND_EPT_PROC_ORCH_1’ you can see that this is a reversible business function.  Of course, it’s clear now that I needed to check the checkbox and select ‘Activate Changes’.

 


 

You may notice from here on out that I stop referring to it as ‘Process Observer’ and start referring to it as ‘POB’.  We’re just on good terms like that. 


 

I always feel better knowing that there is a background job running.  To make sure that I had actually turned POB on, I tried the Implementation Guide….


 

I knew I was successful because I could check against a system that did NOT have POB turned on, where I was not able to see these entries.

The next thing I did - before any other work! - was to go a little further in the guide, select any transaction codes I found, and build myself a little folder of transactions related to POB.  Don’t mock me for pointing this out.  When you have navigated thru Tools > ABAP Workbench > Development > SAP Business Workflow > Definition tools > Events > Event creation about, oh, a thousand times, you’ll start adding your most-used transactions to a folder too.

 

That’s all for this entry. I will be working on the next ones even as you are reading this.  I hope you leave some comments!

Summary:

 

  • It’s much harder to find the help, tutorials, blogs, and discussions that you might need if you go in unprepared and then try to dig your way out of a mess.
  • Expect a certain level of uncertainty. Take screenshots and keep a Word document (or whatever your preference is) all along the way.
  • Pace yourself. Yes, I might be able to turn on POB in an hour. But understanding what I am doing - not just racing to get it done - is the real bonus.

Strategies for Filling Whitespace: Process First or Data First?

$
0
0

I posted this article today on Forbes.com: http://www.forbes.com/sites/danwoods/2013/08/12/why-data-first-development-rapidly-fills-application-whitespace/

 

I argue that we should build on what users do to solve their own problems, that is, use spreadsheets as a start and build from there. This data-first approach is very natural, although at scale it leads to a mess.

 

In the article I compare it to the process-first approach that was advocated by the BPX community. This approach requires a talented designer.

 

I would love to get some ideas about ways that large SAP shops are filling whitespace with application development techniques that are data first.

 

One other question: What parts of the BOBJ portfolio could play a role?

Process Observer - Old dogs and new tricks? (part 2)

$
0
0

Process Observer – Old dogs and new tricks, part deux

In my first blog of this series, I laid out the resources I had found for learning about this (to me, anyway) new tool, and activated it.  You can read it here.

 

In this blog, I try to get down to brass tacks, so to speak.  I will mainly use the IMG to step thru turning on a process, although I also refer back to my trusty workshop as well as the main Process Observer (POB) page.


Via transaction SPRO, navigate to:  Processes and Tools for Enterprise Applications > Process Orchestration for Built-In Processes

 

1) Maintain Objects in Façade Layer


 

The first time thru here, I admit to being a little lost.  BO Type 001 = Purchase Order?  In whose world?  In mine, it’s BUS2201.  I’ve since understood that these are (as the name Façade suggests) essentially SOA objects, so it’s not necessary that there is a ‘direct link’ to good old BUS2201 here.  Since I didn’t really understand this (these BO types and task types are alien), I went with creating my own.  Then again, the TaskTypeIDs were also a puzzle, so I just created my own.

  • Do it better - don’t bother creating your own BO Type and Task first off.  Use a BO Type that you can relate to. Remember: a Business Object Type (BO Type) is in the Façade Layer, and a BOR Type is from the Business Object Repository.

 

2)  Activate Business Object Log– Easy!  I turned on Standard Logging with my custom BO Type! This step links the activation with the BO Type (not the BOR Type). 

 

3)  Maintain Business Area - You will probably use this as you go forward, but at first you can give it a pass.  This is not a ‘Business Area’ as in GSBER, but a filter to be used in the BAdIs.   Still, I wanted to tick all the boxes in the IMG, so I created a Business Area anyhow.

 

 

 

4)  Maintain Business Object Repository Instrumentation

Well, this is just what I wanted!   Someplace where things I already know about can be linked to the POB BO Types!  Onward!!!

 

This is where the rubber meets the road – the linkage between the BO Type and the familiar old BOR type BUS2201.    You’ll note that this is why I said you should run with the predefined/delivered BO Type – it is already linked to BUS2201, so my entries were superfluous.


But I have a minor whinge when I read the help on this topic:

Maintain Business Object Repository Instrumentation

Use

In this Customizing activity, you create an infrastructure whereby events specific to the Business Object Repository (BOR) are captured, using logging, by process orchestration.

Requirements

BOR events are available in the system.

You have defined reusable tasks and business object types in the facade layer.

You have enabled the switched package BS_POC_SFWS_01.

Activities

You must carry out the following activities:

Map BOR events to tasks in the view cluster POC_VC_BOR.

Map business object types to BOR objects in the view cluster POC_VC_BOR.

Set the report RSWFEVTPOQUEUE as a background-running job.

 

My whinge here is that telling people to maintain a view cluster isn't very friendly.


* Do it better - Don’t try to find view clusters to maintain.  The transaction POC_BOR contains all the Business Object Repository maintenance.

     And no, I’ll say it again (I said it in my previous blog): you do not need to be a workflow developer to turn on POB.

 

5)  Map Previous Objects from Business Object Repository Payload - After hemming and hawing on this one, I decided this would be necessary in the case of a process definition that spans BO types.

 

6)  Schedule Business Object Repository Event Processing.  No problem.

 

7) Check Process Monitoring Events for BOR


 

  1. At this point, I have already learned some lessons.  Originally, I thought I should create my own BO Type (not to be confused with BOR objects) - as it turns out, I was wrong.  I got some excellent counseling from the POB team (Christoph Nolte and Peter McNulty) and they set me straight.
  2. Everything is not all about the BOR. Yes, it’s like a nice comfy old robe, but the POB functionality was built to be inclusive of BOR objects, not exclusive.  Don’t let it throw you.
  3. You will notice that these customizing entries were not really ‘technical’ - there is no coding, no workflowing, no ‘logic’ behind them. This is why it’s clear that you need to have your business analysts engaged.  Since I am working on a prototype, I can just forge ahead and make these things up.  But in a real POB implementation, I am going to have the business people in on this. There is more stuff later on, as I define my process definition, where they could really be helpful.

 

 

This seems like a good breaking point.  My next blog will focus on the actual process definition.  Meanwhile, I’d like to ask you: are you thinking of implementing POB?  You do know it’s a ‘gateway’ to SAP Operational Process Intelligence (powered by HANA), right?  If you have already implemented POB, what benefits has your organization seen?



Scheduling NW BPM Process

$
0
0

 

A few times we have come across the requirement to have a BPM process scheduled, so that it runs at a predefined interval. We couldn’t find any standard way of doing this other than having the process triggered from PI/ECC, from a scheduling tool, or by some other process (or perhaps using the Java Scheduler API). But in the absence of any other system or scheduling tool, we have to do it in BPM itself.

We have tried it in BPM by providing a trigger from within the same process. This is a very simple variation in modelling and should be straightforward to implement. The scheduling is done by making use of the “Activation time” attribute of a human activity. The parameter (the desired time interval) for task activation is supplied to the process start trigger.

 

We take a simple scenario where a particular human task is to be executed a day after the first process run, on a continual basis.

 

1. Create a single human activity process with a message trigger for START that accepts a single string attribute. The input parameter of the message is used to provide the desired activation time for the task.

Scheduler1.JPG


 

DO_Activate is the process context attribute that stores the activation time value passed to the start event.

Scheduler1.5.JPG

 

2. Configure the activation time property of the human activity as shown below.

 

Scheduler2.JPG

 

3. Create an automated activity following the above human activity. The interface for this activity is the same message interface that is used in the trigger for the start event. The input parameter of this message interface will hold the same value that is used as the activation time for the task. As it’s a static value in this example, it is the same value that was initially passed to DO_Activate; hence, it is mapped to the input parameters of the interface. (If we want the next process run to start dynamically, we can use the output of the human activity as input to the automated activity, which will then carry a dynamic activation time value.)

 

Scheduler3.JPG


 

4. Build and deploy the process. Assign a provider system to the service group that provides the configuration for the message interface used in the process.

 

5. Start the process from the process repository with some initial activation value (in the example it is 1).

 

Scheduler4.JPG

 

6. Though the process seems to have started, the task is still inactive and only becomes active after the given activation time.

 

Scheduler5.JPG

 

Once the task is processed, the process restarts itself with the same activation value and continues until it is cancelled.

 

 

In the case of automated activities, instead of an activation time we can use an Intermediate Timer Event to introduce the desired delay in the process.

 

As we are using the same message interface in the automated activity that is used in the start trigger, there could be some impact. You can see this in the process editor in the form of a nice yellow warning icon.

 

However, since we implemented this we haven’t come across any problems. But it would be nice if anyone could comment on the potential impact of this approach.

 

Applying Process Mining Techniques to Process Observer Data using the ProM Toolkit

$
0
0
Hi Process Observer community,
Today I want to show you another interesting approach for applying process mining techniques on top of log information for your business processes created with Process Observer. To do so, we will use the open-source process mining toolkit ProM that is provided by the Process Mining Group, Eindhoven Technical University.
Although the current ProM version is 6.2, we are using version 5.2, as some of the functionality used is available only in this version. You will find the link for downloading ProM 5.2 at http://www.promtools.org/prom5/ together with some samples and documentation. You can find the latest version of ProM at http://www.promtools.org/.
To export Process Observer log data from your SAP Business Suite system, you can use the sample report POCR_LOG_MXML_EXPORT (or transaction POC_LOG_MXML_EXPORT). For information about the installation of the report in your system, see note 1832016. The report allows you to specify the start and end date and time as well as the process definition relevant for the export. Using further options, you can restrict the export to processes with the status ‘finished’, extract further Business Object (BOR) attributes – which may be useful for a more in-depth analysis in ProM – and delete user information during the export.
In this example, we are exporting logged data from procurement processes, as defined in Instrumentation for Procure-to-pay process on item level in Process Observer.
prom_export.png
The result is stored as a file in Mining eXtensible Markup Language (MXML) format on the local PC.
prom_file.png
The exported file structure looks like this:
prom_xml.png
It is then imported for further processing in ProM using the ‘File – Open supported File…’ functionality of the ProM Workbench. Chose the import plugin for ‘MXML Log reader’.
prom_open.png
The imported file is now visible in ProM and immediately allows you to see some statistics for the imported processes; for these, check the ‘Dashboard’ and the ‘Summary’ view. Note: Process Observer instances are referred to as ‘cases’ in ProM, while ProM’s ‘events’ correspond to activities in Process Observer.
In the filter view, you can restrict your log even more:
prom_filter.png
The heuristics miner (path Mining - Log - Heuristics Miner) returns a model of the process execution and gives information about the frequency of the activities and transitions:
prom_heuristic.png
One way of identifying process variants is to create Markov chains by selecting the ‘Sequence Clustering’ plugin in the Analysis menu. You can then further inspect or view the created Markov chains:
markov.png
Alternatively, you can identify the most frequent path alternatives with their throughput times running the ‘Performance Sequence Diagram Analysis’ in the Analysis menu:
prom_sequence.png
Finally, I would like to show you how you can use ProM to identify paths that, on average, take too much time - the critical sub-paths or routes. To do so, you take the heuristic process model described above and use the ‘Conversion – Heuristic – Heuristic Net to Petri Net’ function to get a Petri net. Then run ‘Analysis – Petri Net Model – Performance Analysis with Petri Net’ and set ‘Times Measured In’ to an appropriate value. Steps with long durations / high waiting times are then marked in purple:
prom_performance.png
When reviewing the ProM tutorials and playing around with the tool, you will find that you can do a lot more mining and analysis on your process logs, such as evaluating organization-related information of the process, activity sequences, conformance checking, decision point analysis, and so on. Note that the size of the data sets that can be processed with ProM is limited; you may need to limit the size of the export accordingly.
The ProM Toolkit has proven to be a useful and very versatile tool for process analysis with Process Observer. I hope this little introduction has given you some ideas, and I’ll be happy if one of our followers gives an example in their blog about how they're mining and analyzing data.
A disclaimer about the ProM Toolkit itself: it is an open-source tool for which SAP takes no responsibility. The above article is for illustrative purposes only, showing how the tool could be used in the Process Observer context.
Thank you for staying tuned to this series!

Automatic Put-Back Action for BPM Tasks

$
0
0

We have developed several BPM processes with tasks that have multiple potential task owners.  What would regularly happen is that one task owner would open a task, but not work on it and just close the task browser window.  The task is then reserved (claimed) for that owner, and the other potential task owners can no longer see it in their work list.  That led to tasks that were not being worked on, potentially for a long time.

 

This caused some unpleasant user comments about the process.  Since SAP BPM doesn’t offer an automatic Put Back out of the box, we decided to implement our own solution to this problem.

 

To implement an automatic Put Back we did the following:

 

  • Designed a Cancel option in the process model (only needed with SAP BPM < 7.31)
  • Added Cancel as a task decision option
  • Detected a browser close using the Web Dynpro wdDoExit hook method and then completed the task with the Cancel decision

 

We also implemented an additional functionality we call Save & Close.  This allows the user to work on the task, e.g. enter data and comments, and then put the task back into the pool of the potential task owners.

 

In the next sections I will explain in detail how to implement the solution.

 

Process Model

For each human task the user’s decision is evaluated by an exclusive gateway.  If the decision was Cancel or Save & Close the process goes back to that human task.  With Save & Close the input of the user will be saved.

 

If you use task email notification via SAP BPM (global or task-specific), all potential users are notified again.  In some processes we use a custom email notification, which is handled in an automated task just before the human task.  In this case no additional email is sent, which is usually the behavior our users want.

 

Also consider that for each Save & Close or Cancel a new task instance is created.  Depending on the size of the process context and number of instances, this might increase database storage needs significantly.

 

With 7.31, the task can be put back using the BPM API, which does not create a new task instance.  This also means that no email will be sent from the process.  You could still trigger emails from the Web Dynpro application if needed.

 

process.png

 

Web Dynpro

In the Web Dynpro layer there is quite a bit more to do.

 

First, we need to get the taskId, which is posted to the task UI from the UWL, and store it in the session.  We do this in the wdDoInit hook method.

We also need a couple of other variables, which are declared in the Others section.

 

wd_wdDoInit.png

 

wd_others.png

 

When the user clicks one of the decision buttons in the task view, the corresponding method in the Component Controller is called.  For example, the Save & Close and Cancel decisions:

 

wd_fireEventSaveClose.png

 

 

Next, in our processes we show a confirmation dialog.  When the user clicks OK in the dialog, we handle the result in the event handling method for this dialog.

 

wd_handleConfirmOk.png

 

Here you can see how the Save & Close decision is handled.  It is very important to set the variable completed to true here.  We use this variable later in the wdDoExit method to decide whether the exit happened because of a regular task completed event or a browser close event.

 

The Cancel decision simply sets the corresponding decision in the Web Dynpro context and fires the task completed event.

 

For all other decisions, we save the data entered by the user, write a history entry to our process log, set the corresponding decision to be passed back to the process context, and fire the task completed event.

 

Next we take a look at the wdDoExit hook method.  Here we handle the browser close event.

 

wd_wdDoExit.png

 

If the task wasn’t completed by a user’s decision, we can assume the user simply closed the browser.  In this case we use the BPM API to get the task instance.  This check is needed because wdDoExit is always called by the Web Dynpro framework.

 

Using SDO, we pass the correlation ID and the decision back to the process and then complete the task.  This causes the process to continue with the next step.  Because we set the decision to Cancel, the process goes back to the task, as modeled in the process model.

 

With 7.31, you would put the task back using the BPM API instead of completing it.

 

 

DC Dependencies
To use the BPM API in the Web Dynpro task UI, the dependencies need to be configured accordingly.  You have to add the following required DCs:

 

  • tc/bpem/facade/ear
  • tc/je/sdo21/api

What is BPM?

$
0
0

Let's have a quick, and maybe also fun, discussion about what BPM means from your point of view... Here are my options for you:

 

BPM ...

 

1 ... is a Software Solution!

2 ... Beats Per Minute - Bum, Badong

3 ... is a holistic management discipline!

4 ... is a pure expert discipline - real managers delegate this kind of things to the specialists...

5 ... is one of these methods to save cost...

6 ...is a documentation exercise - modelling, modelling, modelling - puuh...

7 ...is the opposite of taylorism/specialization - rejoin across silos

8... is often done for its own sake - no value add...

9 ...has a clear focus on business outcome and value!

10 ... today should help master the growing complexity and constant change

 

What do you think is BPM about?

BPM @ DSAG Kongress 2013

$
0
0

I attended this year’s DSAG (German SAP User Group) conference as a speaker.  We presented the BPM projects we have implemented so far and our lessons learned introducing SAP BPM.

 

At the conference there were a couple of other interesting presentations about BPM.  I would like to highlight two of them.

 

Very interesting was the new guide for implementing SAP BPM published by the DSAG AK BPM.  It is written in German, so if you understand German, I very much recommend reading it.

 

Here is the link: http://www.dsag.de/fileadmin/media/Leitfaeden/Leitfaden_Business-Process-Management/index.html

 

Also very interesting was a presentation by Michael Rossitsch of WestImmo about their first steps in BPM.  Alongside SAP BPM implementations, they introduced a custom-developed application in which the actual business processes are documented and access to the relevant back-end systems is provided via links.  This is not a “real” SAP BPM implementation, but a great starting point for getting the business processes organized while involving the business in doing so.  It also provides them with all the relevant information to actually start SAP BPM implementations.  I thought it was a very smart way to go about introducing BPM.

 

In our presentation I fell a little short on time, so I didn’t get to all the aspects in detail.  I have already published a blog about automatically putting back a BPM task:  http://scn.sap.com/community/bpm/blog/2013/09/15/automatic-put-back-action-for-bpm-tasks

 

Two other aspects I didn’t get to in detail were persisting the BPM process ID in a separate database table and the reimport of processes.  I hope to provide blogs on both topics in the near future.

 

I also didn’t get to talk about ABPM, a product provided by SAP Consulting.  The “A” stands for accelerated.  It is a framework that generates much of the persistency and the UIs, thus accelerating SAP BPM implementations.  Our own architecture for BPM implementations is very close to the ABPM architecture, so we hope to reap all the acceleration benefits.  We are planning to start the evaluation of ABPM soon.
