ABAP to the future – my version of the BOPF chapters – Part 2: Consumption of a BO using CRUD services
In the following chapters, we’ll look at how to actually program business logic in BOPF.
Even in the ancient DYNPRO-times, it was (theoretically) possible to implement an MVC-pattern. The DYNPRO-events PBO and PAI could serve as entry points for a controller which then delegates the actual logic to a model. However, I know of only a few samples where this has been done consistently (you might debug a PBO and PAI of the ABAP workbench (SE80) or the BOPF builder (BOBX) in order to get an impression of how this could be done).
In my opinion, there is no “BOPF equivalent of a PAI” (as Paul writes in A2tF), as the PAI is part of the UI or controller layer (even if I can argue with myself about which exactly it is part of), but it is surely not meant to be part of the model, which is where BOPF resides. The UI and its controller are BOPF consumers.
In this chapter, we will consume BOPF as well as provide business logic (such as checks on the input’s sanity). The interfaces and patterns for service consumption and provisioning look very similar, which is one of the strengths of the patterns used.
This chapter deals with how to access a business object instance.
Excursus: The BOPF Test-UI (transaction BOBT)
After you modeled the static structure of your business object, you can immediately interact with it in all aspects described in this chapter. For this purpose, you can utilize a Dynpro-based Test-UI. Simply load your business object by entering its name and start either with creating a new instance or identifying existing ones.
For a better usability of the Test-UI: Select a node attribute of a unique alternative key to be displayed instead of the GUID.
As we imagine entering our application, there are typically two different UI-patterns: After a selection screen, a list of instances matching the criteria is displayed, one row representing one instance of the entity. On the button bar, we’d be offered to edit an existing instance or to create a new one. As an alternative to a selection list, we could also simply have to enter an identifier, a human-readable semantic key. In both patterns, the next screen would either display the current data for editing or be used to create an instance with the corresponding ID (optionally with default values).
As written before, a business object node is the model part which corresponds to a UML class and thus carries the data of the actual instances. In BOPF, each of these instances is identified by a – tada – GUID. This technical key does not need to be modeled: While generating the combined structure, which includes the persistent as well as the (optional) transient information, BOPF also includes a technical structure, the so-called key-include. It contains not only the instance’s GUID (KEY), but also the PARENT_KEY, which is the KEY of the parent node instance (initial for root nodes), as well as the ROOT_KEY (for a root-node instance, ROOT_KEY and KEY carry the same value). This key-include is used by the framework in order to resolve compositions as well as their reverse (TO_PARENT) and TO_ROOT, but it can of course also be interpreted in business logic.
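To make the key-include tangible, here is a sketch of what a generated combined structure of a subnode (e.g. the monster’s HEAD) roughly looks like. The structure and attribute names are made up for illustration; only the key-include fields and the type /BOBF/CONF_KEY are BOPF standard.

```abap
" Sketch of a generated combined structure (names illustrative).
" BOPF places the key-include in front of the modeled attributes.
TYPES: BEGIN OF ts_monster_head,
         key            TYPE /bobf/conf_key, " instance GUID
         parent_key     TYPE /bobf/conf_key, " GUID of the parent instance (the monster ROOT)
         root_key       TYPE /bobf/conf_key, " GUID of the ROOT instance
         " --- modeled persistent and transient attributes follow ---
         number_of_eyes TYPE i,
       END OF ts_monster_head.
```

For a root node, KEY and ROOT_KEY carry the same GUID and PARENT_KEY is initial.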
All semantic data including identifiers is modeled as attributes. Based on these attributes, two core-services exist in order to get the KEYs of an instance: QUERY and CONVERT_ALTERNATIVE_KEY. Once you know the key, you can feed it to the core-service RETRIEVE in order to get the actual data.
Let’s have a look at QUERY first, as it’s the simpler one. A query is a modeled artifact which resides at a node (the “assigned node”). Based upon (multiple) query parameters, a set of instances of the node at which the query resides is returned (more precisely, the corresponding KEYs). The query contract allows applying the well-known select-options for each attribute (including BT and CP). There are two types of queries which do not need to be implemented, but which can be answered by the framework itself: The node-attribute-query SELECT_BY_ELEMENTS is a query whose parameters match the node structure; the SELECT_ALL-query is a query without any parameters.
Figure 18 – The SELECT_ALL-query is more or less only technical and needs to have this name
Figure 19 -The node attribute query with the node structure as query parameters
Note that the query-names are not unique within the model, but only within the context of the node: A SELECT_BY_ELEMENTS at the ROOT node will return keys of the ROOT-instances; the SELECT_BY_ELEMENTS at the HEAD node will return HEAD-keys matching the criteria (potentially of multiple monsters). All queries adhere to the implied contract and have to support paging as well as the restriction to a set of instances upon which the query is executed (see parameter “is_query_options”).
I believe that “QUERY” feels very familiar to most ABAP developers, as it kind of wraps an SQL-query (like a prepared statement). But there is one pitfall when using it in transactional applications: Just like any select-statement, only persisted data can be returned. The transactional buffer (some internal member table which holds the created and changed instances) is ignored. Therefore, I highly recommend using “QUERY” only from the consumer at the very beginning of a transaction (e. g. on a selection screen or at the beginning of some batch-report). Especially within service provisioning, queries must not be used! The side effects of dirty reads while applying business logic are tricky to identify and mostly horrible to correct.
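A minimal consumer-side QUERY call could look like the following sketch. The service manager is obtained from the factory /BOBF/CL_TRA_SERV_MGR_FACTORY; the constants interface ZIF_MONSTER_C with its component names (sc_bo_key, sc_query, sc_node_attribute, the creator attribute) is an assumption based on this sample model.

```abap
" Obtain the transactional service manager for the monster BO
DATA(service_manager) = /bobf/cl_tra_serv_mgr_factory=>get_service_manager(
                            zif_monster_c=>sc_bo_key ).

DATA monster_keys TYPE /bobf/t_frw_key.

" Node-attribute query: select-option-like parameters on the node structure
service_manager->query(
  EXPORTING
    iv_query_key            = zif_monster_c=>sc_query-root-select_by_elements
    it_selection_parameters = VALUE /bobf/t_frw_query_selparam(
        ( attribute_name = zif_monster_c=>sc_node_attribute-root-creator
          sign           = 'I'
          option         = 'EQ'
          low            = 'DR_FRANKENSTEIN' ) )
  IMPORTING
    et_key                  = monster_keys ).
```

Remember: the returned keys only reflect persisted data; instances created in the current transaction will not be found.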
The core-service CONVERT_ALTERNATIVE_KEY is much less comfortable with respect to identifying instances of a node and needs more modeling, but it respects the transactional buffer! An alternative key in the sense of BOPF is an attribute of a node (or a combination of multiple attributes) which serves to identify either exactly one instance (usually an ID) or a set of instances (usually a foreign key). A node may have one or more alternative keys which are explicitly modeled.
The definition in the business object comprises its structure as well as its multiplicity (uniqueness). In our sample, the monster name could be a unique alternative key while the creator could be a non-unique alternative key if there was a need to have business logic based on the selection by creator.
Figure 20 – Multiple alternative keys for a monster, the name being a unique one
If for example monsters have got a rental price and for all monsters of a creator the price shall be adjusted, we’d need an alternative key on the creator: Using a query would not find a monster which has been created within the same transaction.
Figure 21 – A non-unique alternative key configuration
Figure 22 – Using an alternative key conversion in the Test-UI
The alternative key’s uniqueness can also be used for validating that no second instance with the same unique alternative key is getting created. In contrast to what Paul wrote, BOPF offers a reuse feature which ensures the adequate uniqueness: Once you model an alternative key, you are requested to add an action validation (which we’ll cover in a later chapter) with the implementation class /BOBF/CL_LIB_V_ALTERNATIVE_KEY.
Figure 23 – The BO check will inform about un-validated alternative keys
The SAP-provided implementation also ensures uniqueness across multiple sessions on non-persisted data!
Remark: Alternative keys are also necessary in order to be able to model associations between nodes of different business objects (Cross-BO-associations). In this case, the multiplicity of the association has to match the uniqueness of the alternative key.
Alright, now we’ve got a set of technical keys of instances which we’d like to process. There are two core-services for reading BO nodes: RETRIEVE gets the data of instances of which we know the KEYs. RETRIEVE_BY_ASSOCIATION – surprise, surprise – can retrieve instances (KEYs) of associated nodes. Optionally (not by default!), RETRIEVE_BY_ASSOCIATION also returns the data of the target instances. Both services allow the consumer to specify which information of the retrieved node they are interested in via it_requested_attributes. If one of the requested attributes is a calculated one (from the transient part of the node structure), BOPF will execute the corresponding calculation. If no requested attributes are specified, all node-attributes are considered requested.
As your models grow (and they will, be sure) and transient information is added and calculated, the use of the requested attributes becomes more and more important. So even if you’re currently requesting all attributes of the modeled nodes, I recommend specifying the attributes which are relevant. This not only spares you nasty performance analyses in the future, but also helps to make your code more readable. Let me give you a short sample:
service_manager->retrieve(
  EXPORTING
    iv_node_key             = zif_monster_c=>sc_node-root
    it_key                  = relevant_monster_keys
    it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-root-number_of_heads ) )
  IMPORTING
    et_data                 = relevant_monsters ).
The above code implies that the number of heads is relevant for the business logic which is about to follow. Also note that a table of monster-keys is being fed into the method. In BOPF, all commands issued by the consumer are mass-enabled. This is particularly important for the retrieval-methods, as each read might result in a DB access (if the buffer is not being hit for all instances). It can cripple your system’s performance if you only feed single keys, read with index 1, and do this in a loop. I highly recommend mass-reading all the relevant data (including the necessary associated data) right at the beginning of the method. If, in addition, you properly fill the requested attributes, 80% of your performance tuning has already been taken care of.
The command for following an association looks very similar:
service_manager->retrieve_by_association(
  EXPORTING
    iv_node_key             = zif_monster_c=>sc_node-root
    it_key                  = relevant_monster_keys
    iv_association          = zif_monster_c=>sc_association-root-head
*   iv_fill_data            = abap_true
*   it_requested_attributes = VALUE #( ( zif_monster_c=>sc_node_attribute-head-number_of_eyes ) )
  IMPORTING
    et_key_link             = link_root_head
*   et_data                 = relevant_monsters_heads
).
A careful observer will see that the data of the target node is not always being returned when following an association. The runtime representation of an association is a link between the source and the target node. The data is actually a property of the target node (and not of the association). The target node data is also not always necessary in order to implement the requested behavior. As the retrieval of the target node’s data is comparatively expensive (particularly if transient information is requested), the default of a retrieve by association is not to request the data (iv_fill_data). If you have managed to implement a real-world use case without ever running into a short-dump because you forgot to set iv_fill_data = abap_true, you are certainly a more careful programmer than I am.
After we read the current data of an instance, we might want to manipulate it. /BOBF/IF_TRA_SERVICE_MANAGER offers the core-service MODIFY, which is a command to execute all kinds of manipulations (Create, Update, Delete). The modify command gets passed a set of modification instructions which might not only affect multiple instances, but also multiple nodes in one call. This is essential, as there might be business logic which validates whether an instance can be created based on subnode-data. E. g. we could validate that each monster needs to have at least one head. Creating a monster without a head would then lead to the modifications being rejected for the failed monster instance.
I will not go into the details of the command, but I recommend you read the method documentation on the modification structure, which will really help you – the BOPF documentation team did a great job there.
Let me highlight some aspects which might not be obvious from the documentation. When creating instances of multiple nodes of a composition (in one modification call), you need to make sure that the instances of the subnode are created for the proper parent-node-instance. In order to be able to do this, you need to know the KEY of the parent node instance. In this case, you can use /bobf/cl_frw_factory=>get_new_key( ) in order to define with which technical identifier the parent node instance shall be created. Otherwise, as a consumer, you don’t need to define the key; the framework will do that for you.
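To make this concrete, here is a sketch of creating a monster together with one head in a single MODIFY call. The combined structure names ZSMONSTER_ROOT and ZSMONSTER_HEAD, their attributes, and the constants interface components are assumptions based on this sample model; the modification structure fields and /bobf/if_frw_c=>sc_modify_create are BOPF standard.

```abap
DATA modifications TYPE /bobf/t_frw_modification.

" Pre-determine the ROOT key so that the HEAD can reference its parent
DATA(monster_key)  = /bobf/cl_frw_factory=>get_new_key( ).

DATA(monster_data) = NEW zsmonster_root( name = 'GODZILLA' ). " assumed combined structure
DATA(head_data)    = NEW zsmonster_head( number_of_eyes = 2 ). " assumed combined structure

modifications = VALUE #(
  ( node        = zif_monster_c=>sc_node-root
    change_mode = /bobf/if_frw_c=>sc_modify_create
    key         = monster_key
    data        = monster_data )
  " The head is created relative to its parent via the composition
  ( node        = zif_monster_c=>sc_node-head
    change_mode = /bobf/if_frw_c=>sc_modify_create
    source_node = zif_monster_c=>sc_node-root
    source_key  = monster_key
    association = zif_monster_c=>sc_association-root-head
    data        = head_data ) ).

service_manager->modify( it_modification = modifications ).
```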
Once you update an instance, you can use the changed attributes in order to inform the framework which parts of the instance have changed. This not only increases performance (as BOPF doesn’t have to compare the before- and target-data), but also allows you to have multiple modification instructions per instance affecting different attributes.
When deleting an instance, BOPF will implicitly delete the subnodes (via the compositions) as well. There is no need for an explicit deletion of the subnode-instances.
Each core-service returns a message container and a change object.
It is crucial to understand that in a BOPF-application (as it should be in any other well-designed application), messages are exclusively intended to be interpreted by a human. Business logic must never be based upon the existence of a particular message-attribute. BOPF calculates a change-object after each roundtrip. This not only reliably informs about the differences in the transaction before and after the roundtrip, but also tells you about failed changes. It may also be the case that during one roundtrip, multiple modifications are being made, of which some are successful and some fail (because they violated some constraint). Thus, if the has_failed_changes( )-method returns abap_true, you definitely have to analyze which change failed!
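As a sketch, evaluating the change object after a MODIFY roundtrip could look like this (assuming the service manager and modification table from the previous samples; the interfaces /BOBF/IF_TRA_CHANGE and /BOBF/IF_FRW_MESSAGE are the BOPF standard types of the two exporting parameters):

```abap
DATA change  TYPE REF TO /bobf/if_tra_change.
DATA message TYPE REF TO /bobf/if_frw_message.

service_manager->modify(
  EXPORTING
    it_modification = modifications
  IMPORTING
    eo_change       = change
    eo_message      = message ).

IF change->has_failed_changes( ) = abap_true.
  " Analyze which instances failed via the change object -
  " and hand the message container over to a human (e.g. a UI log),
  " never to business logic.
ENDIF.
```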
> Find more alternative versions of chapters in my blogs.