
New and Enhanced Features for 15.0

Last update June 25, 2019

The following features and enhancements are available:

Enhanced zIIP Processing

When you use SRB mode in CA Datacom® Version 15.0, more zIIP engine work is driven and offloaded from the general processors during the execution of the Multi-User Facility (MUF). The specific increase varies by workload.

Note: To enable SRB mode, specify SRB as the fourth option of the MUF startup option SMPTASK.
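For illustration only, a startup option line might look like the following; the first three operand values are hypothetical placeholders, and only the fourth (SRB) is the setting described in this section:

SMPTASK 2,4,YES,SRB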

DATAPOOL data2no Parameter Minimum Increased

The minimum valid value for the data2no parameter on the MUF startup option DATAPOOL was increased from 1 to 3 in Version 15.0. Valid entries for the data2no parameter are now 3-99999.

Deleting an Index Entry from an Open Table

CA Datacom® Version 15.0 supports 24X7 operations by allowing an index entry (key definition) to be deleted from an open CA Datacom®/DB table. The CA Datacom® Datadictionary™ attribute KEY-USAGE can be used to deny access, in 24X7 real time, to the key definition that is to be deleted. Index entries can also be added while a table is open. A table being open means that the table is open for use by any number of user applications. Deleting or adding a key while a table is open is designed to have no effect on the operation of any executing applications that are using the table. For more information and steps about deleting an index entry from an open table, see Deleting an Index Entry from an Open Table.

Adding an Index Entry to an Open Table

New index entries (key definitions) can be added (cataloged by CA Datacom® Datadictionary™) to a CA Datacom®/DB table while that table and the database in which it resides remain open. Index entries can also be deleted while a table is open. Adding and deleting index entries while a table is open supports 24X7 operations.

A table being open means that the table is open to any number of user applications. Executing applications are unaffected by the index addition; after the addition is complete, the new index entry becomes available both to in-progress applications and to new applications that subsequently start.

A CA Datacom® Datadictionary™ key addition catalog function adds keys to tables in a 24x7 environment. The 24X7 key addition catalog exists in addition to a full database catalog. You are generally allowed to add a new key to a table while the table is open to user applications.

For more information and steps about adding an index entry to an open table, see Adding an Index Entry to an Open Table.

Extending Virtual Database Areas

CA Datacom® strives to provide 24x7 support. When an application needs to add data to a VIRTUAL database area but that area has become full, 24X7 operation is threatened. To minimize that threat, CA Datacom® supports the dynamic extension of VIRTUAL database areas. Previous releases only supported dynamic extension for DASD data sets.

VIRTUAL dynamic extensions are similar to DASD dynamic extensions in the following ways:

  • You can define an index area or data area to be subject to dynamic extension so that it activates when more space is required. More space can be required by the addition of an index entry or a data row that cannot fit in the currently defined space.
  • You can direct an index area or data area to be dynamically extended by a chosen size using a console-like command. This form of dynamic extension is a directed dynamic extension. Directed dynamic extension is completely independent of the full file dynamic extension setting.

VIRTUAL dynamic extensions are different from DASD dynamic extensions in the following ways:

  • A VIRTUAL area is subject to dynamic extension if a MUF startup option or console-like command defines the area to be subject to dynamic extension. DASD dynamic extensions are based on the selection of a CA Datacom® Datadictionary™ option. VIRTUAL areas accept the CA Datacom® Datadictionary™ information but ignore it for a VIRTUAL base.
  • DASD dynamic extensions are subject to Operating System rules regarding making open data sets larger. The basic rules are a maximum of 16 extends per volume and a maximum of 59 volumes. VIRTUAL dynamic extensions are completely within CA Datacom®, and CA Datacom® supports a maximum single-size extension of between one track and 2 billion bytes. CA Datacom® allows you to set the maximum number of dynamic extensions from 0 (none) to 65535 per MUF execution.
  • DASD data sets have a secondary allocation value in CYL/TRK/AVGBLK and a number, but VIRTUAL areas do not. The size of extensions for VIRTUAL dynamic extensions must be set in a console-like option or in a directed dynamic extension.
  • In prior releases, during MUF startup, you could define a VIRTUAL database area as VIRTUAL dbid-area,size. In Version 15.0, extend-size and extend-count options were added (see extend-size and extend-count in the information about the VIRTUAL startup option, and the example below).

Using a console-like command, you can change the VIRTUAL dynamic extension options while the MUF is enabled. The command is VIRTUAL_DYNAMIC_EXTEND dbid,area,extend-size,extend-count. For more information, see VIRTUAL_DYNAMIC_EXTEND in MUF Startup Options and Console/Console-Like Commands.
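For example, the following hypothetical lines show the two forms: first the VIRTUAL startup option with the Version 15.0 extend-size and extend-count operands, and then the console-like command changing the same settings while the MUF is enabled. The DBID (123), area name (A01), and size values are placeholders; the exact operand rules are in the VIRTUAL and VIRTUAL_DYNAMIC_EXTEND documentation:

VIRTUAL 123-A01,1000,500,100

VIRTUAL_DYNAMIC_EXTEND 123,A01,500,100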

64-bit memory is used if allowed and available. If the requirement is not satisfied with 64-bit memory, it is satisfied with Data Space memory, if available.

The MUF EOJ report provides the memory location, 64-bit or Data Space, based on the placement of the first allocation, which usually represents all parts of the memory. However, if an area starts with 64-bit memory at data area open and a later extension is satisfied with Data Space memory, the reported 64-bit placement is only partly right because it does not reflect the extension.

Dynamic System Table MUF_COVEREDVIRTUAL provides the same information and has the same limitation: it reports the placement of the first allocation, which usually represents all parts of the memory, but an area that started in 64-bit memory and was later extended with Data Space memory is reported as 64-bit only (see MUF_COVEREDVIRTUAL (MFC)).

The descriptions of messages DB01705I and DB01706I have been updated to include changes related to VIRTUAL areas. See DB01705I and DB01706I.

Extending the Temporary Table Manager (TTM)

Version 15.0 supports both dynamic extend of the Temporary Table Manager (TTM) work table used by SQL and an on-demand dynamic extend of the TTM table at your request. These dynamic extends are done with no loss of access to any other task areas or TTM usage; you do not need to close the TTM and later reopen it. Support is provided whether the TTM is defined as DASD or VIRTUAL. When the TTM is DASD, the normal dynamic extend ability is supported. When the TTM is VIRTUAL, both dynamic extend and on-demand dynamic extend are supported, using the sizes and counts that you specify.

Specifying dynamic extend at MUF startup is done using MUF startup options.

Changing dynamic extend settings after the MUF has started is done using the VIRTUAL_DYNAMIC_EXTEND console-like command.

An on-demand dynamic extend is done using the DYNAMIC_EXTEND console-like command.

The TTM can be found to be too small when using the ALTER TABLE statement (see ALTER TABLE).

A new column, TTM_BLKS_MAX_USE, that lists the highest number of 4K-blocks used, has been added to the SQL_STATUS table (see SQL_STATUS (SQS)).

If a TTM becomes full or is too small, you can receive SQL codes -560 (see -560 - TEMPORARY TABLE AREA (TTM) FULL) and -561 (see -561 - TTM TOO SMALL, SEE ERROR ACTION).

DBUTLTY Function PREINIT

The DBUTLTY PREINIT function pre-formats (pre-initializes) a data set as an index area or data area for a planned index area or data area that does not yet exist.

The PREINIT function promotes 24X7 operations. Without PREINIT, adding a new index area or data area to an existing database involves an outage for that database: you must close the existing database, catalog the definitions for the database, use a DBUTLTY INIT function to initialize (format) the new area, and use a DBUTLTY LOAD or RETIX function to populate the index or data area, all while the database is unavailable to MUF access. The PREINIT function allows a new index area or data area to be anticipated and initialized before the catalog is done.

The term "catalog" as used here includes CA Datacom® Datadictionary™ catalogs of databases, catalogs of new key definitions, and SQL DDL CREATE statements that add a new index or data area. During the catalog, and if the database currently exists, each index or data area that is found to exist with a known Data Set Name (DSN) is paired with that existing data set. For areas that do not have a known DSN, the list of PREINIT areas is searched for a match. If a match is found, the PREINIT DSN is "taken" and set as the DSN of the area, and the cataloged area is set as initialized and loaded. This process occurs for the Index Area (IXX), index areas Inn, and data areas.

The following examples illustrate how PREINIT removes the need for the INIT, RETIX, and LOAD functions. In each case, you use PREINIT to pre-format a new IXX, a non-IXX index area, or a data area before the database is cataloged, where the area is not yet known to CA Datacom®.

Note: Because the PREINIT function is done offline, the database is not opened or referenced. You can, therefore, PREINIT an index or data area for a database that is currently initialized and loaded and that could be in use by open applications. Obviously, you do not want to overlay an existing data set using PREINIT. For example, you might have an index area that is too small or on a full volume, and you want to make it bigger or move it. For a shorter outage, you can PREINIT the area to the new size and placement while the database is open and being processed. The outage is then limited to closing and removing the old data set name and cataloging the database to pick up the newly initialized data set.

Case 1: Cataloging a database not known to the Directory/CXX

The normal process is to catalog the new database, initialize the index and data areas using the INIT function, and then load each area (or the whole database) using the LOAD function. With the PREINIT function, you can instead pre-initialize the index and data areas first and then catalog the database; the pre-initialized areas are found during the catalog, and the database is set to initialized and loaded as part of the catalog. The difference can provide value in scheduling the actions.

Case 2: Cataloging a database known to the Directory (CXX) but containing a new data area (and table)

The normal process is to catalog the database and then INIT and LOAD the data area. The catalog requires an outage of the database for a few seconds. If the data area is only a few tracks, it is quickly initialized and loaded, but for a large area the INIT and LOAD take some time, and that time passes with the database closed. You can use the keyword MULTUSE=YES on the area INIT and LOAD to allow execution with the database open. With PREINIT, you can initialize and load the area before the catalog, allowing the catalog to find the area and set it as initialized and loaded.
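As a sketch of the MULTUSE=YES alternative, the control statements might look like the following; the DBID and area name are hypothetical, and other required keywords are omitted (see the INIT and LOAD documentation for the full keyword set):

INIT AREA=A02,DBID=123,MULTUSE=YES

LOAD AREA=A02,DBID=123,MULTUSE=YES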

Case 3: Add a new key definition to new index area

The normal process is to catalog the database, INIT the new index area, and run RETIX with MINIMAL=YES to populate the added index. This requires an outage during the catalog, INIT, and RETIX functions. When the index area has been pre-initialized with PREINIT beforehand, the new index definition can be added in a 24x7 way, with no outage of the database, by using the 'key' catalog option of CA Datacom® Datadictionary™, for example:

-UPD KEY,…

1000 APPLYCXX

Case 4: Add a new table to a new area using SQL

For example, if you were using SQL to add a table in a new data area to an existing database, you could use PREINIT to pre-format the new data area before using SQL DDL syntax to CREATE the area and table.

Case 5: Adding a new database using SQL

As another SQL-related example, to add a new database using SQL, you can use PREINIT to pre-format the new index and data areas for a planned database that is to be created with SQL DDL statements, including CREATE BASE, CREATE AREA, and CREATE TABLE.
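A minimal sketch of that sequence follows; all identifiers are hypothetical, the elided clauses are intentional, and the exact clause syntax is documented with the SQL DDL statements:

CREATE BASE PAYBASE ...

CREATE AREA PAYAREA ...

CREATE TABLE PAY.EMP (EMPNO CHAR(6) NOT NULL, NAME CHAR(30)) ...

With PREINIT run beforehand, the areas created by the DDL can be matched to the pre-formatted data sets during the catalog, as described above.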

For more information about the DBUTLTY PREINIT function, see PREINIT (Pre-format Index and Data Areas).

DBUTLTY Function INIT for ‘All’ Data Areas in a Database

CA Datacom® makes it easier to initialize data areas. Instead of initializing each data area separately with multiple INIT functions, you can initialize all data areas in a database with a single INIT function by specifying AREA=*DA. For more information, see INIT CXX/FXX/IXX/LXX/WXX/Data Area.
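For example, a single control statement like the following (the DBID value is hypothetical) initializes every data area in database 123:

INIT AREA=*DA,DBID=123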

Log Area (LXX) Enhancement Set

Extending the Log Area (LXX)

The ability to extend the Log Area (LXX) during MUF processing allows a larger LXX without the outage previously needed to use DBUTLTY to INIT an LXX with more tracks.

This feature is z/OS only. The LXX can be extended while the MUF is running using the existing DYNAMIC_EXTEND option. The process of extending the LXX is similar to that of extending the Directory (CXX). The additional space gained from the LXX extension can only be incorporated into the log when the LXX is at its "wrap point," that is, at the end of the data set, from which it "wraps" back to the front. Interactions related to LXX maintenance and the spilling of the LXX make the log available for extension only at the next wrap point or at the end of the next two wraps.

In addition, raising the limits of two Log Area (LXX) internal sequence numbers gives the CA Datacom®/DB Multi-User Facility (MUF) greater 24X7 availability.

Log Record Sequence Number (LRSN)

The Log Record Sequence Number (LRSN) is a count of records added to the CA Datacom®/DB LXX. There is one LRSN for every maintenance command that adds, updates, or deletes a record. A few control commands, for example a commit, also add to the LRSN.

In previous releases, the LRSN was an integer with a range from zero to one less than 4 billion. With each new execution of the MUF, the LRSN was restarted at a value of one. The LRSN was also restarted whenever a user issued a QUIESCE RST console command. The restart, however, required an outage of several minutes that impacted 24X7 availability. Read commands were not held during the restart, but open, close, and maintenance commands were.

In this release, the maximum size of the LRSN is increased from 4 billion to 4 billion times 4 billion, a number large enough that recycling because of the LRSN is never required. The outage required for recycling the LRSN is therefore avoided and its impact on 24X7 availability is eliminated.

Log Block Sequence Number (LBSN)

The Log Block Sequence Number (LBSN) is a count of log blocks written to the CA Datacom®/DB LXX.

In previous releases, the LBSN was an integer with a range from zero to one less than 4 billion. When the prior MUF was not force-terminated, each new execution of the MUF restarted the LBSN at a small number. The restart required an outage of several minutes, so some users did not ordinarily terminate the MUF normally because they did not want the short outage.

In this release, the maximum size of the LBSN is increased from 4 billion to 4 billion times 4 billion, a number large enough to never require recycling the LBSN. The outage required for recycling the LBSN is therefore avoided and its impact on 24X7 availability is eliminated.
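In binary terms, assuming the sequence numbers are unsigned binary counters, this change corresponds to widening the LRSN and LBSN from 32 bits (a maximum of 2^32 - 1, roughly 4 billion) to 64 bits (a maximum of 2^64 - 1, roughly 4 billion times 4 billion, or about 1.8 x 10^19).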

In support of larger LRSNs and LBSNs, three return codes and two messages have been added. The return codes are 94(147), 94(148), and 94(149). The messages are DB01227I and DB01228I.

New STATUS_LOG Console Command

The format of the STATUS_LOG command is similar to that of the STATUS and STATUS_SMP console commands. STATUS_LOG provides useful Log Area (LXX) information.

New Log Area LXX WRAP message

In support of the LXX extend, a new message occurs each time the LXX wraps. Message DB00321I - LOG LXX WRAP is generated after writing reaches the end of the data set and is reset (wrapped) back to the start. The message provides the elapsed time from the first physical log record at the start of the data set to the last physical log record at the end, that is, the time covered by one physical pass through the log. The lowest such time is also provided. These values can be of interest in sizing the Log Area. Sizing is of most interest in relation to spill processing but is also of some interest for Change Data Capture.

URI Reuse

Unique Row Identifier (URI) limitations in previous releases threatened 24X7 availability with rare but possible data table outages. A URI reuse feature in Version 15.0 enhances 24X7 availability by minimizing URI-related outages.

The maximum number of URI records is four billion records (4G). In previous releases, when the limit of four billion records had been reached, a return code 94 (096) was issued, and no new adds or inserts could be done until the data area was closed, backed up, and reloaded.

The URI reuse feature works with any area but is most useful in data areas that have many rows. The URI reuse process saves, tracks, and reassigns within the Multi-User Facility (MUF) the URI values of deleted rows that have become available for reuse. The values of the reusable URIs are saved to a URI reuse key within the index of a database. After you specify that you want to reuse URIs, no further user action is required. You choose URI reuse by specifying Y (yes) for the URI-REUSE CA Datacom® Datadictionary™ attribute on the AREA entity-type (batch transaction 3002). Once specified, the URI reuse process is triggered when the URI count for a data area reaches 3G (3,221,225,472). At that point, an asynchronous scan of the data area is scheduled in the MUF. This scan uses a new system task with a jobname of ***DBURI.

For details about the CA Datacom® Datadictionary™ URI-REUSE attribute and CA Datacom® Datadictionary™ batch transaction 3002, see the CA Datacom Datadictionary Attribute-Types Reference and the CA Datacom Datadictionary Batch Facilities pages.

Because the URI reuse feature stores available URIs in the index, consider whether you need to increase the size of existing indexes. This is not necessarily an issue at sites using dynamic extend, or where the number of free URIs is not significant relative to the number of rows in the data area.

The URI reuse feature has fallback considerations. Two key IDs can potentially be created when reusing URIs: one for tracking available URIs and one created by backward recovery. Backward recovery creates a key ID if the available URI index exists for the DBID and area being recovered; that key ID prevents CA Datacom®/DB from reusing a URI that backward recovery re-added to the area because of the backout of a delete.

We recommend not setting the URI-REUSE attribute to Y until you are certain that no fallback will be needed. When the new URI-REUSE attribute is implemented during an upgrade to Version 15.0, all existing AREA occurrences of URI-REUSE are set to N (no). If a fallback is needed after running with URI-REUSE set to Y, delete both of the URI reuse key IDs using a DBUTLTY LOAD or RETIX. The main fallback consideration occurs if the URI reuse feature is implemented and an area has a URI count that exceeds 3G, triggering a URI scan that creates a list of available URIs and, with it, the associated URI reuse key ID. If you then have to fall back and run a backward recovery in Version 14.0, we recommend running a REMOVE against the URI key ID.

Simplifying Install/IPL Common Memory Modules

Before Version 15.0, CAIRIM installed two CA Datacom®/DB Program Call module routines separately, DBPCCPR and DBPCSPR. The DBPCCPR module was used for local MUF processing requirements and DBPCSPR for MUF processing that was not local. MUF self-installed the DBPCCPR module, but the DBPCSPR module was not self-installed.

In Version 15.0, DBPCSPR has been combined with DBPCCPR. Documentation for the DBPCCPR module therefore now combines both DBPCCPR and DBPCSPR features. Combining DBPCSPR into DBPCCPR reduces by half the related installation and reporting actions.

All reporting of the DBPCSPR module has been eliminated from Version 15.0.

SQL Source Cache

Database Administrators (DBAs) use the SQL Source Cache to reduce MUF resource consumption and lower response times. The SQL Source Cache is a collection of previously executed dynamic queries from CA Datacom® Server and the batch utility DBSQLPR. Queries that are stored in the SQL Source Cache and match new queries reduce the preparation (bind) cost. Statistics accumulated for these queries allow inefficient queries to be identified.

Do not confuse the SQL Source Cache with the Least Recently Used (LRU) Statement Cache. The LRU Statement Cache reduces the overhead of executing "static" SQL statements by avoiding access to the DDD table. The SQL Source Cache, however, is a collection of "dynamic" statements. The two separate caches do not share the same statements. In this section, "the Cache" refers to the SQL Source Cache.

Access to the Cache is through a group of Dynamic System Tables (DSTs). All of the Cache DSTs have an authorization ID of SYSADM and begin with the prefix SQLSC (SQL Source Cache).

  • The SQLSC_FACILITY (SCF) table consists of one row. The SCF table describes the overall state of the SQL Cache since the MUF was started.
  • The SQLSC_PLAN (SCP) table contains a row for each combination of plan options that can affect a query's generated executable object from queries in the Cache.
  • The SQLSC_ENTRY (SCE) table consists of one row for each query in the Cache.
  • The SQLSC_VERSION (SCV) table consists of one row for each version of a query in the Cache.
  • The SQLSC_METRICS (SCM) table consists of one row for each processing step for each query. Metrics are kept for the original query, and each optimized query.

For more information, see System Tables Reference.

To be as effective as possible, the Cache replaces each literal in the SQL source string that is used to match queries in the Cache with a parameter marker in the form of a question mark (?). For example, if this query:

select * from cars where color = 'RED' and make = 'FORD'

is stored in the Cache, this new query would match:

select * from cars where color = 'PINK' and make = 'FORD'
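Internally, both queries map to the same cached source string, with each literal replaced by a parameter marker:

select * from cars where color = ? and make = ?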

Because the two queries are considered matches, the bind process is avoided by using a copy of the plan that was generated for the first query. Also, statistics are accumulated to a single query. Both queries were probably generated by the same application, substituting different input selection colors.

This reuse of the same plan does not prevent selection of the most efficient index, because index selection is still performed by the Compound Boolean Selection Facility (CBS) when different literal values could cause a different index to be most efficient. In this example, CBS would select the COLOR index over the MAKE index for the second query, even if the first query used the MAKE index.

To use the SQL Cache feature, either add the MUF startup options SQL_SOURCE_CACHE_SIZE and SQL_SOURCE_CACHE_STMTS to your MUF SYSIN or accept the defaults. 

Although you can dynamically update the specifications made in the SQL Cache-related MUF startup options, those updates only change the Master List, not the SQL structures that actually use the limits. The SQL structures are changed when an INSERT/CURSOR statement is processed in CA Datacom® Server or the batch utility DBSQLPR. Therefore, while a console command posts the change, the change does not take effect until the next INSERT/CURSOR is processed in CA Datacom® Server or the batch utility DBSQLPR. The SQLSC_FACILITY (SCF) Dynamic System Table column SCF_MEMORY_SIZED_TS shows the time when the change takes place, not the time the console command was issued.

When the Cache is full and a new query is inserted, the least recently used (LRU) query or queries are purged from the Cache. The SQLSC_FACILITY (SCF) table (DST) can be used to compute the hit ratio: the number of times a match was found compared to the number of times the Cache was searched.

Not all queries qualify for caching. The Cache is used for cursor-related SQL statements, that is, statements declaring, opening, fetching, and closing cursors. However, when certain features are used in a query, the literal replacement process cannot be used; in that case, the query is never added to the Cache and is, therefore, always bound.

Clearer MUF Error Termination

A failure in an SMP task, either a TCB or an SRB, causes a need to terminate the main task MUF TCB. In releases before Version 15.0, main task MUF TCB termination occurred as a U008 ABEND in the DBIOMPR module. In Version 15.0, the termination occurs in module DBU08PR, which is used for nothing else. Using DBU08PR makes the error easier to identify than the previously used DBIOMPR module, which serves purposes other than main task MUF TCB termination.

Generation Number for Dynamic System Tables

A new GENERATION column was added to existing Dynamic System Tables (DST) that have definitional changes. The GENERATION column carries a conceptual "generation" number. When such a table is queried, this column is populated with 1, representing generation 1. Existing table definitions carried forward from Version 14.0 and tables newly added in Version 15.0 are considered generation 0. You can therefore use the generation number to know which definition you currently have installed (SQL and DR).

Definitional changes to existing tables after Version 15.0 will either:

  • Add the GENERATION column, if it does not exist, and populate it with 1
  • Increase the returned generation number by 1 (for example, to 2 if the previous generation was 1), if the GENERATION column already exists

Sysview is allowed to use a new internal protocol to specify which generation definition is used to return data. This method allows Sysview to see new Version 15.0 columns in a Version 15.0 MUF before the new Version 15.0 definitions are installed, which allows for an easy fallback to Version 14.0 if needed.

The generation number concept allows you to upgrade the DST definitions independent of release boundaries. It also allows CA Datacom® to change DST definitions during a release cycle if needed, without impacting Sysview.

Dynamic System Tables Access through SQL

A true SQL query of a DST must always return data that matches the definition in the CXX/DDD. The Version 15.0 code line for the DSTs recognizes the definition of base 1000 and constructs the row that it wants. The code recognizes two definitions: the Version 14.0 definition and the Version 15.0 definition.

This allows you to perform the upgrade and still execute with the Version 14.0 definitions with zero outage. However, new tables, new fields, and altered fields are not available in this mode. This mode is appropriate during the upgrade until there is no risk that you will fall back to Version 14.0. To upgrade the definitions to Version 15.0, when you have a few seconds in which to close all users of the DST base, catalog the Version 15.0 definitions and then allow a reopen. At that point, the Version 15.0 definitions are available.

DBUTLTY Function REPORT with MEMORY=MVS

The MEMORY=MVS and TYPE=DATASP reports for CA Datacom® Version 15.0 have been updated. The DBUTLTY REPORT function with MEMORY=MVS has been enhanced for readability, and the REPORT function with TYPE=DATASP has been enhanced to provide more productivity-oriented information.

DBUTLTY REPORT Function with TYPE=DATASP

The first line is not changed:

AREA A01  BLKSIZE  4,096  TRACKS         31  BLOCKS        372  URI YES  DSOP 1 RANDOM                 SLACK     0

The second section, containing table information, no longer appears in that form; it is described below, where it now occurs after the main space usage information.

The headings of the third section, the usage section, have changed, but only on the right-hand side. An example before the change is:

---------- FREESPACE IN BLOCKS -----------


0 TO 1/4K       1/2K         1K         2K

       3K         4K         8K        12K

      16K        20K        24K        32K

An example after the change is:

---------- FREESPACE IN BLOCKS -----------


    0-128        256        512        768

    1,024      1,280      1,536      2,048

    2,560      3,072      3,584      4,094

What is different about the headings is the numbers used on the right side. Before this feature, the numbers were always the same, varying from a low number to 32K regardless of the block size of the area. With the change, the numbers vary with the block size. The example above is for a block size of up to 4K (4,096). The first entry includes "0-" to show that it counts blocks with zero (0) through 128 bytes of free space. Each subsequent entry counts blocks with free space ranging from one more than the previous number through the number printed. The last number is always the block size less 2 bytes.

A new section follows the TOTALS section. The form shown here is for data areas that use Data Space Options 1-5, where the area has been opened for update.

DSOP 1-5 TOTALS        EMPTY             LARGE             SMALL      LESS THAN   NEVER USED


                       4,094                96                72                            

                      BLOCKS   TO EMPTY BLOCKS   TO LARGE BLOCKS   SMALL BLOCKS       BLOCKS

                          31                 0                 0              0          341

This section provides information about "available" blocks as handled by the DSOP processing for options 1-5. The first and third heading lines identify the counts of EMPTY BLOCKS, LARGE TO EMPTY BLOCKS, SMALL TO LARGE BLOCKS, LESS THAN SMALL BLOCKS, and NEVER USED BLOCKS. The second heading line provides the definitions of an empty block, a large block, and a small block. Datacom picks the small value to track as available and the large value to track as available, with no user input or controls; the values are based upon the tables in the area. The details of the DSOP 1-5 available blocks are internal and not published.

A block that has been used for data records but has no current data records is considered empty and has free space of the block size less 2 bytes. A block with less available space but with room for what is considered a large record is counted, as is a block with still less space but room for what is considered a small record, and blocks with even less space. The final count is for blocks that have never held a data row (since the last INIT of the data area).

The next (last) section provides information about the tables of the area. It starts as it existed in prior releases but now has the following columns:


                                                           PHYSICAL   PHYSICAL   PHYSICAL   PHYSICAL

                                                                CXX      SMALL      LARGE    AVERAGE

TBL  CMP  USER COMPRESSION    CXX RECORDS   SCAN RECORDS   ROW SIZE   ROW SIZE   ROW SIZE   ROW SIZE

A sample line is:

AGR  YES                              466            466        821        136        365        276

This line shows table name AGR, which has DB compression and no user compression. The CXX stored record count is 466, and the scan of the data area found 466 records. The CXX physical row size (user data plus the Datacom Record Control Element) is 821, the physical small row size is 136 (the smallest compressed stored row found in the scan), the physical large row size is 365 (the largest compressed stored row found in the scan), and the average physical row size is 276. The largest physical row size can be larger than the uncompressed row size.

RAAT Long Keys

Before Version 14.0, CA Datacom® limited the total key length to 180 bytes. With Version 14.0, the limit was increased internally for SQL, CBS, and VSAM/T to 999 bytes.

With Version 15.0, CA Datacom® supports record-at-a-time (RAAT) commands with large keys up to 999 bytes (except for the OPEN and CLOSE commands). For more information, see Record-at-a-Time Technique.


Note: CA Dataquery™ for CA Datacom® and the CA Datacom®/DB Reporting Facility do not support RAAT long keys.

SQL Misc

Miscellaneous SQL enhancements in Version 15.0 are as follows:

UNSIGNED

UNSIGNED was added to the syntax for column specifications in ALTER TABLE (when adding a column) and in the CREATE TABLE column definition. For more information, see Using SQL.
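As a sketch of the new syntax (the table and column names are hypothetical; see Using SQL for the exact data types that accept UNSIGNED):

CREATE TABLE INVENTORY (PART_NO CHAR(8) NOT NULL, QTY_ON_HAND INT UNSIGNED)

ALTER TABLE INVENTORY ADD ON_ORDER SMALLINT UNSIGNED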

POSSTR

New string function POSSTR was added. For more information about the Character Functions, see Using SQL.
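For example, assuming the common POSSTR(source-string, search-string) form, which returns the starting position of the first occurrence of the search string, the following hypothetical query returns 5:

SELECT POSSTR('DATACOM', 'COM') FROM SOMETABLE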

Message DB00248I and DBSQLPR

DBSQLPR has long needed to produce compile information and information about the last PTF applied to the module. In Version 15.0, message DB00248I is used to supply that information. Message DB00248I previously generated output to JESMSGLG in the MUF job for DBSRPPR but now also generates output to JESMSGLG in the DBSQLPR job.

Note: For the DBSQLPR job, there is no output to SYSPRINT, only to JESMSGLG.

SQL_ALLOW_TIME_240000

The DIAGOPTION 1,64,ON was replaced with the MUF startup option SQL_ALLOW_TIME_240000 YES/NO.
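For example, a startup option line of the following form would replace the old DIAGOPTION (YES is shown; NO is the other documented choice):

SQL_ALLOW_TIME_240000 YES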

Performance of Open/Close Processing

Performance was improved in handling the requirements to open and close data sets within a MUF. The changes enhance the 24x7 (always available) aspect of CA Datacom® and address the time needed for an enabled executing MUF to be recycled to a new MUF execution or to a Shadow MUF.

The three key changes that make up the feature are as follows:

  • In earlier releases, the open process was built to "expect" data sets to have been archived and to handle the time during an unarchive in an efficient manner. Because most data sets are not archived, Version 15.0 is designed to work most efficiently for unarchived data sets; if a data set is found to be archived, the unarchive process is driven in a new subtask built for the purpose (DBOC3PR). This saves open/close time on the normal path.
  • In earlier releases, MUF performed every data set open and close in a single TCB, DBOC2PR. With Version 15.0, you can pick 1-20 subtasks to be used for open and close processing (the default is 4). The data sets should mostly balance among the allocated subtasks, allowing up to 20 parallel opens or closes.
  • Closing the areas within a database is performed in a more parallel process in Version 15.0 to reduce the time needed to close a group of areas.

The option name, which can be used at MUF startup only, is X_OPEN_CLOSE_SUBTASKS.
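For example, to request 8 open/close subtasks rather than the default of 4 (the value 8 is only illustrative; valid values are 1-20):

X_OPEN_CLOSE_SUBTASKS 8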

Index Key Usage

In earlier releases, the use of a key by CBS (and SQL) was typically allowed but could be restricted using the CXXMAINT function of DBUTLTY with the keyword OPTION=ALTER with CBS=YES/NO.

With Version 15.0, the key usage has been expanded.

CA Datacom® Datadictionary™ allows an optional KEY-USAGE attribute with each key definition reflecting access. KEY-USAGE can be set to ANY, RAAT, CBS/SQL, or NONE.

The value ANY means that all access using the key is allowed. The value RAAT means that Record At A Time requests using the key are allowed, while Set At A Time and SQL requests using the key are not. The value CBS/SQL means that Set At A Time and SQL requests using the key are allowed, while Record At A Time requests are not. The value NONE means that the key cannot be used for access to the table.

The key usage is set in the CXX during a catalog of the database from CA Datacom® Datadictionary™ or the alter of the key from CA Datacom® Datadictionary™.

During the catalog of a database, no user applications are executing against the table. During a key alter, executing programs are allowed with the table open. If there are executing programs, a change that allows access is instantly granted to all programs. The reverse is not true: restrictions are applied partly instantly and partly with delayed enforcement.

For CBS/SQL, any request area that defines a set and has had a SELFR that accepted key usage is allowed to continue with all commands for the set. Any SELFR request that searches for candidate key definitions rejects and skips over every key set to be blocked for CBS/SQL. SQL mostly uses CBS command set processing, and therefore the same rules apply.

When SQL uses RAAT for processing, it is allowed using the rules for CBS/SQL, not the rules for RAAT. For RAAT, any initial key usage command (R??KY/KG/KR/KX-LOCKY/KG/KR/KX-GSETL) fails with return code 03(027). Next or sequential commands are allowed to continue.

The primary reason for the feature is to "test" the deletion of a key definition and wait several weeks to months to be sure that there are no RAAT program failures or much slower CBS/SQL queries. Alternatively, if a key definition is added to speed CBS/SQL, there might be value in preventing RAAT usage, so that the definition can later be changed to other fields or deleted without risk of making RAAT programs fail.

KEY-USAGE Attribute

Valid values:

A – ANY (key is available for all read access)

C – CBS/SQL read access (SAAT and SQL) only is allowed. No RAAT access.

R – RAAT read access only is allowed. No CBS/SQL (SAAT) access.

N – NONE. No read access by either RAAT or CBS/SQL (SAAT) commands is allowed.

The KEY-USAGE option must be set to A (ANY) if the key is:

  • The Master Key for the table
  • The Native Sequence key for the table
  • Used by SQL constraints

Altering key usage during a catalog is allowed. Altering key usage during an alter key is not allowed when:

  • You are running MUFPLEX with two or more MUFs enabled (a Shadow MUF is fine).
  • The database was set during MUF startup to ACCESS NOOPT.
  • The database is currently defined as VIRTUAL.
  • The area is not in URI format.

CXX Release Level Handling

In earlier releases, CA Datacom® presented different strategies to upgrade a CXX from one release to the next. All of them required an outage for the conversion.

Version 15.0 completely removes that outage. Version 15.0 supports a CXX initialized and used as Version 14.0; in fact, Version 15.0 can only initialize a CXX as Version 14.0. There is no such thing as a CXX set as Version 15.0.

The new process employs few changes to the CXX between Versions 14.0 and 15.0, and the full CXX options are compatible between the two releases, except that options known only to Version 15.0 are ignored if Version 14.0 code is executing.

Changes at the database level are handled using the FORMAT value that was introduced in Version 12.0. A CXX built and used by Version 14.0 has each database at a FORMAT of 1 or 2. When Version 15.0 code executes with this CXX and opens a database, the FORMAT 1 or 2 bases are converted to FORMAT 3, and a message is issued about the conversion. A support solution within Version 14.0 causes it to recognize a database at FORMAT 3 and downgrade it to FORMAT 2.

With these abilities, the CXX is removed as a roadblock between Datacom executing as Version 15.0 and as Version 14.0. In fact, users of MUFPLEX may have different physical MUFs executing at different releases. This allows easy upgrades without an outage and also fallbacks without an outage.

Change to Support of MUF Startup Option MUFPLEX with Mode Set as B

In Version 15.0, the support of the MUF startup option MUFPLEX has had minor internal changes to enhance support of all users. It also includes a restriction that a maximum of 2 MUF instances can be enabled for mode B. This matches the restrictions for modes A and S.

Removal of SEQ=PHYSICAL Restriction when MULTUSE=YES for BACKUP and EXTRACT Functions of DBUTLTY

The restriction prohibiting the use of SEQ=PHYSICAL when MULTUSE=YES is specified for the DBUTLTY functions BACKUP and EXTRACT has been removed.
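For example, a combination like the following hypothetical statement is now accepted (the DBID and area are placeholders, and other required BACKUP keywords are omitted):

BACKUP AREA=A01,DBID=123,SEQ=PHYSICAL,MULTUSE=YES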

FUNCTION is a Reserved Table Name

FUNCTION is now a reserved table name.
