What’s new for Developers in Oracle Database 23.7

For more information on what’s new in Oracle Database 23.6, see: What’s new for Developers in Oracle Database 23.6

TIME_BUCKET Function

You can use the trunc function to group rows into days, hours, or minutes. But to place rows into N-unit time slices – e.g., five minutes, two hours, or three days – you had to write a formula to find the start and end times of each group.

The time_bucket function simplifies this logic tremendously. It takes up to 5 parameters to help you find the start or end times for the interval a given value falls in:

  1. The datetime – the value to bucket
  2. The stride – the size of each group as an interval
  3. The origin – the anchor date for calculating start and end times
  4. The start_or_end – whether to return the start or end of each bucket (optional; defaults to start)
  5. The timebucket_optional_clause – controls how the function behaves when the return value is an invalid date, or when the origin is the last day of the month and the stride contains only month and/or year (optional; defaults to on overflow round)

The following example demonstrates the use of time_bucket to bucket a given value in 5, 15, and 90-minute buckets:

ALTER SESSION SET NLS_DATE_FORMAT = ' HH24:MI ';

WITH times AS ( 
  SELECT DATE '2025-01-21' + ( LEVEL / 240 ) dt CONNECT BY LEVEL <= 4
)
SELECT dt,   --   value,          bucket size,   origin date, bucket start/end
       TIME_BUCKET ( dt, INTERVAL  '5' MINUTE,  TRUNC ( dt )        ) five,
       TIME_BUCKET ( dt, INTERVAL '15' MINUTE,  TRUNC ( dt ), START ) fifteen,
       TIME_BUCKET ( dt, INTERVAL '90' MINUTE,  TRUNC ( dt ), END   ) ninety
  FROM   times ORDER BY dt;

DT         FIVE       FIFTEEN    NINETY
__________ __________ __________ __________
 00:06      00:05      00:00      01:30
 00:12      00:10      00:00      01:30
 00:18      00:15      00:15      01:30
 00:24      00:20      00:15      01:30

You can see that all values fall within the same 90-minute bucket (returning the end of the bucket, 01:30) because they all lie between minutes 6 and 24. For the 15-minute stride, the first two rows fall into the bucket starting at 00:00, while the other two fall into the bucket starting at 00:15 (returning the start of the bucket). And no two values share the same 5-minute bucket (returning the default, the start of the bucket).
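
Because time_bucket returns the bucket boundary, you can also group by it to aggregate rows per time slice. Here is a minimal sketch, assuming a hypothetical orders table with an order_date column (not part of the example above):

-- Count orders per two-hour bucket; orders and order_date are illustrative names
SELECT TIME_BUCKET ( order_date, INTERVAL '2' HOUR, DATE '2025-01-01' ) bucket_start,
       COUNT (*) order_count
  FROM   orders
  GROUP  BY TIME_BUCKET ( order_date, INTERVAL '2' HOUR, DATE '2025-01-01' )
  ORDER  BY bucket_start;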

For more information, see: https://docs.oracle.com/en/database/oracle/oracle-database/23/dwhsg/sql-analysis-reporting-data-warehouses.html

Materialized columns

Virtual columns – added in 11g – have long enabled developers to define columns whose values are calculated. The database runs the calculation every time the column is selected. For complex calculations, recalculating the value on every read can add significant overhead and response-time delay if the value is queried frequently. Hence, for write-few, read-often values, it can be better to store the result of the calculation when the data is inserted or changed.

Materialized columns in 23.7 enable you to define calculated columns that store the result on write. These move the work from select to insert and update statements and are great for write-few, read-often scenarios, but have the opposite trade-offs for write-often, read-few scenarios.

To demonstrate the benefits of either column type, consider the function inc, which has a 0.1s wait to simulate a complex calculation. The table my_table has both a virtual (value2) and a materialized (value3) column calling the function.

Inserting ten rows takes around 1 second (10 * 0.1s wait time in function inc) as the database stores the result in the materialized column. Querying the materialized column takes less than a tenth of a second; querying the virtual column still takes at least one second because of the wait in function inc:

CREATE OR REPLACE FUNCTION inc ( 
  v int,
  i int 
)
 RETURN INT DETERMINISTIC AS
BEGIN
  dbms_session.sleep ( 0.1 ); -- wait 1/10th second
  RETURN v + i;
END;
/

CREATE TABLE my_table ( 
  value1 int, 
  value2 int
   AS ( inc (value1, 1) ) VIRTUAL,     -- calc on read
  value3 int
   AS ( inc (value1, 2 )) MATERIALIZED -- calc on write
);

set timing on

-- The materialized column is calculated and stored on write; runtime ~1s
INSERT INTO my_table ( value1 ) 
  SELECT LEVEL
   CONNECT BY LEVEL <= 10;

10 rows inserted.

Elapsed: 00:00:01.036

-- Virtual col calls function at runtime; runtime ~1s
SELECT SUM ( value2 ) FROM my_table;

   SUM(VALUE2)
______________
            65

Elapsed: 00:00:01.053

-- Materialized col stores function result on insert
-- query runtime < 0.1s
SELECT SUM ( value3 ) FROM my_table;

   SUM(VALUE3)
______________
            75

Elapsed: 00:00:00.007
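
The flip side: because the database recalculates the materialized column whenever its source column changes, writes pay the cost instead. As a rough sketch using the table above, you would expect updating all ten rows to take around a second again, while the virtual column defers its calculation until the next read:

-- Updating value1 re-runs inc for the materialized column value3 on write,
-- so expect roughly 10 * 0.1s for ten rows
UPDATE my_table SET value1 = value1 + 10;

COMMIT;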

For more information, see: https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/tables-and-table-clusters.html#GUID-E32FDAD3-0D1E-4F16-B1A2-7F821D63177F

DBMS_DEVELOPER.GET_METADATA()

The dbms_metadata package has long been a way to get the definition of database objects as a DDL statement or XML document. These formats are impractical for the many applications that use JSON as their data exchange format. The dbms_developer package provides an API to get a JSON representation of tables, views, and indexes. Besides returning JSON, calls to dbms_developer are also much faster than their dbms_metadata counterparts:

CREATE TABLE my_table ( 
  column1 INT NOT NULL PRIMARY KEY ANNOTATIONS (pk, nn, datatype 'numeric'), 
  column2 DATE                     ANNOTATIONS (datatype 'datetime')
) ANNOTATIONS (note 'example table');

set timing on

SELECT JSON_SERIALIZE(
 DBMS_DEVELOPER.GET_METADATA('MY_TABLE')
 PRETTY) table_info;

TABLE_INFO
_________________________________________________________
{
  "objectType" : "TABLE",
  "objectInfo" :
  {
    "name" : "MY_TABLE",
    "schema" : "GERALD",
    "columns" :
    [
      {
        "name" : "COLUMN1",
        "notNull" : true,
        "dataType" :
        {
          "type" : "NUMBER",
          "precision" : 38
        },
        "isPk" : true,
        "isUk" : true,
        "isFk" : false,
        "annotations" :
        [
          {
            "name" : "PK",
            "value" : ""
          },
          {
            "name" : "NN",
            "value" : ""
          },
          {
            "name" : "DATATYPE",
            "value" : "numeric"
          }
        ]
      },
      {
        "name" : "COLUMN2",
        "notNull" : false,
        "dataType" :
        {
          "type" : "DATE"
        },
        "isPk" : false,
        "isUk" : false,
        "isFk" : false,
        "annotations" :
        [
          {
            "name" : "DATATYPE",
            "value" : "datetime"
          }
        ]
      }
    ],
    "hasBeenAnalyzed" : false,
    "indexes" :
    [
      {
        "name" : "SYS_C008825",
        "indexType" : "NORMAL",
        "uniqueness" : "UNIQUE",
        "status" : "VALID",
        "hasBeenAnalyzed" : false,
        "columns" :
        [
          {
            "name" : "COLUMN1"
          }
        ]
      }
    ],
    "constraints" :
    [
      {
        "name" : "SYS_C008824",
        "constraintType" : "CHECK - NOT NULL",
        "searchCondition" : "\"COLUMN1\" IS NOT NULL",
        "columns" :
        [
          {
            "name" : "COLUMN1"
          }
        ],
        "status" : "ENABLE",
        "deferrable" : false,
        "validated" : "VALIDATED",
        "sysGeneratedName" : true
      },
      {
        "name" : "SYS_C008825",
        "constraintType" : "PRIMARY KEY",
        "columns" :
        [
          {
            "name" : "COLUMN1"
          }
        ],
        "status" : "ENABLE",
        "deferrable" : false,
        "validated" : "VALIDATED",
        "sysGeneratedName" : true
      }
    ],
    "annotations" :
    [
      {
        "name" : "NOTE",
        "value" : "example table"
      }
    ]
  },
  "etag" : "4F35940648C7CEC8DDA21C0580D767AF"
}

Elapsed: 00:00:00.080

SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE');

DBMS_METADATA.GET_DDL('TABLE','MY_TABLE')
_____________________________________________________________________________________________

  CREATE TABLE "GERALD"."MY_TABLE"
   (	"COLUMN1" NUMBER(*,0) NOT NULL ENABLE ANNOTATIONS("DATATYPE" 'numeric', "NN", "PK"),
	"COLUMN2" DATE ANNOTATIONS("DATATYPE" 'datetime'),
	 PRIMARY KEY ("COLUMN1")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "USERS"
  ANNOTATIONS("NOTE" 'example table')

Elapsed: 00:00:03.01

The dbms_developer.get_metadata function also simplifies the calling signature. While the dbms_metadata.get_xxx functions always require the object type as the first input parameter, dbms_developer.get_metadata can infer the type via name resolution. However, objects of different types can share the same name within a schema – indexes and tables, for example. In that case, dbms_developer.get_metadata defaults to the table:

CREATE INDEX my_table ON my_table (column2);

SELECT JSON_SERIALIZE(
 DBMS_DEVELOPER.GET_METADATA('MY_TABLE') -- information for the table is being retrieved
 PRETTY) info;

INFO
_________________________________________________________
{
  "objectType" : "TABLE",
  "objectInfo" :
  {
    "name" : "MY_TABLE",
    "schema" : "GERALD",
    "columns" :
    [
      {
        "name" : "COLUMN1",
        "notNull" : true,
        "dataType" :
        {
          "type" : "NUMBER",
          "precision" : 38
        },
        "isPk" : true,
        "isUk" : true,
        "isFk" : false,
        "annotations" :
        [
          {
            "name" : "DATATYPE",
            "value" : "numeric"
          },
          {
            "name" : "NN",
            "value" : ""
          },
          {
            "name" : "PK",
            "value" : ""
          }
        ]
      },
      {
        "name" : "COLUMN2",
        "notNull" : false,
        "dataType" :
        {
          "type" : "DATE"
        },
        "isPk" : false,
        "isUk" : false,
        "isFk" : false,
        "annotations" :
        [
          {
            "name" : "DATATYPE",
            "value" : "datetime"
          }
        ]
      }
    ],
    "hasBeenAnalyzed" : false,
    "indexes" :
    [
      {
        "name" : "SYS_C008825",
        "indexType" : "NORMAL",
        "uniqueness" : "UNIQUE",
        "status" : "VALID",
        "hasBeenAnalyzed" : false,
        "columns" :
        [
          {
            "name" : "COLUMN1"
          }
        ]
      },
      {
        "name" : "MY_TABLE",
        "indexType" : "NORMAL",
        "uniqueness" : "NONUNIQUE",
        "status" : "VALID",
        "lastAnalyzed" : "2025-05-06T22:53:27",
        "numRows" : 0,
        "sampleSize" : 0,
        "columns" :
        [
          {
            "name" : "COLUMN2"
          }
        ]
      }
    ],
    "constraints" :
    [
      {
        "name" : "SYS_C008824",
        "constraintType" : "CHECK - NOT NULL",
        "searchCondition" : "\"COLUMN1\" IS NOT NULL",
        "columns" :
        [
          {
            "name" : "COLUMN1"
          }
        ],
        "status" : "ENABLE",
        "deferrable" : false,
        "validated" : "VALIDATED",
        "sysGeneratedName" : true
      },
      {
        "name" : "SYS_C008825",
        "constraintType" : "PRIMARY KEY",
        "columns" :
        [
          {
            "name" : "COLUMN1"
          }
        ],
        "status" : "ENABLE",
        "deferrable" : false,
        "validated" : "VALIDATED",
        "sysGeneratedName" : true
      }
    ],
    "annotations" :
    [
      {
        "name" : "NOTE",
        "value" : "example table"
      }
    ]
  },
  "etag" : "C16F4A72C0DAA3FE25792BCF0F8F22BE"
}

To retrieve the information about the index, provide object_type => 'INDEX':

SELECT JSON_SERIALIZE(
 DBMS_DEVELOPER.GET_METADATA('MY_TABLE', object_type => 'INDEX') -- information for the index is retrieved
 PRETTY) index_info;

INDEX_INFO
________________________________________________
{
  "objectType" : "INDEX",
  "objectInfo" :
  {
    "name" : "MY_TABLE",
    "indexType" : "NORMAL",
    "Owner" : "GERALD",
    "tableName" : "MY_TABLE",
    "status" : "VALID",
    "columns" :
    [
      {
        "name" : "COLUMN2",
        "notNull" : false,
        "dataType" :
        {
          "type" : "DATE"
        },
        "isPk" : false,
        "isUk" : false,
        "isFk" : false,
        "annotations" :
        [
          {
            "name" : "DATATYPE",
            "value" : "datetime"
          }
        ]
      }
    ],
    "uniqueness" : "NONUNIQUE",
    "lastAnalyzed" : "2025-05-06T22:53:27",
    "numRows" : 0,
    "sampleSize" : 0
  },
  "etag" : "F7E1D5694ACE1D9B265861453A597FDB"
}

Last but not least, dbms_developer.get_metadata provides two additional parameters: the information level and the etag of the expected metadata.

The level can be set to BASIC, TYPICAL (the default), or ALL, with each value returning progressively more information. The level names align with existing functionality such as the STATISTICS_LEVEL parameter.

The etag parameter can be used to pass the etag of a metadata document you already have. If the provided etag matches the etag of the current metadata document, the function returns an empty document: there is no difference between the document you already know and the newly generated one, i.e., the object hasn’t been modified. If the etags do not match, the function returns a new document (along with its new embedded etag). This is a neat way to gather information only about objects that have, in fact, changed while ignoring the rest. Of course, the application has to pay attention to the etag values and hold on to them.

Note: Because the etag is calculated over the resulting JSON document, a change in the level parameter also produces a new etag. In other words, the etags of different levels will not match.

SELECT JSON_SERIALIZE(
 DBMS_DEVELOPER.GET_METADATA('MY_TABLE', level => 'BASIC')
 PRETTY) table_info;

TABLE_INFO
________________________________________________
{
  "objectType" : "TABLE",
  "objectInfo" :
  {
    "name" : "MY_TABLE",
    "schema" : "GERALD",
    "columns" :
    [
      {
        "name" : "COLUMN1",
        "notNull" : true,
        "dataType" :
        {
          "type" : "NUMBER",
          "precision" : 38
        }
      },
      {
        "name" : "COLUMN2",
        "notNull" : false,
        "dataType" :
        {
          "type" : "DATE"
        }
      }
    ]
  },
  "etag" : "B644E9475669B3BDE09337481D39E0FB"
}

SELECT JSON_SERIALIZE(
 dbms_developer.get_metadata(
  'MY_TABLE', -- object name
  USER,       -- schema, default USER
  'TABLE',    -- object type, optional
  'BASIC',    -- information level BASIC|TYPICAL|ALL
  'B644E9475669B3BDE09337481D39E0FB' -- etag for the expected metadata as above
  )
 PRETTY) table_info;

TABLE_INFO
_____________
{
}
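
For an application to make use of this, it must extract and store the etag from each document it fetches, then pass it on the next call. A minimal sketch using JSON_VALUE (the query shape is illustrative, not from the original post):

-- Pull the etag out of the returned document so it can be stored for later calls
SELECT JSON_VALUE (
         DBMS_DEVELOPER.GET_METADATA ( 'MY_TABLE', level => 'BASIC' ),
         '$.etag'
       ) current_etag;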

For more information, see: https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_developer1.html

Foreign Function Interface for JavaScript to Call PL/SQL Code Units

The Multilingual Engine (MLE) enables you to load JavaScript modules into Oracle Database and access them in SQL or PL/SQL. From 23.7 onwards, you can use native JavaScript in these modules to call PL/SQL packages.

For example, this calls the built-in DBMS_RANDOM PL/SQL package using JavaScript to generate random numbers between 1 and 100:

CREATE OR REPLACE FUNCTION get_random_number(
  p_lower_bound NUMBER,
  p_upper_bound NUMBER
) RETURN NUMBER
AS MLE LANGUAGE JAVASCRIPT {{
  const { resolvePackage } = await import ('mle-js-plsql-ffi');
  const dbmsRandom = resolvePackage('dbms_random');

  return dbmsRandom.value(P_LOWER_BOUND, P_UPPER_BOUND);
}};
/

SELECT get_random_number (1, 100);

   GET_RANDOM_NUMBER(1,100)
___________________________
            96.683520107954
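
The same resolvePackage pattern works for your own PL/SQL packages, too. Here is a minimal sketch, assuming a hypothetical package tax_pkg with a net_to_gross function (names are illustrative, not part of the original example):

CREATE OR REPLACE PACKAGE tax_pkg AS
  FUNCTION net_to_gross ( p_net NUMBER, p_rate NUMBER ) RETURN NUMBER;
END tax_pkg;
/

CREATE OR REPLACE PACKAGE BODY tax_pkg AS
  FUNCTION net_to_gross ( p_net NUMBER, p_rate NUMBER ) RETURN NUMBER IS
  BEGIN
    RETURN p_net * ( 1 + p_rate );
  END net_to_gross;
END tax_pkg;
/

CREATE OR REPLACE FUNCTION gross_price (
  p_net  NUMBER,
  p_rate NUMBER
) RETURN NUMBER
AS MLE LANGUAGE JAVASCRIPT {{
  // resolve the user-defined package the same way as the built-in one above
  const { resolvePackage } = await import ('mle-js-plsql-ffi');
  const taxPkg = resolvePackage('tax_pkg');

  // PL/SQL parameters are exposed to JavaScript in upper case
  return taxPkg.net_to_gross(P_NET, P_RATE);
}};
/

SELECT gross_price ( 100, 0.2 );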

For more information, see: https://docs.oracle.com/en/database/oracle/oracle-database/23/mlejs/introduction-pl-sql-foreign-function-interface.html

Smallfile Tablespace Shrink

When you drop a table and remove it from the recycle bin, Oracle Database frees up the space the table consumed in its tablespace. In theory, other objects can now use this space. In practice, the released space may be too small for other objects to use. If you drop many tables, this can leave lots of small pockets of space that the database cannot reuse. This is known as fragmentation and can lead to using more disk space than necessary.

Since Oracle Database 23ai, you can reclaim this lost space for BIGFILE tablespaces (the recommended default since 11g) using dbms_space.shrink_tablespace. Starting with 23.7, you can now also reclaim this space in SMALLFILE tablespaces using the dbms_space package. To get an estimate of how much space a shrink will reclaim, you can run it in analyze mode first.

For example, this checks how much space can be reclaimed from the tablespace SMALL_FILE. The report shows its size is currently 0.78GB, and shrinking it can reduce this down to 0.2GB:

set serveroutput on;
execute dbms_space.shrink_tablespace ( 'SMALL_FILE', shrink_mode => DBMS_SPACE.TS_SHRINK_MODE_ANALYZE );

-------------------ANALYZE RESULT-------------------
Total Movable Objects: 29
Total Movable Size(GB): .11
Original Datafile Size(GB): .78
Suggested Target Size(GB): .2
Process Time: +00 00:00:13.453359

Running the shrink after this reorganizes objects in the tablespace and shrinks it down to 0.19GB, a saving of 0.59GB:

execute dbms_space.shrink_tablespace ( 'SMALL_FILE' );

-------------------SHRINK RESULT-------------------
Total Moved Objects: 29
Total Moved Size(GB): .11
Original Datafile Size(GB): .78
New Datafile Size(GB): .19
Process Time: +00 00:02:54.715593 
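
To verify the result yourself, you can compare the datafile sizes before and after the shrink with a standard dictionary query (not part of the original output):

-- Datafile size of the tablespace in GB
SELECT file_name,
       ROUND ( bytes / 1024 / 1024 / 1024, 2 ) size_gb
  FROM   dba_data_files
  WHERE  tablespace_name = 'SMALL_FILE';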

For more information, see: https://docs.oracle.com/en/database/oracle/oracle-database/23/admin/managing-tablespaces.html#GUID-32D286D3-77E0-4A42-BE10-D0E0632CFC06

This blog post was co-authored with Chris Saxon.

This blog post was originally published at: https://blogs.oracle.com/developers/post/whats-new-for-developers-in-oracle-database-23-7
