Asynchronous TDX Transactions

Tabular Data eXchange (TDX) services are an efficient mechanism for importing and exporting large amounts of data to and from a LogicBlox database. We saw in a previous article how to declare and interact with a TDX service to import or export a single CSV file. In this blogpost we will cover lb-web’s transaction service, which allows multiple TDX service requests to be composed into a single database transaction. In particular, we will discuss shortcomings of the synchronous protocol that some of you may be familiar with, and then present the new asynchronous protocol, released in LogicBlox 4.3.1, which addresses those issues.

The Transaction Service

The transaction service can be used to compose multiple service calls into a single database transaction. This is often called a multi-part request. At a high level, clients must follow a simple protocol: start a transaction, place multiple service calls referring to that transaction (the transaction’s parts), and then commit the transaction. All calls to parts are deferred, and only at commit time will a database transaction be composed and sent to the database.

While this is a general protocol, the current lb-web implementation supports only TDX services. As such, a transaction service is currently declared as a delim_txn_service. For example, this code will declare a transaction service at /txn:

service_by_prefix["/txn/*"] = x,
delim_txn_service(x).

The Synchronous Protocol

Let us take a closer look at how a client would interact with the transaction service to import two files in a single database transaction using the synchronous protocol. Let’s assume that the TDX services for /products and /sales are already available. The interaction would be as follows:

1. POST /txn/start
    ← 201 CREATED (Location: /txn/TXN_ID)
2. POST /products?txn=TXN_ID
3. POST /sales?txn=TXN_ID
4. POST /txn/TXN_ID/commit?count=2
    → Workspace transaction executed
    ← 200 OK {to requests 2., 3., and 4.}

This means that a client sends a POST request to /txn/start, which initiates an in-memory transaction in the server and immediately returns with the server-generated TXN_ID. The client then submits 3 requests in parallel, one for each service (2. and 3.) and one to commit the transaction (4.). Only when the server has received the commit request and all count part requests will it create and execute a transaction in the database. After the transaction finishes, requests 2., 3. and 4. receive the corresponding responses in parallel.
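To make the resource cost concrete, here is a minimal Python sketch of the client side of this interaction. This is not the lb-web client API: `send` stands in for whatever callable performs a blocking HTTP POST, and the point is simply that the N part requests plus the commit must all be in flight at once, so N+1 threads stay blocked until the database transaction finishes.

```python
from concurrent.futures import ThreadPoolExecutor

def synchronous_import(send, txn_id, services):
    """Submit all part requests and the commit concurrently.

    `send` is any callable taking (path, params) that blocks until the
    server responds -- in real use it would perform an HTTP POST. Since
    no request returns before the database transaction finishes, we need
    N part threads plus one commit thread alive at the same time.
    """
    n = len(services)
    with ThreadPoolExecutor(max_workers=n + 1) as pool:
        parts = [pool.submit(send, path, {"txn": txn_id}) for path in services]
        commit = pool.submit(send, f"/txn/{txn_id}/commit", {"count": n})
        # All futures resolve together, after the transaction executes.
        return [f.result() for f in parts] + [commit.result()]
```

With a real HTTP `send`, every one of these N+1 connections stays open for the whole duration of the transaction, which is exactly the resource problem discussed below.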


This protocol presents a number of shortcomings:

  • because the requests to services and commit are synchronous, the protocol requires a lot of client-side resources. In lb web-client, we need N+1 connections to perform a request for N files/services, and we need to keep these connections open by blocking N+1 threads.
  • a similar issue occurs on the server side because the server needs to keep the connections open.
  • when one of these N+1 connections has a problem, the only safe measure the server can take is to attempt to abort the transaction (and the workspace transaction, if possible). If the transaction is long-running, the likelihood of some issue occurring is high.
  • besides resource exhaustion and issues with long-running transactions, there is the need to configure the client and server appropriately to balance resources against demand. That is, we need to manually tune the number of threads and connections available to clients and server, and the timeouts on various connection events.
  • the use of the count parameter is problematic because if a part request arrives multiple times (for example, due to a retry), the server may see a set of requests different from the one the client sent.

In the past, we attempted to solve most of these issues at the transport level: instead of using TCP connections, which are synchronous, we used a queue (in particular SQS) to make the requests asynchronous. However, this caused numerous other problems, such as issues with duplicate delivery and visibility timeout management. Therefore, we decided to solve the issues at the protocol level. Enter the asynchronous protocol.

The Asynchronous Protocol

The asynchronous protocol goes as follows:

1. POST /txn/start?async=true
    ← 201 CREATED (Location: /txn/TXN_ID)
2. POST /products?txn=TXN_ID
    ← 201 CREATED* (Location: /txn/TXN_ID/PART_ID)
3. POST /sales?txn=TXN_ID
    ← 201 CREATED* (Location: /txn/TXN_ID/PART_ID)
4. POST /txn/TXN_ID/commit
    (the body of the POST contains JSON with all PART_IDs)
    ← 202 ACCEPTED (Location: /txn/TXN_ID/status)
    → Workspace transaction executed
  • for requests 2. and 3., if they are import requests where the file is sent as a URI to be downloaded (e.g., file://, s3://, gs://, etc.), the server responds with 202 ACCEPTED instead of 201 CREATED to indicate that it will first attempt to download the file before creating the resource.

An asynchronous transaction is initiated by sending the async=true parameter to /txn/start. After that, the main difference with respect to the synchronous protocol is that part requests and the commit request are now asynchronous. Part requests behave like the transaction create request: the server generates a unique PART_ID that can be used to refer to the part. The commit request simply instructs the server to initiate the workspace transaction. It also receives a JSON body with the list of PART_IDs, so the count parameter is not needed anymore.
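A minimal Python sketch of the client-side bookkeeping this implies. The helper names, and the "parts" field in the commit body, are illustrative assumptions: the protocol description above only fixes the Location format and the fact that the commit body is JSON listing the PART_IDs.

```python
import json

def part_id_from_location(location):
    """Extract the PART_ID from a part response's Location header.

    A part response carries Location: /txn/TXN_ID/PART_ID, so the
    PART_ID is the last path segment.
    """
    return location.rstrip("/").rsplit("/", 1)[-1]

def commit_body(part_ids):
    """JSON body for POST /txn/TXN_ID/commit.

    The commit carries the list of PART_IDs in its body; the field name
    "parts" is an assumption for illustration, not the exact wire format.
    """
    return json.dumps({"parts": list(part_ids)})
```

Because the server knows exactly which parts belong to the transaction from this list, a duplicated or retried part request can no longer confuse it the way the count parameter could.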

The previous interaction instructed the server to execute the transaction. The client can now poll the server to find out when the transaction is over:

5. GET /txn/TXN_ID/poll
    ← 200 OK
6. GET /txn/TXN_ID/poll
    ← 303 SEE_OTHER (Location: /txn/TXN_ID)

The 200 OK status means that the request is OK but the transaction is still executing. When poll returns 303 SEE_OTHER, it means that the transaction finished and the client should look up the outcome in that location. The client can then query the outcome (which is actually available from the moment the transaction is started in memory):
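The polling loop can be sketched as follows; `poll` stands in for a callable that performs the GET and returns the status code and Location header (this is an illustrative sketch, not the lb-web client API):

```python
import time

def wait_for_commit(poll, txn_id, interval=1.0, max_polls=100):
    """Poll GET /txn/TXN_ID/poll until the transaction finishes.

    `poll` takes a path and returns (status_code, location). 200 means
    the transaction is still executing; 303 carries the outcome URI in
    the Location header.
    """
    for _ in range(max_polls):
        status, location = poll(f"/txn/{txn_id}/poll")
        if status == 303:
            return location          # e.g. /txn/TXN_ID
        if status != 200:
            raise RuntimeError(f"unexpected status {status}")
        time.sleep(interval)
    raise TimeoutError("transaction did not finish in time")
```

Note that no connection stays open between polls, so a long-running transaction no longer ties up client or server resources.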

7. GET /txn/TXN_ID (Accept: application/json)
    ← 200 OK

The body contains a JSON message with details about the transaction and its parts. This service also supports text/plain and text/html so that this URI can be accessed by browsers to get a textual description of the status.

Finally, it is also possible to query the outcome of each part:

8. GET /txn/TXN_ID/PART_ID (Accept: application/json)
    ← 200 OK
9. GET /txn/TXN_ID/PART_ID (Accept: text/csv)
    ← 200 OK

Request 8. returns a JSON message with detailed information about the part. Request 9. returns the CSV file that was exported for the part, but only if the part was for a local file export (i.e., the server was not instructed to upload the export to a URI).
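The difference between requests 8. and 9. is purely content negotiation on the same URI. A small sketch, using a hypothetical helper to build the path and Accept header:

```python
def part_outcome_request(txn_id, part_id, want_csv=False):
    """Build the path and Accept header for querying a part's outcome.

    application/json yields the detailed part status (request 8.);
    text/csv retrieves the exported file, which only succeeds for
    local-file export parts (request 9.).
    """
    accept = "text/csv" if want_csv else "application/json"
    return f"/txn/{txn_id}/{part_id}", {"Accept": accept}
```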

Currently, transactions stay in lb web-server memory and remain available for querying up to 4 days after they finish. This means that restarting the server discards all transaction records (they are, however, preserved when lb web-server services are (un)loaded). In the future we will offer finer-grained control over the purge policy.

A little bit more RESTful

To follow RESTful principles more closely, the transaction service supports alternative ways of accomplishing certain operations.

Request                  Is equivalent to
POST   /txn              POST /txn/start
PUT    /txn/TXN_ID       POST /txn/TXN_ID/commit
DELETE /txn/TXN_ID       POST /txn/TXN_ID/abort
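To make the equivalences concrete, a small helper (illustrative only, not part of lb-web) could rewrite the RESTful aliases into their canonical POST form:

```python
def canonical_request(method, path):
    """Rewrite the RESTful aliases from the table above into their
    canonical POST form. Requests outside the table pass through."""
    if method == "POST" and path == "/txn":
        return ("POST", "/txn/start")
    if method == "PUT" and path.startswith("/txn/"):
        return ("POST", path + "/commit")
    if method == "DELETE" and path.startswith("/txn/"):
        return ("POST", path + "/abort")
    return (method, path)
```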

Using the asynchronous protocol

While it is useful to understand these protocols, they sit at a low level of abstraction, and users are not expected to write client code against them directly. Instead, several APIs are offered that abstract away the complexities of the protocol, such as connection/thread management and status polling. The lb-web client API for Java was extended to support the asynchronous protocol, and similar abstractions were introduced to the Python API often used for test development.

Finally, using lb-workflow TDX tasks with the asynchronous protocol is as simple as adding the async=true parameter to the task declaration. For example, the following task declaration can be used to import the products.csv and sales.csv files using the asynchronous protocol and the services described above:

    async        = true,
    transport    = "http://localhost:8080",
    txn_service  = "/txn",
    input = {
      "{ service: '/products' file: '/data/products.csv' }",
      "{ service: '/sales'    file: '/data/sales.csv'    }"
    }
Comments

  1. Gary Bake 7 years ago

    I’m trying to use the async method to import a number of files. I need to use the file:/// uri rather than attaching the data as a POST payload due to the files being pretty large.

    Using ‘lb web-client import’ doesn’t return anything so I’m unable to get the PART_ID.

    Is it possible to post a request but use the local file uri?

  2. Author
    Thiago T. Bartolomei 7 years ago

    Hi Gary,

    the async protocol supports the same methods to send files as the previous synchronous protocol: HTTP request body, file URIs and cloud store URIs.

    However, lb web-client’s command line currently does not expose any async functionality. That would require adding commands to start transactions and add parameters to import/export to pass the transaction ID, for example.

    Currently, you need to use Java or Python if you want simple access to the async protocol (you can of course use any HTTP client, too). If you need help with any of these options, let me know.


© Copyright 2023. Infor. All rights reserved.
