Chapter 7. Web Service Configuration

7.1. Introduction

ServiceBlox is a framework for developing and hosting services backed by the LogicBlox database. ServiceBlox services are the interfaces to the LogicBlox database from other application components: user interfaces, data integration tools, or 3rd party applications. For example, a typical service might provide data for a UI component that displays charts or tables, or receive input from a web form and modify the database accordingly. This chapter introduces the implementation and configuration of ServiceBlox services.

ServiceBlox is an extensible framework that comes with a few different types of services that should meet the needs of most applications:

  • Protocol buffer (protobuf) services are HTTP services that are invoked using an HTTP POST request. The request contains a binary protobuf or textual JSON message. The service returns the protobuf or JSON result of the invocation as an HTTP response message. This type of service is similar to other service frameworks that resemble remote procedure calls, such as JSON-based services used in AJAX applications, SOAP, and XML-RPC. In ServiceBlox, the schemas of the request and response messages are precisely specified by a protobuf protocol. Optionally, messages can be encoded as JSON strings, to support access from web browsers. The services can be accessed by any HTTP client, including browsers, curl, or any other application that understands the HTTP protocol and is able to encode and decode protobuf or JSON messages.

  • Tabular data exchange (TDX) services are HTTP services that can be accessed by GET, POST, and PUT requests. TDX is the core service for getting data in and out of the LogicBlox database, in large volume. TDX uses delimited files as the input/output data format. Data is retrieved from the database using GET requests. Using POST requests, data in the database can be updated, and using PUT requests data can be replaced. TDX services are typically used for integration purposes, for example for importing large volumes of sales data or for exporting large volumes of forecast data.

  • Global protobuf services are protobuf services that are implemented by distributing incoming requests to services hosted on other LogicBlox workspaces. The responses from the individual services are merged into a single response of the global service. Global services are useful when data needed for a service is stored in multiple, partitioned workspaces.

  • Proxy services act as a simple proxy for a service hosted on a different machine. Proxy services can be used to require authentication on top of existing unauthenticated services, or can be used to provide access to a distributed service-oriented system on a single host.

  • Custom services are supported as plugins to the ServiceBlox service container. Custom services must implement a set of ServiceBlox Java interfaces. They offer a great deal of flexibility, and are used internally to implement Tabular Data Exchange, Global, and Proxy services. However, they should be used very sparingly, as they complicate the deployment of your LogicBlox-based application. If you find yourself needing a custom service, we recommend that you contact LogicBlox support personnel first to explore all appropriate options before proceeding.

ServiceBlox supports request/response services via HTTP, where the service message as well as the payload are part of an HTTP message. Alternatively, for longer-running services, an asynchronous queue can be used in place of HTTP; for services with large payloads (e.g. importing or exporting a large delimited file), AWS S3 objects can be used to transfer the payload. Support for HTTP, queues, and S3 is built into ServiceBlox; selecting the right mechanism for a given service is a matter of configuration.

ServiceBlox supports different authentication methods. The methods have different strengths: some are designed for use from the relatively hostile environment of a browser, others for non-browser applications running in the controlled environment of a machine.

7.2. General Configuration

7.2.1. Anatomy of a Service

At the most basic level, a 'service' in ServiceBlox is made up of the following parts:

  • A URI or URI prefix - This is the part of the URI that a user would enter in their browser after the domain; in other words, the /example portion of a request URI.

  • An HTTP method - Accessing /bars with an HTTP GET is a different service from accessing /bars with an HTTP POST. NOTE: Many service handlers set defaults for the allowed methods if you do not specify them.

  • A Handler - This identifies the type of the service, by naming the type of handler that will service requests. For more information on handlers, see the section on Handlers. Typically the user will just specify the name of a built-in handler (e.g. protobuf, delimited-file, etc.).

There are many other configurable options for a service such as the authentication realm, workspace, handler specific options, etc. that will be covered in the sections related to those features.
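Conceptually, these three parts form a dispatch key: the sketch below models a registry keyed by (URI prefix, HTTP method) that resolves to a handler name. This is hypothetical illustration code, not the ServiceBlox implementation; the class and method names are made up.

```python
# Sketch: a service is identified by (URI prefix, HTTP method) and resolved
# to a handler name. Illustrative only, not ServiceBlox API.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # (prefix, method) -> handler name

    def register(self, prefix, method, handler):
        self._services[(prefix, method.upper())] = handler

    def resolve(self, path, method):
        # Longest matching prefix wins, so "/bars/2024" finds the "/bars" service.
        candidates = [
            (prefix, handler)
            for (prefix, m), handler in self._services.items()
            if m == method.upper() and path.startswith(prefix)
        ]
        if not candidates:
            return None
        return max(candidates, key=lambda c: len(c[0]))[1]

registry = ServiceRegistry()
registry.register("/bars", "GET", "delim")      # same prefix, different method:
registry.register("/bars", "POST", "protobuf")  # a distinct service
print(registry.resolve("/bars/2024", "GET"))
```

The point of the model is that the prefix alone does not identify a service; the method is part of the key.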

7.2.2. Service Configuration in Workspaces

Most services are configured in the workspace with which the service will exchange data. By default, when ServiceBlox starts it scans all available workspaces and handles all services for which it finds a specification. This involves opening and executing queries in multiple workspaces, which can be time-consuming. It is possible to control workspace scanning in two important ways. First, the server uses a regular expression to match the names of workspaces to be scanned. This is controlled by the scan_workspaces configuration option, which defaults to .*, essentially matching any workspace. Second, setting the scan_workspaces_on_startup option to false instructs the server to avoid scanning workspaces altogether during startup. The lb web-server command line can then be used to dynamically load and unload services specified in particular workspaces and branches, without restarting ServiceBlox. For example, lb web-server load-services -w myWorkspace scans myWorkspace and loads services from the specifications in this workspace.
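The effect of the scan_workspaces option can be modeled with a small sketch that filters workspace names by the regular expression, as the server does at startup. The workspace names here are made up.

```python
import re

def workspaces_to_scan(names, scan_workspaces=".*"):
    """Return the workspaces whose names fully match the scan_workspaces
    regex, mimicking how the server selects workspaces to scan at startup."""
    pattern = re.compile(scan_workspaces)
    return [n for n in names if pattern.fullmatch(n)]

names = ["delim-sales", "protobuf-time", "scratch"]
print(workspaces_to_scan(names))                        # default .* matches all
print(workspaces_to_scan(names, "delim-.*|protobuf.*")) # restricted scan
```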

Detailed documentation of the schema used to configure a service in a workspace is provided for each kind of service in its respective section. In all cases, however, the specification of a service in LogiQL is a rule that declares the existence of an entity whose type is a subtype of lb:web:config:service, sets some attributes of this entity, and associates the service with a prefix and HTTP method.

    service_by_prefix_and_method["/time/", "POST"] = x,
    default_protobuf_service(x) {
      protobuf_protocol[] = "time",
      protobuf_request_message[] = "Request",
      protobuf_response_message[] = "Response"
    }.

One setting common to all services is the disabled status. A service can be put in disabled mode, which instructs ServiceBlox to return the specified HTTP status code to clients that try to access the service. This status can be changed at runtime, either to enable the service or to change the disabled status code, through the lb web-server disable-services and lb web-server enable-services commands.

7.2.3. Static Services / Workspaces

A static service is a service hosted by ServiceBlox that is not configured in a real LogicBlox workspace, but instead via JSON. We recommend defining your services in LogiQL whenever possible.

You can set up a static workspace and services by specifying the JSON files to load in your configuration file:

static_service = /path/to/static_service.json
some_other_static_service = /path/to/other_static_service.json

The JSON would look like the following:


    "handler" : "my-handler",
    "prefix"  : "/my-static-service",
    "http_method": "POST",
    "request_protocol"  : "myprotocol",
    "request_message"   : "Request",
    "response_protocol" : "myprotocol",
    "response_message"  : "Response"

7.2.4. Service Groups

ServiceBlox supports service groups to provide the flexibility to host different versions of a service at a single URI and to facilitate authentication enforcement. It is possible, for example, to provide authenticated and non-authenticated versions of a service at the same URI, but on different ports. Suppose we want to expose the time service in authenticated and non-authenticated versions at the same URI. These would be the steps:

  1. Services can declare the group to which they belong. A group is a simple string that identifies a set of services. By default, services belong to the special group lb-web:no-group. Here, we declare a non-authenticated version of the time service at the /time URI in the public group, and an authenticated version at the same URI in the private group.

    service_by_group["/time", "public"] = x,
    default_protobuf_service(x) {
      protobuf_protocol[] = "time"
    }.

    service_by_group["/time", "private"] = x,
    default_protobuf_service(x) {
      protobuf_protocol[] = "time",
      auth_realm[] = "realm_name"
    }.
  2. Endpoints define how the web server communicates with clients. There are, for example, TCP endpoints that use TCP sockets, and queue endpoints that use SQS or RabbitMQ queues. They are declared in endpoint sections of lb-web-server.config. Endpoints can declare the groups that they host and can enforce that all services in those groups must be authenticated. In our example, we create endpoint configurations to host the public and private groups: the clear TCP endpoint hosts the public group; the ssl TCP endpoint hosts the private group. Furthermore, the ssl endpoint declares that it requires authentication, which makes the ServiceBlox server verify that all services that declare themselves to be in the private group indeed have authentication support.

    [tcp:clear]
    port = 8080
    groups = public

    [tcp:ssl]
    ssl = true
    port = 8443
    groups = private
    requires_authentication = true

With this configuration in place, clients accessing the /time URI via TCP port 8080 will be directed to the non-authenticated version of the service. A client accessing /time via TCP port 8443 will be directed to the authenticated version instead.

To support backwards compatibility, service and endpoint groups are optional. If a service does not declare a group, it automatically belongs to the special group lb-web:no-group. Similarly, if an endpoint does not declare the groups it hosts, it automatically hosts only the group lb-web:no-group.

This is a summary of how the ServiceBlox server interprets the group configuration of services and endpoints.

  • A service can belong to one or more groups; an endpoint hosts one or more groups. The special lb-web:no-group group is assigned to services and endpoints that do not declare groups. Endpoints can explicitly list lb-web:no-group alongside other groups.

  • For each endpoint, service prefixes must be unambiguous. That is, among the services hosted by an endpoint (taken from all the groups it hosts), no two services may declare the same prefix. The ServiceBlox server issues a warning and ignores all conflicting services.

  • If a service belongs to a group that is not hosted by any endpoint (including the lb-web:no-group group), the server issues a warning and the service is not hosted.

  • If a service without authentication belongs to a group hosted by an endpoint that requires authentication, the server issues a warning and the service is not hosted on that particular endpoint (but it may be hosted on other endpoints).
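The rules above can be sketched as a small decision procedure for a single endpoint. This is a hypothetical model, not ServiceBlox code; the function and field names are made up.

```python
NO_GROUP = "lb-web:no-group"

def hosted_services(services, endpoint_groups, requires_auth=False):
    """services: list of dicts with 'prefix', 'groups', 'authenticated'.
    Returns the prefixes this endpoint hosts, applying the rules above:
    group overlap, the authentication requirement, and prefix uniqueness."""
    groups = set(endpoint_groups) if endpoint_groups else {NO_GROUP}
    candidates = []
    for s in services:
        s_groups = set(s.get("groups") or [NO_GROUP])
        if not s_groups & groups:
            continue                      # no group in common: not hosted here
        if requires_auth and not s["authenticated"]:
            continue                      # warning: unauthenticated service skipped
        candidates.append(s["prefix"])
    # Conflicting prefixes are ignored entirely (with a warning).
    return sorted(p for p in candidates if candidates.count(p) == 1)

services = [
    {"prefix": "/time", "groups": ["public"], "authenticated": False},
    {"prefix": "/time", "groups": ["private"], "authenticated": True},
    {"prefix": "/status", "groups": None, "authenticated": False},
]
print(hosted_services(services, ["public", NO_GROUP]))
```

Note how the third service, which declares no group, is hosted only by endpoints that list lb-web:no-group.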

7.2.5. Admission Control

By default, ServiceBlox services requests as soon as they are submitted, and will issue as many concurrent requests as there are worker threads to service those requests. For a mix of read-only and update services, this can sometimes result in poor performance, depending on the type of concurrency control used by the workspace, and the transaction times resulting from the services. In some cases, it is desirable to have ServiceBlox order the execution of the services, for example, such that read-only requests are run concurrently, while update requests are run exclusively. Besides resulting in better performance from avoiding transaction aborts, this can also result in performance gains from disabling concurrency control on the workspace.

This can be achieved by configuring services to use specific admission queues. All requests to services that belong to a queue are handled according to the queue's policy. Admission queues are declared in lb-web-server.config. To configure a service to use an admission queue, set the AdmissionQueue key of the service_parameter option to the queue name. For instance, the following service configuration shows two services using the admission queue named my_exclusive_writes; one is read-only while the other can update the state of the workspace.

block(`service_config) {

  service_by_prefix["/readonly-service"] = x,
  service_parameter[x, "AdmissionQueue"] = "my_exclusive_writes",
  default_protobuf_service(x) {
    protobuf_protocol[] = "time",
    protobuf_encoding[] = "binary",
    protobuf_request_message[] = "Request",
    protobuf_response_message[] = "Response"
  }.

  service_by_prefix["/update-service"] = x,
  service_parameter[x, "AdmissionQueue"] = "my_exclusive_writes",
  default_protobuf_service(x) {
    protobuf_protocol[] = "settime",
    protobuf_request_message[] = "Request",
    protobuf_response_message[] = "Response"
  }.

} <-- .

The policy that governs the admission queue behavior is determined by the HandlerExecutor implementation declared for the queue. ServiceBlox currently provides two implementations: QueuedHandlerExecutor implements an exclusive-writes policy, and SerializerHandlerExecutor implements a serializer policy. The following excerpt from an lb-web-server.config declares two queues using these implementations.

# declaration of 'my_exclusive_writes' queue with exclusive-writes policy
classname = com.logicblox.bloxweb.QueuedHandlerExecutor

# declaration of 'my_serializer' queue with serializer policy
classname = com.logicblox.bloxweb.SerializerHandlerExecutor

The serializer policy simply serializes the execution of requests, such that they execute one-by-one, in the order they are received. It also accepts a sleep parameter that causes the queue to wait a number of milliseconds between request executions.

The exclusive-writes policy allows read-only requests to execute concurrently but gives write requests exclusive access. In detail:

  • Requests are submitted for execution in the order received.

  • Requests for read-only services can run concurrently.

  • Requests for update services wait until all currently running requests are complete.

  • Requests for read-only services are only submitted for execution after any currently running write requests are complete.
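As an illustration of these rules, the following sketch groups a request sequence into "waves" that could run concurrently under the exclusive-writes policy. It is a deterministic model of the policy, not the actual QueuedHandlerExecutor.

```python
def admission_waves(requests):
    """requests: list of ('read'|'write', name) pairs in arrival order.
    Groups requests into waves that may run concurrently under the
    exclusive-writes policy: consecutive reads share a wave, and every
    write runs in a wave of its own."""
    waves = []
    current = []
    for kind, name in requests:
        if kind == "write":
            if current:
                waves.append(current)     # flush pending reads first
            waves.append([name])          # a write gets exclusive access
            current = []
        else:
            current.append(name)          # reads accumulate in the same wave
    if current:
        waves.append(current)
    return waves

reqs = [("read", "r1"), ("read", "r2"), ("write", "w1"), ("read", "r3")]
print(admission_waves(reqs))
```

Here r1 and r2 run concurrently, w1 runs alone after they finish, and r3 waits for w1 to complete.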

7.2.6. Service Configuration Reference

This section presents a reference to the configuration options of the service interface. Each service type has extensive options that are covered in the corresponding sections of the Core Reference Manual.

Service types
  delim_service (lb-web:config:delim)
      Relational data exchange service hosted by the workspace that contains the configuration of the service.

  default_protobuf_service (lb-web:config:protobuf)
      Default protobuf service hosted by the workspace that contains the configuration of the service.

  global_protobuf_service (lb-web:config:global_protobuf)
      Global protobuf service.

  exact_proxy (lb-web:config:proxy)
      Exact proxy service.

  transparent_proxy (lb-web:config:proxy)
      Transparent proxy service.

Configuration options applicable to all services
  service_prefix (string, required)
      Path of the URL where the ServiceBlox service container will make the service available.

  service_description (string, optional)
      Human-readable description of the service. Currently displayed only in the meta-service.

  auth_realm (string, optional)
      Name of the realm used for authenticating users of this service (see the authentication section for further details).

  custom_handler (string, optional)
      Name of a custom handler for this service. The handler needs to be configured in the configuration file lb-web-server.config as a section [handler:name].

  disabled_status (uint[32], optional)
      Sets the service to be disabled. The value defines the status code that will be returned to clients that try to access the disabled service.

  service_parameter (string, string; optional)
      Key/value pairs that are passed as parameters to service handlers.

  lazy (optional)
      Declares that the service should only be initialized when it receives its first request. By default, services are initialized as soon as they are loaded from the workspace. Initialization may be expensive, for example if it requires additional transactions to look up data, so making services lazy can speed up server startup by delaying initialization.

  sync_mode (string, optional)
      One of sync, async, strict_sync or strict_async. If not specified, the sync mode is inherited from the handler's sync_mode. For more details, see the specific section about asynchronous service configuration.

  group (string, optional)
      The group to which the service belongs. The default is the unnamed group.

  ignore_authentication_requirement (optional)
      Does not require authentication even if the service is placed into a group that requires authentication.

  service_operation (string, optional)
      The operation for this service, for use in role-based authorization. If an authenticated user does not have a Role that contains a Permission for this Operation, they are denied access. See the authorization module of the credentials service or the reference manual for more information.

Configuration options on workspace services
  inactive_block_name, inactive_after_fixpoint_block_name (string, optional)
      Name of an inactive block to be executed when serving a request to this service. The variants execute the block before or after fixpoint.

  readonly (optional)
      Marks a service as read-only, which means that database transactions executed by this service will use a read-only setting.

  service_host_workspace (string, optional)
      The name of the workspace that is the recipient of the service invocation. By default, the host workspace is the workspace in which the service is declared. This option allows services to be declared by one workspace but executed by another.

7.2.7. Config Files

The ServiceBlox server and client can be configured statically using configuration files. The format of ServiceBlox configuration files follows the HierarchicalINIConfiguration of the Apache Commons Configuration project. In essence, they are INI files with named sections and a global section (the variables defined before the first section). For example, the following simple config file defines a global variable and a section with two local variables:

global_variable = some_value

[section]
section_variable_1 = some_value
section_variable_2 = some_value

At startup, the initial configuration is created by composing the available configuration files. The ServiceBlox server first loads the default lb-web-server.config file that is included in the LogicBlox distribution. Next, it loads the lb-web-server.config file found in $LB_DEPLOYMENT/config, if it exists. Finally, it loads any configuration file passed as a parameter through the command line.

When a file is loaded, it overrides existing config variables, so it is possible to refine the default configuration. Composition is at the level of variables, but sections are used to identify variables. For example, if the following configuration file is loaded after the previous example, both global_variable and section_variable_1 will be refined, but section_variable_2 will stay the same. Note that the configuration loading process for lb web-client is the same, but using lb-web-client.config instead.

global_variable = new_value

[section]
section_variable_1 = new_value
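The override semantics can be sketched with Python's standard configparser. The real implementation uses Apache Commons HierarchicalINIConfiguration; configparser requires the global variable to sit under [DEFAULT], which is a limitation of this sketch, not of ServiceBlox.

```python
import configparser

def compose(*config_texts):
    """Load config files in order; a later file overrides individual
    variables while leaving untouched variables intact."""
    cfg = configparser.ConfigParser()
    for text in config_texts:
        cfg.read_string(text)   # read_string merges, overriding existing keys
    return cfg

base = """
[DEFAULT]
global_variable = some_value
[section]
section_variable_1 = some_value
section_variable_2 = some_value
"""
override = """
[DEFAULT]
global_variable = new_value
[section]
section_variable_1 = new_value
"""
cfg = compose(base, override)
print(cfg["section"]["section_variable_1"], cfg["section"]["section_variable_2"])
```

After composition, section_variable_1 has the new value while section_variable_2 keeps its original value, matching the behavior described above.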

ServiceBlox supports a number of variables that can be used in config files. A variable reference has the form $(VARNAME). The following variables are currently supported and are substituted when a config file is loaded:

CONFIG_DIR The fully qualified file path for the directory containing the config file being loaded.
LB_DEPLOYMENT_HOME The contents of the environment variable.
LOGICBLOX_HOME The contents of the environment variable.
HOME The contents of the environment variable.
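The substitution step can be modeled as a simple regex rewrite. This is illustrative only; the default config_dir value and the pass-through behavior for unknown variables are assumptions of the sketch.

```python
import os
import re

def substitute(text, config_dir="/etc/lb"):
    """Replace $(VARNAME) references the way ServiceBlox does when a config
    file is loaded. config_dir stands in for the directory of the file
    being loaded; the other values come from the environment."""
    values = {
        "CONFIG_DIR": config_dir,
        "LB_DEPLOYMENT_HOME": os.environ.get("LB_DEPLOYMENT_HOME", ""),
        "LOGICBLOX_HOME": os.environ.get("LOGICBLOX_HOME", ""),
        "HOME": os.environ.get("HOME", ""),
    }
    # Unknown variables are left untouched in this sketch.
    return re.sub(r"\$\(([A-Z_]+)\)",
                  lambda m: values.get(m.group(1), m.group(0)), text)

print(substitute("service_login = $(CONFIG_DIR)/login_service_config.json"))
```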

The lb web-server tool has commands that allow certain aspects of the configuration to be modified dynamically, that is, after the ServiceBlox server has been started. The install-config command takes a config file in the ServiceBlox config format and composes it with the currently running configuration.

$ lb web-server load --config my.config  

Currently it is only possible to load new handlers and static workspaces (see the Server configuration section for details on the section types that are interpreted by the server). The install-handler command is a subset of install-config in that it only loads new handlers and ignores other sections.

One problem with using these commands is that some sections may need to reference files, such as the class of a custom handler. In order for ServiceBlox to locate these files, they need to have a fully qualified path, which is not portable. ServiceBlox solves this problem with the install-jar command.

$ lb web-server load --jar CustomHandlerRelease.jar 

The jar file contains all files that may need to be referenced. Furthermore, it contains an lb-web-server.config file in its top-level directory. The ServiceBlox server will open the jar and load this config file. The advantage of this approach is that all paths in the config file are resolved relative to the jar file, so it becomes simple to reference custom handlers, for example, in a portable manner.
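The path-resolution idea can be sketched as follows. This is illustrative; resolve_jar_relative is a made-up helper, not a ServiceBlox API.

```python
from pathlib import PurePosixPath

def resolve_jar_relative(jar_path, config_value):
    """Interpret a path from a config file bundled inside a jar relative
    to the jar's own directory, so the bundle stays portable."""
    p = PurePosixPath(config_value)
    if p.is_absolute():
        return str(p)                      # absolute paths are left alone
    return str(PurePosixPath(jar_path).parent / p)

print(resolve_jar_relative("/opt/handlers/CustomHandlerRelease.jar",
                           "lib/MyHandler.class"))
```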


When the web server first starts up, it reads the internal configuration file, which is where all the default values are set. The file also contains comments and commented-out entries for all available options, and is therefore a good first place to look for most of your configuration questions. We include the text of the file here for your convenience.



# Whether lb services should start lb web-server or not
enabled = true

# How long lb services should wait for the web server to start before timing out, in seconds
startup_timeout = 60

# Arguments to the JVM instance
jvm_args = -Xmx3200m -Xss2048k

# The directory where the web server will write memory dumps on Out Of Memory Errors
# Set to empty to disable dumps
jvm_dump_dir = $LB_DEPLOYMENT_HOME/logs/current

# Whether the web server should scan workspaces for services at startup
scan_workspaces_on_startup = true

# A regular expression to match workspaces to scan at startup; only used if
# scan_workspaces_on_startup is true, and has no effect on static workspaces
#   Example: delim-.*|protobuf.* scans only workspaces starting with "delim-"
#   or "protobuf", besides installing static workspaces.
scan_workspaces = .*

# Number of threads to use when scanning workspaces for services
service_scanning_threads = 1

# A list of directories to scan for handler jars. Equivalent to calling lb web-server load -j on
# each jar with the exception that it is not allowed to have configuration conflicts between
# options in the lb-web-server.config files stored inside the jars.
handler_dirs = lib/java/handlers, ../measure-service/lib/java/handlers


# Directory for access logs. If this property is empty, the access log will be printed to stderr.
logdir_access = $(LB_DEPLOYMENT_HOME)/logs/current

# Directory for general log files
logdir = $(LB_DEPLOYMENT_HOME)/logs/current

#  The following should not be used in production as they affect performance.

# Print all protobuf request and response messages to the log
log_messages = false

# Print all HTTP messages in protobuf format to the log
log_http = false

# Print all connectblox requests to the log
log_connectblox = false

# Log very detailed info when an exception is detected
log_exception_details = false

# Enable or disable logging of rules generated for TDX services
log_delim_file_rules = false


# Set the web server debug flag to log all events (very noisy).
debug = false

# Do not delete temporary files created by TDX.
debug_keep_tdx_files = false


# Set to true to force authentication to use secure cookies
secure_cookies = false

# Set to true to force authentication to use HttpOnly cookies
httponly_cookies = false

# Session ids are stored here so that user sessions survive restarts of the web server
authentication_cache = $(LB_DEPLOYMENT_HOME)/authentication_cache

# Path of directory where encryption keys are kept
keydir = $(HOME)/.s3lib-keys


# Set the maximum size of a form post, to protect against DOS attacks from large forms (4MB)
max_form_content_bytes = 4194304

# Number of threads to use to handle requests
http_server_threads = 250

# Whether or not a client disconnection should abort the transaction it spawned
# Currently only works for protobuf services.
tcp_abort_on_disconnect = true


# Should all messages submitted to the message queues be handled locally (false), or should the web
# server assume that there are other instances handling requests that are not supported by this
# web server instance (true)
mq_distributed = false

# Set the global SQS queue to use for queue transports
sqs_endpoint =

# These are configuration options for the HttpClient used internally by the web server when
# connecting to other services (e.g. when acting as a proxy).

# Size of thread pool for HttpClient (shared among all clients)
tcp_client_threads = 100

# Max time the whole exchange can take (in ms)
#tcp_timeout = 320000

# Max time to wait when establishing a connection to the server (in ms)
#tcp_connect_timeout = 75000

# Max time in a connection without processing anything (in ms).
# Processing is defined to be "parsing or generating".
#tcp_idle_timeout = 20000

# Max number of connections httpclient can hold to an address
#tcp_max_connections_per_address = 100



# The s3:default configuration is currently used for all S3 access from services. It is a separate
# configuration section because we might later allow multiple S3 configurations.
[s3:default]
# Concurrency of general management of S3 downloads.
max_concurrent = 50

# Number of concurrent uploads/downloads for S3
s3_max_concurrent = 10

# Size of chunks for multipart uploads to S3
chunk_size = 5242880

# iam_role = default
# access_key = ...
# secret_key = ...


[tcp:public]
port = 8080
address =
# comma-separated list of groups allowed in this endpoint; if omitted, defaults to lb:web:no-group
groups = lb:web:public, lb:web:no-group
# Attempt to use an inherited channel, usually created by systemd or inetd
#inherited = true

# Set setMaxIdleTime on SocketConnector, which is similar to setting
# SO_TIMEOUT on a socket. 0 means no timeout.
tcp_server_max_idle_time = 200000

[tcp:internal]
port = 55183
address =
groups = lb:web:internal, lb:web:no-group

# Set setMaxIdleTime on SocketConnector, which is similar to setting
# SO_TIMEOUT on a socket. 0 means no timeout.
tcp_server_max_idle_time = 0

# [tcp:public-ssl]
# ssl = true
# port = 8443
# address =
# groups = authenticated, public
# requires_authentication = true
# keystore_path = $(LB_DEPLOYMENT_HOME)/config/keystore
# truststore_path = $(LB_DEPLOYMENT_HOME)/config/keystore
# keystore_password = password
# truststore_password = password
# keymanager_password = password


#request_queue_name = lb-web-sample-request
#response_queue_name = lb-web-sample-response
#access_key = ...
#secret_key = ...


#request_queue_name = bloxweb-test-request
#response_queue_name = bloxweb-test-response
#endpoint = amqp://localhost:5673


classname = com.logicblox.bloxweb.DefaultProtoBufHandler

classname = com.logicblox.bloxweb.delim.DelimitedFileHandler
tmpdir = /tmp

classname = com.logicblox.bloxweb.delim.DynamicDelimitedFileHandler
tmpdir = /tmp

classname = com.logicblox.bloxweb.delim.DelimTransactionHandler

classname = com.logicblox.bloxweb.proxy.TransparentProxyHandler

classname = com.logicblox.bloxweb.proxy.ExactProxyHandler

classname = com.logicblox.bloxweb.GlobalProtoBufHandler

classname = com.logicblox.bloxweb.AdminHandler

classname = com.logicblox.bloxweb.authentication.LoginHandler

# The wait handler just waits. It is only useful for performance testing.
classname = com.logicblox.bloxweb.WaitHandler

#  EXTENSION HANDLERS (will be extracted in the future)

# The bcrypt handler converts a clear text password into a hashed password on requests that update
# passwords; otherwise, it behaves like a normal protobuf service.
classname = com.logicblox.bloxweb.authentication.BCryptCredentialsHandler

# Handler to report on the currently authenticated user.
classname = com.logicblox.bloxweb.authentication.UserInfoHandler

classname = com.logicblox.bloxweb.authentication.ChangePasswordHandler

classname = com.logicblox.bloxweb.authentication.ResetPasswordHandler

classname = com.logicblox.bloxweb.authentication.ConfirmResetPasswordHandler

classname =


service_login = $(CONFIG_DIR)/login_service_config.json
available = true

#config = configuration.json
#request_protocol = request_protocol.descriptor
#response_protocol = response_protocol.descriptor


class = com.logicblox.bloxweb.authentication.PasswordBCryptAuthenticationMechanism
stateful = true
realm_option_session_key_prefix = lb-session-key
realm_option_seconds_to_live = 1800
mechanism_option_credential_service = /admin/credentials
user_cache_size = 100
user_cache_seconds_until_expire = 1800

class = com.logicblox.bloxweb.authentication.SignatureAuthenticationMechanism
stateful = false
realm_option_header_regex = ^x-logicblox-.*$
realm_option_required_headers = Date Digest
realm_option_max_request_time_skew = 900000
mechanism_option_allowed_signature_algorithm = SHA512withRSA
mechanism_option_credential_service = /admin/credentials
user_cache_size = 100
user_cache_seconds_until_expire = 1800

class = com.logicblox.bloxweb.authentication.saml.SAMLMechanism
stateful = true
realm_option_session_key_prefix = lb-session-key
realm_option_seconds_to_live = 1800
user_cache_size = 100
user_cache_seconds_until_expire = 1800

Services can be configured to be hosted on different endpoints. Typically, this means setting different TCP ports, addresses or service groups. See the Service Group section above for a common use case of configuring new endpoints with different service groups. See the Transport Methods section for details on setting up queue endpoints.

By default, the lb web server has two TCP endpoints configured. One, running on port 55183, is for internal services; the credentials services run on this endpoint, which is only available within the server's local network. The other endpoint, [tcp:public], is where most standard services are served.

You can also secure endpoints so that only authenticated users can access them. Declaring that an endpoint requires authentication makes the ServiceBlox server verify that all services using that endpoint have authentication support.


Handlers are the specific Java classes that are used to handle the different types of service requests. ServiceBlox comes with quite a few handlers already installed, such as a login handler, a tabular data handler, etc. Typically, handlers do not need to be configured unless you are implementing your own custom handler.

If you are implementing your own custom handler, it is recommended that you place a config file in the root of your handler jar. This config file should contain just the configuration for the new handler. For example:

        [handler:my-handler]
        classname = com.logicblox.bloxweb.myhandler.MyHandler

You can then specify that a particular service should use this handler in your service configuration.

        service_by_prefix["/my-service"] = x,
        protobuf_service(x) {
          custom_handler[] = "my-handler"
        }


Realms are a way of grouping users by authentication policy. You can then restrict services to different authentication realms. ServiceBlox supports several types of authentication policies / realms. For more information on authentication and the different types of authentication policies, please see the Authentication section.

ServiceBlox comes with three default realm types installed: default-password, default-signature and default-saml. Each of these can be configured via a configuration file with things such as session timeout and other more policy specific options. For more information about how to configure these different realms, please see the relevant subsection of Authentication.


  # This configuration file documents the default configuration options
  # for the lb web-client command-line tool. Do not modify these
  # settings, instead either:
  # a) Make a configuration file
  #    LB_DEPLOYMENT_HOME/config/bloxweb-client.config that customizes
  #    the configuration. In this file, only the custom overrides need
  #    to be configured. This file is automatically loaded when
  #    executing bloxweb-client
  # b) Use the --config option of the
  #    bloxweb-client commands to specify a bloxweb-client.config file.

  # Path of directory where encryption keys are kept.
  keydir = $(HOME)/.s3lib-keys

  # Name of encryption key to use for s3 uploads. By default not configured.
  # keyname = ...

  batch_max_concurrent = 50

  # Number of concurrent uploads/downloads for S3
  s3_max_concurrent = 10

  # Size of chunks for multipart uploads to S3
  s3_chunk_size = 5242880

  # Set the global SQS queue to use for queue transport
  sqs_endpoint =

  # The s3:default configuration is used if no --config option is used
  # on the s3 command used in the batch specification. To change the
  # defaults, specify either the access/secret keys or an IAM role in a
  # custom bloxweb-client.config file.
  # iam_role = default
  # access_key = ...
  # secret_key = ...

  # TCP configurations can specify a host and port, which will be used
  # for relative URLs. The default tcp configuration would typically not
  # have such a configuration.
  # host = ...
  # port = ...

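As a sketch of approach (a) above, a minimal override file sets only the options that differ from these defaults. The values here are arbitrary examples; the option names are the ones documented in the listing above:

```ini
# LB_DEPLOYMENT_HOME/config/bloxweb-client.config
# Only overrides need to appear; everything else keeps its default.
s3_max_concurrent = 20
s3_chunk_size = 10485760
```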

Environment variables:

  • LB_WEBCLIENT_JVM_ARGS can be used to configure the JVM arguments of all lb web-client invocations. Example:

    export LB_WEBCLIENT_JVM_ARGS="-Xmx200m"

7.3. Asynchronous Services

ServiceBlox allows for asynchronous service calls, so long-running transactions can be processed in the background, with no need for an open TCP connection between client and server.

To enable asynchronous requests, the sync_mode configuration is used at the service level:

      service_by_prefix["/my-service"] = x,
      protobuf_service(x) {
        custom_handler[] = "my-handler",
        sync_mode[] = "async"
      }

Or sync_mode at the handler level:

      classname = com.logicblox.bloxweb.myhandler.MyHandler
      sync_mode = async

Below is the list of acceptable values:

  • strict_sync: only synchronous calls are accepted.
  • sync: the service allows both synchronous and asynchronous requests, with synchronous as the default.
  • async: the service allows both synchronous and asynchronous requests, with asynchronous as the default.
  • strict_async: only asynchronous calls are accepted.

If the handler is configured as strict_sync or strict_async, the service cannot override this value; otherwise, the service configuration takes precedence over the handler. If no mode is defined at the service level, the handler configuration is used, and if no mode is defined at the handler level, strict_sync is assumed.
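
The precedence rules can be summarized with a small illustrative function. This is a sketch of the rules as described above, not ServiceBlox code:

```python
def effective_sync_mode(handler_mode=None, service_mode=None):
    """Resolve the effective sync mode per the precedence rules above."""
    # If no mode is defined at the handler level, strict_sync is assumed.
    if handler_mode is None:
        handler_mode = "strict_sync"
    # Strict handler modes cannot be overridden by the service.
    if handler_mode in ("strict_sync", "strict_async"):
        return handler_mode
    # Otherwise the service configuration takes precedence over the handler.
    return service_mode if service_mode is not None else handler_mode
```

For example, a handler configured as sync combined with a service configured as async yields async, while a strict_async handler wins regardless of the service setting.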

7.4. Authentication

By default, ServiceBlox services do not require user authentication. They can, however, be restricted to authenticated users if the specification includes an authentication realm that ServiceBlox has access to. Just like a service, an authentication realm is specified in a workspace according to the ServiceBlox protocol.

ServiceBlox comes with two default realm configurations, which are types of realms that can be instantiated in workspaces. The use of realms is advised to avoid potential security problems and help with future compatibility issues. The two realm configurations are default-password, for stateful authentication with usernames and passwords, and default-signature, for stateless authentication with RSA-SHA512 signatures. ServiceBlox also supports SAML Authentication to rely on third parties to authenticate users.

7.4.1. Glossary

  • Authentication Realm: A collection of credentials used to identify users of a service. A realm can be shared by several services.

  • Authentication Realm Configuration: The specification of a set of parameters defining the characteristics of an Authentication Realm.

  • Authentication Mechanism: A Java class implementing methods to manage credentials in an Authentication Realm.

  • Stateful authentication: An authentication scheme where users provide credentials (aka log in) and obtain a session key which will authenticate them with the service.

  • Stateless authentication: An authentication scheme where users present credentials at each request.

7.4.2. Credentials Database and Service

ServiceBlox contains a library that implements local storage of all user related data, such as usernames, passwords, public keys, email addresses and locale. This library can be used in applications by including the library lb_web_credentials in a project.

The lb web-server command-line tool has support for administration of users. It supports the following commands:

$ lb web-server import-users users.dlm
$ lb web-server export-users users.dlm
$ lb web-server set-password john.smith
enter password for user 'john.smith':
$ lb web-server list-users

The library will cause the workspace that includes it to host the following delimited-file service:

  • /admin/users for files with headers:

    • USER - required

    • DEFAULT_LOCALE - optional

    • EMAIL - optional

    • ACTIVE - optional

    • PASSWORD - optional

    • PUBLIC_KEY - optional

The PASSWORD column is required to be a bcrypt hash of the clear text password. Importing plain text passwords to these services will not work. We enforce the usage of separate bcrypt hashing to discourage transferring and storing files with plain text passwords. Passwords can be hashed using:

$ echo "password" | lb web-client bcrypt

Or by giving the lb web-client bcrypt command a full file:

$ cat passwords.txt
$ lb web-client bcrypt -i passwords.txt

The credentials database can be configured to treat usernames as case sensitive or insensitive. In case-sensitive mode, the library guarantees that usernames are unique in the database; in case-insensitive mode, usernames are guaranteed to be unique after being canonicalized to lowercase, so usernames that differ only in case are not allowed.

Case-insensitive authentication is the default configuration. This can be changed by pulsing the username_case_sensitive() flag, for example, from the command line:

$ lb exec test '+lb:web:credentials:database:username_case_sensitive().'
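
The uniqueness rule in case-insensitive mode can be sketched as follows (illustrative only, not ServiceBlox code):

```python
def usernames_conflict(a, b, case_sensitive=False):
    """True if the two usernames would collide in the credentials database."""
    if case_sensitive:
        return a == b
    # Case-insensitive mode canonicalizes usernames to lowercase first.
    return a.lower() == b.lower()
```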

7.4.3. Stateful Authentication using Passwords and Sessions

ServiceBlox comes with a default realm configuration for authentication using passwords.

  • The realm configuration uses a session key that is stored in a cookie that expires after a reasonable amount of time. The default name of the session key is "lb-session-key_" plus the realm name. This value can be configured by the realm declaration.

  • The realm configuration stores passwords using the secure bcrypt hashing algorithm with salt. It uses a service to retrieve the credentials from a workspace. The service is by default /admin/users. This service can currently not be proxied to a different machine, but support for this is planned.

To use the default password realm configuration, configure a service as follows:

block(`service_config) {

    service_by_prefix["/time"] = x,
    default_protobuf_service(x) {
      auth_realm[] = "realm_name"
    },

    realm_by_name["realm_name"] = x,
    realm(x) {
      realm_config[] = "default-password",
      realm_session_key[] = "time_session_key"
    }

} <-- .

This will declare a realm called "realm_name" using the default password configuration. In order to be authenticated in this realm, a client needs to send a request to the login service of ServiceBlox. The login service is a ServiceBlox service that accepts JSON or protobuf requests defined by the following protocol:

message Request {
  required string realm = 1;
  optional string username = 2;
  optional string password = 3;
  optional bool logout = 4;
}

message Response {
  required bool success = 1;
  optional string key = 2;
  optional string exception = 3;
}
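
For example, a JSON encoding of a login Request, and the handling of its Response, can be sketched as follows. The field values are placeholders; only the field names come from the protocol above:

```python
import json

# A JSON-encoded login request for a realm named "realm_name".
login_request = json.dumps({
    "realm": "realm_name",
    "username": "john.smith",
    "password": "s3cret",
})

# A successful response carries the session key to store in the cookie.
login_response = json.loads('{"success": true, "key": "abc123"}')
assert login_response["success"]
session_key = login_response["key"]
```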

Upon successful login, a key is returned to the client to be stored in a cookie. Since this realm overwrites the default session key name, the cookie will be named "time_session_key" (instead of the default "lb-session-key_realm_name"). In the server, a security context is created which will allow subsequent requests with this cookie or with an HTTP header of the same name to access services in this realm.

The security context will expire when more than seconds_to_live seconds elapse without a request using it, or after a call to the login service with the key for the context (as a cookie or HTTP header) and the attribute logout set to true.

7.4.4. Stateless Authentication using RSA-SHA

For services that are not accessed from a browser, but instead are used from non-browser applications deployed on some machine, LogicBlox advises using an authentication mechanism called RSA-SHA512 (for reference, the closely related HMAC-SHA1 and RSA-SHA1 are more commonly used, but these have weaker security properties and key-management complications).

Clients of the web service compute a string-to-sign based on a hash of the content of a request and some important aspects of the HTTP headers, such as the HTTP method, query-string, etc. The goal of this string-to-sign is to be specific enough so that the HTTP request cannot be altered by an attacker to send a critically different request to the server with the same signature. Clients of the web service compute a signature using their private key and the string-to-sign. This signature is passed to the web-service in the standard header used for HTTP authentication. Although we do use SSL sockets, the signature does not expose any secret information if intercepted.

On the server-side, the web service computes the same string-to-sign and verifies with the public key of the user that the client originally did use the correct private key to sign the request. We also verify that the date of the request is within a certain skew (15 minutes by default).
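
The idea of the string-to-sign can be illustrated as follows. The exact fields and their order here are assumptions for illustration; consult the ServiceBlox client library for the authoritative format:

```python
import hashlib

def string_to_sign(method, path, query, date, body):
    """Build a string covering the request aspects that must not be forgeable."""
    # Hashing the body lets the signature cover the content compactly.
    content_hash = hashlib.sha512(body).hexdigest()
    return "\n".join([method, path, query, date, content_hash])

# Any change to the request (here, the body) yields a different string,
# and hence a different RSA-SHA512 signature that the server would reject.
a = string_to_sign("POST", "/my-service", "", "Mon, 01 Jan 2024 00:00:00 GMT", b"{}")
b = string_to_sign("POST", "/my-service", "", "Mon, 01 Jan 2024 00:00:00 GMT", b"{ }")
```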

This method of authentication and ensuring message integrity is similar to the authentication methods of many mainstream, highly security-sensitive web services, notably Google Apps and the AWS S3 REST authentication scheme (HMAC-SHA1). LogicBlox uses asymmetric cryptography (public/private keys) rather than symmetric cryptography (a shared secret key on both client and server), as is the case in HMAC-SHA1. This reduces the problem of secure key management, because our servers only contain the public key of the client.


Generate an RSA key-pair as follows:

$ openssl genrsa -out priv.pem 2048
$ openssl rsa -in priv.pem -out pub.pem -pubout
$ openssl pkcs8 -topk8 -inform PEM -outform PEM -in priv.pem -out priv-pkcs.pem -nocrypt

The priv-pkcs.pem key is intended for deployment on the client-side (the server of a client using the ServiceBlox service). The ServiceBlox client library supports reading this private key format directly. The pub.pem file should be deployed on the server hosting the service to authenticate the user.

See the protobuf-auth-rsa sample in lb-web-samples/ for a complete example.

7.4.5. Customizing Authentication Realms and Mechanisms

An authentication realm is an instance of an authentication realm configuration, and it is uniquely identified by its name in a ServiceBlox instance. If multiple realms are configured with the same name, a warning is raised and the realms are not hosted. The realm accepts several configuration options, which are listed in the realm configuration section of lb-web-server.config. For example, this code can be used to set the session timeout for a default-password realm to be one hour:

realm_by_name["time_auth_proxy"] = x,
realm(x) {
  realm_config[] = "default-password",
  realm_option["seconds_to_live"] = "3600"
}

7.4.6. Using Authenticated User Information in Services

The authenticated user for a service call is available in the pulse predicate lb:web:auth:username[] = username.

The service can be configured in two different ways to populate the lb:web:auth:username predicate. The first method (the default) relies on the imported HTTP control information (which also includes the URL and headers). In this configuration, computation of the lb:web:auth:username predicate happens in active logic, based on HTTP control predicates. This means that the authenticated user is not available in inactive blocks.

The second method directly populates the predicate at the initial stage of the transaction. This method allows entirely disabling the import of the HTTP control information, which can help with the transaction throughput of services. To use this method, configure the service as follows (replacing protobuf_service with the service type used):

protobuf_service(x) {
  service_parameter["enable_http_control"] = "false",
  service_parameter["enable_pulse_auth"] = "true"
}

Security Considerations.  The authenticated user information should only be trusted if the service is actually authenticated. For services that do not have authentication configured, clients can claim to be any user, because the information is based on an HTTP header (x-logicblox-username). The authentication facilities initialize this information for each request.

Supported Services.  The authenticated user is available to protobuf as well as data exchange services. If a proxy service is authenticated, then the authenticated user information is forwarded to the target service. Custom service handlers need to make sure to import the HTTP request information to make this information available.

7.4.7. SAML Authentication

SAML is an XML standard for applications to securely communicate authentication and authorization information. The most common application of SAML is browser single sign-on (SSO), which lets web applications use a trusted, external application to manage the information of users and execute the authentication procedure. The web application is commonly called the service provider, or SP. The external application that manages users and executes authentication is called the identity provider, or IDP. The key advantage of single sign-on is that the service provider never gets to see the secret credentials of the user, and user information does not need to be separately maintained for every web application.

ServiceBlox has native support for SAML 2.0 authentication. SAML is really a collection of different authentication methods, though, and not all of these methods make sense for ServiceBlox. The remainder of this paragraph clarifies the level of support for SAML experts. ServiceBlox only uses the Authentication Request Protocol. ServiceBlox supports the Redirect (urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect) and POST (urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST) bindings. By default, ServiceBlox submits the authentication request using an HTTP redirect and instructs the identity provider to use an HTTP POST to the assertion consumer service hosted by ServiceBlox.

SAML authentication is enabled by including a [saml:somename] section in the lb-web-server.config file. The SAML section supports the following configuration options:

Required settings

  • request: Path of URL for triggering the initial authentication request to the identity provider. This is the URL that users need to visit to trigger authentication. For example, if the path is configured to be /sso/request, then on a local machine the user would need to visit http://localhost:8080/sso/request. From this URL they will be redirected to the identity provider. If the user was already authenticated (for example with a cookie for the identity provider), then the identity provider will normally immediately send the user back to the application. Due to the automatic sequence of redirects, this entry point can be configured as the front page of the application.

  • response: Path of URL that the identity provider will use to confirm successful authentication. It is important that this setting is consistent with the assertion_consumer_service configuration.

  • redirect: URL to redirect the user to after successful authentication. For testing purposes it can be helpful to use /login/user?realm=test, which shows the current username.

  • realm: Name of the authentication realm that this SAML configuration will authenticate users to. Services that use this realm can be accessed by the user after successful authentication. The realm is separately configured in the workspace as a normal ServiceBlox authentication realm. It has to use the realm configuration default-saml.

  • meta: Path of URL for hosting XML metadata for the service provider (an EntityDescriptor) that can be used by some identity providers to register the service provider.

  • alias_idp: Alias name given to the identity provider. This alias is used to find a certificate (.cer format) for the identity provider. It is only used by ServiceBlox and is not used in the exchange of information with the identity provider.

  • alias_sp: Alias name given to this service provider. This alias is used to find public/private keys (.pem format) for the service provider. It is only used by ServiceBlox and is not used in the exchange of information with the identity provider.

  • entity_id_idp: Identity for the identity provider. Depending on the registration procedure with the identity provider, this can normally be found in the XML metadata for the identity provider, as the entityID attribute of the EntityDescriptor element.

  • entity_id_sp: Identity for the service provider. This is the name used by the identity provider to refer to the service provider, and is typically agreed upon in some way during the registration procedure of the service provider with the identity provider. In simple cases this can be identical to the alias_sp. Some identity providers have specific requirements for the id, such as URIs, in which case the entity_id_sp cannot be the same as the alias_sp.

  • sso_service: URL hosted by the IDP that a user will be redirected to after visiting the request URL explained previously. This can normally be found in the XML metadata of the identity provider. Be careful to select the Location for the redirect binding, for example: <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location=""/>

  • assertion_consumer_service: Full URL that the IDP will use to confirm successful authentication. The path of the URL should be the same as the response path, unless a proxy is used that rewrites paths.

  • attribute_map: List of pairs, where the first element is the local identifier to use to refer to a certain attribute of a user. The second element is the standard object identifier, which is the recommended way to refer to attributes returned by the IDP to the SP. The local identifier uid is special in ServiceBlox and will be used as the username. This is the only required attribute. See the example configuration below for how to determine what attributes are available.

Optional settings

  • keydir: Path of directory where encryption keys and certificates are kept. Defaults to the global ServiceBlox key directory.

  • binding_idp: Whether to use GET (Redirect) or POST to submit the SAML request to the identity provider. Note that this must match the binding exposed by the sso_service endpoint configured above. The default is Redirect.

  • binding_sp: Whether to request the IDP to use GET (Redirect) or POST to submit the SAML response to the SP. The default is POST.

  • force_authn: If true, the request sent to the IDP will set the force_authn flag, which forces the IDP to ask the user for authentication credentials. The default is false, in which case an older ongoing IDP session may be reused.

  • include_name_id_policy: If true, include the NameIDPolicy element in the AuthnRequest. This is an optional field that may cause issues with some IDPs. The default is true.

Log level for the SAML libraries. Possible values are:

  • off (default) - no logging.

  • info - print the response attribute map.

  • message - same as info and additionally print all exchanged XML messages.

  • debug - same as message and additionally print decrypted assertions. Do not use in production.

  • signature_algo: The SAML signature algorithm to use. sha1, sha256 and sha512 are supported, with sha512 as the default.

In addition to the configuration options, the following cryptographic keys are needed:

  • alias_idp.cer - Certificate of the IDP. The certificate is used to verify that authentication confirmations really do originate from the intended identity provider. The certificate is not a secret, and is usually available in the XML metadata of the identity provider. This file should contain something similar to:

    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

  • alias_sp.pem - Private key for the service provider. This is used to sign the request to the identity provider. The identity provider uses a certificate generated from this key (see next item) to validate that the authentication request does indeed originate from the registered service provider. This file should contain something similar to:

    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----

  • alias_sp.cer - Certificate file for the service provider. The format is the same as the certificate for the identity provider.

Testing SAML with TestShib

This section describes step by step how to use a free online SAML testing service called TestShib with ServiceBlox. Clearly this service should never be used in any actual application, but it is useful as an exercise in deploying SAML, without having to install and configure an identity provider as well.

Choose an Alias.  First, decide on an alias for your web application, which we will refer to as the service provider (SP). This example will use logicblox-abc from now on. Do pick a different alias, because the TestShib testing service will use this as the account name, and different users of this guide would interfere with each other!

Private Key.  We begin by generating a private key using the s3lib-keygen tool (which uses OpenSSL). This private key is used by the service provider to sign requests to the identity provider, which can in this way confirm that authentication requests only originate from approved service providers.

$ s3lib-keygen logicblox-abc

Certificate.  While the SAML standard could have chosen to simply provide the public key of the key pair to the identity provider, the standard uses certificates to obtain additional evidence about the identity of the service provider. Therefore, we next need to create a certificate for the private key. In this example we use a self-signed certificate. Note that the certificate is never used in a browser, so there will not be a problem with the recent trend of severe browser warnings for self-signed certificates. The openssl tool will ask for a few pieces of information, which do not matter for this example. Simply hitting enter is sufficient.

$ openssl req -new -x509 -key logicblox-abc.pem -out logicblox-abc.cer -days 1095

If you later want to inspect certificate files (either the one just generated, or the one from the IDP that we will obtain later), then you can use openssl as well:

$ openssl x509 -in logicblox-abc.cer -text -noout

Key Deployment.  Copy the logicblox-abc.pem and logicblox-abc.cer files to your standard key directory, which normally is $HOME/.s3lib-keys.

$ cp logicblox-abc.cer logicblox-abc.pem ~/.s3lib-keys

Collect IDP Metadata.  Before we can configure ServiceBlox, we need to collect some information on the identity provider. The TestShib configuration instructions link to an XML document that describes the TestShib services: testshib-providers.xml. The XML document contains two EntityDescriptors: an identity provider (for testing service providers, which is what we are doing here), and a service provider (for testing identity providers, which is not the purpose of this guide). We need to collect three pieces of information:

  • The entityID of the IDP, which can be found in this XML tag: <EntityDescriptor entityID="">. We will use this value in the configuration of ServiceBlox.

  • The URL to redirect users to when authentication is needed, which can be found in this XML tag: <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location=""/>. We will use this value in the configuration of ServiceBlox.

  • The certificate of the IDP, which the service provider will use to validate that it is not being tricked by somebody posing as the IDP. The certificate can be found as the first <KeyDescriptor> in the <IDPSSODescriptor> tag. You need to copy the content of <ds:X509Certificate>...</ds:X509Certificate> and create a file testshib.cer that looks like the following example (which is actually the testshib certificate). Make sure that the file is formatted exactly in this way.

    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

    You can check if the certificate is saved correctly by printing the certificate info:

    openssl x509 -in testshib.cer -text -noout

    This should include the following: Issuer: C=US, ST=Pennsylvania, L=Pittsburgh, O=TestShib. Now copy this certificate to the standard key directory as well:

    $ cp testshib.cer ~/.s3lib-keys

Minimal ServiceBlox Configuration.  Now we have all the information needed to configure ServiceBlox. Create a configuration file lb-web-testshib.config (or alternatively put this directly in LB_DEPLOYMENT_HOME/config/lb-web-server.config).

request = /sso/request
response = /sso/response
redirect = /login/user?realm=test
realm = test
meta = /sso/metadata

alias_idp = testshib
alias_sp = logicblox-abc

entity_id_sp = logicblox-abc
entity_id_idp =

sso_service =
assertion_consumer_service = http://localhost:8080/sso/response

attribute_map = uid urn:oid:0.9.2342.19200300.100.1.1

Except for attribute_map, all the configuration settings have been discussed in the previous steps. The attribute_map here is only an initial attempt. The urn:oid:0.9.2342.19200300.100.1.1 is the standard object identifier for user identifiers, and is used by all SAML providers. The IDP will offer more information on the user, but these attributes are not formally documented, so we need to trigger an authentication request before we can discover them. Some SAML identity providers publish this information in their documentation or metadata.

Start ServiceBlox. 

$ lb web-server start lb-web-testshib.config

Register Service Provider with TestShib.  ServiceBlox is now hosting an XML metadata file at the configured meta path. This file can be used to register the service provider with TestShib. Download this file and store it as logicblox-abc.xml. You can do this either using a browser or with the following command.

$ curl http://localhost:8080/sso/metadata > logicblox-abc.xml

On the TestShib website the service provider can now be registered with this XML file. Visit the metadata upload form for this and upload the logicblox-abc.xml file.

First Authentication.  Everything is now set up to attempt the first authentication. Point your browser at http://localhost:8080/sso/request. This will redirect to the TestShib website, where you can log in with one of the suggested accounts. Pick the username and password myself. After confirming, the browser will go back to the ServiceBlox-hosted application. Most likely, you will now see an error Could not find realm : test. This indicates that the SAML request was processed correctly, but that there is no authentication realm test. This error occurs simply because we did not cover hosting an actual service with a realm named test. While the error is perhaps unsatisfying, it means that the SAML configuration was successful. Note that SAML supports keeping track of the relay state so that users can be redirected to a specific resource upon authentication. For example, if we had initiated our request using http://localhost:8080/sso/request?RelayState=/some/resource, we would have been redirected to /some/resource after authentication succeeded. If the RelayState parameter is not used, ServiceBlox falls back to the redirect configuration attribute as defined above.

Configuring more attributes.  In the lb-web-server.log or the terminal (depending on how you started the LogicBlox web-server), there are two tables printed as the result of this authentication request. The first table corresponds to the actual configured attribute_map, which currently only contains uid.

| name                           | value                                       |
| uid                            | myself                                      |

The second table lists all attributes returned by the IDP.

| oid                                | friendly name              | value                                       |
| urn:oid:0.9.2342.19200300.100.1.1  | uid                        | myself                                      |
| urn:oid:   | eduPersonAffiliation       | Member                                      |
| urn:oid:   | eduPersonPrincipalName     |                         |
| urn:oid:                    | sn                         | And I                                       |
| urn:oid:   | eduPersonScopedAffiliation |                         |
| urn:oid:                   | givenName                  | Me Myself                                   |
| urn:oid:   | eduPersonEntitlement       | urn:mace:dir:entitlement:common-lib-terms   |
| urn:oid:                    | cn                         | Me Myself And I                             |
| urn:oid:  | eduPersonTargetedID        | <saml2:NameID Format="urn:oasis:names:tc... |
| urn:oid:                   | telephoneNumber            | 555-5555                                    |

Based on this information we can now extend the attribute_map configuration to have more attributes that are relevant to the service provider. For this, modify lb-web-testshib.config to use the following setting for attribute_map:

attribute_map = \
  uid urn:oid:0.9.2342.19200300.100.1.1 \
  cn urn:oid: \
  phone urn:oid: \
  sn urn:oid: \
  givenName urn:oid:

Note that ServiceBlox currently does not support exposing user attributes to service implementations, so configuring more attributes is currently not very useful, but we expect this to be supported soon. After restarting the LB web server, the next authentication request will show a more detailed attribute table:

| name                           | value                                       |
| uid                            | myself                                      |
| phone                          | 555-5555                                    |
| sn                             | And I                                       |
| cn                             | Me Myself And I                             |
| givenName                      | Me Myself                                   |

7.4.8. Google Sign-In

Google Sign-In allows web applications to sign in users with their existing Google accounts. As with SAML above, user information does not need to be separately maintained for every web application.

ServiceBlox supports authenticating users with their existing Google accounts into a realm. It can then authorize services for those users just like any other natively defined user. To enable this functionality, create a project in Google Developer Console, add a credential, configure a consent screen and create a client ID. This client ID will be used for configuring the lb-web server and also the client side log-in UI code.

The lb-web server is configured by including a [google-signin:my-app-name] section in the lb-web-server.config file. Replace 'my-app-name' in the section title with the name of the project created in the Google Developer Console. The following configuration options are available under this section:

Required settings

| Configuration      | Description                                                                 |
| client_id          | The client ID that Google issues from their Developer Console. It must match the ID used by the client side to generate the ID token. |
| realm              | The realm into which the user will be authenticated if Google Sign-In succeeds. |
| allowed_domains    | Google sends the authenticated user's email to the lb-web server. This setting allows administrators to restrict the domains that can log into the realm. If it is empty or commented out, any domain is allowed; otherwise, only emails belonging to one of these domains are allowed. |

Optional settings

| Configuration      | Description                                                                 |
| credential_service | An absolute URL to a credentials service. If defined, new users will be created in this credentials service for users who authenticated successfully but did not yet have an entry. Note that the username will be the user's email, and no password will be set. |
| roles              | A list of user role names. If a credentials service is set (above) and roles are set, these roles will be assigned to any new users that authenticate via this Google Sign-In configuration. Note that the roles must already exist in the credentials database configured above. |
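Putting these options together, a minimal configuration section might look as follows. This is a sketch: the client ID, realm, domain, credentials-service URL, and role name below are placeholder values, not working credentials.

```
[google-signin:my-app-name]
# Client ID issued by the Google Developer Console (placeholder)
client_id = 1234567890-abc123.apps.googleusercontent.com
# Realm into which authenticated users are signed in
realm = default
# Only allow sign-in with emails from these domains; empty allows any domain
allowed_domains = example.com

# Optional: create entries for new users in a credentials service,
# and assign them a pre-existing role
#credential_service = http://localhost:8080/admin/credentials
#roles = user
```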

After configuring the lb-web server as described above, the application's web client should implement Google Sign-In in its page, as explained at Google Sign-In for Websites. Then, in the sign-in success event handler, invoke the service /google-signin?idtoken= passing the ID token received from Google Sign-In as the idtoken parameter.

7.5. Authorization

ServiceBlox supports a flavor of Role-Based Access Control (RBAC) for securing services.

7.5.1. Definitions


A Role can be thought of as a behavior or job function that can be assigned to a user or group of users (e.g. "Buyer" or "Admin"). A user can be assigned many roles, and a role can be assigned a set of Permissions.

A Permission is an approval of access to a given function or resource (e.g. "Manage Order"). A permission is assigned to one or many Roles, and consists of a set of Operations.

An Operation is a specific business or application function (e.g. "Cancel Order" or "Create User"). An Operation is assigned to one or many Permissions. A service can have only one Operation.

7.5.2. Configuration

You can enable RBAC by assigning your services to operations. This is done by specifying the service_operation relation for a service, like so:

service_by_prefix["/sell_product"] = x,
default_protobuf_service(x) {
  protobuf_protocol[] = "sample_authorization",
  protobuf_request_message[] = "SellRequest",
  protobuf_response_message[] = "SellResponse",
  auth_realm[] = "default",
  service_operation[] = "Sell Product"
}.

The rest of the configuration is done via data in the workspace. The lb:web:credentials:authorization module defines predicates that map users to roles, roles to permissions, and permissions to operations. The credentials service comes equipped with TDX services for loading this data. The data is then queried out any time the credentials workspace is re-scanned, typically after server startup or calling lb web-server load-services.

Services that fail authorization (because a user does not have a role that has a permission for the operation defined for the service) will return an HTTP status code of 403.

File Bindings

file_binding_by_name["lb:web:credentials:authorization:permissions"] = fb,
file_binding(fb) {
  file_binding_definition_name[] = "lb:web:credentials:authorization:permissions",
  file_binding_entity_creation[] = "accumulate",

  predicate_binding_by_name["lb:web:credentials:authorization:permission_description"] =
    predicate_binding(_) {
      predicate_binding_columns[] = "PERMISSION, DESCRIPTION"
    }
}.

file_binding_by_name["lb:web:credentials:authorization:operations"] = fb,
file_binding(fb) {
  file_binding_definition_name[] = "lb:web:credentials:authorization:operations",
  file_binding_entity_creation[] = "accumulate",

  predicate_binding_by_name["lb:web:credentials:authorization:operation_description"] =
    predicate_binding(_) {
      predicate_binding_columns[] = "OPERATION, DESCRIPTION"
    }
}.

file_binding_by_name["lb:web:credentials:authorization:roles"] = fb,
file_binding(fb) {
  file_binding_definition_name[] = "lb:web:credentials:authorization:roles",
  file_binding_entity_creation[] = "accumulate",

  predicate_binding_by_name["lb:web:credentials:authorization:role_description"] =
    predicate_binding(_) {
      predicate_binding_columns[] = "ROLE, DESCRIPTION"
    }
}.

file_binding_by_name["lb:web:credentials:authorization:role_permission_mappings"] = fb,
file_binding(fb) {
  file_binding_definition_name[] = "lb:web:credentials:authorization:role_permission_mappings",
  file_binding_entity_creation[] = "none",

  predicate_binding_by_name["lb:web:credentials:authorization:role_permissions"] =
    predicate_binding(_) {
      predicate_binding_columns[] = "ROLE, PERMISSION"
    }
}.

file_binding_by_name["lb:web:credentials:authorization:permission_operation_mappings"] = fb,
file_binding(fb) {
  file_binding_definition_name[] = "lb:web:credentials:authorization:permission_operation_mappings",
  file_binding_entity_creation[] = "none",

  predicate_binding_by_name["lb:web:credentials:authorization:permission_operations"] =
    predicate_binding(_) {
      predicate_binding_columns[] = "PERMISSION, OPERATION"
    }
}.

file_binding_by_name["lb:web:credentials:authorization:user_role_mappings"] = fb,
file_binding(fb) {
  file_binding_definition_name[] = "lb:web:credentials:authorization:user_role_mappings",
  file_binding_entity_creation[] = "none",

  predicate_binding_by_name["lb:web:credentials:authorization:user_roles"] =
    predicate_binding(_) {
      predicate_binding_columns[] = "USER, ROLE"
    }
}.

TDX Services

service_by_prefix[authorization_prefix[] + "/permissions"] = x,
delim_service(x) {
  delim_file_binding[] = "lb:web:credentials:authorization:permissions"
}.

service_by_prefix[authorization_prefix[] + "/operations"] = x,
delim_service(x) {
  delim_file_binding[] = "lb:web:credentials:authorization:operations"
}.

service_by_prefix[authorization_prefix[] + "/roles"] = x,
delim_service(x) {
  delim_file_binding[] = "lb:web:credentials:authorization:roles"
}.

service_by_prefix[authorization_prefix[] + "/role_permissions"] = x,
delim_service(x) {
  delim_file_binding[] = "lb:web:credentials:authorization:role_permission_mappings"
}.

service_by_prefix[authorization_prefix[] + "/permission_operations"] = x,
delim_service(x) {
  delim_file_binding[] = "lb:web:credentials:authorization:permission_operation_mappings"
}.

service_by_prefix[authorization_prefix[] + "/user_roles"] = x,
delim_service(x) {
  delim_file_binding[] = "lb:web:credentials:authorization:user_role_mappings"
}.
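As a sketch of the data these services exchange, a delimited file posted to the /role_permissions service would carry the ROLE and PERMISSION columns declared in its predicate binding. The role and permission names below are made-up examples:

```
ROLE,PERMISSION
Admin,Manage Order
Buyer,Manage Order
```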

7.6. Aborting Transactions

ServiceBlox supports a mechanism in which you can abort a transaction and report a desired HTTP status code and debug message. The most obvious use case for this feature is to enable developers on the platform to enforce custom authorization schemes.

In order to cause a transaction abort and return a status code and message, you must delta into the lb:web:abort:error_response predicate, which is defined as:

error_response(code, msg) -> int(code), string(msg).

This will in turn cause a constraint violation which will abort the transaction in a way that ServiceBlox can interpret and then return the status code and message.
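For example, a custom authorization scheme might abort with a 403 when a check fails. In this sketch, only lb:web:abort:error_response comes from ServiceBlox; request_is_authorized stands in for a hypothetical application-defined predicate:

```
// Abort the transaction and report HTTP 403 to the client
// if the (application-defined) authorization check fails.
+lb:web:abort:error_response(403, "custom authorization check failed") <-
  !request_is_authorized().
```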

Currently this functionality only works on protobuf services, but support for TDX services will be added soon. For the message to be returned to the browser, the server must be in debug mode (this is for security reasons).

7.7. Transport Methods

7.7.1. Using Amazon Simple Storage Service (S3)

Normally, an HTTP request contains a request line, headers, and the content of the request. For POST and PUT requests to tabular data exchange services, the content would be the CSV data. For files that are extremely large, this is not a good idea, because a connection failure would force re-doing the entire upload. Also, a file might already be available in highly redundant distributed storage, which makes it undesirable to send the data directly to a service from the client.

ServiceBlox supports transferring large data files separate from HTTP requests. This makes it possible to have an interface that resembles the REST architecture even when the data files exceed the volume of data one would normally want to transfer over a single socket in a single request.

ServiceBlox uses a custom HTTP header x-blox-content-uri for this. The idea of the HTTP header is that the content is not actually in the body of the HTTP request, but is stored as an S3 object located at the specified URI. The additional HTTP headers of the request or response still apply to the content at this URI (e.g. the content-type header of the HTTP request would specify the content-type of the S3 object). The header x-blox-response-content-uri is used to indicate that a response is expected not as-is in the HTTP response, but at a certain URI. The content-uri support is applicable to all transport mechanisms, so it can be used on TCP requests as well as queue requests.

Users do not need to be aware of the HTTP headers. ServiceBlox client APIs and the lb web-client tool internally use the headers to communicate with the LB web server when importing and exporting delimited files using S3.

ServiceBlox uses a high performance S3 library developed by LogicBlox, called s3lib, to upload and download files to and from S3. The library can also be used through a separate command-line tool called cloud-store. The s3lib library uses encryption keys to encrypt data stored in S3. Because cloud-store, lb web-client and the LB web server use s3lib, it is necessary to set up s3lib encryption keys before using any of those tools with S3. The following section explains how to manage the encryption keys used by s3lib.

Managing s3lib keys

The s3lib library uses asymmetric encryption keys (public/private RSA keys) to encrypt and decrypt data that is stored in S3. Data is encrypted locally prior to uploading to S3, and it is decrypted locally after downloading, so it is never transferred in clear text. The asymmetric encryption method is used to simplify key management: the uploader of a file only needs access to the public key (the private key is still needed to decrypt upon download).

Encryption is currently not an optional feature, so before the tools can be used, encryption keys need to be generated and configured. The s3lib library uses a directory of .pem text files to store the encryption keys. On EC2 instances, this directory should be in an in-memory file system to protect the keys from being available on disk. The s3lib tool and ServiceBlox manage encryption keys by a key alias, which is the name of the .pem file in the key directory. It is important that key aliases are unique and consistent across machines, otherwise decryption will fail and data-loss will occur. A key pair can be generated using:

$ cloud-store keygen -n mykey

This command generates a file mykey.pem under ~/.s3lib-keys by default (it can be changed with the --keydir option). The key alias is thus mykey. The file contains the public as well as the private key, in the following format:

-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----

For upload clients that only need the public key, the private key section can be removed from the file.

By default, s3lib uses the directory ~/.s3lib-keys for encryption keys, but all commands and tools provide options to change this default.

Using cloud-store

cloud-store is a command-line tool built around s3lib that can interact with both S3 and Google Cloud Storage (GCS). It has three major commands, which allow you to upload a local file to S3/GCS, to download an S3/GCS file, and to verify whether an S3/GCS file exists. These commands take a command-line option --keydir to change the default directory where the keys are stored. They also accept a variety of command-line options to configure the retry behavior and influence performance. To review these options, use:

$ cloud-store upload --help
$ cloud-store download --help
$ cloud-store exists --help

cloud-store needs AWS credentials to access S3. Currently, it uses the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_KEY, or falls back to credentials delivered via the EC2 meta-data service (usually set using EC2 instance roles).

For GCS access, currently, the following environment variables are required: GOOGLE_APPLICATION_CREDENTIALS, GCS_XML_ACCESS_KEY and GCS_XML_SECRET_KEY. The first one should point to a JSON file, generated in Google Developers Console, that defines the GCS service credentials. The other two should be used for accessing the GCS interoperable XML API.

After credentials have been set up and the mykey key has been generated and stored in ~/.s3lib-keys, cloud-store can upload a file as follows:

$ cloud-store upload s3://bucket/AS400.jpg -i AS400.jpg --key mykey # S3
$ cloud-store upload gs://bucket/AS400.jpg -i AS400.jpg --key mykey # GCS

The upload command will encrypt the file using mykey and then will send the encrypted file to the S3/GCS object. The cloud-store also attaches meta-data to the object to identify the key with which it is encrypted. To download the file back:

$ cloud-store download s3://bucket/AS400.jpg -o AS400-2.jpg  # S3
$ cloud-store download gs://bucket/AS400.jpg -o AS400-2.jpg  # GCS

Note that the download command automatically determines what key to use to decrypt from the meta-data attached to the S3/GCS object.

Using S3 in ServiceBlox

The following sections describe how to configure the ServiceBlox components that can use S3. In all cases it is assumed that encryption keys were already generated according to the Managing s3lib keys section.

LB web server

The LB web server can be configured to access S3 to retrieve content of requests and to store responses. Just like cloud-store, it needs AWS credentials and needs access to the directory where the keys are stored. These configurations are set with an [s3:default] section in lb-web-server.config. For example, the following section defines the directory holding encryption keys and the credentials to access AWS:

[s3:default]
keydir = ...
access_key = ...
secret_key = ...
#iam_role =
#env =

Note that keydir could have been specified outside the [s3:default] section because variables of the global section are inherited by sections (see details in ServiceBlox Configuration). Also, instead of explicitly specifying the access and secret keys, it is possible to use AWS Identity and Access Management (IAM) by setting the iam_role variable (the value of the variable is ignored and the default role is used). Finally, setting the env variable requests the LB web server to load access and secret keys from environment variables.

After setting AWS credentials and keydir, the LB web server is ready to use S3 on requests. When the server is instructed with the x-blox-content-uri HTTP header to download a file from S3, it will determine the key to decrypt the file from the meta-data stored in the S3 object. It will search for the key in keydir. When the server needs to store response content in S3 due to a x-blox-response-content-uri HTTP header, it expects an additional HTTP header called x-blox-response-content-enckey. This header defines the alias of the key to use to encrypt the file before sending to S3 (the alias is also stored in S3 meta-data).
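As a sketch, a TDX import request using these headers might look as follows; the /import-sales service, bucket name, and key alias are hypothetical:

```
POST /import-sales HTTP/1.1
Content-Type: text/csv
x-blox-content-uri: s3://my-bucket/sales.csv
x-blox-response-content-uri: s3://my-bucket/import-result.csv
x-blox-response-content-enckey: mykey
```

Here the request body stays empty: the server fetches the CSV content from the first URI, and encrypts the response with the mykey alias before storing it at the second URI.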

ServiceBlox Client

The ServiceBlox client accepts the same configuration options as the server, but in the lb-web-client.config file instead. Additionally, it supports the keyname variable in the global section. This variable defines the key alias to use by default for the commands export-delim (to encrypt the exported file) and import-delim (to encrypt bad records if requested; the decryption key for the file to import is determined by meta-data). The key to use in these commands can be overridden with the command line option --key.
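As a sketch, a client configuration combining these settings might look as follows (values elided as in the server example; keyname sits in the global section, while the S3 settings mirror the server's [s3:default] section):

```
keyname = mykey

[s3:default]
keydir = ...
access_key = ...
secret_key = ...
```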

ServiceBlox Batch Language

The ServiceBlox client can be used to execute batches specified in the ServiceBlox Batch Language. Since batch specifications are executed with lb web-client, the configurations available in lb-web-client.config are immediately supported. Furthermore, batch specifications can override the values of the config files in two ways:

  • The BatchConfig message can specify the keydir and keyname to be used globally in the batch specification.

  • Most statements that can make use of S3 can individually override the global setting for the key name using the key attribute. Currently, S3Upload, ImportDelim, and ExportDelim can specify the key to use. Note that S3Download uses S3 but the key is determined by meta-data.

7.7.2. Queues

In addition to HTTP over TCP, ServiceBlox also allows you to communicate through the use of queues. Normal TCP communication has a few drawbacks which queues allow you to solve. First, with typical TCP communication, each server needs to know about every other server with which it needs to communicate. In larger deployments with many machines, this can get very complicated to set up and maintain. Second, TCP leaves connections open for the entire duration of the request. This is generally fine if you're handling short-running requests, but during batch operations, where requests can take hours, it can be problematic and error prone: a network hiccup could abort entire batches.

Queues solve both of these problems. Queues solve the network complexity problem by allowing servers to talk to each other via queues. Typically, there is one queue endpoint that all servers communicate with and no server needs to know anything about any other server. This allows message consumers to be taken offline while producers continue to operate. It also allows consumers to be replaced if there is an error as a new consumer can start up and start handling the back log of messages. Queues solve the long running connection problem by having a request and a response queue. Instead of having one connection per message, you have two total connections, one for putting messages on the queue and one for reading responses off the queue.

ServiceBlox offers built-in integration with two queue implementations, each with its own pros and cons. See the introductory paragraph of each section for more information on why you would use one over the other. In general, we recommend using RabbitMQ, as it has far better consistency guarantees than SQS (i.e. no zombie messages that can derail batch processes).

Using RabbitMQ

RabbitMQ is the recommended queue implementation for use with LogicBlox deployments. The primary differences between it and SQS are:

  • RabbitMQ is much faster and completely consistent (i.e. it guarantees exactly-once delivery), whereas SQS can sometimes deliver messages more than once or fail to delete messages.

  • RabbitMQ can be run locally for development, but must be deployed to an EC2 server as part of the application's deployment.

Installing RabbitMQ

To install RabbitMQ on your development machine, we recommend downloading the generic-unix version of RabbitMQ. The lb-config extension for RabbitMQ is designed to work with this distribution. All you need to do is download the file and unzip it. No further installation is required, and all commands to start it are found in the sbin folder.

The easiest way to start RabbitMQ is to run RABBITMQ_HOME/sbin/rabbitmq-server. Shut it down by pressing Ctrl+C in that terminal.

Configuring RabbitMQ

RabbitMQ requires very little configuration. Once you have started the server, you add a section to both your lb-web-server.config and lb-web-client.config that specifies request and response queues and an AMQP endpoint for the RabbitMQ server. An endpoint looks just like a normal URL, except that it starts with "amqp://" instead of "http://" and must include a port number. The default port for RabbitMQ is 5672.

An example configuration is shown below:

[rabbitmq:sample]
request_queue_name = lb-web-sample-request
response_queue_name = lb-web-sample-response
endpoint = 'amqp://localhost:5672'

Testing RabbitMQ

A JSON service can now be invoked over RabbitMQ as follows:

$ echo '{"min_price" : 30}' | lb web-client call /search --format --config rabbitmq:sample

Using SQS

SQS is a cloud-based queue implementation offered by AWS. While SQS is good for distributing messages to workers, it is not necessarily a good fit for the ServiceBlox use case, where we want to simulate HTTP traffic over TCP. The reason is that SQS does not guarantee exactly-once delivery. Unless all your services are completely idempotent, you can easily get data problems, because some messages will be executed multiple times.

Installing SQS

One nice thing about SQS is that there is no installation: it is completely hosted in the cloud. You do, however, have to set up an AWS account with SQS privileges so that you can retrieve your SQS access key and secret key. AWS configuration is outside the scope of this document.

Configuring SQS

Once you have your access key and secret key, you can configure a pair of SQS queues as an end-point. To do this, add a configuration section to the lb-web-server.config file in LB_DEPLOYMENT_HOME/config.

[sqs:sample]
request_queue_name = lb-web-sample-request
response_queue_name = lb-web-sample-response
access_key = ...
secret_key = ...

To invoke a service via SQS, you can use the lb web-client tool. The lb web-client tool uses a separate configuration located in LB_DEPLOYMENT_HOME/config/lb-web-client.config. The configuration is identical to the configuration for the server:

[sqs:sample]
request_queue_name = lb-web-test-request
response_queue_name = lb-web-test-response
access_key = ...
secret_key = ...

A JSON service can now be invoked over SQS as follows:

$ echo '{"min_price" : 30}' | lb web-client call /search --format --config sqs:sample

Configuration options:

AWS credentials are specified with one of the following options:

| Configuration                           | Description                                       |
| access_key, secret_key                  | The access key and secret key to use.             |
| iam_role                                | Use IAM instance roles to obtain credentials. This option only works on EC2 instances. The value of iam_role currently does not matter, because EC2 supports only a single IAM role per instance. |
| env_credentials                         | Use credentials set by the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_KEY. The value of env_credentials does not matter: the setting serves as a flag to enable the use of environment variables for credentials. |

Queues can be configured using short names as well as full URLs:

| Configuration                           | Description                                       |
| request_queue_name, response_queue_name | The short names of the request and response queues. The queues are required to be in the same account as the credentials used. The endpoint is configured by the global (not section-specific) setting sqs_endpoint. If the queues do not exist, ServiceBlox will attempt to create them. |
| request_queue_url, response_queue_url   | The full request and response queue URLs. SQS URLs have the form https://sqs.<region>.amazonaws.com/<account-id>/<queue-name>. |

Deploying Multiple Servers with Queues

There are a few things you should know when you run a multi-server deployment that communicates via queues.

The first is that each ServiceBlox server should typically have only one queue configuration section. A typical mistake is to deploy the same lb-web-server.config file, containing all the queues, on every server. Since the primary reason to have a multi-server deployment is to partition data, having all servers process messages from all queues will cause problems, because it is quite likely that the server that pulls a message off the queue will not contain the data necessary to process it. Another somewhat common deployment scenario is to host different services on different servers. Since the entire HTTP message is placed on the queue, including the endpoint, it is likely that the server that processes the message will not contain the service you are trying to invoke. Hence the rule of thumb of only one queue configuration per server.

A second thing to note is that tabular service transactions are currently limited to a single server. This is true for both queue and TCP transports but since you technically can process multiple queues on one server, it's less obvious that your transaction is crossing server boundaries. What this means is that if you have a transaction that includes imports on two queues on different servers then that transaction will never complete. The solution to this problem is to batch your imports into transactions per queue, much the same as you would by IP address or host name in the TCP case.

7.8. CORS Rules

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in ServiceBlox, you can build rich client-side web applications and selectively allow cross-origin access to your ServiceBlox services.

For ServiceBlox and LogiQL, CORS essentially allows a rich web application running on one server to access services from a ServiceBlox server on a different server.

To enable CORS on ServiceBlox, we create CORS rules. These rules fundamentally declare the origins that are allowed to access a particular service. The following LogiQL rule declares a CORS rule that, when applied to a URL prefix, allows remote calls from a given origin for GET requests to the service with the URL prefix /foo-service.

cors:rule_by_name["allow_foo_GET"] = r,
  cors:rule(r) {
    // prefixes
    ...
    allowed_origin(r, "http://example.com"),
    allowed_method(r, "GET")
  }.

It is also possible to configure origins using wildcards, other HTTP methods, and allow for non-standard headers. Here are the predicates that allow for this:

  • allowed_origin(r, origin): origin can be a string with one wildcard, such as '*foo', 'foo*', or 'foo*bar'. The '*' string will allow all origins;

  • allowed_method(r, method): method should be a string describing an HTTP method ('GET', 'POST', 'PUSH', etc);

  • allowed_header(r, header): header should be the name of an http request header that the client will be allowed to send.

The following rule shows how to use these predicates, as well as how to configure a CORS rule for all services starting with /foo by using a wildcard in the prefix:

cors:rule_by_name["allow_PUSH_POST"] = r,
  cors:rule(r) {
    // prefixes (the wildcard matches all services starting with /foo)
    ...
    allowed_origin(r, "*example.com"),
    allowed_method(r, "PUSH"),
    allowed_method(r, "POST"),
    allowed_header(r, "x-custom-header")
  }.

7.9. Using URL path and parameters in Web Services

Each service is associated with a prefix, and ServiceBlox uses this prefix to match a request with a service. However, the URL can encode additional information that web services can make use of.

When ServiceBlox handles a request, it pulses a message into the database that contains this information, as encoded in the lb.web.http protobuf protocol:

message Field {
  required string name = 1;
  repeated string value = 2;
}

message RequestURI {
  // Full original uri. This contains the query string.
  required string full = 1;

  // Prefix of the service that is invoked.
  required string service_prefix = 2;

  // Best attempt to provide a subset of the path that indicates the
  // specific request made to the service.
  optional string extra_path = 3;

  // Parsed version of the query string
  repeated Field parameter = 4;
}


This can be used to implement behavior which depends on the URL.

    +log(datetime:string:convert[now] + ":prefix: " + pref) <-
        +lb:web:http:Request_uri[req] = uri,
        +lb:web:http:RequestURI_service_prefix[uri] = pref.

This rule, for instance, will insert a fact into a log predicate for each service call, with a string including the date of the call and the prefix against which the URL was matched. Similarly, one could log the full URL or the URL parameters (note that the parameters are a list of Fields, each of which associates a parameter name with a list of parameter values, all encoded as strings):

    // Logging full URL
    +log(datetime:string:convert[now] + ":full: " + full) <-
        +lb:web:http:Request_uri[req] = uri,
        +lb:web:http:RequestURI_full[uri] = full.

    // Logging parameters
    +log(datetime:string:convert[now] + ":param: " + name + ":" + int:string:convert[ind] + ":" + value) <-
        +lb:web:http:Request_uri[req] = uri,
        +lb:web:http:RequestURI_parameter[uri, n] = param,
        +lb:web:http:Field_name[param] = name,
        +lb:web:http:Field_value[param, ind] = value.

If the prefix ends with a slash followed by a wildcard character (/*), ServiceBlox will match any URL whose path starts with this prefix. If it does not, ServiceBlox will match any URL whose path equals this prefix. For a prefix including a wildcard, ServiceBlox will populate extra_path with the part of the path following the matching prefix. For example, if a service uses the prefix /time/*, ServiceBlox will match the URL http://localhost/time/EST with this service and populate extra_path with the string "EST".

    // Logging the extra path
    +log(datetime:string:convert[now] + ":extra_path: " + extra) <-
        +lb:web:http:Request_uri[req] = uri,
        +lb:web:http:RequestURI_extra_path[uri] = extra.