LogicBlox 4.1


LogicBlox 4.1.9

Release Date: April 1st 2015

What's New

Database

  • The new command lb replace-default-branch replaces the current version of the default database branch with the current version of the given branch.

    Example 1. 

    lb replace-default-branch [-h] [--loglevel LOGLEVEL] [-L] [--cwd [DIR]] WORKSPACE BRANCH
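
    A concrete invocation might look as follows (the workspace and branch names are hypothetical):

    ```
    lb replace-default-branch /tmp/myworkspace experimental-branch
    ```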

Services Framework

  • New ConnectBlox services to create and close workspace branches, list existing branches, and replace the default branch. These services can be installed in any workspace and exposed by the lb-web server.

    Note

    We highly recommend that these services be protected with authentication realms and possibly with authorization rules.

  • TDX exports now allow optional columns to be bound to the key of predicates. The only requirement is that an optional column be uniquely defined by the value of a functional predicate whose keys are either required columns or uniquely defined themselves.

    Example 2. 

    For example, a file SKU|LINE|LINE_LABEL where only SKU is required can be bound as follows:

    // bindings
    sku(SKU)
    sku_to_line[SKU] = LINE
    line_to_label[LINE] = LINE_LABEL  

    SKU is required, so LINE is uniquely defined by sku_to_line. Therefore, it can be used as a key in line_to_label, so that LINE_LABEL is also uniquely defined. Note that these would cause an error:

    // error: there are 2 bindings defining LINE
    sku(SKU)
    sku_to_line[SKU] = LINE
    line_to_label[LINE] = LINE_LABEL
    other_predicate[SKU] = LINE
    
    // error: LINE and LINE_LABEL cannot be defined
    sku(SKU)
    label_line[LINE_LABEL] = LINE
    line_to_label[LINE] = LINE_LABEL 
  • The lb-web server's logging configuration can now be changed dynamically to log the code generated by TDX.
    // start logging generated code
    $ lb web-server log tdx
    
    // stop logging
    $ lb web-server log no-tdx 

Measure Service

  • Significant change in measure language recalc semantics: recalcs now behave more like special aggregation methods. Therefore, they must have a metric defined by another method for the base intersection, whether that be an EDB predicate, primary formula, dialogue, CubiQL expression, etc.
  • New CubiQL REPL
    • Accessible via lb measure-service repl --uri [Service URI]
    • Allows easy querying of CubiQL using the textual syntax.
    • Provides access to a number of helpful operations on CubiQL expressions, such as signature, predId, optimized representations, JSON skin, etc.
  • The new CastExpr can be used to convert an expression with values of one type to values of another type. Providing the target type is all that is necessary. The text syntax is: <expr> as <typ>.
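    A minimal sketch of the text syntax, using a hypothetical decimal-valued Sales metric (the metric name is illustrative only):

    ```
    // hypothetical: convert the decimal-valued Sales metric to float values
    Sales as float
    ```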
  • Intersection text syntax has changed from parentheses to braces (e.g. (sku,store,week) becomes {sku,store,week}) to emphasize their unordered nature and to make intersections easier to read amid all the other parentheses involved.
  • New optimizations:
    • Widening expressions are now commuted with several other expressions to reduce the number of tuples materialized.
    • Identity aggregation mappings are normalized away.
    • Override expressions can be commuted with total and count aggregations to potentially reduce the incremental maintenance overhead.
  • Small change in aggregation semantics: aggregations where maps are annotated with hierarchies can switch hierarchies.
  • The measure service now does more rigorous type checking.
  • Added the textual CubiQL form aggmethod <expr> @ <intersection>, which specifies an aggregation endpoint by intersection rather than by grouping.
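    For instance, with a hypothetical Sales metric and a {class, month} target intersection (the names are illustrative, not from this release):

    ```
    // aggregate Sales up to the {class, month} intersection using total
    total Sales @ {class, month}
    ```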
  • Measure service AdminRequests can now be used to obtain the name of the workspace backing a given measure service instance.
  • Improved support for multiple edits and locks at the same intersection.

Developer Tools
  • In addition to Amazon Web Services (AWS) Simple Storage Service (S3), we now also support Google Cloud Storage (GCS).
  • S3tool has been renamed to cloud-store. cloud-store is the command-line tool around s3lib that can be used to interact with both AWS S3 and Google Cloud Storage. For more information on the cloud-store command, please refer to the [Admin Manual].
  • The s3lib-keygen command is replaced by the new cloud-store keygen command.

Corrected Issues

The issues listed below have been corrected since the 4.1.8 release.

  • Resolved an issue where lb-web's RelationPrinter did not support entities without refmodes, which prevented the monitoring of these entities.
  • Resolved issues with the Measure Service around roll-up logic generation and the unwinding of synthetic aggregations.
  • Resolved an issue that was causing incorrect results for certain fixpoint computations involving entity creation.
  • Resolved an issue that could cause an incorrect result for a count aggregation over a large cross product.
  • The runtotal series function now disallows entity values.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.9 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.9/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Upgrade Information

  • Automatic refmode conversion has been deprecated. The pedantic warning REFMODE_CONVERT has been upgraded to a full warning.

    Important

    • In the 4.2.1 release REFMODE_CONVERT will become an error.
    • Support for refmode conversion will be removed in the 4.2.23 release.

Release Information

Table 1. 

Server requirements
Operating System: 64 bit Linux
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.8

Release Date: March 2nd 2015

What's New

SAML Support

ServiceBlox now supports SAML 2.0 authentication. This capability allows LogicBlox applications to use a single identity provider, typically one indicated by the client, for authentication and user management. For information on how to configure SAML Authentication, please refer to the LogicBlox Administration Guide.

Database

You can now configure the default commit mode in lb-server.config, both for all workspaces and for specific workspaces.

Example 3. lb-server.config

# Default transaction commit mode. Allowed values are "diskcommit"
# (the commit doesn't return until the transaction has been durably
# written to disk) and "softcommit" (the commit may return before the
# transaction has been written to disk, so in case of a crash the
# database may roll back to a previous version).
commit_mode=softcommit

# It is possible to override [workspace] options for a specific
# workspace as follows:
#
# [workspace:foo]
# commit_mode=diskcommit

Performance

We have made significant performance improvements for transactions involving large numbers of blocks, such as those generated by the measure service.

Measure Service

  • Added support for the AMBIG aggregation type that yields a measure's unique value, if there is any.
  • The NAMED types in the metamodel are now exported together with their primitive backing type.

S3 Tools

We have made the following improvements to the S3 tools:

  • s3lib can now be distributed as an executable JAR.
  • Added new command s3tool version.
  • The s3lib-keygen script is replaced by the s3tool keygen command, thereby removing the dependency on openssl.
  • Added checksum validation for single-part/multi-part uploads/downloads.
  • Added a new option --include-dirs to the s3tool ls command, to list all the objects and (first-level) directories that match the provided S3 URL prefix.
  • Added support for basic HTTP proxies.

Corrected Issues

The issues listed below have been corrected since the 4.1.7 release.

Measure Service

  • Improved the variable handling in measure language rules when there are multiple levels from the same dimension.
  • The Measure Service now handles measure language rules with cyclic dependencies between metrics.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.8 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.8/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Release Information

Table 2. 

Server requirements
Operating System: 64 bit Linux or Mac OSX 10.9 or higher
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.7

Release Date: February 2nd 2015

What's New

LogiQL

  • In LogiQL it is now possible to write decimal and floating point literals without leading digits.

    Example 4. Decimal and floating point literals

    .001d
    .0001f

Performance

  • Improved the performance of sampling (used by the optimizer) by roughly 25%. This particularly improves the performance of update transactions on databases with many materialized views to be incrementally maintained.

Developer Tools

  • The lb libraries subcommand can now optionally print library dependencies.

    Example 5. 

    lb libraries [-h] [--libpath LIBPATH] [--dependencies] [--quiet] [LIBRARIES [LIBRARIES ...]]
  • It is now possible to change the default lb-server log level at runtime using the command lb server loglevel. The syntax is similar to the existing lb web-server loglevel command, but affects all logging on the database server, including background processing and in-flight transactions. Per-request loglevel settings are honored as before.

    Example 6. 

    For example, if the default log level were currently "warning" then:

    # returns warning-level log events
    lb query /tmp/db '_("hello world").' -L
    
    # change default log level
    lb server loglevel perf 
    
    # returns perf-level log events
    lb query /tmp/db '_("hello world").' -L
    
    # returns warning-level log events
    lb query /tmp/db '_("hello world").' -L --loglevel warning

    Note

    Note that this change does not persist across server restarts.

  • Improved two details in logging at the 'perf' level, which is typically used for performance analysis:
    • First, we now consistently log the key ordering that the optimizer selected for a rule.
    • Second, if indexes are created, we now log the overall duration of the index creation; previously we logged only the duration of the parallel tasks that perform it.

Services Framework

  • Improved the error reporting of functional dependency violations in TDX. For large files, the actual functional dependency error used to be masked by an internal error. The actual functional dependency violation is now correctly reported, together with the bindings of all variables involved in the rule as part of the error message.

Measure Service

  • Predicates in the measure model may now be flagged as "volatile" to indicate that queries using them should not be cached. This is different from the underlying predicate being a pulse predicate, which was already handled. The volatile annotation is generally intended for database-lifetime predicates where we don’t want to pay the cost of maintaining queries over them.
  • It is now possible to configure dimension "top" levels (that is, a level that is at the top of a dimension lattice and has a single member).
  • Added support for AVERAGE aggregation type.
  • Added support for the following builtin operators: Absolute value (ABS) and modulo (MOD).
  • A variety of optimizations to logic generated for queries and updates.
  • Measure language:
    • Variables in formula bodies are now allowed to have different labels, as long as they belong to the same Dimension.
    • Predicates and metrics in inverse formula bodies may now use LogiQL stage annotations, e.g. @prev.

Corrected Issues

The issues listed below have been corrected since the 4.1.6 release.

Performance

  • Addressed a performance issue where indexes for predicates were re-created fully due to a bug in index management. Because index creation is very fast this went unnoticed for a while. For larger databases, index creation is more noticeable though, so this can be a critical fix for some applications.
  • Resolved a performance problem with negation in incrementally maintained materialized views. Tuning has improved the performance of such rules by about 50%.

Database

  • Largely fixed a common warning for functional dependency violations in sampling. Some warnings still remain; a complete resolution of this issue will be included in the next release.
  • Resolved an issue where the pager stopped responding to requests.

Measure Service

  • Resolved issues with updates to better support locking and multiple edits to a single metric.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.7 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.7/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Release Information

Table 3. 

Server requirements
Operating System: 64 bit Linux or Mac OSX 10.9 or higher
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.6

Release Date: January 6th 2015

What's New

OSX Support

We are pleased to offer the first OSX distribution of LogicBlox! Apple users are no longer required to work in a Linux virtual machine. As this is the first public release for OSX, we expect that there are some rough edges. We hope you will help us improve our OSX support by reporting issues that you encounter.

Database

  • The usability of aborting transactions that are being executed within the lb-server has been improved.

    Example 7. 

    The database server assigns transactions an ID once they are received. The ID of transactions executing in workspace ws can be obtained via the following command:

    lb status ws --active

    Subsequently, transactions can be aborted by issuing the command:

    lb aborttransaction ws <ID>

    Aborted requests are now signaled back to the client as soon as they are aborted, and it is now possible to abort transactions that are still queued. Furthermore, a status message is returned when trying to abort a transaction that is not found on the server.

    The transaction abort feature also allows lb-web to support end-to-end aborts. When lb-web clients close the HTTP connection while a transaction is running, lb-web will attempt to abort the transaction.

  • Write transactions to different branches of a workspace are now executed in parallel. Consequently, a long-running write transaction on branch branch1 will not block a concurrently issued write transaction on branch branch2 of the same workspace. Write transactions across different workspaces were already executed in parallel.

    Note

    Note that write-transactions for the same branch of a workspace are still serialized.

Performance

  • The performance of incremental view maintenance (in particular, the construction and use of sensitivity indexes) has been tuned, leading to improved performance both for initial evaluation and for incremental maintenance of workspace-lifetime IDB rules. This also causes the performance of IDB rules to be more robust, reducing the need for manual tuning.
  • Auto-retraction of facts has been improved, leading to observable performance differences on full and incremental data loads into schemas that use entities.
  • Measure Service related performance optimizations:
    • Optimized request processing to not block on requests that do not generate logic.
    • Improved reuse of disjunctive views used in aggregations.
    • No-op aggregations (aggregating to the same level) are now optimized away.
  • The TDX code generator now avoids generating intermediate predicates for PUT requests, resulting in noticeable speedups in our benchmarks.

Measure Service

  • Added support for literal updates with entities in the value column using entity refmodes.

Services Framework

  • TDX specifications now support human-readable descriptions for columns. These descriptions are available for clients to query in the meta-service.
  • The TDX import error reports have been improved. The CAUSE field now contains additional information about the relevant columns in the row that contains an issue, which makes it more suitable for human consumption.
  • Improved consistency of timestamp format of logs across components. lb-web server and client use the same ISO-8601 variant as lb-server, which includes the timezone offset. By default, the current host timezone is used, but this behavior can be changed via configuration files.

Developer Tools

  • A separate system administration manual is now available. It comprises newly written content, as well as content separated from the existing reference manual that is more relevant to administrators than developers. We plan to add more content incrementally and welcome your suggestions on information that should be added. Please refer to the Administration Guide for more details.
  • The lb services command and the status commands of individual services have been improved to more accurately report information.

    Example 8. 

    For example, lb server status now reports a connection refused error instead of incorrectly concluding that the lb-server process is not running.

    The lb services command has been enhanced in similar ways. For example, when reporting an error that services are still running, it clarifies that these are running according to the status check. When listing processes, the command reports how it identified those processes.

    Note

    Please note that we do not recommend using lb services in a production setting. It is intended as a convenient cross-platform tool for developers. The new administration manual provides examples for how to configure systemd or upstart service managers for LogicBlox.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.6 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.6/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Upgrade Information

The structure of the distributions has changed compared to earlier versions. We have worked on integrating the various components of the LogicBlox system better and now have a more standard directory structure. Users should not experience issues with this re-organization if they set up their environment using the recommended etc/profile.d/logicblox.sh script. Users who separately re-define environment variables for components might have issues and need to remove those assignments.

Release Information

Table 4. 

Server requirements
Operating System: 64 bit Linux or Mac OSX 10.9 or higher
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.5

Release Date: December 1st 2014

What's New

Performance

  • The performance of exports of protobuf messages has significantly improved. This feature is used by lb-web protobuf/json services. The impact is particularly noticeable on large messages, where some exports that took minutes before can now take just a few seconds.
  • The performance of adding blocks to large existing projects has improved because recursion error checks are now performed incrementally.
  • Certain rules involving projections (where variables occurring in key-position in body atoms do not occur in head atoms) are now fully parallelized.

Measure Service

  • Added support for key requests at levels higher than that found in the report intersection.
  • The Measure Service now uses a less conservative test for lifting dices outside of aggregations.
  • The detection of when to materialize disjunctive views inside of aggregations has been improved.
  • The global measure handler for model and update requests has been improved.
  • The Measure Service now defines primitive dimensions in the service itself if they are not defined in the model.
  • The Measure Service now optimizes away total, min and max aggregations over scalar expressions.
  • It is now possible to configure metrics and expression bindings as disjunctive views within the configuration logic library.

Corrected Issues

The issues listed below have been corrected since the 4.1.4 release.

LogiQL

  • The string:decimal:convert built-in now accepts whitespace at the end of the decimal. This change restores the behavior from LB3, and since string:decimal:convert is also used for parsing decimal values in TDX, it also addresses an issue where decimal values in CSV files could not have trailing whitespace.
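    A small sketch of the newly accepted input, assuming the standard built-in signature string:decimal:convert[s] = d (the predicate name p is hypothetical):

    ```
    // "12.50 " now converts successfully despite the trailing whitespace
    p[] = d <- string:decimal:convert["12.50 "] = d.
    ```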
  • It is now possible in certain cases to use negation with auto-numbered refmodes which had been erroneously disallowed in previous releases.
  • Recursion through IDB entity construction is now disallowed by default, since it may lead to non-terminating logic.

    Note

    The pragma

    lang:compiler:error:ENTITY_CREATION_RECURSION[]=false.
    

    can be used to override this restriction.

  • Min/max aggregations over entity types are no longer permitted. Ordering on internal entity values is non-deterministic, so the results of such aggregations were unpredictable.

    Note

    As a general rule, users should not be able to observe or use the internal entity values in any way. Aggregations over entity types should use some related value, typically the refmode.
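    A hedged LogiQL sketch of the recommended pattern, using a hypothetical person entity with a string refmode (the schema and syntax are written from memory, not taken from these release notes):

    ```
    // hypothetical schema: person entities identified by a string refmode
    person(p), person:id(p:id) -> string(id).

    // aggregate over the refmode value rather than the entity itself
    maxId[] = m <- agg<<m = max(id)>> person:id(_:id).
    ```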

Developer Tools

  • lb web-client and lb web-server now support the charset parameter in content-type headers. Previously they only accepted media-type (e.g. "application/json"), and "application/json;charset=ISO-8859-1" would therefore fail. The charset parameter is properly respected, so the data will be parsed with the configured charset.

Services Framework

  • Resolved an issue that could cause an exception when a service was installed with the same prefix as an already existing service.

Measure Service

  • Resolved an issue with how decimal constants are normalized within expressions.
  • Resolved an issue with the handling of dialogue metrics in requests with multiple reports.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.5 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.5/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Release Information

Table 5. 

Server requirements
Operating System: 64 bit Linux
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.4

Release Date: November 3rd 2014

What's New

Performance

  • The compiler’s quantifier placement inference algorithm has been optimized, resulting in a significant improvement in compilation performance.

    Note

    As a part of this change, some existing code may now trigger a VAR_UNBOUND_IN_DISJUNCT error. This occurs when the same named variable is used in some, but not all, of the disjuncts in a rule. If the intent was for the variables to be distinct, as they were considered before, simply rename the occurrences in each disjunct.

Developer Tools

  • Removing installed blocks is now supported through the following command:
    lb removeblock <WORKSPACE> <BLOCKNAME> [... <BLOCKNAME>]
    

    Note

    Blocks cannot be removed if they define predicates required by other blocks outside the removal set. When removing “trivial” blocks, where blocks do not contain rules deriving into predicates that are not being removed, the operation should not trigger a full evaluation of all installed rules. However, if removed blocks are “non-trivial”, the operation may require full evaluation of the remaining installed rules.
  • Aborting long-running transactions is now supported. The following command will abort transaction identified by <TXN_ID>, on <WORKSPACE>:
    lb aborttransaction <WORKSPACE> <TXN_ID>
    
    To retrieve a list of active transactions in a workspace, use the following command:
    lb status <WORKSPACE> --active --debug
    

    Note

    It is possible to omit the workspace, but then the server will have to search all workspaces to find the transaction with that ID.

  • Novice users frequently perform bulk data imports using LogiQL source code, whereas the preferred and more performant way is to use Tabular Data Exchange Services. In order to discourage users from performing bulk data imports via LogiQL source code, the compiler will now report a BULK_IMPORT error on a file where an atom is inserted more than 2048 times in the head of a rule (either in an IDB or delta rule).

    Note

    The check is performed before DNF conversion, so using deeply nested disjunctions will not trigger the error. If necessary, the limit can be changed using the LB_MAX_USES environment variable or the lang:compiler:maxUses[]=num pragma.
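    For example, a build script could raise the threshold through the environment variable named above (the chosen value is arbitrary):

    ```shell
    # raise the BULK_IMPORT threshold before invoking the compiler
    export LB_MAX_USES=8192
    echo "$LB_MAX_USES"   # prints 8192
    ```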

LogiQL

  • New uid<<>> P2P for generating unique numeric identifiers. The values generated are guaranteed to be unique among all uid<<>> invocations within the scope of a workspace. Useful in situations where you need fresh, arbitrary (but not random!) numbers, and don’t want to go to the trouble of e.g. creating an autonumbered refmode just for this purpose.

    Example 9. 

    transaction
    addBlock <doc>
    a(x) -> string(x).
    a("apple").
    a("banana").
    a("cherry").
    b[x]=y -> string(x), int(y).
    </doc>
    commit
    
    transaction
    exec <doc>
    +b[x]=y <- uid<<y>> a(x).
    </doc>
    commit
    

    Yields:

    ┌─b
    │ "apple",10000000043
    │ "banana",10000000045
    │ "cherry",10000000046
    └─b [end]
    

    Note

    The P2P is intended to be used in delta-logic (like the example above) or non-recursive transaction/query IDB computation, where no incremental maintenance needs to be performed.

Services Framework

  • TDX now allows the customization of how data values are quoted and escaped in import and export services. Please refer to the [Reference Manual] for an overview of the file modes supported and their examples.
  • lb-web services now support a service_parameter 'commit_mode', which can be set to softcommit or diskcommit. This will determine the commit_mode of the database requests submitted.

Measure Service

  • Added support for sorting a measure by value, where the values are grouped by all but one key, and each value is mapped to an integer index in increasing order according to its sorted position within its group.
  • Metrics defined by measure language formulas can now be defined by predicates with a different name from the metric.
  • Improved normalization of dicers, filters, and aggregation groupings for more consistent code generation.

Corrected Issues

The issues listed below have been corrected since the 4.1.3 release.

Runtime

  • Resolved an issue that could cause an exception when refmode conversion was used incorrectly.

Developer Tools

  • Resolved an issue where the lb web-server monitor command to log assertions and retractions for predicates had no effect when used in conjunction with a TDX service call.

Measure Service

  • Resolved issue with decimal constant normalization.
  • Histogram and mode aggregations now work with named/entity types.
  • The global measure handler now works correctly and supports model requests.
  • Resolved an issue with the internal predicate name generation, where it did not match predicates in the model.
  • Logic is now generated when a predicate does not yet have a derivation type.
  • Resolved an issue that could cause an exception when using mutually recursive measure formulas.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.4 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.4/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Upgrade Information

  • 4.1.4 includes significant internal revisions to the handling of database metadata, crucial for implementing features such as lb removeblock. As part of this change, users may notice increased memory consumption, in particular for projects having large numbers of rules and predicates. We will be reducing this overhead in subsequent releases.
  • The $ character is no longer allowed in identifiers in LogiQL.
  • If a variable is prefixed with an underscore (e.g. '_person'), a VARIABLE_SOLITARY warning is no longer issued, as was already the case for variables consisting of just an underscore.
  • The compiler no longer recognizes the following pragmas which are no longer relevant for LogicBlox 4.x:
    • lang:disjoint
    • lang:skolem
    • lang:lockingPolicy
    • lang:capacity / lang:physical:capacity
    • lang:physical:lineNumbers
    • lang:physical:partitioning
    • lang:physical:storageModel
    • lang:remoteRef
    • lang:roleNames
  • Query and fraction parent update transformations now require a target type in order to better support composing multiple transformations together.

    Note

    The query only needs a type if the result has values.

  • Logic for measure language formulas is now generated lazily. This change avoids generating and storing results for formulas that are never used at their base intersection.

Release Information

Table 6. 

Server requirements
Operating System: 64 bit Linux
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.3

Release Date: July 17th 2014

What's New

Performance Improvements

  • The performance of measure service rules in particular has improved significantly due to a new rule optimizer heuristic.
  • Performance improvements can also be observed for short-running transactions.

LogiQL

  • New built-in for formatting floating point numbers as currencies:
    float:currency:string[x,y] = z -> string(x), float(y), string(z)
    
    Formats a float according to the locale identifier specified by the first parameter. Identifiers are interpreted by the underlying ICU implementation: http://userguide.icu-project.org/locale

    Example 10. 

    numbers(-123.4f).
    numbers(1234.5f).
    
    locales(x) -> string(x).
    locales("@currency=USD").
    locales("@currency=EUR").
    locales("de_DE.UTF-8").
    locales("en_US.UTF-8").
    
    result(x,y,z) <-
       numbers(x),
       locales(y),
       float:currency:string[y,x]=z.    
    

    Yields:

    -123.4 "@currency=EUR" "-123.40"
    -123.4 "@currency=USD" "-US$123.40"
    -123.4 "de_DE.UTF-8"   "-123,40 €"
    -123.4 "en_US.UTF-8"   "($123.40)"
    1234.5 "@currency=EUR" "1,234.50"
    1234.5 "@currency=USD" "US$1,234.50"
    1234.5 "de_DE.UTF-8"   "1.234,50 €"
    1234.5 "en_US.UTF-8"   "$1,234.50"
    

    Note

    Please note that currently you need to use 'en_US.utf8' to actually get the euro character; the $ character has no such issue. This is caused by the Boost locale library defaulting to US-ASCII instead of UTF-8. This might be improved in a future release.

    Example 11. Additional examples

    "en_US"                   "$1,234,567.80"      
    "en_US.utf8"              "$1,234,567.80"      
    "en_US.utf8@currency=EUR" "€1,234,567.80"    
    "en_US.utf8@currency=USD" "$1,234,567.80"      
    
    "fr_FR"                   "1234567,80"         
    "fr_FR.utf8"              "1 234 567,80 €"
    "fr_FR.utf8@currency=EUR" "1 234 567,80 €"
    "fr_FR.utf8@currency=USD" "1 234 567,80 $US"
    
    "it_IT"                   "1.234.567,80"       
    "it_IT.utf8"              "1.234.567,80 €"  
    "it_IT.utf8@currency=EUR" "1.234.567,80 €"  
    "it_IT.utf8@currency=USD" "1.234.567,80 US$"  
    
    "nl_NL"                   "1.234.567,80"       
    "nl_NL.utf8"              "€ 1.234.567,80"  
    "nl_NL.utf8@currency=EUR" "€ 1.234.567,80"  
    "nl_NL.utf8@currency=USD" "US$ 1.234.567,80"  
    
    "sv_SE"                   "1234567:80kr"       
    "sv_SE.utf8"              "1 234 567:80 kr" 
    "sv_SE.utf8@currency=EUR" "1 234 567:80 €"
    

  • The following binary prefix comparison functions have been added for the decimal type:
    • lt_3, le_3, ge_3, gt_3, eq_3 and ne_3
    Please refer to the LogicBlox Reference Manual for a complete overview of all the built-in predicates.

Measure Service

  • Entity types: Measures can now have entity typed values. New measure expression AS_ENTITY converts a primitive typed value to an entity value using the entity’s refmode. Filter comparisons can either work directly on entity values or on their associated refmodes.
  • Labeled levels and intersections: Consider a possible asymmetric affinity metric where affinity[past,predict] is intended to model how likely a consumer is to purchase sku predict after purchasing past. A model containing this metric may be defined as follows:
    +measure:config:metric("affinity") {
     +measure:config:metric_hasIntersection[]="past:sku,predict:sku",
     +measure:config:metric_usesPredicate[]="aff",
     +measure:config:metric_hasType[]="FLOAT"
    }.
    
    Note here that the metric’s intersection contains the labels past and predict, which allows us to disambiguate the two occurrences of levels from the product dimension.
  • Parameterized query support: Measure service queries may now contain parameters which can be filled in with values from a relation in the workspace.

    Example 12. 

    Parameterization can be used, for example, to enable scrolling without installing new logic as the position in the dataset changes.

  • Support for transforms in REMOVE messages: This feature allows developers to selectively remove positions that map up to a single member of a level.

    Example 13. 

    Transforms in REMOVE messages can for instance be used to remove positions from predicate Sales that have a value > 10 or filter based on access control specifications.

  • Basic structured error reporting: Error information is now reported via Problem messages in the Response message. For temporary backwards compatibility, the exception field will still be populated with the string ERROR if there was a fatal error. Problems can report warnings as well as errors and may provide position or stack information.
  • Named expression metrics: It is now possible to give a measure expression a name by defining a new metric.

    Example 14. 

    In our configuration logic, that looks like:
    +measure:config:metric("SalesSouth") {
       +measure:config:metric_usesExpression[]=
         """dice Sales by (filter Location.region.label by = "South")""",
       +measure:config:metric_hasIntersection[]="sku,store,week",
       +measure:config:metric_hasType[]="FLOAT"
    }.
    
  • Command line measure service administration: A command line administration option is added to refresh the measure service model.

    Example 15. 

    lb measure-service admin -u http://localhost:8080/ refresh
    

    Note

    The administrative options can be enabled by setting measure:admin:allowed[]="true".

  • Escaped identifiers in text syntax: When writing queries in the text syntax an identifier containing arbitrary characters (whitespace, etc.) can be written via `....`.
  • Measure rule language abs builtin: The builtin function abs is now supported.
  • Aggregating recalcs with projections: In previous releases projecting a dimension was disallowed when aggregating a metric defined by measure rule language recalc. This restriction has now been relaxed. The projection is now allowed, as long as the variable corresponding to that dimension is only used as an argument to metrics in the recalc formula (and not predicates or comparisons).

Corrected Issues

The issues listed below have been corrected since the 4.1.2 release.

  • Resolved an issue that could lead to an 'Unexpected missing predicate' error when adding new logic.
  • Resolved an issue that could cause lb-server to crash under certain circumstances.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.3 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.3/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Deprecated Features

  • Parameter terms: Parameter terms are subsumed by parameter expressions, and will eventually be removed.
  • Exception field in Response: The exception field in the Response message has been deprecated. If there is an error processing a request it will now contain the string "ERROR" and the nature of the error will be recorded as a Problem message in the problem field.

Release Information

Table 7. 

Server requirements
Operating System: 64 bit Linux
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.2

Release Date: June 2nd 2014

What's New

LogiQL

  • New series<<>> P2P, which allows the construction of predicates by an iterative application of a function. Current implemented functions are random-number generators (uniform, binomial, Cauchy, Poisson) and run-total (accumulating the values of a column). Please refer to the Reference Manual for more details.

    Example 16. 

    Successes[expr,trial]=k <- series<<k=rnd_binomial<n,p,seed>[trial]>>
     R(n,p,seed,expr,trial).
    

    Here, <n,p,seed> is the parameter vector for initializing the function, and [trial] is the running-index vector.

    Example 17. 

    In this example, [A] is the input vector (which is given to the function in every iteration).

    runt[T,A] = V2 <- series<<V2=runtotal[A](V)>> base[T,A] = V.    
    

    Note

    The current implementation only supports primitive types for the series variables, as opposed to entities. This restriction will be removed in a future release.
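
    The runtotal behavior can be pictured outside LogiQL. The following Python sketch is purely illustrative (the data and function names are assumptions, not part of the product): for each value of the input vector A, it accumulates the column V in the order of the running index T, mirroring runt[T,A] = V2.

    ```python
    from itertools import accumulate

    # base maps (T, A) -> V; runtotal[A](V) accumulates V per group A,
    # in the order of the running index T.
    base = {(1, "a"): 10, (2, "a"): 5, (3, "a"): 2, (1, "b"): 7}

    def running_totals(base):
        groups = {}
        for (t, a), v in sorted(base.items()):
            groups.setdefault(a, []).append((t, v))
        result = {}
        for a, pairs in groups.items():
            ts = [t for t, _ in pairs]
            totals = accumulate(v for _, v in pairs)
            for t, total in zip(ts, totals):
                result[(t, a)] = total
        return result

    print(running_totals(base))
    # {(1, 'a'): 10, (2, 'a'): 15, (3, 'a'): 17, (1, 'b'): 7}
    ```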

  • New list_to_seq<<>> P2P, which allows the conversion from ordered collections represented using linked-list style first/next predicates (a la ordered entities, list<<>> P2P) to an array-style representation (a la seq<<>> P2P).

    Example 18. 

    foo(_) -> .
    lang:ordered(`foo).
    
    foo:seq[i]=x -> int(i), foo(x).
    foo:seq[_]=_ <- list_to_seq<<>>foo:first(_), foo:next(_,_).
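
    The conversion performed by list_to_seq<<>> can be sketched in Python (an illustrative model only, not the actual implementation): starting from the first element, follow the next pointers and assign consecutive indices.

    ```python
    def list_to_seq(first, nxt):
        """Turn a linked-list style representation (a 'first' element plus a
        next-pointer map) into an index-based sequence, as in foo:seq[i] = x."""
        seq = {}
        i, x = 0, first
        while x is not None:
            seq[i] = x
            x = nxt.get(x)
            i += 1
        return seq

    # 'c' is first; the chain is c -> a -> b
    print(list_to_seq("c", {"c": "a", "a": "b"}))
    # {0: 'c', 1: 'a', 2: 'b'}
    ```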
    
  • The restriction on predicate width to 64 attributes has been relaxed; now up to around 500 attributes are allowed in predicates, depending on their types and the data.

Measure Service

  • Updates and queries in a single Request: It is now possible for a single Request message to both perform an update and query the results. Consequently, the kind field of the Request message is no longer necessary as multiple fields of the Request may be populated. However, AdminRequests must be sent alone in their own Requests.
  • Update and query requests share parameters: The Request message now has a global params field that supersedes the argument field of the QueryRequest message. These parameters are visible from all QueryRequests and UpdateRequests in the Request.
  • Inverse spreading: It is now possible to update and spread to metrics defined via measure rules when there are accompanying inverse formulas. Similar to the old measure engine, the measure service will search for a chain of inverse formulas that can be used to perform the update in the presence of other edits.
  • Inverse formula updates: It is now possible to reinstall a rule with additional formulas. The original rule will only be overwritten if the new rule has the same name.

    Note

    It is currently not possible to update the primary formula (which is currently always the first), so it is important that it be sent exactly the same as it was the first time.

  • Sorting aggregation: A new sorting aggregation method has been added.
  • Delta expression: The expression language has been extended with a delta operator. The delta operator allows for querying how the values in a given expression have changed during the current request. This is generally only useful if both an update and query are made in the same Request. The delta operator can either return those positions that have been added/updated or those positions that have been removed.

    Note

    It is only possible to observe retractions on expressions that are materialized. Querying assertions is allowed on any expression, but if the expression is not materialized doing so will be effectively a no-op.
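
    As a way to picture the delta semantics, the following Python sketch (purely illustrative; the measure service operates on workspace predicates, not Python dicts) models an expression as a position-to-value mapping and compares its state before and after an update within the same Request:

    ```python
    def delta(before, after):
        """Model of the delta operator: report positions that were
        added/updated and positions that were removed by an update."""
        added_or_updated = {p: v for p, v in after.items()
                            if p not in before or before[p] != v}
        removed = {p: before[p] for p in before if p not in after}
        return added_or_updated, removed

    before = {("sku1", "w1"): 10.0, ("sku2", "w1"): 4.0}
    after  = {("sku1", "w1"): 12.0, ("sku3", "w1"): 1.0}
    print(delta(before, after))
    # ({('sku1', 'w1'): 12.0, ('sku3', 'w1'): 1.0}, {('sku2', 'w1'): 4.0})
    ```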

  • Override expression: The expression language has been extended with an override operator, defined by the OverrideExpr message. Given a list of expressions that are not position-only, the override operator evaluates them left to right, returning a value for a position only if no expression to its left has a value for that position.

    Note

    Please note that this feature is currently considered experimental and may change in a future release.
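
    The left-to-right precedence can be illustrated with a small Python sketch (an informal model only; the data and names here are invented): each expression is a position-to-value mapping, and the leftmost expression that defines a position wins.

    ```python
    def override(*exprs):
        """Model of the override operator: a position takes its value from
        the leftmost expression that defines it."""
        result = {}
        for expr in exprs:
            for pos, val in expr.items():
                result.setdefault(pos, val)  # earlier (leftmost) values win
        return result

    promo = {("sku1", "w1"): 9.99}
    base  = {("sku1", "w1"): 12.50, ("sku2", "w1"): 3.00}
    print(override(promo, base))
    # {('sku1', 'w1'): 9.99, ('sku2', 'w1'): 3.0}
    ```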

  • Exponentiation operator added: The POW exponentiation operator has been added.
  • Measure service refresh latency reduced: When refreshing the measure service’s view of the model, it is now reloaded asynchronously to better take advantage of parallelism and avoid the client blocking unnecessarily.

Performance

  • The performance of TDX import and export has been improved. The performance improvement is particularly noticeable in settings with many small TDX transactions.

Corrected Issues

The issues listed below have been corrected since the 4.1.1 release.

  • Credential service requests that only query information are now properly marked as read-only.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.2 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.2/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Deprecated Features

  • Because it is now possible to perform multiple actions in a single Request message, the kind field of the Request message no longer makes sense. For continued backwards compatibility, it is still possible to set the kind field to MODEL_QUERY instead of setting the model_request field.
  • The argument field of the QueryRequest message has been deprecated. Parameters should now be given in the params field of the Request message, as they will be shared among all queries and updates. It is currently still possible to place parameters in the argument field, but they will just internally be appended to the global parameter list.

Release Information

Table 8. 

Server requirements
Operating System: 64 bit Linux
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.1

Release Date: May 1st 2014

What's New

Performance Improvements

  • LogicBlox 4.0.6 introduced domain parallelism to LB4. Domain parallelism, a mechanism for splitting data into pieces such that a query can be evaluated on each piece in parallel, allows LogicBlox to scale up its performance with the number of cores available in a single machine. Domain parallelism in past LogicBlox releases was restricted to queries -- both ad hoc and precompiled. In LogicBlox 4.1.1 the initial full evaluation of IDB rules is parallelized, too.
  • Applications with very large schemas will experience a significant reduction of the memory consumption when installing the schema.
  • Applications making use of the ratio and even spreading features will experience significant performance improvements.

Measure Service

  • The query_request field of the Request message has changed from being optional to being repeated. This allows for batching up multiple QueryRequests, each of which may request data at a different intersection, into a single Request.

    Note

    Please note that this change necessitated a breaking change to how the results are returned. Instead of the result columns being found in the Response message, they are found in the repeated report field.

  • The measure service now offers support for an empty expression. This is mostly useful when generating complex expressions: it can be convenient to emit an empty expression rather than complicate the generation code. The empty expression is then appropriately optimized away by the measure service.
  • It is now possible to ask for the value of an expression in the previous transaction. This is most useful when using the measure service’s spread-by-query functionality.

    Note

    Please be aware that this feature is still in beta mode and may change in the future.

Developer Tools

  • Config files for lb-web server and client have a section for global attributes. Previously, this section was the initial, unnamed part of the file, before any section is declared. A problem with this approach is that it is not possible to compose configuration files by concatenation, since the global section of one file would be included in the last section of the previous file.

    It is now possible to declare a section named global that contains the global attributes. Config file concatenation then becomes possible, and all global sections have their attributes merged.

    It is still possible to use the unnamed section for global attributes, which maintains backwards compatibility. However, it is not allowed to have a file with both an unnamed section and a section named global. That is, for concatenation to work, all concatenated files must declare a global section explicitly.
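
    The composition behavior can be pictured with Python's configparser, which merges duplicate sections when strict checking is disabled (a sketch of the concept only; lb-web uses its own parser, and the attribute names below are invented):

    ```python
    from configparser import ConfigParser

    # Two config files that each declare an explicit [global] section
    # can be concatenated, and their global attributes merge.
    file_a = "[global]\nlog_level = info\n[service_a]\nport = 8080\n"
    file_b = "[global]\nworkers = 4\n[service_b]\nport = 8081\n"

    parser = ConfigParser(strict=False)   # tolerate the duplicate [global]
    parser.read_string(file_a + file_b)

    print(dict(parser["global"]))
    # {'log_level': 'info', 'workers': '4'}
    ```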

Corrected Issues

The issues listed below have been corrected since the 4.1.0 release.

  • Resolved an issue with plugin logic that prevented it from referencing predicates that are transaction lifetime.
  • Resolved an issue that prevented lb-web server and client from loading multiple config files with empty sections that have the same name.
  • Resolved an issue with loading a measure with the delimited file import service that caused the import to fail when a line in the file ended with a double quote followed by the delimiter.

Installation and Upgrade information

Installation Instructions

Installing LogicBlox 4.1.1 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.1/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Upgrade Information

  • Previously, dialogues were implemented via installed delta logic. However, now that the measure service no longer installs delta logic, it was no longer possible to read from a dialogue metric at stage initial. The dialogue logic is now in an inactive block that is activated as needed. As such, dialogues no longer need a request predicate and the request_pred field has been removed from the Dialogue message. A new required field, block, has been added to provide the name of the inactive block to be activated when using the dialogue.
  • When sending a protocol buffer update expression, developers now need to supply the expected type of the input, if any.
  • Because it is now possible to make multiple query requests in a single measure service protobuf Request message, the report columns are no longer contained directly in the Response message; instead, the columns from each request are found in the repeated report field.

Deprecated Features

  • The report_name field of a QueryRequest is now unnecessary, as reporting predicates are no longer installed, and is not read by the measure service. It will be removed in an upcoming release.
  • Key requests are now syntactic sugar for adding specific expressions to a QueryRequest. As such, they are no longer necessary and will be removed in an upcoming release.

Release Information

Table 9. 

Server requirements
Operating System: 64 bit Linux
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher

LogicBlox 4.1.0

Release Date: April 1st 2014

Executive Summary

LogicBlox 4.1 marks the first 4.x release that is feature compatible with LogicBlox 3.x for applications built on the service-oriented architecture. LogicBlox 4.1 includes full support for two of the most commonly used services: tabular data exchange and measure service.

  • Tabular data exchange in 4.1 is now feature compatible with 3.x. Certain configuration differences exist. Please refer to the LB4 migration guide for details.
  • The performance issues for the measure service have been addressed in this release, and it has at least comparable performance with LogicBlox 3.x, with more rigorous benchmarks forthcoming with the next releases.

What's New

Language Semantics
  • New arithmetic built-ins: Added built-in support for:
    • float:arccos: to calculate the arccosine of its argument.
    • float:arcsine: to calculate the arcsine of its argument.
    • float:arctan: to calculate the arctangent of its argument.
    Please refer to the LogicBlox Reference Manual for a complete overview of all the built-in predicates.
  • Updates to File Predicates:
    • When using a delimited file predicate, it is now required to specify whether the file predicate will be used for import or for export. This is done through a new setting lang:physical:fileMode[`_file]=... where the possible values are import, export and import_detailed. If no file mode is specified, import is assumed.
    • The signature of import delimited file predicates now includes an extra argument of integer type, that is used for specifying the offset (in bytes) at which a record occurs in the file. This new argument comes first and is followed by a semicolon.

      Example 19. 

      Instead of

      _file(SKU,WEEK,STORE,SALES)

      one should now write

      _file(offset; SKU,WEEK,STORE,SALES)

      or simply

      _file(_; SKU,WEEK,STORE,SALES).

    • The signature of import_detailed delimited file predicates extends that of import delimited file predicates with two further attributes, which come last and are used for advanced error reporting.

      Example 20. 

      For example, if _file is import_detailed then instead of

      _file(SKU,WEEK,STORE,SALES)

      we write

      _file(offset; SKU,WEEK,STORE,SALES, errorCode, rawLine)

      where errorCode is an integer that is non-zero if the record could not be parsed correctly, and rawLine is a string containing the entire (unparsed) line of the input file.

    Note

    The signature of "export" delimited file predicates remains unchanged (and, in particular, does not include an offset attribute).

Services Framework

The Tabular Exchange Services are now feature complete with respect to LogicBlox 3.x. LogicBlox 4.1.0 includes support for the following features:

  • Transform Functions: TDX now supports Transform Functions that can be used to transform values prior to importing values to predicates in a workspace or to exporting values to files. The feature can be used to achieve the same effect as the "transforms" feature of TDX on LogicBlox 3.x, but offers more flexibility since developers can define their own functions. Please refer to the LogicBlox Reference Manual for an overview of the transformation functions supported and some examples.
  • Auto primitive and refmode conversions: While generating code for imports and exports, TDX may include primitive and refmode conversions if necessary. This makes it possible, for example, to bind a string column to an int value: (TDX will generate string:int:convert for imports and int:string:convert for exports). Multiple conversions may be necessary. For example, binding a string column to an entity with a float refmode needs a string:float:convert followed by a refmode lookup on imports.
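
    The chain of conversions on import can be pictured as follows (an illustrative Python sketch; the predicate and value names are invented, and TDX actually generates LogiQL conversions such as string:float:convert rather than running code like this):

    ```python
    # A string column bound to an entity with a float refmode goes through
    # two steps: a string->float conversion, then a refmode lookup.
    entity_by_refmode = {1.5: "discount_tier_a", 2.0: "discount_tier_b"}

    def import_cell(raw):
        value = float(raw)               # string:float:convert
        return entity_by_refmode[value]  # refmode lookup

    print(import_cell("1.5"))
    # discount_tier_a
    ```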
  • New column format "char": Added support for the char TDX column format. This format checks that the values have a single (string or integer) character.
  • Abort on error: A TDX service now by default aborts the transaction on any error. To allow partial imports, the allow_partial_import flag must be set.

    Example 21. 

    service_by_prefix["/a-tdx-service"] = x,
    delim_service(x) {
          delim_file_binding[] = "file-binding",
          allow_partial_import()
    }.
    

    Note

    In LogicBlox 3.x, the default behavior for a TDX service was to accept partial imports. During an import, TDX would accept any correct row, and would discard rows with errors.

  • TDX import now reports back rows that are malformed, i.e., that cannot be parsed by a file predicate according to the specified columns. The whole erroneous line is returned in the CAUSE column, with a MALFORMED_ROW CAUSE_CODE.

    Example 22. 

    For example the following file contains two columns:

    USER|LOCALE
    John|en
    Mary|en|
    Joao|pt
    
    Maria
    "Johannes"|de|zuviel
    

    This file contains the following errors:

    • Mary has 3 columns, thanks to the last |
    • Maria has a single column
    • Johannes has 3 columns

    Note that empty lines are discarded. A POST to the service would return the following file:

    USER|LOCALE|CAUSE|CAUSE_CODE
    ||"Mary|en|"|MALFORMED_ROW
    ||"Maria"|MALFORMED_ROW
    ||"Johannes|de|zuviel"|MALFORMED_ROW
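
    The detection rule can be modeled in a few lines of Python (illustrative only; TDX performs this check while parsing the file predicate): rows whose field count differs from the header are reported with the whole line as CAUSE, and empty lines are discarded.

    ```python
    def check_rows(lines, expected_columns, delimiter="|"):
        """Flag rows whose field count does not match the header."""
        errors = []
        for line in lines:
            if not line:
                continue  # empty lines are discarded
            if len(line.split(delimiter)) != expected_columns:
                errors.append((line, "MALFORMED_ROW"))
        return errors

    rows = ["John|en", "Mary|en|", "Joao|pt", "", "Maria", '"Johannes"|de|zuviel']
    print(check_rows(rows, expected_columns=2))
    # [('Mary|en|', 'MALFORMED_ROW'), ('Maria', 'MALFORMED_ROW'),
    #  ('"Johannes"|de|zuviel', 'MALFORMED_ROW')]
    ```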
    

  • Un/zip support: TDX now supports GZIP compression when importing or exporting data from a workspace.

    Note

    Please note that if an uncompressed file is passed to lb web-client, you must explicitly indicate that compression is not used, via the -n or --no-compression flag.
  • Offset exposed as file header: TDX now exposes a file offset as a special (reserved) column header named TDX_OFFSET, which can then be used in predicate bindings.

    Example 23. 

    In the example below we bind the sales file, which has a single column, to the predicate measures:sales, which is a functional predicate from int to int.

    sales[x] = v -> int(x), int(v).
    
    file_definition_by_name["sales"] = fd,
    file_definition(fd) {
      file_delimiter[] = "|",
      column_headers[] = "SALES",
      column_formats[] = "int"
    },
    file_binding_by_name["sales"] = fb,
    file_binding(fb) {
      file_binding_definition_name[] = "sales",       
      predicate_binding_by_name["measures:sales"] =
        predicate_binding(_) {
          predicate_binding_columns[] = "TDX_OFFSET, SALES"
        }
    }.
    

    Upon import, the offset of each row in the file is bound to the TDX_OFFSET column and then used to populate measures:sales. Upon export, the value in the column bound to TDX_OFFSET is ignored (equivalent to measures:sales[_] = SALES).

    TDX_OFFSET works as any integer column: it can be bound to an entity, accumulated, be subject of transformation functions, etc. The value of the offset is the number of bytes from the beginning of the file up to the row being imported and, therefore, guaranteed to be unique and monotonically increasing in the file being imported.
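
    The offset values can be reproduced outside TDX; this Python sketch (illustrative only, not part of the product) computes the byte offset of each row of a delimited file, showing why the offsets are unique and monotonically increasing:

    ```python
    def row_offsets(data: bytes):
        """Return (byte offset, row text) for each row of a file's contents."""
        offsets, pos = [], 0
        for line in data.splitlines(keepends=True):
            offsets.append((pos, line.rstrip(b"\r\n").decode()))
            pos += len(line)  # offsets count bytes from the start of the file
        return offsets

    print(row_offsets(b"10\n20\n300\n"))
    # [(0, '10'), (3, '20'), (6, '300')]
    ```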

Other new features of the LogicBlox Services Framework:

  • Lazy services: Services can now be configured to be lazy, in which case they will only be initialized when first used. This new feature may be used to speed up initialization times, with the drawback that errors are only caught when the service is used.

    Example 24. 

    In the example below the TDX lazy service will only be initialized the first time /sales is used.

    service_by_prefix[ "/sales" ] = x,
    delim_service(x)  {
        delim_file_binding[] = "sales",
        lazy()
    }.
    
  • Support for declaring target workspace in services: It is now possible to have a workspace declare a service that is executed in another workspace. This allows, for example, speeding up the web-server startup time by scanning only the workspaces that declare services, especially when combined with lazy services.

    Example 25. 

    The declaration below can be installed in a service-declaration-workspace. When the web-server scans this workspace, it will load the service. However, because it is lazy, it will not immediately initialize the service. When the first request to /sales arrives, the web-server will initialize the service, which includes looking up the sales file binding. The service will then look up sales in the service-host-workspace, and will execute every service request in that workspace.

    service_by_prefix[ "/sales" ] = x,
    delim_service(x) {
        delim_file_binding[] = "sales" ,
        service_host_workspace[] = "service-host-workspace",
        lazy()
        }.
    

Measure Service

  • Protobuf-based update requests: User edits for updates/spreads can now be sent via a protobuf-based format, in addition to TDX.

    Example 26. 

    kind: UPDATE
    update_request {
      expr {
        kind: SPREAD
        metric: "Population"
        inter {
          qualified_level {
            dimension : "BadGuy"
            level : "Foe"
          }
        }
      }
      source {
        intersection {
          qualified_level {
            dimension: "BadGuy"
            level: "Foe"
          }
        }
        type: { kind: FLOAT }
        column {
          string_column {
            value: "Moblin"
            value: "Like Like"
          }
        }
        column {
          float_column {
            value: 42.0
            value: 312.0
          }
        }
      }
    }
    
  • Multi-measure updates: A single update request may now contain more than one update. This is only accessible through the new protobuf-based update interface. Updates can be made to several metrics, including multiple updates to the same metric. However, if multiple edits are made to the same metric, all input intersections must be comparable in the lattice order.
  • Named and composite aggregations and spreads: It is now possible to specify the aggregation and spreading method by name. This is mostly useful for naming a composite aggregation/spreading, but it can also be used to alias primitive aggregation/spreading methods. While composite aggregations allow specifying that a particular primitive aggregation method be used when aggregating up along a given dimension, composite spreads can be used to specify that a particular primitive spreading method be used when spreading down along a given dimension.

    Example 27. Named and composite aggregations

    Named aggregations can be defined via the aggregation field of the MeasureModel message, which is a set of AggDef messages. Using our logic configuration library, a primitive aggregation may be aliased via

    +measure:config:aggregation("Sum") {
      +measure:config:aggregation_primitive[]="TOTAL"
    }.
    

    A composite aggregation can be defined like

    +measure:config:aggregation("TotalThenMax") {
      +measure:config:aggregation_composite[0,"Product"]="TOTAL", 
      +measure:config:aggregation_composite[1,"Calendar"]="MAX"
    }.
    

    Note

    Note that the order is important, thus the need to provide the index.

    A named aggregation can then be used anywhere an aggregation method may be specified. In JSON, for example, we could write:

    "method": { "named": "TotalThenMax" },
    

    Example 28. Named and Composite Spreads

    Named spreads can be defined via the spread field of the MeasureModel message, which is a set of SpreadDef messages. Using our logic configuration library, a primitive spread may be aliased via

    +measure:config:spread("RatioEven") {
      +measure:config:spread_primitive[]="RATIO"
    }.
    

    A composite spread can be defined like

    +measure:config:spread("RatioThenEven") {
      +measure:config:spread_composite[0,"Product"]="RATIO",
      +measure:config:spread_composite[1,"Calendar"]="EVEN"
    }.
    

    Note

    Similar to the composite aggregations, the order is also important when defining composite spreads, thus the need to provide the index.

    A named spread can then be used anywhere a spread kind may be specified. In JSON, for example, we could write:

    "spread_kind": { "named": "RatioThenEven" },
    

  • Measure language rule installation: The measure service now understands the measure rule language that was used in Blade applications. It is possible to install such rules into the measure service using an InstallRequest that now has a string field called rules. Alternatively, the command-line measure install tool can be used to install rules like:
    lb measure-service install --rules --uri http://localhost:8080/measure my.rules
    
    Successfully installed rules are persisted in the workspace backing the measure service, so that if the measure service is restarted you do not need to reinstall the rules.

    Note

    The measure service currently uses a more naive strategy for selecting primary formulas than the LogicBlox 3.x measure engine, by simply attempting to pick the first formula of each rule.

  • Recalc metrics: It is now possible to specify that a metric be backed by a recalc rule when requesting it at an intersection other than the base.

    Example 29. 

    In the example configuration logic below a recalc metric NetSales is defined by

    +measure:config:metric("NetSales") {
      +measure:config:metric_usesRule[]="NetSales",
      +measure:config:metric_hasIntersection[]="sku,store,week",
      +measure:config:metric_hasType[]="FLOAT"
    }.
    

    The usesRule line tells the measure service to use the first formula in the given rule as the formula to be generated at the requested intersection. Then if NetSales is requested at an intersection other than base, like

    "expr": {
      "kind": "METRIC",
      "metric": { 
        "name": "NetSales",    
        "inter": { 
          "qualified_level": [ 
            { "dimension": "Product", "level": "class" },
            { "dimension": "Location", "level": "region" },
            { "dimension": "Calendar", "level": "month" } ]
       }
     }
    }
    

    the appropriate recalc logic will be generated as needed.

  • Optional predicate specifications in configuration logic: In the configuration logic, it is now possible to omit the name of the predicate backing a metric if it matches the metric name.

    Example 30. 

    Instead of

    +measure:config:metric("PriceCloseDollars") {
      +measure:config:metric_usesPredicate[] = "PriceCloseDollars",
      +measure:config:metric_hasIntersection[] = "Security,Day",
      +measure:config:metric_hasType[]="DECIMAL"
    }.
    

    it is now possible to simply write

    +measure:config:metric("PriceCloseDollars") {
      +measure:config:metric_hasIntersection[] = "Security,Day",
      +measure:config:metric_hasType[]="DECIMAL"
    }.
    

  • Default aggregation of metrics: If a metric has a default aggregation (or is backed by a recalc rule), it is possible to request it at a specific intersection directly, rather than first having to start at its base intersection.

    Example 31. 

    For example, if the default aggregation method of the Sales metric is TOTAL, instead of writing

    "expr": { "kind": "AGGREGATION",
      "aggregation": {
        "method": { "primitive": "TOTAL" },
        "expr": {
          "kind": "METRIC",
          "metric": { "name": "Sales" }
        },  
        "grouping": [ { "kind": "ALL", "dimension": "Product" } ]
      }
    }
    

    you can now write

    "expr": {
      "kind": "METRIC",
      "metric": {
        "name": "Sales",
        "inter": {
          "qualified_level": [
            { "dimension": "Location", "level": "store" },
            { "dimension": "Calendar", "level": "week" } ]
        }
      }
    }
    

  • Aggregation to intersection: It is now possible to specify directly the intersection of the desired result when aggregating a measure expression.

    Example 32. 

    In earlier releases, to aggregate a measure expression it was necessary to specify a set of grouping operators to indirectly describe the intersection for the result of the aggregation:

    "expr": { "kind": "AGGREGATION",
      "aggregation": {
        "method": { "primitive": "TOTAL" },
        "expr": {
          "kind": "METRIC",
          "metric": { "name": "Sales" }
        },  
        "grouping": [ { "kind": "ALL", "dimension": "Product" } ]
      }
    }
    

    Now it is possible to write the following:

    "expr": { "kind": "AGGREGATION",
      "aggregation": {
        "method": { "primitive": "TOTAL" },
        "expr": {
          "kind": "METRIC",
          "metric": { "name": "Sales" }
        },  
        "inter": { 
          "qualified_level": [ 
            { "dimension": "Location", "level": "store" },
            { "dimension": "Calendar", "level": "week" } ]
        }
      }
    }
    

  • New and changed spreading methods:
    • Measure service EVEN spreading has been renamed to DELTA.

      Example 33. 

                            Before   DELTA spread   After
      P1                    4.0                     6.0
      P2                    2.0                     4.0
      Q (Total P1+P2)       6.0      10.0           10.0
    • When using the new EVEN spreading operator, the new value of an aggregated predicate is spread down evenly over the lower level predicates.

      Example 34. 

                            Before   EVEN spread   After
      P1                    4.0                    5.0
      P2                    2.0                    5.0
      Q (Total P1+P2)       6.0      10.0          10.0
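
The arithmetic behind the two tables can be sketched in a few lines of Python. This is an illustration of the semantics described above, not measure-service code: DELTA distributes the change in the aggregate evenly over the children, while EVEN replaces each child with an equal share of the new aggregate.

```python
def delta_spread(children, new_total):
    """DELTA: distribute the *change* in the total evenly over the children."""
    delta = new_total - sum(children)
    share = delta / len(children)
    return [c + share for c in children]

def even_spread(children, new_total):
    """EVEN: replace each child with an equal share of the new total."""
    share = new_total / len(children)
    return [share for _ in children]

# Matches Examples 33 and 34: P1 = 4.0, P2 = 2.0, total changed from 6.0 to 10.0.
print(delta_spread([4.0, 2.0], 10.0))  # [6.0, 4.0]
print(even_spread([4.0, 2.0], 10.0))   # [5.0, 5.0]
```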

Corrected Issues

The issues listed below have been corrected since the 4.0.8 release.

  • Resolved an issue where compiler-generated ordered entity predicates were not recognized when used in modules.
  • Plugin logic is now loaded with the same classloader as the handlers packaged in the jar. Previously, lb-web-server created a separate classloader for plugins, which made it impossible to share static state between a plugin and a handler.
  • Resolved an issue where any error that prevented an import via TDX (such as missing headers) caused a status 500 to be sent to the client. These errors now result in a 4xx status.
  • It is now possible to use the lb web-server import-users command to import users from a delimited file. Please refer to the LogicBlox Reference Manual for more information on this command.
  • Resolved an issue that caused an exception when TDX export involved certain refmode conversions. In particular, this issue could cause the credential services file export to fail.

Installation and Upgrade Information

Installation Instructions

Installing LogicBlox 4.1.9 is as simple as following the steps outlined below:

  1. Download the installation package.
  2. Extract the tarball in <YourPreferredInstallDirectory>
  3. Run the following command:
    source <YourPreferredInstallDirectory>/logicblox-4.1.9/etc/profile.d/logicblox.sh
    
    NOTE: this script will set all the necessary environment variables. You might want to add this command to your .bashrc.

Upgrade Information

  • Import file predicates now require an extra attribute: _file(x,y,z) is now _file(offset; x,y,z).
  • Export file predicates now require a pragma: lang:physical:fileMode[`_file] = "export".
  • Aggregation methods are no longer enums: In order to support named and composite aggregations, the method enum has been converted to a message. Existing uses of aggregation methods need to be rewritten to the new form.

    Example 35. 

    "method": "TOTAL" ,

    becomes

    "method": { "primitive": "TOTAL" } ,
  • Types are no longer enums: In order to ease expansion of types and eventually allow entity-typed metrics and attributes, type has been changed from being an enum to a message. Existing uses of type need to be rewritten to the new form.

    Example 36. 

    "type": "STRING" ,

    becomes

    "type": { "kind": "STRING" } ,
  • Spreading kinds are no longer enum: In order to support named and composite spreads, the SpreadKind enum has been converted to a message. Existing uses of spread kinds need to be rewritten to the new form.

    Example 37. 

    "spread_kind": "RATIO" ,

    becomes

    "spread_kind": { "primitive": "RATIO" } ,
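
The three enum-to-message conversions above (method, type, and spread kind) are mechanical, so existing query payloads can be migrated with a small script. The sketch below is a hypothetical helper, not part of LogicBlox, and assumes payloads are handled as plain JSON objects:

```python
import json

# Map each deprecated enum-style field to the wrapper key of its new
# message form, per the upgrade notes above.
ENUM_TO_MESSAGE = {
    "method":      "primitive",   # "TOTAL"  -> {"primitive": "TOTAL"}
    "spread_kind": "primitive",   # "RATIO"  -> {"primitive": "RATIO"}
    "type":        "kind",        # "STRING" -> {"kind": "STRING"}
}

def upgrade(node):
    """Recursively wrap old enum string values in the new message objects."""
    if isinstance(node, dict):
        out = {}
        for key, value in node.items():
            if key in ENUM_TO_MESSAGE and isinstance(value, str):
                out[key] = {ENUM_TO_MESSAGE[key]: value}
            else:
                out[key] = upgrade(value)
        return out
    if isinstance(node, list):
        return [upgrade(item) for item in node]
    return node

old = {"method": "TOTAL", "type": "STRING"}
print(json.dumps(upgrade(old)))
```

Payloads already in the new form pass through unchanged, since their field values are objects rather than strings.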
  • Ratio/even spreading semantics have changed: What was previously called "even" spreading is now called "delta". See the What's New section for more details.
  • Dimensions with more than one hierarchy are now required to have a default: In previous releases, specifying a default hierarchy was optional. Now this is required if a dimension has more than one hierarchy.

    Example 38. 

    Example of specifying a default hierarchy, when using the LogiQL measure configuration library:

    +measure:config:dimension(dim) {
      +measure:config:dimension_hasName[]="MyDim",
      +measure:config:dimension_hasDefaultHierarchy[]="MyHierarchy"
    }.
    

Permanently Removed Features

  • Metadata service is no longer supported: The previously deprecated ModelRequest and ModelResponse messages have been removed, and the MetadataService no longer exists. The model may be retrieved via the MODEL_QUERY kind of the Request message.
  • Filter comparisons only use expressions instead of terms or expressions: The previously deprecated term field has been removed from the Comparison message. Filtering is now done exclusively using the expr field of the Comparison message. Because Terms are a subset of MeasureExprs, this only requires a minimal change in queries.

    Example 39. 

    For example, the comparison

    "comparison": [ {
         "op": "EQUALS", 
         "term": {
           "kind": "CONSTANT",
             "constant": { "string_constant": "Atlanta, GA" }
          }
     } ]
    

    would be rewritten to

    "comparison": [ {
         "op": "EQUALS", 
         "expr": {
           "kind": "TERM",
           "term": {
             "kind": "CONSTANT",
               "constant": { "string_constant": "Atlanta, GA" }
            }
         }
    } ]
    

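Since the rewrite is purely structural, it can be automated. The following is a hypothetical Python helper, not part of any LogicBlox tooling, that wraps a deprecated term field into the equivalent expr:

```python
def term_to_expr(comparison):
    """Rewrite a Comparison that uses the removed 'term' field to the
    'expr' form. Terms are a subset of MeasureExprs, so wrapping the old
    term in a TERM expression is sufficient."""
    if "term" in comparison:
        comparison = dict(comparison)  # leave the caller's object untouched
        comparison["expr"] = {"kind": "TERM", "term": comparison.pop("term")}
    return comparison

old = {
    "op": "EQUALS",
    "term": {"kind": "CONSTANT",
             "constant": {"string_constant": "Atlanta, GA"}},
}
print(term_to_expr(old))
```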
  • Legacy measure expression syntax is no longer supported: The previously deprecated field measure_text has been removed from the Binding and InstallRequest messages.

Release Information

Table 10. 

Server requirements
Operating System: 64 bit Linux
Java Runtime Environment 1.7, update 11 or higher
Python 2.7 or higher