
Introduction

Zenoo provides a dedicated platform for building, defining and orchestrating Digital Onboarding (DO) processes. The Zenoo Hub makes it possible to reconfigure a process and alter its orchestration.

Purpose

The Zenoo Platform has been built from the ground up by experienced developers, product managers and UX experts to solve a number of challenges facing businesses that need to onboard customers.

Zenoo is:

Our aim is to arm developers with a toolkit that makes building, managing and optimizing customer interactions less burdensome and more enjoyable, while improving the bottom line for businesses who embrace our approach. We do this by ensuring each customer interaction is unique and optimized to maximise conversions.

Easy onboarding with Zenoo

The Zenoo architecture has been built with an understanding that not all onboarding channels or customers are the same. With this in mind, the Zenoo Hub can initiate an onboarding experience either when a customer takes action (such as clicking on a calculator) or through an API (onboard this customer).

As an example, if a customer applies for a loan on a partner website, the process would be as follows:

  1. The customer visits your website and is asked to complete a DO process.
  2. The website initiates a specific DO process by redirecting the customer to the Zenoo Hub Client.
  3. The Zenoo Hub Client engages the Hub Backend API to manage the DO process orchestration, processing data, performing checks, and other functions.
  4. The website receives the DO process result and responds with a redirect URL.
  5. The customer is redirected back to the website using the redirect URL.

Hub Backend

More details can be found under specific sections below.

Architectural overview

At the core of the Zenoo Hub is a workflow engine that executes Hub DSL scripts. The DSL scripts are used for orchestrating digital-onboarding processes as a series of pages, data transformations and external calls.

The DSL-based approach makes it possible to specify digital-onboarding processes in a concise manner. It enables developers to focus on the business logic rather than the complexities of distributed systems.

The Zenoo Hub is built on top of Apache Kafka using event-streaming and micro-service architecture. It makes the Hub highly scalable and fault-tolerant.

Each workflow execution produces a detailed log of Execution events that can be used for troubleshooting as well as analytics purposes.


DSL execution engine

At the core of the Zenoo Hub is the DSL execution engine. It executes the Hub DSL scripts that are used for orchestrating digital-onboarding processes. The host language for DSL is Groovy.

The DSL scripts are versioned and stored in a Component repository as Hub Components. The Hub employs a component model to facilitate reusability, testability and configurability, making it possible to build new components from existing ones.

Each workflow execution is assigned an Execution Context that stores the current state of the execution. The execution contexts are persisted and retrieved using a Kafka Streams state store. Leveraging Kafka fault-tolerance capabilities, a replicated changelog topic is maintained to track any state updates.

Each workflow execution produces a detailed log of Execution events. These include life-cycle events, execution requests, responses, errors, executed commands, results etc. The execution events can be very useful for troubleshooting as well as analytics purposes.

More details can be found here.

Hub Client (Frontend)

A Hub Client facilitates an interaction between the Hub and an end user. From a Hub Client perspective, a customer journey is a series of pages. It relies on the Hub to determine what page to display next. Apart from that, it gathers user input and submits data back to the Hub via Hub Client API.

A Hub client uses the Hub Client API for the following:

- to start a new execution using a target or sharable token
- to submit user input and resume the execution
- to query execution state and current route
- to upload files using File cache
- to execute route functions

Component repository

The Hub DSL scripts are stored in a Component repository as Hub Components with the support for versioning.

A component model is employed to facilitate reusability, testability and configurability of Hub components, enabling a development model where new components are built from existing ones.

The Admin API makes it possible to register, query and validate Hub components on-the-fly. This enables making changes without the need to rebuild and redeploy the Hub.

Connector exchanges

Connectors are the integration points of the entire workflow orchestration. They are wrapped by exchange commands used within the DSL.

Throughout the workflow execution, external/internal providers can be called by means of exchanges that trigger the connectors. The connectors fetch results and decide in each step what to do with the provider responses accordingly.

Monitoring

The Zenoo Hub employs Micrometer — a vendor-neutral application metrics facade — to integrate with the most popular monitoring systems.

Micrometer has a built-in support for AppOptics, Azure Monitor, Netflix Atlas, CloudWatch, Datadog, Dynatrace, Elastic, Ganglia, Graphite, Humio, Influx/Telegraf, JMX, KairosDB, New Relic, Prometheus, SignalFx, Google Stackdriver, StatsD, and Wavefront.

More details can be found here.

Component model

The Zenoo Hub employs a component model to enable a development model where an onboarding solution is composed of components.

Components are reusable building blocks that are configurable and testable. Each component provides a cohesive piece of functionality that is well tested, documented and can be reused in different contexts or clients.

This approach reduces complexity in many aspects of development. Building from smaller, well-tested pieces of functionality becomes significantly simpler and more manageable.

Let's review an example onboarding project that is assembled from several components. Two of those, otp and document-check, are ready-to-use components.

- huddle component contains the main workflow, business logic and project-specific configuration.
- huddle.routes component contains project-specific route definitions.
- document-check component provides ID document and liveness check functionality via a set of workflows and functions; the configuration includes the country and API credentials.
- otp component provides a workflow to verify an SMS OTP code using a customer mobile; the configuration includes the number of retries, country code, OTP provider, etc.


Hub Component

A Hub Component is a reusable building block providing a cohesive piece of functionality that is configurable and testable.

Each component is identified by a unique name and version. It explicitly declares its dependencies on the other components that it uses.

It defines one or more DSL closures that are executed by the DSL engine; these include target, workflow, function, mapper, route and exchange.

A Hub component is defined as follows:
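
For illustration, a minimal component definition might combine a dependencies block, a target and a workflow, as sketched below. All names are placeholders; the individual closures are described in the following sections.

dependencies {
    component 'zenoo.otp:2.4'
}

target {
    workflow('main')
}

workflow('main') {
    route('welcome') {
        uri '/welcome'
    }
}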

Component repository

Hub components are stored in a component repository. The components are then retrieved by the Execution Engine when a new execution is triggered.

The components are stored in the components Kafka topic with an indefinite retention policy. In addition, the repository keeps track of the latest component versions using the components-latest Kafka topic.

The process of registering a new Hub component consists of several steps:

The component repository provides a REST API for registering, validating and querying Hub components, see Admin API.

Additionally, Hub components can be registered automatically at the application start using ComponentConfigurer.

Component DSL

A Hub component defines one or more DSL closures and a set of dependencies on other components and connectors using the Component DSL.

Target

A target specifies a workflow or a function that can be executed via the Hub Client API.

A Hub component can define only a single target. It acts as an entry point for a given onboarding process.

In addition, a target specifies:

- a custom configuration for component dependencies, like API credentials for connectors

target {
    workflow('workflow-name')
}

Workflow

Defines a workflow with name using the [Hub DSL](#hub-domain-specific-language-dsl) as a series of routes, exchanges, workflows, functions, etc.

In the workflow definition, it is possible to use the DSL closures defined within the component and declared dependencies.

workflow('name') {
    definition
}

Function

Defines a function with name using the [Hub DSL](#hub-domain-specific-language-dsl) as a series of exchanges, functions, data mappings and transformations.

In the function definition, it is possible to use the DSL closures defined within the component and declared dependencies. A valid function does not contain any route or workflow.

function('name') {
    definition
}

Route

A route corresponds to a user interaction, a web page or a screen, depending on a Hub Client implementation.

A route can be used in a workflow using the route name, see more details.

route('name') {
    uri '/uri'
    export data
    validate { payload specification }
    checkpoint()
    terminal()
}
Exchange

An exchange is a connector proxy. It makes external (API) calls using an HTTP connector or a custom connector. It provides the following tools for handling connector failures:

An exchange can be used in a function or workflow using the exchange name, see more details.

exchange('name') {
    http {
        definition
    }
    fallback {
        definition
    }
    validate {
        payload specification
    }
}
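
A workflow can then reference the exchange by its name, for example (a minimal sketch with illustrative names):

workflow('status-check') {
    exchange('name')
}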

Mapper

An attribute mapper transforms an input into a result using an expression. It may be used for data mappings, transformations, calculations, etc.

    mapper("name") {
        input ->
            expression
    }

A mapper can be used in a function or workflow using the mapper name, see more details.

As an example, the following mapper generates a client full-name using the firstname and lastname.

    mapper("client-fullname") {
        input -> [ fullname: "$input.client.firstname $input.client.lastname" ]
    }
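
The mapper can then be referenced by name from a workflow or function, for example (an illustrative sketch; see the mapper command for details):

workflow('test') {
    mapper('client-fullname') {
        namespace client
    }
}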

Dependencies

A Hub component explicitly specifies its dependencies to other components and connectors.

The dependencies are declared as part of the component definition using dependencies block.

A component dependency is referenced using a component name@version. The latest version is used if omitted.

A connector dependency is referenced using a connector fully qualified name.

Optionally, you can configure a component dependency by providing a configuration as below. The configuration is then accessible as context attributes for workflows, functions, etc.

For example

dependencies {
    connector 'sms@otp:1.2.0'
    component 'zenoo.playground'
    component 'zenoo.otp:2.4', [countryCode: '+420', tries: 3]
}
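
Inside the zenoo.otp component's workflows and functions, the configuration above would then be available as context attributes, for example (a sketch assuming the countryCode attribute name from the configuration above):

mapper('prefix-mobile') {
    input -> [mobile: "${countryCode}${input.mobile}"]
}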

Workflow Execution Engine

At the heart of the Zenoo Hub is a workflow engine that executes Hub DSL scripts. These DSL scripts are then used for orchestrating corresponding digital-onboarding processes as a series of pages (routes), external calls, etc.

The DSL scripts are versioned and stored in the component repository as Hub components. This approach makes it possible to make changes on-the-fly without having to rebuild and redeploy the Zenoo Hub.

There are two types of executable DSL scripts:

Execution context

Each workflow execution is assigned an Execution Context that stores the current state of the execution.

An Execution context stores the following:

Execution life-cycle

A new execution is triggered by an Execute request. Typically, an Execute request is generated by a Hub Client via the Hub Client API.

In addition, executions may produce Execute requests to trigger sub-workflow, function or route function executions.

An execution is terminated when one of the following criteria is met:

An execution becomes expired when the execution duration exceeds the configured corresponding expiration, see here.

When an execution terminates or expires, the corresponding execution context is discarded.

Execution model

An execution can be thought of as a series of DSL commands based on the DSL script being executed.

When a command finishes, the corresponding command result gets stored as a context attribute using the command namespace setting.

Once a command result is set, it can be used by subsequent commands and for making flow control decisions.
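
As a sketch, the result of one command, stored under its namespace, can drive a flow-control decision for the next (the attribute and route names are illustrative):

exchange('score-api') {
    http {
        url config.api.url
    }
    namespace score
}

match (score.status == 'APPROVED') {
    route 'approved'
}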



Execution processors

At the core, the execution engine uses a stateful Kafka Streams processor to process incoming execution requests and produce corresponding responses.

It uses a state store to persist and retrieve corresponding Execution contexts. Leveraging Kafka fault-tolerance capabilities, a replicated changelog topic is maintained to track any state updates.


The Execution processor processes incoming Execution requests stored in execution-requests Kafka topic.

There are several ways execution requests are produced:

Each execution produces a detailed log of Execution events stored in execution-events Kafka topic. These include life-cycle events, execution requests, responses, errors, executed commands, results etc.

The Execution processor produces Execution responses stored in execution-responses Kafka topic. These include routes, function results and errors.

The API Gateway uses the execution responses for corresponding request queries, see Request API.

In addition, the Execution processor produces Exchange requests stored in exchanges Kafka topic that are handled by the Exchange Processor.

Context Attributes

The execution context attributes store JSON-like data related to an execution, like user input, connector responses and configuration. The attributes are used for sharing data between different DSL commands and making flow control decisions.

An attribute is accessed by its key using . for hierarchical access, e.g. client.address.city

Command namespace

Throughout a workflow execution, DSL commands store their results as context attributes using namespace as attribute key.

The whole attribute namespace is overwritten and any existing attributes stored within the namespace are lost.

For example, a client route result will be stored in the application.client namespace.

route('client') {
   uri '/client'
   namespace application.client
}

Setting using <<

In addition, it is possible to set context attributes directly using <<.

config.logo << 'http://logo.png'
products << [product1: "Product1", product2: "Product2"]

The << operator merges existing namespace with the specified payload, unlike using a command namespace. It can be used for gathering data from multiple commands using the same namespace.

application << route('basic info')
application << route('advanced info')

Default values

The Elvis operator (?:) can be used for providing a default value when an attribute is not set.

config.retries ?: 3

Remove attribute

To remove an attribute namespace use remove namespace DSL command.

remove client.test
remove 'toRemove'

Payload validation

It is possible to specify data structures and constraints for an attribute payload, see Payload specification for more details.

The payload specification is then used for payload validation using the validate or require blocks.

Require payload

Checks if a given attribute is set and matches a payload specification. Otherwise, the corresponding execution terminates with an error. It can be used for enforcing data constraints in a workflow, like input attributes. Also, the require() expression result can be used for setting another attribute.

Example: check if input.test is not empty and set the test attribute:

input ->
    test << require(input.test)

Example: check if input contains firstname and lastname:

input ->
    require(input) {
        firstname
        lastname
    }

Payload specification

A payload specification defines the attribute payload data structure and constraints.

For key-value maps, it is possible to specify each key name and the corresponding data constraints for values using the provided validators. If no validator is specified, a required() validator is used by default.

You can use the following validators:

A validate example:

validate {
    firstname
    lastname
    address {
        city { oneOf "Prague", "Paris" }
        zip { regex ~/^[0-9]{5}(?:-[0-9]{4})?$/ }
    }
    age { number() }
    idFront { file 'image' }
}

See validate examples for more details.

Validate payload Examples

validate {
    mobile
}
validate {
     mobile { regex ~/[0-9]{5}/ }
}
validate {
     product { oneOf "product1", "product2", "product3"}
}
validate {
    firstname
    lastname
    address {
        street
        city
        state
        zip
    }
}

Hub Domain Specific Language (DSL)

The Hub DSL provides an implementation model for expressing digital onboarding solutions in a concise manner without superfluous details, letting developers focus on the business logic.

In addition to its main purpose, the Hub DSL supports these objectives:

The Hub DSL provides the following features:

DSL Commands

Route

A route represents an interaction with a user.

Typically, the goal is to display route-specific information and gather input from the user. A route is rendered by a Hub Client as a web page or mobile app screen, depending on the Hub Client implementation.

A route is identified by its name and can be used in a workflow definition. A minimal definition specifies a route uri intended for a Hub Client.

route('name') {
    uri '/uri'
}
Definition and usage

It is possible to provide a route definition inline within a workflow definition.

workflow('test') {
    route('name') {
        uri '/uri'
    }
}

Another option is to define a route as part of a Hub component and use it in a workflow by referencing the route name. This approach facilitates route reusability and separation of concerns.

route('name') {
    uri '/uri'
}

workflow('test') {
    route('name')
}

Additionally, it is possible to reference a route by name and provide additional details when used in a workflow. This allows for separating a route definition (uri, data constraints) and usage (export, namespace, checkpoint).

route('client') {
    uri '/client'
    validate {
        firstname
        lastname
    }
}

workflow('test') {
    route('client') {
        export documents
        namespace application.client
    }
}
Route Result

A route result is stored using a namespace attribute key.

route('client-info') {
  uri '/client-info'
  namespace client
}

If a validate block is specified, a route result is validated before storing the result and resuming the execution. The route submit request results in a validation error if the validation fails.

route('client-info') {
    uri '/client-info'
    namespace client
    validate {
      firstname
      lastname
      idFront { file 'image'}
    }
}
Exporting data

To pass data to a route, export is used. Any JSON-like data can be exported, using context attributes or serializable values.

products << [product1: "Product1", product2: "Product2"]
route('products') {
    uri '/products'
    export products
}

route('greeting') {
    uri '/greeting'
    export message: "Hello world!"
}
Route check-point

A route can be marked as a check-point, meaning that going back to the previous route is disabled.

route('finish') {
    uri '/finish'
    checkpoint()
}
Terminal route

A terminal route marks the end of a workflow execution. The corresponding execution is terminated when a terminal route is executed.

Also, a terminal route is marked as a check-point.

route('finish') {
    uri '/finish'
    terminal()
}

In addition, it is possible to set an execution result payload using a terminal route.

route('finish') {
    uri '/finish'
    terminal(payload)
}
Route functions

A route function allows a Hub Client to execute functions in the context of the given route. A Hub client executes a route function via Hub Client API.

Some use-cases of route functions:

- dynamic queries based on user input, like auto-complete,
- asynchronous data processing, like document OCR,
- communication between different executions.

route('name') {
    uri '/uri'
    function('fnc1') {
      context initial
      namespace fnc1
    }
}

It is possible to specify one or more route functions.

See route examples for more details.

Exchange

An exchange is a connector proxy. It makes external (API) calls using an HTTP connector or a custom connector.

It provides the following tools for handling connector failures:

An exchange is executed asynchronously when marked with async().
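
For example, a fire-and-forget notification call can be marked with async() so the workflow does not wait for the connector response (a sketch; the config.audit.url and event attribute names are illustrative):

exchange('audit-log') {
    http {
        url config.audit.url
        method 'POST'
        jsonBody event
    }
    async()
}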

HTTP connector

An exchange can use a built-in HTTP connector to make external calls, see more details.

exchange('name') {
  http {
    definition
  }
}
Custom connector

Optionally, an exchange can use a custom connector with config.

exchange('name') {
  connector('custom')
  config input
}
Exchange Result

An exchange result is stored using a namespace attribute key.

exchange('localhost-api') {
  http {
    url "https://localhost:8080/api"
  }
  namespace api
}
Exchange Result Validation

If a validate block is present, an exchange result is validated before storing the result and resuming the execution. An exchange fails with an error if the result validation fails, see fallback.

exchange('status-api') {
  http {
    url config.api.url
  }
  validate {
    status
  }
  namespace api
}
Exchange Fallback

A fallback defines a workflow, function or expression that is executed when an exchange fails with an error. This may happen due to a connector error response, timeout or a failed result validation.

exchange('status-api') {
  http {
    url config.api.url
  }
  fallback {
    route 'Error'
  }
}
Exchange Timeout

It is possible to set an exchange timeout in seconds. The default value is 30 seconds.

An exchange fails with an error if the underlying connector doesn't respond within the specified timeout.

exchange('status-api') {
  connector('custom')
  timeout 10
}
Exchange Retry strategies

An exchange uses a retry strategy to retry when a connector request fails. The default strategy uses fixed delays between retry attempts.

The following retry strategies are available:

Fixed backoff

Uses fixed delays between retry attempts, given a number of retry attempts and the backoff delay in seconds.

- retry - the number of retry attempts, the default is 5
- backoff - the number of seconds between retries, the default is 5

exchange('fixed-default') {
    http {
      url config.api.url
    }
    fixedBackoffRetry()
}

exchange('fixed-custom') {
    http {
      url config.api.url
    }
    fixedBackoffRetry {
     retry 10
     backoff 2
    } 
}
Exponential backoff

Uses a randomized exponential backoff strategy, given a number of retry attempts and minimum and maximum backoff delays in seconds.

- retry - the number of retry attempts, the default is 5
- backoff - the minimum delay between retry attempts, the default is 5
- maxBackoff - the maximum delay between retry attempts, the default is 50

exchange('exp-default') {
  http {
    url config.api.url
  }
  exponentialBackoffRetry()
}

exchange('exp-custom') {
    http {
      url config.api.url
    }
    exponentialBackoffRetry {
     retry 3
     backoff 5
     maxBackoff 10
    } 
}
No retry

Does not retry when a connector request fails.

exchange('name') {
    http {
      url config.api.url
    }
    noRetry()
}

function

A function makes it possible to query dynamic data, perform complex calculations or make external calls using exchange(). Functions can be executed from a workflow or from another function.

function('mobile.lookup') {
 input mobile: '325-135856984'
 context retry: 3 
 namespace lookup
 async()
}

workflow

Executes a sub-workflow synchronously as a separate workflow execution with a different UUID. Data is passed using context and input. The execution is terminated if the sub-workflow terminates with a terminal route.

workflow('otp') {
 input mobile: '325-135856984'
 context retry: 3 
 namespace otp
}

mapper

An attribute mapper transforms an input into an attribute output using a mapper expression, see Mapper. The output gets stored in a namespace if specified. It can be used for data transformations, calculations, providing default values, etc.

mapper('name') {
 input input
 namespace namespace
}

path

Executes a registered path (a workflow snippet) specified by name. It runs as part of the current execution and can access and update the execution context.

path 'name'

Execution result

The result() and error() commands terminate the current execution successfully or with an error. In addition, an execution result or error payload can be specified.

Terminates an execution successfully with a result payload

result application
result firstName: checkIdp.firstName, lastName: checkIdp.lastName

or with an empty result payload

result()

Terminates an execution with an error using the specified payload

error "Boom"
error otp

or with an empty error payload

error()

Query execution context

Query and retrieve an active execution (not terminated or expired) using the execution command.

A current or parent execution is queried by specifying current() or parent(), respectively.

In addition, a context limits the query result to the specified attribute key/namespace. The whole execution context is returned if context is omitted.

The following example queries the parent execution context and limits the result to the counter namespace. The query result is stored in the parent namespace.

execution {
    parent()
    context counter
    namespace parent
}

sharable

Generates a sharable token or link. The token is then used to start a new workflow or function execution, continue an existing one, etc. A token expires once the corresponding execution has started and finished.

Examples of usage follow:

token << sharable { function 'function-name' }

sharable {
   reusable()
   function 'function-name'
   namespace token
}

Generates a sharable token to execute a function named function-name. The token gets stored in the token namespace.

sharable {
   url "http://localhost:1234/sharable/$token"
   workflow('workflow-name') {
      context url: 'http://localhost'
      input userId: 'dummy123'
   }
   namespace link
}

Generates a sharable link to execute a workflow named workflow-name with input and context. The link gets stored in the link namespace.

sharable {
   token 'vJRRTX'
   expired()
}

Expires a specific sharable token.

sharable {
   token current()
   expired()
}

Expires a sharable token that was used to execute the current execution.

token << sharable { current() }

Query a sharable token for the current execution.

Exporting namespaces

A context attribute namespace can be exported and queried using the Execution API.

export config

Flow control

match

Executes a DSL script definition when an expression evaluates as true. The expression can contain context attributes.

match (expression) {
    definition
}
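
For example (a sketch with an illustrative attribute and route):

match (client.age >= 18) {
    route 'adult'
}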
exist

Executes a DSL script definition when an attribute is set.

exist (attribute) {
    definition
}
switch / case

The switch statement matches an expression against cases and executes the matching case. It is a fall-through switch-case: you can share the same code for multiple matches or use the break command. It supports different kinds of matching, like collection case, regular expression case, closure case and equals case.

switch (expression) {
    case "bar":
        route "Bar"
        break

    case ~/fo*/:
        route "Foo"
        break

    case [4, 5, 6, 'inList']:
        route "Matched"
        break

    default:
        route "Default"
}

loop-until

Executes a DSL script definition until an expression evaluates as true.

loop {
    definition
} until { expression }

The maximum number of attempts can be specified.

loop(3) {
    workflow
} until { expression }

In addition, the attempt counter can be accessed as follows:

loop(3) {
  attempt ->
    route('test') {
      export attempt
    }
} until { expression }

Payload specification

It is possible to specify data structures and constraints for an attribute payload. The payload specification is then used for payload validation using the validate or require blocks.

It specifies a result (fields) structure and data constraints. A field is defined by a name, data constraints (validators) and nested fields. The default validator for a field is mandatory, i.e. a field is mandatory if listed.

You can use the following validators:

A validate example is shown below:

validate {
    firstname
    lastname
    address {
        city { oneOf "Prague", "Paris" }
        zip { regex ~/^[0-9]{5}(?:-[0-9]{4})?$/ }
    }
    skills {
        list { oneOf 1, 2, 3, 4, 5 }
    }
}

Route DSL Examples

route("Finish") {
    uri "/finish"
    terminal()
}
route("Rejected") {
    uri "/rejected"
    checkpoint()
    terminal()
}
route("Basic Info") {
    uri "/basic"
    namespace client
    validate {
        firstname
        lastname
        address {
            street
            city { values "Prague", "Paris" }
            zip { regex ~/^[0-9]{5}(?:-[0-9]{4})?$/ }
        }
    }
}
route("Select Product") {
    uri "/product"
    namespace product
    export  product1: "Product 1", 
            product2: "Product 2", 
            product3: "Product 3"
    validate {
        values "product1", "product2", "product3"
    }
}
delivery.address << [street: "Dejvicka 18", city: "Prague", zip: 12345]

route("Delivery address") {
    uri "/delivery"
    export address: delivery.address
}
register {
    route("Delivery address") {
        uri "/delivery"
        export delivery.address
    }
}

route("Delivery address") {
    export client.address
}

Execution events

Each workflow execution produces a detailed log of execution events. These include life-cycle events, execution requests, responses, errors, executed commands, results etc.

The execution events are stored in the execution-events Kafka topic. They are timestamped and correlated by execution UUID.

When aggregated, execution events can be used for troubleshooting. They can be processed for real-time metrics and analytics purposes.

Execution life-cycle

Execution requests

Execution responses


Context events

Used as an event payload in Execution Context Event. Context events are produced by the DSL executors for a particular execution. They provide a detailed log of a DSL script execution, like


Connectors


Usage

exchange('test') {
    connector('type') {
        config
    }
}

exchange('test') {
    config {
        connector config
    }
}

exchange('test') {
    http {
        http connector config
    }
}

HTTP connector

A built-in HTTP connector facilitates making HTTP calls directly from the DSL using an exchange.

It is possible to reference and use context attributes in a connector definition. The common use-cases include URL and body generation, authentication headers, etc.

An HTTP connector response is automatically converted into a context attribute based on the content type. There is a built-in support for JSON and XML content types.
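
For example, a JSON response is parsed and stored under the exchange namespace, so its fields can be read as context attributes (a sketch; the total_count field follows the GitHub search API):

exchange('github-search') {
    http {
        url "https://api.github.com/search/repositories?q=topic:${keyword}"
    }
    namespace github
}

// the parsed JSON is now available, e.g. github.total_count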

GET requests

Making an HTTP GET is as simple as providing a request url:

http {
    url 'https://request-url'
}

A url can be generated using Groovy GString and context attributes.

An example below queries GitHub repositories using a keyword attribute.

http {
  url "https://api.github.com/search/repositories?q=topic:${keyword}"
}

POST requests

An HTTP POST request has the method set to POST.

The request body is set using the payload expression result. The payload expression can reference and use available context attributes.

http {
  url "${middleware.url}/api/v1/client"
  method 'POST'
  jsonBody client
}

A JSON request body is specified using jsonBody together with application/json content type.

http {
    url 'http://localhost'
    method 'POST'
    jsonBody firstname: client.firstname, lastname: client.lastname
}

Optionally, it is possible to use a JSON builder, see JsonBuilder

http {
    url 'http://localhost'
    method 'POST'
    jsonBody {
      client {
        firstName client.firstname
        lastName client.lastname
      }
    }
}

Request method

The method specifies an HTTP request method. If omitted, the default method is GET.

The method can be one of the following:

http {
    url "/api/files/cache/${uuid}"
    method 'DELETE'
}

Request headers

The header specifies an HTTP request header.

http {
    url 'http://localhost'
    header 'X-Auth', authtoken
    method 'POST'
    body payload
}

Content type

The contentType specifies an HTTP request Content-Type header.

http {
    url 'http://localhost'
    contentType 'APPLICATION_JSON_VALUE'
    method 'POST'
    body payload
}

Form data

The formData specifies HTTP request form data using the application/x-www-form-urlencoded content type.

http {
    url 'http://localhost'
    formData 'data1', content1
    formData 'data2', content2
}

Authorization

Basic authentication

The basicAuth specifies HTTP basic authentication credentials.

http {
    url 'http://localhost'
    basicAuth 'user', 'password'
}

Hub file cache REST API

The Hub provides a REST API for caching client-uploaded files. This makes it possible to use cached file descriptors for form processing instead of multipart data. The feature also improves UX, since users can upload files individually, which feels faster than a batch upload.

POST /api/files/cache

Uploads a new file to the cache.

Request:

An example request

POST /api/files/cache HTTP/1.1
Host: localhost:8080
Content-Type: multipart/form-data

Response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8

{
  "uuid": "89828e1e-c834-42a2-86f1-893209f63ab5",
  "fileName": "my_file.pdf",
  "mimeType": "application/pdf",
  "size": 123123,
  "expiredOn": "2020-01-01T12:00:00Z"
}

DELETE /api/files/cache/{uuid}

Removes the file with uuid from the cache (deletes it from the server).

Request:

An example request

DELETE /api/files/cache/89828e1e-c834-42a2-86f1-893209f63ab5 HTTP/1.1
Host: localhost:8080

Response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8

Testing

Tests should be part of all user stories for a hub instance or a connector. For a quick start, take a look at the Instance template and Connector template. A full example with non-trivial tests can be found in the Connector tutorial OTP.

The best practice is to keep anything that calls 3rd party services in the integration folder instead of test, because you don't have control over these services, and they can be down or calling them can incur costs. Tests in the test folder should be as complete as possible, and they should use mocks for connectors to external services. Running tests in the test folder should be part of the build/release action, while tests in the integration folder should be run manually by a developer as needed.

Another best practice is to split your workflow into smaller bits; ideally, the main workflow should be just a chain of calls to single-purpose sub-workflows and functions with occasional updates of attributes. It is much more readable and more easily testable than one huge workflow of hundreds of lines. Also, whenever possible, you should first create tests for each individual sub-workflow and sub-function.

Example of a workflow that is split into smaller, more testable parts:

workflow('neo') {
    workflow('document-check') {
        namespace document
    }

    person << [
        firstName    : document.firstName,
        lastName     : document.lastName,
        fullName     : document.idp.biographic.fullName,
        dateOfBirth  : document.idp.biographic.birthDate
    ]

    function('create-lead') {
        namespace lead
        input person: person
    } 

    function('create-verification') {
        namespace verification
        input entityId: lead.id,
              verificationRequirementId: env.salesforce.verificationId
    }

    function('lookup-verification-document-ids') {
        input verificationId: verification.uuid
        namespace documentIds
    }

    function('document-verification') {                    
        input entity: lead, 
              idp: document.idp, 
              upload: document.upload.personalId,
              verificationDocumentId: documentIds.idDocument
    }

    workflow('liveness-check') {
        namespace liveness
        input upload: document.upload,
              documentId: documentIds.selfie
    }
}

Configuration of your project

The Zenoo Hub provides extensive support for testing, and you should make as much use of it as possible. Add the hub-test-starter dependency to your project's build.gradle to access the whole testing support part of the Zenoo Hub:

ext {
    hubBackendVersion = '2.135.0'
}

dependencies {
    testImplementation group: 'com.zenoo.hub', name: 'backend-spring-boot-starter-test', version: hubBackendVersion
}

You also need to fine-tune the setup for the test and integrationTest tasks:

sourceSets {
    integration {
        groovy.srcDir "$projectDir/src/integration/groovy"
        resources.srcDir "$projectDir/src/integration/resources"
        compileClasspath += main.output
        runtimeClasspath += main.output
    }
}

configurations {
    integrationRuntime.extendsFrom testRuntime
    integrationImplementation.extendsFrom testImplementation
}

test {
    useJUnitPlatform()
    testLogging {
        events "passed", "skipped", "failed"
    }
}

task integrationTest(type: Test) {
    useJUnitPlatform()
    testClassesDirs = sourceSets.integration.output.classesDirs
    classpath = sourceSets.integration.runtimeClasspath
}

processIntegrationResources {
    setDuplicatesStrategy(DuplicatesStrategy.WARN)
}

Setup of the Zenoo Hub for Tests

You will need a separate hub configuration for tests. Usually you should set ComponentConfigurer to return an empty list, because you will register components as needed for individual tests. HubConfigurer should have all the necessary connectors: mocked ones for test folder tests and real ones for integration folder tests.

Example of TestConfig class:

@Configuration
class TestConfig {

    @Bean
    @Primary
    ComponentConfigurer componentConfigurer() {
        () -> List.of()
    }

    @Bean
    @Primary
    static HubConfigurer hubConfigurer(
            HttpConnectorMock httpConnectorMock
    ) {
        return new HubConfigurer() {

            @Override
            List<ConnectorActivator> connectors() {
                return of(
                        ConnectorActivator.of(ComponentId.from('http'), httpConnectorMock as Connector<HttpConnectorSpec>)
                )
            }
        }
    }

}

Writing a test

Tests in the Zenoo Hub utilize Spock as the test framework; you can learn the basics in a tutorial on Baeldung. The easiest way to write a test is to extend WorkflowTestSpecification. It has all the necessary methods for you to test a DSL workflow or function.

1. Prepare mocks

Use Spock's given block to set up mocks as needed. The Zenoo Hub provides the MockConnectorExchange class to easily create connector mocks (see below). MockConnectorExchange implements the withResult, withError and withDelay methods that you can use to configure a connector mock.

Example:

def "verify code should pass mock call"() {
    given:
    httpConnectorMock.mockExchange.withResult([
            "status"      : "approved",
            "date_updated": "2022-07-21T05:19:21Z",
            "account_sid" : "AC1df896fc9f8d4c30b31490b5303e925e",
            "to"          : "+420123456789",
            "valid"       : true,
            "sid"         : "VE39811dee2cfdfc3b65466f44e07a8dc0",
            "date_created": "2021-07-22T05:17:44Z",
            "service_sid" : "VAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
            "channel"     : "whatsapp"
    ])

}

2. Register components and start workflow or function

WorkflowTestSpecification contains a testBuilder attribute that helps with registering and configuring components for a test. testBuilder implements several methods to serve that end:

The build method generates a testing component, registers it together with its dependencies, and finally starts a testing workflow from the testing component.

Example of testBuilder usage to set up the test:

        expect:
        def result = testBuilder.with {
            function = 'send-code'
            input = ['phoneNumber': '+420123456789', 'channel': 'whatsapp']
            addDependency(OtpConnector.otpConnector, [
                serviceSid: 'VAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
                accountSid:'AC1df896fc9f8d4c30b31490b5303e925e',
                authToken: 'lwqIK1nsxcaBwwv7Yuja5PTpdbD7czaI'
            ])
            build()
        }.getResult()

3. Check workflow steps and results

Once the workflow has started, it will pause on each route DSL command, waiting for a Hub Client to submit user input. There is a simple-to-use function submit, inherited from WorkflowTestSpecification, that you can use to simulate user data entry. You should also check that the workflow stopped on the right route at each step; for that, you can check the route part of the response() result.

Example of checking route and submitting user data:

response().route.uri == '/otp'
submit([code : 123456])

The response method returns a WorkflowTesterResponse, which, depending on the state of the execution, can become one of these types:

See ValidationResult. It has just one attribute, errors, which contains a list of ValidationError. ValidationError has just one attribute, message.

Another useful method inherited from WorkflowTestSpecification is upload. It allows you to simulate a user uploading a file through the Hub Client. The method itself uploads a file to the test hub instance and returns a FileDescriptor that you can use as a parameter for the submit method.

Example:

    @Value("classpath:test-files/idFront.jpg")
    Resource idFrontResource

    def "should pass document check"() {
        given:
        testBuilder.with {
            workflow = 'document-check'
            addDependency(NEO_WORKFLOWS)
            build()
        }

        expect:
        response().route.uri == '/id-upload'
        def idFrontUpload = upload(idFrontResource)
        submit(personalId: [idFrontUpload])
        def checkOCR = response().route
        checkOCR.uri == '/check-idp'
        submit(retry: false)
    }

In addition to testing the happy path for workflows and smoke tests for connectors, you should test for common error responses and invalid data inputs.

A connector usually does not handle errors itself; it passes them on to a workflow, which should know how to resolve them. So when testing the connector itself, you need to write DSL code just to exercise the different scenarios.

Example of check for error in connector:

Function to test a connector:

function('test-document') {
    input ->
        exchange('RDP document') {
            connector('document')
            config input
            fallback {
                'error'
            }
        }
}

Spock test for invalid data response:

    def "front document verification error"() {
        given:     
        def uploadIdFront = upload(frontError)
        testBuilder.with {
            function = 'test-document'
            input = [idFront: uploadIdFront, defaultValidationBypass: false]
            addDependency(RDPComponent.rdp)
            build()
        }     
        expect:
        response().result == 'error'
    }

Workflows should either recover from an error, retry, or notify the user about it, usually on an error page. You should test that these errors are handled properly, e.g. the user is sent to the right error page and is notified about what has gone wrong.

Example of check for error in a workflow:

The part of the workflow to test:

exchange('IDEMIA - Create Identity') {
    fallback {
        route('error') {
            export error_step: 'processing'
        }
    }
}

The part of the workflow test to check an error:

given:
...
createIdentityConnectorMock.mockExchange.withError()

expect:
...
def errorResponseRoute = response().route
errorResponseRoute.uri == "/error"
errorResponseRoute.export.error_step == "processing"
errorResponseRoute.terminal

Mocking connectors

In most cases it is enough to use the MockConnector class to create new beans in your TestConfig and pass them on to hubConfigurer. This way you configure the Zenoo Hub to work with mocks instead of the real connectors.

Example of bean creation:

    @Bean
    MockConnector<DocumentConnector> documentConnectorMock(DocumentConnector documentConnector) {
        new MockConnector<DocumentConnector>(documentConnector)
    }

Example of using it in hubConfigurer

    @Bean
    static HubConfigurer hubConfigurer(
            MockConnector<DocumentConnector> documentConnectorMock,
            MockConnector<LivenessConnector> livenessConnectorMock,
            MockConnector<IdentityConnector> identityConnectorMock
    ) {
        return new HubConfigurer() {

            @Override
            List<ConnectorActivator> connectors() {
                return of(
                        ConnectorActivator.of("rdp-document@refinitiv.rdp", documentConnectorMock),
                        ConnectorActivator.of("rdp-liveness@refinitiv.rdp", livenessConnectorMock),
                        ConnectorActivator.of("rdp-identity@refinitiv.rdp", identityConnectorMock)
                )
            }
        }
    }

MockConnector contains an attribute mockExchange of type MockConnectorExchange that is meant to be used to set mock responses for the connector.

Example:

given:
...
identityConnectorMock.mockExchange
        .withConfigConsumer({ identityConfig = it })
        .withResult([countryCode: "AU", transactionId: "e850891a-6a57-4d5f-b499-3c7d891a0cef", overallStatus: "MATCH"])

expect:
...
response().route.uri == '/address'
submit([location: [locality    : null,
                   sublocality : 'BARCELONA',
                   area1       : 'BARCELONA',
                   street      : 'C/MEDES 4-10',
                   country     : 'HongKong',
                   countryCode : 'HK',
                   streetNumber: '10-Apr']])

identityConfig.address.addressLine1 == 'C/MEDES 4-10 10-Apr'
identityConfig.address.countryCode == 'HK'

Metrics

The Hub is making use of Micrometer, a vendor-neutral application metrics facade, to integrate with the most popular monitoring systems. It has a built-in support for AppOptics, Azure Monitor, Netflix Atlas, CloudWatch, Datadog, Dynatrace, Elastic, Ganglia, Graphite, Humio, Influx/Telegraf, JMX, KairosDB, New Relic, Prometheus, SignalFx, Google Stackdriver, StatsD, and Wavefront.

The following metrics are registered automatically:

Executor metrics

Execution metrics

JVM metrics

In addition, you can register custom metrics in a workflow script using the metrics DSL.

Hub Client API

A Hub client facilitates an interaction between the Hub and an end user.

For a Hub client, a customer journey is a series of pages a.k.a. routes. It renders pages (UI), gathers user input and submits data back to the Hub via a REST API.

A Hub client uses the Hub Client API for the following:

Typical API calls sequence

The workflow execution API sequence is as follows:

  1. start a new execution and get Execution Request resource,
  2. query the corresponding response and get the 1st Route resource to display,
  3. submit the 1st route and get Execution Request resource,
  4. query the corresponding response and get the 2nd Route resource or Validation Error resource or Error resource,
  5. submit the 2nd route, same as (3), until the execution is terminated.

In addition, the workflow execution API enables going back to the previous route and executing a function.

Start new execution

POST /api/gateway/execution

Creates a request to start a new workflow execution.

Request:
Response:

The corresponding response can be one of the following

An example of a successful response is below:

HTTP/1.1 201 Created
Content-Length: 645
Content-Type: application/json;charset=UTF-8
Location: /api/gateway/request/f3add886-36d3-49eb-8d2b-96862d63dbe4

{
    "uuid": "59bb233b-2d2b-41e7-ad13-f17c14513603",
    "requestURI": "/api/gateway/request/f3add886-36d3-49eb-8d2b-96862d63dbe4",
    "executionURI": "/api/gateway/execution/59bb233b-2d2b-41e7-ad13-f17c14513603"
    "token": "eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI2YzBlNjNhMi01MTE1LTRlM2YtOWNjOC1kOTdmYjcxNzFlODIiLCJzdWIiOiJleGVjdXRvciIsImlhdCI6MTU4MjcyNTI3MCwiZXhwIjoxNTgyNzU0MDcwfQ.2QDJai6f4f7fs85CctTN8K3vmL-XGMbFDq0_IF14GkM"
}

Submit route

POST /api/gateway/execution/{uuid}/submit

Creates a request to submit a route for a workflow execution with uuid.

Request:

An example request:

POST /api/gateway/execution/59bb233b-2d2b-41e7-ad13-f17c14513603/submit
Host: localhost:8080
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI1MTFiMDQ0MS1kNmQwLTRhOGEtODAwMy0yMmVmNTI3NDA4NDciLCJzdWIiOiJleGVjdXRvciIsImlhdCI6MTU1NzI0NjE4OSwiZXhwIjoxNTU3Mjc0OTg5fQ.2GVAuboArO8k1G48CY1ojFdypO9zm9u2ZubCE7Qa-Co

{
    "uuid": "3a0d231f-12b8-47b3-a495-b9418db294b3",
    "payload": {
        "firstname": "Joe",
        "lastname": "Bloke",
    }
}
Response:

The corresponding response can be one of the following

Go back to previous route

POST /api/gateway/execution/{uuid}/back

Creates a request to go back to the previous route for a workflow execution with uuid.

Request:

An example request:

POST /api/gateway/execution/c973a8e7-eb24-4e55-980f-f2ea0fff680e/back HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI1MTFiMDQ0MS1kNmQwLTRhOGEtODAwMy0yMmVmNTI3NDA4NDciLCJzdWIiOiJleGVjdXRvciIsImlhdCI6MTU1NzI0NjE4OSwiZXhwIjoxNTU3Mjc0OTg5fQ.2GVAuboArO8k1G48CY1ojFdypO9zm9u2ZubCE7Qa-Co

{
    "uuid": "3a0d231f-12b8-47b3-a495-b9418db294b3",
    "payload": {
        "firstname": "Joe"
    }
}
Response:

The corresponding response can be one of the following

Execute a route function

POST /api/gateway/execution/{uuid}/function

Creates a request to execute a route function.

Request:
Response:

The corresponding response can be one of the following

Get current route

GET /api/gateway/execution/{uuid}

Query an execution with uuid for the current route. It may take time before the current route is available due to ongoing execution.

Response:

An example of a successful response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI1MTFiMDQ0MS1kNmQwLTRhOGEtODAwMy0yMmVmNTI3NDA4NDciLCJzdWIiOiJleGVjdXRvciIsImlhdCI6MTU1NzI0NjE4OSwiZXhwIjoxNTU3Mjc0OTg5fQ.2GVAuboArO8k1G48CY1ojFdypO9zm9u2ZubCE7Qa-Co

{ 
  "uuid": "89828e1e-c834-42a2-86f1-893209f63ab5", 
  "uri": "/product", 
  "terminal": false, 
  "backEnabled": false, 
  "export": {"product1": "Product1", "product2": "Product2"},
  "payload": {"product": "product1"}
}

Get exported namespace

GET /api/gateway/execution/{uuid}/export/{namespace}

Query an execution with uuid for exported namespace.

Response:

Get response to Execution Request

GET /api/gateway/request/{uuid}

Query for a response of an Execution request specified by uuid. It may take time before the response is available due to ongoing execution.

Response:

Get current execution state

GET /api/gateway/execution/{uuid}/state

Get an execution state with uuid. It contains detailed information about an execution, like current context, input payload, a list of execution events etc.

POST /api/gateway/sharable/{token}

Starts an execution corresponding to the given sharable token. A sharable token specifies an Execution request, see Sharable DSL for details. Moreover, the POST request body is used as the Execution request input. It may take time before the response is available due to ongoing execution.

Response:

Resources

Execution Request

A unique execution request is generated after each execution command submission (POST). The corresponding execution response is queried using the requestURI.
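
An example Execution Request resource, mirroring the start-execution response above (the token is truncated):

{
    "uuid": "59bb233b-2d2b-41e7-ad13-f17c14513603",
    "requestURI": "/api/gateway/request/f3add886-36d3-49eb-8d2b-96862d63dbe4",
    "executionURI": "/api/gateway/execution/59bb233b-2d2b-41e7-ad13-f17c14513603",
    "token": "eyJhbGciOiJIUzI1NiJ9..."
}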

Route

Route resource represents a route to be rendered by a Hub client.

It contains the following fields:

An example Route resource:

{
    "type": "route",
    "uuid": "89828e1e-c834-42a2-86f1-893209f63ab5", 
    "uri": "/product", 
    "terminal": false, 
    "backEnabled": false, 
    "export": {"product1": "Product1", "product2": "Product2"}
}

Result

Result resource represents an execution result, like a function execution result.

An example Result resource:

{
    "result": "passed"
}

Validation Error

Validation Error resource contains a list of validation errors.

An example Validation Error resource:

{
    "type": "validation-errors",
    "errors": [
        {
            "field": "mobile",
            "message": "Required"
        }
    ]
}

Execution Error

Execution Error resource contains an error message.

An example Execution Error resource:

{
    "type": "error",
    "message": "Resume UUID mismatch!"
}

Security

The execution API endpoints are secured using JWT tokens.

A new token is generated for every Execution Request. The token is then used to query the corresponding response or current route.

The token needs to be included as an HTTP Authorization header. The expiration is set to 30 minutes and can be modified using the jwt.expiration property.

Authorization: Bearer {token}

Admin API

The Admin API provides a REST API for the Component repository.

The access to Admin API is restricted using HTTP basic authentication, see Admin API security.

Query component

GET /api/component/{name}/{version}

Retrieves a registered component by name and version.

GET /api/component/{name}

Retrieves the latest version of a registered component by name.

Response:

Register component

POST /api/component

Registers a new component. A component must pass a DSL validation process before successful registration.

Request:
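
An example request, using the Component definition resource fields described under Resources (the credentials and definition body are illustrative):

POST /api/component HTTP/1.1
Host: localhost:8080
Authorization: Basic YWRtaW46c2VjcmV0
Content-Type: application/json

{
    "name": "zenoo.playground",
    "revision": "1.0.0",
    "definition": "workflow('main') { route('welcome') { uri '/welcome' } }"
}
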
Response:

Resources

Component Id

A reference to a component using name and revision.

- name - a component name
- revision - a component version

Component definition

Provides a component definition. The component is identified by name and revision.

- name - a component name
- revision - a component version, generated if omitted
- definition - a component definition using the Component DSL

Configuration properties

Execution

The maximum execution duration before expiration.


Admin API security

Default: admin

An admin user name

Default: auto-generated

An admin user password


Client API security

A secret key used for generating JWT tokens.


JWT tokens generated with specified expiration.

Kafka streams

Default: none

The prefix is used for isolating Hub clusters running within the same Kafka broker. It uses the setting as a prefix for Kafka topics.

For example, a Hub cluster with a testing prefix would use topics like testing-execution-events, testing-exchanges, etc.

The prefix can contain alphanumeric characters, .(dot), -(hyphen), and _(underscore).


The application name used together with prefix to generate a unique application ID.

Each stream processing application must have a unique ID. The same ID must be given to all instances of the application.

This ID is used in the following places to isolate resources used by the application from others:


Default: localhost

Host that is accessible for this and other instance nodes.


Default: 8080

Port that is accessible for this and other instance nodes.


Default: 60000

The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.


Default: 1048576

The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.


Directory location for state stores.


Clean up application’s local state directory when Kafka Streams start.


Clean up application’s local state directory when Kafka Streams shut down.

File Uploader

Directory location for cached files.

Kafka SSL

For safe usage of Kafka, it is recommended to use mutual TLS for security. This setup means that both brokers and clients have their own certificates. Also, because SSL does not trust certificates by default, we need to make sure that each side trusts the other side's certificate.

Kafka Configuration

Kafka is plaintext-only by default. To enable SSL you need to configure the following:

Service configuration

Example configuration
advertised.listeners: PLAINTEXT://kafka:9092,SSL://kafka:9093
ssl.keystore.filename: kafka-keystore.jks
ssl.keystore.password: kafka-keystore-creds
ssl.key.password: changeit
ssl.truststore.location: kafka-truststore.jks
ssl.truststore.password: changeit
security.inter.broker.protocol: PLAINTEXT
ssl.client.auth: 'required'
security.protocol: SSL

Docker configuration

We need to configure the same things as in the service configuration, but for Docker we use environment variables. These variables correspond to the service fields, but they are uppercase, use _ instead of ., and are prefixed with KAFKA_.

Example configuration
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,SSL://kafka:9093
KAFKA_SSL_KEYSTORE_FILENAME: kafka-keystore.jks
KAFKA_SSL_KEYSTORE_CREDENTIALS: kafka-keystore-creds
KAFKA_SSL_KEY_CREDENTIALS: kafka-key-creds
KAFKA_SSL_TRUSTSTORE_FILENAME: kafka-truststore.jks
KAFKA_SSL_TRUSTSTORE_CREDENTIALS: kafka-truststore-creds
KAFKA_SECURITY_INTER_BROKER_PROTOCOL: PLAINTEXT
KAFKA_SSL_CLIENT_AUTH: 'required'
KAFKA_SECURITY_PROTOCOL: SSL

Local

We will run Kafka with SSL in Docker, because we need to make some configuration changes and configuring through Docker Compose is the easiest option. A sample docker-compose file is located in the sample-hub-instance directory. To use it, we first need to generate keystores and truststores for both the brokers and the client (our application).

Generating keystores and truststores

To make this process less painful, we have a script that helps with it. The script is used like this:

./generate-stores.sh KEY_ALIAS TARGET_KEYSTORE.jks TARGET_TRUSTSTORE.jks

Where:

The script uses Java's keytool, so all interaction within the script is handled by keytool.

The script's workflow is as follows:

  1. Key generation - you will be prompted for the keystore's password (twice if the keystore doesn't exist yet)
  2. Public key extraction - you will be prompted for the keystore's password
  3. Public key import to trust store - you will be prompted for the trust store's password (twice if the trust store doesn't exist yet)
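
On the Hub (client) side, the generated stores can then be referenced through the standard Spring Kafka properties. A minimal sketch, mirroring the MSK example later in this document; file paths and passwords are illustrative:

# application.yml -- a sketch; paths and passwords are illustrative
spring:
  kafka:
    security.protocol: 'SSL'
    ssl:
      key-store-location: 'file:/security/hub-keystore.jks'
      key-store-password: 'changeit'
      trust-store-location: 'file:/security/hub-truststore.jks'
      trust-store-password: 'changeit'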

Kafka Topics


execution-requests

Stores execution requests that are then processed by corresponding executors

execution-events

Stores all execution-related events, like requests, responses, execution life-cycle events, commands, etc.

execution-responses

Stores execution responses generated as a result of processing execution requests.

exchanges

Stores exchange requests

components

Stores all Hub component definitions

components-latest

Stores the latest revisions for Hub components

errors

Stores execution errors

sharables

Stores sharable tokens (links)

cached-files

Stores cached files descriptors, does not store file content

Amazon MSK

Amazon MSK is a fully managed Apache Kafka service hosted by AWS. A Hub backend instance can easily be set up to use AWS MSK by defining the standard Spring Kafka properties. See the sample properties in the following sections.

Access MSK with no authentication and no encryption

If MSK is provisioned without any authentication or encryption, the access protocol defaults to plaintext. In that case, it is enough to set only the bootstrap servers in application.yml, as below.

application.yml

spring:
  kafka:
    bootstrap-servers: b-1.test.kafka.ap-east-1.amazonaws.com:9092,b-2.test.kafka.ap-east-1.amazonaws.com:9092

Access MSK with IAM role-based authentication and encryption

If MSK is provisioned with IAM role-based authentication and encryption (within the cluster and between clients and brokers), use the properties below for accessing the service. Make sure the IAM role assigned to the backend instance's container tasks has sufficient MSK permissions as stated here: IAM access control

application.yml

spring:
  kafka:
    bootstrap-servers: b-1.test.kafka.ap-east-1.amazonaws.com:9098,b-2.test.kafka.ap-east-1.amazonaws.com:9098
    security.protocol: 'SASL_SSL'
    ssl:
      trust-store-location: 'file:/security/cacerts-zenoo.jks'
      trust-store-password: '**'
    properties:
      sasl:
        jaas.config: 'software.amazon.msk.auth.iam.IAMLoginModule required;'
        mechanism: 'AWS_MSK_IAM'
        client.callback.handler.class: 'software.amazon.msk.auth.iam.IAMClientCallbackHandler'

Sample policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:AlterCluster",
                "kafka-cluster:DescribeCluster"
            ],
            "Resource": [
                "arn:aws:kafka:us-east-1:123456:cluster/msk-test-cluster/855d7317-7cc9-494e-8a0b-44c67f3327e7-8"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:*Topic",
                "kafka-cluster:WriteData",
                "kafka-cluster:ReadData"
            ],
            "Resource": [
                "arn:aws:kafka:us-east-1:123456:topic/msk-test-cluster/"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:AlterGroup",
                "kafka-cluster:DescribeGroup"
            ],
            "Resource": [
                "arn:aws:kafka:us-east-1:123456:group/msk-test-cluster/"
            ]
        }
    ]
}

HUB Client

Architecture

HUB client consists of several architectural components:

A Target is a folder with the appropriate source files (YAML, LESS, assets etc.) and the core dependency: the HUB Client application

HUB client

YAML

The YAML used in Target Builder is a standard YAML that has been extended with some specific YAML tags.

For an initial acquaintance with YAML, read this article: Learn X in Y minutes (Where X=yaml)

Reserved fields

In the YAML files, there are three "reserved" fields in the root:


~private

Example
~private:
  yourSecretKey: "secret value"

List of supported tags

Compile-time tags

Run-time tags

Deprecated tags


!include compile-time tag

Syntax

Short version (without properties):

!include ./path/file.ext

Long version (with properties):

!include file: ./path/file.ext
property1: 'some'
property2: 123
Examples
list:
  - !include ./info.md
  - !include file: ./more_info.yml
    title: 'Hello'
    withoutHeader: true
something: !include ./something.yml


!ref compile-time tag

Syntax
!ref components: component_name
Examples
components:
  header: !include file: ./components/header.yml
    title: "Default title"
  bodyItem: !include ./components/body-item.yml
  footer: !include ./components/footer.yml

...

items:
  - !ref components: header
    title: 'Welcome'
  - !ref components: bodyItem
    name: 'First one'
  - !ref components: bodyItem
    name: 'Second one'
  - !ref components: footer


!property compile-time tag

Syntax

Short version (without default):

!property some_prop

Long version (with default or required):

!property name: some_prop
default: 'Some default value'
required: true
Examples
items:
  - !property prop1
  - !property name: prop2
    default: 'Prop2 is not here :-)'
  - !property name: prop3
    required: true


!condition compile-time tag

Syntax
!condition include: typeof some_prop === "boolean" && some_prop === true
!condition omit: typeof some_prop === "string" && some_prop !== "foo"
Examples
items:
  - name: 'this item is not here :-('
    !condition include: typeof some_falsy_prop === "boolean" && some_falsy_prop === true
  - name: 'this item is here :-)'
    !condition omit: typeof some_falsy_prop === "boolean" && some_falsy_prop === true

!component run-time tag

Syntax
!component as: some_component
property: 'foo'
Examples
items:
  - !component as: div
    items:
      - !component as: h1
        items: 'Hello'
      - !component as: HubClientMagicComponent
        doMagic: true

!repeat run-time tag

Syntax
!repeat some_item:
  - name: 'Some 1'
  - name: 'Some 2'
  - name: 'Some 3'
component:
  !component as: span
  items: !expression some_item.name
Examples
items:
  - !repeat car: !expression export.cars
    component:
      !component as: span
      items: !expression car.name
  - !repeat pair:
      - name: 'Some 1'
        value: 'Val 1'
      - name: 'Some 2'
        value: 'Val 2'
      - name: 'Some 3'
        value: 'Val 3'
    component:
      !component as: div
      items:
        - !component as: span
          items: !expression pair.name
        - !component as: span
          items: "!expression '(' + pair.value + ')'"

The !repeat tag provides a way to iterate over a collection and map components to it. You simply specify an array as the input collection (an expression can be used as well), along with a component that will be rendered n times (where n is the length of the array).

Inside the component, you can access the current item through the key you specified. In an expression, you can directly access the item of the array ([key].item) or its index ([key].index).

This is the format:

!repeat [key]: [array]
component: [some component]

Examples:

Here's an array from the api:

!repeat person: !expression flow.export.persons
component:
  !component as: span
  items: !expression person.item.name

Here's an array in yaml:

!repeat item:
  - Item 1
  - Item 2
component:
  !component as: span
  items: !expression item.item


!expression run-time tag

Syntax

Short version (without parameters):

!expression 'some_expression'

Long version (with parameters):

!expression eval: 'some_expression'
parameter_one: 'Some parameter value'

Multiline:

property:
  !expression: |
    const variable = 1;
    // More lines of JavaScript
    console.log('Hello', variable);
Examples
items:
  - !component as: span
    items:
      - !expression 'exports.someExportField'
  - !component as: span
    max:
      !expression eval: 'parseInt(param1) - param2'
      param1: !expression 'exports.someExportField'
      param2: !property test1


!function run-time tag

Syntax

Short version (without parameters):

!function 'some_expression'

Long version (with parameters):

!function eval: 'some_expression'
parameter_one: 'Some parameter value'
Examples
items:
  - !component as: div
    onClick: !function 'setSomething(arg1 + "A")'
  - !component as: div
    onClick:
      !function eval: 'doSomething(parseInt(param1) - param2)'
      param1: !expression 'exports.someExportField'
      param2: !property test1
  - !component as: div
    onClick: !function |
      var a = "A";
      var b = "B";
      return a + b + " = done";

!t run-time tag

Syntax

Short version (without params):

!t Some text

Long version (with params):

!t text: Some text with {paramOne} and {paramTwo}
paramOne: 'Zenoo'
paramTwo: !expression 'exports.someExportField'
Examples
items:
  - !component as: span
    items:
      - !t text: Some text with {paramOne} and {paramTwo}
        paramOne: 'Zenoo'
        paramTwo: !expression 'exports.someExportField'

!cx run-time tag

Syntax
!cx args:
  - 'classOne'
  - 'classTwo'
  - !expression 'exports.someClassName'
Examples
items:
  - !component as: span
    className:
      !cx args:
        - 'classOne'
        - 'classTwo'
        - !expression 'exports.someClassName'

Target configuration

Target structure

HUB Client Target should have the following structure, which can vary depending on complexity.

/src — folder with target source code
  /assets — static assets (fonts, images etc)
  /components — YAML reusable components
  /layouts — visual layouts
  /pages — configuration files for specific pages. By convention, names of these files should be the same as route names in flow
    index.yml
    ...
  /styles — LESS styles
    index.less — list of imports used in target and global styles
    fonts.less — font styles (or import from CDN)
    overstyle.less — style overrides for UI components
    variables.less — CSS variables for UI components theming
    studio-variables.less — CSS variables for Design Studio
  /translates — translations for target
    {LANG}.yml — list of translations for {LANG} locale
    ...
  index.yml — project configuration file
package.json — metadata information about the target and its dependencies
package-lock.json — dependencies tree with locked versions

Project settings

Project settings can be set in the root index.yml file. The values for these settings can be set individually for any particular environment (see Environment-specific entry points).

List of available parameters

Parameter Required Default Value Description
analytics false Analytics configuration
analyticsMapper false Analytics configuration
analyticsParams false Analytics configuration
apiVersion false 'v1' API version ('v0' or 'v1') used to specify usage of legacy API
authorizationTimeout false 10 Authorization cookie expiration timeout (in minutes)
backDisabledAlert false Message to be displayed in case of disabled back action
coreLocale false List of translates for Core messages (more in localization)
defaultLocale false Default locale code (more in localization)
description false Meta description tag
devTools false true Toggles developer tools (open with ctrl + shift + D hotkey)
errorPage false Error page configuration
favicon false Path to favicon
flowExport false Mocked flow export
flowIdName false Flow ID name in Backend instance
flowIdRevision false Flow ID revision in Backend instance
flowName true Flow name in Backend instance
flowStartParameters false List of parameters to be passed to the flowStart action
formats false Global formats settings
globals false Extending expression context
handoffTimeout false 20 Handoff credentials cookie expiration (in minutes)
indexPageInit false true Specifies if application initialization should start from flowStart action
loadingComponent false Component to be displayed during application initialization
mockData false Mocked input data for development tools
og false List of meta og tags
pages true Page Configuration
serverUrl true URL of Backend instance server
studioSettings false Studio settings
styles false LESS files includes
title true Meta title of an application, shown in browser tab header
translates false List of translates for specific languages (more in localization)
url false Application URL settings
flowReference false When this ref is changed, a new flow execution will be initialized

Application URL settings

url: {
  persistHash?: boolean // defines if hash should be persistent on page change, default value is TRUE (default hash is page URI)
  persistQuery?: boolean // defines if query should be persistent on page change, default value is FALSE
  persistPathname?: boolean // defines if pathname should be persistent on page change, default value is FALSE
}
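
For example, to keep query parameters across page changes while leaving the other defaults untouched, a minimal sketch:

# index.yml
url:
  persistQuery: true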

Example configuration

Here's an example of various settings in index.yml:

title: "Zenoo Demo Project"
serverUrl: "https://zenoo.onboardapp.io/api"
flowName: "zenoo"
favicon: "/assets/favicon.ico"
indexPageInit: true
mockData: !include ./mockdata.json
styles:
  - !include ./styles/index.less
analytics:
  gtm: "GTM-ID001"
authorizationTimeout: 60
translates:
  en: !include ./translates/en.yml
  cz: !include ./translates/cz.yml
defaultLocale: "en"
studioSettings:
  name: ZenooBank
  logo: /assets/logo.png
  country: Mexico
  previewUrl: https://onboarding.zenoo.com/
pages:
  index: !include ./pages/index.yml
  otp: !include ./pages/otp.yml
  loan-overview: !include ./pages/loan-overview.yml
  thanks: !include ./pages/thanks.yml
  rejected: !include ./pages/rejected.yml

Page settings

The entire application consists of pages. Each view that is presentable to a user must be implemented as a page. There are two predefined pages: the index page and the error page. The index page must be stored under the index property in pages. In the root of your YAML (typically index.yml), you can specify the errorPage property, which names the page to which the user will be redirected when an error occurs (such as a network failure).

List of available parameters

Parameter Required Description
analytics false Analytics configuration for specific page
defaultAction false Default form submit action name
defaultActionParams false Default form submit action params
defaults false Default values for form fields
fadeAnimationBack false Use "fade" animation on back action
fadeAnimationSubmit false Use "fade" animation on submit action
formOutputModifier false Override page payload
items false Elements tree of specific page
og true List of meta og tags (will be merged with the ones coming from project configuration)
schema false Validation rules as a JSON schema
title false Page meta title

Example configuration

components:
  formLayout: !include @common/layouts/form-layout.yml
  formGroup: !include @common/components/form-group.yml
  header: !include @common/components/header.yml
  pinInput: !include @common/components/pin-input.yml
fadeAnimationBack: true
schema:
  required:
    - code
  properties:
    code:
      type: string
      minLength: 4
      maxLength: 4
  errorMessage:
    _: "{field} - Required field"
defaults:
  mobile: !expression "flow.export.mobile"
items:
  - !ref components: formLayout
    items:
      - !ref components: header
        progress: <%-((3 / 8) * 100)%>
      - !component as: div
        className: "content main-content"
        items:
          - !component as: h1
            items: "Enter your phone number"
          - !component as: p
            items: "Please enter a valid mobile phone number to where we can text a confirmation code to."
          - !ref components: formGroup
            items:
              - !ref components: pinInput
                field: code
                label: "Enter your confirmation code"
                length: 4

Error page

The error page can be specified as an errorPage parameter in application configuration.

errorPage: "error-page"
---
pages:
  error-page: !include ./pages/error.yml # Include error page to a list of pages

If an error page is not specified, the auth cookie will be deleted and the application will be reloaded.

You can create a more dynamic error page that provides useful features, such as a button to continue or reattempt the previous action (flowContinue). This button will automatically fetch the last stored data from the server and redirect the user to the correct screen.

Another useful error management feature is a button that reloads the flow. If the problem is not easily resolved, let the user click a button that redirects to the start of the flow using the form action flowReload.

On the error page, page parameters containing the reason for the error are also available. For example, query the value of page.params.error to get the raw output from the error catch.
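
Putting these pieces together, a dynamic error page might look like the following sketch (component names follow the examples in this documentation; the exact wiring of the flowContinue and flowReload actions is an assumption):

# pages/error.yml -- a sketch
items:
  - !component as: h1
    items: "Something went wrong"
  - !component as: p
    items: !expression "page.params.error"   # raw output from the error catch
  - !component as: div
    onClick: !function "form.submit('flowContinue', [])"   # reattempt the previous action
    items: "Try again"
  - !component as: div
    onClick: !function "form.submit('flowReload', [])"     # restart the flow
    items: "Start over"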

Static pages

Static pages can also be specified in pages. A static page is a page that can be accessed outside of any workflow logic, so it can be used, for example, as a landing page for calls from other SDKs. A static page name always starts with the $ character, like this:

pages:
  $static-page: !include ./pages/static-page.yml

The static page can then be accessed by loading the URL https://some-target-url.xyz/?s=static-page&some=other-params


Analytics

You can initialize different analytics providers when the application starts and call specific actions when certain events occur.

List of currently available providers:

Example integration

# index.yml
analytics:
  mixpanel: "a78gc206fb0a9d85edb622d10ec74b5d"
  gtm: "GTM-XXXXXX"

User identification

To identify the current user for different analytics providers, the analytics.authorizationToken configuration key can be used, e.g.:

# index.yml
analytics:
  authorizationToken: !expression "url.query.do_authorization"

To identify the user on a specific event rather than on the initial page load, the analytics.authorization action from the Expression context can be used:

- !component as: div
  onClick: !function "analytics.authorization(flow.export.identityId)"

Events management

Analytics events can be dispatched manually or using analytics event management.

Defining analytics in the page configuration enables the built-in analytics event management. Some UI components dispatch basic default events; e.g. form fields have click, change, blur, focus, etc.

Analytics page configuration

The analytics configuration structure corresponds to the events you want to handle and can be placed on every page.

There are 3 ways you can set an event configuration: "string", "object" or "function" annotation:

analytics:
  fields:
    firstName:
      # String annotation
      change: "firstNameChanged"
    middleName:
      # Object annotation
      change:
        eventName: "middleNameChanged"
        data:
          page: !expression "page.name"
          device: !expression "device.deviceType"
    lastName:
      # Function annotation
      change: !function "analytics.event('lastNameChanged')"

You can also define an event configuration at the parent level; for example, this function will be triggered on any field change:

analytics:
  fields:
    change: !function "analytics.event('someFieldChanged')"
Existing events

Form fields events:

Event name Description
click Triggers when the user clicks on a field
change Triggers when the user changes the value of a field
focus Triggers when a field gains focus
blur Triggers when a field loses focus

File upload events:

Event name Description
click Triggers when the user clicks on a field
change Triggers when the user changes the value of a field
accepted A file was accepted by the field
rejected A file was rejected; this can be caused by prevalidations or liveness detection

The path for these events has the format fields.{FieldName}.{EventName}.

Application lifecycle events

Path Event name Description
page enter Triggers when the page is entered
page leave Triggers when the page is left
form initialized Triggers when the execution is initialized

The path for these events has the format {Path}.{EventName}.

Analytics storage

Expression context has support for dispatching analytics events and for storing some values.

Analytics storage is a simple key/value store that can contain any value. It has some utilities to make its usage simpler: for numeric values there are increment and get. increment will increase a value by 1; if the value does not exist, it will be set to 1.

Example:

This will send an event named Click with the parameter count: 1 for the first call, 2 for the second call, etc.

!function "analytics.event('Click', { count: analytics.storage.increment('timesClicked') })

This will send an event named Click with the parameter count taken from storage. If the value does not exist, count will be set to 0 (the default value).

!function "analytics.event('Click', { count: analytics.storage.get('timesClicked', 0) })

Global analytics params

There is a way to set global analytics params that will be sent with every single event. This injection works only when the input params of the event call are an object or were not provided. Global params have lower priority, so if you redefine the same field in the event params, the event params will overwrite the global value.

Example:

# index.yml
analyticsParams:
  ip: !expression "flow.export.ip"
  page: !expression "page.name"

Dispatch events manually

To manually fire an analytics event, use the analytics.event method from the expression context:

- !component as: div
  onClick: !function "analytics.event(eventName, eventParams)"

Formats settings

Global formats should be defined under the formats parameter. All formats are then available as helpers in the global application state (expression context).

Example configuration

formats:
  date:
    format: "DD/MM/YYYY"
  number:
    decimalSeparator: "."
    thousandsSeparator: ","
    precision: 2
  currency:
    format: "%u%n"
    unit: "£"
  phone:
    countryCode: "+44"
    mask: "9999 999999"

Global application state and methods

Expressions

Expressions are a simple way to access data from the app runtime or from the server response. Data is accessed through an object internally known as the Core context or Expression context. If an expression fails, it returns undefined. If you specify the default parameter, that value is returned instead when the expression fails.

Examples of expressions:

property: !expression flow.export.value
property:
  !expression eval: flow.export.value
  default: Nothing
# Multiline expression
property:
  !expression: |
    const variable = 1;
    // More lines of JavaScript
    console.log('Hello', variable);

Functions

A function is another type of expression. It is useful for adding callbacks, such as a button's onClick event.

- !component as: div
  onClick: !function "console.log('Click')"
  items: "Click me"

Extending Expression context

To extend the expression context with custom values or methods, the globals or utils configuration keys can be used:

# index.yml
globals:
  test: "I am a global variable"
utils:
  sum: !expression "function (a, b) { return a + b; }"

Then in page configuration:

- !component as: Heading
  items: !expression "globals.test"
- !component as: Heading
  items: !expression "utils.sum(1, 2)"

Expression context

The Expression context is a global object, accessible from YAML expressions only.

analytics: {
  authorization: (token) => void, // Trigger mixpanel.identify(token), GA.set({ userId: token }) and GTM dataLayer event "authorization" with parameter token (string)
  event: (name: string, params?: object) => void, // Trigger event with given event name and params
  storage: { // More info in "Analytics storage" section
    set: (name: string, value: any) => void
    get: (name: string) => any
    increment: (name: string) => void
  }
}
api: {
  authToken: string,
  progress: {
    [field-name]: number // Percentage of progress in file uploading
  }
}
app: {
  locale: string, // Current locale
  targetId: string, // Current target name
  waiting: {
    [tag]: boolean, // App waiting tags
  }
  wrapByLoading: (promise: Promise<any>) => Promise<any>
}

Example usage of wrapByLoading

# Element with click handler as async operation
- !component as: div
  items: "Run simple async operation"
  onClick: !function "app.wrapByLoading(simple_async_operation, 'SIMPLE_TAG')"

# Element with click handler as async operation with complex structure
- !component as: div
  items: "Run complex async operation"
  onClick: !function |
    app.wrapByLoading((async () => {
      await complex_async_operation();
    })(), 'COMPLEX_TAG')

# Displaying loader during async operation
- !component as: VisibilityWrapper
  visible: !expression "app.waiting.SIMPLE_TAG || app.waiting.COMPLEX_TAG"
  items: "Loading..."
device: {
  ... // https://github.com/duskload/react-device-detect#selectors
  hasWebcam: boolean, // If device has webcamera physically
  hasWebcamPermission: boolean, // If user already granted webcamera permission to current website
}
flow: {
  backEnabled: boolean, // value of backEnabled from API for current page
  execution: { // information about current flow execution
    uuid: string
    token: string
  },
  export: ...any-data-from-server, // this is exported data for page in flow from server
  function: {
    [function-name]: (payload?: any, resultKey?: string) => void, // - call (in !function) any flow/route function by its name (like `flow.function.search('something')`); you can also set an output resultKey (default: function-name)
    results: {
      [function-name or result-key]: ...any-data-from-server-function, // - here will be data from server under function-name or result-key property name (like `flow.function.results.search`)
    }
  }
  goToErrorPage: (message: string, logout?: boolean) => void // redirect user to error page (if one is specified) with some message put into `page.params.error`. Optionally logout can be performed
  refresh: () => void // refresh workflow based on current workflow status
  reload: () => void // removes authentication cookie and reloads flow
}
form: {
  changeValue: (fieldName: string, value: any, callback?: () => void) => void, // change value of some field, you can use callback that will be called after data set, for example if you need to submit form
  data: {
    [field-name]: ...data-inside-field, // - data can be string, file, etc.
  },
  field: {
    [field-name]: {
      isValid: boolean,           // is field valid
      validationError: string,    // only validation errors generated by the page schema
      error: string,              // all field errors including validation errors and server errors
      isFilled: boolean,          // is there any data
      isVisited: boolean,         // true, if field was visited before (focused and blur)
    }
  },
  recompileSchema: () => void, // recompile form validation schema
  clearErrors: (allErrors?: boolean) => void // clear global/validation/manually set errors
  setError: (fieldName: string, error: string) => void // manually set error to some specific field, error can be cleared by passing falsy value
  addTags: (tags: string[]) => void, // add tags to form
  removeTags: (tags: string[]) => void, // regexp as string can be also used to identify more tags
  hasTags: (tags: string | string[]) => boolean // checks if all passed tags are present
  tag: {
    [tag-name]: boolean, // form visual tags
  },
  submit: (actionName: string, params: string[]) => void, // submit form
  valid: boolean, // is form valid
  visited: {
    [field-name]: boolean // indicates if a field was visited
  },
}
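
Example usage of changeValue

# Change a field value, then submit once the value is set (a sketch; the action name 'next' is illustrative)
- !component as: div
  items: "Set amount and continue"
  onClick: !function |
    form.changeValue('amount', 1000, function () {
      form.submit('next', []);
    });
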
format: {
  formatDate: (date: string) => string
  formatCurrency: (value: number, options?: NumberFormat) => string
  formatNumber: (value: number, options?: NumberFormat) => string
  roundNumber: (value: number) => number
  dateFormat: string
  currencyUnit: string
  phoneCountryCode: string
  phoneMask: string
}
helper: {
  dayjs, // https://github.com/iamkun/dayjs
  getFileHolder: (file: File | Blob) => Promise<FileHolder> // Get FileHolder compatible with HUB client
}
# page.yml
locals:
  test: "I am a local variable"
  sum: !expression "function (a, b) { return a + b; }"
items:
- !component as: Heading
  items: !expression "globals.test"
- !component as: Heading
  items: !expression "globals.sum(1, 2)"
page: {
  params: any // for example, page.params.error contains information about why you are on the error page
  name: string // current page name (route URI)
  storage: { // local page storage, gets cleared on page change
    get: (name: string, defaultValue?: any) => any
    set: (name: string, value: any) => void
  }
}
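
Example usage of page.storage

# Count clicks for the lifetime of the current page (a sketch); the storage is cleared on page change
- !component as: div
  items: "Click me"
  onClick: !function "page.storage.set('clicks', page.storage.get('clicks', 0) + 1)"
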
changeLocale: (locale: string) => void
t: (string, params) => string
te: (string, params) => string
url: {
  ... // - https://github.com/unshiftio/url-parse
}

Localization

HUB Client has built-in support for multiple locales and an easy way to manage translations.

All translation keys are stored in the src/translates folder in per-language YAML files ({LANG}.yml) and should be declared in the index.yml project configuration file:

defaultLocale: "en"
translates:
  en: !include ./translates/en.yml

Translations can be stored under nested keys, e.g.

# translates/en.yml
welcome:
  text: "Automated real-time identity authentication & decisioning."
  button: "Lets get started"

otp:
  title: "Enter your confirmation code"
  text: "We've sent a confirmation code to your phone number"

...

# Page configuraion
- !component as: Heading
  items: !t "welcome.text"
- !component as: SubmitButton
  text: !t "welcome.button"

To use a translation key with a parameter, the following notation can be used:

# translates/en.yml
welcome:
  text: "Some text with {param}"

...

# Page configuraion
!t text: "welcome.text"
param: "Zenoo"

# Expression can be used as well
!t text: "welcome.text"
param: !expression "flow.export.param"

There are two ways to use translations in YAML:

To change the locale, use the changeLocale action, whose first parameter is the target locale name.

Examples:

# Evaluate translation for given translation key
- !t translation_key

# Evaluate translation for a dynamic translation key (e.g. an error coming from the Backend)
- !expression t(flow.export.translation_key)

Markdown and HTML content in translations

A translation key can have a string, HTML, or Markdown value:

welcome:
  string: "Welcome"
  text1: !html |
    <h1>Welcome to our <b>website</b></h1>
    <br />
    Please provide some information
  text2: !markdown |
    # Welcome to our **website**

    Please provide some information

To render Markdown/HTML you need to use the !te tag instead of !t:

# String
- !component as: Paragraph
  items: !t "welcome.string"

# HTML
- !component as: Paragraph
  items: !te "welcome.text1"

# Markdown
- !component as: Paragraph
  items: !te "welcome.text2"

Built in components

Using components in YAML page configuration

Each component must have an "as" parameter that specifies the component element name. You can use a provided component name or a standard HTML DOM element.

Each component also has a $reference property, which can create a named reference to a DOM element. This reference is accessible through the $reference object inside appDataContext.

Examples of $reference:

# Referenceable div
!component as: div
$reference: myDiv

# Some component that uses this reference
property: !expression $reference.myDiv

UI components

List of UI components can be viewed in Zenoo Storybook.

EJS partials

It is possible to extend the initial HTML content of the application. By creating/filling the following files in the /ejs folder of the target source, you can extend the content of the head element and add HTML code at the beginning/end of the body tag:

Remote application start

In order to "pause" application initialization prior to perform some asynchorous task, the following approach can be used with the help of EJS partials:

head.ejs

<script>
  function startApplication() {
    if (!window.runApplication) {
      window.onApplicationPrepared = function() {
        window.runApplication();
      }
    } else {
      window.runApplication();
    }
  }

  (function() {
    window.DISABLE_AUTOLOAD = true;

    // Performing some request needed
    return fetch('https://example.com')
      .then(response => response.json())
      .then(data => {
        // Make something with data, e.g. put to global variables

        startApplication();
      })
      .catch(() => {
        startApplication();
      });
  })();
</script>

Target compilation

For target compilation, HUB Client contains a CLI tool, which uses the Target Builder module internally.

The Target Builder is a tool that processes Target files and combines them with the contents of the compiled @zenoo/hub-client-core into a releasable package.

Target Builder uses the index.yml file as the entry point; it combines and compiles all files included from it and also processes the files in the assets folder. The following file types are supported: YAML, JSON, HTML, MD, LESS, and CSS.

Target Builder performs these steps:

  1. Process the assets folders and copy their contents to the output folder.
  2. Process the entry point YAML file, and recursively process all includes (for more details, see !include).
  3. Process/compile all other files (such as LESS, MD, etc.).
  4. Combine all output into one large configuration.json file and one styles.css file and place them in the output folder.

CLI commands

# Run from specific target directory
hub-client <command> [<environment>] [-p <port>]
<command>

Command to run over specified target: build, dev or deploy

<environment>

Environment name that determines which target entry point will be used:

${target}/src/index.${environment}.yml
Examples
# In targets/<target_name> folder
hub-client dev
hub-client deploy stage
hub-client build production

Options

Parameter Explanation Type Default
--port, -p Port for webpack dev server number 8888
--branch, -b Branch to deploy string master

Assets processing

While building a target, two folders of assets are being processed:

Target Builder collects all of the content in these two folders, adds a random hash postfix to the filenames (to prevent caching issues), and places it in the /assets output folder. All references to assets will be replaced with the new hashed names, both in YML and LESS/CSS files.

In case of collisions between <target>/src/assets and @zenoo/hub-client-common/lib/assets, the file from <target>/src/assets will have a higher priority. Use this prioritization to replace some "default asset" with an asset specific only for this target.

IMPORTANT

The format to reference an asset should be: /assets/some_file.ext.

Other reference formats such as ./../assets/some_file.ext will not be resolved properly.

Styles processing

In the root of each YAML file included at any depth of the target structure, you can define a styles field containing an array of CSS or LESS files to include in the target build.

In the final steps of the target build, all styles are deduplicated. That means you can import one style file from multiple components as many times as you want, and the output styles.css will contain each file only once.

Example:

styles:
  - !include ./style.less
  - !include ./other-style.css

Environment-specific entry points

To build a target with an environment-specific configuration in Target Builder, you can specify a different entry point by creating a separate index.yml file.

The format of this environment-specific index.yml file must be as follows:

<target>/index.{ENVIRONMENT}.yml

Example: <target>/index.production.yml

Within this file, simply include the main index.yml entry file:

<<<: !include index.yml
serverUrl: 'https://production.onboardapp.io/api'
analytics:
  ga: '123456'
# More configuration keys for selected environment

Development tools

HUB Client has built-in devtools for an easier development, support and QA process.

# index.yml application configuration file
devTools: true

IMPORTANT:

If you enable devtools in the development environment, the setting will be inherited by other environment-specific configs. In that case, the devTools parameter should be explicitly set to false in the appropriate environment config. Read more in Environment-specific entry points.

# index.production.yml application configuration file
<<<: !include index.yml
devTools: false

To toggle the development tools panel, click on the corresponding button (1).

The devtools interface allows you to preview and navigate through all the pages in the application without the need to fill in all the data every single time, as well as enable extra logging or trigger the Autofill feature for convenient testing.

DevTools1 DevTools2

Available features

Mocked input data

To test a DO application faster, you can set mocked user input data and go through the whole flow by submitting the dataset specific to a selected scenario.

Create a JSON file with content similar to:

{
  "default": { // Scenario name
    "welcome": { // Route name
      "firstName": "John" // User input data
    }
  },
  "anotherScenario": {
    "welcome": {
      "firstName": "Edgar"
    }
  }
}

And include it in the YAML application configuration file:

# index.yml application configuration file
mockData: !include ./mockdata.json

Mocked flow export

If you want pages to be displayed correctly in preview mode (navigated with devtools), you can set a mocked flow export, which usually comes from the BE.

# index.yml application configuration file
flowExport: !include ./flowExport.json

flowExport.json

{
  "application": {
    "products": {
      "creditCard": "Credit Card",
      "loan": "Loan"
    }
  }
}

Design Studio

Design Studio allows you to quickly design, test and deploy a DO application. It gives you many capabilities without the need to write code.

Requirements for the DO target

body {
  --base-body-background: #f6f6f6;
  --base-border-color: #dde0ec;
  --base-brand-color: #017aff;
  --base-brand-color-contrast: #ffffff;
  --base-color: #465a6a;
  --base-disabled-color: #d6e5f8;
  --base-error-color: #e44343;
  --base-focused-color: #017aff;
  --base-label-color: #8893aa;
  --base-success-color: #1ea03f;

  --base-font-family: 'Inter UI', sans-serif;
  --base-font-size: 16px;
  --base-font-weight: 400;
  --base-letter-spacing: normal;
  --base-line-height: 19px;

  --base-logo: url('/assets/logo-desktop.svg');
  --base-header-logo: url('/assets/logo-mobile.svg');

  --base-form-background: #ffffff;
  --base-form-color: var(--base-color);
  --base-form-border-radius: 7px;
  --base-form-box-shadow: 0 11px 14px -10px #aec1f7;
}

src/layouts/main.yml

components:
  footer: !include ../components/footer.yml
!component as: LayoutWithSidebar
isLayout: true # Required for DS to understand how to change properties in nested layout items
items:
  !property name: items
footer:
  - !ref components: footer
{
  "calculator": {
    "loanPurposeList": {
      "Business": "Business",
      "Personal": "Personal"
    }
  },
  "employment-info": {
    "employmentTypeList": {
      "Employed": "Employed",
      "Self-Employed": "Self-Employed"
    }
  }
}

Design Studio settings available in application configuration

Parameter Description
country Used to define global formats (currency, date format, phone number mask etc.)
flowExport Mocked flow export to display pages in Design Studio correctly
logo Logo to be displayed in projects list
name Project name to be displayed in Design Studio; title will be used as the default value
previewUrl Link to preview environment

Example configuration

title: "Zenoo Demo Project"
...
studioSettings:
  country: "Mexico"
  flowExport: !include ./flowExport.json
  logo: "/assets/logo.png"
  name: "ZenooBank"
  previewUrl: "https://onboarding.zenoo.com/"

Changelog

v1.26.0

July 17, 2023

New features

# index.yml
...
pages:
  $static: !include ./pages/static.yml # Accessible with ?s=static

v1.25.2

July 13, 2023

Bug fixes

v1.25.1

July 12, 2023

Bug fixes

v1.25.0

July 3, 2023

New features

v1.24.7

June 26, 2023

Bug fixes

v1.24.6

June 22, 2023

Bug fixes

v1.24.4

June 7, 2023

Bug fixes

v1.24.3

June 5, 2023

Refactor

v1.24.1

May 19, 2023

Bug fixes

v1.24.0

May 18, 2023

New features

v1.23.0

May 15, 2023

New features

v1.22.0

May 5, 2023

New features

v1.21.9

April 27, 2023

Bug fixes

v1.21.6

April 26, 2023

Bug fixes

v1.21.5

April 24, 2023

Bug fixes

v1.21.4

April 21, 2023

New features

v1.21.2

April 6, 2023

Bug fixes

v1.21.0

March 9, 2023

Breaking changes

To support these changes you need to:

  1. If you use single file upload, modify validation JSON schema of appropriate page on FE
  # Before
  schema:
    required:
      - document
    properties:
      document:
        type: array
        minItems: 1
        items:
          properties:
            size:
              maximum: 10485760
              errorMessage:
                _: !t "errors.invalidFileSize"
            # other properties validations...

  # After
  schema:
    required:
      - document
    properties:
      document:
        properties:
          size:
            maximum: 10485760
            errorMessage:
                _: !t "errors.invalidFileSize"
          # other properties validations...
  2. In case you still need the payload to be sent as an array of file descriptors, use the multiple property. Note that this property also affects the UI, e.g. the <FileUpload> component will display Add another file.
  - !component as: FileUpload
    name: document
    multiple: true
    label: "Document"
  3. Make the appropriate changes in the DSL

v1.20.3

January 5, 2023

Bug fixes

v1.20.2

January 5, 2023

Bug fixes

v1.20.1

December 21, 2022

Bug fixes

v1.20.0

December 13, 2022

New features

v1.19.0

November 28, 2022

New features

v1.18.1

October 18, 2022

Bug fixes

v1.18.0

October 14, 2022

New features

v1.17.2

October 13, 2022

Bug fixes

v1.17.1

September 15, 2022

Bug fixes

v1.17.0

September 15, 2022

New features

v1.16.0

September 15, 2022

New features

v1.15.2

September 12, 2022

Bug fixes

v1.15.1

September 9, 2022

New features

v1.15.0

August 31, 2022

Breaking changes

Bug fixes

v1.14.16

August 3, 2022

New features


v1.14.15

August 2, 2022

Refactor

    --base-link-color: var(--base-brand-color);
    --base-link-disabled-color: var(--base-disabled-color);
    --base-link-font-weight: 400;
    --base-link-text-transform: initial;
    --base-link-text-decoration: underline;
    --base-link-hover-color: var(--base-brand-color);
    --base-link-hover-text-decoration: underline;

v1.14.14

August 1, 2022

Breaking changes

If you need to run the application with a legacy version of the HUB backend, set the apiVersion field in index.yml to v0.

Refactor

Bug fixes

DevOps

Design Studio Cluster (AWS)

Overview of Services

Zenoo services are grouped into 2 categories:

Build time

Studio falls under this category and manages target creation, modification and deployment. It also handles the design elements of the onboarding applications (targets) together with workflow updates.

Run time

Hub Instance (backend) and Hub Client Target (frontend) are deployed under this category to orchestrate the onboarding journeys and integrations with 3rd party providers.

Hub & Studio Resources (AWS)

Design Studio is deployed as a container task in ECS via the AWS command line in a pipeline (e.g. via a GitHub action).

Amazon Cognito is the service employed for user management and permissions within Design Studio.

GitHub/Bitbucket is used by Design Studio as a version control system for target (frontend web app) sources and pipelines to compile the targets to static websites.

Other services are deployed the same way as under the HUB Cluster.

Network Diagram

Studio Network Diagram

HUB Cluster in AWS

Overview of Services

Hub Resources (AWS)

HUB Client Target (Frontend) is stored in AWS S3 buckets as a static website and handled under a CloudFront distribution.

HUB Instance (Backend) is deployed as a container task in ECS via the AWS command line in a pipeline.

All these container tasks sit behind an ALB (Application Load Balancer), and requests are routed accordingly.

MSK (Managed Streaming for Apache Kafka) is the streaming layer of the backend, where the user journey executions are handled within different topics (see the Hub backend docs for more details).

ElastiCache Redis is used to cache the files processed by the backend (e.g.: documents uploaded by the end-user).

Request Flow

Request-Flow-AWS

The onboarding app (Hub Client Target) is downloaded into the user's browser through the CloudFront distribution.

Each request from the app is made through the Application Load Balancer.

Data Flow

Digital Onboarding (DoB) Data Flow.drawio.png

Zenoo Components

HUB Instance (Runtime Backend)

The backend is a JVM service written in Java/Groovy, based on the Spring Boot framework. Workflow execution state is stored in Kafka topics.

The result of the build pipeline is a Docker image that runs the backend as a stand-alone server.

Example resources where the target is deployed:

On AWS:

On Azure:

HUB Client Target (Frontend)

The frontend is based on the ReactJS framework.

The result of the build pipeline is a set of static HTML & JavaScript files that can be served from a CDN (Content Delivery Network) or by a standard HTTP server such as nginx.

Example resources where the target is deployed:

On AWS:

On Azure:

AWS CloudFront Distribution

CloudFront is a web service that delivers the onboarding web application content. The configuration for each delivery is defined as a distribution:

CloudFront-Distributions

The behaviour of each distribution contains settings such as object compression, viewer protocols (HTTP or HTTPS), allowed methods and caching:

CloudFront-Distribution-Edit-Behavior

Multiple paths can be defined to route requests to different origins, such as the backend load balancer, besides fetching the frontend content from the S3 bucket origin:

CloudFront-Distribution-Behavior

Design Studio

Design Studio is a stateless NodeJS service running as a container in ECS. It is responsible for the UI and flow management of the hub client targets and their deployments.

Minimum Requirements

The minimum AWS resource tiers and units required to run the hub components are listed below.

MSK Number of Brokers (kafka.t3.small) 2
GB per month 100
EC2 Number of instances (t3.large) 1
GB per month 60
ElastiCache Number of nodes (cache.t3.small) 1
ALB Number of LCUs 1
WAF Web ACLs 1
Rules 10
Requests max 1 mil
CloudFront Free Tier
S3 Standard - GB per month 50

Pricing Example

US East (N. Virginia) AWS Region:

Service Charges Usage Rate Sub totals
MSK Broker instance charges (instance usage, in hours) 31 days * 24 hrs/day * 2 brokers = 1,488 total hours $0.0456 (price per hour for a kafka.t3.small) 1,488 hours * $0.0456 = $67.85
Storage charges in GB-months 50 GB * 1 month * 2 brokers $0.10 (price per GB-month in US East region) 50 GB-months * $0.10 * 2 = $10
EC2 Instance usage, in hours 31 days * 24 hrs/day * 1 instance = 744 total hours $0.0832 (price per hour for a t3.large) 744 hours * $0.0832 = $61.90
Storage charges in GB-months 30 GB * 1 month $0.10 (price per GB-month in US East region) 30 GB-months * $0.10 = $3
ElastiCache Node usage, in hours 31 days * 24 hrs/day * 1 node = 744 total hours $0.034 (price per hour for a cache.t3.small) 744 hours * $0.034 = $25.29
ALB Application Load Balancer-hours and LCU-hours 31 days * 24 hrs/day * 1 ALB = 744 total hours $0.0225 (price per ALB-hour)
$0.008 (price per LCU-hour)
744 hours * ($0.0225 + $0.008) = $22.69
WAF 1 Web ACL, 10 Rules and max 1 mil requests Web ACL $5.00 per month (prorated hourly)
Rule $1.00 per month (prorated hourly)
Request $0.60 per 1 million requests
(1 ACL * $5) + (10 Rules * $1) + (1 * $0.6) = $15.60
CloudFront Free Tier
1 TB of data transfer out
10,000,000 HTTP or HTTPS Requests
2,000,000 CloudFront Function Invocations
0 0
S3 Standard 50 GB per month $0.023 (per GB-month) 50 * $0.023 = $1.15
Total (per month) $207.5

Build & Run Commands

The commands below give an overview of how each service is built and run. The actual implementation may change based on the cloud provider.

Component Commands
hub-instance-zenoo ./gradlew clean build
./gradlew bootRun
hub-client-targets-zenoo/targets/ npm i
npm start

Deployment

The examples below illustrate the steps of a typical CI/CD pipeline based on GitHub Actions. Additionally, a sample container configuration is defined as a docker-compose file below.

HUB Instance (Backend)

-- docker-compose.yml --

Sample docker-compose file to deploy the hub instance to ECS:

version: '3'
services:
  hub-instance-zenoo:
    image: '917319201960.dkr.ecr.eu-west-2.amazonaws.com/hub-instance-zenoo:v0.0.1'
    ports:
      - '0:8080'
    environment:
      SPRING_PROFILES_ACTIVE: 'stage'
    logging:
      driver: awslogs
      options:
        awslogs-group: zenoo-stage
        awslogs-region: eu-west-2
        awslogs-stream-prefix: backend

-- build.yml --

Sample GitHub build action config:

name: 'Zenoo Hub Instance - Build'

on:
  pull_request:
    branches:
      - master
      - integration/**
      - release/**
    paths-ignore:
      - 'README.md'
      - 'docs/**'
      - 'docker/**'
      - '.github/**'
  push:
    branches:
      - master
    paths-ignore:
      - 'README.md'
      - 'docs/**'
      - 'docker/**'
      - '.github/**'

jobs:
  build:
    name: 'Build and Tests'
    timeout-minutes: 30

    runs-on: ubuntu-latest

    services:
      mongodb:
        image: mongo:4.2.2
        ports:
          - 27017:27017
    steps:
      - name: 'Checkout'
        uses: actions/checkout@v2
      - name: 'Cache gradle dependencies'
        uses: actions/cache@v1.1.0
        with:
          path: ~/.gradle/caches
          key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle') }}
          restore-keys: |
            ${{ runner.os }}-gradle-
      - name: 'Setup JDK 1.8'
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: 'Grant execute permission for gradlew'
        run: chmod +x gradlew
      - name: 'Run build and tests'
        run: ./gradlew clean build

-- deploy.yml --

Sample GitHub action config for deployment:

name: 'Zenoo Hub Instance - Deploy'

on:
 deployment:
  branches:
  - master

env:
  access-key: ${{ secrets.AWS_ACCESS_KEY }}
  secret-key: ${{ secrets.AWS_SECRET_KEY }}
  cluster: zenoo-cluster-1
  config-name: zenoo-cluster-1
  profile-name: zenoo-stage
  region: eu-west-2
  launch-type: EC2
  project-name: hub-instance-zenoo
  target-group-arn: arn:aws:elasticloadbalancing:eu-west-2:917319201960:targetgroup/hub-instance-zenoo-stage/4ae349354189008b
  container-name: hub-instance-zenoo
  container-port: 5005
  image-repo: 917319201960.dkr.ecr.eu-west-2.amazonaws.com/hub-instance-zenoo:v0.0.1

jobs:
 buildAndDeployZenooStage:
  name: 'Deploy Zenoo Hub Instance to Stage'
  if: github.event.deployment.environment=='stage'
  runs-on: ubuntu-latest
  steps:
  - name: 'Starting deployment to ${{ github.event.deployment.environment }}'
    uses: deliverybot/status@master
    with:
     state: 'pending'
     token: '${{ secrets.GITHUB_TOKEN }}'

  - name: 'Setup ECS-CLI'
    uses: marocchino/setup-ecs-cli@v1
    with:
      version: v1.18.1

  - name: 'Checkout project'
    uses: actions/checkout@v2

  - name: 'Login to Amazon ECR'
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_KEY }}
      AWS_REGION: ${{ env.region }}

  - name: 'Build and upload docker image'
    run: ./gradlew jib --image ${{ env.image-repo }}

  - name: 'Cluster configuration'
    working-directory: docker/stage
    run: |
      ecs-cli configure --cluster ${{ env.cluster }} --default-launch-type ${{ env.launch-type }} --config-name ${{ env.config-name }} --region ${{ env.region }}
      ecs-cli configure profile --access-key ${{ env.access-key }} --secret-key ${{ env.secret-key }} --profile-name ${{ env.profile-name }}

  - name: 'Compose service up'
    working-directory: docker/stage
    run: |
      ecs-cli compose --project-name ${{ env.project-name }} service up --create-log-groups --cluster-config ${{ env.config-name }} --ecs-profile ${{ env.profile-name }} --target-group-arn ${{ env.target-group-arn }} --container-name ${{ env.container-name }} --container-port ${{ env.container-port }}

  - name: 'Deployment success'
    if: success()
    uses: deliverybot/status@master
    with:
     state: 'success'
     token: '${{ secrets.GITHUB_TOKEN }}'

  - name: 'Deployment failure'
    if: failure()
    uses: deliverybot/status@master
    with:
     state: 'failure'
     token: '${{ secrets.GITHUB_TOKEN }}'

HUB Client Target (Frontend)

-- deployment.yml --

Sample GitHub action config for deployment:

name: Deploy Zenoo target

on: ['deployment']

env:
  access-key: ${{ secrets.AWS_ACCESS_KEY_ID }}
  secret-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  region: eu-central-1

jobs:
  buildAndDeployZenooEkyc:
    name: "Deploy Zenoo Target"
    runs-on: ubuntu-latest
    if: github.event.deployment.payload=='zenoo' && github.event.deployment.environment=='stage'
    steps:
      - uses: actions/checkout@v2
      - name: Update NPM
        run: sudo npm install -g npm@latest
      - name: Authenticate with registry
        run: echo "//nexus.zenoo.com/repository/npm-internal/:_authToken=${{secrets.ZENOO_NPM_TOKEN}}" > ~/.npmrc
      - name: Setting Nexus as default registry for zenoo packages
        run: npm config set @zenoo:registry http://nexus.zenoo.com/repository/npm-internal/
      - name: Installing NPM
        working-directory: targets/zenoo
        run: npm i
      - name: Build Target application for STAGE
        working-directory: targets/zenoo
        run: npm run build
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ env.access-key }}
          aws-secret-access-key: ${{ env.secret-key }}
          aws-region: ${{ env.region }}
      - name: Copy files to S3 Bucket
        run: |
          aws s3 sync ./targets/zenoo/build/development/static/ s3://zenoo.onboardapp.io --acl public-read --source-region ${{ env.region }} --region ${{ env.region }}

Performance

Throughput

Concurrency

Availability

Security Overview