Introduction
Zenoo provides a specialized platform for building, defining and orchestrating Digital Onboarding (DO) processes. The Zenoo Hub makes it possible to reconfigure a process in order to alter its orchestration.
Purpose
The Zenoo Platform has been built from the ground up by experienced developers, product managers and UX experts to solve a number of challenges facing businesses that need to onboard customers.
Zenoo is:
- Generic - enables DO process modeling using a few basic building blocks
- Flexible - easy to customize, configure and make changes on per-customer basis
- Extensible - easy to add new functionality and integrate 3rd party services
- Scalable - can handle increased traffic
- Resilient - responsive in case of failure
- Transparent - easy to monitor and troubleshoot
Our aim is to arm developers with a toolkit that makes building, managing and optimizing customer interactions less burdensome and more enjoyable, while improving the bottom line for businesses that embrace our approach. We do this by ensuring each customer interaction is unique and optimized to maximize conversions.
Easy onboarding with Zenoo
The Zenoo architecture has been built with an understanding that not all onboarding channels or customers are the same. With this in mind, the Zenoo Hub can initiate an onboarding experience either when a customer takes action (such as clicking on a calculator) or through an API call (onboard this customer).
As an example, if a customer applies for a loan on a partner website, the process would be as follows:
- The customer visits your website and is asked to complete a DO process.
- The website initiates a specific DO process by redirecting the customer to the Zenoo Hub Client.
- The Zenoo Hub Client engages the Hub Backend API to manage the DO process orchestration, processing data, performing checks, and other functions.
- The website receives the DO process result and responds with a redirect URL.
- The customer is redirected back to the website using the redirect URL.
Hub Backend
More details can be found under specific sections below.
- Architectural Overview
- Component model
- Workflow engine
- Connectors
- File cache
- Testing
- Metrics
- Hub Client API
- Admin API
- Customization and configuration
- Kafka SSL
- Kafka Topics
- Amazon MSK
Architectural overview
At the core of the Zenoo Hub is a workflow engine that executes Hub DSL scripts. The DSL scripts are used for orchestrating digital-onboarding processes as a series of pages, data transformations and external calls.
The DSL-based approach makes it possible to specify digital-onboarding processes in a concise manner. It enables developers to focus on the business logic rather than the complexities of distributed systems.
The Zenoo Hub is built on top of Apache Kafka using an event-streaming, micro-service architecture. This makes the Hub highly scalable and fault-tolerant.
Each workflow execution produces a detailed log of Execution events that can be used for troubleshooting as well as analytics purposes.
DSL execution engine
At the core of the Zenoo Hub is the DSL execution engine. It executes the Hub DSL scripts that are used for orchestrating digital-onboarding processes. The host language for the DSL is Groovy.
The DSL scripts are versioned and stored in a Component repository as Hub Components. The Hub employs a component model to facilitate reusability, testability and configurability, making it possible to build new components from existing ones.
Each workflow execution is assigned an Execution Context that stores the current state of the execution. The execution contexts are persisted and retrieved using a Kafka Streams state store. Leveraging Kafka fault-tolerance capabilities, a replicated changelog topic is maintained to track any state updates.
Each workflow execution produces a detailed log of Execution events. These include life-cycle events, execution requests, responses, errors, executed commands, results, etc. The execution events can be very useful for troubleshooting as well as analytics purposes.
More details can be found here.
Hub Client (Frontend)
A Hub Client facilitates an interaction between the Hub and an end user. From a Hub Client perspective, a customer journey is a series of pages. It relies on the Hub to determine what page to display next. Apart from that, it gathers user input and submits data back to the Hub via Hub Client API.
A Hub Client uses the Hub Client API for the following:
- to start a new execution using a target or sharable token
- to submit user input and resume the execution
- to query the execution state and current route
- to upload files using the File cache
- to execute route functions
Component repository
The Hub DSL scripts are stored in a Component repository as Hub Components with the support for versioning.
A component model is employed to facilitate reusability, testability and configurability of Hub components, enabling a development model where new components are built from existing ones.
The Admin API makes it possible to register, query and validate Hub components on-the-fly. This enables making changes without the need to rebuild and redeploy the Hub.
Connector exchanges
Connectors are the integration points of the entire workflow orchestration. They are wrapped by exchange commands used within the DSL.
Throughout a workflow execution, external and internal providers can be called by means of exchanges that trigger the connectors. The connectors fetch the results, and at each step the workflow decides what to do with the provider responses.
- Exchange processor
  - processes connector requests
  - handles connector failures using retries with different retry strategies and timeouts
  - produces execution requests with connector responses
  - invokes connectors via their reactive interface
Monitoring
The Zenoo Hub employs Micrometer — a vendor-neutral application metrics facade — to integrate with the most popular monitoring systems.
Micrometer has a built-in support for AppOptics, Azure Monitor, Netflix Atlas, CloudWatch, Datadog, Dynatrace, Elastic, Ganglia, Graphite, Humio, Influx/Telegraf, JMX, KairosDB, New Relic, Prometheus, SignalFx, Google Stackdriver, StatsD, and Wavefront.
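For example, a counter can be registered against any Micrometer registry and incremented from application code. The following Groovy sketch uses a SimpleMeterRegistry; the meter name and tag are illustrative, not actual Hub metrics:

```groovy
// Illustrative only: the meter name below is hypothetical, not an actual Hub metric.
import io.micrometer.core.instrument.Counter
import io.micrometer.core.instrument.simple.SimpleMeterRegistry

def registry = new SimpleMeterRegistry()

// Count started executions, tagged by type (workflow vs function)
def started = Counter.builder('hub.executions.started')
        .tag('type', 'workflow')
        .register(registry)

started.increment()
```

Swapping SimpleMeterRegistry for, e.g., a Prometheus registry publishes the same meters to the chosen backend without code changes.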
More details can be found here.
Component model
The Zenoo Hub employs a component model so that an onboarding solution can be composed of components.
Components are reusable building blocks that are configurable and testable. Each component provides a cohesive piece of functionality that is well tested, documented and can be reused in different contexts or clients.
This approach reduces complexity in many aspects of development. Building from smaller, well-tested pieces of functionality becomes significantly simpler and more manageable.
Let's review an example onboarding project that is assembled from several components.
Two of those, otp and document-check, are ready-to-use components.
- The huddle component contains the main workflow, business logic and project-specific configuration.
- The huddle.routes component contains project-specific route definitions.
- The document-check component provides ID document and liveness check functionality via a set of workflows and functions; the configuration includes the country and API credentials.
- The otp component provides a workflow to verify an SMS OTP code using the customer's mobile; the configuration includes the number of retries, country code, OTP provider, etc.
Hub Component
A Hub Component is a reusable building block providing a cohesive piece of functionality that is configurable and testable.
Each component is identified by a unique name and version. It explicitly declares its dependencies on the other components it uses.
It defines one or more DSL closures that are used for execution by the DSL engine; these include target, workflow, function, mapper, route and exchange.
A Hub component is defined as follows:
- name, a unique name of the component,
- version, the component version; if omitted, a SHA-256 hash is generated,
- definition, a set of DSL closure definitions using the Component DSL,
- dependencies, a set of resolved references to other components and connectors used in the definition.
Component repository
Hub components are stored in a component repository. The components are then retrieved by the Execution Engine when a new execution is triggered.
The components are stored in the components Kafka topic with an indefinite retention policy. In addition, the repository keeps track of the latest component versions using the components-latest Kafka topic.
The process of registering a new Hub component consists of several steps:
- validates the component definition based on the Component DSL,
- checks the availability of component dependencies,
- validates each DSL closure based on the Hub DSL,
- resolves the versions of component dependencies,
- generates the component version, if omitted, using a SHA-256 hash of the component,
- registers the component.
The component repository provides a REST API for registering, validating and querying Hub components, see Admin API.
Additionally, Hub components can be registered automatically at the application start using ComponentConfigurer.
Component DSL
A Hub component defines one or more DSL closures and a set of dependencies on other components and connectors using the Component DSL.
Target
A target specifies a workflow or a function that can be executed via the Hub Client API.
A Hub component can define only a single target. It acts as an entry point for a given onboarding process.
In addition, a target can specify a custom configuration for component dependencies, such as API credentials for connectors.
target {
workflow('workflow-name')
}
Workflow
Defines a workflow with name using the Hub DSL as a series of routes, exchanges, workflows, functions, etc.
In the workflow definition, it is possible to use the DSL closures defined within the component and declared dependencies.
workflow('name') {
definition
}
Function
Defines a function with name using the Hub DSL as a series of exchanges, functions, data mappings and transformations.
In the function definition, it is possible to use the DSL closures defined within the component and declared dependencies. A valid function does not contain any route or workflow.
function('name') {
definition
}
Route
A route corresponds to a user interaction, a web page or a screen, depending on a Hub Client implementation.
A route can be used in a workflow using the route name, see more details.
route('name') {
uri '/uri'
export data
validate { payload specification }
checkpoint()
terminal()
}
- uri identifies a route for a Hub Client and is mandatory
- export is used to pass data to a Hub Client, like lists and key-value maps
- validate specifies data constraints for the route result submitted by a Hub Client
- checkpoint() marks the route as a checkpoint, disables going back
- terminal() marks the route as terminal and checkpoint
Exchange
An exchange is a connector proxy. It makes external (API) calls using an HTTP connector or a custom connector. It provides the following tools for handling connector failures:
- timeout, an exchange fails with an error if the connector does not respond within the specified timeout
- retry strategy, retries failed connector requests
- fallback, a workflow, function or expression that is executed when an exchange fails
- validate, specifies a valid connector response, an exchange fails with an error if the connector response doesn't pass the validation
An exchange can be used in a function or workflow using the exchange name, see more details.
exchange('name') {
http {
definition
}
fallback {
definition
}
validate {
payload specification
}
}
Mapper
An attribute mapper transforms an input into a result using an expression. A mapper may be used for data mappings, transformations, calculations, etc.
mapper("name") {
input ->
expression
}
A mapper can be used in a function or workflow using the mapper name, see more details.
As an example, the following mapper generates a client full-name using the firstname and lastname.
mapper("client-fullname") {
input -> [ fullname: "$input.client.firstname $input.client.lastname" ]
}
Dependencies
A Hub component explicitly specifies its dependencies to other components and connectors.
The dependencies are declared as part of the component definition using dependencies block.
A component dependency is referenced using a component name@version. The latest version is used if the version is omitted.
A connector dependency is referenced using the connector's fully qualified name.
Optionally, you can configure a component dependency by providing a configuration as below. The configuration is then accessible as context attributes for workflows, functions, etc.
For example:
dependencies {
connector 'sms@otp:1.2.0'
component 'zenoo.playground'
component 'zenoo.otp:2.4', [countryCode: '+420', tries: 3]
}
Workflow Execution Engine
At the heart of the Zenoo Hub is a workflow engine that executes Hub DSL scripts. These DSL scripts are then used for orchestrating corresponding digital-onboarding processes as a series of pages (routes), external calls, etc.
The DSL scripts are versioned and stored in the component repository as Hub components. This approach makes it possible to make changes on-the-fly without having to rebuild and redeploy the Zenoo Hub.
There are two types of executable DSL scripts:
- workflow, a series of user interactions, data transformations and external calls,
- function, a series of external calls and data transformations.
Execution context
Each workflow execution is assigned an Execution Context that stores the current state of the execution.
An Execution context stores the following:
- UUID, unique execution ID,
- parent UUID, parent execution UUID, set when executing sub-workflows/functions,
- sharable token, that was used for starting the execution,
- Execution events, generated throughout the execution, see Execution events,
- Context attributes, stores execution JSON-like data, see Context attributes.
Execution life-cycle
A new execution is triggered by an Execute request. Typically, an Execute request is generated by a Hub Client via the Hub Client API.
In addition, executions may produce Execute requests to trigger sub-workflow, function or route function executions.
An execution is terminated when one of the following criteria is met:
- a terminal route is executed,
- result() or error() command is executed,
- the whole DSL script is executed.
An execution becomes expired when the execution duration exceeds the configured corresponding expiration, see here.
When an execution terminates or expires, the corresponding execution context is discarded.
Execution model
An execution can be thought of as a series of DSL commands based on the DSL script being executed.
When a command finishes, the corresponding command result gets stored as a context attribute using the command namespace setting.
Once a command result is set, it can be used by subsequent commands and for making flow control decisions.
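For example, a command result stored under a namespace can drive a later flow-control decision. In this sketch (the exchange name, URL attribute and status field are illustrative), a match inspects the stored exchange result:

```groovy
// Sketch: the exchange stores its result under the api namespace,
// which a subsequent match uses for a flow-control decision.
exchange('status-api') {
    http {
        url config.api.url
    }
    namespace api
}
match (api.status == 'OK') {
    route 'success'
}
```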
Further aspects of the execution model include:
- synchronous and asynchronous commands
- sub-workflows, a/sync functions and route functions
- going back and route checkpoints
- route submit validation
- exchange/route payload reduction
- exchange error handling, fallback and validation
Execution processors
At the core, the execution engine uses a stateful Kafka Streams processor to process incoming execution requests and produce corresponding responses.
It uses a state store to persist and retrieve corresponding Execution contexts. Leveraging Kafka fault-tolerance capabilities, a replicated changelog topic is maintained to track any state updates.
The Execution processor processes incoming Execution requests stored in the execution-requests Kafka topic.
There are several ways execution requests are produced:
- API Gateway via the Hub Client API,
- Exchange Processor to submit connector results,
- Execution Processor to trigger a child execution.
Each execution produces a detailed log of Execution events stored in the execution-events Kafka topic. These include life-cycle events, execution requests, responses, errors, executed commands, results, etc.
The Execution processor produces Execution responses stored in the execution-responses Kafka topic. These include routes, function results and errors.
The API Gateway uses the execution responses for corresponding request queries, see Request API.
In addition, the Execution processor produces Exchange requests, stored in the exchanges Kafka topic, that are handled by the Exchange Processor.
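Since execution events live in an ordinary Kafka topic, a standard consumer can read them for analytics. This Groovy sketch uses the plain Kafka client; the consumer group name and the assumption that events deserialize as strings are illustrative:

```groovy
import org.apache.kafka.clients.consumer.KafkaConsumer
import java.time.Duration

def props = [
    'bootstrap.servers' : 'localhost:9092',
    'group.id'          : 'analytics',          // hypothetical consumer group
    'key.deserializer'  : 'org.apache.kafka.common.serialization.StringDeserializer',
    'value.deserializer': 'org.apache.kafka.common.serialization.StringDeserializer',
]
def consumer = new KafkaConsumer<String, String>(props)
consumer.subscribe(['execution-events'])
while (true) {
    // Each record is one execution event, keyed by execution
    consumer.poll(Duration.ofSeconds(1)).each { record ->
        println "execution=${record.key()} event=${record.value()}"
    }
}
```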
Context Attributes
The execution context attributes store JSON-like data related to an execution, like user input, connector responses and configuration. The attributes are used for sharing data between different DSL commands and for making flow control decisions.
An attribute is accessed by its key, using . (dot) for hierarchical access, e.g. client.address.city.
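A short sketch of setting and reading hierarchical attributes (the attribute names are illustrative):

```groovy
// Store JSON-like data under the client namespace...
client << [address: [city: 'Prague', zip: '11000']]

// ...and read it back with dot notation in a flow-control decision
match (client.address.city == 'Prague') {
    route 'local-offer'
}
```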
Command namespace
Throughout a workflow execution, DSL commands store their results as context attributes using namespace as attribute key.
The whole attribute namespace is overwritten, and any existing attributes stored within the namespace are lost.
E.g. a client route result will be stored in the application.client namespace.
route('client') {
uri '/client'
namespace application.client
}
Setting using <<
In addition, it is possible to set context attributes directly using the << operator.
config.logo << 'http://logo.png'
products << [product1: "Product1", product2: "Product2"]
The << operator merges the existing namespace with the specified payload, unlike using a command namespace.
It can be used for gathering data from multiple commands using the same namespace.
application << route('basic info')
application << route('advanced info')
Default values
The Elvis operator ?: can be used for providing a default value when an attribute is not set.
config.retries ?: 3
Remove attribute
To remove an attribute namespace, use the remove DSL command.
remove client.test
remove 'toRemove'
Payload validation
It is possible to specify the data structure and constraints for an attribute payload, see Payload specification for more details.
The payload specification is then used for payload validation using the validate or require blocks.
- validate is used in DSL commands, like route and exchange, to validate a command result.
- require is used in a workflow definition to enforce data constraints, like input attributes.
Require payload
Checks if a given attribute is set and matches a payload specification. Otherwise, the corresponding execution terminates with an error.
It can be used for enforcing data constraints in a workflow, like input attributes.
Also, the require() expression result can be used for setting another attribute.
Example: check if input.test is not empty and set the test attribute:
input ->
test << require(input.test)
Example: check if input contains firstname and lastname:
input ->
require(input) {
firstname
lastname
}
Payload specification
A payload specification defines the data structure and constraints of an attribute payload.
For key-value maps, it is possible to specify each key name and the corresponding data constraints for values using the provided validators. If no validator is specified, the required() validator is used by default.
You can use the following validators:
- required() must not be empty,
- optional() may or may not be empty,
- string() must be a string,
- number() must be a number,
- list() must be a list,
- truefalse() must be a boolean,
- file() must be a file descriptor,
- file mimeType must be a file descriptor and match the mime type, e.g. application/pdf, image, etc.,
- oneOf value1, value2, ... must be one of the specified values,
- regex ~/pattern/ must match a regular expression; the pattern can be a string or use the Groovy regex pattern syntax.
A validate example:
validate {
    firstname
    lastname
    address {
        city { oneOf "Prague", "Paris" }
        zip { regex ~/^[0-9]{5}(?:-[0-9]{4})?$/ }
    }
    age { number() }
    idFront { file 'image' }
}
See validate examples for more details.
Validate payload Examples
- a single mandatory field
validate {
mobile
}
- a mandatory field with regex validator
validate {
mobile { regex ~/[0-9]{5}/ }
}
- a mandatory field with value validator
validate {
product { oneOf "product1", "product2", "product3"}
}
- mandatory nested fields
validate {
firstname
lastname
address {
street
city
state
zip
}
}
Hub Domain Specific Language (DSL)
The Hub DSL provides an implementation model for expressing digital onboarding solutions in a concise manner without superfluous details, letting developers focus on the business logic.
In addition to its main purpose, the Hub DSL supports these objectives:
- enables transparent handling of failures,
- facilitates working with data payloads, data mapping and transformation,
- bridges the gap between developers and domain experts using a common language.
The Hub DSL provides the following features:
- Context attributes for storing and working with JSON-like data
- DSL commands implement the basic building blocks of a digital onboarding process
- route represents user interactions
- exchange makes external calls using connectors
- workflow executes child workflows
- function executes child functions
- mapper used for data mappings and transformations
- sharable generates and manages sharable links
- result/error terminates the current execution
- Flow control commands allow for conditional execution
DSL Commands
Route
A route represents an interaction with a user.
Typically, the goal is to display route-specific information and gather input from the user. A route is rendered by a Hub Client as a web page or mobile app screen, depending on the Hub Client implementation.
A route is identified by its name and can be used in a workflow definition. A minimal definition specifies a route uri intended for a Hub Client.
route('name') {
uri '/uri'
}
Definition and usage
It is possible to provide a route definition inline within a workflow definition.
workflow('test') {
route('name') {
uri '/uri'
}
}
Another option is to define a route as part of a Hub component and use it in a workflow by referencing the route name. This approach facilitates route reusability and separation of concerns.
route('name') {
uri '/uri'
}
workflow('test') {
route('name')
}
Additionally, it is possible to reference a route by name and provide additional details when used in a workflow. This allows for separating a route definition (uri, data constraints) and its usage (export, namespace, checkpoint).
route('client') {
uri '/client'
validate {
firstname
lastname
}
}
workflow('test') {
route('client') {
export documents
namespace application.client
}
}
Route Result
A route result is stored using a namespace attribute key.
route('client-info') {
uri '/client-info'
namespace client
}
If a validate block is specified, a route result is validated before storing the result and resuming the execution. The route submit request results in a validation error if the validation fails.
route('client-info') {
uri '/client-info'
namespace client
validate {
firstname
lastname
idFront { file 'image'}
}
}
Exporting data
In order to pass data to a route, export is used. Any JSON-like data can be exported, using context attributes or serializable values.
products << [product1: "Product1", product2: "Product2"]
route('products') {
uri '/products'
export products
}
route('greeting') {
uri '/greeting'
export message: "Hello world!"
}
Route check-point
A route can be marked as a check-point, meaning it is no longer possible to go back to a previous route.
route('finish') {
uri '/finish'
checkpoint()
}
Terminal route
A terminal route marks the end of a workflow execution. The corresponding execution is terminated when a terminal route is executed.
Also, a terminal route is implicitly a check-point.
route('finish') {
uri '/finish'
terminal()
}
In addition, it is possible to set an execution result payload using a terminal route.
route('finish') {
uri '/finish'
terminal(payload)
}
Route functions
A route function allows a Hub Client to execute functions in the context of the given route. A Hub client executes a route function via Hub Client API.
Some use-cases of route functions:
- dynamic queries based on user input, like auto-complete,
- asynchronous data processing, like document OCR,
- communication between different executions.
route('name') {
uri '/uri'
function('fnc1') {
context initial
namespace fnc1
}
}
- context sets the initial execution context for the corresponding function execution
- namespace stores the result of the route function execution
It is possible to specify one or more route functions.
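As an illustration of the dynamic-query use case, a route could expose a hypothetical auto-complete function; the function name, context payload and namespaces below are assumptions:

```groovy
// Sketch: a Hub Client calls the 'address-lookup' route function with the
// user's partial input; results land under the suggestions namespace.
route('address') {
    uri '/address'
    function('address-lookup') {
        context country: 'GB'
        namespace suggestions
    }
}
```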
See route examples for more details.
Exchange
An exchange is a connector proxy. It makes external (API) calls using an HTTP connector or a custom connector.
It provides the following tools for handling connector failures:
- timeout, an exchange fails with an error if the connector does not respond within the specified timeout
- retry strategy, retries failed connector requests
- fallback, a workflow, function or expression that is executed when an exchange fails
- validate, specifies a valid connector response, an exchange fails with an error if the connector response doesn't pass the validation
An exchange is executed asynchronously when marked with async().
HTTP connector
An exchange can use a built-in HTTP connector to make external calls, see more details.
exchange('name') {
http {
definition
}
}
Custom connector
Optionally, an exchange can use a custom connector configured with config.
exchange('name') {
connector('custom')
config input
}
Exchange Result
An exchange result is stored using a namespace attribute key.
exchange('localhost-api') {
http {
url "https://localhost:8080/api"
}
namespace api
}
Exchange Result Validation
If a validate block is present, an exchange result is validated before storing the result and resuming the execution. An exchange fails with an error if the result validation fails, see fallback.
exchange('status-api') {
http {
url config.api.url
}
validate {
status
}
namespace api
}
Exchange Fallback
A fallback defines a workflow, function or expression that is executed when an exchange fails with an error. This may happen due to a connector error response, timeout or a failed result validation.
exchange('status-api') {
http {
url config.api.url
}
fallback {
route 'Error'
}
}
Exchange Timeout
It is possible to set an exchange timeout in seconds. The default value is 30 seconds.
An exchange fails with an error if the underlying connector doesn't respond within the specified timeout.
exchange('status-api') {
connector('custom')
timeout 10
}
Exchange Retry strategies
An exchange uses a retry strategy to retry when a connector request fails. The default strategy uses fixed delays between retry attempts.
The following retry strategies are available:
Fixed backoff
Uses fixed delays between retry attempts, given a number of retry attempts and the backoff delay in seconds.
- retry the number of retry attempts, the default is 5
- backoff the number of seconds between retries, the default is 5
exchange('fixed-default') {
http {
url config.api.url
}
fixedBackoffRetry()
}
exchange('fixed-custom') {
http {
url config.api.url
}
fixedBackoffRetry {
retry 10
backoff 2
}
}
Exponential backoff
Uses a randomized exponential backoff strategy, given a number of retry attempts and minimum and maximum backoff delays in seconds.
- retry the number of retry attempts, the default is 5
- backoff the minimum delay between retry attempts, the default is 5
- maxBackoff the maximum delay between retry attempts, the default is 50
exchange('exp-default') {
http {
url config.api.url
}
exponentialBackoffRetry()
}
exchange('exp-custom') {
http {
url config.api.url
}
exponentialBackoffRetry {
retry 3
backoff 5
maxBackoff 10
}
}
No retry
Does not retry when a connector request fails.
exchange('name') {
http {
url config.api.url
}
noRetry()
}
function
- like workflow but without user interactions (route, workflow)
- can be executed asynchronously
- separate execution with different UUID, data passed using context and input
A function makes it possible to query dynamic data, perform complex calculations or make external calls using exchange(). Functions can be executed from a workflow or from another function.
- name a function name
- input a function input
- context execution context for function execution
- namespace a namespace to store function result
- async() the function will be executed asynchronously
function('mobile.lookup') {
input mobile: '325-135856984'
context retry: 3
namespace lookup
async()
}
workflow
Executes a sub-workflow synchronously as a separate workflow execution with a different UUID. Data is passed using context and input. The parent execution is terminated if the sub-workflow terminates with a terminal route.
- name a workflow name
- input a workflow input
- context execution context for workflow execution
- namespace a namespace to store workflow result
workflow('otp') {
input mobile: '325-135856984'
context retry: 3
namespace otp
}
mapper
An attribute mapper transforms an input into an attribute output using a mapper expression, see Mapper. The output gets stored in a namespace if specified. It can be used for data transformations, calculations, providing default values, etc.
mapper('name') {
input input
namespace namespace
}
path
Executes a registered path (a workflow snippet) specified by name. It is part of the current execution and can access and update the execution context.
path 'name'
Execution result
The result() and error() commands terminate the current execution, successfully or with an error.
In addition, an execution result or error payload can be specified.
Terminates an execution successfully with a result payload
result application
result firstName: checkIdp.firstName, lastName: checkIdp.lastName
or with an empty result payload
result()
Terminates an execution with an error using the specified payload
error "Boom"
error otp
or with an empty error payload
error()
Query execution context
Query and retrieve an active execution (not terminated or expired) using the execution command.
A current or parent execution is queried by specifying current() or parent(), respectively.
In addition, a context limits the query result to the specified attribute key/namespace. The whole execution context is returned if context is omitted.
- current() to query current execution
- parent() to query parent execution
- context namespace to limit matching execution context
- namespace to store execution context result
The following example queries a parent execution context and limits the result to the counter namespace. The query result is stored in the parent namespace.
execution {
parent()
context counter
namespace parent
}
sharable
Generates a sharable token or link. The token is then used to start a new workflow or function execution, continue an existing one, etc. A token expires when the corresponding execution has started and finished.
- url if provided generates a link using the sharable token
- token specifies the sharable token manually rather than using the generated one
- function a name of function to execute, optionally execution input and context can be provided
- workflow a name of workflow to execute, optionally execution input and context can be provided
- expired() expires the specified token
- current() sharable token used for the current execution
- reusable() the token does not expire after corresponding execution terminates
- namespace stores generated token or link (if url provided) as String
Examples of usage:
token << sharable { function 'function-name' }
sharable {
reusable()
function 'function-name'
namespace token
}
Generates a sharable token to execute a function named function-name. The token gets stored in the token namespace.
sharable {
url "http://localhost:1234/sharable/$token"
workflow('workflow-name') {
context url: 'http://localhost'
input userId: 'dummy123'
}
namespace link
}
Generates a sharable link to execute a workflow named workflow-name with input and context. The link gets stored in the link namespace.
sharable {
token 'vJRRTX'
expired()
}
Expires a specific sharable token.
sharable {
token current()
expired()
}
Expires the sharable token that was used to start the current execution.
token << sharable { current() }
Queries the sharable token for the current execution.
Exporting namespaces
A context attribute namespace can be exported and queried using the Execution API:
export config
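As an illustrative sketch (the namespace name and attribute values here are assumptions, not documented platform examples), a workflow might export a namespace that a Hub client can then read through the Execution API's export endpoint:

```groovy
// Illustrative only: populate and export the "client" namespace
client << [firstname: 'Joe', lastname: 'Bloke']
export client
```

The exported namespace could then be queried via GET /api/gateway/execution/{uuid}/export/client, as described under Get exported namespace.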
Flow control
match
Executes a DSL script definition when an expression evaluates as true. The expression can contain context attributes.
match (expression) {
definition
}
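For instance (client.age is a hypothetical context attribute), a match block might gate a route on an expression:

```groovy
// Executes the route only when the expression evaluates as true
match (client.age >= 18) {
    route("Adult") {
        uri "/adult"
    }
}
```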
exist
Executes a DSL script definition when an attribute is set.
exist (attribute) {
definition
}
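As a sketch (client.email is a hypothetical attribute), an exist block might run a confirmation route only once the attribute has been set:

```groovy
// Executes the route only when client.email is set
exist (client.email) {
    route("Confirm email") {
        uri "/confirm-email"
    }
}
```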
switch / case
The switch statement matches an expression against cases and executes the matching case. It is a fall-through switch-case: you can share the same code for multiple matches, or use the break command. It supports several kinds of matching: collection case, regular expression case, closure case and equals case.
switch (expression) {
case "bar":
route "Bar"
break
case ~/fo*/:
route "Foo"
break
case [4, 5, 6, 'inList']:
route "Matched"
break
default:
route "Default"
}
loop-until
Executes a DSL script definition repeatedly until an expression evaluates as true.
loop {
definition
} until { expression }
The maximum number of attempts can be specified.
loop(3) {
workflow
} until { expression }
In addition, the attempt counter can be accessed as follows:
loop(3) {
attempt ->
route('test') {
export attempt
}
} until { expression }
Payload specification
It is possible to specify the data structure and constraints for an attribute payload.
The payload specification is then used for payload validation in the validate or require blocks.
- validate is used in DSL commands, like route and exchange, to validate a command result
- require is used in a workflow definition to enforce data constraints, like input attributes
It specifies a result (fields) structure and data constraints. A field is defined by a name, data constraints (validators) and nested fields. The default validator for a field is mandatory, i.e. a field is mandatory if listed.
You can use the following validators:
- string() a field must be a string
- number() a field must be a number
- truefalse() a field must be a boolean
- file() a field must be a file descriptor
- file mimeType a field must be a file descriptor and match the mime type, e.g. application/pdf, image, etc.
- oneOf value1, value2, ... a list of values; the field data must be one of the values
- regex ~/pattern/ a regular expression to match the field data; the expression can be a string or a Groovy regex pattern
- list() a field must be a list; you can specify the data constraints for the list items
A validate example is below:
validate {
    firstname
    lastname
    address {
        city { oneOf "Prague", "Paris" }
        zip { regex ~/^[0-9]{5}(?:-[0-9]{4})?$/ }
    }
    skills {
        list { oneOf 1, 2, 3, 4, 5 }
    }
}
Route DSL Examples
- a terminal route
route("Finish") {
uri "/finish"
terminal()
}
- a terminal route with a checkpoint
route("Rejected") {
uri "/rejected"
checkpoint()
terminal()
}
- a route with result validation
route("Basic Info") {
uri "/basic"
namespace client
validate {
firstname
lastname
address {
street
city { values "Prague", "Paris" }
zip { regex ~/^[0-9]{5}(?:-[0-9]{4})?$/ }
}
}
}
- a route exporting a static map
route("Select Product") {
uri "/product"
namespace product
export product1: "Product 1",
product2: "Product 2",
product3: "Product 3"
validate {
values "product1", "product2", "product3"
}
}
- a route exporting an attribute
delivery.address << [street: "Dejvicka 18", city: "Prague", zip: 12345]
route("Delivery address") {
uri "/delivery"
export address: delivery.address
}
- a route registration
register {
route("Delivery address") {
uri "/delivery"
export delivery.address
}
}
route("Delivery address") {
export client.address
}
Execution events
Each workflow execution produces a detailed log of execution events. These include life-cycle events, execution requests, responses, errors, executed commands, results etc.
The execution events are stored in the execution_events Kafka topic. They are timestamped and correlated by execution UUID.
When aggregated, execution events can be used for troubleshooting. They can be processed for real-time metrics and analytics purposes.
Execution life-cycle
- Started - an event produced when an execution starts
- Expired - an event produced when an execution expires
- Terminated - an event produced when an execution terminates
Execution requests
- Execute Request triggers a new execution
- Route Submit Request
- Route Back Request
- Exchange Submit Request
Execution responses
Context events
Used as an event payload in an Execution Context Event. Context events are produced by the DSL executors for a particular execution. They provide a detailed log of a DSL script execution, such as:
- when a command is executed (CommandEvent)
- route (RouteEvent)
- exchange (ExchangeEvent)
- go back (BackEvent)
- command result is provided
- route result (RouteResultEvent)
- exchange result (ExchangeResultEvent)
- attribute set (AttributeSetterEvent)
- function execution
- executed (FunctionEvent)
- result is provided (FunctionResultEvent)
- errors
- validation error (ValidationErrorEvent)
- execution error (ErrorEvent)
- execution life-cycle
- initialized (InitEvent)
- terminated (TerminatedEvent)
Connectors
Usage
exchange('test') {
    connector('type') {
        config
    }
}
exchange('test') {
    config {
        connector config
    }
}
exchange('test') {
    http {
        http connector config
    }
}
HTTP connector
A built-in HTTP connector facilitates making HTTP calls directly from the DSL using an exchange.
It is possible to reference and use context attributes in a connector definition. The common use-cases include URL and body generation, authentication headers, etc.
An HTTP connector response is automatically converted into a context attribute based on the content type. There is built-in support for JSON and XML content types.
GET requests
Making an HTTP GET request is as simple as providing a request url:
http {
url 'https://request-url'
}
A URL can be generated using a Groovy GString and context attributes.
An example below queries GitHub repositories using a keyword attribute.
http {
url "https://api.github.com/search/repositories?q=topic:${keyword}"
}
POST requests
An HTTP POST request has the method set to POST.
The request body is set using the payload expression result. The payload expression can reference and use available context attributes.
http {
url "${middleware.url}/api/v1/client"
method 'POST'
jsonBody client
}
A JSON request body is specified using jsonBody together with the application/json content type.
http {
url 'http://localhost'
method 'POST'
jsonBody firstname: client.firstname, lastname: client.lastname
}
Optionally, it is possible to use a JSON builder, see JsonBuilder
http {
url 'http://localhost'
method 'POST'
jsonBody {
client {
firstName client.firstname
lastName client.lastname
}
}
}
Request method
The method specifies an HTTP request method. If omitted, the default method is GET.
The method can be one of the following:
DELETE
GET
HEAD
OPTIONS
PATCH
POST
PUT
TRACE
http {
url "/api/files/cache/${uuid}"
method 'DELETE'
}
Request headers
The header specifies an HTTP request header.
http {
url 'http://localhost'
header 'X-Auth', authtoken
method 'POST'
body payload
}
Content type
The contentType specifies the HTTP request Content-Type header.
http {
url 'http://localhost'
contentType 'application/json'
method 'POST'
body payload
}
Form data
The formData specifies an HTTP form request using the application/x-www-form-urlencoded content type.
http {
url 'http://localhost'
formData 'data1', content1
formData 'data2', content2
}
Authorization
Basic authentication
The basicAuth specifies HTTP basic authentication credentials.
http {
url 'http://localhost'
basicAuth 'user', 'password'
}
Hub file cache REST API
The Hub provides a REST API for caching client-uploaded files. This allows forms to be processed using only cached file descriptors, avoiding multipart data. It also improves the UX, because the user can upload files separately, which feels faster than a batch upload.
POST /api/files/cache
Uploads a new file to the cache.
Request:
- file: multipart representation of uploaded file
An example request
POST /api/files/cache HTTP/1.1
Host: localhost:8080
Content-Type: multipart/form-data
Response:
- 201 Created, a success response, file was added to cache, file descriptor is returned as a result:
HTTP/1.1 201 Created
Content-Type: application/json;charset=UTF-8
{
"uuid": "89828e1e-c834-42a2-86f1-893209f63ab5",
"fileName": "my_file.pdf",
"mimeType": "application/pdf",
"size": 123123,
"expiredOn": "2020-01-01T12:00:00Z"
}
- 400 Bad Request, error during file upload
DELETE /api/files/cache/{uuid}
Removes the file with the given uuid from the cache (deletes it from the server).
Request:
- uuid: UUID of file in cache
An example request
DELETE /api/files/cache/89828e1e-c834-42a2-86f1-893209f63ab5 HTTP/1.1
Host: localhost:8080
Content-Type: application/json;charset=UTF-8
Response:
- 200 OK, a success response, file was removed from cache
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
- 400 Bad Request, error during file deletion
Testing
Tests should be part of all user stories for a hub instance or a connector. For a quick start take a look at Instance template and Connector template. Full example with non-trivial tests can be found in Connector tutorial OTP.
The best practice is to keep anything that calls 3rd party services in the integration folder instead of test, because you don't have control over these services: they can be down, or calling them can incur costs. Tests in the test folder should be as complete as possible, and they should use mocks for connectors to external services. Running tests in the test folder should be part of the build/release action, while tests in the integration folder should be run manually by a developer as needed.
Another best practice is to split your workflow into smaller bits; ideally, the main workflow should be just a chain of calls to one-purpose sub-workflows and functions, with occasional updates of attributes. This is much more readable and more easily testable than one huge workflow hundreds of lines long. Also, whenever possible, you should first create tests for each individual sub-workflow and sub-function.
Example of a workflow that is split into smaller, more testable parts:
workflow('neo') {
workflow('document-check') {
namespace document
}
person << [
firstName : document.firstName,
lastName : document.lastName,
fullName : document.idp.biographic.fullName,
dateOfBirth : document.idp.biographic.birthDate
]
function('create-lead') {
namespace lead
input person: person
}
function('create-verification') {
namespace verification
input entityId: lead.id,
verificationRequirementId: env.salesforce.verificationId
}
function('lookup-verification-document-ids') {
input verificationId: verification.uuid
namespace documentIds
}
function('document-verification') {
input entity: lead,
idp: document.idp,
upload: document.upload.personalId,
verificationDocumentId: documentIds.idDocument
}
workflow('liveness-check') {
namespace liveness
input upload: document.upload,
documentId: documentIds.selfie
}
}
Configuration of your project
The Zenoo Hub provides extensive support for testing, and you should definitely make as much out of it as possible.
Add the hub-test-starter dependency to your project's build.gradle to access the whole testing support part of the Zenoo Hub:
ext {
hubBackendVersion = '2.135.0'
}
dependencies {
testImplementation group: 'com.zenoo.hub', name: 'backend-spring-boot-starter-test', version: hubBackendVersion
}
You also need to fine-tune the setup for the test and integrationTest tasks:
sourceSets {
integration {
groovy.srcDir "$projectDir/src/integration/groovy"
resources.srcDir "$projectDir/src/integration/resources"
compileClasspath += main.output
runtimeClasspath += main.output
}
}
configurations {
integrationRuntime.extendsFrom testRuntime
integrationImplementation.extendsFrom testImplementation
}
test {
useJUnitPlatform()
testLogging {
events "passed", "skipped", "failed"
}
}
task integrationTest(type: Test) {
useJUnitPlatform()
testClassesDirs = sourceSets.integration.output.classesDirs
classpath = sourceSets.integration.runtimeClasspath
}
processIntegrationResources {
setDuplicatesStrategy(DuplicatesStrategy.WARN)
}
Setup of the Zenoo Hub for Tests
You will need a separate hub configuration for tests. Usually you should set ComponentConfigurer to be an empty list, because you will register components as needed for individual tests. HubConfigurer should have all necessary connectors: mocked ones for test folder tests and real ones for integration folder tests.
Example of a TestConfig class:
@Configuration
class TestConfig {
@Bean
@Primary
ComponentConfigurer componentConfigurer() {
() -> List.of()
}
@Bean
@Primary
static HubConfigurer hubConfigurer(
HttpConnectorMock httpConnectorMock
) {
return new HubConfigurer() {
@Override
List<ConnectorActivator> connectors() {
return of(
ConnectorActivator.of(ComponentId.from('http'), httpConnectorMock as Connector<HttpConnectorSpec>)
)
}
}
}
}
Writing a test
Tests in the Zenoo Hub use Spock as the test framework; you can learn the basics in a tutorial on Baeldung.
The easiest way to write a test is to extend WorkflowTestSpecification. It has all the necessary methods for you to test a DSL workflow or function.
1. Prepare mocks
Use Spock's given block to set up mocks as needed. The Zenoo Hub provides the MockConnectorExchange class to easily create connector mocks (see below). The MockConnectorExchange class implements the withResult, withError and withDelay methods that you can use to configure a connector mock.
Example:
def "verify code should pass mock call"() {
given:
httpConnectorMock.mockExchange.withResult([
"status" : "approved",
"date_updated": "2022-07-21T05:19:21Z",
"account_sid" : "AC1df896fc9f8d4c30b31490b5303e925e",
"to" : "+420123456789",
"valid" : true,
"sid" : "VE39811dee2cfdfc3b65466f44e07a8dc0",
"date_created": "2021-07-22T05:17:44Z",
"service_sid" : "VAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"channel" : "whatsapp"
])
}
2. Register components and start a workflow or function
WorkflowTestSpecification contains a testBuilder attribute that helps with registering and configuring components for a test. testBuilder implements several methods to serve that end:
- setWorkflow - the workflow that will be called to run the test.
- setFunction - the same as above, but for a function. You can set either setWorkflow or setFunction, but not both.
- setInput - sets an input for the function or workflow upon its call. See DSL workflow or DSL function for details.
- setContext - sets the execution context for the function or workflow. See DSL workflow or DSL function for details.
- addDependency - adds a component dependency for the test, along with its configuration (if any is needed). Don't forget to add the component where the workflow or function itself is located.
The build method will generate a testing component, register it and its dependencies, and finally start a testing workflow from the testing component.
Example of testBuilder usage to set up the test:
expect:
def result = testBuilder.with {
function = 'send-code'
input = ['phoneNumber': '+420123456789', 'channel': 'whatsapp']
addDependency(OtpConnector.otpConnector, [
serviceSid: 'VAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
accountSid:'AC1df896fc9f8d4c30b31490b5303e925e',
authToken: 'lwqIK1nsxcaBwwv7Yuja5PTpdbD7czaI'
])
build()
}.getResult()
3. Check workflow steps and results
Once the workflow has started, it will pause on each route DSL command, waiting for the Hub Client to submit user input.
There is a simple-to-use function submit, inherited from WorkflowTestSpecification, that you can use to simulate user data entry.
You should also check that the workflow stopped on the right route at each step; for that, you can check the route part of the response function.
Example of checking route and submitting user data:
response().route.uri == '/otp'
submit([code : 123456])
The response method returns a WorkflowTesterResponse which, depending on the state of the execution, can be one of these types:
route - workflow execution is paused and awaits user input. See RouteResource. Available attributes:
- uuid,
- uri - the identifying location from route definition,
- terminal - whether the route is terminal,
- backEnabled - whether back is enabled at this route,
- export - object or list of exported attributes for the Hub Client,
- payload.
result - workflow/function has finished and result of the execution is returned. See ResultResource. There are no attributes, it just contains the object that was returned from the callable.
error - there has been an error while executing a DSL code or a connector. See ErrorResponseResource. Available attributes:
- code,
- message.
validation error - when DSL code fails to be validated. This can be either invalid DSL, a missing attribute, or invalid input.
See ValidationResult.
There is just one attribute, errors, which contains a list of ValidationError. A ValidationError has just one attribute, message.
Another useful method inherited from WorkflowTestSpecification is upload. It allows you to simulate a user uploading a file through the Hub Client. The method uploads a file to the test hub instance and returns a FileDescriptor that you can use as a parameter for the submit method.
Example:
@Value("classpath:test-files/idFront.jpg")
Resource idFrontResource
def "should pass document check"() {
given:
testBuilder.with {
workflow = 'document-check'
addDependency(NEO_WORKFLOWS)
build()
}
expect:
response().route.uri == '/id-upload'
def idFrontUpload = upload(idFrontResource)
submit(personalId: [idFrontUpload])
def checkOCR = response().route
checkOCR.uri == '/check-idp'
submit(retry: false)
}
In addition to testing the happy path for workflows and smoke tests for connectors, you should test for common error responses and invalid data inputs.
A connector usually does not handle an error itself; it just passes it on to a workflow, which should know how to resolve it. So when testing the connector itself, you need to write DSL code just to exercise the different scenarios.
Example of check for error in connector:
Function to test a connector:
function('test-document') {
input ->
exchange('RDP document') {
connector('document')
config input
fallback {
'error'
}
}
}
Spock test for invalid data response:
def "front document verification error"() {
given:
def uploadIdFront = upload(frontError)
testBuilder.with {
function = 'test-document'
input = [idFront: uploadIdFront, defaultValidationBypass: false]
addDependency(RDPComponent.rdp)
build()
}
expect:
response().result == 'error'
}
Workflows should either recover from an error, retry, or notify the user about it, usually on an error page. You should test that these errors are handled properly, e.g. the user is sent to the right error page and is notified about what has gone wrong.
Example of check for error in a workflow:
The part of the workflow to test:
exchange('IDEMIA - Create Identity') {
fallback {
route('error') {
export error_step: 'processing'
}
}
}
The part of the workflow test to check an error:
given:
...
createIdentityConnectorMock.mockExchange.withError()
expect:
...
def errorResponseRoute = response().route
errorResponseRoute.uri == "/error"
errorResponseRoute.export.error_step == "processing"
errorResponseRoute.terminal
Mocking connectors
In most cases it should be enough to use the MockConnector class to create new beans in your TestConfig and pass them on to hubConfigurer.
In this way you configure the Zenoo Hub to work with mocks instead of the real connectors.
Example of bean creation:
@Bean
MockConnector<DocumentConnector> documentConnectorMock(DocumentConnector documentConnector) {
new MockConnector<DocumentConnector>(documentConnector)
}
Example of using it in hubConfigurer
@Bean
static HubConfigurer hubConfigurer(
MockConnector<DocumentConnector> documentConnectorMock,
MockConnector<LivenessConnector> livenessConnectorMock,
MockConnector<IdentityConnector> identityConnectorMock
) {
return new HubConfigurer() {
@Override
List<ConnectorActivator> connectors() {
return of(
ConnectorActivator.of("rdp-document@refinitiv.rdp", documentConnectorMock),
ConnectorActivator.of("rdp-liveness@refinitiv.rdp", livenessConnectorMock),
ConnectorActivator.of("rdp-identity@refinitiv.rdp", identityConnectorMock)
)
}
}
}
MockConnector contains an attribute mockExchange of type MockConnectorExchange that is meant to be used to set mock responses for the connector.
- withConfigConsumer(Consumer<CustomConnectorConfig> consumer) adds a consumer for the connector config, which is useful to verify the configuration that was passed on to the connector.
- withError() sets a simpleConnectorException("Error") as the mocked response of the connector.
- withResult(Object result) sets the return value of the mock.
- withDelay(int delay) adds a delay in seconds before the response is returned from the mock when executed. You can use this method to check the behaviour of your flow when a response from a connector takes some time.
Example:
given:
...
identityConnectorMock.mockExchange
.withConfigConsumer({ identityConfig = it })
.withResult([countryCode: "AU", transactionId: "e850891a-6a57-4d5f-b499-3c7d891a0cef", overallStatus: "MATCH"])
expect:
...
response().route.uri == '/address'
submit([location: [locality : null,
sublocality : 'BARCELONA',
area1 : 'BARCELONA',
street : 'C/MEDES 4-10',
country : 'HongKong',
countryCode : 'HK',
streetNumber: '10-Apr']])
identityConfig.address.addressLine1 == 'C/MEDES 4-10 10-Apr'
identityConfig.address.countryCode == 'HK'
Metrics
The Hub makes use of Micrometer, a vendor-neutral application metrics facade, to integrate with the most popular monitoring systems. It has built-in support for AppOptics, Azure Monitor, Netflix Atlas, CloudWatch, Datadog, Dynatrace, Elastic, Ganglia, Graphite, Humio, Influx/Telegraf, JMX, KairosDB, New Relic, Prometheus, SignalFx, Google Stackdriver, StatsD, and Wavefront.
The following metrics will automatically register:
Executor metrics
- hub.executors.active - the number of active executors,
- hub.executors.terminated - the number of terminated executors.
Execution metrics
- hub.executions.started - the number of started executions.
- hub.executions.expired - the number of expired executions, with the drop-off route as a tag.
- hub.executions.terminated - the number of terminated executions.
- hub.executions.duration - the execution duration.
- hub.executions.error - the number of execution errors (generic, validation and exchange errors).
- hub.routes - the number of executed routes. Optionally, filter a specific route using a name tag.
- hub.exchanges - the number of executed exchanges. Optionally, filter a specific exchange using a name tag.
- hub.functions - the number of executed functions. Optionally, filter a specific function using a name tag.
JVM metrics
- various memory and buffer pools,
- statistics pertaining to garbage collection,
- thread utilization,
- number of loaded and unloaded classes,
- CPU metrics,
- Uptime metrics.
In addition, you can register custom metrics in a workflow script using the metrics DSL.
Hub Client API
A Hub client facilitates an interaction between the Hub and an end user.
For a Hub client, a customer journey is a series of pages, a.k.a. routes. The client renders pages (UI), gathers user input and submits data back to the Hub via a REST API.
A Hub client uses the Hub Client API for the following:
- start a new execution for a given target,
- start a new execution using a sharable token,
- submit user input and resume execution,
- execute a route function,
- query an execution state and current route,
- upload files using File cache.
Typical API calls sequence
The workflow execution API sequence is as follows:
1. start a new execution and get an Execution Request resource,
2. query the corresponding response and get the 1st Route resource to display,
3. submit the 1st route and get an Execution Request resource,
4. query the corresponding response and get the 2nd Route resource, a Validation Error resource, or an Error resource,
5. submit the 2nd route, and so on as in (3), until the execution is terminated.
In addition, the workflow execution API enables going back to the previous route and executing a function.
Start new execution
POST /api/gateway/execution
Creates a request to start a new workflow execution.
Request:
- name: a workflow name to specify workflow to start.
- payload: an execution input as JSON payload.
Response:
- 201 Created, a success response has a request URI as a Location header and the body contains the newly created request as Execution Request resource
The corresponding response can be one of the following
An example of a successful response is below:
HTTP/1.1 201 Created
Content-Length: 645
Content-Type: application/json;charset=UTF-8
Location: /api/gateway/request/f3add886-36d3-49eb-8d2b-96862d63dbe4
{
"uuid": "59bb233b-2d2b-41e7-ad13-f17c14513603",
"requestURI": "/api/gateway/request/f3add886-36d3-49eb-8d2b-96862d63dbe4",
"executionURI": "/api/gateway/execution/59bb233b-2d2b-41e7-ad13-f17c14513603"
"token": "eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI2YzBlNjNhMi01MTE1LTRlM2YtOWNjOC1kOTdmYjcxNzFlODIiLCJzdWIiOiJleGVjdXRvciIsImlhdCI6MTU4MjcyNTI3MCwiZXhwIjoxNTgyNzU0MDcwfQ.2QDJai6f4f7fs85CctTN8K3vmL-XGMbFDq0_IF14GkM"
}
Submit route
POST /api/gateway/execution/{uuid}/submit
Creates a request to submit a route for a workflow execution with uuid.
Request:
- uuid: a route uuid to submit, i.e. the current route uuid.
- payload: user entered data and file descriptors as JSON payload.
An example request:
POST /api/gateway/execution/59bb233b-2d2b-41e7-ad13-f17c14513603/submit
Host: localhost:8080
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI1MTFiMDQ0MS1kNmQwLTRhOGEtODAwMy0yMmVmNTI3NDA4NDciLCJzdWIiOiJleGVjdXRvciIsImlhdCI6MTU1NzI0NjE4OSwiZXhwIjoxNTU3Mjc0OTg5fQ.2GVAuboArO8k1G48CY1ojFdypO9zm9u2ZubCE7Qa-Co
{
"uuid": "3a0d231f-12b8-47b3-a495-b9418db294b3",
"payload": {
"firstname": "Joe",
"lastname": "Bloke",
}
}
Response:
- 201 Created, a success response has a request URI as a Location header and the body contains the newly created request as Execution Request resource.
The corresponding response can be one of the following
Go back to previous route
POST /api/gateway/execution/{uuid}/back
Creates a request to go back to the previous route for a workflow execution with uuid.
Request:
- uuid: a route uuid to go back from, i.e. the current route uuid,
- payload: user entered data and file descriptors as JSON payload.
An example request:
POST /api/gateway/execution/c973a8e7-eb24-4e55-980f-f2ea0fff680e/back HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI1MTFiMDQ0MS1kNmQwLTRhOGEtODAwMy0yMmVmNTI3NDA4NDciLCJzdWIiOiJleGVjdXRvciIsImlhdCI6MTU1NzI0NjE4OSwiZXhwIjoxNTU3Mjc0OTg5fQ.2GVAuboArO8k1G48CY1ojFdypO9zm9u2ZubCE7Qa-Co
{
"uuid": "3a0d231f-12b8-47b3-a495-b9418db294b3",
"payload": {
"firstname": "Joe"
}
}
Response:
- 201 Created, a success response has a request URI as a Location header and the body contains the newly created request as Execution Request resource
The corresponding response can be one of the following
Execute a route function
POST /api/gateway/execution/{uuid}/function
Creates a request to execute a route function.
Request:
- name: a route function name
- payload: input data as JSON payload.
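An example request might look as follows (the function name and payload reuse names from the testing section, and the JSON body shape is an assumption based on the listed request fields):

```http
POST /api/gateway/execution/59bb233b-2d2b-41e7-ad13-f17c14513603/function HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Authorization: Bearer {token}

{
  "name": "send-code",
  "payload": {"phoneNumber": "+420123456789"}
}
```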
Response:
- 201 Created, a success response has a request URI as a Location header and the body contains the newly created request as Execution Request resource.
The corresponding response can be one of the following
Get current route
GET /api/gateway/execution/{uuid}
Query an execution with uuid for the current route. It may take some time before the current route is available due to the ongoing execution.
Response:
- 200 OK, a success response with Route resource as a body,
- 404 Not Found, an executor with uuid not found,
- 401 Unauthorized, invalid access token,
- 500 Internal Server Error, execution error.
An example of a successful response:
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
{
"uuid": "89828e1e-c834-42a2-86f1-893209f63ab5",
"uri": "/product",
"terminal": false,
"backEnabled": false,
"export": {"product1": "Product1", "product2": "Product2"},
"payload": {"product": "product1"}
}
Get exported namespace
GET /api/gateway/execution/{uuid}/export/{namespace}
Query an execution with uuid for an exported namespace.
Response:
- 200 OK, a success response with exported namespace as a body,
- 404 Not Found, an executor with uuid not found,
- 401 Unauthorized, invalid access token,
- 500 Internal Server Error, execution error.
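An example exchange might look as follows (the namespace name and returned payload are illustrative, reusing values from the earlier examples):

```http
GET /api/gateway/execution/59bb233b-2d2b-41e7-ad13-f17c14513603/export/client HTTP/1.1
Host: localhost:8080
Authorization: Bearer {token}

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8

{"firstname": "Joe", "lastname": "Bloke"}
```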
Get response to Execution Request
GET /api/gateway/request/{uuid}
Query for a response of an Execution request specified by uuid. It may take time before the response is available due to ongoing execution.
Response:
- 200 OK, a success response containing one of the following resources as a body,
- 404 Not Found, an executor with uuid not found,
- 401 Unauthorized, invalid access token,
- 500 Internal Server Error, execution error.
Get current execution state
GET /api/gateway/execution/{uuid}/state
Get an execution state with uuid. It contains detailed information about an execution, like current context, input payload, a list of execution events etc.
Sharable token a.k.a. sharable link
POST /api/gateway/sharable/{token}
Starts the execution corresponding to the given sharable token. A sharable token specifies an Execution request; see the Sharable DSL for details. Moreover, the POST request body is used as the Execution request input. It may take some time before the response is available due to the ongoing execution.
Response:
- 201 Created, started execution request, see more details Execution Request resource,
- 410 Gone, sharable token has expired.
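An example request might look as follows (the token value reuses the one from the Sharable DSL examples, and the body shape is an assumption):

```http
POST /api/gateway/sharable/vJRRTX HTTP/1.1
Host: localhost:8080
Content-Type: application/json

{"userId": "dummy123"}
```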
Resources
Execution Request
A unique execution request is generated after each execution command submission (POST). The corresponding execution response is queried using the requestURI.
- uuid: execution UUID,
- requestURI: request URI,
- executionURI: execution URI,
- token: authentication token to query corresponding response and execution.
Route
Route resource represents a route to be rendered by a Hub client.
It contains the following fields:
- uuid: identifies a route for the purpose of resuming a workflow execution,
- uri: identifies a route for a hub-client, used for client-side routing and rendering a corresponding route view,
- terminal: marks a route as terminal, set if the corresponding execution terminated,
- backEnabled: determines if it's possible to go back to a previous route,
- export: arbitrary data passed with a route, can be used for setting up a route view, e.g. a list of products,
- payload: a prior route payload, used to pre-fill route view.
An example Route resource:
{
"type": "route",
"uuid": "89828e1e-c834-42a2-86f1-893209f63ab5",
"uri": "/product",
"terminal": false,
"backEnabled": false,
"export": {"product1": "Product1", "product2": "Product2"}
}
Result
Result resource represents an execution result, like a function execution result.
An example Result resource:
{
"result": "passed"
}
Validation Error
Validation Error resource contains a list of validation errors.
- errors: a list of validation error
- field: a field name
- message: a validation error message
An example Validation Error resource:
{
"type": "validation-errors",
"errors": [
{
"field": "mobile",
"message": "Required"
}
]
}
Execution Error
Execution Error resource contains an error message.
An example Execution Error resource:
{
"type": "error",
"message": "Resume UUID mismatch!"
}
Security
The execution API endpoints are secured using JWT tokens.
A new token is generated for every Execution Request. The token is then used to query the corresponding response or current route.
The token must be included in the HTTP Authorization header. The default expiration is 30 minutes and can be changed with the jwt.expiration property.
Authorization: Bearer {token}
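As a sketch, building and sending the header might look like this (the token value and the commented-out host and path are placeholders, not actual Hub endpoints):

```shell
# Token returned by the Execution Request (placeholder value)
TOKEN="example-token"
AUTH_HEADER="Authorization: Bearer ${TOKEN}"
echo "${AUTH_HEADER}"
# A real call would pass the header to curl, e.g.:
# curl -H "${AUTH_HEADER}" https://hub.example.com/api/...
```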
Admin API
The Admin API provides a REST API for the Component repository.
The access to Admin API is restricted using HTTP basic authentication, see Admin API security.
Query component
GET /api/component/{name}/{version}
Retrieves a registered component by name and version.
GET /api/component/{name}
Retrieves the latest version of a registered component by name.
Response:
- 200 OK, a success response with Hub component resource as a body,
- 404 Not Found, a component with the given name (and version) was not found.
Register component
POST /api/component
Registers a new component. A component must pass DSL validation before it is successfully registered.
Request:
- name: a component name,
- revision: a component version, generated if omitted,
- definition: a component definition using Component DSL.
Response:
- 201 Created, a success response with a reference to the newly registered component:
/api/components/{name}/{revision}
as a Location header, and the Component Id as the body.
- 400 Bad Request, if component validation fails.
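A hedged sketch of a registration call; the host, credentials, and payload are illustrative only, while the endpoint and the basic-auth requirement come from the sections above:

```shell
# Illustrative component payload; a real definition would use the Component DSL.
PAYLOAD='{"name":"demo-flow","definition":"..."}'
echo "${PAYLOAD}"
# The actual request would be sent with basic-auth admin credentials, e.g.:
# curl -u admin:changeit -X POST \
#   -H "Content-Type: application/json" \
#   -d "${PAYLOAD}" \
#   https://hub.example.com/api/component
```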
Resources
Component Id
A reference to a component using name and revision.
- name: a component name,
- revision: a component version.
Component definition
Provides a component definition. The component is identified by name and revision.
- name: a component name,
- revision: a component version, generated if omitted,
- definition: a component definition using Component DSL.
Configuration properties
Execution
hub.execution.expiration
Default:
1h
The maximum execution duration before expiration.
Admin API security
hub.security.user.name
Default: admin
An admin user name
hub.security.user.password
Default: auto-generated
An admin user password
Client API security
jwt.key
Default: auto-generated
A secret key used for generating JWT tokens.
jwt.expiration
Default:
1800
JWT tokens are generated with the specified expiration, in seconds.
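Assuming the Hub backend is a Spring Boot application (the Amazon MSK section below configures it via application.yml), these execution and security properties might be set like this; all values are illustrative:

```yaml
# application.yml -- illustrative values; property names from the sections above
hub:
  execution:
    expiration: 2h        # extend the default 1h execution expiration
  security:
    user:
      name: admin
      password: change-me # override the auto-generated admin password
jwt:
  key: some-secret-key    # override the auto-generated JWT signing key
  expiration: 3600        # JWT lifetime in seconds (default 1800 = 30 minutes)
```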
Kafka streams
hub.streams.prefix
(Required)
Default: none
The prefix is used for isolating Hub clusters running within the same Kafka broker. It uses the setting as a prefix for Kafka topics.
For example, a Hub cluster with a testing prefix would use topics like testing-execution-events, testing-exchanges, etc.
The prefix can contain alphanumeric characters, .(dot), -(hyphen), and _(underscore).
hub.streams.application
(Required)
Default:
hub
The application name used together with prefix to generate a unique application ID.
Each stream processing application must have a unique ID. The same ID must be given to all instances of the application.
This ID is used in the following places to isolate resources used by the application from others:
- As the default Kafka consumer and producer client.id prefix
- As the Kafka consumer group.id for coordination
- As the name of the subdirectory in the state directory (hub.streams.state.dir)
- As the prefix of internal Kafka topic names
hub.streams.host
Default: localhost
Host that is accessible for this and other instance nodes.
hub.streams.port
Default: 8080
Port that is accessible for this and other instance nodes.
hub.streams.request-timeout-ms
Default: 60000
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
hub.streams.producer.max-request-size
Default: 1048576
The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.
hub.streams.state.dir
Default:
/tmp/kafka-streams
Directory location for state stores.
hub.streams.state.cleanup-on-start
Default:
false
Clean up application’s local state directory when Kafka Streams start.
hub.streams.state.cleanup-on-stop
Default:
false
Clean up application’s local state directory when Kafka Streams shut down.
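Pulling several of these settings together, a hedged application.yml sketch might look like this; all values are illustrative:

```yaml
# Illustrative values; see the property descriptions above
hub:
  streams:
    prefix: testing            # isolates topics, e.g. testing-execution-events
    application: hub           # combined with the prefix into a unique application ID
    host: hub-node-1.internal  # must be reachable by the other instance nodes
    port: 8080
    state:
      dir: /var/lib/hub/kafka-streams
      cleanup-on-start: false
```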
File Uploader
hub.uploader.cache.dir
Default:
./cache
Directory location for cached files.
Kafka SSL
For safe usage of Kafka, mutual TLS is recommended. In this setup, both brokers and clients have their own certificates. Because SSL does not trust any certificate by default, each side must also be configured to trust the other side's certificates.
Kafka Configuration
Kafka is plaintext-only by default. To enable SSL, you need to configure the following:
Service configuration
- advertised.listeners: configuration of where the Kafka broker listens for client connections, recommended value:
PLAINTEXT://kafka3:9092,SSL://kafka3:9093
- ssl.keystore.filename: filename/location of the key store, example value:
kafka_keystore.jks
- ssl.keystore.password: password of the key store, example value:
changeit
- ssl.truststore.filename: filename/location of the trust store, example value:
kafka_truststore.jks
- ssl.truststore.password: password of the trust store, example value:
changeit
- security.inter.broker.protocol: the protocol used for communication between Kafka brokers. For local and dev deployments, PLAINTEXT is recommended; for production deployments, use SSL.
- ssl.client.auth: enables/disables client authentication on the broker's side. To enable, set the value to:
required
- security.protocol: the protocol used for client connections. Use SSL or leave it blank.
Example configuration
advertised.listeners: PLAINTEXT://kafka:9092,SSL://kafka:9093
ssl.keystore.filename: kafka-keystore.jks
ssl.keystore.password: kafka-keystore-creds
ssl.key.password: changeit
ssl.truststore.location: kafka-truststore.jks
ssl.truststore.password: changeit
security.inter.broker.protocol: PLAINTEXT
ssl.client.auth: 'required'
security.protocol: SSL
Docker configuration
The same settings as in the service configuration need to be set, but for Docker they are provided as environment variables. These variables correspond to the service fields, but they are uppercase, use _ instead of ., and have the KAFKA_ prefix.
- KAFKA_ADVERTISED_LISTENERS: configuration where Kafka broker listens for client connections, recommended value:
PLAINTEXT://kafka3:9092,SSL://kafka3:9093
- KAFKA_SSL_KEYSTORE_FILENAME: filename/location of the key store, example value:
kafka_keystore.jks
- KAFKA_SSL_KEYSTORE_CREDENTIALS: filename/location of the keystore credentials file, example value:
kafka-keystore-creds
- KAFKA_SSL_KEY_CREDENTIALS: filename/location of the key credentials file, example value:
kafka-key-creds
- KAFKA_SSL_TRUSTSTORE_FILENAME: filename/location of the trust store, example value:
kafka_truststore.jks
- KAFKA_SSL_TRUSTSTORE_CREDENTIALS: filename/location of the truststore credentials file, example value:
kafka-truststore-creds
- KAFKA_SECURITY_INTER_BROKER_PROTOCOL: the protocol used for communication between Kafka brokers. For local and dev deployments, PLAINTEXT is recommended; for production deployments, use SSL.
- KAFKA_SSL_CLIENT_AUTH: enables/disables client authentication on the broker's side. To enable, set the value to:
required
- KAFKA_SECURITY_PROTOCOL: the protocol used for client connections. Use SSL or leave it blank.
Example configuration
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,SSL://kafka:9093
KAFKA_SSL_KEYSTORE_FILENAME: kafka-keystore.jks
KAFKA_SSL_KEYSTORE_CREDENTIALS: kafka-keystore-creds
KAFKA_SSL_KEY_CREDENTIALS: kafka-key-creds
KAFKA_SSL_TRUSTSTORE_FILENAME: kafka-truststore.jks
KAFKA_SSL_TRUSTSTORE_CREDENTIALS: kafka-truststore-creds
KAFKA_SECURITY_INTER_BROKER_PROTOCOL: PLAINTEXT
KAFKA_SSL_CLIENT_AUTH: 'required'
KAFKA_SECURITY_PROTOCOL: SSL
Local
Locally, Kafka with SSL runs in Docker, because some configuration changes are needed and applying them through Docker Compose is the easiest option. A sample docker-compose file is located in the sample-hub-instance directory. Before using it, keystores and truststores must be generated for both the brokers and the client (our application).
Generating keystores and truststores
To make this process less painful, a helper script is provided. The script is used like this:
./generate-stores.sh KEY_ALIAS TARGET_KEYSTORE.jks TARGET_TRUSTSTORE.jks
Where:
- KEY_ALIAS is the alias the key will have in the keystore; in the truststore it gets the suffix public.
- TARGET_KEYSTORE is the location of the keystore you want to add the key to. If the keystore doesn't exist, it will be created.
- TARGET_TRUSTSTORE is the location of the truststore you want to add the public part of the key to. If the truststore doesn't exist, it will be created.
The script uses Java's keytool, so all interaction is handled by keytool.
The script's workflow is as follows:
- Key generation - you will be prompted for the keystore's password (twice if the keystore doesn't exist yet)
- Public key extraction - you will be prompted for the keystore's password
- Public key import to the truststore - you will be prompted for the truststore's password (twice if the truststore doesn't exist yet)
Kafka Topics
execution-requests
Stores execution requests that are then processed by corresponding executors
execution-events
Stores all execution-related events, like requests, responses, execution life-cycle events, commands, etc.
execution-responses
Stores execution responses generated as a result of processing execution requests.
exchanges
Stores exchange requests
components
Stores all Hub component definitions
components-latest
Stores the latest revisions for Hub components
errors
Stores execution errors
sharables
Stores sharable tokens (links)
cached-files
Stores cached files descriptors, does not store file content
Amazon MSK
Amazon MSK is a fully managed Apache Kafka service hosted by AWS. A Hub backend instance can easily be set up to use AWS MSK by defining the standard Spring Kafka properties. See the sample properties in the following sections.
Access MSK with no authentication and no encryption
If MSK is provisioned without any authentication or encryption, the access protocol defaults to plaintext. In that case, it's enough to set only the bootstrap servers in application.yml, as below.
application.yml
spring:
kafka:
bootstrap-servers: b-1.test.kafka.ap-east-1.amazonaws.com:9092,b-2.test.kafka.ap-east-1.amazonaws.com:9092
Access MSK with IAM role-based authentication and encryption
If MSK is provisioned with IAM role-based authentication and encryption (within the cluster and between clients and brokers), use the properties below for accessing the service. Make sure the IAM role which is assigned to the backend instance container tasks has sufficient MSK permissions as stated here: IAM access control
application.yml
spring:
kafka:
bootstrap-servers: b-1.test.kafka.ap-east-1.amazonaws.com:9098,b-2.test.kafka.ap-east-1.amazonaws.com:9098
security.protocol: 'SASL_SSL'
ssl:
trust-store-location: 'file:/security/cacerts-zenoo.jks'
trust-store-password: '**'
properties:
sasl:
jaas.config: 'software.amazon.msk.auth.iam.IAMLoginModule required;'
mechanism: 'AWS_MSK_IAM'
client.callback.handler.class: 'software.amazon.msk.auth.iam.IAMClientCallbackHandler'
Sample policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kafka-cluster:Connect",
"kafka-cluster:AlterCluster",
"kafka-cluster:DescribeCluster"
],
"Resource": [
"arn:aws:kafka:us-east-1:123456:cluster/msk-test-cluster/855d7317-7cc9-494e-8a0b-44c67f3327e7-8"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:*Topic",
"kafka-cluster:WriteData",
"kafka-cluster:ReadData"
],
"Resource": [
"arn:aws:kafka:us-east-1:123456:topic/msk-test-cluster/"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:AlterGroup",
"kafka-cluster:DescribeGroup"
],
"Resource": [
"arn:aws:kafka:us-east-1:123456:group/msk-test-cluster/"
]
}
]
}
HUB Client
Architecture
HUB client consists of several architectural components:
- Target (DO source code) — list of configuration files and assets
- Target builder — transforms Target folder into JSON configuration
- HUB Client application — NPM package with source core HUB client dependencies
Target is a folder with the appropriate source files (YAML, LESS, assets, etc.) and the core dependency: the HUB Client application
YAML
The YAML used in Target Builder is a standard YAML that has been extended with some specific YAML tags.
For initial acquaintance with the YAML read this article: Learn X in Y minutes (Where X=yaml)
Reserved fields
In the YAML files, there are three "reserved" fields in the root:
~private
- It can be used to store data which will be omitted at build time and will not appear in the target configuration. This field is processed by the target-builder.
Example
~private:
yourSecretKey: "secret value"
List of supported tags
Compile-time tags
Run-time tags
Deprecated tags
!include
compile-time tag
- This tag is used to import/include any type of file at this place in the YAML.
- This tag supports properties, which correspond to the parameters of this file.
- This tag internally opens any file as a string, processes it with EJS (https://ejs.co), and then either passes the contents on as a string (for other file types) or hands it to the compiler if it is a supported file type (YAML, MD, HTML, LESS).
- You can use EJS to modify the file in any way that depends on properties, for example generating repeating parts of a document or setting values for some of the properties.
- The tag is always processed immediately when used, except when it is used in the components field in the root of a YAML file (see !ref).
- The tag can include any type of file: not only YAML but also LESS, CSS, MD, HTML, etc.
- When you attempt to include a directory, the Target builder will look in that directory for an index.yaml or index.yml file and include it if found. Keep in mind that you can include components like !include ./components/button instead of !include ./components/button/index.yml.
Syntax
Short version (without properties):
!include ./path/file.ext
Long version (with properties):
!include file: ./path/file.ext
property1: 'some'
property2: 123
Examples
list:
- !include ./info.md
- !include file: ./more_info.yml
title: 'Hello'
withoutHeader: true
something: !include ./something.yml
!ref
compile-time tag
- This tag is used to reference a component at this place.
- The source for the components is the components field in the root of the YAML file.
- This tag can have properties, the same as the !include tag.
- The !ref tag properties have higher priority than the !include properties in the components field.
- You can make multiple references to a component, with different properties for each reference.
Syntax
!ref components: component_name
Examples
components:
header: !include file: ./components/header.yml
title: "Default title"
bodyItem: !include ./components/body-item.yml
footer: !include ./components/footer.yml
...
items:
- !ref components: header
title: 'Welcome'
- !ref components: bodyItem
name: 'First one'
- !ref components: bodyItem
name: 'Second one'
- !ref components: footer
!property
compile-time tag
- This tag is useful as a placeholder for the value of a property (in an !include of a YAML file).
- While it is possible to use EJS for this, EJS only works with a string. It's not possible to pass some array or object using EJS, and this is the purpose of the !property tag.
- This tag can have a default value (only in the long version), which can contain any valid YAML value, including a string, number, array, or complex object.
- This tag can have a required: true mark (only in the long version), which will throw an error while building the target if the value is not filled (the default value has no effect if required: true is set).
- If the property is not set, this tag resolves to undefined, and the field will not appear in the output JSON.
Syntax
Short version (without default):
!property some_prop
Long version (with default or required):
!property name: some_prop
default: 'Some default value'
required: true
Examples
items:
- !property prop1
- !property name: prop2
default: 'Prop2 is not here :-)'
- !property name: prop3
required: true
!condition
compile-time tag
- This tag is used for removing or including parts of the YAML structure (in an !include of a YAML file).
- The first parameter (key) can be either include or omit. include leaves the object in the structure if the expression evaluates to a truthy value; omit removes the object from the structure if the expression evaluates to a truthy value.
- The second parameter (expression) is an expression in EJS format.
- This tag is valid only for object types and includes/omits the object in which it is contained. (It cannot be applied to a string or number value; use EJS instead.)
Syntax
!condition include: typeof some_prop === "boolean" && some_prop === true
!condition omit: typeof some_prop === "string" && some_prop !== "foo"
Examples
items:
- name: 'this item is not here :-('
!condition include: typeof some_falsy_prop === "boolean" && some_falsy_prop === true
- name: 'this item is here :-)'
!condition omit: typeof some_falsy_prop === "boolean" && some_falsy_prop === true
!component
run time tag
- This tag creates a component in hub-client-core. During execution, it is converted into a real React component in the HTML DOM.
- Every element is created by React.createElement, and the items property will be used as children.
- The parameter is the name of a hub-client-core component, or any React element such as div, span, h1, etc.
Syntax
!component as: some_component
property: 'foo'
Examples
items:
- !component as: div
items:
- !component as: h1
items: 'Hello'
- !component as: HubClientMagicComponent
doMagic: true
!repeat
run time tag
- This tag creates a repeat snippet in the hub-client-core. At execution, this is evaluated in React.
- The parameter is any array (static or from !expression). For more information about the repeat snippet, see the hub-client-core documentation.
Syntax
!repeat some_item:
- name: 'Some 1'
- name: 'Some 2'
- name: 'Some 3'
component:
!component as: span
items: !expression some_item.name
Examples
items:
- !repeat car: !expression export.cars
component:
!component as: span
items: !expression car.name
- !repeat pair:
- name: 'Some 1'
value: 'Val 1'
- name: 'Some 2'
value: 'Val 2'
- name: 'Some 3'
value: 'Val 3'
component:
!component as: div
items:
- !component as: span
items: !expression pair.name
- !component as: span
items: "!expression '(' + pair.value + ')'"
The !repeat expression provides a way to iterate over a collection and map components to it. You simply specify an array as the input collection (you can use an expression as well), and a component that will be rendered n times (where n is the length of the array).
Inside the component, you can access the current item through an expression using the key. In the expression, you can directly access an item of the array ([key].item) or its index ([key].index).
This is the format:
!repeat [key]: [array]
component: [some component]
Examples:
Here's an array from the api:
!repeat person: !expression flow.export.persons
component:
!component as: span
items: !expression person.item.name
Here's an array in yaml:
!repeat item:
- Item 1
- Item 2
component:
!component as: span
items: !expression item.item
!expression
run time tag
- This tag creates an expression in hub-client-core. At execution, it is evaluated in React.
- The parameter is an expression. For more information about expressions, see the hub-client-core documentation.
Syntax
Short version (without parameters):
!expression 'some_expression'
Long version (with parameters):
!expression eval: 'some_expression'
parameter_one: 'Some parameter value'
Multiline:
property:
!expression: |
const variable = 1;
// More lines of JavaScript
console.log('Hello', variable);
Examples
items:
- !component as: span
items:
- !expression 'exports.someExportField'
- !component as: span
max:
!expression eval: 'parseInt(param1) - param2'
param1: !expression 'exports.someExportField'
param2: !property test1
!function
run time tag
- This tag creates a JS function in hub-client-core. At execution time, it is evaluated in React and may be used as a parameter in component events.
- The parameter is a function body. For more info about expressions, see the hub-client-core documentation.
Syntax
Short version (without parameters):
!function 'some_expression'
Long version (with parameters):
!function eval: 'some_expression'
parameter_one: 'Some parameter value'
Examples
items:
- !component as: div
onClick: !function 'setSomething(arg1 + "A")'
- !component as: div
onClick:
!function eval: 'doSomething(parseInt(param1) - param2)'
param1: !expression 'exports.someExportField'
param2: !property test1
- !component as: div
onClick: !function |
var a = "A";
var b = "B";
return a + b + " = done";
!t
run time tag
- This tag creates a translated string in hub-client-core. At execution time, it is evaluated in React.
- The long version of this tag provides parameters. The first parameter is replaced with the second parameter.
Syntax
Short version (without params):
!t Some text
Long version (with params):
!t text: Some text with {paramOne} and {paramTwo}
paramOne: 'Zenoo'
paramTwo: !expression 'exports.someExportField'
Examples
items:
- !component as: span
items:
- !t text: Some text with {paramOne} and {paramTwo}
paramOne: 'Zenoo'
paramTwo: !expression 'exports.someExportField'
!cx
run time tag
- This tag is a shortcut for a cx(...) method call in hub-client-core expressions.
- The parameter must be an array of classnames (after args).
Syntax
!cx args:
- 'classOne'
- 'classTwo'
- !expression 'exports.someClassName'
Examples
items:
- !component as: span
className:
!cx args:
- 'classOne'
- 'classTwo'
- !expression 'exports.someClassName'
Target configuration
- Target structure
- Project settings
- Page settings
- Analytics
- Formats settings
- Global application state and methods (expression context)
- Localization
- Components
- EJS partials
- Remote application start
Target structure
HUB Client Target should have the following structure, which can vary depending on complexity.
/src — folder with target source code
/assets — static assets (fonts, images etc)
/components — YAML reusable components
/layouts — visual layouts
/pages — configuration files for specific pages. By convention, names of these files should be the same as route names in flow
index.yml
...
/styles — LESS styles
index.less — list of imports used in target and global styles
fonts.less — font styles (or import from CDN)
overstyle.less — style overrides for UI components
variables.less — CSS variables for UI components theming
studio-variables.less — CSS variables for Design Studio
/translates — translations for target
{LANG}.yml — list of translations for {LANG} locale
...
index.yml — project configuration file
package.json — metadata information about the target and its dependencies
package-lock.json — dependencies tree with locked versions
Project settings
Project settings can be set in the root index.yml
file.
The values for these settings can be individually set for any particular environment. (Environment-specific entry points)
List of available parameters
Parameter | Required | Default Value | Description
---|---|---|---
analytics | false | | Analytics configuration
analyticsMapper | false | | Analytics configuration
analyticsParams | false | | Analytics configuration
apiVersion | false | 'v1' | API version ('v0' or 'v1') used to specify usage of legacy API
authorizationTimeout | false | 10 | Authorization cookie expiration timeout (in minutes)
backDisabledAlert | false | | Message to be displayed in case of disabled back action
coreLocale | false | | List of translates for Core messages (more in localization)
defaultLocale | false | | Default locale code (more in localization)
description | false | | Meta description tag
devTools | false | true | Toggles developer tools (open with ctrl + shift + D hotkey)
errorPage | false | | Error page configuration
favicon | false | | Path to favicon
flowExport | false | | Mocked flow export
flowIdName | false | | Flow ID name in Backend instance
flowIdRevision | false | | Flow ID revision in Backend instance
flowName | true | | Flow name in Backend instance
flowStartParameters | false | | List of parameters to be passed to flowStart action
formats | false | | Global formats settings
globals | false | | Extending expression context
handoffTimeout | false | 20 | Handoff credentials cookie expiration (in minutes)
indexPageInit | false | true | Specifies if application initialization should start from flowStart action
loadingComponent | false | | Component to be displayed during application initialization
mockData | false | | Mocked input data for development tools
og | false | | List of meta og tags
pages | true | | Page Configuration
serverUrl | true | | URL of Backend instance server
studioSettings | false | | Studio settings
styles | false | | LESS files includes
title | true | | Meta title of an application, shown in browser tab header
translates | false | | List of translates for specific languages (more in localization)
url | false | | Application URL settings
flowReference | false | | When this ref is changed, new flow execution will be initialized
Application URL settings
url: {
persistHash?: boolean // defines if hash should be persistent on page change, default value is TRUE (default hash is page URI)
persistQuery?: boolean // defines if query should be persistent on page change, default value is FALSE
persistPathname?: boolean // defines if pathname should be persistent on page change, default value is FALSE
}
Example configuration
Here's an example of various settings in index.yml:
title: "Zenoo Demo Project"
serverUrl: "https://zenoo.onboardapp.io/api"
flowName: "zenoo"
favicon: "/assets/favicon.ico"
indexPageInit: true
mockData: !include ./mockdata.json
styles:
- !include ./styles/index.less
analytics:
gtm: "GTM-ID001"
authorizationTimeout: 60
translates:
en: !include ./translates/en.yml
cz: !include ./translates/cz.yml
defaultLocale: "en"
studioSettings:
name: ZenooBank
logo: /assets/logo.png
country: Mexico
previewUrl: https://onboarding.zenoo.com/
pages:
index: !include ./pages/index.yml
otp: !include ./pages/otp.yml
loan-overview: !include ./pages/loan-overview.yml
thanks: !include ./pages/thanks.yml
rejected: !include ./pages/rejected.yml
Page settings
The entire application consists of pages. Each view that is presentable to a user must be implemented as a page. There are two predefined pages: the index page and the error page. The index page must be under the index property in pages. In the root of your YAML (typically index.yml), you can specify the errorPage property. This property is the name of the page to which the user will be redirected when an error occurs (such as a network failure).
List of available parameters
Parameter | Required | Description
---|---|---
analytics | false | Analytics configuration for specific page
defaultAction | false | Default form submit action name
defaultActionParams | false | Default form submit action params
defaults | false | Default values for form fields
fadeAnimationBack | false | Use "fade" animation on back action
fadeAnimationSubmit | false | Use "fade" animation on submit action
formOutputModifier | false | Override page payload
items | false | Elements tree of specific page
og | true | List of meta og tags (will be merged with the ones coming from project configuration)
schema | false | Validation rules as a JSON schema
title | false | Page meta title
Example configuration
components:
formLayout: !include @common/layouts/form-layout.yml
formGroup: !include @common/components/form-group.yml
header: !include @common/components/header.yml
pinInput: !include @common/components/pin-input.yml
fadeAnimationBack: true
schema:
required:
- code
properties:
code:
type: string
minLength: 4
maxLength: 4
errorMessage:
_: "{field} - Required field"
defaults:
mobile: !expression "flow.export.mobile"
items:
- !ref components: formLayout
items:
- !ref components: header
progress: <%-((3 / 8) * 100)%>
- !component as: div
className: "content main-content"
items:
- !component as: h1
items: "Enter your phone number"
- !component as: p
items: "Please enter a valid mobile phone number to where we can text a confirmation code to."
- !ref components: formGroup
items:
- !ref components: pinInput
field: code
label: "Enter your confirmation code"
length: 4
Error page
The error page can be specified as an errorPage
parameter in application configuration.
errorPage: "error-page"
---
pages:
error-page: !include ./pages/error.yml # Include error page to a list of pages
If an error page is not specified, the auth
cookie will be deleted and application will be reloaded.
You can create a more dynamic error page that provides useful features, such as a button to continue or reattempt the previous action (flowContinue). This button will automatically fetch the last stored data from the server and redirect the user to the correct screen.
Another useful error management feature is a button that reloads the flow. If the problem is not easily resolved, let the user click a button that redirects to the start of the flow using the form action flowReload.
On the error page, page parameters containing the reason for the error are also available. For example, query the value of page.params.error to get the raw output from the error catch.
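A hedged sketch of such an error page follows. The component names are illustrative, and whether flowContinue and flowReload are wired as !function calls or as form actions depends on the target; they are shown as !function calls purely for illustration:

```yaml
# pages/error.yml -- illustrative only; assumes a button component exists in the target
items:
- !component as: h1
  items: !t Something went wrong
- !component as: p
  items: !expression 'page.params.error'
- !component as: button
  onClick: !function 'flowContinue()'
  items: !t Try again
- !component as: button
  onClick: !function 'flowReload()'
  items: !t Start over
```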
Static pages
Static pages can also be specified in pages. A static page is a page that can be accessed outside of any workflow logic, so it can be used as a landing page for calls from other SDKs, etc. A static page name always starts with the $ character, like this:
pages:
$static-page: !include ./pages/static-page.yml
The static page can then be accessed by loading the URL https://some-target-url.xyz/?s=static-page&some=other-params
Analytics
You can initialize different analytics providers when the application starts and call specific actions when certain events occur.
List of currently available providers:
- ga — Google Analytics
- gtm — Google Tag Manager
- hotjar — HotJar
- mixpanel — MixPanel
- smartlook — Smartlook
Example integration
# index.yml
analytics:
mixpanel: "a78gc206fb0a9d85edb622d10ec74b5d"
gtm: "GTM-XXXXXX"
User identification
To identify the current user for different analytics providers, the analytics.authorizationToken configuration key can be used, e.g.:
# index.yml
analytics:
authorizationToken: !expression "url.query.do_authorization"
To identify the user not on the initial page load but on some event, the analytics.authorization action from the Expression context can be used:
- !component as: div
onClick: !function "analytics.authorization(flow.export.identityId)"
Events management
Analytics events can be dispatched manually or using analytics event management.
By defining analytics in the page configuration, the built-in analytics event management is enabled. Some UI components dispatch basic default events; for example, form fields have click, change, blur, focus, etc.
Analytics page configuration
The analytics configuration structure corresponds to the events you want to handle and can be placed on every page.
There are 3 ways you can set an event configuration: "string", "object", or "function" annotation:
analytics:
fields:
firstName:
# String annotation
change: "firstNameChanged"
middleName:
# Object annotation
change:
eventName: "middleNameChanged"
data:
page: !expression "page.name"
device: !expression "device.deviceType"
lastName:
# Function annotation
change: !function "analytics.event('lastNameChanged')"
You can also define the event configuration on a parent structure; for example, this function will be triggered on any field change:
analytics:
  fields:
    change: !function "analytics.event('someFieldChanged')"
Existing events
Form fields events:

Event name | Description
---|---
click | Triggers when user clicks on field
change | Triggers when user changes value of field
focus | Triggers when user focuses on field
blur | Triggers when user unfocuses from field
File upload events:

Event name | Description
---|---
click | Triggers when user clicks on field
change | Triggers when user changes value of field
accepted | File was accepted by field
rejected | File was rejected; this can be caused by prevalidations or liveness detection
The path for these events has the format `fields.{FieldName}.{EventName}`.
Application lifecycle events
Path | Event name | Description
---|---|---
page | enter | Triggers when page is entered
page | leave | Triggers when page is left
form | initialized | Triggers when execution is initialized

The path for these events has the format `{Path}.{EventName}`.
Analytics storage
The Expression context supports dispatching analytics events and storing values.
Analytics storage is a simple key/value store that can contain any value. It provides some utilities to simplify usage: for numeric values there are `increment` and `get`. `increment` augments the value by 1; if the value does not exist, it is set to 1.
Example:
This sends an event named `Click` with the parameter count: 1 for the first call, 2 for the second call, etc.
!function "analytics.event('Click', { count: analytics.storage.increment('timesClicked') })"
This sends an event named `Click` with the parameter count read from storage. If the value does not exist, count is set to 0 (the default value).
!function "analytics.event('Click', { count: analytics.storage.get('timesClicked', 0) })"
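The storage semantics described above can be sketched in plain JavaScript (assumed behaviour for illustration, not the actual HUB Client source):

```javascript
// Minimal sketch of the key/value analytics storage described above;
// assumed semantics for illustration, not the actual HUB Client source.
const storage = new Map();

const analyticsStorage = {
  // set stores any value under a name
  set: (name, value) => storage.set(name, value),
  // get returns the stored value, or defaultValue when the key is missing
  get: (name, defaultValue) => (storage.has(name) ? storage.get(name) : defaultValue),
  // increment augments a numeric value by 1, initialising a missing key to 1
  increment: (name) => {
    const next = (storage.get(name) || 0) + 1;
    storage.set(name, next);
    return next;
  },
};
```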
Global analytics params
There is a way to set global analytics params which will be sent with every single event. This injection only happens when the input params of the event call are an object or were not provided. Global params have lower priority: if you redefine the same field in the event params, the event params overwrite the global ones.
Example:
# index.yml
analyticsParams:
  ip: !expression "flow.export.ip"
  page: !expression "page.name"
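The merge rule described above can be sketched as follows (assumed behaviour, not the actual HUB source):

```javascript
// Sketch of the documented merge rule: global params are injected first and
// event-specific params win on conflicts. Assumed behaviour, not HUB source.
function mergeEventParams(globalParams, eventParams) {
  // Injection only applies when event params are an object or were not provided
  if (eventParams !== undefined && (typeof eventParams !== 'object' || eventParams === null)) {
    return eventParams;
  }
  return { ...globalParams, ...(eventParams || {}) };
}
```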
Dispatch events manually
To manually fire an analytics event, use the `analytics.event` method from the expression context:
- !component as: div
  onClick: !function "analytics.event(eventName, eventParams)"
Formats settings
Global formats should be defined under the `formats` parameter.
All formats are then available as helpers in the global application state (expression context).
Example configuration
formats:
  date:
    format: "DD/MM/YYYY"
  number:
    decimalSeparator: "."
    thousandsSeparator: ","
    precision: 2
  currency:
    format: "%u%n"
    unit: "£"
  phone:
    countryCode: "+44"
    mask: "9999 999999"
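As a sketch of how the currency settings above could be interpreted (assumed semantics: `%u` stands for the unit symbol and `%n` for the formatted number; this is not the actual HUB implementation):

```javascript
// Sketch of the currency pattern from the example configuration:
// "%u" stands for the unit symbol and "%n" for the formatted number.
// Assumed semantics only, not the actual HUB formatter.
function formatCurrency(value, options = {}) {
  const {
    format = '%u%n',
    unit = '£',
    precision = 2,
    decimalSeparator = '.',
    thousandsSeparator = ',',
  } = options;
  const [intPart, fracPart] = value.toFixed(precision).split('.');
  // Insert the thousands separator every three digits from the right
  const grouped = intPart.replace(/\B(?=(\d{3})+(?!\d))/g, thousandsSeparator);
  const number = fracPart ? grouped + decimalSeparator + fracPart : grouped;
  return format.replace('%u', unit).replace('%n', number);
}
```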
Global application state and methods
Expressions
Expressions are a simple way to access data from the app runtime or from the server response.
Data is accessed through an object that is internally known as the Core context or Expression context.
If an expression fails, it returns `undefined`. If you specify the default parameter, it is returned when the expression fails.
Examples of expressions:
property: !expression flow.export.value

property: !expression
  eval: flow.export.value
  default: Nothing

# Multiline expression
property: !expression |
  const variable = 1;
  // More lines of JavaScript
  console.log('Hello', variable);
Functions
A function is another type of expression. It's useful for adding callbacks, such as a button's `onClick` event.
- !component as: div
  onClick: !function "console.log('Click')"
  items: "Click me"
Extending Expression context
To extend the expression context with custom values or methods, the `globals` or `utils` configuration keys can be used:
# index.yml
globals:
  test: "I am a global variable"
utils:
  sum: !expression "function (a, b) { return a + b; }"
Then in page configuration:
- !component as: Heading
  items: !expression "globals.test"
- !component as: Heading
  items: !expression "utils.sum(1, 2)"
Expression context
Expression context is a global object, accessible only from YAML expressions.
analytics
- functions for triggering analytics events
analytics: {
  authorization: (token) => void, // Trigger mixpanel.identify(token), GA.set({ userId: token }) and GTM dataLayer event "authorization" with parameter token (string)
  event: (name: string, params?: object) => void, // Trigger event with given event name and params
  storage: { // More info in "Analytics storage" section
    set: (name: string, value: any) => void
    get: (name: string) => any
    increment: (name: string) => void
  }
}
api
- information about API
api: {
  authToken: string,
  progress: {
    [field-name]: number // Percentage of progress in file uploading
  }
}
app
- information about app
app: {
  locale: string, // Current locale
  targetId: string, // Current target name
  waiting: {
    [tag]: boolean, // App waiting tags
  },
  wrapByLoading: (promise: Promise<any>) => Promise<any>
}
Example usage of `wrapByLoading`:
# Element with click handler as async operation
- !component as: div
  items: "Run simple async operation"
  onClick: !function "app.wrapByLoading(simple_async_operation, 'SIMPLE_TAG')"

# Element with click handler as async operation with complex structure
- !component as: div
  items: "Run complex async operation"
  onClick: !function |
    app.wrapByLoading((async () => {
      await complex_async_operation();
    })(), 'COMPLEX_TAG')

# Displaying loader during async operation
- !component as: VisibilityWrapper
  visible: !expression "app.waiting.SIMPLE_TAG || app.waiting.COMPLEX_TAG"
  items: "Loading..."
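The waiting-tag mechanics shown above can be sketched in plain JavaScript (assumed semantics for illustration, not the actual `app.wrapByLoading` source):

```javascript
// Sketch of the waiting-tag behaviour described above: a tag is set while
// the wrapped promise is pending and cleared when it settles.
// Assumed semantics for illustration, not the actual app.wrapByLoading source.
const waiting = {};

function wrapByLoading(promise, tag) {
  waiting[tag] = true;
  // finally keeps the resolved value (or rejection) of the original promise
  return promise.finally(() => {
    waiting[tag] = false;
  });
}
```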
configuration
- complete configuration of your target in JSON. This is the output of the target-builder module, which parses all files inside the target folder and produces one large JSON that contains all settings, configurations, page structures, etc.
constants
- list of constants, the most important ones are: COUNTRIES_FULL, COUNTRIES, COUNTRY_CODES, LANGUAGES
cookie
- methods exported from the js-cookie module (cookie.get, cookie.set, cookie.remove)
cx
- method exported from the classnames module
device
- information about device
device: {
  ... // https://github.com/duskload/react-device-detect#selectors
  hasWebcam: boolean, // If device has webcamera physically
  hasWebcamPermission: boolean, // If user already granted webcamera permission to current website
}
flow
- data from the server about the flow, and flow/route functions from the server
flow: {
  backEnabled: boolean, // value of backEnabled from API for current page
  execution: { // information about current flow execution
    uuid: string
    token: string
  },
  export: ...any-data-from-server, // this is exported data for page in flow from server
  function: {
    [function-name]: (payload?: any, resultKey?: string) => void, // call (in !function) any flow/route function by its name (like `flow.function.search('something')`), you can also set an output resultKey (default: function-name)
    results: {
      [function-name or result-key]: ...any-data-from-server-function, // here will be data from the server under the function-name or result-key property name (like `flow.function.results.search`)
    }
  },
  goToErrorPage: (message: string, logout?: boolean) => void, // redirect user to error page (if one is specified) with some message put into `page.params.error`. Optionally logout can be performed
  refresh: () => void, // refresh workflow based on current workflow status
  reload: () => void // removes authentication cookie and reloads flow
}
form
- data about form, including states of fields
form: {
  changeValue: (fieldName: string, value: any, callback?: () => void) => void, // change the value of some field; the callback is called after the data is set, for example if you need to submit the form
  data: {
    [field-name]: ...data-inside-field, // data can be string, file, etc.
  },
  field: {
    [field-name]: {
      isValid: boolean, // is field valid
      validationError: string, // only validation errors generated by page schema
      error: string, // all field errors including validation errors and server errors
      isFilled: boolean, // is there any data
      isVisited: boolean, // true, if field was visited before (focused and blurred)
    }
  },
  recompileSchema: () => void, // recompile form validation schema
  clearErrors: (allErrors?: boolean) => void, // clear global/validation/manually set errors
  setError: (fieldName: string, error: string) => void, // manually set an error on some specific field, the error can be cleared by passing a falsy value
  addTags: (tags: string[]) => void, // add tags to form
  removeTags: (tags: string[]) => void, // a regexp as string can also be used to identify more tags
  hasTags: (tags: string | string[]) => boolean, // checks if all passed tags are present
  tag: {
    [tag-name]: boolean, // form visual tags
  },
  submit: (actionName: string, params: string[]) => void, // submit form
  valid: boolean, // is form valid
  visited: {
    [field-name]: boolean // indicates if field was visited
  },
}
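As an illustration of `changeValue` with its callback (the field name `consent` and action name `next` are hypothetical), a value can be set and the form submitted only after the value is stored:

```yaml
# Hypothetical page snippet: set a field value, then submit once it is stored
- !component as: div
  items: "Accept and continue"
  onClick: !function "form.changeValue('consent', true, function () { form.submit('next', []); })"
```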
format
- global formats used in application
format: {
  formatDate: (date: string) => string
  formatCurrency: (value: number, options?: NumberFormat) => string
  formatNumber: (value: number, options?: NumberFormat) => string
  roundNumber: (value: number) => number
  dateFormat: string
  currencyUnit: string
  phoneCountryCode: string
  phoneMask: string
}
globals
- custom constants/variables, see more on how to extend expression context
helper
- helper functions and 3rd party libraries
helper: {
  dayjs, // https://github.com/iamkun/dayjs
  getFileHolder: (file: File | Blob) => Promise<FileHolder> // Get FileHolder compatible with HUB client
}
locals
- local page variables and functions
# page.yml
locals:
  test: "I am a local variable"
  sum: !expression "function (a, b) { return a + b; }"
items:
  - !component as: Heading
    items: !expression "locals.test"
  - !component as: Heading
    items: !expression "locals.sum(1, 2)"
page
- page parameters (which may describe an error). This data is set only by the local application, not the server
page: {
  params: any, // for example page.params.error contains information about why you are on the error page
  name: string, // current page name (route URI)
  storage: { // local page storage, gets cleared on page change
    get: (name: string, defaultValue?: any) => any
    set: (name: string, value: any) => void
  }
}
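For example, `page.storage` can hold a per-page counter that is discarded on navigation (hypothetical snippet):

```yaml
# Hypothetical page snippet: the counter is cleared when the user leaves the page
- !component as: div
  items: "Click me"
  onClick: !function "page.storage.set('clicks', page.storage.get('clicks', 0) + 1)"
```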
translates
- functions to translate a string of text and change the current locale
changeLocale: (locale: string) => void
t: (string, params) => string
te: (string, params) => string
url
- information about locations, query params, etc.
url: {
  ... // https://github.com/unshiftio/url-parse
}
utils
- custom methods, see more on how to extend expression context
Localization
HUB Client has built-in support for multiple locales and an easy way to manage translations.
All translation keys are stored in the src/translates folder in per-language YAML files ({LANG}.yml) and should be declared in the index.yml project configuration file:
defaultLocale: "en"
translates:
  en: !include ./translates/en.yml
Translations can be stored under nested keys, e.g.
# translates/en.yml
welcome:
  text: "Automated real-time identity authentication & decisioning."
  button: "Let's get started"
otp:
  title: "Enter your confirmation code"
  text: "We've sent a confirmation code to your phone number"
...
# Page configuration
- !component as: Heading
  items: !t "welcome.text"
- !component as: SubmitButton
  text: !t "welcome.button"
To use a translation key with some parameter, the following notation can be used:
# translates/en.yml
welcome:
  text: "Some text with {param}"
...
# Page configuration
!t text: "welcome.text"
param: "Zenoo"
# Expression can be used as well
!t text: "welcome.text"
param: !expression "flow.export.param"
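The `{param}` interpolation over nested translation keys can be sketched as follows (assumed behaviour for illustration, not the actual hub-client-translations code):

```javascript
// Sketch of the documented "{param}" interpolation over nested translation
// keys. Assumed behaviour, not the actual hub-client-translations code.
function t(translations, key, params = {}) {
  // Resolve nested keys such as "welcome.text"
  const value = key.split('.').reduce((obj, part) => (obj || {})[part], translations);
  if (typeof value !== 'string') return key; // fall back to the key itself
  return value.replace(/\{(\w+)\}/g, (match, name) =>
    name in params ? String(params[name]) : match
  );
}
```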
There are two ways to use translations in YAML:
- Use the `!t` function. It can provide the value for any property in any object.
- Use `!expression` and call the function `t`.
To change the locale, use the `changeLocale` action, where the first parameter is the target locale name.
Examples:
# Evaluate translation for given translation key
- !t translation_key
# Evaluate translation for dynamic translation key (e.g. error coming from Backend)
- !expression t(flow.export.translation_key)
Markdown and HTML content in translations
A translation key can have a string, HTML or Markdown value:
welcome:
  string: "Welcome"
  text1: !html |
    <h1>Welcome to our <b>website</b></h1>
    <br />
    Please provide some information
  text2: !markdown |
    # Welcome to our **website**
    Please provide some information
To use Markdown/HTML, you need to use the `!te` tag instead of `!t`:
# String
- !component as: Paragraph
  items: !t "welcome.string"

# HTML
- !component as: Paragraph
  items: !te "welcome.text1"

# Markdown
- !component as: Paragraph
  items: !te "welcome.text2"
Built in components
Using components in YAML page configuration
Each component must have an "as" parameter that specifies the component element name. You can use the provided component name, or the standard HTML DOM element.
Each component also has a `$reference` property, which can create a named reference to a DOM element. This reference is accessible through the `$reference` object inside `appDataContext`.
Examples of $reference
:
# Referenceable div
!component as: div
$reference: myDiv

# Some component that uses this reference
property: !expression "$reference.myDiv"
UI components
List of UI components can be viewed in Zenoo Storybook.
EJS partials
It is possible to extend the initial HTML content of the application. By creating/filling the following files in the /ejs folder of the target source, you can extend the content of the head element and add HTML code at the beginning/end of the body tag:
- head.ejs — head element content
- index.ejs — beginning of body element content
- body.ejs — end of body element content
Remote application start
To "pause" application initialization while performing some asynchronous task, the following approach can be used with the help of EJS partials:
head.ejs
<script>
  // Run the application now, or defer until the client core exposes window.runApplication
  function startApplication() {
    if (!window.runApplication) {
      window.onApplicationPrepared = function() {
        window.runApplication();
      }
    } else {
      window.runApplication();
    }
  }

  (function() {
    // Prevent the application from starting automatically
    window.DISABLE_AUTOLOAD = true;
    // Performing some request needed
    return fetch('https://example.com')
      .then(response => response.json())
      .then(data => {
        // Make something with data, e.g. put to global variables
        startApplication();
      })
      .catch(() => {
        startApplication();
      });
  })();
</script>
Target compilation
For target compilation, HUB Client contains a CLI tool which uses the Target Builder module internally.
The Target Builder is a tool that processes target files and combines them with the contents of the compiled @zenoo/hub-client-core into a releasable package.
Target Builder uses the index.yml file as the entry point, then combines and compiles all files included from this file, and also processes files in the assets folder. The following file types are supported: YAML, JSON, HTML, MD, LESS, and CSS.
Target Builder goes through these steps:
- Process assets folders and put the results into the output folder.
- Process the entry point YAML file, and recursively process all includes (for more details, see !include).
- Process/compile all other files (such as LESS, MD, etc.).
- Combine all output into one large configuration.json file and one styles.css file, and place them into the output folder.
CLI commands
# Run from specific target directory
hub-client <command> [<environment>] [-p <port>]

<command> — command to run over the specified target: build, dev or deploy
<environment> — environment name that determines which target entry point will be used: ${target}/src/index.${environment}.yml
Examples
# In targets/<target_name> folder
hub-client dev
hub-client deploy stage
hub-client build production
Options
Parameter | Explanation | Type | Default
---|---|---|---
--port, -p | Port for webpack dev server | number | 8888
--branch, -b | Branch to deploy | string | master
Assets processing
While building a target, two folders of assets are processed:
- <target>/src/assets
- @zenoo/hub-client-common/lib/assets
Target Builder collects all of the content in these two folders, adds a random hash postfix to the filenames (to prevent caching issues), and places them into the /assets output folder. All references to assets are replaced with the new hashed names, in both YML and LESS/CSS files.
In case of collisions between <target>/src/assets
and @zenoo/hub-client-common/lib/assets
, the file from <target>/src/assets
will have a higher priority. Use this prioritization to replace some "default asset" with an asset specific only for this target.
IMPORTANT
The format to reference an asset should be: /assets/some_file.ext
Other reference formats such as ./../assets/some_file.ext will not be resolved properly.
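For example, in a LESS file (the selector is hypothetical), only the root-relative form is rewritten to the hashed filename at build time:

```less
// Resolved: rewritten to the hashed asset name (e.g. /assets/logo-desktop.<hash>.svg) at build time
.logo {
  background-image: url('/assets/logo-desktop.svg');
}

// Not resolved: relative paths such as './../assets/logo-desktop.svg' are left untouched
```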
Styles processing
In the root of each YAML file included at any depth of the target structure, you can define a styles field that contains an array of CSS or LESS files to include in the target build.
In the final steps of the target build, all styles are filtered for uniqueness. This means you can import one style file from as many components as you want; the output styles.css will contain each file only once.
Example:
styles:
- !include ./style.less
- !include ./other-style.css
Environment-specific entry points
To build a target with an environment-specific configuration in Target Builder, you can specify a different entrypoint by creating a different index.yml
file.
The format of this environment-specific index.yml
file must be as follows:
<target>/index.{ENVIRONMENT}.yml
Example: <target>/index.production.yml
Within this file, simply include the main index.yml
entry file:
<<<: !include index.yml
serverUrl: 'https://production.onboardapp.io/api'
analytics:
  ga: '123456'
# More configuration keys for selected environment
Development tools
HUB Client has built-in devtools for an easier development, support and QA process.
# index.yml application configuration file
devTools: true
IMPORTANT:
If you enable devtools in the development environment config, the setting will be inherited by other environment-specific configs. In this case, the devTools parameter should be explicitly set to false in the appropriate environment config.
Read more in Environment-specific entry points.
# index.production.yml application configuration file
<<<: !include index.yml
devTools: false
To toggle development tools panel click on the corresponding button (1).
The devtools interface allows you to preview and navigate through all the pages in the application without the need to fill in all the data every single time, as well as enable extra logging or trigger the Autofill feature for convenient testing.
Available features
- Select the active route from a dropdown by page name, or switch to the next/previous one sequentially, emulating a normal flow (2)
- Toggle extra logging and some other advanced features (3):
  - Disable expression warnings: toggle log warnings for failed !expression or !function
  - Disable reload: prevent the application from reloading after an error on a submit action call
  - Log all expressions: log not only failed !expression or !function, but also successfully resolved ones
  - Log form validations
  - Use mocked flow export: prefill page content with predefined BE responses
- See the core dependency version (4)
- Action buttons to control the flow (5):
  - Reset flow: restarts the application and sends you to the first page of the flow
  - Copy form data: puts a JSON representation of the currently filled user form data into the system clipboard for further use with the Mocked input data feature
  - Use the Autofill feature with a selected scenario
Mocked input data
To test a DO application faster, you can set mocked user input data and go through the whole flow by submitting the dataset for a selected scenario.
Create a JSON file with content similar to:
{
  "default": { // Scenario name
    "welcome": { // Route name
      "firstName": "John" // User input data
    }
  },
  "anotherScenario": {
    "welcome": {
      "firstName": "Edgar"
    }
  }
}
And include it in the YAML application configuration file:
# index.yml application configuration file
mockData: !include ./mockdata.json
Mocked flow export
If you want a page to be displayed correctly in preview mode (navigated with devtools), you can set a mocked flow export, which usually comes from the BE.
# index.yml application configuration file
flowExport: !include ./flowExport.json
{
  "application": {
    "products": {
      "creditCard": "Credit Card",
      "loan": "Loan"
    }
  }
}
Design Studio
Design Studio allows you to quickly design, test and deploy a DO application. It gives you many capabilities without the need to code.
Requirements for a DO target
- The target should have the conventional folder and file structure
- CSS variables listed in src/styles/studio-variables.less should match the available branding/components variables:
body {
  --base-body-background: #f6f6f6;
  --base-border-color: #dde0ec;
  --base-brand-color: #017aff;
  --base-brand-color-contrast: #ffffff;
  --base-color: #465a6a;
  --base-disabled-color: #d6e5f8;
  --base-error-color: #e44343;
  --base-focused-color: #017aff;
  --base-label-color: #8893aa;
  --base-success-color: #1ea03f;
  --base-font-family: 'Inter UI', sans-serif;
  --base-font-size: 16px;
  --base-font-weight: 400;
  --base-letter-spacing: normal;
  --base-line-height: 19px;
  --base-logo: url('/assets/logo-desktop.svg');
  --base-header-logo: url('/assets/logo-mobile.svg');
  --base-form-background: #ffffff;
  --base-form-color: var(--base-color);
  --base-form-border-radius: 7px;
  --base-form-box-shadow: 0 11px 14px -10px #aec1f7;
}
- Layout components should have the property isLayout set to true in order to properly propagate changes in component properties:
# src/layouts/main.yml
components:
  footer: !include ../components/footer.yml

!component as: LayoutWithSidebar
isLayout: true # Required for DS to understand how to change properties in nested layout items
items:
  !property name: items
footer:
  - !ref components: footer
- Application localization should be done using nested translation keys for more convenient work with translation tools in Design Studio
- For all pages to work properly in preview mode, use the flowExport studio settings key and describe export data for each route, e.g.:
{
  "calculator": {
    "loanPurposeList": {
      "Business": "Business",
      "Personal": "Personal"
    }
  },
  "employment-info": {
    "employmentTypeList": {
      "Employed": "Employed",
      "Self-Employed": "Self-Employed"
    }
  }
}
Design Studio settings available in application configuration
Parameter | Description
---|---
country | Used to define global formats (currency, date format, phone number mask etc.)
flowExport | Mocked flow export to display pages in DesignStudio correctly
logo | Logo to be displayed in projects list
name | Project name to be displayed in DesignStudio, title will be used as a default value
previewUrl | Link to preview environment
Example configuration
title: "Zenoo Demo Project"
...
studioSettings:
  country: "Mexico"
  flowExport: !include ./flowExport.json
  logo: "/assets/logo.png"
  name: "ZenooBank"
  previewUrl: "https://onboarding.zenoo.com/"
Changelog
v1.26.0
July 17, 2023
New features
- Support for static pages: if there is a page in index.yml whose name starts with the $ character, it becomes accessible outside of any HUB workflow by adding ?s=page-name to the URL query (without the $). This can be useful for landing pages, handling APIs and other static content:

# index.yml
...
pages:
  $static: !include ./pages/static.yml # Accessible with ?s=static
- Sharable headless call published inside the YAML flow functions context: currently there is sharableStart, which consumes a token and expects to get a route (to continue some subflow); now there is also a new call of the same API named sharableHeadless, which doesn't expect to get any route, which is valid behaviour for some specific usages of the HUB sharable API. Both of these functions are accessible in the flow functions context, so they are callable from YAML pages (most probably static pages)
v1.25.2
July 13, 2023
Bug fixes
- Fix
<Datepicker>
component date format withoutuseIsoDate
property
v1.25.1
July 12, 2023
Bug fixes
- Use forked version of
console-feed
module to prevent 404 from npm
v1.25.0
July 3, 2023
New features
- Introduce
flowReference
property in target configuration, which can be used to automatically reload workflow after this property change
v1.24.7
June 26, 2023
Bug fixes
- Prevent to load
favicon.ico
for targets in subfolders
v1.24.6
June 22, 2023
Bug fixes
- Properly handle loading state in
<ResendCode>
component
v1.24.4
June 7, 2023
Bug fixes
<Datepicker>
fix correct ISO date format on input change
v1.24.3
June 5, 2023
Refactor
- Optimization of dynamic sidebar progress steps
v1.24.1
May 19, 2023
Bug fixes
- Make translations work in the index.yml file with the !t and !te YAML tags
v1.24.0
May 18, 2023
New features
<Datepicker>
component now hasuseIsoDate
property, which will force ISO date format
v1.23.0
May 15, 2023
New features
- Add
flowSharableRequired
toindex.yml
configuration, which will display error page in case sharable token is missing
v1.22.0
May 5, 2023
New features
- Add empty translations fallback to
configureTranslations
method exported fromhub-client-translations
package. This change will not affect targets and used only in Studio
v1.21.9
April 27, 2023
Bug fixes
<Iovation>
component: remove script from<head>
after component onmount
v1.21.6
April 26, 2023
Bug fixes
- Fix inconsistent form value set with
multiple
property in<FileUpload>
component on image rotation
v1.21.5
April 24, 2023
Bug fixes
- Add global analytics params (
analyticsParams
inindex.yml
) to change page action
v1.21.4
April 21, 2023
New features
- Add shortcut for
sharable
token in query, nowt
orsharable
can be used, e.g.https://{TARGET_URL}/?t={SHARABLE_TOKEN}
v1.21.2
April 6, 2023
Bug fixes
- Fix
<Select>
component behaviour: close select list on "not found text" click (notFoundText
prop)
v1.21.0
March 9, 2023
Breaking changes
- All file upload components (
<FileUpload>
,<AcuantFileUpload>
,<AcuantFileUploadButton>
,<Signature>
) are now by default sending single file descriptor to HUB (File
) instead of an array with single item ([File]
before). In casemultiple
property is set to true — file descriptors are being sent as an array as before.
To support this change you need to:
- If you use single file upload, modify the validation JSON schema of the appropriate page on the FE:
# Before
schema:
  required:
    - document
  properties:
    document:
      type: array
      minItems: 1
      items:
        properties:
          size:
            maximum: 10485760
            errorMessage:
              _: !t "errors.invalidFileSize"
          # other properties validations...

# After
schema:
  required:
    - document
  properties:
    document:
      properties:
        size:
          maximum: 10485760
          errorMessage:
            _: !t "errors.invalidFileSize"
        # other properties validations...
- In case you still need the payload to be sent as an array of file descriptors, use the multiple property. Note that this property also affects the UI, e.g. the <FileUpload> component will display Add another file.

- !component as: FileUpload
  name: document
  multiple: true
  label: "Document"
- Make appropriate changes in DSL
v1.20.3
January 5, 2023
Bug fixes
<Image>
: fix PDF document preview for multiple files
v1.20.2
January 5, 2023
Bug fixes
<SidebarProgress>
: add--CM-sidebarProgress-border-dashed-color
CSS variable to adjust border dashes color
v1.20.1
December 21, 2022
Bug fixes
<Slider>
: fix disabled styles in Safari
v1.20.0
December 13, 2022
New features
- New
formOutputModifier
page configuration key allows to override form data before submit. Can contain!function
or!expression
YAML tags for dynamic calculation based on expression context values
v1.19.0
November 28, 2022
New features
<Slider>
: allow to provide custom slider steps with newsteps?: number[]
property
v1.18.1
October 18, 2022
Bug fixes
<Slider>
: force numeric keyboard, fix style issues in Firefox
v1.18.0
October 14, 2022
New features
- Introduce
roundToStep
property for<Slider>
component, which is updating slider value to nearest step in case of manual value editing
v1.17.2
October 13, 2022
Bug fixes
- Handle value limits in
onBlur
event in<Slider>
component
v1.17.1
September 15, 2022
Bug fixes
- Do not allow to access process and global in EJS
- Fix default autocomplete value for
<Input>
component
v1.17.0
September 15, 2022
New features
- Add
<SVGImage>
component
v1.16.0
September 15, 2022
New features
- Add
<Portal>
component as an implementation of React portal - Add
onClick
handler to<RadioButton>
component
v1.15.2
September 12, 2022
Bug fixes
<MaskedInput>
: consider 0 value as filled input
v1.15.1
September 9, 2022
New features
- Add possibility to define custom
src
,subkey
andversion
to<Iovation>
component
v1.15.0
August 31, 2022
Breaking changes
- Change handoff credentials storage from
LocalStorage
to cookies with possibility to set expiration withhandoffTimeout
parameter inindex.yml
Bug fixes
- Switching HUB-client packages versions to exact match
v1.14.16
August 3, 2022
New features
- Added support for
-r
(--useRelativePaths
) target builder CLI parameter for resolving all references to files (inside the /public/ folder) relative to index.html
v1.14.15
August 2, 2022
Refactor
- Extend link components (
<a>
,<LinkButton>
, any kind of button inside<Checkbox>
label) CSS variables
--base-link-color: var(--base-brand-color);
--base-link-disabled-color: var(--base-disabled-color);
--base-link-font-weight: 400;
--base-link-text-transform: initial;
--base-link-text-decoration: underline;
--base-link-hover-color: var(--base-brand-color);
--base-link-hover-text-decoration: underline;
v1.14.14
August 1, 2022
Breaking changes
- Set actual HUB backend version as default, setting
apiVersion: 'v1'
inindex.yml
is not mandatory anymore
If you need to run application with legacy version of HUB backend, you need to set
apiVersion
field inindex.yml
tov0
Refactor
- Expose
changeLocale
action toCoreContext
Bug fixes
- Set correct value to
app.locale
in YAML expression context on application initialization
DevOps
Design Studio Cluster (AWS)
Overview of Services
Zenoo services are grouped under 2 categories:
- Build time
- Run time
Build time
Studio falls under this category; it manages target creation, modification and deployment. It also handles the design elements of the onboarding applications (targets) together with workflow updates.
Run time
Hub Instance (backend) and Hub Client Target (frontend) are deployed under this category to orchestrate the onboarding journeys and integrations with 3rd party providers.
Design Studio is deployed as container task in ECS via AWS command line in a pipeline (e.g.: via GitHub action).
Amazon Cognito is the service employed for user management and permissions within Design Studio.
GitHub/Bitbucket is used by Design Studio as a version control system for target (frontend web app) sources and pipelines to compile the targets to static websites.
Other services are deployed in the same way as under the HUB Cluster.
Network Diagram
HUB Cluster in AWS
Overview of Services
HUB Client Target (Frontend) is stored in AWS S3 buckets as a static website and handled under a CloudFront distribution.
HUB Instance (Backend) is deployed as a container task in ECS via AWS command line in pipeline.
All those container tasks are behind ALB (Application Load Balancer) and the requests are routed accordingly.
MSK (Managed Service Kafka) is the streaming layer of the backend where the user journey executions are handled within different topics (see Hub backend docs for more details).
ElastiCache Redis is used to cache the files processed by the backend (e.g.: documents uploaded by the end-user).
Request Flow
Onboarding app (Hub Client Target) is downloaded into the user’s browser through CloudFront distribution.
Each request from the app is made through Application Load Balancer.
Data Flow
Zenoo Components
HUB Instance (Runtime Backend)
Backend is a JVM service. Written in Java / Groovy, based on Spring Boot framework. Workflow execution is stored in Kafka topics.
Result of the build pipeline is a docker image which runs the backend as a stand-alone server.
Example resources where the target is deployed:
On AWS:
- ECS EC2 cluster
On Azure:
- Azure Container Service
HUB Client Target (Frontend)
The frontend is based on the React framework.
The result of the build pipeline is a set of static HTML and JavaScript files that can be served from a CDN (Content Delivery Network) or by a standard HTTP server such as nginx.
Example resources where the target is deployed:
On AWS:
- One AWS S3 website bucket
- One CloudFront distribution
On Azure:
- Azure Storage
- Azure CDN
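For the non-CDN option mentioned above, a minimal nginx sketch might look as follows. The root path is an assumption, and the `try_files` fallback reflects the common single-page-app pattern of routing unknown paths back to `index.html`:

```nginx
server {
    listen 80;
    # Directory containing the static build output (placeholder path)
    root /var/www/hub-client-target;
    index index.html;

    location / {
        # Serve files if they exist, otherwise fall back to the SPA entry point
        try_files $uri $uri/ /index.html;
    }
}
```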
AWS CloudFront Distribution
CloudFront is a web service that delivers the onboarding web application content. The configuration for each delivery is defined as a distribution.
The behaviour of each distribution contains settings such as object compression, viewer protocols (HTTP or HTTPS), allowed methods and caching.
Besides fetching the frontend content from the S3 bucket origin, multiple paths can be defined to route requests to different origins, such as the backend load balancer.
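A hedged CloudFormation-style sketch of such a distribution is shown below. The origin IDs, domain names and the `/api/*` path pattern are illustrative assumptions, not the actual configuration:

```yaml
DistributionConfig:
  Origins:
    - Id: frontend-s3-origin
      DomainName: example-frontend-bucket.s3.amazonaws.com    # placeholder bucket
      S3OriginConfig: {}
    - Id: backend-alb-origin
      DomainName: hub-backend-alb.eu-west-2.elb.amazonaws.com # placeholder ALB
      CustomOriginConfig:
        OriginProtocolPolicy: https-only
  DefaultCacheBehavior:
    TargetOriginId: frontend-s3-origin
    ViewerProtocolPolicy: redirect-to-https
    Compress: true
  CacheBehaviors:
    - PathPattern: /api/*
      TargetOriginId: backend-alb-origin
      ViewerProtocolPolicy: https-only
      AllowedMethods: [GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE]
```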
Design Studio
Design Studio is a stateless NodeJS service running as a container in ECS. It is responsible for the UI and flow management of the hub client targets and their deployments.
Minimum Requirements
Minimum AWS resource tiers and units to run the hub components are listed below.
| Service | Resource | Units |
|---|---|---|
| MSK | Number of brokers (kafka.t3.small) | 2 |
| | GB per month | 100 |
| EC2 | Number of instances (t3.large) | 1 |
| | GB per month | 60 |
| ElastiCache | Number of nodes (cache.t3.small) | 1 |
| ALB | Number of LCUs | 1 |
| WAF | Web ACLs | 1 |
| | Rules | 10 |
| | Requests | max 1 million |
| CloudFront | Free Tier | |
| S3 | Standard, GB per month | 50 |
Pricing Example
US East (N. Virginia) AWS Region:
| Service | Charges | Usage | Rate | Subtotal |
|---|---|---|---|---|
| MSK | Broker instance usage, in hours | 31 days * 24 hrs/day * 2 brokers = 1,488 total hours | $0.0456 per hour for a kafka.t3.small | 1,488 hours * $0.0456 = $67.85 |
| | Storage charges, in GB-months | 50 GB * 1 month per broker | $0.10 per GB-month in the US East region | 50 GB-months * $0.10 * 2 = $10 |
| EC2 | Instance usage, in hours | 31 days * 24 hrs/day * 1 instance = 744 total hours | $0.0832 per hour for a t3.large | 744 hours * $0.0832 = $61.90 |
| | Storage charges, in GB-months | 30 GB * 1 month | $0.10 per GB-month in the US East region | 30 GB-months * $0.10 = $3 |
| ElastiCache | Node usage, in hours | 31 days * 24 hrs/day * 1 node = 744 total hours | $0.034 per hour for a cache.t3.small | 744 hours * $0.034 = $25.30 |
| ALB | Application Load Balancer-hours and LCU-hours | 31 days * 24 hrs/day = 744 total hours | $0.0225 per ALB-hour, $0.008 per LCU-hour | 744 hours * ($0.0225 + $0.008) = $22.69 |
| WAF | 1 Web ACL, 10 rules and max 1 million requests | | Web ACL $5.00 per month (prorated hourly), rule $1.00 per month (prorated hourly), requests $0.60 per 1 million | (1 ACL * $5) + (10 rules * $1) + (1 * $0.6) = $15.60 |
| CloudFront | Free Tier: 1 TB of data transfer out, 10,000,000 HTTP or HTTPS requests, 2,000,000 CloudFront Function invocations | | | $0 |
| S3 | Standard storage | 50 GB per month | $0.023 per GB-month | 50 * $0.023 = $1.15 |
| **Total (per month)** | | | | **≈ $207.50** |
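The subtotals above can be re-derived from the quoted rates. The sketch below assumes a 31-day month and the US East prices listed in the table:

```shell
# Recompute the monthly estimate from the quoted rates (31-day month).
awk 'BEGIN {
  hours = 31 * 24                        # 744 hours per instance
  msk    = hours * 2 * 0.0456            # 2 kafka.t3.small brokers
  mskgb  = 50 * 0.10 * 2                 # 50 GB-months of storage per broker
  ec2    = hours * 0.0832                # 1 t3.large instance
  ec2gb  = 30 * 0.10                     # 30 GB-months of EBS storage
  redis  = hours * 0.034                 # 1 cache.t3.small node
  alb    = hours * (0.0225 + 0.008)      # ALB-hour + 1 LCU-hour
  waf    = 5 + 10 * 1 + 0.6              # 1 ACL, 10 rules, 1M requests
  s3     = 50 * 0.023                    # 50 GB-months of Standard storage
  total  = msk + mskgb + ec2 + ec2gb + redis + alb + waf + s3
  printf "total per month: $%.1f\n", total   # prints $207.5
}'
```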
Build & Run Commands
The commands below give an overview of how each service is built and run. The actual implementation may vary based on the cloud provider.
| Component | Commands |
|---|---|
| hub-instance-zenoo | `./gradlew clean build`<br>`./gradlew bootRun` |
| hub-client-targets-zenoo/targets/ | `npm i`<br>`npm start` |
Deployment
The examples below illustrate the steps of a typical CI/CD pipeline based on GitHub Actions. Additionally, a sample container configuration is defined as a Docker Compose file below.
HUB Instance (Backend)
-- docker-compose.yml --
Sample docker compose file to deploy hub instance to ECS:
```yaml
version: '3'
services:
  hub-instance-zenoo:
    image: '917319201960.dkr.ecr.eu-west-2.amazonaws.com/hub-instance-zenoo:v0.0.1'
    ports:
      - '0:8080'
    environment:
      SPRING_PROFILES_ACTIVE: 'stage'
    logging:
      driver: awslogs
      options:
        awslogs-group: zenoo-stage
        awslogs-region: eu-west-2
        awslogs-stream-prefix: backend
```
-- build.yml --
Sample GitHub build action config:
```yaml
name: 'Zenoo Hub Instance - Build'
on:
  pull_request:
    branches:
      - master
      - integration/**
      - release/**
    paths-ignore:
      - 'README.md'
      - 'docs/**'
      - 'docker/**'
      - '.github/**'
  push:
    branches:
      - master
    paths-ignore:
      - 'README.md'
      - 'docs/**'
      - 'docker/**'
      - '.github/**'
jobs:
  build:
    name: 'Build and Tests'
    timeout-minutes: 30
    runs-on: ubuntu-latest
    services:
      mongodb:
        image: mongo:4.2.2
        ports:
          - 27017:27017
    steps:
      - name: 'Checkout'
        uses: actions/checkout@v2
      - name: 'Cache gradle dependencies'
        uses: actions/cache@v1.1.0
        with:
          path: ~/.gradle/caches
          key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle') }}
          restore-keys: |
            ${{ runner.os }}-gradle-
      - name: 'Setup JDK 1.8'
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: 'Grant execute permission for gradlew'
        run: chmod +x gradlew
      - name: 'Run build and tests'
        run: ./gradlew clean build
```
-- deploy.yml --
Sample GitHub action config for deployment:
```yaml
name: 'Zenoo Hub Instance - Deploy'
on:
  deployment:
    branches:
      - master
env:
  access-key: ${{ secrets.AWS_ACCESS_KEY }}
  secret-key: ${{ secrets.AWS_SECRET_KEY }}
  cluster: zenoo-cluster-1
  config-name: zenoo-cluster-1
  profile-name: zenoo-stage
  region: eu-west-2
  launch-type: EC2
  project-name: hub-instance-zenoo
  target-group-arn: arn:aws:elasticloadbalancing:eu-west-2:917319201960:targetgroup/hub-instance-zenoo-stage/4ae349354189008b
  container-name: hub-instance-zenoo
  container-port: 5005
  image-repo: 917319201960.dkr.ecr.eu-west-2.amazonaws.com/hub-instance-zenoo:v0.0.1
jobs:
  buildAndDeployZenooStage:
    name: 'Deploy Zenoo Hub Instance to Stage'
    if: github.event.deployment.environment=='stage'
    runs-on: ubuntu-latest
    steps:
      - name: 'Starting deployment to ${{ github.event.deployment.environment }}'
        uses: deliverybot/status@master
        with:
          state: 'pending'
          token: '${{ secrets.GITHUB_TOKEN }}'
      - name: 'Setup ECS-CLI'
        uses: marocchino/setup-ecs-cli@v1
        with:
          version: v1.18.1
      - name: 'Checkout project'
        uses: actions/checkout@v2
      - name: 'Login to Amazon ECR'
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_KEY }}
          AWS_REGION: ${{ env.region }}
      - name: 'Build and upload docker image'
        run: ./gradlew jib --image ${{ env.image-repo }}
      - name: 'Cluster configuration'
        working-directory: docker/stage
        run: |
          ecs-cli configure --cluster ${{ env.cluster }} --default-launch-type ${{ env.launch-type }} --config-name ${{ env.config-name }} --region ${{ env.region }}
          ecs-cli configure profile --access-key ${{ env.access-key }} --secret-key ${{ env.secret-key }} --profile-name ${{ env.profile-name }}
      - name: 'Compose service up'
        working-directory: docker/stage
        run: |
          ecs-cli compose --project-name ${{ env.project-name }} service up --create-log-groups --cluster-config ${{ env.config-name }} --ecs-profile ${{ env.profile-name }} --target-group-arn ${{ env.target-group-arn }} --container-name ${{ env.container-name }} --container-port ${{ env.container-port }}
      - name: 'Deployment success'
        if: success()
        uses: deliverybot/status@master
        with:
          state: 'success'
          token: '${{ secrets.GITHUB_TOKEN }}'
      - name: 'Deployment failure'
        if: failure()
        uses: deliverybot/status@master
        with:
          state: 'failure'
          token: '${{ secrets.GITHUB_TOKEN }}'
```
HUB Client Target (Frontend)
-- deployment.yml --
Sample GitHub action config for deployment:
```yaml
name: Deploy Zenoo target
on: ['deployment']
env:
  access-key: ${{ secrets.AWS_ACCESS_KEY_ID }}
  secret-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  region: eu-central-1
jobs:
  buildAndDeployZenooEkyc:
    name: "Deploy Zenoo Target"
    runs-on: ubuntu-latest
    if: github.event.deployment.payload=='zenoo' && github.event.deployment.environment=='stage'
    steps:
      - uses: actions/checkout@v2
      - name: Update NPM
        run: sudo npm install -g npm@latest
      - name: Authenticate with registry
        run: echo "//nexus.zenoo.com/repository/npm-internal/:_authToken=${{secrets.ZENOO_NPM_TOKEN}}" > ~/.npmrc
      - name: Setting Nexus as default registry for zenoo packages
        run: npm config set @zenoo:registry http://nexus.zenoo.com/repository/npm-internal/
      - name: Installing NPM dependencies
        working-directory: targets/zenoo
        run: npm i
      - name: Build Target application for STAGE
        working-directory: targets/zenoo
        run: npm run build
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ env.access-key }}
          aws-secret-access-key: ${{ env.secret-key }}
          aws-region: ${{ env.region }}
      - name: Copy files to S3 Bucket
        run: |
          aws s3 sync ./targets/zenoo/build/development/static/ s3://zenoo.onboardapp.io --acl public-read --source-region ${{ env.region }} --region ${{ env.region }}
```
Performance
Throughput
- 200-300 milliseconds response time when no external provider is involved
- 10-15 seconds response time when an external provider is involved, depending on the provider's processing time (e.g. Salesforce, QualID, Acuant)
Concurrency
- 200/500/1000 user flows per second, depending on the complexity of the DO flow, when no provider is involved
- 15/25 user flows per second when a provider is involved, depending on the provider's rate limits (e.g. Salesforce, QualID, Acuant)
Availability
- The platform is hosted on highly available resources within primary and secondary regions to support disaster recovery as part of the business continuity plan
Security Overview
- Directory listing on the frontend is disabled; asset directories cannot be browsed directly.
- Allowed HTTP methods (mainly needed by backend service): GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
- API calls are encrypted under HTTPS/SSL and secured using JWT tokens generated when a user journey starts; tokens are valid for a configurable time to limit session duration.
- The JWT token is validated as an access token. It is generated per user flow and expires after a configurable time (e.g. 15 minutes)
- HTTPS/SSL certificates are generated and managed by AWS ACM with SHA256WITHRSA algorithm
- The backend service validates each request path and payload; invalid entries result in exceptions returned as error responses
- The backend service accepts only JSON request payloads
- Backend service supports encrypted and authenticated connections with AWS MSK and ElastiCache Redis clusters (See hub backend docs for the details).
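To illustrate the JWT-based session limit described above, the sketch below decodes the payload segment of a token to inspect its expiry claim. The token value is a made-up example, not a real Hub token, and real validation must also verify the signature, not just decode the payload:

```shell
# Decode the payload of a JWT to inspect its expiry claim (illustrative token).
token='eyJhbGciOiJIUzI1NiJ9.eyJleHAiOjkwMH0.fake-signature'
payload=$(printf '%s' "$token" | cut -d. -f2)
# base64url strips '=' padding; restore it before decoding
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
# base64url also swaps '+'/'/' for '-'/'_'; translate back, then decode
printf '%s' "$payload" | tr '_-' '/+' | base64 -d   # prints {"exp":900}
echo
```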