FogBrain

continuous reasoning for managing next-gen distributed applications

QuickStart

FogBrain currently supports service placement and migration decisions, both at first deployment and at application management time, focusing mainly on services affected by the latest changes in infrastructure conditions (e.g. the crash of a node hosting a service, or excessively high latency between communicating services).

Application Placement Problem

FogBrain solves the application placement problem stated below, both at first deployment and at management time.

Let A be an application composed of interacting services S1, ..., Sh with their (hardware, software, IoT, communication) requirements and let I be a Cloud-IoT infrastructure composed of interconnected nodes N1, ..., Nk featuring certain (hardware, software, IoT, communication) capabilities.

An eligible placement for A over I maps each service Si of A to a certain node Nj of I so that all service requirements are satisfied by node and networking capabilities.
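The definition above can be sketched in Prolog, assuming the service/4, node/4, s2s/4 and link/4 facts used throughout this tutorial. Note that this is only an illustrative sketch, not FogBrain's actual implementation: for simplicity, it ignores the cumulative hardware and bandwidth allocation over shared nodes and links, which FogBrain does account for.

% eligiblePlacement(+AppId, -Placement): maps each service of AppId onto a node.
eligiblePlacement(AppId, Placement) :-
    application(AppId, Services),
    mapServices(Services, Placement),
    checkLinks(Placement).

% each service goes to a node meeting its software, hardware and IoT requirements
mapServices([], []).
mapServices([S|Ss], [on(S,N)|P]) :-
    service(S, SWReqs, HWReq, IoTReqs),
    node(N, SWCaps, HWCap, IoTCaps),
    HWReq =< HWCap,
    subset(SWReqs, SWCaps),
    subset(IoTReqs, IoTCaps),
    mapServices(Ss, P).

% no service-to-service requirement is violated by featured latency/bandwidth
checkLinks(P) :-
    \+ ( member(on(S1,N1), P), member(on(S2,N2), P),
         s2s(S1, S2, MaxLat, MinBW),
         N1 \== N2,
         \+ (link(N1, N2, Lat, BW), Lat =< MaxLat, BW >= MinBW) ).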

For more details on this problem, you can have a look at this research article.

Declare an Application

The application sketched below relies on machine learning to optimise home interior lighting based on data sensed by a video camera, acting upon a smart lights hub. It consists of two interacting microservices: an ML Optimiser and a Lights Driver.



The ML Optimiser requires 16GB of RAM, the Ubuntu OS with mySQL and python, and the availability of a GPU at the target deployment node. The Lights Driver requires 2GB of RAM, the Ubuntu OS, and reachability of the video camera and of the lights hub from the target deployment node.

Besides, communication from the Lights Driver to the ML Optimiser tolerates a latency of at most 20 ms and requires the availability of at least 16 Mbps of bandwidth to livestream video footage. Similarly, communication from the ML Optimiser to the Lights Driver tolerates a latency of at most 50 ms and requires 0.5 Mbps of available bandwidth.

All such requirements can be declared in FogBrain as in:

% application(AppId, [ServiceIds]).
application(lightsApp, [mlOptimiser, lightsDriver]).
% service(ServiceId, [SoftwareRequirements], HardwareRequirements, [IoTRequirements]).
service(mlOptimiser, [mySQL, python, ubuntu], 16, [gpu]).
service(lightsDriver, [ubuntu], 2, [videocamera, lightshub]).
% s2s(ServiceId1, ServiceId2, MaxLatency, MinBandwidth).
s2s(mlOptimiser, lightsDriver, 50, 0.5).
s2s(lightsDriver, mlOptimiser, 20, 16).

Declare an Infrastructure

The infrastructure sketched below spans a Cloud-IoT continuum made of three nodes: a private Cloud VM, a Wi-Fi access point and an edge computing node. GPUs are available at the private Cloud and at the edge node. The access point and the edge node can also reach a lights hub and a video camera, which can be used to deploy the previously described application. Each node features its own hardware and software capabilities and interconnects with the others, with latencies and bandwidths reported on the end-to-end links.



All such capabilities can be declared in FogBrain as in:

% node(NodeId, [SoftwareCapabilities], HardwareCapabilities, [IoTCapabilities]).
node(privateCloud, [ubuntu, mySQL, python], 128, [gpu]).
node(accesspoint, [ubuntu, mySQL, python], 4, [lightshub, videocamera]).
node(edgenode, [ubuntu, python], 8, [gpu, lightshub, videocamera]).

% link(NodeId1, NodeId2, FeaturedLatency, FeaturedBandwidth).
link(privateCloud, accesspoint, 5, 1000).
link(accesspoint, privateCloud, 5, 1000).
link(accesspoint, edgenode, 5, 20).
link(edgenode, accesspoint, 5, 20).
link(privateCloud, edgenode, 15, 18).
link(edgenode, privateCloud, 15, 18).

Try FogBrain!

FogBrain actually runs in the boxes below. The first box contains editable information about the application and infrastructure sketched above; the second box contains a query to fogBrain/2, which can be used either to determine a first placement for the whole application, or to determine migrations for services affected by changes in infrastructure conditions.

First Placement
To determine a first placement for lightsApp, you simply need to specify all application requirements and target infrastructure capabilities in the box below:
% application(AppId, [ServiceIds]).
application(lightsApp, [mlOptimiser, lightsDriver]).
% service(ServiceId, [SoftwareRequirements], HardwareRequirements, [IoTRequirements]).
service(mlOptimiser, [mySQL, python, ubuntu], 16, [gpu]).
service(lightsDriver, [ubuntu], 2, [videocamera, lightshub]).
% s2s(ServiceId1, ServiceId2, MaxLatency, MinBandwidth).
s2s(mlOptimiser, lightsDriver, 50, 0.5).
s2s(lightsDriver, mlOptimiser, 20, 16).

% node(NodeId, [SoftwareCapabilities], HardwareCapabilities, [IoTCapabilities]).
node(privateCloud, [ubuntu, mySQL, python], 128, [gpu]).
node(accesspoint, [ubuntu, mySQL, python], 4, [lightshub, videocamera]).
node(edgenode, [ubuntu, python], 8, [gpu, lightshub, videocamera]).
% link(NodeId1, NodeId2, FeaturedLatency, FeaturedBandwidth).
link(privateCloud, accesspoint, 5, 1000).
link(accesspoint, privateCloud, 5, 1000).
link(accesspoint, edgenode, 5, 20).
link(edgenode, accesspoint, 5, 20).
link(privateCloud, edgenode, 15, 18).
link(edgenode, privateCloud, 15, 18).

Then issue a query to fogBrain/2 (delete and retype the . at the end of the query to re-trigger evaluation) to see the eligible first placements output for lightsApp:

fogBrain(lightsApp,Placement).

By default, FogBrain asserts the first eligible placement it determines.
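With the declarations above, the mlOptimiser can only be placed on the privateCloud (the only node offering mySQL plus 16GB of RAM and a GPU), while the lightsDriver is first mapped to the accesspoint, which reaches both the videocamera and the lightshub. The query should therefore bind Placement roughly as below (the ordering of the list may differ):

?- fogBrain(lightsApp, Placement).
Placement = [on(lightsDriver, accesspoint), on(mlOptimiser, privateCloud)]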

Migration with Continuous Reasoning

To trigger continuous reasoning, we assume that the application deployment below (the first one found) is currently running and has been asserted as a fact in the FogBrain knowledge base:

deployment(lightsApp, % AppId
    [on(lightsDriver,accesspoint),on(mlOptimiser,privateCloud)], % Placement
    [(privateCloud,16),(accesspoint,2)], % AllocHW
    [(accesspoint,privateCloud,16),(privateCloud,accesspoint,0.5)]).  % AllocBW

As shown above, FogBrain represents asserted deployments as deployment/4 facts, containing information on the current Placement of the application services, and on the hardware and bandwidth resources the application requires on nodes and end-to-end links, viz. AllocHW and AllocBW.

Copy and paste the deployment/4 fact into the first box below. Then try changing the infrastructure so as to force a migration of the lightsDriver, e.g. by removing lightshub from the list of IoT capabilities of the accesspoint:

% application(AppId, [ServiceIds]).
application(lightsApp, [mlOptimiser, lightsDriver]).
% service(ServiceId, [SoftwareRequirements], HardwareRequirements, [IoTRequirements]).
service(mlOptimiser, [mySQL, python, ubuntu], 16, [gpu]).
service(lightsDriver, [ubuntu], 2, [videocamera, lightshub]).
% s2s(ServiceId1, ServiceId2, MaxLatency, MinBandwidth).
s2s(mlOptimiser, lightsDriver, 50, 0.5).
s2s(lightsDriver, mlOptimiser, 20, 16).

% node(NodeId, [SoftwareCapabilities], HardwareCapabilities, [IoTCapabilities]).
node(privateCloud, [ubuntu, mySQL, python], 128, [gpu]).
node(accesspoint, [ubuntu, mySQL, python], 4, [lightshub, videocamera]).
node(edgenode, [ubuntu, python], 8, [gpu, lightshub, videocamera]).
% link(NodeId1, NodeId2, FeaturedLatency, FeaturedBandwidth).
link(privateCloud, accesspoint, 5, 1000).
link(accesspoint, privateCloud, 5, 1000).
link(accesspoint, edgenode, 5, 20).
link(edgenode, accesspoint, 5, 20).
link(privateCloud, edgenode, 15, 18).
link(edgenode, privateCloud, 15, 18).

% copy & paste the deployment/4 for lightsApp here:

After changing the infrastructure, query the fogBrain/2 predicate again (delete and retype the . at the end of the query) to trigger continuous reasoning and determine a new placement only for the lightsDriver service (e.g. migrating it from the accesspoint to the edgenode):

fogBrain(lightsApp,Placement).
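For instance, after removing lightshub from the accesspoint, the edgenode becomes the only node still reaching both the videocamera and the lightshub, and it satisfies the lightsDriver's requirements (2GB of RAM, ubuntu) as well as the latency and bandwidth constraints towards the privateCloud. Continuous reasoning should thus migrate only the lightsDriver, leaving the mlOptimiser in place (again, the ordering of the list may differ):

?- fogBrain(lightsApp, Placement).
Placement = [on(lightsDriver, edgenode), on(mlOptimiser, privateCloud)]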

After playing with this QuickStart tutorial, you can download FogBrain from our GitHub repository and browse its online docs here.



About This Site

This site is live and interactive powered by the klipse plugin:

  1. Live: The code is executed in your browser
  2. Interactive: You can modify the code and it is evaluated as you type

© 2020 Service-Oriented Cloud & Fog Computing Research Group, Department of Computer Science, University of Pisa, Italy