Channel: projects.eclipse.org - GitHub

Triquetrum

Background: 

We have been developing Passerelle at eclipselabs@Google (see https://code.google.com/a/eclipselabs.org/p/passerelle/) for many years, as a specialization of the Ptolemy II actor-oriented modeling and simulation framework. It uses Ptolemy as a workflow execution engine and offers a basic GEF-based graphical model editor. Passerelle has been applied in open-source tools for scientific workflows and has also been integrated in iSencia's commercial Passerelle EDM product.

Recently we've agreed with the Ptolemy team at UC Berkeley to collaborate more closely on an evolution of Ptolemy II towards the world of OSGi and RCP. This would also include a refactoring of the existing Passerelle code-base, merging some parts into Ptolemy II and making sure that the RCP editor is no longer tied to Passerelle, but becomes generically usable on Ptolemy II.

Besides an RCP editor/workbench, there are also requirements & initial solution components for headless runtimes, ad-hoc task-based processing and integration with external resource managers and data analysis packages.

There are already several scientific workflow systems available, but many are specific to particular research domains. We believe that the combination of Eclipse/OSGi with Ptolemy's architecture for hierarchical and heterogeneous actor-based modeling delivers a solid platform for a wide range of workflow applications.

The Eclipse Science IWG is an ideal community for such work.

And as eclipselabs@Google is closing down, we believe the time is right to take the step to a "real" Eclipse project.

Scope: 

The project is structured along three lines of work:

  1. A Ptolemy II RCP model editor and execution runtime, taking advantage of Ptolemy's features for heterogeneous and hierarchical models.
    The runtime must be easy to integrate in different environments, ranging from a personal RCP workbench to large-scale distributed systems.
    To that end we will deliver supporting APIs for local & remote executions, including support for debugging/breakpoints etc.
    The platform and RCP editor must be extensible with domain-specific components and modules.
    We will also deliver APIs to facilitate development of extensions, building on the features provided by Ptolemy and OSGi.
     
  2. APIs and OSGi service implementations for Task-based processing. This would be a "layer" that can be used independently of Ptolemy, e.g. by other workflow/orchestration/sequencing software or even ad-hoc systems, interactive UIs, etc.
     
  3. Supporting APIs and tools, e.g. integration adapters for external software packages, resource managers, data sources, etc.
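As a rough illustration of the second line of work, a Task-based processing layer could be a small, engine-independent Java API. All names here (Task, TaskProcessor, InMemoryProcessor) are illustrative assumptions, not the actual Triquetrum API:

```java
import java.util.HashMap;
import java.util.Map;

// A Task carries a name, input parameters, a lifecycle status and a final result.
class Task {
    enum Status { CREATED, RUNNING, FINISHED, FAILED }
    final String name;
    final Map<String, Object> parameters = new HashMap<>();
    Status status = Status.CREATED;
    Object result;
    Task(String name) { this.name = name; }
}

// A processor executes tasks; implementations could be synchronous,
// asynchronous/buffered, or adapters delegating to external resource managers.
interface TaskProcessor {
    Task process(Task task);
}

// Trivial in-memory processor: runs the task and traces its lifecycle.
class InMemoryProcessor implements TaskProcessor {
    public Task process(Task task) {
        task.status = Task.Status.RUNNING;
        task.result = "processed " + task.name + " with " + task.parameters;
        task.status = Task.Status.FINISHED;
        return task;
    }
}
```

Because nothing in this layer refers to Ptolemy, the same Task abstraction could be driven by a workflow model, an interactive UI, or an ad-hoc script.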


"Vanilla" packages will be delivered that can be used for general Ptolemy modeling work.

Triquetrum will also deliver extensions, with a focus on scientific software. There is no a priori limitation on target scientific domains, but the currently interested organizations are big research institutions in materials research (synchrotrons), physics and engineering.

Description: 

Triquetrum delivers an open platform for managing and executing scientific workflows. The goal of Triquetrum is to support a wide range of use cases, ranging from automated processes based on predefined models to replaying ad-hoc research workflows recorded from a user's actions in a scientific workbench UI. It will allow users to define and execute models ranging from personal pipelines with a few steps to massive models with thousands of elements.

Besides delivering a generic workflow environment, Triquetrum also delivers extensions with a focus on scientific software. There is no a priori limitation on target scientific domains, but the currently interested organizations are big research institutions in materials research (synchrotrons), physics and engineering.

The integration of a workflow system in a platform for scientific software can bring many benefits:

  • makes the steps in scientific processes explicitly visible in the workflow models (instead of hidden inside program code);
    such models can serve as a means to present, discuss and share scientific processes in communities with different skill sets
  • allows differentiating roles within a common tool set: software engineers, internal scientists, visiting scientists, etc.
  • promotes reuse of software assets and modular solution design
  • provides technical services for automating complex processes in a scalable and maintainable way
  • serves as a crucial tool for advanced analytics on gigantic datasets
  • integrates execution tracing, provenance data, etc.

The implementation will be based on the Ptolemy II framework from UC Berkeley.

Why Here?: 

There are several reasons to join the Eclipse community with this project.

Technologically, it will integrate several existing Eclipse technologies like Equinox, RCP, Graphiti and EMF. So it is a natural fit to become part of the same community and deliver our results here as well.

On a more functional level, this project will be linked to the Eclipse Science IWG. Through the integration of Ptolemy II as Eclipse RCP plugins, this project will add a research domain to the Science IWG: system design, modeling, and simulation techniques for hierarchical, heterogeneous concurrent systems. Beyond this "native" Ptolemy application domain, and thanks to its advanced and open actor-oriented software architecture, Ptolemy has also already been integrated in several other domains, e.g. as a workflow or process engine for automating scientific workflows.

Triquetrum will also deliver specialized tools and reusable libraries for scientific workflows, which would be a valuable contribution to the Science IWG, we hope.

Initial Contribution: 

For the Ptolemy RCP editor:
1. A minimal editor capable of drawing simple top-level models and running them, built on Graphiti and EMF.
2. An EMF-based model as a binding layer towards the underlying Ptolemy model elements.
3. Essential Ptolemy bundles as binary dependencies. These will at first be built from the Ptolemy II repository.

For the Task-based processing:
1. A domain API for processing arbitrary sequences of Tasks, with their parameters, lifecycle tracing and final results.

Supporting services:
1. A ProcessingService API with brokers, handlers, etc. for synchronous and asynchronous/buffered task processing
2. An Adapter API to execute Tasks that require external services or applications

All the initial code will be rewritten from the existing Passerelle code. It will be copyrighted by the concerned committer(s) or their organizations. The license will be EPL:

  • The proposed elements in the last two topics are based on what's available in Passerelle, but will need to be refactored and extracted from there.
  • For the first line, the principles have already been tried out, but the current Passerelle code-base is not appropriate as an initial code drop. So that will take the most effort...
Project Scheduling: 

An initial contribution is planned for August 2015.

In the fall of 2015, the following will be added:

1. For the Ptolemy RCP editor:
- A minimal set of task-based actors as a showcase of the contents of the other lines of work.

2. For the Task-based processing:
- A basic implementation for in-memory processing.

3. Supporting services:
- An integration of that API with DAWNSci's Python analysis RPC
- Some trivial implementations to connect to SOAP web services


4. A first build configuration, using Eclipse's Tycho-based build infrastructure

Future Work: 

In the first year after the project start, we will spend significant effort migrating/reproducing many of the GUI features that Ptolemy II now offers in its Swing-based Vergil editor.

A second line of work will be to implement different storage strategies for execution traces and provenance info, based on the Task-based processing model.

This will be the basis for providing reproducible workflows. We will also collaborate with the Ptolemy team to define optimal ways to recover and continue from execution faults, to arrive at a sufficient level of fault-tolerance for long-running distributed workflows.

The long-term work will be oriented to build a software platform for "reproducible research".

Collaborations will be started with several existing Eclipse and Science projects.
At this moment we're already in contact with the project leads of the DAWNSci, ICE and PTP projects:

  • DAWNSci delivers core APIs and reference implementations to access scientific data sets in files and other sources. These would be integrated in science-oriented extension modules of Triquetrum.
  • DAWNSci + ICE will deliver requirements for integrating workflow software in a workbench for data exploration and visualization.
  • ICE + Sandia Analysis Workbench will deliver requirements for workflows for large-scale calculations
  • PTP has extensive support for working with computing resources. Integrating with clusters/grids like SGE, SLURM... is a crucial part for large-scale scientific workflows.
  • and Ptolemy is of course a primary collaborating project that already has a significant community. We expect interest from there as well.

Through these collaborations, and through our participation in the Eclipse Science IWG we will grow the community around Triquetrum.

Members of the Science IWG would be invited to evaluate use cases of Triquetrum in their domains, and/or to deliver requirements for future work.

We also intend to write about our work in community articles and to participate, when possible, in Eclipse conferences.

Source Repository Type: 
Parent Project: 
Interested Parties: 
  • Christopher Brooks, Prof. Edward Lee (UC Berkeley) - Ptolemy team
  • Matt Gerring (Diamond LS, UK) - DAWN
  • Jay Jay Billings (ORNL) - ICE
  • Sandia Analysis Workbench team
  • Scott Lewis - ECF

Model Driven Health Tools

Background: 

The Model Driven Health Tools (MDHT) open source project was started in 2008 with the goal of using industry standard modeling languages and tools to design and implement healthcare interoperability standards. The project was formed as part of the Open Health Tools (OHT) organization and we now wish to migrate into Eclipse. From the beginning, MDHT was developed as Eclipse plugins using UML, EMF, and OCL as our foundation. Becoming an Eclipse project is a natural fit for our technology and will open new opportunities to expand our community of developers and users for healthcare data interoperability.

HL7 is the dominant international healthcare standards organization. MDHT use is not limited to HL7 specifications, but our team has focused most attention on designing and implementing HL7 standards, especially the Clinical Document Architecture (CDA) standard and its many implementation guide specifications. The HL7 Consolidated CDA (C-CDA) standard is an essential part of Meaningful Use certification requirements that are required for Electronic Health Record (EHR) vendors in the United States, and CDA is widely used for exchange of healthcare data within and outside of the U.S. The U.S. Office of the National Coordinator for Health IT (ONC) and the National Institute for Standards and Technology (NIST) use Java runtime libraries generated from MDHT's UML model of the C-CDA standard for certification testing and verifying that CDA XML documents satisfy all rules specified in the Meaningful Use rules.

Healthcare data interoperability is currently in the midst of rapid change due to an emerging new standard from HL7, called Fast Healthcare Interoperable Resources (FHIR)®©. The primary focus of our MDHT project team for the next two years will be developing complete support for designing and implementing FHIR standards and derived profiles.

Another closely related project proposal will be submitted to Eclipse in the near future, Model Driven Message Interoperability (MDMI). The MDMI project was set up as a sub-project of MDHT within our previous home at OHT, but they are proposed to become peer projects within Eclipse.

Scope: 

MDHT supports the specification and implementation of healthcare interoperability standards by creating UML profiles, model transformations, publication, code generation, and model editing tools for clinical informatics users. The project scope includes:

  • Model editing tools designed for use by clinical informatics specialists (often not software developers)
  • UML profiles and associated Eclipse UI designed to support healthcare modeling methodology and required extensions
  • Model transformation and Java code generation tools, using EMF and OCL
    • Transform UML models to Ecore models
    • Transform custom UML profile extensions to Ecore structures and OCL expressions (e.g. constraints for terminology binding)
  • Model publication tools
    • UML model-to-text transformation to the DITA publication standard XML format
    • Sample projects and wizards to assist with publishing models as formal specifications and as developer implementation guides
  • UML models created for healthcare standards
    • e.g., C-CDA and FHIR
  • Java code generated from UML models, released as JARs that are the primary download for most health IT developers

 

Description: 

MDHT delivers a standard object-oriented alternative to proprietary development methodologies and tooling used to specify and implement most healthcare industry standards. There are three primary categories of users for MDHT tools: authors of healthcare industry interoperability standards, certification or testing authorities who validate that an Electronic Health Record (EHR) system produces XML or JSON files that comply with the standard, and software developers that implement adapters or applications that produce and consume healthcare data. MDHT defines healthcare-specific UML profiles to specify the interoperability standards and delivers model editing tools that are optimized for clinical informatics users, including a customized interface for the UML profile extensions. MDHT also delivers UML publishing tools that are used to generate specification documents that are ready for submission to standards organizations. Finally, the tooling includes model transformations that leverage EMF and OCL to generate Ecore models and Java code used by healthcare implementers.

What makes healthcare information models different?  Why don't we simply use general-purpose UML tools?

  • Constraint-based modeling approach
    • Most healthcare standards create a base "reference model" and then define templates/profiles/archetypes with lists of constraints on a class from the reference model. For example, in the C-CDA implementation guide, a Vital Sign template includes constraints on the base CDA Observation class. We model this template using a UML subclass, but many constraints cannot be represented using standard UML, especially terminology constraints for coded attributes.
  • Clinical Terminology
    • Creating clinical information models usually involves searching very large and complex terminology systems (i.e. ontologies) such as SNOMED CT or LOINC, creating or evaluating value sets of terminology codes, and assigning these as constraints on attributes in a UML class. MDHT includes specialized editing tools for using these terminology systems, and transformation to runtime Java libraries that support application developers and constraint validation.
  • Generate domain-specific Java classes for templates/profiles/archetypes
    • For example, generate a VitalSign class for implementers, but serialize/deserialize XML or JSON instance data that conforms to the base reference model schema type for CDA or FHIR Observation.
    • Annotate the generated Ecore model to enable customized serialization and deserialization of instances, and validate that instances of the base reference model satisfy constraints indicated by the assigned template ID.
  • Users of modeling tools are often clinical informatics specialists, not familiar with UML or software engineering
    • The modeling tools must enable users to edit and review clinical information models, with terminology value set constraints, while remaining focused on the clinical content and not overwhelmed by model formalisms.
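As a rough illustration of the constraint-based approach above, a generated template class might subclass the base reference-model class and carry the template's extra constraints. The class names, attributes, and value-set codes here are illustrative assumptions, not the actual MDHT-generated API:

```java
import java.util.List;

// Base reference-model class, as defined by the standard's schema (illustrative).
class Observation {
    String templateId;  // identifies which template the instance claims to satisfy
    String code;        // coded attribute, normally bound to a terminology value set
    Double value;
}

// Generated template class: a UML subclass carrying the template's constraints,
// including a terminology binding that plain UML cannot express directly.
class VitalSign extends Observation {
    // Illustrative stand-in for a bound value set of terminology codes.
    static final List<String> VALUE_SET = List.of("8480-6", "8462-4");

    // Validate the constraints the template adds on top of the base class.
    boolean conformsToTemplate() {
        return VALUE_SET.contains(code) && value != null;
    }
}
```

An instance would still be serialized using the base reference-model schema type, with the template ID selecting which set of constraints to validate.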
Why Here?: 

Becoming an Eclipse project is a natural fit for our technology (UML, EMF, and OCL) and will open new opportunities to expand our community of developers and users for healthcare data interoperability. There is complementary synergy with other Eclipse modeling tool projects and we expect that some of the techniques and tools developed for the healthcare industry can be generalized and integrated with other Eclipse modeling projects.

Initial Contribution: 

All existing MDHT project source code and UML models for HL7 standards will be included in the initial contribution. Several plug-ins will be moved to a deprecated folder as part of migration to Eclipse. Copyright for source code is owned by the original contributors, all under the EPL license. All source contributions for previous releases have verified committer agreements with Open Health Tools, and those agreements were originally based on Eclipse Foundation agreements.

MDHT has a large community of developers who download Java runtime libraries that are transformed, generated, and built from UML models for CDA implementation guide standards published by HL7, especially the most recent C-CDA standard. The U.S. Office of the National Coordinator for Health IT (ONC) and the National Institute for Standards and Technology (NIST) use the C-CDA Java runtime libraries as an integral part of certification testing and verification that CDA XML documents satisfy all rules specified in the Meaningful Use rules.

We expect a large and active user community for future MDHT tools that support modeling and implementation of the new HL7 FHIR standards. This will be our immediate priority after completing the project migration into Eclipse.

Project Scheduling: 

All current MDHT source code is managed on GitHub and is available for immediate transition into Eclipse. See https://github.com/mdht

We expect to do some refactoring of the repository structure and move old code (from previous generation of HL7 standards) into deprecated folders. We currently build using Maven and Tycho, so it should be relatively straightforward to adjust for Eclipse build conventions and produce an initial build.

Future Work: 
  • Our primary focus over the next 12-18 months is to implement tooling and define model development processes for the emerging new standard from HL7, Fast Healthcare Interoperable Resources (FHIR)®©. We will draw on lessons learned from our work with HL7 CDA standards to generalize or extend MDHT for full support of FHIR profiles that constrain or extend base resources, and to generate runtime libraries for application developers in Java and other languages.
  • Our specification publishing tools were originally written to support UML models for HL7 CDA standards. We will generalize these publishing tools, using DITA, to support any UML class model and allow specialized formatting for CDA, FHIR, or other specification publishing requirements.
  • We will improve MDHT model editing capabilities to enable a more user-friendly experience for clinical informatics users.
  • We will begin integration with Eclipse Papyrus UML editing tools, especially to enable support for class diagrams.
Source Repository Type: 
Parent Project: 
Project Leads: 

Eclipse Advanced Visualization Project

Background: 

The Eclipse Community continues to grow in new ways in interesting areas, including through the formation of a suite of Working Groups that focus on everything from embedded systems to advanced location-aware technologies. Talks at both FOSS4GNA and the EclipseCons in recent years have revealed a startling amount of visualization technologies as part of this growth. Most recently, cross-working group discussions at EclipseCon NA uncovered a desire to formalize development around some of these activities into a new Eclipse project dedicated to visualization. Further discussions across the Science, LocationTech and Internet of Things (IoT) Working Groups have helped refine the scope and identify the multi-institutional team that will work on the project.

Scope: 

The scope of this project will include, but may not be limited to, the following areas:

  • 1D & 2D Plotting
  • Advanced 2D and 3D visualization for data analysis and post-processing
  • Time series visualizations in multiple dimensions
  • Constructive modeling tools for building 3D geometries and meshes
  • Constructive modeling tools for building molecules and materials models
  • Domain-specific scientific visualizations
  • Imaging

Many in the community use SWT-XY-GRAPH from Eclipse Nebula for 1D & 2D plotting, but several teams have extensions and other features that either enhance or replace it, and there is dedicated development in low-dimension plotting in several existing Eclipse RCP-based projects.

Imaging is of particular interest to the working groups and others in the Foundation. This project plans to develop an open-source, friendly-licensed alternative to the Java Advanced Imaging (JAI) API.

Description: 

Visualization is a critical part of science and engineering projects and has roles in both setting up problems and post-processing results. The input or "construction" side can include things like constructing 3D geometries or volume meshes of physical space, and the post-processing side can include everything from visualizing those geometries and meshes to plotting results, analyzing images, visualizing real data and almost everything else imaginable. There are numerous technologies for performing these tasks and most of them, with the exception of SWT-XY-GRAPH, are unavailable natively in the Eclipse ecosystem.

This work proposes to develop new visualization technologies based on the needs of projects in the working groups and to provide a framework for integrating these and third-party visualization technologies seamlessly with the workbench. The integration framework, which is part of the initial contribution that will be moved from the Eclipse Integrated Computational Environment (ICE), uses pluggable OSGi services to standardize the way that all integrated visualization products interact with the platform. Several abstractions are made for common elements such as plots and construction canvases, and editable properties are provided by the implementations. Each visualization service realizes a standard interface (IVizService) that provides factory methods for constructing the widgets.
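The service pattern just described might look roughly like the following. IVizService is named in the text, but the method signatures and the IPlot type here are assumptions for illustration, not the actual ICE API:

```java
import java.util.Map;

// A plot abstraction returned by the factory; real implementations would
// render into an SWT widget rather than print.
interface IPlot {
    void draw(Map<String, double[]> series);
}

// Each visualization backend (CSV plotter, VisIt, ParaView, ...) registers
// one of these as an OSGi service; the platform looks them up by name.
interface IVizService {
    String getName();
    IPlot createPlot(String dataSourceUri);  // factory method for plot widgets
}

// Trivial implementation standing in for a CSV plotting backend.
class CsvVizService implements IVizService {
    public String getName() { return "csv"; }
    public IPlot createPlot(String dataSourceUri) {
        return series -> System.out.println(
                "plotting " + series.size() + " series from " + dataSourceUri);
    }
}
```

Because every backend realizes the same interface, client code can construct plots without knowing whether the rendering happens in-process or in an external tool like VisIt.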

This work also proposes to develop a standalone, executable Eclipse product (flavor), complete with its own perspective and views, that can be downloaded by users and in which all of the integrated capabilities may be used in a pure visualization context. This ensures that the project will be able to stand as its own product without being a kind of hidden project that is integrated into other projects.

Development of Swing/AWT Widgets

The development of Swing/AWT widgets is beyond the scope of this work, although there is nothing in the design that fundamentally prevents such a development. Contributions from the community that extend the platform to support Swing/AWT widgets are welcome but will not be pursued by the project team.

Examples

The picture below shows some of the proposed capabilities that would be part of this project. The collage shows a 3D constructive solid geometry created interactively in the top left, a domain-specific view of a nuclear reactor plant in the bottom left, 2D plots of neutron scattering data in the top right and a 3D view of a prismatic cell battery, commonly used in phones and laptops, with its temperature as a function of time color mapped onto it in the bottom right.

The plot of neutron scattering data is rendered using SWT-XY-GRAPH, but all of the other images are provided by third party services that are integrated with the visualization service framework. The view of the battery is actually rendered by VisIt, a third party tool for 3D visualization and post-processing that is written in C/C++ and executes as a separate process.


A collage showing plotting, 3D geometry, domain-specific (nuclear reactors), and post-processing visualizations in Eclipse ICE.

The next picture shows some of the proposed imaging capabilities in the initial contribution, courtesy of Marcel Austenfeld from his Bio7 project (http://bio7.org/). It shows a standard United States National Institutes of Health sample image for ImageJ, specifically, according to their website:

"This image is made from a Molecular Probes demo slide:

   Cells: bovine pulmonary arthery endothelial cells
   Blue: nucleus stained with DAPI
   Green: Tubulin stained with Bodipy FL goat anti-mouse IgG
   Red: F-Actin stained with Texas Red X-Phalloidin"


 

Why Here?: 

This project was identified by members of the Eclipse Community, specifically the working groups, as a project that would give us a common area to work and allow us the opportunity to benefit from significant re-use and shared development of visualization technologies.

This project will also expand Eclipse support into a new area: Scientific Visualization.

Initial Contribution: 

The initial contribution will be primarily based on the existing source code in the Eclipse ICE project in the org.eclipse.ice.viz.* and org.eclipse.ice.clients.widgets.rcp bundles. These bundles provide the core capabilities required for 3D constructive solid geometry visualization, mesh visualization, post-processing visualization and the infrastructure described above for managing the services. They include initial service interface implementations for CSV plots, VisIt and ParaView. The copyright for this code is owned by UT-Battelle, LLC, which is a Solutions Member of the Eclipse Foundation, and released under the EPL as part of Eclipse ICE, which is still in incubation.

Marcel Austenfeld is also planning to contribute his ImageJ plugins from Bio7 to provide imaging support.

The initial contribution is already used by several hundred people around the world and includes plugins for popular visualization tools that will continue to increase its adoption over time.

Third-party libraries and their licenses:

VisIt (Works With) - New BSD

ParaView (Works With) - New BSD

VisIt Java Client (Requires) - New BSD

JMonkeyEngine3 (Requires) - BSD - The org.eclipse.ice.client.widgets.rcp bundle currently depends on JMonkeyEngine3, which was rejected in the CQ process. The team is working to replace the capability with JavaFX-based technologies and by the time of the initial contribution this requirement will no longer apply.

ImageJ - 2 Clause (Simplified) BSD

"Works-With" Dependencies will be filed as required.

Project Scheduling: 

Since most of the third-party libraries on which this project depends have already been approved, piggyback CQs can be used and only a minimal amount of additional IP work is necessary. The initial contribution is already working in two existing projects, suggesting that only a minimal amount of code development will be required. Thus we expect that a relatively quick incubation of six months will be required for the first release. The proposed project schedule is as follows:

  • 6 Months - 1.0 release to Eclipse project site
  • 6-9 Months - First talk or demo at EclipseCon
  • 1 Year - First look at and prototypes of 2.0 release from requirements based on community feedback
  • 1 Year - First peer-reviewed publication

 

Future Work: 

The first twelve months will focus on deployment of the existing technologies and prototyping of new capabilities based on community feedback. New functionality includes:

  • ImageJ2 support
  • 3D mesh editing
  • 3D molecule builder/editor
  • Additional implementations of visualization services (IVizServices) for existing functionality
  • Functionality required by LocationTech and IoT (since most of the proposed functionality is based on Science Working Group needs)
  • Improved User Experience
  • Integration of IDataset from DAWNSci where appropriate

This project will have broad applications in the general community, so in addition to proposing tutorials and talks at the EclipseCons, we will build community by reaching out to the traditional visualization community and conferences, as well as submitting a peer-reviewed journal article.

 

Source Repository Type: 
Parent Project: 
Project Leads: 
Interested Parties: 

This project is of interest to the Science Working Group, LocationTech and IoT and may be of interest to others in the Eclipse community, either users or developers.

The United States Department of Energy (DOE) will have a strong interest in the development of this project as the ORNL portion of the initial contribution is directly used and funded by several DOE Offices.

Eclipse Rich Beans

Background: 

Diamond Light Source are part of the Eclipse Science Working Group and have developed several Eclipse RCP products, both for user interfaces and for data analysis and acquisition servers. This project has come out of that development work and already exists. It has some overlap with other Eclipse projects, but we don't think this should be an issue and the ecosystem is richer with multiple solutions!

Scope: 

The scope is to provide a set of widgets for scientific and numeric data which allow values to be entered and validated. The project provides data binding to Java beans and automatic generation of user interfaces made up of the widget set. The project scope also covers editing beans with huge arrays of values and complex bean trees.

Not in the scope:

  1. Serialization: RichBeans is agnostic as to whether XML, JSON or any other technology is used. It is just widgets <-> beans
Description: 

This project allows user interfaces to be created from beans or graphs of beans. The available user interface has standard widgets with few dependencies, making them easy to reuse. For instance, there are widgets for editing numbers with bounds validation and units, which allow expressions referencing other boxes. There are widgets for entering a range of values and expanding out bean graphs to complete Design of Experiments work.

The API will be simple to use, have great widgets for science and be fast for huge field lists. So even though it is a minnow in the world of models, data binding and UI generation, it has some strong points.
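The reflection-based widget-to-bean binding can be sketched minimally as below. The real RichBeans API is richer than this; BeanBinder and its match-fields-by-name scheme are assumptions for illustration only:

```java
import java.lang.reflect.Field;
import java.util.Map;

// A bean like the examples in this section, with fields the UI edits.
class ScanItem {
    private String itemName;
    private double r, theta;
    String getItemName() { return itemName; }
    double getR() { return r; }
}

// Stand-in "UI": a map of widget name -> entered text, pushed into
// same-named bean fields with simple type conversion.
class BeanBinder {
    static void uiToBean(Map<String, String> widgets, Object bean) {
        try {
            for (Map.Entry<String, String> e : widgets.entrySet()) {
                Field f = bean.getClass().getDeclaredField(e.getKey());
                f.setAccessible(true);
                if (f.getType() == double.class) {
                    f.setDouble(bean, Double.parseDouble(e.getValue()));
                } else {
                    f.set(bean, e.getValue());
                }
            }
        } catch (ReflectiveOperationException ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

Matching widgets to fields by name, rather than hand-writing listeners per field, is what makes binding huge field lists like the 200,000-field example below tractable.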

 

Screenshots from three of the examples:

UI

BEAN GRAPH

public class ExampleBean {

    private List<ExampleItem> items;

    //…

public class ExampleItem {

    public enum ItemChoice {
        XY, POLAR;

        public static Map<String, ItemChoice> names() {
            final Map<String, ItemChoice> ret = new HashMap<String, ItemChoice>(2);
            ret.put("X-Y Graph", XY);
            ret.put("Polar",     POLAR);
            return ret;
        }
    }

    private String     itemName;
    private ItemChoice choice = ItemChoice.XY;
    private Double x, y;
    private double r, theta;

    //…

 

public class ExampleBean {

    private List<ExampleItem> items;

    //…

public class ExampleItem {

    private String     itemName;
    private ItemChoice choice = ItemChoice.XY;
    private Double x, y;
    private double r, theta;

    private List<OptionItem> options;

    //…

public class OptionItem {

    private String optionName;
    private boolean showAxes, showTitle, showLegend, showData;
    private static int count = 0;

    //…

 

So more than 200,000 fields are linked and editable in a speedy fashion!

public class ExampleBean {

    private List<ExampleItem> items;

    //… Example has 2000 items

public class ExampleItem {

    private String     itemName;
    private ItemChoice choice = ItemChoice.XY;
    private Double x, y;
    private double r, theta;

    private double d0, d1, d2, d3, d4, d5, d6, d7, d8, d9;
    private double d10, d11, d12, d13, d14, d15, d16, d17, d18, d19;
    private double d20, d21, d22, d23, d24, d25, d26, d27, d28, d29;
    private double d30, d31, d32, d33, d34, d35, d36, d37, d38, d39;
    private double d40, d41, d42, d43, d44, d45, d46, d47, d48, d49;
    private double d50, d51, d52, d53, d54, d55, d56, d57, d58, d59;
    private double d60, d61, d62, d63, d64, d65, d66, d67, d68, d69;
    private double d70, d71, d72, d73, d74, d75, d76, d77, d78, d79;
    private double d80, d81, d82, d83, d84, d85, d86, d87, d88, d89;
    private double d90, d91, d92, d93, d94, d95, d96, d97, d98, d99;

    //… Example has 100 fields

Why Here?: 

This project is of interest to multiple members of the Science Working Group and will directly address requirements of those members - and others in the community - resulting in tighter integration and reuse across the projects.

Diamond Light Source are a member and supporter of the Eclipse Foundation and currently see this as the best route to making a project truly open source.

Initial Contribution: 

Bundle names (copyright Diamond Light Source):

  1. org.eclipse.richbeans.api
  2. org.eclipse.richbeans.widgets
  3. org.eclipse.richbeans.reflection
  4. org.eclipse.richbeans.generator
  5. org.eclipse.richbeans.examples
  6. org.eclipse.richbeans.validation
  7. org.eclipse.richbeans.doe 
  8. org.eclipse.richbeans.xml

What they do

  1. API: a no-dependency interface plugin for services and other interfaces.
  2. A collection of widgets.
  3. Implementation of the bean-to-UI and UI-to-bean service, which performs data binding using reflection.
  4. Automatic generation of user interfaces; may depend on Metawidget (http://metawidget.org/), EPL-licensed code.
  5. Examples of how to use the framework.
  6. Validation plugin for checking that beans are legal and the UI is in a legal state.
  7. Design of experiments plugin for expanding beans that use DOE annotations.
  8. Helpers for XML.
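Item 3 above performs data binding using reflection. A rough sketch of how getter-based property discovery could work is shown below; the class and method names are hypothetical illustrations, not the actual org.eclipse.richbeans.reflection API.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: discover bindable property names from getters,
// roughly how a reflection-based bean<->UI service might locate fields.
// This is not the actual RichBeans reflection service API.
public class PropertyScanner {

    public static List<String> propertyNames(Class<?> beanClass) {
        List<String> names = new ArrayList<>();
        for (Method m : beanClass.getMethods()) {
            String n = m.getName();
            if (n.startsWith("get") && n.length() > 3
                    && m.getParameterCount() == 0
                    && !n.equals("getClass")) {
                // "getItemName" -> "itemName"
                names.add(Character.toLowerCase(n.charAt(3)) + n.substring(4));
            }
        }
        return names;
    }

    // A tiny bean standing in for ExampleItem from the examples above.
    public static class DemoItem {
        private String itemName;
        public String getItemName() { return itemName; }
        public void setItemName(String itemName) { this.itemName = itemName; }
    }

    public static void main(String[] args) {
        System.out.println(propertyNames(DemoItem.class)); // prints [itemName]
    }
}
```

Once property names are known, each can be matched to a widget of the same name and values copied in both directions.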
Project Scheduling: 
  • Initial contribution Autumn 2015 - September or October ideally
  • This project exists; once we have made it ready for release and passed the IP checks, it could join the Eclipse Release Train (unlike the DAWNSci project, which releases with the synchrotron software release train).

 

Future Work: 

We plan to increase the auto-generation capability by using the Metawidget project (http://metawidget.org/) more. Currently, UI generation relies on statically generated classes produced at compile time and does not support bean graphs. Metawidgets are dynamic, support nesting, and would work better with our widget set.

Source Repository Type: 
Parent Project: 
Project Leads: 

Eclipse Tools for Cloud Foundry

Background: 

Cloud Foundry (CF) is an open platform as a service (PaaS) that provides a choice of clouds, runtime frameworks, and application services. It is an open source project with an active and growing community that contributes and supports it, and includes many corporations and organizations like Pivotal, IBM, HP, EMC, Cisco, and SAP.

The Cloud Foundry Tools project was started as a collaboration between Pivotal and IBM, and it is a framework for Eclipse that contains common, reusable application deployment, scaling and service features for Cloud Foundry, and allows third-party vendors to contribute their own Cloud Foundry-based definitions where users can deploy their applications from within their Eclipse IDE.

Scope: 

In Scope:

The scope of this project is to help users deploy and test their applications on Cloud Foundry without leaving their Eclipse integrated development environment. Instead of separately running builds and using the Cloud Foundry command line tool to deploy, scale, or configure applications on Cloud Foundry, developers are able to deploy application projects directly from within their Eclipse IDE, see the running applications on CF, bind or unbind services, scale them up or down, and debug them on CF.
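For reference, the deployment step that the tooling automates is what developers otherwise describe to the Cloud Foundry command line tool through an application manifest. An illustrative manifest is shown below; the application name, path and service name are hypothetical values:

```yaml
# Illustrative Cloud Foundry manifest.yml (hypothetical values)
applications:
- name: demo-app
  memory: 512M
  instances: 2
  path: target/demo-app.war
  services:
  - demo-db
```

`cf push` reads a file like this from the command line; the Eclipse tooling performs the equivalent deployment, scaling and service binding through the Servers view instead.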

Out of Scope:

This project does not deal with the development of Cloud Foundry as a platform itself and does not provide tooling to assist with that. Working on the CF open-source code itself is unrelated to this project, which focuses solely on users working with the Eclipse IDE and targeting a Cloud Foundry platform where they can deploy and test their applications.

Description: 

Cloud Foundry Tools provide an extensible framework and common UI for deploying applications to different Cloud Foundry targets, and the framework integrates closely with the Web Tools Platform (WTP) and Eclipse. It allows application scaling and services management from the same Eclipse-based IDE where applications are developed. Applications can also be debugged on Cloud Foundry using the built-in Eclipse debugger. This makes it very convenient for developers to work on applications running on CF.

The Cloud Foundry Tools are not specialized to work with a single Cloud Foundry platform. They allow users to set various Cloud Foundry targets, be they public Cloud Foundry-based platforms like Pivotal Web Services and IBM Bluemix, or others.

Why Here?: 

Cloud Foundry Tools are a framework that integrates with WTP and uses the existing Eclipse UI for application deployment. The tools are accessible from existing Eclipse views like the Servers view, and allow application deployment to CF through common Eclipse wizards, like the New Server and Run On Server wizards. They also integrate with the Eclipse Console, streaming application logs from CF to the user, and present a view of files on CF through the Remote Systems view.

In addition, the tool has gained a degree of maturity through various release cycles as an open source project since 2012. It is actively developed and maintained by two experienced Eclipse development teams: Spring Tool Suite on the Pivotal side and WTP on the IBM side. The project also receives pull requests from development teams at other organizations.

Being an extensible, open source framework, where third parties can contribute their own CF target definitions through the Cloud Foundry Tools branding extension point, it is an ideal common CF deployment tool for Eclipse.

Its inclusion as an Eclipse project and eventual end-goal of adding it to the Eclipse release train will greatly enhance Eclipse as a primary application development environment for Cloud Foundry.

Initial Contribution: 

The initial contribution is being donated by the Cloud Foundry Foundation, with Pivotal and IBM working on the project.

The initial contribution is a stable codebase and is already used in production.

Project Scheduling: 

The project already shipped a number of releases in the past and uses an established release and development cycle of 6 weeks. This process will continue once the project is at Eclipse.

As well as becoming an Eclipse project, our intention is to get Cloud Foundry Tools onto the Eclipse release train, ideally in the Eclipse 4.5 (Mars) Service Refresh 1 (SR1) timeframe.

The following is a general roadmap for the next 12 months:

  • Begin migration from Cloud Foundry to an Eclipse project after version 1.8.3 of Cloud Foundry Tools is released after mid-June 2015.
  • The current code base contains vendor-specific definitions for Pivotal Web Services. Scope the refactoring work needed to define a core framework and move vendor-specific definitions into extensions, and determine how much of this is necessary.
  • Cloud Foundry itself changes frequently, in particular around the Cloud Controller API, so maintain compatibility at the tools level, possibly with enhancements already in place to better handle version incompatibilities.
  • Adopt WTP changes that allow users to better discover vendor-specific downloadables in the core framework from existing WTP user interfaces. This is future work that may be available before the next Eclipse GA in 2016.
Future Work: 

The first step is to turn Cloud Foundry Tools into an Eclipse project, and the existing code base will be migrated “as is” with the Pivotal Web Services definition. The second step is to include it in the Eclipse release train in SR1, which is the end-goal of this migration to the Eclipse Foundation. Discussions are ongoing on whether to refactor the vendor definitions out of the tool and create a pure core framework, with possibly a vendor-neutral definition example, or do the refactoring after it is part of the release train. Past experiences with WTP indicate that including vendor definitions may be problematic for maintenance. An ideal scenario is to have a pure core framework and vendor specific definitions hosted externally, but discoverable through the framework UI. Current work is being done with WTP to allow this discovery and better support vendor defined Cloud Foundry targets outside of a pure Cloud Foundry Tools core framework.

Source Repository Type: 
Parent Project: 
Project Leads: 
Interested Parties: 

Pivotal

IBM

HP

Huawei

hawkBit

Background: 

Updating software (components) on constrained edge devices as well as more powerful controllers and gateways is a common requirement in most IoT scenarios.

At present, this process is usually handled by the IoT solution itself, sometimes backed by a full-fledged device management system. We believe that this approach generates unnecessary duplicate work in the IoT space, in particular when considering the challenges of implementing a safe and reliable remote software update process: the update process must never fail and must never be compromised because, on the one hand, it can be used to fix almost any issue on the device, but at the same time it poses the greatest security threat if misused to introduce malicious code to the device.

In addition we believe the software update process to be relatively independent from particular application domains when seen from the back end (cloud) perspective. Updating the software for an entire car may differ from updating the firmware of a single sensor with regard to the connectivity of the device to the cloud and also to the complexity of the software package update process on the device. However, the process of rolling out the software, e.g. uploading an artifact to the repository, assigning it to eligible devices, managing the roll out campaign for a large number of devices, orchestrating content delivery networks to distribute the package, monitoring and reporting the progress of the roll-out and last but not least requirements regarding security and reliability are quite similar.

Software provisioning itself is often seen as a sub process of general device management. In fact, most device management systems include functionality for triggering groups of devices to perform an update, usually accompanied by an artifact repository and basic reporting and monitoring capabilities. This is true for both systems specifically targeting IoT as well as systems originating from the mobile area.

Existing device management systems usually lack the capability to efficiently organize roll outs at IoT scale, e.g. splitting a roll out into sub groups, cascading them, or automatically stopping the roll out after a defined error threshold. They are also usually restricted to a single device management protocol, either a proprietary one or one of the existing standard protocols like LWM2M, OMA-DM or TR-069. Even if they support more than one such protocol, they are often shaped by the device management protocol they started with and limited in their ability to adopt others.

At the same time, the wide functional scope of a full-fledged device management system introduces unnecessary (and unwanted) complexity into many IoT projects. This is particularly true for IoT solutions working with constrained devices, where requirements regarding generic device management are often very limited but a secure and reliable software provisioning process is still mandatory.

As a result we have the need for a domain independent solution

  • that works for the majority of IoT projects,
  • that goes beyond the pure update and handles the more complex roll out strategies needed by large scale IoT projects,
  • that at the same time is focused on software updates in the IoT space,
  • and that is able to work on its own for simple scenarios while having the capability to integrate with existing device management systems and protocols.
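The roll out strategy described above - splitting a fleet into sub groups, cascading them, and stopping automatically once an error threshold is exceeded - can be sketched in a few lines. The group size, threshold and device names below are illustrative assumptions; this is not the hawkBit API.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch of a cascaded roll out: devices are split into
// groups, each group is updated in turn, and the campaign stops once
// the failure rate in a group exceeds a threshold. Not the hawkBit API.
public class RolloutSketch {

    /** Returns the number of devices updated before the campaign stopped. */
    public static int run(List<String> devices, int groupSize,
                          double errorThreshold, Predicate<String> updateOk) {
        int updated = 0;
        for (int start = 0; start < devices.size(); start += groupSize) {
            List<String> group =
                devices.subList(start, Math.min(start + groupSize, devices.size()));
            int failures = 0;
            for (String device : group) {
                if (updateOk.test(device)) updated++; else failures++;
            }
            // Cascade to the next group only if this group stayed healthy.
            if ((double) failures / group.size() > errorThreshold) break;
        }
        return updated;
    }

    public static void main(String[] args) {
        List<String> fleet = Arrays.asList("d1", "d2", "d3", "d4", "d5", "d6");
        // Devices "d3" and "d4" fail; with groups of 2 and a 40% error
        // threshold, the second group trips the stop condition.
        int updated = run(fleet, 2, 0.4, d -> !d.equals("d3") && !d.equals("d4"));
        System.out.println(updated); // prints 2
    }
}
```

The value of cascading is visible here: a bad software package stops after one small group instead of reaching the whole fleet.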
Scope: 

The scope of this project is to provide a software update management service for the Internet of Things. That includes the capability to provision software to devices directly or through federated device management systems. In addition it provides value adding processes to the provisioning, e.g. the management of large scale global roll outs, auditing capabilities, reporting and monitoring.

It is out of scope to provide a full blown device management and it is also out of scope to provide client solutions for handling software updates on the device.

Description: 

Project hawkBit aims to create a domain independent back end solution for rolling out software updates to constrained edge devices as well as more powerful controllers and gateways connected to IP based networking infrastructure. Devices can be connected to the hawkBit server either directly through an optimized interface or indirectly through federated device management servers.

hawkBit is device and communication channel neutral by means of supporting:

  • Software and operating system updates for M2M gateways (typically, but not necessarily, running Linux) and
  • Firmware updates for embedded devices

both for

  • cable or
  • over the air (OTA) connected devices

Features at a glance:

  • A device and software repository.

  • Artifact content delivery.

  • Software update and roll out management.

  • Reporting and monitoring.

  • Interfaces:

    • for direct device control.

    • for IoT solutions or applications to manage the repository and the roll outs.

    • for device management federation (i.e. indirect device control)

    • and a user interface to operators to manage and run the roll outs.

Why Here?: 

We see the need for a solution that is open but focused on IoT that can be easily customized for the protocols and 3rd party systems used in the various IoT projects. That approach is currently unique in the industry and will benefit the Eclipse IoT community as other software update or device management systems are either not that flexible or simply not open to the OSS community.

Hosting this project in the Eclipse IoT community allows the project to quickly adapt to the various IoT scenarios out there, e.g. starting from LWM2M connected devices brought to the cloud by Eclipse Leshan down to OSGi empowered gateways enabled by Eclipse Kura.

Initial Contribution: 

The initial contribution will contain a ready-to-run software update server and an artifact download server structured into multiple maven modules based on Spring Boot.

The software update server is proven to run stand alone (fat jar) or in a Cloud Foundry environment (standard Java build pack).

The artifact download server is proven to run stand alone (fat jar) or as a Docker container.

The following interfaces will be included:

  • HTTP/REST interface for devices to integrate.
  • HTTP/REST interface for IoT solutions or applications to control the repository and the roll outs.
  • AMQP interface for device management connector integration.
  • and a Vaadin/GWT based user interface for operators.

The server currently depends on a relational database for the metadata repository (MySQL/MariaDB; H2 DDLs provided) and on MongoDB for artifact hosting. Redis can optionally be used for inner-cluster communication (a central session cache is planned for future development).

Copyright is with Bosch Software Innovations GmbH.

Overview:

 

Detailed list of third-party dependencies, including their licenses:

 

amqp-client-3.5.1.jar Apache License 2.0 
aopalliance-1.0.jar AOP Alliance Public Domain 
aspectjrt-1.8.5.jar Eclipse Public License 1.0 
aspectjweaver-1.8.5.jar Eclipse Public License 1.0 
atmosphere-runtime-2.2.7.vaadin1.jar Apache License 2.0 
classmate-1.2.0.jar Apache License 2.0 
commons-lang3-3.3.2.jar Apache License 2.0 
commons-logging-1.1.1.jar Apache License 2.0 
commons-pool2-2.2.jar Apache License 2.0 
ecj-4.4.2.jar Eclipse Public License 1.0 
evo-inflector-1.2.1.jar Apache License 2.0 
flexibleoptiongroup-2.2.0.jar Apache License 2.0 
flute-1.3.0.gg2.jar W3C Software Notice and License 
flyway-core-3.1.jar Apache License 2.0 
freemarker-2.3.22.jar Apache License 2.0 
gson-2.3.1.jar Apache License 2.0 
guava-16.0.1.vaadin1.jar Apache License 2.0 
guava-18.0.jar Apache License 2.0 
hibernate-validator-5.2.1.Final.jar Apache License 2.0 
jackson-annotations-2.5.1.jar Apache License 2.0 
jackson-core-2.5.1.jar Apache License 2.0 
jackson-databind-2.5.1.jar Apache License 2.0 
javax.json-1.0.4.jar Common Development and Distribution License 1.1 
javax.persistence-2.1.0.jar BSD 3-clause "New" or "Revised" License 
javax.servlet-api-3.1.0.jar Common Development and Distribution License 1.0 
javax.transaction-api-1.2.jar Common Development and Distribution License 1.0 
jboss-logging-3.2.1.Final.jar Apache License 2.0 
jcl-over-slf4j-1.7.12.jar MIT License 
jedis-2.5.2.jar MIT License 
jersey-client-1.18.1.jar Common Development and Distribution License 1.1 
jersey-core-1.18.1.jar Common Development and Distribution License 1.1 
jlorem-1.1.jar MIT License 
joda-time-2.5.jar Apache License 2.0 
jolokia-core-1.2.3.jar Apache License 2.0 
json-path-0.9.1.jar Apache License 2.0 
json-simple-1.1.1.jar Apache License 2.0 
json-smart-1.2.jar Apache License 2.0 
jsoup-1.8.1.jar MIT License 
jsr305-2.0.1.jar Apache License 2.0 
jul-to-slf4j-1.7.12.jar MIT License 
log4j-api-2.1.jar Apache License 2.0 
log4j-core-2.1.jar Apache License 2.0 
log4j-slf4j-impl-2.1.jar Apache License 2.0 
mapstruct-1.0.0.Beta4.jar Apache License 2.0 
mongo-java-driver-3.0.2.jar Apache License 2.0 
objenesis-2.1.jar Apache License 2.0 
org.eclipse.persistence.antlr-2.6.0.jar BSD 3-clause "New" or "Revised" License 
org.eclipse.persistence.asm-2.6.0.jar BSD 3-clause "New" or "Revised" License 
org.eclipse.persistence.core-2.6.0.jar BSD 3-clause "New" or "Revised" License 
org.eclipse.persistence.jpa-2.6.0.jar BSD 3-clause "New" or "Revised" License 
org.eclipse.persistence.jpa.jpql-2.6.0.jar BSD 3-clause "New" or "Revised" License 
rsql-parser-2.0.0.jar MIT License 
sac-1.3.jar W3C Software Notice and License 
slf4j-api-1.7.7.jar MIT License 
snakeyaml-1.14.jar Apache License 2.0 
spring-amqp-1.4.5.RELEASE.jar Apache License 2.0 
spring-aop-4.1.7.RELEASE.jar Apache License 2.0 
spring-aspects-4.1.7.RELEASE.jar Apache License 2.0 
spring-beans-4.1.7.RELEASE.jar Apache License 2.0 
spring-boot-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-actuator-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-admin-starter-client-1.2.2.jar Apache License 2.0 
spring-boot-autoconfigure-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-actuator-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-aop-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-cloud-connectors-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-data-jpa-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-data-mongodb-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-jdbc-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-log4j2-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-tomcat-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-starter-web-1.2.5.RELEASE.jar Apache License 2.0 
spring-boot-vaadin-0.0.5.RELEASE.jar Apache License 2.0 
spring-cloud-cloudfoundry-connector-1.2.0.RELEASE.jar Apache License 2.0 
spring-cloud-core-1.2.0.RELEASE.jar Apache License 2.0 
spring-cloud-localconfig-connector-1.2.0.RELEASE.jar Apache License 2.0 
spring-cloud-spring-service-connector-1.2.0.RELEASE.jar Apache License 2.0 
spring-context-4.1.7.RELEASE.jar Apache License 2.0 
spring-context-support-4.1.7.RELEASE.jar Apache License 2.0 
spring-core-4.1.7.RELEASE.jar Apache License 2.0 
spring-data-commons-1.10.1.RELEASE.jar Apache License 2.0 
spring-data-jpa-1.8.1.RELEASE.jar Apache License 2.0 
spring-data-mongodb-1.7.1.RELEASE.jar Apache License 2.0 
spring-data-redis-1.5.1.RELEASE.jar Apache License 2.0 
spring-data-rest-core-2.3.1.RELEASE.jar Apache License 2.0 
spring-data-rest-webmvc-2.3.1.RELEASE.jar Apache License 2.0 
spring-expression-4.1.7.RELEASE.jar Apache License 2.0 
spring-hateoas-0.16.0.RELEASE.jar Apache License 2.0 
spring-jdbc-4.1.7.RELEASE.jar Apache License 2.0 
spring-messaging-4.1.7.RELEASE.jar Apache License 2.0 
spring-orm-4.1.7.RELEASE.jar Apache License 2.0 
spring-plugin-core-1.1.0.RELEASE.jar Apache License 2.0 
spring-plugin-metadata-1.2.0.RELEASE.jar Apache License 2.0 
spring-rabbit-1.4.5.RELEASE.jar Apache License 2.0 
spring-retry-1.1.2.RELEASE.jar Apache License 2.0 
spring-security-aspects-3.2.7.RELEASE.jar Apache License 2.0 
spring-security-config-3.2.7.RELEASE.jar Apache License 2.0 
spring-security-core-3.2.7.RELEASE.jar Apache License 2.0 
spring-security-web-3.2.7.RELEASE.jar Apache License 2.0 
spring-tx-4.1.7.RELEASE.jar Apache License 2.0 
spring-vaadin-0.0.5.RELEASE.jar Apache License 2.0 
spring-vaadin-eventbus-0.0.5.RELEASE.jar Apache License 2.0 
spring-vaadin-security-0.0.5.RELEASE.jar Apache License 2.0 
spring-web-4.1.7.RELEASE.jar Apache License 2.0 
spring-webmvc-4.1.7.RELEASE.jar Apache License 2.0 
springfox-core-2.0.3.jar Apache License 2.0 
springfox-schema-2.0.3.jar Apache License 2.0 
springfox-spi-2.0.3.jar Apache License 2.0 
springfox-spring-web-2.0.3.jar Apache License 2.0 
springfox-swagger-common-2.0.3.jar Apache License 2.0 
springfox-swagger2-2.0.3.jar Apache License 2.0 
streamhtmlparser-jsilver-0.0.10.vaadin1.jar Apache License 2.0 
swagger-annotations-1.5.0.jar Apache License 2.0 
swagger-models-1.5.0.jar Apache License 2.0 
tokenfield-7.0.1.jar Apache License 2.0 
tomcat-embed-core-8.0.23.jar Apache License 2.0 
tomcat-embed-el-8.0.23.jar Apache License 2.0 
tomcat-embed-jasper-8.0.23.jar Apache License 2.0 
tomcat-embed-logging-juli-8.0.23.jar Apache License 2.0 
tomcat-embed-websocket-8.0.23.jar Apache License 2.0 
tomcat-jdbc-8.0.23.jar Apache License 2.0 
tomcat-juli-8.0.23.jar Apache License 2.0 
vaadin-lazyquerycontainer-7.4.0.1.jar Apache License 2.0 
vaadin-push-7.5.6.jar Apache License 2.0 
vaadin-sass-compiler-0.9.12.jar Apache License 2.0 
vaadin-server-7.5.6.jar Apache License 2.0 
vaadin-shared-7.5.6.jar Apache License 2.0 
vaadin-slf4j-jdk14-1.6.1.jar MIT License 
vaadin-themes-7.5.6.jar Apache License 2.0 
validation-api-1.1.0.Final.jar Apache License 2.0 
xml-apis-1.4.01.jar Apache License 2.0 

 

Project Scheduling: 

Initial contribution expected: 10/2015

First working build expected: 11/2015

Future Work: 
  • Improve user experience for the community.
  • Further restructure the code base for easier integration and customization.
  • Multi-tenancy-ready authority store.
  • Provide off the shelf connectors with device management services in the market.
  • Improve scalability and efficiency of the implementation.
  • Implement complex roll out/campaign management.
Source Repository Type: 
Parent Project: 
Project Leads: 
Interested Parties: 
  • Urs Gleim, Siemens AG
  • Regis Piccand, Verisign

Eclipse Collections

Background: 

Goldman Sachs open sourced GS Collections in GitHub in January 2012.  Since then we have seen a steady increase in interest in the project.  We have not previously accepted external contributions to the framework.  We would like to change that by creating a more open project that we can grow a diverse community around.  We feel we can best accomplish this by moving GS Collections to the Eclipse Foundation, renaming the product to Eclipse Collections and renaming the packages from com.gs to org.eclipse.  

Scope: 

The Eclipse Collections project provides object and primitive data structures for Java (e.g. List, Set, Bag, Multimap, BiMap, Stack).  New container implementations, new iteration protocols, additional parallel iteration patterns and types may be added over time but they should extend one of the root types like RichIterable, PrimitiveIterable or ParallelIterable.

Description: 

Eclipse Collections is a collections framework for Java. It has JDK-compatible List, Set and Map implementations with a rich API, additional types not found in the JDK like Bags and Multimaps, and a set of utility classes that work with any JDK-compatible Collections, Arrays, Maps or Strings. The iteration protocol was inspired by the Smalltalk collection framework.
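The Smalltalk-inspired iteration protocol means operations like select and collect live on the collection itself rather than in external utility code. The following is a simplified, self-contained sketch of that style written for illustration; the real RichIterable API is far richer and differs in detail.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Simplified sketch of a Smalltalk-style iteration protocol, where the
// collection exposes select/collect directly. Not the actual Eclipse
// Collections RichIterable API; it only illustrates the idea.
public class SimpleRichList<T> {
    private final List<T> items = new ArrayList<>();

    public SimpleRichList<T> with(T item) { items.add(item); return this; }

    /** Keep elements matching the predicate (Smalltalk's select:). */
    public SimpleRichList<T> select(Predicate<? super T> pred) {
        SimpleRichList<T> out = new SimpleRichList<>();
        for (T t : items) if (pred.test(t)) out.with(t);
        return out;
    }

    /** Transform each element (Smalltalk's collect:). */
    public <R> SimpleRichList<R> collect(Function<? super T, ? extends R> fn) {
        SimpleRichList<R> out = new SimpleRichList<>();
        for (T t : items) out.with(fn.apply(t));
        return out;
    }

    public int size() { return items.size(); }

    public static void main(String[] args) {
        int n = new SimpleRichList<Integer>()
                .with(1).with(2).with(3).with(4)
                .select(i -> i % 2 == 0)   // keep 2, 4
                .collect(i -> i * 10)      // -> 20, 40
                .size();
        System.out.println(n); // prints 2
    }
}
```

Chaining select and collect on the container itself, rather than looping externally, is the protocol style the framework takes from Smalltalk.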

Eclipse Collections started off as an open source project on GitHub called GS Collections.  GS Collections has been presented at the JVM Language Summit in 2012 and JavaOne in 2014.  There are two articles (part one and part two) on InfoQ.com showing some of the capabilities of the collections framework through examples.  A performance comparison between the parallel lazy implementations of Java 8, Scala and GS Collections was presented at QCon New York in 2014.  A set of memory benchmarks is available here.   

Why Here?: 

We can work much more directly and collaboratively with the community by accepting external contributions through the Eclipse contributor agreement, using git as the primary repository, communicating through Eclipse email distribution lists, wikis, and bug trackers.

Similar to the Eclipse IDE, Eclipse Collections has gotten a lot of inspiration over the years from Smalltalk.

Initial Contribution: 

The copyright for GS Collections is owned by Goldman Sachs and the project is currently licensed under Apache 2.0 on GitHub.  There have been no external contributors to the project so all of the IP is owned by Goldman Sachs.  We will fork the project and rename the packages but will leave GS Collections in its current form under the Apache 2.0 license on GitHub.  The code for GS Collections 7.0 will be almost identical to the code for Eclipse Collections 7.0, except for the difference in package names.  We would like to use both the EPL 1.0 and EDL 1.0 licenses so we can continue to offer Eclipse Collections under permissive terms.

Eclipse Collections has no runtime dependencies on any third-party libraries.

Project Scheduling: 

We would like to provide the initial contribution for Eclipse Collections 7.0 before the end of 2015.

Future Work: 

The library currently provides compatibility back to JDK 1.5.  We would like to make a major change to the library (Eclipse Collections 8.0) by upgrading it to be compatible only with Java 8 or higher.  This will give us tighter interop with the new functional interfaces and APIs like Streams, for which we can provide optimized implementations in the framework.

We will give presentations on Eclipse Collections at Java User Group meetups and technical conferences globally (e.g. JavaOne and EclipseCon) as well as writing technical articles for various developer focused websites. 

Source Repository Type: 
Parent Project: 
Project Leads: 

OMR

Background: 

Building the runtime technology for a new language to match the capabilities of existing mature languages typically requires tremendous effort over decades and, in some cases, never happens because language adoption rates never justify the needed investment. But many of the technologies required are actually not substantially different than the technologies that have been created for existing languages. There are always quirks and peculiarities for each language, but the fundamental technology is really very similar in nature. What makes it extremely difficult to repurpose existing technology for a new language, however, is that the effort to create a new language runtime typically focuses almost entirely on the shortest path to becoming operational for one particular language. “Shortest path” typically means specializing the technology for that language which impedes reuse for other languages. This process has already been repeated many times for many different languages, resulting in several challenges that affect all communities to varying extents:

 

  1. Opportunity cost : every community invests limited resources to independently implement and maintain code that is broadly similar in capability but expressed in different ways. How much more would we all accomplish without this wasted effort?

  2. Long robustness ramp: different implementations tend to run into and fix similar kinds of bugs over their lifetimes. Early design flaws can become extremely restrictive and hard to fix as the community grows around a runtime implementation.

  3. Slow capability adoption: hardware and operating system capabilities take much longer to become consistently available and, in the meantime, the developer community can be disadvantaged on some platforms.

  4. Hampered productivity: frameworks for development, diagnostic, profiling, monitoring, management, deployment, testing, etc. require much more effort to build and maintain across many languages or we build broadly similar (but different) tools for each language (see #1)

  5. Barrier to entry: the more popular hardware, operating systems, and tools become, the harder it is for a new language to become fully capable. Not all language designers necessarily want to become experts at building these capabilities.

  6. Slow forward progress: slower innovation in languages, possibly even foiled in some cases by significant runtime implementation costs

 

One approach to improve this situation is to build other languages within an existing mature runtime environment like Microsoft’s Common Language Runtime or a Java Virtual Machine. For example, Scala, Groovy, jRuby, Nashorn, and many other language projects leverage the Java Virtual Machine (JVM) to run code written in other languages. None of these projects, however, have become the de facto implementations for their target language, in part because the JVM is primarily designed and continues to make implementation trade-offs so as to run Java code very efficiently but not necessarily other languages. The implementation trade-offs needed to support other languages require and encourage workarounds and complexity that would not be needed were it not for the fundamental design constraints (i.e. the “Java-ness”) of the JVM itself. For reasons like this, most languages have a native C or C++ runtime implementation that is considered the reference implementation for the majority of that language’s users. More significantly, however, the success of this kind of approach depends on migrating a community from one runtime implementation to another, across what can be a significant number of implementation differences that manifest for developers and users as varying forms of “my program doesn’t work the way it used to work”. To date, very few large language communities have been able to succeed with this scale of migration.

 

A second approach could be to build new runtime components from scratch that are designed from the outset for reuse. But building even one runtime for one specific language is incredibly hard. Building such componentry to support any runtime but without any specific stakeholder (while conceptually every stakeholder) is almost guaranteed to fail.

 

Neither of these two approaches seems like a sure bet, but the idea to leverage a mature JVM’s core technology feels like the best direction. The JVM technology already exists and has proven itself for at least one mature language community. But bolting other languages on top of Java semantics has not yet shown to be a broadly viable solution.

 

Instead, we propose to reorganize the runtime components of an existing commercial JVM implementation (the IBM Developer’s Kit for Java) to separate the parts that implement Java semantics from the parts that provide key runtime capabilities.

 

The OMR project will be formed around these latter, language-independent parts: a runtime technology platform consisting of core components that serve as a toolbox for building language runtimes. An ecosystem of developers working together to augment the capabilities of this platform, while collaborating with tool and framework developers, fosters industry-wide innovation in managed runtimes, the languages they implement, and the collection of frameworks and tools that will accelerate our industry’s ability to build even more amazing things.

Scope: 

This project consists of core componentry that can be (re)used to build language runtimes, along with test cases that operationally document and maintain the semantics of those components. It is a set of functional, robust components that have no language specificity, together with direct component-level tests. At least initially, it will not include any components or tests that are implemented in language-specific ways, and it will not include any code that surfaces OMR component capabilities to any particular language except as sample code. Code and tests for language-specific capabilities belong in projects devoted to particular languages, but as OMR is consumed by more languages, it may make sense for some language-specific code to reside within the OMR project to accelerate problem discovery for OMR code contributions.

 

Alongside this project, we will be open sourcing our CRuby implementation that leverages the OMR technology, and we also have a CPython implementation that leverages some of it. As we contribute the underlying technology to the OMR project, we'll also open source the CRuby and, eventually, CPython implementations that leverage it.

Description: 

The OMR project consists of a highly integrated set of open source C and C++ components that can be used to build robust language runtimes that will support many different hardware and operating system platforms. These components include but are not limited to: memory management, threading, platform port (abstraction) library, diagnostic file support, monitoring support, garbage collection, and native Just In Time compilation.

The long term goal for the OMR project is to foster an open ecosystem of language runtime developers to collaborate and collectively innovate with hardware platform designers, operating system developers, as well as tool and framework developers and to provide a robust runtime technology platform so that language implementers can much more quickly and easily create more fully featured languages to enrich the options available to programmers.

 

Planned functionality:

  1. Thread Library

  2. Port Library

  3. Garbage Collection

  4. Diagnostic support

  5. Just In Time Compiler

  6. Tooling interfaces

  7. Hardware exploitation (e.g. RDMA, GPU, SIMD, etc.)

  8. Any technology implementing capabilities that can be reused in multiple languages, including source code translators, byte code or AST interpreters, etc.

Why Here?: 

The OMR project is an open, extensible runtime technology platform enabling any kind of language runtime, but OMR is not itself a runtime for any language. Aside from the general support and nurturing environment any open source foundation would provide, the Eclipse Foundation has particular expertise in establishing open communities around platforms. The success of the OMR project will hinge on dependent projects becoming comfortable consuming our technology, through the repeated successful delivery of high-quality code. The collective experience of the Eclipse Foundation is by far our best chance for success, and we think the OMR project would make an excellent addition to the Eclipse Foundation community.

Initial Contribution: 

The initial contribution will include a set of core utilities, a low level memory allocation library, and a thread library along with an initial set of tests for these components and some examples for how to use these components. More components will be contributed through 2016. The code is virtually all owned by IBM (exceptions noted above under "Legal issues"). This project will be the first time this code has been released in the open, so there is no community around it (yet).

Project Scheduling: 

The initial contribution can be made available as early as January 2016, once we complete the review and approval process. Additional components will be opened at an approximately monthly cadence, with the following targets:

 

End Jan 2016

  • Thread Library with core utilities
  • Partial Port Library and data structures
  • Garbage Collection: Mark / Sweep collector initially

End Feb 2016

  • Initial OS/X platform support

End Mar 2016

  • Parallel scavenger GC support, complete concurrent GC support

May/June 2016

  • Very large heap GC support 

June 2016

  • Just In Time compiler initial drop with more code dropping throughout the rest of the year and into 2017
  • System core dump processing facilities for easier problem diagnosis
Future Work: 

The initial focus will be to move our existing code base into the open project and establish the base core componentry. We hope to engage with partners to extend the list of supported platforms as well as begin to work with different language communities to start the adoption process to leverage the OMR components in language runtimes.

Source Repository Type: 
Parent Project: 
Project Leads: 
Interested Parties: 

Lots of interest expressed in the public conferences where we've talked about this technology.


Hono

Background: 
The open source community has produced a lot of excellent technology, frameworks and products that help with implementing IoT applications. A developer usually selects an appropriate set of technology and components and incorporates them into an application. The chosen components need to support the implementation of all relevant aspects of an IoT solution, including device connectivity, management, monitoring, business logic and, last but not least, security enforcement at all levels. In an enterprise context, a typical solution is about connecting a given number of homogeneous devices to a particular application hosted on dedicated server infrastructure. Over time this results in a set of independent silo applications, each managing its own (limited) set of devices. As of today, most of the technology created in Eclipse IoT projects specifically supports the development of IoT applications in this way.
 
Bosch is a major manufacturer of all kinds of electronic devices, most of which (if not all) will have connectivity built into them in the near future. Integrating these devices with individual IoT applications as described above has several drawbacks:
  • The repetitive implementation of common functionality like device communication and management is error prone and inefficient regarding costs, development time and runtime resources.
  • The tight integration of devices with a specific application de facto partitions the set of things into classes determined by the application the devices have been initially integrated with. This makes it hard to create new cross-domain solutions and business models leveraging devices coming from different application domains.
  • The applications implemented this way are often designed to interact with a limited number of devices in an enterprise environment only. Scaling out such applications with an increasing number of devices therefore often requires massive refactoring (if not re-architecting) of the device integration layer in order to support horizontal scalability as required in most cloud-based use cases.
 
Scope: 
Hono provides a uniform (remote) service interface that supports both the Telemetry as well as Command & Control message exchange pattern requirements. In order to do so, Hono also introduces a standard service interface for managing the identity and access restrictions.
 
The Hono project provides an initial set of implementations of the service interfaces described above (corresponding to the IoT Connector component in the diagram shown below in the Description section), leveraging existing messaging infrastructure components. It is not the project's intention to create an additional message broker implementation.
 
Description: 
Connectivity is at the heart of IoT solutions. Devices (things) need to be connected to a back end component where the data and functionality of the devices is leveraged to provide some higher level business value. IoT solution developers can pick from a wide array of existing (open source) technology to implement a device connectivity & management layer for the particular type of devices at hand. While this is often fun for the developers to do, the resulting solutions are often silo applications lacking the ability to scale horizontally with the number of devices connected and the number of back end components consuming the device data and functionality.
 
The Eclipse IoT Working Group has therefore discussed  a more generic, cloud-based IoT platform architecture which better supports the implementation of IoT solutions without requiring developers to solve some of the recurring (technical) challenges over and over again. The diagram below provides an overview of the IoT Server Platform as discussed in the working group.
 
 
The diagram shows how devices in the field are connected to a cloud-based back end either via a Field Gateway (e.g. something like Eclipse Kura) or directly to so-called Protocol Adapters. The Protocol Adapters' responsibility is abstracting communication protocols as well as providing location transparency of devices to the other back end components. The devices upload (sensor) data to the back end while the functions/services they expose can be invoked from the back end. These two directions of information flow can be characterized as follows:
  • Telemetry
    Data flowing upstream (left to right) from devices to the back end to a consumer like a Business Application or the Device Management component usually consists of a small set of discrete values like sensor readings or status property values. In most cases these messages are one-way only, i.e. devices sending this kind of data usually do not expect a reply from the back end.
  • Command & Control
    Messages flowing downstream (right to left) from back end components like Business Applications often represent invocations of services or functionality provided by connected devices, e.g. instructions to download and apply a firmware update, setting configuration parameters or querying the current reading of a sensor. In most cases a reply to the sent message is expected by the back end component.
It seems reasonable to assume that the number of messages flowing upstream (Telemetry) will be orders of magnitude larger than the number of messages flowing downstream (Command & Control). The aggregated overall number of messages flowing upstream is expected to be in the range of several hundred thousand to millions per second. Note that in this architecture the same (cloud-based) infrastructure is shared by multiple solutions.
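The two patterns differ mainly in whether a reply is expected. A minimal Java sketch (the interface and class names here are illustrative, not Hono's actual API) contrasts fire-and-forget telemetry with request-reply command & control:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch, not Hono's real API: contrasting the two
// message exchange patterns described above.
public class ExchangePatterns {

    // Telemetry: one-way, fire-and-forget; no reply is expected.
    interface TelemetrySender {
        void send(String deviceId, String reading);
    }

    // Command & Control: request-reply; the back end awaits an answer.
    interface CommandSender {
        CompletableFuture<String> send(String deviceId, String command);
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> lastReading = new HashMap<>();

        TelemetrySender telemetry = (deviceId, reading) -> {
            // Upstream: record/forward the value, no acknowledgement.
            lastReading.put(deviceId, reading);
        };

        CommandSender commands = (deviceId, command) ->
            // Downstream: the device (stubbed here) produces a reply.
            CompletableFuture.completedFuture(deviceId + " executed " + command);

        telemetry.send("sensor-1", "21.5");
        String reply = commands.send("sensor-1", "read-temperature").get();

        System.out.println(lastReading.get("sensor-1"));
        System.out.println(reply);
    }
}
```

In a real deployment the telemetry path would be tuned for very high throughput, while the command path additionally has to correlate each reply with its originating request.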
 
The IoT Connector component provides the central link between the device-facing Protocol Adapters, additional re-usable back end components, e.g. Device Management or Software Provisioning, and last but not least the IoT solutions leveraging the devices' data and services. Solution developers can use the IoT Connector to uniformly and transparently interact with all kinds of devices without the need for caring about the particular communication protocol(s) the devices use. Multiple solutions can use the same IoT Connector instance running in a shared cloud environment in order to share the data and functionality of all connected devices. The IoT Connector ensures that only those components can consume data and control devices that have been granted authorization by the device owner. In this regard the IoT Connector can be considered an IoT specific message broker targeted at cloud deployment scenarios.
 
The IoT Connector component needs to fulfill a set of non-functional requirements, in particular regarding horizontal scalability, that are specific to both the deployment environment (cloud) and the intended architectural platform characteristics (as opposed to embedding a connectivity layer into applications individually). However, these requirements are not specific to any particular application domain. From a technical point of view it makes no difference if a sensor reading received via a LWM2M protocol adapter represents a temperature or the relative humidity. In both cases the IoT Connector's responsibility is to forward the messages containing the values to (potentially multiple) authorized consumers without introducing too much latency.
 
 
Features at a glance
  • Secure message dispatching
  • Support for different message exchange patterns
  • Used for cloud service federation
  • Provides interfaces to support implementation of protocol adapters which allow:
    • Sending telemetry data
    • Receiving device control messages (from applications/solutions)
    • Registering authorized consumers of telemetry data received from connected devices
Why Here?: 
The Eclipse IoT Working Group already serves as an incubator for projects and technology that helps with the development of IoT solutions. Hono will leverage some of this technology, in particular for the implementation of protocol adapters, while also adding missing but fundamental pieces to an open source cloud-based IoT Server Platform.
 
Some of the IoT Working Group's current member companies already have expressed an interest in collaborating on Hono and we have also started discussions with other prospect companies regarding their involvement in both the project as well as the IoT Working Group.
 
The following list includes Eclipse IoT projects that provide technology relevant for Hono:
  • leshan
  • Californium
  • Mosquitto
  • Paho
Initial Contribution: 
The initial contribution will contain a ready-to-run messaging component and a Java client implementation to interact with this service.
 
The Java client includes the following functionality:
  • establishing a connection to a RabbitMQ broker
  • managing authorization information per topics
  • sending  messages
  • registering topic based handlers for receiving & processing messages
The messaging component includes the following functionality:
  • storing all information required for authorization
  • accepting incoming messages and
  • dispatching messages to all authorized consumers
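The three responsibilities above can be sketched as a minimal topic-based dispatcher (class and method names are illustrative, not the contributed component's actual API): it stores authorization information, accepts incoming messages, and delivers them only to authorized consumers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the messaging component's responsibilities.
public class AuthorizedDispatcher {

    // Stored authorization information: topic -> authorized consumer ids.
    private final Map<String, Set<String>> authorized = new HashMap<>();
    // Delivered messages per consumer (stands in for a real delivery channel).
    private final Map<String, List<String>> inbox = new HashMap<>();

    void authorize(String topic, String consumerId) {
        authorized.computeIfAbsent(topic, t -> new HashSet<>()).add(consumerId);
        inbox.computeIfAbsent(consumerId, c -> new ArrayList<>());
    }

    // Accept an incoming message and dispatch it to authorized consumers only.
    void publish(String topic, String message) {
        for (String consumer : authorized.getOrDefault(topic, Set.of())) {
            inbox.get(consumer).add(message);
        }
    }

    public static void main(String[] args) {
        AuthorizedDispatcher d = new AuthorizedDispatcher();
        d.authorize("telemetry/building-1", "dashboard");
        d.publish("telemetry/building-1", "temp=21.5");
        d.publish("telemetry/building-2", "temp=19.0"); // nobody authorized; dropped

        System.out.println(d.inbox.get("dashboard"));
    }
}
```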
All contributed components are structured into multiple Maven modules. The messaging service has been proven to run standalone (as an executable JAR) or as a Docker container. The Java client implementation can be used as a Java library. The messaging component currently depends on RabbitMQ as the underlying message broker. Consequently, the Java client currently uses the AMQP 0.9 protocol to connect to the messaging component.
 
Project Scheduling: 

We would like to be able to demonstrate a first PoC (based mostly on the initial contribution code) at EclipseCon 2016 in Reston. A first release should be available by Q3 2016.

Future Work: 

As a starting point we provide an implementation based on RabbitMQ because of its easy availability both as a service in existing Cloud Foundry based environments as well as in the form of pre-built Docker images. In order to also provide an implementation supporting horizontal scale-out, we will also create an implementation based on Apache Kafka. Future versions may also support using other cloud-based offerings (e.g. Microsoft's Azure Message Bus or Amazon's Simple Queue Service).

One of the first things to change in the initial contribution will be to define and implement Hono's external messaging interface based on AMQP 1.0 for better interoperability.

Additional steps then include:

  • Support for additional device communication protocols by means of additional Protocol Adapters. In particular, we would like to support LWM2M by means of a leshan based Protocol Adapter.
  • Support for (existing) messaging infrastructure on public cloud providers.
  • Integration with (existing) security infrastructure of public cloud providers and/or IaaS/PaaS stacks.
Source Repository Type: 
Parent Project: 
Project Leads: 
Interested Parties: 
  • Bosch Software Innovations GmbH
  • Red Hat, Inc.
  • innoQ Deutschland GmbH (Thomas Eichstädt-Engelen)
  • GE Digital
  • Siemens AG

January

Background: 

Scientific computing involves the manipulation and processing of various forms of numerical data. This data is organized in specific data structures in order to make the processing as efficient as possible. This project aims to provide a set of standardized Java-based numerical data structures for scientific computing. These data structures will be key to the easy integration of Science Working Group projects and tools such as visualization, workflows and scripting frameworks. A common set of structures reduces barriers to adoption of tools within the Science Working Group and wider community and will speed up the adoption of new technologies.

Scope: 

The January project provides Java implementations of numerical data structures such as multi-dimensional arrays and matrices, including a Java equivalent to the popular Python NumPy library for n-dimensional array objects.

Implementations are scalable to large structures that do not fit entirely in memory at once. For example, data structures up to hundreds of MB generally fit in memory without additional design consideration; however, data structures of many GB or larger need design work to allow efficient processing without loading the entire structure into memory at once. Features such as meta information on data, references to data and slicing of data are therefore first-class citizens of this project. The required outcome is to allow data structures to scale to run on various distributed computing architectures.

This project will also encapsulate methods for loading, storing and manipulating data. This project is designed to work in headless (non-UI) operation for automated data processing.

Description: 

January is a set of libraries for handling numerical data in Java. It is inspired in part by NumPy and aims to provide similar functionality.

Why use it?
  • Familiar. Provide familiar functionality, especially to NumPy users.
  • Robust. Has test suite and is used in production heavily at Diamond Light Source.
  • No more passing double[]. IDataset provides a consistent object for basing APIs on, with significantly improved clarity over using double arrays or similar.
  • Optimized. Optimized for speed and getting better all the time.
  • Scalable. Allows handling of data sets larger than available memory with "Lazy Datasets".
  • Focus on your algorithms. Reusing this library lets you focus on your own code.

For a basic example, have a look at the example project: BasicExample.java

Browse through the more advanced examples.

  • NumPy Examples show how common NumPy constructs map to Eclipse Datasets.
  • Slicing Examples demonstrate slicing, including how to slice a small amount of data out of a dataset too large to fit in memory all at once.
  • Error Examples demonstrate applying an error to datasets.
  • Iteration Examples demonstrate a few ways to iterate through your datasets.
  • Lazy Examples demonstrate how to use datasets which are not entirely loaded in memory.
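The underlying idea, an n-dimensional structure over flat storage that supports slicing, can be sketched in plain Java. This is an illustrative toy, not the project's actual API (the real library provides IDataset and far richer functionality):

```java
import java.util.Arrays;

// Toy sketch of a 2-D dataset backed by a flat row-major array,
// from which slices can be extracted without reshaping the whole
// structure into an algorithm-specific format.
public class MiniDataset {
    final double[] data;   // flat row-major storage
    final int rows, cols;

    MiniDataset(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = new double[rows * cols];
    }

    double get(int r, int c)         { return data[r * cols + c]; }
    void set(int r, int c, double v) { data[r * cols + c] = v; }

    // Slice out a contiguous block of rows, analogous to NumPy's a[r0:r1, :].
    MiniDataset sliceRows(int r0, int r1) {
        MiniDataset s = new MiniDataset(r1 - r0, cols);
        System.arraycopy(data, r0 * cols, s.data, 0, (r1 - r0) * cols);
        return s;
    }

    public static void main(String[] args) {
        MiniDataset a = new MiniDataset(3, 2);
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 2; c++)
                a.set(r, c, r * 2 + c);        // rows: 0 1 / 2 3 / 4 5

        MiniDataset middle = a.sliceRows(1, 2); // just the middle row
        System.out.println(Arrays.toString(middle.data));
    }
}
```

A lazy dataset takes the same idea one step further: the slice request is recorded and only the requested block is ever read from disk.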
Why Here?: 

Common data structures were identified by members of the Eclipse Science Working Group as a fundamental building block for development and integration of scientific tools and technologies.   

Initial Contribution: 

The initial contribution is expected to be made in the first half of 2016.

The initial contribution is a fork of the Eclipse Dawnsci project that extracts Datasets and its associated mathematical libraries. As per the DAWNSci project proposal:

"The copyright of the initial contribution is held ~100% by Diamond Light Source Ltd. There may be some sections where copyright is held jointly between the European Synchrotron Radiation Facility and Diamond Light Source Ltd. No individual people or other companies own copyright of the initial contribution. Expected future contributions like the implementation of various interfaces will have to be dealt with as they arrive. Currently none are planned where the copyright is not European Synchrotron Radiation Facility and/or Diamond Light Source Ltd."

The initial contribution is made up of three plug-ins:

  • org.eclipse.dataset - main code of the project, including the numerical n-dimensional arrays and the mathematics that operates on them. 
  • org.eclipse.dataset.test - test code for the project
  • org.eclipse.dataset.examples - example code and getting started with datasets

All of the dependencies of the initial contribution are libraries that are already part of Eclipse ecosystem in Orbit:

  • org.apache.commons.math3
  • org.apache.commons.lang
  • org.slf4j.api
  • org.junit

The initial contribution is currently actively developed by Diamond Light Source and collaborated on by Diamond Light Source, Kichwa Coders, and European Synchrotron Radiation Facility, among others. 

Project Scheduling: 

This project will aim to join the Eclipse Release Train from the Oxygen release.

After the initial contribution, the project will focus on standardisation across other Science Working Group projects, including (but not limited to):

  • Integration of data structures of Eclipse Advanced Visualisation Project
  • Integration of data structures of Eclipse Integrated Computing Environment
  • Integration with Triquetrum Project 
Future Work: 

Future items of work under consideration include (but are not limited to):

  • Loading and storing of datasets
  • Processing large data sets in an architecturally aware manner e.g. on multiple cores or a GPU
  • Physical and mathematical units
Source Repository Type: 
Parent Project: 
Project Leads: 
Interested Parties: 

This project is of interest to members of the Science Working Group and the following projects:

Azura x-1

Edje

Scope: 

The Eclipse Edje project provides a standard hardware abstraction Java API required for delivering IoT services that meet the performance and memory constraints of microcontroller-based devices. Edje also provides ready-to-use software packages for target hardware that developers can get from third parties to quickly and easily develop IoT device software and applications.

Description: 

The edge devices connected to the Cloud that constitute the Internet of Things (IoT) require support for building blocks, standards and frameworks like those provided by the Eclipse Foundation projects: Californium, Paho, Leshan, Kura, Mihini, etc.
Because Java technology is widely deployed in the Cloud and on the PC, mobile and server sides, most of the projects above are implemented in Java.

Deploying these technologies on embedded devices requires a scalable IoT software platform that can support the hardware foundations of the IoT: microcontrollers (MCUs). MCUs delivered by companies like STMicroelectronics, NXP+Freescale, Renesas, Atmel, Microchip, etc. are small, low-cost, low-power 32-bit processors designed for running software in resource-constrained environments: low memory (typically KB), flash (typically MB) and frequency (typically MHz).

The goal of the Edje project is to define a standard high-level Java API called Hardware Abstraction Layer (HAL) for accessing hardware features delivered by microcontrollers such as GPIO, DAC, ADC, PWM, MEMS, UART, CAN, Network, LCD, etc. that can directly connect to native libraries, drivers and board support packages provided by silicon vendors with their evaluation kits.
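A minimal sketch of what such a HAL looks like from application code (the GpioPin interface and class names here are hypothetical, not Edje's actual API): the application toggles a pin through an abstraction, while the concrete implementation can be swapped for a vendor's board support package.

```java
// Hypothetical HAL sketch: application code programs against an
// abstract pin, independent of the underlying driver.
public class HalSketch {

    // The abstraction the application programs against.
    interface GpioPin {
        void setHigh();
        void setLow();
        boolean isHigh();
    }

    // One possible backing implementation; a silicon vendor would
    // instead map these calls onto its board support package.
    static class SimulatedPin implements GpioPin {
        private boolean high;
        public void setHigh()   { high = true; }
        public void setLow()    { high = false; }
        public boolean isHigh() { return high; }
    }

    public static void main(String[] args) {
        GpioPin led = new SimulatedPin(); // in real use: obtained from the HAL
        led.setHigh();
        System.out.println(led.isHigh() ? "LED on" : "LED off");
    }
}
```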

To achieve this goal, the Edje project also defines the minimal set of APIs required for delivering IoT services, leveraging widely deployed technologies, and meeting the performance and memory constraints of IoT embedded devices. Edje defines the Edje Device Configuration (EDC). Care has been taken to make the EDC a proper subset of the different Java runtime environments found in Android, J2SE, J2ME, OSGi Minimum and others. This project presents the packages and APIs that constitute the core of EDC, defining the minimal foundation that iot.eclipse.org projects can rely on while remaining compatible with the economic constraints of the IoT: footprint. EDC covers the standard packages that are part of the Java core language (java.lang, java.io, …).

Why Here?: 

The Edje project provides a foundation for deploying IoT frameworks and standards delivered by Eclipse on low-cost resource constrained hardware. Hosting the Edje project at Eclipse ensures that the full stack is available from the same source and properly integrated.
Being part of Eclipse, the Edje project expects quicker and broader adoption in the industry, through open source and by leveraging the Eclipse community and ecosystem.
The goal of the Edje project is to accelerate the development and deployment of IoT. The Edje project ensures that applications developed for Edje will run across hardware suitable for IoT deployment.

Initial Contribution: 

The initial contribution of the Edje project consists of the ECOM framework specified by the ESR consortium.
This framework defines the classes to manage the connections and hardware components of a device.

IS2T holds the copyright of the ECOM implementation provided in this contribution. This implementation runs on MicroEJ OS.

Source Repository Type: 
Parent Project: 
Project Leads: 
Committers: 
Guillaume Balan
Sébastien Eon

Halcyon

Background: 

The Industrial Internet of Things (IIoT) often refers to the idea of connectivity and interoperability between machinery found in the manufacturing industry. IIoT represents a significant opportunity to improve the efficiency of industrial automation processes, and of processes in other industries in general. A key challenge for IIoT is the wide and diverse nature of the devices, equipment, software and vendors that comprise the industrial ecosystem.


OPC-UA is an important standard in the industrial automation industry, ensuring interoperability between the many different types of machinery and software. OPC was initially released in 1996 but has evolved over time into a flexible and open standard called OPC-UA. OPC-UA has become one of the key standards behind the Industry 4.0 initiative in Europe, and more specifically in Germany. It is also beginning to see traction in the United States.

 
Scope: 

This project will provide all the tools necessary to implement UA client and/or server functionality in any JVM-based project.

 

The project will provide:

  • a stack implementation, compatible with the latest version (1.03) of the UA specifications.
  • an SDK built on the stack that enables development of compliant UA client and server applications.

 

The separation between stack and SDK may seem arbitrary at first, but the distinction is common among OPC-UA community members and vendors, as it allows SDKs that serve different needs to be built upon a common stack.

Description: 

OPC Unified Architecture is an interoperability standard that enables the secure and reliable exchange of industrial automation data while remaining cross-platform and vendor neutral. The specification, currently version 1.03, is developed and maintained by the OPC Foundation with the guidance of individual software developers, industry vendors, and end-users. It defines the interface between Clients and Servers, including access to real-time data, monitoring of alarms and events, historical data access, and data modeling.

Why Here?: 

The OPC Foundation has been positioning OPC-UA as a contending protocol in the IIoT space, and it is seeing successful adoption, making it a natural fit for the Eclipse IoT ecosystem.

Initial Contribution: 

The initial contribution includes a fully functional stack, client SDK, and server SDK; however, the server SDK is missing certain functionality and API stability, which has kept it from seeing a "1.0" release.

 

This missing functionality would be implemented during the incubation phase.

Project Scheduling: 

The initial contribution is ready.

 

I expect that a production ready "1.0" release of all projects could be ready in either Q2 or Q3 of 2016.

Source Repository Type: 
Parent Project: 
Project Leads: 
Committers: 

N4JS

Background: 
ECMAScript, popularly known as JavaScript, has become an important programming language, not only as a scripting language for web pages, but also for larger projects, including rich web applications and even back-end software. 
In order to develop large projects successfully, static validation and sophisticated tooling are crucial. N4JS enriches ECMAScript with a static type system 
and provides extensive support for static validation hosted within a feature-rich IDE.
Scope: 

N4JS is an extension of ECMAScript providing a sound type system.
It provides a transpiler to translate N4JS files to plain ECMAScript.
The N4JS IDE enables the authoring of JS and N4JS files,
providing tool support analogous to that of best-of-breed Java IDEs, e.g., Eclipse JDT.

Description: 
N4JS adds a static type system similar to that of Java to ECMAScript 2015. This type system supports nominal and structural typing, in both cases supporting generics similar to those of Java 8. In order to capture details specific to ECMAScript, additional constructs are introduced, such as union types, the 'this' type, 
and special forms of structural types. Additional concepts required for larger projects are built in, e.g., dependency injection, test support, and various component types. 
 
N4JS provides an extensible framework for representing and manipulating JS and N4JS files. Based on this framework, it provides integrated, extensible tooling that supports instantaneous validation, content assist, and quick fixes, as well as launch support for running the code and associated tests.
 
Why Here?: 
The N4JS IDE is based on the Eclipse platform and uses many associated Eclipse technologies, in particular Xtext. Not only does it provide a Java-based ECMAScript 2015 parser, it also produces EMF-based models for the abstract syntax trees (ASTs) of JS and N4JS as well as for the type model of N4JS.
This makes it an ideal basis for implementing a host of interesting tools to analyze, manipulate, and transform JS and N4JS.
 
N4JS aims to build a community of contributors who will help build a powerful, feature-rich IDE.
 
Initial Contribution: 

The initial contribution consists of the N4JS IDE (and a headless version), tests, and a small ECMAScript runtime library. The code base consists of approximately 8,000 files. The pedigree of the code is as follows:

  1. The majority of the code is written either by NumberFour employees or by consultants working for NumberFour. The copyright holder for this code is NumberFour AG.
  2. The ECMAScript test suite (github.com/tc39/test262, BSD License) in two versions (for ECMAScript 5 and ECMAScript 2015). For performance reasons, parts of these test suites are archived and included in the test sources.
  3. The built-in ECMAScript APIs including some of the documentation have been copied from the ECMAScript 5 and 2015 (ed. 6) specification (ecma-international.org/../Ecma-262), License

The code has the following dependencies, which are to be resolved during build and run time:

  1. Several Eclipse projects.
  2. Third-party projects found in Eclipse Orbit (e.g., ANTLR, Guava, Guice, some Apache Commons projects).
  3. xpect-tests.org, Eclipse Public License; Author: Moritz Eysholdt
  4. XSemantics, xsemantics.sourceforge.net, Eclipse Public License; Author: Lorenzo Bettini
Project Scheduling: 
The project is already using a build process similar to the one used by Eclipse projects, so it should be possible to quickly set up the Eclipse project with a nightly build once the project is approved and the initial contribution has gone through the due diligence process.
Future Work: 

The current version is already stable and has been in production use internally for over a year. Fixing bugs and adding minor improvements is, of course, ongoing work. Additionally, the following topics are expected to be addressed:

  1. Improved UI experience, in particular customized content assist as well as more quick fixes.
  2. Improved ECMAScript 2015 support.
  3. Improved type inference.
  4. Improved refactoring capabilities.
  5. Improved node.js developer experience, e.g., support for browser-based projects.

Although the tooling only supports N4JS and plain ECMAScript at the moment, it should be possible to also support other ECMAScript typing approaches, such as TypeScript, in the future. Whether this is added depends on community feedback and involvement.

Source Repository Type: 
Project Leads: 
Committers: 
Mark-Oliver Reiser
Jakub Siberski
Ákos Kitta
Torsten Krämer
Daniel Bölzle
Joe Martin
Interested Parties: 

Sven Efftinge (Typefox GmbH, Eclipse Xtext)


Whiskers

Background: 

The Internet of Things (IoT) needs no introduction: it is a widely-held view that billions of devices will be part of the Internet of Things in just a few short years. Yet, while this explosion of devices represents boundless opportunity for innovation, this ascendancy also presents daunting challenges such as fragmentation, vendor lock-in and the proliferation of information silos. The OGC (Open Geospatial Consortium) SensorThings API is an OGC standard that allows IoT devices and their data to be connected in an easy-to-use and open way. The wide adoption of the SensorThings API would contribute to an IoT ecosystem that is healthy and interconnected, rather than one that is proprietary, incompatible and fragmented.

Scope: 

Whiskers is a JavaScript client for the SensorThings API. As the OGC SensorThings API standard specification continues to evolve, Whiskers will evolve with it.

Description: 

Whiskers is a JavaScript client for the SensorThings API. The SensorThings API is an OGC (Open Geospatial Consortium) standard that allows IoT (Internet of Things) devices and their data to be connected; a major goal is to foster a healthy and open IoT ecosystem, as opposed to one dominated by proprietary information silos.

JavaScript is ubiquitous, powering client-side web applications and server-side services alike. The availability of an open source client library is an important step in the adoption of the OGC SensorThings standard, as it makes development quicker and easier. Whiskers aims to make SensorThings development easy for the large and growing world of JavaScript developers.
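To give a feel for the kind of requests such a client issues, the sketch below builds SensorThings-style resource URLs in Python (Python purely for illustration; Whiskers itself is JavaScript). The base URL and the helper function are hypothetical, but the OData-style entity paths and `$`-prefixed query options follow the SensorThings specification:

```python
from urllib.parse import urlencode

# Hypothetical service root; real SensorThings services expose a versioned base URL.
BASE = "https://example.org/SensorThingsService/v1.0"

def resource_url(entity_set, entity_id=None, subpath=None, **query):
    """Build a SensorThings-style resource URL (illustrative helper, not the Whiskers API)."""
    url = f"{BASE}/{entity_set}"
    if entity_id is not None:
        url += f"({entity_id})"        # address a single entity by id, OData style
    if subpath:
        url += f"/{subpath}"           # navigate to a related entity set
    if query:
        url += "?" + urlencode(query)  # $filter, $orderby, $top, ...
    return url

# All Observations of Datastream 1, newest first, at most ten:
url = resource_url("Datastreams", 1, "Observations",
                   **{"$orderby": "phenomenonTime desc", "$top": 10})
```

A client library like Whiskers would wrap exactly this kind of URL construction behind a fluent JavaScript API.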

Why Here?: 

Eclipse is home to a large and growing number of IoT and geospatial projects, and is widely recognized for its good governance. Being a part of the Eclipse community will bring Whiskers visibility, credibility and access to some of the most experienced minds in the open source world.

Initial Contribution: 

There will be no initial contribution. The project will start from scratch.  We expect to use some existing Eclipse IoT libraries, such as Paho and Californium.

 
Project Scheduling: 

Whiskers aims to have an initial contribution ready by the end of Q2 2016.

Future Work: 

Going forward, Whiskers will evolve along with the SensorThings API standard specification. The Sensing Profile of SensorThings (Part I) has been developed, and the Tasking Profile (Part II) is expected to be available in the upcoming months.  The SensorThings Rules Engine (Part III) is also under development within OGC.

 
Source Repository Type: 
Parent Project: 
Project Leads: 
Committers: 
Steve Liang
Interested Parties: 
  • SensorUp Inc. (http://www.sensorup.com)

  • GeoSensorWeb Lab, University of Calgary (http://sensorweb.geomatics.ucalgary.ca/)

Papyrus for xtUML

Background: 

Papyrus for xtUML is the Eclipse Foundation evolution of the formerly (pre-2014) proprietary version of BridgePoint.  xtUML is a dialect of UML that supports precise model editing, execution and translation for complex cyber-physical systems.  Key developers have joined with PolarSys and the Papyrus Industrial Consortium to collaborate on establishing a portfolio of modeling tools oriented toward embedded control, high-level systems, etc.  Papyrus will provide a common platform layer on top of Eclipse.

The Papyrus-xtUML source code has been released under Apache 2.0 since November of 2014 (with parts opened up 2 years before that).  The source code resides on github.com/xtuml in a few different repositories.

A substantial user base exists in industry and academia in several countries around the world.  The largest concentrations of users are in Sweden, Japan and the United Kingdom.  Hundreds of person-years of application model IP exist as xtUML models.

References within Eclipse, PolarSys and the Papyrus IC include Gaël Blondelle, Francis Bordeleau, Charles Rivet, Maximilian Koegel and Bengt Kvarnstrom.

See xtuml.org for more information.

Scope: 

Papyrus-xtUML provides a dialect of UML modeling based upon a published and accepted methodology called Shlaer-Mellor.  The method and tooling have been evolving since the 1990s.  Papyrus-xtUML forms a dialect of UML that fits nicely as a peer to UML-RT (Papyrus-RT), SysML (Papyrus-SysML) and other Papyrus UML/SysML derivatives.  Papyrus-xtUML has differentiated strengths in the areas of precise semantic modeling, execution and translation.

Papyrus-xtUML currently has the following features:  xtUML editor, interpretive model execution ("Verifier") and several model compilers including C, C++, SystemC, Java, HTML, DocBook.

Description: 

Papyrus-xtUML is a tool which supplies the capability to edit, execute and translate xtUML models.  Executable, translatable UML (xtUML) is an extension to UML based upon the Shlaer-Mellor Method of Model-Driven Architecture (MDA), which supports a powerful approach to Model-Driven Development (MDD). Papyrus-xtUML provides the system design community with access to xtUML editing, execution and translation capabilities, along with a forum to advance the use of this methodology.

Papyrus-xtUML specializes in editing UML such that platform-independent models are precisely defined to enable interpretive execution (early test) from the first edit.  The execution technology is built into the editor and runs partial models without compilation.  Model compilers translate xtUML into target-specific code for various architectures in C, C++, SystemC, Java (and other programming languages) and into documentation in HTML, DocBook and other formats.

Papyrus-xtUML collaborates within the Papyrus Industrial Consortium and PolarSys, which aim to provide a cohesive solution set for modeling complex cyber-physical systems.

Why Here?: 

Papyrus-xtUML helps round out a portfolio of solutions within PolarSys and the Papyrus Industrial Consortium.  The synergy and collaboration between the technologies and communities strengthens each part.

  • xtUML brings a strong and committed user base to PolarSys and Papyrus IC (and to the Eclipse Foundation)
  • xtUML offers distinctive technologies around model execution and model translation
  • Eclipse, PolarSys and the Papyrus Industrial Consortium bring common technology to xtUML
  • Eclipse, PolarSys and the Papyrus Industrial Consortium represent ideal governance for xtUML
Initial Contribution: 

The initial contribution exists in repositories under https://github.com/xtuml.

This includes all of the following:

  • xtUML editor:  UML diagrams editing capability with constraints to precisely model semantics following the xtUML paradigm
  • Verifier:  xtUML interpretation and debug (start, stop, single-step, watch, etc) environment for running xtUML models
  • model compilers:  C, C++, SystemC, Java and documentation translators that convert xtUML into these various forms
  • models:  example and real application models developed in the open
  • documentation and training materials including written and video content

The entire initial contribution is licensed under Apache 2.0 and Creative Commons 1.0.

 

Project Scheduling: 

Papyrus-xtUML is presently released as version 5.3.4 on Eclipse Mars.

The initial contribution is ready now as far as is known.

One Fact Inc currently packages and tests a special release twice per year and sells support contracts around this release.  We would anticipate continuing with the same sort of plan.

Future Work: 

Upcoming features for Papyrus-xtUML fall in three categories:

  • issues that critical path users need fixed and are providing or paying to have provided
  • features and migration of models for users migrating to Papyrus-xtUML
  • convergence of Papyrus-xtUML with the Papyrus Platform and to be more cohesive with the Papyrus Industrial Consortium

Examples of these include:

  • action language editor with model-aware completion and assistance
  • migration from GEF to Papyrus Platform graphics
  • support of by-reference parameter passing between elements in the model
Source Repository Type: 
Parent Project: 
Committers: 
Interested Parties: 

PolarSys Working Group

Papyrus Industrial Consortium

Francis Bordeleau - Ericsson

Charles Rivet - Zeligsoft

Maximilian Koegel - EclipseSource

Bengt Kvarnstrom - Saab

Stefan Landemoo - Saab

Per Johnsson - Saab

Anders Eriksson - Saab

Yuki Tsuchitoi - Fuji-Xerox

Dr. Jan Köhnlein - TypeFox

EclEmma

Background: 

Test code coverage is important to ensure the stability, extensibility and maintainability of a code base. EclEmma provides the tooling to visualize code coverage in the Eclipse IDE. EclEmma is currently developed outside the Eclipse Foundation, which prevents it from being included by default in the Eclipse packages.

Scope: 

EclEmma is a Java code coverage tool that provides code coverage analysis directly in the Eclipse workbench.

Description: 

EclEmma is a free Java code coverage tool for Eclipse, available under the Eclipse Public License.

It brings code coverage analysis directly into the Eclipse workbench:

  • Fast develop/test cycle: Launches from within the workbench like JUnit test runs can directly be analyzed for code coverage.

  • Rich coverage analysis: Coverage results are immediately summarized and highlighted in the Java source code editors.

  • Non-invasive: EclEmma does not require modifying your projects or performing any other setup.

 

Why Here?: 

To streamline the development process, attract more contributors, and ensure that EclEmma remains relevant in the future. Also, to be able to include EclEmma in future EPP packages.

 

Initial Contribution: 

EclEmma plugin depends on

  • JaCoCo Java code coverage library, provided under the terms and conditions of the Eclipse Public License 1.0

  • ASM Java bytecode library, provided under the terms and conditions of the BSD License

 

Project Scheduling: 

Code is available and could be provided.

 

Future Work: 

EclEmma is currently feature complete and actively maintained. Future Java versions might require enhancements and improvements.

 

Source Repository Type: 
Parent Project: 
Committers: 
Marc R. Hoffmann

Scanning

Background: 

The Diamond Light Source facility would like to share code for scanning that can be reused at other facilities, and the Eclipse Foundation provides a great place to do this. The code already exists and is being used to drive a major scientific facility, so we would like a long-term lifecycle for it; the Eclipse Foundation helps to provide this.

Scope: 

The project will provide the ability to scan hardware and plot results. There are a number of core algorithms for scanning, which are common across scientific experiments. The scope of the project is to:

  1. Allow any hardware corresponding to a simple Java interface to be integrated with scanning algorithms.
  2. Provide a user interface for scanning: setting up scans, executing them and monitoring them. (Initially in SWT/JFace, but HTML5 or other front ends would later be in scope.)
  3. Provide a Python layer, CPython and Jython, which is able to drive the scanning algorithms.

The goal is to abstract scanning such that any scientific facility may reuse existing algorithms. 

Description: 

 

Introduction

Scientific facilities operating high-end hardware, for instance robots, motorized stages and detectors, have well-defined layers for integrating these devices. For instance, a call may be made to move a motor or expose a detector (similar to taking an image with a digital camera). Examples of these control layers are the EPICS framework and the TANGO framework. These frameworks are open source and in wide use at facilities around the globe. Facilities also require low-latency operation; for instance, coordinating a fast-acting motor with a detector may require specialized hardware. For people not familiar with this idea, one example is the Zebra box, which combines an ARM processor with an FPGA and can orchestrate hardware timing down to the nanosecond scale. There is also a device called TFG.


Detailed Design

Introduction

The project will be divided into a services layer providing the scanning features, an example user interface layer implemented in RCP/JFace (this UI will be used at Diamond Light Source, though users may choose another UI layer), and a CPython/Jython layer for easily scripting scans.


Server and Clients

The services layer may be operated on a separate server, with messages passed between client and server by a messaging system provided by the project. The scanning project does not prescribe exactly how services are used; however, if they are started in an OSGi container, declarative services will work. If they are not, some manual ‘wiring together’ might be required. One possible arrangement of the services for scanning is shown below. This arrangement is specific to Diamond Light Source and represents a possible deployment of the services layer, integrating it with analysis in DAWN (outside the scope of this project), in this case using a server. Different facilities would use the various services in this project in different ways.


Services Layer

IPointGeneratorService

The point generator service takes a model and provides a generator, IPointGenerator, which gives each nD point in the scan. For instance, in a mapping scan each point has an x,y stage value for the two-dimensional scan.

Image showing a spiral scan path over a 2D stage


Generators may be nested (without limit) to provide complex scans; for instance, it is possible to combine a grid with a temperature step scan. Generators may be added by extension point, so that scan paths not envisaged by the developer of the service can be added. Generators may also be added from CPython/Jython to allow users to define experiment-specific scan procedures, including logic to be executed at each point.
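As a rough illustration of the idea (not the actual IPointGenerator API; all names here are invented), nested generators can be sketched as composed iterators, with the outermost axis the slowest:

```python
from itertools import product

def step_points(start, stop, step):
    """Values from start to stop (inclusive) at a fixed step size."""
    n = int(round((stop - start) / step)) + 1
    return [start + i * step for i in range(n)]

def nest(*axes):
    """Nest axis generators: the first axis is the slowest, the last the fastest."""
    names = [name for name, _ in axes]
    for values in product(*(vals for _, vals in axes)):
        yield dict(zip(names, values))

# A 3x3 grid scan nested inside a temperature step scan: 27 points in total.
points = list(nest(("temperature", step_points(290, 300, 5)),
                   ("y", step_points(0, 1, 0.5)),
                   ("x", step_points(0, 1, 0.5))))
```

The real service adds models, extension points and scripted generators on top of this basic composition idea.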


IRunnableDeviceService

This service returns devices, IRunnableDevice, which conform to a state machine for driving scans. It is used to run a scan and is the core interface for scanning. It returns the top-level scan, which uses a point generator to define each scan point. Each device in the scan, for instance a 2D detector, a device driving EPICS Area Detector, or a Malcolm device, conforms to this interface and passes through the states defined below.


!include docs/style.iuml

state BlockStates {
    state NormalStates {
        Resetting --> Idle

        state Idle <<Rest>>
        Idle : Rest state
        Idle -right-> Configuring : Configure

        Configuring -right-> Ready

        state Ready <<Rest>>
        Ready : Rest state
        Ready -right-> PreRun : Run
        Ready --> Resetting : Reset
        Ready -down-> Rewinding : Rewind

        PreRun -right-> Running
        PreRun -down-> Rewinding : Pause

        Running -right-> PostRun
        Running -down-> Rewinding : Pause

        PostRun -left-> Ready
        PostRun -left-> Idle

        Rewinding -right-> Paused

        Paused -left-> Rewinding : Rewind
        Paused -up-> PreRun : Resume
    }

    NormalStates -down-> Aborting : Abort

    Aborting -left-> Aborted

    state Aborted <<Abort>>
    Aborted : Rest state
    Aborted -up-> Resetting : Reset
}

See http://pymalcolm.readthedocs.io/en/latest/arch/statemachine.html


AcquisitionDevice (IRunnableDevice)

When doing a CPU scan, the Java-based runnable device uses a thread pool to manage the scan. It overlaps a move with a detector readout to maximize the scan speed. So for one scan point we have:

 

_|....|________          run()          tell detector(s) to collect at the current position
______|.....|___         write()        tell detector(s) to write data
______|.............|___ setPosition()  move motor(s) to the next position
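This pipelining can be sketched as follows (a toy Python stand-in with invented names; the real AcquisitionDevice is Java and far more involved). The detector write is submitted to a thread pool so it overlaps the move to the next point, and is awaited before the next exposure:

```python
from concurrent.futures import ThreadPoolExecutor

log = []  # records the order of operations, for illustration

def run(point):          log.append(("run", point))    # expose detector
def write(point):        log.append(("write", point))  # write data to disk
def set_position(point): log.append(("move", point))   # move motor(s)

def acquire(points):
    """For each point: collect, then overlap the detector write with the next move."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        for i, p in enumerate(points):
            run(p)                             # collect at the current position
            w = pool.submit(write, p)          # write in the background...
            if i + 1 < len(points):
                set_position(points[i + 1])    # ...while moving to the next point
            w.result()                         # write must finish before the next exposure

acquire([0, 1, 2])
```

Each write overlaps only the following move, matching the timing diagram above, so slow detector I/O does not serialize the whole scan.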


IDeviceConnectorService

Provides a connection to devices which are to be scanned. The devices take part in the scan by setting/getting values, so they represent things like motors, temperature controllers and goniometer angles. Each device connected to must conform to a simple interface called IScannable.


IScannable<T> … {
    public T getPosition() throws Exception;
    public void setPosition(T value, IPosition position) throws Exception;
    ...
}


IEventService

An event system is required to receive scan requests, notify the user of scan progress and maintain queues. The event service provides the following models:

  • Publish/Subscribe   (events like scan finished)

  • Submit/Consume    (queues like those for running scans)

  • Request/Response (ask the server a question such as how many detectors)

This event system is also used to manage analysis algorithms which are outside the scope of the scanning project.


IMalcolmService

To make it easy to talk to any hardware, a middleware layer called ‘Malcolm’ has been designed, which bridges between hardware and the runnable device interface/state machine. Malcolm itself is outside the scope of the scanning project. Malcolm devices implementing IRunnableDevice are made available by using the IMalcolmService internally to the IRunnableDeviceService. (Malcolm is similar in this respect to TANGO, and it may be desirable in future to allow TANGO devices to be scanned and/or to allow the scanning device to conform to a TANGO device.)


NeXus Builder Service

The NexusBuilderFactory is used in scanning to write valid NeXus HDF5 files. NeXus is a self-describing binary file format used at many facilities to record large numerical data efficiently. The DAWN product can read any correctly written NeXus file and provides a large armoury of tools with which to analyse data, for example running fast analysis pipelines on clusters. Devices can be integrated with NeXus by implementing a declarative interface called INexusDevice.


Other Services

There are several other services for connecting to python, running scripts etc. which are included in the project.


User Interface Layer

The project provides user interface parts for visualizing the queues of scans. It also provides a simple scan builder perspective, called ‘Scanning’, which allows a scan to be created and run. This perspective works without configuration and executes mock devices provided in the project examples. The user interface allows a scan to be defined, configured with available devices, and then submitted to the scanning service using the event service. This allows scans to be received on a server, if one is implemented; this may be desirable when the client and acquisition are separate, for instance in the case of a remote thick client or a web client.



There is also an example called ‘X-Ray Centering’ which shows a simple submit/consume using the scanning project. This example requires DAWN libraries to be available because it uses the DAWN plotting system as a service. In the future it may be desirable to remove this dependency, as the dawnsci project intends to release plotting separately.


Scripting Layer

The scripting layer is intended to provide a Python API which is easy to use and drives the runnable device service. It can submit scans to the scan queue or run them directly. On the Java side of the service it uses the same ScanRequest object as the user interface. The docstring of the mscan method, shown below, defines how it works.


Usage:

mscan(scan model(s), detector model(s))


   A simple usage of this function is as follows:

   > mscan(step(my_scannable, 0, 10, 1), det=mandelbrot(0.1))


   The above invocation says "please perform a mapping scan over my scannable

   from 0 to 10 with step size 1, collecting data from the 'Mandelbrot'

   detector with an exposure time of 0.1 seconds at each step".


   You can specify multiple detectors with a list (square brackets):

   > mscan(..., det=[mandelbrot(0.1), another_detector(0.4)])


   You can specify a scannable or list of scannables to monitor:

   > mscan(..., mon=my_scannable, ...)  # or:

   > mscan(..., mon=[my_scannable, another_scannable], ...)


   You can embed one scan path inside another to create a compound scan path:

   > mscan([step(s, 0, 10, 1), step(f, 1, 5, 1)], ...)


   The above invocation says "for each point from 0 to 10 on my slow axis, do

   a scan from 1 to 5 on my fast axis". In fact, for the above case, a grid-

   type scan would be more idiomatic:

   > mscan(grid(axes=(f, s), step=(1, 1), origin=(0, 0), size=(10, 4)), ...)


   By default, this function will submit the scan request to a queue and

   return immediately. You may override this behaviour with the "now" and

   "block" keywords:

   > # Don't return until the scan is complete.

   > mscan(..., ..., block=True)


   > # Skip the queue and run the scan now (but don't wait for completion).

   > mscan(..., ..., now=True)


   > # Skip the queue and return once the scan is complete.

   > # For some beamlines now=True, block=True may need to be defaulted in localstation.

   > mscan(..., ..., now=True, block=True)


NeXus Writing

It is important to ensure that writing of NeXus files is performant. Diamond Light Source has partially funded the new SWMR (“swimmer”, single-writer/multiple-reader) functionality in HDF5, which ensures that one process may write quickly to a file while other processes read from the same binary file. This functionality is in HDF5 library version 1.10 and has been tested with the scanning project. The project has been designed so that devices write correct NeXus records by implementing a single interface, i.e. it should be straightforward to add devices. Other file formats are out of scope for the initial phase of the project but would be desirable in the future, depending on who is willing to be involved with the project.


Current Deployment

The project is deployed at Diamond Light Source and is intended to be the backbone of the next generation of data acquisition at the facility. A screenshot of a user interface developed using the project is included below to show how scanning can be reused. The user interface is for mapping experiments; these experiments per se are outside the scope of the scanning project and their user interfaces are not included, but the controller and model from scanning are used to deliver the mapping functionality at Diamond Light Source.



Example of polygonal scanning in a mapping experiment

 
Why Here?: 

The Eclipse Foundation is the right place to collaborate for scanning because of the Science Working Group. This group has attracted several universities and software companies. The Eclipse Foundation gives the most opportunity for discovering new projects that scanning can make use of and add value to the scientists working at or visiting Diamond Light Source.

Initial Contribution: 
The current bundles are available on GitHub at https://github.com/DiamondLightSource/daq-eclipse. (Those starting with uk.ac.diamond will not be part of the initial contribution, if you are looking at the current state there.)
 
org.eclipse.scanning.example.feature
org.eclipse.scanning.feature
org.eclipse.scanning.malcolm.feature
org.eclipse.scanning.releng
org.eclipse.scanning.repository
org.eclipse.scanning.target.platform
org.eclipse.scanning.ui.feature
org.eclipse.scanning.api
org.eclipse.scanning.command
org.eclipse.scanning.event
org.eclipse.scanning.event.ui
org.eclipse.scanning.example
org.eclipse.scanning.example.xcen
org.eclipse.scanning.example.xcen.test
org.eclipse.scanning.example.xcen.ui
org.eclipse.scanning.malcolm.core
org.eclipse.scanning.points
org.eclipse.scanning.sequencer
org.eclipse.scanning.server
org.eclipse.scanning.test
Project Scheduling: 

Scanning will have releases scheduled to be compatible with the shutdown phases of the facilities adopting the project, to be negotiated with the committers. If many facilities adopt the project, it may serve them better to release on the Eclipse release train, which is independent. This should be decided once the committers are engaged with the incubation project.

Future Work: 

Initial contribution 2016, first releases 2017.

In future, we plan to add new scanning algorithms to allow automation not only of mapping experiments but also of macromolecular crystallography, in particular a new technique called VMXi.

We would like to present the project at conferences, specifically NoBUGS and ICALEPCS.

Source Repository Type: 
Parent Project: 
Project Leads: 
Committers: 
Eric Berryman
Robert Walton
Mark Booth
Interested Parties: 

Eclipse Science Working Group members, science.eclipse.org.

Several facilities using EPICS were present at the ICALEPCS 2015 conference. I will approach these facilities directly to suggest committers for the project.

Eclipse Kapua

Background: 

The Eclipse Kapua project is proposed as an open source incubator project under the Eclipse IoT Project.

This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope. We solicit additional participation and input from the Eclipse community. Please send all feedback to the Eclipse Proposals Forum.

With analysts predicting millions of connected devices, there is a growing need for solutions that manage edge devices and integrate them with the enterprise IT infrastructure. This need is even more evident in the convergence between Operational Technology (OT) and Information Technology (IT). OT supports physical value creation and manufacturing processes; it comprises the devices, sensors and software necessary to control and monitor plants, equipment and, in general, company assets or products. Information Technology (IT), on the other hand, combines all the technologies necessary for information processing.

 

Internet of Things solutions create an integration bridge between OT and IT by connecting a company’s products and by integrating processes. They provide generic integration platforms between the complex and fragmented world of IoT devices and the enterprise IT infrastructure.

Scope: 

 

The goal of the Eclipse Kapua project is to provide an IoT integration platform with the following high-level requirements:

  1. The platform manages the connectivity for IoT devices and IoT gateways through a different set of protocols. Initial support will be offered to established IoT protocols like MQTT. Other protocols like AMQP, HTTP, and CoAP will be added over time. The connectivity layer is also responsible for managing device authentication and authorization.
  2. The platform manages the devices on the edge. Device management offers the ability to introspect device configuration, update device applications and firmware, and control the device remotely. The IoT platform exposes an open contract towards the target device being managed with no assumption on the device software stack. The device management should evolve to adopt emerging standard device management protocols like LWM2M.
  3. IoT devices can collect large amounts of telemetry data. The IoT platform enables data pipelines on such data; these pipelines can offer data archival for dashboards or business intelligence applications and can enable real-time analytics and business rules. A valuable requirement is flexible and configurable data integration routes, offering data storage options to collect the incoming data and make it available to upstream enterprise applications.
  4. The IoT platform relies on solid foundations. In particular, it provides multi-tenant account management, user management, permissions and roles.
  5. The IoT platform is fully programmable via RESTful web services API. A web-based administration console for a device operator is desirable.
  6. The IoT platform can be deployed either in the Cloud or ‘on premise’. Its packaging should  allow for flexible deployment options.
Description: 

The following diagram provides a functional architecture of the Eclipse Kapua project.

Device Connectivity

The connectivity of the devices is managed through a multi-protocol message broker. In the initial contribution, the protocol for the device connectivity will be the IoT protocol MQTT. The broker supports other protocols including AMQP and WebSockets for application integration.

The device connectivity module is responsible for authenticating connections, enforcing the appropriate authorization – for example in the topic namespace – and maintaining a Device Registry. The Device Registry stores the device profile, the device connection status and the device connection log. It also enables device organization through custom attributes and tags.
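Authorization in the topic namespace can, for example, restrict a device to publishing beneath its own account and client id. The following is a toy sketch of that idea (the account/client topic layout and function are illustrative; Kapua's actual permission model is richer):

```python
def may_publish(account, client_id, topic):
    """Toy check: a device may only publish beneath '<account>/<client_id>/...'."""
    return topic.startswith(f"{account}/{client_id}/")

# A device connected as client "gw-1" in account "acme":
ok = may_publish("acme", "gw-1", "acme/gw-1/sensors/temperature")       # own namespace
denied = may_publish("acme", "gw-1", "acme/gw-2/sensors/temperature")   # another client's
```

In a real broker this check would be evaluated against ACLs derived from the connection's credentials, not from the topic string alone.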

Message Routing

The stream of data published by the devices may have different consumers. Certain messages, like command and control messages, are meant to be consumed by the Device Management component; other messages, like telemetry data, are meant to be archived in the IoT platform or redirected to other systems. The Message Routing component allows for flexible handling of message streams, avoiding hard-coded behaviors through configurable message routes.

Device Management

Through the Device Management component, the IoT platform can perform remote operations on the connected devices. The IoT platform exposes an open contract towards the devices being managed, with no assumption on the device software stack. In the initial contribution, the device management contract is based on an open application protocol over MQTT. This protocol is already implemented by the Eclipse Kura project. With this protocol, the IoT platform can:

  • Introspect and manage the device configuration
  • Manage the device services including service start and stop operations
  • Manage the device applications including application install, update, and remove
  • Execute remote OS commands on the device
  • Get and set device attributes and resources
  • Provision initial configuration of the devices

 

In its evolution and future community contributions, Eclipse Kapua may adopt additional device management protocols, like the emerging LWM2M standard.

Data Management

Eclipse Kapua can archive the telemetry data sent by the devices into a persistent storage for application retrieval. A reference message payload is defined which allows for a timestamp, a geo position, strongly typed message headers and an opaque message body. The chosen encoding is based on an open Google Protocol Buffers grammar.
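The shape of such a reference payload can be sketched as follows (an illustrative Python model only; the actual encoding is the Google Protocol Buffers grammar mentioned above, and all field names here are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Union

# Strongly typed header/metric values, per the reference payload description.
Metric = Union[bool, int, float, str, bytes]

@dataclass
class Position:
    latitude: float
    longitude: float
    altitude: Optional[float] = None

@dataclass
class TelemetryPayload:
    """Sketch of the reference payload: timestamp, geo position,
    strongly typed metrics, and an opaque binary body."""
    timestamp_ms: int
    position: Optional[Position] = None
    metrics: Dict[str, Metric] = field(default_factory=dict)
    body: bytes = b""  # opaque message body

msg = TelemetryPayload(
    timestamp_ms=1_464_000_000_000,
    position=Position(latitude=45.07, longitude=7.69),
    metrics={"temperature": 21.5, "door_open": False},
)
```

Keeping metrics strongly typed (rather than opaque) is what allows the NoSQL store to index individual message headers, as described below for the initial contribution.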

In the initial contribution, a NoSQL data store is used to allow for flexible indexing of the telemetry messages. Incoming messages are stored and indexed by timestamp, topic, and originating asset. The NoSQL store also allows for indexing of the message headers.

Data Management also keeps a Data Registry, which maintains the topics and the metrics that have received incoming traffic.

Security

A foundation layer maintains the security aspects of the IoT platform, such as the management of tenants, accounts, and users. The account model supports a hierarchical access control structure. Following Role-Based Access Control (RBAC), user identities can be defined and associated with one or more permissions, enforcing the principle of least privilege. Devices connect to the platform using the credentials of one of these user identities or through SSL authentication.
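A minimal RBAC sketch makes the least-privilege idea concrete. The role names, permission strings, and user identities below are hypothetical and are not drawn from Kapua's security API.

```python
# Roles bundle permissions; users hold roles; access is granted only when
# some role of the user carries the required permission.
ROLES = {
    "device-operator": {"device:read", "device:command"},
    "data-viewer": {"data:read"},
}

USERS = {
    "ops-user": {"device-operator"},
    "dashboard-user": {"data-viewer"},
}

def has_permission(user: str, permission: str) -> bool:
    """True only if one of the user's roles grants the permission."""
    return any(permission in ROLES[r] for r in USERS.get(user, ()))

print(has_permission("ops-user", "device:command"))        # True
print(has_permission("dashboard-user", "device:command"))  # False
```

Because a device also connects with one of these identities, the same check naturally limits what a compromised device credential could do.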

Application Integration

For integration with existing applications, Eclipse Kapua offers a modern web services API based on Representational State Transfer (REST). The REST API exposes all of the platform's functionality, including device management and data management. It also offers a "bridge" to the MQTT broker, enabling the routing of commands from applications to devices without a dedicated connection to the message broker. Technologies such as REST, Comet, and WebSockets are included, allowing real-time display of data published by the devices in web pages and mobile dashboards.
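As a sketch of how an application might query archived telemetry over such an API, the snippet below only composes the request URL; the base URL, path, and parameter names are invented for illustration and are not Kapua's published REST contract.

```python
from urllib.parse import urlencode

# Hypothetical endpoint shape for retrieving archived telemetry messages.
BASE_URL = "https://kapua.example.com/api/v1"

def messages_query_url(account: str, topic: str, limit: int = 50) -> str:
    """Compose a query URL for archived messages on a topic filter."""
    query = urlencode({"topic": topic, "limit": limit})
    return f"{BASE_URL}/{account}/messages?{query}"

url = messages_query_url("acme-corp", "acme-corp/gateway-01/telemetry/+")
# An application would then issue an authenticated GET against this URL,
# e.g. requests.get(url, auth=(user, password)).
```

The point is that a pure HTTP client, with no MQTT connection of its own, can retrieve device data and (via the bridge) send commands.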

Administration Console

Eclipse Kapua features a web-based administration console for performing all device and data management operations. A screenshot of the administration console is shown below.

Why Here?: 

The Eclipse IoT Industry Working Group has successfully incubated a compelling set of IoT technologies. Eclipse Kapua has strong relationships with several projects within Eclipse IoT:

  • Eclipse Kura – the Java/OSGi based framework for IoT Gateways. Eclipse Kapua natively manages gateways powered by Eclipse Kura
  • Eclipse Hono – a distributed and highly scalable messaging infrastructure that can act as a front-end messaging service in front of Eclipse Kapua
  • Eclipse hawkBit – an infrastructure for artifact repository and device software distribution
  • Eclipse Leshan – implementation of the LWM2M protocol
  • Eclipse Paho – implementation of the MQTT protocol

Eclipse Kapua adds to the overall Eclipse IoT offering, which now encompasses an open source, end-to-end solution for the development, deployment, and management of IoT solutions. 

The richness of the solution opens up new possibilities.

For example, it would be interesting to explore the Eclipse Marketplace as a store for end-to-end IoT applications. IoT applications generally have an edge part and a server (cloud) part; an IoT App package could, for instance, contain an Eclipse Kura deployment package and a web application developed over the Eclipse Kapua REST APIs. An IoT App could then be modeled so that its provisioning is fully automated, including the distribution of the edge package to a Kura runtime on a remote gateway and the deployment of the web application.

Initial Contribution: 

The initial contribution of Eclipse Kapua will be a large subset of the components described above; its code base will come from Eurotech's Everyware Cloud product. In particular:

  • Eclipse Kapua source code and build system for its services, including the web-based administration UI and the REST APIs
  • A fully functional IoT platform comprising the components described above
  • An Eclipse Kapua Application Developer’s guide
  • Device Connectivity for the MQTT protocol
  • Device Management over the MQTT protocol
  • Data Management for archival of telemetry data
  • Eclipse Kapua Docker images for flexible deployment

Project Scheduling: 

Initial contribution expected: 09/2016

First working build expected: 10/2016

Future Work: 
  • Allow for easier pluggability of additional connectivity and device management protocols
  • Integration with data visualization dashboards for quick representation of telemetry data
  • Improvements based on community feedback
  • Add a rules engine
Source Repository Type: 
Parent Project: 
Project Leads: 
Committers: 
Federico Baldo
Dejan Bosanac
Henryk Konsek
Stefano Morson
Diego Rughetti
Interested Parties: 
  • Eurotech
  • Red Hat
  • Bosch Software Innovations GmbH