
Part 2 - Using AI to Consistently Deliver Successful Projects and Build Teams

Project Breakdown

To build a solid foundation for a project's execution and increase its chances of success, we have to know which work packages need to be combined and the stages they should follow.

By breaking this process down into the necessary low-level tasks, we uncover the technical and non-technical requirements associated with each of them. Most of the required expertise comes into play in implementing the currently validated best practices.

Today, this means many days spent researching and consulting, even for a founding team that already possesses extensive expert knowledge and industry experience.

This lengthy process duplicates the information-collection efforts of disparate teams as they attempt to deliver on similar goals from within their isolated projects.

Can We Automate the Guesswork?

More often than not, these parts are mission-critical only in the sense that a sub-optimal solution can throw the project off track. The real value of a project’s idea lies mostly outside the scope of implementing existing technical solutions for a given product.

DREAM Builder provides a complete set of micro-service-structured analytic functions that deliver A.I.-generated solutions to relieve the user of much of this guesswork.

This layer provides the services that underlie the project’s descriptions, as well as the planning, management, and continuous development over the project's lifetime within DREAM.

To enable users to quickly iterate over the necessary units of work, DREAM Builder provides a flexible service layer that allows for dynamic adjustment of the interface’s response based on real-time learning and analytics of the current user’s context and requirements.

Learning Successful Ways to Solve a Given Task

In order to align the required work packages with a concrete execution schedule, we first need to understand which tasks are connected to those packages. The result is a list of associated tasks that represents the detailed scope of the project.

Each task has to be understood in the context of the expert requirements needed to execute it, which allows us to group tasks into role profiles that can eventually be matched with an available expert from the pool.

When analyzing the tasks needed to deliver on the set scope of work, we look at the work that is relevant to and associated with the set of “building blocks” we compose to describe the overall project.

Template Model Hierarchy

To enable our system to flexibly model highly specific project types, we use a hierarchical system of modules defined for each domain we work in.



These build up from “ability modules” (small, concise building blocks) to “skill modules” (sets of dependent ability modules combined into a larger, more complex deliverable).
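
To make the hierarchy concrete, here is a minimal sketch of how such a template model could be represented; the class and field names are our illustrative assumptions, not DREAM's actual data model:

```python
# Illustrative sketch of the template model hierarchy; names are assumptions.
from dataclasses import dataclass, field

@dataclass
class AbilityModule:
    """A small, concise building block, e.g. designing one REST endpoint."""
    name: str
    tasks: list[str] = field(default_factory=list)          # associated low-level tasks
    requirements: list[str] = field(default_factory=list)   # technical / personal / educational

@dataclass
class SkillModule:
    """Combines a set of dependent ability modules into a larger deliverable."""
    name: str
    abilities: list[AbilityModule] = field(default_factory=list)

@dataclass
class ProjectTemplate:
    """A domain-specific template assembled from skill modules."""
    domain: str
    skills: list[SkillModule] = field(default_factory=list)

    def all_requirements(self) -> set[str]:
        # Walk the hierarchy to collect the requirements behind every task.
        return {req for skill in self.skills
                    for ability in skill.abilities
                    for req in ability.requirements}
```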

To arrive at an accurate understanding of the requirements a team of experts must meet, we explore the dependencies between the project’s parts and then reverse the process: we are now mainly interested in the tasks associated with the ability modules, which we then use to create the project’s building blocks.

These tasks are then associated with a set of technical, personal, educational and experience requirements, which are used to predict the approximate profile of the person who should be assigned to each task.

The tasks are later recombined into clusters of closely related professional activities that are likely to be executed in combination (because of the minimal variation in their underlying requirements).
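
One way to form such clusters is to encode each task's requirements as a binary vector and merge tasks whose vectors barely differ. The following hypothetical sketch uses scikit-learn; the tasks, requirement tags, and distance threshold are invented for illustration:

```python
# Hypothetical sketch: group tasks into role profiles by requirement similarity.
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import MultiLabelBinarizer

tasks = {
    "implement smart contract": {"solidity", "testing", "security"},
    "audit smart contract":     {"solidity", "security"},
    "design landing page":      {"ux", "figma"},
    "build landing page":       {"ux", "html", "css"},
}

# One binary column per distinct requirement tag.
vectors = MultiLabelBinarizer().fit_transform(tasks.values())

# Merge tasks while the variation in underlying requirements stays minimal.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.5, linkage="average"
).fit_predict(vectors)

for task, label in zip(tasks, labels):
    print(f"role profile {label}: {task}")
```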

Learning Task Requirements From the Community

Our solution for speeding up this process of learning the community’s best practices for solving specific problems (using given sets of technologies and processes) was to build on the existing solution that our CTO Frank Fichtenmueller created for his A.I. HR startup, which successfully completed the high-tech incubator program INITS.

Based on a search engine capable of aggregating information about the state of labor markets, and a data processing pipeline built on a mixture of attention-based LSTM architectures, we can identify the current scope of a task in a given domain and the technical solutions widely adopted to deliver on it.
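
The post does not spell out the exact model, so purely as a point of reference, here is a generic sketch of an attention-based LSTM encoder in PyTorch; the layer sizes, pooling scheme, and classification head are illustrative assumptions rather than DREAM's production architecture:

```python
# Generic attention-based LSTM text encoder (illustrative, not DREAM's model).
import torch
import torch.nn as nn

class AttentiveLSTMEncoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 hidden_dim: int = 256, num_labels: int = 10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # A learned scorer weights each timestep of the LSTM output.
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        self.classify = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)                # (batch, seq, embed)
        states, _ = self.lstm(x)                 # (batch, seq, 2 * hidden)
        weights = torch.softmax(self.attn_score(states), dim=1)
        pooled = (weights * states).sum(dim=1)   # attention-weighted summary
        return self.classify(pooled)             # e.g. task-scope labels
```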

The solution involves large-scale analysis of public social media interactions around identified key opinion leaders, automatic analysis of industry-relevant conference proceedings, document analysis of research papers, and requirement analysis of job descriptions, all integrated into deep-learning-based expressive ontologies that allow complex inferences.

This allows us to iteratively feed popular solutions back to the user during the planning process (within the current understanding of the project’s scope). We then generate automatic interactions through our intent-based NLU engine, collecting more data from the user via generated questions that better inform the breakdown process. Here, we use probabilistic network models to decide which additional information will maximize the predictive power of our system, keeping the required user input to a minimum.
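
To illustrate the selection criterion, the sketch below scores a candidate question by its expected information gain, i.e. how much the answer is expected to shrink our uncertainty about the project type. The probability tables are invented stand-ins for the learned probabilistic network:

```python
# Hypothetical question selection by expected information gain.
import math

def entropy(dist: dict[str, float]) -> float:
    """Shannon entropy of a distribution over project-type labels."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(prior: dict[str, float],
                       likelihood: dict[str, dict[str, float]]) -> float:
    """Expected entropy reduction from one question.

    likelihood[answer][label] = P(answer | label); an illustrative stand-in
    for the probabilistic network mentioned above.
    """
    gain = entropy(prior)
    for per_label in likelihood.values():
        p_answer = sum(prior[l] * per_label[l] for l in prior)
        if p_answer == 0:
            continue
        posterior = {l: prior[l] * per_label[l] / p_answer for l in prior}
        gain -= p_answer * entropy(posterior)
    return gain

def next_question(prior: dict[str, float],
                  questions: dict[str, dict[str, dict[str, float]]]) -> str:
    """Ask the question expected to maximize predictive power per answer."""
    return max(questions, key=lambda q: expected_info_gain(prior, questions[q]))
```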

Solution Architecture

The service and management layer within DREAM Builder allows access along multiple story lines. As the user iteratively works on the initial scoping of the project within the DREAM Planner, it requests the information it needs to assess the current question, then creates a project description along with insights and recommendations for the next steps to take in the project.


1. Service API

  • Unified API for decoupled access to relevant system functionalities for the UI components
  • Provides context and status management for relevant process data that must stay aligned across multiple interface components (chatbot / web service / support interfaces), as sketched below
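
As a rough illustration of such a unified API, here is a minimal FastAPI sketch; the routes, payload shape, and in-memory store are our assumptions, not DREAM's actual interface:

```python
# Minimal sketch of a unified service API with shared context management.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Service API (sketch)")

class ContextUpdate(BaseModel):
    session_id: str
    component: str   # e.g. "chatbot", "web", "support"
    state: dict

# In-memory stand-in for shared context / status management.
CONTEXT: dict[str, dict] = {}

@app.post("/context")
def update_context(update: ContextUpdate) -> dict:
    """Keep process data aligned across interface components."""
    CONTEXT.setdefault(update.session_id, {})[update.component] = update.state
    return {"ok": True}

@app.get("/context/{session_id}")
def read_context(session_id: str) -> dict:
    """Every UI component reads the same shared context."""
    return CONTEXT.get(session_id, {})
```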

2. Incoming Analytics API

  • Unifies the analytics service interface into a single module for easier management
  • Provides entry points for activity logging data from the UI to be sent to the backend
  • Provides functions that can be easily called from the UI to manage data transfer
  • Function endpoints are optimized analytics streams with built-in pre-processing pipelines based on the type of logging data presented
  • Handles the logic of data qualification and data-specific preprocessing / aggregation across multiple concurrent sources of data (for interaction tracking in real time), as sketched below
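
A small sketch of how such type-based preprocessing could be wired up; the event types and the fields they keep are illustrative assumptions:

```python
# Illustrative dispatch of incoming logging data to type-specific pipelines.
from typing import Callable

PIPELINES: dict[str, Callable[[dict], dict]] = {}

def pipeline(event_type: str):
    """Register a preprocessing pipeline for one type of logging data."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        PIPELINES[event_type] = fn
        return fn
    return register

@pipeline("click")
def preprocess_click(event: dict) -> dict:
    # Keep only the fields the analytics stream needs downstream.
    return {"ts": event["timestamp"], "target": event["element_id"]}

@pipeline("page_view")
def preprocess_page_view(event: dict) -> dict:
    return {"ts": event["timestamp"], "path": event["url"]}

def ingest(event: dict) -> dict:
    """Route each UI event to the pipeline matching its type."""
    return PIPELINES[event["type"]](event)
```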

3. Outgoing Analytics API

  • Provides a decoupled, unified access point for UI functions to access scope and context metadata about specific entities
  • Endpoints used to request analytical data from the system in a highly performant fashion, for rapid real-time adjustment of the UI
  • Endpoint for requests for data about the current process state
  • Ensures a real-time, single source of truth about context relevant to UI processes, to drive reactive interfaces simultaneously across multiple UIs

4. Scheduler

  • Caching function to reduce load on the low-level service API (see the sketch below)
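
In its simplest form, that caching role could look like the following time-to-live memoizer; the TTL value and keying by positional arguments are illustrative choices:

```python
# Sketch of a TTL cache in front of the low-level service API.
import time
from functools import wraps

def ttl_cache(seconds: float):
    def decorate(fn):
        store: dict[tuple, tuple[float, object]] = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < seconds:
                return hit[1]        # fresh cached answer, no service call
            result = fn(*args)
            store[args] = (now, result)
            return result
        return wrapper
    return decorate

@ttl_cache(seconds=5.0)
def low_level_service_call(entity_id: str) -> dict:
    return {"entity": entity_id}  # stand-in for an expensive service request
```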

5. Data Preprocessing & Qualification Pipeline

  • Data-type-specific preprocessing functionality

5.1 Data Quality Control

  • Function-specific tests on data quality and integrity against API-specified expectations for data input
  • Callbacks that inform the process of data upgrades and trigger dynamic collection of additional data when tests are not passed, as sketched below
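
One way to realize those quality gates is a set of check functions plus a failure callback that triggers collection of additional data; the checks and callback below are invented for illustration:

```python
# Sketch of data qualification with failure callbacks.
from typing import Callable

Check = Callable[[dict], bool]

def qualify(record: dict, checks: list[Check],
            on_failure: Callable[[dict, Check], None]) -> bool:
    """Run API-specified expectations; request more data on any failure."""
    ok = True
    for check in checks:
        if not check(record):
            on_failure(record, check)   # e.g. dynamically ask for the missing field
            ok = False
    return ok

def has_timestamp(record: dict) -> bool:
    return "timestamp" in record

def request_more_data(record: dict, failed: Check) -> None:
    print(f"requesting additional data: {failed.__name__} failed for {record}")

qualify({"element_id": "btn-1"}, [has_timestamp], request_more_data)
```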

5.2 Data Extraction Processes

  • File-type-specific data extraction procedures (PDF, Excel, Word, etc.), as sketched below
  • Content-type-specific data extraction procedures (CV, requirements document, financial statement, team listing, etc.)
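
The file-type branch of this step amounts to dispatch on the file extension; in the sketch below the PDF and Word handlers are stubbed, since the concrete extraction libraries are an implementation detail the post does not specify:

```python
# Sketch of file-type-specific extraction dispatch; PDF/Word handlers are stubs.
from pathlib import Path

def extract_pdf(path: Path) -> str:
    raise NotImplementedError("e.g. a pdfminer / PyPDF based routine")

def extract_docx(path: Path) -> str:
    raise NotImplementedError("e.g. a python-docx based routine")

def extract_text(path: Path) -> str:
    handlers = {
        ".txt": lambda p: p.read_text(encoding="utf-8"),
        ".csv": lambda p: p.read_text(encoding="utf-8"),
        ".pdf": extract_pdf,
        ".docx": extract_docx,
    }
    try:
        return handlers[path.suffix.lower()](path)
    except KeyError:
        raise ValueError(f"unsupported file type: {path.suffix}") from None
```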

6. Entity Specific Analytics Services

  • Monitors continuous streams of data to update information about entities of interest in real time (see the sketch below)
  • Provides a UI-facing analytics API interface so that updates on the entities become directly available in the UI layer for adapting behavior
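
A bare-bones version of that monitoring loop might fold a stream of events into per-entity state and notify the UI-facing API whenever something changes; the event shape and notify hook are illustrative:

```python
# Sketch of entity-level stream monitoring with UI notification on change.
from collections import defaultdict
from typing import Callable, Iterable

def monitor(events: Iterable[dict],
            notify_ui: Callable[[str, dict], None]) -> dict[str, dict]:
    """Update entities of interest in real time as events arrive."""
    entities: dict[str, dict] = defaultdict(dict)
    for event in events:
        entity = entities[event["entity_id"]]
        if entity.get(event["field"]) != event["value"]:
            entity[event["field"]] = event["value"]
            notify_ui(event["entity_id"], dict(entity))  # push update to UI layer
    return entities
```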

7. Low Level Service API

  • Provides a unified, decoupled service layer to subsystems within DREAM

To continue our exploration of how DREAM will take project management and team-building to the next level, please click here to read part three of our series.


The DREAM platform isn’t just another untested beta program on a white paper… It’s live and being used right now to hire blockchain professionals. The token sale will enable DREAM’s innovative team to take DREAM to the next level by integrating A.I. and incorporating our platform token.
