Inside Rebaca’s Software Analytics


The software work groups at Rebaca produce a huge amount of data as a natural by-product of their execution. The data produced varies from highly structured, such as software source code lines and run-time software logs, to completely unstructured, such as documents and reports. This sea of data, both structured and unstructured, can be explored and analyzed to discover insightful and actionable information, which can then be used to control the various tasks of the software work groups. Insightful information conveys a valuable understanding of the tasks being performed and can be obtained mostly by inspecting the raw data. Actionable information conveys specific roadblocks in the execution, which the work group can then act on, and requires deeper analysis of the raw data. Today more and more software engineering groups – from product developers ( ) to game developers ( ) to version control service providers ( ) – are embracing Software Analytics. Rebaca started its Software Analytics journey a few years ago, and today it relies on a custom-developed online software analytics system for the early detection and removal of roadblocks in its software delivery engine, which in turn helps it significantly reduce customer complaints.

Rebaca’s Software Analytics System
Rebaca’s Software Analytics System comprises the following elements:

Data Entry Interfaces: These interfaces are used by the members of the various work groups to enter data about the various execution engines, such as Delivery, KYC and COE.
Data Storage: This element stores the data.
Data Exploration Interfaces: These interfaces are used by the members of the QMG (Quality Management Group) to explore the data.
Key Capability Indicators: These elements provide insightful information that is used by the members of the various work groups to monitor the execution engines.

Data Entry Interfaces
The following interfaces are mainly used to enter the planning data that controls the software execution engines, such as progress plans and risk plans. These interfaces are also used to enter reporting data on the execution engines, such as MOMs, status reports, validation reports and knowledge articles.

PMS, custom developed on top of Atlassian JIRA, is an online system used to enter and update the progress planning data. These data structures are inspired by Agile. Today an increasing number of Rebaca’s customers – including Arbor, Empirix, Yes Video and Cisco – use similar tools to manage their progress. See the Atlassian documentation for more information on JIRA. Examples of the data structures entered via the PMS are: SPOT (Story plus Output and Task), Deadline, Bucket, Iteration, Work Log, Ice Log, Back Log and Front Log.
Confluence, a popular enterprise wiki and collaboration platform from Atlassian, is used to enter and update risk planning data. It is also used to enter reporting data. See the Atlassian documentation for more information on Confluence. Some of the risk planning data structures entered via Confluence are: Issue, Risk, Dependency and NCR. Some of the reporting data structures entered via this tool are: Test Report, Customer MOM and Customer Weekly Status Report.
OAS, a custom-developed online system, is used to enter and update billing and leave data.
Popular SCA (static code analysis) tools, integrated with SVN, VSS or similar, are used to enter and update the bugs in the source code of the software products being worked on by the engineering work groups. Examples of adopted SCA tools are: YASCA, SPLINT and Visual Studio Code Analyser.
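To make the PMS data structures concrete, the following is a minimal sketch of how a SPOT with its work logs might be modelled. The field names and methods here are illustrative assumptions – the actual PMS schema is not documented in this article.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# NOTE: field names below are hypothetical, chosen to illustrate the
# SPOT (Story plus Output and Task) concept; they are not the PMS schema.

@dataclass
class WorkLog:
    member: str        # who logged the effort
    logged_on: date    # when the effort was logged
    hours: float       # effort in hours

@dataclass
class Spot:
    """A SPOT: a Story plus its Output and Tasks, with logged effort."""
    story: str
    output: str
    tasks: list = field(default_factory=list)
    deadline: Optional[date] = None
    work_logs: list = field(default_factory=list)

    def hours_spent(self) -> float:
        # Total effort recorded against this SPOT across all work logs.
        return sum(w.hours for w in self.work_logs)
```

A structure like this makes the weekly effort roll-ups behind the progress reports straightforward to compute.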

See IT Facility Access Information for the URLs of the interfaces described in this document.

Data Storage and Exploration Interfaces
The software execution data is stored in the relational databases of JIRA, Confluence and OAS. The data stored in JIRA is explored using its filters and the custom-developed reporting programs. The data stored in Confluence is explored using its page view interfaces and its activity tracking interfaces. The data stored in OAS is explored using its user interface and a custom-developed SOAP interface that integrates with the reporting programs. The data exploration is done by the QMG in a periodic manner.
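The JIRA filters mentioned above are driven by JQL queries, which reporting programs can also run against JIRA's standard REST search endpoint. A minimal sketch of building such a request follows; the base URL and the DELIVERY project key are assumptions for illustration, not Rebaca's actual configuration.

```python
from urllib.parse import urlencode

def build_jira_search_url(base_url: str, jql: str, max_results: int = 50) -> str:
    """Build a URL for JIRA's REST search endpoint with a JQL filter."""
    query = urlencode({"jql": jql, "maxResults": max_results})
    return f"{base_url}/rest/api/2/search?{query}"

# Hypothetical example: open items in a 'DELIVERY' project, earliest deadline first.
url = build_jira_search_url(
    "https://jira.example.com",
    "project = DELIVERY AND status != Done ORDER BY duedate ASC",
)
```

A reporting program would fetch this URL and aggregate the returned issues; keeping the JQL in one place makes the filters easy to review alongside the KCI definitions.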

Discovery Of Insightful Information
At Rebaca, we came up with the concept of the Key Capability Indicator (KCI for short). A KCI is a unit of insightful information. At this time there are some 30-odd KCIs in use at Rebaca. Every KCI has a powerful algorithm behind it, which queries the software execution data stores and generates a numerical value that in turn provides a useful insight into a specific aspect of the software execution. The KCIs are further classified into two categories: Watch and Audit. Watch KCIs provide insight into the completion (towards plan) aspect of the execution. Some examples of the Watch KCIs are: Burn Up, Burn Down, Product Delivery Velocity, Product Test Velocity, Defect Injection Rate and Resource Velocity. Audit KCIs provide insight into the compliance (with the best practices defined in the Rebaca Software Engineering process) aspect of the execution. Some examples of the Audit KCIs are: Forward Planning, Story Cloning, Validation Density and Weekly Report Sending. A dedicated team inside the QMG runs these KCIs every week and stores their values in MS Excel spreadsheets.
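To illustrate the idea of a Watch KCI, here is a minimal sketch of how a Burn Down series might be computed from weekly logged effort. The actual algorithms behind Rebaca's KCIs are not documented in this article, so the inputs and formula below are illustrative assumptions.

```python
def burn_down(total_planned_hours: float, weekly_logged_hours: list) -> list:
    """Remaining planned work after each week, given hours logged per week.

    A simple Burn Down series: start from the total plan and subtract
    the effort logged each week, never dropping below zero.
    """
    remaining = total_planned_hours
    series = []
    for logged in weekly_logged_hours:
        remaining = max(0.0, remaining - logged)
        series.append(remaining)
    return series

# Example: a 100-hour plan burned down over four weeks of logged effort.
series = burn_down(100.0, [30.0, 25.0, 20.0, 15.0])
# -> [70.0, 45.0, 25.0, 10.0]
```

A series like this, computed weekly by the QMG, is what gives the work group an early signal when completion is drifting away from plan.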

Sharing Of Insightful Information
The QMG generates the KCI values every week for each of the active software work groups and stores them in MS Excel spreadsheets. The KCI values are then updated in the Confluence pages of the respective work groups. Every work group has three KCI pages – Deadline Watch, Customer Satisfaction Watch and Audit. Templates of these KCI pages can be found here: Customer Deadline Watch Template, Customer Satisfaction Watch Template and Assurance Templates – Project. Work group members set up Confluence Watches on these pages and receive email-based update alerts from Confluence as soon as the pages are updated by the QMG.

Consumption Of Insightful Information
The process of consuming the KCIs starts with the work group head receiving the update alerts for the KCI pages from Confluence. The head then carefully analyses the historical and current values of the KCIs, which are available in the pages, discovers the root cause for the movement of each KCI value and documents it in the page against that KCI. If the root cause is actionable, then the action items are also identified by the head and recorded in the KCI page against the concerned KCI. When the head is done discovering the root causes, the KCI pages are marked as “accepted” in Confluence. Face-to-face meetings (called PMR meetings) involving the stakeholders of the work group are organized by the Admin Group once the pages enter the “accepted” state. The PMR meetings discuss the KCIs along with their root cause analysis and discover new dependencies and risks, which are then entered in Confluence by the head. The PMR also revisits the existing risks and dependencies, which are available in the KCI pages, until they are addressed. The risks and dependencies are also shared with the customer via the weekly status report. See PWB – PMR Meeting 2014 to learn how the PMR meetings are planned and completed.

Analytics is helping companies around the world deliver success stories, some of which are listed below. Analytics is the future for achieving execution excellence. Let us fully commit ourselves to the usage and improvement of Rebaca’s Software Analytics System.

·         Carlson Hotels Group “SNAP’s” Up More Revenue with Analytics
Global economic conditions in 2012 not only led to fewer sold-out nights for hotels, but also contributed to customers being more sensitive to the prices they pay for rooms. Carlson Hotels Group used analytics to pursue an innovative revenue optimization project, SNAP, which stands for Stay Night Automated Pricing, to optimize the rates quoted to customers.

·         U.S. Centers for Disease Control and Prevention (CDC) Advances Preparedness with Analytics
In the U.S., there is little actual experience or first-hand knowledge to call upon when planning for potentially catastrophic events such as the anthrax attacks of 2001. However, the CDC, working with experts from Georgia Tech, devised sophisticated modeling and computational strategies that address the fundamental challenges in mass dispensing to help save lives.

·         Danaos Corporation’s Ship Comes In with Analytics
Since the early days of shipping, planners have struggled with the quandary of whether to save money or time. Danaos solved this dilemma by using analytics to create an optimal decision support system that realized millions of dollars in savings.

·         Intel Corporation Tackles Timing and Risk Management with Analytics
To maintain its leadership position, Intel must each year purchase a few billion dollars’ worth of progressively more sophisticated manufacturing equipment. The question becomes: when is the optimal time to make this capital investment? Intel used a dual-mode multi-period procurement methodology with each supplier to realize a $2 billion revenue upside.

·         Hewlett-Packard (HP) Aids Their Web Sales Channel with Analytics
Building a strong online sales channel is an important strategic need for Hewlett-Packard. HP developed a set of solutions based on mathematical programming, Bayesian modeling, regression modeling, and time-series forecasting to enable a more holistic buying experience for consumers that led to increased conversion rates and order sizes and millions in positive business impact.

·         TNT Express Implements Supply Chain-Wide Optimization
TNT Express developed a portfolio of optimization solutions, its Global Operations (GO) program, to organize its operations in a better and smarter way to help it save 207 million Euros in transportation and other costs over 4 years.