Despite the continuous hype around data analytics and the rapid acceleration of data technologies such as machine learning (ML) and artificial intelligence (AI), most companies lag behind, with low data capabilities and no in-house data team in place. These companies either leave their data entirely unleveraged or have it marginally analyzed by executives alongside their main duties to produce limited reports.
In such a situation, pushing the organization up the hill of data maturity requires building a team of data specialists. Building such a team can be daunting, as every company operates under different conditions and no single approach fits all cases. However, covering the following main grounds can cut miles off the road to building a data team from the ground up.
First, nurture the environment and plant the seeds. Data teams cannot grow in a vacuum. To prepare the organization to become data-driven with a data team, enhancing the organizational data culture is a good starting point. Having employees at all levels with a data-driven mentality and an understanding of the role of data analytics can significantly prepare the ground for the planned team.
Second, connect with stakeholders and recognize priority needs. Carrying out data culture programs inside the organization can open up opportunities to have meaningful discussions with stakeholders on different levels about their data needs, what they already do with data, and what they want to achieve, in addition to having better insights into the pre-existing data assets. This is a good stage to recognize the organization’s data pain points, which would then be the immediate and strategic objectives of the future data team.
Third, define the initial structure of the team. According to the scale of the organization and the identified needs, data teams can have one of three main structures:
Centralized: This involves having all data roles within one team reporting to one head, a chief data officer (CDO), or a similar role. All departments in the organization request their needs from this team. It is a straightforward approach, especially for small companies, but it can turn into a bottleneck if the team is not scaled up continuously to meet the organization’s growing needs.
Decentralized: This requires disseminating all data roles and infusing them into departmental teams. This mainly aims to close the gap between technical analysis and business benefits as analysts in every team would be experts in their functional areas. However, the approach may lead to inconsistencies in data management and fragile data governance.
Hybrid: This consists of having governance, infrastructure, and data engineering roles within a core team, along with embedding data analysts, business analysts, and data scientists in departmental teams. The allocated personnel would report to the respective department head as well as the data team head. This approach combines the benefits of both centralized and decentralized structures and is usually applicable in large organizations as they require more headcount in their data teams.
Fourth, map the necessary tech stack and data roles. As the previous stages have uncovered the current uses and needs of data in an organization, it should be easier to start figuring out the tech tools that the team would be initially working with. Mapping the needed tech stack would be the first pillar before moving on to the hiring process. The second pillar would involve defining the roles that the team would need in its nascent stage to meet the prioritized objectives.
Several data job titles can be combined in a data team, with many of them having specializations that overlap or branch off from one another. However, there are three main role areas that should be considered for starting data teams:
Data engineering: implementing and managing data storage systems, integrating scattered datasets, and building pipelines to prepare data for analysis and reporting
Data analysis: performing final data preparation and extracting main insights to inform decision-making
Data science: building automated analysis and reporting systems, usually concerned with predictive and prescriptive machine learning models
Fifth, recruit the team step by step. Hiring new employees for the data team is one option. The other is upskilling existing employees who have an interest in a data career and the minimum required skills. Even employees with interest alone can be reskilled to fill some roles, especially within an initial data team.
The team does not need to take off with full wings. It can start small and grow gradually. Typically, data teams start with data analysts who have extra skills in data engineering, data engineers who have experience with ad-hoc analyses and reporting, or a limited combination of both. In later stages, other titles can come on board.
The baby-step-building approach is more convincing for stakeholders as it can be more efficient from a return-on-investment (ROI) perspective. Starting with a full-capacity team may end up being too costly for the organization, which could lead to the budding project being cut off in its prime.
Sixth, deliver ad-hoc analyses, heading towards long-term projects. In the beginning, data analytics experts at the organization would be expected to answer ad-hoc requests and solve urgent data-related problems, such as developing quick reports and reporting on-the-spot metrics. This is a good point to prove how data personnel can be of direct benefit to the organization.
However, along with delivering said ad-hoc requests, the data team should have strategic goals to enhance and develop the overall data maturity of the organization, like organizing, integrating, and automating the analytics processes and installing advanced predictive models. These long-term projects should foster the organization’s data maturity, which should result in ad-hoc requests being less frequent as all executives should be self-sufficient in using the installed automated reports and systems. In such a data-mature environment, the team would have time to advance their data products continuously, opening up new benefit opportunities.
Seventh, fortify the team’s presence. Strategic projects with shorter implementation periods and more immediate impact should be prioritized over longer ones, especially in the beginning. That helps continuously prove the benefits of the data team and the point of its foundation. Branding the data team’s products with its name can help remind decision-makers of the team’s value. In addition, it is highly useful for the data team’s head to have access to top managerial levels to keep promoting the team’s presence and expansion.
Building a data team from scratch requires careful planning, investment, and commitment from organizational leadership. By following these guidelines and adapting them to their specific needs, organizations without prior data capabilities can establish a robust data team capable of driving innovation and offering a competitive advantage through data-driven insights.
Learn more about data management by exploring our articles on data analytics.
**********
Editor’s Note: This post was originally published on April 23, 2024 and last updated on September 17, 2024.
You’ve probably heard tech buzzwords like data-driven decision making, advanced analytics, artificial intelligence (AI), and so on. What these terms have in common is that they all require data. There is a famous quote in the computer science field — “garbage in, garbage out” — and it captures how poor data leads to bad results, flawed insights, and disastrous judgments. Now, what good is advanced technology if we can’t put it to use?
The problem is clear: organizations need to have a good data management system in place to ensure they have relevant and reliable data. Data management is defined by Oracle as “the process of collecting, storing, and utilizing data in a safe, efficient, and cost-effective manner.” If the scale of your organization is large, it is very reasonable to employ a holistic platform such as an enterprise resource planning (ERP) system.
On the other hand, if your organization is still in its early or mid stages, it is likely that you cannot afford an ERP system yet. However, this does not mean that your organization does not need data management. Data management with limited resources is still possible as long as the essential principles of effective data management are applied.
Here are the four fundamental tips to start data management (a minimal code sketch follows the list):
Develop a clear data storage system – Data collection, storage, and retrieval are the fundamental components of a data storage system. You can start small by developing a simple data storage system. Use cloud-based file storage, for example, to begin centralizing your data. Organize the data by naming folders and files in a systematic manner; this will allow you to access your data more easily whenever you need it.
Protect data security and set access control – Data is one of the most valuable assets in any organization. Choose a safe, reliable, and trustworthy location (if physical) or service provider (if cloud-based). Make sure that only the individuals you approve have access to your data. This may be accomplished by adjusting file permissions and separating user access rights.
Schedule a routine data backup procedure – Although this procedure is essential, many businesses still fail to back up their data on a regular basis. By doing regular backups, you can protect your organization against unwanted circumstances such as disasters, outages, and so forth. Make sure that your backup location is independent of your primary data storage location. It could be a different service provider or location, as long as the new backup storage is also secure.
Understand your data and make it simple – First, identify what data your organization requires to meet its objectives. The specifications can then be derived from those objectives. For example, if you are aiming to develop an employee retention program, you will need data on employee turnover. Keeping only the data tied to your objectives makes it more focused and organized, so remove anything irrelevant, including redundant or duplicate data.
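The four tips above lend themselves to simple automation. The following is a minimal Python sketch of what they might look like in practice; every folder name, file name, and column is an illustrative assumption rather than a prescribed setup.

```python
# A minimal sketch touching on the four tips above; all paths, file names, and
# column names are illustrative assumptions, not a prescribed standard.
import shutil
import stat
from pathlib import Path

import pandas as pd

# Tip 1: a central storage root with a predictable folder and file naming scheme.
base = Path("company_data")
turnover_file = base / "hr" / "2024" / "employee_turnover_2024-09.csv"
turnover_file.parent.mkdir(parents=True, exist_ok=True)

# Tip 4: keep only the data that serves the objective (here, a retention program)
# and drop duplicate rows before storing.
raw = pd.DataFrame({
    "employee_id": [101, 102, 102, 103],
    "left_company": [False, True, True, False],
    "favorite_color": ["red", "blue", "blue", "green"],  # irrelevant to turnover analysis
})
clean = raw.drop(columns=["favorite_color"]).drop_duplicates()
clean.to_csv(turnover_file, index=False)

# Tip 2: restrict access so only the file's owner can read and write it (chmod 600 on POSIX).
turnover_file.chmod(stat.S_IRUSR | stat.S_IWUSR)

# Tip 3: copy the file to an independent backup location on a regular schedule.
backup_root = Path("backup_drive") / "company_data"
backup_target = backup_root / turnover_file.relative_to(base)
backup_target.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(turnover_file, backup_target)
```

On Windows or in cloud storage, access control would be handled through the provider’s sharing settings rather than file permissions, but the idea of limiting who can read and change the data stays the same.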
Data management has become a necessity in today’s data-driven era. No matter the size and type of your organization, you should start now. Good data management is still achievable, even with limited resources. The tips presented here serve as a starting point for your data management journey.
Learn more about data management by exploring our articles on data analytics.
**********
Editor’s Note: This post was originally published on December 9, 2021 and last updated on September 17, 2024.
One of the most common challenges faced by professionals in working with key performance indicators (KPIs) relates to data. They grapple with collecting and analyzing data to establish targets accurately, as indicated by 42% of respondents in The KPI Institute’s State of Strategy Management Practice 2023 Report.
This is particularly important as the collected data is expected to be of high quality and “fit for their intended uses in operations, decision making, and planning,” according to the book “Modern Data Strategy” by Mike Fleckenstein and Lorraine Fellows. Drawing from its advisory experience, The KPI Institute recommends employing the following data quality dimensions as a framework for assessing your data (see Figure 1).
Figure 1. Data Quality Dimensions | Source: Certified KPI Professional training program
Overcoming Issues with Data Quality Dimensions
Figure 2 highlights a dataset with significant data quality issues. An initial audit identified several faulty elements, revealing potential inaccuracies that could have an adverse impact. This section presents approaches for resolving these faulty elements to improve data reliability, followed by a short sketch of how some of the checks could be automated.
Figure 2. Sample quality troubled dataset | Source: The KPI Institute
A – Completeness: There is a missing value in the Actual Result column. One way to prevent this is to develop and utilize a data collection template that clearly outlines the necessary data fields. It is also important to regularly review the completeness of the data and address missing information that affects analysis.
B – Consistency: The structure of the data does not correspond with the template, as the name and position of the Data Custodian have been switched. To prevent this issue, make sure the data presents the same values across different systems and follows the same structure.
C – Timeliness: This issue pertains to the data being received after the specified deadline. One potential solution is to establish a data collection cycle time and set clear deadlines for data submission. Communicating these deadlines to all relevant parties and sending reminders for data submission can also help address this issue.
D – Conformity: The KPI is expressed as a percentage rate, but the data provided for the result includes a numerical value. To ensure conformity, organizations must provide clear guidelines on data format and how the KPI should be calculated.
E – Accuracy: This issue concerns the use of an inappropriate sign. The KPI measures a rate, but the sign used in the KPI name is “$.” To support accuracy, one should make sure the data reflects real information, including the use of appropriate units. To that end, The KPI Institute developed a naming standard, which designates the symbol “#” for units, “%” for rates, and “$” specifically for monetary values.
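Some of these checks can be automated before the data reaches a KPI report. Below is a minimal pandas sketch, assuming a hypothetical dataset with columns such as kpi_name, actual_result, and submission_date; the column names, deadline, and rules are illustrative examples, not part of The KPI Institute’s framework.

```python
# A minimal sketch of automating a few of the quality checks above with pandas.
# Column names, the deadline, and the expected formats are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "kpi_name": ["% On-time delivery", "$ Cost per hire", "Orders processed"],
    "actual_result": ["92%", None, "1500"],
    "submission_date": pd.to_datetime(["2024-09-10", "2024-09-20", "2024-09-12"]),
})
deadline = pd.Timestamp("2024-09-15")

# Completeness: flag rows with a missing Actual Result.
missing = df[df["actual_result"].isna()]

# Timeliness: flag rows submitted after the agreed deadline.
late = df[df["submission_date"] > deadline]

# Conformity: a KPI expressed as a rate ("%") should report a percentage value.
rate_kpis = df["kpi_name"].str.startswith("%")
non_conforming = df[rate_kpis & ~df["actual_result"].fillna("").str.endswith("%")]

# Accuracy: the naming standard expects KPI names to start with "#", "%", or "$".
badly_named = df[~df["kpi_name"].str.match(r"^[#%$]")]

print(missing, late, non_conforming, badly_named, sep="\n\n")
```

Consistency checks, such as verifying that the same record carries the same values and structure across systems, can be added in the same spirit once the relevant data sources are joined.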
Maintaining data quality is essential to generate meaningful and effective KPIs. Reliable data ensures that business decisions are based on trustworthy information, resulting in improved marketing, increased customer satisfaction, enhanced internal processes, and reduced costs.
On the other hand, unreliable data can cause significant challenges. KPIs based on inaccurate data lead to wrong decisions, resulting in wasted resources and a negative impact on the organization’s performance. Poor data quality can impede the identification of trends or the accuracy of forecasts, leading to missed opportunities. In addition, it can hold back innovation, causing businesses to lose competitiveness.
Therefore, it is recommended that organizations prioritize data quality management and take actions to assess and improve data quality to enhance KPIs and drive business success.
Enhance your understanding of KPIs and read more about them on our KPI section.
**********
Editor’s Note: This article was originally published in Performance Magazine: Issue No. 26, 2023 – Data Analytics Edition and has been updated as of September 17, 2024.
Nowadays, data analytics in the ICT (Information and Communication Technologies) industry is not just a byproduct of operations but a cornerstone for strategic decision-making. While many discuss the theoretical potential of data, few address the critical gap between theory and practical application. Leveraging data effectively can drive performance, foster innovation, and enhance customer experience.
ICT companies generate vast amounts of data daily, including sales, revenue and other financial components, as well as supply chain, project delivery, product performance, customer usage, service quality, and customer experience data. The challenge lies not only in the volume of data but in capturing, analyzing, and interpreting it effectively. Robust data storage and management solutions are essential for efficiently managing all of this.
Driving Performance with Data Analytics in ICT
Data analytics in the ICT industry can mean improvements in operational efficiency. By monitoring the right metrics, ICT companies can identify areas of improvement, optimize resource allocation, and streamline processes. For example, analyzing procurement data can determine whether to develop certain capabilities internally or outsource them. This analysis can inform strategic decisions on partnerships, negotiations, and resource allocation, ensuring actions are grounded in real-world insights.
Data can drive innovation by revealing client needs and market trends. ICT companies can tailor their services to meet evolving demands by identifying market gaps and opportunities for new products or enhancements. Continuous analysis of trends and user feedback ensures offerings remain relevant and competitive, addressing unmet needs and keeping pace with technological advancements.
In the competitive ICT market, customer experience is a key differentiator. Data-driven insights allow companies to personalize interactions, anticipate issues, and provide timely solutions. For instance, by combining technology adoption metrics with customer feedback and usage patterns, companies can understand why products are underused and make necessary adjustments. This approach enhances customer satisfaction and loyalty by addressing real-time needs and preferences.
A real-world example of data-driven decision-making comes from a company well known to the author, which aimed to improve its win ratio for government bids. For confidentiality reasons, the company’s name will not be disclosed. The company embarked on a data-driven initiative, utilizing historical bid data, competitor analysis, and publicly available data from the official government bidding platform, leveraging robotic process automation (RPA).
The data analysis revealed patterns in winning bids and identified the average variance between the company’s bid prices and the next closest bidders. This analysis informed strategic decisions to revamp bid pricing strategy, tailor proposals to government needs, and reduce the bid price gap with competitors. The company also implemented various initiatives related to cost restructuring and improving partnership terms.
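As an illustration only (the company’s actual data and tooling are not public), the kind of price-gap analysis described above might look like the following sketch, run on a made-up bid dataset:

```python
# Illustrative only: a hypothetical bid dataset and a simple price-gap analysis.
# Column names and figures are assumptions, not the company's actual data.
import pandas as pd

bids = pd.DataFrame({
    "tender_id": ["T-01", "T-02", "T-03"],
    "our_bid": [1_050_000, 980_000, 1_200_000],
    "next_closest_bid": [1_000_000, 990_000, 1_150_000],
    "won": [False, True, False],
})

# Variance between our price and the next closest bidder, as a percentage.
bids["price_gap_pct"] = (bids["our_bid"] - bids["next_closest_bid"]) / bids["next_closest_bid"] * 100

print(bids)
print("Average gap on lost bids: %.1f%%" % bids.loc[~bids["won"], "price_gap_pct"].mean())
```

In practice, such figures would come from the historical bid records and the data collected from the government bidding platform via RPA.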
As a result, the company saw a significant improvement in its win ratio, increased revenue from government contracts, and better alignment with market expectations. This data-driven approach established a reputation for quality bids, built stronger relationships with government clients, and supported the company’s growth and strategic goals.
Investing in data analytics capabilities proved highly beneficial, leading to better processing of government bids, more informed decision-making, and reduced manual work. This experience was pivotal in revamping the company’s approach to challenges and increasing data utilization in various decisions.
A significant challenge in implementing data-driven strategies is the lack of a defined data catalog and metadata. Establishing clear data practices is essential for building a reliable foundation. Proper data management ensures privacy and security while enabling effective decision-making. These practices provide a framework for overcoming barriers and applying data-driven strategies practically.
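For readers unfamiliar with the term, a data catalog entry simply documents what a dataset is, who owns it, and how it is governed. The sketch below shows what a single entry might capture; the fields and values are illustrative assumptions, not a formal standard.

```python
# A minimal sketch of a single data catalog entry; fields and values are illustrative.
catalog_entry = {
    "dataset": "government_bids_history",
    "owner": "Corporate Planning & Performance",
    "description": "Historical bid submissions with prices and outcomes",
    "source_system": "procurement portal export",
    "refresh_frequency": "monthly",
    "sensitivity": "confidential",
    "last_updated": "2024-09-17",
}

for field, value in catalog_entry.items():
    print(f"{field}: {value}")
```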
However, creating a data-driven culture requires more than just tools; it demands a mindset shift. This involves building trust in data through proper management practices, empowering teams with data interpretation skills, fostering collaboration, and aligning goals with data-driven initiatives. Starting with a small, credible data set and expanding organization-wide can effectively turn theoretical concepts into practical applications.
The ICT industry is experiencing significant changes. Embracing data-driven decision-making, investing in data architecture, and adopting clear data practices can unlock new potentials, enhance client experiences, and improve efficiency. Failing to do so may result in inefficiencies and a loss of competitive edge. As competition intensifies, leveraging data effectively turns challenges into opportunities.
Deepen your understanding of using data for better decision-making and other aspects of business with our insightful articles on data analytics.
*****************************
About the Author
This article is written by Yazeed Almomen, a Corporate Planning & Performance Manager at one of the leading ICT companies in Saudi Arabia. With over six years of dedicated experience in the corporate planning and performance field across both private and public sectors, he has led numerous performance transformation projects and is passionate about building sustainable planning and performance management practices. He has a keen interest in leveraging data-driven decision-making to enhance corporate performance and foster innovation.
“If communication is more art than science, then it’s more sculpture than painting. While you’re adding to build your picture in painting, you’re chipping away at sculpting. And when you’re deciding on the insights to use, you’re chipping away everything you have to reveal the core key insights that will best achieve your purpose,” according to Craig Smith, McKinsey & Company’s client communication expert.
The same principle applies in the context of data visualization. Chipping away is important to not overdress data with complicated graphs, special effects, and excess colors. Data presentations with too many elements can confuse and overwhelm the audience.
Keep in mind that data must convey information. Let data visualization elements communicate rather than serve as decoration. The simpler a visualization is, the more accessible and understandable it is. “Less is more,” as long as the visuals still convey the intended message.
Drawing parallels between the processes of exploratory and explanatory data visualization and the practice of sculpting could help improve how data visualization is done. How can chipping away truly add more clarity to data visualization?
Exploratory Visualization: Adding Lumps of Clay
Exploratory visualization is the phase where you are trying to understand the data yourself before deciding what interesting insights it might hold in its depths. You can hunt and polish these insights in the later stage before presenting them to your audience.
In this stage, you might end up creating a hundred charts. Some of them help you get a better sense of the statistical description of the data: means, medians, maximum and minimum values, and more.
You can also spot interesting outliers during exploration and experiment with different views to test relationships between values. Out of the hundred hypotheses you visually analyze to find your way through the data in your hands, you may end up settling on two to work on and present to your audience.
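As a small illustration of this exploratory pass, the pandas sketch below summarizes a made-up dataset and surfaces a possible outlier; the columns and figures are invented for the example.

```python
# A quick exploratory pass in pandas; the dataset is made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "South", "East", "West", "North"],
    "revenue": [120, 95, 160, 80, 450],  # 450 stands out as a possible outlier
})

print(df["revenue"].describe())               # mean, median (50%), min, and max at a glance
print(df.groupby("region")["revenue"].sum())  # one of many quick views to test relationships
```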
In the parallel world of sculpting, artists do something similar. They start with an armature, much like the raw data a design begins with. Then, they keep adding lumps of clay onto it, just as you add charts during exploratory visualization.
Artists know that much of this clay will not make it into the final sculpture. But they also know that this accumulation of material is essential because it gives them a sense of how the final piece might take shape. Adding enough material also ensures they have plenty to work with when they begin shaping their work.
In the exploratory stage, approaching data visualization as a form of sculpting may remind us to resist two common and fatal urges:
The urge to rush into the explanatory stage – Heading to the chipping away stage too early will lead to flawed results.
The urge to show the audience everything produced in the exploratory stage, out of reluctance to waste the effort put into it – When you feel that urge, remember that you don’t want to show your audience that big lump of clay; you want to show a refined result.
Explanatory Visualization: Chipping Away the Unnecessary
Explanatory visualization is where you settle on the worth-reporting insights. You start polishing the visualizations to do what they are supposed to do, which is explaining or conveying the meaning at a glance.
The main goal of this stage is to ensure that there are no distractions in your visualization. Also, this stage makes sure that there are no unnecessary lumps of clay that hide the intended meaning or the envisioned shape.
In the explanatory stage, sculptors use various tools, but what they aim for is the same. They first shape the basic form further by taking away large amounts of material to make sure they are on track. Then, they move to finer forming, using more precise tools to carve in the shape’s features and others to add texture. The main question driving this stage for sculptors is: what uncovers the envisioned shape underneath?
In data visualization, you can try taking out each element of your visualization, such as titles, legends, labels, and colors. Then, ask yourself the same question each time: does the visualization still convey its meaning?
If yes, keep that element out. If not, try to figure out what is missing and think of less distracting alternatives, if any. For example, do you have multiple categories that you need to name? Try using labels attached to data points instead of separate legends.
There are many things you can take away to make your visualization less distracting and more oriented towards your goal. But to make the chipping-away stage simpler, there are five main things to consider, according to Cole Nussbaumer Knaflic in her well-known book, Storytelling with Data (a minimal sketch of these adjustments follows the list):
De-emphasize the chart title so it does not draw more attention than it deserves
Remove chart border and gridlines
Send the x- and y-axis lines and labels to the background (Plus tip from me: Also consider completely taking them out)
Remove the variance in colors between the various data points
Label the data points directly
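To make these five adjustments concrete, here is a minimal matplotlib sketch; the data and labels are made up, and the styling choices are one possible reading of the list above rather than a definitive recipe.

```python
# A minimal decluttering sketch in matplotlib; the data is invented for illustration.
import matplotlib.pyplot as plt

years = [2020, 2021, 2022, 2023]
series = {"Product A": [12, 15, 17, 21], "Product B": [10, 11, 12, 12]}

fig, ax = plt.subplots()
for name, values in series.items():
    ax.plot(years, values, color="gray")  # no variance in colors between data series
    # Label the data points directly instead of using a separate legend.
    ax.annotate(name, xy=(years[-1], values[-1]), xytext=(5, 0),
                textcoords="offset points", va="center")

ax.set_title("Units sold per year", loc="left", fontsize=10, color="gray")  # de-emphasized title
ax.grid(False)                        # remove gridlines
for spine in ("top", "right"):        # remove the chart border
    ax.spines[spine].set_visible(False)
ax.tick_params(colors="gray")         # send axis lines and labels to the background
plt.show()
```

Taking the axes out entirely, as the extra tip above suggests, would be a matter of calling ax.set_axis_off() and relying on the direct labels alone.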
In the explanatory stage, approaching data visualization as a form of sculpting may remind us how vital it is to keep chipping away the unnecessary parts to uncover what’s beneath; what you intend to convey is not fully visible until you shape it.
Overall, approaching data visualization as a form of sculpting may remind us of the true purpose of the practice and help crystallize the design in its best possible form.
Deepen your understanding of processing and designing data with our insightful articles on data visualization.