While the term sounds intimidating, “data analysis” is nothing more than making sense of information in a table. It consists of filtering, sorting, grouping, and manipulating data tables with basic algebra and statistics.
In fact, you don’t need experience to understand the basics. You have already worked with data extensively in your life, and “analysis” is nothing more than a fancy word for good sense and basic logic.
Over time, people have intuitively categorized the best logical practices for treating data. These categories are what we call today types, methods, and techniques.
This article provides a comprehensive list of types, methods, and techniques, and explains the difference between them.
For a practical intro to data analysis (including types, methods, & techniques), check out our Intro to Data Analysis eBook for free.
Descriptive, Diagnostic, Predictive, & Prescriptive Analysis
If you Google “types of data analysis,” the first few results will explore descriptive, diagnostic, predictive, and prescriptive analysis. Why? Because these names are easy to understand and are used a lot in “the real world.”
Descriptive analysis is an informational method, diagnostic analysis explains “why” a phenomenon occurs, predictive analysis seeks to forecast the result of an action, and prescriptive analysis identifies solutions to a specific problem.
That said, these are only four branches of a larger analytical tree.
Good data analysts know how to position these four types within other analytical methods and tactics, allowing them to leverage strengths and weaknesses in each to uproot the most valuable insights.
Let’s explore the full analytical tree to understand how to appropriately assess and apply these four traditional types.
Tree diagram of Data Analysis Types, Methods, and Techniques
Here’s a picture to visualize the structure and hierarchy of data analysis types, methods, and techniques.
If it’s too small you can view the picture in a new tab. Open it to follow along!
Note: basic descriptive statistics such as mean, median, and mode, as well as standard deviation, are not shown because most people are already familiar with them. In the diagram, they would fall under the “descriptive” analysis type.
Tree Diagram Explained
The highest-level classification of data analysis is quantitative vs qualitative. Quantitative implies numbers while qualitative implies information other than numbers.
Quantitative data analysis then splits into mathematical analysis and artificial intelligence (AI) analysis. Mathematical types then branch into descriptive, diagnostic, predictive, and prescriptive.
Methods falling under mathematical analysis include clustering, classification, forecasting, and optimization. Qualitative data analysis methods include content analysis, narrative analysis, discourse analysis, framework analysis, and grounded theory.
Moreover, mathematical techniques include regression, Naïve Bayes, simple exponential smoothing, cohorts, factors, linear discriminants, and more, whereas techniques falling under the AI type include artificial neural networks, decision trees, evolutionary programming, and fuzzy logic. Techniques under qualitative analysis include text analysis, coding, idea pattern analysis, and word frequency.
It’s a lot to remember! Don’t worry, once you understand the relationship and motive behind all these terms, it’ll be like riding a bike.
We’ll move down the list from top to bottom and I encourage you to open the tree diagram above in a new tab so you can follow along.
But first, let’s just address the elephant in the room: what’s the difference between methods and techniques anyway?
Difference between methods and techniques
Though often used interchangeably, methods and techniques are not the same. By definition, methods are the processes by which techniques are applied, and techniques are the practical applications of those methods.
For example, consider driving. Methods include staying in your lane, stopping at a red light, and parking in a spot. Techniques include turning the steering wheel, braking, and pushing the gas pedal.
Data sets: observations and fields
It’s important to understand the basic structure of data tables to comprehend the rest of the article. A data set consists of one far-left column containing observations, then a series of columns containing the fields (aka “traits” or “characteristics”) that describe each observation. For example, imagine we want a data table for fruit. It might look like this:
| The fruit (observation) | Avg. weight (field 1) | Avg. diameter (field 2) | Avg. time to eat (field 3) |
|---|---|---|---|
| Watermelon | 20 lbs (9 kg) | 16 inches (40 cm) | 20 minutes |
| Apple | 0.33 lbs (0.15 kg) | 4 inches (10 cm) | 5 minutes |
| Orange | 0.30 lbs (0.14 kg) | 4 inches (10 cm) | 5 minutes |
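To make this structure concrete, here’s a minimal sketch (assuming Python with pandas installed) that builds the fruit table above as a data set of observations and fields. The column names are purely illustrative.

```python
import pandas as pd

# Each row is an observation (a fruit); each column is a field describing it.
fruits = pd.DataFrame(
    {
        "fruit": ["Watermelon", "Apple", "Orange"],
        "avg_weight_lbs": [20.0, 0.33, 0.30],
        "avg_diameter_in": [16, 4, 4],
        "avg_minutes_to_eat": [20, 5, 5],
    }
).set_index("fruit")

print(fruits)
print(fruits.describe())  # quick descriptive statistics on the numeric fields
```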
Now let’s turn to types, methods, and techniques. Each heading below consists of a description, relative importance, the nature of data it explores, and the motivation for using it.
Quantitative Analysis
- Description: Quantitative data analysis is a high-level branch of data analysis that designates methods and techniques concerned with numbers instead of words.
- It accounts for more than 50% of all data analysis and is by far the most widespread and well-known type of data analysis.
- As you have seen, it holds descriptive, diagnostic, predictive, and prescriptive methods, which in turn hold some of the most important techniques available today, such as clustering and forecasting.
- It can be broken down into mathematical and AI analysis.
- Importance: Very high. Quantitative analysis is a must for anyone interested in becoming or improving as a data analyst.
- Nature of Data: data treated under quantitative analysis is, quite simply, quantitative. It encompasses all numeric data.
- Motive: to extract insights. (Note: we’re at the top of the pyramid; the motives get more specific and insightful as we move down.)
Qualitative Analysis
- Description: Qualitative data analysis is a high-level branch of data analysis that focuses on text data instead of numeric.
- It accounts for less than 30% of all data analysis and is common in social sciences.
- It can refer to the simple recognition of qualitative elements, which is not analytic in any way, but most often refers to methods that assign numeric values to non-numeric data for analysis.
- Because of this, some argue that it’s ultimately a quantitative type.
- Importance: Medium. In general, qualitative data analysis is neither common nor necessary in corporate roles. However, for researchers working in social sciences, its importance is very high.
- Nature of Data: data treated under qualitative analysis is non-numeric. However, as part of the analysis, analysts turn non-numeric data into numbers, at which point many argue it is no longer qualitative analysis.
- Motive: to extract insights. (This will be more important as we move down the pyramid.)
Mathematical Analysis
- Description: mathematical data analysis is a subtype of quantitative data analysis that designates methods and techniques based on statistics, algebra, and logical reasoning to extract insights. It stands in opposition to artificial intelligence analysis.
- Importance: Very High. The most widespread methods and techniques fall under mathematical analysis. In fact, it’s so common that many people use “quantitative” and “mathematical” analysis interchangeably.
- Nature of Data: numeric. By definition, all data under mathematical analysis are numbers.
- Motive: to extract measurable insights that can be acted upon.
Artificial Intelligence & Machine Learning Analysis
- Description: artificial intelligence and machine learning analyses designate techniques based on the titular skills. They are not traditionally mathematical, but they are quantitative since they use numbers. Applications of AI & ML analysis techniques show promise, but they’re not yet mainstream across the field.
- Importance: Medium. As of today (September 2020), you don’t need to be fluent in AI & ML data analysis to be a great analyst. BUT, if it’s a field that interests you, learn it. Many believe that in 10 years’ time its importance will be very high.
- Nature of Data: numeric.
- Motive: to create calculations that build on themselves in order to extract insights without direct input from a human.
Descriptive Analysis
- Description: descriptive analysis is a subtype of mathematical data analysis that uses methods and techniques to provide information about the size, dispersion, groupings, and behavior of data sets. This may sound complicated, but just think about mean, median, and mode: all three are types of descriptive analysis. They provide information about the data set. We’ll look at specific techniques below, and a quick numeric sketch follows this list.
- Importance: Very high. Descriptive analysis is among the most commonly used data analyses in both corporations and research today.
- Nature of Data: the nature of data under descriptive statistics is sets. A set is simply a collection of numbers that behaves in predictable ways. Data reflects real life, and there are patterns everywhere to be found. Descriptive analysis describes those patterns.
- Motive: the motive behind descriptive analysis is to understand how numbers in a set group together, how far apart they are from each other, and how often they occur. As with most statistical analysis, the more data points there are, the easier it is to describe the set.
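As a quick illustration, Python’s built-in statistics module already covers the descriptive measures mentioned above. The sales figures here are made up purely for the example.

```python
import statistics

# A made-up set of daily sales figures (the data set we want to describe).
sales = [12, 15, 15, 18, 22, 22, 22, 30, 41]

print("mean:  ", statistics.mean(sales))    # central tendency
print("median:", statistics.median(sales))  # the middle value
print("mode:  ", statistics.mode(sales))    # the most frequent value
print("stdev: ", statistics.stdev(sales))   # dispersion around the mean
```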
Diagnostic Analysis
- Description: diagnostic analysis answers the question “why did it happen?” It is an advanced type of mathematical data analysis that draws on multiple techniques but does not own any single one. Analysts engage in diagnostic analysis when they try to explain why.
- Importance: Very high. Diagnostics are probably the most important type of data analysis for people who don’t do analysis themselves, because they’re valuable to anyone who’s curious. They’re most common in corporations, as managers often only want to know the “why.”
- Nature of Data: data under diagnostic analysis are data sets. These sets in themselves are not enough under diagnostic analysis. Instead, the analyst must know what’s behind the numbers in order to explain “why.” That’s what makes diagnostics so challenging yet so valuable.
- Motive: the motive behind diagnostics is to diagnose — to understand why.
Predictive Analysis
- Description: predictive analysis uses past data to project future data. It’s very often one of the first kinds of analysis new researchers and corporate analysts use because it is intuitive. It is a subtype of the mathematical type of data analysis, and its three notable techniques are regression, moving average, and exponential smoothing.
- Importance: Very high. Predictive analysis is critical for any data analyst working in a corporate environment. Companies always want to know what the future will hold — especially for their revenue.
- Nature of Data: Because past and future imply time, predictive data always includes an element of time. Whether it’s minutes, hours, days, months, or years, we call this time series data. In fact, this data is so important that I’ll mention it twice so you don’t forget: predictive analysis uses time series data.
- Motive: the motive for investigating time series data with predictive analysis is to predict the future in the most analytical way possible.
Prescriptive Analysis
- Description: prescriptive analysis is a subtype of mathematical analysis that answers the question “what will happen if we do X?” It’s largely underestimated in the data analysis world because it requires diagnostic and descriptive analyses to be done before it even starts. More than simple predictive analysis, prescriptive analysis builds entire data models to show how a simple change could impact the ensemble.
- Importance: High. Prescriptive analysis is most common under the finance function in many companies. Financial analysts use it to build models of the financial statements that show how the numbers will change given alternative inputs.
- Nature of Data: the nature of data in prescriptive analysis is data sets. These data sets contain patterns that respond differently to various inputs. Data that is useful for prescriptive analysis contains correlations between different variables. It’s through these correlations that we establish patterns and prescribe action on this basis. This analysis cannot be performed on data that exists in a vacuum — it must be viewed on the backdrop of the tangibles behind it.
- Motive: the motive for prescriptive analysis is to establish, with an acceptable degree of certainty, what results we can expect given a certain action. As you might expect, this necessitates that the analyst or researcher be aware of the world behind the data, not just the data itself.
Clustering Method
- Description: the clustering method groups data points together based on their relative closeness to further explore and treat them based on these groupings. There are two ways to group clusters: intuitively and statistically (for example, with k-means).
- Importance: Very high. Though most corporate roles group clusters intuitively based on management criteria, a solid understanding of how to group them mathematically is an excellent descriptive and diagnostic approach to allow for prescriptive analysis thereafter.
- Nature of Data: the nature of data useful for clustering is sets with 1 or more data fields. While most people are used to looking at only two dimensions (x and y), clustering becomes more accurate the more fields there are.
- Motive: the motive for clustering is to understand how data sets group and to explore them further based on those groups.
Classification Method
- Description: the classification method aims to separate and group data points based on common characteristics. This can be done intuitively or statistically.
- Importance: High. While simple on the surface, classification can become quite complex. It’s very valuable in corporate and research environments, but can feel like it’s not worth the work. A good analyst can execute it quickly to deliver results.
- Nature of Data: the nature of data useful for classification is data sets. As we will see, it can be used on qualitative data as well as quantitative. This method requires knowledge of the substance behind the data, not just the numbers themselves.
- Motive: the motive for classification is to group data not by mathematical relationships (which would be clustering), but by predetermined outputs. This is why it’s less useful for diagnostic analysis, and more useful for prescriptive analysis.
Forecasting Method
- Description: the forecasting method uses past time series data to forecast the future.
- Importance: Very high. Forecasting falls under predictive analysis and is arguably the most common and most important method in the corporate world. It is less useful in research, which prefers to understand the known rather than speculate about the future.
- Nature of Data: data useful for forecasting is time series data, which, as we’ve noted, always includes a variable of time.
- Motive: the motive for the forecasting method is the same as that of predictive analysis: to confidently estimate future values.
Optimization Method
- Description: the optimization method maximizes or minimizes values in a set given a set of criteria. It is arguably most common in prescriptive analysis. In mathematical terms, it is maximizing or minimizing a function given certain constraints (a short sketch follows this list).
- Importance: Very high. The idea of optimization applies to more analysis types than any other method. In fact, some argue that it is the fundamental driver behind data analysis. You would use it everywhere in research and in a corporation.
- Nature of Data: the nature of optimizable data is a data set of at least two points.
- Motive: the motive behind optimization is to achieve the best result possible given certain conditions.
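Here’s a minimal sketch of the idea, assuming Python with SciPy available. The cost function and the bound are invented for illustration only.

```python
from scipy.optimize import minimize_scalar

# Hypothetical cost curve: producing q units costs (q - 50)^2 + 200.
def cost(q):
    return (q - 50) ** 2 + 200

# Find the production level that minimizes cost, constrained to 0..40 units.
result = minimize_scalar(cost, bounds=(0, 40), method="bounded")
print(f"optimal quantity: {result.x:.1f}, minimal cost: {result.fun:.1f}")
```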
Content Analysis Method
- Description: content analysis is a method of qualitative analysis that quantifies textual data to track themes across a document. It’s most common in academic fields and in social sciences, where written content is the subject of inquiry.
- Importance: High. In a corporate setting, content analysis as such is less common. If anything, Naïve Bayes (a technique we’ll look at below) is the closest corporations come to analyzing text. However, it is of the utmost importance for researchers. If you’re a researcher, check out this article on content analysis.
- Nature of Data: data useful for content analysis is textual data.
- Motive: the motive behind content analysis is to understand themes expressed in a large text.
Narrative Analysis Method
- Description: narrative analysis is a method of qualitative analysis that quantifies stories to trace themes in them. It differs from content analysis because it focuses on stories rather than research documents, and the techniques used differ slightly (the nuances are outside the scope of this article).
- Importance: Low. Unless you are highly specialized in working with stories, narrative analysis is rare.
- Nature of Data: the nature of the data useful for the narrative analysis method is narrative text.
- Motive: the motive for narrative analysis is to uncover hidden patterns in narrative text.
Discourse Analysis Method
- Description: the discourse analysis method falls under qualitative analysis and uses thematic coding to trace patterns in real-life discourse. Because real-life discourse is oral, it must first be transcribed into text.
- Importance: Low. Unless you are focused on understanding real-world idea sharing in a research setting, this kind of analysis is less common than the others on this list.
- Nature of Data: the nature of data useful in discourse analysis is first audio files, then transcriptions of those audio files.
- Motive: the motive behind discourse analysis is to trace patterns of real-world discussions. (As a spooky sidenote, have you ever felt like your phone microphone was listening to you and making reading suggestions? If it was, the method was discourse analysis.)
Framework Analysis Method
- Description: the framework analysis method falls under qualitative analysis and uses similar thematic coding techniques to content analysis. However, where content analysis aims to discover themes, framework analysis starts with a framework and only considers elements that fall in its purview.
- Importance: Low. As with the other textual analysis methods, framework analysis is less common in corporate settings. Even in the world of research, only some use it. Strangely, it’s very common for legislative and political research.
- Nature of Data: the nature of data useful for framework analysis is textual.
- Motive: the motive behind framework analysis is to understand what themes and parts of a text match your search criteria.
Grounded Theory Method
- Description: the grounded theory method falls under qualitative analysis and uses thematic coding to build theories around those themes.
- Importance: Low. Like other qualitative analysis techniques, grounded theory is less common in the corporate world. Even among researchers, you would be hard pressed to find many using it. Though powerful, it’s simply too rare to spend time learning.
- Nature of Data: the nature of data useful in the grounded theory method is textual.
- Motive: the motive of grounded theory method is to establish a series of theories based on themes uncovered from a text.
Clustering Technique: K-Means
- Description: k-means is a clustering technique in which data points are grouped into clusters around the closest means. Though not traditionally considered AI, it is effectively an unsupervised learning procedure: clusters are reevaluated as data points are added. Clustering techniques can be used in diagnostic, descriptive, & prescriptive data analyses (a short sketch follows this list).
- Importance: Very important. If you only take 3 things from this article, k-means clustering should be one of them. It is useful in any situation where n observations have multiple characteristics and we want to put them in groups.
- Nature of Data: the nature of data is at least one characteristic per observation, but the more the merrier.
- Motive: the motive for clustering techniques such as k-means is to group observations together and either understand or react to them.
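Here’s a minimal k-means sketch, assuming Python with scikit-learn installed. The two-field observations are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical observations with two fields each (e.g., weight and diameter).
X = np.array([
    [1.0, 1.1], [1.2, 0.9], [0.8, 1.0],   # one natural grouping
    [8.0, 8.2], [7.8, 8.1], [8.3, 7.9],   # another natural grouping
])

# Group observations into two clusters based on closeness to the cluster means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1] -- cluster membership for each observation
```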
Regression Technique
- Description: simple and multivariable regressions use either one independent variable or a combination of multiple independent variables to calculate a correlation to a single dependent variable using constants. Regressions are almost synonymous with correlation today (a short sketch follows this list).
- Importance: Very high. Along with clustering, if you only take 3 things from this article, regression techniques should be part of it. They’re everywhere in corporate and research fields alike.
- Nature of Data: the nature of data used in regressions is data sets with “n” number of observations and as many variables as are reasonable. It’s important, however, to distinguish between time series data and regression data. You cannot use regressions on time series data without accounting for time; the easier route is to use techniques under the forecasting method.
- Motive: The motive behind regression techniques is to understand correlations between independent variable(s) and a dependent one.
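A minimal simple-regression sketch, assuming Python with scikit-learn installed; the advertising-spend and sales figures are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data: advertising spend (independent) vs. sales (dependent).
spend = np.array([[10], [20], [30], [40], [50]])
sales = np.array([25, 44, 63, 85, 104])

model = LinearRegression().fit(spend, sales)
print("slope:    ", model.coef_[0])    # change in sales per unit of spend
print("intercept:", model.intercept_)  # baseline sales at zero spend
print("predicted sales at 60:", model.predict([[60]])[0])
```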
Naïve Bayes Technique
- Description: Naïve Bayes is a classification technique that uses simple probability to classify items based on previous classifications. In plain English, the formula would be “the chance that a thing with trait x belongs to class c depends on (=) the chance of trait x appearing in class c, multiplied by the overall chance of class c, divided by the overall chance of getting trait x.” As a formula, it’s P(c|x) = P(x|c) * P(c) / P(x) (a worked example follows this list).
- Importance: High. Naïve Bayes is a very common, simple classification technique because it’s effective with large data sets and it can be applied to any instance in which there is a class. Google, for example, might use it to group webpages into groups for certain search engine queries.
- Nature of Data: the nature of data for Naïve Bayes is at least one class and at least two traits in a data set.
- Motive: the motive behind Naïve Bayes is to classify observations based on previous data. It’s thus considered part of predictive analysis.
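To ground the formula, here’s a tiny worked example in Python that computes P(c|x) = P(x|c) * P(c) / P(x) directly; the email counts are invented for illustration.

```python
# Invented counts: 100 emails, 30 are spam; trait x = "contains the word 'free'".
total = 100
spam = 30
with_x = 40          # emails containing the trait overall
spam_with_x = 24     # spam emails containing the trait

p_c = spam / total                 # P(c): overall chance of the class
p_x = with_x / total               # P(x): overall chance of the trait
p_x_given_c = spam_with_x / spam   # P(x|c): chance of the trait within the class

p_c_given_x = p_x_given_c * p_c / p_x  # Bayes' rule: P(c|x)
print(f"P(spam | contains 'free') = {p_c_given_x:.2f}")  # 0.60
```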
Cohorts Technique
- Description: cohorts technique is a type of clustering method used in behavioral sciences to separate users by common traits. As with clustering, it can be done intuitively or mathematically, the latter of which would simply be k-means.
- Importance: Very high. Where it resembles k-means, the cohort technique is more of a high-level counterpart. In fact, most people are familiar with it as a part of Google Analytics. It’s most common in marketing departments in corporations, rather than in research.
- Nature of Data: the nature of cohort data is data sets in which users are the observation and other fields are used as defining traits for each cohort.
- Motive: the motive for cohort analysis techniques is to group similar users and analyze how you retain them and how they churn.
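A minimal cohort-retention sketch, assuming Python with pandas installed; the activity log below is invented for illustration.

```python
import pandas as pd

# Invented activity log: one row per user per month they were active.
activity = pd.DataFrame({
    "user_id":      [1, 1, 2, 2, 2, 3, 4, 4],
    "signup_month": ["2020-01"] * 5 + ["2020-02"] * 3,
    "active_month": ["2020-01", "2020-02", "2020-01", "2020-02", "2020-03",
                     "2020-02", "2020-02", "2020-03"],
})

# Count distinct active users per signup cohort per calendar month.
cohort_counts = (activity
                 .groupby(["signup_month", "active_month"])["user_id"]
                 .nunique()
                 .unstack(fill_value=0))

# Retention = active users divided by the cohort's starting size.
cohort_sizes = activity.groupby("signup_month")["user_id"].nunique()
print(cohort_counts.div(cohort_sizes, axis=0))
```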
Factor Technique
- Description: the factor analysis technique is a way of grouping many traits into a single factor to expedite analysis. For example, factors can be used as traits for Naïve Bayes classifications instead of more general fields (see the sketch after this list).
- Importance: High. While not commonly employed in corporations, factor analysis is hugely valuable. Good data analysts use it to simplify their projects and communicate them more clearly.
- Nature of Data: the nature of data useful in factor analysis techniques is data sets with a large number of fields describing each observation.
- Motive: the motive for using factor analysis techniques is to reduce the number of fields in order to more quickly analyze and communicate findings.
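A minimal sketch, assuming Python with scikit-learn and NumPy installed. The six fields are synthetic and built to hide two underlying factors, purely for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic data: 100 observations, 6 fields that really reflect 2 hidden factors.
hidden = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 6))
X = hidden @ loadings + rng.normal(scale=0.1, size=(100, 6))

# Collapse the 6 correlated fields into 2 factors for faster downstream analysis.
fa = FactorAnalysis(n_components=2, random_state=0)
reduced = fa.fit_transform(X)
print(reduced.shape)  # (100, 2) -- same observations, far fewer fields
```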
Linear Discriminants Technique
- Description: linear discriminant analysis techniques are similar to regressions in that they use one or more independent variables to determine a dependent variable; however, the linear discriminant technique falls under a classifier method since it uses traits as independent variables and class as a dependent variable. In this way, it becomes a classifying method AND a predictive method (a short sketch follows this list).
- Importance: High. Though the analyst world speaks of and uses linear discriminants less commonly, it’s a highly valuable technique to keep in mind as you progress in data analysis.
- Nature of Data: the nature of data useful for the linear discriminant technique is data sets with many fields.
- Motive: the motive for using linear discriminants is to classify observations that would otherwise be too complex for simple techniques like Naïve Bayes.
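A minimal sketch, assuming Python with scikit-learn installed; the traits and classes are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented traits (two fields per observation) and a known class for each.
X = np.array([[1.0, 2.1], [1.2, 1.9], [0.9, 2.0],    # class 0
              [3.0, 4.2], [3.1, 3.8], [2.9, 4.0]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# Traits act as independent variables; the class is the dependent variable.
lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[1.1, 2.0], [3.2, 4.1]]))  # classify new observations
```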
Exponential Smoothing Technique
- Description: exponential smoothing is a technique falling under the forecasting method that uses a smoothing factor on prior data in order to predict future values. It can be linear or adjusted for seasonality. The basic principle behind exponential smoothing is to place a percent weight (a value between 0 and 1 called alpha) on more recent values in a series and a smaller percent weight on less recent values. The formula is: forecast = alpha * current period value + (1 - alpha) * previous smoothed value (a plain-Python sketch follows this list).
- Importance: High. Most analysts still use the moving average technique (covered next) for forecasting, though it is less efficient than exponential smoothing, because it’s easy to understand. However, good analysts will have exponential smoothing techniques in their pocket to increase the value of their forecasts.
- Nature of Data: the nature of data useful for exponential smoothing is time series data. Time series data has time as part of its fields.
- Motive: the motive for exponential smoothing is to forecast future values with a smoothing variable.
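Here’s the formula above as a short sketch in plain Python; the monthly sales figures and the alpha value are invented for illustration.

```python
def exponential_smoothing(series, alpha=0.3):
    """Return the smoothed series; the last value doubles as the next forecast."""
    smoothed = [series[0]]  # seed with the first observed value
    for value in series[1:]:
        # new estimate = alpha * current value + (1 - alpha) * previous estimate
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

monthly_sales = [120, 135, 128, 150, 145, 160]
print("next-period forecast:", round(exponential_smoothing(monthly_sales)[-1], 1))
```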
Moving Average Technique
- Description: the moving average technique falls under the forecasting method and uses an average of recent values to predict future ones. For example, to predict rainfall in April, you would take the average of rainfall from January to March. It’s simple, yet highly effective.
- Importance: Very high. While I’m personally not a huge fan of moving averages due to their simplistic nature and lack of consideration for seasonality, they’re the most common forecasting technique and therefore very important.
- Nature of Data: the nature of data useful for moving averages is time series data.
- Motive: the motive for moving averages is to predict future values in a simple, easy-to-communicate way.
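And the moving average itself as a plain-Python sketch, using the January-to-March rainfall example from above with invented figures.

```python
def moving_average_forecast(values, window=3):
    """Forecast the next value as the mean of the last `window` values."""
    recent = values[-window:]
    return sum(recent) / len(recent)

rainfall = [80, 95, 110]  # invented rainfall for January, February, March (mm)
print("April forecast:", moving_average_forecast(rainfall))  # 95.0
```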
Neural Networks Technique
- Description: neural networks are a highly complex artificial intelligence technique that replicate a human’s neural analysis through a series of hyper-rapid computations and comparisons that evolve in real time. This technique is so complex that an analyst must use computer programs to perform it.
- Importance: Medium. While the potential for neural networks is theoretically unlimited, it’s still little understood and therefore uncommon. You do not need to know it by any means in order to be a data analyst.
- Nature of Data: the nature of data useful for neural networks is data sets of astronomical size, meaning hundreds of thousands of fields and at least as many rows.
- Motive: the motive for neural networks is to understand wildly complex phenomena and data in order to act on them.
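For a taste of what those computer programs do, here’s a minimal sketch using scikit-learn’s small neural network classifier on invented toy data; real applications use vastly larger data sets.

```python
from sklearn.neural_network import MLPClassifier

# Invented toy data: two fields per observation, with a binary class label.
X = [[1, 2], [2, 1], [1, 1], [8, 9], [9, 8], [8, 8]]
y = [0, 0, 0, 1, 1, 1]

# A tiny network: one hidden layer of 8 units, trained by backpropagation.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[2, 2], [9, 9]]))  # expected: [0 1]
```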
Decision Tree Technique
- Description: the decision tree technique uses artificial intelligence algorithms to rapidly calculate possible decision pathways and their outcomes on a real-time basis. It’s so complex that computer programs are needed to perform it.
- Importance: Medium. As with neural networks, decision trees with AI are too little understood and are therefore uncommon in corporate and research settings alike.
- Nature of Data: the nature of data useful for the decision tree technique is hierarchical data sets that show multiple optional fields for each preceding field.
- Motive: the motive for decision tree techniques is to compute the optimal choices to make in order to achieve a desired result.
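The article frames decision trees as AI programs that compute decision pathways; the most accessible hands-on relative is the machine-learning decision tree classifier, sketched here with scikit-learn on invented data as a simplified illustration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented data: [hours studied, hours slept] -> passed the exam (1) or not (0).
X = [[1, 4], [2, 5], [3, 6], [6, 7], [7, 8], [8, 7]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["hours_studied", "hours_slept"]))
print(tree.predict([[5, 6]]))  # follow the learned decision pathway
```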
Evolutionary Programming Technique
- Description: the evolutionary programming technique uses a series of neural networks, sees how well each one fits a desired outcome, and selects only the best to test and retest. It’s called evolutionary because it resembles the process of natural selection, weeding out weaker options (a simplified sketch follows this list).
- Importance: Medium. As with the other AI techniques, evolutionary programming just isn’t well-understood enough to be usable in many cases. Its complexity also makes it hard to explain in corporate settings and difficult to defend in research settings.
- Nature of Data: the nature of data in evolutionary programming is data sets of neural networks, or data sets of data sets.
- Motive: the motive for using evolutionary programming is similar to decision trees: understanding the best possible option from complex data.
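Here’s the selection-and-mutation loop in miniature, shown in plain Python on a simple numeric objective instead of neural networks to keep it readable; everything in it is invented for illustration.

```python
import random

def fitness(x):
    """Invented objective: candidates closer to 7 are fitter."""
    return -(x - 7) ** 2

random.seed(0)
population = [random.uniform(-10, 10) for _ in range(20)]  # random candidates

for generation in range(50):
    # Keep the fittest quarter, then refill the population with mutated copies.
    survivors = sorted(population, key=fitness, reverse=True)[:5]
    population = survivors + [s + random.gauss(0, 0.5)
                              for s in survivors for _ in range(3)]

print(round(max(population, key=fitness), 2))  # converges toward 7
```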
Fuzzy Logic Technique
- Description: fuzzy logic is a type of computing based on “approximate truths” rather than simple truths such as “true” and “false.” It is essentially two tiers of classification. For example, to say whether “apples are good,” you first need to classify that “good is x, y, z.” Only then can you say apples are good. Another way to see it is as helping a computer evaluate truth the way humans do: “definitely true, probably true, maybe true, probably false, definitely false.” (A toy sketch follows this list.)
- Importance: Medium. Like the other AI techniques, fuzzy logic is uncommon in both research and corporate settings, which means it’s less important in today’s world.
- Nature of Data: the nature of fuzzy logic data is huge data tables that include other huge data tables with a hierarchy including multiple subfields for each preceding field.
- Motive: the motive of fuzzy logic is to replicate human truth valuations in a computer in order to model human decisions based on past data. The obvious possible application is marketing.
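A toy sketch in plain Python: a membership function maps a numeric rating onto a graded truth value, which is then translated into the labels from the description above. The thresholds are invented for illustration.

```python
def truth_degree(score, low=3.0, high=7.0):
    """Map a numeric rating onto a fuzzy truth degree between 0 and 1."""
    if score <= low:
        return 0.0
    if score >= high:
        return 1.0
    return (score - low) / (high - low)  # linear ramp between the anchors

def label(degree):
    """Translate a fuzzy degree into graded truth labels."""
    if degree >= 0.9: return "definitely true"
    if degree >= 0.6: return "probably true"
    if degree >= 0.4: return "maybe true"
    if degree >= 0.1: return "probably false"
    return "definitely false"

for rating in (2.0, 4.5, 6.0, 8.0):
    d = truth_degree(rating)
    print(f'"Apples are good" at rating {rating}: {label(d)} (degree {d:.2f})')
```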
Text Analysis Technique
- Description: text analysis techniques fall under the qualitative data analysis type and use text to extract insights.
- Importance: Medium. Text analysis techniques, like the rest of the qualitative analysis type, are most valuable for researchers.
- Nature of Data: the nature of data useful in text analysis is words.
- Motive: the motive for text analysis is to trace themes in a text across sets of very long documents, such as books.
Coding Technique
- Description: the coding technique is used in textual analysis to turn ideas into uniform phrases and analyze the number of times and the ways in which those ideas appear. For this reason, some consider it a quantitative technique as well. You can learn more about coding and the other qualitative techniques here.
- Importance: Very high. If you’re a researcher working in social sciences, coding is THE analysis technique, and for good reason. It’s a great way to add rigor to analysis. That said, it’s less common in corporate settings.
- Nature of Data: the nature of data useful for coding is long text documents.
- Motive: the motive for coding is to make tracing ideas on paper more than an exercise of the mind by quantifying them and understanding them through descriptive methods.
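A toy sketch of the quantifying step in Python: raw interview phrases are mapped to uniform codes, then counted. The phrases and the code book are invented for illustration.

```python
from collections import Counter

# Invented code book: raw phrases mapped to uniform codes (themes).
code_book = {
    "i felt ignored": "lack_of_recognition",
    "nobody noticed my work": "lack_of_recognition",
    "the hours were too long": "workload",
    "i was always tired": "workload",
}

interview_snippets = [
    "i felt ignored", "the hours were too long",
    "nobody noticed my work", "i felt ignored",
]

# Coding step: translate each snippet into its uniform code, then count.
codes = [code_book[s] for s in interview_snippets]
print(Counter(codes))  # Counter({'lack_of_recognition': 3, 'workload': 1})
```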
Idea Pattern Technique
- Description: the idea pattern analysis technique fits into coding as the second step of the process. Once themes and ideas are coded, simple descriptive analysis tests may be run. Some people even cluster the ideas!
- Importance: Very high. If you’re a researcher, idea pattern analysis is as important as the coding itself.
- Nature of Data: the nature of data useful for idea pattern analysis is already coded themes.
- Motive: the motive for the idea pattern technique is to trace ideas in otherwise unmanageably large documents.
Word Frequency Technique
- Description: word frequency is a qualitative technique that stands in opposition to coding and uses an inductive approach to locate specific words in a document in order to understand its relevance. Word frequency is essentially the descriptive analysis of qualitative data because it uses stats like mean, median, and mode to gather insights.
- Importance: High. As with the other qualitative approaches, word frequency is very important in social science research, but less so in corporate settings.
- Nature of Data: the nature of data useful for word frequency is long, informative documents.
- Motive: the motive for word frequency is to locate target words to determine the relevance of a document in question.
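Here’s a minimal word-frequency sketch in Python; the document text and target words are invented for illustration.

```python
import re
from collections import Counter

document = """Prices rose sharply this quarter. Management expects prices
to stabilize, but analysts warn that rising costs may push prices higher."""

# Normalize to lowercase words and count every word in the document.
words = re.findall(r"[a-z']+", document.lower())
counts = Counter(words)

# Check how often the target words appear to judge the document's relevance.
for target in ("prices", "costs", "revenue"):
    print(target, counts[target])  # prices 3, costs 1, revenue 0
```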
Types of data analysis in research
Types of data analysis in research methodology include every item discussed in this article. As a list, they are:
- Quantitative
- Qualitative
- Mathematical
- Machine Learning and AI
- Descriptive
- Diagnostic
- Predictive
- Prescriptive
- Clustering
- Classification
- Forecasting
- Optimization
- Content
- Narrative
- Discourse
- Framework
- Grounded theory
- Artificial Neural Networks
- Decision Trees
- Evolutionary Programming
- Fuzzy Logic
- Text analysis
- Coding
- Idea Pattern Analysis
- Word Frequency Analysis
- Regression
- Naïve Bayes
- Exponential smoothing
- Moving average
- Cohort
- Factor
- Linear discriminant
Types of data analysis in qualitative research
As a list, the types of data analysis in qualitative research are the following methods:
- Content
- Narrative
- Discourse
- Framework
- Grounded theory
Types of data analysis in quantitative research
As a list, the types of data analysis in quantitative research are:
- Mathematical
- Machine Learning and AI
- Descriptive
- Diagnostic
- Predictive
- Prescriptive
Data analysis methods
As a list, data analysis methods are:
- Clustering
- Classification
- Forecasting
- Optimization
- Content (qualitative)
- Narrative (qualitative)
- Discourse (qualitative)
- Framework (qualitative)
- Grounded theory (qualitative)
Quantitative data analysis methods
As a list, quantitative data analysis methods are:
- Clustering
- Classification
- Forecasting
- Optimization
Annex
Tabular View of Data Analysis Types, Methods, and Techniques
| Level | Items |
|---|---|
| Types (Numeric or Non-numeric) | Quantitative, Qualitative |
| Types tier 2 (Traditional Numeric or New Numeric) | Mathematical, Artificial Intelligence (AI) |
| Types tier 3 (Informative Nature) | Descriptive, Diagnostic, Predictive, Prescriptive |
| Methods | Clustering, Classification, Forecasting, Optimization, Content analysis, Narrative analysis, Discourse analysis, Framework analysis, Grounded theory |
| Techniques | Clustering (doubles as technique), Regression (linear and multivariable), Naïve Bayes, Cohorts, Factors, Linear discriminants, Exponential smoothing, Moving average, Neural networks, Decision trees, Evolutionary programming, Fuzzy logic, Text analysis, Coding, Idea pattern analysis, Word frequency |