Reproducible Research Using R: 6 Analyzing Categorical Data

6 Analyzing Categorical Data

6.1 Introduction

In some situations we have no numerical data, only categorical data. What do we do then? The mean and median are no longer meaningful (only the mode still applies), and we cannot make a scatterplot, run a correlation matrix, or use most of the other techniques covered in this textbook so far.

In situations like these, when we only have categorical values, we turn to categorical analysis. Today, we will be looking at three different treatment options aimed at inhibiting infections. The question we are trying to answer is:

Does the type of treatment people get affect whether they get an infection?

We’ll use a dataset called Infection_Treatments.xlsx. Each row is a unique participant, indicating which treatment they received and whether or not they became infected.

6.2 Learning Objectives {cat-objectives}

By the end of this chapter, you will be able to:

  • Identify situations in which categorical analysis is appropriate and numerical methods are not
  • Load and inspect categorical data to confirm variable types and structure
  • Create and interpret contingency tables using base R and tidyverse tools
  • Calculate and interpret row- and column-based percentages for categorical data
  • Visualize relationships between categorical variables using stacked and grouped bar charts
  • Conduct and interpret a chi-square test of independence using chisq.test()
  • Explain the role of expected counts, degrees of freedom, and p-values in chi-square testing
  • Use residuals and standardized residuals to identify cells that contribute most to a chi-square result
  • Quantify the strength of association between categorical variables using Cramer’s V

6.3 Loading Our Data

As we have done in earlier chapters, we are going to load the Infection_Treatments.xlsx dataset into R using the readxl package and its read_xlsx() function.

library(readxl)
library(tidyverse)

infection <- read_xlsx("Infection_Treatments.xlsx")

summary(infection)
#>   Infection          Treatment        
#>  Length:150         Length:150        
#>  Class :character   Class :character  
#>  Mode  :character   Mode  :character

str(infection)
#> tibble [150 × 2] (S3: tbl_df/tbl/data.frame)
#>  $ Infection: chr [1:150] "Yes" "Yes" "Yes" "Yes" ...
#>  $ Treatment: chr [1:150] "Control" "Control" "Control" "Control" ...

library(skimr)

skim(infection)
Table: Data summary from skim(infection)

Name: infection
Number of rows: 150
Number of columns: 2
Column type frequency: character (2)
Group variables: None

Variable type: character

skim_variable  n_missing  complete_rate  min  max  empty  n_unique  whitespace
Infection      0          1              2    3    0      2         0
Treatment      0          1              7    13   0      3         0

Our data loaded cleanly. There are two columns, Infection and Treatment, which are both categorical, and there are 150 rows. Thankfully, we do not have any missing data. Let’s dig a little deeper into our data.

6.4 Contingency Tables

When there are only categorical variables, we need to create what are called contingency tables (also known as frequency tables). We first touched on contingency tables in section 2.5.1, where we used the table() command from base R and the count() function from dplyr/tidyverse. In addition, the xtabs() command from the stats package can also be used. Which one you use is a matter of personal preference.

# All three are at the same frequency
table(infection$Treatment)
#> 
#>       Control     Cranberry Lactobacillus 
#>            50            50            50

# This is not an even split like Treatment. There are more people that were not infected vs infected.
infection %>% count(Infection)
#> # A tibble: 2 × 2
#>   Infection     n
#>   <chr>     <int>
#> 1 No          104
#> 2 Yes          46

# Now the number of infected and not infected per treatment.
table(infection$Treatment,infection$Infection)
#>                
#>                 No Yes
#>   Control       32  18
#>   Cranberry     42   8
#>   Lactobacillus 30  20

# We can also create this using the xtabs command from the stats package
contingency_table <- xtabs(~Treatment+Infection, data=infection)

contingency_table
#>                Infection
#> Treatment       No Yes
#>   Control       32  18
#>   Cranberry     42   8
#>   Lactobacillus 30  20

# Let us see what it looks like if we reverse it.
reverse_table <- xtabs(~Infection+Treatment, data=infection)

reverse_table
#>          Treatment
#> Infection Control Cranberry Lactobacillus
#>       No       32        42            30
#>       Yes      18         8            20

Through either approach (table() or xtabs()), we can see that Cranberry has the lowest number infected of the three treatments. What if we want to understand this from a percentage standpoint?

Let’s find out.

library(janitor)

c_table <- infection %>%
  tabyl(Treatment, Infection) %>%         # Makes a contingency table
  adorn_totals("row") %>%                 # Adds totals for each treatment
  adorn_percentages("row") %>%            # Adds row percentages
  adorn_pct_formatting(digits = 1) %>%    # Makes it readable (adds % signs)
  adorn_ns()                              # Combines counts + percentages

c_table
#>      Treatment          No        Yes
#>        Control 64.0%  (32) 36.0% (18)
#>      Cranberry 84.0%  (42) 16.0%  (8)
#>  Lactobacillus 60.0%  (30) 40.0% (20)
#>          Total 69.3% (104) 30.7% (46)

From a percentage standpoint, we see that 84% of people who were given cranberries were not infected! That is much higher than either of the other treatment options, and higher than the overall percentage of people who were not infected as well.

It seems that cranberry is taking an early lead in terms of which treatment is the most impactful. We have some preliminary numbers, so now we can visualize.

6.5 Visualizations

With purely categorical data, the most commonly recommended visualization is a bar chart, which lets a viewer compare differences between the categories at a glance. There are two main options: a stacked or a grouped bar chart. From a coding perspective, the main difference is what we pass to the position argument of geom_bar().

# We can use the library ggthemes to add some flavor to our plots
library(ggthemes)

# Creating a stacked bar chart
stacked <- ggplot(infection, aes(x = Treatment, fill = Infection)) +
  geom_bar(position = "fill") +   # "fill" stacks to 100% height
  labs(
    title = "Proportion of Infections by Treatment Type",
    y = "Proportion of Participants",
    x = "Treatment"
  ) +
  theme_solarized()
# Creating a grouped bar chart
grouped <- ggplot(infection, aes(x = Treatment, fill = Infection)) +
  geom_bar(position = "dodge") +
  geom_text(
    stat = "count",                          # use counts from geom_bar
    aes(label = after_stat(count)),          # label each bar with its count
    position = position_dodge(width = 0.9),  # place correctly over side-by-side bars
    vjust = -0.3,                            # move labels slightly above bars
    size = 4
  ) +
  labs(
    title = "Number of Infections by Treatment Type",
    x = "Treatment",
    y = "Count of Participants",
    fill = "Infection Outcome"
  ) +
  theme_classic()
# We can put them side by side to compare what they look like
library(patchwork)

grouped + stacked + plot_annotation(title = "Visualizing Infection Outcomes by Treatment (Grouped vs. Stacked)")

Figure 6.1: Side-by-side comparison of stacked and grouped bar charts visualizing infection outcomes by treatment type. The stacked chart emphasizes proportional differences, while the grouped chart highlights raw counts. Together, these plots illustrate how different visual encodings can influence interpretation of categorical data.

# We can also use the plotly package to make the visual more interactive
library(plotly)
#> 
#> Attaching package: 'plotly'
#> The following object is masked from 'package:ggplot2':
#> 
#>     last_plot
#> The following object is masked from 'package:stats':
#> 
#>     filter
#> The following object is masked from 'package:graphics':
#> 
#>     layout

ggplotly(grouped)

Figure 6.2: Interactive grouped bar chart displaying counts of infection outcomes by treatment type. Interactivity allows users to hover over bars to inspect values directly, supporting exploratory analysis while preserving the same information shown in the static grouped bar chart.

6.6 Chi-Square Test

Now, we have done some investigation on our data through contingency tables and visualizations. It is time to run what is called a chi-square test. This allows us to see if two categorical variables are related to each other.

chisq.test(contingency_table)
#> 
#>  Pearson's Chi-squared test
#> 
#> data:  contingency_table
#> X-squared = 7.7759, df = 2, p-value = 0.02049

Performing it on our contingency table, we get:

  1. X^2 - the test statistic, which tells us how far our observed counts are from the counts we would expect if the variables were independent.
  2. df - Degrees of freedom = (rows - 1)(columns - 1). For 3 treatments and 2 outcomes, df = (3-1)(2-1) = 2.
  3. p-value - tells us whether the result is statistically significant.

We see that our X^2 value is 7.78, our df is 2, and our p-value is 0.0204871. Since the p-value is less than .05, we can conclude that there is a significant relationship between treatment and outcome.
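As a check on the mechanics, the headline numbers from chisq.test() can be reproduced by hand. The following is a minimal sketch; the matrix simply restates the observed counts from the contingency table above:

```r
# Observed counts, restated from the contingency table above
observed <- matrix(c(32, 18,
                     42,  8,
                     30, 20),
                   nrow = 3, byrow = TRUE,
                   dimnames = list(Treatment = c("Control", "Cranberry", "Lactobacillus"),
                                   Infection = c("No", "Yes")))

# Expected count per cell under independence: (row total * column total) / N
expected <- outer(rowSums(observed), colSums(observed)) / sum(observed)

# Test statistic: sum over cells of (observed - expected)^2 / expected
x2 <- sum((observed - expected)^2 / expected)

# Degrees of freedom: (rows - 1) * (columns - 1)
df <- (nrow(observed) - 1) * (ncol(observed) - 1)

# Upper-tail p-value from the chi-square distribution
p <- pchisq(x2, df = df, lower.tail = FALSE)

round(x2, 5)  # 7.77592, matching chisq.test()
```

Working through the formula yourself once makes the rest of the chapter (expected counts, residuals, contributions) much easier to follow, since they are all built from these same pieces.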

Now that we know the relationship is significant, we need to dive a little deeper. Cranberry looks like the leader, but we can solidify that conclusion.

chi_square_test <- chisq.test(contingency_table)

# All the parts of the chi square can be called.
chi_square_test$statistic
#> X-squared 
#>   7.77592

chi_square_test$parameter
#> df 
#>  2

chi_square_test$p.value
#> [1] 0.0204871

chi_square_test$method
#> [1] "Pearson's Chi-squared test"

chi_square_test$data.name
#> [1] "contingency_table"

# What our data already is
chi_square_test$observed
#>                Infection
#> Treatment       No Yes
#>   Control       32  18
#>   Cranberry     42   8
#>   Lactobacillus 30  20

# What our data would look like if there was no relationship
chi_square_test$expected
#>                Infection
#> Treatment             No      Yes
#>   Control       34.66667 15.33333
#>   Cranberry     34.66667 15.33333
#>   Lactobacillus 34.66667 15.33333

# Pearson residuals: (observed - expected) / sqrt(expected)
# Positive = more than expected; negative = less than expected
# Bigger magnitude = bigger difference from expected
chi_square_test$residuals
#>                Infection
#> Treatment               No        Yes
#>   Control       -0.4529108  0.6810052
#>   Cranberry      1.2455047 -1.8727644
#>   Lactobacillus -0.7925939  1.1917591

# Standardized residuals
# These behave like actual z-scores
chi_square_test$stdres
#>                Infection
#> Treatment              No       Yes
#>   Control       -1.001671  1.001671
#>   Cranberry      2.754595 -2.754595
#>   Lactobacillus -1.752924  1.752924

Our deep dive uncovered some very important things about our experiment:

  • Expected: all of the expected values differed from the actual observations. If there were no relationship between the two variables, we would expect each treatment to have about 35 people not infected and 15 people infected.
  • Residuals/stdres: this is where we really start to tie everything together. With the residuals, we look at the sign (positive or negative) and the magnitude relative to the other cells. Cranberry is the only one of the three treatments with fewer people infected than anticipated, about 2.75 standard deviations below expected. Both other treatments had more people infected than expected.
    • Note: Standardized residuals behave like z-scores — values beyond ±2 suggest cells contributing most strongly to the overall χ² statistic.
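The z-score analogy can be made concrete by recomputing both kinds of residuals from the observed counts. A minimal sketch, using the same table as above:

```r
# Observed counts and expected counts under independence, as above
O <- matrix(c(32, 18, 42, 8, 30, 20), nrow = 3, byrow = TRUE)
N <- sum(O)
E <- outer(rowSums(O), colSums(O)) / N

# Pearson residuals (chisq.test()$residuals): (O - E) / sqrt(E)
pearson <- (O - E) / sqrt(E)

# Standardized residuals (chisq.test()$stdres) additionally adjust for the
# row and column proportions, which is what makes them behave like z-scores
stdres <- (O - E) / sqrt(E * outer(1 - rowSums(O) / N, 1 - colSums(O) / N))

round(stdres, 3)  # row 2 (Cranberry) is +/-2.755, beyond the +/-2 rule of thumb
```

The adjustment in the denominator is why stdres values are always a bit larger in magnitude than the plain residuals for the same cell.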

We tie this information together to get a very strong case for cranberry. There is just one piece of the puzzle left to bring this home.

6.7 Cross Tables

In the gmodels package we can utilize the CrossTable command. Beware: it can be overwhelming at first, as a lot of information is thrown at you.

library(gmodels)

# This allows us to see the contributions each category has on chi-square.
# Note: CrossTable(infection$Treatment, infection$Infection) alone works if you want to stick to the defaults
CrossTable(infection$Treatment, infection$Infection,
           prop.chisq = TRUE,    # Shows the chi-square contribution
           chisq = TRUE,         # shows chi-square test
           expected = TRUE,      # shows expected counts
           prop.r = TRUE,        # shows row proportions
           prop.c = TRUE)        # shows column proportions
#> 
#>  
#>    Cell Contents
#> |-------------------------|
#> |                       N |
#> |              Expected N |
#> | Chi-square contribution |
#> |           N / Row Total |
#> |           N / Col Total |
#> |         N / Table Total |
#> |-------------------------|
#> 
#>  
#> Total Observations in Table:  150 
#> 
#>  
#>                     | infection$Infection 
#> infection$Treatment |        No |       Yes | Row Total | 
#> --------------------|-----------|-----------|-----------|
#>             Control |        32 |        18 |        50 | 
#>                     |    34.667 |    15.333 |           | 
#>                     |     0.205 |     0.464 |           | 
#>                     |     0.640 |     0.360 |     0.333 | 
#>                     |     0.308 |     0.391 |           | 
#>                     |     0.213 |     0.120 |           | 
#> --------------------|-----------|-----------|-----------|
#>           Cranberry |        42 |         8 |        50 | 
#>                     |    34.667 |    15.333 |           | 
#>                     |     1.551 |     3.507 |           | 
#>                     |     0.840 |     0.160 |     0.333 | 
#>                     |     0.404 |     0.174 |           | 
#>                     |     0.280 |     0.053 |           | 
#> --------------------|-----------|-----------|-----------|
#>       Lactobacillus |        30 |        20 |        50 | 
#>                     |    34.667 |    15.333 |           | 
#>                     |     0.628 |     1.420 |           | 
#>                     |     0.600 |     0.400 |     0.333 | 
#>                     |     0.288 |     0.435 |           | 
#>                     |     0.200 |     0.133 |           | 
#> --------------------|-----------|-----------|-----------|
#>        Column Total |       104 |        46 |       150 | 
#>                     |     0.693 |     0.307 |           | 
#> --------------------|-----------|-----------|-----------|
#> 
#>  
#> Statistics for All Table Factors
#> 
#> 
#> Pearson's Chi-squared test 
#> ------------------------------------------------------------
#> Chi^2 =  7.77592     d.f. =  2     p =  0.0204871 
#> 
#> 
#> 

CrossTable gave us a lot of information (nothing we cannot handle). Thankfully, there is a legend at the beginning of the results that identifies what each number is. Some of these we have already discovered, such as the expected values and the chi-square X^2 value, but there is also some new information. Specifically, we are able to see the chi-square contribution. To break this down:

  • When we run the chi-square test, we get the X^2 value. Importantly, each combination of the variables contributes individually to this number. We are looking for the biggest contributors to not only understand the chi-square value better, but to also understand what is the most impactful.
  • In this result, the third number in each cell is the chi-square contribution. It is evident that cranberry has the highest contribution to the chi-square test statistic.

6.8 Contribution

If we want to hone in on this more, we can take the contributions and turn them into percentages, answering the question:

What percentage of the X^2 value is each responsible for?

# Calculate contribution to chi-square statistic
# X^2= ((observed-expected)^2)/expected
contributions <- ((chi_square_test$observed-chi_square_test$expected)^2)/chi_square_test$expected

contributions
#>                Infection
#> Treatment              No       Yes
#>   Control       0.2051282 0.4637681
#>   Cranberry     1.5512821 3.5072464
#>   Lactobacillus 0.6282051 1.4202899

percent_contributions <- contributions / chi_square_test$statistic * 100

percent_contributions
#>                Infection
#> Treatment              No       Yes
#>   Control        2.637993  5.964158
#>   Cranberry     19.949821 45.103943
#>   Lactobacillus  8.078853 18.265233

Through this, we discovered that about 65% of the X^2 value is due to the cranberry treatment.

You already know that visualizations really help paint the picture, and chi-square contributions are not exempt. For this, the pheatmap package comes in handy.

library(pheatmap)

# Create heatmap for percentage contributions
pheatmap(percent_contributions,
         display_numbers = TRUE,
         cluster_rows = FALSE,
         cluster_cols = FALSE,
         main = "% Contribution to Chi-Square Statistic")

Figure 6.3: Heatmap showing the percentage contribution of each cell to the overall chi-square statistic. Darker shading indicates cells that contribute more strongly to the chi-square value, highlighting which combinations of treatment type and infection outcome drive the observed association. This visualization helps identify where deviations from expected counts are largest following a significant chi-square test.

That glaring, deep red box? That is the Cranberry/Yes cell!

Tying all of this together, we have discovered:

  1. Cranberry has the most people who were not infected (the fewest infected).
  2. The chi-square test shows a significant relationship between treatment and outcome.
  3. Looking at the residuals, cranberry was the only treatment with fewer people infected than expected.
  4. Looking at the chi-square contributions, cranberry had the largest, with about 65% of the test statistic accounted for by cranberry.

Now, how impactful is this overall? What is the effect of treatment overall on outcome?

6.9 CramerV

Cramer’s V tells you the strength of the relationship between two categorical variables, similar to a correlation coefficient. It is important to note that this measures effect size, not statistical significance. We can utilize the cramerV command from the rcompanion package.

library(rcompanion)

cramerV(contingency_table)
#> Cramer V 
#>   0.2277

The value derived from this is 0.23. Again, similar to the correlation coefficient, there are guidelines that can be utilized to understand the strength.

Table 6.1: Guidelines for interpreting the strength of association using Cramer’s V. These ranges provide general heuristics for describing effect size in categorical analyses and should be interpreted in context rather than as strict cutoffs.

Cramer’s V Value   Strength of Relationship
0.00–0.10          Very Weak
0.10–0.30          Weak
0.30–0.50          Moderate
>0.50              Strong

Overall, treatment had a weak-to-moderate effect on outcome. This suggests that while treatment type and infection outcome are related, the relationship isn’t strong, meaning that other factors likely play a larger role.
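Under the hood, Cramer’s V is computed with the standard formula V = sqrt(X^2 / (N * (k - 1))), where k is the smaller of the number of rows and columns. A quick hand check, plugging in the statistic from our chisq.test() output, reproduces the value:

```r
x2 <- 7.77592    # chi-square statistic from chisq.test()
N  <- 150        # total number of observations
k  <- min(3, 2)  # min(number of rows, number of columns) = 2

V <- sqrt(x2 / (N * (k - 1)))
round(V, 4)      # 0.2277, matching cramerV(contingency_table)
```

Because N sits in the denominator, a large sample can make a tiny X^2 significant while V stays small, which is exactly the significance-versus-effect-size distinction noted above.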

6.10 Interpretation

  • The chi-square test shows X^2 = 7.78, df = 2, p = 0.02
  • This means there is a statistically significant relationship between Treatment and Infection.
  • The Cranberry group had fewer infections than expected and contributes most to the chi-square statistic.
  • Cramer’s V shows the relationship is weak-to-moderate in strength.
  • Drink your cranberry juice!

6.11 Key Takeaways

  • The Chi-Square test helps us determine whether two categorical variables are related.
  • It compares the observed frequencies (what we saw) to the expected frequencies (what we’d expect by chance).
  • A large Chi-Square statistic and a p-value < .05 suggest that the relationship is statistically significant.
  • Degrees of freedom (df) are based on the number of categories: (rows - 1) * (columns - 1).
  • Cramer’s V measures the strength of the relationship, similar to a correlation coefficient:
    • 0.00–0.10 = very weak | 0.10–0.30 = weak | 0.30–0.50 = moderate | >0.50 = strong
  • Residuals show which specific groups contribute most to the Chi-Square result.
  • Visualizations (like bar charts or heatmaps) make it easier to interpret where the differences lie.
  • Statistical significance ≠ practical significance — even weak relationships can be significant with large samples.
  • Example takeaway: Cranberry treatment showed fewer infections than expected — a weak but meaningful effect!

6.12 Checklist

When running a Chi-Square test, have you:

6.13 Key Functions & Commands

The following functions and commands are introduced or reinforced in this chapter to support categorical data analysis, contingency tables, and Chi-Square testing.

  • table() (base R)
    • Creates basic contingency (frequency) tables for categorical variables.
  • xtabs() (base R)
    • Constructs contingency tables using a formula interface, useful for multi-way tables.
  • tabyl() (janitor)
    • Generates clean contingency tables that integrate easily with percentage calculations.
  • adorn_percentages() (janitor)
    • Converts contingency table counts into row- or column-based percentages.
  • adorn_ns() (janitor)
    • Displays counts and percentages together for clearer interpretation.
  • chisq.test() (stats)
    • Performs a Chi-Square test of independence to assess whether two categorical variables are related.
  • CrossTable() (gmodels)
    • Produces detailed cross-tabulations including expected counts, proportions, and chi-square contributions.
  • pheatmap() (pheatmap)
    • Visualizes Chi-Square contributions or residuals using a heatmap.
  • cramerV() (rcompanion)
    • Computes Cramer’s V to measure the strength of association between categorical variables.

6.14 Example APA-style Write-up

The following example demonstrates one acceptable way to report the results of a chi-square test of independence in APA style.

Chi-Square Test of Independence

A chi-square test of independence indicated a significant association between treatment and infection outcome, χ²(2, N = 150) = 7.78, p = .02, with a weak-to-moderate effect size, Cramer’s V = .23. Participants in the cranberry condition were observed to have fewer infections than expected based on the contingency table under the assumption of independence.

6.15 💡 Reproducibility Tip:

It is essential to both visually and structurally inspect your data, because data are not always what they seem.

Consider Advanced Placement (AP) exam scores in U.S. high schools. Students receive scores from 1 to 5. While these values look numeric, they are actually categorical—there is no meaningful value like 2.5. Treating them as numeric can lead to incorrect analyses. The same issue commonly arises with variables like zip codes, which may be stored as numbers but represent categories, not quantities.

Although functions like View() or head() are useful for visually checking your data, they do not reveal how R is interpreting each variable. Always use functions like str() to confirm that variables are stored with the correct data type before running an analysis.
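A tiny illustration of the point (the scores here are made up for demonstration):

```r
# AP scores stored as numbers -- R will treat them as numeric
ap <- data.frame(score = c(3, 5, 2, 4, 5, 1))

str(ap)         # $ score: num ... -> numeric, so math "works"
mean(ap$score)  # 3.33 -- a score no student can actually receive

# Convert to an ordered factor so R treats the scores as categories
ap$score <- factor(ap$score, levels = 1:5, ordered = TRUE)

str(ap)          # $ score: Ord.factor -- now categorical
table(ap$score)  # counts per score: the appropriate summary
```

After the conversion, functions like mean() will warn or return NA instead of silently producing a meaningless number.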

Analyses based on incorrectly typed variables are not reproducible—because they are not valid to begin with.
