8 Processing and joining datasets for cleaning

Here we will continue using the “Workflow” block and start moving over to the third block, “Create project data”, in Figure 8.1.

Figure 8.1: Section of the overall workflow we will be covering.

8.1 Learning objectives

The overall learning outcome for this session is to:

  1. Describe a few ways of joining data and processing character data, and apply them to an existing sequence of data processing steps.

Specific objectives are to:

  1. Describe what regular expressions are in very simple terms, and then use a regular expression on character data.
  2. Describe a few ways that data can be joined and identify which join is appropriate for a given situation. Then apply one join type to two datasets to create a single dataset.
  3. Demonstrate how to use functionals to repeatedly join more than two datasets together.
  4. Apply the function case_when() in situations that require nested conditionals (if-else).
  5. Use the usethis::use_data() function to save the final, fully joined dataset as an .rda file in data/.

8.2 Processing character data

Reading task: ~5 minutes

Before we go into joining datasets together, we have to do a bit of processing first. Specifically, we want to get the user ID from the file_path_id character data. Whenever you are processing and cleaning data, you will very likely encounter and deal with character data. A wonderful package to use for working with character data is called stringr, which we’ll use to extract the user ID from the file_path_id column.

The main driver behind the functions in stringr is regular expressions (or regex for short). These expressions are powerful, very concise ways of finding patterns in text. Because they are so concise, though, they are also very difficult to learn, write, and read, even for experienced users. That’s because certain characters like [ or ? have special meanings. For instance, [aeiou] is regex for “find one character in a string that is either a, e, i, o, or u”. The [] in this case means “find one of the characters between the two brackets”. We won’t cover regex in much depth in this course; some great resources for learning more are the R for Data Science regex section, the stringr regex page, as well as the help doc ?regex.
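
To get a feel for this, here is a small optional demonstration you could try in the Console (the strings are made up for illustration):

Console
library(stringr)
# Extract the first vowel found in each string
str_extract(c("workflow", "mmash"), "[aeiou]")
#> [1] "o" "a"
# Check whether each string contains a vowel at all
str_detect(c("mmash", "rr"), "[aeiou]")
#> [1]  TRUE FALSE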

We’ve already used regular expressions a bit in the dir_ls() function with the regexp argument to find our data files. There, we wanted to find, for instance, the pattern "user_info.csv" in all the folders and files. In this case, though, we want to extract the user ID pattern, from user_1 to user_22. So how would we go about extracting this pattern?

8.3 Exercise: Brainstorm a regex that will match for the user ID

Time: 10 minutes.

In your groups, do these tasks. Try not to look ahead or at the solution section 😉! When the time is up, we’ll share some ideas and go over what the regex will be.

  1. Looking at the file_path_id column, list what is similar in the user ID between rows and what is different.
  2. Discuss and verbally describe (in English, not regex) what text pattern you might use to extract the user ID.
  3. Use the list below to think about how you might convert the English description of the text pattern to a regex. This will probably be very hard, but try anyway. (A short demo of these constructs follows the list.)
    • When characters are written as is, regex will find those characters, e.g. user will find only user.
    • Use [] to find one of the possible characters between the brackets, e.g. [12] means 1 or 2, and [ab] means “a” or “b”. To find a range of numbers or letters, use - between the start and end of the range, e.g. [1-3] means 1 to 3, and [a-c] means “a” to “c”.
    • Use ? if the preceding character might or might not be there, e.g. ab? means “a”, optionally followed by “b”, and 1[1-2]? means 1, optionally followed by a 1 or 2.
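
To make these constructs concrete, here is a small optional demo you could run in the Console (the example strings are made up and unrelated to our data):

Console
library(stringr)
# `[a-c]` finds one character in the range a to c
str_extract("banana", "[a-c]")
#> [1] "b"
# `?` makes the preceding character optional
str_extract(c("cat", "cart"), "car?t")
#> [1] "cat"  "cart"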

Once you’ve done these tasks, we’ll discuss all together and go over what the regex would be to extract the user ID.

Click for the solution. Only click if you are struggling or are out of time.
"user_[1-9][0-9]?"

Make sure to reinforce that while regex is incredibly complicated, there are some basic things you can do with it that are quite powerful.

More or less, this section and exercise are meant to introduce the idea and concept of regex, not to really teach it, since that is well beyond the scope of this course and its time frame.

Go over the solution. The explanation is that the pattern will find anything that has user_ followed by a number from 1 to 9 and maybe followed by another number from 0 to 9.
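
To convince yourself and the participants that the pattern works, you could test it on a few example strings in the Console:

Console
stringr::str_extract(c("user_1", "user_9", "user_12", "user_22"), "user_[1-9][0-9]?")
#> [1] "user_1"  "user_9"  "user_12" "user_22"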

8.4 Using regular expressions to extract text

Now that we’ve identified a possible regex to use to extract the user ID, let’s test it out on the user_info_df data. Once it works, we will convert it into a function and move (cut and paste) it into the R/functions.R file.

Since we will create a new column for the user ID, we will use the mutate() function from the dplyr package. We’ll use the str_extract() function from the stringr package to “extract a string” by using the regex user_[1-9][0-9]? that we discussed in the exercise. Since we’re going to use stringr, let’s add it as a package dependency by typing this in the Console:

Console
usethis::use_package("stringr")

We’re also using an argument to mutate() you might not have seen previously, called .before. This will insert the new user_id column before the column we specify; we do this purely for visual reasons, since it makes the newly created column easier to see when we run the code. In your doc/learning.qmd file, create a new header called ## Using regex for user ID at the bottom of the document, and create a new code chunk below it.

Walk through writing this code, briefly explain/remind how to use mutate, and about the stringr function.

doc/learning.qmd
user_info_df <- import_multiple_files("user_info.csv", import_user_info)
# Note: your file paths and data may look slightly different.
user_info_df |>
  mutate(
    user_id = str_extract(
      file_path_id,
      "user_[1-9][0-9]?"
    ),
    .before = file_path_id
  )
#> # A tibble: 22 × 6
#>    user_id file_path_id                   gender weight height   age
#>    <chr>   <chr>                          <chr>   <dbl>  <dbl> <dbl>
#>  1 user_1  data-raw/mmash/user_1/user_in… M          65    169    29
#>  2 user_10 data-raw/mmash/user_10/user_i… M          85    180    27
#>  3 user_11 data-raw/mmash/user_11/user_i… M         115    186    27
#>  4 user_12 data-raw/mmash/user_12/user_i… M          67    170    27
#>  5 user_13 data-raw/mmash/user_13/user_i… M          74    180    25
#>  6 user_14 data-raw/mmash/user_14/user_i… M          64    171    27
#>  7 user_15 data-raw/mmash/user_15/user_i… M          80    180    24
#>  8 user_16 data-raw/mmash/user_16/user_i… M          67    176    27
#>  9 user_17 data-raw/mmash/user_17/user_i… M          60    175    24
#> 10 user_18 data-raw/mmash/user_18/user_i… M          80    180     0
#> # ℹ 12 more rows

Since we don’t need the file_path_id column anymore, let’s drop it using select() and -.

doc/learning.qmd
user_info_df |>
  mutate(
    user_id = str_extract(
      file_path_id,
      "user_[1-9][0-9]?"
    ),
    .before = file_path_id
  ) |>
  select(-file_path_id)
#> # A tibble: 22 × 5
#>    user_id gender weight height   age
#>    <chr>   <chr>   <dbl>  <dbl> <dbl>
#>  1 user_1  M          65    169    29
#>  2 user_10 M          85    180    27
#>  3 user_11 M         115    186    27
#>  4 user_12 M          67    170    27
#>  5 user_13 M          74    180    25
#>  6 user_14 M          64    171    27
#>  7 user_15 M          80    180    24
#>  8 user_16 M          67    176    27
#>  9 user_17 M          60    175    24
#> 10 user_18 M          80    180     0
#> # ℹ 12 more rows

8.5 Exercise: Convert ID extractor code into a function

Time: 15 minutes.

We now have code that takes data containing the file_path_id column and extracts the user ID from it. First step: while in the doc/learning.qmd file, convert this code into a function, using the same process you’ve done previously.

Use this code as a guide to help complete the exercise tasks below:

doc/learning.qmd
extract_user_id <- ___(___) {
    ___ <- ___ |>
        ___mutate(
            user_id = ___str_extract(file_path_id,
                                     "user_[0-9][0-9]?"),
            .before = file_path_id
        ) |>
        ___select(-file_path_id)
    return(___)
}

# This tests that it works:
# extract_user_id(user_info_df)

  1. Call the new function extract_user_id and add one argument called imported_data.
    • Remember to output the code into an object and return() it at the end of the function.
    • Include Roxygen documentation.
  2. After writing it and testing that the function works, move (cut and paste) the function into R/functions.R.
  3. Run styler while in the R/functions.R file with the Palette (Ctrl-Shift-P, then type “style file”).
  4. Replace the code in the doc/learning.qmd file with the function name, so it looks like extract_user_id(user_info_df). Restart the R session, source everything with source() via Ctrl-Shift-S or with the Palette (Ctrl-Shift-P, then type “source”), and run the new function in the code chunk inside doc/learning.qmd to test that it works. Sourcing should automatically run the setup code chunk; if it doesn’t, run that chunk manually.
  5. Knit / render the doc/learning.qmd file to make sure things remain reproducible with Ctrl-Shift-K or with the Palette (Ctrl-Shift-P, then type “render”).
  6. Add and commit the changes to the Git history with Ctrl-Alt-M or with the Palette (Ctrl-Shift-P, then type “commit”).
Click for the solution. Only click if you are struggling or are out of time.
#' Extract user ID from data with file path column.
#'
#' @param imported_data Data with `file_path_id` column.
#'
#' @return A data.frame/tibble.
#'
extract_user_id <- function(imported_data) {
  extracted_id <- imported_data |>
    dplyr::mutate(
      user_id = stringr::str_extract(
        file_path_id,
        "user_[0-9][0-9]?"
      ),
      .before = file_path_id
    ) |>
    dplyr::select(-file_path_id)
  return(extracted_id)
}

# This tests that it works:
extract_user_id(user_info_df)

8.6 Modifying existing functions as part of the processing workflow

Now that we’ve created a new function to extract the user ID from the file path variable, we need to actually use it within our processing pipeline. Since we want this function to work on all the datasets that we will import, we need to add it to the import_multiple_files() function. We’ll go to the import_multiple_files() function in R/functions.R and use |> to add it after the list_rbind() step. The code should look something like:

R/functions.R
import_multiple_files <- function(file_pattern, import_function) {
  data_files <- fs::dir_ls(here::here("data-raw/mmash/"),
    regexp = file_pattern,
    recurse = TRUE
  )

  combined_data <- purrr::map(data_files, import_function) |>
    purrr::list_rbind(names_to = "file_path_id") |>
    extract_user_id() # Add the function here
  return(combined_data)
}

We’ll re-source the functions with source() using Ctrl-Shift-S or with the Palette (Ctrl-Shift-P, then type “source”). Then re-run these pieces of code you wrote during the exercise in Section 7.3 to update them based on the new code in the import_multiple_files() function. Add this to your doc/learning.qmd file for now.

doc/learning.qmd
user_info_df <- import_multiple_files("user_info.csv", import_user_info)
saliva_df <- import_multiple_files("saliva.csv", import_saliva)
rr_df <- import_multiple_files("RR.csv", import_rr)
actigraph_df <- import_multiple_files("Actigraph.csv", import_actigraph)

We also need to update the summarised_rr_df and summarised_actigraph_df code to use user_id instead of file_path_id:

doc/learning.qmd
summarised_rr_df <- rr_df |>
  group_by(user_id, day) |> # change file_path_id to user_id here
  summarise(
    across(ibi_s, list(
      mean = \(x) mean(x, na.rm = TRUE),
      sd = \(x) sd(x, na.rm = TRUE)
    )),
    .groups = "drop"
  )

summarised_actigraph_df <- actigraph_df |>
  group_by(user_id, day) |> # change file_path_id to user_id here
  # These statistics will probably be different for you
  summarise(
    across(hr, list(
      mean = \(x) mean(x, na.rm = TRUE),
      sd = \(x) sd(x, na.rm = TRUE)
    )),
    .groups = "drop"
  )

Let’s render the doc/learning.qmd document using Ctrl-Shift-K or with the Palette (Ctrl-Shift-P, then type “render”) to make sure everything still runs fine. Then, add and commit all the changed files into the Git history with Ctrl-Alt-M or with the Palette (Ctrl-Shift-P, then type “commit”).

8.7 Join datasets together

Walk through and describe these images and the different types of joins after participants have read the section.

Reading task: ~10 minutes

The ability to join datasets together is a fundamental component of data processing and transformation. In our case, we want to join the datasets together so that we eventually have, preferably, one main dataset to work with.

There are many ways to join datasets in dplyr, described in ?dplyr::join. The more common ones implemented in the dplyr package are:

  • left_join(x, y): Join all rows and columns in y that match rows and columns in x. Columns that exist in y but not x are joined to x.
Figure 8.2: Left joining in dplyr. Notice how the last row in the blue data (the row with d in column A) is not included in the outputted data on the right. Modified from the Posit dplyr cheatsheet.
  • right_join(x, y): The opposite of left_join(). Join all rows and columns in x that match rows and columns in y. Columns that exist in x but not y are joined to y.
Figure 8.3: Right joining in dplyr. Notice how the last row in the green data (the row with c in column A) is not included in the outputted data on the right. Modified from the Posit dplyr cheatsheet.
  • full_join(x, y): Join all rows and columns in y that match rows and columns in x. Columns and rows that exist in y but not x are joined to x. A full join keeps all the data from both x and y.
Figure 8.4: Full joining in dplyr. Notice how all rows and columns are included in the outputted data on the right, and that some missingness is introduced because those values don’t exist when the data are combined in this way. Modified from the Posit dplyr cheatsheet.
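
As a toy illustration of the differences (made-up data, not part of our project), consider two small tibbles that share a column A:

Console
library(dplyr)
x <- tibble(A = c("a", "b", "c"), B = 1:3)
y <- tibble(A = c("a", "b", "d"), C = 4:6)
left_join(x, y, by = "A")   # keeps rows a, b, c; C is NA for c
right_join(x, y, by = "A")  # keeps rows a, b, d; B is NA for d
full_join(x, y, by = "A")   # keeps rows a, b, c, d; NA wherever data are missing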

Now, we want to start joining our datasets. Let’s start with the user_info_df and saliva_df. In this case, we want to use full_join(), since we want all the data from both datasets. This function takes two datasets and lets you indicate which column to join by using the by argument. Here, both datasets have the column user_id so we will join by that column.

full_join(user_info_df, saliva_df, by = "user_id")
#> # A tibble: 43 × 8
#>    user_id gender weight height   age samples      cortisol_norm
#>    <chr>   <chr>   <dbl>  <dbl> <dbl> <chr>                <dbl>
#>  1 user_1  M          65    169    29 before sleep        0.0341
#>  2 user_1  M          65    169    29 wake up             0.0779
#>  3 user_10 M          85    180    27 before sleep        0.0370
#>  4 user_10 M          85    180    27 wake up             0.0197
#>  5 user_11 M         115    186    27 before sleep        0.0406
#>  6 user_11 M         115    186    27 wake up             0.0156
#>  7 user_12 M          67    170    27 before sleep        0.156 
#>  8 user_12 M          67    170    27 wake up             0.145 
#>  9 user_13 M          74    180    25 before sleep        0.0123
#> 10 user_13 M          74    180    25 wake up             0.0342
#> # ℹ 33 more rows
#> # ℹ 1 more variable: melatonin_norm <dbl>

full_join() is useful when we want to include all values from both datasets, as long as each participant (“user”) had data collected in either dataset. When the two datasets have rows that don’t match, we will get missingness in those rows, but that’s ok in this case.

We also have other datasets to join together later on. Since full_join() can only take two datasets at a time, do we just keep calling full_join() until all the other datasets are combined? What if we get more data later on? Well, that’s where more functional programming comes in. Again, we have a simple goal: for a set of data frames, join them all together. Here we use another functional programming concept from the purrr package called reduce(). Like map(), which “maps” a function onto a set of items, reduce() applies a function to pairs of items in a vector or list, each time reducing the set of items down until only one remains: the output. Let’s use an example with the simple add_numbers() function we created earlier on (but later deleted) to add up the numbers 1 to 5. Since add_numbers() only takes two numbers, we have to give it two numbers at a time and repeat until we reach 5.

# Re-create the function we deleted earlier (definition assumed) so the example runs
add_numbers <- function(num1, num2) {
  num1 + num2
}

# Add from 1 to 5
first <- add_numbers(1, 2)
second <- add_numbers(first, 3)
third <- add_numbers(second, 4)
add_numbers(third, 5)
#> [1] 15

Instead, we can use reduce to do the same thing:

reduce(1:5, add_numbers)
#> [1] 15

Figure 8.5 visually shows what is happening within reduce().

Figure 8.5: A functional that iteratively uses a function on a set of items until only one output remains. Notice how the output of the first iteration of func() is placed in the first position of func() in the next iteration, and so on. Modified from the Posit purrr cheatsheet.

If we look at the documentation for reduce() by writing ?reduce in the Console, we see that reduce(), like map(), takes either a vector or a list as input. Since data frames can only be put together as a list and not as a vector (a data frame has vectors for columns, so it can’t itself be a vector), we need to combine the datasets together in a list() to be able to reduce them with full_join().

Let’s code this together, using reduce(), full_join(), and list() while in the doc/learning.qmd file.

doc/learning.qmd
list(
  user_info_df,
  saliva_df
) |>
  reduce(full_join)
#> Joining with `by = join_by(user_id)`
#> # A tibble: 43 × 8
#>    user_id gender weight height   age samples      cortisol_norm
#>    <chr>   <chr>   <dbl>  <dbl> <dbl> <chr>                <dbl>
#>  1 user_1  M          65    169    29 before sleep        0.0341
#>  2 user_1  M          65    169    29 wake up             0.0779
#>  3 user_10 M          85    180    27 before sleep        0.0370
#>  4 user_10 M          85    180    27 wake up             0.0197
#>  5 user_11 M         115    186    27 before sleep        0.0406
#>  6 user_11 M         115    186    27 wake up             0.0156
#>  7 user_12 M          67    170    27 before sleep        0.156 
#>  8 user_12 M          67    170    27 wake up             0.145 
#>  9 user_13 M          74    180    25 before sleep        0.0123
#> 10 user_13 M          74    180    25 wake up             0.0342
#> # ℹ 33 more rows
#> # ℹ 1 more variable: melatonin_norm <dbl>

We now have the data in a form that makes sense to join with the other datasets. So let’s try it:

doc/learning.qmd
list(
  user_info_df,
  saliva_df,
  summarised_rr_df
) |>
  reduce(full_join)
#> Joining with `by = join_by(user_id)`
#> Joining with `by = join_by(user_id)`
#> # A tibble: 86 × 11
#>    user_id gender weight height   age samples      cortisol_norm
#>    <chr>   <chr>   <dbl>  <dbl> <dbl> <chr>                <dbl>
#>  1 user_1  M          65    169    29 before sleep        0.0341
#>  2 user_1  M          65    169    29 before sleep        0.0341
#>  3 user_1  M          65    169    29 wake up             0.0779
#>  4 user_1  M          65    169    29 wake up             0.0779
#>  5 user_10 M          85    180    27 before sleep        0.0370
#>  6 user_10 M          85    180    27 before sleep        0.0370
#>  7 user_10 M          85    180    27 wake up             0.0197
#>  8 user_10 M          85    180    27 wake up             0.0197
#>  9 user_11 M         115    186    27 before sleep        0.0406
#> 10 user_11 M         115    186    27 before sleep        0.0406
#> # ℹ 76 more rows
#> # ℹ 4 more variables: melatonin_norm <dbl>, day <dbl>,
#> #   ibi_s_mean <dbl>, ibi_s_sd <dbl>

Hmm, but wait, we now have four rows for each user, when we should have only two, one for each day. By looking at each dataset we joined, we can find that saliva_df doesn’t have a day column; it instead has a samples column. Because this join can only match on user_id, every saliva row gets paired with every summarised RR row for that user, duplicating the rows. We’ll need to add a day column to saliva_df in order to join it properly with the RR dataset. For this, we’ll learn about using nested conditionals.
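
If you want to see the duplication for yourself, one optional check is to count the rows per user after the join (each user should eventually have only two):

Console
list(
  user_info_df,
  saliva_df,
  summarised_rr_df
) |>
  reduce(full_join) |>
  count(user_id)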

8.8 Cleaning with nested conditionals

Reading task: ~6 minutes

There are many ways to clean up this particular problem, but probably the easiest, most explicit, and most programmatically accurate way is with the function case_when(). This function works by giving it a series of logical conditions, each with an associated output if the condition is true. Each condition is processed sequentially, meaning that if an earlier condition is TRUE, its output won’t be overridden by later conditions. The general form of case_when() looks like:

case_when(
  variable1 == condition1 ~ output,
  variable2 == condition2 ~ output,
  # (Optional) Otherwise
  TRUE ~ final_output
)

The optional ending is only necessary if you want a certain output when none of your conditions are met. Because conditions are processed sequentially and because it is the last condition, setting it to TRUE means its output is used as the fallback. If this last TRUE condition is not included, then by default the final output is a missing value. A (silly) example using age might be:

case_when(
  age > 20 ~ "old",
  age <= 20 ~ "young",
  # For final condition
  TRUE ~ "unborn!"
)

If instead you want one of the conditions to be NA, you need to set the appropriate NA value:

case_when(
  age > 20 ~ "old",
  age <= 20 ~ NA_character_,
  # For final condition
  TRUE ~ "unborn!"
)

Alternatively, if we want missing age values to output NA values at the end (instead of "unborn!"), we would exclude the final condition:

case_when(
  age > 20 ~ "old",
  age <= 20 ~ "young"
)

dplyr functions like case_when() require you to be explicit about the type of output each condition returns, since all the outputs must be the same type (e.g. all character or all numeric). This prevents you from accidentally mixing, for example, numeric output with character output. Missing values also have data types:

  • NA_character_ (character)
  • NA_real_ (numeric)
  • NA_integer_ (integer)
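
For example, mixing output types triggers an error, which is a helpful safeguard. A quick made-up illustration:

Console
age <- c(25, 18)
# This errors because one output is character and the other is numeric
dplyr::case_when(
  age > 20 ~ "old",
  age <= 20 ~ 1
)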

Assuming the final output is NA, using this in a pipeline looks like how you would normally use mutate():

user_info_df |>
  mutate(age_category = case_when(
    age > 20 ~ "old",
    age <= 20 ~ "young"
  ))
#> # A tibble: 22 × 6
#>    user_id gender weight height   age age_category
#>    <chr>   <chr>   <dbl>  <dbl> <dbl> <chr>       
#>  1 user_1  M          65    169    29 old         
#>  2 user_10 M          85    180    27 old         
#>  3 user_11 M         115    186    27 old         
#>  4 user_12 M          67    170    27 old         
#>  5 user_13 M          74    180    25 old         
#>  6 user_14 M          64    171    27 old         
#>  7 user_15 M          80    180    24 old         
#>  8 user_16 M          67    176    27 old         
#>  9 user_17 M          60    175    24 old         
#> 10 user_18 M          80    180     0 young       
#> # ℹ 12 more rows

Briefly review the content again, to reinforce what they read.

While still in the doc/learning.qmd file, we can use the case_when() function to set "before sleep" as day 1 and "wake up" as day 2 by creating a new column called day. (Later, when we add a fallback condition in the data-raw/mmash.R script, we will use NA_real_ because the other day columns are numeric, not integer.)

doc/learning.qmd
saliva_with_day_df <- saliva_df |>
  mutate(day = case_when(
    samples == "before sleep" ~ 1,
    samples == "wake up" ~ 2
  ))
saliva_with_day_df
#> # A tibble: 42 × 5
#>    user_id samples      cortisol_norm melatonin_norm   day
#>    <chr>   <chr>                <dbl>          <dbl> <dbl>
#>  1 user_1  before sleep        0.0341  0.0000000174      1
#>  2 user_1  wake up             0.0779  0.00000000675     2
#>  3 user_10 before sleep        0.0370  0.00000000867     1
#>  4 user_10 wake up             0.0197  0.00000000257     2
#>  5 user_11 before sleep        0.0406  0.00000000204     1
#>  6 user_11 wake up             0.0156  0.00000000965     2
#>  7 user_12 before sleep        0.156   0.00000000354     1
#>  8 user_12 wake up             0.145   0.00000000864     2
#>  9 user_13 before sleep        0.0123  0.00000000190     1
#> 10 user_13 wake up             0.0342  0.00000000230     2
#> # ℹ 32 more rows

Now, let’s use reduce() with full_join() again:

doc/learning.qmd
list(
  user_info_df,
  saliva_with_day_df,
  summarised_rr_df,
  summarised_actigraph_df
) |>
  reduce(full_join)
#> Joining with `by = join_by(user_id)`
#> Joining with `by = join_by(user_id, day)`
#> Joining with `by = join_by(user_id, day)`
#> # A tibble: 47 × 13
#>    user_id gender weight height   age samples      cortisol_norm
#>    <chr>   <chr>   <dbl>  <dbl> <dbl> <chr>                <dbl>
#>  1 user_1  M          65    169    29 before sleep        0.0341
#>  2 user_1  M          65    169    29 wake up             0.0779
#>  3 user_10 M          85    180    27 before sleep        0.0370
#>  4 user_10 M          85    180    27 wake up             0.0197
#>  5 user_11 M         115    186    27 before sleep        0.0406
#>  6 user_11 M         115    186    27 wake up             0.0156
#>  7 user_12 M          67    170    27 before sleep        0.156 
#>  8 user_12 M          67    170    27 wake up             0.145 
#>  9 user_13 M          74    180    25 before sleep        0.0123
#> 10 user_13 M          74    180    25 wake up             0.0342
#> # ℹ 37 more rows
#> # ℹ 6 more variables: melatonin_norm <dbl>, day <dbl>,
#> #   ibi_s_mean <dbl>, ibi_s_sd <dbl>, hr_mean <dbl>, hr_sd <dbl>

We now have two rows per participant! Let’s add and commit the changes to the Git history with Ctrl-Alt-M or with the Palette (Ctrl-Shift-P, then type “commit”).

8.9 Wrangling data into final form

Now that we’ve got several datasets processed and joined, it’s time to bring it all together and put it into the data-raw/mmash.R script so we can create a final working dataset.

Open up the data-raw/mmash.R file and, at the top of the file, move the library(fs) line up to join the other package calls. It should look something like this now:

data-raw/mmash.R
library(here)
library(tidyverse)
library(fs)
source(here("R/functions.R"))

Next, since we altered import_multiple_files() so that its output contains user_id instead of file_path_id, we need to update the group_by() calls used when creating summarised_rr_df and summarised_actigraph_df.

data-raw/mmash.R
summarised_rr_df <- rr_df |>
  group_by(user_id, day) |>
  summarise(
    across(ibi_s, list(
      mean = \(x) mean(x, na.rm = TRUE),
      sd = \(x) sd(x, na.rm = TRUE)
    )),
    .groups = "drop"
  )

summarised_actigraph_df <- actigraph_df |>
  group_by(user_id, day) |>
  summarise(
    across(hr, list(
      mean = \(x) mean(x, na.rm = TRUE),
      sd = \(x) sd(x, na.rm = TRUE)
    )),
    .groups = "drop"
  )

Go into doc/learning.qmd and cut the code used to create saliva_with_day_df, as well as the code that joins all the datasets with reduce() and full_join(), and paste them at the bottom of the data-raw/mmash.R script. Assign the output of the join to a new variable called mmash, like this:

data-raw/mmash.R
saliva_with_day_df <- saliva_df |>
  mutate(day = case_when(
    samples == "before sleep" ~ 1,
    samples == "wake up" ~ 2,
    TRUE ~ NA_real_
  ))

mmash <- list(
  user_info_df,
  saliva_with_day_df,
  summarised_rr_df,
  summarised_actigraph_df
) |>
  reduce(full_join)
#> Joining with `by = join_by(user_id)`
#> Joining with `by = join_by(user_id, day)`
#> Joining with `by = join_by(user_id, day)`

Lastly, we have to save this final dataset into the data/ folder. We’ll use the function usethis::use_data() to create the folder and save the data as an .rda file. We’ll add this code to the very bottom of the script:

data-raw/mmash.R
usethis::use_data(mmash, overwrite = TRUE)
#> ✔ Setting active project to
#>   "/home/runner/work/r-cubed-intermediate/r-cubed-intermediate".
#> ✔ Saving "mmash" to "data/mmash.rda".
#> ☐ Document your data (see <https://r-pkgs.org/data.html>).

We’re adding overwrite = TRUE so that the dataset is saved again every time we re-run this script. Alternatively, we could (but won’t in this course) save it as a .csv file with:

write_csv(mmash, here("data/mmash.csv"))

And later load it in with read_csv() (which is quite fast). Alright, we’re finished creating this dataset! Let’s generate it by:

  • First running styler with the Palette (Ctrl-Shift-P, then type “style file”).
  • Restarting the R session with Ctrl-Shift-F10 or with the Palette (Ctrl-Shift-P, then type “restart”).
  • Sourcing the data-raw/mmash.R script with Ctrl-Shift-S or with the Palette (Ctrl-Shift-P, then type “source”).

We now have a final dataset to start working on! The main way to load it is with load(here::here("data/mmash.rda")). Go into the doc/learning.qmd file and delete everything again, except for the YAML header and setup code chunk, so that we are ready for the next session. Lastly, add and commit all the changes, including the final mmash.rda data file, to the Git history with Ctrl-Alt-M or with the Palette (Ctrl-Shift-P, then type “commit”).

8.10 Summary

Quickly cover this before finishing the session and when starting the next session.

  • While very difficult to learn and use, regular expressions (regex or regexp) are incredibly powerful for processing character data.
  • Use left_join(), right_join(), and full_join() to join two datasets together.
  • Use the functional reduce() to iteratively apply a function to a set of items in order to end up with one item (e.g. join more than two datasets into one final dataset).
  • Use case_when() instead of nesting multiple “if else” conditions whenever you need to do slightly more complicated conditionals.