Randomly partitioning data into training and test sets by a grouping criterion in R

Gidday,

I am looking for a way to randomly split a data frame (e.g. a 90/10 split) into training and test sets for a model, while preserving a certain grouping criterion.

Imagine I have a data frame like this:

> test[1:20,]
                companycode     year    expenses         
    1                 C1          1     8.47720                 
    2                 C1          2     8.45250                 
    3                 C1          3     8.46280                 
    4                 C2          1 14828.90603                 
    5                 C3          1   665.21565                 
    6                 C3          2   290.66596                 
    7                 C3          3   865.56265                 
    8                 C3          4   6785.03586                
    9                 C3          5   312.02617                 
    10                C3          6   760.48740               
    11                C3          7  1155.76758                
    12                C4          1  4565.78313                 
    13                C4          2  3340.36540                 
    14                C4          3  2656.73030                 
    15                C4          4  1079.46098                 
    16                C5          1    60.57039                 
    17                C6          1  6282.48118                 
    18                C6          2  7419.32720                 
    19                C7          1   644.90571                 
    20                C8          1 58332.34945   

      

What I am trying to do is split this data frame into a training set and a test set using a certain split criterion. With the provided data, I want to split in such a way that companies do not mix across the two data frames: Dataset 1 should contain different companies than Dataset 2.

With a 90/10 split, the ideal result would look like this:

> data_90split

           companycode     year    expenses         

        4                 C2          1 14828.90603                                 
        12                C4          1  4565.78313                 
        13                C4          2  3340.36540                 
        14                C4          3  2656.73030                 
        15                C4          4  1079.46098                 
        16                C5          1    60.57039
        5                 C3          1   665.21565                 
        6                 C3          2   290.66596                 
        7                 C3          3   865.56265                 
        8                 C3          4   6785.03586                
        9                 C3          5   312.02617                 
        10                C3          6   760.48740               
        11                C3          7  1155.76758                 
        17                C6          1  6282.48118                 
        18                C6          2  7419.32720
        1                 C1          1     8.47720                 
        2                 C1          2     8.45250                 
        3                 C1          3     8.46280



 > data_10split
                    companycode     year   expenses
        20                C8          1 58332.34945 
        19                C7          1   644.90571  

      

Hopefully I have clearly indicated what I am looking for. Thanks for your feedback.

2 answers


# unique() works whether companycode is a factor or a character vector
comps <- unique(df$companycode)

# sample 90% of the companies for the training set
trn <- sample(comps, round(length(comps) * 0.9))

df.trn <- subset(df, companycode %in% trn)
df.tst <- subset(df, !(companycode %in% trn))

This separates your data so that 90% of the companies are in the training set and the rest in the test set.



This does not guarantee that 90% of your rows end up in the training set and 10% in the test set. A rigorous way to achieve that is left as an exercise for the reader. A lax way would be to repeat the sampling until you get roughly the right proportions.
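A minimal sketch of that lax approach. The `split_by_company` helper and its `target`/`tol`/`max_tries` parameters are illustrative assumptions of mine, not part of the answer:

```r
# Resample the company split until the training set's share of rows
# is within `tol` of `target`, giving up after `max_tries` draws.
split_by_company <- function(df, target = 0.9, tol = 0.05, max_tries = 100) {
  comps <- unique(df$companycode)
  trn <- comps  # fallback if no draw meets the tolerance
  for (i in seq_len(max_tries)) {
    cand <- sample(comps, round(length(comps) * target))
    frac <- sum(df$companycode %in% cand) / nrow(df)
    if (abs(frac - target) <= tol) {
      trn <- cand
      break
    }
  }
  list(train = subset(df, companycode %in% trn),
       test  = subset(df, !(companycode %in% trn)))
}

# Toy data shaped like the question's (8 companies, 20 rows)
set.seed(42)
toy <- data.frame(
  companycode = rep(paste0("C", 1:8), times = c(3, 1, 7, 4, 1, 2, 1, 1)),
  expenses    = runif(20, 1, 60000)
)
parts <- split_by_company(toy)
```

Every row of a given company lands in exactly one of the two pieces; only the row proportion is approximate.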


If you don't need the company-grouping constraint, the following will randomly split your data frame into 90% and 10% pieces (returned as a list):

set.seed(1)
# random logical vector (~10% FALSE, ~90% TRUE) used as the split factor
split(test, sample(1:nrow(test) > round(nrow(test) * .1)))



Outputs:

$`FALSE`
   companycode year  expenses
10          C3    6  760.4874
12          C4    1 4565.7831

$`TRUE`
   companycode year    expenses
1           C1    1     8.47720
2           C1    2     8.45250
3           C1    3     8.46280
4           C2    1 14828.90603
5           C3    1   665.21565
6           C3    2   290.66596
7           C3    3   865.56265
8           C3    4  6785.03586
9           C3    5   312.02617
11          C3    7  1155.76758
13          C4    2  3340.36540
14          C4    3  2656.73030
15          C4    4  1079.46098
16          C5    1    60.57039
17          C6    1  6282.48118
18          C6    2  7419.32720
19          C7    1   644.90571
20          C8    1 58332.34945
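The two pieces can then be pulled out of the returned list by name. A self-contained sketch, using a toy data frame in place of the question's `test`:

```r
# Toy stand-in for the question's `test` data frame (20 rows)
test <- data.frame(companycode = paste0("C", 1:20),
                   year        = 1,
                   expenses    = runif(20, 1, 60000))

set.seed(1)
parts <- split(test, sample(1:nrow(test) > round(nrow(test) * .1)))

test_set  <- parts[["FALSE"]]  # round(20 * .1) = 2 rows
train_set <- parts[["TRUE"]]   # the remaining 18 rows
```

Note that this approach ignores the company grouping, so in the real data rows of the same company can end up on both sides of the split.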

      
