R: Remove duplicate entries in a data frame and keep the rows with fewer NAs and zeros

I would like to deduplicate a data.frame that I generate from another part of my codebase, without knowing the order of its columns and rows in advance. There are several columns in the data.frame that I want to compare for duplication, here A and B, and among the duplicates I want to keep the row containing the fewest NAs and zeros in the other columns, here C, D and E.

tc <- 'Id  B   A   C  D   E
       1   62  12  0  NA  NA
       2   12  62  1  1   1
       3   2   62  1  1   1
       4   62  12  1  1   1
       5   55  23  0  0   0'

df <- read.table(textConnection(tc), header = TRUE)

I can use duplicated(), but since I have no control over the order in which the columns and rows arrive from my framework, I need a way to keep the unique rows that have the fewest NAs and zeros.

This works for the example, but not if the incoming data.frame has a different row order:

df[!duplicated(data.frame(A = df$A, B = df$B), fromLast = TRUE), ]
#   Id  B  A C D E
# 2  2 12 62 1 1 1
# 3  3  2 62 1 1 1
# 4  4 62 12 1 1 1
# 5  5 55 23 0 0 0

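For example, if the same data arrives with the duplicate rows in the opposite order, fromLast = TRUE keeps the NA-laden row (a minimal sketch; df_shuffled just permutes the rows above):

df_shuffled <- df[c(4, 2, 3, 1, 5), ]  # Id 4 now comes before Id 1
df_shuffled[!duplicated(df_shuffled[c("A", "B")], fromLast = TRUE), ]
#   Id  B  A C  D  E
# 2  2 12 62 1  1  1
# 3  3  2 62 1  1  1
# 1  1 62 12 0 NA NA   <- the all-NA row survives
# 5  5 55 23 0  0  0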

Any ideas?


1 answer


This approach is based on counting the valid values and reordering the data frame.

First, count the NA and 0 values in columns C, D and E for each row; is.na() catches the NAs, and !x is TRUE exactly where x is 0:

rs <- rowSums(is.na(df[c("C", "D", "E")]) | !df[c("C", "D", "E")])
# [1] 3 0 0 0 3

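For illustration, the expression inside rowSums() yields a logical matrix that is TRUE wherever a value is NA or 0:

is.na(df[c("C", "D", "E")]) | !df[c("C", "D", "E")]
#       C     D     E
# 1  TRUE  TRUE  TRUE
# 2 FALSE FALSE FALSE
# 3 FALSE FALSE FALSE
# 4 FALSE FALSE FALSE
# 5  TRUE  TRUE  TRUE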

Second, order the data frame by A, B and the new variable rs, so that within each A/B group the row with the fewest NAs and zeros comes first:



df_ordered <- df[order(df$A, df$B, rs), ]
#   Id  B  A C  D  E
# 4  4 62 12 1  1  1
# 1  1 62 12 0 NA NA
# 5  5 55 23 0  0  0
# 3  3  2 62 1  1  1
# 2  2 12 62 1  1  1


You can now remove the duplicate rows; since duplicated() keeps the first occurrence within each A/B combination, the row with the most valid values survives.

df_ordered[!duplicated(df_ordered[c("A", "B")]), ]
#   Id  B  A C D E
# 4  4 62 12 1 1 1
# 5  5 55 23 0 0 0
# 3  3  2 62 1 1 1
# 2  2 12 62 1 1 1

The result keeps df_ordered's row order; reorder by Id afterwards if the original order matters.

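Since the key and value columns come from elsewhere in your codebase, the same logic can be wrapped in a small helper. This is a minimal sketch, not part of the answer above; dedup_keep_valid and its arguments are hypothetical names:

# Hypothetical helper wrapping the approach above: `keys` are the columns
# compared for duplication, `vals` the columns whose NAs/zeros are counted.
dedup_keep_valid <- function(df, keys = c("A", "B"), vals = c("C", "D", "E")) {
  # Count NA and 0 entries per row in the value columns
  rs <- rowSums(is.na(df[vals]) | !df[vals])
  # Order by the key columns, then by the invalid-value count (fewest first)
  df_ordered <- df[do.call(order, c(df[keys], list(rs))), ]
  # Keep the first (most valid) row of each key combination
  df_ordered[!duplicated(df_ordered[keys]), ]
}

dedup_keep_valid(df)
#   Id  B  A C D E
# 4  4 62 12 1 1 1
# 5  5 55 23 0 0 0
# 3  3  2 62 1 1 1
# 2  2 12 62 1 1 1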
