Remove duplicate values in pandas dataframe row
I have a pandas dataframe:
>>> df_freq = pd.DataFrame([["Z11", "Z11", "X11"], ["Y11", "", ""], ["Z11", "Z11", ""]], columns=list('ABC'))
>>> df_freq
     A    B    C
0  Z11  Z11  X11
1  Y11
2  Z11  Z11
I want to make sure each row only has unique values, so it should look like this (deleted values can be replaced with zero or an empty string):
     A  B    C
0  Z11  0  X11
1  Y11
2  Z11  0
My dataframe is large, with hundreds of columns and thousands of rows. The goal is to count the unique values in this dataframe, which I do by converting the dataframe to a matrix and applying
>>> np.unique(mat.astype(str), return_counts=True)
But in some rows the same value occurs more than once, and I want to remove those repeats before applying np.unique(). In other words, I want to keep only unique values in each row.
3 answers
Here's a vectorized NumPy approach -
def reset_rowwise_dups(df):
    # Zero out values that repeat earlier in the same row, in place
    n = df.shape[0]
    row_idx = np.arange(n)[:,None]
    a = df.values
    idx = np.argsort(a,1)
    sorted_a = a[row_idx, idx]
    idx_reversed = idx.argsort(1)
    # Compare neighbours in each sorted row to flag duplicates ...
    sorted_a_dupmask = sorted_a[:,1:] == sorted_a[:,:-1]
    dup_mask = np.column_stack((np.zeros(n,dtype=bool), sorted_a_dupmask))
    # ... then map the mask back to the original order, leaving '' untouched
    final_mask = dup_mask[row_idx, idx_reversed] & (a != '')
    a[final_mask] = 0
Example run -
In [80]: df_freq
Out[80]:
     A    B    C    D
0  Z11  Z11  X11  Z11
1  Y11            Y11
2  Z11  Z11       X11
In [81]: reset_rowwise_dups(df_freq)
In [82]: df_freq
Out[82]:
     A  B    C    D
0  Z11  0  X11    0
1  Y11            0
2  Z11  0       X11
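With the duplicates zeroed out, the counting step described in the question can be applied directly. A minimal sketch (hard-coding the cleaned frame from the example run above, and filtering out the 0 placeholder and empty strings before reporting):

```python
import numpy as np
import pandas as pd

# Frame as it looks after reset_rowwise_dups (from the example run above)
df_clean = pd.DataFrame([["Z11", 0, "X11", 0],
                         ["Y11", "", "", 0],
                         ["Z11", 0, "", "X11"]], columns=list('ABCD'))

vals, counts = np.unique(df_clean.values.astype(str), return_counts=True)

# Drop the '0' placeholder and empty strings before reporting
keep = ~np.isin(vals, ['0', ''])
counts_by_val = {str(v): int(c) for v, c in zip(vals[keep], counts[keep])}
print(counts_by_val)  # {'X11': 2, 'Y11': 1, 'Z11': 2}
```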
Runtime test
# Proposed earlier in this post
def reset_rowwise_dups(df):
    n = df.shape[0]
    row_idx = np.arange(n)[:,None]
    a = df.values
    idx = np.argsort(a,1)
    sorted_a = a[row_idx, idx]
    idx_reversed = idx.argsort(1)
    sorted_a_dupmask = sorted_a[:,1:] == sorted_a[:,:-1]
    dup_mask = np.column_stack((np.zeros(n,dtype=bool), sorted_a_dupmask))
    final_mask = dup_mask[row_idx, idx_reversed] & (a != '')
    a[final_mask] = 0
# @piRSquared's soln using pandas apply
def apply_based(df):
    mask = df.apply(pd.Series.duplicated, 1) & df.astype(bool)
    return df.mask(mask, 0)
Timing -
In [151]: df_freq = pd.DataFrame([["Z11", "Z11", "X11", "Z11"], \
...: ["Y11","","", "Y11"],["Z11","Z11","","X11"]], columns=list('ABCD'))
In [152]: df_freq
Out[152]:
     A    B    C    D
0  Z11  Z11  X11  Z11
1  Y11            Y11
2  Z11  Z11       X11
In [153]: df = pd.concat([df_freq]*10000,axis=0)
In [154]: df.index = range(df.shape[0])
In [155]: %timeit apply_based(df)
1 loops, best of 3: 3.35 s per loop
In [156]: %timeit reset_rowwise_dups(df)
100 loops, best of 3: 12.7 ms per loop
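For completeness, the slower but more readable apply-based variant can be run end to end on the 3-column frame from the question; the function is restated here so the snippet is standalone:

```python
import pandas as pd

def apply_based(df):
    # Mask cells that duplicate an earlier value in the same row;
    # df.astype(bool) is False for '', so empty strings are left alone
    mask = df.apply(pd.Series.duplicated, 1) & df.astype(bool)
    return df.mask(mask, 0)

df_freq = pd.DataFrame([["Z11", "Z11", "X11"],
                        ["Y11", "", ""],
                        ["Z11", "Z11", ""]], columns=list('ABC'))
out = apply_based(df_freq)
print(out)
```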
def replaceDuplicateData(nestedList):
    for row in range(len(nestedList)):
        uniqueDataRow = []
        for col in range(len(nestedList[row])):
            if nestedList[row][col] not in uniqueDataRow:
                uniqueDataRow.append(nestedList[row][col])
            else:
                nestedList[row][col] = 0
    return nestedList

nestedList = [["Z11", "Z11", "X11"], ["Y11", "", ""], ["Z11", "Z11", ""]]
print(replaceDuplicateData(nestedList))
Basically, you can use the function above to remove duplicates from your matrix. Note that, unlike the answers above, it also replaces a repeated empty string with 0.
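Wired up to the counting goal from the question, a renamed sketch of the same loop (the 0 placeholder and empty strings are skipped explicitly when counting):

```python
from collections import Counter

def replace_duplicate_data(nested_list):
    # Same idea as the answer above: zero out repeats within each row
    for row in nested_list:
        seen = []
        for i, val in enumerate(row):
            if val in seen:
                row[i] = 0
            else:
                seen.append(val)
    return nested_list

data = [["Z11", "Z11", "X11"], ["Y11", "", ""], ["Z11", "Z11", ""]]
cleaned = replace_duplicate_data(data)

# Count remaining values, skipping the 0 placeholder and empty strings
counts = Counter(v for row in cleaned for v in row if v not in (0, ""))
print(counts)  # Counter({'Z11': 2, 'X11': 1, 'Y11': 1})
```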