Getting days since last occurrence in Pandas DataFrame?

Let's say I have a Pandas DataFrame df:

Date      Value
01/01/17  0
01/02/17  0
01/03/17  1
01/04/17  0
01/05/17  0
01/06/17  0
01/07/17  1
01/08/17  0
01/09/17  0

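For reference, the sample frame can be built like this (a minimal sketch; the Date column is kept as plain strings, matching the table above):

import pandas as pd

df = pd.DataFrame({
    'Date': ['01/01/17', '01/02/17', '01/03/17', '01/04/17', '01/05/17',
             '01/06/17', '01/07/17', '01/08/17', '01/09/17'],
    'Value': [0, 0, 1, 0, 0, 0, 1, 0, 0],
})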

For each row, I want to efficiently calculate the number of days since the last occurrence of Value=1.

So df would become:

Date      Value    Last_Occurence
01/01/17  0        NaN
01/02/17  0        NaN
01/03/17  1        0
01/04/17  0        1
01/05/17  0        2
01/06/17  0        3
01/07/17  1        0
01/08/17  0        1
01/09/17  0        2


I could do a loop:

for i in range(len(df)):
    last = np.where(df.loc[:i, 'Value'] == 1)[0]   # indices of 1s seen so far
    df.loc[i, 'Last_Occurence'] = i - last[-1] if len(last) else np.nan


But it seems very inefficient for extremely large datasets, since it rescans the entire prefix on every iteration.

4 answers


Here's the NumPy approach -

import numpy as np

def intervaled_cumsum(a, trigger_val=1, start_val=0, invalid_specifier=-1):
    # positions where the trigger value occurs
    idx = np.flatnonzero(a == trigger_val)
    if len(idx) == 0:
        # no trigger anywhere: every position is invalid
        return np.full(a.size, invalid_specifier)
    # fill with ones, then plant corrections at the trigger positions so
    # that one cumulative sum restarts the count after every trigger
    out = np.ones(a.size, dtype=int)
    out[idx[0]] = -idx[0] + 1
    out[0] = start_val
    out[idx[1:]] = idx[:-1] - idx[1:] + 1
    np.cumsum(out, out=out)
    # positions before the first trigger have no previous occurrence
    out[:idx[0]] = invalid_specifier
    return out


A few runs through sample array data to demonstrate the usage, covering different trigger and start value scenarios:

In [120]: a
Out[120]: array([0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0])

In [121]: p1 = intervaled_cumsum(a, trigger_val=1, start_val=0)
     ...: p2 = intervaled_cumsum(a, trigger_val=1, start_val=1)
     ...: p3 = intervaled_cumsum(a, trigger_val=0, start_val=0)
     ...: p4 = intervaled_cumsum(a, trigger_val=0, start_val=1)
     ...: 

In [122]: np.vstack(( a, p1, p2, p3, p4 ))
Out[122]: 
array([[ 0,  1,  1,  1,  0,  0,  1,  0,  0,  1,  1,  1,  1,  1,  0],
       [-1,  0,  0,  0,  1,  2,  0,  1,  2,  0,  0,  0,  0,  0,  1],
       [-1,  1,  1,  1,  2,  3,  1,  2,  3,  1,  1,  1,  1,  1,  2],
       [ 0,  1,  2,  3,  0,  0,  1,  0,  0,  1,  2,  3,  4,  5,  0],
       [ 1,  2,  3,  4,  1,  1,  2,  1,  1,  2,  3,  4,  5,  6,  1]])


Using it to solve our case:

df['Last_Occurence'] = intervaled_cumsum(df.Value.values)


Sample output -

In [181]: df
Out[181]: 
       Date  Value  Last_Occurence
0  01/01/17      0              -1
1  01/02/17      0              -1
2  01/03/17      1               0
3  01/04/17      0               1
4  01/05/17      0               2
5  01/06/17      0               3
6  01/07/17      1               0
7  01/08/17      0               1
8  01/09/17      0               2
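
If you prefer NaN over -1 for the rows before the first 1, to match the desired output exactly, one option is a small post-processing step (a sketch; note the column becomes float, since NaN is a float value):

df['Last_Occurence'] = df['Last_Occurence'].replace(-1, np.nan)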


Runtime test

Approaches -

# @Scott Boston's soln
def pandas_groupby(df):
    mask = df.Value.cumsum().replace(0, False).astype(bool)
    return df.assign(Last_Occurance=df.groupby(df.Value.astype(bool).cumsum())
                                      .cumcount().where(mask))

# Proposed in this post
def numpy_based(df):
    df['Last_Occurence'] = intervaled_cumsum(df.Value.values)


Timing -

In [33]: df = pd.DataFrame((np.random.rand(10000000)>0.7).astype(int), columns=['Value'])

In [34]: %timeit pandas_groupby(df)
1 loops, best of 3: 1.06 s per loop

In [35]: %timeit numpy_based(df)
10 loops, best of 3: 103 ms per loop

In [36]: df = pd.DataFrame((np.random.rand(100000000)>0.7).astype(int), columns=['Value'])

In [37]: %timeit pandas_groupby(df)
1 loops, best of 3: 11.1 s per loop

In [38]: %timeit numpy_based(df)
1 loops, best of 3: 1.03 s per loop
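
Before trusting the timings, it may be worth checking that the two approaches agree (a minimal sanity-check sketch; the groupby version yields NaN where the NumPy version yields -1, so normalize first):

out_pd = pandas_groupby(df)['Last_Occurance'].fillna(-1).astype(int)
numpy_based(df)   # adds the 'Last_Occurence' column in place
assert (out_pd.values == df['Last_Occurence'].values).all()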



Try this, using cumsum, cumcount and groupby:

mask = df.Value.cumsum().replace(0,False).astype(bool) #Mask starting zeros as NaN
df_out = df.assign(Last_Occurance=df.groupby(df.Value.astype(bool).cumsum()).cumcount().where(mask))
print(df_out)

Output:

       Date  Value  Last_Occurance
0  01/01/17      0             NaN
1  01/02/17      0             NaN
2  01/03/17      1             0.0
3  01/04/17      0             1.0
4  01/05/17      0             2.0
5  01/06/17      0             3.0
6  01/07/17      1             0.0
7  01/08/17      0             1.0
8  01/09/17      0             2.0
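
To see why this works, it helps to look at the intermediate grouping key (a small sketch on the same sample data):

groups = df.Value.astype(bool).cumsum()
print(groups.tolist())   # [0, 0, 1, 1, 1, 1, 2, 2, 2]
# The running sum increments at every 1, so each block of rows from a 1
# up to (but not including) the next 1 shares one group id. cumcount()
# then numbers the rows within each group from 0, which is exactly
# "rows since the last 1"; the mask turns the leading group into NaN.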



You don't need to recompute last at every step of the for loop. Initialize the variable outside the loop

last = np.nan
for i in range(len(df)):
    if df.loc[i, 'Value'] == 1:
        last = i
    df.loc[i, 'Last_Occurence'] = i - last


and update it only when a 1 occurs in the Value column.

Note that no matter which method you choose, a single pass over the entire table is unavoidable.
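
One caveat that applies to every approach above: they all count rows, which equals days only because the sample dates are consecutive. If your dates can have gaps, here is a hedged sketch that measures actual calendar days instead (assuming the MM/DD/YY format shown in the question):

dates = pd.to_datetime(df['Date'], format='%m/%d/%y')
# keep the date only on rows where Value == 1, then forward-fill it
last_date = dates.where(df['Value'] == 1).ffill()
# day difference to the most recent 1; NaN before the first 1
df['Last_Occurence'] = (dates - last_date).dt.days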



You can use argmax:

df.apply(lambda x: np.argmax(df.iloc[x.name::-1].Value.tolist()),axis=1)
Out[85]: 
0    0
1    0
2    0
3    1
4    2
5    3
6    0
7    1
8    2
dtype: int64
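
To see the trick: for each row, the slice df.iloc[x.name::-1] walks backwards from that row, so np.argmax returns how many rows back the nearest 1 sits. For the row at position 3, for example:

df.iloc[3::-1].Value.tolist()   # [0, 1, 0, 0], the prefix reversed
np.argmax([0, 1, 0, 0])         # 1, i.e. the last 1 was one row back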


If you need the first two rows (before the first 1) to be NaN instead, then use:

df.apply(lambda x: np.argmax(df.iloc[x.name::-1].Value.tolist()) \
                   if 1 in df.iloc[x.name::-1].Value.values \
                   else np.nan,axis=1)
Out[86]: 
0    NaN
1    NaN
2    0.0
3    1.0
4    2.0
5    3.0
6    0.0
7    1.0
8    2.0
dtype: float64


