Spark SQL window function with complex condition
This is probably easiest to explain with an example. Suppose I have a DataFrame representing user logins to a website, for instance:
scala> df.show(5)
+----------------+----------+
| user_name|login_date|
+----------------+----------+
|SirChillingtonIV|2012-01-04|
|Booooooo99900098|2012-01-04|
|Booooooo99900098|2012-01-06|
| OprahWinfreyJr|2012-01-10|
|SirChillingtonIV|2012-01-11|
+----------------+----------+
only showing top 5 rows
I would like to add a column to this indicating when they became an active user on the site. But there is one caveat: there is a period of time during which the user is considered active, and after this period, if they log in again, their became_active date is reset. Let's assume this period is 5 days. Then the desired table derived from the above table would be something like this:
+----------------+----------+-------------+
| user_name|login_date|became_active|
+----------------+----------+-------------+
|SirChillingtonIV|2012-01-04| 2012-01-04|
|Booooooo99900098|2012-01-04| 2012-01-04|
|Booooooo99900098|2012-01-06| 2012-01-04|
| OprahWinfreyJr|2012-01-10| 2012-01-10|
|SirChillingtonIV|2012-01-11| 2012-01-11|
+----------------+----------+-------------+
So, in particular, SirChillingtonIV's became_active date was reset because their second login came after the active period expired, but Booooooo99900098's became_active date was not reset the second time they logged in, because it fell within the active period.
My initial thought was to use window functions with lag, and then use the lagged values to fill the became_active column; for example, something starting like this:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val window = Window.partitionBy("user_name").orderBy("login_date")
val df2 = df.withColumn("tmp", lag("login_date", 1).over(window))
Then the rule to fill in the became_active date would be: if tmp is null (i.e., if this is the first ever login), or if login_date - tmp >= 5, then became_active = login_date; otherwise, go to the next most recent value in tmp and apply the same rule. This suggests a recursive approach, which I am having difficulty imagining a way to implement.
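To make the rule concrete, here is how it could be expressed in plain Scala over a single user's sorted login dates (an illustration only, not Spark code; the helper name is made up):

import java.time.LocalDate
import java.time.temporal.ChronoUnit

// Illustrative sketch only: for one user's logins (sorted ascending),
// reset became_active whenever the gap since the previous login is >= 5 days.
def becameActiveDates(logins: Seq[LocalDate]): Seq[LocalDate] =
  logins.foldLeft(List.empty[(LocalDate, LocalDate)]) {
    case (Nil, login) => List((login, login))            // first ever login
    case (acc @ ((prevLogin, prevActive) :: _), login) =>
      val active =
        if (ChronoUnit.DAYS.between(prevLogin, login) >= 5) login else prevActive
      (login, active) :: acc
  }.reverse.map(_._2)

// For example, SirChillingtonIV's first two logins from the table above:
// becameActiveDates(Seq("2012-01-04", "2012-01-11").map(LocalDate.parse))
// -> List(2012-01-04, 2012-01-11)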
My questions are: Is this a viable approach, and if so, how can I "go back" and look at earlier values of tmp until I find one where I stop? As far as I know, I cannot iterate over the values of a Spark SQL Column. Is there another way to achieve this result?
Here's a trick. Import a bunch of functions:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{coalesce, datediff, lag, lit, min, sum}
Define windows:
val userWindow = Window.partitionBy("user_name").orderBy("login_date")
val userSessionWindow = Window.partitionBy("user_name", "session")
Find the points where new sessions start (a gap of more than 5 days since the previous login) and number the sessions with a running sum of those flags:
val newSession = (coalesce(
  datediff($"login_date", lag($"login_date", 1).over(userWindow)),
  lit(0)
) > 5).cast("bigint")

val sessionized = df.withColumn("session", sum(newSession).over(userWindow))
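If you want to inspect the intermediate session ids (using the example dataset defined further down), a quick show will do; for instance, SirChillingtonIV's logins fall into sessions 0, 1, 1 and 2:

// Optional: look at the session ids assigned by the running sum
sessionized.show()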
Find the earliest date in the session:
val result = sessionized
  .withColumn("became_active", min($"login_date").over(userSessionWindow))
  .drop("session")
With a dataset defined as:
val df = Seq(
  ("SirChillingtonIV", "2012-01-04"),
  ("Booooooo99900098", "2012-01-04"),
  ("Booooooo99900098", "2012-01-06"),
  ("OprahWinfreyJr", "2012-01-10"),
  ("SirChillingtonIV", "2012-01-11"),
  ("SirChillingtonIV", "2012-01-14"),
  ("SirChillingtonIV", "2012-08-11")
).toDF("user_name", "login_date")
Result:
+----------------+----------+-------------+
|       user_name|login_date|became_active|
+----------------+----------+-------------+
|  OprahWinfreyJr|2012-01-10|   2012-01-10|
|SirChillingtonIV|2012-01-04|   2012-01-04| <- The first session for user
|SirChillingtonIV|2012-01-11|   2012-01-11| <- The second session for user
|SirChillingtonIV|2012-01-14|   2012-01-11|
|SirChillingtonIV|2012-08-11|   2012-08-11| <- The third session for user
|Booooooo99900098|2012-01-04|   2012-01-04|
|Booooooo99900098|2012-01-06|   2012-01-04|
+----------------+----------+-------------+
Refactoring the above answer to work with Pyspark
Using Pyspark, you can do it as below.
Create the data frame
df = sqlContext.createDataFrame(
[
("SirChillingtonIV", "2012-01-04"),
("Booooooo99900098", "2012-01-04"),
("Booooooo99900098", "2012-01-06"),
("OprahWinfreyJr", "2012-01-10"),
("SirChillingtonIV", "2012-01-11"),
("SirChillingtonIV", "2012-01-14"),
("SirChillingtonIV", "2012-08-11")
],
("user_name", "login_date"))
The above code creates a dataframe as shown below:
+----------------+----------+
| user_name|login_date|
+----------------+----------+
|SirChillingtonIV|2012-01-04|
|Booooooo99900098|2012-01-04|
|Booooooo99900098|2012-01-06|
| OprahWinfreyJr|2012-01-10|
|SirChillingtonIV|2012-01-11|
|SirChillingtonIV|2012-01-14|
|SirChillingtonIV|2012-08-11|
+----------------+----------+
Now we first want to find out whether the difference between consecutive login_date values is more than 5 days. To do this, do as below.
Required imports
from pyspark.sql import functions as f
from pyspark.sql import Window
# defining window partitions
login_window = Window.partitionBy("user_name").orderBy("login_date")
session_window = Window.partitionBy("user_name", "session")
session_df = df.withColumn(
    "session",
    f.sum(
        (f.coalesce(f.datediff("login_date", f.lag("login_date", 1).over(login_window)), f.lit(0)) > 5).cast("int")
    ).over(login_window)
)
When we run the above line of code, if datediff returns NULL (which happens for a user's first login, since there is no previous row), the coalesce function replaces the NULL with 0. The resulting session_df looks like this:
+----------------+----------+-------+
| user_name|login_date|session|
+----------------+----------+-------+
| OprahWinfreyJr|2012-01-10| 0|
|SirChillingtonIV|2012-01-04| 0|
|SirChillingtonIV|2012-01-11| 1|
|SirChillingtonIV|2012-01-14| 1|
|SirChillingtonIV|2012-08-11| 2|
|Booooooo99900098|2012-01-04| 0|
|Booooooo99900098|2012-01-06| 0|
+----------------+----------+-------+
# add became_active column by finding the 'min login_date' for each window partitionBy 'user_name' and 'session' created in above step
final_df = session_df.withColumn("became_active", f.min("login_date").over(session_window)).drop("session")
+----------------+----------+-------------+
| user_name|login_date|became_active|
+----------------+----------+-------------+
| OprahWinfreyJr|2012-01-10| 2012-01-10|
|SirChillingtonIV|2012-01-04| 2012-01-04|
|SirChillingtonIV|2012-01-11| 2012-01-11|
|SirChillingtonIV|2012-01-14| 2012-01-11|
|SirChillingtonIV|2012-08-11| 2012-08-11|
|Booooooo99900098|2012-01-04| 2012-01-04|
|Booooooo99900098|2012-01-06| 2012-01-04|
+----------------+----------+-------------+