Flink: Task not serializable

I'm trying to run a basic PageRank example in Flink with a slight change (only in how the input file is read; everything else is the same). I'm getting a "Task not serializable" error, and below is the relevant part of the error output:

at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:179)
at org.apache.flink.api.scala.ClosureCleaner$.clean(ClosureCleaner.scala:171)

Below is my code:

// imports and case classes needed to compile the snippet (assuming the
// 0.9-era Flink Scala and Table APIs this code is written against)
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
import org.apache.flink.api.java.aggregation.Aggregations.SUM
import org.apache.flink.util.Collector

case class Link(sourceId: Long, targetId: Long)
case class Id(pageId: Long)
case class Page(pageId: Long, rank: Double)
case class AdjacencyList(sourceId: Long, targetIds: Array[Long])

object hpdb {

  def main(args: Array[String]) {

    val env = ExecutionEnvironment.getExecutionEnvironment

    val maxIterations = 10000

    val DAMPENING_FACTOR: Double = 0.85

    val EPSILON: Double = 0.0001

    val outpath = "/home/vinoth/bigdata/assign10/pagerank.csv"

    val links = env.readCsvFile[Tuple2[Long, Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
                fieldDelimiter = "\t", includedFields = Array(1, 4)).as('sourceId, 'targetId).toDataSet[Link] // source and target ids

    val pages = env.readCsvFile[Tuple1[Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
      fieldDelimiter = "\t", includedFields = Array(1)).as('pageId).toDataSet[Id] // page id

    val noOfPages = pages.count()

    val pagesWithRanks = pages.map(p => Page(p.pageId, 1.0 / noOfPages))

    val adjacencyLists = links
      // initialize lists: ._1 is the source id and ._2 is the target id
      .map(e => AdjacencyList(e.sourceId, Array(e.targetId)))
      // concatenate lists
      .groupBy("sourceId").reduce {
        (l1, l2) => AdjacencyList(l1.sourceId, l1.targetIds ++ l2.targetIds)
      }

    // start iteration

    val finalRanks = pagesWithRanks.iterateWithTermination(maxIterations) {
      // the error output points here
      currentRanks =>
        val newRanks = currentRanks
          // distribute ranks to target pages
          .join(adjacencyLists).where("pageId").equalTo("sourceId") {
          (page, adjacent, out: Collector[Page]) =>
            for (targetId <- adjacent.targetIds) {
              out.collect(Page(targetId, page.rank / adjacent.targetIds.length))
            }
        }

          // collect ranks and sum them up
          .groupBy("pageId").aggregate(SUM, "rank")
          // apply dampening factor
          // the error output points here
          .map { p =>
            Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / pages.count()))
          }

        // terminate if no rank update was significant
        val termination = currentRanks.join(newRanks).where("pageId").equalTo("pageId") {
          (current, next, out: Collector[Int]) =>
            // check for significant update
            if (math.abs(current.rank - next.rank) > EPSILON) out.collect(1)
        }

        (newRanks, termination)
    }

    val result = finalRanks

    // emit result
    result.writeAsCsv(outpath, "\n", " ")

    env.execute()

  }
}


Any help in the right direction is much appreciated. Thanks.


1 answer


The problem is that you are referencing the DataSet pages from within a MapFunction. This is not possible, because a DataSet is only the logical representation of a data flow and cannot be accessed at runtime.
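
For reference, this is the capture being described, excerpted from the iteration body in the question:

    .map { p =>
      // pages is a DataSet; calling pages.count() here pulls the DataSet
      // into the closure, and a DataSet cannot be serialized for shipping
      Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / pages.count()))
    }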

What you have to do to solve this problem is assign the value val pagesCount = pages.count to a variable pagesCount and refer to that variable in the MapFunction.
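
A minimal sketch of that fix, reusing the names from the question (the question's noOfPages, computed before the iteration, would serve the same purpose as pagesCount here):

    // computed once, outside the iteration; a plain Long serializes fine
    val pagesCount = pages.count()

    // inside the iteration, refer to the captured Long instead of the DataSet
    .map { p =>
      Page(p.pageId, (p.rank * DAMPENING_FACTOR) + ((1 - DAMPENING_FACTOR) / pagesCount))
    }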



What pages.count actually does is trigger the execution of the data flow graph so that the number of elements in pages can be counted. The result is then returned to your program.
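
To illustrate (a sketch under the same API assumptions as the question's code): transformations are lazy, while count is eager:

    // building the DataSet runs nothing yet; it only extends the plan
    val pages = env.readCsvFile[Tuple1[Long]]("/home/vinoth/bigdata/assign10/ppi.csv",
      fieldDelimiter = "\t", includedFields = Array(1)).as('pageId).toDataSet[Id]

    // count() triggers execution of the plan built so far and ships the
    // resulting Long back to the driver program
    val noOfPages: Long = pages.count()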
