Loading datasets from Google BigQuery into Google Cloud Dataproc using PySpark

I am desperately trying to write a simple program that loads data from BigQuery into Spark.

The PySpark example for Google Dataproc did not work for me, so I followed these links:

BigQuery connector for pyspark via Hadoop input format example

load table from BigQuery into a Spark cluster using a pyspark script

and now I see this error from Google:

Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad request {"code": 400, "errors": [{"domain": "global", "message": "Required parameter missing", "reason": "required"}], "message": "Required parameter missing"}

I cannot figure out which input parameter I am missing in my configuration, and there is no clear documentation that covers the input parameters from a PySpark perspective.

My code is below:

import json
import pyspark

# sc: the already-initialized SparkContext (e.g. from the pyspark shell on the Dataproc cluster)
hadoopConf = sc._jsc.hadoopConfiguration()
# look up the cluster's GCS system bucket (the value is not used below)
hadoopConf.get("fs.gs.system.bucket")

conf = {
    "mapred.bq.output.project.id": "test-project-id",
    "mapred.bq.gcs.bucket": "test-bucket",
    "mapred.bq.input.project.id": "publicdata",
    "mapred.bq.input.dataset.id": "samples",
    "mapred.bq.input.table.id": "shakespeare",
}

# read the BigQuery table as JSON records, then do a word count
tableData = (
    sc.newAPIHadoopRDD(
        "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
        "org.apache.hadoop.io.LongWritable",
        "com.google.gson.JsonObject",
        conf=conf)
    .map(lambda k: json.loads(k[1]))
    .map(lambda x: (x["word"], int(x["word_count"])))
    .reduceByKey(lambda x, y: x + y))

print(tableData)
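
For reference, the conf in the connector examples I have found looks roughly like the sketch below. I copied the extra pieces (the fs.gs.project.id lookup, mapred.bq.project.id and mapred.bq.temp.gcs.path) from those examples and have not verified which of them are actually required, but my guess is that the missing "required parameter" is one of these keys:

import json
import pyspark

# in the pyspark shell sc already exists; when submitting a script, create it first
sc = pyspark.SparkContext()

# pull the project id and staging bucket from the cluster configuration
# instead of hard-coding them
bucket = sc._jsc.hadoopConfiguration().get("fs.gs.system.bucket")
project = sc._jsc.hadoopConfiguration().get("fs.gs.project.id")
input_directory = "gs://{}/hadoop/tmp/bigquery/pyspark_input".format(bucket)

conf = {
    # project that runs the job (my code above only sets ...output.project.id)
    "mapred.bq.project.id": project,
    # bucket and temp path the connector uses for its intermediate export files
    "mapred.bq.gcs.bucket": bucket,
    "mapred.bq.temp.gcs.path": input_directory,
    # the table to read
    "mapred.bq.input.project.id": "publicdata",
    "mapred.bq.input.dataset.id": "samples",
    "mapred.bq.input.table.id": "shakespeare",
}

table_data = sc.newAPIHadoopRDD(
    "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
    "org.apache.hadoop.io.LongWritable",
    "com.google.gson.JsonObject",
    conf=conf)

If someone can confirm which of the mapred.bq.* keys JsonTextBigQueryInputFormat actually treats as required, that would clear this up for me.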

      
