Simba JDBC driver for Cloud Spanner used with the Spark JDBC DataFrame reader

I am using the Simba Technologies Inc. JDBC driver to connect to Google Cloud Spanner. It works as expected with java.sql. However, when I try to use the Simba JDBC driver with the Spark JDBC reader to read query output as a DataFrame, it gives wrong output.

Here is the Spanner table:

UserID  UserName
1   Vaijnath
2   Ganesh
3   Rahul


Metadata:
UserID (String)
UserName (String)

I am executing the query: SELECT * FROM users

This query retrieves the correct data when I use the Simba JDBC driver with java.sql, but not when I use it with the Spark SQL JDBC reader.

It returns the DataFrame as:

+------+--------+
|UserID|UserName|
+------+--------+
|UserID|UserName|
|UserID|UserName|
|UserID|UserName|
+------+--------+


As we can see, it returns the correct metadata and row count, but the rows contain the column names instead of the data.

Here is the code I'm using:

import java.util.Properties
import org.apache.spark.sql.{DataFrame, SparkSession}

object spannerIn {
  val sparkSession = SparkSession
    .builder()
    .appName("Spark SQL basic example")
    .master("local")
    .config("spark.sql.warehouse.dir", "file:///tmp")
    .config("spark.sql.shuffle.partitions", 1)
    .getOrCreate()

  val properties = new Properties()
  properties.setProperty("user", "")
  properties.setProperty("password", "")
  properties.setProperty("driver", "com.simba.cloudspanner.core.jdbc42.CloudSpanner42Driver")

  val connectionURL = "jdbc:cloudspanner://localhost;Project=abc;Instance=pqr;Database=xyz;PvtKeyPath=FilePath"
  val selectQuery = "(select * from users)"

  def main(args: Array[String]): Unit = {
    val df = createJdbcDataframe()
    df.show()
  }

  def createJdbcDataframe(): DataFrame = {
    sparkSession.read.jdbc(connectionURL, selectQuery, properties)
  }
}


My question is: can I use the Simba JDBC driver with Spark? If so, what additional things do I need to add? Any help is appreciated.


1 answer


This is because Spark, by default, quotes all identifiers using the double quote ("), which means the following query is generated:

SELECT "UserID", "UserName" FROM USERS


Cloud Spanner interprets this as selecting two fixed string values. It is essentially the same as the following in most other databases:

SELECT 'UserID', 'UserName' FROM USERS


Google Cloud Spanner uses backticks (`) to quote identifiers and expects this:



SELECT `UserID`, `UserName` FROM USERS


To fix this, you need to register a specific JDBC dialect for Google Cloud Spanner that quotes identifiers with backticks, like this:

    Class.forName("nl.topicus.jdbc.CloudSpannerDriver");
    SparkSession spark = SparkSession.builder().appName("Java Spark SQL basic example")
                .config("spark.some.config.option", "some-value").master("local").getOrCreate();
    String sparkURL = "jdbc:cloudspanner://localhost;Project=project-id;Instance=instance-id;Database=db;PvtKeyPath=pathToKeyFile.json";
    JdbcDialects.registerDialect(new JdbcDialect()
    {
        private static final long serialVersionUID = 1L;

        @Override
        public boolean canHandle(String url)
        {
            return url.toLowerCase().startsWith("jdbc:cloudspanner:");
        }

        @Override
        public String quoteIdentifier(String column)
        {
            return "`" + column + "`";
        }
    });
    Dataset<Row> dataset = spark.read().jdbc(sparkURL, "ACCOUNT", new Properties());
    dataset.show();


Note that I have not tested the above with the Simba driver, only with this driver: https://github.com/olavloite/spanner-jdbc. I think it should work with the Simba driver too.
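For reference, the same fix adapted to the Scala code in the question might look like the sketch below. This is an untested adaptation: the connection URL, driver class, and query are taken from the question, and the key point is that the dialect must be registered before the first `read.jdbc` call so Spark generates ``SELECT `UserID`, `UserName` ...`` instead of `SELECT "UserID", "UserName" ...`:

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

object SpannerDialectExample {
  def main(args: Array[String]): Unit = {
    // Register a dialect that quotes identifiers with backticks,
    // overriding Spark's default double-quote quoting.
    JdbcDialects.registerDialect(new JdbcDialect {
      override def canHandle(url: String): Boolean =
        url.toLowerCase.startsWith("jdbc:cloudspanner:")

      override def quoteIdentifier(column: String): String =
        s"`$column`"
    })

    val spark = SparkSession.builder()
      .appName("Spark SQL basic example")
      .master("local")
      .getOrCreate()

    val properties = new Properties()
    properties.setProperty("driver", "com.simba.cloudspanner.core.jdbc42.CloudSpanner42Driver")

    // Connection details as in the question (placeholders, not real values).
    val connectionURL = "jdbc:cloudspanner://localhost;Project=abc;Instance=pqr;Database=xyz;PvtKeyPath=FilePath"
    val df = spark.read.jdbc(connectionURL, "(select * from users)", properties)
    df.show()
  }
}
```

Because the registered dialect's `canHandle` matches on the `jdbc:cloudspanner:` URL prefix, it only affects Spanner connections; other JDBC sources keep their normal quoting.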
