spark_write_avro
Serialize a Spark DataFrame into Apache Avro format
Description
Serialize a Spark DataFrame into Apache Avro format. Note that this functionality requires the Spark connection sc
to be instantiated with either an explicitly specified Spark version (i.e., spark_connect(..., version = <version>, packages = c("avro", <other package(s)>), ...))
or a specific version of the Spark avro package to use (e.g., spark_connect(..., packages = c("org.apache.spark:spark-avro_2.12:3.0.0", <other package(s)>), ...)).
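For instance, a connection with Avro support might be set up as sketched below (the local master, Spark version, and spark-avro coordinates are illustrative assumptions; adjust them to your cluster):

library(sparklyr)

# Specify a Spark version so sparklyr can resolve a matching "avro" package
# (master and version here are assumptions for a local test)
sc <- spark_connect(master = "local", version = "3.0.0", packages = "avro")

# Alternatively, pin an explicit spark-avro artifact (example coordinates)
# sc <- spark_connect(master = "local",
#                     packages = "org.apache.spark:spark-avro_2.12:3.0.0")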
Usage
spark_write_avro(
x,
path,
avro_schema = NULL,
record_name = "topLevelRecord",
record_namespace = "",
compression = "snappy",
partition_by = NULL
)
Arguments
Argument | Description |
---|---|
x | A Spark DataFrame or dplyr operation |
path | The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3a://" and "file://" protocols. |
avro_schema | Optional Avro schema in JSON format |
record_name | Optional top level record name in write result (default: "topLevelRecord") |
record_namespace | Record namespace in write result (default: "") |
compression | Compression codec to use (default: "snappy") |
partition_by | A character vector. Partitions the output by the given columns on the file system. |
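A minimal usage sketch, assuming a connection sc created with Avro support as above (the example data set and output path are illustrative assumptions):

# Copy an example data set to Spark
mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)

# Write it out as Avro, partitioned by the "cyl" column
spark_write_avro(
  mtcars_tbl,
  path = "file:///tmp/mtcars_avro",
  compression = "snappy",
  partition_by = "cyl"
)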
See Also
Other Spark serialization routines: collect_from_rds(), spark_load_table(), spark_read_avro(), spark_read_binary(), spark_read_csv(), spark_read_delta(), spark_read_image(), spark_read_jdbc(), spark_read_json(), spark_read_libsvm(), spark_read_orc(), spark_read_parquet(), spark_read_source(), spark_read_table(), spark_read_text(), spark_read(), spark_save_table(), spark_write_csv(), spark_write_delta(), spark_write_jdbc(), spark_write_json(), spark_write_orc(), spark_write_parquet(), spark_write_source(), spark_write_table(), spark_write_text()