This document outlines a high-level roadmap for the development of a MATLAB Interface for Apache Arrow, which enables MATLAB users to interact with Arrow memory.
Apache Arrow is designed to enable a variety of high-performance columnar analytics use cases.
This design document focuses on a subset of use cases that we feel will help to lay the foundation for more advanced use cases in the future.
We envision a set of packaged (arrow.*) classes and functions allowing users to interact with key functionality from the Arrow C++ libraries using MATLAB code.
Included below is a list of example MATLAB and C++ APIs that would be exposed by the MATLAB Interface for Apache Arrow.
- `arrow.Buffer`
- `arrow.Array`
- `arrow.RecordBatch`
- `arrow.Table`
- `arrow.Field`
- `arrow.Schema`
- `arrow.type.DataType`
- `arrow.type.Float64`
- `arrow.type.String`
- `arrow.type.Date`
- `arrow.type.Time`
- `arrow.memory.getTotalBytesAllocated`
- `arrow.memory.allocateBuffer`

In order to enable interaction with the Arrow C++ libraries, the MATLAB Interface for Apache Arrow must expose associated C++ APIs for wrapping/unwrapping MATLAB mxArray data to/from appropriate Arrow C++ types.
The list below provides a few brief examples of what these C++ APIs might look like (intended to be consistent with the rest of the Arrow ecosystem).
- `arrow::matlab::is_array`
- `arrow::matlab::is_record_batch`
- `arrow::matlab::is_table`
- `arrow::matlab::unwrap_array`
- `arrow::matlab::wrap_array`
- `arrow::matlab::unwrap_record_batch`
- `arrow::matlab::wrap_record_batch`
- `arrow::matlab::unwrap_table`
- `arrow::matlab::wrap_table`

A MATLAB developer could create an arrow.Array from an “ordinary” MATLAB array (e.g. a numeric row vector of type double). They could then operate on this array in a variety of different ways (e.g. indexing/slicing, getting its type/class, clearing it from the workspace, etc.). The arrow.array “factory function” returns a type-specific, concrete subclass of the abstract arrow.Array class based on the MATLAB type of the input array. For example, passing a double array to the arrow.array function will return a corresponding arrow.Float64Array.
Note: MATLAB missing values (e.g. NaN, NaT, <undefined>) are automatically converted into Arrow NULL values upon construction of an arrow.Array subclass instance.
```matlab
>> A = randi(100, 1, 5)
A =
    82    91    13    92    64
>> class(A)
ans =
    'double'
>> A(4) = NaN;          % Set the fourth element to NaN.
>> AA = arrow.array(A); % Create an arrow.Array from A.
>> class(AA)
ans =
    'arrow.Float64Array'
>> AA(3:5)              % Extract elements at indices 3 to 5 from AA.
ans =
    13    <NULL>    64
>> clear AA;            % Clear AA from the workspace and release the Arrow C++ memory.
```
To serialize MATLAB data to a file on disk (e.g. Feather, Parquet), a MATLAB developer could start by constructing an arrow.Table using one of several different approaches.
They could directly convert from an existing MATLAB table to an arrow.tabular.Table using a function like arrow.table.
```matlab
>> Weight = [10; 24; 10; 12; 18];
>> Radius = [80; 135; 65; 70; 150];
>> Density = [10.2; 20.5; 11.2; 13.7; 17.8];
% Create a MATLAB `table`
>> T = table(Weight, Radius, Density);
% Create an `arrow.tabular.Table` from the MATLAB `table`
>> AT = arrow.table(T);
```
To serialize the arrow.Table, AT, to a file (e.g. Feather) on disk, the user could then instantiate an arrow.internal.io.feather.Writer.
```matlab
% Make an `arrow.tabular.RecordBatch` from the `arrow.tabular.Table` created in the previous step
>> recordBatch = arrow.recordBatch(AT);
>> filename = "data.feather";
% Write the `arrow.tabular.RecordBatch` to disk as a Feather V1 file named `data.feather`
>> writer = arrow.internal.io.feather.Writer(filename);
>> writer.write(recordBatch);
```
The Feather V1 file could then be read and operated on by an external process written in another language, such as Rust or Go. To read the file back into MATLAB, the user could instantiate an arrow.internal.io.feather.Reader.
```matlab
>> reader = arrow.internal.io.feather.Reader(filename);
% Read in the first RecordBatch
>> newBatch = reader.read();
% Create a MATLAB `table` from the `arrow.tabular.RecordBatch`
>> AT = table(newBatch);
```
To add support for writing to Feather V1 files, an advanced MATLAB user could use the MATLAB and C++ APIs offered by the MATLAB Interface for Apache Arrow to create arrow.internal.io.feather.Writer.
They would need to author a MEX function (e.g. featherwriteMEX), which can be called directly by MATLAB code. Within their MEX function, they could use arrow::matlab::unwrap_table to convert the MATLAB representation of the Arrow memory (arrow.Table) into the equivalent C++ representation (arrow::Table). Once the arrow.Table has been “unwrapped” into a C++ arrow::Table, it can be passed to the appropriate Arrow C++ library API for writing to a Feather file (arrow::ipc::feather::WriteTable).
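As a rough sketch of how these pieces might fit together, the MATLAB-visible portion of arrow.internal.io.feather.Writer could be a thin class that simply forwards its inputs to the featherwriteMEX MEX function, which performs the unwrapping and calls the Arrow C++ libraries internally. The class shape, property names, and MEX function signature below are illustrative assumptions, not a final design.

```matlab
% Hypothetical sketch: a thin MATLAB wrapper class that delegates Feather V1
% writing to the featherwriteMEX MEX function described above.
classdef Writer
    properties (SetAccess = immutable)
        Filename (1, 1) string
    end

    methods
        function obj = Writer(filename)
            obj.Filename = filename;
        end

        function write(obj, tabularObj)
            % featherwriteMEX "unwraps" the MATLAB representation of the Arrow
            % memory (via arrow::matlab::unwrap_table) and passes the resulting
            % arrow::Table to arrow::ipc::feather::WriteTable.
            featherwriteMEX(obj.Filename, tabularObj);
        end
    end
end
```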
An analogous workflow could be followed to create arrow.internal.io.feather.Reader to enable reading from Feather V1 files.
Ultimately, many of the APIs exposed by the MATLAB Interface for Apache Arrow are targeted at advanced MATLAB users. By leveraging these building blocks, advanced MATLAB users can create high-level interfaces, which are useful to everyday MATLAB users. An example of such a high-level interface would be featherwrite, intended to make it easy to write Feather files. A diagram summarizing the overall workflow and specific pieces an advanced user would need to author to create such a high-level interface is included below.
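For illustration, a minimal featherwrite function built from the building blocks above might look like the following sketch; the exact function signature and internal class names are assumptions based on the examples earlier in this document.

```matlab
% Minimal sketch of a high-level featherwrite convenience function for everyday
% MATLAB users, composed from the lower-level interfaces shown earlier.
function featherwrite(filename, T)
    % Convert the MATLAB table into Arrow memory.
    arrowTable = arrow.table(T);
    recordBatch = arrow.recordBatch(arrowTable);

    % Delegate the actual Feather V1 serialization to the internal writer class.
    writer = arrow.internal.io.feather.Writer(filename);
    writer.write(recordBatch);
end
```

An everyday user could then simply call featherwrite("data.feather", T) on a MATLAB table T, without needing to know about the underlying Arrow classes.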
Arrow supports several approaches to sharing memory locally.
Roughly speaking, local memory sharing workflows can be divided into two categories:

1. In-process memory sharing
2. Out-of-process memory sharing
MATLAB supports running Python code within the MATLAB process. In theory, because MATLAB and Python can share the same virtual address space, users should be able to share Arrow memory efficiently between MATLAB and PyArrow code. The Apache Arrow C Data Interface defines a lightweight C API for sharing Arrow data and metadata between multiple languages running within the same virtual address space.
To share a MATLAB arrow.Array with PyArrow efficiently, a user could call its export method to export the Arrow memory wrapped by the arrow.Array to the C Data Interface format, which consists of two C-style structs, ArrowArray and ArrowSchema, representing the Arrow data and its associated metadata.
The memory addresses of the ArrowArray and ArrowSchema structs populated by the export call can then be passed to Python directly, without making any copies of the underlying Arrow data structures they refer to. A user can wrap the data pointed to by the ArrowArray struct (which is already in the Arrow Columnar Format), extract the necessary metadata from the ArrowSchema struct, and create a pyarrow.Array by using the static method pyarrow.Array._import_from_c.
Multiple lines of Python code are required to import the Arrow array from MATLAB. Therefore, the pyrunfile function, which runs a Python script defined in an external file, can be used.
```python
# Filename: import_from_c.py
# Note: This file is located in the same directory as the MATLAB file.
import pyarrow as pa

array = pa.Array._import_from_c(arrayMemoryAddress, schemaMemoryAddress)
```
```matlab
% Create a MATLAB arrow.Array.
>> AA = arrow.array([1, 2, 3, 4, 5]);
% Create C Data Interface C-style structs for the `arrow.array.Array` values and schema.
>> cArray = arrow.c.Array();
>> cSchema = arrow.c.Schema();
% Export the MATLAB arrow.Array to the C Data Interface format, populating the
% required ArrowArray and ArrowSchema C-style structs.
>> AA.export(cArray.Address, cSchema.Address);
% Import the memory addresses of the C Data Interface format structs to create a pyarrow.Array.
>> PA = pyrunfile("import_from_c.py", "array", arrayMemoryAddress=cArray.Address, schemaMemoryAddress=cSchema.Address);
```
Conversely, a user can create an Arrow array using PyArrow and share it with MATLAB. To do this, they can call the method _export_to_c to export a pyarrow.Array to the C Data Interface format.
NOTE: Because the Python methods _export_to_c and _import_from_c have names beginning with an underscore, they cannot be called directly from MATLAB; MATLAB member function and variable names are not allowed to start with an underscore.
To call _export_to_c on the pyarrow array, pyrunfile can (again) be used to execute a Python script containing functions and variables whose names start with an underscore.
The memory addresses of the ArrowArray and ArrowSchema structs populated by the call to _export_to_c can then be used by the static method arrow.array.Array.import to construct a MATLAB arrow.Array with zero copies.
```python
# Filename: export_to_c.py
# Note: This file is located in the same directory as the MATLAB file.
import pyarrow as pa

PA._export_to_c(arrayMemoryAddress, schemaMemoryAddress)
```
```matlab
% Make a pyarrow.Array.
>> PA = py.pyarrow.array([1, 2, 3, 4, 5]);
% Create ArrowArray and ArrowSchema C-style structs adhering to the Arrow C Data Interface format.
>> cArray = arrow.c.Array();
>> cSchema = arrow.c.Schema();
% Export the pyarrow.Array to the C Data Interface format, populating the required
% ArrowArray and ArrowSchema structs.
>> pyrunfile("export_to_c.py", PA=PA, arrayMemoryAddress=cArray.Address, schemaMemoryAddress=cSchema.Address);
% Import the C Data Interface structs to create a MATLAB arrow.Array.
>> AA = arrow.array.Array.import(cArray, cSchema);
```
MATLAB supports running Python code in a separate process. A user could leverage the MATLAB Interface for Apache Arrow to share Arrow memory between MATLAB and PyArrow running in a separate Python process using one of the approaches described below.
For large tables used in a multi-process “data processing pipeline”, a user could serialize their arrow.Table to the Arrow IPC File Format. This file could then be memory-mapped (zero-copy) by PyArrow running in a separate process to read the data with minimal overhead. Because the Arrow IPC File Format is a 1:1 mapping of the in-memory Arrow format onto disk, memory-mapping is highly performant: no custom deserialization or conversion is required to construct a pyarrow.Table.
```matlab
% Create a MATLAB arrow.Table.
>> Var1 = arrow.array(["foo", "bar", "baz"]);
>> Var2 = arrow.array([today, today + 1, today + 2]);
>> Var3 = arrow.array([10, 20, 30]);
>> AT = arrow.Table(Var1, Var2, Var3);
% Write the MATLAB arrow.Table to the Arrow IPC File Format on disk.
>> recordBatch = arrow.recordBatch(AT);
>> filename = "data.arrow";
% Open `data.arrow` as an IPC file
>> writer = arrow.io.ipc.RecordBatchFileWriter(filename, recordBatch.Schema);
% Write the `RecordBatch` to `data.arrow`
>> writer.writeRecordBatch(recordBatch);
% Close the writer -- don't forget this step!
>> writer.close()
% Run Python in a separate process.
>> pyenv("ExecutionMode", "OutOfProcess");
% Memory map the Arrow IPC File.
>> memoryMappedFile = py.pyarrow.memory_map("data.arrow");
% Construct a pyarrow.ipc.RecordBatchFileReader to read the Arrow IPC File.
>> recordBatchFileReader = py.pyarrow.ipc.open_file(memoryMappedFile);
% Read all record batches from the Arrow IPC File in one shot and return a pyarrow.Table.
>> PAT = recordBatchFileReader.read_all()
```
To ensure code quality, we would like to include the following testing infrastructure, at a minimum:
Note: To test internal C++ code, we can use a MEX function to call the C++ code from a MATLAB Class-Based Unit Test.
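As a minimal sketch of this approach, a MATLAB Class-Based Unit Test could exercise internal C++ code through a small MEX "test driver" function. The MEX function name (testArrowBufferMEX), its arguments, and its return value are hypothetical and shown only to illustrate the pattern.

```matlab
% Hypothetical sketch: a MATLAB Class-Based Unit Test that calls internal C++
% code via a MEX test driver function.
classdef tArrowBuffer < matlab.unittest.TestCase
    methods (Test)
        function bufferAllocationSucceeds(testCase)
            % The hypothetical MEX function invokes the internal C++ code under
            % test and returns a logical status flag for the test to verify.
            status = testArrowBufferMEX("allocate", 64);
            testCase.verifyTrue(status);
        end
    end
end
```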
More information on testing is included in Testing Guidelines.
To ensure usability, discoverability, and accessibility, we would like to include high quality documentation for the MATLAB Interface for Apache Arrow.
Specific areas of documentation would include:
We would ideally like to make it as easy as possible for MATLAB users to install the MATLAB Interface for Apache Arrow without the need to compile MEX functions or perform any other manual configuration steps.
In MATLAB, users normally install optional software packages via the Add-On Explorer. This workflow is analogous to the way a JavaScript user would install the apache-arrow package via the npm package manager or the way a Rust user would install the arrow crate via the cargo package manager.
In the short term, in the absence of an easily installable MATLAB Add-On, we plan to maintain up-to-date, clearly explained build and installation instructions for recent versions of MATLAB on GitHub.
In addition, we'd like to include pre-built MEX functions for Windows, macOS, and Linux that get built regularly via CI workflows. This would allow users to try out the latest functionality without having to manually build the MEX interfaces from scratch.
The table below provides a high-level roadmap for the development of specific capabilities in the MATLAB Interface for Apache Arrow.
| Capability | Use Case | Timeframe |
|---|---|---|
| Arrow Memory Interaction | UC1 | Near Term |
| File Reading/Writing | UC2 | Near Term |
| In/Out-of-Process Memory Sharing | UC3 | Mid Term |