with the content) that is used as a separator between sections of the stream.
Each section within the stream contains headers describing the section's
content, followed by the section body, which holds the actual data.
When using this method to upload data to BigQuery, you construct the
request body to contain two MIME multipart sections. One section contains
the body of the job insert request specifying the details of the job. The
second section contains the bytes to be loaded by the job. As was the case
with Resumable Upload, you do not specify sourceUris, and you use a
different URL to indicate that the request is a multipart request rather
than a regular request:
https://www.googleapis.com/upload/bigquery/v2/projects/${PROJECT_ID}/jobs?uploadType=multipart
Here is a skeleton request highlighting the key features of the multipart
standard:
POST /upload/bigquery/v2/projects/999/jobs?uploadType=multipart HTTP/1.1
Authorization: Bearer your_auth_token
Content-Length: 67342
Content-Type: multipart/related; boundary=gc0p4Jq0M2Yt08jU534c0p

--gc0p4Jq0M2Yt08jU534c0p
Content-Type: application/json; charset=UTF-8

{
  "configuration": {
    "load": { …
    }
  }
}
--gc0p4Jq0M2Yt08jU534c0p
Content-Type: application/octet-stream
Content-Transfer-Encoding: base64

<your data base64 encoded>
--gc0p4Jq0M2Yt08jU534c0p--
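To make the mechanics concrete, here is a minimal Python sketch (not taken
from the BigQuery documentation) that assembles this kind of multipart body
by hand and posts it with the requests library. The project ID, access
token, dataset, table, and data.csv file are placeholder assumptions for
illustration.

import base64
import json
import requests

PROJECT_ID = "your-project-id"       # placeholder; use your project ID
ACCESS_TOKEN = "your_auth_token"     # placeholder; an OAuth 2.0 access token
BOUNDARY = "gc0p4Jq0M2Yt08jU534c0p"

# Job description for the first MIME section; the destination table
# names here are illustrative placeholders.
job_config = {
    "configuration": {
        "load": {
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": "example_dataset",
                "tableId": "example_table",
            },
            "sourceFormat": "CSV",
        }
    }
}

with open("data.csv", "rb") as f:    # assumed local file to load
    payload = f.read()

# Build the multipart/related body: the JSON job description first, then
# the data bytes, base64 encoded as in the skeleton above, with the
# boundary string separating the sections.
parts = [
    "--" + BOUNDARY,
    "Content-Type: application/json; charset=UTF-8",
    "",
    json.dumps(job_config),
    "--" + BOUNDARY,
    "Content-Type: application/octet-stream",
    "Content-Transfer-Encoding: base64",
    "",
    base64.b64encode(payload).decode("ascii"),
    "--" + BOUNDARY + "--",
    "",
]
body = "\r\n".join(parts)

url = ("https://www.googleapis.com/upload/bigquery/v2/projects/"
       + PROJECT_ID + "/jobs?uploadType=multipart")
response = requests.post(
    url,
    data=body.encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "multipart/related; boundary=" + BOUNDARY,
    },
)
print(response.status_code, response.json())

The requests library fills in Content-Length for you; the only constraint
on the boundary string is that it must not appear in the encoded content,
as described at the start of this section.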
And that is about all there is to say about this method, which is its main
advantage: simplicity. The response to this request will be an error if the job