      'tableId': 'example_basic'
    }
  }
  # Set up the job here.
  # load[property] = value
  load_config['schema'] = {
    'fields': [
      {'name': 'string_f', 'type': 'STRING'},
      {'name': 'boolean_f', 'type': 'BOOLEAN'},
      {'name': 'integer_f', 'type': 'INTEGER'},
      {'name': 'float_f', 'type': 'FLOAT'},
      {'name': 'timestamp_f', 'type': 'TIMESTAMP'}
    ]
  }
  load_config['sourceUris'] = [
    'gs://bigquery-e2e/chapters/06/sample.csv',
  ]
  # End of job configuration.
  run_load.start_and_wait(service.jobs(),
                          auth.PROJECT_ID,
                          load_config)

if __name__ == '__main__':
  main()
Moving Bytes
Fundamentally, loading data means shipping your data, encoded as bytes,
to BigQuery and having the service interpret those bytes and turn them
into records that faithfully represent the data you want to analyze. We
start by describing how to transfer data into the service. BigQuery has
a few different mechanisms for receiving it:
• Google Cloud Storage
• Resumable uploads
• Multipart HTTP requests
The following sections describe the strengths and limitations of each
mechanism and how to use them.
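Before looking at the transfer mechanisms individually, it helps to separate the job configuration from the transport: whichever mechanism you use, the load job body you send to the jobs API has the same shape. The sketch below assembles the configuration from the earlier listing programmatically; the helper name `make_load_config` is our own invention, but the keys (`destinationTable`, `schema`, `sourceUris`) are the ones BigQuery's load configuration uses.

```python
# Sketch: assemble a BigQuery load-job configuration that reads from
# Google Cloud Storage. The helper name make_load_config is hypothetical;
# the dictionary keys match BigQuery's load configuration.

def make_load_config(project_id, dataset_id, table_id, gcs_uris, fields):
    """Return the 'load' portion of a BigQuery job configuration."""
    return {
        'destinationTable': {
            'projectId': project_id,
            'datasetId': dataset_id,
            'tableId': table_id,
        },
        'schema': {'fields': fields},
        'sourceUris': list(gcs_uris),
    }

# Example usage mirroring the listing above (project id is illustrative).
config = make_load_config(
    'bigquery-e2e', 'ch06', 'example_basic',
    ['gs://bigquery-e2e/chapters/06/sample.csv'],
    [{'name': 'string_f', 'type': 'STRING'},
     {'name': 'timestamp_f', 'type': 'TIMESTAMP'}])
```

Only the `sourceUris` part changes across mechanisms: for Cloud Storage they are `gs://` URIs, while for resumable and multipart uploads the bytes travel in the HTTP request itself and no URIs are needed.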