-X GET \
  "${TABLES_URL}/nested/data?prettyPrint=false"
{"kind":"bigquery#tableDataList", … "totalRows":"3",
 "rows":[
  {"f":[{"v":"1"},{"v":{"f":[{"v":"2.0"},{"v":[{"v":"foo"}]}]}}]},
  {"f":[{"v":"2"},{"v":{"f":[{"v":"4.0"},{"v":[{"v":"bar"}]}]}}]},
  {"f":[{"v":"3"},{"v":{"f":[{"v":"8.0"},{"v":[{"v":"baz"},{"v":"qux"}]}]}}]}]}
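The response above shows the f/v encoding: each row is an object with an "f" list of cells, each cell is a {"v": ...} wrapper, and a value may itself be a nested record (another {"f": [...]}) or a repeated field (a list of {"v": ...} wrappers). A minimal sketch of a decoder that turns one such row into a plain Python dict; the helper names and the example schema fields are illustrative, not part of the API:

```python
def decode_value(v, field):
    """Decode a single "v" cell using its schema field description."""
    if field.get('mode') == 'REPEATED' and isinstance(v, list):
        # Repeated fields arrive as a list of {"v": ...} wrappers.
        return [decode_value(item['v'], dict(field, mode='NULLABLE'))
                for item in v]
    if field.get('type') == 'RECORD':
        # Nested records are rows themselves: recurse with the sub-schema.
        return decode_row(v, field['fields'])
    return v

def decode_row(row, fields):
    """Turn one {"f": [...]} row into a dict keyed by field name."""
    return {field['name']: decode_value(cell['v'], field)
            for field, cell in zip(fields, row['f'])}
```

With a hypothetical schema of an integer field plus a record containing a float and a repeated string, the first row above decodes to something like {'a': '1', 'r': {'b': '2.0', 's': ['foo']}}.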
TableData Ordering
Loosely speaking, TableData.list() returns rows in order from oldest
data to newest data. That is, if you have a table to which you append new
values daily and page through it using TableData.list(), you get
the data from the first day first, then the next day, and so on.
There is a caveat to the ordering rules, however. BigQuery periodically
runs a background operation to optimize table representation for
querying. When you do a lot of small imports, the internal
representation of the table is less efficient to query against. The table
optimization process can reorder data, but does so only with older data
in the table. Data that was added within the last seven days will never
be reordered.
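Paging through a table this way is the usual token loop: request a page, emit its rows, and repeat while the response carries a pageToken. A sketch of that loop, where list_page stands in for whatever client call actually issues the TableData.list() request (it is not a real API name):

```python
def iterate_rows(list_page):
    """Yield every row in a table, following pageTokens until exhausted.

    list_page(page_token) must return one TableData.list() response:
    a dict that may contain "rows" and "pageToken" keys.
    """
    page_token = None
    while True:
        response = list_page(page_token)
        for row in response.get('rows', []):
            yield row
        # A missing pageToken means this was the last page.
        page_token = response.get('pageToken')
        if not page_token:
            return
```

Because rows come back oldest-first, a consumer that remembers the last token it saw can resume paging later and pick up newly appended data.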
Reading data from tables in a dynamic programming language, such as
Python, is quite convenient. Dynamic languages make it easy to handle cases
in which you don't know the types ahead of time. In Python, you can parse
the JSON response into a dict object and then access the fields
as you would any dict. Here is Python code to iterate through query
results and print them:
def print_results(results):
    fields = results['schema']['fields']
    rows = results.get('rows', [])
    for row in rows:
        # Pair each cell with its schema field and print name=value.
        values = ['%s=%s' % (field['name'], cell['v'])
                  for field, cell in zip(fields, row['f'])]
        print(', '.join(values))