Now consider the second set of limits, or quotas, that apply to the rate of
load job creation and execution:
• 20 load jobs per project in RUNNING state
• 10,000 load jobs per project per day
• 1,000 load jobs per destination table per day
To understand the first limit, remember that every job proceeds through
the states PENDING, RUNNING, and DONE. When your job is PENDING, it is
queued but no processing has started; the actual work is performed in the
RUNNING state. The system limits the number of load jobs concurrently in
the RUNNING state. If additional load jobs are submitted while 20 jobs are
already running, they remain in the PENDING state until some of the running
jobs complete. This effectively caps the maximum load throughput available
to a single project. There is a nontrivial amount of overhead associated
with each load job, so to achieve the full throughput available to a
project, you need to issue load jobs larger than about 10 GB of uncompressed
data. This is only a rough guideline because it is perfectly reasonable to
initiate load jobs with smaller input sizes, but be aware that once you drop
below 1 MB, a significant fraction of the running time will be spent setting
up and tearing down the job, so your effective throughput of bytes loaded
will be lower.
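As a rough illustration of how these states appear in practice, the following sketch submits a load job and polls its state until it reaches DONE. It assumes the google-cloud-bigquery Python client library; the Cloud Storage URI, dataset, and table names are hypothetical placeholders rather than anything from the original example.

```python
# Sketch: submit a load job and watch it move through PENDING -> RUNNING -> DONE.
import time
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,
)

# Hypothetical source file and destination table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/data/events.csv",
    "my_project.my_dataset.events",
    job_config=job_config,
)

# The job sits in PENDING while the project already has 20 load jobs
# in the RUNNING state, then moves to RUNNING and finally DONE.
while load_job.state != "DONE":
    print("state:", load_job.state)
    time.sleep(5)
    load_job.reload()  # refresh job metadata from the service

print("state:", load_job.state)
if load_job.error_result:
    print("load failed:", load_job.error_result)
```

Because at most 20 load jobs per project can be in the RUNNING state at once, a loader built along these lines would typically batch many files into a single job (a load job accepts a list of source URIs) rather than submitting a separate job per file.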
The next two limits on total load jobs per day are self-explanatory, but they
have significant ramifications. Consider the following scenarios:
• 1,000 separate tables that each need to be updated once per hour
• A single table that needs to be updated by 10 independent processes
every 5 minutes
Both these load requirements run up against the quota limits before
one-half of the day is over: updating 1,000 tables once per hour creates
24,000 load jobs per day and exhausts the 10,000-job project-level quota
after 10 hours, while 10 processes loading a single table every 5 minutes
create 120 jobs per hour against that table and exhaust the 1,000-job
per-table quota after roughly 8 hours. When you attempt to create a load
job that violates these limits, the job creation request fails with an error
that has reason code quotaExceeded. Retrying the job will not help until the
quota resets. These limits are an indication that load jobs are not intended
for small, frequent table updates. If you run up against these limits, it is
likely that restructuring your tables or load operation can address the
issue. For example, in the 1,000-table scenario described, it may be feasible
to combine the 1,000 tables into a single table with an additional field that
distinguishes which of the original tables each row belongs to.
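For concreteness, the following sketch shows one way such a quota failure might be detected in code. It assumes the google-cloud-bigquery Python client library; the source URI and table name are hypothetical, and the error-inspection details reflect how that library generally surfaces 403 responses (quota violations carry the reason quotaExceeded) rather than anything shown in the original text.

```python
# Sketch: detect a quotaExceeded failure when running a load job.
from google.api_core import exceptions
from google.cloud import bigquery

client = bigquery.Client()

try:
    job = client.load_table_from_uri(
        "gs://example-bucket/data/part-0001.csv",   # hypothetical source
        "my_project.my_dataset.events",             # hypothetical destination
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV
        ),
    )
    job.result()  # wait for completion; raises if the job fails
except exceptions.Forbidden as err:
    # err.errors is typically a list of error dicts; a daily-limit
    # violation carries reason == "quotaExceeded". Retrying immediately
    # will not help; wait until the quota window resets.
    if any(e.get("reason") == "quotaExceeded" for e in err.errors or []):
        print("Load quota exceeded; defer and batch further loads.")
    else:
        raise
```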