Commit 8e8f565

feat: added support for per type and partition export for Cloud Asset API (#398)
Clients can now specify two more args when exporting assets to BigQuery.

PiperOrigin-RevId: 331912851
Source-Author: Google APIs <noreply@google.com>
Source-Date: Tue Sep 15 20:04:02 2020 -0700
Source-Repo: googleapis/googleapis
Source-Sha: 5e53d6b6dde0e72fa9510ec1d796176d128afa40
Source-Link: googleapis/googleapis@5e53d6b
1 parent bd9ef39 commit 8e8f565

5 files changed: 487 additions & 3 deletions

File tree

packages/google-cloud-asset/protos/google/cloud/asset/v1/asset_service.proto

Lines changed: 77 additions & 0 deletions
```diff
@@ -393,6 +393,83 @@ message BigQueryDestination {
   // is `FALSE` or unset and the destination table already exists, the export
   // call returns an INVALID_ARGUMENT error.
   bool force = 3;
+
+  // [partition_spec] determines whether to export to partitioned table(s) and
+  // how to partition the data.
+  //
+  // If [partition_spec] is unset or [partition_spec.partition_key] is unset or
+  // `PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to
+  // non-partitioned table(s). [force] will decide whether to overwrite
+  // existing table(s).
+  //
+  // If [partition_spec] is specified, the snapshot results will first be
+  // written to partitioned table(s) with two additional timestamp columns,
+  // readTime and requestTime, one of which will be the partition key. Second,
+  // if any destination table already exists, the export will first try to
+  // update the existing table's schema as necessary by appending additional
+  // columns. Then, if [force] is `TRUE`, the corresponding partition will be
+  // overwritten by the snapshot results (data in different partitions will
+  // remain intact); if [force] is unset or `FALSE`, the data will be appended.
+  // An error will be returned if the schema update or the data append fails.
+  PartitionSpec partition_spec = 4;
+
+  // If this flag is `TRUE`, the snapshot results will be written to one or
+  // more tables, each of which contains results of one asset type. The
+  // [force] and [partition_spec] fields will apply to each of them.
+  //
+  // Field [table] will be concatenated with "_" and the asset type names (see
+  // https://cloud.google.com/asset-inventory/docs/supported-asset-types for
+  // supported asset types) to construct per-asset-type table names, in which
+  // all non-alphanumeric characters like "." and "/" will be substituted by
+  // "_". Example: if field [table] is "mytable" and the snapshot results
+  // contain "storage.googleapis.com/Bucket" assets, the corresponding table
+  // name will be "mytable_storage_googleapis_com_Bucket". If any of these
+  // tables does not exist, a new table with the concatenated name will be
+  // created.
+  //
+  // When [content_type] in the ExportAssetsRequest is `RESOURCE`, the schema
+  // of each table will include RECORD-type columns mapped to the nested
+  // fields in the Asset.resource.data field of that asset type (up to the 15
+  // nested levels BigQuery supports
+  // (https://cloud.google.com/bigquery/docs/nested-repeated#limitations)).
+  // Fields nested deeper than 15 levels will be stored as a JSON-format
+  // string in a child column of their parent RECORD column.
+  //
+  // If an error occurs while exporting to any table, the whole export call
+  // will return an error, but export results that already succeeded will
+  // persist. Example: if exporting to table_type_A succeeds while exporting
+  // to table_type_B fails during one export call, the results in table_type_A
+  // will persist, and there will be no partial results persisting in any
+  // table.
+  bool separate_tables_per_asset_type = 5;
+}
+
+// Specifications of BigQuery partitioned table as export destination.
+message PartitionSpec {
+  // This enum determines the partition key column when exporting assets to
+  // BigQuery partitioned table(s). Note that, if the partition key is a
+  // timestamp column, the actual partition is based on its date value
+  // (expressed in UTC; see details in
+  // https://cloud.google.com/bigquery/docs/partitioned-tables#date_timestamp_partitioned_tables).
+  enum PartitionKey {
+    // Unspecified partition key. If used, it means using non-partitioned table.
+    PARTITION_KEY_UNSPECIFIED = 0;
+
+    // The time when the snapshot is taken. If specified as the partition key,
+    // the result table(s) is partitioned by the additional timestamp column,
+    // readTime. If [read_time] in ExportAssetsRequest is specified, the
+    // readTime column's value will be the same as it. Otherwise, its value
+    // will be the current time at which the snapshot is taken.
+    READ_TIME = 1;
+
+    // The time when the request is received and starts to be processed. If
+    // specified as the partition key, the result table(s) is partitioned by
+    // the requestTime column, an additional timestamp column representing
+    // when the request was received.
+    REQUEST_TIME = 2;
+  }
+
+  // The partition key for BigQuery partitioned table.
+  PartitionKey partition_key = 1;
 }

 // A Pub/Sub destination.
```
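The per-asset-type naming rule described in the comment for `separate_tables_per_asset_type` can be sketched as a small helper. This is illustrative only (not part of the generated client); it assumes the documented rule that the `table` field, an underscore, and the asset type name are concatenated, with every non-alphanumeric character replaced by `"_"`:

```typescript
// Illustrative sketch of the per-asset-type table naming rule from the
// proto comment: [table] + "_" + asset type, with every non-alphanumeric
// character (".", "/", etc.) replaced by "_".
function perAssetTypeTableName(table: string, assetType: string): string {
  return table + "_" + assetType.replace(/[^a-zA-Z0-9]/g, "_");
}

// The example from the proto comment:
console.log(perAssetTypeTableName("mytable", "storage.googleapis.com/Bucket"));
// → mytable_storage_googleapis_com_Bucket
```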

packages/google-cloud-asset/protos/protos.d.ts

Lines changed: 112 additions & 0 deletions
Some generated files are not rendered by default.
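To illustrate how a client might populate the two new fields, here is a hypothetical `outputConfig` fragment for an `exportAssets` request. The camelCase field names follow the usual Node.js proto-toolchain rendering of `asset_service.proto`; the project, dataset, and table names are placeholders:

```typescript
// Hypothetical OutputConfig fragment using the new fields added in this
// commit. Names marked "placeholder" are illustrative, not real resources.
const outputConfig = {
  bigqueryDestination: {
    dataset: "projects/my-project/datasets/my_dataset", // placeholder
    table: "mytable",                                   // placeholder
    force: true,                 // with partitionSpec set, overwrites only the
                                 // matching partition, not the whole table
    partitionSpec: {
      partitionKey: "READ_TIME", // partition on the readTime column
    },
    separateTablesPerAssetType: true, // one table per exported asset type
  },
};

console.log(JSON.stringify(outputConfig, null, 2));
```

Note the interaction documented in the proto comments: with `partitionSpec` set, `force: true` overwrites only the corresponding partition rather than the whole table, and with `separateTablesPerAssetType: true` both settings apply to each per-asset-type table.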
