Mirror of https://github.com/kubernetes-sigs/descheduler.git, synced 2026-01-28 06:29:29 +01:00
bump to k8s 1.28 beta.0
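All of the changes below live under vendor/: they fall out of updating the Kubernetes module dependencies rather than from hand-written code. A minimal sketch of the usual workflow for a bump like this, assuming descheduler's standard Go modules layout (the module list and exact version strings are illustrative, not taken from this commit):

    # Kubernetes 1.28 beta.0 is published as v0.28.0-beta.0 of the k8s.io staging modules
    go get k8s.io/api@v0.28.0-beta.0 k8s.io/apimachinery@v0.28.0-beta.0 k8s.io/client-go@v0.28.0-beta.0
    # refresh go.sum and the vendor/ tree; transitive dependencies such as
    # google.golang.org/genproto are re-vendored automatically
    go mod tidy
    go mod vendor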
202  vendor/google.golang.org/genproto/googleapis/api/LICENSE  (generated, vendored, new file)
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
1688  vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go  (generated, vendored)
File diff suppressed because it is too large.
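When the web view suppresses a large diff like this one, the full change can still be inspected from a local clone of the mirror, for example with plain git (the commit hash is a placeholder, not given on this page):

    git fetch origin
    git show <commit-sha> -- vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go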
20  vendor/google.golang.org/genproto/googleapis/api/annotations/field_behavior.pb.go  (generated, vendored)
@@ -1,4 +1,4 @@
-// Copyright 2018 Google LLC
+// Copyright 2023 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -14,8 +14,8 @@

// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
-// protoc-gen-go v1.27.1
-// protoc v3.12.2
+// protoc-gen-go v1.26.0
+// protoc v3.21.9
// source: google/api/field_behavior.proto

package annotations
@@ -149,13 +149,13 @@ var (
//
// Examples:
//
// string name = 1 [(google.api.field_behavior) = REQUIRED];
// State state = 1 [(google.api.field_behavior) = OUTPUT_ONLY];
// google.protobuf.Duration ttl = 1
// [(google.api.field_behavior) = INPUT_ONLY];
// google.protobuf.Timestamp expire_time = 1
// [(google.api.field_behavior) = OUTPUT_ONLY,
// (google.api.field_behavior) = IMMUTABLE];
//
// repeated google.api.FieldBehavior field_behavior = 1052;
E_FieldBehavior = &file_google_api_field_behavior_proto_extTypes[0]
188  vendor/google.golang.org/genproto/googleapis/api/annotations/http.pb.go  (generated, vendored)
@@ -1,4 +1,4 @@
-// Copyright 2015 Google LLC
+// Copyright 2023 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
-// protoc v3.12.2
+// protoc v3.21.9
// source: google/api/http.proto

package annotations
@@ -127,19 +127,19 @@ func (x *Http) GetFullyDecodeReservedExpansion() bool {
//
// Example:
//
// service Messaging {
// rpc GetMessage(GetMessageRequest) returns (Message) {
// option (google.api.http) = {
// get: "/v1/{name=messages/*}"
// };
// }
// }
// message GetMessageRequest {
// string name = 1; // Mapped to URL path.
// }
// message Message {
// string text = 1; // The resource content.
// }
//
// This enables an HTTP REST to gRPC mapping as below:
//
@@ -151,21 +151,21 @@ func (x *Http) GetFullyDecodeReservedExpansion() bool {
// automatically become HTTP query parameters if there is no HTTP request body.
// For example:
//
// service Messaging {
// rpc GetMessage(GetMessageRequest) returns (Message) {
// option (google.api.http) = {
// get:"/v1/messages/{message_id}"
// };
// }
// }
// message GetMessageRequest {
// message SubMessage {
// string subfield = 1;
// }
// string message_id = 1; // Mapped to URL path.
// int64 revision = 2; // Mapped to URL query parameter `revision`.
// SubMessage sub = 3; // Mapped to URL query parameter `sub.subfield`.
// }
//
// This enables a HTTP JSON to RPC mapping as below:
//
@@ -186,18 +186,18 @@ func (x *Http) GetFullyDecodeReservedExpansion() bool {
// specifies the mapping. Consider a REST update method on the
// message resource collection:
//
// service Messaging {
// rpc UpdateMessage(UpdateMessageRequest) returns (Message) {
// option (google.api.http) = {
// patch: "/v1/messages/{message_id}"
// body: "message"
// };
// }
// }
// message UpdateMessageRequest {
// string message_id = 1; // mapped to the URL
// Message message = 2; // mapped to the body
// }
//
// The following HTTP JSON to RPC mapping is enabled, where the
// representation of the JSON in the request body is determined by
@@ -213,19 +213,18 @@ func (x *Http) GetFullyDecodeReservedExpansion() bool {
// request body. This enables the following alternative definition of
// the update method:
//
// service Messaging {
// rpc UpdateMessage(Message) returns (Message) {
// option (google.api.http) = {
// patch: "/v1/messages/{message_id}"
// body: "*"
// };
// }
// }
// message Message {
// string message_id = 1;
// string text = 2;
// }
-//
//
// The following HTTP JSON to RPC mapping is enabled:
//
@@ -243,20 +242,20 @@ func (x *Http) GetFullyDecodeReservedExpansion() bool {
// It is possible to define multiple HTTP methods for one RPC by using
// the `additional_bindings` option. Example:
//
// service Messaging {
// rpc GetMessage(GetMessageRequest) returns (Message) {
// option (google.api.http) = {
// get: "/v1/messages/{message_id}"
// additional_bindings {
// get: "/v1/users/{user_id}/messages/{message_id}"
// }
// };
// }
// }
// message GetMessageRequest {
// string message_id = 1;
// string user_id = 2;
// }
//
// This enables the following two alternative HTTP JSON to RPC mappings:
//
@@ -268,28 +267,31 @@ func (x *Http) GetFullyDecodeReservedExpansion() bool {
//
// ## Rules for HTTP mapping
//
-// 1. Leaf request fields (recursive expansion nested messages in the request
-// message) are classified into three categories:
-// - Fields referred by the path template. They are passed via the URL path.
-// - Fields referred by the [HttpRule.body][google.api.HttpRule.body]. They are passed via the HTTP
-// request body.
-// - All other fields are passed via the URL query parameters, and the
-// parameter name is the field path in the request message. A repeated
-// field can be represented as multiple query parameters under the same
-// name.
-// 2. If [HttpRule.body][google.api.HttpRule.body] is "*", there is no URL query parameter, all fields
+// 1. Leaf request fields (recursive expansion nested messages in the request
+// message) are classified into three categories:
+// - Fields referred by the path template. They are passed via the URL path.
+// - Fields referred by the [HttpRule.body][google.api.HttpRule.body]. They
+// are passed via the HTTP
+// request body.
+// - All other fields are passed via the URL query parameters, and the
+// parameter name is the field path in the request message. A repeated
+// field can be represented as multiple query parameters under the same
+// name.
+// 2. If [HttpRule.body][google.api.HttpRule.body] is "*", there is no URL
+// query parameter, all fields
// are passed via URL path and HTTP request body.
-// 3. If [HttpRule.body][google.api.HttpRule.body] is omitted, there is no HTTP request body, all
+// 3. If [HttpRule.body][google.api.HttpRule.body] is omitted, there is no HTTP
+// request body, all
// fields are passed via URL path and URL query parameters.
//
// ### Path template syntax
//
// Template = "/" Segments [ Verb ] ;
// Segments = Segment { "/" Segment } ;
// Segment = "*" | "**" | LITERAL | Variable ;
// Variable = "{" FieldPath [ "=" Segments ] "}" ;
// FieldPath = IDENT { "." IDENT } ;
// Verb = ":" LITERAL ;
//
// The syntax `*` matches a single URL path segment. The syntax `**` matches
// zero or more URL path segments, which must be the last part of the URL path
@@ -338,11 +340,11 @@ func (x *Http) GetFullyDecodeReservedExpansion() bool {
//
// Example:
//
// http:
// rules:
// # Selects a gRPC method and applies HttpRule to it.
// - selector: example.v1.Messaging.GetMessage
// get: /v1/messages/{message_id}/{sub.subfield}
//
// ## Special notes
//
@@ -378,13 +380,15 @@ type HttpRule struct {

// Selects a method to which this rule applies.
//
-// Refer to [selector][google.api.DocumentationRule.selector] for syntax details.
+// Refer to [selector][google.api.DocumentationRule.selector] for syntax
+// details.
Selector string `protobuf:"bytes,1,opt,name=selector,proto3" json:"selector,omitempty"`
// Determines the URL pattern is matched by this rules. This pattern can be
// used with any of the {get|put|post|delete|patch} methods. A custom method
// can be defined using the 'custom' field.
//
// Types that are assignable to Pattern:
+//
// *HttpRule_Get
// *HttpRule_Put
// *HttpRule_Post
120  vendor/google.golang.org/genproto/googleapis/api/annotations/resource.pb.go  (generated, vendored)
@@ -1,4 +1,4 @@
-// Copyright 2018 Google LLC
+// Copyright 2023 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
-// protoc v3.12.2
+// protoc v3.21.9
// source: google/api/resource.proto

package annotations
@@ -157,45 +157,45 @@ func (ResourceDescriptor_Style) EnumDescriptor() ([]byte, []int) {
//
// Example:
//
// message Topic {
// // Indicates this message defines a resource schema.
// // Declares the resource type in the format of {service}/{kind}.
// // For Kubernetes resources, the format is {api group}/{kind}.
// option (google.api.resource) = {
// type: "pubsub.googleapis.com/Topic"
// pattern: "projects/{project}/topics/{topic}"
// };
// }
//
// The ResourceDescriptor Yaml config will look like:
//
// resources:
// - type: "pubsub.googleapis.com/Topic"
// pattern: "projects/{project}/topics/{topic}"
//
// Sometimes, resources have multiple patterns, typically because they can
// live under multiple parents.
//
// Example:
//
// message LogEntry {
// option (google.api.resource) = {
// type: "logging.googleapis.com/LogEntry"
// pattern: "projects/{project}/logs/{log}"
// pattern: "folders/{folder}/logs/{log}"
// pattern: "organizations/{organization}/logs/{log}"
// pattern: "billingAccounts/{billing_account}/logs/{log}"
// };
// }
//
// The ResourceDescriptor Yaml config will look like:
//
// resources:
// - type: 'logging.googleapis.com/LogEntry'
// pattern: "projects/{project}/logs/{log}"
// pattern: "folders/{folder}/logs/{log}"
// pattern: "organizations/{organization}/logs/{log}"
// pattern: "billingAccounts/{billing_account}/logs/{log}"
type ResourceDescriptor struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -218,14 +218,14 @@ type ResourceDescriptor struct {
// The path pattern must follow the syntax, which aligns with HTTP binding
// syntax:
//
// Template = Segment { "/" Segment } ;
// Segment = LITERAL | Variable ;
// Variable = "{" LITERAL "}" ;
//
// Examples:
//
// - "projects/{project}/topics/{topic}"
// - "projects/{project}/knowledgeBases/{knowledge_base}"
//
// The components in braces correspond to the IDs for each resource in the
// hierarchy. It is expected that, if multiple patterns are provided,
@@ -239,17 +239,17 @@ type ResourceDescriptor struct {
//
// Example:
//
// // The InspectTemplate message originally only supported resource
// // names with organization, and project was added later.
// message InspectTemplate {
// option (google.api.resource) = {
// type: "dlp.googleapis.com/InspectTemplate"
// pattern:
// "organizations/{organization}/inspectTemplates/{inspect_template}"
// pattern: "projects/{project}/inspectTemplates/{inspect_template}"
// history: ORIGINALLY_SINGLE_PATTERN
// };
// }
History ResourceDescriptor_History `protobuf:"varint,4,opt,name=history,proto3,enum=google.api.ResourceDescriptor_History" json:"history,omitempty"`
// The plural name used in the resource name and permission names, such as
// 'projects' for the resource name of 'projects/{project}' and the permission
@@ -362,22 +362,22 @@ type ResourceReference struct {
//
// Example:
//
// message Subscription {
// string topic = 2 [(google.api.resource_reference) = {
// type: "pubsub.googleapis.com/Topic"
// }];
// }
//
// Occasionally, a field may reference an arbitrary resource. In this case,
// APIs use the special value * in their resource reference.
//
// Example:
//
// message GetIamPolicyRequest {
// string resource = 2 [(google.api.resource_reference) = {
// type: "*"
// }];
// }
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
// The resource type of a child collection that the annotated field
// references. This is useful for annotating the `parent` field that
@@ -385,11 +385,11 @@ type ResourceReference struct {
//
// Example:
//
// message ListLogEntriesRequest {
// string parent = 1 [(google.api.resource_reference) = {
// child_type: "logging.googleapis.com/LogEntry"
// };
// }
ChildType string `protobuf:"bytes,2,opt,name=child_type,json=childType,proto3" json:"child_type,omitempty"`
}
458  vendor/google.golang.org/genproto/googleapis/api/annotations/routing.pb.go  (generated, vendored)
@@ -1,4 +1,4 @@
-// Copyright 2021 Google LLC
+// Copyright 2023 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
-// protoc v3.12.2
+// protoc v3.21.9
// source: google/api/routing.proto

package annotations
@@ -44,71 +44,71 @@ const (
//
// Message Definition:
//
// message Request {
// // The name of the Table
// // Values can be of the following formats:
// // - `projects/<project>/tables/<table>`
// // - `projects/<project>/instances/<instance>/tables/<table>`
// // - `region/<region>/zones/<zone>/tables/<table>`
// string table_name = 1;
//
// // This value specifies routing for replication.
// // It can be in the following formats:
// // - `profiles/<profile_id>`
// // - a legacy `profile_id` that can be any string
// string app_profile_id = 2;
// }
//
// Example message:
//
// {
// table_name: projects/proj_foo/instances/instance_bar/table/table_baz,
// app_profile_id: profiles/prof_qux
// }
//
// The routing header consists of one or multiple key-value pairs. Every key
// and value must be percent-encoded, and joined together in the format of
// `key1=value1&key2=value2`.
// In the examples below I am skipping the percent-encoding for readablity.
//
-// Example 1
+// # Example 1
//
// Extracting a field from the request to put into the routing header
// unchanged, with the key equal to the field name.
//
// annotation:
//
// option (google.api.routing) = {
// // Take the `app_profile_id`.
// routing_parameters {
// field: "app_profile_id"
// }
// };
//
// result:
//
// x-goog-request-params: app_profile_id=profiles/prof_qux
//
-// Example 2
+// # Example 2
//
// Extracting a field from the request to put into the routing header
// unchanged, with the key different from the field name.
//
// annotation:
//
// option (google.api.routing) = {
// // Take the `app_profile_id`, but name it `routing_id` in the header.
// routing_parameters {
// field: "app_profile_id"
// path_template: "{routing_id=**}"
// }
// };
//
// result:
//
// x-goog-request-params: routing_id=profiles/prof_qux
//
-// Example 3
+// # Example 3
//
// Extracting a field from the request to put into the routing
// header, while matching a path template syntax on the field's value.
@@ -116,91 +116,91 @@ const (
// NB: it is more useful to send nothing than to send garbage for the purpose
// of dynamic routing, since garbage pollutes cache. Thus the matching.
//
-// Sub-example 3a
+// # Sub-example 3a
//
// The field matches the template.
//
// annotation:
//
// option (google.api.routing) = {
// // Take the `table_name`, if it's well-formed (with project-based
// // syntax).
// routing_parameters {
// field: "table_name"
// path_template: "{table_name=projects/*/instances/*/**}"
// }
// };
//
// result:
//
// x-goog-request-params:
// table_name=projects/proj_foo/instances/instance_bar/table/table_baz
//
-// Sub-example 3b
+// # Sub-example 3b
//
// The field does not match the template.
//
// annotation:
//
// option (google.api.routing) = {
// // Take the `table_name`, if it's well-formed (with region-based
// // syntax).
// routing_parameters {
// field: "table_name"
// path_template: "{table_name=regions/*/zones/*/**}"
// }
// };
//
// result:
//
// <no routing header will be sent>
//
-// Sub-example 3c
+// # Sub-example 3c
//
// Multiple alternative conflictingly named path templates are
// specified. The one that matches is used to construct the header.
//
// annotation:
//
// option (google.api.routing) = {
// // Take the `table_name`, if it's well-formed, whether
// // using the region- or projects-based syntax.
//
// routing_parameters {
// field: "table_name"
// path_template: "{table_name=regions/*/zones/*/**}"
// }
// routing_parameters {
// field: "table_name"
// path_template: "{table_name=projects/*/instances/*/**}"
// }
// };
//
// result:
//
// x-goog-request-params:
// table_name=projects/proj_foo/instances/instance_bar/table/table_baz
//
-// Example 4
+// # Example 4
//
// Extracting a single routing header key-value pair by matching a
// template syntax on (a part of) a single request field.
//
// annotation:
//
// option (google.api.routing) = {
// // Take just the project id from the `table_name` field.
// routing_parameters {
// field: "table_name"
// path_template: "{routing_id=projects/*}/**"
// }
// };
//
// result:
//
// x-goog-request-params: routing_id=projects/proj_foo
//
-// Example 5
+// # Example 5
//
// Extracting a single routing header key-value pair by matching
// several conflictingly named path templates on (parts of) a single request
@@ -208,87 +208,87 @@ const (
//
// annotation:
//
// option (google.api.routing) = {
// // If the `table_name` does not have instances information,
// // take just the project id for routing.
// // Otherwise take project + instance.
//
// routing_parameters {
// field: "table_name"
// path_template: "{routing_id=projects/*}/**"
// }
// routing_parameters {
// field: "table_name"
// path_template: "{routing_id=projects/*/instances/*}/**"
// }
// };
//
// result:
//
// x-goog-request-params:
// routing_id=projects/proj_foo/instances/instance_bar
//
-// Example 6
+// # Example 6
//
// Extracting multiple routing header key-value pairs by matching
// several non-conflicting path templates on (parts of) a single request field.
//
-// Sub-example 6a
+// # Sub-example 6a
//
// Make the templates strict, so that if the `table_name` does not
// have an instance information, nothing is sent.
//
// annotation:
//
// option (google.api.routing) = {
// // The routing code needs two keys instead of one composite
// // but works only for the tables with the "project-instance" name
// // syntax.
//
// routing_parameters {
// field: "table_name"
// path_template: "{project_id=projects/*}/instances/*/**"
// }
// routing_parameters {
// field: "table_name"
// path_template: "projects/*/{instance_id=instances/*}/**"
// }
// };
//
// result:
//
// x-goog-request-params:
// project_id=projects/proj_foo&instance_id=instances/instance_bar
//
-// Sub-example 6b
+// # Sub-example 6b
//
// Make the templates loose, so that if the `table_name` does not
// have an instance information, just the project id part is sent.
//
// annotation:
//
// option (google.api.routing) = {
// // The routing code wants two keys instead of one composite
// // but will work with just the `project_id` for tables without
// // an instance in the `table_name`.
//
// routing_parameters {
// field: "table_name"
// path_template: "{project_id=projects/*}/**"
// }
// routing_parameters {
// field: "table_name"
// path_template: "projects/*/{instance_id=instances/*}/**"
// }
// };
//
// result (is the same as 6a for our example message because it has the instance
// information):
//
// x-goog-request-params:
// project_id=projects/proj_foo&instance_id=instances/instance_bar
//
-// Example 7
+// # Example 7
//
// Extracting multiple routing header key-value pairs by matching
// several path templates on multiple request fields.
@@ -301,26 +301,26 @@ const (
//
// annotation:
//
// option (google.api.routing) = {
// // The routing needs both `project_id` and `routing_id`
// // (from the `app_profile_id` field) for routing.
//
// routing_parameters {
// field: "table_name"
// path_template: "{project_id=projects/*}/**"
// }
// routing_parameters {
// field: "app_profile_id"
// path_template: "{routing_id=**}"
// }
// };
//
// result:
//
// x-goog-request-params:
// project_id=projects/proj_foo&routing_id=profiles/prof_qux
//
-// Example 8
+// # Example 8
//
// Extracting a single routing header key-value pair by matching
// several conflictingly named path templates on several request fields. The
@@ -328,73 +328,73 @@ const (
//
// annotation:
//
// option (google.api.routing) = {
// // The `routing_id` can be a project id or a region id depending on
// // the table name format, but only if the `app_profile_id` is not set.
// // If `app_profile_id` is set it should be used instead.
//
// routing_parameters {
// field: "table_name"
// path_template: "{routing_id=projects/*}/**"
// }
// routing_parameters {
// field: "table_name"
// path_template: "{routing_id=regions/*}/**"
// }
// routing_parameters {
// field: "app_profile_id"
// path_template: "{routing_id=**}"
// }
// };
//
// result:
//
// x-goog-request-params: routing_id=profiles/prof_qux
//
-// Example 9
+// # Example 9
//
// Bringing it all together.
//
// annotation:
//
// option (google.api.routing) = {
// // For routing both `table_location` and a `routing_id` are needed.
// //
// // table_location can be either an instance id or a region+zone id.
// //
// // For `routing_id`, take the value of `app_profile_id`
// // - If it's in the format `profiles/<profile_id>`, send
// // just the `<profile_id>` part.
// // - If it's any other literal, send it as is.
// // If the `app_profile_id` is empty, and the `table_name` starts with
// // the project_id, send that instead.
//
// routing_parameters {
// field: "table_name"
// path_template: "projects/*/{table_location=instances/*}/tables/*"
// }
// routing_parameters {
// field: "table_name"
// path_template: "{table_location=regions/*/zones/*}/tables/*"
// }
// routing_parameters {
// field: "table_name"
// path_template: "{routing_id=projects/*}/**"
// }
// routing_parameters {
// field: "app_profile_id"
// path_template: "{routing_id=**}"
// }
// routing_parameters {
// field: "app_profile_id"
// path_template: "profiles/{routing_id=*}"
// }
// };
//
// result:
//
// x-goog-request-params:
// table_location=instances/instance_bar&routing_id=prof_qux
type RoutingRule struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -468,46 +468,46 @@ type RoutingParameter struct {
//
// Example:
//
// -- This is a field in the request message
// | that the header value will be extracted from.
// |
// | -- This is the key name in the
// | | routing header.
// V |
// field: "table_name" v
// path_template: "projects/*/{table_location=instances/*}/tables/*"
// ^ ^
// | |
// In the {} brackets is the pattern that -- |
// specifies what to extract from the |
// field as a value to be sent. |
// |
// The string in the field must match the whole pattern --
// before brackets, inside brackets, after brackets.
//
// When looking at this specific example, we can see that:
// - A key-value pair with the key `table_location`
// and the value matching `instances/*` should be added
// to the x-goog-request-params routing header.
// - The value is extracted from the request message's `table_name` field
// if it matches the full pattern specified:
// `projects/*/instances/*/tables/*`.
//
// **NB:** If the `path_template` field is not provided, the key name is
// equal to the field name, and the whole field should be sent as a value.
// This makes the pattern for the field and the value functionally equivalent
// to `**`, and the configuration
//
// {
// field: "table_name"
// }
//
// is a functionally equivalent shorthand to:
//
// {
// field: "table_name"
// path_template: "{table_name=**}"
// }
//
// See Example 1 for more details.
PathTemplate string `protobuf:"bytes,2,opt,name=path_template,json=pathTemplate,proto3" json:"path_template,omitempty"`
53  vendor/google.golang.org/genproto/googleapis/api/expr/v1alpha1/checked.pb.go  (generated, vendored)
@@ -1,4 +1,4 @@
|
||||
// Copyright 2021 Google LLC
|
||||
// Copyright 2022 Google LLC
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
@@ -15,7 +15,7 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// versions:
|
||||
// protoc-gen-go v1.26.0
|
||||
// protoc v3.12.2
|
||||
// protoc v3.21.5
|
||||
// source: google/api/expr/v1alpha1/checked.proto
|
||||
|
||||
package expr
|
||||
@@ -183,17 +183,17 @@ type CheckedExpr struct {
//
// The following entries are in this table:
//
//   - An Ident or Select expression is represented here if it resolves to a
//     declaration. For instance, if `a.b.c` is represented by
//     `select(select(id(a), b), c)`, and `a.b` resolves to a declaration,
//     while `c` is a field selection, then the reference is attached to the
//     nested select expression (but not to the id or the outer select).
//     In turn, if `a` resolves to a declaration and `b.c` are field selections,
//     the reference is attached to the ident expression.
//   - Every Call expression has an entry here, identifying the function being
//     called.
//   - Every CreateStruct expression for a message has an entry, identifying
//     the message.
ReferenceMap map[int64]*Reference `protobuf:"bytes,2,rep,name=reference_map,json=referenceMap,proto3" json:"reference_map,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
// A map from expression ids to types.
//
@@ -293,6 +293,7 @@ type Type struct {
// The kind of type.
//
// Types that are assignable to TypeKind:
//
// *Type_Dyn
// *Type_Null
// *Type_Primitive
@@ -562,13 +563,16 @@ type Decl struct {
// Declarations are organized in containers and this represents the full path
// to the declaration in its container, as in `google.api.expr.Decl`.
//
// Declarations used as [FunctionDecl.Overload][google.api.expr.v1alpha1.Decl.FunctionDecl.Overload] parameters may or may not
// have a name depending on whether the overload is function declaration or a
// function definition containing a result [Expr][google.api.expr.v1alpha1.Expr].
// Declarations used as
// [FunctionDecl.Overload][google.api.expr.v1alpha1.Decl.FunctionDecl.Overload]
// parameters may or may not have a name depending on whether the overload is
// function declaration or a function definition containing a result
// [Expr][google.api.expr.v1alpha1.Expr].
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Required. The declaration kind.
//
// Types that are assignable to DeclKind:
//
// *Decl_Ident
// *Decl_Function
DeclKind isDecl_DeclKind `protobuf_oneof:"decl_kind"`
@@ -667,7 +671,8 @@ type Reference struct {
// presented candidates must happen at runtime because of dynamic types. The
// type checker attempts to narrow down this list as much as possible.
//
// Empty if this is not a reference to a [Decl.FunctionDecl][google.api.expr.v1alpha1.Decl.FunctionDecl].
// Empty if this is not a reference to a
// [Decl.FunctionDecl][google.api.expr.v1alpha1.Decl.FunctionDecl].
OverloadId []string `protobuf:"bytes,3,rep,name=overload_id,json=overloadId,proto3" json:"overload_id,omitempty"`
// For references to constants, this may contain the value of the
// constant if known at compile time.
@@ -1077,8 +1082,8 @@ func (x *Decl_FunctionDecl) GetOverloads() []*Decl_FunctionDecl_Overload {
}

// An overload indicates a function's parameter types and return type, and
// may optionally include a function body described in terms of [Expr][google.api.expr.v1alpha1.Expr]
// values.
// may optionally include a function body described in terms of
// [Expr][google.api.expr.v1alpha1.Expr] values.
//
// Function overloads are declared in either a function or method
// call-style. For methods, the `params[0]` is the expected type of the
@@ -1094,10 +1099,12 @@ type Decl_FunctionDecl_Overload struct {
// Required. Globally unique overload name of the function which reflects
// the function name and argument types.
//
// This will be used by a [Reference][google.api.expr.v1alpha1.Reference] to indicate the `overload_id` that
// was resolved for the function `name`.
// This will be used by a [Reference][google.api.expr.v1alpha1.Reference]
// to indicate the `overload_id` that was resolved for the function
// `name`.
OverloadId string `protobuf:"bytes,1,opt,name=overload_id,json=overloadId,proto3" json:"overload_id,omitempty"`
// List of function parameter [Type][google.api.expr.v1alpha1.Type] values.
// List of function parameter [Type][google.api.expr.v1alpha1.Type]
// values.
//
// Param types are disjoint after generic type parameters have been
// replaced with the type `DYN`. Since the `DYN` type is compatible with
@@ -1117,7 +1124,7 @@ type Decl_FunctionDecl_Overload struct {
// `string.isEmpty()` would have `result_type` of `kind: BOOL`.
ResultType *Type `protobuf:"bytes,4,opt,name=result_type,json=resultType,proto3" json:"result_type,omitempty"`
// Whether the function is to be used in a method call-style `x.f(...)`
// of a function call-style `f(x, ...)`.
// or a function call-style `f(x, ...)`.
//
// For methods, the first parameter declaration, `params[0]` is the
// expected type of the target receiver.
23
vendor/google.golang.org/genproto/googleapis/api/expr/v1alpha1/eval.pb.go
generated
vendored
@@ -1,4 +1,4 @@
// Copyright 2021 Google LLC
// Copyright 2022 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.12.2
// protoc v3.21.5
// source: google/api/expr/v1alpha1/eval.proto

package expr
@@ -108,6 +108,7 @@ type ExprValue struct {
// An expression can resolve to a value, error or unknown.
//
// Types that are assignable to Kind:
//
// *ExprValue_Value
// *ExprValue_Error
// *ExprValue_Unknown
@@ -212,22 +213,22 @@ type ExprValue_Unknown struct {
// unknowns *might* be included when evaluation could result in
// different unknowns. For example:
//
//	(<unknown[1]> || true) && <unknown[2]> -> <unknown[2]>
//	<unknown[1]> || <unknown[2]> -> <unknown[1,2]>
//	<unknown[1]>.foo -> <unknown[1]>
//	foo(<unknown[1]>) -> <unknown[1]>
//	<unknown[1]> + <unknown[2]> -> <unknown[1]> or <unknown[2]>
//
// Unknown takes precedence over Error in cases where a `Value` can short
// circuit the result:
//
//	<error> || <unknown> -> <unknown>
//	<error> && <unknown> -> <unknown>
//
// Errors take precedence in all other cases:
//
//	<unknown> + <error> -> <error>
//	foo(<unknown>, <error>) -> <error>
Unknown *UnknownSet `protobuf:"bytes,3,opt,name=unknown,proto3,oneof"`
}
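As a rough plain-Go sketch of the precedence rules documented above (not the actual CEL evaluator), an evaluator could merge two operand results of a strict call like this; the result type and helper names are invented for illustration.

package main

import "fmt"

// result mirrors the ExprValue oneof: a concrete value, an error, or an
// unknown set of expression ids. Illustrative model only.
type result struct {
	kind    string // "value", "error", or "unknown"
	unknown []int64
}

// combine applies the documented precedence for a strict call such as
// foo(a, b): unknowns win over errors only where a value could still
// short-circuit (|| and &&); in all other cases errors win.
func combine(a, b result) result {
	if a.kind == "error" || b.kind == "error" {
		return result{kind: "error"}
	}
	if a.kind == "unknown" || b.kind == "unknown" {
		// <unknown[1]> + <unknown[2]> -> a merged unknown set
		merged := append(append([]int64{}, a.unknown...), b.unknown...)
		return result{kind: "unknown", unknown: merged}
	}
	return result{kind: "value"}
}

func main() {
	u1 := result{kind: "unknown", unknown: []int64{1}}
	err := result{kind: "error"}
	fmt.Println(combine(u1, err).kind) // "error": errors take precedence in strict calls
}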
4
vendor/google.golang.org/genproto/googleapis/api/expr/v1alpha1/explain.pb.go
generated
vendored
@@ -1,4 +1,4 @@
// Copyright 2021 Google LLC
// Copyright 2022 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.12.2
// protoc v3.21.5
// source: google/api/expr/v1alpha1/explain.proto

package expr
285
vendor/google.golang.org/genproto/googleapis/api/expr/v1alpha1/syntax.pb.go
generated
vendored
@@ -1,4 +1,4 @@
// Copyright 2021 Google LLC
// Copyright 2022 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.12.2
// protoc v3.21.9
// source: google/api/expr/v1alpha1/syntax.proto

package expr
@@ -123,6 +123,7 @@ type Expr struct {
// Required. Variants of expressions.
//
// Types that are assignable to ExprKind:
//
// *Expr_ConstExpr
// *Expr_IdentExpr
// *Expr_SelectExpr
@@ -302,6 +303,7 @@ type Constant struct {
// Required. The valid constant kinds.
//
// Types that are assignable to ConstantKind:
//
// *Constant_NullValue
// *Constant_BoolValue
// *Constant_Int64Value
@@ -881,6 +883,13 @@ type Expr_CreateList struct {

// The elements part of the list.
Elements []*Expr `protobuf:"bytes,1,rep,name=elements,proto3" json:"elements,omitempty"`
// The indices within the elements list which are marked as optional
// elements.
//
// When an optional-typed value is present, the value it contains
// is included in the list. If the optional-typed value is absent, the list
// element is omitted from the CreateList result.
OptionalIndices []int32 `protobuf:"varint,2,rep,packed,name=optional_indices,json=optionalIndices,proto3" json:"optional_indices,omitempty"`
}
func (x *Expr_CreateList) Reset() {

@@ -922,6 +931,13 @@ func (x *Expr_CreateList) GetElements() []*Expr {
	return nil
}

func (x *Expr_CreateList) GetOptionalIndices() []int32 {
	if x != nil {
		return x.OptionalIndices
	}
	return nil
}

// A map or message creation expression.
//
// Maps are constructed as `{'key_name': 'value'}`. Message construction is
@@ -997,14 +1013,14 @@ func (x *Expr_CreateStruct) GetEntries() []*Expr_CreateStruct_Entry {
// Aggregate type macros may be applied to all elements in a list or all keys
// in a map:
//
// * `all`, `exists`, `exists_one` - test a predicate expression against
// the inputs and return `true` if the predicate is satisfied for all,
// any, or only one value `list.all(x, x < 10)`.
// * `filter` - test a predicate expression against the inputs and return
// the subset of elements which satisfy the predicate:
// `payments.filter(p, p > 1000)`.
// * `map` - apply an expression to all elements in the input and return the
// output aggregate type: `[1, 2, 3].map(i, i * i)`.
//   - `all`, `exists`, `exists_one` - test a predicate expression against
//     the inputs and return `true` if the predicate is satisfied for all,
//     any, or only one value `list.all(x, x < 10)`.
//   - `filter` - test a predicate expression against the inputs and return
//     the subset of elements which satisfy the predicate:
//     `payments.filter(p, p > 1000)`.
//   - `map` - apply an expression to all elements in the input and return the
//     output aggregate type: `[1, 2, 3].map(i, i * i)`.
//
// The `has(m.x)` macro tests whether the property `x` is present in struct
// `m`. The semantics of this macro depend on the type of `m`. For proto2
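For readers unfamiliar with these macros, the plain-Go sketch below mirrors what `payments.filter(p, p > 1000)` and `[1, 2, 3].map(i, i * i)` evaluate to. It only illustrates the semantics the comment describes and is not code from this package.

package main

import "fmt"

// filter keeps the elements for which the predicate holds, like the CEL
// `filter` macro.
func filter(in []int, pred func(int) bool) []int {
	var out []int
	for _, v := range in {
		if pred(v) {
			out = append(out, v)
		}
	}
	return out
}

// mapInts applies fn to every element, like the CEL `map` macro.
func mapInts(in []int, fn func(int) int) []int {
	out := make([]int, 0, len(in))
	for _, v := range in {
		out = append(out, fn(v))
	}
	return out
}

func main() {
	payments := []int{500, 1500, 2500}
	fmt.Println(filter(payments, func(p int) bool { return p > 1000 }))    // [1500 2500]
	fmt.Println(mapInts([]int{1, 2, 3}, func(i int) int { return i * i })) // [1 4 9]
}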
@@ -1133,11 +1149,18 @@ type Expr_CreateStruct_Entry struct {
// The `Entry` key kinds.
//
// Types that are assignable to KeyKind:
//
// *Expr_CreateStruct_Entry_FieldKey
// *Expr_CreateStruct_Entry_MapKey
KeyKind isExpr_CreateStruct_Entry_KeyKind `protobuf_oneof:"key_kind"`
// Required. The value assigned to the key.
//
// If the optional_entry field is true, the expression must resolve to an
// optional-typed value. If the optional value is present, the key will be
// set; however, if the optional value is absent, the key will be unset.
Value *Expr `protobuf:"bytes,4,opt,name=value,proto3" json:"value,omitempty"`
// Whether the key-value pair is optional.
OptionalEntry bool `protobuf:"varint,5,opt,name=optional_entry,json=optionalEntry,proto3" json:"optional_entry,omitempty"`
}
func (x *Expr_CreateStruct_Entry) Reset() {

@@ -1207,6 +1230,13 @@ func (x *Expr_CreateStruct_Entry) GetValue() *Expr {
	return nil
}

func (x *Expr_CreateStruct_Entry) GetOptionalEntry() bool {
	if x != nil {
		return x.OptionalEntry
	}
	return false
}

type isExpr_CreateStruct_Entry_KeyKind interface {
	isExpr_CreateStruct_Entry_KeyKind()
}
@@ -1246,7 +1276,7 @@ var file_google_api_expr_v1alpha1_syntax_proto_rawDesc = []byte{
|
||||
0x66, 0x6f, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
|
||||
0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70,
|
||||
0x68, 0x61, 0x31, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x0a,
|
||||
0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x22, 0xdc, 0x0c, 0x0a, 0x04, 0x45,
|
||||
0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x22, 0xae, 0x0d, 0x0a, 0x04, 0x45,
|
||||
0x78, 0x70, 0x72, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52,
|
||||
0x02, 0x69, 0x64, 0x12, 0x43, 0x0a, 0x0a, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x5f, 0x65, 0x78, 0x70,
|
||||
0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
|
||||
@@ -1299,127 +1329,132 @@ var file_google_api_expr_v1alpha1_syntax_proto_rawDesc = []byte{
|
||||
0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x32, 0x0a, 0x04, 0x61, 0x72, 0x67, 0x73, 0x18,
|
||||
0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
|
||||
0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31,
|
||||
0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x04, 0x61, 0x72, 0x67, 0x73, 0x1a, 0x48, 0x0a, 0x0a, 0x43,
|
||||
0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x04, 0x61, 0x72, 0x67, 0x73, 0x1a, 0x73, 0x0a, 0x0a, 0x43,
|
||||
0x72, 0x65, 0x61, 0x74, 0x65, 0x4c, 0x69, 0x73, 0x74, 0x12, 0x3a, 0x0a, 0x08, 0x65, 0x6c, 0x65,
|
||||
0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f,
|
||||
0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31,
|
||||
0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x08, 0x65, 0x6c, 0x65,
|
||||
0x6d, 0x65, 0x6e, 0x74, 0x73, 0x1a, 0xb4, 0x02, 0x0a, 0x0c, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65,
|
||||
0x53, 0x74, 0x72, 0x75, 0x63, 0x74, 0x12, 0x21, 0x0a, 0x0c, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67,
|
||||
0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6d, 0x65,
|
||||
0x73, 0x73, 0x61, 0x67, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x4b, 0x0a, 0x07, 0x65, 0x6e, 0x74,
|
||||
0x72, 0x69, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x31, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61,
|
||||
0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74,
|
||||
0x65, 0x53, 0x74, 0x72, 0x75, 0x63, 0x74, 0x2e, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x07, 0x65,
|
||||
0x6e, 0x74, 0x72, 0x69, 0x65, 0x73, 0x1a, 0xb3, 0x01, 0x0a, 0x05, 0x45, 0x6e, 0x74, 0x72, 0x79,
|
||||
0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x02, 0x69, 0x64,
|
||||
0x12, 0x1d, 0x0a, 0x09, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20,
|
||||
0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x08, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x4b, 0x65, 0x79, 0x12,
|
||||
0x39, 0x0a, 0x07, 0x6d, 0x61, 0x70, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b,
|
||||
0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78,
|
||||
0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72,
|
||||
0x48, 0x00, 0x52, 0x06, 0x6d, 0x61, 0x70, 0x4b, 0x65, 0x79, 0x12, 0x34, 0x0a, 0x05, 0x76, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67,
|
||||
0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c,
|
||||
0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
|
||||
0x42, 0x0a, 0x0a, 0x08, 0x6b, 0x65, 0x79, 0x5f, 0x6b, 0x69, 0x6e, 0x64, 0x1a, 0xfd, 0x02, 0x0a,
|
||||
0x0d, 0x43, 0x6f, 0x6d, 0x70, 0x72, 0x65, 0x68, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x19,
|
||||
0x0a, 0x08, 0x69, 0x74, 0x65, 0x72, 0x5f, 0x76, 0x61, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
|
||||
0x52, 0x07, 0x69, 0x74, 0x65, 0x72, 0x56, 0x61, 0x72, 0x12, 0x3d, 0x0a, 0x0a, 0x69, 0x74, 0x65,
|
||||
0x72, 0x5f, 0x72, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e,
|
||||
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e,
|
||||
0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x09, 0x69,
|
||||
0x74, 0x65, 0x72, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x19, 0x0a, 0x08, 0x61, 0x63, 0x63, 0x75,
|
||||
0x5f, 0x76, 0x61, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x61, 0x63, 0x63, 0x75,
|
||||
0x56, 0x61, 0x72, 0x12, 0x3b, 0x0a, 0x09, 0x61, 0x63, 0x63, 0x75, 0x5f, 0x69, 0x6e, 0x69, 0x74,
|
||||
0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
|
||||
0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61,
|
||||
0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x08, 0x61, 0x63, 0x63, 0x75, 0x49, 0x6e, 0x69, 0x74,
|
||||
0x12, 0x45, 0x0a, 0x0e, 0x6c, 0x6f, 0x6f, 0x70, 0x5f, 0x63, 0x6f, 0x6e, 0x64, 0x69, 0x74, 0x69,
|
||||
0x6f, 0x6e, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
|
||||
0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70,
|
||||
0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x0d, 0x6c, 0x6f, 0x6f, 0x70, 0x43, 0x6f,
|
||||
0x6e, 0x64, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x3b, 0x0a, 0x09, 0x6c, 0x6f, 0x6f, 0x70, 0x5f,
|
||||
0x73, 0x74, 0x65, 0x70, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61,
|
||||
0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x08, 0x6c, 0x6f, 0x6f, 0x70,
|
||||
0x53, 0x74, 0x65, 0x70, 0x12, 0x36, 0x0a, 0x06, 0x72, 0x65, 0x73, 0x75, 0x6c, 0x74, 0x18, 0x07,
|
||||
0x6d, 0x65, 0x6e, 0x74, 0x73, 0x12, 0x29, 0x0a, 0x10, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x61,
|
||||
0x6c, 0x5f, 0x69, 0x6e, 0x64, 0x69, 0x63, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x05, 0x52,
|
||||
0x0f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x49, 0x6e, 0x64, 0x69, 0x63, 0x65, 0x73,
|
||||
0x1a, 0xdb, 0x02, 0x0a, 0x0c, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x53, 0x74, 0x72, 0x75, 0x63,
|
||||
0x74, 0x12, 0x21, 0x0a, 0x0c, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x5f, 0x6e, 0x61, 0x6d,
|
||||
0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
|
||||
0x4e, 0x61, 0x6d, 0x65, 0x12, 0x4b, 0x0a, 0x07, 0x65, 0x6e, 0x74, 0x72, 0x69, 0x65, 0x73, 0x18,
|
||||
0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x31, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
|
||||
0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31,
|
||||
0x2e, 0x45, 0x78, 0x70, 0x72, 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x53, 0x74, 0x72, 0x75,
|
||||
0x63, 0x74, 0x2e, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x07, 0x65, 0x6e, 0x74, 0x72, 0x69, 0x65,
|
||||
0x73, 0x1a, 0xda, 0x01, 0x0a, 0x05, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x69,
|
||||
0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x02, 0x69, 0x64, 0x12, 0x1d, 0x0a, 0x09, 0x66,
|
||||
0x69, 0x65, 0x6c, 0x64, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00,
|
||||
0x52, 0x08, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x4b, 0x65, 0x79, 0x12, 0x39, 0x0a, 0x07, 0x6d, 0x61,
|
||||
0x70, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f,
|
||||
0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31,
|
||||
0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x48, 0x00, 0x52, 0x06, 0x6d,
|
||||
0x61, 0x70, 0x4b, 0x65, 0x79, 0x12, 0x34, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x04,
|
||||
0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70,
|
||||
0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e,
|
||||
0x45, 0x78, 0x70, 0x72, 0x52, 0x06, 0x72, 0x65, 0x73, 0x75, 0x6c, 0x74, 0x42, 0x0b, 0x0a, 0x09,
|
||||
0x65, 0x78, 0x70, 0x72, 0x5f, 0x6b, 0x69, 0x6e, 0x64, 0x22, 0xc1, 0x03, 0x0a, 0x08, 0x43, 0x6f,
|
||||
0x6e, 0x73, 0x74, 0x61, 0x6e, 0x74, 0x12, 0x3b, 0x0a, 0x0a, 0x6e, 0x75, 0x6c, 0x6c, 0x5f, 0x76,
|
||||
0x61, 0x6c, 0x75, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4e, 0x75, 0x6c,
|
||||
0x6c, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x48, 0x00, 0x52, 0x09, 0x6e, 0x75, 0x6c, 0x6c, 0x56, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x12, 0x1f, 0x0a, 0x0a, 0x62, 0x6f, 0x6f, 0x6c, 0x5f, 0x76, 0x61, 0x6c, 0x75,
|
||||
0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x48, 0x00, 0x52, 0x09, 0x62, 0x6f, 0x6f, 0x6c, 0x56,
|
||||
0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0b, 0x69, 0x6e, 0x74, 0x36, 0x34, 0x5f, 0x76, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x48, 0x00, 0x52, 0x0a, 0x69, 0x6e, 0x74,
|
||||
0x36, 0x34, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x23, 0x0a, 0x0c, 0x75, 0x69, 0x6e, 0x74, 0x36,
|
||||
0x34, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x48, 0x00, 0x52,
|
||||
0x0b, 0x75, 0x69, 0x6e, 0x74, 0x36, 0x34, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x23, 0x0a, 0x0c,
|
||||
0x64, 0x6f, 0x75, 0x62, 0x6c, 0x65, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x05, 0x20, 0x01,
|
||||
0x28, 0x01, 0x48, 0x00, 0x52, 0x0b, 0x64, 0x6f, 0x75, 0x62, 0x6c, 0x65, 0x56, 0x61, 0x6c, 0x75,
|
||||
0x65, 0x12, 0x23, 0x0a, 0x0c, 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x5f, 0x76, 0x61, 0x6c, 0x75,
|
||||
0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0b, 0x73, 0x74, 0x72, 0x69, 0x6e,
|
||||
0x67, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0b, 0x62, 0x79, 0x74, 0x65, 0x73, 0x5f,
|
||||
0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0c, 0x48, 0x00, 0x52, 0x0a, 0x62,
|
||||
0x79, 0x74, 0x65, 0x73, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x46, 0x0a, 0x0e, 0x64, 0x75, 0x72,
|
||||
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28,
|
||||
0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x02, 0x18, 0x01,
|
||||
0x48, 0x00, 0x52, 0x0d, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x56, 0x61, 0x6c, 0x75,
|
||||
0x65, 0x12, 0x49, 0x0a, 0x0f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x5f, 0x76,
|
||||
0x61, 0x6c, 0x75, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d,
|
||||
0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x42, 0x02, 0x18, 0x01, 0x48, 0x00, 0x52, 0x0e, 0x74, 0x69,
|
||||
0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x42, 0x0f, 0x0a, 0x0d,
|
||||
0x63, 0x6f, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x74, 0x5f, 0x6b, 0x69, 0x6e, 0x64, 0x22, 0xb9, 0x03,
|
||||
0x0a, 0x0a, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x25, 0x0a, 0x0e,
|
||||
0x73, 0x79, 0x6e, 0x74, 0x61, 0x78, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01,
|
||||
0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x73, 0x79, 0x6e, 0x74, 0x61, 0x78, 0x56, 0x65, 0x72, 0x73,
|
||||
0x69, 0x6f, 0x6e, 0x12, 0x1a, 0x0a, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x18,
|
||||
0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12,
|
||||
0x21, 0x0a, 0x0c, 0x6c, 0x69, 0x6e, 0x65, 0x5f, 0x6f, 0x66, 0x66, 0x73, 0x65, 0x74, 0x73, 0x18,
|
||||
0x03, 0x20, 0x03, 0x28, 0x05, 0x52, 0x0b, 0x6c, 0x69, 0x6e, 0x65, 0x4f, 0x66, 0x66, 0x73, 0x65,
|
||||
0x74, 0x73, 0x12, 0x51, 0x0a, 0x09, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18,
|
||||
0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x33, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
|
||||
0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31,
|
||||
0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x50, 0x6f, 0x73, 0x69,
|
||||
0x74, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x09, 0x70, 0x6f, 0x73, 0x69,
|
||||
0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x55, 0x0a, 0x0b, 0x6d, 0x61, 0x63, 0x72, 0x6f, 0x5f, 0x63,
|
||||
0x61, 0x6c, 0x6c, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x34, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x45, 0x78, 0x70, 0x72, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x25, 0x0a, 0x0e, 0x6f,
|
||||
0x70, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x5f, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x18, 0x05, 0x20,
|
||||
0x01, 0x28, 0x08, 0x52, 0x0d, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x45, 0x6e, 0x74,
|
||||
0x72, 0x79, 0x42, 0x0a, 0x0a, 0x08, 0x6b, 0x65, 0x79, 0x5f, 0x6b, 0x69, 0x6e, 0x64, 0x1a, 0xfd,
|
||||
0x02, 0x0a, 0x0d, 0x43, 0x6f, 0x6d, 0x70, 0x72, 0x65, 0x68, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e,
|
||||
0x12, 0x19, 0x0a, 0x08, 0x69, 0x74, 0x65, 0x72, 0x5f, 0x76, 0x61, 0x72, 0x18, 0x01, 0x20, 0x01,
|
||||
0x28, 0x09, 0x52, 0x07, 0x69, 0x74, 0x65, 0x72, 0x56, 0x61, 0x72, 0x12, 0x3d, 0x0a, 0x0a, 0x69,
|
||||
0x74, 0x65, 0x72, 0x5f, 0x72, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32,
|
||||
0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70,
|
||||
0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52,
|
||||
0x09, 0x69, 0x74, 0x65, 0x72, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x19, 0x0a, 0x08, 0x61, 0x63,
|
||||
0x63, 0x75, 0x5f, 0x76, 0x61, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x61, 0x63,
|
||||
0x63, 0x75, 0x56, 0x61, 0x72, 0x12, 0x3b, 0x0a, 0x09, 0x61, 0x63, 0x63, 0x75, 0x5f, 0x69, 0x6e,
|
||||
0x69, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
|
||||
0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70,
|
||||
0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x08, 0x61, 0x63, 0x63, 0x75, 0x49, 0x6e,
|
||||
0x69, 0x74, 0x12, 0x45, 0x0a, 0x0e, 0x6c, 0x6f, 0x6f, 0x70, 0x5f, 0x63, 0x6f, 0x6e, 0x64, 0x69,
|
||||
0x74, 0x69, 0x6f, 0x6e, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61,
|
||||
0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f,
|
||||
0x2e, 0x4d, 0x61, 0x63, 0x72, 0x6f, 0x43, 0x61, 0x6c, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79,
|
||||
0x52, 0x0a, 0x6d, 0x61, 0x63, 0x72, 0x6f, 0x43, 0x61, 0x6c, 0x6c, 0x73, 0x1a, 0x3c, 0x0a, 0x0e,
|
||||
0x50, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10,
|
||||
0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x03, 0x6b, 0x65, 0x79,
|
||||
0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52,
|
||||
0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x5d, 0x0a, 0x0f, 0x4d, 0x61,
|
||||
0x63, 0x72, 0x6f, 0x43, 0x61, 0x6c, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
|
||||
0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
|
||||
0x34, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e,
|
||||
0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72,
|
||||
0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x05,
|
||||
0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x70, 0x0a, 0x0e, 0x53, 0x6f, 0x75,
|
||||
0x72, 0x63, 0x65, 0x50, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x1a, 0x0a, 0x08, 0x6c,
|
||||
0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x6c,
|
||||
0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x6f, 0x66, 0x66, 0x73, 0x65,
|
||||
0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x06, 0x6f, 0x66, 0x66, 0x73, 0x65, 0x74, 0x12,
|
||||
0x12, 0x0a, 0x04, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05, 0x52, 0x04, 0x6c,
|
||||
0x69, 0x6e, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x18, 0x04, 0x20,
|
||||
0x01, 0x28, 0x05, 0x52, 0x06, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x42, 0x6e, 0x0a, 0x1c, 0x63,
|
||||
0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78,
|
||||
0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x42, 0x0b, 0x53, 0x79, 0x6e,
|
||||
0x74, 0x61, 0x78, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3c, 0x67, 0x6f, 0x6f, 0x67,
|
||||
0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65,
|
||||
0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69,
|
||||
0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x78, 0x70, 0x72, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70,
|
||||
0x68, 0x61, 0x31, 0x3b, 0x65, 0x78, 0x70, 0x72, 0xf8, 0x01, 0x01, 0x62, 0x06, 0x70, 0x72, 0x6f,
|
||||
0x74, 0x6f, 0x33,
|
||||
0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x0d, 0x6c, 0x6f, 0x6f, 0x70,
|
||||
0x43, 0x6f, 0x6e, 0x64, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x3b, 0x0a, 0x09, 0x6c, 0x6f, 0x6f,
|
||||
0x70, 0x5f, 0x73, 0x74, 0x65, 0x70, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67,
|
||||
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76,
|
||||
0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x08, 0x6c, 0x6f,
|
||||
0x6f, 0x70, 0x53, 0x74, 0x65, 0x70, 0x12, 0x36, 0x0a, 0x06, 0x72, 0x65, 0x73, 0x75, 0x6c, 0x74,
|
||||
0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
|
||||
0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61,
|
||||
0x31, 0x2e, 0x45, 0x78, 0x70, 0x72, 0x52, 0x06, 0x72, 0x65, 0x73, 0x75, 0x6c, 0x74, 0x42, 0x0b,
|
||||
0x0a, 0x09, 0x65, 0x78, 0x70, 0x72, 0x5f, 0x6b, 0x69, 0x6e, 0x64, 0x22, 0xc1, 0x03, 0x0a, 0x08,
|
||||
0x43, 0x6f, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x74, 0x12, 0x3b, 0x0a, 0x0a, 0x6e, 0x75, 0x6c, 0x6c,
|
||||
0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x1a, 0x2e, 0x67,
|
||||
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4e,
|
||||
0x75, 0x6c, 0x6c, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x48, 0x00, 0x52, 0x09, 0x6e, 0x75, 0x6c, 0x6c,
|
||||
0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x1f, 0x0a, 0x0a, 0x62, 0x6f, 0x6f, 0x6c, 0x5f, 0x76, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x48, 0x00, 0x52, 0x09, 0x62, 0x6f, 0x6f,
|
||||
0x6c, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0b, 0x69, 0x6e, 0x74, 0x36, 0x34, 0x5f,
|
||||
0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x48, 0x00, 0x52, 0x0a, 0x69,
|
||||
0x6e, 0x74, 0x36, 0x34, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x23, 0x0a, 0x0c, 0x75, 0x69, 0x6e,
|
||||
0x74, 0x36, 0x34, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x48,
|
||||
0x00, 0x52, 0x0b, 0x75, 0x69, 0x6e, 0x74, 0x36, 0x34, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x23,
|
||||
0x0a, 0x0c, 0x64, 0x6f, 0x75, 0x62, 0x6c, 0x65, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x05,
|
||||
0x20, 0x01, 0x28, 0x01, 0x48, 0x00, 0x52, 0x0b, 0x64, 0x6f, 0x75, 0x62, 0x6c, 0x65, 0x56, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x12, 0x23, 0x0a, 0x0c, 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x5f, 0x76, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0b, 0x73, 0x74, 0x72,
|
||||
0x69, 0x6e, 0x67, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0b, 0x62, 0x79, 0x74, 0x65,
|
||||
0x73, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0c, 0x48, 0x00, 0x52,
|
||||
0x0a, 0x62, 0x79, 0x74, 0x65, 0x73, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x46, 0x0a, 0x0e, 0x64,
|
||||
0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x08, 0x20,
|
||||
0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f,
|
||||
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x02,
|
||||
0x18, 0x01, 0x48, 0x00, 0x52, 0x0d, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x56, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x12, 0x49, 0x0a, 0x0f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70,
|
||||
0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67,
|
||||
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54,
|
||||
0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x42, 0x02, 0x18, 0x01, 0x48, 0x00, 0x52, 0x0e,
|
||||
0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x42, 0x0f,
|
||||
0x0a, 0x0d, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x74, 0x5f, 0x6b, 0x69, 0x6e, 0x64, 0x22,
|
||||
0xb9, 0x03, 0x0a, 0x0a, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x25,
|
||||
0x0a, 0x0e, 0x73, 0x79, 0x6e, 0x74, 0x61, 0x78, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e,
|
||||
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x73, 0x79, 0x6e, 0x74, 0x61, 0x78, 0x56, 0x65,
|
||||
0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x1a, 0x0a, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f,
|
||||
0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f,
|
||||
0x6e, 0x12, 0x21, 0x0a, 0x0c, 0x6c, 0x69, 0x6e, 0x65, 0x5f, 0x6f, 0x66, 0x66, 0x73, 0x65, 0x74,
|
||||
0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x05, 0x52, 0x0b, 0x6c, 0x69, 0x6e, 0x65, 0x4f, 0x66, 0x66,
|
||||
0x73, 0x65, 0x74, 0x73, 0x12, 0x51, 0x0a, 0x09, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e,
|
||||
0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x33, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
|
||||
0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68,
|
||||
0x61, 0x31, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x50, 0x6f,
|
||||
0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x09, 0x70, 0x6f,
|
||||
0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x55, 0x0a, 0x0b, 0x6d, 0x61, 0x63, 0x72, 0x6f,
|
||||
0x5f, 0x63, 0x61, 0x6c, 0x6c, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x34, 0x2e, 0x67,
|
||||
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78, 0x70, 0x72, 0x2e, 0x76,
|
||||
0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e,
|
||||
0x66, 0x6f, 0x2e, 0x4d, 0x61, 0x63, 0x72, 0x6f, 0x43, 0x61, 0x6c, 0x6c, 0x73, 0x45, 0x6e, 0x74,
|
||||
0x72, 0x79, 0x52, 0x0a, 0x6d, 0x61, 0x63, 0x72, 0x6f, 0x43, 0x61, 0x6c, 0x6c, 0x73, 0x1a, 0x3c,
|
||||
0x0a, 0x0e, 0x50, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79,
|
||||
0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x03, 0x6b,
|
||||
0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28,
|
||||
0x05, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x5d, 0x0a, 0x0f,
|
||||
0x4d, 0x61, 0x63, 0x72, 0x6f, 0x43, 0x61, 0x6c, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12,
|
||||
0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x03, 0x6b, 0x65,
|
||||
0x79, 0x12, 0x34, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b,
|
||||
0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x65, 0x78,
|
||||
0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x72,
|
||||
0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x70, 0x0a, 0x0e, 0x53,
|
||||
0x6f, 0x75, 0x72, 0x63, 0x65, 0x50, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x1a, 0x0a,
|
||||
0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
|
||||
0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x6f, 0x66, 0x66,
|
||||
0x73, 0x65, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x06, 0x6f, 0x66, 0x66, 0x73, 0x65,
|
||||
0x74, 0x12, 0x12, 0x0a, 0x04, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05, 0x52,
|
||||
0x04, 0x6c, 0x69, 0x6e, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x18,
|
||||
0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x06, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x42, 0x6e, 0x0a,
|
||||
0x1c, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e,
|
||||
0x65, 0x78, 0x70, 0x72, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x42, 0x0b, 0x53,
|
||||
0x79, 0x6e, 0x74, 0x61, 0x78, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3c, 0x67, 0x6f,
|
||||
0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f,
|
||||
0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x61,
|
||||
0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x78, 0x70, 0x72, 0x2f, 0x76, 0x31, 0x61,
|
||||
0x6c, 0x70, 0x68, 0x61, 0x31, 0x3b, 0x65, 0x78, 0x70, 0x72, 0xf8, 0x01, 0x01, 0x62, 0x06, 0x70,
|
||||
0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
|
||||
5
vendor/google.golang.org/genproto/googleapis/api/expr/v1alpha1/value.pb.go
generated
vendored
@@ -1,4 +1,4 @@
// Copyright 2021 Google LLC
// Copyright 2022 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.12.2
// protoc v3.21.5
// source: google/api/expr/v1alpha1/value.proto

package expr
@@ -49,6 +49,7 @@ type Value struct {
// Required. The valid kinds of values.
//
// Types that are assignable to Kind:
//
// *Value_NullValue
// *Value_BoolValue
// *Value_Int64Value
41
vendor/google.golang.org/genproto/googleapis/api/httpbody/httpbody.pb.go
generated
vendored
@@ -1,4 +1,4 @@
// Copyright 2015 Google LLC
// Copyright 2023 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.12.2
// protoc v3.21.9
// source: google/api/httpbody.proto

package httpbody
@@ -40,7 +40,6 @@ const (
// payload formats that can't be represented as JSON, such as raw binary or
// an HTML page.
//
//
// This message can be used both in streaming and non-streaming API methods in
// the request as well as the response.
//
@@ -50,32 +49,32 @@ const (
//
// Example:
//
//	message GetResourceRequest {
//	  // A unique request id.
//	  string request_id = 1;
//
//	  // The raw HTTP body is bound to this field.
//	  google.api.HttpBody http_body = 2;
//
//	}
//
//	service ResourceService {
//	  rpc GetResource(GetResourceRequest)
//	    returns (google.api.HttpBody);
//	  rpc UpdateResource(google.api.HttpBody)
//	    returns (google.protobuf.Empty);
//
//	}
//
// Example with streaming methods:
//
//	service CaldavService {
//	  rpc GetCalendar(stream google.api.HttpBody)
//	    returns (stream google.api.HttpBody);
//	  rpc UpdateCalendar(stream google.api.HttpBody)
//	    returns (stream google.api.HttpBody);
//
//	}
//
// Use of this type only changes how the request and response bodies are
// handled, all other features will continue to work unchanged.
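As a hedged usage sketch, a server handler that returns raw bytes instead of JSON might construct the message as shown below; the import path matches the vendored package being diffed here, and the payload bytes are placeholders.

package main

import (
	"fmt"

	"google.golang.org/genproto/googleapis/api/httpbody"
)

func main() {
	// Return a raw PNG payload; ContentType and Data are the two core
	// fields of google.api.HttpBody.
	body := &httpbody.HttpBody{
		ContentType: "image/png",
		Data:        []byte{0x89, 0x50, 0x4e, 0x47}, // placeholder bytes
	}
	fmt.Println(body.GetContentType(), len(body.GetData()))
}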
203
vendor/google.golang.org/genproto/googleapis/api/launch_stage.pb.go
generated
vendored
Normal file
@@ -0,0 +1,203 @@
|
||||
// Copyright 2023 Google LLC
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// versions:
|
||||
// protoc-gen-go v1.26.0
|
||||
// protoc v3.21.9
|
||||
// source: google/api/launch_stage.proto
|
||||
|
||||
package api
|
||||
|
||||
import (
|
||||
reflect "reflect"
|
||||
sync "sync"
|
||||
|
||||
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
|
||||
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
|
||||
)
|
||||
|
||||
const (
|
||||
// Verify that this generated code is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
|
||||
// Verify that runtime/protoimpl is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
|
||||
)
|
||||
|
||||
// The launch stage as defined by [Google Cloud Platform
// Launch Stages](https://cloud.google.com/terms/launch-stages).
type LaunchStage int32

const (
	// Do not use this default value.
	LaunchStage_LAUNCH_STAGE_UNSPECIFIED LaunchStage = 0
	// The feature is not yet implemented. Users can not use it.
	LaunchStage_UNIMPLEMENTED LaunchStage = 6
	// Prelaunch features are hidden from users and are only visible internally.
	LaunchStage_PRELAUNCH LaunchStage = 7
	// Early Access features are limited to a closed group of testers. To use
	// these features, you must sign up in advance and sign a Trusted Tester
	// agreement (which includes confidentiality provisions). These features may
	// be unstable, changed in backward-incompatible ways, and are not
	// guaranteed to be released.
	LaunchStage_EARLY_ACCESS LaunchStage = 1
	// Alpha is a limited availability test for releases before they are cleared
	// for widespread use. By Alpha, all significant design issues are resolved
	// and we are in the process of verifying functionality. Alpha customers
	// need to apply for access, agree to applicable terms, and have their
	// projects allowlisted. Alpha releases don't have to be feature complete,
	// no SLAs are provided, and there are no technical support obligations, but
	// they will be far enough along that customers can actually use them in
	// test environments or for limited-use tests -- just like they would in
	// normal production cases.
	LaunchStage_ALPHA LaunchStage = 2
	// Beta is the point at which we are ready to open a release for any
	// customer to use. There are no SLA or technical support obligations in a
	// Beta release. Products will be complete from a feature perspective, but
	// may have some open outstanding issues. Beta releases are suitable for
	// limited production use cases.
	LaunchStage_BETA LaunchStage = 3
	// GA features are open to all developers and are considered stable and
	// fully qualified for production use.
	LaunchStage_GA LaunchStage = 4
	// Deprecated features are scheduled to be shut down and removed. For more
	// information, see the "Deprecation Policy" section of our [Terms of
	// Service](https://cloud.google.com/terms/)
	// and the [Google Cloud Platform Subject to the Deprecation
	// Policy](https://cloud.google.com/terms/deprecation) documentation.
	LaunchStage_DEPRECATED LaunchStage = 5
)
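A small sketch of reading this enum from client code, assuming the vendored import path that matches the file above:

package main

import (
	"fmt"

	api "google.golang.org/genproto/googleapis/api"
)

func main() {
	stage := api.LaunchStage_BETA
	fmt.Println(stage.String())              // "BETA"
	fmt.Println(api.LaunchStage_value["GA"]) // 4
}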
// Enum value maps for LaunchStage.
|
||||
var (
|
||||
LaunchStage_name = map[int32]string{
|
||||
0: "LAUNCH_STAGE_UNSPECIFIED",
|
||||
6: "UNIMPLEMENTED",
|
||||
7: "PRELAUNCH",
|
||||
1: "EARLY_ACCESS",
|
||||
2: "ALPHA",
|
||||
3: "BETA",
|
||||
4: "GA",
|
||||
5: "DEPRECATED",
|
||||
}
|
||||
LaunchStage_value = map[string]int32{
|
||||
"LAUNCH_STAGE_UNSPECIFIED": 0,
|
||||
"UNIMPLEMENTED": 6,
|
||||
"PRELAUNCH": 7,
|
||||
"EARLY_ACCESS": 1,
|
||||
"ALPHA": 2,
|
||||
"BETA": 3,
|
||||
"GA": 4,
|
||||
"DEPRECATED": 5,
|
||||
}
|
||||
)
|
||||
|
||||
func (x LaunchStage) Enum() *LaunchStage {
|
||||
p := new(LaunchStage)
|
||||
*p = x
|
||||
return p
|
||||
}
|
||||
|
||||
func (x LaunchStage) String() string {
|
||||
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
|
||||
}
|
||||
|
||||
func (LaunchStage) Descriptor() protoreflect.EnumDescriptor {
|
||||
return file_google_api_launch_stage_proto_enumTypes[0].Descriptor()
|
||||
}
|
||||
|
||||
func (LaunchStage) Type() protoreflect.EnumType {
|
||||
return &file_google_api_launch_stage_proto_enumTypes[0]
|
||||
}
|
||||
|
||||
func (x LaunchStage) Number() protoreflect.EnumNumber {
|
||||
return protoreflect.EnumNumber(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use LaunchStage.Descriptor instead.
|
||||
func (LaunchStage) EnumDescriptor() ([]byte, []int) {
|
||||
return file_google_api_launch_stage_proto_rawDescGZIP(), []int{0}
|
||||
}
|
||||
|
||||
var File_google_api_launch_stage_proto protoreflect.FileDescriptor
|
||||
|
||||
var file_google_api_launch_stage_proto_rawDesc = []byte{
|
||||
0x0a, 0x1d, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x6c, 0x61, 0x75,
|
||||
0x6e, 0x63, 0x68, 0x5f, 0x73, 0x74, 0x61, 0x67, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12,
|
||||
0x0a, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2a, 0x8c, 0x01, 0x0a, 0x0b,
|
||||
0x4c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x53, 0x74, 0x61, 0x67, 0x65, 0x12, 0x1c, 0x0a, 0x18, 0x4c,
|
||||
0x41, 0x55, 0x4e, 0x43, 0x48, 0x5f, 0x53, 0x54, 0x41, 0x47, 0x45, 0x5f, 0x55, 0x4e, 0x53, 0x50,
|
||||
0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x11, 0x0a, 0x0d, 0x55, 0x4e, 0x49,
|
||||
0x4d, 0x50, 0x4c, 0x45, 0x4d, 0x45, 0x4e, 0x54, 0x45, 0x44, 0x10, 0x06, 0x12, 0x0d, 0x0a, 0x09,
|
||||
0x50, 0x52, 0x45, 0x4c, 0x41, 0x55, 0x4e, 0x43, 0x48, 0x10, 0x07, 0x12, 0x10, 0x0a, 0x0c, 0x45,
|
||||
0x41, 0x52, 0x4c, 0x59, 0x5f, 0x41, 0x43, 0x43, 0x45, 0x53, 0x53, 0x10, 0x01, 0x12, 0x09, 0x0a,
|
||||
0x05, 0x41, 0x4c, 0x50, 0x48, 0x41, 0x10, 0x02, 0x12, 0x08, 0x0a, 0x04, 0x42, 0x45, 0x54, 0x41,
|
||||
0x10, 0x03, 0x12, 0x06, 0x0a, 0x02, 0x47, 0x41, 0x10, 0x04, 0x12, 0x0e, 0x0a, 0x0a, 0x44, 0x45,
|
||||
0x50, 0x52, 0x45, 0x43, 0x41, 0x54, 0x45, 0x44, 0x10, 0x05, 0x42, 0x5a, 0x0a, 0x0e, 0x63, 0x6f,
|
||||
0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x42, 0x10, 0x4c, 0x61,
|
||||
0x75, 0x6e, 0x63, 0x68, 0x53, 0x74, 0x61, 0x67, 0x65, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01,
|
||||
0x5a, 0x2d, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e,
|
||||
0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x3b, 0x61, 0x70, 0x69, 0xa2,
|
||||
0x02, 0x04, 0x47, 0x41, 0x50, 0x49, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
file_google_api_launch_stage_proto_rawDescOnce sync.Once
|
||||
file_google_api_launch_stage_proto_rawDescData = file_google_api_launch_stage_proto_rawDesc
|
||||
)
|
||||
|
||||
func file_google_api_launch_stage_proto_rawDescGZIP() []byte {
|
||||
file_google_api_launch_stage_proto_rawDescOnce.Do(func() {
|
||||
file_google_api_launch_stage_proto_rawDescData = protoimpl.X.CompressGZIP(file_google_api_launch_stage_proto_rawDescData)
|
||||
})
|
||||
return file_google_api_launch_stage_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_google_api_launch_stage_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
|
||||
var file_google_api_launch_stage_proto_goTypes = []interface{}{
|
||||
(LaunchStage)(0), // 0: google.api.LaunchStage
|
||||
}
|
||||
var file_google_api_launch_stage_proto_depIdxs = []int32{
|
||||
0, // [0:0] is the sub-list for method output_type
|
||||
0, // [0:0] is the sub-list for method input_type
|
||||
0, // [0:0] is the sub-list for extension type_name
|
||||
0, // [0:0] is the sub-list for extension extendee
|
||||
0, // [0:0] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_google_api_launch_stage_proto_init() }
|
||||
func file_google_api_launch_stage_proto_init() {
|
||||
if File_google_api_launch_stage_proto != nil {
|
||||
return
|
||||
}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_google_api_launch_stage_proto_rawDesc,
|
||||
NumEnums: 1,
|
||||
NumMessages: 0,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
GoTypes: file_google_api_launch_stage_proto_goTypes,
|
||||
DependencyIndexes: file_google_api_launch_stage_proto_depIdxs,
|
||||
EnumInfos: file_google_api_launch_stage_proto_enumTypes,
|
||||
}.Build()
|
||||
File_google_api_launch_stage_proto = out.File
|
||||
file_google_api_launch_stage_proto_rawDesc = nil
|
||||
file_google_api_launch_stage_proto_goTypes = nil
|
||||
file_google_api_launch_stage_proto_depIdxs = nil
|
||||
}
|
||||
23
vendor/google.golang.org/genproto/googleapis/api/tidyfix.go
generated
vendored
Normal file
23
vendor/google.golang.org/genproto/googleapis/api/tidyfix.go
generated
vendored
Normal file
@@ -0,0 +1,23 @@
|
||||
// Copyright 2023 Google LLC
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// This file, and the {{.RootMod}} import, won't actually become part of
|
||||
// the resultant binary.
|
||||
//go:build modhack
|
||||
// +build modhack
|
||||
|
||||
package api
|
||||
|
||||
// Necessary for safely adding multi-module repo. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository
|
||||
import _ "google.golang.org/genproto/internal"
|
||||
202
vendor/google.golang.org/genproto/googleapis/rpc/LICENSE
generated
vendored
Normal file
202
vendor/google.golang.org/genproto/googleapis/rpc/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,202 @@
374
vendor/google.golang.org/genproto/googleapis/rpc/errdetails/error_details.pb.go
generated
vendored
@@ -1,4 +1,4 @@
|
||||
// Copyright 2020 Google LLC
|
||||
// Copyright 2022 Google LLC
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
@@ -15,7 +15,7 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// versions:
|
||||
// protoc-gen-go v1.26.0
|
||||
// protoc v3.12.2
|
||||
// protoc v3.21.9
|
||||
// source: google/rpc/error_details.proto
|
||||
|
||||
package errdetails
|
||||
@@ -36,6 +36,112 @@ const (
|
||||
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
|
||||
)
|
||||
|
||||
// Describes the cause of the error with structured details.
|
||||
//
|
||||
// Example of an error when contacting the "pubsub.googleapis.com" API when it
|
||||
// is not enabled:
|
||||
//
|
||||
// { "reason": "API_DISABLED"
|
||||
// "domain": "googleapis.com"
|
||||
// "metadata": {
|
||||
// "resource": "projects/123",
|
||||
// "service": "pubsub.googleapis.com"
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// This response indicates that the pubsub.googleapis.com API is not enabled.
|
||||
//
|
||||
// Example of an error that is returned when attempting to create a Spanner
|
||||
// instance in a region that is out of stock:
|
||||
//
|
||||
// { "reason": "STOCKOUT"
|
||||
// "domain": "spanner.googleapis.com",
|
||||
// "metadata": {
|
||||
// "availableRegions": "us-central1,us-east2"
|
||||
// }
|
||||
// }
|
||||
type ErrorInfo struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// The reason of the error. This is a constant value that identifies the
|
||||
// proximate cause of the error. Error reasons are unique within a particular
|
||||
// domain of errors. This should be at most 63 characters and match a
|
||||
// regular expression of `[A-Z][A-Z0-9_]+[A-Z0-9]`, which represents
|
||||
// UPPER_SNAKE_CASE.
|
||||
Reason string `protobuf:"bytes,1,opt,name=reason,proto3" json:"reason,omitempty"`
|
||||
// The logical grouping to which the "reason" belongs. The error domain
|
||||
// is typically the registered service name of the tool or product that
|
||||
// generates the error. Example: "pubsub.googleapis.com". If the error is
|
||||
// generated by some common infrastructure, the error domain must be a
|
||||
// globally unique value that identifies the infrastructure. For Google API
|
||||
// infrastructure, the error domain is "googleapis.com".
|
||||
Domain string `protobuf:"bytes,2,opt,name=domain,proto3" json:"domain,omitempty"`
|
||||
// Additional structured details about this error.
|
||||
//
|
||||
// Keys should match /[a-zA-Z0-9-_]/ and be limited to 64 characters in
|
||||
// length. When identifying the current value of an exceeded limit, the units
|
||||
// should be contained in the key, not the value. For example, rather than
|
||||
// {"instanceLimit": "100/request"}, should be returned as,
|
||||
// {"instanceLimitPerRequest": "100"}, if the client exceeds the number of
|
||||
// instances that can be created in a single (batch) request.
|
||||
Metadata map[string]string `protobuf:"bytes,3,rep,name=metadata,proto3" json:"metadata,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) Reset() {
|
||||
*x = ErrorInfo{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[0]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*ErrorInfo) ProtoMessage() {}
|
||||
|
||||
func (x *ErrorInfo) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[0]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use ErrorInfo.ProtoReflect.Descriptor instead.
|
||||
func (*ErrorInfo) Descriptor() ([]byte, []int) {
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{0}
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) GetReason() string {
|
||||
if x != nil {
|
||||
return x.Reason
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) GetDomain() string {
|
||||
if x != nil {
|
||||
return x.Domain
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) GetMetadata() map[string]string {
|
||||
if x != nil {
|
||||
return x.Metadata
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
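// A minimal client-side sketch of consuming the ErrorInfo message defined
// above. It assumes the companion packages are imported as:
//
//	import (
//		"google.golang.org/genproto/googleapis/rpc/errdetails"
//		"google.golang.org/grpc/status"
//	)
//
// The helper name reasonFromError is illustrative only.
func reasonFromError(err error) (string, bool) {
	// status.FromError recovers the *status.Status carried by a gRPC error.
	st, ok := status.FromError(err)
	if !ok {
		return "", false
	}
	// Details() returns the decoded detail messages; pick out the ErrorInfo.
	for _, d := range st.Details() {
		if info, ok := d.(*errdetails.ErrorInfo); ok {
			return info.GetReason(), true
		}
	}
	return "", false
}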
// Describes when the clients can retry a failed request. Clients could ignore
|
||||
// the recommendation here or retry when this information is missing from error
|
||||
// responses.
|
||||
@@ -61,7 +167,7 @@ type RetryInfo struct {
|
||||
func (x *RetryInfo) Reset() {
|
||||
*x = RetryInfo{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[0]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[1]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
@@ -74,7 +180,7 @@ func (x *RetryInfo) String() string {
|
||||
func (*RetryInfo) ProtoMessage() {}
|
||||
|
||||
func (x *RetryInfo) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[0]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[1]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
@@ -87,7 +193,7 @@ func (x *RetryInfo) ProtoReflect() protoreflect.Message {
|
||||
|
||||
// Deprecated: Use RetryInfo.ProtoReflect.Descriptor instead.
|
||||
func (*RetryInfo) Descriptor() ([]byte, []int) {
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{0}
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{1}
|
||||
}
|
||||
|
||||
func (x *RetryInfo) GetRetryDelay() *durationpb.Duration {
|
||||
@@ -112,7 +218,7 @@ type DebugInfo struct {
|
||||
func (x *DebugInfo) Reset() {
|
||||
*x = DebugInfo{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[1]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[2]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
@@ -125,7 +231,7 @@ func (x *DebugInfo) String() string {
|
||||
func (*DebugInfo) ProtoMessage() {}
|
||||
|
||||
func (x *DebugInfo) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[1]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[2]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
@@ -138,7 +244,7 @@ func (x *DebugInfo) ProtoReflect() protoreflect.Message {
|
||||
|
||||
// Deprecated: Use DebugInfo.ProtoReflect.Descriptor instead.
|
||||
func (*DebugInfo) Descriptor() ([]byte, []int) {
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{1}
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{2}
|
||||
}
|
||||
|
||||
func (x *DebugInfo) GetStackEntries() []string {
|
||||
@@ -178,7 +284,7 @@ type QuotaFailure struct {
|
||||
func (x *QuotaFailure) Reset() {
|
||||
*x = QuotaFailure{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[2]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[3]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
@@ -191,7 +297,7 @@ func (x *QuotaFailure) String() string {
|
||||
func (*QuotaFailure) ProtoMessage() {}
|
||||
|
||||
func (x *QuotaFailure) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[2]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[3]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
@@ -204,7 +310,7 @@ func (x *QuotaFailure) ProtoReflect() protoreflect.Message {
|
||||
|
||||
// Deprecated: Use QuotaFailure.ProtoReflect.Descriptor instead.
|
||||
func (*QuotaFailure) Descriptor() ([]byte, []int) {
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{2}
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{3}
|
||||
}
|
||||
|
||||
func (x *QuotaFailure) GetViolations() []*QuotaFailure_Violation {
|
||||
@@ -214,111 +320,6 @@ func (x *QuotaFailure) GetViolations() []*QuotaFailure_Violation {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Describes the cause of the error with structured details.
|
||||
//
|
||||
// Example of an error when contacting the "pubsub.googleapis.com" API when it
|
||||
// is not enabled:
|
||||
//
|
||||
// { "reason": "API_DISABLED"
|
||||
// "domain": "googleapis.com"
|
||||
// "metadata": {
|
||||
// "resource": "projects/123",
|
||||
// "service": "pubsub.googleapis.com"
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// This response indicates that the pubsub.googleapis.com API is not enabled.
|
||||
//
|
||||
// Example of an error that is returned when attempting to create a Spanner
|
||||
// instance in a region that is out of stock:
|
||||
//
|
||||
// { "reason": "STOCKOUT"
|
||||
// "domain": "spanner.googleapis.com",
|
||||
// "metadata": {
|
||||
// "availableRegions": "us-central1,us-east2"
|
||||
// }
|
||||
// }
|
||||
type ErrorInfo struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// The reason of the error. This is a constant value that identifies the
|
||||
// proximate cause of the error. Error reasons are unique within a particular
|
||||
// domain of errors. This should be at most 63 characters and match
|
||||
// /[A-Z0-9_]+/.
|
||||
Reason string `protobuf:"bytes,1,opt,name=reason,proto3" json:"reason,omitempty"`
|
||||
// The logical grouping to which the "reason" belongs. The error domain
|
||||
// is typically the registered service name of the tool or product that
|
||||
// generates the error. Example: "pubsub.googleapis.com". If the error is
|
||||
// generated by some common infrastructure, the error domain must be a
|
||||
// globally unique value that identifies the infrastructure. For Google API
|
||||
// infrastructure, the error domain is "googleapis.com".
|
||||
Domain string `protobuf:"bytes,2,opt,name=domain,proto3" json:"domain,omitempty"`
|
||||
// Additional structured details about this error.
|
||||
//
|
||||
// Keys should match /[a-zA-Z0-9-_]/ and be limited to 64 characters in
|
||||
// length. When identifying the current value of an exceeded limit, the units
|
||||
// should be contained in the key, not the value. For example, rather than
|
||||
// {"instanceLimit": "100/request"}, should be returned as,
|
||||
// {"instanceLimitPerRequest": "100"}, if the client exceeds the number of
|
||||
// instances that can be created in a single (batch) request.
|
||||
Metadata map[string]string `protobuf:"bytes,3,rep,name=metadata,proto3" json:"metadata,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) Reset() {
|
||||
*x = ErrorInfo{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[3]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*ErrorInfo) ProtoMessage() {}
|
||||
|
||||
func (x *ErrorInfo) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[3]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use ErrorInfo.ProtoReflect.Descriptor instead.
|
||||
func (*ErrorInfo) Descriptor() ([]byte, []int) {
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{3}
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) GetReason() string {
|
||||
if x != nil {
|
||||
return x.Reason
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) GetDomain() string {
|
||||
if x != nil {
|
||||
return x.Domain
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *ErrorInfo) GetMetadata() map[string]string {
|
||||
if x != nil {
|
||||
return x.Metadata
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Describes what preconditions have failed.
|
||||
//
|
||||
// For example, if an RPC failed because it required the Terms of Service to be
|
||||
@@ -495,7 +496,8 @@ type ResourceInfo struct {
|
||||
ResourceType string `protobuf:"bytes,1,opt,name=resource_type,json=resourceType,proto3" json:"resource_type,omitempty"`
|
||||
// The name of the resource being accessed. For example, a shared calendar
|
||||
// name: "example.com_4fghdhgsrgh@group.calendar.google.com", if the current
|
||||
// error is [google.rpc.Code.PERMISSION_DENIED][google.rpc.Code.PERMISSION_DENIED].
|
||||
// error is
|
||||
// [google.rpc.Code.PERMISSION_DENIED][google.rpc.Code.PERMISSION_DENIED].
|
||||
ResourceName string `protobuf:"bytes,2,opt,name=resource_name,json=resourceName,proto3" json:"resource_name,omitempty"`
|
||||
// The owner of the resource (optional).
|
||||
// For example, "user:<owner email>" or "project:<Google developer project
|
||||
@@ -628,7 +630,7 @@ type LocalizedMessage struct {
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// The locale used following the specification defined at
|
||||
// http://www.rfc-editor.org/rfc/bcp/bcp47.txt.
|
||||
// https://www.rfc-editor.org/rfc/bcp/bcp47.txt.
|
||||
// Examples are: "en-US", "fr-CH", "es-MX"
|
||||
Locale string `protobuf:"bytes,1,opt,name=locale,proto3" json:"locale,omitempty"`
|
||||
// The localized error message in the above locale.
|
||||
@@ -705,7 +707,7 @@ type QuotaFailure_Violation struct {
|
||||
func (x *QuotaFailure_Violation) Reset() {
|
||||
*x = QuotaFailure_Violation{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[10]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[11]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
@@ -718,7 +720,7 @@ func (x *QuotaFailure_Violation) String() string {
|
||||
func (*QuotaFailure_Violation) ProtoMessage() {}
|
||||
|
||||
func (x *QuotaFailure_Violation) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[10]
|
||||
mi := &file_google_rpc_error_details_proto_msgTypes[11]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
@@ -731,7 +733,7 @@ func (x *QuotaFailure_Violation) ProtoReflect() protoreflect.Message {
|
||||
|
||||
// Deprecated: Use QuotaFailure_Violation.ProtoReflect.Descriptor instead.
|
||||
func (*QuotaFailure_Violation) Descriptor() ([]byte, []int) {
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{2, 0}
|
||||
return file_google_rpc_error_details_proto_rawDescGZIP(), []int{3, 0}
|
||||
}
|
||||
|
||||
func (x *QuotaFailure_Violation) GetSubject() string {
|
||||
@@ -828,9 +830,43 @@ type BadRequest_FieldViolation struct {
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// A path leading to a field in the request body. The value will be a
|
||||
// A path that leads to a field in the request body. The value will be a
|
||||
// sequence of dot-separated identifiers that identify a protocol buffer
|
||||
// field. E.g., "field_violations.field" would identify this field.
|
||||
// field.
|
||||
//
|
||||
// Consider the following:
|
||||
//
|
||||
// message CreateContactRequest {
|
||||
// message EmailAddress {
|
||||
// enum Type {
|
||||
// TYPE_UNSPECIFIED = 0;
|
||||
// HOME = 1;
|
||||
// WORK = 2;
|
||||
// }
|
||||
//
|
||||
// optional string email = 1;
|
||||
// repeated EmailType type = 2;
|
||||
// }
|
||||
//
|
||||
// string full_name = 1;
|
||||
// repeated EmailAddress email_addresses = 2;
|
||||
// }
|
||||
//
|
||||
// In this example, in proto `field` could take one of the following values:
|
||||
//
|
||||
// - `full_name` for a violation in the `full_name` value
|
||||
// - `email_addresses[1].email` for a violation in the `email` field of the
|
||||
// first `email_addresses` message
|
||||
// - `email_addresses[3].type[2]` for a violation in the second `type`
|
||||
// value in the third `email_addresses` message.
|
||||
//
|
||||
// In JSON, the same values are represented as:
|
||||
//
|
||||
// - `fullName` for a violation in the `fullName` value
|
||||
// - `emailAddresses[1].email` for a violation in the `email` field of the
|
||||
// first `emailAddresses` message
|
||||
// - `emailAddresses[3].type[2]` for a violation in the second `type`
|
||||
// value in the third `emailAddresses` message.
|
||||
Field string `protobuf:"bytes,1,opt,name=field,proto3" json:"field,omitempty"`
|
||||
// A description of why the request element is bad.
|
||||
Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
|
||||
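// To make the field-path convention described above concrete, a server could
// attach a BadRequest detail along the following lines. This is a sketch,
// assuming the usual companion packages:
//
//	import (
//		"google.golang.org/genproto/googleapis/rpc/errdetails"
//		"google.golang.org/grpc/codes"
//		"google.golang.org/grpc/status"
//	)
//
// The function name and the violated field are illustrative only.
func invalidEmailError() error {
	st := status.New(codes.InvalidArgument, "invalid email address")
	st, err := st.WithDetails(&errdetails.BadRequest{
		FieldViolations: []*errdetails.BadRequest_FieldViolation{{
			Field:       "email_addresses[1].email", // path in the proto form documented above
			Description: "address is not well formed",
		}},
	})
	if err != nil {
		// WithDetails only fails if a detail cannot be marshaled; fall back
		// to the bare status in that case.
		return status.Error(codes.InvalidArgument, "invalid email address")
	}
	return st.Err()
}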
@@ -947,38 +983,38 @@ var file_google_rpc_error_details_proto_rawDesc = []byte{
|
||||
0x6f, 0x72, 0x5f, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x12, 0x0a, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70, 0x63, 0x1a, 0x1e, 0x67, 0x6f,
|
||||
0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75,
|
||||
0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x47, 0x0a, 0x09,
|
||||
0x52, 0x65, 0x74, 0x72, 0x79, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x3a, 0x0a, 0x0b, 0x72, 0x65, 0x74,
|
||||
0x72, 0x79, 0x5f, 0x64, 0x65, 0x6c, 0x61, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19,
|
||||
0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66,
|
||||
0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0a, 0x72, 0x65, 0x74, 0x72, 0x79,
|
||||
0x44, 0x65, 0x6c, 0x61, 0x79, 0x22, 0x48, 0x0a, 0x09, 0x44, 0x65, 0x62, 0x75, 0x67, 0x49, 0x6e,
|
||||
0x66, 0x6f, 0x12, 0x23, 0x0a, 0x0d, 0x73, 0x74, 0x61, 0x63, 0x6b, 0x5f, 0x65, 0x6e, 0x74, 0x72,
|
||||
0x69, 0x65, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0c, 0x73, 0x74, 0x61, 0x63, 0x6b,
|
||||
0x45, 0x6e, 0x74, 0x72, 0x69, 0x65, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x64, 0x65, 0x74, 0x61, 0x69,
|
||||
0x6c, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x22,
|
||||
0x9b, 0x01, 0x0a, 0x0c, 0x51, 0x75, 0x6f, 0x74, 0x61, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65,
|
||||
0x12, 0x42, 0x0a, 0x0a, 0x76, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x01,
|
||||
0x20, 0x03, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70,
|
||||
0x63, 0x2e, 0x51, 0x75, 0x6f, 0x74, 0x61, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x2e, 0x56,
|
||||
0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0a, 0x76, 0x69, 0x6f, 0x6c, 0x61, 0x74,
|
||||
0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x47, 0x0a, 0x09, 0x56, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f,
|
||||
0x6e, 0x12, 0x18, 0x0a, 0x07, 0x73, 0x75, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x18, 0x01, 0x20, 0x01,
|
||||
0x28, 0x09, 0x52, 0x07, 0x73, 0x75, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x12, 0x20, 0x0a, 0x0b, 0x64,
|
||||
0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09,
|
||||
0x52, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0xb9, 0x01,
|
||||
0x0a, 0x09, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x16, 0x0a, 0x06, 0x72,
|
||||
0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x72, 0x65, 0x61,
|
||||
0x73, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x18, 0x02, 0x20,
|
||||
0x01, 0x28, 0x09, 0x52, 0x06, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x12, 0x3f, 0x0a, 0x08, 0x6d,
|
||||
0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x23, 0x2e,
|
||||
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70, 0x63, 0x2e, 0x45, 0x72, 0x72, 0x6f, 0x72,
|
||||
0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x45, 0x6e, 0x74,
|
||||
0x72, 0x79, 0x52, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x1a, 0x3b, 0x0a, 0x0d,
|
||||
0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
|
||||
0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
|
||||
0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05,
|
||||
0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xbd, 0x01, 0x0a, 0x13, 0x50, 0x72,
|
||||
0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xb9, 0x01, 0x0a,
|
||||
0x09, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x16, 0x0a, 0x06, 0x72, 0x65,
|
||||
0x61, 0x73, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x72, 0x65, 0x61, 0x73,
|
||||
0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x18, 0x02, 0x20, 0x01,
|
||||
0x28, 0x09, 0x52, 0x06, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x12, 0x3f, 0x0a, 0x08, 0x6d, 0x65,
|
||||
0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x67,
|
||||
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70, 0x63, 0x2e, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x49,
|
||||
0x6e, 0x66, 0x6f, 0x2e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x45, 0x6e, 0x74, 0x72,
|
||||
0x79, 0x52, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x1a, 0x3b, 0x0a, 0x0d, 0x4d,
|
||||
0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03,
|
||||
0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14,
|
||||
0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76,
|
||||
0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x47, 0x0a, 0x09, 0x52, 0x65, 0x74, 0x72,
|
||||
0x79, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x3a, 0x0a, 0x0b, 0x72, 0x65, 0x74, 0x72, 0x79, 0x5f, 0x64,
|
||||
0x65, 0x6c, 0x61, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72,
|
||||
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0a, 0x72, 0x65, 0x74, 0x72, 0x79, 0x44, 0x65, 0x6c, 0x61,
|
||||
0x79, 0x22, 0x48, 0x0a, 0x09, 0x44, 0x65, 0x62, 0x75, 0x67, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x23,
|
||||
0x0a, 0x0d, 0x73, 0x74, 0x61, 0x63, 0x6b, 0x5f, 0x65, 0x6e, 0x74, 0x72, 0x69, 0x65, 0x73, 0x18,
|
||||
0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0c, 0x73, 0x74, 0x61, 0x63, 0x6b, 0x45, 0x6e, 0x74, 0x72,
|
||||
0x69, 0x65, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x18, 0x02, 0x20,
|
||||
0x01, 0x28, 0x09, 0x52, 0x06, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x22, 0x9b, 0x01, 0x0a, 0x0c,
|
||||
0x51, 0x75, 0x6f, 0x74, 0x61, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x12, 0x42, 0x0a, 0x0a,
|
||||
0x76, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b,
|
||||
0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70, 0x63, 0x2e, 0x51, 0x75,
|
||||
0x6f, 0x74, 0x61, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x2e, 0x56, 0x69, 0x6f, 0x6c, 0x61,
|
||||
0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0a, 0x76, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73,
|
||||
0x1a, 0x47, 0x0a, 0x09, 0x56, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x18, 0x0a,
|
||||
0x07, 0x73, 0x75, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07,
|
||||
0x73, 0x75, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72,
|
||||
0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65,
|
||||
0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0xbd, 0x01, 0x0a, 0x13, 0x50, 0x72,
|
||||
0x65, 0x63, 0x6f, 0x6e, 0x64, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72,
|
||||
0x65, 0x12, 0x49, 0x0a, 0x0a, 0x76, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18,
|
||||
0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x29, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72,
|
||||
@@ -1051,27 +1087,27 @@ func file_google_rpc_error_details_proto_rawDescGZIP() []byte {
|
||||
|
||||
var file_google_rpc_error_details_proto_msgTypes = make([]protoimpl.MessageInfo, 15)
|
||||
var file_google_rpc_error_details_proto_goTypes = []interface{}{
|
||||
(*RetryInfo)(nil), // 0: google.rpc.RetryInfo
|
||||
(*DebugInfo)(nil), // 1: google.rpc.DebugInfo
|
||||
(*QuotaFailure)(nil), // 2: google.rpc.QuotaFailure
|
||||
(*ErrorInfo)(nil), // 3: google.rpc.ErrorInfo
|
||||
(*ErrorInfo)(nil), // 0: google.rpc.ErrorInfo
|
||||
(*RetryInfo)(nil), // 1: google.rpc.RetryInfo
|
||||
(*DebugInfo)(nil), // 2: google.rpc.DebugInfo
|
||||
(*QuotaFailure)(nil), // 3: google.rpc.QuotaFailure
|
||||
(*PreconditionFailure)(nil), // 4: google.rpc.PreconditionFailure
|
||||
(*BadRequest)(nil), // 5: google.rpc.BadRequest
|
||||
(*RequestInfo)(nil), // 6: google.rpc.RequestInfo
|
||||
(*ResourceInfo)(nil), // 7: google.rpc.ResourceInfo
|
||||
(*Help)(nil), // 8: google.rpc.Help
|
||||
(*LocalizedMessage)(nil), // 9: google.rpc.LocalizedMessage
|
||||
(*QuotaFailure_Violation)(nil), // 10: google.rpc.QuotaFailure.Violation
|
||||
nil, // 11: google.rpc.ErrorInfo.MetadataEntry
|
||||
nil, // 10: google.rpc.ErrorInfo.MetadataEntry
|
||||
(*QuotaFailure_Violation)(nil), // 11: google.rpc.QuotaFailure.Violation
|
||||
(*PreconditionFailure_Violation)(nil), // 12: google.rpc.PreconditionFailure.Violation
|
||||
(*BadRequest_FieldViolation)(nil), // 13: google.rpc.BadRequest.FieldViolation
|
||||
(*Help_Link)(nil), // 14: google.rpc.Help.Link
|
||||
(*durationpb.Duration)(nil), // 15: google.protobuf.Duration
|
||||
}
|
||||
var file_google_rpc_error_details_proto_depIdxs = []int32{
|
||||
15, // 0: google.rpc.RetryInfo.retry_delay:type_name -> google.protobuf.Duration
|
||||
10, // 1: google.rpc.QuotaFailure.violations:type_name -> google.rpc.QuotaFailure.Violation
|
||||
11, // 2: google.rpc.ErrorInfo.metadata:type_name -> google.rpc.ErrorInfo.MetadataEntry
|
||||
10, // 0: google.rpc.ErrorInfo.metadata:type_name -> google.rpc.ErrorInfo.MetadataEntry
|
||||
15, // 1: google.rpc.RetryInfo.retry_delay:type_name -> google.protobuf.Duration
|
||||
11, // 2: google.rpc.QuotaFailure.violations:type_name -> google.rpc.QuotaFailure.Violation
|
||||
12, // 3: google.rpc.PreconditionFailure.violations:type_name -> google.rpc.PreconditionFailure.Violation
|
||||
13, // 4: google.rpc.BadRequest.field_violations:type_name -> google.rpc.BadRequest.FieldViolation
|
||||
14, // 5: google.rpc.Help.links:type_name -> google.rpc.Help.Link
|
||||
@@ -1089,7 +1125,7 @@ func file_google_rpc_error_details_proto_init() {
|
||||
}
|
||||
if !protoimpl.UnsafeEnabled {
|
||||
file_google_rpc_error_details_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*RetryInfo); i {
|
||||
switch v := v.(*ErrorInfo); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
@@ -1101,7 +1137,7 @@ func file_google_rpc_error_details_proto_init() {
|
||||
}
|
||||
}
|
||||
file_google_rpc_error_details_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*DebugInfo); i {
|
||||
switch v := v.(*RetryInfo); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
@@ -1113,7 +1149,7 @@ func file_google_rpc_error_details_proto_init() {
|
||||
}
|
||||
}
|
||||
file_google_rpc_error_details_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*QuotaFailure); i {
|
||||
switch v := v.(*DebugInfo); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
@@ -1125,7 +1161,7 @@ func file_google_rpc_error_details_proto_init() {
|
||||
}
|
||||
}
|
||||
file_google_rpc_error_details_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*ErrorInfo); i {
|
||||
switch v := v.(*QuotaFailure); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
@@ -1208,7 +1244,7 @@ func file_google_rpc_error_details_proto_init() {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_google_rpc_error_details_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
|
||||
file_google_rpc_error_details_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*QuotaFailure_Violation); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
|
||||
10
vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go
generated
vendored
@@ -1,4 +1,4 @@
// Copyright 2020 Google LLC
// Copyright 2022 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.12.2
// protoc v3.21.9
// source: google/rpc/status.proto

package status

@@ -48,11 +48,13 @@ type Status struct {
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields

// The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code].
// The status code, which should be an enum value of
// [google.rpc.Code][google.rpc.Code].
Code int32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
// A developer-facing error message, which should be in English. Any
// user-facing error message should be localized and sent in the
// [google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client.
// [google.rpc.Status.details][google.rpc.Status.details] field, or localized
// by the client.
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
// A list of messages that carry the error details. There is a common set of
// message types for APIs to use.
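// For orientation, the grpc status package wraps this Status message; a short
// sketch, assuming these imports:
//
//	import (
//		"google.golang.org/grpc/codes"
//		"google.golang.org/grpc/status"
//	)
//
// The function name is illustrative only.
func statusProtoExample() {
	st := status.New(codes.NotFound, "resource not found")
	pb := st.Proto()    // the Status message defined in this file
	_ = pb.GetCode()    // int32 value of codes.NotFound
	_ = pb.GetMessage() // "resource not found"
}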
17
vendor/google.golang.org/genproto/internal/doc.go
generated
vendored
Normal file
@@ -0,0 +1,17 @@
// Copyright 2023 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// This file makes internal an importable go package
// for use with backreferences from submodules.
package internal
29
vendor/google.golang.org/grpc/CONTRIBUTING.md
generated
vendored
@@ -20,6 +20,19 @@ How to get your contributions merged smoothly and quickly.
both author's & reviewer's time is wasted. Create more PRs to address different
concerns and everyone will be happy.

- For speculative changes, consider opening an issue and discussing it first. If
you are suggesting a behavioral or API change, consider starting with a [gRFC
proposal](https://github.com/grpc/proposal).

- If you are searching for features to work on, issues labeled [Status: Help
Wanted](https://github.com/grpc/grpc-go/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22Status%3A+Help+Wanted%22)
is a great place to start. These issues are well-documented and usually can be
resolved with a single pull request.

- If you are adding a new file, make sure it has the copyright message template
at the top as a comment. You can copy over the message from an existing file
and update the year.

- The grpc package should only depend on standard Go packages and a small number
of exceptions. If your contribution introduces new dependencies which are NOT
in the [list](https://godoc.org/google.golang.org/grpc?imports), you need a
@@ -32,14 +45,18 @@ How to get your contributions merged smoothly and quickly.
- Provide a good **PR description** as a record of **what** change is being made
and **why** it was made. Link to a github issue if it exists.

- Don't fix code style and formatting unless you are already changing that line
to address an issue. PRs with irrelevant changes won't be merged. If you do
want to fix formatting or style, do that in a separate PR.
- If you want to fix formatting or style, consider whether your changes are an
obvious improvement or might be considered a personal preference. If a style
change is based on preference, it likely will not be accepted. If it corrects
widely agreed-upon anti-patterns, then please do create a PR and explain the
benefits of the change.

- Unless your PR is trivial, you should expect there will be reviewer comments
that you'll need to address before merging. We expect you to be reasonably
responsive to those comments, otherwise the PR will be closed after 2-3 weeks
of inactivity.
that you'll need to address before merging. We'll mark it as `Status: Requires
Reporter Clarification` if we expect you to respond to these comments in a
timely manner. If the PR remains inactive for 6 days, it will be marked as
`stale` and automatically close 7 days after that if we don't hear back from
you.

- Maintain **clean commit history** and use **meaningful commit messages**. PRs
with messy commit history are difficult to review and won't be merged. Use
8
vendor/google.golang.org/grpc/balancer/balancer.go
generated
vendored
@@ -279,6 +279,14 @@ type PickResult struct {
// type, Done may not be called. May be nil if the balancer does not wish
// to be notified when the RPC completes.
Done func(DoneInfo)

// Metadata provides a way for LB policies to inject arbitrary per-call
// metadata. Any metadata returned here will be merged with existing
// metadata added by the client application.
//
// LB policies with child policies are responsible for propagating metadata
// injected by their children to the ClientConn, as part of Pick().
Metatada metadata.MD
}

// TransientFailureError returns e. It exists for backward compatibility and
9
vendor/google.golang.org/grpc/binarylog/grpc_binarylog_v1/binarylog.pb.go
generated
vendored
@@ -18,14 +18,13 @@
|
||||
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// versions:
|
||||
// protoc-gen-go v1.25.0
|
||||
// protoc v3.14.0
|
||||
// protoc-gen-go v1.28.1
|
||||
// protoc v4.22.0
|
||||
// source: grpc/binlog/v1/binarylog.proto
|
||||
|
||||
package grpc_binarylog_v1
|
||||
|
||||
import (
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
|
||||
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
|
||||
durationpb "google.golang.org/protobuf/types/known/durationpb"
|
||||
@@ -41,10 +40,6 @@ const (
|
||||
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
|
||||
)
|
||||
|
||||
// This is a compile-time assertion that a sufficiently up-to-date version
|
||||
// of the legacy proto package is being used.
|
||||
const _ = proto.ProtoPackageIsVersion4
|
||||
|
||||
// Enumerates the type of event
|
||||
// Note the terminology is different from the RPC semantics
|
||||
// definition, but the same meaning is expressed here.
|
||||
|
||||
67
vendor/google.golang.org/grpc/clientconn.go
generated
vendored
@@ -146,8 +146,18 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
|
||||
cc.safeConfigSelector.UpdateConfigSelector(&defaultConfigSelector{nil})
|
||||
cc.ctx, cc.cancel = context.WithCancel(context.Background())
|
||||
|
||||
for _, opt := range extraDialOptions {
|
||||
opt.apply(&cc.dopts)
|
||||
disableGlobalOpts := false
|
||||
for _, opt := range opts {
|
||||
if _, ok := opt.(*disableGlobalDialOptions); ok {
|
||||
disableGlobalOpts = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !disableGlobalOpts {
|
||||
for _, opt := range globalDialOptions {
|
||||
opt.apply(&cc.dopts)
|
||||
}
|
||||
}
|
||||
|
||||
for _, opt := range opts {
|
||||
@@ -256,7 +266,7 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cc.authority, err = determineAuthority(cc.parsedTarget.Endpoint, cc.target, cc.dopts)
|
||||
cc.authority, err = determineAuthority(cc.parsedTarget.Endpoint(), cc.target, cc.dopts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -788,10 +798,16 @@ func (cc *ClientConn) incrCallsFailed() {
|
||||
func (ac *addrConn) connect() error {
|
||||
ac.mu.Lock()
|
||||
if ac.state == connectivity.Shutdown {
|
||||
if logger.V(2) {
|
||||
logger.Infof("connect called on shutdown addrConn; ignoring.")
|
||||
}
|
||||
ac.mu.Unlock()
|
||||
return errConnClosing
|
||||
}
|
||||
if ac.state != connectivity.Idle {
|
||||
if logger.V(2) {
|
||||
logger.Infof("connect called on addrConn in non-idle state (%v); ignoring.", ac.state)
|
||||
}
|
||||
ac.mu.Unlock()
|
||||
return nil
|
||||
}
|
||||
@@ -928,7 +944,7 @@ func (cc *ClientConn) healthCheckConfig() *healthCheckConfig {
|
||||
return cc.sc.healthCheckConfig
|
||||
}
|
||||
|
||||
func (cc *ClientConn) getTransport(ctx context.Context, failfast bool, method string) (transport.ClientTransport, func(balancer.DoneInfo), error) {
|
||||
func (cc *ClientConn) getTransport(ctx context.Context, failfast bool, method string) (transport.ClientTransport, balancer.PickResult, error) {
|
||||
return cc.blockingpicker.pick(ctx, failfast, balancer.PickInfo{
|
||||
Ctx: ctx,
|
||||
FullMethodName: method,
|
||||
@@ -1097,7 +1113,11 @@ func (ac *addrConn) updateConnectivityState(s connectivity.State, lastErr error)
|
||||
return
|
||||
}
|
||||
ac.state = s
|
||||
channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v", s)
|
||||
if lastErr == nil {
|
||||
channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v", s)
|
||||
} else {
|
||||
channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v, last error: %s", s, lastErr)
|
||||
}
|
||||
ac.cc.handleSubConnStateChange(ac.acbw, s, lastErr)
|
||||
}
|
||||
|
||||
@@ -1231,9 +1251,11 @@ func (ac *addrConn) createTransport(addr resolver.Address, copts transport.Conne
|
||||
addr.ServerName = ac.cc.getServerName(addr)
|
||||
hctx, hcancel := context.WithCancel(ac.ctx)
|
||||
|
||||
onClose := grpcsync.OnceFunc(func() {
|
||||
onClose := func(r transport.GoAwayReason) {
|
||||
ac.mu.Lock()
|
||||
defer ac.mu.Unlock()
|
||||
// adjust params based on GoAwayReason
|
||||
ac.adjustParams(r)
|
||||
if ac.state == connectivity.Shutdown {
|
||||
// Already shut down. tearDown() already cleared the transport and
|
||||
// canceled hctx via ac.ctx, and we expected this connection to be
|
||||
@@ -1254,20 +1276,17 @@ func (ac *addrConn) createTransport(addr resolver.Address, copts transport.Conne
|
||||
// Always go idle and wait for the LB policy to initiate a new
|
||||
// connection attempt.
|
||||
ac.updateConnectivityState(connectivity.Idle, nil)
|
||||
})
|
||||
onGoAway := func(r transport.GoAwayReason) {
|
||||
ac.mu.Lock()
|
||||
ac.adjustParams(r)
|
||||
ac.mu.Unlock()
|
||||
onClose()
|
||||
}
|
||||
|
||||
connectCtx, cancel := context.WithDeadline(ac.ctx, connectDeadline)
|
||||
defer cancel()
|
||||
copts.ChannelzParentID = ac.channelzID
|
||||
|
||||
newTr, err := transport.NewClientTransport(connectCtx, ac.cc.ctx, addr, copts, onGoAway, onClose)
|
||||
newTr, err := transport.NewClientTransport(connectCtx, ac.cc.ctx, addr, copts, onClose)
|
||||
if err != nil {
|
||||
if logger.V(2) {
|
||||
logger.Infof("Creating new client transport to %q: %v", addr, err)
|
||||
}
|
||||
// newTr is either nil, or closed.
|
||||
hcancel()
|
||||
channelz.Warningf(logger, ac.channelzID, "grpc: addrConn.createTransport failed to connect to %s. Err: %v", addr, err)
|
||||
@@ -1371,7 +1390,7 @@ func (ac *addrConn) startHealthCheck(ctx context.Context) {
|
||||
if status.Code(err) == codes.Unimplemented {
|
||||
channelz.Error(logger, ac.channelzID, "Subchannel health check is unimplemented at server side, thus health check is disabled")
|
||||
} else {
|
||||
channelz.Errorf(logger, ac.channelzID, "HealthCheckFunc exits with unexpected error %v", err)
|
||||
channelz.Errorf(logger, ac.channelzID, "Health checking failed: %v", err)
|
||||
}
|
||||
}
|
||||
}()
|
||||
@@ -1522,6 +1541,9 @@ func (c *channelzChannel) ChannelzMetric() *channelz.ChannelInternalMetric {
|
||||
// referenced by users.
|
||||
var ErrClientConnTimeout = errors.New("grpc: timed out when dialing")
|
||||
|
||||
// getResolver finds the scheme in the cc's resolvers or the global registry.
|
||||
// scheme should always be lowercase (typically by virtue of url.Parse()
|
||||
// performing proper RFC3986 behavior).
|
||||
func (cc *ClientConn) getResolver(scheme string) resolver.Builder {
|
||||
for _, rb := range cc.dopts.resolvers {
|
||||
if scheme == rb.Scheme() {
|
||||
@@ -1582,30 +1604,17 @@ func (cc *ClientConn) parseTargetAndFindResolver() (resolver.Builder, error) {
|
||||
}
|
||||
|
||||
// parseTarget uses RFC 3986 semantics to parse the given target into a
|
||||
// resolver.Target struct containing scheme, authority and endpoint. Query
|
||||
// resolver.Target struct containing scheme, authority and url. Query
|
||||
// params are stripped from the endpoint.
|
||||
func parseTarget(target string) (resolver.Target, error) {
|
||||
u, err := url.Parse(target)
|
||||
if err != nil {
|
||||
return resolver.Target{}, err
|
||||
}
|
||||
// For targets of the form "[scheme]://[authority]/endpoint, the endpoint
|
||||
// value returned from url.Parse() contains a leading "/". Although this is
|
||||
// in accordance with RFC 3986, we do not want to break existing resolver
|
||||
// implementations which expect the endpoint without the leading "/". So, we
|
||||
// end up stripping the leading "/" here. But this will result in an
|
||||
// incorrect parsing for something like "unix:///path/to/socket". Since we
|
||||
// own the "unix" resolver, we can workaround in the unix resolver by using
|
||||
// the `URL` field instead of the `Endpoint` field.
|
||||
endpoint := u.Path
|
||||
if endpoint == "" {
|
||||
endpoint = u.Opaque
|
||||
}
|
||||
endpoint = strings.TrimPrefix(endpoint, "/")
|
||||
|
||||
return resolver.Target{
|
||||
Scheme: u.Scheme,
|
||||
Authority: u.Host,
|
||||
Endpoint: endpoint,
|
||||
URL: *u,
|
||||
}, nil
|
||||
}
|
||||
|
||||
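// A small illustration of the leading-"/" behavior described in the comment
// above, using only the standard library; the target strings and the function
// name parseTargetExample are examples, not part of this package.
//
//	import (
//		"fmt"
//		"net/url"
//		"strings"
//	)
func parseTargetExample() {
	for _, target := range []string{"dns:///example.com:443", "unix:///var/run/app.sock"} {
		u, err := url.Parse(target)
		if err != nil {
			continue
		}
		// url.Parse keeps the leading "/" on the path, which is why the code
		// above strips it before filling the Endpoint field of resolver.Target.
		fmt.Println(u.Scheme, u.Host, strings.TrimPrefix(u.Path, "/"))
	}
}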
51
vendor/google.golang.org/grpc/codes/code_string.go
generated
vendored
@@ -18,7 +18,15 @@
|
||||
|
||||
package codes
|
||||
|
||||
import "strconv"
|
||||
import (
|
||||
"strconv"
|
||||
|
||||
"google.golang.org/grpc/internal"
|
||||
)
|
||||
|
||||
func init() {
|
||||
internal.CanonicalString = canonicalString
|
||||
}
|
||||
|
||||
func (c Code) String() string {
|
||||
switch c {
|
||||
@@ -60,3 +68,44 @@ func (c Code) String() string {
|
||||
return "Code(" + strconv.FormatInt(int64(c), 10) + ")"
|
||||
}
|
||||
}
|
||||
|
||||
func canonicalString(c Code) string {
|
||||
switch c {
|
||||
case OK:
|
||||
return "OK"
|
||||
case Canceled:
|
||||
return "CANCELLED"
|
||||
case Unknown:
|
||||
return "UNKNOWN"
|
||||
case InvalidArgument:
|
||||
return "INVALID_ARGUMENT"
|
||||
case DeadlineExceeded:
|
||||
return "DEADLINE_EXCEEDED"
|
||||
case NotFound:
|
||||
return "NOT_FOUND"
|
||||
case AlreadyExists:
|
||||
return "ALREADY_EXISTS"
|
||||
case PermissionDenied:
|
||||
return "PERMISSION_DENIED"
|
||||
case ResourceExhausted:
|
||||
return "RESOURCE_EXHAUSTED"
|
||||
case FailedPrecondition:
|
||||
return "FAILED_PRECONDITION"
|
||||
case Aborted:
|
||||
return "ABORTED"
|
||||
case OutOfRange:
|
||||
return "OUT_OF_RANGE"
|
||||
case Unimplemented:
|
||||
return "UNIMPLEMENTED"
|
||||
case Internal:
|
||||
return "INTERNAL"
|
||||
case Unavailable:
|
||||
return "UNAVAILABLE"
|
||||
case DataLoss:
|
||||
return "DATA_LOSS"
|
||||
case Unauthenticated:
|
||||
return "UNAUTHENTICATED"
|
||||
default:
|
||||
return "CODE(" + strconv.FormatInt(int64(c), 10) + ")"
|
||||
}
|
||||
}
|
||||
|
||||
4
vendor/google.golang.org/grpc/credentials/tls.go
generated
vendored
@@ -23,9 +23,9 @@ import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"net"
"net/url"
"os"

credinternal "google.golang.org/grpc/internal/credentials"
)
@@ -166,7 +166,7 @@ func NewClientTLSFromCert(cp *x509.CertPool, serverNameOverride string) Transpor
// it will override the virtual host name of authority (e.g. :authority header
// field) in requests.
func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredentials, error) {
b, err := ioutil.ReadFile(certFile)
b, err := os.ReadFile(certFile)
if err != nil {
return nil, err
}
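// Typical client-side use of NewClientTLSFromFile, as a sketch; the CA file
// name, target address, and helper name are placeholders.
//
//	import (
//		"google.golang.org/grpc"
//		"google.golang.org/grpc/credentials"
//	)
func dialWithServerCert() (*grpc.ClientConn, error) {
	creds, err := credentials.NewClientTLSFromFile("server-ca.pem", "")
	if err != nil {
		return nil, err
	}
	return grpc.Dial("example.com:443", grpc.WithTransportCredentials(creds))
}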
42
vendor/google.golang.org/grpc/dialoptions.go
generated
vendored
@@ -38,12 +38,14 @@ import (
|
||||
|
||||
func init() {
|
||||
internal.AddGlobalDialOptions = func(opt ...DialOption) {
|
||||
extraDialOptions = append(extraDialOptions, opt...)
|
||||
globalDialOptions = append(globalDialOptions, opt...)
|
||||
}
|
||||
internal.ClearGlobalDialOptions = func() {
|
||||
extraDialOptions = nil
|
||||
globalDialOptions = nil
|
||||
}
|
||||
internal.WithBinaryLogger = withBinaryLogger
|
||||
internal.JoinDialOptions = newJoinDialOption
|
||||
internal.DisableGlobalDialOptions = newDisableGlobalDialOptions
|
||||
}
|
||||
|
||||
// dialOptions configure a Dial call. dialOptions are set by the DialOption
|
||||
@@ -82,7 +84,7 @@ type DialOption interface {
|
||||
apply(*dialOptions)
|
||||
}
|
||||
|
||||
var extraDialOptions []DialOption
|
||||
var globalDialOptions []DialOption
|
||||
|
||||
// EmptyDialOption does not alter the dial configuration. It can be embedded in
|
||||
// another structure to build custom dial options.
|
||||
@@ -95,6 +97,16 @@ type EmptyDialOption struct{}
|
||||
|
||||
func (EmptyDialOption) apply(*dialOptions) {}
|
||||
|
||||
type disableGlobalDialOptions struct{}
|
||||
|
||||
func (disableGlobalDialOptions) apply(*dialOptions) {}
|
||||
|
||||
// newDisableGlobalDialOptions returns a DialOption that prevents the ClientConn
|
||||
// from applying the global DialOptions (set via AddGlobalDialOptions).
|
||||
func newDisableGlobalDialOptions() DialOption {
|
||||
return &disableGlobalDialOptions{}
|
||||
}
|
||||
|
||||
// funcDialOption wraps a function that modifies dialOptions into an
|
||||
// implementation of the DialOption interface.
|
||||
type funcDialOption struct {
|
||||
@@ -111,13 +123,28 @@ func newFuncDialOption(f func(*dialOptions)) *funcDialOption {
|
||||
}
|
||||
}
|
||||
|
||||
type joinDialOption struct {
|
||||
opts []DialOption
|
||||
}
|
||||
|
||||
func (jdo *joinDialOption) apply(do *dialOptions) {
|
||||
for _, opt := range jdo.opts {
|
||||
opt.apply(do)
|
||||
}
|
||||
}
|
||||
|
||||
func newJoinDialOption(opts ...DialOption) DialOption {
|
||||
return &joinDialOption{opts: opts}
|
||||
}
|
||||
|
||||
// WithWriteBufferSize determines how much data can be batched before doing a
// write on the wire. The corresponding memory allocation for this buffer will
// be twice the size to keep syscalls low. The default value for this buffer is
// 32KB.
//
// Zero will disable the write buffer such that each write will be on underlying
// connection. Note: A Send call may not directly translate to a write.
// Zero or negative values will disable the write buffer such that each write
// will be on underlying connection. Note: A Send call may not directly
// translate to a write.
func WithWriteBufferSize(s int) DialOption {
	return newFuncDialOption(func(o *dialOptions) {
		o.copts.WriteBufferSize = s
@@ -127,8 +154,9 @@ func WithWriteBufferSize(s int) DialOption {
// WithReadBufferSize lets you set the size of read buffer, this determines how
// much data can be read at most for each read syscall.
//
// The default value for this buffer is 32KB. Zero will disable read buffer for
// a connection so data framer can access the underlying conn directly.
// The default value for this buffer is 32KB. Zero or negative values will
// disable read buffer for a connection so data framer can access the
// underlying conn directly.
func WithReadBufferSize(s int) DialOption {
	return newFuncDialOption(func(o *dialOptions) {
		o.copts.ReadBufferSize = s

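As a usage note, both options are exposed on the public grpc package; a minimal client sketch (the address and insecure credentials are placeholders for illustration) that disables both buffers as described above:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Zero (or, after this change, negative) values disable the write and read
	// buffers so every write goes straight to the underlying connection.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithWriteBufferSize(0),
		grpc.WithReadBufferSize(0),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}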
4
vendor/google.golang.org/grpc/encoding/encoding.go
generated
vendored
@@ -75,7 +75,9 @@ var registeredCompressor = make(map[string]Compressor)
// registered with the same name, the one registered last will take effect.
func RegisterCompressor(c Compressor) {
	registeredCompressor[c.Name()] = c
	grpcutil.RegisteredCompressorNames = append(grpcutil.RegisteredCompressorNames, c.Name())
	if !grpcutil.IsCompressorNameRegistered(c.Name()) {
		grpcutil.RegisteredCompressorNames = append(grpcutil.RegisteredCompressorNames, c.Name())
	}
}

// GetCompressor returns Compressor for the given compressor name.

5
vendor/google.golang.org/grpc/encoding/gzip/gzip.go
generated
vendored
@@ -30,7 +30,6 @@ import (
	"encoding/binary"
	"fmt"
	"io"
	"io/ioutil"
	"sync"

	"google.golang.org/grpc/encoding"
@@ -42,7 +41,7 @@ const Name = "gzip"
func init() {
	c := &compressor{}
	c.poolCompressor.New = func() interface{} {
		return &writer{Writer: gzip.NewWriter(ioutil.Discard), pool: &c.poolCompressor}
		return &writer{Writer: gzip.NewWriter(io.Discard), pool: &c.poolCompressor}
	}
	encoding.RegisterCompressor(c)
}
@@ -63,7 +62,7 @@ func SetLevel(level int) error {
	}
	c := encoding.GetCompressor(Name).(*compressor)
	c.poolCompressor.New = func() interface{} {
		w, err := gzip.NewWriterLevel(ioutil.Discard, level)
		w, err := gzip.NewWriterLevel(io.Discard, level)
		if err != nil {
			panic(err)
		}

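For reference, importing this package is what registers the "gzip" compressor via init(), and SetLevel rebuilds the pooled writers; a minimal client sketch (the address is a placeholder) that opts into gzip per call:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/encoding/gzip" // registers the "gzip" compressor via init()
)

func main() {
	// Optional: trade CPU for ratio; the pool writers are rebuilt at this level.
	if err := gzip.SetLevel(9); err != nil {
		log.Fatalf("gzip level: %v", err)
	}
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}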
7
vendor/google.golang.org/grpc/grpclog/loggerv2.go
generated
vendored
@@ -22,7 +22,6 @@ import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"strconv"
|
||||
@@ -140,9 +139,9 @@ func newLoggerV2WithConfig(infoW, warningW, errorW io.Writer, c loggerV2Config)
|
||||
// newLoggerV2 creates a loggerV2 to be used as default logger.
|
||||
// All logs are written to stderr.
|
||||
func newLoggerV2() LoggerV2 {
|
||||
errorW := ioutil.Discard
|
||||
warningW := ioutil.Discard
|
||||
infoW := ioutil.Discard
|
||||
errorW := io.Discard
|
||||
warningW := io.Discard
|
||||
infoW := io.Discard
|
||||
|
||||
logLevel := os.Getenv("GRPC_GO_LOG_SEVERITY_LEVEL")
|
||||
switch logLevel {
|
||||
|
||||
8
vendor/google.golang.org/grpc/internal/binarylog/binarylog.go
generated
vendored
@@ -28,8 +28,10 @@ import (
|
||||
"google.golang.org/grpc/internal/grpcutil"
|
||||
)
|
||||
|
||||
// Logger is the global binary logger. It can be used to get binary logger for
|
||||
// each method.
|
||||
var grpclogLogger = grpclog.Component("binarylog")
|
||||
|
||||
// Logger specifies MethodLoggers for method names with a Log call that
|
||||
// takes a context.
|
||||
type Logger interface {
|
||||
GetMethodLogger(methodName string) MethodLogger
|
||||
}
|
||||
@@ -40,8 +42,6 @@ type Logger interface {
|
||||
// It is used to get a MethodLogger for each individual method.
|
||||
var binLogger Logger
|
||||
|
||||
var grpclogLogger = grpclog.Component("binarylog")
|
||||
|
||||
// SetLogger sets the binary logger.
|
||||
//
|
||||
// Only call this at init time.
|
||||
|
||||
133
vendor/google.golang.org/grpc/internal/binarylog/method_logger.go
generated
vendored
@@ -19,6 +19,7 @@
|
||||
package binarylog
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
@@ -26,7 +27,7 @@ import (
|
||||
|
||||
"github.com/golang/protobuf/proto"
|
||||
"github.com/golang/protobuf/ptypes"
|
||||
pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1"
|
||||
binlogpb "google.golang.org/grpc/binarylog/grpc_binarylog_v1"
|
||||
"google.golang.org/grpc/metadata"
|
||||
"google.golang.org/grpc/status"
|
||||
)
|
||||
@@ -49,7 +50,7 @@ var idGen callIDGenerator
|
||||
|
||||
// MethodLogger is the sub-logger for each method.
|
||||
type MethodLogger interface {
|
||||
Log(LogEntryConfig)
|
||||
Log(context.Context, LogEntryConfig)
|
||||
}
|
||||
|
||||
// TruncatingMethodLogger is a method logger that truncates headers and messages
|
||||
@@ -79,7 +80,7 @@ func NewTruncatingMethodLogger(h, m uint64) *TruncatingMethodLogger {
|
||||
// Build is an internal only method for building the proto message out of the
|
||||
// input event. It's made public to enable other library to reuse as much logic
|
||||
// in TruncatingMethodLogger as possible.
|
||||
func (ml *TruncatingMethodLogger) Build(c LogEntryConfig) *pb.GrpcLogEntry {
|
||||
func (ml *TruncatingMethodLogger) Build(c LogEntryConfig) *binlogpb.GrpcLogEntry {
|
||||
m := c.toProto()
|
||||
timestamp, _ := ptypes.TimestampProto(time.Now())
|
||||
m.Timestamp = timestamp
|
||||
@@ -87,22 +88,22 @@ func (ml *TruncatingMethodLogger) Build(c LogEntryConfig) *pb.GrpcLogEntry {
|
||||
m.SequenceIdWithinCall = ml.idWithinCallGen.next()
|
||||
|
||||
switch pay := m.Payload.(type) {
|
||||
case *pb.GrpcLogEntry_ClientHeader:
|
||||
case *binlogpb.GrpcLogEntry_ClientHeader:
|
||||
m.PayloadTruncated = ml.truncateMetadata(pay.ClientHeader.GetMetadata())
|
||||
case *pb.GrpcLogEntry_ServerHeader:
|
||||
case *binlogpb.GrpcLogEntry_ServerHeader:
|
||||
m.PayloadTruncated = ml.truncateMetadata(pay.ServerHeader.GetMetadata())
|
||||
case *pb.GrpcLogEntry_Message:
|
||||
case *binlogpb.GrpcLogEntry_Message:
|
||||
m.PayloadTruncated = ml.truncateMessage(pay.Message)
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
// Log creates a proto binary log entry, and logs it to the sink.
|
||||
func (ml *TruncatingMethodLogger) Log(c LogEntryConfig) {
|
||||
func (ml *TruncatingMethodLogger) Log(ctx context.Context, c LogEntryConfig) {
|
||||
ml.sink.Write(ml.Build(c))
|
||||
}
|
||||
|
||||
func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *pb.Metadata) (truncated bool) {
|
||||
func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *binlogpb.Metadata) (truncated bool) {
|
||||
if ml.headerMaxLen == maxUInt {
|
||||
return false
|
||||
}
|
||||
@@ -121,7 +122,7 @@ func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *pb.Metadata) (truncated
|
||||
// but not counted towards the size limit.
|
||||
continue
|
||||
}
|
||||
currentEntryLen := uint64(len(entry.Value))
|
||||
currentEntryLen := uint64(len(entry.GetKey())) + uint64(len(entry.GetValue()))
|
||||
if currentEntryLen > bytesLimit {
|
||||
break
|
||||
}
|
||||
@@ -132,7 +133,7 @@ func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *pb.Metadata) (truncated
|
||||
return truncated
|
||||
}
|
||||
|
||||
func (ml *TruncatingMethodLogger) truncateMessage(msgPb *pb.Message) (truncated bool) {
|
||||
func (ml *TruncatingMethodLogger) truncateMessage(msgPb *binlogpb.Message) (truncated bool) {
|
||||
if ml.messageMaxLen == maxUInt {
|
||||
return false
|
||||
}
|
||||
@@ -145,7 +146,7 @@ func (ml *TruncatingMethodLogger) truncateMessage(msgPb *pb.Message) (truncated
|
||||
|
||||
// LogEntryConfig represents the configuration for binary log entry.
|
||||
type LogEntryConfig interface {
|
||||
toProto() *pb.GrpcLogEntry
|
||||
toProto() *binlogpb.GrpcLogEntry
|
||||
}
|
||||
|
||||
// ClientHeader configs the binary log entry to be a ClientHeader entry.
|
||||
@@ -159,10 +160,10 @@ type ClientHeader struct {
|
||||
PeerAddr net.Addr
|
||||
}
|
||||
|
||||
func (c *ClientHeader) toProto() *pb.GrpcLogEntry {
|
||||
func (c *ClientHeader) toProto() *binlogpb.GrpcLogEntry {
|
||||
// This function doesn't need to set all the fields (e.g. seq ID). The Log
|
||||
// function will set the fields when necessary.
|
||||
clientHeader := &pb.ClientHeader{
|
||||
clientHeader := &binlogpb.ClientHeader{
|
||||
Metadata: mdToMetadataProto(c.Header),
|
||||
MethodName: c.MethodName,
|
||||
Authority: c.Authority,
|
||||
@@ -170,16 +171,16 @@ func (c *ClientHeader) toProto() *pb.GrpcLogEntry {
|
||||
if c.Timeout > 0 {
|
||||
clientHeader.Timeout = ptypes.DurationProto(c.Timeout)
|
||||
}
|
||||
ret := &pb.GrpcLogEntry{
|
||||
Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER,
|
||||
Payload: &pb.GrpcLogEntry_ClientHeader{
|
||||
ret := &binlogpb.GrpcLogEntry{
|
||||
Type: binlogpb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER,
|
||||
Payload: &binlogpb.GrpcLogEntry_ClientHeader{
|
||||
ClientHeader: clientHeader,
|
||||
},
|
||||
}
|
||||
if c.OnClientSide {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT
|
||||
} else {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER
|
||||
}
|
||||
if c.PeerAddr != nil {
|
||||
ret.Peer = addrToProto(c.PeerAddr)
|
||||
@@ -195,19 +196,19 @@ type ServerHeader struct {
|
||||
PeerAddr net.Addr
|
||||
}
|
||||
|
||||
func (c *ServerHeader) toProto() *pb.GrpcLogEntry {
|
||||
ret := &pb.GrpcLogEntry{
|
||||
Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER,
|
||||
Payload: &pb.GrpcLogEntry_ServerHeader{
|
||||
ServerHeader: &pb.ServerHeader{
|
||||
func (c *ServerHeader) toProto() *binlogpb.GrpcLogEntry {
|
||||
ret := &binlogpb.GrpcLogEntry{
|
||||
Type: binlogpb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER,
|
||||
Payload: &binlogpb.GrpcLogEntry_ServerHeader{
|
||||
ServerHeader: &binlogpb.ServerHeader{
|
||||
Metadata: mdToMetadataProto(c.Header),
|
||||
},
|
||||
},
|
||||
}
|
||||
if c.OnClientSide {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT
|
||||
} else {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER
|
||||
}
|
||||
if c.PeerAddr != nil {
|
||||
ret.Peer = addrToProto(c.PeerAddr)
|
||||
@@ -223,7 +224,7 @@ type ClientMessage struct {
|
||||
Message interface{}
|
||||
}
|
||||
|
||||
func (c *ClientMessage) toProto() *pb.GrpcLogEntry {
|
||||
func (c *ClientMessage) toProto() *binlogpb.GrpcLogEntry {
|
||||
var (
|
||||
data []byte
|
||||
err error
|
||||
@@ -238,19 +239,19 @@ func (c *ClientMessage) toProto() *pb.GrpcLogEntry {
|
||||
} else {
|
||||
grpclogLogger.Infof("binarylogging: message to log is neither proto.message nor []byte")
|
||||
}
|
||||
ret := &pb.GrpcLogEntry{
|
||||
Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE,
|
||||
Payload: &pb.GrpcLogEntry_Message{
|
||||
Message: &pb.Message{
|
||||
ret := &binlogpb.GrpcLogEntry{
|
||||
Type: binlogpb.GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE,
|
||||
Payload: &binlogpb.GrpcLogEntry_Message{
|
||||
Message: &binlogpb.Message{
|
||||
Length: uint32(len(data)),
|
||||
Data: data,
|
||||
},
|
||||
},
|
||||
}
|
||||
if c.OnClientSide {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT
|
||||
} else {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER
|
||||
}
|
||||
return ret
|
||||
}
|
||||
@@ -263,7 +264,7 @@ type ServerMessage struct {
|
||||
Message interface{}
|
||||
}
|
||||
|
||||
func (c *ServerMessage) toProto() *pb.GrpcLogEntry {
|
||||
func (c *ServerMessage) toProto() *binlogpb.GrpcLogEntry {
|
||||
var (
|
||||
data []byte
|
||||
err error
|
||||
@@ -278,19 +279,19 @@ func (c *ServerMessage) toProto() *pb.GrpcLogEntry {
|
||||
} else {
|
||||
grpclogLogger.Infof("binarylogging: message to log is neither proto.message nor []byte")
|
||||
}
|
||||
ret := &pb.GrpcLogEntry{
|
||||
Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE,
|
||||
Payload: &pb.GrpcLogEntry_Message{
|
||||
Message: &pb.Message{
|
||||
ret := &binlogpb.GrpcLogEntry{
|
||||
Type: binlogpb.GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE,
|
||||
Payload: &binlogpb.GrpcLogEntry_Message{
|
||||
Message: &binlogpb.Message{
|
||||
Length: uint32(len(data)),
|
||||
Data: data,
|
||||
},
|
||||
},
|
||||
}
|
||||
if c.OnClientSide {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT
|
||||
} else {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER
|
||||
}
|
||||
return ret
|
||||
}
|
||||
@@ -300,15 +301,15 @@ type ClientHalfClose struct {
|
||||
OnClientSide bool
|
||||
}
|
||||
|
||||
func (c *ClientHalfClose) toProto() *pb.GrpcLogEntry {
|
||||
ret := &pb.GrpcLogEntry{
|
||||
Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE,
|
||||
func (c *ClientHalfClose) toProto() *binlogpb.GrpcLogEntry {
|
||||
ret := &binlogpb.GrpcLogEntry{
|
||||
Type: binlogpb.GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE,
|
||||
Payload: nil, // No payload here.
|
||||
}
|
||||
if c.OnClientSide {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT
|
||||
} else {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER
|
||||
}
|
||||
return ret
|
||||
}
|
||||
@@ -324,7 +325,7 @@ type ServerTrailer struct {
|
||||
PeerAddr net.Addr
|
||||
}
|
||||
|
||||
func (c *ServerTrailer) toProto() *pb.GrpcLogEntry {
|
||||
func (c *ServerTrailer) toProto() *binlogpb.GrpcLogEntry {
|
||||
st, ok := status.FromError(c.Err)
|
||||
if !ok {
|
||||
grpclogLogger.Info("binarylogging: error in trailer is not a status error")
|
||||
@@ -340,10 +341,10 @@ func (c *ServerTrailer) toProto() *pb.GrpcLogEntry {
|
||||
grpclogLogger.Infof("binarylogging: failed to marshal status proto: %v", err)
|
||||
}
|
||||
}
|
||||
ret := &pb.GrpcLogEntry{
|
||||
Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER,
|
||||
Payload: &pb.GrpcLogEntry_Trailer{
|
||||
Trailer: &pb.Trailer{
|
||||
ret := &binlogpb.GrpcLogEntry{
|
||||
Type: binlogpb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER,
|
||||
Payload: &binlogpb.GrpcLogEntry_Trailer{
|
||||
Trailer: &binlogpb.Trailer{
|
||||
Metadata: mdToMetadataProto(c.Trailer),
|
||||
StatusCode: uint32(st.Code()),
|
||||
StatusMessage: st.Message(),
|
||||
@@ -352,9 +353,9 @@ func (c *ServerTrailer) toProto() *pb.GrpcLogEntry {
|
||||
},
|
||||
}
|
||||
if c.OnClientSide {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT
|
||||
} else {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER
|
||||
}
|
||||
if c.PeerAddr != nil {
|
||||
ret.Peer = addrToProto(c.PeerAddr)
|
||||
@@ -367,15 +368,15 @@ type Cancel struct {
|
||||
OnClientSide bool
|
||||
}
|
||||
|
||||
func (c *Cancel) toProto() *pb.GrpcLogEntry {
|
||||
ret := &pb.GrpcLogEntry{
|
||||
Type: pb.GrpcLogEntry_EVENT_TYPE_CANCEL,
|
||||
func (c *Cancel) toProto() *binlogpb.GrpcLogEntry {
|
||||
ret := &binlogpb.GrpcLogEntry{
|
||||
Type: binlogpb.GrpcLogEntry_EVENT_TYPE_CANCEL,
|
||||
Payload: nil,
|
||||
}
|
||||
if c.OnClientSide {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT
|
||||
} else {
|
||||
ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER
|
||||
ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER
|
||||
}
|
||||
return ret
|
||||
}
|
||||
@@ -392,15 +393,15 @@ func metadataKeyOmit(key string) bool {
|
||||
return strings.HasPrefix(key, "grpc-")
|
||||
}
|
||||
|
||||
func mdToMetadataProto(md metadata.MD) *pb.Metadata {
|
||||
ret := &pb.Metadata{}
|
||||
func mdToMetadataProto(md metadata.MD) *binlogpb.Metadata {
|
||||
ret := &binlogpb.Metadata{}
|
||||
for k, vv := range md {
|
||||
if metadataKeyOmit(k) {
|
||||
continue
|
||||
}
|
||||
for _, v := range vv {
|
||||
ret.Entry = append(ret.Entry,
|
||||
&pb.MetadataEntry{
|
||||
&binlogpb.MetadataEntry{
|
||||
Key: k,
|
||||
Value: []byte(v),
|
||||
},
|
||||
@@ -410,26 +411,26 @@ func mdToMetadataProto(md metadata.MD) *pb.Metadata {
|
||||
return ret
|
||||
}
|
||||
|
||||
func addrToProto(addr net.Addr) *pb.Address {
|
||||
ret := &pb.Address{}
|
||||
func addrToProto(addr net.Addr) *binlogpb.Address {
|
||||
ret := &binlogpb.Address{}
|
||||
switch a := addr.(type) {
|
||||
case *net.TCPAddr:
|
||||
if a.IP.To4() != nil {
|
||||
ret.Type = pb.Address_TYPE_IPV4
|
||||
ret.Type = binlogpb.Address_TYPE_IPV4
|
||||
} else if a.IP.To16() != nil {
|
||||
ret.Type = pb.Address_TYPE_IPV6
|
||||
ret.Type = binlogpb.Address_TYPE_IPV6
|
||||
} else {
|
||||
ret.Type = pb.Address_TYPE_UNKNOWN
|
||||
ret.Type = binlogpb.Address_TYPE_UNKNOWN
|
||||
// Do not set address and port fields.
|
||||
break
|
||||
}
|
||||
ret.Address = a.IP.String()
|
||||
ret.IpPort = uint32(a.Port)
|
||||
case *net.UnixAddr:
|
||||
ret.Type = pb.Address_TYPE_UNIX
|
||||
ret.Type = binlogpb.Address_TYPE_UNIX
|
||||
ret.Address = a.String()
|
||||
default:
|
||||
ret.Type = pb.Address_TYPE_UNKNOWN
|
||||
ret.Type = binlogpb.Address_TYPE_UNKNOWN
|
||||
}
|
||||
return ret
|
||||
}
|
||||
|
||||
12
vendor/google.golang.org/grpc/internal/binarylog/sink.go
generated
vendored
@@ -26,7 +26,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/golang/protobuf/proto"
|
||||
pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1"
|
||||
binlogpb "google.golang.org/grpc/binarylog/grpc_binarylog_v1"
|
||||
)
|
||||
|
||||
var (
|
||||
@@ -42,15 +42,15 @@ type Sink interface {
|
||||
// Write will be called to write the log entry into the sink.
|
||||
//
|
||||
// It should be thread-safe so it can be called in parallel.
|
||||
Write(*pb.GrpcLogEntry) error
|
||||
Write(*binlogpb.GrpcLogEntry) error
|
||||
// Close will be called when the Sink is replaced by a new Sink.
|
||||
Close() error
|
||||
}
|
||||
|
||||
type noopSink struct{}
|
||||
|
||||
func (ns *noopSink) Write(*pb.GrpcLogEntry) error { return nil }
|
||||
func (ns *noopSink) Close() error { return nil }
|
||||
func (ns *noopSink) Write(*binlogpb.GrpcLogEntry) error { return nil }
|
||||
func (ns *noopSink) Close() error { return nil }
|
||||
|
||||
// newWriterSink creates a binary log sink with the given writer.
|
||||
//
|
||||
@@ -66,7 +66,7 @@ type writerSink struct {
|
||||
out io.Writer
|
||||
}
|
||||
|
||||
func (ws *writerSink) Write(e *pb.GrpcLogEntry) error {
|
||||
func (ws *writerSink) Write(e *binlogpb.GrpcLogEntry) error {
|
||||
b, err := proto.Marshal(e)
|
||||
if err != nil {
|
||||
grpclogLogger.Errorf("binary logging: failed to marshal proto message: %v", err)
|
||||
@@ -96,7 +96,7 @@ type bufferedSink struct {
|
||||
done chan struct{}
|
||||
}
|
||||
|
||||
func (fs *bufferedSink) Write(e *pb.GrpcLogEntry) error {
|
||||
func (fs *bufferedSink) Write(e *binlogpb.GrpcLogEntry) error {
|
||||
fs.mu.Lock()
|
||||
defer fs.mu.Unlock()
|
||||
if !fs.flusherStarted {
|
||||
|
||||
39
vendor/google.golang.org/grpc/internal/envconfig/envconfig.go
generated
vendored
@@ -21,19 +21,42 @@ package envconfig

import (
	"os"
	"strconv"
	"strings"
)

const (
	prefix                  = "GRPC_GO_"
	txtErrIgnoreStr         = prefix + "IGNORE_TXT_ERRORS"
	advertiseCompressorsStr = prefix + "ADVERTISE_COMPRESSORS"
)

var (
	// TXTErrIgnore is set if TXT errors should be ignored ("GRPC_GO_IGNORE_TXT_ERRORS" is not "false").
	TXTErrIgnore = !strings.EqualFold(os.Getenv(txtErrIgnoreStr), "false")
	TXTErrIgnore = boolFromEnv("GRPC_GO_IGNORE_TXT_ERRORS", true)
	// AdvertiseCompressors is set if registered compressor should be advertised
	// ("GRPC_GO_ADVERTISE_COMPRESSORS" is not "false").
	AdvertiseCompressors = !strings.EqualFold(os.Getenv(advertiseCompressorsStr), "false")
	AdvertiseCompressors = boolFromEnv("GRPC_GO_ADVERTISE_COMPRESSORS", true)
	// RingHashCap indicates the maximum ring size which defaults to 4096
	// entries but may be overridden by setting the environment variable
	// "GRPC_RING_HASH_CAP". This does not override the default bounds
	// checking which NACKs configs specifying ring sizes > 8*1024*1024 (~8M).
	RingHashCap = uint64FromEnv("GRPC_RING_HASH_CAP", 4096, 1, 8*1024*1024)
)

func boolFromEnv(envVar string, def bool) bool {
	if def {
		// The default is true; return true unless the variable is "false".
		return !strings.EqualFold(os.Getenv(envVar), "false")
	}
	// The default is false; return false unless the variable is "true".
	return strings.EqualFold(os.Getenv(envVar), "true")
}

func uint64FromEnv(envVar string, def, min, max uint64) uint64 {
	v, err := strconv.ParseUint(os.Getenv(envVar), 10, 64)
	if err != nil {
		return def
	}
	if v < min {
		return min
	}
	if v > max {
		return max
	}
	return v
}

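A small standalone sketch (a copy of the semantics above, not an import of the internal package) showing how the two helpers behave: a default-true flag only turns off on a literal "false", and numeric overrides are clamped into [min, max]:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// boolFromEnv and uint64FromEnv mirror the vendored helpers for illustration.
func boolFromEnv(envVar string, def bool) bool {
	if def {
		return !strings.EqualFold(os.Getenv(envVar), "false")
	}
	return strings.EqualFold(os.Getenv(envVar), "true")
}

func uint64FromEnv(envVar string, def, min, max uint64) uint64 {
	v, err := strconv.ParseUint(os.Getenv(envVar), 10, 64)
	if err != nil {
		return def
	}
	if v < min {
		return min
	}
	if v > max {
		return max
	}
	return v
}

func main() {
	os.Setenv("GRPC_GO_IGNORE_TXT_ERRORS", "FALSE")
	fmt.Println(boolFromEnv("GRPC_GO_IGNORE_TXT_ERRORS", true)) // false (match is case-insensitive)

	os.Setenv("GRPC_RING_HASH_CAP", "99999999")
	fmt.Println(uint64FromEnv("GRPC_RING_HASH_CAP", 4096, 1, 8*1024*1024)) // 8388608 (clamped to max)
}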
31
vendor/google.golang.org/grpc/internal/envconfig/xds.go
generated
vendored
@@ -20,7 +20,6 @@ package envconfig
|
||||
|
||||
import (
|
||||
"os"
|
||||
"strings"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -36,16 +35,6 @@ const (
|
||||
//
|
||||
// When both bootstrap FileName and FileContent are set, FileName is used.
|
||||
XDSBootstrapFileContentEnv = "GRPC_XDS_BOOTSTRAP_CONFIG"
|
||||
|
||||
ringHashSupportEnv = "GRPC_XDS_EXPERIMENTAL_ENABLE_RING_HASH"
|
||||
clientSideSecuritySupportEnv = "GRPC_XDS_EXPERIMENTAL_SECURITY_SUPPORT"
|
||||
aggregateAndDNSSupportEnv = "GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER"
|
||||
rbacSupportEnv = "GRPC_XDS_EXPERIMENTAL_RBAC"
|
||||
outlierDetectionSupportEnv = "GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION"
|
||||
federationEnv = "GRPC_EXPERIMENTAL_XDS_FEDERATION"
|
||||
rlsInXDSEnv = "GRPC_EXPERIMENTAL_XDS_RLS_LB"
|
||||
|
||||
c2pResolverTestOnlyTrafficDirectorURIEnv = "GRPC_TEST_ONLY_GOOGLE_C2P_RESOLVER_TRAFFIC_DIRECTOR_URI"
|
||||
)
|
||||
|
||||
var (
|
||||
@@ -64,38 +53,40 @@ var (
|
||||
// XDSRingHash indicates whether ring hash support is enabled, which can be
|
||||
// disabled by setting the environment variable
|
||||
// "GRPC_XDS_EXPERIMENTAL_ENABLE_RING_HASH" to "false".
|
||||
XDSRingHash = !strings.EqualFold(os.Getenv(ringHashSupportEnv), "false")
|
||||
XDSRingHash = boolFromEnv("GRPC_XDS_EXPERIMENTAL_ENABLE_RING_HASH", true)
|
||||
// XDSClientSideSecurity is used to control processing of security
|
||||
// configuration on the client-side.
|
||||
//
|
||||
// Note that there is no env var protection for the server-side because we
|
||||
// have a brand new API on the server-side and users explicitly need to use
|
||||
// the new API to get security integration on the server.
|
||||
XDSClientSideSecurity = !strings.EqualFold(os.Getenv(clientSideSecuritySupportEnv), "false")
|
||||
XDSClientSideSecurity = boolFromEnv("GRPC_XDS_EXPERIMENTAL_SECURITY_SUPPORT", true)
|
||||
// XDSAggregateAndDNS indicates whether processing of aggregated cluster
|
||||
// and DNS cluster is enabled, which can be enabled by setting the
|
||||
// environment variable
|
||||
// "GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER" to
|
||||
// "true".
|
||||
XDSAggregateAndDNS = !strings.EqualFold(os.Getenv(aggregateAndDNSSupportEnv), "false")
|
||||
XDSAggregateAndDNS = boolFromEnv("GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER", true)
|
||||
|
||||
// XDSRBAC indicates whether xDS configured RBAC HTTP Filter is enabled,
|
||||
// which can be disabled by setting the environment variable
|
||||
// "GRPC_XDS_EXPERIMENTAL_RBAC" to "false".
|
||||
XDSRBAC = !strings.EqualFold(os.Getenv(rbacSupportEnv), "false")
|
||||
XDSRBAC = boolFromEnv("GRPC_XDS_EXPERIMENTAL_RBAC", true)
|
||||
// XDSOutlierDetection indicates whether outlier detection support is
|
||||
// enabled, which can be disabled by setting the environment variable
|
||||
// "GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION" to "false".
|
||||
XDSOutlierDetection = !strings.EqualFold(os.Getenv(outlierDetectionSupportEnv), "false")
|
||||
// XDSFederation indicates whether federation support is enabled.
|
||||
XDSFederation = strings.EqualFold(os.Getenv(federationEnv), "true")
|
||||
XDSOutlierDetection = boolFromEnv("GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION", true)
|
||||
// XDSFederation indicates whether federation support is enabled, which can
|
||||
// be enabled by setting the environment variable
|
||||
// "GRPC_EXPERIMENTAL_XDS_FEDERATION" to "true".
|
||||
XDSFederation = boolFromEnv("GRPC_EXPERIMENTAL_XDS_FEDERATION", false)
|
||||
|
||||
// XDSRLS indicates whether processing of Cluster Specifier plugins and
|
||||
// support for the RLS CLuster Specifier is enabled, which can be enabled by
|
||||
// setting the environment variable "GRPC_EXPERIMENTAL_XDS_RLS_LB" to
|
||||
// "true".
|
||||
XDSRLS = strings.EqualFold(os.Getenv(rlsInXDSEnv), "true")
|
||||
XDSRLS = boolFromEnv("GRPC_EXPERIMENTAL_XDS_RLS_LB", false)
|
||||
|
||||
// C2PResolverTestOnlyTrafficDirectorURI is the TD URI for testing.
|
||||
C2PResolverTestOnlyTrafficDirectorURI = os.Getenv(c2pResolverTestOnlyTrafficDirectorURIEnv)
|
||||
C2PResolverTestOnlyTrafficDirectorURI = os.Getenv("GRPC_TEST_ONLY_GOOGLE_C2P_RESOLVER_TRAFFIC_DIRECTOR_URI")
|
||||
)
|
||||
|
||||
12
vendor/google.golang.org/grpc/internal/grpclog/prefixLogger.go
generated
vendored
@@ -63,6 +63,9 @@ func (pl *PrefixLogger) Errorf(format string, args ...interface{}) {
|
||||
|
||||
// Debugf does info logging at verbose level 2.
|
||||
func (pl *PrefixLogger) Debugf(format string, args ...interface{}) {
|
||||
// TODO(6044): Refactor interfaces LoggerV2 and DepthLogger, and maybe
|
||||
// rewrite PrefixLogger a little to ensure that we don't use the global
|
||||
// `Logger` here, and instead use the `logger` field.
|
||||
if !Logger.V(2) {
|
||||
return
|
||||
}
|
||||
@@ -73,6 +76,15 @@ func (pl *PrefixLogger) Debugf(format string, args ...interface{}) {
|
||||
return
|
||||
}
|
||||
InfoDepth(1, fmt.Sprintf(format, args...))
|
||||
|
||||
}
|
||||
|
||||
// V reports whether verbosity level l is at least the requested verbose level.
|
||||
func (pl *PrefixLogger) V(l int) bool {
|
||||
// TODO(6044): Refactor interfaces LoggerV2 and DepthLogger, and maybe
|
||||
// rewrite PrefixLogger a little to ensure that we don't use the global
|
||||
// `Logger` here, and instead use the `logger` field.
|
||||
return Logger.V(l)
|
||||
}
|
||||
|
||||
// NewPrefixLogger creates a prefix logger with the given prefix.
|
||||
|
||||
13
vendor/google.golang.org/grpc/internal/internal.go
generated
vendored
@@ -58,6 +58,9 @@ var (
|
||||
// gRPC server. An xDS-enabled server needs to know what type of credentials
|
||||
// is configured on the underlying gRPC server. This is set by server.go.
|
||||
GetServerCredentials interface{} // func (*grpc.Server) credentials.TransportCredentials
|
||||
// CanonicalString returns the canonical string of the code defined here:
|
||||
// https://github.com/grpc/grpc/blob/master/doc/statuscodes.md.
|
||||
CanonicalString interface{} // func (codes.Code) string
|
||||
// DrainServerTransports initiates a graceful close of existing connections
|
||||
// on a gRPC server accepted on the provided listener address. An
|
||||
// xDS-enabled server invokes this method on a grpc.Server when a particular
|
||||
@@ -74,9 +77,16 @@ var (
|
||||
// globally for newly created client channels. The priority will be: 1.
|
||||
// user-provided; 2. this method; 3. default values.
|
||||
AddGlobalDialOptions interface{} // func(opt ...DialOption)
|
||||
// DisableGlobalDialOptions returns a DialOption that prevents the
|
||||
// ClientConn from applying the global DialOptions (set via
|
||||
// AddGlobalDialOptions).
|
||||
DisableGlobalDialOptions interface{} // func() grpc.DialOption
|
||||
// ClearGlobalDialOptions clears the array of extra DialOption. This
|
||||
// method is useful in testing and benchmarking.
|
||||
ClearGlobalDialOptions func()
|
||||
// JoinDialOptions combines the dial options passed as arguments into a
|
||||
// single dial option.
|
||||
JoinDialOptions interface{} // func(...grpc.DialOption) grpc.DialOption
|
||||
// JoinServerOptions combines the server options passed as arguments into a
|
||||
// single server option.
|
||||
JoinServerOptions interface{} // func(...grpc.ServerOption) grpc.ServerOption
|
||||
@@ -127,6 +137,9 @@ var (
|
||||
//
|
||||
// TODO: Remove this function once the RBAC env var is removed.
|
||||
UnregisterRBACHTTPFilterForTesting func()
|
||||
|
||||
// ORCAAllowAnyMinReportingInterval is for examples/orca use ONLY.
|
||||
ORCAAllowAnyMinReportingInterval interface{} // func(so *orca.ServiceOptions)
|
||||
)
|
||||
|
||||
// HealthChecker defines the signature of the client-side LB channel health checking function.
|
||||
|
||||
62
vendor/google.golang.org/grpc/internal/metadata/metadata.go
generated
vendored
@@ -76,33 +76,11 @@ func Set(addr resolver.Address, md metadata.MD) resolver.Address {
|
||||
return addr
|
||||
}
|
||||
|
||||
// Validate returns an error if the input md contains invalid keys or values.
|
||||
//
|
||||
// If the header is not a pseudo-header, the following items are checked:
|
||||
// - header names must contain one or more characters from this set [0-9 a-z _ - .].
|
||||
// - if the header-name ends with a "-bin" suffix, no validation of the header value is performed.
|
||||
// - otherwise, the header value must contain one or more characters from the set [%x20-%x7E].
|
||||
// Validate validates every pair in md with ValidatePair.
|
||||
func Validate(md metadata.MD) error {
|
||||
for k, vals := range md {
|
||||
// pseudo-header will be ignored
|
||||
if k[0] == ':' {
|
||||
continue
|
||||
}
|
||||
// check key, for i that saving a conversion if not using for range
|
||||
for i := 0; i < len(k); i++ {
|
||||
r := k[i]
|
||||
if !(r >= 'a' && r <= 'z') && !(r >= '0' && r <= '9') && r != '.' && r != '-' && r != '_' {
|
||||
return fmt.Errorf("header key %q contains illegal characters not in [0-9a-z-_.]", k)
|
||||
}
|
||||
}
|
||||
if strings.HasSuffix(k, "-bin") {
|
||||
continue
|
||||
}
|
||||
// check value
|
||||
for _, val := range vals {
|
||||
if hasNotPrintable(val) {
|
||||
return fmt.Errorf("header key %q contains value with non-printable ASCII characters", k)
|
||||
}
|
||||
if err := ValidatePair(k, vals...); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
@@ -118,3 +96,37 @@ func hasNotPrintable(msg string) bool {
	}
	return false
}

// ValidatePair validate a key-value pair with the following rules (the pseudo-header will be skipped) :
//
// - key must contain one or more characters.
// - the characters in the key must be contained in [0-9 a-z _ - .].
// - if the key ends with a "-bin" suffix, no validation of the corresponding value is performed.
// - the characters in the every value must be printable (in [%x20-%x7E]).
func ValidatePair(key string, vals ...string) error {
	// key should not be empty
	if key == "" {
		return fmt.Errorf("there is an empty key in the header")
	}
	// pseudo-header will be ignored
	if key[0] == ':' {
		return nil
	}
	// check key, for i that saving a conversion if not using for range
	for i := 0; i < len(key); i++ {
		r := key[i]
		if !(r >= 'a' && r <= 'z') && !(r >= '0' && r <= '9') && r != '.' && r != '-' && r != '_' {
			return fmt.Errorf("header key %q contains illegal characters not in [0-9a-z-_.]", key)
		}
	}
	if strings.HasSuffix(key, "-bin") {
		return nil
	}
	// check value
	for _, val := range vals {
		if hasNotPrintable(val) {
			return fmt.Errorf("header key %q contains value with non-printable ASCII characters", key)
		}
	}
	return nil
}

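Since this package lives under internal/ and cannot be imported directly, here is a standalone sketch of the same validation rules (an illustrative copy of ValidatePair, not the helper itself) applied to a few headers:

package main

import (
	"fmt"
	"strings"
)

// validatePair mirrors the rules documented above; it is an illustrative copy,
// not the internal grpc helper.
func validatePair(key string, vals ...string) error {
	if key == "" {
		return fmt.Errorf("there is an empty key in the header")
	}
	if key[0] == ':' { // pseudo-headers are skipped
		return nil
	}
	for i := 0; i < len(key); i++ {
		r := key[i]
		if !(r >= 'a' && r <= 'z') && !(r >= '0' && r <= '9') && r != '.' && r != '-' && r != '_' {
			return fmt.Errorf("header key %q contains illegal characters not in [0-9a-z-_.]", key)
		}
	}
	if strings.HasSuffix(key, "-bin") { // binary values are not checked
		return nil
	}
	for _, val := range vals {
		for _, c := range val {
			if c < 0x20 || c > 0x7e {
				return fmt.Errorf("header key %q contains value with non-printable ASCII characters", key)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validatePair("user-agent", "grpc-go"))         // <nil>
	fmt.Println(validatePair("Bad_Key", "x"))                  // error: uppercase letters are not allowed
	fmt.Println(validatePair("trace-bin", string([]byte{0x00}))) // <nil>: "-bin" values are skipped
}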
6
vendor/google.golang.org/grpc/internal/resolver/dns/dns_resolver.go
generated
vendored
@@ -116,7 +116,7 @@ type dnsBuilder struct{}

// Build creates and starts a DNS resolver that watches the name resolution of the target.
func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) {
	host, port, err := parseTarget(target.Endpoint, defaultPort)
	host, port, err := parseTarget(target.Endpoint(), defaultPort)
	if err != nil {
		return nil, err
	}
@@ -140,10 +140,10 @@ func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts
		disableServiceConfig: opts.DisableServiceConfig,
	}

	if target.Authority == "" {
	if target.URL.Host == "" {
		d.resolver = defaultResolver
	} else {
		d.resolver, err = customAuthorityResolver(target.Authority)
		d.resolver, err = customAuthorityResolver(target.URL.Host)
		if err != nil {
			return nil, err
		}

11
vendor/google.golang.org/grpc/internal/resolver/passthrough/passthrough.go
generated
vendored
@@ -20,13 +20,20 @@
|
||||
// name without scheme back to gRPC as resolved address.
|
||||
package passthrough
|
||||
|
||||
import "google.golang.org/grpc/resolver"
|
||||
import (
|
||||
"errors"
|
||||
|
||||
"google.golang.org/grpc/resolver"
|
||||
)
|
||||
|
||||
const scheme = "passthrough"
|
||||
|
||||
type passthroughBuilder struct{}
|
||||
|
||||
func (*passthroughBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) {
|
||||
if target.Endpoint() == "" && opts.Dialer == nil {
|
||||
return nil, errors.New("passthrough: received empty target in Build()")
|
||||
}
|
||||
r := &passthroughResolver{
|
||||
target: target,
|
||||
cc: cc,
|
||||
@@ -45,7 +52,7 @@ type passthroughResolver struct {
|
||||
}
|
||||
|
||||
func (r *passthroughResolver) start() {
|
||||
r.cc.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: r.target.Endpoint}}})
|
||||
r.cc.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: r.target.Endpoint()}}})
|
||||
}
|
||||
|
||||
func (*passthroughResolver) ResolveNow(o resolver.ResolveNowOptions) {}
|
||||
|
||||
4
vendor/google.golang.org/grpc/internal/resolver/unix/unix.go
generated
vendored
@@ -34,8 +34,8 @@ type builder struct {
}

func (b *builder) Build(target resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (resolver.Resolver, error) {
	if target.Authority != "" {
		return nil, fmt.Errorf("invalid (non-empty) authority: %v", target.Authority)
	if target.URL.Host != "" {
		return nil, fmt.Errorf("invalid (non-empty) authority: %v", target.URL.Host)
	}

	// gRPC was parsing the dial target manually before PR #4817, and we

116
vendor/google.golang.org/grpc/internal/transport/controlbuf.go
generated
vendored
@@ -22,6 +22,7 @@ import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"sync"
|
||||
@@ -191,7 +192,7 @@ type goAway struct {
|
||||
code http2.ErrCode
|
||||
debugData []byte
|
||||
headsUp bool
|
||||
closeConn bool
|
||||
closeConn error // if set, loopyWriter will exit, resulting in conn closure
|
||||
}
|
||||
|
||||
func (*goAway) isTransportResponseFrame() bool { return false }
|
||||
@@ -209,6 +210,14 @@ type outFlowControlSizeRequest struct {
|
||||
|
||||
func (*outFlowControlSizeRequest) isTransportResponseFrame() bool { return false }
|
||||
|
||||
// closeConnection is an instruction to tell the loopy writer to flush the
|
||||
// framer and exit, which will cause the transport's connection to be closed
|
||||
// (by the client or server). The transport itself will close after the reader
|
||||
// encounters the EOF caused by the connection closure.
|
||||
type closeConnection struct{}
|
||||
|
||||
func (closeConnection) isTransportResponseFrame() bool { return false }
|
||||
|
||||
type outStreamState int
|
||||
|
||||
const (
|
||||
@@ -408,7 +417,7 @@ func (c *controlBuffer) get(block bool) (interface{}, error) {
|
||||
select {
|
||||
case <-c.ch:
|
||||
case <-c.done:
|
||||
return nil, ErrConnClosing
|
||||
return nil, errors.New("transport closed by client")
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -478,12 +487,13 @@ type loopyWriter struct {
|
||||
hEnc *hpack.Encoder // HPACK encoder.
|
||||
bdpEst *bdpEstimator
|
||||
draining bool
|
||||
conn net.Conn
|
||||
|
||||
// Side-specific handlers
|
||||
ssGoAwayHandler func(*goAway) (bool, error)
|
||||
}
|
||||
|
||||
func newLoopyWriter(s side, fr *framer, cbuf *controlBuffer, bdpEst *bdpEstimator) *loopyWriter {
|
||||
func newLoopyWriter(s side, fr *framer, cbuf *controlBuffer, bdpEst *bdpEstimator, conn net.Conn) *loopyWriter {
|
||||
var buf bytes.Buffer
|
||||
l := &loopyWriter{
|
||||
side: s,
|
||||
@@ -496,6 +506,7 @@ func newLoopyWriter(s side, fr *framer, cbuf *controlBuffer, bdpEst *bdpEstimato
|
||||
hBuf: &buf,
|
||||
hEnc: hpack.NewEncoder(&buf),
|
||||
bdpEst: bdpEst,
|
||||
conn: conn,
|
||||
}
|
||||
return l
|
||||
}
|
||||
@@ -513,23 +524,26 @@ const minBatchSize = 1000
|
||||
// 2. Stream level flow control quota available.
|
||||
//
|
||||
// In each iteration of run loop, other than processing the incoming control
|
||||
// frame, loopy calls processData, which processes one node from the activeStreams linked-list.
|
||||
// This results in writing of HTTP2 frames into an underlying write buffer.
|
||||
// When there's no more control frames to read from controlBuf, loopy flushes the write buffer.
|
||||
// As an optimization, to increase the batch size for each flush, loopy yields the processor, once
|
||||
// if the batch size is too low to give stream goroutines a chance to fill it up.
|
||||
// frame, loopy calls processData, which processes one node from the
|
||||
// activeStreams linked-list. This results in writing of HTTP2 frames into an
|
||||
// underlying write buffer. When there's no more control frames to read from
|
||||
// controlBuf, loopy flushes the write buffer. As an optimization, to increase
|
||||
// the batch size for each flush, loopy yields the processor, once if the batch
|
||||
// size is too low to give stream goroutines a chance to fill it up.
|
||||
//
|
||||
// Upon exiting, if the error causing the exit is not an I/O error, run()
|
||||
// flushes and closes the underlying connection. Otherwise, the connection is
|
||||
// left open to allow the I/O error to be encountered by the reader instead.
|
||||
func (l *loopyWriter) run() (err error) {
|
||||
defer func() {
|
||||
if err == ErrConnClosing {
|
||||
// Don't log ErrConnClosing as error since it happens
|
||||
// 1. When the connection is closed by some other known issue.
|
||||
// 2. User closed the connection.
|
||||
// 3. A graceful close of connection.
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: loopyWriter.run returning. %v", err)
|
||||
}
|
||||
err = nil
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: loopyWriter exiting with error: %v", err)
|
||||
}
|
||||
if !isIOError(err) {
|
||||
l.framer.writer.Flush()
|
||||
l.conn.Close()
|
||||
}
|
||||
l.cbuf.finish()
|
||||
}()
|
||||
for {
|
||||
it, err := l.cbuf.get(true)
|
||||
@@ -574,7 +588,6 @@ func (l *loopyWriter) run() (err error) {
|
||||
}
|
||||
l.framer.writer.Flush()
|
||||
break hasdata
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -583,11 +596,11 @@ func (l *loopyWriter) outgoingWindowUpdateHandler(w *outgoingWindowUpdate) error
|
||||
return l.framer.fr.WriteWindowUpdate(w.streamID, w.increment)
|
||||
}
|
||||
|
||||
func (l *loopyWriter) incomingWindowUpdateHandler(w *incomingWindowUpdate) error {
|
||||
func (l *loopyWriter) incomingWindowUpdateHandler(w *incomingWindowUpdate) {
|
||||
// Otherwise update the quota.
|
||||
if w.streamID == 0 {
|
||||
l.sendQuota += w.increment
|
||||
return nil
|
||||
return
|
||||
}
|
||||
// Find the stream and update it.
|
||||
if str, ok := l.estdStreams[w.streamID]; ok {
|
||||
@@ -595,10 +608,9 @@ func (l *loopyWriter) incomingWindowUpdateHandler(w *incomingWindowUpdate) error
|
||||
if strQuota := int(l.oiws) - str.bytesOutStanding; strQuota > 0 && str.state == waitingOnStreamQuota {
|
||||
str.state = active
|
||||
l.activeStreams.enqueue(str)
|
||||
return nil
|
||||
return
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *loopyWriter) outgoingSettingsHandler(s *outgoingSettings) error {
|
||||
@@ -606,13 +618,11 @@ func (l *loopyWriter) outgoingSettingsHandler(s *outgoingSettings) error {
|
||||
}
|
||||
|
||||
func (l *loopyWriter) incomingSettingsHandler(s *incomingSettings) error {
|
||||
if err := l.applySettings(s.ss); err != nil {
|
||||
return err
|
||||
}
|
||||
l.applySettings(s.ss)
|
||||
return l.framer.fr.WriteSettingsAck()
|
||||
}
|
||||
|
||||
func (l *loopyWriter) registerStreamHandler(h *registerStream) error {
|
||||
func (l *loopyWriter) registerStreamHandler(h *registerStream) {
|
||||
str := &outStream{
|
||||
id: h.streamID,
|
||||
state: empty,
|
||||
@@ -620,7 +630,6 @@ func (l *loopyWriter) registerStreamHandler(h *registerStream) error {
|
||||
wq: h.wq,
|
||||
}
|
||||
l.estdStreams[h.streamID] = str
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *loopyWriter) headerHandler(h *headerFrame) error {
|
||||
@@ -655,19 +664,20 @@ func (l *loopyWriter) headerHandler(h *headerFrame) error {
|
||||
itl: &itemList{},
|
||||
wq: h.wq,
|
||||
}
|
||||
str.itl.enqueue(h)
|
||||
return l.originateStream(str)
|
||||
return l.originateStream(str, h)
|
||||
}
|
||||
|
||||
func (l *loopyWriter) originateStream(str *outStream) error {
|
||||
hdr := str.itl.dequeue().(*headerFrame)
|
||||
if err := hdr.initStream(str.id); err != nil {
|
||||
if err == ErrConnClosing {
|
||||
return err
|
||||
}
|
||||
// Other errors(errStreamDrain) need not close transport.
|
||||
func (l *loopyWriter) originateStream(str *outStream, hdr *headerFrame) error {
|
||||
// l.draining is set when handling GoAway. In which case, we want to avoid
|
||||
// creating new streams.
|
||||
if l.draining {
|
||||
// TODO: provide a better error with the reason we are in draining.
|
||||
hdr.onOrphaned(errStreamDrain)
|
||||
return nil
|
||||
}
|
||||
if err := hdr.initStream(str.id); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.writeHeader(str.id, hdr.endStream, hdr.hf, hdr.onWrite); err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -721,10 +731,10 @@ func (l *loopyWriter) writeHeader(streamID uint32, endStream bool, hf []hpack.He
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *loopyWriter) preprocessData(df *dataFrame) error {
|
||||
func (l *loopyWriter) preprocessData(df *dataFrame) {
|
||||
str, ok := l.estdStreams[df.streamID]
|
||||
if !ok {
|
||||
return nil
|
||||
return
|
||||
}
|
||||
// If we got data for a stream it means that
|
||||
// stream was originated and the headers were sent out.
|
||||
@@ -733,7 +743,6 @@ func (l *loopyWriter) preprocessData(df *dataFrame) error {
|
||||
str.state = active
|
||||
l.activeStreams.enqueue(str)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *loopyWriter) pingHandler(p *ping) error {
|
||||
@@ -744,9 +753,8 @@ func (l *loopyWriter) pingHandler(p *ping) error {
|
||||
|
||||
}
|
||||
|
||||
func (l *loopyWriter) outFlowControlSizeRequestHandler(o *outFlowControlSizeRequest) error {
|
||||
func (l *loopyWriter) outFlowControlSizeRequestHandler(o *outFlowControlSizeRequest) {
|
||||
o.resp <- l.sendQuota
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *loopyWriter) cleanupStreamHandler(c *cleanupStream) error {
|
||||
@@ -763,8 +771,9 @@ func (l *loopyWriter) cleanupStreamHandler(c *cleanupStream) error {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if l.side == clientSide && l.draining && len(l.estdStreams) == 0 {
|
||||
return ErrConnClosing
|
||||
if l.draining && len(l.estdStreams) == 0 {
|
||||
// Flush and close the connection; we are done with it.
|
||||
return errors.New("finished processing active streams while in draining mode")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -799,7 +808,8 @@ func (l *loopyWriter) incomingGoAwayHandler(*incomingGoAway) error {
|
||||
if l.side == clientSide {
|
||||
l.draining = true
|
||||
if len(l.estdStreams) == 0 {
|
||||
return ErrConnClosing
|
||||
// Flush and close the connection; we are done with it.
|
||||
return errors.New("received GOAWAY with no active streams")
|
||||
}
|
||||
}
|
||||
return nil
|
||||
@@ -820,7 +830,7 @@ func (l *loopyWriter) goAwayHandler(g *goAway) error {
|
||||
func (l *loopyWriter) handle(i interface{}) error {
|
||||
switch i := i.(type) {
|
||||
case *incomingWindowUpdate:
|
||||
return l.incomingWindowUpdateHandler(i)
|
||||
l.incomingWindowUpdateHandler(i)
|
||||
case *outgoingWindowUpdate:
|
||||
return l.outgoingWindowUpdateHandler(i)
|
||||
case *incomingSettings:
|
||||
@@ -830,7 +840,7 @@ func (l *loopyWriter) handle(i interface{}) error {
|
||||
case *headerFrame:
|
||||
return l.headerHandler(i)
|
||||
case *registerStream:
|
||||
return l.registerStreamHandler(i)
|
||||
l.registerStreamHandler(i)
|
||||
case *cleanupStream:
|
||||
return l.cleanupStreamHandler(i)
|
||||
case *earlyAbortStream:
|
||||
@@ -838,19 +848,24 @@ func (l *loopyWriter) handle(i interface{}) error {
|
||||
case *incomingGoAway:
|
||||
return l.incomingGoAwayHandler(i)
|
||||
case *dataFrame:
|
||||
return l.preprocessData(i)
|
||||
l.preprocessData(i)
|
||||
case *ping:
|
||||
return l.pingHandler(i)
|
||||
case *goAway:
|
||||
return l.goAwayHandler(i)
|
||||
case *outFlowControlSizeRequest:
|
||||
return l.outFlowControlSizeRequestHandler(i)
|
||||
l.outFlowControlSizeRequestHandler(i)
|
||||
case closeConnection:
|
||||
// Just return a non-I/O error and run() will flush and close the
|
||||
// connection.
|
||||
return ErrConnClosing
|
||||
default:
|
||||
return fmt.Errorf("transport: unknown control message type %T", i)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *loopyWriter) applySettings(ss []http2.Setting) error {
|
||||
func (l *loopyWriter) applySettings(ss []http2.Setting) {
|
||||
for _, s := range ss {
|
||||
switch s.ID {
|
||||
case http2.SettingInitialWindowSize:
|
||||
@@ -869,7 +884,6 @@ func (l *loopyWriter) applySettings(ss []http2.Setting) error {
|
||||
updateHeaderTblSize(l.hEnc, s.Val)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// processData removes the first stream from active streams, writes out at most 16KB
|
||||
@@ -903,7 +917,7 @@ func (l *loopyWriter) processData() (bool, error) {
|
||||
return false, err
|
||||
}
|
||||
if err := l.cleanupStreamHandler(trailer.cleanup); err != nil {
|
||||
return false, nil
|
||||
return false, err
|
||||
}
|
||||
} else {
|
||||
l.activeStreams.enqueue(str)
|
||||
|
||||
6
vendor/google.golang.org/grpc/internal/transport/defaults.go
generated
vendored
@@ -47,3 +47,9 @@ const (
	defaultClientMaxHeaderListSize = uint32(16 << 20)
	defaultServerMaxHeaderListSize = uint32(16 << 20)
)

// MaxStreamID is the upper bound for the stream ID before the current
// transport gracefully closes and new transport is created for subsequent RPCs.
// This is set to 75% of 2^31-1. Streams are identified with an unsigned 31-bit
// integer. It's exported so that tests can override it.
var MaxStreamID = uint32(math.MaxInt32 * 3 / 4)

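As a quick check of the 75% figure (illustrative arithmetic only):

package main

import (
	"fmt"
	"math"
)

func main() {
	// Constant integer arithmetic: 2147483647*3 = 6442450941, then /4 = 1610612735.
	fmt.Println(uint32(math.MaxInt32 * 3 / 4)) // 1610612735
}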
45
vendor/google.golang.org/grpc/internal/transport/handler_server.go
generated
vendored
@@ -46,24 +46,32 @@ import (
|
||||
"google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// NewServerHandlerTransport returns a ServerTransport handling gRPC
|
||||
// from inside an http.Handler. It requires that the http Server
|
||||
// supports HTTP/2.
|
||||
// NewServerHandlerTransport returns a ServerTransport handling gRPC from
|
||||
// inside an http.Handler, or writes an HTTP error to w and returns an error.
|
||||
// It requires that the http Server supports HTTP/2.
|
||||
func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request, stats []stats.Handler) (ServerTransport, error) {
|
||||
if r.ProtoMajor != 2 {
|
||||
return nil, errors.New("gRPC requires HTTP/2")
|
||||
msg := "gRPC requires HTTP/2"
|
||||
http.Error(w, msg, http.StatusBadRequest)
|
||||
return nil, errors.New(msg)
|
||||
}
|
||||
if r.Method != "POST" {
|
||||
return nil, errors.New("invalid gRPC request method")
|
||||
msg := fmt.Sprintf("invalid gRPC request method %q", r.Method)
|
||||
http.Error(w, msg, http.StatusBadRequest)
|
||||
return nil, errors.New(msg)
|
||||
}
|
||||
contentType := r.Header.Get("Content-Type")
|
||||
// TODO: do we assume contentType is lowercase? we did before
|
||||
contentSubtype, validContentType := grpcutil.ContentSubtype(contentType)
|
||||
if !validContentType {
|
||||
return nil, errors.New("invalid gRPC request content-type")
|
||||
msg := fmt.Sprintf("invalid gRPC request content-type %q", contentType)
|
||||
http.Error(w, msg, http.StatusUnsupportedMediaType)
|
||||
return nil, errors.New(msg)
|
||||
}
|
||||
if _, ok := w.(http.Flusher); !ok {
|
||||
return nil, errors.New("gRPC requires a ResponseWriter supporting http.Flusher")
|
||||
msg := "gRPC requires a ResponseWriter supporting http.Flusher"
|
||||
http.Error(w, msg, http.StatusInternalServerError)
|
||||
return nil, errors.New(msg)
|
||||
}
|
||||
|
||||
st := &serverHandlerTransport{
|
||||
@@ -79,7 +87,9 @@ func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request, stats []s
|
||||
if v := r.Header.Get("grpc-timeout"); v != "" {
|
||||
to, err := decodeTimeout(v)
|
||||
if err != nil {
|
||||
return nil, status.Errorf(codes.Internal, "malformed time-out: %v", err)
|
||||
msg := fmt.Sprintf("malformed grpc-timeout: %v", err)
|
||||
http.Error(w, msg, http.StatusBadRequest)
|
||||
return nil, status.Error(codes.Internal, msg)
|
||||
}
|
||||
st.timeoutSet = true
|
||||
st.timeout = to
|
||||
@@ -97,7 +107,9 @@ func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request, stats []s
|
||||
for _, v := range vv {
|
||||
v, err := decodeMetadataHeader(k, v)
|
||||
if err != nil {
|
||||
return nil, status.Errorf(codes.Internal, "malformed binary metadata: %v", err)
|
||||
msg := fmt.Sprintf("malformed binary metadata %q in header %q: %v", v, k, err)
|
||||
http.Error(w, msg, http.StatusBadRequest)
|
||||
return nil, status.Error(codes.Internal, msg)
|
||||
}
|
||||
metakv = append(metakv, k, v)
|
||||
}
|
||||
@@ -141,12 +153,15 @@ type serverHandlerTransport struct {
|
||||
stats []stats.Handler
|
||||
}
|
||||
|
||||
func (ht *serverHandlerTransport) Close() {
|
||||
ht.closeOnce.Do(ht.closeCloseChanOnce)
|
||||
func (ht *serverHandlerTransport) Close(err error) {
|
||||
ht.closeOnce.Do(func() {
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("Closing serverHandlerTransport: %v", err)
|
||||
}
|
||||
close(ht.closedCh)
|
||||
})
|
||||
}
|
||||
|
||||
func (ht *serverHandlerTransport) closeCloseChanOnce() { close(ht.closedCh) }
|
||||
|
||||
func (ht *serverHandlerTransport) RemoteAddr() net.Addr { return strAddr(ht.req.RemoteAddr) }
|
||||
|
||||
// strAddr is a net.Addr backed by either a TCP "ip:port" string, or
|
||||
@@ -236,7 +251,7 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) erro
|
||||
})
|
||||
}
|
||||
}
|
||||
ht.Close()
|
||||
ht.Close(errors.New("finished writing status"))
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -346,7 +361,7 @@ func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), trace
|
||||
case <-ht.req.Context().Done():
|
||||
}
|
||||
cancel()
|
||||
ht.Close()
|
||||
ht.Close(errors.New("request is done processing"))
|
||||
}()
|
||||
|
||||
req := ht.req
|
||||
|
||||
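These handler-transport errors are now reported to the HTTP caller via http.Error; the usual way this code path is reached is by serving a *grpc.Server through net/http. A minimal sketch (the routing, port, and certificate paths are placeholders; HTTP/2, and therefore TLS here, is required):

package main

import (
	"log"
	"net/http"
	"strings"

	"google.golang.org/grpc"
)

func main() {
	gs := grpc.NewServer()
	// Register services on gs here.

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
			// (*grpc.Server).ServeHTTP builds a serverHandlerTransport as above;
			// malformed requests now receive an HTTP error body rather than a bare failure.
			gs.ServeHTTP(w, r)
			return
		}
		http.NotFound(w, r)
	})

	// ListenAndServeTLS enables HTTP/2; "server.crt"/"server.key" are placeholder paths.
	log.Fatal(http.ListenAndServeTLS(":8443", "server.crt", "server.key", handler))
}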
115
vendor/google.golang.org/grpc/internal/transport/http2_client.go
generated
vendored
@@ -59,11 +59,15 @@ var clientConnectionCounter uint64
|
||||
|
||||
// http2Client implements the ClientTransport interface with HTTP2.
|
||||
type http2Client struct {
|
||||
lastRead int64 // Keep this field 64-bit aligned. Accessed atomically.
|
||||
ctx context.Context
|
||||
cancel context.CancelFunc
|
||||
ctxDone <-chan struct{} // Cache the ctx.Done() chan.
|
||||
userAgent string
|
||||
lastRead int64 // Keep this field 64-bit aligned. Accessed atomically.
|
||||
ctx context.Context
|
||||
cancel context.CancelFunc
|
||||
ctxDone <-chan struct{} // Cache the ctx.Done() chan.
|
||||
userAgent string
|
||||
// address contains the resolver returned address for this transport.
|
||||
// If the `ServerName` field is set, it takes precedence over `CallHdr.Host`
|
||||
// passed to `NewStream`, when determining the :authority header.
|
||||
address resolver.Address
|
||||
md metadata.MD
|
||||
conn net.Conn // underlying communication channel
|
||||
loopy *loopyWriter
|
||||
@@ -136,8 +140,7 @@ type http2Client struct {
|
||||
channelzID *channelz.Identifier
|
||||
czData *channelzData
|
||||
|
||||
onGoAway func(GoAwayReason)
|
||||
onClose func()
|
||||
onClose func(GoAwayReason)
|
||||
|
||||
bufferPool *bufferPool
|
||||
|
||||
@@ -193,7 +196,7 @@ func isTemporary(err error) bool {
|
||||
// newHTTP2Client constructs a connected ClientTransport to addr based on HTTP2
|
||||
// and starts to receive messages on it. Non-nil error returns if construction
|
||||
// fails.
|
||||
func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onGoAway func(GoAwayReason), onClose func()) (_ *http2Client, err error) {
|
||||
func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onClose func(GoAwayReason)) (_ *http2Client, err error) {
|
||||
scheme := "http"
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
defer func() {
|
||||
@@ -213,7 +216,7 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts
|
||||
if opts.FailOnNonTempDialError {
|
||||
return nil, connectionErrorf(isTemporary(err), err, "transport: error while dialing: %v", err)
|
||||
}
|
||||
return nil, connectionErrorf(true, err, "transport: Error while dialing %v", err)
|
||||
return nil, connectionErrorf(true, err, "transport: Error while dialing: %v", err)
|
||||
}
|
||||
|
||||
// Any further errors will close the underlying connection
|
||||
@@ -238,8 +241,11 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts
|
||||
go func(conn net.Conn) {
|
||||
defer ctxMonitorDone.Fire() // Signal this goroutine has exited.
|
||||
<-newClientCtx.Done() // Block until connectCtx expires or the defer above executes.
|
||||
if connectCtx.Err() != nil {
|
||||
if err := connectCtx.Err(); err != nil {
|
||||
// connectCtx expired before exiting the function. Hard close the connection.
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("newClientTransport: aborting due to connectCtx: %v", err)
|
||||
}
|
||||
conn.Close()
|
||||
}
|
||||
}(conn)
|
||||
@@ -314,6 +320,7 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts
|
||||
cancel: cancel,
|
||||
userAgent: opts.UserAgent,
|
||||
registeredCompressors: grpcutil.RegisteredCompressors(),
|
||||
address: addr,
|
||||
conn: conn,
|
||||
remoteAddr: conn.RemoteAddr(),
|
||||
localAddr: conn.LocalAddr(),
|
||||
@@ -335,7 +342,6 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts
|
||||
streamQuota: defaultMaxStreamsClient,
|
||||
streamsQuotaAvailable: make(chan struct{}, 1),
|
||||
czData: new(channelzData),
|
||||
onGoAway: onGoAway,
|
||||
keepaliveEnabled: keepaliveEnabled,
|
||||
bufferPool: newBufferPool(),
|
||||
onClose: onClose,
|
||||
@@ -438,17 +444,8 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts
|
||||
return nil, err
|
||||
}
|
||||
go func() {
|
||||
t.loopy = newLoopyWriter(clientSide, t.framer, t.controlBuf, t.bdpEst)
|
||||
err := t.loopy.run()
|
||||
if err != nil {
|
||||
if logger.V(logLevel) {
|
||||
logger.Errorf("transport: loopyWriter.run returning. Err: %v", err)
|
||||
}
|
||||
}
|
||||
// Do not close the transport. Let reader goroutine handle it since
|
||||
// there might be data in the buffers.
|
||||
t.conn.Close()
|
||||
t.controlBuf.finish()
|
||||
t.loopy = newLoopyWriter(clientSide, t.framer, t.controlBuf, t.bdpEst, t.conn)
|
||||
t.loopy.run()
|
||||
close(t.writerDone)
|
||||
}()
|
||||
return t, nil
|
||||
@@ -702,6 +699,18 @@ func (e NewStreamError) Error() string {
|
||||
// streams. All non-nil errors returned will be *NewStreamError.
|
||||
func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, error) {
|
||||
ctx = peer.NewContext(ctx, t.getPeer())
|
||||
|
||||
// ServerName field of the resolver returned address takes precedence over
|
||||
// Host field of CallHdr to determine the :authority header. This is because,
|
||||
// the ServerName field takes precedence for server authentication during
|
||||
// TLS handshake, and the :authority header should match the value used
|
||||
// for server authentication.
|
||||
if t.address.ServerName != "" {
|
||||
newCallHdr := *callHdr
|
||||
newCallHdr.Host = t.address.ServerName
|
||||
callHdr = &newCallHdr
|
||||
}
|
||||
|
||||
headerFields, err := t.createHeaderFields(ctx, callHdr)
|
||||
if err != nil {
|
||||
return nil, &NewStreamError{Err: err, AllowTransparentRetry: false}
|
||||
@@ -726,15 +735,12 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream,
|
||||
endStream: false,
|
||||
initStream: func(id uint32) error {
|
||||
t.mu.Lock()
|
||||
if state := t.state; state != reachable {
|
||||
// TODO: handle transport closure in loopy instead and remove this
|
||||
// initStream is never called when transport is draining.
|
||||
if t.state == closing {
|
||||
t.mu.Unlock()
|
||||
// Do a quick cleanup.
|
||||
err := error(errStreamDrain)
|
||||
if state == closing {
|
||||
err = ErrConnClosing
|
||||
}
|
||||
cleanup(err)
|
||||
return err
|
||||
cleanup(ErrConnClosing)
|
||||
return ErrConnClosing
|
||||
}
|
||||
if channelz.IsOn() {
|
||||
atomic.AddInt64(&t.czData.streamsStarted, 1)
|
||||
@@ -752,6 +758,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream,
|
||||
}
|
||||
firstTry := true
|
||||
var ch chan struct{}
|
||||
transportDrainRequired := false
|
||||
checkForStreamQuota := func(it interface{}) bool {
|
||||
if t.streamQuota <= 0 { // Can go negative if server decreases it.
|
||||
if firstTry {
|
||||
@@ -767,6 +774,11 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream,
|
||||
h := it.(*headerFrame)
|
||||
h.streamID = t.nextID
|
||||
t.nextID += 2
|
||||
|
||||
// Drain client transport if nextID > MaxStreamID which signals gRPC that
|
||||
// the connection is closed and a new one must be created for subsequent RPCs.
|
||||
transportDrainRequired = t.nextID > MaxStreamID
|
||||
|
||||
s.id = h.streamID
|
||||
s.fc = &inFlow{limit: uint32(t.initialWindowSize)}
|
||||
t.mu.Lock()
|
||||
@@ -846,6 +858,12 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream,
|
||||
sh.HandleRPC(s.ctx, outHeader)
|
||||
}
|
||||
}
|
||||
if transportDrainRequired {
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: t.nextID > MaxStreamID. Draining")
|
||||
}
|
||||
t.GracefulClose()
|
||||
}
|
||||
return s, nil
|
||||
}
|
||||
|
||||
@@ -934,9 +952,14 @@ func (t *http2Client) Close(err error) {
|
||||
t.mu.Unlock()
|
||||
return
|
||||
}
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: closing: %v", err)
|
||||
}
|
||||
// Call t.onClose ASAP to prevent the client from attempting to create new
|
||||
// streams.
|
||||
t.onClose()
|
||||
if t.state != draining {
|
||||
t.onClose(GoAwayInvalid)
|
||||
}
|
||||
t.state = closing
|
||||
streams := t.activeStreams
|
||||
t.activeStreams = nil
|
||||
@@ -986,11 +1009,15 @@ func (t *http2Client) GracefulClose() {
|
||||
t.mu.Unlock()
|
||||
return
|
||||
}
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: GracefulClose called")
|
||||
}
|
||||
t.onClose(GoAwayInvalid)
|
||||
t.state = draining
|
||||
active := len(t.activeStreams)
|
||||
t.mu.Unlock()
|
||||
if active == 0 {
|
||||
t.Close(ErrConnClosing)
|
||||
t.Close(connectionErrorf(true, nil, "no active streams left to process while draining"))
|
||||
return
|
||||
}
|
||||
t.controlBuf.put(&incomingGoAway{})
|
||||
@@ -1148,7 +1175,7 @@ func (t *http2Client) handleRSTStream(f *http2.RSTStreamFrame) {
|
||||
statusCode, ok := http2ErrConvTab[f.ErrCode]
|
||||
if !ok {
|
||||
if logger.V(logLevel) {
|
||||
logger.Warningf("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error %v", f.ErrCode)
|
||||
logger.Warningf("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error: %v", f.ErrCode)
|
||||
}
|
||||
statusCode = codes.Unknown
|
||||
}
|
||||
@@ -1230,10 +1257,12 @@ func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) {
|
||||
t.mu.Unlock()
|
||||
return
|
||||
}
|
||||
if f.ErrCode == http2.ErrCodeEnhanceYourCalm {
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("Client received GoAway with http2.ErrCodeEnhanceYourCalm.")
|
||||
}
|
||||
if f.ErrCode == http2.ErrCodeEnhanceYourCalm && string(f.DebugData()) == "too_many_pings" {
|
||||
// When a client receives a GOAWAY with error code ENHANCE_YOUR_CALM and debug
|
||||
// data equal to ASCII "too_many_pings", it should log the occurrence at a log level that is
|
||||
// enabled by default and double the configure KEEPALIVE_TIME used for new connections
|
||||
// on that channel.
|
||||
logger.Errorf("Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal to ASCII \"too_many_pings\".")
|
||||
}
|
||||
id := f.LastStreamID
|
||||
if id > 0 && id%2 == 0 {
|
||||
@@ -1266,8 +1295,10 @@ func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) {
|
||||
// Notify the clientconn about the GOAWAY before we set the state to
|
||||
// draining, to allow the client to stop attempting to create streams
|
||||
// before disallowing new streams on this connection.
|
||||
t.onGoAway(t.goAwayReason)
|
||||
t.state = draining
|
||||
if t.state != draining {
|
||||
t.onClose(t.goAwayReason)
|
||||
t.state = draining
|
||||
}
|
||||
}
|
||||
// All streams with IDs greater than the GoAwayId
|
||||
// and smaller than the previous GoAway ID should be killed.
|
||||
@@ -1756,3 +1787,9 @@ func (t *http2Client) getOutFlowWindow() int64 {
|
||||
return -2
|
||||
}
|
||||
}
|
||||
|
||||
func (t *http2Client) stateForTesting() transportState {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
return t.state
|
||||
}
|
||||
|
||||
161
vendor/google.golang.org/grpc/internal/transport/http2_server.go
generated
vendored
@@ -21,6 +21,7 @@ package transport
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
@@ -41,6 +42,7 @@ import (
|
||||
"google.golang.org/grpc/credentials"
|
||||
"google.golang.org/grpc/internal/channelz"
|
||||
"google.golang.org/grpc/internal/grpcrand"
|
||||
"google.golang.org/grpc/internal/grpcsync"
|
||||
"google.golang.org/grpc/keepalive"
|
||||
"google.golang.org/grpc/metadata"
|
||||
"google.golang.org/grpc/peer"
|
||||
@@ -101,13 +103,13 @@ type http2Server struct {
|
||||
|
||||
mu sync.Mutex // guard the following
|
||||
|
||||
// drainChan is initialized when Drain() is called the first time.
|
||||
// After which the server writes out the first GoAway(with ID 2^31-1) frame.
|
||||
// Then an independent goroutine will be launched to later send the second GoAway.
|
||||
// During this time we don't want to write another first GoAway(with ID 2^31 -1) frame.
|
||||
// Thus call to Drain() will be a no-op if drainChan is already initialized since draining is
|
||||
// already underway.
|
||||
drainChan chan struct{}
|
||||
// drainEvent is initialized when Drain() is called the first time. After
|
||||
// which the server writes out the first GoAway(with ID 2^31-1) frame. Then
|
||||
// an independent goroutine will be launched to later send the second
|
||||
// GoAway. During this time we don't want to write another first GoAway(with
|
||||
// ID 2^31 -1) frame. Thus call to Drain() will be a no-op if drainEvent is
|
||||
// already initialized since draining is already underway.
|
||||
drainEvent *grpcsync.Event
|
||||
state transportState
|
||||
activeStreams map[uint32]*Stream
|
||||
// idle is the time instant when the connection went idle.
|
||||
@@ -293,7 +295,7 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport,
|
||||
|
||||
defer func() {
|
||||
if err != nil {
|
||||
t.Close()
|
||||
t.Close(err)
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -329,23 +331,18 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport,
|
||||
t.handleSettings(sf)
|
||||
|
||||
go func() {
|
||||
t.loopy = newLoopyWriter(serverSide, t.framer, t.controlBuf, t.bdpEst)
|
||||
t.loopy = newLoopyWriter(serverSide, t.framer, t.controlBuf, t.bdpEst, t.conn)
|
||||
t.loopy.ssGoAwayHandler = t.outgoingGoAwayHandler
|
||||
if err := t.loopy.run(); err != nil {
|
||||
if logger.V(logLevel) {
|
||||
logger.Errorf("transport: loopyWriter.run returning. Err: %v", err)
|
||||
}
|
||||
}
|
||||
t.conn.Close()
|
||||
t.controlBuf.finish()
|
||||
t.loopy.run()
|
||||
close(t.writerDone)
|
||||
}()
|
||||
go t.keepalive()
|
||||
return t, nil
|
||||
}
|
||||
|
||||
// operateHeader takes action on the decoded headers.
|
||||
func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream), traceCtx func(context.Context, string) context.Context) (fatal bool) {
|
||||
// operateHeaders takes action on the decoded headers. Returns an error if fatal
|
||||
// error encountered and transport needs to close, otherwise returns nil.
|
||||
func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream), traceCtx func(context.Context, string) context.Context) error {
|
||||
// Acquire max stream ID lock for entire duration
|
||||
t.maxStreamMu.Lock()
|
||||
defer t.maxStreamMu.Unlock()
|
||||
@@ -361,15 +358,12 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
rstCode: http2.ErrCodeFrameSize,
|
||||
onWrite: func() {},
|
||||
})
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
|
||||
if streamID%2 != 1 || streamID <= t.maxStreamID {
|
||||
// illegal gRPC stream id.
|
||||
if logger.V(logLevel) {
|
||||
logger.Errorf("transport: http2Server.HandleStreams received an illegal stream id: %v", streamID)
|
||||
}
|
||||
return true
|
||||
return fmt.Errorf("received an illegal stream id: %v. headers frame: %+v", streamID, frame)
|
||||
}
|
||||
t.maxStreamID = streamID
|
||||
|
||||
@@ -381,13 +375,14 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
fc: &inFlow{limit: uint32(t.initialWindowSize)},
|
||||
}
|
||||
var (
|
||||
// If a gRPC Response-Headers has already been received, then it means
|
||||
// that the peer is speaking gRPC and we are in gRPC mode.
|
||||
isGRPC = false
|
||||
mdata = make(map[string][]string)
|
||||
httpMethod string
|
||||
// headerError is set if an error is encountered while parsing the headers
|
||||
headerError bool
|
||||
// if false, content-type was missing or invalid
|
||||
isGRPC = false
|
||||
contentType = ""
|
||||
mdata = make(metadata.MD, len(frame.Fields))
|
||||
httpMethod string
|
||||
// these are set if an error is encountered while parsing the headers
|
||||
protocolError bool
|
||||
headerError *status.Status
|
||||
|
||||
timeoutSet bool
|
||||
timeout time.Duration
|
||||
@@ -398,11 +393,23 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
case "content-type":
|
||||
contentSubtype, validContentType := grpcutil.ContentSubtype(hf.Value)
|
||||
if !validContentType {
|
||||
contentType = hf.Value
|
||||
break
|
||||
}
|
||||
mdata[hf.Name] = append(mdata[hf.Name], hf.Value)
|
||||
s.contentSubtype = contentSubtype
|
||||
isGRPC = true
|
||||
|
||||
case "grpc-accept-encoding":
|
||||
mdata[hf.Name] = append(mdata[hf.Name], hf.Value)
|
||||
if hf.Value == "" {
|
||||
continue
|
||||
}
|
||||
compressors := hf.Value
|
||||
if s.clientAdvertisedCompressors != "" {
|
||||
compressors = s.clientAdvertisedCompressors + "," + compressors
|
||||
}
|
||||
s.clientAdvertisedCompressors = compressors
|
||||
case "grpc-encoding":
|
||||
s.recvCompress = hf.Value
|
||||
case ":method":
|
||||
@@ -413,7 +420,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
timeoutSet = true
|
||||
var err error
|
||||
if timeout, err = decodeTimeout(hf.Value); err != nil {
|
||||
headerError = true
|
||||
headerError = status.Newf(codes.Internal, "malformed grpc-timeout: %v", err)
|
||||
}
|
||||
// "Transports must consider requests containing the Connection header
|
||||
// as malformed." - A41
|
||||
@@ -421,14 +428,14 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
if logger.V(logLevel) {
|
||||
logger.Errorf("transport: http2Server.operateHeaders parsed a :connection header which makes a request malformed as per the HTTP/2 spec")
|
||||
}
|
||||
headerError = true
|
||||
protocolError = true
|
||||
default:
|
||||
if isReservedHeader(hf.Name) && !isWhitelistedHeader(hf.Name) {
|
||||
break
|
||||
}
|
||||
v, err := decodeMetadataHeader(hf.Name, hf.Value)
|
||||
if err != nil {
|
||||
headerError = true
|
||||
headerError = status.Newf(codes.Internal, "malformed binary metadata %q in header %q: %v", hf.Value, hf.Name, err)
|
||||
logger.Warningf("Failed to decode metadata header (%q, %q): %v", hf.Name, hf.Value, err)
|
||||
break
|
||||
}
|
||||
@@ -447,23 +454,43 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
logger.Errorf("transport: %v", errMsg)
|
||||
}
|
||||
t.controlBuf.put(&earlyAbortStream{
|
||||
httpStatus: 400,
|
||||
httpStatus: http.StatusBadRequest,
|
||||
streamID: streamID,
|
||||
contentSubtype: s.contentSubtype,
|
||||
status: status.New(codes.Internal, errMsg),
|
||||
rst: !frame.StreamEnded(),
|
||||
})
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
|
||||
if !isGRPC || headerError {
|
||||
if protocolError {
|
||||
t.controlBuf.put(&cleanupStream{
|
||||
streamID: streamID,
|
||||
rst: true,
|
||||
rstCode: http2.ErrCodeProtocol,
|
||||
onWrite: func() {},
|
||||
})
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
if !isGRPC {
|
||||
t.controlBuf.put(&earlyAbortStream{
|
||||
httpStatus: http.StatusUnsupportedMediaType,
|
||||
streamID: streamID,
|
||||
contentSubtype: s.contentSubtype,
|
||||
status: status.Newf(codes.InvalidArgument, "invalid gRPC request content-type %q", contentType),
|
||||
rst: !frame.StreamEnded(),
|
||||
})
|
||||
return nil
|
||||
}
|
||||
if headerError != nil {
|
||||
t.controlBuf.put(&earlyAbortStream{
|
||||
httpStatus: http.StatusBadRequest,
|
||||
streamID: streamID,
|
||||
contentSubtype: s.contentSubtype,
|
||||
status: headerError,
|
||||
rst: !frame.StreamEnded(),
|
||||
})
|
||||
return nil
|
||||
}
|
||||
|
||||
// "If :authority is missing, Host must be renamed to :authority." - A41
|
||||
@@ -503,7 +530,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
if t.state != reachable {
|
||||
t.mu.Unlock()
|
||||
s.cancel()
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
if uint32(len(t.activeStreams)) >= t.maxStreams {
|
||||
t.mu.Unlock()
|
||||
@@ -514,7 +541,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
onWrite: func() {},
|
||||
})
|
||||
s.cancel()
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
if httpMethod != http.MethodPost {
|
||||
t.mu.Unlock()
|
||||
@@ -530,7 +557,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
rst: !frame.StreamEnded(),
|
||||
})
|
||||
s.cancel()
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
if t.inTapHandle != nil {
|
||||
var err error
|
||||
@@ -550,7 +577,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
status: stat,
|
||||
rst: !frame.StreamEnded(),
|
||||
})
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
}
|
||||
t.activeStreams[streamID] = s
|
||||
@@ -574,7 +601,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
LocalAddr: t.localAddr,
|
||||
Compression: s.recvCompress,
|
||||
WireLength: int(frame.Header().Length),
|
||||
Header: metadata.MD(mdata).Copy(),
|
||||
Header: mdata.Copy(),
|
||||
}
|
||||
sh.HandleRPC(s.ctx, inHeader)
|
||||
}
|
||||
@@ -597,7 +624,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
|
||||
wq: s.wq,
|
||||
})
|
||||
handle(s)
|
||||
return false
|
||||
return nil
|
||||
}
|
||||
|
||||
// HandleStreams receives incoming streams using the given handler. This is
|
||||
@@ -630,19 +657,16 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
|
||||
continue
|
||||
}
|
||||
if err == io.EOF || err == io.ErrUnexpectedEOF {
|
||||
t.Close()
|
||||
t.Close(err)
|
||||
return
|
||||
}
|
||||
if logger.V(logLevel) {
|
||||
logger.Warningf("transport: http2Server.HandleStreams failed to read frame: %v", err)
|
||||
}
|
||||
t.Close()
|
||||
t.Close(err)
|
||||
return
|
||||
}
|
||||
switch frame := frame.(type) {
|
||||
case *http2.MetaHeadersFrame:
|
||||
if t.operateHeaders(frame, handle, traceCtx) {
|
||||
t.Close()
|
||||
if err := t.operateHeaders(frame, handle, traceCtx); err != nil {
|
||||
t.Close(err)
|
||||
break
|
||||
}
|
||||
case *http2.DataFrame:
|
||||
@@ -843,8 +867,8 @@ const (
|
||||
|
||||
func (t *http2Server) handlePing(f *http2.PingFrame) {
|
||||
if f.IsAck() {
|
||||
if f.Data == goAwayPing.data && t.drainChan != nil {
|
||||
close(t.drainChan)
|
||||
if f.Data == goAwayPing.data && t.drainEvent != nil {
|
||||
t.drainEvent.Fire()
|
||||
return
|
||||
}
|
||||
// Maybe it's a BDP ping.
|
||||
@@ -886,10 +910,7 @@ func (t *http2Server) handlePing(f *http2.PingFrame) {
|
||||
|
||||
if t.pingStrikes > maxPingStrikes {
|
||||
// Send goaway and close the connection.
|
||||
if logger.V(logLevel) {
|
||||
logger.Errorf("transport: Got too many pings from the client, closing the connection.")
|
||||
}
|
||||
t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings"), closeConn: true})
|
||||
t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings"), closeConn: errors.New("got too many pings from the client")})
|
||||
}
|
||||
}
|
||||
|
||||
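A hedged sketch of the public knobs behind the ping-strike handling above; all durations are illustrative. The server's enforcement policy decides when client pings count as strikes, and after too many strikes the server sends the GOAWAY(ENHANCE_YOUR_CALM, "too_many_pings") shown in handlePing.

package keepalivecfg

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// Server side: pings arriving more often than MinTime count as strikes; after
// maxPingStrikes the server sends GOAWAY with debug data "too_many_pings".
var ServerOpt = grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
	MinTime:             1 * time.Minute,
	PermitWithoutStream: false,
})

// Client side: ping no more often than the server allows. Per the client-side
// change earlier in this diff, receiving that GOAWAY makes the client log an
// error and double its keepalive time for new connections.
var ClientOpt = grpc.WithKeepaliveParams(keepalive.ClientParameters{
	Time:    2 * time.Minute,
	Timeout: 20 * time.Second,
})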
@@ -1153,7 +1174,7 @@ func (t *http2Server) keepalive() {
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: closing server transport due to maximum connection age.")
|
||||
}
|
||||
t.Close()
|
||||
t.controlBuf.put(closeConnection{})
|
||||
case <-t.done:
|
||||
}
|
||||
return
|
||||
@@ -1169,10 +1190,7 @@ func (t *http2Server) keepalive() {
|
||||
continue
|
||||
}
|
||||
if outstandingPing && kpTimeoutLeft <= 0 {
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: closing server transport due to idleness.")
|
||||
}
|
||||
t.Close()
|
||||
t.Close(fmt.Errorf("keepalive ping not acked within timeout %s", t.kp.Time))
|
||||
return
|
||||
}
|
||||
if !outstandingPing {
|
||||
@@ -1199,12 +1217,15 @@ func (t *http2Server) keepalive() {
|
||||
// Close starts shutting down the http2Server transport.
|
||||
// TODO(zhaoq): Now the destruction is not blocked on any pending streams. This
|
||||
// could cause some resource issue. Revisit this later.
|
||||
func (t *http2Server) Close() {
|
||||
func (t *http2Server) Close(err error) {
|
||||
t.mu.Lock()
|
||||
if t.state == closing {
|
||||
t.mu.Unlock()
|
||||
return
|
||||
}
|
||||
if logger.V(logLevel) {
|
||||
logger.Infof("transport: closing: %v", err)
|
||||
}
|
||||
t.state = closing
|
||||
streams := t.activeStreams
|
||||
t.activeStreams = nil
|
||||
@@ -1295,10 +1316,10 @@ func (t *http2Server) RemoteAddr() net.Addr {
|
||||
func (t *http2Server) Drain() {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
if t.drainChan != nil {
|
||||
if t.drainEvent != nil {
|
||||
return
|
||||
}
|
||||
t.drainChan = make(chan struct{})
|
||||
t.drainEvent = grpcsync.NewEvent()
|
||||
t.controlBuf.put(&goAway{code: http2.ErrCodeNo, debugData: []byte{}, headsUp: true})
|
||||
}
|
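Replacing the bare drainChan with grpcsync.Event avoids double-close hazards when Drain and the ping handler race. A minimal, hedged sketch of a fire-once event in the same spirit; the names below are illustrative and not the internal grpcsync API.

package main

import (
	"fmt"
	"sync"
)

// event is a fire-once signal: fire may be called any number of times, from
// any goroutine, without the double-close panic a bare channel would risk.
type event struct {
	once sync.Once
	c    chan struct{}
}

func newEvent() *event { return &event{c: make(chan struct{})} }

func (e *event) fire() { e.once.Do(func() { close(e.c) }) }

func (e *event) done() <-chan struct{} { return e.c }

func main() {
	e := newEvent()
	e.fire()
	e.fire() // safe: the second call is a no-op
	<-e.done()
	fmt.Println("drain signalled")
}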
||||
|
||||
@@ -1319,19 +1340,17 @@ func (t *http2Server) outgoingGoAwayHandler(g *goAway) (bool, error) {
|
||||
// Stop accepting more streams now.
|
||||
t.state = draining
|
||||
sid := t.maxStreamID
|
||||
retErr := g.closeConn
|
||||
if len(t.activeStreams) == 0 {
|
||||
g.closeConn = true
|
||||
retErr = errors.New("second GOAWAY written and no active streams left to process")
|
||||
}
|
||||
t.mu.Unlock()
|
||||
t.maxStreamMu.Unlock()
|
||||
if err := t.framer.fr.WriteGoAway(sid, g.code, g.debugData); err != nil {
|
||||
return false, err
|
||||
}
|
||||
if g.closeConn {
|
||||
// Abruptly close the connection following the GoAway (via
|
||||
// loopywriter). But flush out what's inside the buffer first.
|
||||
t.framer.writer.Flush()
|
||||
return false, fmt.Errorf("transport: Connection closing")
|
||||
if retErr != nil {
|
||||
return false, retErr
|
||||
}
|
||||
return true, nil
|
||||
}
|
||||
@@ -1353,7 +1372,7 @@ func (t *http2Server) outgoingGoAwayHandler(g *goAway) (bool, error) {
|
||||
timer := time.NewTimer(time.Minute)
|
||||
defer timer.Stop()
|
||||
select {
|
||||
case <-t.drainChan:
|
||||
case <-t.drainEvent.Done():
|
||||
case <-timer.C:
|
||||
case <-t.done:
|
||||
return
|
||||
|
||||
24
vendor/google.golang.org/grpc/internal/transport/http_util.go
generated
vendored
@@ -21,6 +21,7 @@ package transport
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
@@ -330,7 +331,8 @@ func (w *bufWriter) Write(b []byte) (n int, err error) {
|
||||
return 0, w.err
|
||||
}
|
||||
if w.batchSize == 0 { // Buffer has been disabled.
|
||||
return w.conn.Write(b)
|
||||
n, err = w.conn.Write(b)
|
||||
return n, toIOError(err)
|
||||
}
|
||||
for len(b) > 0 {
|
||||
nn := copy(w.buf[w.offset:], b)
|
||||
@@ -352,10 +354,30 @@ func (w *bufWriter) Flush() error {
|
||||
return nil
|
||||
}
|
||||
_, w.err = w.conn.Write(w.buf[:w.offset])
|
||||
w.err = toIOError(w.err)
|
||||
w.offset = 0
|
||||
return w.err
|
||||
}
|
||||
|
||||
type ioError struct {
|
||||
error
|
||||
}
|
||||
|
||||
func (i ioError) Unwrap() error {
|
||||
return i.error
|
||||
}
|
||||
|
||||
func isIOError(err error) bool {
|
||||
return errors.As(err, &ioError{})
|
||||
}
|
||||
|
||||
func toIOError(err error) error {
|
||||
if err == nil {
|
||||
return nil
|
||||
}
|
||||
return ioError{error: err}
|
||||
}
|
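A hedged, standalone illustration of the wrap-and-detect pattern that the new ioError/toIOError helpers implement: wrapping a connection write error in a marker type with Unwrap lets callers classify it later with errors.As, even after further wrapping.

package main

import (
	"errors"
	"fmt"
	"io"
)

type ioError struct{ error }

func (i ioError) Unwrap() error { return i.error }

func toIOError(err error) error {
	if err == nil {
		return nil
	}
	return ioError{error: err}
}

func isIOError(err error) bool { return errors.As(err, &ioError{}) }

func main() {
	// Wrap a connection write error, then wrap it again further up the stack.
	err := fmt.Errorf("write settings frame: %w", toIOError(io.ErrClosedPipe))

	fmt.Println(isIOError(err))                   // true: the marker survives extra wrapping
	fmt.Println(errors.Is(err, io.ErrClosedPipe)) // true: Unwrap exposes the original error
}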
||||
|
||||
type framer struct {
|
||||
writer *bufWriter
|
||||
fr *http2.Framer
|
||||
|
||||
29
vendor/google.golang.org/grpc/internal/transport/transport.go
generated
vendored
@@ -257,6 +257,9 @@ type Stream struct {
|
||||
fc *inFlow
|
||||
wq *writeQuota
|
||||
|
||||
// Holds compressor names passed in grpc-accept-encoding metadata from the
|
||||
// client. This is empty for the client side stream.
|
||||
clientAdvertisedCompressors string
|
||||
// Callback to state application's intentions to read data. This
|
||||
// is used to adjust flow control, if needed.
|
||||
requestRead func(int)
|
||||
@@ -345,8 +348,24 @@ func (s *Stream) RecvCompress() string {
|
||||
}
|
||||
|
||||
// SetSendCompress sets the compression algorithm to the stream.
|
||||
func (s *Stream) SetSendCompress(str string) {
|
||||
s.sendCompress = str
|
||||
func (s *Stream) SetSendCompress(name string) error {
|
||||
if s.isHeaderSent() || s.getState() == streamDone {
|
||||
return errors.New("transport: set send compressor called after headers sent or stream done")
|
||||
}
|
||||
|
||||
s.sendCompress = name
|
||||
return nil
|
||||
}
|
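Stream.SetSendCompress now fails once headers are sent or the stream is done. A hedged sketch of the server-side API that feeds it, assuming the experimental grpc.SetSendCompressor helper referenced by the server.go changes later in this diff; the handler and message types are placeholders, and gzip is registered via the blank import.

package handlers

import (
	"context"

	"google.golang.org/grpc"
	_ "google.golang.org/grpc/encoding/gzip" // registers the "gzip" compressor
)

type echoRequest struct{ Msg string }
type echoReply struct{ Msg string }

func Echo(ctx context.Context, req *echoRequest) (*echoReply, error) {
	// Must happen before any response headers or messages are written;
	// afterwards the transport rejects it, matching SetSendCompress above.
	if err := grpc.SetSendCompressor(ctx, "gzip"); err != nil {
		return nil, err
	}
	return &echoReply{Msg: req.Msg}, nil
}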
||||
|
||||
// SendCompress returns the send compressor name.
|
||||
func (s *Stream) SendCompress() string {
|
||||
return s.sendCompress
|
||||
}
|
||||
|
||||
// ClientAdvertisedCompressors returns the compressor names advertised by the
|
||||
// client via grpc-accept-encoding header.
|
||||
func (s *Stream) ClientAdvertisedCompressors() string {
|
||||
return s.clientAdvertisedCompressors
|
||||
}
|
||||
|
||||
// Done returns a channel which is closed when it receives the final status
|
||||
@@ -583,8 +602,8 @@ type ConnectOptions struct {
|
||||
|
||||
// NewClientTransport establishes the transport with the required ConnectOptions
|
||||
// and returns it to the caller.
|
||||
func NewClientTransport(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onGoAway func(GoAwayReason), onClose func()) (ClientTransport, error) {
|
||||
return newHTTP2Client(connectCtx, ctx, addr, opts, onGoAway, onClose)
|
||||
func NewClientTransport(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onClose func(GoAwayReason)) (ClientTransport, error) {
|
||||
return newHTTP2Client(connectCtx, ctx, addr, opts, onClose)
|
||||
}
|
||||
|
||||
// Options provides additional hints and information for message
|
||||
@@ -701,7 +720,7 @@ type ServerTransport interface {
|
||||
// Close tears down the transport. Once it is called, the transport
|
||||
// should not be accessed any more. All the pending streams and their
|
||||
// handlers will be terminated asynchronously.
|
||||
Close()
|
||||
Close(err error)
|
||||
|
||||
// RemoteAddr returns the remote network address.
|
||||
RemoteAddr() net.Addr
|
||||
|
||||
13
vendor/google.golang.org/grpc/metadata/metadata.go
generated
vendored
@@ -91,7 +91,11 @@ func (md MD) Len() int {
|
||||
|
||||
// Copy returns a copy of md.
|
||||
func (md MD) Copy() MD {
|
||||
return Join(md)
|
||||
out := make(MD, len(md))
|
||||
for k, v := range md {
|
||||
out[k] = copyOf(v)
|
||||
}
|
||||
return out
|
||||
}
|
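A short usage sketch of the behavior the new Copy implementation guarantees: the copy owns its own value slices, so later mutation of the original does not leak into it. Header names and values below are illustrative.

package main

import (
	"fmt"

	"google.golang.org/grpc/metadata"
)

func main() {
	orig := metadata.Pairs("trace-id", "abc")
	cp := orig.Copy()

	orig.Append("trace-id", "def") // mutate only the original

	fmt.Println(orig.Get("trace-id")) // [abc def]
	fmt.Println(cp.Get("trace-id"))   // [abc]: the copy keeps its own value slice
}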
||||
|
||||
// Get obtains the values for a given key.
|
||||
@@ -171,8 +175,11 @@ func AppendToOutgoingContext(ctx context.Context, kv ...string) context.Context
|
||||
md, _ := ctx.Value(mdOutgoingKey{}).(rawMD)
|
||||
added := make([][]string, len(md.added)+1)
|
||||
copy(added, md.added)
|
||||
added[len(added)-1] = make([]string, len(kv))
|
||||
copy(added[len(added)-1], kv)
|
||||
kvCopy := make([]string, 0, len(kv))
|
||||
for i := 0; i < len(kv); i += 2 {
|
||||
kvCopy = append(kvCopy, strings.ToLower(kv[i]), kv[i+1])
|
||||
}
|
||||
added[len(added)-1] = kvCopy
|
||||
return context.WithValue(ctx, mdOutgoingKey{}, rawMD{md: md.md, added: added})
|
||||
}
|
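A hedged sketch of the change to AppendToOutgoingContext above: keys are now lower-cased as they are stored, so lookups through FromOutgoingContext see canonical lower-case keys. The header name and value are placeholders.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

func main() {
	ctx := metadata.AppendToOutgoingContext(context.Background(), "X-Request-ID", "42")

	md, _ := metadata.FromOutgoingContext(ctx)
	fmt.Println(md.Get("x-request-id")) // [42]: the key was stored lower-cased
}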
||||
|
||||
|
||||
28
vendor/google.golang.org/grpc/picker_wrapper.go
generated
vendored
@@ -58,12 +58,18 @@ func (pw *pickerWrapper) updatePicker(p balancer.Picker) {
|
||||
pw.mu.Unlock()
|
||||
}
|
||||
|
||||
func doneChannelzWrapper(acw *acBalancerWrapper, done func(balancer.DoneInfo)) func(balancer.DoneInfo) {
|
||||
// doneChannelzWrapper performs the following:
|
||||
// - increments the calls started channelz counter
|
||||
// - wraps the done function in the passed in result to increment the calls
|
||||
// failed or calls succeeded channelz counter before invoking the actual
|
||||
// done function.
|
||||
func doneChannelzWrapper(acw *acBalancerWrapper, result *balancer.PickResult) {
|
||||
acw.mu.Lock()
|
||||
ac := acw.ac
|
||||
acw.mu.Unlock()
|
||||
ac.incrCallsStarted()
|
||||
return func(b balancer.DoneInfo) {
|
||||
done := result.Done
|
||||
result.Done = func(b balancer.DoneInfo) {
|
||||
if b.Err != nil && b.Err != io.EOF {
|
||||
ac.incrCallsFailed()
|
||||
} else {
|
||||
@@ -82,7 +88,7 @@ func doneChannelzWrapper(acw *acBalancerWrapper, done func(balancer.DoneInfo)) f
|
||||
// - the current picker returns other errors and failfast is false.
|
||||
// - the subConn returned by the current picker is not READY
|
||||
// When one of these situations happens, pick blocks until the picker gets updated.
|
||||
func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.PickInfo) (transport.ClientTransport, func(balancer.DoneInfo), error) {
|
||||
func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.PickInfo) (transport.ClientTransport, balancer.PickResult, error) {
|
||||
var ch chan struct{}
|
||||
|
||||
var lastPickErr error
|
||||
@@ -90,7 +96,7 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
|
||||
pw.mu.Lock()
|
||||
if pw.done {
|
||||
pw.mu.Unlock()
|
||||
return nil, nil, ErrClientConnClosing
|
||||
return nil, balancer.PickResult{}, ErrClientConnClosing
|
||||
}
|
||||
|
||||
if pw.picker == nil {
|
||||
@@ -111,9 +117,9 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
|
||||
}
|
||||
switch ctx.Err() {
|
||||
case context.DeadlineExceeded:
|
||||
return nil, nil, status.Error(codes.DeadlineExceeded, errStr)
|
||||
return nil, balancer.PickResult{}, status.Error(codes.DeadlineExceeded, errStr)
|
||||
case context.Canceled:
|
||||
return nil, nil, status.Error(codes.Canceled, errStr)
|
||||
return nil, balancer.PickResult{}, status.Error(codes.Canceled, errStr)
|
||||
}
|
||||
case <-ch:
|
||||
}
|
||||
@@ -125,7 +131,6 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
|
||||
pw.mu.Unlock()
|
||||
|
||||
pickResult, err := p.Pick(info)
|
||||
|
||||
if err != nil {
|
||||
if err == balancer.ErrNoSubConnAvailable {
|
||||
continue
|
||||
@@ -136,7 +141,7 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
|
||||
if istatus.IsRestrictedControlPlaneCode(st) {
|
||||
err = status.Errorf(codes.Internal, "received picker error with illegal status: %v", err)
|
||||
}
|
||||
return nil, nil, dropError{error: err}
|
||||
return nil, balancer.PickResult{}, dropError{error: err}
|
||||
}
|
||||
// For all other errors, wait for ready RPCs should block and other
|
||||
// RPCs should fail with unavailable.
|
||||
@@ -144,7 +149,7 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
|
||||
lastPickErr = err
|
||||
continue
|
||||
}
|
||||
return nil, nil, status.Error(codes.Unavailable, err.Error())
|
||||
return nil, balancer.PickResult{}, status.Error(codes.Unavailable, err.Error())
|
||||
}
|
||||
|
||||
acw, ok := pickResult.SubConn.(*acBalancerWrapper)
|
||||
@@ -154,9 +159,10 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.
|
||||
}
|
||||
if t := acw.getAddrConn().getReadyTransport(); t != nil {
|
||||
if channelz.IsOn() {
|
||||
return t, doneChannelzWrapper(acw, pickResult.Done), nil
|
||||
doneChannelzWrapper(acw, &pickResult)
|
||||
return t, pickResult, nil
|
||||
}
|
||||
return t, pickResult.Done, nil
|
||||
return t, pickResult, nil
|
||||
}
|
||||
if pickResult.Done != nil {
|
||||
// Calling done with nil error, no bytes sent and no bytes received.
|
||||
|
||||
6
vendor/google.golang.org/grpc/pickfirst.go
generated
vendored
@@ -51,7 +51,7 @@ type pickfirstBalancer struct {
|
||||
|
||||
func (b *pickfirstBalancer) ResolverError(err error) {
|
||||
if logger.V(2) {
|
||||
logger.Infof("pickfirstBalancer: ResolverError called with error %v", err)
|
||||
logger.Infof("pickfirstBalancer: ResolverError called with error: %v", err)
|
||||
}
|
||||
if b.subConn == nil {
|
||||
b.state = connectivity.TransientFailure
|
||||
@@ -102,8 +102,8 @@ func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState
|
||||
b.subConn = subConn
|
||||
b.state = connectivity.Idle
|
||||
b.cc.UpdateState(balancer.State{
|
||||
ConnectivityState: connectivity.Idle,
|
||||
Picker: &picker{result: balancer.PickResult{SubConn: b.subConn}},
|
||||
ConnectivityState: connectivity.Connecting,
|
||||
Picker: &picker{err: balancer.ErrNoSubConnAvailable},
|
||||
})
|
||||
b.subConn.Connect()
|
||||
return nil
|
||||
|
||||
7
vendor/google.golang.org/grpc/regenerate.sh
generated
vendored
@@ -57,7 +57,8 @@ LEGACY_SOURCES=(
|
||||
${WORKDIR}/grpc-proto/grpc/health/v1/health.proto
|
||||
${WORKDIR}/grpc-proto/grpc/lb/v1/load_balancer.proto
|
||||
profiling/proto/service.proto
|
||||
reflection/grpc_reflection_v1alpha/reflection.proto
|
||||
${WORKDIR}/grpc-proto/grpc/reflection/v1alpha/reflection.proto
|
||||
${WORKDIR}/grpc-proto/grpc/reflection/v1/reflection.proto
|
||||
)
|
||||
|
||||
# Generates only the new gRPC Service symbols
|
||||
@@ -119,8 +120,4 @@ mv ${WORKDIR}/out/google.golang.org/grpc/lookup/grpc_lookup_v1/* ${WORKDIR}/out/
|
||||
# see grpc_testing_not_regenerate/README.md for details.
|
||||
rm ${WORKDIR}/out/google.golang.org/grpc/reflection/grpc_testing_not_regenerate/*.pb.go
|
||||
|
||||
# grpc/testing does not have a go_package option.
|
||||
mv ${WORKDIR}/out/grpc/testing/*.pb.go interop/grpc_testing/
|
||||
mv ${WORKDIR}/out/grpc/core/*.pb.go interop/grpc_testing/core/
|
||||
|
||||
cp -R ${WORKDIR}/out/google.golang.org/grpc/* .
|
||||
|
||||
42
vendor/google.golang.org/grpc/resolver/resolver.go
generated
vendored
@@ -24,6 +24,7 @@ import (
|
||||
"context"
|
||||
"net"
|
||||
"net/url"
|
||||
"strings"
|
||||
|
||||
"google.golang.org/grpc/attributes"
|
||||
"google.golang.org/grpc/credentials"
|
||||
@@ -40,8 +41,9 @@ var (
|
||||
|
||||
// TODO(bar) install dns resolver in init(){}.
|
||||
|
||||
// Register registers the resolver builder to the resolver map. b.Scheme will be
|
||||
// used as the scheme registered with this builder.
|
||||
// Register registers the resolver builder to the resolver map. b.Scheme will
|
||||
// be used as the scheme registered with this builder. The registry is case
|
||||
// sensitive, and schemes should not contain any uppercase characters.
|
||||
//
|
||||
// NOTE: this function must only be called during initialization time (i.e. in
|
||||
// an init() function), and is not thread-safe. If multiple Resolvers are
|
||||
@@ -202,6 +204,15 @@ type State struct {
|
||||
// gRPC to add new methods to this interface.
|
||||
type ClientConn interface {
|
||||
// UpdateState updates the state of the ClientConn appropriately.
|
||||
//
|
||||
// If an error is returned, the resolver should try to resolve the
|
||||
// target again. The resolver should use a backoff timer to prevent
|
||||
// overloading the server with requests. If a resolver is certain that
|
||||
// reresolving will not change the result, e.g. because it is
|
||||
// a watch-based resolver, returned errors can be ignored.
|
||||
//
|
||||
// If the resolved State is the same as the last reported one, calling
|
||||
// UpdateState can be omitted.
|
||||
UpdateState(State) error
|
||||
// ReportError notifies the ClientConn that the Resolver encountered an
|
||||
// error. The ClientConn will notify the load balancer and begin calling
|
||||
@@ -247,9 +258,6 @@ type Target struct {
|
||||
Scheme string
|
||||
// Deprecated: use URL.Host instead.
|
||||
Authority string
|
||||
// Deprecated: use URL.Path or URL.Opaque instead. The latter is set when
|
||||
// the former is empty.
|
||||
Endpoint string
|
||||
// URL contains the parsed dial target with an optional default scheme added
|
||||
// to it if the original dial target contained no scheme or contained an
|
||||
// unregistered scheme. Any query params specified in the original dial
|
||||
@@ -257,6 +265,24 @@ type Target struct {
|
||||
URL url.URL
|
||||
}
|
||||
|
||||
// Endpoint retrieves endpoint without leading "/" from either `URL.Path`
|
||||
// or `URL.Opaque`. The latter is used when the former is empty.
|
||||
func (t Target) Endpoint() string {
|
||||
endpoint := t.URL.Path
|
||||
if endpoint == "" {
|
||||
endpoint = t.URL.Opaque
|
||||
}
|
||||
// For targets of the form "[scheme]://[authority]/endpoint, the endpoint
|
||||
// value returned from url.Parse() contains a leading "/". Although this is
|
||||
// in accordance with RFC 3986, we do not want to break existing resolver
|
||||
// implementations which expect the endpoint without the leading "/". So, we
|
||||
// end up stripping the leading "/" here. But this will result in an
|
||||
// incorrect parsing for something like "unix:///path/to/socket". Since we
|
||||
// own the "unix" resolver, we can workaround in the unix resolver by using
|
||||
// the `URL` field.
|
||||
return strings.TrimPrefix(endpoint, "/")
|
||||
}
|
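A hedged illustration of the trimming described in the comment above; the target values are examples constructed by hand rather than by Dial.

package main

import (
	"fmt"
	"net/url"

	"google.golang.org/grpc/resolver"
)

func main() {
	// As produced by parsing "dns:///my-service": Path is "/my-service".
	t := resolver.Target{URL: url.URL{Scheme: "dns", Path: "/my-service"}}
	fmt.Println(t.Endpoint()) // my-service

	// The caveat from the comment above: "unix:///path/to/socket" also loses
	// its leading "/", which is why the unix resolver reads URL directly.
	u := resolver.Target{URL: url.URL{Scheme: "unix", Path: "/path/to/socket"}}
	fmt.Println(u.Endpoint()) // path/to/socket
}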
||||
|
||||
// Builder creates a resolver that will be used to watch name resolution updates.
|
||||
type Builder interface {
|
||||
// Build creates a new resolver for the given target.
|
||||
@@ -264,8 +290,10 @@ type Builder interface {
|
||||
// gRPC dial calls Build synchronously, and fails if the returned error is
|
||||
// not nil.
|
||||
Build(target Target, cc ClientConn, opts BuildOptions) (Resolver, error)
|
||||
// Scheme returns the scheme supported by this resolver.
|
||||
// Scheme is defined at https://github.com/grpc/grpc/blob/master/doc/naming.md.
|
||||
// Scheme returns the scheme supported by this resolver. Scheme is defined
|
||||
// at https://github.com/grpc/grpc/blob/master/doc/naming.md. The returned
|
||||
// string should not contain uppercase characters, as they will not match
|
||||
// the parsed target's scheme as defined in RFC 3986.
|
||||
Scheme() string
|
||||
}
|
||||
|
||||
|
||||
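A hedged sketch of a resolver Builder that follows the updated Register and Scheme documentation above, with an all lower-case scheme; "example" and the static backend are illustrative only.

package exampleresolver

import "google.golang.org/grpc/resolver"

type builder struct{}

// Scheme is all lower-case; an upper-case scheme would never match the
// lower-cased scheme of a parsed target.
func (builder) Scheme() string { return "example" }

func (builder) Build(target resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (resolver.Resolver, error) {
	// Report a single static backend; Endpoint() is what follows "example:///".
	// A real resolver would re-resolve with backoff if UpdateState errored.
	_ = cc.UpdateState(resolver.State{
		Addresses: []resolver.Address{{Addr: target.Endpoint()}},
	})
	return &staticResolver{}, nil
}

type staticResolver struct{}

func (*staticResolver) ResolveNow(resolver.ResolveNowOptions) {}
func (*staticResolver) Close()                                {}

func init() { resolver.Register(builder{}) }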
70
vendor/google.golang.org/grpc/rpc_util.go
generated
vendored
@@ -25,7 +25,6 @@ import (
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"math"
|
||||
"strings"
|
||||
"sync"
|
||||
@@ -77,7 +76,7 @@ func NewGZIPCompressorWithLevel(level int) (Compressor, error) {
|
||||
return &gzipCompressor{
|
||||
pool: sync.Pool{
|
||||
New: func() interface{} {
|
||||
w, err := gzip.NewWriterLevel(ioutil.Discard, level)
|
||||
w, err := gzip.NewWriterLevel(io.Discard, level)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
@@ -143,7 +142,7 @@ func (d *gzipDecompressor) Do(r io.Reader) ([]byte, error) {
|
||||
z.Close()
|
||||
d.pool.Put(z)
|
||||
}()
|
||||
return ioutil.ReadAll(z)
|
||||
return io.ReadAll(z)
|
||||
}
|
||||
|
||||
func (d *gzipDecompressor) Type() string {
|
||||
@@ -160,6 +159,7 @@ type callInfo struct {
|
||||
contentSubtype string
|
||||
codec baseCodec
|
||||
maxRetryRPCBufferSize int
|
||||
onFinish []func(err error)
|
||||
}
|
||||
|
||||
func defaultCallInfo() *callInfo {
|
||||
@@ -296,8 +296,44 @@ func (o FailFastCallOption) before(c *callInfo) error {
|
||||
}
|
||||
func (o FailFastCallOption) after(c *callInfo, attempt *csAttempt) {}
|
||||
|
||||
// OnFinish returns a CallOption that configures a callback to be called when
|
||||
// the call completes. The error passed to the callback is the status of the
|
||||
// RPC, and may be nil. The onFinish callback provided will only be called once
|
||||
// by gRPC. This is mainly intended for use by streaming interceptors, to be
|
||||
// notified when the RPC completes along with information about the status of
|
||||
// the RPC.
|
||||
//
|
||||
// # Experimental
|
||||
//
|
||||
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
|
||||
// later release.
|
||||
func OnFinish(onFinish func(err error)) CallOption {
|
||||
return OnFinishCallOption{
|
||||
OnFinish: onFinish,
|
||||
}
|
||||
}
|
||||
|
||||
// OnFinishCallOption is CallOption that indicates a callback to be called when
|
||||
// the call completes.
|
||||
//
|
||||
// # Experimental
|
||||
//
|
||||
// Notice: This type is EXPERIMENTAL and may be changed or removed in a
|
||||
// later release.
|
||||
type OnFinishCallOption struct {
|
||||
OnFinish func(error)
|
||||
}
|
||||
|
||||
func (o OnFinishCallOption) before(c *callInfo) error {
|
||||
c.onFinish = append(c.onFinish, o.OnFinish)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o OnFinishCallOption) after(c *callInfo, attempt *csAttempt) {}
|
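A hedged usage sketch of the new OnFinish call option documented above; "client" and "GetThing" in the trailing comment stand in for any generated stub and method.

package callopts

import (
	"log"

	"google.golang.org/grpc"
)

// finishLogger returns a CallOption that logs the final RPC status exactly once.
func finishLogger(rpcName string) grpc.CallOption {
	return grpc.OnFinish(func(err error) {
		// err is the RPC's final status error; nil means the call succeeded.
		log.Printf("%s finished: %v", rpcName, err)
	})
}

// Usage (illustrative): resp, err := client.GetThing(ctx, req, finishLogger("GetThing"))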
||||
|
||||
// MaxCallRecvMsgSize returns a CallOption which sets the maximum message size
|
||||
// in bytes the client can receive.
|
||||
// in bytes the client can receive. If this is not set, gRPC uses the default
|
||||
// 4MB.
|
||||
func MaxCallRecvMsgSize(bytes int) CallOption {
|
||||
return MaxRecvMsgSizeCallOption{MaxRecvMsgSize: bytes}
|
||||
}
|
||||
@@ -320,7 +356,8 @@ func (o MaxRecvMsgSizeCallOption) before(c *callInfo) error {
|
||||
func (o MaxRecvMsgSizeCallOption) after(c *callInfo, attempt *csAttempt) {}
|
||||
|
||||
// MaxCallSendMsgSize returns a CallOption which sets the maximum message size
|
||||
// in bytes the client can send.
|
||||
// in bytes the client can send. If this is not set, gRPC uses the default
|
||||
// `math.MaxInt32`.
|
||||
func MaxCallSendMsgSize(bytes int) CallOption {
|
||||
return MaxSendMsgSizeCallOption{MaxSendMsgSize: bytes}
|
||||
}
|
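A hedged sketch of setting the message-size call options documented above as connection-wide defaults; the target address and sizes are placeholders.

package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial(
		"localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(16*1024*1024), // raise the 4MB receive default
			grpc.MaxCallSendMsgSize(4*1024*1024),  // cap what this client may send
		),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}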
||||
@@ -657,12 +694,13 @@ func msgHeader(data, compData []byte) (hdr []byte, payload []byte) {
|
||||
|
||||
func outPayload(client bool, msg interface{}, data, payload []byte, t time.Time) *stats.OutPayload {
|
||||
return &stats.OutPayload{
|
||||
Client: client,
|
||||
Payload: msg,
|
||||
Data: data,
|
||||
Length: len(data),
|
||||
WireLength: len(payload) + headerLen,
|
||||
SentTime: t,
|
||||
Client: client,
|
||||
Payload: msg,
|
||||
Data: data,
|
||||
Length: len(data),
|
||||
WireLength: len(payload) + headerLen,
|
||||
CompressedLength: len(payload),
|
||||
SentTime: t,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -683,7 +721,7 @@ func checkRecvPayload(pf payloadFormat, recvCompress string, haveCompressor bool
|
||||
}
|
||||
|
||||
type payloadInfo struct {
|
||||
wireLength int // The compressed length got from wire.
|
||||
compressedLength int // The compressed length got from wire.
|
||||
uncompressedBytes []byte
|
||||
}
|
||||
|
||||
@@ -693,7 +731,7 @@ func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxRecei
|
||||
return nil, err
|
||||
}
|
||||
if payInfo != nil {
|
||||
payInfo.wireLength = len(d)
|
||||
payInfo.compressedLength = len(d)
|
||||
}
|
||||
|
||||
if st := checkRecvPayload(pf, s.RecvCompress(), compressor != nil || dc != nil); st != nil {
|
||||
@@ -711,7 +749,7 @@ func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxRecei
|
||||
d, size, err = decompress(compressor, d, maxReceiveMessageSize)
|
||||
}
|
||||
if err != nil {
|
||||
return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err)
|
||||
return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message: %v", err)
|
||||
}
|
||||
if size > maxReceiveMessageSize {
|
||||
// TODO: Revisit the error code. Currently keep it consistent with java
|
||||
@@ -746,7 +784,7 @@ func decompress(compressor encoding.Compressor, d []byte, maxReceiveMessageSize
|
||||
}
|
||||
// Read from LimitReader with limit max+1. So if the underlying
|
||||
// reader is over limit, the result will be bigger than max.
|
||||
d, err = ioutil.ReadAll(io.LimitReader(dcReader, int64(maxReceiveMessageSize)+1))
|
||||
d, err = io.ReadAll(io.LimitReader(dcReader, int64(maxReceiveMessageSize)+1))
|
||||
return d, len(d), err
|
||||
}
|
||||
|
||||
@@ -759,7 +797,7 @@ func recv(p *parser, c baseCodec, s *transport.Stream, dc Decompressor, m interf
|
||||
return err
|
||||
}
|
||||
if err := c.Unmarshal(d, m); err != nil {
|
||||
return status.Errorf(codes.Internal, "grpc: failed to unmarshal the received message %v", err)
|
||||
return status.Errorf(codes.Internal, "grpc: failed to unmarshal the received message: %v", err)
|
||||
}
|
||||
if payInfo != nil {
|
||||
payInfo.uncompressedBytes = d
|
||||
|
||||
230
vendor/google.golang.org/grpc/server.go
generated
vendored
@@ -45,6 +45,7 @@ import (
|
||||
"google.golang.org/grpc/internal/channelz"
|
||||
"google.golang.org/grpc/internal/grpcrand"
|
||||
"google.golang.org/grpc/internal/grpcsync"
|
||||
"google.golang.org/grpc/internal/grpcutil"
|
||||
"google.golang.org/grpc/internal/transport"
|
||||
"google.golang.org/grpc/keepalive"
|
||||
"google.golang.org/grpc/metadata"
|
||||
@@ -74,10 +75,10 @@ func init() {
|
||||
srv.drainServerTransports(addr)
|
||||
}
|
||||
internal.AddGlobalServerOptions = func(opt ...ServerOption) {
|
||||
extraServerOptions = append(extraServerOptions, opt...)
|
||||
globalServerOptions = append(globalServerOptions, opt...)
|
||||
}
|
||||
internal.ClearGlobalServerOptions = func() {
|
||||
extraServerOptions = nil
|
||||
globalServerOptions = nil
|
||||
}
|
||||
internal.BinaryLogger = binaryLogger
|
||||
internal.JoinServerOptions = newJoinServerOption
|
||||
@@ -183,7 +184,7 @@ var defaultServerOptions = serverOptions{
|
||||
writeBufferSize: defaultWriteBufSize,
|
||||
readBufferSize: defaultReadBufSize,
|
||||
}
|
||||
var extraServerOptions []ServerOption
|
||||
var globalServerOptions []ServerOption
|
||||
|
||||
// A ServerOption sets options such as credentials, codec and keepalive parameters, etc.
|
||||
type ServerOption interface {
|
||||
@@ -233,10 +234,11 @@ func newJoinServerOption(opts ...ServerOption) ServerOption {
|
||||
return &joinServerOption{opts: opts}
|
||||
}
|
||||
|
||||
// WriteBufferSize determines how much data can be batched before doing a write on the wire.
|
||||
// The corresponding memory allocation for this buffer will be twice the size to keep syscalls low.
|
||||
// The default value for this buffer is 32KB.
|
||||
// Zero will disable the write buffer such that each write will be on underlying connection.
|
||||
// WriteBufferSize determines how much data can be batched before doing a write
|
||||
// on the wire. The corresponding memory allocation for this buffer will be
|
||||
// twice the size to keep syscalls low. The default value for this buffer is
|
||||
// 32KB. Zero or negative values will disable the write buffer such that each
|
||||
// write will be on underlying connection.
|
||||
// Note: A Send call may not directly translate to a write.
|
||||
func WriteBufferSize(s int) ServerOption {
|
||||
return newFuncServerOption(func(o *serverOptions) {
|
||||
@@ -244,11 +246,10 @@ func WriteBufferSize(s int) ServerOption {
|
||||
})
|
||||
}
|
||||
|
||||
// ReadBufferSize lets you set the size of read buffer, this determines how much data can be read at most
|
||||
// for one read syscall.
|
||||
// The default value for this buffer is 32KB.
|
||||
// Zero will disable read buffer for a connection so data framer can access the underlying
|
||||
// conn directly.
|
||||
// ReadBufferSize lets you set the size of read buffer, this determines how much
|
||||
// data can be read at most for one read syscall. The default value for this
|
||||
// buffer is 32KB. Zero or negative values will disable read buffer for a
|
||||
// connection so data framer can access the underlying conn directly.
|
||||
func ReadBufferSize(s int) ServerOption {
|
||||
return newFuncServerOption(func(o *serverOptions) {
|
||||
o.readBufferSize = s
|
||||
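A hedged sketch of tuning the server buffers per the updated WriteBufferSize and ReadBufferSize comments above; the listen address and sizes are placeholders.

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer(
		grpc.WriteBufferSize(64*1024), // batch up to 64KB before each write
		grpc.ReadBufferSize(0),        // zero or negative disables the read buffer
	)
	log.Fatal(s.Serve(lis))
}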
@@ -600,7 +601,7 @@ func (s *Server) stopServerWorkers() {
|
||||
// started to accept requests yet.
|
||||
func NewServer(opt ...ServerOption) *Server {
|
||||
opts := defaultServerOptions
|
||||
for _, o := range extraServerOptions {
|
||||
for _, o := range globalServerOptions {
|
||||
o.apply(&opts)
|
||||
}
|
||||
for _, o := range opt {
|
||||
@@ -942,7 +943,7 @@ func (s *Server) newHTTP2Transport(c net.Conn) transport.ServerTransport {
|
||||
}
|
||||
|
||||
func (s *Server) serveStreams(st transport.ServerTransport) {
|
||||
defer st.Close()
|
||||
defer st.Close(errors.New("finished serving streams for the server transport"))
|
||||
var wg sync.WaitGroup
|
||||
|
||||
var roundRobinCounter uint32
|
||||
@@ -1008,7 +1009,8 @@ var _ http.Handler = (*Server)(nil)
|
||||
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
|
||||
st, err := transport.NewServerHandlerTransport(w, r, s.opts.statsHandlers)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
// Errors returned from transport.NewServerHandlerTransport have
|
||||
// already been written to w.
|
||||
return
|
||||
}
|
||||
if !s.addConn(listenerAddressForServeHTTP, st) {
|
||||
@@ -1046,7 +1048,7 @@ func (s *Server) addConn(addr string, st transport.ServerTransport) bool {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
if s.conns == nil {
|
||||
st.Close()
|
||||
st.Close(errors.New("Server.addConn called when server has already been stopped"))
|
||||
return false
|
||||
}
|
||||
if s.drain {
|
||||
@@ -1150,21 +1152,16 @@ func chainUnaryServerInterceptors(s *Server) {
|
||||
|
||||
func chainUnaryInterceptors(interceptors []UnaryServerInterceptor) UnaryServerInterceptor {
|
||||
return func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (interface{}, error) {
|
||||
// the struct ensures the variables are allocated together, rather than separately, since we
|
||||
// know they should be garbage collected together. This saves 1 allocation and decreases
|
||||
// time/call by about 10% on the microbenchmark.
|
||||
var state struct {
|
||||
i int
|
||||
next UnaryHandler
|
||||
}
|
||||
state.next = func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
if state.i == len(interceptors)-1 {
|
||||
return interceptors[state.i](ctx, req, info, handler)
|
||||
}
|
||||
state.i++
|
||||
return interceptors[state.i-1](ctx, req, info, state.next)
|
||||
}
|
||||
return state.next(ctx, req)
|
||||
return interceptors[0](ctx, req, info, getChainUnaryHandler(interceptors, 0, info, handler))
|
||||
}
|
||||
}
|
||||
|
||||
func getChainUnaryHandler(interceptors []UnaryServerInterceptor, curr int, info *UnaryServerInfo, finalHandler UnaryHandler) UnaryHandler {
|
||||
if curr == len(interceptors)-1 {
|
||||
return finalHandler
|
||||
}
|
||||
return func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return interceptors[curr+1](ctx, req, info, getChainUnaryHandler(interceptors, curr+1, info, finalHandler))
|
||||
}
|
||||
}
|
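A hedged sketch of the execution order produced by the interceptor chaining rewritten above: the first registered interceptor is outermost, and each handler call descends one level until the real method handler runs. Names are illustrative.

package serverchain

import (
	"context"
	"log"

	"google.golang.org/grpc"
)

func logging(tag string) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		log.Printf("%s: before %s", tag, info.FullMethod)
		resp, err := handler(ctx, req) // next interceptor, or the method handler last
		log.Printf("%s: after %s", tag, info.FullMethod)
		return resp, err
	}
}

// NewServer prints outer/inner "before", runs the handler, then inner/outer "after".
func NewServer() *grpc.Server {
	return grpc.NewServer(grpc.ChainUnaryInterceptor(logging("outer"), logging("inner")))
}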
||||
|
||||
@@ -1256,7 +1253,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
logEntry.PeerAddr = peer.Addr
|
||||
}
|
||||
for _, binlog := range binlogs {
|
||||
binlog.Log(logEntry)
|
||||
binlog.Log(ctx, logEntry)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1267,6 +1264,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
var comp, decomp encoding.Compressor
|
||||
var cp Compressor
|
||||
var dc Decompressor
|
||||
var sendCompressorName string
|
||||
|
||||
// If dc is set and matches the stream's compression, use it. Otherwise, try
|
||||
// to find a matching registered compressor for decomp.
|
||||
@@ -1287,12 +1285,18 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
// NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686.
|
||||
if s.opts.cp != nil {
|
||||
cp = s.opts.cp
|
||||
stream.SetSendCompress(cp.Type())
|
||||
sendCompressorName = cp.Type()
|
||||
} else if rc := stream.RecvCompress(); rc != "" && rc != encoding.Identity {
|
||||
// Legacy compressor not specified; attempt to respond with same encoding.
|
||||
comp = encoding.GetCompressor(rc)
|
||||
if comp != nil {
|
||||
stream.SetSendCompress(rc)
|
||||
sendCompressorName = comp.Name()
|
||||
}
|
||||
}
|
||||
|
||||
if sendCompressorName != "" {
|
||||
if err := stream.SetSendCompress(sendCompressorName); err != nil {
|
||||
return status.Errorf(codes.Internal, "grpc: failed to set send compressor: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1303,7 +1307,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
d, err := recvAndDecompress(&parser{r: stream}, stream, dc, s.opts.maxReceiveMessageSize, payInfo, decomp)
|
||||
if err != nil {
|
||||
if e := t.WriteStatus(stream, status.Convert(err)); e != nil {
|
||||
channelz.Warningf(logger, s.channelzID, "grpc: Server.processUnaryRPC failed to write status %v", e)
|
||||
channelz.Warningf(logger, s.channelzID, "grpc: Server.processUnaryRPC failed to write status: %v", e)
|
||||
}
|
||||
return err
|
||||
}
|
||||
@@ -1316,11 +1320,12 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
}
|
||||
for _, sh := range shs {
|
||||
sh.HandleRPC(stream.Context(), &stats.InPayload{
|
||||
RecvTime: time.Now(),
|
||||
Payload: v,
|
||||
WireLength: payInfo.wireLength + headerLen,
|
||||
Data: d,
|
||||
Length: len(d),
|
||||
RecvTime: time.Now(),
|
||||
Payload: v,
|
||||
Length: len(d),
|
||||
WireLength: payInfo.compressedLength + headerLen,
|
||||
CompressedLength: payInfo.compressedLength,
|
||||
Data: d,
|
||||
})
|
||||
}
|
||||
if len(binlogs) != 0 {
|
||||
@@ -1328,7 +1333,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
Message: d,
|
||||
}
|
||||
for _, binlog := range binlogs {
|
||||
binlog.Log(cm)
|
||||
binlog.Log(stream.Context(), cm)
|
||||
}
|
||||
}
|
||||
if trInfo != nil {
|
||||
@@ -1361,7 +1366,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
Header: h,
|
||||
}
|
||||
for _, binlog := range binlogs {
|
||||
binlog.Log(sh)
|
||||
binlog.Log(stream.Context(), sh)
|
||||
}
|
||||
}
|
||||
st := &binarylog.ServerTrailer{
|
||||
@@ -1369,7 +1374,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
Err: appErr,
|
||||
}
|
||||
for _, binlog := range binlogs {
|
||||
binlog.Log(st)
|
||||
binlog.Log(stream.Context(), st)
|
||||
}
|
||||
}
|
||||
return appErr
|
||||
@@ -1379,6 +1384,11 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
}
|
||||
opts := &transport.Options{Last: true}
|
||||
|
||||
// Server handler could have set new compressor by calling SetSendCompressor.
|
||||
// In case it is set, we need to use it for compressing outbound message.
|
||||
if stream.SendCompress() != sendCompressorName {
|
||||
comp = encoding.GetCompressor(stream.SendCompress())
|
||||
}
|
||||
if err := s.sendResponse(t, stream, reply, cp, opts, comp); err != nil {
|
||||
if err == io.EOF {
|
||||
// The entire stream is done (for unary RPC only).
|
||||
@@ -1406,8 +1416,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
Err: appErr,
|
||||
}
|
||||
for _, binlog := range binlogs {
|
||||
binlog.Log(sh)
|
||||
binlog.Log(st)
|
||||
binlog.Log(stream.Context(), sh)
|
||||
binlog.Log(stream.Context(), st)
|
||||
}
|
||||
}
|
||||
return err
|
||||
@@ -1421,8 +1431,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
Message: reply,
|
||||
}
|
||||
for _, binlog := range binlogs {
|
||||
binlog.Log(sh)
|
||||
binlog.Log(sm)
|
||||
binlog.Log(stream.Context(), sh)
|
||||
binlog.Log(stream.Context(), sm)
|
||||
}
|
||||
}
|
||||
if channelz.IsOn() {
|
||||
@@ -1434,17 +1444,16 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
|
||||
// TODO: Should we be logging if writing status failed here, like above?
|
||||
// Should the logging be in WriteStatus? Should we ignore the WriteStatus
|
||||
// error or allow the stats handler to see it?
|
||||
err = t.WriteStatus(stream, statusOK)
|
||||
if len(binlogs) != 0 {
|
||||
st := &binarylog.ServerTrailer{
|
||||
Trailer: stream.Trailer(),
|
||||
Err: appErr,
|
||||
}
|
||||
for _, binlog := range binlogs {
|
||||
binlog.Log(st)
|
||||
binlog.Log(stream.Context(), st)
|
||||
}
|
||||
}
|
||||
return err
|
||||
return t.WriteStatus(stream, statusOK)
|
||||
}
|
||||
|
||||
// chainStreamServerInterceptors chains all stream server interceptors into one.
|
||||
@@ -1470,21 +1479,16 @@ func chainStreamServerInterceptors(s *Server) {
|
||||
|
||||
func chainStreamInterceptors(interceptors []StreamServerInterceptor) StreamServerInterceptor {
|
||||
return func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error {
|
||||
// the struct ensures the variables are allocated together, rather than separately, since we
|
||||
// know they should be garbage collected together. This saves 1 allocation and decreases
|
||||
// time/call by about 10% on the microbenchmark.
|
||||
var state struct {
|
||||
i int
|
||||
next StreamHandler
|
||||
}
|
||||
state.next = func(srv interface{}, ss ServerStream) error {
|
||||
if state.i == len(interceptors)-1 {
|
||||
return interceptors[state.i](srv, ss, info, handler)
|
||||
}
|
||||
state.i++
|
||||
return interceptors[state.i-1](srv, ss, info, state.next)
|
||||
}
|
||||
return state.next(srv, ss)
|
||||
return interceptors[0](srv, ss, info, getChainStreamHandler(interceptors, 0, info, handler))
|
||||
}
|
||||
}
|
||||
|
||||
func getChainStreamHandler(interceptors []StreamServerInterceptor, curr int, info *StreamServerInfo, finalHandler StreamHandler) StreamHandler {
|
||||
if curr == len(interceptors)-1 {
|
||||
return finalHandler
|
||||
}
|
||||
return func(srv interface{}, stream ServerStream) error {
|
||||
return interceptors[curr+1](srv, stream, info, getChainStreamHandler(interceptors, curr+1, info, finalHandler))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1583,7 +1587,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
|
||||
logEntry.PeerAddr = peer.Addr
|
||||
}
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(logEntry)
|
||||
binlog.Log(stream.Context(), logEntry)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1606,12 +1610,18 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
|
||||
// NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686.
|
||||
if s.opts.cp != nil {
|
||||
ss.cp = s.opts.cp
|
||||
stream.SetSendCompress(s.opts.cp.Type())
|
||||
ss.sendCompressorName = s.opts.cp.Type()
|
||||
} else if rc := stream.RecvCompress(); rc != "" && rc != encoding.Identity {
|
||||
// Legacy compressor not specified; attempt to respond with same encoding.
|
||||
ss.comp = encoding.GetCompressor(rc)
|
||||
if ss.comp != nil {
|
||||
stream.SetSendCompress(rc)
|
||||
ss.sendCompressorName = rc
|
||||
}
|
||||
}
|
||||
|
||||
if ss.sendCompressorName != "" {
|
||||
if err := stream.SetSendCompress(ss.sendCompressorName); err != nil {
|
||||
return status.Errorf(codes.Internal, "grpc: failed to set send compressor: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1649,16 +1659,16 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
|
||||
ss.trInfo.tr.SetError()
|
||||
ss.mu.Unlock()
|
||||
}
|
||||
t.WriteStatus(ss.s, appStatus)
|
||||
if len(ss.binlogs) != 0 {
|
||||
st := &binarylog.ServerTrailer{
|
||||
Trailer: ss.s.Trailer(),
|
||||
Err: appErr,
|
||||
}
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(st)
|
||||
binlog.Log(stream.Context(), st)
|
||||
}
|
||||
}
|
||||
t.WriteStatus(ss.s, appStatus)
|
||||
// TODO: Should we log an error from WriteStatus here and below?
|
||||
return appErr
|
||||
}
|
||||
@@ -1667,17 +1677,16 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
|
||||
ss.trInfo.tr.LazyLog(stringer("OK"), false)
|
||||
ss.mu.Unlock()
|
||||
}
|
||||
err = t.WriteStatus(ss.s, statusOK)
|
||||
if len(ss.binlogs) != 0 {
|
||||
st := &binarylog.ServerTrailer{
|
||||
Trailer: ss.s.Trailer(),
|
||||
Err: appErr,
|
||||
}
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(st)
|
||||
binlog.Log(stream.Context(), st)
|
||||
}
|
||||
}
|
||||
return err
|
||||
return t.WriteStatus(ss.s, statusOK)
|
||||
}
|
||||
|
||||
func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Stream, trInfo *traceInfo) {
|
||||
@@ -1819,7 +1828,7 @@ func (s *Server) Stop() {
|
||||
}
|
||||
for _, cs := range conns {
|
||||
for st := range cs {
|
||||
st.Close()
|
||||
st.Close(errors.New("Server.Stop called"))
|
||||
}
|
||||
}
|
||||
if s.opts.numServerWorkers > 0 {
|
||||
@@ -1944,6 +1953,60 @@ func SendHeader(ctx context.Context, md metadata.MD) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetSendCompressor sets a compressor for outbound messages from the server.
// It must not be called after any event that causes headers to be sent
// (see ServerStream.SetHeader for the complete list). Provided compressor is
// used when below conditions are met:
//
// - compressor is registered via encoding.RegisterCompressor
// - compressor name must exist in the client advertised compressor names
// sent in grpc-accept-encoding header. Use ClientSupportedCompressors to
// get client supported compressor names.
//
// The context provided must be the context passed to the server's handler.
// It must be noted that compressor name encoding.Identity disables the
// outbound compression.
// By default, server messages will be sent using the same compressor with
// which request messages were sent.
//
// It is not safe to call SetSendCompressor concurrently with SendHeader and
// SendMsg.
//
// # Experimental
//
// Notice: This function is EXPERIMENTAL and may be changed or removed in a
// later release.
func SetSendCompressor(ctx context.Context, name string) error {
stream, ok := ServerTransportStreamFromContext(ctx).(*transport.Stream)
if !ok || stream == nil {
return fmt.Errorf("failed to fetch the stream from the given context")
}

if err := validateSendCompressor(name, stream.ClientAdvertisedCompressors()); err != nil {
return fmt.Errorf("unable to set send compressor: %w", err)
}

return stream.SetSendCompress(name)
}

// ClientSupportedCompressors returns compressor names advertised by the client
// via grpc-accept-encoding header.
//
// The context provided must be the context passed to the server's handler.
//
// # Experimental
//
// Notice: This function is EXPERIMENTAL and may be changed or removed in a
// later release.
func ClientSupportedCompressors(ctx context.Context) ([]string, error) {
stream, ok := ServerTransportStreamFromContext(ctx).(*transport.Stream)
if !ok || stream == nil {
return nil, fmt.Errorf("failed to fetch the stream from the given context %v", ctx)
}

return strings.Split(stream.ClientAdvertisedCompressors(), ","), nil
}
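For illustration only (not part of the diff), the sketch below shows how a server-side handler might combine the two new helpers: it inspects the compressors the client advertised and, if gzip is among them, switches the response compressor before any headers are sent. The handler name and echo behaviour are hypothetical; the "gzip" name comes from registering the grpc/encoding/gzip package.

```go
package example

import (
	"context"

	"google.golang.org/grpc"
	_ "google.golang.org/grpc/encoding/gzip" // registers the "gzip" compressor
)

// echo is the body of a hypothetical unary handler.
func echo(ctx context.Context, req interface{}) (interface{}, error) {
	// Which compressors did the client advertise in grpc-accept-encoding?
	names, err := grpc.ClientSupportedCompressors(ctx)
	if err != nil {
		return nil, err
	}
	for _, name := range names {
		if name == "gzip" {
			// Must run before headers are sent (before SendHeader/SendMsg).
			if err := grpc.SetSendCompressor(ctx, "gzip"); err != nil {
				return nil, err
			}
			break
		}
	}
	return req, nil // echo the request back, gzip-compressed if selected above
}
```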
|
||||
// SetTrailer sets the trailer metadata that will be sent when an RPC returns.
|
||||
// When called more than once, all the provided metadata will be merged.
|
||||
//
|
||||
@@ -1978,3 +2041,22 @@ type channelzServer struct {
|
||||
func (c *channelzServer) ChannelzMetric() *channelz.ServerInternalMetric {
|
||||
return c.s.channelzMetric()
|
||||
}
|
||||
|
||||
// validateSendCompressor returns an error when given compressor name cannot be
|
||||
// handled by the server or the client based on the advertised compressors.
|
||||
func validateSendCompressor(name, clientCompressors string) error {
|
||||
if name == encoding.Identity {
|
||||
return nil
|
||||
}
|
||||
|
||||
if !grpcutil.IsCompressorNameRegistered(name) {
|
||||
return fmt.Errorf("compressor not registered %q", name)
|
||||
}
|
||||
|
||||
for _, c := range strings.Split(clientCompressors, ",") {
|
||||
if c == name {
|
||||
return nil // found match
|
||||
}
|
||||
}
|
||||
return fmt.Errorf("client does not support compressor %q", name)
|
||||
}
|
||||
|
||||
10
vendor/google.golang.org/grpc/service_config.go
generated
vendored
@@ -226,7 +226,7 @@ func parseServiceConfig(js string) *serviceconfig.ParseResult {
|
||||
var rsc jsonSC
|
||||
err := json.Unmarshal([]byte(js), &rsc)
|
||||
if err != nil {
|
||||
logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err)
|
||||
logger.Warningf("grpc: unmarshaling service config %s: %v", js, err)
|
||||
return &serviceconfig.ParseResult{Err: err}
|
||||
}
|
||||
sc := ServiceConfig{
|
||||
@@ -254,7 +254,7 @@ func parseServiceConfig(js string) *serviceconfig.ParseResult {
|
||||
}
|
||||
d, err := parseDuration(m.Timeout)
|
||||
if err != nil {
|
||||
logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err)
|
||||
logger.Warningf("grpc: unmarshaling service config %s: %v", js, err)
|
||||
return &serviceconfig.ParseResult{Err: err}
|
||||
}
|
||||
|
||||
@@ -263,7 +263,7 @@ func parseServiceConfig(js string) *serviceconfig.ParseResult {
|
||||
Timeout: d,
|
||||
}
|
||||
if mc.RetryPolicy, err = convertRetryPolicy(m.RetryPolicy); err != nil {
|
||||
logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err)
|
||||
logger.Warningf("grpc: unmarshaling service config %s: %v", js, err)
|
||||
return &serviceconfig.ParseResult{Err: err}
|
||||
}
|
||||
if m.MaxRequestMessageBytes != nil {
|
||||
@@ -283,13 +283,13 @@ func parseServiceConfig(js string) *serviceconfig.ParseResult {
|
||||
for i, n := range *m.Name {
|
||||
path, err := n.generatePath()
|
||||
if err != nil {
|
||||
logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to methodConfig[%d]: %v", js, i, err)
|
||||
logger.Warningf("grpc: error unmarshaling service config %s due to methodConfig[%d]: %v", js, i, err)
|
||||
return &serviceconfig.ParseResult{Err: err}
|
||||
}
|
||||
|
||||
if _, ok := paths[path]; ok {
|
||||
err = errDuplicatedName
|
||||
logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to methodConfig[%d]: %v", js, i, err)
|
||||
logger.Warningf("grpc: error unmarshaling service config %s due to methodConfig[%d]: %v", js, i, err)
|
||||
return &serviceconfig.ParseResult{Err: err}
|
||||
}
|
||||
paths[path] = struct{}{}
|
||||
|
||||
22
vendor/google.golang.org/grpc/stats/stats.go
generated
vendored
@@ -67,10 +67,18 @@ type InPayload struct {
Payload interface{}
// Data is the serialized message payload.
Data []byte
// Length is the length of uncompressed data.

// Length is the size of the uncompressed payload data. Does not include any
// framing (gRPC or HTTP/2).
Length int
// WireLength is the length of data on wire (compressed, signed, encrypted).
// CompressedLength is the size of the compressed payload data. Does not
// include any framing (gRPC or HTTP/2). Same as Length if compression not
// enabled.
CompressedLength int
// WireLength is the size of the compressed payload data plus gRPC framing.
// Does not include HTTP/2 framing.
WireLength int

// RecvTime is the time when the payload is received.
RecvTime time.Time
}
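To see what the new fields report in practice, here is a small, hypothetical stats.Handler sketch (not part of the diff) that logs the uncompressed, compressed, and on-wire sizes of every inbound payload. It would be attached to a server with grpc.NewServer(grpc.StatsHandler(payloadLogger{})).

```go
package example

import (
	"context"
	"log"

	"google.golang.org/grpc/stats"
)

// payloadLogger reports how well inbound messages compressed, using the
// Length, CompressedLength and WireLength fields documented above.
type payloadLogger struct{}

func (payloadLogger) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context   { return ctx }
func (payloadLogger) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context { return ctx }
func (payloadLogger) HandleConn(context.Context, stats.ConnStats)                       {}

func (payloadLogger) HandleRPC(_ context.Context, s stats.RPCStats) {
	if in, ok := s.(*stats.InPayload); ok {
		log.Printf("uncompressed=%dB compressed=%dB on-wire=%dB",
			in.Length, in.CompressedLength, in.WireLength)
	}
}
```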
@@ -129,9 +137,15 @@ type OutPayload struct {
|
||||
Payload interface{}
|
||||
// Data is the serialized message payload.
|
||||
Data []byte
|
||||
// Length is the length of uncompressed data.
|
||||
// Length is the size of the uncompressed payload data. Does not include any
|
||||
// framing (gRPC or HTTP/2).
|
||||
Length int
|
||||
// WireLength is the length of data on wire (compressed, signed, encrypted).
|
||||
// CompressedLength is the size of the compressed payload data. Does not
|
||||
// include any framing (gRPC or HTTP/2). Same as Length if compression not
|
||||
// enabled.
|
||||
CompressedLength int
|
||||
// WireLength is the size of the compressed payload data plus gRPC framing.
|
||||
// Does not include HTTP/2 framing.
|
||||
WireLength int
|
||||
// SentTime is the time when the payload is sent.
|
||||
SentTime time.Time
|
||||
|
||||
103
vendor/google.golang.org/grpc/stream.go
generated
vendored
@@ -168,10 +168,19 @@ func NewClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
|
||||
}
|
||||
|
||||
func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) {
|
||||
if md, _, ok := metadata.FromOutgoingContextRaw(ctx); ok {
|
||||
if md, added, ok := metadata.FromOutgoingContextRaw(ctx); ok {
|
||||
// validate md
|
||||
if err := imetadata.Validate(md); err != nil {
|
||||
return nil, status.Error(codes.Internal, err.Error())
|
||||
}
|
||||
// validate added
|
||||
for _, kvs := range added {
|
||||
for i := 0; i < len(kvs); i += 2 {
|
||||
if err := imetadata.ValidatePair(kvs[i], kvs[i+1]); err != nil {
|
||||
return nil, status.Error(codes.Internal, err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if channelz.IsOn() {
|
||||
cc.incrCallsStarted()
|
||||
@@ -352,7 +361,7 @@ func newClientStreamWithParams(ctx context.Context, desc *StreamDesc, cc *Client
|
||||
}
|
||||
}
|
||||
for _, binlog := range cs.binlogs {
|
||||
binlog.Log(logEntry)
|
||||
binlog.Log(cs.ctx, logEntry)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -416,7 +425,7 @@ func (cs *clientStream) newAttemptLocked(isTransparent bool) (*csAttempt, error)
|
||||
ctx = trace.NewContext(ctx, trInfo.tr)
|
||||
}
|
||||
|
||||
if cs.cc.parsedTarget.Scheme == "xds" {
|
||||
if cs.cc.parsedTarget.URL.Scheme == "xds" {
|
||||
// Add extra metadata (metadata that will be added by transport) to context
|
||||
// so the balancer can see them.
|
||||
ctx = grpcutil.WithExtraMetadata(ctx, metadata.Pairs(
|
||||
@@ -438,7 +447,7 @@ func (a *csAttempt) getTransport() error {
|
||||
cs := a.cs
|
||||
|
||||
var err error
|
||||
a.t, a.done, err = cs.cc.getTransport(a.ctx, cs.callInfo.failFast, cs.callHdr.Method)
|
||||
a.t, a.pickResult, err = cs.cc.getTransport(a.ctx, cs.callInfo.failFast, cs.callHdr.Method)
|
||||
if err != nil {
|
||||
if de, ok := err.(dropError); ok {
|
||||
err = de.error
|
||||
@@ -455,6 +464,25 @@ func (a *csAttempt) getTransport() error {
|
||||
func (a *csAttempt) newStream() error {
|
||||
cs := a.cs
|
||||
cs.callHdr.PreviousAttempts = cs.numRetries
|
||||
|
||||
// Merge metadata stored in PickResult, if any, with existing call metadata.
|
||||
// It is safe to overwrite the csAttempt's context here, since all state
|
||||
// maintained in it are local to the attempt. When the attempt has to be
|
||||
// retried, a new instance of csAttempt will be created.
|
||||
if a.pickResult.Metatada != nil {
|
||||
// We currently do not have a function it the metadata package which
|
||||
// merges given metadata with existing metadata in a context. Existing
|
||||
// function `AppendToOutgoingContext()` takes a variadic argument of key
|
||||
// value pairs.
|
||||
//
|
||||
// TODO: Make it possible to retrieve key value pairs from metadata.MD
|
||||
// in a form passable to AppendToOutgoingContext(), or create a version
|
||||
// of AppendToOutgoingContext() that accepts a metadata.MD.
|
||||
md, _ := metadata.FromOutgoingContext(a.ctx)
|
||||
md = metadata.Join(md, a.pickResult.Metatada)
|
||||
a.ctx = metadata.NewOutgoingContext(a.ctx, md)
|
||||
}
|
||||
|
||||
s, err := a.t.NewStream(a.ctx, cs.callHdr)
|
||||
if err != nil {
|
||||
nse, ok := err.(*transport.NewStreamError)
|
||||
@@ -529,12 +557,12 @@ type clientStream struct {
|
||||
// csAttempt implements a single transport stream attempt within a
|
||||
// clientStream.
|
||||
type csAttempt struct {
|
||||
ctx context.Context
|
||||
cs *clientStream
|
||||
t transport.ClientTransport
|
||||
s *transport.Stream
|
||||
p *parser
|
||||
done func(balancer.DoneInfo)
|
||||
ctx context.Context
|
||||
cs *clientStream
|
||||
t transport.ClientTransport
|
||||
s *transport.Stream
|
||||
p *parser
|
||||
pickResult balancer.PickResult
|
||||
|
||||
finished bool
|
||||
dc Decompressor
|
||||
@@ -781,7 +809,7 @@ func (cs *clientStream) Header() (metadata.MD, error) {
|
||||
}
|
||||
cs.serverHeaderBinlogged = true
|
||||
for _, binlog := range cs.binlogs {
|
||||
binlog.Log(logEntry)
|
||||
binlog.Log(cs.ctx, logEntry)
|
||||
}
|
||||
}
|
||||
return m, nil
|
||||
@@ -862,7 +890,7 @@ func (cs *clientStream) SendMsg(m interface{}) (err error) {
|
||||
Message: data,
|
||||
}
|
||||
for _, binlog := range cs.binlogs {
|
||||
binlog.Log(cm)
|
||||
binlog.Log(cs.ctx, cm)
|
||||
}
|
||||
}
|
||||
return err
|
||||
@@ -886,7 +914,7 @@ func (cs *clientStream) RecvMsg(m interface{}) error {
|
||||
Message: recvInfo.uncompressedBytes,
|
||||
}
|
||||
for _, binlog := range cs.binlogs {
|
||||
binlog.Log(sm)
|
||||
binlog.Log(cs.ctx, sm)
|
||||
}
|
||||
}
|
||||
if err != nil || !cs.desc.ServerStreams {
|
||||
@@ -907,7 +935,7 @@ func (cs *clientStream) RecvMsg(m interface{}) error {
|
||||
logEntry.PeerAddr = peer.Addr
|
||||
}
|
||||
for _, binlog := range cs.binlogs {
|
||||
binlog.Log(logEntry)
|
||||
binlog.Log(cs.ctx, logEntry)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -934,7 +962,7 @@ func (cs *clientStream) CloseSend() error {
|
||||
OnClientSide: true,
|
||||
}
|
||||
for _, binlog := range cs.binlogs {
|
||||
binlog.Log(chc)
|
||||
binlog.Log(cs.ctx, chc)
|
||||
}
|
||||
}
|
||||
// We never returned an error here for reasons.
|
||||
@@ -952,6 +980,9 @@ func (cs *clientStream) finish(err error) {
|
||||
return
|
||||
}
|
||||
cs.finished = true
|
||||
for _, onFinish := range cs.callInfo.onFinish {
|
||||
onFinish(err)
|
||||
}
|
||||
cs.commitAttemptLocked()
|
||||
if cs.attempt != nil {
|
||||
cs.attempt.finish(err)
|
||||
@@ -973,7 +1004,7 @@ func (cs *clientStream) finish(err error) {
|
||||
OnClientSide: true,
|
||||
}
|
||||
for _, binlog := range cs.binlogs {
|
||||
binlog.Log(c)
|
||||
binlog.Log(cs.ctx, c)
|
||||
}
|
||||
}
|
||||
if err == nil {
|
||||
@@ -1062,9 +1093,10 @@ func (a *csAttempt) recvMsg(m interface{}, payInfo *payloadInfo) (err error) {
|
||||
RecvTime: time.Now(),
|
||||
Payload: m,
|
||||
// TODO truncate large payload.
|
||||
Data: payInfo.uncompressedBytes,
|
||||
WireLength: payInfo.wireLength + headerLen,
|
||||
Length: len(payInfo.uncompressedBytes),
|
||||
Data: payInfo.uncompressedBytes,
|
||||
WireLength: payInfo.compressedLength + headerLen,
|
||||
CompressedLength: payInfo.compressedLength,
|
||||
Length: len(payInfo.uncompressedBytes),
|
||||
})
|
||||
}
|
||||
if channelz.IsOn() {
|
||||
@@ -1103,12 +1135,12 @@ func (a *csAttempt) finish(err error) {
|
||||
tr = a.s.Trailer()
|
||||
}
|
||||
|
||||
if a.done != nil {
|
||||
if a.pickResult.Done != nil {
|
||||
br := false
|
||||
if a.s != nil {
|
||||
br = a.s.BytesReceived()
|
||||
}
|
||||
a.done(balancer.DoneInfo{
|
||||
a.pickResult.Done(balancer.DoneInfo{
|
||||
Err: err,
|
||||
Trailer: tr,
|
||||
BytesSent: a.s != nil,
|
||||
@@ -1464,6 +1496,9 @@ type ServerStream interface {
|
||||
// It is safe to have a goroutine calling SendMsg and another goroutine
|
||||
// calling RecvMsg on the same stream at the same time, but it is not safe
|
||||
// to call SendMsg on the same stream in different goroutines.
|
||||
//
|
||||
// It is not safe to modify the message after calling SendMsg. Tracing
|
||||
// libraries and stats handlers may use the message lazily.
|
||||
SendMsg(m interface{}) error
|
||||
// RecvMsg blocks until it receives a message into m or the stream is
|
||||
// done. It returns io.EOF when the client has performed a CloseSend. On
|
||||
@@ -1489,6 +1524,8 @@ type serverStream struct {
|
||||
comp encoding.Compressor
|
||||
decomp encoding.Compressor
|
||||
|
||||
sendCompressorName string
|
||||
|
||||
maxReceiveMessageSize int
|
||||
maxSendMessageSize int
|
||||
trInfo *traceInfo
|
||||
@@ -1536,7 +1573,7 @@ func (ss *serverStream) SendHeader(md metadata.MD) error {
|
||||
}
|
||||
ss.serverHeaderBinlogged = true
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(sh)
|
||||
binlog.Log(ss.ctx, sh)
|
||||
}
|
||||
}
|
||||
return err
|
||||
@@ -1581,6 +1618,13 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) {
|
||||
}
|
||||
}()
|
||||
|
||||
// Server handler could have set new compressor by calling SetSendCompressor.
|
||||
// In case it is set, we need to use it for compressing outbound message.
|
||||
if sendCompressorsName := ss.s.SendCompress(); sendCompressorsName != ss.sendCompressorName {
|
||||
ss.comp = encoding.GetCompressor(sendCompressorsName)
|
||||
ss.sendCompressorName = sendCompressorsName
|
||||
}
|
||||
|
||||
// load hdr, payload, data
|
||||
hdr, payload, data, err := prepareMsg(m, ss.codec, ss.cp, ss.comp)
|
||||
if err != nil {
|
||||
@@ -1602,14 +1646,14 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) {
|
||||
}
|
||||
ss.serverHeaderBinlogged = true
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(sh)
|
||||
binlog.Log(ss.ctx, sh)
|
||||
}
|
||||
}
|
||||
sm := &binarylog.ServerMessage{
|
||||
Message: data,
|
||||
}
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(sm)
|
||||
binlog.Log(ss.ctx, sm)
|
||||
}
|
||||
}
|
||||
if len(ss.statsHandler) != 0 {
|
||||
@@ -1657,7 +1701,7 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) {
|
||||
if len(ss.binlogs) != 0 {
|
||||
chc := &binarylog.ClientHalfClose{}
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(chc)
|
||||
binlog.Log(ss.ctx, chc)
|
||||
}
|
||||
}
|
||||
return err
|
||||
@@ -1673,9 +1717,10 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) {
|
||||
RecvTime: time.Now(),
|
||||
Payload: m,
|
||||
// TODO truncate large payload.
|
||||
Data: payInfo.uncompressedBytes,
|
||||
WireLength: payInfo.wireLength + headerLen,
|
||||
Length: len(payInfo.uncompressedBytes),
|
||||
Data: payInfo.uncompressedBytes,
|
||||
Length: len(payInfo.uncompressedBytes),
|
||||
WireLength: payInfo.compressedLength + headerLen,
|
||||
CompressedLength: payInfo.compressedLength,
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1684,7 +1729,7 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) {
|
||||
Message: payInfo.uncompressedBytes,
|
||||
}
|
||||
for _, binlog := range ss.binlogs {
|
||||
binlog.Log(cm)
|
||||
binlog.Log(ss.ctx, cm)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
|
||||
2
vendor/google.golang.org/grpc/version.go
generated
vendored
@@ -19,4 +19,4 @@
|
||||
package grpc
|
||||
|
||||
// Version is the current grpc version.
|
||||
const Version = "1.51.0"
|
||||
const Version = "1.54.0"
|
||||
|
||||
34
vendor/google.golang.org/grpc/vet.sh
generated
vendored
@@ -41,16 +41,8 @@ if [[ "$1" = "-install" ]]; then
|
||||
github.com/client9/misspell/cmd/misspell
|
||||
popd
|
||||
if [[ -z "${VET_SKIP_PROTO}" ]]; then
|
||||
if [[ "${TRAVIS}" = "true" ]]; then
|
||||
PROTOBUF_VERSION=3.14.0
|
||||
PROTOC_FILENAME=protoc-${PROTOBUF_VERSION}-linux-x86_64.zip
|
||||
pushd /home/travis
|
||||
wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/${PROTOC_FILENAME}
|
||||
unzip ${PROTOC_FILENAME}
|
||||
bin/protoc --version
|
||||
popd
|
||||
elif [[ "${GITHUB_ACTIONS}" = "true" ]]; then
|
||||
PROTOBUF_VERSION=3.14.0
|
||||
if [[ "${GITHUB_ACTIONS}" = "true" ]]; then
|
||||
PROTOBUF_VERSION=22.0 # a.k.a v4.22.0 in pb.go files.
|
||||
PROTOC_FILENAME=protoc-${PROTOBUF_VERSION}-linux-x86_64.zip
|
||||
pushd /home/runner/go
|
||||
wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/${PROTOC_FILENAME}
|
||||
@@ -66,6 +58,16 @@ elif [[ "$#" -ne 0 ]]; then
|
||||
die "Unknown argument(s): $*"
|
||||
fi
|
||||
|
||||
# - Check that generated proto files are up to date.
|
||||
if [[ -z "${VET_SKIP_PROTO}" ]]; then
|
||||
make proto && git status --porcelain 2>&1 | fail_on_output || \
|
||||
(git status; git --no-pager diff; exit 1)
|
||||
fi
|
||||
|
||||
if [[ -n "${VET_ONLY_PROTO}" ]]; then
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# - Ensure all source files contain a copyright message.
|
||||
# (Done in two parts because Darwin "git grep" has broken support for compound
|
||||
# exclusion matches.)
|
||||
@@ -93,13 +95,6 @@ git grep '"github.com/envoyproxy/go-control-plane/envoy' -- '*.go' ':(exclude)*.
|
||||
|
||||
misspell -error .
|
||||
|
||||
# - Check that generated proto files are up to date.
|
||||
if [[ -z "${VET_SKIP_PROTO}" ]]; then
|
||||
PATH="/home/travis/bin:${PATH}" make proto && \
|
||||
git status --porcelain 2>&1 | fail_on_output || \
|
||||
(git status; git --no-pager diff; exit 1)
|
||||
fi
|
||||
|
||||
# - gofmt, goimports, golint (with exceptions for generated code), go vet,
|
||||
# go mod tidy.
|
||||
# Perform these checks on each module inside gRPC.
|
||||
@@ -111,7 +106,7 @@ for MOD_FILE in $(find . -name 'go.mod'); do
|
||||
goimports -l . 2>&1 | not grep -vE "\.pb\.go"
|
||||
golint ./... 2>&1 | not grep -vE "/grpc_testing_not_regenerate/.*\.pb\.go:"
|
||||
|
||||
go mod tidy
|
||||
go mod tidy -compat=1.17
|
||||
git status --porcelain 2>&1 | fail_on_output || \
|
||||
(git status; git --no-pager diff; exit 1)
|
||||
popd
|
||||
@@ -121,8 +116,9 @@ done
|
||||
#
|
||||
# TODO(dfawley): don't use deprecated functions in examples or first-party
|
||||
# plugins.
|
||||
# TODO(dfawley): enable ST1019 (duplicate imports) but allow for protobufs.
|
||||
SC_OUT="$(mktemp)"
|
||||
staticcheck -go 1.9 -checks 'inherit,-ST1015' ./... > "${SC_OUT}" || true
|
||||
staticcheck -go 1.19 -checks 'inherit,-ST1015,-ST1019,-SA1019' ./... > "${SC_OUT}" || true
|
||||
# Error if anything other than deprecation warnings are printed.
|
||||
not grep -v "is deprecated:.*SA1019" "${SC_OUT}"
|
||||
# Only ignore the following deprecated types/fields/functions.
|
||||
|
||||
2
vendor/google.golang.org/protobuf/encoding/protojson/doc.go
generated
vendored
@@ -4,7 +4,7 @@
|
||||
|
||||
// Package protojson marshals and unmarshals protocol buffer messages as JSON
|
||||
// format. It follows the guide at
|
||||
// https://developers.google.com/protocol-buffers/docs/proto3#json.
|
||||
// https://protobuf.dev/programming-guides/proto3#json.
|
||||
//
|
||||
// This package produces a different output than the standard "encoding/json"
|
||||
// package, which does not operate correctly on protocol buffer messages.
|
||||
|
||||
12
vendor/google.golang.org/protobuf/encoding/protojson/well_known_types.go
generated
vendored
@@ -814,16 +814,22 @@ func (d decoder) unmarshalTimestamp(m protoreflect.Message) error {
return d.unexpectedTokenError(tok)
}

t, err := time.Parse(time.RFC3339Nano, tok.ParsedString())
s := tok.ParsedString()
t, err := time.Parse(time.RFC3339Nano, s)
if err != nil {
return d.newError(tok.Pos(), "invalid %v value %v", genid.Timestamp_message_fullname, tok.RawString())
}
// Validate seconds. No need to validate nanos because time.Parse would have
// covered that already.
// Validate seconds.
secs := t.Unix()
if secs < minTimestampSeconds || secs > maxTimestampSeconds {
return d.newError(tok.Pos(), "%v value out of range: %v", genid.Timestamp_message_fullname, tok.RawString())
}
// Validate subseconds.
i := strings.LastIndexByte(s, '.') // start of subsecond field
j := strings.LastIndexAny(s, "Z-+") // start of timezone field
if i >= 0 && j >= i && j-i > len(".999999999") {
return d.newError(tok.Pos(), "invalid %v value %v", genid.Timestamp_message_fullname, tok.RawString())
}

fds := m.Descriptor().Fields()
fdSeconds := fds.ByNumber(genid.Timestamp_Seconds_field_number)

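Assuming the subsecond validation shown above, a short protojson sketch (illustrative only, not part of the diff) shows the effect: nine fractional digits, the maximum nanosecond precision a Timestamp can hold, still parse, while a tenth digit should now be rejected. The exact error text may differ.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ts := &timestamppb.Timestamp{}

	// Nine fractional digits (nanosecond precision) remain accepted.
	ok := []byte(`"2023-01-01T00:00:00.123456789Z"`)
	fmt.Println(protojson.Unmarshal(ok, ts)) // <nil>

	// Ten fractional digits exceed nanosecond precision and should be rejected.
	bad := []byte(`"2023-01-01T00:00:00.1234567891Z"`)
	fmt.Println(protojson.Unmarshal(bad, ts)) // error: invalid google.protobuf.Timestamp value
}
```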
8
vendor/google.golang.org/protobuf/encoding/protowire/wire.go
generated
vendored
@@ -3,7 +3,7 @@
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package protowire parses and formats the raw wire encoding.
|
||||
// See https://developers.google.com/protocol-buffers/docs/encoding.
|
||||
// See https://protobuf.dev/programming-guides/encoding.
|
||||
//
|
||||
// For marshaling and unmarshaling entire protobuf messages,
|
||||
// use the "google.golang.org/protobuf/proto" package instead.
|
||||
@@ -29,12 +29,8 @@ const (
|
||||
)
|
||||
|
||||
// IsValid reports whether the field number is semantically valid.
//
// Note that while numbers within the reserved range are semantically invalid,
// they are syntactically valid in the wire format.
// Implementations may treat records with reserved field numbers as unknown.
func (n Number) IsValid() bool {
return MinValidNumber <= n && n < FirstReservedNumber || LastReservedNumber < n && n <= MaxValidNumber
return MinValidNumber <= n && n <= MaxValidNumber
}
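A small sketch (not part of the diff) of what the relaxed check means for callers: field numbers in the reserved range 19000-19999, which the old expression excluded, now pass IsValid, while 0 and out-of-range numbers are still rejected.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	fmt.Println(protowire.Number(1).IsValid())     // true
	fmt.Println(protowire.Number(19000).IsValid()) // true: reserved numbers are syntactically valid on the wire
	fmt.Println(protowire.Number(0).IsValid())     // false: below MinValidNumber
}
```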
|
||||
// Type represents the wire type.
|
||||
|
||||
2
vendor/google.golang.org/protobuf/internal/encoding/json/decode.go
generated
vendored
@@ -294,7 +294,7 @@ func (d *Decoder) isValueNext() bool {
|
||||
}
|
||||
|
||||
// consumeToken constructs a Token for given Kind with raw value derived from
|
||||
// current d.in and given size, and consumes the given size-lenght of it.
|
||||
// current d.in and given size, and consumes the given size-length of it.
|
||||
func (d *Decoder) consumeToken(kind Kind, size int) Token {
|
||||
tok := Token{
|
||||
kind: kind,
|
||||
|
||||
5
vendor/google.golang.org/protobuf/internal/encoding/text/decode.go
generated
vendored
@@ -412,12 +412,13 @@ func (d *Decoder) parseFieldName() (tok Token, err error) {
|
||||
// Field number. Identify if input is a valid number that is not negative
|
||||
// and is decimal integer within 32-bit range.
|
||||
if num := parseNumber(d.in); num.size > 0 {
|
||||
str := num.string(d.in)
|
||||
if !num.neg && num.kind == numDec {
|
||||
if _, err := strconv.ParseInt(string(d.in[:num.size]), 10, 32); err == nil {
|
||||
if _, err := strconv.ParseInt(str, 10, 32); err == nil {
|
||||
return d.consumeToken(Name, num.size, uint8(FieldNumber)), nil
|
||||
}
|
||||
}
|
||||
return Token{}, d.newSyntaxError("invalid field number: %s", d.in[:num.size])
|
||||
return Token{}, d.newSyntaxError("invalid field number: %s", str)
|
||||
}
|
||||
|
||||
return Token{}, d.newSyntaxError("invalid field name: %s", errId(d.in))
|
||||
|
||||
43
vendor/google.golang.org/protobuf/internal/encoding/text/decode_number.go
generated
vendored
@@ -15,17 +15,12 @@ func (d *Decoder) parseNumberValue() (Token, bool) {
|
||||
if num.neg {
|
||||
numAttrs |= isNegative
|
||||
}
|
||||
strSize := num.size
|
||||
last := num.size - 1
|
||||
if num.kind == numFloat && (d.in[last] == 'f' || d.in[last] == 'F') {
|
||||
strSize = last
|
||||
}
|
||||
tok := Token{
|
||||
kind: Scalar,
|
||||
attrs: numberValue,
|
||||
pos: len(d.orig) - len(d.in),
|
||||
raw: d.in[:num.size],
|
||||
str: string(d.in[:strSize]),
|
||||
str: num.string(d.in),
|
||||
numAttrs: numAttrs,
|
||||
}
|
||||
d.consume(num.size)
|
||||
@@ -46,6 +41,27 @@ type number struct {
|
||||
kind uint8
|
||||
neg bool
|
||||
size int
|
||||
// if neg, this is the length of whitespace and comments between
|
||||
// the minus sign and the rest fo the number literal
|
||||
sep int
|
||||
}
|
||||
|
||||
func (num number) string(data []byte) string {
|
||||
strSize := num.size
|
||||
last := num.size - 1
|
||||
if num.kind == numFloat && (data[last] == 'f' || data[last] == 'F') {
|
||||
strSize = last
|
||||
}
|
||||
if num.neg && num.sep > 0 {
|
||||
// strip whitespace/comments between negative sign and the rest
|
||||
strLen := strSize - num.sep
|
||||
str := make([]byte, strLen)
|
||||
str[0] = data[0]
|
||||
copy(str[1:], data[num.sep+1:strSize])
|
||||
return string(str)
|
||||
}
|
||||
return string(data[:strSize])
|
||||
|
||||
}
|
||||
|
||||
// parseNumber constructs a number object from given input. It allows for the
|
||||
@@ -67,19 +83,22 @@ func parseNumber(input []byte) number {
|
||||
}
|
||||
|
||||
// Optional -
|
||||
var sep int
|
||||
if s[0] == '-' {
|
||||
neg = true
|
||||
s = s[1:]
|
||||
size++
|
||||
// Consume any whitespace or comments between the
|
||||
// negative sign and the rest of the number
|
||||
lenBefore := len(s)
|
||||
s = consume(s, 0)
|
||||
sep = lenBefore - len(s)
|
||||
size += sep
|
||||
if len(s) == 0 {
|
||||
return number{}
|
||||
}
|
||||
}
|
||||
|
||||
// C++ allows for whitespace and comments in between the negative sign and
|
||||
// the rest of the number. This logic currently does not but is consistent
|
||||
// with v1.
|
||||
|
||||
switch {
|
||||
case s[0] == '0':
|
||||
if len(s) > 1 {
|
||||
@@ -116,7 +135,7 @@ func parseNumber(input []byte) number {
|
||||
if len(s) > 0 && !isDelim(s[0]) {
|
||||
return number{}
|
||||
}
|
||||
return number{kind: kind, neg: neg, size: size}
|
||||
return number{kind: kind, neg: neg, size: size, sep: sep}
|
||||
}
|
||||
}
|
||||
s = s[1:]
|
||||
@@ -188,5 +207,5 @@ func parseNumber(input []byte) number {
|
||||
return number{}
|
||||
}
|
||||
|
||||
return number{kind: kind, neg: neg, size: size}
|
||||
return number{kind: kind, neg: neg, size: size, sep: sep}
|
||||
}
|
||||
|
||||
90
vendor/google.golang.org/protobuf/internal/genid/descriptor_gen.go
generated
vendored
@@ -50,6 +50,7 @@ const (
|
||||
FileDescriptorProto_Options_field_name protoreflect.Name = "options"
|
||||
FileDescriptorProto_SourceCodeInfo_field_name protoreflect.Name = "source_code_info"
|
||||
FileDescriptorProto_Syntax_field_name protoreflect.Name = "syntax"
|
||||
FileDescriptorProto_Edition_field_name protoreflect.Name = "edition"
|
||||
|
||||
FileDescriptorProto_Name_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.name"
|
||||
FileDescriptorProto_Package_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.package"
|
||||
@@ -63,6 +64,7 @@ const (
|
||||
FileDescriptorProto_Options_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.options"
|
||||
FileDescriptorProto_SourceCodeInfo_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.source_code_info"
|
||||
FileDescriptorProto_Syntax_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.syntax"
|
||||
FileDescriptorProto_Edition_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.edition"
|
||||
)
|
||||
|
||||
// Field numbers for google.protobuf.FileDescriptorProto.
|
||||
@@ -79,6 +81,7 @@ const (
|
||||
FileDescriptorProto_Options_field_number protoreflect.FieldNumber = 8
|
||||
FileDescriptorProto_SourceCodeInfo_field_number protoreflect.FieldNumber = 9
|
||||
FileDescriptorProto_Syntax_field_number protoreflect.FieldNumber = 12
|
||||
FileDescriptorProto_Edition_field_number protoreflect.FieldNumber = 13
|
||||
)
|
||||
|
||||
// Names for google.protobuf.DescriptorProto.
|
||||
@@ -494,26 +497,29 @@ const (
|
||||
|
||||
// Field names for google.protobuf.MessageOptions.
|
||||
const (
|
||||
MessageOptions_MessageSetWireFormat_field_name protoreflect.Name = "message_set_wire_format"
|
||||
MessageOptions_NoStandardDescriptorAccessor_field_name protoreflect.Name = "no_standard_descriptor_accessor"
|
||||
MessageOptions_Deprecated_field_name protoreflect.Name = "deprecated"
|
||||
MessageOptions_MapEntry_field_name protoreflect.Name = "map_entry"
|
||||
MessageOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option"
|
||||
MessageOptions_MessageSetWireFormat_field_name protoreflect.Name = "message_set_wire_format"
|
||||
MessageOptions_NoStandardDescriptorAccessor_field_name protoreflect.Name = "no_standard_descriptor_accessor"
|
||||
MessageOptions_Deprecated_field_name protoreflect.Name = "deprecated"
|
||||
MessageOptions_MapEntry_field_name protoreflect.Name = "map_entry"
|
||||
MessageOptions_DeprecatedLegacyJsonFieldConflicts_field_name protoreflect.Name = "deprecated_legacy_json_field_conflicts"
|
||||
MessageOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option"
|
||||
|
||||
MessageOptions_MessageSetWireFormat_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.message_set_wire_format"
|
||||
MessageOptions_NoStandardDescriptorAccessor_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.no_standard_descriptor_accessor"
|
||||
MessageOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.deprecated"
|
||||
MessageOptions_MapEntry_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.map_entry"
|
||||
MessageOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.uninterpreted_option"
|
||||
MessageOptions_MessageSetWireFormat_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.message_set_wire_format"
|
||||
MessageOptions_NoStandardDescriptorAccessor_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.no_standard_descriptor_accessor"
|
||||
MessageOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.deprecated"
|
||||
MessageOptions_MapEntry_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.map_entry"
|
||||
MessageOptions_DeprecatedLegacyJsonFieldConflicts_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.deprecated_legacy_json_field_conflicts"
|
||||
MessageOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.uninterpreted_option"
|
||||
)
|
||||
|
||||
// Field numbers for google.protobuf.MessageOptions.
|
||||
const (
|
||||
MessageOptions_MessageSetWireFormat_field_number protoreflect.FieldNumber = 1
|
||||
MessageOptions_NoStandardDescriptorAccessor_field_number protoreflect.FieldNumber = 2
|
||||
MessageOptions_Deprecated_field_number protoreflect.FieldNumber = 3
|
||||
MessageOptions_MapEntry_field_number protoreflect.FieldNumber = 7
|
||||
MessageOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999
|
||||
MessageOptions_MessageSetWireFormat_field_number protoreflect.FieldNumber = 1
|
||||
MessageOptions_NoStandardDescriptorAccessor_field_number protoreflect.FieldNumber = 2
|
||||
MessageOptions_Deprecated_field_number protoreflect.FieldNumber = 3
|
||||
MessageOptions_MapEntry_field_number protoreflect.FieldNumber = 7
|
||||
MessageOptions_DeprecatedLegacyJsonFieldConflicts_field_number protoreflect.FieldNumber = 11
|
||||
MessageOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999
|
||||
)
|
||||
|
||||
// Names for google.protobuf.FieldOptions.
|
||||
@@ -528,16 +534,24 @@ const (
|
||||
FieldOptions_Packed_field_name protoreflect.Name = "packed"
|
||||
FieldOptions_Jstype_field_name protoreflect.Name = "jstype"
|
||||
FieldOptions_Lazy_field_name protoreflect.Name = "lazy"
|
||||
FieldOptions_UnverifiedLazy_field_name protoreflect.Name = "unverified_lazy"
|
||||
FieldOptions_Deprecated_field_name protoreflect.Name = "deprecated"
|
||||
FieldOptions_Weak_field_name protoreflect.Name = "weak"
|
||||
FieldOptions_DebugRedact_field_name protoreflect.Name = "debug_redact"
|
||||
FieldOptions_Retention_field_name protoreflect.Name = "retention"
|
||||
FieldOptions_Target_field_name protoreflect.Name = "target"
|
||||
FieldOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option"
|
||||
|
||||
FieldOptions_Ctype_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.ctype"
|
||||
FieldOptions_Packed_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.packed"
|
||||
FieldOptions_Jstype_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.jstype"
|
||||
FieldOptions_Lazy_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.lazy"
|
||||
FieldOptions_UnverifiedLazy_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.unverified_lazy"
|
||||
FieldOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.deprecated"
|
||||
FieldOptions_Weak_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.weak"
|
||||
FieldOptions_DebugRedact_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.debug_redact"
|
||||
FieldOptions_Retention_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.retention"
|
||||
FieldOptions_Target_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.target"
|
||||
FieldOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.uninterpreted_option"
|
||||
)
|
||||
|
||||
@@ -547,8 +561,12 @@ const (
|
||||
FieldOptions_Packed_field_number protoreflect.FieldNumber = 2
|
||||
FieldOptions_Jstype_field_number protoreflect.FieldNumber = 6
|
||||
FieldOptions_Lazy_field_number protoreflect.FieldNumber = 5
|
||||
FieldOptions_UnverifiedLazy_field_number protoreflect.FieldNumber = 15
|
||||
FieldOptions_Deprecated_field_number protoreflect.FieldNumber = 3
|
||||
FieldOptions_Weak_field_number protoreflect.FieldNumber = 10
|
||||
FieldOptions_DebugRedact_field_number protoreflect.FieldNumber = 16
|
||||
FieldOptions_Retention_field_number protoreflect.FieldNumber = 17
|
||||
FieldOptions_Target_field_number protoreflect.FieldNumber = 18
|
||||
FieldOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999
|
||||
)
|
||||
|
||||
@@ -564,6 +582,18 @@ const (
|
||||
FieldOptions_JSType_enum_name = "JSType"
|
||||
)
|
||||
|
||||
// Full and short names for google.protobuf.FieldOptions.OptionRetention.
|
||||
const (
|
||||
FieldOptions_OptionRetention_enum_fullname = "google.protobuf.FieldOptions.OptionRetention"
|
||||
FieldOptions_OptionRetention_enum_name = "OptionRetention"
|
||||
)
|
||||
|
||||
// Full and short names for google.protobuf.FieldOptions.OptionTargetType.
|
||||
const (
|
||||
FieldOptions_OptionTargetType_enum_fullname = "google.protobuf.FieldOptions.OptionTargetType"
|
||||
FieldOptions_OptionTargetType_enum_name = "OptionTargetType"
|
||||
)
|
||||
|
||||
// Names for google.protobuf.OneofOptions.
|
||||
const (
|
||||
OneofOptions_message_name protoreflect.Name = "OneofOptions"
|
||||
@@ -590,20 +620,23 @@ const (
|
||||
|
||||
// Field names for google.protobuf.EnumOptions.
|
||||
const (
|
||||
EnumOptions_AllowAlias_field_name protoreflect.Name = "allow_alias"
|
||||
EnumOptions_Deprecated_field_name protoreflect.Name = "deprecated"
|
||||
EnumOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option"
|
||||
EnumOptions_AllowAlias_field_name protoreflect.Name = "allow_alias"
|
||||
EnumOptions_Deprecated_field_name protoreflect.Name = "deprecated"
|
||||
EnumOptions_DeprecatedLegacyJsonFieldConflicts_field_name protoreflect.Name = "deprecated_legacy_json_field_conflicts"
|
||||
EnumOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option"
|
||||
|
||||
EnumOptions_AllowAlias_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.allow_alias"
|
||||
EnumOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.deprecated"
|
||||
EnumOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.uninterpreted_option"
|
||||
EnumOptions_AllowAlias_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.allow_alias"
|
||||
EnumOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.deprecated"
|
||||
EnumOptions_DeprecatedLegacyJsonFieldConflicts_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.deprecated_legacy_json_field_conflicts"
|
||||
EnumOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.uninterpreted_option"
|
||||
)
|
||||
|
||||
// Field numbers for google.protobuf.EnumOptions.
|
||||
const (
|
||||
EnumOptions_AllowAlias_field_number protoreflect.FieldNumber = 2
|
||||
EnumOptions_Deprecated_field_number protoreflect.FieldNumber = 3
|
||||
EnumOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999
|
||||
EnumOptions_AllowAlias_field_number protoreflect.FieldNumber = 2
|
||||
EnumOptions_Deprecated_field_number protoreflect.FieldNumber = 3
|
||||
EnumOptions_DeprecatedLegacyJsonFieldConflicts_field_number protoreflect.FieldNumber = 6
|
||||
EnumOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999
|
||||
)
|
||||
|
||||
// Names for google.protobuf.EnumValueOptions.
|
||||
@@ -813,11 +846,13 @@ const (
|
||||
GeneratedCodeInfo_Annotation_SourceFile_field_name protoreflect.Name = "source_file"
|
||||
GeneratedCodeInfo_Annotation_Begin_field_name protoreflect.Name = "begin"
|
||||
GeneratedCodeInfo_Annotation_End_field_name protoreflect.Name = "end"
|
||||
GeneratedCodeInfo_Annotation_Semantic_field_name protoreflect.Name = "semantic"
|
||||
|
||||
GeneratedCodeInfo_Annotation_Path_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.path"
|
||||
GeneratedCodeInfo_Annotation_SourceFile_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.source_file"
|
||||
GeneratedCodeInfo_Annotation_Begin_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.begin"
|
||||
GeneratedCodeInfo_Annotation_End_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.end"
|
||||
GeneratedCodeInfo_Annotation_Semantic_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.semantic"
|
||||
)
|
||||
|
||||
// Field numbers for google.protobuf.GeneratedCodeInfo.Annotation.
|
||||
@@ -826,4 +861,11 @@ const (
|
||||
GeneratedCodeInfo_Annotation_SourceFile_field_number protoreflect.FieldNumber = 2
|
||||
GeneratedCodeInfo_Annotation_Begin_field_number protoreflect.FieldNumber = 3
|
||||
GeneratedCodeInfo_Annotation_End_field_number protoreflect.FieldNumber = 4
|
||||
GeneratedCodeInfo_Annotation_Semantic_field_number protoreflect.FieldNumber = 5
|
||||
)
|
||||
|
||||
// Full and short names for google.protobuf.GeneratedCodeInfo.Annotation.Semantic.
|
||||
const (
|
||||
GeneratedCodeInfo_Annotation_Semantic_enum_fullname = "google.protobuf.GeneratedCodeInfo.Annotation.Semantic"
|
||||
GeneratedCodeInfo_Annotation_Semantic_enum_name = "Semantic"
|
||||
)
|
||||
|
||||
1
vendor/google.golang.org/protobuf/internal/impl/convert.go
generated
vendored
@@ -59,7 +59,6 @@ func NewConverter(t reflect.Type, fd protoreflect.FieldDescriptor) Converter {
|
||||
default:
|
||||
return newSingularConverter(t, fd)
|
||||
}
|
||||
panic(fmt.Sprintf("invalid Go type %v for field %v", t, fd.FullName()))
|
||||
}
|
||||
|
||||
var (
|
||||
|
||||
2
vendor/google.golang.org/protobuf/internal/strs/strings_unsafe.go
generated
vendored
@@ -87,7 +87,7 @@ func (sb *Builder) grow(n int) {
|
||||
// Unlike strings.Builder, we do not need to copy over the contents
|
||||
// of the old buffer since our builder provides no API for
|
||||
// retrieving previously created strings.
|
||||
sb.buf = make([]byte, 2*(cap(sb.buf)+n))
|
||||
sb.buf = make([]byte, 0, 2*(cap(sb.buf)+n))
|
||||
}
|
||||
|
||||
func (sb *Builder) last(n int) string {
|
||||
|
||||
4
vendor/google.golang.org/protobuf/internal/version/version.go
generated
vendored
@@ -51,8 +51,8 @@ import (
|
||||
// 10. Send out the CL for review and submit it.
|
||||
const (
|
||||
Major = 1
|
||||
Minor = 28
|
||||
Patch = 1
|
||||
Minor = 30
|
||||
Patch = 0
|
||||
PreRelease = ""
|
||||
)
|
||||
|
||||
|
||||
9
vendor/google.golang.org/protobuf/proto/doc.go
generated
vendored
@@ -5,16 +5,13 @@
|
||||
// Package proto provides functions operating on protocol buffer messages.
|
||||
//
|
||||
// For documentation on protocol buffers in general, see:
|
||||
//
|
||||
// https://developers.google.com/protocol-buffers
|
||||
// https://protobuf.dev.
|
||||
//
|
||||
// For a tutorial on using protocol buffers with Go, see:
|
||||
//
|
||||
// https://developers.google.com/protocol-buffers/docs/gotutorial
|
||||
// https://protobuf.dev/getting-started/gotutorial.
|
||||
//
|
||||
// For a guide to generated Go protocol buffer code, see:
|
||||
//
|
||||
// https://developers.google.com/protocol-buffers/docs/reference/go-generated
|
||||
// https://protobuf.dev/reference/go/go-generated.
|
||||
//
|
||||
// # Binary serialization
|
||||
//
|
||||
|
||||
172
vendor/google.golang.org/protobuf/proto/equal.go
generated
vendored
@@ -5,30 +5,39 @@
|
||||
package proto
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"math"
|
||||
"reflect"
|
||||
|
||||
"google.golang.org/protobuf/encoding/protowire"
|
||||
"google.golang.org/protobuf/reflect/protoreflect"
|
||||
)
|
||||
|
||||
// Equal reports whether two messages are equal.
|
||||
// If two messages marshal to the same bytes under deterministic serialization,
|
||||
// then Equal is guaranteed to report true.
|
||||
// Equal reports whether two messages are equal,
|
||||
// by recursively comparing the fields of the message.
|
||||
//
|
||||
// Two messages are equal if they belong to the same message descriptor,
|
||||
// have the same set of populated known and extension field values,
|
||||
// and the same set of unknown fields values. If either of the top-level
|
||||
// messages are invalid, then Equal reports true only if both are invalid.
|
||||
// - Bytes fields are equal if they contain identical bytes.
|
||||
// Empty bytes (regardless of nil-ness) are considered equal.
|
||||
//
|
||||
// Scalar values are compared with the equivalent of the == operator in Go,
|
||||
// except bytes values which are compared using bytes.Equal and
|
||||
// floating point values which specially treat NaNs as equal.
|
||||
// Message values are compared by recursively calling Equal.
|
||||
// Lists are equal if each element value is also equal.
|
||||
// Maps are equal if they have the same set of keys, where the pair of values
|
||||
// for each key is also equal.
|
||||
// - Floating-point fields are equal if they contain the same value.
|
||||
// Unlike the == operator, a NaN is equal to another NaN.
|
||||
//
|
||||
// - Other scalar fields are equal if they contain the same value.
|
||||
//
|
||||
// - Message fields are equal if they have
|
||||
// the same set of populated known and extension field values, and
|
||||
// the same set of unknown fields values.
|
||||
//
|
||||
// - Lists are equal if they are the same length and
|
||||
// each corresponding element is equal.
|
||||
//
|
||||
// - Maps are equal if they have the same set of keys and
|
||||
// the corresponding value for each key is equal.
|
||||
//
|
||||
// An invalid message is not equal to a valid message.
|
||||
// An invalid message is only equal to another invalid message of the
|
||||
// same type. An invalid message often corresponds to a nil pointer
|
||||
// of the concrete message type. For example, (*pb.M)(nil) is not equal
|
||||
// to &pb.M{}.
|
||||
// If two valid messages marshal to the same bytes under deterministic
|
||||
// serialization, then Equal is guaranteed to report true.
|
||||
func Equal(x, y Message) bool {
|
||||
if x == nil || y == nil {
|
||||
return x == nil && y == nil
|
||||
@@ -42,130 +51,7 @@ func Equal(x, y Message) bool {
|
||||
if mx.IsValid() != my.IsValid() {
|
||||
return false
|
||||
}
|
||||
return equalMessage(mx, my)
|
||||
}
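The rewritten doc comment above spells out Equal's field-by-field semantics. The sketch below (illustrative only, using the well-known wrapper types rather than anything from this repository) demonstrates two of those points: NaN values compare equal to each other, and a nil concrete pointer, an invalid message, is not equal to an empty valid message of the same type.

```go
package main

import (
	"fmt"
	"math"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	// NaN compares equal to NaN, unlike the Go == operator.
	a := wrapperspb.Double(math.NaN())
	b := wrapperspb.Double(math.NaN())
	fmt.Println(proto.Equal(a, b)) // true

	// An invalid message (nil concrete pointer) is not equal to a valid,
	// empty message: (*wrapperspb.DoubleValue)(nil) != &wrapperspb.DoubleValue{}.
	var nilMsg *wrapperspb.DoubleValue
	fmt.Println(proto.Equal(nilMsg, &wrapperspb.DoubleValue{})) // false
}
```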
|
||||
|
||||
// equalMessage compares two messages.
|
||||
func equalMessage(mx, my protoreflect.Message) bool {
|
||||
if mx.Descriptor() != my.Descriptor() {
|
||||
return false
|
||||
}
|
||||
|
||||
nx := 0
|
||||
equal := true
|
||||
mx.Range(func(fd protoreflect.FieldDescriptor, vx protoreflect.Value) bool {
|
||||
nx++
|
||||
vy := my.Get(fd)
|
||||
equal = my.Has(fd) && equalField(fd, vx, vy)
|
||||
return equal
|
||||
})
|
||||
if !equal {
|
||||
return false
|
||||
}
|
||||
ny := 0
|
||||
my.Range(func(fd protoreflect.FieldDescriptor, vx protoreflect.Value) bool {
|
||||
ny++
|
||||
return true
|
||||
})
|
||||
if nx != ny {
|
||||
return false
|
||||
}
|
||||
|
||||
return equalUnknown(mx.GetUnknown(), my.GetUnknown())
|
||||
}
|
||||
|
||||
// equalField compares two fields.
|
||||
func equalField(fd protoreflect.FieldDescriptor, x, y protoreflect.Value) bool {
|
||||
switch {
|
||||
case fd.IsList():
|
||||
return equalList(fd, x.List(), y.List())
|
||||
case fd.IsMap():
|
||||
return equalMap(fd, x.Map(), y.Map())
|
||||
default:
|
||||
return equalValue(fd, x, y)
|
||||
}
|
||||
}
|
||||
|
||||
// equalMap compares two maps.
|
||||
func equalMap(fd protoreflect.FieldDescriptor, x, y protoreflect.Map) bool {
|
||||
if x.Len() != y.Len() {
|
||||
return false
|
||||
}
|
||||
equal := true
|
||||
x.Range(func(k protoreflect.MapKey, vx protoreflect.Value) bool {
|
||||
vy := y.Get(k)
|
||||
equal = y.Has(k) && equalValue(fd.MapValue(), vx, vy)
|
||||
return equal
|
||||
})
|
||||
return equal
|
||||
}
|
||||
|
||||
// equalList compares two lists.
|
||||
func equalList(fd protoreflect.FieldDescriptor, x, y protoreflect.List) bool {
|
||||
if x.Len() != y.Len() {
|
||||
return false
|
||||
}
|
||||
for i := x.Len() - 1; i >= 0; i-- {
|
||||
if !equalValue(fd, x.Get(i), y.Get(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// equalValue compares two singular values.
|
||||
func equalValue(fd protoreflect.FieldDescriptor, x, y protoreflect.Value) bool {
|
||||
switch fd.Kind() {
|
||||
case protoreflect.BoolKind:
|
||||
return x.Bool() == y.Bool()
|
||||
case protoreflect.EnumKind:
|
||||
return x.Enum() == y.Enum()
|
||||
case protoreflect.Int32Kind, protoreflect.Sint32Kind,
|
||||
protoreflect.Int64Kind, protoreflect.Sint64Kind,
|
||||
protoreflect.Sfixed32Kind, protoreflect.Sfixed64Kind:
|
||||
return x.Int() == y.Int()
|
||||
case protoreflect.Uint32Kind, protoreflect.Uint64Kind,
|
||||
protoreflect.Fixed32Kind, protoreflect.Fixed64Kind:
|
||||
return x.Uint() == y.Uint()
|
||||
case protoreflect.FloatKind, protoreflect.DoubleKind:
|
||||
fx := x.Float()
|
||||
fy := y.Float()
|
||||
if math.IsNaN(fx) || math.IsNaN(fy) {
|
||||
return math.IsNaN(fx) && math.IsNaN(fy)
|
||||
}
|
||||
return fx == fy
|
||||
case protoreflect.StringKind:
|
||||
return x.String() == y.String()
|
||||
case protoreflect.BytesKind:
|
||||
return bytes.Equal(x.Bytes(), y.Bytes())
|
||||
case protoreflect.MessageKind, protoreflect.GroupKind:
|
||||
return equalMessage(x.Message(), y.Message())
|
||||
default:
|
||||
return x.Interface() == y.Interface()
|
||||
}
|
||||
}
|
||||
|
||||
// equalUnknown compares unknown fields by direct comparison on the raw bytes
|
||||
// of each individual field number.
|
||||
func equalUnknown(x, y protoreflect.RawFields) bool {
|
||||
if len(x) != len(y) {
|
||||
return false
|
||||
}
|
||||
if bytes.Equal([]byte(x), []byte(y)) {
|
||||
return true
|
||||
}
|
||||
|
||||
mx := make(map[protoreflect.FieldNumber]protoreflect.RawFields)
|
||||
my := make(map[protoreflect.FieldNumber]protoreflect.RawFields)
|
||||
for len(x) > 0 {
|
||||
fnum, _, n := protowire.ConsumeField(x)
|
||||
mx[fnum] = append(mx[fnum], x[:n]...)
|
||||
x = x[n:]
|
||||
}
|
||||
for len(y) > 0 {
|
||||
fnum, _, n := protowire.ConsumeField(y)
|
||||
my[fnum] = append(my[fnum], y[:n]...)
|
||||
y = y[n:]
|
||||
}
|
||||
return reflect.DeepEqual(mx, my)
|
||||
vx := protoreflect.ValueOfMessage(mx)
|
||||
vy := protoreflect.ValueOfMessage(my)
|
||||
return vx.Equal(vy)
|
||||
}
|
||||
|
||||
14  vendor/google.golang.org/protobuf/reflect/protoreflect/source_gen.go  generated  vendored
@@ -35,6 +35,8 @@ func (p *SourcePath) appendFileDescriptorProto(b []byte) []byte {
|
||||
b = p.appendSingularField(b, "source_code_info", (*SourcePath).appendSourceCodeInfo)
|
||||
case 12:
|
||||
b = p.appendSingularField(b, "syntax", nil)
|
||||
case 13:
|
||||
b = p.appendSingularField(b, "edition", nil)
|
||||
}
|
||||
return b
|
||||
}
|
||||
@@ -236,6 +238,8 @@ func (p *SourcePath) appendMessageOptions(b []byte) []byte {
|
||||
b = p.appendSingularField(b, "deprecated", nil)
|
||||
case 7:
|
||||
b = p.appendSingularField(b, "map_entry", nil)
|
||||
case 11:
|
||||
b = p.appendSingularField(b, "deprecated_legacy_json_field_conflicts", nil)
|
||||
case 999:
|
||||
b = p.appendRepeatedField(b, "uninterpreted_option", (*SourcePath).appendUninterpretedOption)
|
||||
}
|
||||
@@ -279,6 +283,8 @@ func (p *SourcePath) appendEnumOptions(b []byte) []byte {
|
||||
b = p.appendSingularField(b, "allow_alias", nil)
|
||||
case 3:
|
||||
b = p.appendSingularField(b, "deprecated", nil)
|
||||
case 6:
|
||||
b = p.appendSingularField(b, "deprecated_legacy_json_field_conflicts", nil)
|
||||
case 999:
|
||||
b = p.appendRepeatedField(b, "uninterpreted_option", (*SourcePath).appendUninterpretedOption)
|
||||
}
|
||||
@@ -345,10 +351,18 @@ func (p *SourcePath) appendFieldOptions(b []byte) []byte {
|
||||
b = p.appendSingularField(b, "jstype", nil)
|
||||
case 5:
|
||||
b = p.appendSingularField(b, "lazy", nil)
|
||||
case 15:
|
||||
b = p.appendSingularField(b, "unverified_lazy", nil)
|
||||
case 3:
|
||||
b = p.appendSingularField(b, "deprecated", nil)
|
||||
case 10:
|
||||
b = p.appendSingularField(b, "weak", nil)
|
||||
case 16:
|
||||
b = p.appendSingularField(b, "debug_redact", nil)
|
||||
case 17:
|
||||
b = p.appendSingularField(b, "retention", nil)
|
||||
case 18:
|
||||
b = p.appendSingularField(b, "target", nil)
|
||||
case 999:
|
||||
b = p.appendRepeatedField(b, "uninterpreted_option", (*SourcePath).appendUninterpretedOption)
|
||||
}
|
||||
|
||||
2  vendor/google.golang.org/protobuf/reflect/protoreflect/value.go  generated  vendored
@@ -148,7 +148,7 @@ type Message interface {
|
||||
// be preserved in marshaling or other operations.
|
||||
IsValid() bool
|
||||
|
||||
// ProtoMethods returns optional fast-path implementions of various operations.
|
||||
// ProtoMethods returns optional fast-path implementations of various operations.
|
||||
// This method may return nil.
|
||||
//
|
||||
// The returned methods type is identical to
|
||||
|
||||
168  vendor/google.golang.org/protobuf/reflect/protoreflect/value_equal.go  generated  vendored  Normal file
@@ -0,0 +1,168 @@
|
||||
// Copyright 2022 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package protoreflect
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"math"
|
||||
"reflect"
|
||||
|
||||
"google.golang.org/protobuf/encoding/protowire"
|
||||
)
|
||||
|
||||
// Equal reports whether v1 and v2 are recursively equal.
|
||||
//
|
||||
// - Values of different types are always unequal.
|
||||
//
|
||||
// - Bytes values are equal if they contain identical bytes.
|
||||
// Empty bytes (regardless of nil-ness) are considered equal.
|
||||
//
|
||||
// - Floating point values are equal if they contain the same value.
|
||||
// Unlike the == operator, a NaN is equal to another NaN.
|
||||
//
|
||||
// - Enums are equal if they contain the same number.
|
||||
// Since Value does not contain an enum descriptor,
|
||||
// enum values do not consider the type of the enum.
|
||||
//
|
||||
// - Other scalar values are equal if they contain the same value.
|
||||
//
|
||||
// - Message values are equal if they belong to the same message descriptor,
|
||||
// have the same set of populated known and extension field values,
|
||||
// and the same set of unknown fields values.
|
||||
//
|
||||
// - Lists are equal if they are the same length and
|
||||
// each corresponding element is equal.
|
||||
//
|
||||
// - Maps are equal if they have the same set of keys and
|
||||
// the corresponding value for each key is equal.
|
||||
func (v1 Value) Equal(v2 Value) bool {
|
||||
return equalValue(v1, v2)
|
||||
}
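// A minimal sketch of Equal on scalar and bytes values (the protoreflect
// package is referred to here by the illustrative alias pref):
//
//	pref.ValueOfInt64(1).Equal(pref.ValueOfInt64(1))          // true
//	pref.ValueOfInt64(1).Equal(pref.ValueOfUint64(1))         // false: different types
//	pref.ValueOfBytes(nil).Equal(pref.ValueOfBytes([]byte{})) // true: empty bytes
//	pref.ValueOfFloat64(math.NaN()).Equal(pref.ValueOfFloat64(math.NaN())) // true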
|
||||
|
||||
func equalValue(x, y Value) bool {
|
||||
eqType := x.typ == y.typ
|
||||
switch x.typ {
|
||||
case nilType:
|
||||
return eqType
|
||||
case boolType:
|
||||
return eqType && x.Bool() == y.Bool()
|
||||
case int32Type, int64Type:
|
||||
return eqType && x.Int() == y.Int()
|
||||
case uint32Type, uint64Type:
|
||||
return eqType && x.Uint() == y.Uint()
|
||||
case float32Type, float64Type:
|
||||
return eqType && equalFloat(x.Float(), y.Float())
|
||||
case stringType:
|
||||
return eqType && x.String() == y.String()
|
||||
case bytesType:
|
||||
return eqType && bytes.Equal(x.Bytes(), y.Bytes())
|
||||
case enumType:
|
||||
return eqType && x.Enum() == y.Enum()
|
||||
default:
|
||||
switch x := x.Interface().(type) {
|
||||
case Message:
|
||||
y, ok := y.Interface().(Message)
|
||||
return ok && equalMessage(x, y)
|
||||
case List:
|
||||
y, ok := y.Interface().(List)
|
||||
return ok && equalList(x, y)
|
||||
case Map:
|
||||
y, ok := y.Interface().(Map)
|
||||
return ok && equalMap(x, y)
|
||||
default:
|
||||
panic(fmt.Sprintf("unknown type: %T", x))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// equalFloat compares two floats, where NaNs are treated as equal.
|
||||
func equalFloat(x, y float64) bool {
|
||||
if math.IsNaN(x) || math.IsNaN(y) {
|
||||
return math.IsNaN(x) && math.IsNaN(y)
|
||||
}
|
||||
return x == y
|
||||
}
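// For illustration:
//
//	equalFloat(math.NaN(), math.NaN()) // true, unlike math.NaN() == math.NaN()
//	equalFloat(0, math.NaN())          // false
//	equalFloat(1.5, 1.5)               // true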
|
||||
|
||||
// equalMessage compares two messages.
|
||||
func equalMessage(mx, my Message) bool {
|
||||
if mx.Descriptor() != my.Descriptor() {
|
||||
return false
|
||||
}
|
||||
|
||||
nx := 0
|
||||
equal := true
|
||||
mx.Range(func(fd FieldDescriptor, vx Value) bool {
|
||||
nx++
|
||||
vy := my.Get(fd)
|
||||
equal = my.Has(fd) && equalValue(vx, vy)
|
||||
return equal
|
||||
})
|
||||
if !equal {
|
||||
return false
|
||||
}
|
||||
ny := 0
|
||||
my.Range(func(fd FieldDescriptor, vx Value) bool {
|
||||
ny++
|
||||
return true
|
||||
})
|
||||
if nx != ny {
|
||||
return false
|
||||
}
|
||||
|
||||
return equalUnknown(mx.GetUnknown(), my.GetUnknown())
|
||||
}
|
||||
|
||||
// equalList compares two lists.
|
||||
func equalList(x, y List) bool {
|
||||
if x.Len() != y.Len() {
|
||||
return false
|
||||
}
|
||||
for i := x.Len() - 1; i >= 0; i-- {
|
||||
if !equalValue(x.Get(i), y.Get(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// equalMap compares two maps.
|
||||
func equalMap(x, y Map) bool {
|
||||
if x.Len() != y.Len() {
|
||||
return false
|
||||
}
|
||||
equal := true
|
||||
x.Range(func(k MapKey, vx Value) bool {
|
||||
vy := y.Get(k)
|
||||
equal = y.Has(k) && equalValue(vx, vy)
|
||||
return equal
|
||||
})
|
||||
return equal
|
||||
}
|
||||
|
||||
// equalUnknown compares unknown fields by direct comparison on the raw bytes
|
||||
// of each individual field number.
|
||||
func equalUnknown(x, y RawFields) bool {
|
||||
if len(x) != len(y) {
|
||||
return false
|
||||
}
|
||||
if bytes.Equal([]byte(x), []byte(y)) {
|
||||
return true
|
||||
}
|
||||
|
||||
mx := make(map[FieldNumber]RawFields)
|
||||
my := make(map[FieldNumber]RawFields)
|
||||
for len(x) > 0 {
|
||||
fnum, _, n := protowire.ConsumeField(x)
|
||||
mx[fnum] = append(mx[fnum], x[:n]...)
|
||||
x = x[n:]
|
||||
}
|
||||
for len(y) > 0 {
|
||||
fnum, _, n := protowire.ConsumeField(y)
|
||||
my[fnum] = append(my[fnum], y[:n]...)
|
||||
y = y[n:]
|
||||
}
|
||||
return reflect.DeepEqual(mx, my)
|
||||
}
|
||||
4  vendor/google.golang.org/protobuf/reflect/protoreflect/value_union.go  generated  vendored
@@ -54,11 +54,11 @@ import (
|
||||
// // Append a 0 to a "repeated int32" field.
|
||||
// // Since the Value returned by Mutable is guaranteed to alias
|
||||
// // the source message, modifying the Value modifies the message.
|
||||
// message.Mutable(fieldDesc).(List).Append(protoreflect.ValueOfInt32(0))
|
||||
// message.Mutable(fieldDesc).List().Append(protoreflect.ValueOfInt32(0))
|
||||
//
|
||||
// // Assign [0] to a "repeated int32" field by creating a new Value,
|
||||
// // modifying it, and assigning it.
|
||||
// list := message.NewField(fieldDesc).(List)
|
||||
// list := message.NewField(fieldDesc).List()
|
||||
// list.Append(protoreflect.ValueOfInt32(0))
|
||||
// message.Set(fieldDesc, list)
|
||||
// // ERROR: Since it is not defined whether Set aliases the source,
|
||||
|
||||
2  vendor/google.golang.org/protobuf/reflect/protoregistry/registry.go  generated  vendored
@@ -46,7 +46,7 @@ var conflictPolicy = "panic" // "panic" | "warn" | "ignore"
|
||||
// It is a variable so that the behavior is easily overridden in another file.
|
||||
var ignoreConflict = func(d protoreflect.Descriptor, err error) bool {
|
||||
const env = "GOLANG_PROTOBUF_REGISTRATION_CONFLICT"
|
||||
const faq = "https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict"
|
||||
const faq = "https://protobuf.dev/reference/go/faq#namespace-conflict"
|
||||
policy := conflictPolicy
|
||||
if v := os.Getenv(env); v != "" {
|
||||
policy = v
|
||||
|
||||
1547  vendor/google.golang.org/protobuf/types/descriptorpb/descriptor.pb.go  generated  vendored  (file diff suppressed because it is too large)
127  vendor/google.golang.org/protobuf/types/known/anypb/any.pb.go  generated  vendored
@@ -37,8 +37,7 @@
|
||||
// It is functionally a tuple of the full name of the remote message type and
|
||||
// the serialized bytes of the remote message value.
|
||||
//
|
||||
//
|
||||
// Constructing an Any
|
||||
// # Constructing an Any
|
||||
//
|
||||
// An Any message containing another message value is constructed using New:
|
||||
//
|
||||
@@ -48,8 +47,7 @@
|
||||
// }
|
||||
// ... // make use of any
|
||||
//
|
||||
//
|
||||
// Unmarshaling an Any
|
||||
// # Unmarshaling an Any
|
||||
//
|
||||
// With a populated Any message, the underlying message can be serialized into
|
||||
// a remote concrete message value in a few ways.
|
||||
@@ -95,8 +93,7 @@
|
||||
// listed in the case clauses are linked into the Go binary and therefore also
|
||||
// registered in the global registry.
|
||||
//
|
||||
//
|
||||
// Type checking an Any
|
||||
// # Type checking an Any
|
||||
//
|
||||
// In order to type check whether an Any message represents some other message,
|
||||
// then use the MessageIs method:
|
||||
@@ -115,7 +112,6 @@
|
||||
// }
|
||||
// ... // make use of m
|
||||
// }
|
||||
//
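// As a compact sketch of the flow above (pb.Foo stands in for any generated
// message type):
//
//	src := &pb.Foo{}
//	any, err := anypb.New(src)
//	if err != nil {
//		... // handle error
//	}
//	dst := &pb.Foo{}
//	if err := any.UnmarshalTo(dst); err != nil {
//		... // handle error
//	}
//	_ = any.MessageIs(&pb.Foo{}) // reports whether any wraps a pb.Foo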
|
||||
package anypb
|
||||
|
||||
import (
|
||||
@@ -136,45 +132,49 @@ import (
|
||||
//
|
||||
// Example 1: Pack and unpack a message in C++.
|
||||
//
|
||||
// Foo foo = ...;
|
||||
// Any any;
|
||||
// any.PackFrom(foo);
|
||||
// ...
|
||||
// if (any.UnpackTo(&foo)) {
|
||||
// ...
|
||||
// }
|
||||
// Foo foo = ...;
|
||||
// Any any;
|
||||
// any.PackFrom(foo);
|
||||
// ...
|
||||
// if (any.UnpackTo(&foo)) {
|
||||
// ...
|
||||
// }
|
||||
//
|
||||
// Example 2: Pack and unpack a message in Java.
|
||||
//
|
||||
// Foo foo = ...;
|
||||
// Any any = Any.pack(foo);
|
||||
// ...
|
||||
// if (any.is(Foo.class)) {
|
||||
// foo = any.unpack(Foo.class);
|
||||
// }
|
||||
// Foo foo = ...;
|
||||
// Any any = Any.pack(foo);
|
||||
// ...
|
||||
// if (any.is(Foo.class)) {
|
||||
// foo = any.unpack(Foo.class);
|
||||
// }
|
||||
// // or ...
|
||||
// if (any.isSameTypeAs(Foo.getDefaultInstance())) {
|
||||
// foo = any.unpack(Foo.getDefaultInstance());
|
||||
// }
|
||||
//
|
||||
// Example 3: Pack and unpack a message in Python.
|
||||
// Example 3: Pack and unpack a message in Python.
|
||||
//
|
||||
// foo = Foo(...)
|
||||
// any = Any()
|
||||
// any.Pack(foo)
|
||||
// ...
|
||||
// if any.Is(Foo.DESCRIPTOR):
|
||||
// any.Unpack(foo)
|
||||
// ...
|
||||
// foo = Foo(...)
|
||||
// any = Any()
|
||||
// any.Pack(foo)
|
||||
// ...
|
||||
// if any.Is(Foo.DESCRIPTOR):
|
||||
// any.Unpack(foo)
|
||||
// ...
|
||||
//
|
||||
// Example 4: Pack and unpack a message in Go
|
||||
// Example 4: Pack and unpack a message in Go
|
||||
//
|
||||
// foo := &pb.Foo{...}
|
||||
// any, err := anypb.New(foo)
|
||||
// if err != nil {
|
||||
// ...
|
||||
// }
|
||||
// ...
|
||||
// foo := &pb.Foo{}
|
||||
// if err := any.UnmarshalTo(foo); err != nil {
|
||||
// ...
|
||||
// }
|
||||
// foo := &pb.Foo{...}
|
||||
// any, err := anypb.New(foo)
|
||||
// if err != nil {
|
||||
// ...
|
||||
// }
|
||||
// ...
|
||||
// foo := &pb.Foo{}
|
||||
// if err := any.UnmarshalTo(foo); err != nil {
|
||||
// ...
|
||||
// }
|
||||
//
|
||||
// The pack methods provided by protobuf library will by default use
|
||||
// 'type.googleapis.com/full.type.name' as the type URL and the unpack
|
||||
@@ -182,35 +182,33 @@ import (
|
||||
// in the type URL, for example "foo.bar.com/x/y.z" will yield type
|
||||
// name "y.z".
|
||||
//
|
||||
// # JSON
|
||||
//
|
||||
// JSON
|
||||
// ====
|
||||
// The JSON representation of an `Any` value uses the regular
|
||||
// representation of the deserialized, embedded message, with an
|
||||
// additional field `@type` which contains the type URL. Example:
|
||||
//
|
||||
// package google.profile;
|
||||
// message Person {
|
||||
// string first_name = 1;
|
||||
// string last_name = 2;
|
||||
// }
|
||||
// package google.profile;
|
||||
// message Person {
|
||||
// string first_name = 1;
|
||||
// string last_name = 2;
|
||||
// }
|
||||
//
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.profile.Person",
|
||||
// "firstName": <string>,
|
||||
// "lastName": <string>
|
||||
// }
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.profile.Person",
|
||||
// "firstName": <string>,
|
||||
// "lastName": <string>
|
||||
// }
|
||||
//
|
||||
// If the embedded message type is well-known and has a custom JSON
|
||||
// representation, that representation will be embedded adding a field
|
||||
// `value` which holds the custom JSON in addition to the `@type`
|
||||
// field. Example (for message [google.protobuf.Duration][]):
|
||||
//
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.protobuf.Duration",
|
||||
// "value": "1.212s"
|
||||
// }
|
||||
//
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.protobuf.Duration",
|
||||
// "value": "1.212s"
|
||||
// }
|
||||
type Any struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
@@ -228,14 +226,14 @@ type Any struct {
|
||||
// scheme `http`, `https`, or no scheme, one can optionally set up a type
|
||||
// server that maps type URLs to message definitions as follows:
|
||||
//
|
||||
// * If no scheme is provided, `https` is assumed.
|
||||
// * An HTTP GET on the URL must yield a [google.protobuf.Type][]
|
||||
// value in binary format, or produce an error.
|
||||
// * Applications are allowed to cache lookup results based on the
|
||||
// URL, or have them precompiled into a binary to avoid any
|
||||
// lookup. Therefore, binary compatibility needs to be preserved
|
||||
// on changes to types. (Use versioned type names to manage
|
||||
// breaking changes.)
|
||||
// - If no scheme is provided, `https` is assumed.
|
||||
// - An HTTP GET on the URL must yield a [google.protobuf.Type][]
|
||||
// value in binary format, or produce an error.
|
||||
// - Applications are allowed to cache lookup results based on the
|
||||
// URL, or have them precompiled into a binary to avoid any
|
||||
// lookup. Therefore, binary compatibility needs to be preserved
|
||||
// on changes to types. (Use versioned type names to manage
|
||||
// breaking changes.)
|
||||
//
|
||||
// Note: this functionality is not currently available in the official
|
||||
// protobuf release, and it is not used for type URLs beginning with
|
||||
@@ -243,7 +241,6 @@ type Any struct {
|
||||
//
|
||||
// Schemes other than `http`, `https` (or the empty scheme) might be
|
||||
// used with implementation specific semantics.
|
||||
//
|
||||
TypeUrl string `protobuf:"bytes,1,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"`
|
||||
// Must be a valid serialized protocol buffer of the above specified type.
|
||||
Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
|
||||
|
||||
63  vendor/google.golang.org/protobuf/types/known/durationpb/duration.pb.go  generated  vendored
@@ -35,8 +35,7 @@
|
||||
//
|
||||
// The Duration message represents a signed span of time.
|
||||
//
|
||||
//
|
||||
// Conversion to a Go Duration
|
||||
// # Conversion to a Go Duration
|
||||
//
|
||||
// The AsDuration method can be used to convert a Duration message to a
|
||||
// standard Go time.Duration value:
|
||||
@@ -65,15 +64,13 @@
|
||||
// the resulting value to the closest representable value (e.g., math.MaxInt64
|
||||
// for positive overflow and math.MinInt64 for negative overflow).
|
||||
//
|
||||
//
|
||||
// Conversion from a Go Duration
|
||||
// # Conversion from a Go Duration
|
||||
//
|
||||
// The durationpb.New function can be used to construct a Duration message
|
||||
// from a standard Go time.Duration value:
|
||||
//
|
||||
// dur := durationpb.New(d)
|
||||
// ... // make use of d as a *durationpb.Duration
|
||||
//
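// Combining both directions in one short sketch:
//
//	d := 90 * time.Second
//	dur := durationpb.New(d) // *durationpb.Duration
//	if err := dur.CheckValid(); err != nil {
//		... // handle error
//	}
//	back := dur.AsDuration() // time.Duration; back == d
//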
|
||||
package durationpb
|
||||
|
||||
import (
|
||||
@@ -96,43 +93,43 @@ import (
|
||||
//
|
||||
// Example 1: Compute Duration from two Timestamps in pseudo code.
|
||||
//
|
||||
// Timestamp start = ...;
|
||||
// Timestamp end = ...;
|
||||
// Duration duration = ...;
|
||||
// Timestamp start = ...;
|
||||
// Timestamp end = ...;
|
||||
// Duration duration = ...;
|
||||
//
|
||||
// duration.seconds = end.seconds - start.seconds;
|
||||
// duration.nanos = end.nanos - start.nanos;
|
||||
// duration.seconds = end.seconds - start.seconds;
|
||||
// duration.nanos = end.nanos - start.nanos;
|
||||
//
|
||||
// if (duration.seconds < 0 && duration.nanos > 0) {
|
||||
// duration.seconds += 1;
|
||||
// duration.nanos -= 1000000000;
|
||||
// } else if (duration.seconds > 0 && duration.nanos < 0) {
|
||||
// duration.seconds -= 1;
|
||||
// duration.nanos += 1000000000;
|
||||
// }
|
||||
// if (duration.seconds < 0 && duration.nanos > 0) {
|
||||
// duration.seconds += 1;
|
||||
// duration.nanos -= 1000000000;
|
||||
// } else if (duration.seconds > 0 && duration.nanos < 0) {
|
||||
// duration.seconds -= 1;
|
||||
// duration.nanos += 1000000000;
|
||||
// }
|
||||
//
|
||||
// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
|
||||
//
|
||||
// Timestamp start = ...;
|
||||
// Duration duration = ...;
|
||||
// Timestamp end = ...;
|
||||
// Timestamp start = ...;
|
||||
// Duration duration = ...;
|
||||
// Timestamp end = ...;
|
||||
//
|
||||
// end.seconds = start.seconds + duration.seconds;
|
||||
// end.nanos = start.nanos + duration.nanos;
|
||||
// end.seconds = start.seconds + duration.seconds;
|
||||
// end.nanos = start.nanos + duration.nanos;
|
||||
//
|
||||
// if (end.nanos < 0) {
|
||||
// end.seconds -= 1;
|
||||
// end.nanos += 1000000000;
|
||||
// } else if (end.nanos >= 1000000000) {
|
||||
// end.seconds += 1;
|
||||
// end.nanos -= 1000000000;
|
||||
// }
|
||||
// if (end.nanos < 0) {
|
||||
// end.seconds -= 1;
|
||||
// end.nanos += 1000000000;
|
||||
// } else if (end.nanos >= 1000000000) {
|
||||
// end.seconds += 1;
|
||||
// end.nanos -= 1000000000;
|
||||
// }
|
||||
//
|
||||
// Example 3: Compute Duration from datetime.timedelta in Python.
|
||||
//
|
||||
// td = datetime.timedelta(days=3, minutes=10)
|
||||
// duration = Duration()
|
||||
// duration.FromTimedelta(td)
|
||||
// td = datetime.timedelta(days=3, minutes=10)
|
||||
// duration = Duration()
|
||||
// duration.FromTimedelta(td)
|
||||
//
|
||||
// # JSON Mapping
|
||||
//
|
||||
@@ -143,8 +140,6 @@ import (
|
||||
// encoded in JSON format as "3s", while 3 seconds and 1 nanosecond should
|
||||
// be expressed in JSON format as "3.000000001s", and 3 seconds and 1
|
||||
// microsecond should be expressed in JSON format as "3.000001s".
|
||||
//
|
||||
//
|
||||
type Duration struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
|
||||
8  vendor/google.golang.org/protobuf/types/known/emptypb/empty.pb.go  generated  vendored
@@ -44,11 +44,9 @@ import (
|
||||
// empty messages in your APIs. A typical example is to use it as the request
|
||||
// or the response type of an API method. For instance:
|
||||
//
|
||||
// service Foo {
|
||||
// rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
|
||||
// }
|
||||
//
|
||||
// The JSON representation for `Empty` is empty JSON object `{}`.
|
||||
// service Foo {
|
||||
// rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
|
||||
// }
|
||||
type Empty struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
|
||||
137  vendor/google.golang.org/protobuf/types/known/fieldmaskpb/field_mask.pb.go  generated  vendored
@@ -37,8 +37,7 @@
|
||||
// The paths are specific to some target message type,
|
||||
// which is not stored within the FieldMask message itself.
|
||||
//
|
||||
//
|
||||
// Constructing a FieldMask
|
||||
// # Constructing a FieldMask
|
||||
//
|
||||
// The New function is used construct a FieldMask:
|
||||
//
|
||||
@@ -61,8 +60,7 @@
|
||||
// ... // handle error
|
||||
// }
|
||||
//
|
||||
//
|
||||
// Type checking a FieldMask
|
||||
// # Type checking a FieldMask
|
||||
//
|
||||
// In order to verify that a FieldMask represents a set of fields that are
|
||||
// reachable from some target message type, use the IsValid method:
|
||||
@@ -89,8 +87,8 @@ import (
|
||||
|
||||
// `FieldMask` represents a set of symbolic field paths, for example:
|
||||
//
|
||||
// paths: "f.a"
|
||||
// paths: "f.b.d"
|
||||
// paths: "f.a"
|
||||
// paths: "f.b.d"
|
||||
//
|
||||
// Here `f` represents a field in some root message, `a` and `b`
|
||||
// fields in the message found in `f`, and `d` a field found in the
|
||||
@@ -107,27 +105,26 @@ import (
|
||||
// specified in the mask. For example, if the mask in the previous
|
||||
// example is applied to a response message as follows:
|
||||
//
|
||||
// f {
|
||||
// a : 22
|
||||
// b {
|
||||
// d : 1
|
||||
// x : 2
|
||||
// }
|
||||
// y : 13
|
||||
// }
|
||||
// z: 8
|
||||
// f {
|
||||
// a : 22
|
||||
// b {
|
||||
// d : 1
|
||||
// x : 2
|
||||
// }
|
||||
// y : 13
|
||||
// }
|
||||
// z: 8
|
||||
//
|
||||
// The result will not contain specific values for fields x,y and z
|
||||
// (their value will be set to the default, and omitted in proto text
|
||||
// output):
|
||||
//
|
||||
//
|
||||
// f {
|
||||
// a : 22
|
||||
// b {
|
||||
// d : 1
|
||||
// }
|
||||
// }
|
||||
// f {
|
||||
// a : 22
|
||||
// b {
|
||||
// d : 1
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// A repeated field is not allowed except at the last position of a
|
||||
// paths string.
|
||||
@@ -165,36 +162,36 @@ import (
|
||||
//
|
||||
// For example, given the target message:
|
||||
//
|
||||
// f {
|
||||
// b {
|
||||
// d: 1
|
||||
// x: 2
|
||||
// }
|
||||
// c: [1]
|
||||
// }
|
||||
// f {
|
||||
// b {
|
||||
// d: 1
|
||||
// x: 2
|
||||
// }
|
||||
// c: [1]
|
||||
// }
|
||||
//
|
||||
// And an update message:
|
||||
//
|
||||
// f {
|
||||
// b {
|
||||
// d: 10
|
||||
// }
|
||||
// c: [2]
|
||||
// }
|
||||
// f {
|
||||
// b {
|
||||
// d: 10
|
||||
// }
|
||||
// c: [2]
|
||||
// }
|
||||
//
|
||||
// then if the field mask is:
|
||||
//
|
||||
// paths: ["f.b", "f.c"]
|
||||
// paths: ["f.b", "f.c"]
|
||||
//
|
||||
// then the result will be:
|
||||
//
|
||||
// f {
|
||||
// b {
|
||||
// d: 10
|
||||
// x: 2
|
||||
// }
|
||||
// c: [1, 2]
|
||||
// }
|
||||
// f {
|
||||
// b {
|
||||
// d: 10
|
||||
// x: 2
|
||||
// }
|
||||
// c: [1, 2]
|
||||
// }
|
||||
//
|
||||
// An implementation may provide options to override this default behavior for
|
||||
// repeated and message fields.
|
||||
@@ -232,51 +229,51 @@ import (
|
||||
//
|
||||
// As an example, consider the following message declarations:
|
||||
//
|
||||
// message Profile {
|
||||
// User user = 1;
|
||||
// Photo photo = 2;
|
||||
// }
|
||||
// message User {
|
||||
// string display_name = 1;
|
||||
// string address = 2;
|
||||
// }
|
||||
// message Profile {
|
||||
// User user = 1;
|
||||
// Photo photo = 2;
|
||||
// }
|
||||
// message User {
|
||||
// string display_name = 1;
|
||||
// string address = 2;
|
||||
// }
|
||||
//
|
||||
// In proto a field mask for `Profile` may look as such:
|
||||
//
|
||||
// mask {
|
||||
// paths: "user.display_name"
|
||||
// paths: "photo"
|
||||
// }
|
||||
// mask {
|
||||
// paths: "user.display_name"
|
||||
// paths: "photo"
|
||||
// }
|
||||
//
|
||||
// In JSON, the same mask is represented as below:
|
||||
//
|
||||
// {
|
||||
// mask: "user.displayName,photo"
|
||||
// }
|
||||
// {
|
||||
// mask: "user.displayName,photo"
|
||||
// }
|
||||
//
|
||||
// # Field Masks and Oneof Fields
|
||||
//
|
||||
// Field masks treat fields in oneofs just as regular fields. Consider the
|
||||
// following message:
|
||||
//
|
||||
// message SampleMessage {
|
||||
// oneof test_oneof {
|
||||
// string name = 4;
|
||||
// SubMessage sub_message = 9;
|
||||
// }
|
||||
// }
|
||||
// message SampleMessage {
|
||||
// oneof test_oneof {
|
||||
// string name = 4;
|
||||
// SubMessage sub_message = 9;
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// The field mask can be:
|
||||
//
|
||||
// mask {
|
||||
// paths: "name"
|
||||
// }
|
||||
// mask {
|
||||
// paths: "name"
|
||||
// }
|
||||
//
|
||||
// Or:
|
||||
//
|
||||
// mask {
|
||||
// paths: "sub_message"
|
||||
// }
|
||||
// mask {
|
||||
// paths: "sub_message"
|
||||
// }
|
||||
//
|
||||
// Note that oneof type names ("test_oneof" in this case) cannot be used in
|
||||
// paths.
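//
// A brief sketch of constructing and checking a mask in Go (pb.Profile stands
// in for the generated Profile message above):
//
//	mask, err := fieldmaskpb.New(&pb.Profile{}, "user.display_name", "photo")
//	if err != nil {
//		... // handle error
//	}
//	_ = mask.IsValid(&pb.Profile{}) // true if every path is reachable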
|
||||
|
||||
24  vendor/google.golang.org/protobuf/types/known/structpb/struct.pb.go  generated  vendored
@@ -44,8 +44,7 @@
|
||||
// "google.golang.org/protobuf/encoding/protojson" package
|
||||
// ensures that they will be serialized as their JSON equivalent.
|
||||
//
|
||||
//
|
||||
// Conversion to and from a Go interface
|
||||
// # Conversion to and from a Go interface
|
||||
//
|
||||
// The standard Go "encoding/json" package has functionality to serialize
|
||||
// arbitrary types to a large degree. The Value.AsInterface, Struct.AsMap, and
|
||||
@@ -58,8 +57,7 @@
|
||||
// forms back as Value, Struct, and ListValue messages, use the NewStruct,
|
||||
// NewList, and NewValue constructor functions.
|
||||
//
|
||||
//
|
||||
// Example usage
|
||||
// # Example usage
|
||||
//
|
||||
// Consider the following example JSON object:
|
||||
//
|
||||
@@ -118,7 +116,6 @@
|
||||
// ... // handle error
|
||||
// }
|
||||
// ... // make use of m as a *structpb.Value
|
||||
//
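// A compact sketch of the round trip described above:
//
//	s, err := structpb.NewStruct(map[string]interface{}{
//		"name":    "x",
//		"enabled": true,
//	})
//	if err != nil {
//		... // handle error
//	}
//	m := s.AsMap() // map[string]interface{} with the same contents
//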
|
||||
package structpb
|
||||
|
||||
import (
|
||||
@@ -135,7 +132,7 @@ import (
|
||||
// `NullValue` is a singleton enumeration to represent the null value for the
|
||||
// `Value` type union.
|
||||
//
|
||||
// The JSON representation for `NullValue` is JSON `null`.
|
||||
// The JSON representation for `NullValue` is JSON `null`.
|
||||
type NullValue int32
|
||||
|
||||
const (
|
||||
@@ -218,8 +215,9 @@ func NewStruct(v map[string]interface{}) (*Struct, error) {
|
||||
// AsMap converts x to a general-purpose Go map.
|
||||
// The map values are converted by calling Value.AsInterface.
|
||||
func (x *Struct) AsMap() map[string]interface{} {
|
||||
vs := make(map[string]interface{})
|
||||
for k, v := range x.GetFields() {
|
||||
f := x.GetFields()
|
||||
vs := make(map[string]interface{}, len(f))
|
||||
for k, v := range f {
|
||||
vs[k] = v.AsInterface()
|
||||
}
|
||||
return vs
|
||||
@@ -274,8 +272,8 @@ func (x *Struct) GetFields() map[string]*Value {
|
||||
|
||||
// `Value` represents a dynamically typed value which can be either
|
||||
// null, a number, a string, a boolean, a recursive struct value, or a
|
||||
// list of values. A producer of value is expected to set one of that
|
||||
// variants, absence of any variant indicates an error.
|
||||
// list of values. A producer of value is expected to set one of these
|
||||
// variants. Absence of any variant indicates an error.
|
||||
//
|
||||
// The JSON representation for `Value` is JSON value.
|
||||
type Value struct {
|
||||
@@ -286,6 +284,7 @@ type Value struct {
|
||||
// The kind of value.
|
||||
//
|
||||
// Types that are assignable to Kind:
|
||||
//
|
||||
// *Value_NullValue
|
||||
// *Value_NumberValue
|
||||
// *Value_StringValue
|
||||
@@ -596,8 +595,9 @@ func NewList(v []interface{}) (*ListValue, error) {
|
||||
// AsSlice converts x to a general-purpose Go slice.
|
||||
// The slice elements are converted by calling Value.AsInterface.
|
||||
func (x *ListValue) AsSlice() []interface{} {
|
||||
vs := make([]interface{}, len(x.GetValues()))
|
||||
for i, v := range x.GetValues() {
|
||||
vals := x.GetValues()
|
||||
vs := make([]interface{}, len(vals))
|
||||
for i, v := range vals {
|
||||
vs[i] = v.AsInterface()
|
||||
}
|
||||
return vs
|
||||
|
||||
61  vendor/google.golang.org/protobuf/types/known/timestamppb/timestamp.pb.go  generated  vendored
@@ -36,8 +36,7 @@
|
||||
// The Timestamp message represents a timestamp,
|
||||
// an instant in time since the Unix epoch (January 1st, 1970).
|
||||
//
|
||||
//
|
||||
// Conversion to a Go Time
|
||||
// # Conversion to a Go Time
|
||||
//
|
||||
// The AsTime method can be used to convert a Timestamp message to a
|
||||
// standard Go time.Time value in UTC:
|
||||
@@ -59,8 +58,7 @@
|
||||
// ... // handle error
|
||||
// }
|
||||
//
|
||||
//
|
||||
// Conversion from a Go Time
|
||||
// # Conversion from a Go Time
|
||||
//
|
||||
// The timestamppb.New function can be used to construct a Timestamp message
|
||||
// from a standard Go time.Time value:
|
||||
@@ -72,7 +70,6 @@
|
||||
//
|
||||
// ts := timestamppb.Now()
|
||||
// ... // make use of ts as a *timestamppb.Timestamp
|
||||
//
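// Combining both directions in one short sketch:
//
//	ts := timestamppb.New(time.Now()) // *timestamppb.Timestamp
//	if err := ts.CheckValid(); err != nil {
//		... // handle error
//	}
//	t := ts.AsTime() // time.Time in UTC
//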
|
||||
package timestamppb
|
||||
|
||||
import (
|
||||
@@ -101,52 +98,50 @@ import (
|
||||
//
|
||||
// Example 1: Compute Timestamp from POSIX `time()`.
|
||||
//
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds(time(NULL));
|
||||
// timestamp.set_nanos(0);
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds(time(NULL));
|
||||
// timestamp.set_nanos(0);
|
||||
//
|
||||
// Example 2: Compute Timestamp from POSIX `gettimeofday()`.
|
||||
//
|
||||
// struct timeval tv;
|
||||
// gettimeofday(&tv, NULL);
|
||||
// struct timeval tv;
|
||||
// gettimeofday(&tv, NULL);
|
||||
//
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds(tv.tv_sec);
|
||||
// timestamp.set_nanos(tv.tv_usec * 1000);
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds(tv.tv_sec);
|
||||
// timestamp.set_nanos(tv.tv_usec * 1000);
|
||||
//
|
||||
// Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
|
||||
//
|
||||
// FILETIME ft;
|
||||
// GetSystemTimeAsFileTime(&ft);
|
||||
// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
|
||||
// FILETIME ft;
|
||||
// GetSystemTimeAsFileTime(&ft);
|
||||
// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
|
||||
//
|
||||
// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
|
||||
// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
|
||||
// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
|
||||
// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
|
||||
// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
|
||||
// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
|
||||
//
|
||||
// Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
|
||||
//
|
||||
// long millis = System.currentTimeMillis();
|
||||
//
|
||||
// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
|
||||
// .setNanos((int) ((millis % 1000) * 1000000)).build();
|
||||
// long millis = System.currentTimeMillis();
|
||||
//
|
||||
// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
|
||||
// .setNanos((int) ((millis % 1000) * 1000000)).build();
|
||||
//
|
||||
// Example 5: Compute Timestamp from Java `Instant.now()`.
|
||||
//
|
||||
// Instant now = Instant.now();
|
||||
//
|
||||
// Timestamp timestamp =
|
||||
// Timestamp.newBuilder().setSeconds(now.getEpochSecond())
|
||||
// .setNanos(now.getNano()).build();
|
||||
// Instant now = Instant.now();
|
||||
//
|
||||
// Timestamp timestamp =
|
||||
// Timestamp.newBuilder().setSeconds(now.getEpochSecond())
|
||||
// .setNanos(now.getNano()).build();
|
||||
//
|
||||
// Example 6: Compute Timestamp from current time in Python.
|
||||
//
|
||||
// timestamp = Timestamp()
|
||||
// timestamp.GetCurrentTime()
|
||||
// timestamp = Timestamp()
|
||||
// timestamp.GetCurrentTime()
|
||||
//
|
||||
// # JSON Mapping
|
||||
//
|
||||
@@ -174,8 +169,6 @@ import (
|
||||
// the Joda Time's [`ISODateTimeFormat.dateTime()`](
|
||||
// http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime%2D%2D
|
||||
// ) to obtain a formatter capable of generating timestamps in this format.
|
||||
//
|
||||
//
|
||||
type Timestamp struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
|
||||
2  vendor/google.golang.org/protobuf/types/known/wrapperspb/wrappers.pb.go  generated  vendored
@@ -27,7 +27,7 @@
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
//
|
||||
// Wrappers for primitive (non-message) types. These types are useful
|
||||
// for embedding primitives in the `google.protobuf.Any` type and for places
|
||||
// where we need to distinguish between the absence of a primitive