Merge 3dbc86a6e8 into 48a58bbb07
Commit 7b01b57117
2 changed files with 587 additions and 0 deletions

controllers/gce/cmd/mode-updater/README.md (new file, 114 lines)
@@ -0,0 +1,114 @@
## (ALPHA) Backend-Service BalancingMode Updater

Earlier versions of the GLBC created GCP BackendService resources without specifying a balancing mode, so the API defaulted them to UTILIZATION (CPU utilization). The "internal load balancer" feature provided by GCP requires backend services to use the RATE balancing mode. To run a K8s cluster with both an internal load balancer and ingress resources, you'll need to perform some manual steps.

#### Why

Two GCP requirements complicate changing the backend service balancing mode:

1. An instance can only belong to one load balancer instance group (a group that has at least one backend service pointing to it).
1. A load balancer instance group can only have one balancing mode across all the backend services pointing to it.

#### Complicating factors

1. You cannot atomically update a set of backend services to a new balancing mode.
1. A default backend service always exists in the `kube-system` namespace, so you'll have at least two backend services.
#### Your Options

- (UNTESTED) If only one service is referenced by your ingresses AND that service is also the default backend specified in the Ingress spec (resulting in one used backend service and one unused backend service), you can change the mode by hand (see the `gcloud` sketch after this list):

   1. Go to the GCP Console.
   1. Delete the `kube-system` default backend service.
   1. Change the balancing mode of the used backend service.

   The GLBC should recreate the default backend service at its resync interval.

- Re-create all ingress resources. The GLBC will use RATE mode when it's not blocked by backend services with UTILIZATION mode.
   - You must be running GLBC version >0.9.1.
   - You must delete all ingress resources before re-creating them.

- Run this updater tool.
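For the untested manual option above, the equivalent `gcloud` commands look roughly like the sketch below. The backend service and instance group names are illustrative (borrowed from the example run further down), and the exact flags may vary slightly between gcloud releases:

```shell
# Hypothetical names: k8s-be-30000--<cluster-id> is the unused default backend
# service; k8s-be-31165--<cluster-id> is the backend service your Ingress uses.

# Delete the kube-system default backend service (the GLBC recreates it on resync).
gcloud compute backend-services delete k8s-be-30000--<cluster-id> --global

# Switch the used backend service's instance-group backend to RATE.
gcloud compute backend-services update-backend k8s-be-31165--<cluster-id> \
    --global \
    --instance-group k8s-ig--<cluster-id> \
    --instance-group-zone us-central1-a \
    --balancing-mode RATE \
    --max-rate-per-instance 1.0
```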
#### How the updater works

1. Create a temporary instance group `k8s-ig--migrate` in each zone where a `k8s-ig--{cluster_id}` exists.
1. Update all backend services to point to both the original and the temporary instance groups (the mode of the new backend doesn't matter).
1. Slowly migrate instances from the original to the temporary groups.
1. Update all backend services to remove pointers to the original instance groups.
1. Update all backend services to point to both the temporary and the original groups (original now with the new balancing mode!).
1. Slowly migrate instances from the temporary back to the original groups.
1. Update all backend services to remove pointers to the temporary instance groups.
1. Delete the temporary instance groups.
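To verify which balancing mode each backend service ends up with, you can inspect them with `gcloud`. This is only a sketch; the `--format` projection is an assumption about your gcloud version:

```shell
# List backend services with the balancing mode of each attached backend.
gcloud compute backend-services list --format="table(name, backends[].balancingMode)"

# Or inspect a single service in detail (the name is illustrative).
gcloud compute backend-services describe k8s-be-31165--<cluster-id> --global
```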
#### How to run

```shell
go run mode-updater.go {project-id} {cluster-id} {region} {target-balancing-mode}

# Examples

# Fetch the cluster id
CLUSTERID=`kubectl get configmaps ingress-uid -o jsonpath='{.data.uid}' --namespace=kube-system`

# For upgrading
go run mode-updater.go my-project $CLUSTERID us-central1 RATE

# For reversing
go run mode-updater.go my-project $CLUSTERID us-central1 UTILIZATION
```
**Example Run**

```shell
➜ go run mode-updater.go nicksardo-project c4424dd5f02d3cad us-central1 RATE

Backend-Service BalancingMode Updater 0.1
Backend Services:
 - k8s-be-31165--c4424dd5f02d3cad
 - k8s-be-31696--c4424dd5f02d3cad
Instance Groups:
 - k8s-ig--c4424dd5f02d3cad (us-central1-a)

Step 1: Creating temporary instance groups in relevant zones
 - k8s-ig--migrate (us-central1-a)

Step 2: Update backend services to point to original and temporary instance groups
 - k8s-be-31165--c4424dd5f02d3cad
 - k8s-be-31696--c4424dd5f02d3cad

Step 3: Migrate instances to temporary group
 - kubernetes-minion-group-f060 (us-central1-a): removed from k8s-ig--c4424dd5f02d3cad, added to k8s-ig--migrate
 - kubernetes-minion-group-pnbl (us-central1-a): removed from k8s-ig--c4424dd5f02d3cad, added to k8s-ig--migrate
 - kubernetes-minion-group-t6dl (us-central1-a): removed from k8s-ig--c4424dd5f02d3cad, added to k8s-ig--migrate

Step 4: Update backend services to point only to temporary instance groups
 - k8s-be-31165--c4424dd5f02d3cad
 - k8s-be-31696--c4424dd5f02d3cad

Step 5: Update backend services to point to both temporary and original (with new balancing mode) instance groups
 - k8s-be-31165--c4424dd5f02d3cad
 - k8s-be-31696--c4424dd5f02d3cad

Step 6: Migrate instances back to original groups
 - kubernetes-minion-group-f060 (us-central1-a): removed from k8s-ig--migrate, added to k8s-ig--c4424dd5f02d3cad
 - kubernetes-minion-group-pnbl (us-central1-a): removed from k8s-ig--migrate, added to k8s-ig--c4424dd5f02d3cad
 - kubernetes-minion-group-t6dl (us-central1-a): removed from k8s-ig--migrate, added to k8s-ig--c4424dd5f02d3cad

Step 7: Update backend services to point only to original instance groups
 - k8s-be-31165--c4424dd5f02d3cad
 - k8s-be-31696--c4424dd5f02d3cad

Step 8: Delete temporary instance groups
 - k8s-ig--migrate (us-central1-a)
```
#### Interaction with GCE Ingress Controller

After one or more instances have been removed from their instance group, the controller will start throwing validation errors and will try to sync the instances back. However, by then the instance will hopefully already belong to `k8s-ig--migrate`, and the controller has no logic to take it out of that group. Therefore, the controller can only interrupt the migration in the window between removal from one group and insertion into the other. On the second set of migrations this interaction is harmless, since the updater and the controller share the same destination group. If the controller does prevent an instance from being added to the migrate IG, the updater will attempt the migration again, so do not be alarmed by multiple attempts.
#### Maintaining Up-time

This may not be a perfect solution, but the updater sleeps for 3 minutes between sensitive changes to the load balancer. For instance, it sleeps after updating the backend services to point to the new migration instance groups and before migrating instances. Without these occasional sleeps, the updater caused some 502s for a short period (on the order of seconds to minutes). When testing with the sleeps in place, no 502s were detected.
#### TODO

- [x] If only one backend service exists, just update it in place.
- [x] If all backend services are already in the target balancing mode, return early.
- [x] Wait for op completion instead of sleeping.

#### Warning

This tool hasn't been fully tested. Use at your own risk. You should run it on a test cluster before running it on important clusters.
controllers/gce/cmd/mode-updater/mode-updater.go (new file, 473 lines)
@@ -0,0 +1,473 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"strings"
	"time"

	"github.com/golang/glog"
	"k8s.io/apimachinery/pkg/util/wait"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"

	compute "google.golang.org/api/compute/v1"
	"google.golang.org/api/googleapi"
)

var (
	projectID           string
	clusterID           string
	regionName          string
	targetBalancingMode string

	instanceGroupName string

	s         *compute.Service
	zones     []*compute.Zone
	igs       map[string]*compute.InstanceGroup
	instances map[string][]string
)

const (
	instanceGroupTemp = "k8s-ig--migrate"
	balancingModeRATE = "RATE"
	balancingModeUTIL = "UTILIZATION"

	loadBalancerUpdateTime = 3 * time.Minute

	operationPollInterval        = 1 * time.Second
	operationPollTimeoutDuration = time.Hour

	version = 0.1
)

func main() {
	fmt.Println("Backend-Service BalancingMode Updater", version)
	flag.Parse()

	args := flag.Args()
	if len(args) != 4 {
		log.Fatalf("Expected four arguments: project_id cluster_id region balancing_mode")
	}
	projectID, clusterID, regionName, targetBalancingMode = args[0], args[1], args[2], args[3]

	switch targetBalancingMode {
	case balancingModeRATE, balancingModeUTIL:
	default:
		panic(fmt.Errorf("expected either %s or %s, actual: %v", balancingModeRATE, balancingModeUTIL, targetBalancingMode))
	}

	igs = make(map[string]*compute.InstanceGroup)

	tokenSource, err := google.DefaultTokenSource(
		oauth2.NoContext,
		compute.CloudPlatformScope,
		compute.ComputeScope)
	if err != nil {
		panic(err)
	}

	client := oauth2.NewClient(oauth2.NoContext, tokenSource)
	s, err = compute.New(client)
	if err != nil {
		panic(err)
	}

	// Get Zones
	zoneFilter := fmt.Sprintf("(region eq %s)", createRegionLink(regionName))
	zoneList, err := s.Zones.List(projectID).Filter(zoneFilter).Do()
	if err != nil {
		panic(err)
	}
	zones = zoneList.Items

	if len(zones) == 0 {
		panic(fmt.Errorf("Expected at least one zone in region: %v", regionName))
	}

	instanceGroupName = fmt.Sprintf("k8s-ig--%s", clusterID)
	instances = make(map[string][]string)

	// Get instance groups
	for _, z := range zones {
		igl, err := s.InstanceGroups.List(projectID, z.Name).Do()
		if err != nil {
			panic(err)
		}
		for _, ig := range igl.Items {
			if instanceGroupName != ig.Name {
				continue
			}

			// Note instances
			r := &compute.InstanceGroupsListInstancesRequest{InstanceState: "ALL"}
			instList, err := s.InstanceGroups.ListInstances(projectID, getResourceName(ig.Zone, "zones"), ig.Name, r).Do()
			if err != nil {
				panic(err)
			}

			var instanceLinks []string
			for _, i := range instList.Items {
				instanceLinks = append(instanceLinks, i.Instance)
			}

			// Note instance group in zone
			igs[z.Name] = ig
			instances[z.Name] = instanceLinks
		}
	}

	if len(igs) == 0 {
		panic(fmt.Errorf("Expected at least one instance group named: %v", instanceGroupName))
	}

	bs := getBackendServices()
	fmt.Println("Backend Services:")
	for _, b := range bs {
		fmt.Println(" - ", b.Name)
	}
	fmt.Println("Instance Groups:")
	for z, g := range igs {
		fmt.Printf(" - %v (%v)\n", g.Name, z)
	}

	// Early return for special cases
	switch len(bs) {
	case 0:
		fmt.Println("\nThere are 0 backend services - no action necessary")
		return
	case 1:
		updateSingleBackend(bs[0])
		return
	}

	// Check there's work to be done
	if typeOfBackends(bs) == targetBalancingMode {
		fmt.Println("\nBackends are already set to target mode")
		return
	}

	// Performing update for 2+ backend services
	updateMultipleBackends()
}

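// updateMultipleBackends performs the 8-step migration described in the README:
// create temporary instance groups, point every backend service at both the
// original and temporary groups, shift instances over, flip the original
// backends to the target balancing mode, shift instances back, and finally
// delete the temporary groups. Sleeps between sensitive steps give the load
// balancer time to settle.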
func updateMultipleBackends() {
	fmt.Println("\nStep 1: Creating temporary instance groups in relevant zones")
	// Create temporary instance groups
	for zone, ig := range igs {
		_, err := s.InstanceGroups.Get(projectID, zone, instanceGroupTemp).Do()
		if err != nil {
			newIg := &compute.InstanceGroup{
				Name:       instanceGroupTemp,
				Zone:       zone,
				NamedPorts: ig.NamedPorts,
			}
			fmt.Printf(" - %v (%v)\n", instanceGroupTemp, zone)
			op, err := s.InstanceGroups.Insert(projectID, zone, newIg).Do()
			if err != nil {
				panic(err)
			}

			if err = waitForZoneOp(op, zone); err != nil {
				panic(err)
			}
		}
	}

	// Straddle both groups
	fmt.Println("\nStep 2: Update backend services to point to original and temporary instance groups")
	setBackendsTo(true, balancingModeInverse(targetBalancingMode), true, balancingModeInverse(targetBalancingMode))

	sleep(loadBalancerUpdateTime)

	fmt.Println("\nStep 3: Migrate instances to temporary group")
	migrateInstances(instanceGroupName, instanceGroupTemp)

	sleep(loadBalancerUpdateTime)

	// Remove original backends to get rid of old balancing mode
	fmt.Println("\nStep 4: Update backend services to point only to temporary instance groups")
	setBackendsTo(false, "", true, balancingModeInverse(targetBalancingMode))

	// Straddle both groups (adds backends for the original groups with the target mode)
	fmt.Println("\nStep 5: Update backend services to point to both temporary and original (with new balancing mode) instance groups")
	setBackendsTo(true, targetBalancingMode, true, balancingModeInverse(targetBalancingMode))

	sleep(loadBalancerUpdateTime)

	fmt.Println("\nStep 6: Migrate instances back to original groups")
	migrateInstances(instanceGroupTemp, instanceGroupName)

	sleep(loadBalancerUpdateTime)

	fmt.Println("\nStep 7: Update backend services to point only to original instance groups")
	setBackendsTo(true, targetBalancingMode, false, "")

	fmt.Println("\nStep 8: Delete temporary instance groups")
	for z := range igs {
		fmt.Printf(" - %v (%v)\n", instanceGroupTemp, z)
		op, err := s.InstanceGroups.Delete(projectID, z, instanceGroupTemp).Do()
		if err != nil {
			fmt.Println("Couldn't delete temporary instance group", instanceGroupTemp)
			continue
		}

		if err = waitForZoneOp(op, z); err != nil {
			fmt.Println("Couldn't wait for operation: deleting temporary instance group", instanceGroupTemp)
		}
	}
}

func sleep(t time.Duration) {
	fmt.Println("\nSleeping for", t)
	time.Sleep(t)
}

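// setBackendsTo rebuilds the backend list of every matching backend service.
// When orig is true, a backend pointing at the original instance group is added
// in each zone with balancing mode origMode; when temp is true, the same is done
// for the temporary group with tempMode. The update fully replaces the existing
// backends on each service.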
func setBackendsTo(orig bool, origMode string, temp bool, tempMode string) {
	bs := getBackendServices()
	for _, bsi := range bs {
		var union []*compute.Backend
		for zone := range igs {
			if orig {
				b := &compute.Backend{
					Group:              createInstanceGroupLink(zone, instanceGroupName),
					BalancingMode:      origMode,
					CapacityScaler:     0.8,
					MaxRatePerInstance: 1.0,
				}
				union = append(union, b)
			}
			if temp {
				b := &compute.Backend{
					Group:              createInstanceGroupLink(zone, instanceGroupTemp),
					BalancingMode:      tempMode,
					CapacityScaler:     0.8,
					MaxRatePerInstance: 1.0,
				}
				union = append(union, b)
			}
		}
		bsi.Backends = union
		fmt.Printf(" - %v\n", bsi.Name)
		op, err := s.BackendServices.Update(projectID, bsi.Name, bsi).Do()
		if err != nil {
			panic(err)
		}

		if err = waitForGlobalOp(op); err != nil {
			panic(err)
		}
	}
}

func balancingModeInverse(m string) string {
	switch m {
	case balancingModeRATE:
		return balancingModeUTIL
	case balancingModeUTIL:
		return balancingModeRATE
	default:
		return ""
	}
}

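// getBackendServices lists global backend services and keeps only the ones that
// belong to this cluster: names prefixed with "k8s-be-" and suffixed with the
// cluster ID. Regional backend services are ignored.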
func getBackendServices() (bs []*compute.BackendService) {
	bsl, err := s.BackendServices.List(projectID).Do()
	if err != nil {
		panic(err)
	}

	for _, bsli := range bsl.Items {
		// Ignore regional backend-services and only grab Kubernetes resources
		if bsli.Region == "" && strings.HasPrefix(bsli.Name, "k8s-be-") && strings.HasSuffix(bsli.Name, clusterID) {
			bs = append(bs, bsli)
		}
	}
	return bs
}

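// typeOfBackends reports the balancing mode currently in use by inspecting the
// first backend of the first backend service; the tool assumes all of a
// cluster's backends share the same mode.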
func typeOfBackends(bs []*compute.BackendService) string {
	if len(bs) == 0 || len(bs[0].Backends) == 0 {
		return ""
	}
	return bs[0].Backends[0].BalancingMode
}

func migrateInstances(fromIG, toIG string) {
	for z, links := range instances {
		for _, i := range links {
			name := getResourceName(i, "instances")
			fmt.Printf(" - %s (%s): ", name, z)
			if err := migrateInstance(z, i, fromIG, toIG); err != nil {
				fmt.Printf(" err: %v", err)
			} else {
				fmt.Println()
			}
		}
	}
}

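// migrateInstance moves a single instance from fromIG to toIG, retrying every
// few seconds for up to 10 minutes. The GCE ingress controller may race with
// the updater and re-add the instance; a "memberAlreadyExists" error on the add
// call is therefore treated as success.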
func migrateInstance(zone, instanceLink, fromIG, toIG string) error {
	attempts := 0
	return wait.Poll(3*time.Second, 10*time.Minute, func() (bool, error) {
		attempts++
		if attempts > 1 {
			fmt.Printf(" (attempt %v) ", attempts)
		}
		// Remove from old group
		rr := &compute.InstanceGroupsRemoveInstancesRequest{Instances: []*compute.InstanceReference{{Instance: instanceLink}}}
		op, err := s.InstanceGroups.RemoveInstances(projectID, zone, fromIG, rr).Do()
		if err != nil {
			fmt.Printf("failed to remove from group %v, err: %v,", fromIG, err)
		} else if err = waitForZoneOp(op, zone); err != nil {
			fmt.Printf("failed to wait for operation: removing instance from %v, err: %v,", fromIG, err)
		} else {
			fmt.Printf("removed from %v,", fromIG)
		}

		// Add to new group
		ra := &compute.InstanceGroupsAddInstancesRequest{Instances: []*compute.InstanceReference{{Instance: instanceLink}}}
		op, err = s.InstanceGroups.AddInstances(projectID, zone, toIG, ra).Do()
		if err != nil {
			if strings.Contains(err.Error(), "memberAlreadyExists") { // GLBC already added the instance back to the IG
				fmt.Printf(" already exists in %v", toIG)
			} else {
				fmt.Printf(" failed to add to group %v, err: %v\n", toIG, err)
				return false, nil
			}
		} else if err = waitForZoneOp(op, zone); err != nil {
			fmt.Printf(" failed to wait for operation: adding instance to %v, err: %v", toIG, err)
		} else {
			fmt.Printf(" added to %v", toIG)
		}

		return true, nil
	})
}

func createInstanceGroupLink(zone, igName string) string {
	return fmt.Sprintf("https://www.googleapis.com/compute/v1/projects/%s/zones/%s/instanceGroups/%s", projectID, zone, igName)
}

func createRegionLink(region string) string {
	// Build the link for the target project rather than a hard-coded test project.
	return fmt.Sprintf("https://www.googleapis.com/compute/v1/projects/%s/regions/%v", projectID, region)
}

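// getResourceName extracts the short name that follows a given resource type in
// a fully-qualified GCP self link, e.g. the zone name after ".../zones/" or the
// instance name after ".../instances/".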
func getResourceName(link string, resourceType string) string {
	s := strings.Split(link, "/")

	for i := 0; i < len(s); i++ {
		if s[i] == resourceType {
			if i+1 < len(s) {
				return s[i+1]
			}
		}
	}
	return ""
}

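// updateSingleBackend handles the case where only one backend service exists:
// its backends can be flipped to the target balancing mode in place, with no
// instance-group migration required.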
func updateSingleBackend(bs *compute.BackendService) {
	needsUpdate := false
	for _, b := range bs.Backends {
		if b.BalancingMode != targetBalancingMode {
			needsUpdate = true
			b.BalancingMode = targetBalancingMode
		}
	}

	if !needsUpdate {
		fmt.Println("Single backend had all targetBalancingMode - no change necessary")
		return
	}

	op, err := s.BackendServices.Update(projectID, bs.Name, bs).Do()
	if err != nil {
		panic(err)
	}

	if err = waitForGlobalOp(op); err != nil {
		panic(err)
	}

	fmt.Println("Updated single backend service to target balancing mode.")
}

// Below operations are copied from the GCE CloudProvider and modified to be static

func waitForOp(op *compute.Operation, getOperation func(operationName string) (*compute.Operation, error)) error {
	if op == nil {
		return fmt.Errorf("operation must not be nil")
	}

	if opIsDone(op) {
		return getErrorFromOp(op)
	}

	opStart := time.Now()
	opName := op.Name
	return wait.Poll(operationPollInterval, operationPollTimeoutDuration, func() (bool, error) {
		start := time.Now()
		duration := time.Now().Sub(start)
		if duration > 5*time.Second {
			glog.Infof("pollOperation: throttled %v for %v", duration, opName)
		}
		pollOp, err := getOperation(opName)
		if err != nil {
			glog.Warningf("GCE poll operation %s failed: pollOp: [%v] err: [%v] getErrorFromOp: [%v]",
				opName, pollOp, err, getErrorFromOp(pollOp))
		}
		done := opIsDone(pollOp)
		if done {
			duration := time.Now().Sub(opStart)
			if duration > 1*time.Minute {
				// Log the JSON. It's cleaner than the %v structure.
				enc, err := pollOp.MarshalJSON()
				if err != nil {
					glog.Warningf("waitForOperation: long operation (%v): %v (failed to encode to JSON: %v)",
						duration, pollOp, err)
				} else {
					glog.Infof("waitForOperation: long operation (%v): %v",
						duration, string(enc))
				}
			}
		}
		return done, getErrorFromOp(pollOp)
	})
}

func opIsDone(op *compute.Operation) bool {
	return op != nil && op.Status == "DONE"
}

func getErrorFromOp(op *compute.Operation) error {
	if op != nil && op.Error != nil && len(op.Error.Errors) > 0 {
		err := &googleapi.Error{
			Code:    int(op.HttpErrorStatusCode),
			Message: op.Error.Errors[0].Message,
		}
		glog.Errorf("GCE operation failed: %v", err)
		return err
	}

	return nil
}

func waitForGlobalOp(op *compute.Operation) error {
	return waitForOp(op, func(operationName string) (*compute.Operation, error) {
		return s.GlobalOperations.Get(projectID, operationName).Do()
	})
}

func waitForRegionOp(op *compute.Operation, region string) error {
	return waitForOp(op, func(operationName string) (*compute.Operation, error) {
		return s.RegionOperations.Get(projectID, region, operationName).Do()
	})
}

func waitForZoneOp(op *compute.Operation, zone string) error {
	return waitForOp(op, func(operationName string) (*compute.Operation, error) {
		return s.ZoneOperations.Get(projectID, zone, operationName).Do()
	})
}