22 Commits

Author  SHA1  Message  Date
salatamartin  b8373cf757  Merge pull request #4 from ysoftdevs/feature/cpu-limits (Remove CPU requests and limits)  2022-11-04 11:43:01 +01:00
Martin Šalata  b9cf5c5c95  Fix the controller registration error (return back the fleetmanager kubeconfig override)  2022-11-03 15:30:47 +01:00
Martin Šalata  5f56a88996  Remove CPU requests and limits  2022-11-03 14:38:36 +01:00
salatamartin  c8550a8455  Merge pull request #3 from ysoftdevs/feature/token-requestor (Feature/token requestor)  2022-11-03 14:35:39 +01:00
pvito  cf6651771b  Removing unnecessary err check.  2022-07-11 10:47:04 +02:00
pvito  5c297accd3  Skip existing cluster check for now.  2022-07-08 22:28:33 +02:00
pvito  6da54ab7df  Skip existing cluster check for now.  2022-07-08 22:21:46 +02:00
pvito  fd217747d4  Replace kubeconfig server url with actual ingress URL from shoot status.  2022-07-01 23:36:27 +02:00
Martin Šalata  87615a7c2c  Make fleet use the new kubeconfig secrets (fetched via token requestor flow) to access shoot clusters  2022-06-09 13:17:22 +02:00
Martin Šalata  4cf1cf33dc  Fix reconciliation of fleet pre-token requestor  2022-06-08 14:42:13 +02:00
Martin Šalata  d9d3a12dd9  Add the forgotten utils module  2022-06-07 14:29:54 +02:00
Martin Šalata  597ac2630b  Rename package from javamachr to ysoftdevs  2022-06-07 14:22:56 +02:00
Martin Šalata  f1b1ebbdf3  Fix golang version  2022-06-07 14:08:48 +02:00
Martin Šalata  4188b33fd4  Remove generic credentials  2022-06-07 13:09:21 +02:00
Martin Šalata  23dc67eb46  Fix CircleCI  2022-06-07 12:57:46 +02:00
Martin Šalata  99f210a8af  Add token requestor flow  2022-06-07 12:57:34 +02:00
salatamartin  19c3602c8a  Merge pull request #2 from ysoftdevs/feature/token-requestor (Update dependencies)  2022-06-07 10:20:04 +02:00
Martin Šalata  3f865bb2fc  Fix make install  2022-06-06 13:13:56 +02:00
Martin Šalata  8f3e13a2c4  Update dependencies  2022-06-06 12:39:54 +02:00
salatamartin  8b33197211  Merge pull request #1 from ysoftdevs/feature/token-requestor (Remove the vendor folder)  2022-06-06 12:37:54 +02:00
Martin Šalata  dd5766f1bb  Remove the vendor folder  2022-06-06 12:33:05 +02:00
Jakub Vavřík  43be343865  Bumped version for next development v1.0.1 [ci skip]  2021-02-24 12:45:08 +01:00
4111 changed files with 1410 additions and 1256698 deletions


@@ -2,11 +2,7 @@ version: 2
jobs:
build:
docker: # run the steps with Docker
- image: circleci/golang:1.15
auth:
username: javamachr
password: $DOCKERHUB_PASSWORD
- image: circleci/golang:1.17
steps:
- checkout # check out source code to working directory
@@ -18,14 +14,6 @@ jobs:
keys:
- go-mod-v4-{{ checksum "go.sum" }}
- run:
name: Install requirements
command: make install-requirements
- run:
name: Generate sources
command: make generate
- run:
name: Docker login
command: make docker-login
@@ -34,10 +22,6 @@ jobs:
name: Build docker images
command: make docker-images
- run:
name: Build docker images
command: make docker-images
- run:
name: Push docker images
command: make docker-push


@@ -1,7 +1,6 @@
---
name: Bug Report
about: Report a bug encountered while working with this Gardener extension
labels: kind/bug
---
@@ -14,13 +13,11 @@ If multiple identifiers make sense you can also state the commands multiple time
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|operations|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
"/priority" identifiers: normal|critical|blocker
-->
/area TODO
/kind bug
/priority normal
**What happened**:


@@ -1,7 +1,6 @@
---
name: Enhancement Request
about: Suggest an enhancement for this extension
labels: kind/enhancement
---
@@ -14,13 +13,11 @@ If multiple identifiers make sense you can also state the commands multiple time
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|operations|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
"/priority" identifiers: normal|critical|blocker
-->
/area TODO
/kind enhancement
/priority normal
**What would you like to be added**:


@@ -2,7 +2,6 @@
name: Flaking Test
about: Report flaky tests or jobs in Gardener CI
title: "[Flaky Test] FLAKING TEST/SUITE"
labels: kind/flake
---
@@ -17,13 +16,11 @@ If multiple identifiers make sense you can also state the commands multiple time
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|operations|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
"/priority" identifiers: normal|critical|blocker
-->
/area testing
/kind flake
/priority normal
**Which test(s)/suite(s) are flaking**:


@@ -1,7 +1,6 @@
---
name: Support Request
about: Support request or question relating to this extension
labels: kind/question
---
@@ -10,5 +9,5 @@ STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, please post your question on the [Kubernetes Slack](http://slack.k8s.io/) ([#gardener](https://kubernetes.slack.com/messages/gardener) channel) or join our [weekly meetings](https://github.com/gardener/documentation/blob/master/CONTRIBUTING.md#weekly-meeting).
If you're looking for help, please post your question on the [Kubernetes Slack](http://slack.k8s.io/) ([#gardener](https://kubernetes.slack.com/messages/gardener) channel) or join our [bi-weekly meetings](https://gardener.cloud/docs/contribute/#bi-weekly-meetings).
-->


@@ -9,13 +9,11 @@ If multiple identifiers make sense you can also state the commands multiple time
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
"/priority" identifiers: normal|critical|blocker
For Gardener Enhancement Proposals (GEPs), please check the following [documentation](https://github.com/gardener/gardener/tree/master/docs/proposals/README.md) before submitting this pull request.
-->
/area TODO
/kind TODO
/priority normal
**What this PR does / why we need it**:

.gitignore

@@ -5,6 +5,7 @@
/local
**/dev
/bin
./vendor
*.coverprofile
*.html


@@ -1,12 +1,12 @@
############# builder
FROM eu.gcr.io/gardener-project/3rd/golang:1.15.5 AS builder
FROM golang:1.17.10 AS builder
WORKDIR /go/src/github.com/gardener/gardener-extension-shoot-fleet-agent
WORKDIR /go/src/github.com/ysoftdevs/gardener-extension-shoot-fleet-agent
COPY . .
RUN make install
RUN make vendor install
############# gardener-extension-shoot-fleet-agent
FROM eu.gcr.io/gardener-project/3rd/alpine:3.12.3 AS gardener-extension-shoot-fleet-agent
FROM alpine:3.15.4 AS gardener-extension-shoot-fleet-agent
COPY charts /charts
COPY --from=builder /go/bin/gardener-extension-shoot-fleet-agent /gardener-extension-shoot-fleet-agent


@@ -14,12 +14,12 @@
EXTENSION_PREFIX := gardener-extension
NAME := shoot-fleet-agent
REGISTRY := javamachr
REGISTRY := ysoftglobal.azurecr.io
IMAGE_PREFIX := $(REGISTRY)/gardener-extension
REPO_ROOT := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
HACK_DIR := $(REPO_ROOT)/hack
VERSION := $(shell cat "$(REPO_ROOT)/VERSION")
LD_FLAGS := "-w -X github.com/javamachr/$(EXTENSION_PREFIX)-$(NAME)/pkg/version.Version=$(IMAGE_TAG)"
LD_FLAGS := "-w -X github.com/ysoftdevs/$(EXTENSION_PREFIX)-$(NAME)/pkg/version.Version=$(IMAGE_TAG)"
LEADER_ELECTION := false
IGNORE_OPERATION_ANNOTATION := true
@@ -44,12 +44,12 @@ start:
.PHONY: install
install:
@LD_FLAGS="-w -X github.com/gardener/$(EXTENSION_PREFIX)-$(NAME)/pkg/version.Version=$(VERSION)" \
@LD_FLAGS="-w -X github.com/ysoftdevs/$(EXTENSION_PREFIX)-$(NAME)/pkg/version.Version=$(VERSION)" \
$(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/install.sh ./...
.PHONY: docker-login
docker-login:
echo ${DOCKER_PASS} | docker login -u ${DOCKER_USER} --password-stdin
@echo "$$DOCKER_PASSWORD" | docker login -u "${DOCKER_USER}" --password-stdin "${REGISTRY}"
.PHONY: docker-images
docker-images:
@@ -59,6 +59,23 @@ docker-images:
docker-push:
@docker push $(IMAGE_PREFIX)-$(NAME):$(VERSION)
#######################################################
# Rules related to Containerd image build and release #
#######################################################
.PHONY: containerd-login
containerd-login:
@echo "$$DOCKER_PASSWORD" | nerdctl login -u "${DOCKER_USER}" --password-stdin "${REGISTRY}"
.PHONY: containerd-images
containerd-images:
@nerdctl build -t $(IMAGE_PREFIX)-$(NAME):$(VERSION) -f Dockerfile --target $(EXTENSION_PREFIX)-$(NAME) .
@nerdctl build -t $(IMAGE_PREFIX)-$(NAME):latest -f Dockerfile --target $(EXTENSION_PREFIX)-$(NAME) .
.PHONY: containerd-push
containerd-push:
@nerdctl push $(IMAGE_PREFIX)-$(NAME):$(VERSION)
#####################################################################
# Rules for verification, formatting, linting, testing and cleaning #
#####################################################################
@@ -71,12 +88,19 @@ install-requirements:
@go install -mod=vendor $(REPO_ROOT)/vendor/github.com/onsi/ginkgo/ginkgo
@$(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/install-requirements.sh
.PHONY: vendor
vendor:
@GO111MODULE=on go mod vendor
@chmod +x $(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/*
@chmod +x $(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/.ci/*
@$(REPO_ROOT)/hack/update-github-templates.sh
.PHONY: revendor
revendor:
@GO111MODULE=on go mod vendor
@GO111MODULE=on go mod tidy
@chmod +x $(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/*
@chmod +x $(REPO_ROOT)/vendor/github. com/gardener/gardener/hack/.ci/*
@chmod +x $(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/.ci/*
@$(REPO_ROOT)/hack/update-github-templates.sh
.PHONY: clean
@@ -97,6 +121,11 @@ check:
generate:
@$(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/generate.sh ./charts/... ./cmd/... ./pkg/... ./test/...
@rm -rf ./pkg/client/fleet/clientset/internalversion;
.PHONY: generate-charts
generate-charts:
@$(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/generate.sh ./charts/...
.PHONY: format
format:
@$(REPO_ROOT)/vendor/github.com/gardener/gardener/hack/format.sh ./cmd ./pkg ./test


@@ -3,7 +3,7 @@
[![CircleCI](https://circleci.com/gh/javamachr/gardener-extension-shoot-fleet-agent.svg?style=shield)](https://circleci.com/gh/javamachr/gardener-extension-shoot-fleet-agent)
[![Go Report Card](https://goreportcard.com/badge/github.com/javamachr/gardener-extension-shoot-fleet-agent)](https://goreportcard.com/report/github.com/javamachr/gardener-extension-shoot-fleet-agent)
[![Go Report Card](https://goreportcard.com/badge/github.com/ysoftdevs/gardener-extension-shoot-fleet-agent)](https://goreportcard.com/report/github.com/ysoftdevs/gardener-extension-shoot-fleet-agent)
Project Gardener implements the automated management and operation of [Kubernetes](https://kubernetes.io/) clusters as a service. Its main principle is to leverage Kubernetes concepts for all of its tasks.
@@ -76,7 +76,7 @@ Static code checks and tests can be executed by running `VERIFY=true make all`.
## Feedback and Support
Feedback and contributions are always welcome. Please report bugs or suggestions as [GitHub issues](https://github.com/javamachr/gardener-extension-shoot-fleet-agent/issues) or join our [Slack channel #gardener](https://kubernetes.slack.com/messages/gardener) (please invite yourself to the Kubernetes workspace [here](http://slack.k8s.io)).
Feedback and contributions are always welcome. Please report bugs or suggestions as [GitHub issues](https://github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/issues) or join our [Slack channel #gardener](https://kubernetes.slack.com/messages/gardener) (please invite yourself to the Kubernetes workspace [here](http://slack.k8s.io)).
## Learn more!


@@ -1 +1 @@
v1.0.0-dev
v1.0.6-DEV


@@ -2,4 +2,4 @@ apiVersion: v1
appVersion: "1.0"
description: A Helm chart for the Gardener Shoot Fleet Agent extension.
name: gardener-extension-shoot-fleet-agent
version: 0.1.0
version: 0.3.6


@@ -12,10 +12,17 @@ rules:
- extensions.gardener.cloud
resources:
- clusters
- dnsrecords
verbs:
- get
- list
- watch
- apiGroups:
- resources.gardener.cloud
resources:
- managedresources
verbs:
- "*"
- apiGroups:
- extensions.gardener.cloud
resources:


@@ -1,14 +1,12 @@
image:
repository: javamachr/gardener-extension-shoot-fleet-agent
repository: ysoftglobal.azurecr.io/gardener-extension-shoot-fleet-agent
tag: latest
pullPolicy: Always
resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "50m"
memory: "128Mi"
vpa:


@@ -0,0 +1,4 @@
apiVersion: v1
description: A Helm chart for shoot-fleet-agent-shoot
name: shoot-fleet-agent-shoot
version: 0.1.0


@@ -0,0 +1,35 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: extensions.gardener.cloud:extension-shoot-fleet-agent:shoot
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: extensions.gardener.cloud:extension-shoot-fleet-agent:shoot
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: extensions.gardener.cloud:extension-shoot-fleet-agent:shoot
subjects:
- kind: ServiceAccount
name: {{ .Values.shootAccessServiceAccountName }}
namespace: {{ .Values.shootAccessServiceAccountNamespace }}


@@ -0,0 +1,2 @@
shootAccessServiceAccountName: ""
shootAccessServiceAccountNamespace: kube-system


@@ -18,8 +18,8 @@ import (
"context"
"fmt"
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller"
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller/healthcheck"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/healthcheck"
extensionscontroller "github.com/gardener/gardener/extensions/pkg/controller"
"github.com/gardener/gardener/extensions/pkg/util"


@@ -17,7 +17,7 @@ package app
import (
"os"
fleetagentservicecmd "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/cmd"
fleetagentservicecmd "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/cmd"
controllercmd "github.com/gardener/gardener/extensions/pkg/controller/cmd"
)


@@ -15,19 +15,20 @@
package main
import (
"github.com/javamachr/gardener-extension-shoot-fleet-agent/cmd/gardener-extension-shoot-fleet-agent/app"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/cmd/gardener-extension-shoot-fleet-agent/app"
"os"
controllercmd "github.com/gardener/gardener/extensions/pkg/controller/cmd"
"github.com/gardener/gardener/extensions/pkg/log"
"github.com/gardener/gardener/pkg/logger"
runtimelog "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/manager/signals"
)
func main() {
runtimelog.SetLogger(log.ZapLogger(false))
runtimelog.SetLogger(logger.ZapLogger(false))
ctx := signals.SetupSignalHandler()
if err := app.NewServiceControllerCommand().ExecuteContext(ctx); err != nil {
controllercmd.LogErrAndExit(err, "error executing the main controller command")
runtimelog.Log.Error(err, "error executing the main controller command")
os.Exit(1)
}
}
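For readability, here is the resulting cmd/gardener-extension-shoot-fleet-agent/main.go reassembled from the hunk above. This is only a sketch: the scrape drops the diff's +/- markers, so it assumes the old extensions log package and the controllercmd import were removed together with the LogErrAndExit call.

package main

import (
	"os"

	"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/cmd/gardener-extension-shoot-fleet-agent/app"

	"github.com/gardener/gardener/pkg/logger"
	runtimelog "sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/manager/signals"
)

func main() {
	// Wire the controller-runtime logger to Gardener's zap logger.
	runtimelog.SetLogger(logger.ZapLogger(false))

	// The signal handler context is cancelled on SIGTERM/SIGINT.
	ctx := signals.SetupSignalHandler()
	if err := app.NewServiceControllerCommand().ExecuteContext(ctx); err != nil {
		// LogErrAndExit is no longer used; log the error and exit explicitly instead.
		runtimelog.Log.Error(err, "error executing the main controller command")
		os.Exit(1)
	}
}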


@@ -12,7 +12,7 @@ To let the `shoot-fleet-agent` operate properly, you need to have:
- have a kubeconfig with read/write access to cluster.fleet.cattle.io and secret resources in some namespace
### ControllerRegistration
An example of a `ControllerRegistration` for the `shoot-fleet-agent` can be found here: https://github.com/javamachr/gardener-extension-shoot-fleet-agent/blob/master/example/controller-registration.yaml
An example of a `ControllerRegistration` for the `shoot-fleet-agent` can be found here: https://github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/blob/master/example/controller-registration.yaml
### Configuration
The `ControllerRegistration` contains a Helm chart which eventually deploys the `shoot-fleet-agent` to seed clusters.
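Not part of this diff, but as a sketch of how the kubeconfig requirement above is typically consumed: the raw (base64-decoded) kubeconfig bytes can be turned into a controller-runtime client for the fleet manager cluster. The helper name newFleetManagerClient and the exact scheme wiring are illustrative assumptions, not the extension's actual code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// newFleetManagerClient (hypothetical helper) builds a client for the fleet
// manager cluster from already base64-decoded kubeconfig bytes.
func newFleetManagerClient(kubeconfig []byte) (client.Client, error) {
	restCfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
	if err != nil {
		return nil, fmt.Errorf("parsing fleet manager kubeconfig: %w", err)
	}

	// Register the API types the extension needs to read and write; the
	// cluster.fleet.cattle.io types would have to be added here as well.
	scheme := runtime.NewScheme()
	if err := corev1.AddToScheme(scheme); err != nil {
		return nil, err
	}

	return client.New(restCfg, client.Options{Scheme: scheme})
}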


@@ -1,20 +1,27 @@
---
apiVersion: core.gardener.cloud/v1beta1
kind: ControllerDeployment
metadata:
name: extension-shoot-fleet-agent
type: helm
providerConfig:
chart: H4sIAAAAAAAAA+0ca2/bRjKfBfg/zMlXJClC6mFJbnXI4VRZTY3GD1ipi+JwCFbkSmJDkbxdUo6a9H77zT74kqinHSe546CNyX3Mzs7uvHaHmhBmU48yg74Pqccd3zP41PdDY+xSGhpkQr2w9uR+UEc4bbflX4Tlv/K5cdJqNNvNTkeUNzqd09Mn0L7nuDtBxEPCAJ4wnPSmdtvqv1KY7LL+5pS6M2fi+YweMoZY4E6rtXb92+1Wfv2bjfpJ+wnUH3qyRfB/vv7HcE3CkDKPQ+iDWmO4m1IPRpHj2o43gYBY73AfcPOocgxvpg4HHgWBz0J8wI3hwsT1RzAjoTXF5i+AUZeEzpxix3CaKSeeLTB4dILVvgfPAkbHzntqw52DDf/y3IQrz12A78mugigIKAPX8SiObZ4N3w5DJE8g6fuzGaK47Q/BdhjH2okT1tQfNQl8Hv3BaupPUjSd1OS/SQGfe7UMwhHONQpg7LgUkX5r8rtA/BmRd+JPOBNv/xHtbwlz/IjD+dlAjB4w/3dqhfjk2JTUdGMsxJI5t3ybYtnnXusi2En++1PCQnNBZu5BY2yT/2b7ZEn+651mq5T/xwASOLeUiXXvwrxRIUGQvD6dN8y62THOBrfwtGJTbjEnCGVVD35CiwCW2Bcw9hmEUwqv9FaCodhA8KPYQNATGwiSzWVWPDKjXdhl21XmMSV188TsfJHi89XDTvJv+5Y58Q8eY4v8N4Wxz/t/p6enJ6X8PwbUamj6ggVayGkIz6zn0Kw3vodh7xqGA0DBJp58IWM0iQ4JKVj+LCDewoQeWn7ZjaPF55TNqW0q90BYT8C/rmPhlkL7Hnk2VTqih84E/hn64/COoKfxWjV5AXMTmqgmLBqEQDh4foj9fOzC7hyO2DzZ/fV5f3CJhIkRjiq1mvg/xlEwTIJd6zRomnV4JhpUdVX1+d8kjoUfoaeyEONChOOFyTw0TUiAmDnywLOocljCdARTIvlNI/FHIcH2BHsE+DbOtgQSJoRLmIZh0K3V7u7uTCKpNn02qWnW8ZqesYGUJ/1+8dA7EVz/d+QwnPdoAai3sQsZIcEuuZMLN2EU64RT58EdQ7dI+GBcM17isR0eMmcUhTnexYQiA7INkHu4F6q9IZwPq/BDb3g+fCGx/Hr+5qerX97Ar72bm97lm/PBEK5uoH91eXb+5vzqEt9+hN7lb/Dz+eXZC6COWFPkKjp/OAck1BFcxb0jkQ0pzRER2xYeUMsZOxbOzptEqJVg4qN58KR/StnM4WJ9ufIxEY/rzJxQupl8dWo4lGg08bsTofrErjbNGv43Re+vFpcZlu+FzHddVI6MTgQrJEKTT2GDsgRTI7sd3Axx/vqNvic4T1pbh1Q4VzCI0XZX0CqS0VmXvri2vNQTK84hOxPtm0vO6ULBJDF9y2cMnVRIaYAcDUeVIIv+y3RYS3hQ2Mn+hxT3Lm4ufthJ0N7nP+j/n7bL85/HgD3X/y0G/KhvuRkGu8eC2/y/Vmv5/Oek2Sz9v0eBDx8MsOnY8dAnEqFZFYw//zyq7LItjiqiM/Vs1UW9xrhkCzQ0Y2eiURqGcVTJhpsrCA3hRaKJNpNBualQmDE9puX6kV2bN4gbTEnjqPLO8eyuijVlqNmX7Y8wXB2TyNWv3aMKwLtoRBW2Lnz4AOYtcSPKTTn+BfGQAmambUCQLCbkjIubumREXS6bAaiX9Xh144/ojv0mDP1HQM/FFt5CKxlIMHLboGKFOBppqsdN3tcPne+y60D6PKuf8iJXsn64fMeCCTcL6Yif1Vuyi5wZItX7B0ATPCX8Wp4cQpVPSbPd6VYTWmQHMyQTSLsEzPHCMVS/4f/4hi83ZTTwuRP6bLERB64eLUTZPRzl2unHs8dRfHTcF5ZLONeHIZoZmhWczrC8L0IyDKaqf38JDbPRMupIUh+jiRHGbKGD4/+M+1rLnfnKCeNwSKLiGHPYkYtOovnuOyTVR+nSVIlJrxulcf9RRjQk2aHWNoyFPadvNigflxJ0+amLGxEJcGzNtA26zFBdjLhPDv3n1tGfEva0/0o5zkhgyC09R275zBChmAgxafEZ8bbz3069kbf/J/V6u17a/8eAJSMgV/VWrupVvKhSM+XPiWO7q9T8BQmOKjMUZpuEpBvbJWkjHM9yIzvxLUzEVbxzCqzZDQojwWj9csnoaWMrHkEcfEirzTwaUqkuNo29ro/j4S7wigaWnZJ5SdL52/x+78JHoS428/AjrDH4n3n995R/mwauv5hh4R7XQVvkv31aby/7/6el/D8OrHjlKBy8lkr4WbLghSK+W5zwsKK965gAIm3B5NOaPM7ap+PeGkKcT0qyGZ07AvNPDhce4GtxDtmFuqqSZ7T5MEEX9v0IdYNiApdOiM80G+QF+ussYx6CNQfMESDWAzFl2f0gUXqer09dkzIAdOisdzya1Ti1GI1p0bGekQnJEoWdCx5N1J3Kz0cciQ4XsKvtWiEk9WJSPhVapRxZz6TXD38132g2mD8gf65FkkN1J8eo+nzLZOIwQJW4+TV/mFU/aN0Bkv0tII5L+iIuudybBL3yPcsSm37//uL4mqCfz1LeGHuqIwVypfI7TwecZm5pZOF15LrXPsrqIie/KrILksr8hvNnM4JaNC0xoLYfjQYYWkRe1mho1XLCo3ed3Fz5LjPyXnSzIsZEdMPEqYYlklpeZmhP7wHks248XHgWz01DIJxS4oZTKUH7I8903jqQ7XBxqZG5fcmh1dX9tBYl6nff8aD6orqCTKX5GH5A1RWHkWqodbSqLldxj17SYQX5UshoOPbL7FZaDULzu4p689zGUDv49aB3Nrh5O3g96Ivbs7eXvYvB8LrXH6RNAeaC7B+ZP+tmSwHGDnXtGzpeKtYVQlN1E6WdHgwdrlBjos8veq8Gt0jx1c3bq9vBza83529WCe6CMsQZ/7lW6FBvUoorRDLK/YhZNL+TktKudM31KdRqn48QMmeWeueN+haljHPx3WhGL4Tu4gULWGTdsqyYiY5qKVYFOtuQ4f4R+XBdJDF6gEXaYurW0biyZLvSmOeb4tqqxt7MLuUx5LazKrpcmtL6s+Q8yv2YtzfrrDgezpG8NiLdFAzHoA+yL3wbcbSa9exMYhbfw//fM/5TpRmzs8sY285/TuqnS/EfPpf3P48Cxec6QylkhRHffjK3U9yXjJDZV1sCglGnRT0rVt+f/RTl64U95T93JbGrAtgq/53Osvy3Tsv8/0eBrPxnJa747kl5sEpHXGejwHscDmnfsJHkghxVxAcFxD1Tli/2LXIJyNV4dJAEruYgJ0MW3DOb1c/N9S8H9pR/NiKWbmBRlmr8jZpg6/1Pc1n+O83GaSn/jwEr579ihU0ShVMUrz9UXmDmWlhf+7jINcpufJGFu8lH4EuJG90tZx4PfP570FHXaqeZzG2wjdEi322oNr/sySJXRjcG9ndeMT8K1DQMqFbV4W8akMpSFcdwWYn6dZRUfFt9zMTLA+R/7w+Btsh/C+V+Sf6brTL//3Hgk8j/rhL61V73rJf2taqvSAlY
io9cvdkeF0epzObLOmFCQ/XgOlw/3YlboSICkjF2GF8rtqS4UBfdc45p4+X3GrI4jPaYrHgM0scowG1Hd1e5SSx6L/7ie2b+SjQK1xaX1p/FpTIzyQljNhQOjvaAqLOfdG6ZKRcs9TpR3bDXmO9SvlIyQrF2vImuyLRZrjyEcLm3UYL2Watqcp+mLOSSwcRXOkfxzVRKU5y8B769XEfUfdMai1tAmLgJLyYuzYJYxfYwXLF81AOOt21RpVpaWZh45AdAG9fIAxtdu9slR5ag7PQT/sQSdk8D9IPanf+ndggZEN/3xGu9gYOiWYEF349dPBqJDF9l/uLjwuyN7v783+mU8HN7Sv+bsKf/n9elO0YC2+L/dn3l/K99Wsb/jwIr6jd7CZCX6jL9a60e/tyreDjsKf/zgOz/OyBb5L/R6ix//9c8KfM/HweWLsTF+qqviu3lrO8qehQ+t8jqNxzNaqwysHHoYJNr3+7p1pQdrDkMpGZn7ZFkqBXMKPZicyla+UKtdJLcMlnq5BIWkjqVfPD026fpPf3M8Xqu69/RbLIXogsiSXPyqX51PVlmisPEfuL7+7hbdcNslrtlUz9EiubMZ4uDaFBdDyFD91z91AkAZQ0DgDRFqTDzWFSsZh+nmRS7KnIVduTWVxWpZIpMApiYT7a1mbb7stL1Hxx20v9zxaVDfwBqi/4/qXeW8/8b9U6n1P+PASoHVunD+LPFLizE74SoW1iT/BExajGh7ncVvJBMuiAdBvkaZFJne+4dWXDxwV7+0EHoGGyu9W2stqqd1oUjj1jkD3os1zaa38nqowoKsKzTtivNRzsu0vPHqxr7ONXXjXp9lhbFg8mxdtAqVWH4JE2ZrNI4uyWb+YooRenafNMujInL5SQy+bOpnVrBdVTJfomrZqWzx3RmjcrDlAc72a+hj0doTmVCDc7AzlSJ366ROPVJMYvPDEVmM+VIF6MZvx055sv7eeLqbv24ObN1oyV7Hp+AY9f0Q+U7gVf/fljSQHxOgUPGGPs3Z+Crgwg5Uf3dcfpDMfFUj0W+evbTZVmE67rQxXHBvXkichASrDkOafx7sCmPLMHwwJwrHGW2kD+79Gm4k0X9MCzKYvzEfMoPdVRZzUXvwj//JWqOoSizVH4oKChUWc+atflMWMU25fGoySjtdZPRzhMnnEYjE9mXaOTaut5ZrU4jc7KkyPVs0rzO+MeJ9PLFaKRGr9bNpvn9o15Pl1BCCSWUUEIJJZRQQgkllFBCCSWUUEIJJZRQQgkllFBCCSWUUEIJO8B/AUtWIqQAeAAA
values:
image:
tag: v1.0.6-DEV
fleetManager:
kubeconfig: #base64 encoded kubeconfig of Fleet manager cluster with user that has write access to Cluster and Secret
namespace: clusters
---
apiVersion: core.gardener.cloud/v1beta1
kind: ControllerRegistration
metadata:
name: extension-shoot-fleet-agent
spec:
deployment:
deploymentRefs:
- name: extension-shoot-fleet-agent
resources:
- kind: Extension
type: shoot-fleet-agent
globallyEnabled: true
deployment:
type: helm
providerConfig:
chart: H4sIAAAAAAAAA+0ca3PbNjKf+StwynWadELqYVludZObU2U39TSxPVbqTufmJgORkMSGJFiAlKMmvd9+iwcpkKKedu30yp1MLIHAYneBfQFLTTHzSESYTT4kJOI+jWw+ozSxJwEhiY2nJEqaT+4GLYCT42P5F6D8V35uH3XbneNOryfa273eyckTdHzHeXeClCeYIfSEAdOb+m17/ieF6S7r78xIEPrTiDJyyBxigXvd7tr1h2Uvrn+n3TrqPkGt+2a2Cv7i6/8UXeEkISziKKFIrTG6nZEIjVM/8PxoimLsvod9wB3rKXo78zniaRxTlsAH2BcBmgZ0jEKcuDPo/QIxEuDEnxMYl8yMdhx5gCAiU3hKI/QsZmTifyAeuvWh39+eO+gyChaIRnKkIAnFhKHAj4hjOaejd6MEaAMUQxqGgOBmOEKez7jlTP2kKf9X5FvO+DfWlP9nDbNpU/yXfeXzqLlENAb+0hhN/IBw6yuH38bw/xi/h/+TED7/F7reYObTlKPz0zOYMGb0F+ImluN7BDdVP2iynDl3qUea1mOv6u6wk/4PZ5glzgKHwUFzbNP/DviGov63emASav1/AMCxf0OYWPc+mrctHMfGV6fltGyPzC2PcJf5cSLbB+h7cAfIFZsCTShDyYygV3ofoZHYPeg7sXvQQOwelO8sx4pwSPpolz1nzSvIeGxh/R/CTvrvUdeZ0oPn2KL/nVb7qBT/nZz0Tmr9fwhoNsENxgvwlLMEPXOfI1iNb9BocIVGZwh0G0fyC56Ae/RxQpBLwxhHCwcNwPXLYRxcPidsTjxHxQfCkyL4G/gubCnw8GnkEWUmBhBMwJ8RnSS3GCKN16rLCzR3UAcshUviBGGOIprAOApD2K3PAVskh78+H55dAGFiBqvZhH8ZhopJctzaoqGO00LPRIeGftR4/g+BYkFTiFMWYlKUwmRJzoQmCGYXbIMAIpeoeCVZTuAIHD9rHHScYOiOYUAM3yZmR4QTTbSEWZLE/Wbz9vbWwZJih7JpUwuNNzWvNlCtR/0YQYQipP1r6jPgeLxAYK9hAB4DrQG+lQs2ZQSeiWAuQrcMgiIRfHEtcIHG83nC/HGaFISW0Qismx1AbLAFGoMROh810LeD0fnohUDy0/nb7y9/fIt+GlxfDy7enp+N0OU1Gl5enJ6/Pb+8gG/focHFz+iH84vTF4j4YiVBnBD0AQdApi/ECTtG4BoRUiAh8yk8Jq4/8V1gLZqmYIrQlIJbiGRQSljoc7GsXEaWgCbwQz+RwSVf5cuxoMuU9qfC2Il97DhN+DeD2K+ZtdkujRJGgwDMISNTIQWJzuEztME8Ikcjuzm7HgHv+hv5gIFH0lyHVIRT6CxD21/1gIKnKxV7a2dLIrHSHJl86GBcCk03CvkI1l3KGISpaEkBKlBgxSb22rv+FWEn/58Q2Mmw2fhhJ0F7n/9A/H9yXJ//PATsuf7vIOMH08udJN49F9wW/x2dlOK/zlGnU5//PAh8/Ggjj0z8CKIikZ01kP3779ZOGZoYSiJPDrBMPPIxOJ2JP1XobNu2zERzBZct4kdw1E4+H3cUAicjxXEDmnrNeRsH8Qy3rfd+5PVVninTzKHsDpnqBKeB/ta3EHqfjolC1UcfPyLnBgcp4Y6c/A2OYHrmLPsgoFaw4k+qewZ4TAIueiGkPq/Hqvt+gljsZ+HrPyGIXDwRMHSzaYT4tswoFoWDoyZq0vzr+nkLI3acRZ9oDXMhFBrWz1UcV8Fqp4II/VF+zjeNHwJCtV0Q0qTOML+Sp4SowWe4c9zrN3IyZH8nwVOUj4iZHyUT1PiC/+sLXu7JSEy5n1C22IQCFoxUIewfjLCa7YxrmIFCjL5wA8y5PvJQQtAi4CSE5qFIuiBdavzzJWo77a7dAmqGkDWMIStLfJj6B9jCWr+cV36SpTwCE4fUwksDiAqd918DkRSUyMqZXTdH+85zjEmCjYnWddP6bFqTtZYlIBjiehLAloOZfU+JaoORstUIOxtion5s0/tZwJ7+X1nJEMe23OtzkCpltsjKRKpJqs+It97/9Ernv0fQv137/4eAklOQq3ojV/UyW1RhtErHxMr5Kqv/BsdWCKru4QT3tYeSDsOP3CD18rjCATzVu2bVrV2DvmJI2i+Kzk97XEucnuA4ln6bRSQh0pRsmnjNED+C1Y+qZhVjMo4k0fxdcZf30SfL8IqVgvuEqh3+Yy+6AXvqv0figC5CaNzjOmiL/h8fr8b/J8dHtf4/BJRDc9AR3sw1/DRf7goV3ylJuE/V3nFChES5gsNnTXmutce4fQ2EOKAUJDMy9wXa730u4sHX4iCyj1ryiTyfLWYJunFIUzANknsu4xPKFP/yzvy1IZB7EMn+zCGUqb2mytgAEl8UUX3cmjUhBPGd+56nYZMTl5GMDJ3d2UYeltvoQrLogMlUkT7gyMy2gB3d1AoZy3hlKaBKH1Qg6pkM/NHfnbdaAs63IJkrUc/Q2CkEajzfzIpOBVRDUFjpe1nrQ1YboWw7C8gSk6FITC72nV8v+MB1xR7fe7g4sMYQ8LNcKvZ+NkeBXJ/ibtNZpmMuiGy7SoPgioJaLgqqqjK6OH9Y2GQ0DDFYybzBRs296LORrVXiZZMkbrOgLHqfye1UGBHiD2KUmzImEhwmTi5cUbvy0qB7eeQvP+vOo0XkcpMFgW9GcJDMpMbsj9sYvG0ez+fi9sK4ZClg1Y+Hy6egQL9QP0KNF40yLlXIY9OYqKsMe2mM1lGqhlxmIwb5gDLuUsZo+95LcwetpqCFzUSiubkh1K59fTY4Pbt+d/b6bCgux95dDN6cja4Gw7O8J0JzQfJ3jIZ9oxGhiU8C75pMiq26XVikfm6Ylwc/h5rNjN7zN4NXZzdA7OX1u8ubs+ufrs/frtDaR8q/GoFxszJSXm/5VghkhNOUuaSwefLGvgi49QnT6ohPKGF+uIy5262NVhe4oEEakjfCQvHVNavyXIYIQjFMyX9Vd41+DLaLqGvrA3XpnRdmixNbQ9/KMu1GX0FcSlgr9nijlFQMYO5c1XJR4mX9cXAB4V4y21dibpbLmuSuzSg3ZLIZ6IPoN9QDFN1Oy2BCC/axQ38Je+Z/qtVwTbvMse3856jVLuV/3Xb7uM7/HgIqD3ZGUk0rUr69dHaXxC9Db+yoLenBuNclkZtZ+8/qLOXPCHvqf+G+YlcDsFX/e+X7/263rv97GDD139S76ospGeoqG3FlJoiHng7pQLKd14FY4mUCHJwq56mDkkL1cSObGEnSVguQ8/kq7pmdxmPL+3ODPfWfjbGrO7iELY3+RkuwTf97rfL7P71O7f8fBsrnv2KBHZwmM9Cx31SZ4PLSWF/7BCAzwq5pQDaFCLxUutHffCRyrwfAh5x+rY4JZZGDZ48XxVEjtenFQJYGIimyYbT/itE0lhzYqNGwComrbFPZDxePwKKOs+avGo+YChyg/3u/CLRF/49OVvS/0z2p3/95ELh3/d9RR/+U9z3rtH2tzVs1Aq4SHi/agClJ5N/A5+rDrbgCutNEy66lr03gMEl3
I0B8ivNPaQxLTXYzdnnydyCn8NVgVm3BCmmCMGmYNcoqIT/RPFdMCuYXy0OanJclgysUrNWEtYvKaEB4uWEMOuNHU9W+7FF6tCex4pMHm3PX1Wjkl1bS+xR9EXwjc1CJ5SPp3rKvMfVKT7C61qlyZKvUiAvlKoqWlQRlPHcVgEspA7luXjKp2CXJ6wnviDBrl+cc6tlO9wcGIQa7mTC0mtzJWn+r9ttfzWgD5/oCJVvXDYKDXqtebi8x8XQsSmKln8iO08zb0H2Fvssx2uH+f8/4r2gBdowEt+V/3ZNu+fyn16rrfx4EyhbFOAMubNq6/GfVsjz22t0H7Kn/8xjv/zsQW/S/3T0q1/91jo5btf4/BJQuVcX6qrdMvVLVbwO8JeUuXq3x7zS0yYC+iQ89rqg30J0JO9By2EDJjtYjK1uqYCWLxczinWKbMjl5wZFs9M177vyRurX+8qsv80ve0I8GQUBviVEDBLjiVBKbv6bdWE+Rs0ThwDjx8nU2rLGBkfIwo1JA1OmFlC0OIkENPYQKPbL82gtCoFwQwuYFLFWlpqJ9pdx0ef2+o/lWMbO5qKpFXcAbRUGCEbOzs+z32VZp/3Gwk/2fK8kd+gNA287/wN6v/P7XcX3/9yCgaiSlWcxeZ+ujX/Ach9idsV3rGRM87SMZIYhvsVFLOQhu8YJbViFbFtYFuiorK+1l47gVimOA3Ho1et03vmiRv+qwsWe787XoaoFii27afeUFTU8rLP7TFev9dGm7261WmLdks8hJtpqZhvB7QIpReKirG8zKSMAGjWsLEvtoggMuSDeqK3M/VUZkme9jSk505ZGuqVDFeuJgwnwT9ukY3KispAC6PeOR+MkSiRGp2w+WnWWJclfCgSRGlvE6CInKe1kc6FHDrDfzdKeiF8/OP2Hk8j3VW4FV/2ZU3kGU0sOEGcLh9SmiKrMWTOoXT5e/EaLZfCpKl81XVy25juFCt+rvd5aGuHbOkZqy0ej3EFARV4bgnmVWNUm4kD+w84dIxsR8L+IxEf6xMirMZK3WJffRv/9jAQlVBYjirTBBnCqEVSIt1koqcckYR3GhbNO1YX6nfjJLxw5ILbe/zTWDTatNUmfqMhGX51Zbc7Es28t+iUYvmsYizXej5XScbx7zLrKGGmqooYYaaqihhhpqqKGGGmqooYYaaqihhhpqqKGGGmqooYYaaqihhv3hfw+TpaoAeAAA
values:
image:
tag: v1.0.0-dev
fleetManager:
kubeconfig: #base64 encoded kubeconfig of Fleet manager cluster with user that has write access to Cluster and Secret
namespace: clusters

go.mod

@@ -1,46 +1,145 @@
module github.com/javamachr/gardener-extension-shoot-fleet-agent
module github.com/ysoftdevs/gardener-extension-shoot-fleet-agent
go 1.15
go 1.17
require (
github.com/ahmetb/gen-crd-api-reference-docs v0.2.0
github.com/gardener/etcd-druid v0.3.0 // indirect
github.com/gardener/gardener v1.15.1-0.20210115062544-6dc08568692a
github.com/go-logr/logr v0.3.0
github.com/gobuffalo/packr/v2 v2.8.1
github.com/golang/mock v1.4.4-0.20200731163441-8734ec565a4d
github.com/karrick/godirwalk v1.16.1 // indirect
github.com/nwaples/rardecode v1.1.0 // indirect
github.com/onsi/ginkgo v1.14.1
github.com/onsi/gomega v1.10.2 // indirect
github.com/pierrec/lz4 v2.5.2+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/rancher/fleet/pkg/apis v0.0.0-20200909045814-3675caaa7070
github.com/rogpeppe/go-internal v1.7.0 // indirect
github.com/sirupsen/logrus v1.7.0 // indirect
github.com/spf13/cobra v1.1.1
github.com/ahmetb/gen-crd-api-reference-docs v0.3.0
github.com/gardener/gardener v1.47.1
github.com/go-logr/logr v1.2.3
github.com/gobuffalo/packr/v2 v2.8.3
github.com/golang/mock v1.6.0
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.19.0
github.com/pkg/errors v0.9.1
github.com/rancher/fleet/pkg/apis v0.0.0-20220601092642-d45a36738d27
github.com/spf13/cobra v1.4.0
github.com/spf13/pflag v1.0.5
github.com/ulikunitz/xz v0.5.7 // indirect
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad // indirect
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a // indirect
golang.org/x/sys v0.0.0-20210123111255-9b0068b26619 // indirect
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf // indirect
golang.org/x/tools v0.1.0 // indirect
k8s.io/api v0.19.6
k8s.io/apimachinery v0.19.6
k8s.io/api v0.24.0
k8s.io/apimachinery v0.24.0
k8s.io/client-go v11.0.1-0.20190409021438-1a26190bd76a+incompatible
k8s.io/code-generator v0.19.6
k8s.io/component-base v0.19.6
k8s.io/utils v0.0.0-20200912215256-4140de9c8800 // indirect
sigs.k8s.io/controller-runtime v0.7.1
k8s.io/code-generator v0.24.0
k8s.io/component-base v0.24.0
sigs.k8s.io/controller-runtime v0.11.2
sigs.k8s.io/yaml v1.3.0
)
require (
github.com/BurntSushi/toml v0.3.1 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Masterminds/sprig v2.22.0+incompatible // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bronze1man/yaml2json v0.0.0-20211227013850-8972abeaea25 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/cyphar/filepath-securejoin v0.2.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful v2.9.6+incompatible // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/gardener/etcd-druid v0.8.0 // indirect
github.com/gardener/external-dns-management v0.7.18 // indirect
github.com/gardener/hvpa-controller/api v0.5.0 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-logr/zapr v1.2.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 // indirect
github.com/gobuffalo/flect v0.2.3 // indirect
github.com/gobuffalo/logger v1.0.6 // indirect
github.com/gobuffalo/packd v1.0.1 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.7 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/karrick/godirwalk v1.16.1 // indirect
github.com/kubernetes-csi/external-snapshotter/v2 v2.1.4 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/markbates/errx v1.1.0 // indirect
github.com/markbates/oncer v1.0.0 // indirect
github.com/markbates/safe v1.0.1 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/onsi/ginkgo/v2 v2.1.4 // indirect
github.com/prometheus/client_golang v1.12.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/rancher/wrangler v0.6.2-0.20200829053106-7e1dd4260224 // indirect
github.com/rogpeppe/go-internal v1.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/spf13/afero v1.8.2 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.21.0 // indirect
golang.org/x/crypto v0.0.0-20220516162934-403b01795ae8 // indirect
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
golang.org/x/net v0.0.0-20220412020605-290c469a71a5 // indirect
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5 // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
golang.org/x/tools v0.1.10 // indirect
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
istio.io/api v0.0.0-20220304035241-8c47cbbea144 // indirect
istio.io/client-go v1.12.5 // indirect
istio.io/gogo-genproto v0.0.0-20210113155706-4daf5697332f // indirect
k8s.io/apiextensions-apiserver v0.24.0 // indirect
k8s.io/apiserver v0.24.0 // indirect
k8s.io/autoscaler/vertical-pod-autoscaler v0.10.0 // indirect
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185 // indirect
k8s.io/helm v2.16.1+incompatible // indirect
k8s.io/klog v1.0.0 // indirect
k8s.io/klog/v2 v2.60.1 // indirect
k8s.io/kube-aggregator v0.23.3 // indirect
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
k8s.io/metrics v0.23.3 // indirect
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect
sigs.k8s.io/cli-utils v0.16.0 // indirect
sigs.k8s.io/controller-runtime/tools/setup-envtest v0.0.0-20220513175748-3f265c36d7bf // indirect
sigs.k8s.io/controller-tools v0.8.0 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
)
replace (
k8s.io/api => k8s.io/api v0.19.6
k8s.io/apimachinery => k8s.io/apimachinery v0.19.6
k8s.io/apiserver => k8s.io/apiserver v0.19.6
k8s.io/client-go => k8s.io/client-go v0.19.6
k8s.io/code-generator => k8s.io/code-generator v0.19.6
k8s.io/component-base => k8s.io/component-base v0.19.6
k8s.io/helm => k8s.io/helm v2.13.1+incompatible
k8s.io/api => k8s.io/api v0.23.5
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.23.5
k8s.io/apimachinery => k8s.io/apimachinery v0.23.5
k8s.io/apiserver => k8s.io/apiserver v0.23.5
k8s.io/client-go => k8s.io/client-go v0.23.5
k8s.io/code-generator => k8s.io/code-generator v0.23.5
k8s.io/component-base => k8s.io/component-base v0.23.5
)

go.sum

File diff suppressed because it is too large.


@@ -60,7 +60,7 @@ ProjectConfig
<code>projectConfig</code></br>
<em>
<a href="#shoot-fleet-agent-service.extensions.config.gardener.cloud/v1alpha1.ProjectConfig">
map[string]github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1.ProjectConfig
map[string]github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1.ProjectConfig
</a>
</em>
</td>


@@ -37,7 +37,9 @@ EOM
if [ "$1" == "--optional" ]; then
shift
MODE=$'\n globallyEnabled: false'
GLOBALLY_ENABLED='false'
else
GLOBALLY_ENABLED='true'
fi
NAME="$1"
CHART_DIR="$2"
@@ -67,7 +69,7 @@ trap cleanup EXIT ERR INT TERM
export HELM_HOME="$temp_helm_home"
[ "$(helm version --client --template "{{.Version}}" | head -c2 | tail -c1)" = "3" ] || helm init --client-only > /dev/null 2>&1
helm package "$CHART_DIR" --version "$VERSION" --app-version "$VERSION" --destination "$temp_dir" > /dev/null
helm package "$CHART_DIR" --app-version "$VERSION" --destination "$temp_dir" > /dev/null
tar -xzm -C "$temp_extract_dir" -f "$temp_dir"/*
chart="$(tar --sort=name -c --owner=root:0 --group=root:0 --mtime='UTC 2019-01-01' -C "$temp_extract_dir" "$(basename "$temp_extract_dir"/*)" | gzip -n | base64 | tr -d '\n')"
@@ -76,10 +78,27 @@ mkdir -p "$(dirname "$DEST")"
cat <<EOM > "$DEST"
---
apiVersion: core.gardener.cloud/v1beta1
kind: ControllerDeployment
metadata:
name: $NAME
type: helm
providerConfig:
chart: $chart
values:
image:
tag: $VERSION
fleetManager:
kubeconfig: #base64 encoded kubeconfig of Fleet manager cluster with user that has write access to Cluster and Secret
namespace: clusters
---
apiVersion: core.gardener.cloud/v1beta1
kind: ControllerRegistration
metadata:
name: $NAME
spec:
deployment:
deploymentRefs:
- name: $NAME
resources:
EOM
@@ -88,22 +107,10 @@ for kind_and_type in "${KINDS_AND_TYPES[@]}"; do
TYPE="$(echo "$kind_and_type" | cut -d ':' -f 2)"
cat <<EOM >> "$DEST"
- kind: $KIND
type: $TYPE$MODE
globallyEnabled: true
type: $TYPE
globallyEnabled: $GLOBALLY_ENABLED
EOM
done
cat <<EOM >> "$DEST"
deployment:
type: helm
providerConfig:
chart: $chart
values:
image:
tag: $VERSION
fleetManager:
kubeconfig: #base64 encoded kubeconfig of Fleet manager cluster with user that has write access to Cluster and Secret
namespace: clusters
EOM
echo "Successfully generated controller registration at $DEST"


@@ -24,24 +24,24 @@ PROJECT_ROOT=$(dirname $0)/..
bash "${PROJECT_ROOT}"/vendor/k8s.io/code-generator/generate-internal-groups.sh \
deepcopy,defaulter \
github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/componentconfig \
github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis \
github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis \
github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/componentconfig \
github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis \
github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis \
"config:v1alpha1" \
--go-header-file "${PROJECT_ROOT}/vendor/github.com/gardener/gardener/hack/LICENSE_BOILERPLATE.txt"
bash "${PROJECT_ROOT}"/vendor/k8s.io/code-generator/generate-internal-groups.sh \
conversion \
github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/componentconfig \
github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis \
github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis \
github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/componentconfig \
github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis \
github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis \
"config:v1alpha1" \
--extra-peer-dirs=github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config,github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1,k8s.io/apimachinery/pkg/apis/meta/v1,k8s.io/apimachinery/pkg/conversion,k8s.io/apimachinery/pkg/runtime, github.com/gardener/gardener/extensions/pkg/controller/healthcheck/config/v1alpha1 \
--extra-peer-dirs=github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config,github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1,k8s.io/apimachinery/pkg/apis/meta/v1,k8s.io/apimachinery/pkg/conversion,k8s.io/apimachinery/pkg/runtime, github.com/gardener/gardener/extensions/pkg/controller/healthcheck/config/v1alpha1 \
--go-header-file "${PROJECT_ROOT}/vendor/github.com/gardener/gardener/hack/LICENSE_BOILERPLATE.txt"
bash "${PROJECT_ROOT}"/vendor/k8s.io/code-generator/generate-internal-groups.sh \
conversion,client \
github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet \
github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet \
github.com/rancher/fleet/pkg/apis \
github.com/rancher/fleet/pkg/apis \
"fleet.cattle.io:v1alpha1" \


@@ -1,9 +1,9 @@
package install
import (
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config"
v1alpha1 "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1"
v1alpha1fleet "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config"
v1alpha1 "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"


@@ -3,8 +3,8 @@ package loader
import (
"io/ioutil"
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config"
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config/install"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config/install"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"


@@ -1,5 +1,5 @@
// +k8s:deepcopy-gen=package
// +k8s:conversion-gen=github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config
// +k8s:conversion-gen=github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config
// +k8s:openapi-gen=true
// +k8s:defaulter-gen=TypeMeta


@@ -1,3 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
@@ -24,7 +25,7 @@ import (
unsafe "unsafe"
healthcheckconfig "github.com/gardener/gardener/extensions/pkg/controller/healthcheck/config"
config "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config"
config "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config"
conversion "k8s.io/apimachinery/pkg/conversion"
runtime "k8s.io/apimachinery/pkg/runtime"
)


@@ -1,7 +1,8 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright (c) 2021 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
Copyright (c) SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -1,7 +1,8 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright (c) 2021 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
Copyright (c) SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -21,7 +21,7 @@ package versioned
import (
"fmt"
fleetv1alpha1 "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1"
fleetv1alpha1 "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1"
discovery "k8s.io/client-go/discovery"
rest "k8s.io/client-go/rest"
flowcontrol "k8s.io/client-go/util/flowcontrol"


@@ -19,9 +19,9 @@ limitations under the License.
package fake
import (
clientset "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned"
fleetv1alpha1 "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1"
fakefleetv1alpha1 "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1/fake"
clientset "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned"
fleetv1alpha1 "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1"
fakefleetv1alpha1 "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1/fake"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/discovery"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -19,7 +19,7 @@ limitations under the License.
package fake
import (
v1alpha1 "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1"
v1alpha1 "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/typed/fleet.cattle.io/v1alpha1"
rest "k8s.io/client-go/rest"
testing "k8s.io/client-go/testing"
)


@@ -19,8 +19,8 @@ limitations under the License.
package v1alpha1
import (
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
rest "k8s.io/client-go/rest"
)


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -22,8 +22,8 @@ import (
"context"
"time"
scheme "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1alpha1 "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
scheme "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"


@@ -19,12 +19,12 @@ import (
"github.com/gardener/gardener/extensions/pkg/controller/cmd"
extensionshealthcheckcontroller "github.com/gardener/gardener/extensions/pkg/controller/healthcheck"
healthcheckconfig "github.com/gardener/gardener/extensions/pkg/controller/healthcheck/config"
apisconfig "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config"
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1"
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller"
controllerconfig "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller/config"
healthcheckcontroller "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller/healthcheck"
"github.com/spf13/pflag"
apisconfig "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config/v1alpha1"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller"
controllerconfig "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/config"
healthcheckcontroller "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/healthcheck"
"io/ioutil"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer"


@@ -15,12 +15,17 @@
package controller
import (
"bytes"
"context"
"fmt"
managed_resource_handler "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/managed-resource-handler"
token_requestor_handler "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/token-requestor-handler"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/utils"
"k8s.io/apimachinery/pkg/api/errors"
"reflect"
"strings"
projConfig "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config"
projConfig "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config"
"github.com/gardener/gardener/pkg/apis/core/v1beta1"
"github.com/gardener/gardener/pkg/extensions"
@@ -31,13 +36,14 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/client-go/rest"
"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
gardencorev1beta1helper "github.com/gardener/gardener/pkg/apis/core/v1beta1/helper"
"github.com/gardener/gardener/extensions/pkg/controller"
"github.com/gardener/gardener/extensions/pkg/controller/extension"
"github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller/config"
"github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/config"
extensionsv1alpha1 "github.com/gardener/gardener/pkg/apis/extensions/v1alpha1"
kutil "github.com/gardener/gardener/pkg/utils/kubernetes"
@@ -47,14 +53,17 @@ import (
const ActuatorName = "shoot-fleet-agent-actuator"
// KubeconfigSecretName name of secret that holds kubeconfig for Shoot
const KubeconfigSecretName = "kubecfg"
const KubeconfigSecretName = token_requestor_handler.TokenRequestorSecretName
// KubeconfigKey key in KubeconfigSecretName secret that holds kubeconfig for Shoot
const KubeconfigKey = "kubeconfig"
const KubeconfigKey = token_requestor_handler.TokenRequestorSecretKey
// DefaultConfigKey is the name of default config key.
const DefaultConfigKey = "default"
// FleetClusterSecretDataKey is the key of the data item that holds the kubeconfig for the cluster in fleet
const FleetClusterSecretDataKey = "value"
// NewActuator returns an actuator responsible for Extension resources.
func NewActuator(config config.Config) extension.Actuator {
logger := log.Log.WithName(ActuatorName)
@@ -88,6 +97,26 @@ func initializeFleetManagers(config config.Config, logger logr.Logger) map[strin
return fleetManagers
}
func (a *actuator) ensureDependencies(ctx context.Context, cluster *extensions.Cluster) error {
// Initialize token requestor handler
tokenRequestor := token_requestor_handler.NewTokenRequestorHandler(ctx, a.client, cluster)
// Initialize the managed resource handler
managedResourceHandler := managed_resource_handler.NewManagedResourceHandler(ctx, a.client, cluster)
dependencyFunctions := []func() error{
tokenRequestor.EnsureKubeconfig,
managedResourceHandler.EnsureManagedResoruces,
}
return utils.RunParallelFunctions(dependencyFunctions)
}
// isShootHibernated looks into the Cluster object to determine whether the Shoot is currently hibernated
func isShootHibernated(cluster *extensions.Cluster) bool {
return cluster.Shoot.Status.IsHibernated
}
// Reconcile the Extension resource.
func (a *actuator) Reconcile(ctx context.Context, ex *extensionsv1alpha1.Extension) error {
namespace := ex.GetNamespace()
@@ -99,13 +128,27 @@ func (a *actuator) Reconcile(ctx context.Context, ex *extensionsv1alpha1.Extensi
if isShootedSeedCluster(cluster) {
return a.updateStatus(ctx, ex)
}
if isShootHibernated(cluster) {
return a.updateStatus(ctx, ex)
}
shootsConfigOverride := &config.Config{}
if ex.Spec.ProviderConfig != nil { //parse providerConfig defaults override for this Shoot
if _, _, err := a.decoder.Decode(ex.Spec.ProviderConfig.Raw, nil, shootsConfigOverride); err != nil {
return fmt.Errorf("failed to decode provider config: %+v", err)
}
}
a.ReconcileClusterInFleetManager(ctx, namespace, cluster, shootsConfigOverride)
if err := a.ensureDependencies(ctx, cluster); err != nil {
a.logger.Error(err, "Could not ensure dependencies")
return err
}
if err = a.ReconcileClusterInFleetManager(ctx, namespace, cluster, shootsConfigOverride); err != nil {
a.logger.Error(err, "Could not reconcile cluster in fleet")
return err
}
return a.updateStatus(ctx, ex)
}
@@ -162,71 +205,117 @@ func (a *actuator) InjectScheme(scheme *runtime.Scheme) error {
}
// ReconcileClusterInFleetManager reconciles cluster registration in remote fleet manager
func (a *actuator) ReconcileClusterInFleetManager(ctx context.Context, namespace string, cluster *extensions.Cluster, override *config.Config) {
a.logger.Info("Starting with already registered check")
func (a *actuator) ReconcileClusterInFleetManager(ctx context.Context, namespace string, cluster *extensions.Cluster, override *config.Config) error {
labels := prepareLabels(cluster, getProjectConfig(cluster, &a.serviceConfig), getProjectConfig(cluster, override))
registered, err := a.getFleetManager(cluster).GetCluster(ctx, cluster.Shoot.Name)
if err != nil {
a.logger.Error(err, "Failed to get cluster registration for Shoot", "shoot", cluster.Shoot.Name)
}
if err == nil && registered != nil {
if reflect.DeepEqual(registered.Labels, labels) {
a.logger.Info("Cluster already registered - skipping registration", "clientId", registered.Spec.ClientID)
} else {
a.logger.Info("Updating labels of already registered cluster.", "clientId", registered.Spec.ClientID)
a.updateClusterLabelsInFleet(ctx, registered, cluster, labels)
}
return
}
a.registerNewClusterInFleet(ctx, namespace, cluster, labels)
}
func (a *actuator) updateClusterLabelsInFleet(ctx context.Context, clusterRegistration *fleetv1alpha1.Cluster, cluster *extensions.Cluster, labels map[string]string) {
clusterRegistration.Labels = labels
_, err := a.getFleetManager(cluster).UpdateCluster(ctx, clusterRegistration)
if err != nil {
a.logger.Error(err, "Failed to update cluster labels in Fleet registration.", "clusterName", clusterRegistration.Name)
}
}
func (a *actuator) registerNewClusterInFleet(ctx context.Context, namespace string, cluster *extensions.Cluster, labels map[string]string) {
a.logger.Info("Looking up Secret with KubeConfig for given Shoot.", "namespace", namespace, "secretName", KubeconfigSecretName)
secret := &corev1.Secret{}
if err := a.client.Get(ctx, kutil.Key(namespace, KubeconfigSecretName), secret); err == nil {
secretData := make(map[string][]byte)
secretData["value"] = secret.Data[KubeconfigKey]
a.logger.Info("Loaded kubeconfig from secret", "kubeconfig", secret, "namespace", namespace)
const fleetRegisterNamespace = "clusters"
kubeconfigSecret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "kubecfg-" + buildCrdName(cluster),
Namespace: fleetRegisterNamespace,
},
Data: secretData,
}
clusterRegistration := fleetv1alpha1.Cluster{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: buildCrdName(cluster),
Namespace: fleetRegisterNamespace,
Labels: labels,
},
Spec: fleetv1alpha1.ClusterSpec{
KubeConfigSecret: "kubecfg-" + buildCrdName(cluster),
},
}
if _, err = a.getFleetManager(cluster).CreateKubeconfigSecret(ctx, &kubeconfigSecret); err != nil {
a.logger.Error(err, "Failed to create secret with kubeconfig for Fleet registration")
}
if _, err = a.getFleetManager(cluster).CreateCluster(ctx, &clusterRegistration); err != nil {
a.logger.Error(err, "Failed to create Cluster for Fleet registration")
}
a.logger.Info("Registered shoot cluster in Fleet Manager ", "registration", clusterRegistration)
} else {
kubeconfigSecret := &corev1.Secret{}
if err := a.client.Get(ctx, kutil.Key(namespace, KubeconfigSecretName), kubeconfigSecret); err != nil {
a.logger.Error(err, "Failed to find Secret with kubeconfig for Fleet registration.")
return err
}
a.logger.Info("Checking if the fleet cluster already exists")
// Check whether we already have an existing cluster
_, err := a.getFleetManager(cluster).GetCluster(ctx, buildCrdName(cluster))
// We cannot find the cluster because of an unknown error
if err != nil && !errors.IsNotFound(err) {
a.logger.Error(err, "Failed to get cluster registration for Shoot", "shoot", cluster.Shoot.Name)
return err
}
// We cannot find the cluster because we haven't registered it yet
if errors.IsNotFound(err) {
a.logger.Info("Creating fleet cluster", "shoot", cluster.Shoot.Name)
return a.registerNewClusterInFleet(ctx, namespace, cluster, labels, kubeconfigSecret.Data[KubeconfigKey])
}
a.logger.Info("Updating existing fleet cluster")
return a.updateClusterInFleet(ctx, cluster, labels, kubeconfigSecret.Data[KubeconfigKey])
}
func (a *actuator) updateClusterInFleet(ctx context.Context, cluster *extensions.Cluster, labels map[string]string, kubeconfig []byte) error {
updated := false
clusterRegistration, err := a.getFleetManager(cluster).GetCluster(ctx, buildCrdName(cluster))
if err != nil {
a.logger.Error(err, "Could not fetch fleet cluster")
return err
}
fleetKubeconfigSecret, err := a.getFleetManager(cluster).GetKubeconfigSecret(ctx, buildKubecfgName(cluster))
if err != nil {
a.logger.Error(err, "Could not fetch fleet kubeconfig secret")
return err
}
if !reflect.DeepEqual(clusterRegistration.Labels, labels) {
a.logger.Info("Cluster labels changed, updating")
clusterRegistration.Labels = labels
_, err := a.getFleetManager(cluster).UpdateCluster(ctx, clusterRegistration)
if err != nil {
a.logger.Error(err, "Failed to update cluster labels in Fleet registration.", "clusterName", clusterRegistration.Name)
return err
}
updated = true
}
if !bytes.Equal(fleetKubeconfigSecret.Data[FleetClusterSecretDataKey], kubeconfig) {
a.logger.Info("Shoot kubeconfig changed, updating")
fleetKubeconfigSecret.Data[FleetClusterSecretDataKey] = kubeconfig
_, err := a.getFleetManager(cluster).UpdateKubeconfigSecret(ctx, fleetKubeconfigSecret)
if err != nil {
a.logger.Error(err, "Failed to update kuebconfig secret in Fleet registration.", "clusterName", clusterRegistration.Name)
return err
}
updated = true
}
if updated {
a.logger.Info("Cluster successfully updated.")
} else {
a.logger.Info("Cluster is already up to date.")
}
return nil
}
func (a *actuator) registerNewClusterInFleet(ctx context.Context, namespace string, cluster *extensions.Cluster, labels map[string]string, kubeconfig []byte) error {
secretData := make(map[string][]byte)
secretData[FleetClusterSecretDataKey] = kubeconfig
const fleetRegisterNamespace = "clusters"
kubeconfigSecret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: buildKubecfgName(cluster),
Namespace: fleetRegisterNamespace,
},
Data: secretData,
}
clusterRegistration := fleetv1alpha1.Cluster{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: buildCrdName(cluster),
Namespace: fleetRegisterNamespace,
Labels: labels,
},
Spec: fleetv1alpha1.ClusterSpec{
KubeConfigSecret: "kubecfg-" + buildCrdName(cluster),
},
}
if _, err := a.getFleetManager(cluster).CreateKubeconfigSecret(ctx, &kubeconfigSecret); err != nil {
a.logger.Error(err, "Failed to create secret with kubeconfig for Fleet registration")
return err
}
if _, err := a.getFleetManager(cluster).CreateCluster(ctx, &clusterRegistration); err != nil {
a.logger.Error(err, "Failed to create Cluster for Fleet registration")
return err
}
a.logger.Info("Registered shoot cluster in Fleet Manager ", "registration", clusterRegistration)
return nil
}
func prepareLabels(cluster *extensions.Cluster, serviceConfig projConfig.ProjectConfig, override projConfig.ProjectConfig) map[string]string {
@@ -250,9 +339,11 @@ func prepareLabels(cluster *extensions.Cluster, serviceConfig projConfig.Project
}
func (a *actuator) updateStatus(ctx context.Context, ex *extensionsv1alpha1.Extension) error {
return controller.TryUpdateStatus(ctx, retry.DefaultBackoff, a.client, ex, func() error {
return nil
})
statusUpdater := controller.NewStatusUpdater(a.logger)
statusUpdater.InjectClient(a.client)
operationType := gardencorev1beta1helper.ComputeOperationType(ex.ObjectMeta, ex.Status.LastOperation)
err := statusUpdater.Success(ctx, ex, operationType, "Successfully reconciled Extension")
return err
}
func (a *actuator) getFleetManager(cluster *extensions.Cluster) *FleetManager {
@@ -273,9 +364,14 @@ func getProjectConfig(cluster *extensions.Cluster, serviceConfig *config.Config)
return projectConfig
}
// buildKubecfgName creates the name of the kubeconfig secret used for cluster registration in the Fleet manager cluster
func buildKubecfgName(cluster *extensions.Cluster) string {
return fmt.Sprintf("kubecfg-%s", buildCrdName(cluster))
}
// buildCrdName creates a unique name for cluster registration resources in Fleet manager cluster
func buildCrdName(cluster *extensions.Cluster) string {
return cluster.Seed.Name + "" + cluster.Shoot.Name
return fmt.Sprintf("%s%s", cluster.Seed.Name, cluster.Shoot.Name)
}
// isShootedSeedCluster checks if clusters purpose is Infrastructure

View File

@@ -15,7 +15,7 @@
package controller
import (
controllerconfig "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller/config"
controllerconfig "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/config"
"github.com/gardener/gardener/extensions/pkg/controller/extension"
"sigs.k8s.io/controller-runtime/pkg/controller"

View File

@@ -1,6 +1,6 @@
package config
import "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config"
import "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config"
// Config holds controller config
type Config struct {

View File

@@ -8,9 +8,9 @@ import (
"k8s.io/client-go/tools/clientcmd"
fleetConfig "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/apis/config"
clientset "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned"
"github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
fleetConfig "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/apis/config"
clientset "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
@@ -63,6 +63,16 @@ func (f *FleetManager) GetCluster(ctx context.Context, clusterName string) (*v1a
return f.fleetClient.FleetV1alpha1().Clusters(f.namespace).Get(ctx, clusterName, metav1.GetOptions{})
}
// GetKubeconfigSecret fetches a cluster's kubeconfig secret from remote fleet
func (f *FleetManager) GetKubeconfigSecret(ctx context.Context, secretName string) (*corev1.Secret, error) {
return f.secretClient.CoreV1().Secrets(f.namespace).Get(ctx, secretName, metav1.GetOptions{})
}
// UpdateKubeconfigSecret updates a cluster's kubeconfig secret in remote fleet
func (f *FleetManager) UpdateKubeconfigSecret(ctx context.Context, secret *corev1.Secret) (*corev1.Secret, error) {
return f.secretClient.CoreV1().Secrets(f.namespace).Update(ctx, secret, metav1.UpdateOptions{})
}
// CreateKubeconfigSecret creates a cluster's kubeconfig secret in remote fleet
func (f *FleetManager) CreateKubeconfigSecret(ctx context.Context, secret *corev1.Secret) (*corev1.Secret, error) {
return f.secretClient.CoreV1().Secrets(f.namespace).Create(ctx, secret, metav1.CreateOptions{})
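The diff window above only shows the new secret helpers; how the two clients behind them are constructed lies outside the hunk. As rough orientation, a fleet manager like this can be wired up from raw kubeconfig bytes as sketched below. This is a hedged sketch, not the repository's actual constructor: the field names mirror the receivers used above (fleetClient, secretClient, namespace), while the sketch struct, the function name, and the assumption that the generated fleet clientset exposes the usual Interface/NewForConfig pair are illustrative.

```go
package controller

import (
	clientset "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/client/fleet/clientset/versioned"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// fleetManagerSketch mirrors the fields the methods above rely on; the real
// FleetManager type in this package may differ in detail.
type fleetManagerSketch struct {
	fleetClient  clientset.Interface
	secretClient kubernetes.Interface
	namespace    string
}

// newFleetManagerSketch builds both clients from the fleet manager kubeconfig:
// the generated fleet clientset for Cluster resources and a core clientset for
// the kubeconfig Secrets in the registration namespace.
func newFleetManagerSketch(kubeconfig []byte, namespace string) (*fleetManagerSketch, error) {
	restConfig, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
	if err != nil {
		return nil, err
	}
	fleetClient, err := clientset.NewForConfig(restConfig)
	if err != nil {
		return nil, err
	}
	secretClient, err := kubernetes.NewForConfig(restConfig)
	if err != nil {
		return nil, err
	}
	return &fleetManagerSketch{fleetClient: fleetClient, secretClient: secretClient, namespace: namespace}, nil
}
```

The real constructor presumably also takes the per-project fleetConfig.ProjectConfig imported in this hunk, since the actuator keeps one FleetManager per project configuration.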

View File

@@ -19,7 +19,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
fleetcontroller "github.com/javamachr/gardener-extension-shoot-fleet-agent/pkg/controller"
fleetcontroller "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller"
"github.com/gardener/gardener/extensions/pkg/controller/healthcheck"
healthcheckconfig "github.com/gardener/gardener/extensions/pkg/controller/healthcheck/config"

View File

@@ -0,0 +1,117 @@
package managed_resource_handler
import (
"context"
"fmt"
"github.com/gardener/gardener/extensions/pkg/util"
"github.com/gardener/gardener/pkg/apis/resources/v1alpha1"
"github.com/gardener/gardener/pkg/extensions"
kutil "github.com/gardener/gardener/pkg/utils/kubernetes"
"github.com/gardener/gardener/pkg/utils/kubernetes/health"
"github.com/gardener/gardener/pkg/utils/managedresources"
"github.com/gardener/gardener/pkg/utils/retry"
"github.com/go-logr/logr"
token_requestor_handler "github.com/ysoftdevs/gardener-extension-shoot-fleet-agent/pkg/controller/token-requestor-handler"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"path/filepath"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
"time"
)
var (
ChartsPath = filepath.Join("charts", "internal")
ManagedResourcesChartValues = map[string]interface{}{
"shootAccessServiceAccountName": token_requestor_handler.TargetServiceAccountName,
"shootAccessServiceAccountNamespace": token_requestor_handler.TargetServiceAccountNamespace,
}
)
const (
ManagedResourceChartName = "shoot-fleet-agent-shoot"
ManagedResourceName = "extension-shoot-fleet-agent-shoot"
)
type ManagedResourceHandler struct {
ctx context.Context
client client.Client
cluster *extensions.Cluster
logger logr.Logger
}
func NewManagedResourceHandler(ctx context.Context, client client.Client, cluster *extensions.Cluster) *ManagedResourceHandler {
return &ManagedResourceHandler{
ctx: ctx,
client: client,
cluster: cluster,
logger: log.Log.WithValues("logger", "managed-resource-handler", "cluster", cluster.ObjectMeta.Name),
}
}
// renderChart takes the helm chart at charts/internal/<ManagedResourceChartName> and renders it with the values
func (h *ManagedResourceHandler) renderChart() ([]byte, error) {
chartPath := filepath.Join(ChartsPath, ManagedResourceChartName)
renderer, err := util.NewChartRendererForShoot(h.cluster.Shoot.Spec.Kubernetes.Version)
if err != nil {
return nil, err
}
chart, err := renderer.Render(chartPath, ManagedResourceChartName, h.cluster.ObjectMeta.Name, ManagedResourcesChartValues)
if err != nil {
return nil, err
}
return chart.Manifest(), nil
}
func (h *ManagedResourceHandler) EnsureManagedResources() error {
h.logger.Info("Rendering chart with the managed resources")
renderedChart, err := h.renderChart()
if err != nil {
return err
}
data := map[string][]byte{ManagedResourceChartName: renderedChart}
keepObjects := false
forceOverwriteAnnotations := false
h.logger.Info("Creating the ManagedResource")
if err := managedresources.Create(h.ctx, h.client, h.cluster.ObjectMeta.Name, ManagedResourceName, false, "", data, &keepObjects, map[string]string{}, &forceOverwriteAnnotations); err != nil {
h.logger.Error(err, "ManagedResource could not be created")
return err
}
h.logger.Info("Waiting until the ManagedResource is healthy")
if err := h.waitUntilManagedResourceHealthy(); err != nil {
h.logger.Error(err, "ManagedResource has been created, but hasn't been applied")
return err
}
h.logger.Info("ManagedResource created")
return nil
}
func (h *ManagedResourceHandler) waitUntilManagedResourceHealthy() error {
name := ManagedResourceName
namespace := h.cluster.ObjectMeta.Name
obj := &v1alpha1.ManagedResource{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
}
return retry.UntilTimeout(h.ctx, 5*time.Second, 60*time.Second, func(ctx context.Context) (done bool, err error) {
if err := h.client.Get(ctx, kutil.Key(namespace, name), obj); err != nil {
h.logger.Info(fmt.Sprintf("Could not wait for the managed resource to be ready: %+v", err))
return retry.MinorError(err)
}
// Check whether ManagedResource has proper status
if err := health.CheckManagedResource(obj); err != nil {
h.logger.Info(fmt.Sprintf("Managed resource %s not ready (yet): %+v", name, err))
return retry.MinorError(err)
}
return retry.Ok()
})
}

View File

@@ -0,0 +1,163 @@
package token_requestor_handler
import (
"context"
"fmt"
extensionscontroller "github.com/gardener/gardener/extensions/pkg/controller"
constsv1alpha1 "github.com/gardener/gardener/pkg/apis/resources/v1alpha1"
"github.com/gardener/gardener/pkg/extensions"
kutil "github.com/gardener/gardener/pkg/utils/kubernetes"
"github.com/gardener/gardener/pkg/utils/retry"
"github.com/go-logr/logr"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clientapiv1 "k8s.io/client-go/tools/clientcmd/api/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/yaml"
"time"
)
const (
TokenRequestorSecretName = "shoot-access-extension-shoot-fleet-agent"
TokenRequestorSecretKey = constsv1alpha1.DataKeyKubeconfig
TargetServiceAccountName = "extension-shoot-fleet-agent"
TargetServiceAccountNamespace = "kube-system"
)
type TokenRequestorHandler struct {
ctx context.Context
client client.Client
cluster *extensions.Cluster
logger logr.Logger
}
func NewTokenRequestorHandler(ctx context.Context, client client.Client, cluster *extensions.Cluster) *TokenRequestorHandler {
return &TokenRequestorHandler{
ctx: ctx,
client: client,
cluster: cluster,
logger: log.Log.WithValues("logger", "token-requestor-handler", "cluster", cluster.ObjectMeta.Name),
}
}
func (t *TokenRequestorHandler) getGenericKubeconfigName() string {
return extensionscontroller.GenericTokenKubeconfigSecretNameFromCluster(t.cluster)
}
func (t *TokenRequestorHandler) getGenericKubeconfig() (*clientapiv1.Config, error) {
kubeconfigSecret := &v1.Secret{}
kubeconfigSecretName := t.getGenericKubeconfigName()
if err := t.client.Get(t.ctx, kutil.Key(t.cluster.ObjectMeta.Name, kubeconfigSecretName), kubeconfigSecret); err != nil {
return nil, err
}
kubeconfigData, ok := kubeconfigSecret.Data[TokenRequestorSecretKey]
if !ok {
return nil, fmt.Errorf("secret %s doesn't have data key %s", kubeconfigSecretName, TokenRequestorSecretKey)
}
kubeconfig := &clientapiv1.Config{}
if err := yaml.Unmarshal(kubeconfigData, kubeconfig); err != nil {
return nil, err
}
return kubeconfig, nil
}
func (t *TokenRequestorHandler) isTokenInserted() bool {
kubeconfigSecret := &v1.Secret{}
if err := t.client.Get(t.ctx, kutil.Key(t.cluster.ObjectMeta.Name, TokenRequestorSecretName), kubeconfigSecret); err != nil {
t.logger.Info(fmt.Sprintf("Couldn't fet the kubeconfig secret %s, %+v", TokenRequestorSecretName, err))
return false
}
kubeconfigData, ok := kubeconfigSecret.Data[TokenRequestorSecretKey]
if !ok {
t.logger.Info(fmt.Sprintf("Kubeconfig secret %s doesn't contain data item %s", TokenRequestorSecretName, TokenRequestorSecretKey))
return false
}
kubeconfig := &clientapiv1.Config{}
if err := yaml.Unmarshal(kubeconfigData, kubeconfig); err != nil {
t.logger.Info(fmt.Sprintf("Data item %s in kubeconfig secret %s couldn't be parsed: %+v", TokenRequestorSecretKey, TokenRequestorSecretName, err))
return false
}
if len(kubeconfig.AuthInfos) == 0 || kubeconfig.AuthInfos[0].AuthInfo.Token == "" {
t.logger.Info(fmt.Sprintf("Kubeconfig in secret %s doesn't contain token (yet?)", TokenRequestorSecretName))
return false
}
t.logger.Info(fmt.Sprintf("Kubeconfig in secret %s contains a proper token", TokenRequestorSecretName))
return true
}
func (t *TokenRequestorHandler) createKubeconfigSecretFromGeneric() error {
kubeconfig, err := t.getGenericKubeconfig()
if err != nil {
return err
}
externalFound := false
for _, value := range t.cluster.Shoot.Status.AdvertisedAddresses {
// check for the externally advertised API server URL
if value.Name == "external" {
kubeconfig.Clusters[0].Cluster.Server = value.URL
externalFound = true
break
}
}
if !externalFound {
t.logger.Info("Shoot status doesn't contain an external URL address")
return fmt.Errorf("shoot status doesn't contain an externally advertised address")
}
kubeconfigYaml, err := yaml.Marshal(kubeconfig)
if err != nil {
return err
}
existingSecret := &v1.Secret{}
err = t.client.Get(t.ctx, kutil.Key(t.cluster.ObjectMeta.Name, TokenRequestorSecretName), existingSecret)
// If the secret already exists (err == nil) or the lookup failed with an unexpected error, stop here; the secret is only created when it is missing.
if err == nil || !apierrors.IsNotFound(err) {
return err
}
secretObject := &v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: t.cluster.ObjectMeta.Name,
Name: TokenRequestorSecretName,
Labels: map[string]string{
constsv1alpha1.ResourceManagerPurpose: constsv1alpha1.LabelPurposeTokenRequest,
},
Annotations: map[string]string{
constsv1alpha1.ServiceAccountName: TargetServiceAccountName,
constsv1alpha1.ServiceAccountNamespace: TargetServiceAccountNamespace,
},
},
StringData: map[string]string{
TokenRequestorSecretKey: string(kubeconfigYaml),
},
}
return t.client.Create(t.ctx, secretObject)
}
// EnsureKubeconfig creates the secret with a token-requestor label and waits until gardener fills in
// the token into the kubeconfig
func (t *TokenRequestorHandler) EnsureKubeconfig() error {
t.logger.Info(fmt.Sprintf("Trying to create the token requestor secret"))
if err := t.createKubeconfigSecretFromGeneric(); err != nil {
t.logger.Error(err, "Generic kubeconfig could not be copied")
return err
}
t.logger.Info("Waiting until the token is propagated into the token requestor secret")
if err := retry.UntilTimeout(t.ctx, 5*time.Second, 60*time.Second, func(ctx context.Context) (bool, error) {
return t.isTokenInserted(), nil
}); err != nil {
t.logger.Error(err, "Kubeconfig could not be created")
return err
}
return nil
}
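For orientation: once gardener-resource-manager has injected the token, the data under the kubeconfig key of this secret is an ordinary client-go kubeconfig pointing at the shoot's external address, which is exactly what the actuator later hands to fleet. A minimal, hedged sketch of consuming it directly; the helper below is illustrative and not part of the extension:

```go
package token_requestor_handler

import (
	"context"

	kutil "github.com/gardener/gardener/pkg/utils/kubernetes"
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// shootClientFromTokenSecret is a sketch: it reads the token-requestor secret
// from the shoot's namespace in the seed and turns its kubeconfig bytes into a
// typed Kubernetes client for the shoot.
func shootClientFromTokenSecret(ctx context.Context, c client.Client, namespace string) (kubernetes.Interface, error) {
	secret := &v1.Secret{}
	if err := c.Get(ctx, kutil.Key(namespace, TokenRequestorSecretName), secret); err != nil {
		return nil, err
	}
	restConfig, err := clientcmd.RESTConfigFromKubeConfig(secret.Data[TokenRequestorSecretKey])
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(restConfig)
}
```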

pkg/utils/parallel.go Normal file
View File

@@ -0,0 +1,43 @@
package utils
import (
"github.com/pkg/errors"
"sync"
)
// RunParallelFunctions runs a list of functions in parallel
// returns nil if all functions return nil
// returns an error which wraps all errors that occurred within the functions
func RunParallelFunctions(functions []func() error) error {
var wg sync.WaitGroup
var mu sync.Mutex
var errorList []error
for _, function := range functions {
wg.Add(1)
// create an intermediate variable so each goroutine uses the correct function and doesn't get overwritten
function := function
go func(wg *sync.WaitGroup, mu *sync.Mutex) {
defer wg.Done()
err := function()
if err != nil {
mu.Lock()
errorList = append(errorList, err)
mu.Unlock()
}
}(&wg, &mu)
}
wg.Wait()
if len(errorList) > 0 {
err := errors.New("not all functions returned nil error")
for _, errorItem := range errorList {
err = errors.Wrap(err, errorItem.Error())
}
return err
}
return nil
}
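A small usage sketch to make the contract concrete: the call returns nil only when every function returns nil, otherwise a single error wrapping all collected failures. Both step functions below are made up for illustration.

```go
package utils

import "fmt"

// exampleParallelSetup runs two independent steps concurrently and surfaces a
// combined error if any of them fails. The steps themselves are hypothetical.
func exampleParallelSetup() error {
	stepA := func() error { return nil }
	stepB := func() error { return fmt.Errorf("step B failed") }
	// Non-nil result here, wrapping "step B failed".
	return RunParallelFunctions([]func() error{stepA, stepB})
}
```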

View File

@@ -1,5 +0,0 @@
TAGS
tags
.*.swp
tomlcheck/tomlcheck
toml.test

View File

@@ -1,15 +0,0 @@
language: go
go:
- 1.1
- 1.2
- 1.3
- 1.4
- 1.5
- 1.6
- tip
install:
- go install ./...
- go get github.com/BurntSushi/toml-test
script:
- export PATH="$PATH:$HOME/gopath/bin"
- make test

View File

@@ -1,3 +0,0 @@
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)

View File

@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -1,19 +0,0 @@
install:
go install ./...
test: install
go test -v
toml-test toml-test-decoder
toml-test -encoder toml-test-encoder
fmt:
gofmt -w *.go */*.go
colcheck *.go */*.go
tags:
find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS
push:
git push origin master
git push github master

View File

@@ -1,218 +0,0 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.

View File

@@ -1,509 +0,0 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

View File

@@ -1,121 +0,0 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchially. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}

View File

@@ -1,27 +0,0 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml

View File

@@ -1,568 +0,0 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we used that.
// Basically, this prevents the encoder for handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeName returns the TOML type name of the Go value's type. It is
// used to determine whether the types of array elements are mixed (which is
// forbidden). If the Go value is nil, then it is illegal for it to be an array
// element, and valueIsNil is returned as true.
// Returns the TOML type of a Go value. The type may be `nil`, which means
// no concrete TOML type could be found.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
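// Illustrative sketch, not part of the original file: the tagOptions that
// getOptions above would produce for a few representative `toml` struct tags.
// The tag strings and the function name are hypothetical examples.
func exampleGetOptions() []tagOptions {
    return []tagOptions{
        getOptions(reflect.StructTag(`toml:"-"`)),               // {skip: true}
        getOptions(reflect.StructTag(`toml:"title"`)),           // {name: "title"}
        getOptions(reflect.StructTag(`toml:"count,omitempty"`)), // {name: "count", omitempty: true}
        getOptions(reflect.StructTag(`toml:",omitzero"`)),       // {name: "", omitzero: true}
    }
}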
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
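// Illustrative sketch, not part of the original file: how the two predicates
// above interact with the omitzero and omitempty tag options during encoding.
// The type and field names are hypothetical.
type exampleOmitConfig struct {
    Title string   `toml:"title"`          // always written
    Count int      `toml:"count,omitzero"` // skipped while Count == 0 (isZero)
    Tags  []string `toml:"tags,omitempty"` // skipped while Tags is empty (isEmpty)
}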
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}

View File

@@ -1,19 +0,0 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler

View File

@@ -1,18 +0,0 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

View File

@@ -1,953 +0,0 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
itemInlineTableStart
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
// Allow for backing up up to three runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
prevWidths [3]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
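// Illustrative sketch, not part of the original file: driving the lexer by
// repeatedly calling nextItem until EOF, the same way the parser does. The
// input string and the function name are hypothetical examples.
func exampleLexItems() {
    lx := lex("answer = 42 # a comment\n")
    for {
        it := lx.nextItem()
        if it.typ == itemEOF || it.typ == itemError {
            break
        }
        // Prints e.g. (KeyStart, ), (Text, answer), (Integer, 42), ...
        fmt.Println(it)
    }
}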
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.atEOF {
panic("next called after EOF")
}
if lx.pos >= len(lx.input) {
lx.atEOF = true
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
lx.prevWidths[2] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[0]
if lx.nprev < 3 {
lx.nprev++
}
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.prevWidths[0] = w
lx.pos += w
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called up to three times between calls to next.
func (lx *lexer) backup() {
if lx.atEOF {
lx.atEOF = false
return
}
if lx.nprev < 1 {
panic("backed up too far")
}
w := lx.prevWidths[0]
lx.prevWidths[0] = lx.prevWidths[1]
lx.prevWidths[1] = lx.prevWidths[2]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that character values interpolated into the message are escaped if
// they are special characters (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.emit(itemEOF)
return nil
}
return lx.errorf("expected a top-level item to end with a newline, "+
"comment, or EOF, but got %q instead", r)
}
// lexTableStart lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("expected end of table array name delimiter %q, "+
"but got %q instead", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("unexpected end of table name " +
"(table names cannot be empty)")
case r == tableSep:
return lx.errorf("unexpected table separator " +
"(table names cannot be empty)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, "+
"but got %q instead", r)
}
}
// lexKeyStart consumes all whitespace and newlines up to the start of a key
// name, emits itemKeyStart, and hands off to the appropriate key lexer.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("unexpected key separator %q", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("bare keys cannot contain %q", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("expected key separator %q, but got %q instead",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT newlines.
// In array syntax, the array states are responsible for ignoring newlines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("floats must start with a digit, not '.'")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf(
"expected a comma or array terminator %q, but got %q instead",
arrayEnd, r,
)
}
// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
return lexInlineTableEnd
}
lx.backup()
lx.push(lexInlineTableValueEnd)
return lexKeyStart
}
// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
lx.ignore()
return lexInlineTableValue
case r == inlineTableEnd:
return lexInlineTableEnd
}
return lx.errorf("expected a comma or an inline table terminator %q, "+
"but got %q instead", inlineTableEnd, r)
}
// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemInlineTableEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case '\\':
return lexMultilineStringEscape
case stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == eof:
return lx.errorf("unexpected EOF")
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
switch lx.next() {
case eof:
return lx.errorf("unexpected EOF")
case rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following "+
"escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected four hexadecimal digits after '\u', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
"but got %q instead", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z', '+':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("floats must start with a digit, not '.'")
}
return lx.errorf("expected a digit but got %q", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if !unicode.IsLetter(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
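// Illustrative sketch, not part of the original file: key names whose runes
// all satisfy isBareKeyChar can be written bare; anything else must be quoted.
// The keys below are hypothetical examples.
var exampleBareKeys = map[string]bool{
    "server-1": true,  // letters, digits, '-' and '_' are allowed
    "db_port":  true,
    "my key":   false, // space is not a bare-key character
    "a.b":      false, // '.' separates key parts, so this must be quoted
}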
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}

View File

@@ -1,592 +0,0 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
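// Illustrative sketch, not part of the original file: what parse yields for a
// small document. The input and function name are hypothetical examples; note
// that integers come back as int64 and tables become nested maps.
func exampleParse() (map[string]interface{}, error) {
    p, err := parse("title = \"demo\"\nport = 8080\n\n[owner]\nname = \"gopher\"\n")
    if err != nil {
        return nil, err
    }
    // p.mapping == map[string]interface{}{
    //     "title": "demo",
    //     "port":  int64(8080),
    //     "owner": map[string]interface{}{"name": "gopher"},
    // }
    return p.mapping, nil
}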
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
case itemInlineTableStart:
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
p.currentKey = ""
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
if it.typ != itemKeyStart {
p.bug("Expected key start but instead found %q, around line %d",
it.val, p.approxLine)
}
// retrieve key
k := p.next()
p.approxLine = k.line
kname := p.keyString(k)
// retrieve value
p.currentKey = kname
val, typ := p.value(p.next())
// make sure we keep metadata up to date
p.setType(kname, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[kname] = val
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
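// Illustrative sketch, not part of the original file: inputs the two
// validators above accept and reject. The literals and the function name are
// hypothetical examples.
func exampleNumberValidation() []bool {
    return []bool{
        numUnderscoresOK("1_000"),  // true: each underscore sits between other characters
        numUnderscoresOK("1__000"), // false: consecutive underscores
        numUnderscoresOK("_1000"),  // false: leading underscore
        numPeriodsOK("3.14"),       // true: the '.' is followed by a digit
        numPeriodsOK("3."),         // false: nothing follows the '.'
    }
}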
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, and will
// account for implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
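// Illustrative sketch, not part of the original file: the two helpers above
// applied to the body of a multiline basic string whose delimiters the lexer
// has already stripped. The literal and function name are hypothetical.
func exampleMultilineCleanup() string {
    body := "\nThe quick brown \\\n    fox.\n"
    // stripEscapedWhitespace drops the line-ending backslash plus the
    // whitespace that follows it; stripFirstNewline drops the newline right
    // after the opening delimiter. Result: "The quick brown fox.\n"
    return stripFirstNewline(stripEscapedWhitespace(body))
}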
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}

View File

@@ -1 +0,0 @@
au BufWritePost *.go silent!make tags > /dev/null 2>&1

View File

@@ -1,91 +0,0 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
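// Illustrative sketch, not part of the original file: the homogeneity rule in
// terms of the internal types above. The parser value p and the function name
// are assumptions for illustration.
func exampleTypeOfArray(p *parser) tomlType {
    // Homogeneous element types collapse to the plain "Array" type.
    return p.typeOfArray([]tomlType{tomlInteger, tomlInteger, tomlInteger})
    // p.typeOfArray([]tomlType{tomlInteger, tomlString}) would instead call
    // p.panicf, which parse() recovers and reports as an error.
}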

View File

@@ -1,242 +0,0 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts fields by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts fields by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
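// Illustrative sketch, not part of the original file: how the dominance rules
// above resolve an embedded-field name clash. The types are hypothetical.
type exampleInner struct {
    Name string `toml:"name"`
}

type exampleOuter struct {
    exampleInner
    Name string `toml:"name"` // shallower (depth 0), so it hides exampleInner.Name
}

// typeFields(reflect.TypeOf(exampleOuter{})) reports a single "name" field:
// the one declared directly on exampleOuter.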

View File

@@ -1,18 +0,0 @@
language: go
go:
- 1.6
- 1.7
- 1.8
- tip
script:
- go test -v
notifications:
webhooks:
urls:
- https://webhooks.gitter.im/e/06e3328629952dabe3e0
on_success: change # options: [always|never|change] default: always
on_failure: always # options: [always|never|change] default: always
on_start: never # options: [always|never|change] default: always

View File

@@ -1,8 +0,0 @@
# 1.0.1 (2017-05-31)
## Fixed
- #21: Fix generation of alphanumeric strings (thanks @dbarranco)
# 1.0.0 (2014-04-30)
- Initial release.

View File

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,70 +0,0 @@
GoUtils
===========
[![Stability: Maintenance](https://masterminds.github.io/stability/maintenance.svg)](https://masterminds.github.io/stability/maintenance.html)
[![GoDoc](https://godoc.org/github.com/Masterminds/goutils?status.png)](https://godoc.org/github.com/Masterminds/goutils) [![Build Status](https://travis-ci.org/Masterminds/goutils.svg?branch=master)](https://travis-ci.org/Masterminds/goutils) [![Build status](https://ci.appveyor.com/api/projects/status/sc2b1ew0m7f0aiju?svg=true)](https://ci.appveyor.com/project/mattfarina/goutils)
GoUtils provides users with utility functions to manipulate strings in various ways. It is a Go implementation of some
string manipulation libraries of Java Apache Commons. GoUtils includes the following Java Apache Commons classes:
* WordUtils
* RandomStringUtils
* StringUtils (partial implementation)
## Installation
If you have Go set up on your system, run the following from your GOPATH directory in a terminal:
go get github.com/Masterminds/goutils
If you do not have Go set up on your system, please follow the [Go installation directions from the documentation](http://golang.org/doc/install), and then follow the instructions above to install GoUtils.
## Documentation
GoUtils doc is available here: [![GoDoc](https://godoc.org/github.com/Masterminds/goutils?status.png)](https://godoc.org/github.com/Masterminds/goutils)
## Usage
The code snippets below show examples of how to use GoUtils. Some functions return errors while others do not. The first instance below, which does not return an error, is the `Initials` function (located within the `wordutils.go` file).
package main
import (
"fmt"
"github.com/Masterminds/goutils"
)
func main() {
// EXAMPLE 1: A goutils function which returns no errors
fmt.Println (goutils.Initials("John Doe Foo")) // Prints out "JDF"
}
Some functions return errors, mainly due to illegal arguments being passed as parameters. The code example below illustrates how to deal with a function that returns an error. In this instance, the function is the `Random` function (located within the `randomstringutils.go` file).
package main
import (
"fmt"
"github.com/Masterminds/goutils"
)
func main() {
// EXAMPLE 2: A goutils function which returns an error
rand1, err1 := goutils.Random (-1, 0, 0, true, true)
if err1 != nil {
fmt.Println(err1) // Prints out error message because -1 was entered as the first parameter in goutils.Random(...)
} else {
fmt.Println(rand1)
}
}
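The examples above only touch `Initials` and `Random`. Purely as an illustrative sketch (not part of the original README), the snippet below exercises a few more of the helpers implemented in this package, `Wrap`, `Capitalize`, and `RandomAlphaNumeric`, using the same import path as the examples above.

```go
package main

import (
	"fmt"

	"github.com/Masterminds/goutils"
)

func main() {
	// Wrap a long line at column 20; very long words such as URLs are left intact.
	fmt.Println(goutils.Wrap("the quick brown fox jumps over the lazy dog", 20))

	// Capitalize the first letter of each whitespace-separated word.
	fmt.Println(goutils.Capitalize("hello world")) // Prints out "Hello World"

	// RandomAlphaNumeric returns an error for invalid lengths, so check it.
	s, err := goutils.RandomAlphaNumeric(12)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(s)
}
```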
## License
GoUtils is licensed under the Apache License, Version 2.0. Please check the LICENSE.txt file or visit http://www.apache.org/licenses/LICENSE-2.0 for a copy of the license.
## Issue Reporting
Make suggestions or report issues using the Git issue tracker: https://github.com/Masterminds/goutils/issues
## Website
* [GoUtils webpage](http://Masterminds.github.io/goutils/)

View File

@@ -1,21 +0,0 @@
version: build-{build}.{branch}
clone_folder: C:\gopath\src\github.com\Masterminds\goutils
shallow_clone: true
environment:
GOPATH: C:\gopath
platform:
- x64
build: off
install:
- go version
- go env
test_script:
- go test -v
deploy: off

View File

@@ -1,251 +0,0 @@
/*
Copyright 2014 Alexander Okoli
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package goutils
import (
"crypto/rand"
"fmt"
"math"
"math/big"
"regexp"
"unicode"
)
/*
CryptoRandomNonAlphaNumeric creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of all characters (ASCII/Unicode values between 0 and 2,147,483,647 (math.MaxInt32)).
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
*/
func CryptoRandomNonAlphaNumeric(count int) (string, error) {
return CryptoRandomAlphaNumericCustom(count, false, false)
}
/*
CryptoRandomAscii creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of characters whose ASCII value is between 32 and 126 (inclusive).
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
*/
func CryptoRandomAscii(count int) (string, error) {
return CryptoRandom(count, 32, 127, false, false)
}
/*
CryptoRandomNumeric creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of numeric characters.
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
*/
func CryptoRandomNumeric(count int) (string, error) {
return CryptoRandom(count, 0, 0, false, true)
}
/*
CryptoRandomAlphabetic creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of alphabetic characters.
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
*/
func CryptoRandomAlphabetic(count int) (string, error) {
return CryptoRandom(count, 0, 0, true, false)
}
/*
CryptoRandomAlphaNumeric creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of alpha-numeric characters.
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
*/
func CryptoRandomAlphaNumeric(count int) (string, error) {
if count == 0 {
return "", nil
}
RandomString, err := CryptoRandom(count, 0, 0, true, true)
if err != nil {
return "", fmt.Errorf("Error: %s", err)
}
match, err := regexp.MatchString("([0-9]+)", RandomString)
if err != nil {
panic(err)
}
if !match {
//Get the position between 0 and the length of the string-1 to insert a random number
position := getCryptoRandomInt(count)
//Insert a random number between [0-9] in the position
RandomString = RandomString[:position] + string('0' + getCryptoRandomInt(10)) + RandomString[position + 1:]
return RandomString, err
}
return RandomString, err
}
/*
CryptoRandomAlphaNumericCustom creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of alpha-numeric characters as indicated by the arguments.
Parameters:
count - the length of random string to create
letters - if true, generated string may include alphabetic characters
numbers - if true, generated string may include numeric characters
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, CryptoRandom(...)
*/
func CryptoRandomAlphaNumericCustom(count int, letters bool, numbers bool) (string, error) {
return CryptoRandom(count, 0, 0, letters, numbers)
}
/*
CryptoRandom creates a random string based on a variety of options, using golang's crypto/rand source of randomness.
If the parameters start and end are both 0, start and end are set to ' ' and 'z' respectively, so the ASCII printable characters will be used,
unless letters and numbers are both false, in which case start and end are set to 0 and math.MaxInt32, respectively.
If chars is not nil, characters stored in chars that are between start and end are chosen.
Parameters:
count - the length of random string to create
start - the position in set of chars (ASCII/Unicode int) to start at
end - the position in set of chars (ASCII/Unicode int) to end before
letters - if true, generated string may include alphabetic characters
numbers - if true, generated string may include numeric characters
chars - the set of chars to choose randoms from. If nil, then it will use the set of all chars.
Returns:
string - the random string
error - an error stemming from invalid parameters: if count < 0; or the provided chars array is empty; or end <= start; or end > len(chars)
*/
func CryptoRandom(count int, start int, end int, letters bool, numbers bool, chars ...rune) (string, error) {
if count == 0 {
return "", nil
} else if count < 0 {
err := fmt.Errorf("randomstringutils illegal argument: Requested random string length %v is less than 0.", count) // equiv to err := errors.New("...")
return "", err
}
if chars != nil && len(chars) == 0 {
err := fmt.Errorf("randomstringutils illegal argument: The chars array must not be empty")
return "", err
}
if start == 0 && end == 0 {
if chars != nil {
end = len(chars)
} else {
if !letters && !numbers {
end = math.MaxInt32
} else {
end = 'z' + 1
start = ' '
}
}
} else {
if end <= start {
err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) must be greater than start (%v)", end, start)
return "", err
}
if chars != nil && end > len(chars) {
err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) cannot be greater than len(chars) (%v)", end, len(chars))
return "", err
}
}
buffer := make([]rune, count)
gap := end - start
// high-surrogates range, (\uD800-\uDBFF) = 55296 - 56319
// low-surrogates range, (\uDC00-\uDFFF) = 56320 - 57343
for count != 0 {
count--
var ch rune
if chars == nil {
ch = rune(getCryptoRandomInt(gap) + int64(start))
} else {
ch = chars[getCryptoRandomInt(gap) + int64(start)]
}
if letters && unicode.IsLetter(ch) || numbers && unicode.IsDigit(ch) || !letters && !numbers {
if ch >= 56320 && ch <= 57343 { // low surrogate range
if count == 0 {
count++
} else {
// Insert low surrogate
buffer[count] = ch
count--
// Insert high surrogate
buffer[count] = rune(55296 + getCryptoRandomInt(128))
}
} else if ch >= 55296 && ch <= 56191 { // High surrogates range (Partial)
if count == 0 {
count++
} else {
// Insert low surrogate
buffer[count] = rune(56320 + getCryptoRandomInt(128))
count--
// Insert high surrogate
buffer[count] = ch
}
} else if ch >= 56192 && ch <= 56319 {
// private high surrogate, skip it
count++
} else {
// not one of the surrogates*
buffer[count] = ch
}
} else {
count++
}
}
return string(buffer), nil
}
func getCryptoRandomInt(count int) int64 {
nBig, err := rand.Int(rand.Reader, big.NewInt(int64(count)))
if err != nil {
panic(err)
}
return nBig.Int64()
}

View File

@@ -1,268 +0,0 @@
/*
Copyright 2014 Alexander Okoli
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package goutils
import (
"fmt"
"math"
"math/rand"
"regexp"
"time"
"unicode"
)
// RANDOM provides the time-based seed used to generate random numbers
var RANDOM = rand.New(rand.NewSource(time.Now().UnixNano()))
/*
RandomNonAlphaNumeric creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of all characters (ASCII/Unicode values between 0 and 2,147,483,647 (math.MaxInt32)).
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
*/
func RandomNonAlphaNumeric(count int) (string, error) {
return RandomAlphaNumericCustom(count, false, false)
}
/*
RandomAscii creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of characters whose ASCII value is between 32 and 126 (inclusive).
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
*/
func RandomAscii(count int) (string, error) {
return Random(count, 32, 127, false, false)
}
/*
RandomNumeric creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of numeric characters.
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
*/
func RandomNumeric(count int) (string, error) {
return Random(count, 0, 0, false, true)
}
/*
RandomAlphabetic creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of alphabetic characters.
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
*/
func RandomAlphabetic(count int) (string, error) {
return Random(count, 0, 0, true, false)
}
/*
RandomAlphaNumeric creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of alpha-numeric characters.
Parameter:
count - the length of random string to create
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
*/
func RandomAlphaNumeric(count int) (string, error) {
RandomString, err := Random(count, 0, 0, true, true)
if err != nil {
return "", fmt.Errorf("Error: %s", err)
}
match, err := regexp.MatchString("([0-9]+)", RandomString)
if err != nil {
panic(err)
}
if !match {
//Get the position between 0 and the length of the string-1 to insert a random number
position := rand.Intn(count)
//Insert a random number between [0-9] in the position
RandomString = RandomString[:position] + string('0'+rand.Intn(10)) + RandomString[position+1:]
return RandomString, err
}
return RandomString, err
}
/*
RandomAlphaNumericCustom creates a random string whose length is the number of characters specified.
Characters will be chosen from the set of alpha-numeric characters as indicated by the arguments.
Parameters:
count - the length of random string to create
letters - if true, generated string may include alphabetic characters
numbers - if true, generated string may include numeric characters
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
*/
func RandomAlphaNumericCustom(count int, letters bool, numbers bool) (string, error) {
return Random(count, 0, 0, letters, numbers)
}
/*
Random creates a random string based on a variety of options, using the default source of randomness.
This method has exactly the same semantics as RandomSeed(int, int, int, bool, bool, []rune, *rand.Rand), but
instead of using an externally supplied source of randomness, it uses the internal *rand.Rand instance.
Parameters:
count - the length of random string to create
start - the position in set of chars (ASCII/Unicode int) to start at
end - the position in set of chars (ASCII/Unicode int) to end before
letters - if true, generated string may include alphabetic characters
numbers - if true, generated string may include numeric characters
chars - the set of chars to choose randoms from. If nil, then it will use the set of all chars.
Returns:
string - the random string
error - an error stemming from an invalid parameter within underlying function, RandomSeed(...)
*/
func Random(count int, start int, end int, letters bool, numbers bool, chars ...rune) (string, error) {
return RandomSeed(count, start, end, letters, numbers, chars, RANDOM)
}
/*
RandomSeed creates a random string based on a variety of options, using supplied source of randomness.
If the parameters start and end are both 0, start and end are set to ' ' and 'z' respectively, so the ASCII printable characters will be used,
unless letters and numbers are both false, in which case start and end are set to 0 and math.MaxInt32, respectively.
If chars is not nil, characters stored in chars that are between start and end are chosen.
This method accepts a user-supplied *rand.Rand instance to use as a source of randomness. By seeding a single *rand.Rand instance
with a fixed seed and using it for each call, the same random sequence of strings can be generated repeatedly and predictably.
Parameters:
count - the length of random string to create
start - the position in set of chars (ASCII/Unicode decimals) to start at
end - the position in set of chars (ASCII/Unicode decimals) to end before
letters - if true, generated string may include alphabetic characters
numbers - if true, generated string may include numeric characters
chars - the set of chars to choose randoms from. If nil, then it will use the set of all chars.
random - a source of randomness.
Returns:
string - the random string
error - an error stemming from invalid parameters: if count < 0; or the provided chars array is empty; or end <= start; or end > len(chars)
*/
func RandomSeed(count int, start int, end int, letters bool, numbers bool, chars []rune, random *rand.Rand) (string, error) {
if count == 0 {
return "", nil
} else if count < 0 {
err := fmt.Errorf("randomstringutils illegal argument: Requested random string length %v is less than 0.", count) // equiv to err := errors.New("...")
return "", err
}
if chars != nil && len(chars) == 0 {
err := fmt.Errorf("randomstringutils illegal argument: The chars array must not be empty")
return "", err
}
if start == 0 && end == 0 {
if chars != nil {
end = len(chars)
} else {
if !letters && !numbers {
end = math.MaxInt32
} else {
end = 'z' + 1
start = ' '
}
}
} else {
if end <= start {
err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) must be greater than start (%v)", end, start)
return "", err
}
if chars != nil && end > len(chars) {
err := fmt.Errorf("randomstringutils illegal argument: Parameter end (%v) cannot be greater than len(chars) (%v)", end, len(chars))
return "", err
}
}
buffer := make([]rune, count)
gap := end - start
// high-surrogates range, (\uD800-\uDBFF) = 55296 - 56319
// low-surrogates range, (\uDC00-\uDFFF) = 56320 - 57343
for count != 0 {
count--
var ch rune
if chars == nil {
ch = rune(random.Intn(gap) + start)
} else {
ch = chars[random.Intn(gap)+start]
}
if letters && unicode.IsLetter(ch) || numbers && unicode.IsDigit(ch) || !letters && !numbers {
if ch >= 56320 && ch <= 57343 { // low surrogate range
if count == 0 {
count++
} else {
// Insert low surrogate
buffer[count] = ch
count--
// Insert high surrogate
buffer[count] = rune(55296 + random.Intn(128))
}
} else if ch >= 55296 && ch <= 56191 { // High surrogates range (Partial)
if count == 0 {
count++
} else {
// Insert low surrogate
buffer[count] = rune(56320 + random.Intn(128))
count--
// Insert high surrogate
buffer[count] = ch
}
} else if ch >= 56192 && ch <= 56319 {
// private high surrogate, skip it
count++
} else {
// not one of the surrogates*
buffer[count] = ch
}
} else {
count++
}
}
return string(buffer), nil
}

View File

@@ -1,224 +0,0 @@
/*
Copyright 2014 Alexander Okoli
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package goutils
import (
"bytes"
"fmt"
"strings"
"unicode"
)
// Typically returned by functions where a searched item cannot be found
const INDEX_NOT_FOUND = -1
/*
Abbreviate abbreviates a string using ellipses. This will turn the string "Now is the time for all good men" into "Now is the time for..."
Specifically, the algorithm is as follows:
- If str is less than maxWidth characters long, return it.
- Else abbreviate it to (str[0:maxWidth - 3] + "...").
- If maxWidth is less than 4, return an illegal argument error.
- In no case will it return a string of length greater than maxWidth.
Parameters:
str - the string to check
maxWidth - maximum length of result string, must be at least 4
Returns:
string - abbreviated string
error - if the width is too small
*/
func Abbreviate(str string, maxWidth int) (string, error) {
return AbbreviateFull(str, 0, maxWidth)
}
/*
AbbreviateFull abbreviates a string using ellipses. This will turn the string "Now is the time for all good men" into "...is the time for..."
This function works like Abbreviate(string, int), but allows you to specify a "left edge" offset. Note that this left edge is not
necessarily going to be the leftmost character in the result, or the first character following the ellipses, but it will appear
somewhere in the result.
In no case will it return a string of length greater than maxWidth.
Parameters:
str - the string to check
offset - left edge of source string
maxWidth - maximum length of result string, must be at least 4
Returns:
string - abbreviated string
error - if the width is too small
*/
func AbbreviateFull(str string, offset int, maxWidth int) (string, error) {
if str == "" {
return "", nil
}
if maxWidth < 4 {
err := fmt.Errorf("stringutils illegal argument: Minimum abbreviation width is 4")
return "", err
}
if len(str) <= maxWidth {
return str, nil
}
if offset > len(str) {
offset = len(str)
}
if len(str)-offset < (maxWidth - 3) { // 15 - 5 < 10 - 3 = 10 < 7
offset = len(str) - (maxWidth - 3)
}
abrevMarker := "..."
if offset <= 4 {
return str[0:maxWidth-3] + abrevMarker, nil // str.substring(0, maxWidth - 3) + abrevMarker;
}
if maxWidth < 7 {
err := fmt.Errorf("stringutils illegal argument: Minimum abbreviation width with offset is 7")
return "", err
}
if (offset + maxWidth - 3) < len(str) { // 5 + (10-3) < 15 = 12 < 15
abrevStr, _ := Abbreviate(str[offset:len(str)], (maxWidth - 3))
return abrevMarker + abrevStr, nil // abrevMarker + abbreviate(str.substring(offset), maxWidth - 3);
}
return abrevMarker + str[(len(str)-(maxWidth-3)):len(str)], nil // abrevMarker + str.substring(str.length() - (maxWidth - 3));
}
/*
DeleteWhiteSpace deletes all whitespaces from a string as defined by unicode.IsSpace(rune).
It returns the string without whitespaces.
Parameter:
str - the string to delete whitespace from, may be nil
Returns:
the string without whitespaces
*/
func DeleteWhiteSpace(str string) string {
if str == "" {
return str
}
sz := len(str)
var chs bytes.Buffer
count := 0
for i := 0; i < sz; i++ {
ch := rune(str[i])
if !unicode.IsSpace(ch) {
chs.WriteRune(ch)
count++
}
}
if count == sz {
return str
}
return chs.String()
}
/*
IndexOfDifference compares two strings, and returns the index at which the strings begin to differ.
Parameters:
str1 - the first string
str2 - the second string
Returns:
the index where str1 and str2 begin to differ; -1 if they are equal
*/
func IndexOfDifference(str1 string, str2 string) int {
if str1 == str2 {
return INDEX_NOT_FOUND
}
if IsEmpty(str1) || IsEmpty(str2) {
return 0
}
var i int
for i = 0; i < len(str1) && i < len(str2); i++ {
if rune(str1[i]) != rune(str2[i]) {
break
}
}
if i < len(str2) || i < len(str1) {
return i
}
return INDEX_NOT_FOUND
}
/*
IsBlank checks if a string is whitespace or empty (""). Observe the following behavior:
goutils.IsBlank("") = true
goutils.IsBlank(" ") = true
goutils.IsBlank("bob") = false
goutils.IsBlank(" bob ") = false
Parameter:
str - the string to check
Returns:
true - if the string is whitespace or empty ("")
*/
func IsBlank(str string) bool {
strLen := len(str)
if str == "" || strLen == 0 {
return true
}
for i := 0; i < strLen; i++ {
if unicode.IsSpace(rune(str[i])) == false {
return false
}
}
return true
}
/*
IndexOf returns the index of the first instance of sub in str, with the search beginning from the
index start point specified. -1 is returned if sub is not present in str.
An empty string ("") will return -1 (INDEX_NOT_FOUND). A negative start position is treated as zero.
A start position greater than the string length returns -1.
Parameters:
str - the string to check
sub - the substring to find
start - the start position; negative treated as zero
Returns:
the first index where the sub string was found (always >= start)
*/
func IndexOf(str string, sub string, start int) int {
if start < 0 {
start = 0
}
if len(str) < start {
return INDEX_NOT_FOUND
}
if IsEmpty(str) || IsEmpty(sub) {
return INDEX_NOT_FOUND
}
partialIndex := strings.Index(str[start:len(str)], sub)
if partialIndex == -1 {
return INDEX_NOT_FOUND
}
return partialIndex + start
}
// IsEmpty checks if a string is empty (""). Returns true if empty, and false otherwise.
func IsEmpty(str string) bool {
return len(str) == 0
}

View File

@@ -1,357 +0,0 @@
/*
Copyright 2014 Alexander Okoli
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Package goutils provides utility functions to manipulate strings in various ways.
The code snippets below show examples of how to use goutils. Some functions return
errors while others do not, so usage would vary as a result.
Example:
package main
import (
"fmt"
"github.com/aokoli/goutils"
)
func main() {
// EXAMPLE 1: A goutils function which returns no errors
fmt.Println (goutils.Initials("John Doe Foo")) // Prints out "JDF"
// EXAMPLE 2: A goutils function which returns an error
rand1, err1 := goutils.Random (-1, 0, 0, true, true)
if err1 != nil {
fmt.Println(err1) // Prints out error message because -1 was entered as the first parameter in goutils.Random(...)
} else {
fmt.Println(rand1)
}
}
*/
package goutils
import (
"bytes"
"strings"
"unicode"
)
// VERSION indicates the current version of goutils
const VERSION = "1.0.0"
/*
Wrap wraps a single line of text, identifying words by ' '.
New lines will be separated by '\n'. Very long words, such as URLs will not be wrapped.
Leading spaces on a new line are stripped. Trailing spaces are not stripped.
Parameters:
str - the string to be word wrapped
wrapLength - the column (a column can fit only one character) to wrap the words at, less than 1 is treated as 1
Returns:
a line with newlines inserted
*/
func Wrap(str string, wrapLength int) string {
return WrapCustom(str, wrapLength, "", false)
}
/*
WrapCustom wraps a single line of text, identifying words by ' '.
Leading spaces on a new line are stripped. Trailing spaces are not stripped.
Parameters:
str - the string to be word wrapped
wrapLength - the column number (a column can fit only one character) to wrap the words at, less than 1 is treated as 1
newLineStr - the string to insert for a new line, "" uses '\n'
wrapLongWords - true if long words (such as URLs) should be wrapped
Returns:
a line with newlines inserted
*/
func WrapCustom(str string, wrapLength int, newLineStr string, wrapLongWords bool) string {
if str == "" {
return ""
}
if newLineStr == "" {
newLineStr = "\n" // TODO Assumes "\n" is separator. Explore SystemUtils.LINE_SEPARATOR from Apache Commons
}
if wrapLength < 1 {
wrapLength = 1
}
inputLineLength := len(str)
offset := 0
var wrappedLine bytes.Buffer
for inputLineLength-offset > wrapLength {
if rune(str[offset]) == ' ' {
offset++
continue
}
end := wrapLength + offset + 1
spaceToWrapAt := strings.LastIndex(str[offset:end], " ") + offset
if spaceToWrapAt >= offset {
// normal word (not longer than wrapLength)
wrappedLine.WriteString(str[offset:spaceToWrapAt])
wrappedLine.WriteString(newLineStr)
offset = spaceToWrapAt + 1
} else {
// long word or URL
if wrapLongWords {
end := wrapLength + offset
// long words are wrapped one line at a time
wrappedLine.WriteString(str[offset:end])
wrappedLine.WriteString(newLineStr)
offset += wrapLength
} else {
// long words aren't wrapped, just extended beyond limit
end := wrapLength + offset
index := strings.IndexRune(str[end:len(str)], ' ')
if index == -1 {
wrappedLine.WriteString(str[offset:len(str)])
offset = inputLineLength
} else {
spaceToWrapAt = index + end
wrappedLine.WriteString(str[offset:spaceToWrapAt])
wrappedLine.WriteString(newLineStr)
offset = spaceToWrapAt + 1
}
}
}
}
wrappedLine.WriteString(str[offset:len(str)])
return wrappedLine.String()
}
/*
Capitalize capitalizes all the delimiter separated words in a string. Only the first letter of each word is changed.
To convert the rest of each word to lowercase at the same time, use CapitalizeFully(str string, delimiters ...rune).
The delimiters represent a set of characters understood to separate words. The first string character
and the first non-delimiter character after a delimiter will be capitalized. A "" input string returns "".
Capitalization uses the Unicode title case, normally equivalent to upper case.
Parameters:
str - the string to capitalize
delimiters - set of characters to determine capitalization, exclusion of this parameter means whitespace would be the delimiter
Returns:
capitalized string
*/
func Capitalize(str string, delimiters ...rune) string {
var delimLen int
if delimiters == nil {
delimLen = -1
} else {
delimLen = len(delimiters)
}
if str == "" || delimLen == 0 {
return str
}
buffer := []rune(str)
capitalizeNext := true
for i := 0; i < len(buffer); i++ {
ch := buffer[i]
if isDelimiter(ch, delimiters...) {
capitalizeNext = true
} else if capitalizeNext {
buffer[i] = unicode.ToTitle(ch)
capitalizeNext = false
}
}
return string(buffer)
}
/*
CapitalizeFully converts all the delimiter separated words in a string into capitalized words, that is each word is made up of a
titlecase character and then a series of lowercase characters. The delimiters represent a set of characters understood
to separate words. The first string character and the first non-delimiter character after a delimiter will be capitalized.
Capitalization uses the Unicode title case, normally equivalent to upper case.
Parameters:
str - the string to capitalize fully
delimiters - set of characters to determine capitalization, exclusion of this parameter means whitespace would be the delimiter
Returns:
capitalized string
*/
func CapitalizeFully(str string, delimiters ...rune) string {
var delimLen int
if delimiters == nil {
delimLen = -1
} else {
delimLen = len(delimiters)
}
if str == "" || delimLen == 0 {
return str
}
str = strings.ToLower(str)
return Capitalize(str, delimiters...)
}
/*
Uncapitalize uncapitalizes all the whitespace separated words in a string. Only the first letter of each word is changed.
The delimiters represent a set of characters understood to separate words. The first string character and the first non-delimiter
character after a delimiter will be uncapitalized. Whitespace is defined by unicode.IsSpace(char).
Parameters:
str - the string to uncapitalize fully
delimiters - set of characters to determine capitalization, exclusion of this parameter means whitespace would be the delimiter
Returns:
uncapitalized string
*/
func Uncapitalize(str string, delimiters ...rune) string {
var delimLen int
if delimiters == nil {
delimLen = -1
} else {
delimLen = len(delimiters)
}
if str == "" || delimLen == 0 {
return str
}
buffer := []rune(str)
uncapitalizeNext := true // TODO Always makes capitalize/un apply to first char.
for i := 0; i < len(buffer); i++ {
ch := buffer[i]
if isDelimiter(ch, delimiters...) {
uncapitalizeNext = true
} else if uncapitalizeNext {
buffer[i] = unicode.ToLower(ch)
uncapitalizeNext = false
}
}
return string(buffer)
}
/*
SwapCase swaps the case of a string using a word based algorithm.
Conversion algorithm:
Upper case character converts to Lower case
Title case character converts to Lower case
Lower case character after Whitespace or at start converts to Title case
Other Lower case character converts to Upper case
Whitespace is defined by unicode.IsSpace(char).
Parameters:
str - the string to swap case
Returns:
the changed string
*/
func SwapCase(str string) string {
if str == "" {
return str
}
buffer := []rune(str)
whitespace := true
for i := 0; i < len(buffer); i++ {
ch := buffer[i]
if unicode.IsUpper(ch) {
buffer[i] = unicode.ToLower(ch)
whitespace = false
} else if unicode.IsTitle(ch) {
buffer[i] = unicode.ToLower(ch)
whitespace = false
} else if unicode.IsLower(ch) {
if whitespace {
buffer[i] = unicode.ToTitle(ch)
whitespace = false
} else {
buffer[i] = unicode.ToUpper(ch)
}
} else {
whitespace = unicode.IsSpace(ch)
}
}
return string(buffer)
}
/*
Initials extracts the initial letters from each word in the string. The first letter of the string and all first
letters after the defined delimiters are returned as a new string. Their case is not changed. If the delimiters
parameter is excluded, then Whitespace is used. Whitespace is defined by unicode.IsSpace(char). An empty delimiter array returns an empty string.
Parameters:
str - the string to get initials from
delimiters - set of characters to determine words, exclusion of this parameter means whitespace would be the delimiter
Returns:
string of initial letters
*/
func Initials(str string, delimiters ...rune) string {
if str == "" {
return str
}
if delimiters != nil && len(delimiters) == 0 {
return ""
}
strLen := len(str)
var buf bytes.Buffer
lastWasGap := true
for i := 0; i < strLen; i++ {
ch := rune(str[i])
if isDelimiter(ch, delimiters...) {
lastWasGap = true
} else if lastWasGap {
buf.WriteRune(ch)
lastWasGap = false
}
}
return buf.String()
}
// private function (lower case func name)
func isDelimiter(ch rune, delimiters ...rune) bool {
if delimiters == nil {
return unicode.IsSpace(ch)
}
for _, delimiter := range delimiters {
if ch == delimiter {
return true
}
}
return false
}

View File

@@ -1,29 +0,0 @@
language: go
go:
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- 1.12.x
- tip
# Setting sudo access to false will let Travis CI use containers rather than
# VMs to run the tests. For more details see:
# - http://docs.travis-ci.com/user/workers/container-based-infrastructure/
# - http://docs.travis-ci.com/user/workers/standard-infrastructure/
sudo: false
script:
- make setup
- make test
notifications:
webhooks:
urls:
- https://webhooks.gitter.im/e/06e3328629952dabe3e0
on_success: change # options: [always|never|change] default: always
on_failure: always # options: [always|never|change] default: always
on_start: never # options: [always|never|change] default: always

View File

@@ -1,109 +0,0 @@
# 1.5.0 (2019-09-11)
## Added
- #103: Add basic fuzzing for `NewVersion()` (thanks @jesse-c)
## Changed
- #82: Clarify wildcard meaning in range constraints and update tests for it (thanks @greysteil)
- #83: Clarify caret operator range for pre-1.0.0 dependencies (thanks @greysteil)
- #72: Adding docs comment pointing to vert for a cli
- #71: Update the docs on pre-release comparator handling
- #89: Test with new go versions (thanks @thedevsaddam)
- #87: Added $ to ValidPrerelease for better validation (thanks @jeremycarroll)
## Fixed
- #78: Fix unchecked error in example code (thanks @ravron)
- #70: Fix the handling of pre-releases and the 0.0.0 release edge case
- #97: Fixed copyright file for proper display on GitHub
- #107: Fix handling prerelease when sorting alphanum and num
- #109: Fixed where Validate sometimes returns wrong message on error
# 1.4.2 (2018-04-10)
## Changed
- #72: Updated the docs to point to vert for a console application
- #71: Update the docs on pre-release comparator handling
## Fixed
- #70: Fix the handling of pre-releases and the 0.0.0 release edge case
# 1.4.1 (2018-04-02)
## Fixed
- Fixed #64: Fix pre-release precedence issue (thanks @uudashr)
# 1.4.0 (2017-10-04)
## Changed
- #61: Update NewVersion to parse ints with a 64bit int size (thanks @zknill)
# 1.3.1 (2017-07-10)
## Fixed
- Fixed #57: number comparisons in prerelease sometimes inaccurate
# 1.3.0 (2017-05-02)
## Added
- #45: Added json (un)marshaling support (thanks @mh-cbon)
- Stability marker. See https://masterminds.github.io/stability/
## Fixed
- #51: Fix handling of single digit tilde constraint (thanks @dgodd)
## Changed
- #55: The godoc icon moved from png to svg
# 1.2.3 (2017-04-03)
## Fixed
- #46: Fixed 0.x.x and 0.0.x in constraints being treated as *
# Release 1.2.2 (2016-12-13)
## Fixed
- #34: Fixed issue where hyphen range was not working with pre-release parsing.
# Release 1.2.1 (2016-11-28)
## Fixed
- #24: Fixed edge case issue where constraint "> 0" does not handle "0.0.1-alpha"
properly.
# Release 1.2.0 (2016-11-04)
## Added
- #20: Added MustParse function for versions (thanks @adamreese)
- #15: Added increment methods on versions (thanks @mh-cbon)
## Fixed
- Issue #21: Per the SemVer spec (section 9) a pre-release is unstable and
might not satisfy the intended compatibility. The change here ignores pre-releases
on constraint checks (e.g., ~ or ^) when a pre-release is not part of the
constraint. For example, `^1.2.3` will ignore pre-releases while
`^1.2.3-alpha` will include them.
# Release 1.1.1 (2016-06-30)
## Changed
- Issue #9: Speed up version comparison performance (thanks @sdboyer)
- Issue #8: Added benchmarks (thanks @sdboyer)
- Updated Go Report Card URL to new location
- Updated Readme to add code snippet formatting (thanks @mh-cbon)
- Updating tagging to v[SemVer] structure for compatibility with other tools.
# Release 1.1.0 (2016-03-11)
- Issue #2: Implemented validation to provide reasons a versions failed a
constraint.
# Release 1.0.1 (2015-12-31)
- Fixed #1: * constraint failing on valid versions.
# Release 1.0.0 (2015-10-20)
- Initial release

View File

@@ -1,19 +0,0 @@
Copyright (C) 2014-2019, Matt Butcher and Matt Farina
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -1,36 +0,0 @@
.PHONY: setup
setup:
go get -u gopkg.in/alecthomas/gometalinter.v1
gometalinter.v1 --install
.PHONY: test
test: validate lint
@echo "==> Running tests"
go test -v
.PHONY: validate
validate:
@echo "==> Running static validations"
@gometalinter.v1 \
--disable-all \
--enable deadcode \
--severity deadcode:error \
--enable gofmt \
--enable gosimple \
--enable ineffassign \
--enable misspell \
--enable vet \
--tests \
--vendor \
--deadline 60s \
./... || exit_code=1
.PHONY: lint
lint:
@echo "==> Running linters"
@gometalinter.v1 \
--disable-all \
--enable golint \
--vendor \
--deadline 60s \
./... || :

View File

@@ -1,194 +0,0 @@
# SemVer
The `semver` package provides the ability to work with [Semantic Versions](http://semver.org) in Go. Specifically it provides the ability to:
* Parse semantic versions
* Sort semantic versions
* Check if a semantic version fits within a set of constraints
* Optionally work with a `v` prefix
[![Stability:
Active](https://masterminds.github.io/stability/active.svg)](https://masterminds.github.io/stability/active.html)
[![Build Status](https://travis-ci.org/Masterminds/semver.svg)](https://travis-ci.org/Masterminds/semver) [![Build status](https://ci.appveyor.com/api/projects/status/jfk66lib7hb985k8/branch/master?svg=true&passingText=windows%20build%20passing&failingText=windows%20build%20failing)](https://ci.appveyor.com/project/mattfarina/semver/branch/master) [![GoDoc](https://godoc.org/github.com/Masterminds/semver?status.svg)](https://godoc.org/github.com/Masterminds/semver) [![Go Report Card](https://goreportcard.com/badge/github.com/Masterminds/semver)](https://goreportcard.com/report/github.com/Masterminds/semver)
If you are looking for a command line tool for version comparisons please see
[vert](https://github.com/Masterminds/vert) which uses this library.
## Parsing Semantic Versions
To parse a semantic version use the `NewVersion` function. For example,
```go
v, err := semver.NewVersion("1.2.3-beta.1+build345")
```
If there is an error, the version wasn't parseable. The version object has methods
to get the parts of the version, compare it to other versions, convert the
version back into a string, and get the original string. For more details
please see the [documentation](https://godoc.org/github.com/Masterminds/semver).
## Sorting Semantic Versions
A set of versions can be sorted using the [`sort`](https://golang.org/pkg/sort/)
package from the standard library. For example,
```go
raw := []string{"1.2.3", "1.0", "1.3", "2", "0.4.2",}
vs := make([]*semver.Version, len(raw))
for i, r := range raw {
v, err := semver.NewVersion(r)
if err != nil {
t.Errorf("Error parsing version: %s", err)
}
vs[i] = v
}
sort.Sort(semver.Collection(vs))
```
## Checking Version Constraints
Checking a version against version constraints is one of the most featureful
parts of the package.
```go
c, err := semver.NewConstraint(">= 1.2.3")
if err != nil {
// Handle constraint not being parseable.
}
v, err := semver.NewVersion("1.3")
if err != nil {
// Handle version not being parseable.
}
// Check if the version meets the constraints. The variable a will be true.
a := c.Check(v)
```
## Basic Comparisons
There are two elements to the comparisons. First, a comparison string is a list
of comma-separated AND comparisons. These groups are then joined by `||` into OR
comparisons. For example, `">= 1.2, < 3.0.0 || >= 4.2.3"` matches a version that is
greater than or equal to 1.2 and less than 3.0.0, or one that is greater than or
equal to 4.2.3.
The basic comparisons are:
* `=`: equal (aliased to no operator)
* `!=`: not equal
* `>`: greater than
* `<`: less than
* `>=`: greater than or equal to
* `<=`: less than or equal to
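As a small sketch added for illustration (it is not part of the original README), the compound constraint from the paragraph above can be checked directly. The version strings are arbitrary examples, and parse errors are elided in the same abbreviated style as the other snippets here.

```go
c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
if err != nil {
	// Handle constraint not being parseable.
}

v1, _ := semver.NewVersion("2.5.0") // matches the first OR group
v2, _ := semver.NewVersion("3.1.0") // matches neither group

fmt.Println(c.Check(v1)) // true
fmt.Println(c.Check(v2)) // false
```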
## Working With Pre-release Versions
Pre-releases, for those not familiar with them, are used for software releases
prior to stable or generally available releases. Examples of pre-releases include
development, alpha, beta, and release candidate releases. A pre-release may be
a version such as `1.2.3-beta.1` while the stable release would be `1.2.3`. In the
order of precedence, pre-releases come before their associated releases. In this
example `1.2.3-beta.1 < 1.2.3`.
According to the Semantic Version specification pre-releases may not be
API compliant with their release counterpart. It says,
> A pre-release version indicates that the version is unstable and might not satisfy the intended compatibility requirements as denoted by its associated normal version.
SemVer comparisons without a pre-release comparator will skip pre-release versions.
For example, `>=1.2.3` will skip pre-releases when looking at a list of releases
while `>=1.2.3-0` will evaluate and find pre-releases.
The reason for the `0` as a pre-release version in the example comparison is
because pre-releases can only contain ASCII alphanumerics and hyphens (along with
`.` separators), per the spec. Sorting happens in ASCII sort order, again per the spec. The lowest character is a `0` in ASCII sort order (see an [ASCII Table](http://www.asciitable.com/))
Understanding ASCII sort ordering is important because A-Z comes before a-z. That
means `>=1.2.3-BETA` will return `1.2.3-alpha`. What you might expect from case
sensitivity doesn't apply here. This is due to ASCII sort ordering which is what
the spec specifies.
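To make this concrete, here is a short illustrative sketch (not from the original text); the constraint and version strings are arbitrary examples.

```go
v, _ := semver.NewVersion("1.3.0-beta.1")

stable, _ := semver.NewConstraint(">= 1.2.3")
withPre, _ := semver.NewConstraint(">= 1.2.3-0")

fmt.Println(stable.Check(v))  // false: pre-releases are skipped
fmt.Println(withPre.Check(v)) // true: the pre-release comparator opts in
```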
## Hyphen Range Comparisons
There are multiple methods to handle ranges and the first is hyphens ranges.
These look like:
* `1.2 - 1.4.5` which is equivalent to `>= 1.2, <= 1.4.5`
* `2.3.4 - 4.5` which is equivalent to `>= 2.3.4, <= 4.5`
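For illustration only (not in the original README), the first range above could be exercised like this, with an arbitrary example version:

```go
c, _ := semver.NewConstraint("1.2 - 1.4.5")
v, _ := semver.NewVersion("1.3.7")
fmt.Println(c.Check(v)) // true: behaves like ">= 1.2, <= 1.4.5"
```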
## Wildcards In Comparisons
The `x`, `X`, and `*` characters can be used as a wildcard character. This works
for all comparison operators. When used on the `=` operator it falls
back to the pack level comparison (see tilde below). For example,
* `1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
* `>= 1.2.x` is equivalent to `>= 1.2.0`
* `<= 2.x` is equivalent to `< 3`
* `*` is equivalent to `>= 0.0.0`
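A brief, purely illustrative sketch (not part of the original text) of the `1.2.x` wildcard; the versions are arbitrary examples.

```go
c, _ := semver.NewConstraint("1.2.x")
v1, _ := semver.NewVersion("1.2.9")
v2, _ := semver.NewVersion("1.3.0")
fmt.Println(c.Check(v1)) // true:  within ">= 1.2.0, < 1.3.0"
fmt.Println(c.Check(v2)) // false: outside the wildcard range
```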
## Tilde Range Comparisons (Patch)
The tilde (`~`) comparison operator is for patch level ranges when a minor
version is specified and major level changes when the minor number is missing.
For example,
* `~1.2.3` is equivalent to `>= 1.2.3, < 1.3.0`
* `~1` is equivalent to `>= 1, < 2`
* `~2.3` is equivalent to `>= 2.3, < 2.4`
* `~1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
* `~1.x` is equivalent to `>= 1, < 2`
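Again as an illustrative sketch rather than part of the original README, a tilde constraint behaves like the patch-level range listed above.

```go
c, _ := semver.NewConstraint("~1.2.3")
v1, _ := semver.NewVersion("1.2.9")
v2, _ := semver.NewVersion("1.3.0")
fmt.Println(c.Check(v1)) // true:  within ">= 1.2.3, < 1.3.0"
fmt.Println(c.Check(v2)) // false: the minor version changed
```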
## Caret Range Comparisons (Major)
The caret (`^`) comparison operator is for major level changes. This is useful
when comparing API versions, as a major change is API breaking. For example,
* `^1.2.3` is equivalent to `>= 1.2.3, < 2.0.0`
* `^0.0.1` is equivalent to `>= 0.0.1, < 1.0.0`
* `^1.2.x` is equivalent to `>= 1.2.0, < 2.0.0`
* `^2.3` is equivalent to `>= 2.3, < 3`
* `^2.x` is equivalent to `>= 2.0.0, < 3`
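And a final illustrative sketch (not from the original text) for the caret operator, with arbitrary example versions.

```go
c, _ := semver.NewConstraint("^1.2.3")
v1, _ := semver.NewVersion("1.9.0")
v2, _ := semver.NewVersion("2.0.0")
fmt.Println(c.Check(v1)) // true:  within ">= 1.2.3, < 2.0.0"
fmt.Println(c.Check(v2)) // false: a new major version is treated as breaking
```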
## Validation
In addition to testing a version against a constraint, a version can be validated
against a constraint. When validation fails a slice of errors containing why a
version didn't meet the constraint is returned. For example,
```go
c, err := semver.NewConstraint("<= 1.2.3, >= 1.4")
if err != nil {
// Handle constraint not being parseable.
}
v, err := semver.NewVersion("1.3")
if err != nil {
// Handle version not being parseable.
}
// Validate a version against a constraint.
a, msgs := c.Validate(v)
// a is false
for _, m := range msgs {
fmt.Println(m)
// Loops over the errors which would read
// "1.3 is greater than 1.2.3"
// "1.3 is less than 1.4"
}
```
## Fuzzing
[dvyukov/go-fuzz](https://github.com/dvyukov/go-fuzz) is used for fuzzing.
1. `go-fuzz-build`
2. `go-fuzz -workdir=fuzz`
## Contribute
If you find an issue or want to contribute please file an [issue](https://github.com/Masterminds/semver/issues)
or [create a pull request](https://github.com/Masterminds/semver/pulls).

View File

@@ -1,44 +0,0 @@
version: build-{build}.{branch}
clone_folder: C:\gopath\src\github.com\Masterminds\semver
shallow_clone: true
environment:
GOPATH: C:\gopath
platform:
- x64
install:
- go version
- go env
- go get -u gopkg.in/alecthomas/gometalinter.v1
- set PATH=%PATH%;%GOPATH%\bin
- gometalinter.v1.exe --install
build_script:
- go install -v ./...
test_script:
- "gometalinter.v1 \
--disable-all \
--enable deadcode \
--severity deadcode:error \
--enable gofmt \
--enable gosimple \
--enable ineffassign \
--enable misspell \
--enable vet \
--tests \
--vendor \
--deadline 60s \
./... || exit_code=1"
- "gometalinter.v1 \
--disable-all \
--enable golint \
--vendor \
--deadline 60s \
./... || :"
- go test -v
deploy: off

View File

@@ -1,24 +0,0 @@
package semver
// Collection is a collection of Version instances and implements the sort
// interface. See the sort package for more details.
// https://golang.org/pkg/sort/
type Collection []*Version
// Len returns the length of a collection. The number of Version instances
// on the slice.
func (c Collection) Len() int {
return len(c)
}
// Less is needed for the sort interface to compare two Version objects on the
// slice. If checks if one is less than the other.
func (c Collection) Less(i, j int) bool {
return c[i].LessThan(c[j])
}
// Swap is needed for the sort interface to replace the Version objects
// at two different positions in the slice.
func (c Collection) Swap(i, j int) {
c[i], c[j] = c[j], c[i]
}

View File

@@ -1,423 +0,0 @@
package semver
import (
"errors"
"fmt"
"regexp"
"strings"
)
// Constraints is one or more constraint that a semantic version can be
// checked against.
type Constraints struct {
constraints [][]*constraint
}
// NewConstraint returns a Constraints instance that a Version instance can
// be checked against. If there is a parse error it will be returned.
func NewConstraint(c string) (*Constraints, error) {
// Rewrite - ranges into a comparison operation.
c = rewriteRange(c)
ors := strings.Split(c, "||")
or := make([][]*constraint, len(ors))
for k, v := range ors {
cs := strings.Split(v, ",")
result := make([]*constraint, len(cs))
for i, s := range cs {
pc, err := parseConstraint(s)
if err != nil {
return nil, err
}
result[i] = pc
}
or[k] = result
}
o := &Constraints{constraints: or}
return o, nil
}
// Check tests if a version satisfies the constraints.
func (cs Constraints) Check(v *Version) bool {
// loop over the ORs and check the inner ANDs
for _, o := range cs.constraints {
joy := true
for _, c := range o {
if !c.check(v) {
joy = false
break
}
}
if joy {
return true
}
}
return false
}
// Validate checks if a version satisfies a constraint. If not a slice of
// reasons for the failure are returned in addition to a bool.
func (cs Constraints) Validate(v *Version) (bool, []error) {
// loop over the ORs and check the inner ANDs
var e []error
// Capture the prerelease message only once. When it happens the first time,
// this var is set so the message is not repeated.
var prerelesase bool
for _, o := range cs.constraints {
joy := true
for _, c := range o {
// Before running the check, handle the case where the version is
// a prerelease and the check is not searching for prereleases.
if c.con.pre == "" && v.pre != "" {
if !prerelesase {
em := fmt.Errorf("%s is a prerelease version and the constraint is only looking for release versions", v)
e = append(e, em)
prerelesase = true
}
joy = false
} else {
if !c.check(v) {
em := fmt.Errorf(c.msg, v, c.orig)
e = append(e, em)
joy = false
}
}
}
if joy {
return true, []error{}
}
}
return false, e
}
var constraintOps map[string]cfunc
var constraintMsg map[string]string
var constraintRegex *regexp.Regexp
func init() {
constraintOps = map[string]cfunc{
"": constraintTildeOrEqual,
"=": constraintTildeOrEqual,
"!=": constraintNotEqual,
">": constraintGreaterThan,
"<": constraintLessThan,
">=": constraintGreaterThanEqual,
"=>": constraintGreaterThanEqual,
"<=": constraintLessThanEqual,
"=<": constraintLessThanEqual,
"~": constraintTilde,
"~>": constraintTilde,
"^": constraintCaret,
}
constraintMsg = map[string]string{
"": "%s is not equal to %s",
"=": "%s is not equal to %s",
"!=": "%s is equal to %s",
">": "%s is less than or equal to %s",
"<": "%s is greater than or equal to %s",
">=": "%s is less than %s",
"=>": "%s is less than %s",
"<=": "%s is greater than %s",
"=<": "%s is greater than %s",
"~": "%s does not have same major and minor version as %s",
"~>": "%s does not have same major and minor version as %s",
"^": "%s does not have same major version as %s",
}
ops := make([]string, 0, len(constraintOps))
for k := range constraintOps {
ops = append(ops, regexp.QuoteMeta(k))
}
constraintRegex = regexp.MustCompile(fmt.Sprintf(
`^\s*(%s)\s*(%s)\s*$`,
strings.Join(ops, "|"),
cvRegex))
constraintRangeRegex = regexp.MustCompile(fmt.Sprintf(
`\s*(%s)\s+-\s+(%s)\s*`,
cvRegex, cvRegex))
}
// An individual constraint
type constraint struct {
// The callback function for the constraint. It performs the logic for
// the constraint.
function cfunc
msg string
// The version used in the constraint check. For example, if a constraint
// is '<= 2.0.0', con holds a Version instance representing 2.0.0.
con *Version
// The original parsed version (e.g., 4.x from != 4.x)
orig string
// When an x is used as part of the version (e.g., 1.x)
minorDirty bool
dirty bool
patchDirty bool
}
// Check if a version meets the constraint
func (c *constraint) check(v *Version) bool {
return c.function(v, c)
}
type cfunc func(v *Version, c *constraint) bool
func parseConstraint(c string) (*constraint, error) {
m := constraintRegex.FindStringSubmatch(c)
if m == nil {
return nil, fmt.Errorf("improper constraint: %s", c)
}
ver := m[2]
orig := ver
minorDirty := false
patchDirty := false
dirty := false
if isX(m[3]) {
ver = "0.0.0"
dirty = true
} else if isX(strings.TrimPrefix(m[4], ".")) || m[4] == "" {
minorDirty = true
dirty = true
ver = fmt.Sprintf("%s.0.0%s", m[3], m[6])
} else if isX(strings.TrimPrefix(m[5], ".")) {
dirty = true
patchDirty = true
ver = fmt.Sprintf("%s%s.0%s", m[3], m[4], m[6])
}
con, err := NewVersion(ver)
if err != nil {
// The constraintRegex should already have rejected any invalid version. So,
// we should never get here.
return nil, errors.New("constraint Parser Error")
}
cs := &constraint{
function: constraintOps[m[1]],
msg: constraintMsg[m[1]],
con: con,
orig: orig,
minorDirty: minorDirty,
patchDirty: patchDirty,
dirty: dirty,
}
return cs, nil
}
// Constraint functions
func constraintNotEqual(v *Version, c *constraint) bool {
if c.dirty {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
if c.con.Major() != v.Major() {
return true
}
if c.con.Minor() != v.Minor() && !c.minorDirty {
return true
} else if c.minorDirty {
return false
}
return false
}
return !v.Equal(c.con)
}
func constraintGreaterThan(v *Version, c *constraint) bool {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
return v.Compare(c.con) == 1
}
func constraintLessThan(v *Version, c *constraint) bool {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
if !c.dirty {
return v.Compare(c.con) < 0
}
if v.Major() > c.con.Major() {
return false
} else if v.Minor() > c.con.Minor() && !c.minorDirty {
return false
}
return true
}
func constraintGreaterThanEqual(v *Version, c *constraint) bool {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
return v.Compare(c.con) >= 0
}
func constraintLessThanEqual(v *Version, c *constraint) bool {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
if !c.dirty {
return v.Compare(c.con) <= 0
}
if v.Major() > c.con.Major() {
return false
} else if v.Minor() > c.con.Minor() && !c.minorDirty {
return false
}
return true
}
// ~*, ~>* --> >= 0.0.0 (any)
// ~2, ~2.x, ~2.x.x, ~>2, ~>2.x ~>2.x.x --> >=2.0.0, <3.0.0
// ~2.0, ~2.0.x, ~>2.0, ~>2.0.x --> >=2.0.0, <2.1.0
// ~1.2, ~1.2.x, ~>1.2, ~>1.2.x --> >=1.2.0, <1.3.0
// ~1.2.3, ~>1.2.3 --> >=1.2.3, <1.3.0
// ~1.2.0, ~>1.2.0 --> >=1.2.0, <1.3.0
func constraintTilde(v *Version, c *constraint) bool {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
if v.LessThan(c.con) {
return false
}
// ~0.0.0 is a special case where all versions are accepted. It's
// equivalent to >= 0.0.0.
if c.con.Major() == 0 && c.con.Minor() == 0 && c.con.Patch() == 0 &&
!c.minorDirty && !c.patchDirty {
return true
}
if v.Major() != c.con.Major() {
return false
}
if v.Minor() != c.con.Minor() && !c.minorDirty {
return false
}
return true
}
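
A small sketch of the tilde ranges listed above, exercised through the public constraint API (import path assumed as elsewhere in this package):

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	patchRange, _ := semver.NewConstraint("~1.2.3") // >= 1.2.3, < 1.3.0
	fmt.Println(patchRange.Check(semver.MustParse("1.2.9"))) // true
	fmt.Println(patchRange.Check(semver.MustParse("1.3.0"))) // false

	minorRange, _ := semver.NewConstraint("~2.3") // >= 2.3, < 2.4
	fmt.Println(minorRange.Check(semver.MustParse("2.3.7"))) // true
	fmt.Println(minorRange.Check(semver.MustParse("2.4.0"))) // false
}
```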
// When there is a .x (dirty) status it automatically opts in to ~. Otherwise
// it's a straight =
func constraintTildeOrEqual(v *Version, c *constraint) bool {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
if c.dirty {
c.msg = constraintMsg["~"]
return constraintTilde(v, c)
}
return v.Equal(c.con)
}
// ^* --> (any)
// ^2, ^2.x, ^2.x.x --> >=2.0.0, <3.0.0
// ^2.0, ^2.0.x --> >=2.0.0, <3.0.0
// ^1.2, ^1.2.x --> >=1.2.0, <2.0.0
// ^1.2.3 --> >=1.2.3, <2.0.0
// ^1.2.0 --> >=1.2.0, <2.0.0
func constraintCaret(v *Version, c *constraint) bool {
// If there is a pre-release on the version but the constraint isn't looking
// for them assume that pre-releases are not compatible. See issue 21 for
// more details.
if v.Prerelease() != "" && c.con.Prerelease() == "" {
return false
}
if v.LessThan(c.con) {
return false
}
if v.Major() != c.con.Major() {
return false
}
return true
}
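
Likewise, a brief sketch of the caret ranges listed above:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	caret, _ := semver.NewConstraint("^1.2.3") // >= 1.2.3, < 2.0.0
	fmt.Println(caret.Check(semver.MustParse("1.9.0"))) // true
	fmt.Println(caret.Check(semver.MustParse("1.2.2"))) // false, below the lower bound
	fmt.Println(caret.Check(semver.MustParse("2.0.0"))) // false, major version changed
}
```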
var constraintRangeRegex *regexp.Regexp
const cvRegex string = `v?([0-9|x|X|\*]+)(\.[0-9|x|X|\*]+)?(\.[0-9|x|X|\*]+)?` +
`(-([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?` +
`(\+([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?`
func isX(x string) bool {
switch x {
case "x", "*", "X":
return true
default:
return false
}
}
func rewriteRange(i string) string {
m := constraintRangeRegex.FindAllStringSubmatch(i, -1)
if m == nil {
return i
}
o := i
for _, v := range m {
t := fmt.Sprintf(">= %s, <= %s", v[1], v[11])
o = strings.Replace(o, v[0], t, 1)
}
return o
}


@@ -1,115 +0,0 @@
/*
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go.
Specifically it provides the ability to:
* Parse semantic versions
* Sort semantic versions
* Check if a semantic version fits within a set of constraints
* Optionally work with a `v` prefix
Parsing Semantic Versions
To parse a semantic version use the `NewVersion` function. For example,
v, err := semver.NewVersion("1.2.3-beta.1+build345")
If there is an error, the version wasn't parseable. The version object has methods
to get the parts of the version, compare it to other versions, convert the
version back into a string, and get the original string. For more details
please see the documentation at https://godoc.org/github.com/Masterminds/semver.
Sorting Semantic Versions
A set of versions can be sorted using the `sort` package from the standard library.
For example,
raw := []string{"1.2.3", "1.0", "1.3", "2", "0.4.2",}
vs := make([]*semver.Version, len(raw))
for i, r := range raw {
v, err := semver.NewVersion(r)
if err != nil {
t.Errorf("Error parsing version: %s", err)
}
vs[i] = v
}
sort.Sort(semver.Collection(vs))
Checking Version Constraints
Checking a version against version constraints is one of the most featureful
parts of the package.
c, err := semver.NewConstraint(">= 1.2.3")
if err != nil {
// Handle constraint not being parseable.
}
v, err := semver.NewVersion("1.3")
if err != nil {
// Handle version not being parseable.
}
// Check if the version meets the constraints. The variable a will be true.
a := c.Check(v)
Basic Comparisons
There are two elements to the comparisons. First, a comparison string is a list
of comma-separated AND comparisons. These can then be combined with || into OR
comparisons. For example, `">= 1.2, < 3.0.0 || >= 4.2.3"` is looking for a
version that is greater than or equal to 1.2 and less than 3.0.0, or is
greater than or equal to 4.2.3.
The basic comparisons are:
* `=`: equal (aliased to no operator)
* `!=`: not equal
* `>`: greater than
* `<`: less than
* `>=`: greater than or equal to
* `<=`: less than or equal to
Hyphen Range Comparisons
There are multiple methods to handle ranges and the first is hyphen ranges.
These look like:
* `1.2 - 1.4.5` which is equivalent to `>= 1.2, <= 1.4.5`
* `2.3.4 - 4.5` which is equivalent to `>= 2.3.4, <= 4.5`
Wildcards In Comparisons
The `x`, `X`, and `*` characters can be used as a wildcard character. This works
for all comparison operators. When used on the `=` operator it falls
back to the patch level comparison (see tilde below). For example,
* `1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
* `>= 1.2.x` is equivalent to `>= 1.2.0`
* `<= 2.x` is equivalent to `< 3`
* `*` is equivalent to `>= 0.0.0`
Tilde Range Comparisons (Patch)
The tilde (`~`) comparison operator is for patch level ranges when a minor
version is specified and major level changes when the minor number is missing.
For example,
* `~1.2.3` is equivalent to `>= 1.2.3, < 1.3.0`
* `~1` is equivalent to `>= 1, < 2`
* `~2.3` is equivalent to `>= 2.3, < 2.4`
* `~1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`
* `~1.x` is equivalent to `>= 1, < 2`
Caret Range Comparisons (Major)
The caret (`^`) comparison operator is for major level changes. This is useful
when comparing API versions, as a major change is API breaking. For example,
* `^1.2.3` is equivalent to `>= 1.2.3, < 2.0.0`
* `^1.2.x` is equivalent to `>= 1.2.0, < 2.0.0`
* `^2.3` is equivalent to `>= 2.3, < 3`
* `^2.x` is equivalent to `>= 2.0.0, < 3`
*/
package semver
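
To round out the hyphen-range and wildcard sections of the documentation above, a minimal sketch (assuming the same import path the docs reference):

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	hyphen, _ := semver.NewConstraint("1.2 - 1.4.5") // rewritten to >= 1.2, <= 1.4.5
	fmt.Println(hyphen.Check(semver.MustParse("1.3.0"))) // true
	fmt.Println(hyphen.Check(semver.MustParse("1.5.0"))) // false

	wildcard, _ := semver.NewConstraint("1.2.x") // >= 1.2.0, < 1.3.0
	fmt.Println(wildcard.Check(semver.MustParse("1.2.7"))) // true
	fmt.Println(wildcard.Check(semver.MustParse("1.3.0"))) // false
}
```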


@@ -1,425 +0,0 @@
package semver
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"regexp"
"strconv"
"strings"
)
// The compiled version of the regex created at init() is cached here so it
// only needs to be created once.
var versionRegex *regexp.Regexp
var validPrereleaseRegex *regexp.Regexp
var (
// ErrInvalidSemVer is returned when a version is found to be invalid during
// parsing.
ErrInvalidSemVer = errors.New("Invalid Semantic Version")
// ErrInvalidMetadata is returned when the metadata is an invalid format
ErrInvalidMetadata = errors.New("Invalid Metadata string")
// ErrInvalidPrerelease is returned when the pre-release is an invalid format
ErrInvalidPrerelease = errors.New("Invalid Prerelease string")
)
// SemVerRegex is the regular expression used to parse a semantic version.
const SemVerRegex string = `v?([0-9]+)(\.[0-9]+)?(\.[0-9]+)?` +
`(-([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?` +
`(\+([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?`
// ValidPrerelease is the regular expression which validates
// both prerelease and metadata values.
const ValidPrerelease string = `^([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*)$`
// Version represents a single semantic version.
type Version struct {
major, minor, patch int64
pre string
metadata string
original string
}
func init() {
versionRegex = regexp.MustCompile("^" + SemVerRegex + "$")
validPrereleaseRegex = regexp.MustCompile(ValidPrerelease)
}
// NewVersion parses a given version and returns an instance of Version or
// an error if unable to parse the version.
func NewVersion(v string) (*Version, error) {
m := versionRegex.FindStringSubmatch(v)
if m == nil {
return nil, ErrInvalidSemVer
}
sv := &Version{
metadata: m[8],
pre: m[5],
original: v,
}
var temp int64
temp, err := strconv.ParseInt(m[1], 10, 64)
if err != nil {
return nil, fmt.Errorf("Error parsing version segment: %s", err)
}
sv.major = temp
if m[2] != "" {
temp, err = strconv.ParseInt(strings.TrimPrefix(m[2], "."), 10, 64)
if err != nil {
return nil, fmt.Errorf("Error parsing version segment: %s", err)
}
sv.minor = temp
} else {
sv.minor = 0
}
if m[3] != "" {
temp, err = strconv.ParseInt(strings.TrimPrefix(m[3], "."), 10, 64)
if err != nil {
return nil, fmt.Errorf("Error parsing version segment: %s", err)
}
sv.patch = temp
} else {
sv.patch = 0
}
return sv, nil
}
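
A minimal usage sketch for NewVersion and the accessor methods defined further down in this file:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	v, err := semver.NewVersion("v1.2.3-beta.1+build345")
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Major(), v.Minor(), v.Patch()) // 1 2 3
	fmt.Println(v.Prerelease())                  // beta.1
	fmt.Println(v.Metadata())                    // build345
	fmt.Println(v.String())                      // 1.2.3-beta.1+build345 (leading v dropped)
	fmt.Println(v.Original())                    // v1.2.3-beta.1+build345
}
```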
// MustParse parses a given version and panics on error.
func MustParse(v string) *Version {
sv, err := NewVersion(v)
if err != nil {
panic(err)
}
return sv
}
// String converts a Version object to a string.
// Note, if the original version contained a leading v this version will not.
// See the Original() method to retrieve the original value. Semantic Versions
// don't contain a leading v per the spec. Instead it's optional on
// implementation.
func (v *Version) String() string {
var buf bytes.Buffer
fmt.Fprintf(&buf, "%d.%d.%d", v.major, v.minor, v.patch)
if v.pre != "" {
fmt.Fprintf(&buf, "-%s", v.pre)
}
if v.metadata != "" {
fmt.Fprintf(&buf, "+%s", v.metadata)
}
return buf.String()
}
// Original returns the original value passed in to be parsed.
func (v *Version) Original() string {
return v.original
}
// Major returns the major version.
func (v *Version) Major() int64 {
return v.major
}
// Minor returns the minor version.
func (v *Version) Minor() int64 {
return v.minor
}
// Patch returns the patch version.
func (v *Version) Patch() int64 {
return v.patch
}
// Prerelease returns the pre-release version.
func (v *Version) Prerelease() string {
return v.pre
}
// Metadata returns the metadata on the version.
func (v *Version) Metadata() string {
return v.metadata
}
// originalVPrefix returns the original 'v' prefix if any.
func (v *Version) originalVPrefix() string {
// Note, only lowercase v is supported as a prefix by the parser.
if v.original != "" && v.original[:1] == "v" {
return v.original[:1]
}
return ""
}
// IncPatch produces the next patch version.
// If the current version does not have prerelease/metadata information,
// it unsets metadata and prerelease values, increments patch number.
// If the current version has any of prerelease or metadata information,
// it unsets both values and keeps the current patch value
func (v Version) IncPatch() Version {
vNext := v
// according to http://semver.org/#spec-item-9
// Pre-release versions have a lower precedence than the associated normal version.
// according to http://semver.org/#spec-item-10
// Build metadata SHOULD be ignored when determining version precedence.
if v.pre != "" {
vNext.metadata = ""
vNext.pre = ""
} else {
vNext.metadata = ""
vNext.pre = ""
vNext.patch = v.patch + 1
}
vNext.original = v.originalVPrefix() + "" + vNext.String()
return vNext
}
// IncMinor produces the next minor version.
// Sets patch to 0.
// Increments minor number.
// Unsets metadata.
// Unsets prerelease status.
func (v Version) IncMinor() Version {
vNext := v
vNext.metadata = ""
vNext.pre = ""
vNext.patch = 0
vNext.minor = v.minor + 1
vNext.original = v.originalVPrefix() + "" + vNext.String()
return vNext
}
// IncMajor produces the next major version.
// Sets patch to 0.
// Sets minor to 0.
// Increments major number.
// Unsets metadata.
// Unsets prerelease status.
func (v Version) IncMajor() Version {
vNext := v
vNext.metadata = ""
vNext.pre = ""
vNext.patch = 0
vNext.minor = 0
vNext.major = v.major + 1
vNext.original = v.originalVPrefix() + "" + vNext.String()
return vNext
}
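
A short sketch of the three increment methods above; note that they return Version values, so the results are assigned before calling String:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	v := semver.MustParse("1.2.3-beta.1+build345")

	patch := v.IncPatch() // prerelease present, so only prerelease/metadata are stripped
	minor := v.IncMinor() // resets patch, strips prerelease/metadata
	major := v.IncMajor() // resets minor and patch, strips prerelease/metadata

	fmt.Println(patch.String()) // 1.2.3
	fmt.Println(minor.String()) // 1.3.0
	fmt.Println(major.String()) // 2.0.0
}
```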
// SetPrerelease defines the prerelease value.
// Value must not include the required 'hyphen' prefix.
func (v Version) SetPrerelease(prerelease string) (Version, error) {
vNext := v
if len(prerelease) > 0 && !validPrereleaseRegex.MatchString(prerelease) {
return vNext, ErrInvalidPrerelease
}
vNext.pre = prerelease
vNext.original = v.originalVPrefix() + "" + vNext.String()
return vNext, nil
}
// SetMetadata defines metadata value.
// Value must not include the required 'plus' prefix.
func (v Version) SetMetadata(metadata string) (Version, error) {
vNext := v
if len(metadata) > 0 && !validPrereleaseRegex.MatchString(metadata) {
return vNext, ErrInvalidMetadata
}
vNext.metadata = metadata
vNext.original = v.originalVPrefix() + "" + vNext.String()
return vNext, nil
}
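
And a sketch of the two setters, which likewise return new Version values rather than mutating the receiver:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	v := semver.MustParse("1.2.3")

	withPre, err := v.SetPrerelease("rc.1")
	if err != nil {
		panic(err)
	}
	withBoth, err := withPre.SetMetadata("build.11")
	if err != nil {
		panic(err)
	}

	fmt.Println(withPre.String())  // 1.2.3-rc.1
	fmt.Println(withBoth.String()) // 1.2.3-rc.1+build.11
}
```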
// LessThan tests if one version is less than another one.
func (v *Version) LessThan(o *Version) bool {
return v.Compare(o) < 0
}
// GreaterThan tests if one version is greater than another one.
func (v *Version) GreaterThan(o *Version) bool {
return v.Compare(o) > 0
}
// Equal tests if two versions are equal to each other.
// Note, versions can be equal with different metadata since metadata
// is not considered part of the comparable version.
func (v *Version) Equal(o *Version) bool {
return v.Compare(o) == 0
}
// Compare compares this version to another one. It returns -1, 0, or 1 if
// the version is smaller, equal, or larger than the other version.
//
// Versions are compared by X.Y.Z. Build metadata is ignored. Prerelease is
// lower than the version without a prerelease.
func (v *Version) Compare(o *Version) int {
// Compare the major, minor, and patch version for differences. If a
// difference is found return the comparison.
if d := compareSegment(v.Major(), o.Major()); d != 0 {
return d
}
if d := compareSegment(v.Minor(), o.Minor()); d != 0 {
return d
}
if d := compareSegment(v.Patch(), o.Patch()); d != 0 {
return d
}
// At this point the major, minor, and patch versions are the same.
ps := v.pre
po := o.Prerelease()
if ps == "" && po == "" {
return 0
}
if ps == "" {
return 1
}
if po == "" {
return -1
}
return comparePrerelease(ps, po)
}
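
A quick sketch of the precedence rules Compare implements, in particular that prereleases sort below releases and that build metadata is ignored:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	release := semver.MustParse("1.2.3")
	pre := semver.MustParse("1.2.3-alpha.1")
	meta := semver.MustParse("1.2.3+build.7")

	fmt.Println(pre.Compare(release))  // -1: the prerelease sorts below the release
	fmt.Println(release.Compare(meta)) // 0: build metadata is ignored
	fmt.Println(release.Equal(meta))   // true
}
```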
// UnmarshalJSON implements JSON.Unmarshaler interface.
func (v *Version) UnmarshalJSON(b []byte) error {
var s string
if err := json.Unmarshal(b, &s); err != nil {
return err
}
temp, err := NewVersion(s)
if err != nil {
return err
}
v.major = temp.major
v.minor = temp.minor
v.patch = temp.patch
v.pre = temp.pre
v.metadata = temp.metadata
v.original = temp.original
temp = nil
return nil
}
// MarshalJSON implements JSON.Marshaler interface.
func (v *Version) MarshalJSON() ([]byte, error) {
return json.Marshal(v.String())
}
func compareSegment(v, o int64) int {
if v < o {
return -1
}
if v > o {
return 1
}
return 0
}
func comparePrerelease(v, o string) int {
// Split the prerelease versions into their parts. The separator, per the spec,
// is a '.'.
sparts := strings.Split(v, ".")
oparts := strings.Split(o, ".")
// Find the longer length of the parts to know how many loop iterations to
// go through.
slen := len(sparts)
olen := len(oparts)
l := slen
if olen > slen {
l = olen
}
// Iterate over each part of the prereleases to compare the differences.
for i := 0; i < l; i++ {
// Since the length of the parts can be different we need to create
// a placeholder. This is to avoid out of bounds issues.
stemp := ""
if i < slen {
stemp = sparts[i]
}
otemp := ""
if i < olen {
otemp = oparts[i]
}
d := comparePrePart(stemp, otemp)
if d != 0 {
return d
}
}
// Reaching here means the prerelease parts compare as equal. The versions may
// still differ in build metadata (the part following a +), so they are not
// necessarily identical in string form, but the comparison treats them as equal.
return 0
}
func comparePrePart(s, o string) int {
// Fastpath if they are equal
if s == o {
return 0
}
// When s or o are empty we can use the other in an attempt to determine
// the response.
if s == "" {
if o != "" {
return -1
}
return 1
}
if o == "" {
if s != "" {
return 1
}
return -1
}
// When compared as strings, "99" is greater than "103". To handle
// cases like this we need to detect numbers and compare them numerically.
// According to the semver spec, numbers are always positive. If there is a -
// at the start, like -99, the identifier is treated as an alphanum. Numeric
// identifiers always have lower precedence than alphanumeric ones. Parsing as
// Uints because negative numbers are ignored.
oi, n1 := strconv.ParseUint(o, 10, 64)
si, n2 := strconv.ParseUint(s, 10, 64)
// The case where both are strings compare the strings
if n1 != nil && n2 != nil {
if s > o {
return 1
}
return -1
} else if n1 != nil {
// o is a string and s is a number
return -1
} else if n2 != nil {
// s is a string and o is a number
return 1
}
// Both are numbers
if si > oi {
return 1
}
return -1
}
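
The prerelease part comparison above is internal, but its effect is visible through the public API; a small sketch:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	// Numeric prerelease identifiers compare numerically, not lexically.
	a := semver.MustParse("1.0.0-alpha.2")
	b := semver.MustParse("1.0.0-alpha.10")
	fmt.Println(a.LessThan(b)) // true, "10" is compared as a number

	// Numeric identifiers sort below alphanumeric ones.
	num := semver.MustParse("1.0.0-1")
	alpha := semver.MustParse("1.0.0-alpha")
	fmt.Println(num.LessThan(alpha)) // true
}
```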


@@ -1,10 +0,0 @@
// +build gofuzz
package semver
func Fuzz(data []byte) int {
if _, err := NewVersion(string(data)); err != nil {
return 0
}
return 1
}


@@ -1,2 +0,0 @@
vendor/
/.glide


@@ -1,26 +0,0 @@
language: go
go:
- 1.9.x
- 1.10.x
- 1.11.x
- 1.12.x
- 1.13.x
- tip
# Setting sudo access to false will let Travis CI use containers rather than
# VMs to run the tests. For more details see:
# - http://docs.travis-ci.com/user/workers/container-based-infrastructure/
# - http://docs.travis-ci.com/user/workers/standard-infrastructure/
sudo: false
script:
- make setup test
notifications:
webhooks:
urls:
- https://webhooks.gitter.im/e/06e3328629952dabe3e0
on_success: change # options: [always|never|change] default: always
on_failure: always # options: [always|never|change] default: always
on_start: never # options: [always|never|change] default: always


@@ -1,282 +0,0 @@
# Changelog
## Release 2.22.0 (2019-10-02)
### Added
- #173: Added getHostByName function to resolve dns names to ips (thanks @fcgravalos)
- #195: Added deepCopy function for use with dicts
### Changed
- Updated merge and mergeOverwrite documentation to explain copying and how to
use deepCopy with it
## Release 2.21.0 (2019-09-18)
### Added
- #122: Added encryptAES/decryptAES functions (thanks @n0madic)
- #128: Added toDecimal support (thanks @Dean-Coakley)
- #169: Added list concat (thanks @astorath)
- #174: Added deepEqual function (thanks @bonifaido)
- #170: Added url parse and join functions (thanks @astorath)
### Changed
- #171: Updated glide config for Google UUID to v1 and to add ranges to semver and testify
### Fixed
- #172: Fix semver wildcard example (thanks @piepmatz)
- #175: Fix dateInZone doc example (thanks @s3than)
## Release 2.20.0 (2019-06-18)
### Added
- #164: Adding function to get unix epoch for a time (@mattfarina)
- #166: Adding tests for date_in_zone (@mattfarina)
### Changed
- #144: Fix function comments based on best practices from Effective Go (@CodeLingoTeam)
- #150: Handles pointer type for time.Time in "htmlDate" (@mapreal19)
- #161, #157, #160, #153, #158, #156, #155, #159, #152 documentation updates (@badeadan)
### Fixed
## Release 2.19.0 (2019-03-02)
IMPORTANT: This release reverts a change from 2.18.0
In the previous release (2.18), we prematurely merged a partial change to the crypto functions that led to creating two sets of crypto functions (I blame @technosophos -- since that's me). This release rolls back that change, and does what was originally intended: It alters the existing crypto functions to use secure random.
We debated whether this classifies as a change worthy of major revision, but given the proximity to the last release, we have decided that treating 2.18 as a faulty release is the correct course of action. We apologize for any inconvenience.
### Changed
- Fix substr panic 35fb796 (Alexey igrychev)
- Remove extra period 1eb7729 (Matthew Lorimor)
- Make random string functions use crypto by default 6ceff26 (Matthew Lorimor)
- README edits/fixes/suggestions 08fe136 (Lauri Apple)
## Release 2.18.0 (2019-02-12)
### Added
- Added mergeOverwrite function
- cryptographic functions that use secure random (see fe1de12)
### Changed
- Improve documentation of regexMatch function, resolves #139 90b89ce (Jan Tagscherer)
- Handle has for nil list 9c10885 (Daniel Cohen)
- Document behaviour of mergeOverwrite fe0dbe9 (Lukas Rieder)
- doc: adds missing documentation. 4b871e6 (Fernandez Ludovic)
- Replace outdated goutils imports 01893d2 (Matthew Lorimor)
- Surface crypto secure random strings from goutils fe1de12 (Matthew Lorimor)
- Handle untyped nil values as parameters to string functions 2b2ec8f (Morten Torkildsen)
### Fixed
- Fix dict merge issue and provide mergeOverwrite .dst .src1 to overwrite from src -> dst 4c59c12 (Lukas Rieder)
- Fix substr var names and comments d581f80 (Dean Coakley)
- Fix substr documentation 2737203 (Dean Coakley)
## Release 2.17.1 (2019-01-03)
### Fixed
The 2.17.0 release did not have a version pinned for xstrings, which caused compilation failures when xstrings < 1.2 was used. This adds the correct version string to glide.yaml.
## Release 2.17.0 (2019-01-03)
### Added
- adds alder32sum function and test 6908fc2 (marshallford)
- Added kebabcase function ca331a1 (Ilyes512)
### Changed
- Update goutils to 1.1.0 4e1125d (Matt Butcher)
### Fixed
- Fix 'has' documentation e3f2a85 (dean-coakley)
- docs(dict): fix typo in pick example dc424f9 (Dustin Specker)
- fixes spelling errors... not sure how that happened 4cf188a (marshallford)
## Release 2.16.0 (2018-08-13)
### Added
- add splitn function fccb0b0 (Helgi Þorbjörnsson)
- Add slice func df28ca7 (gongdo)
- Generate serial number a3bdffd (Cody Coons)
- Extract values of dict with values function df39312 (Lawrence Jones)
### Changed
- Modify panic message for list.slice ae38335 (gongdo)
- Minor improvement in code quality - Removed an unreachable piece of code at defaults.go#L26:6 - Resolve formatting issues. 5834241 (Abhishek Kashyap)
- Remove duplicated documentation 1d97af1 (Matthew Fisher)
- Test on go 1.11 49df809 (Helgi Þormar Þorbjörnsson)
### Fixed
- Fix file permissions c5f40b5 (gongdo)
- Fix example for buildCustomCert 7779e0d (Tin Lam)
## Release 2.15.0 (2018-04-02)
### Added
- #68 and #69: Add json helpers to docs (thanks @arunvelsriram)
- #66: Add ternary function (thanks @binoculars)
- #67: Allow keys function to take multiple dicts (thanks @binoculars)
- #89: Added sha1sum to crypto function (thanks @benkeil)
- #81: Allow customizing Root CA that used by genSignedCert (thanks @chenzhiwei)
- #92: Add travis testing for go 1.10
- #93: Adding appveyor config for windows testing
### Changed
- #90: Updating to more recent dependencies
- #73: replace satori/go.uuid with google/uuid (thanks @petterw)
### Fixed
- #76: Fixed documentation typos (thanks @Thiht)
- Fixed rounding issue on the `ago` function. Note, this removes support for Go 1.8 and older
## Release 2.14.1 (2017-12-01)
### Fixed
- #60: Fix typo in function name documentation (thanks @neil-ca-moore)
- #61: Removing line with {{ due to blocking github pages generation
- #64: Update the list functions to handle int, string, and other slices for compatibility
## Release 2.14.0 (2017-10-06)
This new version of Sprig adds a set of functions for generating and working with SSL certificates.
- `genCA` generates an SSL Certificate Authority
- `genSelfSignedCert` generates an SSL self-signed certificate
- `genSignedCert` generates an SSL certificate and key based on a given CA
## Release 2.13.0 (2017-09-18)
This release adds new functions, including:
- `regexMatch`, `regexFindAll`, `regexFind`, `regexReplaceAll`, `regexReplaceAllLiteral`, and `regexSplit` to work with regular expressions
- `floor`, `ceil`, and `round` math functions
- `toDate` converts a string to a date
- `nindent` is just like `indent` but also prepends a new line
- `ago` returns the time from `time.Now`
### Added
- #40: Added basic regex functionality (thanks @alanquillin)
- #41: Added ceil floor and round functions (thanks @alanquillin)
- #48: Added toDate function (thanks @andreynering)
- #50: Added nindent function (thanks @binoculars)
- #46: Added ago function (thanks @slayer)
### Changed
- #51: Updated godocs to include new string functions (thanks @curtisallen)
- #49: Added ability to merge multiple dicts (thanks @binoculars)
## Release 2.12.0 (2017-05-17)
- `snakecase`, `camelcase`, and `shuffle` are three new string functions
- `fail` allows you to bail out of a template render when conditions are not met
## Release 2.11.0 (2017-05-02)
- Added `toJson` and `toPrettyJson`
- Added `merge`
- Refactored documentation
## Release 2.10.0 (2017-03-15)
- Added `semver` and `semverCompare` for Semantic Versions
- `list` replaces `tuple`
- Fixed issue with `join`
- Added `first`, `last`, `initial`, `rest`, `prepend`, `append`, `toString`, `toStrings`, `sortAlpha`, `reverse`, `coalesce`, `pluck`, `pick`, `compact`, `keys`, `omit`, `uniq`, `has`, `without`
## Release 2.9.0 (2017-02-23)
- Added `splitList` to split a list
- Added crypto functions of `genPrivateKey` and `derivePassword`
## Release 2.8.0 (2016-12-21)
- Added access to several path functions (`base`, `dir`, `clean`, `ext`, and `abs`)
- Added functions for _mutating_ dictionaries (`set`, `unset`, `hasKey`)
## Release 2.7.0 (2016-12-01)
- Added `sha256sum` to generate a hash of an input
- Added functions to convert a numeric or string to `int`, `int64`, `float64`
## Release 2.6.0 (2016-10-03)
- Added a `uuidv4` template function for generating UUIDs inside of a template.
## Release 2.5.0 (2016-08-19)
- New `trimSuffix`, `trimPrefix`, `hasSuffix`, and `hasPrefix` functions
- New aliases have been added for a few functions that didn't follow the naming conventions (`trimAll` and `abbrevBoth`)
- `trimall` and `abbrevboth` (notice the case) are deprecated and will be removed in 3.0.0
## Release 2.4.0 (2016-08-16)
- Adds two functions: `until` and `untilStep`
## Release 2.3.0 (2016-06-21)
- cat: Concatenate strings with whitespace separators.
- replace: Replace parts of a string: `replace " " "-" "Me First"` renders "Me-First"
- plural: Format plurals: `len "foo" | plural "one foo" "many foos"` renders "many foos"
- indent: Indent blocks of text in a way that is sensitive to "\n" characters.
## Release 2.2.0 (2016-04-21)
- Added a `genPrivateKey` function (Thanks @bacongobbler)
## Release 2.1.0 (2016-03-30)
- `default` now prints the default value when it does not receive a value down the pipeline. It is much safer now to do `{{.Foo | default "bar"}}`.
- Added accessors for "hermetic" functions. These return only functions that, when given the same input, produce the same output.
## Release 2.0.0 (2016-03-29)
Because we switched from `int` to `int64` as the return value for all integer math functions, the library's major version number has been incremented.
- `min` complements `max` (formerly `biggest`)
- `empty` indicates that a value is the empty value for its type
- `tuple` creates a tuple inside of a template: `{{$t := tuple "a", "b" "c"}}`
- `dict` creates a dictionary inside of a template `{{$d := dict "key1" "val1" "key2" "val2"}}`
- Date formatters have been added for HTML dates (as used in `date` input fields)
- Integer math functions can convert from a number of types, including `string` (via `strconv.ParseInt`).
## Release 1.2.0 (2016-02-01)
- Added quote and squote
- Added b32enc and b32dec
- add now takes varargs
- biggest now takes varargs
## Release 1.1.0 (2015-12-29)
- Added #4: Added contains function. strings.Contains, but with the arguments
switched to simplify common pipelines. (thanks krancour)
- Added Travis-CI testing support
## Release 1.0.0 (2015-12-23)
- Initial release


@@ -1,20 +0,0 @@
Sprig
Copyright (C) 2013 Masterminds
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,13 +0,0 @@
HAS_GLIDE := $(shell command -v glide;)
.PHONY: test
test:
go test -v .
.PHONY: setup
setup:
ifndef HAS_GLIDE
go get -u github.com/Masterminds/glide
endif
glide install


@@ -1,78 +0,0 @@
# Sprig: Template functions for Go templates
[![Stability: Sustained](https://masterminds.github.io/stability/sustained.svg)](https://masterminds.github.io/stability/sustained.html)
[![Build Status](https://travis-ci.org/Masterminds/sprig.svg?branch=master)](https://travis-ci.org/Masterminds/sprig)
The Go language comes with a [built-in template
language](http://golang.org/pkg/text/template/), but not
very many template functions. Sprig is a library that provides more than 100 commonly
used template functions.
It is inspired by the template functions found in
[Twig](http://twig.sensiolabs.org/documentation) and in various
JavaScript libraries, such as [underscore.js](http://underscorejs.org/).
## Usage
**Template developers**: Please use Sprig's [function documentation](http://masterminds.github.io/sprig/) for
detailed instructions and code snippets for the >100 template functions available.
**Go developers**: If you'd like to include Sprig as a library in your program,
our API documentation is available [at GoDoc.org](http://godoc.org/github.com/Masterminds/sprig).
For standard usage, read on.
### Load the Sprig library
To load the Sprig `FuncMap`:
```go
import (
"github.com/Masterminds/sprig"
"html/template"
)
// This example illustrates that the FuncMap *must* be set before the
// templates themselves are loaded.
tpl := template.Must(
template.New("base").Funcs(sprig.FuncMap()).ParseGlob("*.html")
)
```
### Calling the functions inside of templates
By convention, all functions are lowercase. This seems to follow the Go
idiom for template functions (as opposed to template methods, which are
TitleCase). For example, this:
```
{{ "hello!" | upper | repeat 5 }}
```
produces this:
```
HELLO!HELLO!HELLO!HELLO!HELLO!
```
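
Putting the two snippets above together, a minimal runnable sketch (assuming the `github.com/Masterminds/sprig` import path of this vendored copy, writing to stdout):

```go
package main

import (
	"html/template"
	"os"

	"github.com/Masterminds/sprig"
)

func main() {
	// Register the FuncMap before parsing, then call the lowercase
	// sprig functions from inside the template.
	tpl := template.Must(
		template.New("demo").Funcs(sprig.FuncMap()).Parse(`{{ "hello!" | upper | repeat 5 }}`),
	)
	if err := tpl.Execute(os.Stdout, nil); err != nil {
		panic(err)
	}
	// Output: HELLO!HELLO!HELLO!HELLO!HELLO!
}
```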
## Principles Driving Our Function Selection
We followed these principles to decide which functions to add and how to implement them:
- Use template functions to build layout. The following
types of operations are within the domain of template functions:
- Formatting
- Layout
- Simple type conversions
- Utilities that assist in handling common formatting and layout needs (e.g. arithmetic)
- Template functions should not return errors unless there is no way to print
a sensible value. For example, converting a string to an integer should not
produce an error if conversion fails. Instead, it should display a default
value.
- Simple math is necessary for grid layouts, pagers, and so on. Complex math
(anything other than arithmetic) should be done outside of templates.
- Template functions only deal with the data passed into them. They never retrieve
data from a source.
- Finally, do not override core Go template functions.


@@ -1,26 +0,0 @@
version: build-{build}.{branch}
clone_folder: C:\gopath\src\github.com\Masterminds\sprig
shallow_clone: true
environment:
GOPATH: C:\gopath
platform:
- x64
install:
- go get -u github.com/Masterminds/glide
- set PATH=%GOPATH%\bin;%PATH%
- go version
- go env
build_script:
- glide install
- go install ./...
test_script:
- go test -v
deploy: off

Some files were not shown because too many files have changed in this diff.