Update ginkgo to v2 in go.mod, go.sum, vendor
This commit updates ginkgo to v2. Note that because ginkgo/v2 requires go1.18, the Go version was updated as well.

Signed-off-by: liornoy <lnoy@redhat.com>
Co-authored-by: Sascha Grunert <sgrunert@redhat.com>
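For downstream code, the visible effect of this bump is the import-path change that Go's semantic import versioning requires for a v2 module (a sketch for orientation, not part of this diff):

```go
// Before: ginkgo v1
import "github.com/onsi/ginkgo"

// After: ginkgo v2; the module path gains a /v2 suffix
import "github.com/onsi/ginkgo/v2"
```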
vendor/github.com/onsi/ginkgo/.travis.yml (generated, vendored): 24 lines deleted
@@ -1,24 +0,0 @@
language: go
go:
  - tip
  - 1.16.x
  - 1.15.x

cache:
  directories:
    - $GOPATH/pkg/mod

# allow internal package imports, necessary for forked repositories
go_import_path: github.com/onsi/ginkgo

install:
  - GO111MODULE="off" go get -v -t ./...
  - GO111MODULE="off" go get golang.org/x/tools/cmd/cover
  - GO111MODULE="off" go get github.com/onsi/gomega
  - GO111MODULE="off" go install github.com/onsi/ginkgo/ginkgo
  - export PATH=$GOPATH/bin:$PATH

script:
  - GO111MODULE="on" go mod tidy && git diff --exit-code go.mod go.sum
  - go vet
  - ginkgo -r --randomizeAllSpecs --randomizeSuites --race --trace
vendor/github.com/onsi/ginkgo/CONTRIBUTING.md (generated, vendored): 33 lines deleted
@@ -1,33 +0,0 @@
# Contributing to Ginkgo

Your contributions to Ginkgo are essential for its long-term maintenance and improvement.

- Please **open an issue first** - describe what problem you are trying to solve and give the community a forum for input and feedback ahead of investing time in writing code!
- Ensure adequate test coverage:
    - When adding to the Ginkgo library, add unit and/or integration tests (under the `integration` folder).
    - When adding to the Ginkgo CLI, note that there are very few unit tests. Please add an integration test.
- Update the documentation. Ginkgo uses `godoc` comments and documentation on the `gh-pages` branch.
  If relevant, please submit a docs PR to that branch alongside your code PR.

Thanks for supporting Ginkgo!

## Setup

Fork the repo, then:

```
go get github.com/onsi/ginkgo
go get github.com/onsi/gomega/...
cd $GOPATH/src/github.com/onsi/ginkgo
git remote add fork git@github.com:<NAME>/ginkgo.git

ginkgo -r -p # ensure tests are green
go vet ./... # ensure linter is happy
```

## Making the PR
- go to a new branch `git checkout -b my-feature`
- make your changes
- run tests and linter again (see above)
- `git push fork`
- open PR 🎉
vendor/github.com/onsi/ginkgo/README.md (generated, vendored): 169 lines deleted
@@ -1,169 +0,0 @@
![Ginkgo: A Go BDD Testing Framework](https://onsi.github.io/ginkgo/images/ginkgo.png)

[![test](https://github.com/onsi/ginkgo/workflows/test/badge.svg?branch=master)](https://github.com/onsi/ginkgo/actions?query=workflow%3Atest+branch%3Amaster)

Jump to the [docs](https://onsi.github.io/ginkgo/) | [中文文档](https://ke-chain.github.io/ginkgodoc) to learn more. To start rolling your Ginkgo tests *now* [keep reading](#set-me-up)!

If you have a question, comment, bug report, feature request, etc. please open a GitHub issue, or visit the [Ginkgo Slack channel](https://app.slack.com/client/T029RQSE6/CQQ50BBNW).

# Ginkgo 2.0 Release Candidate is available!

An effort is underway to develop and deliver Ginkgo 2.0. The work is happening in the [ver2](https://github.com/onsi/ginkgo/tree/ver2) branch and a changelog and migration guide is being maintained on that branch [here](https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md). Issue [#711](https://github.com/onsi/ginkgo/issues/711) is the central place for discussion.

As described in the [changelog](https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md) and [proposal](https://docs.google.com/document/d/1h28ZknXRsTLPNNiOjdHIO-F2toCzq4xoZDXbfYaBdoQ/edit#), Ginkgo 2.0 will clean up the Ginkgo codebase, deprecate and remove some v1 functionality, and add several new much-requested features. To help users get ready for the migration, Ginkgo v1 has started emitting deprecation warnings for features that will no longer be supported with links to documentation for how to migrate away from these features. If you have concerns or comments please chime in on [#711](https://github.com/onsi/ginkgo/issues/711).

Please start exploring and using the V2 release! To get started follow the [Using the Release Candidate](https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta) directions in the migration guide.

## TLDR
Ginkgo builds on Go's `testing` package, allowing expressive [Behavior-Driven Development](https://en.wikipedia.org/wiki/Behavior-driven_development) ("BDD") style tests.
It is typically (and optionally) paired with the [Gomega](https://github.com/onsi/gomega) matcher library.

```go
Describe("the strings package", func() {
  Context("strings.Contains()", func() {
    When("the string contains the substring in the middle", func() {
      It("returns `true`", func() {
        Expect(strings.Contains("Ginkgo is awesome", "is")).To(BeTrue())
      })
    })
  })
})
```

## Feature List

- Ginkgo uses Go's `testing` package and can live alongside your existing `testing` tests. It's easy to [bootstrap](https://onsi.github.io/ginkgo/#bootstrapping-a-suite) and start writing your [first tests](https://onsi.github.io/ginkgo/#adding-specs-to-a-suite)

- Ginkgo allows you to write tests in Go using expressive [Behavior-Driven Development](https://en.wikipedia.org/wiki/Behavior-driven_development) ("BDD") style:
    - Nestable [`Describe`, `Context` and `When` container blocks](https://onsi.github.io/ginkgo/#organizing-specs-with-containers-describe-and-context)
    - [`BeforeEach` and `AfterEach` blocks](https://onsi.github.io/ginkgo/#extracting-common-setup-beforeeach) for setup and teardown
    - [`It` and `Specify` blocks](https://onsi.github.io/ginkgo/#individual-specs-it) that hold your assertions
    - [`JustBeforeEach` blocks](https://onsi.github.io/ginkgo/#separating-creation-and-configuration-justbeforeeach) that separate creation from configuration (also known as the subject action pattern).
    - [`BeforeSuite` and `AfterSuite` blocks](https://onsi.github.io/ginkgo/#global-setup-and-teardown-beforesuite-and-aftersuite) to prep for and cleanup after a suite.

- A comprehensive test runner that lets you:
    - Mark specs as [pending](https://onsi.github.io/ginkgo/#pending-specs)
    - [Focus](https://onsi.github.io/ginkgo/#focused-specs) individual specs, and groups of specs, either programmatically or on the command line
    - Run your tests in [random order](https://onsi.github.io/ginkgo/#spec-permutation), and then reuse random seeds to replicate the same order.
    - Break up your test suite into parallel processes for straightforward [test parallelization](https://onsi.github.io/ginkgo/#parallel-specs)

- `ginkgo`: a command line interface with plenty of handy command line arguments for [running your tests](https://onsi.github.io/ginkgo/#running-tests) and [generating](https://onsi.github.io/ginkgo/#generators) test files. Here are a few choice examples:
    - `ginkgo -nodes=N` runs your tests in `N` parallel processes and prints out coherent output in realtime
    - `ginkgo -cover` runs your tests using Go's code coverage tool
    - `ginkgo convert` converts an XUnit-style `testing` package to a Ginkgo-style package
    - `ginkgo -focus="REGEXP"` and `ginkgo -skip="REGEXP"` allow you to specify a subset of tests to run via regular expression
    - `ginkgo -r` runs all test suites under the current directory
    - `ginkgo -v` prints out identifying information for each test just before it runs

    And much more: run `ginkgo help` for details!

    The `ginkgo` CLI is convenient, but purely optional -- Ginkgo works just fine with `go test`

- `ginkgo watch` [watches](https://onsi.github.io/ginkgo/#watching-for-changes) packages *and their dependencies* for changes, then reruns tests. Run tests immediately as you develop!

- Built-in support for testing [asynchronicity](https://onsi.github.io/ginkgo/#asynchronous-tests)

- Built-in support for [benchmarking](https://onsi.github.io/ginkgo/#benchmark-tests) your code. Control the number of benchmark samples as you gather runtimes and other, arbitrary, bits of numerical information about your code.

- [Completions for Sublime Text](https://github.com/onsi/ginkgo-sublime-completions): just use [Package Control](https://sublime.wbond.net/) to install `Ginkgo Completions`.

- [Completions for VSCode](https://github.com/onsi/vscode-ginkgo): just use VSCode's extension installer to install `vscode-ginkgo`.

- [Ginkgo tools for VSCode](https://marketplace.visualstudio.com/items?itemName=joselitofilho.ginkgotestexplorer): just use VSCode's extension installer to install `ginkgoTestExplorer`.

- Straightforward support for third-party testing libraries such as [Gomock](https://code.google.com/p/gomock/) and [Testify](https://github.com/stretchr/testify). Check out the [docs](https://onsi.github.io/ginkgo/#third-party-integrations) for details.

- A modular architecture that lets you easily:
    - Write [custom reporters](https://onsi.github.io/ginkgo/#writing-custom-reporters) (for example, Ginkgo comes with a [JUnit XML reporter](https://onsi.github.io/ginkgo/#generating-junit-xml-output) and a TeamCity reporter).
    - [Adapt an existing matcher library (or write your own!)](https://onsi.github.io/ginkgo/#using-other-matcher-libraries) to work with Ginkgo

## [Gomega](https://github.com/onsi/gomega): Ginkgo's Preferred Matcher Library

Ginkgo is best paired with Gomega. Learn more about Gomega [here](https://onsi.github.io/gomega/)

## [Agouti](https://github.com/sclevine/agouti): A Go Acceptance Testing Framework

Agouti allows you to run WebDriver integration tests. Learn more about Agouti [here](https://agouti.org)

## Getting Started

You'll need the Go command-line tools. Follow the [installation instructions](https://golang.org/doc/install) if you don't have them installed.

### Global installation
To install the Ginkgo command line interface:
```bash
go get -u github.com/onsi/ginkgo/ginkgo
```
Note that this will install it to `$GOBIN`, which will need to be in the `$PATH` (or equivalent). Run `go help install` for more information.

### Go module ["tools package"](https://github.com/golang/go/issues/25922):
Create (or update) a file called `tools/tools.go` with the following contents:
```go
// +build tools

package tools

import (
    _ "github.com/onsi/ginkgo/ginkgo"
)

// This file imports packages that are used when running go generate, or used
// during the development process but not otherwise depended on by built code.
```
The Ginkgo command can then be run via `go run github.com/onsi/ginkgo/ginkgo`.
This approach allows the version of Ginkgo to be maintained under source control for reproducible results,
and is well suited to automated test pipelines.

### Bootstrapping
```bash
cd path/to/package/you/want/to/test

ginkgo bootstrap # set up a new ginkgo suite
ginkgo generate  # will create a sample test file. edit this file and add your tests then...

go test # to run your tests

ginkgo  # also runs your tests

```

## I'm new to Go: What are my testing options?

Of course, I heartily recommend [Ginkgo](https://github.com/onsi/ginkgo) and [Gomega](https://github.com/onsi/gomega). Both packages are seeing heavy, daily, production use on a number of projects and boast a mature and comprehensive feature-set.

With that said, it's great to know what your options are :)

### What Go gives you out of the box

Testing is a first-class citizen in Go; however, Go's built-in testing primitives are somewhat limited: the [testing](https://golang.org/pkg/testing) package provides basic XUnit style tests and no assertion library.

### Matcher libraries for Go's XUnit style tests

A number of matcher libraries have been written to augment Go's built-in XUnit style tests. Here are two that have gained traction:

- [testify](https://github.com/stretchr/testify)
- [gocheck](https://labix.org/gocheck)

You can also use Ginkgo's matcher library [Gomega](https://github.com/onsi/gomega) in [XUnit style tests](https://onsi.github.io/gomega/#using-gomega-with-golangs-xunitstyle-tests)

### BDD style testing frameworks

There are a handful of BDD-style testing frameworks written for Go. Here are a few:

- [Ginkgo](https://github.com/onsi/ginkgo) ;)
- [GoConvey](https://github.com/smartystreets/goconvey)
- [Goblin](https://github.com/franela/goblin)
- [Mao](https://github.com/azer/mao)
- [Zen](https://github.com/pranavraja/zen)

Finally, @shageman has [put together](https://github.com/shageman/gotestit) a comprehensive comparison of Go testing libraries.

Go explore!

## License

Ginkgo is MIT-Licensed

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md)
vendor/github.com/onsi/ginkgo/config/config.go (generated, vendored): 232 lines deleted
@@ -1,232 +0,0 @@
/*
Ginkgo accepts a number of configuration options.

These are documented [here](http://onsi.github.io/ginkgo/#the-ginkgo-cli)

You can also learn more via

    ginkgo help

or (I kid you not):

    go test -asdf
*/
package config

import (
    "flag"
    "time"

    "fmt"
)

const VERSION = "1.16.5"

type GinkgoConfigType struct {
    RandomSeed         int64
    RandomizeAllSpecs  bool
    RegexScansFilePath bool
    FocusStrings       []string
    SkipStrings        []string
    SkipMeasurements   bool
    FailOnPending      bool
    FailFast           bool
    FlakeAttempts      int
    EmitSpecProgress   bool
    DryRun             bool
    DebugParallel      bool

    ParallelNode  int
    ParallelTotal int
    SyncHost      string
    StreamHost    string
}

var GinkgoConfig = GinkgoConfigType{}

type DefaultReporterConfigType struct {
    NoColor           bool
    SlowSpecThreshold float64
    NoisyPendings     bool
    NoisySkippings    bool
    Succinct          bool
    Verbose           bool
    FullTrace         bool
    ReportPassed      bool
    ReportFile        string
}

var DefaultReporterConfig = DefaultReporterConfigType{}

func processPrefix(prefix string) string {
    if prefix != "" {
        prefix += "."
    }
    return prefix
}

type flagFunc func(string)

func (f flagFunc) String() string     { return "" }
func (f flagFunc) Set(s string) error { f(s); return nil }

func Flags(flagSet *flag.FlagSet, prefix string, includeParallelFlags bool) {
    prefix = processPrefix(prefix)
    flagSet.Int64Var(&(GinkgoConfig.RandomSeed), prefix+"seed", time.Now().Unix(), "The seed used to randomize the spec suite.")
    flagSet.BoolVar(&(GinkgoConfig.RandomizeAllSpecs), prefix+"randomizeAllSpecs", false, "If set, ginkgo will randomize all specs together. By default, ginkgo only randomizes the top level Describe, Context and When groups.")
    flagSet.BoolVar(&(GinkgoConfig.SkipMeasurements), prefix+"skipMeasurements", false, "If set, ginkgo will skip any measurement specs.")
    flagSet.BoolVar(&(GinkgoConfig.FailOnPending), prefix+"failOnPending", false, "If set, ginkgo will mark the test suite as failed if any specs are pending.")
    flagSet.BoolVar(&(GinkgoConfig.FailFast), prefix+"failFast", false, "If set, ginkgo will stop running a test suite after a failure occurs.")

    flagSet.BoolVar(&(GinkgoConfig.DryRun), prefix+"dryRun", false, "If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v.")

    flagSet.Var(flagFunc(flagFocus), prefix+"focus", "If set, ginkgo will only run specs that match this regular expression. Can be specified multiple times, values are ORed.")
    flagSet.Var(flagFunc(flagSkip), prefix+"skip", "If set, ginkgo will only run specs that do not match this regular expression. Can be specified multiple times, values are ORed.")

    flagSet.BoolVar(&(GinkgoConfig.RegexScansFilePath), prefix+"regexScansFilePath", false, "If set, ginkgo regex matching also will look at the file path (code location).")

    flagSet.IntVar(&(GinkgoConfig.FlakeAttempts), prefix+"flakeAttempts", 1, "Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded.")

    flagSet.BoolVar(&(GinkgoConfig.EmitSpecProgress), prefix+"progress", false, "If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter.")

    flagSet.BoolVar(&(GinkgoConfig.DebugParallel), prefix+"debug", false, "If set, ginkgo will emit node output to files when running in parallel.")

    if includeParallelFlags {
        flagSet.IntVar(&(GinkgoConfig.ParallelNode), prefix+"parallel.node", 1, "This worker node's (one-indexed) node number. For running specs in parallel.")
        flagSet.IntVar(&(GinkgoConfig.ParallelTotal), prefix+"parallel.total", 1, "The total number of worker nodes. For running specs in parallel.")
        flagSet.StringVar(&(GinkgoConfig.SyncHost), prefix+"parallel.synchost", "", "The address for the server that will synchronize the running nodes.")
        flagSet.StringVar(&(GinkgoConfig.StreamHost), prefix+"parallel.streamhost", "", "The address for the server that the running nodes should stream data to.")
    }

    flagSet.BoolVar(&(DefaultReporterConfig.NoColor), prefix+"noColor", false, "If set, suppress color output in default reporter.")
    flagSet.Float64Var(&(DefaultReporterConfig.SlowSpecThreshold), prefix+"slowSpecThreshold", 5.0, "(in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter.")
    flagSet.BoolVar(&(DefaultReporterConfig.NoisyPendings), prefix+"noisyPendings", true, "If set, default reporter will shout about pending tests.")
    flagSet.BoolVar(&(DefaultReporterConfig.NoisySkippings), prefix+"noisySkippings", true, "If set, default reporter will shout about skipping tests.")
    flagSet.BoolVar(&(DefaultReporterConfig.Verbose), prefix+"v", false, "If set, default reporter prints out all specs as they begin.")
    flagSet.BoolVar(&(DefaultReporterConfig.Succinct), prefix+"succinct", false, "If set, default reporter prints out a very succinct report")
    flagSet.BoolVar(&(DefaultReporterConfig.FullTrace), prefix+"trace", false, "If set, default reporter prints out the full stack trace when a failure occurs")
    flagSet.BoolVar(&(DefaultReporterConfig.ReportPassed), prefix+"reportPassed", false, "If set, default reporter prints out captured output of passed tests.")
    flagSet.StringVar(&(DefaultReporterConfig.ReportFile), prefix+"reportFile", "", "Override the default reporter output file path.")
}

func BuildFlagArgs(prefix string, ginkgo GinkgoConfigType, reporter DefaultReporterConfigType) []string {
    prefix = processPrefix(prefix)
    result := make([]string, 0)

    if ginkgo.RandomSeed > 0 {
        result = append(result, fmt.Sprintf("--%sseed=%d", prefix, ginkgo.RandomSeed))
    }

    if ginkgo.RandomizeAllSpecs {
        result = append(result, fmt.Sprintf("--%srandomizeAllSpecs", prefix))
    }

    if ginkgo.SkipMeasurements {
        result = append(result, fmt.Sprintf("--%sskipMeasurements", prefix))
    }

    if ginkgo.FailOnPending {
        result = append(result, fmt.Sprintf("--%sfailOnPending", prefix))
    }

    if ginkgo.FailFast {
        result = append(result, fmt.Sprintf("--%sfailFast", prefix))
    }

    if ginkgo.DryRun {
        result = append(result, fmt.Sprintf("--%sdryRun", prefix))
    }

    for _, s := range ginkgo.FocusStrings {
        result = append(result, fmt.Sprintf("--%sfocus=%s", prefix, s))
    }

    for _, s := range ginkgo.SkipStrings {
        result = append(result, fmt.Sprintf("--%sskip=%s", prefix, s))
    }

    if ginkgo.FlakeAttempts > 1 {
        result = append(result, fmt.Sprintf("--%sflakeAttempts=%d", prefix, ginkgo.FlakeAttempts))
    }

    if ginkgo.EmitSpecProgress {
        result = append(result, fmt.Sprintf("--%sprogress", prefix))
    }

    if ginkgo.DebugParallel {
        result = append(result, fmt.Sprintf("--%sdebug", prefix))
    }

    if ginkgo.ParallelNode != 0 {
        result = append(result, fmt.Sprintf("--%sparallel.node=%d", prefix, ginkgo.ParallelNode))
    }

    if ginkgo.ParallelTotal != 0 {
        result = append(result, fmt.Sprintf("--%sparallel.total=%d", prefix, ginkgo.ParallelTotal))
    }

    if ginkgo.StreamHost != "" {
        result = append(result, fmt.Sprintf("--%sparallel.streamhost=%s", prefix, ginkgo.StreamHost))
    }

    if ginkgo.SyncHost != "" {
        result = append(result, fmt.Sprintf("--%sparallel.synchost=%s", prefix, ginkgo.SyncHost))
    }

    if ginkgo.RegexScansFilePath {
        result = append(result, fmt.Sprintf("--%sregexScansFilePath", prefix))
    }

    if reporter.NoColor {
        result = append(result, fmt.Sprintf("--%snoColor", prefix))
    }

    if reporter.SlowSpecThreshold > 0 {
        result = append(result, fmt.Sprintf("--%sslowSpecThreshold=%.5f", prefix, reporter.SlowSpecThreshold))
    }

    if !reporter.NoisyPendings {
        result = append(result, fmt.Sprintf("--%snoisyPendings=false", prefix))
    }

    if !reporter.NoisySkippings {
        result = append(result, fmt.Sprintf("--%snoisySkippings=false", prefix))
    }

    if reporter.Verbose {
        result = append(result, fmt.Sprintf("--%sv", prefix))
    }

    if reporter.Succinct {
        result = append(result, fmt.Sprintf("--%ssuccinct", prefix))
    }

    if reporter.FullTrace {
        result = append(result, fmt.Sprintf("--%strace", prefix))
    }

    if reporter.ReportPassed {
        result = append(result, fmt.Sprintf("--%sreportPassed", prefix))
    }

    if reporter.ReportFile != "" {
        result = append(result, fmt.Sprintf("--%sreportFile=%s", prefix, reporter.ReportFile))
    }

    return result
}

// flagFocus implements the -focus flag.
func flagFocus(arg string) {
    if arg != "" {
        GinkgoConfig.FocusStrings = append(GinkgoConfig.FocusStrings, arg)
    }
}

// flagSkip implements the -skip flag.
func flagSkip(arg string) {
    if arg != "" {
        GinkgoConfig.SkipStrings = append(GinkgoConfig.SkipStrings, arg)
    }
}
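Everything in this package is driven through flag registration; here is a minimal sketch of how a consumer registers the `ginkgo.*` flags and round-trips the parsed configuration back into CLI arguments, which is how the runner forwards settings to parallel workers (the flag values are illustrative):

```go
package main

import (
    "flag"
    "fmt"

    "github.com/onsi/ginkgo/config"
)

func main() {
    // Register the ginkgo.* flags on a fresh FlagSet, including the parallel flags.
    fs := flag.NewFlagSet("example", flag.ContinueOnError)
    config.Flags(fs, "ginkgo", true)

    // Parsing populates the package-level GinkgoConfig and DefaultReporterConfig.
    _ = fs.Parse([]string{"--ginkgo.failFast", "--ginkgo.seed=12345"})

    // BuildFlagArgs reconstructs the equivalent CLI arguments from the
    // parsed configuration (defaults such as slowSpecThreshold are included too).
    args := config.BuildFlagArgs("ginkgo", config.GinkgoConfig, config.DefaultReporterConfig)
    fmt.Println(args) // e.g. [--ginkgo.seed=12345 --ginkgo.failFast ...]
}
```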
vendor/github.com/onsi/ginkgo/extensions/table/table.go (generated, vendored): 110 lines deleted
@@ -1,110 +0,0 @@
/*

Table provides a simple DSL for Ginkgo-native Table-Driven Tests

The godoc documentation describes Table's API. More comprehensive documentation (with examples!) is available at http://onsi.github.io/ginkgo#table-driven-tests

*/

package table

import (
    "fmt"
    "reflect"

    "github.com/onsi/ginkgo/internal/codelocation"
    "github.com/onsi/ginkgo/internal/global"
    "github.com/onsi/ginkgo/types"
)

/*
DescribeTable describes a table-driven test.

For example:

    DescribeTable("a simple table",
        func(x int, y int, expected bool) {
            Ω(x > y).Should(Equal(expected))
        },
        Entry("x > y", 1, 0, true),
        Entry("x == y", 0, 0, false),
        Entry("x < y", 0, 1, false),
    )

The first argument to `DescribeTable` is a string description.
The second argument is a function that will be run for each table entry. Your assertions go here - the function is equivalent to a Ginkgo It.
The subsequent arguments must be of type `TableEntry`. We recommend using the `Entry` convenience constructors.

The `Entry` constructor takes a string description followed by an arbitrary set of parameters. These parameters are passed into your function.

Under the hood, `DescribeTable` simply generates a new Ginkgo `Describe`. Each `Entry` is turned into an `It` within the `Describe`.

It's important to understand that the `Describe`s and `It`s are generated at evaluation time (i.e. when Ginkgo constructs the tree of tests and before the tests run).

Individual Entries can be focused (with FEntry) or marked pending (with PEntry or XEntry). In addition, the entire table can be focused or marked pending with FDescribeTable and PDescribeTable/XDescribeTable.

A description function can be passed to Entry in place of the description. The function is then fed with the entry parameters to generate the description of the It corresponding to that particular Entry.

For example:

    describe := func(desc string) func(int, int, bool) string {
        return func(x, y int, expected bool) string {
            return fmt.Sprintf("%s x=%d y=%d expected:%t", desc, x, y, expected)
        }
    }

    DescribeTable("a simple table",
        func(x int, y int, expected bool) {
            Ω(x > y).Should(Equal(expected))
        },
        Entry(describe("x > y"), 1, 0, true),
        Entry(describe("x == y"), 0, 0, false),
        Entry(describe("x < y"), 0, 1, false),
    )
*/
func DescribeTable(description string, itBody interface{}, entries ...TableEntry) bool {
    describeTable(description, itBody, entries, types.FlagTypeNone)
    return true
}

/*
You can focus a table with `FDescribeTable`. This is equivalent to `FDescribe`.
*/
func FDescribeTable(description string, itBody interface{}, entries ...TableEntry) bool {
    describeTable(description, itBody, entries, types.FlagTypeFocused)
    return true
}

/*
You can mark a table as pending with `PDescribeTable`. This is equivalent to `PDescribe`.
*/
func PDescribeTable(description string, itBody interface{}, entries ...TableEntry) bool {
    describeTable(description, itBody, entries, types.FlagTypePending)
    return true
}

/*
You can mark a table as pending with `XDescribeTable`. This is equivalent to `XDescribe`.
*/
func XDescribeTable(description string, itBody interface{}, entries ...TableEntry) bool {
    describeTable(description, itBody, entries, types.FlagTypePending)
    return true
}

func describeTable(description string, itBody interface{}, entries []TableEntry, flag types.FlagType) {
    itBodyValue := reflect.ValueOf(itBody)
    if itBodyValue.Kind() != reflect.Func {
        panic(fmt.Sprintf("DescribeTable expects a function, got %#v", itBody))
    }

    global.Suite.PushContainerNode(
        description,
        func() {
            for _, entry := range entries {
                entry.generateIt(itBodyValue)
            }
        },
        flag,
        codelocation.New(2),
    )
}
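For reference, the v1 table DSL deleted above was consumed like this; a minimal, self-contained suite sketch (the package and suite names are illustrative, and the bootstrap follows the standard v1 `RunSpecs` pattern):

```go
package table_example_test

import (
    "testing"

    . "github.com/onsi/ginkgo"
    "github.com/onsi/ginkgo/extensions/table"
    . "github.com/onsi/gomega"
)

func TestTableExample(t *testing.T) {
    // Standard v1 bootstrap: route Gomega failures into Ginkgo's Fail.
    RegisterFailHandler(Fail)
    RunSpecs(t, "Table Example Suite")
}

// DescribeTable generates a Describe; each Entry becomes an It inside it.
var _ = table.DescribeTable("integer comparison",
    func(x, y int, expected bool) {
        Expect(x > y).To(Equal(expected))
    },
    table.Entry("x > y", 1, 0, true),
    table.Entry("x == y", 0, 0, false),
    table.Entry("x < y", 0, 1, false),
)
```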
vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go (generated, vendored): 129 lines deleted
@@ -1,129 +0,0 @@
package table

import (
    "fmt"
    "reflect"

    "github.com/onsi/ginkgo/internal/codelocation"
    "github.com/onsi/ginkgo/internal/global"
    "github.com/onsi/ginkgo/types"
)

/*
TableEntry represents an entry in a table test. You generally use the `Entry` constructor.
*/
type TableEntry struct {
    Description  interface{}
    Parameters   []interface{}
    Pending      bool
    Focused      bool
    codeLocation types.CodeLocation
}

func (t TableEntry) generateIt(itBody reflect.Value) {
    var description string
    descriptionValue := reflect.ValueOf(t.Description)
    switch descriptionValue.Kind() {
    case reflect.String:
        description = descriptionValue.String()
    case reflect.Func:
        values := castParameters(descriptionValue, t.Parameters)
        res := descriptionValue.Call(values)
        if len(res) != 1 {
            panic(fmt.Sprintf("The describe function should return only one value, returned %d", len(res)))
        }
        if res[0].Kind() != reflect.String {
            panic(fmt.Sprintf("The describe function should return a string, returned %#v", res[0]))
        }
        description = res[0].String()
    default:
        panic(fmt.Sprintf("Description can either be a string or a function, got %#v", descriptionValue))
    }

    if t.Pending {
        global.Suite.PushItNode(description, func() {}, types.FlagTypePending, t.codeLocation, 0)
        return
    }

    values := castParameters(itBody, t.Parameters)
    body := func() {
        itBody.Call(values)
    }

    if t.Focused {
        global.Suite.PushItNode(description, body, types.FlagTypeFocused, t.codeLocation, global.DefaultTimeout)
    } else {
        global.Suite.PushItNode(description, body, types.FlagTypeNone, t.codeLocation, global.DefaultTimeout)
    }
}

func castParameters(function reflect.Value, parameters []interface{}) []reflect.Value {
    res := make([]reflect.Value, len(parameters))
    funcType := function.Type()
    for i, param := range parameters {
        if param == nil {
            inType := funcType.In(i)
            res[i] = reflect.Zero(inType)
        } else {
            res[i] = reflect.ValueOf(param)
        }
    }
    return res
}

/*
Entry constructs a TableEntry.

The first argument is a required description (this becomes the content of the generated Ginkgo `It`).
Subsequent parameters are saved off and sent to the callback passed in to `DescribeTable`.

Each Entry ends up generating an individual Ginkgo It.
*/
func Entry(description interface{}, parameters ...interface{}) TableEntry {
    return TableEntry{
        Description:  description,
        Parameters:   parameters,
        Pending:      false,
        Focused:      false,
        codeLocation: codelocation.New(1),
    }
}

/*
You can focus a particular entry with FEntry. This is equivalent to FIt.
*/
func FEntry(description interface{}, parameters ...interface{}) TableEntry {
    return TableEntry{
        Description:  description,
        Parameters:   parameters,
        Pending:      false,
        Focused:      true,
        codeLocation: codelocation.New(1),
    }
}

/*
You can mark a particular entry as pending with PEntry. This is equivalent to PIt.
*/
func PEntry(description interface{}, parameters ...interface{}) TableEntry {
    return TableEntry{
        Description:  description,
        Parameters:   parameters,
        Pending:      true,
        Focused:      false,
        codeLocation: codelocation.New(1),
    }
}

/*
You can mark a particular entry as pending with XEntry. This is equivalent to XIt.
*/
func XEntry(description interface{}, parameters ...interface{}) TableEntry {
    return TableEntry{
        Description:  description,
        Parameters:   parameters,
        Pending:      true,
        Focused:      false,
        codeLocation: codelocation.New(1),
    }
}
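The `reflect.Func` branch of `generateIt` above is what supports description functions; a short sketch continuing the suite from the previous example (it additionally assumes `fmt` is imported):

```go
// A description function receives the entry's parameters and must return
// a single string, which becomes the generated It's text (see generateIt).
var describeEntry = func(x, y int, expected bool) string {
    return fmt.Sprintf("x=%d y=%d expected:%t", x, y, expected)
}

var _ = table.DescribeTable("computed descriptions",
    func(x, y int, expected bool) {
        Expect(x > y).To(Equal(expected))
    },
    table.Entry(describeEntry, 1, 0, true),
    table.Entry(describeEntry, 0, 1, false),
)
```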
vendor/github.com/onsi/ginkgo/ginkgo_dsl.go (generated, vendored): 681 lines deleted
@@ -1,681 +0,0 @@
/*
Ginkgo is a BDD-style testing framework for Golang

The godoc documentation describes Ginkgo's API. More comprehensive documentation (with examples!) is available at http://onsi.github.io/ginkgo/

Ginkgo's preferred matcher library is [Gomega](http://github.com/onsi/gomega)

Ginkgo on Github: http://github.com/onsi/ginkgo

Ginkgo is MIT-Licensed
*/
package ginkgo

import (
    "flag"
    "fmt"
    "io"
    "net/http"
    "os"
    "reflect"
    "strings"
    "time"

    "github.com/onsi/ginkgo/config"
    "github.com/onsi/ginkgo/internal/codelocation"
    "github.com/onsi/ginkgo/internal/global"
    "github.com/onsi/ginkgo/internal/remote"
    "github.com/onsi/ginkgo/internal/testingtproxy"
    "github.com/onsi/ginkgo/internal/writer"
    "github.com/onsi/ginkgo/reporters"
    "github.com/onsi/ginkgo/reporters/stenographer"
    colorable "github.com/onsi/ginkgo/reporters/stenographer/support/go-colorable"
    "github.com/onsi/ginkgo/types"
)

var deprecationTracker = types.NewDeprecationTracker()

const GINKGO_VERSION = config.VERSION
const GINKGO_PANIC = `
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

    defer GinkgoRecover()

at the top of the goroutine that caused this panic.
`

func init() {
    config.Flags(flag.CommandLine, "ginkgo", true)
    GinkgoWriter = writer.New(os.Stdout)
}

//GinkgoWriter implements an io.Writer
//When running in verbose mode any writes to GinkgoWriter will be immediately printed
//to stdout. Otherwise, GinkgoWriter will buffer any writes produced during the current test and flush them to screen
//only if the current test fails.
var GinkgoWriter io.Writer

//The interface by which Ginkgo receives *testing.T
type GinkgoTestingT interface {
    Fail()
}

//GinkgoRandomSeed returns the seed used to randomize spec execution order. It is
//useful for seeding your own pseudorandom number generators (PRNGs) to ensure
//consistent executions from run to run, where your tests contain variability (for
//example, when selecting random test data).
func GinkgoRandomSeed() int64 {
    return config.GinkgoConfig.RandomSeed
}

//GinkgoParallelNode is deprecated, use GinkgoParallelProcess instead
func GinkgoParallelNode() int {
    deprecationTracker.TrackDeprecation(types.Deprecations.ParallelNode(), codelocation.New(1))
    return GinkgoParallelProcess()
}

//GinkgoParallelProcess returns the parallel process number for the current ginkgo process
//The process number is 1-indexed
func GinkgoParallelProcess() int {
    return config.GinkgoConfig.ParallelNode
}

//Some matcher libraries or legacy codebases require a *testing.T
//GinkgoT implements an interface analogous to *testing.T and can be used if
//the library in question accepts *testing.T through an interface
//
//For example, with testify:
//    assert.Equal(GinkgoT(), 123, 123, "they should be equal")
//
//Or with gomock:
//    gomock.NewController(GinkgoT())
//
//GinkgoT() takes an optional offset argument that can be used to get the
//correct line number associated with the failure.
func GinkgoT(optionalOffset ...int) GinkgoTInterface {
    offset := 3
    if len(optionalOffset) > 0 {
        offset = optionalOffset[0]
    }
    failedFunc := func() bool {
        return CurrentGinkgoTestDescription().Failed
    }
    nameFunc := func() string {
        return CurrentGinkgoTestDescription().FullTestText
    }
    return testingtproxy.New(GinkgoWriter, Fail, Skip, failedFunc, nameFunc, offset)
}

//The interface returned by GinkgoT(). This covers most of the methods
//in the testing package's T.
type GinkgoTInterface interface {
    Cleanup(func())
    Setenv(key, value string)
    Error(args ...interface{})
    Errorf(format string, args ...interface{})
    Fail()
    FailNow()
    Failed() bool
    Fatal(args ...interface{})
    Fatalf(format string, args ...interface{})
    Helper()
    Log(args ...interface{})
    Logf(format string, args ...interface{})
    Name() string
    Parallel()
    Skip(args ...interface{})
    SkipNow()
    Skipf(format string, args ...interface{})
    Skipped() bool
    TempDir() string
}

//Custom Ginkgo test reporters must implement the Reporter interface.
//
//The custom reporter is passed in a SuiteSummary when the suite begins and ends,
//and a SpecSummary just before a spec begins and just after a spec ends
type Reporter reporters.Reporter

//Asynchronous specs are given a channel of the Done type. You must close or write to the channel
//to tell Ginkgo that your async test is done.
type Done chan<- interface{}

//GinkgoTestDescription represents the information about the current running test returned by CurrentGinkgoTestDescription
//    FullTestText: a concatenation of ComponentTexts and the TestText
//    ComponentTexts: a list of all texts for the Describes & Contexts leading up to the current test
//    TestText: the text in the actual It or Measure node
//    IsMeasurement: true if the current test is a measurement
//    FileName: the name of the file containing the current test
//    LineNumber: the line number for the current test
//    Failed: if the current test has failed, this will be true (useful in an AfterEach)
type GinkgoTestDescription struct {
    FullTestText   string
    ComponentTexts []string
    TestText       string

    IsMeasurement bool

    FileName   string
    LineNumber int

    Failed   bool
    Duration time.Duration
}

//CurrentGinkgoTestDescription returns information about the current running test.
func CurrentGinkgoTestDescription() GinkgoTestDescription {
    summary, ok := global.Suite.CurrentRunningSpecSummary()
    if !ok {
        return GinkgoTestDescription{}
    }

    subjectCodeLocation := summary.ComponentCodeLocations[len(summary.ComponentCodeLocations)-1]

    return GinkgoTestDescription{
        ComponentTexts: summary.ComponentTexts[1:],
        FullTestText:   strings.Join(summary.ComponentTexts[1:], " "),
        TestText:       summary.ComponentTexts[len(summary.ComponentTexts)-1],
        IsMeasurement:  summary.IsMeasurement,
        FileName:       subjectCodeLocation.FileName,
        LineNumber:     subjectCodeLocation.LineNumber,
        Failed:         summary.HasFailureState(),
        Duration:       summary.RunTime,
    }
}

//Measurement tests receive a Benchmarker.
//
//You use the Time() function to time how long the passed in body function takes to run
//You use the RecordValue() function to track arbitrary numerical measurements.
//The RecordValueWithPrecision() function can be used alternatively to provide the unit
//and resolution of the numeric measurement.
//The optional info argument is passed to the test reporter and can be used to
//provide the measurement data to a custom reporter with context.
//
//See http://onsi.github.io/ginkgo/#benchmark_tests for more details
type Benchmarker interface {
    Time(name string, body func(), info ...interface{}) (elapsedTime time.Duration)
    RecordValue(name string, value float64, info ...interface{})
    RecordValueWithPrecision(name string, value float64, units string, precision int, info ...interface{})
}

//RunSpecs is the entry point for the Ginkgo test runner.
//You must call this within a Golang testing TestX(t *testing.T) function.
//
//To bootstrap a test suite you can use the Ginkgo CLI:
//
//    ginkgo bootstrap
func RunSpecs(t GinkgoTestingT, description string) bool {
    specReporters := []Reporter{buildDefaultReporter()}
    if config.DefaultReporterConfig.ReportFile != "" {
        reportFile := config.DefaultReporterConfig.ReportFile
        specReporters[0] = reporters.NewJUnitReporter(reportFile)
        specReporters = append(specReporters, buildDefaultReporter())
    }
    return runSpecsWithCustomReporters(t, description, specReporters)
}

//To run your tests with Ginkgo's default reporter and your custom reporter(s), replace
//RunSpecs() with this method.
func RunSpecsWithDefaultAndCustomReporters(t GinkgoTestingT, description string, specReporters []Reporter) bool {
    deprecationTracker.TrackDeprecation(types.Deprecations.CustomReporter())
    specReporters = append(specReporters, buildDefaultReporter())
    return runSpecsWithCustomReporters(t, description, specReporters)
}

//To run your tests with your custom reporter(s) (and *not* Ginkgo's default reporter), replace
//RunSpecs() with this method. Note that parallel tests will not work correctly without the default reporter
func RunSpecsWithCustomReporters(t GinkgoTestingT, description string, specReporters []Reporter) bool {
    deprecationTracker.TrackDeprecation(types.Deprecations.CustomReporter())
    return runSpecsWithCustomReporters(t, description, specReporters)
}

func runSpecsWithCustomReporters(t GinkgoTestingT, description string, specReporters []Reporter) bool {
    writer := GinkgoWriter.(*writer.Writer)
    writer.SetStream(config.DefaultReporterConfig.Verbose)
    reporters := make([]reporters.Reporter, len(specReporters))
    for i, reporter := range specReporters {
        reporters[i] = reporter
    }
    passed, hasFocusedTests := global.Suite.Run(t, description, reporters, writer, config.GinkgoConfig)

    if deprecationTracker.DidTrackDeprecations() {
        fmt.Fprintln(colorable.NewColorableStderr(), deprecationTracker.DeprecationsReport())
    }

    if passed && hasFocusedTests && strings.TrimSpace(os.Getenv("GINKGO_EDITOR_INTEGRATION")) == "" {
        fmt.Println("PASS | FOCUSED")
        os.Exit(types.GINKGO_FOCUS_EXIT_CODE)
    }
    return passed
}

func buildDefaultReporter() Reporter {
    remoteReportingServer := config.GinkgoConfig.StreamHost
    if remoteReportingServer == "" {
        stenographer := stenographer.New(!config.DefaultReporterConfig.NoColor, config.GinkgoConfig.FlakeAttempts > 1, colorable.NewColorableStdout())
        return reporters.NewDefaultReporter(config.DefaultReporterConfig, stenographer)
    } else {
        debugFile := ""
        if config.GinkgoConfig.DebugParallel {
            debugFile = fmt.Sprintf("ginkgo-node-%d.log", config.GinkgoConfig.ParallelNode)
        }
        return remote.NewForwardingReporter(config.DefaultReporterConfig, remoteReportingServer, &http.Client{}, remote.NewOutputInterceptor(), GinkgoWriter.(*writer.Writer), debugFile)
    }
}

//Skip notifies Ginkgo that the current spec was skipped.
func Skip(message string, callerSkip ...int) {
    skip := 0
    if len(callerSkip) > 0 {
        skip = callerSkip[0]
    }

    global.Failer.Skip(message, codelocation.New(skip+1))
    panic(GINKGO_PANIC)
}

//Fail notifies Ginkgo that the current spec has failed. (Gomega will call Fail for you automatically when an assertion fails.)
func Fail(message string, callerSkip ...int) {
    skip := 0
    if len(callerSkip) > 0 {
        skip = callerSkip[0]
    }

    global.Failer.Fail(message, codelocation.New(skip+1))
    panic(GINKGO_PANIC)
}

//GinkgoRecover should be deferred at the top of any spawned goroutine that (may) call `Fail`
//Since Gomega assertions call fail, you should throw a `defer GinkgoRecover()` at the top of any goroutine that
//calls out to Gomega
//
//Here's why: Ginkgo's `Fail` method records the failure and then panics to prevent
//further assertions from running. This panic must be recovered. Ginkgo does this for you
//if the panic originates in a Ginkgo node (an It, BeforeEach, etc...)
//
//Unfortunately, if a panic originates on a goroutine *launched* from one of these nodes there's no
//way for Ginkgo to rescue the panic. To do this, you must remember to `defer GinkgoRecover()` at the top of such a goroutine.
func GinkgoRecover() {
    e := recover()
    if e != nil {
        global.Failer.Panic(codelocation.New(1), e)
    }
}

//Describe blocks allow you to organize your specs. A Describe block can contain any number of
//BeforeEach, AfterEach, JustBeforeEach, It, and Measurement blocks.
//
//In addition you can nest Describe, Context and When blocks. Describe, Context and When blocks are functionally
//equivalent. The difference is purely semantic -- you typically Describe the behavior of an object
//or method and, within that Describe, outline a number of Contexts and Whens.
func Describe(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypeNone, codelocation.New(1))
    return true
}

//You can focus the tests within a describe block using FDescribe
func FDescribe(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypeFocused, codelocation.New(1))
    return true
}

//You can mark the tests within a describe block as pending using PDescribe
func PDescribe(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypePending, codelocation.New(1))
    return true
}

//You can mark the tests within a describe block as pending using XDescribe
func XDescribe(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypePending, codelocation.New(1))
    return true
}

//Context blocks allow you to organize your specs. A Context block can contain any number of
//BeforeEach, AfterEach, JustBeforeEach, It, and Measurement blocks.
//
//In addition you can nest Describe, Context and When blocks. Describe, Context and When blocks are functionally
//equivalent. The difference is purely semantic -- you typically Describe the behavior of an object
//or method and, within that Describe, outline a number of Contexts and Whens.
func Context(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypeNone, codelocation.New(1))
    return true
}

//You can focus the tests within a describe block using FContext
func FContext(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypeFocused, codelocation.New(1))
    return true
}

//You can mark the tests within a describe block as pending using PContext
func PContext(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypePending, codelocation.New(1))
    return true
}

//You can mark the tests within a describe block as pending using XContext
func XContext(text string, body func()) bool {
    global.Suite.PushContainerNode(text, body, types.FlagTypePending, codelocation.New(1))
    return true
}

//When blocks allow you to organize your specs. A When block can contain any number of
//BeforeEach, AfterEach, JustBeforeEach, It, and Measurement blocks.
//
//In addition you can nest Describe, Context and When blocks. Describe, Context and When blocks are functionally
//equivalent. The difference is purely semantic -- you typically Describe the behavior of an object
//or method and, within that Describe, outline a number of Contexts and Whens.
func When(text string, body func()) bool {
    global.Suite.PushContainerNode("when "+text, body, types.FlagTypeNone, codelocation.New(1))
    return true
}

//You can focus the tests within a describe block using FWhen
func FWhen(text string, body func()) bool {
    global.Suite.PushContainerNode("when "+text, body, types.FlagTypeFocused, codelocation.New(1))
    return true
}

//You can mark the tests within a describe block as pending using PWhen
func PWhen(text string, body func()) bool {
    global.Suite.PushContainerNode("when "+text, body, types.FlagTypePending, codelocation.New(1))
    return true
}

//You can mark the tests within a describe block as pending using XWhen
func XWhen(text string, body func()) bool {
    global.Suite.PushContainerNode("when "+text, body, types.FlagTypePending, codelocation.New(1))
    return true
}

//It blocks contain your test code and assertions. You cannot nest any other Ginkgo blocks
//within an It block.
//
//Ginkgo will normally run It blocks synchronously. To perform asynchronous tests, pass a
//function that accepts a Done channel. When you do this, you can also provide an optional timeout.
func It(text string, body interface{}, timeout ...float64) bool {
    validateBodyFunc(body, codelocation.New(1))
    global.Suite.PushItNode(text, body, types.FlagTypeNone, codelocation.New(1), parseTimeout(timeout...))
    return true
}

//You can focus individual Its using FIt
func FIt(text string, body interface{}, timeout ...float64) bool {
    validateBodyFunc(body, codelocation.New(1))
    global.Suite.PushItNode(text, body, types.FlagTypeFocused, codelocation.New(1), parseTimeout(timeout...))
    return true
}

//You can mark Its as pending using PIt
func PIt(text string, _ ...interface{}) bool {
    global.Suite.PushItNode(text, func() {}, types.FlagTypePending, codelocation.New(1), 0)
    return true
}

//You can mark Its as pending using XIt
func XIt(text string, _ ...interface{}) bool {
    global.Suite.PushItNode(text, func() {}, types.FlagTypePending, codelocation.New(1), 0)
    return true
}

//Specify blocks are aliases for It blocks and allow for more natural wording in situations
//in which "It" does not fit into a natural sentence flow. All the same protocols apply for Specify blocks
//which apply to It blocks.
func Specify(text string, body interface{}, timeout ...float64) bool {
    validateBodyFunc(body, codelocation.New(1))
    global.Suite.PushItNode(text, body, types.FlagTypeNone, codelocation.New(1), parseTimeout(timeout...))
    return true
}

//You can focus individual Specifys using FSpecify
func FSpecify(text string, body interface{}, timeout ...float64) bool {
    validateBodyFunc(body, codelocation.New(1))
    global.Suite.PushItNode(text, body, types.FlagTypeFocused, codelocation.New(1), parseTimeout(timeout...))
    return true
}

//You can mark Specifys as pending using PSpecify
func PSpecify(text string, is ...interface{}) bool {
    global.Suite.PushItNode(text, func() {}, types.FlagTypePending, codelocation.New(1), 0)
    return true
}

//You can mark Specifys as pending using XSpecify
func XSpecify(text string, is ...interface{}) bool {
    global.Suite.PushItNode(text, func() {}, types.FlagTypePending, codelocation.New(1), 0)
    return true
}

//By allows you to better document large Its.
//
//Generally you should try to keep your Its short and to the point. This is not always possible, however,
//especially in the context of integration tests that capture a particular workflow.
//
//By allows you to document such flows. By must be called within a runnable node (It, BeforeEach, Measure, etc...)
//By will simply log the passed in text to the GinkgoWriter. If By is handed a function it will immediately run the function.
func By(text string, callbacks ...func()) {
    preamble := "\x1b[1mSTEP\x1b[0m"
    if config.DefaultReporterConfig.NoColor {
        preamble = "STEP"
    }
    fmt.Fprintln(GinkgoWriter, preamble+": "+text)
    if len(callbacks) == 1 {
        callbacks[0]()
    }
    if len(callbacks) > 1 {
        panic("just one callback per By, please")
    }
}

//Measure blocks run the passed in body function repeatedly (determined by the samples argument)
//and accumulate metrics provided to the Benchmarker by the body function.
//
//The body function must have the signature:
//    func(b Benchmarker)
func Measure(text string, body interface{}, samples int) bool {
    deprecationTracker.TrackDeprecation(types.Deprecations.Measure(), codelocation.New(1))
    global.Suite.PushMeasureNode(text, body, types.FlagTypeNone, codelocation.New(1), samples)
    return true
}

//You can focus individual Measures using FMeasure
func FMeasure(text string, body interface{}, samples int) bool {
    deprecationTracker.TrackDeprecation(types.Deprecations.Measure(), codelocation.New(1))
    global.Suite.PushMeasureNode(text, body, types.FlagTypeFocused, codelocation.New(1), samples)
    return true
}

//You can mark Measurements as pending using PMeasure
func PMeasure(text string, _ ...interface{}) bool {
    deprecationTracker.TrackDeprecation(types.Deprecations.Measure(), codelocation.New(1))
    global.Suite.PushMeasureNode(text, func(b Benchmarker) {}, types.FlagTypePending, codelocation.New(1), 0)
    return true
}

//You can mark Measurements as pending using XMeasure
func XMeasure(text string, _ ...interface{}) bool {
    deprecationTracker.TrackDeprecation(types.Deprecations.Measure(), codelocation.New(1))
    global.Suite.PushMeasureNode(text, func(b Benchmarker) {}, types.FlagTypePending, codelocation.New(1), 0)
    return true
}

//BeforeSuite blocks are run just once before any specs are run. When running in parallel, each
//parallel node process will call BeforeSuite.
//
//BeforeSuite blocks can be made asynchronous by providing a body function that accepts a Done channel
//
//You may only register *one* BeforeSuite handler per test suite. You typically do so in your bootstrap file at the top level.
func BeforeSuite(body interface{}, timeout ...float64) bool {
    validateBodyFunc(body, codelocation.New(1))
    global.Suite.SetBeforeSuiteNode(body, codelocation.New(1), parseTimeout(timeout...))
    return true
}

//AfterSuite blocks are *always* run after all the specs regardless of whether specs have passed or failed.
//Moreover, if Ginkgo receives an interrupt signal (^C) it will attempt to run the AfterSuite before exiting.
//
//When running in parallel, each parallel node process will call AfterSuite.
//
//AfterSuite blocks can be made asynchronous by providing a body function that accepts a Done channel
//
//You may only register *one* AfterSuite handler per test suite. You typically do so in your bootstrap file at the top level.
func AfterSuite(body interface{}, timeout ...float64) bool {
    validateBodyFunc(body, codelocation.New(1))
    global.Suite.SetAfterSuiteNode(body, codelocation.New(1), parseTimeout(timeout...))
    return true
}

//SynchronizedBeforeSuite blocks are primarily meant to solve the problem of setting up singleton external resources shared across
//nodes when running tests in parallel. For example, say you have a shared database that you can only start one instance of that
//must be used in your tests. When running in parallel, only one node should set up the database and all other nodes should wait
//until that node is done before running.
//
//SynchronizedBeforeSuite accomplishes this by taking *two* function arguments. The first is only run on parallel node #1. The second is
//run on all nodes, but *only* after the first function completes successfully. Ginkgo also makes it possible to send data from the first function (on Node 1)
//to the second function (on all the other nodes).
//
//The functions have the following signatures. The first function (which only runs on node 1) has the signature:
//
//    func() []byte
//
//or, to run asynchronously:
//
//    func(done Done) []byte
//
//The byte array returned by the first function is then passed to the second function, which has the signature:
//
//    func(data []byte)
//
//or, to run asynchronously:
//
//    func(data []byte, done Done)
//
//Here's a simple pseudo-code example that starts a shared database on Node 1 and shares the database's address with the other nodes:
//
//    var dbClient db.Client
//    var dbRunner db.Runner
//
//    var _ = SynchronizedBeforeSuite(func() []byte {
//        dbRunner = db.NewRunner()
//        err := dbRunner.Start()
//        Ω(err).ShouldNot(HaveOccurred())
//        return []byte(dbRunner.URL)
//    }, func(data []byte) {
//        dbClient = db.NewClient()
//        err := dbClient.Connect(string(data))
//        Ω(err).ShouldNot(HaveOccurred())
//    })
func SynchronizedBeforeSuite(node1Body interface{}, allNodesBody interface{}, timeout ...float64) bool {
    global.Suite.SetSynchronizedBeforeSuiteNode(
        node1Body,
        allNodesBody,
        codelocation.New(1),
        parseTimeout(timeout...),
    )
    return true
}

//SynchronizedAfterSuite blocks complement the SynchronizedBeforeSuite blocks in solving the problem of setting up
//external singleton resources shared across nodes when running tests in parallel.
//
//SynchronizedAfterSuite accomplishes this by taking *two* function arguments. The first runs on all nodes. The second runs only on parallel node #1
//and *only* after all other nodes have finished and exited. This ensures that node 1, and any resources it is running, remain alive until
//all other nodes are finished.
|
||||
//
|
||||
//Both functions have the same signature: either func() or func(done Done) to run asynchronously.
|
||||
//
|
||||
//Here's a pseudo-code example that complements that given in SynchronizedBeforeSuite. Here, SynchronizedAfterSuite is used to tear down the shared database
|
||||
//only after all nodes have finished:
|
||||
//
|
||||
// var _ = SynchronizedAfterSuite(func() {
|
||||
// dbClient.Cleanup()
|
||||
// }, func() {
|
||||
// dbRunner.Stop()
|
||||
// })
|
||||
func SynchronizedAfterSuite(allNodesBody interface{}, node1Body interface{}, timeout ...float64) bool {
|
||||
global.Suite.SetSynchronizedAfterSuiteNode(
|
||||
allNodesBody,
|
||||
node1Body,
|
||||
codelocation.New(1),
|
||||
parseTimeout(timeout...),
|
||||
)
|
||||
return true
|
||||
}
|
||||
|
||||
//BeforeEach blocks are run before It blocks. When multiple BeforeEach blocks are defined in nested
|
||||
//Describe and Context blocks the outermost BeforeEach blocks are run first.
|
||||
//
|
||||
//Like It blocks, BeforeEach blocks can be made asynchronous by providing a body function that accepts
|
||||
//a Done channel
|
||||
func BeforeEach(body interface{}, timeout ...float64) bool {
|
||||
validateBodyFunc(body, codelocation.New(1))
|
||||
global.Suite.PushBeforeEachNode(body, codelocation.New(1), parseTimeout(timeout...))
|
||||
return true
|
||||
}
|
||||
|
||||
//JustBeforeEach blocks are run before It blocks but *after* all BeforeEach blocks. For more details,
|
||||
//read the [documentation](http://onsi.github.io/ginkgo/#separating_creation_and_configuration_)
|
||||
//
|
||||
//Like It blocks, BeforeEach blocks can be made asynchronous by providing a body function that accepts
|
||||
//a Done channel
|
||||
func JustBeforeEach(body interface{}, timeout ...float64) bool {
|
||||
validateBodyFunc(body, codelocation.New(1))
|
||||
global.Suite.PushJustBeforeEachNode(body, codelocation.New(1), parseTimeout(timeout...))
|
||||
return true
|
||||
}
|
||||
|
||||
//JustAfterEach blocks are run after It blocks but *before* all AfterEach blocks. For more details,
|
||||
//read the [documentation](http://onsi.github.io/ginkgo/#separating_creation_and_configuration_)
|
||||
//
|
||||
//Like It blocks, JustAfterEach blocks can be made asynchronous by providing a body function that accepts
|
||||
//a Done channel
|
||||
func JustAfterEach(body interface{}, timeout ...float64) bool {
|
||||
validateBodyFunc(body, codelocation.New(1))
|
||||
global.Suite.PushJustAfterEachNode(body, codelocation.New(1), parseTimeout(timeout...))
|
||||
return true
|
||||
}
|
||||
|
||||
//AfterEach blocks are run after It blocks. When multiple AfterEach blocks are defined in nested
|
||||
//Describe and Context blocks the innermost AfterEach blocks are run first.
|
||||
//
|
||||
//Like It blocks, AfterEach blocks can be made asynchronous by providing a body function that accepts
|
||||
//a Done channel
|
||||
func AfterEach(body interface{}, timeout ...float64) bool {
|
||||
validateBodyFunc(body, codelocation.New(1))
|
||||
global.Suite.PushAfterEachNode(body, codelocation.New(1), parseTimeout(timeout...))
|
||||
return true
|
||||
}
|
||||
|
||||
func validateBodyFunc(body interface{}, cl types.CodeLocation) {
|
||||
t := reflect.TypeOf(body)
|
||||
if t.Kind() != reflect.Func {
|
||||
return
|
||||
}
|
||||
|
||||
if t.NumOut() > 0 {
|
||||
return
|
||||
}
|
||||
|
||||
if t.NumIn() == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
if t.In(0) == reflect.TypeOf(make(Done)) {
|
||||
deprecationTracker.TrackDeprecation(types.Deprecations.Async(), cl)
|
||||
}
|
||||
}
|
||||
|
||||
func parseTimeout(timeout ...float64) time.Duration {
|
||||
if len(timeout) == 0 {
|
||||
return global.DefaultTimeout
|
||||
} else {
|
||||
return time.Duration(timeout[0] * float64(time.Second))
|
||||
}
|
||||
}
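A minimal sketch of how the DSL above composes in a hypothetical v1 spec: the trailing float64 on setup nodes is the timeout in seconds that parseTimeout converts to a time.Duration, and By logs a STEP line to the GinkgoWriter from inside a runnable node.

```
package books_test

import (
	. "github.com/onsi/ginkgo"
)

var _ = Describe("the library", func() {
	BeforeEach(func(done Done) {
		// asynchronous setup: close(done) must happen within 2.5 seconds
		close(done)
	}, 2.5)

	It("lends books", func() {
		By("checking the catalog")
		By("recording the loan", func() {
			// when handed a function, By runs it immediately
		})
	})
})
```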
48
vendor/github.com/onsi/ginkgo/internal/codelocation/code_location.go
generated
vendored
@ -1,48 +0,0 @@
package codelocation

import (
	"regexp"
	"runtime"
	"runtime/debug"
	"strings"

	"github.com/onsi/ginkgo/types"
)

func New(skip int) types.CodeLocation {
	_, file, line, _ := runtime.Caller(skip + 1)
	stackTrace := PruneStack(string(debug.Stack()), skip+1)
	return types.CodeLocation{FileName: file, LineNumber: line, FullStackTrace: stackTrace}
}

// PruneStack removes references to functions that are internal to Ginkgo
// and the Go runtime from a stack string and a certain number of stack entries
// at the beginning of the stack. The stack string has the format
// as returned by runtime/debug.Stack. The leading goroutine information is
// optional and always removed if present. Beware that runtime/debug.Stack
// adds itself as first entry, so typically skip must be >= 1 to remove that
// entry.
func PruneStack(fullStackTrace string, skip int) string {
	stack := strings.Split(fullStackTrace, "\n")
	// Ensure that the even entries are the method names and
	// the odd entries the source code information.
	if len(stack) > 0 && strings.HasPrefix(stack[0], "goroutine ") {
		// Ignore "goroutine 29 [running]:" line.
		stack = stack[1:]
	}
	// The "+1" is for skipping over the initial entry, which is
	// runtime/debug.Stack() itself.
	if len(stack) > 2*(skip+1) {
		stack = stack[2*(skip+1):]
	}
	prunedStack := []string{}
	re := regexp.MustCompile(`\/ginkgo\/|\/pkg\/testing\/|\/pkg\/runtime\/`)
	for i := 0; i < len(stack)/2; i++ {
		// We filter out based on the source code file name.
		if !re.Match([]byte(stack[i*2+1])) {
			prunedStack = append(prunedStack, stack[i*2])
			prunedStack = append(prunedStack, stack[i*2+1])
		}
	}
	return strings.Join(prunedStack, "\n")
}
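codelocation.New leans on runtime.Caller's skip parameter: passing skip+1 steps past New itself so the caller's location is reported. A standalone sketch of that arithmetic (the helper name `here` is hypothetical):

```
package main

import (
	"fmt"
	"runtime"
)

// here(0) reports the call site of here's caller, mirroring how
// codelocation.New(skip) forwards skip+1 to runtime.Caller.
func here(skip int) string {
	_, file, line, _ := runtime.Caller(skip + 1)
	return fmt.Sprintf("%s:%d", file, line)
}

func main() {
	fmt.Println(here(0)) // prints the location of this call site
}
```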
151
vendor/github.com/onsi/ginkgo/internal/containernode/container_node.go
generated
vendored
@ -1,151 +0,0 @@
package containernode

import (
	"math/rand"
	"sort"

	"github.com/onsi/ginkgo/internal/leafnodes"
	"github.com/onsi/ginkgo/types"
)

type subjectOrContainerNode struct {
	containerNode *ContainerNode
	subjectNode   leafnodes.SubjectNode
}

func (n subjectOrContainerNode) text() string {
	if n.containerNode != nil {
		return n.containerNode.Text()
	} else {
		return n.subjectNode.Text()
	}
}

type CollatedNodes struct {
	Containers []*ContainerNode
	Subject    leafnodes.SubjectNode
}

type ContainerNode struct {
	text         string
	flag         types.FlagType
	codeLocation types.CodeLocation

	setupNodes               []leafnodes.BasicNode
	subjectAndContainerNodes []subjectOrContainerNode
}

func New(text string, flag types.FlagType, codeLocation types.CodeLocation) *ContainerNode {
	return &ContainerNode{
		text:         text,
		flag:         flag,
		codeLocation: codeLocation,
	}
}

func (container *ContainerNode) Shuffle(r *rand.Rand) {
	sort.Sort(container)
	permutation := r.Perm(len(container.subjectAndContainerNodes))
	shuffledNodes := make([]subjectOrContainerNode, len(container.subjectAndContainerNodes))
	for i, j := range permutation {
		shuffledNodes[i] = container.subjectAndContainerNodes[j]
	}
	container.subjectAndContainerNodes = shuffledNodes
}

func (node *ContainerNode) BackPropagateProgrammaticFocus() bool {
	if node.flag == types.FlagTypePending {
		return false
	}

	shouldUnfocus := false
	for _, subjectOrContainerNode := range node.subjectAndContainerNodes {
		if subjectOrContainerNode.containerNode != nil {
			shouldUnfocus = subjectOrContainerNode.containerNode.BackPropagateProgrammaticFocus() || shouldUnfocus
		} else {
			shouldUnfocus = (subjectOrContainerNode.subjectNode.Flag() == types.FlagTypeFocused) || shouldUnfocus
		}
	}

	if shouldUnfocus {
		if node.flag == types.FlagTypeFocused {
			node.flag = types.FlagTypeNone
		}
		return true
	}

	return node.flag == types.FlagTypeFocused
}

func (node *ContainerNode) Collate() []CollatedNodes {
	return node.collate([]*ContainerNode{})
}

func (node *ContainerNode) collate(enclosingContainers []*ContainerNode) []CollatedNodes {
	collated := make([]CollatedNodes, 0)

	containers := make([]*ContainerNode, len(enclosingContainers))
	copy(containers, enclosingContainers)
	containers = append(containers, node)

	for _, subjectOrContainer := range node.subjectAndContainerNodes {
		if subjectOrContainer.containerNode != nil {
			collated = append(collated, subjectOrContainer.containerNode.collate(containers)...)
		} else {
			collated = append(collated, CollatedNodes{
				Containers: containers,
				Subject:    subjectOrContainer.subjectNode,
			})
		}
	}

	return collated
}

func (node *ContainerNode) PushContainerNode(container *ContainerNode) {
	node.subjectAndContainerNodes = append(node.subjectAndContainerNodes, subjectOrContainerNode{containerNode: container})
}

func (node *ContainerNode) PushSubjectNode(subject leafnodes.SubjectNode) {
	node.subjectAndContainerNodes = append(node.subjectAndContainerNodes, subjectOrContainerNode{subjectNode: subject})
}

func (node *ContainerNode) PushSetupNode(setupNode leafnodes.BasicNode) {
	node.setupNodes = append(node.setupNodes, setupNode)
}

func (node *ContainerNode) SetupNodesOfType(nodeType types.SpecComponentType) []leafnodes.BasicNode {
	nodes := []leafnodes.BasicNode{}
	for _, setupNode := range node.setupNodes {
		if setupNode.Type() == nodeType {
			nodes = append(nodes, setupNode)
		}
	}
	return nodes
}

func (node *ContainerNode) Text() string {
	return node.text
}

func (node *ContainerNode) CodeLocation() types.CodeLocation {
	return node.codeLocation
}

func (node *ContainerNode) Flag() types.FlagType {
	return node.flag
}

//sort.Interface

func (node *ContainerNode) Len() int {
	return len(node.subjectAndContainerNodes)
}

func (node *ContainerNode) Less(i, j int) bool {
	return node.subjectAndContainerNodes[i].text() < node.subjectAndContainerNodes[j].text()
}

func (node *ContainerNode) Swap(i, j int) {
	node.subjectAndContainerNodes[i], node.subjectAndContainerNodes[j] = node.subjectAndContainerNodes[j], node.subjectAndContainerNodes[i]
}
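Shuffle sorts before permuting, which makes the resulting order a pure function of the random seed: a failing order can be reproduced by re-running with the same seed. A minimal standalone sketch of the same idea:

```
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

func main() {
	specs := []string{"b", "c", "a"}
	sort.Strings(specs) // deterministic starting order, as Shuffle does via sort.Sort

	r := rand.New(rand.NewSource(1138)) // stands in for the suite's random seed
	perm := r.Perm(len(specs))
	shuffled := make([]string, len(specs))
	for i, j := range perm {
		shuffled[i] = specs[j]
	}
	fmt.Println(shuffled) // same output for the same seed, every run
}
```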
22
vendor/github.com/onsi/ginkgo/internal/global/init.go
generated
vendored
@ -1,22 +0,0 @@
package global

import (
	"time"

	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/internal/suite"
)

const DefaultTimeout = time.Duration(1 * time.Second)

var Suite *suite.Suite
var Failer *failer.Failer

func init() {
	InitializeGlobals()
}

func InitializeGlobals() {
	Failer = failer.New()
	Suite = suite.New(Failer)
}
103
vendor/github.com/onsi/ginkgo/internal/leafnodes/benchmarker.go
generated
vendored
@ -1,103 +0,0 @@
package leafnodes

import (
	"math"
	"time"

	"sync"

	"github.com/onsi/ginkgo/types"
)

type benchmarker struct {
	mu           sync.Mutex
	measurements map[string]*types.SpecMeasurement
	orderCounter int
}

func newBenchmarker() *benchmarker {
	return &benchmarker{
		measurements: make(map[string]*types.SpecMeasurement),
	}
}

func (b *benchmarker) Time(name string, body func(), info ...interface{}) (elapsedTime time.Duration) {
	t := time.Now()
	body()
	elapsedTime = time.Since(t)

	b.mu.Lock()
	defer b.mu.Unlock()
	measurement := b.getMeasurement(name, "Fastest Time", "Slowest Time", "Average Time", "s", 3, info...)
	measurement.Results = append(measurement.Results, elapsedTime.Seconds())

	return
}

func (b *benchmarker) RecordValue(name string, value float64, info ...interface{}) {
	b.mu.Lock()
	measurement := b.getMeasurement(name, "Smallest", " Largest", " Average", "", 3, info...)
	defer b.mu.Unlock()
	measurement.Results = append(measurement.Results, value)
}

func (b *benchmarker) RecordValueWithPrecision(name string, value float64, units string, precision int, info ...interface{}) {
	b.mu.Lock()
	measurement := b.getMeasurement(name, "Smallest", " Largest", " Average", units, precision, info...)
	defer b.mu.Unlock()
	measurement.Results = append(measurement.Results, value)
}

func (b *benchmarker) getMeasurement(name string, smallestLabel string, largestLabel string, averageLabel string, units string, precision int, info ...interface{}) *types.SpecMeasurement {
	measurement, ok := b.measurements[name]
	if !ok {
		var computedInfo interface{}
		computedInfo = nil
		if len(info) > 0 {
			computedInfo = info[0]
		}
		measurement = &types.SpecMeasurement{
			Name:          name,
			Info:          computedInfo,
			Order:         b.orderCounter,
			SmallestLabel: smallestLabel,
			LargestLabel:  largestLabel,
			AverageLabel:  averageLabel,
			Units:         units,
			Precision:     precision,
			Results:       make([]float64, 0),
		}
		b.measurements[name] = measurement
		b.orderCounter++
	}

	return measurement
}

func (b *benchmarker) measurementsReport() map[string]*types.SpecMeasurement {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, measurement := range b.measurements {
		measurement.Smallest = math.MaxFloat64
		measurement.Largest = -math.MaxFloat64
		sum := float64(0)
		sumOfSquares := float64(0)

		for _, result := range measurement.Results {
			if result > measurement.Largest {
				measurement.Largest = result
			}
			if result < measurement.Smallest {
				measurement.Smallest = result
			}
			sum += result
			sumOfSquares += result * result
		}

		n := float64(len(measurement.Results))
		measurement.Average = sum / n
		measurement.StdDeviation = math.Sqrt(sumOfSquares/n - (sum/n)*(sum/n))
	}

	return b.measurements
}
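measurementsReport computes the population standard deviation as sqrt(E[x²] − (E[x])²). A self-contained check of that formula, using the same accumulation the loop above performs:

```
package main

import (
	"fmt"
	"math"
)

func main() {
	results := []float64{1, 2, 3, 4}
	sum, sumOfSquares := 0.0, 0.0
	for _, r := range results {
		sum += r
		sumOfSquares += r * r
	}
	n := float64(len(results))
	mean := sum / n
	stdDev := math.Sqrt(sumOfSquares/n - mean*mean)
	fmt.Printf("average=%.2f stddev=%.4f\n", mean, stdDev) // average=2.50 stddev=1.1180
}
```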
19
vendor/github.com/onsi/ginkgo/internal/leafnodes/interfaces.go
generated
vendored
@ -1,19 +0,0 @@
package leafnodes

import (
	"github.com/onsi/ginkgo/types"
)

type BasicNode interface {
	Type() types.SpecComponentType
	Run() (types.SpecState, types.SpecFailure)
	CodeLocation() types.CodeLocation
}

type SubjectNode interface {
	BasicNode

	Text() string
	Flag() types.FlagType
	Samples() int
}
47
vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go
generated
vendored
@ -1,47 +0,0 @@
package leafnodes

import (
	"time"

	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/types"
)

type ItNode struct {
	runner *runner

	flag types.FlagType
	text string
}

func NewItNode(text string, body interface{}, flag types.FlagType, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer, componentIndex int) *ItNode {
	return &ItNode{
		runner: newRunner(body, codeLocation, timeout, failer, types.SpecComponentTypeIt, componentIndex),
		flag:   flag,
		text:   text,
	}
}

func (node *ItNode) Run() (outcome types.SpecState, failure types.SpecFailure) {
	return node.runner.run()
}

func (node *ItNode) Type() types.SpecComponentType {
	return types.SpecComponentTypeIt
}

func (node *ItNode) Text() string {
	return node.text
}

func (node *ItNode) Flag() types.FlagType {
	return node.flag
}

func (node *ItNode) CodeLocation() types.CodeLocation {
	return node.runner.codeLocation
}

func (node *ItNode) Samples() int {
	return 1
}
62
vendor/github.com/onsi/ginkgo/internal/leafnodes/measure_node.go
generated
vendored
@ -1,62 +0,0 @@
package leafnodes

import (
	"reflect"

	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/types"
)

type MeasureNode struct {
	runner *runner

	text        string
	flag        types.FlagType
	samples     int
	benchmarker *benchmarker
}

func NewMeasureNode(text string, body interface{}, flag types.FlagType, codeLocation types.CodeLocation, samples int, failer *failer.Failer, componentIndex int) *MeasureNode {
	benchmarker := newBenchmarker()

	wrappedBody := func() {
		reflect.ValueOf(body).Call([]reflect.Value{reflect.ValueOf(benchmarker)})
	}

	return &MeasureNode{
		runner: newRunner(wrappedBody, codeLocation, 0, failer, types.SpecComponentTypeMeasure, componentIndex),

		text:        text,
		flag:        flag,
		samples:     samples,
		benchmarker: benchmarker,
	}
}

func (node *MeasureNode) Run() (outcome types.SpecState, failure types.SpecFailure) {
	return node.runner.run()
}

func (node *MeasureNode) MeasurementsReport() map[string]*types.SpecMeasurement {
	return node.benchmarker.measurementsReport()
}

func (node *MeasureNode) Type() types.SpecComponentType {
	return types.SpecComponentTypeMeasure
}

func (node *MeasureNode) Text() string {
	return node.text
}

func (node *MeasureNode) Flag() types.FlagType {
	return node.flag
}

func (node *MeasureNode) CodeLocation() types.CodeLocation {
	return node.runner.codeLocation
}

func (node *MeasureNode) Samples() int {
	return node.samples
}
117
vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go
generated
vendored
@ -1,117 +0,0 @@
package leafnodes

import (
	"fmt"
	"reflect"
	"time"

	"github.com/onsi/ginkgo/internal/codelocation"
	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/types"
)

type runner struct {
	isAsync          bool
	asyncFunc        func(chan<- interface{})
	syncFunc         func()
	codeLocation     types.CodeLocation
	timeoutThreshold time.Duration
	nodeType         types.SpecComponentType
	componentIndex   int
	failer           *failer.Failer
}

func newRunner(body interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer, nodeType types.SpecComponentType, componentIndex int) *runner {
	bodyType := reflect.TypeOf(body)
	if bodyType.Kind() != reflect.Func {
		panic(fmt.Sprintf("Expected a function but got something else at %v", codeLocation))
	}

	runner := &runner{
		codeLocation:     codeLocation,
		timeoutThreshold: timeout,
		failer:           failer,
		nodeType:         nodeType,
		componentIndex:   componentIndex,
	}

	switch bodyType.NumIn() {
	case 0:
		runner.syncFunc = body.(func())
		return runner
	case 1:
		if !(bodyType.In(0).Kind() == reflect.Chan && bodyType.In(0).Elem().Kind() == reflect.Interface) {
			panic(fmt.Sprintf("Must pass a Done channel to function at %v", codeLocation))
		}

		wrappedBody := func(done chan<- interface{}) {
			bodyValue := reflect.ValueOf(body)
			bodyValue.Call([]reflect.Value{reflect.ValueOf(done)})
		}

		runner.isAsync = true
		runner.asyncFunc = wrappedBody
		return runner
	}

	panic(fmt.Sprintf("Too many arguments to function at %v", codeLocation))
}

func (r *runner) run() (outcome types.SpecState, failure types.SpecFailure) {
	if r.isAsync {
		return r.runAsync()
	} else {
		return r.runSync()
	}
}

func (r *runner) runAsync() (outcome types.SpecState, failure types.SpecFailure) {
	done := make(chan interface{}, 1)

	go func() {
		finished := false

		defer func() {
			if e := recover(); e != nil || !finished {
				r.failer.Panic(codelocation.New(2), e)
				select {
				case <-done:
					break
				default:
					close(done)
				}
			}
		}()

		r.asyncFunc(done)
		finished = true
	}()

	// If this goroutine gets no CPU time before the select block,
	// the <-done case may complete even if the test took longer than the timeoutThreshold.
	// This can cause flaky behaviour, but we haven't seen it in the wild.
	select {
	case <-done:
	case <-time.After(r.timeoutThreshold):
		r.failer.Timeout(r.codeLocation)
	}

	failure, outcome = r.failer.Drain(r.nodeType, r.componentIndex, r.codeLocation)
	return
}

func (r *runner) runSync() (outcome types.SpecState, failure types.SpecFailure) {
	finished := false

	defer func() {
		if e := recover(); e != nil || !finished {
			r.failer.Panic(codelocation.New(2), e)
		}

		failure, outcome = r.failer.Drain(r.nodeType, r.componentIndex, r.codeLocation)
	}()

	r.syncFunc()
	finished = true

	return
}
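newRunner treats a one-argument body as asynchronous: the body signals completion by closing its Done channel, and the caller races that signal against a timeout. A stripped-down sketch of the contract (not the runner itself, which also routes panics and failures through the failer):

```
package main

import (
	"fmt"
	"time"
)

// runAsync returns true if body closes done before the timeout fires.
func runAsync(body func(done chan<- interface{}), timeout time.Duration) bool {
	done := make(chan interface{}, 1)
	go body(done)
	select {
	case <-done:
		return true // body finished in time
	case <-time.After(timeout):
		return false // timed out
	}
}

func main() {
	ok := runAsync(func(done chan<- interface{}) {
		time.Sleep(10 * time.Millisecond)
		close(done)
	}, time.Second)
	fmt.Println("passed:", ok)
}
```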
48
vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go
generated
vendored
@ -1,48 +0,0 @@
package leafnodes

import (
	"time"

	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/types"
)

type SetupNode struct {
	runner *runner
}

func (node *SetupNode) Run() (outcome types.SpecState, failure types.SpecFailure) {
	return node.runner.run()
}

func (node *SetupNode) Type() types.SpecComponentType {
	return node.runner.nodeType
}

func (node *SetupNode) CodeLocation() types.CodeLocation {
	return node.runner.codeLocation
}

func NewBeforeEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer, componentIndex int) *SetupNode {
	return &SetupNode{
		runner: newRunner(body, codeLocation, timeout, failer, types.SpecComponentTypeBeforeEach, componentIndex),
	}
}

func NewAfterEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer, componentIndex int) *SetupNode {
	return &SetupNode{
		runner: newRunner(body, codeLocation, timeout, failer, types.SpecComponentTypeAfterEach, componentIndex),
	}
}

func NewJustBeforeEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer, componentIndex int) *SetupNode {
	return &SetupNode{
		runner: newRunner(body, codeLocation, timeout, failer, types.SpecComponentTypeJustBeforeEach, componentIndex),
	}
}

func NewJustAfterEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer, componentIndex int) *SetupNode {
	return &SetupNode{
		runner: newRunner(body, codeLocation, timeout, failer, types.SpecComponentTypeJustAfterEach, componentIndex),
	}
}
55
vendor/github.com/onsi/ginkgo/internal/leafnodes/suite_nodes.go
generated
vendored
@ -1,55 +0,0 @@
package leafnodes

import (
	"time"

	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/types"
)

type SuiteNode interface {
	Run(parallelNode int, parallelTotal int, syncHost string) bool
	Passed() bool
	Summary() *types.SetupSummary
}

type simpleSuiteNode struct {
	runner  *runner
	outcome types.SpecState
	failure types.SpecFailure
	runTime time.Duration
}

func (node *simpleSuiteNode) Run(parallelNode int, parallelTotal int, syncHost string) bool {
	t := time.Now()
	node.outcome, node.failure = node.runner.run()
	node.runTime = time.Since(t)

	return node.outcome == types.SpecStatePassed
}

func (node *simpleSuiteNode) Passed() bool {
	return node.outcome == types.SpecStatePassed
}

func (node *simpleSuiteNode) Summary() *types.SetupSummary {
	return &types.SetupSummary{
		ComponentType: node.runner.nodeType,
		CodeLocation:  node.runner.codeLocation,
		State:         node.outcome,
		RunTime:       node.runTime,
		Failure:       node.failure,
	}
}

func NewBeforeSuiteNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer) SuiteNode {
	return &simpleSuiteNode{
		runner: newRunner(body, codeLocation, timeout, failer, types.SpecComponentTypeBeforeSuite, 0),
	}
}

func NewAfterSuiteNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer) SuiteNode {
	return &simpleSuiteNode{
		runner: newRunner(body, codeLocation, timeout, failer, types.SpecComponentTypeAfterSuite, 0),
	}
}
90
vendor/github.com/onsi/ginkgo/internal/leafnodes/synchronized_after_suite_node.go
generated
vendored
@ -1,90 +0,0 @@
package leafnodes

import (
	"encoding/json"
	"io/ioutil"
	"net/http"
	"time"

	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/types"
)

type synchronizedAfterSuiteNode struct {
	runnerA *runner
	runnerB *runner

	outcome types.SpecState
	failure types.SpecFailure
	runTime time.Duration
}

func NewSynchronizedAfterSuiteNode(bodyA interface{}, bodyB interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer) SuiteNode {
	return &synchronizedAfterSuiteNode{
		runnerA: newRunner(bodyA, codeLocation, timeout, failer, types.SpecComponentTypeAfterSuite, 0),
		runnerB: newRunner(bodyB, codeLocation, timeout, failer, types.SpecComponentTypeAfterSuite, 0),
	}
}

func (node *synchronizedAfterSuiteNode) Run(parallelNode int, parallelTotal int, syncHost string) bool {
	node.outcome, node.failure = node.runnerA.run()

	if parallelNode == 1 {
		if parallelTotal > 1 {
			node.waitUntilOtherNodesAreDone(syncHost)
		}

		outcome, failure := node.runnerB.run()

		if node.outcome == types.SpecStatePassed {
			node.outcome, node.failure = outcome, failure
		}
	}

	return node.outcome == types.SpecStatePassed
}

func (node *synchronizedAfterSuiteNode) Passed() bool {
	return node.outcome == types.SpecStatePassed
}

func (node *synchronizedAfterSuiteNode) Summary() *types.SetupSummary {
	return &types.SetupSummary{
		ComponentType: node.runnerA.nodeType,
		CodeLocation:  node.runnerA.codeLocation,
		State:         node.outcome,
		RunTime:       node.runTime,
		Failure:       node.failure,
	}
}

func (node *synchronizedAfterSuiteNode) waitUntilOtherNodesAreDone(syncHost string) {
	for {
		if node.canRun(syncHost) {
			return
		}

		time.Sleep(50 * time.Millisecond)
	}
}

func (node *synchronizedAfterSuiteNode) canRun(syncHost string) bool {
	resp, err := http.Get(syncHost + "/RemoteAfterSuiteData")
	if err != nil || resp.StatusCode != http.StatusOK {
		return false
	}

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return false
	}
	resp.Body.Close()

	afterSuiteData := types.RemoteAfterSuiteData{}
	err = json.Unmarshal(body, &afterSuiteData)
	if err != nil {
		return false
	}

	return afterSuiteData.CanRun
}
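canRun polls the sync server every 50ms until every other node has exited. A standalone sketch of that handshake against a hypothetical in-process server; the local afterSuiteData struct stands in for types.RemoteAfterSuiteData and is an assumption for illustration:

```
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

type afterSuiteData struct {
	CanRun bool
}

func canRun(syncHost string) bool {
	resp, err := http.Get(syncHost + "/RemoteAfterSuiteData")
	if err != nil || resp.StatusCode != http.StatusOK {
		return false
	}
	defer resp.Body.Close()
	var data afterSuiteData
	return json.NewDecoder(resp.Body).Decode(&data) == nil && data.CanRun
}

func main() {
	// a stand-in sync server that immediately reports all nodes done
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(afterSuiteData{CanRun: true})
	}))
	defer server.Close()

	for !canRun(server.URL) {
		time.Sleep(50 * time.Millisecond)
	}
	fmt.Println("all other nodes are done; safe to run the node-1 cleanup")
}
```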
181
vendor/github.com/onsi/ginkgo/internal/leafnodes/synchronized_before_suite_node.go
generated
vendored
@ -1,181 +0,0 @@
package leafnodes

import (
	"bytes"
	"encoding/json"
	"io/ioutil"
	"net/http"
	"reflect"
	"time"

	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/types"
)

type synchronizedBeforeSuiteNode struct {
	runnerA *runner
	runnerB *runner

	data []byte

	outcome types.SpecState
	failure types.SpecFailure
	runTime time.Duration
}

func NewSynchronizedBeforeSuiteNode(bodyA interface{}, bodyB interface{}, codeLocation types.CodeLocation, timeout time.Duration, failer *failer.Failer) SuiteNode {
	node := &synchronizedBeforeSuiteNode{}

	node.runnerA = newRunner(node.wrapA(bodyA), codeLocation, timeout, failer, types.SpecComponentTypeBeforeSuite, 0)
	node.runnerB = newRunner(node.wrapB(bodyB), codeLocation, timeout, failer, types.SpecComponentTypeBeforeSuite, 0)

	return node
}

func (node *synchronizedBeforeSuiteNode) Run(parallelNode int, parallelTotal int, syncHost string) bool {
	t := time.Now()
	defer func() {
		node.runTime = time.Since(t)
	}()

	if parallelNode == 1 {
		node.outcome, node.failure = node.runA(parallelTotal, syncHost)
	} else {
		node.outcome, node.failure = node.waitForA(syncHost)
	}

	if node.outcome != types.SpecStatePassed {
		return false
	}
	node.outcome, node.failure = node.runnerB.run()

	return node.outcome == types.SpecStatePassed
}

func (node *synchronizedBeforeSuiteNode) runA(parallelTotal int, syncHost string) (types.SpecState, types.SpecFailure) {
	outcome, failure := node.runnerA.run()

	if parallelTotal > 1 {
		state := types.RemoteBeforeSuiteStatePassed
		if outcome != types.SpecStatePassed {
			state = types.RemoteBeforeSuiteStateFailed
		}
		json := (types.RemoteBeforeSuiteData{
			Data:  node.data,
			State: state,
		}).ToJSON()
		http.Post(syncHost+"/BeforeSuiteState", "application/json", bytes.NewBuffer(json))
	}

	return outcome, failure
}

func (node *synchronizedBeforeSuiteNode) waitForA(syncHost string) (types.SpecState, types.SpecFailure) {
	failure := func(message string) types.SpecFailure {
		return types.SpecFailure{
			Message:               message,
			Location:              node.runnerA.codeLocation,
			ComponentType:         node.runnerA.nodeType,
			ComponentIndex:        node.runnerA.componentIndex,
			ComponentCodeLocation: node.runnerA.codeLocation,
		}
	}
	for {
		resp, err := http.Get(syncHost + "/BeforeSuiteState")
		if err != nil || resp.StatusCode != http.StatusOK {
			return types.SpecStateFailed, failure("Failed to fetch BeforeSuite state")
		}

		body, err := ioutil.ReadAll(resp.Body)
		if err != nil {
			return types.SpecStateFailed, failure("Failed to read BeforeSuite state")
		}
		resp.Body.Close()

		beforeSuiteData := types.RemoteBeforeSuiteData{}
		err = json.Unmarshal(body, &beforeSuiteData)
		if err != nil {
			return types.SpecStateFailed, failure("Failed to decode BeforeSuite state")
		}

		switch beforeSuiteData.State {
		case types.RemoteBeforeSuiteStatePassed:
			node.data = beforeSuiteData.Data
			return types.SpecStatePassed, types.SpecFailure{}
		case types.RemoteBeforeSuiteStateFailed:
			return types.SpecStateFailed, failure("BeforeSuite on Node 1 failed")
		case types.RemoteBeforeSuiteStateDisappeared:
			return types.SpecStateFailed, failure("Node 1 disappeared before completing BeforeSuite")
		}

		time.Sleep(50 * time.Millisecond)
	}
}

func (node *synchronizedBeforeSuiteNode) Passed() bool {
	return node.outcome == types.SpecStatePassed
}

func (node *synchronizedBeforeSuiteNode) Summary() *types.SetupSummary {
	return &types.SetupSummary{
		ComponentType: node.runnerA.nodeType,
		CodeLocation:  node.runnerA.codeLocation,
		State:         node.outcome,
		RunTime:       node.runTime,
		Failure:       node.failure,
	}
}

func (node *synchronizedBeforeSuiteNode) wrapA(bodyA interface{}) interface{} {
	typeA := reflect.TypeOf(bodyA)
	if typeA.Kind() != reflect.Func {
		panic("SynchronizedBeforeSuite expects a function as its first argument")
	}

	takesNothing := typeA.NumIn() == 0
	takesADoneChannel := typeA.NumIn() == 1 && typeA.In(0).Kind() == reflect.Chan && typeA.In(0).Elem().Kind() == reflect.Interface
	returnsBytes := typeA.NumOut() == 1 && typeA.Out(0).Kind() == reflect.Slice && typeA.Out(0).Elem().Kind() == reflect.Uint8

	if !((takesNothing || takesADoneChannel) && returnsBytes) {
		panic("SynchronizedBeforeSuite's first argument should be a function that returns []byte and either takes no arguments or takes a Done channel.")
	}

	if takesADoneChannel {
		return func(done chan<- interface{}) {
			out := reflect.ValueOf(bodyA).Call([]reflect.Value{reflect.ValueOf(done)})
			node.data = out[0].Interface().([]byte)
		}
	}

	return func() {
		out := reflect.ValueOf(bodyA).Call([]reflect.Value{})
		node.data = out[0].Interface().([]byte)
	}
}

func (node *synchronizedBeforeSuiteNode) wrapB(bodyB interface{}) interface{} {
	typeB := reflect.TypeOf(bodyB)
	if typeB.Kind() != reflect.Func {
		panic("SynchronizedBeforeSuite expects a function as its second argument")
	}

	returnsNothing := typeB.NumOut() == 0
	takesBytesOnly := typeB.NumIn() == 1 && typeB.In(0).Kind() == reflect.Slice && typeB.In(0).Elem().Kind() == reflect.Uint8
	takesBytesAndDone := typeB.NumIn() == 2 &&
		typeB.In(0).Kind() == reflect.Slice && typeB.In(0).Elem().Kind() == reflect.Uint8 &&
		typeB.In(1).Kind() == reflect.Chan && typeB.In(1).Elem().Kind() == reflect.Interface

	if !((takesBytesOnly || takesBytesAndDone) && returnsNothing) {
		panic("SynchronizedBeforeSuite's second argument should be a function that returns nothing and either takes []byte or ([]byte, Done)")
	}

	if takesBytesAndDone {
		return func(done chan<- interface{}) {
			reflect.ValueOf(bodyB).Call([]reflect.Value{reflect.ValueOf(node.data), reflect.ValueOf(done)})
		}
	}

	return func() {
		reflect.ValueOf(bodyB).Call([]reflect.Value{reflect.ValueOf(node.data)})
	}
}
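wrapA and wrapB move an opaque []byte from node 1 to every node, so structured data must be serialized by the caller. A hypothetical sketch using JSON; the Config type and its fields are invented for illustration:

```
package books_test

import (
	"encoding/json"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// Config is a hypothetical stand-in for whatever node 1 computes once.
type Config struct {
	URL      string
	PoolSize int
}

var shared Config

var _ = SynchronizedBeforeSuite(func() []byte {
	// runs on node 1 only: marshal the shared configuration
	data, err := json.Marshal(Config{URL: "postgres://localhost:5432", PoolSize: 4})
	Ω(err).ShouldNot(HaveOccurred())
	return data
}, func(data []byte) {
	// runs on every node: decode what node 1 produced
	Ω(json.Unmarshal(data, &shared)).Should(Succeed())
})
```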
249
vendor/github.com/onsi/ginkgo/internal/remote/aggregator.go
generated
vendored
@ -1,249 +0,0 @@
/*

Aggregator is a reporter used by the Ginkgo CLI to aggregate and present parallel test output
coherently as tests complete. You shouldn't need to use this in your code. To run tests in parallel:

	ginkgo -nodes=N

where N is the number of nodes you desire.
*/
package remote

import (
	"time"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/reporters/stenographer"
	"github.com/onsi/ginkgo/types"
)

type configAndSuite struct {
	config  config.GinkgoConfigType
	summary *types.SuiteSummary
}

type Aggregator struct {
	nodeCount    int
	config       config.DefaultReporterConfigType
	stenographer stenographer.Stenographer
	result       chan bool

	suiteBeginnings           chan configAndSuite
	aggregatedSuiteBeginnings []configAndSuite

	beforeSuites           chan *types.SetupSummary
	aggregatedBeforeSuites []*types.SetupSummary

	afterSuites           chan *types.SetupSummary
	aggregatedAfterSuites []*types.SetupSummary

	specCompletions chan *types.SpecSummary
	completedSpecs  []*types.SpecSummary

	suiteEndings           chan *types.SuiteSummary
	aggregatedSuiteEndings []*types.SuiteSummary
	specs                  []*types.SpecSummary

	startTime time.Time
}

func NewAggregator(nodeCount int, result chan bool, config config.DefaultReporterConfigType, stenographer stenographer.Stenographer) *Aggregator {
	aggregator := &Aggregator{
		nodeCount:    nodeCount,
		result:       result,
		config:       config,
		stenographer: stenographer,

		suiteBeginnings: make(chan configAndSuite),
		beforeSuites:    make(chan *types.SetupSummary),
		afterSuites:     make(chan *types.SetupSummary),
		specCompletions: make(chan *types.SpecSummary),
		suiteEndings:    make(chan *types.SuiteSummary),
	}

	go aggregator.mux()

	return aggregator
}

func (aggregator *Aggregator) SpecSuiteWillBegin(config config.GinkgoConfigType, summary *types.SuiteSummary) {
	aggregator.suiteBeginnings <- configAndSuite{config, summary}
}

func (aggregator *Aggregator) BeforeSuiteDidRun(setupSummary *types.SetupSummary) {
	aggregator.beforeSuites <- setupSummary
}

func (aggregator *Aggregator) AfterSuiteDidRun(setupSummary *types.SetupSummary) {
	aggregator.afterSuites <- setupSummary
}

func (aggregator *Aggregator) SpecWillRun(specSummary *types.SpecSummary) {
	//noop
}

func (aggregator *Aggregator) SpecDidComplete(specSummary *types.SpecSummary) {
	aggregator.specCompletions <- specSummary
}

func (aggregator *Aggregator) SpecSuiteDidEnd(summary *types.SuiteSummary) {
	aggregator.suiteEndings <- summary
}

func (aggregator *Aggregator) mux() {
loop:
	for {
		select {
		case configAndSuite := <-aggregator.suiteBeginnings:
			aggregator.registerSuiteBeginning(configAndSuite)
		case setupSummary := <-aggregator.beforeSuites:
			aggregator.registerBeforeSuite(setupSummary)
		case setupSummary := <-aggregator.afterSuites:
			aggregator.registerAfterSuite(setupSummary)
		case specSummary := <-aggregator.specCompletions:
			aggregator.registerSpecCompletion(specSummary)
		case suite := <-aggregator.suiteEndings:
			finished, passed := aggregator.registerSuiteEnding(suite)
			if finished {
				aggregator.result <- passed
				break loop
			}
		}
	}
}

func (aggregator *Aggregator) registerSuiteBeginning(configAndSuite configAndSuite) {
	aggregator.aggregatedSuiteBeginnings = append(aggregator.aggregatedSuiteBeginnings, configAndSuite)

	if len(aggregator.aggregatedSuiteBeginnings) == 1 {
		aggregator.startTime = time.Now()
	}

	if len(aggregator.aggregatedSuiteBeginnings) != aggregator.nodeCount {
		return
	}

	aggregator.stenographer.AnnounceSuite(configAndSuite.summary.SuiteDescription, configAndSuite.config.RandomSeed, configAndSuite.config.RandomizeAllSpecs, aggregator.config.Succinct)

	totalNumberOfSpecs := 0
	if len(aggregator.aggregatedSuiteBeginnings) > 0 {
		totalNumberOfSpecs = configAndSuite.summary.NumberOfSpecsBeforeParallelization
	}

	aggregator.stenographer.AnnounceTotalNumberOfSpecs(totalNumberOfSpecs, aggregator.config.Succinct)
	aggregator.stenographer.AnnounceAggregatedParallelRun(aggregator.nodeCount, aggregator.config.Succinct)
	aggregator.flushCompletedSpecs()
}

func (aggregator *Aggregator) registerBeforeSuite(setupSummary *types.SetupSummary) {
	aggregator.aggregatedBeforeSuites = append(aggregator.aggregatedBeforeSuites, setupSummary)
	aggregator.flushCompletedSpecs()
}

func (aggregator *Aggregator) registerAfterSuite(setupSummary *types.SetupSummary) {
	aggregator.aggregatedAfterSuites = append(aggregator.aggregatedAfterSuites, setupSummary)
	aggregator.flushCompletedSpecs()
}

func (aggregator *Aggregator) registerSpecCompletion(specSummary *types.SpecSummary) {
	aggregator.completedSpecs = append(aggregator.completedSpecs, specSummary)
	aggregator.specs = append(aggregator.specs, specSummary)
	aggregator.flushCompletedSpecs()
}

func (aggregator *Aggregator) flushCompletedSpecs() {
	if len(aggregator.aggregatedSuiteBeginnings) != aggregator.nodeCount {
		return
	}

	for _, setupSummary := range aggregator.aggregatedBeforeSuites {
		aggregator.announceBeforeSuite(setupSummary)
	}

	for _, specSummary := range aggregator.completedSpecs {
		aggregator.announceSpec(specSummary)
	}

	for _, setupSummary := range aggregator.aggregatedAfterSuites {
		aggregator.announceAfterSuite(setupSummary)
	}

	aggregator.aggregatedBeforeSuites = []*types.SetupSummary{}
	aggregator.completedSpecs = []*types.SpecSummary{}
	aggregator.aggregatedAfterSuites = []*types.SetupSummary{}
}

func (aggregator *Aggregator) announceBeforeSuite(setupSummary *types.SetupSummary) {
	aggregator.stenographer.AnnounceCapturedOutput(setupSummary.CapturedOutput)
	if setupSummary.State != types.SpecStatePassed {
		aggregator.stenographer.AnnounceBeforeSuiteFailure(setupSummary, aggregator.config.Succinct, aggregator.config.FullTrace)
	}
}

func (aggregator *Aggregator) announceAfterSuite(setupSummary *types.SetupSummary) {
	aggregator.stenographer.AnnounceCapturedOutput(setupSummary.CapturedOutput)
	if setupSummary.State != types.SpecStatePassed {
		aggregator.stenographer.AnnounceAfterSuiteFailure(setupSummary, aggregator.config.Succinct, aggregator.config.FullTrace)
	}
}

func (aggregator *Aggregator) announceSpec(specSummary *types.SpecSummary) {
	if aggregator.config.Verbose && specSummary.State != types.SpecStatePending && specSummary.State != types.SpecStateSkipped {
		aggregator.stenographer.AnnounceSpecWillRun(specSummary)
	}

	aggregator.stenographer.AnnounceCapturedOutput(specSummary.CapturedOutput)

	switch specSummary.State {
	case types.SpecStatePassed:
		if specSummary.IsMeasurement {
			aggregator.stenographer.AnnounceSuccessfulMeasurement(specSummary, aggregator.config.Succinct)
		} else if specSummary.RunTime.Seconds() >= aggregator.config.SlowSpecThreshold {
			aggregator.stenographer.AnnounceSuccessfulSlowSpec(specSummary, aggregator.config.Succinct)
		} else {
			aggregator.stenographer.AnnounceSuccessfulSpec(specSummary)
		}

	case types.SpecStatePending:
		aggregator.stenographer.AnnouncePendingSpec(specSummary, aggregator.config.NoisyPendings && !aggregator.config.Succinct)
	case types.SpecStateSkipped:
		aggregator.stenographer.AnnounceSkippedSpec(specSummary, aggregator.config.Succinct || !aggregator.config.NoisySkippings, aggregator.config.FullTrace)
	case types.SpecStateTimedOut:
		aggregator.stenographer.AnnounceSpecTimedOut(specSummary, aggregator.config.Succinct, aggregator.config.FullTrace)
	case types.SpecStatePanicked:
		aggregator.stenographer.AnnounceSpecPanicked(specSummary, aggregator.config.Succinct, aggregator.config.FullTrace)
	case types.SpecStateFailed:
		aggregator.stenographer.AnnounceSpecFailed(specSummary, aggregator.config.Succinct, aggregator.config.FullTrace)
	}
}

func (aggregator *Aggregator) registerSuiteEnding(suite *types.SuiteSummary) (finished bool, passed bool) {
	aggregator.aggregatedSuiteEndings = append(aggregator.aggregatedSuiteEndings, suite)
	if len(aggregator.aggregatedSuiteEndings) < aggregator.nodeCount {
		return false, false
	}

	aggregatedSuiteSummary := &types.SuiteSummary{}
	aggregatedSuiteSummary.SuiteSucceeded = true

	for _, suiteSummary := range aggregator.aggregatedSuiteEndings {
		if !suiteSummary.SuiteSucceeded {
			aggregatedSuiteSummary.SuiteSucceeded = false
		}

		aggregatedSuiteSummary.NumberOfSpecsThatWillBeRun += suiteSummary.NumberOfSpecsThatWillBeRun
		aggregatedSuiteSummary.NumberOfTotalSpecs += suiteSummary.NumberOfTotalSpecs
		aggregatedSuiteSummary.NumberOfPassedSpecs += suiteSummary.NumberOfPassedSpecs
		aggregatedSuiteSummary.NumberOfFailedSpecs += suiteSummary.NumberOfFailedSpecs
		aggregatedSuiteSummary.NumberOfPendingSpecs += suiteSummary.NumberOfPendingSpecs
		aggregatedSuiteSummary.NumberOfSkippedSpecs += suiteSummary.NumberOfSkippedSpecs
		aggregatedSuiteSummary.NumberOfFlakedSpecs += suiteSummary.NumberOfFlakedSpecs
	}

	aggregatedSuiteSummary.RunTime = time.Since(aggregator.startTime)

	aggregator.stenographer.SummarizeFailures(aggregator.specs)
	aggregator.stenographer.AnnounceSpecRunCompletion(aggregatedSuiteSummary, aggregator.config.Succinct)

	return true, aggregatedSuiteSummary.SuiteSucceeded
}
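mux is a classic fan-in: one goroutine owns all mutable state and serializes events arriving from N parallel producers over channels, so no locking is needed. A stripped-down sketch of the pattern:

```
package main

import "fmt"

func main() {
	const nodeCount = 3
	endings := make(chan string)
	result := make(chan bool)

	go func() { // the mux: sole owner of the `seen` counter
		seen := 0
		for name := range endings {
			fmt.Println("suite ended on", name)
			seen++
			if seen == nodeCount {
				result <- true
				return
			}
		}
	}()

	for i := 1; i <= nodeCount; i++ {
		go func(n int) { endings <- fmt.Sprintf("node-%d", n) }(i)
	}
	fmt.Println("all nodes finished:", <-result)
}
```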
147
vendor/github.com/onsi/ginkgo/internal/remote/forwarding_reporter.go
generated
vendored
@ -1,147 +0,0 @@
package remote

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"

	"github.com/onsi/ginkgo/internal/writer"
	"github.com/onsi/ginkgo/reporters"
	"github.com/onsi/ginkgo/reporters/stenographer"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/types"
)

//An interface to net/http's client to allow the injection of fakes under test
type Poster interface {
	Post(url string, bodyType string, body io.Reader) (resp *http.Response, err error)
}

/*
The ForwardingReporter is a Ginkgo reporter that forwards information to
a Ginkgo remote server.

When streaming parallel test output, this reporter is automatically installed by Ginkgo.

This is accomplished by passing the GINKGO_REMOTE_REPORTING_SERVER environment variable to `go test`; the Ginkgo test runner
detects this environment variable (which should contain the host of the server) and automatically installs a ForwardingReporter
in place of Ginkgo's DefaultReporter.
*/

type ForwardingReporter struct {
	serverHost        string
	poster            Poster
	outputInterceptor OutputInterceptor
	debugMode         bool
	debugFile         *os.File
	nestedReporter    *reporters.DefaultReporter
}

func NewForwardingReporter(config config.DefaultReporterConfigType, serverHost string, poster Poster, outputInterceptor OutputInterceptor, ginkgoWriter *writer.Writer, debugFile string) *ForwardingReporter {
	reporter := &ForwardingReporter{
		serverHost:        serverHost,
		poster:            poster,
		outputInterceptor: outputInterceptor,
	}

	if debugFile != "" {
		var err error
		reporter.debugMode = true
		reporter.debugFile, err = os.Create(debugFile)
		if err != nil {
			fmt.Println(err.Error())
			os.Exit(1)
		}

		if !config.Verbose {
			//if verbose is true then the GinkgoWriter emits to stdout. Don't _also_ redirect GinkgoWriter output as that will result in duplication.
			ginkgoWriter.AndRedirectTo(reporter.debugFile)
		}
		outputInterceptor.StreamTo(reporter.debugFile) //This is not working

		stenographer := stenographer.New(false, true, reporter.debugFile)
		config.Succinct = false
		config.Verbose = true
		config.FullTrace = true
		reporter.nestedReporter = reporters.NewDefaultReporter(config, stenographer)
	}

	return reporter
}

func (reporter *ForwardingReporter) post(path string, data interface{}) {
	encoded, _ := json.Marshal(data)
	buffer := bytes.NewBuffer(encoded)
	reporter.poster.Post(reporter.serverHost+path, "application/json", buffer)
}

func (reporter *ForwardingReporter) SpecSuiteWillBegin(conf config.GinkgoConfigType, summary *types.SuiteSummary) {
	data := struct {
		Config  config.GinkgoConfigType `json:"config"`
		Summary *types.SuiteSummary     `json:"suite-summary"`
	}{
		conf,
		summary,
	}

	reporter.outputInterceptor.StartInterceptingOutput()
	if reporter.debugMode {
		reporter.nestedReporter.SpecSuiteWillBegin(conf, summary)
		reporter.debugFile.Sync()
	}
	reporter.post("/SpecSuiteWillBegin", data)
}

func (reporter *ForwardingReporter) BeforeSuiteDidRun(setupSummary *types.SetupSummary) {
	output, _ := reporter.outputInterceptor.StopInterceptingAndReturnOutput()
	reporter.outputInterceptor.StartInterceptingOutput()
	setupSummary.CapturedOutput = output
	if reporter.debugMode {
		reporter.nestedReporter.BeforeSuiteDidRun(setupSummary)
		reporter.debugFile.Sync()
	}
	reporter.post("/BeforeSuiteDidRun", setupSummary)
}

func (reporter *ForwardingReporter) SpecWillRun(specSummary *types.SpecSummary) {
	if reporter.debugMode {
		reporter.nestedReporter.SpecWillRun(specSummary)
		reporter.debugFile.Sync()
	}
	reporter.post("/SpecWillRun", specSummary)
}

func (reporter *ForwardingReporter) SpecDidComplete(specSummary *types.SpecSummary) {
	output, _ := reporter.outputInterceptor.StopInterceptingAndReturnOutput()
	reporter.outputInterceptor.StartInterceptingOutput()
	specSummary.CapturedOutput = output
	if reporter.debugMode {
		reporter.nestedReporter.SpecDidComplete(specSummary)
		reporter.debugFile.Sync()
	}
	reporter.post("/SpecDidComplete", specSummary)
}

func (reporter *ForwardingReporter) AfterSuiteDidRun(setupSummary *types.SetupSummary) {
	output, _ := reporter.outputInterceptor.StopInterceptingAndReturnOutput()
	reporter.outputInterceptor.StartInterceptingOutput()
	setupSummary.CapturedOutput = output
	if reporter.debugMode {
		reporter.nestedReporter.AfterSuiteDidRun(setupSummary)
		reporter.debugFile.Sync()
	}
	reporter.post("/AfterSuiteDidRun", setupSummary)
}

func (reporter *ForwardingReporter) SpecSuiteDidEnd(summary *types.SuiteSummary) {
	reporter.outputInterceptor.StopInterceptingAndReturnOutput()
	if reporter.debugMode {
		reporter.nestedReporter.SpecSuiteDidEnd(summary)
		reporter.debugFile.Sync()
	}
	reporter.post("/SpecSuiteDidEnd", summary)
}
|
13 vendor/github.com/onsi/ginkgo/internal/remote/output_interceptor.go generated vendored
@@ -1,13 +0,0 @@
package remote

import "os"

/*
The OutputInterceptor is used by the ForwardingReporter to
intercept and capture all stdout and stderr output during a test run.
*/
type OutputInterceptor interface {
	StartInterceptingOutput() error
	StopInterceptingAndReturnOutput() (string, error)
	StreamTo(*os.File)
}
82 vendor/github.com/onsi/ginkgo/internal/remote/output_interceptor_unix.go generated vendored
@@ -1,82 +0,0 @@
// +build freebsd openbsd netbsd dragonfly darwin linux solaris

package remote

import (
	"errors"
	"io/ioutil"
	"os"

	"github.com/nxadm/tail"
	"golang.org/x/sys/unix"
)

func NewOutputInterceptor() OutputInterceptor {
	return &outputInterceptor{}
}

type outputInterceptor struct {
	redirectFile *os.File
	streamTarget *os.File
	intercepting bool
	tailer       *tail.Tail
	doneTailing  chan bool
}

func (interceptor *outputInterceptor) StartInterceptingOutput() error {
	if interceptor.intercepting {
		return errors.New("Already intercepting output!")
	}
	interceptor.intercepting = true

	var err error

	interceptor.redirectFile, err = ioutil.TempFile("", "ginkgo-output")
	if err != nil {
		return err
	}

	// This might call Dup3 if the dup2 syscall is not available, e.g. on
	// linux/arm64 or linux/riscv64
	unix.Dup2(int(interceptor.redirectFile.Fd()), 1)
	unix.Dup2(int(interceptor.redirectFile.Fd()), 2)

	if interceptor.streamTarget != nil {
		interceptor.tailer, _ = tail.TailFile(interceptor.redirectFile.Name(), tail.Config{Follow: true})
		interceptor.doneTailing = make(chan bool)

		go func() {
			for line := range interceptor.tailer.Lines {
				interceptor.streamTarget.Write([]byte(line.Text + "\n"))
			}
			close(interceptor.doneTailing)
		}()
	}

	return nil
}

func (interceptor *outputInterceptor) StopInterceptingAndReturnOutput() (string, error) {
	if !interceptor.intercepting {
		return "", errors.New("Not intercepting output!")
	}

	interceptor.redirectFile.Close()
	output, err := ioutil.ReadFile(interceptor.redirectFile.Name())
	os.Remove(interceptor.redirectFile.Name())

	interceptor.intercepting = false

	if interceptor.streamTarget != nil {
		interceptor.tailer.Stop()
		interceptor.tailer.Cleanup()
		<-interceptor.doneTailing
		interceptor.streamTarget.Sync()
	}

	return string(output), err
}

func (interceptor *outputInterceptor) StreamTo(out *os.File) {
	interceptor.streamTarget = out
}
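The Unix interceptor above captures output by Dup2-ing file descriptors 1 and 2 onto a temp file, which also catches writes from cgo and child processes. A minimal, runnable sketch of the same capture/restore life cycle — simplified to swap os.Stdout for a pipe rather than touching raw fds, so it only sees writes that go through Go's os.Stdout — looks like this:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// Start intercepting: writes to os.Stdout now land in the pipe,
	// standing in for the Dup2 redirection onto a temp file above.
	r, w, err := os.Pipe()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	orig := os.Stdout
	os.Stdout = w

	fmt.Println("output emitted while the spec runs")

	// Stop intercepting: restore stdout and collect what was captured.
	w.Close()
	os.Stdout = orig
	captured, _ := io.ReadAll(r)

	fmt.Printf("captured: %q\n", string(captured))
}
```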
36 vendor/github.com/onsi/ginkgo/internal/remote/output_interceptor_win.go generated vendored
@@ -1,36 +0,0 @@
// +build windows

package remote

import (
	"errors"
	"os"
)

func NewOutputInterceptor() OutputInterceptor {
	return &outputInterceptor{}
}

type outputInterceptor struct {
	intercepting bool
}

func (interceptor *outputInterceptor) StartInterceptingOutput() error {
	if interceptor.intercepting {
		return errors.New("Already intercepting output!")
	}
	interceptor.intercepting = true

	// not working on windows...

	return nil
}

func (interceptor *outputInterceptor) StopInterceptingAndReturnOutput() (string, error) {
	// not working on windows...
	interceptor.intercepting = false

	return "", nil
}

func (interceptor *outputInterceptor) StreamTo(*os.File) {}
224 vendor/github.com/onsi/ginkgo/internal/remote/server.go generated vendored
@@ -1,224 +0,0 @@
/*

The remote package provides the pieces to allow Ginkgo test suites to report to remote listeners.
This is used, primarily, to enable streaming parallel test output but has, in principle, broader applications (e.g. streaming test output to a browser).

*/

package remote

import (
	"encoding/json"
	"io/ioutil"
	"net"
	"net/http"
	"sync"

	"github.com/onsi/ginkgo/internal/spec_iterator"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/reporters"
	"github.com/onsi/ginkgo/types"
)

/*
Server spins up on an automatically selected port and listens for communication from the forwarding reporter.
It then forwards that communication to attached reporters.
*/
type Server struct {
	listener        net.Listener
	reporters       []reporters.Reporter
	alives          []func() bool
	lock            *sync.Mutex
	beforeSuiteData types.RemoteBeforeSuiteData
	parallelTotal   int
	counter         int
}

//Create a new server, automatically selecting a port
func NewServer(parallelTotal int) (*Server, error) {
	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return nil, err
	}
	return &Server{
		listener:        listener,
		lock:            &sync.Mutex{},
		alives:          make([]func() bool, parallelTotal),
		beforeSuiteData: types.RemoteBeforeSuiteData{Data: nil, State: types.RemoteBeforeSuiteStatePending},
		parallelTotal:   parallelTotal,
	}, nil
}

//Start the server.  You don't need to `go s.Start()`, just `s.Start()`
func (server *Server) Start() {
	httpServer := &http.Server{}
	mux := http.NewServeMux()
	httpServer.Handler = mux

	//streaming endpoints
	mux.HandleFunc("/SpecSuiteWillBegin", server.specSuiteWillBegin)
	mux.HandleFunc("/BeforeSuiteDidRun", server.beforeSuiteDidRun)
	mux.HandleFunc("/AfterSuiteDidRun", server.afterSuiteDidRun)
	mux.HandleFunc("/SpecWillRun", server.specWillRun)
	mux.HandleFunc("/SpecDidComplete", server.specDidComplete)
	mux.HandleFunc("/SpecSuiteDidEnd", server.specSuiteDidEnd)

	//synchronization endpoints
	mux.HandleFunc("/BeforeSuiteState", server.handleBeforeSuiteState)
	mux.HandleFunc("/RemoteAfterSuiteData", server.handleRemoteAfterSuiteData)
	mux.HandleFunc("/counter", server.handleCounter)
	mux.HandleFunc("/has-counter", server.handleHasCounter) //for backward compatibility

	go httpServer.Serve(server.listener)
}

//Stop the server
func (server *Server) Close() {
	server.listener.Close()
}

//The address the server can be reached at.  Pass this into the `ForwardingReporter`.
func (server *Server) Address() string {
	return "http://" + server.listener.Addr().String()
}

//
// Streaming Endpoints
//

//The server will forward all received messages to Ginkgo reporters registered with `RegisterReporters`
func (server *Server) readAll(request *http.Request) []byte {
	defer request.Body.Close()
	body, _ := ioutil.ReadAll(request.Body)
	return body
}

func (server *Server) RegisterReporters(reporters ...reporters.Reporter) {
	server.reporters = reporters
}

func (server *Server) specSuiteWillBegin(writer http.ResponseWriter, request *http.Request) {
	body := server.readAll(request)

	var data struct {
		Config  config.GinkgoConfigType `json:"config"`
		Summary *types.SuiteSummary     `json:"suite-summary"`
	}

	json.Unmarshal(body, &data)

	for _, reporter := range server.reporters {
		reporter.SpecSuiteWillBegin(data.Config, data.Summary)
	}
}

func (server *Server) beforeSuiteDidRun(writer http.ResponseWriter, request *http.Request) {
	body := server.readAll(request)
	var setupSummary *types.SetupSummary
	json.Unmarshal(body, &setupSummary)

	for _, reporter := range server.reporters {
		reporter.BeforeSuiteDidRun(setupSummary)
	}
}

func (server *Server) afterSuiteDidRun(writer http.ResponseWriter, request *http.Request) {
	body := server.readAll(request)
	var setupSummary *types.SetupSummary
	json.Unmarshal(body, &setupSummary)

	for _, reporter := range server.reporters {
		reporter.AfterSuiteDidRun(setupSummary)
	}
}

func (server *Server) specWillRun(writer http.ResponseWriter, request *http.Request) {
	body := server.readAll(request)
	var specSummary *types.SpecSummary
	json.Unmarshal(body, &specSummary)

	for _, reporter := range server.reporters {
		reporter.SpecWillRun(specSummary)
	}
}

func (server *Server) specDidComplete(writer http.ResponseWriter, request *http.Request) {
	body := server.readAll(request)
	var specSummary *types.SpecSummary
	json.Unmarshal(body, &specSummary)

	for _, reporter := range server.reporters {
		reporter.SpecDidComplete(specSummary)
	}
}

func (server *Server) specSuiteDidEnd(writer http.ResponseWriter, request *http.Request) {
	body := server.readAll(request)
	var suiteSummary *types.SuiteSummary
	json.Unmarshal(body, &suiteSummary)

	for _, reporter := range server.reporters {
		reporter.SpecSuiteDidEnd(suiteSummary)
	}
}

//
// Synchronization Endpoints
//

func (server *Server) RegisterAlive(node int, alive func() bool) {
	server.lock.Lock()
	defer server.lock.Unlock()
	server.alives[node-1] = alive
}

func (server *Server) nodeIsAlive(node int) bool {
	server.lock.Lock()
	defer server.lock.Unlock()
	alive := server.alives[node-1]
	if alive == nil {
		return true
	}
	return alive()
}

func (server *Server) handleBeforeSuiteState(writer http.ResponseWriter, request *http.Request) {
	if request.Method == "POST" {
		dec := json.NewDecoder(request.Body)
		dec.Decode(&(server.beforeSuiteData))
	} else {
		beforeSuiteData := server.beforeSuiteData
		if beforeSuiteData.State == types.RemoteBeforeSuiteStatePending && !server.nodeIsAlive(1) {
			beforeSuiteData.State = types.RemoteBeforeSuiteStateDisappeared
		}
		enc := json.NewEncoder(writer)
		enc.Encode(beforeSuiteData)
	}
}

func (server *Server) handleRemoteAfterSuiteData(writer http.ResponseWriter, request *http.Request) {
	afterSuiteData := types.RemoteAfterSuiteData{
		CanRun: true,
	}
	for i := 2; i <= server.parallelTotal; i++ {
		afterSuiteData.CanRun = afterSuiteData.CanRun && !server.nodeIsAlive(i)
	}

	enc := json.NewEncoder(writer)
	enc.Encode(afterSuiteData)
}

func (server *Server) handleCounter(writer http.ResponseWriter, request *http.Request) {
	c := spec_iterator.Counter{}
	server.lock.Lock()
	c.Index = server.counter
	server.counter++
	server.lock.Unlock()

	json.NewEncoder(writer).Encode(c)
}

func (server *Server) handleHasCounter(writer http.ResponseWriter, request *http.Request) {
	writer.Write([]byte(""))
}
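The /counter endpoint is the heart of parallel scheduling: each GET atomically hands out the next spec index, so however many nodes poll it, every index is served exactly once. A self-contained sketch of that pattern using net/http/httptest (not Ginkgo's actual server, just the same idea):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync"
)

type counter struct {
	Index int `json:"index"`
}

func main() {
	var (
		mu   sync.Mutex
		next int
	)
	// Each GET hands out the next spec index exactly once.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		mu.Lock()
		c := counter{Index: next}
		next++
		mu.Unlock()
		json.NewEncoder(w).Encode(c)
	}))
	defer srv.Close()

	// A "parallel node" pulling three work items off the shared queue.
	for i := 0; i < 3; i++ {
		resp, err := http.Get(srv.URL + "/counter")
		if err != nil {
			panic(err)
		}
		var c counter
		json.NewDecoder(resp.Body).Decode(&c)
		resp.Body.Close()
		fmt.Println("next spec index:", c.Index) // 0, then 1, then 2
	}
}
```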
247 vendor/github.com/onsi/ginkgo/internal/spec/spec.go generated vendored
@@ -1,247 +0,0 @@
package spec

import (
	"fmt"
	"io"
	"time"

	"sync"

	"github.com/onsi/ginkgo/internal/containernode"
	"github.com/onsi/ginkgo/internal/leafnodes"
	"github.com/onsi/ginkgo/types"
)

type Spec struct {
	subject          leafnodes.SubjectNode
	focused          bool
	announceProgress bool

	containers []*containernode.ContainerNode

	state            types.SpecState
	runTime          time.Duration
	startTime        time.Time
	failure          types.SpecFailure
	previousFailures bool

	stateMutex *sync.Mutex
}

func New(subject leafnodes.SubjectNode, containers []*containernode.ContainerNode, announceProgress bool) *Spec {
	spec := &Spec{
		subject:          subject,
		containers:       containers,
		focused:          subject.Flag() == types.FlagTypeFocused,
		announceProgress: announceProgress,
		stateMutex:       &sync.Mutex{},
	}

	spec.processFlag(subject.Flag())
	for i := len(containers) - 1; i >= 0; i-- {
		spec.processFlag(containers[i].Flag())
	}

	return spec
}

func (spec *Spec) processFlag(flag types.FlagType) {
	if flag == types.FlagTypeFocused {
		spec.focused = true
	} else if flag == types.FlagTypePending {
		spec.setState(types.SpecStatePending)
	}
}

func (spec *Spec) Skip() {
	spec.setState(types.SpecStateSkipped)
}

func (spec *Spec) Failed() bool {
	return spec.getState() == types.SpecStateFailed || spec.getState() == types.SpecStatePanicked || spec.getState() == types.SpecStateTimedOut
}

func (spec *Spec) Passed() bool {
	return spec.getState() == types.SpecStatePassed
}

func (spec *Spec) Flaked() bool {
	return spec.getState() == types.SpecStatePassed && spec.previousFailures
}

func (spec *Spec) Pending() bool {
	return spec.getState() == types.SpecStatePending
}

func (spec *Spec) Skipped() bool {
	return spec.getState() == types.SpecStateSkipped
}

func (spec *Spec) Focused() bool {
	return spec.focused
}

func (spec *Spec) IsMeasurement() bool {
	return spec.subject.Type() == types.SpecComponentTypeMeasure
}

func (spec *Spec) Summary(suiteID string) *types.SpecSummary {
	componentTexts := make([]string, len(spec.containers)+1)
	componentCodeLocations := make([]types.CodeLocation, len(spec.containers)+1)

	for i, container := range spec.containers {
		componentTexts[i] = container.Text()
		componentCodeLocations[i] = container.CodeLocation()
	}

	componentTexts[len(spec.containers)] = spec.subject.Text()
	componentCodeLocations[len(spec.containers)] = spec.subject.CodeLocation()

	runTime := spec.runTime
	if runTime == 0 && !spec.startTime.IsZero() {
		runTime = time.Since(spec.startTime)
	}

	return &types.SpecSummary{
		IsMeasurement:          spec.IsMeasurement(),
		NumberOfSamples:        spec.subject.Samples(),
		ComponentTexts:         componentTexts,
		ComponentCodeLocations: componentCodeLocations,
		State:                  spec.getState(),
		RunTime:                runTime,
		Failure:                spec.failure,
		Measurements:           spec.measurementsReport(),
		SuiteID:                suiteID,
	}
}

func (spec *Spec) ConcatenatedString() string {
	s := ""
	for _, container := range spec.containers {
		s += container.Text() + " "
	}

	return s + spec.subject.Text()
}

func (spec *Spec) Run(writer io.Writer) {
	if spec.getState() == types.SpecStateFailed {
		spec.previousFailures = true
	}

	spec.startTime = time.Now()
	defer func() {
		spec.runTime = time.Since(spec.startTime)
	}()

	for sample := 0; sample < spec.subject.Samples(); sample++ {
		spec.runSample(sample, writer)

		if spec.getState() != types.SpecStatePassed {
			return
		}
	}
}

func (spec *Spec) getState() types.SpecState {
	spec.stateMutex.Lock()
	defer spec.stateMutex.Unlock()
	return spec.state
}

func (spec *Spec) setState(state types.SpecState) {
	spec.stateMutex.Lock()
	defer spec.stateMutex.Unlock()
	spec.state = state
}

func (spec *Spec) runSample(sample int, writer io.Writer) {
	spec.setState(types.SpecStatePassed)
	spec.failure = types.SpecFailure{}
	innerMostContainerIndexToUnwind := -1

	defer func() {
		for i := innerMostContainerIndexToUnwind; i >= 0; i-- {
			container := spec.containers[i]
			for _, justAfterEach := range container.SetupNodesOfType(types.SpecComponentTypeJustAfterEach) {
				spec.announceSetupNode(writer, "JustAfterEach", container, justAfterEach)
				justAfterEachState, justAfterEachFailure := justAfterEach.Run()
				if justAfterEachState != types.SpecStatePassed && spec.state == types.SpecStatePassed {
					spec.state = justAfterEachState
					spec.failure = justAfterEachFailure
				}
			}
		}

		for i := innerMostContainerIndexToUnwind; i >= 0; i-- {
			container := spec.containers[i]
			for _, afterEach := range container.SetupNodesOfType(types.SpecComponentTypeAfterEach) {
				spec.announceSetupNode(writer, "AfterEach", container, afterEach)
				afterEachState, afterEachFailure := afterEach.Run()
				if afterEachState != types.SpecStatePassed && spec.getState() == types.SpecStatePassed {
					spec.setState(afterEachState)
					spec.failure = afterEachFailure
				}
			}
		}
	}()

	for i, container := range spec.containers {
		innerMostContainerIndexToUnwind = i
		for _, beforeEach := range container.SetupNodesOfType(types.SpecComponentTypeBeforeEach) {
			spec.announceSetupNode(writer, "BeforeEach", container, beforeEach)
			s, f := beforeEach.Run()
			spec.failure = f
			spec.setState(s)
			if spec.getState() != types.SpecStatePassed {
				return
			}
		}
	}

	for _, container := range spec.containers {
		for _, justBeforeEach := range container.SetupNodesOfType(types.SpecComponentTypeJustBeforeEach) {
			spec.announceSetupNode(writer, "JustBeforeEach", container, justBeforeEach)
			s, f := justBeforeEach.Run()
			spec.failure = f
			spec.setState(s)
			if spec.getState() != types.SpecStatePassed {
				return
			}
		}
	}

	spec.announceSubject(writer, spec.subject)
	s, f := spec.subject.Run()
	spec.failure = f
	spec.setState(s)
}

func (spec *Spec) announceSetupNode(writer io.Writer, nodeType string, container *containernode.ContainerNode, setupNode leafnodes.BasicNode) {
	if spec.announceProgress {
		s := fmt.Sprintf("[%s] %s\n  %s\n", nodeType, container.Text(), setupNode.CodeLocation().String())
		writer.Write([]byte(s))
	}
}

func (spec *Spec) announceSubject(writer io.Writer, subject leafnodes.SubjectNode) {
	if spec.announceProgress {
		nodeType := ""
		switch subject.Type() {
		case types.SpecComponentTypeIt:
			nodeType = "It"
		case types.SpecComponentTypeMeasure:
			nodeType = "Measure"
		}
		s := fmt.Sprintf("[%s] %s\n  %s\n", nodeType, subject.Text(), subject.CodeLocation().String())
		writer.Write([]byte(s))
	}
}

func (spec *Spec) measurementsReport() map[string]*types.SpecMeasurement {
	if !spec.IsMeasurement() || spec.Failed() {
		return map[string]*types.SpecMeasurement{}
	}

	return spec.subject.(*leafnodes.MeasureNode).MeasurementsReport()
}
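The subtle part of runSample is the unwinding: innerMostContainerIndexToUnwind tracks how deep the BeforeEach walk got, and the deferred function runs JustAfterEach/AfterEach blocks inner-most-first only for containers that were actually entered. A small stand-alone sketch of that invariant (plain closures, not Ginkgo types):

```go
package main

import "fmt"

func main() {
	containers := []string{"outer Describe", "middle Context", "inner Context"}
	innerMostToUnwind := -1

	// Mirrors the deferred unwind above: AfterEach runs inner-most-first,
	// but only for containers whose BeforeEach was reached.
	defer func() {
		for i := innerMostToUnwind; i >= 0; i-- {
			fmt.Println("AfterEach:", containers[i])
		}
	}()

	for i, c := range containers {
		innerMostToUnwind = i
		fmt.Println("BeforeEach:", c)
		if c == "middle Context" { // simulate a failing BeforeEach
			fmt.Println("BeforeEach failed; unwinding")
			return
		}
	}
}
```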
144 vendor/github.com/onsi/ginkgo/internal/spec/specs.go generated vendored
@@ -1,144 +0,0 @@
package spec

import (
	"math/rand"
	"regexp"
	"sort"
	"strings"
)

type Specs struct {
	specs []*Spec
	names []string

	hasProgrammaticFocus bool
	RegexScansFilePath   bool
}

func NewSpecs(specs []*Spec) *Specs {
	names := make([]string, len(specs))
	for i, spec := range specs {
		names[i] = spec.ConcatenatedString()
	}
	return &Specs{
		specs: specs,
		names: names,
	}
}

func (e *Specs) Specs() []*Spec {
	return e.specs
}

func (e *Specs) HasProgrammaticFocus() bool {
	return e.hasProgrammaticFocus
}

func (e *Specs) Shuffle(r *rand.Rand) {
	sort.Sort(e)
	permutation := r.Perm(len(e.specs))
	shuffledSpecs := make([]*Spec, len(e.specs))
	names := make([]string, len(e.specs))
	for i, j := range permutation {
		shuffledSpecs[i] = e.specs[j]
		names[i] = e.names[j]
	}
	e.specs = shuffledSpecs
	e.names = names
}

func (e *Specs) ApplyFocus(description string, focus, skip []string) {
	if len(focus)+len(skip) == 0 {
		e.applyProgrammaticFocus()
	} else {
		e.applyRegExpFocusAndSkip(description, focus, skip)
	}
}

func (e *Specs) applyProgrammaticFocus() {
	e.hasProgrammaticFocus = false
	for _, spec := range e.specs {
		if spec.Focused() && !spec.Pending() {
			e.hasProgrammaticFocus = true
			break
		}
	}

	if e.hasProgrammaticFocus {
		for _, spec := range e.specs {
			if !spec.Focused() {
				spec.Skip()
			}
		}
	}
}

// toMatch returns a []byte to be used by regex matchers.  When adding new behaviours to the matching function,
// this is the place which we append to.
func (e *Specs) toMatch(description string, i int) []byte {
	if i > len(e.names) {
		return nil
	}
	if e.RegexScansFilePath {
		return []byte(
			description + " " +
				e.names[i] + " " +
				e.specs[i].subject.CodeLocation().FileName)
	} else {
		return []byte(
			description + " " +
				e.names[i])
	}
}

func (e *Specs) applyRegExpFocusAndSkip(description string, focus, skip []string) {
	var focusFilter, skipFilter *regexp.Regexp
	if len(focus) > 0 {
		focusFilter = regexp.MustCompile(strings.Join(focus, "|"))
	}
	if len(skip) > 0 {
		skipFilter = regexp.MustCompile(strings.Join(skip, "|"))
	}

	for i, spec := range e.specs {
		matchesFocus := true
		matchesSkip := false

		toMatch := e.toMatch(description, i)

		if focusFilter != nil {
			matchesFocus = focusFilter.Match(toMatch)
		}

		if skipFilter != nil {
			matchesSkip = skipFilter.Match(toMatch)
		}

		if !matchesFocus || matchesSkip {
			spec.Skip()
		}
	}
}

func (e *Specs) SkipMeasurements() {
	for _, spec := range e.specs {
		if spec.IsMeasurement() {
			spec.Skip()
		}
	}
}

//sort.Interface

func (e *Specs) Len() int {
	return len(e.specs)
}

func (e *Specs) Less(i, j int) bool {
	return e.names[i] < e.names[j]
}

func (e *Specs) Swap(i, j int) {
	e.names[i], e.names[j] = e.names[j], e.names[i]
	e.specs[i], e.specs[j] = e.specs[j], e.specs[i]
}
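applyRegExpFocusAndSkip reduces the focus and skip strings to two alternated regular expressions matched against the suite description plus the concatenated spec name: a spec runs only if it matches the focus filter (when set) and misses the skip filter. The same filtering in miniature:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	names := []string{
		"Networking resolves service DNS",
		"Networking drops malformed packets",
		"Storage mounts volumes",
	}
	// Focus on "Networking", skip "drops"; joined with "|" as in the code above.
	focusFilter := regexp.MustCompile(strings.Join([]string{"Networking"}, "|"))
	skipFilter := regexp.MustCompile(strings.Join([]string{"drops"}, "|"))

	for _, name := range names {
		toMatch := []byte("MySuite " + name)
		if focusFilter.Match(toMatch) && !skipFilter.Match(toMatch) {
			fmt.Println("RUN :", name)
		} else {
			fmt.Println("SKIP:", name)
		}
	}
	// RUN : Networking resolves service DNS
	// SKIP: Networking drops malformed packets
	// SKIP: Storage mounts volumes
}
```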
55 vendor/github.com/onsi/ginkgo/internal/spec_iterator/index_computer.go generated vendored
@@ -1,55 +0,0 @@
package spec_iterator

func ParallelizedIndexRange(length int, parallelTotal int, parallelNode int) (startIndex int, count int) {
	if length == 0 {
		return 0, 0
	}

	// We have more nodes than tests. Trivial case.
	if parallelTotal >= length {
		if parallelNode > length {
			return 0, 0
		} else {
			return parallelNode - 1, 1
		}
	}

	// This is the minimum amount of tests that a node will be required to run
	minTestsPerNode := length / parallelTotal

	// This is the maximum amount of tests that a node will be required to run
	// The algorithm guarantees that this would be equal to at least the minimum amount
	// and at most one more
	maxTestsPerNode := minTestsPerNode
	if length%parallelTotal != 0 {
		maxTestsPerNode++
	}

	// Number of nodes that will have to run the maximum amount of tests per node
	numMaxLoadNodes := length % parallelTotal

	// Number of nodes that precede the current node and will have to run the maximum amount of tests per node
	var numPrecedingMaxLoadNodes int
	if parallelNode > numMaxLoadNodes {
		numPrecedingMaxLoadNodes = numMaxLoadNodes
	} else {
		numPrecedingMaxLoadNodes = parallelNode - 1
	}

	// Number of nodes that precede the current node and will have to run the minimum amount of tests per node
	var numPrecedingMinLoadNodes int
	if parallelNode <= numMaxLoadNodes {
		numPrecedingMinLoadNodes = 0
	} else {
		numPrecedingMinLoadNodes = parallelNode - numMaxLoadNodes - 1
	}

	// Evaluate the test start index and number of tests to run
	startIndex = numPrecedingMaxLoadNodes*maxTestsPerNode + numPrecedingMinLoadNodes*minTestsPerNode
	if parallelNode > numMaxLoadNodes {
		count = minTestsPerNode
	} else {
		count = maxTestsPerNode
	}
	return
}
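Worked example: with 10 specs across 4 nodes, minTestsPerNode is 2, maxTestsPerNode is 3, and numMaxLoadNodes is 2, so nodes 1 and 2 take three specs each and nodes 3 and 4 take two. Exercising a local copy of the arithmetic (condensed but equivalent) confirms the ranges:

```go
package main

import "fmt"

// Local copy of the sharding arithmetic above, condensed but equivalent.
func parallelizedIndexRange(length, total, node int) (start, count int) {
	if length == 0 {
		return 0, 0
	}
	if total >= length {
		if node > length {
			return 0, 0
		}
		return node - 1, 1
	}
	minPer := length / total
	maxPer := minPer
	if length%total != 0 {
		maxPer++
	}
	numMaxLoad := length % total
	if node > numMaxLoad {
		// Preceded by numMaxLoad heavy nodes and (node-numMaxLoad-1) light ones.
		return numMaxLoad*maxPer + (node-numMaxLoad-1)*minPer, minPer
	}
	return (node - 1) * maxPer, maxPer
}

func main() {
	for node := 1; node <= 4; node++ {
		start, count := parallelizedIndexRange(10, 4, node)
		fmt.Printf("node %d runs specs [%d, %d)\n", node, start, start+count)
	}
	// node 1 runs specs [0, 3)
	// node 2 runs specs [3, 6)
	// node 3 runs specs [6, 8)
	// node 4 runs specs [8, 10)
}
```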
59 vendor/github.com/onsi/ginkgo/internal/spec_iterator/parallel_spec_iterator.go generated vendored
@@ -1,59 +0,0 @@
package spec_iterator

import (
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/onsi/ginkgo/internal/spec"
)

type ParallelIterator struct {
	specs  []*spec.Spec
	host   string
	client *http.Client
}

func NewParallelIterator(specs []*spec.Spec, host string) *ParallelIterator {
	return &ParallelIterator{
		specs:  specs,
		host:   host,
		client: &http.Client{},
	}
}

func (s *ParallelIterator) Next() (*spec.Spec, error) {
	resp, err := s.client.Get(s.host + "/counter")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status code %d", resp.StatusCode)
	}

	var counter Counter
	err = json.NewDecoder(resp.Body).Decode(&counter)
	if err != nil {
		return nil, err
	}

	if counter.Index >= len(s.specs) {
		return nil, ErrClosed
	}

	return s.specs[counter.Index], nil
}

func (s *ParallelIterator) NumberOfSpecsPriorToIteration() int {
	return len(s.specs)
}

func (s *ParallelIterator) NumberOfSpecsToProcessIfKnown() (int, bool) {
	return -1, false
}

func (s *ParallelIterator) NumberOfSpecsThatWillBeRunIfKnown() (int, bool) {
	return -1, false
}
45 vendor/github.com/onsi/ginkgo/internal/spec_iterator/serial_spec_iterator.go generated vendored
@@ -1,45 +0,0 @@
package spec_iterator

import (
	"github.com/onsi/ginkgo/internal/spec"
)

type SerialIterator struct {
	specs []*spec.Spec
	index int
}

func NewSerialIterator(specs []*spec.Spec) *SerialIterator {
	return &SerialIterator{
		specs: specs,
		index: 0,
	}
}

func (s *SerialIterator) Next() (*spec.Spec, error) {
	if s.index >= len(s.specs) {
		return nil, ErrClosed
	}

	spec := s.specs[s.index]
	s.index += 1
	return spec, nil
}

func (s *SerialIterator) NumberOfSpecsPriorToIteration() int {
	return len(s.specs)
}

func (s *SerialIterator) NumberOfSpecsToProcessIfKnown() (int, bool) {
	return len(s.specs), true
}

func (s *SerialIterator) NumberOfSpecsThatWillBeRunIfKnown() (int, bool) {
	count := 0
	for _, s := range s.specs {
		if !s.Skipped() && !s.Pending() {
			count += 1
		}
	}
	return count, true
}
47 vendor/github.com/onsi/ginkgo/internal/spec_iterator/sharded_parallel_spec_iterator.go generated vendored
@@ -1,47 +0,0 @@
package spec_iterator

import "github.com/onsi/ginkgo/internal/spec"

type ShardedParallelIterator struct {
	specs    []*spec.Spec
	index    int
	maxIndex int
}

func NewShardedParallelIterator(specs []*spec.Spec, total int, node int) *ShardedParallelIterator {
	startIndex, count := ParallelizedIndexRange(len(specs), total, node)

	return &ShardedParallelIterator{
		specs:    specs,
		index:    startIndex,
		maxIndex: startIndex + count,
	}
}

func (s *ShardedParallelIterator) Next() (*spec.Spec, error) {
	if s.index >= s.maxIndex {
		return nil, ErrClosed
	}

	spec := s.specs[s.index]
	s.index += 1
	return spec, nil
}

func (s *ShardedParallelIterator) NumberOfSpecsPriorToIteration() int {
	return len(s.specs)
}

func (s *ShardedParallelIterator) NumberOfSpecsToProcessIfKnown() (int, bool) {
	return s.maxIndex - s.index, true
}

func (s *ShardedParallelIterator) NumberOfSpecsThatWillBeRunIfKnown() (int, bool) {
	count := 0
	for i := s.index; i < s.maxIndex; i += 1 {
		if !s.specs[i].Skipped() && !s.specs[i].Pending() {
			count += 1
		}
	}
	return count, true
}
20 vendor/github.com/onsi/ginkgo/internal/spec_iterator/spec_iterator.go generated vendored
@@ -1,20 +0,0 @@
package spec_iterator

import (
	"errors"

	"github.com/onsi/ginkgo/internal/spec"
)

var ErrClosed = errors.New("no more specs to run")

type SpecIterator interface {
	Next() (*spec.Spec, error)
	NumberOfSpecsPriorToIteration() int
	NumberOfSpecsToProcessIfKnown() (int, bool)
	NumberOfSpecsThatWillBeRunIfKnown() (int, bool)
}

type Counter struct {
	Index int `json:"index"`
}
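ErrClosed is a sentinel, not a failure: the spec runner drains any SpecIterator by calling Next until that sentinel comes back. The consumption pattern, with a toy iterator over strings:

```go
package main

import (
	"errors"
	"fmt"
)

var errClosed = errors.New("no more specs to run")

type toyIterator struct {
	specs []string
	index int
}

func (it *toyIterator) Next() (string, error) {
	if it.index >= len(it.specs) {
		return "", errClosed
	}
	s := it.specs[it.index]
	it.index++
	return s, nil
}

func main() {
	it := &toyIterator{specs: []string{"spec A", "spec B", "spec C"}}
	for {
		s, err := it.Next()
		if errors.Is(err, errClosed) {
			break // the queue is drained, not broken
		}
		fmt.Println("running", s)
	}
}
```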
15 vendor/github.com/onsi/ginkgo/internal/specrunner/random_id.go generated vendored
@@ -1,15 +0,0 @@
package specrunner

import (
	"crypto/rand"
	"fmt"
)

func randomID() string {
	b := make([]byte, 8)
	_, err := rand.Read(b)
	if err != nil {
		return ""
	}
	return fmt.Sprintf("%x-%x-%x-%x", b[0:2], b[2:4], b[4:6], b[6:8])
}
411 vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go generated vendored
@@ -1,411 +0,0 @@
package specrunner

import (
	"fmt"
	"os"
	"os/signal"
	"sync"
	"syscall"

	"github.com/onsi/ginkgo/internal/spec_iterator"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/internal/leafnodes"
	"github.com/onsi/ginkgo/internal/spec"
	Writer "github.com/onsi/ginkgo/internal/writer"
	"github.com/onsi/ginkgo/reporters"
	"github.com/onsi/ginkgo/types"

	"time"
)

type SpecRunner struct {
	description     string
	beforeSuiteNode leafnodes.SuiteNode
	iterator        spec_iterator.SpecIterator
	afterSuiteNode  leafnodes.SuiteNode
	reporters       []reporters.Reporter
	startTime       time.Time
	suiteID         string
	runningSpec     *spec.Spec
	writer          Writer.WriterInterface
	config          config.GinkgoConfigType
	interrupted     bool
	processedSpecs  []*spec.Spec
	lock            *sync.Mutex
}

func New(description string, beforeSuiteNode leafnodes.SuiteNode, iterator spec_iterator.SpecIterator, afterSuiteNode leafnodes.SuiteNode, reporters []reporters.Reporter, writer Writer.WriterInterface, config config.GinkgoConfigType) *SpecRunner {
	return &SpecRunner{
		description:     description,
		beforeSuiteNode: beforeSuiteNode,
		iterator:        iterator,
		afterSuiteNode:  afterSuiteNode,
		reporters:       reporters,
		writer:          writer,
		config:          config,
		suiteID:         randomID(),
		lock:            &sync.Mutex{},
	}
}

func (runner *SpecRunner) Run() bool {
	if runner.config.DryRun {
		runner.performDryRun()
		return true
	}

	runner.reportSuiteWillBegin()
	signalRegistered := make(chan struct{})
	go runner.registerForInterrupts(signalRegistered)
	<-signalRegistered

	suitePassed := runner.runBeforeSuite()

	if suitePassed {
		suitePassed = runner.runSpecs()
	}

	runner.blockForeverIfInterrupted()

	suitePassed = runner.runAfterSuite() && suitePassed

	runner.reportSuiteDidEnd(suitePassed)

	return suitePassed
}

func (runner *SpecRunner) performDryRun() {
	runner.reportSuiteWillBegin()

	if runner.beforeSuiteNode != nil {
		summary := runner.beforeSuiteNode.Summary()
		summary.State = types.SpecStatePassed
		runner.reportBeforeSuite(summary)
	}

	for {
		spec, err := runner.iterator.Next()
		if err == spec_iterator.ErrClosed {
			break
		}
		if err != nil {
			fmt.Println("failed to iterate over tests:\n" + err.Error())
			break
		}

		runner.processedSpecs = append(runner.processedSpecs, spec)

		summary := spec.Summary(runner.suiteID)
		runner.reportSpecWillRun(summary)
		if summary.State == types.SpecStateInvalid {
			summary.State = types.SpecStatePassed
		}
		runner.reportSpecDidComplete(summary, false)
	}

	if runner.afterSuiteNode != nil {
		summary := runner.afterSuiteNode.Summary()
		summary.State = types.SpecStatePassed
		runner.reportAfterSuite(summary)
	}

	runner.reportSuiteDidEnd(true)
}

func (runner *SpecRunner) runBeforeSuite() bool {
	if runner.beforeSuiteNode == nil || runner.wasInterrupted() {
		return true
	}

	runner.writer.Truncate()
	conf := runner.config
	passed := runner.beforeSuiteNode.Run(conf.ParallelNode, conf.ParallelTotal, conf.SyncHost)
	if !passed {
		runner.writer.DumpOut()
	}
	runner.reportBeforeSuite(runner.beforeSuiteNode.Summary())
	return passed
}

func (runner *SpecRunner) runAfterSuite() bool {
	if runner.afterSuiteNode == nil {
		return true
	}

	runner.writer.Truncate()
	conf := runner.config
	passed := runner.afterSuiteNode.Run(conf.ParallelNode, conf.ParallelTotal, conf.SyncHost)
	if !passed {
		runner.writer.DumpOut()
	}
	runner.reportAfterSuite(runner.afterSuiteNode.Summary())
	return passed
}

func (runner *SpecRunner) runSpecs() bool {
	suiteFailed := false
	skipRemainingSpecs := false
	for {
		spec, err := runner.iterator.Next()
		if err == spec_iterator.ErrClosed {
			break
		}
		if err != nil {
			fmt.Println("failed to iterate over tests:\n" + err.Error())
			suiteFailed = true
			break
		}

		runner.processedSpecs = append(runner.processedSpecs, spec)

		if runner.wasInterrupted() {
			break
		}
		if skipRemainingSpecs {
			spec.Skip()
		}

		if !spec.Skipped() && !spec.Pending() {
			if passed := runner.runSpec(spec); !passed {
				suiteFailed = true
			}
		} else if spec.Pending() && runner.config.FailOnPending {
			runner.reportSpecWillRun(spec.Summary(runner.suiteID))
			suiteFailed = true
			runner.reportSpecDidComplete(spec.Summary(runner.suiteID), spec.Failed())
		} else {
			runner.reportSpecWillRun(spec.Summary(runner.suiteID))
			runner.reportSpecDidComplete(spec.Summary(runner.suiteID), spec.Failed())
		}

		if spec.Failed() && runner.config.FailFast {
			skipRemainingSpecs = true
		}
	}

	return !suiteFailed
}

func (runner *SpecRunner) runSpec(spec *spec.Spec) (passed bool) {
	maxAttempts := 1
	if runner.config.FlakeAttempts > 0 {
		// uninitialized configs count as 1
		maxAttempts = runner.config.FlakeAttempts
	}

	for i := 0; i < maxAttempts; i++ {
		runner.reportSpecWillRun(spec.Summary(runner.suiteID))
		runner.runningSpec = spec
		spec.Run(runner.writer)
		runner.runningSpec = nil
		runner.reportSpecDidComplete(spec.Summary(runner.suiteID), spec.Failed())
		if !spec.Failed() {
			return true
		}
	}
	return false
}

func (runner *SpecRunner) CurrentSpecSummary() (*types.SpecSummary, bool) {
	if runner.runningSpec == nil {
		return nil, false
	}

	return runner.runningSpec.Summary(runner.suiteID), true
}

func (runner *SpecRunner) registerForInterrupts(signalRegistered chan struct{}) {
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt, syscall.SIGTERM)
	close(signalRegistered)

	<-c
	signal.Stop(c)
	runner.markInterrupted()
	go runner.registerForHardInterrupts()
	runner.writer.DumpOutWithHeader(`
Received interrupt.  Emitting contents of GinkgoWriter...
---------------------------------------------------------
`)
	if runner.afterSuiteNode != nil {
		fmt.Fprint(os.Stderr, `
---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
`)
		runner.runAfterSuite()
	}
	runner.reportSuiteDidEnd(false)
	os.Exit(1)
}

func (runner *SpecRunner) registerForHardInterrupts() {
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt, syscall.SIGTERM)

	<-c
	fmt.Fprintln(os.Stderr, "\nReceived second interrupt.  Shutting down.")
	os.Exit(1)
}

func (runner *SpecRunner) blockForeverIfInterrupted() {
	runner.lock.Lock()
	interrupted := runner.interrupted
	runner.lock.Unlock()

	if interrupted {
		select {}
	}
}

func (runner *SpecRunner) markInterrupted() {
	runner.lock.Lock()
	defer runner.lock.Unlock()
	runner.interrupted = true
}

func (runner *SpecRunner) wasInterrupted() bool {
	runner.lock.Lock()
	defer runner.lock.Unlock()
	return runner.interrupted
}

func (runner *SpecRunner) reportSuiteWillBegin() {
	runner.startTime = time.Now()
	summary := runner.suiteWillBeginSummary()
	for _, reporter := range runner.reporters {
		reporter.SpecSuiteWillBegin(runner.config, summary)
	}
}

func (runner *SpecRunner) reportBeforeSuite(summary *types.SetupSummary) {
	for _, reporter := range runner.reporters {
		reporter.BeforeSuiteDidRun(summary)
	}
}

func (runner *SpecRunner) reportAfterSuite(summary *types.SetupSummary) {
	for _, reporter := range runner.reporters {
		reporter.AfterSuiteDidRun(summary)
	}
}

func (runner *SpecRunner) reportSpecWillRun(summary *types.SpecSummary) {
	runner.writer.Truncate()

	for _, reporter := range runner.reporters {
		reporter.SpecWillRun(summary)
	}
}

func (runner *SpecRunner) reportSpecDidComplete(summary *types.SpecSummary, failed bool) {
	if len(summary.CapturedOutput) == 0 {
		summary.CapturedOutput = string(runner.writer.Bytes())
	}
	for i := len(runner.reporters) - 1; i >= 1; i-- {
		runner.reporters[i].SpecDidComplete(summary)
	}

	if failed {
		runner.writer.DumpOut()
	}

	runner.reporters[0].SpecDidComplete(summary)
}

func (runner *SpecRunner) reportSuiteDidEnd(success bool) {
	summary := runner.suiteDidEndSummary(success)
	summary.RunTime = time.Since(runner.startTime)
	for _, reporter := range runner.reporters {
		reporter.SpecSuiteDidEnd(summary)
	}
}

func (runner *SpecRunner) countSpecsThatRanSatisfying(filter func(ex *spec.Spec) bool) (count int) {
	count = 0

	for _, spec := range runner.processedSpecs {
		if filter(spec) {
			count++
		}
	}

	return count
}

func (runner *SpecRunner) suiteDidEndSummary(success bool) *types.SuiteSummary {
	numberOfSpecsThatWillBeRun := runner.countSpecsThatRanSatisfying(func(ex *spec.Spec) bool {
		return !ex.Skipped() && !ex.Pending()
	})

	numberOfPendingSpecs := runner.countSpecsThatRanSatisfying(func(ex *spec.Spec) bool {
		return ex.Pending()
	})

	numberOfSkippedSpecs := runner.countSpecsThatRanSatisfying(func(ex *spec.Spec) bool {
		return ex.Skipped()
	})

	numberOfPassedSpecs := runner.countSpecsThatRanSatisfying(func(ex *spec.Spec) bool {
		return ex.Passed()
	})

	numberOfFlakedSpecs := runner.countSpecsThatRanSatisfying(func(ex *spec.Spec) bool {
		return ex.Flaked()
	})

	numberOfFailedSpecs := runner.countSpecsThatRanSatisfying(func(ex *spec.Spec) bool {
		return ex.Failed()
	})

	if runner.beforeSuiteNode != nil && !runner.beforeSuiteNode.Passed() && !runner.config.DryRun {
		var known bool
		numberOfSpecsThatWillBeRun, known = runner.iterator.NumberOfSpecsThatWillBeRunIfKnown()
		if !known {
			numberOfSpecsThatWillBeRun = runner.iterator.NumberOfSpecsPriorToIteration()
		}
		numberOfFailedSpecs = numberOfSpecsThatWillBeRun
	}

	return &types.SuiteSummary{
		SuiteDescription: runner.description,
		SuiteSucceeded:   success,
		SuiteID:          runner.suiteID,

		NumberOfSpecsBeforeParallelization: runner.iterator.NumberOfSpecsPriorToIteration(),
		NumberOfTotalSpecs:                 len(runner.processedSpecs),
		NumberOfSpecsThatWillBeRun:         numberOfSpecsThatWillBeRun,
		NumberOfPendingSpecs:               numberOfPendingSpecs,
		NumberOfSkippedSpecs:               numberOfSkippedSpecs,
		NumberOfPassedSpecs:                numberOfPassedSpecs,
		NumberOfFailedSpecs:                numberOfFailedSpecs,
		NumberOfFlakedSpecs:                numberOfFlakedSpecs,
	}
}

func (runner *SpecRunner) suiteWillBeginSummary() *types.SuiteSummary {
	numTotal, known := runner.iterator.NumberOfSpecsToProcessIfKnown()
	if !known {
		numTotal = -1
	}

	numToRun, known := runner.iterator.NumberOfSpecsThatWillBeRunIfKnown()
	if !known {
		numToRun = -1
	}

	return &types.SuiteSummary{
		SuiteDescription: runner.description,
		SuiteID:          runner.suiteID,

		NumberOfSpecsBeforeParallelization: runner.iterator.NumberOfSpecsPriorToIteration(),
		NumberOfTotalSpecs:                 numTotal,
		NumberOfSpecsThatWillBeRun:         numToRun,
		NumberOfPendingSpecs:               -1,
		NumberOfSkippedSpecs:               -1,
		NumberOfPassedSpecs:                -1,
		NumberOfFailedSpecs:                -1,
		NumberOfFlakedSpecs:                -1,
	}
}
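runSpec's retry loop is what backs the FlakeAttempts setting: a failing spec is re-run up to FlakeAttempts times, stopping at the first pass, and a spec that eventually passes is counted as flaked rather than failed. The control flow in isolation:

```go
package main

import "fmt"

func main() {
	maxAttempts := 3        // stands in for config.FlakeAttempts
	failuresBeforePass := 2 // this fake spec fails twice, then passes

	passed, attempts := false, 0
	for i := 0; i < maxAttempts; i++ {
		attempts++
		ok := i >= failuresBeforePass
		fmt.Printf("attempt %d: passed=%v\n", attempts, ok)
		if ok {
			passed = true
			break // stop at the first pass, as runSpec does
		}
	}
	// passed on a retry => reported as flaked, not failed
	fmt.Printf("passed=%v flaked=%v\n", passed, passed && attempts > 1)
}
```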
227 vendor/github.com/onsi/ginkgo/internal/suite/suite.go generated vendored
@@ -1,227 +0,0 @@
package suite

import (
	"math/rand"
	"net/http"
	"time"

	"github.com/onsi/ginkgo/internal/spec_iterator"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/internal/containernode"
	"github.com/onsi/ginkgo/internal/failer"
	"github.com/onsi/ginkgo/internal/leafnodes"
	"github.com/onsi/ginkgo/internal/spec"
	"github.com/onsi/ginkgo/internal/specrunner"
	"github.com/onsi/ginkgo/internal/writer"
	"github.com/onsi/ginkgo/reporters"
	"github.com/onsi/ginkgo/types"
)

type ginkgoTestingT interface {
	Fail()
}

type deferredContainerNode struct {
	text         string
	body         func()
	flag         types.FlagType
	codeLocation types.CodeLocation
}

type Suite struct {
	topLevelContainer *containernode.ContainerNode
	currentContainer  *containernode.ContainerNode

	deferredContainerNodes []deferredContainerNode

	containerIndex      int
	beforeSuiteNode     leafnodes.SuiteNode
	afterSuiteNode      leafnodes.SuiteNode
	runner              *specrunner.SpecRunner
	failer              *failer.Failer
	running             bool
	expandTopLevelNodes bool
}

func New(failer *failer.Failer) *Suite {
	topLevelContainer := containernode.New("[Top Level]", types.FlagTypeNone, types.CodeLocation{})

	return &Suite{
		topLevelContainer:      topLevelContainer,
		currentContainer:       topLevelContainer,
		failer:                 failer,
		containerIndex:         1,
		deferredContainerNodes: []deferredContainerNode{},
	}
}

func (suite *Suite) Run(t ginkgoTestingT, description string, reporters []reporters.Reporter, writer writer.WriterInterface, config config.GinkgoConfigType) (bool, bool) {
	if config.ParallelTotal < 1 {
		panic("ginkgo.parallel.total must be >= 1")
	}

	if config.ParallelNode > config.ParallelTotal || config.ParallelNode < 1 {
		panic("ginkgo.parallel.node is one-indexed and must be <= ginkgo.parallel.total")
	}

	suite.expandTopLevelNodes = true
	for _, deferredNode := range suite.deferredContainerNodes {
		suite.PushContainerNode(deferredNode.text, deferredNode.body, deferredNode.flag, deferredNode.codeLocation)
	}

	r := rand.New(rand.NewSource(config.RandomSeed))
	suite.topLevelContainer.Shuffle(r)
	iterator, hasProgrammaticFocus := suite.generateSpecsIterator(description, config)
	suite.runner = specrunner.New(description, suite.beforeSuiteNode, iterator, suite.afterSuiteNode, reporters, writer, config)

	suite.running = true
	success := suite.runner.Run()
	if !success {
		t.Fail()
	}
	return success, hasProgrammaticFocus
}

func (suite *Suite) generateSpecsIterator(description string, config config.GinkgoConfigType) (spec_iterator.SpecIterator, bool) {
	specsSlice := []*spec.Spec{}
	suite.topLevelContainer.BackPropagateProgrammaticFocus()
	for _, collatedNodes := range suite.topLevelContainer.Collate() {
		specsSlice = append(specsSlice, spec.New(collatedNodes.Subject, collatedNodes.Containers, config.EmitSpecProgress))
	}

	specs := spec.NewSpecs(specsSlice)
	specs.RegexScansFilePath = config.RegexScansFilePath

	if config.RandomizeAllSpecs {
		specs.Shuffle(rand.New(rand.NewSource(config.RandomSeed)))
	}

	specs.ApplyFocus(description, config.FocusStrings, config.SkipStrings)

	if config.SkipMeasurements {
		specs.SkipMeasurements()
	}

	var iterator spec_iterator.SpecIterator

	if config.ParallelTotal > 1 {
		iterator = spec_iterator.NewParallelIterator(specs.Specs(), config.SyncHost)
		resp, err := http.Get(config.SyncHost + "/has-counter")
		if err != nil || resp.StatusCode != http.StatusOK {
			iterator = spec_iterator.NewShardedParallelIterator(specs.Specs(), config.ParallelTotal, config.ParallelNode)
		}
	} else {
		iterator = spec_iterator.NewSerialIterator(specs.Specs())
	}

	return iterator, specs.HasProgrammaticFocus()
}

func (suite *Suite) CurrentRunningSpecSummary() (*types.SpecSummary, bool) {
	if !suite.running {
		return nil, false
	}
	return suite.runner.CurrentSpecSummary()
}

func (suite *Suite) SetBeforeSuiteNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.beforeSuiteNode != nil {
		panic("You may only call BeforeSuite once!")
	}
	suite.beforeSuiteNode = leafnodes.NewBeforeSuiteNode(body, codeLocation, timeout, suite.failer)
}

func (suite *Suite) SetAfterSuiteNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.afterSuiteNode != nil {
		panic("You may only call AfterSuite once!")
	}
	suite.afterSuiteNode = leafnodes.NewAfterSuiteNode(body, codeLocation, timeout, suite.failer)
}

func (suite *Suite) SetSynchronizedBeforeSuiteNode(bodyA interface{}, bodyB interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.beforeSuiteNode != nil {
		panic("You may only call BeforeSuite once!")
	}
	suite.beforeSuiteNode = leafnodes.NewSynchronizedBeforeSuiteNode(bodyA, bodyB, codeLocation, timeout, suite.failer)
}

func (suite *Suite) SetSynchronizedAfterSuiteNode(bodyA interface{}, bodyB interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.afterSuiteNode != nil {
		panic("You may only call AfterSuite once!")
	}
	suite.afterSuiteNode = leafnodes.NewSynchronizedAfterSuiteNode(bodyA, bodyB, codeLocation, timeout, suite.failer)
}

func (suite *Suite) PushContainerNode(text string, body func(), flag types.FlagType, codeLocation types.CodeLocation) {
	/*
		We defer walking the container nodes (which immediately evaluates the `body` function)
		until `RunSpecs` is called.  We do this by storing off the deferred container nodes.  Then, when
		`RunSpecs` is called we actually go through and add the container nodes to the test structure.

		This allows us to defer calling all the `body` functions until _after_ the top level functions
		have been walked, _after_ func init()s have been called, and _after_ `go test` has called `flag.Parse()`.

		This allows users to load up configuration information in the `TestX` go test hook just before `RunSpecs`
		is invoked and solves issues like #693 and makes the lifecycle easier to reason about.

	*/
	if !suite.expandTopLevelNodes {
		suite.deferredContainerNodes = append(suite.deferredContainerNodes, deferredContainerNode{text, body, flag, codeLocation})
		return
	}

	container := containernode.New(text, flag, codeLocation)
	suite.currentContainer.PushContainerNode(container)

	previousContainer := suite.currentContainer
	suite.currentContainer = container
	suite.containerIndex++

	body()

	suite.containerIndex--
	suite.currentContainer = previousContainer
}

func (suite *Suite) PushItNode(text string, body interface{}, flag types.FlagType, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.running {
		suite.failer.Fail("You may only call It from within a Describe, Context or When", codeLocation)
	}
	suite.currentContainer.PushSubjectNode(leafnodes.NewItNode(text, body, flag, codeLocation, timeout, suite.failer, suite.containerIndex))
}

func (suite *Suite) PushMeasureNode(text string, body interface{}, flag types.FlagType, codeLocation types.CodeLocation, samples int) {
	if suite.running {
		suite.failer.Fail("You may only call Measure from within a Describe, Context or When", codeLocation)
	}
	suite.currentContainer.PushSubjectNode(leafnodes.NewMeasureNode(text, body, flag, codeLocation, samples, suite.failer, suite.containerIndex))
}

func (suite *Suite) PushBeforeEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.running {
		suite.failer.Fail("You may only call BeforeEach from within a Describe, Context or When", codeLocation)
	}
	suite.currentContainer.PushSetupNode(leafnodes.NewBeforeEachNode(body, codeLocation, timeout, suite.failer, suite.containerIndex))
}

func (suite *Suite) PushJustBeforeEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.running {
		suite.failer.Fail("You may only call JustBeforeEach from within a Describe, Context or When", codeLocation)
	}
	suite.currentContainer.PushSetupNode(leafnodes.NewJustBeforeEachNode(body, codeLocation, timeout, suite.failer, suite.containerIndex))
}

func (suite *Suite) PushJustAfterEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.running {
		suite.failer.Fail("You may only call JustAfterEach from within a Describe or Context", codeLocation)
	}
	suite.currentContainer.PushSetupNode(leafnodes.NewJustAfterEachNode(body, codeLocation, timeout, suite.failer, suite.containerIndex))
}

func (suite *Suite) PushAfterEachNode(body interface{}, codeLocation types.CodeLocation, timeout time.Duration) {
	if suite.running {
		suite.failer.Fail("You may only call AfterEach from within a Describe, Context or When", codeLocation)
	}
	suite.currentContainer.PushSetupNode(leafnodes.NewAfterEachNode(body, codeLocation, timeout, suite.failer, suite.containerIndex))
}
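The deferred-container mechanism in PushContainerNode is a two-phase pattern: Describe bodies are recorded as closures at init time and only evaluated once RunSpecs flips expandTopLevelNodes, after flag.Parse and TestX setup have run. The pattern stripped to its essentials:

```go
package main

import "fmt"

type deferredNode struct {
	text string
	body func()
}

func main() {
	var deferred []deferredNode
	expand := false

	push := func(text string, body func()) {
		if !expand {
			// Phase 1: just record the closure, as PushContainerNode does
			// before RunSpecs is called.
			deferred = append(deferred, deferredNode{text, body})
			return
		}
		fmt.Println("entering container:", text)
		body()
	}

	push("Describe: networking", func() { fmt.Println("  It: resolves DNS") })
	fmt.Println("nothing has run yet;", len(deferred), "node deferred")

	// Phase 2: RunSpecs time -- replay the recorded nodes.
	expand = true
	for _, n := range deferred {
		push(n.text, n.body)
	}
}
```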
36 vendor/github.com/onsi/ginkgo/internal/writer/fake_writer.go generated vendored
@@ -1,36 +0,0 @@
package writer

type FakeGinkgoWriter struct {
	EventStream []string
}

func NewFake() *FakeGinkgoWriter {
	return &FakeGinkgoWriter{
		EventStream: []string{},
	}
}

func (writer *FakeGinkgoWriter) AddEvent(event string) {
	writer.EventStream = append(writer.EventStream, event)
}

func (writer *FakeGinkgoWriter) Truncate() {
	writer.EventStream = append(writer.EventStream, "TRUNCATE")
}

func (writer *FakeGinkgoWriter) DumpOut() {
	writer.EventStream = append(writer.EventStream, "DUMP")
}

func (writer *FakeGinkgoWriter) DumpOutWithHeader(header string) {
	writer.EventStream = append(writer.EventStream, "DUMP_WITH_HEADER: "+header)
}

func (writer *FakeGinkgoWriter) Bytes() []byte {
	writer.EventStream = append(writer.EventStream, "BYTES")
	return nil
}

func (writer *FakeGinkgoWriter) Write(data []byte) (n int, err error) {
	return 0, nil
}
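For illustration only — a minimal sketch, not code from this commit — this is how the fake is typically driven from a test inside the ginkgo module (the package is internal, so it is only importable from within the module):

```go
package main

import (
	"fmt"

	"github.com/onsi/ginkgo/internal/writer"
)

func main() {
	// The fake records every operation as a string event instead of
	// producing output, so tests can assert on the sequence of calls.
	fake := writer.NewFake()
	fake.AddEvent("spec started")
	fake.Truncate()
	fake.DumpOut()

	fmt.Println(fake.EventStream) // [spec started TRUNCATE DUMP]
}
```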
89
vendor/github.com/onsi/ginkgo/internal/writer/writer.go
generated
vendored
@ -1,89 +0,0 @@
package writer

import (
	"bytes"
	"io"
	"sync"
)

type WriterInterface interface {
	io.Writer

	Truncate()
	DumpOut()
	DumpOutWithHeader(header string)
	Bytes() []byte
}

type Writer struct {
	buffer     *bytes.Buffer
	outWriter  io.Writer
	lock       *sync.Mutex
	stream     bool
	redirector io.Writer
}

func New(outWriter io.Writer) *Writer {
	return &Writer{
		buffer:    &bytes.Buffer{},
		lock:      &sync.Mutex{},
		outWriter: outWriter,
		stream:    true,
	}
}

func (w *Writer) AndRedirectTo(writer io.Writer) {
	w.redirector = writer
}

func (w *Writer) SetStream(stream bool) {
	w.lock.Lock()
	defer w.lock.Unlock()
	w.stream = stream
}

func (w *Writer) Write(b []byte) (n int, err error) {
	w.lock.Lock()
	defer w.lock.Unlock()

	n, err = w.buffer.Write(b)
	if w.redirector != nil {
		w.redirector.Write(b)
	}
	if w.stream {
		return w.outWriter.Write(b)
	}
	return n, err
}

func (w *Writer) Truncate() {
	w.lock.Lock()
	defer w.lock.Unlock()
	w.buffer.Reset()
}

func (w *Writer) DumpOut() {
	w.lock.Lock()
	defer w.lock.Unlock()
	if !w.stream {
		w.buffer.WriteTo(w.outWriter)
	}
}

func (w *Writer) Bytes() []byte {
	w.lock.Lock()
	defer w.lock.Unlock()
	b := w.buffer.Bytes()
	copied := make([]byte, len(b))
	copy(copied, b)
	return copied
}

func (w *Writer) DumpOutWithHeader(header string) {
	w.lock.Lock()
	defer w.lock.Unlock()
	if !w.stream && w.buffer.Len() > 0 {
		w.outWriter.Write([]byte(header))
		w.buffer.WriteTo(w.outWriter)
	}
}
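A minimal sketch of the buffering contract (again assuming a caller inside the ginkgo module, since the package is internal): with streaming off, writes accumulate until `DumpOut` replays them, which is how Ginkgo holds back `GinkgoWriter` output until a spec fails.

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/onsi/ginkgo/internal/writer"
)

func main() {
	out := &bytes.Buffer{}
	w := writer.New(out)

	w.SetStream(false) // buffer instead of streaming, as the runner does per spec
	w.Write([]byte("captured output"))
	fmt.Printf("%q\n", out.String()) // "" — nothing flushed yet

	w.DumpOut() // on failure, the buffered output is replayed
	fmt.Printf("%q\n", out.String()) // "captured output"
}
```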
87
vendor/github.com/onsi/ginkgo/reporters/default_reporter.go
generated
vendored
@ -1,87 +0,0 @@
/*
Ginkgo's Default Reporter

A number of command line flags are available to tweak Ginkgo's default output.

These are documented [here](http://onsi.github.io/ginkgo/#running_tests)
*/
package reporters

import (
	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/reporters/stenographer"
	"github.com/onsi/ginkgo/types"
)

type DefaultReporter struct {
	config        config.DefaultReporterConfigType
	stenographer  stenographer.Stenographer
	specSummaries []*types.SpecSummary
}

func NewDefaultReporter(config config.DefaultReporterConfigType, stenographer stenographer.Stenographer) *DefaultReporter {
	return &DefaultReporter{
		config:       config,
		stenographer: stenographer,
	}
}

func (reporter *DefaultReporter) SpecSuiteWillBegin(config config.GinkgoConfigType, summary *types.SuiteSummary) {
	reporter.stenographer.AnnounceSuite(summary.SuiteDescription, config.RandomSeed, config.RandomizeAllSpecs, reporter.config.Succinct)
	if config.ParallelTotal > 1 {
		reporter.stenographer.AnnounceParallelRun(config.ParallelNode, config.ParallelTotal, reporter.config.Succinct)
	} else {
		reporter.stenographer.AnnounceNumberOfSpecs(summary.NumberOfSpecsThatWillBeRun, summary.NumberOfTotalSpecs, reporter.config.Succinct)
	}
}

func (reporter *DefaultReporter) BeforeSuiteDidRun(setupSummary *types.SetupSummary) {
	if setupSummary.State != types.SpecStatePassed {
		reporter.stenographer.AnnounceBeforeSuiteFailure(setupSummary, reporter.config.Succinct, reporter.config.FullTrace)
	}
}

func (reporter *DefaultReporter) AfterSuiteDidRun(setupSummary *types.SetupSummary) {
	if setupSummary.State != types.SpecStatePassed {
		reporter.stenographer.AnnounceAfterSuiteFailure(setupSummary, reporter.config.Succinct, reporter.config.FullTrace)
	}
}

func (reporter *DefaultReporter) SpecWillRun(specSummary *types.SpecSummary) {
	if reporter.config.Verbose && !reporter.config.Succinct && specSummary.State != types.SpecStatePending && specSummary.State != types.SpecStateSkipped {
		reporter.stenographer.AnnounceSpecWillRun(specSummary)
	}
}

func (reporter *DefaultReporter) SpecDidComplete(specSummary *types.SpecSummary) {
	switch specSummary.State {
	case types.SpecStatePassed:
		if specSummary.IsMeasurement {
			reporter.stenographer.AnnounceSuccessfulMeasurement(specSummary, reporter.config.Succinct)
		} else if specSummary.RunTime.Seconds() >= reporter.config.SlowSpecThreshold {
			reporter.stenographer.AnnounceSuccessfulSlowSpec(specSummary, reporter.config.Succinct)
		} else {
			reporter.stenographer.AnnounceSuccessfulSpec(specSummary)
			if reporter.config.ReportPassed {
				reporter.stenographer.AnnounceCapturedOutput(specSummary.CapturedOutput)
			}
		}
	case types.SpecStatePending:
		reporter.stenographer.AnnouncePendingSpec(specSummary, reporter.config.NoisyPendings && !reporter.config.Succinct)
	case types.SpecStateSkipped:
		reporter.stenographer.AnnounceSkippedSpec(specSummary, reporter.config.Succinct || !reporter.config.NoisySkippings, reporter.config.FullTrace)
	case types.SpecStateTimedOut:
		reporter.stenographer.AnnounceSpecTimedOut(specSummary, reporter.config.Succinct, reporter.config.FullTrace)
	case types.SpecStatePanicked:
		reporter.stenographer.AnnounceSpecPanicked(specSummary, reporter.config.Succinct, reporter.config.FullTrace)
	case types.SpecStateFailed:
		reporter.stenographer.AnnounceSpecFailed(specSummary, reporter.config.Succinct, reporter.config.FullTrace)
	}

	reporter.specSummaries = append(reporter.specSummaries, specSummary)
}

func (reporter *DefaultReporter) SpecSuiteDidEnd(summary *types.SuiteSummary) {
	reporter.stenographer.SummarizeFailures(reporter.specSummaries)
	reporter.stenographer.AnnounceSpecRunCompletion(summary, reporter.config.Succinct)
}
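A hedged sketch of how these pieces fit together; in practice Ginkgo wires this up internally, so constructing the reporter by hand is purely illustrative:

```go
package main

import (
	"os"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/reporters"
	"github.com/onsi/ginkgo/reporters/stenographer"
)

func main() {
	// A stenographer renders the console output; the reporter config
	// controls verbosity, succinctness, and slow-spec thresholds.
	steno := stenographer.New(true, false, os.Stdout)
	reporter := reporters.NewDefaultReporter(config.DefaultReporterConfig, steno)
	_ = reporter // would be handed to the spec runner
}
```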
59
vendor/github.com/onsi/ginkgo/reporters/fake_reporter.go
generated
vendored
@ -1,59 +0,0 @@
package reporters

import (
	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/types"
)

// FakeReporter is useful for testing purposes
type FakeReporter struct {
	Config config.GinkgoConfigType

	BeginSummary         *types.SuiteSummary
	BeforeSuiteSummary   *types.SetupSummary
	SpecWillRunSummaries []*types.SpecSummary
	SpecSummaries        []*types.SpecSummary
	AfterSuiteSummary    *types.SetupSummary
	EndSummary           *types.SuiteSummary

	SpecWillRunStub     func(specSummary *types.SpecSummary)
	SpecDidCompleteStub func(specSummary *types.SpecSummary)
}

func NewFakeReporter() *FakeReporter {
	return &FakeReporter{
		SpecWillRunSummaries: make([]*types.SpecSummary, 0),
		SpecSummaries:        make([]*types.SpecSummary, 0),
	}
}

func (fakeR *FakeReporter) SpecSuiteWillBegin(config config.GinkgoConfigType, summary *types.SuiteSummary) {
	fakeR.Config = config
	fakeR.BeginSummary = summary
}

func (fakeR *FakeReporter) BeforeSuiteDidRun(setupSummary *types.SetupSummary) {
	fakeR.BeforeSuiteSummary = setupSummary
}

func (fakeR *FakeReporter) SpecWillRun(specSummary *types.SpecSummary) {
	if fakeR.SpecWillRunStub != nil {
		fakeR.SpecWillRunStub(specSummary)
	}
	fakeR.SpecWillRunSummaries = append(fakeR.SpecWillRunSummaries, specSummary)
}

func (fakeR *FakeReporter) SpecDidComplete(specSummary *types.SpecSummary) {
	if fakeR.SpecDidCompleteStub != nil {
		fakeR.SpecDidCompleteStub(specSummary)
	}
	fakeR.SpecSummaries = append(fakeR.SpecSummaries, specSummary)
}

func (fakeR *FakeReporter) AfterSuiteDidRun(setupSummary *types.SetupSummary) {
	fakeR.AfterSuiteSummary = setupSummary
}

func (fakeR *FakeReporter) SpecSuiteDidEnd(summary *types.SuiteSummary) {
	fakeR.EndSummary = summary
}
178
vendor/github.com/onsi/ginkgo/reporters/junit_reporter.go
generated
vendored
@ -1,178 +0,0 @@
/*

JUnit XML Reporter for Ginkgo

For usage instructions: http://onsi.github.io/ginkgo/#generating_junit_xml_output

*/

package reporters

import (
	"encoding/xml"
	"fmt"
	"math"
	"os"
	"path/filepath"
	"strings"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/types"
)

type JUnitTestSuite struct {
	XMLName   xml.Name        `xml:"testsuite"`
	TestCases []JUnitTestCase `xml:"testcase"`
	Name      string          `xml:"name,attr"`
	Tests     int             `xml:"tests,attr"`
	Failures  int             `xml:"failures,attr"`
	Errors    int             `xml:"errors,attr"`
	Time      float64         `xml:"time,attr"`
}

type JUnitTestCase struct {
	Name           string               `xml:"name,attr"`
	ClassName      string               `xml:"classname,attr"`
	FailureMessage *JUnitFailureMessage `xml:"failure,omitempty"`
	Skipped        *JUnitSkipped        `xml:"skipped,omitempty"`
	Time           float64              `xml:"time,attr"`
	SystemOut      string               `xml:"system-out,omitempty"`
}

type JUnitFailureMessage struct {
	Type    string `xml:"type,attr"`
	Message string `xml:",chardata"`
}

type JUnitSkipped struct {
	Message string `xml:",chardata"`
}

type JUnitReporter struct {
	suite          JUnitTestSuite
	filename       string
	testSuiteName  string
	ReporterConfig config.DefaultReporterConfigType
}

// NewJUnitReporter creates a new JUnit XML reporter. The XML will be stored in the passed in filename.
func NewJUnitReporter(filename string) *JUnitReporter {
	return &JUnitReporter{
		filename: filename,
	}
}

func (reporter *JUnitReporter) SpecSuiteWillBegin(ginkgoConfig config.GinkgoConfigType, summary *types.SuiteSummary) {
	reporter.suite = JUnitTestSuite{
		Name:      summary.SuiteDescription,
		TestCases: []JUnitTestCase{},
	}
	reporter.testSuiteName = summary.SuiteDescription
	reporter.ReporterConfig = config.DefaultReporterConfig
}

func (reporter *JUnitReporter) SpecWillRun(specSummary *types.SpecSummary) {
}

func (reporter *JUnitReporter) BeforeSuiteDidRun(setupSummary *types.SetupSummary) {
	reporter.handleSetupSummary("BeforeSuite", setupSummary)
}

func (reporter *JUnitReporter) AfterSuiteDidRun(setupSummary *types.SetupSummary) {
	reporter.handleSetupSummary("AfterSuite", setupSummary)
}

func failureMessage(failure types.SpecFailure) string {
	return fmt.Sprintf("%s\n%s\n%s", failure.ComponentCodeLocation.String(), failure.Message, failure.Location.String())
}

func (reporter *JUnitReporter) handleSetupSummary(name string, setupSummary *types.SetupSummary) {
	if setupSummary.State != types.SpecStatePassed {
		testCase := JUnitTestCase{
			Name:      name,
			ClassName: reporter.testSuiteName,
		}

		testCase.FailureMessage = &JUnitFailureMessage{
			Type:    reporter.failureTypeForState(setupSummary.State),
			Message: failureMessage(setupSummary.Failure),
		}
		testCase.SystemOut = setupSummary.CapturedOutput
		testCase.Time = setupSummary.RunTime.Seconds()
		reporter.suite.TestCases = append(reporter.suite.TestCases, testCase)
	}
}

func (reporter *JUnitReporter) SpecDidComplete(specSummary *types.SpecSummary) {
	testCase := JUnitTestCase{
		Name:      strings.Join(specSummary.ComponentTexts[1:], " "),
		ClassName: reporter.testSuiteName,
	}
	if reporter.ReporterConfig.ReportPassed && specSummary.State == types.SpecStatePassed {
		testCase.SystemOut = specSummary.CapturedOutput
	}
	if specSummary.State == types.SpecStateFailed || specSummary.State == types.SpecStateTimedOut || specSummary.State == types.SpecStatePanicked {
		testCase.FailureMessage = &JUnitFailureMessage{
			Type:    reporter.failureTypeForState(specSummary.State),
			Message: failureMessage(specSummary.Failure),
		}
		if specSummary.State == types.SpecStatePanicked {
			testCase.FailureMessage.Message += fmt.Sprintf("\n\nPanic: %s\n\nFull stack:\n%s",
				specSummary.Failure.ForwardedPanic,
				specSummary.Failure.Location.FullStackTrace)
		}
		testCase.SystemOut = specSummary.CapturedOutput
	}
	if specSummary.State == types.SpecStateSkipped || specSummary.State == types.SpecStatePending {
		testCase.Skipped = &JUnitSkipped{}
		if specSummary.Failure.Message != "" {
			testCase.Skipped.Message = failureMessage(specSummary.Failure)
		}
	}
	testCase.Time = specSummary.RunTime.Seconds()
	reporter.suite.TestCases = append(reporter.suite.TestCases, testCase)
}

func (reporter *JUnitReporter) SpecSuiteDidEnd(summary *types.SuiteSummary) {
	reporter.suite.Tests = summary.NumberOfSpecsThatWillBeRun
	reporter.suite.Time = math.Trunc(summary.RunTime.Seconds()*1000) / 1000
	reporter.suite.Failures = summary.NumberOfFailedSpecs
	reporter.suite.Errors = 0
	if reporter.ReporterConfig.ReportFile != "" {
		reporter.filename = reporter.ReporterConfig.ReportFile
		fmt.Printf("\nJUnit path was configured: %s\n", reporter.filename)
	}
	filePath, _ := filepath.Abs(reporter.filename)
	dirPath := filepath.Dir(filePath)
	err := os.MkdirAll(dirPath, os.ModePerm)
	if err != nil {
		fmt.Printf("\nFailed to create JUnit directory: %s\n\t%s", filePath, err.Error())
	}
	file, err := os.Create(filePath)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to create JUnit report file: %s\n\t%s", filePath, err.Error())
	}
	defer file.Close()
	file.WriteString(xml.Header)
	encoder := xml.NewEncoder(file)
	encoder.Indent("  ", "    ")
	err = encoder.Encode(reporter.suite)
	if err == nil {
		fmt.Fprintf(os.Stdout, "\nJUnit report was created: %s\n", filePath)
	} else {
		fmt.Fprintf(os.Stderr, "\nFailed to generate JUnit report data:\n\t%s", err.Error())
	}
}

func (reporter *JUnitReporter) failureTypeForState(state types.SpecState) string {
	switch state {
	case types.SpecStateFailed:
		return "Failure"
	case types.SpecStateTimedOut:
		return "Timeout"
	case types.SpecStatePanicked:
		return "Panic"
	default:
		return ""
	}
}
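The usual way this reporter was attached in a v1 suite, per the Ginkgo docs (`junit.xml` and the suite name here are placeholders):

```go
package books_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	"github.com/onsi/ginkgo/reporters"
)

func TestBooks(t *testing.T) {
	RegisterFailHandler(Fail)
	// Attach the JUnit reporter alongside the default console reporter;
	// each completed spec becomes a <testcase> in the resulting XML file.
	junitReporter := reporters.NewJUnitReporter("junit.xml")
	RunSpecsWithDefaultAndCustomReporters(t, "Books Suite", []Reporter{junitReporter})
}
```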
15
vendor/github.com/onsi/ginkgo/reporters/reporter.go
generated
vendored
@ -1,15 +0,0 @@
package reporters

import (
	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/types"
)

type Reporter interface {
	SpecSuiteWillBegin(config config.GinkgoConfigType, summary *types.SuiteSummary)
	BeforeSuiteDidRun(setupSummary *types.SetupSummary)
	SpecWillRun(specSummary *types.SpecSummary)
	SpecDidComplete(specSummary *types.SpecSummary)
	AfterSuiteDidRun(setupSummary *types.SetupSummary)
	SpecSuiteDidEnd(summary *types.SuiteSummary)
}
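Any type with these six methods can be handed to Ginkgo v1 as a custom reporter. A minimal sketch (the `specCounter` name and behavior are invented for illustration):

```go
package reporters

import (
	"fmt"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/types"
)

// specCounter is a minimal Reporter: it ignores every lifecycle event
// except spec completion, which it tallies, and prints the total at the end.
type specCounter struct {
	completed int
}

func (r *specCounter) SpecSuiteWillBegin(config config.GinkgoConfigType, summary *types.SuiteSummary) {}
func (r *specCounter) BeforeSuiteDidRun(setupSummary *types.SetupSummary)                             {}
func (r *specCounter) SpecWillRun(specSummary *types.SpecSummary)                                     {}
func (r *specCounter) SpecDidComplete(specSummary *types.SpecSummary) {
	r.completed++
}
func (r *specCounter) AfterSuiteDidRun(setupSummary *types.SetupSummary) {}
func (r *specCounter) SpecSuiteDidEnd(summary *types.SuiteSummary) {
	fmt.Printf("completed %d specs\n", r.completed)
}
```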
64
vendor/github.com/onsi/ginkgo/reporters/stenographer/console_logging.go
generated
vendored
@ -1,64 +0,0 @@
package stenographer

import (
	"fmt"
	"strings"
)

func (s *consoleStenographer) colorize(colorCode string, format string, args ...interface{}) string {
	var out string

	if len(args) > 0 {
		out = fmt.Sprintf(format, args...)
	} else {
		out = format
	}

	if s.color {
		return fmt.Sprintf("%s%s%s", colorCode, out, defaultStyle)
	} else {
		return out
	}
}

func (s *consoleStenographer) printBanner(text string, bannerCharacter string) {
	fmt.Fprintln(s.w, text)
	fmt.Fprintln(s.w, strings.Repeat(bannerCharacter, len(text)))
}

func (s *consoleStenographer) printNewLine() {
	fmt.Fprintln(s.w, "")
}

func (s *consoleStenographer) printDelimiter() {
	fmt.Fprintln(s.w, s.colorize(grayColor, "%s", strings.Repeat("-", 30)))
}

func (s *consoleStenographer) print(indentation int, format string, args ...interface{}) {
	fmt.Fprint(s.w, s.indent(indentation, format, args...))
}

func (s *consoleStenographer) println(indentation int, format string, args ...interface{}) {
	fmt.Fprintln(s.w, s.indent(indentation, format, args...))
}

func (s *consoleStenographer) indent(indentation int, format string, args ...interface{}) string {
	var text string

	if len(args) > 0 {
		text = fmt.Sprintf(format, args...)
	} else {
		text = format
	}

	stringArray := strings.Split(text, "\n")
	padding := ""
	if indentation >= 0 {
		padding = strings.Repeat("  ", indentation)
	}
	for i, s := range stringArray {
		stringArray[i] = fmt.Sprintf("%s%s", padding, s)
	}

	return strings.Join(stringArray, "\n")
}
142
vendor/github.com/onsi/ginkgo/reporters/stenographer/fake_stenographer.go
generated
vendored
@ -1,142 +0,0 @@
package stenographer

import (
	"sync"

	"github.com/onsi/ginkgo/types"
)

func NewFakeStenographerCall(method string, args ...interface{}) FakeStenographerCall {
	return FakeStenographerCall{
		Method: method,
		Args:   args,
	}
}

type FakeStenographer struct {
	calls []FakeStenographerCall
	lock  *sync.Mutex
}

type FakeStenographerCall struct {
	Method string
	Args   []interface{}
}

func NewFakeStenographer() *FakeStenographer {
	stenographer := &FakeStenographer{
		lock: &sync.Mutex{},
	}
	stenographer.Reset()
	return stenographer
}

func (stenographer *FakeStenographer) Calls() []FakeStenographerCall {
	stenographer.lock.Lock()
	defer stenographer.lock.Unlock()

	return stenographer.calls
}

func (stenographer *FakeStenographer) Reset() {
	stenographer.lock.Lock()
	defer stenographer.lock.Unlock()

	stenographer.calls = make([]FakeStenographerCall, 0)
}

func (stenographer *FakeStenographer) CallsTo(method string) []FakeStenographerCall {
	stenographer.lock.Lock()
	defer stenographer.lock.Unlock()

	results := make([]FakeStenographerCall, 0)
	for _, call := range stenographer.calls {
		if call.Method == method {
			results = append(results, call)
		}
	}

	return results
}

func (stenographer *FakeStenographer) registerCall(method string, args ...interface{}) {
	stenographer.lock.Lock()
	defer stenographer.lock.Unlock()

	stenographer.calls = append(stenographer.calls, NewFakeStenographerCall(method, args...))
}

func (stenographer *FakeStenographer) AnnounceSuite(description string, randomSeed int64, randomizingAll bool, succinct bool) {
	stenographer.registerCall("AnnounceSuite", description, randomSeed, randomizingAll, succinct)
}

func (stenographer *FakeStenographer) AnnounceAggregatedParallelRun(nodes int, succinct bool) {
	stenographer.registerCall("AnnounceAggregatedParallelRun", nodes, succinct)
}

func (stenographer *FakeStenographer) AnnounceParallelRun(node int, nodes int, succinct bool) {
	stenographer.registerCall("AnnounceParallelRun", node, nodes, succinct)
}

func (stenographer *FakeStenographer) AnnounceNumberOfSpecs(specsToRun int, total int, succinct bool) {
	stenographer.registerCall("AnnounceNumberOfSpecs", specsToRun, total, succinct)
}

func (stenographer *FakeStenographer) AnnounceTotalNumberOfSpecs(total int, succinct bool) {
	stenographer.registerCall("AnnounceTotalNumberOfSpecs", total, succinct)
}

func (stenographer *FakeStenographer) AnnounceSpecRunCompletion(summary *types.SuiteSummary, succinct bool) {
	stenographer.registerCall("AnnounceSpecRunCompletion", summary, succinct)
}

func (stenographer *FakeStenographer) AnnounceSpecWillRun(spec *types.SpecSummary) {
	stenographer.registerCall("AnnounceSpecWillRun", spec)
}

func (stenographer *FakeStenographer) AnnounceBeforeSuiteFailure(summary *types.SetupSummary, succinct bool, fullTrace bool) {
	stenographer.registerCall("AnnounceBeforeSuiteFailure", summary, succinct, fullTrace)
}

func (stenographer *FakeStenographer) AnnounceAfterSuiteFailure(summary *types.SetupSummary, succinct bool, fullTrace bool) {
	stenographer.registerCall("AnnounceAfterSuiteFailure", summary, succinct, fullTrace)
}

func (stenographer *FakeStenographer) AnnounceCapturedOutput(output string) {
	stenographer.registerCall("AnnounceCapturedOutput", output)
}

func (stenographer *FakeStenographer) AnnounceSuccessfulSpec(spec *types.SpecSummary) {
	stenographer.registerCall("AnnounceSuccessfulSpec", spec)
}

func (stenographer *FakeStenographer) AnnounceSuccessfulSlowSpec(spec *types.SpecSummary, succinct bool) {
	stenographer.registerCall("AnnounceSuccessfulSlowSpec", spec, succinct)
}

func (stenographer *FakeStenographer) AnnounceSuccessfulMeasurement(spec *types.SpecSummary, succinct bool) {
	stenographer.registerCall("AnnounceSuccessfulMeasurement", spec, succinct)
}

func (stenographer *FakeStenographer) AnnouncePendingSpec(spec *types.SpecSummary, noisy bool) {
	stenographer.registerCall("AnnouncePendingSpec", spec, noisy)
}

func (stenographer *FakeStenographer) AnnounceSkippedSpec(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	stenographer.registerCall("AnnounceSkippedSpec", spec, succinct, fullTrace)
}

func (stenographer *FakeStenographer) AnnounceSpecTimedOut(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	stenographer.registerCall("AnnounceSpecTimedOut", spec, succinct, fullTrace)
}

func (stenographer *FakeStenographer) AnnounceSpecPanicked(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	stenographer.registerCall("AnnounceSpecPanicked", spec, succinct, fullTrace)
}

func (stenographer *FakeStenographer) AnnounceSpecFailed(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	stenographer.registerCall("AnnounceSpecFailed", spec, succinct, fullTrace)
}

func (stenographer *FakeStenographer) SummarizeFailures(summaries []*types.SpecSummary) {
	stenographer.registerCall("SummarizeFailures", summaries)
}
572
vendor/github.com/onsi/ginkgo/reporters/stenographer/stenographer.go
generated
vendored
@ -1,572 +0,0 @@
/*
The stenographer is used by Ginkgo's reporters to generate output.

Move along, nothing to see here.
*/

package stenographer

import (
	"fmt"
	"io"
	"runtime"
	"strings"

	"github.com/onsi/ginkgo/types"
)

const defaultStyle = "\x1b[0m"
const boldStyle = "\x1b[1m"
const redColor = "\x1b[91m"
const greenColor = "\x1b[32m"
const yellowColor = "\x1b[33m"
const cyanColor = "\x1b[36m"
const grayColor = "\x1b[90m"
const lightGrayColor = "\x1b[37m"

type cursorStateType int

const (
	cursorStateTop cursorStateType = iota
	cursorStateStreaming
	cursorStateMidBlock
	cursorStateEndBlock
)

type Stenographer interface {
	AnnounceSuite(description string, randomSeed int64, randomizingAll bool, succinct bool)
	AnnounceAggregatedParallelRun(nodes int, succinct bool)
	AnnounceParallelRun(node int, nodes int, succinct bool)
	AnnounceTotalNumberOfSpecs(total int, succinct bool)
	AnnounceNumberOfSpecs(specsToRun int, total int, succinct bool)
	AnnounceSpecRunCompletion(summary *types.SuiteSummary, succinct bool)

	AnnounceSpecWillRun(spec *types.SpecSummary)
	AnnounceBeforeSuiteFailure(summary *types.SetupSummary, succinct bool, fullTrace bool)
	AnnounceAfterSuiteFailure(summary *types.SetupSummary, succinct bool, fullTrace bool)

	AnnounceCapturedOutput(output string)

	AnnounceSuccessfulSpec(spec *types.SpecSummary)
	AnnounceSuccessfulSlowSpec(spec *types.SpecSummary, succinct bool)
	AnnounceSuccessfulMeasurement(spec *types.SpecSummary, succinct bool)

	AnnouncePendingSpec(spec *types.SpecSummary, noisy bool)
	AnnounceSkippedSpec(spec *types.SpecSummary, succinct bool, fullTrace bool)

	AnnounceSpecTimedOut(spec *types.SpecSummary, succinct bool, fullTrace bool)
	AnnounceSpecPanicked(spec *types.SpecSummary, succinct bool, fullTrace bool)
	AnnounceSpecFailed(spec *types.SpecSummary, succinct bool, fullTrace bool)

	SummarizeFailures(summaries []*types.SpecSummary)
}

func New(color bool, enableFlakes bool, writer io.Writer) Stenographer {
	denoter := "•"
	if runtime.GOOS == "windows" {
		denoter = "+"
	}
	return &consoleStenographer{
		color:        color,
		denoter:      denoter,
		cursorState:  cursorStateTop,
		enableFlakes: enableFlakes,
		w:            writer,
	}
}

type consoleStenographer struct {
	color        bool
	denoter      string
	cursorState  cursorStateType
	enableFlakes bool
	w            io.Writer
}

var alternatingColors = []string{defaultStyle, grayColor}

func (s *consoleStenographer) AnnounceSuite(description string, randomSeed int64, randomizingAll bool, succinct bool) {
	if succinct {
		s.print(0, "[%d] %s ", randomSeed, s.colorize(boldStyle, description))
		return
	}
	s.printBanner(fmt.Sprintf("Running Suite: %s", description), "=")
	s.print(0, "Random Seed: %s", s.colorize(boldStyle, "%d", randomSeed))
	if randomizingAll {
		s.print(0, " - Will randomize all specs")
	}
	s.printNewLine()
}

func (s *consoleStenographer) AnnounceParallelRun(node int, nodes int, succinct bool) {
	if succinct {
		s.print(0, "- node #%d ", node)
		return
	}
	s.println(0,
		"Parallel test node %s/%s.",
		s.colorize(boldStyle, "%d", node),
		s.colorize(boldStyle, "%d", nodes),
	)
	s.printNewLine()
}

func (s *consoleStenographer) AnnounceAggregatedParallelRun(nodes int, succinct bool) {
	if succinct {
		s.print(0, "- %d nodes ", nodes)
		return
	}
	s.println(0,
		"Running in parallel across %s nodes",
		s.colorize(boldStyle, "%d", nodes),
	)
	s.printNewLine()
}

func (s *consoleStenographer) AnnounceNumberOfSpecs(specsToRun int, total int, succinct bool) {
	if succinct {
		s.print(0, "- %d/%d specs ", specsToRun, total)
		s.stream()
		return
	}
	s.println(0,
		"Will run %s of %s specs",
		s.colorize(boldStyle, "%d", specsToRun),
		s.colorize(boldStyle, "%d", total),
	)

	s.printNewLine()
}

func (s *consoleStenographer) AnnounceTotalNumberOfSpecs(total int, succinct bool) {
	if succinct {
		s.print(0, "- %d specs ", total)
		s.stream()
		return
	}
	s.println(0,
		"Will run %s specs",
		s.colorize(boldStyle, "%d", total),
	)

	s.printNewLine()
}

func (s *consoleStenographer) AnnounceSpecRunCompletion(summary *types.SuiteSummary, succinct bool) {
	if succinct && summary.SuiteSucceeded {
		s.print(0, " %s %s ", s.colorize(greenColor, "SUCCESS!"), summary.RunTime)
		return
	}
	s.printNewLine()
	color := greenColor
	if !summary.SuiteSucceeded {
		color = redColor
	}
	s.println(0, s.colorize(boldStyle+color, "Ran %d of %d Specs in %.3f seconds", summary.NumberOfSpecsThatWillBeRun, summary.NumberOfTotalSpecs, summary.RunTime.Seconds()))

	status := ""
	if summary.SuiteSucceeded {
		status = s.colorize(boldStyle+greenColor, "SUCCESS!")
	} else {
		status = s.colorize(boldStyle+redColor, "FAIL!")
	}

	flakes := ""
	if s.enableFlakes {
		flakes = " | " + s.colorize(yellowColor+boldStyle, "%d Flaked", summary.NumberOfFlakedSpecs)
	}

	s.print(0,
		"%s -- %s | %s | %s | %s\n",
		status,
		s.colorize(greenColor+boldStyle, "%d Passed", summary.NumberOfPassedSpecs),
		s.colorize(redColor+boldStyle, "%d Failed", summary.NumberOfFailedSpecs)+flakes,
		s.colorize(yellowColor+boldStyle, "%d Pending", summary.NumberOfPendingSpecs),
		s.colorize(cyanColor+boldStyle, "%d Skipped", summary.NumberOfSkippedSpecs),
	)
}

func (s *consoleStenographer) AnnounceSpecWillRun(spec *types.SpecSummary) {
	s.startBlock()
	for i, text := range spec.ComponentTexts[1 : len(spec.ComponentTexts)-1] {
		s.print(0, s.colorize(alternatingColors[i%2], text)+" ")
	}

	indentation := 0
	if len(spec.ComponentTexts) > 2 {
		indentation = 1
		s.printNewLine()
	}
	index := len(spec.ComponentTexts) - 1
	s.print(indentation, s.colorize(boldStyle, spec.ComponentTexts[index]))
	s.printNewLine()
	s.print(indentation, s.colorize(lightGrayColor, spec.ComponentCodeLocations[index].String()))
	s.printNewLine()
	s.midBlock()
}

func (s *consoleStenographer) AnnounceBeforeSuiteFailure(summary *types.SetupSummary, succinct bool, fullTrace bool) {
	s.announceSetupFailure("BeforeSuite", summary, succinct, fullTrace)
}

func (s *consoleStenographer) AnnounceAfterSuiteFailure(summary *types.SetupSummary, succinct bool, fullTrace bool) {
	s.announceSetupFailure("AfterSuite", summary, succinct, fullTrace)
}

func (s *consoleStenographer) announceSetupFailure(name string, summary *types.SetupSummary, succinct bool, fullTrace bool) {
	s.startBlock()
	var message string
	switch summary.State {
	case types.SpecStateFailed:
		message = "Failure"
	case types.SpecStatePanicked:
		message = "Panic"
	case types.SpecStateTimedOut:
		message = "Timeout"
	}

	s.println(0, s.colorize(redColor+boldStyle, "%s [%.3f seconds]", message, summary.RunTime.Seconds()))

	indentation := s.printCodeLocationBlock([]string{name}, []types.CodeLocation{summary.CodeLocation}, summary.ComponentType, 0, summary.State, true)

	s.printNewLine()
	s.printFailure(indentation, summary.State, summary.Failure, fullTrace)

	s.endBlock()
}

func (s *consoleStenographer) AnnounceCapturedOutput(output string) {
	if output == "" {
		return
	}

	s.startBlock()
	s.println(0, output)
	s.midBlock()
}

func (s *consoleStenographer) AnnounceSuccessfulSpec(spec *types.SpecSummary) {
	s.print(0, s.colorize(greenColor, s.denoter))
	s.stream()
}

func (s *consoleStenographer) AnnounceSuccessfulSlowSpec(spec *types.SpecSummary, succinct bool) {
	s.printBlockWithMessage(
		s.colorize(greenColor, "%s [SLOW TEST:%.3f seconds]", s.denoter, spec.RunTime.Seconds()),
		"",
		spec,
		succinct,
	)
}

func (s *consoleStenographer) AnnounceSuccessfulMeasurement(spec *types.SpecSummary, succinct bool) {
	s.printBlockWithMessage(
		s.colorize(greenColor, "%s [MEASUREMENT]", s.denoter),
		s.measurementReport(spec, succinct),
		spec,
		succinct,
	)
}

func (s *consoleStenographer) AnnouncePendingSpec(spec *types.SpecSummary, noisy bool) {
	if noisy {
		s.printBlockWithMessage(
			s.colorize(yellowColor, "P [PENDING]"),
			"",
			spec,
			false,
		)
	} else {
		s.print(0, s.colorize(yellowColor, "P"))
		s.stream()
	}
}

func (s *consoleStenographer) AnnounceSkippedSpec(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	// Skips at runtime will have a non-empty spec.Failure. All others should be succinct.
	if succinct || spec.Failure == (types.SpecFailure{}) {
		s.print(0, s.colorize(cyanColor, "S"))
		s.stream()
	} else {
		s.startBlock()
		s.println(0, s.colorize(cyanColor+boldStyle, "S [SKIPPING]%s [%.3f seconds]", s.failureContext(spec.Failure.ComponentType), spec.RunTime.Seconds()))

		indentation := s.printCodeLocationBlock(spec.ComponentTexts, spec.ComponentCodeLocations, spec.Failure.ComponentType, spec.Failure.ComponentIndex, spec.State, succinct)

		s.printNewLine()
		s.printSkip(indentation, spec.Failure)
		s.endBlock()
	}
}

func (s *consoleStenographer) AnnounceSpecTimedOut(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	s.printSpecFailure(fmt.Sprintf("%s... Timeout", s.denoter), spec, succinct, fullTrace)
}

func (s *consoleStenographer) AnnounceSpecPanicked(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	s.printSpecFailure(fmt.Sprintf("%s! Panic", s.denoter), spec, succinct, fullTrace)
}

func (s *consoleStenographer) AnnounceSpecFailed(spec *types.SpecSummary, succinct bool, fullTrace bool) {
	s.printSpecFailure(fmt.Sprintf("%s Failure", s.denoter), spec, succinct, fullTrace)
}

func (s *consoleStenographer) SummarizeFailures(summaries []*types.SpecSummary) {
	failingSpecs := []*types.SpecSummary{}

	for _, summary := range summaries {
		if summary.HasFailureState() {
			failingSpecs = append(failingSpecs, summary)
		}
	}

	if len(failingSpecs) == 0 {
		return
	}

	s.printNewLine()
	s.printNewLine()
	plural := "s"
	if len(failingSpecs) == 1 {
		plural = ""
	}
	s.println(0, s.colorize(redColor+boldStyle, "Summarizing %d Failure%s:", len(failingSpecs), plural))
	for _, summary := range failingSpecs {
		s.printNewLine()
		if summary.HasFailureState() {
			if summary.TimedOut() {
				s.print(0, s.colorize(redColor+boldStyle, "[Timeout...] "))
			} else if summary.Panicked() {
				s.print(0, s.colorize(redColor+boldStyle, "[Panic!] "))
			} else if summary.Failed() {
				s.print(0, s.colorize(redColor+boldStyle, "[Fail] "))
			}
			s.printSpecContext(summary.ComponentTexts, summary.ComponentCodeLocations, summary.Failure.ComponentType, summary.Failure.ComponentIndex, summary.State, true)
			s.printNewLine()
			s.println(0, s.colorize(lightGrayColor, summary.Failure.Location.String()))
		}
	}
}

func (s *consoleStenographer) startBlock() {
	if s.cursorState == cursorStateStreaming {
		s.printNewLine()
		s.printDelimiter()
	} else if s.cursorState == cursorStateMidBlock {
		s.printNewLine()
	}
}

func (s *consoleStenographer) midBlock() {
	s.cursorState = cursorStateMidBlock
}

func (s *consoleStenographer) endBlock() {
	s.printDelimiter()
	s.cursorState = cursorStateEndBlock
}

func (s *consoleStenographer) stream() {
	s.cursorState = cursorStateStreaming
}

func (s *consoleStenographer) printBlockWithMessage(header string, message string, spec *types.SpecSummary, succinct bool) {
	s.startBlock()
	s.println(0, header)

	indentation := s.printCodeLocationBlock(spec.ComponentTexts, spec.ComponentCodeLocations, types.SpecComponentTypeInvalid, 0, spec.State, succinct)

	if message != "" {
		s.printNewLine()
		s.println(indentation, message)
	}

	s.endBlock()
}

func (s *consoleStenographer) printSpecFailure(message string, spec *types.SpecSummary, succinct bool, fullTrace bool) {
	s.startBlock()
	s.println(0, s.colorize(redColor+boldStyle, "%s%s [%.3f seconds]", message, s.failureContext(spec.Failure.ComponentType), spec.RunTime.Seconds()))

	indentation := s.printCodeLocationBlock(spec.ComponentTexts, spec.ComponentCodeLocations, spec.Failure.ComponentType, spec.Failure.ComponentIndex, spec.State, succinct)

	s.printNewLine()
	s.printFailure(indentation, spec.State, spec.Failure, fullTrace)
	s.endBlock()
}

func (s *consoleStenographer) failureContext(failedComponentType types.SpecComponentType) string {
	switch failedComponentType {
	case types.SpecComponentTypeBeforeSuite:
		return " in Suite Setup (BeforeSuite)"
	case types.SpecComponentTypeAfterSuite:
		return " in Suite Teardown (AfterSuite)"
	case types.SpecComponentTypeBeforeEach:
		return " in Spec Setup (BeforeEach)"
	case types.SpecComponentTypeJustBeforeEach:
		return " in Spec Setup (JustBeforeEach)"
	case types.SpecComponentTypeAfterEach:
		return " in Spec Teardown (AfterEach)"
	}

	return ""
}

func (s *consoleStenographer) printSkip(indentation int, spec types.SpecFailure) {
	s.println(indentation, s.colorize(cyanColor, spec.Message))
	s.printNewLine()
	s.println(indentation, spec.Location.String())
}

func (s *consoleStenographer) printFailure(indentation int, state types.SpecState, failure types.SpecFailure, fullTrace bool) {
	if state == types.SpecStatePanicked {
		s.println(indentation, s.colorize(redColor+boldStyle, failure.Message))
		s.println(indentation, s.colorize(redColor, failure.ForwardedPanic))
		s.println(indentation, failure.Location.String())
		s.printNewLine()
		s.println(indentation, s.colorize(redColor, "Full Stack Trace"))
		s.println(indentation, failure.Location.FullStackTrace)
	} else {
		s.println(indentation, s.colorize(redColor, failure.Message))
		s.printNewLine()
		s.println(indentation, failure.Location.String())
		if fullTrace {
			s.printNewLine()
			s.println(indentation, s.colorize(redColor, "Full Stack Trace"))
			s.println(indentation, failure.Location.FullStackTrace)
		}
	}
}

func (s *consoleStenographer) printSpecContext(componentTexts []string, componentCodeLocations []types.CodeLocation, failedComponentType types.SpecComponentType, failedComponentIndex int, state types.SpecState, succinct bool) int {
	startIndex := 1
	indentation := 0

	if len(componentTexts) == 1 {
		startIndex = 0
	}

	for i := startIndex; i < len(componentTexts); i++ {
		if (state.IsFailure() || state == types.SpecStateSkipped) && i == failedComponentIndex {
			color := redColor
			if state == types.SpecStateSkipped {
				color = cyanColor
			}
			blockType := ""
			switch failedComponentType {
			case types.SpecComponentTypeBeforeSuite:
				blockType = "BeforeSuite"
			case types.SpecComponentTypeAfterSuite:
				blockType = "AfterSuite"
			case types.SpecComponentTypeBeforeEach:
				blockType = "BeforeEach"
			case types.SpecComponentTypeJustBeforeEach:
				blockType = "JustBeforeEach"
			case types.SpecComponentTypeAfterEach:
				blockType = "AfterEach"
			case types.SpecComponentTypeIt:
				blockType = "It"
			case types.SpecComponentTypeMeasure:
				blockType = "Measurement"
			}
			if succinct {
				s.print(0, s.colorize(color+boldStyle, "[%s] %s ", blockType, componentTexts[i]))
			} else {
				s.println(indentation, s.colorize(color+boldStyle, "%s [%s]", componentTexts[i], blockType))
				s.println(indentation, s.colorize(grayColor, "%s", componentCodeLocations[i]))
			}
		} else {
			if succinct {
				s.print(0, s.colorize(alternatingColors[i%2], "%s ", componentTexts[i]))
			} else {
				s.println(indentation, componentTexts[i])
				s.println(indentation, s.colorize(grayColor, "%s", componentCodeLocations[i]))
			}
		}
		indentation++
	}

	return indentation
}

func (s *consoleStenographer) printCodeLocationBlock(componentTexts []string, componentCodeLocations []types.CodeLocation, failedComponentType types.SpecComponentType, failedComponentIndex int, state types.SpecState, succinct bool) int {
	indentation := s.printSpecContext(componentTexts, componentCodeLocations, failedComponentType, failedComponentIndex, state, succinct)

	if succinct {
		if len(componentTexts) > 0 {
			s.printNewLine()
			s.print(0, s.colorize(lightGrayColor, "%s", componentCodeLocations[len(componentCodeLocations)-1]))
		}
		s.printNewLine()
		indentation = 1
	} else {
		indentation--
	}

	return indentation
}

func (s *consoleStenographer) orderedMeasurementKeys(measurements map[string]*types.SpecMeasurement) []string {
	orderedKeys := make([]string, len(measurements))
	for key, measurement := range measurements {
		orderedKeys[measurement.Order] = key
	}
	return orderedKeys
}

func (s *consoleStenographer) measurementReport(spec *types.SpecSummary, succinct bool) string {
	if len(spec.Measurements) == 0 {
		return "Found no measurements"
	}

	message := []string{}
	orderedKeys := s.orderedMeasurementKeys(spec.Measurements)

	if succinct {
		message = append(message, fmt.Sprintf("%s samples:", s.colorize(boldStyle, "%d", spec.NumberOfSamples)))
		for _, key := range orderedKeys {
			measurement := spec.Measurements[key]
			message = append(message, fmt.Sprintf("  %s - %s: %s%s, %s: %s%s ± %s%s, %s: %s%s",
				s.colorize(boldStyle, "%s", measurement.Name),
				measurement.SmallestLabel,
				s.colorize(greenColor, measurement.PrecisionFmt(), measurement.Smallest),
				measurement.Units,
				measurement.AverageLabel,
				s.colorize(cyanColor, measurement.PrecisionFmt(), measurement.Average),
				measurement.Units,
				s.colorize(cyanColor, measurement.PrecisionFmt(), measurement.StdDeviation),
				measurement.Units,
				measurement.LargestLabel,
				s.colorize(redColor, measurement.PrecisionFmt(), measurement.Largest),
				measurement.Units,
			))
		}
	} else {
		message = append(message, fmt.Sprintf("Ran %s samples:", s.colorize(boldStyle, "%d", spec.NumberOfSamples)))
		for _, key := range orderedKeys {
			measurement := spec.Measurements[key]
			info := ""
			if measurement.Info != nil {
				message = append(message, fmt.Sprintf("%v", measurement.Info))
			}

			message = append(message, fmt.Sprintf("%s:\n%s  %s: %s%s\n  %s: %s%s\n  %s: %s%s ± %s%s",
				s.colorize(boldStyle, "%s", measurement.Name),
				info,
				measurement.SmallestLabel,
				s.colorize(greenColor, measurement.PrecisionFmt(), measurement.Smallest),
				measurement.Units,
				measurement.LargestLabel,
				s.colorize(redColor, measurement.PrecisionFmt(), measurement.Largest),
				measurement.Units,
				measurement.AverageLabel,
				s.colorize(cyanColor, measurement.PrecisionFmt(), measurement.Average),
				measurement.Units,
				s.colorize(cyanColor, measurement.PrecisionFmt(), measurement.StdDeviation),
				measurement.Units,
			))
		}
	}

	return strings.Join(message, "\n")
}
43
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-colorable/README.md
generated
vendored
@ -1,43 +0,0 @@
# go-colorable

Colorable writer for Windows.

For example, most logger packages don't show colors on Windows. (I know it can be done with ansicon, but I don't want to.)
This package makes it possible to handle ANSI color escape sequences on Windows.

## Too Bad!

(screenshot)

## So Good!

(screenshot)

## Usage

```go
logrus.SetFormatter(&logrus.TextFormatter{ForceColors: true})
logrus.SetOutput(colorable.NewColorableStdout())

logrus.Info("succeeded")
logrus.Warn("not correct")
logrus.Error("something error")
logrus.Fatal("panic")
```

You can compile the above code on non-Windows OSs.

## Installation

```
$ go get github.com/mattn/go-colorable
```

# License

MIT

# Author

Yasuhiro Matsumoto (a.k.a mattn)
@ -1,24 +0,0 @@
// +build !windows

package colorable

import (
	"io"
	"os"
)

func NewColorable(file *os.File) io.Writer {
	if file == nil {
		panic("nil passed instead of *os.File to NewColorable()")
	}

	return file
}

func NewColorableStdout() io.Writer {
	return os.Stdout
}

func NewColorableStderr() io.Writer {
	return os.Stderr
}
@ -1,57 +0,0 @@
package colorable

import (
	"bytes"
	"fmt"
	"io"
)

type NonColorable struct {
	out     io.Writer
	lastbuf bytes.Buffer
}

func NewNonColorable(w io.Writer) io.Writer {
	return &NonColorable{out: w}
}

func (w *NonColorable) Write(data []byte) (n int, err error) {
	er := bytes.NewBuffer(data)
loop:
	for {
		c1, _, err := er.ReadRune()
		if err != nil {
			break loop
		}
		if c1 != 0x1b {
			fmt.Fprint(w.out, string(c1))
			continue
		}
		c2, _, err := er.ReadRune()
		if err != nil {
			w.lastbuf.WriteRune(c1)
			break loop
		}
		if c2 != 0x5b {
			w.lastbuf.WriteRune(c1)
			w.lastbuf.WriteRune(c2)
			continue
		}

		var buf bytes.Buffer
		for {
			c, _, err := er.ReadRune()
			if err != nil {
				w.lastbuf.WriteRune(c1)
				w.lastbuf.WriteRune(c2)
				w.lastbuf.Write(buf.Bytes())
				break loop
			}
			if ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z') || c == '@' {
				break
			}
			buf.Write([]byte(string(c)))
		}
	}
	return len(data) - w.lastbuf.Len(), nil
}
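A short usage sketch (the import path assumes the vendored copy; upstream lives at github.com/mattn/go-colorable):

```go
package main

import (
	"fmt"
	"os"

	colorable "github.com/onsi/ginkgo/reporters/stenographer/support/go-colorable"
)

func main() {
	// NewNonColorable wraps a writer and strips ANSI escape sequences,
	// so colorized output degrades cleanly in log files and dumb terminals.
	w := colorable.NewNonColorable(os.Stdout)
	fmt.Fprintln(w, "\x1b[31mred?\x1b[0m no, plain") // prints: red? no, plain
}
```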
9
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty/LICENSE
generated
vendored
@ -1,9 +0,0 @@
Copyright (c) Yasuhiro MATSUMOTO <mattn.jp@gmail.com>

MIT License (Expat)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
37
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty/README.md
generated
vendored
@ -1,37 +0,0 @@
# go-isatty

isatty for golang

## Usage

```go
package main

import (
	"fmt"
	"github.com/mattn/go-isatty"
	"os"
)

func main() {
	if isatty.IsTerminal(os.Stdout.Fd()) {
		fmt.Println("Is Terminal")
	} else {
		fmt.Println("Is Not Terminal")
	}
}
```

## Installation

```
$ go get github.com/mattn/go-isatty
```

# License

MIT

# Author

Yasuhiro Matsumoto (a.k.a mattn)
2
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty/doc.go
generated
vendored
@ -1,2 +0,0 @@
// Package isatty implements an interface to isatty
package isatty
@ -1,9 +0,0 @@
// +build appengine

package isatty

// IsTerminal returns true if the file descriptor is a terminal, which
// is always false on appengine classic, a sandboxed PaaS.
func IsTerminal(fd uintptr) bool {
	return false
}
18
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty/isatty_bsd.go
generated
vendored
@ -1,18 +0,0 @@
// +build darwin freebsd openbsd netbsd
// +build !appengine

package isatty

import (
	"syscall"
	"unsafe"
)

const ioctlReadTermios = syscall.TIOCGETA

// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
	var termios syscall.Termios
	_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, fd, ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
	return err == 0
}
18
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty/isatty_linux.go
generated
vendored
@ -1,18 +0,0 @@
// +build linux
// +build !appengine

package isatty

import (
	"syscall"
	"unsafe"
)

const ioctlReadTermios = syscall.TCGETS

// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
	var termios syscall.Termios
	_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, fd, ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
	return err == 0
}
16
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty/isatty_solaris.go
generated
vendored
@ -1,16 +0,0 @@
// +build solaris
// +build !appengine

package isatty

import (
	"golang.org/x/sys/unix"
)

// IsTerminal returns true if the given file descriptor is a terminal.
// see: http://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libbc/libc/gen/common/isatty.c
func IsTerminal(fd uintptr) bool {
	var termio unix.Termio
	err := unix.IoctlSetTermio(int(fd), unix.TCGETA, &termio)
	return err == nil
}
19
vendor/github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty/isatty_windows.go
generated
vendored
@ -1,19 +0,0 @@
// +build windows
// +build !appengine

package isatty

import (
	"syscall"
	"unsafe"
)

var kernel32 = syscall.NewLazyDLL("kernel32.dll")
var procGetConsoleMode = kernel32.NewProc("GetConsoleMode")

// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
	var st uint32
	r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, fd, uintptr(unsafe.Pointer(&st)), 0)
	return r != 0 && e == 0
}
106
vendor/github.com/onsi/ginkgo/reporters/teamcity_reporter.go
generated
vendored
@ -1,106 +0,0 @@
/*

TeamCity Reporter for Ginkgo

Makes use of TeamCity's support for Service Messages
http://confluence.jetbrains.com/display/TCD7/Build+Script+Interaction+with+TeamCity#BuildScriptInteractionwithTeamCity-ReportingTests
*/

package reporters

import (
	"fmt"
	"io"
	"strings"

	"github.com/onsi/ginkgo/config"
	"github.com/onsi/ginkgo/types"
)

const (
	messageId = "##teamcity"
)

type TeamCityReporter struct {
	writer         io.Writer
	testSuiteName  string
	ReporterConfig config.DefaultReporterConfigType
}

func NewTeamCityReporter(writer io.Writer) *TeamCityReporter {
	return &TeamCityReporter{
		writer: writer,
	}
}

func (reporter *TeamCityReporter) SpecSuiteWillBegin(config config.GinkgoConfigType, summary *types.SuiteSummary) {
	reporter.testSuiteName = escape(summary.SuiteDescription)
	fmt.Fprintf(reporter.writer, "%s[testSuiteStarted name='%s']\n", messageId, reporter.testSuiteName)
}

func (reporter *TeamCityReporter) BeforeSuiteDidRun(setupSummary *types.SetupSummary) {
	reporter.handleSetupSummary("BeforeSuite", setupSummary)
}

func (reporter *TeamCityReporter) AfterSuiteDidRun(setupSummary *types.SetupSummary) {
	reporter.handleSetupSummary("AfterSuite", setupSummary)
}

func (reporter *TeamCityReporter) handleSetupSummary(name string, setupSummary *types.SetupSummary) {
	if setupSummary.State != types.SpecStatePassed {
		testName := escape(name)
		fmt.Fprintf(reporter.writer, "%s[testStarted name='%s']\n", messageId, testName)
		message := reporter.failureMessage(setupSummary.Failure)
		details := reporter.failureDetails(setupSummary.Failure)
		fmt.Fprintf(reporter.writer, "%s[testFailed name='%s' message='%s' details='%s']\n", messageId, testName, message, details)
		durationInMilliseconds := setupSummary.RunTime.Seconds() * 1000
		fmt.Fprintf(reporter.writer, "%s[testFinished name='%s' duration='%v']\n", messageId, testName, durationInMilliseconds)
	}
}

func (reporter *TeamCityReporter) SpecWillRun(specSummary *types.SpecSummary) {
	testName := escape(strings.Join(specSummary.ComponentTexts[1:], " "))
	fmt.Fprintf(reporter.writer, "%s[testStarted name='%s']\n", messageId, testName)
}

func (reporter *TeamCityReporter) SpecDidComplete(specSummary *types.SpecSummary) {
	testName := escape(strings.Join(specSummary.ComponentTexts[1:], " "))

	if reporter.ReporterConfig.ReportPassed && specSummary.State == types.SpecStatePassed {
		details := escape(specSummary.CapturedOutput)
		fmt.Fprintf(reporter.writer, "%s[testPassed name='%s' details='%s']\n", messageId, testName, details)
	}
	if specSummary.State == types.SpecStateFailed || specSummary.State == types.SpecStateTimedOut || specSummary.State == types.SpecStatePanicked {
		message := reporter.failureMessage(specSummary.Failure)
		details := reporter.failureDetails(specSummary.Failure)
		fmt.Fprintf(reporter.writer, "%s[testFailed name='%s' message='%s' details='%s']\n", messageId, testName, message, details)
	}
	if specSummary.State == types.SpecStateSkipped || specSummary.State == types.SpecStatePending {
		fmt.Fprintf(reporter.writer, "%s[testIgnored name='%s']\n", messageId, testName)
	}

	durationInMilliseconds := specSummary.RunTime.Seconds() * 1000
	fmt.Fprintf(reporter.writer, "%s[testFinished name='%s' duration='%v']\n", messageId, testName, durationInMilliseconds)
}

func (reporter *TeamCityReporter) SpecSuiteDidEnd(summary *types.SuiteSummary) {
	fmt.Fprintf(reporter.writer, "%s[testSuiteFinished name='%s']\n", messageId, reporter.testSuiteName)
}

func (reporter *TeamCityReporter) failureMessage(failure types.SpecFailure) string {
	return escape(failure.ComponentCodeLocation.String())
}

func (reporter *TeamCityReporter) failureDetails(failure types.SpecFailure) string {
	return escape(fmt.Sprintf("%s\n%s", failure.Message, failure.Location.String()))
}

func escape(output string) string {
	output = strings.Replace(output, "|", "||", -1)
	output = strings.Replace(output, "'", "|'", -1)
	output = strings.Replace(output, "\n", "|n", -1)
	output = strings.Replace(output, "\r", "|r", -1)
	output = strings.Replace(output, "[", "|[", -1)
	output = strings.Replace(output, "]", "|]", -1)
	return output
}
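The replacement order in `escape` is significant: `|` is the TeamCity escape character, so it must be doubled before the other substitutions introduce new `|` characters. A small standalone sketch (the sample input is made up; the expected output follows mechanically from the rules above):

```go
package main

import (
	"fmt"
	"strings"
)

// escape restates the reporter's escaping rules using strings.ReplaceAll
// (equivalent to strings.Replace with -1); '|' is handled first on purpose.
func escape(output string) string {
	output = strings.ReplaceAll(output, "|", "||")
	output = strings.ReplaceAll(output, "'", "|'")
	output = strings.ReplaceAll(output, "\n", "|n")
	output = strings.ReplaceAll(output, "\r", "|r")
	output = strings.ReplaceAll(output, "[", "|[")
	output = strings.ReplaceAll(output, "]", "|]")
	return output
}

func main() {
	fmt.Println(escape("it's done\n[ok]")) // prints: it|'s done|n|[ok|]
}
```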
15
vendor/github.com/onsi/ginkgo/types/code_location.go
generated
vendored
@ -1,15 +0,0 @@
package types

import (
	"fmt"
)

type CodeLocation struct {
	FileName       string
	LineNumber     int
	FullStackTrace string
}

func (codeLocation CodeLocation) String() string {
	return fmt.Sprintf("%s:%d", codeLocation.FileName, codeLocation.LineNumber)
}
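String renders the conventional file:line form used throughout Ginkgo's failure output; a one-file illustration (the file name and line number are made up):

```go
package main

import (
	"fmt"

	"github.com/onsi/ginkgo/types"
)

func main() {
	loc := types.CodeLocation{FileName: "books_test.go", LineNumber: 42}
	fmt.Println(loc.String()) // prints: books_test.go:42
}
```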
30
vendor/github.com/onsi/ginkgo/types/synchronization.go
generated
vendored
@ -1,30 +0,0 @@
package types

import (
	"encoding/json"
)

type RemoteBeforeSuiteState int

const (
	RemoteBeforeSuiteStateInvalid RemoteBeforeSuiteState = iota

	RemoteBeforeSuiteStatePending
	RemoteBeforeSuiteStatePassed
	RemoteBeforeSuiteStateFailed
	RemoteBeforeSuiteStateDisappeared
)

type RemoteBeforeSuiteData struct {
	Data  []byte
	State RemoteBeforeSuiteState
}

func (r RemoteBeforeSuiteData) ToJSON() []byte {
	data, _ := json.Marshal(r)
	return data
}

type RemoteAfterSuiteData struct {
	CanRun bool
}
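These types back Ginkgo v1's parallel-node coordination. A standalone sketch of the JSON wire format that ToJSON produces (the payload is made up; note that Go marshals []byte as base64):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal local copies of the types above, so the sketch runs standalone.
type RemoteBeforeSuiteState int

const (
	RemoteBeforeSuiteStateInvalid RemoteBeforeSuiteState = iota
	RemoteBeforeSuiteStatePending
	RemoteBeforeSuiteStatePassed
	RemoteBeforeSuiteStateFailed
	RemoteBeforeSuiteStateDisappeared
)

type RemoteBeforeSuiteData struct {
	Data  []byte
	State RemoteBeforeSuiteState
}

func (r RemoteBeforeSuiteData) ToJSON() []byte {
	data, _ := json.Marshal(r)
	return data
}

func main() {
	d := RemoteBeforeSuiteData{Data: []byte("shared setup"), State: RemoteBeforeSuiteStatePassed}
	fmt.Println(string(d.ToJSON())) // {"Data":"c2hhcmVkIHNldHVw","State":2}
}
```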
174
vendor/github.com/onsi/ginkgo/types/types.go
generated
vendored
@ -1,174 +0,0 @@
package types

import (
	"strconv"
	"time"
)

const GINKGO_FOCUS_EXIT_CODE = 197

/*
SuiteSummary represents a summary of the test suite and is passed to both
Reporter.SpecSuiteWillBegin
Reporter.SpecSuiteDidEnd

this is unfortunate as these two methods should receive different objects. When running in parallel
each node does not deterministically know how many specs it will end up running.

Unfortunately making such a change would break backward compatibility.

Until Ginkgo 2.0 comes out we will continue to reuse this struct but populate unknown fields
with -1.
*/
type SuiteSummary struct {
	SuiteDescription string
	SuiteSucceeded   bool
	SuiteID          string

	NumberOfSpecsBeforeParallelization int
	NumberOfTotalSpecs                 int
	NumberOfSpecsThatWillBeRun         int
	NumberOfPendingSpecs               int
	NumberOfSkippedSpecs               int
	NumberOfPassedSpecs                int
	NumberOfFailedSpecs                int
	// Flaked specs are those that failed initially, but then passed on a
	// subsequent try.
	NumberOfFlakedSpecs int
	RunTime             time.Duration
}

type SpecSummary struct {
	ComponentTexts         []string
	ComponentCodeLocations []CodeLocation

	State           SpecState
	RunTime         time.Duration
	Failure         SpecFailure
	IsMeasurement   bool
	NumberOfSamples int
	Measurements    map[string]*SpecMeasurement

	CapturedOutput string
	SuiteID        string
}

func (s SpecSummary) HasFailureState() bool {
	return s.State.IsFailure()
}

func (s SpecSummary) TimedOut() bool {
	return s.State == SpecStateTimedOut
}

func (s SpecSummary) Panicked() bool {
	return s.State == SpecStatePanicked
}

func (s SpecSummary) Failed() bool {
	return s.State == SpecStateFailed
}

func (s SpecSummary) Passed() bool {
	return s.State == SpecStatePassed
}

func (s SpecSummary) Skipped() bool {
	return s.State == SpecStateSkipped
}

func (s SpecSummary) Pending() bool {
	return s.State == SpecStatePending
}

type SetupSummary struct {
	ComponentType SpecComponentType
	CodeLocation  CodeLocation

	State   SpecState
	RunTime time.Duration
	Failure SpecFailure

	CapturedOutput string
	SuiteID        string
}

type SpecFailure struct {
	Message        string
	Location       CodeLocation
	ForwardedPanic string

	ComponentIndex        int
	ComponentType         SpecComponentType
	ComponentCodeLocation CodeLocation
}

type SpecMeasurement struct {
	Name  string
	Info  interface{}
	Order int

	Results []float64

	Smallest     float64
	Largest      float64
	Average      float64
	StdDeviation float64

	SmallestLabel string
	LargestLabel  string
	AverageLabel  string
	Units         string
	Precision     int
}

func (s SpecMeasurement) PrecisionFmt() string {
	if s.Precision == 0 {
		return "%f"
	}

	str := strconv.Itoa(s.Precision)

	return "%." + str + "f"
}

type SpecState uint

const (
	SpecStateInvalid SpecState = iota

	SpecStatePending
	SpecStateSkipped
	SpecStatePassed
	SpecStateFailed
	SpecStatePanicked
	SpecStateTimedOut
)

func (state SpecState) IsFailure() bool {
	return state == SpecStateTimedOut || state == SpecStatePanicked || state == SpecStateFailed
}

type SpecComponentType uint

const (
	SpecComponentTypeInvalid SpecComponentType = iota

	SpecComponentTypeContainer
	SpecComponentTypeBeforeSuite
	SpecComponentTypeAfterSuite
	SpecComponentTypeBeforeEach
	SpecComponentTypeJustBeforeEach
	SpecComponentTypeJustAfterEach
	SpecComponentTypeAfterEach
	SpecComponentTypeIt
	SpecComponentTypeMeasure
)

type FlagType uint

const (
	FlagTypeNone FlagType = iota
	FlagTypeFocused
	FlagTypePending
)
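PrecisionFmt builds the printf verb used when reporting measurement values; a standalone illustration (the sample values are made up):

```go
package main

import (
	"fmt"

	"github.com/onsi/ginkgo/types"
)

func main() {
	m := types.SpecMeasurement{Precision: 3}
	fmt.Printf(m.PrecisionFmt()+"\n", 1.23456) // "%.3f" -> 1.235

	zero := types.SpecMeasurement{} // Precision 0 falls back to "%f"
	fmt.Printf(zero.PrecisionFmt()+"\n", 1.23456) // 1.234560
}
```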
@ -1,7 +1,7 @@
.DS_Store
TODO
TODO.md
tmp/**/*
*.coverprofile
.vscode
.idea/
*.log
*.log
267
vendor/github.com/onsi/ginkgo/CHANGELOG.md → vendor/github.com/onsi/ginkgo/v2/CHANGELOG.md
generated
vendored
@ -1,9 +1,259 @@
## 2.7.0

### Features
- Introduce ContinueOnFailure for Ordered containers [e0123ca] - Ordered containers that are also decorated with ContinueOnFailure will not stop running specs after the first spec fails.
- Support for bootstrap commands to use custom data for templates (#1110) [7a2b242]
- Support for labels and pending decorator in ginkgo outline output (#1113) [e6e3b98]
- Color aliases for custom color support (#1101) [49fab7a]

### Fixes
- correctly ensure deterministic spec order, even if specs are generated by iterating over a map [89dda20]
- Fix a bug where timed-out specs were not correctly treated as failures when determining whether or not to run AfterAlls in an Ordered container.
- Ensure go test coverprofile outputs to the expected location (#1105) [b0bd77b]

## 2.6.1

### Features
- Override formatter colors from envvars - this is a new feature but an alternative approach involving config files might be taken in the future (#1095) [60240d1]

### Fixes
- GinkgoRecover now supports ignoring panics that match a specific, hidden, interface [301f3e2]

### Maintenance
- Bump github.com/onsi/gomega from 1.24.0 to 1.24.1 (#1077) [3643823]
- Bump golang.org/x/tools from 0.2.0 to 0.4.0 (#1090) [f9f856e]
- Bump nokogiri from 1.13.9 to 1.13.10 in /docs (#1091) [0d7087e]

## 2.6.0

### Features
- `ReportBeforeSuite` provides access to the suite report before the suite begins.
- Add junit config option for omitting leafnodetype (#1088) [956e6d2]
- Add support to customize junit report config to omit spec labels (#1087) [de44005]

### Fixes
- Fix stack trace pruning so that it has a chance of working on windows [2165648]

## 2.5.1

### Fixes
- skipped tests only show as 'S' when running with -v [3ab38ae]
- Fix typo in docs/index.md (#1082) [55fc58d]
- Fix typo in docs/index.md (#1081) [8a14f1f]
- Fix link notation in docs/index.md (#1080) [2669612]
- Fix typo in `--progress` deprecation message (#1076) [b4b7edc]

### Maintenance
- chore: Included githubactions in the dependabot config (#976) [baea341]
- Bump golang.org/x/sys from 0.1.0 to 0.2.0 (#1075) [9646297]

## 2.5.0

### Ginkgo output now includes a timeline-view of the spec

This commit changes Ginkgo's default output. Spec details are now presented as a **timeline** that includes events that occur during the spec lifecycle interleaved with any GinkgoWriter content. This makes it much easier to understand the flow of a spec and where a given failure occurs.

The --progress, --slow-spec-threshold, --always-emit-ginkgo-writer flags and the SuppressProgressReporting decorator have all been deprecated. Instead the existing -v and -vv flags better capture the level of verbosity to display. However, a new --show-node-events flag is added to include node `> Enter` and `< Exit` events in the spec timeline.

In addition, JUnit reports now include the timeline (rendered with -vv) and custom JUnit reports can be configured and generated using `GenerateJUnitReportWithConfig(report types.Report, dst string, config JunitReportConfig)`.

Code should continue to work unchanged with this version of Ginkgo - however if you have tooling that was relying on the specific output format of Ginkgo you _may_ run into issues. Ginkgo's console output is not guaranteed to be stable for tooling and automation purposes. You should, instead, use Ginkgo's JSON format to build tooling on top of, as it has stronger guarantees to be stable from version to version.
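The new JUnit hook is typically wired up from a `ReportAfterSuite` node; a sketch using the signature quoted above, assuming the `JunitReportConfig` field names implied by the 2.6.0 entries (`OmitLeafNodeType`, `OmitSpecLabels`):

```go
package books_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/reporters"
)

var _ = ReportAfterSuite("custom junit report", func(report Report) {
	// Render the suite report (including spec timelines) to JUnit XML.
	err := reporters.GenerateJUnitReportWithConfig(report, "junit.xml", reporters.JunitReportConfig{
		OmitLeafNodeType: true, // assumed field name, per the 2.6.0 entries above
		OmitSpecLabels:   true, // assumed field name, per the 2.6.0 entries above
	})
	if err != nil {
		Fail(err.Error())
	}
})
```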
### Features
- Provide details about which timeout expired [0f2fa27]

### Fixes
- Add Support Policy to docs [c70867a]

### Maintenance
- Bump github.com/onsi/gomega from 1.22.1 to 1.23.0 (#1070) [bb3b4e2]

## 2.4.0

### Features

- DeferCleanup supports functions with multiple-return values [5e33c75]
- Add GinkgoLogr (#1067) [bf78c28]
- Introduction of 'MustPassRepeatedly' decorator (#1051) [047c02f]

### Fixes
- correcting some typos (#1064) [1403d3c]
- fix flaky internal_integration interrupt specs [2105ba3]
- Correct busted link in README [be6b5b9]

### Maintenance
- Bump actions/checkout from 2 to 3 (#1062) [8a2f483]
- Bump golang.org/x/tools from 0.1.12 to 0.2.0 (#1065) [529c4e8]
- Bump github/codeql-action from 1 to 2 (#1061) [da09146]
- Bump actions/setup-go from 2 to 3 (#1060) [918040d]
- Bump github.com/onsi/gomega from 1.22.0 to 1.22.1 (#1053) [2098e4d]
- Bump nokogiri from 1.13.8 to 1.13.9 in /docs (#1066) [1d74122]
- Add GHA to dependabot config [4442772]

## 2.3.1

### Fixes
Several users were invoking `ginkgo` by installing the latest version of the cli via `go install github.com/onsi/ginkgo/v2/ginkgo@latest`. When 2.3.0 was released this resulted in an influx of issues as CI systems failed due to a change in the internal contract between the Ginkgo CLI and the Ginkgo library. Ginkgo only supports running the same version of the library as the cli (which is why both are packaged in the same repository).

With this patch release, the ginkgo CLI can now identify a version mismatch and emit a helpful error message.

- Ginkgo cli can identify version mismatches and emit a helpful error message [bc4ae2f]
- further emphasize that a version match is required when running Ginkgo on CI and/or locally [2691dd8]

### Maintenance
- bump gomega to v1.22.0 [822a937]
## 2.3.0

### Interruptible Nodes and Timeouts

Ginkgo now supports per-node and per-spec timeouts on interruptible nodes. Check out the [documentation for all the details](https://onsi.github.io/ginkgo/#spec-timeouts-and-interruptible-nodes) but the gist is you can now write specs like this:

```go
It("is interruptible", func(ctx SpecContext) { // or context.Context instead of SpecContext, both are valid.
	// do things until `ctx.Done()` is closed, for example:
	req, err := http.NewRequestWithContext(ctx, "POST", "/build-widgets", nil)
	Expect(err).NotTo(HaveOccurred())
	_, err = http.DefaultClient.Do(req)
	Expect(err).NotTo(HaveOccurred())

	Eventually(client.WidgetCount).WithContext(ctx).Should(Equal(17))
}, NodeTimeout(time.Second*20), GracePeriod(5*time.Second))
```

and have Ginkgo ensure that the node completes before the timeout elapses. If it does elapse, or if an external interrupt is received (e.g. `^C`) then Ginkgo will cancel the context and wait for the Grace Period for the node to exit before proceeding with any cleanup nodes associated with the spec. The `ctx` provided by Ginkgo can also be passed down to Gomega's `Eventually` to have all assertions within the node governed by a single deadline.

### Features

- Ginkgo now records any additional failures that occur during the cleanup of a failed spec. In prior versions this information was quietly discarded, but the introduction of a more rigorous approach to timeouts and interruptions allows Ginkgo to better track subsequent failures.
- `SpecContext` also provides a mechanism for third-party libraries to provide additional information when a Progress Report is generated. Gomega uses this to provide the current state of an `Eventually().WithContext()` assertion when a Progress Report is requested.
- DescribeTable now exits with an error if it is not passed any Entries [a4c9865]

### Fixes
- fixes crashes on newer Ruby 3 installations by upgrading github-pages gem dependency [92c88d5]
- Make the outline command able to use the DSL import [1be2427]

### Maintenance
- chore(docs): delete no meaning d [57c373c]
- chore(docs): Fix hyperlinks [30526d5]
- chore(docs): fix code blocks without language settings [cf611c4]
- fix intra-doc link [b541bcb]
## 2.2.0

### Generate real-time Progress Reports [f91377c]

Ginkgo can now generate Progress Reports to point users at the current running line of code (including a preview of the actual source code) and a best guess at the most relevant subroutines.

These Progress Reports allow users to debug stuck or slow tests without exiting the Ginkgo process. A Progress Report can be generated at any time by sending Ginkgo a `SIGINFO` (`^T` on MacOS/BSD) or `SIGUSR1`.

In addition, the user can specify `--poll-progress-after` and `--poll-progress-interval` to have Ginkgo start periodically emitting progress reports if a given node takes too long. These can be overridden/set on a per-node basis with the `PollProgressAfter` and `PollProgressInterval` decorators, as in the sketch below.

Progress Reports are emitted to stdout, and also stored in the machine-readable report formats that Ginkgo supports.

Ginkgo also uses this progress reporting infrastructure under the hood when handling timeouts and interrupts. This yields much more focused, useful, and informative stack traces than previously.
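A minimal sketch of the per-node decorators named above; the durations are arbitrary and the helper is hypothetical:

```go
It("downloads a large fixture", func() {
	// If this node runs longer than 30s, Ginkgo starts emitting a
	// Progress Report every 5s until the node completes.
	fetchFixture() // hypothetical helper
}, PollProgressAfter(30*time.Second), PollProgressInterval(5*time.Second))
```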
### Features
- `BeforeSuite`, `AfterSuite`, `SynchronizedBeforeSuite`, `SynchronizedAfterSuite`, and `ReportAfterSuite` now support (the relevant subset of) decorators. These can be passed in _after_ the callback functions that are usually passed into these nodes.

  As a result the **signature of these methods has changed** and now includes a trailing `args ...interface{}`. For most users simply using the DSL, this change is transparent. However if you were assigning one of these functions to a custom variable (or passing it around) then your code may need to change to reflect the new signature.

### Maintenance
- Modernize the invocation of Ginkgo in github actions [0ffde58]
- Update recommended CI settings in docs [896bbb9]
- Speed up unnecessarily slow integration test [6d3a90e]

## 2.1.6

### Fixes
- Add `SuppressProgressReporting` decorator to turn off --progress announcements for a given node [dfef62a]
- chore: remove duplicate word in comments [7373214]

## 2.1.5

### Fixes
- drop -mod=mod instructions; fixes #1026 [6ad7138]
- Ensure `CurrentSpecReport` and `AddReportEntry` are thread-safe [817c09b]
- remove stale importmap gcflags flag test [3cd8b93]
- Always emit spec summary [5cf23e2] - even when only one spec has failed
- Fix ReportAfterSuite usage in docs [b1864ad]
- fixed typo (#997) [219cc00]
- TrimRight is not designed to trim Suffix [71ebb74]
- refactor: replace strings.Replace with strings.ReplaceAll (#978) [143d208]
- fix syntax in examples (#975) [b69554f]

### Maintenance
- Bump github.com/onsi/gomega from 1.20.0 to 1.20.1 (#1027) [e5dfce4]
- Bump tzinfo from 1.2.9 to 1.2.10 in /docs (#1006) [7ae91c4]
- Bump github.com/onsi/gomega from 1.19.0 to 1.20.0 (#1005) [e87a85a]
- test: add new Go 1.19 to test matrix (#1014) [bbefe12]
- Bump golang.org/x/tools from 0.1.11 to 0.1.12 (#1012) [9327906]
- Bump golang.org/x/tools from 0.1.10 to 0.1.11 (#993) [f44af96]
- Bump nokogiri from 1.13.3 to 1.13.6 in /docs (#981) [ef336aa]
## 2.1.4

### Fixes
- Numerous documentation typos
- Prepend `when` when using `When` (this behavior was in 1.x but unintentionally lost during the 2.0 rewrite) [efce903]
- improve error message when a parallel process fails to report back [a7bd1fe]
- guard against concurrent map writes in DeprecationTracker [0976569]
- Invoke reporting nodes during dry-run (fixes #956 and #935) [aae4480]
- Fix ginkgo import circle [f779385]

## 2.1.3

See [https://onsi.github.io/ginkgo/MIGRATING_TO_V2](https://onsi.github.io/ginkgo/MIGRATING_TO_V2) for details on V2.

### Fixes
- Calling By in a container node now emits a useful error. [ff12cee]

## 2.1.2

### Fixes

- Track location of focused specs correctly in `ginkgo unfocus` [a612ff1]
- Profiling suites with focused specs no longer generates an erroneous failure message [8fbfa02]
- Several documentation typos fixed. Big thanks to everyone who helped catch them and report/fix them!

## 2.1.1

See [https://onsi.github.io/ginkgo/MIGRATING_TO_V2](https://onsi.github.io/ginkgo/MIGRATING_TO_V2) for details on V2.

### Fixes
- Suites that only import the new dsl packages are now correctly identified as Ginkgo suites [ec17e17]

## 2.1.0

See [https://onsi.github.io/ginkgo/MIGRATING_TO_V2](https://onsi.github.io/ginkgo/MIGRATING_TO_V2) for details on V2.

2.1.0 is a minor release with a few tweaks:

- Introduce new DSL packages to enable users to pick-and-choose which portions of the DSL to dot-import. [90868e2] More details [here](https://onsi.github.io/ginkgo/#alternatives-to-dot-importing-ginkgo).
- Add error check for invalid/nil parameters to DescribeTable [6f8577e]
- Myriad docs typos fixed (thanks everyone!) [718542a, ecb7098, 146654c, a8f9913, 6bdffde, 03dcd7e]

## 2.0.0

See [https://onsi.github.io/ginkgo/MIGRATING_TO_V2](https://onsi.github.io/ginkgo/MIGRATING_TO_V2)

## 1.16.5

Ginkgo 2.0 now has a Release Candidate. 1.16.5 advertises the existence of the RC.
1.16.5 deprecates GinkgoParallelNode in favor of GinkgoParallelProcess

You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environment variable or creating a file in your home directory called `.ack-ginkgo-rc`
You can silence the RC advertisement by setting an `ACK_GINKGO_RC=true` environment variable or creating a file in your home directory called `.ack-ginkgo-rc`

## 1.16.4
@ -23,7 +273,7 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme
## 1.16.1

### Fixes
- Supress --stream deprecation warning on windows (#793)
- Suppress --stream deprecation warning on windows (#793)

## 1.16.0

@ -35,7 +285,6 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme

- Add slim-sprig template functions to bootstrap/generate (#775) [9162b86]

### Fixes
- Fix accidental reference to 1488 (#784) [9fb7fe4]

## 1.15.2
@ -111,7 +360,7 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme
- replace tail package with maintained one. this fixes go get errors (#667) [4ba33d4]
- improve ginkgo performance - makes progress on #644 [a14f98e]
- fix convert integration tests [1f8ba69]
- fix typo succesful -> successful (#663) [1ea49cf]
- fix typo successful -> successful (#663) [1ea49cf]
- Fix invalid link (#658) [b886136]
- convert utility : Include comments from source (#657) [1077c6d]
- Explain what BDD means [d79e7fb]
@ -202,10 +451,10 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme
- fix: for `go vet` to pass [69338ec]
- docs: fix for contributing instructions [7004cb1]
- consolidate and streamline contribution docs (#494) [d848015]
- Make generated Junit file compatable with "Maven Surefire" (#488) [e51bee6]
- Make generated Junit file compatible with "Maven Surefire" (#488) [e51bee6]
- all: gofmt [000d317]
- Increase eventually timeout to 30s [c73579c]
- Clarify asynchronous test behaviour [294d8f4]
- Clarify asynchronous test behavior [294d8f4]
- Travis badge should only show master [26d2143]

## 1.5.0 5/10/2018
@ -223,13 +472,13 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme
- When running a test and calculating the coverage using the `-coverprofile` and `-outputdir` flags, Ginkgo fails with an error if the directory does not exist. This is due to an [issue in go 1.10](https://github.com/golang/go/issues/24588) (#446) [b36a6e0]
- `unfocus` command ignores vendor folder (#459) [e5e551c, c556e43, a3b6351, 9a820dd]
- Ignore packages whose tests are all ignored by go (#456) [7430ca7, 6d8be98]
- Increase the threshold when checking time measuments (#455) [2f714bf, 68f622c]
- Increase the threshold when checking time measurements (#455) [2f714bf, 68f622c]
- Fix race condition in coverage tests (#423) [a5a8ff7, ab9c08b]
- Add an extra new line after reporting spec run completion for test2json [874520d]
- added name name field to junit reported testsuite [ae61c63]
- Do not set the run time of a spec when the dryRun flag is used (#438) [457e2d9, ba8e856]
- Process FWhen and FSpecify when unfocusing (#434) [9008c7b, ee65bd, df87dfe]
- Synchronise the access to the state of specs to avoid race conditions (#430) [7d481bc, ae6829d]
- Synchronies the access to the state of specs to avoid race conditions (#430) [7d481bc, ae6829d]
- Added Duration on GinkgoTestDescription (#383) [5f49dad, 528417e, 0747408, 329d7ed]
- Fix Ginkgo stack trace on failure for Specify (#415) [b977ede, 65ca40e, 6c46eb8]
- Update README with Go 1.6+, Golang -> Go (#409) [17f6b97, bc14b66, 20d1598]
@ -314,7 +563,7 @@ Bug Fixes:
- Fix incorrect failure message when a panic occurs during a parallel test run
- Fixed an issue where a pending test within a focused context (or a focused test within a pending context) would skip all other tests.
- Be more consistent about handling SIGTERM as well as SIGINT
- When interupted while concurrently compiling test suites in the background, Ginkgo now cleans up the compiled artifacts.
- When interrupted while concurrently compiling test suites in the background, Ginkgo now cleans up the compiled artifacts.
- Fixed a long standing bug where `ginkgo -p` would hang if a process spawned by one of the Ginkgo parallel nodes does not exit. (Hooray!)

## 1.1.0 (8/2/2014)
13
vendor/github.com/onsi/ginkgo/v2/CONTRIBUTING.md
generated
vendored
Normal file
@ -0,0 +1,13 @@
# Contributing to Ginkgo

Your contributions to Ginkgo are essential for its long-term maintenance and improvement.

- Please **open an issue first** - describe what problem you are trying to solve and give the community a forum for input and feedback ahead of investing time in writing code!
- Ensure adequate test coverage:
  - When adding to the Ginkgo library, add unit and/or integration tests (under the `integration` folder).
  - When adding to the Ginkgo CLI, note that there are very few unit tests. Please add an integration test.
- Make sure all the tests succeed via `ginkgo -r -p`
- Vet your changes via `go vet ./...`
- Update the documentation. Ginkgo uses `godoc` comments and documentation in `docs/index.md`. You can run `bundle exec jekyll serve` in the `docs` directory to preview your changes.

Thanks for supporting Ginkgo!
0
vendor/github.com/onsi/ginkgo/LICENSE → vendor/github.com/onsi/ginkgo/v2/LICENSE
generated
vendored
115
vendor/github.com/onsi/ginkgo/v2/README.md
generated
vendored
Normal file
@ -0,0 +1,115 @@


[![test](https://github.com/onsi/ginkgo/workflows/test/badge.svg?branch=master)](https://github.com/onsi/ginkgo/actions?query=workflow%3Atest+branch%3Amaster) | [Ginkgo Docs](https://onsi.github.io/ginkgo/)

---

# Ginkgo

Ginkgo is a mature testing framework for Go designed to help you write expressive specs. Ginkgo builds on top of Go's `testing` foundation and is complemented by the [Gomega](https://github.com/onsi/gomega) matcher library. Together, Ginkgo and Gomega let you express the intent behind your specs clearly:

```go
import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	...
)

Describe("Checking books out of the library", Label("library"), func() {
	var library *libraries.Library
	var book *books.Book
	var valjean *users.User
	BeforeEach(func() {
		library = libraries.NewClient()
		book = &books.Book{
			Title:  "Les Miserables",
			Author: "Victor Hugo",
		}
		valjean = users.NewUser("Jean Valjean")
	})

	When("the library has the book in question", func() {
		BeforeEach(func(ctx SpecContext) {
			Expect(library.Store(ctx, book)).To(Succeed())
		})

		Context("and the book is available", func() {
			It("lends it to the reader", func(ctx SpecContext) {
				Expect(valjean.Checkout(ctx, library, "Les Miserables")).To(Succeed())
				Expect(valjean.Books()).To(ContainElement(book))
				Expect(library.UserWithBook(ctx, book)).To(Equal(valjean))
			}, SpecTimeout(time.Second * 5))
		})

		Context("but the book has already been checked out", func() {
			var javert *users.User
			BeforeEach(func(ctx SpecContext) {
				javert = users.NewUser("Javert")
				Expect(javert.Checkout(ctx, library, "Les Miserables")).To(Succeed())
			})

			It("tells the user", func(ctx SpecContext) {
				err := valjean.Checkout(ctx, library, "Les Miserables")
				Expect(err).To(MatchError("Les Miserables is currently checked out"))
			}, SpecTimeout(time.Second * 5))

			It("lets the user place a hold and get notified later", func(ctx SpecContext) {
				Expect(valjean.Hold(ctx, library, "Les Miserables")).To(Succeed())
				Expect(valjean.Holds(ctx)).To(ContainElement(book))

				By("when Javert returns the book")
				Expect(javert.Return(ctx, library, book)).To(Succeed())

				By("it eventually informs Valjean")
				notification := "Les Miserables is ready for pick up"
				Eventually(ctx, valjean.Notifications).Should(ContainElement(notification))

				Expect(valjean.Checkout(ctx, library, "Les Miserables")).To(Succeed())
				Expect(valjean.Books(ctx)).To(ContainElement(book))
				Expect(valjean.Holds(ctx)).To(BeEmpty())
			}, SpecTimeout(time.Second * 10))
		})
	})

	When("the library does not have the book in question", func() {
		It("tells the reader the book is unavailable", func(ctx SpecContext) {
			err := valjean.Checkout(ctx, library, "Les Miserables")
			Expect(err).To(MatchError("Les Miserables is not in the library catalog"))
		}, SpecTimeout(time.Second * 5))
	})
})
```

Jump to the [docs](https://onsi.github.io/ginkgo/) to learn more. It's easy to [bootstrap](https://onsi.github.io/ginkgo/#bootstrapping-a-suite) and start writing your [first specs](https://onsi.github.io/ginkgo/#adding-specs-to-a-suite).

If you have a question, comment, bug report, feature request, etc. please open a [GitHub issue](https://github.com/onsi/ginkgo/issues/new), or visit the [Ginkgo Slack channel](https://app.slack.com/client/T029RQSE6/CQQ50BBNW).

## Capabilities

Whether writing basic unit specs, complex integration specs, or even performance specs - Ginkgo gives you an expressive Domain-Specific Language (DSL) that will be familiar to users coming from frameworks such as [Quick](https://github.com/Quick/Quick), [RSpec](https://rspec.info), [Jasmine](https://jasmine.github.io), and [Busted](https://lunarmodules.github.io/busted/). This style of testing is sometimes referred to as "Behavior-Driven Development" (BDD) though Ginkgo's utility extends beyond acceptance-level testing.

With Ginkgo's DSL you can use nestable [`Describe`, `Context` and `When` container nodes](https://onsi.github.io/ginkgo/#organizing-specs-with-container-nodes) to help you organize your specs, [`BeforeEach` and `AfterEach` setup nodes](https://onsi.github.io/ginkgo/#extracting-common-setup-beforeeach) for setup and cleanup, [`It` and `Specify` subject nodes](https://onsi.github.io/ginkgo/#spec-subjects-it) that hold your assertions, [`BeforeSuite` and `AfterSuite` nodes](https://onsi.github.io/ginkgo/#suite-setup-and-cleanup-beforesuite-and-aftersuite) to prep for and cleanup after a suite... and [much more!](https://onsi.github.io/ginkgo/#writing-specs)

At runtime, Ginkgo can run your specs in reproducibly [random order](https://onsi.github.io/ginkgo/#spec-randomization) and has sophisticated support for [spec parallelization](https://onsi.github.io/ginkgo/#spec-parallelization). In fact, running specs in parallel is as easy as

```bash
ginkgo -p
```

By following [established patterns for writing parallel specs](https://onsi.github.io/ginkgo/#patterns-for-parallel-integration-specs) you can build even large, complex integration suites that parallelize cleanly and run performantly. And you don't have to worry about your spec suite hanging or leaving a mess behind - Ginkgo provides a per-node `context.Context` and the capability to interrupt the spec after a set period of time - and then clean up.

As your suites grow Ginkgo helps you keep your specs organized with [labels](https://onsi.github.io/ginkgo/#spec-labels) and lets you easily run [subsets of specs](https://onsi.github.io/ginkgo/#filtering-specs), either [programmatically](https://onsi.github.io/ginkgo/#focused-specs) or on the [command line](https://onsi.github.io/ginkgo/#combining-filters). And Ginkgo's reporting infrastructure generates machine-readable output in a [variety of formats](https://onsi.github.io/ginkgo/#generating-machine-readable-reports) _and_ allows you to build your own [custom reporting infrastructure](https://onsi.github.io/ginkgo/#generating-reports-programmatically).

Ginkgo ships with `ginkgo`, a [command line tool](https://onsi.github.io/ginkgo/#ginkgo-cli-overview) with support for generating, running, filtering, and profiling Ginkgo suites. You can even have Ginkgo automatically run your specs when it detects a change with `ginkgo watch`, enabling rapid feedback loops during test-driven development.

And that's just Ginkgo! [Gomega](https://onsi.github.io/gomega/) brings a rich, mature, family of [assertions and matchers](https://onsi.github.io/gomega/#provided-matchers) to your suites. With Gomega you can easily mix [synchronous and asynchronous assertions](https://onsi.github.io/ginkgo/#patterns-for-asynchronous-testing) in your specs. You can even build your own set of expressive domain-specific matchers quickly and easily by composing Gomega's [existing building blocks](https://onsi.github.io/ginkgo/#building-custom-matchers).

Happy Testing!

## License

Ginkgo is MIT-Licensed

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md)
@ -1,13 +1,19 @@
A Ginkgo release is a tagged git sha and a GitHub release. To cut a release:

1. Ensure CHANGELOG.md is up to date.
   - Use `git log --pretty=format:'- %s [%h]' HEAD...vX.X.X` to list all the commits since the last release
   - Use
     ```bash
     LAST_VERSION=$(git tag --sort=version:refname | tail -n1)
     CHANGES=$(git log --pretty=format:'- %s [%h]' HEAD...$LAST_VERSION)
     echo -e "## NEXT\n\n$CHANGES\n\n### Features\n\n### Fixes\n\n### Maintenance\n\n$(cat CHANGELOG.md)" > CHANGELOG.md
     ```
     to update the changelog
   - Categorize the changes into
     - Breaking Changes (requires a major version)
     - New Features (minor version)
     - Fixes (fix version)
     - Maintenance (which in general should not be mentioned in `CHANGELOG.md` as they have no user impact)
1. Update `VERSION` in `config/config.go`
1. Update `VERSION` in `types/version.go`
1. Commit, push, and release:
   ```
   git commit -m "vM.m.p"
69
vendor/github.com/onsi/ginkgo/v2/config/deprecated.go
generated
vendored
Normal file
@ -0,0 +1,69 @@
package config

// GinkgoConfigType has been deprecated and its equivalent now lives in
// the types package. You can no longer access Ginkgo configuration from the config
// package. Instead use the DSL's GinkgoConfiguration() function to get copies of the
// current configuration
//
// GinkgoConfigType is still here so custom V1 reporters do not result in a compilation error
// It will be removed in a future minor release of Ginkgo
type GinkgoConfigType = DeprecatedGinkgoConfigType
type DeprecatedGinkgoConfigType struct {
	RandomSeed         int64
	RandomizeAllSpecs  bool
	RegexScansFilePath bool
	FocusStrings       []string
	SkipStrings        []string
	SkipMeasurements   bool
	FailOnPending      bool
	FailFast           bool
	FlakeAttempts      int
	EmitSpecProgress   bool
	DryRun             bool
	DebugParallel      bool

	ParallelNode  int
	ParallelTotal int
	SyncHost      string
	StreamHost    string
}

// DefaultReporterConfigType has been deprecated and its equivalent now lives in
// the types package. You can no longer access Ginkgo configuration from the config
// package. Instead use the DSL's GinkgoConfiguration() function to get copies of the
// current configuration
//
// DefaultReporterConfigType is still here so custom V1 reporters do not result in a compilation error
// It will be removed in a future minor release of Ginkgo
type DefaultReporterConfigType = DeprecatedDefaultReporterConfigType
type DeprecatedDefaultReporterConfigType struct {
	NoColor           bool
	SlowSpecThreshold float64
	NoisyPendings     bool
	NoisySkippings    bool
	Succinct          bool
	Verbose           bool
	FullTrace         bool
	ReportPassed      bool
	ReportFile        string
}

// Sadly there is no way to gracefully deprecate access to these global config variables.
// Users who need access to Ginkgo's configuration should use the DSL's GinkgoConfiguration() method
// These new unwieldy type names exist to give users a hint when they try to compile and the compilation fails
type GinkgoConfigIsNoLongerAccessibleFromTheConfigPackageUseTheDSLsGinkgoConfigurationFunctionInstead struct{}

// Sadly there is no way to gracefully deprecate access to these global config variables.
// Users who need access to Ginkgo's configuration should use the DSL's GinkgoConfiguration() method
// These new unwieldy type names exist to give users a hint when they try to compile and the compilation fails
var GinkgoConfig = GinkgoConfigIsNoLongerAccessibleFromTheConfigPackageUseTheDSLsGinkgoConfigurationFunctionInstead{}

// Sadly there is no way to gracefully deprecate access to these global config variables.
// Users who need access to Ginkgo's configuration should use the DSL's GinkgoConfiguration() method
// These new unwieldy type names exist to give users a hint when they try to compile and the compilation fails
type DefaultReporterConfigIsNoLongerAccessibleFromTheConfigPackageUseTheDSLsGinkgoConfigurationFunctionInstead struct{}

// Sadly there is no way to gracefully deprecate access to these global config variables.
// Users who need access to Ginkgo's configuration should use the DSL's GinkgoConfiguration() method
// These new unwieldy type names exist to give users a hint when they try to compile and the compilation fails
var DefaultReporterConfig = DefaultReporterConfigIsNoLongerAccessibleFromTheConfigPackageUseTheDSLsGinkgoConfigurationFunctionInstead{}
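The migration path these comments describe, as a sketch: code that used to mutate the v1 `config.GinkgoConfig` globals now asks the v2 DSL for copies and passes them back into RunSpecs (the `SkipStrings` and `FullTrace` fields appear in the RunSpecs docs below; the suite name is made up):

```go
package books_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestBooks(t *testing.T) {
	RegisterFailHandler(Fail)
	// v2 replacement for mutating the old config.GinkgoConfig globals:
	suiteConfig, reporterConfig := GinkgoConfiguration()
	suiteConfig.SkipStrings = []string{"NEVER-RUN"}
	reporterConfig.FullTrace = true
	RunSpecs(t, "Books Suite", suiteConfig, reporterConfig)
}
```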
750
vendor/github.com/onsi/ginkgo/v2/core_dsl.go
generated
vendored
Normal file
@ -0,0 +1,750 @@
/*
Ginkgo is a testing framework for Go designed to help you write expressive tests.
https://github.com/onsi/ginkgo
MIT-Licensed

The godoc documentation outlines Ginkgo's API. Since Ginkgo is a Domain-Specific Language it is important to
build a mental model for Ginkgo - the narrative documentation at https://onsi.github.io/ginkgo/ is designed to help you do that.
You should start there - even a brief skim will be helpful. At minimum you should skim through the https://onsi.github.io/ginkgo/#getting-started chapter.

Ginkgo is best paired with the Gomega matcher library: https://github.com/onsi/gomega

You can run Ginkgo specs with go test - however we recommend using the ginkgo cli. It enables functionality
that go test does not (especially running suites in parallel). You can learn more at https://onsi.github.io/ginkgo/#ginkgo-cli-overview
or by running 'ginkgo help'.
*/
package ginkgo

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"

	"github.com/go-logr/logr"
	"github.com/onsi/ginkgo/v2/formatter"
	"github.com/onsi/ginkgo/v2/internal"
	"github.com/onsi/ginkgo/v2/internal/global"
	"github.com/onsi/ginkgo/v2/internal/interrupt_handler"
	"github.com/onsi/ginkgo/v2/internal/parallel_support"
	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

const GINKGO_VERSION = types.VERSION

var flagSet types.GinkgoFlagSet
var deprecationTracker = types.NewDeprecationTracker()
var suiteConfig = types.NewDefaultSuiteConfig()
var reporterConfig = types.NewDefaultReporterConfig()
var suiteDidRun = false
var outputInterceptor internal.OutputInterceptor
var client parallel_support.Client

func init() {
	var err error
	flagSet, err = types.BuildTestSuiteFlagSet(&suiteConfig, &reporterConfig)
	exitIfErr(err)
	writer := internal.NewWriter(os.Stdout)
	GinkgoWriter = writer
	GinkgoLogr = internal.GinkgoLogrFunc(writer)
}
func exitIfErr(err error) {
	if err != nil {
		if outputInterceptor != nil {
			outputInterceptor.Shutdown()
		}
		if client != nil {
			client.Close()
		}
		fmt.Fprintln(formatter.ColorableStdErr, err.Error())
		os.Exit(1)
	}
}

func exitIfErrors(errors []error) {
	if len(errors) > 0 {
		if outputInterceptor != nil {
			outputInterceptor.Shutdown()
		}
		if client != nil {
			client.Close()
		}
		for _, err := range errors {
			fmt.Fprintln(formatter.ColorableStdErr, err.Error())
		}
		os.Exit(1)
	}
}

// The interface implemented by GinkgoWriter
type GinkgoWriterInterface interface {
	io.Writer

	Print(a ...interface{})
	Printf(format string, a ...interface{})
	Println(a ...interface{})

	TeeTo(writer io.Writer)
	ClearTeeWriters()
}
/*
SpecContext is the context object passed into nodes that are subject to a timeout or need to be notified of an interrupt. It implements the standard context.Context interface but also contains additional helpers to provide an extensibility point for Ginkgo. (As an example, Gomega's Eventually can use the methods defined on SpecContext to provide deeper integration with Ginkgo).

You can do anything with SpecContext that you do with a typical context.Context including wrapping it with any of the context.With* methods.

Ginkgo will cancel the SpecContext when a node is interrupted (e.g. by the user sending an interrupt signal) or when a node has exceeded its allowed run-time. Note, however, that even in cases where a node has a deadline, SpecContext will not return a deadline via .Deadline(). This is because Ginkgo does not use a WithDeadline() context to model node deadlines as Ginkgo needs control over the precise timing of the context cancellation to ensure it can provide an accurate progress report at the moment of cancellation.
*/
type SpecContext = internal.SpecContext
/*
GinkgoWriter implements a GinkgoWriterInterface and io.Writer

When running in verbose mode (ginkgo -v) any writes to GinkgoWriter will be immediately printed
to stdout. Otherwise, GinkgoWriter will buffer any writes produced during the current test and flush them to screen
only if the current test fails.

GinkgoWriter also provides convenience Print, Printf and Println methods and allows you to tee to a custom writer via GinkgoWriter.TeeTo(writer).
Writes to GinkgoWriter are immediately sent to any registered TeeTo() writers. You can unregister all TeeTo() Writers with GinkgoWriter.ClearTeeWriters()

You can learn more at https://onsi.github.io/ginkgo/#logging-output
*/
var GinkgoWriter GinkgoWriterInterface

/*
GinkgoLogr is a logr.Logger that writes to GinkgoWriter
*/
var GinkgoLogr logr.Logger

// The interface by which Ginkgo receives *testing.T
type GinkgoTestingT interface {
	Fail()
}

/*
GinkgoConfiguration returns the configuration of the current suite.

The first return value is the SuiteConfig which controls aspects of how the suite runs,
the second return value is the ReporterConfig which controls aspects of how Ginkgo's default
reporter emits output.

Mutating the returned configurations has no effect. To reconfigure Ginkgo programmatically you need
to pass in your mutated copies into RunSpecs().

You can learn more at https://onsi.github.io/ginkgo/#overriding-ginkgos-command-line-configuration-in-the-suite
*/
func GinkgoConfiguration() (types.SuiteConfig, types.ReporterConfig) {
	return suiteConfig, reporterConfig
}

/*
GinkgoRandomSeed returns the seed used to randomize spec execution order. It is
useful for seeding your own pseudorandom number generators to ensure
consistent executions from run to run, where your tests contain variability (for
example, when selecting random spec data).

You can learn more at https://onsi.github.io/ginkgo/#spec-randomization
*/
func GinkgoRandomSeed() int64 {
	return suiteConfig.RandomSeed
}

/*
GinkgoParallelProcess returns the parallel process number for the current ginkgo process
The process number is 1-indexed. You can use GinkgoParallelProcess() to shard access to shared
resources across your suites. You can learn more about patterns for sharding at https://onsi.github.io/ginkgo/#patterns-for-parallel-integration-specs

For more on how specs are parallelized in Ginkgo, see http://onsi.github.io/ginkgo/#spec-parallelization
*/
func GinkgoParallelProcess() int {
	return suiteConfig.ParallelProcess
}
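A sharding sketch built on GinkgoParallelProcess(): each parallel process derives its own resource name so processes never contend (the schema naming and database wiring are made up):

```go
var _ = BeforeEach(func() {
	// Each of N parallel processes gets a disjoint schema, e.g. "books_test_3".
	schema := fmt.Sprintf("books_test_%d", GinkgoParallelProcess())
	GinkgoWriter.Printf("using schema %s\n", schema)
	// ... point the spec's database client at `schema` here (hypothetical) ...
})
```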
/*
PauseOutputInterception() pauses Ginkgo's output interception. This is only relevant
when running in parallel and output to stdout/stderr is being intercepted. You generally
don't need to call this function - however there are cases when Ginkgo's output interception
mechanisms can interfere with external processes launched by the test process.

In particular, if an external process is launched that has cmd.Stdout/cmd.Stderr set to os.Stdout/os.Stderr
then Ginkgo's output interceptor will hang. To circumvent this, set cmd.Stdout/cmd.Stderr to GinkgoWriter.
If, for some reason, you aren't able to do that, you can PauseOutputInterception() before starting the process
then ResumeOutputInterception() after starting it.

Note that PauseOutputInterception() does not cause stdout writes to print to the console -
this simply stops intercepting and storing stdout writes to an internal buffer.
*/
func PauseOutputInterception() {
	if outputInterceptor == nil {
		return
	}
	outputInterceptor.PauseIntercepting()
}

// ResumeOutputInterception() - see docs for PauseOutputInterception()
func ResumeOutputInterception() {
	if outputInterceptor == nil {
		return
	}
	outputInterceptor.ResumeIntercepting()
}
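The escape hatch the comment describes, as a sketch (assumes "os" and "os/exec" imports; the binary name is made up, and pointing cmd at GinkgoWriter remains the preferred fix):

```go
It("launches a stubborn external process", func() {
	cmd := exec.Command("legacy-tool") // hypothetical binary that insists on os.Stdout
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	PauseOutputInterception() // avoid hanging the interceptor while the child holds stdout
	Expect(cmd.Start()).To(Succeed())
	ResumeOutputInterception()

	Expect(cmd.Wait()).To(Succeed())
})
```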
/*
RunSpecs is the entry point for the Ginkgo spec runner.

You must call this within a Golang testing TestX(t *testing.T) function.
If you bootstrapped your suite with "ginkgo bootstrap" this is already
done for you.

Ginkgo is typically configured via command-line flags. This configuration
can be overridden, however, and passed into RunSpecs as optional arguments:

	func TestMySuite(t *testing.T) {
		RegisterFailHandler(gomega.Fail)
		// fetch the current config
		suiteConfig, reporterConfig := GinkgoConfiguration()
		// adjust it
		suiteConfig.SkipStrings = []string{"NEVER-RUN"}
		reporterConfig.FullTrace = true
		// pass it in to RunSpecs
		RunSpecs(t, "My Suite", suiteConfig, reporterConfig)
	}

Note that some configuration changes can lead to undefined behavior. For example,
you should not change ParallelProcess or ParallelTotal as the Ginkgo CLI is responsible
for setting these and orchestrating parallel specs across the parallel processes. See http://onsi.github.io/ginkgo/#spec-parallelization
for more on how specs are parallelized in Ginkgo.

You can also pass suite-level Label() decorators to RunSpecs. The passed-in labels will apply to all specs in the suite.
*/
func RunSpecs(t GinkgoTestingT, description string, args ...interface{}) bool {
	if suiteDidRun {
		exitIfErr(types.GinkgoErrors.RerunningSuite())
	}
	suiteDidRun = true

	suiteLabels := Labels{}
	configErrors := []error{}
	for _, arg := range args {
		switch arg := arg.(type) {
		case types.SuiteConfig:
			suiteConfig = arg
		case types.ReporterConfig:
			reporterConfig = arg
		case Labels:
			suiteLabels = append(suiteLabels, arg...)
		default:
			configErrors = append(configErrors, types.GinkgoErrors.UnknownTypePassedToRunSpecs(arg))
		}
	}
	exitIfErrors(configErrors)

	configErrors = types.VetConfig(flagSet, suiteConfig, reporterConfig)
	if len(configErrors) > 0 {
		fmt.Fprintf(formatter.ColorableStdErr, formatter.F("{{red}}Ginkgo detected configuration issues:{{/}}\n"))
		for _, err := range configErrors {
			fmt.Fprintf(formatter.ColorableStdErr, err.Error())
		}
		os.Exit(1)
	}

	var reporter reporters.Reporter
	if suiteConfig.ParallelTotal == 1 {
		reporter = reporters.NewDefaultReporter(reporterConfig, formatter.ColorableStdOut)
		outputInterceptor = internal.NoopOutputInterceptor{}
		client = nil
	} else {
		reporter = reporters.NoopReporter{}
		switch strings.ToLower(suiteConfig.OutputInterceptorMode) {
		case "swap":
			outputInterceptor = internal.NewOSGlobalReassigningOutputInterceptor()
		case "none":
			outputInterceptor = internal.NoopOutputInterceptor{}
		default:
			outputInterceptor = internal.NewOutputInterceptor()
		}
		client = parallel_support.NewClient(suiteConfig.ParallelHost)
		if !client.Connect() {
			client = nil
			exitIfErr(types.GinkgoErrors.UnreachableParallelHost(suiteConfig.ParallelHost))
		}
		defer client.Close()
	}

	writer := GinkgoWriter.(*internal.Writer)
	if reporterConfig.Verbosity().GTE(types.VerbosityLevelVerbose) && suiteConfig.ParallelTotal == 1 {
		writer.SetMode(internal.WriterModeStreamAndBuffer)
	} else {
		writer.SetMode(internal.WriterModeBufferOnly)
	}

	if reporterConfig.WillGenerateReport() {
		registerReportAfterSuiteNodeForAutogeneratedReports(reporterConfig)
	}

	err := global.Suite.BuildTree()
	exitIfErr(err)

	suitePath, err := os.Getwd()
	exitIfErr(err)
	suitePath, err = filepath.Abs(suitePath)
	exitIfErr(err)

	passed, hasFocusedTests := global.Suite.Run(description, suiteLabels, suitePath, global.Failer, reporter, writer, outputInterceptor, interrupt_handler.NewInterruptHandler(client), client, internal.RegisterForProgressSignal, suiteConfig)
	outputInterceptor.Shutdown()

	flagSet.ValidateDeprecations(deprecationTracker)
	if deprecationTracker.DidTrackDeprecations() {
		fmt.Fprintln(formatter.ColorableStdErr, deprecationTracker.DeprecationsReport())
	}

	if !passed {
		t.Fail()
	}

	if passed && hasFocusedTests && strings.TrimSpace(os.Getenv("GINKGO_EDITOR_INTEGRATION")) == "" {
		fmt.Println("PASS | FOCUSED")
		os.Exit(types.GINKGO_FOCUS_EXIT_CODE)
	}
	return passed
}
/*
Skip instructs Ginkgo to skip the current spec

You can call Skip in any Setup or Subject node closure.

For more on how to filter specs in Ginkgo see https://onsi.github.io/ginkgo/#filtering-specs
*/
func Skip(message string, callerSkip ...int) {
	skip := 0
	if len(callerSkip) > 0 {
		skip = callerSkip[0]
	}
	cl := types.NewCodeLocationWithStackTrace(skip + 1)
	global.Failer.Skip(message, cl)
	panic(types.GinkgoErrors.UncaughtGinkgoPanic(cl))
}
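A common Skip pattern, as a sketch (the environment-variable gate is made up; assumes an "os" import):

```go
BeforeEach(func() {
	if os.Getenv("RUN_INTEGRATION") == "" { // hypothetical gate
		Skip("set RUN_INTEGRATION to run these specs")
	}
})
```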
/*
Fail notifies Ginkgo that the current spec has failed. (Gomega will call Fail for you automatically when an assertion fails.)

Under the hood, Fail panics to end execution of the current spec. Ginkgo will catch this panic and proceed with
the subsequent spec. If you call Fail, or make an assertion, within a goroutine launched by your spec you must
add defer GinkgoRecover() to the goroutine to catch the panic emitted by Fail.

You can call Fail in any Setup or Subject node closure.

You can learn more about how Ginkgo manages failures here: https://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure
*/
func Fail(message string, callerSkip ...int) {
	skip := 0
	if len(callerSkip) > 0 {
		skip = callerSkip[0]
	}

	cl := types.NewCodeLocationWithStackTrace(skip + 1)
	global.Failer.Fail(message, cl)
	panic(types.GinkgoErrors.UncaughtGinkgoPanic(cl))
}

/*
AbortSuite instructs Ginkgo to fail the current spec and skip all subsequent specs, thereby aborting the suite.

You can call AbortSuite in any Setup or Subject node closure.

You can learn more about how Ginkgo handles suite interruptions here: https://onsi.github.io/ginkgo/#interrupting-aborting-and-timing-out-suites
*/
func AbortSuite(message string, callerSkip ...int) {
	skip := 0
	if len(callerSkip) > 0 {
		skip = callerSkip[0]
	}

	cl := types.NewCodeLocationWithStackTrace(skip + 1)
	global.Failer.AbortSuite(message, cl)
	panic(types.GinkgoErrors.UncaughtGinkgoPanic(cl))
}
|
||||
/*
ignorablePanic is used by Gomega to signal to GinkgoRecover that Gomega is handling
the error associated with this panic. It is used when Eventually/Consistently are passed a func(g Gomega) and
the resulting function launches a goroutine that makes a failed assertion. That failed assertion is registered
by Gomega and then panics. Ordinarily the panic is captured by Gomega. In the case of a goroutine Gomega can't
capture the panic - so we piggyback on GinkgoRecover so users have a single defer GinkgoRecover() pattern to follow.
To do that we need to tell Ginkgo to ignore this panic and not register it as a panic on the global Failer.
*/
type ignorablePanic interface{ GinkgoRecoverShouldIgnoreThisPanic() }

/*
GinkgoRecover should be deferred at the top of any spawned goroutine that (may) call `Fail`.
Since Gomega assertions call Fail, you should add `defer GinkgoRecover()` at the top of any goroutine that
calls out to Gomega.

Here's why: Ginkgo's `Fail` method records the failure and then panics to prevent
further assertions from running. This panic must be recovered. Normally, Ginkgo recovers the panic for you,
however if a panic originates on a goroutine *launched* from one of your specs there's no
way for Ginkgo to rescue the panic. To do this, you must remember to `defer GinkgoRecover()` at the top of such a goroutine.

You can learn more about how Ginkgo manages failures here: https://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure
*/
func GinkgoRecover() {
	e := recover()
	if e != nil {
		if _, ok := e.(ignorablePanic); ok {
			return
		}
		global.Failer.Panic(types.NewCodeLocationWithStackTrace(1), e)
	}
}
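A sketch of the recommended pattern (checkConnection is a hypothetical helper):

```go
It("asserts safely from a goroutine", func() {
	done := make(chan struct{})
	go func() {
		defer GinkgoRecover() // recovers the panic emitted by Fail/Gomega assertions
		defer close(done)
		Expect(checkConnection()).To(Succeed()) // hypothetical helper
	}()
	<-done
})
```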
// pushNode is used by the various test construction DSL methods to push nodes onto the suite.
// It handles returned errors, emits a detailed error message to help the user learn what they may have done wrong, then exits.
func pushNode(node internal.Node, errors []error) bool {
	exitIfErrors(errors)
	exitIfErr(global.Suite.PushNode(node))
	return true
}
/*
Describe nodes are Container nodes that allow you to organize your specs. A Describe node's closure can contain any number of
Setup nodes (e.g. BeforeEach, AfterEach, JustBeforeEach), and Subject nodes (i.e. It).

Context and When nodes are aliases for Describe - use whichever gives your suite a better narrative flow. It is idiomatic
to Describe the behavior of an object or function and, within that Describe, outline a number of Contexts and Whens.
See the sketch after these declarations for an example.

You can learn more at https://onsi.github.io/ginkgo/#organizing-specs-with-container-nodes
In addition, container nodes can be decorated with a variety of decorators. You can learn more here: https://onsi.github.io/ginkgo/#decorator-reference
*/
func Describe(text string, args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeContainer, text, args...))
}

/*
FDescribe focuses specs within the Describe block.
*/
func FDescribe(text string, args ...interface{}) bool {
	args = append(args, internal.Focus)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeContainer, text, args...))
}

/*
PDescribe marks specs within the Describe block as pending.
*/
func PDescribe(text string, args ...interface{}) bool {
	args = append(args, internal.Pending)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeContainer, text, args...))
}

/*
XDescribe marks specs within the Describe block as pending.

XDescribe is an alias for PDescribe
*/
var XDescribe = PDescribe

/* Context is an alias for Describe - it generates the exact same kind of Container node */
var Context, FContext, PContext, XContext = Describe, FDescribe, PDescribe, XDescribe

/* When is an alias for Describe - it generates the exact same kind of Container node */
func When(text string, args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeContainer, "when "+text, args...))
}

/* FWhen focuses specs within the When block */
func FWhen(text string, args ...interface{}) bool {
	args = append(args, internal.Focus)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeContainer, "when "+text, args...))
}

/* PWhen marks specs within the When block as pending */
func PWhen(text string, args ...interface{}) bool {
	args = append(args, internal.Pending)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeContainer, "when "+text, args...))
}

var XWhen = PWhen
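A minimal sketch of this idiomatic structure (Bookshelf and NewBookshelf are hypothetical):

```go
var _ = Describe("Bookshelf", func() {
	var shelf *Bookshelf // hypothetical type under test

	BeforeEach(func() {
		shelf = NewBookshelf()
	})

	Context("when the shelf is empty", func() {
		It("reports a count of zero", func() {
			Expect(shelf.Count()).To(Equal(0))
		})
	})

	When("a book is added", func() {
		It("increments the count", func() {
			shelf.Add("Les Misérables")
			Expect(shelf.Count()).To(Equal(1))
		})
	})
})
```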
/*
It nodes are Subject nodes that contain your spec code and assertions.

Each It node corresponds to an individual Ginkgo spec. You cannot nest any other Ginkgo nodes within an It node's closure.

You can pass It nodes bare functions (func() {}) or functions that receive a SpecContext or context.Context: func(ctx SpecContext) {} and func(ctx context.Context) {}. If the function takes a context then the It is deemed interruptible and Ginkgo will cancel the context in the event of a timeout (configured via the SpecTimeout() or NodeTimeout() decorators) or of an interrupt signal. A sketch of an interruptible It follows these declarations.

You can learn more at https://onsi.github.io/ginkgo/#spec-subjects-it
In addition, subject nodes can be decorated with a variety of decorators. You can learn more here: https://onsi.github.io/ginkgo/#decorator-reference
*/
func It(text string, args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeIt, text, args...))
}

/*
FIt allows you to focus an individual It.
*/
func FIt(text string, args ...interface{}) bool {
	args = append(args, internal.Focus)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeIt, text, args...))
}

/*
PIt allows you to mark an individual It as pending.
*/
func PIt(text string, args ...interface{}) bool {
	args = append(args, internal.Pending)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeIt, text, args...))
}

/*
XIt allows you to mark an individual It as pending.

XIt is an alias for PIt
*/
var XIt = PIt
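A sketch of an interruptible spec, assuming a hypothetical fetchStatus helper that honors context cancellation:

```go
It("responds before the deadline", func(ctx SpecContext) {
	status, err := fetchStatus(ctx, "https://example.com/healthz") // hypothetical helper
	Expect(err).NotTo(HaveOccurred())
	Expect(status).To(Equal(200))
}, SpecTimeout(time.Second*5))
```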
/*
Specify is an alias for It - it can allow for more natural wording in some contexts.
*/
var Specify, FSpecify, PSpecify, XSpecify = It, FIt, PIt, XIt

/*
By allows you to better document complex Specs.

Generally you should try to keep your Its short and to the point. This is not always possible, however,
especially in the context of integration tests that capture complex or lengthy workflows.

By allows you to document such flows. By may be called within a Setup or Subject node (It, BeforeEach, etc...)
and will simply log the passed in text to the GinkgoWriter. If By is handed a function it will immediately run the function.

By will also generate and attach a ReportEntry to the spec. This will ensure that By annotations appear in Ginkgo's machine-readable reports.

Note that By does not generate a new Ginkgo node - rather it is simply syntactic sugar around GinkgoWriter and AddReportEntry.
You can learn more about By here: https://onsi.github.io/ginkgo/#documenting-complex-specs-by
*/
func By(text string, callback ...func()) {
	exitIfErr(global.Suite.By(text, callback...))
}
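For instance, a sketch of a multi-step spec (cart, item, and gateway are hypothetical):

```go
It("completes a checkout", func() {
	By("adding an item to the cart")
	cart.Add(item)

	By("charging the payment gateway", func() {
		Expect(gateway.Charge(cart.Total())).To(Succeed())
	})
})
```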
/*
BeforeSuite nodes are suite-level Setup nodes that run just once before any specs are run.
When running in parallel, each parallel process will call BeforeSuite.

You may only register *one* BeforeSuite handler per test suite. You typically do so in your bootstrap file at the top level.

BeforeSuite can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within a BeforeSuite node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#suite-setup-and-cleanup-beforesuite-and-aftersuite
*/
func BeforeSuite(body interface{}, args ...interface{}) bool {
	combinedArgs := []interface{}{body}
	combinedArgs = append(combinedArgs, args...)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeBeforeSuite, "", combinedArgs...))
}

/*
AfterSuite nodes are suite-level Setup nodes run after all specs have finished - regardless of whether specs have passed or failed.
AfterSuite node closures always run, even if Ginkgo receives an interrupt signal (^C), in order to ensure cleanup occurs.

When running in parallel, each parallel process will call AfterSuite.

You may only register *one* AfterSuite handler per test suite. You typically do so in your bootstrap file at the top level.

AfterSuite can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within an AfterSuite node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#suite-setup-and-cleanup-beforesuite-and-aftersuite
*/
func AfterSuite(body interface{}, args ...interface{}) bool {
	combinedArgs := []interface{}{body}
	combinedArgs = append(combinedArgs, args...)
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeAfterSuite, "", combinedArgs...))
}
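A sketch of a paired BeforeSuite/AfterSuite registration (the Client type and address are illustrative):

```go
var client *Client // hypothetical shared client

var _ = BeforeSuite(func() {
	client = NewClient("localhost:8080")
	Expect(client.Ping()).To(Succeed())
})

var _ = AfterSuite(func() {
	client.Close()
})
```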
/*
SynchronizedBeforeSuite nodes allow you to perform some of the suite setup just once - on parallel process #1 - and then pass information
from that setup to the rest of the suite setup on all processes. This is useful for performing expensive or singleton setup once, then passing
information from that setup to all parallel processes.

SynchronizedBeforeSuite accomplishes this by taking *two* function arguments and passing data between them.
The first function is only run on parallel process #1. The second is run on all processes, but *only* after the first function completes successfully.

The first function (which only runs on process #1) can have any of the following signatures:

	func()
	func(ctx context.Context)
	func(ctx SpecContext)
	func() []byte
	func(ctx context.Context) []byte
	func(ctx SpecContext) []byte

The byte array returned by the first function (if present) is then passed to the second function, which can have any of the following signatures:

	func()
	func(ctx context.Context)
	func(ctx SpecContext)
	func(data []byte)
	func(ctx context.Context, data []byte)
	func(ctx SpecContext, data []byte)

If either function receives a context.Context/SpecContext it is considered interruptible.

You cannot nest any other Ginkgo nodes within a SynchronizedBeforeSuite node's closure.
You can learn more, and see some examples, here: https://onsi.github.io/ginkgo/#parallel-suite-setup-and-cleanup-synchronizedbeforesuite-and-synchronizedaftersuite
*/
func SynchronizedBeforeSuite(process1Body interface{}, allProcessBody interface{}, args ...interface{}) bool {
	combinedArgs := []interface{}{process1Body, allProcessBody}
	combinedArgs = append(combinedArgs, args...)

	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeSynchronizedBeforeSuite, "", combinedArgs...))
}
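A sketch of the two-function pattern, assuming a hypothetical startDatabase helper that boots a shared database and returns its address:

```go
var dbAddr string

var _ = SynchronizedBeforeSuite(func() []byte {
	// runs only on process #1: boot the shared database once
	return []byte(startDatabase()) // hypothetical helper
}, func(data []byte) {
	// runs on every parallel process: record the address produced above
	dbAddr = string(data)
})
```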
/*
SynchronizedAfterSuite nodes complement the SynchronizedBeforeSuite nodes in solving the problem of splitting clean up into a piece that runs on all processes
and a piece that must only run once - on process #1.

SynchronizedAfterSuite accomplishes this by taking *two* function arguments. The first runs on all processes. The second runs only on parallel process #1
and *only* after all other processes have finished and exited. This ensures that process #1, and any resources it is managing, remain alive until
all other processes are finished. These two functions can be bare functions (func()) or interruptible (func(context.Context)/func(SpecContext)).

Note that you can also use DeferCleanup() in SynchronizedBeforeSuite to accomplish similar results.

You cannot nest any other Ginkgo nodes within a SynchronizedAfterSuite node's closure.
You can learn more, and see some examples, here: https://onsi.github.io/ginkgo/#parallel-suite-setup-and-cleanup-synchronizedbeforesuite-and-synchronizedaftersuite
*/
func SynchronizedAfterSuite(allProcessBody interface{}, process1Body interface{}, args ...interface{}) bool {
	combinedArgs := []interface{}{allProcessBody, process1Body}
	combinedArgs = append(combinedArgs, args...)

	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeSynchronizedAfterSuite, "", combinedArgs...))
}
/*
BeforeEach nodes are Setup nodes whose closures run before It node closures. When multiple BeforeEach nodes
are defined in nested Container nodes the outermost BeforeEach node closures are run first.

BeforeEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within a BeforeEach node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#extracting-common-setup-beforeeach
*/
func BeforeEach(args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeBeforeEach, "", args...))
}

/*
JustBeforeEach nodes are similar to BeforeEach nodes, however they are guaranteed to run *after* all BeforeEach node closures - just before the It node closure.
This can allow you to separate configuration from creation of resources for a spec.

JustBeforeEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within a JustBeforeEach node's closure.
You can learn more and see some examples here: https://onsi.github.io/ginkgo/#separating-creation-and-configuration-justbeforeeach
*/
func JustBeforeEach(args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeJustBeforeEach, "", args...))
}

/*
AfterEach nodes are Setup nodes whose closures run after It node closures. When multiple AfterEach nodes
are defined in nested Container nodes the innermost AfterEach node closures are run first.

Note that you can also use DeferCleanup() in other Setup or Subject nodes to accomplish similar results.

AfterEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within an AfterEach node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#spec-cleanup-aftereach-and-defercleanup
*/
func AfterEach(args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeAfterEach, "", args...))
}

/*
JustAfterEach nodes are similar to AfterEach nodes, however they are guaranteed to run *before* all AfterEach node closures - just after the It node closure. This can allow you to separate diagnostics collection from teardown for a spec.

JustAfterEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within a JustAfterEach node's closure.
You can learn more and see some examples here: https://onsi.github.io/ginkgo/#separating-diagnostics-collection-and-teardown-justaftereach
*/
func JustAfterEach(args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeJustAfterEach, "", args...))
}
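A sketch of how these four per-spec setup nodes interleave (Config, Server, and the helpers are hypothetical):

```go
Describe("a server", func() {
	var cfg Config     // hypothetical configuration type
	var server *Server // hypothetical server type

	BeforeEach(func() { cfg = DefaultConfig() })   // 1: configure
	JustBeforeEach(func() { server = Start(cfg) }) // 2: create from the final config
	JustAfterEach(func() { CollectLogs(server) })  // 3: diagnostics before teardown
	AfterEach(func() { server.Stop() })            // 4: tear down

	Context("with TLS enabled", func() {
		BeforeEach(func() { cfg.TLS = true }) // tweaks cfg before JustBeforeEach starts the server
		It("serves over TLS", func() {
			Expect(server.ServesTLS()).To(BeTrue())
		})
	})
})
```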
/*
BeforeAll nodes are Setup nodes that can occur inside Ordered containers. They run just once before any specs in the Ordered container run.

Multiple BeforeAll nodes can be defined in a given Ordered container however they cannot be nested inside any other container.

BeforeAll can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within a BeforeAll node's closure.
You can learn more about Ordered Containers at: https://onsi.github.io/ginkgo/#ordered-containers
And you can learn more about BeforeAll at: https://onsi.github.io/ginkgo/#setup-in-ordered-containers-beforeall-and-afterall
*/
func BeforeAll(args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeBeforeAll, "", args...))
}

/*
AfterAll nodes are Setup nodes that can occur inside Ordered containers. They run just once after all specs in the Ordered container have run.

Multiple AfterAll nodes can be defined in a given Ordered container however they cannot be nested inside any other container.

Note that you can also use DeferCleanup() in a BeforeAll node to accomplish similar behavior.

AfterAll can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.

You cannot nest any other Ginkgo nodes within an AfterAll node's closure.
You can learn more about Ordered Containers at: https://onsi.github.io/ginkgo/#ordered-containers
And you can learn more about AfterAll at: https://onsi.github.io/ginkgo/#setup-in-ordered-containers-beforeall-and-afterall
*/
func AfterAll(args ...interface{}) bool {
	return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeAfterAll, "", args...))
}
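A sketch of BeforeAll/AfterAll inside an Ordered container (DB, Connect, and Migrate are hypothetical):

```go
var _ = Describe("migrations", Ordered, func() {
	var db *DB // hypothetical handle shared across the ordered specs

	BeforeAll(func() { db = Connect() })
	AfterAll(func() { db.Close() })

	It("applies migration 1", func() { Expect(db.Migrate(1)).To(Succeed()) })
	It("applies migration 2", func() { Expect(db.Migrate(2)).To(Succeed()) })
})
```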
/*
DeferCleanup can be called within any Setup or Subject node to register a cleanup callback that Ginkgo will call at the appropriate time to clean up after the spec.

DeferCleanup can be passed:
1. A function that takes no arguments and returns no values.
2. A function that returns multiple values. `DeferCleanup` will ignore all these return values except for the last one. If this last return value is a non-nil error `DeferCleanup` will fail the spec.
3. A function that takes a context.Context or SpecContext (and optionally returns multiple values). The resulting cleanup node is deemed interruptible and the passed-in context will be cancelled in the event of a timeout or interrupt.
4. A function that takes arguments (and optionally returns multiple values) followed by a list of arguments to pass to the function.
5. A function that takes SpecContext and a list of arguments (and optionally returns multiple values) followed by a list of arguments to pass to the function.

For example:

	BeforeEach(func() {
		DeferCleanup(os.Setenv, "FOO", os.Getenv("FOO"))
		os.Setenv("FOO", "BAR")
	})

will register a cleanup handler that restores the environment variable "FOO" to its original value (obtained by os.Getenv("FOO")) after the spec runs, and then sets the environment variable "FOO" to "BAR" for the current spec.

Similarly:

	BeforeEach(func() {
		DeferCleanup(func(ctx SpecContext, path string) {
			req, err := http.NewRequestWithContext(ctx, "POST", path, nil)
			Expect(err).NotTo(HaveOccurred())
			_, err = http.DefaultClient.Do(req)
			Expect(err).NotTo(HaveOccurred())
		}, "example.com/cleanup", NodeTimeout(time.Second*3))
	})

will register a cleanup handler that will have three seconds to successfully complete a request to the specified path. Note that we do not specify a context in the list of arguments passed to DeferCleanup - only in the signature of the function we pass in. Ginkgo will detect the requested context and supply a SpecContext when it invokes the cleanup node. If you want to pass in your own context in addition to the Ginkgo-provided SpecContext you must specify the SpecContext as the first argument (e.g. func(ctx SpecContext, otherCtx context.Context)).

When DeferCleanup is called in BeforeEach, JustBeforeEach, It, AfterEach, or JustAfterEach the registered callback will be invoked when the spec completes (i.e. it will behave like an AfterEach node).
When DeferCleanup is called in BeforeAll or AfterAll the registered callback will be invoked when the ordered container completes (i.e. it will behave like an AfterAll node).
When DeferCleanup is called in BeforeSuite, SynchronizedBeforeSuite, AfterSuite, or SynchronizedAfterSuite the registered callback will be invoked when the suite completes (i.e. it will behave like an AfterSuite node).

Note that DeferCleanup does not represent a node but rather dynamically generates the appropriate type of cleanup node based on the context in which it is called. As such you must call DeferCleanup within a Setup or Subject node, and not within a Container node.
You can learn more about DeferCleanup here: https://onsi.github.io/ginkgo/#cleaning-up-our-cleanup-code-defercleanup
*/
func DeferCleanup(args ...interface{}) {
	fail := func(message string, cl types.CodeLocation) {
		global.Failer.Fail(message, cl)
	}
	pushNode(internal.NewCleanupNode(deprecationTracker, fail, args...))
}
143
vendor/github.com/onsi/ginkgo/v2/decorator_dsl.go
generated
vendored
Normal file
@ -0,0 +1,143 @@
package ginkgo

import (
	"github.com/onsi/ginkgo/v2/internal"
)

/*
Offset(uint) is a decorator that allows you to change the stack-frame offset used when computing the line number of the node in question.

You can learn more here: https://onsi.github.io/ginkgo/#the-offset-decorator
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
type Offset = internal.Offset

/*
FlakeAttempts(uint N) is a decorator that allows you to mark individual specs or spec containers as flaky. Ginkgo will run them up to `N` times until they pass.

You can learn more here: https://onsi.github.io/ginkgo/#the-flakeattempts-decorator
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
type FlakeAttempts = internal.FlakeAttempts

/*
MustPassRepeatedly(uint N) is a decorator that allows you to repeat the execution of individual specs or spec containers. Ginkgo will run them up to `N` times until they fail.

You can learn more here: https://onsi.github.io/ginkgo/#the-mustpassrepeatedly-decorator
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
type MustPassRepeatedly = internal.MustPassRepeatedly

/*
Focus is a decorator that allows you to mark a spec or container as focused. Identical to FIt and FDescribe.

You can learn more here: https://onsi.github.io/ginkgo/#filtering-specs
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Focus = internal.Focus

/*
Pending is a decorator that allows you to mark a spec or container as pending. Identical to PIt and PDescribe.

You can learn more here: https://onsi.github.io/ginkgo/#filtering-specs
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Pending = internal.Pending

/*
Serial is a decorator that allows you to mark a spec or container as serial. These specs will never run in parallel with other specs.
Specs in ordered containers cannot be marked as serial - mark the ordered container instead.

You can learn more here: https://onsi.github.io/ginkgo/#serial-specs
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Serial = internal.Serial

/*
Ordered is a decorator that allows you to mark a container as ordered. Specs in the container will always run in the order they appear.
They will never be randomized and they will never run in parallel with one another, though they may run in parallel with other specs.

You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const Ordered = internal.Ordered

/*
ContinueOnFailure is a decorator that allows you to mark an Ordered container to continue running specs even if failures occur. Ordinarily an ordered container will stop running specs after the first failure occurs. Note that if a BeforeAll or a BeforeEach/JustBeforeEach annotated with OncePerOrdered fails then no specs will run, as the precondition for the Ordered container will be considered failed.

ContinueOnFailure only applies to the outermost Ordered container. Attempting to place ContinueOnFailure in a nested container will result in an error.

You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const ContinueOnFailure = internal.ContinueOnFailure

/*
OncePerOrdered is a decorator that allows you to mark outer BeforeEach, AfterEach, JustBeforeEach, and JustAfterEach setup nodes to run once
per ordered context. Normally these setup nodes run around each individual spec; with OncePerOrdered they will run once around the set of specs in an ordered container.
The behavior for non-Ordered containers/specs is unchanged.

You can learn more here: https://onsi.github.io/ginkgo/#setup-around-ordered-containers-the-onceperordered-decorator
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const OncePerOrdered = internal.OncePerOrdered
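A sketch combining several of these decorators (pollEndpoint is a hypothetical helper):

```go
var _ = Describe("flaky network calls", Ordered, ContinueOnFailure, func() {
	It("eventually reaches the endpoint", FlakeAttempts(3), func() {
		Expect(pollEndpoint()).To(Succeed()) // hypothetical helper
	})
})
```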
/*
Label decorates specs with Labels. Multiple labels can be passed to Label and these can be arbitrary strings but must not include the following characters: "&|!,()/".
Labels can be applied to container and subject nodes, but not setup nodes. You can provide multiple Labels to a given node and a spec's labels are the union of all labels in its node hierarchy.

You can learn more here: https://onsi.github.io/ginkgo/#spec-labels
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
func Label(labels ...string) Labels {
	return Labels(labels)
}

/*
Labels are the type for spec Label decorators. Use Label(...) to construct Labels.
You can learn more here: https://onsi.github.io/ginkgo/#spec-labels
*/
type Labels = internal.Labels
/*
PollProgressAfter allows you to override the configured value for --poll-progress-after for a particular node.

Ginkgo will start emitting node progress if the node is still running after a duration of PollProgressAfter. This allows you to get quicker feedback about the state of a long-running spec.
*/
type PollProgressAfter = internal.PollProgressAfter

/*
PollProgressInterval allows you to override the configured value for --poll-progress-interval for a particular node.

Once a node has been running for longer than PollProgressAfter, Ginkgo will emit node progress periodically at an interval of PollProgressInterval.
*/
type PollProgressInterval = internal.PollProgressInterval

/*
NodeTimeout allows you to specify a timeout for an individual node. The node cannot be a container and must be interruptible (i.e. it must be passed a function that accepts a SpecContext or context.Context).

If the node does not exit within the specified NodeTimeout its context will be cancelled. The node will then have a period of time controlled by the GracePeriod decorator (or global --grace-period command-line argument) to exit. If the node does not exit within GracePeriod Ginkgo will leak the node and proceed to any clean-up nodes associated with the current spec.
*/
type NodeTimeout = internal.NodeTimeout

/*
SpecTimeout allows you to specify a timeout for an individual spec. SpecTimeout can only decorate interruptible It nodes.

All nodes associated with the It node will need to complete before the SpecTimeout has elapsed. Individual nodes (e.g. BeforeEach) may be decorated with different NodeTimeouts - but these can only serve to provide a more stringent deadline for the node in question; they cannot extend the deadline past the SpecTimeout.

If the spec does not complete within the specified SpecTimeout the currently running node will have its context cancelled. The node will then have a period of time controlled by that node's GracePeriod decorator (or global --grace-period command-line argument) to exit. If the node does not exit within GracePeriod Ginkgo will leak the node and proceed to any clean-up nodes associated with the current spec.
*/
type SpecTimeout = internal.SpecTimeout

/*
GracePeriod denotes the period of time Ginkgo will wait for an interruptible node to exit once an interruption (whether due to a timeout or a user-invoked signal) has occurred. If both the global --grace-period cli flag and a GracePeriod decorator are specified the value in the decorator will take precedence.

Nodes that do not finish within a GracePeriod will be leaked and Ginkgo will proceed to run subsequent nodes. In the event of a timeout, such leaks will be reported to the user.
*/
type GracePeriod = internal.GracePeriod

/*
SuppressProgressReporting is a decorator that allows you to disable progress reporting of a particular node. This is useful if `ginkgo -v -progress` is generating too much noise; particularly
if you have a `ReportAfterEach` node that is running for every skipped spec and is generating lots of progress reports.
*/
const SuppressProgressReporting = internal.SuppressProgressReporting
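A sketch of the timeout decorators applied to an interruptible setup node (waitForService is a hypothetical helper that honors cancellation):

```go
BeforeEach(func(ctx context.Context) {
	Expect(waitForService(ctx)).To(Succeed()) // hypothetical helper
}, NodeTimeout(time.Second*30), GracePeriod(time.Second*5))
```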
135
vendor/github.com/onsi/ginkgo/v2/deprecated_dsl.go
generated
vendored
Normal file
@ -0,0 +1,135 @@
package ginkgo

import (
	"time"

	"github.com/onsi/ginkgo/v2/internal"
	"github.com/onsi/ginkgo/v2/internal/global"
	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

/*
Deprecated: Done Channel for asynchronous testing

The Done channel pattern is no longer supported in Ginkgo 2.0.
See here for better patterns for asynchronous testing: https://onsi.github.io/ginkgo/#patterns-for-asynchronous-testing

For a migration guide see: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-async-testing
*/
type Done = internal.Done

/*
Deprecated: Custom Ginkgo test reporters are deprecated in Ginkgo 2.0.

Use Ginkgo's reporting nodes and 2.0 reporting infrastructure instead. You can learn more here: https://onsi.github.io/ginkgo/#reporting-infrastructure
For a migration guide see: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
*/
type Reporter = reporters.DeprecatedReporter

/*
Deprecated: Custom Reporters have been removed in Ginkgo 2.0. RunSpecsWithDefaultAndCustomReporters will simply call RunSpecs()

Use Ginkgo's reporting nodes and 2.0 reporting infrastructure instead. You can learn more here: https://onsi.github.io/ginkgo/#reporting-infrastructure
For a migration guide see: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
*/
func RunSpecsWithDefaultAndCustomReporters(t GinkgoTestingT, description string, _ []Reporter) bool {
	deprecationTracker.TrackDeprecation(types.Deprecations.CustomReporter())
	return RunSpecs(t, description)
}

/*
Deprecated: Custom Reporters have been removed in Ginkgo 2.0. RunSpecsWithCustomReporters will simply call RunSpecs()

Use Ginkgo's reporting nodes and 2.0 reporting infrastructure instead. You can learn more here: https://onsi.github.io/ginkgo/#reporting-infrastructure
For a migration guide see: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
*/
func RunSpecsWithCustomReporters(t GinkgoTestingT, description string, _ []Reporter) bool {
	deprecationTracker.TrackDeprecation(types.Deprecations.CustomReporter())
	return RunSpecs(t, description)
}

/*
Deprecated: GinkgoTestDescription has been replaced with SpecReport.

Use CurrentSpecReport() instead.
You can learn more here: https://onsi.github.io/ginkgo/#getting-a-report-for-the-current-spec
The SpecReport type is documented here: https://pkg.go.dev/github.com/onsi/ginkgo/v2/types#SpecReport
*/
type DeprecatedGinkgoTestDescription struct {
	FullTestText   string
	ComponentTexts []string
	TestText       string

	FileName   string
	LineNumber int

	Failed   bool
	Duration time.Duration
}
type GinkgoTestDescription = DeprecatedGinkgoTestDescription

/*
Deprecated: CurrentGinkgoTestDescription has been replaced with CurrentSpecReport.

Use CurrentSpecReport() instead.
You can learn more here: https://onsi.github.io/ginkgo/#getting-a-report-for-the-current-spec
The SpecReport type is documented here: https://pkg.go.dev/github.com/onsi/ginkgo/v2/types#SpecReport
*/
func CurrentGinkgoTestDescription() DeprecatedGinkgoTestDescription {
	deprecationTracker.TrackDeprecation(
		types.Deprecations.CurrentGinkgoTestDescription(),
		types.NewCodeLocation(1),
	)
	report := global.Suite.CurrentSpecReport()
	if report.State == types.SpecStateInvalid {
		return GinkgoTestDescription{}
	}
	componentTexts := []string{}
	componentTexts = append(componentTexts, report.ContainerHierarchyTexts...)
	componentTexts = append(componentTexts, report.LeafNodeText)

	return DeprecatedGinkgoTestDescription{
		ComponentTexts: componentTexts,
		FullTestText:   report.FullText(),
		TestText:       report.LeafNodeText,
		FileName:       report.LeafNodeLocation.FileName,
		LineNumber:     report.LeafNodeLocation.LineNumber,
		Failed:         report.State.Is(types.SpecStateFailureStates),
		Duration:       report.RunTime,
	}
}
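A sketch of the v2 replacement for code that used CurrentGinkgoTestDescription to log failures:

```go
ReportAfterEach(func(report SpecReport) {
	if report.Failed() {
		fmt.Fprintf(GinkgoWriter, "%s failed after %s\n", report.FullText(), report.RunTime)
	}
})
```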
/*
Deprecated: GinkgoParallelNode() has been renamed to GinkgoParallelProcess()
*/
func GinkgoParallelNode() int {
	deprecationTracker.TrackDeprecation(
		types.Deprecations.ParallelNode(),
		types.NewCodeLocation(1),
	)
	return GinkgoParallelProcess()
}

/*
Deprecated: Benchmarker has been removed from Ginkgo 2.0

Use Gomega's gmeasure package instead.
You can learn more here: https://onsi.github.io/ginkgo/#benchmarking-code
*/
type Benchmarker interface {
	Time(name string, body func(), info ...interface{}) (elapsedTime time.Duration)
	RecordValue(name string, value float64, info ...interface{})
	RecordValueWithPrecision(name string, value float64, units string, precision int, info ...interface{})
}

/*
Deprecated: Measure() has been removed from Ginkgo 2.0

Use Gomega's gmeasure package instead.
You can learn more here: https://onsi.github.io/ginkgo/#benchmarking-code
*/
func Measure(_ ...interface{}) bool {
	deprecationTracker.TrackDeprecation(types.Deprecations.Measure(), types.NewCodeLocation(1))
	return true
}
@ -1,3 +1,11 @@
// +build !windows

/*
These packages are used for colorize on Windows and contributed by mattn.jp@gmail.com

  * go-colorable: <https://github.com/mattn/go-colorable>
  * go-isatty: <https://github.com/mattn/go-isatty>

The MIT License (MIT)

Copyright (c) 2016 Yasuhiro Matsumoto
@ -19,3 +27,15 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/

package formatter

import (
	"io"
	"os"
)

func newColorable(file *os.File) io.Writer {
	return file
}
@ -1,4 +1,33 @@
package colorable
/*
These packages are used for colorize on Windows and contributed by mattn.jp@gmail.com

  * go-colorable: <https://github.com/mattn/go-colorable>
  * go-isatty: <https://github.com/mattn/go-isatty>

The MIT License (MIT)

Copyright (c) 2016 Yasuhiro Matsumoto

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/

package formatter

import (
	"bytes"
@ -10,10 +39,24 @@ import (
	"strings"
	"syscall"
	"unsafe"

	"github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty"
)

var (
	kernel32                       = syscall.NewLazyDLL("kernel32.dll")
	procGetConsoleScreenBufferInfo = kernel32.NewProc("GetConsoleScreenBufferInfo")
	procSetConsoleTextAttribute    = kernel32.NewProc("SetConsoleTextAttribute")
	procSetConsoleCursorPosition   = kernel32.NewProc("SetConsoleCursorPosition")
	procFillConsoleOutputCharacter = kernel32.NewProc("FillConsoleOutputCharacterW")
	procFillConsoleOutputAttribute = kernel32.NewProc("FillConsoleOutputAttribute")
	procGetConsoleMode             = kernel32.NewProc("GetConsoleMode")
)

func isTerminal(fd uintptr) bool {
	var st uint32
	r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, fd, uintptr(unsafe.Pointer(&st)), 0)
	return r != 0 && e == 0
}

const (
	foregroundBlue  = 0x1
	foregroundGreen = 0x2
@ -52,45 +95,28 @@ type consoleScreenBufferInfo struct {
	maximumWindowSize coord
}

var (
	kernel32                       = syscall.NewLazyDLL("kernel32.dll")
	procGetConsoleScreenBufferInfo = kernel32.NewProc("GetConsoleScreenBufferInfo")
	procSetConsoleTextAttribute    = kernel32.NewProc("SetConsoleTextAttribute")
	procSetConsoleCursorPosition   = kernel32.NewProc("SetConsoleCursorPosition")
	procFillConsoleOutputCharacter = kernel32.NewProc("FillConsoleOutputCharacterW")
	procFillConsoleOutputAttribute = kernel32.NewProc("FillConsoleOutputAttribute")
)

type Writer struct {
type writer struct {
	out     io.Writer
	handle  syscall.Handle
	lastbuf bytes.Buffer
	oldattr word
}

func NewColorable(file *os.File) io.Writer {
func newColorable(file *os.File) io.Writer {
	if file == nil {
		panic("nil passed instead of *os.File to NewColorable()")
	}

	if isatty.IsTerminal(file.Fd()) {
	if isTerminal(file.Fd()) {
		var csbi consoleScreenBufferInfo
		handle := syscall.Handle(file.Fd())
		procGetConsoleScreenBufferInfo.Call(uintptr(handle), uintptr(unsafe.Pointer(&csbi)))
		return &Writer{out: file, handle: handle, oldattr: csbi.attributes}
		return &writer{out: file, handle: handle, oldattr: csbi.attributes}
	} else {
		return file
	}
}

func NewColorableStdout() io.Writer {
	return NewColorable(os.Stdout)
}

func NewColorableStderr() io.Writer {
	return NewColorable(os.Stderr)
}

var color256 = map[int]int{
	0: 0x000000,
	1: 0x800000,
@ -350,7 +376,7 @@ var color256 = map[int]int{
	255: 0xeeeeee,
}

func (w *Writer) Write(data []byte) (n int, err error) {
func (w *writer) Write(data []byte) (n int, err error) {
	var csbi consoleScreenBufferInfo
	procGetConsoleScreenBufferInfo.Call(uintptr(w.handle), uintptr(unsafe.Pointer(&csbi)))
@ -2,10 +2,16 @@ package formatter
import (
	"fmt"
	"os"
	"regexp"
	"strconv"
	"strings"
)

// ColorableStdOut and ColorableStdErr enable color output support on Windows
var ColorableStdOut = newColorable(os.Stdout)
var ColorableStdErr = newColorable(os.Stderr)

const COLS = 80

type ColorMode uint8
@ -45,6 +51,37 @@ func NewWithNoColorBool(noColor bool) Formatter {
}

func New(colorMode ColorMode) Formatter {
	colorAliases := map[string]int{
		"black":   0,
		"red":     1,
		"green":   2,
		"yellow":  3,
		"blue":    4,
		"magenta": 5,
		"cyan":    6,
		"white":   7,
	}
	for colorAlias, n := range colorAliases {
		colorAliases[fmt.Sprintf("bright-%s", colorAlias)] = n + 8
	}

	getColor := func(color, defaultEscapeCode string) string {
		color = strings.ToUpper(strings.ReplaceAll(color, "-", "_"))
		envVar := fmt.Sprintf("GINKGO_CLI_COLOR_%s", color)
		envVarColor := os.Getenv(envVar)
		if envVarColor == "" {
			return defaultEscapeCode
		}
		if colorCode, ok := colorAliases[envVarColor]; ok {
			return fmt.Sprintf("\x1b[38;5;%dm", colorCode)
		}
		colorCode, err := strconv.Atoi(envVarColor)
		if err != nil || colorCode < 0 || colorCode > 255 {
			return defaultEscapeCode
		}
		return fmt.Sprintf("\x1b[38;5;%dm", colorCode)
	}

	f := Formatter{
		ColorMode: colorMode,
		colors: map[string]string{
@ -52,18 +89,18 @@ func New(colorMode ColorMode) Formatter {
			"bold":      "\x1b[1m",
			"underline": "\x1b[4m",

			"red":          "\x1b[38;5;9m",
			"orange":       "\x1b[38;5;214m",
			"coral":        "\x1b[38;5;204m",
			"magenta":      "\x1b[38;5;13m",
			"green":        "\x1b[38;5;10m",
			"dark-green":   "\x1b[38;5;28m",
			"yellow":       "\x1b[38;5;11m",
			"light-yellow": "\x1b[38;5;228m",
			"cyan":         "\x1b[38;5;14m",
			"gray":         "\x1b[38;5;243m",
			"light-gray":   "\x1b[38;5;246m",
			"blue":         "\x1b[38;5;12m",
			"red":          getColor("red", "\x1b[38;5;9m"),
			"orange":       getColor("orange", "\x1b[38;5;214m"),
			"coral":        getColor("coral", "\x1b[38;5;204m"),
			"magenta":      getColor("magenta", "\x1b[38;5;13m"),
			"green":        getColor("green", "\x1b[38;5;10m"),
			"dark-green":   getColor("dark-green", "\x1b[38;5;28m"),
			"yellow":       getColor("yellow", "\x1b[38;5;11m"),
			"light-yellow": getColor("light-yellow", "\x1b[38;5;228m"),
			"cyan":         getColor("cyan", "\x1b[38;5;14m"),
			"gray":         getColor("gray", "\x1b[38;5;243m"),
			"light-gray":   getColor("light-gray", "\x1b[38;5;246m"),
			"blue":         getColor("blue", "\x1b[38;5;12m"),
		},
	}
	colors := []string{}
@ -100,13 +137,13 @@ func (f Formatter) Fiw(indentation uint, maxWidth uint, format string, args ...i
			outLines = append(outLines, line)
			continue
		}
		outWords := []string{}
		length := uint(0)
		words := strings.Split(line, " ")
		for _, word := range words {
		outWords := []string{words[0]}
		length := uint(f.length(words[0]))
		for _, word := range words[1:] {
			wordLength := f.length(word)
			if length+wordLength <= maxWidth {
				length += wordLength
			if length+wordLength+1 <= maxWidth {
				length += wordLength + 1
				outWords = append(outWords, word)
				continue
			}
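The getColor hook above makes the CLI palette overridable through GINKGO_CLI_COLOR_* environment variables. A sketch of how a consumer might exercise it; note the variables must be set before New is called:

```go
os.Setenv("GINKGO_CLI_COLOR_RED", "bright-magenta") // alias form
os.Setenv("GINKGO_CLI_COLOR_GREEN", "28")           // numeric 256-color form
f := formatter.New(formatter.ColorModeTerminal)
fmt.Println(f.F("{{red}}this renders with the overridden color{{/}}"))
```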
45
vendor/github.com/onsi/ginkgo/v2/ginkgo_t_dsl.go
generated
vendored
Normal file
@ -0,0 +1,45 @@
package ginkgo

import "github.com/onsi/ginkgo/v2/internal/testingtproxy"

/*
GinkgoT() implements an interface analogous to *testing.T and can be used with
third-party libraries that accept *testing.T through an interface.

GinkgoT() takes an optional offset argument that can be used to get the
correct line number associated with the failure.

You can learn more here: https://onsi.github.io/ginkgo/#using-third-party-libraries
*/
func GinkgoT(optionalOffset ...int) GinkgoTInterface {
	offset := 3
	if len(optionalOffset) > 0 {
		offset = optionalOffset[0]
	}
	return testingtproxy.New(GinkgoWriter, Fail, Skip, DeferCleanup, CurrentSpecReport, offset)
}
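For example, a sketch of handing GinkgoT() to a mocking library (assumes github.com/golang/mock/gomock, whose controller only needs the Errorf/Fatalf subset; NewMockStore is a hypothetical generated mock):

```go
ctrl := gomock.NewController(GinkgoT())
defer ctrl.Finish()
store := NewMockStore(ctrl) // hypothetical generated mock
store.EXPECT().Get("key").Return("value", nil)
```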
/*
The interface returned by GinkgoT(). This covers most of the methods in the testing package's T.
*/
type GinkgoTInterface interface {
	Cleanup(func())
	Setenv(key, value string)
	Error(args ...interface{})
	Errorf(format string, args ...interface{})
	Fail()
	FailNow()
	Failed() bool
	Fatal(args ...interface{})
	Fatalf(format string, args ...interface{})
	Helper()
	Log(args ...interface{})
	Logf(format string, args ...interface{})
	Name() string
	Parallel()
	Skip(args ...interface{})
	SkipNow()
	Skipf(format string, args ...interface{})
	Skipped() bool
	TempDir() string
}
9
vendor/github.com/onsi/ginkgo/v2/internal/counter.go
generated
vendored
Normal file
@ -0,0 +1,9 @@
package internal

func MakeIncrementingIndexCounter() func() (int, error) {
	idx := -1
	return func() (int, error) {
		idx += 1
		return idx, nil
	}
}
@ -1,32 +1,44 @@
package failer
package internal

import (
	"fmt"
	"sync"

	"github.com/onsi/ginkgo/types"
	"github.com/onsi/ginkgo/v2/types"
)

type Failer struct {
	lock    *sync.Mutex
	failure types.SpecFailure
	failure types.Failure
	state   types.SpecState
}

func New() *Failer {
func NewFailer() *Failer {
	return &Failer{
		lock:  &sync.Mutex{},
		state: types.SpecStatePassed,
	}
}

func (f *Failer) GetState() types.SpecState {
	f.lock.Lock()
	defer f.lock.Unlock()
	return f.state
}

func (f *Failer) GetFailure() types.Failure {
	f.lock.Lock()
	defer f.lock.Unlock()
	return f.failure
}

func (f *Failer) Panic(location types.CodeLocation, forwardedPanic interface{}) {
	f.lock.Lock()
	defer f.lock.Unlock()

	if f.state == types.SpecStatePassed {
		f.state = types.SpecStatePanicked
		f.failure = types.SpecFailure{
		f.failure = types.Failure{
			Message:        "Test Panicked",
			Location:       location,
			ForwardedPanic: fmt.Sprintf("%v", forwardedPanic),
@ -34,59 +46,54 @@ func (f *Failer) Panic(location types.CodeLocation, forwardedPanic interface{})
	}
}

func (f *Failer) Timeout(location types.CodeLocation) {
	f.lock.Lock()
	defer f.lock.Unlock()

	if f.state == types.SpecStatePassed {
		f.state = types.SpecStateTimedOut
		f.failure = types.SpecFailure{
			Message:  "Timed out",
			Location: location,
		}
	}
}

func (f *Failer) Fail(message string, location types.CodeLocation) {
	f.lock.Lock()
	defer f.lock.Unlock()

	if f.state == types.SpecStatePassed {
		f.state = types.SpecStateFailed
		f.failure = types.SpecFailure{
		f.failure = types.Failure{
			Message:  message,
			Location: location,
		}
	}
}

func (f *Failer) Drain(componentType types.SpecComponentType, componentIndex int, componentCodeLocation types.CodeLocation) (types.SpecFailure, types.SpecState) {
	f.lock.Lock()
	defer f.lock.Unlock()

	failure := f.failure
	outcome := f.state
	if outcome != types.SpecStatePassed {
		failure.ComponentType = componentType
		failure.ComponentIndex = componentIndex
		failure.ComponentCodeLocation = componentCodeLocation
	}

	f.state = types.SpecStatePassed
	f.failure = types.SpecFailure{}

	return failure, outcome
}

func (f *Failer) Skip(message string, location types.CodeLocation) {
	f.lock.Lock()
	defer f.lock.Unlock()

	if f.state == types.SpecStatePassed {
		f.state = types.SpecStateSkipped
		f.failure = types.SpecFailure{
		f.failure = types.Failure{
			Message:  message,
			Location: location,
		}
	}
}

func (f *Failer) AbortSuite(message string, location types.CodeLocation) {
	f.lock.Lock()
	defer f.lock.Unlock()

	if f.state == types.SpecStatePassed {
		f.state = types.SpecStateAborted
		f.failure = types.Failure{
			Message:  message,
			Location: location,
		}
	}
}

func (f *Failer) Drain() (types.SpecState, types.Failure) {
	f.lock.Lock()
	defer f.lock.Unlock()

	failure := f.failure
	outcome := f.state

	f.state = types.SpecStatePassed
	f.failure = types.Failure{}

	return outcome, failure
}
125
vendor/github.com/onsi/ginkgo/v2/internal/focus.go
generated
vendored
Normal file
@ -0,0 +1,125 @@
package internal

import (
	"regexp"
	"strings"

	"github.com/onsi/ginkgo/v2/types"
)

/*
	If a container marked as focus has a descendant that is also marked as focus, Ginkgo's policy is to
	unmark the container's focus.  This gives developers a more intuitive experience when debugging specs.
	It is common to focus a container to just run a subset of specs, then identify the specific specs within the container to focus -
	this policy allows the developer to simply focus those specific specs and not need to go back and turn the focus off of the container:

	As a common example, consider:

	FDescribe("something to debug", function() {
		It("works", function() {...})
		It("works", function() {...})
		FIt("doesn't work", function() {...})
		It("works", function() {...})
	})

	here the developer's intent is to focus in on the `"doesn't work"` spec and not to run the adjacent specs in the focused `"something to debug"` container.
	The nested policy applied by this function enables this behavior.
*/
func ApplyNestedFocusPolicyToTree(tree *TreeNode) {
	var walkTree func(tree *TreeNode) bool
	walkTree = func(tree *TreeNode) bool {
		if tree.Node.MarkedPending {
			return false
		}
		hasFocusedDescendant := false
		for _, child := range tree.Children {
			childHasFocus := walkTree(child)
			hasFocusedDescendant = hasFocusedDescendant || childHasFocus
		}
		tree.Node.MarkedFocus = tree.Node.MarkedFocus && !hasFocusedDescendant
		return tree.Node.MarkedFocus || hasFocusedDescendant
	}

	walkTree(tree)
}

/*
	Ginkgo supports focussing specs using `FIt`, `FDescribe`, etc. - this is called "programmatic focus"
	It also supports focussing specs using regular expressions on the command line (`-focus=`, `-skip=`) that match against spec text
	and file filters (`-focus-files=`, `-skip-files=`) that match against code locations for nodes in specs.

	If any of the CLI flags are provided they take precedence.  The file filters run first followed by the regex filters.

	This function sets the `Skip` property on specs by applying Ginkgo's focus policy:
	- If there are no CLI arguments and no programmatic focus, do nothing.
	- If there are no CLI arguments but a spec somewhere has programmatic focus, skip any specs that have no programmatic focus.
	- If there are CLI arguments parse them and skip any specs that either don't match the focus filters or do match the skip filters.

	*Note:* specs with pending nodes are Skipped when created by NewSpec.
*/
func ApplyFocusToSpecs(specs Specs, description string, suiteLabels Labels, suiteConfig types.SuiteConfig) (Specs, bool) {
	focusString := strings.Join(suiteConfig.FocusStrings, "|")
	skipString := strings.Join(suiteConfig.SkipStrings, "|")

	hasFocusCLIFlags := focusString != "" || skipString != "" || len(suiteConfig.SkipFiles) > 0 || len(suiteConfig.FocusFiles) > 0 || suiteConfig.LabelFilter != ""

	type SkipCheck func(spec Spec) bool

	// by default, skip any specs marked pending
	skipChecks := []SkipCheck{func(spec Spec) bool { return spec.Nodes.HasNodeMarkedPending() }}
	hasProgrammaticFocus := false

	if !hasFocusCLIFlags {
		// check for programmatic focus
		for _, spec := range specs {
			if spec.Nodes.HasNodeMarkedFocus() && !spec.Nodes.HasNodeMarkedPending() {
				skipChecks = append(skipChecks, func(spec Spec) bool { return !spec.Nodes.HasNodeMarkedFocus() })
				hasProgrammaticFocus = true
				break
			}
		}
	}

	if suiteConfig.LabelFilter != "" {
		labelFilter, _ := types.ParseLabelFilter(suiteConfig.LabelFilter)
		skipChecks = append(skipChecks, func(spec Spec) bool {
			return !labelFilter(UnionOfLabels(suiteLabels, spec.Nodes.UnionOfLabels()))
		})
	}

	if len(suiteConfig.FocusFiles) > 0 {
		focusFilters, _ := types.ParseFileFilters(suiteConfig.FocusFiles)
		skipChecks = append(skipChecks, func(spec Spec) bool { return !focusFilters.Matches(spec.Nodes.CodeLocations()) })
	}

	if len(suiteConfig.SkipFiles) > 0 {
		skipFilters, _ := types.ParseFileFilters(suiteConfig.SkipFiles)
		skipChecks = append(skipChecks, func(spec Spec) bool { return skipFilters.Matches(spec.Nodes.CodeLocations()) })
	}

	if focusString != "" {
		// skip specs that don't match the focus string
		re := regexp.MustCompile(focusString)
		skipChecks = append(skipChecks, func(spec Spec) bool { return !re.MatchString(description + " " + spec.Text()) })
	}

	if skipString != "" {
		// skip specs that match the skip string
		re := regexp.MustCompile(skipString)
		skipChecks = append(skipChecks, func(spec Spec) bool { return re.MatchString(description + " " + spec.Text()) })
	}

	// skip specs if shouldSkip() is true.  note that we do nothing if shouldSkip() is false to avoid overwriting skip status established by the node's pending status
	processedSpecs := Specs{}
	for _, spec := range specs {
		for _, skipCheck := range skipChecks {
			if skipCheck(spec) {
				spec.Skip = true
				break
			}
		}
		processedSpecs = append(processedSpecs, spec)
	}

	return processedSpecs, hasProgrammaticFocus
}
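The nested-focus policy documented above is easy to check against a toy tree. A self-contained sketch of the same recursion (illustrative types, not this package's `TreeNode`):

```go
package main

import "fmt"

// node is a stand-in for a spec tree: a container's focus mark is dropped
// whenever any descendant is itself focused, mirroring
// ApplyNestedFocusPolicyToTree above.
type node struct {
	name     string
	focused  bool
	children []*node
}

func applyNestedFocus(n *node) bool {
	hasFocusedDescendant := false
	for _, c := range n.children {
		if applyNestedFocus(c) {
			hasFocusedDescendant = true
		}
	}
	n.focused = n.focused && !hasFocusedDescendant
	return n.focused || hasFocusedDescendant
}

func main() {
	// An FDescribe containing one FIt: the container loses its focus mark,
	// so only the focused leaf remains focused.
	root := &node{name: "something to debug", focused: true, children: []*node{
		{name: "works"},
		{name: "doesn't work", focused: true},
	}}
	applyNestedFocus(root)
	fmt.Println(root.focused)             // false - unmarked by the policy
	fmt.Println(root.children[1].focused) // true
}
```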
17 vendor/github.com/onsi/ginkgo/v2/internal/global/init.go generated vendored Normal file
@@ -0,0 +1,17 @@
package global

import (
	"github.com/onsi/ginkgo/v2/internal"
)

var Suite *internal.Suite
var Failer *internal.Failer

func init() {
	InitializeGlobals()
}

func InitializeGlobals() {
	Failer = internal.NewFailer()
	Suite = internal.NewSuite()
}
380 vendor/github.com/onsi/ginkgo/v2/internal/group.go generated vendored Normal file
@@ -0,0 +1,380 @@
package internal

import (
	"fmt"
	"time"

	"github.com/onsi/ginkgo/v2/types"
)

type runOncePair struct {
	//nodeId should only run once...
	nodeID   uint
	nodeType types.NodeType
	//...for specs in a hierarchy that includes this context
	containerID uint
}

func (pair runOncePair) isZero() bool {
	return pair.nodeID == 0
}

func runOncePairForNode(node Node, containerID uint) runOncePair {
	return runOncePair{
		nodeID:      node.ID,
		nodeType:    node.NodeType,
		containerID: containerID,
	}
}

type runOncePairs []runOncePair

func runOncePairsForSpec(spec Spec) runOncePairs {
	pairs := runOncePairs{}

	containers := spec.Nodes.WithType(types.NodeTypeContainer)
	for _, node := range spec.Nodes {
		if node.NodeType.Is(types.NodeTypeBeforeAll | types.NodeTypeAfterAll) {
			pairs = append(pairs, runOncePairForNode(node, containers.FirstWithNestingLevel(node.NestingLevel-1).ID))
		} else if node.NodeType.Is(types.NodeTypeBeforeEach|types.NodeTypeJustBeforeEach|types.NodeTypeAfterEach|types.NodeTypeJustAfterEach) && node.MarkedOncePerOrdered {
			passedIntoAnOrderedContainer := false
			firstOrderedContainerDeeperThanNode := containers.FirstSatisfying(func(container Node) bool {
				passedIntoAnOrderedContainer = passedIntoAnOrderedContainer || container.MarkedOrdered
				return container.NestingLevel >= node.NestingLevel && passedIntoAnOrderedContainer
			})
			if firstOrderedContainerDeeperThanNode.IsZero() {
				continue
			}
			pairs = append(pairs, runOncePairForNode(node, firstOrderedContainerDeeperThanNode.ID))
		}
	}

	return pairs
}

func (pairs runOncePairs) runOncePairFor(nodeID uint) runOncePair {
	for i := range pairs {
		if pairs[i].nodeID == nodeID {
			return pairs[i]
		}
	}
	return runOncePair{}
}

func (pairs runOncePairs) hasRunOncePair(pair runOncePair) bool {
	for i := range pairs {
		if pairs[i] == pair {
			return true
		}
	}
	return false
}

func (pairs runOncePairs) withType(nodeTypes types.NodeType) runOncePairs {
	count := 0
	for i := range pairs {
		if pairs[i].nodeType.Is(nodeTypes) {
			count++
		}
	}

	out, j := make(runOncePairs, count), 0
	for i := range pairs {
		if pairs[i].nodeType.Is(nodeTypes) {
			out[j] = pairs[i]
			j++
		}
	}
	return out
}

type group struct {
	suite          *Suite
	specs          Specs
	runOncePairs   map[uint]runOncePairs
	runOnceTracker map[runOncePair]types.SpecState

	succeeded              bool
	failedInARunOnceBefore bool
	continueOnFailure      bool
}

func newGroup(suite *Suite) *group {
	return &group{
		suite:                  suite,
		runOncePairs:           map[uint]runOncePairs{},
		runOnceTracker:         map[runOncePair]types.SpecState{},
		succeeded:              true,
		failedInARunOnceBefore: false,
		continueOnFailure:      false,
	}
}

func (g *group) initialReportForSpec(spec Spec) types.SpecReport {
	return types.SpecReport{
		ContainerHierarchyTexts:     spec.Nodes.WithType(types.NodeTypeContainer).Texts(),
		ContainerHierarchyLocations: spec.Nodes.WithType(types.NodeTypeContainer).CodeLocations(),
		ContainerHierarchyLabels:    spec.Nodes.WithType(types.NodeTypeContainer).Labels(),
		LeafNodeLocation:            spec.FirstNodeWithType(types.NodeTypeIt).CodeLocation,
		LeafNodeType:                types.NodeTypeIt,
		LeafNodeText:                spec.FirstNodeWithType(types.NodeTypeIt).Text,
		LeafNodeLabels:              []string(spec.FirstNodeWithType(types.NodeTypeIt).Labels),
		ParallelProcess:             g.suite.config.ParallelProcess,
		RunningInParallel:           g.suite.isRunningInParallel(),
		IsSerial:                    spec.Nodes.HasNodeMarkedSerial(),
		IsInOrderedContainer:        !spec.Nodes.FirstNodeMarkedOrdered().IsZero(),
		MaxFlakeAttempts:            spec.Nodes.GetMaxFlakeAttempts(),
		MaxMustPassRepeatedly:       spec.Nodes.GetMaxMustPassRepeatedly(),
	}
}

func (g *group) evaluateSkipStatus(spec Spec) (types.SpecState, types.Failure) {
	if spec.Nodes.HasNodeMarkedPending() {
		return types.SpecStatePending, types.Failure{}
	}
	if spec.Skip {
		return types.SpecStateSkipped, types.Failure{}
	}
	if g.suite.interruptHandler.Status().Interrupted() || g.suite.skipAll {
		return types.SpecStateSkipped, types.Failure{}
	}
	if !g.suite.deadline.IsZero() && g.suite.deadline.Before(time.Now()) {
		return types.SpecStateSkipped, types.Failure{}
	}
	if !g.succeeded && !g.continueOnFailure {
		return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
			"Spec skipped because an earlier spec in an ordered container failed")
	}
	if g.failedInARunOnceBefore && g.continueOnFailure {
		return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
			"Spec skipped because a BeforeAll node failed")
	}
	beforeOncePairs := g.runOncePairs[spec.SubjectID()].withType(types.NodeTypeBeforeAll | types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach)
	for _, pair := range beforeOncePairs {
		if g.runOnceTracker[pair].Is(types.SpecStateSkipped) {
			return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
				fmt.Sprintf("Spec skipped because Skip() was called in %s", pair.nodeType))
		}
	}
	if g.suite.config.DryRun {
		return types.SpecStatePassed, types.Failure{}
	}
	return g.suite.currentSpecReport.State, g.suite.currentSpecReport.Failure
}

func (g *group) isLastSpecWithPair(specID uint, pair runOncePair) bool {
	lastSpecID := uint(0)
	for idx := range g.specs {
		if g.specs[idx].Skip {
			continue
		}
		sID := g.specs[idx].SubjectID()
		if g.runOncePairs[sID].hasRunOncePair(pair) {
			lastSpecID = sID
		}
	}
	return lastSpecID == specID
}

func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) bool {
	failedInARunOnceBefore := false
	pairs := g.runOncePairs[spec.SubjectID()]

	nodes := spec.Nodes.WithType(types.NodeTypeBeforeAll)
	nodes = append(nodes, spec.Nodes.WithType(types.NodeTypeBeforeEach)...).SortedByAscendingNestingLevel()
	nodes = append(nodes, spec.Nodes.WithType(types.NodeTypeJustBeforeEach).SortedByAscendingNestingLevel()...)
	nodes = append(nodes, spec.Nodes.FirstNodeWithType(types.NodeTypeIt))
	terminatingNode, terminatingPair := Node{}, runOncePair{}

	deadline := time.Time{}
	if spec.SpecTimeout() > 0 {
		deadline = time.Now().Add(spec.SpecTimeout())
	}

	for _, node := range nodes {
		oncePair := pairs.runOncePairFor(node.ID)
		if !oncePair.isZero() && g.runOnceTracker[oncePair].Is(types.SpecStatePassed) {
			continue
		}
		g.suite.currentSpecReport.State, g.suite.currentSpecReport.Failure = g.suite.runNode(node, deadline, spec.Nodes.BestTextFor(node))
		g.suite.currentSpecReport.RunTime = time.Since(g.suite.currentSpecReport.StartTime)
		if !oncePair.isZero() {
			g.runOnceTracker[oncePair] = g.suite.currentSpecReport.State
		}
		if g.suite.currentSpecReport.State != types.SpecStatePassed {
			terminatingNode, terminatingPair = node, oncePair
			failedInARunOnceBefore = !terminatingPair.isZero()
			break
		}
	}

	afterNodeWasRun := map[uint]bool{}
	includeDeferCleanups := false
	for {
		nodes := spec.Nodes.WithType(types.NodeTypeAfterEach)
		nodes = append(nodes, spec.Nodes.WithType(types.NodeTypeAfterAll)...).SortedByDescendingNestingLevel()
		nodes = append(spec.Nodes.WithType(types.NodeTypeJustAfterEach).SortedByDescendingNestingLevel(), nodes...)
		if !terminatingNode.IsZero() {
			nodes = nodes.WithinNestingLevel(terminatingNode.NestingLevel)
		}
		if includeDeferCleanups {
			nodes = append(nodes, g.suite.cleanupNodes.WithType(types.NodeTypeCleanupAfterEach).Reverse()...)
			nodes = append(nodes, g.suite.cleanupNodes.WithType(types.NodeTypeCleanupAfterAll).Reverse()...)
		}
		nodes = nodes.Filter(func(node Node) bool {
			if afterNodeWasRun[node.ID] {
				//this node has already been run on this attempt, don't rerun it
				return false
			}
			var pair runOncePair
			switch node.NodeType {
			case types.NodeTypeCleanupAfterEach, types.NodeTypeCleanupAfterAll:
				// check if we were generated in an AfterNode that has already run
				if afterNodeWasRun[node.NodeIDWhereCleanupWasGenerated] {
					return true // we were, so we should definitely run this cleanup now
				}
				// looks like this cleanup node was generated by a before node or an It.
				// the run-once status of a cleanup node is governed by the run-once status of its generator
				pair = pairs.runOncePairFor(node.NodeIDWhereCleanupWasGenerated)
			default:
				pair = pairs.runOncePairFor(node.ID)
			}
			if pair.isZero() {
				// this node is not governed by any run-once policy, we should run it
				return true
			}
			// it's our last chance to run if we're the last spec for our oncePair
			isLastSpecWithPair := g.isLastSpecWithPair(spec.SubjectID(), pair)

			switch g.suite.currentSpecReport.State {
			case types.SpecStatePassed: //this attempt is passing...
				return isLastSpecWithPair //...we should run-once if this is our last chance
			case types.SpecStateSkipped: //the spec was skipped by the user...
				if isLastSpecWithPair {
					return true //...we're the last spec, so we should run the AfterNode
				}
				if !terminatingPair.isZero() && terminatingNode.NestingLevel == node.NestingLevel {
					return true //...or, a run-once node at our nesting level was skipped which means this is our last chance to run
				}
			case types.SpecStateFailed, types.SpecStatePanicked, types.SpecStateTimedout: // the spec has failed...
				if isFinalAttempt {
					if g.continueOnFailure {
						return isLastSpecWithPair || failedInARunOnceBefore //...we're configured to continue on failures - so we should only run if we're the last spec for this pair or if we failed in a runOnceBefore (which means we _are_ the last spec to run)
					} else {
						return true //...this was the last attempt and continueOnFailure is false therefore we are the last spec to run and so the AfterNode should run
					}
				}
				if !terminatingPair.isZero() { // ...and it failed in a run-once, which will be running again
					if node.NodeType.Is(types.NodeTypeCleanupAfterEach | types.NodeTypeCleanupAfterAll) {
						return terminatingNode.ID == node.NodeIDWhereCleanupWasGenerated // we should run this node if we're a clean-up generated by it
					} else {
						return terminatingNode.NestingLevel == node.NestingLevel // ...or if we're at the same nesting level
					}
				}
			case types.SpecStateInterrupted, types.SpecStateAborted: // ...we've been interrupted and/or aborted
				return true //...that means the test run is over and we should clean up the stack.  Run the AfterNode
			}
			return false
		})

		if len(nodes) == 0 && includeDeferCleanups {
			break
		}

		for _, node := range nodes {
			afterNodeWasRun[node.ID] = true
			state, failure := g.suite.runNode(node, deadline, spec.Nodes.BestTextFor(node))
			g.suite.currentSpecReport.RunTime = time.Since(g.suite.currentSpecReport.StartTime)
			if g.suite.currentSpecReport.State == types.SpecStatePassed || state == types.SpecStateAborted {
				g.suite.currentSpecReport.State = state
				g.suite.currentSpecReport.Failure = failure
			} else if state.Is(types.SpecStateFailureStates) {
				g.suite.currentSpecReport.AdditionalFailures = append(g.suite.currentSpecReport.AdditionalFailures, types.AdditionalFailure{State: state, Failure: failure})
			}
		}
		includeDeferCleanups = true
	}

	return failedInARunOnceBefore
}

func (g *group) run(specs Specs) {
	g.specs = specs
	g.continueOnFailure = specs[0].Nodes.FirstNodeMarkedOrdered().MarkedContinueOnFailure
	for _, spec := range g.specs {
		g.runOncePairs[spec.SubjectID()] = runOncePairsForSpec(spec)
	}

	for _, spec := range g.specs {
		g.suite.selectiveLock.Lock()
		g.suite.currentSpecReport = g.initialReportForSpec(spec)
		g.suite.selectiveLock.Unlock()

		g.suite.currentSpecReport.State, g.suite.currentSpecReport.Failure = g.evaluateSkipStatus(spec)
		g.suite.reporter.WillRun(g.suite.currentSpecReport)
		g.suite.reportEach(spec, types.NodeTypeReportBeforeEach)

		skip := g.suite.config.DryRun || g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates|types.SpecStateSkipped|types.SpecStatePending)

		g.suite.currentSpecReport.StartTime = time.Now()
		failedInARunOnceBefore := false
		if !skip {
			var maxAttempts = 1

			if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
				maxAttempts = max(1, spec.MustPassRepeatedly())
			} else if g.suite.config.FlakeAttempts > 0 {
				maxAttempts = g.suite.config.FlakeAttempts
				g.suite.currentSpecReport.MaxFlakeAttempts = maxAttempts
			} else if g.suite.currentSpecReport.MaxFlakeAttempts > 0 {
				maxAttempts = max(1, spec.FlakeAttempts())
			}

			for attempt := 0; attempt < maxAttempts; attempt++ {
				g.suite.currentSpecReport.NumAttempts = attempt + 1
				g.suite.writer.Truncate()
				g.suite.outputInterceptor.StartInterceptingOutput()
				if attempt > 0 {
					if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
						g.suite.handleSpecEvent(types.SpecEvent{SpecEventType: types.SpecEventSpecRepeat, Attempt: attempt})
					}
					if g.suite.currentSpecReport.MaxFlakeAttempts > 0 {
						g.suite.handleSpecEvent(types.SpecEvent{SpecEventType: types.SpecEventSpecRetry, Attempt: attempt})
					}
				}

				failedInARunOnceBefore = g.attemptSpec(attempt == maxAttempts-1, spec)

				g.suite.currentSpecReport.EndTime = time.Now()
				g.suite.currentSpecReport.RunTime = g.suite.currentSpecReport.EndTime.Sub(g.suite.currentSpecReport.StartTime)
				g.suite.currentSpecReport.CapturedGinkgoWriterOutput += string(g.suite.writer.Bytes())
				g.suite.currentSpecReport.CapturedStdOutErr += g.suite.outputInterceptor.StopInterceptingAndReturnOutput()

				if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
					if g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates | types.SpecStateSkipped) {
						break
					}
				}
				if g.suite.currentSpecReport.MaxFlakeAttempts > 0 {
					if g.suite.currentSpecReport.State.Is(types.SpecStatePassed | types.SpecStateSkipped | types.SpecStateAborted | types.SpecStateInterrupted) {
						break
					} else if attempt < maxAttempts-1 {
						af := types.AdditionalFailure{State: g.suite.currentSpecReport.State, Failure: g.suite.currentSpecReport.Failure}
						af.Failure.Message = fmt.Sprintf("Failure recorded during attempt %d:\n%s", attempt+1, af.Failure.Message)
						g.suite.currentSpecReport.AdditionalFailures = append(g.suite.currentSpecReport.AdditionalFailures, af)
					}
				}
			}
		}

		g.suite.reportEach(spec, types.NodeTypeReportAfterEach)
		g.suite.processCurrentSpecReport()
		if g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
			g.succeeded = false
			g.failedInARunOnceBefore = g.failedInARunOnceBefore || failedInARunOnceBefore
		}
		g.suite.selectiveLock.Lock()
		g.suite.currentSpecReport = types.SpecReport{}
		g.suite.selectiveLock.Unlock()
	}
}
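The attempt loop in `run` treats the two retry decorators asymmetrically: `FlakeAttempts` stops at the first pass, while `MustPassRepeatedly` stops at the first failure. A self-contained sketch of that control flow (illustrative, not the Ginkgo API):

```go
package main

import "fmt"

// runWithAttempts mirrors the retry logic above: a flaky spec only needs
// one passing attempt, a must-pass-repeatedly spec needs every attempt to pass.
func runWithAttempts(maxAttempts int, mustPassRepeatedly bool, attempt func(int) bool) bool {
	passed := false
	for i := 0; i < maxAttempts; i++ {
		passed = attempt(i)
		if mustPassRepeatedly && !passed {
			return false // one failure sinks a MustPassRepeatedly spec
		}
		if !mustPassRepeatedly && passed {
			return true // a flaky spec only needs one passing attempt
		}
	}
	return passed
}

func main() {
	flaky := func(i int) bool { return i == 2 } // passes on the third try
	fmt.Println(runWithAttempts(3, false, flaky)) // true under FlakeAttempts(3)
	fmt.Println(runWithAttempts(3, true, flaky))  // false under MustPassRepeatedly(3)
}
```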
162 vendor/github.com/onsi/ginkgo/v2/internal/interrupt_handler/interrupt_handler.go generated vendored Normal file
@@ -0,0 +1,162 @@
package interrupt_handler

import (
	"os"
	"os/signal"
	"sync"
	"syscall"
	"time"

	"github.com/onsi/ginkgo/v2/internal/parallel_support"
)

const ABORT_POLLING_INTERVAL = 500 * time.Millisecond

type InterruptCause uint

const (
	InterruptCauseInvalid InterruptCause = iota
	InterruptCauseSignal
	InterruptCauseAbortByOtherProcess
)

type InterruptLevel uint

const (
	InterruptLevelUninterrupted InterruptLevel = iota
	InterruptLevelCleanupAndReport
	InterruptLevelReportOnly
	InterruptLevelBailOut
)

func (ic InterruptCause) String() string {
	switch ic {
	case InterruptCauseSignal:
		return "Interrupted by User"
	case InterruptCauseAbortByOtherProcess:
		return "Interrupted by Other Ginkgo Process"
	}
	return "INVALID_INTERRUPT_CAUSE"
}

type InterruptStatus struct {
	Channel chan interface{}
	Level   InterruptLevel
	Cause   InterruptCause
}

func (s InterruptStatus) Interrupted() bool {
	return s.Level != InterruptLevelUninterrupted
}

func (s InterruptStatus) Message() string {
	return s.Cause.String()
}

func (s InterruptStatus) ShouldIncludeProgressReport() bool {
	return s.Cause != InterruptCauseAbortByOtherProcess
}

type InterruptHandlerInterface interface {
	Status() InterruptStatus
}

type InterruptHandler struct {
	c       chan interface{}
	lock    *sync.Mutex
	level   InterruptLevel
	cause   InterruptCause
	client  parallel_support.Client
	stop    chan interface{}
	signals []os.Signal
}

func NewInterruptHandler(client parallel_support.Client, signals ...os.Signal) *InterruptHandler {
	if len(signals) == 0 {
		signals = []os.Signal{os.Interrupt, syscall.SIGTERM}
	}
	handler := &InterruptHandler{
		c:       make(chan interface{}),
		lock:    &sync.Mutex{},
		stop:    make(chan interface{}),
		client:  client,
		signals: signals,
	}
	handler.registerForInterrupts()
	return handler
}

func (handler *InterruptHandler) Stop() {
	close(handler.stop)
}

func (handler *InterruptHandler) registerForInterrupts() {
	// os signal handling
	signalChannel := make(chan os.Signal, 1)
	signal.Notify(signalChannel, handler.signals...)

	// cross-process abort handling
	var abortChannel chan interface{}
	if handler.client != nil {
		abortChannel = make(chan interface{})
		go func() {
			pollTicker := time.NewTicker(ABORT_POLLING_INTERVAL)
			for {
				select {
				case <-pollTicker.C:
					if handler.client.ShouldAbort() {
						close(abortChannel)
						pollTicker.Stop()
						return
					}
				case <-handler.stop:
					pollTicker.Stop()
					return
				}
			}
		}()
	}

	go func(abortChannel chan interface{}) {
		var interruptCause InterruptCause
		for {
			select {
			case <-signalChannel:
				interruptCause = InterruptCauseSignal
			case <-abortChannel:
				interruptCause = InterruptCauseAbortByOtherProcess
			case <-handler.stop:
				signal.Stop(signalChannel)
				return
			}
			abortChannel = nil

			handler.lock.Lock()
			oldLevel := handler.level
			handler.cause = interruptCause
			if handler.level == InterruptLevelUninterrupted {
				handler.level = InterruptLevelCleanupAndReport
			} else if handler.level == InterruptLevelCleanupAndReport {
				handler.level = InterruptLevelReportOnly
			} else if handler.level == InterruptLevelReportOnly {
				handler.level = InterruptLevelBailOut
			}
			if handler.level != oldLevel {
				close(handler.c)
				handler.c = make(chan interface{})
			}
			handler.lock.Unlock()
		}
	}(abortChannel)
}

func (handler *InterruptHandler) Status() InterruptStatus {
	handler.lock.Lock()
	defer handler.lock.Unlock()

	return InterruptStatus{
		Level:   handler.level,
		Channel: handler.c,
		Cause:   handler.cause,
	}
}
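The handler above broadcasts each interrupt escalation by closing its current channel and swapping in a fresh one, so every goroutine blocked on `Status().Channel` wakes at once. A self-contained sketch of that close-and-replace idiom (illustrative, not the Ginkgo API):

```go
package main

import (
	"fmt"
	"sync"
)

// escalator mimics the handler's broadcast trick: each escalation closes
// the current channel (waking all current waiters) and installs a fresh
// channel for listeners at the next level.
type escalator struct {
	lock  sync.Mutex
	level int
	c     chan struct{}
}

func (e *escalator) escalate() {
	e.lock.Lock()
	defer e.lock.Unlock()
	if e.level < 3 { // cleanup-and-report -> report-only -> bail-out
		e.level++
		close(e.c)                // broadcast to everyone waiting on this channel
		e.c = make(chan struct{}) // the next interrupt gets a fresh channel
	}
}

func main() {
	e := &escalator{c: make(chan struct{})}
	done := e.c // snapshot the current channel before escalation
	go e.escalate()
	<-done // unblocked by the close
	fmt.Println("interrupt level:", e.level)
}
```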
15 vendor/github.com/onsi/ginkgo/v2/internal/interrupt_handler/sigquit_swallower_unix.go generated vendored Normal file
@@ -0,0 +1,15 @@
//go:build freebsd || openbsd || netbsd || dragonfly || darwin || linux || solaris
// +build freebsd openbsd netbsd dragonfly darwin linux solaris

package interrupt_handler

import (
	"os"
	"os/signal"
	"syscall"
)

func SwallowSigQuit() {
	c := make(chan os.Signal, 1024)
	signal.Notify(c, syscall.SIGQUIT)
}
8 vendor/github.com/onsi/ginkgo/v2/internal/interrupt_handler/sigquit_swallower_windows.go generated vendored Normal file
@@ -0,0 +1,8 @@
//go:build windows
// +build windows

package interrupt_handler

func SwallowSigQuit() {
	//noop
}
909 vendor/github.com/onsi/ginkgo/v2/internal/node.go generated vendored Normal file
@@ -0,0 +1,909 @@
package internal

import (
	"context"
	"fmt"
	"reflect"
	"sort"
	"time"

	"sync"

	"github.com/onsi/ginkgo/v2/types"
)

var _global_node_id_counter = uint(0)
var _global_id_mutex = &sync.Mutex{}

func UniqueNodeID() uint {
	//There's a race in the internal integration tests if we don't make
	//accessing _global_node_id_counter safe across goroutines.
	_global_id_mutex.Lock()
	defer _global_id_mutex.Unlock()
	_global_node_id_counter += 1
	return _global_node_id_counter
}

type Node struct {
	ID       uint
	NodeType types.NodeType

	Text         string
	Body         func(SpecContext)
	CodeLocation types.CodeLocation
	NestingLevel int
	HasContext   bool

	SynchronizedBeforeSuiteProc1Body              func(SpecContext) []byte
	SynchronizedBeforeSuiteProc1BodyHasContext    bool
	SynchronizedBeforeSuiteAllProcsBody           func(SpecContext, []byte)
	SynchronizedBeforeSuiteAllProcsBodyHasContext bool

	SynchronizedAfterSuiteAllProcsBody           func(SpecContext)
	SynchronizedAfterSuiteAllProcsBodyHasContext bool
	SynchronizedAfterSuiteProc1Body              func(SpecContext)
	SynchronizedAfterSuiteProc1BodyHasContext    bool

	ReportEachBody  func(types.SpecReport)
	ReportSuiteBody func(types.Report)

	MarkedFocus             bool
	MarkedPending           bool
	MarkedSerial            bool
	MarkedOrdered           bool
	MarkedContinueOnFailure bool
	MarkedOncePerOrdered    bool
	FlakeAttempts           int
	MustPassRepeatedly      int
	Labels                  Labels
	PollProgressAfter       time.Duration
	PollProgressInterval    time.Duration
	NodeTimeout             time.Duration
	SpecTimeout             time.Duration
	GracePeriod             time.Duration

	NodeIDWhereCleanupWasGenerated uint
}

// Decoration Types
type focusType bool
type pendingType bool
type serialType bool
type orderedType bool
type continueOnFailureType bool
type honorsOrderedType bool
type suppressProgressReporting bool

const Focus = focusType(true)
const Pending = pendingType(true)
const Serial = serialType(true)
const Ordered = orderedType(true)
const ContinueOnFailure = continueOnFailureType(true)
const OncePerOrdered = honorsOrderedType(true)
const SuppressProgressReporting = suppressProgressReporting(true)

type FlakeAttempts uint
type MustPassRepeatedly uint
type Offset uint
type Done chan<- interface{} // Deprecated Done Channel for asynchronous testing
type Labels []string
type PollProgressInterval time.Duration
type PollProgressAfter time.Duration
type NodeTimeout time.Duration
type SpecTimeout time.Duration
type GracePeriod time.Duration
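// Illustrative usage note (not part of the original file): in a suite these
// decorators are passed straight through the public DSL, e.g.
//
//	It("retries a flaky endpoint", FlakeAttempts(3), Label("network"), func() { ... })
//
// Each decorator is a distinctly-typed value, which is what lets
// isDecoration/PartitionDecorations below recognize them via reflect.Type.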
func UnionOfLabels(labels ...Labels) Labels {
	out := Labels{}
	seen := map[string]bool{}
	for _, labelSet := range labels {
		for _, label := range labelSet {
			if !seen[label] {
				seen[label] = true
				out = append(out, label)
			}
		}
	}
	return out
}

func PartitionDecorations(args ...interface{}) ([]interface{}, []interface{}) {
	decorations := []interface{}{}
	remainingArgs := []interface{}{}
	for _, arg := range args {
		if isDecoration(arg) {
			decorations = append(decorations, arg)
		} else {
			remainingArgs = append(remainingArgs, arg)
		}
	}
	return decorations, remainingArgs
}

func isDecoration(arg interface{}) bool {
	switch t := reflect.TypeOf(arg); {
	case t == nil:
		return false
	case t == reflect.TypeOf(Offset(0)):
		return true
	case t == reflect.TypeOf(types.CodeLocation{}):
		return true
	case t == reflect.TypeOf(Focus):
		return true
	case t == reflect.TypeOf(Pending):
		return true
	case t == reflect.TypeOf(Serial):
		return true
	case t == reflect.TypeOf(Ordered):
		return true
	case t == reflect.TypeOf(ContinueOnFailure):
		return true
	case t == reflect.TypeOf(OncePerOrdered):
		return true
	case t == reflect.TypeOf(SuppressProgressReporting):
		return true
	case t == reflect.TypeOf(FlakeAttempts(0)):
		return true
	case t == reflect.TypeOf(MustPassRepeatedly(0)):
		return true
	case t == reflect.TypeOf(Labels{}):
		return true
	case t == reflect.TypeOf(PollProgressInterval(0)):
		return true
	case t == reflect.TypeOf(PollProgressAfter(0)):
		return true
	case t == reflect.TypeOf(NodeTimeout(0)):
		return true
	case t == reflect.TypeOf(SpecTimeout(0)):
		return true
	case t == reflect.TypeOf(GracePeriod(0)):
		return true
	case t.Kind() == reflect.Slice && isSliceOfDecorations(arg):
		return true
	default:
		return false
	}
}

func isSliceOfDecorations(slice interface{}) bool {
	vSlice := reflect.ValueOf(slice)
	if vSlice.Len() == 0 {
		return false
	}
	for i := 0; i < vSlice.Len(); i++ {
		if !isDecoration(vSlice.Index(i).Interface()) {
			return false
		}
	}
	return true
}

var contextType = reflect.TypeOf(new(context.Context)).Elem()
var specContextType = reflect.TypeOf(new(SpecContext)).Elem()

func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeType, text string, args ...interface{}) (Node, []error) {
	baseOffset := 2
	node := Node{
		ID:                   UniqueNodeID(),
		NodeType:             nodeType,
		Text:                 text,
		Labels:               Labels{},
		CodeLocation:         types.NewCodeLocation(baseOffset),
		NestingLevel:         -1,
		PollProgressAfter:    -1,
		PollProgressInterval: -1,
		GracePeriod:          -1,
	}

	errors := []error{}
	appendError := func(err error) {
		if err != nil {
			errors = append(errors, err)
		}
	}

	args = unrollInterfaceSlice(args)

	remainingArgs := []interface{}{}
	//First get the CodeLocation up-to-date
	for _, arg := range args {
		switch v := arg.(type) {
		case Offset:
			node.CodeLocation = types.NewCodeLocation(baseOffset + int(v))
		case types.CodeLocation:
			node.CodeLocation = v
		default:
			remainingArgs = append(remainingArgs, arg)
		}
	}

	labelsSeen := map[string]bool{}
	trackedFunctionError := false
	args = remainingArgs
	remainingArgs = []interface{}{}
	//now process the rest of the args
	for _, arg := range args {
		switch t := reflect.TypeOf(arg); {
		case t == reflect.TypeOf(float64(0)):
			break //ignore deprecated timeouts
		case t == reflect.TypeOf(Focus):
			node.MarkedFocus = bool(arg.(focusType))
			if !nodeType.Is(types.NodeTypesForContainerAndIt) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Focus"))
			}
		case t == reflect.TypeOf(Pending):
			node.MarkedPending = bool(arg.(pendingType))
			if !nodeType.Is(types.NodeTypesForContainerAndIt) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Pending"))
			}
		case t == reflect.TypeOf(Serial):
			node.MarkedSerial = bool(arg.(serialType))
			if !nodeType.Is(types.NodeTypesForContainerAndIt) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Serial"))
			}
		case t == reflect.TypeOf(Ordered):
			node.MarkedOrdered = bool(arg.(orderedType))
			if !nodeType.Is(types.NodeTypeContainer) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Ordered"))
			}
		case t == reflect.TypeOf(ContinueOnFailure):
			node.MarkedContinueOnFailure = bool(arg.(continueOnFailureType))
			if !nodeType.Is(types.NodeTypeContainer) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "ContinueOnFailure"))
			}
		case t == reflect.TypeOf(OncePerOrdered):
			node.MarkedOncePerOrdered = bool(arg.(honorsOrderedType))
			if !nodeType.Is(types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach | types.NodeTypeAfterEach | types.NodeTypeJustAfterEach) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "OncePerOrdered"))
			}
		case t == reflect.TypeOf(SuppressProgressReporting):
			deprecationTracker.TrackDeprecation(types.Deprecations.SuppressProgressReporting())
		case t == reflect.TypeOf(FlakeAttempts(0)):
			node.FlakeAttempts = int(arg.(FlakeAttempts))
			if !nodeType.Is(types.NodeTypesForContainerAndIt) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "FlakeAttempts"))
			}
		case t == reflect.TypeOf(MustPassRepeatedly(0)):
			node.MustPassRepeatedly = int(arg.(MustPassRepeatedly))
			if !nodeType.Is(types.NodeTypesForContainerAndIt) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "MustPassRepeatedly"))
			}
		case t == reflect.TypeOf(PollProgressAfter(0)):
			node.PollProgressAfter = time.Duration(arg.(PollProgressAfter))
			if nodeType.Is(types.NodeTypeContainer) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "PollProgressAfter"))
			}
		case t == reflect.TypeOf(PollProgressInterval(0)):
			node.PollProgressInterval = time.Duration(arg.(PollProgressInterval))
			if nodeType.Is(types.NodeTypeContainer) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "PollProgressInterval"))
			}
		case t == reflect.TypeOf(NodeTimeout(0)):
			node.NodeTimeout = time.Duration(arg.(NodeTimeout))
			if nodeType.Is(types.NodeTypeContainer) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "NodeTimeout"))
			}
		case t == reflect.TypeOf(SpecTimeout(0)):
			node.SpecTimeout = time.Duration(arg.(SpecTimeout))
			if !nodeType.Is(types.NodeTypeIt) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "SpecTimeout"))
			}
		case t == reflect.TypeOf(GracePeriod(0)):
			node.GracePeriod = time.Duration(arg.(GracePeriod))
			if nodeType.Is(types.NodeTypeContainer) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "GracePeriod"))
			}
		case t == reflect.TypeOf(Labels{}):
			if !nodeType.Is(types.NodeTypesForContainerAndIt) {
				appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Label"))
			}
			for _, label := range arg.(Labels) {
				if !labelsSeen[label] {
					labelsSeen[label] = true
					label, err := types.ValidateAndCleanupLabel(label, node.CodeLocation)
					node.Labels = append(node.Labels, label)
					appendError(err)
				}
			}
		case t.Kind() == reflect.Func:
			if nodeType.Is(types.NodeTypeContainer) {
				if node.Body != nil {
					appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
				if t.NumOut() > 0 || t.NumIn() > 0 {
					appendError(types.GinkgoErrors.InvalidBodyTypeForContainer(t, node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
				body := arg.(func())
				node.Body = func(SpecContext) { body() }
			} else if nodeType.Is(types.NodeTypeReportBeforeEach | types.NodeTypeReportAfterEach) {
				if node.ReportEachBody == nil {
					node.ReportEachBody = arg.(func(types.SpecReport))
				} else {
					appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
			} else if nodeType.Is(types.NodeTypeReportBeforeSuite | types.NodeTypeReportAfterSuite) {
				if node.ReportSuiteBody == nil {
					node.ReportSuiteBody = arg.(func(types.Report))
				} else {
					appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
			} else if nodeType.Is(types.NodeTypeSynchronizedBeforeSuite) {
				if node.SynchronizedBeforeSuiteProc1Body != nil && node.SynchronizedBeforeSuiteAllProcsBody != nil {
					appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
				if node.SynchronizedBeforeSuiteProc1Body == nil {
					body, hasContext := extractSynchronizedBeforeSuiteProc1Body(arg)
					if body == nil {
						appendError(types.GinkgoErrors.InvalidBodyTypeForSynchronizedBeforeSuiteProc1(t, node.CodeLocation))
						trackedFunctionError = true
					}
					node.SynchronizedBeforeSuiteProc1Body, node.SynchronizedBeforeSuiteProc1BodyHasContext = body, hasContext
				} else if node.SynchronizedBeforeSuiteAllProcsBody == nil {
					body, hasContext := extractSynchronizedBeforeSuiteAllProcsBody(arg)
					if body == nil {
						appendError(types.GinkgoErrors.InvalidBodyTypeForSynchronizedBeforeSuiteAllProcs(t, node.CodeLocation))
						trackedFunctionError = true
					}
					node.SynchronizedBeforeSuiteAllProcsBody, node.SynchronizedBeforeSuiteAllProcsBodyHasContext = body, hasContext
				}
			} else if nodeType.Is(types.NodeTypeSynchronizedAfterSuite) {
				if node.SynchronizedAfterSuiteAllProcsBody != nil && node.SynchronizedAfterSuiteProc1Body != nil {
					appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
				body, hasContext := extractBodyFunction(deprecationTracker, node.CodeLocation, arg)
				if body == nil {
					appendError(types.GinkgoErrors.InvalidBodyType(t, node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
				if node.SynchronizedAfterSuiteAllProcsBody == nil {
					node.SynchronizedAfterSuiteAllProcsBody, node.SynchronizedAfterSuiteAllProcsBodyHasContext = body, hasContext
				} else if node.SynchronizedAfterSuiteProc1Body == nil {
					node.SynchronizedAfterSuiteProc1Body, node.SynchronizedAfterSuiteProc1BodyHasContext = body, hasContext
				}
			} else {
				if node.Body != nil {
					appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
				node.Body, node.HasContext = extractBodyFunction(deprecationTracker, node.CodeLocation, arg)
				if node.Body == nil {
					appendError(types.GinkgoErrors.InvalidBodyType(t, node.CodeLocation, nodeType))
					trackedFunctionError = true
					break
				}
			}
		default:
			remainingArgs = append(remainingArgs, arg)
		}
	}

	//validations
	if node.MarkedPending && node.MarkedFocus {
		appendError(types.GinkgoErrors.InvalidDeclarationOfFocusedAndPending(node.CodeLocation, nodeType))
	}

	if node.MarkedContinueOnFailure && !node.MarkedOrdered {
		appendError(types.GinkgoErrors.InvalidContinueOnFailureDecoration(node.CodeLocation))
	}

	hasContext := node.HasContext || node.SynchronizedAfterSuiteProc1BodyHasContext || node.SynchronizedAfterSuiteAllProcsBodyHasContext || node.SynchronizedBeforeSuiteProc1BodyHasContext || node.SynchronizedBeforeSuiteAllProcsBodyHasContext

	if !hasContext && (node.NodeTimeout > 0 || node.SpecTimeout > 0 || node.GracePeriod > 0) && len(errors) == 0 {
		appendError(types.GinkgoErrors.InvalidTimeoutOrGracePeriodForNonContextNode(node.CodeLocation, nodeType))
	}

	if !node.NodeType.Is(types.NodeTypeReportBeforeEach|types.NodeTypeReportAfterEach|types.NodeTypeSynchronizedBeforeSuite|types.NodeTypeSynchronizedAfterSuite|types.NodeTypeReportBeforeSuite|types.NodeTypeReportAfterSuite) && node.Body == nil && !node.MarkedPending && !trackedFunctionError {
		appendError(types.GinkgoErrors.MissingBodyFunction(node.CodeLocation, nodeType))
	}

	if node.NodeType.Is(types.NodeTypeSynchronizedBeforeSuite) && !trackedFunctionError && (node.SynchronizedBeforeSuiteProc1Body == nil || node.SynchronizedBeforeSuiteAllProcsBody == nil) {
		appendError(types.GinkgoErrors.MissingBodyFunction(node.CodeLocation, nodeType))
	}

	if node.NodeType.Is(types.NodeTypeSynchronizedAfterSuite) && !trackedFunctionError && (node.SynchronizedAfterSuiteProc1Body == nil || node.SynchronizedAfterSuiteAllProcsBody == nil) {
		appendError(types.GinkgoErrors.MissingBodyFunction(node.CodeLocation, nodeType))
	}

	for _, arg := range remainingArgs {
		appendError(types.GinkgoErrors.UnknownDecorator(node.CodeLocation, nodeType, arg))
	}

	if node.FlakeAttempts > 0 && node.MustPassRepeatedly > 0 {
		appendError(types.GinkgoErrors.InvalidDeclarationOfFlakeAttemptsAndMustPassRepeatedly(node.CodeLocation, nodeType))
	}

	if len(errors) > 0 {
		return Node{}, errors
	}

	return node, errors
}

var doneType = reflect.TypeOf(make(Done))

func extractBodyFunction(deprecationTracker *types.DeprecationTracker, cl types.CodeLocation, arg interface{}) (func(SpecContext), bool) {
	t := reflect.TypeOf(arg)
	if t.NumOut() > 0 || t.NumIn() > 1 {
		return nil, false
	}
	if t.NumIn() == 1 {
		if t.In(0) == doneType {
			deprecationTracker.TrackDeprecation(types.Deprecations.Async(), cl)
			deprecatedAsyncBody := arg.(func(Done))
			return func(SpecContext) { deprecatedAsyncBody(make(Done)) }, false
		} else if t.In(0).Implements(specContextType) {
			return arg.(func(SpecContext)), true
		} else if t.In(0).Implements(contextType) {
			body := arg.(func(context.Context))
			return func(c SpecContext) { body(c) }, true
		}

		return nil, false
	}

	body := arg.(func())
	return func(SpecContext) { body() }, false
}

var byteType = reflect.TypeOf([]byte{})

func extractSynchronizedBeforeSuiteProc1Body(arg interface{}) (func(SpecContext) []byte, bool) {
	t := reflect.TypeOf(arg)
	v := reflect.ValueOf(arg)

	if t.NumOut() > 1 || t.NumIn() > 1 {
		return nil, false
	} else if t.NumOut() == 1 && t.Out(0) != byteType {
		return nil, false
	} else if t.NumIn() == 1 && !t.In(0).Implements(contextType) {
		return nil, false
	}
	hasContext := t.NumIn() == 1

	return func(c SpecContext) []byte {
		var out []reflect.Value
		if hasContext {
			out = v.Call([]reflect.Value{reflect.ValueOf(c)})
		} else {
			out = v.Call([]reflect.Value{})
		}
		if len(out) == 1 {
			return (out[0].Interface()).([]byte)
		} else {
			return []byte{}
		}
	}, hasContext
}

func extractSynchronizedBeforeSuiteAllProcsBody(arg interface{}) (func(SpecContext, []byte), bool) {
	t := reflect.TypeOf(arg)
	v := reflect.ValueOf(arg)
	hasContext, hasByte := false, false

	if t.NumOut() > 0 || t.NumIn() > 2 {
		return nil, false
	} else if t.NumIn() == 2 && t.In(0).Implements(contextType) && t.In(1) == byteType {
		hasContext, hasByte = true, true
	} else if t.NumIn() == 1 && t.In(0).Implements(contextType) {
		hasContext = true
	} else if t.NumIn() == 1 && t.In(0) == byteType {
		hasByte = true
	} else if t.NumIn() != 0 {
		return nil, false
	}

	return func(c SpecContext, b []byte) {
		in := []reflect.Value{}
		if hasContext {
			in = append(in, reflect.ValueOf(c))
		}
		if hasByte {
			in = append(in, reflect.ValueOf(b))
		}
		v.Call(in)
	}, hasContext
}

var errInterface = reflect.TypeOf((*error)(nil)).Elem()

func NewCleanupNode(deprecationTracker *types.DeprecationTracker, fail func(string, types.CodeLocation), args ...interface{}) (Node, []error) {
	decorations, remainingArgs := PartitionDecorations(args...)
	baseOffset := 2
	cl := types.NewCodeLocation(baseOffset)
	finalArgs := []interface{}{}
	for _, arg := range decorations {
		switch t := reflect.TypeOf(arg); {
		case t == reflect.TypeOf(Offset(0)):
			cl = types.NewCodeLocation(baseOffset + int(arg.(Offset)))
		case t == reflect.TypeOf(types.CodeLocation{}):
			cl = arg.(types.CodeLocation)
		default:
			finalArgs = append(finalArgs, arg)
		}
	}
	finalArgs = append(finalArgs, cl)

	if len(remainingArgs) == 0 {
		return Node{}, []error{types.GinkgoErrors.DeferCleanupInvalidFunction(cl)}
	}

	callback := reflect.ValueOf(remainingArgs[0])
	if !(callback.Kind() == reflect.Func) {
		return Node{}, []error{types.GinkgoErrors.DeferCleanupInvalidFunction(cl)}
	}

	callArgs := []reflect.Value{}
	for _, arg := range remainingArgs[1:] {
		callArgs = append(callArgs, reflect.ValueOf(arg))
	}

	hasContext := false
	t := callback.Type()
	if t.NumIn() > 0 {
		if t.In(0).Implements(specContextType) {
			hasContext = true
		} else if t.In(0).Implements(contextType) && (len(callArgs) == 0 || !callArgs[0].Type().Implements(contextType)) {
			hasContext = true
		}
	}

	handleFailure := func(out []reflect.Value) {
		if len(out) == 0 {
			return
		}
		last := out[len(out)-1]
		if last.Type().Implements(errInterface) && !last.IsNil() {
			fail(fmt.Sprintf("DeferCleanup callback returned error: %v", last), cl)
		}
	}

	if hasContext {
		finalArgs = append(finalArgs, func(c SpecContext) {
			out := callback.Call(append([]reflect.Value{reflect.ValueOf(c)}, callArgs...))
			handleFailure(out)
		})
	} else {
		finalArgs = append(finalArgs, func() {
			out := callback.Call(callArgs)
			handleFailure(out)
		})
	}

	return NewNode(deprecationTracker, types.NodeTypeCleanupInvalid, "", finalArgs...)
}

func (n Node) IsZero() bool {
	return n.ID == 0
}

/* Nodes */
type Nodes []Node

func (n Nodes) CopyAppend(nodes ...Node) Nodes {
	numN := len(n)
	out := make(Nodes, numN+len(nodes))
	for i, node := range n {
		out[i] = node
	}
	for j, node := range nodes {
		out[numN+j] = node
	}
	return out
}

func (n Nodes) SplitAround(pivot Node) (Nodes, Nodes) {
	pivotIdx := len(n)
	for i := range n {
		if n[i].ID == pivot.ID {
			pivotIdx = i
			break
		}
	}
	left := n[:pivotIdx]
	right := Nodes{}
	if pivotIdx+1 < len(n) {
		right = n[pivotIdx+1:]
	}

	return left, right
}

func (n Nodes) FirstNodeWithType(nodeTypes types.NodeType) Node {
	for i := range n {
		if n[i].NodeType.Is(nodeTypes) {
			return n[i]
		}
	}
	return Node{}
}

func (n Nodes) WithType(nodeTypes types.NodeType) Nodes {
	count := 0
	for i := range n {
		if n[i].NodeType.Is(nodeTypes) {
			count++
		}
	}

	out, j := make(Nodes, count), 0
	for i := range n {
		if n[i].NodeType.Is(nodeTypes) {
			out[j] = n[i]
			j++
		}
	}
	return out
}

func (n Nodes) WithoutType(nodeTypes types.NodeType) Nodes {
	count := 0
	for i := range n {
		if !n[i].NodeType.Is(nodeTypes) {
			count++
		}
	}

	out, j := make(Nodes, count), 0
	for i := range n {
		if !n[i].NodeType.Is(nodeTypes) {
			out[j] = n[i]
			j++
		}
	}
	return out
}

func (n Nodes) WithoutNode(nodeToExclude Node) Nodes {
	idxToExclude := len(n)
	for i := range n {
		if n[i].ID == nodeToExclude.ID {
			idxToExclude = i
			break
		}
	}
	if idxToExclude == len(n) {
		return n
	}
	out, j := make(Nodes, len(n)-1), 0
	for i := range n {
		if i == idxToExclude {
			continue
		}
		out[j] = n[i]
		j++
	}
	return out
}

func (n Nodes) Filter(filter func(Node) bool) Nodes {
	trufa, count := make([]bool, len(n)), 0
	for i := range n {
		if filter(n[i]) {
			trufa[i] = true
			count += 1
		}
	}
	out, j := make(Nodes, count), 0
	for i := range n {
		if trufa[i] {
			out[j] = n[i]
			j++
		}
	}
	return out
}

func (n Nodes) FirstSatisfying(filter func(Node) bool) Node {
	for i := range n {
		if filter(n[i]) {
			return n[i]
		}
	}
	return Node{}
}

func (n Nodes) WithinNestingLevel(deepestNestingLevel int) Nodes {
	count := 0
	for i := range n {
		if n[i].NestingLevel <= deepestNestingLevel {
			count++
		}
	}
	out, j := make(Nodes, count), 0
	for i := range n {
		if n[i].NestingLevel <= deepestNestingLevel {
			out[j] = n[i]
			j++
		}
	}
	return out
}

func (n Nodes) SortedByDescendingNestingLevel() Nodes {
	out := make(Nodes, len(n))
	copy(out, n)
	sort.SliceStable(out, func(i int, j int) bool {
		return out[i].NestingLevel > out[j].NestingLevel
	})

	return out
}

func (n Nodes) SortedByAscendingNestingLevel() Nodes {
	out := make(Nodes, len(n))
	copy(out, n)
	sort.SliceStable(out, func(i int, j int) bool {
		return out[i].NestingLevel < out[j].NestingLevel
	})

	return out
}

func (n Nodes) FirstWithNestingLevel(level int) Node {
	for i := range n {
		if n[i].NestingLevel == level {
			return n[i]
		}
	}
	return Node{}
}

func (n Nodes) Reverse() Nodes {
	out := make(Nodes, len(n))
	for i := range n {
		out[len(n)-1-i] = n[i]
	}
	return out
}

func (n Nodes) Texts() []string {
	out := make([]string, len(n))
	for i := range n {
		out[i] = n[i].Text
	}
	return out
}

func (n Nodes) Labels() [][]string {
	out := make([][]string, len(n))
	for i := range n {
		if n[i].Labels == nil {
			out[i] = []string{}
		} else {
			out[i] = []string(n[i].Labels)
		}
	}
	return out
}

func (n Nodes) UnionOfLabels() []string {
	out := []string{}
	seen := map[string]bool{}
	for i := range n {
		for _, label := range n[i].Labels {
			if !seen[label] {
				seen[label] = true
				out = append(out, label)
			}
		}
	}
	return out
}

func (n Nodes) CodeLocations() []types.CodeLocation {
	out := make([]types.CodeLocation, len(n))
	for i := range n {
		out[i] = n[i].CodeLocation
	}
	return out
}

func (n Nodes) BestTextFor(node Node) string {
	if node.Text != "" {
		return node.Text
	}
	parentNestingLevel := node.NestingLevel - 1
	for i := range n {
		if n[i].Text != "" && n[i].NestingLevel == parentNestingLevel {
			return n[i].Text
		}
	}

	return ""
}

func (n Nodes) ContainsNodeID(id uint) bool {
	for i := range n {
		if n[i].ID == id {
			return true
		}
	}
	return false
}

func (n Nodes) HasNodeMarkedPending() bool {
	for i := range n {
		if n[i].MarkedPending {
			return true
		}
	}
	return false
}

func (n Nodes) HasNodeMarkedFocus() bool {
	for i := range n {
		if n[i].MarkedFocus {
			return true
		}
	}
	return false
}

func (n Nodes) HasNodeMarkedSerial() bool {
	for i := range n {
		if n[i].MarkedSerial {
			return true
		}
	}
	return false
}

func (n Nodes) FirstNodeMarkedOrdered() Node {
	for i := range n {
		if n[i].MarkedOrdered {
			return n[i]
		}
	}
	return Node{}
}

func (n Nodes) GetMaxFlakeAttempts() int {
	maxFlakeAttempts := 0
	for i := range n {
		if n[i].FlakeAttempts > 0 {
			maxFlakeAttempts = n[i].FlakeAttempts
		}
	}
	return maxFlakeAttempts
}

func (n Nodes) GetMaxMustPassRepeatedly() int {
	maxMustPassRepeatedly := 0
	for i := range n {
		if n[i].MustPassRepeatedly > 0 {
			maxMustPassRepeatedly = n[i].MustPassRepeatedly
		}
	}
	return maxMustPassRepeatedly
}

func unrollInterfaceSlice(args interface{}) []interface{} {
	v := reflect.ValueOf(args)
	if v.Kind() != reflect.Slice {
		return []interface{}{args}
	}
	out := []interface{}{}
	for i := 0; i < v.Len(); i++ {
		el := reflect.ValueOf(v.Index(i).Interface())
		if el.Kind() == reflect.Slice && el.Type() != reflect.TypeOf(Labels{}) {
			out = append(out, unrollInterfaceSlice(el.Interface())...)
		} else {
			out = append(out, v.Index(i).Interface())
		}
	}
	return out
}
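`PartitionDecorations` above relies on every decorator being a distinct named type so `reflect.TypeOf` can route it. A self-contained sketch of the same type-based partitioning (illustrative stand-in types, not the real decorators):

```go
package main

import "fmt"

// partition splits arguments by type into decorations and everything else,
// the same shape as PartitionDecorations. bool and uint stand in for
// Focus/Pending- and FlakeAttempts-style marker types.
func partition(args ...interface{}) (decorations, rest []interface{}) {
	for _, arg := range args {
		switch arg.(type) {
		case bool, uint: // stand-ins for distinctly-typed decorator values
			decorations = append(decorations, arg)
		default:
			rest = append(rest, arg)
		}
	}
	return decorations, rest
}

func main() {
	deco, rest := partition(true, uint(3), "spec text", func() {})
	fmt.Println(len(deco), len(rest)) // 2 2
}
```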
164 vendor/github.com/onsi/ginkgo/v2/internal/ordering.go generated vendored Normal file
@@ -0,0 +1,164 @@
package internal

import (
	"math/rand"
	"sort"

	"github.com/onsi/ginkgo/v2/types"
)

type SortableSpecs struct {
	Specs   Specs
	Indexes []int
}

func NewSortableSpecs(specs Specs) *SortableSpecs {
	indexes := make([]int, len(specs))
	for i := range specs {
		indexes[i] = i
	}
	return &SortableSpecs{
		Specs:   specs,
		Indexes: indexes,
	}
}
func (s *SortableSpecs) Len() int      { return len(s.Indexes) }
func (s *SortableSpecs) Swap(i, j int) { s.Indexes[i], s.Indexes[j] = s.Indexes[j], s.Indexes[i] }
func (s *SortableSpecs) Less(i, j int) bool {
	a, b := s.Specs[s.Indexes[i]], s.Specs[s.Indexes[j]]

	firstOrderedA := a.Nodes.FirstNodeMarkedOrdered()
	firstOrderedB := b.Nodes.FirstNodeMarkedOrdered()
	if firstOrderedA.ID == firstOrderedB.ID && !firstOrderedA.IsZero() {
		// strictly preserve order in ordered containers.  ID will track this as IDs are generated monotonically
		return a.FirstNodeWithType(types.NodeTypeIt).ID < b.FirstNodeWithType(types.NodeTypeIt).ID
	}

	aCLs := a.Nodes.WithType(types.NodeTypesForContainerAndIt).CodeLocations()
	bCLs := b.Nodes.WithType(types.NodeTypesForContainerAndIt).CodeLocations()
	for i := 0; i < len(aCLs) && i < len(bCLs); i++ {
		aCL, bCL := aCLs[i], bCLs[i]
		if aCL.FileName < bCL.FileName {
			return true
		} else if aCL.FileName > bCL.FileName {
			return false
		}
		if aCL.LineNumber < bCL.LineNumber {
			return true
		} else if aCL.LineNumber > bCL.LineNumber {
			return false
		}
	}
	// either everything is equal or we have different lengths of CLs
	if len(aCLs) < len(bCLs) {
		return true
	} else if len(aCLs) > len(bCLs) {
		return false
	}
	// ok, now we are sure everything was equal.  so we use the spec text to break ties
	return a.Text() < b.Text()
}
|
||||
|
||||
type GroupedSpecIndices []SpecIndices
|
||||
type SpecIndices []int
|
||||
|
||||
func OrderSpecs(specs Specs, suiteConfig types.SuiteConfig) (GroupedSpecIndices, GroupedSpecIndices) {
|
||||
/*
|
||||
Ginkgo has sophisticated support for randomizing specs. Specs are guaranteed to have the same
|
||||
order for a given seed across test runs.
|
||||
|
||||
By default only top-level containers and specs are shuffled - this makes for a more intuitive debugging
|
||||
experience - specs within a given container run in the order they appear in the file.
|
||||
|
||||
Developers can set -randomizeAllSpecs to shuffle _all_ specs.
|
||||
|
||||
In addition, spec containers can be marked as Ordered. Specs within an Ordered container are never shuffled.
|
||||
|
||||
Finally, specs and spec containers can be marked as Serial. When running in parallel, serial specs run on Process #1 _after_ all other processes have finished.
|
||||
*/
|
||||
|
||||
// Seed a new random source based on thee configured random seed.
|
||||
r := rand.New(rand.NewSource(suiteConfig.RandomSeed))
|
||||
|
||||
// first, we sort the entire suite to ensure a deterministic order. the sort is performed by filename, then line number, and then spec text. this ensures every parallel process has the exact same spec order and is only necessary to cover the edge case where the user iterates over a map to generate specs.
|
||||
sortableSpecs := NewSortableSpecs(specs)
|
||||
sort.Sort(sortableSpecs)
|
||||
|
||||
// then we break things into execution groups
|
||||
// a group represents a single unit of execution and is a collection of SpecIndices
|
||||
// usually a group is just a single spec, however ordered containers must be preserved as a single group
|
||||
executionGroupIDs := []uint{}
|
||||
executionGroups := map[uint]SpecIndices{}
|
||||
for _, idx := range sortableSpecs.Indexes {
|
||||
spec := specs[idx]
|
||||
groupNode := spec.Nodes.FirstNodeMarkedOrdered()
|
||||
if groupNode.IsZero() {
|
||||
groupNode = spec.Nodes.FirstNodeWithType(types.NodeTypeIt)
|
||||
}
|
||||
executionGroups[groupNode.ID] = append(executionGroups[groupNode.ID], idx)
|
||||
if len(executionGroups[groupNode.ID]) == 1 {
|
||||
executionGroupIDs = append(executionGroupIDs, groupNode.ID)
|
||||
}
|
||||
}
|
||||
|
||||
// now, we only shuffle all the execution groups if we're randomizing all specs, otherwise
|
||||
// we shuffle outermost containers. so we need to form shufflable groupings of GroupIDs
|
||||
shufflableGroupingIDs := []uint{}
|
||||
shufflableGroupingIDToGroupIDs := map[uint][]uint{}
|
||||
|
||||
// for each execution group we're going to have to pick a node to represent how the
|
||||
// execution group is grouped for shuffling:
|
||||
nodeTypesToShuffle := types.NodeTypesForContainerAndIt
|
||||
if suiteConfig.RandomizeAllSpecs {
|
||||
nodeTypesToShuffle = types.NodeTypeIt
|
||||
}
|
||||
|
||||
//so, for each execution group:
|
||||
for _, groupID := range executionGroupIDs {
|
||||
// pick out a representative spec
|
||||
representativeSpec := specs[executionGroups[groupID][0]]
|
||||
|
||||
// and grab the node on the spec that will represent which shufflable group this execution group belongs tu
|
||||
shufflableGroupingNode := representativeSpec.Nodes.FirstNodeWithType(nodeTypesToShuffle)
|
||||
|
||||
//add the execution group to its shufflable group
|
||||
shufflableGroupingIDToGroupIDs[shufflableGroupingNode.ID] = append(shufflableGroupingIDToGroupIDs[shufflableGroupingNode.ID], groupID)
|
||||
|
||||
//and if it's the first one in
|
||||
if len(shufflableGroupingIDToGroupIDs[shufflableGroupingNode.ID]) == 1 {
|
||||
// record the shuffleable group ID
|
||||
shufflableGroupingIDs = append(shufflableGroupingIDs, shufflableGroupingNode.ID)
|
||||
}
|
||||
}
|
||||
|
||||
// now we permute the sorted shufflable grouping IDs and build the ordered Groups
|
||||
orderedGroups := GroupedSpecIndices{}
|
||||
permutation := r.Perm(len(shufflableGroupingIDs))
|
||||
for _, j := range permutation {
|
||||
//let's get the execution group IDs for this shufflable group:
|
||||
executionGroupIDsForJ := shufflableGroupingIDToGroupIDs[shufflableGroupingIDs[j]]
|
||||
// and we'll add their associated specindices to the orderedGroups slice:
|
||||
for _, executionGroupID := range executionGroupIDsForJ {
|
||||
orderedGroups = append(orderedGroups, executionGroups[executionGroupID])
|
||||
}
|
||||
}
|
||||
|
||||
// If we're running in series, we're done.
|
||||
if suiteConfig.ParallelTotal == 1 {
|
||||
return orderedGroups, GroupedSpecIndices{}
|
||||
}
|
||||
|
||||
// We're running in parallel so we need to partition the ordered groups into a parallelizable set and a serialized set.
|
||||
// The parallelizable groups will run across all Ginkgo processes...
|
||||
// ...the serial groups will only run on Process #1 after all other processes have exited.
|
||||
parallelizableGroups, serialGroups := GroupedSpecIndices{}, GroupedSpecIndices{}
|
||||
for _, specIndices := range orderedGroups {
|
||||
if specs[specIndices[0]].Nodes.HasNodeMarkedSerial() {
|
||||
serialGroups = append(serialGroups, specIndices)
|
||||
} else {
|
||||
parallelizableGroups = append(parallelizableGroups, specIndices)
|
||||
}
|
||||
}
|
||||
|
||||
return parallelizableGroups, serialGroups
|
||||
}
|
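The randomization above hinges on one property of Go's math/rand: a source seeded with the suite's `RandomSeed` yields the same permutation on every process, so each parallel proc independently computes an identical spec order. A tiny sketch of that property:

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	seed := int64(17) // stand-in for suiteConfig.RandomSeed
	for proc := 1; proc <= 2; proc++ {
		r := rand.New(rand.NewSource(seed))
		fmt.Printf("proc %d order: %v\n", proc, r.Perm(5)) // identical on both "procs"
	}
}
```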
250
vendor/github.com/onsi/ginkgo/v2/internal/output_interceptor.go
generated
vendored
Normal file
@ -0,0 +1,250 @@
package internal

import (
	"bytes"
	"io"
	"os"
	"time"
)

const BAILOUT_TIME = 1 * time.Second
const BAILOUT_MESSAGE = `Ginkgo detected an issue while intercepting output.

When running in parallel, Ginkgo captures stdout and stderr output
and attaches it to the running spec. It looks like that process is getting
stuck for this suite.

This usually happens if you, or a library you are using, spin up an external
process and set cmd.Stdout = os.Stdout and/or cmd.Stderr = os.Stderr. This
causes the external process to keep Ginkgo's output interceptor pipe open and
causes output interception to hang.

Ginkgo has detected this and short-circuited the capture process. The specs
will continue running after this message however output from the external
process that caused this issue will not be captured.

You have several options to fix this. In preferred order they are:

1. Pass GinkgoWriter instead of os.Stdout or os.Stderr to your process.
2. Ensure your process exits before the current spec completes. If your
process is long-lived and must cross spec boundaries, this option won't
work for you.
3. Pause Ginkgo's output interceptor before starting your process and then
resume it after. Use PauseOutputInterception() and ResumeOutputInterception()
to do this.
4. Set --output-interceptor-mode=none when running your Ginkgo suite. This will
turn off all output interception but allow specs to run in parallel without this
issue. You may miss important output if you do this including output from Go's
race detector.

More details on issue #851 - https://github.com/onsi/ginkgo/issues/851
`

/*
The OutputInterceptor is used to intercept and capture all stdout and stderr output during a test run.
*/
type OutputInterceptor interface {
	StartInterceptingOutput()
	StartInterceptingOutputAndForwardTo(io.Writer)
	StopInterceptingAndReturnOutput() string

	PauseIntercepting()
	ResumeIntercepting()

	Shutdown()
}

type NoopOutputInterceptor struct{}

func (interceptor NoopOutputInterceptor) StartInterceptingOutput()                      {}
func (interceptor NoopOutputInterceptor) StartInterceptingOutputAndForwardTo(io.Writer) {}
func (interceptor NoopOutputInterceptor) StopInterceptingAndReturnOutput() string       { return "" }
func (interceptor NoopOutputInterceptor) PauseIntercepting()                            {}
func (interceptor NoopOutputInterceptor) ResumeIntercepting()                           {}
func (interceptor NoopOutputInterceptor) Shutdown()                                     {}

type pipePair struct {
	reader *os.File
	writer *os.File
}

func startPipeFactory(pipeChannel chan pipePair, shutdown chan interface{}) {
	for {
		// make the next pipe...
		pair := pipePair{}
		pair.reader, pair.writer, _ = os.Pipe()
		select {
		// ...and provide it to the next consumer (they are responsible for closing the files)
		case pipeChannel <- pair:
			continue
		// ...or close the files if we were told to shutdown
		case <-shutdown:
			pair.reader.Close()
			pair.writer.Close()
			return
		}
	}
}

type interceptorImplementation interface {
	CreateStdoutStderrClones() (*os.File, *os.File)
	ConnectPipeToStdoutStderr(*os.File)
	RestoreStdoutStderrFromClones(*os.File, *os.File)
	ShutdownClones(*os.File, *os.File)
}

type genericOutputInterceptor struct {
	intercepting bool

	stdoutClone *os.File
	stderrClone *os.File
	pipe        pipePair

	shutdown           chan interface{}
	emergencyBailout   chan interface{}
	pipeChannel        chan pipePair
	interceptedContent chan string

	forwardTo         io.Writer
	accumulatedOutput string

	implementation interceptorImplementation
}

func (interceptor *genericOutputInterceptor) StartInterceptingOutput() {
	interceptor.StartInterceptingOutputAndForwardTo(io.Discard)
}

func (interceptor *genericOutputInterceptor) StartInterceptingOutputAndForwardTo(w io.Writer) {
	if interceptor.intercepting {
		return
	}
	interceptor.accumulatedOutput = ""
	interceptor.forwardTo = w
	interceptor.ResumeIntercepting()
}

func (interceptor *genericOutputInterceptor) StopInterceptingAndReturnOutput() string {
	if interceptor.intercepting {
		interceptor.PauseIntercepting()
	}
	return interceptor.accumulatedOutput
}

func (interceptor *genericOutputInterceptor) ResumeIntercepting() {
	if interceptor.intercepting {
		return
	}
	interceptor.intercepting = true
	if interceptor.stdoutClone == nil {
		interceptor.stdoutClone, interceptor.stderrClone = interceptor.implementation.CreateStdoutStderrClones()
		interceptor.shutdown = make(chan interface{})
		go startPipeFactory(interceptor.pipeChannel, interceptor.shutdown)
	}

	// Now we make a pipe. We'll use this to redirect the input to the 1 and 2 file descriptors (this is how everything else in the world is trying to log to stdout and stderr).
	// We get the pipe from our pipe factory. It runs in the background so we can request the next pipe while the spec being intercepted is running.
	interceptor.pipe = <-interceptor.pipeChannel

	interceptor.emergencyBailout = make(chan interface{})

	// Spin up a goroutine to copy data from the pipe into a buffer, this is how we capture any output the user is emitting
	go func() {
		buffer := &bytes.Buffer{}
		destination := io.MultiWriter(buffer, interceptor.forwardTo)
		copyFinished := make(chan interface{})
		reader := interceptor.pipe.reader
		go func() {
			io.Copy(destination, reader)
			reader.Close() // close the read end of the pipe so we don't leak a file descriptor
			close(copyFinished)
		}()
		select {
		case <-copyFinished:
			interceptor.interceptedContent <- buffer.String()
		case <-interceptor.emergencyBailout:
			interceptor.interceptedContent <- ""
		}
	}()

	interceptor.implementation.ConnectPipeToStdoutStderr(interceptor.pipe.writer)
}

func (interceptor *genericOutputInterceptor) PauseIntercepting() {
	if !interceptor.intercepting {
		return
	}
	// first we have to close the write end of the pipe. To do this we have to close all file descriptors pointing
	// to the write end. So that would be the pipewriter itself, and FD #1 and FD #2 if we've Dup2'd them
	interceptor.pipe.writer.Close() // the pipewriter itself

	// we also need to stop intercepting. we do that by reconnecting file descriptors #1 and #2 back to their original stdout and stderr file descriptions;
	// this also severs the connection between #1/#2 and the write end of the pipe.
	interceptor.implementation.RestoreStdoutStderrFromClones(interceptor.stdoutClone, interceptor.stderrClone)

	var content string
	select {
	case content = <-interceptor.interceptedContent:
	case <-time.After(BAILOUT_TIME):
		/*
			By closing all the pipe writer's file descriptors associated with the pipe writer's file description the io.Copy reading from the reader
			should eventually receive an EOF and exit.

			**However**, if the user has spun up an external process and passed in os.Stdout/os.Stderr to cmd.Stdout/cmd.Stderr then the external process
			will have a file descriptor pointing to the pipe writer's file description and it will not close until the external process exits.

			That would leave us hanging here waiting for the io.Copy to close forever. Instead we invoke this emergency escape valve. This returns whatever
			content we've got but leaves the io.Copy running. This ensures the external process can continue writing without hanging at the cost of leaking a goroutine
			and file descriptor (though these will be cleaned up when the process exits).

			We tack on a message to notify the user that they've hit this edge case and encourage them to address it.
		*/
		close(interceptor.emergencyBailout)
		content = <-interceptor.interceptedContent + BAILOUT_MESSAGE
	}

	interceptor.accumulatedOutput += content
	interceptor.intercepting = false
}

func (interceptor *genericOutputInterceptor) Shutdown() {
	interceptor.PauseIntercepting()

	if interceptor.stdoutClone != nil {
		close(interceptor.shutdown)
		interceptor.implementation.ShutdownClones(interceptor.stdoutClone, interceptor.stderrClone)
		interceptor.stdoutClone = nil
		interceptor.stderrClone = nil
	}
}

/* This is used on windows builds but included here so it can be explicitly tested on unix systems too */
func NewOSGlobalReassigningOutputInterceptor() OutputInterceptor {
	return &genericOutputInterceptor{
		interceptedContent: make(chan string),
		pipeChannel:        make(chan pipePair),
		shutdown:           make(chan interface{}),
		implementation:     &osGlobalReassigningOutputInterceptorImpl{},
	}
}

type osGlobalReassigningOutputInterceptorImpl struct{}

func (impl *osGlobalReassigningOutputInterceptorImpl) CreateStdoutStderrClones() (*os.File, *os.File) {
	return os.Stdout, os.Stderr
}

func (impl *osGlobalReassigningOutputInterceptorImpl) ConnectPipeToStdoutStderr(pipeWriter *os.File) {
	os.Stdout = pipeWriter
	os.Stderr = pipeWriter
}

func (impl *osGlobalReassigningOutputInterceptorImpl) RestoreStdoutStderrFromClones(stdoutClone *os.File, stderrClone *os.File) {
	os.Stdout = stdoutClone
	os.Stderr = stderrClone
}

func (impl *osGlobalReassigningOutputInterceptorImpl) ShutdownClones(_ *os.File, _ *os.File) {
	// noop
}
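Option 3 in the bailout message, pausing interception around an external process, uses the `PauseOutputInterception()`/`ResumeOutputInterception()` functions the message itself names. A minimal sketch of what that looks like in a spec (the `./bin/worker` binary path is hypothetical):

```go
package mysuite_test

import (
	"os"
	"os/exec"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = It("runs a process that writes straight to stdout", func() {
	PauseOutputInterception() // stop capturing so the child doesn't hold the pipe open

	cmd := exec.Command("./bin/worker") // hypothetical long-running binary
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	Expect(cmd.Run()).To(Succeed())

	ResumeOutputInterception() // resume capture for the rest of the spec
})
```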
62
vendor/github.com/onsi/ginkgo/v2/internal/output_interceptor_unix.go
generated
vendored
Normal file
@ -0,0 +1,62 @@
//go:build freebsd || openbsd || netbsd || dragonfly || darwin || linux || solaris
// +build freebsd openbsd netbsd dragonfly darwin linux solaris

package internal

import (
	"os"

	"golang.org/x/sys/unix"
)

func NewOutputInterceptor() OutputInterceptor {
	return &genericOutputInterceptor{
		interceptedContent: make(chan string),
		pipeChannel:        make(chan pipePair),
		shutdown:           make(chan interface{}),
		implementation:     &dupSyscallOutputInterceptorImpl{},
	}
}

type dupSyscallOutputInterceptorImpl struct{}

func (impl *dupSyscallOutputInterceptorImpl) CreateStdoutStderrClones() (*os.File, *os.File) {
	// To clone stdout and stderr we:
	// First, create two clone file descriptors that point to the stdout and stderr file descriptions
	stdoutCloneFD, _ := unix.Dup(1)
	stderrCloneFD, _ := unix.Dup(2)

	// And then wrap the clone file descriptors in files.
	// One benefit of this (that we don't use yet) is that we can actually write
	// to these files to emit output to the console even though we're intercepting output
	stdoutClone := os.NewFile(uintptr(stdoutCloneFD), "stdout-clone")
	stderrClone := os.NewFile(uintptr(stderrCloneFD), "stderr-clone")

	// these clones remain alive throughout the lifecycle of the suite and don't need to be recreated
	// this speeds things up a bit, actually.
	return stdoutClone, stderrClone
}

func (impl *dupSyscallOutputInterceptorImpl) ConnectPipeToStdoutStderr(pipeWriter *os.File) {
	// To redirect output to our pipe we need to point the 1 and 2 file descriptors (which is how the world tries to log things)
	// to the write end of the pipe.
	// We do this with Dup2 (possibly Dup3 on some architectures) to have file descriptors 1 and 2 point to the same file description as the pipeWriter
	// This effectively shunts data written to stdout and stderr to the write end of our pipe
	unix.Dup2(int(pipeWriter.Fd()), 1)
	unix.Dup2(int(pipeWriter.Fd()), 2)
}

func (impl *dupSyscallOutputInterceptorImpl) RestoreStdoutStderrFromClones(stdoutClone *os.File, stderrClone *os.File) {
	// To restore stdout/stderr from the clones we have the 1 and 2 file descriptors
	// point to the original file descriptions that we saved off in the clones.
	// This has the added benefit of closing the connection between these descriptors and the write end of the pipe
	// which is important to cause the io.Copy on the pipe.Reader to end.
	unix.Dup2(int(stdoutClone.Fd()), 1)
	unix.Dup2(int(stderrClone.Fd()), 2)
}

func (impl *dupSyscallOutputInterceptorImpl) ShutdownClones(stdoutClone *os.File, stderrClone *os.File) {
	// We're done with the clones so we can close them to clean up after ourselves
	stdoutClone.Close()
	stderrClone.Close()
}
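The Dup/Dup2 mechanism above can be exercised in isolation. A self-contained sketch, assuming a platform where `unix.Dup2` exists (the file notes Dup3 may be needed on some architectures): clone fd 1, shunt it into a pipe, then restore it so the read side sees EOF.

```go
package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	cloneFD, _ := unix.Dup(1) // save the real stdout file description
	r, w, _ := os.Pipe()

	unix.Dup2(int(w.Fd()), 1) // fd 1 now points at the pipe's write end
	fmt.Println("captured")   // lands in the pipe, not the console
	w.Close()
	unix.Dup2(cloneFD, 1) // restore fd 1; no fd now points at the write end

	captured, _ := io.ReadAll(r) // sees EOF because every writer fd is closed
	r.Close()
	fmt.Printf("got: %q\n", captured) // back on the real stdout
}
```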
7
vendor/github.com/onsi/ginkgo/v2/internal/output_interceptor_win.go
generated
vendored
Normal file
@ -0,0 +1,7 @@
//go:build windows
// +build windows

package internal

func NewOutputInterceptor() OutputInterceptor {
	return NewOSGlobalReassigningOutputInterceptor()
}
72
vendor/github.com/onsi/ginkgo/v2/internal/parallel_support/client_server.go
generated
vendored
Normal file
@ -0,0 +1,72 @@
package parallel_support

import (
	"fmt"
	"io"
	"os"
	"time"

	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

type BeforeSuiteState struct {
	Data  []byte
	State types.SpecState
}

type ParallelIndexCounter struct {
	Index int
}

var ErrorGone = fmt.Errorf("gone")
var ErrorFailed = fmt.Errorf("failed")
var ErrorEarly = fmt.Errorf("early")

var POLLING_INTERVAL = 50 * time.Millisecond

type Server interface {
	Start()
	Close()
	Address() string
	RegisterAlive(node int, alive func() bool)
	GetSuiteDone() chan interface{}
	GetOutputDestination() io.Writer
	SetOutputDestination(io.Writer)
}

type Client interface {
	Connect() bool
	Close() error

	PostSuiteWillBegin(report types.Report) error
	PostDidRun(report types.SpecReport) error
	PostSuiteDidEnd(report types.Report) error
	PostReportBeforeSuiteCompleted(state types.SpecState) error
	BlockUntilReportBeforeSuiteCompleted() (types.SpecState, error)
	PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error
	BlockUntilSynchronizedBeforeSuiteData() (types.SpecState, []byte, error)
	BlockUntilNonprimaryProcsHaveFinished() error
	BlockUntilAggregatedNonprimaryProcsReport() (types.Report, error)
	FetchNextCounter() (int, error)
	PostAbort() error
	ShouldAbort() bool
	PostEmitProgressReport(report types.ProgressReport) error
	Write(p []byte) (int, error)
}

func NewServer(parallelTotal int, reporter reporters.Reporter) (Server, error) {
	if os.Getenv("GINKGO_PARALLEL_PROTOCOL") == "HTTP" {
		return newHttpServer(parallelTotal, reporter)
	} else {
		return newRPCServer(parallelTotal, reporter)
	}
}

func NewClient(serverHost string) Client {
	if os.Getenv("GINKGO_PARALLEL_PROTOCOL") == "HTTP" {
		return newHttpClient(serverHost)
	} else {
		return newRPCClient(serverHost)
	}
}
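`NewServer`/`NewClient` pick the transport from the `GINKGO_PARALLEL_PROTOCOL` environment variable: `HTTP` selects the JSON-over-HTTP flavor, anything else falls back to net/rpc. A standalone sketch of the same env-var dispatch pattern (illustrative local types, not the internal package itself):

```go
package main

import (
	"fmt"
	"os"
)

type transport interface{ Name() string }

type httpTransport struct{}

func (httpTransport) Name() string { return "HTTP" }

type rpcTransport struct{}

func (rpcTransport) Name() string { return "net/rpc" }

func newTransport() transport {
	if os.Getenv("GINKGO_PARALLEL_PROTOCOL") == "HTTP" {
		return httpTransport{}
	}
	return rpcTransport{} // the default
}

func main() {
	fmt.Println("selected:", newTransport().Name())
}
```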
169
vendor/github.com/onsi/ginkgo/v2/internal/parallel_support/http_client.go
generated
vendored
Normal file
@ -0,0 +1,169 @@
package parallel_support

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"

	"github.com/onsi/ginkgo/v2/types"
)

type httpClient struct {
	serverHost string
}

func newHttpClient(serverHost string) *httpClient {
	return &httpClient{
		serverHost: serverHost,
	}
}

func (client *httpClient) Connect() bool {
	resp, err := http.Get(client.serverHost + "/up")
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func (client *httpClient) Close() error {
	return nil
}

func (client *httpClient) post(path string, data interface{}) error {
	var body io.Reader
	if data != nil {
		encoded, err := json.Marshal(data)
		if err != nil {
			return err
		}
		body = bytes.NewBuffer(encoded)
	}
	resp, err := http.Post(client.serverHost+path, "application/json", body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("received unexpected status code %d", resp.StatusCode)
	}
	return nil
}

func (client *httpClient) poll(path string, data interface{}) error {
	for {
		resp, err := http.Get(client.serverHost + path)
		if err != nil {
			return err
		}
		if resp.StatusCode == http.StatusTooEarly {
			resp.Body.Close()
			time.Sleep(POLLING_INTERVAL)
			continue
		}
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusGone {
			return ErrorGone
		}
		if resp.StatusCode == http.StatusFailedDependency {
			return ErrorFailed
		}
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("received unexpected status code %d", resp.StatusCode)
		}
		if data != nil {
			return json.NewDecoder(resp.Body).Decode(data)
		}
		return nil
	}
}

func (client *httpClient) PostSuiteWillBegin(report types.Report) error {
	return client.post("/suite-will-begin", report)
}

func (client *httpClient) PostDidRun(report types.SpecReport) error {
	return client.post("/did-run", report)
}

func (client *httpClient) PostSuiteDidEnd(report types.Report) error {
	return client.post("/suite-did-end", report)
}

func (client *httpClient) PostEmitProgressReport(report types.ProgressReport) error {
	return client.post("/progress-report", report)
}

func (client *httpClient) PostReportBeforeSuiteCompleted(state types.SpecState) error {
	return client.post("/report-before-suite-completed", state)
}

func (client *httpClient) BlockUntilReportBeforeSuiteCompleted() (types.SpecState, error) {
	var state types.SpecState
	err := client.poll("/report-before-suite-state", &state)
	if err == ErrorGone {
		return types.SpecStateFailed, nil
	}
	return state, err
}

func (client *httpClient) PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error {
	beforeSuiteState := BeforeSuiteState{
		State: state,
		Data:  data,
	}
	return client.post("/before-suite-completed", beforeSuiteState)
}

func (client *httpClient) BlockUntilSynchronizedBeforeSuiteData() (types.SpecState, []byte, error) {
	var beforeSuiteState BeforeSuiteState
	err := client.poll("/before-suite-state", &beforeSuiteState)
	if err == ErrorGone {
		return types.SpecStateInvalid, nil, types.GinkgoErrors.SynchronizedBeforeSuiteDisappearedOnProc1()
	}
	return beforeSuiteState.State, beforeSuiteState.Data, err
}

func (client *httpClient) BlockUntilNonprimaryProcsHaveFinished() error {
	return client.poll("/have-nonprimary-procs-finished", nil)
}

func (client *httpClient) BlockUntilAggregatedNonprimaryProcsReport() (types.Report, error) {
	var report types.Report
	err := client.poll("/aggregated-nonprimary-procs-report", &report)
	if err == ErrorGone {
		return types.Report{}, types.GinkgoErrors.AggregatedReportUnavailableDueToNodeDisappearing()
	}
	return report, err
}

func (client *httpClient) FetchNextCounter() (int, error) {
	var counter ParallelIndexCounter
	err := client.poll("/counter", &counter)
	return counter.Index, err
}

func (client *httpClient) PostAbort() error {
	return client.post("/abort", nil)
}

func (client *httpClient) ShouldAbort() bool {
	err := client.poll("/abort", nil)
	if err == ErrorGone {
		return true
	}
	return false
}

func (client *httpClient) Write(p []byte) (int, error) {
	resp, err := http.Post(client.serverHost+"/emit-output", "text/plain;charset=UTF-8", bytes.NewReader(p))
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return 0, fmt.Errorf("failed to emit output")
	}
	return len(p), nil
}
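The `poll` helper implements a simple long-poll: retry while the server answers 425 Too Early, stop on any other status. A self-contained sketch of the same technique using net/http/httptest (not Ginkgo's own server):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
	"time"
)

func main() {
	var hits int64
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if atomic.AddInt64(&hits, 1) < 3 {
			w.WriteHeader(http.StatusTooEarly) // 425: not ready yet, poll again
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()

	for {
		resp, err := http.Get(srv.URL)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusTooEarly {
			time.Sleep(50 * time.Millisecond)
			continue
		}
		fmt.Println("done with status", resp.StatusCode, "after", hits, "attempts")
		return
	}
}
```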
242
vendor/github.com/onsi/ginkgo/v2/internal/parallel_support/http_server.go
generated
vendored
Normal file
@ -0,0 +1,242 @@
/*
The remote package provides the pieces to allow Ginkgo test suites to report to remote listeners.
This is used, primarily, to enable streaming parallel test output but has, in principle, broader applications (e.g. streaming test output to a browser).
*/

package parallel_support

import (
	"encoding/json"
	"io"
	"net"
	"net/http"

	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

/*
httpServer spins up on an automatically selected port and listens for communication from the forwarding reporter.
It then forwards that communication to attached reporters.
*/
type httpServer struct {
	listener net.Listener
	handler  *ServerHandler
}

// Create a new server, automatically selecting a port
func newHttpServer(parallelTotal int, reporter reporters.Reporter) (*httpServer, error) {
	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return nil, err
	}
	return &httpServer{
		listener: listener,
		handler:  newServerHandler(parallelTotal, reporter),
	}, nil
}

// Start the server. You don't need to `go s.Start()`, just `s.Start()`
func (server *httpServer) Start() {
	httpServer := &http.Server{}
	mux := http.NewServeMux()
	httpServer.Handler = mux

	// streaming endpoints
	mux.HandleFunc("/suite-will-begin", server.specSuiteWillBegin)
	mux.HandleFunc("/did-run", server.didRun)
	mux.HandleFunc("/suite-did-end", server.specSuiteDidEnd)
	mux.HandleFunc("/emit-output", server.emitOutput)
	mux.HandleFunc("/progress-report", server.emitProgressReport)

	// synchronization endpoints
	mux.HandleFunc("/report-before-suite-completed", server.handleReportBeforeSuiteCompleted)
	mux.HandleFunc("/report-before-suite-state", server.handleReportBeforeSuiteState)
	mux.HandleFunc("/before-suite-completed", server.handleBeforeSuiteCompleted)
	mux.HandleFunc("/before-suite-state", server.handleBeforeSuiteState)
	mux.HandleFunc("/have-nonprimary-procs-finished", server.handleHaveNonprimaryProcsFinished)
	mux.HandleFunc("/aggregated-nonprimary-procs-report", server.handleAggregatedNonprimaryProcsReport)
	mux.HandleFunc("/counter", server.handleCounter)
	mux.HandleFunc("/up", server.handleUp)
	mux.HandleFunc("/abort", server.handleAbort)

	go httpServer.Serve(server.listener)
}

// Stop the server
func (server *httpServer) Close() {
	server.listener.Close()
}

// The address at which the server can be reached. Pass this into the `ForwardingReporter`.
func (server *httpServer) Address() string {
	return "http://" + server.listener.Addr().String()
}

func (server *httpServer) GetSuiteDone() chan interface{} {
	return server.handler.done
}

func (server *httpServer) GetOutputDestination() io.Writer {
	return server.handler.outputDestination
}

func (server *httpServer) SetOutputDestination(w io.Writer) {
	server.handler.outputDestination = w
}

func (server *httpServer) RegisterAlive(node int, alive func() bool) {
	server.handler.registerAlive(node, alive)
}

//
// Streaming Endpoints
//

// The server will forward all received messages to Ginkgo reporters registered with `RegisterReporters`
func (server *httpServer) decode(writer http.ResponseWriter, request *http.Request, object interface{}) bool {
	defer request.Body.Close()
	if json.NewDecoder(request.Body).Decode(object) != nil {
		writer.WriteHeader(http.StatusBadRequest)
		return false
	}
	return true
}

func (server *httpServer) handleError(err error, writer http.ResponseWriter) bool {
	if err == nil {
		return false
	}
	switch err {
	case ErrorEarly:
		writer.WriteHeader(http.StatusTooEarly)
	case ErrorGone:
		writer.WriteHeader(http.StatusGone)
	case ErrorFailed:
		writer.WriteHeader(http.StatusFailedDependency)
	default:
		writer.WriteHeader(http.StatusInternalServerError)
	}
	return true
}

func (server *httpServer) specSuiteWillBegin(writer http.ResponseWriter, request *http.Request) {
	var report types.Report
	if !server.decode(writer, request, &report) {
		return
	}

	server.handleError(server.handler.SpecSuiteWillBegin(report, voidReceiver), writer)
}

func (server *httpServer) didRun(writer http.ResponseWriter, request *http.Request) {
	var report types.SpecReport
	if !server.decode(writer, request, &report) {
		return
	}

	server.handleError(server.handler.DidRun(report, voidReceiver), writer)
}

func (server *httpServer) specSuiteDidEnd(writer http.ResponseWriter, request *http.Request) {
	var report types.Report
	if !server.decode(writer, request, &report) {
		return
	}
	server.handleError(server.handler.SpecSuiteDidEnd(report, voidReceiver), writer)
}

func (server *httpServer) emitOutput(writer http.ResponseWriter, request *http.Request) {
	output, err := io.ReadAll(request.Body)
	if err != nil {
		writer.WriteHeader(http.StatusInternalServerError)
		return
	}
	var n int
	server.handleError(server.handler.EmitOutput(output, &n), writer)
}

func (server *httpServer) emitProgressReport(writer http.ResponseWriter, request *http.Request) {
	var report types.ProgressReport
	if !server.decode(writer, request, &report) {
		return
	}
	server.handleError(server.handler.EmitProgressReport(report, voidReceiver), writer)
}

func (server *httpServer) handleReportBeforeSuiteCompleted(writer http.ResponseWriter, request *http.Request) {
	var state types.SpecState
	if !server.decode(writer, request, &state) {
		return
	}

	server.handleError(server.handler.ReportBeforeSuiteCompleted(state, voidReceiver), writer)
}

func (server *httpServer) handleReportBeforeSuiteState(writer http.ResponseWriter, request *http.Request) {
	var state types.SpecState
	if server.handleError(server.handler.ReportBeforeSuiteState(voidSender, &state), writer) {
		return
	}
	json.NewEncoder(writer).Encode(state)
}

func (server *httpServer) handleBeforeSuiteCompleted(writer http.ResponseWriter, request *http.Request) {
	var beforeSuiteState BeforeSuiteState
	if !server.decode(writer, request, &beforeSuiteState) {
		return
	}

	server.handleError(server.handler.BeforeSuiteCompleted(beforeSuiteState, voidReceiver), writer)
}

func (server *httpServer) handleBeforeSuiteState(writer http.ResponseWriter, request *http.Request) {
	var beforeSuiteState BeforeSuiteState
	if server.handleError(server.handler.BeforeSuiteState(voidSender, &beforeSuiteState), writer) {
		return
	}
	json.NewEncoder(writer).Encode(beforeSuiteState)
}

func (server *httpServer) handleHaveNonprimaryProcsFinished(writer http.ResponseWriter, request *http.Request) {
	if server.handleError(server.handler.HaveNonprimaryProcsFinished(voidSender, voidReceiver), writer) {
		return
	}
	writer.WriteHeader(http.StatusOK)
}

func (server *httpServer) handleAggregatedNonprimaryProcsReport(writer http.ResponseWriter, request *http.Request) {
	var aggregatedReport types.Report
	if server.handleError(server.handler.AggregatedNonprimaryProcsReport(voidSender, &aggregatedReport), writer) {
		return
	}
	json.NewEncoder(writer).Encode(aggregatedReport)
}

func (server *httpServer) handleCounter(writer http.ResponseWriter, request *http.Request) {
	var n int
	if server.handleError(server.handler.Counter(voidSender, &n), writer) {
		return
	}
	json.NewEncoder(writer).Encode(ParallelIndexCounter{Index: n})
}

func (server *httpServer) handleUp(writer http.ResponseWriter, request *http.Request) {
	writer.WriteHeader(http.StatusOK)
}

func (server *httpServer) handleAbort(writer http.ResponseWriter, request *http.Request) {
	if request.Method == "GET" {
		var shouldAbort bool
		server.handler.ShouldAbort(voidSender, &shouldAbort)
		if shouldAbort {
			writer.WriteHeader(http.StatusGone)
		} else {
			writer.WriteHeader(http.StatusOK)
		}
	} else {
		server.handler.Abort(voidSender, voidReceiver)
	}
}
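`handleError` is the glue that maps the package's sentinel errors onto HTTP status codes (ErrorEarly → 425, ErrorGone → 410, ErrorFailed → 424), which is exactly what the client's `poll` decodes on the other side. A compact httptest-based sketch of that convention, separate from Ginkgo's server:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
)

var (
	ErrEarly  = errors.New("early")
	ErrGone   = errors.New("gone")
	ErrFailed = errors.New("failed")
)

func statusFor(err error) int {
	switch err {
	case nil:
		return http.StatusOK
	case ErrEarly:
		return http.StatusTooEarly
	case ErrGone:
		return http.StatusGone
	case ErrFailed:
		return http.StatusFailedDependency
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	next := ErrEarly // pretend the business logic reported "not ready yet"
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(statusFor(next))
	}))
	defer srv.Close()

	resp, _ := http.Get(srv.URL)
	resp.Body.Close()
	fmt.Println(resp.StatusCode) // 425
}
```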
136
vendor/github.com/onsi/ginkgo/v2/internal/parallel_support/rpc_client.go
generated
vendored
Normal file
@ -0,0 +1,136 @@
package parallel_support

import (
	"net/rpc"
	"time"

	"github.com/onsi/ginkgo/v2/types"
)

type rpcClient struct {
	serverHost string
	client     *rpc.Client
}

func newRPCClient(serverHost string) *rpcClient {
	return &rpcClient{
		serverHost: serverHost,
	}
}

func (client *rpcClient) Connect() bool {
	var err error
	if client.client != nil {
		return true
	}
	client.client, err = rpc.DialHTTPPath("tcp", client.serverHost, "/")
	if err != nil {
		client.client = nil
		return false
	}
	return true
}

func (client *rpcClient) Close() error {
	return client.client.Close()
}

func (client *rpcClient) poll(method string, data interface{}) error {
	for {
		err := client.client.Call(method, voidSender, data)
		if err == nil {
			return nil
		}
		// net/rpc flattens server-side errors to strings in transit, so we compare messages rather than identities
		switch err.Error() {
		case ErrorEarly.Error():
			time.Sleep(POLLING_INTERVAL)
		case ErrorGone.Error():
			return ErrorGone
		case ErrorFailed.Error():
			return ErrorFailed
		default:
			return err
		}
	}
}

func (client *rpcClient) PostSuiteWillBegin(report types.Report) error {
	return client.client.Call("Server.SpecSuiteWillBegin", report, voidReceiver)
}

func (client *rpcClient) PostDidRun(report types.SpecReport) error {
	return client.client.Call("Server.DidRun", report, voidReceiver)
}

func (client *rpcClient) PostSuiteDidEnd(report types.Report) error {
	return client.client.Call("Server.SpecSuiteDidEnd", report, voidReceiver)
}

func (client *rpcClient) Write(p []byte) (int, error) {
	var n int
	err := client.client.Call("Server.EmitOutput", p, &n)
	return n, err
}

func (client *rpcClient) PostEmitProgressReport(report types.ProgressReport) error {
	return client.client.Call("Server.EmitProgressReport", report, voidReceiver)
}

func (client *rpcClient) PostReportBeforeSuiteCompleted(state types.SpecState) error {
	return client.client.Call("Server.ReportBeforeSuiteCompleted", state, voidReceiver)
}

func (client *rpcClient) BlockUntilReportBeforeSuiteCompleted() (types.SpecState, error) {
	var state types.SpecState
	err := client.poll("Server.ReportBeforeSuiteState", &state)
	if err == ErrorGone {
		return types.SpecStateFailed, nil
	}
	return state, err
}

func (client *rpcClient) PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error {
	beforeSuiteState := BeforeSuiteState{
		State: state,
		Data:  data,
	}
	return client.client.Call("Server.BeforeSuiteCompleted", beforeSuiteState, voidReceiver)
}

func (client *rpcClient) BlockUntilSynchronizedBeforeSuiteData() (types.SpecState, []byte, error) {
	var beforeSuiteState BeforeSuiteState
	err := client.poll("Server.BeforeSuiteState", &beforeSuiteState)
	if err == ErrorGone {
		return types.SpecStateInvalid, nil, types.GinkgoErrors.SynchronizedBeforeSuiteDisappearedOnProc1()
	}
	return beforeSuiteState.State, beforeSuiteState.Data, err
}

func (client *rpcClient) BlockUntilNonprimaryProcsHaveFinished() error {
	return client.poll("Server.HaveNonprimaryProcsFinished", voidReceiver)
}

func (client *rpcClient) BlockUntilAggregatedNonprimaryProcsReport() (types.Report, error) {
	var report types.Report
	err := client.poll("Server.AggregatedNonprimaryProcsReport", &report)
	if err == ErrorGone {
		return types.Report{}, types.GinkgoErrors.AggregatedReportUnavailableDueToNodeDisappearing()
	}
	return report, err
}

func (client *rpcClient) FetchNextCounter() (int, error) {
	var counter int
	err := client.client.Call("Server.Counter", voidSender, &counter)
	return counter, err
}

func (client *rpcClient) PostAbort() error {
	return client.client.Call("Server.Abort", voidSender, voidReceiver)
}

func (client *rpcClient) ShouldAbort() bool {
	var shouldAbort bool
	client.client.Call("Server.ShouldAbort", voidSender, &shouldAbort)
	return shouldAbort
}
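The `switch err.Error()` in `poll` looks odd until you see why: net/rpc serializes a server-returned error as a plain string, so error identity is lost across the wire. A small runnable sketch of that behavior (local types, not the vendored ones):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/rpc"
)

type Void struct{}

var ErrEarly = errors.New("early")

type Handler struct{}

// RPC-compatible method that always reports "not ready yet".
func (h *Handler) Ready(_ Void, _ *Void) error { return ErrEarly }

func main() {
	server := rpc.NewServer()
	server.RegisterName("Server", &Handler{})

	listener, _ := net.Listen("tcp", "127.0.0.1:0")
	go server.Accept(listener)

	client, _ := rpc.Dial("tcp", listener.Addr().String())
	err := client.Call("Server.Ready", Void{}, &Void{})

	fmt.Println(errors.Is(err, ErrEarly))       // false: identity is lost in transit
	fmt.Println(err.Error() == ErrEarly.Error()) // true: only the string survives
}
```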
75
vendor/github.com/onsi/ginkgo/v2/internal/parallel_support/rpc_server.go
generated
vendored
Normal file
@ -0,0 +1,75 @@
/*
The remote package provides the pieces to allow Ginkgo test suites to report to remote listeners.
This is used, primarily, to enable streaming parallel test output but has, in principle, broader applications (e.g. streaming test output to a browser).
*/

package parallel_support

import (
	"io"
	"net"
	"net/http"
	"net/rpc"

	"github.com/onsi/ginkgo/v2/reporters"
)

/*
RPCServer spins up on an automatically selected port and listens for communication from the forwarding reporter.
It then forwards that communication to attached reporters.
*/
type RPCServer struct {
	listener net.Listener
	handler  *ServerHandler
}

// Create a new server, automatically selecting a port
func newRPCServer(parallelTotal int, reporter reporters.Reporter) (*RPCServer, error) {
	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return nil, err
	}
	return &RPCServer{
		listener: listener,
		handler:  newServerHandler(parallelTotal, reporter),
	}, nil
}

// Start the server. You don't need to `go s.Start()`, just `s.Start()`
func (server *RPCServer) Start() {
	rpcServer := rpc.NewServer()
	rpcServer.RegisterName("Server", server.handler) // register the handler's methods as the server

	httpServer := &http.Server{}
	httpServer.Handler = rpcServer

	go httpServer.Serve(server.listener)
}

// Stop the server
func (server *RPCServer) Close() {
	server.listener.Close()
}

// The address at which the server can be reached. Pass this into the `ForwardingReporter`.
func (server *RPCServer) Address() string {
	return server.listener.Addr().String()
}

func (server *RPCServer) GetSuiteDone() chan interface{} {
	return server.handler.done
}

func (server *RPCServer) GetOutputDestination() io.Writer {
	return server.handler.outputDestination
}

func (server *RPCServer) SetOutputDestination(w io.Writer) {
	server.handler.outputDestination = w
}

func (server *RPCServer) RegisterAlive(node int, alive func() bool) {
	server.handler.registerAlive(node, alive)
}
234
vendor/github.com/onsi/ginkgo/v2/internal/parallel_support/server_handler.go
generated
vendored
Normal file
@ -0,0 +1,234 @@
package parallel_support

import (
	"io"
	"os"
	"sync"

	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

type Void struct{}

var voidReceiver *Void = &Void{}
var voidSender Void

// ServerHandler is an RPC-compatible handler that is shared between the http server and the rpc server.
// It handles all the business logic to avoid duplication between the two servers
type ServerHandler struct {
	done                   chan interface{}
	outputDestination      io.Writer
	reporter               reporters.Reporter
	alives                 []func() bool
	lock                   *sync.Mutex
	beforeSuiteState       BeforeSuiteState
	reportBeforeSuiteState types.SpecState
	parallelTotal          int
	counter                int
	counterLock            *sync.Mutex
	shouldAbort            bool

	numSuiteDidBegins int
	numSuiteDidEnds   int
	aggregatedReport  types.Report
	reportHoldingArea []types.SpecReport
}

func newServerHandler(parallelTotal int, reporter reporters.Reporter) *ServerHandler {
	return &ServerHandler{
		reporter:         reporter,
		lock:             &sync.Mutex{},
		counterLock:      &sync.Mutex{},
		alives:           make([]func() bool, parallelTotal),
		beforeSuiteState: BeforeSuiteState{Data: nil, State: types.SpecStateInvalid},

		parallelTotal:     parallelTotal,
		outputDestination: os.Stdout,
		done:              make(chan interface{}),
	}
}

func (handler *ServerHandler) SpecSuiteWillBegin(report types.Report, _ *Void) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()

	handler.numSuiteDidBegins += 1

	// all summaries are identical, so it's fine to simply emit the last one of these
	if handler.numSuiteDidBegins == handler.parallelTotal {
		handler.reporter.SuiteWillBegin(report)

		for _, summary := range handler.reportHoldingArea {
			handler.reporter.WillRun(summary)
			handler.reporter.DidRun(summary)
		}

		handler.reportHoldingArea = nil
	}

	return nil
}

func (handler *ServerHandler) DidRun(report types.SpecReport, _ *Void) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()

	if handler.numSuiteDidBegins == handler.parallelTotal {
		handler.reporter.WillRun(report)
		handler.reporter.DidRun(report)
	} else {
		handler.reportHoldingArea = append(handler.reportHoldingArea, report)
	}

	return nil
}

func (handler *ServerHandler) SpecSuiteDidEnd(report types.Report, _ *Void) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()

	handler.numSuiteDidEnds += 1
	if handler.numSuiteDidEnds == 1 {
		handler.aggregatedReport = report
	} else {
		handler.aggregatedReport = handler.aggregatedReport.Add(report)
	}

	if handler.numSuiteDidEnds == handler.parallelTotal {
		handler.reporter.SuiteDidEnd(handler.aggregatedReport)
		close(handler.done)
	}

	return nil
}

func (handler *ServerHandler) EmitOutput(output []byte, n *int) error {
	var err error
	*n, err = handler.outputDestination.Write(output)
	return err
}

func (handler *ServerHandler) EmitProgressReport(report types.ProgressReport, _ *Void) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()
	handler.reporter.EmitProgressReport(report)
	return nil
}

func (handler *ServerHandler) registerAlive(proc int, alive func() bool) {
	handler.lock.Lock()
	defer handler.lock.Unlock()
	handler.alives[proc-1] = alive
}

func (handler *ServerHandler) procIsAlive(proc int) bool {
	handler.lock.Lock()
	defer handler.lock.Unlock()
	alive := handler.alives[proc-1]
	if alive == nil {
		return true
	}
	return alive()
}

func (handler *ServerHandler) haveNonprimaryProcsFinished() bool {
	for i := 2; i <= handler.parallelTotal; i++ {
		if handler.procIsAlive(i) {
			return false
		}
	}
	return true
}

func (handler *ServerHandler) ReportBeforeSuiteCompleted(reportBeforeSuiteState types.SpecState, _ *Void) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()
	handler.reportBeforeSuiteState = reportBeforeSuiteState

	return nil
}

func (handler *ServerHandler) ReportBeforeSuiteState(_ Void, reportBeforeSuiteState *types.SpecState) error {
	proc1IsAlive := handler.procIsAlive(1)
	handler.lock.Lock()
	defer handler.lock.Unlock()
	if handler.reportBeforeSuiteState == types.SpecStateInvalid {
		if proc1IsAlive {
			return ErrorEarly
		} else {
			return ErrorGone
		}
	}
	*reportBeforeSuiteState = handler.reportBeforeSuiteState
	return nil
}

func (handler *ServerHandler) BeforeSuiteCompleted(beforeSuiteState BeforeSuiteState, _ *Void) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()
	handler.beforeSuiteState = beforeSuiteState

	return nil
}

func (handler *ServerHandler) BeforeSuiteState(_ Void, beforeSuiteState *BeforeSuiteState) error {
	proc1IsAlive := handler.procIsAlive(1)
	handler.lock.Lock()
	defer handler.lock.Unlock()
	if handler.beforeSuiteState.State == types.SpecStateInvalid {
		if proc1IsAlive {
			return ErrorEarly
		} else {
			return ErrorGone
		}
	}
	*beforeSuiteState = handler.beforeSuiteState
	return nil
}

func (handler *ServerHandler) HaveNonprimaryProcsFinished(_ Void, _ *Void) error {
	if handler.haveNonprimaryProcsFinished() {
		return nil
	} else {
		return ErrorEarly
	}
}

func (handler *ServerHandler) AggregatedNonprimaryProcsReport(_ Void, report *types.Report) error {
	if handler.haveNonprimaryProcsFinished() {
		handler.lock.Lock()
		defer handler.lock.Unlock()
		if handler.numSuiteDidEnds == handler.parallelTotal-1 {
			*report = handler.aggregatedReport
			return nil
		} else {
			return ErrorGone
		}
	} else {
		return ErrorEarly
	}
}

func (handler *ServerHandler) Counter(_ Void, counter *int) error {
	handler.counterLock.Lock()
	defer handler.counterLock.Unlock()
	*counter = handler.counter
	handler.counter++
	return nil
}

func (handler *ServerHandler) Abort(_ Void, _ *Void) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()
	handler.shouldAbort = true
	return nil
}

func (handler *ServerHandler) ShouldAbort(_ Void, shouldAbort *bool) error {
	handler.lock.Lock()
	defer handler.lock.Unlock()
	*shouldAbort = handler.shouldAbort
	return nil
}
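The design worth noting here is the method shape: every handler method takes (args, reply pointer) and returns an error, which is exactly what net/rpc can register directly, while the HTTP server simply calls the same methods from its endpoint functions. A sketch of that sharing pattern with local types (not the vendored handler):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync"
)

type Void struct{}

type Handler struct {
	mu      sync.Mutex
	counter int
}

// RPC-compatible shape: exported method, (args, *reply), returns error.
func (h *Handler) Counter(_ Void, counter *int) error {
	h.mu.Lock()
	defer h.mu.Unlock()
	*counter = h.counter
	h.counter++
	return nil
}

func main() {
	h := &Handler{}

	// HTTP flavor: the endpoint just delegates to the shared method.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var n int
		if err := h.Counter(Void{}, &n); err != nil {
			w.WriteHeader(http.StatusInternalServerError)
			return
		}
		json.NewEncoder(w).Encode(n)
	}))
	defer srv.Close()

	for i := 0; i < 3; i++ {
		resp, _ := http.Get(srv.URL)
		var n int
		json.NewDecoder(resp.Body).Decode(&n)
		resp.Body.Close()
		fmt.Println("counter:", n) // 0, 1, 2
	}
}
```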
287
vendor/github.com/onsi/ginkgo/v2/internal/progress_report.go
generated
vendored
Normal file
@ -0,0 +1,287 @@
package internal

import (
	"bufio"
	"bytes"
	"context"
	"fmt"
	"io"
	"os"
	"os/signal"
	"path/filepath"
	"runtime"
	"strconv"
	"strings"
	"time"

	"github.com/onsi/ginkgo/v2/types"
)

var _SOURCE_CACHE = map[string][]string{}

type ProgressSignalRegistrar func(func()) context.CancelFunc

func RegisterForProgressSignal(handler func()) context.CancelFunc {
	signalChannel := make(chan os.Signal, 1)
	if len(PROGRESS_SIGNALS) > 0 {
		signal.Notify(signalChannel, PROGRESS_SIGNALS...)
	}
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		for {
			select {
			case <-signalChannel:
				handler()
			case <-ctx.Done():
				signal.Stop(signalChannel)
				return
			}
		}
	}()

	return cancel
}
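`RegisterForProgressSignal` wraps a common pattern: subscribe to OS signals, dispatch to a handler in a goroutine, and hand back a `context.CancelFunc` that unsubscribes. A standalone sketch of the same pattern, assuming a unix platform and using SIGUSR1 as a stand-in for the platform-specific `PROGRESS_SIGNALS` set:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func registerForSignal(handler func()) context.CancelFunc {
	signalChannel := make(chan os.Signal, 1)
	signal.Notify(signalChannel, syscall.SIGUSR1)
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		for {
			select {
			case <-signalChannel:
				handler()
			case <-ctx.Done():
				signal.Stop(signalChannel) // unsubscribe on cancel
				return
			}
		}
	}()
	return cancel
}

func main() {
	cancel := registerForSignal(func() { fmt.Println("progress report requested") })
	defer cancel()

	// Send ourselves the signal to exercise the handler.
	syscall.Kill(os.Getpid(), syscall.SIGUSR1)
	time.Sleep(100 * time.Millisecond)
}
```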
type ProgressStepCursor struct {
	Text         string
	CodeLocation types.CodeLocation
	StartTime    time.Time
}

func NewProgressReport(isRunningInParallel bool, report types.SpecReport, currentNode Node, currentNodeStartTime time.Time, currentStep types.SpecEvent, gwOutput string, timelineLocation types.TimelineLocation, additionalReports []string, sourceRoots []string, includeAll bool) (types.ProgressReport, error) {
	pr := types.ProgressReport{
		ParallelProcess:         report.ParallelProcess,
		RunningInParallel:       isRunningInParallel,
		ContainerHierarchyTexts: report.ContainerHierarchyTexts,
		LeafNodeText:            report.LeafNodeText,
		LeafNodeLocation:        report.LeafNodeLocation,
		SpecStartTime:           report.StartTime,

		CurrentNodeType:      currentNode.NodeType,
		CurrentNodeText:      currentNode.Text,
		CurrentNodeLocation:  currentNode.CodeLocation,
		CurrentNodeStartTime: currentNodeStartTime,

		CurrentStepText:      currentStep.Message,
		CurrentStepLocation:  currentStep.CodeLocation,
		CurrentStepStartTime: currentStep.TimelineLocation.Time,

		AdditionalReports: additionalReports,

		CapturedGinkgoWriterOutput: gwOutput,
		TimelineLocation:           timelineLocation,
	}

	goroutines, err := extractRunningGoroutines()
	if err != nil {
		return pr, err
	}
	pr.Goroutines = goroutines

	// now we want to try to find goroutines of interest. these will be goroutines that have any function calls with code in packagesOfInterest:
	packagesOfInterest := map[string]bool{}
	packageFromFilename := func(filename string) string {
		return filepath.Dir(filename)
	}
	addPackageFor := func(filename string) {
		if filename != "" {
			packagesOfInterest[packageFromFilename(filename)] = true
		}
	}
	isPackageOfInterest := func(filename string) bool {
		stackPackage := packageFromFilename(filename)
		for packageOfInterest := range packagesOfInterest {
			if strings.HasPrefix(stackPackage, packageOfInterest) {
				return true
			}
		}
		return false
	}
	for _, location := range report.ContainerHierarchyLocations {
		addPackageFor(location.FileName)
	}
	addPackageFor(report.LeafNodeLocation.FileName)
	addPackageFor(currentNode.CodeLocation.FileName)
	addPackageFor(currentStep.CodeLocation.FileName)

	// First, we find the SpecGoroutine - this will be the goroutine that includes `runNode`
	specGoRoutineIdx := -1
	runNodeFunctionCallIdx := -1
OUTER:
	for goroutineIdx, goroutine := range pr.Goroutines {
		for functionCallIdx, functionCall := range goroutine.Stack {
			if strings.Contains(functionCall.Function, "ginkgo/v2/internal.(*Suite).runNode.func") {
				specGoRoutineIdx = goroutineIdx
				runNodeFunctionCallIdx = functionCallIdx
				break OUTER
			}
		}
	}

	// Now, we find the first non-Ginkgo function call
	if specGoRoutineIdx > -1 {
		for runNodeFunctionCallIdx >= 0 {
			fn := goroutines[specGoRoutineIdx].Stack[runNodeFunctionCallIdx].Function
			file := goroutines[specGoRoutineIdx].Stack[runNodeFunctionCallIdx].Filename
			// these are all things that could potentially happen from within ginkgo
			if strings.Contains(fn, "ginkgo/v2/internal") || strings.Contains(fn, "reflect.Value") || strings.Contains(file, "ginkgo/table_dsl") || strings.Contains(file, "ginkgo/core_dsl") {
				runNodeFunctionCallIdx--
				continue
			}
			// found it! let's add its package of interest
			addPackageFor(goroutines[specGoRoutineIdx].Stack[runNodeFunctionCallIdx].Filename)
			break
		}
	}

	ginkgoEntryPointIdx := -1
OUTER_GINKGO_ENTRY_POINT:
	for goroutineIdx, goroutine := range pr.Goroutines {
		for _, functionCall := range goroutine.Stack {
			if strings.Contains(functionCall.Function, "ginkgo/v2.RunSpecs") {
				ginkgoEntryPointIdx = goroutineIdx
				break OUTER_GINKGO_ENTRY_POINT
			}
		}
	}

	// Now we go through all goroutines and highlight any lines with packages in `packagesOfInterest`
	// Any goroutines with highlighted lines end up in the set of highlighted goroutines
	for goroutineIdx, goroutine := range pr.Goroutines {
		if goroutineIdx == ginkgoEntryPointIdx {
			continue
		}
		if goroutineIdx == specGoRoutineIdx {
			pr.Goroutines[goroutineIdx].IsSpecGoroutine = true
		}
		for functionCallIdx, functionCall := range goroutine.Stack {
			if isPackageOfInterest(functionCall.Filename) {
				goroutine.Stack[functionCallIdx].Highlight = true
				goroutine.Stack[functionCallIdx].Source, goroutine.Stack[functionCallIdx].SourceHighlight = fetchSource(functionCall.Filename, functionCall.Line, 2, sourceRoots)
			}
		}
	}

	if !includeAll {
		goroutines := []types.Goroutine{pr.SpecGoroutine()}
		goroutines = append(goroutines, pr.HighlightedGoroutines()...)
|
||||
pr.Goroutines = goroutines
|
||||
}
|
||||
|
||||
return pr, nil
|
||||
}
|
||||
|
||||
func extractRunningGoroutines() ([]types.Goroutine, error) {
|
||||
var stack []byte
|
||||
for size := 64 * 1024; ; size *= 2 {
|
||||
stack = make([]byte, size)
|
||||
if n := runtime.Stack(stack, true); n < size {
|
||||
stack = stack[:n]
|
||||
break
|
||||
}
|
||||
}
|
||||
r := bufio.NewReader(bytes.NewReader(stack))
|
||||
out := []types.Goroutine{}
|
||||
idx := -1
|
||||
for {
|
||||
line, err := r.ReadString('\n')
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
|
||||
line = strings.TrimSuffix(line, "\n")
|
||||
|
||||
//skip blank lines
|
||||
if line == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
//parse headers for new goroutine frames
|
||||
if strings.HasPrefix(line, "goroutine") {
|
||||
out = append(out, types.Goroutine{})
|
||||
idx = len(out) - 1
|
||||
|
||||
line = strings.TrimPrefix(line, "goroutine ")
|
||||
line = strings.TrimSuffix(line, ":")
|
||||
fields := strings.SplitN(line, " ", 2)
|
||||
if len(fields) != 2 {
|
||||
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid goroutine frame header: %s", line))
|
||||
}
|
||||
out[idx].ID, err = strconv.ParseUint(fields[0], 10, 64)
|
||||
if err != nil {
|
||||
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid goroutine ID: %s", fields[1]))
|
||||
}
|
||||
|
||||
out[idx].State = strings.TrimSuffix(strings.TrimPrefix(fields[1], "["), "]")
|
||||
continue
|
||||
}
|
||||
|
||||
//if we are here we must be at a function call entry in the stack
|
||||
functionCall := types.FunctionCall{
|
||||
Function: strings.TrimPrefix(line, "created by "), // no need to track 'created by'
|
||||
}
|
||||
|
||||
line, err = r.ReadString('\n')
|
||||
line = strings.TrimSuffix(line, "\n")
|
||||
if err == io.EOF {
|
||||
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid function call: %s -- missing file name and line number", functionCall.Function))
|
||||
}
|
||||
line = strings.TrimLeft(line, " \t")
|
||||
delimiterIdx := strings.LastIndex(line, ":")
|
||||
if delimiterIdx == -1 {
|
||||
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid filename and line number: %s", line))
|
||||
}
|
||||
functionCall.Filename = line[:delimiterIdx]
|
||||
line = strings.Split(line[delimiterIdx+1:], " ")[0]
|
||||
lineNumber, err := strconv.ParseInt(line, 10, 64)
|
||||
functionCall.Line = int(lineNumber)
|
||||
if err != nil {
|
||||
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid function call line number: %s\n%s", line, err.Error()))
|
||||
}
|
||||
out[idx].Stack = append(out[idx].Stack, functionCall)
|
||||
}
|
||||
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func fetchSource(filename string, lineNumber int, span int, configuredSourceRoots []string) ([]string, int) {
|
||||
if filename == "" {
|
||||
return []string{}, 0
|
||||
}
|
||||
|
||||
var lines []string
|
||||
var ok bool
|
||||
if lines, ok = _SOURCE_CACHE[filename]; !ok {
|
||||
sourceRoots := []string{""}
|
||||
sourceRoots = append(sourceRoots, configuredSourceRoots...)
|
||||
var data []byte
|
||||
var err error
|
||||
var found bool
|
||||
for _, root := range sourceRoots {
|
||||
data, err = os.ReadFile(filepath.Join(root, filename))
|
||||
if err == nil {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
return []string{}, 0
|
||||
}
|
||||
lines = strings.Split(string(data), "\n")
|
||||
_SOURCE_CACHE[filename] = lines
|
||||
}
|
||||
|
||||
startIndex := lineNumber - span - 1
|
||||
endIndex := startIndex + span + span + 1
|
||||
if startIndex < 0 {
|
||||
startIndex = 0
|
||||
}
|
||||
if endIndex > len(lines) {
|
||||
endIndex = len(lines)
|
||||
}
|
||||
highlightIndex := lineNumber - 1 - startIndex
|
||||
return lines[startIndex:endIndex], highlightIndex
|
||||
}
|
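extractRunningGoroutines (above) relies on runtime.Stack's contract that it truncates when the buffer is too small, so the capture is retried with a doubled buffer until the dump fits. A standalone sketch of just that capture loop:

```go
// Standalone illustration of the buffer-doubling pattern used by
// extractRunningGoroutines: runtime.Stack returns the number of bytes
// written, and only n < len(buf) guarantees the dump was not truncated.
package main

import (
	"fmt"
	"runtime"
)

func captureAllStacks() []byte {
	for size := 64 * 1024; ; size *= 2 {
		buf := make([]byte, size)
		// The second argument requests stacks for *all* goroutines.
		if n := runtime.Stack(buf, true); n < size {
			return buf[:n] // fit completely; return the trimmed dump
		}
	}
}

func main() {
	dump := captureAllStacks()
	fmt.Printf("captured %d bytes of goroutine stacks\n", len(dump))
}
```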
11 vendor/github.com/onsi/ginkgo/v2/internal/progress_report_bsd.go generated vendored Normal file
@ -0,0 +1,11 @@
//go:build freebsd || openbsd || netbsd || darwin || dragonfly
// +build freebsd openbsd netbsd darwin dragonfly

package internal

import (
	"os"
	"syscall"
)

var PROGRESS_SIGNALS = []os.Signal{syscall.SIGINFO, syscall.SIGUSR1}
11 vendor/github.com/onsi/ginkgo/v2/internal/progress_report_unix.go generated vendored Normal file
@ -0,0 +1,11 @@
//go:build linux || solaris
// +build linux solaris

package internal

import (
	"os"
	"syscall"
)

var PROGRESS_SIGNALS = []os.Signal{syscall.SIGUSR1}
8 vendor/github.com/onsi/ginkgo/v2/internal/progress_report_win.go generated vendored Normal file
@ -0,0 +1,8 @@
//go:build windows
// +build windows

package internal

import "os"

var PROGRESS_SIGNALS = []os.Signal{}
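Taken together, the three files above select which OS signals trigger an on-demand progress report: SIGINFO/SIGUSR1 on the BSDs and Darwin, SIGUSR1 on Linux and Solaris, and none on Windows. A unix-only sketch of a process delivering that signal to itself (in practice you would signal a running test process from another terminal, e.g. `kill -USR1 <pid>`):

```go
// Unix-only sketch: deliver SIGUSR1, the signal registered by the
// linux/solaris and BSD variants above, to the current process.
package main

import (
	"os"
	"syscall"
	"time"
)

func main() {
	_ = syscall.Kill(os.Getpid(), syscall.SIGUSR1)
	time.Sleep(100 * time.Millisecond) // give any installed handler a moment to run
}
```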
39 vendor/github.com/onsi/ginkgo/v2/internal/report_entry.go generated vendored Normal file
@ -0,0 +1,39 @@
package internal

import (
	"time"

	"github.com/onsi/ginkgo/v2/types"
)

type ReportEntry = types.ReportEntry

func NewReportEntry(name string, cl types.CodeLocation, args ...interface{}) (ReportEntry, error) {
	out := ReportEntry{
		Visibility: types.ReportEntryVisibilityAlways,
		Name:       name,
		Location:   cl,
		Time:       time.Now(),
	}
	var didSetValue = false
	for _, arg := range args {
		switch x := arg.(type) {
		case types.ReportEntryVisibility:
			out.Visibility = x
		case types.CodeLocation:
			out.Location = x
		case Offset:
			out.Location = types.NewCodeLocation(2 + int(x))
		case time.Time:
			out.Time = x
		default:
			if didSetValue {
				return ReportEntry{}, types.GinkgoErrors.TooManyReportEntryValues(out.Location, arg)
			}
			out.Value = types.WrapEntryValue(arg)
			didSetValue = true
		}
	}

	return out, nil
}
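NewReportEntry dispatches on the dynamic type of each variadic argument: visibility, code location, Offset, and time.Time arguments override defaults, and at most one remaining value becomes the entry's payload. This surfaces through the public AddReportEntry DSL; a minimal sketch of a spec attaching a payload with a non-default visibility:

```go
// Sketch of the user-facing counterpart: AddReportEntry routes its variadic
// arguments through NewReportEntry, so a visibility constant and a single
// payload value can be mixed freely.
package example_test

import (
	. "github.com/onsi/ginkgo/v2"
)

var _ = It("records a benchmark number", func() {
	// Only shown on failure or in verbose mode; 1234.5 becomes the entry's Value.
	AddReportEntry("requests-per-second", ReportEntryVisibilityFailureOrVerbose, 1234.5)
})
```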
87 vendor/github.com/onsi/ginkgo/v2/internal/spec.go generated vendored Normal file
@ -0,0 +1,87 @@
package internal

import (
	"strings"
	"time"

	"github.com/onsi/ginkgo/v2/types"
)

type Spec struct {
	Nodes Nodes
	Skip  bool
}

func (s Spec) SubjectID() uint {
	return s.Nodes.FirstNodeWithType(types.NodeTypeIt).ID
}

func (s Spec) Text() string {
	texts := []string{}
	for i := range s.Nodes {
		if s.Nodes[i].Text != "" {
			texts = append(texts, s.Nodes[i].Text)
		}
	}
	return strings.Join(texts, " ")
}

func (s Spec) FirstNodeWithType(nodeTypes types.NodeType) Node {
	return s.Nodes.FirstNodeWithType(nodeTypes)
}

// FlakeAttempts returns the last non-zero FlakeAttempts setting in the
// spec's node list, so the most deeply nested setting wins.
func (s Spec) FlakeAttempts() int {
	flakeAttempts := 0
	for i := range s.Nodes {
		if s.Nodes[i].FlakeAttempts > 0 {
			flakeAttempts = s.Nodes[i].FlakeAttempts
		}
	}

	return flakeAttempts
}

// MustPassRepeatedly likewise returns the last non-zero setting in the node list.
func (s Spec) MustPassRepeatedly() int {
	mustPassRepeatedly := 0
	for i := range s.Nodes {
		if s.Nodes[i].MustPassRepeatedly > 0 {
			mustPassRepeatedly = s.Nodes[i].MustPassRepeatedly
		}
	}

	return mustPassRepeatedly
}

func (s Spec) SpecTimeout() time.Duration {
	return s.FirstNodeWithType(types.NodeTypeIt).SpecTimeout
}

type Specs []Spec

func (s Specs) HasAnySpecsMarkedPending() bool {
	for i := range s {
		if s[i].Nodes.HasNodeMarkedPending() {
			return true
		}
	}

	return false
}

func (s Specs) CountWithoutSkip() int {
	n := 0
	for i := range s {
		if !s[i].Skip {
			n += 1
		}
	}
	return n
}

func (s Specs) AtIndices(indices SpecIndices) Specs {
	out := make(Specs, len(indices))
	for i, idx := range indices {
		out[i] = s[idx]
	}
	return out
}
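FlakeAttempts and MustPassRepeatedly both keep overwriting their result with each non-zero value encountered, so a decorator on an inner node shadows one on an enclosing container. A tiny standalone sketch of that last-non-zero-wins rule (the example values are illustrative):

```go
// Standalone sketch of the "last non-zero value wins" rule used by
// Spec.FlakeAttempts and Spec.MustPassRepeatedly. The slice mimics a
// container decorated with FlakeAttempts(3) enclosing an It decorated
// with FlakeAttempts(5).
package main

import "fmt"

func lastNonZero(settings []int) int {
	result := 0
	for _, v := range settings {
		if v > 0 {
			result = v
		}
	}
	return result
}

func main() {
	// outer container -> undecorated node -> inner It, in Nodes order
	fmt.Println(lastNonZero([]int{3, 0, 5})) // prints 5: the innermost setting wins
}
```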
90 vendor/github.com/onsi/ginkgo/v2/internal/spec_context.go generated vendored Normal file
@ -0,0 +1,90 @@
package internal

import (
	"context"
	"sort"
	"sync"

	"github.com/onsi/ginkgo/v2/types"
)

type SpecContext interface {
	context.Context

	SpecReport() types.SpecReport
	AttachProgressReporter(func() string) func()
}

type specContext struct {
	context.Context

	cancel            context.CancelFunc
	lock              *sync.Mutex
	progressReporters map[int]func() string
	prCounter         int

	suite *Suite
}

/*
SpecContext includes a reference to `suite` and embeds itself in itself as a "GINKGO_SPEC_CONTEXT" value.  This allows users to create child Contexts without having down-stream consumers (e.g. Gomega) lose access to the SpecContext and its methods.  This allows us to build extensions on top of Ginkgo that simply take an all-encompassing context.

Note that while SpecContext is used to enforce deadlines by Ginkgo it is not configured as a context.WithDeadline.  Instead, Ginkgo owns responsibility for cancelling the context when the deadline elapses.

This is because Ginkgo needs finer control over when the context is canceled.  Specifically, Ginkgo needs to generate a ProgressReport before it cancels the context to ensure progress is captured where the spec is currently running.  The only way to avoid a race here is to manually control the cancellation.
*/
func NewSpecContext(suite *Suite) *specContext {
	ctx, cancel := context.WithCancel(context.Background())
	sc := &specContext{
		cancel:            cancel,
		suite:             suite,
		lock:              &sync.Mutex{},
		prCounter:         0,
		progressReporters: map[int]func() string{},
	}
	ctx = context.WithValue(ctx, "GINKGO_SPEC_CONTEXT", sc) //yes, yes, the go docs say don't use a string for a key... but we'd rather avoid a circular dependency between Gomega and Ginkgo
	sc.Context = ctx //thank goodness for garbage collectors that can handle circular dependencies

	return sc
}

func (sc *specContext) SpecReport() types.SpecReport {
	return sc.suite.CurrentSpecReport()
}

func (sc *specContext) AttachProgressReporter(reporter func() string) func() {
	sc.lock.Lock()
	defer sc.lock.Unlock()
	sc.prCounter += 1
	prCounter := sc.prCounter
	sc.progressReporters[prCounter] = reporter

	return func() {
		sc.lock.Lock()
		defer sc.lock.Unlock()
		delete(sc.progressReporters, prCounter)
	}
}

func (sc *specContext) QueryProgressReporters() []string {
	sc.lock.Lock()
	keys := []int{}
	for key := range sc.progressReporters {
		keys = append(keys, key)
	}
	sort.Ints(keys)
	reporters := []func() string{}
	for _, key := range keys {
		reporters = append(reporters, sc.progressReporters[key])
	}
	sc.lock.Unlock()

	if len(reporters) == 0 {
		return nil
	}
	out := []string{}
	for _, reporter := range reporters {
		out = append(out, reporter())
	}
	return out
}
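As the doc comment on NewSpecContext explains, the specContext stores itself under the "GINKGO_SPEC_CONTEXT" key, so any context derived from it can be walked back to the SpecContext. A sketch of that retrieval path, with a hypothetical helper name (roughly what a downstream consumer such as Gomega would do):

```go
// Sketch of the retrieval path described above. The helper name
// specContextFrom is an assumption for illustration; the interface is
// trimmed to its context.Context embedding for self-containment.
package sketch

import "context"

type SpecContext interface {
	context.Context
	// trimmed: SpecReport() and AttachProgressReporter(...) as defined above
}

// specContextFrom climbs back to the SpecContext from any derived context.
func specContextFrom(ctx context.Context) (SpecContext, bool) {
	sc, ok := ctx.Value("GINKGO_SPEC_CONTEXT").(SpecContext)
	return sc, ok
}
```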
1000 vendor/github.com/onsi/ginkgo/v2/internal/suite.go generated vendored Normal file
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff