Enabled download of includes over HTTP (#605) #958
Conversation
* Added option `--git-over-http` (default: false)
* When active, the include parser uses `git sparse-checkout` instead of `git archive`
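For context, the sparse-checkout strategy can be sketched as follows. This is only an illustration against a local throwaway repository, not the actual implementation; all paths and file names below are made up:

```shell
#!/bin/sh
# Sketch (illustrative, not the PR's actual code): fetch a single file
# via a sparse checkout instead of `git archive`.
# A local throwaway repo stands in for the remote; names are made up.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Fake remote with two files.
git init -q remote-repo
echo "include: something" > remote-repo/.gitlab-module.yml
echo "unrelated"          > remote-repo/other.txt
git -C remote-repo add .
git -C remote-repo -c user.email=t@t -c user.name=t commit -qm files

# Clone without populating the worktree, then materialize only the
# file we actually need (classic sparse-checkout interface).
git clone -q --no-checkout "file://$tmp/remote-repo" includes
git -C includes config core.sparseCheckout true
echo ".gitlab-module.yml" > includes/.git/info/sparse-checkout
git -C includes checkout -q

test -f includes/.gitlab-module.yml && ! test -f includes/other.txt \
  && echo "only the include file was materialized"
```

Unlike `git archive --remote`, which many HTTP(S) servers refuse, this path only needs a plain clone-capable remote.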
@docdnp Isn't it possible to auto-detect that the checkout is an "http" checkout? That way we can avoid a specific CLI option that http users need to remember.
Well, yes, could do it via:

@firecow Do you know why the jest tests failed? I'm a bit surprised, and TS isn't actually what I'm familiar with.
* CLI option still available but hidden
@firecow In case you insist on removing the CLI option, I'll do it. I don't see what could break the autodetection. The more I think about it, the option should better be removed. Just let me know.
@docdnp Let's auto-detect it. I'll check out the jest tests. I think we might have a broken test, sometimes 👍
In I suggest you edit the regexp to this

And add a

You will then be able to use the schema as your condition instead of

The reason your jest test is failing is that you don't have a "bash spy" set up in the
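The auto-detection idea boils down to inspecting the origin URL's scheme. A minimal shell sketch of that check (the real detection lives in the project's TypeScript remote parsing; this helper and the repo it sets up are illustrative only):

```shell
#!/bin/sh
# Sketch: decide the fetch strategy from the origin URL's scheme.
# Illustrative only; the real check is done in the TS remote parser.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git remote add origin "http://github.com/docdnp/gitlab-ci-local.git"

url=$(git remote get-url origin)
case "$url" in
  http://*|https://*) echo "http(s) remote -> use sparse checkout" ;;
  *)                  echo "ssh/git remote -> use git archive" ;;
esac
```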
OK. Sounds good, I'll check it out. I've already been playing around with

Thanks for the hint. I'll come back in case of any further questions or obstacles.
I've added
Double-checked the regexp in Perl:

```perl
#!/usr/bin/perl -n
# file: regexp.pl
/ (?:.*?)\s+
  (?<protocol>.*?)
  (?::\/\/|@)
  (?<host>[^:\/]*)
  (:(?<port>\d+)\/|:|\/)
  (?<group>.*)
  \/
  (?<project>[^ .]+)
  (?:\.git)?.*
/x && print "Protocol: ", $+{protocol}, "\nHost    : ", $+{host},
    "\nGroup   : ", $+{group}, "\nProject : ", $+{project}, "\n"
```

using

```perl
#!/usr/bin/perl -n
# file: regexp.pl
/ (?<protocol>.*?)
  (?::\/\/|@)
  (?<host>[^:\/]*)
  (:(?<port>\d+)\/|:|\/)
  (?<group>.*)
  \/
  (?<project>[^ .]+)
  (?:\.git)?
/xx && print "Protocol: ", $+{protocol}, "\nHost    : ", $+{host},
    "\nGroup   : ", $+{group}, "\nProject : ", $+{project}, "\n"
```

Don't know whether

Nevertheless, I doubt that the errors won't appear anymore, but who knows how surprised I'll be if you're right.
Use the regex I pasted.
OK. Done. But I had to correct your suggestion, as it didn't catch pure 'http://' origins. Here's a way to test it out.

Test data is:

```
# file: git.remote.out
origin https://github.com/docdnp/gitlab-ci-local.git (fetch)
origin [email protected]:docdnp/gitlab-ci-local.git (fetch)
origin http://github.com/docdnp/gitlab-ci-local.git (fetch)
```

Script is:

```perl
#!/usr/bin/perl -n
# file: regexp.pl
$reschema = $ENV{use_regexp_firecow} == 1 ? qr((?<schema>https|git|http:)) : qr((?<schema>git|https?));
/ $reschema
  (?::\/\/|@)
  (?<host>[^:\/]*)
  (:(?<port>\d+)\/|:|\/)
  (?<group>.*)
  \/
  (?<project>[^ .]+)
  (?:\.git)?
/xx && print "Schema : '", $+{schema}, "'\nHost   : '", $+{host},
    "'\nGroup  : '", $+{group}, "'\nProject: '", $+{project}, "'\n"
```

Test is:

```
docdnp:gitlab-ci-local $ cat git.remote.out | use_regexp_firecow=1 ./regexp.pl
Schema : 'https'
Host   : 'github.com'
Group  : 'docdnp'
Project: 'gitlab-ci-local'
Schema : 'git'
Host   : 'github.com'
Group  : 'docdnp'
Project: 'gitlab-ci-local'
docdnp:gitlab-ci-local $ cat git.remote.out | use_regexp_firecow=0 ./regexp.pl
Schema : 'https'
Host   : 'github.com'
Group  : 'docdnp'
Project: 'gitlab-ci-local'
Schema : 'git'
Host   : 'github.com'
Group  : 'docdnp'
Project: 'gitlab-ci-local'
Schema : 'http'
Host   : 'github.com'
Group  : 'docdnp'
Project: 'gitlab-ci-local'
```

As far as I can see in the latest test report, my assumption regarding the reason for the test failures was correct, as the error logs don't show anything related to the regexp or my changes. Summary of all failing tests:
```
FAIL tests/test-cases/include-invalid-project-file-ref/integration.include-invalid-project-ref-file.test.ts
  ● include-invalid-project-file-ref
  [...]
  Project include could not be fetched { project: firecow/gitlab-ci-local-includes, ref: HEAD, file: .gitlab-modue.yml }
  - Error: git archive | tar -f - -xC failed horribly
  - this is a core error
  - read it carefully
  + Error: Command failed with exit code 2: git archive --remote=ssh://[email protected]:22/firecow/gitlab-ci-local-includes.git HEAD .gitlab-modue.yml | tar -f - -xC .gitlab-ci-local/includes/gitlab.com/firecow/gitlab-ci-local-includes/HEAD
  + Host key verification failed.
  + fatal: the remote end hung up unexpectedly
  + tar: This does not look like a tar archive
  + tar: Exiting with failure status due to previous errors
```

I assume that there is a basic problem related to credentials or so.

Now I'm trying to reproduce it with my own GitLab repository. Or is there any other way to test this stuff locally to decrease the feedback cycle?
* The test attempted to compare an expected error text with the assertion error's message.
* The assertion error's message might have changed previously, so the test failed.
* Analysis of the current logs of failing tests in the pipeline shows that different reasons might lead to an expected failure when calling 'git archive':
  * the server doesn't exist or isn't reachable (which could time out); the error message then contains:
    - ssh: Could not resolve hostname <non-existing-hostname>: Name or service not known
    - fatal: the remote end hung up unexpectedly
  * the credentials for the server are wrong; the error message then contains:
    - [email protected]: Permission denied (publickey).
    - fatal: the remote end hung up unexpectedly
  * the repository doesn't exist on the server; the error message then contains:
    - ERROR: The project you were looking for could not be found or you don't have permission to view it.
    - fatal: the remote end hung up unexpectedly
  * the file doesn't exist on the server; the error message then contains:
    - remote: fatal: pathspec '.gitlab-modue.yml' did not match any files
* Thus it seems OK to test against 'fatal.*' only.
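All four failure modes above share a `fatal:` (or `remote: fatal:`) marker, so an assertion only needs to match that. A small sketch of such a check, using error text taken from the logs above:

```shell
#!/bin/sh
# Sketch: every listed `git archive` failure mode contains a
# `fatal:` or `remote: fatal:` line, so the test can match on that
# instead of the full, environment-dependent message.
err='Host key verification failed.
fatal: the remote end hung up unexpectedly
tar: This does not look like a tar archive'

if printf '%s\n' "$err" | grep -qE '^(remote: )?fatal'; then
  echo "matched: generic fatal git error"
fi
```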
Yeah, I just call
Yeah. That's the way I'm doing local tests. Actually, I see other tests failing than in the pipeline. That's a bit confusing, as I don't know whether it depends on the changes I made, on other issues, or on the tests being a bit fragile. You opened a pull request on the same issue? Why?
Fix development README.md
Just to test what's going on
Now I'm quite sure that we're suffering from brittle tests. I made an analysis of 10 consecutive test runs. My personal opinion is at the end of this comment. Here are the results in short:

```
Testrun 01: Test Suites: 3 failed, 94 passed, 97 total   Tests: 4 failed, 212 passed, 216 total
Testrun 02: Test Suites: 3 failed, 94 passed, 97 total   Tests: 4 failed, 212 passed, 216 total
Testrun 03: Test Suites: 1 failed, 96 passed, 97 total   Tests: 2 failed, 214 passed, 216 total
Testrun 04: Test Suites: 5 failed, 92 passed, 97 total   Tests: 5 failed, 211 passed, 216 total
Testrun 05: Test Suites: 3 failed, 94 passed, 97 total   Tests: 4 failed, 212 passed, 216 total
Testrun 06: Test Suites: 4 failed, 93 passed, 97 total   Tests: 4 failed, 212 passed, 216 total
Testrun 07: Test Suites: 2 failed, 95 passed, 97 total   Tests: 2 failed, 214 passed, 216 total
Testrun 08: Test Suites: 4 failed, 93 passed, 97 total   Tests: 5 failed, 211 passed, 216 total
Testrun 09: Test Suites: 2 failed, 95 passed, 97 total   Tests: 3 failed, 213 passed, 216 total
Testrun 10: Test Suites: 3 failed, 94 passed, 97 total   Tests: 3 failed, 213 passed, 216 total
```

I also wanted to know in how many test runs each failing test occurred:
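Counting how often each suite failed across runs can be done by grepping the saved run logs. A small sketch, with made-up log files standing in for the saved jest output:

```shell
#!/bin/sh
# Sketch: count per-suite failures across several saved jest run logs.
# The log files and their contents are made up for illustration.
set -e
tmp=$(mktemp -d) && cd "$tmp"
printf 'FAIL tests/a.test.ts\nFAIL tests/b.test.ts\n' > run01.log
printf 'FAIL tests/a.test.ts\n'                       > run02.log

# Most frequently failing suites first.
grep -h '^FAIL' run*.log | sort | uniq -c | sort -rn
```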
Here are some more details on the failing tests involved:

Testrun 01:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.035 s)
FAIL tests/test-cases/artifacts-with-cache/integration.artifacts-with-cache.test.ts (5.261 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.885 s)
Test Suites: 3 failed, 94 passed, 97 total
Tests: 4 failed, 212 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 176.629 s
Testrun 02:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.228 s)
FAIL tests/test-cases/artifacts-with-cache/integration.artifacts-with-cache.test.ts (5.24 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.964 s)
Test Suites: 3 failed, 94 passed, 97 total
Tests: 4 failed, 212 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 185.873 s
Testrun 03:
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (12.336 s)
Test Suites: 1 failed, 96 passed, 97 total
Tests: 2 failed, 214 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 176.578 s
Testrun 04:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.201 s)
FAIL tests/test-cases/artifacts-with-cache/integration.artifacts-with-cache.test.ts (5.263 s)
FAIL tests/test-cases/cache-docker/integration.cache-docker.test.ts (5.247 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.853 s)
FAIL tests/test-cases/parallel-matrix/integration.parallel-matrix.test.ts
Test Suites: 5 failed, 92 passed, 97 total
Tests: 5 failed, 211 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 187.093 s
Testrun 05:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.038 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.84 s)
FAIL tests/test-cases/parallel-matrix/integration.parallel-matrix.test.ts
Test Suites: 3 failed, 94 passed, 97 total
Tests: 4 failed, 212 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 184.324 s
Testrun 06:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.26 s)
FAIL tests/test-cases/artifacts-with-cache/integration.artifacts-with-cache.test.ts (5.324 s)
FAIL tests/test-cases/cache-docker/integration.cache-docker.test.ts (5.247 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.915 s)
Test Suites: 4 failed, 93 passed, 97 total
Tests: 4 failed, 212 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 180.757 s
Testrun 07:
FAIL tests/test-cases/cache-docker/integration.cache-docker.test.ts (5.222 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.92 s)
Test Suites: 2 failed, 95 passed, 97 total
Tests: 2 failed, 214 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 179.786 s
Testrun 08:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.185 s)
FAIL tests/test-cases/artifacts-with-cache/integration.artifacts-with-cache.test.ts (5.222 s)
FAIL tests/test-cases/cache-docker/integration.cache-docker.test.ts (5.225 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (12.01 s)
Test Suites: 4 failed, 93 passed, 97 total
Tests: 5 failed, 211 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 187.418 s
Testrun 09:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.123 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.999 s)
Test Suites: 2 failed, 95 passed, 97 total
Tests: 3 failed, 213 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 178.611 s
Testrun 10:
FAIL tests/test-cases/artifacts-reports-dotenv/integration.artifacts-reports-dotenv.test.ts (6.105 s)
FAIL tests/test-cases/cache-docker/integration.cache-docker.test.ts (5.177 s)
FAIL tests/test-cases/network-alias-build/integration.network-alias-build.test.ts (11.316 s)
Test Suites: 3 failed, 94 passed, 97 total
Tests: 3 failed, 213 passed, 216 total
Snapshots: 4 passed, 4 total
Time: 173.606 s

Do you know what's going on, or do you have any hint where I maybe don't test correctly? I'm always cautious about criticizing when I begin to work in foreign codebases, and assume in the first place that I'm the one who maybe doesn't know exactly what he's doing. But, to be honest, I no longer believe that the code changes enabling sparse checkouts are in any way connected to the failing tests. What is your opinion on that? I would suggest the following:
Are you OK with that?
I also think we have a brittle test in master. I've only been able to get the parallel matrix test to fail occasionally, though. As far as I can see, that's the only one flapping in GitHub Actions as well. Try using these jest CLI options: https://jestjs.io/docs/cli#--runinband

But yeah, let's finish this. Remove your changes to package.json, package-lock.json and the test file, and we are good to go, I think.
Parallel matrix concurrency has been disabled, fixing the flapping test case.
OK. Testing without concurrency and a longer timeout works stably. You mean I should remove ts-node from package.json? And are you sure about reverting the test case? Actually, the changes made the test run through. I'll do the changes a bit later this evening. Just give me a short answer.
Yeah, the test file changes are irrelevant... Just let ts-node stay if you want; I just thought devs installed that globally.
Did what you said. The test integration.include-invalid-project-ref-file.test.ts fails again... as I assumed.
https://github.com/firecow/gitlab-ci-local/pull/960/files Have a look at this.
Well, thank you... I think I've got it running now. Actually, it was just a damn "/" missing in the "target" path. "tar" wasn't happy about this, and thus the test failed. My fault. :-/
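The kind of missing-slash bug described here can be guarded against by normalizing the join. A tiny illustrative sketch (the path segments are made up, not the PR's actual values):

```shell
#!/bin/sh
# Sketch: join two path segments with exactly one slash between them,
# so a missing (or doubled) "/" can't silently produce a wrong tar target.
base=".gitlab-ci-local/includes/"        # may or may not end in "/"
sub="gitlab.com/firecow/repo/HEAD"
target="${base%/}/${sub}"                # strip trailing "/", add one back
echo "$target"   # .gitlab-ci-local/includes/gitlab.com/firecow/repo/HEAD
```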
Looks great. Good job 👍
Great. 👍 Thank you for the hints in the right direction.
No problem at all 👍 Thank you for the contribution.
Should this be removed then? Lines 210 to 212 in cec7ccd
closes #605