Conversation
Force-pushed 528bbcc to 4cea67d
Force-pushed c9101b1 to a57997f
Pull Request Overview
This PR adds comprehensive unit tests for the `uploadSpecifiedFiles` function in the upload library. The tests cover various scenarios including single and multiple SARIF file uploads, category mapping for code quality analysis, SARIF dumping functionality, and different upload modes.
Key Changes:
- Added 7 new test cases covering different upload scenarios
- Created helper functions `setupUploadEnvironment` and `stubUploadDependencies` to reduce test code duplication
- Enhanced the `createMockSarif` helper to support optional automation details for better test flexibility
mbg left a comment
I appreciate the follow-up to #3180 to add tests. I started having a look, but stopped after the first couple of test cases:

- In #3180, the main change was to `uploadPayload`, and I was expecting tests to focus around the behaviour of `uploadPayload`. While it's not currently exported, we have numerous examples of functions that are exported for testing purposes only (see the sketch after this list).
- Tests for `uploadSpecifiedFiles` would of course be welcome, but they would need to focus on the behaviour of `uploadSpecifiedFiles` itself, not primarily on the behaviour of `uploadPayload`.
- The `setupUploadEnvironment` function is largely superfluous; `setupTests` already does this.
- The tests would probably benefit from using a macro to set up what is needed for testing `uploadSpecifiedFiles`, rather than duplicating large chunks of setup code across every test case.
- A lot of the comments are nonsensical (and show a lack of understanding of what's going on) or superfluous.
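A minimal sketch of the "exported for testing" pattern, with a hypothetical signature rather than the real `uploadPayload`:

```ts
// upload-lib.ts — illustrative only; the real uploadPayload has a different signature.

// Exported for testing purposes only; not intended for use outside this module.
export async function uploadPayload(payload: {
  sarif: string;
  commit_sha: string;
}): Promise<string> {
  // The real function sends `payload` to the code scanning API and returns the
  // id of the resulting upload; stubbed here to keep the sketch self-contained.
  return `uploaded-${payload.commit_sha}`;
}
```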
I probably let Copilot a tad too loose on this 😅

@mbg I've decided to ditch all that Copilot had done and (for the moment) only add `uploadPayload` tests.
mbg left a comment
Generally this is a big improvement, thank you! There are a couple of points where the implementation of the tests here differs from established patterns for tests elsewhere in the codebase, so it might be good to consider whether it's worth having those differences or we can align it more with the established patterns. That said, there's nothing inherently wrong here, so I'd be happy to approve this if we think it makes sense to keep these tests as they are now.
src/upload-lib.test.ts
Outdated
```ts
  title: (providedTitle = "", options: { analysis: analyses.AnalysisConfig }) =>
    `uploadPayload - ${options.analysis.name} - ${providedTitle}`,
});

for (const analysis of [CodeScanning, CodeQuality]) {
  test("uploads successfully", uploadPayloadMacro, {
```
Minor: I was going to comment that it would probably make sense to include the analysis name in the test names, but I then saw that this already happens via the macro. I have no strong feelings on which is better. In other test files where we have top-level for-loops, we splice the variables into the title provided to `test`, so it might be good for consistency to do that here as well, but I can see how doing it in the macro is less repetitive.
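For reference, the title-splicing pattern used in those other test files looks roughly like this (simplified; the loop values and test body are placeholders):

```ts
import test from "ava";

// Simplified illustration of splicing the loop variable into the title passed
// to `test`, rather than building the title inside the macro.
const analysisNames = ["code-scanning", "code-quality"];

for (const analysisName of analysisNames) {
  test(`uploadPayload - ${analysisName} - uploads successfully`, async (t) => {
    // A real test would stub the API client and call uploadPayload here.
    t.truthy(analysisName);
  });
}
```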
src/upload-lib.test.ts
Outdated
```ts
  });
}

test("handles error", uploadPayloadMacro, {
```
Minor: maybe a better title would be:

```diff
-test("handles error", uploadPayloadMacro, {
+test("wraps request errors using `wrapApiConfigurationError`", uploadPayloadMacro, {
```
src/upload-lib.test.ts
Outdated
```ts
body: (
  t: ExecutionContext<unknown>,
  upload: () => Promise<string>,
  requestStub: sinon.SinonStub,
  mockData: {
    payload: { sarif: string; commit_sha: string };
    owner: string;
    repo: string;
    response: {
      status: number;
      data: { id: string };
      headers: any;
      url: string;
    };
  },
) => void | Promise<void>;
```
I don't think we have this pattern with a continuation (here: `body`) for the macro to call elsewhere, but I can see why it could be useful. Were there any particular reasons why you chose to do it this way over the established pattern of having inputs/input modifiers and an expected result? (All tests but the last are expected to return a string, so you could have an `expectedId` parameter, and you could just not use the macro for the test where you expect failure.)
Fundamentally I wanted the setup that the macro does to be shared by all the tests. I had a version of this where all the test behaviour was driven by flags in `options` (courtesy of Copilot), but the macro code became a mess of conditionals and the tests weren't as clear. The upload-skipping tests also do some different checking that would be awkward to shove into the macro code.
Also, maybe that could be covered better by a setup function, rather than a macro?
Maybe, considering the test macro is not doing any checks after calling `body`.
yeah, I like that better 🙂
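A rough sketch of that setup-function approach (helper name and stubbed values are hypothetical, not the final implementation):

```ts
import test, { ExecutionContext } from "ava";
import * as sinon from "sinon";

// Hypothetical helper that performs the shared stubbing the macro used to do.
function setupUploadPayloadTest(t: ExecutionContext) {
  const requestStub = sinon.stub().resolves({
    status: 202,
    data: { id: "sarif-id" },
    headers: {},
    url: "",
  });
  t.teardown(() => sinon.restore());
  return { requestStub };
}

test("uploads successfully", async (t) => {
  const { requestStub } = setupUploadPayloadTest(t);
  // A real test would call uploadPayload and assert on requestStub's arguments.
  t.false(requestStub.called);
});
```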
src/upload-lib.test.ts
Outdated
```ts
t: ExecutionContext<unknown>,
options: {
  analysis: analyses.AnalysisConfig;
  body: (
```
Did you have any particular reason for having this `options` object rather than having `analysis` and `body` be parameters of `exec`?
Initially I had more flags that benefited from being named explicitly, but I agree that at this stage it's not really needed.
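For comparison, the parameters-of-`exec` alternative would look roughly like this (simplified, with illustrative types):

```ts
import test, { ExecutionContext } from "ava";

type Body = (t: ExecutionContext) => void | Promise<void>;

// Simplified sketch: `analysis` and `body` as positional macro arguments
// instead of an options object.
const uploadPayloadMacro = test.macro({
  exec: async (t: ExecutionContext, analysis: { name: string }, body: Body) => {
    // Common setup would go here.
    await body(t);
  },
  title: (providedTitle = "", analysis: { name: string }) =>
    `uploadPayload - ${analysis.name} - ${providedTitle}`,
});

test("uploads successfully", uploadPayloadMacro, { name: "code-scanning" }, (t) => {
  t.pass();
});
```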
Risk assessment
Which use cases does this change impact?
None, these are only tests.
How did/will you validate this change?
… `.test.ts` files).
If something goes wrong after this change is released, what are the mitigation and rollback strategies?
How will you know if something goes wrong after this change is released?
Merge / deployment checklist