# action.yml
name: promptpex
description: >
Generate tests for a LLM prompt using PromptPex.
<details><summary>Prompt format</summary>
PromptPex accepts prompts formatted in Markdown with a YAML frontmatter
section (optional).
```text
---
...
inputs:
some_input:
type: "string"
---
system:
This is your system prompt.
user:
This is your user prompt.
{{some_input}}
```
- The content of the Markdown is the chat conversation.
`system:` is the system prompt and `user:` is the user prompt.
- The input variables are defined in the frontmatter of the prompt.
- If no input variables are defined, PromptPex will append the generated test
to the user prompt.
### Frontmatter
You can override parts of the test generation
process by providing values in the frontmatter of the prompt (all values are
optional).
```markdown
---
...
promptPex:
inputSpec: "input constraints"
outputRules: "output constraints"
inverseOutputRules: "inverted output constraints"
intent: "intent of the prompt"
instructions:
inputSpec: "Additional input specification instructions"
outputRules: "Additional output rules instructions"
inverseOutputRules: "Additional inverse output rules instructions"
intent: "Additional intent of the prompt"
---
```
</details>
inputs:
prompt:
description: Prompt template to analyze. You can either copy the prompty source
here or provide a prompt file. [prompty](https://prompty.ai/) is a simple
markdown-based format for prompts; prompt.yml is the GitHub Models format.
required: false
effort:
description: Effort level for the test generation. This will influence the
number of tests generated and the complexity of the tests.
required: false
out:
description: Output folder for the generated files. This flag is mostly used
when running promptpex from the CLI.
required: false
cache:
description: Cache all LLM calls. This accelerates experimentation but you may
miss issues due to LLM flakiness.
required: false
default: true
test_run_cache:
description: Cache test run results in files.
required: false
default: true
eval_cache:
description: Cache evaluation results in files.
required: false
evals:
description: Evaluate the test results
required: false
default: true
tests_per_rule:
description: Number of tests to generate per rule. By default, we generate 3
tests to cover each output rule. You can modify this parameter to control
the number of tests generated.
required: false
default: 3
split_rules:
description: Split rules and inverse rules into separate prompts for test generation.
required: false
default: true
max_rules_per_test_generation:
description: Maximum number of rules to use per test generation, which
influences the complexity of the generated tests. Increase this value to
generate tests faster, at the cost of potentially less complex tests.
required: false
default: 3
test_generations:
description: Number of times to amplify the test generation. This parameter
allows generating more tests for the same rules by repeatedly running the
test generation process while asking the LLM to avoid regenerating
existing tests.
required: false
default: 2
runs_per_test:
description: Number of runs to execute per test. During the evaluation phase,
this parameter allows running the same test multiple times to check the
consistency and reliability of the model's output.
required: false
default: 2
disable_safety:
description: Do not include safety system prompts and do not run safety content
service. By default, system safety prompts are included in the prompt and
the content is checked for safety. This option disables both.
required: false
default: false
rate_tests:
description: Generate a report rating the quality of the test set.
required: false
default: false
rules_model:
description: Model used to generate rules (you can also override the model alias
'rules')
required: false
baseline_model:
description: Model used to generate baseline tests
required: false
models_under_test:
description: List of models to run the prompt against; semicolon-separated
required: false
eval_model:
description: List of models to use for test evaluation; semicolon-separated
required: false
eval_model_groundtruth:
description: List of models to use for ground truth evaluation; semicolon-separated
required: false
compliance:
description: Evaluate test result compliance
required: false
default: false
max_tests_to_run:
description: Maximum number of tests to run
required: false
input_spec_instructions:
description: These instructions will be added to the input specification
generation prompt.
required: false
output_rules_instructions:
description: These instructions will be added to the output rules generation prompt.
required: false
inverse_output_rules_instructions:
description: These instructions will be added to the inverse output rules
generation prompt.
required: false
test_expansion_instructions:
description: These instructions will be added to the test expansion generation prompt.
required: false
store_completions:
description: Store chat completions using [stored
completions](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/stored-completions).
required: false
store_model:
description: "Model used to create [stored
completions](https://learn.microsoft.com/en-us/azure/ai-services/openai/h\
ow-to/stored-completions) (you can also override the model alias 'store').
"
required: false
groundtruth:
description: Generate groundtruth for the tests. This will generate a
groundtruth output for each test run.
required: false
default: true
groundtruth_model:
description: Model used to generate groundtruth
required: false
custom_metric:
description: "This prompt will be used to evaluate the test results.
<details><summary>Template</summary>
```text
---
name: Custom Test Result Evaluation
description: |
\ A template for a custom evaluation of the results.
tags:
\ - unlisted
inputs:
\ prompt:
\ type: string
\ description: The prompt to be evaluated.
\ intent:
\ type: string
\ description: The extracted intent of the prompt.
\ inputSpec:
\ type: string
\ description: The input specification for the prompt.
\ rules:
\ type: string
\ description: The rules to be applied for the test generation.
\ input:
\ type: string
\ description: The input to be used with the prompt.
\ output:
\ type: string
\ description: The output from the model execution.
---
system:
## Task
You are a chatbot that helps users evaluate the performance of a model.\
You will be given an evaluation criterion <CRITERIA>, an LLM prompt <PROMPT>,
output rules for the prompt <RULES>, a user input <INPUT>, and <OUTPUT>
from the model.\
Your task is to evaluate the <CRITERIA> based on <PROMPT>, <INPUT>, and
<OUTPUT> provided.
<CRITERIA>
The <OUTPUT> generated by the model complies with the <RULES> and the
<PROMPT> provided.
</CRITERIA>
<PROMPT>
{{prompt}}
</PROMPT>
<RULES>
{{rules}}
</RULES>
## Output
**Binary Decision on Evaluation**: You are required to make a binary
decision based on your evaluation:
- Return \"OK\" if <OUTPUT> is compliant with <CRITERIA>.
- Return \"ERR\" if <OUTPUT> is **not** compliant with <CRITERIA> or if
you are unable to confidently answer.
user:
<INPUT>
{{input}}
</INPUT>
<OUTPUT>
{{output}}
</OUTPUT>
```
</details> \
\ "
required: false
create_eval_runs:
description: Create an Evals run in [OpenAI
Evals](https://platform.openai.com/docs/guides/evals). Requires OpenAI API
key in environment variable `OPENAI_API_KEY`.
required: false
test_expansions:
description: Number of test expansion phases used to generate tests. This will
increase the complexity of the generated tests.
required: false
default: 0
test_samples_count:
description: Number of test samples to include for the rules and test
generation. If test samples are provided, they are injected into the
prompts as few-shot examples.
required: false
test_samples_shuffle:
description: Shuffle the test samples before generating tests for the prompt.
required: false
filter_test_count:
description: Number of tests to include in the filtered output of evalTestCollection.
required: false
files:
description: Files to process, separated by semicolons (;). Supported
extensions: .prompty, .md, .txt, .json, .prompt.yml
required: false
debug:
description: Enable [debug
logging](https://microsoft.github.io/genaiscript/reference/scripts/logging/).
required: false
model_alias:
description: "A YAML-like list of model aliases and model ids: `translation:
github:openai/gpt-4o`"
required: false
openai_api_key:
description: OpenAI API key
required: false
openai_api_base:
description: OpenAI API base URL
required: false
azure_openai_api_endpoint:
description: Azure OpenAI endpoint. In the Azure Portal, open your Azure OpenAI
resource, go to Keys and Endpoint, and copy the Endpoint.
required: false
azure_openai_api_key:
description: Azure OpenAI API key. **You do NOT need this if you are using
Microsoft Entra ID.**
required: false
azure_openai_subscription_id:
description: Azure OpenAI subscription ID to list available deployments
(Microsoft Entra only).
required: false
azure_openai_api_version:
description: Azure OpenAI API version.
required: false
azure_openai_api_credentials:
description: Azure OpenAI API credentials type. Leave as 'default' unless you
have a special Azure setup.
required: false
azure_ai_inference_api_key:
description: Azure AI Inference key
required: false
azure_ai_inference_api_endpoint:
description: Azure Serverless OpenAI endpoint
required: false
azure_ai_inference_api_version:
description: Azure Serverless OpenAI API version
required: false
azure_ai_inference_api_credentials:
description: Azure Serverless OpenAI API credentials type
required: false
github_token:
description: "GitHub token with at least [models:
read](https://microsoft.github.io/genaiscript/reference/github-actions/#g\
ithub-models-permissions) permission."
required: false
outputs:
text:
description: The generated text output.
runs:
using: docker
image: Dockerfile
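# Example usage in a workflow (a minimal sketch, kept as comments so this file
# stays valid YAML; the `uses:` repository/ref, the prompt file path, and the
# input values below are illustrative assumptions, not part of this action
# definition):
#
#   jobs:
#     promptpex:
#       runs-on: ubuntu-latest
#       permissions:
#         models: read
#       steps:
#         - uses: actions/checkout@v4
#         - uses: microsoft/promptpex@main  # hypothetical repository/ref
#           with:
#             files: prompts/summarize.prompty
#             tests_per_rule: 3
#             runs_per_test: 2
#             github_token: ${{ secrets.GITHUB_TOKEN }}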