`hiap-meed/README.md` — 38 additions, 7 deletions
@@ -3,6 +3,7 @@
 `hiap-meed` is a synchronous FastAPI service that implements the MEED prioritization pipeline. It sits between the CityCatalyst frontend and the upstream Global API, fetching city context and action data before running a configurable scoring pipeline.

 See [`docs/service-architecture.md`](docs/service-architecture.md) for the full system diagram.
+See [`docs/prioritization-accuracy-initial-benchmark.md`](docs/prioritization-accuracy-initial-benchmark.md) for the planned validation mechanism of ranking quality.

 ### External API contracts (modeled, integration pending)
@@ -124,7 +128,10 @@ Request body:
 - Single-city and multi-city payloads both use `requestData.cityDataList`.
 - Optional flag: `requestData.createExplanations` controls whether the post-ranking
   explanation stage is executed.
-- `requestData.requestedLanguages` is currently accepted as a list for frontend compatibility, but ranked-action explanations support only one returned language today. The backend uses the first list item as the explanation language and ignores the rest.
+- `requestData.requestedLanguages` controls which explanation languages the backend attempts to return.
+- Canonical explanation generation is always English.
+- If non-English languages are requested, the backend generates English once and then translates from English into each requested target language.
+- Response metadata reports `generated_languages` as the languages actually present in the returned explanation payload.

 Exclusions:
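The `requestedLanguages` contract above can be sketched as a request fragment. This is an illustrative sketch, not the service's actual schema: only the keys quoted in the diff (`requestData`, `cityDataList`, `createExplanations`, `requestedLanguages`) come from the document, and the city entry is invented.

```python
# Hypothetical prioritization-request fragment illustrating the
# requestedLanguages behavior described above.
request_body = {
    "requestData": {
        # Single-city and multi-city payloads both use cityDataList;
        # the entry's contents here are invented for illustration.
        "cityDataList": [{"cityId": "example-city"}],
        "createExplanations": True,
        # English is canonical; non-English entries are translated from English.
        "requestedLanguages": ["en", "es", "pt"],
    }
}

# Per the documented behavior, English is generated once and each
# non-English language becomes a translation target.
targets = [
    lang
    for lang in request_body["requestData"]["requestedLanguages"]
    if lang != "en"
]
print(targets)  # ['es', 'pt']
```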
@@ -219,8 +226,9 @@ Response fields:
 - `alignment_score` (`float`)
 - `feasibility_score` (`float`)
 - `evidence_summary` (`object`): compact explainability snapshot from hard-filter/impact/alignment/feasibility evidence
-- `explanation` (`string | null`): optional qualitative explanation text when `createExplanations=true`
+- `explanations` (`object`): optional explanation texts keyed by language code when `createExplanations=true`
 - `metadata` (`object`): request IDs, timings, counts, and hard-filter evidence.
+- `warnings` (`string[]`): human-readable translation warnings when canonical English inputs appear non-English or mixed-language

 Ranking details:
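A minimal sketch of the response shape implied by the fields above. Only the field names come from the diff; all values, the warning text, and the action ID are illustrative assumptions.

```python
# Illustrative response fragment (a sketch, not the service's schema).
ranked_action = {
    "alignment_score": 0.82,
    "feasibility_score": 0.64,
    "evidence_summary": {},  # compact explainability snapshot (contents omitted)
    # Present when createExplanations=true, keyed by language code.
    "explanations": {"en": "...", "es": "..."},
}
response = {
    "ranked_actions": [ranked_action],
    "metadata": {},  # request IDs, timings, counts, hard-filter evidence
    # Human-readable translation warnings; message text is invented here.
    "warnings": ["canonical explanation for action a1 appears mixed-language"],
}

returned_languages = sorted(ranked_action["explanations"])
print(returned_languages)  # ['en', 'es']
```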
@@ -234,11 +242,26 @@ Explanation stage behavior:

 - Explanations are generated only when `requestData.createExplanations=true`.
 - Explanations are generated from post-ranking evidence and do not change ranks.
-- The explanation stage currently supports one output language only. It resolves that language from the first item in `requestData.requestedLanguages`.
+- Explanations are always authored canonically in English.
+- Requested non-English explanations are translations of the canonical English text.
+- In response metadata, `generated_languages` is the response-level union of explanation languages actually returned across `ranked_actions[].explanations`.
 - Explanations receive the selected `cityStrategicPreferenceCoBenefitKeys` directly.
+- If translation detects that a canonical explanation labeled as English appears non-English or mixed-language, translation still returns results and adds a warning to logs and the API response.
+- That language-check warning is determined internally per action, then aggregated by the backend into the public top-level `warnings` list returned by the API.
 - The backend logs a warning if the final explanation prompt becomes unusually large.
 - If explanation generation fails or times out, the endpoint fails open and
-  returns normal ranking output with `explanation=null`.
+  returns normal ranking output with `explanations={}`.
+
+### 5. Call the explanation translation endpoint
+
+- The endpoint accepts the frontend envelope `ExplanationTranslationApiRequest`.
+- `requestData.sourceLanguage` must be `en`.
+- `requestData.targetLanguages` must contain only non-English target languages.
+- `requestData.rankedActions[*]` includes:
+  - `actionId`
+  - `canonicalExplanation`
+- The endpoint is stateless: the frontend sends the canonical English explanations it wants translated.
+- The endpoint returns only the requested target-language translations, not the original English text.

 Example JSON request bodies (using mock data from `data/`):
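Under the stated contract for the translation endpoint, a request envelope might look like the following sketch. The key names (`requestData`, `sourceLanguage`, `targetLanguages`, `rankedActions`, `actionId`, `canonicalExplanation`) are from the diff; the action content, ID, and target languages are invented for illustration.

```python
# Hypothetical ExplanationTranslationApiRequest envelope following the
# documented contract: sourceLanguage must be "en", targetLanguages must
# be non-English, and each ranked action carries its canonical English text.
translation_request = {
    "requestData": {
        "sourceLanguage": "en",
        "targetLanguages": ["es", "de"],
        "rankedActions": [
            {
                "actionId": "a1",  # illustrative ID
                "canonicalExplanation": "Expands the cycling network.",
            },
        ],
    },
}

data = translation_request["requestData"]
# Simple client-side validation mirroring the documented constraints.
assert data["sourceLanguage"] == "en"
assert all(lang != "en" for lang in data["targetLanguages"])
print(len(data["rankedActions"]))  # 1
```

Because the endpoint is stateless, the frontend is responsible for pairing the returned target-language translations back with the English originals it sent.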
@@ -442,7 +466,14 @@ What each request run folder contains:
 - `llm/explanations_io.json`: explanation-stage LLM request/response artifact (only when explanations are generated successfully)
 - `llm/explanations_prompt.txt`: plain-text rendered user prompt with preserved newlines (only when explanations are generated successfully)
 - `llm/explanations_error.json`: explanation-stage failure artifact with request context and error (only when explanation generation fails)
-- Explanation artifacts and response metadata record both the original `requestedLanguages` list and the single resolved explanation language used for the run.
+- `llm/explanation_translations_io.json`: translation-stage LLM request/response artifact (only when translations are generated successfully)
+- `llm/explanation_translations_prompt.txt`: plain-text rendered translation prompt (only when translations are generated successfully)
+- `llm/explanation_translations_error.json`: translation-stage failure artifact with request context and error (only when translation fails)
+- Prioritization explanation artifacts and response metadata record the original `requestedLanguages`, canonical language `en`, generated languages actually returned in the response, and any translation warnings.
+- Explanation translation artifacts record the source language contract, requested target languages, and any LLM language-check warnings.
 - For the direct other-preference feature, the `alignment` step detail includes evidence such as `resolved_preferred_co_benefits`, `matched_preferred_co_benefits`, and mapping source fields
 - The active request flow does not emit dedicated LLM prompt/response artifact files for Alignment because direct co-benefit selections are deterministic
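As a small illustration of the run-folder layout above, the translation-stage artifact names can be checked with `pathlib`. The artifact paths are from the diff; the run-folder location is a hypothetical example, not the service's actual output directory.

```python
from pathlib import Path

# Translation-stage artifacts named in the run-folder documentation above;
# each is only written on the corresponding success/failure outcome.
expected = [
    "llm/explanation_translations_io.json",
    "llm/explanation_translations_prompt.txt",
    "llm/explanation_translations_error.json",
]

run_dir = Path("runs/example-request")  # illustrative run-folder path
missing = [p for p in expected if not (run_dir / p).exists()]
print(missing)  # lists all three unless such a run folder actually exists
```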