
Update sync-issue-labels-compute.yml#13

Open
manishdait24-bot wants to merge 12 commits into manishdait:main from manishdait24-bot:test2

Conversation

@manishdait24-bot

fixes: #7

manishdait and others added 2 commits April 10, 2026 00:33
Signed-off-by: Manish Dait <daitmanish88@gmail.com>
Signed-off-by: manishdait24-bot <manishdait24@gmail.com>
@codacy-production

codacy-production Bot commented Apr 9, 2026

Up to standards ✅

🟢 Issues: 0 issues

Results:
0 new issues

View in Codacy

TIP: This summary will be updated as you push new changes. Give us feedback.

Signed-off-by: manishdait24-bot <manishdait24@gmail.com>
@coderabbitai

coderabbitai Bot commented Apr 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
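As a sketch, the pause threshold mentioned above can be tuned in the repository's `.coderabbit.yaml`; the key nesting is inferred from the dotted setting name `reviews.auto_review.auto_pause_after_reviewed_commits`, and the value shown is purely illustrative:

```yaml
# .coderabbit.yaml — illustrative fragment; nesting inferred from the
# dotted setting name above, value is an example only
reviews:
  auto_review:
    auto_pause_after_reviewed_commits: 20
```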

Walkthrough

The GitHub Actions workflow for syncing issue labels is refactored to remove manual dispatch capability, consolidate job structure, add concurrency control, simplify linked-issue parsing, and streamline label computation logic with updated artifact handling.

Changes

Cohort / File(s): GitHub Actions Workflow – Sync Issue Labels (.github/workflows/sync-issue-labels-compute.yml)
Summary:
  • Removed the workflow_dispatch trigger and its inputs; consolidated the dispatch-add and compute-labels jobs into a single sync job
  • Added concurrency keyed by workflow name and PR number, with cancellation enabled
  • Simplified linked-issue parsing by removing the MAX_LINKED_ISSUES limit
  • Changed label computation to fetch labels directly from referenced issues (skipping PRs), deduplicating via a discoveredLabels set
  • Updated the artifact payload to { pr_number, labels }; made artifact upload conditional on labels.json existing; renamed the artifact to the constant pr-labels-data

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Linked Issues check — ⚠️ Warning. Issue #7 concerns broken markdown links detected by a scheduled link check, but the code changes restructure a workflow's logic, triggers, and label computation without addressing the specific broken links mentioned in the issue. Resolution: investigate and fix the broken markdown links identified in issue #7's workflow logs rather than restructuring unrelated workflow logic, or clarify whether the workflow restructuring indirectly resolves the link-check failures.
  • Out of Scope Changes check — ⚠️ Warning. The changes extensively refactor the workflow structure (triggers, permissions, jobs, concurrency, artifact handling), which appears unrelated to fixing broken markdown links as claimed in issue #7. Resolution: align the changes with the stated objective of fixing broken markdown links, or open a separate PR for the workflow refactoring with its own issue tracking.
  • Title check — ❓ Inconclusive. The title 'Update sync-issue-labels-compute.yml' is vague and generic, describing only that a workflow file is being updated without conveying the specific purpose or fix being addressed. Resolution: provide a more descriptive title, such as 'Simplify sync-issue-labels-compute workflow and remove dispatch logic' or 'Fix broken markdown links in sync-issue-labels-compute workflow'.
✅ Passed checks (2 passed)
  • Description check — ✅ Passed. The description 'fixes: #7' is extremely minimal but does reference the linked issue about broken markdown links, making it on-topic for the changeset.
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate; skipping the docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: c54291ad-47bb-4e57-aa37-915cd4bbd1ab

📥 Commits

Reviewing files that changed from the base of the PR and between 70b1212 and 7e05647.

📒 Files selected for processing (1)
  • .github/workflows/sync-issue-labels-compute.yml

Comment on lines 28 to +88
```diff
           script: |
-            const MAX_LINKED_ISSUES = Number(process.env.MAX_LINKED_ISSUES || "20");
-
-            function extractLabels(labelData) {
-              const result = [];
-              for (const item of labelData) {
-                const name = typeof item === "string" ? item : item && item.name;
-                if (name && name.trim()) result.push(name.trim());
-              }
-              return result;
-            }
-
-            function extractLinkedIssueNumbers(prBody, owner, repo) {
-              const numbers = new Set();
-              const closingRefRegex = /(?:fix|fixes|fixed|close|closes|closed|resolve|resolves|resolved)\s+(?:([A-Za-z0-9_.-]+)\/([A-Za-z0-9_.-]+))?#(\d+)\b/gi;
-              const lines = String(prBody || "").split(/\r?\n/);
-              for (const line of lines) {
-                let m;
-                while ((m = closingRefRegex.exec(line)) !== null) {
-                  const refOwner = (m[1] || "").toLowerCase();
-                  const refRepo = (m[2] || "").toLowerCase();
-                  if (refOwner && refRepo && (refOwner !== owner.toLowerCase() || refRepo !== repo.toLowerCase())) continue;
-                  numbers.add(Number(m[3]));
-                }
-              }
-              const all = Array.from(numbers);
-              if (all.length > MAX_LINKED_ISSUES) {
-                console.log(`[sync] Limiting linked issue refs from ${all.length} to ${MAX_LINKED_ISSUES}.`);
-              }
-              return all.slice(0, MAX_LINKED_ISSUES);
-            }
-
-            const prNumber = Number(process.env.PR_NUMBER);
-            if (!prNumber) {
-              core.setOutput('has_labels', 'false');
-              core.setOutput('labels', '[]');
-              core.setOutput('pr_number', '');
-              core.setOutput('dry_run', 'true');
-              core.setOutput('is_fork_pr', String(process.env.IS_FORK_PR || 'false'));
-              core.setOutput('source_event', context.eventName);
-              return;
-            }
+            const prNumber = context.payload.pull_request.number;
+            console.log(`--- Processing PR #${prNumber} ---`);

             const { data: prData } = await github.rest.pulls.get({
               owner: context.repo.owner, repo: context.repo.repo, pull_number: prNumber
             });

-            const prAuthor = (prData.user && prData.user.login) || "";
+            const prAuthor = prData.user.login;
             if (/\[bot\]$/i.test(prAuthor) || /dependabot/i.test(prAuthor)) {
-              console.log(`[sync] Skipping bot-authored PR from ${prAuthor}.`);
-              core.setOutput('has_labels', 'false');
-              core.setOutput('labels', '[]');
-              core.setOutput('pr_number', String(prNumber));
-              core.setOutput('dry_run', 'true');
-              core.setOutput('is_fork_pr', String(process.env.IS_FORK_PR || 'false'));
-              core.setOutput('source_event', context.eventName);
+              console.log(`Skipping bot-authored PR (Author: ${prAuthor})`);
               return;
             }

-            const linkedIssues = extractLinkedIssueNumbers(prData.body || "", context.repo.owner, context.repo.repo);
-            if (!linkedIssues.length) {
-              console.log("[sync] No linked issue references found in PR body.");
-              core.setOutput('has_labels', 'false');
-              core.setOutput('labels', '[]');
-              core.setOutput('pr_number', String(prNumber));
-              core.setOutput('dry_run', 'true');
-              core.setOutput('is_fork_pr', String(process.env.IS_FORK_PR || 'false'));
-              core.setOutput('source_event', context.eventName);
+            const regex = /(?:fix|fixes|fixed|close|closes|closed|resolve|resolves|resolved)[:\s]+\s*#(\d+)\b/gi;
+            const issueNumbers = new Set();
+            let match;
+            while ((match = regex.exec(prData.body || "")) !== null) {
+              issueNumbers.add(Number(match[1]));
+            }
+
+            if (issueNumbers.size === 0) {
+              console.log("No linked issues found in the PR description.");
               return;
             }

-            console.log(`[sync] Linked issues: ${linkedIssues.map(n => '#' + n).join(', ')}`);
+            console.log(`Detected linked issues: #${Array.from(issueNumbers).join(', #')}`);

-            const allLabels = [];
-            for (const num of linkedIssues) {
+            const discoveredLabels = new Set();
+            for (const num of issueNumbers) {
               try {
-                const { data } = await github.rest.issues.get({
+                const { data: issue } = await github.rest.issues.get({
                   owner: context.repo.owner, repo: context.repo.repo, issue_number: num
                 });
-                if (data.pull_request) { console.log(`[sync] Skipping #${num}: is a PR reference.`); continue; }
-                const labels = extractLabels(data.labels || []);
-                console.log(`[sync] Issue #${num} labels: ${labels.length ? labels.join(', ') : '(none)'}`);
-                allLabels.push(...labels);
-              } catch (err) {
-                if (err && err.status === 404) { console.log(`[sync] Issue #${num} not found. Skipping.`); continue; }
-                throw err;
+                if (!issue.pull_request) {
+                  const names = (issue.labels || []).map(l => typeof l === 'string' ? l : l.name);
+                  console.log(`Found labels on issue #${num}: [${names.join(', ')}]`);
+                  names.forEach(l => discoveredLabels.add(l));
+                } else {
+                  console.log(`Skipping #${num} because it is a Pull Request, not an Issue.`);
+                }
+              } catch (e) {
+                console.log(`Error fetching labels for issue #${num}: ${e.message}`);
               }
             }

-            const existing = extractLabels(prData.labels || []);
-            const existingSet = new Set(existing);
-            const deduped = Array.from(new Set(allLabels));
-            const toAdd = deduped.filter(l => !existingSet.has(l));
-
-            console.log(`[sync] Existing: ${existing.length ? existing.join(', ') : '(none)'}`);
-            console.log(`[sync] To add: ${toAdd.length ? toAdd.join(', ') : '(none)'}`);
+            const currentLabels = (prData.labels || []).map(l => typeof l === 'string' ? l : l.name);
+            console.log(`Current PR labels: [${currentLabels.join(', ')}]`);

-            const labels = toAdd;
-            const hasLabels = labels.length > 0;
-            core.setOutput('has_labels', String(hasLabels));
-            core.setOutput('labels', JSON.stringify(labels));
-            core.setOutput('pr_number', String(prNumber));
-            core.setOutput('dry_run', String(process.env.REQUESTED_DRY_RUN || 'true'));
-            core.setOutput('is_fork_pr', String(process.env.IS_FORK_PR || 'false'));
-            core.setOutput('source_event', context.eventName);
-            return { has_labels: hasLabels, labels, pr_number: String(prNumber), dry_run: process.env.REQUESTED_DRY_RUN, is_fork_pr: process.env.IS_FORK_PR, source_event: context.eventName };
+            const newLabels = Array.from(discoveredLabels).filter(label => !currentLabels.includes(label));
+
+            if (newLabels.length > 0) {
+              console.log(`New labels to be added: [${newLabels.join(', ')}]`);
+            } else {
+              console.log("No new labels discovered (all found labels are already present).");
+            }
+
+            const result = { pr_number: prNumber, labels: newLabels };
+            fs.writeFileSync('labels.json', JSON.stringify(result));
+            console.log(`Calculated labels: ${newLabels.join(', ')}`);

-      - name: Write labels artifact payload
-        env:
-          LABELS_JSON: ${{ steps.compute.outputs.labels }}
-          PR_NUMBER: ${{ steps.compute.outputs.pr_number }}
-          IS_FORK_PR: ${{ steps.compute.outputs.is_fork_pr }}
-          DRY_RUN: ${{ steps.compute.outputs.dry_run }}
-          SOURCE_EVENT: ${{ steps.compute.outputs.source_event }}
-        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
-        with:
-          script: |
-            const fs = require('fs');
-            const parsed = JSON.parse(process.env.LABELS_JSON || '[]');
-            const payload = {
-              pr_number: Number(process.env.PR_NUMBER || 0),
-              labels: Array.isArray(parsed) ? parsed : [],
-              is_fork_pr: /^true$/i.test(process.env.IS_FORK_PR || ''),
-              dry_run: /^true$/i.test(process.env.DRY_RUN || ''),
-              source_event: process.env.SOURCE_EVENT || '',
-            };
-            fs.writeFileSync('labels.json', JSON.stringify(payload));
-            console.log(`Wrote labels artifact payload for PR #${payload.pr_number}: ${payload.labels.length} labels`);
-
       - name: Upload labels artifact
```


🧹 Nitpick | 🔵 Trivial

Move the label-computation logic into .github/scripts/.

This block now owns parsing, filtering, API iteration, and artifact generation. Extracting it to a dedicated script will keep the workflow YAML focused on orchestration and make the logic easier to test.

As per coding guidelines, "Non-trivial logic belongs in dedicated scripts under .github/scripts/, keeping the workflow YAML focused on orchestration." and "Flag large run: | blocks, inline loops or conditionals, and API calls embedded directly in YAML steps."
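A minimal sketch of that extraction, under stated assumptions (the script path `.github/scripts/compute-labels.js` and its export name are illustrative, not the repo's actual layout): the logic moves into a Node module and the workflow step shrinks to a require-and-call.

```yaml
# Hypothetical step — script path and export shape are illustrative
- name: Compute Labels
  uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
  with:
    script: |
      const computeLabels = require('./.github/scripts/compute-labels.js');
      await computeLabels({ github, context, core });
```

Keeping the module a plain function of `{ github, context, core }` also lets it be unit-tested with stubbed clients, which the inline block cannot be.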

Comment thread .github/workflows/sync-issue-labels-compute.yml Outdated
Signed-off-by: Manish Dait <daitmanish88@gmail.com>
manishdait24-bot and others added 2 commits April 11, 2026 15:24
Signed-off-by: manishdait24-bot <manishdait24@gmail.com>
Signed-off-by: Manish Dait <90558243+manishdait@users.noreply.github.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
.github/workflows/sync-issue-labels-compute.yml (1)

28-88: 🧹 Nitpick | 🔵 Trivial

Move the label-computation logic into .github/scripts/.

This large inline script block handles parsing, filtering, API iteration, and artifact generation. Extracting it to a dedicated script would keep the workflow YAML focused on orchestration and make the logic easier to test.

As per coding guidelines, "Non-trivial logic belongs in dedicated scripts under .github/scripts/, keeping the workflow YAML focused on orchestration."


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 70577347-eee3-436f-bacb-609d378371c4

📥 Commits

Reviewing files that changed from the base of the PR and between 7e05647 and e38371b.

📒 Files selected for processing (1)
  • .github/workflows/sync-issue-labels-compute.yml

Comment thread .github/workflows/sync-issue-labels-compute.yml
manishdait and others added 3 commits April 11, 2026 18:21
Signed-off-by: Manish Dait <daitmanish88@gmail.com>
Signed-off-by: manishdait24-bot <manishdait24@gmail.com>
Signed-off-by: Manish Dait <90558243+manishdait@users.noreply.github.com>
Signed-off-by: manishdait24-bot <manishdait24@gmail.com>
manishdait and others added 2 commits April 11, 2026 20:50
Signed-off-by: Manish Dait <daitmanish88@gmail.com>
Signed-off-by: Manish Dait <90558243+manishdait@users.noreply.github.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 03cdea42-97e1-4c6b-b77c-fa3c5cd08929

📥 Commits

Reviewing files that changed from the base of the PR and between 6a34044 and 1f78b47.

📒 Files selected for processing (1)
  • .github/workflows/sync-issue-labels-compute.yml

Comment on lines +1 to +24
```diff
 name: Sync Linked Issue Labels - Compute

 on:
   pull_request:
-    types: [opened, edited, reopened, synchronize, ready_for_review]
-  workflow_dispatch:
-    inputs:
-      pr_number:
-        description: "PR number to sync labels for"
-        required: true
-        type: number
-      dry-run-enabled:
-        description: "Dry run (log only, do not apply labels)"
-        required: false
-        type: boolean
-        default: true
+    types: [opened, edited, reopened, synchronize]

 permissions:
   actions: write
   pull-requests: read
   issues: read
   contents: read

+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
+  cancel-in-progress: true
+
 jobs:
-  compute-labels:
-    concurrency:
-      group: sync-issue-labels-compute-pr-${{ github.event.pull_request.number || github.event.inputs.pr_number }}
-      cancel-in-progress: true
+  sync:
     runs-on: ubuntu-latest
-    outputs:
-      pr_number: ${{ steps.compute.outputs.pr_number }}
-      dry_run: ${{ steps.compute.outputs.dry_run }}
-      is_fork_pr: ${{ steps.compute.outputs.is_fork_pr }}
     steps:
-      - name: Harden the runner
+      - name: Harden Runner
         uses: step-security/harden-runner@f808768d1510423e83855289c910610ca9b43176 # v2.17.0
         with:
           egress-policy: audit

-      - name: Compute linked issue labels
+      - name: Compute Labels
```


⚠️ Potential issue | 🟡 Minor

This change does not appear to fix issue #7.

Issue #7 is about broken markdown links/Lychee output, but this PR only changes the label-sync workflow. Please split this refactor into a separate PR or include the actual markdown/link-check fixes before closing that issue.

As per coding guidelines, "Focus feedback on the PR's stated scope. Flag out-of-scope issues as irrelevant and suggest opening a separate PR — do not block for them."

Comment on lines +42 to 71
```diff
+        const regex = /(?:fix|fixes|fixed|close|closes|closed|resolve|resolves|resolved)[:\s]+\s*#(\d+)\b/gi;
+        const issueNumbers = new Set();
+        let match;
+        while ((match = regex.exec(prData.body || "")) !== null) {
+          issueNumbers.add(Number(match[1]));
+        }
+
+        if (issueNumbers.size === 0) {
+          console.log("No linked issues found in the PR description.");
+          return;
+        }
+
-        console.log(`[sync] Linked issues: ${linkedIssues.map(n => '#' + n).join(', ')}`);
+        console.log(`Detected linked issues: #${Array.from(issueNumbers).join(', #')}`);

-        const allLabels = [];
-        for (const num of linkedIssues) {
+        const discoveredLabels = new Set();
+        for (const num of issueNumbers) {
           try {
-            const { data } = await github.rest.issues.get({
+            const { data: issue } = await github.rest.issues.get({
               owner: context.repo.owner, repo: context.repo.repo, issue_number: num
             });
-            if (data.pull_request) { console.log(`[sync] Skipping #${num}: is a PR reference.`); continue; }
-            const labels = extractLabels(data.labels || []);
-            console.log(`[sync] Issue #${num} labels: ${labels.length ? labels.join(', ') : '(none)'}`);
-            allLabels.push(...labels);
-          } catch (err) {
-            if (err && err.status === 404) { console.log(`[sync] Issue #${num} not found. Skipping.`); continue; }
-            throw err;
+            if (!issue.pull_request) {
+              const names = (issue.labels || []).map(l => typeof l === 'string' ? l : l.name);
+              console.log(`Found labels on issue #${num}: [${names.join(', ')}]`);
+              names.forEach(l => discoveredLabels.add(l));
+            } else {
+              console.log(`Skipping #${num} because it is a Pull Request, not an Issue.`);
+            }
+          } catch (e) {
+            console.log(`Error fetching labels for issue #${num}: ${e.message}`);
           }
```


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
find . -name "sync-issue-labels-compute.yml" -type f
```

Repository: manishdait/hiero-sdk-python

Length of output: 120


🏁 Script executed:

```shell
cat -n ./.github/workflows/sync-issue-labels-compute.yml
```

Repository: manishdait/hiero-sdk-python

Length of output: 4779


Introduce a hard cap on linked issues before iterating.

prData.body is untrusted input from the GitHub event payload. The current code extracts all fixes #... references from the PR description via regex without any limit, then makes one API call (issues.get) per matched issue number. A PR description with many linked issues (or many edits across edited/synchronize events) can exhaust API quota and waste runner time.

🔧 Suggested change
```diff
+            const MAX_LINKED_ISSUES = 20;
             const regex = /(?:fix|fixes|fixed|close|closes|closed|resolve|resolves|resolved)[:\s]+\s*#(\d+)\b/gi;
             const issueNumbers = new Set();
             let match;
             while ((match = regex.exec(prData.body || "")) !== null) {
               issueNumbers.add(Number(match[1]));
+              if (issueNumbers.size > MAX_LINKED_ISSUES) {
+                throw new Error(`Refusing to process more than ${MAX_LINKED_ISSUES} linked issues from the PR body.`);
+              }
             }
```
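The capped extraction suggested above can be exercised standalone; the following is a sketch run under plain Node, outside the workflow, where the cap value and sample PR body are illustrative:

```javascript
// Extract closing-keyword issue references from a PR body, refusing to
// process more than a fixed cap (mirrors the suggested change above).
const MAX_LINKED_ISSUES = 20;

function extractIssueNumbers(body) {
  const regex = /(?:fix|fixes|fixed|close|closes|closed|resolve|resolves|resolved)[:\s]+\s*#(\d+)\b/gi;
  const issueNumbers = new Set();
  let match;
  while ((match = regex.exec(body || "")) !== null) {
    issueNumbers.add(Number(match[1]));
    if (issueNumbers.size > MAX_LINKED_ISSUES) {
      throw new Error(`Refusing to process more than ${MAX_LINKED_ISSUES} linked issues.`);
    }
  }
  return Array.from(issueNumbers);
}

// Duplicates collapse via the Set; "fixes: #7" and "resolves #7" count once.
console.log(extractIssueNumbers("fixes: #7 and closes #12, resolves #7")); // [ 7, 12 ]
```

Raising on the 21st distinct reference (rather than silently truncating, as the old `slice` did) makes quota abuse visible in the job log.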

Comment on lines +57 to 71
```diff
+        for (const num of issueNumbers) {
           try {
-            const { data } = await github.rest.issues.get({
+            const { data: issue } = await github.rest.issues.get({
               owner: context.repo.owner, repo: context.repo.repo, issue_number: num
             });
-            if (data.pull_request) { console.log(`[sync] Skipping #${num}: is a PR reference.`); continue; }
-            const labels = extractLabels(data.labels || []);
-            console.log(`[sync] Issue #${num} labels: ${labels.length ? labels.join(', ') : '(none)'}`);
-            allLabels.push(...labels);
-          } catch (err) {
-            if (err && err.status === 404) { console.log(`[sync] Issue #${num} not found. Skipping.`); continue; }
-            throw err;
+            if (!issue.pull_request) {
+              const names = (issue.labels || []).map(l => typeof l === 'string' ? l : l.name);
+              console.log(`Found labels on issue #${num}: [${names.join(', ')}]`);
+              names.forEach(l => discoveredLabels.add(l));
+            } else {
+              console.log(`Skipping #${num} because it is a Pull Request, not an Issue.`);
+            }
+          } catch (e) {
+            console.log(`Error fetching labels for issue #${num}: ${e.message}`);
           }
```


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
fd -t f "sync-issue-labels.*\.yml" .github/workflows/
```

Repository: manishdait/hiero-sdk-python

Length of output: 162


🏁 Script executed:

```shell
cat -n .github/workflows/sync-issue-labels-compute.yml
```

Repository: manishdait/hiero-sdk-python

Length of output: 4779


🏁 Script executed:

```shell
cat -n .github/workflows/sync-issue-labels-add.yml
```

Repository: manishdait/hiero-sdk-python

Length of output: 2395


Fail the workflow on transient API errors, not partial results.

The catch block silently logs errors and continues the loop, allowing discoveredLabels to remain incomplete. If any issues are successfully fetched before a transient error (403, 429, 5xx), labels.json is still written with partial data. The downstream sync-issue-labels-add.yml workflow treats a successful run as complete computation and applies that incomplete label set.

Only catch and skip the specific case you intend to handle (404 for deleted issues); re-throw all other errors to fail the job:

Suggested change
```diff
               } catch (e) {
-                console.log(`Error fetching labels for issue #${num}: ${e.message}`);
+                if (e.status === 404) {
+                  console.log(`Skipping missing issue #${num}.`);
+                  continue;
+                }
+                throw e;
               }
```
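The pattern can be illustrated with a minimal runnable sketch (plain Node, with a stubbed fetcher standing in for `github.rest.issues.get`; the stub and its failure modes are illustrative):

```javascript
// Tolerate only the error we intend to handle (404 for missing issues);
// re-throw everything else so transient failures fail the job instead of
// silently producing a partial label set.
async function collectLabels(issueNumbers, fetchIssue) {
  const discovered = new Set();
  for (const num of issueNumbers) {
    try {
      const issue = await fetchIssue(num);
      (issue.labels || []).forEach(l => discovered.add(typeof l === 'string' ? l : l.name));
    } catch (e) {
      if (e.status === 404) {
        console.log(`Skipping missing issue #${num}.`);
        continue;
      }
      throw e; // 403/429/5xx must fail the run, not yield partial data
    }
  }
  return Array.from(discovered);
}

// Stubbed fetcher: #2 is missing (404), #3 hits a rate limit (429).
const stub = async (num) => {
  if (num === 2) { const e = new Error('Not Found'); e.status = 404; throw e; }
  if (num === 3) { const e = new Error('rate limited'); e.status = 429; throw e; }
  return { labels: ['bug', { name: 'docs' }] };
};

collectLabels([1, 2], stub).then(labels => console.log(labels));             // [ 'bug', 'docs' ]
collectLabels([1, 3], stub).catch(e => console.log(`failed: ${e.message}`)); // failed: rate limited
```

The downstream add workflow then only ever sees a `labels.json` computed from a complete pass, or no artifact at all.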


Projects

None yet

Development

Successfully merging this pull request may close these issues.

Scheduled Markdown Link Check Found Broken Links

2 participants