
Continuous relaxation with fallback for inequality-constrained ordinal dims (#3261)#3261

Open
ItsMrLin wants to merge 1 commit into meta-pytorch:main from ItsMrLin:export-D99304800

Conversation

@ItsMrLin
Contributor

@ItsMrLin ItsMrLin commented Apr 2, 2026

Summary:

`_setup_continuous_relaxation` in `optimize_mixed.py` blanket-excludes all
constrained discrete dimensions from continuous relaxation, forcing them into
discrete local search even when they have high cardinality. This is overly
conservative for inequality constraints and causes severe performance
degradation.

**Problem:** When ordinal parameters (e.g., integers 0-50) participate in
linear inequality constraints (e.g., `x1 + x2 + x3 <= 100`), they are kept
as discrete dims regardless of cardinality. In mixed search spaces, this
inflates the discrete combination count (e.g., 51^4 x 20 = 135M), forces
`optimize_acqf_mixed_alternating`, and with default optimizer budgets
(`raw_samples=1024`, `maxiter_init=100`, `maxiter_alternating=64`) across
many sequential candidates, produces ~900K+ acquisition function evaluations
-- taking hours instead of minutes.
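The combination count quoted above can be checked directly. This back-of-the-envelope sketch assumes the example search space from the paragraph: four ordinal dims with 51 levels each crossed with one 20-choice categorical.

```python
# Size of the discrete grid when all constrained ordinal dims stay discrete:
# four ordinal parameters taking integer values 0-50 (51 levels each),
# crossed with one categorical parameter with 20 choices.
ordinal_levels = 51
n_ordinal_dims = 4
categorical_choices = 20

n_combinations = ordinal_levels**n_ordinal_dims * categorical_choices
print(n_combinations)  # 135304020 -- the "~135M" cited in the summary
```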

**Fix:** Try continuous relaxation first for inequality-constrained dims, with
automatic fallback to keeping them discrete if infeasible candidates result.

Specifically, `optimize_acqf_mixed_alternating` now:

1. **Fast path**: Calls `_setup_continuous_relaxation` with
   `inequality_constraints=None`, allowing inequality-constrained dims to be
   relaxed and optimized continuously.
2. **Feasibility check**: After optimization, checks if the candidates satisfy
   all constraints via `evaluate_feasibility`.
3. **Fallback**: If any candidates are infeasible (e.g., due to rounding
   violations with non-contiguous discrete choices or tight constraints),
   re-runs with inequality-constrained dims kept discrete.

`_setup_continuous_relaxation` itself retains the D94963154 behavior of
excluding all constrained dims passed to it — the caller controls which
constraints are relevant by choosing what to pass.

The optimization body is extracted into `_run_alternating_optimization` to
enable the fallback without code duplication.
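The relax-then-fallback control flow from steps 1-3 can be sketched in plain Python. This is an illustrative stand-in, not the actual BoTorch implementation: `run_optimization`, the rounding step, and the constraint encoding here are hypothetical simplifications.

```python
def check_feasibility(candidate, constraints):
    """Toy stand-in for BoTorch's feasibility check: verifies
    sum(coeff * x[i]) <= rhs for every linear inequality constraint."""
    return all(
        sum(c * candidate[i] for i, c in coeffs.items()) <= rhs
        for coeffs, rhs in constraints
    )

def optimize_with_fallback(run_optimization, constraints):
    # Fast path: relax the inequality-constrained ordinal dims to continuous,
    # then round the continuous solution back onto the integer grid.
    relaxed = run_optimization(relax_constrained_dims=True)
    candidate = [round(x) for x in relaxed]
    if check_feasibility(candidate, constraints):
        return candidate
    # Fallback: rounding violated a constraint, so re-run with the
    # inequality-constrained dims kept discrete throughout.
    return run_optimization(relax_constrained_dims=False)

# Toy example with the constraint x1 + x2 + x3 <= 100 from the summary,
# encoded as ({dim_index: coefficient, ...}, rhs).
constraints = [({0: 1.0, 1: 1.0, 2: 1.0}, 100.0)]

# Relaxed optimum rounds to (33, 33, 33): feasible, so the fast path wins.
fast = optimize_with_fallback(
    lambda relax_constrained_dims: (
        [33.4, 33.4, 33.4] if relax_constrained_dims else [33, 33, 33]
    ),
    constraints,
)

# Relaxed optimum rounds to (34, 34, 34), violating the constraint,
# so the discrete fallback runs instead.
fell_back = optimize_with_fallback(
    lambda relax_constrained_dims: (
        [33.6, 33.6, 33.6] if relax_constrained_dims else [33, 33, 34]
    ),
    constraints,
)
```

The key design point the sketch mirrors is that the fallback only pays the expensive discrete-search cost when the cheap relaxed solve actually produces an infeasible rounded candidate.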

Differential Revision: D99304800

@meta-cla meta-cla Bot added the "CLA Signed" label Apr 2, 2026
@meta-codesync

meta-codesync Bot commented Apr 2, 2026

@ItsMrLin has exported this pull request. If you are a Meta employee, you can view the originating Diff in D99304800.

@codecov

codecov Bot commented Apr 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 99.98%. Comparing base (b16b28f) to head (8ced576).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #3261   +/-   ##
=======================================
  Coverage   99.98%   99.98%           
=======================================
  Files         221      221           
  Lines       21902    21926   +24     
=======================================
+ Hits        21898    21922   +24     
  Misses          4        4           


@meta-codesync meta-codesync Bot changed the title Allow continuous relaxation of inequality-constrained ordinal dims Continuous relaxation with fallback for inequality-constrained ordinal dims (#3261) Apr 2, 2026
ItsMrLin added a commit to ItsMrLin/botorch that referenced this pull request Apr 2, 2026
ItsMrLin added a commit to ItsMrLin/botorch that referenced this pull request Apr 2, 2026

Labels

CLA Signed Do not delete this pull request or issue due to inactivity. fb-exported meta-exported
