Summary
gdpopt.loa appears to mishandle objective sense in the discrete OA master for a maximization GDP. In a GDPlib methanol benchmark, the algebraically equivalent formulations minimize(-profit) and maximize(profit) produce different GDPopt LOA solutions. The maximization form terminates after one iteration at a negative-profit local solution, while the minimization-of-negative-profit form finds the expected positive-profit solution from the Turkay/Grossmann LOA paper (DOI: 10.1016/0098-1354(95)00219-7).
This was discovered while working on GDPlib:
Environment
- Pyomo version: 6.10.0
- Python: 3.12 in the GDPlib Pixi environment
- GDPopt call:

  pyo.SolverFactory("gdpopt").solve(
      m,
      algorithm="LOA",
      mip_solver="gams",
      nlp_solver="gams",
      tee=False,
  )
GAMS was locally available for both MIP and NLP roles.
Reproducer
This uses GDPlib PR #113 because it exposes m.profit separately from the solver objective.
import pyomo.environ as pyo
import pyomo.gdp as gdp

from gdplib.methanol.methanol import build_model


def solve_case(label, maximize_profit):
    m = build_model()
    if maximize_profit:
        m.objective.deactivate()
        m.max_profit_objective = pyo.Objective(expr=m.profit, sense=pyo.maximize)
    res = pyo.SolverFactory("gdpopt").solve(
        m,
        algorithm="LOA",
        mip_solver="gams",
        nlp_solver="gams",
        tee=False,
    )
    active = [
        d.name
        for d in m.component_data_objects(
            gdp.Disjunct, active=True, sort=True, descend_into=True
        )
        if pyo.value(d.indicator_var)
    ]
    print(f"CASE: {label}")
    print(f"termination: {res.solver.termination_condition}")
    print(
        "objective:",
        pyo.value(m.objective)
        if m.objective.active
        else pyo.value(m.max_profit_objective),
    )
    print(f"profit: {pyo.value(m.profit)}")
    print(f"iterations: {getattr(res.solver, 'iterations', None)}")
    print(f"active: {active}")
    print(f"feed1: {pyo.value(m.flows[1])}")
    print(f"feed2: {pyo.value(m.flows[2])}")
    print(f"product_flow: {pyo.value(m.flows[23])}")
    print(f"purity: {pyo.value(m.component_flows[23, 'CH3OH'] / m.flows[23])}")
    print("---")


solve_case("minimize_negative_profit", maximize_profit=False)
solve_case("maximize_profit", maximize_profit=True)
Observed output:
CASE: minimize_negative_profit
termination: optimal
objective: -1793.4292381783353
profit: 1793.4292381783353
iterations: 15
active: ['cheap_reactor', 'expensive_feed_disjunct', 'single_stage_recycle_compressor_disjunct', 'two_stage_feed_compressor_disjunct']
feed1: 0
feed2: 3.408977431099573
product_flow: 1.0
purity: 0.9
---
CASE: maximize_profit
termination: optimal
objective: -242.05445652916296
profit: -242.05445652916296
iterations: 1
active: ['cheap_feed_disjunct', 'cheap_reactor', 'two_stage_feed_compressor_disjunct', 'two_stage_recycle_compressor_disjunct']
feed1: 4.820913073983858
feed2: 0
product_flow: 1.0
purity: 0.9
---
Expected behavior
minimize(-profit) and maximize(profit) should produce equivalent GDPopt behavior, modulo normal local NLP solver tolerances and nonconvex local-solution limitations. They should not cause the LOA master to pursue the opposite economic direction or terminate after one iteration at a dominated negative-profit solution.
The minimize(-profit) result matches the expected scale from Turkay/Grossmann Example 3, which reports an optimal profit around $1.794M/yr.
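As a sanity check on the equivalence claim, here is a toy illustration in plain Python (no Pyomo, with illustrative profit values loosely echoing the two solutions above): over any finite candidate set, the maximizer of f and the minimizer of -f select the same point, and the optimal values differ only in sign.

```python
# Toy check of the equivalence claim: the maximizer of f and the
# minimizer of -f coincide, and the optimal values differ only in sign.
# The profit values are illustrative, not taken from the model.
candidates = [-242.05, 500.0, 1793.43]

best_max = max(candidates, key=lambda p: p)       # maximize(profit)
best_min_neg = min(candidates, key=lambda p: -p)  # minimize(-profit)

assert best_max == best_min_neg == 1793.43
assert max(candidates) == -min(-p for p in candidates)
```

Any solver behavior beyond ordinary local-solution noise that breaks this correspondence points at a sign or sense handling bug rather than at the model.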
Suspected source location
In Pyomo 6.10.0, pyomo.contrib.gdpopt.loa.GDP_LOA_Solver._setup_augmented_penalty_objective() appears to create the discrete OA objective with minimization sense unconditionally:
discrete_problem_util_block.oa_obj = Objective(sense=minimize)
Then _update_augmented_penalty_objective() updates only the expression:
discrete_problem_util_block.oa_obj.expr = (
    discrete_objective.expr + OA_penalty_expr
)
The slack penalty sign is adjusted for maximize/minimize, and bound bookkeeping elsewhere appears objective-sense-aware, but the active OA master objective sense itself may remain minimization even when the original objective is maximization.
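A minimal sketch of why this would matter, in plain Python. The `master_pick` helper is hypothetical, a stand-in for the discrete OA master's candidate selection, not GDPopt code; the sense constants mirror Pyomo's (minimize == 1, maximize == -1). If the master's sense is hard-coded to minimize while the original GDP maximizes, the master returns the dominated low-profit candidate, consistent with the one-iteration negative-profit termination observed above.

```python
# Sketch of the suspected failure mode, using Pyomo's sense constants
# (minimize == 1, maximize == -1). `master_pick` is a hypothetical
# stand-in for the discrete OA master's candidate selection.
MINIMIZE, MAXIMIZE = 1, -1

def master_pick(profits, original_sense, sense_aware):
    # A sense-aware master inherits the original objective sense; the
    # suspected bug is that the real master is hard-coded to MINIMIZE.
    master_sense = original_sense if sense_aware else MINIMIZE
    return min(profits) if master_sense == MINIMIZE else max(profits)

profits = [-242.05, 1793.43]  # the two local solutions reported above

# Hard-coded minimize on a maximization GDP picks the dominated point:
assert master_pick(profits, MAXIMIZE, sense_aware=False) == -242.05
# Inheriting the original sense picks the positive-profit point:
assert master_pick(profits, MAXIMIZE, sense_aware=True) == 1793.43
```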
Suggested next step
This GDPlib model is larger than ideal for a Pyomo regression test. The next useful step is to reduce this to a small standalone GDP where minimize(-f) and maximize(f) diverge under gdpopt.loa, then test whether setting the OA objective sense to the original objective sense fixes the issue.