@rehanrupawalla
Last active February 25, 2025 03:58
import numpy as np

def tableu_simplex(num_constraints, num_dec_variables, obj_func_coefficients, lhs_values, rhs_values):
    # Append a slot for the objective value, then work in floats throughout.
    rhs_values.append(0)
    rhs_values = np.array(rhs_values, dtype=np.float64)
    cj_row = np.array(obj_func_coefficients + [0] * num_constraints)
    num_columns = num_constraints + num_dec_variables
    # Build the constraint rows: decision-variable coefficients plus one slack column each.
    row_table = []
    for i in range(num_constraints):
        row = np.zeros(num_columns)
        row[num_dec_variables + i] = 1
        row[:num_dec_variables] = lhs_values[i]
        row_table.append(row)
    row_table = np.array(row_table, dtype=np.float64)
    # The initial basis consists of the slack variables, each with objective coefficient 0.
    fixed_variables = [[np.int64(0), np.int64(i + num_dec_variables)] for i in range(num_constraints)]
    still_going = True
    while still_going:
        zj_row = np.dot(np.array(fixed_variables)[:, 0], row_table)
        # Current objective value, stored in the extra slot of rhs_values.
        # (Kept as a float: casting to int would truncate fractional optima.)
        rhs_values[-1] = np.dot(np.array(fixed_variables)[:, 0], rhs_values[:-1])
        row_table = np.vstack([row_table, zj_row])
        cj_zj_diff_row = cj_row - zj_row
        row_table = np.vstack([row_table, cj_zj_diff_row])
        # Optimality test: stop when no cj - zj entry is positive.
        if all(value <= 0 for value in cj_zj_diff_row):
            still_going = False
            break
        diff_index_of_max, diff_max_value = find_max(cj_zj_diff_row)
        # Minimum-ratio (theta) test to choose the leaving row.
        with np.errstate(divide='ignore'):
            theta_column = np.array(rhs_values[:-1]) / row_table[:-2, diff_index_of_max]
        theta_index_of_min, theta_min_value = find_non_negative_min(theta_column)
        if theta_index_of_min is None:
            # No non-negative ratio was found: the problem is unbounded in this direction.
            still_going = False
            break
        row_being_removed = row_table[theta_index_of_min]
        divisor = row_being_removed[diff_index_of_max]
        # The entering variable replaces the leaving one in the basis.
        fixed_variables[theta_index_of_min] = [np.int64(cj_row[diff_index_of_max]), np.int64(diff_index_of_max)]
        replacement_row = row_being_removed / divisor
        new_rhs_values = rhs_values.copy()  # copies avoid clobbering values still being read
        new_row_table = row_table[:-2].copy()
        new_row_table[theta_index_of_min, :] = replacement_row
        new_rhs_values[theta_index_of_min] = rhs_values[theta_index_of_min] / divisor
        # Gauss-Jordan step: clear the pivot column from every other row.
        for index in range(len(new_row_table)):
            if index != theta_index_of_min:
                multiplier = row_table[index, diff_index_of_max]
                new_row_table[index, :] = row_table[index, :] - multiplier * replacement_row
                new_rhs_values[index] = rhs_values[index] - multiplier * new_rhs_values[theta_index_of_min]
        row_table = new_row_table
        rhs_values = new_rhs_values
    # Recover the values of the original decision variables from the final basis.
    dec_var_values = [None] * num_dec_variables
    for i, var in enumerate(fixed_variables):
        if var[1] < num_dec_variables:
            dec_var_values[int(var[1])] = rhs_values[i]
    print(dec_var_values)
    write_output(dec_var_values, rhs_values[-1])
    print("objective function evaluates to: ", rhs_values[-1])

if __name__ == "__main__":
    input_file = "q2/q2_sample_input.txt"
    num_constraints, num_dec_variables, obj_func_coefficients, lhs_values, rhs_values = parse_input_spec(input_file)
    tableu_simplex(num_constraints, num_dec_variables, obj_func_coefficients, lhs_values, rhs_values)
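The helpers `find_max`, `find_non_negative_min`, `write_output`, and `parse_input_spec` are not included in the gist. A minimal sketch of the two search helpers, consistent with how they are called above, might look like this (hypothetical implementations, not the author's):

```python
import numpy as np

def find_max(values):
    """Return (index, value) of the largest entry (hypothetical helper)."""
    index = int(np.argmax(values))
    return index, values[index]

def find_non_negative_min(values):
    """Return (index, value) of the smallest finite non-negative entry,
    or (None, None) if no such entry exists (hypothetical helper)."""
    best_index, best_value = None, None
    for i, v in enumerate(values):
        if np.isfinite(v) and v >= 0 and (best_value is None or v < best_value):
            best_index, best_value = i, v
    return best_index, best_value
```

Returning `None` from `find_non_negative_min` is what lets the main loop detect an unbounded direction and bail out.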
shahshripal commented Feb 25, 2025

Rubric:

Critical Info:

  • Correct Algorithm Execution: The solution must accurately implement the tableau simplex method, including correct tableau initialization, pivot operations, and termination conditions.
  • Handling of Constraints and Variables: Proper management of decision variables and constraints as per simplex method requirements.
  • Objective Function Evaluation: Accurate calculation and updating of the objective function value after each tableau iteration.
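The tableau initialization and the zj / cj−zj computations the rubric refers to can be sketched on a tiny illustrative problem (the numbers below are made up, not the gist's sample input):

```python
import numpy as np

# Maximize 3*x1 + 5*x2 subject to x1 <= 4 and 2*x2 <= 12, with slacks s1, s2.
cj = np.array([3.0, 5.0, 0.0, 0.0])      # objective coefficients, slacks included
table = np.array([[1.0, 0.0, 1.0, 0.0],  # constraint rows with slack columns
                  [0.0, 2.0, 0.0, 1.0]])
basis_costs = np.array([0.0, 0.0])       # slacks form the initial basis
zj = basis_costs @ table                 # z_j row: basic costs dotted with each column
cj_zj = cj - zj                          # c_j - z_j row; most positive entry enters
entering = int(np.argmax(cj_zj))         # here column 1 (x2), since 5 is largest
```

With an all-slack basis, zj is all zeros and cj−zj equals cj, which is why the first pivot always chooses the largest raw objective coefficient.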

Possible Major Errors:

  • Incorrect Pivot Selection: Using wrong pivot elements that do not lead to the optimization of the objective function.
  • Faulty Calculation of RHS and Tableau: Errors in computing the right-hand-side (RHS) values or updating the tableau that lead to incorrect subsequent steps or an incorrect solution.
  • Improper Termination: The solution should terminate correctly, either by identifying the optimal solution or by recognizing unboundedness or infeasibility where applicable.
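Pivot selection and termination hinge on the minimum-ratio test, which can be sketched as follows (illustrative values, not the gist's data):

```python
import numpy as np

# Minimum-ratio test for the leaving row: rhs_i / a_i over rows whose entry in
# the entering column is positive; if no entry is positive, the LP is unbounded.
rhs = np.array([4.0, 12.0])
pivot_col = np.array([0.0, 2.0])         # entering column (illustrative)
ratios = np.full_like(rhs, np.inf)       # non-positive entries never win the min
positive = pivot_col > 0
ratios[positive] = rhs[positive] / pivot_col[positive]
leaving = int(np.argmin(ratios))         # row with the smallest ratio leaves
unbounded = not positive.any()           # termination case: no candidate row
```

Masking out non-positive entries before dividing avoids both the divide-by-zero warning and the subtle bug of picking a row with a negative pivot element.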

Possible Minor Errors:

  • Data Type Mismatches: Issues such as integer truncation or inappropriate data-type conversions that can occasionally disrupt calculations but do not typically produce completely wrong solutions.
  • Efficiency Concerns: Less optimal Python practices that could slow down the execution but do not affect the correctness, like unnecessary deep copying of data structures.
  • Minor Logical Errors: Small logical slips that do not affect overall execution or the final outcome, such as mishandling indices or boundaries under specific conditions.
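As a concrete instance of the data-type point: the original gist at one stage stored the objective as `int(np.dot(...))`, which silently truncates fractional optima. A quick illustration with made-up numbers:

```python
import numpy as np

basis_costs = np.array([1.0, 1.0])
rhs = np.array([1.5, 2.4])
objective = float(np.dot(basis_costs, rhs))  # 3.9 (approximately, in floats)
truncated = int(objective)                   # int() drops the fraction -> 3
```

Keeping the objective as a float until output time avoids reporting 3 for a problem whose optimum is 3.9.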

Additional Helpful Info:

  • Optimization Suggestions: Recommendations on how to optimize the code for better performance, such as utilizing numpy operations more effectively or avoiding redundant calculations.
  • Debugging Tips: Providing insights on how to debug common issues in simplex implementations, such as cycling or precision errors.
  • Use of Pythonic Constructs: Enhancing readability and efficiency by using more Pythonic constructs like list comprehensions, generator expressions, or built-in functions.
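Following the numpy suggestion above, the entire pivot update can be written as two vectorized operations instead of nested per-element loops. A sketch with illustrative names, not the gist's code:

```python
import numpy as np

def pivot(table, rhs, row, col):
    """Vectorized Gauss-Jordan pivot on (row, col); returns new table and rhs."""
    table = table.astype(float).copy()
    rhs = rhs.astype(float).copy()
    piv = table[row, col]
    table[row] /= piv                    # normalize the pivot row
    rhs[row] /= piv
    mult = table[:, col].copy()
    mult[row] = 0.0                      # leave the pivot row untouched
    table -= np.outer(mult, table[row])  # eliminate the column in all other rows
    rhs -= mult * rhs[row]
    return table, rhs
```

Besides being shorter, `np.outer` keeps all arithmetic in optimized C loops, which matters once the tableau has more than a handful of columns.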

Principles to Follow:

  • Correctness: The rubric points must be factually correct and align with mathematical standards for the simplex method.
  • Objectivity: Each point in the rubric should be verifiable against the code’s functionality without ambiguity.
  • Granularity: Detailed enough to capture the essence of each critical aspect of the simplex algorithm implementation, allowing for precise assessment of the solution’s correctness.
  • Conciseness: Direct and to the point, avoiding overly verbose descriptions while still providing enough detail to be meaningful.
