
feat: make synthetic runners use dataframes and rename inputs so state logic works #10

Conversation


@younesStrittmatter (Contributor, Author):

also resolves AutoResearch/autora#561
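For context, a minimal sketch of the kind of runner interface this PR moves toward: conditions come in as a pandas DataFrame and observations go out as a DataFrame, so the runner composes with DataFrame-based state logic. The model (a Weber-Fechner-style law), the column names, the defaults, and the signature are illustrative assumptions, not the actual autora API.

import numpy as np
import pandas as pd


def experiment_runner(conditions: pd.DataFrame,
                      observation_noise: float = 0.01,
                      random_state=None) -> pd.DataFrame:
    """Hypothetical DataFrame-in / DataFrame-out synthetic runner."""
    rng = np.random.default_rng(random_state)
    experiment_data = conditions.copy()
    # Weber-Fechner-style response: log ratio of the two stimulus intensities
    # plus Gaussian observation noise (illustrative model, not the merged code).
    experiment_data["observation"] = (
        np.log(experiment_data["S1"] / experiment_data["S0"])
        + rng.normal(0.0, observation_noise, size=len(experiment_data))
    )
    return experiment_data


conditions = pd.DataFrame({"S0": [1.0, 1.0, 2.0], "S1": [2.0, 4.0, 4.0]})
print(experiment_runner(conditions, random_state=42))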

…hner_law.py

Co-authored-by: benwandrew <benwallaceandrew@gmail.com>
@musslick (Contributor) left a comment:


Looks great; just a minor renaming suggestion for the noise in some of the models.

@@ -117,8 +119,8 @@ def experiment_runner(X: np.ndarray, added_noise_=added_noise):
         probability_a = x[1]
         probability_b = x[3]

-        expected_value_A = value_A * probability_a + rng.normal(0, added_noise_)
-        expected_value_B = value_B * probability_b + rng.normal(0, added_noise_)
+        expected_value_A = value_A * probability_a + rng.normal(0, observation_noise)

Would call this value_noise, which is specific to expected utility theory, instead of observation_noise.
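A sketch of what that rename could look like in the expected-utility runner. The column layout (x[0] and x[2] as values, x[1] and x[3] as probabilities) is inferred from the diff context above; the choice readout at the end and the default noise level are hypothetical placeholders, not the merged code.

import numpy as np


def experiment_runner(X: np.ndarray, value_noise: float = 0.01) -> np.ndarray:
    """Hypothetical expected-utility runner with the suggested value_noise name."""
    rng = np.random.default_rng()
    Y = np.zeros((X.shape[0], 1))
    for idx, x in enumerate(X):
        value_A, probability_a = x[0], x[1]
        value_B, probability_b = x[2], x[3]
        # The noise perturbs the computed expected values, hence "value_noise".
        expected_value_A = value_A * probability_a + rng.normal(0, value_noise)
        expected_value_B = value_B * probability_b + rng.normal(0, value_noise)
        Y[idx] = float(expected_value_A > expected_value_B)  # placeholder readout
    return Y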

@@ -113,8 +118,8 @@ def experiment_runner(X: np.ndarray, added_noise_=added_noise):
             x[3] ** coefficient + (1 - x[3]) ** coefficient
         ) ** (1 / coefficient)

-        expected_value_A = value_A * probability_a + rng.normal(0, added_noise_)
-        expected_value_B = value_B * probability_b + rng.normal(0, added_noise_)
+        expected_value_A = value_A * probability_a + rng.normal(0, observation_noise)

Same here, I would call it value_noise instead of observation_noise.
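The same rename applies here; for reference, a short sketch of the probability weighting visible in this hunk's context. The denominator matches the diff; completing it into the standard inverse-S weighting function, and wrapping it in a helper, are assumptions for illustration only.

def weight_probability(p: float, coefficient: float) -> float:
    """Inverse-S probability weighting consistent with the diff context:
    w(p) = p**c / (p**c + (1 - p)**c) ** (1 / c)."""
    return p ** coefficient / (
        p ** coefficient + (1 - p) ** coefficient
    ) ** (1 / coefficient)


# The weighted probabilities then enter the expected values as in the previous
# sketch, with the noise parameter renamed to value_noise:
#   expected_value_A = value_A * weight_probability(x[1], coefficient) + rng.normal(0, value_noise)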

     Y = np.zeros((X.shape[0], 1))
     for idx, x in enumerate(X):
         similarity_A1 = x[0]
         similarity_A2 = x[1]
         similarity_B1 = x[2]
         similarity_B2 = x[3]

-        y = (similarity_A1 * focus + np.random.normal(0, added_noise_)) / (
+        y = (similarity_A1 * focus + rng.normal(0, observation_noise)) / (

I think here it is somewhat fine because we add it at the end and then normalize. You might still want to call it decision_noise because it is applied before the division.
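A sketch of the decision_noise naming suggestion in the similarity-based choice rule. The numerator matches the diff line above; the denominator is an assumed normalization over both options (the hunk cuts off before it), and the defaults are placeholders, so treat the whole function as illustrative.

import numpy as np


def choice_runner(X: np.ndarray, focus: float = 0.8,
                  decision_noise: float = 0.01) -> np.ndarray:
    """Hypothetical similarity-choice runner with the suggested decision_noise name."""
    rng = np.random.default_rng()
    Y = np.zeros((X.shape[0], 1))
    for idx, x in enumerate(X):
        similarity_A1, similarity_A2, similarity_B1, similarity_B2 = x[:4]
        # The noise enters the evidence for A before the ratio is normalized,
        # which is why "decision_noise" fits better than "observation_noise".
        Y[idx] = (similarity_A1 * focus + rng.normal(0, decision_noise)) / (
            similarity_A1 * focus
            + similarity_A2 * (1 - focus)
            + similarity_B1 * focus
            + similarity_B2 * (1 - focus)
        )
    return Y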


@musslick (Contributor) left a comment:


Looks great!

@younesStrittmatter merged commit 1136529 into main on Sep 1, 2023
12 checks passed

Successfully merging this pull request may close these issues.

chore: rename input arguments of runners to use with state
3 participants