elderrest.blogg.se

Instinct program 0.30c

From a developer perspective, I agree. This may reflect the fact that I'm far more used to typed languages, and I'm seeing a lot of tests being required to make sure the string->fn mappings are set up correctly. But from a user perspective, I do think strings can be much easier to use.
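The "string or callable" pattern under discussion can be sketched with a small registry that resolves either form to a metric function. This is a hypothetical illustration — `resolve_metric` and the registry are made-up names, not Fairlearn's or scikit-learn's actual API:

```python
# Hypothetical string -> metric-function registry ("string or callable" pattern).
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred):
    # True positives over all actual positives.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    positives = sum(t == 1 for t in y_true)
    return tp / positives

_METRICS = {"accuracy": accuracy, "recall": recall}

def resolve_metric(metric):
    """Accept either a registered string or a callable."""
    if callable(metric):
        return metric
    try:
        return _METRICS[metric]
    except KeyError:
        raise ValueError(f"unknown metric {metric!r}; choose from {sorted(_METRICS)}")

y_true, y_pred = [0, 1, 1, 0], [0, 1, 0, 0]
print(resolve_metric("accuracy")(y_true, y_pred))  # 0.75
print(resolve_metric(recall)(y_true, y_pred))      # 0.5
```

Each string key in `_METRICS` is exactly the kind of mapping that needs tests to keep it in sync with the functions it points to.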


scores.disparity("difference") would return all differences between groups. In the case of two sensitive groups, this is just a single number, but in the case of multiple sensitive groups, you still need to aggregate it somehow. At the moment, Fairlearn only allows for one possible aggregation, which is the difference between min and max. IMO, given that fairness can have so many different meanings in different contexts, it is important to allow for some flexibility in defining a fairness metric. However, I'm not sure whether having a separate argument/function does more harm than good - i.e. is the increase in flexibility worth the increase in complexity?

This reminds me very much of our existing group_summary by the way! There are individual ones per metric, e.g. accuracy_score_group_summary(y_true, y_pred, …). The only odd part there is that you need to pass this group_summary object to functions like difference_from_summary(…) to get the values out, whereas the scores.difference() way of writing it is more natural.

From what you describe I think it's very similar indeed! I agree that writing something like scores.difference() is more natural (pandas style), especially because this pattern allows you to easily access all possible methods in most IDEs. I remember that the complex object (here called Scores) is not necessarily what we want in every situation, so perhaps redundancy is intended (?). If you use an attribute rather than a method, does that mean you have to precompute all of them? In that case, I'd be in favor of a method. Perhaps even scores.difference?

Yes, this redundancy is intended! Mostly because you can use something like false_positive_rate_grouped() directly as a scorer function in things like scikit-learn's grid search.

I would also be in favour of sticking to just passing around functions, rather than allowing for 'string or callable' all over the place.
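The Scores object with method-style aggregations might look roughly like this minimal sketch. `Scores`, `difference`, and `disparity` are the names floated in the discussion; the implementation details are my assumption, not Fairlearn's actual code:

```python
from itertools import combinations

class Scores:
    """Minimal sketch of the proposed object: per-group metric values plus
    method-style aggregations (hypothetical, not Fairlearn's implementation)."""

    def __init__(self, by_group):
        self.by_group = dict(by_group)  # group label -> metric value

    def difference(self):
        # The one aggregation Fairlearn supported at the time:
        # difference between the max and min group scores.
        values = self.by_group.values()
        return max(values) - min(values)

    def disparity(self, how="difference"):
        # All pairwise disparities, instead of a single aggregate.
        out = {}
        for a, b in combinations(sorted(self.by_group), 2):
            va, vb = self.by_group[a], self.by_group[b]
            if how == "difference":
                out[(a, b)] = abs(va - vb)
            elif how == "ratio":
                out[(a, b)] = min(va, vb) / max(va, vb)
            else:
                raise ValueError(f"unknown disparity {how!r}")
        return out

scores = Scores({"A": 0.75, "B": 0.5, "C": 0.625})
print(scores.difference())             # 0.25
print(scores.disparity("difference"))  # {('A', 'B'): 0.25, ('A', 'C'): 0.125, ('B', 'C'): 0.125}
```

Because these are methods rather than precomputed attributes, nothing is calculated until you ask for it, which addresses the precompute concern above.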


This is a good question - I think my naming is not super clear. I was actually thinking that there should be one type of function for getting the "disparity" between all groups.

Aggregate seems to be a helper function that can give you difference, ratio, max, min etc., right?

I don't think we'll be adding a lot of "base" metrics (we should probably think of a good name for those) on a regular basis, so we can just use scikit-learn's predefined values in addition to others that are not directly accessible from scikit-learn (e.g. …). Which would mean we need to map those metric strings to actual metric functions. Maybe it's easier to pass the function itself? Perhaps both should be possible? For first time users, I can imagine having strings to choose from is a bit less intimidating than having to find (or even define) the appropriate (scikit-learn) function yourself? But this is just speculation from my side :)
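The "aggregate" helper mentioned above, giving difference, ratio, max, min etc., could be sketched like this. The name and the list of modes come from the comment; the implementation is assumed:

```python
def aggregate(values, how):
    # Hypothetical helper: collapse per-group metric values into one number.
    values = list(values)
    if how == "difference":
        return max(values) - min(values)  # Fairlearn's min/max difference
    if how == "ratio":
        return min(values) / max(values)  # 1.0 means perfect parity
    if how == "min":
        return min(values)
    if how == "max":
        return max(values)
    raise ValueError(f"unknown aggregation {how!r}")

by_group = {"A": 0.75, "B": 0.5}
print(aggregate(by_group.values(), "difference"))  # 0.25
print(aggregate(by_group.values(), "min"))         # 0.5
```

Accepting a string mode here is the same "string or callable" trade-off as for the base metrics: convenient for common cases, but a callable argument would allow arbitrary aggregations.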
