Why can `of` be defined on the constructor? #88
We ran into this problem when creating an interface in shift-reducer-js that accepts a Fantasy Land Monoid. We opted to prefer the function on the prototype over the function on the constructor itself, but the decision was mostly arbitrary. I don't know if I agree with you about attaching the function to the constructor being nonsensical. We need a way to pass around the Monoid representation as a value, and without a type system, it makes sense to attach each function to a constructor.
I believe this is mostly to accommodate a common idiom in JavaScript, which is using constructor functions to initialise instances. In this way, it's assumed that ADTs will be defined as either:

Or by using unrelated instances and leaving dynamic dispatch to decide all the behaviour. In truth, there isn't much of a difference between the two in JS, since there are no types, but we need to be able to accommodate calling (edit: sorry for closing this issue earlier and the typos, I sent this from my phone :x)
I would propose that all type functions need to (only) be available as methods, on the instances. That's how the laws are specified, and that's how libraries will use them - as without instances, there is no type information in JS. This way, we also won't get any problems with name collisions. Functions implement monads? Even when all constructor functions inherit an
@michaelficarra: It's not that I think that attaching to a constructor function is nonsensical. It's the requirement to do so that seems nonsensical. There are good ways in Javascript to create types that do not depend on constructor functions. I believe this would be a legitimate version of:

```js
var makeId = (function() {
  var baseId = {
    equals: function(b) { // Setoid
      return typeof this.value.equals === "function" ? this.value.equals(b.value) : this.value === b.value;
    },
    concat: function(b) { // Semigroup (value must also be a Semigroup)
      return makeId(this.value.concat(b.value));
    },
    empty: function() { // Monoid (value must also be a Monoid)
      return makeId(this.value.empty ? this.value.empty() : this.value.constructor.empty());
    },
    map: function(f) { // Functor
      return makeId(f(this.value));
    },
    ap: function(b) { // Applicative
      return makeId(this.value(b.value));
    },
    chain: function(f) { // Chain
      return f(this.value);
    },
    extend: function(f) { // Extend
      return makeId(f(this));
    },
    of: function(a) { // Monad
      return makeId(a);
    },
    from: function() { // Comonad
      return this.value;
    }
  };
  function makeId(a) {
    var id = Object.create(baseId);
    id.value = a;
    return id;
  }
  return makeId;
}());
```

The prototypes of It simply seems to me that if we insist that instances have the
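For what it's worth, here is a runnable sketch of the idea above — a trimmed-down `makeId`, reproduced so the snippet is self-contained — showing that every operation, including `of`, lives on the instance with no constructor function anywhere:

```javascript
// Constructor-free Identity type via Object.create (condensed from above).
var makeId = (function() {
  var baseId = {
    map: function(f) { return makeId(f(this.value)); },    // Functor
    chain: function(f) { return f(this.value); },          // Chain
    of: function(a) { return makeId(a); }                  // Monad -- on the instance
  };
  function makeId(a) {
    var id = Object.create(baseId);
    id.value = a;
    return id;
  }
  return makeId;
}());

var id = makeId(3);
console.log(id.map(function(x) { return x + 1; }).value);            // 4
console.log(id.chain(function(x) { return makeId(x * 2); }).value);  // 6
console.log(id.of(9).value);                                         // 9
```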
I'd dispute that. I'd say having a common prototype means being of the same type. But that's not central here. That constructor functions, as I noted in my previous comment, are not the only way (or possibly even the best way) to work with such types is pretty important.
Unless I've missed it, there's nothing in the laws that require us to be able to do that. In fact, as written the laws call I understand that this will often be a useful idiom, but as a requirement of the laws it bothers me. It makes working with abstract Applicatives or abstract Monoids significantly more difficult. Think of writing some tree-traversal concatenation algorithm using an abstract Monoid. It's relatively easy if I know where to find the
I do that all the time. And I do it even from a desktop or laptop, so I have no excuses! 😄

That's what I would like to see as well.
The goal is to make Applicatives and Monoids significantly more useful to abstract over. Look at how broken the following function is:

```js
function sequence(list) {
  return list.reduce(
    function(a, b) {
      return a.of(function(c) {
        return c.concat([b]);
      }).ap(a);
    },
    list[0].of([])
  );
}

sequence([]);
```

What happens when we pass in an empty array? We break. Now imagine if we had another function pass us an arbitrary list; we do not know where to get the methods from, but this should be a totally valid use of the function. Let's demand they also show us where to get them from:

```js
function sequence(ac, list) {
  return list.reduce(
    function(a, b) {
      return ac.of(function(c) {
        return c.concat([b]);
      }).ap(a);
    },
    ac.of([])
  );
}

sequence(Maybe, []);
```

Now things work! But now we have to pass dictionaries around. That sucks. The specification tries to meet both: you can pass an object with the instances on it, or pass a dictionary (by using the This is not fixing you to how to instantiate the object, you can always add your own I originally required the Again, the whole point of this project is to enable forms of abstraction which we otherwise do not have. What should we do?
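To make the contrast concrete, a runnable sketch of the dictionary-passing version; `Id` is a hypothetical applicative invented here for illustration, not something from the spec:

```javascript
// Hypothetical Id applicative used to exercise the dictionary-passing sequence.
function Id(value) {
  if (!(this instanceof Id)) return new Id(value);
  this.value = value;
}
Id.of = function(a) { return Id(a); };
Id.prototype.ap = function(b) { return Id(this.value(b.value)); };

function sequence(ac, list) {
  return list.reduce(
    function(a, b) {
      return ac.of(function(c) { return c.concat([b]); }).ap(a);
    },
    ac.of([])
  );
}

console.log(sequence(Id, []).value);        // [] -- no crash on the empty list
console.log(sequence(Id, [1, 2, 3]).value); // [1, 2, 3]
```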
I vote for keeping it the same: if you want to use the magic then you get it for free, else you can make your own dictionary (as said).

All right, I did get confused by the use of 'constructor', and I think it might confuse others. But that is not the fundamental point. If The problem does not really manifest itself when working with an Applicative or a Monoid of known type. Presumably then you also know how to construct a new one. But we should be able to write generic functions such as
Why does that matter? If they both conform to the laws and pass, who cares? Fundamentally they should be the same: they should return the same type and value. If one uses a constructor and another one doesn't, it shouldn't matter. As long as it quacks like a duck...

But that's the point. What if they don't? The spec says that one of them must exist. Let's imagine an implementation of some Monoid has a conforming implementation on the constructor. If my

Seems so... but then you should know what you're folding. If it's a library you're implementing, and you tell someone it has to conform to said specifications, then they also need to grasp the idea of monoids.

It's not, though; it's following the specification by having a function with the same name, but it's obviously not following the laws. That's the point: the specification has certain names for functions, and clashes are going to happen in the real world. You can't control that, but you can say that in order to work with this
I think I'm still confused. As far as I can tell, if someone were to code this:

```js
function Monoid(empty, concat) {
  function _Monoid(a) {
    if (!(this instanceof _Monoid)) {
      return new _Monoid(a);
    }
    this.value = (arguments.length) ? a : empty;
    return this;
  }
  _Monoid.prototype.concat = function(a) {
    return new _Monoid(concat(this.value, a.value));
  };
  _Monoid.prototype.empty = function() {
    return new _Monoid();
  };
  return _Monoid;
}

var Add = Monoid(0, function(a, b) { return a + b; });
Add(3).concat(Add(4)); //=> Add(7)
```

she would have a perfectly legitimate Monoid. But if she were to then do this:

```js
Add.empty = function() {
  this.value = 0;
};
```

has she suddenly broken it? If I were to code a But my big concern is in writing code that would work with various implementations. One of the goals of Ramda is to work with various FP libraries in a consistent manner. At the moment, for instance,
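One way to see the worry concretely: a sketch (the `Monoid` factory above, repeated here so it runs standalone) in which the instance-level `empty` satisfies right identity while a later, non-conforming constructor-level `empty` simply throws:

```javascript
function Monoid(empty, concat) {
  function _Monoid(a) {
    if (!(this instanceof _Monoid)) return new _Monoid(a);
    this.value = (arguments.length) ? a : empty;
  }
  _Monoid.prototype.concat = function(a) {
    return new _Monoid(concat(this.value, a.value));
  };
  _Monoid.prototype.empty = function() { return new _Monoid(); };
  return _Monoid;
}

var Add = Monoid(0, function(a, b) { return a + b; });

// Right identity holds via the instance method:
console.log(Add(3).concat(Add(3).empty()).value); // 3

// ...but nothing stops a non-conforming `empty` on the constructor:
Add.empty = function() { throw new TypeError('Sorry, Charlie'); };
try {
  Add.empty();
} catch (e) {
  console.log(e.message); // generic code reaching for the constructor breaks here
}
```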
```js
Add.empty = function() {
  this.value = 0;
};
```

You should only work with immutable data. Doing this mutates the object and causes unspecified results. Maybe that should be explicitly said in the spec?

I do realize that. That wasn't the point. I was just trying to find something that made sense to me for the name 'empty'. Imagine this instead:

```js
Add.empty = function() {
  throw new TypeError('Sorry, Charlie');
};
```

No, but you did highlight a very valid point :)
That would mean that all FP libraries would have to follow the spec, and that's a big ask!

How do you know that map is working correctly? Why do you care? You've provided the implementations for this to happen; it's up to the end user to be happy with the results, as you've outlined through Ramda how to get the results.

```js
Functor.prototype.map = function(f) {
  throw new TypeError('???');
};
```

How do you solve that one? It's the same situation.

Not quite the same. In that case Ramda calls the only function it knows about, viz. the "instance method", and it throws. Oh well! The difference in the case of the spec is that a compliant implementation may be in another location.
Yes, that wouldn't be a valid Monoid instance anymore. As a clarification, try to think about these objects in JS as if they were type classes. We're just relying on the object's dynamic dispatch here because it blends well with the language. But, in truth, we have this:

```
class Monoid m where
  m empty -> m
  m concat: m -> m
end

instance Monoid for M where
  a empty = 0
  a concat: b = a + b
end
```

Given this, if someone were to do:

```
a = new M
instanceFor(a).empty = function(){ return 1 }
```

Then it's obvious that the Monoid laws stop working, because you've changed the type class.
It's still not clear to me if I'm not getting my point across or if I'm missing something fundamental.
Well not really. I don't really care about solving that one. If the implementation is not compliant, all bets are off. Ramda might corrupt your data, crash your car, steal your girlfriend, whatever. The point is that because

Well, we'd only work with ones that were compatible. But FantasyLand compatibility is only part of it. We'd like to be able to integrate with the various immutable data libraries, libraries like Highland, ones like Bacon and RxJs, and even possibly the various Promise libraries, although that might be more of a stretch. We don't expect complete integration. But if the library has objects containing a fairly standard to and possibly take advantage of currying and easy composition of such functions. For these purposes, though, I'm focused on FantasyLand. My trouble is that knowing something complies with the spec doesn't seem to give me enough information about how to use it generically. And that seems a shame.
Do you have a proposed solution? That might help guide the conversation.

But it still follows the Semigroup laws, and it still "provides an empty method on itself or its constructor object" that "takes no arguments" and returns "a value of the same Monoid", a method which adheres to the right identity and the left identity. Why is it not a Monoid?

I like @bergus's suggestion:

Then it may appear statically on the constructor, or somewhere else, but it must appear on the instance.

@joneshf: I like @bergus' suggestion (#88 (comment)) that "all type functions need to (only) be available as methods, on the instances". But the comment from @puffnfresh (#88 (comment)) makes me realize that there are at least some problems with it. My next best suggestion would be to require them on the instance but note that they may also appear elsewhere. But failing all that, I think the best bet would be to require that if these functions appear in either place, they have the correct behavior.

If
Something needs to exist, but does it have to be a property of the type or simply a function passed to I'm still a newbie in Haskell, but if I read this right:

```hs
foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
```

It expects the construction of a Monoid from a value to be done from a function that's supplied, and not from some intrinsic property of the Monoid. Am I reading it wrong? If not, is there some overriding reason the FantasyLand spec can't make the same sort of assumptions?
If you attempt to implement that for any sum type where one of the choices doesn't have the last type variable (e.g. the

```hs
foldMap :: FoldableDict t -> MonoidDict m -> (a -> m) -> t a -> m
```

If you were to implement that function by hand, the most sensible implementation for

```hs
foldMap f (Just x) = f x
foldMap f Nothing  = mempty
```

or after desugaring, it might look something like this:

```hs
foldMap _ _  f (Just x) = f x
foldMap _ md f Nothing  = mempty md
```

If the

```hs
data MonoidDict m = MonoidDict {mempty :: m, mappend :: m -> m -> m}
```

Here the This doesn't even work in JS unless you explicitly pass a constructor to the function (like the Haskell compiler does after desugaring), for reasons similar to what @puffnfresh gave above for Of course, if you want to make it more general and work for any

```hs
foldMap f = foldr (\a acc -> f a `mappend` acc) mempty
```

Again, after desugaring, it might look like:

```hs
foldMap fd md f = foldr fd (\a acc -> mappend md (f a) acc) (mempty md)
```

So the compiler would resolve the If you're fine with partial functions and the runtime errors they bring about (hopefully you're not), then it's fine to only require the values to have the functions like
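The desugared, dictionary-passing style translates directly to JavaScript; a sketch in which the `Sum` dictionary and the `{empty, concat}` shape are assumptions for illustration:

```javascript
// foldMap with an explicit monoid dictionary, mirroring the desugared Haskell:
// foldMap md f = foldr (\a acc -> mappend md (f a) acc) (mempty md)
function foldMap(monoidDict, f, xs) {
  return xs.reduceRight(function(acc, a) {
    return monoidDict.concat(f(a), acc);
  }, monoidDict.empty());
}

// Hypothetical Sum monoid dictionary.
var Sum = {
  empty: function() { return 0; },
  concat: function(a, b) { return a + b; }
};

console.log(foldMap(Sum, function(x) { return x * 2; }, [1, 2, 3])); // 12
console.log(foldMap(Sum, function(x) { return x; }, []));            // 0 -- empty needs no value
```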
@CrossEye: I don't think this should be closed; I think this is a serious issue. @robotlolita: Thanks, that read on return-type polymorphism was enlightening. I like your last two code examples (as you say, they're more idiomatic), but I think that generic function should be so that you can pass
I finally remembered the real reason we should be disliking the properties being on the constructor. Imagine we want to have

```js
function Tuple2(a, b) {
  this.a = a;
  this.b = b;
}

Tuple2.empty = function() {
  // What do we do here?
  return new Tuple2(undefined, undefined);
};
```

Urgh. Really we need something like:

```js
function Tuple2Monoid(a, b) {
  return {
    empty: function() {
      return new Tuple2(a.empty(), b.empty());
    },
    append: function(t1, t2) {
      return new Tuple2(a.append(t1.a, t2.a), b.append(t1.b, t2.b));
    }
  };
}
```

And then we have to pass that dictionary around. I think at this point we may as well admit to being defeated by JavaScript.
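A runnable sketch of that dictionary in use; the `Sum` and `Product` component dictionaries are invented here for illustration:

```javascript
function Tuple2(a, b) {
  this.a = a;
  this.b = b;
}

function Tuple2Monoid(a, b) {
  return {
    empty: function() { return new Tuple2(a.empty(), b.empty()); },
    append: function(t1, t2) {
      return new Tuple2(a.append(t1.a, t2.a), b.append(t1.b, t2.b));
    }
  };
}

// Hypothetical component monoid dictionaries.
var Sum     = { empty: function() { return 0; }, append: function(x, y) { return x + y; } };
var Product = { empty: function() { return 1; }, append: function(x, y) { return x * y; } };

var M = Tuple2Monoid(Sum, Product);
var t = M.append(new Tuple2(3, 5), new Tuple2(4, 2));
console.log(t.a, t.b);                  // 7 10
console.log(M.empty().a, M.empty().b);  // 0 1
```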
"Never forget that Javascript hates you."
I can never figure out if I'm missing something or if I'm in some way ahead. @puffnfresh: Is a type like that really useful? I can imagine instead something like this:

```js
function Monoid(empty, concat) {
  function _Monoid(a) {
    if (!(this instanceof _Monoid)) {
      return new _Monoid(a);
    }
    this.value = (arguments.length) ? a : empty;
  }
  _Monoid.prototype.concat = function(a) {
    return new _Monoid(concat(this.value, a.value));
  };
  _Monoid.empty = function() {
    return new _Monoid();
  };
  return _Monoid;
}

var Tuple2Monoid = function(M1, M2) {
  function _Tuple2Monoid(a, b) {
    if (!(this instanceof _Tuple2Monoid)) {
      return new _Tuple2Monoid(a, b);
    }
    // possibly errors over bad types...
    this.value = (arguments.length > 1) ? [a, b] : [M1.empty(), M2.empty()];
  }
  _Tuple2Monoid.prototype.concat = function(tuple) {
    return new _Tuple2Monoid(
      this.value[0].concat(tuple.value[0]),
      this.value[1].concat(tuple.value[1])
    );
  };
  _Tuple2Monoid.empty = function() {
    return new _Tuple2Monoid(M1.empty(), M2.empty());
  };
  return _Tuple2Monoid;
};

var Add = Monoid(0, function(a, b) { return a + b; });
var Multiply = Monoid(1, function(a, b) { return a * b; });
var AddMultTuple = Tuple2Monoid(Add, Multiply);
var amt = AddMultTuple(Add(3), Multiply(5));
var amt2 = AddMultTuple(Add(4), Multiply(2));
amt.concat(amt2); //=> AddMultTuple(Add(7), Multiply(10))
```

Such a tuple now has an appropriate type, and it's built around the constructor-style. Of course it's still dynamically typed, and nothing will stop you from trying to do
@CrossEye I'm confused, what is different there?
Thank you for a very informative response. If nothing else, I'm learning a lot here! Your final example is this version:

```js
Array.prototype.foldMap = function(f, resultMonoid) {
  return this.length === 0 ? empty(resultMonoid)
       : /* else */ f(this[0]).concat(this.slice(1).foldMap(f, resultMonoid));
};
```

I'm not quite clear what is meant by resultMonoid as opposed to just monoid, but I'm not sure that's really important. More importantly, I still don't see how the dictionary works here in the case that the monoid's

It seems to me that the spec is confused about what it wants to define. It knows that defining types is paramount, and it would love to do so by specifying only the algebraic laws affecting the properties of instances of those types. But because of the issues discussed here, it needs also to be able to define a few meta-properties (

Although I said something different earlier, my take is that it would be better to specify only the dictionary for these. I wish the spec weren't using the word "constructor" for this, as, although it is the most common case, it's still just a pun from the Javascript side. But regardless of how the dictionary is defined, the type is, for better or worse, not just a collection of instances, but also a dictionary containing certain metadata. It would be clearer if the spec said so explicitly.
@joneshf: Am I caught up in the same confusion over naming discussed in #90? When I read @puffnfresh's example,
Ok, I think I'm coming to terms with at least one reason why this is bothering me from a mathematical point of view. As far as I can tell, the FantasyLand laws allow the empty set to be treated as a Monoid! This is of course a problem for something like

A normal mathematical definition of a Monoid, and similarly the one from Haskell, has to do with a set having an associative binary operation and an identity element. The operation (actually in Semigroup) is no issue. But Definitions from other fields, which actually insist on an identity element, don't have this issue. But FantasyLand tries to derive everything from values, or at least to allow you to define your Monoid in that manner. Here's my reasoning:

My type = E = {};

All these hold trivially, since E is empty:

Hence E is a Setoid. No surprise. And E is a Semigroup, again trivially, since E is empty:

But now I have a choice. I can choose to say that this type's

Hence E is a Monoid. Am I misinterpreting "A value which has a Monoid must provide an
ping Did I uncover a hole in the spec here? Or is my reasoning off somehow? Should I try to figure out a solution myself for a PR, or does it need additional discussion?

Would it be a solution if "or" were simply replaced with "and" in "itself or its constructor"?

I'd say "and" is better than "or". Note that this would require us to bump the spec's version number.

Yeah. Btw, changes like that should become easier to make after libraries start depending on the fantasy-land package. NPM will warn if two incompatible versions are used.

I think Is it time for this (original context #97 (comment)):

It will plug the hole that allows things that (mathematically) shouldn't be considered Monoids to pass the definition. This might have to have some additional text to insist that the two versions were the same. But I'm not at all certain it's the right thing to do. I think the instance
I used to think like this, but I've run into so many situations now where Consider the following code (more context); I've got no way of knowing what the

```js
Writer.of = function(x) {
  return Writer(() => Tuple2(x, ?.empty()));
};
```

I'm coming to the conclusion that the only way to do this correctly is to pass explicit dictionaries for everything. I actually want to rewrite the spec to incorporate this idea.
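For illustration, a minimal sketch of that conclusion: parameterise `Writer` by the log's monoid dictionary so `of` knows how to produce an empty log. All names here (`WriterWith`, `StringLog`, and a plain `[value, log]` pair in place of `Tuple2`) are assumptions, not the thread's actual code:

```javascript
function WriterWith(logMonoid) {
  function Writer(run) { // run :: () -> [value, log]
    if (!(this instanceof Writer)) return new Writer(run);
    this.run = run;
  }
  Writer.of = function(x) {
    // The dictionary supplies the empty log -- no "?" left to fill in.
    return Writer(function() { return [x, logMonoid.empty()]; });
  };
  Writer.prototype.chain = function(f) {
    var run = this.run;
    return Writer(function() {
      var r1 = run();
      var r2 = f(r1[0]).run();
      return [r2[0], logMonoid.concat(r1[1], r2[1])];
    });
  };
  return Writer;
}

var StringLog = { empty: function() { return ''; }, concat: function(a, b) { return a + b; } };
var W = WriterWith(StringLog);

var result = W.of(2)
  .chain(function(x) { return W(function() { return [x * 3, 'tripled;']; }); })
  .run();
console.log(result); // [6, 'tripled;']
```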
I've been playing around with the idea suggested by @puffnfresh here. It does make it very explicit about the creation, and I've only run into one issue, and that's the verbosity to set everything up.

```js
// Tuple related types.
const M = nested.Monoid(λ);
const F = nested.Functor(λ);
const S = nested.Setoid(λ);

// Sum Monoid
const SM = Monoid(λ);

exports.nested = {
  'testing': function(t) {
    const tuple = M(SM, SM).empty();
    const mapped = F.map(tuple, (x) => x + 1);
    t.ok(S.equals(Tuple(Sum(0), Sum(1)), mapped));
    t.done();
  }
};
```
Related: #158 |
I believe this has been adequately addressed by #180.
It's quite possible that I'm simply missing something simple, but I'm disturbed by this sentence (emphasis added):

(All the same points will apply to Monoid's `empty` method, but it's Applicatives which are worrying me now.)

For a specification which is usually so prescriptive, this is surprisingly lax. But that's not the real problem. It seems to me that this makes it tremendously more difficult to write code that works across all Applicative Types; in fact, it probably makes it impossible.

A few points:

- of. Should the specification be taken to read, for instance, "The appropriate one of `m.of(a).chain(f)` or `m.constructor.of(a).chain(f)` is equivalent to `f(a)` (left identity)", or some more precise version of the same?
- `Object.create`, why should the specification assume that I use constructor functions to define my types? It's quite possible to do without them, and it's growing ever more popular to work that way.
- `of` on both. This would probably not cause an issue. But nothing in the specification would prevent me from creating a conforming `of` on the instances and an unrelated `of` on the constructor, or vice versa:

Does that conform to the specification as written so long as `Maybe.of` upholds the required laws?

Does this one so long as `Maybe.prototype.of` upholds the laws?

If so, can anyone suggest a way that I can generically apply `of` to algebraic types without knowing for each specific type which version is being used?

Or... am I just missing something simple again?

This was all brought to mind by a recent Ramda issue.