Any merit to a lazy-ish juxt function?
- by NielsK
In answering a question about a function that maps over multiple functions with the same arguments (the answer: juxt), I came up with a function that takes essentially the same form as juxt, but uses map:
(defn could-be-lazy-juxt
  [& funs]
  (fn [& args]
    ;; apply each function to the same argument list, lazily via map
    (map #(apply %1 %2) funs (repeat args))))
=> ((juxt inc dec str) 1)
[2 0 "1"]
=> ((could-be-lazy-juxt inc dec str) 1)
(2 0 "1")
=> ((juxt * / -) 6 2)
[12 3 4]
=> ((could-be-lazy-juxt * / -) 6 2)
(12 3 4)
As posted in the original question, I have little clue about its laziness or performance, but timing in the REPL does suggest something lazy-ish is going on.
=> (time (apply (juxt + -) (range 1 100)))
"Elapsed time: 0.097198 msecs"
[4950 -4948]
=> (time (apply (could-be-lazy-juxt + -) (range 1 100)))
"Elapsed time: 0.074558 msecs"
(4950 -4948)
=> (time (apply (juxt + -) (range 10000000)))
"Elapsed time: 1019.317913 msecs"
[49999995000000 -49999995000000]
=> (time (apply (could-be-lazy-juxt + -) (range 10000000)))
"Elapsed time: 0.070332 msecs"
(49999995000000 -49999995000000)
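Since map only builds a lazy seq, time here mostly measures constructing that seq rather than doing the work. A fairer comparison forces realization inside the timing with doall (a sketch; the elapsed times will of course vary per machine):

;; force the lazy seq before time returns; the elapsed time should now
;; be in the same ballpark as the juxt version
=> (time (doall (apply (could-be-lazy-juxt + -) (range 10000000))))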
I'm sure this function is not really that quick (printing the outcome 'feels' like it takes about as long in both cases). Doing a (take x) on the result only limits the number of functions evaluated, which is probably of limited applicability, and limiting the other arguments with take should be just as lazy with the normal juxt.
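To make the "(take x) only limits the number of functions evaluated" point concrete, here is a small demonstration (the noisy wrapper is a helper of mine, added just for illustration):

(defn noisy
  ;; wrap f so that applying it prints a marker first
  [f tag]
  (fn [& args]
    (println "running" tag)
    (apply f args)))

;; realizing only the first element applies only the first function;
;; "running :dec" should never print
=> (doall (take 1 ((could-be-lazy-juxt (noisy inc :inc) (noisy dec :dec)) 1)))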
Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a composition step between other lazy functions? What are the performance implications (memory / CPU / object count / compilation)?
Is that why the Clojure juxt implementation is done with a reduce and returns a vector?
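For reference, the variadic arity of clojure.core/juxt does reduce into a vector, applying every function eagerly. A simplified sketch of that shape (not the actual core source, which also special-cases the one-, two- and three-function arities) would be:

(defn eager-juxt
  [& fs]
  (fn [& args]
    ;; apply every function immediately, conj-ing results into a vector
    (reduce (fn [acc f] (conj acc (apply f args))) [] fs)))

Every function is applied up front, and the caller gets a vector rather than a lazy seq.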
Edit: Somehow, things can always be done more simply in Clojure.
(defn could-be-lazy-juxt
  [& funs]
  (fn [& args]
    ;; the closure already captures args, so (repeat args) is not needed
    (map #(apply % args) funs)))
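The simplified version behaves the same at the REPL:

=> ((could-be-lazy-juxt inc dec str) 1)
(2 0 "1")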