Search Results

Search found 79 results on 4 pages for 'monad'.

Page 3 of 4

  • How can I generate the Rowland prime sequence idiomatically in J?

    - by Gregory Higley
    If you're not familiar with the Rowland prime sequence, you can find out about it here. I've created an ugly, procedural monadic verb in J to generate the first n terms in this sequence, as follows: rowland =: monad define result =. 0 $ 0 t =. 1 $ 7 while. (# result) < y do. a =. {: t n =. 1 + # t t =. t , a + n +. a d =. | -/ _2 {. t if. d > 1 do. result =. ~. result , d end. end. result ) This works perfectly, and it indeed generates the first n terms in the sequence. (By n terms, I mean the first n distinct primes.) Here is the output of rowland 20: 5 3 11 23 47 101 7 13 233 467 941 1889 3779 7559 15131 53 30323 60647 121403 242807 My question is, how can I write this in more idiomatic J? I don't have a clue, although I did write the following function to find the differences between each successive number in a list of numbers, which is one of the required steps. Here it is, although it too could probably be refactored by a more experienced J programmer than I: diffs =: monad : '}: (|@({.-2&{.) , $:@}.) ^: (1 < #) y'
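
    For readers who don't read J, here is a rough Haskell transcription of what the verb computes (a sketch only; the name rowlandH is mine and it does not address the J-idiom question itself): the Rowland recurrence a(1) = 7, a(n) = a(n-1) + gcd(n, a(n-1)), whose successive differences are either 1 or prime; keep the first n distinct primes.

        import Data.List (nub)

        -- Rowland recurrence: a(1) = 7, a(n) = a(n-1) + gcd(n, a(n-1)).
        -- Differences of consecutive terms are 1 or prime; collect the
        -- first n distinct primes in order of appearance.
        rowlandH :: Int -> [Integer]
        rowlandH n = take n . nub . filter (> 1) $ zipWith (-) (tail as) as
          where
            as = 7 : zipWith (\k a -> a + gcd k a) [2 ..] as

    rowlandH 20 should reproduce the list shown above.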

    Read the article

  • Are monads Writer m and Either e categorically dual?

    - by sdcvvc
    I noticed there is a dual relation between Writer m and Either e monads. If m is a monoid, then unit :: () -> m join :: (m,m) -> m can be used to form a monad: return is composition: a -> ((),a) -> (m,a) join is composition: (m,(m,a)) -> ((m,m),a) -> (m,a) The dual of () is Void (empty type), the dual of product is coproduct. Every type e can be given "comonoid" structure: unit :: Void -> e join :: Either e e -> e in the obvious way. Now, return is composition: a -> Either Void a -> Either e a join is composition: Either e (Either e a) -> Either (Either e e) a -> Either e a and this is the Either e monad. The arrows follow exactly the same pattern. Question: Is it possible to write a single generic code that will be able to perform both as Either e and as Writer m depending on the monoid given?
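
    Spelled out as plain Haskell (a sketch; the function names are mine), the four composites in the question look like this. It makes the shared shape visible, though it is not yet the single generic construction being asked for:

        returnW :: Monoid m => a -> (m, a)
        returnW a = (mempty, a)                     -- a -> ((), a) -> (m, a)

        joinW :: Monoid m => (m, (m, a)) -> (m, a)
        joinW (m1, (m2, a)) = (mappend m1 m2, a)    -- reassociate, then use the monoid

        returnE :: a -> Either e a
        returnE = Right                             -- a -> Either Void a -> Either e a

        joinE :: Either e (Either e a) -> Either e a
        joinE (Left e)  = Left e                    -- fold Either (Either e e) a back down
        joinE (Right x) = x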

    Read the article

  • How do I define a monadic function to work on a list in J?

    - by Gregory Higley
    Let's say I have the following J expression: # 3 ((|=0:)#]) 1+i.1000 This counts the number of numbers between 1 and 1000 that are evenly divisible by 3. (Now, before anyone points out that there's an easier way to do this, this question is about the syntax of J, and not mathematics.) Let's say I define a monadic function for this, as follows: f =: monad define # y ((|=0:)#]) 1+i.1000 ) This works great with a single argument, e.g., f 4 250 If I pass a list in, I get a length error: f 1 2 3 |length error: f Now, I completely understand why I get the length error. When you substitute the list 1 2 3 for the y argument of the monad, you get: # 1 2 3 ((|=0:)#]) 1+i.1000 If you know anything about J, it's pretty clear why the length error is occurring. So, I don't need an explanation of that. I want to define the function such that when I pass a list, it returns a list, e.g., f 1 2 3 1000 500 333 How can I either (a) redefine this function to take a list and return a list or (b) get the function to work on a list as-is without being redefined, perhaps using some adverb or other technique?

    Read the article

  • How I understood monads, part 1/2: sleepless and self-loathing in Seattle

    For some time now, I had been noticing some interest in monads, mostly in the form of unintelligible (to me) blog posts and comments saying "oh, yeah, that's a monad" about random stuff, as if it were absolutely obvious and if I didn't know what they were talking about, I was probably an uneducated idiot, ignorant of the simplest and most fundamental concepts of functional programming. Fair enough, I am pretty much exactly that. Being the kind of guy who can spend eight years in college just to...

    Read the article

  • In-memory datastore in Haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hash table? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?
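
    The last option mentioned (an STM reference to an IntMap) is easy to sketch; a minimal version, assuming the stm and containers packages, with the Store type and function names being mine:

        import Control.Concurrent.STM (STM, TVar, newTVar, readTVar, writeTVar)
        import qualified Data.IntMap as IM

        type Store v = TVar (IM.IntMap v)

        -- create an empty store inside a transaction
        newStore :: STM (Store v)
        newStore = newTVar IM.empty

        -- insert a key/value pair
        insertKey :: Int -> v -> Store v -> STM ()
        insertKey k v store = do
          m <- readTVar store
          writeTVar store (IM.insert k v m)

        -- look a key up
        lookupKey :: Int -> Store v -> STM (Maybe v)
        lookupKey k store = fmap (IM.lookup k) (readTVar store)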

    Read the article

  • Haskell: Left-biased/short-circuiting function

    - by user2967411
    Two classes ago, our professor presented to us a Parser module. Here is the code: module Parser (Parser,parser,runParser,satisfy,char,string,many,many1,(+++)) where import Data.Char import Control.Monad import Control.Monad.State type Parser = StateT String [] runParser :: Parser a -> String -> [(a,String)] runParser = runStateT parser :: (String -> [(a,String)]) -> Parser a parser = StateT satisfy :: (Char -> Bool) -> Parser Char satisfy f = parser $ \s -> case s of [] -> [] a:as -> [(a,as) | f a] char :: Char -> Parser Char char = satisfy . (==) alpha,digit :: Parser Char alpha = satisfy isAlpha digit = satisfy isDigit string :: String -> Parser String string = mapM char infixr 5 +++ (+++) :: Parser a -> Parser a -> Parser a (+++) = mplus many, many1 :: Parser a -> Parser [a] many p = return [] +++ many1 p many1 p = liftM2 (:) p (many p) Today he gave us an assignment to introduce "a left-biased, or short-circuiting version of (+++)", called (<++). His hint was for us to consider the original implementation of (+++). When he first introduced +++ to us, this was the code he wrote, which I am going to call the original implementation: infixr 5 +++ (+++) :: Parser a -> Parser a -> Parser a p +++ q = Parser $ \s -> runParser p s ++ runParser q s I have been having tons of trouble since we were introduced to parsing and so it continues. I have tried/am considering two approaches. 1) Use the "original" implementation, as in p +++ q = Parser $ \s -> runParser p s ++ runParser q s 2) Use the final implementation, as in (+++) = mplus Here are my questions: 1) The module will not compile if I use the original implementation. The error: Not in scope: data constructor 'Parser'. It compiles fine using (+++) = mplus. What is wrong with using the original implementation that is avoided by using the final implementation? 2) How do I check if the first Parser returns anything? Is something like (not (isNothing (Parser $ \s -> runParser p s) on the right track? It seems like it should be easy but I have no idea. 3) Once I figure out how to check if the first Parser returns anything, if I am to base my code on the final implementation, would it be as easy as this?: -- if p returns something then p <++ q = mplus (Parser $ \s -> runParser p s) mzero -- else (<++) = mplus Best, Jeff
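
    Note that Parser here is a type synonym (type Parser = StateT String []), so there is no Parser data constructor to apply; that is exactly the "not in scope" error. One way to sketch a left-biased choice using only the module's own parser and runParser (run p, and fall back to q only when p produced no results):

        infixr 5 <++
        (<++) :: Parser a -> Parser a -> Parser a
        p <++ q = parser $ \s -> case runParser p s of
                                   [] -> runParser q s   -- p produced nothing: try q
                                   rs -> rs              -- p succeeded: ignore q entirely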

    Read the article

  • functional dependencies vs type families

    - by mhwombat
    I'm developing a framework for running experiments with artificial life, and I'm trying to use type families instead of functional dependencies. Type families seems to be the preferred approach among Haskellers, but I've run into a situation where functional dependencies seem like a better fit. Am I missing a trick? Here's the design using type families. (This code compiles OK.) {-# LANGUAGE TypeFamilies, FlexibleContexts #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u => a -> StateT u IO a -- plus other functions class Universe u where type MyAgent u :: * withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () defaultWithAgent = undefined -- stub -- plus default implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- .. and they'll also need to make SimpleUniverse an instance of Universe -- for their agent type. -- instance Universe SimpleUniverse where type MyAgent SimpleUniverse = Bug withAgent = defaultWithAgent -- boilerplate -- plus similar boilerplate for other functions Is there a way to avoid forcing my users to write those last two lines of boilerplate? Compare with the version using fundeps, below, which seems to make things simpler for my users. (The use of UndecideableInstances may be a red flag.) (This code also compiles OK.) {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u a => a -> StateT u IO a -- plus other functions class Universe u a | u -> a where withAgent :: Agent a => (a -> StateT u IO a) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } instance Universe SimpleUniverse a where withAgent = undefined -- stub -- plus implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- And now my users only have to write stuff like... -- u :: SimpleUniverse u = SimpleUniverse "mydir"
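
    One way to shave off some of that boilerplate (a sketch, not a full answer): if the defaults really are polymorphic in u, as the stubs suggest, they can live in the class as default methods, so an instance only has to supply the associated type. This trims the code from the question down to the essentials:

        {-# LANGUAGE TypeFamilies, FlexibleContexts #-}
        import Control.Monad.State (StateT)

        class Universe u where
          type MyAgent u :: *
          withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
          withAgent = defaultWithAgent          -- default method: instances may omit it

        defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
        defaultWithAgent = undefined            -- stub, as in the question

        data SimpleUniverse = SimpleUniverse { mainDir :: FilePath }
        data Bug = Bug String

        instance Universe SimpleUniverse where
          type MyAgent SimpleUniverse = Bug     -- the only line the user still writes

    The associated-type line itself cannot be defaulted away unless there is a sensible default agent type.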

    Read the article

  • What do you call this functional language feature?

    - by Jimmy
    OK, embarrassingly enough, I posted code that I need explained. Specifically, it first chains absolute value and subtraction together, then tacks on a sort, all the while not having to mention parameters and arguments at all, because of the presence of "adverbs" that can join these function "verbs". What (non-APL-type) languages support this kind of no-arguments function composition (I have the vague idea it ties in strongly to the concepts of monad/dyad and rank, but it's hard to get a particularly easy-to-understand picture just from reading Wikipedia), and what do I call this concept?
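
    The usual names are tacit programming or point-free style; J's adverbs and conjunctions play the role of higher-order combinators. A tiny Haskell illustration of the same idea (a sketch, with made-up function names), composing without ever naming the argument:

        import Data.List (sort)

        -- explicit ("pointful"): the list parameter is named
        sortedAbsDiffs :: [Int] -> [Int]
        sortedAbsDiffs xs = sort (map abs (zipWith (-) (tail xs) xs))

        -- tacit / point-free: the parameter never appears
        sortedAbsDiffs' :: [Int] -> [Int]
        sortedAbsDiffs' = sort . map abs . (zipWith (-) =<< tail)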

    Read the article

  • Avoiding explicit recursion in Haskell

    - by Travis Brown
    The following simple function applies a given monadic function iteratively until it hits a Nothing, at which point it returns the last non-Nothing value. It does what I need, and I understand how it works. lastJustM :: (Monad m) => (a -> m (Maybe a)) -> a -> m a lastJustM g x = g x >>= maybe (return x) (lastJustM g) As part of my self-education in Haskell I'm trying to avoid explicit recursion (or at least understand how to) whenever I can. It seems like there should be a simple non-explicitly recursive solution in this case, but I'm having trouble figuring it out. I don't want something like a monadic version of takeWhile, since it could be expensive to collect all the pre-Nothing values, and I don't care about them anyway. I checked Hoogle for the signature and nothing shows up. The m (Maybe a) bit makes me think a monad transformer might be useful here, but I don't really have the intuitions I'd need to come up with the details (yet). It's probably either embarrassingly easy to do this or embarrassingly easy to see why it can't or shouldn't be done, but this wouldn't be the first time I've used self-embarrassment as a pedagogical strategy. Background: Here's a simplified working example for context: suppose we're interested in random walks in the unit square, but we only care about points of exit. We have the following step function: randomStep :: (Floating a, Ord a, Random a) => a -> (a, a) -> State StdGen (Maybe (a, a)) randomStep s (x, y) = do (a, gen') <- randomR (0, 2 * pi) <$> get put gen' let (x', y') = (x + s * cos a, y + s * sin a) if x' < 0 || x' > 1 || y' < 0 || y' > 1 then return Nothing else return $ Just (x', y') Something like evalState (lastJustM (randomStep 0.01) (0.5, 0.5)) <$> newStdGen will give us a new data point.
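
    Not a full answer, but one small step in that direction (a sketch): Data.Function.fix keeps the definition exactly as it is while removing the explicitly named recursion.

        import Data.Function (fix)

        -- same logic as before, with fix supplying the recursion
        lastJustM :: Monad m => (a -> m (Maybe a)) -> a -> m a
        lastJustM g = fix (\go x -> g x >>= maybe (return x) go)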

    Read the article

  • Use Shakespeare-text and external file

    - by Adam
    How can I convert the below example to use an external file instead of the embedded lazy text quasi quotes? {-# LANGUAGE QuasiQuotes, OverloadedStrings #-} import Text.Shakespeare.Text import qualified Data.Text.Lazy.IO as TLIO import Data.Text (Text) import Control.Monad (forM_) data Item = Item { itemName :: Text , itemQty :: Int } items :: [Item] items = [ Item "apples" 5 , Item "bananas" 10 ] main :: IO () main = forM_ items $ \item -> TLIO.putStrLn [lt|You have #{show $ itemQty item} #{itemName item}.|] This is from the yesod online book.
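
    Text.Shakespeare.Text also exports file-based splices (textFile, textFileReload). A sketch of what the loop might look like with the template moved to an external file; the file name and its contents are assumptions here, and the exact rendered type of the splice is worth checking in the haddocks:

        {-# LANGUAGE TemplateHaskell, OverloadedStrings #-}
        import Text.Shakespeare.Text (textFile)
        import qualified Data.Text.Lazy.IO as TLIO
        import Data.Text (Text)
        import Control.Monad (forM_)

        data Item = Item { itemName :: Text, itemQty :: Int }

        items :: [Item]
        items = [Item "apples" 5, Item "bananas" 10]

        -- templates/item.txt holds the old quasiquote body on one line:
        --   You have #{show $ itemQty item} #{itemName item}.
        main :: IO ()
        main = forM_ items $ \item ->
          TLIO.putStrLn $(textFile "templates/item.txt")
          -- if the splice renders strict Text instead of lazy, use Data.Text.IO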

    Read the article

  • Newbie: understanding main and IO()

    - by user1640285
    While learning Haskell I am wondering when an IO action will be performed. In several places I found descriptions like this: "What’s special about I/O actions is that if they fall into the main function, they are performed." But in the following example, 'greet' never returns and therefore nothing should be printed. import Control.Monad main = greet greet = forever $ putStrLn "Hello World!" Or maybe I should ask: what does it mean to "fall into the main function"?
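
    A small contrast may help (a sketch): main is the I/O action the runtime executes, and output happens while that action runs, not when it returns. forever simply builds an action that never finishes, so the greetings keep coming.

        import Control.Monad (forever, replicateM_)

        greetForever :: IO ()
        greetForever = forever (putStrLn "Hello World!")      -- runs, printing, and never returns

        greetThrice :: IO ()
        greetThrice = replicateM_ 3 (putStrLn "Hello World!") -- prints three lines, then returns ()

        main :: IO ()
        main = greetThrice   -- swap in greetForever to loop endlessly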

    Read the article

  • Programming concepts taken from the arts and humanities

    - by Joey Adams
    After reading Paul Graham's essay Hackers and Painters and Joel Spolsky's Advice for Computer Science College Students, I think I've finally gotten it through my thick skull that I should not be loath to work hard in academic courses that aren't "programming" or "computer science" courses. To quote the former: I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation. — Paul Graham, "Hackers and Painters" There are certainly other, much stronger reasons to work hard in the "boring" classes. However, it'd also be neat to know that these classes may someday inspire me in programming. My question is: what are some specific examples where ideas from literature, art, humanities, philosophy, and other fields made their way into programming? In particular, ideas that weren't obviously applied the way they were meant to (like most math and domain-specific knowledge), but instead gave utterance or inspiration to a program's design and choice of names. Good examples: The term endian comes from Gulliver's Travels by Jonathan Swift (see here), where it refers to the trivial matter of which end people crack open their eggs. The terms journal and transaction refer to nearly identical concepts in both filesystem design and double-entry bookkeeping (financial accounting). mkfs.ext2 even says: Writing superblocks and filesystem accounting information: done Off-topic: Learning to write English well is important, as it enables a programmer to document and evangelize his/her software, as well as appear competent to other programmers online. Trigonometry is used in 2D and 3D games to implement rotation and direction aspects. Knowing finance will come in handy if you want to write an accounting package. Knowing XYZ will come in handy if you want to write an XYZ package. Arguably on-topic: The Monad class in Haskell is based on a concept of the same name from category theory. Actually, Monads in Haskell are monads in the category of Haskell types and functions. Whatever that means...

    Read the article

  • Ur/Web: a new purely functional language for web programming?

    - by Phuc Nguyen
    I came across the Ur/Web project during my search for web frameworks for Haskell-like languages. It looks like a very interesting project done by one person. Basically, it is a domain-specific purely functional language for web programming, taking the best of ML and Haskell. The syntax is ML, but there are type classes and monads from Haskell, and it's strictly evaluated. The server side is compiled to native code, the client to JavaScript. See the slides and FAQ page for other advertised advantages. Looking at the demos and their source code, I think the project is very promising. The latest version is something like 20110123, so it seems to be under active development at this time. My question: has anybody here had any further experience with it? Are there problems/annoyances compared to Haskell, apart from ML's slightly more verbose syntax? Even if it's not well known yet, I hope more people will know of it. OMG, this looks very cool to me. I don't want this project to die!!

    Read the article

  • How can I make sense of the word "Functor" from a semantic standpoint?

    - by guillaume31
    When facing new programming jargon words, I first try to reason about them from a semantic and etymological standpoint when possible (that is, when they aren't obscure acronyms). For instance, you can get the beginning of a hint of what things like Polymorphism or even Monad are about with the help of a little Greek/Latin. At the very least, once you've learned the concept, the word itself appears to go along with it well. I guess that's part of why we give things names: to make mental representations and associations more fluent. I found Functor to be a tougher nut to crack. Not so much the C++ meaning -- an object that acts (-or) as a function (funct-) -- but the various functional meanings (in ML, Haskell) definitely left me puzzled. From the (mathematics) Functor Wikipedia article, it seems the word was borrowed from linguistics. I think I get what a "function word" or "functor" means in that context - a word that "makes function" as opposed to a word that "makes sense". But I can't really relate that to the notion of Functor in category theory, let alone functional programming. I imagined a Functor to be something that creates functions, or behaves like a function, or is short for "functional constructor", but none of those seems to fit... How do experienced functional programmers reason about this? Do they just need any label to put in front of a concept and be fine with it? Generally speaking, isn't it partly why advanced functional programming is hard to grasp for mere mortals compared to, say, OO -- very abstract in that you can't relate it to anything familiar? Note that I don't need a definition of Functor, only an explanation that would allow me to relate it to something more tangible, if there is any.
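
    One concrete anchor (a sketch, with a made-up Box type): in Haskell a Functor sends each type a to a type f a and each function a -> b to a function f a -> f b via fmap, preserving identity and composition. It is a structure-preserving mapping between types-and-functions, not a function factory.

        newtype Box a = Box a deriving Show

        instance Functor Box where
          fmap f (Box a) = Box (f a)   -- lift a plain function into the Box context

        -- fmap (+1) (Box 41)   ==>  Box 42
        -- fmap show (Box 42)   ==>  Box "42"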

    Read the article

  • Including two signatures, both with a 'type t' [Standard ML]

    - by Andy Morris
    A contrived example: signature A = sig type t val x: t end signature B = sig type t val y: t end signature C = sig include A B end Obviously, this will cause complaints that type t occurs twice in C. But is there any way to express that I want the two ts to be equated, ending up with: signature C = sig type t val x: t val y: t end I tried all sorts of silly syntax like include B where type t = A.t, which unsurprisingly didn't work. Is there something I've forgotten to try? Also, I know that this would be simply answered by checking the language's syntax for anything obvious (or a lack of), but I couldn't find a complete grammar anywhere on the internet. (FWIW, the actual reason I'm trying to do this is Haskell-style monads and such, where a MonadPlus is just a mix of a Monad and an Alternative; at the moment I'm just repeating the contents of ALTERNATIVE in MONAD_PLUS, which strikes me as less than ideal.)

    Read the article

  • Are we asking too much of transactional memory?

    - by Carl Seleborg
    I've been reading up a lot about transactional memory lately. There is a bit of hype around TM, so a lot of people are enthusiastic about it, and it does provide solutions for painful problems with locking, but you regularly also see complaints: you can't do I/O; you have to write your atomic sections so they can run several times (be careful with your local variables!); software transactional memory offers poor performance; [insert your pet peeve here]. I understand these concerns: more often than not, you find articles about STMs that only run on some particular hardware that supports some really nifty atomic operation (like LL/SC), or it has to be supported by some imaginary compiler, or it requires that all accesses to memory be transactional, or it introduces type constraints monad-style, etc. And above all: these are real problems. This has led me to ask myself: what speaks against local use of transactional memory as a replacement for locks? Would this already bring enough value, or must transactional memory be used all over the place if used at all?
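
    On the "local use" question, Haskell's stm library is one existing data point (a sketch): a single shared structure can be protected by atomically without the rest of the program being transactional at all.

        import Control.Concurrent.STM

        -- Move n units between two counters; either both writes commit or neither does.
        transfer :: TVar Int -> TVar Int -> Int -> IO ()
        transfer from to n = atomically $ do
          a <- readTVar from
          writeTVar from (a - n)
          b <- readTVar to
          writeTVar to (b + n)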

    Read the article

  • hackage package dependencies and future-proof libraries

    - by yairchu
    In the dependencies section of a cabal file: Build-Depends: base >= 3 && < 5, transformers >= 0.2.0 Should I be doing something like Build-Depends: base >= 3 && < 5, transformers >= 0.2.0 && < 0.3.0 (putting upper limits on versions of packages I depend on) or not? I'll use a real example: my "List" package on Hackage (List monad transformer and class) If I don't put the limit - my package could break by a change in "transformers" If I do put the limit - a user that uses "transformers" but is using a newer version of it will not be able to use lift and liftIO with ListT because it's only an instance of these classes of transformers-0.2.x I guess that applications should always put upper limits so that they never break, so this question is only about libraries: Shall I use the upper version limit on dependencies or not?

    Read the article

  • Haskell optimization of a function looking for a bytestring terminator

    - by me2
    Profiling of some code showed that about 65% of the time I was inside the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks if the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, then it drops both bytes and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this? I can't see it. parseECS = f [] False where f acc ff = do b <- getWord8 if ff then if b == 0x00 then f (0xff:acc) False else return $ L.pack (reverse acc) else if b == 0xff then f acc True else f (b:acc) False
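
    One obvious thing to try (a sketch only, not benchmarked): accumulate into a ByteString Builder instead of a reversed list, which drops the reverse and L.pack at the end. This assumes a bytestring version that ships Data.ByteString.Builder; the logic is otherwise unchanged.

        import Data.Binary.Get (Get, getWord8)
        import qualified Data.ByteString.Builder as B
        import qualified Data.ByteString.Lazy as L

        parseECS :: Get L.ByteString
        parseECS = go mempty False
          where
            go acc ff = do
              b <- getWord8
              if ff
                then if b == 0x00
                       then go (acc <> B.word8 0xff) False  -- 0xff 0x00: keep a literal 0xff
                       else return (B.toLazyByteString acc) -- real terminator: stop, drop both bytes
                else if b == 0xff
                       then go acc True                     -- maybe a terminator: peek at next byte
                       else go (acc <> B.word8 b) False     -- ordinary byte: keep it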

    Read the article

  • Haskell optimization of the following function

    - by me2
    Profiling of some code of mine showed that about 65% of the time I was running the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks if the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, then it drops both bytes and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this code? I can't see it. parseECS = f [] False where f acc ff = do b <- getWord8 if ff then if b == 0x00 then f (0xff:acc) False else return $ L.pack (reverse acc) else if b == 0xff then f acc True else f (b:acc) False

    Read the article

  • Where can I learn advanced Haskell?

    - by FredOverflow
    In a comment to one of my answers, SO user sdcwc essentially pointed out that the following code: comb 0 = [[]] comb n = let rest = comb (n-1) in map ('0':) rest ++ map ('1':) rest could be replaced by: comb n = replicateM n "01" which had me completely stunned. Now I am looking for a tutorial, book or PDF that teaches these advanced concepts. I am not looking for a "what's a monad" tutorial aimed at beginners or online references explaining the type of replicateM. I want to learn how to think in monads and use them effectively, monadic "patterns" if you will.
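
    For what it's worth, the trick above is just the list monad at work (a sketch): replicateM n picks one element from "01" n times and collects every combination.

        import Control.Monad (replicateM)

        -- replicateM 2 "01" == ["00","01","10","11"]
        -- replicateM 3 "01" == ["000","001","010","011","100","101","110","111"]
        main :: IO ()
        main = print (replicateM 3 "01")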

    Read the article

  • Function syntax puzzler in scalaz

    - by oxbow_lakes
    Following watching Nick Partidge's presentation on deriving scalaz, I got to looking at this example, which is just awesome: import scalaz._ import Scalaz._ def even(x: Int) : Validation[NonEmptyList[String], Int] = if (x % 2 ==0) x.success else "not even: %d".format(x).wrapNel.fail println( even(3) <|*|> even(5) ) //prints: Failure(NonEmptyList(not even: 3, not even: 5)) I was trying to understand what the <|*|> method was doing, here is the source code: def <|*|>[B](b: M[B])(implicit t: Functor[M], a: Apply[M]): M[(A, B)] = <**>(b, (_: A, _: B)) OK, that is fairly confusing (!) - but it references the <**> method, which is declared thus: def <**>[B, C](b: M[B], z: (A, B) => C)(implicit t: Functor[M], a: Apply[M]): M[C] = a(t.fmap(value, z.curried), b) So I have a few questions: How come the method appears to take a monad of one type parameter (M[B]) but can get passed a Validation (which has two type paremeters)? How does the syntax (_: A, _: B) define the function (A, B) => C which the 2nd method expects? It doesn't even define an output via =>

    Read the article

  • Monads and custom traversal functions in Haskell

    - by Bill
    Given the following simple BST definition: data Tree x = Empty | Leaf x | Node x (Tree x) (Tree x) deriving (Show, Eq) inOrder :: Tree x -> [x] inOrder Empty = [] inOrder (Leaf x) = [x] inOrder (Node root left right) = inOrder left ++ [root] ++ inOrder right I'd like to write an in-order function that can have side effects. I achieved that with: inOrderM :: (Show x, Monad m) => (x -> m a) -> Tree x -> m () inOrderM f (Empty) = return () inOrderM f (Leaf y) = f y >> return () inOrderM f (Node root left right) = inOrderM f left >> f root >> inOrderM f right -- print tree in order to stdout inOrderM print tree This works fine, but it seems repetitive - the same logic is already present in inOrder and my experience with Haskell leads me to believe that I'm probably doing something wrong if I'm writing a similar thing twice. Is there any way that I can write a single function inOrder that can take either pure or monadic functions?
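
    One way to avoid stating the traversal twice (a sketch, assuming the Tree and inOrder from the question): reuse the pure inOrder and run the monadic action over its result; laziness produces the list on demand.

        -- reuse the pure traversal; the Show constraint is not needed
        inOrderM :: Monad m => (x -> m a) -> Tree x -> m ()
        inOrderM f = mapM_ f . inOrder

        -- e.g. inOrderM print tree, as before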

    Read the article

  • How can I define a verb in J that applies a different verb alternately to each atom in a list?

    - by Gregory Higley
    Imagine I've defined the following name in J: m =: >: i. 2 4 5 This looks like the following: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 I want to create a monadic verb of rank 1 that applies to each list in this list of lists. It will double (+:) or add 1 (>:) to each alternate item in the list. If we were to apply this verb to the first row, we'd get 2 3 6 5 10. It's fairly easy to get a list of booleans which alternate with each item, e.g., 0 1 $~{:$ m gives us 0 1 0 1 0. I thought, aha! I'll use something like +:`>: @. followed by some expression, but I could never quite get it to work. Any suggestions? UPDATE The following appears to work, but perhaps it can be refactored into something more elegant by a J pro. poop =: monad define (($ y) $ 0 1 $~{:$ y) ((]+:)`(]>:) @. [)"0 y )

    Read the article
