Search Results

Search found 4547 results on 182 pages for 'haskell io'.

Page 19 of 182

  • Tip #13 java.io.File Surprises

    - by ByronNevins
    There is an assumption that I've seen in code many times that is totally wrong, and it can easily bite you. The assumption is: File.getAbsolutePath and getAbsoluteFile return paths that are not relative. Not true! Sort of. At least not in the way many people would assume. All they do is make sure that the beginning of the path is absolute. The rest of the path can be loaded with relative path elements. What do you think the following code will print?

        public class Main {
            public static void main(String[] args) {
                try {
                    File f = new File("/temp/../temp/../temp/../");
                    File abs = f.getAbsoluteFile();
                    File parent = abs.getParentFile();
                    System.out.println("Exists: " + f.exists());
                    System.out.println("Absolute Path: " + abs);
                    System.out.println("FileName: " + abs.getName());
                    System.out.printf("The Parent Directory of %s is %s\n", abs, parent);
                    System.out.printf("The CANONICAL Parent Directory of CANONICAL %s is %s\n",
                            abs, abs.getCanonicalFile().getParent());
                    System.out.printf("The CANONICAL Parent Directory of ABSOLUTE %s is %s\n",
                            abs, parent.getCanonicalFile());
                    System.out.println("Canonical Path: " + f.getCanonicalPath());
                }
                catch (IOException ex) {
                    System.out.println("Got an exception: " + ex);
                }
            }
        }

    Output:

        Exists: true
        Absolute Path: D:\temp\..\temp\..\temp\..
        FileName: ..
        The Parent Directory of D:\temp\..\temp\..\temp\.. is D:\temp\..\temp\..\temp
        The CANONICAL Parent Directory of CANONICAL D:\temp\..\temp\..\temp\.. is null
        The CANONICAL Parent Directory of ABSOLUTE D:\temp\..\temp\..\temp\.. is D:\temp
        Canonical Path: D:\

    Notice how it says that the parent of D:\ is D:\temp! The file, f, is really the root directory, so the parent is supposed to be null. I learned about this the hard way! getParentXXX simply hacks off the final item in the path, so you can get totally unexpected results like the above. Easily. I filed a bug on this behavior a few years ago [1].

    Recommendations: (1) Use getCanonical instead of getAbsolute. There is a 1:1 mapping of files and canonical filenames, i.e., each file has one and only one canonical filename, and it will definitely not have relative path elements in it. There are an infinite number of absolute paths for each file. (2) To get the parent file of File f, do the following instead of getParentFile:

        File parent = new File(f, "..");

    [1] http://bt2ws.central.sun.com/CrPrint?id=6687287

    Read the article

  • chromium-browser uses 99.99% disk IO

    - by lars
    My favorite browser, Chromium, is testing my patience. For some reason it sometimes uses 99.99% of I/O (reading 2-3 MB/s). Other processes (updatedb.mlocate, [kswapd0], clementine, compiz) show the same behavior. However, this problem always starts and ends with Chromium. To illustrate the impact on my system: when my disk starts to spin like crazy and the LED burns continuously, the system is so slow that it takes about two to five minutes to switch to tty6, log in and execute "killall chromium-browser && killall chromium". This is way faster than starting a new terminal in X; just starting a terminal seems too heavy for compiz under these circumstances. Waiting until it's over takes more than 30 minutes, if it ends at all. The exact circumstances are difficult to replicate. Several tabs have to be open, usually 8 or more. It seems that the chance increases when more complex sites like Gmail or plugins like Flash are running. Opening several new tabs at omgubunt.co.uk has the best chance of replicating this issue. I have no idea where to start looking for a solution. Any help would be greatly appreciated. Ubuntu 12.10 | 2GB | 2x 1.66GHz Intel | 32bit | IBM Thinkpad R60e

    Read the article

  • Is there any practical use for the empty type in Common Lisp?

    - by Pedro Rodrigues
    The Common Lisp spec states that nil is the name of the empty type, but I've never found any situation in Common Lisp where I felt the empty type was useful or necessary. Is it there just for completeness' sake (and removing it wouldn't cause any harm to anyone)? Or is there really some practical use for the empty type in Common Lisp? If yes, I would prefer an answer with a code example. For example, in Haskell the empty type can be used when binding foreign data structures, to make sure that no one tries to create values of that type without using the data structure's foreign interface (although in this case, the type is not really empty).
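    For reference, a minimal sketch of the Haskell idiom alluded to above: an empty (uninhabited) data type used as a phantom tag for foreign pointers, so no values of it can be conjured outside the FFI layer. The CursesWindow name and the initscr binding to the C ncurses function are illustrative assumptions:

        {-# LANGUAGE EmptyDataDecls, ForeignFunctionInterface #-}
        import Foreign.Ptr (Ptr)

        -- An "empty" type: no constructors, so no values can be built in Haskell.
        data CursesWindow

        -- Pointers can still be tagged with it for type safety.
        foreign import ccall "initscr"
            initscr :: IO (Ptr CursesWindow)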

    Read the article

  • Naming conventions for newtype deconstructors (destructors?)

    - by Petr Pudlák
    Looking into Haskell's standard library we can see:

        newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }
        newtype WrappedMonad m a = WrapMonad { unwrapMonad :: m a }
        newtype Sum a = Sum { getSum :: a }

    Apparently, there are (at least) 3 different prefixes used to unwrap a value inside a newtype: un-, run- and get-. (Moreover, run- and get- capitalize the next letter while un- doesn't.) This seems confusing. Are there any reasons for that, or is it just a historical thing? If I design my own newtype, which prefix should I use and why?

    Read the article

  • How to monitor IO svctm at a 5-minute frequency using Nagios?

    - by sabya
    I want to collect samples of iostat's svctm and await every 5 minutes from all of my servers and store them in Nagios. I want the values for what happened in the last 5 minutes, not since boot time (iostat's first output gives values since boot time). How can I do this in Nagios? EDIT: The tps should NOT be calculated as the number of transactions since reboot divided by uptime. What I want is the number of transfers that happened in the last X minutes, divided by X*60.

    Read the article

  • Using all Ten IO slots on a 7420

    - by user12620172
    So I had the opportunity recently to actually use up all ten slots in a clustered 7420 system. This actually uses 20 slots, or 22 if you count the Clustron card. I thought it was interesting enough to share here. This is at one of my clients here in southern California. You can see the picture below. We have four SAS HBAs instead of the usual two. This is because we wanted to split up the back-end traffic for different workloads. We have a set of disk trays coming from two SAS cards for nothing but Exadata backups. Then, we have a different set of disk trays coming off of the other two SAS cards for non-Exadata workloads, such as regular user file storage. We have two InfiniBand cards which allow us to do a full mesh directly into the back of the nearby, production Exadata, specifically for fast backups and restores over IB. You can see a third IB card here, which is going to be connected to a non-production Exadata for slower backups and restores from it. The 10Gig card is for client connectivity, allowing other, non-Exadata Oracle databases to make use of the many snapshots and clones that can now be created using the RMAN copies from the original production database coming off the Exadata. This allows a good number of test and development Oracle databases to use these clones without affecting performance of the Exadata at all. We also have a couple of FC HBAs, both for NDMP backups to an Oracle/StorageTek tape library and also for FC clients to come in and use some storage on the 7420.

    Now, if you are adding more cards to your 7420, be aware of which cards you can place in which slots. See the bottom graphic just below the photo. Note that the slots are numbered 0-4 for the first 5 cards, then the "C" slot, which holds the dedicated cluster card (called the Clustron), and then another 5 slots numbered 5-9. Some rules for the slots: Slots 1 & 8 are automatically populated with the two default SAS cards. The only other slots you can add SAS cards to are 2 & 7. Slots 0 and 9 can only hold FC cards, nothing else. So if you have four SAS cards, you are down to only four more slots for your 10Gig and IB cards; be sure not to waste one of those slots on an FC card, which can go into 0 or 9 instead. If at all possible, slots should be populated in this order: 9, 0, 7, 2, 6, 3, 5, 4.

    Read the article

  • Why isn't functional language syntax closer to human language?

    - by JohnDoDo
    I'm interested in functional programming and decided to go head to head with Haskell. My head hurts... but I'll eventually get it... I have one curiosity though: why is the syntax so cryptic (for lack of a better word)? Is there a reason why it isn't more expressive, closer to human language? I understand that FP is good at modelling mathematical concepts and that it borrowed some of their concise means of expression, but still, it's not math... it's a language.

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people: in which cases does it make sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F# or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities)? I ask this because, from my understanding: One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even possible to express the specs in runnable form in a separate way that adds value? The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis. Type-safe functional code is a good tool for coding extremely close to what your static analyzer understands. However, a simple mistake like using x instead of y (both being coordinates) in your code cannot be covered. Then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort. Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead is of course roughly constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't, compared to the benefits; but given how statically typed functional programming covers a lot of the ground unit tests are intended for, it feels like a constant overhead one can simply remove without penalty. From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests, and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer, and hence needing no unit test coverage?

    Read the article

  • Functional programming and stateful algorithms

    - by bigstones
    I'm learning functional programming with Haskell. In the meantime I'm studying automata theory, and as the two seem to fit well together I'm writing a small library to play with automata. Here's the problem that made me ask the question. While studying a way to evaluate a state's reachability, I realized that a simple recursive algorithm would be quite inefficient, because some paths might share some states and I might end up evaluating them more than once. For example, evaluating the reachability of g from a, I'd have to exclude f both while checking the path through d and the path through c. So my idea is that an algorithm working in parallel on many paths and updating a shared record of excluded states might be great, but that's too much for me. I've seen that in some simple recursion cases one can pass state as an argument, and that's what I have to do here, because I pass forward the list of states I've gone through to avoid loops. But is there a way to pass that list backwards too, like returning it in a tuple together with the boolean result of my canReach function? (Although this feels a bit forced.) Besides the validity of my example case, what other techniques are available to solve this kind of problem? I feel these problems must be common enough that there have to be solutions like what happens with fold* or map. So far, reading learnyouahaskell.com, I haven't found any, but consider that I haven't touched monads yet. (If interested, I posted my code on codereview.)
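    Returning the visited set alongside the result is a standard technique (it is what the State monad automates). A sketch of canReach in that style, assuming the automaton's transitions are given as an adjacency function:

        import qualified Data.Set as Set

        type Graph a = a -> [a]  -- hypothetical: successor states of a state

        -- The set of visited states is threaded forwards as an argument
        -- and backwards in the result tuple, so work is shared across paths.
        canReach :: Ord a => Graph a -> a -> a -> Bool
        canReach g from to = fst (go from Set.empty)
          where
            go x seen
              | x == to             = (True, seen)
              | x `Set.member` seen = (False, seen)
              | otherwise           = foldr step (False, Set.insert x seen) (g x)
            step next (found, seen')
              | found     = (True, seen')   -- already reached: stop searching
              | otherwise = go next seen'

    The same threading can later be hidden behind Control.Monad.State once monads enter the picture.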

    Read the article

  • Is LINQ to objects a collection of combinators?

    - by Jimmy Hoffa
    I was just trying to explain the usefulness of combinators to a colleague, and I told him the LINQ to Objects operators are like combinators, as they exhibit the same value: the ability to combine small pieces to create a single large piece. Though I don't know that I can call LINQ to Objects combinators. I've seen 2 levels of definition for combinator, which I generalize as such: A combinator is a function which only uses things passed to it. A combinator is a function which only uses things passed to it and other standard atomic functions, but not state. The first is very rigid and can be seen in the combinatory calculus systems, and in Haskell things like ($) and (.) and various similar functions meet this rule. The second is less rigid and would allow something like sum, as it uses the (+) function, which was not passed in but is standard and not stateful. The LINQ extensions in C# use state in their iteration models, though, so I feel I can't say they're combinators. Can someone who understands the definition of a combinator more thoroughly, and with more experience in these realms, give a distinct ruling on this? Are my definitions of 'combinator' wrong to begin with?
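    For concreteness, here is what the stricter definition looks like in Haskell — functions built from nothing but their own arguments, behaviourally equivalent to the standard ($) and (.):

        -- Combinators in the strict sense: no free variables, no state.
        apply :: (a -> b) -> a -> b
        apply f x = f x              -- behaves like ($)

        compose :: (b -> c) -> (a -> b) -> (a -> c)
        compose f g = \x -> f (g x)  -- behaves like (.)

    A definition such as sum = foldr (+) 0 then fits only the looser definition, since (+) and 0 are standard values that were never passed in.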

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software: functional programming languages, e.g. Haskell, Scala, etc.; and the actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages. My questions: What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) with the above languages / models? Is this area still being explored, or are there well-established practices already? Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices? How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multi-core processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multi-core processor due to higher parallelism?

    Read the article

  • How do you encode Algebraic Data Types in a C#- or Java-like language?

    - by Jörg W Mittag
    There are some problems which are easily solved by Algebraic Data Types; for example, a List type can be very succinctly expressed as:

        data ConsList a = Empty | ConsCell a (ConsList a)

        consmap f Empty          = Empty
        consmap f (ConsCell a b) = ConsCell (f a) (consmap f b)

        l = ConsCell 1 (ConsCell 2 (ConsCell 3 Empty))
        consmap (+1) l

    This particular example is in Haskell, but it would be similar in other languages with native support for Algebraic Data Types. It turns out that there is an obvious mapping to OO-style subtyping: the datatype becomes an abstract base class and every data constructor becomes a concrete subclass. Here's an example in Scala:

        sealed abstract class ConsList[+T] {
          def map[U](f: T => U): ConsList[U]
        }

        object Empty extends ConsList[Nothing] {
          override def map[U](f: Nothing => U) = this
        }

        final class ConsCell[T](first: T, rest: ConsList[T]) extends ConsList[T] {
          override def map[U](f: T => U) = new ConsCell(f(first), rest.map(f))
        }

        val l = new ConsCell(1, new ConsCell(2, new ConsCell(3, Empty)))
        l.map(1 + _)

    The only thing needed beyond naive subclassing is a way to seal classes, i.e. a way to make it impossible to add subclasses to a hierarchy. How would you approach this problem in a language like C# or Java? The two stumbling blocks I found when trying to use Algebraic Data Types in C# were: I couldn't figure out what the bottom type is called in C# (i.e. what to put into class Empty : ConsList< ??? >), and I couldn't figure out a way to seal ConsList so that no subclasses can be added to the hierarchy. What would be the most idiomatic way to implement Algebraic Data Types in C# and/or Java? Or, if it isn't possible, what would be the idiomatic replacement?

    Read the article

  • why are transaction monitors on decline? or are they?

    - by mrkafk
    http://www.itjobswatch.co.uk/jobs/uk/cics.do
    http://www.itjobswatch.co.uk/jobs/uk/tuxedo.do

    Look at the demand for programmers (% of job ads the keyword appears in), first graph under the table. It seems demand for CICS and Tuxedo has fallen from 2.5% and 1% respectively to almost zero. To me, this seems bizarre: we now have more networked and internet-enabled machines than ever before, and most of them are talking to some kind of database. So it would seem that use of products whose developers spent the last 20-30 years distributing, coordinating and optimizing transactions should be on the rise. And it appears they're not. I can see a few possible causes but can't tell whether they are true: We forgot that concurrency and distribution are really hard, and are redoing it all by ourselves, in Java, badly. Erlang killed them all. Projects nowadays have changed character: most business software has already been built, and we're all doing internet services, using stuff like Node.js, Erlang, Haskell. (I've used RabbitMQ, which is written in Erlang, but that was a "small specialized side project" kind of thing.) BigData is the emphasis now, and BigData doesn't need transactions very much (?). None of these explanations seems particularly convincing to me, which is why I'm looking for a better one. Anyone?

    Read the article

  • How to write Haskell function to verify parentheses matching?

    - by Rizo
    I need to write a function par :: String -> Bool to verify if a given string with parentheses is matching, using a stack module. Example:

        par "(((()[()])))" = True
        par "((]())"       = False

    Here's my stack module implementation:

        module Stack (Stack, push, pop, top, empty, isEmpty) where

        data Stack a = Stk [a] deriving (Show)

        push :: a -> Stack a -> Stack a
        push x (Stk xs) = Stk (x:xs)

        pop :: Stack a -> Stack a
        pop (Stk (_:xs)) = Stk xs
        pop _            = error "Stack.pop: empty stack"

        top :: Stack a -> a
        top (Stk (x:_)) = x
        top _           = error "Stack.top: empty stack"

        empty :: Stack a
        empty = Stk []

        isEmpty :: Stack a -> Bool
        isEmpty (Stk []) = True
        isEmpty (Stk _)  = False

    So I need to implement a par function that tests a string of parentheses and says whether the parentheses in it match or not. How can I do that using a stack?
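    One possible shape for par on top of this module — a sketch that pushes openers and pops on matching closers, handling only the two bracket kinds from the examples:

        par :: String -> Bool
        par = go empty
          where
            go stk []     = isEmpty stk          -- matched iff nothing left open
            go stk (c:cs)
              | c `elem` "([" = go (push c stk) cs
              | c == ')'      = close '(' stk cs
              | c == ']'      = close '[' stk cs
              | otherwise     = go stk cs        -- ignore non-bracket characters
            close o stk cs
              | isEmpty stk || top stk /= o = False
              | otherwise                   = go (pop stk) cs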

    Read the article

  • Why is Haskell used so little in the industry?

    - by bugspy.net
    It is a wonderful, very fast, mature and complete language. It has existed for a very long time and has a big set of libraries. Yet it appears not to be widely used. Why? I suspect it is because it is pretty rough and unforgiving for beginners, and maybe because its lazy evaluation makes it even harder.

    Read the article

  • In Haskell, what does it mean if a binding "shadows an existing binding"?

    - by Alistair
    I'm getting a warning from GHC when I compile:

        Warning: This binding for 'pats' shadows an existing binding
          in the definition of 'match_ignore_ancs'

    Here's the function:

        match_ignore_ancs (TextPat _ c) (Text t) = c t
        match_ignore_ancs (TextPat _ _) (Element _ _ _) = False
        match_ignore_ancs (ElemPat _ _ _) (Text t) = False
        match_ignore_ancs (ElemPat _ c pats) (Element t avs xs) =
            c t avs && match_pats pats xs

    Any idea what this means and how I can fix it? Cheers.
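    The warning means the pattern variable pats bound in the last equation hides another binding named pats that is already in scope (typically a top-level definition elsewhere in the module). The code still works — the inner binding wins — but GHC warns because shadowing often hides mistakes. Renaming the local binding silences it, e.g.:

        match_ignore_ancs (ElemPat _ c ps) (Element t avs xs) =
            c t avs && match_pats ps xs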

    Read the article

  • In Haskell, will calling length on a Lazy ByteString force the entire string into memory?

    - by me2
    I am reading a large data stream using lazy bytestrings, and want to know if at least X more bytes are available while parsing it. That is, I want to know if the bytestring is at least X bytes long. Will calling length on it result in the entire stream getting loaded, hence defeating the purpose of using the lazy bytestring? If yes, then the followup would be: how do I tell if it has at least X bytes without loading the entire stream? EDIT: Originally I asked in the context of reading files, but I understand that there are better ways to determine file size. The ultimate solution I need, however, should not depend on the lazy bytestring's source.
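    Yes — length walks the whole stream, forcing every chunk to be read (though already-consumed chunks can be garbage collected if nothing else retains them). For the followup, one sketch that inspects at most n bytes (the name hasAtLeast is illustrative):

        import Data.Int (Int64)
        import qualified Data.ByteString.Lazy as L

        -- take caps what length can demand, so at most n bytes are forced.
        hasAtLeast :: Int64 -> L.ByteString -> Bool
        hasAtLeast n bs = L.length (L.take n bs) == n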

    Read the article

  • ASP.NET: using System.IO.File.Delete() to delete file(s) from a directory inside wwwroot?

    - by Jim S
    Hello, I have an ASP.NET SOAP web service whose web method creates a PDF file, writes it to the "Download" directory of the application, and returns the URL to the user. Code:

        //Create the map images (MapPrinter) and insert them on the PDF (PagePrinter).
        MemoryStream mstream = null;
        FileStream fs = null;
        try
        {
            //Create the memorystream storing the pdf created.
            mstream = pgPrinter.GenerateMapImage();
            //Convert the memorystream to an array of bytes.
            byte[] byteArray = mstream.ToArray();
            //return byteArray;
            //Save PDF file to site's Download folder with a unique name.
            System.Text.StringBuilder sb = new System.Text.StringBuilder(Global.PhysicalDownloadPath);
            sb.Append("\\");
            string fileName = Guid.NewGuid().ToString() + ".pdf";
            sb.Append(fileName);
            string filePath = sb.ToString();
            fs = new FileStream(filePath, FileMode.CreateNew);
            fs.Write(byteArray, 0, byteArray.Length);
            string requestURI = this.Context.Request.Url.AbsoluteUri;
            string virtPath = requestURI.Remove(requestURI.IndexOf("Service.asmx")) + "Download/" + fileName;
            return virtPath;
        }
        catch (Exception ex)
        {
            throw new Exception("An error has occurred creating the map pdf.", ex);
        }
        finally
        {
            if (mstream != null) mstream.Close();
            if (fs != null) fs.Close();
            //Clean up resources
            if (pgPrinter != null) pgPrinter.Dispose();
        }

    Then in the Global.asax file of the web service, I set up a Timer in the Application_Start event listener. In the Timer's Elapsed event listener I look for any files in the Download directory that are older than the Timer interval (for testing = 1 min., for deployment ~20 min.) and delete them. Code:

        //Interval to check for old files (milliseconds); files older than now minus this interval get deleted.
        private static double deleteTimeInterval;
        private static System.Timers.Timer timer;
        //Physical path to Download folder. Everything in this folder will be checked for deletion.
        public static string PhysicalDownloadPath;

        void Application_Start(object sender, EventArgs e)
        {
            // Code that runs on application startup
            deleteTimeInterval = Convert.ToDouble(
                System.Configuration.ConfigurationManager.AppSettings["FileDeleteInterval"]);
            //Create timer with interval (milliseconds) whose elapse event will trigger
            //the delete of old files in the Download directory.
            timer = new System.Timers.Timer(deleteTimeInterval);
            timer.Enabled = true;
            timer.AutoReset = true;
            timer.Elapsed += new System.Timers.ElapsedEventHandler(OnTimedEvent);
            PhysicalDownloadPath = System.Web.Hosting.HostingEnvironment.ApplicationPhysicalPath + "Download";
        }

        private static void OnTimedEvent(object source, System.Timers.ElapsedEventArgs e)
        {
            //Delete the files older than the time interval in the Download folder.
            var folder = new System.IO.DirectoryInfo(PhysicalDownloadPath);
            System.IO.FileInfo[] files = folder.GetFiles();
            foreach (var file in files)
            {
                if (file.CreationTime < DateTime.Now.AddMilliseconds(-deleteTimeInterval))
                {
                    string path = PhysicalDownloadPath + "\\" + file.Name;
                    System.IO.File.Delete(path);
                }
            }
        }

    This works perfectly, with one exception. When I publish the web service application to inetpub\wwwroot (Windows 7, IIS 7) it does not delete the old files in the Download directory. The app works perfectly when I publish to IIS from a physical directory not in wwwroot. Obviously, it seems IIS places some sort of lock on files in the web root. I have tested impersonating an admin user to run the app and it still does not work. Any tips on how to circumvent the lock programmatically when in wwwroot? The client will probably want the app published to the root directory. Thank you very much.

    Read the article

  • What is wrong with my definition of Zip in Haskell?

    - by kunjaan
        -- e.g. myzip ['a', 'b', 'c'] [1, 2, 3, 4] -> [('a', 1), ('b', 2), ('c', 3)]
        myzip :: Ord a => [a] -> [a] -> [(a,a)]
        myzip list1 list2 = [(x,y) | [x, _] <- list1, [y, _] <- list2]

    I get this error message:

        Occurs check: cannot construct the infinite type: a = [a]
          When generalising the type(s) for `myzip'
        Failed, modules loaded: none.
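    The occurs check fires because the generator pattern [x, _] <- list1 treats each element of list1 as itself being a two-element list, forcing a = [a]. Note also that two generators in a comprehension produce a cross product, not a pairwise zip. For comparison, a direct recursive definition with the behaviour the comment describes (independently-typed lists, truncated at the shorter one):

        myzip :: [a] -> [b] -> [(a, b)]
        myzip (x:xs) (y:ys) = (x, y) : myzip xs ys
        myzip _      _      = []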

    Read the article

  • What's the way to determine if an Int is a perfect square in Haskell?

    - by valya
    I need a simple function

        is_square :: Int -> Bool

    which determines whether an Int N is a perfect square (is there an integer x such that x*x = N). Of course I can just write something like

        is_square n = sq * sq == n
            where sq = floor $ sqrt $ (fromIntegral n :: Double)

    but it looks terrible! Maybe there is a common simple way to implement such a predicate?
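    One common shape for the predicate — a sketch that rounds the floating-point root and then verifies exactly in integer arithmetic, so an sqrt result landing fractionally below the true root cannot cause a false negative:

        isSquare :: Int -> Bool
        isSquare n = n >= 0 && r * r == n
          where
            -- round rather than floor: sqrt may come out slightly under
            -- the exact root; the exact integer check is what decides.
            r = round (sqrt (fromIntegral n :: Double))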

    Read the article

  • Why do compiled Haskell libraries see invalid static FFI storage?

    - by John Millikin
    I am using GHC 6.12.1, on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly. Compiled modules have invalid pointers, and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue. Given the following three modules:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C
        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a = " ++ show siglist_a
            putStrLn $ "siglist_b = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a " siglist_a
            peekSiglist "b " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, where all pointer values are identical and valid:

        $ runhaskell Main.hs
        siglist_a = 0x00007f53a948fe00
        siglist_b = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a [2] = Just "Interrupt"
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), the output changes to:

        $ runhaskell Main.hs
        siglist_a = 0x0000000040378918
        siglist_b = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a [2] = Nothing
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    Read the article

  • Haskell: "how much" of a type should functions receive? and avoiding complete "reconstruction"

    - by L01man
    I've got these data types:

        data PointPlus = PointPlus
            { coords   :: Point
            , velocity :: Vector
            } deriving (Eq)

        data BodyGeo = BodyGeo
            { pointPlus :: PointPlus
            , size      :: Point
            } deriving (Eq)

        data Body = Body
            { geo  :: BodyGeo
            , pict :: Color
            } deriving (Eq)

    It's the base datatype for characters, enemies, objects, etc. in my game (well, I just have two rectangles as the player and the ground right now :p). When a key is pressed, the character moves right, left or jumps by changing its velocity. Moving is done by adding the velocity to the coords. Currently, it's written as follows:

        move (PointPlus (x, y) (xi, yi)) = PointPlus (x + xi, y + yi) (xi, yi)

    I'm just taking the PointPlus part of my Body and not the entire Body, otherwise it would be:

        move (Body (BodyGeo (PointPlus (x, y) (xi, yi)) wh) col) =
            Body (BodyGeo (PointPlus (x + xi, y + yi) (xi, yi)) wh) col

    Is the first version of move better? Anyway, if move only changes the PointPlus, there must be another function that calls it inside a new Body. Let me explain: there's a function update which is called to update the game state; it is passed the current game state, a single Body for now, and returns the updated Body.

        update (Body (BodyGeo (PointPlus xy (xi, yi)) wh) pict) =
            Body (BodyGeo (move (PointPlus xy (xi, yi))) wh) pict

    That tickles me. Everything is kept the same within Body except the PointPlus. Is there a way to avoid this complete "reconstruction" by hand? Like in:

        update body = backInBody $ move $ pointPlus body

    Without having to define backInBody, of course.
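    Since all three types already use record syntax, Haskell's record update syntax expresses exactly this "change one field, keep the rest" pattern. A sketch against the declarations above:

        move :: PointPlus -> PointPlus
        move pp = pp { coords = (x + xi, y + yi) }
          where (x, y)   = coords pp
                (xi, yi) = velocity pp

        update :: Body -> Body
        update b = b { geo = g { pointPlus = move (pointPlus g) } }
          where g = geo b

    Nested updates still mention each level once; lenses are the usual next step when that gets painful.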

    Read the article
