Search Results

Search found 4547 results on 182 pages for 'haskell io'.

Page 36/182

  • Applying a function that may fail to all values in a list

    - by Egwor
    I want to apply a function f to a list of values, but f might randomly fail (it is in effect making a call out to a service in the cloud). I thought I'd want to use something like map, but I want to apply the function to all elements in the list and afterwards know which ones failed and which succeeded. Currently I am wrapping the function's response objects in an error pair which I can effectively unzip afterwards, i.e. something like

        g : (a -> b) -> a -> [b, errorBoolean]
        f : a -> b

    and then run the code: map g (xs). Is there a better way to do this? The other alternative was to iterate over the values in the array and return a pair of arrays, one listing the successful values and one listing the failures. This seems to be something that ought to be fairly common. Alternatively I could return some special value. What's the best practice for dealing with this?
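
    A common way to get exactly this success/failure split (a minimal sketch, assuming the flaky call lives in IO and that Either is an acceptable wrapper; tryAll is an illustrative name) is to catch each failure as a Left and separate the outcomes with partitionEithers:

        import Control.Exception (SomeException, try)
        import Data.Either (partitionEithers)

        -- Apply a flaky IO action to every element, then split the
        -- outcomes into (failures paired with their inputs, successes).
        tryAll :: (a -> IO b) -> [a] -> IO ([(a, SomeException)], [b])
        tryAll f xs = do
          results <- mapM (\x -> either (Left . (,) x) Right <$> try (f x)) xs
          return (partitionEithers results)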

    Read the article

  • Strange type-related error

    - by vsb
    I wrote the following program:

        isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]
        primes = filter isPrime [1 .. ]

    It should construct the list of prime numbers, but I got this error:

        [1 of 1] Compiling Main             ( 7/main.hs, interpreted )

        7/main.hs:3:16:
            Ambiguous type variable `a' in the constraints:
              `Floating a' arising from a use of `isPrime' at 7/main.hs:3:16-22
              `RealFrac a' arising from a use of `isPrime' at 7/main.hs:3:16-22
              `Integral a' arising from a use of `isPrime' at 7/main.hs:3:16-22
            Possible cause: the monomorphism restriction applied to the following:
              primes :: [a] (bound at 7/main.hs:3:0)
            Probable fix: give these definition(s) an explicit type signature
                          or use -XNoMonomorphismRestriction
        Failed, modules loaded: none.

    If I specify a signature for the isPrime function explicitly:

        isPrime :: Integer -> Bool
        isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]

    I can't even compile the isPrime function:

        [1 of 1] Compiling Main             ( 7/main.hs, interpreted )

        7/main.hs:2:45:
            No instance for (RealFrac Integer)
              arising from a use of `truncate' at 7/main.hs:2:45-61
            Possible fix: add an instance declaration for (RealFrac Integer)
            In the expression: truncate (sqrt x)
            In the expression: [2 .. truncate (sqrt x)]
            In a stmt of a list comprehension: i <- [2 .. truncate (sqrt x)]

        7/main.hs:2:55:
            No instance for (Floating Integer)
              arising from a use of `sqrt' at 7/main.hs:2:55-60
            Possible fix: add an instance declaration for (Floating Integer)
            In the first argument of `truncate', namely `(sqrt x)'
            In the expression: truncate (sqrt x)
            In the expression: [2 .. truncate (sqrt x)]
        Failed, modules loaded: none.

    Can you help me understand why I am getting these errors?
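
    The underlying conflict is that x is used at an Integral type (mod) and at a Floating/RealFrac type (sqrt, truncate) at the same time, and no standard numeric type is all three. A minimal sketch of the usual fix, converting with fromIntegral before taking the square root:

        isPrime :: Integer -> Bool
        isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt (fromIntegral x :: Double))]]

        primes :: [Integer]
        primes = filter isPrime [2 ..]   -- starting at 2 also avoids reporting 1 as prime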

    Read the article

  • STM monad problem

    - by Alex
    This is just a hypothetical scenario to illustrate my question. Suppose that there are two threads and one TVar shared between them. In one thread there is an atomically block that reads the TVar and takes 10s to complete. In another thread is an atomically block that modifies the TVar every second. Will the first atomically block ever complete? Surely it will just keep going back to the beginning, because the log is in an inconsistent state?
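
    A minimal sketch of the scenario (hypothetical code; the point is that the expensive work happens inside the transaction, so every commit by the writer invalidates the reader's log and restarts it from the top):

        import Control.Concurrent (forkIO, threadDelay)
        import Control.Concurrent.STM
        import Control.Monad (forever)

        main :: IO ()
        main = do
          tv <- newTVarIO (0 :: Int)
          -- Writer thread: commits a new value every second.
          _ <- forkIO $ forever $ do
            threadDelay 1000000
            atomically $ modifyTVar' tv (+ 1)
          -- Reader: a long transaction. If tv changes before the
          -- transaction commits, the STM runtime retries it, so a
          -- fast enough writer can starve it indefinitely.
          x <- atomically $ do
            v <- readTVar tv
            let slow = sum [1 .. 200000000 :: Int]  -- stand-in for ~10s of work
            slow `seq` return (v + 0 * slow)
          print x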

    Read the article

  • Splitting list into a list of possible tuples

    - by user1742646
    I need to split a list into a list of all possible tuples, but I'm unsure of how to do so. For example,

        pairs ["cat","dog","mouse"]

    should result in

        [("cat","dog"), ("cat","mouse"), ("dog","cat"),
         ("dog","mouse"), ("mouse","cat"), ("mouse","dog")]

    I was able to form the first two, but am unsure of how to get the rest. Here's what I have so far:

        pairs :: [a] -> [(a,a)]
        pairs (x:xs) = [(m,n) | m <- [x], n <- xs]
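
    One way to finish it (a sketch; it pairs elements by position, so it still works when the list contains duplicates):

        pairs :: [a] -> [(a, a)]
        pairs xs = [ (x, y) | (i, x) <- ixs, (j, y) <- ixs, i /= j ]
          where ixs = zip [0 :: Int ..] xs

        -- pairs ["cat","dog","mouse"] ==
        --   [("cat","dog"),("cat","mouse"),("dog","cat"),
        --    ("dog","mouse"),("mouse","cat"),("mouse","dog")]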

    Read the article

  • How do record updates behave internally?

    - by redxaxder
        data Thing = Thing { a :: Int, b :: Int, c :: Int, (...), z :: Int } deriving Show

        foo = Thing 1 2 3 4 5 (...) 26

        mkBar x = x { c = 30 }

        main = do print $ mkBar foo

    What is copied over when I mutate foo in this way? As opposed to mutating part of a structure directly:

        data Thing = Thing { a :: IORef Int, b :: IORef Int, (...), z :: IORef Int }

        instance Show Thing where (...something something unsafePerformIO...)

        mkFoo = do
          a <- newIORef 1
          (...)
          z <- newIORef 26
          return (Thing a b (...) z)

        mkBar x = writeIORef (c x) 30

        main = do
          foo <- mkFoo
          mkBar foo
          print foo

    Does compiling with optimizations change this behavior?
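
    For intuition, a self-contained sketch with a smaller record (the field names are illustrative): a record update builds one fresh constructor cell and reuses the unchanged fields by reference, so the old and new values share the heap objects for every field except the updated one; nothing is deep-copied.

        data Thing = Thing { a :: Int, b :: Int, c :: Int, d :: Int }
          deriving Show

        foo :: Thing
        foo = Thing 1 2 3 4

        -- x { c = 30 } behaves like this desugaring: a single new
        -- Thing is allocated, and fields a, b and d are shared with x.
        mkBar :: Thing -> Thing
        mkBar x = Thing (a x) (b x) 30 (d x)

        main :: IO ()
        main = print (mkBar foo)   -- Thing {a = 1, b = 2, c = 30, d = 4}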

    Read the article

  • What advantage does Monad give us over an Applicative?

    - by arrowdodger
    I've read this article, but didn't understand the last section. The author says that Monad gives us context sensitivity, but it's possible to achieve the same result using only an Applicative instance:

        let maybeAge =
              (\futureYear birthYear ->
                 if futureYear < birthYear
                   then yearDiff birthYear futureYear
                   else yearDiff futureYear birthYear)
                <$> readMay futureYearString
                <*> readMay birthYearString

    It's uglier for sure, but besides that I don't see why we need Monad. Can anyone clear this up for me?
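
    The context sensitivity the article means (a minimal sketch, using readMaybe as a stand-in for readMay) is that with Applicative the shape of the effects is fixed before anything runs, while Monad lets later effects depend on an earlier result:

        import Text.Read (readMaybe)

        -- Parse a count, then parse that many further values. How much
        -- parsing happens depends on a previously parsed value, which
        -- <$> and <*> alone cannot express.
        parseN :: [String] -> Maybe [Int]
        parseN []                = Nothing
        parseN (countStr : rest) = do
          n <- readMaybe countStr
          mapM readMaybe (take n rest)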

    Read the article

  • I can't seem to figure out type variables mixed with classes.

    - by onmach
    I pretty much understand 3/4 of the rest of the language, but every time I dip my feet into using classes in a meaningful way in my code I get permanently entrenched. Why doesn't this extremely simple code work?

        data Room n = Room n n deriving Show

        class HasArea a where
          width :: (Num n) => a -> n

        instance (Num n) => HasArea (Room n) where
          width (Room w h) = w

    So, room width is denoted by Ints or maybe Floats; I don't want to restrict it at this point. Both the class and the instance restrict the n type to Num, but it still doesn't like it and I get this error:

        Couldn't match expected type `n1' against inferred type `n'
          `n1' is a rigid type variable bound by
               the type signature for `width' at Dungeon.hs:11:16
          `n' is a rigid type variable bound by
              the instance declaration at Dungeon.hs:13:14
        In the expression: w
        In the definition of `width': width (Room w h) = w
        In the instance declaration for `HasArea (Room n)'

    So it tells me the types don't match, but it doesn't tell me what types it thinks they are, which would be really helpful. As a side note, is there any easy way to debug an error like this? The only way I know is to randomly change stuff until it works.
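
    The mismatch: the class signature `width :: (Num n) => a -> n` promises that the caller can demand any numeric type n, but the Room instance can only supply the one n stored inside the room. One standard fix (a sketch using the TypeFamilies extension) ties the numeric type to the instance:

        {-# LANGUAGE TypeFamilies #-}

        data Room n = Room n n deriving Show

        class HasArea a where
          type Dim a              -- the numeric type this shape measures in
          width :: a -> Dim a

        instance HasArea (Room n) where
          type Dim (Room n) = n
          width (Room w _) = w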

    Read the article

  • Why is writeSTRef faster than an if expression?

    - by wenlong
    writeSTRef twice for each iteration:

        fib3 :: Int -> Integer
        fib3 n = runST $ do
          a <- newSTRef 1
          b <- newSTRef 1
          replicateM_ (n-1) $ do
            !a' <- readSTRef a
            !b' <- readSTRef b
            writeSTRef a b'
            writeSTRef b $! a'+b'
          readSTRef b

    writeSTRef once for each iteration:

        fib4 :: Int -> Integer
        fib4 n = runST $ do
          a <- newSTRef 1
          b <- newSTRef 1
          replicateM_ (n-1) $ do
            !a' <- readSTRef a
            !b' <- readSTRef b
            if a' > b'
              then writeSTRef b $! a'+b'
              else writeSTRef a $! a'+b'
          a'' <- readSTRef a
          b'' <- readSTRef b
          if a'' > b'' then return a'' else return b''

    Benchmark, given n = 20000:

        benchmarking 20000/fib3
        mean: 5.073608 ms, lb 5.071842 ms, ub 5.075466 ms, ci 0.950
        std dev: 9.284321 us, lb 8.119454 us, ub 10.78107 us, ci 0.950

        benchmarking 20000/fib4
        mean: 5.384010 ms, lb 5.381876 ms, ub 5.386099 ms, ci 0.950
        std dev: 10.85245 us, lb 9.510215 us, ub 12.65554 us, ci 0.950

    fib3 is a bit faster than fib4.

    Read the article

  • Serialization of a TChan String

    - by J Fritsch
    I have declared the following:

        type KEY  = (IPv4, Integer)
        type TPSQ = TVar (PSQ.PSQ KEY POSIXTime)
        type TMap = TVar (Map.Map KEY [String])

        data Qcfg = Qcfg { qthresh :: Int
                         , tdelay  :: Rational
                         , cwpsq   :: TPSQ
                         , cwmap   :: TMap
                         , cwchan  :: TChan String
                         } deriving (Show)

    and would like this to be serializable, in the sense that Qcfg can either be written to disk or sent over the network. When I compile this I get the error:

        No instances for (Show TMap, Show TPSQ, Show (TChan String))
          arising from the 'deriving' clause of a data type declaration
        Possible fix:
          add instance declarations for (Show TMap, Show TPSQ, Show (TChan String))
          or use a standalone 'deriving instance' declaration,
            so you can specify the instance context yourself

    When deriving the instance for (Show Qcfg) I am now not quite sure whether there is any chance at all to serialize my TChan, even though all the individual values in it are members of the Show class. For TMap and TPSQ, I wonder whether there are ways to show the values in the TVar directly (because it does not get changed, so there should be no need to lock it) without having to declare an instance that does a readTVar?
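
    One workable pattern (a sketch; QcfgSnapshot and snapshotQcfg are illustrative names, and note that draining the TChan consumes its contents): instead of Show instances for the transactional fields, take an atomic snapshot into a plain record and serialize that:

        import Control.Concurrent.STM

        -- A plain, serializable copy of the mutable parts of Qcfg.
        data QcfgSnapshot = QcfgSnapshot
          { snapThresh :: Int
          , snapMsgs   :: [String]
          } deriving (Show, Read)

        snapshotQcfg :: Qcfg -> STM QcfgSnapshot
        snapshotQcfg q = QcfgSnapshot (qthresh q) <$> drain (cwchan q)
          where
            drain ch = do
              done <- isEmptyTChan ch
              if done then return []
                      else (:) <$> readTChan ch <*> drain ch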

    Read the article

  • Is there a java library / package analogous to <stdio.h>?

    - by Roboprog
    I have been doing Java on and off for about 14 years, and almost nothing else for the last 6 years or so. I really hate the java.io package -- its legion of subclasses and adapters. I do like exceptions, rather than having to always poll "errno" and the like, but I could surely live without declared exceptions. Is there anything that functions like the Unix/ANSI stdio.h routines in C?

    I know we will never be rid of java.io and its conventions until Java itself is retired, as they have metastasized throughout the many frameworks that have accreted to Java. That said, I would like something that works kind of like this (let's call it package javax.stdio):

    Have a main utility class, perhaps FileStar, that can read and write files (or pipes), either text or binary, either sequentially or random access, with constructors that mimic fopen() and popen(). This class should have a load of useful methods that do things like fread(), fwrite(), fgets(), fputs(), fseek(), and whatever else (fprintf()?). Methods that are incompatible with the open/construct mode simply throw up (just like some of the collections classes/methods do when restricted).

    Then, have a bunch of interfaces that suggest how you intend to use the stream once you have created it: Sequential, RandomAccess, ReadOnly, WriteOnly, Text, Binary, plus combinations of these that make sense. Perhaps even have methods to return the appropriate type-cast (interface), throwing up if you have asked for something incompatible.

    For extra flavor, skip the declared exceptions -- e.g. javax.stdio.IOException extends RuntimeException. Is there an open source project like this floating around?

    Read the article

  • Return value from Object match

    - by Hito_kun
    I'm by no means JS fluent, so forgive me if I'm asking for some really basic stuff, but I've not been able to find a proper answer to my question. I'm writing my first Node.js (plus Express Framework and Socket.io) app, and I'm having some fun setting up the server side of a FB-like messenger (surprise!!!). So, let's say I have this data structure to store online users (this is a JSON array, but I'm not sure it is the best way to do it, or whether I should go with JavaScript objects):

        [
          { "site": 45,
            "users": [
              { "idUser": 5,  "idSocket": "qwe87r7w8qwe", "name": "Carlos Ray Norris" },
              { "idUser": 6,  "idSocket": "v8d9d0fgfs7d", "name": "John Connor" }
            ]
          },
          { "site": 48,
            "users": [
              { "idUser": 22, "idSocket": "qwe87r7w8qwe", "name": "David Bowie" },
              { "idUser": 23, "idSocket": "v8d9d0fgfs7d", "name": "Barack H. Obama" }
            ]
          }
        ]

    What I want to do is to search the array for value x given y; in this case, retrieving the idSocket knowing the idUser WITHOUT having to run through all the array values. So I have basically two questions: first, what would be the proper way to store online users? And secondly, how do I find values matching the values I already know (find the idSocket that goes with a given idUser)? I would like a pure JS approach (or one using the tools given by Node, Socket.io or Express), but if that's not possible then I can look at jQuery.

    Read the article

  • Disk performance below expectations

    - by paulH
    This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gbps bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gbps bandwidth. Server A had much better I/O rates than Server B.

    Once I discovered the difference in the disks, I had Server A rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in disk performance. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in disk performance. We now have two identical servers built, with the exception that one has six 6Gbps disks and the other eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'.

    Comparative I/O information below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

        Server A (original setup with 6Gbps disks)
          D: Read   63 MB/s    D: Write  170 MB/s
          E: Read   68 MB/s    E: Write  320 MB/s

        Server B (original setup with 3Gbps disks)
          D: Read   52 MB/s    D: Write   88 MB/s
          E: Read  112 MB/s    E: Write  130 MB/s

        Server A (new setup with 3Gbps disks)
          D: Read   55 MB/s    D: Write   85 MB/s
          E: Read   67 MB/s    E: Write  180 MB/s

        Server B (new setup with 6Gbps disks)
          D: Read   61 MB/s    D: Write   95 MB/s
          E: Read   69 MB/s    E: Write  180 MB/s

    Can anybody suggest any ideas what is going on here? The drives in use are as follows:

        Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
        Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
        Hitachi Hus153030vls300 300GB Server SAS
        Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS

    Read the article

  • Measuring 'total bytes written' under Linux

    - by badnews
    We're quite interested in exploring the possibility of using SSD drives in a server environment. However, one thing we need to establish is expected drive longevity. According to this article, manufacturers report drive endurance in terms of 'total bytes written' (TBW); e.g. from that article, a Crucial C400 SSD is rated at 72TB TBW. Do any scripts/tools exist in the Linux ecosystem to help us measure TBW? (And then make a more educated decision on the feasibility of using SSD drives.)

    Read the article

  • Delay before download starts when serving files using nginx

    - by glumbo
    I am currently using nginx to serve downloads from my website. Users sometimes need to wait about 5 seconds before their download starts after clicking a download link. I'm not sure if I need to start using RAID 10 (I'm currently using RAID 50) or if this is a problem with my nginx configuration. I am also on a 1Gbit line, but downloads sometimes go as low as 10kB/s. My server: dual Xeon 5620 CPUs, 12x2TB drives, 8GB RAM. This is my nginx.conf:

        #user nobody;
        worker_processes 12;
        worker_rlimit_nofile 10240;
        worker_rlimit_sigpending 32768;
        error_log logs/error.log crit;
        #pid logs/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log off;
            limit_conn_log_level info;
            log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status';
            #sendfile on;
            #tcp_nopush on;
            reset_timedout_connection on;
            server_tokens off;
            autoindex off;
            keepalive_timeout 0;
            #keepalive_timeout 65;
            limit_zone one $binary_remote_addr 10m;
            perl_modules perl;
            perl_require download.pm;

    Read the article

  • Bad I/O scheduler?

    - by user62367
    OS: up-to-date Fedora 14, working as a "normal desktop". It's doing very well, but if I start VirtualBox and, e.g., install a guest on it, it just "freezes". I mean, if there is disk activity in a VirtualBox guest, the computer becomes unresponsive; even the mouse lags, for about 50 minutes. What could be the bottleneck? What could be the problem? If anyone has any tips/howtos to speed it up, please help! It has a normal 2.5" HDD at 5400 RPM. Would it be worth buying a 2.5" HDD with 7200 RPM? T7200 CPU, 4 GByte RAM, "vm.swappiness = 0". Thank you!

    Read the article

  • Different block sizes for partition and underlying logical disk on HP Raid Controller (Linux)

    - by Wawrzek
    Following links collected in this thread, I started to check blockdev and found the following output, indicating different block sizes for partition c0d9p1 and the underlying device (c0d9):

        [root@machine ~]# blockdev --report /dev/cciss/c0d9
        RO RA  SSZ BSZ StartSec       Size  Device
        rw 256 512 4096        0 3906963632 /dev/cciss/c0d9
        [root@machine ~]# blockdev --report /dev/cciss/c0d9p1
        RO RA  SSZ BSZ StartSec       Size  Device
        rw 256 512 2048        1 3906959039 /dev/cciss/c0d9p1

    We have a lot of small files, so yes, the block size is smaller than normal. The device is a logical drive on an HP P410 RAID controller; a simple disk without any RAID, i.e. RAID 0 on one disk to be precise. (Please note that the above configuration is a feature, not a bug.) Therefore, I have the following questions: Can the above discrepancy in block size affect disk performance? Can I control the block size using hpacucli?

    Read the article

  • How to identify heavy write to disk?

    - by Darth
    I have this problem with a server running a CakePHP application. The server is insanely slow; I first thought it was an application problem, but then I found a constant 5-6MB/s write to disk. What is the easiest way to find the cause of such heavy writing? The server is running Gentoo.

    Read the article

  • I/O intensive MySql server on Amazon AWS

    - by rhossi
    We recently moved from a traditional data center to cloud computing on AWS. We are developing a product in partnership with another company, and we need to create a database server for the product we'll release. I have been using Amazon Web Services for the past 3 years, but this is the first time I've received a spec with this very specific hardware configuration. I know there are trade-offs and that real hardware will always be faster than virtual machines; knowing that fact beforehand, what would you recommend?

    1) Amazon EC2?
    2) Amazon RDS?
    3) Something else?
    4) Forget it baby, stick to the real hardware

    Here are the hardware requirements. Two servers are specified, with identical configurations; each will be focused on I/O and MySQL for the statistics, with the memory size and disk space for the image hosting.

    I/O: The main load on each server will be I/O processing. FusionIO cards have proven themselves extremely efficient; this is currently the best you can have in this domain.
      - Fusion ioDrive2 MLC 365GB (http://www.fusionio.com/load/-media-/1m66wu/docsLibrary/FIO_ioDrive2_Datasheet.pdf)

    CPU: MySQL will use fewer CPU cores than Apache, but it will use them very hard; the E7 family's 30M L3 cache provides a performance boost.
      - 1x Intel E7-2870 will be OK.

    Storage: SAS will be good enough in terms of performance, especially considering the space required.
      - RAID 10 of 4 x SAS 10k or 15k drives, for a total available space of 512 GB.

    Memory:
      - 64 GB minimum is required on each server, considering the size of the statistics database. Warning: the statistics database will grow quickly; if possible, consider starting with 128 GB directly, it will help.

    Thanks in advance. Best,

    Read the article

  • I/O APIC on Virtualbox

    - by RidDeBakTiYar
    I'm trying to use the PIT to do APIC timer calibration, and I want to use the PIT through the I/O APIC instead of the PIC. On Bochs I get interrupts from the PIT at the requested frequency via the I/O APIC, while on VirtualBox I can't receive a single interrupt. It must be an I/O APIC configuration problem, because as soon as I unmask the first PIC entry the IRQ fires; however, that's not what I want. Can you think of any condition that would keep VirtualBox from firing the IRQ?

    - I'm not assuming a single I/O APIC configuration (even though VirtualBox has only 1).
    - I'm not assuming identity mappings between ISA IRQs and I/O APIC GSIs (I use the ACPI MADT table to get the I/O APIC base address and the interrupt overrides).
    - I'm setting the Trigger Mode and Polarity bits correctly (on VirtualBox they are set as '00 - default', which means edge/high, right?).
    - I'm putting the BSP APIC ID into the Destination field (using Physical destination) and vector 0x20. The BSP APIC ID being 0 on VirtualBox, this ends up with 0x0000000000000020 written to the IOREDTBL.
    - And, just in case I'm getting the wrong values from the Interrupt Override descriptor, I'm setting this value on all the IOREDTBL entries (I know this is very, very bad, and it won't be kept once I understand what's going on).

    The only thing I haven't checked is the Local APIC configuration. Actually, I'm not writing any value to the BSP LAPIC, just reading the APIC ID and using it to boot APs through IPIs. And obviously I'm setting bit 11 in the IA32_APIC_BASE MSR to enable the LAPIC. Any ideas? Thanks in advance.

    Read the article

  • Will adding a SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c0t2d0   ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t5d0   ONLINE       0     0     0
            c0t6d0   ONLINE       0     0     0
            c0t7d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
        spares
          c0t9d0     AVAIL
          c0t1d0     AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not, how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory Access

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of to be concerned about (any others?) are:

    1. I/O size: the database engine's and the disk's native sizes should either match, or the database's native I/O size should be a multiple of the disk's native I/O size.
    2. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it.
    3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.

    I have been looking for information on how to ensure these things for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check them and change them if needed. Any and all input appreciated.

    Read the article

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with four 1TB disks in RAID 10) I get:

        write cache disabled:  200 MB/s read throughput, 30 MB/s write throughput

    I thought that was really low compared to my desktop HDD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results:

        write cache enabled:   280 MB/s read, 260 MB/s write

    That is great and all, but it means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 that of a regular drive on RAID 10 if you don't have a write cache? It almost feels like it's intentionally bad, to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.

    Read the article
