Search Results

Search found 4547 results on 182 pages for 'haskell io'.


  • "Strictly positive" in Agda

    - by Jason
    I'm trying to encode some denotational semantics into Agda based on a program I wrote in Haskell. data Value = FunVal (Value -> Value) | PriVal Int | ConVal Id [Value] | Error String In Agda, the direct translation would be: data Value : Set where FunVal : (Value -> Value) -> Value PriVal : N -> Value ConVal : String -> List Value -> Value Error : String -> Value but I get an error relating to FunVal, because: Value is not strictly positive, because it occurs to the left of an arrow in the type of the constructor FunVal in the definition of Value. What does this mean? Can I encode this in Agda? Am I going about it the wrong way? Thanks.
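
    A hedged aside, not from the question: the reason Agda rejects this is that the negative occurrence of Value in FunVal lets you write a diverging term with no visible recursion, which would break totality. A minimal Haskell sketch of that danger (the selfApply/omega names are invented for illustration):

        -- Hypothetical sketch: the negative occurrence of Value in FunVal lets
        -- us build a diverging term without any explicit recursion, which is
        -- exactly what Agda's strict-positivity check rules out.
        data Value
          = FunVal (Value -> Value)
          | PriVal Int
          | ConVal String [Value]
          | Error String

        selfApply :: Value -> Value
        selfApply v@(FunVal f) = f v            -- apply a function value to itself
        selfApply v            = v

        omega :: Value
        omega = selfApply (FunVal selfApply)    -- forcing this never terminates

    A common workaround is to replace the semantic function space with a syntactic one (defunctionalisation), so the recursive occurrence disappears; which encoding fits best depends on the semantics being modelled.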

    Read the article

  • java.io.FileNotFoundException (Permission denied) When trying to write to the Android sdcard

    - by joefischer1
    I am trying to select an image file from the photo gallery and write to the sdcard. Below is the code that results in an exception. It appears to throw this exception when trying to create the FileOutputStream. I have the following line added to the manifest file nested inside the application element. I can't find a solution to the problem: <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> public boolean saveSelectedImage( Uri selectedImage, int imageGroup, int imageNumber ) { boolean exception = false; InputStream input = null; OutputStream output = null; if( externalStorageIsWritable() ) { try { ContentResolver content = ctx.getContentResolver(); input = content.openInputStream( selectedImage ); if(input != null) Log.v( CLASS_NAME, "Input Stream Opened successfully"); File outFile = null; File root = Environment.getExternalStorageDirectory( ); if(root == null) Log.v(CLASS_NAME, "FAILED TO RETRIEVE DIRECTORY"); else Log.v(CLASS_NAME, "ROOT DIRECTORY is:"+root.toString()); output = new FileOutputStream( root+"/Image"+ imageGroup + "_" + imageNumber + ".png" ); if(output != null) Log.e( CLASS_NAME, "Output Stream Opened successfully"); // output = new FileOutputStream // ("/sdcard/Image"+imageGroup+"_"+imageNumber+".png"); byte[] buffer = new byte[1000]; int bytesRead = 0; while ( ( bytesRead = input.read( buffer, 0, buffer.length ) ) >= 0 ) { output.write( buffer, 0, buffer.length ); } } catch ( Exception e ) { Log.e( CLASS_NAME, "Exception occurred while moving image: "); e.printStackTrace(); exception = true; } finally { // if(input != null)input.close(); // if(output != null)output.close(); // if (exception ) return false; } return true; } else return false; }

    Read the article

  • C# Asynchronous Network IO and OutOfMemoryException

    - by The.Anti.9
    I'm working on a client/server application in C#, and I need to get asynchronous sockets working so I can handle multiple connections at once. Technically it works the way it is now, but I get an OutOfMemoryException after about 3 minutes of running. MSDN says to use a WaitHandler to do WaitOne() after the socket.BeginAccept(), but it doesn't actually let me do that. When I try to do that in the code it says WaitHandler is an abstract class or interface, and I can't instantiate it. I thought maybe I'd try a static reference, but it doesn't have the WaitOne() method, just WaitAll() and WaitAny(). The main problem is that in the docs it doesn't give a full code snippet, so you can't actually see where their "wait handler" is coming from; it's just a variable called allDone, which also has a Reset() method in the snippet, which a waithandler doesn't have. After digging around in their docs, I found some related thing about an AutoResetEvent in the Threading namespace. It has a WaitOne() and a Reset() method. So I tried that around the while(true) { ... socket.BeginAccept( ... ); ... }. Unfortunately this makes it only take one connection at a time. So I'm not really sure where to go. Here's my code: class ServerRunner { private Byte[] data = new Byte[2048]; private int size = 2048; private Socket server; static AutoResetEvent allDone = new AutoResetEvent(false); public ServerRunner() { server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); IPEndPoint iep = new IPEndPoint(IPAddress.Any, 33333); server.Bind(iep); Console.WriteLine("Server initialized.."); } public void Run() { server.Listen(100); Console.WriteLine("Listening..."); while (true) { //allDone.Reset(); server.BeginAccept(new AsyncCallback(AcceptCon), server); //allDone.WaitOne(); } } void AcceptCon(IAsyncResult iar) { Socket oldserver = (Socket)iar.AsyncState; Socket client = oldserver.EndAccept(iar); Console.WriteLine(client.RemoteEndPoint.ToString() + " connected"); byte[] message = Encoding.ASCII.GetBytes("Welcome"); client.BeginSend(message, 0, message.Length, SocketFlags.None, new AsyncCallback(SendData), client); } void SendData(IAsyncResult iar) { Socket client = (Socket)iar.AsyncState; int sent = client.EndSend(iar); client.BeginReceive(data, 0, size, SocketFlags.None, new AsyncCallback(ReceiveData), client); } void ReceiveData(IAsyncResult iar) { Socket client = (Socket)iar.AsyncState; int recv = client.EndReceive(iar); if (recv == 0) { client.Close(); server.BeginAccept(new AsyncCallback(AcceptCon), server); return; } string receivedData = Encoding.ASCII.GetString(data, 0, recv); //process received data here byte[] message2 = Encoding.ASCII.GetBytes("reply"); client.BeginSend(message2, 0, message2.Length, SocketFlags.None, new AsyncCallback(SendData), client); } }

    Read the article

  • Different results when applying function to equal values

    - by Johannes Stiehler
    I'm just digging a bit into Haskell and I started by trying to compute the Phi-Coefficient of two words in a text. However, I ran into some very strange behaviour that I cannot explain. After stripping everything down, I ended up with this code to reproduce the problem: let sumTup = (sumTuples . concat) frequencyLists let sumFixTup = (138, 136, 17, 204) putStrLn (show ((138, 136, 17, 204) == sumTup)) putStrLn (show (phi sumTup)) putStrLn (show (phi sumFixTup)) This outputs: True NaN 0.4574206676616167 So although sumTup and sumFixTup show as equal, they behave differently when passed to phi. The definition of phi is: phi (a, b, c, d) = let dividend = fromIntegral(a * d - b * c) divisor = sqrt(fromIntegral((a + b) * (c + d) * (a + c) * (b + d))) in dividend / divisor
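
    One hedged explanation (an assumption, since the snippet does not show sumTuples or frequencyLists): sumTup may be built from machine-width Ints while the literal tuple defaults to Integer, and on a 32-bit Int the product under the sqrt, (138+136)*(17+204)*(138+17)*(136+204) ≈ 3.19e9, overflows to a negative number, so sqrt returns NaN. A minimal sketch of that hypothesis:

        -- Sketch of the overflow hypothesis: with Int32 the product under sqrt
        -- wraps negative, so phi gives NaN; with Integer (the default for the
        -- literal tuple) the same numbers give ~0.457.
        import Data.Int (Int32)

        phi :: Integral a => (a, a, a, a) -> Double
        phi (a, b, c, d) =
          let dividend = fromIntegral (a * d - b * c)
              divisor  = sqrt (fromIntegral ((a + b) * (c + d) * (a + c) * (b + d)))
          in dividend / divisor

        main :: IO ()
        main = do
          print (phi (138, 136, 17, 204 :: Integer))  -- 0.4574206676616167
          print (phi (138, 136, 17, 204 :: Int32))    -- NaN: the product overflows

    The (==) test stays True because it compares the components element by element, where nothing overflows.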

    Read the article

  • Has anyone ever encountered a Monad Transformer in the wild?

    - by martingw
    In my area of business - back office IT for a financial institution - it is very common for a software component to carry a global configuration around, to log its progress, to have some kind of error handling / computation short circuit... Things that can be modelled nicely by Reader-, Writer-, Maybe-monads and the like in Haskell and composed together with monad transformers. But there seem to be some drawbacks: The concept behind monad transformers is quite tricky and hard to understand, monad transformers lead to very complex type signatures, and they inflict some performance penalty. So I'm wondering: Are monad transformers best practice when dealing with those common tasks mentioned above?
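
    For concreteness, a minimal sketch (Config, AppError and the field names are invented) of the kind of stack the question describes, combining a read-only configuration, logging and short-circuiting errors with mtl-style transformers:

        import Control.Monad.Reader (ReaderT, runReaderT, asks)
        import Control.Monad.Writer (WriterT, runWriterT, tell)
        import Control.Monad.Except (ExceptT, runExceptT, throwError)

        data Config   = Config { threshold :: Int }
        data AppError = TooBig Int deriving Show

        -- One type alias hides the stack from the rest of the code.
        type App a = ReaderT Config (WriterT [String] (ExceptT AppError IO)) a

        check :: Int -> App Int
        check n = do
          limit <- asks threshold
          tell ["checking " ++ show n]
          if n > limit then throwError (TooBig n) else pure n

        runApp :: Config -> App a -> IO (Either AppError (a, [String]))
        runApp cfg app = runExceptT (runWriterT (runReaderT app cfg))

    Whether this counts as best practice is debated; common alternatives are a single hand-rolled application monad or an effect library, which trade the complex signatures for other costs.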

    Read the article

  • What exactly is a Monad?

    - by WeNeedAnswers
    Can someone please explain to me what a Monad is? I think I grasp Monoids and I grasp that they basically control the input of state into a system. I just look at the text in Haskell and glaze over. A simple example in Python would be great. My current understanding is that a Monoid is a procedural piece of code that needs to be read from top to bottom in sequence with the output being the input for the function. I think that I may even have got that wrong, but hey I am here to learn.
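
    The questions nearby are Haskell-flavoured, so here is a minimal Haskell (not Python) sketch of the usual first example: Maybe as a monad. Each step may fail; the monad feeds each result into the next step and short-circuits on the first failure, which is the "output becomes the next input" intuition above, minus the confusion with monoids.

        -- The Maybe monad chains steps that can fail; a Nothing anywhere
        -- stops the rest of the computation.
        safeDiv :: Int -> Int -> Maybe Int
        safeDiv _ 0 = Nothing
        safeDiv x y = Just (x `div` y)

        calc :: Int -> Int -> Int -> Maybe Int
        calc a b c = do
          x <- safeDiv a b   -- stops here with Nothing if b == 0
          y <- safeDiv x c   -- otherwise x flows into the next step
          pure (y + 1)

        -- calc 100 5 2 == Just 11
        -- calc 100 0 2 == Nothing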

    Read the article

  • How does functional programming work?

    - by Headcrab
    I'm used to imperative/OO programming (know C, C++, Python, PHP, etc.). I wanted to get into functional programming but there are some things unclear to me. Take for example the languages F# and Haskell: How do you implement loops? By recursion? Eew. What about conditions? How can you get by without variables? I mean.. What do we have RAM for.. storing variables, right?
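
    A short hedged sketch of the usual answers, in Haskell: loops become recursion or folds, conditions are ordinary expressions, and "variables" are immutable bindings passed as arguments rather than slots that get overwritten.

        -- "Loops": explicit recursion ...
        sumTo :: Int -> Int
        sumTo 0 = 0
        sumTo n = n + sumTo (n - 1)

        -- ... or a fold, which packages the same recursion pattern.
        sumTo' :: Int -> Int
        sumTo' n = foldr (+) 0 [1 .. n]

        -- "Conditions": guards (and if-then-else) are expressions with a value.
        classify :: Int -> String
        classify n
          | n < 0     = "negative"
          | n == 0    = "zero"
          | otherwise = "positive"

        -- "Variables": state is threaded through arguments instead of mutated.
        runningTotal :: [Int] -> [Int]
        runningTotal = go 0
          where
            go _   []       = []
            go acc (x : xs) = let acc' = acc + x in acc' : go acc' xs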

    Read the article

  • File IO, Handling CRLF

    - by aCuria
    Hi, I am writing a program that takes a file and splits it up into multiple small files of a user-specified size, then joins the multiple small files back again. 1) the code must work for C and C++ 2) I am compiling with multiple compilers. I am reading and writing to the files by using the standard library functions fread() and fwrite(). The problem I am having pertains to CRLF. If the file I am reading from contains CRLF, then I want to retain it when I split and join the files back together. If the file contains LF, then I want to retain LF. Unfortunately, fread() seems to store CRLF as \n (I think), and whatever is written by fwrite() is compiler-dependent. How do I approach this problem? Thanks.

    Read the article

  • How do I get Cabal to bypass my Windows proxy settings?

    - by Brent.Longborough
    When retrieving packages with Cabal, I frequently get errors with this message: user error (Codec.Compression.Zlib: premature end of compressed stream) It looks like Cabal is using my Windows Networking proxy settings (for Privoxy). From digging around Google, Cabal or its libraries appear to have (had) a problem in this area. Possible solutions I can see are: Turn off proxying while using Cabal (not very keen on this one); or Get a patch and start hacking. I'm hesitant to go down this path, as I'm a complete Haskell noob and I'm not yet comfortable with Darcs; or Give it the magic "can I haz no proxy" parameter. Hence the question.

    Read the article

  • ghc can't find my cabal installed packages

    - by nont
    I've installed ghc 6.12.3, and then the Haskell Platform. I'm trying to compile a test program: $ ghc test.hs test.hs:3:0: Failed to load interface for `Bindings': Use -v to see a list of the files searched for. so, naturally, I do cabal install Bindings Which works fine, and places the package in ~/.cabal/lib/bindings-0.1.2 The problem is that when I go to compile again with ghc, it still doesn't find the package I've installed with cabal. Compiling in verbose mode gives: ghc -v test.hs Using binary package database: /home/ludflu/ghc/lib/ghc-6.12.3/package.conf.d/package.cache Using binary package database: /home/ludflu/.ghc/x86_64-linux-6.12.3/package.conf.d/package.cache As suggested by another stackoverflow user, I tried: ghc-pkg describe rts > rts.pkg vi rts.pkg # add the /home/ludflu/.cabal/lib to `library-dirs` field ghc-pkg update rts.pkg But to no avail. How do I add the .cabal to the list of package directories to search? Thank you!

    Read the article

  • erlang io:format, and a hanging web application

    - by williamstw
    While I'm learning a new language, I'll typically put lots of silly println's to see what values are where at specific times. It usually suffices because the languages typically have a tostring equivalent available. In trying that same approach with Erlang, my webapp just "hangs" when it attempts to print a value that's not a list. This happens when the variable being printed is a tuple instead of a list. There's no error, exception, nothing... it just doesn't respond. Now, I'm muddling through by being careful about what I'm writing out and as I learn more, things are getting better. But I wonder, is there a way to more reliably [blindly] print a value to stdout? Thanks, --tim

    Read the article

  • IO Exception on invoking execute() method of HttpGet class

    - by AndroidNoob
    Why am I getting an IOException in this piece of code? Thanks. HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet("http://www.google.com/"); HttpResponse response; try { response = httpclient.execute(httpget); } catch (ClientProtocolException e) { Toast.makeText(this, "ClientProtocolEx", Toast.LENGTH_LONG).show(); e.printStackTrace(); } catch (IOException e) { Toast.makeText(this, "IOEx", Toast.LENGTH_LONG).show(); e.printStackTrace(); }

    Read the article

  • Analysis and Design for Functional Programming

    - by edalorzo
    How do you deal with analysis and design phases when you plan to develop a system using a functional programming language like Haskell? My background is in imperative/object-oriented programming languages, and therefore, I am used to use-case analysis and the use of UML to document the design of a program. But the thing is that UML is inherently related to the object-oriented way of doing software. And I am intrigued about what would be the best way to develop documentation and define software designs for a system that is going to be developed using functional programming. Would you still use use-case analysis or perhaps structured analysis and design instead? How do software architects define the high-level design of the system so that developers follow it? What do you show to your clients or to new developers when you are supposed to present a design of the solution? How do you document a picture of the whole thing without having first to write it all? Is there anything comparable to UML in the functional world?

    Read the article

  • Why are difference lists more efficient than regular concatenation?

    - by Craig Innes
    I am currently working my way through the Learn You a Haskell book online, and have come to a chapter where the author is explaining that some list concatenations can be inefficient: For example ((((a ++ b) ++ c) ++ d) ++ e) ++ f is supposedly inefficient. The solution the author comes up with is to use 'difference lists' defined as newtype DiffList a = DiffList {getDiffList :: [a] -> [a] } instance Monoid (DiffList a) where mempty = DiffList (\xs -> [] ++ xs) (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs)) I am struggling to understand why DiffList is more computationally efficient than a simple concatenation in some cases. Could someone explain to me in simple terms why the above example is so inefficient, and in what way the DiffList solves this problem?
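
    A hedged sketch of the intuition: xs ++ ys must rebuild (copy) all of xs, so in ((((a ++ b) ++ c) ++ d) ++ e) ++ f the chunk a is copied once per append and n appends cost O(n^2). A difference list represents a list as "the function that prepends it", so appending is just function composition, and the whole result is walked only once, when the composed function is finally applied to [].

        -- Standalone sketch of the same DiffList idea (no Monoid instance needed
        -- to see the point).
        newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }

        fromList :: [a] -> DiffList a
        fromList xs = DiffList (xs ++)

        toList :: DiffList a -> [a]
        toList (DiffList f) = f []

        append :: DiffList a -> DiffList a -> DiffList a
        append (DiffList f) (DiffList g) = DiffList (f . g)   -- O(1), no copying

        -- toList (fromList "ab" `append` (fromList "cd" `append` fromList "ef"))
        --   == "abcdef", and each chunk is traversed exactly once at the end.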

    Read the article

  • C++ IO with Hard Drive

    - by Tomas Cokis
    I was wondering if there was any kind of portable (Mac & Windows) method of reading and writing to the hard drive which goes beyond iostream.h, in particular features like getting a list of all the files in a folder, moving files around, etc. I was hoping that there would be something like SDL around, but so far I haven't been able to find much. Any ideas??

    Read the article

  • Why is writing a compiler in a functional language so efficient and easier?

    - by wvd
    Hello all, I've been thinking about this question for a long time, but really couldn't find the answer on Google, nor a similar question on Stack Overflow. If there is a duplicate, I'm sorry for that. A lot of people seem to say that writing compilers and other language tools in functional languages such as OCaml and Haskell is much more efficient and easier than writing them in imperative languages. Is this true? And if so -- why is it so efficient and easy to write them in functional languages instead of in an imperative language, like C? Also -- isn't a language tool in a functional language slower than in some low-level language like C? Thanks in advance, William v. Doorn
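
    Not an answer from the thread, but a tiny illustration of the usual argument: a compiler is mostly transformations over tree-shaped data, and algebraic data types plus pattern matching state those transformations almost as directly as you would on paper. A hypothetical three-constructor expression language:

        -- Hypothetical mini-language: the AST, a constant-folding pass and an
        -- evaluator are each a handful of pattern-matching equations.
        data Expr
          = Lit Int
          | Add Expr Expr
          | Mul Expr Expr
          deriving Show

        constFold :: Expr -> Expr
        constFold (Add a b) = case (constFold a, constFold b) of
          (Lit x, Lit y) -> Lit (x + y)
          (a', b')       -> Add a' b'
        constFold (Mul a b) = case (constFold a, constFold b) of
          (Lit x, Lit y) -> Lit (x * y)
          (a', b')       -> Mul a' b'
        constFold e = e

        eval :: Expr -> Int
        eval (Lit n)   = n
        eval (Add a b) = eval a + eval b
        eval (Mul a b) = eval a * eval b

        -- eval (constFold (Mul (Add (Lit 1) (Lit 2)) (Lit 4))) == 12

    As for speed, the usual caveat: compiled Haskell or OCaml is typically somewhat slower than hand-tuned C, but compilers are rarely bottlenecked by that.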

    Read the article

  • Why is writing a compiler in a functional language easier?

    - by wvd
    Hello all, I've been thinking about this question for a long time, but really couldn't find the answer on Google, nor a similar question on Stack Overflow. If there is a duplicate, I'm sorry for that. A lot of people seem to say that writing compilers and other language tools in functional languages such as OCaml and Haskell is much more efficient and easier than writing them in imperative languages. Is this true? And if so -- why is it so efficient and easy to write them in functional languages instead of in an imperative language, like C? Also -- isn't a language tool in a functional language slower than in some low-level language like C? Thanks in advance, William v. Doorn

    Read the article

  • Same loop giving different output. Java IO

    - by Nitesh Panchal
    Hi, I am facing a very strange problem where the same loop keeps giving me different output when the value of BUFFER changes: final int BUFFER = 100; char[] charArr = new char[BUFFER]; StringBuffer objStringBuffer = new StringBuffer(); while (objBufferedReader.read(charArr, 0,BUFFER) != -1) { objStringBuffer.append(charArr); } objFileWriter.write(objStringBuffer.toString()); When I change the BUFFER size to 500 it gives me a file of 7 KB; when I change the BUFFER size to 100000 it gives a file of 400 KB where the contents are repeated again and again. Please help. What should I do to prevent this?

    Read the article

  • Copying an IO stream results in corruption.

    - by StackedCrooked
    I have a small Mongrel webserver that sends the stdout of a process to a http response: response.start(200) do |head,out| head["Content-Type"] = "text/html" status = POpen4::popen4(command) do |stdout, stderr, stdin, pid| stdin.close() FileUtils.copy_stream(stdout, out) FileUtils.copy_stream(stderr, out) puts "Sent response." end end This works well most of the time, but sometimes characters get duplicated. For example this is what I get from the "man ls" command: LS(1) User Commands LS(1) NNAAMMEE ls - list directory contents SSYYNNOOPPSSIISS llss [_O_P_T_I_O_N]... [_F_I_L_E]... DDEESSCCRRIIPPTTIIOONN List information about the FILEs (the current directory by default). Sort entries alphabetically if none of --ccffttuuvvSSUUXX nor ----ssoorrtt. Mandatory arguments to long options are mandatory for short options For some mysterious reason capital letters get duplicated. Can anyone explain what's going on?

    Read the article

  • Very different IO performance in C/C++

    - by Roberto Tirabassi
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1GB or more), especially (as it seems) when you try to grow them in size. Anyway... to verify our sensations we tried the following (on Win 7 64Bit, 4core, 8GB Ram, 32 bit code compiled with VC2008) a) Open a non-existing file. Write it from the beginning up to 1Gb in 1Mb slots. Now you have a 1Gb file. Now randomize 10000 positions within that file, seek to that position and write 50 bytes in each position, no matter what you write. Close the file and look at the results. Time to create the file is quite fast (about 0.3"), time to write 10000 times is fast all the same (about 0.03"). Very good, this is the beginning. Now try something else... b) Open a non-existing file, seek to 1Gb-1byte and write just 1 byte. Now you have another 1Gb file. Follow the next steps exactly the same way as in case 'a', close the file and look at the results. Time to create the file is the fastest you can imagine (about 0.00009") but write time is something you can't believe.... about 90"!!!!! b.1) Open a non-existing file, don't write any byte. Act as before, randomizing, seeking and writing, close the file and look at the result. Time to write is long all the same: about 90"!!!!! Ok... this is quite amazing. But there's more! c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're fast as before, about 0.03" to write 10000 times. This sounds Ok... try another step. d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again and again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce. I actually wonder why... Any idea? The following is part of the code I used to test what I told in previous cases (you'll have to change something in order to have a clean compilation, I just cut & paste from some source code, sorry). The sample can read and write, in random, ordered or reverse ordered mode, but write only in random order is the clearest test. We tried using std::fstream but also using directly CreateFile(), WriteFile() and so on; the results are the same (even if std::fstream is actually a little slower). Parameters for case 'a' = -f_tempdir_\casea.dat -n10000 -t -p -w Parameters for case 'b' = -f_tempdir_\caseb.dat -n10000 -t -v -w Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w Parameters for case 'c' = -f_tempdir_\casea.dat -n10000 -w Parameters for case 'd' = -f_tempdir_\caseb.dat -n10000 -w Run the test (and even others) and see... // iotest.cpp : Defines the entry point for the console application. 
// #include <windows.h> #include <iostream> #include <set> #include <vector> #include "stdafx.h" double RealTime_Microsecs() { LARGE_INTEGER fr = {0, 0}; LARGE_INTEGER ti = {0, 0}; double time = 0.0; QueryPerformanceCounter(&ti); QueryPerformanceFrequency(&fr); time = (double) ti.QuadPart / (double) fr.QuadPart; return time; } int main(int argc, char* argv[]) { std::string sFileName ; size_t stSize, stTimes, stBytes ; int retval = 0 ; char *p = NULL ; char *pPattern = NULL ; char *pReadBuf = NULL ; try { // Default stSize = 1<<30 ; // 1Gb stTimes = 1000 ; stBytes = 50 ; bool bTruncate = false ; bool bPre = false ; bool bPreFast = false ; bool bOrdered = false ; bool bReverse = false ; bool bWriteOnly = false ; // Comsumo i parametri for(int index=1; index < argc; ++index) { if ( '-' != argv[index][0] ) throw ; switch(argv[index][1]) { case 'f': sFileName = argv[index]+2 ; break ; case 's': stSize = xw::str::strtol(argv[index]+2) ; break ; case 'n': stTimes = xw::str::strtol(argv[index]+2) ; break ; case 'b':stBytes = xw::str::strtol(argv[index]+2) ; break ; case 't': bTruncate = true ; break ; case 'p' : bPre = true, bPreFast = false ; break ; case 'v' : bPreFast = true, bPre = false ; break ; case 'o' : bOrdered = true, bReverse = false ; break ; case 'r' : bReverse = true, bOrdered = false ; break ; case 'w' : bWriteOnly = true ; break ; default: throw ; break ; } } if ( sFileName.empty() ) { std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl ; std::cout << "-t truncates the file, -p pre load the file, -v pre load 'veloce', -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl ; std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl ; throw ; } if ( !stSize || !stTimes || !stBytes ) { std::cout << "Invalid Parameters" << std::endl ; return -1 ; } size_t stBestSize = 0x00100000 ; std::fstream fFile ; fFile.open(sFileName.c_str(), std::ios_base::binary|std::ios_base::out|std::ios_base::in|(bTruncate?std::ios_base::trunc:0)) ; p = new char[stBestSize] ; pPattern = new char[stBytes] ; pReadBuf = new char[stBytes] ; memset(p, 0, stBestSize) ; memset(pPattern, (int)(stBytes&0x000000ff), stBytes) ; double dTime = RealTime_Microsecs() ; size_t stCopySize, stSizeToCopy = stSize ; if ( bPre ) { do { stCopySize = std::min(stSizeToCopy, stBestSize) ; fFile.write(p, stCopySize) ; stSizeToCopy -= stCopySize ; } while (stSizeToCopy) ; std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; } else if ( bPreFast ) { fFile.seekp(stSize-1) ; fFile.write(p, 1) ; std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; } size_t stPos ; ::srand((unsigned int)dTime) ; double dReadTime, dWriteTime ; stCopySize = stTimes ; std::vector<size_t> inVect ; std::vector<size_t> outVect ; std::set<size_t> outSet ; std::set<size_t> inSet ; // Prepare vector and set do { stPos = (size_t)(::rand()<<16) % stSize ; outVect.push_back(stPos) ; outSet.insert(stPos) ; stPos = (size_t)(::rand()<<16) % stSize ; inVect.push_back(stPos) ; inSet.insert(stPos) ; } while (--stCopySize) ; // Write & read using vectors if ( !bReverse && !bOrdered ) { std::vector<size_t>::iterator outI, inI ; outI = outVect.begin() ; inI = inVect.begin() ; stCopySize = stTimes ; dReadTime = 0.0 ; dWriteTime = 0.0 ; do { dTime = RealTime_Microsecs() ; fFile.seekp(*outI) ; fFile.write(pPattern, stBytes) ; dWriteTime += 
RealTime_Microsecs() - dTime ; ++outI ; if ( !bWriteOnly ) { dTime = RealTime_Microsecs() ; fFile.seekg(*inI) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++inI ; } } while (--stCopySize) ; std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime/stTimes, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime/stTimes, 10, 'f') << ")" << std::endl ; } } // End // Write in order if ( bOrdered ) { std::set<size_t>::iterator i = outSet.begin() ; dWriteTime = 0.0 ; stCopySize = 0 ; for(; i != outSet.end(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekp(stPos) ; fFile.write(pPattern, stBytes) ; dWriteTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { i = inSet.begin() ; dReadTime = 0.0 ; stCopySize = 0 ; for(; i != inSet.end(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekg(stPos) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl ; } }// End // Write in reverse order if ( bReverse ) { std::set<size_t>::reverse_iterator i = outSet.rbegin() ; dWriteTime = 0.0 ; stCopySize = 0 ; for(; i != outSet.rend(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekp(stPos) ; fFile.write(pPattern, stBytes) ; dWriteTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { i = inSet.rbegin() ; dReadTime = 0.0 ; stCopySize = 0 ; for(; i != inSet.rend(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekg(stPos) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl ; } }// End dTime = RealTime_Microsecs() ; fFile.close() ; std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; std::cout << "Program Terminated" << std::endl ; } catch(...) { std::cout << "Something wrong or wrong parameters" << std::endl ; retval = -1 ; } if ( p ) delete []p ; if ( pPattern ) delete []pPattern ; if ( pReadBuf ) delete []pReadBuf ; return retval ; }

    Read the article

  • Missing something with Reader monad - passing the damn thing around everywhere

    - by Richard Huxton
    Learning Haskell, managing syntax, have a rough grasp of what monads etc are about but I'm clearly missing something. In main I can read my config file, and supply it as runReader (somefunc) myEnv just fine. But somefunc doesn't need access to the myEnv the reader supplies, nor do the next couple in the chain. The function that needs something from myEnv is a tiny leaf function. So - how do I get access to the environment in a function without tagging all the intervening functions as (Reader Env)? That can't be right because otherwise you'd just pass myEnv around in the first place. And passing unused parameters through multiple levels of functions is just ugly (isn't it?). There are plenty of examples I can find on the net but they all seem to have only one level between runReader and accessing the environment.
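
    A minimal sketch of the usual resolution (Env, its fields and the function names are invented): every function on the path from runReader down to the leaf does live in Reader Env, that is the price of the abstraction, but the intermediate ones never mention the environment; only the leaf calls asks.

        import Control.Monad.Reader (Reader, runReader, asks)

        data Env = Env { userName :: String, verbose :: Bool }

        leaf :: Reader Env String
        leaf = do
          name <- asks userName          -- the only place Env is inspected
          pure ("hello, " ++ name)

        middle :: Reader Env String
        middle = do
          s <- leaf                      -- no Env argument threaded by hand
          pure (s ++ "!")

        outer :: Reader Env String
        outer = middle

        main :: IO ()
        main = putStrLn (runReader outer (Env "world" False))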

    Read the article

  • Population count of rightmost n integers

    - by Jason Baker
    I'm implementing Bagwell's Ideal Hash Trie in Haskell. To find an element in a sub-trie, he says to do the following: Finding the arc for a symbol s, requires finding its corresponding bit in the bit map and then counting the one bits below it in the map to compute an index into the ordered sub-trie. What is the best way to do this? It sounds like the most straightforward way of doing this is to select the bits below that bit and do a population count on the resulting number. Is there a faster or better way to do this?
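
    That straightforward reading is also the standard implementation: mask off the bits below the symbol's bit and take the population count of what is left. A hedged Haskell sketch using Data.Bits (Bagwell assumes a hardware popcount instruction, which popCount uses where available):

        import Data.Bits (popCount, bit, (.&.))
        import Data.Word (Word32)

        -- Index of the arc for bit position i = number of one bits strictly
        -- below bit i in the bitmap.
        childIndex :: Word32 -> Int -> Int
        childIndex bitmap i = popCount (bitmap .&. (bit i - 1))

        -- childIndex 22 4 == 2   (22 is 10110 in binary; below bit 4 sit 0110)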

    Read the article

  • Saving information in the IO System

    - by djTeller
    Hi Kernel Gurus, I need to write a kernel module that simulates a "multicaster" using the /proc file system. Basically it needs to support the following scenarios: 1) allow one write access to the /proc file and many read accesses to the /proc file. 2) The module should have a buffer of the contents of the last successful write. Each write should be matched by a read from all readers. Consider scenario 2: a writer wrote something and there are two readers (A and B). A read the content of the buffer, and then A tried to read again; in this case it should go into a wait_queue and wait for the next message, it should not get the same buffer again. I need to keep a map of all the pid's that already read the current buffer, and in case they try to read again and the buffer was not changed, they should be blocked until there is a new buffer. I'm trying to figure out if there is a way I can save that info without a map. I heard there are some redundant fields inside the I/O system that I can use to flag a process if it already read the current buffer. Can someone give me a tip on where I should look for that field? How can I save info on the current process without keeping a "map" of pid's and buffers? Thanks!

    Read the article

  • Last element not getting inserted in Tree

    - by rdk1992
    So I was asked to make a Binary Tree in Haskell taking as input a list of Integers. Below is my code. My problem is that the last element of the list is not getting inserted in the tree. For example, with [1,2,3,4] it only inserts into the tree up to "3", and 4 is not inserted in the tree. data ArbolBinario a = Node a (ArbolBinario a) (ArbolBinario a) | EmptyNode deriving(Show) insert(x) EmptyNode= insert(tail x) (Node (head x) EmptyNode EmptyNode) insert(x) (Node e izq der) |x == [] = EmptyNode --I added this line to fix the Prelude.Head Empty List error, after I added this line the last element started to be ignored and not inserted in the tree |head x == e = (Node e izq der) |head x < e = (Node e (insert x izq) der) |head x > e = (Node e izq (insert x der)) Any ideas on what's going on here? Help is much appreciated
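
    Without claiming this is the intended design: the x == [] guard is what drops the last element, because it returns EmptyNode instead of the node that has already been built (the freshly made Node 4 EmptyNode EmptyNode is thrown away); returning Node e izq der there would be the minimal fix. A common restructuring that avoids the head/tail juggling altogether is to insert one element at a time and fold the list into the tree:

        -- The question's type, unchanged.
        data ArbolBinario a
          = Node a (ArbolBinario a) (ArbolBinario a)
          | EmptyNode
          deriving (Show)

        insertOne :: Ord a => a -> ArbolBinario a -> ArbolBinario a
        insertOne x EmptyNode = Node x EmptyNode EmptyNode
        insertOne x (Node e izq der)
          | x == e    = Node e izq der
          | x <  e    = Node e (insertOne x izq) der
          | otherwise = Node e izq (insertOne x der)

        fromList :: Ord a => [a] -> ArbolBinario a
        fromList = foldl (flip insertOne) EmptyNode

        -- fromList [1,2,3,4] now contains all four elements.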

    Read the article
