Search Results

Search found 3856 results on 155 pages for 'io'.

Page 12 of 155

  • C# Asynchronous Network IO and OutOfMemoryException

    - by The.Anti.9
    I'm working on a client/server application in C#, and I need to get asynchronous sockets working so I can handle multiple connections at once. Technically it works the way it is now, but I get an OutOfMemoryException after about 3 minutes of running. MSDN says to call WaitOne() on a wait handle after socket.BeginAccept(), but it doesn't actually let me do that. When I try that in the code, it says WaitHandle is an abstract class, and I can't instantiate it. I thought maybe I'd try a static reference, but that doesn't have the WaitOne() method, just WaitAll() and WaitAny(). The main problem is that the docs don't give a full code snippet, so you can't actually see where their "wait handler" comes from; it's just a variable called allDone, which also has a Reset() method in the snippet, which a WaitHandle doesn't have. After digging around in the docs, I found the AutoResetEvent class in the Threading namespace. It has both a WaitOne() and a Reset() method. So I tried that around the while (true) { ... socket.BeginAccept(...); ... } loop. Unfortunately this makes it only take one connection at a time. So I'm not really sure where to go. Here's my code:

        class ServerRunner
        {
            private Byte[] data = new Byte[2048];
            private int size = 2048;
            private Socket server;
            static AutoResetEvent allDone = new AutoResetEvent(false);

            public ServerRunner()
            {
                server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                IPEndPoint iep = new IPEndPoint(IPAddress.Any, 33333);
                server.Bind(iep);
                Console.WriteLine("Server initialized..");
            }

            public void Run()
            {
                server.Listen(100);
                Console.WriteLine("Listening...");
                while (true)
                {
                    //allDone.Reset();
                    server.BeginAccept(new AsyncCallback(AcceptCon), server);
                    //allDone.WaitOne();
                }
            }

            void AcceptCon(IAsyncResult iar)
            {
                Socket oldserver = (Socket)iar.AsyncState;
                Socket client = oldserver.EndAccept(iar);
                Console.WriteLine(client.RemoteEndPoint.ToString() + " connected");
                byte[] message = Encoding.ASCII.GetBytes("Welcome");
                client.BeginSend(message, 0, message.Length, SocketFlags.None, new AsyncCallback(SendData), client);
            }

            void SendData(IAsyncResult iar)
            {
                Socket client = (Socket)iar.AsyncState;
                int sent = client.EndSend(iar);
                client.BeginReceive(data, 0, size, SocketFlags.None, new AsyncCallback(ReceiveData), client);
            }

            void ReceiveData(IAsyncResult iar)
            {
                Socket client = (Socket)iar.AsyncState;
                int recv = client.EndReceive(iar);
                if (recv == 0)
                {
                    client.Close();
                    server.BeginAccept(new AsyncCallback(AcceptCon), server);
                    return;
                }
                string receivedData = Encoding.ASCII.GetString(data, 0, recv);
                //process received data here
                byte[] message2 = Encoding.ASCII.GetBytes("reply");
                client.BeginSend(message2, 0, message2.Length, SocketFlags.None, new AsyncCallback(SendData), client);
            }
        }
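
    A minimal sketch of the MSDN accept pattern using the asker's AutoResetEvent (an assumed fix, not a confirmed answer): the piece missing above is a Set() call inside the accept callback. With it, WaitOne() blocks only until the pending accept has been handed off, so the loop keeps exactly one outstanding BeginAccept at a time while every accepted client continues to be serviced asynchronously:

        public void Run()
        {
            server.Listen(100);
            while (true)
            {
                server.BeginAccept(new AsyncCallback(AcceptCon), server);
                allDone.WaitOne();   // wait for AcceptCon to signal, not for the client to finish
            }
        }

        void AcceptCon(IAsyncResult iar)
        {
            allDone.Set();           // release Run() so it can post the next BeginAccept
            Socket client = ((Socket)iar.AsyncState).EndAccept(iar);
            // ... continue with BeginSend/BeginReceive exactly as before ...
        }

    Without the Set() call, the unthrottled while (true) loop queues BeginAccept operations as fast as the CPU allows, which is the likely source of the OutOfMemoryException.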

    Read the article

  • File IO, Handling CRLF

    - by aCuria
    Hi, I am writing a program that takes a file and splits it up into multiple small files of a user-specified size, then joins the multiple small files back together again. 1) The code must work for C and C++. 2) I am compiling with multiple compilers. I am reading and writing the files using the C standard library functions fread() and fwrite(). The problem I am having pertains to CRLF. If the file I am reading from contains CRLF, then I want to retain it when I split and join the files back together. If the file contains LF, then I want to retain LF. Unfortunately, fread() seems to store CRLF as \n (I think), and whatever is written by fwrite() is compiler-dependent. How do I approach this problem? Thanks.
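
    The translation described only happens when a file is opened in text mode; on Windows the C runtime maps CRLF to \n on input and back again on output. A sketch of the likely fix, assuming the files are opened with fopen() (the file names here are placeholders): open both ends in binary mode so every byte, line endings included, passes through untouched.

        #include <stdio.h>

        /* "rb"/"wb" disable newline translation; "r"/"w" (text mode)
           would let the Windows CRT rewrite CRLF <-> \n */
        FILE *in  = fopen("input.dat",  "rb");
        FILE *out = fopen("part01.dat", "wb");

        char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
            fwrite(buf, 1, n, out);   /* bytes copied verbatim */
        }
        fclose(in);
        fclose(out);

    This compiles as both C and C++, which matches requirement 1.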

    Read the article

  • erlang io:format, and a hanging web application

    - by williamstw
    While I'm learning a new language, I'll typically put in lots of silly printlns to see what values are where at specific times. It usually suffices because most languages have a toString equivalent available. In trying that same approach with Erlang, my webapp just "hangs" when I attempt to print a value that's not a list. This happens when the variable being printed is a tuple instead of a list. There's no error, no exception, nothing... it just doesn't respond. Now, I'm muddling through by being careful about what I write out, and as I learn more, things are getting better. But I wonder, is there a way to more reliably [blindly] print a value to stdout? Thanks, --tim
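
    Assuming the hang comes from formatting a tuple with a list-only control sequence such as ~s, the usual blind-printing idiom is ~p, which pretty-prints any Erlang term:

        %% ~p accepts any term (tuple, list, binary, ...); ~n is a newline
        io:format("value: ~p~n", [Value]).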

    Read the article

  • IO Exception on invoking execute() method of HttpGet class

    - by AndroidNoob
    Why am I getting an IOException in this piece of code? Thanks.

        HttpClient httpclient = new DefaultHttpClient();
        HttpGet httpget = new HttpGet("http://www.google.com/");
        HttpResponse response;
        try {
            response = httpclient.execute(httpget);
        } catch (ClientProtocolException e) {
            Toast.makeText(this, "ClientProtocolEx", Toast.LENGTH_LONG).show();
            e.printStackTrace();
        } catch (IOException e) {
            Toast.makeText(this, "IOEx", Toast.LENGTH_LONG).show();
            e.printStackTrace();
        }
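
    The request code itself looks fine, so one frequent cause (assuming this is an Android app, which the Toast calls suggest) is a missing INTERNET permission in the manifest; without it, network calls fail with IOException-family errors such as UnknownHostException. The fix would be an entry like:

        <!-- in AndroidManifest.xml, as a direct child of <manifest> -->
        <uses-permission android:name="android.permission.INTERNET" />

    The stack trace printed by e.printStackTrace() (visible in LogCat) would confirm which IOException subclass is actually being thrown.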

    Read the article

  • C++ IO with Hard Drive

    - by Tomas Cokis
    I was wondering if there was any kind of portable (Mac & Windows) method of reading from and writing to the hard drive which goes beyond iostream, in particular features like getting a list of all the files in a folder, moving files around, etc. I was hoping that there would be something like SDL around, but so far I haven't been able to find much. Any ideas?
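
    Boost.Filesystem is the usual portable answer for exactly this feature set (its design later became C++17's std::filesystem). A minimal sketch of listing a folder and moving a file; the paths are placeholders:

        #include <boost/filesystem.hpp>
        #include <iostream>

        namespace fs = boost::filesystem;

        int main() {
            // enumerate every entry in a directory; a default-constructed
            // directory_iterator is the end sentinel
            for (fs::directory_iterator it("some_dir"), end; it != end; ++it)
                std::cout << it->path() << '\n';

            // move a file by renaming it to a new path
            fs::rename("some_dir/a.txt", "other_dir/a.txt");
            return 0;
        }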

    Read the article

  • Same loop giving different output. Java IO

    - by Nitesh Panchal
    Hi, I am facing a very strange problem where the same loop keeps giving me different output when the value of BUFFER changes:

        final int BUFFER = 100;
        char[] charArr = new char[BUFFER];
        StringBuffer objStringBuffer = new StringBuffer();
        while (objBufferedReader.read(charArr, 0, BUFFER) != -1) {
            objStringBuffer.append(charArr);
        }
        objFileWriter.write(objStringBuffer.toString());

    When I change the BUFFER size to 500 it gives me a file of 7 KB; when I change the BUFFER size to 100000 it gives a file of 400 KB where the contents are repeated again and again. Please help. What should I do to prevent this?
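
    The loop appends the entire array on every pass, even when read() fills only part of it, so leftover characters from the previous pass are appended again, and a bigger BUFFER means more repeated leftovers. A sketch of the standard fix: capture the count returned by read() and append only that slice.

        char[] charArr = new char[BUFFER];
        StringBuffer objStringBuffer = new StringBuffer();
        int n;
        // read() reports how many chars it actually filled; append only those
        while ((n = objBufferedReader.read(charArr, 0, BUFFER)) != -1) {
            objStringBuffer.append(charArr, 0, n);
        }
        objFileWriter.write(objStringBuffer.toString());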

    Read the article

  • Copying an IO stream results in corruption.

    - by StackedCrooked
    I have a small Mongrel webserver that sends the stdout of a process to an HTTP response:

        response.start(200) do |head, out|
          head["Content-Type"] = "text/html"
          status = POpen4::popen4(command) do |stdout, stderr, stdin, pid|
            stdin.close()
            FileUtils.copy_stream(stdout, out)
            FileUtils.copy_stream(stderr, out)
            puts "Sent response."
          end
        end

    This works well most of the time, but sometimes characters get duplicated. For example, this is what I get from the "man ls" command:

        LS(1)                       User Commands                       LS(1)

        NNAAMMEE
               ls - list directory contents

        SSYYNNOOPPSSIISS
               llss [_O_P_T_I_O_N]... [_F_I_L_E]...

        DDEESSCCRRIIPPTTIIOONN
               List information about the FILEs (the current directory by default).
               Sort entries alphabetically if none of --ccffttuuvvSSUUXX nor ----ssoorrtt.
               Mandatory arguments to long options are mandatory for short options

    For some mysterious reason capital letters get duplicated. Can anyone explain what's going on?
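
    The doubling is probably not random corruption: man renders bold text as overstrike sequences (each bold character is emitted as character, backspace, character, and underlining as underscore, backspace, character). Once the backspaces are dropped or left unrendered by the browser, NNAAMMEE is what remains of a bold NAME. Assuming the command string can be adjusted, the usual cure is to strip the overstriking with col -b before copying the stream:

        # col -b removes backspace overstriking, leaving plain text
        command = "man ls | col -b"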

    Read the article

  • Very different IO performance in C/C++

    - by Roberto Tirabassi
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1 GB or more), especially (as it seems) when you try to grow them in size. Anyway... to verify our sensations we tried the following (on Win 7 64-bit, 4 cores, 8 GB RAM, 32-bit code compiled with VC2008):

    a) Open a nonexistent file. Write it from the beginning up to 1 GB in 1 MB slots. Now you have a 1 GB file. Now randomize 10000 positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. Time to create the file is quite fast (about 0.3"), and the time to write 10000 times is fast all the same (about 0.03"). Very good, this is the beginning. Now try something else...

    b) Open a nonexistent file, seek to 1 GB minus 1 byte and write just 1 byte. Now you have another 1 GB file. Follow the next steps exactly the same way as case 'a', close the file and look at the results. Time to create the file is the fastest you can imagine (about 0.00009"), but the write time is something you can't believe... about 90"!!!!!

    b.1) Open a nonexistent file, don't write any byte. Act as before, randomizing, seeking and writing, then close the file and look at the result. The time to write is just as long: about 90"!!!!!

    Ok... this is quite amazing. But there's more!

    c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're as fast as before, about 0.03" to write 10000 times. This sounds OK... try another step.

    d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again and again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce further. I actually wonder why... Any idea?

    The following is part of the code I used to test what I described in the previous cases (you'll have to change something in order to get a clean compilation; I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse-ordered mode, but write-only in random order is the clearest test. We tried using std::fstream but also using CreateFile(), WriteFile() and so on directly; the results are the same (even if std::fstream is actually a little slower).

        Parameters for case 'a'   = -f_tempdir_\casea.dat -n10000 -t -p -w
        Parameters for case 'b'   = -f_tempdir_\caseb.dat -n10000 -t -v -w
        Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w
        Parameters for case 'c'   = -f_tempdir_\casea.dat -n10000 -w
        Parameters for case 'd'   = -f_tempdir_\caseb.dat -n10000 -w

    Run the test (and even others) and see...

        // iotest.cpp : Defines the entry point for the console application.
        //

        #include <windows.h>
        #include <iostream>
        #include <set>
        #include <vector>
        #include "stdafx.h"

        double RealTime_Microsecs()
        {
            LARGE_INTEGER fr = {0, 0};
            LARGE_INTEGER ti = {0, 0};
            double time = 0.0;
            QueryPerformanceCounter(&ti);
            QueryPerformanceFrequency(&fr);
            time = (double) ti.QuadPart / (double) fr.QuadPart;
            return time;
        }

        int main(int argc, char* argv[])
        {
            std::string sFileName;
            size_t stSize, stTimes, stBytes;
            int retval = 0;
            char *p = NULL;
            char *pPattern = NULL;
            char *pReadBuf = NULL;
            try {
                // Defaults
                stSize = 1<<30; // 1Gb
                stTimes = 1000;
                stBytes = 50;
                bool bTruncate = false;
                bool bPre = false;
                bool bPreFast = false;
                bool bOrdered = false;
                bool bReverse = false;
                bool bWriteOnly = false;
                // Consume the parameters
                for(int index=1; index < argc; ++index) {
                    if ( '-' != argv[index][0] ) throw;
                    switch(argv[index][1]) {
                        case 'f': sFileName = argv[index]+2; break;
                        case 's': stSize = xw::str::strtol(argv[index]+2); break;
                        case 'n': stTimes = xw::str::strtol(argv[index]+2); break;
                        case 'b': stBytes = xw::str::strtol(argv[index]+2); break;
                        case 't': bTruncate = true; break;
                        case 'p': bPre = true, bPreFast = false; break;
                        case 'v': bPreFast = true, bPre = false; break;
                        case 'o': bOrdered = true, bReverse = false; break;
                        case 'r': bReverse = true, bOrdered = false; break;
                        case 'w': bWriteOnly = true; break;
                        default: throw; break;
                    }
                }
                if ( sFileName.empty() ) {
                    std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl;
                    std::cout << "-t truncates the file, -p pre load the file, -v pre load fast, -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl;
                    std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl;
                    throw;
                }
                if ( !stSize || !stTimes || !stBytes ) {
                    std::cout << "Invalid Parameters" << std::endl;
                    return -1;
                }
                size_t stBestSize = 0x00100000;
                std::fstream fFile;
                fFile.open(sFileName.c_str(), std::ios_base::binary|std::ios_base::out|std::ios_base::in|(bTruncate?std::ios_base::trunc:0));
                p = new char[stBestSize];
                pPattern = new char[stBytes];
                pReadBuf = new char[stBytes];
                memset(p, 0, stBestSize);
                memset(pPattern, (int)(stBytes&0x000000ff), stBytes);
                double dTime = RealTime_Microsecs();
                size_t stCopySize, stSizeToCopy = stSize;

                if ( bPre ) {
                    do {
                        stCopySize = std::min(stSizeToCopy, stBestSize);
                        fFile.write(p, stCopySize);
                        stSizeToCopy -= stCopySize;
                    } while (stSizeToCopy);
                    std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
                } else if ( bPreFast ) {
                    fFile.seekp(stSize-1);
                    fFile.write(p, 1);
                    std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
                }

                size_t stPos;
                ::srand((unsigned int)dTime);
                double dReadTime, dWriteTime;
                stCopySize = stTimes;
                std::vector<size_t> inVect;
                std::vector<size_t> outVect;
                std::set<size_t> outSet;
                std::set<size_t> inSet;

                // Prepare vector and set
                do {
                    stPos = (size_t)(::rand()<<16) % stSize;
                    outVect.push_back(stPos);
                    outSet.insert(stPos);
                    stPos = (size_t)(::rand()<<16) % stSize;
                    inVect.push_back(stPos);
                    inSet.insert(stPos);
                } while (--stCopySize);

                // Write & read using vectors
                if ( !bReverse && !bOrdered ) {
                    std::vector<size_t>::iterator outI, inI;
                    outI = outVect.begin();
                    inI = inVect.begin();
                    stCopySize = stTimes;
                    dReadTime = 0.0;
                    dWriteTime = 0.0;
                    do {
                        dTime = RealTime_Microsecs();
                        fFile.seekp(*outI);
                        fFile.write(pPattern, stBytes);
                        dWriteTime += RealTime_Microsecs() - dTime;
                        ++outI;
                        if ( !bWriteOnly ) {
                            dTime = RealTime_Microsecs();
                            fFile.seekg(*inI);
                            fFile.read(pReadBuf, stBytes);
                            dReadTime += RealTime_Microsecs() - dTime;
                            ++inI;
                        }
                    } while (--stCopySize);
                    std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime/stTimes, 10, 'f') << ")" << std::endl;
                    if ( !bWriteOnly ) {
                        std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime/stTimes, 10, 'f') << ")" << std::endl;
                    }
                } // End

                // Write in order
                if ( bOrdered ) {
                    std::set<size_t>::iterator i = outSet.begin();
                    dWriteTime = 0.0;
                    stCopySize = 0;
                    for(; i != outSet.end(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekp(stPos);
                        fFile.write(pPattern, stBytes);
                        dWriteTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl;
                    if ( !bWriteOnly ) {
                        i = inSet.begin();
                        dReadTime = 0.0;
                        stCopySize = 0;
                        for(; i != inSet.end(); ++i) {
                            stPos = *i;
                            dTime = RealTime_Microsecs();
                            fFile.seekg(stPos);
                            fFile.read(pReadBuf, stBytes);
                            dReadTime += RealTime_Microsecs() - dTime;
                            ++stCopySize;
                        }
                        std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl;
                    }
                } // End

                // Write in reverse order
                if ( bReverse ) {
                    std::set<size_t>::reverse_iterator i = outSet.rbegin();
                    dWriteTime = 0.0;
                    stCopySize = 0;
                    for(; i != outSet.rend(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekp(stPos);
                        fFile.write(pPattern, stBytes);
                        dWriteTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl;
                    if ( !bWriteOnly ) {
                        i = inSet.rbegin();
                        dReadTime = 0.0;
                        stCopySize = 0;
                        for(; i != inSet.rend(); ++i) {
                            stPos = *i;
                            dTime = RealTime_Microsecs();
                            fFile.seekg(stPos);
                            fFile.read(pReadBuf, stBytes);
                            dReadTime += RealTime_Microsecs() - dTime;
                            ++stCopySize;
                        }
                        std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl;
                    }
                } // End

                dTime = RealTime_Microsecs();
                fFile.close();
                std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
                std::cout << "Program Terminated" << std::endl;
            } catch(...) {
                std::cout << "Something wrong or wrong parameters" << std::endl;
                retval = -1;
            }
            if ( p ) delete []p;
            if ( pPattern ) delete []pPattern;
            if ( pReadBuf ) delete []pReadBuf;
            return retval;
        }
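
    A plausible explanation, assuming the files sit on NTFS: case 'a' physically writes every byte, while cases 'b' and 'b.1' only extend the file's logical size, leaving its valid data length at (nearly) zero. Every subsequent random write beyond the valid data length forces the filesystem to zero-fill everything between the old valid data length and the write offset, which is where the 90 seconds go; case 'd' is faster because the earlier run already pushed the valid data length partway up. A sketch of a way to test this hypothesis with the Win32 SetFileValidData() call (it requires the SE_MANAGE_VOLUME_NAME privilege and exposes stale on-disk data, so it is for benchmarking only):

        #include <windows.h>

        // Pre-extend a file without the zero-fill penalty (hypothetical test).
        HANDLE h = CreateFileA("caseb.dat", GENERIC_READ | GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        LARGE_INTEGER size;
        size.QuadPart = 1LL << 30;                  // 1 GB
        SetFilePointerEx(h, size, NULL, FILE_BEGIN);
        SetEndOfFile(h);                            // logical size: 1 GB; valid data length: 0
        SetFileValidData(h, size.QuadPart);         // no zero-fill on later random writes

    If random writes after this run at case 'a' speed, the valid-data-length zero-fill is the culprit.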

    Read the article

  • Saving information in the IO System

    - by djTeller
    Hi kernel gurus, I need to write a kernel module that simulates a "multicaster" using the /proc file system. Basically it needs to support the following scenarios: 1) allow one write access to the /proc file and many read accesses to it; 2) the module should keep a buffer with the contents of the last successful write, and each write should be matched by a read from every reader. Consider scenario 2: a writer wrote something and there are two readers (A and B). A read the contents of the buffer, and then A tried to read again; in this case it should go into a wait queue and wait for the next message, and it should not get the same buffer again. I need to keep a map of all the PIDs that have already read the current buffer, and in case they try to read again while the buffer has not changed, they should be blocked until there is a new buffer. I'm trying to figure out whether there is a way I can save that info without a map. I heard there are some redundant fields inside the I/O system that I can use to flag a process that has already read the current buffer. Can someone give me a tip on where I should look for that field? How can I save info on the current process without keeping a "map" of PIDs and buffers? Thanks!
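
    The field normally used for exactly this is struct file's private_data pointer: every open file description carries one, the open handler allocates per-reader state into it, and the read handler compares a per-reader sequence number against the buffer's generation counter, so no global PID map is needed. A minimal sketch (names and locking are simplified assumptions):

        #include <linux/fs.h>
        #include <linux/slab.h>
        #include <linux/wait.h>

        static DECLARE_WAIT_QUEUE_HEAD(readers_wq);
        static unsigned long buf_generation;   /* bumped on each successful write */

        struct reader_state {
            unsigned long seen;                /* last generation this reader consumed */
        };

        static int mc_open(struct inode *inode, struct file *filp)
        {
            struct reader_state *rs = kzalloc(sizeof(*rs), GFP_KERNEL);
            if (!rs)
                return -ENOMEM;
            filp->private_data = rs;           /* per-open state lives here */
            return 0;
        }

        static ssize_t mc_read(struct file *filp, char __user *ubuf,
                               size_t count, loff_t *ppos)
        {
            struct reader_state *rs = filp->private_data;

            /* block until a write produces a buffer this reader hasn't seen */
            if (wait_event_interruptible(readers_wq, rs->seen != buf_generation))
                return -ERESTARTSYS;
            rs->seen = buf_generation;
            /* ... copy_to_user() the current buffer here ... */
            return count;
        }

    The writer side would bump buf_generation and call wake_up_interruptible(&readers_wq) after replacing the buffer.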

    Read the article

  • dealing with IO vs pure code in haskell

    - by Drakosha
    I'm writing a shell script (my first non-example program in Haskell) which is supposed to list a directory, get every file's size, do some string manipulation (pure code) and then rename some files. I'm not sure what I'm doing wrong, so two questions: 1) How should I arrange the code in such a program? 2) For a specific issue, I get the following error; what am I doing wrong?

        error: Couldn't match expected type `[FilePath]' against inferred type `IO [FilePath]'
        In the second argument of `mapM', namely `fileNames'
        In a stmt of a 'do' expression: files <- (mapM getFileNameAndSize fileNames)
        In the expression:
          do { fileNames <- getDirectoryContents;
               files <- (mapM getFileNameAndSize fileNames);
               sortBy cmpFilesBySize files }

    code:

        getFileNameAndSize fname = do (fname, (withFile fname ReadMode hFileSize))

        getFilesWithSizes = do
          fileNames <- getDirectoryContents
          files <- (mapM getFileNameAndSize fileNames)
          sortBy cmpFilesBySize files
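
    A sketch of a corrected version, assuming cmpFilesBySize orders the pairs by their size field: getDirectoryContents needs the directory as an argument, getFileNameAndSize must run its IO action and return the pair, and the final sort is pure, so it is handed back through return. (getDirectoryContents also returns entries like "." and "..", on which hFileSize fails; filtering those out is a separate step.)

        import System.Directory (getDirectoryContents)
        import System.IO (IOMode(ReadMode), hFileSize, withFile)
        import Data.List (sortBy)
        import Data.Ord (comparing)

        getFileNameAndSize :: FilePath -> IO (FilePath, Integer)
        getFileNameAndSize fname = do
          size <- withFile fname ReadMode hFileSize   -- run the IO action, bind its result
          return (fname, size)

        getFilesWithSizes :: FilePath -> IO [(FilePath, Integer)]
        getFilesWithSizes dir = do
          fileNames <- getDirectoryContents dir       -- the directory is an argument
          files <- mapM getFileNameAndSize fileNames
          return (sortBy (comparing snd) files)       -- sorting is pure: wrap it in return

    As for arranging the code: keep the size-gathering and renaming in IO, and pull the string manipulation out into ordinary pure functions that the IO code merely calls.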

    Read the article

  • python-like Java IO library?

    - by drozzy
    Java is not my main programming language so I might be asking the obvious. But is there a simple file-handling library in Java, like in Python? For example, I just want to say:

        File f = Open('file.txt', 'w')
        for (String line : f) {
            // do something with the line from file
        }

    Thanks!
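
    There's no built-in one-liner quite like that, but BufferedReader is the closest standard idiom; a sketch of line-by-line reading (the file name is a placeholder):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class ReadLines {
            public static void main(String[] args) throws IOException {
                BufferedReader reader = new BufferedReader(new FileReader("file.txt"));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        // do something with the line from the file
                        System.out.println(line);
                    }
                } finally {
                    reader.close();   // always release the file handle
                }
            }
        }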

    Read the article

  • Using reflection to change static final File.separatorChar for unit testing?

    - by Stefan Kendall
    Specifically, I'm trying to create a unit test for a method which uses File.separatorChar to build paths on Windows and Unix. The code must run on both platforms, and yet I get errors with JUnit when I attempt to change this static final field. Anyone have any idea what's going on?

        Field field = java.io.File.class.getDeclaredField("separatorChar");
        field.setAccessible(true);
        field.setChar(java.io.File.class, '/');

    When I do this, I get an IllegalAccessException: Can not set static final char field java.io.File.separatorChar to java.lang.Character. Thoughts?
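
    setAccessible(true) lifts the access check but not the final check on static fields, which is why the IllegalAccessException still fires. On older JDKs the FINAL flag can itself be stripped through the Field object's modifiers field; a sketch of that workaround (fragile, JDK-version-dependent, and patching java.io.File can confuse any code that already read the value):

        import java.lang.reflect.Field;
        import java.lang.reflect.Modifier;

        Field field = java.io.File.class.getDeclaredField("separatorChar");
        field.setAccessible(true);

        // strip the FINAL modifier so setChar() is allowed (older JDKs only)
        Field modifiers = Field.class.getDeclaredField("modifiers");
        modifiers.setAccessible(true);
        modifiers.setInt(field, field.getModifiers() & ~Modifier.FINAL);

        field.setChar(null, '/');   // static field, so the target object is null

    The sturdier design is to route path building through an injectable separator so the test never has to mutate the JDK.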

    Read the article

  • Problem with do construct in haskell

    - by Anonymous Coward
    Hello everyone, I'm trying to learn Haskell and want to write a small program which prints the contents of a file to the screen. When I load it into GHCi I get the following error:

        The last statement in a 'do' construct must be an expression

    I know this question has been asked already here: Haskell -- "The last statement in a 'do' construct must be an expression". Even though my code is very similar, I still can't figure out the problem. If anyone could point out the problem to me I'd be very thankful.

        module Main (main) where

        import System.IO
        import System(getArgs)

        main :: IO ()
        main = do
          args <- getArgs
          inh <- openFile $ ReadMode head args
          printFile inh
          hClose inh

        printFile :: Handle -> IO ()
        printFile handle = do
          end <- hIsEOF handle
          if end
            then return ()
            else do line <- hGetLine handle
                    putStrLn line
                    printFile handle
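
    Indentation aside (the do-construct complaint usually means the compiler saw a binding or a misaligned line as the block's last statement), the openFile line cannot work either: openFile $ ReadMode head args parses as openFile applied to (ReadMode head args). A sketch of the corrected main, assuming the file name is the first command-line argument:

        main :: IO ()
        main = do
          args <- getArgs
          inh  <- openFile (head args) ReadMode   -- openFile :: FilePath -> IOMode -> IO Handle
          printFile inh
          hClose inh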

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and myself are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new, exciting technology; on top of that I'll be talking about SQL Server disk IO. Well, "disk" might not be relevant any more, because I'll also be talking about SSDs and Fusion-io; basically I'll be covering the underpinnings: making sure you understand and get it right, how to monitor, etc. If you've any specific problems or questions just ping me an email [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 SQL Server and Disk IO
    Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter)
    In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement such as Fusion-io. He will look at the effect fragmentation has and how to minimise its impact; he will look at the file structure of a database and talk about the benefits that multiple files and file groups bring. We will also touch on database mirroring, the effect it has on throughput, and how to get a feeling for the throughput you should expect.

    19:15 Break

    19:45 Complex Event Processing (CEP)
    Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis)
    StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful. I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.

    Read the article

  • VMWare Esxi Looking for Bottlenecks

    - by nextgenneo
    I have a VMware ESXi box: 22 GB RAM, dual quad-core Xeon, two SAS drives, a write-caching RAID controller, etc. Anyway, it has about 30 small XP VMs running on it, and I'm starting to see some very slow boot times and other performance issues. I THINK it's I/O, but looking at the graphs I'm not sure what to look for. Any ideas on what to look for would be appreciated. Here is the data I've got so far. (I feel like my IO is high, but I'm not sure what to benchmark it against.)

    Read the article

  • Is the XP VMM a bottleneck on a multi core machine?

    - by JeffV
    I have a dual-Xeon hex-core machine running an IO-intensive application (WinXP 32-bit). I am seeing a hardware driver (half user mode, half kernel, streaming data) that is incurring 6k delta page faults per second. When other applications load or allocate large amounts of memory, the driver's hardware buffer gets an underrun (the application is not feeding it fast enough). Could this be because the kernel is only using one core to service page fault interrupts?

    Read the article

  • VS2008: File creation fails randomly in unit testing?

    - by Tim
    I'm working on implementing a reasonably simple XML serializer/deserializer (log file parser) application in C# .NET with VS 2008. I have about 50 unit tests right now for various parts of the code (mostly for the various serialization operations), and some of them seem to fail mostly at random when they deal with file I/O. The way the tests are structured is that in the test setup method, I create a new empty file at a certain predetermined location, and close the stream I get back. Then I run some basic tests on the file (varying by what exactly is under test). In the cleanup method, I delete the file again. A large portion (usually 30 or more, though the number varies run to run) of my unit tests will fail at the initialize method, claiming they can't access the file I'm trying to create. I can't pin down the exact reason, since a test that works one run fails the next; they all succeed when run individually. What's the problem here? Why can't I access this file across multiple unit tests? Relevant methods for a unit test that will fail some of the time:

        [TestInitialize()]
        public void LogFileTestInitialize()
        {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";
            System.IO.File.Create(this.testPath);
        }

        [TestMethod()]
        public void LogFileConstructorTest()
        {
            string filePath = this.testPath;
            LogFile target = new LogFile(filePath);
            Assert.AreNotEqual(null, target);
            Assert.AreEqual(this.testPath, target.filePath);
            Assert.AreEqual("empty.lfp", target.fileName);
            Assert.AreEqual(this.testFolder + "\\empty.lfp.lfpdat", target.metaPath);
        }

        [TestCleanup()]
        public void LogFileTestCleanup()
        {
            System.IO.File.Delete(this.testPath);
        }

    And the LogFile() constructor:

        public LogFile(String filePath)
        {
            this.entries = new List<Entry>();
            this.filePath = filePath;
            this.metaPath = filePath + ".lfpdat";
            this.fileName = filePath.Substring(filePath.LastIndexOf("\\") + 1);
        }

    The precise error message:

        Initialization method LogFileParserTester.LogFileTest.LogFileTestInitialize threw exception.
        System.IO.IOException: System.IO.IOException: The process cannot access the file
        'C:\Users\<user>\AppData\Local\empty.lfp' because it is being used by another process..
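
    Despite the description, the setup code never actually closes the stream: File.Create() returns an open FileStream, and dropping the return value leaves the handle alive until a finalizer happens to run, which is why the failures look random and vanish when tests run individually. A sketch of the fix:

        [TestInitialize()]
        public void LogFileTestInitialize()
        {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";
            // File.Create returns an open FileStream; dispose it so the
            // handle is released before any test body touches the file
            using (System.IO.File.Create(this.testPath)) { }
        }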

    Read the article

  • Sending multiline message via sockets without closing the connection

    - by Yasir Arsanukaev
    Hello folks. Currently I have this code in my client-side Haskell application:

        import Network.Socket
        import Network.BSD
        import System.IO hiding (hPutStr, hPutStrLn, hGetLine, hGetContents)
        import System.IO.UTF8

        connectserver :: HostName -- ^ Remote hostname, or localhost
                      -> String   -- ^ Port number or name
                      -> IO Handle
        connectserver hostname port = withSocketsDo $ do -- withSocketsDo is required on Windows
            -- Look up the hostname and port. Either raises an exception
            -- or returns a nonempty list. First element in that list
            -- is supposed to be the best option.
            addrinfos <- getAddrInfo Nothing (Just hostname) (Just port)
            let serveraddr = head addrinfos
            -- Establish a socket for communication
            sock <- socket (addrFamily serveraddr) Stream defaultProtocol
            -- Mark the socket for keep-alive handling since it may be idle
            -- for long periods of time
            setSocketOption sock KeepAlive 1
            -- Connect to server
            connect sock (addrAddress serveraddr)
            -- Make a Handle out of it for convenience
            h <- socketToHandle sock ReadWriteMode
            -- We're going to set buffering to LineBuffering and then
            -- explicitly call hFlush after each message, below, so that
            -- messages get logged immediately
            hSetBuffering h LineBuffering
            return h

        sendid :: Handle -> String -> IO String
        sendid h id = do
            hPutStr h id
            -- Make sure that we send data immediately
            hFlush h
            -- Retrieve results
            hGetLine h

    The code portions in connectserver are from this chapter of Real World Haskell, where they say:

        When dealing with TCP data, it's often convenient to convert a socket into a Haskell
        Handle. We do so here, and explicitly set the buffering, an important point for TCP
        communication. Next, we set up lazy reading from the socket's Handle. For each incoming
        line, we pass it to handle. After there is no more data, because the remote end has
        closed the socket, we output a message about that.

    Since hGetContents blocks until the server closes the socket on the other side, I used hGetLine instead. That satisfied my needs until I decided to implement multiline output to the client. I wouldn't like the server to close the socket every time it finishes sending multiline text. The only simple idea I have at the moment is to count the line feeds and stop reading lines after two subsequent line feeds. Do you have any better suggestions? Thanks.
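
    Length-prefixed framing is the usual alternative to a blank-line sentinel: the server sends the number of lines first, and the client reads exactly that many, so nobody has to close the socket between messages. A minimal sketch (recvMultiline is a hypothetical helper, and the framing convention is an assumption about the server):

        import Control.Monad (replicateM)
        import System.IO (Handle, hGetLine)

        -- The server first sends one line containing the number of lines in
        -- the message, then the lines themselves; the connection stays open.
        recvMultiline :: Handle -> IO [String]
        recvMultiline h = do
            n <- fmap read (hGetLine h) :: IO Int
            replicateM n (hGetLine h)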

    Read the article

  • sending and receiving with sockets in java?

    - by Darksole
    I am working on sending and receiving between clients and servers in Java, and am stumped at the moment. The client socket is to contact a server at "localhost" port 4321. The client will receive a string from the server and alternate spelling the contents of this string with the server. For example, given the string "Bye Bye", the client (which always begins sending the first letter) sends "B", receives "y", sends "e", receives " ", sends "B", receives "y", sends "e", and receives "done!", which is the string that either client or server will send after the last letter from the original string is received. After "done!" is transmitted, both client and server close their communications. How would I go about getting the first string, then going back and forth sending and receiving the letters that make up the string, and when finished either sending or receiving "done!"?

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.net.UnknownHostException;
        import java.util.Scanner;

        public class Program2 {
            public static void goClient() throws UnknownHostException, IOException {
                String server = "localhost";
                int port = 4321;
                Socket socket = new Socket(server, port);
                InputStream inStream = socket.getInputStream();
                OutputStream outStream = socket.getOutputStream();
                Scanner in = new Scanner(inStream);
                PrintWriter out = new PrintWriter(outStream, true);
                String rec = "";
                if (in.hasNext()) {
                    rec = in.nextLine();
                }
                char[] array = new char[rec.length()];
                for (int i = 0; i < rec.length(); i++) {
                    array[i] = rec.charAt(i);
                }
                while (in.hasNext()) {
                    for (int x = 0; x < array.length + 1; x += 2) {
                        String str = in.nextLine();
                        str = Character.toString(array[x]);
                        out.println(str);
                    }
                    in.close();
                    socket.close();
                }
            }
        }
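
    A sketch of how the exchange could be restructured, assuming one line per letter in each direction: read the whole string once, then send the even-indexed characters and read a reply after each one, stopping when "done!" arrives.

        // Hypothetical reworking of goClient(): one letter per line each way.
        Scanner in = new Scanner(socket.getInputStream());
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);

        String text = in.nextLine();              // the full string, e.g. "Bye Bye"
        for (int i = 0; i < text.length(); i += 2) {
            out.println(text.charAt(i));          // client sends letters 0, 2, 4, ...
            String reply = in.nextLine();         // server sends letters 1, 3, 5, ... or "done!"
            if (reply.equals("done!")) {
                break;
            }
        }
        socket.close();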

    Read the article

  • What about buffering FileInputStream?

    - by Pregzt
    I have a piece of code that reads a hell of a lot (hundreds of thousands) of relatively small files (a couple of KB each) from the local file system in a loop. For each file there is a java.io.FileInputStream created to read the content. The process is very slow and takes ages. Do you think that wrapping the FIS into a java.io.BufferedInputStream would make a significant difference?
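
    Wrapping helps only if each file is consumed in many small reads; for a couple-of-KB file, the cheapest pattern is often a single bulk read into one array. Both variants sketched:

        import java.io.*;

        // Variant 1: buffered stream, useful if the content is read byte-by-byte
        InputStream in = new BufferedInputStream(new FileInputStream(file), 8192);

        // Variant 2: one bulk read per file; a few-KB file fits in a single array
        byte[] content = new byte[(int) file.length()];
        DataInputStream din = new DataInputStream(new FileInputStream(file));
        try {
            din.readFully(content);   // one (or very few) underlying reads per file
        } finally {
            din.close();
        }

    If it is still slow after that, the cost is likely the per-file open/close syscalls and disk seeks rather than the Java stream layer.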

    Read the article

  • LibPNG + Boost::GIL: png_infopp_NULL not found

    - by Viet
    Hi, I always get this error when trying to compile my file with Boost::GIL PNG IO support (I'm running Mac OS X Leopard with Boost 1.42 and libpng 1.4):

        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader::init()':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:155: error: 'png_infopp_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:160: error: 'png_infopp_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In destructor 'boost::gil::detail::png_reader::~png_reader()':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:174: error: 'png_infopp_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader::apply(const View&)':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:186: error: 'int_p_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader_color_convert<CC>::apply(const View&)':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:228: error: 'int_p_NULL' was not declared in this scope
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_writer::init()':
        /usr/local/include/boost/gil/extension/io/png_io_private.hpp:317: error: 'png_infopp_NULL' was not declared in this scope
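
    libpng 1.4 removed the png_infopp_NULL and int_p_NULL macros that Boost.GIL's png_io_private.hpp (as of Boost 1.42) still references. A common workaround is to define them before pulling in the GIL PNG header (the alternative being to build against libpng 1.2):

        // Compatibility shims for libpng >= 1.4, which dropped these macros.
        #define png_infopp_NULL (png_infopp)NULL
        #define int_p_NULL (int*)NULL
        #include <boost/gil/extension/io/png_io.hpp>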

    Read the article

  • How to properly close a UDT server in Netty 4

    - by Steffen
    I'm trying to close my UDT server (Netty 4.0.5.Final) with shutdownGracefully() and reopen it on the same port. Unfortunately, I always get the socket exception below, even though it waits until the future has completed. I also added the socket option SO_REUSEADDR. What is the proper way to do this?

        Exception in thread "main" com.barchart.udt.ExceptionUDT: UDT Error : 5011 : another socket is already listening on the same UDP port : listen0:listen [id: 0x323d3939]
            at com.barchart.udt.SocketUDT.listen0(Native Method)
            at com.barchart.udt.SocketUDT.listen(SocketUDT.java:1136)
            at com.barchart.udt.net.NetServerSocketUDT.bind(NetServerSocketUDT.java:66)
            at io.netty.channel.udt.nio.NioUdtAcceptorChannel.doBind(NioUdtAcceptorChannel.java:71)
            at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:471)
            at io.netty.channel.DefaultChannelPipeline$HeadHandler.bind(DefaultChannelPipeline.java:1006)
            at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:504)
            at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:487)
            at io.netty.channel.ChannelDuplexHandler.bind(ChannelDuplexHandler.java:38)
            at io.netty.handler.logging.LoggingHandler.bind(LoggingHandler.java:254)
            at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:504)
            at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:487)
            at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:848)
            at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:193)
            at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:321)
            at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354)
            at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:366)
            at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
            at java.lang.Thread.run(Thread.java:724)

    A small test program demonstrating the problem:

        public class MsgEchoServer {

            public static class MsgEchoServerHandler extends ChannelInboundHandlerAdapter {
            }

            public void run() throws Exception {
                final ThreadFactory acceptFactory = new UtilThreadFactory("accept");
                final ThreadFactory connectFactory = new UtilThreadFactory("connect");
                final NioEventLoopGroup acceptGroup = new NioEventLoopGroup(1, acceptFactory, NioUdtProvider.MESSAGE_PROVIDER);
                final NioEventLoopGroup connectGroup = new NioEventLoopGroup(1, connectFactory, NioUdtProvider.MESSAGE_PROVIDER);
                try {
                    final ServerBootstrap boot = new ServerBootstrap();
                    boot.group(acceptGroup, connectGroup)
                        .channelFactory(NioUdtProvider.MESSAGE_ACCEPTOR)
                        .option(ChannelOption.SO_BACKLOG, 10)
                        .option(ChannelOption.SO_REUSEADDR, true)
                        .handler(new LoggingHandler(LogLevel.INFO))
                        .childHandler(new ChannelInitializer<UdtChannel>() {
                            @Override
                            public void initChannel(final UdtChannel ch) throws Exception {
                                ch.pipeline().addLast(new MsgEchoServerHandler());
                            }
                        });
                    final ChannelFuture future = boot.bind(1234).sync();
                } finally {
                    acceptGroup.shutdownGracefully().syncUninterruptibly();
                    connectGroup.shutdownGracefully().syncUninterruptibly();
                }
                new MsgEchoServer().run();
            }

            public static void main(final String[] args) throws Exception {
                new MsgEchoServer().run();
            }
        }
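
    A sketch of one likely fix (an assumption, not a confirmed answer): the test never closes the bound channel, and shutting down the event loop groups alone may not release the UDT acceptor socket. Closing the channel explicitly and waiting for it before the shutdown and the rebind would look like this:

        // Hypothetical ordering: release the listener socket first, then the loops.
        ChannelFuture future = boot.bind(1234).sync();
        // ... serve traffic ...
        future.channel().close().sync();                        // unbind the UDT acceptor
        acceptGroup.shutdownGracefully().syncUninterruptibly();
        connectGroup.shutdownGracefully().syncUninterruptibly();
        // only now attempt to bind a fresh bootstrap to port 1234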

    Read the article

  • Reduce the I/O priority of Windows Backup (Windows Server 2008 R2)

    - by HelloSam
    I have PostgreSQL running on a Windows Server 2008 R2 x64 box, and I have scheduled a daily backup from the RAID 1 DB disk to a dedicated standalone disk. They are 15k SAS drives on a Dell PERC 6/i. I am using the built-in Windows Server Backup for this purpose. The problem is, whenever the backup process kicks in, database performance is hogged; I would say almost a 10x performance reduction. In Resource Monitor, the disk queue is in the double-digit range while backing up, and less than 1 during the day. Disk activity is around 30-50 MB/s during backup, so I guess the hardware is behaving normally, though wbengine.exe accounts for most of it. I think reducing the I/O priority of the backup process would be an answer, but I couldn't find a way to do that. Tuning the process CPU priority does not seem to help.

    Read the article

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200 GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 system (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the IO speed is about 200 MB/s, stable for the first 18 GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100 MB/s, but just for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?

    Read the article
