Search Results

Search found 4547 results on 182 pages for 'haskell io'.

Page 32 of 182

  • VS2008: File creation fails randomly in unit testing?

    - by Tim
    I'm working on implementing a reasonably simple XML serializer/deserializer (log file parser) application in C# .NET with VS 2008. I have about 50 unit tests right now for various parts of the code (mostly for the various serialization operations), and some of them seem to be failing mostly at random when they deal with file I/O. The way the tests are structured is that in the test setup method, I create a new empty file at a certain predetermined location, and close the stream I get back. Then I run some basic tests on the file (varying by what exactly is under test). In the cleanup method, I delete the file again. A large portion (usually 30 or more, though the number varies run to run) of my unit tests will fail at the initialize method, claiming they can't access the file I'm trying to create. I can't pin down the exact reason, since a test that will work one run fails the next; they all succeed when run individually. What's the problem here? Why can't I access this file across multiple unit tests? Relevant methods for a unit test that will fail some of the time: [TestInitialize()] public void LogFileTestInitialize() { this.testFolder = System.Environment.GetFolderPath( System.Environment.SpecialFolder.LocalApplicationData ); this.testPath = this.testFolder + "\\empty.lfp"; System.IO.File.Create(this.testPath); } [TestMethod()] public void LogFileConstructorTest() { string filePath = this.testPath; LogFile target = new LogFile(filePath); Assert.AreNotEqual(null, target); Assert.AreEqual(this.testPath, target.filePath); Assert.AreEqual("empty.lfp", target.fileName); Assert.AreEqual(this.testFolder + "\\empty.lfp.lfpdat", target.metaPath); } [TestCleanup()] public void LogFileTestCleanup() { System.IO.File.Delete(this.testPath); } And the LogFile() constructor: public LogFile(String filePath) { this.entries = new List<Entry>(); this.filePath = filePath; this.metaPath = filePath + ".lfpdat"; this.fileName = filePath.Substring(filePath.LastIndexOf("\\") + 1); } The precise error message: Initialization method LogFileParserTester.LogFileTest.LogFileTestInitialize threw exception. System.IO.IOException: System.IO.IOException: The process cannot access the file 'C:\Users\<user>\AppData\Local\empty.lfp' because it is being used by another process..

    Read the article

  • sending and receiving with sockets in java?

    - by Darksole
    I am working on sending and receiving from clients and servers in Java, and am stumped at the moment. The client socket is to contact a server at “localhost” port 4321. The client will receive a string from the server and alternate spelling the contents of this string with the server. For example, given the string “Bye Bye”, the client (which always begins sending the first letter) sends “B”, receives “y”, sends “e”, receives “ ”, sends “B”, receives “y”, sends “e”, and receives “done!”, which is the string that either client or server will send after the last letter from the original string is received. After “done!” is transmitted, both client and server close their communications. How would I go about getting the first string, then going back and forth sending and receiving the letters that make up the string, and when finished either sending or receiving “done!”? import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.PrintWriter; import java.net.Socket; import java.net.UnknownHostException; import java.util.Scanner; public class Program2 { public static void goClient() throws UnknownHostException, IOException{ String server = "localhost"; int port = 4321; Socket socket = new Socket(server, port); InputStream inStream = socket.getInputStream(); OutputStream outStream = socket.getOutputStream(); Scanner in = new Scanner(inStream); PrintWriter out = new PrintWriter(outStream, true); String rec = ""; if(in.hasNext()){ rec = in.nextLine(); } char[] array = new char[rec.length()]; for(int i = 0; i < rec.length(); i++){ array[i] = rec.charAt(i); } while(in.hasNext()){ for(int x = 0; x < array.length + 1; x+=2){ String str = in.nextLine(); str = Character.toString(array[x]); out.println(str); } in.close(); socket.close(); } } }

    Read the article

  • What about buffering FileInputStream?

    - by Pregzt
    I have a piece of code that reads a huge number (hundreds of thousands) of relatively small files (a couple of KB each) from the local file system in a loop. For each file a java.io.FileInputStream is created to read the content. The process is very slow and takes ages. Do you think that wrapping the FIS in a java.io.BufferedInputStream would make a significant difference?

    Read the article

  • LibPNG + Boost::GIL: png_infopp_NULL not found

    - by Viet
    Hi, I always get this error when trying to compile my file with Boost::GIL PNG IO support: (I'm running Mac OS X Leopard and Boost 1.42, LibPNG 1.4) /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader::init()': /usr/local/include/boost/gil/extension/io/png_io_private.hpp:155: error: 'png_infopp_NULL' was not declared in this scope /usr/local/include/boost/gil/extension/io/png_io_private.hpp:160: error: 'png_infopp_NULL' was not declared in this scope /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In destructor 'boost::gil::detail::png_reader::~png_reader()': /usr/local/include/boost/gil/extension/io/png_io_private.hpp:174: error: 'png_infopp_NULL' was not declared in this scope /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader::apply(const View&)': /usr/local/include/boost/gil/extension/io/png_io_private.hpp:186: error: 'int_p_NULL' was not declared in this scope /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_reader_color_convert<CC>::apply(const View&)': /usr/local/include/boost/gil/extension/io/png_io_private.hpp:228: error: 'int_p_NULL' was not declared in this scope /usr/local/include/boost/gil/extension/io/png_io_private.hpp: In member function 'void boost::gil::detail::png_writer::init()': /usr/local/include/boost/gil/extension/io/png_io_private.hpp:317: error: 'png_infopp_NULL' was not declared in this scope

    Read the article

  • How to properly close a UDT server in Netty 4

    - by Steffen
    I'm trying to close my UDT server (Netty 4.0.5.Final) with shutDownGracefully() and reopen it on the same port. Unfortunately, I always get the socket exception below although it waits until the future has completed. I also added the socket option SO_REUSEADDR. What is the proper way to do this? Exception in thread "main" com.barchart.udt.ExceptionUDT: UDT Error : 5011 : another socket is already listening on the same UDP port : listen0:listen [id: 0x323d3939] at com.barchart.udt.SocketUDT.listen0(Native Method) at com.barchart.udt.SocketUDT.listen(SocketUDT.java:1136) at com.barchart.udt.net.NetServerSocketUDT.bind(NetServerSocketUDT.java:66) at io.netty.channel.udt.nio.NioUdtAcceptorChannel.doBind(NioUdtAcceptorChannel.java:71) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:471) at io.netty.channel.DefaultChannelPipeline$HeadHandler.bind(DefaultChannelPipeline.java:1006) at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:504) at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:487) at io.netty.channel.ChannelDuplexHandler.bind(ChannelDuplexHandler.java:38) at io.netty.handler.logging.LoggingHandler.bind(LoggingHandler.java:254) at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:504) at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:487) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:848) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:193) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:321) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:366) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101) at java.lang.Thread.run(Thread.java:724) A small test program demonstration the problem: public class MsgEchoServer { public static class MsgEchoServerHandler extends ChannelInboundHandlerAdapter { } public void run() throws Exception { final ThreadFactory acceptFactory = new UtilThreadFactory("accept"); final ThreadFactory connectFactory = new UtilThreadFactory("connect"); final NioEventLoopGroup acceptGroup = new NioEventLoopGroup(1, acceptFactory, NioUdtProvider.MESSAGE_PROVIDER); final NioEventLoopGroup connectGroup = new NioEventLoopGroup(1, connectFactory, NioUdtProvider.MESSAGE_PROVIDER); try { final ServerBootstrap boot = new ServerBootstrap(); boot.group(acceptGroup, connectGroup) .channelFactory(NioUdtProvider.MESSAGE_ACCEPTOR) .option(ChannelOption.SO_BACKLOG, 10) .option(ChannelOption.SO_REUSEADDR, true) .handler(new LoggingHandler(LogLevel.INFO)) .childHandler(new ChannelInitializer<UdtChannel>() { @Override public void initChannel(final UdtChannel ch) throws Exception { ch.pipeline().addLast(new MsgEchoServerHandler()); } }); final ChannelFuture future = boot.bind(1234).sync(); } finally { acceptGroup.shutdownGracefully().syncUninterruptibly(); connectGroup.shutdownGracefully().syncUninterruptibly(); } new MsgEchoServer().run(); } public static void main(final String[] args) throws Exception { new MsgEchoServer().run(); } }

    Read the article

  • Weirdness with cabal, HTF, and HUnit assertions

    - by rampion
    So I'm trying to use HTF to run some HUnit-style assertions % cat tests/TestDemo.hs {-# OPTIONS_GHC -Wall -F -pgmF htfpp #-} module Main where import Test.Framework import Test.HUnit.Base ((@?=)) import System.Environment (getArgs) -- just run some tests main :: IO () main = getArgs >>= flip runTestWithArgs Main.allHTFTests -- all these tests should fail test_fail_int1 :: Assertion test_fail_int1 = (0::Int) @?= (1::Int) test_fail_bool1 :: Assertion test_fail_bool1 = True @?= False test_fail_string1 :: Assertion test_fail_string1 = "0" @?= "1" test_fail_int2 :: Assertion test_fail_int2 = [0::Int] @?= [1::Int] test_fail_string2 :: Assertion test_fail_string2 = "true" @?= "false" test_fail_bool2 :: Assertion test_fail_bool2 = [True] @?= [False] And when I use ghc --make, it seems to work correctly. % ghc --make tests/TestDemo.hs [1 of 1] Compiling Main ( tests/TestDemo.hs, tests/TestDemo.o ) Linking tests/TestDemo ... % tests/TestDemoA ... * Tests: 6 * Passed: 0 * Failures: 6 * Errors: 0 Failures: * Main:fail_int1 (tests/TestDemo.hs:9) * Main:fail_bool1 (tests/TestDemo.hs:12) * Main:fail_string1 (tests/TestDemo.hs:15) * Main:fail_int2 (tests/TestDemo.hs:19) * Main:fail_string2 (tests/TestDemo.hs:22) * Main:fail_bool2 (tests/TestDemo.hs:25) But when I use cabal to build it, not all the tests that should fail, fail. % cat Demo.cabal ... executable test-demo build-depends: base >= 4, HUnit, HTF main-is: TestDemo.hs hs-source-dirs: tests % cabal configure Resolving dependencies... Configuring Demo-0.0.0... % cabal build Preprocessing executables for Demo-0.0.0... Building Demo-0.0.0... [1 of 1] Compiling Main ( tests/TestDemo.hs, dist/build/test-demo/test-demo-tmp/Main.o ) Linking dist/build/test-demo/test-demo ... % dist/build/test-demo/test-demo ... * Tests: 6 * Passed: 3 * Failures: 3 * Errors: 0 Failures: * Main:fail_int2 (tests/TestDemo.hs:23) * Main:fail_string2 (tests/TestDemo.hs:26) * Main:fail_bool2 (tests/TestDemo.hs:29) What's going wrong and how can I fix it?

    Read the article

  • Nested Lambdas in wxHaskell Library

    - by kunkelwe
    I've been trying to figure out how I can make staticText elements resize to fit their contents with wxHaskell. From what I can tell, this is the default behavior in wxWidgets, but the wxHaskell wrapper specifically disables this behavior. However, the library code that creates new elements has me very confused. Can anyone provide an explanation for what this code does? staticText :: Window a -> [Prop (StaticText ())] -> IO (StaticText ()) staticText parent props = feed2 props 0 $ initialWindow $ \id rect -> initialText $ \txt -> \props flags -> do t <- staticTextCreate parent id txt rect flags {- (wxALIGN_LEFT + wxST_NO_AUTORESIZE) -} set t props return t I know that feed2 x y f = f x y, and that the type signature of initialWindow is initialWindow :: (Id -> Rect -> [Prop (Window w)] -> Style -> a) -> [Prop (Window w)] -> Style -> a and the signature of initialText is initialText :: Textual w => (String -> [Prop w] -> a) -> [Prop w] -> a but I just can't wrap my head around all the lambdas.
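    A way to see through the nesting is to remember that feed2 props 0 f is just f props 0, so the whole expression is a chain of continuations applied to the caller's props and an initial style of 0: initialWindow consumes the window-level properties and hands what it computed plus the remaining props and style to its continuation, initialText does the same for the text property, and the innermost lambda finally receives the leftover props and the accumulated style flags and creates the control. Below is a minimal, self-contained model of that shape; the Prop and Style types and the property names are invented here, the Rect argument is left out, and the real initialText does not thread the Style, so this is only a sketch of the pattern, not wxHaskell's actual definitions.

    import Data.Maybe (fromMaybe)

    -- Invented, simplified stand-ins for wxHaskell's Prop w and Style,
    -- used only to model the shape of the real code.
    type Prop  = (String, String)
    type Style = Int

    feed2 :: a -> b -> (a -> b -> c) -> c
    feed2 x y f = f x y

    -- Peel off the property this stage understands, then hand the value it
    -- computed plus the *remaining* props and style to the continuation.
    initialWindow :: (String -> [Prop] -> Style -> a) -> [Prop] -> Style -> a
    initialWindow cont props style =
      cont (fromMaybe "wxID_ANY" (lookup "id" props))
           (filter ((/= "id") . fst) props)
           style

    -- Same idea for the text property.
    initialText :: (String -> [Prop] -> Style -> a) -> [Prop] -> Style -> a
    initialText cont props style =
      cont (fromMaybe "" (lookup "text" props))
           (filter ((/= "text") . fst) props)
           style

    -- Same shape as the library's staticText: each lambda is the continuation
    -- of the stage before it, and the innermost one finally sees the leftover
    -- props and the accumulated style flags.
    staticText :: [Prop] -> IO ()
    staticText props =
      feed2 props 0 $
        initialWindow $ \ident ->
        initialText   $ \txt props' flags ->
          putStrLn ("create: id=" ++ ident ++ ", text=" ++ txt
                    ++ ", remaining props=" ++ show props'
                    ++ ", style=" ++ show flags)

    main :: IO ()
    main = staticText [("id", "42"), ("text", "Hello")]

    Running main prints what the innermost stage ends up with, which is exactly the role of the final \props flags -> ... lambda in the library code.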

    Read the article

  • Reduce the I/O priority of Windows Backup (Windows Server 2008 R2)

    - by HelloSam
    I have PostgreSQL running on a Windows Server 2008 R2 x64 box, and I have scheduled a daily backup from the RAID 1 DB disk to a dedicated standalone disk. The disks are 15k SAS on a Dell PERC 6i. I am using the built-in Windows Server Backup for this purpose. The problem is that whenever the backup process kicks in, database performance is hogged; I would say almost a 10x performance reduction. In the resource monitor, the disk queue is in the double-digit range when backing up, and less than 1 during the day. Disk activity is around 30-50MB/s during backup, so I guess the hardware is behaving normally, though wbengine.exe takes up most of it. I think reducing the I/O priority of the backup process would be an answer, but I couldn't find a way to do that. Tuning the process CPU priority does not seem to help.

    Read the article

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 array (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the I/O speed is about 200MB/s, stable for the first 18GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100MB/s, but just for a few seconds, and then the process seems to hang for a bit. At the same time all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?

    Read the article

  • How to arrange 2 SSD with 2 SATA?

    - by alfish
    I'd like to get the best I/O performance as well as good capacity and reliability out of a server that hosts a busy forum, which involves a lot of static file downloads. I am wondering what the best plan is to format and use the disks, given that the server has only 4 disk bays and I have 2 SSDs and 2 SATA disks at hand. I am currently thinking about putting the disks in RAID 10 so that the SSDs contain /var/lib/mysql as well as most of the OS (likely to be Debian) and the SATA disks contain /path/to/static/files. However, I'd like to hear your expert opinion on this. Thanks

    Read the article

  • Should I store my code/projects on my SSD or my secondary drive?

    - by user37467
    I just got a new box. It has an SSD for the primary drive, and a 1TB SATA for the secondary drive. I'm going to run windows and my binaries on the SSD and keep all my downloads/documents/music/etc on the secondary drive. My question is should I also keep my Visual Studio Projects and code on the SSD or keep them on the secondary drive? The faster SSD would presumably be better for compiling and indexed searches, but would it be better to keep it on the 2nd drive for a more parallel disk IO situation?

    Read the article

  • How do I handle the Maybe result of at in Control.Lens.Indexed without a Monoid instance

    - by Matthias Hörmann
    I recently discovered the lens package on Hackage and have been trying to make use of it now in a small test project that might turn into a MUD/MUSH server one very distant day if I keep working on it. Here is a minimized version of my code illustrating the problem I am facing right now with the at lenses used to access Key/Value containers (Data.Map.Strict in my case) {-# LANGUAGE OverloadedStrings, GeneralizedNewtypeDeriving, TemplateHaskell #-} module World where import Control.Applicative ((<$>),(<*>), pure) import Control.Lens import Data.Map.Strict (Map) import qualified Data.Map.Strict as DM import Data.Maybe import Data.UUID import Data.Text (Text) import qualified Data.Text as T import System.Random (Random, randomIO) newtype RoomId = RoomId UUID deriving (Eq, Ord, Show, Read, Random) newtype PlayerId = PlayerId UUID deriving (Eq, Ord, Show, Read, Random) data Room = Room { _roomId :: RoomId , _roomName :: Text , _roomDescription :: Text , _roomPlayers :: [PlayerId] } deriving (Eq, Ord, Show, Read) makeLenses ''Room data Player = Player { _playerId :: PlayerId , _playerDisplayName :: Text , _playerLocation :: RoomId } deriving (Eq, Ord, Show, Read) makeLenses ''Player data World = World { _worldRooms :: Map RoomId Room , _worldPlayers :: Map PlayerId Player } deriving (Eq, Ord, Show, Read) makeLenses ''World mkWorld :: IO World mkWorld = do r1 <- Room <$> randomIO <*> (pure "The Singularity") <*> (pure "You are standing in the only place in the whole world") <*> (pure []) p1 <- Player <$> randomIO <*> (pure "testplayer1") <*> (pure $ r1^.roomId) let rooms = at (r1^.roomId) ?~ (set roomPlayers [p1^.playerId] r1) $ DM.empty players = at (p1^.playerId) ?~ p1 $ DM.empty in do return $ World rooms players viewPlayerLocation :: World -> PlayerId -> RoomId viewPlayerLocation world playerId= view (worldPlayers.at playerId.traverse.playerLocation) world Since rooms, players and similar objects are referenced all over the code I store them in my World state type as maps of Ids (newtyped UUIDs) to their data objects. To retrieve those with lenses I need to handle the Maybe returned by the at lens (in case the key is not in the map this is Nothing) somehow. In my last line I tried to do this via traverse which does typecheck as long as the final result is an instance of Monoid but this is not generally the case. Right here it is not because playerLocation returns a RoomId which has no Monoid instance. No instance for (Data.Monoid.Monoid RoomId) arising from a use of `traverse' Possible fix: add an instance declaration for (Data.Monoid.Monoid RoomId) In the first argument of `(.)', namely `traverse' In the second argument of `(.)', namely `traverse . playerLocation' In the second argument of `(.)', namely `at playerId . traverse . playerLocation' Since the Monoid is required by traverse only because traverse generalizes to containers of sizes greater than one I was now wondering if there is a better way to handle this that does not require semantically nonsensical Monoid instances on all types possibly contained in one my objects I want to store in the map. Or maybe I misunderstood the issue here completely and I need to use a completely different bit of the rather large lens package?
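    One standard way to keep the Maybe instead of discharging it with a Monoid is to combine ix (the Traversal that at k . traverse effectively is for a Map) with preview / ^?: the Monoid requirement comes from running view over a Traversal, which folds all targets, whereas preview just takes the first target and wraps it in Maybe, so no Monoid instance for RoomId is ever demanded. Here is a cut-down, runnable sketch of that idea; it uses Int-backed ids instead of UUIDs and only the fields needed for the lookup, so it is an illustration of the technique rather than the asker's full World type.

    {-# LANGUAGE TemplateHaskell #-}
    module Main where

    import Control.Lens
    import Data.Map.Strict (Map)
    import qualified Data.Map.Strict as DM

    -- Int-backed stand-ins for the question's UUID-based ids (illustration only).
    newtype RoomId   = RoomId Int   deriving (Eq, Ord, Show)
    newtype PlayerId = PlayerId Int deriving (Eq, Ord, Show)

    data Player = Player
      { _playerId       :: PlayerId
      , _playerLocation :: RoomId
      } deriving (Show)
    makeLenses ''Player

    data World = World
      { _worldPlayers :: Map PlayerId Player
      } deriving (Show)
    makeLenses ''World

    -- ix is a Traversal and preview (^?) takes its first target, so the lookup
    -- stays a Maybe and no Monoid instance for RoomId is needed.
    viewPlayerLocation :: World -> PlayerId -> Maybe RoomId
    viewPlayerLocation world pid =
      world ^? worldPlayers . ix pid . playerLocation

    main :: IO ()
    main = do
      let p1    = Player (PlayerId 1) (RoomId 7)
          world = World (DM.singleton (PlayerId 1) p1)
      print (viewPlayerLocation world (PlayerId 1))  -- Just (RoomId 7)
      print (viewPlayerLocation world (PlayerId 2))  -- Nothing

    If a definite RoomId is required at a call site, the Maybe can then be handled explicitly there (for example with maybe or fromMaybe) rather than by inventing a semantically meaningless Monoid instance.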

    Read the article

  • How do I set up DNS with nic.io to point to an AWS EC2 server?

    - by Chad Johnson
    I purchased a domain one week ago via nic.io. I have elected to provide my own DNS [because they provided no other option]. I'm trying to point my .io domain at my EC2 server instance. I've allocated an elastic IP and associated it with the instance. I can SSH into the instance and access port 80 via the IP address just fine. The IP is 54.235.201.241. nic.io support said the following: "You have selected to provide your own DNS and therefore if there is an issue with the set-up of the name servers you will need to contact your DNS provider." So, I created a Hosted Zone via Route 53 in AWS. This created NS and SOA records. I then set the Primary and Secondary servers at nic.io's domain admin page to the SOA record domains. Additionally, I set the optional servers to the NS domains. I did this two days ago, and I can't access the server via the domain. I ran a DNS check here... still not sure what I need to do: http://mydnscheck.com/?domain=chadjohnson.io&ns1=&ns2=&ns3=&ns4=&ns5=&ns6=. I have no idea what I'm supposed to do. Does anyone have any ideas?

    Read the article

  • How to keep subtree removal (`rm -rf`) from starving other processes for Disk I/O?

    - by David Eyk
    We have a very large (multi-GB) Nginx cache directory for a busy site, which we occasionally need to clear all at once. I've solved this in the past by moving the cache folder to a new path, making a new cache folder at the old path, and then rm -rfing the old cache folder. Lately, however, when I need to clear the cache on a busy morning, the I/O from rm -rf is starving my server processes of disk access, as both Nginx and the server it fronts for are read-intensive. I can watch the load average climb while the CPUs sit idle and rm -rf takes 98-99% of Disk IO in iotop. I've tried ionice -c 3 when invoking rm, but it seems to have no appreciable effect on the observed behavior. Is there any way to tame rm -rf to share the disk more? Do I need to use a different technique that will take its cues from ionice? Update: The filesystem in question is an AWS EC2 instance store (the primary disk is EBS). The /etc/fstab entry looks like this: /dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2

    Read the article

  • one 16K random read I/O issues 2 scsi I/O (16K and 4K) requests in linux

    - by hiroyuki
    I noticed a weird issue when benchmarking random read I/O for files in Linux (2.6.18). The benchmarking program is my own, and it simply keeps reading 16KB of a file from a random offset. I traced I/O behavior at the system call level and the SCSI level with SystemTap, and I noticed that one 16KB sysread issues 2 SCSI I/Os, as follows. SYSPREAD random(8472) 3, 0x16fc5200, 16384, 128137183232 SCSI random(8472) 0 1 0 0 start-sector: 226321183 size: 4096 bufflen 4096 FROM_DEVICE 1354354008068009 SCSI random(8472) 0 1 0 0 start-sector: 226323431 size: 16384 bufflen 16384 FROM_DEVICE 1354354008075927 SYSPREAD random(8472) 3, 0x16fc5200, 16384, 21807710208 SCSI random(8472) 0 1 0 0 start-sector: 1889888935 size: 4096 bufflen 4096 FROM_DEVICE 1354354008085128 SCSI random(8472) 0 1 0 0 start-sector: 1889891823 size: 16384 bufflen 16384 FROM_DEVICE 1354354008097161 SYSPREAD random(8472) 3, 0x16fc5200, 16384, 139365318656 SCSI random(8472) 0 1 0 0 start-sector: 254092663 size: 4096 bufflen 4096 FROM_DEVICE 1354354008100633 SCSI random(8472) 0 1 0 0 start-sector: 254094879 size: 16384 bufflen 16384 FROM_DEVICE 1354354008111723 SYSPREAD random(8472) 3, 0x16fc5200, 16384, 60304424960 SCSI random(8472) 0 1 0 0 start-sector: 58119807 size: 4096 bufflen 4096 FROM_DEVICE 1354354008120469 SCSI random(8472) 0 1 0 0 start-sector: 58125415 size: 16384 bufflen 16384 FROM_DEVICE 1354354008126343 As shown above, one 16KB pread issues 2 SCSI I/Os. (I traced SCSI I/O dispatching with probe scsi.iodispatching. Please ignore the values except for start-sector and size.) One SCSI I/O is the 16KB I/O requested by the application, and that's OK. The issue is the other 4KB I/O, which I don't know why Linux issues. Of course, I/O performance is degraded by this extra 4KB I/O, and it is causing me trouble. I also used fio (the well-known I/O benchmark tool) and noticed the same issue, so it's not coming from the application. Does anybody know what is going on? Any comments or advice are appreciated. Thanks

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours; there seems to be high disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low): *** system and process activity since boot *** PID RDDSK WRDSK WCANCL DSK CMD 1/18 2176 1.7G 7.3G 854.4M 39 mysqld 671 1248K 3.0G 0K 13 flush-8:0 566 0K 1.1G 0K 5 jbd2/sda2-8 2401 124.2M 529.1M 22408K 3 crond 2032 2.2G 502.0M 0K 12 nginx 2360 425.8M 115.3M 4188K 2 httpd flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the IO column, and they are the processes that write the most to the HDD (after mysqld). From what I found on Google this could be caused by some ext4-related bug; the current kernel is: Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?

    Read the article

  • Input/output error reading USB backup drive on CentOS 6.4

    - by Kev
    I'm suddenly seeing some strange behaviour on our USB backup drive that doesn't make sense to me: (2013-10-21 14:58:23 [root@newdc /]$ cd /mnt/backup/ (2013-10-21 14:59:03 [root@newdc backup]$ ls -la ls: reading directory .: Input/output error total 0 (2013-10-21 14:59:05 [root@newdc backup]$ df -h /mnt/backup Filesystem Size Used Avail Use% Mounted on /dev/sda1 917G 843G 28G 97% /mnt/backup How is it possible for the OS to know how much is in use, but I can't ls any of it as root? Or more to the point, what problem does this indicate? /var/log/messages said this: Oct 21 14:57:54 g5 kernel: EXT4-fs error (device sda1): ext4_journal_start_sb: Detected aborted journal Oct 21 14:57:54 g5 kernel: EXT4-fs (sda1): Remounting filesystem read-only But...read-only is something different than 'throw an io error'... After unmounting to try fsck on it, I had someone on-site look at it, and the drive was not spun up, and had a slow-flashing light, which I believe means it was in a power-suspend mode. So I had them unplug and replug the USB cable, and now (before remounting) it says: fsck from util-linux-ng 2.17.2 e2fsck 1.41.12 (17-May-2010) /dev/sda1: clean, 2805106/61046784 files, 181934167/244182016 blocks I then mount it and now ls works and df reports: Filesystem Size Used Avail Use% Mounted on /dev/sda1 917G 680G 191G 79% /mnt/backup What would cause it to go into such a state without being asked to? Why all the weird behaviour, and now it appears to not be corrupt?

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge a shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't server a large number of users (one serves no more than 6 users, and the other serves around 20 users), they are in the same domain, but on different physical hardware (and at different sites). They don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections, this is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs however). Today, when one of the servers crashed, I noticed an entry in the event log, which looked similar to the following: Log Name: Application Source: ESENT Date: 27/11/2012 10:25:55 Event ID: 533 Task Category: General Level: Warning Keywords: Classic User: N/A Computer: HAL-FS-01.example.com Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\ dfsr.db: A request to write to the file "\\.\E:\System Volume Information\ DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem. When the server started up again, I went to find the event log entry to investigate further and found that the event log entry was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log. Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include server 2012, server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this, and managed to resolve the problem?

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is. Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share: [root@lnxutil1 ~]# parted -s /dev/sda unit s print Model: DELL PERC H700 (scsi) Disk /dev/sda: 134217599s Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 63s 465884s 465822s primary ext2 boot 2 465885s 134207009s 133741125s primary lvm [root@lnxutil1 ~]# parted -s /dev/sdb unit s print Model: DELL PERC H700 (scsi) Disk /dev/sdb: 5720768639s Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 34s 5720768606s 5720768573s lvm Edit 1 Using the cfq IO scheduler (default for CentOS 5.6): # cat /sys/block/sd{a,b}/queue/scheduler noop anticipatory deadline [cfq] noop anticipatory deadline [cfq] Chunk size is the same as strip size, right? If so, then 64kB: # /opt/MegaCli -LDInfo -Lall -aALL -NoLog Adapter #0 Number of Virtual Disks: 2 Virtual Disk: 0 (target id: 0) Name:os RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:65535MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 ... physical disk info removed for brevity ... Virtual Disk: 1 (target id: 1) Name:share RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:2793344MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this and how to debug it further, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as a container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split across three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure that the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both the hypervisor and almost all guests; Xen software virtualisation (therefore also no Virtio); no LibVirt management; different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore); we had no IPv6 connectivity; and neither in the hypervisor nor in the guests did we have noticeable I/O latency problems. According to the datasheets, the current hard drives and the ones in the old machine have an average latency of 4.12 ms.

    Read the article

  • Using DockPanel Suite's XML saving with My.Settings

    - by Cyclone
    In vb.net, how can I use SaveAsXML and LoadFromXML (DockPanel Suite functions) with My.Settings? I know there is a way to save it to a stream, but I don't know enough about System.IO to do this. DockPanel Suite: http://sourceforge.net/projects/dockpanelsuite/ I would very much prefer to store configuration using My.Settings instead of as an XML file. Thanks for the help! EDIT: Since the only person to answer so far has since deleted their answer, I feel I should explain further: Those two methods can both accept a stream object, but I have not used System.IO enough to know how to properly initialize a stream, let alone get a string out of a stream. EDIT: Googling has gotten me nowhere. EDIT: Still waiting for this. I've tried everything...

    Read the article

  • Python 3 with the range function

    - by Leif Andersen
    I can type in the following code in the terminal, and it works: for i in range(5): print(i) And it will print: 0 1 2 3 4 as expected. However, I tried to write a script that does a similar thing: print(current_chunk.data) read_chunk(file, current_chunk) numVerts, numFaces, numEdges = current_chunk.data print(current_chunk.data) print(numVerts) for vertex in range(numVerts): print("Hello World") current_chunk.data is gained from the following method: def read_chunk(file, chunk): line = file.readline() while line.startswith('#'): line = file.readline() chunk.data = line.split() The output for this is: ['OFF'] ['490', '518', '0'] 490 Traceback (most recent call last): File "/home/leif/src/install/linux2/.blender/scripts/io/import_scene_off.py", line 88, in execute load_off(self.properties.path, context) File "/home/leif/src/install/linux2/.blender/scripts/io/import_scene_off.py", line 68, in load_off for vertex in range(numVerts): TypeError: 'str' object cannot be interpreted as an integer So, why isn't it spitting out Hello World 490 times? Or is the 490 being thought of as a string? I opened the file like this: def load_off(filename, context): file = open(filename, 'r')

    Read the article
