Search Results

Search found 4547 results on 182 pages for 'haskell io'.

Page 21/182 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • VMware Server guest systems are extremely slow with IO load on host (Ubuntu 8.04)

    - by Dennis G.
    We are experiencing performance issues with a VMware Server 2.x installation on an Ubuntu 8.04 host. When the host system is generating IO load (for example, copying large files as part of a backup operation), the guests (also Ubuntu 8.04) become extremely unresponsive and slow (simple Apache HTTP requests take 5 seconds instead of the usual 200ms). We tried optimizing various aspects of the VMs, but the issue remains. Is there a known bug with VMware performance under Linux when host IO load is high? Is there a way to fix this? Is this only an issue with Ubuntu systems, or have you seen it on other systems before? Thanks!

    Read the article

  • Linux IO scheduler on databases with RAID

    - by Raghu
    Hi, I have a Linux database (MySQL) server (Dell 2950) with a 6-disk RAID 10. The default IO scheduler on it is CFQ. However, from what I have read and heard, there is no need for a scheduler like CFQ when reordering/scheduling is already done by the underlying RAID controller; on the contrary, since CFQ does not take the underlying RAID configuration into account, performance may actually degrade. The primary concern is to reduce CPU usage and improve throughput. Also, I have seen recommendations to use the noop or deadline IO scheduler for databases, primarily because of the nature of their R/W access.

    Read the article

  • Silverlight changes the io.Stream to byte[]

    - by Sai
    I have created a WCF service for uploading images, which accepts System.IO.Stream as an input parameter and uses streaming. When I added the service reference in the Silverlight project, it automatically changed the parameter of my WCF method from System.IO.Stream to byte[]. Can anyone suggest a way around this so that I can get the System.IO.Stream type rather than byte[]? Thanks in advance.

    Read the article

  • NULL pointer dereference in swiotlb_unmap_sg_attrs() on disk IO

    - by Inductiveload
    I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does. There is a stack trace to go with the problem. It tends to vary a bit in exact details, but it always crashes in swiotlb_unmap_sg_attrs(). I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called; I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I then take the lock back up and return to request queue handling. The crash happens in blk_fetch_request() during the following: if (!__blk_end_request(req, res, bytes)){ printk(KERN_ERR "%s next request\n", DRIVER_NAME); req = blk_fetch_request(q); } else { printk(KERN_ERR "%s same request\n", DRIVER_NAME); } where bytes is updated by the request handler to be the total length of IO (summed length of each scatter-gather segment).

    Read the article

  • Fragment method and socket.io

    - by Tolgay Toklar
    I have a method that updates an array list in a fragment. I can call this method from the main activity like this: public void getFromUser(String message) { addMessageToFragment("ok"); } public void addMessageToFragment(String message) { Log.w("Step 1",message); frgObj.addMessageToList("asd"); } getFromUser is called from the fragment (when the user presses the button) and this works fine. But I am also using socket.io in my app, and when I try to call this method from a socket.io callback, the app does not work. public void on(String event, IOAcknowledge ack, Object... args) { try{ addMessageToFragment("ok"); } catch (JSONException e) {} } When this callback is invoked, the app gives these errors: 08-19 11:57:24.813: W/System.err(4962): io.socket.SocketIOException: Exception was thrown in on(String, JSONObject[]). 08-19 11:57:24.813: W/System.err(4962): Message was: 5:::{"name":"listele","args":[{"mesaj":"123","gonderen":"781722165-tolgay007-DKSMIcIYGahPuKXriM83","alici":"tolgay007","blck_id":"781722165-tolgay007","out_username":"Anony-781722","ars_status":1,"longinf":"3aqghef","a_status":1}]} 08-19 11:57:24.813: W/System.err(4962): at io.socket.IOConnection.transportMessage(IOConnection.java:702) 08-19 11:57:24.813: W/System.err(4962): at io.socket.WebsocketTransport.onMessage(WebsocketTransport.java:82) 08-19 11:57:24.813: W/System.err(4962): at org.java_websocket.client.WebSocketClient.onWebsocketMessage(WebSocketClient.java:361) 08-19 11:57:24.813: W/System.err(4962): at org.java_websocket.WebSocketImpl.deliverMessage(WebSocketImpl.java:565) 08-19 11:57:24.813: W/System.err(4962): at org.java_websocket.WebSocketImpl.decodeFrames(WebSocketImpl.java:331) 08-19 11:57:24.813: W/System.err(4962): at org.java_websocket.WebSocketImpl.decode(WebSocketImpl.java:152) 08-19 11:57:24.813: W/System.err(4962): at org.java_websocket.client.WebSocketClient.interruptableRun(WebSocketClient.java:247) 08-19 11:57:24.823: W/System.err(4962): at org.java_websocket.client.WebSocketClient.run(WebSocketClient.java:193) 08-19 11:57:24.823: W/System.err(4962): at java.lang.Thread.run(Thread.java:841) 08-19 11:57:24.823: W/System.err(4962): Caused by: android.view.ViewRootImpl$CalledFromWrongThreadException: Only the original thread that created a view hierarchy can touch its views.
    08-19 11:57:24.823: W/System.err(4962): at android.view.ViewRootImpl.checkThread(ViewRootImpl.java:6094) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewRootImpl.focusableViewAvailable(ViewRootImpl.java:2800) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewGroup.focusableViewAvailable(ViewGroup.java:650) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewGroup.focusableViewAvailable(ViewGroup.java:650) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewGroup.focusableViewAvailable(ViewGroup.java:650) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewGroup.focusableViewAvailable(ViewGroup.java:650) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewGroup.focusableViewAvailable(ViewGroup.java:650) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewGroup.focusableViewAvailable(ViewGroup.java:650) 08-19 11:57:24.823: W/System.err(4962): at android.view.ViewGroup.focusableViewAvailable(ViewGroup.java:650) 08-19 11:57:24.823: W/System.err(4962): at android.view.View.setFlags(View.java:8878) 08-19 11:57:24.823: W/System.err(4962): at android.view.View.setFocusableInTouchMode(View.java:6114) 08-19 11:57:24.823: W/System.err(4962): at android.widget.AdapterView.checkFocus(AdapterView.java:718) 08-19 11:57:24.823: W/System.err(4962): at android.widget.AdapterView$AdapterDataSetObserver.onChanged(AdapterView.java:813) 08-19 11:57:24.823: W/System.err(4962): at android.widget.AbsListView$AdapterDataSetObserver.onChanged(AbsListView.java:6280) 08-19 11:57:24.823: W/System.err(4962): at android.database.DataSetObservable.notifyChanged(DataSetObservable.java:37) 08-19 11:57:24.823: W/System.err(4962): at android.widget.BaseAdapter.notifyDataSetChanged(BaseAdapter.java:50) 08-19 11:57:24.823: W/System.err(4962): at android.widget.ArrayAdapter.notifyDataSetChanged(ArrayAdapter.java:286) 08-19 11:57:24.823: W/System.err(4962): at com.impact.ribony.ConversationFragment.addMessageToList(ConversationFragment.java:91) 08-19 11:57:24.823: W/System.err(4962): at com.impact.ribony.MainActivity.addMessageToFragment(MainActivity.java:344) 08-19 11:57:24.823: W/System.err(4962): at com.impact.ribony.MainActivity$2.on(MainActivity.java:183) 08-19 11:57:24.823: W/System.err(4962): at io.socket.IOConnection.on(IOConnection.java:908) 08-19 11:57:24.883: W/System.err(4962): at io.socket.IOConnection.transportMessage(IOConnection.java:697) I don't understand this error. What could be causing it?

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words). If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • functional dependencies vs type families

    - by mhwombat
    I'm developing a framework for running experiments with artificial life, and I'm trying to use type families instead of functional dependencies. Type families seems to be the preferred approach among Haskellers, but I've run into a situation where functional dependencies seem like a better fit. Am I missing a trick? Here's the design using type families. (This code compiles OK.) {-# LANGUAGE TypeFamilies, FlexibleContexts #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u => a -> StateT u IO a -- plus other functions class Universe u where type MyAgent u :: * withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () defaultWithAgent = undefined -- stub -- plus default implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- .. and they'll also need to make SimpleUniverse an instance of Universe -- for their agent type. -- instance Universe SimpleUniverse where type MyAgent SimpleUniverse = Bug withAgent = defaultWithAgent -- boilerplate -- plus similar boilerplate for other functions Is there a way to avoid forcing my users to write those last two lines of boilerplate? Compare with the version using fundeps, below, which seems to make things simpler for my users. (The use of UndecideableInstances may be a red flag.) (This code also compiles OK.) {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u a => a -> StateT u IO a -- plus other functions class Universe u a | u -> a where withAgent :: Agent a => (a -> StateT u IO a) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } instance Universe SimpleUniverse a where withAgent = undefined -- stub -- plus implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- And now my users only have to write stuff like... -- u :: SimpleUniverse u = SimpleUniverse "mydir"

    Read the article

  • Windows Server 2008 R2 Server Manager not working -> mmc.exe crashes with System.IO.FileNotFoundException

    - by Aleja_Vigo
    For a few days now I can't run Server Manager; it fails like this: Descripción: Stopped working Firma con problemas: Nombre del evento de problema: CLR20r3 Firma del problema 01: mmc.exe Firma del problema 02: 6.1.7600.16385 Firma del problema 03: 4a5bc808 Firma del problema 04: System.Management Firma del problema 05: 2.0.0.0 Firma del problema 06: 4ca2baf0 Firma del problema 07: 32f Firma del problema 08: 12b Firma del problema 09: System.IO.FileNotFoundException Versión del sistema operativo: 6.1.7601.2.1.0.272.7 There are other strange symptoms in the OS: Hyper-V stopped working as well and fails to load any VM information. The desktop icons rearrange themselves all the time (and always on boot) after I move them; I now use an app that remembers their positions to restore them... The Windows Update service disappeared, along with the BITS service; I managed to recover them and installed all available updates today. I'm going nuts looking for information about these errors. Solutions that didn't work: sfc /scannow (doesn't complain about anything); all Windows updates applied (after recovering the missing Windows Update). ServerManager.log shows only one error, all the time: Error (Id=0) System.Runtime.InteropServices.COMException (0x800706D9): No hay más extremos disponibles desde el asignador de extremos. (Excepción de HRESULT: 0x800706D9) en Microsoft.Windows.ServerManager.NativeMethods.INetFwPolicy2.IsRuleGroupCurrentlyEnabled(String group) en Microsoft.Windows.ServerManager.DirectResult.GetRemoteManagementEnabled() In English: "There are no more endpoints available from the endpoint mapper". Where could I see which file mmc.exe is looking for in that System.IO.FileNotFoundException? Please, any light on this would be much appreciated...

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on gigenet cloud); the database is ~ 10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated. Specs: Debian Lenny 64bit ~12GHz (6x2GHz) CPU, 7520gb RAM, 160gb disk. Kernel: 2.6.32-4-amd64 mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)) Other software: vBulletin 3.8.4 memcached 1.2.2 PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27) lighttpd/1.4.28 (ssl) - a light and fast webserver PHP and vBulletin are configured to use memcached. MySQL Settings: [mysqld] key_buffer = 128M max_allowed_packet = 16M thread_cache_size = 8 myisam-recover = BACKUP max_connections = 1024 query_cache_limit = 2M query_cache_size = 128M expire_logs_days = 10 max_binlog_size = 100M key_buffer_size = 128M join_buffer_size = 8M tmp_table_size = 16M max_heap_table_size = 16M table_cache = 96 Other: From the cloud's IO chart, we're averaging 100mb/s read. > vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 9 0 73140 36336 8968 1859160 0 0 42 15 3 2 6 1 89 5 > /etc/init.d/mysql status Threads: 49 Questions: 252139 Slow queries: 164 Opens: 53573 Flush tables: 1 Open tables: 337 Queries per second avg: 61.302. moved from superuser

    Read the article

  • no disk io but iowait very high

    - by Dan
    There is no disk IO going on, according to the results of iotop: Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO< COMMAND 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init [3] 1930 be/4 named 0.00 B/s 0.00 B/s 0.00 % 0.00 % named -u ~d/run-root 1931 be/4 named 0.00 B/s 0.00 B/s 0.00 % 0.00 % named -u ~d/run-root 1932 be/4 named 0.00 B/s 0.00 B/s 0.00 % 0.00 % named -u ~d/run-root 1933 be/4 named 0.00 B/s 0.00 B/s 0.00 % 0.00 % named -u ~d/run-root 1810 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % sh /usr/b~user=mysql 9795 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 0.00 % httpd 8004 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 0.00 % httpd 3226 be/4 postfix 0.00 B/s 0.00 B/s 0.00 % 0.00 % tlsmgr -l -t unix -u 8154 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 0.00 % httpd 9759 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % find -name php.ini 9249 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 0.00 % httpd 1756 be/4 postfix 0.00 B/s 0.00 B/s 0.00 % 0.00 % psa-pc-re~@localhost 1863 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld --~mysql.sock 3123 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % crond 1758 be/4 postfix 0.00 B/s 0.00 B/s 0.00 % 0.00 % psa-pc-re~@localhost 1865 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld --~mysql.sock 1592 be/4 sw-cp-se 0.00 B/s 0.00 B/s 0.00 % 0.00 % sw-cp-ser~ver/config 7612 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % sshd: root@pts/0 7614 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % sftp-server 7615 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % -bash 1602 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % sshd 8003 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % httpd But iowait is very high. iostat reports: avg-cpu: %user %nice %system %iowait %steal %idle 0.83 0.00 0.13 13.83 0.00 85.20 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn The server runs like a snail. What could be wrong here? Thanks

    Read the article

  • Resources for improving your comprehension of recursion?

    - by Andrew M
    I know what recursion is (when a pattern reoccurs within itself, typically a function that calls itself on one of its lines, after a breakout conditional... right?), and I can understand recursive functions if I study them closely. My problem is, when I see new examples, I'm always initially confused. If I see a loop, or a mapping, zipping, nesting, polymorphic calling, and so on, I know what's going on just by looking at it. When I see recursive code, my thought process is usually 'wtf is this?' followed by 'oh it's recursive' followed by 'I guess it must work, if they say it does.' So do you have any tips/plans/resources for building up your skills in this area? Recursion is kind of a weird concept, so I'm thinking the way to tackle it may be equally weird and non-obvious.

    Read the article

  • What is this algorithm for converting strings into numbers called?

    - by CodexArcanum
    I've been doing some work in Parsec recently, and for my toy language I wanted multi-based fractional numbers to be expressible. After digging around in Parsec's source a bit, I found their implementation of a floating-point number parser, and copied it to make the needed modifications. So I understand what this code does, and vaguely why (I haven't worked out the math fully yet, but I think I get the gist). But where did it come from? This seems like a pretty clever way to turn strings into floats and ints, is there a name for this algorithm? Or is it just something basic that's a hole in my knowledge? Did the folks behind Parsec devise it? Here's the code, first for integers: number' :: Integer -> Parser Integer number' base = do { digits <- many1 ( oneOf ( sigilRange base )) ; let n = foldl (\x d -> base * x + toInteger (convertDigit base d)) 0 digits ; seq n (return n) } So the basic idea here is that digits contains the string representing the whole number part, ie "192". The foldl converts each digit individually into a number, then adds that to the running total multiplied by the base, which means that by the end each digit has been multiplied by the correct factor (in aggregate) to position it. The fractional part is even more interesting: fraction' :: Integer -> Parser Double fraction' base = do { digits <- many1 ( oneOf ( sigilRange base )) ; let base' = fromIntegral base ; let f = foldr (\d x -> (x + fromIntegral (convertDigit base d))/base') 0.0 digits ; seq f (return f) Same general idea, but now a foldr and using repeated division. I don't quite understand why you add first and then divide for the fraction, but multiply first then add for the whole. I know it works, just haven't sorted out why. Anyway, I feel dumb not working it out myself, it's very simple and clever looking at it. Is there a name for this algorithm? Maybe the imperative version using a loop would be more familiar?
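
    To make the two folds concrete, here is a minimal, self-contained sketch with the Parsec plumbing stripped out. digitValue is an assumed stand-in for the convertDigit helper mentioned above (implemented here with Data.Char.digitToInt, so it covers decimal and hex digits); it illustrates only the arithmetic, not Parsec's actual API:

        -- Minimal sketch, not Parsec code: digitValue is a hypothetical stand-in
        -- for convertDigit, backed by Data.Char.digitToInt.
        import Data.Char (digitToInt)

        digitValue :: Char -> Integer
        digitValue = toInteger . digitToInt

        -- Whole part: left fold, multiply then add, so each earlier digit is
        -- scaled up once for every digit that follows it.
        -- "192" in base 10: ((0*10 + 1)*10 + 9)*10 + 2 = 192
        wholePart :: Integer -> String -> Integer
        wholePart base = foldl (\x d -> base * x + digitValue d) 0

        -- Fractional part: right fold, add then divide, so each digit is divided
        -- once for every position it sits to the right of the point.
        -- "25" in base 10: (2 + (5 + 0)/10)/10 = 0.25
        fractionPart :: Integer -> String -> Double
        fractionPart base = foldr (\d x -> (x + fromIntegral (digitValue d)) / base') 0.0
          where base' = fromIntegral base

        main :: IO ()
        main = do
          print (wholePart 10 "192")     -- 192
          print (fractionPart 10 "25")   -- 0.25
          print (wholePart 2 "1011")     -- 11
          print (fractionPart 2 "11")    -- 0.75

    Seen this way, the whole-number fold is repeated multiply-and-add working from the leftmost digit, and the fractional fold is its mirror image, repeatedly adding and dividing while working back from the rightmost digit.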

    Read the article

  • IO::Pipe - close(<handle>) does not set $?

    - by danboo
    My understanding is that closing the handle for an IO::Pipe object should be done with the method ($fh->close) and not the built-in (close($fh)). The other day I goofed and used the built-in out of habit on a IO::Pipe object that was opened to a command that I expected to fail. I was surprised when $? was zero, and my error checking wasn't triggered. I realized my mistake. If I use the built-in, IO:Pipe can't perform the waitpid() and can't set $?. But what I was surprised by was that perl seemed to still close the pipe without setting $? via the core. I worked up a little test script to show what I mean: use 5.012; use warnings; use IO::Pipe; say 'init pipes:'; pipes(); my $fh = IO::Pipe->reader(q(false)); say 'post open pipes:'; pipes(); say 'return: ' . $fh->close; #say 'return: ' . close($fh); say 'status: ' . $?; say q(); say 'post close pipes:'; pipes(); sub pipes { for my $fd ( glob("/proc/self/fd/*") ) { say readlink($fd) if -p $fd; } say q(); } When using the method it shows the pipe being gone after the close and $? is set as I expected: init pipes: post open pipes: pipe:[992006] return: 1 status: 256 post close pipes: And, when using the built-in it also appears to close the pipe, but does not set $?: init pipes: post open pipes: pipe:[952618] return: 1 status: 0 post close pipes: It seems odd to me that the built-in results in the pipe closure, but doesn't set $?. Can anyone help explain the discrepancy? Thanks!

    Read the article

  • Benchmarking hosting providers IO with Bonnie

    - by Derek Organ
    Ok, because of a bunch of projects I'm working on I've access to dedicated Servers on a 3 hosting providers. As an experiment and for educational purposes I decided to see if I could benchmark how good the IO is with each. Bit of research lead me to Bonnie++ So I installed it on the server and ran this simple command /usr/sbin/bonnie -d /tmp/foo The 3 machines in different hosting providers are all dedicated machines, one is a VPS, other two are on some cloud platform e.g. VMWare / Xen using some kind of clustered SAN for storage This might be a naive thing to do but here are the results I found. HOST A Version 1.03c ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP xxxxxxxxxxxxxxxx 1G 45081 88 56244 14 19167 4 20965 40 67110 6 67.2 0 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 15264 28 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ xxxxxxxx,1G,45081,88,56244,14,19167,4,20965,40,67110,6,67.2,0,16,15264,28,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++ HOST B Version 1.03d ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP xxxxxxxxxxxx 4G 43070 91 64510 15 19092 0 29276 47 39169 0 448.2 0 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 24799 52 +++++ +++ +++++ +++ 25443 54 +++++ +++ +++++ +++ xxxxxxx,4G,43070,91,64510,15,19092,0,29276,47,39169,0,448.2,0,16,24799,52,+++++,+++,+++++,+++,25443,54,+++++,+++,+++++,+++ HOST C Version 1.03c ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP xxxxxxxxxxxxx 1536M 15598 22 85698 13 258969 20 16194 22 723655 21 +++++ +++ ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 14142 22 +++++ +++ 18621 22 13544 22 +++++ +++ 17363 21 xxxxxxxx,1536M,15598,22,85698,13,258969,20,16194,22,723655,21,+++++,+++,16,14142,22,+++++,+++,18621,22,13544,22,+++++,+++,17363,21 Ok, so first off what is the best way to read the figures and are there any issues with really comparing these numbers? Is this in any way a true representation of IO Speed? If not is there any way for me to test that? Note: these 3 machines are using either Ubuntu or Debian (I presume that doesn't really matter)

    Read the article

  • CPU, Memory, Network, IO resources are under utilized when I tried various JMeter load testing

    - by Jaiganesh
    CPU, memory, network, and IO resources are underutilized when I run various JMeter load tests. I have given the details below. Hardware: 1 Core with 2 GB RAM OS: Ubuntu 12.04 LTS Server Edition Application: PHP (using jQuery, Ajax) JMeter Parameters: 10, 20, 30, 40 Hits per minute 220 Test Cases per hit 2.03 MB per hit I am not clear why these resources are underutilized. Please help me to resolve this.

    Read the article

  • Diagnostic high load sys cpu - low io

    - by incous
    A Linux server running Ubuntu 12.04 LTS with LAMP has been behaving strangely since last week: CPU %sys is higher than before, nearly equal to %usr (before that, %sys was small compared with %usr), and IO has dropped to a half or a third of what it was the week before. I tried to diagnose the process/CPU with some commands (top/vmstat/mpstat/sar), and it seems the timer/resched interrupts may be a bit high. I don't know what that means; I'm now open to any suggestion.

    Read the article

  • java.io.FileNotFoundException: /target/test.log

    - by sword101
    Greetings all I am using Apache Camel and Apache CXF in this example: http://camel.apache.org/better-jms-transport-for-cxf-webservice-using-apache-camel.data/cxfcamelexample.zip I followed the readme and when tried to run the client & server classes i got this exception: log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: /target/test.log (No such file or directory) at java.io.FileOutputStream.openAppend(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:177) at java.io.FileOutputStream.<init>(FileOutputStream.java:102) at org.apache.log4j.FileAppender.setFile(FileAppender.java:289) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:163) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:256) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:132) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:96) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:654) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:612) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:509) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:415) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:441) at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:470) at org.apache.log4j.LogManager.<clinit>(LogManager.java:122) at org.apache.log4j.Logger.getLogger(Logger.java:104) at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:283) at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1040) at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:838) at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:601) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:333) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:307) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:645) at org.springframework.context.support.AbstractApplicationContext.<init>(AbstractApplicationContext.java:146) at org.springframework.context.support.AbstractRefreshableApplicationContext.<init>(AbstractRefreshableApplicationContext.java:84) at org.springframework.context.support.AbstractRefreshableConfigApplicationContext.<init>(AbstractRefreshableConfigApplicationContext.java:59) at org.springframework.context.support.AbstractXmlApplicationContext.<init>(AbstractXmlApplicationContext.java:58) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:136) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93) at com.example.customerservice.impl.CustomerServiceClient.main(CustomerServiceClient.java:34) so any ideas, how to solve this exception ?

    Read the article

  • Ruby File IO question; Maintain file read position between script executions

    - by macek
    I have two files a.txt and b.txt (henceforth a and b). My script iterates through a, does some operation, and potentially inserts a line into b. In the event the script stops, I need it to pick up where it left off. In the example below: foo was copied to b bar was copied to b zim was not copied to b (did not pass some criteria) gaz was copied to b Script stops (for whatever reason) When the script starts again, how do I open a and start on line "dib"? a.txt foo bar zim gaz // <= last successful copy dib // <= I want to start here on next script execution gir b.txt foo bar gaz // <= note omission of "zim" above gaz

    Read the article

  • Java IO (javase 6)- Help me understand the effects of my sample use of Streams and Writers...

    - by Daddy Warbox
    BufferedWriter out = new BufferedWriter( new OutputStreamWriter( new BufferedOutputStream( new FileOutputStream("out.txt") ) ) ); So let me see if I understand this: A byte output stream is opened for file "out.txt". It is then fed to a buffered output stream to make file operations faster. The buffered stream is fed to an output stream writer to bridge from bytes to characters. Finally, this writer is fed to a buffered writer... which adds another layer of buffering? Hmm...

    Read the article

  • What is the optimal number of threads for performing IO operations in java?

    - by marc
    In Goetz's "Java Concurrency in Practice", in a footnote on page 101, he writes "For computational problems like this that do no I/O and access no shared data, Ncpu or Ncpu+1 threads yield optimal throughput; more threads do not help, and may in fact degrade performance..." My question is, when performing I/O operations such as file writing, file reading, file deleting, etc., are there guidelines for the number of threads to use to achieve maximum performance? I understand this will be just a guide number, since disk speeds and a host of other factors play into this. Still, I'm wondering: can 20 threads write 1000 separate files to disk faster than 4 threads can on a 4-CPU machine?

    Read the article

  • Why is my emit not getting called?

    - by cRaZiRiCaN
    The client and server connect just fine. For some reason the emit on my client is not firing correctly. I am trying to get the testEmit and testEmit2 working. This is my server: express = require 'express' mongo = require 'mongodb' app = express() server = (require 'http').createServer(app) io = (require 'socket.io').listen(server) server.listen(8080) app.use(express.static(__dirname + '/public')) # db = new mongo.Db("documentsdb", new mongo.Server("localhost", 27017, auto_reconnect: true), {safe:true}) io.sockets.on 'connection', (socket) -> console.log 'Socket.io is connected!' #This returns an array of documents sorted via date by decreasing order. (Most recent documents first.) socket.on 'loadRecentDocuments', -> console.log 'Loading most recent documents.' db.collection 'documents', (err, collection) -> collection.find().sort(dateAdded: -1).toArray (err, documents) -> #This emit is recieved at index.html where a javascript function sendDocuments manages the documents. socket.emit 'sendDocuments', documents return #The index.html provides the code data from the search box via a javascript. io.sockets.on 'findDocuments', (code) -> #Returns an array of documents with the corresponding class code. documentCodeToSearch = code console.log 'Retreaving documents with code: ' + documentCodeToSearch db.collection 'documents', (err, collection) -> collection.find(code:documentCodeToSearch).toArray (err, documents) -> socket.emit 'sendDocuments', documents return #Uploads a document to the server. documentData is sent via javascript from submit.html io.sockets.on 'addDocument', (documentData) -> console.log 'Adding document: ' + documentData db.collection 'documents', (err, collection) -> collection.insert documentData, safe: true return #Test socket.io io.sockets.on 'testEmit', -> console.log('Emit recieved.') socket.emit 'testEmit2', 'caca' return app.listen 1337 console.log "Listening on port 1337..." This is my client: <!doctype HTML> <html> <head> <title>ProjectShare</title> <script src="http://localhost:8080/socket.io/socket.io.js"></script> <script src = "http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script> <script> //Make sure DOM is ready before mucking around. $(document).ready(function() { console.log('jQuery entered!'); var socket = io.connect('http://localhost:8080'); socket.emit('testEmit'); socket.on('testEmit2', function(data) { console.log('Emit recieved at browser.'); console.log(data); }); console.log('jQuery exit.'); }); </script> </head> <body> <ol> <li><a href="index.html">ProjectShare</a></li> <li><a href="guidelines.html">Guidelines</a></li> <li><a href="upload.html">Upload</a></li> <li> <form> <input type = "search" placeholder = "enter class code"/> <input type = "submit" value = "Go"/> </form> </li> </ol> <ol id = "documentList"> </ol> </body> </html>

    Read the article

  • java.io.IOException: Invalid argument

    - by Luixv
    Hi, I have a web application running in cluster mode with a load balancer. It consists of two Tomcats (T1 and T2) addressing only one DB. T2 is NFS-mounted to T1; this is the only difference between the two nodes. I have a Java method generating some files. If the request runs on T1 there is no problem, but if the request runs on node 2 I get an exception as follows: java.io.IOException: Invalid argument at java.io.FileOutputStream.close0(Native Method) at java.io.FileOutputStream.close(FileOutputStream.java:279) The corresponding code is as follows: for (int i = 0; i < dataFileList.size(); i++) { outputFileName = outputFolder + fileNameList.get(i); FileOutputStream fileOut = new FileOutputStream(outputFileName); fileOut.write(dataFileList.get(i), 0, dataFileList.get(i).length); fileOut.flush(); fileOut.close(); } The exception appears at the fileOut.close(). Any hint? Luis

    Read the article
