Search Results

Search found 9286 results on 372 pages for 'transfer speed'.

Page 36/372 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Still about SSD potentials...write and read speed

    - by Macroideal
    I have been working with SSDs (solid state disks) for several months, and since I am still new to them, problems and questions keep hitting me unexpectedly. Lately I have been testing SSD read and write speed, something I care a lot about, but the results were worse than I expected. My test covered three kinds of read/write: (1) reading and writing the SSD directly, opening it as a whole device (in Windows: _open("\\:g", ***)); this is tricky because you must write data whose size is a multiple of 512 bytes at a disk offset that is also a multiple of 512 bytes, so even if you only want to write one or four bytes you have to write at least a whole sector at a time; (2) reading and writing files located on the SSD; (3) reading and writing files on a mechanical disk. Comparing these, I found the SSD performed worse than the mechanical disk. So I am wondering how to get at the potential performance of the SSD, since SSDs are said to be the future replacement for mechanical disks. Oddly, when I test the SSD with a professional hard-disk benchmark tool, it comes out roughly twice as fast as the mechanical disk. Why?
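
    To see how much the transfer size alone affects the measured number, here is a minimal, hypothetical Python sketch that times sequential writes through a normal file at several block sizes (it does not open the raw device; the path and sizes are assumptions, so adjust them). Running it once against a file on the SSD and once against a file on the mechanical disk should show throughput collapsing at the small block sizes on both drives.

        import os, time

        TEST_FILE = r"G:\ssd_speed_test.bin"   # assumption: G: is the SSD; use another path for the HDD run
        TOTAL_BYTES = 64 * 1024 * 1024         # 64 MB per run

        def write_speed(path, block_size):
            """Sequential write throughput in MB/s for a given block size."""
            buf = os.urandom(block_size)
            start = time.perf_counter()
            with open(path, "wb", buffering=0) as f:
                written = 0
                while written < TOTAL_BYTES:
                    f.write(buf)
                    written += block_size
                os.fsync(f.fileno())           # make sure the data actually reaches the disk
            return TOTAL_BYTES / (time.perf_counter() - start) / (1024 * 1024)

        for bs in (512, 4096, 64 * 1024, 1024 * 1024):
            print(f"block size {bs:>8}: {write_speed(TEST_FILE, bs):8.1f} MB/s")
        os.remove(TEST_FILE)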

    Read the article

  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine target, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro with a direct Ethernet cable, and an external hard drive (a laptop drive) is connected to the Raspberry Pi over USB. However, backups are pretty slow. Activity Monitor shows the network transferring a very steady 5 Mb/s, whereas my Time Capsule transfers up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi itself reports (top) that its CPU is only half used, split roughly equally between afpd, usb-storage and jbd2/sda1-8. So the processing power of the Raspberry Pi does not seem to be the problem here. To me, this looks like some bottleneck that maxes out at 5 Mb/s, which may mean my backups run slower than they could. As far as I can tell, the culprit could be the AFP daemon, the USB bus or the external hard drive. So my question is: how can I identify the true culprit, and what can I do about it?
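
    One way to split the problem in half is to measure the USB disk on its own, with the network out of the picture. The sketch below is a rough, hypothetical Python read test to run on the Pi itself against a large file on the external drive (the path is an assumption); if the local read rate is well above the 5 Mb/s you see over the network, the disk and USB bus are probably not the limit and suspicion shifts to afpd or the network path. Use a file larger than the Pi's RAM, or drop the page cache first, so cached reads don't skew the number.

        import time

        BIG_FILE = "/mnt/timemachine/large_backup_band"   # hypothetical path on the USB drive
        CHUNK = 1024 * 1024                               # read in 1 MB chunks

        read_bytes = 0
        start = time.perf_counter()
        with open(BIG_FILE, "rb") as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                read_bytes += len(chunk)
        elapsed = time.perf_counter() - start
        print(f"read {read_bytes / 1e6:.1f} MB in {elapsed:.1f} s "
              f"= {read_bytes / 1e6 / elapsed:.1f} MB/s")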

    Read the article

  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have a HP DL185 G6 Server (12 disk model) with the following spec: Quad Core Xeon 2.27GHz, 6GB RAM, HP P212 RAID controller with battery backup, 2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system), 4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space). The operating system is Ubuntu Server 9.10. Both logical drives have been formatted as EXT4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below: sudo hdparm -tT /dev/cciss/c0d1p1 /dev/cciss/c0d1p1: Timing cached reads: 15284 MB in 2.00 seconds = 7650.18 MB/sec Timing buffered disk reads: 74 MB in 3.02 seconds = 24.53 MB/sec For info, the RAID-1 array performs as follows: sudo hdparm -tT /dev/cciss/c0d0p1 /dev/cciss/c0d0p1: Timing cached reads: 15652 MB in 2.00 seconds = 7834.26 MB/sec Timing buffered disk reads: 492 MB in 3.01 seconds = 163.46 MB/sec We thought this was because, with no battery, the read/write cache is disabled. We have since bought and installed the battery backup and used the HP bootable CD to set the cache to 50% read / 50% write and to check that the cache is enabled on the drives and the controller. Is there something I'm missing?

    Read the article

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SaaS application customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular when UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for every other client. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine. A speed test shows that they should get 1.2 Mbps up, so a 3 MB file should take about 20 seconds to upload. It takes 60+ seconds on their network, and sometimes even small files take OVER 10 minutes to upload or they time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SaaS applications they use let them upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the clients to begin ruling things out? It seems like we need to measure the LATENCY of the networks involved, perhaps even at the switch level, and to understand whether packets are getting dropped somewhere and why. Where should I start? Any help is appreciated; I'll provide more info upon request.
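
    A simple first data point is the effective upload throughput measured from the client side, end to end, against the same server. The sketch below is a hypothetical Python probe (the URL is an assumption; any endpoint on the application server that accepts a POST will do) that uploads a 3 MB payload and reports the achieved rate; run it from several of the client's locations and compare the result with the 1.2 Mbps speed-test figure to see whether the shortfall is in the path to your server or in the form-upload code itself.

        import time
        import urllib.request

        URL = "https://app.example.com/upload-test"   # hypothetical endpoint on the SaaS server
        PAYLOAD = b"x" * (3 * 1024 * 1024)            # 3 MB, same size as the slow uploads

        req = urllib.request.Request(URL, data=PAYLOAD,
                                     headers={"Content-Type": "application/octet-stream"})
        start = time.perf_counter()
        with urllib.request.urlopen(req, timeout=600) as resp:
            resp.read()
        elapsed = time.perf_counter() - start

        mbits = len(PAYLOAD) * 8 / 1e6
        print(f"uploaded {len(PAYLOAD) / 1e6:.1f} MB in {elapsed:.1f} s "
              f"= {mbits / elapsed:.2f} Mbit/s effective")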

    Read the article

  • Bonnie does not provide speed for Sequential Input / Block

    - by Lqp1
    I'm using Proxmox VE and I would like to run some benchmarks regarding the performance of this product. One of these benchmarks is bonnie++; it runs very well in a VM (qemu-kvm), but when I run it in a container (OpenVZ) it does not report a reading speed (only writing). I don't understand why. Does anyone know what's happening? The VMs and containers are Debian 7.4. Here's the output of bonnie in the container: root@ct2:/# bonnie++ -u root Using uid:0, gid:0. Writing a byte at a time...done Writing intelligently...done Rewriting...done Reading a byte at a time...done Reading intelligently...done start 'em...done...done...done...done...done... Create files in sequential order...done. Stat files in sequential order...done. Delete files in sequential order...done. Create files in random order...done. Stat files in random order...done. Delete files in random order...done. Version 1.96 ------Sequential Output------ --Sequential Input- --Random- Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP ct2 1G 843 99 59116 8 60351 4 4966 99 +++++ +++ 2745 8 Latency 9558us 3582ms 527ms 1672us 936us 5248us Version 1.96 ------Sequential Create------ --------Random Create-------- ct2 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ Latency 19567us 358us 368us 107us 59us 25us 1.96,1.96,ct2,1,1401810323,1G,,843,99,59116,8,60351,4,4966,99,+++++,+++,2745,8,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,9558us,3582ms,527ms,1672us,936us,5248us,19567us,358us,368us,107us,59us,25us The filesystem for / is of type "simfs", which is a pseudo filesystem for OpenVZ. Maybe that is related to the problem, but I can't find anyone else reporting the same issue with bonnie and OpenVZ... Thanks for your help. Regards, Thomas.

    Read the article

  • i7-980X at 70% speed

    - by Buxley
    Hi, we bought a nice computer to use for solving optimization problems: an Intel i7-980X @ 3.33 GHz with 12 GB of Team Group 1600 MHz DDR3 RAM. When we use Gurobi, the computer uses all 12 cores at maximum at the beginning of the solve. However, after a while (about 8 hours) all cores jump between 65 and 85%. When I solve the same models on an i7-930, all cores stay near 100% even after longer solution times. We first thought the hard disk was the bottleneck, since Gurobi writes out node files once the memory limit is reached. However, since the new computer has 12 GB of RAM, we set the memory limit to 7 GB so the solver only used RAM, and the processor still behaved the same. Any ideas about the bottleneck? As I said, it works at 100% for the first hours or so. Thanks very much for any answers! Our plan was to overclock it, but we can't even get it to work at normal speed yet!

    Read the article

  • Windows Server 2003 speed issues

    - by farzinSH
    I have an HP server running Windows Server 2003 and 50 Windows XP clients. For the past week and a half, the network speed has suddenly dropped 2-3 times per day. It gets so slow that none of the clients can work with the HIS program installed on them. We have tried many different things, such as replacing the hubs, switches and even some cables. Each time, one of these changes solves the problem and the network goes back to its normal state. I checked everything. Even when I disconnected all the clients from the server and connected it to just one computer, the problem still remained for 2 hours. I have narrowed the problem down to a few likely suspects: viruses? (An updated Kaspersky running on the server shows none.) Server hardware failure? Physical memory usage on the server? (The last time the problem occurred, none of the changes above solved the issue, so I restarted the server and checked the physical memory usage, which was 2 GB. But I noticed it increases over time to over 9 GB; the server has 16 GB of RAM.) I have searched the internet and found nothing. Any help would be much appreciated. Thanks in advance.

    Read the article

  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following: root@foo:/etc/bind# dig @1.2.3.4 foo.example.com ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;;foo.example.com. IN A ;; Query time: 0 msec ;; SERVER: 1.2.3.4#53(1.2.3.4) ;; WHEN: Thu Apr 1 09:57:59 2010 ;; MSG SIZE rcvd: 31 Some background on the fictitious "1.2.3.4": it's a slave name server in my nameserver "farm". Technically I have ns1 (the master) and ns2/ns3. Currently ns1/ns2 are down for maintenance, so I left ns3 serving live traffic; that's the point, DNS is supposed to be resilient. Now the odd part is, "1.2.3.4" had been serving requests for example.com just fine for the last 4-5 days. This morning I got a phone call that it's non-responsive. After investigating, I see the message above: SERVFAIL. I looked into the zone file and saw the following: example.com IN SOA ns1.example.com. hostmaster.mail.example.com. ( I wondered whether at this point the nameserver thought it was not authoritative over example.com, so I adjusted it to the following: example.com IN SOA ns3.example.com. hostmaster.mail.example.com. ( After that, it started responding again for all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3? Can someone please explain why this happened and how to prevent it from happening in the future? I've never had a similar problem, and because I don't understand it well, I might be missing some critical information in this question, so please let me know if I can add any detail to make things clearer. One more thing to note: I have other domains that I'm authoritative for whose SOA still says ns1.example.com. and not ns3.example.com. Those domains are serving requests just fine! Is it a matter of time before they stop responding as well and I have to change their SOA to ns3.example.com? Is that also only required because ns1 and ns2 are currently offline?

    Read the article

  • What are all the optimization tricks that you know for ASP.NET code?

    - by Aristos
    After a lot of ASP.NET programming, I discovered the very big speed difference between string and StringBuilder. I know this is common knowledge, but I mention it as a starting point. The second thing I have found to speed up code is to use const, not static, to declare my configuration constant values (especially the strings). With const, the compiler does not create a new object but simply places the value at the point where you ask for it, while a static declaration creates a new object and keeps it in memory. My third trick: when I search for strings, I use hash values rather than the strings themselves. For example, if I need a List<string> SomeValues holding strings that I will need to search, I prefer a List<int> SomeHashValue and use the hash value to locate the strings (see the sketch below). My fourth thought was whether it is better to put big strings on one line, or to split them across several lines with the + symbol so they are easier to read. I ran some tests and saw that the compiler does a good job if you split a string across many lines using the + symbol. What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? I know that sometimes, to make something run faster, you need more memory or more cache. My priority is speed, because speed counts.
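
    The hashed-lookup point can be illustrated with a small, hypothetical timing sketch (written in Python rather than C# purely for brevity; the names and sizes are made up). Membership tests against a hash-based container avoid comparing every stored string, which is the same effect the List<int> of hash codes is after:

        import timeit

        # 100,000 made-up configuration strings to search through.
        values = [f"config-value-{i}" for i in range(100_000)]
        needle = "config-value-99999"

        as_list = values          # membership test scans the list and compares strings
        as_set = set(values)      # membership test uses the precomputed hash

        print("list lookup:", timeit.timeit(lambda: needle in as_list, number=1_000))
        print("set lookup :", timeit.timeit(lambda: needle in as_set, number=1_000))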

    Read the article

  • SSIS - SharePoint list data transfer issue

    - by Vicky
    Hi, we are trying to transfer data from an Oracle database (only about 60,0000 records) to a SharePoint list using SSIS, but we get the following error when the record count reaches around 19000: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020 and System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (400) Bad Request. Earlier we thought it could be because of a SharePoint list limit, so we tried removing two of the columns, and then it went fine. That leaves us with one column, of data type DT_STR and length 400 in Oracle, which might be causing the issue; it is mapped to a SharePoint custom list field of multiline type. We also checked whether the field length is the issue, but in the Oracle DB the maximum length of this column across all records is only 239, so length is ruled out as well. If anyone has faced this kind of issue or knows its cause, kindly let us know. Thanks and regards, Vicky

    Read the article

  • Alternatives to GameKit for iPhone to iPhone transfer

    - by Mike
    I am needing to transfer information as an NSData object from one iPhone to another inside an application and was originally planning on using Bluetooth/GameKit, but the data looks like it will be in the neighborhood of 500KB - 1MB in size. I don't think this is going to fly with GameKit (Bluetooth) as it will take forever and it seems like people are having all kinds of issues with GK anyway. I'd prefer the users to not have to be on the same WiFi network to make it work but am running out of options and time so I'd probably be happy with whatever works. What is a reasonable alternative? The simpler the setup the better or something with sample code would be greatly appreciated and preferred. Thanks.

    Read the article

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient with compression. I did a few tests on different filetypes and sizes comparing the 7-Zip and WinRAR default settings on their normal compression and their best compression, and in a lot of cases WinRAR was 50% faster and in some it was actually 100% faster. But, I do like FOSS more. So here are my questions: Is there a way to make 7-Zip speed up? I'd like it to at least be on par with WinRAR's speed Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command line thing. I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare minimum compression. If it matters, I use a quad core Intel i7 720 (1.6 GHz)/(2.8 GHz) with 4 GB DDR3 RAM, and the 64-bit version of 7-Zip, and dual-boot Debian x64 5.0.4 and Windows 7 Home.

    Read the article

  • Can't transfer list<T> to web service?

    - by iTayb
    I have the same classes on my server and on my web service. I have the following WebMethod: [WebMethod] public int CreateOrder(List<Purchase> p, string username) { o.Add(new Order(p,username)); return o.Count; } However the following code, run at server: protected void CartRepeater_ItemCommand(object source, RepeaterCommandEventArgs e) { List<Purchase> l = ((List<Purchase>)Session["Cart"]); if (e.CommandName == "Order") { localhost.ValidateService WS = new localhost.ValidateService(); WS.CreateOrder(l, Session["username"].ToString()); } } gives the following error: Argument '1': cannot convert from 'System.Collections.Generic.List<Purchase>' to 'localhost.Purchase[]'. How can I transfer the list<Purchase> object to the web service? Thank you very much.

    Read the article

  • Tomcat gzip while chunked issue

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to the HTTP response headers it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response: HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Content-Type: text/xml;charset=utf-8 Transfer-Encoding: chunked Content-Encoding: gzip Date: Tue, 30 Mar 2010 06:13:52 GMT The problem is that when I ask the server to send a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk arrived, but after ungzipping I see that the body is partial. I have never seen this behavior with gzip turned off in the request headers. So my question is: is this a common Tomcat issue, maybe in one of its compression modules? Or could it be some kind of proxy issue? I can't say which Tomcat version or which gzip module they use, but feel free to ask; I'll try to find out from my service provider. Thanks.
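
    One way to narrow this down is to separate the chunked framing from the gzip layer on the client side. The hypothetical Python sketch below (host and path are made up) lets the HTTP library reassemble the chunked stream and only then decompresses the complete body; if the chunked framing finishes cleanly with its final zero-length chunk but gzip decompression still fails or yields a truncated document, the server or an intermediate proxy really is producing a short gzip stream, rather than the client cutting the read short.

        import gzip
        import http.client

        conn = http.client.HTTPConnection("data.example.com")   # hypothetical service host
        conn.request("GET", "/service", headers={"Accept-Encoding": "gzip"})
        resp = conn.getresponse()

        # http.client reassembles the chunked stream; read() returns the complete
        # entity body only after the terminating zero-length chunk is seen.
        body = resp.read()

        if resp.getheader("Content-Encoding") == "gzip":
            body = gzip.decompress(body)   # decompress only once the body is complete
        print(len(body), "bytes after decompression")
        conn.close()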

    Read the article

  • Transfer files over FTP with FtpWebRequest

    - by Dejan.S
    Hi. Currently I'm working on an FTP transfer based on http://msdn.microsoft.com/en-us/library/ms229715.aspx. I have set up IIS on my server computer to host an FTP site on port 21, and transferring files works great. But I want to add FTP to a hosted site I have on the same server, and that is where I run into connection problems: when I try to connect through the command prompt I get an unknown host error. I have changed the port and opened it up in the firewall. And even if I could connect, how do I decide which folder I want to upload to?
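
    On the folder question: with FtpWebRequest the target directory is normally part of the request URI (for example ftp://host/somefolder/file.ext). The same idea in a short, hypothetical Python sketch using ftplib (host, credentials and paths are assumptions) looks like this: connect, log in, change into the destination directory, then store the file.

        import ftplib

        HOST = "ftp.example.com"     # hypothetical host for the hosted site
        USER = "deploy"
        PASSWORD = "secret"

        with ftplib.FTP() as ftp:
            ftp.connect(HOST, 21, timeout=30)   # explicit port, matching the IIS FTP site
            ftp.login(USER, PASSWORD)
            ftp.cwd("/wwwroot/uploads")         # choose the destination folder here
            with open("report.pdf", "rb") as f:
                ftp.storbinary("STOR report.pdf", f)
            print(ftp.nlst())                   # list the folder to confirm the upload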

    Read the article

  • Apache deflate with chunked encoding

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to the HTTP response headers it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response: HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Content-Type: text/xml;charset=utf-8 Transfer-Encoding: chunked Date: Tue, 30 Mar 2010 06:13:52 GMT The problem is that when I ask the server to send a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk arrived, but after ungzipping I see that the body is partial. So my question is: is this a common Apache issue, maybe with mod_deflate or a similar module? Ask questions if you need more info. Thanks.

    Read the article

  • Software way to cool down an old MacBook Pro

    - by notMacBookProSuperUser
    Hi all, first a little background: I've got lots of computers, including Linux PCs, two MacBook Pros and a Mac mini. My concern is with my 'old' MacBook Pro (Core Duo). It really does overheat, and the warranty is long void. Years ago (I'd say about 2.5 years) it overheated so badly one day that the battery inflated from the heat. I got a new battery for free, but the machine still gets incredibly hot, much hotter than any other computer I've got; my newer Core 2 Duo MacBook Pro doesn't get nearly as hot as the old one. It's really a pain because I use the old MBP on my lap in front of the TV, and it can become unbearable. I don't want to open up the old MBP. On Linux I can force a CPU 'governor' that decides how the CPU is allowed to operate: it can be 'on demand', 'always max speed', 'always speed x', etc. Does the same exist under Mac OS X? Is there a way, say if a 1.86 GHz Core Duo can run at 1.6 GHz, to tell Mac OS X: "never run this CPU above 1.6 GHz"?

    Read the article

  • Memory modules not running at rated speeds.

    - by Wesley
    Hi all, I'm having some odd memory issues with my build. Here are my specs right now: QDI Superb 4 motherboard Intel Pentium 4 Northwood 2.4GHz (512KB L2, 533MHz FSB) 3x 256MB PC2100 DDR266 RAM 16MB NVIDIA TNT2 Pro AGP Seagate 80GB IDE HDD Generic USB 2.0 PCI Generic Modem PCI Bestec 250W PSU To be even more specific, here are the current brands and models of each module: Kingston KVR266X64C25/256 Samsung PC2100U-25331-Z SMART SM5643285D4N0CHM0H Supposedly, they are all PC2100 266MHz modules with a latency of 2.5. Looking in Speccy, the Kingston module is somehow running at a speed of PC2300 ~284MHz. I've never overclocked RAM at all as I don't know how to. However, when I first started the computer, I had the SMART module in place first and then reset the BIOS settings, including the integrated overclocking options. However, this still doesn't explain why the Kingston module runs at a higher speed than the SMART and Samsung module. Why is it like this? On a side note, where could I find the motherboard manual for the QDI Superb 4? Thanks in advance.

    Read the article

  • Programmatically determining maximum transfer rate

    - by dauphic
    I have a problem that requires me to calculate the maximum upload and download rates available, then limit my program's usage to a percentage of them. However, I can't think of a good way to find the maximums. At the moment, the only solution I can come up with is transferring a few megabytes between the client and the server and measuring how long the transfer took. That approach is very undesirable, however, because with 100,000 clients it could add too much to our server's bandwidth usage (which is already too high). Does anyone have a solution to this problem?
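
    For reference, a measure-then-throttle loop is short to write; the question is really how often and how heavily to probe. Below is a hypothetical Python sketch (the probe URL, payload size and 50% target are all assumptions): it times one small transfer to estimate the rate, then paces subsequent transfers with sleeps so the average stays under the chosen fraction of that estimate. With many clients you would want the probe to be small, infrequent and ideally piggybacked on traffic you already send.

        import time
        import urllib.request

        PROBE_URL = "https://files.example.com/probe-2mb.bin"   # hypothetical ~2 MB file on your server
        TARGET_FRACTION = 0.5                                   # use at most 50% of the measured rate

        # 1. Measure: time one small transfer and take its average rate as the estimate.
        start = time.perf_counter()
        with urllib.request.urlopen(PROBE_URL, timeout=60) as resp:
            probe_bytes = len(resp.read())
        measured_bps = probe_bytes / (time.perf_counter() - start)
        budget_bps = measured_bps * TARGET_FRACTION
        print(f"measured {measured_bps / 1e6:.2f} MB/s, capping at {budget_bps / 1e6:.2f} MB/s")

        # 2. Limit: sleep between chunks so the average rate stays under the budget.
        def throttled_copy(src, dst, budget_bps, chunk=64 * 1024):
            sent = 0
            t0 = time.perf_counter()
            while True:
                data = src.read(chunk)
                if not data:
                    break
                dst.write(data)
                sent += len(data)
                expected = sent / budget_bps                 # seconds this much data should take
                ahead = expected - (time.perf_counter() - t0)
                if ahead > 0:
                    time.sleep(ahead)                        # ahead of budget, so wait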

    Read the article

  • DatabaseName.bak File Transfer Problem

    - by Jordon
    I downloaded a databasename.bak file from my hosting company. When I try to restore that DB file in SQL Server 2008, it keeps giving me the following error: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) This matches the description in the following link: http://www.sqlcoffee.com/Troubleshooting047.htm. When I later tried to restore the file directly on the server, it restored correctly, but when I transferred the same file using the FTP client FileZilla and tried to restore the downloaded copy, I got the above error. In other words, the file seems to be getting corrupted during the FTP transfer. Any idea how I can keep the file from getting corrupted? Note: the file was downloaded with FileZilla using the Binary transfer type. Thank you.
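
    A quick way to prove or rule out transfer corruption is to compare checksums of the file on the hosting server and of the downloaded copy. This is a minimal Python sketch, assuming you can run a script (or an equivalent checksum tool) on both ends; identical digests mean the FTP hop delivered the bytes intact and the 3241 error has some other cause, such as the backup having been taken on a newer SQL Server version than the one doing the restore.

        import hashlib

        def sha256_of(path, chunk=1024 * 1024):
            """Hash a file in chunks so large .bak files don't have to fit in memory."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    h.update(block)
            return h.hexdigest()

        # Run once against the copy on the hosting server and once against the
        # downloaded copy, then compare the two digests.
        print(sha256_of(r"C:\go4sharepoint_1384_8481.bak"))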

    Read the article

  • Java: best way to transfer images

    - by d.raev
    I have an application that reads a PDF, transforms the content into a collection of TIF files, and sends them to a GlassFish server for saving. Usually there are 1-5 pages and it works fine, but when I got an input file with 100+ pages it threw an error on the transfer: Java heap space at java.util.Arrays.copyOf(Arrays.java:2786) at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94) Adding more resources is not a good option in my case, so I'm looking for a way to optimize this somehow. I store the data in a HashMap<TifProfile, List<byte[]>>. Is there a better way to store or send them? EDIT: I did some tests and the final collection for a PDF with 80 pages is over 280 MB (240 TIFFs with different settings inside).
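
    The heap pressure comes from accumulating every page as a byte[] before anything is sent; streaming each TIF to the server as soon as it is produced keeps only one page in memory at a time. The Java change would be the analogous one (hand the server an InputStream per page instead of building the List<byte[]>); the hypothetical Python sketch below just illustrates the shape, with a made-up endpoint on the GlassFish side that accepts one page per request.

        import http.client
        import os

        HOST = "glassfish.example.com"   # hypothetical server
        PAGES_DIR = "out"                # directory holding the generated TIFFs

        def send_page(path):
            size = os.path.getsize(path)
            conn = http.client.HTTPConnection(HOST)
            with open(path, "rb") as f:
                # Passing the file object streams it in small blocks instead of
                # loading the whole page into memory first.
                conn.request("PUT", "/pages/" + os.path.basename(path), body=f,
                             headers={"Content-Length": str(size),
                                      "Content-Type": "image/tiff"})
            status = conn.getresponse().status
            conn.close()
            return status

        for name in sorted(os.listdir(PAGES_DIR)):
            send_page(os.path.join(PAGES_DIR, name))   # one page at a time, never the whole set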

    Read the article

  • Transfer of directory structure over network

    - by singh
    I am designing a remote CD/DVD burner to address hardware constraints on my machine. My design works like this: (analogous to a network printer) Unix-based machine (acts as server) hosts a burner. Windows-based machine acts as client. Client prepares data to burn and transfers it to the server. Server burns the data on CD/DVD. My question is: what is the best protocol to transfer data over the network (Keeping the same directory hierarchy) between different operating systems?
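
    Whatever transport ends up carrying the data (FTP, SFTP, HTTP, a plain socket), packing the prepared tree into a single archive sidesteps per-file hierarchy handling between Windows and Unix, because the relative paths travel inside the archive. A small Python sketch, with hypothetical paths on both sides:

        import tarfile

        SOURCE_DIR = r"C:\burn_jobs\job42"        # hypothetical prepared burn layout on the client
        ARCHIVE = r"C:\burn_jobs\job42.tar.gz"

        # On the Windows client: pack the tree, keeping paths relative to the job root.
        with tarfile.open(ARCHIVE, "w:gz") as tar:
            tar.add(SOURCE_DIR, arcname="job42")

        # On the Unix server, after the archive has been transferred:
        # with tarfile.open("/var/spool/burner/job42.tar.gz", "r:gz") as tar:
        #     tar.extractall("/var/spool/burner/job42_unpacked")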

    Read the article

  • ASP.NET: IHttpModule + m_context.Server.Transfer = session state error

    - by tinky05
    I have an IHttpModule that implements IRequiresSessionState. Session state is turned on in the page directive and I have also enabled it in web.config. In the module's OnBeginRequest method I call Server.Transfer, but I get the error: Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. When I access the page directly or via a Response.Redirect, there is no error. Any idea? Update: the error occurs in a page extending System.Web.UI.Page, in the InitializeCulture() method, which is an override.

    Read the article
