Search Results

Search found 9286 results on 372 pages for 'transfer speed'.


  • Using git to sync an existing file collection?

    - by chrish
    I've got a collection of files that formerly lived in a Subversion repo; on my new server I've imported them into a git repo so I could start getting more experience with it. On several other machines, I've got mostly up-to-date copies of the files from the existing svn repo. Is there any way to sync those machines to the new git repo while reusing the existing files, so I don't have to re-transfer all of the data? Is git smart enough that a fetch or checkout will notice the files are identical and not re-transfer them?
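
    Git identifies content by hash rather than by path or timestamp, which is what makes the "will it notice identical files" question meaningful. As an editorial sketch (not from the post), the snippet below computes the same blob id that git hash-object prints, so you can check whether a local copy is byte-identical to what the repo already stores:

        # editorial sketch: compute git's blob id for a file (pure Python)
        import hashlib
        import sys

        def git_blob_sha1(path):
            # a blob id is sha1 of the header "blob <size>\0" plus the raw bytes
            with open(path, "rb") as f:
                data = f.read()
            return hashlib.sha1(b"blob %d\0" % len(data) + data).hexdigest()

        print(git_blob_sha1(sys.argv[1]))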


  • Don't Change URL in Browser When Clicking <asp:LinkButton>

    - by Corey Goldberg
    I have an ASP.NET page that uses a menu based on asp:LinkButton controls in a master page. When a user selects a menu item, an onclick handler calls a method in my C# code-behind, which just does a Server.Transfer() to a new page. From what I have read, this is not supposed to change the URL displayed in the browser. The problem is that the URL does change in the browser as the user navigates the menu to different pages. Here is an item in the menu:

        <asp:LinkButton id="foo" runat="server" onclick="changeToHelp"><span>Help</span></asp:LinkButton>

    In my C# code, I handle the event with a method like:

        protected void changeToHelp(object sender, EventArgs e)
        {
            Server.Transfer("Help.aspx");
        }

    Any ideas how I can navigate through the menu without the browser's URL bar changing?


  • Why does this Java application print "true"?

    - by user292084
    This is my first class, Hello.java:

        public class Hello {
            String name = "";
        }

    This is my second class, Test1.java:

        public class Test1 {
            public static void main(String[] args) {
                Hello h = new Hello();
                Test1 t = new Test1();
                t.build(h);
                System.out.println(h.name);
            }

            void build(Hello h) {
                h.name = "me";
            }
        }

    When I run Test1, it prints "me". I think I understand this, because the reference to the Hello object is passed into build. This is my third class, Test2.java:

        public class Test2 {
            public static void main(String[] args) {
                Hello h = null;
                Test2 t = new Test2();
                t.build(h);
                System.out.println(h == null);
            }

            void build(Hello h) {
                h = new Hello();
            }
        }

    When I run Test2, it prints "true". Why? Is the reference no longer passed? I am confused.
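
    For illustration (an editorial sketch, not from the post): the same distinction between mutating a shared object and rebinding a local parameter shows up in Python, which also passes object references by value.

        # mutate vs. rebind, mirroring the Java Test1/Test2 pair above
        class Hello:
            def __init__(self):
                self.name = ""

        def build_mutate(h):
            h.name = "me"      # mutates the shared object: visible to the caller

        def build_rebind(h):
            h = Hello()        # rebinds only the local name: invisible to the caller

        a = Hello()
        build_mutate(a)
        print(a.name)          # prints "me", like Test1

        b = None
        build_rebind(b)
        print(b is None)       # prints True, for the same reason as Test2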


  • Weird NFS performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor NFS performance between two Xen virtual machines (client and server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration:

        RPCNFSDCOUNT=8 (the default): 13.5-30 seconds to cat a 1GB file on the client, so 35-80 MB/s
        RPCNFSDCOUNT=16: 18 seconds to cat the file, 60 MB/s
        RPCNFSDCOUNT=1:  8-9 seconds to cat the file (!!?!), 125 MB/s
        RPCNFSDCOUNT=2:  87 seconds to cat the file, 12 MB/s

    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI passthrough; on the server I can cat the file in a few seconds (around 250 MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread?

    A few other things I have tried changing, to little or no effect:

        - increasing /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K
        - increasing /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K
        - mounting with the client options rsize=32768,wsize=32768

    From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the server. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid picks up a fakeRAID-0 striped across them, which I have mounted at /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details, I will add them.
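
    For reference, the cat-style measurement described above can be scripted; the following is a minimal editorial sketch under stated assumptions (root access to drop the page cache, and a hypothetical path for the test file):

        # editorial sketch: time a sequential read over NFS with a cold client cache
        import subprocess
        import time

        TEST_FILE = "/mnt/nfs/testfile"   # hypothetical mount point and file

        subprocess.run(["sync"], check=True)
        with open("/proc/sys/vm/drop_caches", "w") as f:   # requires root
            f.write("3\n")

        total = 0
        start = time.monotonic()
        with open(TEST_FILE, "rb") as f:
            while chunk := f.read(1 << 20):    # sequential 1 MiB reads, like cat
                total += len(chunk)
        elapsed = time.monotonic() - start
        print(f"{total / elapsed / 1e6:.1f} MB/s over {elapsed:.1f} s")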


  • Linux: upload/download difference on network shares

    - by Batsu
    I have a Red Hat Enterprise Linux 6 machine (with SELinux) that shows a significant difference in speed between download and upload (the latter significantly slower) of files shared over the LAN. The bottleneck seems to be the output side of the Linux machine, since I get a rate of around 1 Mb/s when:

        - WinXP machines download files shared (using samba) by the RHEL machine
        - uploading files from the RHEL machine to a WinXP machine's shared folder

    while:

        - uploading from the XP machines to the RHEL machine's shares
        - downloading the XP machines' shares on the RHEL machine
        - any share between two Windows machines

    all run smoothly (around 50 Mb/s). Since the upload from RHEL to the WinXP share is slowed too, I would exclude an issue in the samba configuration. What could be limiting the upload speed?

    Update: iptables doesn't show any output rule, and disabling it makes no noticeable difference, so I would rule it out too.
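
    One way to separate samba from the raw network path is a plain TCP throughput test between the two machines. Below is a minimal editorial sketch (the port number and transfer size are arbitrary choices): run it with "serve" on the receiving machine and "send <host>" on the sending one, in both directions.

        # editorial sketch: one-shot TCP throughput test (Python 3.8+)
        import socket
        import sys
        import time

        PORT = 5001   # arbitrary test port

        def serve():   # run on the receiving machine
            with socket.create_server(("", PORT)) as srv:
                conn, _ = srv.accept()
                total, t0 = 0, time.monotonic()
                while chunk := conn.recv(1 << 16):
                    total += len(chunk)
                print(f"{total / (time.monotonic() - t0) / 1e6:.1f} MB/s received")

        def send(host, mib=100):   # run on the sending machine
            buf = b"\0" * (1 << 16)
            with socket.create_connection((host, PORT)) as s:
                for _ in range(mib * 16):   # 16 x 64 KiB = 1 MiB per round
                    s.sendall(buf)

        if __name__ == "__main__":
            serve() if sys.argv[1] == "serve" else send(sys.argv[2])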


  • Slow software RAID

    - by Jure1873
    I've got software RAID 1 for / and /home, and it seems I'm not getting the speed I should out of it. Reading from md0 I get around 100 MB/s; reading from sda or sdb I get around 95-105 MB/s. I thought I would get more speed (while reading data) from two drives. I don't know what the problem is. I'm using kernel 2.6.31-18.

        hdparm -tT /dev/md0

        /dev/md0:
         Timing cached reads:   2078 MB in 2.00 seconds = 1039.72 MB/sec
         Timing buffered disk reads:  304 MB in 3.01 seconds = 100.96 MB/sec

        hdparm -tT /dev/sda

        /dev/sda:
         Timing cached reads:   2084 MB in 2.00 seconds = 1041.93 MB/sec
         Timing buffered disk reads:  316 MB in 3.02 seconds = 104.77 MB/sec

        hdparm -tT /dev/sdb

        /dev/sdb:
         Timing cached reads:   2150 MB in 2.00 seconds = 1075.94 MB/sec
         Timing buffered disk reads:  302 MB in 3.01 seconds = 100.47 MB/sec

    Edit: the array is RAID 1.


  • SSIS - Limiting Concurrent Connections

    - by Bigtoe
    Hi folks, I am using SSIS to connect to a legacy mainframe database that allows only 5 concurrent connections at a time. I have a Data Flow task with many tables to transfer, and it fails because of this limitation. I have split the Data Flow task into separate data flows and this works for the moment, but it is not optimal, as the flows need to be sequenced and one large transfer in a flow holds up subsequent transfers. Does anyone have any idea how to limit the number of connections in a single data flow? I had a look at using the Engine Threads property, but this did not make any difference. Any help much appreciated.
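
    SSIS specifics aside, the underlying pattern being asked for is a cap on concurrent use of a shared resource. Purely as an illustration (a Python sketch, not SSIS), a semaphore expresses it:

        # editorial sketch: cap concurrent transfers at the connection limit
        import concurrent.futures
        import threading

        MAX_CONNECTIONS = 5
        gate = threading.Semaphore(MAX_CONNECTIONS)

        def transfer(table):
            with gate:   # at most 5 transfers hold a connection at once
                print(f"transferring {table}")
                # ... open connection, copy rows, close connection ...

        tables = [f"table_{i}" for i in range(20)]   # placeholder table names
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
            list(pool.map(transfer, tables))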


  • Which SSL certificate to buy [closed]

    - by Sparsh Gupta
    I am reading several notes on SSL certificates and comparisons between them. What matters to me most is speed. From what I read, encryption is the same across all the different certificates available, but I was wondering whether there is any difference in website performance depending on the certificate involved. I am of course interested in end-to-end response times, and I wonder whether the type of encryption or the number of chain certificates required makes a difference in speed. I don't really care about cost; I'm looking for a good SSL certificate that ideally gives me absolutely no pain and the best performance. Recommendations?
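
    Chain length is measurable end to end: every full handshake carries the certificate chain and verifies its signatures before any application data flows. A crude editorial check (the host names are placeholders; the result also includes network round trips, so compare hosts from the same vantage point):

        # editorial sketch: time the TLS handshake against candidate hosts
        import socket
        import ssl
        import time

        def handshake_time(host, port=443):
            ctx = ssl.create_default_context()
            t0 = time.monotonic()
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    pass   # handshake completes on wrap
            return time.monotonic() - t0

        for host in ("example.com", "example.org"):   # hypothetical candidates
            print(host, f"{handshake_time(host) * 1000:.0f} ms")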


  • How to write T-SQL to compare and copy data?

    - by George2
    Hello everyone, I have two SQL Server 2008 Enterprise databases (on two machines); one database is the master and the other is the slave. I want to transfer updates from a table in the source database to a table in the destination database (the two tables have the same schema, and both use a single column as a unique primary key). The transfer rule, in short, is to keep the destination database the same as the source database as the source is updated:

        - if a new row exists in the source database but not in the destination, insert the row into the destination;
        - if a row does not exist in the source database but exists in the destination, delete the row from the destination;
        - if a row's content (i.e. columns other than the primary key) changes in the source database, update the destination with the new content.

    Thanks in advance, George
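
    Those three rules amount to a keyed diff between the two tables (in T-SQL terms, the shape of a MERGE statement, which SQL Server 2008 supports). As a language-neutral illustration of the rule only, here it is as an editorial Python sketch over dictionaries keyed by primary key:

        # editorial sketch: plan the insert/delete/update sets described above
        def plan_sync(source: dict, dest: dict):
            inserts = {k: v for k, v in source.items() if k not in dest}
            deletes = [k for k in dest if k not in source]
            updates = {k: v for k, v in source.items()
                       if k in dest and dest[k] != v}
            return inserts, deletes, updates

        src = {1: "a", 2: "b", 3: "c"}
        dst = {2: "b", 3: "x", 4: "d"}
        print(plan_sync(src, dst))   # insert key 1, delete key 4, update key 3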


  • My Ubuntu VPS server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my VPS because my old installation was very slow; unfortunately this did not fix the problem. By slow I mean that requests for my PHP websites take a long time, anywhere from slow (3+ seconds per request) to very slow (30 seconds per request). When it's really bad, SSH is also laggy. The websites are askmike.org (a pretty standard Wordpress install) and mvr.me (my own PHP). How slow? (The original post linked two screenshots: a clean Wordpress install loading very slowly, and a small PHP-based website loading slowly.)

    The VPS: 256 MB of RAM and a 25 GB hard disk. Besides serving the two small websites it isn't doing anything, AFAIK.

    What I have installed: a clean Ubuntu Server 12.04, a LAMP stack, a few things like git and nodejs (not using either), ossec (because I thought my server was getting hammered), and munin.

    What I have already tried/done: I installed munin so that I could watch I/O speed and such, but the problem is that I don't know what to look for in the munin report. I checked the logs and don't see anything strange (although I don't really know what to look for besides strange or repetitive errors and GET requests). I configured the Apache MPM to:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           40
            MaxRequestsPerChild   0
        </IfModule>

    (Apache is using prefork, the default.)

    Stats: I copied the munin report as it appeared at 4:50 last night to a site hosted on a shared webhost. Note that my MySQL crashed somewhere after 1:00 that night (a new problem altogether), so the graphs for last night might look strange.

    Can anyone help me get my VPS up to normal speed?

    EDIT: Thanks for the replies. The VPS costs 10 bucks a month and is from directvps.nl (a Dutch host; I'm Dutch too). I did two speed tests for disk I/O:

        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s

        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s

    Anyway: how can I prove to my VPS host that it is too slow? I can understand a busy server slowing a website down, but 5-30 seconds of load time for a normal PHP page?
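
    On the closing question (proving slowness to the host), a log of request latencies carries more weight than anecdotes. A throwaway editorial sketch (URL, sample count, and interval are arbitrary):

        # editorial sketch: sample page load latency over time
        import time
        import urllib.request

        URL = "http://askmike.org/"   # one of the sites mentioned above

        for _ in range(10):
            t0 = time.monotonic()
            with urllib.request.urlopen(URL, timeout=60) as r:
                r.read()
            print(f"{time.monotonic() - t0:.1f} s")
            time.sleep(30)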


  • Inconsistent network switch throughput values

    - by Marcus Hughes
    Quite simply: I have a network switch with SNMP and need to calculate the throughput of a switch port, so I use ifOutOctets. We transfer a file of 145 MB, and if we subtract the counter's value at the start of the transfer from its value at the end, the result is 158901842. I simply can't get the value to match, or even come close to, what the real transfer is. I understand that there may be excess traffic, but I just can't get the numbers anywhere near each other (the server being tested carries no other traffic while this is running). We have tried for a long time and suspect there may be an issue with the recording on the HP switch. Do you have any suggestions, or how should we be calculating it? Thanks a lot in advance. The switch is an HP ProCurve 1810G on firmware 2.2.
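
    As an editorial sanity check on the numbers quoted (not from the post): the counter delta is only about 4-10% above the file size, which is roughly what TCP/IP and Ethernet framing add on top of the payload, so the counters may be closer to correct than they look.

        # editorial check: counter delta vs. file size, under both readings of "145MB"
        delta = 158901842   # ifOutOctets(end) - ifOutOctets(start), from the post
        for label, size in (("145 MiB", 145 * 2**20), ("145 MB", 145 * 10**6)):
            print(label, round(delta / size, 3))   # ~1.045 and ~1.096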


  • SSD Performance for PHP?

    - by Andrew Fashion
    My programmer just built an application in PHP using the Doctrine ORM (it will be a high-traffic social networking website), and it's very heavy on PHP/Apache and CPU. The queries are wonderfully fast and MySQL is barely using any CPU; it's just Apache. I was curious whether an SSD would help speed up PHP/Apache, because I know the bottleneck is PHP reading multiple files, class files, and loading up a bunch of data. So common sense makes me think that if PHP is reading multiple PHP files, an SSD would help, at least as far as reads and writes go. I was thinking of using a high-performance SSD for the PHP application, but for user image uploads I would just continue using a 15k SAS drive. Are there any performance issues with using an SSD in this kind of situation? And would it prove to help speed up PHP/Apache and ease the CPU problem?


  • Using TrueCrypt (software encryption) with an SSD

    - by Shackrock
    I use full disk encryption (FDE) with TrueCrypt on my laptop. I have a 2nd-generation i7 with AES instruction support, so honestly I can't even notice a speed change on the system with it on. My question is for those who know a lot about SSDs. I previously (early 2011) read articles about how software encryption negates the speed benefits that an SSD provides, because of the need for the SSD to issue a delete command and then a write command for every encrypted write, instead of just writing over data like a regular HDD would (or something like this... honestly I can't remember, ha!). Anyway, have there been any improvements in this area? Is it pointless for me to get an SSD if I'm using FDE? Thanks all.


  • How can I restrict my mates from downloading?

    - by user239295
    We share a broadband internet connection among 6 users at the place we live. We get 20 GB under a fair usage policy (FUP) at 2 Mbps from the ISP; after the 20 GB is consumed, the speed drops to 512 kbps, at which it is very difficult to browse any page. The problem is that we cannot track which user is doing the downloading that exhausts the FUP. Is there something that would let us allot each user a certain amount of download volume, say 2 GB, or restrict everyone from downloading, so that we can stretch the FUP to the end of the month? The connection is shared over wifi: an ADSL router is configured as a wifi access point and all 6 of us use laptops (no desktop PCs). Any help would be appreciated. I apologize if I am not being clear with my question.


  • SQL Server 2005 to 2008 Bak file help please!

    - by Brandon
    I have a SQL Server 2005 database backup that I want to move to SQL Server 2008 on my server. I spent 3 days transferring the .bak file from my own machine to the server. I then tried to restore the .bak file and got an error. I then read about a completely different method for moving a SQL Server 2005 database to SQL Server 2008, the detach-and-attach method: detach the database in SQL Server 2005, transfer its MDF file via FTP to the server, and attach it in SQL Server 2008. But I have already used a lot of bandwidth transferring the .bak file to my server. Is there a way to convert the .bak file that is already on my server to an MDF file and attach it in SQL Server 2008?


  • It takes a long time until Windows XP recognizes a connected USB disk

    - by Pavol G
    Hello IT guys, I have a problem with my new USB disk. When I connect it to my laptop with Windows XP SP2, it takes about 4-5 minutes until Windows recognizes it and shows it as a new disk. I can also see (the disk's LED is blinking) that something is scanning the disk when I connect it; once this is done, Windows immediately recognizes it. Also, when I'm copying data to this disk the speed is about 3.5 MB/s, even though it's connected via USB 2.0. I tried checking for spyware (using Spybot) and running Windows in safe mode, but I still have the same problems. Do you have any idea what could help solve this problem? On Windows Vista (another laptop) everything is OK: the disk loads in about 15 seconds and the speed is about 20-30 MB/s. Thanks a lot for any advice!


  • Transferring binary file from web server to client

    - by Yan Cheng CHEOK
    Usually, when I want to transfer a text file from the web server to the client, here is what I do:

        import cgi

        print "Content-Type: text/plain"
        print "Content-Disposition: attachment; filename=TEST.txt"
        print

        filename = "C:\\TEST.TXT"
        f = open(filename, 'r')
        for line in f:
            print line

    This works very well for ANSI files. However, say I have a binary file, a.exe. (This file is in a secret path on the web server, and users have no direct access to that directory.) I wish to transfer it with a similar method. How can I do so? What Content-Type should I use? Using print seems to corrupt the content received on the client side. What is the correct method?
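
    A minimal editorial sketch of the binary variant, in the same Python 2 style as the post (the path and filename below are placeholders): the key differences are opening the file in binary mode, writing bytes instead of using print, and putting stdout into binary mode on Windows so newline translation doesn't corrupt the stream.

        # editorial sketch: stream a binary file through CGI (Python 2)
        import os
        import sys

        filename = "C:\\secret\\a.exe"   # hypothetical server-side path

        if sys.platform == "win32":      # stop \n -> \r\n translation on stdout
            import msvcrt
            msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

        sys.stdout.write("Content-Type: application/octet-stream\r\n")
        sys.stdout.write("Content-Disposition: attachment; filename=a.exe\r\n")
        sys.stdout.write("Content-Length: %d\r\n\r\n" % os.path.getsize(filename))

        with open(filename, "rb") as f:  # binary mode, unlike the text version
            while True:
                chunk = f.read(64 * 1024)
                if not chunk:
                    break
                sys.stdout.write(chunk)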


  • ADSL improvement in recent years

    - by cleong
    I currently have a 2 Mb/s ADSL connection that I signed up for more than five years ago. Has the technology improved much during that time, enough to allow greater speed over the same wires? The building I live in is quite old and the lines aren't very good; they weren't able to support the 6 Mb/s service back then. Now I notice that the lowest speed offered by my telco is 10 Mb/s; even that would be a serious improvement over what I have now. Here are the stats from the modem:

        Line Attenuation (Up/Down) [dB]: 10.5 / 15.5
        SN Margin (Up/Down) [dB]: 31.5 / 29.0


  • IPv6 feature in Network Adaptor is Slowing Internet

    - by Teknophilia
    For the past few days, my internet browsing has been very poor. It's not a matter of speed, as a speed test shows at least 15 Mbps; it is as if my laptop has a hard time actually connecting to sites. I've found a possible culprit, but I don't know why it would have this effect: going to the adapter settings and disabling IPv6, but leaving IPv4 enabled, brings my browsing back to normal, and re-enabling IPv6 brings the issue back. This is strange because I have always had IPv6 enabled. Moreover, using sites that test IPv6 compatibility, I fail with IPv6 enabled on the adapter and pass when it's disabled. Any ideas about why this is happening, and how to fix it?
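
    The symptom described (fine with IPv6 disabled, stalls with it enabled) commonly means the adapter advertises IPv6 but the v6 path is broken, so connections wait on a failing v6 attempt before falling back to v4. A quick per-family check, as an editorial sketch (the hostname is just an example of a dual-stack site):

        # editorial sketch: compare IPv4 vs IPv6 connect times to the same host
        import socket
        import time

        HOST = "www.google.com"   # any dual-stack site works

        for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
            t0 = time.monotonic()
            try:
                addr = socket.getaddrinfo(HOST, 80, family, socket.SOCK_STREAM)[0][4]
                with socket.create_connection(addr[:2], timeout=5):
                    pass
                print(label, f"connected in {time.monotonic() - t0:.2f} s")
            except OSError as e:
                print(label, "failed:", e)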


  • How to know the file type of a file you are forcing the user to download?

    - by Starx
    I am trying to force the user to download a file. My script for that is:

        $file = "file/this.zip";
        header("Cache-Control: public");
        header("Content-Description: File Transfer");
        header("Content-Disposition: attachment; filename=$file");
        header("Content-Type: application/zip"); // this is what I need to determine
        header("Content-Transfer-Encoding: binary");
        readfile($file);

    The files I am going to serve will not be .zip all the time, so I want to detect the content type of the file I receive in $file. How can I accomplish this?
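
    In PHP itself this is what the fileinfo extension (finfo_file()) or mime_content_type() is for. For illustration only, the same extension-based lookup in an editorial Python sketch:

        # editorial sketch: map a file name to a MIME type, with a safe fallback
        import mimetypes

        for name in ("file/this.zip", "photo.jpg", "report.pdf", "unknown.xyz"):
            mime, _ = mimetypes.guess_type(name)
            print(name, "->", mime or "application/octet-stream")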


  • Any open source instant messenger?

    - by George2
    Hello everyone, I need to develop an instant messenger (like MSN Messenger, but simple, basic functionality is fine) based on .NET (C#). I want to integrate the instant messenger with my current web site's users. Does anyone know of an open source instant messenger (preferably C#) to use as a reference? BTW: some of the users are behind internal IP addresses (behind a gateway or proxy, like 10.10.xxx.xxx), so in this scenario two users cannot use point-to-point message transfer if both of them are behind a gateway. I think I have to develop a server that acts as an intermediate party to relay messages between the two users; is that correct? Thanks in advance, George
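
    On the last point: yes, when both peers are behind NAT the usual design is a relay server that both clients connect out to, so neither side needs an inbound connection. A minimal editorial sketch of the idea (one pair of clients, an arbitrary port, Python rather than C#):

        # editorial sketch: relay traffic between two clients behind NAT (3.8+)
        import socket
        import threading

        PORT = 9000   # arbitrary

        def pump(src, dst):
            while data := src.recv(4096):
                dst.sendall(data)

        srv = socket.create_server(("", PORT))
        a, _ = srv.accept()   # first client connects out to us
        b, _ = srv.accept()   # second client connects out to us
        threading.Thread(target=pump, args=(a, b), daemon=True).start()
        pump(b, a)            # forward the other direction on the main thread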


  • How to retry connections with wget?

    - by Andrei
    I have a very unstable internet connection and sometimes have to download files as large as 200 MB. The problem is that the speed frequently drops and sits at --.-K/s while the process remains alive. I thought of just sending KILL signals to the process, but as I read in the wget manual about signals, that doesn't help. How can I force wget to reinitialize itself and pick the download up where it left off after the connection drops and comes back up again? I would like to leave wget running, and when I come back, I want to see it downloading, not waiting at --.-K/s.
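
    For what it's worth, wget has flags aimed at exactly this: --tries to keep retrying, --read-timeout to treat a stalled stream as dead, and -c to resume a partial file. The loop those flags implement looks roughly like this editorial sketch (it assumes the server honors HTTP Range requests; the URL and filename are placeholders):

        # editorial sketch: retry a download, resuming where it left off
        import os
        import time
        import urllib.request

        URL = "http://example.com/big.iso"   # hypothetical
        OUT = "big.iso"

        while True:
            have = os.path.getsize(OUT) if os.path.exists(OUT) else 0
            req = urllib.request.Request(URL, headers={"Range": f"bytes={have}-"})
            try:
                with urllib.request.urlopen(req, timeout=10) as r, open(OUT, "ab") as f:
                    while chunk := r.read(64 * 1024):
                        f.write(chunk)
                break                # stream ended cleanly: done
            except OSError:
                # stalled or dropped (a real tool would also special-case an
                # already-complete file and servers that ignore Range)
                time.sleep(5)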


  • How to increase the disk cache of Windows 7

    - by Mark Christiaens
    Under Windows 7 (64-bit), I'm reading through 9000 moderately sized files totalling more than 200 MB of data. Using Java (JDK 1.6.21) I iterate over the files. The first 1400 or so go at full speed, but then speed drops off to 4 ms per file. It turns out that the main cost is incurred simply by opening the files: I open each one with new FileInputStream (and of course close it in time to avoid file leaks). After some investigating, I see that Windows' disk cache is using only about 100 MB of RAM, although I have 8 GiB available. I've tried increasing the cache size using the CacheSet tool, but any values I provide are considered out of range. I've also tried enabling the LargeSystemCache registry key, but (after rebooting) the CacheSet tool still indicates I'm using 100 MB of cache (and it doesn't increase during the test run). Does anybody have any suggestions to "encourage" Windows 7 to cache my 9000 files?
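
    While experimenting with cache settings, the per-file cost is easy to track between runs; an editorial sketch (the directory path is a placeholder) that reports the same ms-per-file figure quoted above:

        # editorial sketch: average time to open and read each file in a tree
        import pathlib
        import time

        files = [p for p in pathlib.Path("data").rglob("*") if p.is_file()][:9000]

        t0 = time.monotonic()
        for p in files:
            with open(p, "rb") as f:
                f.read()
        elapsed = time.monotonic() - t0
        print(f"{elapsed / max(len(files), 1) * 1000:.2f} ms/file over {len(files)} files")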


  • Samsung 830 SSD

    - by anru
    I have a 128 GB Samsung 830 SSD installed in my Windows 7 Ultimate 64-bit machine. I tried to copy a folder from my C: drive to the SSD and found that the copy speed is very slow. (The original post included a screenshot of the copy dialog.) I just want to know whether this is because I was copying so many small files. By the way, the SSD is SATA 3, but my motherboard only has SATA 2 ports; I don't know whether connecting a SATA 3 device to a SATA 2 interface contributes to the slow copy speed.

