Search Results

Search found 5436 results on 218 pages for 'transfer rate'.


  • One NIC going to sleep on CentOS system

    - by sbleon
    I have two Dell boxes, each with two Ethernet ports. A cable directly connects two of these ports, creating a tiny LAN with 10.3.3.x addresses. The other port on each box is hooked up to a switch and has a DHCP-supplied address to talk to the outside world. I've noticed that when scp'ing large files from one box to the other over the private LAN, the transfers sometimes stall. Any other network activity on either box appears to cause the transfer to resume. Wake-on-LAN is disabled on all interfaces according to ethtool. What else could be causing these stalled transfers?
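
    One hedged avenue to check, since Wake-on-LAN is already ruled out: Energy-Efficient Ethernet or driver-level power saving on the direct-connect ports. A sketch, assuming the private-LAN interface is eth1 (substitute the real device name):

        # Confirm link state and wake settings on the private interface
        ethtool eth1 | grep -E 'Link detected|Wake-on'

        # Query Energy-Efficient Ethernet status, if the driver supports it
        ethtool --show-eee eth1

        # Disable EEE as an experiment, then rerun the large scp
        ethtool --set-eee eth1 eee off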

    Read the article

  • How to tell if you are connected to Wireless B, G or N?

    - by Raheel Khan
    I am using Windows 7 on all the wired desktops and wireless laptops in my home network. I recently upgraded my Ethernet switch to Gigabit and instantly noticed a throughput increase on the wired devices. I also bought a Wireless-N WAP, but wireless file transfer speeds actually degraded. I have been told that a number of factors can affect wireless speeds, including which WAP is used, how many wireless devices are connected, which security mode is used, and so on; that, however, is not my question. Each of my laptops claims to support Wireless-N, but I cannot figure out whether the laptops are truly running Wireless-N or are connected to the WAP in some sort of mixed mode. I do not control the WAP device, so I cannot tell what mode it is running in. Is there a way to tell which mode is in use, and what the throughput is for each connected device, without access to the WAP interface?
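
    Windows 7 can answer this without any WAP access; a quick check from a command prompt on each laptop:

        netsh wlan show interfaces
        rem Look at "Radio type" (e.g. 802.11n versus 802.11g) and the current
        rem "Receive rate (Mbps)" / "Transmit rate (Mbps)" for the live link speed.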

    Read the article

  • Inexpensive degaussers or HDD shredders?

    - by Nicholas Knight
    I do a lot of work for a small cash-strapped business with a lot of active hard drives, mostly consumer-grade SATA around five years old. Predictably, they are dying at an increasing rate, and much of the time a failed drive can't even be detected, let alone complete a zero-out cycle. Right now those drives are just being stored, but that can't continue forever. We've also got a couple of bad LTO tapes it would be nice to deal with. There are very real security and legal issues that make dropping them off with someone who merely claims they'll be properly destroyed a gamble. I've looked around at degaussers and HDD shredders, and the ones that don't look like they come from some guy's basement all seem to be $3000+, which is hard to swallow right now. Is there anything in the $500-1500 range that you would recommend? (Speed isn't a big issue; if it takes several minutes or even hours per drive, that's completely OK. We've only got 10 or so drives thus far.)

    Read the article

  • Very high Network Out on EC2 instance

    - by Jatin
    I launched an ubuntu-14.04-64bit instance in Amazon EC2 two days ago, started Tomcat 7.0.54 on it, and deployed my application WAR files. It has no software installed other than Tomcat and the defaults. In the past two days it shows 858 GB of data transfer (Network Out) from that instance; I have attached a graph of the Amazon CloudWatch "Network Out" metric. My application does not do any data download or upload. It is a Java Spring application with an HTML/JavaScript front end, and its traffic was very low (fewer than 20 hits) in those two days. Is there a way to find out why these transfers happened, and what data was transferred? As the graph shows, Network Out reached 20 GB per minute. Some more info: Network In was negligible, CPU utilization was very high, everything else was low.
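
    A minimal sketch for identifying the traffic from inside the instance (the tools are standard Ubuntu packages; the interface name eth0 is an assumption):

        # Live per-connection and per-process bandwidth
        sudo apt-get install -y iftop nethogs
        sudo iftop -i eth0 -P
        sudo nethogs eth0

        # Capture a sample of outbound packets for offline inspection
        sudo tcpdump -i eth0 -c 10000 -w /tmp/outbound.pcap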

    Read the article

  • Help building a PC that can image a dozen hard drives simultaneously

    - by Bigbio2002
    Not sure if this belongs here or on SuperUser, but here goes... I'm trying to figure out how to build a mass hard-drive-imaging PC out of COTS parts. A dedicated imaging device can do 10 drives at a time but costs several thousand dollars. So far I'm thinking of using several 3-port PCI-E FireWire cards plus some kind of FireWire-to-IDE adapter to connect the drives themselves. The "software" would consist of scripting diskpart or some other imaging utility. The problem is that I can't seem to find any such adapter. I could use standard external hard drive bays, but then I'd have a dozen power cables to plug in: ugly, messy, and inefficient. I picked FireWire over USB not only for better transfer speeds, but also because FireWire can deliver power over the bus (and could theoretically power a hard drive). Does anyone have any input on this?
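
    On the software side, a rough sketch of the scripted approach on a Linux build of the same box (device names and the output path are assumptions; dd will happily destroy data if they are wrong):

        # Image each attached source drive in parallel
        for dev in /dev/sdb /dev/sdc /dev/sdd; do
            dd if="$dev" of="/images/$(basename "$dev").img" bs=4M conv=noerror,sync &
        done
        wait    # block until every background dd finishes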

    Read the article

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting response headers look like this:

        Cache-Control: public, must-revalidate, proxy-revalidate
        Cache-Control: max-age=86400
        Connection: close
        Content-Encoding: gzip
        Content-Type: application/x-javascript; charset=utf-8
        Date: Tue, 11 Sep 2012 08:39:05 GMT
        Etag: e2266fb151337fc1996218fafcf3bcee
        Expires: Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified: Tue, 11 Sep 2012 06:22:41 GMT
        Pragma: public
        Server: nginx/1.2.2
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Why is nginx sending two Cache-Control entries, and could this be a problem for clients?
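
    The second entry comes from expires 1d, which emits its own Cache-Control: max-age=86400 alongside the add_header line. Most clients merge duplicate Cache-Control headers, so it is usually harmless, but a sketch of one way to collapse them into a single header:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            # drop "expires" and fold its max-age (1d = 86400s) into the one header
            add_header Cache-Control "public, max-age=86400, must-revalidate, proxy-revalidate";
        }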

    Read the article

  • Funnelling HTTP traffic

    - by spencer p
    I have a situation where a large batch of servers (X) need, on demand, to request data from a smaller set of web servers (Y). The worst case is all servers in X fetching different requests from one server in Y at once: X connections, which could be a very large burst of traffic. The best case is one server in X hitting one server in Y at a time, but life does not work like that. One idea is to place a proxy, similar to Squid, between X and Y. All of the X servers would connect to this proxy, which would maintain only a few persistent (HTTP keep-alive) connections to Y. If the few were, say, 3 or 4, the traffic would funnel. If we could then rate-limit those connections, an unusually high traffic spike would hurt nobody but ourselves. Thoughts?
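
    As a sketch of the funnel idea in nginx terms rather than Squid (all names and numbers are placeholders, not a tested design):

        upstream y_pool {
            server y1.internal:80;
            keepalive 4;                  # hold only a few persistent connections to Y
        }
        server {
            listen 8080;
            location / {
                proxy_http_version 1.1;   # required for upstream keepalive
                proxy_set_header Connection "";
                proxy_pass http://y_pool;
            }
        }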

    Read the article

  • AXFR problem using Gradwell secondary DNS

    - by Roaders
    Hi all. I use Gradwell.com to provide secondary DNS, but I keep getting e-mails along the following lines saying it isn't working:

        You have asked us to provide a secondary DNS service for the following
        domain(s). Unfortunately, the primary DNS server(s) you specified are not
        permitting the necessary zone transfers from our servers, or they are not
        answering "SOA" queries for your domain correctly.

    I have been through the support procedure and they weren't that helpful. They suggested the following:

        Our secondline team have suggested setting the AXFR to use another machine.
        This will ensure that the transfer is not locked down to one machine and
        should allow any machine to make the request.

    I don't really know what AXFR is, and I only have one production machine, so I can't point it at another one! In previous support correspondence we confirmed that I am allowing transfers to the correct IP and that I have the correct ports open on the firewall. I am running Windows Server 2003. What can I do to get these zone transfers working? Thanks
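
    A way to see roughly what Gradwell's servers see, runnable from any Linux or Mac shell (substitute the real domain and the primary's address):

        # Does the primary answer SOA queries for the zone?
        dig @primary.example.com example.com SOA

        # Does it permit a zone transfer from this network?
        dig @primary.example.com example.com AXFR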

    Read the article

  • Upgrading from MySQL Server to MariaDB

    - by Korrupzion
    I've heard that MariaDB has better performance than MySQL Server. I'm running software that makes intensive use of MySQL, which is why I want to try upgrading to MariaDB. Please share your experiences doing this conversion, plus instructions or tips. Also, which files should I back up from MySQL Server so that, if something goes wrong with MariaDB, I can roll back to MySQL without issues? I would use the following, but I'm not sure it's enough for a full backup of the MySQL Server configuration and databases:

        mysqldump --all-databases
        # plus a copy of /etc/mysql

    My environment (Debian Lenny):

        uname -a: Linux charizard 2.6.26-2-amd64 #1 SMP Thu Sep 16 15:56:38 UTC 2010 x86_64 GNU/Linux
        MySQL server version: 5.0.51a-24+lenny4
        MySQL client: 5.0.51a
        Statistics: Threads: 25  Questions: 14690861  Slow queries: 9  Opens: 21428
            Flush tables: 1  Open tables: 128  Queries per second avg: 162.666
        Uptime: 1 day 1 hour 5 min 13 sec

    Thanks! PS: Rate my English :D
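
    A slightly fuller pre-migration backup than the dump alone, sketched for this MySQL 5.0 setup (paths are the Debian defaults):

        # Dump all databases plus stored routines, consistently for InnoDB
        mysqldump --all-databases --routines --single-transaction \
            -u root -p > /root/mysql-full-backup.sql

        # Keep the server configuration alongside it
        cp -a /etc/mysql /root/mysql-conf-backup

        # Optionally archive the raw datadir while mysqld is stopped
        /etc/init.d/mysql stop && tar czf /root/mysql-datadir.tgz /var/lib/mysql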

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Some background on the setup: we initially had a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde, etc. Four of these records are relevant (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.        A    XXX.XXX.XXX.158
        mail.bigjaws.com.   A    XXX.XXX.XXX.158
        bigjaws.com.        MX   10 mail.bigjaws.com.
        bigjaws.com.        TXT  "v=spf1 +a +mx -all"

    Those records may no longer be valid, though: after using the dedicated server for a while, the site grew, so we moved operations to AWS (EC2, RDS, ELB, etc.) but kept the mail functionality as it was, i.e. emails from [email protected] are still sent by connecting to the dedicated server, where Plesk takes care of things. This was decided so we wouldn't have to set anything up from scratch. All DNS is now handled in Route53, where I have the following records:

        mail.bigjaws.com.   A    XXX.XXX.XXX.158
        bigjaws.com.        MX   10 mail.bigjaws.com
        bigjaws.com.        SPF  "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the check should pass: I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there too, though I'm not sure it's needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP the email came from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        noneneutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the openspf.org address listed above, it tells me the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of
        [email protected]. spfquery received a message from
        static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed
        an envelope sender address of [email protected]. The domain bigjaws.com
        has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158)
        to send mail on its behalf, so the message should have been accepted. It is
        impossible for us to say why it was rejected. What should I do? If the problem
        persists, contact the bigjaws.com postmaster.

    Finally, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (bigjaws.de, listed in the "Received: from" field, was the first domain hosted on the dedicated server before the .com was added; both are still separate subscriptions under Plesk):

        Delivered-To: [email protected]
        Received: by 10.14.177.70 with SMTP id c46csp289656eem;
            Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386;
            Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158])
            by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59
            for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128);
            Wed, 23 Oct 2013 01:10:59 -0700 (PDT)
        Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied
            by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158;
        Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158
            is neither permitted nor denied by best guess record for domain of
            [email protected]) [email protected]
        DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com;
            b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu;
            h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding;
        Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200
        Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP)
            by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA
            (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200
        Message-ID: <[email protected]>
        Date: Wed, 23 Oct 2013 11:11:00 +0300
        From: BigJaws Employee <[email protected]>
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
        MIME-Version: 1.0
        To: [email protected]
        Subject: test SPF
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: 7bit

        test SPF

    Any ideas why SPF is not working correctly? Also, are there any DNS settings that are no longer needed and might be causing a problem?

    Read the article

  • TFTP PUT Failing Across Hosts

    - by Jason
    I have a TFTP server installed on a CentOS host. /etc/xinetd.d/tftp:

        service tftp
        {
            disable     = no
            socket_type = dgram
            protocol    = udp
            wait        = yes
            user        = root
            server      = /usr/sbin/in.tftpd
            server_args = -c -s /var/lib/tftpboot
            per_source  = 11
            cps         = 100 2
            flags       = IPv4
        }

    If I try to PUT a file from a remote host to the host running the TFTP server, I get "Transfer timed out"; however, it does create the file in /var/lib/tftpboot, but the file is empty. If I tftp from the TFTP server to itself (localhost) and PUT a file, it works fine. I have verified that SELinux is disabled and iptables is turned off. I can connect from the remote hosts with no issue; it just seems to be the PUT I have a problem with:

        [root@SVR01 TEST]# tftp 10.100.2.15
        tftp> status
        Connected to 10.100.2.15.
        Mode: netascii Verbose: off Tracing: off Literal: off
        Rexmt-interval: 5 seconds, Max-timeout: 25 seconds
        tftp>
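
    The empty file suggests the initial WRQ reaches the daemon but the data packets never do, which usually implicates a filter somewhere between the hosts on the ephemeral UDP ports TFTP switches to after the port-69 handshake. A sketch of how one might confirm that (interface name is an assumption):

        # On the TFTP server, watch both the handshake and the data traffic:
        tcpdump -n -i eth0 'udp and host <remote-host-ip>'
        # A successful PUT shows the WRQ arriving on port 69, then DATA/ACK
        # packets on a high ephemeral port pair; if the DATA packets never
        # appear, something in the path is dropping them.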

    Read the article

  • Does a bad Internet connection increase bandwidth usage?

    - by Synetech
    My (Rogers) cable connection has been pretty bad recently (channels 3 and 10 are particularly fuzzy; it's analog, not digital cable). Not surprisingly, this has caused my cable modem to drop out and reestablish its connection a couple of times since the trouble started. The poor connection of course means more corruption (packets mangled, though not necessarily dropped per se), which causes the TCP/IP stack to retransmit packets more often. Reduced throughput aside, I got to wondering whether this increases the actual bandwidth usage. That is, if a high error rate on the line forces packets to be retransmitted: Does this increase a bandwidth-monitoring program's numbers? Does the ISP count the retransmitted packets toward the monthly cap? Based on what I remember from my university networking courses and common sense, I have a feeling the answer to both questions is yes, but I cannot reliably measure the first and have no authoritative answer for the second. I'm wondering if maybe the retransmitted packets are recognized as duplicates and thus not counted somewhere along the line.
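
    The first question is at least measurable from the affected machine; a rough check on a Linux host (Windows' netstat -s exposes a similar "Segments Retransmitted" counter):

        # Snapshot the TCP retransmission counters, wait, and compare
        netstat -s | grep -i retrans
        sleep 60
        netstat -s | grep -i retrans

    On the second question, typical ISP accounting sits at the packet level and has no notion of TCP duplicates, so retransmissions would normally count toward a cap.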

    Read the article

  • Is there any software or hardware which lets you stop, slow down, speed up or even reverse time?

    - by tjrobinson
    Obviously I'm talking about time in terms of the PC clock rather than real time. We were testing an application we've developed at work by setting the clock forward and back to simulate different scenarios and I started thinking how useful it would be if you could adjust the rate(?) of the system clock with finer control. So you could make a minute pass in a second or a day pass in 30 seconds and watch how the program you're developing copes with changes in date and time. I'd be interested to hear if anyone knows of any software or hardware which can let you do some or all of the above.
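
    On Linux, libfaketime is one tool in this space: it intercepts time calls for a single process and, per its documentation, supports a speed factor, so the program under test sees accelerated (or slowed) time while the system clock is untouched. A sketch; the flag syntax is taken from libfaketime's docs, so verify it against your installed version:

        # Run the application with its perceived clock advancing 10x faster
        faketime -f "+0 x10" ./application-under-test

        # Or slowed to half speed
        faketime -f "+0 x0.5" ./application-under-test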

    Read the article

  • YSlow says certain CSS are not gzipped

    - by rhand
    YSlow keeps telling me that files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped, while the gzip test tool at Feed the Bot says I am all good:

        Compressed?              Yes
        Compression type         gzip
        Page size (Bytes)        32,493
        Compressed size (Bytes)  -1
        Saving (Bytes)           32,494
        Compression %            100%

    I added this to my .htaccess:

        # Gzip
        <ifModule mod_gzip.c>
            mod_gzip_on Yes
            mod_gzip_dechunk Yes
            mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
            mod_gzip_item_include handler ^cgi-script$
            mod_gzip_item_include mime ^text/.*
            mod_gzip_item_include mime ^application/x-javascript.*
            mod_gzip_item_exclude mime ^image/.*
            mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

        # Deflate
        <ifmodule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </ifmodule>

    The response headers for the file mentioned are:

        CF-Cache-Status: MISS
        CF-RAY: 13945df90a9a0c1d-AMS
        Cache-Control: public, max-age=2592000
        Connection: keep-alive
        Content-Encoding: gzip
        Content-Type: application/javascript
        Date: Thu, 12 Jun 2014 07:34:38 GMT
        Expires: Sat, 12 Jul 2014 07:34:38 GMT
        Last-Modified: Thu, 21 Feb 2013 01:29:18 GMT
        Server: cloudflare-nginx
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Any ideas what I am missing here?
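
    Note that the captured header says Content-Type: application/javascript, so it may belong to a different asset than the flagged .css URL. Fetching the exact URL YSlow complains about removes the guesswork:

        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' \
            'http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2' \
            | grep -i -E 'content-encoding|content-type|cf-cache-status'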

    Read the article

  • Scalable WordPress Host for High-Volume Site?

    - by Jonathan Eunice
    I need recommendations for a scalable web host for a high volume WordPress web site. For my purposes, high-volume might be 100K-500K visitors/hour. Might think towards a 1M/hour burst rate as a "high water mark." I know WP isn't the highest-performing platform out there (ha!), but it's non-negotiable. I can do "the usual optimizations" (e.g. put static files in a CDN, run and follow the advice of performance analyzers like YSlow, etc). But it will still be WordPress, and there will be a dozen or so plugins involved. So, where to host the site? Most "what's the best WordPress host?" discussions seem to focus on lowest-cost. I need the opposite. What are the high-volume, scalable, or clustered WordPress hosts with which you've had great experiences?

    Read the article

  • Recommend a mail server setup for multiple domains

    - by Greg
    Hi all, I've just set up a new Debian web server, which I have done plenty of times before, but now I want to add a mail server, which I have never done before. I am aware of this question, but I would like someone to recommend packages and briefly explain how to use them to provide POP/IMAP access on multiple domains, a concept that has confused me for a while. I'm planning for this server to grow slowly but surely, from serving an initial 5 or 6 domains to about 20 in the first year, and continuing at that rate (yes, I've jumped on the cloud bandwagon). At the moment I have a DNS A record pointing to my server's IP and nothing else. I'm assuming I need an MX record pointing there too, but I haven't read up on it yet, so that's what I'll be doing today. Hopefully reading up on the subject, plus the help I get here, will have my server up and running in no time. Thanks!
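
    For the MX assumption, a minimal sketch of the records involved, with illustrative names and addresses: one A record for the mail host, and one MX per hosted domain pointing at it.

        mail.example.com.   IN  A    203.0.113.10
        example.com.        IN  MX   10 mail.example.com.
        example.net.        IN  MX   10 mail.example.com.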

    Read the article

  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream gets up to X Mbps of bandwidth, but only if the current total bandwidth of all streams/classes on the interface does not exceed X? At the same time, other data streams/classes must not be limited to X. The use case is an ISP that bills traffic by calculating the bandwidth average over 5-minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer while the interface is busy) but get the data through during idle/low-traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.
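
    HTB cannot express the condition exactly (bulk bandwidth only while the interface total stays under X), but a low-priority borrowing class comes close: it receives only what the other classes leave unused. A sketch with placeholder rates and a fwmark-based classifier:

        # root HTB; unclassified traffic defaults to the normal class
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit
        # normal traffic: high priority, may use the full link
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 95mbit ceil 100mbit prio 0
        # bulk traffic: tiny guarantee, lowest priority, borrows only spare capacity
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit  ceil 100mbit prio 7
        # steer bulk flows (marked with fwmark 20 by iptables) into the bulk class
        tc filter add dev eth0 parent 1: protocol ip prio 1 handle 20 fw flowid 1:20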

    Read the article

  • What is the minimum delay between two consecutive RS232 frames?

    - by Lord Loh.
    I have been working on building a UART on an FPGA. I can successfully transmit and receive single characters typed in PuTTY. However, when I set the FPGA to write a long sequence of "A" characters continuously, I sometimes end up with a sequence of "@" or other characters until I reset the FPGA a few times. I believe the UART on the computer loses track of the difference between a start bit and a zero. The delay between two "A"s is about 30 µs (measured with a logic analyzer) and the line runs at 115200 baud, 8N1. Is there a minimum delay that must be maintained between two consecutive RS232 frames?
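
    For scale: at 115200 baud a bit time is 1/115200 ≈ 8.68 µs, so a complete 8N1 frame (start + 8 data + stop = 10 bits) lasts about 86.8 µs, and asynchronous framing itself requires no gap at all beyond a valid stop bit, so a 30 µs gap should be legal. It may also be worth noting that 'A' is 0x41 and '@' is 0x40, a single-bit difference consistent with the receiver slipping one bit of framing, which would point at signal integrity or a baud-rate mismatch rather than the gap length.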

    Read the article

  • Amazon EC2: many micro-instances vs a single small/medium instance

    - by shashankaholic
    I have a chat application using a stack of Openfire, Tomcat 6 and MySQL. Currently I have all of these servers installed on a single Linux micro-instance (613 MB memory). Even with a low user base of 10-20 I am encountering CPU overload, which is quite obvious here. As I am new to Amazon EC2, can somebody suggest how to scale up my architecture according to traffic? Should I use separate micro-instances for each app server (Openfire, MySQL, Tomcat 6), or a single small or medium instance for the whole server stack? Some factors in context: high reliance on MySQL; high memory usage due to file transfer; the web application interacts with other Amazon services like S3 and SES.

    Read the article

  • Ctrl-V key on AIX

    - by antenore
    Hi all, I'm new to AIX and I miss some tricks that work well on other *nix flavors. I need a control sequence in a ksh script, like ^[ (Ctrl-[), and to enter that I'm used to typing Ctrl-V first, but here it doesn't work. At the moment I'm forced to use a Windows box with PuTTY, so I cannot even edit the scripts on my Linux box and transfer them to the AIX server. Do you know why, and how I can fix this? Thanks in advance. Kind regards, Antenore.
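
    A workaround that sidesteps the terminal entirely: generate the control character with printf instead of typing it (a sketch; ESC is octal 033, and POSIX printf understands octal escapes):

        # Build the escape character once, then reuse it in the script
        ESC=$(printf '\033')
        printf '%s[H%s[2J' "$ESC" "$ESC"   # example: the clear-screen sequence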

    Read the article

  • Corrupted file, hard drive test?

    - by all-R
    Hi guys, I'm currently on a MacBook with a 1 TB external hard drive connected through a USB hub. The disk, which is partitioned in two (one HFS+ and one NTFS), keeps getting corrupted. Most recently it was the HFS+ partition; I could not repair it with Apple's Disk Utility, but I was able to back up my files. Does this mean my hard drive is failing? Could it be the USB hub? I also keep all my iTunes library on the external drive (HFS+ partition) and did a lot of transfers lately, adding and removing files; the last time, the partition got corrupted after a large batch of deletions. If anybody has an idea of what to check first, or what could cause the problem, I would appreciate it :) Thanks!
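
    If smartmontools is available (e.g. via Homebrew or MacPorts), the drive's own error counters are worth a look, with the caveat that many USB bridges and hubs do not pass SMART commands through, so a direct connection may be needed. The disk identifier below is an assumption:

        diskutil list               # find the external drive's identifier
        smartctl -a /dev/disk2      # reallocated/pending sector counts hint at failure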

    Read the article

  • Why would http & https be blocked, even in safe mode with firewall disabled?

    - by Cogwheel
    I have a Windows 7 machine (Dell Studio XPS). Everything on it seems to be in working order: the network device reports Internet connectivity, and indeed I can ping websites, transfer files via FTP, and connect to VPNs and Remote Desktop, but the web won't work. I've disabled the Windows firewall and still no go; there are no other firewalls installed. The computer came with a trial of Norton 360, so I also ran the Norton Removal Tool (which solved a similar problem on another computer for me previously). Any thoughts?
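
    When ping and FTP work but HTTP/HTTPS fail in every browser, proxy settings or a damaged Winsock stack (often security-suite leftovers) are the usual suspects. Standard Windows 7 checks; the resets need an elevated prompt and a reboot:

        rem Check for a stray proxy configuration
        netsh winhttp show proxy
        rem Reset Winsock and the IP stack
        netsh winsock reset
        netsh int ip reset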

    Read the article

  • Help accessing Beagle over serial with Minicom/GTKTerm

    - by mrgordon
    I am trying to access a brand-new BeagleBoard from Ubuntu or a Mac over serial, using the setup shown here: http://specialcomp.com/beagleboard/RevC1.htm. I have Minicom and GTKTerm installed on the Ubuntu box. I set the correct baud rate, turned off flow control, and have tried ttyUSB0 and ttyS0. Once minicom is configured properly, I should just see the board's init text appear if minicom is open in a terminal when I plug in the board, correct? dmesg shows that ttyUSB0 on the computer has been connected to ttyS0 on the Beagle.
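
    One way to rule out a stale saved profile is to pass everything on the command line (115200 8N1 is the usual BeagleBoard console speed; confirm for this board revision):

        minicom -b 115200 -D /dev/ttyUSB0
        # -b sets the baud rate, -D the device, bypassing the saved default
        # profile; flow control can still be toggled in minicom's Ctrl-A O menu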

    Read the article

  • Can I take a HDD in Raid1 and plug it straight into a different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (a RAID 1 mirror), plug it into another enclosure, and have it work off the bat, but I'd like to make sure... Any ideas? Edit: My current setup is a Netgear ReadyNAS using (hardware) RAID 1. I'm hoping to replace this with a home-theatre-type PC (possibly running Ubuntu) and would like to migrate my data without doing a bulk transfer over the network between the two machines. Can anyone confirm whether this works for the Netgear ReadyNAS?
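
    A caution worth verifying rather than assuming: ReadyNAS volumes may sit on their own partition/volume layout rather than a bare filesystem, so a mirror member might not mount directly in a plain enclosure. A safe, read-only first test with one member disk attached to the Ubuntu box:

        sudo fdisk -l /dev/sdb                   # device name is an assumption
        sudo mount -o ro /dev/sdb3 /mnt/probe    # partition number will vary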

    Read the article
