Search Results

Search found 1529 results on 62 pages for 'bandwidth'.


  • Poor quality when trying to stream a 720p video to an Xbox 360 using Media Center Extender

    - by MBraedley
    I have my Xbox 360 set up as a Media Center Extender for my Windows 7 desktop. SD-quality AVI videos stream fine to the Xbox, either through the video library or through Media Center Extender, but when I try a 720p MKV file, the frame rate plummets and the A/V sync is completely lost. I don't want to transcode or switch container formats (MKV isn't supported by the 360), but I still want to stream. Both my desktop and the 360 are plugged into the same gigabit switch, which is plugged into my ISP-supplied modem/router. The video plays fine on my machine in a number of programs. Considering that I should have more than enough bandwidth to accommodate this video, why won't it play back properly?

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered by existing questions. Modern browsers typically open a large number of simultaneous connections to take advantage of the fact that TCP shares bandwidth fairly between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts that open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a Squid transparent HTTP proxy for central management of HTTP downloads. How can the number of simultaneous connections from Squid to a remote web server be limited, so the web server doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser and issue them to the remote server only N at a time, delaying (but not dropping) the others.
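
    I'm not aware of a stock Squid directive that caps outbound connections per origin, so treat the following purely as an illustration (not Squid configuration) of the behaviour described above: accept any number of client requests, but forward at most N at a time to a given web server and delay the rest. The hosts, the limit of 4, and the fetch helper are all made up for the example.

```python
# Illustration only (not Squid configuration): accept any number of request
# threads, but let at most MAX_PER_ORIGIN of them talk to a given host at
# once; the rest block (are delayed, not dropped) until a slot frees up.
import threading
import urllib.request
from collections import defaultdict
from urllib.parse import urlparse

MAX_PER_ORIGIN = 4                       # illustrative value for "only N at a time"

_guard = threading.Lock()                # protects creation of per-host semaphores
_per_host = defaultdict(lambda: threading.Semaphore(MAX_PER_ORIGIN))

def fetch(url, timeout=30):
    host = urlparse(url).netloc
    with _guard:
        sem = _per_host[host]
    with sem:                            # at most MAX_PER_ORIGIN concurrent fetches per host
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()

if __name__ == "__main__":
    urls = ["http://example.com/"] * 10  # placeholder URLs
    workers = [threading.Thread(target=fetch, args=(u,)) for u in urls]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```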

    Read the article

  • Offsite AND incremental backup

    - by Pyrolistical
    I already back up from my main computer to my server computer using SyncToy, but now I also want to do off-site backup. My idea so far: have a source hard drive at home (call it S); have a backup hard drive at work called B; have a transport hard drive called T; connect T at work and record an index of the files on B; take T home, check the index against S, note new/changed/deleted files, and copy the changed files to T; take T to work and update B; repeat. It's basically a sneakernet, using all of the advantages of it: high bandwidth, low latency. Is there some software to do this, or do I have to write it myself?
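
    The index-and-compare step described above is small enough to sketch. The Python outline below is a rough version of it: record an index of B onto T at work, then at home copy anything on S that is new or changed onto T. The paths, the JSON index file, and the size-plus-mtime change test are all assumptions, and deletions are not handled and would need a separate pass.

```python
# Sketch of the "record index / compare / copy changed files" steps from the
# question. Paths and the (size, mtime) change test are illustrative
# assumptions; deleted files would need separate handling.
import json
import os
import shutil

def build_index(root):
    """Map relative path -> [size, mtime] for every file under root."""
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            index[os.path.relpath(full, root)] = [st.st_size, int(st.st_mtime)]
    return index

def copy_changes(source_root, index_file, staging_root):
    """Copy files from S that are new or changed relative to B's index onto T."""
    with open(index_file) as fh:
        remote = json.load(fh)
    for rel, meta in build_index(source_root).items():
        if remote.get(rel) != meta:
            dest = os.path.join(staging_root, rel)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(os.path.join(source_root, rel), dest)

# At work:  json.dump(build_index("/mnt/B"), open("/mnt/T/index.json", "w"))
# At home:  copy_changes("/mnt/S", "/mnt/T/index.json", "/mnt/T/changed")
```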

    Read the article

  • Is there a way to tell if a file is done copying?

    - by Mike Cooper
    The scenario is this: Machine A has files I want to copy to Machine C. Machine A can't access C directly, but it can access Machine B, which can access Machine C. I am using scp to copy from Machine A to B, and then from B to C. Machine B has limited storage space, so as files come in, I need to copy them to C and delete them from B. The second copy is much faster, so bandwidth is not a problem there. I could do this by hand, but I am lazy. What I would like is to run a script on B or C that will copy each file to C as it finishes. The scp job is running from A. So what I need is a way to ask (preferably from a bash script) whether file X.avi is "done" copying. Each of these files is a different size, and I can't really predict the size or time of completion. Edit: by the way, the file transfer times are about 1 hour from A to B and about 10 minutes from B to C, if the time scale matters at all.
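
    One common heuristic — an assumption here, not the only option (watching the receiving scp process with lsof is another) — is to treat a file as finished once its size has stopped changing for a while. A rough sketch of such a watcher for machine B, with hypothetical paths, host name and quiet window:

```python
# Sketch for machine B: wait until each incoming file has stopped growing,
# then push it on to C and remove the local copy. Paths, the remote host and
# the 60-second "quiet" window are hypothetical.
import os
import subprocess
import time

INCOMING = "/data/incoming"              # where scp from A drops files
REMOTE = "user@machine-c:/data/"         # final destination on C
QUIET_SECONDS = 60                       # size unchanged this long => assume done

def is_stable(path, quiet=QUIET_SECONDS):
    size = os.path.getsize(path)
    time.sleep(quiet)
    return os.path.getsize(path) == size

while True:
    for name in os.listdir(INCOMING):
        path = os.path.join(INCOMING, name)
        if os.path.isfile(path) and is_stable(path):
            # check=True raises if the copy fails, so the local file is only
            # deleted after a successful transfer to C.
            subprocess.run(["scp", "-p", path, REMOTE], check=True)
            os.remove(path)
    time.sleep(30)
```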

    Read the article

  • Thunderbird + Gmail has to send emails twice

    - by Mohammad
    I've configured Thunderbird to place a copy of my sent emails in the remote "Sent" folder of my Gmail account, as opposed to the local Thunderbird one. This ensures I can take full advantage of my IMAP synchronization. So whenever I send an email, Thunderbird first sends one copy to the address list and then sends a new one to my Sent folder; with large attachments, however, this seems like a waste of time and bandwidth. Do you know of any extension, or a combination of a trick plus a Gmail filter, that could automate this in one step?

    Read the article

  • Increase the compression performance of VPN

    - by Martin
    I am currently switching from a system of HPN-SSH tunnels with compression enabled to something VPN based. I have tried tinc and n2n so far; Hamachi requires a library I do not have. In my primitive benchmarks I am not satisfied with the achievable bandwidth compared to the SSH tunnels. In tinc the low LZO setting performed best, but compression is only available in UDP mode. Ideally I would like to have a TCP-based VPN with multi-threaded compression. Can you suggest some ways to increase the performance? Would it be possible to somehow put a compression filter in front of the tun interface? Or are there any VPN implementations that might be better suited to my needs (fast compression, TCP-based, switch mode, does not have to be super-secure)? I would consider tunnelling Ethernet over SSH, but according to some articles it is not advisable.
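
    As a rough illustration of the "compression filter in front of the tunnel" idea (not tied to tinc, n2n, or any real VPN, and with made-up addresses and ports): a local TCP relay that zlib-compresses whatever it forwards. Only the outbound direction is shown; a real setup would need a matching decompressing relay on the far side plus the return path.

```python
# Conceptual sketch only: a local TCP relay that zlib-compresses the stream
# before handing it to the tunnel endpoint. Addresses and ports are made up,
# and only the client -> tunnel direction is shown; a complete setup needs a
# matching decompressing relay on the far side and a return path.
import socket
import threading
import zlib

LISTEN = ("127.0.0.1", 9000)     # applications connect here
FORWARD = ("10.0.0.2", 9001)     # far end (the decompressing relay)

def pump_compressed(src, dst):
    comp = zlib.compressobj(1)   # low compression level, analogous to a fast LZO setting
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(comp.compress(data) + comp.flush(zlib.Z_SYNC_FLUSH))
    dst.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN)
server.listen(5)
while True:
    client, _addr = server.accept()
    upstream = socket.create_connection(FORWARD)
    threading.Thread(target=pump_compressed, args=(client, upstream), daemon=True).start()
```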

    Read the article

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel, and 5 of my customers got infected with Zbot. This means the domains they host are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems it can't filter every request, and I don't really know why: I still see the connections coming in on the access log, and they're consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now hit a 404 error (for the ones caught by mod_security I am dropping the connection). Are there any more ways to defend against these requests?
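
    Independent of mod_security, one cheap way to see (and then firewall) the worst offenders is to tally the bot requests by source address in the access log. A small sketch, assuming an Apache common/combined log layout and the 404 status as the marker for the cleaned-up bot URLs; the log path is a placeholder:

```python
# Sketch: tally requests per client IP that ended in a 404 (the cleaned-up
# bot URLs) so the noisiest sources can be blocked at the firewall.
# The log path and the common/combined log layout are assumptions.
from collections import Counter

LOG = "/usr/local/apache/logs/access_log"      # placeholder path

hits = Counter()
with open(LOG, errors="replace") as fh:
    for line in fh:
        parts = line.split()
        if len(parts) > 8 and parts[8] == "404":   # status field in the common format
            hits[parts[0]] += 1                    # client IP is the first field

for ip, count in hits.most_common(20):
    print(f"{count:8d}  {ip}")
```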

    Read the article

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, done automatically in response to the detection of an MP3 upload. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/FTP the MP3 files, convert them to Ogg, and FTP the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS that I could upload without needing root access, but I've given up asking for that one. Always open to suggestions, though!
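
    For what it's worth, the conversion loop itself is tiny wherever it ends up running. A sketch along these lines (the root directory is invented, and it assumes a SoX build with MP3 read support) walks a tree and writes an .ogg next to each .mp3:

```python
# Sketch of the conversion step: find .mp3 files under a directory tree and
# write an .ogg next to each one by shelling out to SoX. The root directory
# is a placeholder, and this assumes a SoX build with MP3 read support.
import os
import subprocess

ROOT = "/home/user/uploads"

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        if not name.lower().endswith(".mp3"):
            continue
        src = os.path.join(dirpath, name)
        dst = os.path.splitext(src)[0] + ".ogg"
        if os.path.exists(dst):              # already converted on an earlier run
            continue
        subprocess.run(["sox", src, dst], check=True)
```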

    Read the article

  • Where would an S3 upload speed cap originate?

    - by CoreyH
    I do a ton of uploading to S3 and am experiencing capped speeds that I can't quite figure out how to address. The setup: Windows Server 2008 R2 x64, an external HD, a Java-based upload tool called Jsh3ll, and custom VBS scripts to kick the jobs off. Running one process at a time, I am always limited to about 4 Mbps. I have FiOS at 35/35 Mbps, so it isn't an outright line limit. And I can run parallel instances and go all the way up to 35 Mbps, so I know the problem isn't gateway/NIC/machine/Amazon related. Running parallel instances works to a degree as a solution, but it greatly increases the complexity of my workflow; solving this would make my life dramatically easier. When I was first doing this I was playing around with a bunch of Windows TCP parameters and was able to briefly get unconstrained bandwidth, but it wasn't repeatable. Thoughts?
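
    Not an answer to where the cap originates, but as a point of comparison: the parallel-streams trick can also happen inside a single upload via multipart transfers. A hedged sketch using boto3 (bucket, key, and file path are placeholders, and it assumes moving off Jsh3ll is acceptable):

```python
# Sketch: one logical upload split into concurrent multipart chunks, which
# often works around a per-connection TCP throughput ceiling. Bucket, key
# and file path are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MB
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=8,                     # parallel part uploads within one job
)

s3.upload_file("E:/exports/big-file.bin", "my-bucket", "big-file.bin", Config=config)
```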

    Read the article

  • Flash Media Server slow over SSL

    - by Antilogic
    We are using FMS to host a VoD site. We host FMS internally (we do not use a CDN). We recently installed an SSL certificate to alleviate connection issues for clients (their networks either block or don't support RTMP); however, we're noticing that when streaming over RTMPS, connections are drastically slower (on the order of Mbps). I know SSL causes some amount of overhead, but both client and server show almost no signs of exertion. Speedtest.net and a locally hosted speed test confirm that bandwidth is not an issue. I'm really not a network guru, so I'm at a loss as to where to check next. Do any of you have an idea why streaming media would run so slow over SSL?

    Read the article

  • VPS hosting for a social network

    - by Jana
    Hi, I've developed a social network, and I've been using shared hosting for it since it was launched. With that, I wasn't able to send emails in bulk for things like newsletters and invitations to join my site; plus, most importantly, most of the mails I send ended up in users' spam folders. I'm planning to move to a VPS, as it may not have those limits. I'm wondering what the cheapest VPS host available is. I'm not very familiar with Linux commands and am looking for cPanel to do the work for me. Will the following configuration suit a "new" social network like mine, which has less load? 1000 MHz guaranteed CPU, 512 MB guaranteed RAM, 20 GB (RAID) disk space, 1000 GB/month bandwidth, 2 IPs, 5 backups, semi-managed. Thanks in advance

    Read the article

  • Running a VM off of an external HD via USB

    - by Nelson LaQuet
    Is it viable to run a VM (VMware running Windows 7) off an external HD via USB, i.e. referencing the vmx/vhd directly from the mounted drive? I know it's possible, but I guess I'm asking whether USB provides enough bandwidth for normal usage. If so, are there any particular brands that may be better or worse? I know that eSATA would be a more viable setup, but my laptop doesn't have an eSATA port. Currently I use the VM to segregate all of my work development servers and software from my main machine, so I will be running all development servers and tools on the VM directly.
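
    A back-of-the-envelope check of the bandwidth side, assuming USB 2.0 since the laptop lacks eSATA (the ~30 MB/s figure is a typical real-world observation, not a guarantee):

```python
# Rough numbers only: USB 2.0 signals at 480 Mbit/s but mass-storage devices
# typically deliver on the order of 30 MB/s, which is the figure to weigh
# against the VM's disk activity (a 7200 rpm disk alone can read 80-120 MB/s).
usb2_signaling_mbit = 480
usb2_practical_mbyte = 30                  # typical real-world bulk throughput

print(f"USB 2.0 theoretical: {usb2_signaling_mbit // 8} MB/s")
print(f"USB 2.0 practical:   ~{usb2_practical_mbyte} MB/s")
# So the USB link, not the disk, is the likely bottleneck; light VM use is
# workable, but heavy disk I/O inside the guest will feel it.
```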

    Read the article

  • How full is too full for mechanical hard drives?

    - by Sunny Molini
    I have heard many claim that it doesn't matter how full a drive is until it starts cutting into temp and virtual memory space. This doesn't make sense to me, given how data is transacted on a hard drive. The inside of the platter presents less data per revolution than the outside of the drive does, by a significant factor. The inside 40% of the radius of a full-size hard drive is used for the spindle, so only the outside 60% is used for data storage, but that still means the innermost track of a hard drive presents data 60% slower than the outermost track. By my calculation, a hard drive that is only 10% full should perform about 2.25 times faster than a hard drive that is 90% full, assuming the flow isn't constrained by other factors. Am I wildly off base here? For all the drives I know, even the top speeds of the first 1% of the drive would be well within the bandwidth provided by a SATA 2 connection.
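
    The ratio in the question can be sanity-checked with a simple model: data fills from the outermost track inward, areal density and rotation speed are constant, so sequential throughput scales with track radius. A short sketch of that calculation (the model deliberately ignores zoning, seeks and caching):

```python
# Simplified model: the data zone spans radius 0.4R to 1.0R and fills from
# the outside in; sequential throughput on a track is proportional to its
# radius. This ignores zoning details, seeks and caching - it is only a
# sanity check on the ratio, not a benchmark.
import math

R_INNER, R_OUTER = 0.4, 1.0
DATA_AREA = R_OUTER**2 - R_INNER**2            # proportional to total capacity

def innermost_radius(fill_fraction):
    """Radius of the innermost track in use at a given fill level."""
    return math.sqrt(R_OUTER**2 - fill_fraction * DATA_AREA)

ratio = innermost_radius(0.10) / innermost_radius(0.90)
print(f"worst-case track speed, 10% full vs 90% full: {ratio:.2f}x")
# Prints roughly 1.9x - the same ballpark as the ~2.25x estimate above.
```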

    Read the article

  • Server goes offline. What to look for?

    - by Jonathan Sampson
    I'm using a new virtual server through GoDaddy, and this morning I received a call from the powers that be informing me our website was offline. After confirming this, I requested a power cycle through our GoDaddy control panel, and within a minute or two the server was back online. I made the call and reported the news that we're back up. Of course, a couple of minutes later we're down again. I tried connecting through PuTTY, and it takes forever to prompt me for a username, and each successive prompt takes a long time to come up. I'm using CentOS. So my questions are: How can I determine the cause? What types of things can I do to prevent this in the future? One interesting, and perhaps relevant, observation is that yesterday our bandwidth consumption was about 20% greater than our top figures from the past month.

    Read the article

  • 2 DSL lines...any benefit?

    - by EJB
    I have Verizon DSL in my office. I put DSL in about a year ago for $29.95/month. I added a new phone line recently, and it was cheaper to get it bundled with DSL, so now I have two DSL lines; my plan was to shut the first one off when my 1-year contract comes up (in September). A couple of times DSL has gone out on one line so I just used the other (I unplugged one line and plugged in the other), which is a nice redundancy to have, but it doesn't happen often. The question is, is there any way to use both DSL lines together so that 1) I can increase bandwidth and effective speed (is that possible?), or 2) have them both on and connected in some way so that traffic on my network would just use either one, and if one went down the traffic would route automatically? If I can't either increase speed by having two or at a minimum get some automatic redundancy, I see no reason to keep both on. Thanks!

    Read the article

  • X58 RAID 10 - Am I forced to use Sata2?

    - by Avi
    I'm building a new dev computer. It will be running a few VMware Workstation virtual machines. I was advised on Server Fault to use RAID 10 for performance. RAID 10 uses 4 disks. I contacted my supplier, who suggested a Gigabyte X58A motherboard and 4 Western Digital Caviar Black 6Gb/s disks. I have checked the spec for the X58A board, however, and it says: SATA 3Gb/s: RAID 0, RAID 1, RAID 5, and RAID 10; SATA 6Gb/s: RAID 0 and RAID 1. I'm losing half the bandwidth because I'm forced to use SATA 2! What should I do?
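
    A quick sanity check on what the 3 Gb/s limit actually costs with mechanical drives, using rule-of-thumb figures rather than measured numbers for these specific disks:

```python
# Back-of-the-envelope: SATA 3Gb/s leaves roughly 300 MB/s per port after
# 8b/10b encoding, while one 7200 rpm mechanical drive sustains on the order
# of 100-140 MB/s. Figures are typical estimates, not specs for these disks.
sata2_usable_mbyte = 3000 * 8 // 10 // 8     # 8b/10b encoding, then bits -> bytes
drive_sustained_mbyte = 130                  # rough sequential rate of one drive

print(f"SATA 3Gb/s usable:      ~{sata2_usable_mbyte} MB/s per port")
print(f"One mechanical drive:   ~{drive_sustained_mbyte} MB/s sustained")
print(f"Headroom left on SATA2: ~{sata2_usable_mbyte - drive_sustained_mbyte} MB/s")
# In RAID 10 each drive sits on its own port, so the 3Gb/s per-port cap is
# unlikely to be the bottleneck for spinning disks (it matters for SSDs).
```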

    Read the article

  • Developing and implementing a testing plan for a software app deployed on a web server

    - by Abhzoo
    A company in the USA is building a new web app that will be offered as SaaS to customers, and the development is being done by a software development team located in a different country (India). They are about to take delivery of a first demo to provide live feedback to the team in India. The overseas team requires a cloud server (Windows + SQL Standard, 8 GB RAM, 8 vCPUs, 40 GB SSD system disk, 80 GB SSD data disk, 1600 Mb/s network bandwidth) to serve as a test server. When the test server is set up, the team will install the app on it to get live feedback. Q: Explain in detail how you will develop and implement a testing plan for the software app. Be sure to explain the specifics. PLEASE HELP, NEED ANSWER ASAP

    Read the article

  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch. We're using VLC to stream the webcam over HTTP using the following commands:
    vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep
    vlc http://192.168.0.1:8080 :http-caching="0"
    Even with the caching set to zero, the delay in the image is a good 2-3 seconds, and the CPU usage of each PC is maxed. I'm guessing it's the transcoding that's causing much of the delay. Can anyone suggest changes to these command lines that will reduce the transcoding load, send the webcam over a different protocol, or anything else that will reduce the delay of the cameras? Bandwidth is not an issue at all, as the PCs can be connected to a dedicated switch/VLAN if required.

    Read the article

  • How can I send super large files directly to another computer in the Internet for free?

    - by Cruise
    I regularly need to transfer very large files (30 GB) of financial statistics to my friend. I don't have any problem with bandwidth: there is plenty of it here. I did some research in the area, so: 1. I would not use FTP, as it is very tricky to get working behind a NAT. 2. I would not use Skype/MSN/ICQ, as they are not designed for file transfer and underperform on huge files. 3. I would not use file-sharing services, as I would need to pay for big files (30 GB is a problem here) and I don't like holding any piece of my data on a third-party server. So I need some smart tool that will do what I need: sending files directly browser-to-browser and not browser-server-browser. Is that so complex? Is there some web application on the Internet that can do this?

    Read the article

  • Download videos from YouTube as I watch them

    - by Sab
    This may seem a somewhat strange requirement: I want to download YouTube videos as I watch them. I know that I would have to capture the packets using a program like Wireshark, and I do know that this is possible. So let's say I have 3 computers on my network and 1 smartphone, and I view a YouTube video on my phone. I now want this video to be recorded on any one of the computers so that I can see it later (record in the sense of capturing the packets, so that I don't have to download it again and waste my bandwidth). Are there any programs which will do this for me? The reason I want this is that I use IMediaShare to view YouTube videos on my TV, and once I've watched a video, if I want to see it again at a later point in time I have to download the entire video again.

    Read the article

  • Multiple .bkf files created in Backupexec 12.5 or 2010 related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever experienced multiple .bkf files being created for a single job. To describe what I mean by multiple files: the .bkf files are created with random sizes under 2 GB, even though I've configured the setting to split the file after it reaches 10 GB. Some jobs will create 20 .bkf files in 1 job, with chunks ranging from 50 MB to 800 MB. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek some advice and suggestions. I've set up another backup server with the exact same settings, and it seems to create a new .bkf file only when the 10 GB limit has been reached. Although I am backing up different machines, I know my settings are an exact match to the problematic server, or at least I think that's where the problem is.

    Read the article

  • Bandwidth-Hogging Linux Server Causing Trouble

    - by BlairHippo
    We have a Linux server (2.6.28-11-generic #42-Ubuntu) that's misbehaving on a client site, gobbling up an entirely unacceptable percentage of the client's bandwidth, and we're trying to figure out what the heck it's doing. And the guy who had the sysadmin skillset has yet to be replaced. We're at a loss for what could be causing all that network traffic, and need to figure it out SOON. What log files should I be looking at to find this information? What analysis tools would you recommend for this task? Please note that I'm not looking for a tool that will allow me to analyze FUTURE traffic. The client is on the verge of shutting the machine off entirely; I need to figure out what it's been doing with the data I already have, if that's at all possible. My thanks in advance for helping a development monkey play sysadmin.

    Read the article

  • AFP/SMB transfers cap at 2 megabytes/sec over wireless N

    - by RD.
    I wanted to transfer files between two Mac computers. The network is wireless N and both computers have wireless N modules in them. The problem is that when I transfer files between them via file sharing (AFP), the network speed caps at 2 megabytes/sec. Just downloading files from the Internet I can get faster speeds, so this isn't a constriction of my Wi-Fi bandwidth; it appears to be a constriction of the protocol being used. My wireless N link is set to 130 Mbit/s, so I should see real-world transfer speeds of around 12-16 megabytes/sec. I ran this command on both computers: sudo sysctl -w net.inet.tcp.delayed_ack=0 (which is supposed to lower TCP overhead), but it did not affect the speed. How can I get the speed I am expecting?

    Read the article

  • Just how slow should my VPN be?

    - by David Heggie
    I have a VPN set up between a remote office with 2 ADSL connections (8 Mb downstream, 512 Kb upstream) and a main office with a 10 Mb EFM connection (10 Mb both up and down). The VPN is an IPsec connection using a DrayTek 2930 router for the ADSL and a DrayTek 3200 router at the EFM end. However, I'm unable to get speeds over this connection (tested with iperf) of anything over 600 Kb or so down from the main office (traffic will pretty much always be from the main office to the remote office). Whilst I realise that there is an overhead and I'm never going to get the "full" bandwidth available over this VPN, I'd like to think that there's something I should be looking at that may help improve it. I've tried using the DrayTek's built-in "VPN Trunking" features, which are supposed to load-balance connections, but this doesn't seem to improve matters much. I guess my question is: is this the kind of performance I should expect from this kind of setup and I'll just have to live with it, or should I be able to squeeze something more out of it through some VPN magic?

    Read the article

  • Deleting sender from Outlook Safe Senders using GPO?

    - by Hutch
    We're having an external company do a mailshot to our users. The message contains images that are linked rather than embedded in the message (bandwidth isn't an issue). So of course on recent versions of Outlook you're prompted to download the images; not the end of the world, but it would be nice if that didn't happen. There's a bug in the Office/Outlook ADM/ADMX templates that means a custom list of Safe Senders won't import unless you follow this: http://support.microsoft.com/kb/2252421 The thing is, if I remove an entry from the Safe Senders file, it doesn't seem to be removed from Outlook, which seems odd.

    Read the article
