Search Results

Search found 1659 results on 67 pages for 'bandwidth throttling'.

Page 47/67 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Is Gmail Being Blocked by my ISP?

    - by james
    EDIT: I thought I had pinpointed the problem. Just now I tried to go to the Firefox add-ons page, which uses HTTPS, and Gmail also uses HTTPS, so I figured I was unable to load HTTPS pages on this computer. But then I went to a bank site that also uses HTTPS and it loads just fine. Sigh... I asked this over at Super User but they weren't able to help, so I was hoping the sysadmins here will be able to advise as to what's wrong. Although the issue here is with a PC and not a server, it still deals with networking, so I hope it's not too irrelevant.

    The issue: I have a desktop on which I cannot access Gmail or the YouTube sign-in (since YouTube is owned by Google, I believe they both use the same sign-in system). On other computers that use the same connection via a wireless router, I can access both Gmail and the YouTube sign-in just fine. On this computer, which doesn't have a wireless card, I connect via an Ethernet cable (through a USB converter, since the Ethernet port doesn't work anymore), and I can access all sites and services, including AOL and Hotmail. But only with Gmail do I get complete and utter throttling. I even turned off my AV and firewall momentarily, with no luck. The Gmail login page starts to load and at about the midpoint it just stays there loading and loading and loading... it never finishes. I have tried everything: I reset the modem and router multiple times, and I reinstalled my operating system, going from Vista to Windows 7, hoping that a complete reinstall would solve the issue, but no luck. And yes, I am going to call my ISP, but not to solve this issue, to cancel them; I want to upgrade from DSL to cable anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay, someone let me know and I will).

    P.S. All this happened in one day; before that, Gmail was perfectly accessible on this computer. I can't remember anything special happening that day. The only thing I can think of is that my ISP or Google itself is blocking this computer based on its MAC address, but I don't know if that's even done.

    Additional info: PC: Windows 7 Ultimate 32-bit. Connection type: DSL. Connecting medium: Ethernet cable via USB converter. I should mention that I can access Gmail and YouTube just fine through an IP proxy service.

    Read the article

  • Loss of wireless network connectivity when playing video via HDMI cable

    - by Jeff Fohl
    Hi folks - new to Super User, so I hope this question fits in with the guidelines. I'm having a very strange problem and I am at a loss as to how to continue troubleshooting it. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels.

    This is my setup: I have a Dell H2C 730x running Windows 7 64-bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6 Mbps.

    The problem occurs when I stream video (or even just browse the web) on the Optoma projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time; I can move the mouse around on the projector without triggering the problem. I tried pinging my router while playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or it times out completely. So it is clearly something local to my machine, and not some sort of throttling occurring down the line. I would think it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas?

    EDIT: I am now leaning towards the possibility that the HDMI cable is somehow interfering with the wireless network when large amounts of data are being pushed through the cable. Is this possible?

    Read the article

  • IIS 6.0 Server Too Busy HTTP 503 Connection_Dropped DefaultAppPool

    - by Shiraz Bhaiji
    We have a site which is running on a Windows 2003 cluster with two 64-bit machines. The site needs to be able to cope with over 20,000 concurrent users. One of the things the site does is allow the download of a 2 MB file (which is cached in memory). We have low CPU and memory usage, and we also have surplus bandwidth. It appears that we are running out of connections due to the time it takes users to download the file (some users have slow internet connections). In the IIS log we get HTTP 503 errors. In the HTTPErr log we get mainly Connection_Dropped DefaultAppPool, with some Timer_EntityBody DefaultAppPool. The question is: how can we configure IIS to allow more connections? Or is there something I am missing here? Thanks, Shiraz

    Read the article

  • fast forward/streaming in html5 video? RTSP?

    - by karpodiem
    Right now I've got a few .mp4's hosted on Amazon S3. I know that S3 has support for RTMP, which is useful for streaming Flash. I'd like to accomplish something similar with HTML5 video; my biggest issue is that I need the ability to seek (fast forward) to a particular part of the video. Right now when I request the video, it loads the entire video before playing, which is a waste of bandwidth and a dealbreaker. How could this be implemented? Is it even possible? RTSP looks like it could be a good bet, but I haven't found whether anyone has rolled this out successfully.
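    One detail worth verifying, stated here as a hedged aside: seeking within an HTML5 <video> served by plain HTTP generally relies on the server honouring byte-range requests (S3 does) and, for MP4, on the moov atom sitting near the start of the file. A minimal C# sketch that checks whether a given URL answers a range request with 206 Partial Content; the bucket URL below is a placeholder:

        using System;
        using System.Net;

        class RangeCheck
        {
            static void Main()
            {
                // Placeholder URL; substitute the real S3 object URL.
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://example-bucket.s3.amazonaws.com/video.mp4");
                request.AddRange(0, 1023); // ask for the first 1 KB only

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    // PartialContent (206) means the host supports range requests,
                    // which is what a <video> element needs to seek without
                    // downloading the whole file first.
                    Console.WriteLine(response.StatusCode);
                    Console.WriteLine(response.Headers["Accept-Ranges"]);
                }
            }
        }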

    Read the article

  • How to download media content on demand and reuse from browser cache in silverlight

    - by Andrew
    Hi. I have a problem with a simple Silverlight app. The app has a couple of buttons; each button sets the MediaElement source to a short MP3 file and plays it. My problem is that when I press the same button a second time it re-downloads the MP3 file, but I think it shouldn't; instead it should use the copy of the MP3 that the browser cached when the button was pressed the first time. I'm using SL4 and the MediaElement sources are just simple URIs. I need it to work so that once an MP3 has been downloaded it is cached on the client and further button clicks use the cached version instead of downloading it again and wasting my bandwidth. Any ideas?
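    A minimal Silverlight-style sketch of one way to avoid the re-download, assuming the MP3s fit within the application's isolated storage quota; the class name, method names and file names here are invented for illustration. The file is fetched once with WebClient, saved to isolated storage, and every later click feeds the cached copy to MediaElement.SetSource:

        using System;
        using System.IO;
        using System.IO.IsolatedStorage;
        using System.Net;
        using System.Windows.Controls;

        public class CachedAudioPlayer
        {
            private readonly MediaElement player;

            public CachedAudioPlayer(MediaElement player)
            {
                this.player = player;
            }

            public void Play(Uri remoteUri, string cacheFileName)
            {
                IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication();

                if (store.FileExists(cacheFileName))
                {
                    // Second and later clicks: play the locally cached copy.
                    player.SetSource(new IsolatedStorageFileStream(cacheFileName, FileMode.Open, store));
                    player.Play();
                    return;
                }

                // First click: download once, save to isolated storage, then play.
                var client = new WebClient();
                client.OpenReadCompleted += (s, e) =>
                {
                    using (var file = new IsolatedStorageFileStream(cacheFileName, FileMode.Create, store))
                    {
                        byte[] buffer = new byte[4096];
                        int read;
                        while ((read = e.Result.Read(buffer, 0, buffer.Length)) > 0)
                        {
                            file.Write(buffer, 0, read);
                        }
                    }
                    player.SetSource(new IsolatedStorageFileStream(cacheFileName, FileMode.Open, store));
                    player.Play();
                };
                client.OpenReadAsync(remoteUri);
            }
        }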

    Read the article

  • Why would HTTP transfer via wget be faster than lftp/pget?

    - by jondahl
    I'm building software that needs to do massive amounts of file transfer via both HTTP and FTP. Often I get a faster HTTP download with a multi-connection download accelerator like axel or lftp with pget. In some cases, I've seen 2x-3x faster file transfer using something like:

        axel http://example.com/somefile

    or

        lftp -e 'pget -n 5 http://example.com/somefile;quit'

    vs. just using wget:

        wget http://example.com/somefile

    But other times, wget is significantly faster than lftp. Strangely, this is true even when I use lftp's pget with a single connection, like so:

        lftp -e 'pget -n 1 http://example.com/somefile;quit'

    I understand that downloading a file via multiple connections won't always result in a speedup, depending on how bandwidth is constrained. But why would it be slower, especially when calling lftp/pget with -n 1?

    Read the article

  • Win CE 6.0 client using WCF Services

    - by Sean
    We have a Win CE 6.0 device that is required to consume services that will be provided using WCF. We are attempting to reduce bandwidth usage as much as possible and with a simple test we have found that using UDP instead of HTTP saved significant data usage. I understand there are limitations regarding WCF on .NET Compact Framework 3.5 devices and was curious what people thought would be the appropriate way forward. Would it make sense to develop a custom UDP binding, and would that work for both sides? Any feedback would be appreciated. Thanks.
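    As a point of comparison rather than a recommendation, a bare-socket sketch (not a WCF custom binding) of what the UDP path can look like on the client side; the host, port and payload are placeholders, and a real design still needs its own framing, retries and timeouts, since UDP gives no delivery guarantees:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class UdpQuery
        {
            static void Main()
            {
                UdpClient client = new UdpClient();
                try
                {
                    // Tiny request payload instead of a SOAP envelope over HTTP.
                    byte[] request = Encoding.UTF8.GetBytes("STATUS?");
                    client.Send(request, request.Length, "server.example.com", 9000);

                    // Blocks until a datagram arrives (add a timeout in real code).
                    IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] reply = client.Receive(ref remote);
                    Console.WriteLine(Encoding.UTF8.GetString(reply, 0, reply.Length));
                }
                finally
                {
                    client.Close();
                }
            }
        }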

    Read the article

  • Is there something like bsdiff/Courgette for jar files?

    - by Ken Liu
    Google uses bsdiff and Courgette for patching binary files like the Chrome distribution. Do any similar tools exist for patching jar files? I am updating jar files remotely over a bandwidth-limited connection and would like to minimize the amount of data sent. I have some control over the client machine (i.e. I can run scripts locally) and I am guaranteed that the target application will not be running at the time. I know that I can patch Java applications by putting updated class files in the classpath, but I would prefer a cleaner method for doing updates. It would be good if I could start with the target jar file, apply a binary patch, and wind up with an updated jar file that is identical (bitwise) to the new jar from which the patch was created.

    Read the article

  • Does anybody actually use FaultReasonText to localize faults from WCF services?

    - by urig
    There is a localization mechanism in WCF that enables one to localize faults returned to the client, via FaultReasonText objects that are part of the fault. The way this is done is that you pass all possible translations of the fault's message as a collection of FaultReasonText entries in the FaultReason. This, I understand, is based on SOAP 1.2. Does anyone actually use this mechanism? Isn't it wasteful in terms of bandwidth? Why would you send all possible translations to a client that is (probably) only interested in a specific language?
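    For readers unfamiliar with the mechanism being discussed, a minimal sketch (messages and fault code invented for illustration) of building a fault whose FaultReason carries several translations. The bandwidth concern raised above comes from the fact that every translation in the collection is serialized into the fault; sending only the caller's culture, for example one taken from a message header, avoids that.

        using System.Collections.Generic;
        using System.Globalization;
        using System.ServiceModel;

        static class FaultHelper
        {
            public static FaultException BuildLocalizedFault()
            {
                var translations = new List<FaultReasonText>
                {
                    new FaultReasonText("The order could not be found.", new CultureInfo("en-US")),
                    new FaultReasonText("La commande est introuvable.", new CultureInfo("fr-FR")),
                    new FaultReasonText("Die Bestellung wurde nicht gefunden.", new CultureInfo("de-DE"))
                };

                // Every FaultReasonText in the collection goes over the wire.
                return new FaultException(new FaultReason(translations),
                                          new FaultCode("OrderNotFound"));
            }
        }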

    Read the article

  • Only send populated object properties over WCF?

    - by dlanod
    I have an object that is being sent across WCF that is essentially a property holder - it can potentially have a large number of properties, i.e. up to 100, but in general only a small subset will be set, up to 10 for instance. Example:

        [DataContract(Namespace = "...")]
        public class Monkey
        {
            [DataMember] public string Arms { get; set; }
            [DataMember] public string Legs { get; set; }
            [DataMember] public string Heads { get; set; }
            [DataMember] public string Feet { get; set; }
            [DataMember] public string Bodies { get; set; }
            /* repeat another X times */
        }

    Is there a way to tell WCF to only send the populated properties over the wire? It seems like a potential waste of bandwidth to send over the full object.
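    One common way to get this behaviour, sketched below under the assumption that unset properties are simply left null: mark the members with EmitDefaultValue = false so that any member still at its default value is omitted from the serialized message (the namespace here is a placeholder):

        using System.Runtime.Serialization;

        [DataContract(Namespace = "http://example.com/monkeys")] // placeholder namespace
        public class Monkey
        {
            [DataMember(EmitDefaultValue = false)]
            public string Arms { get; set; }

            [DataMember(EmitDefaultValue = false)]
            public string Legs { get; set; }

            // ... repeated for the remaining properties; any member left at its
            // default value (null for strings) is not written to the wire.
        }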

    Read the article

  • science.js’s loess() output is identical to input

    - by user3710111
    Rendered project available here. The line is supposed to be a trend line (as rendered with LOESS), but it merely follows each data point instead. I am no stats wonk, so maybe it makes sense that a LOESS function's output would match the input as seen in the above example, but it strikes me as being wrong. Here is the relevant bit of code:

        var loess = science.stats.loess().bandwidth(.2);
        var xVal = data.map(function(d) { return d.date; });
        var yVal = data.map(function(d) { return d.A; });
        var loessData = loess([xVal], [yVal])[0];
        console.log(yVal);
        console.log(loessData);

    Read the article

  • Large number array compression

    - by gatapia
    Hi All, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, the database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing a base 62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:

        - Compress large number arrays into strings for JSONP transfer (so I think UTF is out)
        - Be relatively fast; I'm not expecting the same performance I have now, but I also don't want gzip compression either

    Any ideas would be greatly appreciated. Thanks, Guido Tapia
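    Note that plain Number.prototype.toString only accepts a radix up to 36, so a base-62 version needs a hand-rolled codec. A language-agnostic illustration of the radix idea, sketched in C# here with an illustrative digit alphabet: non-negative integers come out shorter than their base-10 string form, at the cost of a small encode/decode step on each end.

        using System;
        using System.Text;

        static class Base62
        {
            private const string Alphabet =
                "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

            public static string Encode(long value)
            {
                if (value == 0) return "0";
                var sb = new StringBuilder();
                while (value > 0)
                {
                    sb.Insert(0, Alphabet[(int)(value % 62)]); // most significant digit ends up first
                    value /= 62;
                }
                return sb.ToString();
            }

            public static long Decode(string text)
            {
                long value = 0;
                foreach (char c in text)
                {
                    value = value * 62 + Alphabet.IndexOf(c);
                }
                return value;
            }
        }

        // With this alphabet, Base62.Encode(1234567890) == "1LY7VK", and Decode round-trips it.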

    Read the article

  • Java dropping half of UDP packets

    - by Andrew Klofas
    Greetings, I have a simple client/server setup. The server is in C, and the client that queries the server is Java. My problem is that when I send bandwidth-intensive data over the connection, such as video frames, it drops up to half the packets. I make sure that I properly fragment the UDP packets on the server side (UDP has a max payload length of 2^16). I verified that the server is sending the packets (by printing the result of sendto()). But the Java side doesn't seem to be getting half the data. Furthermore, when I switch to TCP, all the video frames get through, but the latency starts to build up, adding several seconds of delay after a few seconds of runtime. Is there anything obvious that I'm missing? I just can't seem to figure this out. Thanks, Andrew

    Read the article

  • Should I put my output files in source control?

    - by sebastiaan
    I've been asked to put every single file in my project under source control, including the database file (not the schema, the complete file). This seems wrong to me, but I can't explain why. Every resource I find about source control tells me not to put generated output files in a source control system, and I understand: they're not "source" files. However, I've been presented with the following reasoning:

        - Who cares? We have plenty of bandwidth.
        - I don't mind having to resolve a conflict each time I get the latest revision; it's just one click.
        - It's so much more convenient than having to think about good ignore files.
        - If I have to add an external DLL file to the bin folder now, I can't forget to put it in source control, as the bin folder is no longer being ignored.

    The simple solution for the last bullet point is to add the file in a libraries folder and reference it from the project. Please explain if and why putting generated output files under source control is wrong.

    Read the article

  • HttpWebRequest is extremely slow!

    - by Earlz
    Hello, I am using an open source library to connect to my webserver. I was concerned that the webserver was going extremely slow, so I tried a simple test in Ruby and got these results:

        Ruby program: 2.11 seconds for 100 HTTP GETs
        C# library:   20.81 seconds for 100 HTTP GETs

    I have profiled and found the problem to be this function:

        private HttpWebResponse GetRawResponse(HttpWebRequest request)
        {
            HttpWebResponse raw = null;
            try
            {
                raw = (HttpWebResponse)request.GetResponse(); // This line!
            }
            catch (WebException ex)
            {
                if (ex.Response is HttpWebResponse)
                {
                    raw = ex.Response as HttpWebResponse;
                }
            }
            return raw;
        }

    The marked line takes over 1 second to complete by itself, while the Ruby program making one request takes 0.3 seconds. I am also doing all of these tests against 127.0.0.1, so network bandwidth is not an issue. What could be causing this huge slowdown?
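    A hedged checklist in code form for this particular symptom: HttpWebRequest taking about a second per call even against 127.0.0.1 is very often down to automatic proxy detection or the Expect: 100-continue handshake. The settings below are things to try, not a guaranteed fix:

        using System.Net;

        static class HttpTuning
        {
            public static HttpWebRequest Create(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);

                // Skip proxy auto-detection, which can add a large fixed cost per request.
                request.Proxy = null;

                // Don't send Expect: 100-continue and wait for the interim response.
                request.ServicePoint.Expect100Continue = false;

                // Optional: disable Nagle's algorithm for small request/response pairs.
                request.ServicePoint.UseNagleAlgorithm = false;

                return request;
            }
        }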

    Read the article

  • On-the-fly lossless image compression

    - by geschema
    I have an embedded application where an image scanner sends out a stream of 16-bit pixels that are later assembled to a grayscale image. As I need to both save this data locally and forward it to a network interface, I'd like to compress the data stream to reduce the required storage space and network bandwidth. Is there a simple algorithm that I can use to losslessly compress the pixel data? I first thought of computing the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit quantities so the difference can be anywhere in the range -65535 .. +65535 which leads to potentially huge codeword lengths. If a few really long codewords occur in a row, I'll run into buffer overflow problems.
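    One way around the signed-difference problem described above, offered as an assumption rather than a prescription: delta-encode the pixels and then zig-zag map each delta, so small differences of either sign become small unsigned integers and the symbol range stays friendly for a Huffman (or Rice/Golomb) coder. A minimal C# sketch of the mapping:

        static class DeltaCodec
        {
            // Map a signed delta to an unsigned value: 0,-1,1,-2,2,... -> 0,1,2,3,4,...
            private static uint ZigZag(int delta)
            {
                return (uint)((delta << 1) ^ (delta >> 31));
            }

            public static uint[] Encode(ushort[] pixels)
            {
                var output = new uint[pixels.Length];
                int previous = 0;
                for (int i = 0; i < pixels.Length; i++)
                {
                    int delta = pixels[i] - previous;   // -65535 .. +65535
                    output[i] = ZigZag(delta);          // 0 .. 131070, small when deltas are small
                    previous = pixels[i];
                }
                return output;
            }
        }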

    Read the article

  • Virtual dedicated server repeatedly draining RAM, OOM constantly

    - by Deerly
    My Linux (Fedora Red Hat 7) virtual dedicated server has been experiencing OOM multiple times a day for the past several days. I thought the issue was with spamd/spamassassin, but after disabling this the errors remain. The highest usage displayed by ps faux --cumulative:

        USER   PID    %CPU %MEM VSZ    RSS    TTY STAT START TIME COMMAND
        root   28412  8.7  0.5  309572 109308 ?   Sl   22:15 0:17 /usr/java/jdk1.
        mysql  7716   0.0  0.0  136256 18000  ?   Sl   22:12 0:00  _ /usr/libexe
        named  17697  0.0  0.0  120904 15316  ?   Ssl  22:09 0:00 /usr/sbin/named

    I'm not running any Java applications, so I'm not sure why the top entry is showing up. It is frustrating, as I barely have anything running on the server and use the tiniest fraction of my bandwidth. Any help or suggestions on zeroing in on the source of the drain would be much appreciated! Thanks!

    Read the article

  • Ask a DNS server what sites it hosts - and how to possibly prevent misuse

    - by Exit
    I've got a server on which I host my company website as well as some of my clients' sites. I noticed that a domain which I created, but never used, was being attacked by a poke-and-hope hacker. I imagine the hacker collected the domain by hitting my DNS server and requesting what domains are hosted on it. So, in the interest of prevention and better server management, how would I ask my own DNS server (Linux, CentOS 4) what sites are being hosted on it? Also, is there a way to prevent these types of attacks by hiding this information? I would assume that DNS servers need to keep some information public, but I'm not sure if there is something that most hosts do to help prevent these bandwidth-wasting poke-and-hope attacks. Thanks in advance.

    Read the article

  • Wordpress rewrite image path

    - by Brad
    I've got a website that is running WordPress. It has several pictures that I am retrieving from a datafeed. The images from the datafeed are at locations like:

        http://image4.example.com/640/examples/example.jpg
        http://image4.example.com/640/example.jpg

    The image4 and 640 parts of the path can change. I want to rewrite the images so that they appear to come from my website. I've tried:

        rewritecond %{HTTP_HOST} !^image4.example.com$
        rewriterule ^([^/]+)$ http://image4.mywebsite.com/$1 [L,R=301]

    but it doesn't work. I don't know much about mod_rewrite. Any help would be appreciated, and no, I'm not hijacking the images; I have permission to use them and the bandwidth. Thanks -Brad

    Read the article

  • Load testing a quicktime streaming server from ubuntu machine

    - by ebeland
    I have software that can launch and control multiple Firefox browsers on Ubuntu EC2 images. I need to run a small load test against a QuickTime Streaming Server. The stream starts automatically when loaded in a browser that has the QuickTime plugin, so I don't need to automate the stream once it starts. Alternatively, I can also make these machines run arbitrary Ruby code or executables. How can I get these Ubuntu machines to pull in the stream? Also, how can I capture bandwidth usage (maybe with a shell script?) on the worker machines?

    Read the article

  • Filtering at server or at client?

    - by ablmf
    I am thinking about how to build an advertising site that works like Twitter. That means most users don't visit the site with a browser; instead they run a dedicated client application on their PC or smartphone. They set some filters describing what kind of advertisements they like, and when a new post that fulfills their needs appears, the client shows a notification. To keep the client as close to real time as possible, it has to poll the server at a short interval. The question is: should I do the filtering on the server side when the client polls, or should I simply transfer all new posts to the client and let the client do the filtering? Server-side filtering might cost too many CPU cycles on the server, but transferring every post blindly to the client might waste a lot of bandwidth. Just a brain game. :)

    Read the article

  • Windows Server 2003 Hacked - Files Being Uploaded

    - by jreedinc
    Blank directories are being created on my Windows Server 2003 virtual server, with subdirectories that are weird (for example: "88ÿ ÿ ÿÿþþ þþ13þ"). It looks like they are uploading bootlegged DVDs and pirated software. All of my bandwidth and file space is being eaten up. Could this be a shared-permissions issue? Where should I look to further investigate this? My security permissions for the directory that is being hit are as follows:

        Administrators - ALL GRANTED
        IIS_WPG - Read & Execute, List Folder Contents, Read
        Internet Guest - DENY
        SYSTEM - ALL GRANTED
        Users - Read & Execute, List Folder Contents, Read

    My Event Viewer is showing many Logon/Logoff events with no IP.

    Read the article

  • Filesystem synchronization library?

    - by IsaacB
    Hi, I've got 10 GB of files to back up daily to another site. The client is way out in the country, so bandwidth is an issue. Does anyone know of existing software or libraries that help keep a folder and its files synchronized across a slow link, i.e. that only send files across if they have changed? Some kind of hash checking would be nice too, to at least confirm the two sides are the same. I don't mind paying some money for it, seeing as it might take me several weeks to a month to implement something decent on my own; I just don't want to reinvent the wheel here. BTW, it is a Windows shop (they have an in-house Windows IT guy), so Windows is preferred. I also have 10 GB of SQL Server 2000 databases to go across. Is SQL Server's replication reliable? Thanks!

    Read the article

  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    Trying to answer a question to come up with a testing plan. It has to do with a website and/or web service that queries a SQL Server to get data and display it to the user.

        - The solution must be able to handle an estimated 2,000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries via the website/web service a month. The system is used at multiple times during a 24-hour period; however, networking and bandwidth traffic decreases after 5 pm.
        - Two Windows 2003 servers are used, one for web, another for SQL. Both are located in the same room. User access is varied and users can be far/near (it's a centralized system); users access via www

    Read the article

  • Current state of client-side XSLT

    - by Casey
    Last I heard, Blizzard was one of the few companies to put client-side XSLT into practice (2008). Is this still the case in 2011, or are more people now exploring this technique in production? It seems that modern browsers (IE9, FF4, Chrome) and client processing power are primed to exploit this standard for tangible savings in server CPU power and bandwidth on large-scale properties. Am I missing something?

    The negative aspects I'm aware of include:

        - additional rendering time
        - additional assets required on uncached page load
        - an additional layer of complexity
        - noticeably less developer experience than with server-side template techniques

    The benefits I perceive include:

        - distributed template composition (offloaded to the client)
        - caching of common template fragments, offloaded to the client
        - logical separation of document structure and data
        - a well-documented web standard supported by all modern browsers

    Finally, although I know it's impossible to predict the future, I am curious to hear opinions on whether or not client-side XSLT's day will come. With interest in HTML5 driving users to upgrade their browsers and developers to explore new techniques, I would say yes. How about you? Thanks in advance, Casey

    Read the article
