Search Results

Search found 5136 results on 206 pages for 'max dwayne'.


  • thttpd: Daemon exiting, I don't know why

    - by Tobe
    I run thttpd to serve some perl files. But for some reason the daemon is exiting every second or third day. Strangely it's always at 6.25 am. Here are some lines from syslog:

        Nov 10 06:25:40 b1 thttpd[6370]: up 86404 seconds, stats for 86404 seconds:
        Nov 10 06:25:40 b1 thttpd[6370]: thttpd - 25 connections (0.000289338/sec), 1 max simultaneous, 625000 bytes (7.23346/sec), 2 httpd_conns allocated
        Nov 10 06:25:40 b1 thttpd[6370]: libhttpd - 30 strings allocated, 8200 bytes (273.333 bytes/str)
        Nov 10 06:25:40 b1 thttpd[6370]: map cache - 0 allocated, 0 active (0 bytes), 0 free; hash size: 0; expire age: 1800
        Nov 10 06:25:40 b1 thttpd[6370]: fdwatch - 20902 selects (0.24191/sec)
        Nov 10 06:25:40 b1 thttpd[6370]: timers - 2 allocated, 2 active, 0 free
        Nov 10 06:25:40 b1 thttpd[6370]: exiting

    Any ideas?

    Read the article

  • Can I upgrade an Asus M51Sn laptop to 2x4GB of RAM? (DDR2)

    - by matteo
    My Asus M51Sn has 2 RAM slots which currently have 1x1GB + 1x2GB DDR2-800 SO-DIMM RAM modules installed. I've found out that 4GB DDR2 SO-DIMM modules do exist, though they are impossible to find in local stores near here, but I've found them in online stores like this one: http://www.pccomponentes.com/g_skill_ddr2_800_pc2_6400_4gb_so_dimm.html They seem to meet the specification, so can I replace both my current modules with 2x4GB modules and reach a total of 8GB? Or should I worry about some limit (e.g. 4GB max, or 2GB per slot) imposed by the motherboard, chipset or whatever? (I currently use Ubuntu 12.04 32-bit, so I plan to use the PAE kernel, which supposedly supports more than 4GB of RAM on a 32-bit system; or I may consider switching to 64-bit Ubuntu. The question is about hardware limitations, not OS limitations.)

    Read the article

  • Problems Running Cherokee Web Server Admin - config_reader.c:249 - Parsing error

    - by Sebastian
    I'm running Cherokee web server 0.99.30 on Ubuntu Hardy and I have been having some issues getting the admin interface to run properly. When I run sudo cherokee-admin -b I get:

        Login:
          User:              admin
          One-time Password: {password}

        Web Interface:
          URL:               http://localhost:9090/

        [20/11/2009 22:57:29.733] (error) config_reader.c:249 - Parsing error
        Cherokee Web Server 0.99.30 (Nov 20 2009): Listening on port ALL:9090, TLS disabled, IPv6 disabled, using epoll, 4096 fds system limit, max. 2041 connections, caching I/O, single thread

    When I go to the admin page I get a 503 Service Unavailable error page. Any idea how I could fix this? Thanks

    Read the article

  • Is a 1000 Hz Linux kernel necessary if I have tickless and high-resolution timers?

    - by Bob
    I am trying to improve performance on my server. I have a few processes that need low jitter (less than 10ms variance). I have a load average of 4 maximum on an i7-920 (4 physical cores, 8 with HT). There are about 10 processes ranging from 40% to 90% of a core in user mode. System usage is 3% total, and total CPU usage is 80% max. Will changing the kernel from 100 Hz to 1000 Hz improve the jitter if tickless and high-resolution timers are already set? This page seems to indicate it still does something: https://lkml.org/lkml/2009/4/28/401 How about changing from voluntary preemption (PREEMPT_VOLUNTARY) to preemptible (PREEMPT)?

    Read the article

  • Chromium always starts as floating in awesome.wm

    - by xhochy
    I'm using awesome as the window manager for a small surf & info terminal. Chromium is started directly after login on the first workspace and should be displayed fullscreen. I've set the layout of all workspaces to awful.layout.suit.max and followed the awesome FAQ so that Chromium and all other (automatically) started programs will be shown on the right workspace. All programs except Chromium start correctly in fullscreen mode. I tried

        { rule = { class = "chromium-browser" }, properties = { floating = false, tag = tags[1][1] } }

    and

        { rule = { class = "chromium-browser" }, properties = { tag = tags[1][1] } }

    but Chromium always starts in floating mode. This is a bit annoying, as you still see awesome's panel at the top.

    Read the article

  • Is there a rule of thumb for RAM upgrades?

    - by Retrosaur
    I'm having a hard time figuring out whether or not a certain laptop or desktop's RAM can be upgraded. Is there a rule of thumb that determines how much RAM one could add to a system, without looking it up on external websites? A little background information: I work in computer sales at a computer electronics store, so it is virtually impossible for me to install any software that would detect computer specs, and I get a lot of customers who wonder what the usual laptop/desktop RAM upgrade options are. Is there a certain rule that adding more RAM follows? Does it make a difference if it's a 32-bit or 64-bit machine? The OS?

    Read the article

  • Windows calibration settings persistence over reboots

    - by Dmatig
    I'm running Windows 7 64-bit on a laptop (Samsung R560) using a cheap external CRT monitor. The screen is a little dark for my liking, despite having the monitor's physical settings at the max for all the brightness-related options. Windows 7 has a tool called "Calibrate display color" (search for it in the Start menu). Running this tool, you get a slider that lets you adjust the gamma, and sliding it up gives me acceptable brightness levels. Unfortunately, upon reboot (and after certain other activities, such as running certain fullscreen games) this is reset to the default. Is there a way to make it persistent? Some registry setting? A batch file to run at startup, even (less preferable, as I'd like games to run brighter too)?

    Read the article

  • Zooming out to fit all annotations in MapKit

    - by Krismutt
    Hey everybody! I want to zoom out so that all my annotations (my location and one more annotation) fit on the screen. What am I doing wrong? I also get the following warning: "'getDistanceFrom:' is deprecated".

        -(void)viewDidLoad {
            [super viewDidLoad];
            mapView = [[MKMapView alloc] initWithFrame:self.view.bounds];
            mapView.showsUserLocation = TRUE;
            mapView.delegate = self;
            mapView.mapType = MKMapTypeStandard;
            mapView.zoomEnabled = YES;
            mapView.scrollEnabled = YES;
            mapView.userInteractionEnabled = YES;
            [mapView.userLocation setTitle:@"Nuvarande plats"];
            [mapView.userLocation setSubtitle:@"Du är här"];
            [self.view insertSubview:mapView atIndex:0];

            self.locationManager = [[[CLLocationManager alloc] init] autorelease];
            locationManager.delegate = self;
            locationManager.desiredAccuracy = kCLLocationAccuracyBest;
            [locationManager startUpdatingLocation];

            [mapView release];
        }

        -(void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
            NSLog(@"Position uppdateras");
            location = newLocation.coordinate;
            if (friZoom) {
                MKCoordinateRegion region;
                region.center.latitude = location.latitude;
                region.center.longitude = location.longitude;
                MKCoordinateSpan span;
                span.latitudeDelta = 0.01;
                span.longitudeDelta = 0.01;
                region.span = span;
                [mapView setRegion:region animated:TRUE];
            }
        }

        - (MKAnnotationView *)mapView:(MKMapView *)mapview viewForAnnotation:(id <MKAnnotation>)knappnal {
            if ([knappnal isKindOfClass:MKUserLocation.class]) {
                return nil;
            }
            MKPinAnnotationView *knappnalView = (MKPinAnnotationView *)[mapview dequeueReusableAnnotationViewWithIdentifier:@"annot"];
            if (!knappnalView) {
                knappnalView = [[[MKPinAnnotationView alloc] initWithAnnotation:knappnal reuseIdentifier:@"annot"] autorelease];
                knappnalView.pinColor = MKPinAnnotationColorRed;
                knappnalView.animatesDrop = YES;
                knappnalView.canShowCallout = YES;
            } else {
                knappnalView.annotation = knappnal;
            }
            return knappnalView;
        }

        - (IBAction)storeLocationInfo:(id)sender {
            SparaPosition *position = [[SparaPosition alloc] initWithCoordinate:location];
            [mapView addAnnotation:position];
            savedPosition = location;
        }

        - (IBAction)visaPosition:(id)sender {
            CLLocationCoordinate2D southWest = location;
            CLLocationCoordinate2D northEast = savedPosition;
            southWest.latitude = MIN(southWest.latitude, location.latitude);
            southWest.longitude = MIN(southWest.longitude, location.longitude);
            northEast.latitude = MAX(northEast.latitude, savedPosition.latitude);
            northEast.longitude = MAX(northEast.longitude, savedPosition.longitude);

            CLLocation *locSouthWest = [[CLLocation alloc] initWithLatitude:southWest.latitude longitude:southWest.longitude];
            CLLocation *locNorthEast = [[CLLocation alloc] initWithLatitude:northEast.latitude longitude:northEast.longitude];
            CLLocationDistance meters = [locSouthWest getDistanceFrom:locNorthEast];

            MKCoordinateRegion region;
            region.center.latitude = (southWest.latitude + northEast.latitude) / 2.0;
            region.center.longitude = (southWest.longitude + northEast.longitude) / 2.0;
            region.span.latitudeDelta = meters / 111319.5;
            region.span.longitudeDelta = 0.0;
            region = [mapView regionThatFits:region];
            [mapView setRegion:region animated:YES];

            [locSouthWest release];
            [locNorthEast release];
        }

    Would really appreciate an answer!
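
    As a plain-Python illustration of the bounding-box arithmetic in visaPosition above (not MapKit code; the helper name and example coordinates are made up):

        def region_fitting(a, b):
            """a and b are (latitude, longitude) tuples; return (center, span) in degrees."""
            south, north = min(a[0], b[0]), max(a[0], b[0])
            west, east = min(a[1], b[1]), max(a[1], b[1])
            center = ((south + north) / 2.0, (west + east) / 2.0)
            # The span must cover the full extent on both axes. One degree of latitude is
            # roughly 111319.5 m, the same constant the Objective-C code uses to turn the
            # metre distance back into a latitudeDelta.
            span = (north - south, east - west)
            return center, span

        # e.g. current location vs. a saved position
        print(region_fitting((59.33, 18.06), (59.35, 18.10)))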

    Read the article

  • Remote connect into macbook pro at a different resolution

    - by user60277
    Hello, I have a Dell laptop with Windows 7 on it; its resolution is 1920x1080. I want to connect to a MacBook Pro at that resolution. The MacBook Pro has a native resolution of 1440x900, so when I VNC into it I only see a 1440x900 box with black borders at full resolution. The MacBook Pro can drive displays at resolutions up to 2560x1440. What program can I use to connect to the MacBook at the full 1920x1080 resolution? I can use Remote Desktop to connect from the Dell laptop to another Dell laptop that has a 1440x900 maximum resolution, and in that case I can expand the window to 1920x1080. I'm using the TightVNC viewer on Windows. Thanks

    Read the article

  • MacBook Pro shutting down instantly after attempt to update firmware

    - by Luke Dennis
    I tried to apply a firmware update on a MacBook Pro 2.4GHz (2008), and after rebooting, the fans kicked up to maximum, the screen lit up, and then it immediately died. Now when I try to boot, it does the same thing: the fans crank to max, the screen lights up, and then it dies after about 2 seconds. Resetting the power manager does nothing. It doesn't stay up long enough to choose another boot drive or boot from CD. I have no idea what else to try. Help?

    Read the article

  • Caching DNS server (bind9) CPU usage is extremely high

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:

        Xeon dual-core 2.8GHz
        4GB of RAM
        CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
        bind 9.4.2

    rndc status reports: recursive clients: 666/4900/5000. There are about 300 new queries (not in cache) per second. Bind always uses 100% of one core with the single-threaded config. After I recompiled it as multi-threaded, it uses nearly 200% across two cores :( No iowait, only sys and user. I searched around but didn't see any info about how bind uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:

        cat /proc/meminfo
        MemTotal:     4147876 kB
        MemFree:      1863972 kB
        Buffers:       143632 kB
        Cached:        372792 kB
        SwapCached:         0 kB
        Active:       1916804 kB
        Inactive:      276056 kB

    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2GB. Since we get uncached queries every second, the cache should theoretically keep growing until RAM is exhausted, but it doesn't. Do you have any idea? TIA, -Gk

    Read the article

  • How to avoid maximum Workgroup Manager connections in Mac OS X Server 10.6

    - by Stephan
    Is there a limit in Mac OS X Server (10.6) Workgroup Manager on concurrent connections to a server? I have an OS X server up and running and Open Directory configured, but I am not able to log in remotely: I get the message that the maximum number of connections for Workgroup Manager has already been reached and that I should wait for a user to disconnect. Even after a restart I get this message remotely. However, locally on the server I can start Workgroup Manager without any issues; it always lets me connect. Any advice on what I need to do to make Workgroup Manager work from a remote location? I could not find any max-connection setting in Server Admin and nothing in the slapd log files. The server license says unlimited, so I am quite sure it is not simply a regular error message telling me I should upgrade.

    Read the article

  • Are there pitfalls to using incompatible RAM (frequencies) in motherboards?

    - by osij2is
    I'd like to use 2 x 4GB DDR3-1600 DIMMs in a motherboard capable of only DDR3-1066. The DDR3-1600 is on sale and costs the same as the 1066 DIMMs, and it'd be nice to have these faster sticks around should I upgrade the motherboard. I assume the RAM can underclock itself or be changed in the BIOS. While it's obviously a less-than-ideal situation, I don't know if there are other unintended consequences in terms of stability, performance, or longevity of the board and said RAM. Am I doing any damage to the memory controller or RAM? I've always bought RAM at the max speed specified for the motherboard and never gone over, so I'm not sure if there are any caveats to this at all. Edit: I intend to use the RAM in pairs. I know that mixing RAM speeds is just a bad idea.

    Read the article

  • Windows XP SP3 TCP/IP No buffer space available

    - by Natalia
    I have exactly the same problem as here: Windows XP TCP/IP No buffer space available. On Windows XP Pro SP3, if one does an experiment that tries to open TCP/IP sockets in a loop (basically, listen on port 7000, listen on port 7001, etc.), then after approx. 649 open sockets one starts getting errors: No buffer space available (maximum connections reached?). I've tried to edit the registry as described here: http://smallvoid.com/article/winnt-tcpip-max-limit.html. I set MaxUserPort = 65534 and MaxFreeTcbs = 2000, but it didn't help. What else can I do? I need 1000 server sockets. Here is the error stack:

        05.04.2012 10:23:57
        java.net.SocketException: No buffer space available (maximum connections reached?): listen
            at sun.nio.ch.ServerSocketChannelImpl.listen(Native Method)
            at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:127)
            at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
            at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
            at channelserver.NIOAppServer.initSelector(NIOAppServer.java:40)
            at channelserver.NIOAppServer.<init>(NIOAppServer.java:27)
            at channelserver.NIOServer.main(NIOServer.java:433)
            at channelserver.NIOServer.main(NIOServer.java:438)
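
    For reference, the port-listening experiment described above is easy to reproduce with a short script; here is a minimal sketch in Python (the host, starting port, and backlog are arbitrary choices, not from the original report):

        import socket

        sockets = []
        port = 7000
        try:
            while True:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.bind(("127.0.0.1", port))   # listen on port 7000, 7001, ...
                s.listen(5)
                sockets.append(s)
                port += 1
        except socket.error as exc:
            # On an unpatched XP SP3 box this reportedly fails after ~649 sockets
            # with "No buffer space available".
            print("Failed after %d listening sockets: %s" % (len(sockets), exc))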

    Read the article

  • Python: Check existence of shell command before execution

    - by Gabriel L. Oliveira
    Hi all. I'm trying to find a way to check the existence of a shell command before executing it. For example, I want to execute the command ack-grep, so I'm trying to do:

        import subprocess
        from subprocess import PIPE

        cmd_grep = subprocess.Popen(["ack-grep", "--no-color", "--max-count=1", "--no-group", "def run_main", "../cgedit/"], stdout=PIPE, stderr=PIPE)

    Then, if I execute cmd_grep.stderr.read(), I receive '' as the output. But I don't have the command ack-grep on my path. So why is Popen not putting the error message in my .stderr variable? Also, is there an easier way to do what I'm trying to do?
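
    One way to do the existence check up front is a sketch like the following, assuming Python 3.3+ where shutil.which is available (on older interpreters you would walk os.environ['PATH'] yourself):

        import shutil
        import subprocess

        def command_exists(cmd):
            """Return True if `cmd` resolves to an executable on the current PATH."""
            return shutil.which(cmd) is not None

        if command_exists("ack-grep"):
            proc = subprocess.Popen(["ack-grep", "--no-color", "--max-count=1",
                                     "--no-group", "def run_main", "../cgedit/"],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, err = proc.communicate()
        else:
            print("ack-grep was not found on PATH")

    Note that when an executable genuinely cannot be found, Popen normally raises OSError at construction time rather than producing output on stderr, so checking before calling it avoids handling that exception.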

    Read the article

  • Semantic is consuming all CPU, causing emacs to hang

    - by Cheeso
    I upgraded to Emacs 23.2.1 on Windows 7 not long ago. Since then I've been unable to use Semantic. As soon as I start it, the CPU goes to max (actually, Windows reports it at 50%, but this is a dual-core machine, so Emacs is effectively consuming 100% of a core). Emacs becomes non-responsive. Is there a particular combination of versions of Semantic and Emacs that is unsafe to use together? How would I debug this spin/hang? I've seen suggestions elsewhere to change semantic-idle-scheduler-idle-time from its default of 2 to something very large. I tried that, but got the same results.

    Read the article

  • Slow download speeds on MacBook Pro

    - by Austin
    Just as the title says, I am getting very low download speeds on my MacBook Pro. I did a speed test at speedtest.net and am getting 7 Mbps down, 0.5 Mbps up. However, I can only seem to get 270 KB/s max (averaging around 100 KB/s), whether on my school's network or on my home network, wired or wireless. I am on Mac OS X 10.5.8, with Google Chrome. My Ethernet settings (under System Preferences - Network - Ethernet Connection - Advanced - Ethernet) are set to "Configure Automatically", "Speed: 100TX", "Duplex: full-duplex, flow-control", and "MTU: Standard (1500)". As far as I can tell, there are no throttles or anything between here and the ISP, so... any ideas on why I'm getting such low download speeds?

    Read the article

  • Clever recording using AVFoundation

    - by martin
    Hello, I am working on my master's thesis and I am programming an app for iOS using the AVFoundation framework. I can set up a session myself, attach devices to it, and record video with sound. The main problem is that I need continuous recording (3 hours or longer). After three hours the user will stop recording and choose a time span, e.g. 15 minutes (max 30 minutes), and only those last 15 minutes should be stored in the iPhone's memory. Is it possible to 'cut' the video while recording, or should I record it in, e.g., 10-minute blocks, delete the old video segments, and join the last two segments into one bigger one? Will performing these connections (stop recording, start a new recording, and then connect the two segments) cause lags in the final long video segment? Is there any way to perform this 'clever' recording? Thank you for any ideas.
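
    The block-recording idea can be sketched independently of AVFoundation; here is a minimal Python sketch of the "keep only the last few segments" bookkeeping (the segment length, window length, and the on_segment_finished hook are hypothetical, not AVFoundation API):

        import math
        import os
        from collections import deque

        SEGMENT_MINUTES = 10   # length of each recorded block
        WINDOW_MINUTES = 30    # the longest window the user may ask to keep
        MAX_SEGMENTS = math.ceil(WINDOW_MINUTES / SEGMENT_MINUTES)

        segments = deque()     # file paths of the most recent blocks, oldest first

        def on_segment_finished(path):
            """Hypothetical callback fired each time a block finishes recording."""
            segments.append(path)
            while len(segments) > MAX_SEGMENTS:
                os.remove(segments.popleft())   # blocks outside the window are discarded

    At stop time, the surviving blocks would be trimmed and concatenated to cover exactly the span the user picked.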

    Read the article

  • Input not cleared.

    - by SoulBeaver
    As the title says, for some reason my program is not flushing the input, or is using my variables in ways that I cannot identify at the moment. This is for a homework project that I've gone beyond what I had to do; now I just want the program to actually work :P Details to make the bug easier to find: the program executes flawlessly on the first run through. All throws work, and only proper values (n > 0) are accepted and turned into binary. But as soon as I enter my terminate input, the program goes into a loop and only asks for the terminate value again. When I run this program in NetBeans on my Linux laptop, the program crashes after I input the terminate value. In Visual C++ on Windows it goes into the loop just described. In the code I have tried to clear every stream and initialize every variable anew as the program restarts, but to no avail. I just can't see my mistake. I believe the error lies either in the main function:

        int main( void )
        {
            vector<int> store;
            int terminate = 1;
            do
            {
                int num = 0;
                string input = "";

                if( cin.fail() )
                {
                    cin.clear();
                    cin.ignore( numeric_limits<streamsize>::max(), '\n' );
                }

                cout << "Please enter a natural number." << endl;
                readLine( input, num );

                cout << "\nThank you. Number is being processed..." << endl;
                workNum( num, store );
                line;

                cout << "Go again? 0 to terminate." << endl;
                cin >> terminate; // No checking yet, just want it to work!
                cin.clear();
            } while( terminate );

            cin.get();
            return 0;
        }

    or in the function that reads the number:

        void readLine( string &input, int &num )
        {
            int buf = 1;
            stringstream ss;
            vec_sz size;

            if( ss.fail() )
            {
                ss.clear();
                ss.ignore( numeric_limits<streamsize>::max(), '\n' );
            }

            if( getline( cin, input ) )
            {
                size = input.size();
                for( int loop = 0; loop < size; ++loop )
                    if( isalpha( input[loop] ) )
                        throw domain_error( "Invalid Input." );

                ss << input;
                ss >> buf;
                if( buf <= 0 )
                    throw domain_error( "Invalid Input." );

                num = buf;
                ss.clear();
            }
        }

    Read the article

  • Nginx Static Content Server Maxing Out?

    - by Harry
    I use nginx to serve the static content for a decently busy website of mine. I have logging disabled and 4 worker processes enabled with 5,000 connections per worker, which should yield a max connection limit of 20,000. The server is only at about 10% CPU usage and 50% RAM, but it's very laggy, and sometimes nginx is so slow to respond to requests that it times out. For a small number of connections it's fine, but once any load starts occurring (~2,500 connections), it backs up and bogs down. Are there any other bottlenecks or limits that I might be hitting? This is a FreeBSD server, and all the static files are located locally (not on NFS). The NIC is unmetered gigabit, and it's only pushing around 75 megabits. Any insight would be appreciated. Thanks.

    Read the article

  • Puppet: how to use data from a MySQL table in Puppet 3.0 templates?

    - by Luke404
    I have some data whose source of truth is in a MySQL database; its size is expected to max out in the some-thousands-of-rows range (in a worst-case scenario), and I'd like to use Puppet to configure files on some servers with that data (mostly iterating through those rows in a template). I'm currently using Puppet 3.0.x, and I cannot change the fact that MySQL will be the authoritative source for that data. Please note, the data comes from external sources and not from Puppet or from the managed nodes. What possible approaches are there? Which one would you recommend? Would External Node Classifiers be useful here? My "last resort" would be regularly dumping the table to a YAML file and reading that through Hiera into a Puppet template, or directly dumping the table into one or more pre-formatted text files ready to be copied to the nodes. There is an unanswered question on SF about system users, but the fundamental issue is probably similar to mine - he's trying to get data out of MySQL.
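
    If the "last resort" route is taken, the dump step itself is small; here is a rough Python sketch, assuming the pymysql and PyYAML packages are available (the table, columns, credentials, and output path are made up for illustration):

        import pymysql
        import yaml

        conn = pymysql.connect(host="localhost", user="puppet",
                               password="secret", database="inventory")
        try:
            with conn.cursor(pymysql.cursors.DictCursor) as cur:
                cur.execute("SELECT name, value FROM config_rows")
                rows = cur.fetchall()
        finally:
            conn.close()

        # Write the rows under a single key so Hiera can look them up as one array.
        with open("/etc/puppet/hieradata/mysql_rows.yaml", "w") as f:
            yaml.safe_dump({"mysql_rows": list(rows)}, f, default_flow_style=False)

    The dump could be refreshed from cron, and the resulting file read through Hiera and iterated over in the template, as described above.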

    Read the article

  • Reduce the memory footprint of the Java virtual machine

    - by Lorenzo Boccaccia
    I've a Citrix server where multiple users each run multiple Java applications. Is there a way to reduce the memory footprint of the JVM itself? The max heap is already set fairly low (64MB), as is the permgen space (32MB), and we're at the point where the JVM itself uses far more memory than the application (the committed area is around 350MB). I'm looking for a way to reduce the JVM's RAM usage, or to make all the applications run within the same JVM, or any other way of sharing common pages between running JVMs (if possible), or to switch to a JVM that has optimizations for this scenario, if one exists. Currently using Windows 2003 Server and the Sun Java virtual machine 1.6.

    Read the article

  • Driver for writing to UDF partitions from Windows XP?

    - by davr
    I'm considering using a UDF partition to share data between Windows XP, Windows 7, and Linux. It's more efficient than FAT32 and avoids the 4GB max file size limit. I've found it will also work with Mac OS X; more details in this question. However, in Windows XP it is read-only, and I'd like to write to it too. Are there any drivers that will allow this? I've found a few that support writing UDF, but they are designed for writing to CDs or DVDs, not specifically for HDDs or USB flash drives: DLA, InCD, Drag-To-Disc. Will any of those 3 drivers work for HDDs/USB flash drives? Or is there another driver that will do what I want? Thanks.

    Read the article

  • Apache on Win32: slow transfers of single static files over HTTP, fast over HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos, etc. The download seems to be capped at around 550kB/s, even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test plain HTTP speed, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

        SendBufferSize 1048576    #(tried multiples of that too, up to 100Mbytes)
        EnableSendfile Off        #(minor performance boost)
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off       #(default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So:

        AnalogX HTTP:                                          3300kB/s
        Gene6 FTPD, plain:                                     3500kB/s
        Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher:  1800-2000kB/s
        freeSSHD:                                              1100kB/s
        SMB shared folder:                                     about 3000kB/s
        Apache HTTP, plain:                                    550kB/s
        Apache HTTPS:                                          2700kB/s

    Clients that were used in the bandwidth testing:

        Internet Explorer 8 (HTTP, HTTPS)
        Firefox 8 (HTTP, HTTPS)
        Chrome 13 (HTTP, HTTPS)
        Opera 11.60 (HTTP, HTTPS)
        wget under CygWin (HTTP, HTTPS)
        FileZilla (FTP, FTPS, FTP+ES, SFTP)
        Windows Explorer (SMB)

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at least 2Mbyte/s instead of 550kB/s, and that already works easily with HTTPS, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings and no errors, and there is nothing strange to be seen in access.log either. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help. If you have any ideas, help would be greatly appreciated, since I am totally out of them! Thanks!

    Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The output was generated by downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an open source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com. Here it is:

    LiveHTTPHeaders, plain HTTP (http://www.mydomain.com/elephantsdream_source.264):

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:55:16 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    LiveHTTPHeaders, HTTPS (https://www.mydomain.com/elephantsdream_source.264):

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:56:57 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    Read the article
