Search Results

Search found 165 results on 7 pages for 'x sendfile'.

Page 1/7 | 1 2 3 4 5 6 7  | Next Page >

  • Invalid argument in sendfile() with two regular files

    - by Daniel Hershcovich
    I'm trying to test the sendfile() system call under Linux 2.6.32 to zero-copy data between two regular files. As far as I understand, it should work: ever since 2.6.22, sendfile() has been implemented using splice(), and both the input file and the output file can be either regular files or sockets. The following is the content of sendfile_test.c:

        #include <sys/sendfile.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int result;
            int in_file;
            int out_file;

            in_file = open(argv[1], O_RDONLY);
            out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            result = sendfile(out_file, in_file, NULL, 1);
            if (result == -1)
                perror("sendfile");
            close(in_file);
            close(out_file);
            return 0;
        }

    And when I'm running the following commands:

        $ gcc sendfile_test.c
        $ ./a.out infile

    the output is:

        sendfile: Bad file descriptor

    which means that the system call resulted in errno = -EINVAL, I think. What am I doing wrong?
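    Two hedged observations, inferred from the sendfile(2) man page rather than from the thread: "Bad file descriptor" is EBADF, not EINVAL, and it is what sendfile() reports when out_file is -1 — consistent with the ./a.out invocation above passing only one filename. Separately, on kernels before 2.6.33 (including the 2.6.32 used here) out_fd must refer to a socket, so even a correct invocation fails with EINVAL. On such kernels one can splice() through an intermediate pipe instead; a minimal sketch, with short-write handling omitted:

        #define _GNU_SOURCE             /* for splice() */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int pipefd[2];
            int in_file  = open(argv[1], O_RDONLY);
            int out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            ssize_t n;

            if (in_file < 0 || out_file < 0 || pipe(pipefd) < 0) {
                perror("setup");
                return 1;
            }
            /* splice() requires one end to be a pipe, so stage the copy:
               input file -> pipe -> output file, 64 KiB at a time. */
            while ((n = splice(in_file, NULL, pipefd[1], NULL, 65536,
                               SPLICE_F_MOVE)) > 0) {
                if (splice(pipefd[0], NULL, out_file, NULL, n, SPLICE_F_MOVE) < 0) {
                    perror("splice");
                    return 1;
                }
            }
            return n < 0 ? 1 : 0;
        }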

    Read the article

  • Difference between the Linux write and sendfile syscalls

    - by JosiP
    Hi, I'm programming a web server in C which should send big files. My question is: what are the main differences between the two syscalls write and sendfile? Does sendfile depend on the size of the socket's system buffer? I noticed that write often writes less than I requested. For example, if I get many requests for one file: should I open it, copy it into memory and use write, or can I just do a sendfile for each client? Thanks in advance for all answers.
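    For illustration, a hedged sketch of the two approaches (client_fd and file_fd are placeholder descriptors; both calls may transfer less than requested, which is why each is looped):

        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* read()/write(): data is copied into user space and back out. */
        static int send_with_write(int client_fd, int file_fd)
        {
            char buf[65536];
            ssize_t n;
            while ((n = read(file_fd, buf, sizeof buf)) > 0) {
                ssize_t off = 0;
                while (off < n) {                  /* write() may be partial */
                    ssize_t w = write(client_fd, buf + off, n - off);
                    if (w < 0) return -1;
                    off += w;
                }
            }
            return n < 0 ? -1 : 0;
        }

        /* sendfile(): the kernel moves data from the page cache straight to
           the socket, with no round trip through user space. */
        static int send_with_sendfile(int client_fd, int file_fd)
        {
            struct stat st;
            off_t off = 0;
            if (fstat(file_fd, &st) < 0) return -1;
            while (off < st.st_size) {             /* sendfile() may be partial too */
                ssize_t s = sendfile(client_fd, file_fd, &off, st.st_size - off);
                if (s <= 0) return -1;
            }
            return 0;
        }

    As for serving one file to many clients: there is no need to copy it into memory yourself; the kernel's page cache already shares the data between requests, so calling sendfile() per client is the usual choice.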

    Read the article

  • Is writing to a socket an arbitrary limitation of the sendfile() syscall?

    - by Sufian
    Prelude: sendfile() is an extremely useful syscall for two reasons. First, it's less code than a read()/write() (or recv()/send(), if you prefer that jive) loop. Second, it's faster (fewer syscalls, the implementation may copy between devices without a buffer, etc.) than the aforementioned methods. Less code. More efficient. Awesome. In UNIX, everything is (mostly) a file. This is the ugly territory where platonic theory collides with real-world practice. I understand that sockets are fundamentally different from files residing on some device. I haven't dug through the sources of Linux/*BSD/Darwin/whatever OS implements sendfile() to know why this specific syscall is restricted to writing to sockets (specifically, streaming sockets). I just want to know...

    Question: What is limiting sendfile() from allowing the destination file descriptor to be something besides a socket (like a disk file, or a pipe)?
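    One data point worth adding (from the sendfile(2) man page, not from the thread): on Linux the restriction was later relaxed — before 2.6.33 out_fd had to refer to a socket; from 2.6.33 on it can be any file. A minimal sketch of a file-to-file copy that is legal on those newer kernels:

        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Minimal file-to-file copy via sendfile(); requires Linux >= 2.6.33.
           On older kernels this fails with EINVAL because out_fd is not a socket. */
        int main(int argc, char **argv)
        {
            int in  = open(argv[1], O_RDONLY);
            int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            struct stat st;
            off_t off = 0;

            if (in < 0 || out < 0 || fstat(in, &st) < 0) {
                perror("setup");
                return 1;
            }
            while (off < st.st_size)
                if (sendfile(out, in, &off, st.st_size - off) <= 0) {
                    perror("sendfile");
                    return 1;
                }
            return 0;
        }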

    Read the article

  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need Nginx to serve a file relative to the document root if it exists, then fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is, my upstream is not serving the contents of the URI directly, but instead returning X-Accel-Redirect with a location relative to the document root (it generates this file on the fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently, this should work. The problem though is that Nginx tries to serve this file from some internal default document root instead of using the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force Nginx to serve the file relative to the right document root? According to the XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
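    A hedged observation (an assumption based on nginx's X-Accel-Redirect behavior, not an answer from the thread): unlike Apache's X-Sendfile, nginx treats the header value as an internal URI and matches it against location blocks rather than resolving it against the current root — a value without a leading slash falls through to the compiled-in default root, which is exactly the "htmlanimals" path in the error log. Returning a leading-slash URI (X-Accel-Redirect: /animals/...) and mapping it explicitly avoids that:

        location /animals/ {
            internal;                 # only reachable via X-Accel-Redirect
            root /var/www/nginx/;     # resolves to /var/www/nginx/animals/...
        }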

    Read the article

  • Terminology: opposite of "zero copy"?

    - by Mark Harrison
    We're benchmarking some code that we've converted to use sendfile(), the linux zero-copy system call. What's the term for the traditional read()/write() loop that sendfile() replaces? I.e., in our report I want to say "zerocopy is X millisecs, and ??? is Y millisecs." What word/phrase should I use?

    Read the article

  • Saving animated GIFs using urllib.urlopen (image saved does not animate)

    - by wenbert
    I have Apache2 + Django + X-Sendfile. My problem is that when I upload an animated GIF, it won't "animate" when I output it through the browser. Here is my code to display the image, located outside the publicly accessible directory:

        def raw(request, uuid):
            target = str(uuid).split('.')[:-1][0]
            image = Uploads.objects.get(uuid=target)
            path = image.path
            filepath = os.path.join(path, "%s.%s" % (image.uuid, image.ext))
            response = HttpResponse(mimetype=mimetypes.guess_type(filepath))
            response['Content-Disposition'] = 'filename="%s"' % smart_str(image.filename)
            response["X-Sendfile"] = filepath
            response['Content-length'] = os.stat(filepath).st_size
            return response

    UPDATE: It turns out that it works. My problem is when I try to upload an image via URL. It probably doesn't save the entire GIF?

        def handle_url_file(request):
            """
            Open a file from a URL.
            Split the file to get the filename and extension.
            Generate a random uuid using rand1().
            Then save the file.
            Return the UUID when successful.
            """
            try:
                file = urllib.urlopen(request.POST['url'])
                randname = rand1(settings.RANDOM_ID_LENGTH)
                newfilename = request.POST['url'].split('/')[-1]
                ext = str(newfilename.split('.')[-1]).lower()
                im = cStringIO.StringIO(file.read())  # constructs a StringIO holding the image
                img = Image.open(im)
                filehash = checkhash(im)
                image = Uploads.objects.get(filehash=filehash)
                uuid = image.uuid
                return "%s" % (uuid)
            except Uploads.DoesNotExist:
                img.save(os.path.join(settings.UPLOAD_DIRECTORY, ("%s.%s") % (randname, ext)))
                del img
                filesize = os.stat(os.path.join(settings.UPLOAD_DIRECTORY, ("%s.%s") % (randname, ext))).st_size
                upload = Uploads(
                    ip = request.META['REMOTE_ADDR'],
                    filename = newfilename,
                    uuid = randname,
                    ext = ext,
                    path = settings.UPLOAD_DIRECTORY,
                    views = 1,
                    bandwidth = filesize,
                    source = request.POST['url'],
                    size = filesize,
                    filehash = filehash,
                )
                upload.save()
                # return uuid
                return "%s" % (upload.uuid)
            except IOError, e:
                raise e

    Any ideas? Thanks! Wenbert
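    A hedged guess at the cause (an assumption, not a confirmed answer): the upload path re-encodes the image with PIL, and PIL's Image.save() writes only the first frame of an animated GIF. Saving the downloaded bytes untouched preserves the animation; a minimal sketch with a hypothetical helper:

        import shutil
        import urllib

        # Hypothetical helper: save the remote file byte-for-byte instead of
        # re-encoding it with PIL, so animated GIFs keep every frame.
        # PIL can still be used separately to validate the download.
        def save_raw_copy(url, dest_path):
            src = urllib.urlopen(url)
            try:
                with open(dest_path, 'wb') as dst:
                    shutil.copyfileobj(src, dst)
            finally:
                src.close()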

    Read the article

  • Send failed from rails server to download in the browser

    - by Markus
    Hi everybody, I have a web application which has some multimedia files stored in a user-protected area. To make these files available to logged-in customers, I'm considering using the x-sendfile plugin:

        x_send_file(path, :type => 'application/pdf')

    It is just strange that every time I run this function an empty file gets sent to the browser download. I checked the path, which is correct (the app fails if I change it to a nonexistent file). Actually, if I use Rails' internal send_file method, the same error occurs... Any help is appreciated! Markus

    Read the article

  • Inconsistent responses from an ISAPI DLL in mod_isapi

    - by William Leader
    I have an ISAPI DLL which I created with Delphi 2009, and I have been able to verify that it functions as designed when I run it inside IIS 5.1. However, when I attempt to host the web service within Apache on Windows XP using mod_isapi, I do not get consistent results. The ISAPI DLL implements a very simple SOAP service with two methods. One method is a simple echo service that sends back the string sent to it. The second method is used to send a file to the server using a TSOAPAttachment (multipart MIME). The interface can be described as follows:

        IPdiSvc2 = interface(IInvokable)
          ['{532DCDD7-D66B-4D2C-924E-2F389D3E0A74}']
          function Echo(data: string): string; stdcall;
          function SendFile(request: TFileDescription;
            attachment: TSOAPAttachment): TSendFileResponse; stdcall;
        end;

    What is interesting is that if I only call the echo function, Apache handles it without error every time. The web service only returns an error after calling SendFile, but not every time. There are three outcomes to calling SendFile that I have observed:

       1. A normal result without an error (HTTP 200 OK).
       2. A SOAP-encoded exception with the message 'Required white space was missing. Line: 11 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML.' (HTTP 500 Internal Error). This also generates a message in the Apache error.log: 'Premature end of script headers: MYISAPI.dll'.
       3. A SOAP-encoded exception with the message 'Access violation at address 01A53D57 in module MYISAPI.dll. Read of address 00000000.' (HTTP 200 OK).

    What I find interesting is that the second and third outcomes still occur if I call Echo after calling SendFile. Calling SendFile, SendFile, SendFile, SendFile results in outcomes 1, 2, 3, 1. Calling Echo, SendFile, SendFile, SendFile results in outcomes 1, 1, 2, 3. Calling SendFile, Echo, Echo, SendFile results in outcomes 1, 2, 3, 1. The pattern I am seeing is that after a successful SendFile, the next two requests result in outcomes 2 and 3 regardless of what those two requests are. My guess is that because Apache uses multiple threads to handle multiple requests, each request is handled in a slightly different way, and the DLL may not have been initialized in the same way for each worker thread. I do not think the problem is in my code: when I attach the debugger to httpd.exe, it does recognize the exceptions, but it says they are in non-Delphi code, meaning they happen before the code inside my DLL has a chance to execute. I suspect it may have something to do with the way I have Apache configured. My Apache configuration is the defaults created by the 2.2.15 installer for Windows, with the following addition:

        <IfModule isapi_module>
          AddHandler isapi-handler .dll
          ISAPILogNotSupported on
          ISAPIFakeAsync on
          ISAPIAppendLogToErrors on
        </IfModule>

        <IfModule alias_module>
          ScriptAlias /myisapi/ "C:/path/to/myisapi/"
        </IfModule>

        <Directory "C:/path/to/myisapi/">
          AllowOverride None
          Options ExecCGI
          Order allow,deny
          Allow from all
        </Directory>

    Read the article

  • Install apache modules on MAMP

    - by camupod
    How can I install the X-Sendfile Apache module so that MAMP can use it? I have followed these instructions to install X-Sendfile, but it didn't work (it seems it was installed only for the default Apache installation). I also tried to manually copy /usr/libexec/apache2/mod_xsendfile.so to /Applications/MAMP/Library/modules/, but that produced the following error when restarting Apache:

        Cannot load /Applications/MAMP/Library/modules/mod_xsendfile.so into server: cannot create object file image or add library

    Update: The question was answered on Stack Overflow: http://stackoverflow.com/questions/9101566/install-apache-module-x-sendfile-on-mamp

    Read the article

  • Using extension methods to decrease the surface area of a C# interface

    - by brian_ritchie
    An interface defines a contract to be implemented by one or more classes.  One of the keys to a well-designed interface is defining a very specific range of functionality. The profile of the interface should be limited to a single purpose & should have the minimum methods required to implement this functionality.  Keeping the interface tight will keep those implementing the interface from getting lazy & not implementing it properly.  I've seen too many overly broad interfaces that aren't fully implemented by developers.  Instead, they just throw a NotImplementedException for the methods they didn't implement. One way to help with this issue is by using extension methods to move overloaded method definitions outside of the interface. Consider the following example:

        public interface IFileTransfer
        {
            void SendFile(Stream stream, Uri destination);
        }

        public static class IFileTransferExtension
        {
            public static void SendFile(this IFileTransfer transfer,
                string Filename, Uri destination)
            {
                using (var fs = File.OpenRead(Filename))
                {
                    transfer.SendFile(fs, destination);
                }
            }
        }

        public static class TestIFileTransfer
        {
            static void Main()
            {
                IFileTransfer transfer = new FTPFileTransfer("user", "pass");
                transfer.SendFile(filename, new Uri("ftp://ftp.test.com"));
            }
        }

    In this example, you may have a number of overloads that use different mechanisms for specifying the source file. The great part is, you don't need to implement these methods on each of your derived classes.  This gives you a better interface and better code reuse.

    Read the article

  • Netcat file transfer problem

    - by thepurplepixel
    I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet). To send:

        #!/bin/bash
        SENDFILE=$1
        PORT=$2
        HOST='<my house>'
        HOSTIP=`host $HOST | grep "has address" | cut --delimiter=" " -f 4`
        echo Transferring file \"$SENDFILE\" to $HOST \($HOSTIP\).
        tar -c "$SENDFILE" | pv -c -N tar -i 0.5 | lzma -z -c -6 | pv -c -N lzma -i 0.5 | nc -q 1 $HOSTIP $PORT
        echo Done.

    To receive:

        #!/bin/bash
        SERVER='<myserver>'
        SERVERIP=`host $SERVER | grep "has address" | cut --delimiter=" " -f 4`
        PORT=$1
        echo Receiving file from $SERVER \($SERVERIP\) on port $PORT.
        nc -l $PORT | pv -c -N netcat -i 0.5 | lzma -d -c | pv -c -N lzma -i 0.5 | tar -xf -
        echo Done.

    The problem is that, for a very quick second, I see something flash along the lines of "Connection refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it:

        ~$ sudo nmap -sU -PN -p55515 -v <my house>

        Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-21 18:10 EDT
        NSE: Loaded 0 scripts for scanning.
        Initiating Parallel DNS resolution of 1 host. at 18:10
        Completed Parallel DNS resolution of 1 host. at 18:10, 0.00s elapsed
        Initiating UDP Scan at 18:10
        Scanning 74.13.25.94 [1 port]
        Completed UDP Scan at 18:10, 2.02s elapsed (1 total ports)
        Host 74.13.25.94 is up.
        Interesting ports on 74.13.25.94:
        PORT      STATE         SERVICE
        55515/udp open|filtered unknown

        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
        Raw packets sent: 2 (56B) | Rcvd: 5 (260B)

    Also, running netcat normally doesn't work either:

        squircle@summit:~$ netcat <my house> 55515
        <my house> [<my IP>] 55515 (?) : Connection refused

    Both boxes are Ubuntu Karmic (9.10). The receiver has no firewall, and outbound traffic on that port is allowed on the sender. I have no idea what to troubleshoot next. Any ideas?

    P.S.: Feel free to move this to SO/SF if you feel it would fit better there.
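    One detail worth checking (an observation, not a confirmed answer from the thread): the nmap run above scans UDP (-sU), while nc without -u speaks TCP, so the open|filtered UDP result says nothing about the TCP port the scripts actually use. A TCP connect scan would be the matching test:

        ~$ sudo nmap -sT -PN -p55515 -v <my house>

    If that reports closed, the router forward or the listening side (the receive script's nc -l must already be running before the sender starts) would be the place to look.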

    Read the article

  • PHP SOAP Transfering Files

    - by blackmage
    Hey, I am new to SOAP and I am trying to learn how to transfer files (.zip files) between a client and server using PHP and SOAP. Currently I have a setup that looks something like this:

        require('libraries/nusoap/nusoap.php');
        $server = new nusoap_server;
        $server->configureWSDL('server', 'urn:server');
        $server->wsdl->schemaTargetNamespace = 'urn:server';
        $server->register('sendFile',
            array('value' => 'xsd:string'),
            array('return' => 'xsd:string'),
            'urn:server', 'urn:server#sendFile');

    But I am unsure on what the return type should be if not a string? I am thinking of using base64_encode, but I am not sure how to. Any advice would be greatly appreciated.
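    For the return-type question, a common low-tech approach (a sketch, not the only option; the file path is a placeholder) is to keep the declared return as xsd:string and base64-encode the zip's bytes:

        <?php
        // Hypothetical service method: return the file as a base64 string.
        // The client decodes it again with base64_decode().
        function sendFile($value)
        {
            $bytes = file_get_contents('/path/to/archive.zip');
            return base64_encode($bytes);
        }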

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address, that will dispatch HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running a nginx instance. I'm having troubles with caching on a particular website that is fully static: nginx keeps on serving me stale content after files updates, until I: Clear /var/cache/nginx/ and restart nginx or set proxy_cache off for this server and reload the config Here's the detail of my configuration: On the server (proxmox): /etc/nginx/nginx.conf: user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; #tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; client_body_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 1 1K; ignore_invalid_headers on; client_body_timeout 5; client_header_timeout 5; keepalive_timeout 5 5; send_timeout 5; server_name_in_redirect off; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; limit_conn_zone $binary_remote_addr zone=gulag:1m; limit_conn gulag 50; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/conf.d/proxy.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; /etc/nginx/sites-available/my-domain.conf: server { listen 80; server_name .my-domain.com; access_log off; location / { proxy_pass http://my-domain.local:80/; proxy_cache cache; proxy_cache_valid 12h; expires 30d; proxy_cache_use_stale error timeout invalid_header updating; } } On the container (my-domain.local): nginx.conf: (everything is inside the main config file -- it's been done quickly...) user www-data; worker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip off; server { listen 80; server_name .my-domain.com; root /var/www; access_log logs/host.access.log; } } I've read many blog posts and answers before resolving to posting my own questions... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double checked my settings and all seems fine. So I'm wondering whether I am not expecting nginx's cache to do something it's not meant to...? 
Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requests it... But I now have the feeling I misunderstood many things. Above all, I now wonder how nginx on the server could even know that a file in the container has changed? I have seen a directive proxy_header_pass (or something alike); should I use this to let the nginx instance in the container somehow inform the one in Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
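    One pattern that may help (an assumption, not a tested fix for this setup): nginx's proxy cache never watches upstream files — entries simply live out proxy_cache_valid, so there is no built-in invalidation when a file in the container changes. A deployment step can, however, refresh entries on demand with proxy_cache_bypass, which skips the cache and stores the fresh upstream response:

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache cache;
            proxy_cache_valid 12h;
            expires 30d;
            # X-Refresh-Cache is a made-up header name: requests carrying it
            # go straight to the upstream and overwrite the cached entry.
            proxy_cache_bypass $http_x_refresh_cache;
        }

    A deploy script would then request each updated URL once with that header set.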

    Read the article

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So - I've been using this method of file uploading for a bit, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word deprecated floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

       • Users MUST be able to select multiple files for uploading in the dialog.
       • I MUST be able to receive status updates on the transmission of a file (progress bars).
       • I would like to be able to use PUT requests instead of POST.
       • I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
       • I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has these features that I am missing in my Google searches. An example of uploading code using Gears:

        // select some files:
        var desktop = google.gears.factory.create('beta.desktop');
        desktop.openFiles(selectFilesCallback);

        function selectFilesCallback(files) {
            $.each(files, function(k, file) {
                // this code actually goes through a queue, and creates some status bars
                // but it is unimportant to show here...
                sendFile(file);
            });
        }

        function sendFile(file) {
            var request = google.gears.factory.create('beta.httprequest');
            request.open('PUT', upl.url);
            request.setRequestHeader('filename', file.name);
            request.upload.onprogress = function(e) {
                // gives me % status updates... allows e.loaded/e.total
            };
            request.onreadystatechange = function() {
                if (request.readyState == 4) {
                    // completed the upload!
                }
            };
            request.send(file.blob);
            return request;
        }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed it to a "like" instead of a "must".
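    Not from the thread, but one replacement that covers most of the list (assuming browsers with HTML5 File API support; /upload is a made-up endpoint): XMLHttpRequest Level 2 gives multiple selection, upload progress events, PUT, and plain DOM wiring:

        // <input type="file" id="picker" multiple> triggered from a <button> elsewhere
        var picker = document.getElementById('picker');
        picker.addEventListener('change', function () {
            for (var i = 0; i < picker.files.length; i++) {
                sendFile(picker.files[i]);
            }
        });

        function sendFile(file) {
            var xhr = new XMLHttpRequest();
            xhr.open('PUT', '/upload');              // hypothetical endpoint
            xhr.setRequestHeader('filename', file.name);
            xhr.upload.onprogress = function (e) {
                // progress bar: e.loaded / e.total, as with the Gears version
            };
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4) {
                    // completed the upload
                }
            };
            xhr.send(file);                          // a File is a Blob; sent as the body
            return xhr;
        }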

    Read the article

  • How do I teach my linux command line manners?

    - by Sam152
    Whenever I complete something in the command line while using Ubuntu and my computer does something of value to me, I enjoy saying thank you, just because it's the polite thing to do. A typical conversation might look something like this:

        mtp-sendfile HamishAndyPodcast.mp3 /Music/podcasts
        Sending file...
        Progress: 17769768 of 17769768 (100%)
        New file ID: 76098
        sam@sams-laptop:~$ thanks
        thanks: command not found

    What's the best way to teach my PC a few manners and respond with something like "No problemo"?

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having so much issue with NginX configuration since I'm new with NginX. Been using Lighttpd for quite sometime. Here are the base info. New Machine - CentOS 6.3 64 Bit - NginX 1.2.4-1.e16.ngx - Php-FPM 5.3.18-1.e16.remi Old Machine - CentOS 6.2 64Bit - Lighttpd 1.4.25-3.e16 Original Lighttpd config file: ####################################################################### ## ## /etc/lighttpd/lighttpd.conf ## ## check /etc/lighttpd/conf.d/*.conf for the configuration of modules. ## ####################################################################### ####################################################################### ## ## Some Variable definition which will make chrooting easier. ## ## if you add a variable here. Add the corresponding variable in the ## chroot example aswell. ## var.log_root = "/var/log/lighttpd" var.server_root = "/var/www" var.state_dir = "/var/run" var.home_dir = "/var/lib/lighttpd" var.conf_dir = "/etc/lighttpd" ## ## run the server chrooted. ## ## This requires root permissions during startup. ## ## If you run Chrooted set the the variables to directories relative to ## the chroot dir. ## ## example chroot configuration: ## #var.log_root = "/logs" #var.server_root = "/" #var.state_dir = "/run" #var.home_dir = "/lib/lighttpd" #var.vhosts_dir = "/vhosts" #var.conf_dir = "/etc" # #server.chroot = "/srv/www" ## ## Some additional variables to make the configuration easier ## ## ## Base directory for all virtual hosts ## ## used in: ## conf.d/evhost.conf ## conf.d/simple_vhost.conf ## vhosts.d/vhosts.template ## var.vhosts_dir = server_root + "/vhosts" ## ## Cache for mod_compress ## ## used in: ## conf.d/compress.conf ## var.cache_dir = "/var/cache/lighttpd" ## ## Base directory for sockets. ## ## used in: ## conf.d/fastcgi.conf ## conf.d/scgi.conf ## var.socket_dir = home_dir + "/sockets" ## ####################################################################### ####################################################################### ## ## Load the modules. include "modules.conf" ## ####################################################################### ####################################################################### ## ## Basic Configuration ## --------------------- ## server.port = 80 ## ## Use IPv6? ## #server.use-ipv6 = "enable" ## ## bind to a specific IP ## #server.bind = "localhost" ## ## Run as a different username/groupname. ## This requires root permissions during startup. ## server.username = "lighttpd" server.groupname = "lighttpd" ## ## enable core files. ## #server.core-files = "disable" ## ## Document root ## server.document-root = server_root + "/lighttpd" ## ## The value for the "Server:" response field. ## ## It would be nice to keep it at "lighttpd". ## #server.tag = "lighttpd" ## ## store a pid file ## server.pid-file = state_dir + "/lighttpd.pid" ## ####################################################################### ####################################################################### ## ## Logging Options ## ------------------ ## ## all logging options can be overwritten per vhost. ## ## Path to the error log file ## server.errorlog = log_root + "/error.log" ## ## If you want to log to syslog you have to unset the ## server.errorlog setting and uncomment the next line. ## #server.errorlog-use-syslog = "enable" ## ## Access log config ## include "conf.d/access_log.conf" ## ## The debug options are moved into their own file. ## see conf.d/debug.conf for various options for request debugging. 
## include "conf.d/debug.conf" ## ####################################################################### ####################################################################### ## ## Tuning/Performance ## -------------------- ## ## corresponding documentation: ## http://www.lighttpd.net/documentation/performance.html ## ## set the event-handler (read the performance section in the manual) ## ## possible options on linux are: ## ## select ## poll ## linux-sysepoll ## ## linux-sysepoll is recommended on kernel 2.6. ## server.event-handler = "linux-sysepoll" ## ## The basic network interface for all platforms at the syscalls read() ## and write(). Every modern OS provides its own syscall to help network ## servers transfer files as fast as possible ## ## linux-sendfile - is recommended for small files. ## writev - is recommended for sending many large files ## server.network-backend = "linux-sendfile" ## ## As lighttpd is a single-threaded server, its main resource limit is ## the number of file descriptors, which is set to 1024 by default (on ## most systems). ## ## If you are running a high-traffic site you might want to increase this ## limit by setting server.max-fds. ## ## Changing this setting requires root permissions on startup. see ## server.username/server.groupname. ## ## By default lighttpd would not change the operation system default. ## But setting it to 2048 is a better default for busy servers. ## ## With SELinux enabled, this is denied by default and needs to be allowed ## by running the following once : setsebool -P httpd_setrlimit on server.max-fds = 2048 ## ## Stat() call caching. ## ## lighttpd can utilize FAM/Gamin to cache stat call. ## ## possible values are: ## disable, simple or fam. ## server.stat-cache-engine = "simple" ## ## Fine tuning for the request handling ## ## max-connections == max-fds/2 (maybe /3) ## means the other file handles are used for fastcgi/files ## server.max-connections = 1024 ## ## How many seconds to keep a keep-alive connection open, ## until we consider it idle. ## ## Default: 5 ## #server.max-keep-alive-idle = 5 ## ## How many keep-alive requests until closing the connection. ## ## Default: 16 ## #server.max-keep-alive-requests = 18 ## ## Maximum size of a request in kilobytes. ## By default it is unlimited (0). ## ## Uploads to your server cant be larger than this value. ## #server.max-request-size = 0 ## ## Time to read from a socket before we consider it idle. ## ## Default: 60 ## #server.max-read-idle = 60 ## ## Time to write to a socket before we consider it idle. ## ## Default: 360 ## #server.max-write-idle = 360 ## ## Traffic Shaping ## ----------------- ## ## see /usr/share/doc/lighttpd/traffic-shaping.txt ## ## Values are in kilobyte per second. ## ## Keep in mind that a limit below 32kB/s might actually limit the ## traffic to 32kB/s. This is caused by the size of the TCP send ## buffer. ## ## per server: ## #server.kbytes-per-second = 128 ## ## per connection: ## #connection.kbytes-per-second = 32 ## ####################################################################### ####################################################################### ## ## Filename/File handling ## ------------------------ ## ## files to check for if .../ is requested ## index-file.names = ( "index.php", "index.rb", "index.html", ## "index.htm", "default.htm" ) ## index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" ) ## ## deny access the file-extensions ## ## ~ is for backupfiles from vi, emacs, joe, ... 
## .inc is often used for code includes which should in general not be part ## of the document-root url.access-deny = ( "~", ".inc" ) ## ## disable range requests for pdf files ## workaround for a bug in the Acrobat Reader plugin. ## $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## ## url handling modules (rewrite, redirect) ## #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" ) ## ## both rewrite/redirect support back reference to regex conditional using %n ## #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} ## ## which extensions should not be handle via static-file transfer ## ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi ## static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" ) ## ## error-handler for status 404 ## #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' ## #server.errorfile-prefix = "/srv/www/htdocs/errors/status-" ## ## mimetype mapping ## include "conf.d/mime.conf" ## ## directory listing configuration ## include "conf.d/dirlisting.conf" ## ## Should lighttpd follow symlinks? ## server.follow-symlink = "enable" ## ## force all filenames to be lowercase? ## #server.force-lowercase-filenames = "disable" ## ## defaults to /var/tmp as we assume it is a local harddisk ## server.upload-dirs = ( "/var/tmp" ) ## ####################################################################### ####################################################################### ## ## SSL Support ## ------------- ## ## To enable SSL for the whole server you have to provide a valid ## certificate and have to enable the SSL engine.:: ## ## ssl.engine = "enable" ## ssl.pemfile = "/path/to/server.pem" ## ## The HTTPS protocol does not allow you to use name-based virtual ## hosting with SSL. If you want to run multiple SSL servers with ## one lighttpd instance you must use IP-based virtual hosting: :: ## ## $SERVER["socket"] == "10.0.0.1:443" { ## ssl.engine = "enable" ## ssl.pemfile = "/etc/ssl/private/www.example.com.pem" ## server.name = "www.example.com" ## ## server.document-root = "/srv/www/vhosts/example.com/www/" ## } ## ## If you have a .crt and a .key file, cat them together into a ## single PEM file: ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \ ## > /etc/ssl/private/lighttpd.pem ## #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" ## ## optionally pass the CA certificate here. ## ## #ssl.ca-file = "" ## ####################################################################### ####################################################################### ## ## custom includes like vhosts. 
## #include "conf.d/config.conf" #include_shell "cat /etc/lighttpd/vhosts.d/*.conf" ## ####################################################################### ####################################################################### ### Custom Added by me #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php") url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) # expire.url = ( "" => "access 1 days" ) include "myvhost-vhosts.conf" ####################################################################### Here is my Vhost file for lighttpd $HTTP["host"] =~ "192.168.8.35$" { server.document-root = "/var/www/lighttpd/qc41022012/public" server.errorlog = "/var/log/lighttpd/error.log" accesslog.filename = "/var/log/lighttpd/access.log" server.error-handler-404 = "/e404.php" } and here is my nginx.conf file user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/testsite/logs/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; # include /etc/nginx/conf.d/*.conf; ## I added this ## include /etc/nginx/sites-available/*; } Here is my NginX Vhost file server { server_name 192.168.8.91; access_log /var/log/nginx/myapps/logs/access.log; error_log /var/log/nginx/myapps/logs/error.log; root /var/www/html/myapps/public; location / { index index.html index.htm index.php; } location = /favicon.ico { return 204; access_log off; log_not_found off; } # location ~ \.php$ { # try_files $uri /index.php; # include /etc/nginx/fastcgi_params; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_param SCRIPT_NAME $fastcgi_script_name; location ~ \.php.*$ { rewrite ^(.*.php)/ $1 last; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_intercept_errors on; # fastcgi_param SCRIPT_FILENAME $document_root/index.php; # fastcgi_param PATH_INFO $uri; # fastcgi_pass 127.0.0.1:9000; # include fastcgi_params; } } We have a custom apps that we created that works great with lighttpd. I went through some headache also when we were trying to figure out how to make it work with lighttpd. this is the line that helps make it work in lighttpd. url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) but I couldn't figure out how to make it works in NginX. The webserver run just fine when we use the phpinfo.php test file. However as soon as I point it to my apps, nothing comes up. Check the error.log file and there's no error. Very mind boggling. I spent over 1 week trying to figure it out with no luck.. Please help?
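    For the rewrite question specifically, a hedged sketch (assuming the app is a standard PHP front controller, as the lighttpd rules imply): nginx expresses lighttpd's url.rewrite-once fallback with try_files rather than regex rewrites:

        location / {
            # Serve existing files (js, images, css, ...) directly; everything
            # else, including requests with query strings, goes to index.php.
            try_files $uri $uri/ /index.php?$query_string;
        }

    With this in place, the static-asset regexes from the lighttpd config become unnecessary, since try_files checks the filesystem before falling back to index.php.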

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
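    One frequent culprit for serialized AJAX requests in PHP (an assumption, not a confirmed diagnosis for this configuration, especially since the same code reportedly behaves on Apache): PHP's default file-based session handler holds an exclusive lock on the session file for the whole request, so the status polls queue behind the long-running request no matter what async is set to on the client. Releasing the lock early in the long-running script lets the polls through:

        <?php
        // In the long-running endpoint: read what is needed from $_SESSION,
        // then release the session lock before the slow work starts.
        session_start();
        $user = $_SESSION['user'];      // hypothetical read
        session_write_close();          // later requests from this client can now run
        run_long_background_task();     // hypothetical slow call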

    Read the article

  • How to navigate to another html page?

    - by newbie
    In my application there's a usual login page sending username and password to the server script, where it needs to be authenticated; in case of an authentic user, the server should redirect to a page student.html. This is my code:

        var ports = 3000;
        var portt = 3001;
        var express = require('express');
        var student = require('express')();
        var teacher = require('express')();
        var server_s = require('http').createServer(student);
        var server_t = require('http').createServer(teacher);
        var ios = require('socket.io').listen(server_s);
        var iot = require('socket.io').listen(server_t);
        var path = require('path');

        server_s.listen(ports);
        server_t.listen(portt);

        student.use(express.static(path.join(__dirname, 'public')));
        student.get('/', function(req, res){
            res.sendfile(__dirname + '/login.html');
        });

        teacher.use(express.static(path.join(__dirname, 'public')));
        teacher.get('/', function(req, res){
            res.sendfile(__dirname + '/mytry.html');
        });

        ios.sockets.on('connection', function(socket){
            var username, password;
            socket.on('check', function(data){
                username = data[0];
                password = data[1];

                //************* Database connection and query *************
                var mysql = require('mysql');
                var connection = mysql.createConnection({
                    host    : 'localhost',
                    user    : 'user',
                    password: '*******',
                    database: 'my_db'
                });
                connection.connect();
                var qstring = 'SELECT s_id FROM login_student WHERE username='+username+'AND password='+password;
                connection.query(qstring, function(err, rows, fields) {
                    if (err) {
                        console.log('ERROR: ' + err);
                        socket.emit('login_failure','DB error');
                        return;
                    }
                    console.log('The solution is: ', rows[0].solution);
                    if (rows>0)
                        //***** Here i want redirection to another page ******
                    else
                        socket.emit('login_failure','Invalid Username or password');
                });
                connection.end();
            });
        });

        iot.sockets.on('connection', function(socket){
            ;
        });

    Can anyone suggest what I should do?
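    A hedged sketch of the usual pattern (the event names here are made up): since the login check runs over socket.io rather than a normal HTTP request, the server cannot issue a redirect itself; instead it emits a success event carrying the target URL and the client navigates:

        // server side, inside the 'check' handler (note: rows is an array,
        // so test its length rather than the array itself)
        if (rows.length > 0) {
            socket.emit('login_success', '/student.html');
        } else {
            socket.emit('login_failure', 'Invalid username or password');
        }

        // client side, in the page that opened the socket
        socket.on('login_success', function (target) {
            window.location.href = target;  // loads student.html
        });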

    Read the article

  • what is best config for nginx worker_rlimit_nofile and worker_connections 28672

    - by Binh Nguyen
    i have issue of web-brower response ( especially on ie ) very slow, some time time out, and sometime hang out up to 20 seconds for one file redirect 301 when test with "f12 derverloper tool of ie" .. it report wait/start time very long. but after got connected the elements on web weill be dowload and show out fast ( test at xaluan.com ) It most happen when active user on web more than 2100 ( use google real time live analytic ). server running cenos 5 with ngix, apache, 32core cpu, 96G ram, raid 10 sas hdd.. == flowing is my config == user nobody; # no need for more workers in the proxy mode worker_processes 28; #old 32 #good at 24 error_log /var/log/nginx/error.log; #old add in end: info worker_rlimit_nofile 22528; events { worker_connections 22528; use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks off; sendfile on; tcp_nopush on; tcp_nodelay on; server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 25; #old 5 gzip on; #old on gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; ignore_invalid_headers on; client_header_timeout 1m; #3m client_body_timeout 1m; #3m send_timeout 1m; #3m reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 100M; client_body_buffer_size 256k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m; limit_conn limit_per_ip 20; limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s; limit_req zone=allips burst=200 nodelay; include "/etc/nginx/vhosts/*"; } =========== I have play around with worker config 1- tried increase as some one suggess: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768 2- tried to set low: worker_processes = 28 and other worker at 22582 and other solution too .. but not work cause some time it make server load hight very quick 3- tried to comment out the # worker_rlimit_nofile . so it will be unlimited. it look like solved a bit about issue response time. but it also make server high load quick in peak time... Please help thanks PS: other apache you may have look for help me out thanks Listen 0.0.0.0:8081 User nobody Group nobody ExtendedStatus On ServerAdmin [email protected] ServerName server.xaluan.com LogLevel warn # These can be set in WHM under 'Apache Global Configuration' Timeout 100 TraceEnable Off ServerSignature Off ServerTokens ProductOnly FileETag None StartServers 15 <IfModule prefork.c> MinSpareServers 20 MaxSpareServers 50 #MaxSpareServers 40 </IfModule> ServerLimit 1572 MaxClients 1572 MaxRequestsPerChild 4000 # MaxRequestsPerChild 3000 KeepAlive On KeepAliveTimeout 3 MaxKeepAliveRequests 300 #MaxKeepAliveRequests 130
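    Not an answer from the thread, but a conventional starting point for sizing (assumptions: 32 cores and file-descriptor headroom above worker_connections; tune from measurements, not from these numbers):

        worker_processes auto;          # one worker per core; 32 here
        worker_rlimit_nofile 65536;     # per-worker FD cap: keep it well above
                                        # worker_connections plus file/upstream FDs
        events {
            worker_connections 16384;   # per worker; total capacity = workers x this
            use epoll;
        }

    Setting worker_rlimit_nofile equal to worker_connections, as in the config above, leaves no headroom for the files and upstream sockets each request also opens, which may show up as stalls under peak load.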

    Read the article

  • Apache Sending "Content-Length : 0" , How to Fix ?

    - by ServerZilla
    Hi, I am using an Apache server and it is sending Content-Length = 0, which is preventing file downloads; see http://www.youtubedroid.com/download2.php?v=%5F3XcMEKNws0&title=Akhila+%2CMumbai+reloaded%2CSuper+dancer+2&hq=0 . Here are my .htaccess contents:

        SetEnv no-gzip dont-vary

    Here are the headers sent by the server:

        HTTP/1.1 200 OK
        Date: Tue, 15 Dec 2009 06:12:11 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4
        X-Powered-By: PHP/5.2.11
        Content-Description: File Transfer
        Content-Disposition: attachment; filename="Akhila ,Mumbai reloaded,Super dancer 2.mp3"
        Content-Transfer-Encoding: binary
        Expires: 0
        Cache-Control: must-revalidate, post-check=0, pre-check=0
        Pragma: public
        X-Sendfile: ./tmp/64eb3b185e38af95c15405ffb0606e76.mp3
        Content-Length: 0
        Keep-Alive: timeout=5, max=95
        Connection: Keep-Alive
        Content-Type: application/octet-stream

    Please tell me how to fix this.
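    A hedged reading of those headers (an inference, not a confirmed answer): the X-Sendfile header reaching the client verbatim suggests Apache never intercepted it — mod_xsendfile strips the header and streams the file itself when it is active. A configuration sketch, where module path and directory are assumptions:

        LoadModule xsendfile_module modules/mod_xsendfile.so
        XSendFile on
        # mod_xsendfile expects a resolvable absolute path, so the PHP side
        # should emit e.g. /var/www/tmp/<hash>.mp3 rather than ./tmp/...,
        # with that directory whitelisted here:
        XSendFilePath /var/www/tmp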

    Read the article

  • 403 Forbidden

    - by demas
    Here is my Nginx config:

        user pass users;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    Here is the Nginx log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error, and how can I fix it?

    Read the article

1 2 3 4 5 6 7  | Next Page >