Search Results

Search found 2680 results on 108 pages for 'soft 404'.

Page 59/108 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • erlyvideo server doesn't start automatically after reboot

    - by electroid
    I have installed the erlyvideo server on Ubuntu 9.10 Karmic Koala. Everything works fine, but after a server reboot I have to start the erlyvideo server manually with /etc/init.d/erlyvideo start. I have already tried update-rc.d, and I think erlyvideo should start automatically by default. Any help will be appreciated. Here is the erlyvideo startup script located in /etc/init.d/erlyvideo: #!/bin/sh ### BEGIN INIT INFO # Provides: erlyvideo # Required-Start: $local_fs $network # Required-Stop: $local_fs $network # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: starts the erlyvideo streaming server # Description: starts the erlyvideo using erlang system ### END INIT INFO case "$1" in start) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; stop) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; restart) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; soft-restart) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; upgrade) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; reconfigure) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; reboot) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; ping) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; console) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; attach) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;; attach-erl) cd /opt/erlyvideo && ./erts-5.8.4/bin/erl -name [email protected] -remsh [email protected] ;; *) echo $"Usage: $0 {start|stop|restart|soft-restart|upgrade|reboot|ping|console|attach}" exit 1 esac exit 0 And I have found S91erlyvideo in /etc/rc2.d next to S91apache2, which starts just fine on every reboot.
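    One thing worth checking (a sketch, not a confirmed fix): the runlevel links may be stale, or the script may fail when run non-interactively at boot. Re-registering the script and testing it by hand would look roughly like this:

        sudo update-rc.d -f erlyvideo remove     # clear any existing rc*.d links
        sudo update-rc.d erlyvideo defaults 91   # recreate them from the LSB header
        ls -l /etc/rc2.d/ | grep erlyvideo       # S91erlyvideo should point at ../init.d/erlyvideo
        sudo /etc/init.d/erlyvideo start         # confirm the script itself works outside a login shell
        tail -n 50 /var/log/boot.log 2>/dev/null # assumed log location; look for erlyvideo errors at boot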

    Read the article

  • Nginx no static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by hosting admins (whom I cannot contact anymore). A few days ago I updated Nginx to the latest version (the admin told me it was safe to do). But after that, my site serves only HTML content, no CSS, images, or JS. If I try to open an image I get the message "Welcome to nginx" (the same thing happens if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are in the same directory they used to be in before. I have been googling for solutions and tried changing things in /etc/nginx/, but no luck. I feel that this is some minor configuration problem, any ideas? EDIT: Here is the /etc/nginx/nginx.conf file content: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Here is the /etc/nginx/sites-enabled/default file content: server { #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; deny all; } # Only for nginx-naxsi : process denied requests #location /RequestDenied { # For example, return an error code #return 418; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri
$uri/ /index.html; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri $uri/ /index.html; # } #}
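    A minimal sketch of a vhost for the static subdomain, to go in /etc/nginx/sites-enabled/ (the server_name is taken from the question; the root path is an assumption -- point it at the directory that actually holds the files):

        server {
            listen 80;
            server_name static.mysitedomain.com;
            root /var/www/mysitedomain;   # assumed document root containing the existing static files
            location / {
                try_files $uri =404;
            }
        }

    After adding it, "nginx -t && service nginx reload" picks it up. The "Welcome to nginx" page for static.mysitedomain.com suggests those requests are currently falling through to the packaged default vhost, which an update will typically re-enable.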

    Read the article

  • Setting up thttpd to run vqadmin or qmailadmin...keep getting 404s

    - by Ian
    I run nginx for my web server but wanted to quickly toss up thttpd so I could do some maintenance using either vqadmin or qmailadmin. Those files are located at: /usr/local/apache/cgi-bin/qmailadmin and /usr/local/apache/cgi-bin/vqadmin/vqadmin.cgi. My /etc/thttpd.conf is: host=127.0.0.1 port=8000 user=apache logfile=/var/log/thttpd.log pidfile=/var/run/thttpd.pid dir=/usr/local/apache/cgi-bin nochroot cgipat=**.cgi When I use lynx to go to http://127.0.0.1:8000/cgi-bin/vqadmin/vqadmin.cgi, thttpd tosses a 404. Any idea how to get this working? Many thanks.
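    Since dir= already points at /usr/local/apache/cgi-bin, the extra /cgi-bin/ in the URL is looked up inside that directory and misses. Two things worth trying (a sketch, not verified against this thttpd build):

        # 1. Request the script relative to dir=, without the /cgi-bin/ prefix:
        lynx http://127.0.0.1:8000/vqadmin/vqadmin.cgi
        # 2. Or keep the /cgi-bin/ URLs by moving the document root up one level in /etc/thttpd.conf:
        #        dir=/usr/local/apache
        #        cgipat=/cgi-bin/**.cgi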

    Read the article

  • QR vcard with a photo

    - by Cayetano Gonçalves
    I am about to get a ton of business cards printed from my new corporation, and I am allowed to have a QR code on it, and I would really like to be able to add a photo to be attached to the vcard. I know in the raw vcard you can add a photo like this: BEGIN:VCARD VERSION:4.0 N:Gump;Forrest;;; FN: Forrest Gump ORG:Bubba Gump Shrimp Co. TITLE:Shrimp Man PHOTO:http://www.example.com/dir_photos/my_photo.gif TEL;TYPE=work,voice;VALUE=uri:tel:+1-111-555-1212 TEL;TYPE=home,voice;VALUE=uri:tel:+1-404-555-1212 ADR;TYPE=work;LABEL="42 Plantation St.\nBaytown, LA 30314\nUnited States of America" :;;42 Plantation St.;Baytown;LA;30314;United States of America EMAIL:[email protected] REV:20080424T195243Z END:VCARD But I can't find any way to include the photo field into a QR code, any suggestions would be greatly appreciated.
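    For getting that vCard into a QR code, a plain text-mode QR of the whole vCard usually works, and keeping PHOTO as a URL (as above) keeps the payload small enough to scan reliably; embedding base64 image data generally makes the code too dense for a business card. A sketch with the qrencode CLI (assuming it is installed and the vCard is saved as card.vcf):

        qrencode -o qr-vcard.png -l M < card.vcf   # -l M = medium error correction; denser payloads need a larger printed size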

    Read the article

  • Source of Unexplained Requests in Server Logs

    - by Synetech inc.
    Hi, I am baffled by some entries in my server logs, specifically the web-server logs. Other than normal, expected traffic, I have noticed three types of request errors (eg 404, etc.): Broken links, ie links from old, external pages that point to pages that are no longer here Sequences of probes, ie some jerk trying to hack in by scanning my server for a series of exploitable admin type pages and such What appear to be completely random requests for things that have never existed on the server or even have anything to do with the server, and appear by themselves (ie not a series of requests like the probes) Could it somehow be a mistyped URL or IP? That’s about the only thing that I can think of, but still, how could I get a request on say, foobar.dyndns.org (12.34.56.78) for something like www.wantsfly.com/prx2.php or /MNG/LIVE or http://ant.dsabuse.com/abc.php?auth=45V456b09m&strPassword=X%5BMTR__CBZ%40VA&nLoginId=43. (Those are a few actual requests from my logs.) Can someone please explain scenario three to me? Thanks.

    Read the article

  • Why is System listening on port 8000?

    - by poke
    I noticed by accident today that I have some unknown webserver listening on port 8000. Opening http://localhost:8000 just returns 404, so I don’t get any hint what exactly is listening there. I’ve used netstat -ano to find out, that the process with PID 4 is listening on that port. PID 4 is the System process. Why is my system listening on that port, without me actually starting a server? Or how can I find out what exactly is listening there? I’ve read the related questions about port 80 and port 443, but none of the services mentioned there were running on my system. And the other suggestions there didn’t work either. edit: The HTTP response of the server lists Microsoft-HTTPAPI/2.0 as the server. edit2: As requested by Shadok, here are the entries of TCPView with 8000 as the port. But I doubt it’s useful at all…
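    A sketch of the usual way to chase this down on Windows: the Microsoft-HTTPAPI/2.0 banner means the kernel HTTP listener (http.sys, surfaced as PID 4 / System) is answering for a URL that some service has reserved, and these commands list those registrations (run from an elevated prompt):

        netsh http show servicestate       # active URL groups and the process IDs behind them
        netsh http show urlacl             # URL reservations, e.g. http://+:8000/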

    Read the article

  • Website: Requested filename being rewritten

    - by horatio
    I have been unable to find an answer via search. I have a website (I do not administer the servers) where the server will serve a different file than the one requested. I first noticed this when using a filename of the following form: _foo.php (single underscore) If I request foo.php (does not exist), the server returns _foo.php. By "returns" I mean that the server decides I meant _foo.php, processes the php file, and serves the output. If I request afoo.php, zfoo.php, or even __foo.php (two underscores) (these files do not exist) the server returns _foo.php. If I request aafoo.php, the server returns 404. To sum up: the server seems to be doing a partial filename match. My question is: what is happening and is this accepted behavior for a web server (or standard behavior of a common mod/package/etc)?
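    If the host runs Apache, this "close enough" matching is exactly what mod_speling (CheckSpelling) does -- it accepts filenames that differ from an existing file by one character, which matches _foo.php vs foo.php, afoo.php, and __foo.php but not aafoo.php -- and Options MultiViews can cause similar surprises. Where the host permits overrides and has the module loaded, a sketch of the .htaccess lines that turn both off (otherwise it is something to ask the server admins about):

        CheckSpelling Off
        Options -MultiViews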

    Read the article

  • htaccess IP blocking with custom 403 Error not working

    - by mrc0der
    I'm trying to block everyone but one IP address from my site on a server running Apache and CentOS. My setup follows the example below. My server: `http://www.myserver.com/` My .htaccess file <limit GET> order deny,allow deny from all allow from 176.219.192.141 </limit> ErrorDocument 403 http://www.google.com ErrorDocument 404 http://www.google.com When I visit http://www.myserver.com/ from an invalid IP, it gives me a generic 403 error. When I visit http://www.myserver.com/page-does-not-exist/ it redirects me correctly to http://www.google.com, but I can't figure out why the 403 error doesn't redirect me too. Anyone have any ideas?
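    It is hard to say from here why the 403 case ignores the remote ErrorDocument (one common gotcha is that the deny also applies while Apache is handling the error), but a sketch that sidesteps it by doing the block-and-redirect with mod_rewrite in the same .htaccess (the allowed IP is the one from the question):

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^176\.219\.192\.141$
        RewriteRule ^ http://www.google.com/ [R=302,L]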

    Read the article

  • Ubuntu Natty 11.04, Turning the wireless switch off; switches it off permanently!

    - by ZiGi
    I'm using an HP Pavilion dv2000. I turned the wifi switch off by mistake; the LED turned orange and the wifi got disconnected. Now when I turn the switch on, it stays orange and the wifi still isn't functional. This happened before; I found a fix that worked by searching Google. It was done via terminal commands and I didn't have to download anything, but I can't find that solution anymore! wlan0 shows up when I use: :~$iwconfig #BLA BLA BLA #... wlan0 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off more results: :~$ sudo ifconfig wlan0 up SIOCSIFFLAGS: Operation not possible due to RF-kill :~$ rfkill list all 1: phy0: WirelessLAN Soft blocked: yes Hard blocked: yes :~$ sudo rfkill unblock all :~$ rfkill list all 1: phy0: WirelessLAN Soft blocked: no Hard blocked: yes :~$ sudo ifconfig wlan0 up SIOCSIFFLAGS: Operation not possible due to RF-kill It's still hard blocked, even though the switch is turned on; toggling it gives the same result either way. A pointer to a page with a working solution would be a much appreciated answer!
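    One sequence worth trying (a sketch only; whether it helps depends on the chipset, and the driver module name below is an assumption -- check lspci -k to see which module the card actually uses):

        sudo rfkill unblock all
        sudo modprobe -r b43            # or wl / ath9k / iwlwifi, whichever lspci -k reports
        sudo modprobe b43
        sudo service network-manager restart
        rfkill list all                 # the hard block should read "no" once the switch is honoured again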

    Read the article

  • Changing open-files-limit in mysql 5.5

    - by davidv
    I'm having an issue with MySQL 5.5 running on Ubuntu 12.04 and the open-files-limit parameter. I recently noticed some problems due to the 1024 limit, and the main system limit was indeed set to 1024, so I modified /etc/security/limits.conf with the following: * soft nofile 32000 * hard nofile 32000 root soft nofile 32000 root hard nofile 32000 After that I checked the ulimit value for root and even for the mysql user; both returned the new value of 32000, so I assume the change took effect. I also changed the value in the my.cnf file, setting open-files-limit to 24000, like this: open-files-limit = 24000 Now comes the odd part: when I restart the mysql service and check the open_files_limit variable, it is still set to 1024, so I'm having the same problems as before (obviously). I tried using open-files-limit instead of open_files_limit in the my.cnf config file, with the same result, BUT if I bypass the service command and start only mysqld (no additional parameters), the service starts and the parameter comes back as 32000... I don't know where it's taking that value from, as it's not set in my.cnf and it's not being given on the command line, at least not by me. Any ideas why the change isn't working and how to solve it the normal way (launching it through service...)?
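    A plausible explanation (worth verifying on the box): limits.conf is applied by PAM at login, so an upstart-managed "service mysql start" on 12.04 never sees the 32000 ulimit, while running mysqld from an interactive root shell does. A sketch of the upstart-side fix:

        echo "limit nofile 32000 32000" | sudo tee /etc/init/mysql.override
        sudo service mysql restart
        mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"

    Note that mysqld may still adjust the reported value based on table_open_cache and max_connections, so the number can legitimately differ from the 24000 requested in my.cnf.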

    Read the article

  • How do I remove a URL from Google without having to have a Google E-mail Account

    - by PP
    Really simple question. I do not want a Google account. I just want Google to stop making requests every 2 minutes for a URL it should never have known about (apparently Google harvests URLs from search requests as well as private e-mails, not just from actual web pages). But when I search Google help for removing URLs it appears I have to use their "webmaster tools" which require logging into a GMail account! How do I tell Google not to index my URL without becoming a customer? Note: I already return 404 for the URLs in question using a rewrite rule - this appears to make zero difference to the crawler which continually attempts to fetch the page every 2 minutes.
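    Two things that usually stop the crawling without any Google account (a sketch; the path below is a placeholder for the real URL): disallow the path in robots.txt, which Googlebot re-reads on its own, or answer 410 Gone instead of 404, which Google treats as a stronger "remove this" signal.

        # Append to the site's robots.txt (the /var/www path is an assumption):
        printf 'User-agent: Googlebot\nDisallow: /the-unwanted-path\n' | sudo tee -a /var/www/robots.txt
        # Or answer 410 Gone instead of 404 (Apache example): Redirect gone /the-unwanted-path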

    Read the article

  • Using Apache Environment Variables to set custom ErrorDocument

    - by Tad
    I've got a set of RewriteCond rules that test for various mobile devices and then set environment variables like "env=device:.iphone" or "env=device:.smartphone" if the user agent matches an iPhone or Android device. I'm now trying to redirect the user to custom-styled 404/500 server error pages for each device, by way of the error pages. Ideally I'd like to be able to test for a variable being present and then write in a custom ErrorDocument string, but that doesn't seem to work. Any ideas how I can construct if/else tests in an Apache conf file for environment variables?
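    On Apache 2.4+ the expression engine can branch on environment variables directly, which covers the if/else part. A sketch only -- the error page paths and the env value format are assumptions, whether ErrorDocument is honoured inside <If> can depend on the exact 2.4 release, and a 2.2 server would need a different trick (for example rewriting the error URL itself):

        <If "reqenv('device') =~ /iphone/">
            ErrorDocument 404 /errors/404-iphone.html
        </If>
        <Else>
            ErrorDocument 404 /errors/404.html
        </Else>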

    Read the article

  • MediaTemple Django Bad Gateway

    - by Eeyore
    I have a site running on a GS (Grid-Service) server at MediaTemple. It's a Django/PostgreSQL setup. From time to time I get a Bad Gateway error and I can't figure out what's causing it. What can cause this error? What else can I do to find the cause of the problem? url.access-deny = ( "~", ".inc" ) fastcgi.server = ( "/main.fcgi" => ( "main" => ( "socket" => "/var/tmp/" + appname + ".sock", # don't change this "check-local" => "disable", ) ) ) alias.url = ( "/media/" => "/home/xxx/data/python/django/django/contrib/admin/media/", "/static/" => "/home/xxx/containers/django/site/static/", ) url.rewrite-once = ( "^(/media.*)$" => "$1", "^(/static.*)$" => "$1", "^/favicon\.ico$" => "/media/favicon.ico", "^(/.*)$" => "/main.fcgi$1", ) server.error-handler-404 = "/main.fcgi"
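    With lighttpd's fastcgi module (which this config appears to be for), a Bad Gateway generally means the FastCGI backend behind /main.fcgi died or stopped answering on its socket. Some checks worth running when it happens (a sketch -- the log location on (gs) is an assumption):

        ls -l /var/tmp/*.sock              # is the Django FastCGI socket still present and recent?
        ps aux | grep '[m]ain.fcgi'        # is the backend process still running?
        tail -n 50 ~/logs/error.log        # assumed error-log path; look for tracebacks or memory kills around the 502s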

    Read the article

  • mod_rewrite RewriteRule is not working

    - by buggy1985
    Hi, This is a follow-up of this question: Rewrite URL - how to get the hostname and the path? And a copy of this: mod_rewrite RewriteRule is not working I got this Rewrite Rule: RewriteEngine On RewriteRule ^(http://[-A-Za-z0-9+&@#/%=~_|!:,.;]*)/([-A-Za-z0-9+&@#/%=~_|!:,.;]*)\?([A-Za-z0-9+&@#/%=~_|!:,.;]*)$ http://http://www.xmldomain.com/bla/$2?$3&rtype=xslt&xsl=$1/$2.xsl it seems to be correct, and exactly what I need. But it doesn't work on my server. I get a 404 page not found error. mod_rewrite is enabled, as the following simple rule is working fine: RewriteEngine On RewriteRule ^page/([^/\.]+)/?$ index.php?page=$1 [L] Can you help? Thanks
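    The likely reason the first rule never fires: a RewriteRule pattern is matched against the URL path only (e.g. /page/foo), never against a full "http://host/..." string, so a pattern beginning with ^(http:// cannot match anything (and the "http://http://" in the target looks like a typo as well). A sketch of the same idea using the request's host and query string from server variables instead -- the capture layout is an assumption about what the incoming URLs look like:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^(.+)$
        RewriteRule ^/?([^/]+)$ http://www.xmldomain.com/bla/$1?%1&rtype=xslt&xsl=http://%{HTTP_HOST}/$1.xsl [R,L]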

    Read the article

  • starting Tomcat from Eclipse

    - by Krns
    I've just installed Eclipse for Java EE, downloaded Apache Tomcat 6.0.29, and added it in the Eclipse preferences. I followed a tutorial, and all I get when I run it is a 404 page with the description "The requested resource (/WebService/) is not available." I can't even access the Tomcat root at localhost:8080. However, if I start Tomcat from the console, it works fine and the home page and examples are accessible. I'm a complete noob at Tomcat and Eclipse (I've been working with NetBeans before), so I've got no idea what's wrong.

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS. But curl is failing to verify the SSL cert. $ curl --verbose --capath ./certs/ --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: ./certs/ * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS alert, Server hello (2): * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed * Closing connection #0 curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. I know about the -k option. But I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains: A Verisign intermediate cert Two self-signed certs The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert) curl is able to verify the SSL cert. $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: ./certs/verisign-intermediate-ca.crt CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using RC4-SHA * Server certificate: * subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com * start date: 2011-04-17 00:00:00 GMT * expire date: 2012-04-15 23:59:59 GMT * common name: example.com (matched) * issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3 * SSL certificate verify ok. 
> HEAD / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > < HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found < Cache-Control: must-revalidate,no-cache,no-store Cache-Control: must-revalidate,no-cache,no-store < Content-Type: text/html;charset=ISO-8859-1 Content-Type: text/html;charset=ISO-8859-1 < Content-Length: 1267 Content-Length: 1267 < Server: Jetty(7.2.2.v20101205) Server: Jetty(7.2.2.v20101205) < * Connection #0 to host example.com left intact * Closing connection #0 * SSLv3, TLS alert, Client hello (1): In addition, if I try hitting one of the sites using a self signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and it is properly hash. Finally, I am able to verify the SSL cert with openssl, using its -CApath option. $ openssl s_client -CApath ./certs/ -connect example.com:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify return:1 depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 verify return:1 depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 verify return:1 depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com verify return:1 --- Certificate chain 0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- Server certificate -----BEGIN CERTIFICATE----- <cert removed> -----END CERTIFICATE----- subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- No client certificate CA names sent --- SSL handshake has read 1563 bytes and written 435 bytes --- New, TLSv1/SSLv3, Cipher is RC4-SHA Server public key is 2048 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-SHA Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717 Session-ID-ctx: Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2 Key-Arg : None Start Time: 1303258052 Timeout : 300 (sec) Verify return code: 0 (ok) --- QUIT DONE How can I get curl to verify this cert using the --capath option?
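    Since openssl -CApath accepts the same directory, the difference is probably in how this curl build resolves --capath rather than in the certificates themselves. Two checks worth making (a sketch):

        curl --version     # a curl built against NSS (rather than OpenSSL) ignores --capath entirely
        openssl version    # the c_rehash hash format changed between OpenSSL 0.9.8 and 1.0.0, so
        c_rehash ./certs/  # re-hash with the same OpenSSL generation that curl links against
        ls -l ./certs/     # the VeriSign intermediate should show up behind a <hash>.0 symlink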

    Read the article

  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. It will burst for a moment near the expected line speed, and then sit idle for several seconds, with very little traffic measured, in the low kb/sec. I/O wait peaks on writes. When reading from the NFS mount, the system operates at expected speeds (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS). The issue persists between stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue. This share is not in regular use and thus not consuming resources. NFS Server: cat /etc/exports /data2 10.1.20.86(rw,no_subtree_check,async,all_squash) cat /sys/block/sdb/queue/scheduler noop [deadline] cfq cat /etc/default/nfs-kernel-server RPCNFSDCOUNT=8 RPCNFSDPRIORITY=0 RPCMOUNTDOPTS=--manage-gids NEED_SVCGSSD= RPCSVCGSSDOPTS= NFS Client: cat /etc/fstab 10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2 cat /sys/block/sdb/queue/scheduler noop [deadline] cfq cat /proc/mounts 10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0 This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
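    A quick way to tell whether the slowdown lives in the NFS layer or underneath it in iSCSI/JFS (a sketch; the test file paths are examples):

        # On the server, write directly to the exported filesystem, bypassing NFS:
        dd if=/dev/zero of=/data2/ddtest bs=1M count=1024 conv=fdatasync
        # On the client, write the same amount through the NFS mount:
        dd if=/dev/zero of=/root/incoming/ddtest bs=1M count=1024 conv=fdatasync

    If only the second test crawls, the export and mount options are the place to experiment (for instance a different wsize on the client, or no_wdelay on the export); if both crawl, the iSCSI-backed JFS volume itself is the suspect. These are directions to test, not known fixes.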

    Read the article

  • Symbolic link modification for HP unix

    - by kalpesh
    Hi David Zaslavsky, I was recently working on modifying the symbolic links for some particular files, and while searching on the internet I saw your post. I am trying to use this script which you had posted: find /home/user/public_html/qa/ -type l \ -lname '/home/user/public_html/dev/*' -printf \ 'ln -nsf $(readlink %p|sed s/dev/qa/) $(echo %p|sed s/dev/qa/)\n'\ script.sh So I tried to modify your script for my problem in an HP-UX environment, but it seems that the -lname option does not work on HP-UX. Do you know of something equivalent that I can use? Just to give you an idea of my problem: I want to change all the symbolic links inside a particular folder. New symbolic link -- /base/testusr/scripts Old symbolic link -- /base/produsr/scripts Folder "A" contains more than 100 different files with soft links that point to this path -- /base/produsr/scripts. But what I want is for the files inside folder A to point to this soft link -- /base/testusr/scripts. I am trying to achieve this on HP-UX and would really appreciate your help on this.
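    HP-UX find has neither -lname nor -printf (both are GNU extensions). A portable sketch of the same idea, reading each link's target with ls -l and relinking the ones that point at the old prod path (the folder path is a placeholder; it assumes link targets contain no spaces, and is worth testing on a copy first):

        find /path/to/A -type l | while read link; do
            target=`ls -l "$link" | awk '{print $NF}'`    # last field of "ls -l" on a symlink is its target
            case "$target" in
            /base/produsr/scripts*)
                newtarget=`echo "$target" | sed 's|/base/produsr/scripts|/base/testusr/scripts|'`
                rm -f "$link" && ln -s "$newtarget" "$link"
                ;;
            esac
        done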

    Read the article


    - by Jay
    Windows server, running Apache. In my Apache conf, I have AllowOverride None for the root of a site and then I have a subdirectory set to AllowOverride All: <Directory /> AllowOverride None </Directory> <Directory "/safe/"> AllowOverride All </Directory> However, when I try to set up a rewrite rule in the subdirectory's .htaccess file, nothing happens; I just get a 404 page not found error. Example: RewriteEngine On RewriteRule (.*) /blah?test=$1 [R=302,NC,NE,L] Rewriting URLs works fine from the root via the Apache conf. I don't understand why the rule is ignored. I don't want to do the URL rewriting within the conf because in this case I may need to change the redirects constantly and don't want to reload the server every time a change is made. I also don't want to affect server performance by enabling .htaccess files site-wide, just in the subdirectory where I need it.
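    One thing that stands out (an educated guess, not a confirmed diagnosis): <Directory> takes a filesystem path, so <Directory "/safe/"> matches a literal \safe directory at the drive root, not the safe folder under the DocumentRoot -- which would leave AllowOverride None in force there and the .htaccess silently ignored. A sketch of the intended block, with an assumed DocumentRoot:

        <Directory "C:/Apache/htdocs/safe">
            AllowOverride All
        </Directory>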

    Read the article

  • Apache .htaccess problem: No input file specified.

    - by Michal M
    Hello everyone, can someone help me with this? I feel like I've been hitting my head against a wall for over 2 hrs now. I've got Apache 2.2.8 + PHP 5.2.6 installed on my machine, and the .htaccess with the code below works fine, no errors. RewriteEngine on RewriteCond $1 !^(index\.php|css|gfx|js|swf|robots\.txt|favicon\.ico) RewriteRule ^(.*)$ /index.php/$1 [L] The same code on my hosting provider's server gives me a 404 error code and outputs only: No input file specified. index.php IS there. I know they have Apache installed (I cannot find version info anywhere) and they're running PHP v5.2.8. I'm on Windows XP 64-bit; they're running some Linux and PHP in CGI/FastCGI mode. Can anyone suggest what could be the problem? PS: in case it's important, this is for CodeIgniter to work with friendly URLs.
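    "No input file specified" with PHP running as CGI/FastCGI usually means the rewritten path reached PHP as PATH_INFO, which some hosts' PHP-CGI builds don't resolve. The usual CodeIgniter-style workaround is to pass the path as a query string instead (note the added "?") -- a sketch of the adjusted .htaccess:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|css|gfx|js|swf|robots\.txt|favicon\.ico)
        RewriteRule ^(.*)$ /index.php?/$1 [L]

    CodeIgniter's $config['uri_protocol'] setting may also need to change to match; that detail depends on the CI version in use.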

    Read the article

  • apache/debian squeeze server loading directory listing instead of website

    - by Diego
    When I navigate to mywebsite.com/ I see an Apache directory listing showing a folder called mywebsite.com/; clicking it takes me to mywebsite.com/mywebsite.com, which doesn't exist, so WordPress shows me a 404 error. I'm trying to host a WordPress site at mywebsite.com/ but I think I have some kind of directory listing wrong somewhere, though I'm pretty sure I've set up my /etc/apache2/sites-available/mywebsite.com correctly: <VirtualHost *:80> ServerName mywebsite.com ServerAdmin [email protected] DocumentRoot /var/www/mywebsite.com/ <Directory /> Options FollowSymLinks AllowOverride All </Directory> ErrorLog /var/log/apache2/error.log CustomLog /var/log/apache2/access.log combined LogLevel warn </VirtualHost>
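    The symptom (a listing of /var/www with the site folder inside it) usually means the packaged default vhost is still answering instead of this one. A few Debian-side checks worth running (a sketch):

        ls /etc/apache2/sites-enabled/      # is "default" / "000-default" still linked, and is mywebsite.com linked at all?
        sudo a2ensite mywebsite.com
        sudo a2dissite default              # optional: stop the stock vhost from catching the requests
        apache2ctl -S                       # shows which VirtualHost actually serves mywebsite.com
        sudo service apache2 reload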

    Read the article

  • php programming

    - by HARSHA
    Hi, I am learning PHP and I downloaded XAMPP. The Apache server and MySQL are running properly in the XAMPP Control Panel. I tried a simple hello-world program: I created a new folder in htdocs and saved my program in that new folder with a .php extension. But when I run the program it shows an error as follows ------ Object not found! The requested URL was not found on this server. If you entered the URL manually please check your spelling and try again. If you think this is a server error, please contact the webmaster. Error 404 localhost 18-5-2010 11:51:44 Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1
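    The 404 usually just means the URL doesn't include the new folder name. As a sketch (folder and file names are examples): if the file is saved as C:\xampp\htdocs\myfolder\hello.php, the browser URL must be http://localhost/myfolder/hello.php, and hello.php can be as small as:

        <?php
        // minimal test page
        echo "hello world";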

    Read the article

  • How to exclude a sub-folder from HTaccess RewriteRule

    - by amb9800
    I have WordPress installed in my root directory, for which a RewriteRule is in place. I need to password-protect a subfolder ("blue"), so I set the htaccess in that folder as such. Problem is that the root htaccess RewriteRule is applying to "blue" and thus I get a 404 in the main WordPress site (instead of opening the password dialog for the subfolder). Here's the root htaccess: RewriteEngine on <Files 403.shtml> order allow,deny allow from all </Files> <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> I tried inserting this as the second line, to no avail: RewriteRule ^(blue)($|/) - [L] Also tried inserting this before the index.php RewriteRule: RewriteCond %{REQUEST_URI} !^/blue/ That didn't work either. Also inserted this into the subfolder's htaccess, which didn't work either: <IfModule mod_rewrite.c> RewriteEngine off </IfModule> Any ideas?
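    Another angle worth trying (a guess based on how Basic auth interacts with WordPress rewrites): when the password prompt should appear, Apache first looks for a 401 error document, and that internal request can fall through to the root RewriteRule and surface as WordPress's 404. Telling Apache to use its built-in 401 page, in either the root or the blue/.htaccess, often lets the dialog show up:

        ErrorDocument 401 default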

    Read the article

  • Reverse proxy for a subdirectory in nginx

    - by Maple
    I want to set up a reverse proxy on my VPS for my Heroku app (http://lovemaple.heroku.com), so that if I visit mysite.com/blog I get the content of http://lovemaple.heroku.com. I followed the instructions on the Apache wiki: location /couchdb { rewrite /couchdb/(.*) /$1 break; proxy_pass http://localhost:5984; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } I changed it to fit my situation: location /blog { rewrite /blog/(.*) /$1 break; proxy_pass http://lovemaple.heroku.com; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } When I visit mysite.com/blog, the page shows up, but the JS/CSS files cannot be fetched (404). Their links become mysite.com/style.css rather than mysite.com/blog/style.css. What's wrong and how can I fix it?
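    The 404s happen because the proxied pages reference their assets with absolute paths like /style.css; those requests never start with /blog, so they bypass the proxy location entirely. Two common ways out: make the Heroku app emit /blog/-prefixed (or relative) URLs, or also proxy the asset paths. A sketch of the second option -- the asset prefixes are assumptions about what the blog actually uses:

        location ~ ^/(stylesheets|javascripts|images)/ {
            proxy_pass http://lovemaple.heroku.com;
            proxy_set_header Host lovemaple.heroku.com;   # Heroku routes requests by Host header
        }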

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >