Search Results

Search found 4830 results on 194 pages for 'conf'.

Page 81/194

  • Enforce using proxy in all browsers

    - by Petr Marek
    I've configured Squid with SquidGuard, and when the proxy is set in the browser it works fine. But I want to enforce use of the proxy (probably with iptables) in all browsers; right now a user can simply disable it in the browser settings. My setup is one standalone PC with Ubuntu running Squid and SquidGuard, and on this very same device I want to somehow enforce use of the proxy. The Squid conf file has:

        http_port 3128 transparent

    Thanks!
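
    A minimal sketch of the usual approach, assuming traffic reaches this box on eth0 and that Squid runs as the Ubuntu "proxy" user (both assumptions), so that browsers cannot bypass the transparent port configured above:

        # redirect HTTP generated on this machine into Squid, skipping Squid's own
        # outbound traffic to avoid a redirect loop
        iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner ! --uid-owner proxy -j REDIRECT --to-ports 3128
        # redirect HTTP from any other hosts routed through this box as well
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

    Note that this only covers plain HTTP on port 80; HTTPS cannot be transparently intercepted by a stock Squid setup and would need to be blocked or handled separately.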

  • Disable MySQL startup in Ubuntu 10.04

    - by bryanhogan
    Hi all, I want to prevent MySQL from starting on boot in Ubuntu 10.04. I have used update-rc.d -f mysql remove and confirmed that there is no link to the /etc/init.d/mysql script in any of the rc?.d directories. I also ran sysv-rc-conf, and it still shows MySQL being called as part of the rc.d scripts. It is still starting on boot. How do I disable it? Regards, Bryan
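
    A hedged observation: on Ubuntu 10.04 the mysql service is launched by an Upstart job (/etc/init/mysql.conf) rather than by the SysV rc?.d links, so update-rc.d does not affect it. A sketch of two common ways to disable the Upstart job:

        # option 1: comment out the "start on ..." stanza in /etc/init/mysql.conf
        #           (the job stays defined, so "sudo start mysql" still works on demand)
        sudoedit /etc/init/mysql.conf
        # option 2: move the job definition out of Upstart's way entirely
        sudo mv /etc/init/mysql.conf /etc/init/mysql.conf.disabled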

  • Customize rsyslogd messages to show the sender of the message; not the receiver

    - by Nimmy Lebby
    I'm forwarding the WiFi router's log messages to our sysadmin box (sb3). This is the stanza in /etc/rsyslog.conf:

        # WiFi router log
        :fromhost-ip, isequal,'10.3.291.2'     /var/log/wifi-router.log
        & ~

    However, the log looks like this:

        Dec 23 10:41:58 sb3 dnsmasq-dhcp[253]: DHCPACK(br0) 10.3.292.133 xx:xx:xx:xx:xx:xx dg-ipad

    I want to customize this so that anything logged to wifi-router.log does not mention sb3 but instead indicates the sender of the log message. How would I do this?
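
    A sketch of one way to do this with a legacy-format rsyslog template that prints the sending host's address in place of the local hostname (the template name and exact property selection are illustrative):

        # define an output format that records the sender, then bind it to the file action
        $template WifiRouterFormat,"%timegenerated% %fromhost-ip% %syslogtag%%msg%\n"

        # WiFi router log
        :fromhost-ip, isequal,'10.3.291.2'     /var/log/wifi-router.log;WifiRouterFormat
        & ~

    %fromhost% can be used instead of %fromhost-ip% if the resolved name of the router is preferred.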

  • How to setup Munin permissions?

    - by Mark Robinson
    I've just installed Munin on my CentOS server, but I can't get it to output anything to the html directory I set in /etc/munin/munin.conf:

        htmldir /home/mydir/munin

    In /var/log/munin/munin-graph.log I get errors like:

        2011/09/23 12:35:30 [RRD ERROR] Unable to graph /home/mydir/munin/localhost/localhost/memory-year.png : Opening '/home/mydir/munin/localhost/localhost/memory-year.png' for write: Permission denied

    The permissions on /home/mydir/munin are:

        drwxrwxr-x 2 munin munin 4096 Sep 23 12:31 munin
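
    Since the target directory itself is owned and writable by munin, a hedged guess is that the munin user cannot traverse one of the parent directories (home directories are often mode 0700), or that SELinux on CentOS is denying the write. A few checks and an illustrative fix:

        # show the mode of every path component the munin user must traverse
        namei -m /home/mydir/munin
        # try the write exactly as the munin user would
        sudo -u munin touch /home/mydir/munin/testfile
        # hypothetical fix if the parent directory blocks traversal
        chmod o+x /home/mydir
        # and, on CentOS, look for SELinux denials
        grep -i munin /var/log/audit/audit.log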

  • PHP unable to start if "apc.shm_size" has "M" or "G" unit

    - by apasajja
    Using: Ubuntu 10.04, PHP 5.3.10, APC 3.1.3. PHP and APC were installed using the repo below:

        deb http://ppa.launchpad.net/brianmercer/php5/ubuntu lucid main
        deb-src http://ppa.launchpad.net/brianmercer/php5/ubuntu lucid main

    If I put apc.shm_size=3G or apc.shm_size=3000M in /etc/php5/fpm/conf.d/apc.ini, PHP is unable to start. However, if I put only a number, without the M or G unit, it starts and runs. When only a number is given, what unit is assumed? If I put 3000, does that mean 3000 MB?
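
    A hedged note: in APC releases before 3.1.4, apc.shm_size is parsed as a plain integer of megabytes and the M/G shorthand is not understood, which would explain the failure on APC 3.1.3. Something along these lines should be the equivalent setting:

        ; apc.ini sketch for APC 3.1.3, where the value is an integer number of MB
        apc.shm_size=3000
        ; a segment this large can still fail to allocate, e.g. if APC is built to
        ; use System V shared memory and the size exceeds kernel.shmmax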

  • fedora tomcat log file path

    - by Kamil
    My log file location is configured here:

        kamil@localhost tomcat$ grep "logs/" ./*
        ./log4j.properties:log4j.appender.R.File=${catalina.home}/logs/tomcat.log

    My CATALINA_HOME is:

        kamil@localhost tomcat$ sudo grep "CATALINA" ./*
        ...
        ./tomcat.conf:CATALINA_HOME="/usr/share/tomcat"

    That suggests my log file should be here, and there it is:

        kamil@localhost tomcat$ sudo ls /usr/share/tomcat/logs/ | grep .out
        catalina.out

    So why can't I start the server?

        kamil@localhost tomcat$ sudo tomcat start
        /usr/sbin/tomcat: line 30: /logs/catalina.out: No such file or directory
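
    A hedged reading of that last error: ${CATALINA_HOME} is expanding to an empty string inside /usr/sbin/tomcat, so the log path degenerates to /logs/catalina.out. That usually means the script was invoked directly, without whatever normally sources tomcat.conf first. Two ways to supply the variable (the value is taken from the grep above and may differ on your box):

        # let the init machinery source tomcat.conf and set CATALINA_HOME
        sudo service tomcat start
        # or provide it explicitly when calling the script by hand
        sudo env CATALINA_HOME=/usr/share/tomcat /usr/sbin/tomcat start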

  • load a php page with a cron job

    - by s2xi
    I am using a cron job to reload my httpd service after a subdomain is created. The problem is that when the reload happens, the page that registers the user throws a server error. I was wondering if I could work around this with another cron task. My logic would be: reload httpd after a .conf file is created, then take the user back to the DocumentRoot of the main page. In use it would be: a user registers, then is automatically taken back to domain.com
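
    One hedged alternative to a second cron task: a graceful restart re-reads newly created .conf files without killing requests that are already being served, so the registration page should not error out at all. A sketch for a system crontab entry, where the flag file is purely hypothetical and would be touched by the registration script when a new subdomain is added:

        # /etc/cron.d/vhost-reload (illustrative): once a minute, reload gracefully
        # only when the flag file signals that a new vhost .conf exists
        * * * * * root [ -f /var/run/vhost-added ] && /usr/sbin/apachectl graceful && rm -f /var/run/vhost-added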

  • Snmp configuration giving me timeout, no response

    - by imaginative
    This is definitely not a firewall issue, as no firewalls sit between the source and target machines. I'm simply setting up SNMP to be queried by a Nagios server. My snmpd.conf looks like the following (I'm using net-snmp on Ubuntu 9.10):

        com2sec nagiossrv  10.10.10.10   public
        group   Nagios     v1            nagiossrv
        view    all        included      .1
        access  Nagios     any           noauth    exact    all    none    none

    When I try to walk it:

        t:/etc/nagios3# snmpwalk -v1 -c public 10.10.10.10 system
        Timeout: No Response from 10.10.10.10

    Any idea where I went wrong with my configuration?
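
    A hedged first check, since the snmpd.conf itself looks plausible: the Debian/Ubuntu net-snmp packages often start snmpd bound only to 127.0.0.1 (via SNMPDOPTS in /etc/default/snmpd), in which case every remote walk times out no matter what the access lines say:

        # confirm which address snmpd is actually listening on
        sudo netstat -ulnp | grep :161
        # if it shows 127.0.0.1:161, remove the trailing 127.0.0.1 from SNMPDOPTS in
        # /etc/default/snmpd (or add "agentAddress udp:161" to snmpd.conf) and restart
        sudo /etc/init.d/snmpd restart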

  • Bind9 virtual subdomains

    - by Steffan
    I am trying to set up virtual subdomains using Bind9, following this tutorial: http://groups.drupal.org/node/16862. I've completed it; basically it involves setting up the zone and modifying the resolv.conf file and the named.conf.local file. I've gotten everything to work, and from my server I am able to ping mydomain.com and test.mydomain.com, and when I do a dig I get the following:

        ; <<>> DiG 9.7.0-P1 <<>> test.mydomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32606
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;test.mydomain.com.            IN    A

        ;; ANSWER SECTION:
        test.mydomain.com.    86400    IN    A     174.###.###.#

        ;; AUTHORITY SECTION:
        mydomain.com.         86400    IN    NS    mydomain.com.

        ;; ADDITIONAL SECTION:
        mydomain.com.         86400    IN    A     174.###.###.#

        ;; Query time: 0 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Wed Jan 19 21:06:01 2011
        ;; MSG SIZE  rcvd: 86

    So it looks like everything is working. However, when I try test.mydomain.com in the browser, expecting it to default for now to mydomain.com, it does not work and I get a "server not found" page in Firefox. I did read elsewhere that you also need to set up a *.mydomain.com alias in your virtualhosts file, but that didn't fix anything. Any other information that I could provide to help troubleshoot, or any troubleshooting suggestions? I am using Ubuntu 10.04 with a typical LAMP setup. The only other things installed on the server are Bind9 and an FTP client.
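
    A hedged observation from the dig output: the answer came from 127.0.0.1 on the server itself, so the test only proves that the local BIND knows the zone. For a browser on another machine, "server not found" usually means that machine's resolver (or the public DNS the domain is delegated to) has no record for test.mydomain.com. A sketch of the two pieces that typically need to exist, with placeholder values:

        ; in the zone file: either explicit records per subdomain or a wildcard
        *.mydomain.com.   IN   A   174.###.###.#

        # and in the Apache virtual host, an alias that matches the subdomains
        <VirtualHost *:80>
            ServerName   mydomain.com
            ServerAlias  *.mydomain.com
            DocumentRoot /var/www/mydomain
        </VirtualHost>

    Running "dig test.mydomain.com @174.###.###.#" from the client machine (querying the server's public address explicitly) helps separate DNS delegation problems from Apache configuration problems.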

  • Samba creates two files on copy of one file

    - by Rudiger
    Hi, I've set up a Samba share on a CentOS system and all works fine, except that whenever I copy a file to a share it creates two files: the actual file, plus what looks to be a log file with ._ prepended to the name. So for example, if I copy index.php it copies that one, plus it creates ._index.php with semi-log-looking info in it. How do I stop Samba doing this? I'm sure it's in smb.conf somewhere, but I can't find it. Cheers
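
    A hedged explanation: those ._ files are almost certainly AppleDouble companions written by a Mac OS X client copying to the share (the Finder stores resource forks and extended attributes that way on non-Apple file servers), so Samba isn't creating them itself. Samba can, however, hide them or refuse them outright; a sketch for the relevant share section of smb.conf (the section name is a placeholder):

        [share]
            ; keep the AppleDouble and .DS_Store files out of directory listings
            hide files = /._*/.DS_Store/
            ; or block their creation entirely
            veto files = /._*/.DS_Store/
            delete veto files = yes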

  • Puppet configuration file on Windows

    - by Jeff Storey
    I'm running puppet on windows as an admin (testing on windows 7, even though it is not officially supported). When I install puppet following the windows installation instructions, no puppet.conf file is generated in C:/ProgramData/PuppetLabs/puppet/etc. I can run puppet agent --genconfig to create one, but regardless of what values I put in there, it doesn't seem to respect them. Is this just a puppet/windows issue? Or am I doing something wrong?
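
    A couple of hedged checks, since Puppet reads its settings from whichever directory it computes as its confdir rather than from a fixed path (the path below is the one from the post and may differ from what --configprint reports):

        REM ask this Puppet build where it actually expects puppet.conf to live
        puppet agent --configprint confdir
        REM write the generated defaults into that directory
        puppet agent --genconfig > "C:\ProgramData\PuppetLabs\puppet\etc\puppet.conf"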

  • Varnish "FetchError no backend connection" error

    - by clueless-anon
    Varnishlog:

         0 CLI          - Rd ping
         0 CLI          - Wr 200 19 PONG 1340829925 1.0
        12 SessionOpen  c 79.124.74.11 3063 :80
        12 SessionClose c EOF
        12 StatSess     c 79.124.74.11 3063 0 1 0 0 0 0 0 0
         0 CLI          - Rd ping
         0 CLI          - Wr 200 19 PONG 1340829928 1.0
         0 CLI          - Rd ping
         0 CLI          - Wr 200 19 PONG 1340829931 1.0
        12 SessionOpen  c 108.62.115.226 46211 :80
        12 ReqStart     c 108.62.115.226 46211 467185881
        12 RxRequest    c GET
        12 RxURL        c /
        12 RxProtocol   c HTTP/1.0
        12 RxHeader     c User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)
        12 RxHeader     c Host: www.mysite.com
        12 VCL_call     c recv lookup
        12 VCL_call     c hash
        12 Hash         c /
        12 Hash         c www.mysite.com
        12 VCL_return   c hash
        12 VCL_call     c miss fetch
        12 FetchError   c no backend connection
        12 VCL_call     c error deliver
        12 VCL_call     c deliver deliver
        12 TxProtocol   c HTTP/1.1
        12 TxStatus     c 503
        12 TxResponse   c Service Unavailable
        12 TxHeader     c Server: Varnish
        12 TxHeader     c Content-Type: text/html; charset=utf-8
        12 TxHeader     c Retry-After: 5
        12 TxHeader     c Content-Length: 418
        12 TxHeader     c Accept-Ranges: bytes
        12 TxHeader     c Date: Wed, 27 Jun 2012 20:45:31 GMT
        12 TxHeader     c X-Varnish: 467185881
        12 TxHeader     c Age: 1
        12 TxHeader     c Via: 1.1 varnish
        12 TxHeader     c Connection: close
        12 Length       c 418
        12 ReqEnd       c 467185881 1340829931.192433119 1340829931.891024113 0.000051022 0.698516846 0.000074035
        12 SessionClose c error
        12 StatSess     c 108.62.115.226 46211 1 1 1 0 0 0 256 418
         0 CLI          - Rd ping
         0 CLI          - Wr 200 19 PONG 1340829934 1.0
         0 CLI          - Rd ping
         0 CLI          - Wr 200 19 PONG 1340829937 1.0

    netstat -tlnp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address      Foreign Address   State    PID/Program name
        tcp        0      0 0.0.0.0:8080       0.0.0.0:*         LISTEN   3086/nginx
        tcp        0      0 0.0.0.0:80         0.0.0.0:*         LISTEN   1915/varnishd
        tcp        0      0 0.0.0.0:22         0.0.0.0:*         LISTEN   1279/sshd
        tcp        0      0 127.0.0.2:25       0.0.0.0:*         LISTEN   3195/sendmail: MTA:
        tcp        0      0 127.0.0.2:6082     0.0.0.0:*         LISTEN   1914/varnishd
        tcp        0      0 127.0.0.2:9000     0.0.0.0:*         LISTEN   1317/php-fpm.conf)
        tcp        0      0 127.0.0.2:3306     0.0.0.0:*         LISTEN   1192/mysqld
        tcp        0      0 127.0.0.2:587      0.0.0.0:*         LISTEN   3195/sendmail: MTA:
        tcp        0      0 127.0.0.2:11211    0.0.0.0:*         LISTEN   3072/memcached
        tcp6       0      0 :::8080            :::*              LISTEN   3086/nginx
        tcp6       0      0 :::80              :::*              LISTEN   1915/varnishd
        tcp6       0      0 :::22              :::*              LISTEN   1279/sshd

    /etc/nginx/sites-enabled/default:

        server {
            listen 8080; ## listen for ipv4; this line is default and implied
            listen [::]:8080 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.html index.htm index.php;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.2;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass 127.0.0.2:9000;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

    /etc/nginx/sites-enabled/www.mysite.com.vhost:

        server {
            listen 8080;
            server_name www.mysite.com mysite.com.net;
            root /var/www/www.mysite.com/web;

            if ($http_host != "www.mysite.com") {
                rewrite ^ http://www.mysite.com$request_uri permanent;
            }

            index index.php index.html;

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
            location ~ /\. {
                deny all;
                access_log off;
                log_not_found off;
            }

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
                expires max;
                log_not_found off;
            }

            location ~ \.php$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.2:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }

            include /var/www/www.mysite.com/web/nginx.conf;

            location ~ /nginx.conf {
                deny all;
                access_log off;
                log_not_found off;
            }
        }

    /etc/varnish/default.vcl:

        # This is a basic VCL configuration file for varnish. See the vcl(7)
        # man page for details on VCL syntax and semantics.
        #
        # Default backend definition. Set this to point to your content
        # server.
        #
        backend default {
            .host = "127.0.0.2";
            .port = "8080";
            # .connect_timeout = 600s;
            #.first_byte_timeout = 600s;
            # .between_bytes_timeout = 600s;
            # .max_connections = 800;

    Note: uncommenting the last four options in default.vcl made no difference.

    cat /etc/default/varnish:

        # Configuration file for varnish
        #
        # /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK
        # to be set from this shell script fragment.
        #
        # Should we start varnishd at boot? Set to "yes" to enable.
        START=yes

        # Maximum number of open files (for ulimit -n)
        NFILES=131072

        # Maximum locked memory size (for ulimit -l)
        # Used for locking the shared memory log in memory. If you increase log size,
        # you need to increase this number as well
        MEMLOCK=82000

        # Default varnish instance name is the local nodename. Can be overridden with
        # the -n switch, to have more instances on a single server.
        INSTANCE=$(uname -n)

        # This file contains 4 alternatives, please use only one.

        ## Alternative 1, Minimal configuration, no VCL
        #
        # Listen on port 6081, administration on localhost:6082, and forward to
        # content server on localhost:8080. Use a 1GB fixed-size cache file.
        #
        # DAEMON_OPTS="-a :6081 \
        #              -T localhost:6082 \
        #              -b localhost:8080 \
        #              -u varnish -g varnish \
        #              -S /etc/varnish/secret \
        #              -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"

        ## Alternative 2, Configuration with VCL
        #
        # Listen on port 6081, administration on localhost:6082, and forward to
        # one content server selected by the vcl file, based on the request. Use a 1GB
        # fixed-size cache file.
        #
        DAEMON_OPTS="-a :80 \
                     -T 127.0.0.2:6082 \
                     -f /etc/varnish/default.vcl \
                     -S /etc/varnish/secret \
                     -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"

    If you need any other info, let me know. I am all out of clues as to what the problem is.
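
    A couple of hedged checks, given that nginx is listening on 0.0.0.0:8080 while the VCL points the backend at 127.0.0.2:8080: verify that something really answers on that exact address and port from the varnish host itself, and confirm that the backend definition in the real default.vcl is complete (the paste above ends without a closing brace, which may just be a copy artifact, but varnish would refuse a truncated definition).

        # does the backend answer on the address/port varnish was told to use?
        curl -I http://127.0.0.2:8080/
        # check that the VCL actually compiles as saved
        varnishd -C -f /etc/varnish/default.vcl
        # and watch the backend side of varnish while repeating a request
        varnishlog -b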

  • nginx start failing, says error.log doesn't exist

    - by sososo
    I structured my sites like /home/www/domain.com/ with public, private, log and backup subfolders. In the log folder, I created a blank error.log and access.log. My nginx file in sites-available for the domain looks like:

        server {
            access_log /home/www/domain1.com/log/access.log;
            error_log /home/www/domain1.com/log/error.log;
        }

    Trying to start nginx, it says:

        starting nginx: the config file /etc/nginx/nginx.conf syntax is ok
        [emerg] open() ".../access.log" failed (2: no such file or directory)

    Is this a permission issue?
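
    A hedged observation: error 2 (ENOENT) at this point means the path itself is missing rather than unreadable; a permission problem would normally show error 13 instead. Note also that the directory layout above says domain.com while the vhost logs point at domain1.com; if that is more than an anonymisation slip, the configured log directory simply does not exist. Two quick checks:

        # nginx -t prints the full, unabridged path it failed to open
        sudo nginx -t
        # compare with what actually exists on disk
        ls -ld /home/www/domain.com/log /home/www/domain1.com/log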

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    We run very high-volume traffic on these servers, configured with Django, gunicorn, supervisor and nginx, but a lot of the time I see 502 errors. I checked the nginx logs to see what was recorded:

        [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream

    Can anyone help debug what might be causing this to happen? This is our nginx configuration:

        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;

        listen 80 default_server;
        server_name imp.ourapp.com;

        access_log /mnt/ebs/nginx-log/ourapp-access.log;
        error_log /mnt/ebs/nginx-log/ourapp-error.log;

        charset utf-8;
        keepalive_timeout 60;
        client_max_body_size 8m;
        gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json;

        location / {
            proxy_pass http://unix:/tmp/gunicorn-ourapp.socket;
            proxy_pass_request_headers on;
            proxy_read_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_redirect http://localhost/ http://imp.ourapp.com/;
            #proxy_set_header Host $host;
            #proxy_set_header X-Real-IP $remote_addr;
            #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-Forwarded-Proto $my_scheme;
            #proxy_set_header X-Forwarded-Ssl $my_ssl;
        }

    We have configured Django to run in gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers:

        /home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application

    This is what the gunicorn.conf.py looks like:

        import multiprocessing
        bind = 'unix:/tmp/gunicorn-ourapp.socket'
        workers = multiprocessing.cpu_count() * 3 + 1
        timeout = 600
        graceful_timeout = 40

    Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 59481
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 50000
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
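
    A hedged interpretation of the EAGAIN: nginx reports "resource temporarily unavailable" when the listen backlog on the gunicorn socket is full, and with a 600-second timeout a burst of slow requests can pin every worker and leave new connections with nowhere to queue. Illustrative knobs to experiment with (the values are examples, not recommendations):

        # possible additions/changes in gunicorn.conf.py
        backlog = 2048     # gunicorn's listen() queue; raise net.core.somaxconn on the host to match
        timeout = 60       # 600s keeps a stuck worker occupied for ten minutes

    Checking sysctl net.core.somaxconn is worthwhile too, since the kernel silently caps whatever backlog the application asks for.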

  • OpenVPN Clients using server's connection (with no default gateway)

    - by Branden Martin
    I wanted an OpenVPN server so that I could create a private VPN network for staff to connect to. However, not as planned, when clients connect to the VPN they end up using the VPN server's internet connection (for example, whatsmyip.com reports the server's address, not the client's home connection).

    server.conf:

        local <serverip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert x.crt
        key x.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 9

    client.conf:

        client
        dev tun
        proto udp
        remote <server> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert x.crt
        key x.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server's routing table:

        Kernel IP routing table
        Destination     Gateway          Genmask          Flags  Metric  Ref   Use  Iface
        10.8.0.2        *                255.255.255.255  UH     0       0       0  tun0
        10.8.0.0        10.8.0.2         255.255.255.0    UG     0       0       0  tun0
        69.64.48.0      *                255.255.252.0    U      0       0       0  eth0
        default         static-ip-69-64  0.0.0.0          UG     0       0       0  eth0
        default         static-ip-69-64  0.0.0.0          UG     0       0       0  eth0
        default         static-ip-69-64  0.0.0.0          UG     0       0       0  eth0

    Server's iptables:

        Chain INPUT (policy ACCEPT)
        target            prot opt source       destination
        fail2ban-proftpd  tcp  --  anywhere     anywhere      multiport dports ftp,ftp-data,ftps,ftps-data
        fail2ban-ssh      tcp  --  anywhere     anywhere      multiport dports ssh
        ACCEPT            udp  --  anywhere     anywhere      udp dpt:domain
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:20000
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:webmin
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:https
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:www
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:imaps
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:imap2
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:pop3s
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:pop3
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:ftp-data
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:ftp
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:domain
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:smtp
        ACCEPT            tcp  --  anywhere     anywhere      tcp dpt:ssh
        ACCEPT            all  --  anywhere     anywhere

        Chain FORWARD (policy ACCEPT)
        target   prot opt source        destination
        ACCEPT   all  --  anywhere      anywhere      state RELATED,ESTABLISHED
        ACCEPT   all  --  10.8.0.0/24   anywhere
        REJECT   all  --  anywhere      anywhere      reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target   prot opt source        destination

        Chain fail2ban-proftpd (1 references)
        target   prot opt source        destination
        RETURN   all  --  anywhere      anywhere

        Chain fail2ban-ssh (1 references)
        target   prot opt source        destination
        RETURN   all  --  anywhere      anywhere

    My goal is that clients can only talk to the server and other clients that are connected. Hope I made sense. Thanks for the help!
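
    A hedged place to look, since nothing in server.conf pushes a default route: if whatsmyip.com shows the server's address, the client side is most likely adding a redirect-gateway route itself (some OpenVPN GUIs add it as a profile option). Checking the routing table on a connected client, and making sure the server is not NATing VPN traffic out to the internet, usually pins it down:

        # on a connected Linux client: a default route (or the 0.0.0.0/1 + 128.0.0.0/1 pair)
        # via the tun interface means redirect-gateway is in effect somewhere
        route -n
        # on the server: no MASQUERADE/SNAT rule for 10.8.0.0/24 should exist if clients
        # are only meant to reach the VPN itself
        iptables -t nat -L -n -v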

  • OpenBSD pf 'match in all scrub (no-df)' causes HTTPS to be unreachable on mobile network

    - by Frank ter V.
    First of all: excuse me for my poor English. For several years I've been experiencing problems with the 'match in all scrub (no-df)' rule in pf, and I can't find out what's happening here. I'll try to be clear and simple; the pf.conf has been extremely shortened for this posting:

        set skip on lo0

        match in all scrub (no-df)

        block all
        block in quick from urpf-failed

        pass in  on em0 proto tcp from any to 213.125.xxx.xxx port 80  synproxy state
        pass in  on em0 proto tcp from any to 213.125.xxx.xxx port 443 synproxy state
        pass out on em0 from 213.125.xxx.xxx to any modulate state

    HTTP and HTTPS were working fine, until the moment a customer in France (Wanadoo DSL) couldn't view HTTPS pages! I blamed his provider and did no investigation of that problem. But then I bought an Android Samsung Galaxy SII (Vodafone) to monitor my servers. Hours after I walked out of the telephone store: no HTTPS connections to my server! I thought my servers were down and drove back to the office very fast, but they were up. I discovered that disabling the rule 'match in all scrub (no-df)' solves the problem: the Android phone (Vodafone NL) and Wanadoo DSL FR are now OK on HTTPS. But now I don't have any scrubbing any more, and this is not what I want. Does anyone here understand what is going on? I don't. Enabling scrubbing causes HTTPS webpages not to load from SOME ISPs, but not all. In systat, I strangely DO see a state created and packets received from those ISPs... Still confused. I'm using OpenBSD 5.1/amd64 and OpenBSD 5.0/i386. I have two ISPs at my office (one DSL and one cable); it affects both. This can be reproduced quite easily. I hope someone has experience with this problem. Greetings, Frank
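
    A heavily hedged thought, since the real cause here isn't certain: "small pages load, TLS handshakes from certain access networks hang" is often a path-MTU symptom, and scrub offers an MSS clamp that can be tested alongside (or instead of) no-df. A sketch, with an illustrative value only:

        # clamp TCP MSS so full-size segments fit the smallest MTU on the path
        match in all scrub (no-df max-mss 1440)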

  • Cliq Wireless questions

    - by Nathan Adams
    Here's the deal: I am by no means a Linux expert, even less so when it comes to the Android OS, but let's see if we can't solve this problem. The problem I am having is that on the Cliq we have a Broadcom chip. In order to use the wireless card you must first insert the module into the kernel. Fine:

        # insmod /system/lib/dhd.ko
        insmod /system/lib/dhd.ko
        # lsmod
        lsmod
        dhd 164936 0 - Live 0xbf000000
        #

    BUT netcfg (or ifconfig in busybox) does not recognize that there is a wireless adapter there:

        # netcfg
        netcfg
        lo       UP    127.0.0.1     255.0.0.0        0x00000049
        dummy0   DOWN  0.0.0.0       0.0.0.0          0x00000082
        rmnet0   UP    14.67.164.2   255.255.255.252  0x00001043
        rmnet1   DOWN  0.0.0.0       0.0.0.0          0x00001002
        rmnet2   DOWN  0.0.0.0       0.0.0.0          0x00001002
        usb0     DOWN  0.0.0.0       0.0.0.0          0x00001002
        # busybox ifconfig
        busybox ifconfig
        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:282 errors:0 dropped:0 overruns:0 frame:0
                TX packets:282 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:18754 (18.3 KiB)  TX bytes:18754 (18.3 KiB)

        rmnet0  Link encap:Ethernet  HWaddr EE:83:E8:B4:4A:ED
                inet addr:14.x.x.x  Bcast:14.67.164.3  Mask:255.255.255.252
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:7148 errors:0 dropped:0 overruns:0 frame:0
                TX packets:7659 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:2609236 (2.4 MiB)  TX bytes:908575 (887.2 KiB)
        #

    For giggles, if we attempt to launch wpa_supplicant anyway we get this:

        # wpa_supplicant -Dwext -ieth0 -c/data/misc/wifi/wpa_supplicant.conf
        wpa_supplicant -Dwext -ieth0 -c/data/misc/wifi/wpa_supplicant.conf
        ioctl[SIOCSIWPMKSA]: No such device
        ioctl[SIOCSIWMODE]: No such device
        Could not configure driver to use managed mode
        ioctl[SIOCGIFFLAGS]: No such device
        Could not set interface 'eth0' UP
        ioctl[SIOCGIWRANGE]: No such device
        ioctl[SIOCGIFINDEX]: No such device
        CTRL-EVENT-STATE-CHANGE id=-1 state=0
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 7 value 0x0 - Failed to disable WPA in the driver.
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 5 value 0x0 -
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 4 value 0x0 -
        ioctl[SIOCSIWAP]: No such device
        ioctl[SIOCGIFFLAGS]: No such device
        #

    In dmesg we get:

        <4>[18300.494065] dhd_oob_enable_intr : enable
        <4>[18305.019976] dhd_net_start failed bus is not ready
        <4>[18305.020278] dhdsdio_probe: dhd_net_start failed!

    Do I need to specify the firmware with insmod? Why are we trying to control the interface manually instead of through the Android API? The Android API doesn't support ad-hoc connections as far as I can tell; the card, I am sure, most certainly can.
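
    On the firmware question, a hedged sketch: the Broadcom dhd driver normally gets its firmware and NVRAM blobs as module parameters at insmod time (Android's init scripts usually do this), and "dhd_net_start failed bus is not ready" is consistent with the module loading without them. The paths below are placeholders only; check /system/etc (or the device's init*.rc) for the real filenames:

        # illustrative only; real blob names differ per device
        insmod /system/lib/dhd.ko "firmware_path=/system/etc/wifi/fw_bcm4325.bin nvram_path=/system/etc/wifi/nvram.txt"
        # after a successful load, check what interface name the driver registered
        netcfg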

  • Nginx with Passenger setup problems

    - by Kreeki
    I'm trying to set up the nginx web server with Passenger support for a Ruby on Rails application on Ubuntu 10.04 (on a sub-URI). All went fine until I tried to access the server/application from the browser. My installation of nginx is in /opt/nginx.

        # my nginx.conf
        server {
            listen 80;
            server_name www.mydomain.com;
            root /websites/site/public;
            passenger_enabled on;
            passenger_base_uri /site;

            location / {
                # added by default, I don't know if it's supposed to be here or not
                root html;
                index index.html index.htm;
            }

    Then I started the server. But when I put www.mydomain.com/site in the browser I get a 404 Not Found error. error.log shows this:

        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"

    Why does nginx look for /site in /opt/nginx/html/site, as the log shows, when there's another path set in nginx.conf? Any idea what's wrong with my setup?
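
    A hedged reading of the log: every request arrives with Host "80.79.23.71" (the bare IP), which does not match server_name www.mydomain.com, so nginx falls back to the stock default server rooted at html (/opt/nginx/html); the leftover "location / { root html; ... }" block points the same way. A sketch of the kind of change that routes those requests to the Passenger vhost instead (treat the exact directives as illustrative):

        server {
            listen 80 default_server;                 # or browse by www.mydomain.com instead of the IP
            server_name www.mydomain.com 80.79.23.71;
            root /websites/site/public;
            passenger_enabled on;
            passenger_base_uri /site;
            # the auto-generated "location / { root html; ... }" block is removed here
        }

    Passenger's sub-URI deployment also normally expects a symlink named after the base URI inside the vhost root, pointing at the application's public directory, so it is worth confirming that part of the docs was followed as well.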

  • Increasing resolution in FreeNX headless server

    - by syrenity
    Hi. I'm running a FreeNX server on a headless CentOS machine, and the resolution seems to be locked at 800 x 600. I tried editing the xorg.conf file, but without success so far. Has anyone succeeded in running FreeNX remotely at 1280 x 1024 resolution, and can you post a working configuration? Thanks! P.S.: Here is a pastebin of my current xorg.conf file: http://pastie.org/835308

  • Good Free Ubuntu Server VMWare Image Needed

    - by Yaakov Ellis
    Can anyone recommend a good, free Ubuntu Server VMware image (or Virtual Appliance, as they call them)? I have looked on the VMware VAM and there are literally hundreds to choose from. I am looking for something that can, with very minimal effort, serve as a development platform for LAMP applications (so it should have all of those installed, plus things like phpMyAdmin). Bonus points if there is some way to create new virtual hosts (for developing and testing new sites) on Apache without having to go digging around conf files and guessing at the syntax.
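
    For what it's worth, the virtual-host part is only a few lines of Apache configuration regardless of which appliance is chosen; a minimal name-based example, with the hostname and path as placeholders:

        <VirtualHost *:80>
            ServerName   newsite.local
            DocumentRoot /var/www/newsite
        </VirtualHost>

    On Debian/Ubuntu layouts this goes in /etc/apache2/sites-available/ and is enabled with a2ensite followed by an Apache reload.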

  • Disabling the Squid Error pages

    - by Nicholas Smith
    I've just started looking at using Squid for a project and can't seem to see an easy way of disabling the Squid error pages (e.g. "Name Error: The domain name does not exist"). We use a custom browser which handles that scenario in our own way, so the Squid error pages are overriding our custom logic. Is it possible to set them to 'off'? I've been through the .conf and I've found where the error pages are stored, but no real option to disable them.
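
    A hedged note: Squid has no global switch that turns error pages off, but it does let you point it at your own template directory, so the pages it returns can be reduced to essentially empty bodies that a custom browser can ignore. A sketch for squid.conf (the directory is a placeholder holding stripped-down copies of the stock templates):

        # serve our own minimal error templates instead of Squid's defaults
        error_directory /etc/squid/errors/custom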

  • Uninstall php and nginx or fix setup

    - by jreed121
    First off, I'm a huge Linux noob - sorry... I'm trying to set up nginx with php-fpm on Debian, and I'm pretty sure I've completely screwed it up. nginx seems to be running fine, because I can hit it from a web browser and it loads the stock "Welcome to nginx!" page. I'm not so sure about php-fpm, though. When I try something like:

        # restart php-fpm
        bash: restart: command not found

    First off, php-fpm somehow got installed as php5-fpm when I do root@server:/etc/init.d# ls, which seems to contradict every tutorial and help doc I've read (it's supposed to be 'php-fpm'). I can restart it with:

        service php5-fpm restart

    And if I just enter the package name 'php5-fpm' I get this:

        root@server:~# php5-fpm
        [17-Nov-2012 23:15:36] NOTICE: PHP message: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [17-Nov-2012 23:15:36] ERROR: An another FPM instance seems to already listen on /var/run/php5-fpm.sock
        [17-Nov-2012 23:15:36] ERROR: FPM initialization failed

    The root for nginx is /usr/share/nginx/html; when I try to navigate to a .php file in there with my web browser, it tries to download the file instead of interpreting it. I would like this folder to be in my user's home directory, i.e. /home/administrator/www or /home/nginx/www. I know that in order to do this I need to modify nginx.conf, but I find that configuration file difficult to understand. I suppose the fact that my .php scripts aren't being handled is my bigger problem anyway. When I try to see what's running on port 9000 (the php-fpm default port) with lsof -i :9000 it returns nothing - I guess indicating that it isn't listening. Then I head over to vim /etc/php5/fpm/php-fpm.conf and there is nowhere to designate a port number. So should I just uninstall everything and start from scratch? If so, how do I clean it all up? Any suggestions for a tutorial once I'm ready to try again? Should I attempt to troubleshoot this mess? If so, where should I start? Sorry guys, I'm feeling pretty stupid and lost right now. I'm not sure what my next steps are in trying to resolve this issue. I realize that this is a horrible question for this type of Q&A site, but I'd really appreciate any guidance.
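
    A few hedged clarifications that may save a reinstall: "restart" on its own is an Upstart shorthand that Debian doesn't have, so service php5-fpm restart (already discovered above) is the right command; php5-fpm rather than php-fpm is simply Debian's package naming; and nothing shows on port 9000 because the error output indicates Debian's php5-fpm is listening on the unix socket /var/run/php5-fpm.sock instead (the listen directive lives in /etc/php5/fpm/pool.d/www.conf, not php-fpm.conf). A sketch of an nginx server block wired to that socket, with the document root moved under the user's home directory:

        # illustrative server block for /etc/nginx/sites-available/default
        server {
            listen 80;
            root /home/administrator/www;
            index index.php index.html;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }

    The socket path and pool file location match the Debian defaults hinted at in the error output above, but are worth confirming against the "listen =" line in the pool configuration.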

  • How to make the tun module load at Linux boot

    - by harmony
    I installed the tun module using:

        modprobe tun

    Then did:

        lsmod | grep tun
        tun                    83840  0

    Please, how do I make tun load at reboot? This is written on the Hamachi website: "...Then add tun to the list of modules by using your favorite text editor and create /etc/modules-load.d/tun.conf:

        # Load tun module at boot.
        tun

    " But this folder does not exist in my /etc. Is it wise to add the line "modprobe tun" to /etc/rc.local?
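
    A hedged answer sketch: /etc/modules-load.d/ is the systemd mechanism, so on releases that don't have it the traditional equivalent is /etc/modules, one module name per line, loaded at boot by the init scripts. Adding "modprobe tun" to /etc/rc.local also works, but /etc/modules is the conventional place:

        # append the module name to the boot-time module list
        echo tun | sudo tee -a /etc/modules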

  • Best way to override 1024 process ulimit

    - by CamelBlues
    On CentOS distros there is an /etc/security/limits.d/90-nproc.conf that sets a process limit for all users:

        # Default limit for number of user's processes to prevent
        # accidental fork bombs.
        # See rhbz #432903 for reasoning.

        *          soft    nproc     1024

    I'd like to keep this limit in place, but allow one user to have more than 1024 processes. Because of how the server is puppetized, I'm unable to use the built-in bash ulimit command.
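
    A sketch of the usual way to do this without touching the stock file (the filename, username and value are placeholders): drop a second fragment into limits.d with a per-user entry, which takes effect for that user while the * wildcard keeps covering everyone else, and which a Puppet file resource can manage cleanly.

        # /etc/security/limits.d/91-appuser.conf
        appuser    soft    nproc     4096
        appuser    hard    nproc     4096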
