Search Results

Search found 10674 results on 427 pages for 'glib config'.


  • Discover MAC address

    - by Kami
     I have to set up a bunch of servers, and I need to discover their MAC addresses in the following situation: MacBookPro >----------< Server. I'm directly connected to the server (not behind a router/switch). I have no clue which IP address the server uses by default, and I can't attach a display to the server to read its network card config. How can I discover the MAC address of the server's network card? I'm looking for a command-line tool; something from MacPorts would also be fine.
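     One command-line approach, sketched below for OS X (the interface name en0 and the use of tcpdump/arp/ndp are assumptions; any traffic the server emits on the direct link carries its MAC as the source address):

       # Sniff the direct link and read the server's MAC from whatever frames it sends
       # (ARP probes, DHCP discovers, IPv6 neighbour discovery all qualify):
       sudo tcpdump -i en0 -e -n not ether src $(ifconfig en0 | awk '/ether/{print $2}')

       # Or provoke some traffic and check the neighbour caches afterwards:
       ping -c 2 255.255.255.255 2>/dev/null
       arp -a          # IPv4 neighbours seen on the link
       ndp -a          # IPv6 neighbours (often populated even with no IPv4 config)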

  • Apache2 proxypass

    - by gatsby
     I'm trying to figure out why my Apache2 reverse proxy doesn't work; I hope someone can clarify. I'm using an Apache server as a gateway with ProxyPass (its IP is 10.184.1.2). These are the directives I inserted in the 000-default config file:
       ProxyPass / http://192.168.102.31/
       ProxyPassReverse / http://192.168.102.31/
     The host 192.168.102.31 is an internal IP on a subnet which is not reachable directly by clients, only by the Apache gateway. When I try to access an address such as http://apache_gateway_name/dir, I see the client trying to reach the 192.168.102.31 address directly, and of course a timeout occurs. Can someone help? Best regards
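     A sketch of a complete gateway vhost for comparison (Debian-style paths and the a2enmod/a2ensite helpers are assumptions). Note that ProxyPassReverse only rewrites Location/Content-Location response headers, so pages that embed the backend's IP in absolute links will still send clients to 192.168.102.31:

       sudo a2enmod proxy proxy_http
       # /etc/apache2/sites-available/gateway :
       #   <VirtualHost *:80>
       #       ServerName apache_gateway_name
       #       ProxyRequests Off
       #       ProxyPreserveHost On
       #       ProxyPass        / http://192.168.102.31/
       #       ProxyPassReverse / http://192.168.102.31/
       #   </VirtualHost>
       sudo a2ensite gateway
       sudo /etc/init.d/apache2 reload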

  • Installing PHP with iconv on Mac OSX 10.6 gives error: "Undefined symbols for architecture x86_64"

    - by Jason
     I am setting up PHP on my local web server with iconv, which should be there by default, but I am getting the following error:
       Undefined symbols for architecture x86_64:
         "_iconv", referenced from:
             __php_iconv_strlen in iconv.o
             _php_iconv_string in iconv.o
             __php_iconv_strpos in iconv.o
             __php_iconv_appendl in iconv.o
             _zif_iconv_substr in iconv.o
             _zif_iconv_mime_encode in iconv.o
             _php_iconv_stream_filter_append_bucket in iconv.o
             ...
          (maybe you meant: _zif_iconv_strlen, _zif_iconv_strpos, _zif_iconv_mime_decode, _php_iconv_string, _zif_iconv_set_encoding, _zif_iconv_get_encoding, _zif_iconv_mime_decode_headers, _iconv_functions, _zif_iconv_mime_encode, _zif_iconv_strrpos, _iconv_module_entry, _iconv_globals, _zif_iconv_substr, _php_if_iconv, _zif_ob_iconv_handler)
       ld: symbol(s) not found for architecture x86_64
     Presumably this means that my iconv is not 64-bit. I tried to download libiconv and manually rebuild it as 64-bit, like so:
       CFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' CCFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' CXXFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' ./configure
     But I get the following error:
       configure: error: in `/Users/jason/Downloads/libiconv-1.13.1':
       configure: error: C compiler cannot create executables
       See `config.log' for more details.
     At the bottom of my config.log it says: configure: exit 77. Any suggestions?
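     A sketch of a narrower rebuild that often gets past the "cannot create executables" check (the install prefix is an assumption; passing four -arch flags at once frequently breaks configure's compiler test, and only the x86_64 slice is needed here anyway):

       cd ~/Downloads/libiconv-1.13.1
       make distclean 2>/dev/null
       CFLAGS='-arch x86_64 -O2' LDFLAGS='-arch x86_64' ./configure --prefix=/usr/local/iconv64
       make && sudo make install
       # then rebuild PHP's iconv support against that copy, e.g.:
       #   ./configure ... --with-iconv=/usr/local/iconv64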

  • Missing APR on apache2 ./configure

    - by arby
     I want to build the latest stable version of Apache2. I downloaded the source, put APR and APR-util in the srclib folder, changed into ./srclib/apr and ran:
       ./configure --prefix=/usr/local/apr
       sudo make
       sudo make install
     This seemed to install APR fine, but when I run ./configure from the apr-util directory, I receive the error:
       configure: error: APR could not be located. Please use the --with-apr option.
     Using ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr, the error becomes:
       checking for APR... configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.
     Why can't it find APR?
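     A sketch of the usual checks, reusing the paths from the question: --with-apr wants the APR install prefix, i.e. the directory that actually contains bin/apr-1-config, so verifying that file exists (and that the install step really completed) is the first step:

       ls /usr/local/apr/bin/apr-1-config        # must exist for --with-apr=/usr/local/apr to work

       cd srclib/apr-util
       ./configure --prefix=/usr/local/apr-util \
                   --with-apr=/usr/local/apr     # or --with-apr=/usr/local/apr/bin/apr-1-config
       make && sudo make install

       # httpd itself is then configured against both:
       cd ../.. && ./configure --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr-util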

  • Windows 2008 File Share

    - by user36540
     Hi, I have three Windows 2008 Standard servers with no domain controller. Two of the servers run an NLB cluster and the third is a file server that the web servers connect to. I want to store my source code on the file server and point the IIS config at the network file share; the web sites also need access to a share on that file server. I was able to share the network drive and access it while logged into either of the web servers, but my web apps are unable to access the file share - I assume due to permissions. Does anybody know the correct way to do this? Thanks, Chris

  • How to disable horizontal scrolling within virtualbox on Ubuntu guest, Windows 7 host?

    - by Steven Rosato
     I am using Windows 7 as host and Ubuntu Karmic as guest OS with the guest tools installed, and I get an annoying glitch when switching from the host to the guest machine: vertical scrolling with the mouse wheel switches to horizontal! Since I don't really care about horizontal scrolling, how can I disable it? The only thing I found on the web was to edit the xorg.conf file and add Option "ZAxisMapping" "4 5" to the "InputDevice" section, which should enable vertical scrolling only. The thing is, I don't have that section in my config file, so I guessed I would need to add:
       Section "InputDevice"
           Identifier "VBoxMouse"
           Driver     "vboxmouse"
           Option     "ZAxisMapping" "4 5"
       EndSection
     But that does not seem to work after restarting the X server. Any workaround for this?
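     One thing worth checking, sketched below: Karmic-era X servers hotplug input devices and may ignore a bare InputDevice section unless device auto-adding is turned off and the section is referenced from a ServerLayout (the identifier names here are assumptions, and this is only a sketch, not a complete xorg.conf):

       Section "ServerFlags"
           Option "AutoAddDevices" "false"
       EndSection

       Section "ServerLayout"
           Identifier  "Default Layout"
           InputDevice "VBoxMouse" "CorePointer"
       EndSection

       Section "InputDevice"
           Identifier "VBoxMouse"
           Driver     "vboxmouse"
           Option     "ZAxisMapping" "4 5"
       EndSection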

  • Nginx no longer serves uwsgi application behind HAProxy - looks for a static file instead

    - by Ralph
     We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients built on requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients talk to nginx directly. It does not work, though, when HAProxy sits in front of nginx. My guess is that HAProxy somehow modifies the request and nginx therefore behaves differently, i.e. looks for a static file instead of calling the WSGI container. Unfortunately I can't figure out what exactly is going wrong. Here are the relevant sections of the three components' config files; if anything is missing, please let me know.
     1) haproxy.conf
       frontend app-lb
           bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem
           default_backend nginx-servers
           mode http
       backend nginx-servers
           balance leastconn
           option forwardfor
           server nginx-01 nginx-server-int-01.domain.com:80 check
     2) nginx.conf
       sendfile off;
       #tcp_nopush on;
       keepalive_timeout 65;
       include /etc/nginx/conf.d/*.conf;
       server {
           server_name nginx-server-int-01.domain.com;
           root /path/to/app/;
           location / {
               uwsgi_pass unix:///tmp/app.sock;
               include uwsgi_params;
               uwsgi_read_timeout 600;   # Requests can run for a seriously long time
           }
       }
     3) uwsgi.ini
       [uwsgi]
       chdir = /path/to/app/
       chmod-socket = 777
       no-default-app = True
       socket = /tmp/app.sock
       manage-script-name = True
       mount = /dids=did.py
       mount = /replicas=replica.py
       callable = application
     Now when I let my clients go against nginx-server-int-01.domain.com, everything is fine. The access.log of nginx shows lines like these:
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
       128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
     But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like these:
       2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
       2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
       2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
       2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
       2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer"
       2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
       2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
       2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
       2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
     As you can see, the request looks the same; only the client IP changed, from the client's host to that of loadbalancer.domain.com. But for whatever reason nginx seems to assume a static file should be served, which eventually results in the file-not-found message. I have searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
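     A sketch of one likely explanation and fix, based only on the configs and logs above: requests arriving via HAProxy carry Host: loadbalancer.domain.com, which does not match the server_name of the uwsgi vhost, so nginx routes them to its default server ("server: localhost" in the error log, document root /usr/share/nginx/html, hence the open() failures). Making the uwsgi vhost the default, or listing the external name, would test that theory:

       server {
           listen 80 default_server;    # catch the request whatever the Host header says
           server_name nginx-server-int-01.domain.com loadbalancer.domain.com;
           root /path/to/app/;
           location / {
               include uwsgi_params;
               uwsgi_pass unix:///tmp/app.sock;
               uwsgi_read_timeout 600;
           }
       }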

  • Unknown protocol when trying to connect to remote host with stunnel

    - by RaYell
     I'm trying to set up an stunnel for WebDAV on Windows. I want to connect port 80 on my local interface to port 443 on another machine in my network. I can ping the remote machine. However, when I use the tunnel I get this error every time:
       SSL state (accept): before/accept initialization
       SSL_accept: 140760FC: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
     There is nothing in the logs on the other machine. Here's my stunnel connection config:
       [https]
       accept = 127.0.0.2:80
       connect = 10.0.0.60:443
       verify = 0
     I've set it up to accept all certificates, so the self-signed certificate the remote host uses shouldn't be the problem. Does anyone know why this connection cannot be established?
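     A sketch of the usual cause: as written, the [https] service runs in server mode, so stunnel expects to receive TLS on the accept side and chokes when the browser sends plain HTTP - that is exactly the SSL23_GET_CLIENT_HELLO:unknown protocol symptom. For "plain HTTP in, TLS out" the service needs client mode:

       [https]
       client  = yes          ; accept plaintext locally, wrap it in TLS towards 10.0.0.60:443
       accept  = 127.0.0.2:80
       connect = 10.0.0.60:443
       verify  = 0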

  • How to restart RoR services after server has been rebooted

    - by Alan DeLonga
     Update: I have been searching around to see what services might need to be restarted in my project after a reboot. One of them was Thinking Sphinx, which I finally got to the point where it logs:
       [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections
     But I still can't run searchd or searchd --stop, because there is no generated sphinx.conf file in /etc/sphinxsearch (for more info, refer to this open thread on thinking_sphinx after reboot). I then turned to restarting unicorn or thin, based on some insight I got. The issue is that when I check my gems I see one for thin AND unicorn, but when I try to start either of them, neither has a file residing in /etc/init.d/ where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like thin or unicorn? We are hosted on Rackspace, running ruby 1.9.2p290, rails (3.2.8, 3.2.7, 3.2.0) and nginx/1.1.19. Notice that there are gems for unicorn and thin, but there is no unicorn.rb or thin.rb in my app's config folder... I am still super lost; if anyone can give me some insight on steps to take to figure this out, I would really appreciate it. Anything would help, thanks for reading.
       thin 1.4.1
       unicorn 4.3.1
     When I run unicorn I get the same issue as referenced here:
       > /usr/local/bin/unicorn start
       /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError)
           from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload'
           from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize'
           from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new'
           from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize'
           from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new'
           from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>'
           from /usr/local/bin/unicorn:19:in `load'
           from /usr/local/bin/unicorn:19:in `<main>'
     When I run thin it just opens a command-line prompt:
       /usr/local/bin/thin start
       >> Using rack adapter
     Other gems:
       * LOCAL GEMS *
       actionmailer (3.2.8, 3.2.7, 3.2.0) actionpack (3.2.8, 3.2.7, 3.2.0) activemodel (3.2.8, 3.2.7, 3.2.0) activerecord (3.2.8, 3.2.7, 3.2.0) activeresource (3.2.8, 3.2.7, 3.2.0) activesupport (3.2.8, 3.2.7, 3.2.0) arel (3.0.2) builder (3.0.0) bundler (1.1.5) carmen (1.0.0.beta2) carmen-rails (1.0.0.beta3) cocaine (0.2.1) coffee-rails (3.2.2) coffee-script (2.2.0) coffee-script-source (1.3.3) daemons (1.1.9) erubis (2.7.0) eventmachine (0.12.10) execjs (1.4.0) faraday (0.8.4) faraday_middleware (0.8.8) foursquare2 (1.8.2) geokit (1.6.5) hashie (1.2.0) hike (1.2.1) httparty (0.8.3) httpauth (0.1) i18n (0.6.0) journey (1.0.4) jquery-rails (2.0.2) json (1.7.4, 1.7.3) jwt (0.1.5) kgio (2.7.4) lastfm (1.8.0) libv8 (3.3.10.4 x86_64-linux) mail (2.4.4) mime-types (1.19, 1.18) minitest (1.6.0) multi_json (1.3.6) multi_xml (0.5.1) multipart-post (1.1.5) mysql2 (0.3.11) oauth2 (0.8.0) paperclip (3.1.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-ssl (1.3.2) rack-test (0.6.1) rails (3.2.8, 3.2.7, 3.2.0) railties (3.2.8, 3.2.7, 3.2.0) raindrops (0.10.0, 0.9.0) rake (0.9.2.2, 0.8.7) rdoc (3.12, 2.5.8) riddle (1.5.3) sass (3.2.0, 3.1.19) sass-rails (3.2.5) sprockets (2.1.3) sqlite3 (1.3.6) sqlite3-ruby (1.3.3) therubyracer (0.10.2, 0.10.1) thin (1.4.1) thinking-sphinx (2.0.10) thor (0.16.0, 0.15.4, 0.14.6) tilt (1.3.3) treetop (1.4.10) tzinfo (0.3.33) uglifier (1.2.7, 1.2.4) unicorn (4.3.1) xml-simple (1.1.1)
     I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populate some drop-down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace, and we rebooted through the option on their site. I contacted them and checked the status of our server: the port is open and operational. The problem is that the app is not running when you go to the site's address, and when I put in the server's IP address it just says "Welcome to Nginx". But in the log files I see:
       [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down
       [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete
     I am not very versed in server-side setup, and I have never worked on a Rails project that needed specific services started before the application will run. Any insight into how to figure out which services need to be restarted, and how to go about restarting them, would be greatly appreciated. I feel kind of dead in the water at this point... Thanks, Alan
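     A sketch of how these pieces are typically brought back up by hand (the app path, environment name and the use of bundler are assumptions; none of these servers restart themselves after a reboot unless an init script was installed for them):

       sudo /etc/init.d/nginx restart

       # Thinking Sphinx regenerates sphinx.conf and starts searchd from the app root:
       cd /path/to/app
       RAILS_ENV=production bundle exec rake ts:config ts:start

       # unicorn / thin are started against the app, not from /etc/init.d:
       bundle exec unicorn -c config/unicorn.rb -E production -D   # if a unicorn.rb config exists
       bundle exec thin start -e production -d                     # otherwise thin can serve the app's config.ru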

  • apache2 slow responding (debian)

    - by baloo
     I'm running an apache2 2.2.9 web server with mod_python and mpm_worker_module. The current MPM config is:
       ServerLimit 32
       StartServers 10
       MaxClients 800
       MinSpareThreads 25
       MaxSpareThreads 75
       ThreadsPerChild 25
       MaxRequestsPerChild 0
     The server has 1 GB of RAM and a 100 Mbit connection. Checking netstat -na | grep ESTABLISHED | wc -l gives me a number between 50 and 60, and the load is about 1.0. Every page load is also cached by memcached. I can't see why the server is so slow to respond to new connections, sometimes dropping them completely. I also tried disabling iptables to make sure it's not a full state table or something like that. The only thing in dmesg is a lot of spam about "TCP: Treason uncloaked!"
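     A sketch of a more conservative sizing to test (the numbers are illustrative assumptions, not tuned values): with the worker MPM, MaxClients/ThreadsPerChild child processes get spawned, and each child embedding mod_python carries a full Python interpreter, so 800/25 = 32 processes can push a 1 GB box into swap, which looks exactly like slow or dropped connections. Checking free -m and vmstat during a slow spell confirms or rules that out.

       <IfModule mpm_worker_module>
           ServerLimit          8
           StartServers         2
           ThreadsPerChild     25
           MaxClients         200
           MinSpareThreads     25
           MaxSpareThreads     75
           MaxRequestsPerChild 500
       </IfModule>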

  • Postfix - Unable to receive emails from certain domains

    - by Emmanuel
     I've got a Postfix-Dovecot-saslauthd setup on Ubuntu 10.04. The problem is that there is (at least) one domain it refuses to accept email from. I've been receiving email fine from lots of different domains except this one. It's really weird; could some config file be blocking certain domains or IPs? I know the emails are being sent to me - in fact I sent a test one myself from that domain - and they're just not showing up.
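     A few diagnostic commands, sketched under the assumption of stock Ubuntu paths (the domain name below is a placeholder); the mail log shows whether Postfix ever saw the messages and why it rejected, deferred or discarded them:

       grep -i 'example-sender.com' /var/log/mail.log    # was the mail seen at all? rejected? deferred?
       postconf -n | grep -i restrictions                # any smtpd_*_restrictions that could reject the sender?
       postqueue -p                                      # anything stuck in the local queue?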

  • Why is my eth0 getting a dynamic ip when it is configured to be static?

    - by sdek
     For some reason our office Linux box is being assigned an IP address via DHCP and I don't know why. What confuses me is that system-config-network shows eth0 set up with a static IP address, and /etc/sysconfig/network-scripts/ifcfg-eth0 also shows a static setup, yet the box gets a different IP address than the one specified in ifcfg-eth0. Let me know if you have any suggestions or ideas on where to look next. Here are a few details that might help you figure out what an idiot I am :)
       Fedora 11
       The router in front of this box runs DHCP, starting at 10.42.1.100
       This box is configured to be 10.42.1.50 (at least I think it is!), subnet 255.255.255.0 (the same as the router's LAN subnet)
       Instead of the static IP, this box is getting assigned 10.42.1.100.
     Here are the ifcfg-eth0 details:
       DEVICE=eth0
       BOOTPROTO=none
       ONBOOT=yes
       TYPE=Ethernet
       USERCTL=no
       NM_CONTROLLED=no
       NETMASK=255.255.255.0
       IPADDR=10.42.1.50
       GATEWAY=10.42.1.1
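     A sketch of the usual checks on a Fedora 11 box (the assumption being that NetworkManager or a stray dhclient is still managing eth0 and overriding the ifcfg file):

       ps aux | egrep 'dhclient|NetworkManager'      # is a DHCP client still running for eth0?
       chkconfig --list NetworkManager network       # which service owns the interface at boot?

       # If NetworkManager is the one bringing eth0 up, either restart it so NM_CONTROLLED=no
       # takes effect, or hand the interface to the classic network service:
       sudo chkconfig NetworkManager off
       sudo chkconfig network on
       sudo service NetworkManager stop
       sudo service network restart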

  • Running own DNS Server for learning purpose

    - by sundar22in
     I would like to run my own DNS server on my laptop for learning purposes. I recently used Google Public DNS and liked it, and I want to build something similar and small for my own web browsing. What I vaguely dream of is using my own DNS server as the primary DNS server and Google Public DNS as the secondary. I would like to build my DNS server up gradually by editing the configuration files (if that can be automated it would be great, but I have no clue there). Sometimes it sounds like a stupid idea to me, but I am fine with editing a config file for each site I want to add to my DNS server. Any pointers/suggestions are welcome.
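     A minimal sketch of that setup with BIND (package names assume a Debian/Ubuntu laptop). BIND's hand-edited options and zone files fit the "learn by editing config" goal, and anything the local server doesn't know gets forwarded to Google Public DNS:

       sudo apt-get install bind9
       # /etc/bind/named.conf.options:
       #   options {
       #       recursion yes;
       #       forwarders { 8.8.8.8; 8.8.4.4; };
       #   };
       sudo service bind9 restart
       echo 'nameserver 127.0.0.1' | sudo tee /etc/resolv.conf   # point the laptop at itself
       dig example.com @127.0.0.1                                # verify the local server answers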

  • My php homepage downloads index.php instead of being processed on Gandi.net

    - by alekone
     If I go to the homepage of my website, http://www.website.com (on a brand new server), index.php gets downloaded instead of being processed. I don't have the same problem in other folders. My .htaccess reads:
       AddHandler php5-script .php
     What could this be? I suspect it's something with the PHP config or the .htaccess, but I'm not able to figure it out. Help please! Edit: I don't know if this helps, but it's a WordPress installation; I only have this problem on the public part of the website, not on the admin (that renders correctly).
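     A sketch of an alternative .htaccess to try - an assumption, since the correct handler name depends on how the host wires up PHP and their docs trump the lines below; on mod_php hosts the classic MIME-type handler works, and DirectoryIndex makes sure index.php is served for the bare homepage URL:

       AddHandler application/x-httpd-php .php
       DirectoryIndex index.php index.html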

  • How can I have APF block script kiddies that mod_security detects?

    - by Gaia
     In one of the vhosts' error_log I found thousands of lines like this, all from the same IP:
       [Mon Apr 19 08:15:59 2010] [error] [client 61.147.67.206] mod_security: Access denied with code 403. Pattern match "(chr|fwrite|fopen|system|e?chr|passthru|popen|proc_open|shell_exec|exec|proc_nice|proc_terminate|proc_get_status|proc_close|pfsockopen|leak|apache_child_terminate|posix_kill|posix_mkfifo|posix_setpgid|posix_setsid|posix_setuid|phpinfo)\\\\(.*\\\\)\\\\;" at THE_REQUEST [id "330001"] [rev "1"] [msg "Generic PHP exploit pattern denied"] [severity "CRITICAL"] [hostname "x.x.x.x"] [uri "//webmail/config.inc.php?p=phpinfo();"]
     Given how obvious the situation is, how come mod_security isn't automatically adding at least that IP to the deny rules? There is no way someone hasn't thought of this before...
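     One common way to wire this up, sketched below, is to let fail2ban watch the error log for mod_security denials and hand the offending IP to the firewall (the jail and filter names are assumptions; most fail2ban builds ship an apf ban action, and a custom action that runs apf -d <ip> does the same job):

       /etc/fail2ban/filter.d/apache-modsec.conf:
           [Definition]
           failregex = \[client <HOST>\] mod_security: Access denied

       /etc/fail2ban/jail.local:
           [apache-modsec]
           enabled  = true
           filter   = apache-modsec
           action   = apf[name=modsec]
           logpath  = /var/log/httpd/*error_log
           maxretry = 5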

  • signing the web server certificate with the CA key

    - by user1064786
     I have a problem running the command below with openssl-0.9.8e and Apache on Ubuntu 11.10; do you have any idea how to resolve it? First I was receiving this error:
       No such file or directory:bss_file.c:169:fopen('openssl.cnf','rb')
     so I copied my modified openssl.cnf file into the /etc/ssl/ directory. Now I receive an error regarding the -in option:
       openssl ca -days 3650 –in server/requests/ciise.concordia.ca.csr –cert ./CA/ConcordiaCA.crt –keyfile ./CA/ConcordiaCA.key –out ./server/certificates/ciise.concordia.ca.crt -config openssl.cnf
       unknown option –in
     I also copied ciise.concordia.ca.csr into the directory above, but the problem still persists. I would appreciate any help :)
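     A sketch of the same command retyped with plain ASCII hyphens: "unknown option –in" is openssl rejecting the typographic en-dash characters (–) that arrived via copy/paste, since only options starting with a real "-" are recognised:

       openssl ca -days 3650 \
           -in      ./server/requests/ciise.concordia.ca.csr \
           -cert    ./CA/ConcordiaCA.crt \
           -keyfile ./CA/ConcordiaCA.key \
           -out     ./server/certificates/ciise.concordia.ca.crt \
           -config  openssl.cnf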

  • problem with creating table on phpMyAdmin database

    - by tombull89
     Hello all, I'm running a MySQL database managed through phpMyAdmin on my web package on a 1and1-hosted server. I've managed to set up a database in the control panel, uploaded everything to root/phpmyadmin, and changed the config.ini.php file to point at 1and1's database server (because that's the way they do it). I can go to the web interface and get to the main page, but all it shows is the database name and I can't find how to create any tables. I know it's a long shot, but I'm almost out of ideas. Also, 1and1 have their own phpMyAdmin panel, which is pretty annoying to use, and a 1and1 web database which I have barely looked at. Help and suggestions much appreciated.

  • Cron won't use msmtpd to send emails in case of failed cronjob

    - by Glister
     I'm trying to configure a machine so that it sends me an email if one of the cronjobs outputs something, i.e. in case of an error. I'm using Debian Wheezy. Cron is working normally (without the email functionality), and msmtp is installed and configured. I have already symlinked /usr/{bin|sbin}/sendmail to /usr/bin/msmtpd. I can send email with:
       echo "test" | mail -s "subject" [email protected]
     or by executing:
       echo "test" | /usr/sbin/sendmail
     Without the symlink (/usr/sbin/sendmail), cron tells me:
       (CRON) info (No MTA installed, discarding output)
     With the symlinks I get:
       (root) MAIL (mailed 1 byte of output; but got status 0x004e, #012)
     Can you suggest how to configure the cron/msmtp pair? Thanks!
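     A sketch of a setup that usually works on Debian (assumptions worth checking: cron's sendmail call should end up at the msmtp client rather than an msmtpd daemon binary, the invoking user needs a readable msmtp configuration such as a system-wide /etc/msmtprc, and the example.com address below is a placeholder; the non-zero status cron reports is simply the mailer's exit code):

       # let msmtp provide the sendmail interface (the msmtp-mta package does exactly this):
       sudo apt-get install msmtp-mta        # or: sudo ln -sf /usr/bin/msmtp /usr/sbin/sendmail

       # system-wide config so root's cronjobs can send mail too:
       #   /etc/msmtprc  ->  account definition + "account default : <name>"

       # in the crontab, say where job output should go:
       #   MAILTO=you@example.com

       # test exactly what cron does:
       echo "test" | /usr/sbin/sendmail you@example.com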

  • after redmine install I see only the filesystem

    - by derty
     After installing Redmine, I can only access the filesystem! I reinstalled Redmine two or three times in different ways, using these how-tos:
       http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_using_Debian_package
       http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_210_on_Debian_Squeeze_with_Apache_Passenger
       http://beeznest.wordpress.com/2012/09/20/installing-redmine-2-1-on-debian-squeeze-with-apache-modpassenger/
     The web server at 10.0.0.14 is going to sit behind a reverse Apache proxy, but for now I'm working directly on the system; that change shouldn't be a problem, as I use the same setup for a bunch of other services. The database exists and I can enter it, and the configuration file config/database.yml is set up correctly with the credentials I use to log in as redmineuser. Does anyone have an idea why it is not working the way I want?
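     "Seeing only the filesystem" usually means Apache is serving the directory listing itself rather than handing the request to Passenger. A sketch of what to check, assuming the Debian package layout (the paths are assumptions):

       apache2ctl -M | grep passenger        # is mod_passenger actually loaded?
       sudo a2enmod passenger                # if not, enable it (from libapache2-mod-passenger)

       # the vhost must point at Redmine's public/ directory, not at the app root:
       #   DocumentRoot /usr/share/redmine/public
       #   <Directory /usr/share/redmine/public>
       #       AllowOverride all
       #       Options -MultiViews
       #   </Directory>

       sudo service apache2 restart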

  • vsftpd: refusing to run with writable root inside chroot

    - by MrROY
     I want to set up an anonymous-only FTP server (able to accept uploads). Here is my config file:
       listen=YES
       anonymous_enable=YES
       anon_root=/var/www/ftp
       local_enable=YES
       write_enable=YES
       anon_upload_enable=YES
       anon_mkdir_write_enable=YES
       xferlog_enable=YES
       connect_from_port_20=YES
       chroot_local_user=YES
       dirmessage_enable=YES
       use_localtime=YES
       secure_chroot_dir=/var/run/vsftpd/empty
       rsa_cert_file=/etc/ssl/private/vsftpd.pem
       pam_service_name=vsftpd
     But when I try to connect to it:
       kan@kan:~$ ftp yxxxng.bej
       Connected to yxxx.
       220 (vsFTPd 2.3.5)
       Name (yxxxg.bej:kan): anonymous
       331 Please specify the password.
       Password:
       500 OOPS: vsftpd: refusing to run with writable root inside chroot()
       Login failed
     Can anyone help?
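     A sketch of the classic fix for vsftpd 2.3.5: the root of the chroot itself must not be writable, so keep /var/www/ftp read-only and give anonymous users a writable subdirectory for uploads (on newer vsftpd builds allow_writeable_chroot=YES is an alternative, but stock 2.3.5 does not have that option):

       sudo chown root:root /var/www/ftp
       sudo chmod 555 /var/www/ftp                 # chroot root: readable, not writable
       sudo mkdir -p /var/www/ftp/uploads
       sudo chown ftp:ftp /var/www/ftp/uploads     # anonymous logins map to the "ftp" user
       sudo chmod 755 /var/www/ftp/uploads         # uploads land here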

  • apc.stat causes 500 internal server error

    - by Legit
     When I turn off apc.stat it causes a 500 internal server error. I checked the Apache error_log and it says:
       [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Warning:  require(): Filename cannot be empty in /var/www/site1/public/index.php on line 17
       [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Fatal error:  require(): Failed opening required '' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/site1/public/index.php on line 17
     I checked that line and here's what it contains:
       require('./wp-blog-header.php');
     I don't see anything wrong with it. My current setup: APC version 3.1.10, PHP version 5.4.4. How do I resolve this error when I disable apc.stat?
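     One workaround worth trying, sketched below - an assumption based on the symptom: relative includes are a common source of trouble once apc.stat is off, and building an absolute path sidesteps it (clear the opcode cache or restart Apache after the change):

       <?php
       // index.php, line 17 - resolve the path from this file's own directory instead of './'
       require dirname(__FILE__) . '/wp-blog-header.php';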

  • Prevent Internet Explorer 8 from exiting when closing the last tab

    - by LongTTH
     Firefox can be kept from exiting when I close the last tab just by doing some customization in about:config. But now my company has to work with a website that only works well in Internet Explorer, and I currently use Internet Explorer 8 on Windows XP SP3. So, how do I prevent Internet Explorer 8 from exiting when I close the last tab? I've searched for such a feature for a while but found nothing helpful. FYI, I've given up on my "customization habit" and am now trying to live in the IE world... phew...

  • Local Area Connection in Slackware 13

    - by asdasd
     I have Windows XP and Slackware 13 on one computer, and the ISP provided me with a new modem. There was a manual on how to configure it: under Windows I started the web browser, typed in its IP address 192.168.1.1, the modem's web interface appeared and I logged in; that was easy. But under Slackware I don't know how to reach the modem's config / web interface. I type in 192.168.1.1 but it doesn't work. Here's the output of ifconfig eth0:
       eth0      Link encap:Ethernet  HWaddr 00:a1:b0:01:18:28
                 inet addr:169.254.73.8  Bcast:169.254.255.255  Mask:255.255.0.0
                 UP BROADCAST MULTICAST  MTU:1500  Metric:1
                 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                 collisions:0 txqueuelen:1000
                 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                 Interrupt:17 Memory:febff400-febff4ff
     How can I log in to the modem from Linux, i.e. get an IP assigned under Slackware? Thank you.
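     The 169.254.x.x address means eth0 never got a DHCP lease from the modem. A sketch of two ways to get onto the modem's 192.168.1.x network from Slackware, run as root (dhcpcd is Slackware's stock DHCP client; the static address below is just an arbitrary free one):

       # ask the modem for a lease:
       dhcpcd eth0
       # ...or configure a static address in the modem's subnet:
       ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up
       ping -c 3 192.168.1.1          # modem reachable?
       # then http://192.168.1.1 should load in the browser again.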

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
     Hey guys... I have never worked on Linux and don't plan to either - the only command I probably know is "ls" :) I host my website on eApps and use their cPanel to set everything up, so I've never worked with Linux directly. Now I have a one-time case where I need to give a contractor access to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:
       /home/webadmin/example.com/html/
           images/  css/  js/  login.php  facebook.php
       /home/webadmin/example.com/application/
           library/  views/  models/  controllers/  config/  bootstrap.php
       /home/webadmin/example.com/cgi-bin/
     I want the new user to have access to only these folders:
       /home/webadmin/example.com/html/js
       /home/webadmin/example.com/html/css
       /home/webadmin/example.com/application/views
     He should not even be able to view the contents of other folders, including files like bootstrap.php or login.php. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
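     One way this is commonly done, sketched below (assumptions: root shell access to the server, vsftpd as the FTP daemon, and "cssguy" as a made-up account name). The account is chrooted to its home directory, and the three allowed folders are bind-mounted into it so nothing else is visible:

       useradd -m -d /home/cssguy -s /bin/false cssguy && passwd cssguy
       mkdir -p /home/cssguy/css /home/cssguy/js /home/cssguy/views
       mount --bind /home/webadmin/example.com/html/css          /home/cssguy/css
       mount --bind /home/webadmin/example.com/html/js           /home/cssguy/js
       mount --bind /home/webadmin/example.com/application/views /home/cssguy/views
       # give cssguy write permission on those three directories (e.g. via a shared group)

       # vsftpd.conf: keep the account locked inside its home directory
       #   local_enable=YES
       #   write_enable=YES
       #   chroot_local_user=YES
       # (if the FTP login is refused, /bin/false may need to be listed in /etc/shells)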
