Search Results

Search found 10674 results on 427 pages for 'glib config'.

  • Recompiling Nginx 1.4.3 with "--with-http_gzip_static_module" error

    - by Elijah Paul
    I'm trying to enable the 'ngx_http_gzip_static_module' module in Nginx 1.4.3 by adding --with-http_gzip_static_module to my ./configure options, but I receive the following error when I try to recompile (make):

        # make
        make -f objs/Makefile
        make[1]: Entering directory `/tmp/nginx-1.4.3'
        make[1]: *** No rule to make target `src/os/unix/ngx_gcc_atomic_x86.h', needed by `objs/src/core/nginx.o'.  Stop.
        make[1]: Leaving directory `/tmp/nginx-1.4.3'
        make: *** [build] Error 2

    My current config (CentOS 6.4):

        # nginx -V
        nginx version: nginx/1.4.3
        built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
        TLS SNI support enabled
        configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --prefix=/usr --add-module=./nginx-sticky-module-1.1 --add-module=./headers-more-nginx-module-0.23

    I was under the impression that this was a module that just needed to be 'enabled' as opposed to 'added'. What am I missing here?
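    A "No rule to make target" error after changing configure flags typically means the Makefile no longer matches the source tree, so the usual approach is to re-run ./configure in the extracted source directory with the complete set of existing arguments (taken from nginx -V) plus the new flag, then build again. A minimal sketch, assuming the source still lives in /tmp/nginx-1.4.3 and reusing the argument list above:

        cd /tmp/nginx-1.4.3
        # regenerate the Makefile with the original arguments plus the new module flag
        ./configure \
            --conf-path=/etc/nginx/nginx.conf \
            --pid-path=/var/run/nginx.pid \
            --error-log-path=/var/log/nginx/error.log \
            --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
            --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
            --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
            --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
            --http-log-path=/var/log/nginx/access.log \
            --with-http_ssl_module \
            --prefix=/usr \
            --add-module=./nginx-sticky-module-1.1 \
            --add-module=./headers-more-nginx-module-0.23 \
            --with-http_gzip_static_module
        make && make install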

  • django, mod_wsgi, MySQL High CPU - Problems

    - by Red Rover
    Good evening, and thank you for reading this post. I am having a problem with Django after migrating the DB from SQLite to MySQL. Initially, for the first 48 hours, all ran well, but now we are experiencing high CPU about every 30 minutes. This is a production ESX4i VM host with 2 x 2.8 GHz CPUs and 12 GB RAM; I have allocated 4 CPUs and 4 GB of memory to this VM. Any insight into this configuration and help with the spikes in CPU would be appreciated. Apache is configured to use the prefork MPM. Outlined below are the configs for the different services:

        MySQL Server version: 5.1.61 Source distribution
        Django 1.3
        mod_wsgi
        Apache/2.2.15

    httpd.conf:

        Timeout 120
        KeepAlive Off
        MaxKeepAliveRequests 400
        KeepAliveTimeout 3

    prefork MPM:

        StartServers 8
        MinSpareServers 8
        MaxSpareServers 16
        ServerLimit 40
        MaxClients 40
        MaxRequestsPerChild 0

    worker MPM:

        StartServers 16
        MaxClients 1024
        MinSpareThreads 64
        MaxSpareThreads 256
        ThreadsPerChild 64
        MaxRequestsPerChild 10240

    MySQL my.cnf:

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        symbolic-links=0
        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    wsgi.conf (/etc/httpd/conf.d/wsgi.conf):

        LoadModule wsgi_module modules/mod_wsgi.so
        WSGISocketPrefix /var/run/wsgi
        WSGIPythonEggs /var/tmp
        WSGIDaemonProcess SITE maximum-requests=10000
        WSGIProcessGroup SITE
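    One thing worth noting about the wsgi.conf shown: with only maximum-requests set, the WSGIDaemonProcess runs with its defaults (a single daemon process with 15 threads), so a few slow MySQL queries can pin that one process while Apache keeps queuing requests. A hedged sketch of sizing the daemon explicitly; the numbers here are illustrative starting points, not values tuned to this VM:

        WSGISocketPrefix /var/run/wsgi
        WSGIPythonEggs /var/tmp
        # explicit daemon sizing instead of relying on the defaults
        WSGIDaemonProcess SITE processes=4 threads=15 maximum-requests=10000
        WSGIProcessGroup SITE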

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and php-fpm, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere.

    Each site has a directory /var/www/domain.name/. Inside that directory there will be a public/ directory which will be the website root, a logs/ directory which will store nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.).

    The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx.

    A php-fpm pool configuration for a host looks like this ('x' is incremented for each pool):

        [host]
        user = host
        group = www-data
        chroot = /var/www/domain.name
        chdir = /public
        listen = 127.0.0.1:900x

    The nginx config for this host looks like this:

        server {
            listen 80;
            server_name domain.name *.domain.name;
            root /var/www/domain.name/public;
            index index.php index.html index.html;

            location ~ \.php$ {
                expires epoch;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9001;
            }
        }

    I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, but to no avail. Can anyone help?
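    The usual trap with a chrooted PHP-FPM pool is that SCRIPT_FILENAME must be a path that is valid inside the chroot, while nginx's $document_root expands to the path outside it. A minimal sketch, assuming the pool above (chroot = /var/www/domain.name, docroot in its public/ subdirectory):

        # nginx side: pass a path that resolves *inside* the pool's chroot
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9001;
        }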

  • Ubuntu in VirtualBox File Modified Time in Future and PHP slow file operations

    - by user1750
    For some reason, some of my files have a last modified date in the future. In addition to this, file operations in PHP are SUPER slow. For example, rebuilding the Symfony2 cache can take over 40 seconds (it takes 1-2 on my MacBook Pro).

    Notice the time for ListingsCRUDController.php in the listing - it just says "2012". In order to see the date more clearly I ran ls --time-style="full-iso" -l, and for some reason it shows that this file's last modified date is ~5 hours into the future compared to the system time.

    To make things more confusing, the system will intermittently speed up. Suddenly my app will start serving requests in 1-2 seconds (down from 40 seconds) for no apparent reason. I mean, I don't do anything to my code or system config - it just changes. Also, during a slow PHP request, the php5-fpm process (nginx) uses 100% of the CPU for the duration of the request.

    This is the second VM this has happened on and I need to know why it's doing this. It has become unusable.

    Information about my setup:

        VirtualBox 4.2.0
        Host: MacBook Pro
        Guest: Ubuntu Server 12.04
        Package dkms is installed
        Timezones match for Ubuntu and PHP

    Things I've tried:

        Both Apache and Nginx.
        APC enabled and disabled.
        Xdebug enabled and disabled.
        1 processor up to 4 processors.
        1 GB memory up to 4 GB memory.
        I've installed Ubuntu using the regular kernel and the VM kernel.

  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    I'm attempting to set up an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file:

        service vnc
        {
            protocol        = tcp
            socket_type     = stream
            wait            = no
            user            = nobody
            server          = /usr/bin/Xvnc
            server_args     = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None
        }

    This kind of works: I can start multiple sessions, all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started.

    I know that when you run vncserver from the command line it looks in your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstartup file. However, which xstartup file? The running user is "nobody", who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file and it doesn't look like they were even read.

    I changed the xinetd vnc service so that it would "strace" Xvnc4, looked through all the "open" lines, and didn't get a clue as to what file it was trying to read for xstartup. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
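    For an inetd-launched Xvnc that should show the gdm greeter, the usual pattern is not an xstartup file at all but XDMCP: enable XDMCP in gdm and have Xvnc query the local display manager. A hedged sketch; the gdm config path and the -query flag reflect standard gdm/X server behaviour on this era of Ubuntu and are assumptions, not details taken from the question:

        # /etc/gdm/custom.conf - allow gdm to answer XDMCP queries
        [xdmcp]
        Enable=true

        # xinetd server_args: query the local gdm instead of broadcasting
        server_args = -inetd -once -query localhost -geometry 1000x700 -depth 24 -securitytypes None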

  • Can't find disk usage in one directory

    - by Xster
    Similar questions are asked frequently, but no suggested answers solved my issue. I have some disk space usage that I can't find either. In df:

        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      144183992 136857180      2652 100% /
        udev             2013316         4   2013312   1% /dev
        tmpfs             808848       876    807972   1% /run
        none                5120         0      5120   0% /run/lock
        none             2022116        76   2022040   1% /run/shm
        overflow            1024         0      1024   0% /tmp

    I checked the inodes, I checked lsof for +L1 or deleted files, I rebooted, and I checked for files hidden behind mounts, but none of them were the issue. It grows periodically and I'm running out of things to delete to feed the beast. It's all in the home directory of the only user I have. Running du -h --max-depth=1 in ~ gives:

        192K    ./.nv
        2.1M    ./.gconf
        12K     ./Pictures
        1.6M    ./.launchpadlib
        12K     ./Public
        24K     ./.TemporaryItems
        8.9M    ./.cache
        12K     ./Network Trash Folder
        28K     ./.vnc
        11M     ./.AppleDB
        48K     ./.subversion
        1.9G    ./.xbmc
        8.0K    ./.AppleDesktop
        12K     ./.dbus
        81M     ./.mozilla
        12K     ./Music
        160K    ./.gnome2
        44K     ./Downloads
        692K    ./.zsh
        236K    ./.AppleDouble
        64K     ./.pulse
        4.0K    ./.gvfs
        1.4M    ./.adobe
        44K     ./.pki
        44K     ./.compiz-1
        168K    ./.config
        1.4M    ./.thumbnails
        12K     ./Templates
        912K    ./.gstreamer-0.10
        8.0K    ./.emacs.d
        92K     ./Desktop
        1.3M    ./.local
        12K     ./Ubuntu One
        12K     ./Documents
        296K    ./.fontconfig
        12K     ./.qt
        12K     ./.gnome2_private
        20K     ./.ssh
        20K     ./.mission-control
        12K     ./Videos
        12K     ./Temporary Items
        640K    ./.macromedia
        124G    .

    I can't find a way to figure out how it got to that 124G in that directory. There are no mount points in home.
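    One detail about this du invocation: --max-depth=1 only lists subdirectories, so regular files sitting directly in the top level of ~ are counted in the final 124G total but never shown as individual entries. A huge log, dump or core file in ~ itself would be invisible in the listing above. A hedged way to check, using only standard coreutils/findutils:

        # include files as well as directories in the per-entry output
        du -ah --max-depth=1 ~ | sort -h | tail -n 20

        # or look specifically for large regular files at the top level of ~
        find ~ -maxdepth 1 -type f -size +100M -exec ls -lh {} \;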

  • IIS7 Compression CSS files only compressed when dynamic compression is enabled

    - by Paul
    If anyone can help it would be appreciated. I would like to enable compression for static files within IIS7 (for the sake of simplicity I'll just refer to static css files for the time being). The problem I'm getting is that css files are only compressed when both dynamic and static compression are enabled in IIS for the website.

    What I really want to achieve is css compression (static files) whilst leaving the dynamic (aspx) pages uncompressed for the time being (to avoid unnecessary CPU load). I am puzzled as to why leaving just 'static compression' enabled causes css files to be returned uncompressed.

    My applicationHost.config file has not been altered and looks like this:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
            <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
            <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </staticTypes>
            <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </dynamicTypes>
        </httpCompression>

    The server-wide compression setting within IIS is set to 'Dynamic Disabled' and 'Static Enabled' on the server's Features > Compression page. The website compression setting (Server > Sites > MyWebsite > Features > Compression) is where I am enabling and disabling dynamic compression as detailed above. Any help would really help me get unstuck on this. Thanks
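    One behaviour worth ruling out: IIS7 only serves a gzipped copy of a static file once that file has been requested "frequently enough" for a compressed copy to be written to the compression directory, so a couple of isolated test requests can legitimately come back uncompressed even with static compression on (dynamic compression, by contrast, compresses every response, which can make it look like the deciding factor). The frequentHitThreshold and frequentHitTimePeriod attributes on system.webServer/serverRuntime control this; the value below is illustrative, not taken from the question:

        rem inspect the current serverRuntime settings
        %windir%\system32\inetsrv\appcmd.exe list config /section:system.webServer/serverRuntime

        rem treat the first request as a "frequent hit" so static compression applies immediately
        %windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/serverRuntime /frequentHitThreshold:1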

  • Problems with IPsec between Cisco ASA 5505 and Juniper ssg5

    - by Oskar Kjellin
    I am trying to set up an IPsec tunnel between our ASA 5505 and a Juniper SSG5. The tunnel is up and running, but I cannot get any data through it. The local network I am on is 172.16.1.0 and the remote is 192.168.70.0, but I cannot ping anything on their network. I receive a "Phase 2 OK" when I set up the IPsec.

    I think this is the part of the config that is applicable. It seems like the data is not routed through the tunnel, but I am not sure...

        object network our-network
         subnet 172.16.1.0 255.255.255.0
        object network their-network
         subnet 192.168.70.0 255.255.255.0
        access-list outside_cryptomap extended permit ip object our-network object their-network
        crypto ipsec ikev1 transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto map outside_map 1 match address outside_cryptomap
        crypto map outside_map 1 set pfs
        crypto map outside_map 1 set peer THEIR_IP
        crypto map outside_map 1 set ikev1 phase1-mode aggressive
        crypto map outside_map 1 set ikev1 transform-set ESP-3DES-MD5
        crypto map outside_map 1 set ikev2 pre-shared-key *****
        crypto map outside_map 1 set reverse-route
        crypto map outside_map interface outside
        webvpn
        group-policy GroupPolicy_THEIR_IP internal
        group-policy GroupPolicy_THEIR_IP attributes
         vpn-filter value outside_cryptomap
         ipv6-vpn-filter none
         vpn-tunnel-protocol ikev1
        tunnel-group THEIR_IP type ipsec-l2l
        tunnel-group THEIR_IP general-attributes
         default-group-policy GroupPolicy_THEIR_IP
        tunnel-group THEIR_IP ipsec-attributes
         ikev1 pre-shared-key *****
         ikev2 remote-authentication pre-shared-key *****
         ikev2 local-authentication pre-shared-key *****
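    One detail that stands out in the excerpt: the crypto map references a transform set named ESP-3DES-MD5, while the only transform set defined above is ESP-3DES-SHA. Either the MD5 set exists elsewhere in the running config, or the map is pointing at a phase 2 proposal that doesn't match what the SSG5 offers. A hedged sketch of making the two line up; which hash to use has to match the Juniper's phase 2 proposal, so these lines are illustrative only:

        ! either define the transform set the crypto map actually references...
        crypto ipsec ikev1 transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        ! ...or point the map at the set that is already defined
        crypto map outside_map 1 set ikev1 transform-set ESP-3DES-SHA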

  • Sendmail configs and logs look correct, but I get no mail

    - by Christian Dechery
    I used this tutorial to configure sendmail on Ubuntu. I followed every step, and when I test it, it seems to have worked, but I get no mail (not even in the spam folder). Below is the log for a test message:

        050 >>> MAIL From:<[email protected]> SIZE=345 AUTH=<>
        050 250 2.1.0 OK ek1sm23505399vdc.28 - gsmtp
        050 >>> RCPT To:<######@gmail.com>
        050 250 2.1.5 OK ek1sm23505399vdc.28 - gsmtp
        050 >>> DATA
        050 354 Go ahead ek1sm23505399vdc.28 - gsmtp
        050 >>> .
        050 250 2.0.0 OK 1401150762 ek1sm23505399vdc.28 - gsmtp
        050 <########@gmail.com>... Sent (OK 1401150762 ek1sm23505399vdc.28 - gsmtp)
        250 2.0.0 s4R0WdYN007263 Message accepted for delivery
        ######@gmail.com... Sent (s4R0WdYN007263 Message accepted for delivery)

    And this is my /var/log/mail.log:

        May 26 21:32:39 UX-BLUEROOM sendmail[7262]: s4R0Wdxq007262: from=christian, size=105, class=0, nrcpts=1, msgid=<[email protected]>, relay=christian@localhost
        May 26 21:32:40 UX-BLUEROOM sm-mta[7263]: s4R0WdYN007263: from=<[email protected]>, size=345, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        May 26 21:32:41 UX-BLUEROOM sm-mta[7263]: STARTTLS=client, relay=gmail-smtp-msa.l.google.com., version=TLSv1/SSLv3, verify=FAIL, cipher=ECDHE-RSA-RC4-SHA, bits=128/128
        May 26 21:32:42 UX-BLUEROOM sm-mta[7263]: s4R0WdYN007263: to=<######@gmail.com>, ctladdr=<[email protected]> (1000/1000), delay=00:00:02, xdelay=00:00:02, mailer=relay, pri=30345, relay=gmail-smtp-msa.l.google.com. [173.194.75.109], dsn=2.0.0, stat=Sent (OK 1401150762 ek1sm23505399vdc.28 - gsmtp)
        May 26 21:32:42 UX-BLUEROOM sendmail[7262]: s4R0Wdxq007262: to=#####@gmail.com, ctladdr=christian (1000/1000), delay=00:00:03, xdelay=00:00:03, mailer=relay, pri=30105, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4R0WdYN007263 Message accepted for delivery)

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment.

    We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files quicker?

    Thoughts:

    I read that the disks on virtual machines are much slower (since they are network attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?

    The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out, e.g.:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    We do use the APC opcode cache but don't think it's the issue, due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.

    Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
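    One thing worth checking on the deployment side (hedged, since the Capistrano task itself isn't shown here): whether the "current" symlink is replaced atomically. ln -sf on an existing symlink unlinks the old link before creating the new one, so there is a brief window where the path dangles or resolves to a stale entry; building the new link under a temporary name and renaming it over the old one avoids that. A minimal sketch, assuming a layout like /var/www/app/releases/<timestamp> and /var/www/app/current (paths are hypothetical):

        # build the new symlink next to the old one, then atomically rename it into place
        ln -s /var/www/app/releases/20240101120000 /var/www/app/current.new
        mv -T /var/www/app/current.new /var/www/app/current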

  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one and used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and see the web client.

    The problem is, I can't ssh in to scp the files I need, mount a USB thumbdrive (usbfs is unsupported), or get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them give me Samba access. Windows usually freezes, then reports that it cannot connect.

    I'd rather not wipe a box to do a dedicated install. Is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks

  • Problem configuring virtual host.

    - by Zeeshan Rang
    I am trying to configure Apache virtual hosts on my computer, but I am having problems doing so. I have made the required changes in C:\WINDOWS\system32\drivers\etc\hosts and then in C:\xampp\apache\conf\extra\httpd-vhosts.conf. I added the following lines to httpd-vhosts.conf:

        ######################## Virtual Hosts Config below ##################
        NameVirtualHost 127.0.0.1

        <VirtualHost localhost>
            ServerName localhost
            DocumentRoot "C:\xampp\htdocs"
            DirectoryIndex index.php index.html
            <Directory "C:\xampp\htdocs">
                AllowOverride All
            </Directory>
        </VirtualHost>

        <VirtualHost virtual.cloudse7en.com>
            ServerName virtual.cloudse7en.com
            DocumentRoot "C:\development\virtual.cloudse7en.com\httpdocs"
            DirectoryIndex index.php index.html
            <Directory "C:\development\virtual.cloudse7en.com\httpdocs">
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost virtual.app.cloudse7en.com>
            ServerName virtual.app.cloudse7en.com
            DocumentRoot "C:\development\virtual.app.cloudse7en.com\httpdocs"
            DirectoryIndex index.php index.html
            <Directory "C:\development\virtual.app.cloudse7en.com\httpdocs">
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        ########################################################################

    I started XAMPP and tried http://localhost in a browser. This works and opens http://localhost/xampp/, but when I try http://virtual.app.cloudse7en.com it again opens http://virtual.app.cloudse7en.com/xampp/. I do not understand the reason. Also, I am on Windows Vista 64-bit; do I need to make some other changes too? Regards, Zee
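    With name-based virtual hosting, the NameVirtualHost argument and the <VirtualHost> address have to match: Apache matches the address/port first and only then looks at ServerName, so mixing NameVirtualHost 127.0.0.1 with <VirtualHost virtual.app.cloudse7en.com> typically makes every request fall through to the first (default) vhost, which is exactly what serving /xampp/ for every hostname looks like. A hedged sketch of the usual Apache 2.2 pattern, reusing the paths from the question:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "C:\xampp\htdocs"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName virtual.app.cloudse7en.com
            DocumentRoot "C:\development\virtual.app.cloudse7en.com\httpdocs"
        </VirtualHost>

    The hosts file entries mapping those names to 127.0.0.1 still need to be in place, as described above.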

  • Can't Login to phpPgAdmin

    - by Devin
    I'm trying to set up phpPgAdmin on my test machine so that I can interface with PostgreSQL without always having to use the psql CLI. I have PostgreSQL 9.1 installed via the RPM repository, while I installed phpPgAdmin 5.0.4 "manually" (by extracting the archive from the phpPgAdmin website). For the record, my host OS is CentOS 6.2.

    I made the following configuration changes already:

    PostgreSQL:

        Inside pg_hba.conf, I changed all METHODs to md5.
        I gave the postgres account a password.
        I added a new account named webuser with a password (note that I did not do anything else to the account, so I can't exactly say that I know what permissions it has).

    phpPgAdmin config.inc.php:

        Changed the line $conf['servers'][0]['host'] = ''; to $conf['servers'][0]['host'] = '127.0.0.1'; (I've also tried using localhost as the value there).
        Set $conf['extra_login_security'] to false.

    Whenever I try to log in to phpPgAdmin, I get "Login failed", even if I use credentials that work in psql. I've tried to go through some of the steps noted in Question 3 of the FAQ, but it hasn't worked out well so far.

    It likely does not help that this is my first day working with PostgreSQL. I'm fairly familiar with MySQL, but I have to use PostgreSQL for the project I'm working on. Could anyone offer some help for how to set up phpPgAdmin on CentOS 6.2? If I've done something terribly wrong in my configuration so far, it's no big deal to blow something/everything away, as it's not like I've stored any data there yet! I appreciate any insight you may have!

  • ProCurve network expansion

    - by Blue Warrior NFB
    I've hit a bit of a wall with our network scale-out. As it stands right now, we have five ProCurve 2910al switches connected as above, but with 10GbE connections (two CX4, two fiber). This fully populates the central switch above; there will be no more 10GbE Ethernet connections from that device. This group of switches is not stacked (no stack directive).

    Sometime in the next two or three months I'll need to add a sixth, and I'm not sure how deep of a hole I'm in. Ideally I'd replace the core switch with something more capable that has more 10GbE ports; however, that's a major outage and requires special scheduling. The two edge switches connected via fiber have dual-port 10GbE cards in them, so I could physically put another switch on the far end of one of those. I don't know how much of a good or bad idea that would be, though. Is that too many segments between end-points?

    Some config excerpts from the running configuration:

        ; J9147A Configuration Editor; Created on release #W.14.49
        hostname "REDACTED-SW01"
        time timezone 120
        module 1 type J9147A
        module 2 type J9008A
        module 3 type J9149A
        no stack
        trunk B1 Trk3 Trunk
        trunk B2 Trk4 Trunk
        trunk A1 Trk11 Trunk
        trunk A2 Trk12 Trunk
        vlan 15
           name "VM-MGMT"
           untagged Trk2,Trk5,Trk7
           ip helper-address 10.1.10.4
           ip address 10.1.11.1 255.255.255.0
           tagged 37-40,Trk3-Trk4,Trk11-Trk12
           jumbo
           ip proxy-arp
           exit

  • cygwin sshd times out for remote login

    - by reve_etrange
    I have configured SSHD using Cygwin on Windows 7. I have checked and double-checked all of the following points:

        Port forwarding is correctly configured
        Windows Firewall is configured to pass port 22
        Local login attempts (using Cygwin SSH) succeed
        sshd_config has UseDNS No
        Using nmap from remote machine confirms port 22 is accessible
        /etc/passwd and /etc/group are correctly populated

    However, remote login attempts time out. This includes from the local network.

        user@host:~$ ssh -vvv [email protected]
        OpenSSH_5.5p1 Debian-4ubuntu6, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /home/user/.ssh/config
        debug1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to the.ip.add.ress [the.ip.add.ress] port 22.
        debug1: connect to address the.ip.add.ress port 22: Connection timed out
        ssh: connect to the.ip.add.ress port 22: Connection timed out

    No messages are logged to /var/log/sshd.log. I suspect that there is a permissions issue with a particular file somewhere, however I have checked the permissions of all my Cygwin binaries, DLLs and the particular files important to Cygwin sshd, including all of:

        /etc/passwd
        /etc/group
        /var
        /var/log/sshd.log
        /var/empty

    Others who have reported this or similar errors appear to have missed one of the points enumerated above. Can anyone point me to a possible solution?

  • Puppet file transfer slow

    - by Noodles
    I have a puppet master and slaves in different datacenters. The latency between them is ~40ms. When I run "puppet agent --test" on a slave to apply the latest manifest, it takes ~360 seconds to finish. After doing some digging I can see the main cause of the slowdown is file transfers: it seems to take ~10 seconds to transfer each file. The files are only small (configuration files), so I can't understand why they would take so long. This is an example of a file in my manifest:

        file { "/etc/rsyncd.conf" :
          owner  => "root",
          group  => "root",
          mode   => 644,
          source => "puppet:///files/rsyncd/rsyncd.conf"
        }

    Running puppet-profiler I see this:

        10.21s - File[/etc/rsyncd.conf]

    It also seems I cannot update more than one server at once using puppet; if I run two servers at the same time then puppet takes twice as long. I have changed the puppet master from using WEBrick to Mongrel, but this doesn't seem to help. This is making deploying changes painful. A simple config change can take an hour to roll out to all servers.

  • How to set up IP forwarding on Nexenta (Solaris)?

    - by Gleb
    I am trying to set up IP forwarding on my Nexenta box:

        root@hdd:~# uname -a
        SunOS hdd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris

    The box has 2 network interfaces:

        root@hdd:~# ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        e1000g1: flags=1001100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,FIXEDMTU> mtu 1500 index 2
                inet 192.168.12.2 netmask ffffff00 broadcast 192.168.12.255
                ether 68:5:ca:9:51:b8
        myri10ge0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3
                inet 10.10.10.10 netmask ffffff00 broadcast 10.10.10.255
                ether 0:60:dd:47:87:2
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128

    192.168.12.0 is my normal LAN, with 192.168.12.1 being the firewall/gateway; 10.10.10.0 is a separate LAN for iSCSI (with no internet access). I want to set up IP forwarding so that a computer on 10.10.10.0 will be able to access the internet by using 10.10.10.10 as a gateway (I don't need any port forwarding).

    I have turned on IP forwarding:

        root@hdd:~# routeadm
                      Configuration   Current              Current
                             Option   Configuration        System State
        ---------------------------------------------------------------
                       IPv4 routing   disabled             disabled
                       IPv6 routing   disabled             disabled
                    IPv4 forwarding   enabled              enabled
                    IPv6 forwarding   disabled             disabled

                   Routing services   "route:default ripng:default"

        Routing daemons:

                          STATE   FMRI
                       disabled   svc:/network/routing/rdisc:default
                       disabled   svc:/network/routing/route:default
                       disabled   svc:/network/routing/legacy-routing:ipv4
                       disabled   svc:/network/routing/legacy-routing:ipv6
                       disabled   svc:/network/routing/ripng:default
                         online   svc:/network/routing/ndp:default

    But when I try to start ipnat, I get an error:

        root@hdd:~# ipnat -CF -f /etc/ipf/ipnat.conf
        ioctl(SIOCGNATS): I/O error

    Here is the config:

        root@hdd:~# cat /etc/ipf/ipnat.conf
        #!/sbin/ipnat -f -
        #
        map e1000g1 10.10.10.10/24 -> 192.168.12.2/32

    So the question is how to fix this. Thanks in advance!
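    The ioctl(SIOCGNATS): I/O error from ipnat usually just means the IP Filter kernel module isn't loaded, so there is no NAT device for ipnat to talk to. A hedged sketch for a Solaris-style system; the service name is the standard one, and whether NexentaOS ships it under exactly that FMRI is an assumption worth verifying:

        # enable IP Filter so the ipf/ipnat devices become available
        svcadm enable network/ipfilter
        svcs network/ipfilter

        # then (re)load the NAT rules and list them
        ipnat -CF -f /etc/ipf/ipnat.conf
        ipnat -l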

  • Problem configuring Apache/Wordpress on subdomain

    - by friism
    I have two servers (one LAMP, one Windows) and one website with an associated blog. I'm running the main site on the Windows server and the blog on the LAMP server, using WordPress. The main site is accessed at http://folketsting.dk (it's in Danish -- sorry); the blog is accessed at http://blog.folketsting.dk (this link is bad, read on).

    The main site works fine. The blog works, except for the front page. Example of a working post: http://blog.folketsting.dk/2009/10/09/ftlive/. The front page of the blog (http://blog.folketsting.dk), however, shows HTML from http://folketsting.dk (except for the CSS and JavaScript). In fact, any URL other than the front page "works" and gets served by WordPress, e.g. http://blog.folketsting.dk/foo.

    I cannot -- for the life of me -- understand how the LAMP server running http://blog.folketsting.dk manages to serve up content generated by the Windows server running http://folketsting.dk. Looking at the response headers at http://blog.folketsting.dk, it's evident that the content originates from Apache, not IIS. I'm pretty sure it's not a DNS issue, since the problem is evident even when accessing the raw IP, e.g. http://130.226.142.141/ vs. http://130.226.142.141/foo. I'm thinking it's a bad config in Apache... any clues?

  • What can lead to zone memory exhaustion and how does Nginx react to it?

    - by Miles Hughes
    What is a possible scenario for exhausting the memory designated to a connection zone with the limit_conn_zone directive, and what are the implications in this case? Suppose I have this in my configuration:

        http {
            limit_conn_zone $binary_remote_addr zone=connzone:1m;
            ...
            server {
                limit_conn connzone 5;

    which, according to the documentation, allocates 16000 states for connzone on a 64-bit server. It also says that:

        If the storage for a zone is exhausted, the server will return error 503 (Service Temporarily Unavailable) to all further requests.

    Well, OK. But what does it mean in practice? When does this happen? Who receives those 503s? Does it mean that if the number of IPs somehow associated with connzone hits 16000, everyone gets a 503 and it's all over? How does Nginx decide? The documentation is weirdly vague on this. So, considering the example config, who would actually get a 503, under which circumstances, and how would things go from there? Same with request zones?
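    As a rough sketch of the mechanics, based on the documented behaviour rather than on this particular setup: the zone holds one state entry per distinct key that currently has connections, entries are released when a key's connection count drops back to zero, and it is the request that cannot get a slot in a full zone that is answered with 503, not all traffic forever. Making the zone bigger simply raises that ceiling. A hedged config example; the numbers are illustrative, and limit_conn_status requires nginx 1.3.15+:

        http {
            # 10m of shared memory holds on the order of ~160000 states for $binary_remote_addr
            limit_conn_zone $binary_remote_addr zone=connzone:10m;

            server {
                limit_conn connzone 5;
                # optionally return a different status for rejected requests
                limit_conn_status 429;
            }
        }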

  • Windows Server 2003 with Apache and IIS causing random faulting and performance issues with Apache?

    - by contrebis
    I'm trying to fix a problem on a Windows Server 2003 SE install which is running IIS6 and the Apache webserver (with PHP and MySQL). IIS sites are bound to one IP, Apache to the other. Everything seemed fine until the other IP address was installed to allow a webservice to run under IIS.

    Symptoms:

        Apache now responds very slowly, even to requests for static files (often 30 seconds or more).
        Sporadic errors are appearing in the event logs like: Faulting application httpd.exe, version 2.2.14.0, faulting module php5ts.dll, version 5.2.13.13, fault address 0x000ac14f.

    I've double-checked the config files, taken account of this question/answer (http://serverfault.com/questions/51230/running-iis-and-apache-on-the-same-windows-server), upped the Apache log level to debug, run TCPView to check for conflicting bindings, and upgraded to the latest Apache/PHP versions, but still no success or indication of a cause.

    Any suggestions on where to look, or debugging tips, would be gratefully received. I'm a web programmer, so not so familiar with Windows Server admin or details of the networking stack. Running PHP under IIS is not an option and hosting on another server is non-ideal.

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used instead. I'm using bind9 and here are my config files:

    /etc/bind/named.conf.local:

        zone "company.int" {
                type master;
                file "/etc/bind/db.company.int";
        };

    /etc/bind/db.company.int:

        ; $TTL 3600
        @       IN      SOA     company.int. company.localhost. (
                                 20120617       ; Serial
                                    604800      ; Refresh
                                     86400      ; Retry
                                   2419200      ; Expire
                                    604800 )    ; Negative Cache TTL
        ;
        @       IN      NS      company.int.
        @       IN      A       127.0.0.1
        @       IN      AAAA    ::1

        ; CNAME
        mysql   IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.

    The dig command assures me that my alias is working as expected:

        $ dig mysql.company.int
        ...
        ;; ANSWER SECTION:
        mysql.company.int.      3600    IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.
        xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
        ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
        ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout:

        $ mysql -uuser -ppassword -hmysql.company.int
        ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!

  • Linux NIC Bonding Issue (CentOS 4 / RHEL 3)

    - by jinanwow
    I am having an issue with bonding NICs on CentOS 4. It appears the bonding driver does work, but it is stuck in round-robin mode and I am trying to get it into active-backup. The current config is:

    ifcfg-bond0:

        DEVICE=bond0
        IPADDR=192.168.204.18
        NETMASK=255.255.255.0
        ONBOOT=yes
        BOOTPROTO=none
        USERCTL=no
        TYPE=Bonding
        BONDING_OPTS="mode=1 miimon=100"

    ifcfg-eth1:

        DEVICE=eth1
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=Ethernet
        MASTER=bond0
        SLAVE=yes

    ifcfg-eth3:

        DEVICE=eth3
        ONBOOT=yes
        BOOTPROTO=none
        TYPE=Ethernet
        MASTER=bond0
        SLAVE=yes

    cat /proc/net/bonding/bond0:

        Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005)

        Bonding Mode: load balancing (round-robin)
        MII Status: up
        MII Polling Interval (ms): 0
        Up Delay (ms): 0
        Down Delay (ms): 0

        Slave Interface: eth1
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:17:a4:8f:94:b1

        Slave Interface: eth3
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:1b:21:56:b8:69

    cat /etc/modprobe.conf:

        alias eth0 tg3
        alias eth1 tg3
        alias eth3 e1000
        alias eth2 e1000
        alias bond0 bonding
        options bond0 mode=1 miimon=100

    I have tried moving the bonding information out of ifcfg-bond0 into the modprobe configuration file. It seems to be stuck in round-robin and I am trying to get it into the active-backup (mode 1) state. Any ideas what would be causing this issue?
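    The /proc/net/bonding/bond0 output reporting round-robin with an MII Polling Interval of 0 suggests the module options are simply not reaching the driver, which commonly happens when the bonding module was already loaded before the options were in place, or when the options line is keyed to the alias name on a driver that expects "options bonding ...". A hedged sketch of forcing the options through and reloading; the values are reused from the question, and the exact initscript behaviour on CentOS 4 may differ:

        # /etc/modprobe.conf - tie the options to the bonding module itself
        alias bond0 bonding
        options bonding mode=1 miimon=100

        # reload the driver so the new options take effect, then bring the bond back up
        ifdown bond0
        rmmod bonding
        modprobe bonding
        ifup bond0

        # verify the mode and polling interval actually changed
        grep -i -e "bonding mode" -e "polling" /proc/net/bonding/bond0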

  • Using 1920x1200 mode on SyncMaster T260HD in Linux

    - by dagorym
    I just got a Samsung SyncMaster T260HD monitor. It works straight out of the box with Windows, but I can't seem to get it to work with Linux, which is my primary OS for day-to-day work. The computer boots up, but when going into graphical mode on Linux the monitor gives me a "Mode not supported" error and doesn't display anything.

    I booted up Windows and, using PowerStrip, grabbed the exact modeline that should be used to get the equivalent setting in Linux, and added it to my xorg.conf, but it doesn't seem to help. The modeline is:

        ModeLine "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync

    My complete entry in the xorg.conf file for the monitor is:

        Section "Monitor"
            Identifier   "Monitor0"
            ModelName    "SyncMaster"
            DisplaySize  518 324
            HorizSync    30.0 - 81.0
            VertRefresh  56.0 - 75.0
            Option       "dpms"
            ModeLine     "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync
        EndSection

    I'm running Scientific Linux 5.4 (a clone of Red Hat Enterprise Linux 5.4), but I've tried booting with a recent Linux Mint distro as well as Ubuntu 9.04 and had the same problem. Any suggestions on other things I should try or might be missing? If anyone's gotten this to work I'd love to know. Thanks.
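    A modeline in the Monitor section only takes effect if the Screen section's Modes line actually lists "1920x1200" and the driver doesn't reject it against the sync ranges. A way to experiment without editing xorg.conf at all is to generate a standard reduced-blanking mode and add it at runtime with xrandr. This sketch assumes the output is called DVI-0 (check with xrandr -q) and that the driver in use supports RandR 1.2, which older RHEL5-era servers may not:

        # generate a reduced-blanking modeline (often required for 1920x1200 over single-link DVI)
        cvt -r 1920 1200 60

        # add the mode and apply it to the monitor's output
        xrandr --newmode "1920x1200R" 154.00 1920 1968 2000 2080 1200 1203 1209 1235 +hsync -vsync
        xrandr --addmode DVI-0 1920x1200R
        xrandr --output DVI-0 --mode 1920x1200R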

  • Unable to login to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to login to the EC2 server. Here's the log of the connection attempt:

        $ ssh -v -i ec2-key-incoleg-x002.pem [email protected]
        OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010
        debug1: Reading configuration data /home/gvaish/.ssh/config
        debug1: Applying options for *
        debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22.
        debug1: Connection established.
        debug1: identity file ec2-key-incoleg-x002.pem type -1
        debug1: identity file ec2-key-incoleg-x002.pem-cert type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /home/gvaish/.ssh/known_hosts:8
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: ec2-key-incoleg-x002.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/gvaish/.ssh/id_rsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    What can be the possible reason? How do I fix the issue?
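    "Permission denied (publickey)" after the key has actually been offered usually comes down to one of two things: the wrong login name for the AMI (Amazon Linux images expect ec2-user, Ubuntu images expect ubuntu, some older AMIs use root), or a key pair that doesn't match the one the instance was launched with. Overly open permissions on the .pem file produce a separate, explicit warning instead. A hedged sketch of the usual checks; the user names are the common defaults, not details taken from this question:

        # make sure the key file is only readable by you
        chmod 400 ec2-key-incoleg-x002.pem

        # try the AMI's default user explicitly
        ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com
        ssh -i ec2-key-incoleg-x002.pem ubuntu@ec2-50-16-0-207.compute-1.amazonaws.com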

  • Google Chrome doesn't respond to user actions correctly

    - by Carlos A. Junior
    Recently I changed my OS to Ubuntu 12.04 (Cinnamon, 64-bit) from Mint 13 (KDE, 64-bit), and the same bug still appears on the new installation. Google Chrome seems not to refresh (repaint) the page in response to my interactions.

    Example: when I try to comment on a YouTube video and click in the textarea, the cursor doesn't appear inside the textarea, BUT if I change to another tab and return, the cursor appears... OK... If I start to write some text, the characters don't appear as I'm typing; again, if I change to another tab and return, the typed text appears in the textarea.

    Other cases where this bug appears:

        Links that open modal boxes don't show the modal;
        Forms inside modal boxes don't show typed characters;
        The common Disqus comment plugin doesn't work when focused.

    I have no idea of the reason for this bug (video driver, window manager, a Chrome bug? I don't know). Any idea how to solve this? Additional information:

        Google Chrome   22.0.1229.79 (Official Build 158531)
        OS              Linux
        WebKit          537.4 (@129177)
        JavaScript      V8 3.12.19.11
        Flash           11.3.31.331
        User Agent      Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Command Line    /opt/google/chrome/google-chrome --flag-switches-begin --flag-switches-end
        Executable Path /opt/google/chrome/google-chrome
        Profile Path    /home/carlos/.config/google-chrome/Default

        Kernel version: 3.2.0-31-generic-pae
        Ubuntu 12.04

    Best regards.
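    Repaint-only glitches like this, where content catches up as soon as the tab regains focus, are frequently tied to Chrome's GPU compositing path interacting badly with the graphics driver or the window manager's compositor. A common first test is to launch Chrome with GPU acceleration disabled and see whether the symptoms disappear; --disable-gpu is a real Chrome switch, but treating it as a diagnostic rather than the fix is the assumption here:

        # launch Chrome with hardware compositing turned off as a test
        /opt/google/chrome/google-chrome --disable-gpu

        # if this helps, the GPU driver or the Cinnamon compositor is the likely culprit;
        # updating the graphics driver or toggling compositing would be the next steps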
