Search Results

Search found 3247 results on 130 pages for 'apache2 2'.


  • PHP.ini Settings Are Ignored By PHP5.3.5 Running With Windows 7 And Apache 2.2.15

    - by Andy
    I did an install of PHP 5.3.5 on Windows 7 Home Premium using the MSI installer download. I had it overwrite a previous version of PHP5 in C:\php5\. When first testing it, the server failed to start. I fixed this by adding the path to PHP in the Apache 2.2 httpd file, where the installer had inserted two lines of code pointing to the ini file directory and the PHP DLL but had left out the directory path. After doing this, the server starts OK and I can run phpinfo to view the PHP settings in my web browser on localhost. In the phpinfo it states that the loaded configuration file is C:\php5\php.ini, as expected. But if I make any changes to the settings and reboot the server, none of the changes are reflected in phpinfo. Yes, I do refresh the browser window. If I rename php.ini to something else to make it invisible, phpinfo then correctly identifies that there is no php.ini file loaded. So the settings in php.ini are being ignored and some default settings are being used (but I have no idea where these are derived from). As far as I can tell, there are no other php.ini files on my computer. In phpinfo it states that the Configuration File (php.ini) Path is C:\Windows, but this is the same as on a Windows XP computer that I work on, and in the Windows folder I don't see any php.ini file. In the Windows registry there is no mention of PHP5, and the PATH environment variable starts with C:\php5\;. So hopefully someone can suggest how I can get PHP5 to take notice of the C:\php5\php.ini settings. :)
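    For reference, a minimal sketch of the Apache 2.2 httpd.conf lines involved, with illustrative paths only (these are assumptions, not taken from the post above):

        # Hypothetical example -- adjust paths to the actual install
        LoadModule php5_module "C:/php5/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        # PHPIniDir points mod_php at the directory containing php.ini;
        # if it is missing or wrong, PHP silently falls back to built-in defaults.
        PHPIniDir "C:/php5"

    After changing it, Apache needs a restart for the new php.ini location to take effect; the "Loaded Configuration File" line in phpinfo() then confirms which file was actually read.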

    Read the article

  • Error in django using Apache & mod_wsgi

    - by Ignacio
    Hey, I've been making some changes to my Django development environment, as some of you suggested. So far I've managed to configure and run it successfully with Postgres. Now I'm trying to run the app using apache2 and mod_wsgi, but I ran into this little problem after following the guidelines from the Django docs. When I access localhost/myapp/tasks this error is raised:

        Request Method: GET
        Request URL: http://localhost/myapp/tasks/
        Exception Type: TemplateSyntaxError
        Exception Value: Caught an exception while rendering: argument 1 must be a string or unicode object

        Original Traceback (most recent call last):
          File "/usr/local/lib/python2.6/dist-packages/django/template/debug.py", line 71, in render_node
            result = node.render(context)
          File "/usr/local/lib/python2.6/dist-packages/django/template/defaulttags.py", line 126, in render
            len_values = len(values)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 81, in __len__
            self._result_cache = list(self.iterator())
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 238, in iterator
            for row in self.query.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 287, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 2369, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
        TypeError: argument 1 must be a string or unicode object

    It then highlights a {% for t in tasks %} template tag as if the source of the problem were there, but it worked fine on the built-in server. The view associated with that page is really simple: it just fetches all Task objects, and the template just displays them in a table. Also, some pages get rendered OK. I don't want to fill this question with code, so if you need some more info I'd be glad to provide it. Thanks

    Read the article

  • Strange mod_rewrite problem; Website works partially

    - by Camran
    I have Ubuntu 9.10 Server... I need to get mod_rewrite working, and the mod_rewrite module IS LOADED. On my server httpd.conf is empty; instead (almost) everything is in a file called apache2.conf. I have also read that I have to change AllowOverride None to AllowOverride All in some file... My httpd.conf is empty as you know, but I have a folder called sites-enabled which contains a 000-default file. This is where I have set:

        AllowOverride All

    Now my goal, as I stated in the last question, is to turn this link:

        http://mydomain.com/ad.php?ad_id=Bmw_nice_M3_497379462

    into this:

        http://mydomain.com/Bmw_nice_M3_497379462

    So, following the answer I got in the last question, I inserted this into the .htaccess file:

        Options +FollowSymLinks
        Options +Indexes
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/ad\.php
        RewriteRule ^(.*)$ ad.php?ad_id=$1 [L]

    Now, this works (not fully) when entering the URL manually in the address bar, but my website isn't working any more for some reason. It is as if the website is locked down, and it stays that way unless I change AllowOverride back to None. Any ideas why? Another note: links inside the rewritten URL don't work properly (some images are shown, others are not)...
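    As an illustration only (not part of the post above), a common variant of such a rule leaves requests for files and directories that really exist untouched, which is one way to keep images, CSS and scripts loading while every other path is routed to ad.php:

        Options +FollowSymLinks
        RewriteEngine On
        # Skip the rewrite when the request maps to an existing file or directory
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ ad.php?ad_id=$1 [L,QSA]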

    Read the article

  • Apache won't stop starting

    - by Rob
    /usr/local/apache/bin/httpd -k start -DSSL nobody 6906 0.0 0.1 187032 5448 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6907 0.0 0.1 187032 5448 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6908 0.0 0.1 187032 5448 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6909 0.0 0.1 187032 5448 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6910 0.7 0.1 187712 7024 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6911 1.7 0.1 188216 7036 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6912 0.5 0.1 187712 7020 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6913 0.4 0.1 188216 7028 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6914 0.8 0.1 188216 7028 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6915 0.7 0.1 188216 7028 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6916 0.5 0.1 188216 7028 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6917 0.2 0.1 188216 7024 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6918 0.5 0.1 188216 7028 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6919 0.8 0.1 188216 7028 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6920 0.5 0.1 187712 7020 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6921 0.8 0.1 187712 7020 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6922 1.0 0.1 188216 7024 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6923 0.6 0.1 188216 7024 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6924 1.5 0.1 187712 7020 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6925 0.1 0.1 187712 7012 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6926 1.2 0.1 187712 7016 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6927 0.5 0.1 188216 7028 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 6928 1.2 0.1 187712 7024 ? S 13:20 0:00 /usr/local/apache/bin/httpd -k start -DSSL I have 258 of these running. I stopped httpd, and it went back down to 0. I started it back up, and it quickly went back to 258. I have never seen this before. Anyone know what's wrong? I'm running CentOS5.5 with Apache2.2 on cPanel.

    Read the article

  • Update php 5.2.0 to 5.2.4 with aptitude

    - by Kiva
    Hi guy, I would like to update my php 5 in my server. At this moment, I use php 5.2.0 so I want to update it to php 5.2.4 (not php 5.3). I tried to do this: aptitude update aptitude upgrade 63 packets were updated but not php which is always in 5.0 How can I update my php please ? Here is the output of commands asked by David in another post: aptitude search php5 p libapache-mod-php5 - server-side, HTML-embedded scripting langu i A libapache2-mod-php5 - server-side, HTML-embedded scripting langu i php5 - server-side, HTML-embedded scripting langu p php5-apache2-mod-bt - PHP bindings for mod_bt p php5-auth-pam - A PHP5 extension for PAM authentication i php5-cgi - server-side, HTML-embedded scripting langu p php5-clamavlib - PHP ClamAV Lib - ClamAV Interface for PHP5 p php5-cli - command-line interpreter for the php5 scri i A php5-common - Common files for packages built from the p i php5-curl - CURL module for php5 p php5-dev - Files for PHP5 module development i A php5-gd - GD module for php5 p php5-idn - PHP api for the IDNA library p php5-imagick - ImageMagick module for php5 p php5-imap - IMAP module for php5 p php5-interbase - interbase/firebird module for php5 p php5-json - JSON serialiser for PHP5 p php5-ldap - LDAP module for php5 p php5-mapscript - module for php5-cgi to use mapserver p php5-maxdb - PHP extension to access MaxDB databases fo i A php5-mcrypt - MCrypt module for php5 p php5-memcache - memcache extension module for PHP5 p php5-mhash - MHASH module for php5 p php5-ming - Ming module for php5 i A php5-mysql - MySQL module for php5 p php5-odbc - ODBC module for php5 p php5-pgsql - PostgreSQL module for php5 p php5-ps - ps module for PHP 5 p php5-pspell - pspell module for php5 p php5-radius - PECL radius module for PHP 5 p php5-recode - recode module for php5 p php5-snmp - SNMP module for php5 p php5-sqlite - SQLite module for php5 p php5-sqlite3 - SQLite3 module for php5 p php5-sqlrelay - SQL Relay PHP API p php5-suhosin - advanced protection module for php5 p php5-sybase - Sybase / MS SQL Server module for php5 p php5-tidy - tidy module for php5 p php5-uuid - OSSP uuid module for php5 p php5-xapian - Xapian search engine interface for PHP5 p php5-xcache - Fast, stable PHP opcode cacher p php5-xmlrpc - XML-RPC module for php5 p php5-xsl - XSL module for php5 aptitude show php5 | grep Version Version : 5.2.0-8+etch13 aptitude show php5-cgi | grep Version Version : 5.2.0-8+etch13 php5 --version -bash: php5: command not found php-cgi --version PHP 5.2.0-8+etch13 (cgi-fcgi) (built: Oct 2 2008 08:21:17) Copyright (c) 1997-2006 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2006 Zend Technologies

    Read the article

  • Ubuntu - Ruby Daemon script creates two processes - sh and ruby - PID file points at sh, not ruby

    - by Jonathan Scoles
    The PID file for a Ruby process I have running as a daemon is getting the wrong PID. It appears that running /etc/init.d/sinatra start creates two processes - sh and ruby - and the PID that ends up in the PID file is that of the sh process. This means that when I then run /etc/init.d/sinatra stop or /etc/init.d/sinatra restart, it kills sh and leaves the ruby process still running. I'd like to know a) why is my script launching two processes - sh and ruby - and not just ruby, and b) how do I fix it to launch just ruby?

    Details of the setup: I have a small Sinatra server set up on an Ubuntu server, running as a daemon. At server startup it automatically runs a script named sinatra in /etc/init.d, which launches a control script control.rb, which then runs a ruby daemon command to start the server. The script is run under the 'sinatrauser' account, which has permissions for the directories the script needs.

    Contents of /etc/init.d/sinatra:

        #!/bin/bash
        # sinatra   Startup script for Sinatra server.
        sudo -u sinatrauser ruby /var/www/sinatra/control.rb $1
        RETVAL=$?
        exit $RETVAL

    To install this script, I simply copied it to /etc/init.d/ and ran sudo update-rc.d sinatra defaults

    Contents of /var/www/sinatra/control.rb:

        require 'rubygems'
        require 'daemons'

        pwd = Dir.pwd
        Daemons.run_proc('sinatraserver.rb', {:dir_mode => :normal, :dir => "/opt/pids/sinatra"}) do
          Dir.chdir(pwd)
          exec 'ruby /var/www/sinatra/sintraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1'
        end

    Portion of output from ps -A:

        6967  ?  00:00:00 apache2
        10181 ?  00:00:00 sh    <--- PID file gets this PID
        10182 ?  00:00:02 ruby  <--- Actual ruby process running Sinatra
        12172 ?  00:00:00 sshd

    The PID file gets created in /opt/pids/sinatra/sinatraserver.rb.pid and always contains the PID of the sh instance, which is always one less than the PID of the ruby process.

    EDIT: I tried micke's solution, but it had no effect on the behavior I am seeing. This is the output from ps -A f. This output looks the same whether I use sudo -u sinatrauser ... or su sinatrauser -c ... in the service start script in /etc/init.d:

        1146 ?  S  0:00 sh -c ruby /var/www/sinatra/sinatraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1
        1147 ?  S  0:00  \_ ruby /var/www/sinatra/sinatraserver.rb

    Read the article

  • Ubuntu Server 12.04 auto installation freezes at "Kickseed running" if ks.cfg has %post scripts

    - by john206
    I'm trying to make a custom Ubuntu Server iso file. Kickstart file (ks.cfg) runs smooth when there is no %post in the file and Ubuntu installs correctly with ks configuration. Installation finishes installing base, apt, grub and It echos: Kickseed Running... and it freezes @ 0% I thought may be apt-get update doesnt work in ks file, I tried to install other apps like apache2 but no luck I have created dozen iso images and installed them in Virtual Box.I have been googling for 3 days and checked out ubuntu forums but haven't figured out the issue. I appreciate your help. This is how I made the iso image. My ks.file and txt.cfg files located in isolinux directory: root@ubuntu:/home/work mount -o loop ubuntu-12.04-amd64.iso original-iso/ rsync -a original-iso/ custom-iso/ cp ks.cfg custom-iso/isolinux/ cp txt.cfg custom-iso/isolinux/ chmod -R 777 custom-iso/ #Creating Iso image mkisofs -D -r -V “$IMAGE_NAME” -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ~/ubuntu-12.04-alternate-custom-amd64.iso custom-iso/ ks.cfg #Generated by Kickstart Configurator #platform=AMD64 or Intel EM64T #System language lang en_US #Language modules to install langsupport en_US #System keyboard keyboard us #System mouse mouse #System timezone timezone America/Los_Angeles #Root password rootpw --iscrypted somethingsomething #Initial user user ubuntu --fullname "ubuntu" --iscrypted --password somethingsomething. #Reboot after installation reboot #Use text mode install text #Install OS instead of upgrade install #Use CDROM installation media cdrom #System bootloader configuration bootloader --location=mbr #Clear the Master Boot Record zerombr yes #Partition clearing information clearpart --all --initlabel #Disk partitioning information part /boot --size 128 --fstype=ext3 --asprimary part / --size 512 --fstype=ext3 --asprimary part swap --size 512 part /tmp --size 512 --fstype=ext3 part /var --size 512 --fstype=ext3 part /usr --size 4096 --fstype=ext3 part /home --size 2048 --fstype=ext3 #System authorization infomation auth --useshadow --enablemd5 #Network information network --bootproto=dhcp --device=eth0 #Firewall configuration firewall --disabled --http --ftp --ssh #X Window System configuration information xconfig --depth=32 --resolution=1024x768 --defaultdesktop=GNOME %post apt-get update mkdir /home/user txt.cfg default autoinstall label autoinstall menu label ^Install Custom Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz quiet ks=cdrom:/isolinux/ks.cfg -- label install menu label ^Install Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet -- label cloud menu label ^Multiple server install with MAAS kernel /install/vmlinuz append modules=maas-enlist-udeb vga=788 initrd=/install/initrd.gz quiet -- label check menu label ^Check disc for defects kernel /install/vmlinuz append MENU=/bin/cdrom-checker-menu vga=788 initrd=/install/initrd.gz quiet -- label memtest menu label Test ^memory kernel /install/mt86plus label hd menu label ^Boot from first hard disk localboot 0x80
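    Purely as a debugging sketch (the log path is an assumption, not taken from the post): since the %post section is an ordinary shell script, redirecting its output to a log can show whether apt-get is hanging on the network or on a prompt, rather than Kickseed itself freezing:

        %post
        # Assumption-level debugging aid: capture everything the post-install
        # script prints so a hang is visible in the log afterwards.
        exec >> /var/log/ks-post.log 2>&1
        apt-get update
        mkdir -p /home/user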

    Read the article

  • vagrant up command very slow on OS X Lion

    - by Andy Hume
    When I run vagrant up to provision a new VM on Lion it takes an extremely long time, during which the entire Mac is very laggy and unresponsive. The output is as follows, the key point being the "notice: Finished catalog run in 754.28 seconds" > vagrant up [default] Importing base box 'lucid64'... [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.1.0 VirtualBox Version: 4.1.6 [default] Matching MAC address for NAT networking... [default] Clearing any previously set forwarded ports... [default] Forwarding ports... [default] -- ssh: 22 => 2222 (adapter 1) [default] -- web: 80 => 4567 (adapter 1) [default] Creating shared folders metadata... [default] Running any VM customizations... [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] VM booted and ready for use! [default] Mounting shared folders... [default] -- v-root: /vagrant [default] -- v-data: /var/www [default] -- manifests: /tmp/vagrant-puppet/manifests [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: /Stage[main]/Lucid64/Package[apache2]/ensure: ensure changed 'purged' to 'present' [default] [default] notice: /Stage[main]/Lucid64/File[/etc/motd]/ensure: defined content as '{md5}a25e31ba9b8489da9cd5751c447a1741' [default] [default] notice: Finished catalog run in 754.28 seconds [default] [default] err: /File[/var/lib/puppet/rrd]/ensure: change from absent to directory failed: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: change from absent to directory failed: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 2.05 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 1.36 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] >
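    A hedged side note on the repeated errors at the end of that output: "Could not find group puppet" is unrelated to the 754-second catalog run and generally goes away once the group exists inside the guest, for example:

        # run inside the VM, e.g. after `vagrant ssh` (assumption: Ubuntu lucid64 guest)
        sudo groupadd puppet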

    Read the article

  • Configuring OpenLDAP as a Active Directory Proxy

    - by vadensumbra
    We are trying to set up an Active Directory server for company-wide authentication. Some of the servers that should authenticate against the AD are placed in a DMZ, so we thought of using an LDAP server as a proxy, so that only one server in the DMZ has to connect to the LAN where the AD server is placed. With some googling it was no problem to configure slapd (see slapd.conf below), and it seemed to work when using the ldapsearch tool, so we tried to use it in an apache2 .htaccess to authenticate the user over the LDAP proxy.

    And here comes the problem: we found out the username in the AD is stored in the attribute 'sAMAccountName', so we configured it in .htaccess (see below), but the login didn't work. In the syslog we found that the filter for the ldapsearch was not '(&(objectClass=*)(sAMAccountName=authtest01))' like it should be, but '(&(objectClass=*)(?=undefined))', which we found out is slapd's way of showing that the attribute does not exist or the value is syntactically wrong for this attribute. We suspected a missing schema, found the microsoft.schema (and the .std / .ext variants of it) and tried to include them in slapd.conf, which does not work. Since we found no working schemata, we just picked out the part about sAMAccountName and built a microsoft.minimal.schema (see below) that we included. Now we get a more precise log in the syslog:

        Jun 16 13:32:04 breauthsrv01 slapd[21229]: get_ava: illegal value for attributeType sAMAccountName
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH base="ou=oraise,dc=int,dc=oraise,dc=de" scope=2 deref=3 filter="(&(objectClass=\*)(?sAMAccountName=authtest01))"
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH attr=sAMAccountName
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=

    Using our Apache .htaccess directly against the AD via LDAP works, though. Anyone got a working setup? Thanks for any help in advance.

    slapd.conf:

        allow bind_v2
        include /etc/ldap/schema/core.schema
        ...
        include /etc/ldap/schema/microsoft.minimal.schema
        ...
        backend ldap
        database ldap
        suffix "ou=xxx,dc=int,dc=xxx,dc=de"
        uri "ldap://80.156.177.161:389"
        acl-bind bindmethod=simple
            binddn="CN=authtest01,ou=GPO-Test,ou=xxx,dc=int,dc=xxx,dc=de"
            credentials=xxxxx

    .htaccess:

        AuthBasicProvider ldap
        AuthType basic
        AuthName "AuthTest"
        AuthLDAPURL "ldap://breauthsrv01.xxx.de:389/OU=xxx,DC=int,DC=xxx,DC=de?sAMAccountName?sub"
        AuthzLDAPAuthoritative On
        AuthLDAPGroupAttribute member
        AuthLDAPBindDN CN=authtest02,OU=GPO-Test,OU=xxx,DC=int,DC=xxx,DC=de
        AuthLDAPBindPassword test123
        Require valid-user

    microsoft.minimal.schema:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )
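    One hedged observation, offered only as a sketch and not a verified fix: the minimal attributetype above declares no EQUALITY matching rule, and slapd cannot evaluate an equality filter on an attribute that lacks one, which is consistent with the "(?sAMAccountName=authtest01)" form in the log. A fuller declaration might look like:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            EQUALITY caseIgnoreMatch
            SUBSTR caseIgnoreSubstringsMatch
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )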

    Read the article

  • Ubuntu Server 10.04 Heavy Network Traffic causes disconnect

    - by K Vaughan
    I'm currently running a headless Ubuntu 10.04 server. Installed is the LAMP stack, Joomla, Virtualbox, phpvirtualbox, webmin and proFTP.. It resolves the IP address so I can access it remotely (either the apache2 webserver or the FTP) using DDClient. Any packages installed have been installed using apt-get. Webmin, although discouraged in Ubuntu Server, is used mostly to administer the webserver aspect. This issue also appeared when I was using Ubuntu Server 10.10. After periods of heavy network traffic, whether local or remote, the connect drops. I'm talking specifically about the transfer of files via FTP, SCP or Samba (the latter of which I seldom use). There is no response to ping or ssh. I can't FTP to the server nor can I load the website. There are times when the server has been on for a few days and everything runs fine because I haven't accessed it much, if at all (thus not much network traffic). I've gone through a few hardware changes although I don't believe this has cause the issue: this has been happening long before I made any changes. At first I thought it was my ISP-provided router blocking traffic because of some kind of misconfiguration (perhaps assuming it was some kind of DoS attack). I've changed routers and still found no success. I've checked syslog, dmesg and kern.log for warnings but have uncovered none. I've ran memtest via the GRUB2 menu at boot and once it turned up 4 errors. I ran again with individual sticks of RAM in various slots and everything turned up fine. I've looked through the BIOS settings and everything looks fine. I've tried unplugging unnecessary pieces of hardware (other internal hard drives, CD drives, floppy, PCI cards, etc). Any help or tips on how I can even begin to troubleshoot this would be very much appreciated. Please note that i've only started playing with servers as a hobby so my knowledge wouldn't be the most refined. I'm comfortable with command line and have the initiative to know how to look up something I can't do. Unfortunately I can't seem to find any issues like this. Additionally: If a solution can't be found some assistance to write a script that will cause the server to reboot automatically if, after x minutes, it gets no response to pinging somewhere like google. Admittedly that's not the cleanest solution should my internet end up going down but I can't think of what else to do.
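    On the closing request, a minimal sketch of such a watchdog script (the ping target, interval and log message are assumptions; it would run from root's crontab every few minutes):

        #!/bin/sh
        # Reboot if three pings in a row get no reply within 5 seconds each.
        if ! ping -c 3 -W 5 8.8.8.8 > /dev/null 2>&1; then
            logger "connectivity watchdog: no ping reply, rebooting"
            /sbin/reboot
        fi

    As the post already notes, this will also reboot the box whenever the upstream connection itself is down, so it is a blunt instrument rather than a fix.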

    Read the article

  • MySQL Extremely High Disk Activity (Read Operations)

    - by Jake Schoermer
    I have 1GB Linode VPS with a standard LAMP stack. Apache is tuned fine but for some reason MySQL's disk usage is high. This is causing really slow site load times. RAM and CPU usage are fine. Can anyone give me any pointers on tuning mysql's disk performance? I'm using InnoDB. iotop output is below.

        Total DISK READ: 38.50 M/s | Total DISK WRITE: 27.20 K/s
          TID  PRIO  USER      DISK READ>   DISK WRITE  SWAPIN    IO       COMMAND
         9808  be/4  mysql      22.40 M/s     0.00 B/s  0.00 %  63.75 %  mysqld
        10045  be/4  mysql       2.06 M/s     0.00 B/s  0.00 %  26.65 %  mysqld
         9987  be/4  mysql    1694.38 K/s     0.00 B/s  0.00 %  18.33 %  mysqld
        10015  be/4  mysql    1554.47 K/s     0.00 B/s  0.00 %  12.71 %  mysqld
        10019  be/4  mysql    1461.21 K/s     0.00 B/s  0.00 %   5.58 %  mysqld
         9839  be/4  mysql    1383.48 K/s     0.00 B/s  0.00 %  25.69 %  mysqld
        10031  be/4  mysql    1243.58 K/s     0.00 B/s  0.00 %   5.68 %  mysqld
        10023  be/4  mysql    1057.04 K/s     0.00 B/s  0.00 %   2.02 %  mysqld
        10020  be/4  mysql    1025.95 K/s     0.00 B/s  0.00 %   7.05 %  mysqld
        10001  be/4  mysql     808.33 K/s   683.97 K/s  0.00 %   1.16 %  mysqld
        10025  be/4  mysql     746.15 K/s     0.00 B/s  0.00 %   3.28 %  mysqld
        10043  be/4  mysql     715.06 K/s     0.00 B/s  0.00 %   0.48 %  mysqld
        10044  be/4  mysql     672.31 K/s     0.00 B/s  0.00 %   5.25 %  mysqld
        10034  be/4  mysql     668.42 K/s  1989.73 K/s  0.00 %   5.31 %  mysqld
         9985  be/4  mysql     450.80 K/s   124.36 K/s  0.00 %   8.83 %  mysqld
         9989  be/4  mysql     357.53 K/s     0.00 B/s  0.00 %   5.21 %  mysqld
        10033  be/4  mysql     186.54 K/s     0.00 B/s  0.00 %   1.59 %  mysqld
        10021  be/4  mysql     155.45 K/s   435.25 K/s  0.00 %   1.23 %  mysqld
        10007  be/4  mysql     124.36 K/s     0.00 B/s  0.00 %   0.53 %  mysqld
         9763  be/4  www-data   38.86 K/s     0.00 B/s  0.00 %   4.56 %  apache2 -k start
        10027  be/4  mysql      31.09 K/s     0.00 B/s  0.00 %   4.24 %  mysqld
            1  be/4  root        0.00 B/s     0.00 B/s  0.00 %   0.00 %  init
            2  be/4  root        0.00 B/s     0.00 B/s  0.00 %   0.00 %  [kthreadd]
            3  be/4  root        0.00 B/s     0.00 B/s  0.00 %   0.00 %  [ksoftirqd/0]
            4  be/4  root        0.00 B/s     0.00 B/s  0.00 %   0.00 %  [kworker/0:0]
            5  be/4  root        0.00 B/s     0.00 B/s  0.00 %   0.00 %  [kworker/u:0]
            6  rt/4  root        0.00 B/s     0.00 B/s  0.00 %   0.00 %  [migration/0]
            7  rt/4  root        0.00 B/s     0.00 B/s  0.00 %   0.00 %  [migration/1]
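    As a rough, hedged sketch (the value is an assumption for a 1GB VPS, not a tuned recommendation): sustained InnoDB read traffic of this size usually points at a buffer pool too small to hold the working set, so the first thing worth checking in my.cnf is:

        [mysqld]
        # MySQL's default buffer pool is small (8M-128M depending on version);
        # a mostly-MySQL 1GB box can usually dedicate a few hundred MB to it.
        innodb_buffer_pool_size = 256M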

    Read the article

  • Having trouble connecting Magento to an external Windows database server using Windows Azure

    - by Kevin H
    "I tried to make this easy to read through" I am using Ubuntu 12.04 LTS for Magento and installed these commands onto the system: sudo apt-get install apache2 sudo apt-get install php5 libapache2-mod-php5 sudo apt-get install php5-mysql sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common sudo apt-get install php5-gd I used Windows Server 2008 R2 August 2012 for Mysql Server For a reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/ When the server was setup, I added an empty disk to it Then, I added endpoints 3306 Next I accessed the server remotely After that, I formatted the empty disk and was inserted as F: Next I downloaded Mysql from http://*.mysql.com version Windows (x86, 64-bit), MSI Installer 5.5.28 In the installation process, I used these settings: Typical Setup - Clicked Next, install, next Chose Detailed Configuration - Clicked next Chose Dedicated MySQL Server Machine - Clicked Next Chose Transactional Database Only - Clicked Next Chose the "F:" Drive - Clicked Next Chose Online Transactional Processing (OLTP) - Clicked Next For Networking Options, I checkmarked 'Enable TCP/IP Networking" 'Add firewall exception for this port' 'Enable Strict Mode' - Clicked Next Chose Standard Character Set - Clicked Next For Windows Options, I checkedmarked 'Install as Window Service" 'Launch the MySQL Server automatically' 'Include Bin Directory in Windows PATH - Clicked Next For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next Finally clicked Execute and Finish These are the Firewall Setting that I set I clicked inbound rules Properties Scope Allow IP Address and used the internal Address for Magento Server Clicked Apply and exited Next, I opened up MySQL 5.x Command Line Client Entered Root Password Then entered these commands mysql create database magento; mysql Create user magentouser identified by 'password'; mysql Grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser mysql exit Finally, I opened up the Magento Downloader Magento validation has approved all PHP version is right. Your version is 5.3.10-1ubuntu3.4. PHP Extension curl is loaded PHP Extension dom is loaded PHP Extension gd is loaded PHP Extension hash is loaded PHP Extension iconv is loaded PHP Extension mcrypt is loaded PHP Extension pcre is loaded PHP Extension pdo is loaded PHP Extension pdo_mysql is loaded PHP Extension simplexml is loaded These are all installed on Magento Server For the Database Connection, I used: The Database server only has MySQL 5.5 Server installed on it Host - Internal IP address User Name - The User I created when setting up database Password - The Password I created when setting up database For the password, I did some research and found out that Magento only accepts alphanumeric, so I went and set it up again and used only alphanumeric for the User password Now, I am still getting Accessed denied for database Connection. Also, I have tryed to setup mysql on independant Linux Server but kept getting errors. When, I found the solution. Wouldn't work, so I decided to try Windows. These is the questions, I have been asking and researching to debug this issue Is it because I am using Linux for magento and Windows for Database. 
I have had no luck in finding a reason why this wouldn't work There must be something, I am missing I also researched the difference between linux sql databases and windows sql databases but have not come to conclusion, if installing Mysql on windows would make a difference in syntax and coding. I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.

    Read the article

  • Why can I view my site over a 3G connection but not through my wifi?

    - by Jonathan
    So, I am sitting in my office with four computers on the same network and internet connection. Two of the computers can visit this particular website. The other two get a "Google Chrome could not find..." message; I have tried FF and IE as well, with the same problem. I can view the site 90% of the time on the two working computers, although the site seems slow and sometimes I also get the same errors as on the other two computers. I have flushed the DNS, reset the router, and tested the site on other people's computers with success. Is this likely to be a site issue, an ISP issue, or a hosting issue? Any advice is greatly appreciated.

    Here is the ping from the working machine:

        C:\Users\Jon>ping www.balihaicruises.com

        Pinging www.balihaicruises.com [208.113.173.102] with 32 bytes of data:
        Reply from 208.113.173.102: bytes=32 time=331ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=327ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=326ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=329ms TTL=47

        Ping statistics for 208.113.173.102:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 326ms, Maximum = 331ms, Average = 328ms

    Traceroute:

        Tracing route to www.balihaicruises.com [208.113.173.102] over a maximum of 30 hops:

          1     1 ms    17 ms     3 ms  192.168.1.1
          2    42 ms    37 ms    36 ms  180.254.224.1
          3    39 ms    47 ms    40 ms  180.252.1.69
          4    36 ms   616 ms    57 ms  61.94.115.221
          5    84 ms    76 ms    80 ms  180.240.191.98
          6    73 ms    80 ms    72 ms  180.240.191.97
          7   157 ms   143 ms   116 ms  180.240.190.82
          8   115 ms   113 ms   120 ms  ae1-123.hkg11.ip4.tinet.net [183.182.80.93]
          9   331 ms   332 ms   335 ms  xe-3-2-1.was14.ip4.tinet.net [89.149.184.30]
         10   327 ms   330 ms   331 ms  internap-gw.ip4.tinet.net [77.67.69.254]
         11   437 ms   415 ms   350 ms  border10.pc2-bbnet2.wdc002.pnap.net [216.52.127.73]
         12   322 ms   823 ms   398 ms  dreamhost-2.border10.wdc002.pnap.net [216.52.125.74]
         13   328 ms   336 ms   326 ms  ip-208-113-156-4.dreamhost.com [208.113.156.4]
         14   326 ms   328 ms   336 ms  ip-208-113-156-14.dreamhost.com [208.113.156.14]
         15   327 ms   331 ms   333 ms  apache2-udder.crisp.dreamhost.com [208.113.173.102]

    And then for the machine that doesn't work:

        C:\Users\Microsoft>ping www.balihaicruises.com
        Ping request could not find host www.balihaicruises.com. Please check the name and try again.

        C:\Users\Microsoft>tracert www.balihaicruises.com
        Unable to resolve target system name www.balihaicruises.com.
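    Since the failing machine cannot even resolve the host name, a short diagnostic pass like the following (standard Windows commands; the public resolver is just an example) helps separate a DNS problem from a site or hosting problem:

        rem Compare which DNS servers each machine is using
        ipconfig /all
        rem Resolve with the default resolver, then ask a public resolver directly
        nslookup www.balihaicruises.com
        nslookup www.balihaicruises.com 8.8.8.8
        rem Clear any cached negative answers
        ipconfig /flushdns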

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart? [migrated]

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. # /etc/init.d/ # /etc/init/ acpid acpid.conf apache2 --------------------------- apparmor --------------------------- apport apport.conf atd atd.conf bind9 --------------------------- bootlogd --------------------------- cgroup-lite cgroup-lite.conf --------------------------- console.conf console-setup console-setup.conf --------------------------- container-detect.conf --------------------------- control-alt-delete.conf cron cron.conf dbus dbus.conf dmesg dmesg.conf dns-clean --------------------------- friendly-recovery --------------------------- --------------------------- failsafe.conf --------------------------- flush-early-job-log.conf --------------------------- friendly-recovery.conf grub-common --------------------------- halt --------------------------- hostname hostname.conf hwclock hwclock.conf hwclock-save hwclock-save.conf irqbalance irqbalance.conf killprocs --------------------------- lxc lxc.conf lxc-net lxc-net.conf module-init-tools module-init-tools.conf --------------------------- mountall.conf --------------------------- mountall-net.conf --------------------------- mountall-reboot.conf --------------------------- mountall-shell.conf --------------------------- mounted-debugfs.conf --------------------------- mounted-dev.conf --------------------------- mounted-proc.conf --------------------------- mounted-run.conf --------------------------- mounted-tmp.conf --------------------------- mounted-var.conf networking networking.conf network-interface network-interface.conf network-interface-container network-interface-container.conf network-interface-security network-interface-security.conf newrelic-sysmond --------------------------- ondemand --------------------------- plymouth plymouth.conf plymouth-log plymouth-log.conf plymouth-splash plymouth-splash.conf plymouth-stop plymouth-stop.conf plymouth-upstart-bridge plymouth-upstart-bridge.conf postgresql --------------------------- pppd-dns --------------------------- procps procps.conf rc rc.conf rc.local --------------------------- rcS rcS.conf --------------------------- rc-sysinit.conf reboot --------------------------- resolvconf resolvconf.conf rsync --------------------------- rsyslog rsyslog.conf screen-cleanup screen-cleanup.conf sendsigs --------------------------- setvtrgb setvtrgb.conf --------------------------- shutdown.conf single --------------------------- skeleton --------------------------- ssh ssh.conf stop-bootlogd --------------------------- stop-bootlogd-single --------------------------- sudo --------------------------- --------------------------- tty1.conf --------------------------- tty2.conf --------------------------- tty3.conf --------------------------- tty4.conf --------------------------- tty5.conf --------------------------- tty6.conf udev udev.conf udev-fallback-graphics udev-fallback-graphics.conf udev-finish udev-finish.conf udevmonitor udevmonitor.conf udevtrigger udevtrigger.conf ufw ufw.conf umountfs --------------------------- umountnfs.sh --------------------------- umountroot --------------------------- --------------------------- upstart-socket-bridge.conf --------------------------- upstart-udev-bridge.conf urandom --------------------------- --------------------------- ureadahead.conf --------------------------- ureadahead-other.conf 
--------------------------- wait-for-state.conf whoopsie whoopsie.conf To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which are preventing some services from converting? Or is it something else entirely?

    Read the article

  • SSL support with Apache and Proxytunnel

    - by whuppy
    I'm inside a strict corporate environment. HTTPS traffic goes out via an internal proxy (for this example it's 10.10.04.33:8443) that's smart enough to block ssh'ing directly to ssh.glakspod.org:443. I can get out via proxytunnel. I set up an apache2 VirtualHost at ssh.glakspod.org:443 thus:

        ServerAdmin [email protected]
        ServerName ssh.glakspod.org

        <!-- Proxy Section -->
        <!-- Used in conjunction with ProxyTunnel -->
        <!-- proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port -->
        ProxyRequests on
        ProxyVia on
        AllowCONNECT 22
        <Proxy *>
            Order deny,allow
            Deny from all
            Allow from 74.101
        </Proxy>

    So far so good: I hit the Apache proxy with a CONNECT, then PuTTY and my ssh server shake hands and I'm off to the races. There are, however, two problems with this setup:

    1. The internal proxy server can sniff my CONNECT request and also see that an SSH handshake is taking place. I want the entire connection between my desktop and ssh.glakspod.org:443 to look like HTTPS traffic no matter how closely the internal proxy inspects it.

    2. I can't get the VirtualHost to be a regular HTTPS site while proxying. I'd like the proxy to coexist with something like this:

        SSLEngine on
        SSLProxyEngine on
        SSLCertificateFile /path/to/ca/samapache.crt
        SSLCertificateKeyFile /path/to/ca/samapache.key
        SSLCACertificateFile /path/to/ca/ca.crt

        DocumentRoot /mnt/wallabee/www/html
        <Directory /mnt/wallabee/www/html/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        <!-- Need a valid client cert to get into the sanctum -->
        <Directory /mnt/wallabee/www/html/sanctum>
            SSLVerifyClient require
            SSLOptions +FakeBasicAuth +ExportCertData
            SSLVerifyDepth 1
        </Directory>

    So my question is: how do I enable SSL support on the ssh.glakspod.org:443 VirtualHost in a way that will work with ProxyTunnel? I've tried various combinations of proxytunnel's -e, -E, and -X flags without any luck. The only lead I've found is Apache Bug No. 29744, but I haven't been able to find a patch that will install cleanly on Ubuntu Jaunty's Apache version 2.2.11-2ubuntu2.6. Thanks in advance.

    Read the article

  • Reach self hosted server from LAN

    - by Freefri
    I have a self-hosted server with Apache2 serving the domain example.com. I also have some virtual hosts: www.example.com, cloud.example.com, etc. This server is in my LAN, and when I try to access my server from within the LAN through www.example.com I get my router's configuration page. From outside the LAN, www.example.com and cloud.example.com work properly. From inside the LAN, 192.168.1.33 (the server's internal IP) shows the default web page (www.example.com), but I cannot reach cloud.example.com.

    I also have a LAN name server on 192.168.1.33 with bind9, and I set up my gateway 192.168.1.1 with my LAN NS as primary NS. I solved this problem by creating a new DNS zone in the NS. These are my config files:

        ;ZONE-1
        $ORIGIN .
        $TTL 86400      ; 1 day
        home.lan. IN SOA server.home.lan. hostmaster.home.lan. (
            2008080901  ; serial
            8H          ; refresh
            4H          ; retry
            4W          ; expire
            1D          ; minimum
        )
        home.lan. IN NS server.home.lan.
        $ORIGIN home.lan.
        ; Set the address for localhost.home.lan
        localhost IN A 127.0.0.1
        router    IN A 192.168.1.1
        server    IN A 192.168.1.33
        mypc      IN A 192.168.1.132

        ;ZONE-2
        $ORIGIN .
        $TTL 86400      ; 1 day
        example.com. IN SOA www.example.com hostmaster.home.lan. (
            2008080902  ; serial
            8H          ; refresh
            4H          ; retry
            4W          ; expire
            1D          ; minimum
        )
        example.com. IN NS 192.168.1.33
        $ORIGIN examle.com.
        localhost IN A 127.0.0.1
        www       IN A 192.168.1.33
        cloud     IN A 192.168.1.33

    My DNS and my names are working properly now. My questions are: What do you think about my solution? Can I change the A records to a CNAME pointing to server.home.lan (this is the server's domain name in the LAN)? And how can I set a default IP for any whatever.example.com?
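    To illustrate the two questions at the end (a sketch only, not a tested zone): a CNAME can point a host at the server's LAN name, and a wildcard record catches any other name under the domain:

        $ORIGIN example.com.
        www    IN CNAME server.home.lan.
        cloud  IN CNAME server.home.lan.
        ; wildcard: any other name under example.com resolves to the server
        *      IN A     192.168.1.33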

    Read the article

  • System architecture: simple approach for setting up background tasks behind a web application -- wil

    - by Tim Molendijk
    I have a Django web application and I have some tasks that should operate (or actually: be initiated) on the background. The application is deployed as follows: apache2-mpm-worker; mod_wsgi in daemon mode (1 process, 15 threads). The background tasks have the following characteristics: they need to operate in a regular interval (every 5 minutes or so); they require the application context (i.e. the application packages need to be available in memory); they do not need any input other than database access, in order to perform some not-so-heavy tasks such as sending out e-mail and updating the state of the database. Now I was thinking that the most simple approach to this problem would be simply to piggyback on the existing application process (as spawned by mod_wsgi). By implementing the task as part of the application and providing an HTTP interface for it, I would prevent the overhead of another process that is holding all of the application into memory. A simple cronjob can be setup that sends a request to this HTTP interface every 5 minutes and that would be it. Since the application process provides 15 threads and the tasks are quite lightweight and only running every 5 minutes, I figure they would not be hindering the performance of the web application's user-facing operations. Yet... I have done some online research and I have seen nobody advocating this approach. Many articles suggest a significantly more complex approach based on a full-blown messaging component (such as Celery, which uses RabbitMQ). Although that's sexy, it sounds like overkill to me. Some articles suggest setting up a cronjob that executes a script which performs the tasks. But that doesn't feel very attractive either, as it results in creating a new process that loads the entire application into memory, performs some tiny task, and destroys the process again. And this is repeated every 5 minutes. Does not sound like an elegant solution. So, I'm looking for some feedback on my suggested approach as described in the paragraph before the preceeding paragraph. Is my reasoning correct? Am I overlooking (potential) problems? What about my assumption that application's performance will not be impeded?
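    A minimal sketch of the cron side of the approach described above (the URL, path and log file are assumptions): one crontab entry that hits the task endpoint every five minutes and records the outcome:

        # m h dom mon dow  command
        */5 * * * * curl -fsS http://127.0.0.1/tasks/run/ >> /var/log/myapp-tasks.log 2>&1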

    Read the article

  • Wrong sessionID being used in callback, but only on one particular computer

    - by user210119
    I am writing a Python/Django web application that uses OAuth (for the TwitterAPI, not that it should matter). I am storing a session ID in my login function, and then after using OAuth to get the user's token, I try to retrieve the sessionID in my callback function. The callback function then always fails(throws an exception) because it can't find the OAuth token in the session. Through the debugger, I am able to determine that the session ID that the server is using is incorrect - it does not match the session ID that was stored in the login function. It's therefore unsurprising that the Oauth tokens were not there. The session that appears in the callback was the same one each time (until I tried deleting it - see "things I've tried below"), and it started out as an old session, with some data in it that is from a different django app running on the same server that I hadn't touched in a couple weeks. Here's the kicker: everything I described is an issue only on our production server, and only when connecting to it from my computer. Let me clarify: this only happens with my particular laptop. I can connect to the app just fine from someone else's computer. Other people cannot connect with their accounts on my computer. Furthmore, I can connect just fine to the app when it is running on my localhost using the built-in django webserver, just not to the production server. My setup: my server and local box are running= Django 1.2.0 and Python 2.6.5. My local box is running Snow Leopard and the Django webserver, the server is running Ubuntu, Apache2, and mod-wsgi. For sessions, I am using Django's default session backend (DB). Things I have tried, all to no avail: logging in with a different account, including new accounts that have never OAuthed to this app before Clearing cookies, using incognito mode, using a different web browser on my same computer. Each time, upon inspecting my cookies, the sessionID matched the sessionID in the login function and was different from the sessionID in the callback. deleting the session in the database that appears in the callback function, (the one that appeared to be old data). The callback function still fails, and the sessionID it appears to be using is now a new one using a different session backend (DB-cache, flat file, etc...) restarting the server, my computer, etc. My first question on StackOverflow, so bear with me if I didn't quite follow local conventions. I am just at a loss as to what to even look for - what are the things that could possibly be causing sessions to not work on my particular computer, and (so far!) only my particular computer?

    Read the article

  • jquery .get/.post not working on ie 7 or 8, works fine in ff

    - by Samutz
    I have basically this on a page:

        <script type="text/javascript">
        function refresh_context() {
            $("#ajax-context").html("Searching...");
            $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function(xml) {
                $("#ajax-context").html($("display", xml).text());
                $("#context").val($("context", xml).text());
            }, 'xml');
        }
        $(document).ready(function() {
            $("#username").blur(refresh_context);
        });
        </script>
        <input type="text" name="username" id="username" maxlength="255" value="" />
        <input type="hidden" name="context" id="context" value=""/>
        <div id="ajax-context"></div>

    What it should do (and does fine on Firefox) is: when you type a username into the #username field, it runs /ajax/ldap_search.php?cn=$username, which searches our company's LDAP for the username and returns its raw context and a formatted version of the context, like this:

        <result>
            <display>Staff -&gt; Accounting -&gt; John Smith</display>
            <context>cn=jsmith,ou=Accounting,ou=Staff,ou=Users,o=MyOrg</context>
        </result>

    The formatted version (display) goes to the div #ajax-context and the context goes to the hidden input #context. (Also, the -> are actually "& g t ;" without the spaces.)

    However, on IE the div stays stuck on "Searching..." and the hidden input value stays blank. I've tried both .get and .post and neither works. I'm sure it's failing on the .get, because if I try this, I don't even get the alert:

        $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function() { alert("Check"); });

    Also, IE doesn't give me any script errors.

    Edit: Added "$(document).ready(function() {"; the .blur was already in my code, but I forgot to include that in my post.

    Edit 2: The request is being sent and apache2 is receiving it:

        10.135.128.96 - - [01/May/2009:10:04:27 -0500] "GET /ajax/ldap_search.php?cn=i_typed_this_in_IE HTTP/1.1" 200 69

    Read the article

  • Simple jquery Fade Slideshow fails on certain browsers

    - by cmay
    So I have a simple slideshow on my website which just shows one image then shows another until it reaches the end or the user hits skip in which case it shows index.html. The site is served on apache2 with Django. The slideshow works perfectly on most machines, but certain machines it shows some images twice and other images not at all and the timing is off. I am using jquery 1.4.3. Below is the section of html where I push the image urls from the database to the javascript {% for image in latest_images %} {% thumbnail image.image_file "800x600" crop="center" as im %} <script>FadeImageList.push("{{im.url}}");</script> {% endthumbnail %} {% endfor %} Below is the full javascript file var FadeImageList = []; var fadeDuration = 2000; var fadeImgID = '#slideShow'; var homePageID = '#homePage'; var menuID = '#menu'; var skipFlag = 0; $(document).ready(function(){ $(homePageID).fadeOut(50); PlaySlideshow(FadeImageList); }); var PlaySlideshow = function(FadeImageList){ var newImgSrc = FadeImageList.shift(); $('#skip').click(function(){$('#loader').show();skipFlag = 1;}); if(((typeof(newImgSrc) !== "string") || (skipFlag === 1))){ EndSlideShow(); return; } else{ $(fadeImgID).fadeOut(fadeDuration,function(){ $(fadeImgID).attr('src', newImgSrc); $(fadeImgID).fadeIn(fadeDuration,function(){ PlaySlideshow(FadeImageList); }); }); } }; var EndSlideShow = function(fadeSettings){ $(fadeImgID).fadeOut(400,function(){ $(homePageID).fadeIn(400); $("#skip").fadeOut(400); $('#loader').hide(); }); }; The strange thing is I've had it work and fail on identically version numbered browsers on the same os but on different machines. It consistently either works or fails on a machine. I've had it fail in ie 7,8 firefox 3.6.3 and chrome. I've also had it succeed in ie6,7,8 firefox 3.6.3,3.4.2,3.1 and chrome.

    Read the article

  • Ubuntu 10.10 Ad-Hoc Setup (from Wireless Router, to Ubuntu Server/Desktop to Wireless Router)

    - by user60375
    Okay, so I know there are different approaches for this, but I will explain my story briefly before getting to the technical stuff. My fiancée and I are going through some financial issues (as I assume a lot of us are). We ended up having to move from our house and stay with some friends/family for 6 months, just to get ourselves caught up. (Medical bills, among other issues,etc). So this is where it gets fun. At our friends house, we are staying in the loft setup which is not near the cable modem and wireless router. I have a "hand-crafted" media center running XBMC, an Ubuntu 10.10 Server/Desktop (multi-purpose, very powerful and tons drive space), two working laptops, a between the two of us we have multiple wireless devices/phones. Now our friends Wireless router doesn't have any options for assigning IP addresses, but my router does. My current setup is: Friends Cable Modem -- Friend's Wireless Router -- Ubuntu 10.10 Server -- My Wireless Router (local-link from Friend's wireless (incoming) to sharing connection on ETH0 (outgoing)) -- to all devices. (Wireless Modem, Ubuntu Server that share's it's wireless incoming connection to the ethernet port my Wireless router share's with the rest of the devices). I setup my router to use default settings from my friend's router, using Google's DNS on my router (disabled DNS setup on Ubuntu Server), everything is assigned nicely and runs smooth. My Ubuntu server was given the address 10.42.43.1 (assuming standard from Network-Manager). (On the Ubuntu machine that shares to my wireless router; I have some server apps installed, but mainly just use Samba/NFS/Tangerine action. My problem/goal is that every device has no problem of accessing the internet from my router, the media-center has an assigned ip address, all services from all devices (ZeroConf, Avanhi, Bonjour, GIT, SSH, FTP, Apache2, etc) all work correctly except from my Ubuntu Server (which serves the wireless connection to ETH0 to another Wireless Router). The Ubuntu 10.10 Server/Desktop is not broadcasting anything (the Zeroconf Service Discovery 0.4 Gnome Applet shows the services from the Ubuntu server but no other computers can see them). I can access it from my Media-Center (Running Xbuntu 10.04) if I direct it to 10.42.43.1, no problem. But I cannot access Tangerine (Daapd) and the Samba shares do not show up on any computers for 10.42.43.1 (not in the WORKGROUP which Samba is setup simple and default but I can direct computers to that address and the shares will add except on a damn Windows 7 parition). Is this an issue with how I have my router setup and possible the gateway? An issue with Network-Manager? And issue with my Ubuntu Server/Desktop? I know there is a lot to that, but it's simpler than I probably have explained? Any help would be appreciated. If you need more details, I can provide them. If there is a better way of my attempting this home-network, please let me know. Thanks in advance for the help.

    Read the article

  • /etc/rc.local not being run on Ubuntu Desktop Install

    - by loosecannon
    I have been trying to get Sphinx to run at boot, so I added some lines to /etc/rc.local, but nothing happens when I start up. If I run it manually it works, however: /etc/init.d/rc.local start works fine, as does running /etc/rc.local directly. It's listed in the default runlevel and is executable, but it does not work. I am considering writing a separate init.d script to do the same thing, but that's a lot of work for a simple task.
    dumbledore:/etc/init.d# ls -l rc*
    -rwxr-xr-x 1 root root 8863 2009-09-07 13:58 rc
    -rwxr-xr-x 1 root root 801 2009-09-07 13:58 rc.local
    -rwxr-xr-x 1 root root 117 2009-09-07 13:58 rcS
    dumbledore:/etc/init.d# ls /etc/rc.local -l
    -rwxr-xr-x 1 root root 491 2011-05-14 16:13 /etc/rc.local
    dumbledore:/etc/init.d# runlevel
    N 2
    dumbledore:/etc/init.d# ls /etc/rc2.d/ -l
    total 4
    lrwxrwxrwx 1 root root 18 2011-04-22 18:53 K08vmware -> /etc/init.d/vmware
    -rw-r--r-- 1 root root 677 2011-03-28 15:10 README
    lrwxrwxrwx 1 root root 18 2011-04-22 18:53 S19vmware -> /etc/init.d/vmware
    lrwxrwxrwx 1 root root 18 2011-05-15 14:09 S20ddclient -> ../init.d/ddclient
    lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S20fancontrol -> ../init.d/fancontrol
    lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S20kerneloops -> ../init.d/kerneloops
    lrwxrwxrwx 1 root root 27 2011-03-10 18:00 S20speech-dispatcher -> ../init.d/speech-dispatcher
    lrwxrwxrwx 1 root root 19 2011-03-10 18:00 S25bluetooth -> ../init.d/bluetooth
    lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S50pulseaudio -> ../init.d/pulseaudio
    lrwxrwxrwx 1 root root 15 2011-03-10 18:00 S50rsync -> ../init.d/rsync
    lrwxrwxrwx 1 root root 15 2011-03-10 18:00 S50saned -> ../init.d/saned
    lrwxrwxrwx 1 root root 19 2011-03-10 18:00 S70dns-clean -> ../init.d/dns-clean
    lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S70pppd-dns -> ../init.d/pppd-dns
    lrwxrwxrwx 1 root root 14 2011-05-07 11:22 S75sudo -> ../init.d/sudo
    lrwxrwxrwx 1 root root 24 2011-03-10 18:00 S90binfmt-support -> ../init.d/binfmt-support
    lrwxrwxrwx 1 root root 17 2011-05-12 21:18 S91apache2 -> ../init.d/apache2
    lrwxrwxrwx 1 root root 22 2011-03-10 18:00 S99acpi-support -> ../init.d/acpi-support
    lrwxrwxrwx 1 root root 21 2011-03-10 18:00 S99grub-common -> ../init.d/grub-common
    lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S99ondemand -> ../init.d/ondemand
    lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S99rc.local -> ../init.d/rc.local
    dumbledore:/etc/init.d# cat /etc/rc.local
    #!/bin/sh -e
    #
    # rc.local
    #
    # This script is executed at the end of each multiuser runlevel.
    # Make sure that the script will "exit 0" on success or any other
    # value on error.
    #
    # In order to enable or disable this script just change the execution
    # bits.
    #
    # By default this script does nothing.
    # Start sphinx daemon for rails app on startup
    # Added 2011-05-13
    # Cannon Matthews
    cd /var/www/extemp
    /usr/bin/rake ts:config
    /usr/bin/rake ts:start
    touch ./tmp/ohyeah
    cd -
    exit 0
    dumbledore:/etc/init.d# cat /etc/init.d/rc.local
    #! /bin/sh
    ### BEGIN INIT INFO
    # Provides: rc.local
    # Required-Start: $remote_fs $syslog $all
    # Required-Stop:
    # Default-Start: 2 3 4 5
    # Default-Stop:
    # Short-Description: Run /etc/rc.local if it exist
    ### END INIT INFO
    PATH=/sbin:/usr/sbin:/bin:/usr/bin
    . /lib/init/vars.sh
    . /lib/lsb/init-functions
    do_start() {
        if [ -x /etc/rc.local ]; then
            [ "$VERBOSE" != no ] && log_begin_msg "Running local boot scripts (/etc/rc.local)"
            /etc/rc.local
            ES=$?
            [ "$VERBOSE" != no ] && log_end_msg $ES
            return $ES
        fi
    }
    case "$1" in
        start)
            do_start
            ;;
        restart|reload|force-reload)
            echo "Error: argument '$1' not supported" >&2
            exit 3
            ;;
        stop)
            ;;
        *)
            echo "Usage: $0 start|stop" >&2
            exit 3
            ;;
    esac
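
    When rc.local works by hand but does nothing at boot, the difference is almost always the environment: at boot there is no login shell, no HOME, a minimal PATH, and the script runs as root rather than the application's owner, and with the -e in "#!/bin/sh -e" the first failing rake task aborts the rest silently. Below is a sketch of a more diagnosable /etc/rc.local; the application path is taken from the question, while the "deploy" user, the Rails environment, and the log location are assumptions to adjust:

    #!/bin/sh -e
    # Log everything so a boot-time failure leaves evidence in /var/log.
    {
        echo "rc.local: starting Sphinx at $(date)"
        cd /var/www/extemp
        # Be explicit about the user and environment; nothing from a login shell exists here.
        su -s /bin/sh -c "RAILS_ENV=production /usr/bin/rake ts:config ts:start" deploy
        touch ./tmp/ohyeah
    } >> /var/log/rc.local-sphinx.log 2>&1 || true
    exit 0

    If the log stays empty, rc.local is not being run at all; if it shows rake failing, the error message usually points at PATH, RVM, or gem ownership.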

    Read the article

  • Hosting a subversion working copy in a remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I have encountered. First of all: what's this about? I am currently trying to set up a distributed working environment for developing a web page. My plan was to set up an SVN repository for version control, a live server where the actual live page is hosted, and a development server where I can work on the page. To ease things I intended not to keep a local copy of the project on my disk, but to work directly on the files that the development server hosts. For that I set up a WebDAV directory under devserver.com/workspace that actually mapped to the files served under devserver.com/. So I could connect to devserver.com/workspace, change something, and view the results live at devserver.com/. So far this worked perfectly.
    The next step was to create an SVN repository to take care of version control. I intended to be able to check in to the repository from my development server and, at any time, deploy any revision from the SVN to the live server with a small shell script that checks out a copy of that revision into the live server directories. The second part, checking out onto the live server, also worked perfectly. The first part, though, is where problems arose. My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows' built-in WebDAV support, which worked quite well: I can create, move, delete, and edit files on my WebDAV share from my Windows machine perfectly. The next step was to check out a working copy from the SVN (hosted at devserver.com/subversion/) into the WebDAV share. On the first try I used the Eclipse plugin Subversive. The actual checkout worked fine, and I can update and commit to the repository; however, I cannot add any files to the ignore list (svn:ignore). It always gives me an error. So I tried the same thing with a completely fresh repository using TortoiseSVN, and again it failed with the same errors. Here is what it says when trying to add files to svn:ignore:
    Some of selected resources were not added to ignore.
    svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props'
    This is what apache2 tells me when I try to add a file to svn:ignore:
    [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated).
    [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0]
    Both messages are actually repeated several times: the first one occurs first and is repeated about 5 times, and the second comes thereafter and is repeated probably more than 20 times. If I create, delete, rename, or modify a regular file, none of those messages appear in my error.log. While writing this question I was able to add files to svn:ignore using TortoiseSVN; however, after that, Eclipse would not let me commit anymore. The error that used to pop up when adding files to svn:ignore now also shows up while committing. While searching the web I found some people seeing this same message because they had files differing only in upper-/lower-case naming. I checked my repository and did not find such files.
    I also read somewhere about people having trouble with WebDAV and file locking, because WebDAV's file-locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed. That error has not appeared anymore since I set up a completely fresh repository and working copy. I would really appreciate any help anyone can provide in fixing this problem! If there are any more questions, feel free to ask. I know this is a somewhat unusual setup. Best regards, Daniel
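
    A hedged reading of those errors, not a confirmed diagnosis: the rename failure on '.svn\tmp\dir-props' suggests Subversion's working-copy housekeeping (atomic renames inside .svn) does not survive the WebDAV round trip, and the "Negotiation: discovered file(s) matching request" entries come from Apache's mod_negotiation (MultiViews) matching the temporary 'dir-props.<uuid>.tmp' files against requests for 'dir-props', so disabling MultiViews on the workspace directory is worth trying. An alternative layout that sidesteps the problem is to keep the working copy on the development server's own filesystem and let a post-commit hook update it, so the WebDAV share carries only content, never .svn metadata. A sketch of such a hook follows; the server-side working-copy path and log file are assumptions, not taken from the question:

    #!/bin/sh
    # post-commit hook for the repository at devserver.com/subversion/ (sketch).
    # $1 = repository path, $2 = revision just committed.
    REPOS="$1"
    REV="$2"
    # Update a working copy that lives on the server's local disk and is served by Apache.
    /usr/bin/svn update /var/www/devserver.com \
        --non-interactive >> /var/log/svn-deploy.log 2>&1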

    Read the article

  • Building nginx 1.0.4 on Amazon EC2 micro - perl and python problems

    - by digitaltoast
    I'd like to run nginx as a reverse proxy with apache2 on my EC2 micro instance. yum install nginx gives me nginx-0.8.53-1.2.amzn1.x86_64.rpm, but the current nginx is 1.0.4. I found and followed this guide: http://kdn2.info/2011/05/install-nginx-on-amazon-ec2/ It works fine up to and including "make". When I get to checkinstall --fstrans=no I get:
    ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored.
    test -d '/var/log/nginx' || mkdir -p '/var/log/nginx'
    ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored.
    make[1]: Leaving directory `/root/src/nginx-1.0.4'
    ======================== Installation successful ==========================
    Copying documentation directory...
    ./
    ./CHANGES
    ./LICENSE
    ./README
    cp: cannot stat `//var/tmp/gRWoVgIcdbmjfTjoVGBM/newfiles.tmp': No such file or directory
    Copying files to the temporary directory...OK
    Striping ELF binaries and libraries...OK
    Compressing man pages...OK
    Building file list...OK
    Building RPM package...
    FAILED!
    *** Failed to build the package
    ...and the logfile is full of:
    Building target platforms: x86_64
    Building for target x86_64
    Processing files: nginx-1.0.4-1.x86_64
    error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr
    error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr/doc
    There IS /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/ but no /usr.
    Following further down the page, it says: "If we want to use, for example, PHP 5.2 we can download PHP and Nginx compatible with Amazon Kernel(Xen Kernel) from the CentosALT Repository." So I install the two repositories, but when I yum install http://centos.alt.ru/pub/nginx/1.0/RPMS/x86_64/nginx-stable-1.0.4-1.el5.x86_64.rpm I get:
    Error: Package: nginx-stable-1.0.4-1.el5.x86_64 (/nginx-stable-1.0.4-1.el5.x86_64)
    Requires: perl(:MODULE_COMPAT_5.8.8)
    You could try using --skip-broken to work around the problem, but that doesn't fix it. When I do yum update, I get:
    --> Finished Dependency Resolution
    Error: Package: python-distribute-0.6.19-10.1.x86_64 (devel_languages_python)
    Requires: python < 2.5
    Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main)
    python = 1:2.6-1.19.amzn1
    Error: Package: python-distribute-0.6.19-10.1.i586 (devel_languages_python)
    Requires: python < 2.5
    Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main)
    python = 1:2.6-1.19.amzn1
    I've tried everything: yum clean all and various other suggestions found on other sites. If anyone has any suggestions, or a known package of the current 1.0.4 nginx working on an EC2 micro instance (Linux ip-10-56-63-85 2.6.35.11-83.9.amzn1.x86_64 #1 SMP Sat Feb 19 23:42:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux, which I think is RHEL 5?), then I'd be grateful. Incidentally, does this repolist look right?
    repo id                   repo name                                                           status
    CentALT                   CentALT Packages for Enterprise Linux 5 - x86_64                    enabled: 112+157
    amzn-main                 amzn-main-Base                                                      enabled: 2,706
    amzn-main-debuginfo       amzn-main-debuginfo                                                 disabled
    amzn-main-nosrc           amzn-main-nosrc                                                     disabled
    amzn-updates              amzn-updates-Base                                                   enabled: 328
    amzn-updates-debuginfo    amzn-updates-debuginfo                                              disabled
    amzn-updates-nosrc        amzn-updates-nosrc                                                  disabled
    devel_languages_python    Python and Python Modules (SLE_10)                                  enabled: 1,452+768
    epel                      Extra Packages for Enterprise Linux 5 - x86_64                      enabled: 5,892+604
    epel-debuginfo            Extra Packages for Enterprise Linux 5 - x86_64 - Debug              disabled
    epel-source               Extra Packages for Enterprise Linux 5 - x86_64 - Source             disabled
    epel-testing              Extra Packages for Enterprise Linux 5 - Testing - x86_64            disabled
    epel-testing-debuginfo    Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Debug    disabled
    epel-testing-source       Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Source   disabled
    s3tools                   Tools for managing Amazon S3 - Simple Storage Service (RHEL_6)      enabled: 2+1
    repolist: 10,492
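
    The checkinstall failure looks consistent with installwatch not being preloadable (the LD_PRELOAD error), which leaves checkinstall with an empty file list and an RPM build that finds nothing under BUILDROOT; the CentALT RPM and the devel_languages_python repository, meanwhile, are built against RHEL/SLE userlands rather than Amazon Linux. If the goal is simply a current nginx in front of Apache rather than a redistributable RPM, one pragmatic sketch is to install straight from source and skip checkinstall; the prefix and module flags below are illustrative assumptions, not taken from the guide:

    # Build dependencies available in the Amazon Linux repos.
    sudo yum -y install pcre-devel zlib-devel openssl-devel gcc make
    # Configure, build, and install nginx 1.0.4 from the source tree used in the question.
    cd /root/src/nginx-1.0.4
    ./configure --prefix=/opt/nginx --with-http_ssl_module
    make
    sudo make install
    # No init script is installed this way; start it manually (or wire it into /etc/rc.local).
    sudo /opt/nginx/sbin/nginx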

    Read the article

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello. Environment: a Mac OS X 10.6.3 install/import of a Mac OS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our Mac OS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM).
    Testing using ldapsearch:
    ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
    ...fails with:
    ldap_start_tls: Protocol error (2)
    Testing again with "-d 9" added fails with:
    res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>
    Testing without requiring STARTTLS, or with LDAPS:
    ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
    ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid
    ...succeeds with:
    # donaldr, users, darkhorse.com
    dn: uid=donaldr,cn=users,dc=darkhorse,dc=com
    uid: donaldr
    # search result
    search: 2
    result: 0 Success
    # numResponses: 2
    # numEntries: 1
    result: 0 Success
    (We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf.)
    Testing with openssl:
    openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state
    ...succeeds:
    CONNECTED(00000003)
    SSL_connect:before/connect initialization
    SSL_connect:SSLv2/v3 write client hello A
    SSL_connect:SSLv3 read server hello A
    depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
    verify error:num=19:self signed certificate in certificate chain
    verify return:0
    SSL_connect:SSLv3 read server certificate A
    SSL_connect:SSLv3 read server done A
    SSL_connect:SSLv3 write client key exchange A
    SSL_connect:SSLv3 write change cipher spec A
    SSL_connect:SSLv3 write finished A
    SSL_connect:SSLv3 flush data
    SSL_connect:SSLv3 read finished A
    ---
    Certificate chain
    0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
      i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
    1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
      i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
    ---
    Server certificate
    -----BEGIN CERTIFICATE-----
    <deleted for brevity>
    -----END CERTIFICATE-----
    subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com
    issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department
    ---
    No client certificate CA names sent
    ---
    SSL handshake has read 2640 bytes and written 325 bytes
    ---
    New, TLSv1/SSLv3, Cipher is AES256-SHA
    Server public key is 1024 bit
    Compression: NONE
    Expansion: NONE
    SSL-Session:
    Protocol : TLSv1
    Cipher : AES256-SHA
    Session-ID: D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851
    Session-ID-ctx:
    Master-Key: E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3
    Key-Arg : None
    Start Time: 1271202435
    Timeout : 300 (sec)
    Verify return code: 0 (ok)
    So we believe that the slapd daemon is reading our certificate and presenting it to LDAP clients. When SSL is enabled in the LDAP section of the Open Directory service, Apple Server Admin adds ProgramArguments ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist and adds TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf. While that appears to be enough for LDAPS, it appears not to be enough for STARTTLS.
    Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed CA) with a self-signed certificate generated by Apple Server Admin results in no change in the ldapsearch results. Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs:
    4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037
    4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
    4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation
    Any debugging advice is much appreciated.
    -- Tom Kishel
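
    A possible direction, offered as a sketch rather than a known fix: "do_extended: unsupported operation 1.3.6.1.4.1.1466.20037" means slapd itself is rejecting the StartTLS extended operation on the plain ldap:// listener, which typically happens when that listener runs without the TLS settings loaded even though the separate ldaps:// listener works. The commands below only verify the pieces named in the question; the exact arguments Server Admin writes may differ from this guess:

    # Does slapd's main config actually include the file holding the TLS directives?
    grep -i 'include' /etc/openldap/slapd.conf
    grep -i '^TLS' /etc/openldap/slapd_macosxserver.conf
    # Does the config parse cleanly with those directives in place?
    sudo slaptest -f /etc/openldap/slapd.conf -v
    # Which listeners does launchd start? StartTLS needs the ldap:/// listener
    # (with the TLS directives loaded), in addition to ldaps:///.
    sudo /usr/libexec/PlistBuddy -c 'Print :ProgramArguments' /System/Library/LaunchDaemons/org.openldap.slapd.plist
    # After reloading slapd, re-test with client-side debugging:
    ldapsearch -ZZ -d 1 -H ldap://gnome.darkhorse.com -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid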

    Read the article

< Previous Page | 125 126 127 128 129 130  | Next Page >