Search Results

Search found 32007 results on 1281 pages for 'php openid'.

  • Implementing an NGINX load balancer

    - by Alaa Alomari
    I have two servers (ServerA 192.168.1.10, ServerB 192.168.1.11), and the DNS record for test.mysite.com points to ServerA.

    On ServerA I have this nginx configuration:

        upstream lb_units {
            server 192.168.1.10 weight=2 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES1
            server 192.168.1.11 weight=2 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES2
        }

        server {
            listen 80;                    # Listen on the external interface
            server_name test.mysite.com;  # The server name
            root /var/www/test;
            index index.php;

            location / {
                proxy_pass http://lb_units;  # Load balance the URL location "/" to the upstream lb_units
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/test/$fastcgi_script_name;
            }
        }

    ServerB runs Apache with the following virtual host:

        <VirtualHost *:80>
            RewriteEngine on
            <Directory "/var/www/test">
                AllowOverride all
            </Directory>
            DocumentRoot "/var/www/test"
            ServerName test.mysite.com
        </VirtualHost>

    But whenever I try to browse test.mysite.com, it serves me from ServerA. I also tried marking ServerA as down in lb_units (server 192.168.1.10 down;), and it still serves me from ServerA. Any idea what I have done wrong?

  • Nginx + Apache + Wordpress redirects to localhost/127.0.0.1

    - by jcrcj
    Does anyone know how to fix an issue with Nginx + Apache + WordPress redirecting to localhost/127.0.0.1? I've tried a lot of different fixes, but none have worked for me. I can go to http://domain.com/wp-admin just fine and use everything there normally, but if I try to go to http://domain.com it redirects to 127.0.0.1. Everything also works fine if I just run through Apache.

    Here are the relevant portions of my nginx.conf:

        server {
            listen 80;
            server_name domain.com;
            root /var/www/html/wordpress;

            location / {
                try_files $uri $uri/ /index.php;
            }

            location ~ \.php$ {
                proxy_pass http://127.0.0.1:8080;
            }
        }

    Here are the relevant portions of my httpd.conf:

        Listen *:8080
        ServerName <ip>

        <VirtualHost *:8080>
            ServerAdmin test@test
            DocumentRoot /var/www/html/wordpress
            ServerName domain.com
        </VirtualHost>

    This is what my nginx log looks like:

        <ip> - - [19/Jun/2012:22:35:35 +0400] "GET / HTTP/1.1" 301 0 "-" "Mozilla/5.0

    This is what my httpd log looks like:

        127.0.0.1 - - [19/Jun/2012:22:24:46 +0400] "GET /index.php HTTP/1.0" 301 - "-"

    WordPress Address (URL) and Site Address (URL) are both set to the same http://domain.com.
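
    One thing worth trying, not from the original post but a common WordPress-behind-a-proxy workaround: pin the public URLs in wp-config.php so WordPress cannot build its canonical redirects from the proxied back-end host. A minimal sketch, assuming the public address really is http://domain.com:

        <?php
        // wp-config.php (excerpt) -- hypothetical addition, placed above the
        // "That's all, stop editing!" line. Pins the canonical URLs so redirects
        // are not derived from the back-end host that nginx proxies to.
        define('WP_HOME', 'http://domain.com');
        define('WP_SITEURL', 'http://domain.com');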

  • Can't get subdomain to point to working collabNet server - what am I doing wrong?

    - by Jared
    Hello everyone, I am running a web server using CollabNet Subversion Edge. You can view it at 71.13.105DOT51. I also run another website, http://www.tutorialcraft.com. I went into my cPanel and created a DNS record as follows:

        svn.tutorialcraft.com. 14400 IN A 71.13.105.51

    Yet, if you go to http://svn.tutorialcraft.com, it doesn't load. I tested to see if I was doing something wrong, so I created ebay.tutorialcraft.com and pointed it to eBay's servers, and it worked fine (it's not up now). Anyone have any ideas? Thanks.

    UPDATE NOTES: I tried to point svn.tutorialcraft.com to my original IP address (the one that www.tutorialcraft.com is pointed to), and it still won't load. Also, maybe worthy of note: I am running a WordPress multi-site server, and I have disabled blog redirection. Here is a sample of my .htaccess as well:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^tutorialcraft\.com
        RewriteRule (.*) http://www.tutorialcraft.com/$1 [R=301,L]
        RewriteBase /
        RewriteRule ^index\.php$ - [L]

        # uploaded files
        RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteRule . index.php [L]

  • WAMP running extremely slow on Windows 7

    - by JavaCake
    After two days of a tough fight trying to figure out what the problem is with my Windows 7 32-bit machine at work, I have nearly given up. The issue is that pages load extremely slowly; performance is poor both when the site is accessed locally (127.0.0.1) and from another computer on the intranet.

    First, to explain the system:

        WAMP version: Apache 2.2.22, MySQL 5.5.24, PHP 5.4.3
        XDebug 2.1.2, XDC 1.5, phpMyAdmin 3.4.10.1, SQLBuddy 1.3.3, webgrind 1.0
        DocumentRoot: located on a network drive
        MySQL: InnoDB
        Pages: PHP, MySQL, AJAX, etc.

    So basically, these are the changes I have made in order to get greater performance.

    Changed C:\windows\system32\drivers\etc\hosts:

        127.0.0.1 localhost
        127.0.0.1 127.0.0.1

    Modified my.ini:

        innodb_flush_log_at_trx_commit = 2

    Modified httpd.conf:

        EnableMMAP on
        EnableSendfile on

    Modified php.ini:

        realpath_cache_size = 4m

    How I measure the performance is the overall load time of the page. I also run the site locally on my Mac OS X machine (MAMP), where the front-page load time is typically 0.06 seconds, but on the Windows 7 machine it is 6-10 seconds. I have verified the load times with the developer tools in Chrome as well. Furthermore, the result is identical in XAMPP.
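
    A cheap way to test whether the network-drive DocumentRoot is the bottleneck (that suspicion is my own, not something established above) is to time filesystem stat calls from PHP against a local path and the network path. A rough sketch; both paths are placeholders:

        <?php
        // stat-bench.php -- compare stat() cost on a local file vs. one on the network share.
        $paths = array(
            'local'   => 'C:/wamp/www/index.php',             // placeholder local file
            'network' => '//fileserver/share/www/index.php',  // placeholder UNC path
        );

        foreach ($paths as $label => $path) {
            $start = microtime(true);
            for ($i = 0; $i < 1000; $i++) {
                clearstatcache();       // force a real stat() on every iteration
                file_exists($path);
            }
            printf("%-8s %.4f s for 1000 stat calls\n", $label, microtime(true) - $start);
        }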

  • Weekly cron job executing twice

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. This script should run only once, but every Sunday it runs at:

        1:30 am
        6:50 am

    Any way to fix this?

    Add 1: /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add 2: crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *   root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * *   root  /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
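
    Whatever is firing the job twice (anacron plus a separate crontab entry is a common cause, but that is a guess), the PHP script itself can be made safe to re-run by refusing to do the work more than once per week. A minimal sketch; the timestamp-file path is a placeholder:

        <?php
        // Guard at the top of cronweek.php: bail out if the job already ran in the last 6 days.
        $stamp = '/home/my/path/to/script/cronweek.lastrun';   // placeholder path

        if (file_exists($stamp) && (time() - filemtime($stamp)) < 6 * 86400) {
            exit("Already ran this week, skipping.\n");
        }

        // ... the actual weekly work goes here ...

        touch($stamp);   // record a successful run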

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on.

    My default root directory for the Apache web server is pointed to an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the userdir include conf file:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they are still doing the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
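
    Not part of the original question, but a quick way to see which vhost and document root Apache actually matched is to drop a small diagnostic script into /Storage/Sites/mysite and request it via http://mysite.dev/. A sketch:

        <?php
        // whichvhost.php -- print what Apache resolved for this request.
        header('Content-Type: text/plain');
        echo 'SERVER_NAME:     ', $_SERVER['SERVER_NAME'], "\n";
        echo 'HTTP_HOST:       ', isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '(none)', "\n";
        echo 'DOCUMENT_ROOT:   ', $_SERVER['DOCUMENT_ROOT'], "\n";
        echo 'SCRIPT_FILENAME: ', $_SERVER['SCRIPT_FILENAME'], "\n";

    If DOCUMENT_ROOT still shows /Storage/Sites for a request to mysite.dev, the vhost is not being matched at all, which narrows the search to the vhost/NameVirtualHost configuration rather than the directory blocks.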

  • Can't configure DNS properly on CentOS

    - by Nuker
    I am on a VPS I must manage myself. I have network problems: in the last few days many of my users report they can't reach my site through my domain, and it seems Google and Facebook can't either (this never happened before). However, I can reach my site without problems, and so can many other people. So I tested by making a PHP include like this:

        <?php include 'http://mysite.com/somepage.php'; ?>

    and I get this error:

        Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in

    I even tried including content from yahoo.com or facebook.com, and that didn't work either. However, the includes will work if I use IPs instead of domains. Do I have a DNS problem or something? What can I do to fix it?

    I'm on Linux 2.6.32-431.11.2.el6.x86_64 (x86_64), CentOS 6.5. I have this in my resolv.conf:

        # Generated by NetworkManager
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com

        nameserver 8.8.8.8
        nameserver 8.8.4.4

    Thank you.
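
    Here is a small diagnostic of my own (not from the post) that separates "PHP cannot resolve names on this box" from "the site is down": compare a lookup through the system resolver with an explicit DNS query for A records. Run it from the CLI on the VPS:

        <?php
        // dns-check.php -- usage: php dns-check.php
        $hosts = array('mysite.com', 'www.google.com');   // mysite.com is a placeholder

        foreach ($hosts as $host) {
            // Resolver-library lookup (honours /etc/resolv.conf, nsswitch, /etc/hosts).
            $ip = gethostbyname($host);
            echo "$host resolves to: ", ($ip === $host ? 'FAILED' : $ip), "\n";

            // Explicit DNS query for A records against the configured nameservers.
            $records = dns_get_record($host, DNS_A);
            $ips = array();
            foreach ((array) $records as $r) {
                $ips[] = $r['ip'];
            }
            echo "  A records: ", ($ips ? implode(', ', $ips) : 'none'), "\n";
        }

    If lookups fail here even for well-known names like www.google.com, the problem is on the VPS side (resolver configuration or blocked outbound DNS) rather than with your own zone.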

  • Low-traffic WordPress website on Apache keeps crashing server

    - by OC2PS
    I have recently moved my low-to-moderate traffic website (1000 UAUs, 5000 pageviews on a busy day) from shared hosting to a CentOS 6 64-bit VPS with Apache and cPanel, running on 4 quad-core processors (likely oversold) and 3GB memory (Xen).

    We've had problems from the beginning: the server keeps crashing. It seems PHP keeps expanding until it consumes all the memory and crashes the server. Some folks have suggested that I should abandon Apache/cPanel/PHP/MySQL and go with nginx/Varnish/PHP-FPM/SQLite, but that's just not possible for me, as I am not very tech-savvy and need a simple GUI like cPanel to be able to manage the mundane management tasks (I can't afford to hire a system administrator or get fully managed hosting).

    I have come across several posts discussing optimization of Apache for WordPress, but all of these lead to articles that are pretty dated, such as this roughly four-year-old one from January 2009: http://thethemefoundry.com/blog/optimize-apache-wordpress/. The article is pretty detailed and seems helpful, but I stumble even on the first step. My httpd.conf only has two LoadModule commands:

        LoadModule fastinclude_module modules/mod_fastinclude.so
        LoadModule bwlimited_module modules/mod_bwlimited.so

    So I go totally bust right there. Further, my httpd.conf says:

        Direct modifications to the Apache configuration file may be lost upon subsequent
        regeneration of the configuration file. To have modifications retained, all
        modifications must be checked into the configuration system by running:
            /usr/local/cpanel/bin/apache_conf_distiller

    I am having trouble finding where to change the modules in WHM. Please can someone help me with updated guidelines on how to optimize Apache for WordPress? Many thanks!

    P.S. The WordPress installation also has WP Super Cache installed.

    P.P.S. I also have phpBB, OpenCart, and Menalto Gallery installed.
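
    Separately from any Apache tuning, it helps to know which requests are the ones ballooning. Purely as a sketch of my own (not advice from the thread), an auto-prepended snippet can log peak memory per request so the offending URLs stand out; both paths below are placeholders:

        <?php
        // memlog.php -- enable via php.ini:  auto_prepend_file = /path/to/memlog.php
        register_shutdown_function(function () {
            $line = sprintf(
                "%s %s %s peak=%.1fMB\n",
                date('c'),
                isset($_SERVER['REQUEST_METHOD']) ? $_SERVER['REQUEST_METHOD'] : 'CLI',
                isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '-',
                memory_get_peak_usage(true) / 1048576
            );
            file_put_contents('/var/log/php_mem.log', $line, FILE_APPEND);
        });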

  • Nginx settings are screwing up my Drupal form submissions, how do I fix?

    - by bflora
    How do I tell Nginx to "ignore" specific URLs or pages on my web site?

    I run a Drupal site where anonymous visitors get served via nginx while logged-in users get served via Apache. We do this to keep the load down and scale better. It works great, except that since we set up nginx, a good number of Drupal forms no longer work.

    For example, before installing nginx, if you created a new article, then clicked "edit" and edited the article, you could click "save" and your changes to the article would be saved. After setting up nginx, when you make edits and then click "save", the page simply refreshes, but now with "nginx-index.php" inserted into the URL, and your changes to the form are not actually saved to the database. So if you go to edit an article, you'll be on domain.com/node/##/edit or something like that. When you try to save your changes to the form, you'll wind up at domain.com/nginx-index.php?q=node/##/edit, and your changes will not be saved.

    There is a way around this, but only for administrative users. If you go to a form where this problem is happening and then toggle (comment or uncomment) three lines in our settings.php file, the form will save properly. Those three lines are:

        // 'cache_form' => array(
        //   'engine' => 'db',
        // ),

    If they're commented, you uncomment them and then save the form. If they're uncommented, you comment them out and save the form. Obviously, this sucks.

    My friend who set up our server (and then left the country) told me that there are some nginx settings that can tell it to "ignore" certain URLs or pages, which could work here. How do I do this, and where do I do it?

  • Apache file appearing in directory list, but giving 404 when attempting access?

    - by aayush
    Please forgive my lack of knowledge; this is more of a learning project than anything else. I have a Linux box, and it works pretty much fine. When I go to example.com/css it says there's one file in there, bootstrap.min.css. When I go to example.com/css/bootstrap.min.css, it gives me a 404 error.

    I have only one .htaccess file, used to remove index.php from the URL. I also renamed it to htaccess (instead of .htaccess, so Apache won't find it) and restarted the server, yet no help. I also tried to chmod the CSS file to 755, but no help. Contents of the htaccess file:

        RewriteEngine On
        RewriteBase /

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L,QSA]

    Please help, I am very confused about this. I tried to Google excessively, but I came up with nothing.

    Edit: I found the solution to be renaming the htaccess file to something entirely different and restarting. Is there any way I can still drop the .php from the URL?

  • 100% CPU when doing 4 or more concurrent requests with Magento

    - by pancake
    Currently I'm having trouble with a server running Magento; it's unbelievably slow. It's a VPS with a few Magento installations on it used for development, so I'm the only one using them. When I do 4 requests, each 2 seconds after the other, I'm finished in 10 seconds. Slow, but still within the limits of my patience. When I do 4 "concurrent" requests, however (opening 4 tabs in a row, very quickly), all four cores go to 100% and stay there for about a minute. How is this possible?

    I know that there are a lot of possibilities here, so any tips on how to make an Apache/PHP server go faster are also welcome. It used to go a lot faster before, and I've also tried APC, but it kept causing problems (PHP errors, something with memory pools), so I've disabled it. By the way, the Magento cache is off and compiling is also off. I know this makes Magento slower than usual, but I don't think a 60-second response time is normal for any Magento installation.

    Virtual hardware:

        4 cores and 4096MB RAM
        Swap is never used (checked with htop)
        100GB disk space, of which 10% is in use

    Software:

        Debian 6
        DirectAdmin and Apache (custombuild)
        PHP 5.2.17 (CLI)

    If you need more info, please tell me how to get it, because I probably don't know how. I do know how to use the command line in Linux and the usage of quite a few commands, but my experience with managing a server is limited.

  • NGINX + Aegir + Drupal permissions: 403 Forbidden

    - by nlam
    I'm new to nginx; I installed it on Mac OS for use with Aegir and Drupal. It's running great, but I have a problem with permissions.

    My hostmaster installation is here:

        /var/aegir/hostmaster-6.x-1.7/

    The hostmaster settings file is here:

        /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php

    Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other", the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files and sites/aegir.ldev/private); I would have to change the permissions there too. Moreover, I would also have to change permissions for every site installed by hostmaster. Anyway.

    In my nginx.conf I have the following:

        user "myuser" _www;

    Owner and group for settings.php, /sites/example.ldev/files, and /sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves the problem, but really confuses me. Why does "other" need the permission, and not the owner or group? I've checked the processes running in Activity Monitor: nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.
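
    One assumption worth checking (mine, not something stated in the question): the 403 depends on the user the PHP/FastCGI worker runs as, which can differ from both the nginx worker user and the file owner, and that would explain why only the "other" permission bits help. A throwaway script in the site root can show the effective uid/gid at request time; it relies on the posix extension being available:

        <?php
        // whoami.php -- report the effective user/group of the PHP worker serving this request.
        header('Content-Type: text/plain');
        if (function_exists('posix_geteuid')) {
            $u = posix_getpwuid(posix_geteuid());
            $g = posix_getgrgid(posix_getegid());
            echo "PHP runs as user:  {$u['name']} (uid {$u['uid']})\n";
            echo "PHP runs as group: {$g['name']} (gid {$g['gid']})\n";
        } else {
            echo "posix extension unavailable; get_current_user() says: ", get_current_user(), "\n";
        }
        echo "settings.php readable by this process: ",
             is_readable('/var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php') ? 'yes' : 'no', "\n";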

  • memory usage setting

    - by user127610
    Everybody, the memory usage is too much. What can I do?

        top - 12:54:37 up 7 days,  4:38,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks:  18 total,   2 running,  16 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   1048800k total,   917424k used,   131376k free,        0k buffers
        Swap:        0k total,        0k used,        0k free,        0k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
            1 root   15   0  2840 1364 1204 S  0.0  0.1  0:02.17 init
         1161 root   14  -4  2320  600  420 S  0.0  0.1  0:00.00 udevd
         1391 root   18   0 35512 1288  948 S  0.0  0.1  0:03.53 rsyslogd
         1409 root   15   0  8432 1164  700 S  0.0  0.1  0:03.87 sshd
         1416 root   18   0  3156  868  692 S  0.0  0.1  0:00.00 xinetd
         1423 root   18   0  8672  716  292 S  0.0  0.1  0:00.00 saslauthd
         1424 root   18   0  8672  488   64 S  0.0  0.0  0:00.00 saslauthd
         1431 root   15   0  7020 1168  616 S  0.0  0.1  0:00.99 crond
         1450 root   25   0  6236 1444 1228 S  0.0  0.1  0:00.05 sh
         3328 mysql  15   0  799m  42m 4892 S  0.0  4.1  0:02.07 mysqld
        15479 root   15   0 11304 3332 2688 R  0.0  0.3  0:00.06 sshd
        15482 root   15   0  6372 1688 1404 S  0.0  0.2  0:00.00 bash
        15497 root   15   0  2536 1044  864 R  0.0  0.1  0:00.00 top
        20137 www    15   0 20672  14m  864 S  0.0  1.4  0:00.87 nginx
        22351 www    16   0 52324  26m 9244 S  0.0  2.6  0:13.94 php-fpm
        24231 www    16   0 51928  25m 9260 S  0.0  2.5  0:13.52 php-fpm
        32682 root   15   0 35832 3228  864 S  0.0  0.3  0:02.18 php-fpm
        32686 root   18   0  7368 1616  888 S  0.0  0.2  0:00.00 nginx

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures from a folder (with subfolders, which are updated automatically), keeping their extensions. These files have to be copied into a folder where a PHP-based website will edit them (by renaming them and creating an XML file) so they can be downloaded and integrated into an XML feed. Because of the rename step in the script, when I perform the copy again, all the files are duplicated, because the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files with an external "history".

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ;
        done
        wget --spider 'http://myscript.php' ;
        #exit 0

    PS: As a little addition, I'd like to replace '.' with a 'space' just after the *.jpeg copy. My PHP script has some problems defining files with commas because of the extension. I'm thinking about a find command, like I did before, combined with a sed function. Is that a good idea?
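
    Because the destination renames everything, the only identity the two sides still share is the file content itself. One approach (my own sketch, not from the question) is to keep a manifest of content hashes of everything already copied and skip those on later runs. The source and destination paths are the ones from the script above; the manifest filename is made up:

        <?php
        // copy-new.php -- copy only pictures whose content hash is not yet in the manifest.
        $source   = '/home/name/picture';
        $dest     = '/var/www/media';
        $manifest = '/var/www/media/.copied-hashes';   // hypothetical manifest file

        $seen = file_exists($manifest)
              ? array_flip(file($manifest, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES))
              : array();

        $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($source));
        foreach ($it as $file) {
            if (!$file->isFile() || strtolower($file->getExtension()) !== 'jpg') {
                continue;
            }
            $hash = md5_file($file->getPathname());
            if (isset($seen[$hash])) {
                continue;   // already copied (and since renamed) on an earlier run
            }
            copy($file->getPathname(), $dest . '/' . $file->getBasename());
            file_put_contents($manifest, $hash . "\n", FILE_APPEND);
            $seen[$hash] = true;
        }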

  • How should I configure my Apache Hosts File to serve a different site for localhost than for my domain/publicip?

    - by rofls
    I'm trying to test out a LAMP setup (with PHP 5 specifically), with Django already serving a website. I want to do the PHP stuff on localhost for now, so that when I do something like this:

        curl http://localhost/database/script.php?var=1

    I get a response from the PHP server. Right now I'm getting a Django error. I tried something like this in the default file in sites-available:

        Listen 80
        <VirtualHost aaa.bbb.ccc.ddd>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

    where aaa.bbb.ccc.ddd is the local IP address, and changing my actual site's settings to specify the public IP, like this:

        Listen 80
        <VirtualHost www.xxx.yyy.zzz>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>

    but then I start getting all kinds of errors when I start Apache, such as "port ::[80] is already in use" or something. I noticed that the hosts file located in /etc/apache2/ is apparently pointing everything to mysite.com, including my local IP address as well as 127.0.0.1 and 127.0.1.1. Do I need to change the configuration there too?

  • ImageMagick installation confusion

    - by Codemonkey
    Well, this is turning out to be a pain. I'm on CentOS 6.5. I saw on the PECL imagick changelog that they added a load of constants for new filters, such as LANCZOS2SHARP. Some testing I did earlier suggested that Photoshop 6.5 was able to downsize photos with better clarity than my current ImageMagick's best effort of LANCZOS, so I thought I'd try to upgrade to test out the newer filters.

    So, first port of call: get the source from PECL for 3.2.0RC1. It installed with no problems. But, ah-ha: although it says it only requires ImageMagick 6.2.4, the filters I'm after don't work unless you have 6.6.6+.

    So I go to http://www.imagemagick.org/script/install-source.php to install the newest version. This is the bit that has puzzled me. It all appears to have installed fine. The tests work fine, passing 40 out of 40. If I run identify -version on the command line, it reports 6.8.9. But if I echo Imagick::getVersion() in PHP, it shows 6.5.4, even after restarting php-fpm.

        rpm -qa | grep ImageMagick    # shows that I still have 6.5.4 installed
        locate ImageMagick            # also only seems to show the 6.5.4 one

    I feel that the missing link here is ImageMagick-devel. Do I need to install that too? How do I go about doing that? Or do I just need to reinstall pecl-imagemagick 3.2.0RC1 now that I have the latest ImageMagick installed?
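
    For what it's worth, the two numbers come from different places: identify -version reports whichever binary is first in your PATH, while Imagick::getVersion() reports the ImageMagick library the PECL extension was linked against when it was compiled, which is why rebuilding the extension against the new install (using its headers, e.g. from ImageMagick-devel or the source install) is usually what changes it. A quick sketch to see both values from PHP's side:

        <?php
        // imagick-version.php -- compare the extension version with the library it was built against.
        if (!extension_loaded('imagick')) {
            exit("imagick extension not loaded for this SAPI\n");
        }
        echo "PECL imagick extension version: ", phpversion('imagick'), "\n";

        $v = Imagick::getVersion();   // the ImageMagick library linked into the extension
        echo "Linked ImageMagick library: {$v['versionString']}\n";

        // A newer `identify -version` on the command line does not change the
        // numbers above until the extension is rebuilt against that install.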

  • Checking for valid document files

    - by sweb
    I need a simple way to check whether my files are valid documents (pdf, doc, docx, ppt, pptx, xls, xlsx, odt, ods, odp and so on). I can't use file, because its magic detection does not work well at all. For example, for PDF files this is my output:

        sweb@sweb-laptop: /media/files/ebooks/PDF and CHM$ file --mime *.pdf
        PHP 5 for Dummies.pdf:                            application/pdf; charset=binary
        PHP 6 and MySQL 5 for Dynamic Web Sites.pdf:      application/octet-stream; charset=binary
        PHP6 and MySQL Bible.pdf:                         application/pdf; charset=binary
        PHP6.pdf:                                         application/octet-stream; charset=binary
        PHP and MySQL for Dummies SE.pdf:                 application/pdf; charset=binary

    For example, I use abiword, which is a good tool, but it converts any format; it doesn't check for valid documents:

        abiword --to=txt --to-name=output.txt audio.mp3

    Is there any command available to check for valid documents, then?
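
    Since the surrounding context here is PHP anyway, the same libmagic detection is also available through the fileinfo extension, and for PDFs a cheap extra signature test helps. A sketch of my own; the extension-to-MIME whitelist is an assumption and only covers a few formats:

        <?php
        // checkdocs.php -- flag files whose detected MIME type does not match their extension.
        $expected = array(
            'pdf'  => array('application/pdf'),
            'doc'  => array('application/msword'),
            'docx' => array('application/vnd.openxmlformats-officedocument.wordprocessingml.document'),
            'odt'  => array('application/vnd.oasis.opendocument.text'),
        );

        $finfo = new finfo(FILEINFO_MIME_TYPE);
        foreach (glob('/media/files/ebooks/PDF and CHM/*') as $path) {
            $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
            if (!isset($expected[$ext])) {
                continue;
            }
            $mime = $finfo->file($path);
            $ok   = in_array($mime, $expected[$ext], true);

            // Extra check for PDFs: a valid file starts with the "%PDF-" signature.
            if ($ext === 'pdf' && $ok) {
                $ok = strncmp((string) file_get_contents($path, false, null, 0, 5), '%PDF-', 5) === 0;
            }
            printf("%-55s %-60s %s\n", basename($path), $mime, $ok ? 'OK' : 'SUSPECT');
        }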

  • Enabling mod_rewrite in Apache, permissions issues

    - by rudolph9
    I am attempting to enable mod_rewrite on the Apache 2 web server installed with Mac OS X 10.7.4, following these instructions, ultimately using the configuration to host CakePHP applications. I run into permissions issues accessing the site via a web browser when I change the directory block associated with the CakePHP site in /etc/apache2/users/username.conf from:

        <Directory "/Users/username/Sites/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

    to:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

        <Directory "/Users/username/Sites/cakephp_app/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            Allow from all
        </Directory>

    The .htaccess files are the CakePHP 2.2.2 defaults, as follows:

    /Users/username/Sites/cakephp_app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ webroot/ [L]
            RewriteRule (.*) webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/webroot/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>

    When performing the request via a web browser to http://0.0.0.0/~username/cakephp_app/index.php, the content of the response is:

        Not Found
        The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server.
        Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80

    Upon a request to http://0.0.0.0/~username/ or http://0.0.0.0/~username/cakephp_app/, the following is added to /var/log/apache2/error_log:

        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/
        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico

    What is causing the issue? Is there a server program, ideally available via a Homebrew script, which would make hosting CakePHP applications for testing purposes more effective and efficient?

  • .htaccess redirect working on localhost but not on server

    - by Thread7
    I want users who hit my web site's root directory to be sent to a subdirectory, so anyone going to http://MyDomain.com or /index.php would be sent to http://MyDomain.com/subdir. I used the .htaccess file to do this successfully on my local machine (with Apache 2), but it doesn't work on the server: users still see the default index.php in the root directory. Here is my simple .htaccess file. Any ideas?

        RewriteEngine On
        Redirect /index.php http://MyDomain.com/subdir/

    Now my httpd.conf file:

        AccessFileName .htaccess

        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>

        <Directory "/var/www/icons">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <Directory "/var/www/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>

        <IfModule mod_negotiation.c>
            <IfModule mod_include.c>
                <Directory "/var/www/error">
                    AllowOverride None
                    Options IncludesNoExec
                    AddOutputFilter Includes html
                    AddHandler type-map var
                    Order allow,deny
                    Allow from all
                    LanguagePriority en es de fr
                    ForceLanguagePriority Prefer Fallback
                </Directory>
            </IfModule>
        </IfModule>
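
    If it turns out the server simply is not reading the .htaccess file (one possible explanation, not a diagnosis), the same redirect can be produced without mod_alias or mod_rewrite at all by letting the root index.php send it. A minimal sketch:

        <?php
        // index.php in the document root -- send visitors on to the subdirectory.
        header('Location: http://MyDomain.com/subdir/', true, 301);
        exit;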

  • cURL error 52: Empty reply from server

    - by Paul Sheldrake
    Hello, I have a cron job set up on one server to run a backup script in PHP that is hosted on another server. The command I've been using is formatted like this:

        curl -sS http://www.example.com/backup.php

    Lately I've been getting this error when the cron runs:

        curl: (52) Empty reply from server

    I have no idea what this means. If I go to the link directly in my browser, the script runs fine and I get my little backup zip file. Can anyone help?

    Thanks,
    Paul
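
    Error 52 just means the far end closed the connection without sending any response. To get more detail than the one-line error (this is my own sketch, not from the post), the cron host can call the backup URL through PHP's cURL bindings and log the exact error code, timing, and byte count, which helps tell a long-running backup being cut off apart from a plain network problem:

        <?php
        // backup-probe.php -- run on the cron host: php backup-probe.php
        $url = 'http://www.example.com/backup.php';

        $ch = curl_init($url);
        curl_setopt_array($ch, array(
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HEADER         => true,   // keep response headers for inspection
            CURLOPT_TIMEOUT        => 600,    // generous limit for a slow backup (assumption)
            CURLOPT_USERAGENT      => 'backup-probe',
        ));

        $body  = curl_exec($ch);
        $errno = curl_errno($ch);
        $error = curl_error($ch);
        $info  = curl_getinfo($ch);
        curl_close($ch);

        printf("curl errno: %d %s\n", $errno, $errno ? "($error)" : '(ok)');
        printf("HTTP status: %d, total time: %.1fs\n", $info['http_code'], $info['total_time']);
        printf("Bytes received: %d\n", strlen((string) $body));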

  • CodeIgniter: Passing variables via URL - alternatives to using GET

    - by John Durrant
    I'm new to CodeIgniter and have just discovered the difficulties of using the GET method of passing variables via the URL (e.g. domain.com/page.php?var1=1&var2=2). I gather that one approach is to pass the variables in the URI segments, but I haven't quite figured out how to do that yet, as it seems to create the expectation of having a function in the controller named as the specific URI segment?

    Anyway, instead of using GET I've decided to use POST, by adapting a submit button (disguised as a link) with the variables in hidden input fields. I've created the following solution, which seems to work fine, but I am wondering whether I'm on the right track here, or whether there is an easier way of passing variables via a link within CodeIgniter.

    I've created the following class in application/libraries/:

        <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

        class C_variables {

            function variables_via_link($action, $link_text, $style, $link_data)
            {
                $attributes = array('style' => 'margin:0; padding:0; display: inline;');
                echo form_open($action, $attributes);

                $attributes = array('class' => $style, 'name' => 'link');
                echo form_submit($attributes, $link_text);

                foreach ($link_data as $key => $value) {
                    echo form_hidden($key, $value);
                }

                echo form_close();
            }
        }
        ?>

    With the following CSS:

        /* SUBMIT BUTTON AS LINK
           adapted from this thread: http://forums.digitalpoint.com/showthread.php?t=403667
           Cross-browser support (apparently). */
        .submit_as_link {
            background: transparent;
            border-top: 0;
            border-right: 0;
            border-bottom: 1px solid #00F;
            border-left: 0;
            color: #00F;
            display: inline;
            margin: 0;
            padding: 0;
            cursor: hand /* Added to show hand when hovering */
        }

        *:first-child+html .submit_as_link { /* hack needed for IE 7 */
            border-bottom: 0;
            text-decoration: underline;
        }

        * html .submit_as_link { /* hack needed for IE 5/6 */
            border-bottom: 0;
            text-decoration: underline;
        }

    The link is then created using the following code in the view:

        <?php
        $link = new C_variables;
        $link_data = array('var1' => 1, 'var2' => 2);
        $link->variables_via_link('destination_page', 'here is a link!', 'submit_as_link', $link_data);
        ?>

    Thanks for your help...
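
    On the URI-segment question itself, this is standard CodeIgniter routing rather than anything specific to the code above: only the first two segments name the controller and method; any further segments are passed to that method as arguments (or can be read with $this->uri->segment()), so there is no need for a function per value. A sketch with made-up controller and method names:

        <?php
        // application/controllers/page.php -- hypothetical controller.
        // The URL  domain.com/index.php/page/show/1/2  calls show('1', '2'),
        // i.e. var1 and var2 travel as URI segments instead of ?var1=1&var2=2.
        class Page extends CI_Controller {

            public function show($var1 = NULL, $var2 = NULL)
            {
                // Equivalent explicit reads (segments 3 and 4 of the URI):
                // $var1 = $this->uri->segment(3);
                // $var2 = $this->uri->segment(4);

                echo "var1 = $var1, var2 = $var2";
            }
        }

    In a view, the link is then just an anchor built with the URL helper (load it first), e.g. echo anchor('page/show/1/2', 'here is a link!'); with no form, hidden fields, or CSS button styling needed.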

  • Convert .htaccess to Helicon Tech ISAPI_Rewrite

    - by Luis
    Can someone help me convert these .htaccess rewrite rules to Helicon Tech ISAPI_Rewrite?

        RewriteEngine On
        RewriteBase /

        # remove the www
        RewriteCond %{HTTP_HOST} ^(www\.$) [NC]
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # Add a trailing slash to paths without an extension
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]{1,5}|/)$
        RewriteRule ^(.*)$ $1/ [L,R=301]

        # Remove index.php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /index.php/$1 [L]

    Thanks

  • Send a variable to Fancybox in an onclick event

    - by samuel saul
    Hi, I want to send the href value, but it's not working.

        function display() {
            $.fancybox({
                'href'          : 'index.php',
                'width'         : '75%',
                'height'        : '75%',
                'autoScale'     : false,
                'transitionIn'  : 'none',
                'transitionOut' : 'none',
                'type'          : 'iframe'
            });
        }

    I tried this:

        function display(who) {
            $.fancybox({
                'href'          : 'index.php' + who,
                'width'         : '75%',
                'height'        : '75%',
                'autoScale'     : false,
                'transitionIn'  : 'none',
                'transitionOut' : 'none',
                'type'          : 'iframe'
            });
        }

        <a onclick="javascript:display("?id=11");" href="#" >create</a>

    This onclick handler is generated inside innerHTML, so it's not working with single quotes ('') either. Why?

  • Enable mod_rewrite in MAMP?

    - by ajsie
    How do I enable mod_rewrite in MAMP? I have created a .htaccess file in the document root with the following content:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|images|robots\.txt)
        RewriteRule ^(.*)$ /index.php/$1 [L]

    but it still doesn't work. I suspect that I have to enable mod_rewrite in MAMP's Apache configuration. How do I do that? Do I have to download anything? Thanks.
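
    A quick way to confirm from PHP whether mod_rewrite is actually loaded in MAMP's Apache (a sketch of my own; apache_get_modules() only exists when PHP runs as an Apache module, which is MAMP's default setup):

        <?php
        // modcheck.php -- drop into the MAMP document root and open it in a browser.
        if (!function_exists('apache_get_modules')) {
            exit('PHP is not running as an Apache module here, so this check is unavailable.');
        }
        echo in_array('mod_rewrite', apache_get_modules())
            ? 'mod_rewrite is loaded'
            : "mod_rewrite is NOT loaded: enable the rewrite_module LoadModule line in MAMP's httpd.conf";

    If it reports "not loaded", uncommenting the corresponding LoadModule line in MAMP's httpd.conf and restarting the servers is the usual fix; if it is loaded, the remaining suspects are AllowOverride for the document root and the rewrite rules themselves.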

  • libcurl in C# and .NET

    - by medvezatnik
    Hello! I'm a PHP developer; my current job is maintaining a few web sites, so I have some free time and thoughts about developing some GUI applications using the C# language and the .NET platform. Implementation of these applications requires libcurl functionality. Are there any .NET classes or third-party libraries in C# that can provide me with the same functionality as libcurl in PHP? Sorry for my bad English...
