Search Results

Search found 8408 results on 337 pages for 'cgi bin'.

Page 204/337

  • CentOS - PHP - Yum Install with Custom ./configure params

    - by Mike Purcell
    I have successfully configured and compiled PHP on my dev server, and it works great, but after talking to a sysadmin buddy, he told me that custom compiles of the latest builds are not recommended for production (or even development) systems. He described a situation where they custom-configured and compiled PHP 5.3.6, only to find an issue with a low-level Postgres driver, so they had to revert to 5.3.3. So I am considering going back to yum to install PHP. However, I have several custom configuration settings and was wondering: is it possible to pass or control how PHP will be compiled through yum? My current configure line:
        Configure Command => './configure' '--with-libdir=lib64' '--prefix=/usr/local/_custom/app/php' '--with-config-file-path=/usr/local/_custom/app/php/etc' '--with-config-file-scan-dir=/usr/local/_custom/app/php/etc/modules' '--disable-all' '--with-apxs2=/usr/sbin/apxs' '--with-curl=/usr/sbin/curl' '--with-gd' '--with-iconv' '--with-jpeg-dir=/usr/lib' '--with-mcrypt=/usr/bin' '--with-pcre-regex' '--with-pdo-mysql=mysqlnd' '--with-png-dir=/usr/lib' '--with-zlib' '--enable-ctype' '--enable-dom' '--enable-hash' '--enable-json' '--enable-libxml' '--enable-mbstring' '--enable-mbregex' '--enable-pdo' '--enable-session' '--enable-simplexml' '--enable-xml' '--enable-xmlreader' '--enable-xmlwriter'
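
    One way to keep yum in charge while still customizing the build is to rebuild the distribution's source RPM with your own flags - a sketch, assuming the yum-utils tooling is available (the rpmbuild path varies by CentOS release):

        yum install rpm-build yum-utils      # rpmbuild plus yumdownloader/yum-builddep
        yum-builddep php                     # pull in PHP's build dependencies
        yumdownloader --source php           # fetch the source RPM
        rpm -ivh php-*.src.rpm               # unpacks to ~/rpmbuild (or /usr/src/redhat on CentOS 5)
        # edit the %configure flags in the SPECS/php.spec file, then:
        rpmbuild -ba ~/rpmbuild/SPECS/php.spec

    The rebuilt RPMs can then be installed through yum (e.g. yum localinstall), so dependency tracking survives the custom flags.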

    Read the article

  • Optimized CSF LFD to minimize false positive emails on new install? CentOS 6.2 + ISPConfig 3

    - by Damainman
    I have a remote dedicated server running CentOS 6.2 x64 with ISPConfig 3. This is a brand-new install. Server purpose: basic LAMP web hosting with PureFTPd, BIND, ClamAV, and RKHunter. Any advice, or a link to a guide which clearly explains how to optimize the CSF+LFD configuration, is greatly appreciated. I am not exactly sure where to start, or which restrictions I can safely loosen. At the moment my inbox is flooded with alerts from LFD such as:
        Suspicious process running under user postfix

        Excessive resource usage: haldaemon
        Account: haldaemon
        Resource: Process Time
        Exceeded: 1823 > 1800 (seconds)
        Executable: /usr/sbin/hald
        Command Line: hald
        PID: 1031
        Killed: No

        Excessive resource usage: amavis
        Time: Tue Jun 5 12:43:35 2012 -0700
        Account: amavis
        Resource: Virtual Memory Size
        Exceeded: 330 > 200 (MB)
        Executable: /usr/bin/perl
        Command Line: amavisd (virgin child)
        PID: 27931
        Killed: No

        Excessive resource usage: apache
        Time: Tue Jun 5 12:35:33 2012 -0700
        Account: apache
        Resource: Virtual Memory Size
        Exceeded: 437 > 200 (MB)
        Executable: /usr/sbin/httpd
        Command Line: /usr/sbin/httpd
        PID: 27286
        Killed: No
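
    As a starting point, the alerts shown all come from csf's process tracking, which is tunable. A sketch (the option names are standard csf.conf settings; the values are illustrative, not recommendations):

        # /etc/csf/csf.conf -- raise the process-tracking thresholds
        PT_USERMEM = "512"       # alert above this many MB of memory (0 = off)
        PT_USERTIME = "7200"     # alert above this many seconds of process time

        # /etc/csf/csf.pignore -- exempt known-good daemons from tracking
        exe:/usr/sbin/hald
        exe:/usr/sbin/httpd
        user:amavis

    Reload with csf -r after editing.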

    Read the article

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a content management system. I have never dug into CMS before, so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before, and I think I'm getting familiar with how it works. However, the current situation is this: Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system, so I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work, but I am currently not having luck.
    THE SETUP: *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsrv\fcgiext.dll, like it should be. The fcgiext.ini config has the proper lines:
        [Types]
        php=PHP
        [PHP]
        ext=C:\program files\PHP\php-cgi.exe
    And the php.ini file also has the correct configs: all extensions are disabled, and I changed the correct things for FastCGI. Everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page () on another computer, I get the following error:
        FastCGI Error
        The FastCGI Handler was unable to process the request.
        Error Details:
        * Section [PHP] not found in config file.
        * Error Number: 1413 (0x80070585).
        * Error Description: Invalid index.
        HTTP Error 500 - Server Error.
        Internet Information Services (IIS)
    A quick Google search suggests that I have it all set up correctly as far as the INIs go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
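
    Two hedged things to check against the setup above: the IIS 6 FastCGI extension documents ExePath= (not ext=) as the key inside the handler section, and 64-bit Windows keeps two copies of fcgiext.ini (under system32\inetsrv and SysWOW64\inetsrv) - editing the copy the worker process doesn't read yields exactly this "Section [PHP] not found" error. A minimal config sketch:

        [Types]
        php=PHP

        [PHP]
        ExePath=C:\Program Files\PHP\php-cgi.exe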

    Read the article

  • pfSense CSR Generation

    - by ErnieTheGeek
    I'm trying to figure out how to generate a CSR so I can generate and install an SSL cert. Here's a LINK to what I've tried. Granted, that post was for m0n0wall, but I figured openssl is openssl. Here's where I get stuck. When I run this:
        /usr/bin/openssl req -new -key mykey.key -out mycsr.csr -config /usr/local/ssl/openssl.cnf
    I get this:
        error on line -1 of /usr/local/ssl/openssl.cnf
        54934:error:02001002:system library:fopen:No such file or directory:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/bio/bss_file.c:122:fopen('/usr/local/ssl/openssl.cnf','rb')
        54934:error:2006D080:BIO routines:BIO_new_file:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/bio/bss_file.c:125:
        54934:error:0E078072:configuration file routines:DEF_LOAD:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/conf/conf_def.c:197:
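
    The errors just say that the -config path doesn't exist on this box; asking openssl where it actually looks avoids guessing. A sketch (the /etc/ssl path below is an assumption - use whatever OPENSSLDIR reports):

        openssl version -d        # prints OPENSSLDIR, e.g. OPENSSLDIR: "/etc/ssl"
        /usr/bin/openssl req -new -key mykey.key -out mycsr.csr \
            -config /etc/ssl/openssl.cnf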

    Read the article

  • Looking for zsh completion file for OS X native commands

    - by Chiggsy
    I've been digging deep into what actually comes with OS X in /usr/bin and especially /usr/libexec. Quite good stuff really, although the command syntax is a bit... odd. Let me direct the curious to the command that made me think of this:
        networksetup -printcommands
    I cannot think of a command that better illustrates the need for good completion. security -h perhaps, but those commands have a familiar, easy-to-read format. I beseech the community: please point me to a place where I can find such a thing. I never type them right, and I ache for tab completion for these. Anyone have any idea where I could grab something? I'd prefer to stand on the shoulders of giants instead of trying to make a zsh/bash completion script leap into the world, ready for battle, like Athena, from my forehead. I am no Zeus when it comes to compctl. Not at all.
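
    Failing a ready-made file, a minimal self-rolled sketch (untested; it assumes the first word of each -printcommands output line is the verb) gets basic verb completion:

        #compdef networksetup
        # Sketch: offer the verbs networksetup itself reports; a real completion
        # file would also describe each verb's arguments.
        local -a cmds
        cmds=(${(f)"$(networksetup -printcommands 2>/dev/null | awk '{print $1}')"})
        _describe 'networksetup command' cmds

    Dropped into a file named _networksetup somewhere on $fpath, compinit will pick it up.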

    Read the article

  • Scanning only works under "sudo" (Ubuntu)

    - by JoelFan
    When I try to scan using simple-scan, the UI says "Failed to scan - Unable to connect to scanner". When I run it from the command line I get:
        joel@home:/usr/bin$ simple-scan -d
        ** (simple-scan:6554): DEBUG: Starting Simple Scan 2.32.0.1, PID=6554
        ** (simple-scan:6554): DEBUG: Restoring window to 600x400 pixels
        ** (simple-scan:6554): DEBUG: sane_init () -> SANE_STATUS_GOOD
        ** (simple-scan:6554): DEBUG: SANE version 1.0.22
        ** (simple-scan:6554): DEBUG: Requesting redetection of scan devices
        ** (simple-scan:6554): DEBUG: Processing request
        ** (simple-scan:6554): DEBUG: Requesting scan at 300 dpi from device '(null)'
        ** (simple-scan:6554): DEBUG: scanner_scan ("(null)", 300, SCAN_SINGLE)
        ** (simple-scan:6554): DEBUG: sane_get_devices () -> SANE_STATUS_GOOD
        ** (simple-scan:6554): DEBUG: Device: name="brother2:bus4;dev1" vendor="Brother" model="MFC-210C" type="USB scanner"
        ** (simple-scan:6554): DEBUG: Processing request
        ** (simple-scan:6554): DEBUG: sane_open ("brother2:bus4;dev1") -> SANE_STATUS_IO_ERROR
        ** (simple-scan:6554): WARNING **: Unable to get open device: Error during device I/O
    FYI, I have already done:
        joel@home:~$ sudo chmod a+rwx /dev/bus/usb
        joel@home:~$ sudo chmod a+rwx /dev/bus/usb/*
    If I run under sudo:
        joel@home:~$ sudo simple-scan
    it works. How can I get simple-scan to work without sudo?
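
    For the record, a narrower fix than opening up all of /dev/bus/usb is group membership or a udev rule - a sketch (the "scanner" group name and Brother's USB vendor ID 04f9 are assumptions; verify the IDs with lsusb):

        sudo usermod -a -G scanner joel      # then log out and back in
        # or a udev rule, e.g. /etc/udev/rules.d/60-brother-mfc210c.rules:
        # ATTRS{idVendor}=="04f9", MODE="0664", GROUP="scanner"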

    Read the article

  • Is it possible to run two servers on one system?

    - by srikanth
    Hi, I have a small problem while trying to run the WAMP server. My system is already running an Apache server, and I have a PHP application, so I am trying to install the WAMP server alongside it, but the WAMP server will not start. I changed the WAMP server's port number: in C:\wamp\bin\apache\Apache2.2.11\conf\ I have an httpd.conf file, and in it I changed the listener and host name to another port number. It still isn't working. Can anyone help me? Please...
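
    For reference, both of these httpd.conf directives have to change together; a sketch using 8080 as an example port (any free port works):

        # C:\wamp\bin\apache\Apache2.2.11\conf\httpd.conf
        Listen 8080
        ServerName localhost:8080

    After restarting the WAMP Apache service, the PHP application is reachable at http://localhost:8080/ while the original Apache keeps port 80.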

    Read the article

  • Sporadic '.Xauthority not writable, changes will be ignored' going from OS X -> Linux

    - by Kamil Kisiel
    Every now and then, when users SSH from their OS X (Snow Leopard) workstations to one of our Linux hosts, they receive the message:
        /usr/bin/xauth: ~/.Xauthority not writable, changes will be ignored
    Of course, their X-forwarded applications will not work at that point. However, if they log out and log right back in again, they do not get the message and everything works as expected. On their Macs they get their home directory via AFP; the Linux machines get it via NFS. Any ideas on what could be going on here?
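
    If the underlying NFS hiccup can't be pinned down, one commonly suggested workaround is to keep the cookie file off the network home entirely. A sketch (untested; per sshd(8), sshd runs ~/.ssh/rc when it exists and feeds it the X11 "proto cookie" pair on stdin):

        # ~/.ssh/rc on the Linux hosts
        if read proto cookie; then
            XAUTHORITY="/tmp/.Xauthority.$USER" xauth add "$DISPLAY" "$proto" "$cookie"
        fi

    $XAUTHORITY then also has to point at the same /tmp path in the users' shell startup files, or X clients will go back to the NFS copy.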

    Read the article

  • I can't run uwsgi as a normal user

    - by atomAltera
    I want to run the uwsgi server as the www user, but if I run:
        uwsgi --socket $SOCKET --chmod-socket 666 --pidfile $PIDFILE --daemonize $LOGFILE --chdir $CHDIR --pp $PYTHONPATH --module main --post-buffering 8192 --workers 1 --threads 10 --uid www --gid www
    a socket creation error occurs. Log:
        *** Starting uWSGI 1.4.1 (64bit) on [Mon Dec 10 22:15:23 2012] ***
        compiled with version: 4.4.5 on 17 November 2012 23:31:14
        os: Linux-2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012
        nodename: autoblog
        machine: x86_64
        clock source: unix
        pcre jit disabled
        detected number of CPU cores: 2
        current working directory: /
        writing pidfile to /tmp/uwsgi_mysite.pid
        detected binary path: /usr/local/bin/uwsgi
        setgid() to 1002
        set additional group 1004 (files)
        setuid() to 1002
        *** WARNING: you are running uWSGI without its master process manager ***
        your memory page size is 4096 bytes
        detected max file descriptor number: 1024
        lock engine: pthread robust mutexes
        unlink(): Operation not permitted [core/socket.c line 109]
        bind(): Address already in use [core/socket.c line 141]
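
    A hedged reading of the last two log lines: a stale socket from an earlier root-owned run is still in /tmp, and the sticky bit on /tmp stops the www user from unlinking someone else's file, so the subsequent bind() fails. A sketch:

        sudo rm /tmp/uwsgi_mysite.sock       # hypothetical path -- whatever $SOCKET is
        # keeping the socket in a directory owned by www avoids a repeat:
        sudo mkdir -p /var/run/uwsgi && sudo chown www:www /var/run/uwsgi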

    Read the article

  • Annoying Application on Mac Creating Unwanted File

    - by superuser
    Last year, I installed an application on my Mac called Pocketcam. A few days later, I removed the app from my Mac by dragging and dropping it into the trash bin. Apparently that didn't remove the application completely, as some files left on the computer are still creating a log file in my Documents folder. The file keeps coming back after I've deleted it numerous times. So my question is: how do I find whatever is creating that file and remove it completely from my Mac?
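
    A sketch for hunting down the writer (the log file name here is hypothetical - substitute the real one):

        sudo lsof ~/Documents/Pocketcam.log              # which process has it open?
        launchctl list | grep -i pocket                  # leftover launch agent?
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons

    Removing the matching .plist (and the program it points at) should stop the file from coming back.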

    Read the article

  • New XEN Server, Intel i7, Errors were encountered while processing: xen-linux-system-amd64

    - by Sheldon
    I have just got a new machine to run Xen VMs on; it has an Intel i7 processor (Intel Haswell Core i7-4790 3.6GHz 8MB LGA1150). I have set up the host with the current 6.2.0. I have set up a new Debian 7 64-bit VM, and any package I try to install fails with the following errors:
        Errors were encountered while processing:
         xen-utils-common
         xen-utils-4.1
         xen-system-amd64
         xen-linux-system-3.2.0-4-amd64
         xen-linux-system-amd64
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    Excuse my noob-ness, but should it even be running an AMD package? Any ideas on how to fix this? Thanks
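
    For what it's worth, "amd64" is simply Debian's name for the 64-bit x86 architecture and is correct on Intel CPUs too. A first-step sketch to surface the real dpkg failure:

        sudo dpkg --configure -a     # rerun the failed maintainer scripts, showing their errors
        sudo apt-get install -f      # let apt repair the dependency state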

    Read the article

  • Mac OS X file recovery

    - by Daniel
    I thought that all operating systems would merge folder contents when folders are moved to the same location. Imagine my surprise when that didn't happen, and I have hundreds, if not thousands, of files that have gone missing and are nowhere to be found. Because they were not "deleted", they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after about a 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see if they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; I'm not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article

  • Regarding Reinstalling PostgreSQL

    - by Vivalavista
    I was using PostgreSQL 8.4. I tried removing it through the Synaptic Package Manager and then installing 9.1, but I still had version 8.4, so I deleted all the files associated with PostgreSQL. Now I am unable to install any version of PostgreSQL. When I try, I get this error:
        Setting up postgresql-9.1 (9.1.3-1~lucid) ...
        .: 12: Can't open /usr/share/postgresql-common/maintscripts-functions
        dpkg: error processing postgresql-9.1 (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of postgresql:
         postgresql depends on postgresql-9.1; however:
          Package postgresql-9.1 is not configured yet.
        dpkg: error processing postgresql (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        Errors were encountered while processing:
         postgresql-9.1
         postgresql
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    Please tell me how to remove Postgres completely so I can install a fresh version.
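
    The "Can't open /usr/share/postgresql-common/maintscripts-functions" line suggests the hand-deleted files belonged to postgresql-common, so restoring that package first is the key step. A sketch (purging removes config and data - back up anything needed):

        sudo apt-get install --reinstall postgresql-common
        sudo apt-get purge postgresql postgresql-8.4 postgresql-9.1 postgresql-common postgresql-client-common
        sudo rm -rf /etc/postgresql /etc/postgresql-common /var/lib/postgresql
        sudo apt-get install postgresql-9.1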

    Read the article

  • AWS EC2 can't execute user-data script

    - by Bloodnut
    I'm pretty new to AWS and EC2, but I want to launch instances from another instance, with a user-data script that runs after boot. I have installed the EC2 tools and ran the command as explained in various examples like http://www.turnkeylinux.org/blog/ec2-userdata and Eric Hammond's tutorials. However, when I actually use the command:
        ec2-run-instances --key my-key --user-data-file myscript my-ami
    it only launches the new instance but doesn't execute the script. myscript contains:
        #!/bin/bash
        echo "hello" > ~/output.txt
    I'm running Ubuntu Server 12.04 AMIs; the target AMIs are duplicates of the initiating instance. If I run curl http://169.254.169.254/latest/user-data the imported script is there.
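
    A hedged explanation: on Ubuntu 12.04 AMIs, user-data scripts are run by cloud-init, and only on an instance's first boot - an AMI duplicated from an already-booted instance carries the "already ran" marker with it. A sketch to verify and reset:

        grep -i user /var/log/cloud-init.log     # did cloud-init see and run the script?
        sudo rm -rf /var/lib/cloud/instance*     # clear first-boot state before re-imaging

    Note also that user-data scripts run as root, so ~/output.txt would land in /root/output.txt.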

    Read the article

  • Installing the Perforce visual client (P4V) on Linux

    - by Manish
    I am from a Mac background, trying my hand at installing the Perforce visual client (P4V) on my Linux box. I downloaded the correct version here and untarred the files. Then I cd to the directory ~/Desktop/p4v-2012-blah-blah/bin and run chmod +x p4*. After this I try running p4v (by double-clicking), but I don't see anything. The file type is shown as a "text executable", but I don't know why it is not running. On the Mac I had done the same thing: just clicked on p4v and the client would show up (where I filled in the server address and everything). I'm not sure what is going wrong here. Can someone give me directions? FWIW, I did check out this link.
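
    Launching it from a terminal usually surfaces the real error - on a fresh box, missing shared libraries are the usual suspect. A sketch (p4v.bin is an assumption; check what the p4v wrapper script actually execs):

        cd ~/Desktop/p4v-2012-blah-blah/bin
        ./p4v                              # error messages land in the terminal
        file p4v                           # "text executable" suggests a wrapper script
        ldd ./p4v.bin | grep 'not found'   # list any missing libraries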

    Read the article

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory, as well as a developed ~/.vim directory and some other libraries and such in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to do development and testing of new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change & export $HOME, e.g.:
        cd ~/testing; git clone ~ home
        export HOME=~/testing/home
        cd ~
        screen -S testing-home
        # start vim, write/revise plugins, edit scripts, etc.
        # test revisions
    However, since I've never tried this before, I'm concerned that some programs, environment variables, etc. may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
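
    One caveat worth bolting on (a sketch, not exhaustive): programs that resolve the home directory through the passwd database (getpwuid()) ignore $HOME and will still touch the real one. Scoping the override to a single login shell keeps the experiment contained:

        env HOME=~/testing/home XDG_CONFIG_HOME=~/testing/home/.config bash -l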

    Read the article

  • rm failing inside cron script

    - by Nicholas
    I have a cron job calling a bash script which runs fine, except for one line inside it that is supposed to remove all files in a directory. The result of this line is always 'no such file or directory', even though I have verified (many times) that there are files in that directory. The line in question is simply:
        rm /dir1/dir2/dir3/*
    The script works fine when run manually in the terminal, so it must be something about how the cron is run. I've tried giving 'dir3' and all the files inside it every permission possible, so it shouldn't be a permission problem. (The directory and files are also owned by the user.) I've tried specifying 'SHELL=/bin/bash' inside the crontab. There is no sticky bit set and there is no alias on the rm command. Interestingly, changing the 'rm' command to 'ls' gives the same negative result (unless you remove the trailing '*', and then it works). What am I missing here?
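
    A debugging sketch that records what the cron session itself sees in that directory (the every-minute schedule is just for the test):

        * * * * * ls -la /dir1/dir2/dir3/ >> /tmp/cron_debug.log 2>&1

    If that listing comes back empty or errors, the usual suspects are cron's minimal environment ($PATH, $HOME) or the directory living on a mount (autofs, an encrypted home) that isn't available to the cron session.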

    Read the article

  • Handling Junk Email with Apple Mail.app and Gmail

    - by Axeva
    I've just set up my Apple Mail client to work with Google Apps through IMAP. One lingering question is how best to handle spam (junk mail), however. In their help section, Google recommends that we disable junk filtering on the client: http://mail.google.com/support/bin/answer.py?answer=78892 This leads me to wonder what we should do when a junk message makes it past Google's filter. Do I just delete the message? If I do, the Google spam filter will never improve and "learn" that the message was junk. Do I have to log in to the web interface at Google to mark the message as spam? That seems a bit arduous for every spam email I get. What's the best way to handle this? Thanks!

    Read the article

  • Amazon EC2 folder missing

    - by CQM
    The instructions say: "To set permissions on the settings file: On your Amazon EC2 instance, at a command prompt, use the following command to set permissions:
        sudo chmod 666 /var/www/html/sites/default/settings.php
    ...except I don't have a www folder in my instance:
        [ec2-user@ip-10-242-118-215 ~]$ cd /
        [ec2-user@ip-10-242-118-215 /]$ ls
        bin  boot  cgroup  dev  etc  home  lib  lib64  local  lost+found  media  mnt  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var
        [ec2-user@ip-10-242-118-215 /]$ cd var
        [ec2-user@ip-10-242-118-215 var]$ ls
        account  cache  db  empty  games  lib  local  lock  log  mail  nis  opt  preserve  racoon  run  spool  tmp  yp
    Please advise: did I forget to install something that the Amazon instructions assumed I knew about? I'm running the 64-bit Amazon Linux AMI (March 2012). I feel like the web server is missing.
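
    The base Amazon Linux AMI ships without a web server; /var/www is created when one is installed. A sketch:

        sudo yum install httpd php
        sudo service httpd start
        sudo chkconfig httpd on      # start at boot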

    Read the article

  • using flock in bash, why does killing a child process kill the parent process too?

    - by Robby
    In the code snippet below, I want the script to be limited to only running one copy at a time, and for it to restart server.x if it dies for any reason. Without flock involved, the loop correctly restarts if I kill the server process, but once I use flock to ensure the script is only running once, if I kill server.x it also kills the parent process. How can I ensure that killing the child process in a flock script keeps the parent around?
        #!/bin/bash
        set -e
        (
            flock -x -n 200
            while true
            do
                ./server.x $1
            done
        ) 200>/var/lock/.m_rst.$1.lock
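
    A hedged guess at the mechanism: with set -e in force, a server.x killed by a signal exits with a status above 128, which aborts the lock-holding subshell, and with it the script. Tolerating the child's exit status keeps the supervisor loop alive:

        while true
        do
            ./server.x "$1" || true    # don't let set -e abort on a non-zero exit
        done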

    Read the article

  • nginx, php-fpm, and multiple roots - how to properly try_files?

    - by Carson C.
    I have a server context which is rooted in a login application. The login application handles, well, logins, and then returns a redirect to "/app" on the same server if a login is successful. The application is rooted elsewhere, which is handled by the location block shown here:
        location ^~ /app {
            alias /usr/share/nginx/www/website.com/content/public;
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                include fastcgi_params;
            }
        }
    This works just fine; however, the $uri getting passed to PHP still contains /app, even though I am using alias rather than root. Because of this, the try_files directive fails to a 404 unless I link app -> ./ in /usr/share/nginx/www/website.com/content/public. It's obviously silly to have that link in there, and if that link ever gets lost, bam, dead website without an obvious cause. The next thing I tried was to remove the try_files directive entirely. This allowed me to rm the app link in my /public folder, and PHP had no problem locating the file and executing it. I used that to dump my $_SERVER global from PHP, and found that "SCRIPT_FILENAME" => "/usr/share/nginx/www/website.com/content/public/index.php" when the browser URI is /app. This is exactly right. Based on my fastcgi_params below, this led me to believe that try_files $request_filename =404; should work, but no dice; nginx still doesn't find the file, and returns 404. So for right now, it will only work without any try_files directive. PHP finds the file, whereas try_files could not. I understand this may be a PHP security risk. Can anyone indicate how to move forward? The nginx logs don't contain anything relating to the failed try_files attempt, as far as I can see.
    fastcgi_params:
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $server_https;
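
    For what it's worth, try_files inside an alias'd location has been a long-standing nginx sore spot (the $uri it tests keeps the location prefix, so the aliased file is never found). One workaround sketch, using one of the few safe forms of if:

        location ~ \.php$ {
            if (!-f $request_filename) { return 404; }
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            include fastcgi_params;
        }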

    Read the article

  • grub-efi refuses to chainload Windows 8.1

    - by Alexei Averchenko
    I have installed LMDE (with GRUB in the MBR) after I installed Windows 8.1. I then installed the grub-efi package and added the custom Windows options:
        #!/bin/sh
        exec tail -n +3 $0
        menuentry "Windows" {
            search --fs-uuid --no-floppy --set=root A89A-7F4C
            chainloader (${root})/EFI/Boot/bkpbootx64.efi
        }
        menuentry "Windows (backup bootloader)" {
            search --fs-uuid --no-floppy --set=root A89A-7F4C
            chainloader (${root})/EFI/Microsoft/Boot/bkpbootmgfw.efi
        }
    These are basically a leftover from my older Ubuntu setup. However, GRUB is refusing to load them, complaining about the invalid signature. What do I do now?
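
    An "invalid signature" from GRUB while chainloading usually points at UEFI Secure Boot rejecting the binary, though a wrong --fs-uuid can produce it too (hedged). A quick check sketch (mokutil may need installing):

        mokutil --sb-state       # reports whether Secure Boot is enabled

    Disabling Secure Boot in the firmware setup, or chainloading the stock bootmgfw.efi instead of a backup copy, are the usual ways forward.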

    Read the article

  • pip install fails on guest Linux Mint 15

    - by synergetic
    On my Windows 7 PC, I installed a VMware VM for Linux Mint 15. The Windows PC is behind a corporate firewall/proxy server. Inside Linux I ran:
        sudo apt-get install python-virtualenv
    Then I created a ~/projects folder and a Python virtual environment:
        mkdir projects
        cd projects
        virtualenv venv
    Then activated my virtual env:
        . venv/bin/activate
    So far, no problem. Then I tried to install Python libraries, for example markupsafe:
        pip install markupsafe
    It throws an error:
        Cannot fetch index base URL https://pypi.python.org/simple/
        Could not find any downloads that satisfy the requirement markupsafe
        No distributions at all found for markupsafe
        Storing complete log in /home/me/.pip/pip.log
    Inside pip.log I found:
        <urlopen error [Errno 104] Connection reset by peer>
    Installing any other library throws a similar error. What's wrong here?
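
    The corporate proxy is the likely culprit: the VM's shell doesn't inherit Windows' proxy settings. A sketch (the proxy host and port are placeholders for the real corporate values):

        export http_proxy=http://proxy.example.com:8080
        export https_proxy=$http_proxy
        pip install --proxy="$http_proxy" markupsafe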

    Read the article

  • ls returns nothing only in certain directories

    - by Jakobud
    I have a RAID drive mounted at /data/, and in certain directories, like /data/somedir/somesubdir/, when I run ls with or without any flags, the terminal doesn't return anything. It does not return an empty directory listing; it simply goes to the next line and sits there, blank, with no prompt coming up. I cannot Ctrl-C out of it; I have to close the terminal instance and start over. At first I thought it was something to do with the ls command, but it's pointing to /bin/ls and I can ls other directories just fine. Also, running find /data/somedir/somesubdir immediately finds all the files, just as expected.
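
    A sketch to see where ls actually blocks (the trace file path is arbitrary):

        strace -o /tmp/ls.trace ls /data/somedir/somesubdir
        tail /tmp/ls.trace       # stuck in getdents()/stat() implicates the fs/RAID layer
        dmesg | tail             # look for I/O or RAID errors

    A hang that ignores Ctrl-C is typical of a process in uninterruptible disk sleep ('D' state in ps), which points at the storage stack rather than ls itself.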

    Read the article

  • CentOS 5.8 Server, manual install PHP 5.2.17

    - by Shiro
    I would like to manually install PHP 5.2.17. I managed to install httpd and MySQL, but when I came to PHP 5.2.17 I could not find a proper guide. These are the steps I took on a fresh installation of CentOS 5.8 x86_64 (Server & Server GUI):
        yum install httpd httpd-devel
        /etc/init.d/httpd start   # OK
        /etc/init.d/httpd stop    # OK
        yum install mysql mysql-server mysql-devel
        yum remove php
        yum groupinstall "Development Tools"
        yum install libxml2-devel
        wget http://www.php.net/release/php5.2.17.zip   # client requirement: must use this version
        cd php5.2.17
        ./configure --with-apxs2=/usr/sbin/apxs --with-mysql=/usr/local
    This is where I get confused: I could not find /usr/sbin/apxs on my system. Another Google search on how to manually install PHP pointed to ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql. In both locations I cannot find apxs or apache2. I'm worried I've made a mistake somewhere. Please help and guide me on this. I am a newbie on CentOS.
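
    On CentOS, apxs is shipped by httpd-devel and normally installs to /usr/sbin/apxs; a sketch to confirm before re-running ./configure:

        yum install httpd-devel
        rpm -ql httpd-devel | grep apxs     # prints where the package put it
        which apxs

    If nothing turns up, the earlier httpd-devel install likely failed; re-running it will show why.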

    Read the article
