Search Results

Search found 3984 results on 160 pages for 'modules'.


  • With puppet, can you have the client ask to be a certain set of roles?

    - by Aitch
    I've recently got my puppetmaster and client up and running and have had the client correctly signed, then requested and applied simple changes, all good. I have a growing number of machines (100). They are not consistently named (historical reasons). They fall into a handful of categories (think of it like: dataserver_type1, dataserver_type2, webserver_type1, webserver_type2...). New instances of these types of machines are added weekly. I don't understand (yet), or cannot see, how I can declare a "generic" node of (say) "dataserver_type1" that contains whatever modules it needs, and set something in the client's puppet.conf that says "I am a dataserver_type1", without using the hostname/FQDN. If I set the node name in the catalog as (say) "my-data-server-type1" - the certified hostname - it picks it up and works. I know you can use patterns for hostnames, but as I said, my server names are not consistent and I can't change them. It seems tedious to have to edit a file and manually add a node for each server when they continue to grow. Edit: Digging deeper, it seems roles may be what I want. But there still seems to be an element whereby the master has to contain a list of roles that a specific named server should carry out. Perhaps what I am asking is: how can a client say "I want to be this role" without the server having to be updated?
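    One pattern that fits this (a sketch of one common approach, not the only one): have each machine advertise its role as a custom Facter fact - for example a one-line file dropped by whatever provisions the box - and classify on that fact in site.pp instead of on node names. The fact location and the profile class names below are assumptions, not something from the question:

        # on each client, one external fact (newer Facter versions read
        # /etc/facter/facts.d/; older ones can export FACTER_role in puppet's
        # environment instead):
        #
        #   /etc/facter/facts.d/role.txt
        #   role=dataserver_type1

        # site.pp on the master: branch on the fact, not the certname
        node default {
          case $::role {
            'dataserver_type1': { include profile::dataserver_type1 }
            'dataserver_type2': { include profile::dataserver_type2 }
            'webserver_type1':  { include profile::webserver_type1 }
            default:            { include profile::base }
          }
        }

    An external node classifier (ENC) is the heavier-weight version of the same idea, but at a hundred machines a single fact plus a case statement is usually enough.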

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling nginx and the Rails application. The application itself uses about 185976 of memory per instance (RSS). Our traffic is < 1000 per day and the pages are mostly cached, so there are fewer hits to the Rails app itself. My question is: how can I calculate optimal nginx config settings for my app? Below is the current config:

        worker_processes 1;

        # pid of nginx master process
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
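    For what it's worth, a rule-of-thumb sketch rather than a calculation (all values illustrative): the nginx workers are rarely the bottleneck at this traffic level - one worker per CPU core with the default 1024 connections is plenty - and the RAM budget is mostly governed by how many Passenger application processes you allow (roughly the memory you can spare divided by the per-instance RSS; if that 185976 figure is kilobytes, that's ~180 MB each):

        # illustrative sizing for a small 1-2 core, 2 GB VPS
        worker_processes 2;                  # ~ number of CPU cores

        events {
            # max concurrent clients ~= worker_processes * worker_connections
            worker_connections 1024;
        }

        http {
            # Passenger, not nginx, is what consumes the RAM:
            # pool size ~= (RAM left for the app) / (RSS per app instance)
            passenger_max_pool_size 6;
        }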

    Read the article

  • apache segmentation error

    - by lush
    I can't start Apache; it fails with the following errors:

        [root@web]# /etc/init.d/httpd start
        Starting httpd: /bin/bash: line 1: 19232 Segmentation fault  /usr/sbin/httpd

        [root@web]# /usr/sbin/apachectl -k start
        /usr/sbin/apachectl: line 102: 19919 Segmentation fault  $HTTPD $OPTIONS $ARGV

    I use the Webmin control panel and I've already tried re-installing Apache from scratch. Can someone advise what else I should try? Many thanks.

    UPDATE: The only line ever written to the error logs seems unimportant:

        [Mon Nov 14 19:00:09 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)

    UPDATE 2: I recently found the error below in the logs. It looks like some modules are incompatible, so I've just disabled these extensions, fileinfo and mcrypt, in my php.ini; I should be able to start the web server without them.

        PHP Warning:  PHP Startup: fileinfo: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match
         in Unknown on line 0
        PHP Warning:  Module 'mcrypt' already loaded in Unknown on line 0

    UPDATE 3:

        [root@web]# file /usr/sbin/httpd
        /usr/sbin/httpd: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, stripped

        [root@web]# uname -m
        x86_64

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of the module definition:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<<  -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but that only works if the file has already been loaded. Can I list somewhere the format data files that are already loaded? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if(!(Test-Path $target)) {
                    new-Item -ItemType Directory -Path $target | Out-Null
                }
                get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed. You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, Update-FormatData has already been called by the time I run the install script. The install script then force-imports the module, which runs the module's first statement - and with it the Update-FormatData call - again.
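    One way to make the call tolerant of a re-import is simply to swallow the "already present" failure instead of trying to detect it up front; a minimal sketch, assuming nothing beyond what the module already does:

        # at the top of the module, instead of the bare call:
        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot "*.ps1xml") -ErrorAction Stop
        } catch {
            # on Import-Module -Force the format files are typically already loaded;
            # treat that as non-fatal rather than letting it abort the import
            Write-Verbose "Update-FormatData skipped: $_"
        }

    Get-FormatData will also list the formatting definitions the current session already knows about, which at least indirectly answers the "can I list the loaded format data files" part.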

    Read the article

  • Windows module installer delaying login, server 2008 R2

    - by Kyle
    We updated our servers this weekend (Windows updates); everything went fine except that one of our terminal servers now hangs at login with the message "waiting for windows modules installer." It eventually times out and leaves an event log message that the service has stopped unexpectedly. I have disabled the service and users can now log in in a reasonable time frame. However, we will need to re-enable the service in order to install further updates. I'm not sure where to start with this one; I'm an entry level admin and my colleagues are on vacation today. Thank God this isn't a serious problem. Further details:

        - It affects all users.
        - The only third party software on the server is our ERP software and Screwdrivers from Tricerat.
        - The only event log message is that the service has stopped unexpectedly.
        - The Server Manager screen does not display any information about roles; it just says "error".
        - The remote desktop roles all seem to be functioning properly; RemoteApp works, as does standard RDP.

    Let me know if there are any further details I can provide; I will be checking this frequently throughout the day.

    Read the article

  • Trying to retrieve data with a thermaltake blacX enclosure: Windows 7 believes the drive to be "uninitialized"

    - by Peeter Joot
    I have a laptop that won't boot. It appears to be a power problem: the laptop turns itself off within about 10 seconds of pressing the power button (the power lights come on briefly, and there is no display with or without an external monitor). I've followed the Dell troubleshooting guide, which suggested reseating the memory modules and the hard drives, but that didn't help. Before having the laptop serviced, I wanted to get some data off the hard drive. I bought a Thermaltake BlacX enclosure, intending to use it both to retrieve the drive data and later as external storage. Following the instructions (insert cables, insert drive, power on) goes fine, and Windows 7 on another laptop installs the device driver software. However, no drive letter shows up in 'Computer'. Under Computer > Manage > Storage I see the drive is there, and there's an option to "initialize" the drive. The Windows "initialize" dialog gives me the option to pick between "MBR" and "GPT" partitioning, which sounds like a good way to destroy the data on the drive. I'm thinking that I've purchased the wrong device for the job (or that my old drive is damaged). The old drive to recover info from is a Western Digital 500GB/7200rpm SATA drive, if that is relevant. Both the original laptop and the one I'm using for recovery are running Windows 7. Does anybody have experience with using a BlacX enclosure to recover data off an already formatted drive?

    Read the article

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried with this config (taken from the docs) but it didn't work:

    /etc/nginx/nginx.conf:

        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings; I need to control them from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h because of these directives. I was thinking whether I should try proxy_cache; I'm not sure what the difference is. In both the fastcgi and proxy module docs it says this: "The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented." nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
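    One gotcha worth ruling out before blaming the directives (an assumption on my part, not a diagnosis): by default nginx refuses to cache any upstream response that carries a Set-Cookie header, and PHP sessions add one readily, so the backend's Cache-Control never gets a chance to matter. The relevant knobs, sketched:

        # inside the server/location that passes requests to php-fpm
        fastcgi_cache website;
        add_header X-Cache $upstream_cache_status;

        # only if the cookies are genuinely not user-specific - otherwise you would
        # cache one visitor's session for everyone:
        #fastcgi_ignore_headers Set-Cookie;
        #fastcgi_hide_header    Set-Cookie;

    Checking the backend's actual response headers with curl -sD - -o /dev/null against the page is a quick way to see whether a stray cookie or no-cache header is being sent.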

    Read the article

  • LightTPD and PHP not working if outside of LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
          LightTPD/
            htdocs/
          PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
          LightTPD/
          PHP/

    I update the document root to server_root + "/../../htdocs", and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )

        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml", "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )

        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }

        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not neccesary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )

        status.status-url = "/server-status"
        status.config-url = "/server-config"
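    A hedged guess rather than a known fix: php-cgi commonly reports "No input file specified" when the SCRIPT_FILENAME it is handed does not resolve to a readable file, and a document root containing "/../.." is exactly the sort of path that can trip that up on Windows. Pointing lighttpd at an already-resolved absolute path is a cheap experiment (the path below is hypothetical):

        # hypothetical absolute location of the shared htdocs folder
        var.htdocs = "C:/testservers/htdocs"
        server.document-root = var.htdocs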

    Read the article

  • How to choose the right PHP Framework for web development?

    - by liliwang
    Hi! I'm a PHP pro, but I haven't used any PHP framework till now, so I have no clue how to choose one. Do you have some tips to help me pick the right PHP framework? I want a stable and secure framework for projects with about 400 hours of development time. It should be possible to use the framework on shared-hosting web servers. I don't need AJAX support (I'm using extJS). It would be nice if the framework supports rapid application development and object-relational mapping. Some of the standard functions (authentication, form validation) would also be nice. Caching would be useful, but isn't needed. Requirements for a PHP framework:

        - Works on shared-hosting web servers
        - Suits projects between 200 and 400 hours of work
        - Supports the "Rapid Application Development" model
        - Object-relational mapping supported
        - If possible: caching
        - Already finished modules (e.g. authentication, form validation, ...)
        - Easy to learn

    Which PHP framework is the right one for me?

    Read the article

  • erratic response times with Apache 2.0.52 on redhat 4.

    - by Kevin
    Under load, we've noticed response times from Apache vary greatly for the same 7k image. They can range anywhere from 0.01 seconds to 25 seconds or more. Unfortunately, due to corporate policy constraints we are pretty much stuck on Apache 2.0.52. I'm at best an Apache novice, so I'm in over my head with this problem. My focus recently has turned to our choice of MPM modules. We use the worker model on a dual-core hyper-threaded blade. It doesn't appear that swapping is an issue, and I don't see any signs of a hardware problem. I've read that worker is optimal on hardware with many CPUs, whereas prefork is more suitable for our specific hardware profile. I can see conceptually how choosing the wrong MPM could result in this erratic behavior, but I'm not confident that it's the root cause here. Has anyone else seen this type of range in your response times for simple static content? What else should I be looking into here?

    Read the article

  • vmware player won't run on CentOS due to missing /dev/vmmon, what could be the problem?

    - by Graphics Noob
    So I've tried installing VMware Player 3.1.4 and 3.1.3, and both times had the same problem: when I try to load a VM I get the error "Could not open /dev/vmmon". When I ls /dev/ I can see there is no "vmmon" device present. When I try running:

        sudo /etc/init.d/vmware start

    I get the output:

        Starting VMware services:
           VMware USB Arbitrator                              [  OK  ]
           Virtual machine monitor                            [FAILED]
           Virtual machine communication interface            [  OK  ]
           VM communication interface socket family           [  OK  ]
           Blocking file system                               [  OK  ]
           Virtual ethernet                                   [FAILED]

    which shows that the virtual machine monitor fails to load. I tried following the advice on this site and ran:

        vmware-modconfig --console --install-all

    I notice there are no errors during the compilation, but at the end I get the message:

        Starting VMware services:
           VMware USB Arbitrator                              [  OK  ]
           Virtual machine monitor                            [FAILED]
           Virtual machine communication interface            [  OK  ]
           VM communication interface socket family           [  OK  ]
           Blocking file system                               [  OK  ]
           Virtual ethernet                                   [  OK  ]
        Unable to start services

    Out of curiosity I tried:

        sudo /sbin/insmod /lib/modules/2.6.18-238.9.1.el5xen/misc/vmmon.ko

    but got the error message:

        insmod: error inserting 'vmmon.ko': -1 Invalid module format

    I have a feeling this may be the root of the problem, but I don't know what could be causing it or how to fix it.
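    "Invalid module format" from insmod generally means the module on disk was built against a different kernel than the one actually running - and the running kernel here is the el5xen flavour, so the headers used for the build have to match that exact variant and version. A rough check-and-rebuild sequence, with package names from memory (verify with yum search kernel | grep devel):

        # the module must match the running kernel exactly
        uname -r
        ls /lib/modules/$(uname -r)/misc/

        # install the toolchain and the headers for *this* (xen) kernel,
        # then let VMware rebuild its modules
        yum install gcc kernel-xen-devel
        vmware-modconfig --console --install-all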

    Read the article

  • Sound doesn't work anymore after replacing RAM

    - by thejh
    Hello, today, I replaced one old RAM module with two newer, bigger ones, but now, the sound doesn't seem to work anymore. Already ran alsaconf and it didn't help. Output of lspci for the audio device:

        00:07.0 Audio device: nVidia Corporation MCP67 High Definition Audio (rev a1)
                Subsystem: Giga-byte Technology Device a002
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
                Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0 (500ns min, 1250ns max)
                Interrupt: pin A routed to IRQ 21
                Region 0: Memory at f5100000 (32-bit, non-prefetchable) [size=16K]
                Capabilities: [44] Power Management version 2
                        Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
                        Status: D0 PME-Enable- DSel=0 DScale=0 PME-
                Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
                        Address: 0000000000000000  Data: 0000
                        Masking: 00000000  Pending: 00000000
                Capabilities: [6c] HyperTransport: MSI Mapping Enable+ Fixed+
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel

    The audio device is onboard and has six configurable outputs, two or so are also capable of being an input (if I remember it correctly), but I don't know how to control it under linux. Does somebody know how/whether replacing the RAM could be related to my problem and/or how to fix it?

    Read the article

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CENTOS 5 server I am setting up and trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php I get an ugly Apache 404 Not Found error message. I looked in both httpd.conf files and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have the code:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file that looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/

        <files ~ "\.tpl$">
            order deny,allow
            allow from none
            deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to do. Thanks.
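    One thing worth checking (a guess based on the symptom, not a confirmed diagnosis): CentOS ships with AllowOverride None for /var/www/html, which makes Apache silently ignore the .htaccess, so requests for missing files fall through to the stock 404 instead of index.php. A sketch of the relevant block in httpd.conf:

        <Directory "/var/www/html">
            Options FollowSymLinks
            AllowOverride All        # the CentOS default is None, which disables .htaccess
            Order allow,deny
            Allow from all
        </Directory>

    After changing it, apachectl configtest plus a restart will show quickly whether requests start routing through index.php again.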

    Read the article

  • Linux Centos 6 becomes unavailable from time to time - OS&network issue

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64 - at this point I'd like to add that the issue also appeared on an older version, 6.1, with an older kernel (I do not remember exactly which one). There is cPanel installed, and from time to time the server becomes unavailable (network connection). What I've checked (via KVMoIP):

        - load average is completely normal
        - it does not lack memory or disk space when the problem occurs
        - no console notifications
        - checked all access logs and there is no sign that it could be caused by a client script
        - cannot even access the local interface (127.0.0.1) or the main IP address
        - running tcpdump I can only see packets arriving at the server - no responses
        - all services seem to be running properly (mail, sql, http, ssh)
        - checked crontab and all clients' crontabs too
        - network port utilisation is low (up to several Mbit/s)
        - arriving packet rate is low - hundreds per second (according to tcpdump)
        - console (via KVMoIP) works fine, no lags
        - there is no conntrack at this server
        - there is no IPv6 at this server
        - flushing iptables and unloading modules does not resolve the problem
        - restarting the network does not resolve the problem, and no errors appear
        - it also occurs when two separate networks (and multiple gateways) are configured, as well as when one IP, one default gateway and one network are configured - so it seems independent of the network configuration
        - it seems to repeat randomly (independent of load, packet rate and bandwidth usage)
        - checked the server with different rootkit detection tools - it seems to be clean
        - the server has been rebooted; it did not change anything
        - there are no interface errors
        - it appears randomly, anywhere from once a week to several times per day

    It usually works fine again after 1-15 minutes. What else can I check? It is definitely an OS issue - there is traffic at the interface in only one direction when the problem occurs, and I cannot even ping loopback. Any ideas? Recommended checks? Anything I did not check above?

    Read the article

  • Apache /server-status/ gives a 404 not found

    - by user57069
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests regarding the Munin setup, but I think my issue is related to Apache, and my skill set there is lacking. First, system info for the monitored server:

        CentOS 5.3, kernel 2.6.18-128.1.1.el5
        Apache/2.2.3

    The "server-status" directive in httpd.conf (I've cross-compared this with another system where I did a successful parallel install of Munin, correctly showing Apache stats, and the directive below is the same on both):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    I ran lynx http://localhost/server-status and got HTTP/1.1 404. Taking a look at the Apache access_log:

        127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"

    mod_status is also loaded:

        % grep "mod_status" /etc/httpd/conf/httpd.conf
        LoadModule status_module modules/mod_status.so

    iptables is turned off. I also noticed that the ownership of httpd.conf on this system is root.root, whereas on the system that displays correctly it is apache.www - not certain that this matters? It's got to be a permissions issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status is giving me a 404?
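    A hedged observation: the 404 in that access_log line is 11237 bytes, far larger than Apache's stock error page, which suggests something else (an application, an alias, or a rewrite rule) is answering /server-status before the handler gets it. If rewrites are in play anywhere for this host, an explicit exclusion along these lines (the RewriteRule target is hypothetical) keeps the status URL out of their reach:

        # in whichever vhost or .htaccess owns the site's rewrites
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/server-status
        RewriteRule ^ /index.php [L]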

    Read the article

  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side-effect appeared when I accessed one of my file-based hidden TrueCrypt volumes shortly thereafter. Some of the files in the volume - NOT all - seem to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume were still intact. Is this not weird? The only other real change I could think of would be that the hard drives were connected to different SATA ports on the mobo. I really don't know how the TrueCrypt encryption works well enough to know what could cause this... and the fact that not all the files were corrupted makes it more bizarre still. So, first off (and I'm not too hopeful on this point), would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Other than that I'm just curious how this happened and how I can prevent it next time. Thanks!

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! this uses lots of system memory very fast!):

        $ cat /dev/zero | less

    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would limit itself in such a way. In addition, I couldn't find any documentation on the subject via Google. Any light on this quite surprising discovery would be great! System information: quad-core Intel i7, 8 GB RAM.

        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman
        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution,
        see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04 LTS
        Release:        14.04
        Codename:       trusty

    Read the article

  • php extensions & apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, Thursday last week everything was fine on our server. I came in on Friday and it was a mess: PHP extensions were missing/not working, and Apache modules were gone (e.g. oci_* was gone completely, odbc_ not working but still there, and the Apache ntlm_auth for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering if somehow those extensions and such, which we installed months ago, were somehow only saved in a local memory of sorts, and a restart has wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little/no sense to me. To expand on an example of something that's gone very wrong, the PHP odbc_ extension: it's still on the server, it doesn't return "undefined function" or anything. But it just cannot connect to the datasource any more. I've tested it through the command line and it's working perfectly fine with that datasource and login details, but all of a sudden having it in the PHP odbc_connect() function, it just can't connect:

        [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source.

    But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function. Anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x by the way.

    Read the article

  • PHP pages are not parsed by Apache on CentOS

    - by Ram
    I have installed CentOS 5.x, Apache 2.2, PHP 5.3 and MySQL 5.5. I also installed phpMyAdmin. I am able to access phpMyAdmin through the browser without any issues. However, when I create a simple index.php with the phpinfo() function in the default directory, that page is served without PHP parsing. As we all know, phpMyAdmin is a PHP application; it works fine from the same server, but a simple PHP page in the doc root directory does not?! Of course, I tried moving this page into the phpMyAdmin folder and accessing it there, but no success. Please note that I updated the httpd.conf file with the appropriate directives based on the PHP installation guide. The following directives were added to httpd.conf:

        AddTyoe application/x-httpd-php
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    File locations are:

        docroot           - /var/www/html
        phpMyAdmin folder - /var/www/html/phpMyAdmin

    File privileges are:

        [root@linuxdev1 html]# ls -Z
        -rwxr-xr-x  root root index.php
        drwxr-xr-x  root root phpMyAdmin
        -rw-r--r--  root root phpMyAdmin-3.4.3.2-english.tar.gz
        drwxr-xr-x  root root test1

    Any help is appreciated.
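    One detail that jumps out of the pasted httpd.conf, offered as an observation rather than a certain diagnosis: "AddTyoe" is not a valid directive, and even spelled correctly that AddType line maps no file extension, so as pasted those lines cannot be what a running Apache is actually loading. The conventional mod_php lines look roughly like this (assuming PHP was built as an Apache DSO):

        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        AddType application/x-httpd-php .php

        # or, keeping the handler form:
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    Running httpd -t (or service httpd configtest) at least confirms which configuration Apache is really parsing before the next restart.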

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment. I have a node.js server running which is serving either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node which is serving all static content (CSS, JS, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that Varnish can cache static content quite well and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use Redis at the moment for holding session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

        1. Throw Varnish in front of nginx and let Varnish cache static pages; no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
        2. Ignore using Varnish completely and let Redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
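    For the static-plus-anonymous-page part there is also a simpler variant worth benchmarking alongside those two: a plain nginx proxy_cache in front of node, with no Varnish and no app changes. A minimal sketch, assuming a hypothetical node backend on port 3000:

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app:10m max_size=256m inactive=10m;

        server {
            listen 80;

            location / {
                proxy_pass        http://127.0.0.1:3000;
                proxy_set_header  Host $host;
                proxy_cache       app;
                proxy_cache_valid 200 302 1m;    # illustrative TTLs only
            }
        }

    Redis would then stay in its current role (sessions plus whichever expensive queries you choose to memoize in app code) rather than becoming the HTTP cache itself.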

    Read the article

  • 4GB Memory Upgrade for Acer Aspire 5102WLMi

    - by Richard Slater
    I have bought a 4GB memory upgrade (2x 2GB PC2-5300 SODIMM) for my Acer Aspire 5102WLMi (Aspire 5100 series) laptop. I installed the two memory modules correctly; however, with 4GB installed the laptop refuses to POST. I have tried the following:

        - Tried both 2GB SODIMMs without the other (worked fine)
        - Tried the original 512MB SODIMMs (worked fine)
        - Tried with original 512MB SODIMM and new 2GB SODIMM (worked fine)
        - Tried swapping over the 2GB SODIMMs (didn't boot)
        - Left the computer for 10 minutes with both 2GB SODIMMs installed (didn't boot)
        - Checked latest BIOS installed (no change)

    The Crucial website said that the laptop supported 4GB of RAM, as do several other sites found through Google, so up until now I was fairly confident this would work. A couple of questions that would be good to have answered:

    Question: Has anyone got an Acer Aspire 5100 series running with 4GB RAM?
    Answer: Yes, I have now got one working with 3.75GB usable; the rest is utilized by the graphics card.

    Question: Any tips on getting this to work; is there a CMOS reset switch?
    Answer: Yes there is. If both SODIMMs are removed, two very small interlocking PCB tracks are revealed. If these are shorted together with a screwdriver, the BIOS will be reset.

    Thanks.

    Read the article

  • Working with different PHP version at the same time, php_value extension_dir not working?

    - by Gremo
    I need both PHP 5.4.7 and 5.3.17 running on Windows 7 x64 with Apache 2.2.23. This is my virtual host configuration:

        <VirtualHost *:80>
            DocumentRoot "C:/WAMP/Apache/htdocs/php54"
            ServerName php54.local

            PHPIniDir "C:/WAMP/PHP54"
            LoadModule php5_module "C:/WAMP/PHP54/php5apache2_2.dll"
            php_value extension_dir "C:/WAMP/PHP54/ext"

            <Directory "C:/WAMP/Apache/htdocs/php54">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The PHPIniDir and LoadModule directives work fine, and using phpinfo() inside my script prints the right PHP version. But I need to load extensions, and this is where it fails: php_value extension_dir should be C:/WAMP/PHP54/ext, but it is the default one, C:/php. What am I missing here?

    EDIT: Of course I can set this value directly in C:/WAMP/PHP54/php.ini, but I prefer passing it via the vhost configuration:

        ; Directory in which the loadable extensions (modules) reside.
        ; http://php.net/extension-dir
        ; extension_dir = "./"
        ; On windows:
        extension_dir = "C:/WAMP/PHP54/ext"

    Read the article

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them by http on to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = (
            "mod_setenv",
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
            "mod_cgi",
        )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd if I can.

    Read the article

  • What are the best linux permissions to use for my website?

    - by Nic
    This is a Canonical Question about File Permissions on a Linux web server. I have a Linux web server running Apache2 that hosts several websites. Each website has its own folder in /var/www/:

        /var/www/contoso.com/
        /var/www/contoso.net/
        /var/www/fabrikam.com/

    The base directory /var/www/ is owned by root:root. Apache is running as www-data:www-data. The Fabrikam website is maintained by two developers, Alice and Bob. Both Contoso websites are maintained by one developer, Eve. All websites allow users to upload images. If a website is compromised, the impact should be as limited as possible. I want to know the best way to set up permissions so that Apache can serve the content, the website is secure from attacks, and the developers can still make changes. One of the websites is structured like this:

        /var/www/fabrikam.com
            /cache
            /modules
            /styles
            /uploads
            /index.php

    How should the permissions be set on these directories and files? I read somewhere that you should never use 777 permissions on a website, but I don't understand what problems that could cause. During busy periods, the website automatically caches some pages and stores the results in the cache folder. All of the content submitted by website visitors is saved to the uploads folder.
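    A common baseline, sketched under two assumptions: the developers of a site share a group, and only uploads/ and cache/ ever need to be written by Apache:

        # developers own the code; Apache reads it through the group/other bits only
        groupadd fabrikam
        usermod -aG fabrikam alice
        usermod -aG fabrikam bob
        chown -R alice:fabrikam /var/www/fabrikam.com
        find /var/www/fabrikam.com -type d -exec chmod 2775 {} \;
        find /var/www/fabrikam.com -type f -exec chmod 0664 {} \;

        # the only directories www-data may write to are the upload and cache areas
        chown -R www-data:fabrikam /var/www/fabrikam.com/uploads /var/www/fabrikam.com/cache
        chmod 0770 /var/www/fabrikam.com/uploads /var/www/fabrikam.com/cache

    The usual objection to 777 is exactly this separation: it lets the web-server user (and therefore any compromised upload or PHP script) rewrite the site's own code, whereas keeping the code read-only to www-data limits what an exploited upload can do.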

    Read the article

  • Should I update the kernel on a Linux machine?

    - by Legate
    As I understand it, updating to a new kernel (with the normal linux-image... package, not by rolling my own) requires a server restart. However, one of our servers (Ubuntu 10.04) is running several extensive screen sessions. Restarting kills those, which is always a major hassle for their owners (mostly because of lost session histories). What should I do? I see several possibilities:

        1. Not doing anything, that is, updating only non-kernel packages (perhaps using apt-pinning?)
        2. Updating the kernel, but not restarting. (Is that smart? I seem to remember there might be some problems with loading kernel modules.)
        3. Updating the kernel and restarting. Is there perhaps some way to preserve the screen sessions?

    I guess it ultimately boils down to this question: how important is it to update the kernel? I posted this question here instead of askubuntu.com as I think this is not an Ubuntu-specific issue, though this server is running Ubuntu.

    Read the article
