Search Results

Search found 10550 results on 422 pages for 'apache commons dbcp'.

Page 390/422 | < Previous Page | 386 387 388 389 390 391 392 393 394 395 396 397  | Next Page >

  • Can varnish cache files without specific extension or residing in specific directory

    - by pataroulis
    I have a Varnish installation caching the (many) images that my service serves: about 200 images of around 4 KB each per second, which Varnish happily serves according to the following rule:

        if (req.request == "GET" && req.url ~ "\.(css|gif|jpg|jpeg|bmp|png|ico|img|tga|wmf)$") {
            remove req.http.cookie;
            return(lookup);
        }

    Now, the thing is that I recently added another service on the same server that creates thumbnails to serve, but it does not add a specific extension. The files follow this filename pattern:

        http://www.example.com/thumbnails/date-of-thumbnail/xxxxxxxxx.xx

    where the x's are digits, so xxxxxxxxx.xx could be 6482364283.73 (two digits at the end; it is actually a timestamp, so I can keep extra info in the filename). The side effect is that Varnish does not cache these files and I see them constantly being served by Apache itself. Even though I can change the format from now on to create thumbnails ending in .jpg, is there a way to change my Varnish daemon's VCL file to cache either everything under a directory (the thumbnails directory) or everything with two digits as its extension? Let me know if I can provide any additional info. Thanks!
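
    A minimal VCL sketch for the question above, staying with the same Varnish 2-style syntax as the rule already quoted (req.request, return(lookup)); the /thumbnails/ prefix and the two-digit "extension" come from the question, everything else is illustrative:

        sub vcl_recv {
            # cache anything under /thumbnails/ as well as any URL ending in ".NN"
            if (req.request == "GET" &&
                (req.url ~ "^/thumbnails/" || req.url ~ "\.[0-9]{2}$")) {
                remove req.http.cookie;
                return(lookup);
            }
        }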

    Read the article

  • Why is only one wget command working in my crontab?

    - by afEkenholm
    I wish to fetch content from a PHP script on my server twice a day, altering a query variable lang to set which language we want, and save this content in two language-specific files. This is my crontab:

        */15 * * * * ~root/apache.sh > /var/log/checkapache.log
        10 0 * * * wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv"
        11 0 * * * wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en"

    The problem is that only the first wget command line is being executed (or, to be precise, the only file that gets written is /path/to/file-sv.sql). If I switch the second and the third row, /path/to/file-en.sql gets written instead. The first line always runs as expected, no matter where it is. I then tried using lynx -dump "http://mydomain.com/path/?lang=xx" > /path/to/file-xx.sql to no avail; still only the first lynx line executed successfully. Even mixing wget and lynx did not change this! Getting kind of desperate! Am I missing something? There are thousands of articles on crontab (combined with) wget or lynx, but all seem to cover basic setups and syntax. Does anyone have a clue what I am doing wrong? Thanks, Alexander
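
    A hedged diagnostic sketch rather than a confirmed fix: cron runs with a minimal environment, so absolute paths and per-job output capture usually reveal why a line silently fails, and some cron implementations ignore the last line of a crontab if it lacks a trailing newline, which would match "whichever wget line comes last never runs". The wget path and log file below are assumptions:

        10 0 * * * /usr/bin/wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv" >> /var/log/cron-wget.log 2>&1
        11 0 * * * /usr/bin/wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en" >> /var/log/cron-wget.log 2>&1

    Also make sure the crontab ends with a newline after the final entry.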

    Read the article

  • virtual web folder served by PHP script

    - by Martin
    I am trying to configure my Apache to display (virtual) pages like:

        mywebpage.com/something1
        mywebpage.com/something2
        mywebpage.com/folder/something3

    I would like these "somethingX" and "folder" folders to be only virtual, not physical directories. For a start it would be great to send all requests for mywebpage to one PHP script, which would somehow receive the original path information (there is some SERVER array, as far as I know) and call the necessary PHP functions (so far I use addresses like mywebpage.com/index.php?page=blabla&otherparameters=values...). Is that possible? I am struggling with different combinations; currently I have the following file in /etc/apache2/conf.d/something.conf (not working, of course). What is the correct way to proceed? Thanks.

        <Location /myweb>
            SetHandler my-handler
            Action my-handler /srv/www/htdocs/myweb/product.php virtual
        </Location>

    My pages are in /srv/www/htdocs/myweb. I tried with Location, with Directory, with Action and SetHandler, with AddHandler... ;-) Some configurations were ignored, some caused "object not found" with nothing relevant in the error log.
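
    A hedged front-controller sketch, assuming mod_rewrite is enabled and the pages live in /srv/www/htdocs/myweb as stated; the script name index.php is illustrative. Any request that does not match a real file or directory is handed to one script, which can read the original path from the SERVER array:

        <Directory /srv/www/htdocs/myweb>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^ index.php [L]
        </Directory>

    Inside index.php the requested virtual path is then available, for example:

        <?php
        $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);  // e.g. "/folder/something3"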

    Read the article

  • empty file from tomcat https redirect?

    - by amertune
    I am using Tomcat 6.0.20 with JDK 1.6.0_18 on 64-bit Linux (Tomcat downloaded from tomcat.apache.org, not installed from repositories). I have iptables redirects from port 80 to 8080 and from 443 to 8443. In server.xml, the connector for port 8080 redirects to 443, and the 8443 connector has proxyPort="443". In conf/web.xml, I have added this bit at the end of the file (but still inside the <web-app> ... </web-app> tags):

        <security-constraint>
            <web-resource-collection>
                <web-resource-name>Protected Context</web-resource-name>
                <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <user-data-constraint>
                <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
        </security-constraint>

    I also have two contexts, ROOT and mywebapp. If I go to http://myurl.com, I get redirected to https://myurl.com. If I go to http://myurl.com/mywebapp/, I get redirected to https://myurl.com/mywebapp/. The problem I'm having is when I go to http://myurl.com/mywebapp (no trailing slash). When I do this I get a prompt to download an empty file with an empty name. Going to https://myurl.com/mywebapp works. I would think that a user typing myurl.com/mywebapp is far from rare. Is there something I'm missing?

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With small userbase, there are no traffic peaks so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. Core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by service provider, though we also do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • Diagnosing Random Network Lag

    - by uesp
    I'm having trouble diagnosing some random lag on a 6-server LAMP cluster serving a MediaWiki site. While we're serving some 100 pages/sec, the servers themselves are running fine with less than 0.5 load, no locked processes, no paging, no errors being logged, etc. Lag is present on all servers and is random: one minute it's fine, the next it's there. DNS lookups on the servers are randomly slow. For example, time nslookup google.com varies randomly from a few milliseconds to several seconds and sometimes times out entirely. While we use IP addresses internally on the cluster, this may be a symptom of the root issue. We are not running our own DNS server. The Apache server-status pages randomly lag or time out. Benchmarking with ab between servers shows a few loads sometimes take 3000 ms (almost exactly). Benchmarking server-status on the local server itself usually shows no issue (it showed a lag only once among a few hundred tests). The servers are sitting behind a switch and a firewall which I don't have any access to, so I don't know their setup or status. While we are under heavier than normal load, 2 Mbps incoming and 20 Mbps outgoing traffic shouldn't be stressing the switch or firewall, should it? My feeling is that it is the switch/firewall, or something above them at the ISP like their DNS, but I can't confirm it. I need some other tests or methods of diagnosing this lag to try and narrow down the ultimate cause.
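
    A hedged probe sketch for separating DNS stalls from general network latency: run it on a couple of the servers in parallel and compare the timestamps of slow lookups across machines (the target name and log location are assumptions):

        #!/bin/bash
        # log how long a DNS lookup takes, every 5 seconds
        while true; do
            start=$(date +%s%N)
            nslookup google.com > /dev/null 2>&1
            end=$(date +%s%N)
            echo "$(date '+%F %T') $(( (end - start) / 1000000 )) ms"
            sleep 5
        done >> /var/log/dns-probe.log

    A similar loop timing a plain TCP fetch between cluster members (e.g. curl against another node's server-status) helps tell whether the 3000 ms spikes track the DNS stalls or occur independently.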

    Read the article

  • nginx virtual hosts are not working, all vhosts goes to the default one

    - by Adirael
    Hello, I just did a clean install of nginx + php-fpm on a VPS running Ubuntu 10.10. nginx is serving and PHP is working fine, but I'm not able to add vhosts to it. Well, I can add them, but only one works; the rest go to this first one. This is my first vhost, for host1:

        server {
            listen 80;
            server_name host1;
            access_log /var/log/nginx/host1.log;
            error_log /var/log/nginx/host1.error.log;

            location / {
                root /var/www/vhosts/host1/;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host1/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    And the second one, for host2:

        server {
            listen 80;
            server_name host2;
            access_log /var/log/nginx/host2.log;
            error_log /var/log/nginx/host2.error.log;

            location / {
                root /var/www/vhosts/host2/;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host2/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    The problem is, when I go to http://host1 everything is fine, but http://host2 just shows host1! I don't have Apache installed and everything comes from repos. Any pointers?
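
    A hedged sketch of the usual checks: nginx picks a server block by matching the request's Host header against server_name and falls back to the default server (the first one listed, unless another is marked) when nothing matches, so an explicit catch-all makes mismatches visible. The example.com names are placeholders; also confirm that both vhost files are linked into sites-enabled and that nginx -t passes before reloading:

        # catch-all so requests for unknown hosts fail loudly instead of
        # silently landing on host1 (on nginx < 0.8.21 the listen parameter
        # is spelled "default" rather than "default_server")
        server {
            listen 80 default_server;
            server_name _;
            return 444;
        }

        server {
            listen 80;
            # match exactly what is typed in the browser
            server_name host2.example.com www.host2.example.com;
            ...
        }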

    Read the article

  • Setting up Web server so it is easy to migrate

    - by Nyxynyx
    Hi, I am about to move my site from a VPS to another host's dedicated server. One of my concerns is about scaling the site in the future in a way that involves another change of server. Since I am starting the dedicated server from scratch with only the OS, I need to install the whole web server stack: Apache and its modules, PHP, MySQL, PostgreSQL, Tomcat, Solr, and a few other pieces of software like ImageMagick and git. Question: is there a way for me to set up this new dedicated server such that I can easily migrate the entire site, both the technology stack and the code, to a newer server later (an upgrade from this new dedicated server) without reinstalling and reconfiguring everything? The code for the website is handled by git and GitHub, so that's not a problem. I'm more concerned about the rest of the software required. Side question: the current VPS uses CentOS with cPanel, and it seems that many packages are outdated in yum and cPanel interferes with the installation of many packages. Which OS should I go with for the new server? Ubuntu?

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes to continue

    - by sw0x2A
    Our web servers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the web server processes (doing the INSERT DELAYED) wait for the database, and in some cases Apache's process limit is reached, so it stops serving requests. We use INSERT DELAYED because, quoting the MySQL documentation: "The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete." I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it (since this is logging data, I do not care if we lose some entries). Even when the table is locked, the PHP script should just go on and not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, but we are thinking about moving this data to some NoSQL database. For now, though, I would like to know why INSERT DELAYED is not working as expected.)

    Read the article

  • htaccess hacked - i've deleted code and file - what next?

    - by user1762595
    My website was hacked recently. I think I've found the code that was added to the .htaccess file, deleted it, and then added some code to prevent the .htaccess file from being accessed again. I've also deleted the PHP file that the hacked code refers to (common.php). What do I need to do next? I'm not a programmer or website developer, but I really wanted to see if I could fix the problem myself, as I've spent quite a few hours trying and don't give up easily. Here is the hacked code that I deleted:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{HTTP_USER_AGENT} (google|yahoo) [OR]
            RewriteCond %{HTTP_REFERER} (google|yahoo)
            RewriteCond %{REQUEST_URI} /$ [OR]
            RewriteCond %{REQUEST_FILENAME} (shtml|html|htm|php|xml|phtml|asp|aspx)$ [NC]
            RewriteCond %{REQUEST_FILENAME} !common.php
            RewriteCond /home/httpd/vhosts/bluestardive.com/httpdocs/common.php -f
            RewriteRule ^.*$ /common.php [L]
        </IfModule>

    The following code has to stay in the .htaccess file, as it rewrites my URLs to SEO-friendly ones (without it the website errors), but has this code been hacked as well?

        # Apache search queries statistic module
        RewriteEngine On
        AddHandler php5-fastcgi .php .php5
        # <contrexx>
        # <core_modules__alias>
        RewriteRule ^about-us$ /index.php?page=883 [L,NC]
        RewriteRule ^ausfluge-und-aktivitaten$ /index.php?page=800 [L,NC]
        RewriteRule ^bluestardive-news$ /index.php?page=919 [L,NC]
        RewriteRule ^bookings$ /index.php?page=911 [L,NC]
        RewriteRule ^diveresort$ /index.php?page=879 [L,NC]
        RewriteRule ^diving$ /index.php?page=880 [L,NC]
        RewriteRule ^excursions-and-activities$ /index.php?page=881 [L,NC]
        RewriteRule ^galerie$ /index.php?section=gallery [L,NC]
        RewriteRule ^oceannight$ http://www.bluestardive.com/index.php?page=906 [L,NC]
        RewriteRule ^philosophy$ /index.php?page=846 [L,NC]
        RewriteRule ^reservation$ /index.php?page=917 [L,NC]
        RewriteRule ^reservierung$ /index.php?page=918 [L,NC]
        RewriteRule ^resort$ /index.php?page=798 [L,NC]
        # </core_modules__alias>
        # </contrexx>

    Many thanks for any help. Claire
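
    For reference, a hedged sketch of the usual way to deny web access to .htaccess-style files, in Apache 2.2 syntax (Apache 2.4 would use "Require all denied" instead); this is generic hardening and may duplicate what was already added:

        <FilesMatch "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>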

    Read the article

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            location ~* .*\.php$ {
                try_files $uri $uri/ @php_admin;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }

        location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
            alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
        }

        location ~ ^/admin/modules/([^/]+)(.*)$ {
            try_files $uri @php_admin_modules;
        }

        location @php_admin {
            if ($fastcgi_script_name ~ /admin(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

        location @php_admin_modules {
            if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) {
                set $byr_module $1;
                set $byr_rest $2;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    The following requested URL ends up with a 404:

        http://www.{domainname}.com/admin/modules/cms/styles/cms.css

    This is the error log:

        [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com"

    The following URLs work fine:

        http://www.{domainname}.com/admin/modules/store/?a=manage
        http://www.{domainname}.com/admin/modules/cms/?a=cms.load

    Can anyone see what the problem could be? Thanks. PS: I am trying to migrate existing sites from Apache to nginx.
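
    A hedged sketch of one possible fix, based on the error log showing that the generic asset rule nested inside location /admin/ is handling module assets (and so using the apps/admin alias): regex locations at the same nesting level are tried in the order they appear, so placing a module-specific asset rule before the generic one inside the /admin/ block should let those files come from the modules tree instead. Paths reuse the ones from the config above:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            # module assets first, so the generic rule below no longer wins
            location ~* ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
                alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
                expires 7d;
                access_log off;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }

            # ... existing php locations unchanged ...
        }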

    Read the article

  • Debugging nginx URL rewrite: How do I figure out where the problem is?

    - by pjmorse
    I have a specific URL pattern on a site which needs to be redirected to the HTTPS version. This is a Django site; nginx checks each URL in memcached, and if it doesn't find a cached version it proxies the request to Apache/mod_python for Django to render the page. The relevant configuration block is:

        rewrite ^/certificate https://mysite.com/certificate ;
        rewrite ^/([a-zA-Z]{2})/certificate https://mysite.com/certificate ;

    ...and it doesn't appear to be working at all. Nginx is:

        $ nginx -V
        nginx version: nginx/0.7.65
        built by gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
        TLS SNI support disabled
        configure arguments: --prefix=/usr/local/nginx --pid-path=/var/run/nginx.pid --with-http_gzip_static_module --with-http_ssl_module

    How can I figure out whether the problem is my patterns not matching, or a more obscure configuration problem? (The site is localized to three languages, and the localization is in the URL string, e.g. /US/news/, /DE/about, etc. It tracks localization in the session as well, defaulting to US, so if you just request /news, Django will rewrite it to /US/news unless the user has a cookie indicating they're using a different localization. Django handles this, though, not nginx.)
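
    A hedged debugging sketch: the ngx_http_rewrite_module can log every rewrite decision at the notice level, which shows directly whether these patterns match (log path is illustrative):

        error_log /var/log/nginx/error.log notice;
        rewrite_log on;

    After a reload, requesting /certificate should produce lines in the error log describing each attempted rewrite; if nothing shows up, the request is not reaching the server or location block that holds these rules (for example because a more specific location matches first).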

    Read the article

  • .php file blank - .php5 files works

    - by Kleidi
    I have a problem with a server of mine. I've installed Virtualmin/Webmin on it for administration, and I have one domain on it. DNS management is external. On this domain I only have an HTML "Under Construction" index and 5 subdomains. In all those subdomains I have PHP systems running perfectly. I tried to install WordPress on the main domain and I'm having some issues: no .php files load. I made a phpinfo file to check, and it doesn't work either; only a blank page appears. When I check its source code in the browser, the PHP code appears. I changed the extensions to .php5 and everything worked perfectly. Something is going wrong with it, but I can't figure out what. I have checked the Apache error log and nothing appears. Three days ago I upgraded from PHP 5.2.* to 5.4.21. The server is running CentOS 5.10.
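
    Seeing the raw PHP source for .php while .php5 executes usually means .php is no longer mapped to any PHP handler after the upgrade. A hedged sketch of the kind of mapping to check in the domain's vhost; Virtualmin commonly uses mod_fcgid with a per-domain wrapper, and the username and wrapper path below are placeholders:

        # mod_fcgid-style mapping (typical for Virtualmin)
        AddHandler fcgid-script .php .php5
        FCGIWrapper /home/username/fcgi-bin/php5.fcgi .php
        FCGIWrapper /home/username/fcgi-bin/php5.fcgi .php5

        # or, if the domain runs mod_php instead:
        AddType application/x-httpd-php .php .php5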

    Read the article

  • server_name seems to be ignored in nginx

    - by user46171
    I have two domains set up in nginx.conf. Both use SSL with their own certificates and proxy to Apache. However, the second domain is completely ignored and nginx always resolves to the first domain. I can't see what the issue is with this configuration, having set the server_name correctly in each case (as far as I can see):

        http {
            include mime.types;
            default_type application/octet-stream;
            keepalive_timeout 65;

            upstream site {
                # real IP addresses masked
                server xx.xxx.x.xxx;
                server xx.xxx.x.xxx;
            }

            server {
                # this domain always works
                listen 443;
                server_name *.first-site.com;
                ssl on;
                ssl_certificate /var/ssl/first-site.crt;
                ssl_certificate_key /var/ssl/first-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }

            server {
                # this domain is ignored, always resolves to first-site.com
                listen 443;
                server_name *.second-site.com;
                ssl on;
                ssl_certificate /var/ssl/second-site.crt;
                ssl_certificate_key /var/ssl/second-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }
        }
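
    A hedged note rather than a confirmed diagnosis: with HTTPS, name-based virtual hosts on one IP and port only work if nginx was built with TLS SNI support and the clients send SNI; otherwise the first listed server block handles every connection on that socket, which matches the symptom above. A quick check, and the usual workaround of giving each HTTPS host its own address (the 192.0.2.x addresses are placeholders):

        $ nginx -V 2>&1 | grep -i sni

        # without SNI, bind each HTTPS host to its own IP:
        server {
            listen 192.0.2.10:443;
            server_name *.first-site.com;
            ...
        }
        server {
            listen 192.0.2.11:443;
            server_name *.second-site.com;
            ...
        }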

    Read the article

  • What can inexperienced admin expect after server setup completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is the very first time he is the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, ProFTPD, and MTA & MDA software, configured VirtualHosts properly ("facts", because he calls himself admin), done user management and various configuration settings with respect to security recommendations, and... everything is fine for now... For now. If you were directing a horror movie about the server admin mentioned above, what would you make up for the bogeyman that shows up and starts to pursue him? Omitting hardware disaster cases, for which one cannot do anything remotely, what are the most common causes of significant server (or part-of-server, or server-related) failure when the machine is managed by an inexperienced admin? I have in mind something that newbie admins very often miss and that leads to later intervention by someone with experience. Might that be some uncontrolled CPU-eating leftover process, a memory-related glitch, a widely-used feature that messes up something unexpected, or anything like that? The newbie admin for now only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips about what's probably going to happen to his server over time.

    Read the article

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried with this config (taken from the docs) but it didn't work:

        # /etc/nginx/nginx.conf
        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        # /etc/nginx/sites-available/website
        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings; I need to control caching from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 seconds. Instead, nginx will store it for 1h because of those directives. I was also wondering whether I should try proxy_cache; I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this: "The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented." nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the amount of active connections is usually higher than the amount of inactive connections; it used to be the other way around. Basically, on the old load balancer the connections looked something like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       12          252
        -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       313         141
        -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3. New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning scheme changed). Is it because the old load balancer was running a kernel from 2005 and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance! Additional info: I'm using LVS in masquerading mode, and the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same amount of inactive connections as shown in ipvsadm.

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application and not the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drive and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the IO benefits they offer, and was wondering if they would be well-suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like with a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain-text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the linux kernel to cache the appropriate files in memory by itself?
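
    A hedged sketch of the RAM-disk variant described above, using tmpfs (mount point and size are illustrative). Worth noting before committing to it: Linux already keeps frequently read files in the page cache, and a PHP opcode cache (e.g. APC) usually removes most of the per-request parse cost, so measuring both options is the safer path:

        # create the RAM-backed mount and copy the application into it at boot
        mkdir -p /mnt/app-ram
        mount -t tmpfs -o size=512m tmpfs /mnt/app-ram
        cp -a /var/www/app/. /mnt/app-ram/
        # then point Apache's DocumentRoot at /mnt/app-ram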

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 to 15 on the private network have no problems connecting to the server, but when user 16 tries to connect, a timeout occurs and they are unable to reach the server. It's not just the PHP application: no service on the server responds. While the 15 users are using the application, the server doesn't even answer ping. I haven't set any special limit in Apache's configuration or MySQL, and the firewall is turned off because the server only serves the internal network. Is there a parameter in any of the network card configuration files that might be causing this? Or should I suspect the router's or switch's configuration? UPDATE: Tomorrow I'm going to run some tests on the server, modifying two kernel parameters in /etc/sysctl.conf: net.core.somaxconn, which sets the size of the listen backlog for new connections, and kernel.shmmax, which sets the maximum size of a shared memory segment.
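
    For reference, a hedged sketch of how those settings would be raised (values are illustrative). Note that a hard ceiling of roughly 15 clients is at least as likely to come from the switch/router (a NAT or session limit) or from a connection-tracking table as from the server itself:

        # /etc/sysctl.conf (illustrative values)
        net.core.somaxconn = 1024
        net.netfilter.nf_conntrack_max = 65536

        # apply and verify without rebooting
        sysctl -p
        sysctl net.core.somaxconn net.netfilter.nf_conntrack_max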

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check whether script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I created /home/user/web/id.php and pasted <?php system('id'); ?> into it, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thank you.
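
    A hedged sketch of the usual culprit with this symptom: the vhost maps .php to application/x-httpd-php, which is mod_php's type, while suphp.conf only defines a handler for application/x-httpd-suphp, so scripts keep executing as www-data. Mapping .php to the suPHP type (and disabling mod_php for the vhost) is the typical fix; the placement below is illustrative:

        <IfModule mod_suphp.c>
            AddHandler application/x-httpd-suphp .php
            suPHP_AddHandler application/x-httpd-suphp
        </IfModule>
        <IfModule mod_php5.c>
            php_admin_flag engine off
        </IfModule>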

    Read the article

  • Securing phpmyadmin: non-standard port + https

    - by elect
    Trying to secure phpMyAdmin, we have already done the following: cookie auth login; firewalled off TCP port 3306; running on a non-standard port. Now we would like to implement HTTPS... but how can that work with phpMyAdmin already running on a non-standard port? This is the Apache config:

        # PHP MY ADMIN
        <VirtualHost *:$CUSTOMPORT>
            Alias /phpmyadmin /usr/share/phpmyadmin

            <Directory /usr/share/phpmyadmin>
                Options FollowSymLinks
                DirectoryIndex index.php
                <IfModule mod_php5.c>
                    AddType application/x-httpd-php .php
                    php_flag magic_quotes_gpc Off
                    php_flag track_vars On
                    php_flag register_globals Off
                    php_value include_path .
                </IfModule>
            </Directory>

            # Disallow web access to directories that don't need it
            <Directory /usr/share/phpmyadmin/libraries>
                Order Deny,Allow
                Deny from All
            </Directory>
            <Directory /usr/share/phpmyadmin/setup/lib>
                Order Deny,Allow
                Deny from All
            </Directory>

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/phpmyadmin.log combined
        </VirtualHost>
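
    HTTPS does not depend on the port number, so the non-standard port is not an obstacle: enabling mod_ssl inside the same custom-port vhost is enough. A hedged sketch with illustrative certificate paths; if phpMyAdmin then generates wrong redirect URLs, its PmaAbsoluteUri setting can pin the full https:// address:

        <VirtualHost *:$CUSTOMPORT>
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/phpmyadmin.crt
            SSLCertificateKeyFile /etc/ssl/private/phpmyadmin.key
            Alias /phpmyadmin /usr/share/phpmyadmin
            # ... rest of the existing vhost unchanged ...
        </VirtualHost>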

    Read the article

  • Server Intermittently Inaccessible Externally (but Accessible Internally Continuously)

    - by nicorellius
    I have a CRM on a server on a network. We have a static IP and another server outward facing. We use port-forwarding to map to the CRM, so that when you go to the IP or the FQDN, you get to the CRM: xxx.xxx.xxx.xxx crm.example.com Internally, we can access the CRM by going to crm or crm.example.com Lately, I've been noticing that accessing the server from outside the network times out or gives 503, bad gateway. During that time, I can also SSH (different port, so this works) into the outward facing computer and access the server just fine. I have a robot monitoring the site and indeed via HTTP monitoring the site is going down periodically. I looked through the Apache server access and error logs and nothing stuck out at me so I'm a bit confused as to what could be going on. I also searched the access logs for 503 and found nothing. When I run tracert from outside the network, it appears the packets basically make it through the wider area servers (Comcast city and county servers) and end up dropping at the CRM server's front step. I'm tempted to replace the server because it is older and underpowered but it would be nice to know what is going on. Any ideas what to do next?
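
    A hedged sketch of an external probe (run from a machine outside the network, since internal access never fails) that records the status code and total response time each minute; the script and log paths are assumptions, crm.example.com is the name from the question:

        #!/bin/sh
        # /usr/local/bin/crm-probe.sh -- call every minute from cron on an external host
        echo "$(date '+%F %T') $(curl -s -o /dev/null --max-time 30 \
            -w '%{http_code} %{time_total}s' http://crm.example.com/)" >> /var/log/crm-probe.log

    Since a 502/503 from the outward-facing box usually means the port-forward or proxy could not reach the CRM rather than Apache itself failing, correlating the probe's failures with the forwarder's own logs is a natural next step.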

    Read the article

  • Hostname vs webpage domain.

    - by Mark
    Hi all, I'm just starting to look at deploying a webpage and getting into the joys of DNS, etc., and I'm wondering how you set up multiple web servers, each with their own hostnames/public IP addresses, and yet have them serve up a webpage from one domain. For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it, you would want the hostnames of these servers to be prod1.example.com and prod2.example.com, and perhaps loadb.example.com. How would you set up the DNS so this would all work, i.e. so you could SSH to any of the server domains (prod1.example.com, prod2.example.com or loadb.example.com) and also just use the www.example.com URL to go to the website? And would all these server names be resolvable from the public internet, and is that safe? This would be a Linux environment, for argument's sake Ubuntu, with a Django-framework dynamic website running in Apache 2.2. Cheers, Mark
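
    A hedged zone-file sketch (BIND syntax) of one common layout; 1.2.3.4 is the example address from the question, the 192.0.2.x addresses are placeholders, and whether prod1/prod2 need public records at all depends on whether they must be reachable directly from the internet:

        example.com.         IN A      1.2.3.4      ; the load balancer's public IP
        www.example.com.     IN CNAME  example.com.
        loadb.example.com.   IN A      1.2.3.4
        prod1.example.com.   IN A      192.0.2.11   ; omit (or keep in an internal zone) if not needed publicly
        prod2.example.com.   IN A      192.0.2.12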

    Read the article

  • Install multiport module on iptables

    - by tarteauxfraises
    I'm trying to install fail2ban on Cubidebian, a Debian port for the Cubieboard (a Raspberry Pi-like board). The following rule fails because of the "-m multiport --dports ssh" options (it works when I run the command manually without the multiport options):

        $ iptables -I INPUT -p tcp -m multiport --dports ssh -j fail2ban-ssh
        iptables: No chain/target/match by that name.

    When I cat /proc/net/ip_tables_matches, I see that the multiport module is not loaded:

        $ cat /proc/net/ip_tables_matches
        u32 time string statistic state owner pkttype mac limit helper connmark mark ah icmp socket socket quota2 policy length iprange ttl hashlimit ecn udplite udp tcp

    The result of the iptables -L -n -v command:

        $ iptables -L -n -v
        Chain INPUT (policy ACCEPT 6 packets, 456 bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain OUTPUT (policy ACCEPT 3 packets, 396 bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain fail2ban-apache (0 references)
         pkts bytes target     prot opt in     out     source               destination
            0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

        Chain fail2ban-ssh (0 references)
         pkts bytes target     prot opt in     out     source               destination
            0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

    What can I do to compile or enable the multiport module? Thanks in advance for your help.
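
    A hedged sketch of the usual checks: the multiport match lives in a separate kernel module, so if it was built as a module it just needs loading, and if it is missing entirely the board's kernel was built without it and needs a rebuild with the relevant option enabled:

        # try loading the module (older kernels name it ipt_multiport)
        modprobe xt_multiport
        lsmod | grep multiport
        grep multiport /proc/net/ip_tables_matches

        # if modprobe cannot find it, the kernel config lacks:
        #   CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m (or =y)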

    Read the article

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. In my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view, and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution to the internal RFC1918 addresses of these servers, as well as all the inside hosts, network equipment, etc. I connect with an OpenVPN client to my home network from outside (such as work). What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue with this is that I'm then no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things). In the end, the effect I'm trying to achieve is as follows: when I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests go there again). I'm looking for suggestions as to how this can be accomplished.
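
    A hedged sketch of one common way to get this split behaviour on the laptop: run a local dnsmasq as the system resolver and forward only the home zone to the nameserver reachable across the VPN. The VPN-side address is a placeholder, and the entry could be dropped in or removed by the OpenVPN up/down scripts so it only applies while connected:

        # /etc/dnsmasq.d/homevpn.conf
        # send queries for the home domain to the home nameserver (via the VPN)
        server=/myhomedomain.com/10.8.0.1
        # everything else keeps using the resolvers from resolv.conf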

    Read the article

< Previous Page | 386 387 388 389 390 391 392 393 394 395 396 397  | Next Page >