Search Results

Search found 13625 results on 545 pages for 'apache math'.


  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh:

        export MYVAR=myval

    I also have a PassEnv MYVAR line in a <VirtualHost> section of an apache conf dir. That lets me do things like:

        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval

    But then, despite that being the case, I get:

        root# /sbin/service httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                                                   [  OK  ]

    and all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong?

    EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information; the most common piece is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are: put things in the shell environment, or put things in other config files. Getting things into the shell environment is the best option, because we won't need to write yet more duplicated "what is my environment" code.
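
    A likely cause worth ruling out (this is a reading of the symptoms, not something stated in the question): /etc/profile.d/*.sh is only sourced by login shells, and service(8) deliberately runs init scripts with a scrubbed, non-login environment -- which is exactly why every interactive shell above sees MYVAR while httpd does not. On a RHEL-style system the httpd init script sources /etc/sysconfig/httpd, so a minimal sketch of the fix is:

        # /etc/sysconfig/httpd -- read by the httpd init script at start-up
        export MYVAR=myval

    Alternatively, SetEnv MYVAR myval inside the vhost sets the variable from Apache's own config with no PassEnv needed, if the value is allowed to live in the web server configuration.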


  • nginx + IIS + GET

    - by Eralde
    I have nginx on pc "A" & IIS with ASP.NET on pc "B". nginx is configured like this:

        ...
        location ~ ((Web|Script)Resource.*)$ {
            proxy_pass "B"/$1;
            proxy_redirect off;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header REQUEST_URI $request_uri;
            proxy_set_header HTTP_REFERER $http_referer;
            #proxy_set_header REQUEST_URI $request_uri;
            proxy_set_header QUERY_STRING $query_string;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        ...

    but requests to "B"/WebScript?a=b&c=d aren't able to deliver the GET data (a=b&c=d) to the IIS side. Could anyone help with this?

    Edit: some additional info: nginx is also configured to proxy other data to Apache, running on "A", and everything is fine there (at least GET is OK). The configuration is the same as above, but for a different location.
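
    One detail stands out in the config as quoted (offered as a checkpoint, not a confirmed diagnosis): the capture $1 holds only the path part of the URL, and when proxy_pass is built from variables nginx forwards exactly what it is given, so the query string is never appended automatically. The standard idiom is to add it back explicitly:

        location ~ ((Web|Script)Resource.*)$ {
            # $is_args expands to "?" whenever $args is non-empty, so a=b&c=d survives the hop
            proxy_pass http://B/$1$is_args$args;
        }

    ("B" stands in for the real backend address, as in the question; the rest of the block can stay as it was.)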


  • How to set up multi users on dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have 2 servers (Debian), Live and Dev. I constantly work on dev to add new features and fix bugs, and when I'm happy that all works well I scp the relevant code to the Live system. The database (MySQL) is local to each machine. Now, this is a pretty basic setup really and I want to improve the workflow a bit. I use git and github for version control; admittedly I've only really used one branch. There can be 3 different developers who work on the code at different times. We all use the same linux username to connect to the dev server and edit the code directly when needed, and I usually commit and push the code to github at the end of the day. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server, not locally. I need to create a new process because we now need a main trunk and a branch with a big code re-write. What is the best way to do this? Should I create different unix logins for each developer and set up different working areas on the dev server for their changes? e.g.

        /var/www/mysite_derek
        /var/www/mysite_paul
        /var/www/mysite_mike

    My thinking is that they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work with git locally and with github, though; will I need to create different github user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?
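
    A minimal sketch of the per-developer layout being described (the directory names are the question's own; the account name and repo URL are invented for illustration):

        # on the dev server, one unix login and one clone per developer
        sudo adduser derek
        sudo -iu derek
        git clone git@github.com:yourorg/mysite.git /var/www/mysite_derek
        cd /var/www/mysite_derek
        git checkout -b derek-rewrite     # personal topic branch off the main trunk
        # ...edit, test against the dev apache, commit...
        git push origin derek-rewrite     # publish the branch; merge to main when ready

    Separate github accounts are not strictly required for this to work (any account or deploy key that can push will do), but per-developer accounts make the commit history and code review far more readable.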


  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this being something specific to the application used at my workplace, so I compared with CentOS and Debian based computers at home last night. The CentOS box, which runs 24 hours a day, had a mod time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from it and STDOUT to it, and the modification time was unchanged (I'm not 100% sure whether directing data to /dev/null constitutes "writing to it" in the way it would for a normal file). So my question is essentially: could anybody please offer any advice as to what circumstances (permissions changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.
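
    For reference, a quick way to check what state /dev/null is in, and to rebuild it if something has replaced it with a regular file. The mknod numbers below are the Linux ones; on AIX the null device's major/minor differ, so there they would need to be copied from a healthy box:

        stat /dev/null        # should report "character special file"; "regular file" is the broken state
        ls -l /dev/null       # healthy on Linux: crw-rw-rw- 1 root root 1, 3 ...

        # recreate it (Linux: the null device is character major 1, minor 3)
        sudo rm /dev/null
        sudo mknod -m 666 /dev/null c 1 3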


  • Better urls for this internal web server?

    - by sprugman
    I've got a server that I have admin access to, but don't fully manage. (I think it's a virtual machine, but I'm not 100% sure. It's running Apache on Windows Server 2003.) I share the ip with another user, so my sites all have to use the :8080 port, which is kind of ugly. Also, AFAIK, the only access I have is through an ip address. (I'm inside a corporate firewall and don't think I have access to a DNS server or anything.) I've adjusted my hosts file so I don't have to use the ip address on my local machine, but that's not a very generic solution. Are there any options to:

        1) get rid of the port requirement?
        2) use a name (maybe a machine name) instead of the ip address, in a generic way?

    (I'm not really a network admin -- I'm a developer managing this machine. The IT folks who really manage it are a few people away from me and tough to get to do anything, so I'm looking for a light-weight solution if possible.)
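
    On point 2, without access to DNS the usual stopgap is pushing the same hosts entry you already use out to each client that needs the friendly name (the address and name below are placeholders):

        # append to C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on unix clients)
        10.1.2.3    devserver

    after which http://devserver:8080/ works on every machine carrying the entry. Point 1 is harder: the port can only disappear if something answers on port 80 for that ip, and since the other user already owns 80 on the shared address, realistically that means a second ip or a cooperating proxy in front -- both of which need the IT folks.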


  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure of something like:

        /var/www/sites/vhost1, vhost2, vhost3
            .../wordpress/blogX
            .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
            .../wordpress/blogX/uploads
            .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the wordpress and mediawiki installs are straight from SVN, and updates are done by switching branches or "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, excel document, whatever, and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less-technical people. Ideally it would be awesome if this visual representation could then be expanded to get more technical details.
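
    One low-effort way to keep a diagram that grows with the technical detail is a Graphviz file checked in next to the configs (a sketch; it assumes Graphviz is installed, and the node labels just mirror the layout above):

        /* infra.dot -- render with: dot -Tpng infra.dot -o infra.png */
        digraph infra {
            rankdir=LR;
            test   [label="testing server"];
            live   [label="live server"];
            sites  [label="/var/www/sites\nvhost1..3, blogs, wikis"];
            data   [label="/var/www/data\nuploads, images"];
            test -> live  [label="rsync after testing"];
            live -> sites;
            sites -> data [label="symlinks"];
        }

    Because it is plain text, the .dot file can live in SVN with everything else, and extra nodes (conf.d files, cron tasks, SVN branches) can be added as the audience gets more technical -- the "expandable" property asked for above.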


  • Nginx Removes the index.php from URL

    - by codeHead
    I have a codeigniter php application on nginx. It works as expected on Apache, but after moving to nginx I noticed that the index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need to use my URL with the index.php -- please help.
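
    Two things to check, offered as a sketch rather than a verified fix. First, the links themselves are generated by CodeIgniter, not nginx, so $config['index_page'] in application/config/config.php should still say 'index.php' if the links are to keep it. Second, for /index.php/controller/method URLs to route correctly, the PHP block has to hand CodeIgniter a PATH_INFO, along these lines:

        location ~ ^(.+\.php)(/.*)?$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_pass backend;
            include fastcgi.conf;   # already sets SCRIPT_FILENAME from $fastcgi_script_name
        }

    paired with $config['uri_protocol'] = 'PATH_INFO'; on the CodeIgniter side. Without PATH_INFO, CodeIgniter sees an empty route for every /index.php/... request, which matches the "always lands on the default controller" symptom described above.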


  • cannot get mssql working with sql server 2005

    - by Ryan
    I'm a MySQL/Apache user trying my hand with IIS and SQL server, so please have patience if this is a stupid question. I'm using IIS version 7.5, PHP version 5.3.13 and SQL server 2005. IIS is running on port 90; not sure if that will make a difference or not. I know my sql server is running because I can explore/connect to it in Server Management Studio. I know php is configured properly, because //localhost:90/phpinfo.php works fine. I updated the extension line in php.ini to:

        extension=ext/php_msql.dll

    EDIT: However, when I run phpinfo(), under the "configure command" row this is present: --without-mssql. I found/downloaded ntwdblib.dll and placed it in both sys32 and the php root. All these things were supposed to fix the issue, and they haven't. This is the code I'm using, straight from php.net:

        <?php
        // Server in the this format: <computer>\<instance name> or
        // <server>,<port> when using a non default port number
        $server = 'localhost';

        // Connect to MSSQL
        $link = mssql_connect($server, 'uname', 'pwd');

        if (!$link) {
            die('Something went wrong while connecting to MSSQL');
        }
        ?>

    Obviously I'm using a real username and password, but when I load the file in my browser, I receive a 500 error. Upon checking the log, this is what is displayed:

        2012-06-25 12:41:29 ::1 GET /test.php - 90 - ::1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/536.5+(KHTML,+like+Gecko)+Chrome/19.0.1084.56+Safari/536.5 500 0 0 5

    That (to me) doesn't help me much. What am I doing wrong? Thank you.
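
    Two observations on the details above, offered as things to verify rather than a certain fix. First, php_msql.dll (one "s") is the extension for the long-dead mSQL database; SQL Server needs php_mssql.dll, so the extension line may simply be loading the wrong module. Second, --without-mssql in the "configure command" row only records how the Windows build was compiled; shared extensions still load from php.ini as DLLs regardless. A sketch of the relevant lines, plus settings to surface the real error hiding behind the bare 500:

        ; php.ini (paths are assumptions -- match the actual install)
        extension_dir = "C:\php\ext"
        extension = php_mssql.dll       ; note: mssql, not msql

        ; while debugging only
        display_errors = On
        log_errors = On
        error_log = "C:\php\php_errors.log"

    After editing php.ini, restart IIS and confirm that an "mssql" section appears in the phpinfo() output before retrying mssql_connect(); if the section is missing, the extension never loaded and the connect call dies with a fatal "undefined function" error, a classic source of a blank 500.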


  • How do I restrict access to certain web files/folders on an IIS 7.5 based web server?

    - by cpuguru
    We're moving a website that was previously hosted on Win2k3 & IIS 6 to a Win2k8 R2 & IIS 7.5 platform. The website is public, but we want to restrict anonymous access to certain files and folders such that the user would be prompted for a password to access them. If this were Apache, a simple .htaccess file would serve the purpose. However, since it's IIS 7.5 and we're serving up mainly static HTML files and a few classic ASP pages, I'm in a bit of a quandary as to how to restrict access to individual files and folders for various committees, such that attempts to reach committee_1's files and/or folders would prompt the user for a password and, if entered correctly, would serve up their files -- same thing for committee_2 and so on. Under IIS 6, we would take away the read privileges for IIS_IUSRS and create a user called "committee_1" with a password known by the group, and give that user read privileges to the files/folders. There's got to be a better (and more secure) way. Reminder: these are not *.aspx pages being served up. Any suggestions on how to password protect key files and/or folders under IIS 7.5 are much appreciated.
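
    The closest IIS 7.5 analogue to .htaccess is a per-folder web.config. A sketch using URL Authorization (the schema below is the stock IIS one; it assumes the URL Authorization and Basic Authentication role services are installed, and that committee_1 exists as a Windows account):

        <!-- committee_1\web.config -->
        <configuration>
          <system.webServer>
            <security>
              <authorization>
                <remove users="*" roles="" verbs="" />
                <add accessType="Allow" users="committee_1" />
              </authorization>
            </security>
          </system.webServer>
        </configuration>

    Basic authentication also has to be enabled for that folder (anonymous can stay on for the rest of the site) so the browser knows to prompt. Because the check runs before any handler, it protects static HTML and classic ASP alike.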


  • environment variables generated by at command

    - by Jordan Arseno
    I'm inspecting /var/spool/cron/atjobs/a001cf01570e44 with cat, after running the at command from PHP using exec(). It looks like at has prepended the script with lots of APACHE environment variables:

        #!/bin/sh
        # atrun uid=33 gid=33
        # mail www-data 0
        umask 22
        APACHE_RUN_DIR=/var/run/apache2; export APACHE_RUN_DIR
        APACHE_PID_FILE=/var/run/apache2.pid; export APACHE_PID_FILE
        PATH=/usr/local/bin:/usr/bin:/bin; export PATH
        APACHE_LOCK_DIR=/var/lock/apache2; export APACHE_LOCK_DIR
        LANG=C; export LANG
        APACHE_RUN_USER=www-data; export APACHE_RUN_USER
        APACHE_RUN_GROUP=www-data; export APACHE_RUN_GROUP
        APACHE_LOG_DIR=/var/log/apache2; export APACHE_LOG_DIR
        PWD=/home/jordanarseno/webroot/public_html/myapp; export PWD
        cd /home/jordanarseno/webroot/public\_html/myapp || {
            echo 'Execution directory inaccessible' >&2
            exit 1
        }
        curl -k http://localhost/myapp/crons/this_action/3

    The last line is the only real command I sent along with at via stdin. What is the purpose of these variables? Where is this procedure stored?
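
    To both questions: at(1) snapshots the environment of the process that queues the job -- here Apache, by way of PHP's exec() -- and replays it when the job runs, so the command executes in the same surroundings it was submitted from; that is why the job is full of APACHE_* exports. And the "procedure" is stored exactly where this one was found: each queued job becomes a self-contained shell script under /var/spool/cron/atjobs (the Debian/Ubuntu location). If the Apache variables are unwanted, a sketch of queuing the same job from a scrubbed environment:

        # env -i starts at with an empty environment; keep only an explicit PATH
        echo 'curl -k http://localhost/myapp/crons/this_action/3' | \
            env -i PATH=/usr/local/bin:/usr/bin:/bin at now + 5 minutes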


  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets:

        - A virtual private server (Debian Lenny) with root access for myself, running SSH, apache2 and mysql
        - Some unused disk space
        - Some friends in need of hosting

    The problem: I would now like to do the following:

        - Host one or several domains per friend
        - My friends should have full access to their domains, including running PHP scripts, for example
        - My friends should not be able to poke around in other directories
        - The security of my server should not be compromised by faulty PHP scripts

    To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access; I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario?

    Partial solution: I already came up with the following plan:

        - Add chrooted SSH users for my friends
        - Add Apache vhosts per user (point the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.)

    But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.
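
    A sketch of the script-confinement piece using mod_php's open_basedir -- not a true chroot, but it stops each site's PHP from reading or writing outside the listed paths (it assumes PHP runs as mod_php, and reuses the alice paths from the plan above):

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /home/alice/example.com
            # PHP in this vhost may only touch the site tree and a private tmp dir
            php_admin_value open_basedir "/home/alice/example.com:/home/alice/tmp"
            php_admin_value upload_tmp_dir "/home/alice/tmp"
        </VirtualHost>

    The php_admin_* forms cannot be overridden from user scripts or .htaccess, which is the point. For stronger isolation -- each friend's scripts running as their own unix user instead of www-data -- suexec with mod_fcgid is the usual next step, at the cost of noticeably more setup.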


  • Ubuntu server 10.04 disconnects after short periods of inactivity on my site

    - by user57019
    I'm new to Ubuntu (installed it for the first time just a couple of days ago on my server). I have Ubuntu Server 10.04 and am just using the terminal, no GUI like Gnome. So far it's working pretty great except for one big thing. Whenever I go to sleep and there's no activity on my server (it's not a big site, so active users drop to 0 during the night), the server kind of disconnects. The only thing that can bring the site back online is to restart the whole server. I've tried disabling powersaving by using setterm, but that changes nothing. Even if I wake up the server by pressing any key or so, the site won't go back online! I've tried just restarting both Apache and MySQL (I'm using a LAMP server, btw) but not even that works. But as soon as I turn the power off and on at the server, everything works like normal for a couple of minutes of inactivity (~5-15 minutes I'd guess) and then it's down again unless someone logs in to the site and is active. I was previously using XAMPP on my laptop with Windows XP and that worked 24/7, so I don't think it's anything with my router or ISP. This is driving me crazy! My site is down all the time I'm in school, as I have no possibility to restart the server if it goes offline. Does anyone have a clue as to what could be wrong?
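
    Some first diagnostics for the next time it happens, run from the server's own console before power-cycling (all standard tools on a 10.04 LAMP box; the interface name is the usual default and may differ):

        ping -c3 8.8.8.8                   # is the network stack up at all?
        ip link show eth0                  # look for "state UP" on the interface
        dmesg | tail -50                   # NIC driver resets / power-management messages
        sudo netstat -tlnp | grep ':80'    # is Apache still listening?
        tail -50 /var/log/apache2/error.log /var/log/syslog

    The detail that a full power cycle fixes it while restarting Apache and MySQL does not points below the application layer; NIC power saving or a flaky driver is a common culprit for exactly this "dies when idle" pattern, and dmesg is usually where that shows up.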


  • No Network Connection in WinXP image from Microsoft running on VirtualBox 3.1.6 OSE (Ubuntu 10.04) due to missing CD Rom

    - by Bevor
    I'd like to test local websites in IE7 and IE8. To do that, I thought about using the free Microsoft images: http://www.microsoft.com/windowsxp/using/networking/setup/default.mspx -- I converted the VHDs to VDIs to make them run in VirtualBox ( http://www.qc4blog.com/?p=721 ), and this works fine. The problem is that in this Windows XP installation there is no network adapter configured. Actually, nothing at all is configured, because it needs the Windows XP CD-ROM to do that. If I had a Windows XP CD-ROM, I would not need to run the Microsoft image in the first place. So is there some kind of workaround to get an internet connection? Meanwhile I set "bridged" in VirtualBox, but this doesn't help, because "ipconfig /all" in the guest system doesn't show any data, since nothing is configured. How can I get a connection to my local Apache (host system)? http://localhost would be enough. By the way: I can't install the "Guest Additions" -- when I do that, the 3-day trial period of the guest system is suddenly gone, so I can't use it anymore, and that makes it pointless. Any ideas?

    Update: I've tried the Vista image and it gets an internet connection. From the Vista image I can get to my site with 192.168.1.3/mywebsite in the browser URL. So I don't really care about the WinXP issue anymore, but I would be glad if anyone still knows a solution.
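
    One workaround that sometimes rescues the XP image (hedged: the flags below are VirtualBox 3.x VBoxManage syntax, and the VM name is a placeholder): XP ships with a built-in driver for AMD PCnet network cards, so forcing that emulated adapter can bring networking up without the install CD or the Guest Additions:

        VBoxManage modifyvm "WinXP-IE7" --nic1 nat --nictype1 Am79C973
        # inside the guest afterwards: Device Manager should list an AMD PCNET adapter,
        # and "ipconfig /renew" should pull a 10.0.2.x address from VirtualBox's NAT

    With NAT networking, the host is reachable from the guest at 10.0.2.2, so http://10.0.2.2/ serves the host's Apache; with bridged mode, the host's LAN address works instead, exactly as observed with the Vista image.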


  • Dump Trac DB on Windows/XAMPP

    - by Whiteknight
    I have a Trac instance running on a Windows XP machine with XAMPP, and I am trying to migrate it to a newer Linux-based machine. However, I'm having a hard time getting the database to cooperate. I try to dump the db with this command:

        sqlite3 C:\tracroot\db\trac.db ".dump" >> mysqldump.sql

    But the generated file is mostly empty:

        BEGIN TRANSACTION;
        COMMIT;

    So that's not right. For the record, my trac instance is running now and appears to have full access to all the contents of the DB, but sqlite3 (located in C:\xampp\apache\bin) can't seem to get any information from the file. The DB file itself has the header "SQLite format 3", so that seems to be correct. I need to know one of two things: how to get this dump working, OR an alternate way to migrate the Trac database to the new machine.

    Update: When I try to open the .db file in sqlite3, I get the error "Error: unsupported file format". What format is it in, and why is it unsupported?
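
    "Unsupported file format" from the sqlite3 shell usually means the shell is older than the library that wrote the database (an old bundled sqlite3.exe is a frequent offender), not that the file is corrupt -- which would also explain the empty dump. Two escape routes, both sketches. First, dump through Python's sqlite3 module, which Trac itself relies on (the path is the question's own):

        python -c "import sqlite3, sys; con = sqlite3.connect(r'C:\tracroot\db\trac.db'); sys.stdout.writelines(line + '\n' for line in con.iterdump())" > trac-dump.sql

    Second, skip raw SQL entirely: trac-admin's hotcopy takes a consistent snapshot of the whole environment, which can then be copied to the Linux box and upgraded in place:

        trac-admin C:\tracroot hotcopy C:\trac-backup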


  • iptables port redirection on Ubuntu

    - by Xi.
    I have an apache server running on 8100. When I open http://localhost:8100 in a browser, the site loads correctly. Now I would like to direct all requests on 80 to 8100, so that the site can be accessed without the port number. I am not familiar with iptables, so I searched for solutions online; this is one of the methods I have tried:

        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 8100 -j ACCEPT
        user@ubuntu:~$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8100

    It's not working: the site works on 8100 but not on 80. If I print out the rules using "iptables -t nat -L -n -v", this is what I see:

        user@ubuntu:~$ sudo iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target     prot opt in   out  source      destination
            0     0 REDIRECT   tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    tcp dpt:80 redir ports 8100

        Chain INPUT (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target     prot opt in   out  source      destination

        Chain OUTPUT (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target     prot opt in   out  source      destination

        Chain POSTROUTING (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target     prot opt in   out  source      destination

    The OS is Ubuntu on VMware. I thought this would be a simple task, but I have been working on it for hours without success. :( What am I missing?
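
    The -v output above already holds the clue: the REDIRECT rule has matched 0 packets. PREROUTING is only traversed by packets arriving from outside the machine; connections opened on the box itself (a browser on the VM hitting http://localhost) go through the nat OUTPUT chain instead, so the rule never fires for local tests. A sketch covering both cases:

        # mirror the redirect for locally-generated traffic
        sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8100

        # then exercise both paths
        curl -I http://localhost/     # local: hits nat OUTPUT
        curl -I http://<vm-ip>/       # from the host machine: hits PREROUTING

    If the external test still fails afterwards, the VMware networking mode (NAT vs bridged) determines whether outside packets ever reach the VM's PREROUTING chain on port 80 at all.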


  • MySQL has stopped accepting connections from other users than root

    - by John
    Hi there. I'm running a mysql-server along with apache and tomcat on a Gentoo box, and I use phpMyAdmin to administer mysql. A couple of hours ago I received a call: a user was unable to log in to phpmyadmin. I logged on to phpmyadmin with the root user and reset the password, but the user was still not able to log in. I then decided to give it a go myself, and even I wasn't able to log in. I tried creating several user accounts; none of them were able to access mysql via jdbc, the mysql client or phpmyadmin. The only user that seems to work is root. What's even more strange is that websites that connect to mysql with a user other than root are still able to log in and retrieve content from the database (it's mainly wordpress and a tomcat webapp). I have made sure it's not just cached: I was able to post SQL queries to the database via these web apps. However, I am unable to log in to phpmyadmin or the mysql client with this user, and I am also unable to set up a connection with this user for any new web applications. Any help is immensely appreciated.
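
    A diagnostic sketch, runnable as root since that account still works; the classic causes of "established connections keep working, every new login fails" are privilege tables edited directly without a flush, or user rows bound to a host that new clients no longer match ('localhost' vs '%' vs a hostname):

        mysql -u root -p -e "SELECT user, host FROM mysql.user ORDER BY user, host;"
        mysql -u root -p -e "SHOW GRANTS FOR 'someuser'@'localhost';"   # user name is a placeholder
        mysql -u root -p -e "FLUSH PRIVILEGES;"

    Already-open connections carry their authentication with them, which would explain why wordpress and the tomcat webapp sail on while every fresh login is refused.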



  • Subversion: Secure connection truncated

    - by Nick
    Hi, I'm trying to set up a subversion server with apache2/webdav access. I've created the repository and configured Apache according to the official book, and I can see the repository in a web browser. The browser shows:

        conf/ db/ hooks/ locks/

    although clicking any of those links gives an empty xml document like:

        <D:error>
          <C:error/>
          <m:human-readable errcode="2">
            Could not open the requested SVN filesystem
          </m:human-readable>
        </D:error>

    I've never used subversion before, so I assume this is correct? Anyway, when I try to connect via a command line client, it asks for my password; I give it, and then I get the (useless) error message:

        svn: OPTIONS of 'https://svn.mysite.com': Could not read status line: Secure connection truncated (https://svn.mysite.com)

    The command I'm using is:

        svn checkout https://svn.mysite.com/ svn.mysite.com

    Subversion was installed using Ubuntu's package manager; it's version 1.6.6 on Ubuntu 10.04. My VirtualHost configuration:

        <VirtualHost 123.123.12.12:443>
            ServerAdmin [email protected]
            ServerName svn.mysite.com

            <Location />
                DAV svn
                SVNParentPath /var/svn/repos
                SVNListParentPath On
                AuthType Basic
                AuthName "Subversion Repository"
                AuthUserFile /etc/subversion/passwd
                Require valid-user
            </Location>

            # Setup The SSL Certificate Paths
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/mysite.com.crt
            SSLCertificateKeyFile /etc/ssl/private/dmysite.com.key
        </VirtualHost>
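
    One structural hint in the symptoms (a reading of the config above, not a certainty): seeing conf/ db/ hooks/ locks/ in the browser means the URL points inside a single repository, while SVNParentPath tells Apache to expect a directory that contains repositories -- so each subdirectory of /var/svn/repos gets treated as a repo of its own, and opening one fails with "Could not open the requested SVN filesystem". With SVNParentPath, the layout and checkout URL would look like:

        # each repository lives one level below the parent path
        sudo svnadmin create /var/svn/repos/myproject
        sudo chown -R www-data:www-data /var/svn/repos/myproject   # Apache must own the repo files

        # and the client URL gains the repository name
        svn checkout https://svn.mysite.com/myproject

    (Alternatively, SVNPath /var/svn/repos -- without "Parent" -- is the directive for serving a single repository at the root URL.)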


  • Does Mac OS X throttle the RATE of socket creation?

    - by pbhogan
    This may seem programming related, but this is an OS question. I'm writing a small high performance daemon that takes thousands of connections per second. It's working fine on Linux (specifically Ubuntu 9.10 on EC2). On Mac OS X, if I throw a few thousand connections at it (roughly 16350) in a benchmark that simply opens a connection, does its thing and closes the connection, then the benchmark program hangs for several seconds waiting for a socket to become available before continuing (or timing out in the process). I used both Apache Bench and Siege (to make sure it wasn't the benchmark application). So why/how is Mac OS X limiting the RATE at which sockets can be used, and can I stop it from doing this? Or is there something else going on? I know there is a file descriptor limit, but I'm not hitting that. There is no error on accepting a socket; it simply hangs for a while after the first (roughly) 16000, waiting -- I assume -- for the OS to release a socket. This shouldn't happen, since all the prior sockets are closed at that point. They're supposed to become available at the rate they're closed, and they do on Ubuntu, but there seems to be some kind of multi (5-10?) second delay on Mac OS X. I tried tweaking with ulimit every which way. Nada.
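
    The numbers line up suspiciously well with ephemeral-port exhaustion rather than a deliberate rate limit: a closed TCP socket lingers in TIME_WAIT for 2*MSL before its port is reusable, and OS X's default ephemeral range (49152-65535) holds 16384 ports -- almost exactly where the benchmark stalls. A way to confirm, and the knob involved (sysctl names are the stock OS X/BSD ones; lowering MSL is a benchmarking-only tweak, not production advice):

        netstat -an | grep -c TIME_WAIT        # count sockets parked in TIME_WAIT
        sysctl net.inet.tcp.msl                # default 15000 (ms) -> 30 s in TIME_WAIT
        sudo sysctl -w net.inet.tcp.msl=1000   # shrinks the parking time to 2 s

    Linux's defaults differ in both the size of the ephemeral range and how TIME_WAIT sockets are handled, which is consistent with Ubuntu not hitting the same wall.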



  • How much Ram should I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! I'm currently on a VPSVille Cpanel3 account that has 768 MB guaranteed ram and 2048 MB burst ram (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, Cpanel, Apache and FastCGI. On the server itself I have a joomla community site with a forum system that generally has about 20 people on it max at any point, and even then, during the evening, no one. It's a pretty small site but has a number of modules running on it; it gets about 6000 visits a month. Also on the server is a wordpress site that gets about 80-150 visits a day, 2 other wordpress sites that aren't developed yet so they don't get any traffic at all, and 2 static html websites that also only get about 500 hits a month. All in all, no huge sites. The issue is that I get "out of memory" errors fairly frequently, which kill my server, and I need to reboot it in order to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much ram allotted to my account, and every time I send in a support ticket, they just tell me to upgrade the ram. Now, I'm still pretty new to all this, so I'm not a good judge of how much I really need for my sites to run. I don't know if my sites really do need this much, OR if vpsville has oversold their servers, doesn't actually have those resources available, and I'm getting ripped off. So, how much ram should I be using with my current setup? Thanks!
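
    A sketch of how to answer the "how much do I actually need" question from the box itself instead of guessing (standard tools on CentOS; run them while the sites are under normal load):

        free -m                          # real usage against the 768 MB guarantee
        ps aux --sort=-rss | head -15    # which processes are actually holding the RAM

    The back-of-envelope check for an Apache + FastCGI + MySQL stack: (number of PHP worker processes) x (typical per-process RSS) + MySQL's footprint should fit inside the guaranteed 768 MB, since burst RAM can evaporate whenever the host's other tenants get busy -- for instance, 25 workers at 25 MB each is already ~625 MB before MySQL. If the observed numbers fit comfortably and the out-of-memory kills keep happening anyway, oversold burst memory on the host becomes the more plausible explanation.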


  • Redirect without changing URL

    - by Coobadivin
    Here's the setup. We have a hardware load balancer with an http virtual cluster. Let's call this virtual cluster example1.com. This virtual cluster load balances between two squid reverse proxies which are also on the same physical servers as the web servers. Squid listens on 80 and points to itself as the cache_peer web server which listens on 81. We also have a standalone web server which we will call example2.com. What we are trying to do is create a subdirectory on example1.com called example1.com/example2. This will point to example2.com, but we want our users to stay at example1.com/example2 in their browser. So, it's like a redirect without actually being a redirect. How the hell do I go about doing this? Is this even possible? I'm looking at squid docs in the meantime. example1.com is running a proprietary web server - not Apache :( We can't host example2.com's content in example1.com's file system. These are two very different platforms.
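
    Since squid already fronts example1.com, what is being described is "add a second origin server behind the same reverse proxy, selected by path". A sketch in squid.conf terms (cache_peer, acl and cache_peer_access are real squid directives; the mapping details are assumptions, and stripping the /example2 prefix would additionally need a url_rewrite_program unless example2.com can serve its content under that path):

        # declare the standalone server as a second origin
        cache_peer example2.com parent 80 0 no-query originserver name=ex2

        # route only /example2/... requests to it
        acl ex2path urlpath_regex ^/example2
        cache_peer_access ex2 allow ex2path
        cache_peer_access ex2 deny all

    The browser never leaves example1.com/example2 because nothing ever issues a redirect; the classic caveat is that example2.com's pages must use relative links (or get rewritten) so their assets also resolve under the /example2 prefix.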


  • Windows 7 host with Ubuntu Guest and a performance hit, memory locks?

    - by Cyrylski
    I have a brand new Lenovo T510 with a Core i5 and 4GB of RAM, running Windows 7, and I installed Ubuntu 10.10 in a VirtualBox. For some reason the system gets really slow on this setup, which makes me really angry. The video card is shared, full 3D support is enabled, and 1GB of RAM is allocated for the Ubuntu machine. It may sound stupid, but WHY is the whole memory consumed in an instant when I run VirtualBox? I struggled for like 10 minutes restraining myself from a brutal reset, and now everything runs smooth, but memory "in use" in Resource Monitor is 3GB flat with only Chrome running. I'm new to Windows 7, but I'm really disappointed with the performance at this point... I used to work in a different environment with much slower hardware and there was no such problem (a WinXP guest over an Ubuntu host, 1GB out of 2GB allocated for the WinXP guest, on intel GMA). There I was capable of running Chrome, Firefox and an Apache server in 1GB of RAM on Ubuntu, plus Photoshop CS4 on the Windows XP guest, and it worked -- until I clogged the RAM totally, that is. In this case I can't get beyond setting up Ubuntu properly. I bet I'm doing something wrong.
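
    Part of the observation is expected VirtualBox behaviour rather than a fault: the guest's full allocation is claimed from the host as soon as the VM starts, on top of VirtualBox's own overhead, so host memory "in use" jumping the moment the VM launches is normal. The tunable parts are the allocation itself and the 3D acceleration (standard VBoxManage flags; the VM name is a placeholder, and the VM must be powered off first):

        VBoxManage modifyvm "Ubuntu-10.10" --memory 768        # shrink the guest if 1 GB starves a 4 GB host
        VBoxManage modifyvm "Ubuntu-10.10" --accelerate3d off  # 3D support costs host resources even when idle

    If "in use" stays at 3 GB after the VM is shut down, Resource Monitor's per-process working sets will show whether a VirtualBox process is still holding its allocation, which is the better indicator of a genuine problem than the headline number.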


  • Can I host multiple sites with one Amazon EC2 instance [duplicate]

    - by user22
    I currently have a VPS server, I pay around $75 per month, and I get: 40GB HD, 2GB RAM, 100GB BW, 6 core cpu (which I don't use much). I have only one live website running, and traffic is only 100 user visits per day at most. I mostly do my testing stuff and host some of my internal sites for playing with coding, but I do need one server. I am thinking of moving to Amazon EC2 if the price difference is not too big, because then I can learn some more stuff. I am thinking of getting the 3-year Heavy Utilization Reserved Instance, because my server will be running all day and night. I tried their online calculator: a Medium instance, Heavy reserved for 3 years, comes to $31 per month (effective price) for EC2, and even with EBS and S3 I think it's about $40 for all the other stuff, so I will be at no loss compared to what I get at present. Am I correct, or have I missed something? Now, on my current VPS I have Apache for PHP sites and mod_wsgi for python sites. I am not sure if I will be able to do all that stuff in Amazon EC2. Can I host both python and PHP sites in an Amazon EC2 instance using named virtual hosts and nginx?
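
    On the last question: yes -- an EC2 instance is just a Linux box, so the same Apache + mod_php + mod_wsgi arrangement as on the current VPS carries over unchanged, with or without nginx in front. A sketch with name-based virtual hosts (Apache 2.2-era syntax; domain names and paths are placeholders):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName php-site.example.com
            DocumentRoot /var/www/php-site
        </VirtualHost>

        <VirtualHost *:80>
            ServerName py-site.example.com
            WSGIScriptAlias / /var/www/py-site/app.wsgi
            WSGIDaemonProcess py-site user=www-data processes=2 threads=5
            WSGIProcessGroup py-site
        </VirtualHost>

    nginx is optional in this picture; it earns its keep as a static-file and proxy layer in front of Apache if RAM on the chosen instance size gets tight, but it isn't required for the PHP/python coexistence itself.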


  • What character can be safely used for naming files on unix/linux?

    - by Eric DANNIELOU
    Before yesterday, I used only lower case letters, numbers, dot (.) and underscore (_) for directory and file naming. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem)?

    P.S.: I can't believe this question hasn't been asked already on this site, but I've searched for the word "naming" and read the canonical questions without success (most are about computer names).

    Edit #1: (btw, I don't use upper case letters for file names. I don't remember why, but for a few months I have had production problems with upper case letters: some OSes do not handle them consistently!) Here's what happened yesterday at work: as usual, I had to create a self-signed SSL certificate, and as usual I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then comes the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt, example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting some stars in apache configuration file names to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworker/vendor script using rm, find or xargs could lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and one answer there already talks about disaster.

    Edit #2: Just figured out that ':' does not need to be escaped. Does anyone know why it is not used more in file names?
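
    For a concrete baseline (this part is the POSIX definition, not opinion): the "portable filename character set" is exactly A-Z, a-z, 0-9, dot, underscore and hyphen, with the added rule that a name should not begin with a hyphen. Everything else -- anything but '/' and the NUL byte -- is legal on Unix filesystems; the danger lives entirely in how shells and careless scripts expand such names later. A tiny demonstration:

        touch -- 'report_2013-01.txt'              # portable everywhere; '--' guards against leading-dash names
        touch -- '*'                               # a perfectly legal filename...
        for f in *; do printf '%s\n' "$f"; done    # ...harmless here only because "$f" is quoted
        # rm $f                                    # unquoted, '*' would re-expand to every file: the classic disaster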

