Search Results

Search found 20099 results on 804 pages for 'virtual host'.

  • Why won't SSI work in IIS?

    - by Josh Kodroff
    I can't get IIS to respect my SSI directives - it just outputs the #include directive as if it were regular old HTML. Here are the relevant data points: my file with the include directive is called index.html; the directive is <!-- #include file = "header.shtml" --> (it doesn't work with virtual either); the file being requested is in the same directory as the file doing the including; the SSI module is installed; and the SSINC-shtml handler mapping is present and enabled. I think it might be some sort of permissions issue (read/write/execute), but I don't know where those settings are in IIS 7.5.
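
    The handler mapping detail may be the answer hiding in the question: SSINC-shtml only covers *.shtml, so directives inside index.html are never processed. A minimal sketch (the handler name is my own) that also routes .html through the SSI module via appcmd, assuming a default IIS 7.5 install path:

        %windir%\system32\inetsrv\appcmd set config /section:handlers /+"[name='SSI-html',path='*.html',verb='GET,POST',modules='ServerSideIncludeModule',resourceType='File']"

    Renaming index.html to index.shtml would sidestep the extra mapping entirely.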

  • Crossover cable in addition to normal network connection on servers?

    - by Zero0ne
    I have two servers, both running Windows 2003 R2, each with two 10/100/1000 NIC ports. Both are connected to our LAN and joined to the domain, leaving one NIC port free on each server. The problem is that our main router is only 10/100 on the ports these servers are connected to. Since one server will host SQL Server 2005 and the other will run Altiris NS7, I was hoping I could connect the two directly with a crossover cable, thus taking advantage of their gigabit NICs. Is this possible? If so, what steps do I need to take? And what needs to be done so that when the app server communicates with the SQL server, it uses the direct link rather than traversing the LAN? Thanks a lot!
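
    One approach, sketched for Windows 2003 and assuming the spare NICs are both renamed "Crossover", an otherwise unused 10.0.0.0/24 subnet, and "sqlserver" as the SQL host name: give each free NIC a static address (no gateway), then pin the SQL server's name to its direct-link address in the app server's hosts file so SQL traffic prefers the gigabit link. Note that gigabit NICs usually auto-negotiate crossover (Auto MDI-X), so even a straight cable may work.

        rem On the SQL server
        netsh interface ip set address name="Crossover" static 10.0.0.1 255.255.255.0

        rem On the app server
        netsh interface ip set address name="Crossover" static 10.0.0.2 255.255.255.0
        echo 10.0.0.1  sqlserver>> %windir%\system32\drivers\etc\hosts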

  • Running Multiple Instances of Firefox

    - by Aaron Bush
    I am running Sysinternals Desktops 1.02 and Firefox 3.6.2. I have noticed that while I can have IE8 open in multiple virtual desktops, you cannot do the same with Firefox. If you try, you get the error message: "Firefox is already running, but is not responding. To open a new window, you must close the existing Firefox process, or restart your system." I did a little digging to work around this and came up with creating a second profile via the Firefox profile manager (accessed by starting Firefox with the "-p" switch). Unfortunately this created a new problem: my add-ons (of which I use many) do not stay synchronized between profiles. Is there a better approach here?
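
    The usual companion to the second profile is the -no-remote switch, which lets another Firefox instance start without tripping the "already running" check (the profile name below is a placeholder):

        firefox.exe -P desktop2 -no-remote

    Each profile still keeps its own add-ons, so this workaround does not by itself solve the synchronization problem.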

  • Which is the best way to deploy a JBoss application?

    - by andreash
    Hi there, we are currently developing a JBoss application. To deploy it, we have a total of four servers (three years old). I am wondering what the best setup might be. There could be a load balancer (even a load-balancer cluster, for failover) in front of two servers, each holding one JBoss and one PostgreSQL host inside Xen environments. Does this make sense? Are there other, better options? Thanks a lot for your advice!
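
    For the load-balancer tier, a minimal Apache mod_proxy_balancer sketch fronting the two JBoss nodes (hostnames and ports are placeholders; mod_proxy, mod_proxy_http, and mod_proxy_balancer must be enabled):

        <Proxy balancer://jbosscluster>
            BalancerMember http://jboss1.example.com:8080
            BalancerMember http://jboss2.example.com:8080
        </Proxy>
        ProxyPass / balancer://jbosscluster/
        ProxyPassReverse / http://jboss1.example.com:8080/
        ProxyPassReverse / http://jboss2.example.com:8080/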

  • Connect two computers together via an RS-232 serial port

    - by Richard
    Wondering if anyone knows of a solution for connecting with telnet/ssh through an RS-232 serial port. Edit: I am looking for a way to connect two computers via a serial port. I want to be able to view the file system of a computer through a serial port. Is this possible? Edit: I have now successfully connected two computers using RS-232 serial ports with a null modem. The instructions I used are located here. Now how do I get to the file system of the host computer? Any ideas?
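
    Since a bare null-modem link only gives you a terminal, one way to reach the remote file system is to run PPP over the serial line and then use ordinary sftp across it. A sketch assuming Linux on both ends, /dev/ttyS0, sshd running on the machine being browsed, and arbitrary point-to-point addresses:

        # On the machine whose files you want to reach
        pppd /dev/ttyS0 115200 192.168.7.1:192.168.7.2 local noauth persist

        # On the other machine
        pppd /dev/ttyS0 115200 192.168.7.2:192.168.7.1 local noauth persist

        # Then browse the remote file system over the link
        sftp user@192.168.7.1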

  • Xen disk mapping problem under OpenSolaris

    - by Louis
    I have a system with two hard disks. I wanted to use the simplicity of ZFS for my file server, and I also need to run Linux, so I chose Xen virtualization, which is supported on both systems. My GRUB is configured correctly and I can boot both systems. What I would like is to run both at once, with Solaris as dom0 and the Debian installed on the 2nd hard disk as a virtual machine. My problem is that I want to use a partition of my 1st hard disk (sda1 under Linux) and it does not work; I didn't find my use case on the web. The OpenSolaris device name of this partition is /dev/rdsk/c7d0p1, but when I use disk = [ 'phy:rdsk/c7d0p1,sda1,w' ] as the disk mapping in my Xen configuration file, I get the error: Error: Device 2049 (vbd) could not be connected. error: "rdsk/c7d0p1" is not a valid block device. I am lost.
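
    The phy: backend wants a full path to a block device, and rdsk/... is both a relative path and the character (raw) device. A sketch of the mapping I would try instead, assuming c7d0p1 is the intended slice - note /dev/dsk (block) rather than /dev/rdsk:

        disk = [ 'phy:/dev/dsk/c7d0p1,sda1,w' ]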

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running Apache 2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the Apache process swells by almost the same amount. So if somebody tries to upload a 200 MB file, one of the child processes swells to its current size + 200 MB. If two users simultaneously start uploading, my server crashes. It is the virtual memory size that increases, but since I am on an OpenVZ-based VPS, that is what counts. My questions are: is this normal Apache behavior, or can I do something to fix it? If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every Apache child accepting an upload. Thanks! Abhaya
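
    Whatever the root cause, it helps to cap how much a single upload can cost. A sketch of defensive limits (the ~200 MB figures just mirror the example above) for the Apache config and the php.ini used by the fcgid processes; this caps the damage rather than stopping Apache from buffering what it does accept:

        # httpd.conf / vhost: refuse request bodies over ~210 MB
        LimitRequestBody 220200960

        ; php.ini for the fcgid pool
        upload_max_filesize = 200M
        post_max_size = 210M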

  • If I re-key an SSL certificate for a 2nd/backup server, does the original still work?

    - by Matt
    We have a production server with a wildcard SSL certificate. I'm in the process of creating a backup/failover server that will host the same domains, and therefore will also need the SSL certificate. The certificate on the primary server was installed with the private key non-exportable, so I am unable to export the certificate for installation on the failover server. My question then is - if I re-key the certificate from Go Daddy, does the original certificate installed on the primary server cease to be valid? As an aside, the original (primary) server is IIS 6, the failover is IIS 7 (once the failover is operational, we'll likely upgrade the primary).

  • Running PHP scripts as the owner of the PHP file: security issues

    - by thomasrutter
    I'm using suexec to ensure that PHP scripts (and other CGI/FastCGI apps) are run as the account holder associated with the relevant virtual host. This secures each user's scripts from being read or written by other users. However, it occurs to me that this opens up a different security hole. Previously, the web server ran as an unprivileged user, with read-only access to users' files (unless a user changed the file permissions for some reason). Now, scripts run by the web server can also write to their owner's files. So while I've prevented different users taking advantage of each other's scripts, I've made it so that if some application has a remote code injection vulnerability, it has not only read access but also write access to all of that user's scripts and website. How can I deal with this? One idea I've had is to create a second user account for each account in the system, so that each user's files are owned by one account while their scripts run under the other. But that seems cumbersome.
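
    One way to flesh out the two-account idea, with hypothetical account names alice-deploy (owns the files) and alice-run (what suexec executes as): give the runtime account group read access only, and leave just an uploads directory writable:

        # files owned by the deploy account, group-readable by the runtime account
        chown -R alice-deploy:alice-run /home/alice/www
        find /home/alice/www -type d -exec chmod 750 {} +
        find /home/alice/www -type f -exec chmod 640 {} +

        # only the uploads area stays writable by the runtime account
        chown alice-run:alice-run /home/alice/www/uploads
        chmod 770 /home/alice/www/uploads

        # vhost: run CGI/FastCGI as the runtime account
        #   SuexecUserGroup alice-run alice-run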

  • Apache2 proxypass

    - by gatsby
    I'm trying to figure out why my Apache2 reverse proxy doesn't work; I hope someone can clarify. I'm using an Apache server as a gateway (IP 10.184.1.2) with ProxyPass. These are the directives I inserted in the 000-default config file:

        ProxyPass / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/

    The host 192.168.102.31 is an internal IP on a subnet which is not reachable directly by clients, only by the Apache gateway. When I try to access an address such as http://apache_gateway_name/dir, I see the client trying to reach the 192.168.102.31 address directly, and of course a timeout occurs. Can someone help? Best regards
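
    A few things worth checking, sketched for a Debian-style Apache: make sure the proxy modules are actually loaded, and remember that ProxyPassReverse only rewrites Location/Content-Location headers that match the URL given - redirects the backend issues under another name, and absolute links inside the HTML itself, are not touched (the latter is mod_proxy_html territory):

        sudo a2enmod proxy proxy_http
        sudo /etc/init.d/apache2 restart

        # in the vhost, alongside the two directives above:
        ProxyRequests Off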

  • Dovecot POP3 not working with Postfix

    - by samer na
    Connecting to POP3 fails:

        $ telnet localhost pop3
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

        $ netstat -l
        tcp   0   0 *:www                     LISTEN
        tcp   0   0 localhost.localdoma:ipp   LISTEN
        tcp   0   0 *:smtp                    LISTEN
        tcp   0   0 localhost.localdo:mysql   LISTEN

    There is nothing about Dovecot in mail.log or mail.err. When I run service dovecot start I get:

        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.553" (uid=1000 pid=26250 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))

    In dovecot.conf:

        protocols = imap imaps pop3 pop3s
        disable_plaintext_auth = no
        log_timestamp = "%Y-%m-%d %H:%M:%S "
        mail_location = maildir:/var/spool/mail/%d/%n
        mail_access_groups = mail
        first_valid_uid = 106
        first_valid_gid = 106
        protocol imap {
        }
        protocol pop3 {
          listen = *:110
          pop3_uidl_format = %08Xu%08Xv
        }
        protocol lda {
          postmaster_address = [email protected]
          mail_plugins = quota
          log_path = /var/log/dovecot-deliver.log
          info_log_path = /var/log/dovecot-deliver.log
        }
        auth default {
          mechanisms = digest-md5 plain
          passdb sql {
            args = /etc/dovecot/dovecot-mysql.conf
          }
          userdb sql {
            args = /etc/dovecot/dovecot-mysql.conf
          }
          user = root
        }
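
    The Upstart rejection is what you get when a non-root user asks init to start a job, and netstat shows nothing listening on 110 at all, so Dovecot simply isn't running. A quick sequence to try (sudo is the operative part):

        sudo service dovecot start
        sudo netstat -ltnp | grep dovecot    # expect listeners on 110, 995, etc.
        dovecot -n                           # dumps the config Dovecot actually parsed
        telnet localhost 110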

  • Renaming Z drive in DOSBox

    - by Jay Kominek
    DOSBox makes a virtual drive, which it names Z:, for storing utility stuff on. Clearly they're trying to stay out of your way, so you can do whatever you want with the C: drive. Swell, I understand that. But I've got some old database accessing software I really, really want to run which assumes it lives on the Z drive. So I need to get DOSBox's Z called anything else. (C would be fine with me.) I've seen mentions that it is possible, but no actual indication of how to do it. Anything that gets the job done is appreciated.
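
    If your DOSBox build supports MOUNT's -z switch (an assumption on my part - it is not in every release), you can move the internal drive out of the way and then mount the database software's directory as Z: (c:\olddb is a placeholder for wherever it lives on the host):

        mount -z y
        mount z c:\olddb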

  • Redmine + Backlogs not working on Turnkey Linux (Ubuntu)

    - by Riddler
    I'm trying to get Redmine + Backlogs working. For starters I took a virtual appliance with Redmine from Turnkey Linux (http://www.turnkeylinux.org/redmine) and installed Backlogs on top of it, following the installation instructions (http://www.redminebacklogs.net/en/installation/ - used method #2). It seems to have installed OK, but when I go to the "Backlogs" tab and attempt to create some stories, the first story shows an error/warning icon, and the others continue to display an "in progress" icon indefinitely (can't post a screenshot, unfortunately, but you can see one here: http://www.redmine.org/attachments/5329/Backlogs.jpg). None of the stories actually get created - leaving this tab and returning shows empty backlogs. So, what am I doing wrong, and how do I fix this?
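
    With nothing visible in the UI, the Rails log is the first place to look: reproduce the failed story creation while tailing production.log (the path below is a guess; adjust it to wherever the appliance installed Redmine):

        tail -f /var/www/redmine/log/production.log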

  • How can I limit CloudFront downloads

    - by Alex Crouzen
    I'm looking to use Amazon's CloudFront to host some content in the near future. Currently I'm keeping it very simple: I just upload my content to S3 and then make a distribution available via CloudFront. However, because I have a limited budget, I'd like to be able to limit the number of downloads or the money spent on bandwidth. As far as I can see, I can't set quotas or budgets the way you can in Google's App Engine, so I'm looking for another way of doing this. Has anyone had any experience with this? One approach I'm considering is placing a web server that issues redirects in between, but that rather defeats the simplicity of CloudFront for me.
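
    There is still no hard spending cap, but a billing alarm at least turns a surprise into an email. A sketch using the modern AWS CLI (the SNS topic ARN and $50 threshold are placeholders; billing metrics must be enabled, and they only live in us-east-1):

        aws cloudwatch put-metric-alarm \
            --region us-east-1 \
            --alarm-name cloudfront-budget \
            --namespace AWS/Billing \
            --metric-name EstimatedCharges \
            --dimensions Name=Currency,Value=USD \
            --statistic Maximum \
            --period 21600 \
            --evaluation-periods 1 \
            --threshold 50 \
            --comparison-operator GreaterThanThreshold \
            --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts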

  • Pasting extended ACL contents into telnet session to Cisco Router SIM

    - by Kyle Brandt
    I have a telnet session to a Dynamips router sim. When I try to paste the contents of an actually working ACL (retrieved from 'show run') into the access list, only part of it gets pasted. The session is something like:

        enable
        conf t
        ip access-list extended Internet
        <PASTE of Rules>

    It stops right in the middle of a line:

        permit tcp any host 123.123.123.123 gt 1

    which should be gt 1023. Anyone know what is happening? The source is an extended access list.
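
    The emulated console tends to drop characters when a paste arrives faster than it can process them. Two common workarounds: add a per-line delay in the terminal client (PuTTY has such a setting), or skip the console and merge the ACL from a TFTP server (the IP and filename below are placeholders):

        Router# copy tftp://192.0.2.10/internet-acl.txt running-config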

  • How would I put together a site requiring several TB? [closed]

    - by acidzombie24
    Let's say I have a site with unmetered 100 Mbps bandwidth (I assume that's bits?) and the RAM I require. Most plans I see offer HDDs of 250 GB or 1 TB. But what happens if I compile/generate enough data that I require 10 TB or 25 TB? (I'd likely have two servers, but...) I wouldn't be serving all of that data (well, not to the public), so a CDN wouldn't make sense. What do I do in this scenario? Do I need to get a custom plan from a hosting provider, and if so, how do I find one? Are there services that allow me to mount remote drives (that sounds wrong unless it's a CDN, so maybe not)? Are there hosts that deal specifically in unmetered bandwidth and lots of disk space? Math says ~1 TB is the most I'll ever need, but if I happen to need more I'd like to know my options.

  • Sending mail to local address crashes web server (sendmail)

    - by deceze
    When trying to send mail automatically from a script at example.com via PHP's mail() to [email protected], the Apache server throws an Internal Server Error. I believe it is internally configured to use sendmail. The message gets dropped into ~/dead.letter, and the general error log reads:

        [Wed May 12 11:26:45 2010] [error] [client xxx.xxx.xxx.xxx] malformed header from script. Bad header=/home/example/dead.letter... S: /home/example/www/test.php

    Sending to any other address, not @example.com, works just fine. I have googled and serverfaulted for solutions, but they all require editing configuration files in /etc/mail and similar system places, which is not an option, since this problem occurs on a shared host where I only have access to ~/. Does anyone have a suggestion?
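
    For the script side, it's worth ruling out header problems before blaming the mail setup: build the headers explicitly and set the envelope sender via mail()'s fifth argument (the addresses below are placeholders, and some shared hosts restrict -f):

        <?php
        $headers = "From: webmaster@example.com\r\n" .
                   "Reply-To: webmaster@example.com";
        $ok = mail('user@example.com', 'Test subject', 'Test body',
                   $headers, '-fwebmaster@example.com');
        var_dump($ok);   // false means the local sendmail wrapper refused the message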

  • Will this increase my VPS failure rate?

    - by Spencer Lim
    Will it increase my virtual private server's failure rate if I install Windows Server 2008 Enterprise, SQL Server 2008 Enterprise, IIS 7.5, ASP.NET MVC 2, Microsoft Exchange, and Team Foundation Server on one mini VPS? It is a shared plan on a Dell PowerEdge R710 with 16 GB of DDR3 ECC RAM (1 GB for this VPS), a Dell PERC 6i RAID controller (that thing alone is about 1.5k-2k), and 146 GB 15K RPM SAS HDDs (33 GB for this VPS) - each disk is freaking fast, over 300 MB read/write possible with proper tuning. The motherboard is a Dell with twin redundant PSUs (870 W, 85% efficiency), and it runs two Intel Xeon 5502 (quad-core) CPUs, so about 8 physical processors, fairly shared. Is there any rule of thumb for what services a single VPS should be limited to? Thanks for any reply.

  • Reverse nslookup fails for a single machine

    - by matt wilkie
    I have a computer on a Windows Active Directory network for which reverse DNS lookup fails. It doesn't matter which machine runs the lookup. The problem computer is a Debian VM on a Windows Server 2003 host.

        > nslookup wiki.dept
        Server:   primary.internal.domain.org
        Address:  192.111.222.44

        Name:     wiki.dept.internal.domain.org
        Address:  192.111.111.185

        > nslookup 192.111.111.185
        Server:   primary.internal.domain.org
        Address:  192.111.222.44

        *** primary.internal.domain.org can't find 192.111.111.185: Non-existent domain

    Contents of /etc/resolv.conf on the Debian guest:

        nameserver 192.111.111.244
        nameserver 192.111.222.44
        search internal.domain.org

    What is wrong, and how do I get IP-to-name resolution to work for this machine? Thank you.
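
    Forward lookup works, so the A record is fine; this failure pattern is just a missing PTR record (or a missing reverse zone) on the Windows DNS server. A sketch with dnscmd, assuming the reverse zone for 192.111.111.0/24 is named as below:

        rem create the reverse zone if it does not exist yet (AD-integrated)
        dnscmd primary.internal.domain.org /ZoneAdd 111.111.192.in-addr.arpa /DsPrimary

        rem add the PTR record for 192.111.111.185
        dnscmd primary.internal.domain.org /RecordAdd 111.111.192.in-addr.arpa 185 PTR wiki.dept.internal.domain.org.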

  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! I have two virtual hosts running, each located in a different folder. Whether I display .gif images by navigating the two sites or access them directly by URL, their size and quality are invariably degraded. I tried three different browsers: same problem. Using them on other sites on the web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.
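
    Since the dev box with the same nominal configuration behaves, a first diagnostic is to diff what the two Apaches actually load - an output filter such as mod_pagespeed recompressing images would show up here (file names are illustrative):

        # on each machine:
        apache2ctl -M 2>/dev/null | sort > apache-modules-$(hostname).txt

        # with both files copied to one box:
        diff apache-modules-prod.txt apache-modules-dev.txt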

  • Set proxy for VPN server on Ubuntu Server 12.04

    - by Morteza Soltanabadiyan
    I have a VPN server offering HTTPS, L2TP, OpenVPN, and PPTP. I want to set a proxy on the server so that all connections coming from VPN clients use it. I made a bash script for this, but the proxy is not working:

        gsettings set org.gnome.system.proxy mode 'manual'
        gsettings set org.gnome.system.proxy.http enabled true
        gsettings set org.gnome.system.proxy.http host 'cproxy.anadolu.edu.tr'
        gsettings set org.gnome.system.proxy.http port 8080
        gsettings set org.gnome.system.proxy.http authentication-user 'admin'
        gsettings set org.gnome.system.proxy.http authentication-password 'admin'
        gsettings set org.gnome.system.proxy use-same-proxy true
        export http_proxy=http://admin:[email protected]:8080
        export https_proxy=http://admin:[email protected]:8080
        export HTTP_PROXY=http://admin:[email protected]:8080
        export HTTPS_PROXY=http://admin:[email protected]:8080

    Now I don't know what to do to make a global proxy for the server that all VPN clients use automatically.
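
    gsettings only configures a GNOME desktop session on the server itself, and the exports only affect that one shell; VPN clients never see either. To push client web traffic through a proxy without touching the clients, intercept it as it arrives on the VPN interfaces, e.g. redirecting port 80 to a local Squid (Squid on port 3128 is an assumption, and plain REDIRECT does not cover HTTPS):

        # on the VPN server; ppp+ matches PPTP/L2TP client interfaces
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j REDIRECT --to-ports 3128

        # OpenVPN clients arrive on tun interfaces instead
        iptables -t nat -A PREROUTING -i tun+ -p tcp --dport 80 -j REDIRECT --to-ports 3128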

  • shared web hosting architecture in a university setting

    - by gaspol
    We're in the process of creating a shared web hosting infrastructure for our university, on which departments can host their sites. We're thinking of setting up multiple load-balanced web servers attached to shared storage (for web content and Apache config files), with database servers behind the web servers. Does anyone have any other suggestions about this, or recommendations for an alternative setup? Would cPanel/WHM/Plesk be a good idea to automate account creation and maintenance?

  • Recycle remote IIS app pool from the command line?

    - by Ken
    Is it possible to recycle an IIS7 app pool from the command line on a different machine? I've found APPCMD (appcmd recycle apppool my-app-pool), but AFAICT it only operates on the host it's run on. I heard a rumor there might be a way to do it with PowerShell, but I know nothing about that, and I'm apparently not very good at googling for it. I'm using Vista / Server 2008, if that matters. EDIT: I found something called WinRM that somebody claims is able to run APPCMD itself, but I'm not sure exactly how yet.
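
    WinRM does make this a one-liner: once remoting is enabled on the target (winrm quickconfig, run there as admin), winrs can execute APPCMD remotely. The server name below is a placeholder:

        winrs -r:webserver01 %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"my-app-pool"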

  • Unable to connect to Amazon EC2 without using PPK file

    - by Krishna
    I have a build job which runs on Hudson and synchronizes content from an Amazon AWS server. The script is written in shell. I have been given a PPK file which can establish the connectivity. Here is the problem: the build script doesn't establish the connectivity in the code, so I manually connect to the host through the PPK file using PuTTY and then run the job, and then it works fine. I am new to shell scripting. Could someone suggest how I can establish connectivity using the PPK file in the shell, so I don't have to do it manually through PuTTY? Thanks, Krishna
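
    The non-interactive route is to convert the PPK to an OpenSSH key once (the Linux build of puttygen can do this) and have the script pass it to ssh; the file names, user, and host below are placeholders:

        puttygen build-key.ppk -O private-openssh -o build-key.pem
        chmod 600 build-key.pem
        ssh -i build-key.pem ec2-user@ec2-host.example.com 'echo connected'

    Alternatively, PuTTY's own plink accepts the PPK directly (plink -i build-key.ppk user@host), which avoids the conversion step.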
