Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • IIS 6 + ASP.NET web service - DW20 and StackOverflowException

    - by pcampbell
    Consider an ASP.NET SOAP web service that starts up fine but craters hard when receiving its first hit. Please note that this deployment works in the Test environment but not in the PreProd environment. Both are Windows 2003 SP3 + IIS 6 + ASP.NET 3.5, all up to date. The behaviour that we're seeing is:

    - restart the site & app pool (the app pool is configured to run under Network Service)
    - browsing to the .asmx and .wsdl responds normally, as expected
    - send a normal, well-formed SOAP request / normal payload to the web service
    - 100% CPU usage
    - after 5 seconds, the page request / site returns "Service Unavailable"
    - no entry is created in the IIS log file (i.e. c:\windows\system32\logfiles\W3C-foo)
    - the app pool ends up being stopped

    The processes that hit the CPU hard are dw20.exe; I am unsure why Dr. Watson is involved here. The Event Log shows an ASP.NET Runtime error. Event log text:

        EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d6968e, P4 errormanagement, P5 1.0.0.0, P6 4b86a13f, P7 24, P8 0, P9 system.stackoverflowexception, P10 NIL.

    Questions: Any thoughts on what this System.StackOverflowException might be? Given that the code is the same between environments, might it be a payload problem? Could it be a configuration issue? You can see the name of my .NET assembly there in the exception message: "ErrorManagement".
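    For context on how a web service can die this way: a StackOverflowException on the first real request is classically unbounded recursion somewhere in the (de)serialization or error-handling path. A purely hypothetical illustration - this is not the asker's code, the event log only reveals the assembly name:

        // Hypothetical: a self-referencing property like this recurses forever
        // when XmlSerializer walks the object graph on the first SOAP request.
        public class ErrorInfo
        {
            private string message;
            public string Message
            {
                get { return Message; }   // bug: returns the property, not the field
                set { message = value; }
            }
        }

    Since it works in Test but not PreProd, configuration that only differs there (for example, a config section the code re-reads inside its own error handler, recursing on failure) is also worth suspecting. dw20.exe is just the Windows Error Reporting client collecting the crash, not the cause.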

    Read the article

  • Can't make Dovecot communicate with Postfix using SASL (warning: SASL: Connect to private/auth failed: No such file or directory)

    - by Fred Rocha
    Solved. I will leave this here as a reference for other people, as I have seen this error reported often enough online. I had to change the path in smtpd_sasl_path = private/auth in my /etc/postfix/main.cf to relative instead of absolute. This is because on Debian, Postfix runs chrooted (and how does this affect the path structure?! Anyone?)

    Original question: I am trying to get Dovecot to communicate with Postfix for SMTP support via SASL. The master plan is to be able to host multiple e-mail accounts on my (Debian Lenny 64-bit) server, using virtual users. Whenever I test my current configuration by running telnet server-IP smtp, I get the following error in mail.log:

        warning: SASL: Connect to /var/spool/postfix/private/auth failed: No such file or directory

    Now, Dovecot is supposed to create the auth socket file, yet it doesn't. I have given the right privileges to the directory private, and even tried creating an auth file manually. The output of postconf -a is

        cyrus
        dovecot

    Am I correct in assuming from this that the package was compiled with SASL support? My dovecot.conf also holds

        client {
          path = /var/spool/postfix/private/auth
          mode = 0660
          user = postfix
          group = postfix
        }

    I have tried every solution out there and am pretty much desperate after a full day of struggling with the issue. Can anybody help me, pretty please?
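    For readers hitting the same chroot path issue, a minimal sketch of the fix (assuming the stock Debian chroot under /var/spool/postfix and Postfix 2.3 or later):

        # Postfix resolves this path relative to its chroot (/var/spool/postfix),
        # so the socket "private/auth" is really /var/spool/postfix/private/auth.
        postconf -e 'smtpd_sasl_path = private/auth'
        postconf -e 'smtpd_sasl_type = dovecot'
        postfix reload

    The Dovecot client {} block above can keep the absolute path, since Dovecot is not chrooted into the Postfix queue directory.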

    Read the article

  • Virtualized Screen Resolution

    - by Jim R
    I have a 64-bit Ubuntu 9.10 workstation with two virtualized guest OSes using KVM/QEMU, both also 64-bit. One is Fedora 12, the other is a beta of Ubuntu 10.04. The problem is that I would like to use a larger display size than is configured by default. Both guest OSes have a maximum screen resolution of 1024x768, and I would like to increase this to something like 1280x900 or 1440x900. (The resolution of the host system is 1920x1080.) This configuration appears to be a result of the installation detecting the resolution reported by the virtual screen during installation. The only information I have found on the subject suggests modifying the xorg.conf file in the /etc/X11 directory, but neither guest system has this file. I tried creating one by hand in the Fedora system and managed to render it completely unusable. Not a big deal, as it was recently installed and can be reinstalled easily. Is what I want to do possible? If so, how do I accomplish it?
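    A minimal /etc/X11/xorg.conf sketch that often works in this situation (the section names are standard X.Org; whether the larger modes are accepted depends on the emulated display adapter, so treat the mode list as an assumption to test):

        Section "Screen"
            Identifier "Default Screen"
            SubSection "Display"
                # Declare a virtual framebuffer big enough for the largest mode,
                # then list the modes in order of preference.
                Virtual 1440 900
                Modes   "1440x900" "1280x900" "1024x768"
            EndSubSection
        EndSection

    If X fails to start, booting to a console and deleting the file restores the previous auto-detected behavior.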

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS/Apps&Data/Bulk data), all of which should be mirrored.

    Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume).

    Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1. For example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in-flight, as long as there's no data corruption.

    Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring -- not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.
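    For the install-time part in practice, a sketch (assuming Windows ends up on disk 0 and the second drive is disk 1; Win7 setup itself cannot create a mirrored volume, so the mirror is added after installation from an elevated diskpart session):

        rem From an elevated prompt, run: diskpart
        select disk 0
        convert dynamic
        select disk 1
        convert dynamic
        select volume c
        add disk=1

    Repeat "select volume X / add disk=1" for the Apps&Data and Bulk partitions. On the 2x-reads point: Windows software mirroring, like most motherboard "fake RAID", has no block checksums either, so it reads each block from one disk and cannot detect silent corruption; the ZFS comparison favors neither option here.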

    Read the article

  • Under which circumstances can a *local* user account access a remote SQL Server with a trusted connection?

    - by Heinzi
    One of our customers has the following configuration:

    - On the domain controller, there's a SQL Server instance.
    - On his PC (WinXP), he logs on as LocalPC\LocalUser.
    - In Windows Explorer, he opens \\DomainController\SomeShare and authenticates as Domain\Administrator.
    - He starts our application, which opens a trusted connection (Windows authentication) to the SQL Server.
    - It works. In SSMS, the connection shows up as the user Domain\Administrator.

    Firstly, I was surprised that this even works. (My first suspicion was that there is a user with the same name and password in the domain, but there is no user LocalUser in the domain.) Then we tried to reproduce the same behaviour on his new PC, but failed:

    - On his new PC (Win7), he logs on as OtherLocalPC\OtherLocalUser.
    - In Windows Explorer, he opens \\DomainController\SomeShare and authenticates as Domain\Administrator.
    - He starts our application, which opens a trusted connection (Windows authentication) to the SQL Server.
    - It fails with the error message: Login failed for user ''. The user is not associated with a trusted SQL Server connection.

    Hence my question: under which conditions can a non-domain user access a remote SQL Server using Windows authentication with different credentials? Apparently it's possible (it works on his old PC), but why? And how can I reproduce it?
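    One way to reproduce this kind of credential reuse deliberately (a sketch; whether it explains the old PC depends on what its Credential Manager has stored - the password below is a placeholder):

        rem Pin Domain\Administrator for every outbound authentication to the server,
        rem including the SQL connection (persists across sessions)
        cmdkey /add:DomainController /user:Domain\Administrator /pass:PlaceholderPassword

        rem Or launch the app so only its *network* credentials are the domain ones
        runas /netonly /user:Domain\Administrator ourapp.exe

    Here ourapp.exe stands for the actual application binary. Checking "cmdkey /list" on the old XP box would show whether a saved credential is what makes it work there.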

    Read the article

  • DNS nameserver A and CNAME records

    - by David
    Hi - I am inexperienced in the configuration of DNS and have an issue with a domain hosting setup. I have two domains, 'www.mydomain1.com' and 'www.mydomain2.com', with mydomain2 pointed at the same place as mydomain1. The domains were passed to me recently by the person who previously controlled them. I have an account with Fasthosts in the UK.

    When I accepted the domains I could not access the DNS settings and enquired with Fasthosts as to why. They replied saying: 'The delegate hosting option for both domains was enabled and this is the reason why you were unable to find the option to edit the advanced DNS records. I have now disabled the delegate hosting option so you can now edit the advanced DNS records for both domains in your account.'

    When I log into the Fasthosts control panel now I can access the DNS controls, but both domains have no A record or CNAME record set up. I am concerned that Fasthosts have blatted the previous nameserver entries and set me up on theirs without adding any records. 'www.mydomain1.com' currently still works, but 'www.mydomain2.com' does not find the site anymore. I am worried I will lose mydomain1 too as the DNS changes filter through the system. My web hosting is at 'xxx.xxx.xxx.xxx/mydomain1.com/' and this is where I want both domains to point. Any advice would be much appreciated.

    One thing which is confusing me: because I am on a shared server I have to put 'xxx.xxx.xxx.xxx/mydomain1.com/' to get to my site rather than just 'xxx.xxx.xxx.xxx'. The form on Fasthosts for the A record only allows an IP to be entered - does it add the mydomain1.com/ onto the end itself? Thanks for any help given - I'm quite worried about this. David
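    Before changing anything, it helps to capture what the outside world currently sees for both names (dig is standard on Linux/macOS; nslookup gives the same answers on Windows):

        # Current A records as public resolvers see them
        dig +short A www.mydomain1.com
        dig +short A www.mydomain2.com

        # Which nameservers are authoritative right now
        dig +short NS mydomain1.com
        dig +short NS mydomain2.com

    Note that an A record can only ever point at a bare IP address, never at 'IP/path'; mapping a domain onto a subdirectory is done by the web server (virtual hosts or the host's control panel), not by DNS, so the form does not append anything.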

    Read the article

  • Nvidia Linux Driver Huge Resolution

    - by darxsys
    I'm trying to set up a working CUDA SDK on my Linux Mint. I'm new to Linux and everything connected with it, so I tried following some steps on how to install CUDA. Firstly, I downloaded a Linux driver (version 295.41) from here: http://developer.nvidia.com/cuda/cuda-downloads. After that, I barely found a way to run it. I did it like this:

    1. typed sudo init 1 in a terminal and switched to root
    2. typed service mdm stop
    3. ran the *.run file downloaded from the link above

    Then it started installing the driver. It gave some warning messages, but I ignored them. After installation, I typed init 5 and it came back to the GUI screen, BUT everything is huge. I restarted; still huge. My screen resolution is 640x480 on a 17-inch laptop monitor. I tried running Nvidia X Server Settings, but it says: "You do not appear to be using the Nvidia X driver. Please edit your X configuration file." I tried that. Nothing happened. I can't change the resolution because that Nvidia Settings thing gives no options. Then I googled some things and installed some packages - nothing. The biggest problem is I don't understand what's really going on. My laptop is a Samsung with an i7 and an Nvidia GT 650M with Optimus. I can't even install Bumblebee, but that is something I will try if I manage to get my resolution back to default. Please, help!
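    A common first step, offered as a sketch: on an Optimus laptop the Intel GPU usually drives the panel, which is often why the Nvidia X driver does not load at all, so this may or may not apply here.

        # Generate an xorg.conf that loads the nvidia driver
        sudo nvidia-xconfig

        # Then restart the display manager (Mint uses MDM)
        sudo service mdm restart

    If X fails to start afterwards, deleting the generated /etc/X11/xorg.conf from a console restores the previous state. On Optimus hardware the longer-term answer is usually Bumblebee, exactly as the question anticipates.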

    Read the article

  • Cyrus: In practical terms, how do end users administer their shared mailboxes?

    - by Nick
    Let's say we have four customer service reps: Billy, Bob, Joe, and Tom. Tom is the department manager. There's a shared Customer Service mailbox on the Cyrus server that they all have access to. Tom, as the manager, also has administrative privileges for the shared mailbox. They decide they want to create sub-folders a certain way, and Tom creates them. They're all running Thunderbird, so Tom right-clicks the main folder and chooses "New Subfolder". Now Tom has the subfolders he needs and the other reps have... nothing! Because Cyrus created the subfolders giving Tom "Full Access" permissions, and everyone else gets no access.

    So how does Tom give the other reps in his department access to the new folders? As far as Cyrus is concerned, Tom has permission to grant others access to his new mailboxes - but as far as I can tell, there's no option in Thunderbird for granting mailbox permissions. An IT staff member should not have to receive a support request every time someone wants to add a subfolder to a shared mailbox. That's why we make certain users into mailbox admins in the first place! But asking (non-technical) users to SSH into an IMAP server to run cyradm seems like a bad idea too.

    Certainly someone has found a solution for this dilemma. Perhaps a Thunderbird extension for setting Cyrus permissions? Or something like umask that forces subfolders to have identical permissions to their parents on creation? And related, what about Sieve configuration? Is there any way that can be done from the client machine too? Thanks, Nick
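    For reference, the ACL change itself is one line in cyradm; a sketch (the mailbox path, admin account, and group name are examples, and the exact mailbox naming depends on the server's namespace settings):

        # As a Cyrus admin, grant a group read/write on the new subfolder
        cyradm -u cyrusadmin localhost
        localhost> sam "Customer Service/New Subfolder" group:csreps lrswipkxte

    Under the hood this is just the IMAP SETACL command (RFC 4314), so what's really being asked for is a Thunderbird extension that speaks SETACL on behalf of the folder owner.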

    Read the article

  • Installing Trac on Windows under Apache 2.2?

    - by Warren P
    Trac is a Python-powered bug-tracking and project-management app. According to Trac's wiki, there are several options for installing Trac: a standalone server (tracd), or under a dedicated web server using one of these options:

    - FastCGI - not available on Windows.
    - mod_wsgi - no version of mod_wsgi available for Apache 2.2.22 and Python 2.7.3-amd64 that actually runs on my system!
    - mod_python - no longer recommended, as mod_python is not actively maintained anymore.
    - CGI - should not be used, as the performance is far from optimal.

    That leaves me with zero ways to run Trac on Windows. Apache 2.2.22 with mod_wsgi loading crashes the Apache 2.2 service on startup without any error logs; disabling the line in the Apache configuration that loads mod_wsgi restores sanity. I just want an installation of Trac on Windows with authentication enabled. I am unable to get authentication to work using basic tracd like this:

        tracd -p 8000 --basic-auth="c:\tmp,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder

    And I am unable to get mod_wsgi installed. I'm going to keep trying to figure out a combination that works; I suspect I should have installed 32-bit Python instead of 64-bit Python to start with. Did I do wrong to install Python 64-bit 2.7.3? I tried again with all 32-bit components, and still can't get mod_wsgi to work with Apache 2.2.22. I'm going to try to compile mod_wsgi myself with Visual C++ Express 2010, but it seems to me that it ought to be easier than this to get Trac running on Windows, with authentication. Is there a way to run Trac on Windows, under Apache, with authentication? The last "Trac on Windows" article died in 2008, leaving only this Internet Archive link for "Trac on Windows" setup.
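    One detail that may explain the tracd failure (a sketch; per the Trac docs the first field of --basic-auth is the environment's base directory name, not its parent path, so for c:\tmp\RootFolder it should be RootFolder):

        rem Create the password file with Apache's htpasswd (ships with Apache on Windows)
        htpasswd -c -m c:\tmp\Passwords.md5.txt someuser

        rem First field = project (environment) base name
        tracd -p 8000 --basic-auth="RootFolder,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder

    The user name someuser is a placeholder; a "*" in the first field applies the auth to all served environments.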

    Read the article

  • How to remap IPs visible from local machine to IPs visible from a machine I have SSH access to?

    - by gooli
    I'm so far out of my depth I don't even know what to google for. There's a server I can connect to via SSH, and via that server I can access other servers on its subnet via SSH. What I want is to be able to access the machines that server has access to directly. Say the server IP is 192.168.7.7 and it is the only one in the 192.168.x.x range I have access to. I'd like to configure things in such a way that when I try to access, say, 192.168.7.100 on my machine, the connection will go through an SSH tunnel I open to 192.168.7.7 and out to 192.168.7.100. I would like this to work for any port if at all possible. I know I can set up an HTTP proxy and even a SOCKS proxy, but I'm wondering whether there is a way to actually remap some of the IPs my machine sees to IPs only visible from the remote machine. What would this configuration be called? Is this NAT, VPN, IP2IP or something else? How can I set this up on a Windows client box that connects via SSH to a Linux box? Sounds to me like I need to set up some kind of filtering on the network driver or possibly a virtual NIC, but I'm not sure where to go next.
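    Short of a real VPN, the per-port version of this is plain SSH local forwarding; a sketch with PuTTY's plink on the Windows side (user, hosts, and ports are examples):

        rem Local port 8022 now reaches 192.168.7.100:22 through the tunnel
        plink -N -L 8022:192.168.7.100:22 user@192.168.7.7

        rem Likewise for HTTP on another box
        plink -N -L 8080:192.168.7.101:80 user@192.168.7.7

    For whole-subnet, any-port remapping the usual answers are a VPN terminated on (or routed through) the SSH host, or SSH's own tun-device mode (ssh -w) - both create exactly the virtual NIC the question is reaching for, and the overall technique is closest to "VPN" of the names listed.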

    Read the article

  • Strange difference in bash behavior across systems

    - by pinkie_d_pie_0228
    I have two systems, an Ubuntu computer and an Android tablet. I have built and configured bash for Android to be used in adb, so it's the same version as my Ubuntu bash, and they use mostly the same bashrc and configuration, and the exact same options set by shopt. However, there is a slight difference: the Android bash behaves as I expect when I try to tab-complete something using a variable, but the Ubuntu bash doesn't.

        # Android
        ls $HOME/loc<tab>  =>  ls $HOME/local    # as expected

    Basically, the variable is taken into account when completing. But then:

        # Ubuntu
        ls $HOME/loc<tab>  =>  ls \$HOME/loc     # undesired behavior

    The list of options is as follows, and is the same in both builds of bash:

        autocd:checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:progcomp:promptvars:sourcepath

    What can be making the Ubuntu version escape the $ instead of using it for completion as in the Android build? What can I do to make both work the same way? Any help will be greatly appreciated.
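    If the two builds straddle bash 4.2, this is likely the readline change that started escaping $ during completion; later bash builds restore the old behavior behind a shopt flag (a sketch; direxpand appeared around bash 4.2 patch 29):

        # Make completion expand variables and directory words again
        shopt -s direxpand

    Whether a given build has it can be checked with: shopt | grep direxpand.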

    Read the article

  • vDS - vCenter Problem

    - by rbmadison
    We are implementing a vSphere farm and are using a distributed switch. The VC is a VM within the farm, connected to the distributed switch. We had a SAN issue and all of our VMs were down. When the SAN recovered and we restarted the ESX host containing the VC, the VC couldn't connect to the network through the vDS. We had to remove a NIC from the vDS on that host, create a regular vSwitch, and connect the VC to that before the VC would connect to the network.

    Is this typical behavior? If the VC goes down, does all vDS networking stop on all the hosts? That seems to be a very bad thing; I thought networking would keep working even though the VC is down, because the hosts have the vDS configuration cached. Is there a better way to configure it to prevent this from happening? We want to keep the VC as a VM for HA and recoverability purposes. Can anyone offer suggestions or explanations? I appreciate the help. Thanks, Rick
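    A common mitigation is a port group with ephemeral binding, which hosts can connect VMs to even while vCenter is down. A PowerCLI sketch - cmdlet and parameter availability depend on the PowerCLI version, so treat the -PortBinding parameter and the switch name as assumptions to verify:

        # Create an ephemeral-binding port group for the vCenter VM
        New-VDPortgroup -VDSwitch (Get-VDSwitch "dvSwitch01") -Name "vCenter-Ephemeral" -PortBinding Ephemeral

    The other standard safeguard is exactly what the recovery ended up being: keep a management port group for the vCenter VM on a plain standard vSwitch.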

    Read the article

  • Postfix does not work after setting server hostname from Plesk

    - by Michael
    I have recently changed my server hostname from localhost.localdomain to xx.mydomain.com from the Plesk control panel. However, after this change Postfix has stopped working; I tried restarting and regenerating the config files, but to no avail. I am not familiar with Postfix, but I believe there is a setting to be changed in main.cf. Here are the relevant errors I receive:

        postfix/postfix-script: starting the Postfix mail system
        postfix/master: daemon started -- version 2.8.4, configuration /etc/postfix
        postfix/cleanup: fatal: host/service localhost/12768 not found: Name or service not known
        postfix/pickup: warning: maildrop/E5559996219: error writing 4FA2C996217: queue file write error
        postfix/master: warning: process /usr/libexec/postfix/cleanup pid 15334 exit status 1
        postfix/master: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling

    Any ideas?

    EDIT: Setting it back to localhost.localdomain makes it work again. The only references to localhost/12768 I can find in main.cf are:

        smtpd_milters = inet:localhost:12768
        non_smtpd_milters = inet:localhost:12768

    Should something be changed here? These two lines stay the same when I change the hostname.

    EDIT: If I comment out the two mail filter lines (the ones shown above) and restart, Postfix works with the new hostname. However, this is obviously not the ideal solution...
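    The fatal error says Postfix can no longer resolve the name "localhost" after the hostname change. A sketch of the usual fix, assuming the milter really listens on the loopback interface: bypass the name lookup entirely.

        postconf -e 'smtpd_milters = inet:127.0.0.1:12768'
        postconf -e 'non_smtpd_milters = inet:127.0.0.1:12768'
        postfix reload

    It is also worth checking that /etc/hosts still contains the line "127.0.0.1 localhost" - hostname changes made through control panels sometimes rewrite that file.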

    Read the article

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with MySQL and FreeRADIUS 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section ("Since our users may be connected for more than 24 hours at a time we keep this in here; it will reset some attributes daily so that the accounting packets work correctly"):

        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references FreeRADIUS 1 and I can't find this in the radiusd.conf file for FreeRADIUS 2. What does it do, and could it be a reason I'm missing data?

    EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although they are replicating, it's only a master/slave configuration: if the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure in the Mikrotik APs or FreeRADIUS 2 for clients connected 24/7?
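    For what it's worth, FreeRADIUS 2 ships the same instances in a modules file rather than radiusd.conf; a sketch of the stock Debian/Ubuntu layout (path assumed):

        # /etc/freeradius/modules/always - the 2.x home of the 1.x radiusd.conf block
        always fail {
            rcode = fail
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    The always instances simply force a fixed return code wherever they are listed in a virtual-server section; they do not themselves reset accounting attributes, so their absence from radiusd.conf is unlikely to be why data is missing - the master/slave replication gap in the edit is the more plausible culprit.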

    Read the article

  • sudo or acl or setuid/setgid ?

    - by Xavier Maillard
    Hi, for a reason I do not really understand, everyone wants sudo for all and everything. At work we even have as many sudoers entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know what the best practices are before starting.

    Our servers have %admin, %production, %dba, %users - i.e. many groups and many users. Each service (mysql, apache, ...) has its own way of installing privileges, but members of the %production group must be able to consult configuration files and even log files. There is still the solution of adding them to the right groups (mysql, ...) and setting the right permissions, but I do not want to usermod all users, and I do not want to modify the standard permissions since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution.

    What do you think about this? Taking the mysql example, that would look like this:

        setfacl -m d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permission system? Thank you
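    A quick way to sanity-check the result (getfacl ships in the same acl package as setfacl):

        # Show effective and default ACLs after the change
        getfacl /var/log/mysql /etc/mysql

        # Default (d:) entries only apply to files created afterwards;
        # verify inheritance with a scratch file
        touch /var/log/mysql/acltest && getfacl /var/log/mysql/acltest

    Existing log files need a separate recursive pass (setfacl -R -m g:production:rx ...), since default entries never retroactively touch files that already exist.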

    Read the article

  • Ruby Passenger + nginx or lighttpd + fcgi for shared hosting

    - by devnull
    I have set up a Passenger + nginx setup and I plan to offer free non-commercial hosting (or in fact on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now this is not inherently bad as, in theory, I could allow a user to run just one app on his webspace, but even in this case I need to create a new server directive in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behavior I am looking for is the possibility to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance.

    Scenario details: on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server. Wished-for behavior with the new setup: a user can add a new folder with a new app and it would run on lighttpd + fcgi without any extra configuration of the web server.
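    One way to soften the nginx side (a sketch; it still needs an nginx reload, just no hand-editing of the main file - the directory and server name are examples) is a drop-in directory that the server block includes wildcard-style:

        server {
            listen 80;
            server_name apps.example.com;
            root /home/apps/public;

            # Each deployment drops a one-line file here, e.g.
            #   /etc/nginx/passenger-apps/app1.conf -> "passenger_base_uri /app1;"
            include /etc/nginx/passenger-apps/*.conf;
        }

    A deploy script can then write the snippet and run nginx -s reload, which is cheap and does not drop in-flight connections.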

    Read the article

  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

        Date: Thu, 06 May 2010 23:04:33 UTC
        Label: PPL
        Origin: PPL
        Suite: ppl
        MD5Sum:
         ebec3527ebc8351468b2ef8796c19855 37325 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Release
        SHA1:
         a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
         da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
        SHA256:
         dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
         e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. The configuration file for it looks like:

        // Automatically upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu hardy-security";
            "Ubuntu hardy-updates";
            "PPL ppl";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //  "vim";
        //  "libc6";
        //  "libc6-dev";
        //  "libc6-i686";
        };

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed or anything that provides /usr/bin/mail.
        //Unattended-Upgrade::Mail "root@localhost";

    Yet, when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?
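    Two checks that usually pinpoint this (both commands are standard on Ubuntu; some-local-package stands for one of the locally built packages):

        # Show which origins unattended-upgrade recognizes and why candidates are skipped
        sudo unattended-upgrade --dry-run --debug

        # Compare against what apt itself records for the local archive
        apt-cache policy some-local-package

    In the apt-cache policy output, the o= and a= fields of the release line are what must match the "Origin archive" pair, so a Release file with Origin: PPL and Suite: ppl should show up as o=PPL,a=ppl. If they don't line up, regenerating the Release file along these lines should fix it:

        apt-ftparchive -o APT::FTPArchive::Release::Origin=PPL \
                       -o APT::FTPArchive::Release::Suite=ppl release . > Release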

    Read the article

  • LDAP authentication issue with Kerio Connect

    - by djk
    Hi, we have Kerio Connect (mail server) running on a Windows Server 2003 server on a domain. In the webmail client, users are able to change their domain password. This functionality used to work fine until a user tried to change their password a few days ago, when every password they tried would result in the webmail client claiming their password was "invalid". I spoke to Kerio about this and they claim that this error is returned by the domain controller, which supports my initial investigations. The error that the DC logs when an attempt is made to change the password is this:

        80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece

    The "data 52e" part indicates that this is an "invalid credentials" error. I don't see how this can be, as I've tried (in the Kerio Connect configuration) various accounts that have privileges to modify accounts, including my own, as I am a domain admin. I have run dcdiag (all tests) on the DC and it came back passing every single one of them. I've searched high and low for an answer to this and came up empty. Does anyone have any idea why this may have suddenly started happening? Thanks!

    Edit: I should mention that the passwords we are changing to do comply with the complexity policy.
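    Since data 52e is a failed simple bind, it can help to reproduce the bind outside Kerio; a sketch using OpenLDAP's client tools from any box that can reach the DC (host, account, and base DN are placeholders):

        # -x forces a simple bind with the same credentials Kerio would use
        ldapsearch -x -H ldap://dc.example.local \
            -D "serviceaccount@example.local" -W \
            -b "dc=example,dc=local" "(sAMAccountName=someuser)" dn

    If this bind also fails with 52e, the credentials Kerio has stored are stale (expired or changed service-account password); if it succeeds, the problem is in how Kerio forms the bind DN or which DC it talks to.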

    Read the article

  • Apache2: managing the default virtual host

    - by Jaime E. Valdez
    I'm trying to set up two domains correctly. I have some issues I hope you can help me with. Site one's conf:

        <VirtualHost myipaddress:80>
            ServerName www.domain1.com
            ServerAdmin [email protected]
            DocumentRoot /home/domain1/public_html
        </VirtualHost>

    My other domain's conf is:

        <VirtualHost myipaddress:80>
            ServerName www.domain2.com
            ServerAlias *.domain2.com domain2.com
            ServerAdmin [email protected]
            DocumentRoot /home/domain2/public_html
        </VirtualHost>

    The default site is disabled. The problem is that when accessing "domain2.com" from my browser, it always redirects to "www.domain1.com"; it only works when I explicitly access "www.domain2.com". I also have other domains like "domain1.net" and "domain1.info" pointing to my server, which at this moment are not configured or set up on Apache, yet I can access them from a browser and always land on "www.domain1.com". By the way, is there any possible configuration in Apache to handle the IP only? I mean, if I type "http://myipaddress/" I get "www.domain1.com"... Arrgh.
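    With Apache 2.2, the symptom of every name landing on the first vhost usually means the NameVirtualHost directive for that address is missing; a sketch (myipaddress stands for the real address, as in the question):

        # Must appear once, before the vhosts, matching their address:port exactly
        NameVirtualHost myipaddress:80

        <VirtualHost myipaddress:80>
            ServerName www.domain1.com
            ...
        </VirtualHost>

    Running apachectl -S prints how Apache actually mapped names to vhosts, which makes mismatches easy to spot. Requests by bare IP, and any Host header that matches nothing, always go to the first listed vhost; to change that, declare a catch-all vhost first in the configuration order.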

    Read the article

  • Nginx multiple upstream servers on the same domain via different URLs

    - by Barry
    Hello. I am trying to route traffic to different upstream servers (they serve different applications; this is not for load balancing). The incoming traffic has the same domain name but different URLs. Here is an example of my configuration:

        http {
            upstream backend1 {
                server 127.0.0.1:8080 fail_timeout=0;
                server 127.0.0.1:8081 fail_timeout=0;
            }
            upstream backend2 {
                server 127.0.0.1:8090 fail_timeout=0;
                server 127.0.0.1:8091 fail_timeout=0;
            }
            server {
                listen 80;
                server_name my_server.com;
                root /home/my_server;
                location /serve_me {
                    fastcgi_pass backend1;
                    include fastcgi_params;
                }
                location / {
                    fastcgi_pass backend2;
                    include fastcgi_params;
                }
            }
        }

    It seems that whatever traffic comes in (including "my_server.com/serve_me") goes to backend2. How do I make queries that start with /serve_me be directed to backend1? Thanks, Barry.
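    As written, the prefix match looks right - /serve_me is longer than / and should win - so the first thing to rule out is a redirect by the application itself. A diagnostic sketch to see which block actually fired ($upstream_addr is a standard nginx variable):

        # Tag responses per location so "curl -i" shows which block matched
        location /serve_me {
            add_header X-Served-By backend1;   # diagnostic only
            fastcgi_pass backend1;
            include fastcgi_params;
        }

        # Or log the chosen upstream for every request
        log_format upstreamlog '$request -> $upstream_addr';
        access_log /var/log/nginx/upstream.log upstreamlog;

    If the header shows backend1 but the content looks like backend2's, the app behind backend1 is likely issuing a 301/302 back to /, which then matches the catch-all location.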

    Read the article

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However, I am having a lot of trouble running it in nginx. The following is the nginx configuration I am using:

        server {
            root /home/test/public;
            index index.php;

            access_log /home/test/logs/access.log;
            error_log /home/test/logs/error.log;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            # pass the PHP scripts to FastCGI server listening on unix socket
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    I am able to get the homepage but am having problems with the inner pages, which display "Access Denied". Possibly the rewrite is not working; in effect, I think it's querying and trying to execute PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
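    One likely culprit, offered as a sketch: the try_files fallback "index.php" is a relative path and also drops the original URI, whereas concrete5's pretty URLs route everything through the front controller. Something along these lines usually behaves better:

        location / {
            # Fall back to the front controller, preserving the query string
            try_files $uri $uri/ /index.php?$query_string;
        }

    Without pretty URLs, concrete5 uses paths like /index.php/page, which is exactly why the PHP location matches \.php($|/) and sets PATH_INFO - that part of the config is already in place.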

    Read the article

  • Public static IP for a Vagrant box

    - by Numbata
    I have a server (Debian Squeeze) with 1 ethernet card and 2 public static IPs (188.120.245.4 and 188.120.244.5).

    What I want: set up a virtual box (Ubuntu) reachable via the static IP 188.120.244.5.

    What I have tried:

    - config.vm.forward_port - good idea: set up interface "eth1:1" with 188.120.244.5 on the host machine, and add to the Vagrantfile "config.vm.forward_port = hmm..?"
    - config.vm.network :hostonly, "188.120.244.5" - not working. A new interface was created on the host machine with IP "188.120.244.1". Of course the 188.120.244.1 IP isn't mine and I can't access my server via this IP.
    - config.vm.network :bridged - I'm confused about how this works :)

    What I have now: a non-working configuration.

        Debian-host-machine# cat Vagrantfile
        Vagrant::Config.run do |config|
          config.vm.define :gitlab do |box_config|
            box_config.vm.box = "ubuntu"
            box_config.vm.host_name = "ubuntu"
            box_config.vm.network :bridged
            box_config.vm.network :hostonly, "188.120.244.5", :auto_config => false
          end
        end

        Debian-host-machine# ifconfig
        eth1      Link encap:Ethernet  HWaddr 00:15:17:69:71:bb
                  inet addr:188.120.245.4  Bcast:188.120.247.255  Mask:255.255.248.0
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
        vboxnet0  Link encap:Ethernet  HWaddr 0a:00:27:00:00:00
                  inet addr:188.120.244.1  Bcast:188.120.246.255  Mask:255.255.255.0

        Ubuntu-virtual-machine# ifconfig
        eth0      Link encap:Ethernet  HWaddr 08:00:27:ee:8d:0c
                  inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
        eth1      Link encap:Ethernet  HWaddr 08:00:27:45:71:87
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0

    How can I access the virtual box via the public static IP from the network? I'm using Oracle VM VirtualBox Manager 4.1.18 and Vagrant version 1.0.3. Thanks in advance for your feedback.
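    With Vagrant 1.0.x and a single NIC, the usual shape of this is a bridged network pinned to the host's physical interface; a sketch (the :bridge value must name the real host NIC, and the public address is then assigned inside the guest):

        Vagrant::Config.run do |config|
          config.vm.define :gitlab do |box_config|
            box_config.vm.box = "ubuntu"
            # Bridge onto the host's physical NIC so the guest sits on the
            # same L2 segment as the host's public addresses
            box_config.vm.network :bridged, :bridge => "eth1"
          end
        end

    Then, inside the guest, configure 188.120.244.5 (with the provider's gateway and netmask) on the bridged interface via /etc/network/interfaces. Whether this works at all depends on the datacenter: many providers filter unknown MAC addresses, in which case a routed setup (the host doing proxy ARP or NAT toward a host-only network) is the fallback.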

    Read the article

  • 2008 Server randomly reboots

    - by Jeff
    I'm out of ideas here. We have a 2008 server that keeps rebooting 2-3 times a day at completely random times with an "unexpected shutdown" event. There are no dumps and no events leading up to it; it's as if it loses power and then comes back online. I ran a diagnostic of the power supply and it has had continuous power for months. In addition, the temperature of the processors maxes out at 40 degrees Celsius. Anyone have any ideas how to figure out why this is restarting all the time? This is a DMZed web server, so it doesn't do too much process-wise. Here are the specs:

        Host Name:                 ~~~
        OS Name:                   Microsoft Windows Server 2008 R2 Standard
        OS Version:                6.1.7600 N/A Build 7600
        OS Manufacturer:           Microsoft Corporation
        OS Configuration:          Standalone Server
        OS Build Type:             Multiprocessor Free
        Registered Owner:          Windows User
        Registered Organization:
        Product ID:                ~~~
        Original Install Date:     5/27/2010, 4:25:47 PM
        System Boot Time:          2/14/2011, 5:35:01 PM
        System Manufacturer:       HP
        System Model:              ProLiant DL380 G6
        System Type:               x64-based PC
        Processor(s):              1 Processor(s) Installed.
                                   [01]: Intel64 Family 6 Model 26 Stepping 5 GenuineIntel ~1586 Mhz
        BIOS Version:              HP P62, 8/16/2010
        Windows Directory:         C:\Windows
        System Directory:          C:\Windows\system32
        Boot Device:               \Device\HarddiskVolume1
        System Locale:             en-us;English (United States)
        Input Locale:              en-us;English (United States)
        Time Zone:                 (UTC-05:00) Eastern Time (US & Canada)
        Total Physical Memory:     4,086 MB
        Available Physical Memory: 2,775 MB
        Virtual Memory: Max Size:  8,170 MB
        Virtual Memory: Available: 6,691 MB
        Virtual Memory: In Use:    1,479 MB
        Page File Location(s):     C:\pagefile.sys
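    Two quick things to pull before suspecting hardware (both are stock Windows tooling; the query looks for Kernel-Power event 41, logged on every unexpected power loss):

        rem Last few unexpected-shutdown markers, including their BugcheckCode field
        wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-Kernel-Power'] and (EventID=41)]]" /f:text /c:5 /rd:true

        rem Confirm a dump would actually be written if it were a bugcheck
        wmic recoveros get DebugInfoType, AutoReboot

    A BugcheckCode of 0 in event 41 means the OS never saw a crash - consistent with a power, ASR, or iLO-initiated reset. On ProLiant hardware the other place to look is HP's Integrated Management Log (IML), viewable through iLO.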

    Read the article

  • How to run a website domain without redirecting if IP is already used for another website? [duplicate]

    - by SSpoke
    This question already has an answer here: Hosting multiple distinct folders for distinct domains (1 answer)

    I bought a VPS host that gave me only 1 IP address, which I used for my first domain name, and it works without any problems. Now for my second domain name I can't use the same IP address, as it points to the first domain name. So I figured my only option was to use a GoDaddy-hosted iframe redirection, which redirects to a subfolder on my first domain; this has worked so far. Now I'm trying to load PayPal from <?php header() ?> and I get a permission error because of that iframe:

        Refused to display 'https://www.paypal.com/cgi-bin/webscr?notify_url=&cmd=_cart&upload=1&business=removed&address_override=1' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'.

    How do I avoid the iframe solution for my second domain while not messing up my first domain? Somebody once told me it doesn't matter if you have 1 IP address, you can host multiple websites on it - how is that possible? The DNS doesn't seem to work off ports, AFAIK. Yes, I could host multiple websites in different folders, but that's not what I call hosting a real website; it has to be pointed at by a domain name, so this iframe issue doesn't happen. My server configuration is httpd (Apache) that comes with the CentOS 6 (Linux) operating system.
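    The mechanism being described is name-based virtual hosting: both domains' A records point at the same IP, and Apache picks the site by the Host header the browser sends, so no ports or extra IPs are involved. A sketch for the stock CentOS 6 Apache (paths and domain names are examples):

        # /etc/httpd/conf.d/vhosts.conf
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName firstdomain.com
            ServerAlias www.firstdomain.com
            DocumentRoot /var/www/firstdomain
        </VirtualHost>

        <VirtualHost *:80>
            ServerName seconddomain.com
            ServerAlias www.seconddomain.com
            DocumentRoot /var/www/seconddomain
        </VirtualHost>

    Point the second domain's A record at the same IP, reload Apache, and the iframe redirect (and the X-Frame-Options error with it) goes away.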

    Read the article
