Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 29/500

  • Redirecting HTTP requests to two different WebLogic servers using the WebLogic proxy and Apache2

    - by Jhon
    Hello All, I've read previous posts like "Redirecting https requests to two different weblogic servers using the Weblogic proxy and Apache2", but I have a different situation and I don't think I'm understanding this too well. I have an Apache 2 server (server1) that will receive HTTP requests for my application. Then I have two more servers (server2 and server3) with WebLogic 9.2 running on ports 7000 (server2) and 8000 (server3). I want users to enter appname.domain.com and have their requests distributed between the two WebLogic servers, always keeping appname.domain.com in the address bar (that is, hiding servername:port from the URL). How can I manage to do that? Thanks in advance! Jhon.
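
    What keeps appname.domain.com in the address bar is proxying rather than redirecting: server1 forwards each request and relays the response, so the browser never sees the WebLogic hosts. A hedged httpd.conf sketch, assuming the WebLogic Apache proxy plugin (mod_wl_20.so) is installed on server1; the hostnames and ports follow the question, everything else is a placeholder:

        LoadModule weblogic_module modules/mod_wl_20.so

        <IfModule mod_weblogic.c>
            WebLogicCluster server2:7000,server3:8000
        </IfModule>

        <VirtualHost *:80>
            ServerName appname.domain.com
            <Location />
                SetHandler weblogic-handler
            </Location>
        </VirtualHost>

    The plugin round-robins requests across the listed cluster members, so users only ever see appname.domain.com.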

    Read the article

  • Efficiently making web pages from multiple servers

    - by james.bcn
    I want to create a service that allows diverse web site owners to integrate material from my web servers into content served from their own servers. Ideally the resulting web page would be delivered only from the web site owner's server, and the included content would be viewed as part of the site by Google - which I think rules out iframes or client-side JavaScript that pulls the content from my server (although I may be wrong about that?). Also, the data wouldn't actually be updated that often, say once a day, so it would be inefficient to fetch it from my web servers with every request. Finally, the method needs to be as simple as possible so that it is easy for web site owners to integrate into their own sites. Are there any good methods for doing this sort of thing? If not, then I guess the simple way is with iframes or JavaScript.
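
    One server-side approach that fits these constraints is a small fetch-and-cache include that site owners drop into their pages. A minimal PHP sketch; the URL, cache path and one-day TTL are placeholders matching the scenario above:

        <?php
        $cacheFile = __DIR__ . '/cache/snippet.html';   // cache/ must exist and be writable
        $maxAge    = 86400;                             // one day, matching the update frequency

        if (!file_exists($cacheFile) || time() - filemtime($cacheFile) > $maxAge) {
            $fresh = file_get_contents('http://content.example.com/snippet.html');
            if ($fresh !== false) {
                file_put_contents($cacheFile, $fresh);  // refresh at most once a day
            }
        }
        echo file_get_contents($cacheFile);             // emitted inline with the page

    Because the HTML is assembled on the owner's server, search engines index it as part of the page, and the cache keeps requests to the central servers to roughly one per day per site.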

    Read the article

  • What are the Pros & Cons of using SQL Azure for existing apps on dedicated servers

    - by Mark Redman
    We currently own our own servers and rent a rack in a datacentre. Looking at the pricing, scalability and SLAs for SQL Azure, I am thinking it might be viable to use SQL Azure for the database only, while continuing to run our existing applications on our own servers in a datacentre. This would let us stop worrying about the database and its infrastructure, so we can concentrate on building an application server farm with disk storage for files etc. Our application is quite big, has various Windows services, and parts of it use unmanaged libraries that may not be feasible in the cloud, so we probably couldn't have everything in Azure. The pros: reduced total cost of ownership (no database servers, no SQL Server licenses). The cons: I guess there would be overhead in the transfer of data between the Azure cloud and our datacentre (i.e. the cloud may be in the US while the datacentre is in the UK), but would the latency still be acceptable?

    Read the article

  • load-views when running multiple noir servers

    - by Roth Michaels
    I'm experimenting with using noir to start three servers (each to handle a different aspect of the application). I am trying to do this so that I can run all three servers within one application while developing, and easily decouple the project into three different applications for deployment. It is no problem to use noir.server/start and noir.server/stop to run the jetty servers I need. What I'm trying to figure out is some way to call load-views (or something like that) with a different set of views for each server, so that URI conflicts are handled by the correct defpage.
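
    For reference, a minimal sketch of the pattern being attempted, assuming noir 1.x's API (paths and ports are placeholders). The catch is that noir registers defpage routes in a shared global, so views loaded for one server are visible to all servers started in the same JVM:

        (require '[noir.server :as server])

        (server/load-views "src/app/views/site")
        (def site-server (server/start 8080))

        (server/load-views "src/app/views/admin")   ; adds to, rather than replaces,
        (def admin-server (server/start 8081))      ; the global route set

    Truly isolated view sets per port would therefore mean one process per server, which at least mirrors the planned three-application deployment.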

    Read the article

  • Multiple memcached servers question

    - by Andre
    Hypothetically, if I have multiple memcached servers like this:

        // PHP
        $MEMCACHE_SERVERS = array(
            "10.1.1.1", // web1
            "10.1.1.2", // web2
            "10.1.1.3", // web3
        );
        $memcache = new Memcache();
        foreach ($MEMCACHE_SERVERS as $server) {
            $memcache->addServer($server);
        }

    and then I set data like this:

        $huge_data_for_frong_page = 'some data blah blah blah';
        $memcache->set("huge_data_for_frong_page", $huge_data_for_frong_page);

    and then I retrieve the data like this:

        $huge_data_for_frong_page = $memcache->get("huge_data_for_frong_page");

    how would the PHP memcache client know which server to query for this data? Or is the client going to query all of the memcached servers?
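
    Neither: the client picks exactly one server per key by hashing the key, so a get goes to the same server the set went to. A minimal PHP sketch of the idea behind the old Memcache extension's "standard" hash strategy (the extension uses its own CRC32-based routine internally, so treat this as an illustration rather than the exact algorithm):

        <?php
        $servers = array("10.1.1.1", "10.1.1.2", "10.1.1.3");
        $key = "huge_data_for_frong_page";
        $index = crc32($key) % count($servers);   // same key always maps to the same server
        echo "'$key' lives on {$servers[$index]}\n";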

    Read the article

  • Memcached - how to deal with adding/deploying servers

    - by Industrial
    Hi everybody, How do you handle replacing/adding/removing memcached nodes in your production applications? I will have a number of applications that are cloned and customized to each customer's needs, running on one and the same webserver, so I guess there will be a day when some of the nodes will be changed. Here's how memcached is normally populated:

        $m = new Memcached();
        $servers = array(
            array('mem1.domain.com', 11211, 33),
            array('mem2.domain.com', 11211, 67)
        );
        $m->addServers($servers);

    My initial idea is to populate the $servers array from the database, also cached, but file-based, refreshed once a day or something, with the option to force an update on the next run of the function that holds the addServers() call. However, I am guessing that this might add some additional overhead, since disks are quite slow storage... What do you think?
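
    Independent of where the server list is stored, consistent (ketama) hashing is worth enabling, so that adding or removing a node remaps only a small share of the keys instead of almost all of them. A minimal sketch, assuming the pecl/memcached extension as in the question:

        <?php
        $m = new Memcached();
        // consistent hashing: node changes invalidate ~1/N of the cache, not all of it
        $m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
        $m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
        $m->addServers(array(
            array('mem1.domain.com', 11211, 33),
            array('mem2.domain.com', 11211, 67),
        ));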

    Read the article

  • Two DHCP servers on the same network

    - by CesarGon
    We are setting up a routing link between the Windows Server 2008 networks of two different buildings in my organisation. Each network uses a different IP addressing scheme (one uses public addresses, the other uses private), but the goal is to have a single Windows Server domain across the gap between the buildings. The link is provided by a 100-Mbps point-to-point line. I have always understood that you should not have more than one DHCP server on a network. However, we are planning to put a domain controller in each building, and each domain controller will be a DNS server and a DHCP server as well. The intention is that a machine booting up in building A gets its IP address from the DHCP server closer to it, in building A, while a machine booting up in building B gets an address from the DHCP server in building B. Since the two buildings will be linked and the network will be only one, will this work? How can I prevent a machine booting up in building A from getting an address from the DHCP server in building B (or vice versa)? Thanks.
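
    If the two buildings really do end up in one broadcast domain, the usual arrangement is a split scope: both servers offer the same range, but each excludes the half the other hands out, so whichever server answers first still gives a lease that is valid network-wide. A hedged netsh sketch with placeholder server names and addresses:

        :: on the building-A server, exclude the half that B hands out
        netsh dhcp server \\DC-A scope 10.0.0.0 add excluderange 10.0.0.128 10.0.0.254

        :: on the building-B server, exclude the half that A hands out
        netsh dhcp server \\DC-B scope 10.0.0.0 add excluderange 10.0.0.1 10.0.0.127

    Clients simply take the first DHCPOFFER, so you cannot force the "closer" server to win - but with non-overlapping halves it does not matter which one answers.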

    Read the article

  • Master/Slave DNS setup vs. rsync'ed DNS servers

    - by Jakobud
    We currently have primary and secondary DNS servers on our corporate network. They are set up in a master/slave arrangement, where the slave gets its DNS information from the master. I'm trying to figure out what the real advantage of the master/slave setup is over just setting up an automated rsync between the two to keep the DNS settings matched. Can anyone shed some light on this? Or is it just a preferential thing? If that is the case, it seems like the rsync setup would be much easier to set up, maintain and understand.
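
    The built-in mechanism buys more than file copying: the slave polls the zone's serial number, the master pushes NOTIFYs so changes propagate immediately, and a zone transfer is applied atomically, so the slave never serves a half-copied file. A minimal BIND sketch with placeholder names and addresses:

        // on the master
        zone "corp.example.com" {
            type master;
            file "zones/corp.example.com";
            allow-transfer { 192.0.2.2; };   // the slave
            notify yes;                      // push changes immediately
        };

        // on the slave
        zone "corp.example.com" {
            type slave;
            masters { 192.0.2.1; };
            file "slaves/corp.example.com";
        };

    An rsync setup also has to reload the second server itself and fails silently if the job stops running, which is exactly the sort of failure the serial/NOTIFY/SOA-refresh machinery was designed to avoid.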

    Read the article

  • vpnc Not Adding Internal DNS Servers to resolv.conf

    - by AJ
    I'm trying to set up vpnc on Ubuntu. When I run vpnc, my resolv.conf file does not get changed. It still contains only my ISP's name servers:

        #@VPNC_GENERATED@ -- this file is generated by vpnc
        # and will be overwritten by vpnc
        # as long as the above mark is intact
        nameserver 65.32.5.111
        nameserver 65.32.5.112

    Here is my /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.3
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 65.32.5.111 65.32.5.112

    Any tips on how to troubleshoot/resolve this? Thanks in advance.
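
    For comparison, a minimal sketch of a vpnc configuration (all values are placeholders). The Script line is the relevant part: the bundled vpnc-script is what rewrites resolv.conf, and it only adds nameservers that the VPN gateway actually pushes during the connection:

        # /etc/vpnc/default.conf
        IPSec gateway vpn.example.com
        IPSec ID examplegroup
        IPSec secret examplesecret
        Xauth username exampleuser
        Script /etc/vpnc/vpnc-script

    Running vpnc with a higher debug level (e.g. --debug 1) should show whether the gateway sent any DNS attributes at all; if it sent none, resolv.conf is left alone.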

    Read the article

  • One IP, One Port, Multiple Servers

    - by Adrian Godong
    I am looking for a solution that forwards one public IP address and one specific port to different machines based on hostname (as of now, I need it only for HTTP). The current setup is NAT on a commodity router (it only provides simple public port to private IP address/port forwarding). I can add a Windows Server 2008 R2 machine before the router if required, but I'd prefer not to. So ideally I would like to keep the current setup and have the forwarding done on one of the Windows servers. Is it possible to do this?
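
    IIS can do this itself with the URL Rewrite and Application Request Routing (ARR) modules: one public listener, and rules that pick the internal machine off the Host header. A hedged web.config sketch - hostnames and internal addresses are placeholders, and ARR's proxy mode must be enabled for Rewrite actions to other machines to work:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="app1" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^app1\.example\.com$" />
                  </conditions>
                  <action type="Rewrite" url="http://192.168.0.11/{R:1}" />
                </rule>
                <rule name="app2" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^app2\.example\.com$" />
                  </conditions>
                  <action type="Rewrite" url="http://192.168.0.12/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    The router then forwards port 80 to the one IIS machine, which fans requests out by hostname.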

    Read the article

  • Anyone have experience with Silicon Mechanics 4-Node Machines?

    - by Matt Simmons
    I'm taking a look at buying some new servers (small infrastructure, 2 racks, etc), and although I like a lot of the features in blades, I'm looking at the price point for Silicon Mechanics' 4-node machines. http://www.siliconmechanics.com/i27091/xeon-2U-4-Node.php It's a bit like a mini-blade enclosure, but has no shared resources, except for the redundant power supplies. A single point of management would be great, but for the low price point here, I'm possibly willing to give that up, if the server quality is adequate. Basically, have you used these machines? Any problems? Anything you like?

    Read the article

  • How to cluster two IIS servers for failover?

    - by Ram Gopal
    We have IIS servers running on 2 machines, hosting a few web services that provide integration with an old document management system, Word/Excel-related services, etc. We need to cluster/load-balance these 2 IIS servers in order to achieve failover, i.e. if one of the IIS servers is down, the other one should be able to handle the requests. The reverse proxy used in the DMZ is also IIS 7.5. Our overall business application is in fact a J2EE one, and we have successfully deployed it on a WebLogic cluster installed on the same two machines, load-balanced from the same IIS reverse proxy at the DMZ mentioned above. But we do not know how to achieve this with IIS.
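
    Since IIS 7.5 with ARR is already acting as the reverse proxy in the DMZ, the same box can provide the failover: define the two IIS machines as a server farm with a health check, and ARR stops routing to a node that fails it. A hedged sketch of the applicationHost.config section (the farm name, addresses and ping URL are placeholders):

        <webFarms>
          <webFarm name="iisFarm" enabled="true">
            <server address="10.0.0.21" enabled="true" />
            <server address="10.0.0.22" enabled="true" />
            <applicationRequestRouting>
              <healthCheck url="http://iisFarm/ping.aspx" interval="00:00:10" />
            </applicationRequestRouting>
          </webFarm>
        </webFarms>

    A URL Rewrite rule then routes the web-service paths to the farm instead of to a single machine, just as is presumably already done for the WebLogic cluster.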

    Read the article

  • IP addresses for Windows Azure servers seem to be from the US, when the servers are supposed to be located in Europe

    - by paradroid
    I have a couple of test servers on Windows Azure. One is in the North Europe location and the other is in West Europe. I have yet to get around to testing which location offers better connection speeds from where I am (London, UK). The North Europe Azure datacentre is apparently in Ireland and the West Europe datacentre is in the Netherlands, which is weird in itself, I think. But what confuses me is that the IP addresses are both 168.63.xxx.xxx. GeoIP lookup says that they are both located in the US, and a traceroute from London to either address reaches the US before pings stop being answered. What's going on?

    Read the article

  • Two domains, two servers, one dynamic IP address

    - by giantman
    As I said, I have 2 domains, hi.org and bye.net, one dynamic IP address and two servers. I want to attach the domain bye.net to server1 and hi.org to server2, using Apache (WAMP 2.0i). I hope someone will be able to answer.

    httpd.conf file additions (the container tags below are reconstructed; the angle-bracket lines were stripped from the original post):

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

    vhost file additions:

        NameVirtualHost *:80

        # default fallback
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www/fallback"
        </VirtualHost>

        # Server 1
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www"
            ServerName bye.net
            ServerAlias bye.net
        </VirtualHost>

        # Server 2
        <VirtualHost *:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.119/
            DocumentRoot "g:/wamp/www"
            ServerName hi.org
            ServerAlias hi.org
        </VirtualHost>

    After doing all this I fall back to server1 only: I don't get the page hi.org, I only get the page bye.net. I don't even get the default fallback page, which should be served when a person enters the IP address rather than a domain name. I use Windows 7 (server2) and Windows XP (server1).

    Read the article

  • UDP flooding multiple servers

    - by Chris Gurney
    What do you suggest? As I write, we are being UDP-flooded across multiple servers in different data centers in 5 different countries, at up to 250,000 packets a second. I believe Cisco ASA 5505s would not handle that (some of our datacenter hosts can offer them; some have no firewalls to offer at all). Our clients naturally suffer constant disconnects from the server they are on. The attacker started this about three weeks ago; sometimes it runs for a few hours, sometimes up to a few days. If we can't stop it hitting the servers with firewalls, then how do we stop the attacker - now there is the challenge! Update: I've found that some of the data centers offer up to 10 firewall rules, but would their routers be able to handle the volume I am talking about? Thanks, Chris
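
    Host-level filtering can at least shield the application while upstream filtering is negotiated with the datacenters. A hedged sketch for any Linux-based servers in the mix (the port and rate are placeholders); note the flood still consumes the link, so only the provider's edge can truly stop it:

        # drop sources sending more than 100 UDP packets/sec to the service port
        iptables -A INPUT -p udp --dport 27015 -m hashlimit \
            --hashlimit-above 100/sec --hashlimit-mode srcip \
            --hashlimit-name udpflood -j DROP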

    Read the article

  • Capistrano deploying to different servers with different authentication methods

    - by marimaf
    I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) instance). I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add SSH options, for example to use the .pem file, like this:

        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
        ssh_options[:forward_agent] = true

    I have browsed Stack Overflow, and no post mentions how to deal with different authentication methods (this and this). I found a post that talks about 2 different keys, but it refers to a server and a git repository, both using different pem files, which is not my case. I also got to a tutorial, but couldn't find what I need. In case it is relevant: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an SVN repository. Any help is welcome. Thanks a lot.
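
    One way this is commonly handled with Capistrano 2 is the multistage extension from the capistrano-ext gem: each stage file carries its own servers and its own ssh_options, so the two authentication methods never touch each other. A hedged sketch with placeholder hosts:

        # config/deploy.rb
        require 'capistrano/ext/multistage'
        set :stages, %w(uni aws)
        set :default_stage, 'uni'

        # config/deploy/uni.rb -- the university server, password/agent auth
        server 'uni.example.edu', :app, :web, :db, :primary => true
        set :user, 'deployer'

        # config/deploy/aws.rb -- the AWS instance, .pem key auth
        server 'ec2-host.compute-1.amazonaws.com', :app, :web, :db, :primary => true
        ssh_options[:keys] = [File.join(ENV['HOME'], '.ssh', 'test.pem')]
        ssh_options[:forward_agent] = true

    Deploys then become cap uni deploy and cap aws deploy, with each stage loading only its own SSH settings.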

    Read the article

  • Oracle parameter array binding from C# executed in parallel and serially on different servers

    - by redir_dev_nut
    I have two Oracle 9i 64-bit servers, dev and prod. Calling a procedure from a C# app with parameter array binding, prod executes the procedure simultaneously for each value in the parameter array, but dev executes serially for each value. So, suppose the sproc does:

        select count(*) into cnt from mytable where id = 123;
        if cnt = 0 then
            insert into mytable (id) values (123);
        end if;

    and assume the table initially does not have an id = 123 row. Dev gets cnt = 0 for the first array parameter value, then 1 for each of the subsequent ones. Prod gets cnt = 0 for all array parameter values and inserts id 123 for each. Is this a configuration difference, an illusion due to a speed difference, or something else?
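
    Whichever explanation holds, the check-then-insert pattern is racy once the calls run concurrently, so prod's behaviour is the warning sign. A minimal PL/SQL sketch that stays correct either way, assuming a unique constraint on mytable.id:

        begin
            insert into mytable (id) values (123);
        exception
            when dup_val_on_index then
                null;  -- the row already exists; nothing to do
        end;

    Letting the constraint arbitrate makes serial and parallel execution indistinguishable, instead of depending on how each server dispatches the array binding.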

    Read the article

  • One domain hiding two servers

    - by George DSeas
    For our SaaS web app we have two identical servers in two geographically separated data centers. FOO_1 is the production server and does real-time (MySQL master-slave) replication to its backup, FOO_2. We want our users to always go to THEFOO.COM, which somehow points to the production server. So even if FOO_1 dies, we can just switch THEFOO.COM to point to FOO_2, so the failure is transparent. This switch can be manual or automatic, but without failback (if FOO_1 somehow becomes available again). Is there a way to do this with DNS? I am getting stuck on ANAME and CNAME configuration. We don't use subdomains, just the bare domain. If not, what are the other options? Does it make sense to just have a web server at LOVELY_FOO.COM and redirect all traffic? I also looked at load balancers, but didn't see a solution that works across data centers/network providers.
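
    Plain DNS can do the manual version of this: a single A record with a short TTL, rewritten to point at FOO_2 on failover. A minimal zone-file sketch with placeholder (documentation-range) addresses:

        ; short TTL so a manual switch propagates quickly
        thefoo.com.   60   IN   A   203.0.113.10    ; FOO_1, production
        ; on failover, swap in FOO_2's address and reload the zone:
        ; thefoo.com.   60   IN   A   198.51.100.20

    A CNAME cannot sit at the zone apex, which is why the bare-domain requirement keeps pushing this toward an A record (or a provider-specific ANAME/ALIAS); automatic health-checked failover is where a managed DNS service earns its keep.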

    Read the article

  • Linux clients and Windows servers can connect, but Windows clients cannot

    - by Mustafa Ismail Mustafa
    This is driving me insane, because I can't make head or tail of it. We have two DCs (W2K3 SP1), and I've tried this once on each machine as a sanity check. DHCP is served by one of the machines, and all machines get an address, no problem. The servers can connect/ping/browse the web, and so can all our Linux clients - but NONE of our Windows clients (all Windows 7) can. I can do anything within the network, I can even ping the firewall/router, but nothing from the Windows clients is leaving the confines of our subnet. I don't get it. The Linux and Windows clients are served by the same DHCP server, the gateway is the same, everything is the same. Anyone care to take a shot at how to resolve this? I tried adding explicit routes on the clients, but still no go. TIA SMIM
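
    One way to narrow it down is to compare, on a Windows 7 client, what DHCP actually delivered against a working Linux client, and then watch where outbound packets die:

        :: gateway, mask and DNS as actually assigned by DHCP
        ipconfig /all
        :: look for a wrong or missing 0.0.0.0 default route
        route print
        :: does the first hop answer, and where do packets stop?
        tracert -d 8.8.8.8

    If the assigned gateway and routes match the Linux clients' and tracert dies at the first hop, the router itself is treating the two client groups differently, rather than anything in DHCP.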

    Read the article

  • Email Servers that Abstracts Mailbox Concepts [on hold]

    - by David
    Lately I've been really interested in doing some rather unusual things with email, most of which rely on an SMTP and POP or IMAP server that gives the administrator an API to supply arbitrary methods for email storage, notification, or delivery. What I'm looking for would be analogous to mod_php and Apache, where Apache handles the delivery protocol and PHP handles the content creation and storage. I've considered writing my own, as those three protocols are quite simple, but I'm always nervous about putting my code public-facing, especially when it's at that low a level. So, are there any email servers that allow this much arbitrary control over email delivery, fetching, and receiving?
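
    On the SMTP side this split already exists in library form. A hedged Python sketch using aiosmtpd (an assumption - the question does not name it): the library speaks the protocol, and the handler alone decides what "storage" means:

        from aiosmtpd.controller import Controller

        class CustomStorageHandler:
            """aiosmtpd handles SMTP; this class defines storage/notification."""
            def __init__(self):
                self.messages = []   # stand-in for any backend: DB, queue, API call

            async def handle_DATA(self, server, session, envelope):
                self.messages.append(
                    (envelope.mail_from, envelope.rcpt_tos, envelope.content))
                return "250 Message accepted for delivery"

        controller = Controller(CustomStorageHandler(), hostname="127.0.0.1", port=2525)
        controller.start()   # serves in a background thread until .stop()

    The same separation on the retrieval side would need an IMAP/POP framework with pluggable mailbox backends, which is rarer but follows the same shape.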

    Read the article

  • Windows 2008 R2 server cannot access shares on other servers

    - by Rob
    I have a problem on my new 2008 R2 64-bit server. Essentially, the server sometimes refuses to access shares on other servers, in the format \\servername\sharename. Sometimes it works, then for a few hours it doesn't, and then at random it comes back online. This is a local AD network, and I have even put in a new gigabit switch between all the servers. All the old 2003 servers work fine, so I know DNS and WINS are OK. I get error 1006 in the event log saying that my R2 server can't contact the domain controller, when it clearly can. Just to add to the config: it is running on a Dell PowerEdge R410 under VMware ESXi 4.0, and R2 is configured as a terminal server. I can always view shares with the FQDN. This morning net view \\<servername> did not work but net view \\<FQDN> did. Very random and very frustrating. Any ideas? Thanks

    Read the article

  • Snow Leopard Server's built-in Wiki vs MediaWiki?

    - by semi
    I recently installed Snow Leopard Server and am trying to get the most out of the services it offers, but one thing that currently seems pretty barebones is the wiki it provides. Can Snow Leopard Server's wiki be modified with plugins the way MediaWiki can? Are there any good plugins that let you include templates like MediaWiki's? Is there any way to include embedded syntax-highlighted example code? Is there even a good name to refer to it by when searching? "Snow Leopard Wiki" just turns up a bunch of wikis about SL. Alternatively, how hard is it to install MediaWiki (or some other more advanced wiki engine) on SL Server? Could you plug it into the same authentication mechanism?

    Read the article

  • Qmail: relay only from selected servers based on rDNS

    - by Frank
    I'm looking for a way to disable qmail relaying for everyone but a certain group of hosts. These hosts all use the same identifying rDNS entry. In Exchange 2003, Postfix, Exim and cPanel this can be achieved pretty easily. However, the only way to do this with qmail seems to be by IP address, and the IPs tend to change. These changes can occur at any time, and it is impossible to keep all the servers up to date with the new IPs. Running a script that resolves the hostnames and whitelists them accordingly is my last-resort option, but it is not fool-proof. Does anyone know whether this is possible, and if so, how? Thanks!
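
    With qmail's SMTP service run under tcpserver (the usual arrangement), the rules database can match the client's rDNS name instead of its IP, which removes the need to chase address changes. A hedged sketch of /etc/tcp.smtp with a placeholder suffix:

        # hosts whose rDNS ends in .trusted.example.com may relay
        =.trusted.example.com:allow,RELAYCLIENT=""
        :allow

        # rebuild the CDB after editing:
        # tcprules /etc/tcp.smtp.cdb /etc/tcp.smtp.tmp < /etc/tcp.smtp

    This requires tcpserver to be started with -h (perform the hostname lookup) and ideally -p (verify the name resolves back to the connecting IP), since bare rDNS can be spoofed by whoever controls the reverse zone.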

    Read the article

  • How best to back up 6x Win2k3 servers

    - by saille
    We have an external HP LTO3 tape drive. It needs to back up 6 Windows 2003 machines every night. The servers are HP DL380 G3s, and the tape drive is attached locally to one of them via SCSI. On a budget of $0, and with a goal of keeping it simple, what is going to be the best way to back up these machines? What software should we use? NTBackup? Or does HP have something better for free? We don't need image backups - file system + system state will be adequate. Do we need to copy the files to be backed up onto the machine with the tape drive attached? Edit: Let me ask a more focussed question: would you use NTBackup or something else? No soapboxing please, we're after some quick advice from someone who's used a similar setup.
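
    NTBackup covers this on a $0 budget: it reads UNC paths directly, so file data does not have to be staged on the tape host first; System State, however, can only be captured by NTBackup running on the machine itself. A hedged sketch with placeholder names:

        :: on each remote server (as a scheduled task): System State must be
        :: captured locally, so dump it to a share on the tape host
        ntbackup backup systemstate /j "SS-%COMPUTERNAME%" /f "\\TAPEHOST\staging\%COMPUTERNAME%-ss.bkf"

        :: on the tape host: nightly.bks lists the UNC paths of the file shares
        :: plus D:\staging; send the lot to tape and verify
        ntbackup backup "@C:\nightly.bks" /j "Nightly" /t "LTO3-Nightly" /v:yes

    The .bks selection file is just a Unicode text file of paths, one per line, which keeps the nightly job maintainable as servers come and go.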

    Read the article
