Search Results

Search found 356 results on 15 pages for 'datacenter'.

Page 9/15 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • How to find a server's internal IP by its external IP

    - by HWTech
    I've got 12 servers in a datacenter, but I can log in by SSH to only one of them (a facade server); the other servers are reachable only from it. Our hosts file lists the IP of each available server:

        milkov@devel:/var/www/davel$ cat /etc/hosts
        192.168.1.4   data1
        192.168.1.7   data2
        192.168.1.5   bground1
        192.168.1.6   bground2
        192.168.1.10  frontend1
        192.168.1.11  frontend2
        ...

    I also have the domain megaplan.tvigle.ru (IP 79.142.100.36). Question: how can I tell which of these servers serves this domain? In other words, how do I find a server's internal IP address from its external IP?
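
    One low-tech way to check from the facade server: request the site from each internal host while sending the external domain as the Host header, and see which one answers. A minimal sketch, assuming the sites answer on plain HTTP port 80 and curl is installed (the hostnames are the ones from the hosts file above):

        for h in data1 data2 bground1 bground2 frontend1 frontend2; do
          # A 200/301 from a host suggests it serves the domain
          printf '%s: ' "$h"
          curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: megaplan.tvigle.ru' "http://$h/"
        done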

    Read the article

  • Remove server from load balancer [on hold]

    - by Cris
    Our datacenter load-balances traffic to our WebLogic servers based on an application called probe.ear deployed on each server. It is a simple Java app that the load balancer polls; as long as it is accessible, the load balancer keeps routing traffic to that server. When we want to remove a node from the load balancer, we just stop the probe app, and new traffic is no longer routed to it. But I am asking myself: what about the requests already in progress? I would expect the responses for those requests to still be completed. I haven't had time to test this (it's the first thing I'll do when I'm back at work), but on a theoretical level that is what should happen, unless the load balancer has some special settings. Does my reasoning make sense to you? And to restate the question, since it was reported as unclear: is there a way to configure a load balancer so that, from a given moment, it stops routing new traffic to a certain server without resetting the requests that are currently in flight?
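
    For reference, this graceful removal is usually called connection draining (or drain mode). The poster's load balancer is unspecified, but as an illustration, HAProxy 1.6+ can do exactly this from its admin socket. A sketch, assuming a backend named bk with a server named web1 and the stats socket enabled at the path shown:

        # Stop sending new traffic to web1; requests already in flight finish normally
        echo "set server bk/web1 state drain" | socat stdio /var/run/haproxy.sock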

    Read the article

  • DNS failover across multiple datacenters?

    - by Jae Lee
    I've got a site that is starting to get a lot of traffic, and just the other day we had a network outage at the datacenter where our load balancer (HAProxy) is hosted. This worried me: despite all my efforts to make the system fully redundant, I still could not make our DNS redundant, and I don't think there's an easy solution. The only thing I was able to find was signing up for DNS failover from places like dnsme, etc., but they cost too much for a budding startup. Even their corporate plan only gives you 50 million queries per month, and we use that up in a week. So my question is: is there any self-hosted DNS setup that provides failover the way dnsme does?
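
    One self-hosted pattern is to run your own authoritative name servers (in two datacenters) with low-TTL records, plus a health-check script that swaps the A record when the primary load balancer dies. A rough sketch using BIND's nsupdate; every name, IP, and key path here is made up, and it assumes dynamic updates are enabled for the zone:

        # Swap www to the standby IP if the primary stops answering health checks
        if ! curl -sf --max-time 5 http://203.0.113.10/health >/dev/null; then
          printf '%s\n' \
            'server 127.0.0.1' \
            'zone example.com' \
            'update delete www.example.com A' \
            'update add www.example.com 60 A 203.0.113.20' \
            'send' | nsupdate -k /etc/bind/failover.key
        fi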

    Read the article

  • What port does OpenLink ODBC Driver use?

    - by user36737
    I use Avaya Reporting Services with OpenLink ODBC drivers for the database connection. I know the driver uses port 5000 for the initial handshake, but after that I believe it uses a random port for the actual communication. I want to deploy my application so that it communicates with the client's systems in their datacenter, and they are asking which ports they should open on their firewalls. I obviously can't hand them the whole range above 50,000 that I know the OpenLink ODBC drivers may use. Can someone tell me which ports I should ask my client to open?
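
    If the broker's dynamic port range can be pinned down (OpenLink's request broker is typically configurable; check your broker configuration for its TCP port-range setting), the client only needs the handshake port plus that range. A hypothetical iptables illustration of the resulting firewall policy, assuming the broker were constrained to 60000-60050:

        # Handshake port used by the OpenLink request broker
        iptables -A INPUT -p tcp --dport 5000 -j ACCEPT
        # The constrained dynamic range agreed with the broker configuration
        iptables -A INPUT -p tcp --dport 60000:60050 -j ACCEPT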

    Read the article

  • Delay index build until SQL Server table load is complete with SSIS

    - by Mattew
    I have a large table that I am updating. Is it possible to disable index updates on the destination table until the load is complete? It seems like a waste for it to be constantly updating the index with each commit. I can just drop and recreate the index before and after the load; I just want to know if there is a quick way to configure that in the OLE DB or SQL Server destination. The server is Windows Server 2003 Datacenter Edition, running SQL Server 2008 Standard Edition with SSIS.
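
    There is no single switch for this on the OLE DB destination, but the drop/recreate idea is commonly automated with Execute SQL tasks before and after the data flow. The equivalent commands, shown here runnable from a shell via sqlcmd (server, database, table, and index names are hypothetical; don't disable the clustered index, as that takes the whole table offline):

        # Before the load: stop maintaining the nonclustered index
        sqlcmd -S myserver -d MyDb -Q "ALTER INDEX IX_BigTable_LookupCol ON dbo.BigTable DISABLE;"
        # ... run the SSIS package ...
        # After the load: a rebuild re-enables the index in one pass
        sqlcmd -S myserver -d MyDb -Q "ALTER INDEX IX_BigTable_LookupCol ON dbo.BigTable REBUILD;"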

    Read the article

  • Do I need a VPN to secure communication over a T1 line?

    - by Seth
    I have a dedicated T1 line that runs between my office and my datacenter. Both ends have public IP addresses. On both ends, we have T1 routers which connect to SonicWall firewalls. The SonicWalls do a site-to-site VPN and handle the network translation, so the computers on the office network (10.0.100.x) can access the servers in the rack (10.0.103.x). So the question: can I just add static routes to the SonicWalls so each network can access the other without the VPN? Are there security problems (such as someone else adding the appropriate static route and being able to access either the office or the datacenter)? Is there another or better way to do it? The reason I'm looking at this is that the T1 is already a pretty small pipe, and the VPN overhead makes connectivity really slow.

    Read the article

  • Mongodump on GridFS is killing the host IO

    - by Raphael
    I'm trying to take a mongodump from our production MongoDB while production is running. We have three production instances: one regular MongoDB, one with a few GB of data on GridFS, and one with a larger amount of data on GridFS. All MongoDB instances are running version 2.4.9 on Ubuntu 10.04 virtual servers. I use a mongodump command to export the databases to another server. Unfortunately our machines are hosted in a "low performance" (VMware-based) datacenter, so when I try to export the large GridFS database, disk IO hits 100% (and 50% of the CPU starts waiting for IO too). This has a very negative impact on the production applications, because DB access times increase so much that the applications become unusable. I'm looking for a way to throttle the mongodump so the export runs slower but goes easier on the hardware resources, leaving enough performance for the applications. Has anyone dealt with a similar scenario?
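
    mongodump of that era has no built-in rate limit, but the process can be deprioritized at the OS level so production IO wins. A sketch, assuming Linux with the CFQ IO scheduler (which ionice needs in order to have any effect); the database name and path are made up:

        # Run the dump at idle IO priority and lowest CPU priority
        ionice -c 3 nice -n 19 mongodump --db bigfiles --out /backup/dump

    If replication is an option, pointing mongodump at a dedicated secondary with --host sidesteps the problem entirely.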

    Read the article

  • How to design a network for connectivity between private and corporate LANs?

    - by maruti
    There is a group of servers connected to shared storage in a private LAN (10.x.x.x). This private LAN is managed by a Windows server (DHCP, DNS, and directory services). These hosts need to be reachable from outside the datacenter, e.g. via Remote Desktop. Can NIC2 on each of the hosts be connected to the public LAN without compromising speed or security? What are the important considerations: additional hardware, like switches? Routing and DNS software? Currently available hardware: a Dell PowerConnect 6224 switch, planned for the storage network. Software: Windows Server 2003 for DHCP, DNS, and A/D. Or would it be more flexible to use Linux distributions like IPCop, Untangle, etc.? All I am looking for is good isolation between the private network and the other networks, avoiding DHCP, DNS, and AD clashes.

    Read the article

  • SSH Tunneling for Munin

    - by Dennis Wisnia
    I have a NAS at home and a server in a datacenter. I create an SSH tunnel from the NAS to the server with the following command:

        autossh -fN -M20404 -R 1337:localhost:22 user@server

    It works, and I can access the NAS. Now I want to access the munin-node as well, so I create a new tunnel from the server to the NAS:

        ssh -N -R 49499:localhost:4949 localhost -p 1337

    But if I run nmap localhost -p 49499, the port is closed and I can't access the munin-node. I don't know why, and I would be very happy for your help.
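
    The likely culprit is the direction of the second forward: -R opens the listening port on the remote end of that connection, which here is back on the NAS, not on the server. A local forward is what exposes the NAS's munin-node on the server. A sketch of the corrected command, run on the server and reusing the existing reverse tunnel on port 1337:

        # Listen on server:49499 and forward through the tunnel to the NAS's munin-node
        ssh -N -L 49499:localhost:4949 -p 1337 user@localhost

    After that, nmap localhost -p 49499 on the server should show the port open.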

    Read the article

  • Is there a faster way to deploy an OVA template?

    - by Luke
    I need to deploy the vSphere Server Appliance 5.1. I have vSphere Client running locally, and my internet upload is capped at 3 Mbps; the client says the upload is going to take about 200 minutes. When selecting a URL instead of a local file, does vSphere Client download the OVA locally and then upload it, or does the server fetch it directly? My goal is to avoid waiting 3.5 hours for the upload. If specifying a URL isn't any faster, are there any other methods that would let me deploy from within the datacenter instead of from my office? We don't have any Windows VMs installed on our cluster, so unfortunately I don't have a Windows machine with a faster upload speed.
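
    One option worth checking, assuming you can get a shell on any machine with fast connectivity to the ESXi hosts (the mirror URL and host name below are placeholders): VMware's ovftool can deploy an OVA directly from a URL to a host, keeping the office link out of the transfer path entirely:

        # Deploy straight from a nearby mirror to the host
        ovftool http://mirror.example.com/vcsa-5.1.ova 'vi://root@esxi-host.example.com/'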

    Read the article

  • Cheapest Highly Available Web Server [closed]

    - by xyz
    I would like to create a highly available setup (e.g. a small cluster) for a web server; it will run Apache, PHP, and MySQL. There will be between 2 and 8 small websites with only very little traffic and workload, but high availability is very important. I don't want to depend on a single datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down the user must experience no downtime, or only a minimum of it, and no data loss. I have considered Amazon AWS with Elastic Load Balancing, since it is possible to run 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However, this seems rather expensive: using the AWS price calculator, http://calculator.s3.amazonaws.com/calc5.html, it totals $185/month for the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to build this HA setup? Best regards

    Read the article

  • SQL Server environment

    - by Olegas D
    Hello. I'm considering some changes to our current sales environment and trying to weigh all the pros and cons. The current situation: a SQL Server machine (a quite decent HP server, server1) plus a backup server (a smaller Dell machine, server2). All SQL files and the SQL Server instance itself live on server1; if something goes wrong with server1, I have to fail over to server2 manually. Connections to the SQL Server come from 1 HQ (where the server is located) plus 4 sites through VPN. Now I'm considering two scenarios: (1) buy a storage system, upgrade the existing servers (add RAM, upgrade the processors), and move to VMware ESXi; or (2) rent a server at a datacenter, rent a virtual server to fall back on in case the real server goes down, and rent some space at a storage provider to keep the SQL files there. Has anyone considered these options and maybe come up with a good pros/cons list? ;) Thanks

    Read the article

  • Can I redeploy Citrix XenCenter images in Amazon EC2?

    - by Mike Pinch
    Okay, I'm running Citrix XenCenter in my datacenter. My understanding is that Amazon EC2 runs a heavily customized flavor of Xen. Based on that, I would like to know whether there is a method to utilize the .VHD disk images generated by Citrix XenCenter in the Amazon EC2 cloud. I realize that I can install tools into the OS and build my own images, but that is not what I'm looking for; I'd like the efficiency of taking the existing images and redeploying them, or at least converting and then redeploying them. Basically, I would like to have all my backups able to run in the cloud; that is my motivation.
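
    Images exported straight from XenServer won't generally boot unchanged (EC2 uses its own kernels and drivers), but AWS's VM Import service can convert a VHD into an AMI. A rough sketch with today's AWS CLI, which postdates this question; the bucket and file names are made up, and the guest OS must be one VM Import supports:

        # Upload the exported VHD, then ask EC2's VM Import to convert it
        aws s3 cp backup.vhd s3://my-import-bucket/backup.vhd
        aws ec2 import-image --disk-containers \
          "Format=vhd,UserBucket={S3Bucket=my-import-bucket,S3Key=backup.vhd}"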

    Read the article

  • monitoring load on AWS EC2

    - by hortitude
    I'm interested in monitoring our EC2 instances to make sure we scale up when necessary. Right now we are monitoring idle CPU time as our metric. We aren't measuring disk IO, as we are not a very disk-intensive application. When running on our own hardware in a datacenter, I also usually monitor "load" from the top command. My question is: does it make sense to monitor "load" in a shared environment such as EC2? If so, how do you interpret the results?
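
    Load average is still meaningful on EC2, but only relative to the number of vCPUs the instance has (and %st, steal time, in top becomes worth watching too, since it shows CPU taken by the hypervisor for other tenants). A quick way to normalize it, assuming Linux:

        # 1-minute load average divided by vCPU count; sustained values near
        # or above 1.0 per core mean the instance is saturated
        echo "scale=2; $(cut -d' ' -f1 /proc/loadavg) / $(nproc)" | bc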

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I've got an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the root partition they are stored on is itself only 300 GB; essentially, around 96% of / is full, and there's no space left for storing a dump or backup. No other disk is attached to the server at the moment. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along these lines: (1) request an extra drive for the server and take a dump onto that drive; (2) transfer the dump to the new server, restore it, and make the new server a replication slave of the existing one to keep the data in sync; (3) when it's time to migrate, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't accept any writes, and tell the app developers to update their config with the new IP address for the DB. What are your suggestions for improving this, or is there a better alternative approach for this task?
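
    Step 1 can often be skipped entirely by never writing the dump to the full disk: stream it straight to the new server over SSH. A sketch (the host and path are made up): --single-transaction gives a consistent InnoDB snapshot without locking the server, and --master-data=2 records the binlog coordinates you'll need when configuring the new server as a slave:

        mysqldump --single-transaction --master-data=2 --all-databases \
          | gzip -c \
          | ssh user@new-server 'cat > /data/migration/full-dump.sql.gz'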

    Read the article

  • Windows 2008 Server SP2 64bit - TCP Connections never releasing after TIME_WAIT

    - by Peco
    Hello fellow admins :) We have an issue with Windows 2008 Datacenter edition SP2 64-bit. We have a process that polls very frequently, establishing new TCP connections each time. The system gets into a state where we end up with over 16k connections in TIME_WAIT. The default OS timeout is 120 seconds, after which these connections should go away, but that never happens: they persist and never get cleaned up, even after the originating process has long terminated (we are still at 16k connections two days after the process was killed). The OS is supposed to time them out, but it doesn't. Has anyone else seen this behavior, and if so, what was done to resolve it? We are aware of how to tune the TCP stack to shorten the timeout or allow more connections, but that is not the issue here. Thanks!
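
    Two quick checks worth running before deeper debugging, from a Windows command prompt (a hypothetical starting point, not a diagnosis): confirm whether a registry override of the TIME_WAIT delay exists at all, and watch whether the TIME_WAIT count ever moves. Entries that survive for days can also indicate the sockets are still referenced, e.g. by handles inherited by a child process that outlived the poller:

        rem Show the configured TIME_WAIT timeout override, if one exists
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay
        rem Count the current TIME_WAIT entries
        netstat -ano | find /c "TIME_WAIT"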

    Read the article

  • Windows 2012 Cluster on P6300 SCSI-3 Persistent Reservation issues

    - by Bruno J. Melo
    Scenario:

    - 1 HP P6300 with the latest XCS version
    - 1 Command View 10.1+ with the hosts defined as Windows 2008
    - 2 BL460c Gen8 servers with SPP 2012.10 and Windows Server 2012 Datacenter Edition with all updates, the MPIO feature enabled, and DSM v4.03.00

    The cluster validation tool triggers this error: "Test Disk 0 does not support SCSI-3 Persistent Reservations commands needed to support clustered Storage Pools. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters." Any ideas? Thanks for your help!

    Read the article

  • Can ping/nmap server, nothing else

    - by lowgain
    I was SSHed into our Ubuntu LAMP server, just doing an svn update, which hung. I disconnected, and since then I have not been able to SSH in or view any of our websites (from my own network or from a remote machine). I would have just assumed the server went down, but I can ping the machine and get really quick responses, and running nmap against the box shows all the normal ports open, so I am confused. The server is hosted remotely in a datacenter; do I have any remaining options besides contacting the host for support? Thanks!
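
    One more check that needs nothing but a shell somewhere else: connect to port 22 raw and see whether sshd still sends its version banner (the hostname below is a placeholder):

        # A banner like "SSH-2.0-OpenSSH_..." means sshd itself is alive;
        # an open-but-silent port often means the box is wedged on IO
        # (a hung disk or NFS mount), which would fit an svn update hanging
        nc -w 5 server.example.com 22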

    Read the article

  • How do you explain ethernet cable problems to non-networking people?

    - by Bryan McLemore
    We've recently been doing a large network migration in our datacenter. A few cables in it started delivering really bad pings (in excess of 500 ms on a LAN) before they could be replaced. The cables we were replacing weren't made well: they were all hand-made, the runs on some were so short that the bend radius was definitely too tight, and some ran in bundles right in front of the heat exhaust of the power supplies. The cables were running OK until we started pulling out the ones surrounding them; then a few started having the problem I mentioned above, degrading performance and causing reliability issues. I'm trying to figure out the best way to explain this to non-networking people. Is there any documentation people could recommend, or other methods?

    Read the article

  • Windows Server 2008 R2 Maximum Processor Limit Confusion

    - by Stevoni
    As I was looking through the Windows Server 2008 R2 specifications, I saw that the maximum supported processor count is 64 sockets for the Datacenter edition. This puts the maximum number of cores at 256 (if all sockets hold quad cores), which I think is just silly, but whatever. And now the questions: How does one set something like that up? (Obviously not for me, but humor me.) Are there multiple dual-socket motherboards running in a giant case with a ton of memory? How does the OS see all of the CPUs if they're on different boards? And what would be a real-world example of a need for 64 sockets attached to one operating system versus 32 two-socket servers?

    Read the article

  • What should a hosting company do to prepare for IPv6?

    - by Josh
    At the time of writing, The IPv4 Depletion Site estimates there are 300 days remaining before all IPv4 addresses have been allocated. I've been following the depletion of IPv4 addresses for some time and realize the "crisis" has been going on for many years and IPv4 addresses have lasted longer than expected. However... as the systems administrator for a small SaaS / website hosting company, what steps should I be taking to prepare for IPv6? We run a handful of CentOS and Ubuntu Linux systems on managed hardware in a remote datacenter. All our servers have IPv6 addresses, but they appear to be link-local addresses. Our primary business function is website hosting on a proprietary CMS. One of my concerns is SSL certificates; at the moment, every customer with an SSL certificate gets a dedicated IPv4 address. What else should I be concerned about, and what action should I take to be prepared for IPv4 depletion?
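
    Link-local addresses (fe80::/10) only work on the local segment, so the first concrete step is getting routable addresses from the datacenter and publishing an AAAA record per site. A sketch of bringing one up by hand on Linux; the 2001:db8::/32 prefix is the reserved documentation range, standing in for whatever your provider assigns:

        # Add a routable IPv6 address and default route (temporarily; persist
        # them in the distro's network config once verified)
        ip -6 addr add 2001:db8:0:1::10/64 dev eth0
        ip -6 route add default via 2001:db8:0:1::1
        # Verify end-to-end connectivity
        ping6 -c 3 ipv6.google.com

    On the SSL front, IPv6's abundance works in your favor: each certificate can keep a dedicated IPv6 address even after dedicated IPv4 addresses become scarce.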

    Read the article

  • nginx + reverse proxy question

    - by Joe Pilon
    Hello, I am using nginx right now for our production sites, with a reverse proxy to an Apache instance on the same server, and it works fantastically. I'm wondering if I can do this: install nginx on box #1 in, say, Canada and have it reverse-proxy HTTP requests to box #2 in a datacenter in the USA. I know there may be some latency or delays in loading the page, but that would probably not be noticeable to the end user, especially if both servers have 100 Mb ports. Box #2 would handle only the Apache requests; all images would be served from box #1 via nginx. Now, would the end visitor be able to tell in any way that two boxes are being used? Box #2 has sensitive data which we can't have stolen in the event of hacking, so this method helps keep things a bit more secure. Does anyone know if this is possible, or has anyone done something similar?
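
    This is a standard nginx pattern, and visitors generally can't tell as long as box #2's address never appears in redirects or links; what they can notice is the added cross-border round trip on every dynamic request. A minimal sketch of the box #1 configuration (the backend IP, port, and paths are placeholders):

        # On box #1: serve images locally, proxy everything else to box #2
        location /images/ {
            root /var/www;
        }
        location / {
            proxy_pass http://203.0.113.20:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }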

    Read the article

  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Server 2008 R2 Datacenter installation, running IIS 7.5 and PHP 5.3.6 via FastCGI. On PHP pages with lots of images (60+), some of the images fail to load. It is not always the same images: on each page refresh, an image that worked previously may not load, while an image that did not now does. The Net tab in Firebug reveals that the failing image requests are 403 errors. All of the images are located on the server in question, and the images directory has the correct permissions. I believe this problem is the result of a limit on requests. All of my research points to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too). I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403s on pages with lots of images. I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS connection limit, but I don't see anything that looks like a request limit in php.ini. What am I missing? Any help would be appreciated, thank you.
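
    One way to narrow this down before hunting for limits: IIS records a substatus code with every 403, and the substatus identifies which subsystem rejected the request; for instance, 403.501/403.502 come from the Dynamic IP Restrictions module, whose burst limits are exactly what 60+ near-simultaneous image fetches from one client can trip. From a command prompt, assuming the default log location (the site ID W3SVC1 is a placeholder):

        rem Pull the recent 403 lines; the field after 403 is the substatus
        findstr /c:" 403 " C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log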

    Read the article

  • Is it necessary to have firewalls rules between trusted nodes communicating on their backend interfaces?

    - by Tom
    I have 6 nodes that have internet access on eth1 and private access to one another on eth0. Currently I have firewall rules on eth0 for things like memcached and NFS. Is this necessary? It's a real headache, as NFS for example communicates on loads of different ports, and I recently introduced GlusterFS, which needs more still. Is the headache of figuring out which backend ports to unblock worth the security enhancement? I should mention that I will of course still have a firewall rule on eth0 to block servers owned by others in the same datacenter. Thanks
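
    A middle ground that keeps the isolation without per-service port lists: trust the backend interface only for traffic sourced from your own private subnet and drop everything else arriving on it. A sketch, assuming the backend network is 10.0.0.0/24 on eth0:

        # Trust the cluster's own subnet on the backend interface...
        iptables -A INPUT -i eth0 -s 10.0.0.0/24 -j ACCEPT
        # ...and drop anything else on eth0 (e.g. other tenants in the DC)
        iptables -A INPUT -i eth0 -j DROP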

    Read the article

  • Boot another OS (e.g. Windows) *once* on a dual-boot machine

    - by user974312
    I have a dual-boot machine with Windows and Linux on it. It isn't within reach; it sits in a datacenter that I have to access remotely. Most of the time I work on Linux, but on some occasions I have to use the Windows OS on it. Here is the problem: I hope to do all of the following remotely. Do some magic to GRUB; reboot the machine from Linux; GRUB boots Windows; access Windows remotely; get the work done; reboot the machine from Windows; GRUB boots Linux again. So I wonder whether I can set the boot target for the next boot only, one single time? Thanks.
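
    GRUB 2 has a one-shot default built in for exactly this. A sketch, assuming GRUB 2 with GRUB_DEFAULT=saved set in /etc/default/grub (followed by update-grub); the menu entry title is a placeholder, so copy the real one from /boot/grub/grub.cfg:

        # Boot Windows on the next reboot only; afterwards the saved
        # default (Linux) applies again
        sudo grub-reboot "Windows 7 (loader) (on /dev/sda1)"
        sudo reboot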

    Read the article
