Search Results

Search found 12072 results on 483 pages for 'x86 servers'.


  • Proper Network Infrastructure Setup: DMZ, VPN, Routing Hardware Question

    - by NickToyota
    Greetings, Server Fault Universe. Here's a quick background: two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 people. The person I am replacing left the company with little to no notice. Basically, I have inherited a network consisting of one main HQ (where I am situated), which has existed for over 10 years, plus five smaller offices (fewer than 20 people each). I am trying to make sense of the current setup. The network at the HQ includes:

      - A Linksys RV082 router providing internet access for employees and site-to-site VPN connecting the smaller offices (each using an RV042). We have both cable and DSL lines connected to balance traffic (this does not work at all, but it is not my main concern right now).
      - A Cisco IronPort appliance: the main gateway for our incoming and outgoing email, with both an external and an internal IP.
      - Lotus Domino inbound and outbound email servers connected to the IronPort gateway; these also have external and internal IPs.
      - Two Windows 2003 and 2008 boxes running as domain controllers, with DNS of course. These also have both external and internal IPs.
      - Website and webmail servers, also on both external and internal IPs.

    I am still confused as to why so many servers are connected directly to the internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern), and I need a proper firewall setup for the external/internal servers, along with a VPN solution for about 50 employees. Budget is not a concern, as I have been given some flexibility to purchase the necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault Universe have some recommendations? Thank you all in advance.


  • Varnish configuration, NamevirtualHosts, and IP Forwarding

    - by Brent
    I currently have a bunch of NameVirtualHost-based websites, load balanced between 3 apache2 servers using ldirectord. I would like to insert Varnish as a reverse web proxy between ldirectord and apache in the following way:

      - a request comes in to ldirectord;
      - it is then load balanced between the 3 apache2 servers and varnish, with a weight of 1 for the webservers and 99 for varnish (so if varnish is rebooted, the webservers take over seamlessly);
      - varnish then load balances its requests between my apache2 servers.

    However, the varnish part is not working. I wonder whether this has to do with the fact that my apache servers use x.x.x.x:80 for their NameVirtualHosts instead of *:80? (They have to, since each server hosts multiple IP addresses.) Or perhaps it has to do with the need for IP forwarding to be set up on the varnish server? (I did echo 1 > /proc/sys/net/ipv4/ip_forward on this server; is that sufficient?) How can I debug this problem? ldirectord doesn't log what it does with each request (and if it did, I would be overwhelmed with information, since I'm serving hundreds of requests per second), and the varnish log shows the ldirectord server connecting to it every 5 seconds, but nothing else. I have set up a test site using this configuration, but it fails - no apache access logs, no applicable varnish logs.
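
    A minimal sketch of the Varnish side, assuming Varnish 2.x VCL; the backend addresses 10.0.0.1-3 are placeholders for the real apache IPs. Note that Varnish proxies at the application layer rather than routing packets, so it does not itself need IP forwarding:

        # /etc/varnish/default.vcl - round-robin across the apache backends
        backend web1 { .host = "10.0.0.1"; .port = "80"; }
        backend web2 { .host = "10.0.0.2"; .port = "80"; }
        backend web3 { .host = "10.0.0.3"; .port = "80"; }

        director apache round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
        }

        sub vcl_recv {
            # the Host header passes through untouched, so NameVirtualHosts still match
            set req.backend = apache;
        }

    One thing worth checking against this sketch: because the apache NameVirtualHosts are bound to x.x.x.x:80 rather than *:80, the .host values must be exactly those bound addresses, or apache will answer with the wrong vhost (or not at all).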


  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal dns servers (bind 9). Our problem comes when one of the internal dns servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock linux resolver doesn't really have the concept of "failing over" to a different dns server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary dns server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions:

      - Use linux-ha/Pacemaker and failover IPs (so the dns IP VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
      - Run a local dns server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
      - Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks dns servers as up or down, and won't use "down" dns servers.

    Anycast seems to work only at the IP routing level, and depends on route updates for server failure. Multicast seemed like it would be a perfect answer, but bind does not support broadcast or multicast, and the docs I could find suggest that multicast dns is aimed more at service discovery and auto-configuration than at regular dns resolving. Am I missing an obvious solution?
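
    For reference, the closest thing to the "tweak /etc/resolv.conf" answer is a sketch like the following (nameserver addresses are placeholders). It shortens the stall rather than eliminating it - each affected client still eats one short timeout per lookup until the dead server returns:

        # /etc/resolv.conf
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        # fail to the next server after 1s instead of the default 5s,
        # make at most two passes through the list, and rotate to spread load
        options timeout:1 attempts:2 rotate

    glibc's resolver honors timeout, attempts and rotate (see resolv.conf(5)); the per-query worst case is roughly timeout x attempts x number of servers, which is why the local-cache options above still win on latency.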


  • OpenSwan (IPSEC) on Fedora 13 with Snow Leopard as a client

    - by sicn
    I recently installed OpenSwan on my Fedora 13 machine. I want to use it to connect from Mac OS X with L2TP over IPSEC, but unfortunately I am already stuck on the IPSEC negotiation part. My server is running behind a NATted firewall, so my external IP differs from the server's IP. The server has a fixed IP on the network, and the same is almost always true for the clients (they are usually behind a NATted firewall). I installed OpenSwan on Fedora 13 with the following configuration:

        config setup
            protostack=netkey
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            nhelpers=0

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=my.servers.external.ip
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/0

    IPSEC starts fine and listens on UDP 500 and 4500. These two ports are opened in the firewall and are forwarded correctly to the server. In my /etc/ipsec.secrets file I have:

        my.servers.external.ip %any: "LongAndDifficultPassword"

    And finally in my sysctl.conf (the redirect entries are there because OpenSwan was strongly protesting about send/accept_redirects being active) I have:

        net.ipv4.ip_forward = 1
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_redirects = 0

    Running "ipsec verify" gives me "all greens" (except Opportunistic Encryption Support, which is DISABLED). However, when trying to connect, my Mac logs the following:

        Nov 1 19:30:28 macbook pppd[4904]: pppd 2.4.2 (Apple version 412.3) started by user, uid 1011
        Nov 1 19:30:28 macbook pppd[4904]: L2TP connecting to server 'my.servers.ip.address' (my.servers.ip.address)...
        Nov 1 19:30:28 macbook pppd[4904]: IPSec connection started
        Nov 1 19:30:28 macbook racoon[4905]: Connecting.
        Nov 1 19:30:28 macbook racoon[4905]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        Nov 1 19:30:31 macbook racoon[4905]: IKE Packet: transmit success. (Phase1 Retransmit).
        Nov 1 19:30:38: --- last message repeated 2 times ---
        Nov 1 19:30:38 macbook pppd[4904]: IPSec connection failed

    Any ideas at all?


  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT support has refused to help on this :()

    "Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have:

      - clientdb.prod.example.com for Production
      - clientdb.perf.example.com for Performance Testing
      - clientdb.qa.example.com for QA
      - clientdb.dev.example.com for Development

    Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that resolves correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain."

    This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?
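
    What the quoted technique relies on is the client-side DNS suffix search list, not anything special on the DNS server, so no split-horizon setup is needed for this part. A hedged sketch for a QA machine (domain names taken from the quote; the registry path is the standard Tcpip parameters key):

        rem Windows: set the suffix search list so qa.example.com is tried before example.com
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
            /v SearchList /t REG_SZ /d "qa.example.com,example.com" /f

        rem the Linux equivalent, for mixed environments, is the search directive
        rem in /etc/resolv.conf:  search qa.example.com example.com

    With this in place, an unqualified lookup of clientdb tries clientdb.qa.example.com first and falls back to clientdb.example.com. The per-requestor-IP "tagging" asked about at the end is a different feature; plain Windows DNS of that era does not do it - that really is split-horizon territory.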


  • Unable to logon using terminal server connection

    - by satch
    I have several W2K3 SP2 servers with admin TS enabled. I discovered this morning that I was unable to log on to some of them. I have a couple of Citrix servers in different farms, a SAP (IA64) app server and a cvs server. All of them show the same symptoms: remote connections are refused. I have been able to log on locally, the terminal server service is up, and there are no users (so connections are not depleted). There are no errors in the log on most servers. One of the Citrix ones reported the following errors:

        Event ID 50, Source TermDD, Type Error:
        The RDP protocol component X.224 detected an error in the protocol stream and has disconnected the client.

        Event ID 1006, Source TermService, Type Error:
        The terminal server received large number of incomplete connections. The system may be under attack.

    Anyway, I suppose these errors appear because the server isn't working and Citrix users keep trying to log on en masse. (I nmap'ed the server and the port seems up.) I've solved this problem by rebooting before, but with so many servers affected that seems like a crappy workaround. Any idea how to troubleshoot it properly? Thanks in advance.
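
    A hedged starting point for inspecting the listener state remotely without rebooting, using standard W2K3 tools (server name and session ID are placeholders):

        rem list sessions on the affected box - shows whether the rdp-tcp listener is in Listen state
        qwinsta /server:AFFECTEDSERVER

        rem if a particular session is wedged, reset it by ID (session 2 here is just an example)
        reset session 2 /server:AFFECTEDSERVER

    If the rdp-tcp listener itself shows as down, it can sometimes be disabled and re-enabled from the Terminal Services Configuration console (tscc.msc) without a full reboot, which at least narrows the workaround down to the listener rather than the whole server.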


  • Map FTP folder to folder on different FTP server

    - by jolt
    In my team we work a lot with FTP. We upload and download files from several different servers daily. Currently every member of the team manages access credentials for each FTP server locally on their own machine. I am looking for a way to set up a central FTP server that we can connect to, and from there navigate to folders that each represent one of the other FTP servers that we connect to daily. Something like this:

        In-house central FTP server:
        |- FolderA  --> server A root folder
        |- FolderB  --> server B root folder
        |- FolderC  --> server C root folder

    A setup like this would mean that we can manage access credentials on the central FTP server, and team members would only need the access credentials for the central FTP server; from there they could navigate to the other servers through these "virtual" folders. We could potentially develop our own custom FTP server that just forwards requests to the remote FTP servers, but I feel like something like this (or something similar) must already have been done. So I'm looking for pointers that could help us find software for Windows that could simplify our current setup. Thank you! Similar (unanswered) question here: FTP management server


  • IIS 7.5 FTP Service crashes after installation of Advanced Logging 1.0 Module

    - by Jeremy
    I've recently been tasked with setting up two new production servers for an ASP.Net application. The servers sit behind an F5 load balancer, which in turn forwards the end user's IP address via the standard X-Forwarded-For HTTP header. All of the reading I have done suggests that I need to install the IIS Advanced Logging module in order to take advantage of the X-Forwarded-For header. Some quick background: both of the web servers are Windows 2008 R2 Standard (x64), with IIS 7.5 installed and configured. The FTP role has also been installed, configured and is operational. The issue: after installing the IIS Advanced Logging module via the Web Platform Installer, I noticed the following error in the Event Viewer:

        The FTP Service encountered an error trying to read configuration data from file
        \\?\C:\Windows\system32\inetsrv\config\applicationHost.config, line number 374.
        The error message is: Unrecognized element 'advancedLogging'

    Trying to connect over FTP to either of the web servers now results in a 530. I've spent 2 hours scouring Google trying to find a solution, short of uninstalling the Advanced Logging module. As far as I can tell, there is no way to turn off Advanced Logging on a site-by-site basis. Help would be appreciated.


  • Windows 7 64-bit installation from alternative media (no DVD/USB Flash drive)

    - by Niels Willems
    Greetings. I currently have Windows 7 x86 installed on my computer, and I want to install Windows 7 x64 on a different partition. However, there is a little issue: I cannot run the x64 installer from within the Windows 7 x86 I currently have. My plan was to install Windows 7 x64 on another partition, boot from that partition, and then install it on the partition I actually want my OS on. Once that is complete, I could just format the partition holding the Windows 7 x64 I no longer needed. But the installer will not run from the x86 version of Windows 7, even though I do not want to upgrade that Windows directly. The reason I'm doing this in such a roundabout way is that my optical drive is broken, and I'm really not into buying a new one I would use once a year. I also don't have a USB flash drive big enough to hold the installation files. As far as I'm aware, I cannot use an external hard drive such as this one, which I do have. Are there any alternatives for installing Windows 7 x64, or am I forced into buying a USB flash drive or a new optical drive? Thank you in advance for your replies.

    Edit: This picture (screenshot not reproduced here) shows the current partitions on my laptop. I want to get Windows 7 x64 on the C partition, but have to install it first on the F partition, then boot the F-partition Windows to format C and install x64 on that one. My external drive is J.

    Edit 2: No alternative computer with a DVD drive; the install files are an iso from MA3D. To install my 32-bit version I mounted the ISO in Daemon Tools to replace my Windows Vista, but since I cannot run the 64-bit installer from my 32-bit OS, this doesn't work.
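
    For what it's worth, one common DVD-less approach is to boot the installer straight from a hard-disk partition. A sketch using the asker's drive letters (F: as the staging partition; J:\win7x64 is an assumed path to the extracted ISO contents), run from an elevated command prompt:

        rem copy the installation files to the spare partition
        xcopy J:\win7x64\*.* F:\ /e /h

        rem write a Windows 7 boot sector to F: (bootsect.exe ships in the ISO's \boot folder;
        rem if the x64 media's copy refuses to run on 32-bit Windows, the 32-bit media's works too)
        F:\boot\bootsect.exe /nt60 F:

        rem mark F: active so the BIOS boots it (interactive diskpart steps, shown as comments):
        rem   diskpart > select disk 0 > select partition <F's number> > active > exit

    After a reboot the machine should start Windows 7 setup from F:, from which C: can be formatted and installed. Note the old system stops booting while F: is marked active, so this is only worth doing once the plan is committed.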


  • OpenVPN, install a TAP adapter

    - by GolezTrol
    When I try to connect to my work VPN using OpenVPN, the connection fails with the message: "All TAP-Win32 adapters on this system are currently in use." Many sources suggest looking in Control Panel\Network and Internet\Network Connections and enabling the TAP adapter, but when I look there, there is none. Now I've run addtap.bat, which is provided with OpenVPN, but I still don't see any TAP adapter, and logging in to the VPN still fails. The output of addtap.bat is:

        C:\Windows\system32>"C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" install "C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf" tap0801
        Device node created. Install is complete when drivers are updated...
        Updating drivers for tap0801 from C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf.
        Drivers updated successfully.

    I've used Run As Administrator for both the OpenVPN setup and addtap.bat. I've run deltapall.bat to remove any (maybe hidden) adapters; it said it removed three of them, after which I ran addtap.bat again to try to create another one. I also run OpenVPN itself as administrator. What's wrong? Running Windows 7 Home Premium on an HP Pavilion dv7 4050ed. It has worked before, but I recently had to reinstall my laptop, for which I used the restore disks I created when I just got it. Everything else seems to work fine.

    == UPDATE ==

    The TAP adapter is found in Device Manager, but apparently it is disabled because it is incompatible with 64-bit Windows 7. I've uninstalled OpenVPN GUI, downloaded a version that should be 64-bit compatible, and installed that. Still no cigar. Then I found a tip to install OpenVPN (version 9) after installing OpenVPN GUI, because that installs OpenVPN version 8. Now I have a v9 TAP driver in Device Manager, but it still doesn't work: it shows up in Device Manager with an exclamation mark, and not at all in my network devices.
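
    For anyone debugging a similar state: tapinstall.exe is a rebadged devcon, so it can also report driver status. A hedged check (the hardware ID tap0801 is taken from the addtap.bat output above):

        rem show the device state - an exclamation mark in Device Manager surfaces here as a problem code
        "C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" status tap0801

        rem clean out the broken instance before reinstalling the 64-bit driver
        "C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" remove tap0801

    The key detail in the update is the architecture: on x64 Windows the TAP driver must be the 64-bit, signed build, so an installer that lays down the 32-bit OemWin2k.inf would explain exactly this exclamation-mark state.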


  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine if I can use a single firewall for my entire network, including customer servers, or if each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall. If you need a web node and a database node, you also have to get a firewall, and pay another monthly fee for it.

    I have colo space with several KVM virtualization servers hosting VPS services to many different customers. Each KVM host is running a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open, allowing a web VPS to be accessed from anywhere on ports 80 and 443, but blocking a database VPS completely to the outside and only allowing a certain other VPS to access it. The configuration works well for my current needs. Note that there is not a hardware firewall protecting the virtualization hosts in place at this time. However, the KVM hosts only have port 22 open, are running nothing except KVM and SSH, and even port 22 cannot be accessed except for inside the netblock.

    I'm looking at possibly rethinking my network now that I have a client who needs to transition from a single VPS onto two dedicated servers (one web and one DB). A different customer already has a single dedicated server that is not behind any firewall except iptables running on the system. Should I require that each dedicated server customer have their own dedicated firewall? Or can I utilize a single network-wide firewall for multiple customer clusters?

    I'm familiar with iptables, and am currently thinking I'll use it for any firewalls/routers that I need. But I don't necessarily want to use up 1U of space in my rack for each firewall, nor the power consumption each firewall server will take. So I'm considering a hardware firewall. Any suggestions on what is a good approach?
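
    A rough sketch of the per-customer isolation described above, as it might look on a single shared iptables firewall/router (addresses are placeholders; 10.1.1.0/24 stands in for one customer's web+DB pair):

        # customer 1: web node reachable from anywhere on 80/443
        iptables -A FORWARD -d 10.1.1.10 -p tcp -m multiport --dports 80,443 -j ACCEPT
        # customer 1: DB node reachable only from customer 1's web node
        iptables -A FORWARD -s 10.1.1.10 -d 10.1.1.20 -p tcp --dport 3306 -j ACCEPT
        # reply traffic for established connections
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # everything else destined for this customer block is dropped
        iptables -A FORWARD -d 10.1.1.0/24 -j DROP

    The per-customer-firewall requirement many hosts impose is as much about administrative blast radius as raw security: one shared ruleset means one typo can expose every customer, whereas per-customer devices (or at least per-customer chains) keep mistakes contained.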


  • How do I force a server to leave a SharePoint farm

    - by Stefan
    I have two web servers in a SharePoint (WSS 3.0) farm with one database server for the config and content databases. I already moved my content databases to a new database server successfully. But when I tried to move the SharePoint config database using the "stsadm deleteconfigdb" and "stsadm setconfigdb" commands, one of my servers got stuck in an intermediate state.

    I was able to join one of the web servers to the config database on the new server, but the other server is not able to join because it believes it is already part of the farm (which it used to be, before the move). On Central Administration the status of the services on that server is "stopping". Even after rebooting all servers involved, uninstalling SharePoint and whatnot, this status does not change, and because of it I am not able to join the second server to the new config database; I get random error messages when trying to join the farm.

    I believe that if I can unstick this server, it will be able to join the farm again. The farm believes the second server is already part of it, but the web server itself knows it's not. Any ideas on how to forcefully kick a server out of the farm?
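
    A hedged sketch of the usual forced-leave sequence for WSS 3.0, run on the stuck web server (psconfig lives in the 12-hive bin folder; paths assume a default install, and the -connect values are placeholders):

        cd "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN"

        rem forcibly disconnect this machine from whatever farm it thinks it belongs to
        psconfig.exe -cmd configdb -disconnect

        rem then join the farm on the new database server (farm account switches omitted here)
        psconfig.exe -cmd configdb -connect -server NEWDBSERVER -database SharePoint_Config

    The -disconnect step rewrites the server's local configuration state rather than asking the farm's permission, which is what makes it useful when the farm and the server disagree about membership.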


  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters per IP, due to ports) on Coyote Point balancers, distributing connections to 3 web servers serving about 6 GBytes outgoing and 2 GBytes incoming per day. If we double, this would be about 800 IPs, possibly 1600 clusters, and about 6 servers per cluster (for a total of 9600 so-called "real servers", to use Barracuda's lingo).

    Due to the number of clusters, most solutions I've looked at (Coyote, Barracuda, Loadbalancer.org) seem unsure whether they'll be able to handle our planned growth, mostly due to the health checks performed on the servers - which makes total sense when you think about it. So the fine folk at Loadbalancer.org recommended that we may be better off offloading the 400-800 public IPs, which we require for SSL eCommerce solutions, onto a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health check problem.

    Does this idea make sense to y'all? Or would you have other recommendations to make? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in. On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from Loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)
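
    The EXT_IP:443 to INT_IP:INT_PORT mangling described here is plain destination NAT; a Linux box as the forward-facing router would express it like this (all addresses are placeholders, and in practice the hundreds of rules would be generated from the cluster database rather than written by hand):

        # one rule per public IP: terminate nothing, just rewrite the destination
        iptables -t nat -A PREROUTING -d 198.51.100.1 -p tcp --dport 443 \
            -j DNAT --to-destination 10.0.0.10:8443
        iptables -t nat -A PREROUTING -d 198.51.100.2 -p tcp --dport 443 \
            -j DNAT --to-destination 10.0.0.10:8444

        # the router must forward between interfaces
        echo 1 > /proc/sys/net/ipv4/ip_forward

    Because DNAT is pure flow rewriting with no per-backend health checks, it sidesteps the health-check scaling problem exactly as suggested - but it also means a dead backend simply goes dark, so the one or two remaining balancer clusters behind the router still carry the health-checking duty. For failover of the router pair itself, VRRP (e.g. keepalived) or a vendor equivalent is the usual mechanism.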


  • Exchange 2010 Hub cannot deliver to Exchange 2007 Hub - "451 5.7.3 Cannot achieve Exchange Server authentication"

    - by Graeme Donaldson
    We have an existing Exchange 2007 server in Site A (exch07). I've installed an Exchange 2010 server in Site B (exch10). Both servers have the CAS, Mailbox and Hub roles. Messages sent via SMTP on exch10 which are destined for mailboxes on exch07 are queued, with the "Last Error" reported in Queue Viewer as:

        451 4.4.0 Primary target IP address responded with: "451 5.7.3 Cannot achieve Exchange Server authentication." Attempted failover to alternate host, but that did not succeed. Either there are no alternate hosts, or delivery failed to all alternate hosts.

    I've found that some people have resolved this by creating new receive connectors which are scoped specifically to apply to connections from the remote hub(s), but I have had no luck doing this. Specifically, I created new receive connectors on both servers with the following settings:

        Remote IP         = IP(s) of remote server
        Authentication    = "Transport Layer Security (TLS)" and "Exchange Server authentication"
        Permission Groups = "Exchange servers" and "Legacy Exchange servers"

    This made no difference; I see the same error message. What am I missing?

    Update: We noticed that the Application log had this error message from MSExchangeTransportService:

        Microsoft Exchange could not find a certificate that contains the domain name exch07.domain.local in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector exch10 with a FQDN parameter of exch07.domain.local. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key.

    It turns out that the default self-signed certificate was no longer enabled for the SMTP service for some reason. After enabling the self-signed certificate for SMTP, we no longer get the error in the event logs, but delivery is still failing with the same error message.

    Update 2: I put a mailbox on exch10 and attempted to deliver a message via SMTP on exch07, and I get the same error.
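
    For reference, the scoped connector described above would look something like this in the Exchange Management Shell (a sketch only; the name, binding and remote address are placeholders for the real site values):

        New-ReceiveConnector -Name "Hub from exch10" `
            -Usage Custom -Bindings 0.0.0.0:25 `
            -RemoteIPRanges 10.2.0.10 `
            -AuthMechanism Tls,ExchangeServer `
            -PermissionGroups ExchangeServers,ExchangeLegacyServers

        # and the certificate check prompted by the event-log text:
        Get-ExchangeCertificate | Format-List Thumbprint,Services,CertificateDomains
        Enable-ExchangeCertificate -Thumbprint <thumbprint> -Services SMTP

    One thing worth verifying against this sketch is that the certificate's domain list actually contains the FQDN the connector advertises (exch07.domain.local in the event text), since Exchange Server authentication rides on a successful STARTTLS using exactly that name.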


  • What are the most important aspects to consider when choosing a SAN for a small office virtualization setup?

    - by Prof. Moriarty
    I am in the process of consolidating 6 physical servers running 6 different operating system flavors (don't ask) onto two identical physical servers (Dell PowerEdge 2900), using the free VMware ESXi 4.0 platform. We will install an iSCSI SAN over a 1GbE network and store all virtual machine images on the SAN. Each physical server would run 3 VMs, and in the case of a physical server failure, we would manually switch over the other 3. These are all internal servers; while important, they can tolerate some amount of downtime (say < 1h), which keeps the cost and complexity associated with HA down. I now need to choose the SAN to be used for the setup, on a low budget. We currently have about 2 TB of data, but of course I want to be able to grow, do backups of VM snapshots on other drives and remove them to a different location, etc. So what I would like to know is:

      - Which are the must-have features for this setup, without which using a SAN is not worth it?
      - We are mostly a Dell shop, so I have been looking at the EqualLogic PS4000E High Availability model. Any opinions, anecdotes, or bad experiences with this model? (This is one of the few models which could accommodate our existing disks from the physical servers.)
      - If you can recommend something that is not Dell but has better value, I would most definitely consider it. Caveats, things to look out for?


  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers and (Debian and Ubuntu) Linux and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks, as well as to have an overview of the general activity on the servers. On Windows, I use Windows Performance Monitor (perfmon) for this. I set up these counters:

    For bottlenecks:
      - Processor utilization: System\Processor Queue Length
      - Memory utilization: Memory\Pages Input/Sec
      - Disk utilization: PhysicalDisk\Current Disk Queue Length\driveletter
      - Network problems: Network Interface\Output Queue Length\nic name

    For general activity:
      - Processor utilization: Processor\% Processor Time_Total
      - Memory utilization: Process\Working Set_Total (or per specific process)
      - Memory utilization: Memory\Available MBytes
      - Disk utilization: PhysicalDisk\Bytes/sec_Total (or per process)
      - Network utilization: Network Interface\Bytes Total/Sec\nic name

    (More information on the choice of these counters at: http://itcookbook.net/blog/windows-perfmon-top-ten-counters) This works really well: it allows me to look in one place and identify most common bottlenecks. So my question is: how can I do something equivalent (or just very similar) on the Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure if nmon allows me to set up the above counters; maybe Linux and AIX simply do not expose exactly the same measures. If so, which ones should I choose and why? If nmon is not the tool to use for this, then what do you recommend?
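
    As a rough mapping, the sysstat tools expose close equivalents of most of those counters; a hedged sketch (Debian/Ubuntu package "sysstat"; the 5-second intervals are arbitrary):

        # run queue ~ System\Processor Queue Length ('r' column), plus paging ('si'/'so')
        vmstat 5

        # per-device utilisation and queue size ~ the PhysicalDisk counters ('avgqu-sz', '%util')
        iostat -x 5

        # per-NIC throughput ~ Network Interface\Bytes Total/sec
        sar -n DEV 5

        # free memory ~ Memory\Available MBytes
        free -m

    For continuous collection rather than interactive use, enabling the sadc data collector (the sysstat cron job) gives perfmon-style historical logs queryable with sar -f, which is the closest analogue to a perfmon log set.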


  • IIS 7.5 stops serving requests for no apparent reason

    - by Steffen
    We're running a site on 4 virtual Win 2008 R2 64-bit servers. This is the only site on the IIS, and we use Windows Network Load Balancing to share the load between the 4 virtual servers. We've used these virtual servers for approximately a week, and we're starting to see some issues. For no apparent reason IIS stops serving pages, and doesn't even respond with an error: upon requesting a page from the server, the browser just waits infinitely (or until it decides to give up client-side). Sometimes an iisreset fixes the issue; other times we have to reboot the entire virtual server.

    There are no traces in the event log of why this happens, and there are no traces in our application's exception log either. Furthermore, this happens even when there's a very small load on the server, so it doesn't seem to be flooded with requests. So frankly I'm at a loss here - I have no idea where to start debugging this issue :-( I'm quite certain we never had these issues on our physical servers; however, they were running Win 2003 32-bit, so there are quite a few differences between them and the virtual ones (which obviously makes it difficult to tell what exactly causes this).
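
    A hedged first step for the next hang: IIS 7.5 can report which requests are currently executing even while it appears wedged, using only built-in tooling on the affected server:

        rem list in-flight requests - very large "Time Elapsed" values point at the hung handler
        %windir%\system32\inetsrv\appcmd list requests

        rem worker process state, in case the app pool itself has stalled
        %windir%\system32\inetsrv\appcmd list wp

    If requests pile up with huge elapsed times, capturing a memory dump of the listed worker process (w3wp.exe) during the hang gives something concrete to analyze, which beats guessing from empty event logs.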


  • How to schedule automatic (daily) snapshots of AWS EC2 Windows Instance?

    - by Stanley
    I have some Windows servers hosted on Amazon EC2. Some run Windows Server 2003 and others run Windows Server 2008. These are EBS-backed instances, and most of them also have some additional EBS volumes attached. We want to schedule a daily snapshot of the Windows machines (and also of the attached EBS volumes) to S3, so that we have daily backups available. One would think that this is a very common requirement and would be made available via the AWS Management Console, but alas, it is not.

    What approaches are available? How do I schedule daily snapshots on our Windows servers? There are several scripting examples available online for Linux, but not so much for Windows. I have had a look at http://sehmer.blogspot.com/2011/04/amazon-ec2-daily-snapshot-script-for.html as well as https://github.com/ronmichael/aws-snapshot-scheduler. Has anyone used one of these approaches, and does it work?

    I have also considered a service like Skeddly, which seems inexpensive at first glance, but when you look at using it for several servers the price soon escalates to the point where it seems a better option to create your own solution, as you can then apply it to new servers in the future; with Skeddly we'd pay for each server. How do we schedule daily snapshots of our Windows instances?
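
    A minimal self-rolled sketch using the (2011-era) EC2 API command line tools, which also run on Windows; the volume ID and script path are placeholders, and the tools must already be configured with account credentials:

        rem snapshot one EBS volume; repeat (or loop) per attached volume
        ec2-create-snapshot vol-12345678 -d "daily backup"

        rem register the wrapper script as a daily task at 02:00
        schtasks /create /tn "EBS daily snapshot" /sc daily /st 02:00 ^
            /tr "cmd /c C:\scripts\snapshot-volumes.cmd"

    Two caveats worth noting: snapshots of a running Windows volume are crash-consistent rather than application-consistent (the script and scheduler linked above exist largely to handle flushing and retention), and old snapshots are not pruned automatically, so the wrapper needs an ec2-delete-snapshot pass as well.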


  • Why Does DreamWeaver CS5 Discriminate between File Extensions, Even After Modding Mime Types!?

    - by Sam
    Hi folks. Even after I forced Dreamweaver CS5 to allow opening of the .ast extension as a MIME type of php5, which Dreamweaver now opens and colors correctly as described here, I still have trouble figuring out why it discriminates between the two file extensions!

    Symptoms: external files & Design view. I have a file foo.php which php-includes other files (e.g. the php-combined css.php and js.php). Now, when opening foo.php, all functions work perfectly: the external (included) php files are all recognised correctly. However, when I rename foo.php to foo.ast and open it again, it does not recognise the file extensions anymore in the top bar, and I lose the Design/Live View functionality. When I change foo.ast back to foo.php, all works again! Anyone have any clues as to why there remains a difference between one extension and the other?

    Note 1: I have added the .ast extension, next to .php, in these four files:

        1. C:\Users\Sam\AppData\Local\VirtualStore\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
        2. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
        3. C:\Users\Sam\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration\Extensions.txt
        4. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\Extensions.txt

    Note 2: sometimes even .php files do not want to show in Design view or Live view. Could this be caused by a corrupted installation?


  • What is the best way to create a failover cluster for my IIS website?

    - by ObligatoryMoniker
    Our eCommerce website www.tervis.com currently runs on two servers:

      - SQL server: SQL Server 2005 x86 on Windows Server 2003 Standard x86, with a single dual-core processor and 4 GB of memory
      - IIS server: Windows Server 2008 Web edition x64, with dual quad-core hyper-threaded processors and 32 GB of memory

    Tervis.com's revenue has steadily grown to the point where we need redundant servers deployed with a failover mechanism, so that we do not have any downtime. Because the SQL server is so underpowered compared to the web server, my thought was to purchase:

      - 2 x SQL Server 2008 R2 Web edition x64 single-processor licenses
      - 2 x Windows Server 2008 R2 Web Edition licenses
      - 1 x new physical dual quad-core 32 GB server
      - 1 x F5 load balancer

    I need the Windows Server 2008 R2 Web Edition licenses so that I can run SQL and IIS on the same box on both of these servers. The thought is to run this as an active/passive failover cluster that could be upgraded to an active/active cluster if we purchased the additional SQL licensing. The F5 load balancer would serve as the device that monitors the two servers and, if the currently active one stops responding, fails over to the other server. To be clear, this is not Windows clustering, but simply using a load balancer to fail over between two computers, so that you have a cluster in the general sense.

    Is this really the best way to accomplish what I need? Is there some way to leverage the old Server 2003 SQL server to function as the device that funnels HTTP requests to the appropriate active server and then fails over if a problem occurs? Is there any third-party clustering software that might help me accomplish this in a simpler fashion?


  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL, and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...). If the Zabbix server in server room A goes down, the one in server room B becomes active (and vice versa); ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN.

    Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch the master node to (if available) the one in the same server room as the active Heartbeat/Corosync node? And, more challengingly, is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?

    Theories: Most likely I can get Corosync to run something that logs in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that both the Zabbix service and its MySQL service would migrate as one if possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?
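
    The "migrate as one" idea in the last paragraph is expressible directly in Pacemaker as a colocation constraint; a heavily hedged sketch in crm shell syntax (resource names are invented for illustration, and this assumes the MySQL side is represented as a cluster resource rather than left entirely to Galera):

        # zabbix-app: the Zabbix server; mysql-svc: the service IP in front of the local DB node
        crm configure colocation zabbix-with-db inf: zabbix-app mysql-svc
        crm configure order db-before-zabbix inf: mysql-svc zabbix-app

    With a hard (inf) colocation, whichever room holds the DB resource also receives the Zabbix resource, which covers the first direction of the question. The reverse direction (Galera telling Corosync to move) has no native hook: Galera is multi-master and has no "active node" concept of its own, so the usual approach is to let the cluster manager own both decisions and treat the DB's location as the single source of truth.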


  • HP Proliant DL380 G4 - Can this server still perform in 2011?

    - by BSchriver
    Can the HP ProLiant DL380 G4 series server still perform at a high level in the 2011 IT world? This may sound like a weird question, but we are a very small company whose primary business is NOT IT related, so my IT dollars have to stretch a long way. I am in need of a good web and database server. The load and demand for a while will be fairly low, so I am not looking for, nor do I have the money for, a brand new HP DL380 G7 series box at $6K. While searching around today I found a company in ATL that buys servers off business leases and then strips them down to parts. They clean, check and test each part, and then custom "rebuild" the server based on whatever specs you request. The interesting thing is they also provide a 3-year warranty on all the servers they sell. I am contemplating buying two of the following:

        HP ProLiant DL380 G4
        - Dual (2) Intel Xeon 3.6 GHz 800 MHz 1 MB cache processors
        - 8 GB PC3200R ECC memory
        - 6 x 73 GB U320 15K rpm SCSI drives
        - Smart Array 6i card
        - Dual power supplies
        - Plus the usual cdrom, dual nic, etc...

    All this for $750 each, or $1500 for two pretty nicely equipped servers. The price then jumps on the next model up, the G5 series, from $750 to around $2000 for a comparable server. I just do not have $4000 to buy two servers right now. So back to my original question: if I load Windows 2008 R2 Server and IIS 7 on one of the machines, and Windows 2008 R2 Server and MS SQL 2008 R2 Server on the other, what kind of performance might I expect to see from these machines? The fact is this series is now 3 versions behind the G7s, and this series of server was built when Windows 2000 Server was the dominant OS and Windows 2003 Server was just coming out. If you are running Windows 2008 R2 Server on a G4 with similar or lesser specs, I would love to hear what your performance is like.


  • will heavy network traffic affect other connections on HP ProCurve V1810-48G?

    - by nn4l
    I have an HP ProCurve V1810-48G switch with a few servers connected to it (everything in one rack). The switch is practically in its default configuration. During copying of a few hundred GBytes of data from server_a to server_b (using tar cf - data | ssh server_b 'cd myhome; tar xf -'), essentially saturating the network capacity between those two servers, I noticed network-related error messages on the console of server_c - as if server_c were no longer able to send/receive traffic to server_d. After cancelling the copy command, everything was normal again.

    I would understand this if the network connection used a shared resource, for example if server_a and server_c were in one datacenter, server_b and server_d in another, and the two datacenters were connected with a 100 MBit line. But all of the mentioned servers are connected to the same switch and are located in the same IP network. I always thought that a connection between two servers on one switch would not affect any other server connected to the switch. It is also possible that the network-related error messages are caused by something else - but I can't risk a network problem for any other system on this switch. Please advise.


  • Installing Windows Management Framework 3.0 basically destroyed WMI, how can I fix it without reinstalling the O.S.?

    - by Massimo
    Related, of course, to this question. Before discovering it was somewhat... dangerous, I installed Windows Management Framework 3.0 on a number of Windows Server 2008 R2 SP1 servers, and WMI got completely trashed on all of them. On a normal server, the WMI namespace tree (as seen in Server Manager - Configuration - WMI Control) is fully populated; after installing WMF 3.0, everything except WMF 3.0's new features is gone (screenshots not reproduced here). Needless to say, nothing seems to work anymore on those servers.

    And no, this is not due to some strange installation error: this happened on three servers which were working perfectly before installing WMF 3.0, and on all of them the installation completed successfully. Admittedly, one of them had a somewhat complex setup (various System Center products and SQL Server instances)... but two of them are just plain standard domain controllers which do nothing else at all. How can I fix this mess without having to reinstall the O.S. on these servers? And why did it happen in the first place?
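
    For the repair attempt, Windows ships first-aid tooling for the WMI repository; a hedged sketch (built-in commands on 2008 R2, but no guarantee they restore namespaces removed by WMF 3.0's MOF changes - worth testing on one box first):

        rem check repository consistency
        winmgmt /verifyrepository

        rem attempt an in-place rebuild of an inconsistent repository
        winmgmt /salvagerepository

        rem last resort: recompile the MOF files registered under system32\wbem
        cd /d %windir%\system32\wbem
        for /f %s in ('dir /b *.mof *.mfl') do mofcomp %s

    The mofcomp loop re-registers whatever providers left their MOFs in system32\wbem; providers that register elsewhere (System Center components often do) would need their own MOFs recompiled, or the product repaired separately.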


  • Can't access Port 80 from external

    - by dewacorp.alliances
    Hi there. I have a configuration like this:

        NETGEAR MODEM -> LINKSYS ROUTER -> SERVERS

    The modem is set up as a bridge, and all the traffic is controlled by the router. Prior to this setup, I could access websites from outside (port 80), plus the Exchange servers (mail) and HTTPS. But now, with this configuration, I can only send/receive using the Exchange servers and access OWA (Outlook Web Access, port 443)... and no internal websites from outside. This is my config for the LINKSYS ROUTER:

        Application  | Start | End | Protocol       | IP Address
        MS Exchange  | 25    | 25  | Both (TCP/UDP) | 192.168.100.8
        Internets    | 80    | 80  | Both (TCP/UDP) | 192.168.100.11
        SSL          | 443   | 443 | Both (TCP/UDP) | 192.168.100.8
        Exchange     | 110   | 110 | Both (TCP/UDP) | 192.168.100.8

    192.168.100.11 is an Ubuntu web server running Apache, which uses the virtual host names (extranet, cms, test) to redirect to the different servers. As you can see, the home internet connection is on a single public IP address. Now, when I test this scenario on the internal network it works nicely: if I type extranet.XXX.local it goes to the right application, and if I try CMS.XXX.local, again it goes to the right one. I also asked the ISP, just in case they were blocking inbound port 80 for some unknown reason; they said no. So I don't understand why this happens. I suspect the configuration between the MODEM and the ROUTER, but I couldn't work out what it is. I don't have documentation of the previous settings, and I don't know if there is a port I need to open as well. I appreciate your comments.
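
    A quick way to separate "router isn't forwarding" from "server isn't answering" in a setup like this, testable from any outside host (the public IP is a placeholder):

        # does anything answer on the forwarded ports from the outside?
        nmap -Pn -p 25,80,110,443 203.0.113.10

        # if 80 is open, check what the Linksys actually forwards it to;
        # the Host header selects the vhost, since the .local names don't resolve externally
        curl -sI http://203.0.113.10/ -H 'Host: extranet.XXX.local'

    The Host header matters because the Ubuntu box serves name-based virtual hosts: hitting the bare IP lands on Apache's default vhost, which can look like "the site is down" even when forwarding works. And since 443 reaches 192.168.100.8 while 80 should reach .11, a filtered port 80 alongside an open 443 would point squarely at the forwarding rule (or the modem bridge) rather than at Apache.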

