Search Results

Search found 1795 results on 72 pages for 'veritas cluster'.

Page 50/72

  • How to correctly set up iWARP? Preferably on loopback

    - by ajdecon
    iWARP is a protocol for doing remote direct memory access (RDMA) on top of TCP/IP, so that it can work with Ethernet and other network types as opposed to InfiniBand. It works with many of the standard IB interfaces - the IB verbs, for example - so it's all pretty transparent. I'm doing some IB-verbs programming (mostly for the sake of learning how they work), and it would be wonderfully convenient if I could use iWARP to do RDMA over my loopback interface, so that I could test some of my code without getting on our IB-connected cluster. :-) But I cannot figure out how to get a "local development environment" set up: I'm not aware of any tutorials for setting up iWARP from scratch on a server or a network interface. Can anyone give me a tutorial or point me in the right direction? The environment is Fedora 16 running in VirtualBox.
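
    Fedora 16's kernel predates this, but on current kernels (5.3 and later) the in-tree soft-iWARP driver (siw) can attach an RDMA device to any netdev, including loopback, which gives exactly this kind of local ibverbs test environment. A minimal sketch, assuming the siw module, iproute2's rdma tool and libibverbs-utils are installed (the device name siw_lo is arbitrary); run as root:

      #!/usr/bin/env python3
      """Attach a soft-iWARP (siw) RDMA device to loopback for local
      ibverbs testing.  Assumes a 5.3+ kernel with the siw module,
      iproute2's rdma tool and libibverbs-utils installed."""
      import subprocess

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      run(["modprobe", "siw"])                              # software iWARP driver
      run(["rdma", "link", "add", "siw_lo", "type", "siw",  # arbitrary device name...
           "netdev", "lo"])                                 # ...bound to loopback
      run(["ibv_devices"])                                  # siw_lo should now be listed

    Once the siw_lo device shows up in ibv_devices, ordinary ibverbs code can run against it unmodified.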


  • Updated: NLB 2 Windows Server 2003 Servers - Looking to Hire SysAdmin to solve!

    - by Paul Hinett
    I need to configure Windows NLB on 2 dedicated servers I have. My main machine has been running for some time, with several domain names pointing to the server's primary IP address. Both servers have 2 NICs installed, and both have several secondary public IP addresses available if needed. What IP address should I use for the cluster IP, and does it need to be added to the IP address list of the public NIC on both servers? What IP addresses do I use for each host's dedicated IP? Please help, this is driving me nuts... I've taken down the server twice by accident today! UPDATE: Looking to hire a Windows SysAdmin to solve this! I have updated my question; I would like to hire a trusted Windows SysAdmin to take care of this for me, preferably today. Can anyone help and provide some credentials please? Thank you in advance!


  • Weird Windows 2003 MSDTC and SQL 2005 issue

    - by seagull surfer
    Scenario: Windows 2003 SP2 x64 Enterprise Edition, SQL 2005 SP2 CU9 x64 Enterprise Edition. After restarting the resource groups on a two-node active-active cluster, 3 of the SQL 2005 instances start up fine. The 4th one starts up but begins throwing the following error: "Enlist operation failed: 0x8004d00e (XACT_E_NOTRANSACTION). SQL Server could not register with Microsoft Distributed Transaction Coordinator (MS DTC) as a resource manager for this transaction. The transaction may have been stopped by the client or the resource manager." MSDTC is fine, since the other 3 function normally. The only way to "fix" it is to take the 4th instance offline and bring it online again. Is there any way to fix this enlistment without restarting?


  • Complete solution to upgrade PostgreSQL on debian production server

    - by Daimon
    I'm using Debian 6 (Squeeze) in production for a couple of websites. I decided to use the postgresql backports so that I could use PostgreSQL 9.0 features. I thought that it would remain 9.0 and receive updates to that major version. Unfortunately, the Squeeze backports were updated to PostgreSQL 9.1, so I probably won't receive updates to 9.0. I'm planning to upgrade to 9.1, but I know it isn't done automatically. I've read about the official pg_upgrade and Debian's pg_upgradecluster, but I would appreciate a complete guide to the upgrade. What are the steps (first apt-get install postgresql, then pg_upgradecluster, then remove the old cluster)? A list of steps would be nice. What are the possible failure scenarios? How do I prepare for failures and react to them? I can only stop the database for a couple of hours, so I want to be prepared.
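
    A rough sketch of that sequence using the postgresql-common tooling (the cluster name main and the package name are assumptions; drop the old cluster only after the applications have been verified against 9.1):

      #!/usr/bin/env python3
      """Rough sketch of a Debian 9.0 -> 9.1 upgrade with the
      postgresql-common tools; package and cluster names are assumptions."""
      import subprocess

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      # 1. Install 9.1 alongside the existing 9.0 packages.
      run(["apt-get", "install", "-y", "postgresql-9.1"])

      # 2. pg_upgradecluster stops the 9.0 cluster and migrates its data
      #    and configuration into a new 9.1 cluster.
      run(["pg_upgradecluster", "9.0", "main"])

      # 3. Inspect the result; drop the old cluster only after verification.
      run(["pg_lsclusters"])
      # run(["pg_dropcluster", "--stop", "9.0", "main"])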


  • SQL server 2000 reporting bad values to ASP.Net Application

    - by Ben
    I have an instance of SQL Server 2000 (8.0.2039) with a rather simple table. We recently had users complain about an application I wrote returning bad values for some of the dates in the database. When I query the table directly via Server Management Studio, it returns the correct values; however, the identical queries from my application report the wrong values, but only for a couple of dates. I have been over the code, and it is solid. If the error were in the code, all of the dates reported should be wrong. I have also run the code against an identical test database, and everything is reported properly. I believe the problem may lie in the SQL instance itself, which is why I am posting on Server Fault. My question is, has anyone heard of a database reporting bad (incorrect) date values when queried via a web application? It should be noted that this particular server was once manually rebuilt after having a cluster clean run on it.


  • Design Question

    - by dturner71
    Can I have two independent Connection Servers attached to the same vCenter server? Here's my scenario. I'm setting up View 4 to provide desktops to two separate Windows domains that are on different IP subnets separated by a firewall. One cluster of physical servers, one vCenter server, linked clones. As I understand it, the View Connection Server has to be a member of a Windows domain in order for QuickPrep to work. So the way to provide desktops to both Windows domains is to have a Connection Server in each one, right? Then open ports in the firewall so the Connection Server on the other subnet can communicate with vCenter. Any reason why this won't work? Or is there a better way to accomplish it?


  • SGE: downtime planning

    - by mousee
    I need to plan downtime for maintenance of my environment (or some part of it), which is managed by Sun Grid Engine. Is it possible to use backfill information (which I have) to tell the grid engine to schedule only those jobs on the cluster that are able to finish by, say, 10 am the next day? Can I then rely, at 10 am, on all compute nodes being clean - jobs only queued, nothing running or scheduled - so that I can start maintenance? Thank you for your time. mousee
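
    I can't speak to backfill-aware draining, but a blunt swapped-in alternative is to disable the queues far enough ahead of the window that anything still running can finish, then poll until nothing is left in the running state. A minimal sketch using standard qmod/qstat (the queue name all.q is an assumption about your setup):

      #!/usr/bin/env python3
      """Blunt queue-drain sketch for SGE: disable the queue ahead of the
      maintenance window, then wait until no jobs are running.  The queue
      name all.q is an assumption."""
      import subprocess
      import time

      QUEUE = "all.q"

      # Stop new dispatches; queued jobs stay queued, running jobs keep running.
      subprocess.run(["qmod", "-d", QUEUE], check=True)

      # Poll until qstat reports no running jobs.
      while True:
          out = subprocess.run(["qstat", "-s", "r"], capture_output=True,
                               text=True, check=True).stdout.strip()
          if not out:
              print("No running jobs left; safe to start maintenance.")
              break
          time.sleep(300)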


  • How would you measure the amount of atmospheric dust in a server room?

    - by Tom O'Connor
    We've been advised by our tape library vendor that one of the reasons we might be seeing lots of errors is if our server room is particularly dusty. It doesn't look dusty, but that's not to say it's not there. We've got an environment sensor cluster which measures Temperature, Airflow and Relative Humidity. I should probably point out that the low-hanging fruit solution I came up with is to use Sellotape (scotch tape) in a loop, one side stuck to the server cabinet, the other side free-hanging. I've also put a couple of other tape loops by the exit and intake fans of the hardware (not blocking airflow, naturally). How can we (electronically, ideally) measure dust levels?


  • LAMP Server without a single point of failure + Global Server Load Balancing?

    - by José Nobile
    I want to implement a LAMP server (Linux, Apache, MySQL, PHP) without a single point of failure and with global server load balancing. I have a server in Cali, Colombia, and another server will be installed in Melbourne, Australia; users in the Americas would use the Cali server, and users in Europe, Asia, Africa or Oceania the Melbourne server. If either server fails (or its load is excessively high), the other must answer all requests. The MySQL data, the PHP files and any configuration on both servers must stay in sync. I have read about Google's DNS servers 8.8.8.8 and 8.8.4.4 and anycast, and also about MySQL semisynchronous replication and MySQL Cluster, but what about other things, such as crontabs and the server configuration? The solution can't depend on APNIC or BGP, only open source software running on Linux.
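
    MySQL replication covers the data, but crontabs and configuration still need their own synchronisation. A configuration-management tool (Puppet, Chef, csync2, ...) is the usual answer, but a minimal push sketch with rsync over SSH looks like this (the peer hostname and path list are placeholders):

      #!/usr/bin/env python3
      """Minimal crontab/config push sketch between the two nodes using
      rsync over SSH.  The peer hostname and path list are assumptions."""
      import subprocess

      PEER = "melbourne.example.com"
      PATHS = ["/etc/apache2/", "/var/www/", "/var/spool/cron/crontabs/"]

      for path in PATHS:
          # -a keeps permissions/ownership/timestamps, -z compresses,
          # --delete makes the peer an exact mirror of this node for that path.
          subprocess.run(["rsync", "-az", "--delete", path, PEER + ":" + path],
                         check=True)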


  • Is Software RAID1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over Ethernet. Currently I'm using CentOS 5.5, 64 bit. Since it's only about 300MB of data, I'm looking at mdadm RAID-1 with the GNBD device and a local hard disk, using the "--write-mostly" option so that reads are done from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain from using tmpfs, assuming there's enough RAM available?
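
    This is what mdadm's --write-mostly flag is for: devices listed after it are written to but read from only when necessary, so reads prefer the local disk. A minimal sketch, with /dev/sdb1 and /dev/gnbd0 as placeholder device names:

      #!/usr/bin/env python3
      """Sketch: RAID-1 of a local partition and an imported GNBD device,
      with the GNBD leg marked write-mostly so reads come from the local
      disk.  Device names are placeholders."""
      import subprocess

      LOCAL = "/dev/sdb1"    # local hard disk partition
      REMOTE = "/dev/gnbd0"  # imported GNBD device

      subprocess.run([
          "mdadm", "--create", "/dev/md0",
          "--level=1", "--raid-devices=2",
          LOCAL,
          "--write-mostly", REMOTE,   # devices listed after the flag get it
      ], check=True)

      # A write-intent bitmap plus --write-behind can further hide the
      # latency of the remote leg for writes.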


  • Rebuild an existing Rackspace server from scratch?

    - by Mojo
    In the process of working out kinks in a server build, is it possible to re-bootstrap a server from scratch, image and all? (Same flavor, say.) By that I mean without recreating the server, keeping its IP address if nothing else. I can't find a way to do this. It would have some advantages, I should think: It wouldn't decrement the 'server create' quota. The existing server would keep its IP address. One machine of a cluster could be rebuilt to a new image without having to change the IP address. (Maybe load balancers make IP addresses a moot point, but it still seems like a worthwhile task.)


  • Is there a faster way to deploy an OVA template?

    - by Luke
    I need to deploy the vSphere Server Appliance 5.1. I have vSphere Client running locally and my internet upload is capped at 3 Mbps. It says it's going to take about 200 minutes to upload. When selecting a URL as opposed to a local file, does vSphere Client download it locally and then upload, or does the server download the OVA directly? My goal is to avoid waiting 3 1/2 hours for this to upload. If specifying a URL isn't any faster, are there any other methods that would allow me to deploy from the datacenter instead of my office? We don't have any Windows VMs installed on our cluster, so unfortunately I don't have a Windows machine with a faster upload speed.
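
    One workaround, assuming you can get shell access to any machine (Linux or Windows) with a fast path to vCenter, is VMware's ovftool, which can fetch the OVA from a URL and push it to the cluster directly, so nothing has to cross the office uplink. A hedged sketch; the URL, names and the vi:// locator are placeholders:

      #!/usr/bin/env python3
      """Sketch: deploy an OVA with VMware ovftool from a host close to
      the datacenter so the upload never crosses the office uplink.  The
      URL, names and vi:// locator are placeholders."""
      import subprocess

      OVA_URL = "http://example.com/vcenter-server-appliance-5.1.ova"
      TARGET = "vi://administrator@vcenter.example.com/DC1/host/Cluster1"

      subprocess.run([
          "ovftool",
          "--acceptAllEulas",
          "--name=vcsa51",
          "--datastore=datastore1",
          OVA_URL,   # ovftool fetches the source itself...
          TARGET,    # ...and streams it to the cluster
      ], check=True)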


  • linux: accessing thousands of files in hash of directories

    - by 130490868091234
    I would like to know the most efficient way of concurrently accessing thousands of files of similar size on a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file being indexed. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99, and I place one file at the end of each directory, like ./00/00/00/file000000.ext up to ./99/99/99/file999999.ext. This seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
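
    For illustration, both the numeric fan-out described above and a hash-based variant (which needs no central counter and stays uniformly balanced) reduce to a few lines; a small sketch:

      #!/usr/bin/env python3
      """Two ways to map a file to a ./XX/YY/ZZ fan-out: the numeric
      scheme described above and a hash-based variant."""
      import hashlib
      import os

      def path_from_number(n, ext="ext"):
          """Counter-based layout: 123456 -> 12/34/56/file123456.ext"""
          s = "%06d" % n
          return os.path.join(s[0:2], s[2:4], s[4:6], "file%s.%s" % (s, ext))

      def path_from_name(name):
          """Hash-based layout: the three levels come from the first hex
          digits of an MD5 of the name, so placement needs no counter."""
          h = hashlib.md5(name.encode()).hexdigest()
          return os.path.join(h[0:2], h[2:4], h[4:6], name)

      if __name__ == "__main__":
          print(path_from_number(123456))   # 12/34/56/file123456.ext
          print(path_from_name("sample.ext"))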


  • Allow different headers on different servers using WFF

    - by Brian
    We've got multiple web servers configured in a cluster using Microsoft's Web Farm Framework. One of the things I like to do to help debugging is to create a header in IIS that identifies the server that handled the request. Unfortunately when I try to do this, WFF sets the headers to the same value on all the servers. Is there a way around this? I tried looking into using skipDirectives, but I can't find any documentation on it (other than a little bit showing how to use it to skip directories and bindings). If there is documentation on this, please link to it! I would like to be able to read up more on it in case I need to do other things as well.


  • Techniques to Monitor cron tasks?

    - by Tristan Juricek
    Are there good techniques for monitoring cron tasks over a cluster? We're starting to use cron to launch tasks at daily intervals. A few ideas for collecting the information: (1) add special application handling that logs information to some "network aware" place, like a DB; (2) build up a logfile system that transfers the cron log periodically to a central point for processing/querying (along with other possible log files). I'm wondering if people have had success doing things separately for cron versus other things, or if the tasks were integrated into a different approach completely. I'm leaning towards #2, but I'd like to know what more experienced folk might try out.
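
    For approach #2, a thin wrapper around each cron command that records start, exit status and duration to syslog sidesteps the transfer step, since rsyslog/syslog-ng can already forward to a central host. A minimal sketch (job paths are placeholders, and the central forwarding is assumed to be configured in the syslog daemon):

      #!/usr/bin/env python3
      """cronwrap.py - run a command from cron and record the outcome to
      syslog, e.g.:  0 3 * * *  /usr/local/bin/cronwrap.py /usr/local/bin/nightly-job.sh
      Paths are placeholders; shipping to a central host is left to the
      syslog daemon's forwarding configuration (rsyslog, syslog-ng, ...)."""
      import subprocess
      import sys
      import syslog
      import time

      def main():
          cmd = sys.argv[1:]
          if not cmd:
              sys.exit("usage: cronwrap.py command [args...]")

          syslog.openlog("cronwrap", syslog.LOG_PID, syslog.LOG_CRON)
          start = time.time()
          rc = subprocess.call(cmd)
          elapsed = time.time() - start

          level = syslog.LOG_INFO if rc == 0 else syslog.LOG_ERR
          syslog.syslog(level, "job=%s rc=%d elapsed=%.1fs" % (cmd[0], rc, elapsed))
          sys.exit(rc)

      if __name__ == "__main__":
          main()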


  • Disable RAID Controller

    - by B.Mr.W.
    I have some decent HP ProLiant servers that come with the "HP Smart Array P410i Controller" enabled. I am using these boxes to set up a Hadoop cluster, and I know RAID is a definite no-no for Hadoop, since the application itself takes care of data redundancy, and the extra intelligence provided by RAID won't be helpful and might hurt performance. I tried to disable the devices in the BIOS, and the box cannot even access the disks afterwards. So I am assuming the controller sits between the disks and the motherboard, and we have to turn it on and configure it for "level 0" or something like that. I am wondering what I should do to "disable" the RAID functionality so it will fit into the Hadoop environment.
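
    As far as I know the P410i has no true JBOD/pass-through mode, so the usual workaround is to expose each physical disk as its own single-drive RAID 0 logical drive. A hedged sketch using HP's hpacucli (the slot number and drive IDs are assumptions; list yours first with hpacucli ctrl slot=0 pd all show):

      #!/usr/bin/env python3
      """Sketch: expose each physical drive on a Smart Array P410i as its
      own single-drive RAID 0 logical drive with hpacucli, so Hadoop sees
      the disks individually.  Slot number and drive IDs are assumptions."""
      import subprocess

      SLOT = "0"
      DRIVES = ["1I:1:1", "1I:1:2", "1I:1:3", "1I:1:4"]

      for drive in DRIVES:
          subprocess.run(["hpacucli", "ctrl", "slot=" + SLOT,
                          "create", "type=ld", "drives=" + drive, "raid=0"],
                         check=True)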


  • Cheapest High Available Web Server [closed]

    - by xyz
    I would like to create a highly available setup (e.g. a small cluster) for a webserver, i.e. it will run Apache, PHP and MySQL. There will be 2-8 small websites running with only very little traffic and workload. High availability, however, is very important. I don't want to be dependent on one datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down, the user must experience no downtime, or only a minimum of downtime - and no data loss. I have considered Amazon AWS using their Elastic Load Balancing, since it is possible to buy 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However, this seems rather expensive. Using the AWS price calculator http://calculator.s3.amazonaws.com/calc5.html it totals $185/month in the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to make this HA setup? Best regards


  • Run disk error check on NTFS file?

    - by paulius_l
    I have a feeling that my system hard drive is dying, and a benchmark of the system drive during low system activity, compared with a benchmark of my backup drive, kind of reinforces it. Furthermore, there are some files which I just can't touch, because I get CRC errors and the hard drive activity spikes to 100%, with operating speeds of less than 1 MB/s, while working with such files. I haven't yet tried swapping the SATA cable, as I have read this might cause such problems. Anyway, I would like to run some tests on the specific clusters where those files I am interested in are stored. I don't want to do the full chkdsk because it takes a very long time. I would like to find either a utility which executes the disk check directly on the clusters where the file is stored, or a couple of utilities where one tells me the cluster locations and another can check just those locations. How do I check and possibly fix disk errors where the files I am interested in are stored? Edit: S.M.A.R.T. info:


  • glusterfs to replicate files to other servers

    - by sbrattla
    I've got multiple servers which all need to have the same content in /home. In other words, if the file /home/user1/test.txt is updated on server A, this needs to be replicated to all other servers in the cluster. Is it possible to use GlusterFS for this purpose? That is, let each server have a full copy of all data locally - which that server will be working on - and use GlusterFS solely to take care of replicating this data to the other servers? I'm not interested in combined storage; rather, I want all data on all machines, and only use GlusterFS to replicate it to the other machines.
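
    A replicated GlusterFS volume with one brick per server, mounted over /home on every machine, does roughly this - each node holds a full copy - with the caveat that applications must read and write through the GlusterFS mount point, not the brick directory, for replication to happen. A sketch with placeholder hostnames and brick paths:

      #!/usr/bin/env python3
      """Sketch: create a replicated GlusterFS volume with one brick per
      server and mount it over /home.  Hostnames and brick paths are
      placeholders; writes must go through the mount point, not the brick."""
      import subprocess

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      BRICKS = ["serverA:/export/home-brick",
                "serverB:/export/home-brick",
                "serverC:/export/home-brick"]

      # One full replica of the data on every server.
      run(["gluster", "volume", "create", "homevol",
           "replica", str(len(BRICKS))] + BRICKS)
      run(["gluster", "volume", "start", "homevol"])

      # On each server, mount the volume over /home from the local daemon.
      run(["mount", "-t", "glusterfs", "localhost:/homevol", "/home"])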


  • having trouble setting up ganglia on three machines

    - by Pieter Breed
    I am running Ubuntu 11.10. I have one machine with gmetad, gmond and the Ganglia web interface. When I browse the web interface, this machine picks up the local gmond output. I then added another machine, running only gmond. I didn't really change anything in the config, only the name of the cluster. This machine's output showed up in the web view. Then I tried to add a third machine, similarly to the second, but it's not showing up in the web view. I tried looking at syslog and running it as a daemon, but I'm not seeing anything suspicious there. Any tips for troubleshooting this?


  • Corosync - stopping the service crashes the server

    - by Antipop
    I am trying to set up a test cluster on a Xen server with 2 paravirtualized CentOS 5.4 machines. I am using Pacemaker + Corosync, and following the instructions found at http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf and other sites. Anyway, when I try to manually stop the corosync service, about 80% of the time the whole VM locks up with the message "Waiting for corosync services to unload" and I am forced to shut the machine down manually. The remaining 20% of the time, the VM keeps responding and keeps adding dots to the above message, but it won't actually stop the service. There aren't many resources on the internet about this particular error. Any ideas about this? Thanks in advance.


  • Failing RAM, or something else?

    - by Thanatos
    I have an IBM ThinkPad T43, currently running Windows XP. Programs were crashing and XP was blue-screening (more than usual) - it was basically unusable, but I couldn't get any informative error out of XP. I booted Ubuntu off a thumbdrive, which made it to the desktop, but as soon as I started to try to do anything, X segfaulted, along with several other services, followed quickly by kernel warnings and a kernel panic. I'm currently running Memtest86+ on this machine, which is spitting out numerous errors (16k over 3 passes, and counting). The failing areas are numerous, and look something like this: 0001055da4 - XX.X MB, etc. The addresses that fail seem to cluster around 0-20 MB, 250 MB, and, more rarely, 750 MB, 1000 MB, and 1200 MB. However, a lot (but not all) of the failing addresses that I've seen end in XXXXXXX?da4, where the ? is a 1 or a 5. The machine has two sticks of RAM, one 512 MB and one 1024 MB, with the 512 MB stick mapped to the lower addresses and the 1024 MB stick following. Is this indeed RAM failure, or should I consider other things before purchasing more RAM?


  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers are displaying the site/content fine and showing all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers are showing that we have an incomplete chain. We have tried the following to try to resolve this: upgraded haproxy to the latest 1.5.3; created a concatenated ".pem" file containing all the certificates (site, intermediate, with and without root); added an explicit "ca-file" attribute to the "bind" line in our haproxy.cfg file. The ".pem" file verifies OK using openssl. The various intermediate and root certificates are installed and showing in /etc/ssl/certs. But the checks still come back with an incomplete chain. Can anyone advise about anything else we can check, or any other changes we can make to try to fix this? Many thanks in advance... UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one: bind *:443 ssl crt /etc/ssl/domainaname.com.pem
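
    One more thing worth checking is what chain HAProxy actually sends on the wire, as opposed to what is in the .pem; openssl s_client from another host shows this directly. A small wrapper sketch (the hostname is a placeholder):

      #!/usr/bin/env python3
      """Sketch: ask the load balancer for the certificate chain it
      actually serves and count the certificates.  If only one comes
      back, the intermediates in the .pem are not being sent."""
      import subprocess

      HOST = "www.example.com"   # placeholder

      out = subprocess.run(
          ["openssl", "s_client", "-connect", HOST + ":443",
           "-servername", HOST, "-showcerts"],
          input="", capture_output=True, text=True).stdout

      certs = out.count("-----BEGIN CERTIFICATE-----")
      print("%s presented %d certificate(s) in the handshake" % (HOST, certs))

    As far as I know, the ca-file parameter on a bind line in HAProxy 1.5 is used for verifying client certificates, not for building the served chain; the served chain comes from the certificates concatenated into the crt .pem, in order (site certificate first, then intermediates).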


  • NAT and NGINX on the same server

    - by Morten
    I'm setting up a VPC cluster for my collaborative todo list application www.getdoneapp.com. To keep my servers on the private network I need a NAT server, so that the servers on the private network can connect to the internet to receive updates and whatnot. The NAT server will consume an elastic IP address, so I'm wondering if I can just have that NAT server run nginx to direct HTTP traffic to my internal servers. So the question is: is it a bad idea to run nginx and NAT on the same server, or should I go for consuming 2 elastic IP addresses?


  • Client side certificates in client browsers with unix server for management

    - by user146253
    We are currently running dedicated Unix servers for everything (web cluster, database, FTP, batch, ...) except for Microsoft Active Directory Certificate Services. The sole purpose of this Windows box is to provide client-side certificates for our clients' browsers. All our clients are required to install a client-side certificate in order to be able to access our website. Is there an alternative in the Unix space? The purpose is to make sure only the approved hardware of an approved client can access our website. I'm open to any solution that provides me with this level of security. We are, however, talking about thousands of certified computers, just so you can factor that into a proposed solution. Optionally, we would also like to be able to revoke access. With regards.
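
    A plain OpenSSL-based CA (or a wrapper such as easy-rsa) on one of the Unix boxes can issue client certificates and revoke them via a CRL that the web tier checks, which covers both requirements. A rough sketch of issuing and revoking one certificate (paths, subject names, the ca.conf file and the export password are placeholders, and the one-time CA bootstrap is assumed to be done already):

      #!/usr/bin/env python3
      """Rough sketch: issue and later revoke a client certificate with a
      plain OpenSSL CA.  Paths, subject names, ca.conf and the export
      password are placeholders; the CA bootstrap is assumed done."""
      import subprocess

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      CLIENT = "client0001"

      # Key and CSR for the client machine.
      run(["openssl", "genrsa", "-out", CLIENT + ".key", "2048"])
      run(["openssl", "req", "-new", "-key", CLIENT + ".key",
           "-subj", "/O=ExampleCorp/CN=" + CLIENT, "-out", CLIENT + ".csr"])

      # Sign with the CA; ca.conf points at the CA key, cert and index.
      run(["openssl", "ca", "-config", "ca.conf", "-batch",
           "-in", CLIENT + ".csr", "-out", CLIENT + ".crt"])

      # Bundle as PKCS#12 for import into the client's browser.
      run(["openssl", "pkcs12", "-export", "-inkey", CLIENT + ".key",
           "-in", CLIENT + ".crt", "-passout", "pass:changeit",
           "-out", CLIENT + ".p12"])

      # Revoking access later: revoke the cert and republish the CRL the
      # web servers check.
      run(["openssl", "ca", "-config", "ca.conf", "-revoke", CLIENT + ".crt"])
      run(["openssl", "ca", "-config", "ca.conf", "-gencrl", "-out", "ca.crl"])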

