Search Results

Search found 3295 results on 132 pages for 'solaris cluster'.

Page 30/132 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • DTrace Workshop in Wiesbaden

    - by uligraef
    DTrace is available not only in Solaris but also in a number of other operating systems. At the FraOSUG talk about DTrace it was decided to also hold a DTrace workshop. Details here: Workshop DTrace. The exact date has not been set yet; we are looking for a day that is neither a working day, a Sunday, nor a public holiday. The number of registrations received by April 17 will determine the day. Registration via: DTrace Workshop Doodle

    Read the article

  • Solaris 10 x86 mirror: making the second disk bootable in case of failure

    - by Kani
    I set up a mirror (RAID 1) with Solaris 10 on x86. Everything is OK. Now I'm trying to make the second disk bootable, that is: from GRUB, or in case disk1 fails. I edited /boot/grub/menu.lst:

        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris 10 9/10 s10x_u9wos_14a X86
        findroot (rootfs1,0,a)
        kernel /platform/i86pc/multiboot
        module /platform/i86pc/boot_archive
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot -s
        module /boot/amd64/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        # Make second disk bootable!!!!!!!
        title alternate boot
        findroot (rootfs1,1,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe

    But it is not working. At boot, when I select "alternate boot" from the GRUB menu, I get: Error 15: File not found. Also, how do I configure GRUB so that disk2 boots in case of an error on disk1? Additionally, I did (though not related to GRUB):

        eeprom altbootpath=/devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

    Here is the output of some commands that may help you:

        /sbin/biosdev
        0x80 /pci@0,0/pci108e,5352@1f,2/disk@0,0
        0x81 /pci@0,0/pci108e,5352@1f,2/disk@1,0

        ls -l /dev/dsk/c1t?d0s0
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t0d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@0,0:a
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

        more /boot/solaris/bootenv.rc
        setprop ata-dma-enabled '1'
        setprop atapi-cd-dma-enabled '0'
        setprop ttyb-rts-dtr-off 'false'
        setprop ttyb-ignore-cd 'true'
        setprop ttya-rts-dtr-off 'false'
        setprop ttya-ignore-cd 'true'
        setprop ttyb-mode '9600,8,n,1,-'
        setprop ttya-mode '9600,8,n,1,-'
        setprop lba-access-ok '1'
        setprop prealloc-chunk-size '0x2000'
        setprop bootpath '/pci@0,0/pci108e,5352@1f,2/disk@0,0:a'
        setprop keyboard-layout 'US-English'
        setprop console 'text'
        setprop altbootpath '/pci@0,0/pci108e,5352@1f,2/disk@1,0:a'

        cat /etc/vfstab
        #device         device          mount           FS      fsck    mount   mount
        #to mount       to fsck         point           type    pass    at boot options
        #
        fd              -               /dev/fd         fd      -       no      -
        /proc           -               /proc           proc    -       no      -
        #/dev/dsk/c1t0d0s1 -            -               swap    -       no      -
        /dev/md/dsk/d1  -               -               swap    -       no      -
        /dev/md/dsk/d0  /dev/md/rdsk/d0 /               ufs     1       no      -
        /devices        -               /devices        devfs   -       no      -
        sharefs         -               /etc/dfs/sharetab sharefs -     no      -
        ctfs            -               /system/contract ctfs   -       no      -
        objfs           -               /system/object  objfs   -       no      -
        swap            -               /tmp            tmpfs   -       yes     -

        df -h
        Filesystem             size   used  avail capacity  Mounted on
        /dev/md/dsk/d0         909G    11G   889G     2%    /
        /devices                 0K     0K     0K     0%    /devices
        ctfs                     0K     0K     0K     0%    /system/contract
        proc                     0K     0K     0K     0%    /proc
        mnttab                   0K     0K     0K     0%    /etc/mnttab
        swap                    14G   972K    14G     1%    /etc/svc/volatile
        objfs                    0K     0K     0K     0%    /system/object
        sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
        /usr/lib/libc/libc_hwcap1.so.1  909G  11G  889G  2%  /lib/libc.so.1
        fd                       0K     0K     0K     0%    /dev/fd
        swap                    14G    40K    14G     1%    /tmp
        swap                    14G    28K    14G     1%    /var/run
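    For reference, making the second half of a Solaris 10 x86 SVM mirror bootable usually takes two things: GRUB has to be installed in the second disk's own boot blocks, and the alternate menu entry should load the same kernel and boot archive as the primary entry rather than the failsafe miniroot. A sketch of both steps (device names taken from the question above):

        # Put GRUB stage1/stage2 on the second submirror so the BIOS can boot it directly
        installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

        # Make sure the boot archive is current
        bootadm update-archive

        # An "alternate boot" menu.lst entry would normally mirror the primary entry,
        # along these lines (sketch):
        #   title Solaris 10 (second disk)
        #   root (hd1,0,a)
        #   kernel /platform/i86pc/multiboot
        #   module /platform/i86pc/boot_archive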

    Read the article

  • HPC Cluster planning workflow?

    - by Veronica
    After three days of intensive Google searching, I have not found any high-level workflow for building a low-profile, cheap computing cluster (we are not interested in HA yet). This is just a front-end plus one node for now. We want to start small with Rocks Cluster, provide a web-based server for offering services, and then add nodes as our budget increases. We're a small company, so we don't have enough human resources to implement it smoothly. Here are some facts about our environment: our hardware is not constant (we will add nodes); our workload will vary (on the order of 200 MB to 1 TB); our software will change (scientific applications for data mining). Do you know of any visual workflow, worksheet, or chart describing the general steps necessary to begin our cluster planning?

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are setting up a cluster for our data warehouses for sure and are now in the process of deciding if our OLTP databases should go on the same one. The cluster will be active/active with database services running on one node and reporting and analytical services on the other node. From a technical standpoint I don't see an issue here. With the services being run on different nodes they shouldn't compete too heavily for resources. The only physical resource that may be an issue would be the shared disk space. Our environment is also quite small. Our biggest OLAP database at the moment is only about 40GB and our OLTP are all under 10GB. I see a potential political issue here as different groups are involved but I'm just strictly wondering if there would be any major technical issues that could arise from this setup.

    Read the article

  • Hyper-V cluster: one VM won't migrate

    - by Chris W
    We have a failover cluster built on 6 blades, each running Hyper-V. Each box is running Server 2008 R2. We've got a number of VMs running that all have the same basic config: VHD stored on a cluster shared volume, 2 virtual NICs (1 for the LAN connection and 1 for the SAN connection). All of our VMs will happily migrate between any of the blades apart from one single VM, which runs fine on its current blade but will not migrate to any other location. What could be the cause, and where should I look for a detailed error message? I can't seem to find much information logged in any of the logs. Edit: I know the usual culprit is mismatched resource names. We've already been there, with the NICs named differently on some of the blades. As far as we can tell, everything now looks to be identical on each bit of metal.

    Read the article

  • Can I use a Windows 2008 R2 cluster for file redundancy?

    - by JERiv
    I'm researching a server clustering architecture as a redundancy and backup solution for a client, and something that isn't made clear is whether I can use server clustering to replace a file server plus backup solution. Forgive my elementary understanding of server clustering, but suppose:

        2 sites (NJ, CA)
        Identical servers at each site, set up as remote-site cluster nodes with Windows Server 2008 R2 Enterprise
        Services: File, Terminal, AD, and maybe DNS

    Will the following be true?

        Files (including data drives) will be synced between the two servers, eliminating the need for third-party backup/mirroring software to sync/backup files.
        Also, supposing I use roaming profiles with folder redirection: how will client computers in the WAN access their data through the cluster (i.e. will they automatically choose the best route)?

    Read the article

  • OCFS2 Now Certified for E-Business Suite Release 12 Application Tiers

    - by sergio.leunissen
    Steven Chan writes that OCFS2 is now certified for use as a clustered filesystem for sharing files between all of your E-Business Suite application tier servers. OCFS2 (Oracle Cluster File System 2) is a free, open source, general-purpose, extent-based clustered file system which Oracle developed and contributed to the Linux community. It was accepted into Linux kernel 2.6.16. OCFS2 is included in Oracle Enterprise Linux (OEL) and supported under Unbreakable Linux support.
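    As a rough idea of what this looks like on an application tier node (a sketch only; the device and mount point names are made up, and the o2cb cluster configuration must already be in place on every node):

        # Format the shared LUN once, from any node
        mkfs.ocfs2 -L ebsapps /dev/sdb1

        # On every application tier node: bring up the cluster stack, then mount
        service o2cb start
        mkdir -p /u01/apps
        mount -t ocfs2 /dev/sdb1 /u01/apps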

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl(). Segment operations such as creation and locking are typically single-threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 vmtasks threads are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future. Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86).

    The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245       6X
        X7560     64    1016    153       7X
        M9000    512    1196    206       6X
        T5240    128    2506    234      11X
        T4-2     128    1197    107      11X

        DISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265       6X
        X7560     64    1116    158       7X
        M9000    512    1165    152       8X
        T5240    128    2796    198      14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.)

    The following table separates the creation and destruction times:

        ISM, T4-2   before  after
        ---------   ------  -----
        create         702     64
        destroy        495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
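    A couple of quick ways to see this machinery from the outside (a sketch; PID 12345 is a made-up example of a process that attaches a large shared memory segment):

        # ISM and DISM mappings are labelled in pmap output, so you can confirm
        # which flavour a process is actually using
        pmap -xs 12345 | grep -i shmid

        # Watch the vmtasks helper threads (PID 8, as shown above) while a large
        # segment is being created or locked
        prstat -mL -p 8 1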

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it on both Solaris and Linux to measure the performance of applying this code to my application. The program works as follows: first, using the sbrk(0) system call, I take the base address of the heap region. Then I allocate 1.5 GB of memory using malloc, and use memcpy to copy 1.5 GB of content into the allocated area. Then I free the allocated memory. After freeing, I call sbrk(0) again to view the heap size. This is where I'm a little confused. On Solaris, even though I freed the allocated memory (nearly 1.5 GB), the heap size of the process remains huge. But when I run the same application on Linux, after freeing, the heap size of the process is back to what it was before the 1.5 GB allocation. I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to free the memory immediately after free(). Also, please explain why the same problem does not occur on Linux. Can anyone help me out with this? Thanks, Santhosh.
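    One way to watch this from outside the process is to compare the heap mapping before and after the free() (a sketch; "allocfree" is a hypothetical build of the test program that pauses after each stage):

        ./allocfree &
        pid=$!

        pmap -x $pid | grep heap    # heap size right after the 1.5 GB malloc
        # ...let the program reach the point after free(), then look again
        pmap -x $pid | grep heap    # on Solaris the brk-based heap typically stays large;
                                    # glibc on Linux serves a block this big via mmap and
                                    # unmaps it on free, so the heap shrinks back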

    Read the article

  • Solaris will continue to support Intel's Xeon processors; the head of the platform reveals the first details of the next update

    Solaris will continue to support Intel's Xeon processors. The head of the platform at Oracle reveals the first details of the next update. While passing through Paris, the head of Solaris at Oracle, Joost Pronk, confirmed that the OS, "at the heart of the strategy for the new engineered systems (Exadata, Exalogic and SPARC SuperCluster...), from the disks up to the applications", would continue to be developed to be compatible with both SPARC and Intel processors. "No matter what you are told, or what you read or hear elsewhere, I am telling you: Solaris will support SPARC and Intel's Xeons," assures the...

    Read the article

  • Oracle releases Solaris 11 for the SPARC and x86 architectures and bills it as the "first cloud OS"

    Oracle releases Solaris 11 for the SPARC and x86 architectures, and bills it as the "first cloud OS". Update of November 10, 2011, by Idelways. The great project of Sun, and then of Oracle, Solaris 11, is finally complete. The Unix implementation code-named "Nevada" is released for the SPARC and x86 architectures and is presented by its creators as "the first cloud OS". Oracle's communication around Solaris 11 follows in the wake of its...

    Read the article

  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-oriented distribution of illumos with KVM. Essentially it is like Solaris and inherits from OpenSolaris, so even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault. My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC: adding an authorisation to /etc/security/auth_attr and associating that authorisation with my user. I then added the following to my SMF manifest for the service:

        <property_group name='general' type='framework'>
          <!-- Allow to be restarted -->
          <propval name='action_authorization' type='astring'
                   value='solaris.smf.manage.my-server-process' />
          <!-- Allow to be started and stopped -->
          <propval name='value_authorization' type='astring'
                   value='solaris.smf.manage.my-server-process' />
        </property_group>

    And this works well when imported: my unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone... all I see in that section is this:

        <property_group name='general' type='framework'>
          <property name='action_authorization' type='astring'/>
          <property name='value_authorization' type='astring'/>
        </property_group>

    Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
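    One workaround (a sketch; the FMRI below is a placeholder for the real service name) is to set the authorizations directly in the SMF repository with svccfg, so they do not depend on how the manifest round-trips through export and import:

        # Set the authorizations on the running service (placeholder FMRI); depending on
        # where the property group lives you may need addpg/addprop first
        svccfg -s svc:/site/my-server-process setprop \
            general/action_authorization = astring: solaris.smf.manage.my-server-process
        svccfg -s svc:/site/my-server-process setprop \
            general/value_authorization = astring: solaris.smf.manage.my-server-process
        svcadm refresh svc:/site/my-server-process

        # Check what the repository actually holds
        svcprop -p general svc:/site/my-server-process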

    Read the article

  • In Solaris, how to monitor and auto-respond to critical events

    - by mamcx
    I have a website that randomly fails. It is running on OpenSolaris on Joyent. I have a monitoring service that alerts me when the site is down, but I want an "insider" tool that tells me why it happened. Is it because the CPU is too loaded? Out of memory? Which process failed? Is it possible to have a backtrace of that? Everything is running under the Solaris Service Management Facility. The web server is Cherokee, the database is MySQL and the language is Python/Django. I want the simplest setup to monitor that and auto-respond, i.e. restart the web server or the Django process in case of failure. I prefer a low-overhead tool. I don't need the fancy monitoring that some tools have, no need for graphs or SMS alerts. I only want to know what failed, restart it if possible (maybe up to n times), and have a log somewhere for when I check it.
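    Since everything is already under SMF, the restarter itself covers a lot of this; a minimal starting point (a sketch, with placeholder FMRIs for the Cherokee and Django services) is to query SMF when something dies:

        # Show services that are not running, why, and which process exited
        svcs -xv

        # Dig into one service and its own log (placeholder FMRI and log name)
        svcs -l svc:/network/http:cherokee
        tail /var/svc/log/network-http:cherokee.log

        # Once the cause is fixed, clear a service stuck in maintenance
        svcadm clear svc:/network/http:cherokee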

    Read the article

  • Oracle|Sun @ Oracle OpenWorld Tokyo 2012

    - by user13137902
    Oracle|Sun @ Oracle OpenWorld Tokyo 2012. Oracle OpenWorld Tokyo 2012 runs for three days, April 4-6. Oracle|Sun related sessions include:

        K1-01  4/4   9:00-11:15   ENGINEERED FOR INNOVATION
        S1-01  4/4  11:50-12:35   Oracle Engineered Systems Strategy
        S1-33  4/4  15:20-16:05
        S1-42  4/4  16:30-17:15
        K2-01  4/5   9:30-11:15   Extreme Innovation (CEO)
        G2-01  4/5  11:50-13:20
        S2-42  4/5  16:30-17:15   SPARC SuperCluster
        S2-53  4/5  17:40-18:25   Oracle E-Business Suite on SPARC SuperCluster
        S3-13  4/6  13:00-13:45   Sun ZFS Storage Appliance
        S3-21  4/6  14:10-14:55
        S3-33  4/6  15:20-16:05

    Recurring topics in the session descriptions include the Exa* engineered systems, SPARC SuperCluster T4-4, Oracle Optimized Solution, and the Sun ZFS Storage Appliance.

    Read the article

  • Unmounting a zfs pool while it is shared with sharenfs

    - by Ted W.
    I have a Solaris (OpenIndiana) system which is getting poor disk write performance. In order to enable the ZIL in this version of ZFS I need to add a line to /etc/system. This will not take effect until I've unmounted and remounted the zpool. The trick is that this pool is shared via NFS to about 200 other servers to host users' home directories. I can guarantee that no users will be accessing the disks during this period of maintenance, but I would like to avoid having to issue an unmount on 200 systems in order to unmount the disk on the Solaris box. My question is: with sharenfs, is it necessary to have all systems disconnected before unmounting the filesystem on the host? If it's possible, how do you go about it? I've already tried unmounting the normal way, and it reports the disk is busy. There is no lsof in Solaris, and pfiles (I think that's what it was) does not show anything obviously using the mounts.
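    For reference, the clients do not have to unmount anything first; a rough server-side sequence (a sketch, assuming the shared dataset is called tank/home) would be:

        zfs unshare tank/home       # stop the NFS share (or: zfs set sharenfs=off tank/home)
        fuser -c /tank/home         # Solaris stand-in for lsof: who is holding the mount busy
        zfs umount tank/home

        # ...apply the change, then bring it back
        zfs mount tank/home
        zfs share tank/home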

    Read the article

  • Bridging Virtual Networking into Real LAN on an OpenNebula Cluster

    - by user101012
    I'm running OpenNebula with 1 cluster controller and 3 nodes. I registered the nodes at the front-end controller and I can start an Ubuntu virtual machine on one of the nodes. However, from my network I cannot ping the virtual machine. I am not quite sure whether I have set up the virtual machine correctly. The nodes all have a br0 interface which is bridged with eth0. The IP addresses are in the 192.168.1.x range. The template file I used for the vmnet is:

        NAME            = "VM LAN"
        TYPE            = RANGED
        BRIDGE          = br0             # Replace br0 with the bridge interface from the cluster nodes
        NETWORK_ADDRESS = 192.168.1.128   # Replace with corresponding IP address
        NETWORK_SIZE    = 126
        NETMASK         = 255.255.255.0
        GATEWAY         = 192.168.1.1
        NS              = 192.168.1.1

    However, I cannot reach any of the virtual machines even though Sunstone says that the virtual machine is running, and onevm list also states that the VM is running. It might be helpful to know that we are using KVM as the hypervisor, and I am not quite sure whether the virbr0 interface which was automatically created when installing KVM might be a problem.
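    For reference, the bridge on each node has to own the physical interface outright, and the VM's lease has to land in the same subnet. A minimal manual bridge setup on a node looks roughly like this (a sketch; the node address 192.168.1.10 is an assumption, and persistent configuration belongs in /etc/network/interfaces):

        brctl addbr br0
        brctl addif br0 eth0
        ip addr flush dev eth0
        ip addr add 192.168.1.10/24 dev br0
        ip link set br0 up
        ip route add default via 192.168.1.1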

    Read the article

  • GlassFish cluster-targeted JDBC resource is not enabled

    - by Jin Kwon
    I have a GlassFish cluster. When I tried to add a node and an instance, the DAS logged a bunch of error messages saying Resource [ jdbc/xxxx ] of type [ jdbc ] is not enabled, followed by:

        [#|2012-11-14T12:07:04.318+0900|SEVERE|glassfish3.1.2|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=2803;_ThreadName=Thread-2;|java.lang.StackOverflowError
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:318)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
            at java.io.PrintStream.write(PrintStream.java:480)
            at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
            at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
            at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
            at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
            at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
            at java.util.logging.StreamHandler.flush(StreamHandler.java:242)
            at java.util.logging.ConsoleHandler.publish(ConsoleHandler.java:106)
            at java.util.logging.Logger.log(Logger.java:522)
            at com.sun.logging.LogDomains$1.log(LogDomains.java:372)
            at java.util.logging.Logger.doLog(Logger.java:543)
            at java.util.logging.Logger.log(Logger.java:607)
            at com.sun.enterprise.resource.deployer.JdbcResourceDeployer.deployResource(JdbcResourceDeployer.java:117)
            at org.glassfish.javaee.services.ResourceProxy.create(ResourceProxy.java:90)
            at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:507)
            at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:455)
            at javax.naming.InitialContext.lookup(InitialContext.java:411)
            at javax.naming.InitialContext.lookup(InitialContext.java:411)
            at com.sun.appserv.connectors.internal.api.ResourceNamingService.lookup(ResourceNamingService.java:221)

    The JDBC resource is OK and is targeted to the cluster. I've installed the JDBC driver on the new node. Can anybody help?
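    If the resource reference on the cluster target ended up disabled, it can usually be inspected and re-enabled from the DAS (a sketch; the cluster name c1 is a placeholder):

        # List the resource references known to the cluster and their enabled state
        asadmin list-resource-refs c1

        # Create the reference on the cluster if it is missing, or flip it to enabled
        asadmin create-resource-ref --target c1 jdbc/xxxx
        asadmin set clusters.cluster.c1.resource-ref.jdbc/xxxx.enabled=true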

    Read the article

  • How To Set Up A Load-Balanced High-Availability Apache Cluster On Windows

    - by bReAd
    Setting up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "single point of failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat; if one load balancer fails, the other takes over silently. The following setup is proposed:

        Apache node 1: webserver1.example.com (webserver1) – IP address: 192.168.0.101; Apache document root: /var/www
        Apache node 2: webserver2.example.com (webserver2) – IP address: 192.168.0.102; Apache document root: /var/www
        Load balancer node 1: loadb1.example.com (loadb1) – IP address: 192.168.0.103
        Load balancer node 2: loadb2.example.com (loadb2) – IP address: 192.168.0.104
        Virtual IP address: 192.168.0.105 (used for incoming requests)

    Currently there are many solutions for Linux machines, but I haven't found any for Windows, despite searching for a long time. How do I create the virtual IP in Windows, perform the monitoring, and make the load balancer listen on the virtual IP address?

    Read the article

  • Clustering/load balancing for cluster-unaware applications

    - by AaronLS
    Forgive me if I use any of these terms incorrectly. I am wondering if there is any kind of software that would allow me to "join" two computers together such that a cluster-unaware application could utilize their combined computing resources. By "cluster unaware" I mean an application that isn't designed to share work across multiple machines. My understanding is that clustering is enabled by the specific application through its architecture, such that messaging among multiple instances of the application coordinates the sharing of work. Instead, I am looking for something that enables clustering at the OS or virtualization level, so that any application could essentially be clustered. Failing that, I am also wondering about the following scenario: we have 3 different applications we will call A, B, and C, and 2 single-core computers. At any given time, let's say that any combination of those applications may be CPU intensive. In cases where only 2 of those apps are very active, one of them should be moved over to a different server. In a nutshell, some sort of dynamic, automatic shuffling of the applications' load. I have heard of virtual machines that can be migrated across physical machines while live, but I am wondering if this can be done automatically in response to an application's or VM's CPU activity?

    Read the article

  • Linux HA / cluster: what are the differences between Pacemaker, Heartbeat, Corosync, wackamole?

    - by Continuation
    Can you help me understand Linux HA? Pacemaker, Heartbeat, Corosync seem to be part of a whole HA stack, but how do they fit together? How does wackamole differ from Pacemaker/Heartbeat/Corosync? I've seen opinions that wackamole is better than Heartbeat because it's peer-based. Is that valid? The last release of wackamole was 2.5 years ago. Is it still being maintained or active? What would you recommend for HA setup for web/application/database servers?

    Read the article

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application:

        We will be using 4 machines running Microsoft Windows Server (most probably 2003).
        All four will always run our application, which is essentially a web service.
        Load balancing is "outsourced" - somebody else handles the distribution of the web requests among the servers.
        Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive.
        The DB data is stored on shared storage. No copying data between servers.
        Reads are done very frequently by many end users, and in rather small chunks of data.
        Writes are done much less frequently, by fewer users, and in very large bulks of data.

    Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application running at all times? And does PostgreSQL integrate neatly with MSCS at all?

    Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate data between a couple of DB servers. There are two issues with this option:

        Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch?
        Switching back: suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?

    Read the article

  • Virtual machines interconnection inside Proxmox 2.1 Cluster

    - by Anton
    We have 3 physical servers (each with 1 NIC) in different datacentres, all interconnected by an OpenVPN-bridged private network (10.x.x.x). Inside this network we have a fully functional 3-node Proxmox 2.1 cluster. So, the actual question is: is there any "proper" way to make a "global" local network (172.16.x.x) for all VMs inside the cluster, so that even if we move a VM from one node to another we can reach it by a static IP regardless of its physical location? BTW, we can't add a dedicated NIC to each server. Thanks in advance.

    EDIT: I have tried to make a separate OpenVPN bridge for 172.16.x.x; now I have two interfaces on each server:

        SRV1:
            openvpnbr1 - 172.16.13.1
            vmbr0      - 172.16.1.1
        SRV2:
            openvpnbr1 - 172.16.13.2
            vmbr0      - 172.16.2.1

    But now there is no connection between those interfaces:

        SRV1: ping 172.16.13.2
            From 172.16.1.1 icmp_seq=2 Destination Host Unreachable
        SRV2: ping 172.16.13.1
            From 172.16.2.1 icmp_seq=2 Destination Host Unreachable

    If I shut down the vmbr0 interfaces, then there is a connection between the servers over OpenVPN, but vmbr0 is used by Proxmox... Where am I wrong?

    Read the article

  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with stonith SBD in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2. They are configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1 MB disks available to the hosts via a multipath fibre-channel network. The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left and b) even if the switch were out for only 10-20 seconds, a reboot cycle of both nodes takes 5-10 minutes and all NFS locks would be lost. I tried increasing the SBD timeout values (to 10 s+ values, dump attached at the end), however a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to. Here is what I would like to know:

        a) Is SBD working as it should, killing nodes while 2 paths are still available?
        b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)?
        c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
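    For reference, one way to widen the SBD windows (a sketch; the device path and numbers are placeholders, and msgwait is conventionally about twice the watchdog timeout) is to re-create the slot headers with explicit timeouts and give the cluster a matching stonith-timeout:

        # Re-initialize the SBD headers with longer timeouts; this destroys existing
        # slot data, so do it only with the cluster stopped
        sbd -d /dev/mapper/sbd_disk1 -4 20 -1 40 create    # -4 watchdog, -1 msgwait

        # Confirm what the device now advertises
        sbd -d /dev/mapper/sbd_disk1 dump

        # Give Pacemaker at least msgwait seconds before it declares the fence failed
        crm configure property stonith-timeout=60s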

    Read the article

  • Will mmap use user CPU instead of sys CPU? (Solaris)

    - by Daniel
    When using mmap to allocate anonymous memory, we often set the start address to 0/NULL so that mmap figures out the starting address by itself. To find that start address, it has to walk the whole virtual address space looking for a hole big enough to hold the chunk of memory being allocated. I guess this is accounted as user CPU rather than sys CPU. If the virtual address space is fragmented, the time spent finding a starting address will consume more user CPU. Is my understanding correct?
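    A quick way to check where that time is actually charged on Solaris (a sketch; mmap_test is a hypothetical program that performs many anonymous mmap calls):

        # Overall split of real/user/sys time for the run
        ptime ./mmap_test

        # Per-syscall times and counts; time spent inside the mmap call itself
        # is accounted as system time, not user time
        truss -c ./mmap_test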

    Read the article

  • Using an i7 "gamer" CPU in an HPC cluster

    - by user1219721
    I'm running the WRF weather model. That's a RAM-intensive, highly parallel application. I need to build an HPC cluster for it. I use a 10 Gb InfiniBand interconnect. WRF doesn't depend on core count, but on memory bandwidth. That's why a Core i7 3820 or 3930K performs better than high-grade Xeons E5-2600 or E7. Universities seem to use the Xeon E5-2670 for WRF. It costs about $1500. The SPEC CPU2006 fp_rate WRF benchmark shows the $580 i7 3930K performing the same with 1600 MHz RAM. What's interesting is that the i7 can handle RAM up to 2400 MHz, which gives a great performance increase for WRF. Then it really outperforms the Xeon. Power consumption is a bit higher, but still less than 20€ a year. Even including the additional parts I'll need (PSU, InfiniBand, case), the i7 route is still 700€ per CPU cheaper than Xeon. So, is it OK to use "gamer" hardware in an HPC cluster, or should I go pro with Xeon? (This is not a critical application. I can handle downtime. I think I don't need ECC?)

    Read the article
