Search Results

Search found 3295 results on 132 pages for 'solaris cluster'.

Page 35/132 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • how to proxy SQL queries (INSERT, UPDATE, etc.)

    - by XakRu
    I have installed a MySQL cluster (Galera with MariaDB). Apache is installed as the application server, and HAProxy runs on the Apache host to proxy requests coming from PHP; in this case the cluster is set up for a Zabbix server. But I have run into deadlocks, so now I want to proxy write queries (INSERT, UPDATE, etc.) to the second server only, and SELECT queries to the second and third servers. I would be happy to see your suggestions. Please do not write "use mysql-proxy"; I want to see what other programs can proxy SQL requests. Scheme: http://www.gliffy.com/pubdoc/4474830/L.png
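
    One way to get this split without a SQL-aware proxy is to do it at the TCP level and let the application choose the port: HAProxy itself cannot inspect SQL, but it can expose one port that always goes to a single write node and another that balances reads. A minimal haproxy.cfg sketch; the node addresses and ports are placeholders, not details from the question:

      # writes: pin everything to one Galera node to avoid cross-node deadlocks
      listen mysql-writes
          bind 127.0.0.1:3307
          mode tcp
          server node2 10.0.0.2:3306 check

      # reads: balance across the remaining nodes
      listen mysql-reads
          bind 127.0.0.1:3308
          mode tcp
          balance roundrobin
          server node2 10.0.0.2:3306 check
          server node3 10.0.0.3:3306 check

    PHP (or Zabbix) would then use two connection strings, one per port. For true statement-level routing a SQL-parsing proxy such as MaxScale's read/write split router is the usual alternative.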

    Read the article

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I need some help with the decision-making process for setting up my cluster configuration, which consists of a line of Ubuntu 12.04 servers. We currently have a primary node, which resides in a US datacenter; we are going to use it for all of the seriously bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster using Virtualmin's cluster modules. Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:

    1. Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04 (currently IN USE, serving virtual hosts off a subdomain)
    2. AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
    3. P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 x 25 GB disks in a RAID configuration (one in use for the host operating system)

    My question is this: how can I integrate the secondary node (which will be the primary node, so to speak, in this smaller configuration) into a cluster with the other two servers for resource sharing, redundancy (HA), and NFS with the two RAID disks, without having to FORMAT the secondary node, start fresh by moving all my services onto a DRBD network drive or something similar, and then restore all of the active Virtualmin virtual hosts? The idea is that I want minimal downtime for the people currently being served from server2.mywebsite.com. From what I understand, all services need to live on NFS so that they can be mounted on demand and accessed by the other machine taking over (i.e. a Heartbeat + DRBD configuration), but my issue is that I already have all these services installed in their default directory structure. How can I most easily set up this NFS and HA system, move all my desired services onto the new drive, do it with minimal downtime, and avoid breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist of commands to get started would be great! Thanks!
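
    For the DRBD piece specifically, here is a minimal resource definition to give a feel for what is involved; this is a sketch only, and the hostnames, disks, and addresses are placeholders rather than details from the question:

      # /etc/drbd.d/web.res
      resource web {
          protocol C;                 # synchronous replication
          device    /dev/drbd0;
          disk      /dev/sdb1;        # spare backing partition on each node
          meta-disk internal;
          on server2 { address 192.168.0.2:7789; }
          on server3 { address 192.168.0.3:7789; }
      }

    A common migration pattern that avoids reinstalling anything: sync /dev/drbd0, mount it at a temporary path, copy each service's data over while the service is still running, then take a short per-service outage for a final rsync and bind-mount (or symlink) the default directory onto the DRBD-backed path.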

    Read the article

  • Sun webstack vs Installing PHP, MySQL, Apache individually

    - by Vincent
    Is it possible to install PHP, MySQL, and Apache individually on Solaris instead of installing them through a web stack? What are the advantages and disadvantages? I seem to frequently get a cURL error on Solaris when dealing with HTTPS sites: error:81072080:lib(129):func(114):reason(128). I have no clue why that error is occurring and thought upgrading to the latest PHP, MySQL, and Apache versions might solve it. At this point I am not even sure if it's a Solaris issue. Any advice? Thanks

    Read the article

  • OpenSolaris / Nexenta problems with NetXen 4-port NIC card (ntxn driver)

    - by ewwhite
    Hello, I'm running NexentaStor Enterprise on an HP ProLiant DL180 G6 server. The onboard NIC interfaces surface as igb0 and igb1 and work well. However, I've added an HP NC375T 4-port network card using the NetXen 3031 chipset. This card should be handled by the ntxn driver in the SUNWntxn package, but that results in "ntxn0: failed to map doorbell" messages upon boot, and the network interfaces don't show up. After some research, I found HP's driver package for the card. The release notes for the driver package state:

      This version of the Driver is supported only on Oracle Solaris 10 5/09 & 10/09. Oracle Solaris 10 5/09 & 10/09 contain an older version of NetXen P3 driver package called SUNWntxn. So, adding another version of NetXen P3 driver package using pkgadd command might result in conflicts with the NetXen driver binary & related files. Users are advised to uninstall native SUNWntxn driver package before installing the new package.

    The install completes, but I end up with a different set of errors when initializing the card:

      # ifconfig ntxn0 plumb
      ifconfig: cannot open link "ntxn0": DLPI link does not exist

    dmesg output:

      Jan 29 07:20:17 ch-san2 ntxn: [ID 977263 kern.warning] WARNING: Memory not available
      Jan 29 07:20:17 ch-san2 ntxn: [ID 404858 kern.notice] NOTICE: ntxn0: Mac registration error

    Trying to manually create the device files:

      root@ch-san2:/volumes# add_drv -i "4040,100" ntxn
      ("ntxn") already in use as a driver or alias.

    Updating the driver:

      root@ch-san2:/volumes# update_drv -f ntxn
      devfsadm: driver failed to attach: ntxn
      Warning: Driver (ntxn) successfully added to system but failed to attach

    Any ideas on how to get this driver working, or should I ditch the card and go with an Intel or something else?
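
    For reference, the uninstall-then-reinstall sequence those release notes call for would look roughly like this; a sketch only, and the HP package file name is a placeholder:

      pkgrm SUNWntxn                  # remove the bundled Sun driver first
      pkgadd -d ./HP-ntxn-driver.pkg  # install HP's replacement package
      rem_drv ntxn                    # clear the stale driver registration
      add_drv -i '"4040,100"' ntxn    # re-register against the card's PCI id
      devfsadm -i ntxn                # rebuild /dev entries for this driver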

    Read the article

  • Strange ZFS hidden filesystem problem

    - by RandomInsano
    Half of my ZFS filesystems are hidden in zfs-fuse. Here's my story: I love ZFS. I used it for about six months on FreeBSD, but due to it crashing the kernel during heavy inter-filesystem I/O load, I tried switching to Solaris 5.10. That was good, but when I attempted to import my version 13 pool into its version 4 ZFS, there were some hefty problems. It may have tried to correct the filesystem definitions, I don't know. Since that version wasn't compatible with my pool, I've now switched to Ubuntu Server 10.04. That version more than supports my pool's version, but I can only see half of my filesystems, and the filesystems I can see are the same ones Solaris could see. Now, despite those filesystems not being present in 'zfs list' output, I can still set properties on them, and I can even still mount them and read and write files; they just plain don't show up in 'zfs list'. I've mounted the major ones, but I'm not sure what other filesystems there are anymore (I have about eight that I can't see). Anyone have any idea what the heck is going on? I think I might try booting back into FreeBSD 8 (I still have the main boot drive lying around) and see if at least it is able to view the filesystems. I've also done a scrub while in Linux, and it found no errors in any of the data. Oddly, the DMA read errors which caused problems with ZFS on FreeBSD are also reported by Linux, but zfs-fuse doesn't find an error. That's a topic for another post, however.
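
    A couple of ways to cross-check what the pool itself records, independent of what 'zfs list' is showing (the pool name 'tank' below is a placeholder):

      zfs list -r -t all tank                  # recurse through every dataset and snapshot
      zpool history tank                       # every create/destroy ever run on the pool
      zpool export tank && zpool import tank   # force the pool metadata to be re-read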

    Read the article

  • Sun Power Button Won't Shut Down System

    - by user36680
    Background: we are running NIS and have NFS mounts from a Solaris 10 workstation to a Solaris 8 server. If the workstation loses its network connection for some reason, when I look at the workstation's console I see repeated messages of the form:

      <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    If I try to log in at the console as a user, it won't work because my account can't be authenticated through NIS. It also won't return to a login prompt again, so I can't log in as root. If I press the power button on the workstation (without holding it in), I see:

      <date> <time> <hostname> power: WARNING: Power off requested from power button or SC, powering down the system!
      Shutdown started. <date> <time> Changing to init state 5 - please wait.
      <date> <time+2 minutes> <hostname> power: WARNING: Failed to shut down the system!

    And I continue to see messages of the form:

      <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    So, the questions are:

    1. How do I make NIS stop trying (because I know it will fail)?
    2. Why won't it shut down?
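
    If a root shell can still be reached somehow (for example over the serial console before the hang sets in), Solaris 10 runs ypbind under SMF, so the retries can be stopped per service. A sketch, not tested against this exact hang:

      svcadm disable -t svc:/network/nis/client:default   # -t: only until next boot
      svcs -x                                             # show which services are wedged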

    Read the article

  • ERROR: Not enough space?

    - by dsmoljanovic
    Now this is a very unspecific question, and I'm trying to figure out what this message might mean. Here is the story behind it: I'm installing Oracle Enterprise Manager Cloud Control (12c R3) on Solaris 10 (5/09). The installer opens up, I enter all the needed information, and at the last step I click Install. It immediately crashes with only "ERROR: Not enough space" written to the log and console, and nothing else. Now, could this be a Java error or a Solaris error? I'm thinking it happens either when it starts to copy files or when it tries to launch a process that would do that. What space is it referring to? Disk (have enough), swap (also), memory (yep)... Any ideas are helpful.

    Edit: I found this exception in the oraInventory logs:

      oracle.sysman.oii.oiic.OiicInstallAPIException: Not enough space
        at oracle.sysman.oii.oiic.OiicAPIInstaller.initInstallSession(OiicAPIInstaller.java:2165)
        at oracle.sysman.oii.oiic.OiicAPIInstaller.initOUIAPISession(OiicAPIInstaller.java:790)
        at oracle.sysman.install.oneclick.EMGCOUIInstaller.prepareForInstall(EMGCOUIInstaller.java:676)
        at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext$1.run(EMGCSummaryDlgonNext.java:243)
        at java.lang.Thread.run(Thread.java:662)
        at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext.actionsOnClickofNext(EMGCSummaryDlgonNext.java:1067)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at oracle.sysman.install.oneclick.EMGCUtil.performonClickOfNextForClass(EMGCUtil.java:399)
        at oracle.sysman.install.oneclick.EMGCUtil.performPageLevelValidationsForSilentInstall(EMGCUtil.java:367)
        at oracle.sysman.install.oneclick.EMGCInstaller.prepareForSilentInstall(EMGCInstaller.java:1459)
        at oracle.sysman.install.oneclick.EMGCInstaller.main(EMGCInstaller.java:1553)

    Disk status:

      bash-3.00$ df -h /tmp
      Filesystem             size   used  avail capacity  Mounted on
      swap                   8.1G   2.7G   5.4G    33%    /tmp
      bash-3.00$ df -h /u01
      Filesystem             size   used  avail capacity  Mounted on
      /                      275G    28G   244G    11%    /

    Swap:

      root@gs12emcc # swap -s
      total: 18306040k bytes allocated + 3837808k reserved = 22143848k used, 5712664k available
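
    If the failing check does turn out to be temp space, OUI-based installers generally honor a temp-directory override; a sketch, assuming the EM installer respects the standard variables (the installer file name here is a placeholder):

      mkdir -p /u01/tmp
      export TMP=/u01/tmp TEMP=/u01/tmp TMPDIR=/u01/tmp
      ./em12cr3_solaris.bin -J-Djava.io.tmpdir=/u01/tmp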

    Read the article

  • OPTICS Clustering algorithm. How to get the best epsilon

    - by Marco Galassi
    I am implementing a project which needs to cluster geographical points, and the OPTICS algorithm seems to be a very nice solution. It needs just 2 input parameters (MinPts and epsilon), which are, respectively, the minimum number of points needed to consider them a cluster, and the distance value used to decide whether two points can be placed in the same cluster. My problem is that, due to the extreme variety of the points, I can't set a fixed epsilon. Just look at the image below: the same point structure at a different scale would give very different results. Suppose I set MinPts=2 and epsilon = 1 km. On the left, the algorithm would create 2 clusters (red and blue), but on the right it would create one single cluster containing all of the points (red), whereas I would like to obtain 2 clusters on the right as well. So my question is: is there any way to calculate the epsilon value dynamically to get this result? Thank you very much, and excuse my poor English. Marco
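
    A common heuristic (borrowed from the DBSCAN literature) is to derive epsilon from the data itself via the k-distance plot: compute each point's distance to its MinPts-th nearest neighbor and pick a high percentile, so epsilon scales with the density of the input. A sketch in Python, assuming scikit-learn is available:

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def estimate_epsilon(points, min_pts=2, percentile=90):
          """Pick epsilon from the k-distance distribution of the data itself."""
          nbrs = NearestNeighbors(n_neighbors=min_pts + 1).fit(points)
          dists, _ = nbrs.kneighbors(points)   # column 0 is each point's distance to itself
          k_dists = dists[:, min_pts]          # distance to the min_pts-th nearest neighbor
          return float(np.percentile(k_dists, percentile))

      # usage: epsilon now adapts to the scale of the point set
      # eps = estimate_epsilon(np.array([[x1, y1], [x2, y2], ...]), min_pts=2)

    On the zoomed-out version of the same structure, the k-distances grow with the spacing, so the estimated epsilon grows with them and the two-cluster result is preserved.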

    Read the article

  • apache mod_jk loadbalancing issue for glassfish cluster instances

    - by SibzTer
    I have a JEE EAR application deployed on 2 clusters with 2 instances each on GlassFish v3.1. These are load balanced by an Apache server running on the same machine. My problem is that I am frequently seeing the following error messages in the mod_jk.log file. Can you help me understand what the issue is?

      [Mon Jun 13 09:37:51 2011] [7116:7852] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:37:51 2011] [7116:7852] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:37:51 2011] loadbalancerLocal myServer 0.062500
      [Mon Jun 13 09:37:51 2011] [7116:6512] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:37:51 2011] [7116:6512] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:37:52 2011] [7116:3080] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:37:52 2011] [7116:3080] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:38:21 2011] [7116:6512] [info] service::jk_lb_worker.c (1388): service failed, worker viewerLocalInstance4 is in local error state
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] service::jk_lb_worker.c (1388): service failed, worker viewerLocalInstance4 is in local error state
      [Mon Jun 13 09:38:21 2011] [7116:6512] [info] service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 29.046875
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 29.171875
      [Mon Jun 13 09:38:21 2011] [7116:6512] [info] jk_handler::mod_jk.c (2620): Aborting connection for worker=loadbalancerLocal
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] jk_handler::mod_jk.c (2620): Aborting connection for worker=loadbalancerLocal
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Mon Jun 13 09:38:21 2011] [7116:7852] [info] ajp_service::jk_ajp_common.c (2543): (viewerLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 0.156250
      [Mon Jun 13 09:38:21 2011] loadbalancerLocal myServer 0.062500
      [Mon Jun 13 09:38:22 2011] [7116:3080] [info] service::jk_lb_worker.c (1388): service failed, worker viewerLocalInstance4 is in local error state
      [Mon Jun 13 09:38:22 2011] [7116:3080] [info] service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.
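
    Reading these entries: "Writing to client aborted" and "client write error" are logged when the client drops the connection mid-response, so they usually point at impatient or flaky clients rather than a GlassFish fault; the worker then sits briefly in "local error state" before recovering. The timeout and retry behavior is tunable per worker. A sketch of the relevant workers.properties knobs; the worker names come from the log, the values are illustrative only and worth checking against the mod_jk reference:

      worker.viewerLocalInstance4.reply_timeout=30000   # ms to wait for a reply from the backend
      worker.viewerLocalInstance4.socket_timeout=10     # seconds before a dead socket is abandoned
      worker.viewerLocalInstance4.recovery_options=7    # bitmask controlling when requests are retried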

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex SQL queries across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading it into big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.

    Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

    Memcached Implementation for InnoDB

    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. (Figure 1: Memcached API Implementation for InnoDB)

    With the Memcached daemon running in the same process space, users get very low latency access to their data, while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web/application servers can remotely access the Memcached/InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users retain all the advanced functionality offered by InnoDB, including support for foreign keys, XA transactions and complex JOIN operations.

    Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key-value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second. (Figure 2: Over 9x Faster INSERT Operations) The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts, with the added assurance of transactional guarantees. You can check out the latest Memcached/InnoDB developments and benchmarks here, and you can learn how to configure the Memcached API for InnoDB here.

    Memcached Implementation for MySQL Cluster

    Memcached API support for MySQL Cluster was introduced with the General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, ensuring complete synchronization between the database and cache. (Figure 3: Memcached API Implementation with MySQL Cluster)

    Implementation is simple:
    1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
    2. This invokes the Memcached driver for NDB (which is part of the same process).
    3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes.

    The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care; it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

    Using Memcached for Schema-less Data

    By default, every key-value pair is stored in a single row in a single shared table, thus allowing schema-less data storage. Alternatively, the developer can define a key prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

    Conclusion

    Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
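
    To make the API concrete: because the plugin speaks the standard memcached protocol (port 11211 by default), any existing client works unchanged. A sketch in Python, assuming the python-memcached package and a server with the plugin enabled; the key and table names are illustrative:

      import memcache

      # connects to the memcached plugin inside mysqld, not a separate cache process
      mc = memcache.Client(['127.0.0.1:11211'])

      mc.set('user:42', 'alice')   # written straight into an InnoDB/NDB table
      print(mc.get('user:42'))     # read back through the same path

      # the same row remains visible to ordinary SQL, e.g. (default demo mapping):
      # SELECT * FROM demo_test WHERE c1 = 'user:42';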

    Read the article

  • IIS7 failover cluster across datacenters

    - by Scott
    Hello, I have servers in two different datacenters, with each datacenter getting static IPs. What I would like to do is set up the servers as IIS7 servers, allowing them to fail over from datacenter to datacenter with little (or preferably no) interruption. Servers on both sides are running Windows Server 2008 x64 with IIS7 (or 7.5 if needed). I am interested in how to point DNS traffic to the other datacenter without manual human intervention. For example:

      Datacenter A:
        IP: 192.168.1.115
        Servers: Server 2008 x64 w/ IIS 7

      Datacenter B:
        IP: 192.168.1.220
        Servers: Server 2008 x64 w/ IIS 7

      Other information:
        Domain Name: Example.org
        Domain DNS: 192.168.1.115

    If Datacenter A's connectivity went down (broken service line, etc.), how does the traffic know to route to Datacenter B on 192.168.1.220? Thanks, Scott

    Read the article

  • Can't add service account to domain group during SQL cluster install

    - by Sam
    I'm installing a SQL Server 2008 instance on a Windows Server 2003 machine which is already running SQL Server 2005. I need to set up domain groups for the security setup step (http://msdn.microsoft.com/en-us/library/ms179530.aspx): "On Windows Server 2003, specify domain groups for SQL Server services. All resource permissions are controlled by domain-level groups that include SQL Server service accounts as group members." Much more info on this here: http://support.microsoft.com/kb/910708. I've had problems with being able to add the Windows service accounts to the groups at install time; the security admins had to make my account a domain admin, which they were hesitant to do. Per the docs, the account under which SQL Server Setup is running must have permissions to add accounts to the domain groups. Is there a specific security setting which would allow my account to add accounts to a group? UPDATE: I'm looking for specific instructions. I have a global group called domain\servicegroup - what do I tell the security folks to do? I'd love to figure it out myself, but I don't have access to this stuff.
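
    Delegating just that one right avoids domain admin entirely: the setup account only needs write access to the group's member attribute. A sketch using dsacls (the group DN and account name are placeholders for the security folks to adapt):

      dsacls "CN=servicegroup,OU=Groups,DC=domain,DC=com" /G "DOMAIN\setupaccount:WP;member"

    This grants Write Property on the member attribute of that one group, which is exactly the permission SQL Server Setup needs to add the service accounts.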

    Read the article

  • CentOS Failover Cluster - SIOCADDRT: No such process (when adding a loopback)

    - by Steve Rolfe
    I'm trying to configure two web servers behind a load balancing server. The load balancing aspect works fine (it sees both servers, kills them if it needs to, and seems to direct traffic fine). The only issue is with the loopback alias on the web servers, /etc/sysconfig/network-scripts/ifcfg-lo:0:

      DEVICE=lo:0
      IPADDR=<Virtual IP>
      NETMASK=255.255.255.255
      ONBOOT=yes
      NAME=loopback

    Every time I try a "service network restart" I get "SIOCADDRT: No such process" when the loopback interface loads. Anyone have an idea what's causing this?

    Read the article

  • Apache (Solaris 10): 2 symlinks to the same file, one works the other doesn't

    - by justcatchingrye
    I'm seeing a strange issue with Apache. I have a system that pulls a configuration file from a web server. I want to use a symlink with the name 'ocds-dpsarch01a.rules', but it doesn't work. However, if I change one character in that name and link it to the same file, it works (see below). I can't think of any reason why one symlink would work when another doesn't. I would have thought that either the Apache configuration is right and all symlinks work, or it isn't and no symlinks work(?) Any thoughts welcome.

      ls -l /REMOVED/apache2/htdocs/rules/syslog/*cds-dpsarch01a.rules
      lrwxrwxrwx 1 root root 62 May 13 13:55 ocds-dpsarch01a.rules -> /REMOVED/apache2/htdocs/templates/syslog/DCM_SST_DPST_01.rules
      lrwxrwxrwx 1 root root 62 May 13 13:52 xcds-dpsarch01a.rules -> /REMOVED/apache2/htdocs/templates/syslog/DCM_SST_DPST_01.rules

    1) Application starting and successfully reading the configuration from the web server:

      13/05/2010 13:56:37: Information: Connecting ...
      13/05/2010 13:56:37: Debug: Reading REMOVED://REMOVED/rules/syslog/xcds-dpsarch01a.rules
      13/05/2010 13:56:37: Debug: HTTP response: HTTP/1.1 200 OK
      Date: Thu, 13 May 2010 13:56:34 GMT
      Server: Apache
      Last-Modified: Fri, 09 Apr 2010 12:28:26 GMT
      ETag: "5073-a744-ee92ae80"
      Accept-Ranges: bytes
      Content-Length: 42820
      Cache-Control: max-age=5
      Expires: Thu, 13 May 2010 13:56:39 GMT
      NL7C-Filtered:
      Content-Type: text/plain
      Connection: close
      13/05/2010 13:56:37: Debug: Plain text rules file detected.

    2) Application starting and failing to read the configuration from the web server:

      13/05/2010 13:56:55: Information: Connecting ...
      13/05/2010 13:56:55: Debug: Reading REMOVED://REMOVED/rules/syslog/ocds-dpsarch01a.rules
      13/05/2010 13:56:55: Debug: HTTP response: HTTP/1.1 403 Forbidden
      Date: Wed, 12 May 2010 15:25:11 GMT
      Server: Apache
      Vary: accept-language,accept-charset
      Accept-Ranges: bytes
      Connection: close
      Content-Type: text/html; charset=iso-8859-1
      Content-Language: en
      Expires: Wed, 12 May 2010 15:25:11 GMT
      13/05/2010 13:56:55: Error: HTTP: HTTP/1.1 403 Forbidden
      13/05/2010 13:56:55: Error: HTTP GET failed
      13/05/2010 13:56:55: Error: Failed to open Rules file: REMOVED://REMOVED/rules/syslog/ocds-dpsarch01a.rules
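
    On the Apache side, the usual suspects for a per-name 403 are symlink options and name-matching rules (Alias/Location/RewriteRule/mod_security) rather than the link itself. A sketch of the stanza worth confirming in httpd.conf; the directory path is taken from the question and the directives are Apache 2.2 style:

      <Directory "/REMOVED/apache2/htdocs/rules/syslog">
          Options +FollowSymLinks    # SymLinksIfOwnerMatch instead can 403 individual links
          Order allow,deny
          Allow from all
      </Directory>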

    Read the article

  • Webmin Cluster Copy Protocol

    - by hozza
    Just toying with a clustered server farm for fun (as you do) and experimenting with Webmin and its 'cluster' modules. It has a feature that can copy files from one server to another on a repeating basis. Does this feature/module use cron jobs, and what protocol does it use to copy the files? I have searched all over the net and yet I cannot find any decent documentation on Webmin or its features. Is it just poorly documented, or am I missing something?

    Read the article

  • Weblogic - Dynamic Clustering in practice by Andy Overton

    - by JuergenKress
    The latest version of WebLogic (12.1.2) includes support for dynamic clustering. For more details on what else is new in 12.1.2, see my previous blog post. In this blog post I will look at setting up a dynamic cluster on 2 machines with 4 managed servers (2 on each). I will then deploy an application to the cluster and show how to expand the cluster.

    What is a dynamic cluster? A dynamic cluster is any cluster that contains one or more dynamic servers. Each server in the cluster is based upon a single shared server template. The server template allows you to configure each server the same way and ensures that servers do not need to be manually configured before being added to the cluster. This lets you easily scale the number of servers in your cluster up or down without setting up each server manually. Changes made to the server template are rolled out to all servers that use that template. Read the complete article here.

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
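
    The same setup can also be scripted. A WLST sketch of creating a template-backed dynamic cluster; all names here are placeholders of our own choosing, and the DynamicServers attribute names should be double-checked against the 12.1.2 MBean reference:

      # run with wlst.sh; connects to a (placeholder) admin server
      connect('weblogic', 'welcome1', 't3://adminhost:7001')
      edit(); startEdit()
      tmpl = cmo.createServerTemplate('dyn-template')
      tmpl.setListenPort(7100)
      cluster = cmo.createCluster('dyn-cluster')
      ds = cluster.getDynamicServers()
      ds.setServerTemplate(tmpl)
      ds.setServerNamePrefix('dyn-server-')
      ds.setDynamicClusterSize(4)          # four managed servers generated from the template
      ds.setCalculatedListenPorts(true)    # each generated server gets an incremented port
      save(); activate()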

    Read the article

  • Nobody nogroup on ubuntu client with Solaris server

    - by user1574623
    I have an OpenIndiana server with ZFS, and it has been shared over NFS to an Ubuntu server (called server1) for a year. Now I have been asked to share it to a second Ubuntu server (called server2). So I took the line from /etc/fstab on server1 and added it on server2: 192.168.1.22:mypool/data/.zfs/snapshot /mnt/zfs nfs acl,intr,noatime 0 0. But when I mount it, the ownership on server2 shows as "nobody nogroup" (connected as anonymous?!), whereas it's fine on server1. On my OpenIndiana box I haven't found where this is configured (I'm not the one who configured it last year). I have tried 'zfs set sharenfs=rw mypool/data' but without success. So I am looking for something like /etc/exports on Ubuntu, to configure which servers are allowed to connect and not be mapped to anonymous. Any ideas? Thanks.
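
    For what it's worth, on OpenIndiana the equivalent of Ubuntu's /etc/exports is the dataset's sharenfs property (or the sharemgr tool), and its value takes share_nfs options. A sketch, with the access list a placeholder to adjust to the real network:

      # allow both clients read-write and keep root from being mapped to nobody
      zfs set sharenfs='rw=@192.168.1.0/24,root=@192.168.1.0/24' mypool/data
      zfs get sharenfs mypool/data     # verify what is currently exported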

    Read the article

  • How to automatically start the perfmon on SQL Server cluster active node

    - by Jlamber
    How can we automatically start perfmon on the active SQL Server cluster node? Typically we fail over to the inactive node and forget to run perfmon; we want it to start automatically if possible. If not, how can we tell that perfmon is not running, so we can send out an alert to start it? We can watch the log file output, but we want to know if there is a more elegant solution. Thank you.
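
    One pattern that fits this: define the collector set once on each node with logman (collector definitions persist across reboots), then have a cluster generic application/script resource or a startup task issue the start. A sketch with placeholder counter names and paths:

      rem define once on each node
      logman create counter SQLPerf -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -o C:\PerfLogs\SQLPerf
      rem start after failover (cluster script resource or scheduled task)
      logman start SQLPerf
      rem "logman query SQLPerf" reports Running/Stopped, which is usable for alerting
      logman query SQLPerf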

    Read the article

  • cluster of services and restarting on package upgrade

    - by Marcin Cylke
    I'm using Puppet to manage a bunch of servers. Those servers run a simple service, exposed to the world via a load balancer. The service's instances are independent in that they can run on their own, and they are deployed on multiple servers to increase responsiveness. Now, when I push a new package to the repo and Puppet notices it appearing there, it just updates the package on all servers at once. This results in a short downtime of the entire service. Is there a way to configure Puppet to restart the services sequentially, or to use some other kind of strategy?
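
    If the agents run on a schedule, Puppet's built-in splay settings already stagger the runs, which staggers the upgrade and restart as a side effect. A sketch of the agent-side configuration (values illustrative):

      # /etc/puppet/puppet.conf on each node
      [agent]
          runinterval = 1800
          splay       = true     # add a random delay before each run
          splaylimit  = 900      # cap the delay at 15 minutes

    This only makes simultaneous restarts unlikely rather than impossible; a stricter approach is to orchestrate runs externally (e.g. trigger puppet agent node by node from a script) so at most one instance is restarting at a time.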

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 Python Tornado servers on the backend, which handle the requests. These Tornado servers take about 5-10 ms to respond to one request, but that is only the internal compute time of each request. I want to put this under test: I want to measure the response time from the ELB to upstream and back, see how it varies as QPS throughput increases, and plot a graph of time vs. QPS vs. latency along with other factors like CPU and memory. Is there software to do that, or should I log everything with latency checks and then analyze the whole log afterwards? I would also need to write a self-monitor which keeps checking the whole response time. Is it possible to do that with a script from within the server? If so, will it be accurate?
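
    For the load-generation and measurement side, a simple starting point is ApacheBench aimed at the ELB's DNS name, sweeping the concurrency level to vary QPS; the -e flag writes a percentile breakdown you can plot. A sketch (the hostname, endpoint, and payload file are placeholders):

      # 10k POSTs at 100 concurrent; -e writes per-percentile latencies as CSV
      ab -n 10000 -c 100 -p payload.json -T 'application/json' \
         -e percentiles.csv http://my-elb-1234.us-east-1.elb.amazonaws.com/endpoint

    Repeating the run at increasing -c values while sampling CPU and memory on the instances (e.g. with vmstat) yields the time vs. QPS vs. latency series for the graph.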

    Read the article

  • MySQL Cluster transaction isolation level - READ_COMMITTED

    - by Doori Bar
    I'm learning mostly by reading the documentation. Unfortunately, http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html#isolevel_read-committed doesn't say anything, while it says everything. Confused? Me too. The ndb engine supports only the READ_COMMITTED transaction isolation level. A. The docs start by saying the transaction "sets and reads its own fresh snapshot", which I translate to: the transaction has a separate 'zone', and whatever it stores there is what it reads back. B. Meanwhile, outside of the transaction, the old values remain unlocked. C. Then the docs continue with a sentence about "locking reads"; I have no idea what that means. Question: they claim only the READ_COMMITTED isolation level is supported, but when handling a BLOB or TEXT they say the read is now "locked" too. Is that a contradiction? Can a transaction LOCK for reading just as well while handling something other than BLOB/TEXT (such as integers)?
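
    On what "locking reads" refers to: in MySQL terms these are the SELECT variants that take row locks explicitly, as opposed to plain SELECTs, which under READ COMMITTED just see the latest committed snapshot without locking anything. Illustrative SQL (the table and column names are made up):

      -- plain read: no row locks taken
      SELECT balance FROM accounts WHERE id = 1;

      -- locking reads: shared and exclusive row locks respectively
      SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;
      SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;

    Read that way, the BLOB/TEXT note is not a contradiction: it says ndb internally falls back to a locking read for those column types, while any transaction can also issue explicit locking reads on ordinary columns such as integers using the syntax above.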

    Read the article

  • Dynamically changing one-node Cassandra cluster to two nodes

    - by Jason Axelson
    So I have an application that will be mostly dormant but will need to handle high bursts a few days out of the month. Since we are deploying on EC2, I would like to keep only one Cassandra server up most of the time and then, on burst days, bring one more server up (with more RAM and CPU than the first) to help serve the load. What is the best way to do this? Should I take a different approach? Some notes about what I plan to do:

    1. Bring the node up and repair it immediately
    2. After the burst time is over, decommission the powerful node
    3. Use the always-on server as the seed node

    My main question is how to get the nodes to share all the data, since I want a replication factor of 2 (so both nodes have all the data), but that won't work while there is only one server. Should I bring up 2 extra servers instead of just one?
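
    Since the replication factor is a keyspace property, one workable sequence is to raise it when the second node joins and repair to stream the data across; a sketch (the keyspace name is a placeholder, and the ALTER KEYSPACE syntax assumes CQL3 - older releases would use cassandra-cli instead):

      ALTER KEYSPACE myks
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};

      -- then, from a shell on the new node:
      --   nodetool repair myks       # stream its full copy of the data
      -- and when the burst window ends, before shutting it down:
      --   nodetool decommission      # hand its ranges back to the ring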

    Read the article
