Search Results

Search found 1781 results on 72 pages for 'cluster'.

Page 30 of 72

  • Load-balancer options

    - by toolkit
    I am looking at a number of possible options for load balancing. So far I am constrained to the following: (1) DNS server load balancing to a cluster of Tomcat servers, with Terracotta for session replication. Pros: we don't have to buy new kit. Cons: DNS load balancing can keep directing traffic to a broken server. (2) A hardware load balancer in front of the Tomcat cluster. Pros: we could have a second box as a failover for the load balancer itself. Cons: expense. (3) An Apache server as load balancer. Pros: Apache's load balancer polls for broken servers. Cons: the Apache server is a single point of failure, and we need to buy another server. Are there any other options I should consider? Thanks. Update: Thanks for all the answers so far, +1s all round. I'm not accepting an answer yet, to keep more ideas coming.
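
    For the Apache option, a minimal sketch of what the configuration could look like with mod_proxy_balancer (host names and the balancer name are hypothetical; mod_proxy, mod_proxy_http and mod_proxy_balancer need to be loaded):

        # Pool of Tomcat back-ends; a member that fails is not retried for 60s
        <Proxy balancer://tomcats>
            BalancerMember http://tomcat1.example.com:8080 retry=60
            BalancerMember http://tomcat2.example.com:8080 retry=60
        </Proxy>

        # Route application traffic through the pool
        ProxyPass        /app balancer://tomcats/app
        ProxyPassReverse /app balancer://tomcats/app

    Sticky sessions can be added with stickysession=JSESSIONID on the ProxyPass line if Terracotta replication alone turns out not to be enough, and the Apache box itself can later be doubled up behind a floating IP to remove the single point of failure.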

    Read the article

  • Create user in Oracle 11g with same privileges as in Oracle 10g XE

    - by Álvaro G. Vicario
    I'm a PHP developer (not a DBA) and I've been working with Oracle 10g XE for a while. I'm used to XE's simplified user management: Go to Administration/ Users/ Create user Assign user name and password Roles: leave the default ones (connect and resource) Privileges: click on "Enable all" to select the 11 possible ones Create This way I get a user that has full access to its data and no access to everything else. This is fine since I only need it to develop my app. When the app is to be deployed, the client's DBAs configure the environment. Now I have to create users in a full Oracle 11g server and I'm completely lost. I have a new concept (profiles) and there're like 20 roles and hundreds of privileges in various categories. What steps do I need to complete in Oracle Enterprise Manager in order to obtain a user with the same privileges I used to assign in XE? ==== UPDATE ==== I think I'd better provide a detailed explanation so I make myself clearer. This is how I create a user in 10g XE: Roles: [X] CONNECT [X] RESOURCE [ ] DBA Direct Asignment System Privileges: [ ] CREATE DATABASE LINK [ ] CREATE MATERIALIZED VIEW [ ] CREATE PROCEDURE [ ] CREATE PUBLIC SYNONYM [ ] CREATE ROLE [ ] CREATE SEQUENCE [ ] CREATE SYNONYM [ ] CREATE TABLE [ ] CREATE TRIGGER [ ] CREATE TYPE [ ] CREATE VIEW I click on Enable All and I'm done. This is what I'm asked when doing the same in 11g: Profile: (*) DEFAULT ( ) WKSYS_PROF ( ) MONITORING_PROFILE Roles: CONNECT: [ ] Admin option [X] Default value Edit List: AQ_ADMINISTRATOR_ROLE AQ_USER_ROLE AUTHENTICATEDUSER CSW_USR_ROLE CTXAPP CWM_USER DATAPUMP_EXP_FULL_DATABASE DATAPUMP_IMP_FULL_DATABASE DBA DELETE_CATALOG_ROLE EJBCLIENT EXECUTE_CATALOG_ROLE EXP_FULL_DATABASE GATHER_SYSTEM_STATISTICS GLOBAL_AQ_USER_ROLE HS_ADMIN_ROLE IMP_FULL_DATABASE JAVADEBUGPRIV JAVAIDPRIV JAVASYSPRIV JAVAUSERPRIV JAVA_ADMIN JAVA_DEPLOY JMXSERVER LOGSTDBY_ADMINISTRATOR MGMT_USER OEM_ADVISOR OEM_MONITOR OLAPI_TRACE_USER OLAP_DBA OLAP_USER OLAP_XS_ADMIN ORDADMIN OWB$CLIENT OWB_DESIGNCENTER_VIEW OWB_USER RECOVERY_CATALOG_OWNER RESOURCE SCHEDULER_ADMIN SELECT_CATALOG_ROLE SPATIAL_CSW_ADMIN SPATIAL_WFS_ADMIN WFS_USR_ROLE WKUSER WM_ADMIN_ROLE XDBADMIN XDB_SET_INVOKER XDB_WEBSERVICES XDB_WEBSERVICES_OVER_HTTP XDB_WEBSERVICES_WITH_PUBLIC System Privileges: <Empty> Edit List: ACCESS_ANY_WORKSPACE ADMINISTER ANY SQL TUNING SET ADMINISTER DATABASE TRIGGER ADMINISTER RESOURCE MANAGER ADMINISTER SQL MANAGEMENT OBJECT ADMINISTER SQL TUNING SET ADVISOR ALTER ANY ASSEMBLY ALTER ANY CLUSTER ALTER ANY CUBE ALTER ANY CUBE DIMENSION ALTER ANY DIMENSION ALTER ANY EDITION ALTER ANY EVALUATION CONTEXT ALTER ANY INDEX ALTER ANY INDEXTYPE ALTER ANY LIBRARY ALTER ANY MATERIALIZED VIEW ALTER ANY MINING MODEL ALTER ANY OPERATOR ALTER ANY OUTLINE ALTER ANY PROCEDURE ALTER ANY ROLE ALTER ANY RULE ALTER ANY RULE SET ALTER ANY SEQUENCE ALTER ANY SQL PROFILE ALTER ANY TABLE ALTER ANY TRIGGER ALTER ANY TYPE ALTER DATABASE ALTER PROFILE ALTER RESOURCE COST ALTER ROLLBACK SEGMENT ALTER SESSION ALTER SYSTEM ALTER TABLESPACE ALTER USER ANALYZE ANY ANALYZE ANY DICTIONARY AUDIT ANY AUDIT SYSTEM BACKUP ANY TABLE BECOME USER CHANGE NOTIFICATION COMMENT ANY MINING MODEL COMMENT ANY TABLE CREATE ANY ASSEMBLY CREATE ANY CLUSTER CREATE ANY CONTEXT CREATE ANY CUBE CREATE ANY CUBE BUILD PROCESS CREATE ANY CUBE DIMENSION CREATE ANY DIMENSION CREATE ANY DIRECTORY CREATE ANY EDITION CREATE ANY EVALUATION CONTEXT CREATE ANY INDEX CREATE ANY INDEXTYPE CREATE ANY JOB CREATE ANY LIBRARY CREATE ANY MATERIALIZED VIEW CREATE ANY MEASURE 
FOLDER CREATE ANY MINING MODEL CREATE ANY OPERATOR CREATE ANY OUTLINE CREATE ANY PROCEDURE CREATE ANY RULE CREATE ANY RULE SET CREATE ANY SEQUENCE CREATE ANY SQL PROFILE CREATE ANY SYNONYM CREATE ANY TABLE CREATE ANY TRIGGER CREATE ANY TYPE CREATE ANY VIEW CREATE ASSEMBLY CREATE CLUSTER CREATE CUBE CREATE CUBE BUILD PROCESS CREATE CUBE DIMENSION CREATE DATABASE LINK CREATE DIMENSION CREATE EVALUATION CONTEXT CREATE EXTERNAL JOB CREATE INDEXTYPE CREATE JOB CREATE LIBRARY CREATE MATERIALIZED VIEW CREATE MEASURE FOLDER CREATE MINING MODEL CREATE OPERATOR CREATE PROCEDURE CREATE PROFILE CREATE PUBLIC DATABASE LINK CREATE PUBLIC SYNONYM CREATE ROLE CREATE ROLLBACK SEGMENT CREATE RULE CREATE RULE SET CREATE SEQUENCE CREATE SESSION CREATE SYNONYM CREATE TABLE CREATE TABLESPACE CREATE TRIGGER CREATE TYPE CREATE USER CREATE VIEW CREATE_ANY_WORKSPACE DEBUG ANY PROCEDURE DEBUG CONNECT SESSION DELETE ANY CUBE DIMENSION DELETE ANY MEASURE FOLDER DELETE ANY TABLE DEQUEUE ANY QUEUE DROP ANY ASSEMBLY DROP ANY CLUSTER DROP ANY CONTEXT DROP ANY CUBE DROP ANY CUBE BUILD PROCESS DROP ANY CUBE DIMENSION DROP ANY DIMENSION DROP ANY DIRECTORY DROP ANY EDITION DROP ANY EVALUATION CONTEXT DROP ANY INDEX DROP ANY INDEXTYPE DROP ANY LIBRARY DROP ANY MATERIALIZED VIEW DROP ANY MEASURE FOLDER DROP ANY MINING MODEL DROP ANY OPERATOR DROP ANY OUTLINE DROP ANY PROCEDURE DROP ANY ROLE DROP ANY RULE DROP ANY RULE SET DROP ANY SEQUENCE DROP ANY SQL PROFILE DROP ANY SYNONYM DROP ANY TABLE DROP ANY TRIGGER DROP ANY TYPE DROP ANY VIEW DROP PROFILE DROP PUBLIC DATABASE LINK DROP PUBLIC SYNONYM DROP ROLLBACK SEGMENT DROP TABLESPACE DROP USER ENQUEUE ANY QUEUE EXECUTE ANY ASSEMBLY EXECUTE ANY CLASS EXECUTE ANY EVALUATION CONTEXT EXECUTE ANY INDEXTYPE EXECUTE ANY LIBRARY EXECUTE ANY OPERATOR EXECUTE ANY PROCEDURE EXECUTE ANY PROGRAM EXECUTE ANY RULE EXECUTE ANY RULE SET EXECUTE ANY TYPE EXECUTE ASSEMBLY EXPORT FULL DATABASE FLASHBACK ANY TABLE FLASHBACK ARCHIVE ADMINISTER FORCE ANY TRANSACTION FORCE TRANSACTION FREEZE_ANY_WORKSPACE GLOBAL QUERY REWRITE GRANT ANY OBJECT PRIVILEGE GRANT ANY PRIVILEGE GRANT ANY ROLE IMPORT FULL DATABASE INSERT ANY CUBE DIMENSION INSERT ANY MEASURE FOLDER INSERT ANY TABLE LOCK ANY TABLE MANAGE ANY FILE GROUP MANAGE ANY QUEUE MANAGE FILE GROUP MANAGE SCHEDULER MANAGE TABLESPACE MERGE ANY VIEW MERGE_ANY_WORKSPACE ON COMMIT REFRESH QUERY REWRITE READ ANY FILE GROUP REMOVE_ANY_WORKSPACE RESTRICTED SESSION RESUMABLE ROLLBACK_ANY_WORKSPACE SELECT ANY CUBE SELECT ANY CUBE DIMENSION SELECT ANY DICTIONARY SELECT ANY MINING MODEL SELECT ANY SEQUENCE SELECT ANY TABLE SELECT ANY TRANSACTION UNDER ANY TABLE UNDER ANY TYPE UNDER ANY VIEW UNLIMITED TABLESPACE UPDATE ANY CUBE UPDATE ANY CUBE BUILD PROCESS UPDATE ANY CUBE DIMENSION UPDATE ANY TABLE Object Privileges: <Empty> Add: Clase Java Clases de Trabajos Cola Columna de Tabla Columna de Vista Espacio de Trabajo Función Instantánea Origen Java Paquete Planificaciones Procedimiento Programas Secuencia Sinónimo Tabla Tipos Trabajos Vista Consumer Group Privileges: <Empty> Default Consumer Group: (*) None Edit List: AUTO_TASK_CONSUMER_GROUP BATCH_GROUP DEFAULT_CONSUMER_GROUP INTERACTIVE_GROUP LOW_GROUP ORA$AUTOTASK_HEALTH_GROUP ORA$AUTOTASK_MEDIUM_GROUP ORA$AUTOTASK_SPACE_GROUP ORA$AUTOTASK_SQL_GROUP ORA$AUTOTASK_STATS_GROUP ORA$AUTOTASK_URGENT_GROUP ORA$DIAGNOSTICS SYS_GROUP And, of course, I wonder what options I should pick.
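
    If the goal is simply to reproduce the XE-style developer account, a sketch of the equivalent SQL run as a DBA (user name, password and tablespace are placeholders) would be the two default roles plus the few CREATE privileges that RESOURCE does not already include:

        CREATE USER appdev IDENTIFIED BY changeme
          DEFAULT TABLESPACE users
          QUOTA UNLIMITED ON users;

        -- the two roles XE assigns by default
        GRANT CONNECT, RESOURCE TO appdev;

        -- the XE "Direct Assignment" privileges not already covered by RESOURCE
        GRANT CREATE VIEW, CREATE SYNONYM, CREATE PUBLIC SYNONYM,
              CREATE DATABASE LINK, CREATE MATERIALIZED VIEW, CREATE ROLE
          TO appdev;

    The profile can usually stay at DEFAULT and the consumer group at None; the long lists in Enterprise Manager only matter if the DBAs want something more restrictive than those XE-like defaults.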

    Read the article

  • Is it possible to run a VM inside a VM (e.g., KVM on VMware)?

    - by lorin
    I'd like to do some development on Eucalyptus, an open source project which provides an Amazon EC2 interface for launching virtual machine instances on a collection of privately managed nodes. I'd really like to be able to do some of the development on my desktop, rather than having to deploy Eucalyptus on our shared local cluster each time I make a change to the source code. (Especially since there are a group of us sharing that test cluster). Unfortunately, my desktop machine is a Mac, which won't run Eucalyptus natively. I do have VMWare Fusion, and it would be really nice if I could do my Eucalyptus testing inside a VMWare instance. The problem is, to test out Eucalyptus, it will have to launch (KVM or Xen) VM instances. I've got no idea if it's possible to actually launch a KVM or Xen instance inside a VMWare instance.
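
    A quick way to see whether KVM has any chance inside a Fusion guest is to check whether the hardware virtualisation flags are passed through to it; without vmx/svm the kvm module will not load and only plain QEMU emulation (or Xen paravirtualised guests) will work. A hedged check from inside the Linux guest:

        # count the VT-x/AMD-V flags visible to this VM (0 means none are exposed)
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # loading kvm-intel fails with an error if VT-x is not available to the guest
        sudo modprobe kvm-intel

    Newer VMware products can expose VT-x to a guest (the "virtualized hardware virtualization" setting), but whether the Fusion release in question supports that is something to verify before counting on it.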

    Read the article

  • BackupExec 12 to Bacula questions

    - by LVDave
    We have a 128-node compute cluster for environmental modeling, with a master/head node which we currently back up with a Windows 2003 system running BackupExec 12 and a single HP LTO3 tape drive. We have recently ordered an Overland NEO200s 12-slot library, and are considering migrating off Windows to CentOS 5 for the backup server. The master/head node is RHEL5, with the compute nodes currently being migrated from a mix of RHEL3/4/5 to CentOS5. I'm fairly familiar with RH/CentOS, but have no experience with Bacula. We've tentatively settled on Bacula as our cluster vendor recommended it. My questions are: 1) Does Bacula support an Overland NEO200s/LTO3 library? 2) Can Bacula catalog/restore tapes written by BE? and 3) I've heard of Amanda, but am even more unfamiliar with it than Bacula. Any assistance would be appreciated. Dave Frandin
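
    On question 1: Bacula drives changers through mtx, so a library like the NEO200s is normally described in bacula-sd.conf along these lines (the device paths and the mtx-changer location vary by distribution and are placeholders here):

        Autochanger {
          Name = "NEO200s"
          Device = LTO3-Drive
          # the library's SCSI generic (sg) device
          Changer Device = /dev/sg3
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
        }
        Device {
          Name = LTO3-Drive
          Media Type = LTO-3
          # non-rewinding tape device for the drive inside the library
          Archive Device = /dev/nst0
          Autochanger = yes
          AutomaticMount = yes
          RemovableMedia = yes
          RandomAccess = no
        }

    On question 2: Bacula cannot read tapes written in Backup Exec's format, so keeping a BE installation around until those tapes age out is the usual plan.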

    Read the article

  • How do I debug this FS error on a flash device?

    - by abc
    I have console access to an embedded Linux device. This device has flash memory, part of which is partitioned as a FAT filesystem. It's running Linux 2.6.31. However, I have been seeing these errors on the console lately, after which the FAT filesystem becomes read-only: 111109:154925 FAT: Filesystem error (dev loop0) 111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0) 111109:154925 FAT: Filesystem error (dev loop0) 111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0) I cannot understand why this happened. What is the root cause, and what is the fix? I would appreciate answers that point me to how to investigate the possible root cause of this issue on the device.
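
    A reasonable first step, assuming the FAT partition is the image behind dev loop0, is to unmount it and let dosfsck report and repair the broken cluster chains (mount point and device are placeholders):

        # stop whatever uses the filesystem, then unmount it
        umount /mnt/flash

        # check only, verbose, no changes written
        dosfsck -n -v /dev/loop0

        # repair interactively (-r), or use -a for automatic repair
        dosfsck -r -v /dev/loop0

    If the corruption keeps reappearing, the usual suspects on embedded flash are power loss during writes and unclean unmounts, so it is worth looking at how and when the device powers down as well as at the flash layer underneath the loop device.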

    Read the article

  • How do you handle reboots?

    - by Mart
    We have one VPS (Windows 2008 R2 + IIS 7.5) with an ASP.NET MVC 3 application. The main question is: how do we handle the situation when Windows needs to reboot (after installing Windows Updates or anything else)? The goal is to make the website 24/7, but for now it would be acceptable to show a message to the users ("we'll be back soon", something like app_offline.htm). Our application uses SQL and also writes/reads some files (uploaded photos, documents) which are not stored in SQL. What do you recommend? Load balancing with ARR (with 1+2 servers, but what if the front-end server needs a reboot)? A Windows failover cluster? A SQL failover cluster? What to do with the uploaded files? I really don't know what would be the best (and simplest) solution.

    Read the article

  • Can't connect to SQL Server Management Studio Express 2012

    - by Rare-Man
    I installed SQL Server Management Studio Express 2012, but when I try to connect from the Management Studio environment I get this error: TITLE: Connect to Server Cannot connect to .. ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476 The system cannot find the file specified BUTTONS: OK. Also, during installation I did not get an option to select a cluster. This is my SQL Server Configuration Manager; the SQL Server services list is empty... And when I run Remove a Failover Cluster Node, this error happens: http://oi57.tinypic.com/2lrvat.jpg
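
    An empty services list in SQL Server Configuration Manager usually means only the management tools were installed and not the Database Engine itself, which would explain both the connection error and the missing cluster screens. A couple of hedged checks from a command prompt (the instance name is a guess; Express installs as SQLEXPRESS by default):

        rem list installed SQL Server services - nothing here means no engine is installed
        sc query type= service state= all | findstr /i "MSSQL"

        rem try the default Express instance with Windows authentication
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@VERSION"

    If nothing shows up, rerun the SQL Server 2012 Express setup and select Database Engine Services rather than just the management tools; the failover-cluster options are only offered by the higher editions, so their absence in an Express install is expected.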

    Read the article

  • BixData or Zabbix?

    - by Arafat
    Hi all, I've been using Ganglia to monitor my single Mac OS X server which runs Apache and MySQL, and I'm OK with it. Now we are upgrading our servers: 6 IBM x3650 M3 and 2 Fujitsu servers. Two IBM machines are for the Apache cluster and four for the MySQL NDB Cluster; the other two servers are for load balancers. All servers are going to run Debian Lenny 5. Now I need to decide which monitoring tool to go for. I found that BixData and Zabbix do a better job than Ganglia in terms of sensors and reporting. Has anyone tried the above two tools, and which would you suggest for Debian? As I'm writing this, I'm installing BixData to try... Thanks in advance.

    Read the article

  • (Simple) Linux HA with VMware vSphere?

    - by derhelge
    I hope my upcoming question is specific enough, and you are able and willing to support :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as single points of failure, which means that in the case of a web server, everything (LAMP, storage, etc.) is on that one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? That wasn't really manageable, and I ended up doing it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of: DRBD with Heartbeat across 2 VMs, and for the storage an RDM (raw device mapped) LUN with read-write permissions changed for both VMs. Is this a good idea? Does anyone have a better solution?
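
    If you go the DRBD + Heartbeat route, the replication half is a resource definition shared by the two VMs; a minimal sketch in DRBD 8.x syntax (host names, backing devices and IPs are placeholders):

        resource r0 {
          protocol C;              # synchronous replication
          on web1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;   # dedicated backing disk/LUN on this VM
            address   192.168.10.1:7788;
            meta-disk internal;
          }
          on web2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.2:7788;
            meta-disk internal;
          }
        }

    Heartbeat (or Pacemaker) then manages a virtual IP, the filesystem on /dev/drbd0 and the services as one group, promoting DRBD to primary on whichever VM is active. That avoids the shared read-write RDM idea, which is dangerous with a normal (non-cluster) filesystem because two concurrent writers will corrupt it.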

    Read the article

  • Configure Cisco Catalyst 3560G with an egress uplink

    - by imaginative
    Currently my setup has our egress uplink connected directly to an external interface on a Linux router/firewall/NAT gateway. Since the Linux box is a single point of failure, I've since set up two OpenBSD boxes using carp+pf+pfsync in order to gain some additional redundancy. The problem is, I only have one egress uplink (it's still a single point of failure) but I need to get it to speak to the active CARP node in my OpenBSD cluster, which will serve as my new router/firewall/NAT cluster. Is there anything specific I need to do on a 3560G in order to be able to: 1) drop the egress uplink into a port, 2) drop one link from the switch to one firewall, and 3) drop a second link from the switch to the other firewall? This is so that if one box dies, the other still has the egress link to the switch. Is putting them into one VLAN enough? Anything else that needs to go into the configuration for this setup to work?
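
    A single access VLAN is normally enough for that part; a sketch of what it might look like on the 3560G (interface numbers and the VLAN ID are placeholders):

        vlan 100
         name EGRESS
        !
        interface GigabitEthernet0/1
         description ISP egress uplink
         switchport mode access
         switchport access vlan 100
        !
        interface GigabitEthernet0/2
         description fw1 external (carp)
         switchport mode access
         switchport access vlan 100
        !
        interface GigabitEthernet0/3
         description fw2 external (carp)
         switchport mode access
         switchport access vlan 100

    Since CARP advertisements use multicast and the virtual IP's MAC moves between ports on failover, it is also worth checking that nothing on the switch (IGMP snooping quirks, port security limiting MAC addresses per port) gets in the way of that.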

    Read the article

  • libvirt + ESX (HTTP response code 400 for call to 'Login')

    - by Coops
    I'm trying to connect to a vSphere cluster using the information from the libvirt documentation. $ virsh -c "vpx://[email protected]/dc1/dc1-cluster-e01/dc1-vsphere-e04/?no_verify=1" Enter root's password for 10.51.4.11: error: internal error HTTP response code 400 for call to 'Login' error: failed to connect to the hypervisor I seem to be able to establish a connection, but it fails with a "HTTP code 400". If I provide the incorrect password it fails with a 'login credentials' error, so it looks like I am getting a connection, but it's failing for another reason. Wireshark is no help as it's all done over SSL/TLS. Any thoughts folks? UPDATE: 15:21 28/02/11 FYI - I'm running libvirt-0.8.3 (the Ubuntu package recompiled with the ESX flag enabled). When I put virsh into debug mode it returns this: [snip] Enter root's password for 10.51.4.11: 15:19:09.011: debug : do_open:1249 : driver 3 ESX returned ERROR 15:19:09.011: debug : virUnrefConnect:294 : unref connection 0x98aa8f8 1 15:19:09.011: debug : virReleaseConnect:249 : release connection 0x98aa8f8 error: internal error HTTP response code 400 for call to 'Login' error: failed to connect to the hypervisor
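
    One way to narrow it down is to take vCenter out of the picture and point libvirt's ESX driver straight at one host; if that works, the problem is specific to the vpx:// path rather than to the SOAP login itself. A hedged example (host names here are placeholders):

        # connect directly to a single ESX host, bypassing vCenter
        virsh -c "esx://root@esx-host-01/?no_verify=1" list --all

        # rerun the failing vpx:// connection with libvirt's own debug output enabled
        LIBVIRT_DEBUG=1 virsh -c "vpx://root@vcenter/dc1/dc1-cluster-e01/dc1-vsphere-e04/?no_verify=1" list

    The datacenter/cluster/host path in a vpx:// URI has to match the vCenter inventory names exactly (including any folders in between), and a mismatch there is a common source of otherwise puzzling SOAP errors.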

    Read the article

  • DPM Coordinator Service is not responding

    - by Anatoly Vilchinsky
    Hi dear gurus! I'm stuck at this point, so please at least try to help me. I've installed DPM 2010 on a cluster server (W2K8 R2), and everything was fine. Then I tried to install the Protection Agent on both servers of my cluster, but I'm getting this error: Error 312: The agent operation failed because the DPM Agent Coordinator service is not responding. Error details: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it (0x80070422) Recommended action: Restart the DPM Agent Coordinator service on . The thing is, I can't see the DPM Agent Coordinator service at all, neither on localhost nor on the second server. All the suggestions I've found on the internet say to restart this service, but how can I do that if the service is absent? I'd be glad for any help.
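
    The coordinator is a temporary service that normally only exists while a push install is in progress, so not finding it afterwards isn't unusual; what often gets around a failed push install is putting the agent on the protected servers by hand. Roughly (share path and version folder are from memory, so verify them against your DPM server before running anything):

        :: on each cluster node, run the agent installer that ships on the DPM server
        "\\DPMSERVER\c$\Program Files\Microsoft DPM\DPM\ProtectionAgents\RA\3.0.<build>\amd64\DpmAgentInstaller_x64.exe" DPMSERVER

        :: then tell the agent which DPM server is allowed to manage this node
        cd "%ProgramFiles%\Microsoft Data Protection Manager\DPM\bin"
        SetDpmServer.exe -dpmServerName DPMSERVER

    After that the node is attached from the DPM console (or with the Attach-ProductionServer.ps1 script) instead of being pushed, and both nodes of the cluster need the agent before DPM will protect clustered resources.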

    Read the article

  • Horribly performing RAID

    - by Philip
    I have a small GlusterFS cluster with two storage servers providing a replicated volume. Each server has 2 SAS disks for the OS and logs and 22 SATA disks for the actual data, striped together as a RAID10 using a MegaRAID SAS 9280-4i4e with this configuration: http://pastebin.com/2xj4401J Connected to this cluster are a few other servers with the native client, running nginx to serve files stored on it in the order of 3-10MB. Right now a storage server has an outgoing bandwidth of 300Mbit/s and the busy rate of the RAID array is at 30-40%. There are also strange side effects: sometimes the I/O latency skyrockets and no access to the RAID is possible for 10 seconds. The filesystem used is XFS and it has been tuned to match the RAID stripe size. Does anyone have an idea what could be the reason for such a badly performing array? 22 disks in a RAID10 should deliver far more throughput.

    Read the article

  • Cannot query the root\MSCluster WMI namespace as local Admin

    - by Matt Zuberko
    I'm trying to use WMI Explorer to query the root\MSCluster namespace on various hosts to obtain cluster resource group and resource object data. I can access the namespace with no issue on Win2K3 cluster nodes but am getting an access denied error attempting to connect to Win2K8 and Win2K8R2 nodes. I can access the root\cimv2 namespace with no issue, just the MSCluster namespace even though I am a local Admin. Is there a feature setting, local security policy or server role I have to be a member of to access the namespace? Thank you!
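
    If the account being used is a local (non-domain) administrator, the usual culprit on Win2K8/R2 is remote UAC token filtering: remote connections by local admin accounts get a filtered, non-administrative token, which shows up as access-denied errors that 2003 never produced. A hedged thing to try on the 2008/R2 nodes:

        :: disable remote UAC token filtering for local administrator accounts
        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    If the account is actually a domain admin, the next things to look at are the DCOM "Remote Enable" permission and the namespace security on root\MSCluster itself (wmimgmt.msc, Properties, Security tab).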

    Read the article

  • Apache Tomcat load balancing and clustering on Ubuntu

    - by user740010
    i am facing a problem in clustering the tomcat with apache as a loadbalancer using mod_jk on ubuntu. i have install apache2 on my ubuntu 11.04 and i have downloaded tomcat7 created two copies and kept them at two different location. 1st one is at /home/net4u/vishal/test/tomcatA 2nd one is at /home/net4u/vishal/test1/tomcatB i have made following changes to server.xml file in /conf folder 1. <Server port="8205" shutdown="SHUTDOWN"> 2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> 3.<Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> 4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> similarly i have modified other tomcat i.e tomcatA server.xml content of the server.xml is as follow: -- <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- uncomment for clustering--> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> <!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack --> <Realm className="org.apache.catalina.realm.LockOutRealm"> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. 
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine> i have install libapache2-mod-jk step 1. i have Created jk.load file in /etc/apache2/mods-enabled/jk.load content is as follows: LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so Create /etc/apache2/mods-enabled/jk.conf: JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/jk.log JkMount /ecommerce/* worker1 JkMount /images/* worker1 JkMount /content/* worker1 step 2. Created workers.properties file in /etc/apache2/workers.properties content is as follows: workers.tomcat_home=/home/vishal/Desktop/test/tomcatA workers.java_home=/usr/lib/jvm/default-java ps=/ worker.list=tomcatA,tomcatB,loadbalancer   worker.tomcatA.port=8109 worker.tomcatA.host=localhost worker.tomcatA.type=ajp13 worker.tomcatA.lbfactor=1   worker.tomcatB.port=8209 worker.tomcatB.host=localhost worker.tomcatB.type=ajp13 worker.tomcatB.lbfactor=1 worker.loadbalancer.type=lb worker.loadbalancer.balanced_workers=tomcatA,tomcatB worker.loadbalancer.sticky_session=1 i tried the same thing on the windows machine it is working.
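
    One thing stands out in the configuration as posted: jk.conf mounts the URLs to a worker called worker1, but workers.properties only defines tomcatA, tomcatB and loadbalancer, so mod_jk has no matching worker to send requests to. A sketch of a jk.conf that matches the defined worker list (paths as in the question):

        # /etc/apache2/mods-enabled/jk.conf
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile     /var/log/apache2/jk.log
        JkLogLevel    info

        # send the application paths to the load-balancer worker
        JkMount /ecommerce/*  loadbalancer
        JkMount /images/*     loadbalancer
        JkMount /content/*    loadbalancer

    It is also worth double-checking that the AJP ports in workers.properties (8109 for tomcatA, 8209 for tomcatB) match the AJP connector port in each instance's server.xml, and that each instance's jvmRoute equals its worker name, otherwise sticky sessions will not route correctly.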

    Read the article

  • How do I remove Slony from a restored PostgreSQL database?

    - by Scott Herbert
    I've restored a database which came from a server on which Slony was running. The server on which the database has been restored does not have Slony installed. When the database was restored, a lot of errors were reported, with Slony-related objects not getting created because the Slony-related logins were missing. I thought this was not a problem, as losing the Slony objects didn't seem to matter and in fact seemed desirable. However, now I've got an annoying, if not critical, problem. Whenever one clicks on a table in the newly restored DB in pgAdmin, a Slony-related error pops up. The first one reads: "An error has occured: ERROR: function _rmscl.getlocalnodeid(unknown) does not exist" I notice that under the Replication node in pgAdmin there is a Slony replication cluster. Trying to drop this cluster results in more missing-object errors. Does anyone have any ideas how we can remove the last vestiges of Slony from this database?
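
    The error message names the Slony cluster schema (_rmscl), and everything Slony leaves behind lives inside that schema, so on a copy where replication is gone for good it can simply be dropped. A hedged example (take a backup first, and adjust the schema name if your cluster was called something else):

        -- remove the leftover Slony-I cluster schema and everything in it
        DROP SCHEMA "_rmscl" CASCADE;

    pgAdmin's Replication node and the popup are both driven by that schema, so they should disappear once it is dropped and pgAdmin is reconnected.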

    Read the article

  • VMware Virtual vCenter and High Availability

    - by rufo
    To continue with this question: Should VMware vCenter Server be highly available? According to the response there, even if vCenter is down, HA will continue to work. So, if my vCenter is a VM, using the Express SQL edition in the same VM, and that VM is hosted in the same cluster it manages (and the cluster is set up for HA): am I correct to assume that if the host running vCenter goes down, HA will restart the vCenter VM on another host and it will continue to function? BTW: my environment is small, two ESXi 5.0 hosts with about 50 VMs, using iSCSI shared storage for everything.

    Read the article

  • NLB RPC server is unavailable on the specified computer

    - by Robin Weston
    Hi guys, Firstly, I'll admit that my networking knowledge is limited so as people request more information I'll update this question accordingly. I am trying to create a NLB Cluster across 2 Windows Server 2008 Web Servers. Neither of the machines are members of a domain, and both have 2 NICs (one for processing external web traffic, and one for communicating internally). I have installed NLB on both machines, and have created a cluster on Host A and added itself to it. However, when I try and add Host B (using the address from the external NIC) I get the following error : "The RPC server is unavailable on the specified computer". On Host B I can see that the RPC service is running fine. I can also ping and RDP from Host A to Host B with no problems either. I have disabled the windows firewall on both machines but that had no effect

    Read the article

  • How to find date/time used by Cassandra

    - by JDI Lloyd
    Earlier this morning I noticed that one of the nodes in our Cassandra cluster is writing logs an hour in the future, despite the date/time being correct on the OS. A couple of other nodes I checked via their logs appear to be writing logs at the correct time. I now need to go through each node in our 80-node cluster and ensure Cassandra is running on the correct time. The problem is that some of the nodes don't write to the logs very often, as they aren't doing much... The question is: is there some form of tool/utility (e.g. nodetool) that can tell me the time Cassandra is running on? All the systems' date/times are correct, and an ntpdate cron job has been in place for a while. Servers are set to the Belize timezone to avoid DST changes, so it's nothing to do with that.
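
    There is no nodetool subcommand that reports the clock Cassandra is using, but since the symptom is log timestamps, one low-tech way to sweep all 80 nodes is to compare each node's OS time with the newest line in its system.log over ssh (the log path and the node list file are assumptions):

        #!/bin/sh
        # compare the OS clock with the newest Cassandra log line on every node
        for h in $(cat cassandra-nodes.txt); do
          echo "== $h =="
          ssh "$h" 'date; tail -n 1 /var/log/cassandra/system.log'
        done

    A node whose log line is an hour ahead of its own date output almost certainly has a JVM running with a stale or different timezone (for example a -Duser.timezone setting in cassandra-env.sh, or a JVM started before the zoneinfo files were changed); restarting Cassandra on that node after checking those is usually enough.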

    Read the article

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing, a Squid cluster for caching, and application data is on an IIS cluster. HAProxy balances by URI to optimize the Squid hit rate, but we know that Squid holds a different copy of each page for each Accept-Encoding header passed to it by the browser, so IE (gzip, deflate) will have a different copy of a cached page than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding headers, and I think the best place to do so would be in HAProxy. I'd appreciate it if someone could offer some ideas on how to accomplish this without breaking support for clients without gzip or deflate support.
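
    In HAProxy this can be done with header rewriting rules in the frontend; a hedged sketch in HAProxy 1.4 syntax (frontend and backend names are placeholders) that collapses every gzip-capable client down to one canonical value and strips the header for everyone else:

        frontend http-in
            bind :80
            default_backend squid_cluster

            # any client that advertises gzip gets exactly "Accept-Encoding: gzip"
            acl ae_gzip hdr_sub(Accept-Encoding) gzip
            reqirep ^Accept-Encoding:.*  Accept-Encoding:\ gzip  if ae_gzip

            # clients without gzip support get no Accept-Encoding header at all
            reqidel ^Accept-Encoding:  if !ae_gzip

    Squid then only ever sees two variants per URL (gzip and identity); the price is that deflate-only and sdch-capable clients are served as plain gzip or identity, which is usually an acceptable trade for the hit-rate gain.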

    Read the article

  • Redundant/multi-site terminal server

    - by Adam
    Hi. We have a Hyper-V cluster running 5 virtual terminal servers using HA. We need to be able to make this system redundant, so that if this site were to fail our users could log into the backup system at another location and access their data via the terminal servers. Any ideas? We were thinking of maybe using a NAS which replicates the data to the other location in real time (pass-through disks), and having a similar Hyper-V cluster set up in the backup location. However, we would need to create the users in both locations and create a virtual mirror without the data, i.e. applications, directories, settings etc. Is this the best way to achieve this? We have read that using Hyper-V pass-through disks causes a big performance degradation.

    Read the article

  • Pygrub with DRBD on Xen 3.2

    - by Joril
    Hi all, we have a two-node cluster using DRBD 8.2 on CentOS 5.2 64bit. The cluster runs a few VMs on top of Xen 3.2.1, here's the configuration for an Ubuntu Jaunty VM: name = 'dev' bootloader = '/usr/bin/pygrub' memory = '512' vif = [ 'ip=192.168.1.217,mac=00:16:3E:CD:60:80' ] disk = [ 'phy:/dev/drbd24,xvda1,w', 'phy:/dev/drbd25,xvda2,w' ] As you can see, the disks are specified like "phy:", and as such pygrub doesn't know a thing about the underlying drbd device... So my problem is that even though the VM boots just fine, it doesn't handle the state of the drbd device. As a result, when for some reason the device gets to a secondary/secondary state, the VM won't boot, and I have to manually specify which node is primary. I read that starting with Xen 3.3 pygrub understands the "drbd:" specification, and I think that it would fix my problem, but I can't upgrade Xen at the moment... Is there a workaround? For example, could I use the 3.3 version of pygrub? Thanks!

    Read the article

  • Windows 7 PC refuses to join domain

    - by Alex
    So, we're wiping clean all PCs at our office and migrating them to a new server cluster and a new domain. Last night I tested one PC and it mostly worked, except that it refused to join the domain. Now, our domain is named like EXAMPLE.COM. When I just type EXAMPLE the PC can't find the domain controller, even though I can ping it fine. If I type EXAMPLE.COM it seems to work. How can I get it to work with just EXAMPLE? That's how I got all the new servers in the cluster to work (about 20 of them) and I haven't had any issues... The only difference between the Windows 7 PC(s) and the servers is that the clients will be on a 10.0.3.X network whereas the servers are on a 10.0.1.X network. Oh, the domain controller and all the other servers are Windows Server 2008 R2. Suggestions will be highly appreciated!

    Read the article

  • JBoss HTTPS on a port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances where one runs on 8080, the other on 8081. We need to have HTTPS enabled for the 8081 server, firstly we tried enabling https on the 8080 port instance by generating the keystore and editing the server.xml and it successfully worked. However when we tried the same thing for 8081 it did not, note that we removed https for the 8080 server first before enabling it for 8081. This is what was used for both server.xml for 8080 and 8081. The only difference was that the port was changed from 8080 to 8081 when trying to enable https for 8081 port instance. What am I doing wrong and what needs to be changed? NOTE : When I meant enabled for 8080 I meant when you visit https:// URL:8484 you will actually be visiting the 8080 port instance. However when ssl is enabled for 8081 and I visit https:// URL:8484 I get that the web page is unavailable. COMMENTLESS VERSION <Server> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <Listener className="org.apache.catalina.core.JasperListener" /> <Service name="jboss.web"> <!-- https --> <Connector port="8080" address="${jboss.bind.address}" maxThreads="350" maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" address="${jboss.bind.address}" keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa" truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" /> <!-- https1 --> <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3" emptySessionPath="true" enableLookups="false" redirectPort="8443" /> <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1"> <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false" configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" > <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" /> <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve" cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager" transactionManagerObjectName="jboss:service=TransactionManager" /> </Host> </Engine> </Service> </Server> WITH COMMENTS VERSION <Server> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Use a custom version of StandardService that allows the connectors to be started independent of the normal lifecycle start to allow web apps to be deployed before starting the connectors. --> <Service name="jboss.web"> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" address="${jboss.bind.address}" maxThreads="350" maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" keystoreFile="${jboss.server.home.dir}/conf/zara.keystore" keystorePass="zara2010" clientAuth="false" sslProtocol="TLS" compression="on" /> --> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" address="${jboss.bind.address}" keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa" truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" /> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3" emptySessionPath="true" enableLookups="false" redirectPort="8443" /> <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1"> <!-- The JAAS based authentication and authorization realm implementation that is compatible with the jboss 3.2.x realm implementation. - certificatePrincipal : the class name of the org.jboss.security.auth.certs.CertificatePrincipal impl used for mapping X509[] cert chains to a Princpal. - allRolesMode : how to handle an auth-constraint with a role-name=*, one of strict, authOnly, strictAuthOnly + strict = Use the strict servlet spec interpretation which requires that the user have one of the web-app/security-role/role-name + authOnly = Allow any authenticated user + strictAuthOnly = Allow any authenticated user only if there are no web-app/security-roles --> <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> <!-- A subclass of JBossSecurityMgrRealm that uses the authentication behavior of JBossSecurityMgrRealm, but overrides the authorization checks to use JACC permissions with the current java.security.Policy to determine authorized access. - allRolesMode : how to handle an auth-constraint with a role-name=*, one of strict, authOnly, strictAuthOnly + strict = Use the strict servlet spec interpretation which requires that the user have one of the web-app/security-role/role-name + authOnly = Allow any authenticated user + strictAuthOnly = Allow any authenticated user only if there are no web-app/security-roles <Realm className="org.jboss.web.tomcat.security.JaccAuthorizationRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> --> <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false" configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" > <!-- Uncomment to enable request dumper. 
This Valve "logs interesting contents from the specified Request (before processing) and the corresponding Response (after processing). It is especially useful in debugging problems related to headers and cookies." --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve" /> --> <!-- Access logger --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" prefix="localhost_access_log." suffix=".log" pattern="common" directory="${jboss.server.log.dir}" resolveHosts="false" /> --> <!-- Uncomment to enable single sign-on across web apps deployed to this host. Does not provide SSO across a cluster. If this valve is used, do not use the JBoss ClusteredSingleSignOn valve shown below. A new configuration attribute is available beginning with release 4.0.4: cookieDomain configures the domain to which the SSO cookie will be scoped (i.e. the set of hosts to which the cookie will be presented). By default the cookie is scoped to "/", meaning the host that presented it. Set cookieDomain to a wider domain (e.g. "xyz.com") to allow an SSO to span more than one hostname. --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Uncomment to enable single sign-on across web apps deployed to this host AND to all other hosts in the cluster. If this valve is used, do not use the standard Tomcat SingleSignOn valve shown above. Valve uses a JBossCache instance to support SSO credential caching and replication across the cluster. The JBossCache instance must be configured separately. By default, the valve shares a JBossCache with the service that supports HttpSession replication. See the "jboss-web-cluster-service.xml" file in the server/all/deploy directory for cache configuration details. Besides the attributes supported by the standard Tomcat SingleSignOn valve (see the Tomcat docs), this version also supports the following attributes: cookieDomain see above treeCacheName JMX ObjectName of the JBossCache MBean used to support credential caching and replication across the cluster. If not set, the default value is "jboss.cache:service=TomcatClusteringCache", the standard ObjectName of the JBossCache MBean used to support session replication. --> <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" /> <!-- Check for unclosed connections and transaction terminated checks in servlets/jsps. Important: The dependency on the CachedConnectionManager in META-INF/jboss-service.xml must be uncommented, too --> <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve" cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager" transactionManagerObjectName="jboss:service=TransactionManager" /> </Host> </Engine> </Service> </Server>

    Read the article
