Search Results

Search found 1795 results on 72 pages for 'veritas cluster'.


  • Experiences in Upgrading from Exchange 2003 to Exchange 2010

    - by gWaldo
    I'm currently running Exchange 2003 SP2 Cluster on a Server 2003 AD Forest (in native 2003 mode), and we are beginning to plan the upgrade to Server 2008 AD and Exchange 2010. We have two main sites, one middle-sized office, and a couple of smaller sites which have DCs (which may be RODCs after the upgrade). Currently all of our Exchange cluster is in my main site, but we are considering using the new datastore paradigm for load-balance/failover at the other large site, but this is not set in stone. Right now we are in the information-gathering and planning phases. I am looking for input on any gotchas experienced while performing either upgrade, but especially the Exchange upgrade. Gotchas? What surprised you? What wasn't documented? What said one thing but was misleading? (Confusing either in content or severity.) What is great or horrible about the new system? What worked well? What worked poorly? If you were to do it over again...? (I know that this isn't so much a question that can be definitively answered, but I'm happy to reward insight and useful resources (not the Microsoft documentation, but blog posts are welcome) with upvotes.) UPDATE A couple of items of note: -We are not currently using OWA (currently only the admins), but it may become more of a consideration with iOS devices. -We do have a small number of Blackberries in the environment (< 10%). -In addition to the standard Exchange connectors, we have a third-party connector for Captaris RightFax integration.

    Read the article

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, haproxy & apache. I've been reading great things about nginx and haproxy. People tend to choose one or the other but I like both. Haproxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch). But I'd like to keep nginx for redirecting non-https to https among other things right at the point of entry to the cluster. On the other hand, nginx is a lot faster for serving static content and would reduce the load on the powerful apache which loves to eat a lot of RAM! Here is my planned setup: Load balancer: nginx listens on port 80/443 and proxy_forwards to haproxy on 8080 on the same server to load balance between the multiple nodes. Nodes: nginx on the node listens to requests coming from haproxy on 8080; if the content is static, serve it. But if it's a backend script (in my case PHP), proxy forward to apache2 on the same node server listening on a different port number. Technically this setup works, but my concern is whether having the requests go through several proxies is going to slow them down. Most of the requests will be PHP requests as the backends are services (which means going from nginx - haproxy - nginx - apache). Thoughts? Cheers
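    For reference, a minimal sketch of the two proxy hops described above (ports, names and backend addresses are placeholders, not taken from the question):

      # nginx on the load balancer: terminate 80/443 and hand everything to haproxy
      server {
          listen 80;
          location / {
              proxy_pass http://127.0.0.1:8080;       # haproxy on the same machine
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
          }
      }

      # haproxy: balance across the node-level nginx instances
      frontend ft_web
          bind *:8080
          default_backend webnodes
      backend webnodes
          balance leastconn
          server node1 10.0.0.11:8080 check
          server node2 10.0.0.12:8080 check

    Each extra hop typically adds well under a millisecond on a LAN, so the chain itself is rarely the bottleneck compared with the PHP work behind it.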

    Read the article

  • Tool to Save a Range of Disk Clusters to a File

    - by Synetech inc.
    Hi, Yesterday I deleted a (fragmented) archive file only to find that it did not extract correctly, so I was left stranded. Fortunately there was not much space free on the drive, so most of the space marked as free was from the now-deleted archive. I pulled up a disk editor and—painfully—managed to get a list of cluster ranges from the FAT that were marked as unused. My task then was to save these ranges of clusters to files so that I could examine them to try to determine which parts were from the archive and recombine them to attempt to restore the deleted file. This turned out to be a huge pain in the butt because the disk editor did not have the ability to select a range of clusters, so I had to navigate to the start of each cluster and hold down Ctrl+Shift+PgDn until I reached the end of the range (which usually took forever!) I did a quick Google search to see if I could find a command-line tool (preferably with Windows and DOS versions) that would allow me to issue commands such as: SAVESECT -c 0xBEEF 0xCAFE FOO.BAR ::save clusters 0xBEEF-0xCAFE to FOO.BAR SAVESECT -s 1111 9876 BAZ.BIN ::save sectors 1111-9876 to BAZ.BIN Sadly my search came up empty. Any ideas? Thanks!
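    For what it's worth, a sketch of how dd (from Cygwin, a Linux boot CD, or similar) could dump a raw range to a file; the device name, sector size and cluster size below are assumptions, not values from the question:

      # save sectors 1111-9876 (512-byte sectors assumed) to BAZ.BIN
      dd if=/dev/sda of=BAZ.BIN bs=512 skip=1111 count=$((9876 - 1111 + 1))

      # clusters: use the cluster size as the block size (4 KB assumed); note that FAT
      # cluster numbers are relative to the start of the data area, so that offset
      # would still have to be added to skip= -- it is left out of this sketch
      dd if=/dev/sda of=FOO.BAR bs=4096 skip=$((0xBEEF)) count=$((0xCAFE - 0xBEEF + 1))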

    Read the article

  • 10GE network: Is it still deadly expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I'm going to have about 16 nodes which can live with 1G ports, but I really want to have 10GE on the file server & central node. It's all local, so no need for cables longer than 3-5m. And of course I want to spend as little money as possible (not going to spend more than the whole cluster costs) :-) What are my options? 1) Legacy solution is to take some 24-48 port 1GE switch, and connect to the file/central nodes via 4-8 aggregated links. This will work I guess, cost is very acceptable, but I am not sure if it's ok to use that many aggregated links. And of course it would be hard to double bandwidth when needed... :-D 2) Switch with several 10GE uplink 'ports'. As far as I see, they all require modules which cost about $1000, so I will need 4 10G modules, and 2 10GE cards... Smells like way more than $5000+... 3) Connect the file & central node via 2 10G cards directly, and put 4 quad-port 1GE NICs on the fileserver. I am saving on 2 10G modules and a switch, the fileserver will have to do packet routing, but it's still gonna have a lot of CPU left :-) 4) Any other options? Infiniband? 5) Do Myrinet adapters work fine? I guess there are no cheaper options? 6) Hmm... Scrap the fileserver, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...
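    On option 1, a sketch of Linux link aggregation (802.3ad/LACP) for the file server, assuming four slave NICs, the ifenslave package, and a switch that supports LACP (interface names and addresses are placeholders):

      # /etc/network/interfaces fragment
      auto bond0
      iface bond0 inet static
          address 10.0.0.10
          netmask 255.255.255.0
          bond-mode 802.3ad
          bond-miimon 100
          bond-slaves eth1 eth2 eth3 eth4

    Keep in mind that with LACP a single TCP stream still tops out at one link's speed; the aggregate only helps across many concurrent flows, which is usually fine for a 16-node cluster hitting one file server.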

    Read the article

  • Installing and running two postgresql versions on different ports (or two instances of same server)

    - by Andrius
    I have postgresql 9.1 installed on my machine (Ubuntu). I need another postgresql server that would run next to the old one. Exact version does not matter, but I'm thinking of using the 9.2 version. How could I properly install and run another postgresql version without screwing up the old one (like upgrading it)? Those versions would run independently on different ports, the old one on 5432 and the new one on 5433 for example. The reason I need this is for two OpenERP versions' databases. If I run two OpenERP servers (with different versions) on a single postgresql port, it crashes because the new OpenERP version detects the old version's database and tries to run it, but it crashes because it uses a different schema. P.S. Or maybe I could just run the same postgresql server on two ports? Update So far I tried this: /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2 It created a new cluster. I changed the port to 5433 in the new cluster's postgresql.conf file. Then ran this: /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start I got the response server starting. But when I tried to connect to the new cluster's template database with: psql template1 -p 5433 I got this error: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5433"? Also, now when I try to stop the server with: /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start I get this error: pg_ctl: PID file "main2/postmaster.pid" does not exist Is server running? So I don't understand if the server is running and what I'm missing here. Update Found what was wrong. Stupid me. I didn't notice that when I changed the port in the .conf file, that line was already commented out. So I actually didn't change anything the first time, but thought I did, and it used the default 5432 port.
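    A condensed sketch of the sequence that ends up working, assuming the same 9.1 binaries and the main2 data directory used above:

      /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2
      # edit main2/postgresql.conf: uncomment the port line and set it, e.g.
      #   port = 5433
      /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start
      psql -p 5433 template1                       # connect to the second cluster
      /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 stop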

    Read the article

  • Heartbeat/DRBD failover didn't work as expected. How do I make the failover more robust?

    - by Quinn Murphy
    I had a scenario where a DRBD-heartbeat setup had a failed node but did not fail over. What happened was the primary node had locked up, but didn't go down directly (it was inaccessible via ssh or with the nfs mount, but it could be pinged). The desired behavior would have been to detect this and fail over to the secondary node, but it appears that since the primary didn't go fully down (there is a dedicated network connection from server to server), heartbeat's detection mechanism didn't pick up on that and therefore didn't fail over. Has anyone seen this? Is there something that I need to configure to have more robust cluster failover? DRBD seems to otherwise work fine (had to resync when I rebooted the old primary), but without good failover, its use is limited. heartbeat 3.0.4 drbd84 RHEL 6.1 We are not using Pacemaker nfs03 is the primary server in this setup, and nfs01 is the secondary. ha.cf # Heartbeat Logging logfacility daemon udpport 694 ucast eth0 192.168.10.47 ucast eth0 192.168.10.42 # Cluster members node nfs01.openair.com node nfs03.openair.com # Heartbeat communication timing. # Sets the triggers and pulse time for swapping over. keepalive 1 warntime 10 deadtime 30 initdead 120 #fail back automatically auto_failback on and here is the haresources file: nfs03.openair.com IPaddr::192.168.10.50/255.255.255.0/eth0 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 nfs nfslock
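    A commonly suggested hardening step (a sketch, not from the question; addresses and paths are placeholders) is to give heartbeat a second communication path plus a ping node, so it can tell a hung peer apart from a broken link:

      # ha.cf additions
      # second heartbeat path over the dedicated server-to-server link
      ucast eth1 10.0.0.2
      # a reference IP outside the pair (e.g. the default gateway) that both nodes ping
      ping 192.168.10.254
      # run ipfail so a node that loses connectivity hands over its resources
      respawn hacluster /usr/lib64/heartbeat/ipfail

    Note that heartbeat on its own only notices missing heartbeats; catching a node that still answers pings but has stopped serving NFS generally needs resource-level monitoring, which is one of the main reasons to move to Pacemaker.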

    Read the article

  • Should an HA failover occur in this scenario?

    - by joeqwerty
    I'm running vSphere 5 in an HA cluster across two hosts (vsphereA and vsphereB). I have the HA cluster configured for host monitoring and datastore heartbeat monitoring with admission control disabled (hopefully I rightfully understand that datastore heartbeat monitoring prevents inadvertent and unwanted HA failovers due to management network isolation). Each host has a single connection to a dedicated iSCSI network and iSCSI target (no MPIO). All vmdk's for all VM's exist on the iSCSI datastore. As a test of HA I disconnected the iSCSI connection on vsphereB and was surprised to see that the running VM's on vsphereB continued to run on vsphereB. The powered off VM's were showing as inaccessible (which I expected due to the fact that they weren't running and the connection from vsphereB to the iSCSI target was severed) but the running VM's continued to run and continued to be "owned" by vsphereB. I expected to see an HA failover occur for those VM's and expected to see them "owned" by vsphereA after the HA failover (which didn't occur). I'm at a loss to understand why an HA failover didn't occur for those VM's. Am I misunderstanding in which cases an HA failover should occur?

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of mssql 2005 and am using CA XOSoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 sp2 x64. Same patch levels on all servers. This setup has worked great for several months until we recently restricted the RPC ports on both nodes of the master(5000 - 6000 using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for sql windows authentication and NETLOGON Event ID: 5719: This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. We also see group policies failing to update and cluster file shares go offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers rebooted, but the problems persist. The domain controllers are not showing any errors. Running dcdiag and netdiag shows everything is fine. We have noticed that the XOSoft service ws_rep.exe is using a lot of handles(8 - 9k), about the same number that sqlserver is using. As soon as xosoft replication is stopped the login errors cease and everything functions correctly. I have opened a ticket with CA for XOSoft, but I'm not sure that the problem is actually xosoft, but that it is the one bringing the problem to light. I'm looking for tips on debugging RPC problems. Specifically on limiting the ports and then reverting the changes.

    Read the article

  • Creating a Jenkins build farm in a hands-off manner?

    - by user183394
    My colleague and I have set up and run Jenkins on a KVM guest running Ubuntu 12.04 with good results for a while now. We are thinking about deploying a cluster of Jenkins CI hosts in master/slave configuration, with the libvirt slave plugin to keep our hardware count low. Our environment is strictly Linux (CentOS, Scientific Linux, Fedora, and Ubuntu). Both of us are competent in setting up large clusters. We typically use tools like cobbler + a configuration management tool (Puppet, Chef, and the like) to set up a large number of machines (physical and/or virtual) hands-off (hundreds of nodes in less than an hour is typical). We would like to do the same for nodes running Jenkins. But the step-by-step guide doesn't give us any clues in this regard. I did see a Multi-slave config plugin. But, being used to dealing with hundreds or more machines completely hands-off, clicking through the UI for many machines just doesn't feel right. Can someone point us to a reference that talks about how to set up a large cluster of Jenkins CI hosts in a more hands-off way?
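    One hands-off route (a sketch; hostnames and node names are placeholders) is to drive node creation through the Jenkins CLI, feeding it slave definitions rendered by your configuration management tool:

      # dump an existing, hand-configured slave as a starting template
      java -jar jenkins-cli.jar -s http://jenkins-master:8080/ get-node reference-slave > slave-template.xml
      # create new slaves from generated XML (loop this from Puppet/Chef/cobbler post-install)
      java -jar jenkins-cli.jar -s http://jenkins-master:8080/ create-node build-slave-01 < build-slave-01.xml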

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for Gzip. We set up a new distribution with a custom origin pointing back to our application servers which are behind a elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this set up, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Is there any additional settings in order to use an ELB as your custom origin? In the docs, it references this as a viable solution. It appears when we point the distribution to a single server in our production cluster, cloudfront properly caches our assets. Is it possible that the sticky sessions cookie and the subsequent header that gets added by it could be an issue? Cache-Control: no-cache="set-cookie" //Added by load balancer Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers: curl -I http://static.quick-cdn.com/css/9850999.css HTTP/1.0 200 OK Accept-Ranges: bytes Cache-Control: max-age=3700 Cache-Control: no-cache="set-cookie" Content-Length: 23038 Content-Type: text/css Date: Thu, 12 Apr 2012 23:03:52 GMT Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT Server: Apache/2.2.17 (Ubuntu) Vary: Accept-Encoding X-Cache: RefreshHit from cloudfront X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A== Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront) Connection: close
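    One quick check (a sketch; the ELB and instance hostnames are placeholders) is to compare the headers returned through the load balancer with those from a single instance, to confirm that it is the ELB's sticky-session cookie adding the no-cache="set-cookie" directive:

      curl -sI http://my-elb-123456.us-east-1.elb.amazonaws.com/css/9850999.css | grep -iE 'cache-control|set-cookie'
      curl -sI http://ec2-203-0-113-10.compute-1.amazonaws.com/css/9850999.css | grep -iE 'cache-control|set-cookie'

    If the directive only shows up via the ELB, disabling stickiness on the listener CloudFront uses (or serving static assets through a separate, non-sticky ELB) is one workaround to try.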

    Read the article

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where 1 of the 2 replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening. It should fail over to the brick that is still online, but this hasn't been the case. I suspect that this is due to configuration issue. Here is a description of the system: 2 gluster servers on dedicated hardware (gfs0, gfs1) 8 client servers on vms (client1, client2, client3, ... , client8) Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients are mounted with the following entry in /etc/fstab: /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0 Here is the content of /etc/glusterfs/datavol.vol: volume datavol-client-0 type protocol/client option transport-type tcp option remote-subvolume /data/datavol option remote-host gfs0 end-volume volume datavol-client-1 type protocol/client option transport-type tcp option remote-subvolume /data/datavol option remote-host gfs1 end-volume volume datavol-replicate-0 type cluster/replicate subvolumes datavol-client-0 datavol-client-1 end-volume volume datavol-dht type cluster/distribute subvolumes datavol-replicate-0 end-volume volume datavol-write-behind type performance/write-behind subvolumes datavol-dht end-volume volume datavol-read-ahead type performance/read-ahead subvolumes datavol-write-behind end-volume volume datavol-io-cache type performance/io-cache subvolumes datavol-read-ahead end-volume volume datavol-quick-read type performance/quick-read subvolumes datavol-io-cache end-volume volume datavol-md-cache type performance/md-cache subvolumes datavol-quick-read end-volume volume datavol type debug/io-stats option count-fop-hits on option latency-measurement on subvolumes datavol-md-cache end-volume The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab: gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0 This was the entry for half of the clients, while the other half had: gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0 The results were exactly the same as the above configuration. Both configs connect everything just fine, they just don't fail over. Any help would be appreciated.
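    For reference, two settings that are often suggested for this symptom (a sketch; the volume name matches the one above, the timeout value is just an example). The idea is to fetch the volfile from a server with a fallback rather than a static local copy, and to lower the ping timeout so a dead brick is declared faster than the 42-second default:

      gluster volume set datavol network.ping-timeout 10
      # /etc/fstab
      gfs0:/datavol  /data  glusterfs  defaults,_netdev,backupvolfile-server=gfs1  0 0

    With a replica 2 volume the clients should carry on against the surviving brick once that timeout expires, so hangs much longer than the timeout usually mean the client never learned about both bricks (for example, via a stale static .vol file).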

    Read the article

  • Diagnosing Random Network Lag

    - by uesp
    I'm having trouble diagnosing some random lag on a 6 server LAMP cluster serving a MediaWiki site. While we're serving some 100 pages/sec the servers themselves are running fine with less than 0.5 load, no locked processes, no paging, no errors being logged, etc.... Lag is present on all servers and is random: one minute its fine the next it's there. DNS lookups on the servers are randomly slow. For example time nslookup google.com varies randomly from a few milliseconds to several seconds and sometimes times out entirely. While we use IP addresses internally on the cluster this may be a symptom of the root issue. We are not running our own DNS server. The Apache server-status pages randomly lag or time out. Benchmarking using ab between servers shows a few loads sometimes take 3000 ms (almost exactly). Benchmarking server-status on the local server itself usually shows no issue (it showed a lag only once among a few hundred tests). The servers are sitting behind a switch and a firewall which I don't have any access to so I don't know their setup or status. While we are under heavier than normal load a 2 Mbps incoming and 20 Mbps outgoing traffic shouldn't be stressing the switch or firewall should it? My feeling is that it is the switch/firewall or something above them in the ISP like their DNS but can't confirm it. I need some other tests or methods of diagnosing this lag to try and narrow down the ultimate cause.
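    A couple of quick probes (a sketch; the internal address is a placeholder) that can help separate name resolution from the network path when the lag is this intermittent:

      # time a burst of lookups to see how often resolution stalls or times out
      for i in $(seq 1 50); do dig google.com +time=2 +tries=1 | grep 'Query time'; sleep 1; done
      # per-hop loss/latency toward an external host and toward another cluster member
      mtr --report --report-cycles 100 8.8.8.8
      mtr --report --report-cycles 100 192.0.2.10

    Responses that take almost exactly 3000 ms are consistent with a timeout-and-retry somewhere (3 s is a classic DNS/SYN retransmit interval) rather than genuine load, which again points at the switch, firewall, or upstream DNS rather than the servers.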

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is, some files can be in much more demand than other files. The scenario is like this: I have uploaded my birthday video and shared it with all of my friends; I have uploaded it to myproject.com and it was stored on one of the clusters, which has a 100mbit connection. The problem is, once all of my friends want to download the file, they can't download it, since the bottleneck here is 100mbit, which is 15MB per second, but I have 1000 friends and they can only download 15KB per second each. I am not taking into account that the HDD is serving the same files. My network infrastructure is as follows: a 1Gbit server (client) connected to 4 storage-server nodes that have 100mbit connections. The 1Gbit server can handle the 1000 users' traffic if one of the storage nodes can stream more than 15MB per second to my 1Gbit (client) server, and visitors will stream directly from the client server instead of the storage nodes. I can do it by replicating the file onto 2 nodes, but I don't want to replicate all files uploaded to my network since that would cost much more. So I need a cloud-based system which will push files onto replicated nodes automatically when demand for those files is high, and when the demand is low, delete them from the other nodes so they stay on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of them. But I need the cluster software to do it automatically. Any solutions? (Other than recommending Amazon S3.)

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" of a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to send information back to the monitoring server on. After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!! Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there the need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to building a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!

    Read the article

  • Mixing both local and nonlocal addresses on three switches

    - by klew
    I have four computers that have nonlocal addresses like 150.X.X.X. Now I am also getting a few more computers that should only be accessible through a gateway (it will be a computing cluster), and their addresses are 10.0.0.X. I also wanted to include those four older computers in this new cluster, but I want them to remain accessible from the internet on nonlocal addresses (so I would like to set them up on both 150.X.X.X and 10.0.0.X addresses - I've set this up as interface eth0:0 since I have only one NIC). The new computers have their own switch and the old computers also have their own switch. Both of them are connected to another (third) switch. The problem is that the old computers see each other (I can ping them), and the new computers also see each other, but I can't ping an old computer from a new computer and vice versa. However, pinging the nonlocal addresses works as expected. I looked into the switch configuration and didn't find anything useful. I have no idea what I missed here. Can somebody help? All computers run Ubuntu Server 10.04.
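    For reference, a sketch of the alias-interface setup on one of the older machines under Ubuntu 10.04 (all addresses are placeholders following the ranges above):

      # /etc/network/interfaces
      auto eth0
      iface eth0 inet static
          address 150.0.0.10            # placeholder for the real public address
          netmask 255.255.255.0
          gateway 150.0.0.1
      auto eth0:0
      iface eth0:0 inet static
          address 10.0.0.10
          netmask 255.255.255.0
          # no gateway here; 10.0.0.0/24 is reached directly on the local segment

    If a setup like this is already in place on both sides, the one-way reachability is worth chasing at layer 2: a VLAN or port-isolation setting on the third switch that keeps the 10.0.0.X traffic from crossing between the two edge switches would produce exactly this symptom.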

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to backup files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the netapp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted. 16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000 16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000 Running wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The Resource Credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output: FAS2020-1 options ndmp ndmpd.access all ndmpd.authtype challenge ndmpd.connectlog.enabled on ndmpd.enable on ndmpd.ignore_ctime.enabled off ndmpd.offset_map.enable on ndmpd.password_length 16 ndmpd.preferred_interface disable ndmpd.tcpnodelay.enable off
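    One thing worth ruling out (a sketch; the username is a placeholder, and this assumes Data ONTAP 7-mode as on a FAS2020): with ndmpd.authtype challenge, non-root NDMP users authenticate with an NDMP-specific password generated on the filer, not their login password, so the credentials saved in Backup Exec for the restore job should come from:

      ndmpd password backupuser        # prints the NDMP password to store in Backup Exec
      options ndmpd.authtype challenge

    If backup and restore use different credential entries in Backup Exec, only the one holding the generated password will pass the challenge.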

    Read the article

  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.
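    One detail worth checking in the configuration above (a sketch of the relevant fragment, not a confirmed fix): the Engine still declares defaultHost="localhost", but the only <Host> left after the reinstall is www.rebootradio.nu, so requests whose Host header doesn't match that name have no default host to fall back to:

      <Engine name="Catalina" defaultHost="www.rebootradio.nu">
        <Host name="www.rebootradio.nu" appBase="webapps" unpackWARs="true" autoDeploy="true">
          <Alias>rebootradio.nu</Alias>
          <Context path="" docBase="D:/services/http/rebootradio.nu" reloadable="true"/>
        </Host>
      </Engine>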

    Read the article

  • Cloning a WebCenter Portal Managed Server

    - by Maiko Rocha
    I had to run some tests on a WebCenter Portal application deployed in a cluster. I've got a development VM with WebCenter PS4 (this also works on PS5) and I was trying to figure out how could I easily add a new managed server to my single-node domain, and make it a cluster. Creating the machine and cluster are a piece of cake, you can do it pretty quick through WLS Console. Now, you'd guess that using the clone option on WLS Console would do the magic of cloning an existing instance, right? Well, it does, but all you get is an "empty" managed server: with no target libraries.  It was a good surprise to find that WebCenter provides a way of cloning an existing WebCenter Portal managed server through a simple WLST command: cloneWebCenterManagedServer  This is a screenshot of my starting point. I want to clone WC_CustomPortal managed server: These are the steps to clone my WC_CustomPortal managed server: 1. In the command line, invoke WLST. It should be on <ORACLE_HOME_for_component>/common/bin/wlst.sh. In my case, it is ./product/Middleware/WebCenterPortal/common/bin/wlst.sh 2. Connect to the Admin Server:  connect ('<wls_admin_username>','<password>','t3://<server>:<port>') 3. Execute the following command: wls:/webcenter/serverConfig> cloneWebCenterManagedServer(baseManagedServer='WC_CustomPortal', newManagedServer='WC_CustomPortal2', newManagedServerPort=8893, verbose=1) I've turned on verbose output on purpose so I could see what the script was doing while executing. This is the output:  [...] Creating the Managed Server "WC_CustomPortal2" MBean type Server with name WC_CustomPortal2 has been created successfully. Targeting the library "oracle.bi.adf.model.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.adf.view.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.adf.webcenter.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.wsm.seedpolicies#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.jsp.next#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.dconfig-infra#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "orai18n-adf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.dconfigbeans#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.pwdgen#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.jrf.system.filter" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.domain#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.businesseditor#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.management#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.domain.webapp#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jsf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jstl#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "UIX#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "ohw-rcf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "ohw-uix#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the 
library "oracle.adf.desktopintegration.model#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.desktopintegration#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.jbips#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.composer#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.skin#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.composer#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework.core#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.sdp.client#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.soa.workflow.wc#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.soa.worklist.webapp#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.ucm.ridc.app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "p13n-app-lib-base#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "p13n-core-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jaxrs-framework-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jersey-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "wcps-util-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "wcps-services-client-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "content-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "content-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework.view#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.forum.dependency#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.jive.dependency#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.spaces.fwk#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.activitygraph.lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the datasource "mds-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the datasource "WebCenter-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the datasource "Activities-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the application "wsil-wls" to the Managed Server "WC_CustomPortal2" Targeting the application "DMS Application#11.1.1.1.0" to the Managed Server "WC_CustomPortal2" Targeting the application "ViewHandlerOverride_webapp1#V2.0" to the Managed Server "WC_CustomPortal2" Targeting the application "ViewHandlerOverride_application1#V2.0" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JRF Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JPS Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "ODL-Startup" 
to the Managed Server "WC_CustomPortal2" Targeting the startup class "Audit Loader Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "AWT Application Context Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JMX Framework Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "Web Services Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JOC-Startup" to the Managed Server "WC_CustomPortal2" Targeting the startup class "DMS-Startup" to the Managed Server "WC_CustomPortal2" Targeting the shutdown class "JOC-Shutdown" to the Managed Server "WC_CustomPortal2" Targeting the shutdown class "DMSShutdown" to the Managed Server "WC_CustomPortal2" Validating changes ... Validated the changes successfully [...] And this is the newly created WC_CustomPortal2 managed server showing up on Weblogic console:  Here is the full reference to WebCenter Portal Custom WLST Commands. Special thanks to Todd Vender for pointing this one out! :-)

    Read the article

  • What is bondib1 used for on SPARC SuperCluster with InfiniBand, Solaris 11 networking & Oracle RAC?

    - by user12620111
    A co-worker asked the following question about a SPARC SuperCluster InfiniBand network: > on the database nodes the RAC nodes communicate over the cluster_interconnect. This is the > 192.168.10.0 network on bondib0. (according to ./crs/install/crsconfig_params NETWORKS> setting) > What is bondib1 used for? Is it a HA counterpart in case bondib0 dies? This is my response: Summary: bondib1 is currently only being used for outbound cluster interconnect interconnect traffic. Details: bondib0 is the cluster_interconnect $ oifcfg getif            bondeth0  10.129.184.0  global  public bondib0  192.168.10.0  global  cluster_interconnect ipmpapp0  192.168.30.0  global  public bondib0 and bondib1 are on 192.168.10.1 and 192.168.10.2 respectively. # ipadm show-addr | grep bondi bondib0/v4static  static   ok           192.168.10.1/24 bondib1/v4static  static   ok           192.168.10.2/24 Hostnames tied to the IPs are node1-priv1 and node1-priv2  # grep 192.168.10 /etc/hosts 192.168.10.1    node1-priv1.us.oracle.com   node1-priv1 192.168.10.2    node1-priv2.us.oracle.com   node1-priv2 For the 4 node RAC interconnect: Each node has 2 private IP address on the 192.168.10.0 network. Each IP address has an active InfiniBand link and a failover InfiniBand link. Thus, the 4 node RAC interconnect is using a total of 8 IP addresses and 16 InfiniBand links. bondib1 isn't being used for the Virtual IP (VIP): $ srvctl config vip -n node1 VIP exists: /node1-ib-vip/192.168.30.25/192.168.30.0/255.255.255.0/ipmpapp0, hosting node node1 VIP exists: /node1-vip/10.55.184.15/10.55.184.0/255.255.255.0/bondeth0, hosting node node1 bondib1 is on bondib1_0 and fails over to bondib1_1: # ipmpstat -g GROUP       GROUPNAME   STATE     FDT       INTERFACES ipmpapp0    ipmpapp0    ok        --        ipmpapp_0 (ipmpapp_1) bondeth0    bondeth0    degraded  --        net2 [net5] bondib1     bondib1     ok        --        bondib1_0 (bondib1_1) bondib0     bondib0     ok        --        bondib0_0 (bondib0_1) bondib1_0 goes over net24 # dladm show-link | grep bond LINK                CLASS     MTU    STATE    OVER bondib0_0           part      65520  up       net21 bondib0_1           part      65520  up       net22 bondib1_0           part      65520  up       net24 bondib1_1           part      65520  up       net23 net24 is IB Partition FFFF # dladm show-ib LINK         HCAGUID         PORTGUID        PORT STATE  PKEYS net24        21280001A1868A  21280001A1868C  2    up     FFFF net22        21280001CEBBDE  21280001CEBBE0  2    up     FFFF,8503 net23        21280001A1868A  21280001A1868B  1    up     FFFF,8503 net21        21280001CEBBDE  21280001CEBBDF  1    up     FFFF On Express Module 9 port 2: # dladm show-phys -L LINK              DEVICE       LOC net21             ibp4         PCI-EM1/PORT1 net22             ibp5         PCI-EM1/PORT2 net23             ibp6         PCI-EM9/PORT1 net24             ibp7         PCI-EM9/PORT2 Outbound traffic on the 192.168.10.0 network will be multiplexed between bondib0 & bondib1 # netstat -rn Routing Table: IPv4   Destination           Gateway           Flags  Ref     Use     Interface -------------------- -------------------- ----- ----- ---------- --------- 192.168.10.0         192.168.10.2         U        16    6551834 bondib1   192.168.10.0         192.168.10.1         U         9    5708924 bondib0   There is a lot more traffic on bondib0 than bondib1 # /bin/time snoop -I bondib0 -c 100 > /dev/null Using device ipnet/bondib0 (promiscuous mode) 100 packets captured real        
4.3 user        0.0 sys         0.0 (100 packets in 4.3 seconds = 23.3 pkts/sec) # /bin/time snoop -I bondib1 -c 100 > /dev/null Using device ipnet/bondib1 (promiscuous mode) 100 packets captured real       13.3 user        0.0 sys         0.0 (100 packets in 13.3 seconds = 7.5 pkts/sec) Half of the packets on bondib0 are outbound (from self). The remaining packet are split evenly, from the other nodes in the cluster. # snoop -I bondib0 -c 100 | awk '{print $1}' | sort | uniq -c Using device ipnet/bondib0 (promiscuous mode) 100 packets captured   49 node1-priv1.us.oracle.com   24 node2-priv1.us.oracle.com   14 node3-priv1.us.oracle.com   13 node4-priv1.us.oracle.com 100% of the packets on bondib1 are outbound (from self), but the headers in the packets indicate that they are from the IP address associated with bondib0: # snoop -I bondib1 -c 100 | awk '{print $1}' | sort | uniq -c Using device ipnet/bondib1 (promiscuous mode) 100 packets captured  100 node1-priv1.us.oracle.com The destination of the bondib1 outbound packets are split evenly, to node3 and node 4. # snoop -I bondib1 -c 100 | awk '{print $3}' | sort | uniq -c Using device ipnet/bondib1 (promiscuous mode) 100 packets captured   51 node3-priv1.us.oracle.com   49 node4-priv1.us.oracle.com Conclusion: bondib1 is currently only being used for outbound cluster interconnect interconnect traffic.

    Read the article

  • What's the best way to refactor this Rails controller?

    - by Robert DiNicolas
    I'd like some advice on how to best refactor this controller. The controller builds a page of zones and modules. Page has_many zones, zone has_many modules. So zones are just a cluster of modules wrapped in a container. The problem I'm having is that some modules may have some specific queries that I don't want executed on every page, so I've had to add conditions. The conditions just test if the module is on the page, if it is the query is executed. One of the problems with this is if I add a hundred special module queries, the controller has to iterate through each one. I think I would like to see these module condition moved out of the controller as well as all the additional custom actions. I can keep everything in this one controller, but I plan to have many apps using this controller so it could get messy. class PagesController < ApplicationController # GET /pages/1 # GET /pages/1.xml # Show is the main page rendering action, page routes are aliased in routes.rb def show #-+-+-+-+-Core Page Queries-+-+-+-+- @page = Page.find(params[:id]) @zones = @page.zones.find(:all, :order => 'zones.list_order ASC') @mods = @page.mods.find(:all) @columns = Page.columns # restful params to influence page rendering, see routes.rb @fragment = params[:fragment] # render single module @cluster = params[:cluster] # render single zone @head = params[:head] # render html, body and head #-+-+-+-+-Page Level Json Conversions-+-+-+-+- @metas = @page.metas ? ActiveSupport::JSON.decode(@page.metas) : nil @javascripts = @page.javascripts ? ActiveSupport::JSON.decode(@page.javascripts) : nil #-+-+-+-+-Module Specific Queries-+-+-+-+- # would like to refactor this process @mods.each do |mod| # Reps Module Custom Queries if mod.name == "reps" @reps = User.find(:all, :joins => :roles, :conditions => { :roles => { :name => 'rep' } }) end # Listing-poc Module Custom Queries if mod.name == "listing-poc" limit = params[:limit].to_i < 1 ? 10 : params[:limit] PropertyEntry.update_from_listing(mod.service_url) @properties = PropertyEntry.all(:limit => limit, :order => "city desc") end # Talents-index Module Custom Queries if mod.name == "talents-index" @talent = params[:type] @reps = User.find(:all, :joins => :talents, :conditions => { :talents => { :name => @talent } }) end end respond_to do |format| format.html # show.html.erb format.xml { render :xml => @page.to_xml( :include => { :zones => { :include => :mods } } ) } format.json { render :json => @page.to_json } format.css # show.css.erb, CSS dependency manager template end end # for property listing ajax request def update_properties limit = params[:limit].to_i < 1 ? 10 : params[:limit] offset = params[:offset] @properties = PropertyEntry.all(:limit => limit, :offset => offset, :order => "city desc") #render :nothing => true end end So imagine a site with a hundred modules and scores of additional controller actions. I think most would agree that it would be much cleaner if I could move that code out and refactor it to behave more like a configuration.

    Read the article

  • MySQL Connect 8 Days Away - Replication Sessions

    - by Mat Keep
    Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, it's now the turn of MySQL Replication - another technology at the heart of scaling and high availability for MySQL. Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release:
- Global Transaction IDs + HA utilities for self-healing clusters (yes, both automatic failover and manual switchover are available!)
- Crash-safe slaves and binlog
- Binlog Group Commit and Multi-Threaded Slaves for high performance
- Replication Event Checksums and Time-Delayed Replication
- and many more
There are a number of sessions dedicated to learning more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary:
Saturday 29th, 13.00 - Replication Tips and Tricks, Mats Kindahl. In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover.
Saturday 29th, 17.30 - Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann. This session showcases the new replication features, including high performance (group commit, multithreaded slave), high availability (crash-safe slaves, failover utilities), flexibility and usability (global transaction identifiers, annotated row-based replication [RBR]), and data integrity (event checksums).
Saturday 29th, 19.00 - MySQL Replication Birds of a Feather. In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to give feedback on how these can be further enhanced and to share any additional replication requirements you have to further scale your critical MySQL-based workloads.
Sunday 30th, 10.15 - Hands-On Lab, MySQL Replication, Luis Soares and Sven Sandberg. But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, its architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases.
Sunday 30th, 13.15 - Hands-On Lab, MySQL Utilities, Chuck Bell. Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers.
Sunday 30th, 14.45 - Eliminating Downtime with MySQL Replication, Luis Soares. The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the 5.6 feature set, result in a self-healing data infrastructure. By the end of the session, attendees will be familiar with the new high-availability features in the MySQL 5.6 release and how to make use of them to protect and grow their business.
Sunday 30th, 17.45 - Scaling for the Web and the Cloud with MySQL Replication, Luis Soares. In a replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance.
So how can you make sure you don't miss out? The good news is that registration is still open ;-) And just to whet your appetite, listen to the on-demand webinar that presents an overview of MySQL 5.6 Replication.
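    If you want to experiment with some of these features before the conference, here is a minimal sketch of GTID-based replication with crash-safe slaves on MySQL 5.6. The host name, replication account, and password are placeholders assumed for illustration; they are not taken from the post.

      -- Assumed my.cnf options on both servers (MySQL 5.6):
      --   log-bin, log-slave-updates, gtid-mode=ON, enforce-gtid-consistency
      -- Crash-safe slave state kept in tables rather than files:
      --   master-info-repository=TABLE, relay-log-info-repository=TABLE

      -- On the master: create a replication account (name and password are placeholders)
      CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
      GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

      -- On the slave: point at the master using GTID auto-positioning
      -- (host name below is a placeholder); no binlog file/position is needed
      CHANGE MASTER TO
        MASTER_HOST = 'master1.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'repl_password',
        MASTER_AUTO_POSITION = 1;
      START SLAVE;

      -- Verify that the slave is connected and applying events
      SHOW SLAVE STATUS\G

    With auto-positioning in place, the HA utilities mentioned above (for example, mysqlfailover from MySQL Utilities) can promote a slave without anyone tracking binary log coordinates by hand.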

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let's have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned. SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = db_id('AdventureWorks2012')
The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected that index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except that instead of the various operations being generated from user requests, they are generated from system background requests.
This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but on a production system, if you see an index that is never getting any seeks, scans, or lookups, yet is constantly getting a ton of updates, it would more than likely be a good candidate for removal. You would not be getting much benefit from the index, yet it is incurring a cost on your system because it constantly has to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible.
One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be wise to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are only used by weekly/monthly reports run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
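    To make that analysis concrete, here is one possible query - a sketch only, with arbitrary filters - that joins the DMV back to the standard sys.indexes and sys.objects catalog views to list nonclustered indexes that have been written to but never read since the last restart. Note that indexes with no activity at all will not have a row in this DMV, so a complete review also needs an outer join from sys.indexes.

      -- Sketch: candidate unused indexes in the current database (filters are arbitrary)
      SELECT o.name AS table_name,
             i.name AS index_name,
             us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
      FROM sys.dm_db_index_usage_stats AS us
      JOIN sys.indexes AS i
        ON i.object_id = us.object_id AND i.index_id = us.index_id
      JOIN sys.objects AS o
        ON o.object_id = i.object_id
      WHERE us.database_id = DB_ID()              -- current database only
        AND o.is_ms_shipped = 0                   -- ignore system objects
        AND i.type_desc = 'NONCLUSTERED'          -- don't flag clustered indexes or heaps
        AND us.user_seeks + us.user_scans + us.user_lookups = 0
        AND us.user_updates > 0
      ORDER BY us.user_updates DESC;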
If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis (with the schedule depending on how frequently you perform operations that reset these statistics). You can then analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not - a minimal version of such a collection process is sketched below. For more information about this DMV, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA
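    The sketch referenced above assumes a permanent history table and any scheduler (SQL Server Agent, for example); the table and column names are illustrative, not from the original post.

      -- Sketch: persist periodic snapshots of the DMV so usage history survives restarts
      CREATE TABLE dbo.IndexUsageHistory
      (
          capture_date  DATETIME2  NOT NULL DEFAULT SYSDATETIME(),
          database_id   SMALLINT   NOT NULL,
          object_id     INT        NOT NULL,
          index_id      INT        NOT NULL,
          user_seeks    BIGINT     NOT NULL,
          user_scans    BIGINT     NOT NULL,
          user_lookups  BIGINT     NOT NULL,
          user_updates  BIGINT     NOT NULL
      );

      -- Run on a schedule (for example, daily) to capture a snapshot
      INSERT INTO dbo.IndexUsageHistory
          (database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates)
      SELECT database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates
      FROM sys.dm_db_index_usage_stats
      WHERE database_id = DB_ID('AdventureWorks2012');

    Comparing snapshots taken before and after each restart (or summing the deltas over time) gives a much fairer picture of index usage than a single point-in-time read.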

    Read the article

  • Oredev 2011 Trip Report

    - by arungupta
    Oredev held its seventh annual conference in the city of Malmo, Sweden last week. The name "Oredev" is a nod to the Oresund bridge that connects Malmo with Copenhagen. There were about 1000 attendees, with speakers from all over the world. The first two days were hands-on workshops and the next three days were sessions. There were different tracks such as Java, Windows 8, .NET, Smart Phones, Architecture, Collaboration, and Entrepreneurship. And then there was Xtra(ck), which had interesting sessions not directly related to technology.
I gave two slide-free talks in the Java track. The first one showed how to build an end-to-end Java EE 6 application using NetBeans and GlassFish. The complete instructions to build the application are explained in detail here. This 3-tier application used Java Persistence API, Enterprise JavaBeans, Servlet, Contexts and Dependency Injection, JavaServer Faces, and Java API for RESTful Web Services. The source code built during the session can be downloaded here (LINK TBD). The second session, slide-free again, showed how to take a Java EE 6 application into production on a GlassFish cluster. It explained how to: create a 2-instance GlassFish cluster; front-end it with a Web server and a load balancer; demonstrate session replication and failover; and monitor the application using JavaScript. The complete instructions for this session are available here.
Oredev has an interesting way of collecting attendee feedback: attendees drop a green, yellow, or red card in a bucket as they walk out of the session. Not everybody votes, but most do. In addition to the instantaneous feedback provided on Twitter, this mechanism provides a more coarse-grained feedback loop as well. The first talk had about 67 attendees (23 green and 7 yellow) and the second one had 22 (11 green and 11 yellow).
The speakers' dinner is a highlight of the conference. It is arranged in the historic city hall, and the mayor welcomed all the speakers. As you can see in the pictures, it is a very regal building with a lot of history behind it. Fortunately the dinner was a buffet with a much better variety, unlike last year when only black soup and geese were served - which was quite cultural, BTW ;-) The sauna at 85F, skinny dipping in the 35F ocean, and alternating between the two at Kallbadhus is always very Swedish. I also spent a short evening at a friend's house socializing with other speakers and attendees, drinking Glogg, and eating Pepparkakor. The welcome packet at the hotel also included cinnamon rolls, recommended with cold milk, for a little more taste of Swedish culture.
Something different at this conference was how artists from Image Think visually captured all the keynote speakers on whiteboards. Here are the images captured for Alexis Ohanian (Reddit co-founder and now running Hipmunk): Unfortunately I could not spend much time engaging with other speakers or attendees because I was busy preparing new hands-on lab material. But I was able to spend some time with Matthew McCullough, Michael Tiberg, Magnus Martensson, Mattias Karlsson, Corey Haines, Patrick Kua, Charles Nutter, Tushara, Pradeep, Shmuel, and several other folks. Here are a few pictures captured from the event: And the complete album here: Thank you Matthias, Emily, and Kathy for putting up a great show and giving me an opportunity to speak at Oredev. I hope to be back next year with a more vibrant representation of Java - the language and the ecosystem!

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >