Search Results

Search found 6412 results on 257 pages for 'chuck johnston admin'.


  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu and then choosing ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it). From this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties.

    Explanation

    The pooling parameters, and many others, are handled through MBeans that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean will combine the configuration deployed as part of the application with any overrides defined as -D environment commands on the JVM startup for the application server instance.

    Using WLST to Browse the Bean Values

    For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.

    Step 0: Before You Start

    You will need the following:

    - Access to the console on the machine that is running the server
    - The WebLogic admin username and password (I'll use weblogic/password as my example here - yours will be different)
    - The name of the deployed application (in this example FMWdh_application1)
    - The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg). This is based on the default path for your model project, so it should be fairly easy to work out.
    - The BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used)

    Step 1: Start the WLST Console

    To start at the beginning, you need to run the WLST command, but that needs a little setup.

    Change to the wlserver_10.3/server/bin directory under your Fusion Middleware home, e.g.:

      [oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin

    Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:

      [oracle@mymachine bin] source setWLSEnv.sh

    Start the WLST interactive console:

      [oracle@mymachine bin] java weblogic.WLST
      Initializing WebLogic Scripting Tool (WLST) ...
      Welcome to WebLogic Server Administration Scripting Shell
      Type help() for help on available commands
      wls:/offline>

    Step 2: Enter the WLST Commands

    Connect to the server:

      wls:> connect('weblogic','password')

    Change to the Custom root; this is where the AM pooling MBeans are registered:

      wls:> custom()

    Change to the bc4j MBean directory:

      wls:> cd ('oracle.bc4j.mbean.config')

    Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see one for the config entry itself, and then one each for Security, DB Connection and AM Pooling. So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on.

    The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the application name, the configuration you are using and the xcfg name:

    - First of all, narrow your list to just those directories returned from the ls() command that begin with oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications.
    - Now look for the correct application name, e.g. Application=FMWdh_application1.
    - The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg.
    - Finally, look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed.

    Now that you have identified the correct directory name, change to it (keep the name on one line of course - I've had to split it across lines here for clarity):

      wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
          type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
          oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
          Application=FMWdh_application1,
          oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')

    Now you can actually view the parameter values with a simple ls() command:

      wls:> ls()

    And here's the output, in which you can see the real-time values of the various pool settings:

      -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy
      -rw- AmpoolDoampooling true
      -rw- AmpoolDynamicjdbccredentials false
      -rw- AmpoolInitpoolsize 2
      -rw- AmpoolIsuseexclusive true
      -rw- AmpoolMaxavailablesize 40
      -rw- AmpoolMaxinactiveage 600000
      -rw- AmpoolMaxpoolsize 4096
      -rw- AmpoolMinavailablesize 2
      -rw- AmpoolMonitorsleepinterval 600000
      -rw- AmpoolResetnontransactionalstate true
      -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory
      -rw- AmpoolTimetolive 3600000
      -rw- AmpoolWritecookietoclient false
      -r-- ConfigMBean true
      -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl
      -rw- Doconnectionpooling false
      -rw- Dofailover false
      -rw- Initpoolsize 0
      -rw- Maxpoolcookieage -1
      -rw- Maxpoolsize 4096
      -rw- Poolmaxavailablesize 25
      -rw- Poolmaxinactiveage 600000
      -rw- Poolminavailablesize 5
      -rw- Poolmonitorsleepinterval 600000
      -rw- Poolrequesttimeout 30000
      -rw- Pooltimetolive -1
      -r-- ReadOnly false
      -rw- Recyclethreshold 10
      -r-- RestartNeeded false
      -r-- SystemMBean false
      -r-- eventProvider true
      -r-- eventTypes java.lang.String[jmx.attribute.change]
      -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
      -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl

    Thanks to Brian Fry on the JDeveloper PM team, who did most of the work to put this sequence of steps together, with me badgering him over his shoulder.
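
    Finally, if you need to do this regularly, the same steps can be captured in a small WLST script rather than retyped interactively. This is a minimal sketch built from the commands above; the URL, credentials and MBean path are this post's example values and will differ in your environment, and the commented-out set() call for updating an attribute is an assumption you should verify on a test server before relying on it:

      # ampool.py - run with: java weblogic.WLST ampool.py
      # Sketch only: the URL, credentials and MBean path below are the
      # example values from this post - substitute your own.
      connect('weblogic', 'password', 't3://localhost:7001')
      custom()
      cd('oracle.bc4j.mbean.config')
      # The long MBean name, split using adjacent string literals:
      cd('oracle.bc4j.mbean.config:name=AMPool,'
         'type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,'
         'oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,'
         'Application=FMWdh_application1,'
         'oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
      ls()    # dump the current pool settings
      # set('AmpoolMaxpoolsize', 2048)   # assumed update syntax - test first
      disconnect()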


  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link).

    Note: You can download NuoDB from here.

    In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial.

    Go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database. If you have not created the sample database, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab; it will ask you for your domain username and password. Enter the username domain and the password bird. Alternatively, you can also enter the username quickstart and the password quickstart.

    Once you enter the password you will be able to see the databases. In our example we have installed the sample database, hence you will see the Test database in our Database Hierarchy screen. When you click on the database it will ask for the database login. Note that the database login is different from the domain login, and you will have to enter your database login over here. In our case the database username is dba and the password is goalie. Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in it.

    Once you have explored the various objects, select any table and click on Open. When you click on Execute, it will display the SQL script to select the data from the table. The autogenerated script displays the entire result set from the database. The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements it will list all the available SQL statements right away in the query editor. You can see the popup window in the following image.

    Here is the cool thing for geeks: you can even click on Query Plan and it will display a text-based query plan as well. In the case of a simple SELECT the query plan will be much simpler; however, when we write complex queries it will be very interesting. We can use the query plan tab for performance tuning of the database.

    Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long, complex query.

    Here is a relatively complex example I have built using INNER JOIN syntax; right below it I have displayed the query plan, which shows all the little details related to the query. Well, we just wrote a multi-table query and executed it against the NuoDB database (a sketch of what such a query can look like appears below). You can use the NuoDB Admin section and do various analyses of the query and its performance.

    NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees. It allows you to add Transaction Engine processes to a running system to improve the performance of your system.
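
    Since the original screenshots of the multi-table query do not carry over here, the following is a minimal sketch of the kind of INNER JOIN you could type into the Explorer's query editor. The table and column names (players, scoring) are hypothetical placeholders, not the actual objects of the NuoDB sample database:

      -- Hypothetical schema, for illustration only:
      -- players(player_id, last_name), scoring(player_id, season, goals)
      SELECT p.last_name, s.season, s.goals
        FROM players AS p
       INNER JOIN scoring AS s
          ON s.player_id = p.player_id
       WHERE s.goals > 20
       ORDER BY s.goals DESC;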
    You can also add a second Storage Engine to your running system for redundancy purposes. Conversely, you can shut down processes when you don't need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments.

    If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up on the learning with me. Trust me: though the product is very powerful, it is extremely easy to learn and use.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB


  • Running a Silverlight application in the Google App Engine platform

    - by rajbk
    This post shows you how to host a Silverlight application in the Google App Engine (GAE) platform. You deploy and host your Silverlight application on Google's infrastructure by creating a configuration file and uploading it along with your application files. I tested this by uploading an old demo of mine - the four stroke engine silverlight demo. It is currently being served by the GAE over here: http://fourstrokeengine.appspot.com/

    The steps to run your Silverlight application in GAE are as follows:

    Account Creation

    Create an account at http://appengine.google.com/. You are allocated a free quota at signup. Select "Create an Application". Verify your account by SMS. Create your application by clicking on "Create an Application". Pick an application identifier on the next screen. The identifier has to be unique; you will use this identifier when uploading your application. The application you create will by default be accessible at [applicationidentifier].appspot.com. You can also use custom domains if needed (refer to the docs). Save your application.

    Download SDK

    We will use the Windows Launcher for Google App Engine tool to upload our apps (it is possible to do the same through the command line; a sketch appears at the end of this post). This is a GUI for creating, running and deploying applications. The launcher lets you test the app locally before deploying it to the GAE. This tool is available in the Google App Engine SDK. The GUI is written in Python and therefore needs an installation of Python to run.

    - Download and install the Python binaries from here: http://www.python.org/download/
    - Download and install the Google App Engine SDK from here: http://code.google.com/appengine/downloads.html

    Run the GAE Launcher and select Create New Application. On the next dialog, give your application a name (this must match the identifier we created earlier). For Parent Directory, point to the directory containing your Silverlight files. Change the port if you want to; the port is used by the GAE local web server, which is started if you choose to run the application locally for testing purposes. Hit Save.

    Configure, Test and Upload

    As shown below, the files I am interested in uploading for my Silverlight demo app are:

    - The html page used to host the Silverlight control
    - The xap file containing the compiled Silverlight application
    - A favicon.ico file

    We now create a configuration file for our application called app.yaml. The app.yaml file specifies how URL paths correspond to request handlers and static files. We edit the file by selecting our app in the GUI and clicking "Edit". The contents of the file after editing are shown below (note that the contents of the file should be in plain text):

      application: fourstrokeengine
      version: 1
      runtime: python
      api_version: 1
      handlers:
      - url: /
        static_files: Default.html
        upload: Default.html
      - url: /favicon.ico
        static_files: favicon.ico
        upload: favicon.ico
      - url: /FourStrokeEngine.xap
        static_files: FourStrokeEngine.xap
        upload: FourStrokeEngine.xap
        mime_type: application/x-silverlight-app
      - url: /.*
        static_files: Default.html
        upload: Default.html

    We have listed URL patterns for our files, specified them as static files and specified a mime type for our xap file. The wildcard URL at the end will match all URLs that are not found to our default page (you would normally include an html file that displays a 404 message). To understand more about app.yaml, refer to this page. Save the file.

    Run the application locally by selecting "Browse" in the GUI. A web server listening on the port you specified is started (8080 in my case). The app is loaded in your default web browser pointing to http://localhost:8080/. Make sure the application works as expected.

    We are now ready to deploy. Click the "Deploy" icon. You will be prompted for your username and password. Hit OK. The files will get uploaded and you should get a dialog telling you to "close the window". We are done uploading our Silverlight application. Go to http://appengine.google.com/ and launch the application by clicking on the link in the "Current Version" column. You should be taken to a URL which points to your application running in Google's infrastructure: http://fourstrokeengine.appspot.com/. We are done deploying our application! Clicking on the link in the Application column will take you to the Admin console, where you can see stats related to system usage.

    To learn more about the Google Application Engine, go here: http://code.google.com/appengine/docs/whatisgoogleappengine.html
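
    If you prefer the command line to the Launcher GUI, the SDK ships with appcfg.py, which performs the same upload. A minimal sketch, assuming Python and the SDK are on your PATH, and that C:\apps\fourstrokeengine is the folder containing app.yaml and the Silverlight files (the path is illustrative, not from the original post):

      rem Upload the application described by app.yaml in the given directory
      appcfg.py update C:\apps\fourstrokeengine

    You will be prompted for the same Google account credentials the Launcher asks for.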


  • Working with user redirect methods

    - by Jason Ulloa
    Application security is not something you can leave out when building a system. Every piece of code that protects our application must be carefully chosen and written. One of the common needs we run into in ASP.NET when working with users is redirecting them to different pages depending on their role. That is exactly what we will do here: we will work with the Web.config of our application and add a few small lines of code to make the system a bit more secure and, above all, to get the redirection working. Let's see how.

    As we know, the web.config lets us manage many elements of ASP.NET, several of them related to security, and it also gives us the ability to customize elements to adapt them to our needs. So, building on the principle that we can customize the web.config, we will create a custom section that we will use to handle the redirection.

    Our first step is to go to the web.config and find the <configSections> element inside <configuration> (the markup in the original listing was garbled here; the essential structure is shown below). After any existing entries we define our new section, and since we are working inside the configSections element, once our custom section is defined we must close it with </configSections>:

      <configuration>
        <configSections>
          ...
          <section name="loginRedirectByRole"
                   type="crabit.LoginRedirectByRoleSection"
                   allowLocation="true"
                   allowDefinition="Everywhere" />
        </configSections>

    - The section name attribute is the name of our new section.
    - type is the name of the class (which we will write shortly) that will be in charge of the redirect.

    We have defined our section, but it would be totally useless without the element that gives it life. In our case that is the loginRedirectByRole element, which we define right after the closing </configSections>:

      <loginRedirectByRole>
        <roleRedirects>
          <add role="Administrador" url="~/Admin/Default.aspx" />
          <add role="User" url="~/User/Default.aspx" />
        </roleRedirects>
      </loginRedirectByRole>

    As you can see, inside our loginRedirectByRole element we have add role elements. These are what will later tell the application where the user goes after a successful login. Let's look at this configuration a little:

    - add role="Administrador" is the name of a role we have defined; there can be as many add role elements as there are roles defined in our application.
    - The url attribute indicates the page the user will be sent to once logged in. Note that we are using ~ to indicate a relative path.

    With this we have finished the configuration in our web.config; now let's dig into the code that reads these elements and uses them.

    For our example, we create a new class named LoginRedirectByRoleSection; remember, this is the class we referenced in the type attribute of the section in our web.config. Once the class is created, we define some properties, but before that we make the class inherit from ConfigurationSection, so it can read the elements from the web.config:

      Public Class LoginRedirectByRoleSection
          Inherits ConfigurationSection

          <ConfigurationProperty("roleRedirects")> _
          Public Property RoleRedirects() As RoleRedirectCollection
              Get
                  Return DirectCast(Me("roleRedirects"), RoleRedirectCollection)
              End Get
              Set(ByVal value As RoleRedirectCollection)
                  Me("roleRedirects") = value
              End Set
          End Property
      End Class

    This property is in charge of getting all the roles we defined in the custom element of our web.config.

    Our second step is to create a second class (in the same file as LoginRedirectByRoleSection) called RoleRedirectCollection, which inherits from ConfigurationElementCollection:

      Public Class RoleRedirectCollection
          Inherits ConfigurationElementCollection

          Default Public ReadOnly Property Item(ByVal index As Integer) As RoleRedirect
              Get
                  Return DirectCast(BaseGet(index), RoleRedirect)
              End Get
          End Property

          Default Public ReadOnly Property Item(ByVal key As Object) As RoleRedirect
              Get
                  Return DirectCast(BaseGet(key), RoleRedirect)
              End Get
          End Property

          Protected Overrides Function CreateNewElement() As ConfigurationElement
              Return New RoleRedirect()
          End Function

          Protected Overrides Function GetElementKey(ByVal element As ConfigurationElement) As Object
              Return DirectCast(element, RoleRedirect).Role
          End Function
      End Class

    Once again we create another class, this time called RoleRedirect, inheriting from ConfigurationElement:

      Public Class RoleRedirect
          Inherits ConfigurationElement

          <ConfigurationProperty("role", IsRequired:=True)> _
          Public Property Role() As String
              Get
                  Return DirectCast(Me("role"), String)
              End Get
              Set(ByVal value As String)
                  Me("role") = value
              End Set
          End Property

          <ConfigurationProperty("url", IsRequired:=True)> _
          Public Property Url() As String
              Get
                  Return DirectCast(Me("url"), String)
              End Get
              Set(ByVal value As String)
                  Me("url") = value
              End Set
          End Property
      End Class

    Once our parent class is ready, the only thing left is a bit of code in the login page of our system (I assume, of course, that you are using the login controls that ASP.NET provides by default). Here we define our last two methods:

      Protected Sub ctllogin_LoggedIn(ByVal sender As Object, ByVal e As System.EventArgs) Handles ctllogin.LoggedIn
          RedirectLogin(ctllogin.UserName)
      End Sub

    The LoggedIn event is part of the ASP.NET login control and fires the moment the user logs in to our application successfully. It triggers the following procedure, which performs the redirect:

      Private Sub RedirectLogin(ByVal username As String)
          Dim roleRedirectSection As crabit.LoginRedirectByRoleSection = _
              DirectCast(ConfigurationManager.GetSection("loginRedirectByRole"), crabit.LoginRedirectByRoleSection)
          For Each roleRedirect As crabit.RoleRedirect In roleRedirectSection.RoleRedirects
              If Roles.IsUserInRole(username, roleRedirect.Role) Then
                  Response.Redirect(roleRedirect.Url)
              End If
          Next
      End Sub

    With this, our application should be able to redirect users and handle roles without problems. Also remember that our example relies on the default membership database schema that ASP.NET provides.
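
    Putting the pieces together, the relevant parts of the finished web.config look roughly like this. This is a minimal sketch assembled from the fragments above (crabit is the article's example namespace and the role names are its example roles; depending on where the class lives, the type attribute may also need an assembly name):

      <configuration>
        <configSections>
          <section name="loginRedirectByRole"
                   type="crabit.LoginRedirectByRoleSection"
                   allowLocation="true"
                   allowDefinition="Everywhere" />
        </configSections>
        <loginRedirectByRole>
          <roleRedirects>
            <add role="Administrador" url="~/Admin/Default.aspx" />
            <add role="User" url="~/User/Default.aspx" />
          </roleRedirects>
        </loginRedirectByRole>
        <!-- system.web, membership and role provider configuration as usual -->
      </configuration>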


  • A Rose by Any Other Name..

    - by Geoff N. Hiten
    It is always a good start when you can steal a title line from one of the best writers in the English language. Let's hope I can make the rest of this post live up to the opening.

    One recurring problem with SQL Server is moving databases to new servers. Client applications use a variety of ways to resolve SQL Server names, some of which are not changed easily <cough SharePoint /cough>. If you happen to be using default instances on both the source and target SQL Server, then the solution is pretty simple. You create (or bug the network admin until she creates) two DNS "A" records. One points the old name to the new IP address. The other creates a new alias for the old server, since the original system name is now redirected. Note this will redirect ALL traffic from the old server to the new server, including RDP and file share connection attempts.

    Figure 1 – Microsoft DNS MMC Snap-In

    Figure 2 – DNS New Host Dialog Box

    Both records are necessary so you can still access the old server via an alternate name.

      Server Role | IP Address   | Name  | Alias
      Source      | 10.97.230.60 | SQL01 | SQL01_Old
      Target      | 10.97.230.80 | SQL02 | SQL01

    Table 1 – Alias List

    If you or somebody else set up connections via IP address, you deserve to have to go to each app and fix it by hand. That is the only way to fix that particular foul-up.

    If you have to deal with named instances either as a source or a target, then it gets more complicated. The standard fix is to use the SQL Server Configuration Manager (or one of its earlier incarnations) to create a SQL client alias to redirect the connection. This can be a pain to install and configure on multiple client servers. The good news is that SQL Server Configuration Manager AND all of its earlier versions simply write a few registry keys. Extracting the keys into a .reg file makes centralized, automated deployment a snap (a sketch of the file contents appears at the end of this post).

    If the client is a 32-bit system, you have to extract the native key. If it is 64-bit, you have to extract the native key and the WoW (32-bit on 64-bit host) key.

    First, pick a development system to create the actual registry key. If you do this repeatedly, you can simply edit an existing registry file. Create the entry using the SQL Server Configuration Manager. You must use a 64-bit system to create the WoW key. The following example redirects from a named instance "SQL01\SQLUtiluty" to a default instance on "SQL02".

    Figure 3 – SQL Server Configuration Manager - Native

    Figure 3 shows the native key listing.

    Figure 4 – SQL Server Configuration Manager – WoW

    If you think you don't need the WoW key because your app is 64-bit, think again. SQL Server Management Studio is a 32-bit app, as are most SQL test utilities. Always create both keys for 64-bit target systems.

    Now that the keys exist, we can extract them into a .reg file. Fire up REGEDIT and browse to the following location: HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo. You can also search the registry for the string value of one of the server names (old or new). Right click on the "ConnectTo" label and choose "Export". Save with an appropriate name and location. The resulting file should look something like this:

    Figure 5 – SQL01_Alias.reg

    Repeat the process with the location: HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo

    Note that if you have multiple alias entries, ALL of the entries will be exported. In that case, you can edit the file and remove the extra aliases. You can also edit the files together into a single file. Just leave a blank line between new keys like this:

    Figure 6 – SQL01_Alias_All.reg

    Of course, if you have an automatic way to deploy, it makes sense to have an automatic way to un-deploy. To delete a registry key, simply edit the .reg file and replace the target with a "-" sign like so:

    Figure 7 – SQL01_Alias_UNDO.reg

    Now we have the ability to move any database to any server without having to install or change any applications on any client server. The whole process should be transparent to the applications, which makes planning and coordinating database moves a far simpler task.
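
    Since the .reg files themselves appear only as screenshots in the original post (Figures 5-7), here is a sketch of what the combined export of Figure 6 would roughly look like. The alias name and the DBMSSOCN (TCP/IP) value follow the standard conventions of the ConnectTo key, but treat the exact contents as an approximation rather than a copy of the original figures:

      Windows Registry Editor Version 5.00

      [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo]
      "SQL01\\SQLUtiluty"="DBMSSOCN,SQL02"

      [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo]
      "SQL01\\SQLUtiluty"="DBMSSOCN,SQL02"

    The undo version of Figure 7 puts a leading "-" inside the brackets, e.g. [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo], which deletes the whole key on import. Either file can then be deployed silently with: regedit /s SQL01_Alias_All.reg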


  • SQL Authority News – Play by Play with Pinal Dave – A Birthday Gift

    - by Pinal Dave
    Today is my birthday.

    Personal Note

    When I was young, I always looked forward to my birthday because on this day I used to get gifts from everybody. Now that I am getting older, on each of my birthdays I have almost the same feeling, but the direction is different: now I feel like giving gifts to everybody. I have received lots of support, love and respect from everybody, and now I must return it. Well, on this birthday I have a very unique gift for everybody - my latest course on SQL Server.

    How I Tune Performance

    I often get asked how I work on a normal day, and how I work when a performance tuning project is assigned to me. Lots of people have expressed the desire that I explain and demonstrate my own method of solving performance problems when I am facing a real-world problem. It is a pretty difficult task, as in the real world nothing goes as planned and usually planned demonstrations have no place there. The real world demands real solutions, and in a timely fashion. If a consultant goes to industry and does not demonstrate his/her capabilities in the very first few minutes, it does not matter how famous he/she is; the door is shown to them eventually. It is true, and in my early career I faced it quite commonly. I have learned the trick: be honest from the start and request absolutely transparent communication from the organization where I am to consult.

    Play by Play

    Play by Play is a very unique setup. It is not planned, and it is a step-by-step course. It is like a reality show - a very real encounter with the problem and a real problem-solving approach. I had a great time doing this course. Geoffrey Grosenbach (VP of Pluralsight) sits down with me to see what a SQL Server admin does in the real world. This Play-by-Play focuses on SQL Server performance tuning, and I go over optimizing queries and fine-tuning the server. The table of contents of this course is very simple.

    Introduction

    In the introduction I explain my basic strategies when I am approached by a customer for performance tuning.

    Basic Information Gathering

    In this module I explain how I gather various information for a performance tuning project. It is very crucial for a consultant to demonstrate to customers his capability of solving the problem. For a consultant to resolve even a small problem that gives a big positive impact on performance, he has to gather proper information from the start. I demonstrate in this module how one can collect all the important performance tuning metrics.

    Removing Performance Bottlenecks

    In this module, I build upon the statistics collected in the previous module. I analyze various performance tuning measures and immediately start implementing various tweaks, which will start improving the performance of my server. This is a very effective method and it gives an immediate return on effort.

    Index Optimization

    Indexes are considered a silver bullet for performance tuning. However, that is not always true; there are plenty of examples where performance is even worse after indexes are implemented. The key is to understand a few of the basic properties of the index and implement the right things at the right time. In this module, I describe in detail how to do index optimization and what is right and wrong with indexes. If you are a DBA or developer, and if your application is running slow, this is a must-attend module for you. I have some really interesting stories to tell as well.

    Optimize Query with Rewrite

    Every problem has more than one solution; in this module we will see another very famous, but hard to master, skill for performance tuning - query rewrites. There are a few do's and don'ts for any query rewrite. I take a very simple example and demonstrate how a query rewrite can improve the performance of the query many fold. I also share some real-world funny stories in this module.

    This course is hosted at Pluralsight. You will need a valid Pluralsight login to watch the Play by Play: Pinal Dave course. You can also sign up for a FREE trial of Pluralsight to watch this course. As today is my birthday, I will give 10 people (randomly) who express their desire to learn this course a free code. Please leave a comment and I will send you a free code to watch this course for free.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video


  • WordPress SEO Plugins to make your Blog Search Engine Friendly

    - by Vaibhav
    WordPress is the most common blogging system in use today, and its use as a CMS is also widespread. With hundreds of millions of sites using WordPress, getting SEO right for your WordPress-based blog or site is very important. We get regular queries from people who want search engine optimization for a site or blog made with WordPress. Here is a list of 16 of the best WordPress plug-ins that can help you achieve better rankings:

    - All in One SEO Pack: This is the most popular of all the SEO plugins for WordPress. It is easy to use and is compatible with most WordPress plugins. It works as a complete SEO package, automatically generating META tags and optimizing your titles for search engines while avoiding duplicate content. You can also set META tags manually (Meta title, Meta description and Meta keywords) for all pages and posts in your website.

    - HeadSpace2: HeadSpace2 is available in different languages. You can manage a wide range of SEO tasks related to metadata, tag your posts, and set custom descriptions and titles, so your pages can achieve better relevancy on search engines, and you can load different settings for different pages.

    - Platinum SEO Plugin: Automatic 301 redirects for permalink changes, META tag generation, avoidance of duplicate content, SEO optimization of post and page titles, and lots of other features.

    - TGFI.net SEO WordPress Plugin: A modified version of the All in One SEO Pack. Its unique feature over All in One SEO is that it generates titles, meta descriptions and meta keywords automatically when overrides are not present.

    - Google XML Sitemaps: Sitemaps generated by this tool are supported by Google, Yahoo, Bing, and Ask. As we all know, sitemaps make indexing of web pages easier for web crawlers: crawlers can retrieve the complete structure of a site and more information from them. The plugin notifies all major search engines about new posts every time you create one.

    - Sitemap Generator: You can generate a highly customizable sitemap for your WordPress pages. You can choose what to show and what not to show, and you can list the items in the order of your choice. It supports pages and permalinks and multi-level categories.

    - SEO Slugs: Generates more search-engine-friendly URLs for your site. Slugs are the filenames assigned to your posts; this plugin removes common words like 'a', 'the', 'in', 'what', 'you' from the slug which is assigned automatically to your post.

    - SEO Post Links: A plugin similar to SEO Slugs; it removes unnecessary keywords from the slug to make it short and SEO friendly, and you can fix the number of characters in your post slug.

    - Automatic SEO Links: With this tool you can create automatic linking in your posts, for internal linking or external linking too. Just select your words, anchor text, target URL and the nature of the links (dofollow/nofollow), and this plugin will replace the matches found in your posts.

    - WP Backlinks: A helpful plugin for link exchange. Whenever a webmaster submits a link for exchange, the plugin will spider the webmaster's site for the reciprocal link, and if everything checks out, your link will be exchanged.

    - SEO Title Tag: You can optimize the title tags of your WordPress blog with this plugin. Overriding the title tag with custom titles, mass editing, and title tags for 404 pages are its main features.

    - 404 SEO Plugin: With this plugin you can customize the 404 page of your site; you can give a customized error message and links to relevant pages of your site.

    - Redirection: A powerful plugin to manage 301 redirections and the logs related to them. With this plugin you can track 404 errors and keep a log of all redirected URLs, and it can redirect a post automatically when the URL of that post changes.

    - AddToAny: This plugin helps your readers share, save, email and bookmark your posts and pages. It supports more than a hundred social bookmarking, networking and sharing sites.

    - SEO Friendly Images: Makes the images on your site SEO friendly by updating them with proper TITLE and ALT tags.

    - Robots Meta: A plugin which prevents search engines from indexing the comments on your posts and your login and admin pages. It also allows you to add meta tags for individual pages.


  • The Year 2010, The Year of Change

    As I look back on the year 2010, I could never have predicted the wonderful changes that have occurred for my wife and me. The beginning of this year started out as the ninth year that we lived in South Florida, and my fourth year working for DentalPlans.com as a software engineer/network admin. About 3 months into the year I was given an excellent opportunity to work for MovieTickets.com in the software engineering department. This opportunity allowed me to gain experience with jQuery, because one of my projects was to reengineer MovieTickets.com's existing Marketing Panel System.

    About 3 months after starting at MovieTickets.com, my wife and I were offered an opportunity of a lifetime: I was offered a job at a large background/information security company located in Nashville, TN, as a software engineer II. I must note that after living in South Florida for 9 years, my wife and I had developed a strong distaste for the South Florida lifestyle and the general attitude/culture of the area. Even though we shared a strong dislike for the area in which we lived, I must admit that it was a tough decision to leave MovieTickets.com, because I was really doing well and I had made some great new friends like Chris Catto and Tyson Nero. In fact, they introduced me to local Microsoft user groups and software development podcasts like DotNetRocks.com and Hanselminutes.com. In addition, we also went to my first Microsoft launch, down in Miami for Visual Studio 2010. I must admit it was a cool experience. I truly hope to keep in touch with them to see how their careers grow, and I know they will.

    I must admit I was nervous and excited to start the next chapter in our lives as I started up the 26-foot U-Haul truck and got on the road to Nashville from Boca Raton. I knew that the change was going to lead to new adventures and new opportunities that I could never imagine. As we pulled into the long driveway of our rental house, we knew that this was the right place for us. Natalie, my wife, had actually come up to Nashville and, within one week of my job offer, set up a nice rental home for us to restart our lives in TN. I must admit that the wonderful southern hospitality took a bit to get used to, given the type of people we were used to dealing with on a regular basis. Our first 2 months seemed like we were living a dream because of our new area and the wonderful people we live around.

    So far my new job is going really well and I really like the people on my team and in my department. In fact, after 6 months I am now in charge of all application builds for our new deployment process. I am also leading a push to set up continuous integration within our new build process. In addition to starting my new job, I was also offered a position as an adjunct instructor at ITT Tech, teaching courses like VB.NET, JavaScript, Ajax, and database development. So far I have really liked teaching at the college level. Information technology has really been great for my life, so I am really glad to be able to give back. That is actually why I started DotNetBlocks. This site allows me to document things I have learned as I work with technology, and allows others to borrow from my experiences. I hope that this site can help others as others have helped me get where I am.

    Finally, I am glad to report that I only have 4 classes left for my master's degree at Capella University, and I am proud to announce that I am still on track to graduate with a 3.91 GPA. This last class was really a test, because I had the crazy idea that I could work full time as a software engineer, teach two college courses as a first-time teacher, and also take an advanced master's class in application architecture. I have no idea how I actually survived, but I am really surprised how well I did. I was invited back to teach again at ITT Tech, and I passed my master's class with an "A". I have decided to take this next term off from my master's program so that I do not get burned out, and also so that my new employer will pay for more of my education - tuition reimbursement is an awesome benefit.

    This was my year 2010. How was yours?


  • top Tweets SOA Partner Community – May 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - SOA Community: BPMN2.0 Oracle notations poster from eaiesb http://wp.me/p10C8u-pu
    - Torsten Winterberg: Look out for new Oracle #BPM edition coming up soon: The Oracle BPM Standard edtion! Great news for easy entry, small licence fees. Yes!
    - Danilo Schmiedel: Had a great chat with customer yesterday about #OracleBPM. Next step will be a 5day event combining modeling and implementation @soacommunity
    - Frank Nimphius: Still reading "Oracle Business Process Management Suite 11g Handbook". Excellent resource for a non-SOA but ADF guy like me ;-)
    - Oracle: New webcast: Maximize #Oracle #WebLogic Server ROI with Oracle #Enterprise #Manager 12c on May 2 at 10 am PT. Register http://bit.ly/JFUrR9
    - OTNArchBeat: BPM in Financial Services Industry | Sanjeev Sharma http://bit.ly/HCCxui
    - JDeveloper & ADF: BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations http://dlvr.it/1V9SxR
    - Oracle UPK & Tutor: Collaborate Attendees: Visit the UPK demo pod, SIGS, and sessions: If you are attending Collaborate 2012 - Sun. http://bit.ly/J39z65
    - Heidi Buelow: see #fmw track RT @demed: Are you going to #KSCOPE12 in San Antonio, June 24-28? http://kscope12.com/component/seminar/seminarslist?topicsid=6 Use promo code Fusion for discount!
    - Sabine Leitner: #SIG #Middleware 15.05. Frankfurt #Oracle #DOAG Planung & Aufbau WebLogic Server #WLS http://bit.ly/HKsCWV @OracleWebLogic @soacommunity
    - SOA Community: MDS explorer by Red Samurai http://wp.me/p10C8u-pp
    - Biemond: Retrieve or set a HTTP header from Oracle BPEL: With Oracle SOA Suite 11g patch 12928372 you can finally retrie http://bit.ly/JejTHC
    - Lucas Jellema: Call for papers for UKOUG 2012 has opened: http://techandebs.ukoug.org/default.asp?p=9306 (deadline 1st of June)
    - OTNArchBeat: BPM API usage: List all BPM Processes for a user | Kavitha Srinivasan http://bit.ly/IJKVfj
    - demed: SOA, Cloud + Service Tech symposium (London, Sep 24-25) call for paper is open http://www.servicetechsymposium.com/call2012.php @techsymp #oraclesoa
    - OracleBlogs: Lessons learned configuring OER 11g Workflows http://ow.ly/1iMsKh
    - OTNArchBeat: Scripting WebLogic Admin Server Startup | Antony Reynolds http://bit.ly/IH5ciU
    - orclateamsoa: A-Team Blog #ateam: BPM API usage: List all BPM Processes for a user http://ow.ly/1iJADp
    - Lucas Jellema: Just blogged about our Live FMW Application Development show during OBUG 2012, next Tuesday 24th April in Maastricht.
    - OracleBlogs: OEG integration with OSB/OWSM - 11g http://ow.ly/1iKx7G
    - SOA Community: SOA Community Newsletter April 2012 http://wp.me/p10C8u-pl
    - Frank Dorst: RT @whitehorsesnl: Whiteblog: BPM Process Spaces in Oracle Webcenter (Patch Set 5) http://bit.ly/Hxzh29 #soacommunity #bpm #oracle
    - David Shaffer: The Advanced SOA suite training class next week in Redwood City is full! Learned a lot about accepting credit card payments.
    - OTNArchBeat: Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri http://bit.ly/IgI8GN
    - SOA Community: Oracle Fusion Middleware Innovation Awards 2012, Call for Nominations #ofmaward #soa #bpm #soacommunity
    - OTNArchBeat: Updated SOA Documents now available in ITSO Reference Library http://bit.ly/I3Y6Sg
    - Oracle Middleware: Data Integrator & SOA - why 2 products better than one for integration? Webcast: Apr 24 10 AM PT http://bit.ly/IzmtKR
    - Andrejus Baranovskis: Red Samurai MDS Cleaner V2.0 http://fb.me/FxLVz82w
    - SOA Community: "@rluttikhuizen: Chapter 4 of SOA Made Simple book "Classification of Services" ready for collegial review" can #soacommunity get a preview?
    - Xavier Verhaeghe: #Gartner figures are out: #Oracle top in App Server market share (43.1%) and Relational #Database, too (48.8%) in 2011
    - Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 26.04. FFM #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    - Michel Schildmeijer: @wlscommunity @MiddlewareMagic @OTNArchBeat @Oracle_Fusion Oracle WebLogic / SOA Suite 11g HACMP Cluster take-over http://lnkd.in/G78qMd
    - Oracle Middleware: Hear how ODI and SOA's unified approach are key to untangling your business. April 24 10AM PT http://bit.ly/IdcsUz #Oracle
    - OTNArchBeat: Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri http://bit.ly/IswR9K
    - SOA Community: Integrating with Oracle Fusion Applications: Discovering Integration Artifacts https://blogs.oracle.com/governance/entry/integrating_with_oracle_fusion_applications #soacommunity #oer #governance
    - OracleBlogs: Tuning B2B Server Engine Threads in SOA Suite 11g http://ow.ly/1iH5bx
    - OracleBlogs: Top Tweets SOA Partner Community April 2012 http://ow.ly/1iVHfA
    - SOA Community: Oracle SOA Suite 11g Database Growth Management http://wp.me/p10C8u-pi
    - Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 24.04. München #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    - SOA Community: Testing Business Rules by Mark Nelson http://redstack.wordpress.com/2012/04/18/testing-business-rules/ #soacommunity #soa #rules #oracle
    - SOA Community: Top Tweets SOA Partner Community - April 2012 http://wp.me/p10C8u-pn
    - OTNArchBeat: Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration - April 24 http://bit.ly/IQexqT
    - OTNArchBeat: "Do more with SOA Integration: Best of Packt" contributors include @gschmutz, @llaszews, many others http://amzn.to/HVWwYt
    - ServiceTechSymposium: Symposium agenda page coming together - page launched today with keynotes, sessions to be added shortly. http://www.servicetechsymposium.com/agenda2012.php
    - SOA Community: Shipping Specialization plaques - congratulation #Fujitsu - request yours https://soacommunity.wordpress.com/2011/02/23/who-are-the-soa-experts-specialization-recognized-by-customers/ #soacommunity #OPN http://pic.twitter.com/YMRm2ion
    - ServiceTechSymposium: Call for Presentations submission deadline moved up to May 21, 2012. Send your presentation submissions ASAP!
    - ServiceTechSymposium: Symposium keynote by Vicente Navarro, European Space Agency, added to agenda: "SOA & Service-Orientation at the European Space Agency"
    - SOA Community: Running a large #soa project? Make sure you read - Oracle SOA Suite 11g Database Growth Management #soacommunity #opn
    - SOA Community: List all BPM Processes for a user by Yogesh l #bpm #oracle #soacommunity

    For regular information on Oracle SOA Suite, become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required).

    Blog Twitter LinkedIn Mix Forum

    Technorati Tags: soacommunity, twitter, Oracle, SOA Community, Jürgen Kress, OPN, SOA, BPM


  • How can I resolve Hibernate 3's ConstraintViolationException when updating a Persistent Entity's Collection?

    - by Tim Visher
    I'm trying to discover why two nearly identical class sets are behaving different from Hibernate 3's perspective. I'm fairly new to Hibernate in general and I'm hoping I'm missing something fairly obvious about the mappings or timing issues or something along those lines but I spent the whole day yesterday staring at the two sets and any differences that would lead to one being able to be persisted and the other not completely escaped me. I appologize in advance for the length of this question but it all hinges around some pretty specific implementation details. I have the following class mapped with Annotations and managed by Hibernate 3.? (if the specific specific version turns out to be pertinent, I'll figure out what it is). Java version is 1.6. ... @Embeddable public class JobStateChange implements Comparable<JobStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = JobState.FIELD_LENGTH) private JobState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; public JobStateChange() { } @Override public int compareTo(final JobStateChange o) { return this.date.compareTo(o.date); } @Override public boolean equals(final Object obj) { if (this == obj) { return true; } else if (!(obj instanceof JobStateChange)) { return false; } JobStateChange candidate = (JobStateChange) obj; return this.state == candidate.state && this.actingUser.equals(candidate.getUser()) && this.date.equals(candidate.getDate()); } @Override public int hashCode() { return this.state.hashCode() + this.actingUser.hashCode() + this.date.hashCode(); } } It is mapped as a Hibernate CollectionOfElements in the class Job as follows: ... @Entity @Table( name = "job", uniqueConstraints = { @UniqueConstraint( columnNames = { "agency", //Job Name "payment_type", //Job Name "payment_file", //Job Name "date_of_payment", "payment_control_number", "truck_number" }) }) public class Job implements Serializable { private static final long serialVersionUID = -1131729422634638834L; ... @org.hibernate.annotations.CollectionOfElements @JoinTable(name = "job_state", joinColumns = @JoinColumn(name = "job_id")) @Sort(type = SortType.NATURAL) private final SortedSet<JobStateChange> stateChanges = new TreeSet<JobStateChange>(); ... public void advanceState( final User actor, final Date date) { JobState nextState; LOGGER.debug("Current state of {} is {}.", this, this.getCurrentState()); if (null == this.currentState) { nextState = JobState.BEGINNING; } else { if (!this.isAdvanceable()) { throw new IllegalAdvancementException(this.currentState.illegalAdvancementStateMessage); } if (this.currentState.isDivergent()) { nextState = this.currentState.getNextState(this); } else { nextState = this.currentState.getNextState(); } } JobStateChange stateChange = new JobStateChange(nextState, actor, date); this.setCurrentState(stateChange.getState()); this.stateChanges.add(stateChange); LOGGER.debug("Advanced {} to {}", this, this.getCurrentState()); } private void setCurrentState(final JobState jobState) { this.currentState = jobState; } boolean isAdvanceable() { return this.getCurrentState().isAdvanceable(this); } ... 
@Override public boolean equals(final Object obj) { if (obj == this) { return true; } else if (!(obj instanceof Job)) { return false; } Job otherJob = (Job) obj; return this.getName().equals(otherJob.getName()) && this.getDateOfPayment().equals(otherJob.getDateOfPayment()) && this.getPaymentControlNumber().equals(otherJob.getPaymentControlNumber()) && this.getTruckNumber().equals(otherJob.getTruckNumber()); } @Override public int hashCode() { return this.getName().hashCode() + this.getDateOfPayment().hashCode() + this.getPaymentControlNumber().hashCode() + this.getTruckNumber().hashCode(); } ... } The purpose of JobStateChange is to record when the Job moves through a series of State Changes that are outline in JobState as enums which know about advancement and decrement rules. The interface used to advance Jobs through a series of states is to call Job.advanceState() with a Date and a User. If the Job is advanceable according to rules coded in the enum, then a new StateChange is added to the SortedSet and everyone's happy. If not, an IllegalAdvancementException is thrown. The DDL this generates is as follows: ... drop table job; drop table job_state; ... create table job ( id bigint generated by default as identity, current_state varchar(25), date_of_payment date not null, beginningCheckNumber varchar(8) not null, item_count integer, agency varchar(10) not null, payment_file varchar(25) not null, payment_type varchar(25) not null, endingCheckNumber varchar(8) not null, payment_control_number varchar(4) not null, truck_number varchar(255) not null, wrapping_system_type varchar(15) not null, printer_id bigint, primary key (id), unique (agency, payment_type, payment_file, date_of_payment, payment_control_number, truck_number) ); create table job_state ( job_id bigint not null, acting_user_id bigint not null, date timestamp not null, state varchar(25) not null, primary key (job_id, acting_user_id, date, state) ); ... alter table job add constraint FK19BBD12FB9D70 foreign key (printer_id) references printer; alter table job_state add constraint FK57C2418FED1F0D21 foreign key (acting_user_id) references app_user; alter table job_state add constraint FK57C2418FABE090B3 foreign key (job_id) references job; ... The database is seeded with the following data prior to running tests ... insert into job (id, agency, payment_type, payment_file, payment_control_number, date_of_payment, beginningCheckNumber, endingCheckNumber, item_count, current_state, printer_id, wrapping_system_type, truck_number) values (-3, 'RRB', 'Monthly', 'Monthly','4501','1998-12-01 08:31:16' , '00000001','00040000', 40000, 'UNASSIGNED', null, 'KERN', '02'); insert into job_state (job_id, acting_user_id, date, state) values (-3, -1, '1998-11-30 08:31:17', 'UNASSIGNED'); ... After the database schema is automatically generated and rebuilt by the Hibernate tool. The following test runs fine up until the call to Session.flush() ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class JobDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private JobDao jobDao; @Autowired private SessionFactory sessionFactory; @Autowired private UserService userService; @Autowired private PrinterService printerService; ... 
@Test public void saveJob_JobAdvancedToAssigned_AllExpectedStateChanges() { //Get an unassigned Job Job job = this.jobDao.getJob(-3L); assertEquals(JobState.UNASSIGNED, job.getCurrentState()); Date advancedToUnassigned = new GregorianCalendar(1998, 10, 30, 8, 31, 17).getTime(); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); //Satisfy advancement constraints and advance job.setPrinter(this.printerService.getPrinter(-1L)); Date advancedToAssigned = new Date(); job.advanceState( this.userService.getUserByUsername("admin"), advancedToAssigned); assertEquals(JobState.ASSIGNED, job.getCurrentState()); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); assertEquals(advancedToAssigned, job.getStateChange(JobState.ASSIGNED).getDate()); //Persist to DB this.sessionFactory.getCurrentSession().flush(); ... } ... } The error thrown is SQLCODE=-803, SQLSTATE=23505: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] org.hibernate.exception.ConstraintViolationException: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1416) at org.hibernate.action.CollectionUpdateAction.execute(CollectionUpdateAction.java:86) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:170) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027) at jaci.dao.JobDaoIntegrationTest.saveJob_JobAdvancedToAssigned_AllExpectedStateChanges(JobDaoIntegrationTest.java:98) at org.springframework.test.context.junit4.SpringTestMethod.invoke(SpringTestMethod.java:160) at org.springframework.test.context.junit4.SpringMethodRoadie.runTestMethod(SpringMethodRoadie.java:233) at org.springframework.test.context.junit4.SpringMethodRoadie$RunBeforesThenTestThenAfters.run(SpringMethodRoadie.java:333) at org.springframework.test.context.junit4.SpringMethodRoadie.runWithRepetitions(SpringMethodRoadie.java:217) at org.springframework.test.context.junit4.SpringMethodRoadie.runTest(SpringMethodRoadie.java:197) at org.springframework.test.context.junit4.SpringMethodRoadie.run(SpringMethodRoadie.java:143) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:160) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:97) Caused by: com.ibm.db2.jcc.b.lm: DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, SQLERRMC=1;ACI_APP.JOB_STATE, DRIVER=3.50.152 at com.ibm.db2.jcc.b.wc.a(wc.java:575) at com.ibm.db2.jcc.b.wc.a(wc.java:57) at com.ibm.db2.jcc.b.wc.a(wc.java:126) at com.ibm.db2.jcc.b.tk.b(tk.java:1593) at com.ibm.db2.jcc.b.tk.c(tk.java:1576) at com.ibm.db2.jcc.t4.db.k(db.java:353) at com.ibm.db2.jcc.t4.db.a(db.java:59) at com.ibm.db2.jcc.t4.t.a(t.java:50) at com.ibm.db2.jcc.t4.tb.b(tb.java:200) at com.ibm.db2.jcc.b.uk.Gb(uk.java:2355) at com.ibm.db2.jcc.b.uk.e(uk.java:3129) at com.ibm.db2.jcc.b.uk.zb(uk.java:568) 
at com.ibm.db2.jcc.b.uk.executeUpdate(uk.java:551) at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:46) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1389) Therein lies my problem… A nearly identical class set (in fact, so identical that I've been chomping at the bit to make it a single class that serves both business entities) runs absolutely fine. It is identical except for name. Instead of Job it's Web. Instead of JobStateChange it's WebStateChange. Instead of JobState it's WebState. Both Job's and Web's SortedSet of StateChanges are mapped as a Hibernate CollectionOfElements. Both are @Embeddable. Both are SortType.Natural. Both are backed by an Enumeration with some advancement rules in it. And yet when a nearly identical test is run for Web, no issue is discovered and the data flushes fine. For the sake of brevity I won't include all of the Web classes here, but I will include the test, and if anyone wants to see the actual sources, I'll include them (just leave a comment). The data seed: insert into web (id, stock_type, pallet, pallet_id, date_received, first_icn, last_icn, shipment_id, current_state) values (-1, 'PF', '0011', 'A', '2008-12-31 08:30:02', '000000001', '000080000', -1, 'UNSTAGED'); insert into web_state (web_id, date, state, acting_user_id) values (-1, '2008-12-31 08:30:03', 'UNSTAGED', -1); The test: ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class WebDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private WebDao webDao; @Autowired private UserService userService; @Autowired private SessionFactory sessionFactory; ... @Test public void saveWeb_WebAdvancedToNewState_AllExpectedStateChanges() { Web web = this.webDao.getWeb(-1L); Date advancedToUnstaged = new GregorianCalendar(2008, 11, 31, 8, 30, 3).getTime(); assertEquals(WebState.UNSTAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); Date advancedToStaged = new Date(); web.advanceState( this.userService.getUserByUsername("admin"), advancedToStaged); this.sessionFactory.getCurrentSession().flush(); web = this.webDao.getWeb(web.getId()); assertEquals( "Web should have moved to STAGED State.", WebState.STAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); assertEquals(advancedToStaged, web.getState(WebState.STAGED).getDate()); assertNotNull(web.getState(WebState.UNSTAGED)); assertNotNull(web.getState(WebState.STAGED)); } ... } As you can see, I assert that the Web was reconstituted the way I expect, I advance it, flush it to the DB, and then re-get it and verify that the states are as I expect. Everything works perfectly. Not so with Job. A possibly pertinent detail: the reconstitution code works fine if I cease to map JobStateChange.date as a TIMESTAMP and map it instead as a DATE, and ensure that all of the StateChanges always occur on different Dates. The problem is that this particular business entity can go through many state changes in a single day, and so it needs to be sorted by timestamp rather than by date. If I don't do this then I can't sort the StateChanges correctly. That being said, WebStateChange.date is also mapped as a TIMESTAMP, and so I again remain absolutely befuddled as to where this error is arising from.
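For reference, a minimal sketch of how such a sorted element collection is typically mapped with Hibernate 3 annotations. The collection role Job.stateChanges and the ACI_APP.JOB_STATE table come from the stack trace above, but the join column and the exact annotations are assumptions by analogy with the web_state seed data, not the poster's actual source:

    import java.util.SortedSet;
    import java.util.TreeSet;
    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.JoinColumn;
    import javax.persistence.JoinTable;
    import org.hibernate.annotations.CollectionOfElements;
    import org.hibernate.annotations.Sort;
    import org.hibernate.annotations.SortType;

    @Entity
    public class Job {
        // Field name taken from the collection role in the stack trace;
        // table and join column are assumed from the seed data above.
        @CollectionOfElements(fetch = FetchType.EAGER)
        @JoinTable(name = "job_state", joinColumns = @JoinColumn(name = "job_id"))
        @Sort(type = SortType.NATURAL)
        private SortedSet<JobStateChange> stateChanges =
                new TreeSet<JobStateChange>();
        // ...
    }

Under a mapping like this, if the unique key on JOB_STATE covers (job_id, date) and two state changes round to the same value at the column's timestamp precision, the second insert would trip exactly the duplicate-key error shown above.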
I tried to do a fairly thorough job of giving all of the technical details of the implementation but as this particular question is very implementation specific, if I missed anything just let me know in the comments and I'll include it. Thanks so much for your help! UPDATE: Since it turns out to be important to the solution of my problem, I have to include the pertinent bits of the WebStateChange class as well. ... @Embeddable public class WebStateChange implements Comparable<WebStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = WebState.FIELD_LENGTH) private WebState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; ... WebStateChange( final WebState state, final User actingUser, final Date date) { ExceptionUtils.illegalNullArgs(state, actingUser, date); this.state = state; this.actingUser = actingUser; this.date = new Date(date.getTime()); } @Override public int compareTo(final WebStateChange otherStateChange) { return this.date.compareTo(otherStateChange.date); } @Override public boolean equals(final Object candidate) { if (this == candidate) { return true; } else if (!(candidate instanceof WebStateChange)) { return false; } WebStateChange candidateWebState = (WebStateChange) candidate; return this.getState() == candidateWebState.getState() && this.getUser().equals(candidateWebState.getUser()) && this.getDate().equals(candidateWebState.getDate()); } @Override public int hashCode() { return this.getState().hashCode() + this.getUser().hashCode() + this.getDate().hashCode(); } ... }
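One observation on the class above (a hypothesis to check, not a confirmed diagnosis): compareTo orders purely by date, while equals also considers state and user. In a TreeSet with natural ordering, two changes whose dates compare equal are treated as a single element, so a comparator that breaks ties keeps the set consistent with equals. A sketch of such a tie-breaking compareTo:

    @Override
    public int compareTo(final WebStateChange other) {
        // Order by timestamp first, then fall back to the state's enum order
        // so two changes at the same instant are not collapsed by the TreeSet.
        int result = this.date.compareTo(other.date);
        if (result == 0) {
            result = this.state.compareTo(other.state);
        }
        return result;
    }

Whether this also resolves the DB2 unique-key violation depends on the actual key definition on the JOB_STATE table.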

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
Author: Moazzam Chaudry Continuing from Friday's blog on 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details around the customers. It was a tremendous honor to be in a single room of winners. We only wish we could have had more time to share stories from all the winners. We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices. There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed: Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details around their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making, speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings. Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time. Lastly, Ashley Doodly, Head of IT from Motability, shared details around their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. 10's of minutes and tremendous improvement in throughput that increased up to 50%.

This year's winners by category are:

Oracle Exalogic Customer Results using Fusion Middleware
Netshoes: ATG on Exalogic: 6X reduced H/W footprint, 6.2X increased throughput and 3 weeks time to market
Claro: Part of America Movil, running mission-critical Java applications on Exalogic with 35X faster Java response time, 5X throughput
Underwriters Laboratories: Exalogic as an apps consolidation platform to power tremendous growth
Ingersoll Rand: EBS on Exalogic: up to 40% reduction in overall IT budget, 3X reduced footprint

Oracle Cloud Application Foundation Customer Results using Fusion Middleware
Mazda Motor Corporation: Tuxedo ART Batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost
HOTELBEDS Technology: Open Source to WebLogic transformation
Globalia Corporation: Introduced Oracle Coherence to fully reengineer the DTH system and provide multiple business and technical benefits
Nike: Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform
Comcast Corporation: The solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible

Oracle SOA and Oracle BPM Customer Results using Fusion Middleware
NTT Docomo: Network traffic solution based on Oracle event processing and Coherence - massive in scale: 12M users (50M in future) - 800,000 events/sec
Schneider National, Inc.: SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS. Platform runs 60M trans/day and 50 million composite SOA instances per day across 10g and 11g
Amadeus: Oracle BPM solution: business rules and processes vary across local (80), regional (~10) and corporate approval processes. Up to 10 levels of approval.
Plans to deploy across 20+ markets
Navitar: SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle's SOA Suite and Oracle AIA Foundation Pack
Motability: Uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process

Oracle WebCenter Customer Results using Fusion Middleware
News Limited: Single platform running websites for 50% of Australia's newspapers
University of Louisville: "Facebook for Medicine": Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues. Expecting annualized ROI of 277%
China Mobile Jiangsu: Company portal (25k users) to drive collaboration & productivity
Life Technologies: Portal for remotely monitoring & repairing biotech instruments
LA Dept. of Water & Power: Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users

Oracle Identity Management Customer Results using Fusion Middleware
Educational Testing Service: Identity Management platform for provisioning & SSO of 6 million GRE, GMAT, TOEFL customers
Avea: Oracle Identity Manager allowing call center personnel to quickly change Identity Profiles to handle varying call loads based on a user self-service interface. Decreased admin cost by 30%

Oracle Data Integration Customer Results using Fusion Middleware
Raymond James: Near real-time integration for improved systems (throughput & performance) and enhanced operational flexibility in a 24x7 environment
Wm Morrison Supermarkets: Electronic point-of-sale integration handling over 80 million transactions a day in near real time (15 min intervals)

Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware
Qualcomm Incorporated: Solution providing immediate business value, enabling a self-service model necessary for growing the new customer base, an increase in customer satisfaction, and reduced "time-to-deliver"
Micros Systems, Inc.: ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel website, Expedia, etc.
Marfin Egnatia Bank: A new Web 2.0 UI provides a much richer experience through the ADF solution, with the end result being one of boosting end-user productivity

Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware
INC Research: Self-service customer portal delivering 5-10% of the overall revenue - expected to grow fast with the BI solution
Experian: Reduction in time to complete the financial close process
Hologic Inc: Solution saving months of decision-making uncertainty!

We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013. Additional Information: Blog: Oracle WebCenter Award Winners Blog: Oracle Identity Management Winners Blog: Oracle Exalogic Winners Blog: SOA, BPM and Data Integration will feature award winners in their respective areas this week Subscribe to our regular Fusion Middleware Newsletter Follow us on Twitter and Facebook

    Read the article

  • Login failed for user 'sa' because the account is currently locked out. The system administrator can

    - by cabhilash
Login failed for user 'sa' because the account is currently locked out. The system administrator can unlock it. (Microsoft SQL Server, Error: 18486)

SQL Server has local password policies. If a policy is enabled that locks the account after X failed attempts, the account is automatically locked down. This error with the 'sa' account is very common: sa is the default administrator login available with SQL Server, so there is a chance that an outsider has tried to brute-force your system. (This can happen even when a legitimate user tries to access the account with the wrong password. Sometimes a user changes the password without informing others, so the other users keep trying to log in with the old one.) You can unlock the account with the following options (use another admin account or connect via Windows authentication):

Alter the account & unlock: ALTER LOGIN sa WITH PASSWORD='password' UNLOCK

Use another account: Almost everyone is aware of the sa account, which makes it a potential security risk. Even if you provide a strong password, attackers can lock the account by supplying wrong passwords. (You can add extra security by installing a firewall or changing the default port, but these measures are not always practical.) As a best practice you can disable the sa account and use another account with the same privileges: ALTER LOGIN sa DISABLE

You can edit the lockout options using gpedit.msc (in a command prompt, type gpedit.msc and press Enter). Navigate to the Account Lockout Policy as shown in the figure. The following options are available:

Account lockout threshold: This security setting determines the number of failed logon attempts that causes a user account to be locked out. A locked-out account cannot be used until it is reset by an administrator or until the lockout duration for the account has expired. You can set a value between 0 and 999 failed logon attempts. If you set the value to 0, the account will never be locked out. Failed password attempts against workstations or member servers that have been locked using either CTRL+ALT+DELETE or password-protected screen savers count as failed logon attempts.

Account lockout duration: This security setting determines the number of minutes a locked-out account remains locked out before automatically becoming unlocked. The available range is from 0 minutes through 99,999 minutes. If you set the account lockout duration to 0, the account will be locked out until an administrator explicitly unlocks it. If an account lockout threshold is defined, the account lockout duration must be greater than or equal to the reset time. Default: None, because this policy setting only has meaning when an Account lockout threshold is specified.

Reset account lockout counter after: This security setting determines the number of minutes that must elapse after a failed logon attempt before the failed logon attempt counter is reset to 0 bad logon attempts. The available range is 1 minute to 99,999 minutes. If an account lockout threshold is defined, this reset time must be less than or equal to the Account lockout duration. Default: None, because this policy setting only has meaning when an Account lockout threshold is specified.

When creating a SQL user you can set CHECK_POLICY=ON, which will enforce the Windows password policy on the account. The following policies will be applied:

Define the Enforce password history policy setting so that several previous passwords are remembered. With this policy setting, users cannot use the same password when their password expires.
Define the Maximum password age policy setting so that passwords expire as often as necessary for your environment, typically every 30 to 90 days. With this policy setting, if an attacker cracks a password, the attacker only has access to the network until the password expires.

Define the Minimum password age policy setting so that passwords cannot be changed until they are more than a certain number of days old. This policy setting works in combination with the Enforce password history policy setting. If a minimum password age is defined, users cannot repeatedly change their passwords to get around the Enforce password history policy setting and then use their original password. Users must wait the specified number of days to change their passwords.

Define a Minimum password length policy setting so that passwords must consist of at least a specified number of characters. Long passwords--seven or more characters--are usually stronger than short ones. With this policy setting, users cannot use blank passwords, and they have to create passwords that are a certain number of characters long.

Enable the Password must meet complexity requirements policy setting. This policy setting checks all new passwords to ensure that they meet basic strong password requirements. Passwords must meet the following complexity requirements when they are changed or created:

Not contain the user's entire Account Name or entire Full Name. The Account Name and Full Name are parsed for delimiters: commas, periods, dashes or hyphens, underscores, spaces, pound signs, and tabs. If any of these delimiters are found, the Account Name or Full Name are split and all sections are verified not to be included in the password. There is no check for any character or any three characters in succession.

Contain characters from three of the following five categories: English uppercase characters (A through Z); English lowercase characters (a through z); Base 10 digits (0 through 9); Non-alphabetic characters (for example, !, $, #, %); A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific.
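Putting the unlock and replace-sa advice above together, a rough T-SQL sketch (run from another sysadmin login or a Windows-authenticated admin connection; the ops_admin name and its password are placeholders, not prescriptions):

    -- Unlock sa without changing its password by toggling the policy check.
    -- Note: this briefly suspends policy enforcement, so re-enable it at once.
    ALTER LOGIN sa WITH CHECK_POLICY = OFF;
    ALTER LOGIN sa WITH CHECK_POLICY = ON;

    -- Then, as the best practice described above, disable sa and use a
    -- differently named sysadmin login instead:
    ALTER LOGIN sa DISABLE;
    CREATE LOGIN ops_admin WITH PASSWORD = 'Str0ng!Pass#2010', CHECK_POLICY = ON;
    EXEC sp_addsrvrolemember 'ops_admin', 'sysadmin';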

    Read the article

  • Resetting Your Oracle User Password with SQL Developer

    - by thatjeffsmith
There’s nothing more annoying than having to email, call, or log a support ticket to have one of your accounts reset. This is no less annoying in the Oracle database. Those pesky security folks have determined that your password should only be valid for X days, and your time is up. Time to reset the password! Except…you can’t log into the database to reset your password. What now? Wait a second, look at this nifty thing I see in SQL Developer: right click on my connection, and Reset Password is not available! Why not?

The JDBC Driver Doesn’t Support This Operation. We can’t make this call over the Oracle JDBC layer, because it hasn’t been implemented. However our primary interface, OCI, does indeed support this. In order to use the Oracle Call Interface (OCI), you need to have an Oracle Client on your machine. The good news is that this is fairly easy to get going. The Instant Client will do. You have two options, the full or ‘Lite’ Instant Clients. If you want SQL*Plus and the other client tools, go for the full. If you just want the basic drivers, go for the Lite. Either of these is fine, but mind the bit level and version of Oracle! Make sure you get a 32 bit Instant Client if you run 32 bit SQL Developer, or 64 bit if you run 64. Here’s the download link. What, you didn’t believe me? Mind the version of Oracle too! You want to be at the same level or higher than the database you’re working with. You can use an 11.2.0.3 client with an 11.2.0.1 database, but not a 10gR2 client with an 11gR2 database. Clear as mud?

Download and Extract. Put it where you want – Program Files is as good a place as any if you have the rights. When you’re done, copy the directory path you extracted the archive to, because we’re going to add it to your Windows PATH environment variable. The easiest way to find this in Windows 7 is to open the Start dialog and type ‘path’. In Windows 8 you’ll cast your spell and wave at your screen until something happens. I recommend you put it up front so we find our DLLs first. Now with that set, let’s start up SQL Developer. Check the connection context menu again. Bingo! What happened there? SQL Developer looks to see if it can find the OCI resources. Guess where it looks? That’s right, the PATH. If it finds what it’s looking for, and confirms the bit level is right, it will activate the Reset Password option. We have a Preference to ‘force’ an OCI/THICK connection that gives you a few other edge-case features, but you do not need to enable this to activate Reset Password. Not necessary, but won’t hurt anything either. There are a few actual benefits to using OCI-powered connections, but that’s beyond the scope of today’s blog post…to be continued.

Ok, so we’re ready to go. Now, where was I again? Oh yeah, my password has expired… Right click on your connection and now choose ‘Reset Password’. You’ll need to know your existing password and select a new one that meets your database’s security standards.

I Need Another Option, This Ain’t Working! If you have another account in the database, you can use the DBA Panel to reset a user’s password, or of course you can spark up a SQL*Plus session and issue the ALTER USER JEFF IDENTIFIED BY _________; command – but you knew this already, yes? I need more help ‘installing’ the Instant Client, help! There are lots and lots of resources out there on this subject.
But I also know from personal experience that many of you have problems getting this to ‘work.’ The key things to remember are to download the right bit level AND to make sure the client install directory is in your PATH. I know many folks will just ‘install’ the Instant Client directly to one of their ‘bin’ type directories. You can do that if you want, but I prefer the cleaner method. Of course, if you lack admin privs to change the PATH variable, that might be your only option. Or you could do what the original ORA- message indicated and ‘contact your DBA.’
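For completeness, a sketch of that second-account route from SQL*Plus (JEFF and the new password here are placeholders):

    -- Check whether the account is locked, expired, or both:
    SELECT username, account_status, expiry_date
      FROM dba_users
     WHERE username = 'JEFF';

    -- Reset the password and unlock in one statement:
    ALTER USER jeff IDENTIFIED BY "NewPassw0rd1" ACCOUNT UNLOCK;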

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented, "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System." Policy Administration Trends Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of those CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and /or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration either in the process of being adopted today or on the not-so-distant horizon for the future: Underwriting and service desktops New business automation Convergence of ultra-configurable and domain content-rich systems Better usability and screen design Mitigating the Risk When Making the Decision to Modernize Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and ahead of the competition. 
A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to: Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities; Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business; Deliver scalability, efficiency - enabling automation, while simplifying and standardizing systems across its technology stack; Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements; Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance; Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes.

Best Practices for Addressing Implementation Challenges Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

Achieving a Return on Investment Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

Commitment to Continuing the Investment In the short and longer term future Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • web.xml not reloading in tomcat even after stop/start

    - by ajay
    This is in relation to:- http://stackoverflow.com/questions/2576514/basic-tomcat-servlet-error I changed my web.xml file, did ant compile , all, /etc/init.d/tomcat stop , start Even then my web.xml file in tomcat deployment is still unchanged. This is build.properties file:- app.name=hello catalina.home=/usr/local/tomcat manager.username=admin manager.password=admin This is my build.xml file. Is there something wrong with this:- <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- General purpose build script for web applications and web services, including enhanced support for deploying directly to a Tomcat 6 based server. This build script assumes that the source code of your web application is organized into the following subdirectories underneath the source code directory from which you execute the build script: docs Static documentation files to be copied to the "docs" subdirectory of your distribution. src Java source code (and associated resource files) to be compiled to the "WEB-INF/classes" subdirectory of your web applicaiton. web Static HTML, JSP, and other content (such as image files), including the WEB-INF subdirectory and its configuration file contents. $Id: build.xml.txt 562814 2007-08-05 03:52:04Z markt $ --> <!-- A "project" describes a set of targets that may be requested when Ant is executed. The "default" attribute defines the target which is executed if no specific target is requested, and the "basedir" attribute defines the current working directory from which Ant executes the requested task. This is normally set to the current working directory. --> <project name="My Project" default="compile" basedir="."> <!-- ===================== Property Definitions =========================== --> <!-- Each of the following properties are used in the build script. Values for these properties are set by the first place they are defined, from the following list: * Definitions on the "ant" command line (ant -Dfoo=bar compile). * Definitions from a "build.properties" file in the top level source directory of this application. * Definitions from a "build.properties" file in the developer's home directory. * Default definitions in this build.xml file. You will note below that property values can be composed based on the contents of previously defined properties. This is a powerful technique that helps you minimize the number of changes required when your development environment is modified. Note that property composition is allowed within "build.properties" files as well as in the "build.xml" script. 
--> <property file="build.properties"/> <property file="${user.home}/build.properties"/> <!-- ==================== File and Directory Names ======================== --> <!-- These properties generally define file and directory names (or paths) that affect where the build process stores its outputs. app.name Base name of this application, used to construct filenames and directories. Defaults to "myapp". app.path Context path to which this application should be deployed (defaults to "/" plus the value of the "app.name" property). app.version Version number of this iteration of the application. build.home The directory into which the "prepare" and "compile" targets will generate their output. Defaults to "build". catalina.home The directory in which you have installed a binary distribution of Tomcat 6. This will be used by the "deploy" target. dist.home The name of the base directory in which distribution files are created. Defaults to "dist". manager.password The login password of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) manager.url The URL of the "/manager" web application on the Tomcat installation to which we will deploy web applications and web services. manager.username The login username of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) --> <property name="app.name" value="myapp"/> <property name="app.path" value="/${app.name}"/> <property name="app.version" value="0.1-dev"/> <property name="build.home" value="${basedir}/build"/> <property name="catalina.home" value="../../../.."/> <!-- UPDATE THIS! --> <property name="dist.home" value="${basedir}/dist"/> <property name="docs.home" value="${basedir}/docs"/> <property name="manager.url" value="http://localhost:8080/manager"/> <property name="src.home" value="${basedir}/src"/> <property name="web.home" value="${basedir}/web"/> <!-- ==================== External Dependencies =========================== --> <!-- Use property values to define the locations of external JAR files on which your application will depend. In general, these values will be used for two purposes: * Inclusion on the classpath that is passed to the Javac compiler * Being copied into the "/WEB-INF/lib" directory during execution of the "deploy" target. Because we will automatically include all of the Java classes that Tomcat 6 exposes to web applications, we will not need to explicitly list any of those dependencies. You only need to worry about external dependencies for JAR files that you are going to include inside your "/WEB-INF/lib" directory. --> <!-- Dummy external dependency --> <!-- <property name="foo.jar" value="/path/to/foo.jar"/> --> <!-- ==================== Compilation Classpath =========================== --> <!-- Rather than relying on the CLASSPATH environment variable, Ant includes features that makes it easy to dynamically construct the classpath you need for each compilation. The example below constructs the compile classpath to include the servlet.jar file, as well as the other components that Tomcat makes available to web applications automatically, plus anything that you explicitly added. 
--> <path id="compile.classpath"> <!-- Include all JAR files that will be included in /WEB-INF/lib --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <!-- <pathelement location="${foo.jar}"/> --> <!-- Include all elements that Tomcat exposes to applications --> <fileset dir="${catalina.home}/bin"> <include name="*.jar"/> </fileset> <pathelement location="${catalina.home}/lib"/> <fileset dir="${catalina.home}/lib"> <include name="*.jar"/> </fileset> </path> <!-- ================== Custom Ant Task Definitions ======================= --> <!-- These properties define custom tasks for the Ant build tool that interact with the "/manager" web application installed with Tomcat 6. Before they can be successfully utilized, you must perform the following steps: - Copy the file "lib/catalina-ant.jar" from your Tomcat 6 installation into the "lib" directory of your Ant installation. - Create a "build.properties" file in your application's top-level source directory (or your user login home directory) that defines appropriate values for the "manager.password", "manager.url", and "manager.username" properties described above. For more information about the Manager web application, and the functionality of these tasks, see <http://localhost:8080/tomcat-docs/manager-howto.html>. --> <taskdef resource="org/apache/catalina/ant/catalina.tasks" classpathref="compile.classpath"/> <!-- ==================== Compilation Control Options ==================== --> <!-- These properties control option settings on the Javac compiler when it is invoked using the <javac> task. compile.debug Should compilation include the debug option? compile.deprecation Should compilation include the deprecation option? compile.optimize Should compilation include the optimize option? --> <property name="compile.debug" value="true"/> <property name="compile.deprecation" value="false"/> <property name="compile.optimize" value="true"/> <!-- ==================== All Target ====================================== --> <!-- The "all" target is a shortcut for running the "clean" target followed by the "compile" target, to force a complete recompile. --> <target name="all" depends="clean,compile" description="Clean build and dist directories, then compile"/> <!-- ==================== Clean Target ==================================== --> <!-- The "clean" target deletes any previous "build" and "dist" directory, so that you can be ensured the application can be built from scratch. --> <target name="clean" description="Delete old build and dist directories"> <delete dir="${build.home}"/> <delete dir="${dist.home}"/> </target> <!-- ==================== Compile Target ================================== --> <!-- The "compile" target transforms source files (from your "src" directory) into object files in the appropriate location in the build directory. This example assumes that you will be including your classes in an unpacked directory hierarchy under "/WEB-INF/classes". 
--> <target name="compile" depends="prepare" description="Compile Java sources"> <!-- Compile Java classes as necessary --> <mkdir dir="${build.home}/WEB-INF/classes"/> <javac srcdir="${src.home}" destdir="${build.home}/WEB-INF/classes" debug="${compile.debug}" deprecation="${compile.deprecation}" optimize="${compile.optimize}"> <classpath refid="compile.classpath"/> </javac> <!-- Copy application resources --> <copy todir="${build.home}/WEB-INF/classes"> <fileset dir="${src.home}" excludes="**/*.java"/> </copy> </target> <!-- ==================== Dist Target ===================================== --> <!-- The "dist" target creates a binary distribution of your application in a directory structure ready to be archived in a tar.gz or zip file. Note that this target depends on two others: * "compile" so that the entire web application (including external dependencies) will have been assembled * "javadoc" so that the application Javadocs will have been created --> <target name="dist" depends="compile,javadoc" description="Create binary distribution"> <!-- Copy documentation subdirectories --> <mkdir dir="${dist.home}/docs"/> <copy todir="${dist.home}/docs"> <fileset dir="${docs.home}"/> </copy> <!-- Create application JAR file --> <jar jarfile="${dist.home}/${app.name}-${app.version}.war" basedir="${build.home}"/> <!-- Copy additional files to ${dist.home} as necessary --> </target> <!-- ==================== Install Target ================================== --> <!-- The "install" target tells the specified Tomcat 6 installation to dynamically install this web application and make it available for execution. It does *not* cause the existence of this web application to be remembered across Tomcat restarts; if you restart the server, you will need to re-install all this web application. If you have already installed this application, and simply want Tomcat to recognize that you have updated Java classes (or the web.xml file), use the "reload" target instead. NOTE: This target will only succeed if it is run from the same server that Tomcat is running on. NOTE: This is the logical opposite of the "remove" target. --> <target name="install" depends="compile" description="Install application to servlet container"> <deploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}" localWar="file://${build.home}"/> </target> <!-- ==================== Javadoc Target ================================== --> <!-- The "javadoc" target creates Javadoc API documentation for the Java classes included in your application. Normally, this is only required when preparing a distribution release, but is available as a separate target in case the developer wants to create Javadocs independently. --> <target name="javadoc" depends="compile" description="Create Javadoc API documentation"> <mkdir dir="${dist.home}/docs/api"/> <javadoc sourcepath="${src.home}" destdir="${dist.home}/docs/api" packagenames="*"> <classpath refid="compile.classpath"/> </javadoc> </target> <!-- ====================== List Target =================================== --> <!-- The "list" target asks the specified Tomcat 6 installation to list the currently running web applications, either loaded at startup time or installed dynamically. It is useful to determine whether or not the application you are currently developing has been installed. 
--> <target name="list" description="List installed applications on servlet container"> <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/> </target> <!-- ==================== Prepare Target ================================== --> <!-- The "prepare" target is used to create the "build" destination directory, and copy the static contents of your web application to it. If you need to copy static files from external dependencies, you can customize the contents of this task. Normally, this task is executed indirectly when needed. --> <target name="prepare"> <!-- Create build directories as needed --> <mkdir dir="${build.home}"/> <mkdir dir="${build.home}/WEB-INF"/> <mkdir dir="${build.home}/WEB-INF/classes"/> <!-- Copy static content of this web application --> <copy todir="${build.home}"> <fileset dir="${web.home}"/> </copy> <!-- Copy external dependencies as required --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <mkdir dir="${build.home}/WEB-INF/lib"/> <!-- <copy todir="${build.home}/WEB-INF/lib" file="${foo.jar}"/> --> <!-- Copy static files from external dependencies as needed --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> </target> <!-- ==================== Reload Target =================================== --> <!-- The "reload" signals the specified application Tomcat 6 to shut itself down and reload. This can be useful when the web application context is not reloadable and you have updated classes or property files in the /WEB-INF/classes directory or when you have added or updated jar files in the /WEB-INF/lib directory. NOTE: The /WEB-INF/web.xml web application configuration file is not reread on a reload. If you have made changes to your web.xml file you must stop then start the web application. --> <target name="reload" depends="compile" description="Reload application on servlet container"> <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> <!-- ==================== Remove Target =================================== --> <!-- The "remove" target tells the specified Tomcat 6 installation to dynamically remove this web application from service. NOTE: This is the logical opposite of the "install" target. --> <target name="remove" description="Remove application on servlet container"> <undeploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> </project>

    Read the article

  • How to install Sweetcron on XAMPP

    - by Sushaantu
This tutorial will take you through the steps required to install Sweetcron on XAMPP. I am taking the liberty of assuming that you have already installed XAMPP. First of all, download Sweetcron and copy the extracted “sweetcron” folder inside the htdocs folder in the XAMPP directory. You have to get a few things in place before installing Sweetcron:

1. Create a sweetcron database using MySQL. You can just use the name “sweetcron” and leave all the other settings, such as the MySQL connection collation, as they are. Now that you have the database ready you can configure a few settings before getting Sweetcron to work on XAMPP.

2. Open the config-sample PHP file, located in Sweetcron/system/application/config, with your text editor (something like Notepad++ or Komodo Edit is recommended). You have to make a few changes in it.

a. In the first settings you have to edit the value of $config['base_url']. The default value is “http://www.your-site.com”; and you have to change that into “http://localhost/sweetcron/”;

b. You also have to change the default setting of $config['uri_protocol']. The value that you will see is “REQUEST_URI”; but you need to change that into “AUTO”;

c. Now that we have made all the changes you can rename the file from config-sample to just config.

3. Open database-sample.php in the text editor. You need to make the edits regarding the database in here.

a. The values of the database at the moment are like this:

$db['default']['hostname'] = “localhost”;
$db['default']['username'] = “”;
$db['default']['password'] = “”;
$db['default']['database'] = “”;
$db['default']['dbdriver'] = “mysql”;
$db['default']['dbprefix'] = “”;
$db['default']['pconnect'] = TRUE;
$db['default']['db_debug'] = TRUE;
$db['default']['cache_on'] = FALSE;
$db['default']['cachedir'] = “”;
$db['default']['char_set'] = “utf8″;
$db['default']['dbcollat'] = “utf8_general_ci”;

You have to change that into:

$db['default']['hostname'] = “localhost”;
$db['default']['username'] = “root”;
$db['default']['password'] = “”;
$db['default']['database'] = “sweetcron”;
$db['default']['dbdriver'] = “mysql”;
$db['default']['dbprefix'] = “”;
$db['default']['pconnect'] = TRUE;
$db['default']['db_debug'] = TRUE;
$db['default']['cache_on'] = FALSE;
$db['default']['cachedir'] = “”;
$db['default']['char_set'] = “utf8″;
$db['default']['dbcollat'] = “utf8_general_ci”;

We have entered the username root along with the name of the database (sweetcron in my case). Since I was not using any password in XAMPP for the sweetcron database, I have left the password option empty. You can make suitable changes according to your system: write your password in the third line if you are using one in XAMPP.

b. Now that we have made all the edits we can change the file name from database-sample to just database.

4. That leaves us with only one setting, and that is editing values in the .htaccess file with our text editor. The default values you will have in the .htaccess file are:

Options +FollowSymLinks
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?/$1 [L]

and you just have to add “sweetcron” after RewriteBase in the third line:

Options +FollowSymLinks
RewriteEngine On
RewriteBase /sweetcron
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?/$1 [L]

5. Now you are all set and done.
You can access Sweetcron on localhost by going to http://localhost/sweetcron/. You will see some text at the top prompting you to click on one script. Click that script and, behold, you have your Sweetcron installation on XAMPP ready. It will then ask you to add details such as the Lifestream name, username and email address. Fill in those details and you will reach the admin panel.
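As a tiny illustration of step 1, the database can be created from phpMyAdmin's SQL tab or the MySQL console bundled with XAMPP, leaving the collation at the server default as described above:

    -- From a shell: mysql -u root   (no password on a default XAMPP install)
    CREATE DATABASE sweetcron;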

    Read the article

  • So you want a French Site?

    - by juanlarios
I thought I would write a quick write-up of how to create a French site in SharePoint 2007. I'm not talking about a Variation, but a plain French site from the ground up. There were some gotchas that I felt were worth blogging about.

First: go to the Microsoft TechNet article and follow the install instructions. Make sure that when you get to the download page you select "French" in the drop down, and that you download and install the right language pack. I noticed that if you do not click the "change" button, even though you selected the French language pack, it reverts back to the English language pack.

Second: You will notice a couple of things. When you go to central admin you will see the following: now you can pick between a French or an English site. You will get this if you install other language packs, and they will be listed in the drop down. You will notice that you now have French headings and French listings of sites. You see "Publishing" as a heading because I have a custom site definition that I deployed as a French site.

Third: As you start navigating around and trying to create document libraries or sites you will start getting errors. Errors like the following: "Cannot make a cache safe URL for "SelectorControls.js", file not found. Please verify that the file exists under the layouts directory. Troubleshoot issues with Windows SharePoint Services." Once you resolve the issue with this js file, you will find that there are other js files that are missing. The only problem is that if you are not fluent in French, or whatever language you are trying to deploy, well, you'll have a tough time understanding the error messages, as they will all be in the new language you are trying to deploy.

So let's just talk about what happened when you installed the language pack. In the 12 Hive, under 12/Template, you will now see a 1033 folder and a 1036 folder. The 1036 folder was created and added as part of the language pack. What the above error is saying is that now that it's looking at the 1036 folder, well, it's missing some files. The nice thing is that these files are included in the 1033 folder (which is the English language pack). Simply copy and paste the controls from the one folder to the other. There will be more than one conflict, so you will have to move several controls over. I can't remember how many; simply add them as the error messages come up. I had to add some navigation controls and some content selectors.

Now that's all you need to install the French language pack and create site collections that are entirely in another language. Do not mistake this for Variations, where you can have multiple language sites. For those of you doing a little bit extra with this, let me share what I was doing and what I needed to get it working for me. I had a custom site definition which was obviously not showing up in my selection of French sites. I was under the impression that all sites in English would show up in French and that the sites were simply routed to a new resource file for French content. And that is the case, but there is a little extra that needs to be done if you have a custom site definition deployed:

First: Under hive 12/Template/1033/XML there is a listing of site definition files that are deployed to the English side of things.
If you navigate to 12/Template/1036/XML and open one of the site definitions, you will see that they are similar and reference the existing site definitions installed on the server, except that they have some French added to the descriptions and names. Simply copy the XML file of your custom template to the 1036 folder to have it show up as a selection when you select French in the dropdown while creating a site collection. You can go ahead and change the description and name to suit the language it's under.

Second: As part of my site definition, I packaged up several list templates that were saved as STP files. When you navigate to the list template listing, well, the templates are for English sites, not French, so I cannot create document libraries based on the template. What now? Well, here comes KWizCom to the rescue! They seem to have put out an "STP language converter" where you can take a site template or list template and convert it to any target language you are after. It's a free download; use it and you're good to go. One thing I will mention is that when I converted the English templates, I went ahead and converted them to French-Canadian. And it didn't work! I finally figured out that the French version the French site was expecting was "French-France". I don't know why that is; it's just what needs to be done to get it working. When I did that, I was able to use the list templates that I created in the English site for the French site. Hope it helps, good luck!
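As a sketch of the copy-the-missing-scripts step from the Third section above (the path assumes a default 12-hive install, and it assumes the localized scripts live under TEMPLATE\LAYOUTS\<LCID>, which is where the error message points; SelectorControls.js is just the first file the error named, so repeat for each script reported missing):

    rem Run on the SharePoint server; adjust the path to your install.
    cd /d "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS"
    rem Copy the English (1033) control script into the French (1036) folder:
    copy 1033\SelectorControls.js 1036\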

    Read the article

  • Inside Red Gate - Be Reasonable!

    - by Simon Cooper
As I discussed in my previous posts, divisions and project teams within Red Gate are allowed a lot of autonomy to manage themselves. It's not just the teams though; there's an awful lot of freedom given to individual employees within the company as well.

Reasonableness How Red Gate treats its employees is embodied in the phrase 'You will be reasonable with us, and we will be reasonable with you'. As an employee, you are trusted to do your job to the best of your ability. There's no one looking over your shoulder, no one clocking you in and out each day. Everyone is working at the company because they want to, and one of the core ideas of Red Gate is that the company exists to 'let people do the best work of their lives'. Everything is geared towards that. Office Services and the IT department are there to help you do your job. If you need something to help you work better (a third or fourth monitor, footrests, or a new keyboard) then ask people in Information Systems (IS) or Office Services and you will be given it, no questions asked. Everyone has administrator access to their own machines, and you can install whatever you want on them. If there's a particular bit of software you need, then ask IS and they will buy it. As an example, last year I wanted to replace my main hard drive with an SSD; I had a summer job at school working in a computer repair shop, so I knew what to do. I went to IS and asked for 'an SSD, a SATA cable, and a screwdriver'. And I got it there and then, even the screwdriver. Awesome. I screwed it in myself, copied all my main drive files across, and I was good to go. Of course, if you're not happy doing that yourself, then IS will sort it all out for you, no problems. If you need something that the company doesn't have (say, a book off Amazon, or you need some specifications printing off & bound), then everyone has an expense limit of £100 that you can use without any sign-off needed from your managers. If you need a company credit card for whatever reason, then you can get it. This freedom extends to working hours and holiday; you're expected to be in the office 11am-3pm each day, but outside those times you can work whenever you want. If you need a half-day holiday on a day's notice, or even the same day, then you'll get it, unless there's a good reason you're needed that day. If you need to work from home for a day or so for whatever reason, then you can. If it's reasonable, then it's allowed.

Trust issues? A lot of trust, and a lot of leeway, is given to all the people in Red Gate. Everyone is expected to work hard, do their jobs to the best of their ability, and there will be a minimum of bureaucratic obstacles that stop you doing your work. What happens if you abuse this trust? Well, an example is company trip expenses. You're free to expense what you like: food, drink, transport, etc., but if your expenses are not reasonable, then you will never travel with the company again. Simple as that. Everyone knows when they're abusing the system, so simply don't do it. Along with reasonableness, another phrase used is 'Don't be a ***'. If you act like a ***, and abuse any of the trust placed in you, even if you're the best tester, salesperson, dev, or manager in the company, then you won't be a part of the company any more. From what I know about other companies, employee trust is highly variable between companies, all the way up to CCTV trained on employees' monitors. As a dev, I want to produce well-written & useful code that solves people's problems.
Being able to get whatever I need - install whatever tools I need, get time off when I need to, obtain reference books within a day - all let me do my job, and so let Red Gate help other people do their own jobs through the tools we produce. Plus, I don't think I would like working for a company that doesn't allow admin access to your own machine and blocks Facebook!

    Read the article

  • Virtual host in Apache Zend

    - by llocani
Can anyone tell me why I can't get virtual hosts in Apache to work? My vhost conf is:

NameVirtualHost *:80

<VirtualHost _default_:80>
    ServerAdmin [email protected]
    DocumentRoot "E:/Archivos de programa/Zend/Apache2/htdocs"
    ServerName localhost
    <Directory "E:/Archivos de programa/Zend/Apache2/htdocs">
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    #AllowOveride all
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands"
    ServerName healinghands.loc
    <Directory "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands">
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands/logs/error.log"
    CustomLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands/logs/access.log" common
    #AllowOveride all
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects"
    ServerName dev.loc
    <Directory "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects">
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/logs/error.log"
    CustomLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/logs/access.log" common
    #AllowOveride all
</VirtualHost>

My httpd.conf is:

ServerRoot "E:\Archivos de programa\Zend\Apache2"
Listen 80
LoadModule actions_module modules/mod_actions.so
LoadModule alias_module modules/mod_alias.so
LoadModule asis_module modules/mod_asis.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule auth_digest_module modules/mod_auth_digest.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule cgi_module modules/mod_cgi.so
LoadModule dir_module modules/mod_dir.so
LoadModule env_module modules/mod_env.so
LoadModule filter_module modules/mod_filter.so
LoadModule headers_module modules/mod_headers.so
LoadModule imagemap_module modules/mod_imagemap.so
LoadModule include_module modules/mod_include.so
LoadModule info_module modules/mod_info.so
LoadModule isapi_module modules/mod_isapi.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule mime_module modules/mod_mime.so
LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule status_module modules/mod_status.so
LoadModule userdir_module modules/mod_userdir.so

<IfModule !mpm_netware_module>
    <IfModule !mpm_winnt_module>
        User daemon
        Group daemon
    </IfModule>
</IfModule>

ServerAdmin [email protected]
DocumentRoot "E:\Archivos de programa\Zend\Apache2/htdocs"
<Directory />
    Options FollowSymLinks
    AllowOverride all
    Order allow,deny
    Allow from all
</Directory>
<IfModule dir_module>
    DirectoryIndex index.php index.html home.php
</IfModule>
<FilesMatch "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</FilesMatch>
ErrorLog "logs/error.log"
LogLevel warn
<IfModule log_config_module>
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    <IfModule logio_module>
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>
    CustomLog "logs/access.log" common
</IfModule>
<IfModule alias_module>
    Alias /NetBeansProjects "E:\Documents and Settings\dvieira\Mis documentos\NetBeansProjects"
    ScriptAlias /cgi-bin/ "E:\Archivos de programa\Zend\Apache2/cgi-bin/"
</IfModule>
<IfModule cgid_module>
</IfModule>
<Directory "E:\Archivos de programa\Zend\Apache2/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>
DefaultType text/plain
<IfModule mime_module>
    TypesConfig conf/mime.types
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz
</IfModule>
Include conf/extra/httpd-vhosts.conf
<IfModule ssl_module>
    SSLRandomSeed startup builtin
    SSLRandomSeed connect builtin
</IfModule>
Include "conf/zend.conf"
NameVirtualHost *:80
<VirtualHost *:80>
    Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/zend-default-vhost-80.conf"
</VirtualHost>
Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/globals-*.conf"
Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/vhost_*.conf"

And my hosts file in Windows:

127.0.0.1 localhost
127.0.0.1 healinghands.loc
127.0.0.1 dev.loc

I can't get any browser to recognize dev.loc or healinghands.loc, but a ping to them works. localhost is working fine. I've spent 3 days now trying to solve this on my own, but I've finally given up and have to ask. The error should be this:

Error Code 11002: host not found.
Background: this error indicates that the gateway could not find an authoritative DNS server for the website you are trying to access.
Date: 5/20/2013 5:51:03 PM
Server:
Source: DNS problem.

I'd like to add this ping output:

Pinging healinghands.loc [127.0.0.1] with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128

Ping statistics for 127.0.0.1:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

Today I tried something: I added these domains to the exceptions of my IE proxy config. This worked for healinghands.loc but not for dev.loc, and I really don't understand why; both configs are exactly the same except for the DocumentRoot. I will continue searching.

    Read the article

  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
Introduction

There are many ways to generate a thread dump of a WebLogic Managed Server. For example, take a look at:
- Taking Thread Dumps [an excellent blog post on the Middleware Magic site]
- Different ways to take thread dumps in WebLogic Server (Document 1098691.1)

There is another method: use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is that RDA will also collect configuration, log file, network, system, and performance information at the same time.

Instructions

1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog, "Resolve SRs Faster Using RDA - Find the Right Profile".

2. Choose a profile which includes the WebLogic Server data collection modules (for example, the profile "WebLogicServer"). At RDA setup time you should see the prompt below:

-------------------------------------------------------------------------------
S301WLS: Collects Oracle WebLogic Server Information
-------------------------------------------------------------------------------
Enter the location of the directory where the domains to analyze are located
(For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains)
Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains)
>

For a successful WLS connection, ensure that the domain Admin Server is up and running.

Data Collection Type:
  1  Collect for a single server (offline mode)
  2  Collect for a single server (using WLS connection)
  3  Collect for multiple servers (using WLS connection)
Enter the item number
Hit 'Return' to accept the default (1)
> 2

Choose option 2 or 3.

Note: "Collect for a single server" or "for multiple servers" using a WLS connection means that RDA will attempt to connect and execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function "threadDumps()". If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If in the final output you see no Thread Dump menu item, then it's likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps.

3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters, which govern the number of thread dumps and the thread dump interval:

#N.Number of thread dumps to capture
WREQ_THREAD_DUMP=10
#N.Thread dump interval
WREQ_THREAD_DUMP_INTERVAL=5000

The example lines above show the default settings. In other words, RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like:

#N.Number of thread dumps to capture
WREQ_THREAD_DUMP=10
#N.Thread dump interval
WREQ_THREAD_DUMP_INTERVAL=30000

However, bear in mind that such a change will increase the total amount of time it takes for RDA to complete its run (10 dumps at 30-second intervals adds roughly 10 × 30 s = 5 minutes of collection time).

4. Once you are happy with the setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory.

5. For ease of viewing, open up the RDA Start html file - "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s).
See the screenshots below.

Screenshot 1: RDA Start Page - Main Index
Screenshot 2: Managed Server Sub Index
Screenshot 3: WLST Collections
Screenshot 4: Thread Dump Page - List of dump file links
Screenshot 5: Thread Dump Dat File Link

Additional Comments:

A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis via thread dump analysis tools such as:
- Thread Dump Analyzer
- Samurai - a GUI-based tail and thread dump analysis tool

If you are new to thread dump analysis, take a look at this recorded Support Advisor Webcast: Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps [slide deck from the webcast in PDF format].

B) I have logged a couple of enhancement requests for the RDA Development Team to consider:
- Add a timestamp to the dump file links, the dat filename, and the top of the body of each dat file.
- Package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.
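For reference, a minimal sketch of what steps 2 and 4 above can look like from the command line, assuming a Unix install; the exact flags can vary by RDA version, so treat this as an illustration rather than documented syntax:

./rda.sh -S -p WebLogicServer    # interactive setup, selecting the WebLogicServer profile
./rda.sh                         # collect, render, generate and package the output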

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
When configuring a WebCenter Content (WCC) cluster, one of the things that makes it unique among WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM, when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured; if they aren't followed, you may see some unwanted behavior. This blog post will go into the details of how exactly the file systems should be split and what options are required.

Beyond documents being stored on the file system and/or database, and metadata being stored in the database along with other structured data, there is other information read from and written to the file system. Information such as user profile preferences, workflow item state information, metadata profiles, and other details is stored in files. In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other. WCC keeps track of this through the use of lock files on the file system. Because of this, each node of the WCC cluster must have access to the same file system, just as it has access to the same database.

WCC uses its own file-based locking mechanism, so it also needs access to those files without file attribute caching and without locking being done by the client (node). If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on. Or if a particular file is locked by one of the node clients, this could interfere with access by another node. Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to only disable caching and locking on the particular folders which require it.

When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder. And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish Native File (Vault) and Weblayout directories. These will be used for handling temporary files, cached files, and files used to deliver the UI.

Of these directories, the only folder which needs file attribute caching and locking disabled is the 'Content Server Instance Folder'. So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the 'noac' and 'nolock' options for the mount options. For the other directories, caching and locking should be enabled to provide the best performance for those locations.
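As a minimal sketch of what that can look like (the server name and export paths below are hypothetical; only the 'noac' and 'nolock' options come from the guidance above), the NFS mounts on each cluster node might be defined as:

# hypothetical /etc/fstab entries on a WCC cluster node
# Content Server Instance Folder share: attribute caching and client locking disabled
nfshost:/export/ucm_instance  /mnt/share_no_cache    nfs  rw,noac,nolock  0 0
# Vault/Weblayout share: caching and locking left enabled for performance
nfshost:/export/ucm_shared    /mnt/share_with_cache  nfs  rw              0 0

These mount points line up with the IntradocDir, VaultDir, and WeblayoutDir paths in the intradoc.cfg example that follows.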
These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

#Server System Properties
IDC_Id=UCM_server1

#Server Directory Variables
IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
AppServerJavaUse64Bit=true
IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
VaultDir=/mnt/share_with_cache/ucm/cs/vault/
WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

#Server Classpath variables

#Additional Variables
#NOTE: UserProfilesDir is only available in PS5 - 11.1.1.6.0
UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

In addition to these folder configurations, it's also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory. So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

And of course, don't forget the cluster-specific configuration values to add as well. These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

ArchiverDoLocks=true
DisableSharedCacheChecking=true
ServiceAllowRetry=true     (use only with Oracle RAC Database)
PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site. In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
The countdown to PASS has counted down. The day after tomorrow I will board a plane, like many others, on my way to SQL PASS Summit for the 4th year in a row. The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS, so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way, and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge, also for the 4th year in a row. This year, I have again been tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person, as I have read many of his articles on SSC and have attended his session at PASS previously. The speech is all ready, but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below). Let's face it: Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun.

One incident from this year that falls under the High Jinks category happened when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall he had just configured. I was able to establish a connection on port 1433, and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session, much like you might do with an SMTP server. With that thought, I proceeded to demonstrate this could be possible by convincing my senior DBA, Shawn McGehee, that I was able to do so. At first he did not believe me. It shook his world view. It was inconceivable. What I had done, behind the scenes, of course, was to copy and rename SQLCMD.exe to Telnet.exe and use it to connect and run a simple "Select * from sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds, though.

On the Hi Jacks side of the house, which really merges into SQL Hacks: after several years of struggling with how to connect from SSMS to an untrusted domain (like in a DMZ) with a Windows account, I finally stumbled upon a solution that does away with the requirement to use SQL Authentication. While "Runas" is a great command for running an application with a higher-privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "Runas". It never connected, and caused a login failure every time for the remote Windows domain account. Then I ran across an option for "Runas": "/netonly". This option postpones the login until a connection is made, and only then passes the remote login you supply when you first launch SSMS with the "Runas" command.
So a typical shortcut would look like:

C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"

You will want to make sure the passwords are synced between the two domains (your local domain and the remote domain), otherwise you may have account lockout issues; but I have found over weeks of testing that this is a stable solution.

Now it is time to get ready to head for Seattle. If you see me (@SQLBeat) or my wife (@Karlakay22), please run up and high-five me (wait... High Jinks. Hi Jacks. High Fives. Need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage.

And now the links to others who have all of the details. First, the MVP Deep Dives 2, in which, like John, I was lucky enough to participate this year:
http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx

And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded:
http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx

Cheers! Rodney

    Read the article

  • Productivity vs Security [closed]

    - by nerijus
I really don't know if this is the right place to ask such a question, but it is about programming, in a different light. So, I'm currently contracting with a company which pretends to be a big corporation. Everyone is so important that all small issues, like developers, are ignored. To give you a sample: the company VPN is configured so that if you have the VPN up, HTTP traffic is banned. Bearing this in mind, can you imagine my workflow:

Morning. Ok, time to get the latest source. Oops, no VPN. Let's connect. Click-click. 3 sec. wait time. Ok, getting source. Do I have emails? Oops. VPN is on, can't check my emails. Need to wait for the source to come up. Finally, here it is! Ok, click-click, VPN is gone. What's in my email? Someone reported a bug. Good, let's track it down. It's in TFS already. Oh, damn, I need VPN. Click-click. Ok, there's the description. Yeah, I've seen this issue on stackoverflow.com. Let's go there. Oops, no internet. Click-click. No internet. What? ipconfig... the DHCP server kicked me out. Damn. Renew IP. 1..2..3. Ok, internet is back. Google: site:stackoverflow.com. 3 min. I have a solution. Great, I love stackoverflow.com. I don't want to remember the days when there was no stackoverflow.com. Ok. Copy-paste this into Studio. Damn, Studio is stalled, can't reach files on TFS. Click-click. VPN is back. Get the source out, paste my code. Grand. Let's see what the other comments about the issue on stackoverflow.com say. Hmm... there's a link. Click. Dammit! No internet. Click-click. No internet. The DHCP server kicked me out. Dammit.

Now it's even worse: this happens 3-4 times a day. After a certain number of VPN connections are opened/closed, my internet goes down solid. The only way to get the internet back is a reboot. All my browser tabs, SQL windows, and Studio will be gone. This happened just now, while I was typing this. Back to the issue I'm solving right now: I'm getting frustrated - I no longer care about a better solution for this issue. Let's do it somehow and forget it. This click-click barrier between the internet and TFS kills me... Sounds familiar? You could say there are VPN settings to change. No! This is a company laptop; I'm not allowed to make changes. I'm very, very lucky to have admin privileges on my machine. Most developers don't. So I've just learned to live with this frustration. It takes away 40-60 minutes daily. I've tried to email company support and the admins. They are too important and too busy with something, and just ignored my little man's problem. Politely ignored.

The question is: is this normal in the corporate world? (I've been in the States, Canada, and Germany. Never seen this.)

    Read the article

  • Simple Scripting for your Exalogic Storage

    - by Trond Strømme
As part of my job in Oracle ACS (Advanced Customer Services) I handle lots of different systems and customers. Among the systems I've worked with recently are Oracle's Exalogic engineered systems. One of the things I'd never had much exposure to as a system developer/architect/middleware guy/Java dude is storage, outside of consuming it for my photography needs. Well, I'm always ready for a new challenge...

I'd downloaded the 7000 series storage simulator when it was released in the good old Sun days and found it fun and instructive to play around with, but as I never touched storage in any way (besides consuming it) I forgot about it. A couple of years ago, when I started working with Exalogic engineered systems, it came to light again as an invaluable learning and testing tool for the embedded storage in an Exalogic: Oracle's Sun ZFS Storage 7320 appliance. aaaanyway... I've been "booted" into a part-time role as the interim storage/system admin/middleware/Java guy for a client, and found I needed to create the occasional report or summary of what's using the storage in the 7320 (as default configured for an Exalogic: 40T of disk in a mirrored configuration, yielding 18T of actual space). Reading the nice documentation and some articles on the Oracle Technology Network, I saw great possibilities in the embedded ECMAScript3/JavaScript engine in the 7000 series.

In my personal opinion, anyone who's dealing with Exalogic administration, or is exposed to any of the 7000 series of storage appliances and servers that Oracle offers, should have a VirtualBox instance of the simulator kicking around. For development and testing it's a fantastic tool. (It can save you from having to explain to your management (most of) the embarrassing fails that come from testing something in a production system...) So download and install it. A small sidestep: if, after firing up the 7000 series simulator in VirtualBox, you've forgotten what its IP address is, logging in directly via the running VirtualBox VM will sort you out. In my case I can ssh to 192.168.56.101, or point a browser to https://192.168.56.101:215, to log into the storage appliance.

One simple way of executing a script on the 7320 is to ssh to the device and redirect a file containing the script to ssh:

ssh [email protected] < myscript.js

One question I got from my client and the people who will take over the systems was: "How can we see the quotas and allocations for all projects/shares in one easy go, so we don't have to go navigating around in the BUI through the hundreds of shares the 7320 is hosting just to check if anything is running dry?" Easy! JavaScript time, VirtualBox and emacs!

//NOTE! This script is available 'as is'. It has been run on a couple of 7320s (running 2010.08.17.3.0,1-1.25 & 2011.04.24.1.0,1-1.8), a 7420, and the VB image, but I personally
//offer no guarantee whatsoever that it won't make your server topple, catch fire, or in any way go pear-shaped.
//Run at your own risk, or learn from my code and/or mistakes.
script
run('cd /');
run('shares');
//get all projects:
proj = list();

function spaceToGig(bytes){
  return bytes/1073741824; //convert bytes to GB
}

function fullInPercent(quota, space_data){
  tmp = (space_data/quota)*100;
  return tmp;
}

//print header, slightly good looking
printf(" %s/%-15s %8s(GB) %7s(GB) %5s(GB) %7s(GB) %3s\n","Project", "Share","Quota","Ref", "Snap", "Total","%full");
printf("-------------------------------------------------------------------------------\n");

//for each project, get all shares. check for quota and calculate percentage and human readable figures..
for (i=0;i<proj.length;i++){
  run('select ' + proj[i]);
  //get all shares for a project
  var pshares = list();
  //for each share get quota properties
  for (j=0;j<pshares.length;j++){
    run('select ' + pshares[j]);
    quota = get('quota'); //properties associated with a share or inherited from a project
    spaceData = get('space_data');
    spaceSnap = get('space_snapshots');
    spaceTotal = get('space_total');
    if(quota>0){ //has quota
      printf(" %s/%-15s \t%4.2fGB\t%.2fGB\t%.2fGB\t%.2fGB\t%5.2f%%\n",proj[i],pshares[j],spaceToGig(quota),spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),fullInPercent(quota,spaceTotal));
    }else{ //no quota
      printf(" %s/%-15s \t%8s\t%.2fGB\t%.2fGB\t%.2fGB\t%s\n",proj[i],pshares[j],"N/A",spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),"N/A");
    }
    run('cd ..');
  }
  run('done');
}

The resulting output should look something like this:

Project/Share                Quota(GB)  Ref(GB)  Snap(GB)  Total(GB)  %full
-------------------------------------------------------------------------------
ACSExalogicSystem/domains    N/A        0.04GB   0.00GB    0.04GB     N/A
ACSExalogicSystem/logs       N/A        0.01GB   0.00GB    0.01GB     N/A
ACSExalogicSystem/nodemgrs   N/A        0.00GB   0.00GB    0.00GB     N/A
ACSExalogicSystem/stores     N/A        0.04GB   0.00GB    0.04GB     N/A
***_dev/FMW_***_1            133GB      4.24GB   0.01GB    4.25GB     3.19%
***_dev/FMW_***_2            N/A        4.25GB   0.01GB    4.26GB     N/A
***_dev/applications         10GB       0.00GB   0.00GB    0.00GB     0.00%
***_dev/domains              50GB       10.75GB  3.55GB    14.30GB    28.61%
***_dev/logs                 20GB       0.32GB   0.01GB    0.33GB     1.66%
***_dev/softwaredepot        20GB       4.15GB   0.00GB    4.15GB     20.73%
***_dev/stores               20GB       0.01GB   0.00GB    0.01GB     0.05%
###_dev/FMW_###_1            400GB      17.63GB  0.12GB    17.75GB    4.44%
###_dev/applications         N/A        0.00GB   0.00GB    0.00GB     N/A
###_dev/domains              120GB      14.21GB  5.53GB    19.74GB    16.45%
###_dev/logs                 15GB       0.00GB   0.00GB    0.00GB     0.00%
###_dev/softwaredepot        250GB      73.55GB  0.02GB    73.57GB    29.43%
…snip

My apologies if the output is a bit mis-aligned here and there, I only bothered making it look good, not perfect :/ I also removed some of the project names (*,#).
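To tie this back to the ssh redirection shown earlier, the report can be produced non-interactively. A hypothetical invocation, assuming the script is saved as quota_report.js and using the simulator's root account and the IP address from the example above:

# run the quota report against the appliance and keep a copy of the output
ssh root@192.168.56.101 < quota_report.js > quota_report.txt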

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
There is a class of business- and mission-critical applications where downtime or data loss has a substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn't so much that alternative solutions are bad; it's just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let's explore further.

Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems - that can create a level of uncertainty during an outage that critical applications can't afford. This is why backups play a secondary role for your most critical databases, complementing real-time solutions that can provide both data protection and availability.

Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise: whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt: the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from impacting the remote mirrored copy.

Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single, closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage-mirroring perspective or from an Oracle software perspective. Bottom line: storage remote-mirroring lacks both the smarts and the isolation necessary to provide true data protection.
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact, block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes; it is a completely independent Oracle Database. Active Data Guard's inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to apply the full complement of Oracle data validation methods before changes reach the synchronized copy. These include: physical checksum, logical intra-block checking, lost-write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.

An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database, not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby, boosting performance by utilizing all assets (a minimal sketch of the commands involved appears at the end of this post). An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring.

So don't be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface, but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment.

For additional information on Active Data Guard, see:
- Active Data Guard Technical White Paper
- Active Data Guard vs Storage Remote-Mirroring
- Active Data Guard Home Page on the Oracle Technology Network
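As a minimal sketch of the "open read-only while synchronizing" capability described above - the standard sequence on an 11g standby, with environment-specific details omitted:

-- on the standby: stop redo apply, open read-only, then resume real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;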

    Read the article

< Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >